Types
Object types in the Management API
FailureResponse
The failure response object.
Error
The error object.
The error code.
The error message.
Account
The Account object.
The Account’s unique identifier.
Environment
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Environment’s Account.
The Account object.
The Account’s unique identifier.
EnvironmentResponse
The result of a mutation which creates or modifies an Environment.
The Environment which was created or modified.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
PageInfo
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
Application
The Application object.
Propel Applications represent the web or mobile app you are building. They provide the API credentials that allow your client- or server-side app to access the Propel API. The Application’s Propeller determines the speed and cost of your Metric Queries.
The Application’s unique identifier.
The Application’s unique name.
The Application’s description.
The Application’s Account.
The Account object.
The Account’s unique identifier.
The Application’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Application’s creation date and time in UTC.
The Application’s last modification date and time in UTC.
The Application’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Application’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Application’s OAuth 2.0 client identifier.
The Application’s OAuth 2.0 client secret.
The Application’s Propeller.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max 5,000,000 records per second.
P1_SMALL: Max 25,000,000 records per second.
P1_MEDIUM: Max 100,000,000 records per second.
P1_LARGE: Max 250,000,000 records per second.
P1_X_LARGE: Max 500,000,000 records per second.
The Application’s OAuth 2.0 scopes.
The API operations an Application is authorized to perform.
ADMIN: Grant read/write access to Data Sources, Data Pools, Metrics and Policies.
APPLICATION_ADMIN: Grant read/write access to Applications.
DATA_POOL_QUERY: Grant read access to query Data Pools.
DATA_POOL_READ: Grant read access to read Data Pools.
DATA_POOL_STATS: Grant read access to fetch column statistics from Data Pools.
ENVIRONMENT_ADMIN: Grant read/write access to Environments.
METRIC_QUERY: Grant read access to query Metrics.
METRIC_STATS: Grant read access to fetch Dimension statistics from Metrics.
METRIC_READ: Grant read access to Metrics. This does not allow querying Metrics. For that, see METRIC_QUERY.
A paginated list of Data Pool Access Policies associated with the Application.
Arguments
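For orientation, a query shaped like the following would fetch an Application’s credentials and Propeller. This is a hypothetical sketch: the application query name, the ID value, and the exact field spellings (uniqueName, clientId, secret, propeller, scopes) are assumptions based on the field descriptions above, not guaranteed spellings.

# Hypothetical query; adjust names to the actual schema.
query {
  application(id: "APP00000000000000000000000000") {
    uniqueName
    clientId
    secret
    propeller
    scopes
  }
}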
ApplicationResponse
The result of a mutation which creates or modifies an Application.
The Application which was created or modified.
The Application object.
Propel Applications represent the web or mobile app you are building. They provide the API credentials that allow your client- or server-side app to access the Propel API. The Application’s Propeller determines the speed and cost of your Metric Queries.
The Application’s unique identifier.
The Application’s unique name.
The Application’s description.
The Application’s Account.
The Account object.
The Account’s unique identifier.
The Application’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Application’s creation date and time in UTC.
The Application’s last modification date and time in UTC.
The Application’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Application’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Application’s OAuth 2.0 client identifier.
The Application’s OAuth 2.0 client secret.
The Application’s Propeller.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max 5,000,000 records per second.
P1_SMALL: Max 25,000,000 records per second.
P1_MEDIUM: Max 100,000,000 records per second.
P1_LARGE: Max 250,000,000 records per second.
P1_X_LARGE: Max 500,000,000 records per second.
The Application’s OAuth 2.0 scopes.
The API operations an Application is authorized to perform.
ADMIN: Grant read/write access to Data Sources, Data Pools, Metrics and Policies.
APPLICATION_ADMIN: Grant read/write access to Applications.
DATA_POOL_QUERY: Grant read access to query Data Pools.
DATA_POOL_READ: Grant read access to read Data Pools.
DATA_POOL_STATS: Grant read access to fetch column statistics from Data Pools.
ENVIRONMENT_ADMIN: Grant read/write access to Environments.
METRIC_QUERY: Grant read access to query Metrics.
METRIC_STATS: Grant read access to fetch Dimension statistics from Metrics.
METRIC_READ: Grant read access to Metrics. This does not allow querying Metrics. For that, see METRIC_QUERY.
A paginated list of Data Pool Access Policies associated with the Application.
Arguments
DataSource
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Account.
The Account object.
The Account’s unique identifier.
The Data Source’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
The types of Data Sources.
WEBHOOK: Indicates a Webhook Data Source.
TWILIO_SEGMENT: Indicates a Twilio Segment Data Source.
S3: Indicates an Amazon S3 Data Source.
Redshift: Indicates a Redshift Data Source.
POSTGRESQL: Indicates a PostgreSQL Data Source.
KAFKA: Indicates a Kafka Data Source.
Http: Indicates an HTTP Data Source.
CLICKHOUSE: Indicates a ClickHouse Data Source.
AMAZON_DYNAMODB: Indicates an Amazon DynamoDB Data Source.
AMAZON_DATA_FIREHOSE: Indicates an Amazon Data Firehose Data Source.
Snowflake: Indicates a Snowflake Data Source.
INTERNAL: Indicates an internal Data Source.
The Data Source’s status.
The status of a Data Source.
CREATED: The Data Source has been created, but it is not connected yet.
CONNECTING: Propel is attempting to connect the Data Source.
CONNECTED: The Data Source is connected.
BROKEN: The Data Source failed to connect.
DELETING: Propel is deleting the Data Source.
The Data Source’s connection settings.
The Snowflake Data Source connection settings.
The Snowflake account. This is the part before the “snowflakecomputing.com” part of your Snowflake URL.
The Snowflake database name.
The Snowflake warehouse name. It should be “PROPELLING” if you used the default name in the setup script.
The Snowflake schema.
The Snowflake username. It should be “PROPEL” if you used the default name in the setup script.
The Snowflake role. It should be “PROPELLER” if you used the default name in the setup script.
The Amazon Data Firehose Data Source’s connection settings.
Enables or disables access control for the Data Pool. If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
HTTP basic access authentication credentials. You must configure these same credentials to be included in the X-Amz-Firehose-Access-Key header when Amazon Data Firehose issues requests to its custom HTTP endpoint.
Additional columns for the table in Propel.
Copy this value into the URL field when configuring your Amazon Data Firehose to deliver to a custom HTTP endpoint.
Override the Data Pool’s table settings. These describe how the Data Pool’s table is created in ClickHouse, and a default will be chosen based on the Data Pool’s timestamp value, if any. You can override these defaults in order to specify a custom table engine, custom ORDER BY, etc.
See TableSettings
The primary timestamp column, if any.
Copy this value into the X-Amz-Firehose-Access-Key header when configuring your Amazon Data Firehose to deliver to a custom HTTP endpoint.
The Amazon DynamoDB Data Source’s connection settings.
Enables or disables access control for the Data Pool. If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
HTTP basic access authentication credentials. You must configure these same credentials to be included in the X-Amz-Firehose-Access-Key header when Amazon Data Firehose transmits records from your DynamoDB table to its custom HTTP endpoint.
Additional columns for the table in Propel.
Copy this value into the URL field when configuring your Amazon Data Firehose to transmit records from your DynamoDB table to a custom HTTP endpoint.
Override the Data Pool’s table settings. These describe how the Data Pool’s table is created in ClickHouse, and a default will be chosen based on the Data Pool’s timestamp value, if any. You can override these defaults in order to specify a custom table engine, custom ORDER BY, etc.
See TableSettings
The primary timestamp column, if any.
Copy this value into the X-Amz-Firehose-Access-Key header when configuring your Amazon Data Firehose to transmit records from your DynamoDB table to its custom HTTP endpoint.
The ClickHouse Data Source connection settings.
The database to connect to.
The password for the provided user.
Whether the user has read-only permissions for querying ClickHouse.
The URL where the ClickHouse host listens for HTTP[S] connections.
The user for authenticating against the ClickHouse host.
The HTTP Data Source connection settings.
The HTTP Basic authentication settings for uploading new data.
If this parameter is not provided, anyone with the URL to your tables will be able to upload data. While it’s OK to test without HTTP Basic authentication, we recommend enabling it.
The HTTP Data Source’s tables.
The Kafka Data Source connection settings.
The type of authentication to use. Can be SCRAM-SHA-256, SCRAM-SHA-512, PLAIN, or NONE.
The bootstrap server(s) to connect to.
The password for the provided user.
Whether the connection to the Kafka servers is encrypted or not.
The user for authenticating against the Kafka servers.
The PostgreSQL Data Source connection settings.
The database to connect to.
The host where PostgreSQL is listening.
The port where PostgreSQL is listening (usually 5432).
The schema to use.
The user for authenticating against PostgreSQL.
The connection settings for an Amazon S3 Data Source. These include the Amazon S3 bucket name, the AWS access key ID, and the tables (along with their paths). We do not allow fetching the AWS secret access key after it has been set.
The AWS access key ID for an IAM user with sufficient access to the Amazon S3 bucket.
The name of the Amazon S3 bucket.
The Amazon S3 Data Source’s tables.
The Twilio Segment Data Source connection settings.
Enables or disables access control for the Data Pool. If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
The HTTP basic authentication settings for the Twilio Segment Data Source URL. If this parameter is not provided, anyone with the webhook URL will be able to send events. While it’s OK to test without HTTP Basic authentication, we recommend enabling it.
The additional columns for the table in Propel.
Override the Data Pool’s table settings. These describe how the Data Pool’s table is created in ClickHouse, and a
default will be chosen based on the Data Pool’s timestamp
and uniqueId
values, if any. You can override these
defaults in order to specify a custom table engine, custom ORDER BY, etc.
See TableSettings
The primary timestamp column, if any.
The webhook URL that Twilio Segment should POST events to.
The Webhook Data Source connection settings.
Enables or disables access control for the Data Pool. If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
The HTTP basic authentication settings for the Webhook Data Source URL. If this parameter is not provided, anyone with the webhook URL will be able to send events. While it’s OK to test without HTTP Basic authentication, we recommend enabling it.
The additional columns for the table in Propel.
Override the Data Pool’s table settings. These describe how the Data Pool’s table is created in ClickHouse, and a
default will be chosen based on the Data Pool’s timestamp
and uniqueId
values, if any. You can override these
defaults in order to specify a custom table engine, custom ORDER BY, etc.
See TableSettings
The primary timestamp column, if any.
The Webhook URL for posting JSON events.
The tenant ID column, if any.
deprecated: Will be removed; use Data Pool Access Policies instead.
The unique ID column, if any. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated.
deprecated: Will be removed; use Table Settings to define the primary key.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
The Data Source Check object.
Data Source Checks are executed when setting up your Data Source. They check that Propel will be able to receive data and set up Data Pools.
The exact Checks to perform vary by Data Source. For example, Snowflake-backed Data Sources will have their own specific Checks.
The name of the Data Source Check to be performed.
A description of the Data Source Check to be performed.
The status of the Data Source Check (all checks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
The status of a Data Source Check.
NOT_STARTED: The Check has not started.
SUCCEEDED: The Check succeeded.
FAILED: The Check failed.
The time at which the Data Source Check was performed.
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
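For example, a forward-pagination query shaped like the following walks the Data Pools page by page. This is a minimal sketch: the dataSource query name, the ID value, and the Relay-style connection fields (nodes, pageInfo, endCursor, hasNextPage) are assumptions based on the descriptions in this reference, not guaranteed spellings.

# Sketch only; query and field names assumed, not confirmed by this reference.
query {
  dataSource(id: "DSO00000000000000000000000000") {
    dataPools(first: 10, after: "<endCursor from the previous page>") {
      nodes {
        id
        uniqueName
      }
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
}

Repeat the query, passing each response’s pageInfo.endCursor as after, until hasNextPage is false.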
Arguments
DataSourceCheck
The Data Source Check object.
Data Source Checks are executed when setting up your Data Source. They check that Propel will be able to receive data and set up Data Pools.
The exact Checks to perform vary by Data Source. For example, Snowflake-backed Data Sources will have their own specific Checks.
The name of the Data Source Check to be performed.
A description of the Data Source Check to be performed.
The status of the Data Source Check (all checks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
The status of a Data Source Check.
NOT_STARTED: The Check has not started.
SUCCEEDED: The Check succeeded.
FAILED: The Check failed.
The time at which the Data Source Check was performed.
TableIntrospection
The table introspection object.
When setting up a Data Source, Propel may need to introspect tables in order to determine what tables and columns are available to create Data Pools from. The table introspection represents the lifecycle of this operation (whether it’s in-progress, succeeded, or failed) and the resulting tables and columns. These will be captured as table and column objects, respectively.
The Data Source the table introspection was performed for. See DataSource
The status of the table introspection.
The status of a table introspection.
NOT_STARTED: The table introspection has not started.
STARTED: The table introspection has started.
SUCCEEDED: The table introspection succeeded.
FAILED: The table introspection failed.
The table introspection’s creation date and time in UTC.
The table introspection’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The table introspection’s last modification date and time in UTC.
The table introspection’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The number of tables introspected.
Table
The table object.
Once a table introspection succeeds, it creates a new table object for every table it introspected.
The table’s ID.
The table’s name.
The Data Source to which the table belongs. See DataSource
The number of rows contained within the table at the time of introspection. Check the table’s cachedAt time, since this info can become out of date.
The size of the table (in bytes) at the time of introspection. Check the table’s cachedAt time, since this info can become out of date.
The time at which the table was cached (i.e., the time at which it was introspected).
The time at which the table was created. This is the same as its cachedAt time.
The table’s creator. This corresponds to the initiator of the table Introspection. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The table’s columns.
Arguments
The column connection object.
Learn more about pagination in GraphQL.
The time at which the columns were cached (i.e., the time at which they were introspected).
The column connection’s edges.
The column connection’s nodes.
The column object.
Once a table introspection succeeds, it creates a new table object for every table it introspected. Within each table object, it also creates a column object for every column it introspected.
The column’s name.
The column’s type.
Whether the column is nullable, meaning whether it accepts a null value.
The time at which the column was cached (i.e., the time at which it was introspected).
The time at which the column was created. This is the same as its cachedAt time.
The column’s creator. This corresponds to the initiator of the table introspection. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
This is the suggested Data Pool column type to use when converting this Data Source column to a Data Pool column.
Propel makes this suggestion based on the Data Source column type. If the Data Source column type is unsupported, this field returns null.
Sometimes, you know better which Data Pool column type to convert to. In these cases, you can refer to supportedDataPoolColumnTypes for the full set of supported conversions.
See ColumnType
This is the set of supported Data Pool column types you can use when converting this Data Source column to a Data Pool column. If the Data Source column type is unsupported, this field returns an empty array.
For example, a numeric Data Source column type could be converted to a narrower or wider numeric Data Pool column type; a string-valued Data Source column type could be mapped to a date or timestamp Data Pool column type.
See ColumnType
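As a sketch, introspected columns and their suggested conversions could be read like the following. The dataSource query name, arguments, and ID value are assumptions; suggestedDataPoolColumnType and supportedDataPoolColumnTypes follow the field descriptions above but are not guaranteed spellings.

# Sketch; names assumed from the field descriptions in this reference.
query {
  dataSource(id: "DSO00000000000000000000000000") {
    tables(first: 1) {
      nodes {
        name
        columns(first: 50) {
          nodes {
            name
            type
            suggestedDataPoolColumnType
            supportedDataPoolColumnTypes
          }
        }
      }
    }
  }
}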
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
The column connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
The table’s columns which can be used as a timestamp for a Data Pool.
Arguments
The column connection object.
Learn more about pagination in GraphQL.
The time at which the columns were cached (i.e., the time at which they were introspected).
The column connection’s edges.
The column connection’s nodes.
The column object.
Once a table introspection succeeds, it creates a new table object for every table it introspected. Within each table object, it also creates a column object for every column it introspected.
The column’s name.
The column’s type.
Whether the column is nullable, meaning whether it accepts a null value.
The time at which the column was cached (i.e., the time at which it was introspected).
The time at which the column was created. This is the same as its cachedAt time.
The column’s creator. This corresponds to the initiator of the table introspection. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
This is the suggested Data Pool column type to use when converting this Data Source column to a Data Pool column.
Propel makes this suggestion based on the Data Source column type. If the Data Source column type is unsupported, this field returns null.
Sometimes, you know better which Data Pool column type to convert to. In these cases, you can refer to supportedDataPoolColumnTypes for the full set of supported conversions.
See ColumnType
This is the set of supported Data Pool column types you can use when converting this Data Source column to a Data Pool column. If the Data Source column type is unsupported, this field returns an empty array.
For example, a numeric Data Source column type could be converted to a narrower or wider numeric Data Pool column type; a string-valued Data Source column type could be mapped to a date or timestamp Data Pool column type.
See ColumnType
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
The column connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
The table’s columns which can be used as a measure for a Metric.
Arguments
The column connection object.
Learn more about pagination in GraphQL.
The time at which the columns were cached (i.e., the time at which they were introspected).
The column connection’s edges.
The column connection’s nodes.
The column object.
Once a table introspection succeeds, it creates a new table object for every table it introspected. Within each table object, it also creates a column object for every column it introspected.
The column’s name.
The column’s type.
Whether the column is nullable, meaning whether it accepts a null value.
The time at which the column was cached (i.e., the time at which it was introspected).
The time at which the column was created. This is the same as its cachedAt time.
The column’s creator. This corresponds to the initiator of the table introspection. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
This is the suggested Data Pool column type to use when converting this Data Source column to a Data Pool column.
Propel makes this suggestion based on the Data Source column type. If the Data Source column type is unsupported, this field returns null.
Sometimes, you know better which Data Pool column type to convert to. In these cases, you can refer to supportedDataPoolColumnTypes for the full set of supported conversions.
See ColumnType
This is the set of supported Data Pool column types you can use when converting this Data Source column to a Data Pool column. If the Data Source column type is unsupported, this field returns an empty array.
For example, a numeric Data Source column type could be converted to a narrower or wider numeric Data Pool column type; a string-valued Data Source column type could be mapped to a date or timestamp Data Pool column type.
See ColumnType
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
The column connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
Information about the table obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Column
The column object.
Once a table introspection succeeds, it creates a new table object for every table it introspected. Within each table object, it also creates a column object for every column it introspected.
The column’s name.
The column’s type.
Whether the column is nullable, meaning whether it accepts a null value.
The time at which the column was cached (i.e., the time at which it was introspected).
The time at which the column was created. This is the same as its cachedAt time.
The column’s creator. This corresponds to the initiator of the table introspection. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
This is the suggested Data Pool column type to use when converting this Data Source column to a Data Pool column.
Propel makes this suggestion based on the Data Source column type. If the Data Source column type is unsupported, this field returns null.
Sometimes, you know better which Data Pool column type to convert to. In these cases, you can refer to supportedDataPoolColumnTypes for the full set of supported conversions.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
This is the set of supported Data Pool column types you can use when converting this Data Source column to a Data Pool column. If the Data Source column type is unsupported, this field returns an empty array.
For example, a numeric Data Source column type could be converted to a narrower or wider numeric Data Pool column type; a string-valued Data Source column type could be mapped to a date or timestamp Data Pool column type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
InternalConnectionSettings
SnowflakeConnectionSettings
The Snowflake Data Source connection settings.
The Snowflake account. This is the part before the “snowflakecomputing.com” part of your Snowflake URL.
The Snowflake database name.
The Snowflake warehouse name. It should be “PROPELLING” if you used the default name in the setup script.
The Snowflake schema.
The Snowflake username. It should be “PROPEL” if you used the default name in the setup script.
The Snowflake role. It should be “PROPELLER” if you used the default name in the setup script.
HttpBasicAuthSettings
The HTTP Basic authentication settings.
Username for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
Password for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
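As context, HTTP Basic authentication sends these credentials base64-encoded in the Authorization header. A hypothetical settings value and the header a client would then send (the input field names are assumptions):

# Hypothetical input value; key names assumed.
{ username: "propel-ingest", password: "<strong-random-secret>" }
# The client must then send:
# Authorization: Basic base64("propel-ingest:<strong-random-secret>")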
HttpDataSourceTable
An HTTP Data Source’s table.
The ID of the table.
The name of the table.
All the columns present in the table.
A column in an HTTP Data Source’s table.
The column name. It has to be unique within a table.
The column type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
The ClickHouse type to use when type is set to CLICKHOUSE.
Whether the column’s type is nullable or not.
HttpDataSourceColumn
A column in an HTTP Data Source’s table.
The column name. It has to be unique within a table.
The column type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
The ClickHouse type to use when type is set to CLICKHOUSE.
Whether the column’s type is nullable or not.
S3DataSourceTable
An Amazon S3 Data Source’s table.
The ID of the table.
The name of the table.
The path to the table’s files in Amazon S3.
All the columns present in the table.
A column in an Amazon S3 Data Source’s table.
The column name.
The column type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Whether the column’s type is nullable or not.
S3DataSourceColumn
A column in an Amazon S3 Data Source’s table.
The column name.
The column type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Whether the column’s type is nullable or not.
AmazonDataFirehoseDataSourceColumn
A column in an Amazon Data Firehose Data Source’s table.
The column name.
The JSON property that the column will be derived from. For example, if you send a JSON event like this:
{ "greeting": { "message": "hello, world" } }
Then you can use the JSON property “greeting.message” to extract “hello, world” to a column.
The column type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Whether the column’s type is nullable or not.
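Putting these fields together, a hypothetical column definition that extracts the greeting.message property from the JSON example above into a nullable STRING column (the input field names are assumptions, not confirmed spellings):

# Hypothetical column definition; field names assumed.
{
  columnName: "message"
  jsonProperty: "greeting.message"
  type: STRING
  nullable: true
}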
AmazonDynamoDBDataSourceColumn
A column in an Amazon DynamoDB Data Source’s table.
The column name.
The JSON property that the column will be derived from. For example, if you send a JSON event like this:
{ "greeting": { "message": "hello, world" } }
Then you can use the JSON property “greeting.message” to extract “hello, world” to a column.
The column type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Whether the column’s type is nullable or not.
TwilioSegmentDataSourceColumn
A column in a Twilio Segment Data Source’s table.
The column name.
The JSON property that the column will be derived from. For example, if you POST a JSON event like this:
{ "greeting": { "message": "hello, world" } }
Then you can use the JSON property “greeting.message” to extract “hello, world” to a column.
The column type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Whether the column’s type is nullable or not.
WebhookDataSourceColumn
A column in a Webhook Data Source’s table.
The column name.
The JSON property that the column will be derived from. For example, if you POST a JSON event like this:
{ "greeting": { "message": "hello, world" } }
Then you can use the JSON property “greeting.message” to extract “hello, world” to a column.
The column type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Whether the column’s type is nullable or not.
DataSourceResponse
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Account.
The Account object.
The Account’s unique identifier.
The Data Source’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
The types of Data Sources.
WEBHOOK: Indicates a Webhook Data Source.
TWILIO_SEGMENT: Indicates a Twilio Segment Data Source.
S3: Indicates an Amazon S3 Data Source.
Redshift: Indicates a Redshift Data Source.
POSTGRESQL: Indicates a PostgreSQL Data Source.
KAFKA: Indicates a Kafka Data Source.
Http: Indicates an HTTP Data Source.
CLICKHOUSE: Indicates a ClickHouse Data Source.
AMAZON_DYNAMODB: Indicates an Amazon DynamoDB Data Source.
AMAZON_DATA_FIREHOSE: Indicates an Amazon Data Firehose Data Source.
Snowflake: Indicates a Snowflake Data Source.
INTERNAL: Indicates an internal Data Source.
The Data Source’s status.
The status of a Data Source.
CREATED: The Data Source has been created, but it is not connected yet.
CONNECTING: Propel is attempting to connect the Data Source.
CONNECTED: The Data Source is connected.
BROKEN: The Data Source failed to connect.
DELETING: Propel is deleting the Data Source.
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
The Data Source Check object.
Data Source Checks are executed when setting up your Data Source. They check that Propel will be able to receive data and set up Data Pools.
The exact Checks to perform vary by Data Source. For example, Snowflake-backed Data Sources will have their own specific Checks.
The name of the Data Source Check to be performed.
A description of the Data Source Check to be performed.
The status of the Data Source Check (all checks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Source Check failed, this field includes a descriptive error message.
See Error
The time at which the Data Source Check was performed.
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
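Conversely, a backward-pagination sketch using last and before (same caveats as the forward example earlier in this reference: the query and field names are assumed, not guaranteed spellings):

# Sketch only; query and field names assumed.
query {
  dataSource(id: "DSO00000000000000000000000000") {
    dataPools(last: 10, before: "<startCursor of the current page>") {
      nodes {
        id
        uniqueName
      }
      pageInfo {
        startCursor
        hasPreviousPage
      }
    }
  }
}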
Arguments
DataPool
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
The status of a Data Pool.
CREATED: The Data Pool has been created and will be set up soon.
PENDING: Propel is attempting to set up the Data Pool.
LIVE: The Data Pool is set up and serving data. Check its Syncs to monitor data ingestion.
SETUP_FAILED: The Data Pool setup failed. Check its Setup Tasks before re-attempting setup.
CONNECTING
CONNECTED
BROKEN
PAUSING
PAUSED
DELETING: Propel is deleting the Data Pool and all of its associated data.
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The Data Pool’s primary timestamp column, if any.
A Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The primary timestamp column’s type.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
See DataPoolColumn
The Data Pool column connection’s nodes.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool column connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
The list of measures (numeric columns) in the Data Pool.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
See DataPoolColumn
The Data Pool column connection’s nodes.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool column connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
The Data Pool Setup Task object.
Data Pool Setup Tasks are executed when setting up your Data Pool. They ensure Propel will be able to sync records from your Data Source to your Data Pool.
The exact Setup Tasks to perform vary by Data Source. For example, Data Pools pointing to Snowflake-backed Data Sources will have their own specific Setup Tasks.
The name of the Data Pool Setup Task to be performed.
A description of the Data Pool Setup Task to be performed.
The status of the Data Pool Setup Task (all setup tasks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
The status of a Data Pool Setup Task.
NOT_STARTED: The Data Pool Setup Task has not been started yet.
SUCCEEDED: The Data Pool Setup Task has completed successfully.
FAILED: The Data Pool Setup Task has failed.
The time at which the Data Pool Setup Task was completed.
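A status-polling sketch after creating a Data Pool (the dataPool query name and the status and setupTasks field spellings are assumptions based on the descriptions above):

# Sketch; names assumed, not confirmed by this reference.
query {
  dataPool(id: "DPO00000000000000000000000000") {
    status
    setupTasks {
      name
      description
      status
    }
  }
}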
Settings related to Data Pool syncing.
Settings related to Data Pool syncing.
Indicates whether syncing is enabled or disabled.
The Data Pool Sync Status. It indicates whether a Data Pool is syncing data or not.
ENABLED: Syncing is enabled for the Data Pool.
DISABLING: Propel is disabling syncing for the Data Pool.
DISABLED: Syncing is disabled for the Data Pool.
The syncing interval.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
The available Data Pool sync intervals. These specify the unit of time between attempts to sync data from your data warehouse.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
EVERY_1_MINUTE
EVERY_5_MINUTES
EVERY_15_MINUTES
EVERY_30_MINUTES
EVERY_1_HOUR
EVERY_4_HOURS
EVERY_12_HOURS
EVERY_24_HOURS
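For instance, a minimal sketch of reading these settings over GraphQL (the dataPool query and the syncing field names are assumptions inferred from the descriptions above; the ID is a placeholder):
query SyncingSettings {
  dataPool(id: "DPO...") {
    # "status", "interval", and "lastSyncedAt" are assumed field names.
    syncing {
      status
      interval
      lastSyncedAt
    }
  }
}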
The date and time of the most recent Sync in UTC.
The list of Syncs of the Data Pool.
Arguments
The filter to apply when listing the Syncs for a Data Pool.
EMPTY: Returns only Syncs with empty records.
NOT_EMPTY: Returns only Syncs that contain one or more records.
ALL: Returns all Syncs, regardless of whether they contain records or not.
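As a hedged sketch, listing only non-empty Syncs might look like the following (the query shape and the filter argument name are assumptions; the ID is a placeholder):
query NonEmptySyncs {
  dataPool(id: "DPO...") {
    # "filter: NOT_EMPTY" excludes Syncs that carried no records.
    syncs(first: 10, filter: NOT_EMPTY) {
      nodes {
        id
      }
    }
  }
}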
See SyncConnection
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
Response returned by the validateExpression query for validating expressions in Custom Metrics.
Returns whether the expression is valid or not with a reason explaining why.
True if the expression is valid, false otherwise.
The reason the expression is not valid, or null if it is valid.
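A minimal sketch of this validation (the valid and reason selections follow the descriptions above; the expression and ID are placeholders):
query ValidateCustomExpression {
  dataPool(id: "DPO...") {
    # Returns valid: false plus a reason if the expression is rejected.
    validateExpression(expression: "SUM(price) - SUM(discount)") {
      valid
      reason
    }
  }
}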
The Data Pool’s table settings.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
A Data Pool’s table engine.
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
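A sketch of inspecting these table settings (the field names are assumptions inferred from the clause descriptions above, and selecting type assumes the table engine exposes a shared type field):
query TableSettingsSketch {
  dataPool(id: "DPO...") {
    tableSettings {
      engine {
        type
      }
      partitionBy
      primaryKey
      orderBy
      ttl
    }
  }
}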
The Data Pool’s columns that participate in its PARTITION BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its PRIMARY KEY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its ORDER BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead.
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
A Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
TableSettings
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
A Data Pool’s table engine.
Parameters for the MergeTree table engine.
The type is always MERGE_TREE.
See TableEngineType
Parameters for the ReplacingMergeTree table engine.
The type is always REPLACING_MERGE_TREE.
See TableEngineType
The ver parameter to the ReplacingMergeTree engine.
Parameters for the SummingMergeTree table engine.
The type is always SUMMING_MERGE_TREE.
See TableEngineType
The columns argument for the SummingMergeTree table engine.
Parameters for the AggregatingMergeTree table engine.
The type is always AGGREGATING_MERGE_TREE.
See TableEngineType
Parameters for the PostgreSQL table engine.
The type is always POSTGRESQL.
See TableEngineType
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
MergeTreeTableEngine
Parameters for the MergeTree table engine.
The type is always MERGE_TREE.
ClickHouse table engine types.
MERGE_TREE: The MergeTree table engine.
REPLACING_MERGE_TREE: The ReplacingMergeTree table engine.
SUMMING_MERGE_TREE: The SummingMergeTree table engine.
AGGREGATING_MERGE_TREE: The AggregatingMergeTree table engine.
POSTGRESQL: The PostgreSQL table engine.
ReplacingMergeTreeTableEngine
Parameters for the ReplacingMergeTree table engine.
The type is always REPLACING_MERGE_TREE.
ClickHouse table engine types.
MERGE_TREE: The MergeTree table engine.
REPLACING_MERGE_TREE: The ReplacingMergeTree table engine.
SUMMING_MERGE_TREE: The SummingMergeTree table engine.
AGGREGATING_MERGE_TREE: The AggregatingMergeTree table engine.
POSTGRESQL: The PostgreSQL table engine.
The ver parameter to the ReplacingMergeTree engine.
SummingMergeTreeTableEngine
Parameters for the SummingMergeTree table engine.
The type is always SUMMING_MERGE_TREE.
ClickHouse table engine types.
MERGE_TREE: The MergeTree table engine.
REPLACING_MERGE_TREE: The ReplacingMergeTree table engine.
SUMMING_MERGE_TREE: The SummingMergeTree table engine.
AGGREGATING_MERGE_TREE: The AggregatingMergeTree table engine.
POSTGRESQL: The PostgreSQL table engine.
The columns argument for the SummingMergeTree table engine.
AggregatingMergeTreeTableEngine
Parameters for the AggregatingMergeTree table engine.
The type is always AGGREGATING_MERGE_TREE.
ClickHouse table engine types.
MERGE_TREE: The MergeTree table engine.
REPLACING_MERGE_TREE: The ReplacingMergeTree table engine.
SUMMING_MERGE_TREE: The SummingMergeTree table engine.
AGGREGATING_MERGE_TREE: The AggregatingMergeTree table engine.
POSTGRESQL: The PostgreSQL table engine.
PostgreSqlTableEngine
Parameters for the PostgreSQL table engine.
The type is always POSTGRESQL.
ClickHouse table engine types.
MERGE_TREE: The MergeTree table engine.
REPLACING_MERGE_TREE: The ReplacingMergeTree table engine.
SUMMING_MERGE_TREE: The SummingMergeTree table engine.
AGGREGATING_MERGE_TREE: The AggregatingMergeTree table engine.
POSTGRESQL: The PostgreSQL table engine.
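As a hedged sketch of how an engine and its parameters might be supplied through table settings when creating a Data Pool (the mutation name and input shape are assumptions for illustration, not the confirmed API; column names are placeholders):
mutation CreateDataPoolSketch {
  createDataPool(
    input: {
      # Assumed input shape: pick ReplacingMergeTree and point its
      # "ver" parameter at a version column so newer rows win.
      tableSettings: {
        engine: { replacingMergeTree: { ver: "updated_at" } }
        orderBy: ["order_id"]
        ttl: "timestamp + INTERVAL 90 DAY"
      }
    }
  ) {
    dataPool {
      id
    }
  }
}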
DataPoolSetupTask
The Data Pool Setup Task object.
Data Pool Setup Tasks are executed when setting up your Data Pool. They ensure Propel will be able to sync records from your Data Source to your Data Pool.
The exact Setup Tasks to perform vary by Data Source. For example, Data Pools pointing to a Snowflake-backed Data Source will have their own specific Setup Tasks.
The name of the Data Pool Setup Task to be performed.
A description of the Data Pool Setup Task to be performed.
The status of the Data Pool Setup Task (all setup tasks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
The status of a Data Pool Setup Task.
NOT_STARTED: The Data Pool Setup Task has not been started yet.
SUCCEEDED: The Data Pool Setup Task has completed successfully.
FAILED: The Data Pool Setup Task has failed.
The time at which the Data Pool Setup Task was completed.
Dimension
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
Statistics about a particular Dimension.
An array of unique values for the Dimension, up to 1,000. Empty if the Dimension contains more than 1,000 unique values. Fetching unique values incurs query costs.
Arguments
The minimum value of the Dimension.
The maximum value of the Dimension.
The average value of the Dimension. Empty for non-numeric Dimensions.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Query status.
See QueryStatus
The Query subtype.
See QuerySubtype
The SQL the query executed.
DimensionStatistics
Statistics about a particular Dimension.
An array of unique values for the Dimension, up to 1,000. Empty if the Dimension contains more than 1,000 unique values. Fetching unique values incurs query costs.
Arguments
The minimum value of the Dimension.
The maximum value of the Dimension.
The average value of the Dimension. Empty for non-numeric Dimensions.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Propeller used for this query.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max records per second: 5,000,000.
P1_SMALL: Max records per second: 25,000,000.
P1_MEDIUM: Max records per second: 100,000,000.
P1_LARGE: Max records per second: 250,000,000.
P1_X_LARGE: Max records per second: 500,000,000.
The Query status.
The Query status.
COMPLETED: The Query was completed successfully.
ERROR: The Query experienced an error.
TIMED_OUT: The Query timed out.
The Query type.
The Query type.
METRIC: Indicates a Metric Query.
STATS: Indicates a Dimension Stats Query.
REPORT: Indicates a Report Query.
RECORDS: Indicates a Record Table Query.
RECORDS_BY_UNIQUE_ID: Indicates records queried by unique ID.
SQL: Indicates a SQL Query.
TOP_VALUES: Indicates a Top Values Query.
The Query subtype.
The Query subtype.
COUNTER: Indicates a Metric counter Query.
TIME_SERIES: Indicates a Metric time series Query.
LEADERBOARD: Indicates a Metric leaderboard Query.
The SQL the query executed.
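A minimal sketch of a reusable selection for these statistics; every field name below is an assumption inferred from the descriptions above, not confirmed schema:
fragment DimensionStatsFields on DimensionStatistics {
  min
  max
  average
  uniqueValues
  # Metadata about the Query that produced the statistics.
  queryInfo {
    id
    bytesProcessed
    durationInMilliseconds
    recordsProcessed
    recordsReturned
    status
  }
}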
MaterializedView
The Materialized View’s unique identifier.
The Materialized View’s unique name.
The Materialized View’s description.
The Materialized View’s Account.
The Account object.
The Account’s unique identifier.
The Materialized View’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Materialized View’s creation date and time in UTC.
The Materialized View’s last modification date and time in UTC.
The Materialized View’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Materialized View’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The SQL that the Materialized View executes.
The Materialized View’s destination (AKA “target”) Data Pool.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
The status of a Data Pool.
CREATED: The Data Pool has been created and will be set up soon.
PENDING: Propel is attempting to set up the Data Pool.
LIVE: The Data Pool is set up and serving data. Check its Syncs to monitor data ingestion.
SETUP_FAILED: The Data Pool setup failed. Check its Setup Tasks before re-attempting setup.
CONNECTING
CONNECTED
BROKEN
PAUSING
PAUSED
DELETING: Propel is deleting the Data Pool and all of its associated data.
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The Data Pool’s primary timestamp column, if any.
A Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The primary timestamp column’s type.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
The list of measures (numeric columns) in the Data Pool.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
The Data Pool Setup Task object.
Data Pool Setup Tasks are executed when setting up your Data Pool. They ensure Propel will be able to sync records from your Data Source to your Data Pool.
The exact Setup Tasks to perform vary by Data Source. For example, Data Pools pointing to a Snowflake-backed Data Source will have their own specific Setup Tasks.
The name of the Data Pool Setup Task to be performed.
A description of the Data Pool Setup Task to be performed.
The status of the Data Pool Setup Task (all setup tasks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Pool Setup Task failed, this field includes a descriptive error message.
See Error
The time at which the Data Pool Setup Task was completed.
Settings related to Data Pool syncing.
Settings related to Data Pool syncing.
Indicates whether syncing is enabled or disabled.
The syncing interval.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
The date and time of the most recent Sync in UTC.
The list of Syncs of the Data Pool.
Arguments
The filter to apply when listing the Syncs for a Data Pool.
EMPTY: Returns only Syncs with empty records.
NOT_EMPTY: Returns only Syncs that contain one or more records.
ALL: Returns all Syncs, regardless of whether they contain records or not.
See SyncConnection
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
Response returned by the validateExpression query for validating expressions in Custom Metrics.
Returns whether the expression is valid or not with a reason explaining why.
True if the expression is valid, false otherwise.
The reason the expression is not valid, or null if it is valid.
The Data Pool’s table settings.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
See TableEngine
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The Data Pool’s columns that participate in its PARTITION BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its PRIMARY KEY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its ORDER BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead.
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
A Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
The Materialized View’s source Data Pool.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
The status of a Data Pool.
CREATED: The Data Pool has been created and will be set up soon.
PENDING: Propel is attempting to set up the Data Pool.
LIVE: The Data Pool is set up and serving data. Check its Syncs to monitor data ingestion.
SETUP_FAILED: The Data Pool setup failed. Check its Setup Tasks before re-attempting setup.
CONNECTING
CONNECTED
BROKEN
PAUSING
PAUSED
DELETING: Propel is deleting the Data Pool and all of its associated data.
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The Data Pool’s primary timestamp column, if any.
A Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The primary timestamp column’s type.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
The list of measures (numeric columns) in the Data Pool.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
The Data Pool Setup Task object.
Data Pool Setup Tasks are executed when setting up your Data Pool. They ensure Propel will be able to sync records from your Data Source to your Data Pool.
The exact Setup Tasks to perform vary by Data Source. For example, Data Pools pointing to a Snowflake-backed Data Source will have their own specific Setup Tasks.
The name of the Data Pool Setup Task to be performed.
A description of the Data Pool Setup Task to be performed.
The status of the Data Pool Setup Task (all setup tasks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Pool Setup Task failed, this field includes a descriptive error message.
See Error
The time at which the Data Pool Setup Task was completed.
Settings related to Data Pool syncing.
Settings related to Data Pool syncing.
Indicates whether syncing is enabled or disabled.
The syncing interval.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
The date and time of the most recent Sync in UTC.
The list of Syncs of the Data Pool.
Arguments
The filter to apply when listing the Syncs for a Data Pool.
EMPTY: Returns only Syncs with empty records.
NOT_EMPTY: Returns only Syncs that contain one or more records.
ALL: Returns all Syncs, regardless of whether they contain records or not.
See SyncConnection
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
Response returned by the validateExpression query for validating expressions in Custom Metrics.
Returns whether the expression is valid or not with a reason explaining why.
True if the expression is valid, false otherwise.
The reason the expression is not valid, or null if it is valid.
The Data Pool’s table settings.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
See TableEngine
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The Data Pool’s columns that participate in its PARTITION BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its PRIMARY KEY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its ORDER BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead.
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
A Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
Other Data Pools queried by the Materialized View.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
The status of a Data Pool.
CREATED: The Data Pool has been created and will be set up soon.
PENDING: Propel is attempting to set up the Data Pool.
LIVE: The Data Pool is set up and serving data. Check its Syncs to monitor data ingestion.
SETUP_FAILED: The Data Pool setup failed. Check its Setup Tasks before re-attempting setup.
CONNECTING
CONNECTED
BROKEN
PAUSING
PAUSED
DELETING: Propel is deleting the Data Pool and all of its associated data.
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The Data Pool’s primary timestamp column, if any.
A Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The primary timestamp column’s type.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
The list of measures (numeric columns) in the Data Pool.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
The Data Pool Setup Task object.
Data Pool Setup Tasks are executed when setting up your Data Pool. They ensure Propel will be able to sync records from your Data Source to your Data Pool.
The exact Setup Tasks to perform vary by Data Source. For example, Data Pools pointing to a Snowflake-backed Data Source will have their own specific Setup Tasks.
The name of the Data Pool Setup Task to be performed.
A description of the Data Pool Setup Task to be performed.
The status of the Data Pool Setup Task (all setup tasks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Pool Setup Task failed, this field includes a descriptive error message.
See Error
The time at which the Data Pool Setup Task was completed.
Settings related to Data Pool syncing.
Settings related to Data Pool syncing.
Indicates whether syncing is enabled or disabled.
The syncing interval.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
The date and time of the most recent Sync in UTC.
The list of Syncs of the Data Pool.
Arguments
The filter to apply when listing the Syncs for a Data Pool.
EMPTY: Returns only Syncs with empty records.
NOT_EMPTY: Returns only Syncs that contain one or more records.
ALL: Returns all Syncs, regardless of whether they contain records or not.
See SyncConnection
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
Response returned by the validateExpression query for validating expressions in Custom Metrics.
Returns whether the expression is valid or not with a reason explaining why.
True if the expression is valid, false otherwise.
The reason the expression is not valid, or null if it is valid.
The Data Pool’s table settings.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
See TableEngine
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The Data Pool’s columns that participate in its PARTITION BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its PRIMARY KEY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its ORDER BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead.
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
A Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
MaterializedViewResponse
The result of a mutation which creates or modifies a Materialized View.
The Materialized View which was created or modified.
The Materialized View’s unique identifier.
The Materialized View’s unique name.
The Materialized View’s description.
The Materialized View’s Account.
The Account object.
The Account’s unique identifier.
The Materialized View’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Materialized View’s creation date and time in UTC.
The Materialized View’s last modification date and time in UTC.
The Materialized View’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Materialized View’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The SQL that the Materialized View executes.
The Materialized View’s destination (AKA “target”) Data Pool.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Environment.
See Environment
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
See DataPoolStatus
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The list of measures (numeric columns) in the Data Pool.
Arguments
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
Settings related to Data Pool syncing.
See DataPoolSyncing
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
The Data Pool’s table settings.
See TableSettings
The Data Pool’s columns that participate in its PARTITION BY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its PRIMARY KEY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its ORDER BY clause.
See DataPoolColumn
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead.
See Tenant
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
See UniqueId
The Materialized View’s source Data Pool.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Environment.
See Environment
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
See DataPoolStatus
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The list of measures (numeric columns) in the Data Pool.
Arguments
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
Settings related to Data Pool syncing.
See DataPoolSyncing
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
The Data Pool’s table settings.
See TableSettings
The Data Pool’s columns that participate in its PARTITION BY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its PRIMARY KEY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its ORDER BY clause.
See DataPoolColumn
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead.
See Tenant
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
See UniqueId
Other Data Pools queried by the Materialized View.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Environment.
See Environment
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
See DataPoolStatus
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The list of measures (numeric columns) in the Data Pool.
Arguments
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
Settings related to Data Pool syncing.
See DataPoolSyncing
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
The Data Pool’s table settings.
See TableSettings
The Data Pool’s columns that participate in its PARTITION BY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its PRIMARY KEY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its ORDER BY clause.
See DataPoolColumn
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead.
See Tenant
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
See UniqueId
Filter
The fields of a filter.
You can construct more complex filters using “and” and “or”. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that “and” takes precedence over “or”.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
The available Filter operators.
EQUALS: Selects values that are equal to the specified value.
NOT_EQUALS: Selects values that are not equal to the specified value.
GREATER_THAN: Selects values that are greater than the specified value.
GREATER_THAN_OR_EQUAL_TO: Selects values that are greater than or equal to the specified value.
LESS_THAN: Selects values that are less than the specified value.
LESS_THAN_OR_EQUAL_TO: Selects values that are less than or equal to the specified value.
IS_NULL: Selects values that are null. This operator does not accept a value.
IS_NOT_NULL: Selects values that are not null. This operator does not accept a value.
LIKE: Selects values that match the specified pattern.
NOT_LIKE: Selects values that do not match the specified pattern.
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
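For instance, the null-testing operators take no value; mirroring the example above, a filter selecting rows where an illustrative deleted_at column is null could be written:
{
  "column": "deleted_at",
  "operator": "IS_NULL"
}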
UpdateDataPoolRecordsJobSetColumn
The fields for creating an Update Data Pool Records Job. For example, the following set-column inputs set a column to a constant value, increment a counter, and derive a value from other columns:
{
  "column": "status",
  "expression": "'completed'"
}
{
  "column": "counter",
  "expression": "counter + 1"
}
{
  "column": "full_name",
  "expression": "concat(first_name, ' ', last_name)"
}
The name of the column to update.
The value to which the column will be updated. Once evaluated, it should be of the same data type as the column.
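A hedged sketch of issuing such a job (the createUpdateDataPoolRecordsJob mutation name and its input shape are assumptions based on the job and field names above; the ID is a placeholder):
mutation UpdateRecordsSketch {
  createUpdateDataPoolRecordsJob(
    input: {
      dataPool: "DPO..."
      # Only records matching the filter are updated.
      filters: [{ column: "status", operator: EQUALS, value: "pending" }]
      # Each set entry assigns an expression result to a column.
      set: [{ column: "status", expression: "'completed'" }]
    }
  ) {
    job {
      id
    }
  }
}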
Timestamp
A Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The primary timestamp column’s type.
Tenant
A Data Pool’s tenant ID column. The tenant ID column is used to control access to your data with access policies.
The name of the column that represents the tenant ID.
The tenant ID column’s type.
UniqueId
A Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
DataPoolColumn
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
DataPoolResponse
The result of a mutation which creates or modifies a Data Pool.
The Data Pool which was created or modified.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
The status of a Data Pool.
CREATED: The Data Pool has been created and will be set up soon.
PENDING: Propel is attempting to set up the Data Pool.
LIVE: The Data Pool is set up and serving data. Check its Syncs to monitor data ingestion.
SETUP_FAILED: The Data Pool setup failed. Check its Setup Tasks before re-attempting setup.
CONNECTING
CONNECTED
BROKEN
PAUSING
PAUSED
DELETING: Propel is deleting the Data Pool and all of its associated data.
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The Data Pool’s primary timestamp column, if any.
A Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The primary timestamp column’s type.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
The list of measures (numeric columns) in the Data Pool.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
The Data Pool Setup Task object.
Data Pool Setup Tasks are executed when setting up your Data Pool. They ensure Propel will be able to sync records from your Data Source to your Data Pool.
The exact Setup Tasks to perform vary by Data Source. For example, Data Pools pointing to Snowflake-backed Data Sources will have their own specific Setup Tasks.
The name of the Data Pool Setup Task to be performed.
A description of the Data Pool Setup Task to be performed.
The status of the Data Pool Setup Task (all setup tasks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Pool Setup Task failed, this field includes a descriptive error message.
See Error
The time at which the Data Pool Setup Task was completed.
Settings related to Data Pool syncing.
Settings related to Data Pool syncing.
Indicates whether syncing is enabled or disabled.
The syncing interval.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
The date and time of the most recent Sync in UTC.
The list of Syncs of the Data Pool.
Arguments
The filter to apply when listing the Syncs for a Data Pool.
EMPTY: Returns only Syncs with empty records.
NOT_EMPTY: Returns only Syncs that contain one or more records.
ALL: Returns all Syncs, regardless of whether they contain records or not.
See SyncConnection
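For example, a minimal sketch of listing a Data Pool’s non-empty Syncs. The dataPool query, the filter argument name, and the selected node fields are assumptions inferred from the descriptions above, not confirmed signatures:

query {
  # Placeholder Data Pool ID; replace with a real one.
  dataPool(id: "DPO00000000000000000000000000") {
    # Assumed argument names; NOT_EMPTY skips Syncs with no records.
    syncs(first: 10, filter: NOT_EMPTY) {
      nodes {
        id
        status
      }
    }
  }
}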
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
Response returned by the validateExpression query for validating expressions in Custom Metrics.
Returns whether the expression is valid or not with a reason explaining why.
True if the expression is valid, false otherwise.
The reason why the expression is not valid, or null if it is valid.
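As an illustration, a hedged sketch of validating a custom expression against a Data Pool’s columns. The argument name expression and the result fields valid and reason are assumptions based on the ValidateExpressionResult description above:

query {
  dataPool(id: "DPO00000000000000000000000000") {
    # Hypothetical expression; use your Data Pool's own columns.
    validateExpression(expression: "SUM(price * quantity)") {
      valid
      reason
    }
  }
}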
The Data Pool’s table settings.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
See TableEngine
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The Data Pool’s columns that participate in its PARTITION BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool’s columns that participate in its PRIMARY KEY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool’s columns that participate in its ORDER BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key instead
A Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
DataPoolSyncing
Settings related to Data Pool syncing.
Indicates whether syncing is enabled or disabled.
The Data Pool Sync Status. It indicates whether a Data Pool is syncing data or not.
ENABLED: Syncing is enabled for the Data Pool.
DISABLING: Propel is disabling syncing for the Data Pool.
DISABLED: Syncing is disabled for the Data Pool.
The syncing interval.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
The available Data Pool sync intervals. These specify the unit of time between attempts to sync data from your data warehouse.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
EVERY_1_MINUTE
EVERY_5_MINUTES
EVERY_15_MINUTES
EVERY_30_MINUTES
EVERY_1_HOUR
EVERY_4_HOURS
EVERY_12_HOURS
EVERY_24_HOURS
The date and time of the most recent Sync in UTC.
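For example, a sketch of selecting a Data Pool’s syncing settings. The field names syncing, status, interval, and lastSyncedAt are assumptions matching the descriptions above:

query {
  dataPool(id: "DPO00000000000000000000000000") {
    syncing {
      status        # ENABLED, DISABLING, or DISABLED
      interval      # e.g. EVERY_1_HOUR
      lastSyncedAt  # most recent Sync, in UTC
    }
  }
}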
Sync
The Sync object.
This represents the process of syncing data from your Data Source (for example, a Snowflake data warehouse) to your Data Pool.
The Sync’s unique identifier.
The Sync’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Sync’s creation date and time in UTC.
The Sync’s last modification date and time in UTC.
The Sync’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Sync’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Sync’s Data Pool’s Data Source.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Account.
The Account object.
The Account’s unique identifier.
The Data Source’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
The types of Data Sources.
WEBHOOK: Indicates a Webhook Data Source.
TWILIO_SEGMENT: Indicates a Twilio Segment Data Source.
S3: Indicates an Amazon S3 Data Source.
Redshift: Indicates a Redshift Data Source.
POSTGRESQL: Indicates a PostgreSQL Data Source.
KAFKA: Indicates a Kafka Data Source.
Http: Indicates an Http Data Source.
CLICKHOUSE: Indicates a ClickHouse Data Source.
AMAZON_DYNAMODB: Indicates an Amazon DynamoDB Data Source.
AMAZON_DATA_FIREHOSE: Indicates an Amazon Data Firehose Data Source.
Snowflake: Indicates a Snowflake Data Source.
INTERNAL: Indicates an internal Data Source.
The Data Source’s status.
The status of a Data Source.
CREATED: The Data Source has been created, but it is not connected yet.
CONNECTING: Propel is attempting to connect the Data Source.
CONNECTED: The Data Source is connected.
BROKEN: The Data Source failed to connect.
DELETING: Propel is deleting the Data Source.
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
The Data Source Check object.
Data Source Checks are executed when setting up your Data Source. They check that Propel will be able to receive data and setup Data Pools.
The exact Checks to perform vary by Data Source. For example, Snowflake-backed Data Sources will have their own specific Checks.
The name of the Data Source Check to be performed.
A description of the Data Source Check to be performed.
The status of the Data Source Check (all checks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Source Check failed, this field includes a descriptive error message.
See Error
The time at which the Data Source Check was performed.
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
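For instance, a minimal forward-pagination sketch over dataPools. The dataSource lookup, the node fields, and the pageInfo field names are assumptions consistent with the PageInfo description; only first and after are taken directly from the text above:

query {
  dataSource(id: "DSO00000000000000000000000000") {
    # Page forward: "first" sets the page size, "after" continues
    # from the cursor of the last result on the previous page.
    dataPools(first: 10, after: "cursor-of-last-result") {
      edges {
        cursor
        node {
          id
          uniqueName
        }
      }
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
}

To page backward, swap in last and before, passing the cursor of the first result of the current page.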
Arguments
The number of new, updated, and deleted records contained within the Sync, if known. This excludes filtered records.
The (compressed) size of the Sync, in bytes, if known.
The status of the Sync (all Syncs begin as SYNCING before transitioning to SUCCEEDED or FAILED).
The status of a Sync.
SYNCING: Propel is actively syncing records contained within the Sync.
SUCCEEDED: The Sync succeeded. Propel successfully synced all records contained within the Sync.
FAILED: The Sync failed. Propel failed to sync some or all records contained within the Sync.
DELETING: Propel is deleting the Sync.
The time at which the Sync started.
The time at which the Sync succeeded.
The time at which the Sync failed.
The number of new records contained within the Sync, if known. This excludes filtered records.
deprecated: All records are considered to be processed; see processedRecords instead
The number of updated records contained within the Sync, if known. This excludes filtered records.
deprecated: All records are considered to be processed; see processedRecords instead
The number of deleted records contained within the Sync, if known. This excludes filtered records.
deprecated: All records are considered to be processed; see processedRecords instead
The number of filtered records contained within the Sync, due to issues such as a missing timestamp Dimension, if any are known to be invalid.
deprecated: All records are considered to be processed; see processedRecords instead
MetricReportConnection
The Metric Report connection object.
It includes headers and rows for a single page of a report. It also allows paging forward and backward to other pages of the report.
The report connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
The report connection’s edges.
The Metric Report edge object.
The edge’s node.
The Metric Report node object.
This type represents a single row of a report.
An ordered array of display names for your dimensions and Metrics, as defined in the report input. Use this to display your table’s header.
An ordered array of columns. Each column contains the dimension and Metric values for a single row, as defined in the report input. Use this to display a single row within your table.
The edge’s cursor.
The report connection’s nodes.
The Metric Report node object.
This type represents a single row of a report.
An ordered array of display names for your dimensions and Metrics, as defined in the report input. Use this to display your table’s header.
An ordered array of columns. Each column contains the dimension and Metric values for a single row, as defined in the report input. Use this to display a single row within your table.
An ordered array of display names for your dimensions and Metrics, as defined in the report input. Use this to display your table’s header.
An ordered array of rows. Each row contains dimension and Metric values, as defined in the report input. Use these to display the rows of your table.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Propeller used for this query.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max records per second: 5,000,000
P1_SMALL: Max records per second: 25,000,000
P1_MEDIUM: Max records per second: 100,000,000
P1_LARGE: Max records per second: 250,000,000
P1_X_LARGE: Max records per second: 500,000,000
The Query status.
The Query status.
COMPLETED: The Query was completed successfully.
ERROR: The Query experienced an error.
TIMED_OUT: The Query timed out.
The Query type.
The Query type.
METRIC: Indicates a Metric Query.
STATS: Indicates a Dimension Stats Query.
REPORT: Indicates a Report Query.
RECORDS: Indicates a Record Table Query.
RECORDS_BY_UNIQUE_ID: Indicates records queried by unique ID.
SQL: Indicates a SQL Query.
TOP_VALUES: Indicates a Top Values Query.
The Query subtype.
The Query subtype.
COUNTER: Indicates a Metric counter Query.
TIME_SERIES: Indicates a Metric time series Query.
LEADERBOARD: Indicates a Metric leaderboard Query.
The SQL the query executed.
MetricReportEdge
The Metric Report edge object.
The edge’s node.
The Metric Report node object.
This type represents a single row of a report.
An ordered array of display names for your dimensions and Metrics, as defined in the report input. Use this to display your table’s header.
An ordered array of columns. Each column contains the dimension and Metric values for a single row, as defined in the report input. Use this to display a single row within your table.
The edge’s cursor.
MetricReportNode
The Metric Report node object.
This type represents a single row of a report.
An ordered array of display names for your dimensions and Metrics, as defined in the report input. Use this to display your table’s header.
An ordered array of columns. Each column contains the dimension and Metric values for a single row, as defined in the report input. Use this to display a single row within your table.
Metric
The Metric object.
A Metric is a business indicator measured over time.
The Metric’s unique identifier.
The Metric’s unique name.
The Metric’s description.
The Metric’s Account.
The Account object.
The Account’s unique identifier.
The Metric’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Metric’s creation date and time in UTC.
The Metric’s last modification date and time in UTC.
The Metric’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Metric’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Metric’s Dimensions. These Dimensions are available to Query Filters.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
Statistics about a particular Dimension.
An array of unique values for the Dimension, up to 1,000. Empty if the Dimension contains more than 1,000 unique values. Fetching unique values incurs query costs.
Arguments
The minimum value of the Dimension.
The maximum value of the Dimension.
The average value of the Dimension. Empty for non-numeric Dimensions.
The Metric’s timestamp, if any. This is the same as its Data Pool’s timestamp, if any.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
Statistics about a particular Dimension.
An array of unique values for the Dimension, up to 1,000. Empty if the Dimension contains more than 1,000 unique values. Fetching unique values incurs query costs.
Arguments
The minimum value of the Dimension.
The maximum value of the Dimension.
The average value of the Dimension. Empty for non-numeric Dimensions.
List the Boosters associated with the Metric.
Arguments
The Metric’s type. The different Metric types determine how the values are calculated.
The available Metric types.
COUNT: Counts the number of records that match the Metric Filters. For time series, it will count the values for each time granularity.
SUM: Sums the values of the specified column for every record that matches the Metric Filters. For time series, it will sum the values for each time granularity.
COUNT_DISTINCT: Counts the number of distinct values in the specified column for every record that matches the Metric Filters. For time series, it will count the distinct values for each time granularity.
AVERAGE: Averages the values of the specified column for every record that matches the Metric Filters. For time series, it will average the values for each time granularity.
MIN: Selects the minimum value of the specified column for every record that matches the Metric Filters. For time series, it will select the minimum value for each time granularity.
MAX: Selects the maximum value of the specified column for every record that matches the Metric Filters. For time series, it will select the maximum value for each time granularity.
CUSTOM: Aggregates values based on the provided custom expression.
The settings for the Metric. The settings are specific to the Metric’s type.
A Metric’s settings, depending on its type.
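Because settings varies with the Metric’s type, clients typically select it with inline fragments. A hedged sketch, assuming a metricByName query and the field names filterSql and measure:

query {
  metricByName(uniqueName: "revenue") {
    type
    settings {
      ... on CountMetricSettings {
        filterSql
      }
      ... on SumMetricSettings {
        filterSql
        # Assumed field name for "the Dimension to be summed".
        measure {
          columnName
        }
      }
    }
  }
}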
Settings for Count Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead
See Filter
Settings for Sum Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead
See Filter
Settings for Count Distinct Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
The Dimension where the count distinct operation is going to be performed.
See Dimension
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead
See Filter
Settings for Average Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead
See Filter
Settings for Min Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead
See Filter
Settings for Max Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead
See Filter
Settings for Custom Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
The expression that defines the aggregation function for this Metric.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead
See Filter
The Metric’s measure.
deprecated: Access this from the Metric’s settings object instead
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
Statistics about a particular Dimension.
An array of unique values for the Dimension, up to 1,000. Empty if the Dimension contains more than 1,000 unique values. Fetching unique values incurs query costs.
Arguments
The minimum value of the Dimension.
The maximum value of the Dimension.
The average value of the Dimension. Empty for non-numeric Dimensions.
Query the Metric in counter format. Returns the Metric’s value for the given time range and filters.
deprecated: Use the top-level counter query instead
Arguments
The fields for querying a Metric in counter format.
A Metric’s counter query returns a single value over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
The ID of a pre-configured Metric.
The name of a pre-configured Metric.
An ad hoc Custom Metric.
An ad hoc Count Metric.
An ad hoc Sum Metric.
An ad hoc Average Metric.
An ad hoc Min Metric.
An ad hoc Max Metric.
An ad hoc Count Distinct Metric.
The time range for calculating the counter.
The fields required to specify the time range for a time series, counter, or leaderboard Metric query.
If no relative or absolute time ranges are provided, Propel defaults to an absolute time range beginning with the earliest record in the Metric’s Data Pool and ending with the latest record.
If both relative and absolute time ranges are provided, the relative time range will take precedence.
If a LAST_N relative time period is selected, an n ≥ 1 must be provided. If no n is provided or n < 1, a BAD_REQUEST error will be returned.
The timestamp field to use when querying. Defaults to the timestamp configured on the Data Pool or Metric, if any. Set this to filter on an alternative timestamp field.
The relative time period.
The number of time units for the LAST_N relative periods.
The optional start timestamp (inclusive). Defaults to the timestamp of the earliest record in the Data Pool.
The optional end timestamp (exclusive). Defaults to the timestamp of the latest record in the Data Pool.
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The Query Filters to apply before retrieving the counter data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query.
Required if metricName is not specified.
deprecated: Use metric instead
The name of the Metric to query.
Required if metricId is not specified.
deprecated: Use metric instead
The Query Filters to apply before retrieving the counter data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The counter response object. It contains a single Metric value for the given time range and Query Filters.
The value of the counter.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Query status.
See QueryStatus
The Query subtype.
See QuerySubtype
The SQL the query executed.
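Putting this together, a hedged sketch of the top-level counter query with a relative time range and SQL filters. The input argument shape and the LAST_N_DAYS value are assumptions; metricName, timeRange, filterSql, and value follow the descriptions above:

query {
  counter(input: {
    metricName: "revenue"
    # Assumed relative-period value; n is required for LAST_N periods.
    timeRange: { relative: LAST_N_DAYS, n: 30 }
    filterSql: "status = 'confirmed'"
  }) {
    value
  }
}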
Query the Metric in time series format. Returns arrays of timestamps and the Metric’s values for the given time range and filters.
deprecated: Use the top-level timeSeries query instead
Arguments
The fields for querying a Metric in time series format.
A Metric’s time series query returns the values over a given time range, aggregated by a given time granularity (for example, day, month, or year).
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
The ID of a pre-configured Metric.
The name of a pre-configured Metric.
An ad hoc Custom Metric.
An ad hoc Count Metric.
An ad hoc Sum Metric.
An ad hoc Average Metric.
An ad hoc Min Metric.
An ad hoc Max Metric.
An ad hoc Count Distinct Metric.
The time range for calculating the time series.
The fields required to specify the time range for a time series, counter, or leaderboard Metric query.
If no relative or absolute time ranges are provided, Propel defaults to an absolute time range beginning with the earliest record in the Metric’s Data Pool and ending with the latest record.
If both relative and absolute time ranges are provided, the relative time range will take precedence.
If a LAST_N relative time period is selected, an n ≥ 1 must be provided. If no n is provided or n < 1, a BAD_REQUEST error will be returned.
The timestamp field to use when querying. Defaults to the timestamp configured on the Data Pool or Metric, if any. Set this to filter on an alternative timestamp field.
The relative time period.
The number of time units for the LAST_N relative periods.
The optional start timestamp (inclusive). Defaults to the timestamp of the earliest record in the Data Pool.
The optional end timestamp (exclusive). Defaults to the timestamp of the latest record in the Data Pool.
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The time granularity (hour, day, month, etc.) to aggregate the Metric values by.
The available time series granularities. Granularities define the unit of time to aggregate the Metric data for a time series query.
For example, if the granularity is set to DAY, then the time series query will return a label and a value for each day.
If there are no records for a given time series granularity, Propel will return the label and a value of “0” so that the time series can be properly visualized.
MINUTE: Aggregates values by minute intervals.
FIVE_MINUTES: Aggregates values by 5-minute intervals.
TEN_MINUTES: Aggregates values by 10-minute intervals.
FIFTEEN_MINUTES: Aggregates values by 15-minute intervals.
HOUR: Aggregates values by hourly intervals.
DAY: Aggregates values by daily intervals.
WEEK: Aggregates values by weekly intervals.
MONTH: Aggregates values by monthly intervals.
YEAR: Aggregates values by yearly intervals.
The Query Filters to apply before retrieving the time series data, in the form of SQL. If no Query Filters are provided, all data is included.
Columns to group by.
The ID of the Metric to query.
Required if metricName is not specified.
deprecated: Use metric instead
The name of the Metric to query.
Required if metricId is not specified.
deprecated: Use metric instead
The Query Filters to apply before retrieving the time series data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The time series response object. It contains an array of time series labels and an array of Metric values for the given time range and Query Filters.
The time series labels.
The time series values.
The time series values for each group in groupBy, if specified.
The time series response object for a group specified in groupBy. It contains an array of time series labels and an array of Metric values for a particular group.
The time series group’s columns.
The time series group’s labels.
The time series group’s values.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Query status.
See QueryStatus
The Query subtype.
See QuerySubtype
The SQL the query executed.
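For example, a sketch of the top-level timeSeries query with an absolute time range and daily granularity. The start and stop field names are assumptions based on the inclusive/exclusive timestamp descriptions above:

query {
  timeSeries(input: {
    metricName: "revenue"
    # start is inclusive, stop is exclusive.
    timeRange: { start: "2024-01-01T00:00:00Z", stop: "2024-02-01T00:00:00Z" }
    granularity: DAY
  }) {
    labels
    values
  }
}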
Query the Metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the Metric’s corresponding values for the given time range and filters.
deprecated: Use the top-level leaderboard query instead
Arguments
The fields for querying a Metric in leaderboard format.
A Metric’s leaderboard query returns an ordered table of Dimension and Metric values over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
The ID of a pre-configured Metric.
The name of a pre-configured Metric.
An ad hoc Custom Metric.
An ad hoc Count Metric.
An ad hoc Sum Metric.
An ad hoc Average Metric.
An ad hoc Min Metric.
An ad hoc Max Metric.
An ad hoc Count Distinct Metric.
The time range for calculating the leaderboard.
The fields required to specify the time range for a time series, counter, or leaderboard Metric query.
If no relative or absolute time ranges are provided, Propel defaults to an absolute time range beginning with the earliest record in the Metric’s Data Pool and ending with the latest record.
If both relative and absolute time ranges are provided, the relative time range will take precedence.
If a LAST_N relative time period is selected, an n ≥ 1 must be provided. If no n is provided or n < 1, a BAD_REQUEST error will be returned.
The timestamp field to use when querying. Defaults to the timestamp configured on the Data Pool or Metric, if any. Set this to filter on an alternative timestamp field.
The relative time period.
The number of time units for the LAST_N relative periods.
The optional start timestamp (inclusive). Defaults to the timestamp of the earliest record in the Data Pool.
The optional end timestamp (exclusive). Defaults to the timestamp of the latest record in the Data Pool.
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
One or many Dimensions to group the Metric values by. Typically, Dimensions in a leaderboard are what you want to compare and rank.
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
The sort order of the rows. It can be ascending (ASC) or descending (DESC) order. Defaults to descending (DESC) order when not provided.
The available sort orders.
ASC: Sort in ascending order.
DESC: Sort in descending order.
The number of rows to be returned. It can be a number between 1 and 1,000.
The Query Filters to apply before retrieving the leaderboard data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query.
Required if metricName is not specified.
deprecated: Use metric instead
The name of the Metric to query.
Required if metricId is not specified.
deprecated: Use metric instead
The Query Filters to apply before retrieving the leaderboard data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The leaderboard response object. It contains an array of headers and a table (array of rows) with the selected Dimensions and corresponding Metric values for the given time range and Query Filters.
The table headers. It contains the Dimension and Metric names.
An ordered array of rows. Each row contains the Dimension values and the corresponding Metric value. A Dimension value can be empty. A Metric value will never be empty.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Query status.
See QueryStatus
The Query subtype.
See QuerySubtype
The SQL the query executed.
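Similarly, a hedged sketch of the top-level leaderboard query. The dimensions, sort, and rowLimit argument names are assumptions consistent with the input descriptions above:

query {
  leaderboard(input: {
    metricName: "revenue"
    timeRange: { relative: LAST_N_DAYS, n: 30 }
    # Rank the top 10 countries by the Metric's value, descending.
    dimensions: [{ columnName: "country" }]
    sort: DESC
    rowLimit: 10
  }) {
    headers
    rows
  }
}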
List the Policies associated with the Metric.
deprecated: Use Data Pool Access Policies instead
Arguments
See PolicyConnection
Whether or not access control is enabled for the Metric.
deprecated: Use Data Pool Access Policies instead
CountMetricSettings
Settings for Count Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
The available Filter operators.
EQUALS: Selects values that are equal to the specified value.
NOT_EQUALS: Selects values that are not equal to the specified value.
GREATER_THAN: Selects values that are greater than the specified value.
GREATER_THAN_OR_EQUAL_TO: Selects values that are greater than or equal to the specified value.
LESS_THAN: Selects values that are less than the specified value.
LESS_THAN_OR_EQUAL_TO: Selects values that are less than or equal to the specified value.
IS_NULL: Selects values that are null. This operator does not accept a value.
IS_NOT_NULL: Selects values that are not null. This operator does not accept a value.
LIKE: Selects values that match the specified pattern.
NOT_LIKE: Selects values that do not match the specified pattern.
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
SumMetricSettings
Settings for Sum Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
The Dimension to be summed.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
Statistics about a particular Dimension.
An array of unique values for the Dimension, up to 1,000. Empty if the Dimension contains more than 1,000 unique values. Fetching unique values incurs query costs.
Arguments
The minimum value of the Dimension.
The maximum value of the Dimension.
The average value of the Dimension. Empty for non-numeric Dimensions.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
The available Filter operators.
EQUALS: Selects values that are equal to the specified value.
NOT_EQUALS: Selects values that are not equal to the specified value.
GREATER_THAN: Selects values that are greater than the specified value.
GREATER_THAN_OR_EQUAL_TO: Selects values that are greater than or equal to the specified value.
LESS_THAN: Selects values that are less than the specified value.
LESS_THAN_OR_EQUAL_TO: Selects values that are less than or equal to the specified value.
IS_NULL: Selects values that are null. This operator does not accept a value.
IS_NOT_NULL: Selects values that are not null. This operator does not accept a value.
LIKE: Selects values that match the specified pattern.
NOT_LIKE: Selects values that do not match the specified pattern.
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
CountDistinctMetricSettings
Settings for Count Distinct Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
The Dimension where the count distinct operation is going to be performed.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
Statistics about a particular Dimension.
An array of unique values for the Dimension, up to 1,000. Empty if the Dimension contains more than 1,000 unique values. Fetching unique values incurs query costs.
Arguments
The minimum value of the Dimension.
The maximum value of the Dimension.
The average value of the Dimension. Empty for non-numeric Dimensions.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
The available Filter operators.
EQUALS: Selects values that are equal to the specified value.
NOT_EQUALS: Selects values that are not equal to the specified value.
GREATER_THAN: Selects values that are greater than the specified value.
GREATER_THAN_OR_EQUAL_TO: Selects values that are greater than or equal to the specified value.
LESS_THAN: Selects values that are less than the specified value.
LESS_THAN_OR_EQUAL_TO: Selects values that are less than or equal to the specified value.
IS_NULL: Selects values that are null. This operator does not accept a value.
IS_NOT_NULL: Selects values that are not null. This operator does not accept a value.
LIKE: Selects values that match the specified pattern.
NOT_LIKE: Selects values that do not match the specified pattern.
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
AverageMetricSettings
Settings for Average Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
The Dimension to be averaged.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
Statistics about a particular Dimension.
An array of unique values for the Dimension, up to 1,000. Empty if the Dimension contains more than 1,000 unique values. Fetching unique values incurs query costs.
Arguments
The minimum value of the Dimension.
The maximum value of the Dimension.
The average value of the Dimension. Empty for non-numeric Dimensions.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
The available Filter operators.
EQUALS: Selects values that are equal to the specified value.
NOT_EQUALS: Selects values that are not equal to the specified value.
GREATER_THAN: Selects values that are greater than the specified value.
GREATER_THAN_OR_EQUAL_TO: Selects values that are greater than or equal to the specified value.
LESS_THAN: Selects values that are less than the specified value.
LESS_THAN_OR_EQUAL_TO: Selects values that are less than or equal to the specified value.
IS_NULL: Selects values that are null. This operator does not accept a value.
IS_NOT_NULL: Selects values that are not null. This operator does not accept a value.
LIKE: Selects values that match the specified pattern.
NOT_LIKE: Selects values that do not match the specified pattern.
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
MinMetricSettings
Settings for Min Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
The Dimension to select the minimum from.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
Statistics about a particular Dimension.
An array of unique values for the Dimension, up to 1,000. Empty if the Dimension contains more than 1,000 unique values. Fetching unique values incurs query costs.
Arguments
The minimum value of the Dimension.
The maximum value of the Dimension.
The average value of the Dimension. Empty for non-numeric Dimensions.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using “and” and “or”. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
"column": "value",
"operator": "GREATER_THAN",
"value": "0",
"and": [{
"column": "value",
"operator": "LESS_THAN_OR_EQUAL_TO",
"value": "0"
}],
"or": [{
"column": "status",
"operator": "EQUALS",
"value": "confirmed"
}]
}
Note that “and” takes precedence over “or”.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
The available Filter operators.
EQUALS: Selects values that are equal to the specified value.
NOT_EQUALS: Selects values that are not equal to the specified value.
GREATER_THAN: Selects values that are greater than the specified value.
GREATER_THAN_OR_EQUAL_TO: Selects values that are greater than or equal to the specified value.
LESS_THAN: Selects values that are less than the specified value.
LESS_THAN_OR_EQUAL_TO: Selects values that are less than or equal to the specified value.
IS_NULL: Selects values that are null. This operator does not accept a value.
IS_NOT_NULL: Selects values that are not null. This operator does not accept a value.
LIKE: Selects values that match the specified pattern.
NOT_LIKE: Selects values that do not match the specified pattern.
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
MaxMetricSettings
Settings for Max Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
The Dimension to select the maximum from.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
Statistics about a particular Dimension.
An array of unique values for the Dimension, up to 1,000. Empty if the Dimension contains more than 1,000 unique values. Fetching unique values incurs query costs.
Arguments
The minimum value of the Dimension.
The maximum value of the Dimension.
The average value of the Dimension. Empty for non-numeric Dimensions.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using “and” and “or”. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
"column": "value",
"operator": "GREATER_THAN",
"value": "0",
"and": [{
"column": "value",
"operator": "LESS_THAN_OR_EQUAL_TO",
"value": "0"
}],
"or": [{
"column": "status",
"operator": "EQUALS",
"value": "confirmed"
}]
}
Note that “and” takes precedence over “or”.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
The available Filter operators.
EQUALS: Selects values that are equal to the specified value.
NOT_EQUALS: Selects values that are not equal to the specified value.
GREATER_THAN: Selects values that are greater than the specified value.
GREATER_THAN_OR_EQUAL_TO: Selects values that are greater than or equal to the specified value.
LESS_THAN: Selects values that are less than the specified value.
LESS_THAN_OR_EQUAL_TO: Selects values that are less than or equal to the specified value.
IS_NULL: Selects values that are null. This operator does not accept a value.
IS_NOT_NULL: Selects values that are not null. This operator does not accept a value.
LIKE: Selects values that match the specified pattern.
NOT_LIKE: Selects values that do not match the specified pattern.
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
CustomMetricSettings
Settings for Custom Metrics.
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
You can provide the filters in the form of SQL.
The expression that defines the aggregation function for this Metric.
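For example, a Custom Metric could use the expression SUM(price * quantity) to compute revenue (price and quantity are hypothetical columns). An expression can be checked with the validateExpression query described later in this reference before creating the Metric.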
Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Metric Filters are present, all records will be included. To filter at query time, add Dimensions and use the filters property on the timeSeriesInput, counterInput, or leaderboardInput objects. There is no need to add filters to be able to filter at query time.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using “and” and “or”. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
"column": "value",
"operator": "GREATER_THAN",
"value": "0",
"and": [{
"column": "value",
"operator": "LESS_THAN_OR_EQUAL_TO",
"value": "0"
}],
"or": [{
"column": "status",
"operator": "EQUALS",
"value": "confirmed"
}]
}
Note that “and” takes precedence over “or”.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
The available Filter operators.
EQUALS: Selects values that are equal to the specified value.
NOT_EQUALS: Selects values that are not equal to the specified value.
GREATER_THAN: Selects values that are greater than the specified value.
GREATER_THAN_OR_EQUAL_TO: Selects values that are greater than or equal to the specified value.
LESS_THAN: Selects values that are less than the specified value.
LESS_THAN_OR_EQUAL_TO: Selects values that are less than or equal to the specified value.
IS_NULL: Selects values that are null. This operator does not accept a value.
IS_NOT_NULL: Selects values that are not null. This operator does not accept a value.
LIKE: Selects values that match the specified pattern.
NOT_LIKE: Selects values that do not match the specified pattern.
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
MetricResponse
The result of a mutation which creates or modifies a Metric.
The Metric which was created or modified.
The Metric object.
A Metric is a business indicator measured over time.
The Metric’s unique identifier.
The Metric’s unique name.
The Metric’s description.
The Metric’s Account.
The Account object.
The Account’s unique identifier.
The Metric’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Metric’s creation date and time in UTC.
The Metric’s last modification date and time in UTC.
The Metric’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Metric’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Metric’s Dimensions. These Dimensions are available to Query Filters.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
The Metric’s timestamp, if any. This is the same as its Data Pool’s timestamp, if any.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
List the Boosters associated with the Metric.
Arguments
The Metric’s type. The different Metric types determine how the values are calculated.
The available Metric types.
COUNT: Counts the number of records that match the Metric Filters. For time series, it will count the values for each time granularity.
SUM: Sums the values of the specified column for every record that matches the Metric Filters. For time series, it will sum the values for each time granularity.
COUNT_DISTINCT: Counts the number of distinct values in the specified column for every record that matches the Metric Filters. For time series, it will count the distinct values for each time granularity.
AVERAGE: Averages the values of the specified column for every record that matches the Metric Filters. For time series, it will average the values for each time granularity.
MIN: Selects the minimum value of the specified column for every record that matches the Metric Filters. For time series, it will select the minimum value for each time granularity.
MAX: Selects the maximum value of the specified column for every record that matches the Metric Filters. For time series, it will select the maximum value for each time granularity.
CUSTOM: Aggregates values based on the provided custom expression.
The settings for the Metric. The settings are specific to the Metric’s type.
A Metric’s settings, depending on its type.
The Metric’s measure. Access this from the Metric’s settings object instead.
deprecated: Access this from the Metric’s settings object instead.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
Query the Metric in counter format. Returns the Metric’s value for the given time range and filters.
deprecated: Use the top-level counter query instead.
Arguments
The fields for querying a Metric in counter format.
A Metric’s counter query returns a single value over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
See MetricInput
The time range for calculating the counter.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The Query Filters to apply before retrieving the counter data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead.
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead.
The Query Filters to apply before retrieving the counter data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead.
See FilterInput
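A minimal sketch of the top-level counter query that this reference recommends over the deprecated Metric-level query. The Metric name, time range shape, and filter are hypothetical placeholders, and the exact input and response field names are assumptions based on the fields described above:

query CounterExample {
  counter(
    input: {
      metricName: "revenue"
      timeRange: { relative: LAST_N_DAYS, n: 30 }
      timeZone: "America/Los_Angeles"
      filterSql: "status = 'confirmed'"
    }
  ) {
    value
  }
}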
Query the Metric in time series format. Returns arrays of timestamps and the Metric’s values for the given time range and filters.
deprecated: Use the top-level timeSeries query instead.
Arguments
The fields for querying a Metric in time series format.
A Metric’s time series query returns the values over a given time range aggregated by a given time granularity: day, month, or year, for example.
The Metric to query. It can be a pre-created one, or it can be inlined here.
See MetricInput
The time range for calculating the time series.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The time granularity (hour, day, month, etc.) to aggregate the Metric values by.
The Query Filters to apply before retrieving the time series data, in the form of SQL. If no Query Filters are provided, all data is included.
Columns to group by.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead.
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead.
The Query Filters to apply before retrieving the time series data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead.
See FilterInput
The time series response object. It contains an array of time series labels and an array of Metric values for the given time range and Query Filters.
The time series labels.
The time series values.
The time series values for each group in groupBy, if specified.
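A minimal sketch of the corresponding top-level timeSeries query, aggregating daily values and grouping by a hypothetical country column. The Metric name, time range shape, and the groups selection are assumptions based on the fields described above:

query TimeSeriesExample {
  timeSeries(
    input: {
      metricName: "revenue"
      timeRange: { relative: LAST_N_DAYS, n: 7 }
      granularity: DAY
      groupBy: ["country"]
    }
  ) {
    labels
    values
    groups {
      labels
      values
    }
  }
}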
Query the Metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the Metric’s corresponding values for the given time range and filters.
deprecated: Use the top-level leaderboard query instead.
Arguments
The fields for querying a Metric in leaderboard format.
A Metric’s leaderboard query returns an ordered table of Dimension and Metric values over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
See MetricInput
The time range for calculating the leaderboard.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
One or many Dimensions to group the Metric values by. Typically, Dimensions in a leaderboard are what you want to compare and rank.
See DimensionInput
The sort order of the rows. It can be ascending (ASC) or descending (DESC) order. Defaults to descending (DESC) order when not provided.
See Sort
The number of rows to be returned. It can be a number between 1 and 1,000.
The Query Filters to apply before retrieving the leaderboard data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead.
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead.
The Query Filters to apply before retrieving the leaderboard data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead.
See FilterInput
The leaderboard response object. It contains an array of headers and a table (array of rows) with the selected Dimensions and corresponding Metric values for the given time range and Query Filters.
The table headers. It contains the Dimension and Metric names.
An ordered array of rows. Each row contains the Dimension values and the corresponding Metric value. A Dimension value can be empty. A Metric value will never be empty.
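A minimal sketch of the corresponding top-level leaderboard query, ranking a hypothetical country Dimension by Metric value. The Metric name, Dimension, and input field names such as rowLimit are assumptions based on the fields described above:

query LeaderboardExample {
  leaderboard(
    input: {
      metricName: "revenue"
      timeRange: { relative: LAST_N_DAYS, n: 30 }
      dimensions: [{ columnName: "country" }]
      sort: DESC
      rowLimit: 10
    }
  ) {
    headers
    rows
  }
}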
List the Policies associated with the Metric.
deprecated: Use Data Pool Access Policies instead.
Arguments
See PolicyConnection
Whether or not access control is enabled for the Metric.
deprecated: Use Data Pool Access Policies instead.
ValidateExpressionResult
Response returned by the validateExpression query for validating expressions in Custom Metrics.
Returns whether the expression is valid, with a reason explaining why if it is not.
True if the expression is valid, false otherwise.
The reason why the expression is not valid, if it isn’t; null otherwise.
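A minimal sketch of the validateExpression query. The input and response field names, the Data Pool ID, and the expression are assumptions:

query ValidateExpressionExample {
  validateExpression(
    input: {
      dataPool: "DPO00000000000000000000000000"  # hypothetical Data Pool ID
      expression: "SUM(price * quantity)"        # hypothetical columns
    }
  ) {
    valid
    reason
  }
}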
TimeSeriesResponse
The time series response object. It contains an array of time series labels and an array of Metric values for the given time range and Query Filters.
The time series labels.
The time series values.
The time series values for each group in groupBy, if specified.
The time series response object for a group specified in groupBy. It contains an array of time series labels and an array of Metric values for a particular group.
The time series group’s columns.
The time series group’s labels.
The time series group’s values.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Propeller used for this query.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max 5,000,000 records per second.
P1_SMALL: Max 25,000,000 records per second.
P1_MEDIUM: Max 100,000,000 records per second.
P1_LARGE: Max 250,000,000 records per second.
P1_X_LARGE: Max 500,000,000 records per second.
The Query status.
The Query status.
COMPLETED: The Query was completed successfully.
ERROR: The Query experienced an error.
TIMED_OUT: The Query timed out.
The Query type.
The Query type.
METRIC: Indicates a Metric Query.
STATS: Indicates a Dimension Stats Query.
REPORT: Indicates a Report Query.
RECORDS: Indicates a Record Table Query.
RECORDS_BY_UNIQUE_ID: Indicates records queried by unique ID.
SQL: Indicates a SQL Query.
TOP_VALUES: Indicates a Top Values Query.
The Query subtype.
The Query subtype.
COUNTER: Indicates a Metric counter Query.
TIME_SERIES: Indicates a Metric time series Query.
LEADERBOARD: Indicates a Metric leaderboard Query.
The SQL the query executed.
TimeSeriesResponseGroup
The time series response object for a group specified in groupBy. It contains an array of time series labels and an array of Metric values for a particular group.
The time series group’s columns.
The time series group’s labels.
The time series group’s values.
CounterResponse
The counter response object. It contains a single Metric value for the given time range and Query Filters.
The value of the counter.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Propeller used for this query.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max 5,000,000 records per second.
P1_SMALL: Max 25,000,000 records per second.
P1_MEDIUM: Max 100,000,000 records per second.
P1_LARGE: Max 250,000,000 records per second.
P1_X_LARGE: Max 500,000,000 records per second.
The Query status.
The Query status.
COMPLETED: The Query was completed successfully.
ERROR: The Query experienced an error.
TIMED_OUT: The Query timed out.
The Query type.
The Query type.
METRIC: Indicates a Metric Query.
STATS: Indicates a Dimension Stats Query.
REPORT: Indicates a Report Query.
RECORDS: Indicates a Record Table Query.
RECORDS_BY_UNIQUE_ID: Indicates records queried by unique ID.
SQL: Indicates a SQL Query.
TOP_VALUES: Indicates a Top Values Query.
The Query subtype.
The Query subtype.
COUNTER: Indicates a Metric counter Query.
TIME_SERIES: Indicates a Metric time series Query.
LEADERBOARD: Indicates a Metric leaderboard Query.
The SQL the query executed.
LeaderboardResponse
The leaderboard response object. It contains an array of headers and a table (array of rows) with the selected Dimensions and corresponding Metric values for the given time range and Query Filters.
The table headers. It contains the Dimension and Metric names.
An ordered array of rows. Each row contains the Dimension values and the corresponding Metric value. A Dimension value can be empty. A Metric value will never be empty.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Propeller used for this query.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max 5,000,000 records per second.
P1_SMALL: Max 25,000,000 records per second.
P1_MEDIUM: Max 100,000,000 records per second.
P1_LARGE: Max 250,000,000 records per second.
P1_X_LARGE: Max 500,000,000 records per second.
The Query status.
The Query status.
COMPLETED: The Query was completed successfully.
ERROR: The Query experienced an error.
TIMED_OUT: The Query timed out.
The Query type.
The Query type.
METRIC: Indicates a Metric Query.
STATS: Indicates a Dimension Stats Query.
REPORT: Indicates a Report Query.
RECORDS: Indicates a Record Table Query.
RECORDS_BY_UNIQUE_ID: Indicates records queried by unique ID.
SQL: Indicates a SQL Query.
TOP_VALUES: Indicates a Top Values Query.
The Query subtype.
The Query subtype.
COUNTER: Indicates a Metric counter Query.
TIME_SERIES: Indicates a Metric time series Query.
LEADERBOARD: Indicates a Metric leaderboard Query.
The SQL the query executed.
QueryInfo
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Propeller used for this query.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max 5,000,000 records per second.
P1_SMALL: Max 25,000,000 records per second.
P1_MEDIUM: Max 100,000,000 records per second.
P1_LARGE: Max 250,000,000 records per second.
P1_X_LARGE: Max 500,000,000 records per second.
The Query status.
The Query status.
COMPLETED: The Query was completed successfully.
ERROR: The Query experienced an error.
TIMED_OUT: The Query timed out.
The Query type.
The Query type.
METRIC: Indicates a Metric Query.
STATS: Indicates a Dimension Stats Query.
REPORT: Indicates a Report Query.
RECORDS: Indicates a Record Table Query.
RECORDS_BY_UNIQUE_ID: Indicates records queried by unique ID.
SQL: Indicates a SQL Query.
TOP_VALUES: Indicates a Top Values Query.
The Query subtype.
The Query subtype.
COUNTER: Indicates a Metric counter Query.
TIME_SERIES: Indicates a Metric time series Query.
LEADERBOARD: Indicates a Metric leaderboard Query.
The SQL the query executed.
Booster
Boosters allow you to optimize Metric Queries for a subset of commonly used Dimensions. A Metric can have one or many Boosters to optimize for the different Query patterns.
Boosters can be understood as an aggregating index. The index is formed from left to right as follows:
- The Data Pool’s Tenant ID column (if present)
- Metric Filter columns (if present)
- Query Filter Dimensions (see dimensions)
- The Data Pool’s timestamp column
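For example, on a Data Pool with a hypothetical customer_id Tenant ID column, a Metric Filter on status, and a Booster over the country Dimension, the aggregating index would be ordered as (customer_id, status, country, timestamp).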
The Booster’s unique identifier.
The Booster’s Account.
The Account object.
The Account’s unique identifier.
The Booster’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Booster’s creation date and time in UTC.
The Booster’s last modification date and time in UTC.
The Booster’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Booster’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The status of the Booster (once LIVE it will be available for speeding up Metric queries).
The Booster status.
CREATED: The Booster has been created. Propel will start optimizing the Data Pool soon.
OPTIMIZING: Propel is setting up the Booster and optimizing the Data Pool.
LIVE: The Booster is now live and available to speed up Metric queries.
FAILED: Propel failed to set up the Booster. Please write to support. Alternatively, you can delete the Booster and try again.
DELETING: Propel is deleting the Booster and all of its associated data.
When the Booster is OPTIMIZING, this represents its progress as a number from 0 to 1. In all other states, progress is null.
Dimensions included in the Booster.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
Statistics about a particular Dimension.
An array of unique values for the Dimension, up to 1,000. Empty if the Dimension contains more than 1,000 unique values. Fetching unique values incurs query costs.
Arguments
The minimum value of the Dimension.
The maximum value of the Dimension.
The average value of the Dimension. Empty for non-numeric Dimensions.
The number of records in the Booster.
The amount of storage in terabytes used by the Booster.
BoosterResponse
The result of a mutation which creates or modifies a Booster.
The Booster which was created or modified.
Boosters allow you to optimize Metric Queries for a subset of commonly used Dimensions. A Metric can have one or many Boosters to optimize for the different Query patterns.
Boosters can be understood as an aggregating index. The index is formed from left to right as follows:
- The Data Pool’s Tenant ID column (if present)
- Metric Filter columns (if present)
- Query Filter Dimensions (see dimensions)
- The Data Pool’s timestamp column
The Booster’s unique identifier.
The Booster’s Account.
The Account object.
The Account’s unique identifier.
The Booster’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Booster’s creation date and time in UTC.
The Booster’s last modification date and time in UTC.
The Booster’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Booster’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The status of the Booster (once LIVE it will be available for speeding up Metric queries).
The Booster status.
CREATED: The Booster has been created. Propel will start optimizing the Data Pool soon.
OPTIMIZING: Propel is setting up the Booster and optimizing the Data Pool.
LIVE: The Booster is now live and available to speed up Metric queries.
FAILED: Propel failed to set up the Booster. Please write to support. Alternatively, you can delete the Booster and try again.
DELETING: Propel is deleting the Booster and all of its associated data.
When the Booster is OPTIMIZING, this represents its progress as a number from 0 to 1. In all other states, progress is null.
Dimensions included in the Booster.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed.
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats.
The number of records in the Booster.
The amount of storage in terabytes used by the Booster.
Policy
The Policy type. It governs an Application’s access to a Metric’s data.
The Policy’s unique identifier.
The Policy’s Account.
The Account object.
The Account’s unique identifier.
The Policy’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Policy’s creation date and time in UTC.
The Policy’s last modification date and time in UTC.
The Policy’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Policy’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The type of Policy.
The types of Policies that can be applied to a Metric.
ALL_ACCESS: Grants access to all Metric data.
TENANT_ACCESS: Grants access to a specified tenant’s Metric data.
The Application that is granted access. See Application
PolicyResponse
The result of a mutation which creates or modifies a Policy.
The Policy which was created or modified.
The Policy type. It governs an Application’s access to a Metric’s data.
The Policy’s unique identifier.
The Policy’s Account.
The Account object.
The Account’s unique identifier.
The Policy’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Policy’s creation date and time in UTC.
The Policy’s last modification date and time in UTC.
The Policy’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Policy’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The type of Policy.
The types of Policies that can be applied to a Metric.
ALL_ACCESS: Grants access to all Metric data.
TENANT_ACCESS: Grants access to a specified tenant’s Metric data.
The Application that is granted access. See Application
DeletionJob
Deletion Job scheduled for a specific Data Pool.
The Deletion Job represents the asynchronous process of deleting data given some filters inside a Data Pool. It tracks the deletion process until it is finished, showing the progress and the outcome when it is finished.
The Deletion Job’s ID.
The Deletion Job’s creation date and time in UTC.
Who created the Deletion Job.
The Deletion Job’s last modification date and time in UTC.
Who last modified the Deletion Job.
Account to which the Deletion Job belongs.
The Account object.
The Account’s unique identifier.
Environment to which the Deletion Job belongs.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool whose records will be deleted by the Deletion Job. See DataPool
The current Deletion Job’s status.
CREATED: The Job was created, but is not yet being executed.
IN_PROGRESS: The Job is executing.
SUCCEEDED: The Job succeeded.
FAILED: The Job failed. Check the error message.
The filters that will be used for deleting data, in the form of SQL. Data matching the filters will be deleted.
The current progress of the Deletion Job, from 0.0 to 1.0.
The time at which the Deletion Job started.
The time at which the Deletion Job succeeded.
The time at which the Deletion Job failed.
The list of filters that will be used for deleting data. Data matching the filters will be deleted.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using “and” and “or”. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
"column": "value",
"operator": "GREATER_THAN",
"value": "0",
"and": [{
"column": "value",
"operator": "LESS_THAN_OR_EQUAL_TO",
"value": "0"
}],
"or": [{
"column": "status",
"operator": "EQUALS",
"value": "confirmed"
}]
}
Note that “and” takes precedence over “or”.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
The available Filter operators.
EQUALS: Selects values that are equal to the specified value.
NOT_EQUALS: Selects values that are not equal to the specified value.
GREATER_THAN: Selects values that are greater than the specified value.
GREATER_THAN_OR_EQUAL_TO: Selects values that are greater than or equal to the specified value.
LESS_THAN: Selects values that are less than the specified value.
LESS_THAN_OR_EQUAL_TO: Selects values that are less than or equal to the specified value.
IS_NULL: Selects values that are null. This operator does not accept a value.
IS_NOT_NULL: Selects values that are not null. This operator does not accept a value.
LIKE: Selects values that match the specified pattern.
NOT_LIKE: Selects values that do not match the specified pattern.
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
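A minimal sketch of requesting a Deletion Job with filterSql. The mutation name, response selection, Data Pool ID, and filter columns are assumptions based on the fields described above:

mutation RequestDeleteExample {
  requestDelete(
    input: {
      dataPool: "DPO00000000000000000000000000"  # hypothetical Data Pool ID
      filterSql: "status = 'cancelled' AND created_at < '2023-01-01'"
    }
  ) {
    job {
      id
      status
      progress
    }
  }
}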
RequestDeleteResponse
The response returned by the Deletion Job.
The Deletion Job that was just created.
Deletion Job scheduled for a specific Data Pool.
The Deletion Job represents the asynchronous process of deleting data given some filters inside a Data Pool. It tracks the deletion process until it is finished, showing the progress and the outcome when it is finished.
The Deletion Job’s ID.
The Deletion Job’s creation date and time in UTC.
Who created the Deletion Job.
The Deletion Job’s last modification date and time in UTC.
Who last modified the Deletion Job.
Account to which the Deletion Job belongs.
The Account object.
The Account’s unique identifier.
Environment to which the Deletion Job belongs.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool whose records will be deleted by the Deletion Job. See DataPool
The current Deletion Job’s status.
CREATED: The Job was created, but is not yet being executed.
IN_PROGRESS: The Job is executing.
SUCCEEDED: The Job succeeded.
FAILED: The Job failed. Check the error message.
The filters that will be used for deleting data, in the form of SQL. Data matching the filters will be deleted.
The current progress of the Deletion Job, from 0.0 to 1.0.
The time at which the Deletion Job started.
The time at which the Deletion Job succeeded.
The time at which the Deletion Job failed.
The list of filters that will be used for deleting data. Data matching the filters will be deleted.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using “and” and “or”. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
"column": "value",
"operator": "GREATER_THAN",
"value": "0",
"and": [{
"column": "value",
"operator": "LESS_THAN_OR_EQUAL_TO",
"value": "0"
}],
"or": [{
"column": "status",
"operator": "EQUALS",
"value": "confirmed"
}]
}
Note that “and” takes precedence over “or”.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
DeletionJobResponse
The response returned by the Deletion Job.
The Deletion Job that was just created.
Deletion Job scheduled for a specific Data Pool.
The Deletion Job represents the asynchronous process of deleting data given some filters inside a Data Pool. It tracks the deletion process until it is finished, showing the progress and the outcome when it is finished.
The Deletion Job’s ID.
The Deletion Job’s creation date and time in UTC.
Who created the Deletion Job.
The Deletion Job’s last modification date and time in UTC.
Who last modified the Deletion Job.
Account to which the Deletion Job belongs.
The Account object.
The Account’s unique identifier.
Environment to which the Deletion Job belongs.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool whose records will be deleted by the Deletion Job. See DataPool
The current Deletion Job’s status.
CREATED: The Job was created, but is not yet being executed.
IN_PROGRESS: The Job is executing.
SUCCEEDED: The Job succeeded.
FAILED: The Job failed. Check the error message.
The filters that will be used for deleting data, in the form of SQL. Data matching the filters will be deleted.
The current progress of the Deletion Job, from 0.0 to 1.0.
The time at which the Deletion Job started.
The time at which the Deletion Job succeeded.
The time at which the Deletion Job failed.
The list of filters that will be used for deleting data. Data matching the filters will be deleted.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using “and” and “or”. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
"column": "value",
"operator": "GREATER_THAN",
"value": "0",
"and": [{
"column": "value",
"operator": "LESS_THAN_OR_EQUAL_TO",
"value": "0"
}],
"or": [{
"column": "status",
"operator": "EQUALS",
"value": "confirmed"
}]
}
Note that “and” takes precedence over “or”.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
AddColumnToDataPoolJob
AddColumnToDataPoolJob scheduled for a specific Data Pool.
The Add Column Job represents the asynchronous process of adding a column, given its name and type, to a Data Pool. It tracks the process of adding a column until it is finished, showing the progress and the outcome when it is finished.
The AddColumnToDataPoolJob’s ID.
The AddColumnToDataPoolJob’s creation date and time in UTC.
Who created the AddColumnToDataPoolJob.
The AddColumnToDataPoolJob’s last modification date and time in UTC.
Who modified the AddColumnToDataPoolJob last.
Account to which the AddColumnToDataPoolJob belongs.
The Account object.
The Account’s unique identifier.
Environment to which the AddColumnToDataPoolJob belongs.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The current AddColumnToDataPoolJob’s status.
CREATED: The Job was created, but is not yet being executed.
IN_PROGRESS: The Job is executing.
SUCCEEDED: The Job succeeded.
FAILED: The Job failed. Check the error message.
Name of the new column.
Type of the new column.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
The ClickHouse type of the new column when columnType is set to CLICKHOUSE.
JSON property to which the new column corresponds.
The current progress of the AddColumnToDataPool Job, from 0.0 to 1.0.
The time at which the AddColumnToDataPool Job started.
The time at which the AddColumnToDataPool Job succeeded.
The time at which the AddColumnToDataPool Job failed.
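A minimal sketch of creating an Add Column Job. The mutation name, input field names, and values are assumptions based on the fields described above:

mutation AddColumnExample {
  addColumnToDataPool(
    input: {
      dataPool: "DPO00000000000000000000000000"  # hypothetical Data Pool ID
      columnName: "discount"
      columnType: DOUBLE
      jsonProperty: "order.discount"             # hypothetical JSON property
    }
  ) {
    job {
      id
      status
      progress
    }
  }
}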
AddColumnToDataPoolJobResponse
The response returned by the Add Column Job.
The AddColumnToDataPool Job that was just created.
AddColumnToDataPoolJob scheduled for a specific Data Pool.
The Add Column Job represents the asynchronous process of adding a column, given its name and type, to a Data Pool. It tracks the process of adding a column until it is finished, showing the progress and the outcome when it is finished.
The AddColumnToDataPoolJob’s ID.
The AddColumnToDataPoolJob’s creation date and time in UTC.
Who created the AddColumnToDataPoolJob.
The AddColumnToDataPoolJob’s last modification date and time in UTC.
Who modified the AddColumnToDataPoolJob last.
Account to which the AddColumnToDataPoolJob belongs.
The Account object.
The Account’s unique identifier.
Environment to which the AddColumnToDataPoolJob belongs.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The current AddColumnToDataPoolJob’s status.
CREATED: The Job was created, but is not yet being executed.
IN_PROGRESS: The Job is executing.
SUCCEEDED: The Job succeeded.
FAILED: The Job failed. Check the error message.
Name of the new column.
Type of the new column.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
The ClickHouse type of the new column when columnType is set to CLICKHOUSE.
JSON property to which the new column corresponds.
The current progress of the AddColumnToDataPool Job, from 0.0 to 1.0.
The time at which the AddColumnToDataPool Job started.
The time at which the AddColumnToDataPool Job succeeded.
The time at which the AddColumnToDataPool Job failed.
UpdateDataPoolRecordsJob
UpdateDataPoolRecords Job scheduled for a specific Data Pool. The Update Data Pool Records Job represents the asynchronous process of updating records given some filters, inside a Data Pool. It tracks the process of updating records until it is finished, showing the progress and the outcome when it is finished.
The UpdateDataPoolRecords Job’s ID.
The UpdateDataPoolRecords Job’s creation date and time in UTC.
Who created the UpdateDataPoolRecords Job.
The UpdateDataPoolRecords Job’s last modification date and time in UTC.
Who last modified the UpdateDataPoolRecords Job.
Account to which the UpdateDataPoolRecords Job belongs.
The Account object.
The Account’s unique identifier.
Environment to which the UpdateDataPoolRecords Job belongs.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool whose records will be updated by the UpdateDataPoolRecords Job. See DataPool
The current UpdateDataPoolRecords Job’s status.
CREATED: The Job was created, but is not yet being executed.
IN_PROGRESS: The Job is executing.
SUCCEEDED: The Job succeeded.
FAILED: The Job failed. Check the error message.
The filters that will be used for updating data, in the form of SQL. Data matching the filters will be updated.
Describes how the job will update the records.
The fields for creating an Update Data Pool Records Job. For example, you can set a column to a constant, increment it, or derive it from other columns:
{
"column": "status",
"expression": "'completed'"
}
{
"column": "counter",
"expression": "counter + 1"
}
{
"column": "full_name",
"expression": "concat(first_name, ' ', last_name)"
}
The name of the column to update.
The value to which the column will be updated. Once evaluated, it should be of the same data type as the column.
The current progress of the UpdateDataPoolRecords Job, from 0.0 to 1.0.
The time at which the UpdateDataPoolRecords Job started.
The time at which the UpdateDataPoolRecords Job succeeded.
The time at which the UpdateDataPoolRecords Job failed.
The list of filters that will be used for updating data. Data matching the filters will be updated.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using “and” and “or”. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
"column": "value",
"operator": "GREATER_THAN",
"value": "0",
"and": [{
"column": "value",
"operator": "LESS_THAN_OR_EQUAL_TO",
"value": "0"
}],
"or": [{
"column": "status",
"operator": "EQUALS",
"value": "confirmed"
}]
}
Note that “and” takes precedence over “or”.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
The available Filter operators.
EQUALS: Selects values that are equal to the specified value.
NOT_EQUALS: Selects values that are not equal to the specified value.
GREATER_THAN: Selects values that are greater than the specified value.
GREATER_THAN_OR_EQUAL_TO: Selects values that are greater than or equal to the specified value.
LESS_THAN: Selects values that are less than the specified value.
LESS_THAN_OR_EQUAL_TO: Selects values that are less than or equal to the specified value.
IS_NULL: Selects values that are null. This operator does not accept a value.
IS_NOT_NULL: Selects values that are not null. This operator does not accept a value.
LIKE: Selects values that match the specified pattern.
NOT_LIKE: Selects values that do not match the specified pattern.
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
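A minimal sketch of creating an UpdateDataPoolRecords Job that combines filterSql with the set examples above. The mutation name, input field names, and response selection are assumptions based on the fields described above:

mutation UpdateRecordsExample {
  updateDataPoolRecords(
    input: {
      dataPool: "DPO00000000000000000000000000"  # hypothetical Data Pool ID
      filterSql: "status = 'pending'"
      set: [
        { column: "status", expression: "'completed'" }
        { column: "counter", expression: "counter + 1" }
      ]
    }
  ) {
    job {
      id
      status
      progress
    }
  }
}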
UpdateDataPoolRecordsJobResponse
The response returned by the Update Data Pool Records Job.
The UpdateDataPoolRecords Job that was just created.
UpdateDataPoolRecords Job scheduled for a specific Data Pool. The Update Data Pool Records Job represents the asynchronous process of updating records given some filters, inside a Data Pool. It tracks the process of updating records until it is finished, showing the progress and the outcome when it is finished.
The UpdateDataPoolRecords Job’s ID.
The UpdateDataPoolRecords Job’s creation date and time in UTC.
Who created the UpdateDataPoolRecords Job.
The UpdateDataPoolRecords Job’s last modification date and time in UTC.
Who last modified the UpdateDataPoolRecords Job.
Account to which the UpdateDataPoolRecords Job belongs.
The Account object.
The Account’s unique identifier.
Environment to which the UpdateDataPoolRecords Job belongs.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool whose records will be updated by the UpdateDataPoolRecords Job. See DataPool
The current UpdateDataPoolRecords Job’s status.
CREATED: The Job was created, but is not yet being executed.
IN_PROGRESS: The Job is executing.
SUCCEEDED: The Job succeeded.
FAILED: The Job failed. Check the error message.
The filters that will be used for updating data, in the form of SQL. Data matching the filters will be updated.
Describes how the job will update the records.
The fields for creating an Update Data Pool Records Job.
For example, to set a column to a constant value:
{
"column": "status",
"expression": "'completed'"
}
To increment a numeric column:
{
"column": "counter",
"expression": "counter + 1"
}
Or to compute a column from other columns:
{
"column": "full_name",
"expression": "concat(first_name, ' ', last_name)"
}
The name of the column to update.
The value to which the column will be updated. Once evaluated, it should be of the same data type as the column.
The current progress of the UpdateDataPoolRecords Job, from 0.0 to 1.0.
The time at which the UpdateDataPoolRecords Job started.
The time at which the UpdateDataPoolRecords Job succeeded.
The time at which the UpdateDataPoolRecords Job failed.
The list of filters that will be used for updating data. Data matching the filters will be updated.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
"column": "value",
"operator": "GREATER_THAN",
"value": "0",
"and": [{
"column": "value",
"operator": "LESS_THAN_OR_EQUAL_TO",
"value": "0"
}],
"or": [{
"column": "status",
"operator": "EQUALS",
"value": "confirmed"
}]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
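As a rough sketch, creating one of these Jobs might look as follows. This is illustrative only: the mutation and field names (createUpdateDataPoolRecordsJob, dataPool, filterSql, set, job) are assumptions based on the descriptions above, and the Data Pool ID is a placeholder.
mutation {
  # Hypothetical sketch; names are assumptions, not confirmed by this reference.
  createUpdateDataPoolRecordsJob(
    input: {
      dataPool: "DPOXXXXX"                  # placeholder Data Pool ID
      filterSql: "status = 'pending'"       # rows matching this filter are updated
      set: [{ column: "status", expression: "'completed'" }]
    }
  ) {
    job {
      id
      status
    }
  }
}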
DataPoolAccessPolicy
The ID of the Data Pool Access Policy.
The Data Pool Access Policy’s unique name.
The Data Pool Access Policy’s description.
The Data Pool Access Policy’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool Access Policy’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool Access Policy’s creation date and time in UTC.
The Data Pool Access Policy’s last modification date and time in UTC.
The Data Pool Access Policy’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool Access Policy’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
Columns that the Access Policy makes available for querying.
Row-level filters that the Access Policy applies before executing queries, in the form of SQL.
Applications that are assigned to this Data Pool Access Policy.
Arguments
Row-level filters that the Access Policy applies before executing queries.
deprecated: Use filtersSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
"column": "value",
"operator": "GREATER_THAN",
"value": "0",
"and": [{
"column": "value",
"operator": "LESS_THAN_OR_EQUAL_TO",
"value": "0"
}],
"or": [{
"column": "status",
"operator": "EQUALS",
"value": "confirmed"
}]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
The available Filter operators.
EQUALS: Selects values that are equal to the specified value.
NOT_EQUALS: Selects values that are not equal to the specified value.
GREATER_THAN: Selects values that are greater than the specified value.
GREATER_THAN_OR_EQUAL_TO: Selects values that are greater than or equal to the specified value.
LESS_THAN: Selects values that are less than the specified value.
LESS_THAN_OR_EQUAL_TO: Selects values that are less than or equal to the specified value.
IS_NULL: Selects values that are null. This operator does not accept a value.
IS_NOT_NULL: Selects values that are not null. This operator does not accept a value.
LIKE: Selects values that match the specified pattern.
NOT_LIKE: Selects values that do not match the specified pattern.
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
DataPoolAccessPolicyResponse
The Data Pool Access Policy.
The ID of the Data Pool Access Policy.
The Data Pool Access Policy’s unique name.
The Data Pool Access Policy’s description.
The Data Pool Access Policy’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool Access Policy’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool Access Policy’s creation date and time in UTC.
The Data Pool Access Policy’s last modification date and time in UTC.
The Data Pool Access Policy’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool Access Policy’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
Columns that the Access Policy makes available for querying.
Row-level filters that the Access Policy applies before executing queries, in the form of SQL.
Applications that are assigned to this Data Pool Access Policy.
Arguments
Row-level filters that the Access Policy applies before executing queries.
deprecated: Use filtersSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
"column": "value",
"operator": "GREATER_THAN",
"value": "0",
"and": [{
"column": "value",
"operator": "LESS_THAN_OR_EQUAL_TO",
"value": "0"
}],
"or": [{
"column": "status",
"operator": "EQUALS",
"value": "confirmed"
}]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
Creates an Environment and returns the newly created Environment (or an error if creating the Environment fails).
Arguments
The result of a mutation which creates or modifies an Environment.
The Environment which was created or modified.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
Modifies an Environment with the provided fields. If any of the optional fields are omitted, those properties will be unchanged on the Environment.
Arguments
The result of a mutation which creates or modifies an Environment.
The Environment which was created or modified.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
Deletes an Environment by ID and returns its ID if the Environment was deleted successfully.
Arguments
Creates a Data Pool Access Policy for the specified Data Pool.
Arguments
The Data Pool Access Policy’s unique name. If not specified, Propel will set the ID as unique name.
The Data Pool Access Policy’s description.
The Data Pool to which the Access Policy belongs.
Columns that the Access Policy makes available for querying.
If set to ["*"]
, all columns will be available for querying.
Row-level filters that the Access Policy applies before executing queries, in the form of SQL.
Row-level filters that the Access Policy applies before executing queries.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
"column": "value",
"operator": "GREATER_THAN",
"value": "0",
"and": [{
"column": "value",
"operator": "LESS_THAN_OR_EQUAL_TO",
"value": "0"
}],
"or": [{
"column": "status",
"operator": "EQUALS",
"value": "confirmed"
}]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The Data Pool Access Policy.
The ID of the Data Pool Access Policy.
The Data Pool Access Policy’s unique name.
The Data Pool Access Policy’s description.
The Data Pool Access Policy’s Environment.
See Environment
The Data Pool Access Policy’s creation date and time in UTC.
The Data Pool Access Policy’s last modification date and time in UTC.
The Data Pool Access Policy’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool Access Policy’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
Columns that the Access Policy makes available for querying.
Row-level filters that the Access Policy applies before executing queries, in the form of SQL.
Applications that are assigned to this Data Pool Access Policy.
Arguments
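A minimal, hedged sketch of this mutation, assuming the input and response field names shown here (columns, filterSql, dataPoolAccessPolicy) match the descriptions above; the IDs are placeholders:
mutation {
  # Illustrative sketch; field names are assumptions based on the argument list above.
  createDataPoolAccessPolicy(
    input: {
      uniqueName: "tenant_acme_policy"
      dataPool: "DPOXXXXX"                 # placeholder Data Pool ID
      columns: ["*"]                       # make all columns queryable
      filterSql: "customer_id = 'ACME'"    # row-level filter applied before queries
    }
  ) {
    dataPoolAccessPolicy {
      id
      uniqueName
    }
  }
}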
Modifies a Data Pool Access Policy with the provided unique name, description, columns and rows. If any of the optional arguments are omitted, those properties will be unchanged on the Data Pool Access Policy.
Arguments
The Data Pool Access Policy’s new unique name.
The Data Pool Access Policy’s new description.
Columns that the Access Policy makes available for querying. If not provided this property will not be modified.
Row-level filters that the Access Policy applies before executing queries, in the form of SQL. If not provided this property will not be modified.
Row-level filters that the Access Policy applies before executing queries. If not provided this property will not be modified.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
"column": "value",
"operator": "GREATER_THAN",
"value": "0",
"and": [{
"column": "value",
"operator": "LESS_THAN_OR_EQUAL_TO",
"value": "0"
}],
"or": [{
"column": "status",
"operator": "EQUALS",
"value": "confirmed"
}]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The Data Pool Access Policy.
The ID of the Data Pool Access Policy.
The Data Pool Access Policy’s unique name.
The Data Pool Access Policy’s description.
The Data Pool Access Policy’s Environment.
See Environment
The Data Pool Access Policy’s creation date and time in UTC.
The Data Pool Access Policy’s last modification date and time in UTC.
The Data Pool Access Policy’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool Access Policy’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
Columns that the Access Policy makes available for querying.
Row-level filters that the Access Policy applies before executing queries, in the form of SQL.
Applications that are assigned to this Data Pool Access Policy.
Arguments
Deletes a Data Pool Access Policy by ID and returns its ID if the Data Pool Access Policy was deleted successfully.
Arguments
Assign a Data Pool Access Policy to an Application.
The Data Pool Access Policy will restrict the Data Pool rows and columns that the Application can query. If the Data Pool has accessControlEnabled set to true, the Application must have a Data Pool Access Policy assigned in order to query the Data Pool.
An Application can have at most one Data Pool Access Policy assigned for a given Data Pool. If an Application already has a Data Pool Access Policy for a given Data Pool, and you call this mutation with another Data Pool Access Policy for the same Data Pool, the Application’s Data Pool Access Policy will be replaced.
Arguments
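A hedged sketch of the assignment, assuming the mutation takes the Policy and Application IDs as arguments (the argument names here are guesses; both IDs are placeholders):
mutation {
  # Hypothetical argument names; consult the schema for the exact signature.
  assignDataPoolAccessPolicyToApplication(
    dataPoolAccessPolicy: "POLXXXXX"   # placeholder Policy ID
    application: "APPXXXXX"            # placeholder Application ID
  )
}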
Unassign a Data Pool Access Policy from an Application.
Once unassigned, whether the Application will be able to query the Data Pool is controlled by the Data Pool’s accessControlEnabled property. If accessControlEnabled is true, the Application will no longer be able to query the Data Pool. If accessControlEnabled is false, the Application will be able to query all data in the Data Pool, unrestricted.
Arguments
Creates a new Application and returns the newly created Application (or an error message if creating the Application fails).
Required scopes:
- APPLICATION_ADMIN
mutation {
createApplication(
input: {
uniqueName: "example_sample_application"
description: "My dashboard app"
scopes: [DATA_POOL_QUERY]
propeller: P1_X_SMALL
}
) {
... on ApplicationResponse {
application {
id
uniqueName
}
}
... on FailureResponse {
error {
code
message
}
}
}
}
Arguments
The fields for creating an Application.
The Application’s unique name. If not specified, Propel will set the ID as unique name.
The Application’s description.
The Application’s Propeller. If no Propeller is provided, Propel will set the Propeller to P1_X_SMALL.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max 5,000,000 records per second.
P1_SMALL: Max 25,000,000 records per second.
P1_MEDIUM: Max 100,000,000 records per second.
P1_LARGE: Max 250,000,000 records per second.
P1_X_LARGE: Max 500,000,000 records per second.
The Application’s API authorization scopes. If specified, at least one scope must be provided; otherwise, all scopes will be granted to the Application by default.
The API operations an Application is authorized to perform.
ADMIN: Grant read/write access to Data Sources, Data Pools, Metrics and Policies.
APPLICATION_ADMIN: Grant read/write access to Applications.
DATA_POOL_QUERY: Grant read access to query Data Pools.
DATA_POOL_READ: Grant read access to read Data Pools.
DATA_POOL_STATS: Grant read access to fetch column statistics from Data Pools.
ENVIRONMENT_ADMIN: Grant read/write access to Environments.
METRIC_QUERY: Grant read access to query Metrics.
METRIC_STATS: Grant read access to fetch Dimension statistics from Metrics.
METRIC_READ: Grant read access to Metrics. This does not allow querying Metrics; for that, see METRIC_QUERY.
The result of a mutation which creates or modifies an Application.
If successful, an ApplicationResponse will be returned; otherwise, a FailureResponse will be returned.
The result of a mutation which creates or modifies an Application.
The Application which was created or modified.
See Application
Modifies an Application with the provided unique name, description, Propeller, and scopes. If any of the optional arguments are omitted, those properties will be unchanged on the Application.
Arguments
The fields for modifying an Application.
The Application’s new unique name.
The Application’s new description.
The Application’s new Propeller.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max 5,000,000 records per second.
P1_SMALL: Max 25,000,000 records per second.
P1_MEDIUM: Max 100,000,000 records per second.
P1_LARGE: Max 250,000,000 records per second.
P1_X_LARGE: Max 500,000,000 records per second.
The Application’s new API authorization scopes.
The API operations an Application is authorized to perform.
ADMIN: Grant read/write access to Data Sources, Data Pools, Metrics and Policies.
APPLICATION_ADMIN: Grant read/write access to Applications.
DATA_POOL_QUERY: Grant read access to query Data Pools.
DATA_POOL_READ: Grant read access to read Data Pools.
DATA_POOL_STATS: Grant read access to fetch column statistics from Data Pools.
ENVIRONMENT_ADMIN: Grant read/write access to Environments.
METRIC_QUERY: Grant read access to query Metrics.
METRIC_STATS: Grant read access to fetch Dimension statistics from Metrics.
METRIC_READ: Grant read access to Metrics. This does not allow querying Metrics; for that, see METRIC_QUERY.
The result of a mutation which creates or modifies an Application.
If successful, an ApplicationResponse will be returned; otherwise, a FailureResponse will be returned.
The result of a mutation which creates or modifies an Application.
The Application which was created or modified.
See Application
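For illustration, a modifyApplication call that changes only the Propeller and scopes might look like the sketch below; the identifier field (id) and the overall input shape are assumptions:
mutation {
  # Sketch only; the exact input shape may differ.
  modifyApplication(
    input: {
      id: "APPXXXXX"                      # placeholder; identifier field assumed
      propeller: P1_SMALL
      scopes: [DATA_POOL_QUERY, METRIC_QUERY]
    }
  ) {
    ... on ApplicationResponse {
      application {
        id
        propeller
      }
    }
    ... on FailureResponse {
      error {
        code
        message
      }
    }
  }
}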
Deletes an Application by ID and returns its ID if the Application was deleted successfully.
Arguments
Deletes an Application by unique name and returns its ID if the Application was deleted successfully.
Arguments
Creates a new Data Source from the given Snowflake database using the specified Snowflake account, warehouse, schema, username, and role.
Returns the newly created Data Source (or an error message if creating the Data Source fails).
Arguments
The fields for creating a Snowflake Data Source.
The Data Source’s unique name. If not specified, Propel will set the ID as unique name.
The Data Source’s description.
The Data Source’s connection settings.
The fields for creating a Snowflake Data Source’s connection settings.
The Snowflake account. Only include the part before the “snowflakecomputing.com” part of your Snowflake URL (make sure you are in classic console, not Snowsight). For AWS-based accounts, this looks like “znXXXXX.us-east-2.aws”. For Google Cloud-based accounts, this looks like “ffXXXXX.us-central1.gcp”.
The Snowflake database name.
The Snowflake warehouse name. It should be “PROPELLING” if you used the default name in the setup script.
The Snowflake schema.
The Snowflake username. It should be “PROPEL” if you used the default name in the setup script.
The Snowflake password.
The Snowflake role. It should be “PROPELLER” if you used the default name in the setup script.
The result of a mutation which creates or modifies a Data Source.
If successful, a DataSourceResponse will be returned; otherwise, a FailureResponse will be returned.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
See DataSource
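Putting the documented fields together, a createSnowflakeDataSource call could look like the sketch below. The connectionSettings wrapper name is an assumption, and the credentials are placeholders:
mutation {
  createSnowflakeDataSource(
    input: {
      uniqueName: "snowflake_analytics"
      description: "Snowflake warehouse for analytics"
      connectionSettings: {
        account: "znXXXXX.us-east-2.aws"   # the part before "snowflakecomputing.com"
        database: "ANALYTICS"
        warehouse: "PROPELLING"
        schema: "PUBLIC"
        username: "PROPEL"
        password: "<password>"             # placeholder
        role: "PROPELLER"
      }
    }
  ) {
    ... on DataSourceResponse {
      dataSource {
        id
        uniqueName
      }
    }
    ... on FailureResponse {
      error {
        code
        message
      }
    }
  }
}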
Modifies a Data Source with the provided unique name, description, and connection settings. If any of the optional arguments are omitted, those properties will be unchanged on the Data Source.
Arguments
The fields for modifying a Snowflake Data Source.
The Data Source’s new unique name.
The Data Source’s new description.
The Data Source’s new connection settings.
The fields for modifying a Snowflake Data Source’s connection settings.
The Snowflake account. Only include the part before the “snowflakecomputing.com” part of your Snowflake URL (make sure you are in classic console, not Snowsight). For AWS-based accounts, this looks like “znXXXXX.us-east-2.aws”. For Google Cloud-based accounts, this looks like “ffXXXXX.us-central1.gcp”. If not provided this property will not be modified.
The Snowflake database name. If not provided this property will not be modified.
The Snowflake warehouse name. It should be “PROPELLING” if you used the default name in the setup script. If not provided this property will not be modified.
The Snowflake schema. If not provided this property will not be modified.
The Snowflake username. It should be “PROPEL” if you used the default name in the setup script. If not provided this property will not be modified.
The Snowflake password. If not provided this property will not be modified.
The Snowflake role. It should be “PROPELLER” if you used the default name in the setup script. If not provided this property will not be modified.
The result of a mutation which creates or modifies a Data Source.
If successful, a DataSourceResponse will be returned; otherwise, a FailureResponse will be returned.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
See DataSource
Attempts to reconnect a Data Source. The mutation then returns the Data Source object.
Arguments
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Account.
The Account object.
The Account’s unique identifier.
The Data Source’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
The types of Data Sources.
WEBHOOK: Indicates a Webhook Data Source.
TWILIO_SEGMENT: Indicates a Twilio Segment Data Source.
S3: Indicates an Amazon S3 Data Source.
Redshift: Indicates a Redshift Data Source.
POSTGRESQL: Indicates a PostgreSQL Data Source.
KAFKA: Indicates a Kafka Data Source.
Http: Indicates an HTTP Data Source.
CLICKHOUSE: Indicates a ClickHouse Data Source.
AMAZON_DYNAMODB: Indicates an Amazon DynamoDB Data Source.
AMAZON_DATA_FIREHOSE: Indicates an Amazon Data Firehose Data Source.
Snowflake: Indicates a Snowflake Data Source.
INTERNAL: Indicates an internal Data Source.
The Data Source’s status.
The status of a Data Source.
CREATED: The Data Source has been created, but it is not connected yet.
CONNECTING: Propel is attempting to connect the Data Source.
CONNECTED: The Data Source is connected.
BROKEN: The Data Source failed to connect.
DELETING: Propel is deleting the Data Source.
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
The Data Source Check object.
Data Source Checks are executed when setting up your Data Source. They check that Propel will be able to receive data and set up Data Pools.
The exact Checks to perform vary by Data Source. For example, Snowflake-backed Data Sources will have their own specific Checks.
The name of the Data Source Check to be performed.
A description of the Data Source Check to be performed.
The status of the Data Source Check (all checks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Source Check failed, this field includes a descriptive error message.
See Error
The time at which the Data Source Check was performed.
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
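For example, a forward-paginated query over a Data Source’s Data Pools might look like the following sketch. The dataSource query and the connection’s nodes and pageInfo fields are assumed to follow the conventions described here, and the IDs and cursor value are placeholders:
query {
  dataSource(id: "DSOXXXXX") {
    dataPools(first: 10, after: "<endCursor from the previous page>") {
      nodes {
        id
        uniqueName
      }
      pageInfo {
        endCursor      # pass this as "after" to fetch the next page
        hasNextPage
      }
    }
  }
}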
Arguments
Introspects the tables in a Data Source.
Returns the tables along with when they were last cached from the Data Source.
Arguments
The table introspection object.
When setting up a Data Source, Propel may need to introspect tables in order to determine what tables and columns are available to create Data Pools from. The table introspection represents the lifecycle of this operation (whether it’s in-progress, succeeded, or failed) and the resulting tables and columns. These will be captured as table and column objects, respectively.
The Data Source the table introspection was performed for. See DataSource
The status of the table introspection.
The status of a table introspection.
NOT_STARTED: The table introspection has not started.
STARTED: The table introspection has started.
SUCCEEDED: The table introspection succeeded.
FAILED: The table introspection failed.
The table introspection’s creation date and time in UTC.
The table introspection’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The table introspection’s last modification date and time in UTC.
The table introspection’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The number of tables introspected.
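A hypothetical sketch of triggering an introspection and reading back its status; the mutation name, argument, and selected fields (introspectTables, dataSource, status, numTables) are assumptions inferred from the descriptions above:
mutation {
  # Names are assumptions; verify against the schema before use.
  introspectTables(input: { dataSource: "DSOXXXXX" }) {
    status       # NOT_STARTED, STARTED, SUCCEEDED, or FAILED
    numTables    # the number of tables introspected
  }
}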
Tests that Propel can connect to the data warehouse and updates the status.
Arguments
The result of a mutation which creates or modifies a Data Source.
If successful, a DataSourceResponse will be returned; otherwise, a FailureResponse will be returned.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
See DataSource
Deletes a Data Source by ID and returns its ID if the Data Source was deleted successfully.
Arguments
Deletes a Data Source by unique name and returns its ID if the Data Source was deleted successfully.
Arguments
Creates a new Data Pool.
Returns the newly created Data Pool or an error message if it fails.
mutation {
createDataPoolV2(input: {
dataSource: "DSOXXXXX"
table: "tacosoft_sales_analytics"
timestamp: { columnName: "timestamp" }
columns: [
{
columnName: "timestamp",
type: TIMESTAMP,
isNullable: true
},
{
columnName: "order_id",
type: STRING,
isNullable: true
},
{
columnName: "taco_name",
type: STRING,
isNullable: true
},
{
columnName: "toppings",
type: JSON,
isNullable: true
},
{
columnName: "quantity",
type: INT32,
isNullable: true
},
{
columnName: "taco_unit_price",
type: INT32,
isNullable: true
},
{
columnName: "taco_total_price",
type: INT32,
isNullable: true
}
]
}) {
dataPool {
id
uniqueName
tableSettings {
orderBy
engine {
... on MergeTreeTableEngine {
type
}
... on ReplacingMergeTreeTableEngine {
type
ver
}
... on SummingMergeTreeTableEngine {
type
columns
}
... on AggregatingMergeTreeTableEngine {
type
}
... on PostgreSqlTableEngine {
type
}
}
}
}
}
}
Arguments
The fields for creating a Data Pool.
The Data Source that will be used to create the Data Pool.
The table that the Data Pool will sync from.
The table’s primary timestamp column.
Propel uses the primary timestamp to order and partition your data in Data Pools. It’s part of what makes Propel fast for larger data sets. It will also serve as the time dimension for your Metrics.
If you do not provide a primary timestamp column, you will need to supply an alternate timestamp when querying your Data Pool or its Metrics using the TimeRangeInput.
The fields to specify the Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The Data Pool’s unique name. If not specified, Propel will set the ID as the unique name.
The Data Pool’s description.
The list of columns.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type to use when type is set to CLICKHOUSE.
Whether the column is nullable, meaning whether it accepts a null value.
The Data Pool’s syncing settings.
The fields for modifying the Data Pool syncing.
Enables or disables access control for the Data Pool.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
Override the Data Pool’s table settings. These describe how the Data Pool’s table is created in ClickHouse, and a default will be chosen based on the Data Pool’s timestamp and uniqueId values, if any. You can override these defaults in order to specify a custom table engine, a custom ORDER BY, and so on (see the sketch after this argument list).
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
This field is optional. A default will be chosen based on the Data Pool’s timestamp and uniqueId values, if specified.
See TableEngineInput
The PARTITION BY clause for the Data Pool’s table.
This field is optional. A default will be chosen based on the Data Pool’s timestamp and uniqueId values, if specified.
The PRIMARY KEY clause for the Data Pool’s table.
This field is optional. A default will be chosen based on the Data Pool’s timestamp and uniqueId values, if specified.
The ORDER BY clause for the Data Pool’s table.
This field is optional. A default will be chosen based on the Data Pool’s timestamp and uniqueId values, if specified.
The TTL clause for the Data Pool’s table.
The Data Pool’s optional tenant ID column. The tenant ID column is used to control access to your data with access policies.
deprecated: Will be removed; use Data Pool Access Policies instead.
The fields to specify the Data Pool’s tenant ID column. The tenant ID column is used to control access to your data with access policies.
The name of the column that represents the tenant ID.
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
The fields to specify the Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
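As a sketch of the tableSettings override described above, assuming TableEngineInput accepts one engine variant at a time and that partitionBy and orderBy take column expressions (both assumptions), the input might look like:
tableSettings: {
  # Assumed shape: pick exactly one engine variant.
  engine: { replacingMergeTree: { ver: "updated_at" } }
  orderBy: ["timestamp", "order_id"]
  partitionBy: ["toYYYYMM(timestamp)"]   # ClickHouse partition expression
  ttl: "timestamp + INTERVAL 90 DAY"     # optional TTL clause
}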
The result of a mutation which creates or modifies a Data Pool.
The Data Pool which was created or modified.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Environment.
See Environment
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
See DataPoolStatus
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The list of measures (numeric columns) in the Data Pool.
Arguments
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
Settings related to Data Pool syncing.
See DataPoolSyncing
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
The Data Pool’s table settings.
See TableSettings
The Data Pool’s columns that participate in its PARTITION BY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its PRIMARY KEY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its ORDER BY clause.
See DataPoolColumn
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead.
See Tenant
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
See UniqueId
Modifies a Data Pool with the provided unique name, description, and data retention time. If any of the optional arguments are omitted, those properties will be unchanged on the Data Pool.
Arguments
The fields for modifying a Data Pool.
The Data Pool’s new unique name.
The Data Pool’s new description.
The Data Pool’s new data retention in days.
The Data Pool’s new syncing settings.
The fields for modifying the Data Pool syncing.
The table’s primary timestamp column.
Propel uses the primary timestamp to order and partition your data in Data Pools. It’s part of what makes Propel fast for larger data sets. It will also serve as the time dimension for your Metrics.
If you do not provide a primary timestamp column, you will need to supply an alternate timestamp when querying your Data Pool or its Metrics using the TimeRangeInput.
The fields to specify the Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
Enables or disables access control for the Data Pool.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
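A hedged example of modifying only the description and syncing settings, assuming the identifier and syncing field names shown here (id, syncing, interval, and the EVERY_1_HOUR value referenced later in this document):
mutation {
  # Sketch; the identifier field and input shape are assumptions.
  modifyDataPool(
    input: {
      id: "DPOXXXXX"                       # placeholder Data Pool ID
      description: "Sales analytics (hourly syncs)"
      syncing: { interval: EVERY_1_HOUR }
    }
  ) {
    ... on DataPoolResponse {
      dataPool {
        id
      }
    }
    ... on FailureResponse {
      error {
        code
        message
      }
    }
  }
}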
The result of a mutation which creates or modifies a Data Pool.
If successful, a DataPoolResponse will be returned; otherwise, a FailureResponse will be returned.
Retries to set up the Data Pool identified by the given ID.
Arguments
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
The status of a Data Pool.
CREATED: The Data Pool has been created and will be set up soon.
PENDING: Propel is attempting to set up the Data Pool.
LIVE: The Data Pool is set up and serving data. Check its Syncs to monitor data ingestion.
SETUP_FAILED: The Data Pool setup failed. Check its Setup Tasks before re-attempting setup.
CONNECTING
CONNECTED
BROKEN
PAUSING
PAUSED
DELETING: Propel is deleting the Data Pool and all of its associated data.
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The Data Pool’s primary timestamp column, if any.
A Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The primary timestamp column’s type.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
The list of measures (numeric columns) in the Data Pool.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
The Data Pool Setup Task object.
Data Pool Setup Tasks are executed when setting up your Data Pool. They ensure Propel will be able to sync records from your Data Source to your Data Pool.
The exact Setup Tasks to perform vary by Data Source. For example, Data Pools pointing to a Snowflake-backed Data Source will have their own specific Setup Tasks.
The name of the Data Pool Setup Task to be performed.
A description of the Data Pool Setup Task to be performed.
The status of the Data Pool Setup Task (all setup tasks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Pool Setup Task failed, this field includes a descriptive error message.
See Error
The time at which the Data Pool Setup Task was completed.
Settings related to Data Pool syncing.
Settings related to Data Pool syncing.
Indicates whether syncing is enabled or disabled.
The syncing interval.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
The date and time of the most recent Sync in UTC.
The list of Syncs of the Data Pool.
Arguments
The filter to apply when listing the Syncs for a Data Pool.
EMPTY: Returns only Syncs with empty records.
NOT_EMPTY: Returns only Syncs that contain one or more records.
ALL: Returns all Syncs, regardless of whether they contain records or not.
See SyncConnection
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
Response returned by the validateExpression query for validating expressions in Custom Metrics.
Returns whether the expression is valid or not with a reason explaining why.
True if the expression is valid, false otherwise.
The reason the expression is not valid, if it is invalid; null otherwise.
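For instance, validating a candidate Custom Metric expression could look like the sketch below; the dataPool query, the expression argument, and the valid/reason field names are assumed from the descriptions above, and the ID is a placeholder:
query {
  dataPool(id: "DPOXXXXX") {
    validateExpression(expression: "SUM(taco_total_price) - SUM(quantity * taco_unit_price)") {
      valid    # true if the expression is valid
      reason   # why it is invalid, or null
    }
  }
}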
The Data Pool’s table settings.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
See TableEngine
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The Data Pool’s columns that participate in its PARTITION BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its PRIMARY KEY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its ORDER BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead.
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
A Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
Retries to set up the Data Pool identified by the given unique name.
Arguments
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
The status of a Data Pool.
CREATED: The Data Pool has been created and will be set up soon.
PENDING: Propel is attempting to set up the Data Pool.
LIVE: The Data Pool is set up and serving data. Check its Syncs to monitor data ingestion.
SETUP_FAILED: The Data Pool setup failed. Check its Setup Tasks before re-attempting setup.
CONNECTING
CONNECTED
BROKEN
PAUSING
PAUSED
DELETING: Propel is deleting the Data Pool and all of its associated data.
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The Data Pool’s primary timestamp column, if any.
A Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The primary timestamp column’s type.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
The list of measures (numeric columns) in the Data Pool.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
The Data Pool Setup Task object.
Data Pool Setup Tasks are executed when setting up your Data Pool. They ensure Propel will be able to sync records from your Data Source to your Data Pool.
The exact Setup Tasks to perform vary by Data Source. For example, Data Pools pointing to a Snowflake-backed Data Source will have their own specific Setup Tasks.
The name of the Data Pool Setup Task to be performed.
A description of the Data Pool Setup Task to be performed.
The status of the Data Pool Setup Task (all setup tasks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Pool Setup Task failed, this field includes a descriptive error message.
See Error
The time at which the Data Pool Setup Task was completed.
Settings related to Data Pool syncing.
Settings related to Data Pool syncing.
Indicates whether syncing is enabled or disabled.
The syncing interval.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
The date and time of the most recent Sync in UTC.
The list of Syncs of the Data Pool.
Arguments
The filter to apply when listing the Syncs for a Data Pool.
EMPTY: Returns only Syncs with empty records.
NOT_EMPTY: Returns only Syncs that contain one or more records.
ALL: Returns all Syncs, regardless of whether they contain records or not.
See SyncConnection
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
Response returned by the validateExpression query for validating expressions in Custom Metrics.
Returns whether the expression is valid or not with a reason explaining why.
True if the expression is valid, false otherwise.
The reason the expression is not valid, if it is invalid; null otherwise.
The Data Pool’s table settings.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
See TableEngine
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The Data Pool’s columns that participate in its PARTITION BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its PRIMARY KEY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its ORDER BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead.
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
A Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
Extracts the schema from the table and updates the schema object.
Arguments
The result of a mutation which creates or modifies a Data Pool.
If successful, a DataPoolResponse will be returned; otherwise, a FailureResponse will be returned.
Tests that Propel has access to the Data Pool’s table in its corresponding Data Source and will be able to Sync data. Updates the status.
Arguments
The result of a mutation which creates or modifies a Data Pool.
If successful, a DataPoolResponse will be returned; otherwise, a FailureResponse will be returned.
Deletes a Data Pool by ID and returns its ID if the Data Pool was deleted successfully.
Arguments
Deletes a Data Pool by unique name and returns its ID if the Data Pool was deleted successfully.
Arguments
Disables syncing of a Data Pool.
Arguments
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
The status of a Data Pool.
CREATED: The Data Pool has been created and will be set up soon.
PENDING: Propel is attempting to set up the Data Pool.
LIVE: The Data Pool is set up and serving data. Check its Syncs to monitor data ingestion.
SETUP_FAILED: The Data Pool setup failed. Check its Setup Tasks before re-attempting setup.
CONNECTING
CONNECTED
BROKEN
PAUSING
PAUSED
DELETING: Propel is deleting the Data Pool and all of its associated data.
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The Data Pool’s primary timestamp column, if any.
A Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The primary timestamp column’s type.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
The list of measures (numeric columns) in the Data Pool.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
The Data Pool Setup Task object.
Data Pool Setup Tasks are executed when setting up your Data Pool. They ensure Propel will be able to sync records from your Data Source to your Data Pool.
The exact Setup Tasks to perform vary by Data Source. For example, Data Pools pointing to Snowflake-backed Data Sources will have their own specific Setup Tasks.
The name of the Data Pool Setup Task to be performed.
A description of the Data Pool Setup Task to be performed.
The status of the Data Pool Setup Task (all setup tasks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Pool Setup Task failed, this field includes a descriptive error message.
See Error
The time at which the Data Pool Setup Task was completed.
Settings related to Data Pool syncing.
Settings related to Data Pool syncing.
Indicates whether syncing is enabled or disabled.
The syncing interval.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
The date and time of the most recent Sync in UTC.
The list of Syncs of the Data Pool.
Arguments
The filter to apply when listing the Syncs for a Data Pool.
EMPTY: Returns only Syncs with empty records.
NOT_EMPTY: Returns only Syncs that contain one or more records.
ALL: Returns all Syncs, regardless of whether they contain records or not.
See SyncConnection
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
Response returned by the validateExpression query for validating expressions in Custom Metrics.
Returns whether the expression is valid or not with a reason explaining why.
True if the expression is valid, false otherwise.
The reason for why the expression is not valid in case it isn’t, null otherwise.
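As a sketch of how this might be called, assuming validateExpression is a field on the Data Pool reachable through a top-level dataPool query, and that the result fields are named valid and reason (all of these names are assumptions based on the descriptions above):

query {
  dataPool(id: "DPO_EXAMPLE_ID") {
    validateExpression(expression: "SUM(price * quantity)") {
      valid    # true if the expression can be used for a Custom Metric
      reason   # null when valid; otherwise explains why the expression is invalid
    }
  }
}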
The Data Pool’s table settings.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
See TableEngine
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
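To make these settings concrete, a hypothetical selection of the table settings on a Data Pool might look as follows; the field names (tableSettings, partitionBy, primaryKey, orderBy, ttl) are assumptions based on the clause descriptions above:

query {
  dataPool(id: "DPO_EXAMPLE_ID") {
    tableSettings {
      partitionBy   # PARTITION BY clause of the ClickHouse table
      primaryKey    # PRIMARY KEY clause
      orderBy       # ORDER BY clause
      ttl           # TTL clause
    }
  }
}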
The Data Pool’s columns that participate in its PARTITION BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool’s columns that participate in its PRIMARY KEY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool’s columns that participate in its ORDER BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
A Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
Re-enables syncing of a Data Pool.
Arguments
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
The status of a Data Pool.
CREATED: The Data Pool has been created and will be set up soon.
PENDING: Propel is attempting to set up the Data Pool.
LIVE: The Data Pool is set up and serving data. Check its Syncs to monitor data ingestion.
SETUP_FAILED: The Data Pool setup failed. Check its Setup Tasks before re-attempting setup.
CONNECTING
CONNECTED
BROKEN
PAUSING
PAUSED
DELETING: Propel is deleting the Data Pool and all of its associated data.
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The Data Pool’s primary timestamp column, if any.
A Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The primary timestamp column’s type.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
The list of measures (numeric columns) in the Data Pool.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
The Data Pool Setup Task object.
Data Pool Setup Tasks are executed when setting up your Data Pool. They ensure Propel will be able to sync records from your Data Source to your Data Pool.
The exact Setup Tasks to perform vary by Data Source. For example, Data Pools pointing to Snowflake-backed Data Sources will have their own specific Setup Tasks.
The name of the Data Pool Setup Task to be performed.
A description of the Data Pool Setup Task to be performed.
The status of the Data Pool Setup Task (all setup tasks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Pool Setup Task failed, this field includes a descriptive error message.
See Error
The time at which the Data Pool Setup Task was completed.
Settings related to Data Pool syncing.
Settings related to Data Pool syncing.
Indicates whether syncing is enabled or disabled.
The syncing interval.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
The date and time of the most recent Sync in UTC.
The list of Syncs of the Data Pool.
Arguments
The filter to apply when listing the Syncs for a Data Pool.
EMPTY: Returns only Syncs with empty records.
NOT_EMPTY: Returns only Syncs that contain one or more records.
ALL: Returns all Syncs, regardless of whether they contain records or not.
See SyncConnection
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
Response returned by the validateExpression query for validating expressions in Custom Metrics.
Returns whether the expression is valid or not with a reason explaining why.
True if the expression is valid, false otherwise.
The reason for why the expression is not valid in case it isn’t, null otherwise.
The Data Pool’s table settings.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
See TableEngine
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The Data Pool’s columns that participate in its PARTITION BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool’s columns that participate in its PRIMARY KEY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool’s columns that participate in its ORDER BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
A Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
Creates a new Count Metric from the given Data Pool and returns the newly created Metric (or an error message if creating the Metric fails).
Arguments
The fields for creating a new Count Metric.
The Data Pool that powers this Metric.
The Metric’s unique name. If not specified, Propel will use the ID as the unique name.
The Metric’s description.
The Metric’s Filters, in the form of SQL. Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Filters are present, all records will be included.
The Metric’s Dimensions. Dimensions define the columns that will be available to filter the Metric at query time.
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
The Metric’s Filters. Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Filters are present, all records will be included.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
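Putting the input fields above together, creating a Count Metric might look like the following sketch. The mutation name createCountMetric, the exact input field names, and the response selection are assumptions based on the field descriptions above:

mutation {
  createCountMetric(input: {
    dataPool: "DPO_EXAMPLE_ID"                  # the Data Pool that powers the Metric
    uniqueName: "order_count"
    description: "Number of confirmed orders"
    filterSql: "status = 'confirmed'"           # Metric Filter in SQL form
    dimensions: [{ columnName: "country" }]     # columns available for filtering at query time
  }) {
    metric {
      id
      uniqueName
    }
  }
}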
The result of a mutation which creates or modifies a Metric.
The Metric which was created or modified.
The Metric object.
A Metric is a business indicator measured over time.
The Metric’s unique identifier.
The Metric’s unique name.
The Metric’s description.
The Metric’s Environment.
See Environment
The Metric’s creation date and time in UTC.
The Metric’s last modification date and time in UTC.
The Metric’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Metric’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Metric’s Dimensions. These Dimensions are available to Query Filters.
See Dimension
The Metric’s timestamp, if any. This is the same as its Data Pool’s timestamp, if any.
See Dimension
List the Boosters associated to the Metric.
Arguments
The Metric’s type. The different Metric types determine how the values are calculated.
See MetricType
The settings for the Metric. The settings are specific to the Metric’s type.
See MetricSettings
The Metric’s measure. Access this from the Metric’s settings object instead.
deprecated: Access this from the Metric’s settings object instead.
See Dimension
Query the Metric in counter format. Returns the Metric’s value for the given time range and filters.
deprecated: Use the top-level counter query instead
Arguments
See CounterInput
See CounterResponse
Query the Metric in time series format. Returns arrays of timestamps and the Metric’s values for the given time range and filters.
deprecated: Use the top-level timeSeries query instead
Arguments
See TimeSeriesInput
Query the Metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the Metric’s corresponding values for the given time range and filters.
deprecated: Use the top-level leaderboard query instead
Arguments
See LeaderboardInput
List the Policies associated to the Metric.
deprecated: Use Data Pool Access Policies instead
Arguments
See PolicyConnection
Whether or not access control is enabled for the Metric.
deprecated: Use Data Pool Access Policies instead
Creates a new Count Distinct Metric from the given Data Pool and returns the newly created Metric (or an error message if creating the Metric fails).
Arguments
The fields for creating a new Count Distinct Metric.
The Data Pool that powers this Metric.
The Metric’s unique name. If not specified, Propel will use the ID as the unique name.
The Metric’s description.
The Metric’s Filters, in the form of SQL. Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Filters are present, all records will be included.
The Metric’s Dimensions. Dimensions define the columns that will be available to filter the Metric at query time.
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
The Dimension over which the count distinct operation is going to be performed.
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
The Metric’s Filters. Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Filters are present, all records will be included.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The result of a mutation which creates or modifies a Metric.
The Metric which was created or modified.
The Metric object.
A Metric is a business indicator measured over time.
The Metric’s unique identifier.
The Metric’s unique name.
The Metric’s description.
The Metric’s Environment.
See Environment
The Metric’s creation date and time in UTC.
The Metric’s last modification date and time in UTC.
The Metric’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Metric’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Metric’s Dimensions. These Dimensions are available to Query Filters.
See Dimension
The Metric’s timestamp, if any. This is the same as its Data Pool’s timestamp, if any.
See Dimension
List the Boosters associated to the Metric.
Arguments
The Metric’s type. The different Metric types determine how the values are calculated.
See MetricType
The settings for the Metric. The settings are specific to the Metric’s type.
See MetricSettings
The Metric’s measure. Access this from the Metric’s settings object instead.
deprecated: Access this from the Metric’s settings object instead.
See Dimension
Query the Metric in counter format. Returns the Metric’s value for the given time range and filters.
deprecated: Use the top-level counter query instead
Arguments
See CounterInput
See CounterResponse
Query the Metric in time series format. Returns arrays of timestamps and the Metric’s values for the given time range and filters.
deprecated: Use the top-level timeSeries query instead
Arguments
See TimeSeriesInput
Query the Metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the Metric’s corresponding values for the given time range and filters.
deprecated: Use the top-level leaderboard query instead
Arguments
See LeaderboardInput
List the Policies associated to the Metric.
deprecated: Use Data Pool Access Policies instead
Arguments
See PolicyConnection
Whether or not access control is enabled for the Metric.
deprecated: Use Data Pool Access Policies instead
Creates a new Sum Metric from the given Data Pool and returns the newly created Metric (or an error message if creating the Metric fails).
Arguments
The fields for creating a new Sum Metric.
The Data Pool that powers this Metric.
The Metric’s unique name. If not specified, Propel will use the ID as the unique name.
The Metric’s description.
The Metric’s Filters, in the form of SQL. Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Filters are present, all records will be included.
The Metric’s Dimensions. Dimensions define the columns that will be available to filter the Metric at query time.
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
The column to be summed.
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
The Metric’s Filters. Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Filters are present, all records will be included.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The result of a mutation which creates or modifies a Metric.
The Metric which was created or modified.
The Metric object.
A Metric is a business indicator measured over time.
The Metric’s unique identifier.
The Metric’s unique name.
The Metric’s description.
The Metric’s Environment.
See Environment
The Metric’s creation date and time in UTC.
The Metric’s last modification date and time in UTC.
The Metric’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Metric’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Metric’s Dimensions. These Dimensions are available to Query Filters.
See Dimension
The Metric’s timestamp, if any. This is the same as its Data Pool’s timestamp, if any.
See Dimension
List the Boosters associated to the Metric.
Arguments
The Metric’s type. The different Metric types determine how the values are calculated.
See MetricType
The settings for the Metric. The settings are specific to the Metric’s type.
See MetricSettings
The Metric’s measure. Access this from the Metric’s settings object instead.
deprecated: Access this from the Metric’s settings object instead.
See Dimension
Query the Metric in counter format. Returns the Metric’s value for the given time range and filters.
deprecated: Use the top-level counter query instead
Arguments
See CounterInput
See CounterResponse
Query the Metric in time series format. Returns arrays of timestamps and the Metric’s values for the given time range and filters.
deprecated: Use the top-level timeSeries query instead
Arguments
See TimeSeriesInput
Query the Metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the Metric’s corresponding values for the given time range and filters.
deprecated: Use the top-level leaderboard query instead
Arguments
See LeaderboardInput
List the Policies associated to the Metric.
deprecated: Use Data Pool Access Policies instead
Arguments
See PolicyConnection
Whether or not access control is enabled for the Metric.
deprecated: Use Data Pool Access Policies instead
Creates a new Average Metric from the given Data Pool and returns the newly created Metric (or an error message if creating the Metric fails).
Arguments
The fields for creating a new Average Metric.
The Data Pool that powers this Metric.
The Metric’s unique name.
The Metric’s description.
The Metric’s Filters, in the form of SQL. Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Filters are present, all records will be included.
The Metric’s Dimensions. Dimensions define the columns that will be available to filter the Metric at query time.
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
The column to be averaged.
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
The Metric’s Filters. Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Filters are present, all records will be included.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The result of a mutation which creates or modifies a Metric.
The Metric which was created or modified.
The Metric object.
A Metric is a business indicator measured over time.
The Metric’s unique identifier.
The Metric’s unique name.
The Metric’s description.
The Metric’s Environment.
See Environment
The Metric’s creation date and time in UTC.
The Metric’s last modification date and time in UTC.
The Metric’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Metric’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Metric’s Dimensions. These Dimensions are available to Query Filters.
See Dimension
The Metric’s timestamp, if any. This is the same as its Data Pool’s timestamp, if any.
See Dimension
List the Boosters associated to the Metric.
Arguments
The Metric’s type. The different Metric types determine how the values are calculated.
See MetricType
The settings for the Metric. The settings are specific to the Metric’s type.
See MetricSettings
The Metric’s measure. Access this from the Metric’s settings object instead.
deprecated: Access this from the Metric’s settings object instead.
See Dimension
Query the Metric in counter format. Returns the Metric’s value for the given time range and filters.
deprecated: Use the top-level counter query instead
Arguments
See CounterInput
See CounterResponse
Query the Metric in time series format. Returns arrays of timestamps and the Metric’s values for the given time range and filters.
deprecated: Use the top-level timeSeries query instead
Arguments
See TimeSeriesInput
Query the Metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the Metric’s corresponding values for the given time range and filters.
deprecated: Use the top-level leaderboard query instead
Arguments
See LeaderboardInput
List the Policies associated to the Metric.
deprecated: Use Data Pool Access Policies instead
Arguments
See PolicyConnection
Whether or not access control is enabled for the Metric.
deprecated: Use Data Pool Access Policies instead
Creates a new Min Metric from the given Data Pool and returns the newly created Metric (or an error message if creating the Metric fails).
Arguments
The fields for creating a new Minimum (Min) Metric.
The Data Pool that powers this Metric.
The Metric’s unique name. If not specified, Propel will use the ID as the unique name.
The Metric’s description.
The Metric’s Filters, in the form of SQL. Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Filters are present, all records will be included.
The Metric’s Dimensions. Dimensions define the columns that will be available to filter the Metric at query time.
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
The column to calculate the minimum from.
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
The Metric’s Filters. Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Filters are present, all records will be included.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The result of a mutation which creates or modifies a Metric.
The Metric which was created or modified.
The Metric object.
A Metric is a business indicator measured over time.
The Metric’s unique identifier.
The Metric’s unique name.
The Metric’s description.
The Metric’s Environment.
See Environment
The Metric’s creation date and time in UTC.
The Metric’s last modification date and time in UTC.
The Metric’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Metric’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Metric’s Dimensions. These Dimensions are available to Query Filters.
See Dimension
The Metric’s timestamp, if any. This is the same as its Data Pool’s timestamp, if any.
See Dimension
List the Boosters associated to the Metric.
Arguments
The Metric’s type. The different Metric types determine how the values are calculated.
See MetricType
The settings for the Metric. The settings are specific to the Metric’s type.
See MetricSettings
The Metric’s measure. Access this from the Metric’s settings object instead.
deprecated: Access this from the Metric’s settings object instead.
See Dimension
Query the Metric in counter format. Returns the Metric’s value for the given time range and filters.
deprecated: Use the top-level counter query instead
Arguments
See CounterInput
See CounterResponse
Query the Metric in time series format. Returns arrays of timestamps and the Metric’s values for the given time range and filters.
deprecated: Use the top-level timeSeries query instead
Arguments
See TimeSeriesInput
Query the Metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the Metric’s corresponding values for the given time range and filters.
deprecated: Use the top-level leaderboard query instead
Arguments
See LeaderboardInput
List the Policies associated to the Metric.
deprecated: Use Data Pool Access Policies instead
Arguments
See PolicyConnection
Whether or not access control is enabled for the Metric.
deprecated: Use Data Pool Access Policies instead
Creates a new Max Metric from the given Data Pool and returns the newly created Metric (or an error message if creating the Metric fails).
Arguments
The fields for creating a new Maximum (Max) Metric.
The Data Pool that powers this Metric.
The Metric’s unique name. If not specified, Propel will use the ID as the unique name.
The Metric’s description.
The Metric’s Filters, in the form of SQL. Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Filters are present, all records will be included.
The Metric’s Dimensions. Dimensions define the columns that will be available to filter the Metric at query time.
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
The column to calculate the maximum from.
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
The Metric’s Filters. Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Filters are present, all records will be included.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The result of a mutation which creates or modifies a Metric.
The Metric which was created or modified.
The Metric object.
A Metric is a business indicator measured over time.
The Metric’s unique identifier.
The Metric’s unique name.
The Metric’s description.
The Metric’s Environment.
See Environment
The Metric’s creation date and time in UTC.
The Metric’s last modification date and time in UTC.
The Metric’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Metric’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Metric’s Dimensions. These Dimensions are available to Query Filters.
See Dimension
The Metric’s timestamp, if any. This is the same as its Data Pool’s timestamp, if any.
See Dimension
List the Boosters associated to the Metric.
Arguments
The Metric’s type. The different Metric types determine how the values are calculated.
See MetricType
The settings for the Metric. The settings are specific to the Metric’s type.
See MetricSettings
The Metric’s measure. Access this from the Metric’s settings object instead.
deprecated: Access this from the Metric’s settings object instead.
See Dimension
Query the Metric in counter format. Returns the Metric’s value for the given time range and filters.
deprecated: Use the top-level counter query instead
Arguments
See CounterInput
See CounterResponse
Query the Metric in time series format. Returns arrays of timestamps and the Metric’s values for the given time range and filters.
deprecated: Use the top-level timeSeries query instead
Arguments
See TimeSeriesInput
Query the Metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the Metric’s corresponding values for the given time range and filters.
deprecated: Use the top-level leaderboard query instead
Arguments
See LeaderboardInput
List the Policies associated to the Metric.
deprecated: Use Data Pool Access Policies instead
Arguments
See PolicyConnection
Whether or not access control is enabled for the Metric.
deprecated: Use Data Pool Access Policies instead
Creates a new Custom Metric from the given Data Pool and returns the newly created Metric (or an error message if creating the Metric fails).
Arguments
The fields for creating a new Custom Metric.
The Data Pool that powers this Metric.
The Metric’s unique name. If not specified, Propel will use the ID as the unique name.
The Metric’s description.
The Metric’s Filters, in the form of SQL. Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Filters are present, all records will be included.
The Metric’s Dimensions. Dimensions define the columns that will be available to filter the Metric at query time.
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
The expression that defines the aggregation function for this Metric.
The Metric’s Filters. Metric Filters allow defining a Metric with a subset of records from the given Data Pool. If no Filters are present, all records will be included.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
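As a sketch, creating a Custom Metric with an aggregation expression could look like this; the mutation name createCustomMetric and the expression field name are assumptions consistent with the descriptions above:

mutation {
  createCustomMetric(input: {
    dataPool: "DPO_EXAMPLE_ID"
    uniqueName: "net_revenue"
    expression: "SUM(price * quantity) - SUM(refund_amount)"  # the custom aggregation expression
    dimensions: [{ columnName: "country" }]
  }) {
    metric {
      id
      type   # would report CUSTOM
    }
  }
}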
The result of a mutation which creates or modifies a Metric.
The Metric which was created or modified.
The Metric object.
A Metric is a business indicator measured over time.
The Metric’s unique identifier.
The Metric’s unique name.
The Metric’s description.
The Metric’s Environment.
See Environment
The Metric’s creation date and time in UTC.
The Metric’s last modification date and time in UTC.
The Metric’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Metric’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Metric’s Dimensions. These Dimensions are available to Query Filters.
See Dimension
The Metric’s timestamp, if any. This is the same as its Data Pool’s timestamp, if any.
See Dimension
List the Boosters associated to the Metric.
Arguments
The Metric’s type. The different Metric types determine how the values are calculated.
See MetricType
The settings for the Metric. The settings are specific to the Metric’s type.
See MetricSettings
The Metric’s measure. Access this from the Metric’s settings object instead.
deprecated: Access this from the Metric’s settings object instead.
See Dimension
Query the Metric in counter format. Returns the Metric’s value for the given time range and filters.
deprecated: Use the top-level counter query instead
Arguments
See CounterInput
See CounterResponse
Query the Metric in time series format. Returns arrays of timestamps and the Metric’s values for the given time range and filters.
deprecated: Use the top-level timeSeries query instead
Arguments
See TimeSeriesInput
Query the Metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the Metric’s corresponding values for the given time range and filters.
deprecated: Use the top-level leaderboard query instead
Arguments
See LeaderboardInput
List the Policies associated to the Metric.
deprecated: Use Data Pool Access Policies instead
Arguments
See PolicyConnection
Whether or not access control is enabled for the Metric.
deprecated: Use Data Pool Access Policies instead
Modifies a Metric by ID with the provided unique name, description, and Dimensions. If any of the optional arguments are omitted, those properties will be unchanged on the Metric.
Arguments
The fields for modifying a Metric.
The ID of the Metric to modify.
The Metric’s new unique name.
The Metric’s new description.
The Metric’s new Dimensions. Used to add or remove Dimensions.
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
The Metric’s new Filters, in the form of SQL. Used to add or remove Metric Filters.
Enables or disables access control for the Metric.
The Metric’s new Filters. Used to add or remove Metric Filters.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
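A hedged sketch of a modification call follows, assuming the mutation is named modifyMetric and identifies the target Metric through a metric ID field (both names are assumptions); omitted optional fields stay unchanged:

mutation {
  modifyMetric(input: {
    metric: "MET_EXAMPLE_ID"   # ID of the Metric to modify (field name assumed)
    uniqueName: "order_count_v2"
    filterSql: "status = 'confirmed' AND value > 0"
  }) {
    metric {
      id
      uniqueName
    }
  }
}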
The result of a mutation which creates or modifies a Metric.
The Metric which was created or modified.
The Metric object.
A Metric is a business indicator measured over time.
The Metric’s unique identifier.
The Metric’s unique name.
The Metric’s description.
The Metric’s Environment.
See Environment
The Metric’s creation date and time in UTC.
The Metric’s last modification date and time in UTC.
The Metric’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Metric’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Metric’s Dimensions. These Dimensions are available to Query Filters.
See Dimension
The Metric’s timestamp, if any. This is the same as its Data Pool’s timestamp, if any.
See Dimension
List the Boosters associated to the Metric.
Arguments
The Metric’s type. The different Metric types determine how the values are calculated.
See MetricType
The settings for the Metric. The settings are specific to the Metric’s type.
See MetricSettings
The Metric’s measure. Access this from the Metric’s settings object instead.
deprecated: Access this from the Metric’s settings object instead.
See Dimension
Query the Metric in counter format. Returns the Metric’s value for the given time range and filters.
deprecated: Use the top-level counter query instead
Arguments
See CounterInput
See CounterResponse
Query the Metric in time series format. Returns arrays of timestamps and the Metric’s values for the given time range and filters.
deprecated: Use the top-level timeSeries query instead
Arguments
See TimeSeriesInput
Query the Metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the Metric’s corresponding values for the given time range and filters.
deprecated: Use the top-level leaderboard query instead
Arguments
See LeaderboardInput
List the Policies associated to the Metric.
deprecated: Use Data Pool Access Policies instead
Arguments
See PolicyConnection
Whether or not access control is enabled for the Metric.
deprecated: Use Data Pool Access Policies instead
Migrates a Metric from one Data Pool to another.
Arguments
The Metric object.
A Metric is a business indicator measured over time.
The Metric’s unique identifier.
The Metric’s unique name.
The Metric’s description.
The Metric’s Account.
The Account object.
The Account’s unique identifier.
The Metric’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Metric’s creation date and time in UTC.
The Metric’s last modification date and time in UTC.
The Metric’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Metric’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Metric’s Dimensions. These Dimensions are available to Query Filters.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
The Metric’s timestamp, if any. This is the same as its Data Pool’s timestamp, if any.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
List the Boosters associated to the Metric.
Arguments
The Metric’s type. The different Metric types determine how the values are calculated.
The available Metric types.
COUNT: Counts the number of records that match the Metric Filters. For time series, it will count the values for each time granularity.
SUM: Sums the values of the specified column for every record that matches the Metric Filters. For time series, it will sum the values for each time granularity.
COUNT_DISTINCT: Counts the number of distinct values in the specified column for every record that matches the Metric Filters. For time series, it will count the distinct values for each time granularity.
AVERAGE: Averages the values of the specified column for every record that matches the Metric Filters. For time series, it will average the values for each time granularity.
MIN: Selects the minimum value of the specified column for every record that matches the Metric Filters. For time series, it will select the minimum value for each time granularity.
MAX: Selects the maximum value of the specified column for every record that matches the Metric Filters. For time series, it will select the maximum value for each time granularity.
CUSTOM: Aggregates values based on the provided custom expression.
The settings for the Metric. The settings are specific to the Metric’s type.
A Metric’s settings, depending on its type.
The Metric’s measure. Access this from the Metric’s settings object instead.
deprecated: Access this from the Metric’s settings object instead.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
Query the Metric in counter format. Returns the Metric’s value for the given time range and filters.
deprecated: Use the top-level counter query instead
Arguments
The fields for querying a Metric in counter format.
A Metric’s counter query returns a single value over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
See MetricInput
The time range for calculating the counter.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The Query Filters to apply before retrieving the counter data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead
The Query Filters to apply before retrieving the counter data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead
See FilterInput
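Since the docs point to the top-level counter query, here is a minimal sketch of it. The TimeRangeInput shape (relative, n) and the value response field are assumptions:

query {
  counter(input: {
    metricName: "order_count"
    timeZone: "America/Los_Angeles"
    timeRange: { relative: LAST_N_DAYS, n: 30 }   # assumed TimeRangeInput shape
    filterSql: "country = 'US'"
  }) {
    value   # the single counter value (field name assumed)
  }
}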
Query the Metric in time series format. Returns arrays of timestamps and the Metric’s values for the given time range and filters.
deprecated: Use the top-level timeSeries query instead
Arguments
The fields for querying a Metric in time series format.
A Metric’s time series query returns the values over a given time range, aggregated by a given time granularity (day, month, or year, for example).
The Metric to query. It can be a pre-created one or it can be inlined here.
See MetricInput
The time range for calculating the time series.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The time granularity (hour, day, month, etc.) to aggregate the Metric values by.
The Query Filters to apply before retrieving the time series data, in the form of SQL. If no Query Filters are provided, all data is included.
Columns to group by.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead
The Query Filters to apply before retrieving the time series data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead
See FilterInput
The time series response object. It contains an array of time series labels and an array of Metric values for the given time range and Query Filters.
The time series labels.
The time series values.
The time series values for each group in groupBy, if specified.
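A corresponding sketch of the top-level timeSeries query; the granularity enum value, the TimeRangeInput shape, and the labels/values response fields are assumptions based on the descriptions above:

query {
  timeSeries(input: {
    metricName: "order_count"
    timeRange: { relative: LAST_N_DAYS, n: 7 }   # assumed TimeRangeInput shape
    granularity: DAY
    groupBy: ["country"]
  }) {
    labels   # one label per time bucket
    values   # one Metric value per time bucket
  }
}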
Query the Metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the Metric’s corresponding values for the given time range and filters.
deprecated: Use the top-level leaderboard query instead
Arguments
The fields for querying a Metric in leaderboard format.
A Metric’s leaderboard query returns an ordered table of Dimension and Metric values over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
See MetricInput
The time range for calculating the leaderboard.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
One or many Dimensions to group the Metric values by. Typically, Dimensions in a leaderboard are what you want to compare and rank.
See DimensionInput
The sort order of the rows. It can be ascending (ASC) or descending (DESC) order. Defaults to descending (DESC) order when not provided.
See Sort
The number of rows to be returned. It can be a number between 1 and 1,000.
The Query Filters to apply before retrieving the leaderboard data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead
The Query Filters to apply before retrieving the leaderboard data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead
See FilterInput
The leaderboard response object. It contains an array of headers and a table (array of rows) with the selected Dimensions and corresponding Metric values for the given time range and Query Filters.
The table headers. It contains the Dimension and Metric names.
An ordered array of rows. Each row contains the Dimension values and the corresponding Metric value. A Dimension value can be empty. A Metric value will never be empty.
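And a sketch of the top-level leaderboard query; the rowLimit field name, the TimeRangeInput shape, and the headers/rows response fields are assumptions:

query {
  leaderboard(input: {
    metricName: "order_count"
    timeRange: { relative: LAST_N_DAYS, n: 30 }   # assumed TimeRangeInput shape
    dimensions: [{ columnName: "country" }]       # what to compare and rank
    sort: DESC
    rowLimit: 10                                  # between 1 and 1,000
  }) {
    headers   # Dimension and Metric names
    rows      # one row per group, ordered by the Metric value
  }
}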
List the Policies associated to the Metric.
deprecated: Use Data Pool Access Policies instead
Arguments
See PolicyConnection
Whether or not access control is enabled for the Metric.
deprecated: Use Data Pool Access Policies instead
Deletes a Metric by ID and returns its ID if the Metric was deleted successfully.
Arguments
Deletes a Metric by unique name and returns its ID if the Metric was deleted successfully.
Arguments
Creates a new Booster for the given Metric and returns the newly created Booster.
A Booster significantly improves the query performance for a Metric.
Arguments
The fields for creating a new Booster.
Boosters can be understood as an aggregating index. The index is formed from left to right as follows:
- The Data Pool’s Tenant ID column (if present)
- Metric Filter columns (if present)
- Query Filter Dimensions (see dimensions)
- The Data Pool’s timestamp column
The Booster’s Metric.
Dimensions to include in the Booster.
Follow these guidelines when specifying Dimensions:
- Specify Dimensions in descending order of importance for filtering and in ascending order of cardinality.
- Take into consideration hierarchical relationships as well (for example, a “country” Dimension should appear before a “state” Dimension).
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
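Following the guidelines above, a Booster creation might look like this sketch; the mutation name createBooster and the input field names are assumptions:

mutation {
  createBooster(input: {
    metric: "MET_EXAMPLE_ID"
    # Most important filtering Dimensions first, in ascending order of cardinality;
    # hierarchical parents (country) before children (state).
    dimensions: [
      { columnName: "country" },
      { columnName: "state" }
    ]
  }) {
    booster {
      id
      status   # becomes LIVE once optimization completes
    }
  }
}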
The result of a mutation which creates or modifies a Booster.
The Booster which was created or modified.
Boosters allow you to optimize Metric Queries for a subset of commonly used Dimensions. A Metric can have one or many Boosters to optimize for the different Query patterns.
Boosters can be understood as an aggregating index. The index is formed from left to right as follows:
- The Data Pool’s Tenant ID column (if present)
- Metric Filter columns (if present)
- Query Filter Dimensions (see dimensions)
- The Data Pool’s timestamp column
The Booster’s unique identifier.
The Booster’s Environment.
See Environment
The Booster’s creation date and time in UTC.
The Booster’s last modification date and time in UTC.
The Booster’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Booster’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The status of the Booster (once LIVE it will be available for speeding up Metric queries).
See BoosterStatus
If the Booster fails during the optimization process, this field includes a descriptive error message.
See Error
When the Booster is OPTIMIZING, this represents its progress as a number from 0 to 1. In all other states, progress is null.
The number of records in the Booster.
The amount of storage in terabytes used by the Booster.
Deletes a Booster by ID and returns its ID if the Booster was deleted successfully.
A Booster significantly improves the query performance for a Metric.
Arguments
Schedules a new Deletion Job on the specified Data Pool.
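Example (a sketch; the mutation name createDeletionJob, the response field job, and the Data Pool ID are assumptions or placeholders):
mutation {
  createDeletionJob(input: {
    dataPool: "DPOXXXXXXXXXXXXXXXXXXXXXXXXXX"   # placeholder Data Pool ID
    filterSql: "status = 'cancelled'"           # data matching this filter is deleted
  }) {
    job {
      id
      progress
    }
  }
}
Replace the placeholder values with your actual configuration.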
Arguments
The fields for creating a Deletion Job.
The Data Pool whose data will be deleted.
The filters that will be used for deleting data, in the form of SQL. Data matching these filters will be deleted.
The list of filters that will be used for deleting data. Data matching these filters will be deleted.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The response returned by the Deletion Job.
The Deletion Job that was just created.
Deletion Job scheduled for a specific Data Pool.
The Deletion Job represents the asynchronous process of deleting data, given some filters, inside a Data Pool. It tracks the deletion process until it finishes, showing its progress and outcome.
The Deletion Job’s ID.
The Deletion Job’s creation date and time in UTC.
Who created the Deletion Job.
The Deletion Job’s last modification date and time in UTC.
Who last modified the Deletion Job.
Environment to which the Deletion Job belongs.
See Environment
The Data Pool whose records will be deleted by the Deletion Job. See DataPool
The filters that will be used for deleting data, in the form of SQL. Data matching the filters will be deleted.
The current progress of the Deletion Job, from 0.0 to 1.0.
The time at which the Deletion Job started.
The time at which the Deletion Job succeeded.
The time at which the Deletion Job failed.
Schedules a new AddColumnToDataPoolJob on the specified Data Pool.
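Example (a sketch; the mutation name createAddColumnToDataPoolJob, the field names dataPool and columnName, and the response field job are assumptions or placeholders):
mutation {
  createAddColumnToDataPoolJob(input: {
    dataPool: "DPOXXXXXXXXXXXXXXXXXXXXXXXXXX"   # placeholder Data Pool ID
    columnName: "discount_code"
    columnType: STRING
  }) {
    job {
      id
      progress
    }
  }
}
Replace the placeholder values with your actual configuration.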
Arguments
The fields for creating an Add Column Job.
The Data Pool to which the column will be added.
Name of the new column.
Type of the new column.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
The ClickHouse type of the new column when columnType is set to CLICKHOUSE.
JSON property to which the new column corresponds.
The response returned by the Add Column Job.
The AddColumnToDataPool Job that was just created.
AddColumnToDataPoolJob scheduled for a specific Data Pool.
The Add Column Job represents the asynchronous process of adding a column, given its name and type, to a Data Pool. It tracks the process until it finishes, showing its progress and outcome.
The AddColumnToDataPoolJob’s ID.
The AddColumnToDataPoolJob’s creation date and time in UTC.
Who created the AddColumnToDataPoolJob.
The AddColumnToDataPoolJob’s last modification date and time in UTC.
Who modified the AddColumnToDataPoolJob last.
Environment to which the AddColumnToDataPoolJob belongs.
See Environment
Name of the new column.
Type of the new column.
See ColumnType
The ClickHouse type of the new column when columnType is set to CLICKHOUSE.
JSON property to which the new column corresponds.
The current progress of the AddColumnToDataPool Job, from 0.0 to 1.0.
The time at which the AddColumnToDataPool Job started.
The time at which the AddColumnToDataPool Job succeeded.
The time at which the AddColumnToDataPool Job failed.
Schedules a new UpdateDataPoolRecords Job on the specified Data Pool.
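Example (a sketch; the mutation name createUpdateDataPoolRecordsJob, the set field name, and the response field job are assumptions; the ID and SQL are placeholders):
mutation {
  createUpdateDataPoolRecordsJob(input: {
    dataPool: "DPOXXXXXXXXXXXXXXXXXXXXXXXXXX"   # placeholder Data Pool ID
    filterSql: "status = 'pending'"             # records matching this filter are updated
    set: [{ column: "status", expression: "'completed'" }]
  }) {
    job {
      id
      progress
    }
  }
}
Replace the placeholder values with your actual configuration.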
Arguments
The fields for creating an Update Data Pool Records Job.
The Data Pool whose records will be updated.
The filters that will be used for updating records, in the form of SQL. Records matching these filters will be updated.
Describes how the job will update the records.
The fields for specifying how the job updates each matching record: the column to update and the expression that computes its new value. For example, to set the status column to 'completed':
{
  "column": "status",
  "expression": "'completed'"
}
To increment the counter column by one:
{
  "column": "counter",
  "expression": "counter + 1"
}
To combine the first_name and last_name columns into full_name:
{
  "column": "full_name",
  "expression": "concat(first_name, ' ', last_name)"
}
The name of the column to update.
The value to which the column will be updated. Once evaluated, it should be of the same data type as the column.
The list of filters that will be used for updating records. Records matching these filters will be updated.
deprecated: Use filterSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The response returned by the Update Data Pool Records Job.
The UpdateDataPoolRecords Job that was just created.
UpdateDataPoolRecords Job scheduled for a specific Data Pool.
The UpdateDataPoolRecords Job represents the asynchronous process of updating records, given some filters, inside a Data Pool. It tracks the process until it finishes, showing its progress and outcome.
The UpdateDataPoolRecords Job’s ID
The UpdateDataPoolRecords Job’s creation date and time in UTC
Who created the UpdateDataPoolRecords Job
The UpdateDataPoolRecords Job’s last modification date and time in UTC
Who last modified the UpdateDataPoolRecords Job
Environment to which the UpdateDataPoolRecords Job belongs
See Environment
The Data Pool whose records will be updated by the UpdateDataPoolRecords Job. See DataPool
The filters that will be used for updating data, in the form of SQL. Data matching the filters will be updated.
Describes how the job will update the records.
The current progress of the UpdateDataPoolRecords Job, from 0.0 to 1.0.
The time at which the UpdateDataPoolRecords Job started.
The time at which the UpdateDataPoolRecords Job succeeded.
The time at which the UpdateDataPoolRecords Job failed.
Manually trigger a Sync for a Data Pool.
Arguments
The Sync object.
This represents the process of syncing data from your Data Source (for example, a Snowflake data warehouse) to your Data Pool.
The Sync’s unique identifier.
The Sync’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Sync’s creation date and time in UTC.
The Sync’s last modification date and time in UTC.
The Sync’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Sync’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Sync’s Data Pool’s Data Source.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
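For illustration, here is a sketch of forward pagination over dataPools. The top-level dataSource query and the nodes and pageInfo selections are assumptions patterned on typical GraphQL connections; the ID and cursor are placeholders.
query {
  dataSource(id: "DSOXXXXXXXXXXXXXXXXXXXXXXXXXX") {   # placeholder Data Source ID
    dataPools(first: 10, after: "endCursorFromPreviousPage") {
      nodes {
        id
        uniqueName
      }
      pageInfo {
        hasNextPage
        endCursor   # pass this as `after` to fetch the next page
      }
    }
  }
}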
The number of new, updated, and deleted records contained within the Sync, if known. This excludes filtered records.
The (compressed) size of the Sync, in bytes, if known.
The status of the Sync (all Syncs begin as SYNCING before transitioning to SUCCEEDED or FAILED).
The status of a Sync.
SYNCING: Propel is actively syncing records contained within the Sync.
SUCCEEDED: The Sync succeeded. Propel successfully synced all records contained within the Sync.
FAILED: The Sync failed. Propel failed to sync some or all records contained within the Sync.
DELETING: Propel is deleting the Sync.
The time at which the Sync started.
The time at which the Sync succeeded.
The time at which the Sync failed.
The number of new records contained within the Sync, if known. This excludes filtered records.
deprecated: All records are considered to be processed; see processedRecords instead
The number of updated records contained within the Sync, if known. This excludes filtered records.
deprecated: All records are considered to be processed; see processedRecords instead
The number of deleted records contained within the Sync, if known. This excludes filtered records.
deprecated: All records are considered to be processed; see processedRecords instead
The number of filtered records contained within the Sync, due to issues such as a missing timestamp Dimension, if any are known to be invalid.
deprecated: All records are considered to be processed; see processedRecords instead
Manually trigger a re-Sync for a Data Pool.
Arguments
The Sync object.
This represents the process of syncing data from your Data Source (for example, a Snowflake data warehouse) to your Data Pool.
The Sync’s unique identifier.
The Sync’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Sync’s creation date and time in UTC.
The Sync’s last modification date and time in UTC.
The Sync’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Sync’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Sync’s Data Pool’s Data Source.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
The number of new, updated, and deleted records contained within the Sync, if known. This excludes filtered records.
The (compressed) size of the Sync, in bytes, if known.
The status of the Sync (all Syncs begin as SYNCING before transitioning to SUCCEEDED or FAILED).
The status of a Sync.
SYNCING: Propel is actively syncing records contained within the Sync.
SUCCEEDED: The Sync succeeded. Propel successfully synced all records contained within the Sync.
FAILED: The Sync failed. Propel failed to sync some or all records contained within the Sync.
DELETING: Propel is deleting the Sync.
The time at which the Sync started.
The time at which the Sync succeeded.
The time at which the Sync failed.
The number of new records contained within the Sync, if known. This excludes filtered records.
deprecated: All records are considered to be processed; see processedRecords instead
The number of updated records contained within the Sync, if known. This excludes filtered records.
deprecated: All records are considered to be processed; see processedRecords instead
The number of deleted records contained within the Sync, if known. This excludes filtered records.
deprecated: All records are considered to be processed; see processedRecords instead
The number of filtered records contained within the Sync, due to issues such as a missing timestamp Dimension, if any are known to be invalid.
deprecated: All records are considered to be processed; see processedRecords instead
Creates a new Materialized View. Returns the newly created Materialized View (or an error message if creating the Materialized View fails).
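Example (a sketch; the mutation name createMaterializedView and the response field materializedView are assumptions, and the unique name and SQL are placeholders):
mutation {
  createMaterializedView(input: {
    uniqueName: "daily_order_totals"
    sql: "SELECT toStartOfDay(ordered_at) AS day, sum(amount) AS total FROM orders GROUP BY day"
  }) {
    materializedView {
      id
      uniqueName
      sql
    }
  }
}
Replace the placeholder values with your actual configuration.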
Arguments
The fields for creating a Materialized View.
The Materialized View’s unique name. If not specified, Propel will set the ID as the unique name.
The Materialized View’s description.
The SQL that the Materialized View will execute.
By default, Propel creates a destination Data Pool with default settings for the Materialized View. You can instead target an existing Data Pool, or customize the new Data Pool’s engine settings, by setting this field.
The fields for targeting an existing Data Pool or a new Data Pool.
If specified, the Materialized View will target an existing Data Pool. Ensure the Data Pool’s schema is compatible with your Materialized View’s SQL statement.
See DataPoolInput
If specified, the Materialized View will create and target a new Data Pool. You can further customize the new Data Pool’s engine settings.
By default, a Materialized View only applies to records added after its creation. This option allows you to backfill all the data that was present before the Materialized View was created.
Whether historical data should be backfilled or not.
Defines the order in which historical data is backfilled. Defaults to OLDEST_FIRST if not specified.
See PartitionOrder
The result of a mutation which creates or modifies a Materialized View.
The Materialized View which was created or modified.
The Materialized View’s unique identifier.
The Materialized View’s unique name.
The Materialized View’s description.
The Materialized View’s Environment.
See Environment
The Materialized View’s creation date and time in UTC.
The Materialized View’s last modification date and time in UTC.
The Materialized View’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Materialized View’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The SQL that the Materialized View executes.
The Materialized View’s destination (AKA “target”) Data Pool.
See DataPool
Modifies a Materialized View. If any of the optional arguments are omitted, those properties will be unchanged on the Materialized View.
Arguments
The result of a mutation which creates or modifies a Materialized View.
The Materialized View which was created or modified.
The Materialized View’s unique identifier.
The Materialized View’s unique name.
The Materialized View’s description.
The Materialized View’s Environment.
See Environment
The Materialized View’s creation date and time in UTC.
The Materialized View’s last modification date and time in UTC.
The Materialized View’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Materialized View’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The SQL that the Materialized View executes.
The Materialized View’s destination (AKA “target”) Data Pool.
See DataPool
Deletes a Materialized View and returns its ID if the Materialized View was deleted successfully.
Note that deleting a Materialized View does not delete its target Data Pool. If you want to delete its target Data Pool, you must use the deleteDataPool mutation.
Arguments
Creates a new Amazon Data Firehose Data Source from the given settings.
Returns the newly created Data Source (or an error message if creating the Data Source fails).
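Example (a sketch; the mutation name createAmazonDataFirehoseDataSource and the connectionSettings field names are assumptions patterned on the argument descriptions below; all values are placeholders):
mutation {
  createAmazonDataFirehoseDataSource(input: {
    uniqueName: "example_firehose_data_source"
    description: "My Amazon Data Firehose Data Source"
    connectionSettings: {
      basicAuth: { username: "user", password: "password" }   # same credentials as the X-Amz-Firehose-Access-Key header
      timestamp: "event_time"                                 # primary timestamp column
    }
  }) {
    dataSource {
      id
      uniqueName
    }
  }
}
Replace the placeholder values with your actual configuration.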
Arguments
The Amazon Data Firehose Data Source’s connection settings
The Amazon Data Firehose Data Source’s connection settings.
Enables or disables access control for the Data Pool. If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
HTTP basic access authentication credentials. You must configure these same credentials to be included in the X-Amz-Firehose-Access-Key header when Amazon Data Firehose issues requests to its custom HTTP endpoint.
Additional columns for the table in Propel.
Override the Data Pool’s table settings. These describe how the Data Pool’s table is created in ClickHouse, and a default will be chosen based on the Data Pool’s timestamp value, if any. You can override these defaults in order to specify a custom table engine, custom ORDER BY, etc.
The primary timestamp column, if any.
The Amazon Data Firehose Data Source’s description.
The Amazon Data Firehose Data Source’s unique name. If not specified, Propel will set the ID as the unique name.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
Selects a Data Source by its ID or unique name and modifies it to have the given unique name, description, and connection settings.
If any of the optional arguments are omitted, those properties will be unchanged on the Data Source.
Arguments
The Amazon Data Firehose Data Source’s new connection settings. If not provided this property will not be modified.
The Amazon Data Firehose Data Source’s connection settings.
HTTP basic access authentication credentials. You must configure these same credentials to be included in the X-Amz-Firehose-Access-Key header when Amazon Data Firehose issues requests to its custom HTTP endpoint. If not provided this property will not be modified.
The Amazon Data Firehose Data Source’s new description. If not provided this property will not be modified.
The Amazon Data Firehose Data Source’s new unique name. If not provided this property will not be modified.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
Creates a new Amazon DynamoDB Data Source from the given settings.
Returns the newly created Data Source (or an error message if creating the Data Source fails).
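Example (a sketch; the mutation name createAmazonDynamoDbDataSource and the connectionSettings field names are assumptions patterned on the argument descriptions below; all values are placeholders):
mutation {
  createAmazonDynamoDbDataSource(input: {
    uniqueName: "example_dynamodb_data_source"
    description: "My Amazon DynamoDB Data Source"
    connectionSettings: {
      basicAuth: { username: "user", password: "password" }   # same credentials as the X-Amz-Firehose-Access-Key header
      timestamp: "event_time"                                 # primary timestamp column
    }
  }) {
    dataSource {
      id
      uniqueName
    }
  }
}
Replace the placeholder values with your actual configuration.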
Arguments
The Amazon DynamoDB Data Source’s connection settings
The Amazon DynamoDB Data Source’s connection settings.
Enables or disables access control for the Data Pool. If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
HTTP basic access authentication credentials. You must configure these same credentials to be included in the X-Amz-Firehose-Access-Key header when Amazon Data Firehose transmits records from your DynamoDB table to its custom HTTP endpoint.
Additional columns for the table in Propel.
Override the Data Pool’s table settings. These describe how the Data Pool’s table is created in ClickHouse, and a default will be chosen based on the Data Pool’s timestamp value, if any. You can override these defaults in order to specify a custom table engine, custom ORDER BY, etc.
The primary timestamp column, if any.
The Amazon DynamoDB Data Source’s description.
The Amazon DynamoDB Data Source’s unique name. If not specified, Propel will set the ID as the unique name.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
Selects a Data Source by its ID or unique name and modifies it to have the given unique name, description, and connection settings.
If any of the optional arguments are omitted, those properties will be unchanged on the Data Source.
Arguments
The Amazon DynamoDB Data Source’s new connection settings. If not provided this property will not be modified.
The Amazon DynamoDB Data Source’s connection settings.
HTTP basic access authentication credentials. You must configure these same credentials to be included in the X-Amz-Firehose-Access-Key header when Amazon Data Firehose transmits records from your DynamoDB table to its custom HTTP endpoint. If not provided this property will not be modified.
The Amazon DynamoDB Data Source’s new description. If not provided this property will not be modified.
The Amazon DynamoDB Data Source’s new unique name. If not provided this property will not be modified.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
This mutation creates a new ClickHouse Data Source.
The mutation returns the newly created Data Source (or an error message if creating the Data Source fails).
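Example (a sketch following the shape of the PostgreSQL example later in this document; the mutation name createClickHouseDataSource and the connection setting field names are assumptions patterned on the argument descriptions below):
mutation {
  createClickHouseDataSource(input: {
    uniqueName: "example_clickhouse_data_source"
    description: "My ClickHouse Data Source"
    connectionSettings: {
      url: "https://clickhouse.example.com:8443"
      database: "example_database"
      user: "user"
      password: "password"
    }
  }) {
    dataSource {
      id
      uniqueName
    }
  }
}
Replace the placeholder values with your actual configuration.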
Arguments
The ClickHouse Data Source’s connection settings
The ClickHouse Data Source connection settings.
Which database to connect to.
The password for the provided user.
The URL where the ClickHouse host is listening for HTTP[S] connections.
The user for authenticating against the ClickHouse host.
The ClickHouse Data Source’s description.
The ClickHouse Data Source’s unique name. If not specified, Propel will set the ID as the unique name.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
This mutation selects a Data Source by its ID or unique name and modifies it to have the given unique name, description, and connection settings.
If any of the optional arguments are omitted, those properties will be unchanged on the Data Source.
Arguments
The ClickHouse Data Source’s new connection settings. If not provided this property will not be modified.
The ClickHouse Data Source connection settings.
Which database to connect to. If not provided this property will not be modified.
The password for the provided user. If not provided this property will not be modified.
The URL where the ClickHouse host is listening for HTTP[S] connections. If not provided this property will not be modified.
The user for authenticating against the ClickHouse host. If not provided this property will not be modified.
The ClickHouse Data Source’s new description. If not provided this property will not be modified.
The ClickHouse Data Source’s new unique name. If not provided this property will not be modified.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
Creates a new HTTP Data Source from the given settings.
Returns the newly created Data Source (or an error message if creating the Data Source fails).
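Example (a sketch; the mutation name createHttpDataSource and the table and column input shapes are assumptions patterned on the argument descriptions below; all values are placeholders):
mutation {
  createHttpDataSource(input: {
    uniqueName: "example_http_data_source"
    description: "My HTTP Data Source"
    connectionSettings: {
      basicAuth: { username: "user", password: "password" }
      tables: [{
        name: "events"
        columns: [
          { name: "event_time", type: TIMESTAMP, nullable: false }
          { name: "payload", type: JSON, nullable: true }
        ]
      }]
    }
  }) {
    dataSource {
      id
      uniqueName
    }
  }
}
Replace the placeholder values with your actual configuration.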
Arguments
The HTTP Data Source’s connection settings
The HTTP Data Source connection settings.
The HTTP Basic authentication settings for uploading new data.
If this parameter is not provided, anyone with the URL to your tables will be able to upload data. While it’s OK to test without HTTP Basic authentication, we recommend enabling it.
The HTTP Data Source’s tables.
The HTTP Data Source’s description.
The HTTP Data Source’s unique name. If not specified, Propel will set the ID as the unique name.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
This mutation selects a Data Source by its ID or unique name and modifies it to have the given unique name, description, and connection settings.
If any of the optional arguments are omitted, those properties will be unchanged on the Data Source.
Arguments
The HTTP Data Source’s new connection settings. If not provided this property will not be modified.
The HTTP Data Source connection settings.
The HTTP Basic authentication settings for uploading new data.
If this parameter is not provided, anyone with the URL to your tables will be able to upload data. While it’s OK to test without HTTP Basic authentication, we recommend enabling it. If not provided this property will not be modified.
Set this to false to disable HTTP Basic authentication. Any previously stored HTTP Basic authentication settings will be cleared out. If not provided this property will not be modified.
The HTTP Data Source’s tables. If not provided this property will not be modified.
The HTTP Data Source’s new description. If not provided this property will not be modified.
The HTTP Data Source’s new unique name. If not provided this property will not be modified.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
This mutation creates a new Kafka Data Source.
The mutation returns the newly created Data Source (or an error message if creating the Data Source fails).
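Example (a sketch; the mutation name createKafkaDataSource and the connection setting field names (auth, bootstrapServers, tls) are assumptions patterned on the argument descriptions below; all values are placeholders):
mutation {
  createKafkaDataSource(input: {
    uniqueName: "example_kafka_data_source"
    description: "My Kafka Data Source"
    connectionSettings: {
      auth: "SCRAM-SHA-256"                # or SCRAM-SHA-512, PLAIN, NONE
      bootstrapServers: ["broker-1.example.com:9092", "broker-2.example.com:9092"]
      user: "user"
      password: "password"
      tls: true                            # whether the connection is encrypted
    }
  }) {
    dataSource {
      id
      uniqueName
    }
  }
}
Replace the placeholder values with your actual configuration.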
Arguments
The Kafka Data Source’s connection settings
The Kafka Data Source connection settings.
The type of authentication to use. Can be SCRAM-SHA-256, SCRAM-SHA-512, PLAIN, or NONE.
The bootstrap server(s) to connect to.
The password for the provided user.
Whether the connection to the Kafka servers is encrypted or not.
The user for authenticating against the Kafka servers.
The Kafka Data Source’s description.
The Kafka Data Source’s unique name. If not specified, Propel will set the ID as the unique name.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
This mutation selects a Data Source by its ID or unique name and modifies it to have the given unique name, description, and connection settings.
If any of the optional arguments are omitted, those properties will be unchanged on the Data Source.
Arguments
The Kafka Data Source’s new connection settings. If not provided this property will not be modified.
The Kafka Data Source connection settings.
The type of authentication to use. Can be SCRAM-SHA-256, SCRAM-SHA-512, PLAIN, or NONE. If not provided this property will not be modified.
The bootstrap server(s) to connect to. If not provided this property will not be modified.
The password for the provided user. If not provided this property will not be modified.
Whether the connection to the Kafka servers is encrypted or not. If not provided this property will not be modified.
The user for authenticating against the Kafka servers. If not provided this property will not be modified.
The Kafka Data Source’s new description. If not provided this property will not be modified.
The Kafka Data Source’s new unique name. If not provided this property will not be modified.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
Creates a new PostgreSQL Data Source.
Returns the newly created Data Source (or an error message if creating the Data Source fails).
Example:
mutation {
createPostgreSqlDataSource(input: {
uniqueName: "example_postgresql_data_source"
description: "My PostgreSQL Data Source"
connectionSettings: {
database: "example_database"
schema: "public"
host: "postgresql.example.com"
port: 5432
user: "user"
password: "password"
}
}) {
dataSource {
id
uniqueName
}
}
}
Replace the placeholder values with your actual configuration.
Arguments
The PostgreSQL Data Source’s connection settings
The PostgreSQL Data Source connection settings.
Which database to connect to
The host where PostgreSQL is listening
The password for the provided user
The port where PostgreSQL is listening (usually 5432)
Which schema to use
The user for authenticating against PostgreSQL
The PostgreSQL Data Source’s description.
The PostgreSQL Data Source’s unique name. If not specified, Propel will set the ID as the unique name.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
Selects a Data Source by its ID or unique name and modifies it to have the given unique name, description, and connection settings.
If any of the optional arguments are omitted, those properties will be unchanged on the Data Source.
Arguments
The PostgreSQL Data Source’s new connection settings. If not provided this property will not be modified.
The PostgreSQL Data Source connection settings.
Which database to connect to. If not provided this property will not be modified.
The host where PostgreSQL is listening. If not provided this property will not be modified.
The password for the provided user. If not provided this property will not be modified.
The port where PostgreSQL is listening (usually 5432). If not provided this property will not be modified.
Which schema to use. If not provided this property will not be modified.
The user for authenticating against PostgreSQL. If not provided this property will not be modified.
The PostgreSQL Data Source’s new description. If not provided this property will not be modified.
The PostgreSQL Data Source’s new unique name. If not provided this property will not be modified.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
Creates a new Amazon S3 Data Source pointed at the specified Amazon S3 bucket.
Returns the newly created Data Source (or an error message if creating the Data Source fails).
Arguments
The Amazon S3 Data Source’s connection settings.
The connection settings for an Amazon S3 Data Source. These include the Amazon S3 bucket name, the AWS access key ID, and the tables (along with their paths). We do not allow fetching the AWS secret access key after it has been set.
The AWS access key ID for an IAM user with sufficient access to the Amazon S3 bucket.
The AWS secret access key for an IAM user with sufficient access to the Amazon S3 bucket.
The name of the Amazon S3 bucket.
The Amazon S3 Data Source’s tables.
The Amazon S3 Data Source’s description.
The Amazon S3 Data Source’s unique name. If not specified, Propel will set the ID as the unique name.
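A hedged creation sketch, assuming the mutation is named createS3DataSource and that its input field names follow the argument descriptions above (all identifiers and values below are placeholders):
mutation {
  createS3DataSource(
    input: {
      uniqueName: "my_s3_data_source"
      description: "CSV exports bucket"
      connectionSettings: {
        bucket: "my-example-bucket"
        awsAccessKeyId: "AWS_ACCESS_KEY_ID"
        awsSecretAccessKey: "AWS_SECRET_ACCESS_KEY"
        tables: [{ name: "events", path: "events/" }]
      }
    }
  ) {
    dataSource {
      id
      status
    }
  }
}
Remember that the AWS secret access key cannot be fetched back after it has been set.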
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
This mutation selects a Data Source by its ID or unique name and modifies it to have the given unique name, description, and connection settings.
If any of the optional arguments are omitted, those properties will be unchanged on the Data Source.
Arguments
The Amazon S3 Data Source’s new connection settings. If not provided this property will not be modified.
The connection settings for an Amazon S3 Data Source. These include the Amazon S3 bucket name, the AWS access key ID, and the tables (along with their paths). We do not allow fetching the AWS secret access key after it has been set.
The AWS access key ID for an IAM user with sufficient access to the Amazon S3 bucket. If not provided this property will not be modified.
The AWS secret access key for an IAM user with sufficient access to the Amazon S3 bucket. If not provided this property will not be modified.
The name of the Amazon S3 bucket. If not provided this property will not be modified.
The Amazon S3 Data Source’s tables. If not provided this property will not be modified.
The Amazon S3 Data Source’s new description. If not provided this property will not be modified.
The Amazon S3 Data Source’s new unique name. If not provided this property will not be modified.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
Creates a new Twilio Segment Data Source from the given settings.
Returns the newly created Data Source (or an error message if creating the Data Source fails).
Arguments
The Twilio Segment Data Source’s connection settings.
The Twilio Segment Data Source connection settings.
Enables or disables access control for the Data Pool. If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
The HTTP basic authentication settings for the Twilio Segment Data Source URL. If this parameter is not provided, anyone with the webhook URL will be able to send events. While it’s OK to test without HTTP Basic authentication, we recommend enabling it.
The additional columns for the table in Propel.
Override the Data Pool’s table settings. These describe how the Data Pool’s table is created in ClickHouse, and a default will be chosen based on the Data Pool’s timestamp and uniqueId values, if any. You can override these defaults in order to specify a custom table engine, custom ORDER BY, etc.
The primary timestamp column, if any.
The Twilio Segment Data Source’s description.
The Twilio Segment Data Source’s unique name. If not specified, Propel will set the ID as the unique name.
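As a sketch (the mutation name createTwilioSegmentDataSource and the input field names are assumptions based on the argument descriptions above), a creation request with HTTP Basic authentication enabled might look like:
mutation {
  createTwilioSegmentDataSource(
    input: {
      uniqueName: "segment_events"
      connectionSettings: {
        basicAuth: { username: "segment", password: "WEBHOOK_PASSWORD" }
        timestamp: "timestamp"
      }
    }
  ) {
    dataSource {
      id
      status
    }
  }
}
As noted above, enabling HTTP Basic authentication is recommended so that not everyone with the webhook URL can send events.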
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
Selects a Data Source by its ID or unique name and modifies it to have the given unique name, description, and connection settings.
If any of the optional arguments are omitted, those properties will be unchanged on the Data Source.
Arguments
The Twilio Segment Data Source’s new connection settings. If not provided this property will not be modified.
The Twilio Segment Data Source connection settings.
The HTTP basic authentication settings for the Twilio Segment Data Source URL. If this parameter is not provided, anyone with the webhook URL will be able to send events. While it’s OK to test without HTTP Basic authentication, we recommend enabling it. If not provided this property will not be modified.
Set this to false to disable HTTP Basic authentication. Any previously stored HTTP Basic authentication settings will be cleared out. If not provided this property will not be modified.
The Twilio Segment Data Source’s new description. If not provided this property will not be modified.
The Twilio Segment Data Source’s new unique name. If not provided this property will not be modified.
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
Creates a new Webhook Data Source from the given settings.
Returns the newly created Data Source (or an error message if creating the Data Source fails).
Arguments
The Webhook Data Source’s connection settings.
The Webhook Data Source connection settings.
Enables or disables access control for the Data Pool. If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
The HTTP basic authentication settings for the Webhook Data Source URL. If this parameter is not provided, anyone with the webhook URL will be able to send events. While it’s OK to test without HTTP Basic authentication, we recommend enabling it.
The additional columns for the table in Propel.
Override the Data Pool’s table settings. These describe how the Data Pool’s table is created in ClickHouse, and a default will be chosen based on the Data Pool’s timestamp and uniqueId values, if any. You can override these defaults in order to specify a custom table engine, custom ORDER BY, etc.
The tenant ID column, if any.
The primary timestamp column, if any.
The unique ID column, if any. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated.
The Webhook Data Source’s description.
The Webhook Data Source’s unique name. If not specified, Propel will set the ID as the unique name.
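A hedged sketch, assuming the mutation is named createWebhookDataSource and that its input mirrors the argument descriptions above (column type names follow the Propel data types; every identifier below is a placeholder):
mutation {
  createWebhookDataSource(
    input: {
      uniqueName: "orders_webhook"
      connectionSettings: {
        timestamp: "received_at"
        uniqueId: "order_id"
        columns: [
          { name: "received_at", type: TIMESTAMP, nullable: false }
          { name: "order_id", type: STRING, nullable: false }
          { name: "amount", type: DOUBLE, nullable: true }
        ]
        basicAuth: { username: "propel", password: "WEBHOOK_PASSWORD" }
      }
    }
  ) {
    dataSource {
      id
      status
    }
  }
}
Propel composes the primary key from the primary timestamp (received_at) and the unique ID (order_id) to decide whether records are inserted, deleted, or updated.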
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
Selects a Data Source by its ID or unique name and modifies it to have the given unique name, description, and connection settings.
If any of the optional arguments are omitted, those properties will be unchanged on the Data Source.
Arguments
The Webhook Data Source’s new connection settings. If not provided this property will not be modified.
The Webhook Data Source connection settings.
The HTTP basic authentication settings for the Webhook Data Source URL. If this parameter is not provided, anyone with the webhook URL will be able to send events. While it’s OK to test without HTTP Basic authentication, we recommend enabling it. If not provided this property will not be modified.
Set this to false to disable HTTP Basic authentication. Any previously stored HTTP Basic authentication settings will be cleared out. If not provided this property will not be modified.
The Webhook Data Source’s new description. If not provided this property will not be modified.
The Webhook Data Source’s new unique name. If not provided this property will not be modified.
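For example, to disable HTTP Basic authentication on an existing Webhook Data Source, a sketch might look like the following. The mutation name modifyWebhookDataSource and the basicAuthEnabled flag are assumptions based on the descriptions above; any previously stored credentials would be cleared.
mutation {
  modifyWebhookDataSource(
    input: {
      id: "DSOXXXXX"
      connectionSettings: { basicAuthEnabled: false }
    }
  ) {
    dataSource {
      id
    }
  }
}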
The result of a mutation which creates or modifies a Data Source.
The Data Source which was created or modified.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
Deprecated. Use retryDataPoolSetup instead.
Arguments
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
The status of a Data Pool.
CREATED: The Data Pool has been created and will be set up soon.
PENDING: Propel is attempting to set up the Data Pool.
LIVE: The Data Pool is set up and serving data. Check its Syncs to monitor data ingestion.
SETUP_FAILED: The Data Pool setup failed. Check its Setup Tasks before re-attempting setup.
CONNECTING
CONNECTED
BROKEN
PAUSING
PAUSED
DELETING: Propel is deleting the Data Pool and all of its associated data.
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The Data Pool’s primary timestamp column, if any.
A Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The primary timestamp column’s type.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
The list of measures (numeric columns) in the Data Pool.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
The Data Pool Setup Task object.
Data Pool Setup Tasks are executed when setting up your Data Pool. They ensure Propel will be able to sync records from your Data Source to your Data Pool.
The exact Setup Tasks to perform vary by Data Source. For example, Data Pools pointing to Snowflake-backed Data Sources will have their own specific Setup Tasks.
The name of the Data Pool Setup Task to be performed.
A description of the Data Pool Setup Task to be performed.
The status of the Data Pool Setup Task (all setup tasks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Pool Setup Task failed, this field includes a descriptive error message.
See Error
The time at which the Data Pool Setup Task was completed.
Settings related to Data Pool syncing.
Settings related to Data Pool syncing.
Indicates whether syncing is enabled or disabled.
The syncing interval.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
The date and time of the most recent Sync in UTC.
The list of Syncs of the Data Pool.
Arguments
The filter to apply when listing the Syncs for a Data Pool.
EMPTY: Returns only Syncs with empty records.
NOT_EMPTY: Returns only Syncs that contain one or more records.
ALL: Returns all Syncs, regardless of whether they contain records or not.
See SyncConnection
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
Response returned by the validateExpression query for validating expressions in Custom Metrics.
Returns whether the expression is valid or not with a reason explaining why.
True if the expression is valid, false otherwise.
The reason why the expression is not valid, in case it isn’t; null otherwise.
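For instance, validating a custom expression from within a Data Pool query might look like this sketch. The dataPool query field and the valid and reason selections are assumed from the descriptions above; the expression itself is a placeholder.
query {
  dataPool(id: "DPOXXXXX") {
    validateExpression(expression: "SUM(price) - SUM(discount)") {
      valid
      reason
    }
  }
}
If valid is false, reason explains why the expression was rejected.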
The Data Pool’s table settings.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
See TableEngine
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The Data Pool’s columns that participate in its PARTITION BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its PRIMARY KEY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s columns that participate in its ORDER BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead.
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead.
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
A Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
Creates a new Policy granting an Application access to a Metric’s data.
deprecated: Use Data Pool Access Policies instead.
Arguments
The fields for creating a Policy.
The Metric to which the Policy will be applied.
The type of Policy to create.
The types of Policies that can be applied to a Metric.
ALL_ACCESS: Grants access to all Metric data.
TENANT_ACCESS: Grants access to a specified tenant’s Metric data.
The Application that will be granted access to the Metric.
The result of a mutation which creates or modifies a Policy.
The Policy which was created or modified.
The Policy type. It governs an Application’s access to a Metric’s data.
The Policy’s unique identifier.
The Policy’s Environment.
See Environment
The Policy’s creation date and time in UTC.
The Policy’s last modification date and time in UTC.
The Policy’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Policy’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The type of Policy.
See PolicyType
The Application that is granted access. See Application
Modifies an existing Policy. You can modify the Application’s level of access to the Metric’s data.
deprecated: Use Data Pool Access Policies instead.
Arguments
The result of a mutation which creates or modifies a Policy.
The Policy which was created or modified.
The Policy type. It governs an Application’s access to a Metric’s data.
The Policy’s unique identifier.
The Policy’s Environment.
See Environment
The Policy’s creation date and time in UTC.
The Policy’s last modification date and time in UTC.
The Policy’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Policy’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The type of Policy.
See PolicyType
The Application that is granted access. See Application
Deletes a Policy. The associated Application will no longer have access to the Metric’s data.
deprecated: Use Data Pool Access Policies instead.
Arguments
Schedules a new Deletion Job on the specified Data Pool.
deprecated: This has been renamed to createDeletionJob.
Arguments
The fields for creating a Deletion Job.
The Data Pool whose data is going to be deleted.
The filters that will be used for deleting data, in the form of SQL. Data matching these filters will be deleted.
The list of filters that will be used for deleting data. Data matching these filters will be deleted.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to (value > 0 AND value <= 100) OR status = "confirmed", you could write:
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
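Putting it together, a sketch of scheduling a Deletion Job with a SQL filter might look like the following. The createDeletionJob name comes from the deprecation note above; the input and response field names are assumptions, and the IDs and filter are placeholders.
mutation {
  createDeletionJob(
    input: {
      dataPool: "DPOXXXXX"
      filterSql: "status = 'cancelled' AND created_at < '2023-01-01'"
    }
  ) {
    deletionJob {
      id
      progress
    }
  }
}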
The response returned by the Deletion Job.
The Deletion Job that was just created.
Deletion Job scheduled for a specific Data Pool.
The Deletion Job represents the asynchronous process of deleting data, given some filters, inside a Data Pool. It tracks the deletion process until completion, showing the progress and the outcome once finished.
The Deletion Job’s ID.
The Deletion Job’s creation date and time in UTC.
Who created the Deletion Job.
The Deletion Job’s last modification date and time in UTC.
Who last modified the Deletion Job.
Environment to which the Deletion Job belongs.
See Environment
The Data Pool whose records will be deleted by the Deletion Job. See DataPool
The filters that will be used for deleting data, in the form of SQL. Data matching the filters will be deleted.
The current progress of the Deletion Job, from 0.0 to 1.0.
The time at which the Deletion Job started.
The time at which the Deletion Job succeeded.
The time at which the Deletion Job failed.
SqlColumnResponse
The name of the returned column.
The returned column’s type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating-point number.
DOUBLE: A 64-bit signed double-precision floating-point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Whether the column is nullable, meaning whether it accepts a null value.
SqlResponse
Response from the SQL API.
The column names, in the same order as present in the data field.
The name of the returned column.
The returned column’s type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating-point number.
DOUBLE: A 64-bit signed double-precision floating-point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Whether the column is nullable, meaning whether it accepts a null value.
The data gathered by the SQL query. The data is returned in an N x M matrix format, where the first dimension is the rows retrieved and the second dimension is the columns. Each cell can be either a string or null, and the string can represent a number, text, date, or boolean value.
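As an illustration of that shape only (hypothetical values), a query returning a timestamp column and a numeric column for two rows would yield a data matrix like:
[
  ["2023-01-01 00:00:00", "42"],
  ["2023-01-02 00:00:00", null]
]
Note that numbers come back as strings, and missing values come back as null.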
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Propeller used for this query.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max 5,000,000 records per second.
P1_SMALL: Max 25,000,000 records per second.
P1_MEDIUM: Max 100,000,000 records per second.
P1_LARGE: Max 250,000,000 records per second.
P1_X_LARGE: Max 500,000,000 records per second.
The Query status.
The Query status.
COMPLETED: The Query was completed successfully.
ERROR: The Query experienced an error.
TIMED_OUT: The Query timed out.
The Query type.
The Query type.
METRIC: Indicates a Metric Query.
STATS: Indicates a Dimension Stats Query.
REPORT: Indicates a Report Query.
RECORDS: Indicates a Record Table Query.
RECORDS_BY_UNIQUE_ID: Indicates records queried by unique ID.
SQL: Indicates a SQL Query.
TOP_VALUES: Indicates a Top Values Query.
The Query subtype.
The Query subtype.
COUNTER: Indicates a Metric counter Query.
TIME_SERIES: Indicates a Metric time series Query.
LEADERBOARD: Indicates a Metric leaderboard Query.
The SQL the query executed.
DescribeSqlResponse
Response from the describe SQL API.
The columns that the query would return.
The name of the returned column.
The returned column’s type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating-point number.
DOUBLE: A 64-bit signed double-precision floating-point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Whether the column is nullable, meaning whether it accepts a null value.
DataGridConnection
The Data Grid connection.
It includes headers and rows for a single page of a Data Grid table. It also allows paging forward and backward to other pages of the Data Grid table.
The Data Grid table’s headers.
An array of arrays containing the values of the Data Grid table’s rows.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Propeller used for this query.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max 5,000,000 records per second.
P1_SMALL: Max 25,000,000 records per second.
P1_MEDIUM: Max 100,000,000 records per second.
P1_LARGE: Max 250,000,000 records per second.
P1_X_LARGE: Max 500,000,000 records per second.
The Query status.
The Query status.
COMPLETED: The Query was completed successfully.
ERROR: The Query experienced an error.
TIMED_OUT: The Query timed out.
The Query type.
The Query type.
METRIC: Indicates a Metric Query.
STATS: Indicates a Dimension Stats Query.
REPORT: Indicates a Report Query.
RECORDS: Indicates a Record Table Query.
RECORDS_BY_UNIQUE_ID: Indicates records queried by unique ID.
SQL: Indicates a SQL Query.
TOP_VALUES: Indicates a Top Values Query.
The Query subtype.
The Query subtype.
COUNTER: Indicates a Metric counter Query.
TIME_SERIES: Indicates a Metric time series Query.
LEADERBOARD: Indicates a Metric leaderboard Query.
The SQL the query executed.
The Data Grid table’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
The Data Grid table’s edges.
DataGridEdge
The Data Grid edge object.
The edge’s cursor.
DataGridNode
The Data Grid table’s node.
This type represents a single row of a Data Grid table.
The Data Grid table’s headers.
An array of the values for the row.
RecordsByUniqueIdResponse
The Data Pool columns for the record.
An array of values for the record.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Propeller used for this query.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max 5,000,000 records per second.
P1_SMALL: Max 25,000,000 records per second.
P1_MEDIUM: Max 100,000,000 records per second.
P1_LARGE: Max 250,000,000 records per second.
P1_X_LARGE: Max 500,000,000 records per second.
The Query status.
The Query status.
COMPLETED: The Query was completed successfully.
ERROR: The Query experienced an error.
TIMED_OUT: The Query timed out.
The Query type.
The Query type.
METRIC: Indicates a Metric Query.
STATS: Indicates a Dimension Stats Query.
REPORT: Indicates a Report Query.
RECORDS: Indicates a Record Table Query.
RECORDS_BY_UNIQUE_ID: Indicates records queried by unique ID.
SQL: Indicates a SQL Query.
TOP_VALUES: Indicates a Top Values Query.
The Query subtype.
The Query subtype.
COUNTER: Indicates a Metric counter Query.
TIME_SERIES: Indicates a Metric time series Query.
LEADERBOARD: Indicates a Metric leaderboard Query.
The SQL the query executed.
TopValuesResponse
An array with the list of values.
Returns the Application specified by the given ID.
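For example, a sketch of fetching an Application by ID (the application query field and the selected field names are assumed from the descriptions below; the ID is a placeholder):
query {
  application(id: "APPXXXXX") {
    id
    uniqueName
    propeller
    scopes
  }
}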
Arguments
The Application object.
Propel Applications represent the web or mobile app you are building. They provide the API credentials that allow your client- or server-side app to access the Propel API. The Application’s Propeller determines the speed and cost of your Metric Queries.
The Application’s unique identifier.
The Application’s unique name.
The Application’s description.
The Application’s Account.
The Account object.
The Account’s unique identifier.
The Application’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Application’s creation date and time in UTC.
The Application’s last modification date and time in UTC.
The Application’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Application’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Application’s OAuth 2.0 client identifier.
The Application’s OAuth 2.0 client secret.
The Application’s Propeller.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max 5,000,000 records per second.
P1_SMALL: Max 25,000,000 records per second.
P1_MEDIUM: Max 100,000,000 records per second.
P1_LARGE: Max 250,000,000 records per second.
P1_X_LARGE: Max 500,000,000 records per second.
The Application’s OAuth 2.0 scopes.
The API operations an Application is authorized to perform.
ADMIN: Grant read/write access to Data Sources, Data Pools, Metrics and Policies.
APPLICATION_ADMIN: Grant read/write access to Applications.
DATA_POOL_QUERY: Grant read access to query Data Pools.
DATA_POOL_READ: Grant read access to read Data Pools.
DATA_POOL_STATS: Grant read access to fetch column statistics from Data Pools.
ENVIRONMENT_ADMIN: Grant read/write access to Environments.
METRIC_QUERY: Grant read access to query Metrics.
METRIC_STATS: Grant read access to fetch Dimension statistics from Metrics.
METRIC_READ: Grant read access to Metrics. This does not allow querying Metrics. For that, see METRIC_QUERY.
A paginated list of Data Pool Access Policies associated with the Application.
Arguments
Returns the Application with the given unique name.
Arguments
The Application object.
Propel Applications represent the web or mobile app you are building. They provide the API credentials that allow your client- or server-side app to access the Propel API. The Application’s Propeller determines the speed and cost of your Metric Queries.
The Application’s unique identifier.
The Application’s unique name.
The Application’s description.
The Application’s Account.
The Account object.
The Account’s unique identifier.
The Application’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Application’s creation date and time in UTC.
The Application’s last modification date and time in UTC.
The Application’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Application’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Application’s OAuth 2.0 client identifier.
The Application’s OAuth 2.0 client secret.
The Application’s Propeller.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max 5,000,000 records per second.
P1_SMALL: Max 25,000,000 records per second.
P1_MEDIUM: Max 100,000,000 records per second.
P1_LARGE: Max 250,000,000 records per second.
P1_X_LARGE: Max 500,000,000 records per second.
The Application’s OAuth 2.0 scopes.
The API operations an Application is authorized to perform.
ADMIN: Grant read/write access to Data Sources, Data Pools, Metrics and Policies.
APPLICATION_ADMIN: Grant read/write access to Applications.
DATA_POOL_QUERY: Grant read access to query Data Pools.
DATA_POOL_READ: Grant read access to read Data Pools.
DATA_POOL_STATS: Grant read access to fetch column statistics from Data Pools.
ENVIRONMENT_ADMIN: Grant read/write access to Environments.
METRIC_QUERY: Grant read access to query Metrics.
METRIC_STATS: Grant read access to fetch Dimension statistics from Metrics.
METRIC_READ: Grant read access to Metrics. This does not allow querying Metrics. For that, see METRIC_QUERY.
A paginated list of Data Pool Access Policies associated with the Application.
Arguments
Returns the Applications within the Environment.
The applications query uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
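A forward-pagination sketch (the selections are assumed from the connection object described below): request a page, then pass the returned endCursor as after on the next request while hasNextPage is true.
query {
  applications(first: 25) {
    edges {
      cursor
      node {
        id
        uniqueName
      }
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}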
Arguments
The Application connection object.
Learn more about pagination in GraphQL.
The Application connection’s edges.
The Application edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
See Application
The Application connection’s nodes.
The Application object.
Propel Applications represent the web or mobile app you are building. They provide the API credentials that allow your client- or server-side app to access the Propel API. The Application’s Propeller determines the speed and cost of your Metric Queries.
The Application’s unique identifier.
The Application’s unique name.
The Application’s description.
The Application’s Environment.
See Environment
The Application’s creation date and time in UTC.
The Application’s last modification date and time in UTC.
The Application’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Application’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Application’s OAuth 2.0 client identifier.
The Application’s OAuth 2.0 client secret.
The Application’s OAuth 2.0 scopes.
See ApplicationScope
A paginated list of Data Pool Access Policies associated with the Application.
Arguments
The Application connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
Returns the Data Source specified by the given ID.
query {
  dataSource(id: "DSOXXXXX") {
    id
    uniqueName
    type
    tables(first: 100) {
      nodes {
        id
        name
        columns(first: 100) {
          nodes {
            name
            type
            isNullable
            supportedDataPoolColumnTypes
          }
        }
      }
    }
  }
}
Arguments
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Account.
The Account object.
The Account’s unique identifier.
The Data Source’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
The types of Data Sources.
WEBHOOK: Indicates a Webhook Data Source.
TWILIO_SEGMENT: Indicates a Twilio Segment Data Source.
S3: Indicates an Amazon S3 Data Source.
Redshift: Indicates a Redshift Data Source.
POSTGRESQL: Indicates a PostgreSQL Data Source.
KAFKA: Indicates a Kafka Data Source.
Http: Indicates an HTTP Data Source.
CLICKHOUSE: Indicates a ClickHouse Data Source.
AMAZON_DYNAMODB: Indicates an Amazon DynamoDB Data Source.
AMAZON_DATA_FIREHOSE: Indicates an Amazon Data Firehose Data Source.
Snowflake: Indicates a Snowflake Data Source.
INTERNAL: Indicates an internal Data Source.
The Data Source’s status.
The status of a Data Source.
CREATED: The Data Source has been created, but it is not connected yet.
CONNECTING: Propel is attempting to connect the Data Source.
CONNECTED: The Data Source is connected.
BROKEN: The Data Source failed to connect.
DELETING: Propel is deleting the Data Source.
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
The Data Source Check object.
Data Source Checks are executed when setting up your Data Source. They check that Propel will be able to receive data and set up Data Pools.
The exact Checks to perform vary by Data Source. For example, Snowflake-backed Data Sources will have their own specific Checks.
The name of the Data Source Check to be performed.
A description of the Data Source Check to be performed.
The status of the Data Source Check (all checks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Source Check failed, this field includes a descriptive error message.
See Error
The time at which the Data Source Check was performed.
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
Returns the Data Source specified by the given unique name.
Arguments
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Account.
The Account object.
The Account’s unique identifier.
The Data Source’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
The types of Data Sources.
WEBHOOK: Indicates a Webhook Data Source.
TWILIO_SEGMENT: Indicates a Twilio Segment Data Source.
S3: Indicates an Amazon S3 Data Source.
REDSHIFT: Indicates a Redshift Data Source.
POSTGRESQL: Indicates a PostgreSQL Data Source.
KAFKA: Indicates a Kafka Data Source.
HTTP: Indicates an HTTP Data Source.
CLICKHOUSE: Indicates a ClickHouse Data Source.
AMAZON_DYNAMODB: Indicates an Amazon DynamoDB Data Source.
AMAZON_DATA_FIREHOSE: Indicates an Amazon Data Firehose Data Source.
SNOWFLAKE: Indicates a Snowflake Data Source.
INTERNAL: Indicates an internal Data Source.
The Data Source’s status.
The status of a Data Source.
CREATED: The Data Source has been created, but it is not connected yet.
CONNECTING: Propel is attempting to connect the Data Source.
CONNECTED: The Data Source is connected.
BROKEN: The Data Source failed to connect.
DELETING: Propel is deleting the Data Source.
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
The Data Source Check object.
Data Source Checks are executed when setting up your Data Source. They check that Propel will be able to receive data and set up Data Pools.
The exact Checks to perform vary by Data Source. For example, Snowflake-backed Data Sources will have their own specific Checks.
The name of the Data Source Check to be performed.
A description of the Data Source Check to be performed.
The status of the Data Source Check (all checks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Source Check failed, this field includes a descriptive error message.
See Error
The time at which the Data Source Check was performed.
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
Returns the Data Sources within the Environment.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source. Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads.
The dataSources query uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
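As a sketch, a forward-paginated listing could look like the following (the selected fields are a subset of those documented on the Data Source object; endCursor and hasNextPage are assumed names for the page info fields described below):

query {
  dataSources(first: 10) {
    nodes {
      id
      uniqueName
      type
      status
    }
    pageInfo {
      endCursor
      hasNextPage
    }
  }
}

To fetch the next page, pass the returned endCursor as the after argument of the following request.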
Arguments
The Data Source connection object.
Learn more about pagination in GraphQL.
The Data Source connection’s edges.
The Data Source edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
See DataSource
The Data Source connection’s nodes.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
The Data Source connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
Returns the Data Pool specified by the given ID.
A Data Pool is a cached table hydrated from your data warehouse optimized for high-concurrency and low-latency queries.
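A minimal sketch of this query (the ID value and selected fields are illustrative):

query {
  dataPool(id: "DPO00000000000000000000000000") {
    id
    uniqueName
    status
  }
}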
Arguments
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
The status of a Data Pool.
CREATED: The Data Pool has been created and will be set up soon.
PENDING: Propel is attempting to set up the Data Pool.
LIVE: The Data Pool is set up and serving data. Check its Syncs to monitor data ingestion.
SETUP_FAILED: The Data Pool setup failed. Check its Setup Tasks before re-attempting setup.
CONNECTING
CONNECTED
BROKEN
PAUSING
PAUSED
DELETING: Propel is deleting the Data Pool and all of its associated data.
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The Data Pool’s primary timestamp column, if any.
A Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The primary timestamp column’s type.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
The list of measures (numeric columns) in the Data Pool.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
The Data Pool Setup Task object.
Data Pool Setup Tasks are executed when setting up your Data Pool. They ensure Propel will be able to sync records from your Data Source to your Data Pool.
The exact Setup Tasks to perform vary by Data Source. For example, Data Pools pointing to Snowflake-backed Data Sources will have their own specific Setup Tasks.
The name of the Data Pool Setup Task to be performed.
A description of the Data Pool Setup Task to be performed.
The status of the Data Pool Setup Task (all setup tasks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Pool Setup Task failed, this field includes a descriptive error message.
See Error
The time at which the Data Pool Setup Task was completed.
Settings related to Data Pool syncing.
Settings related to Data Pool syncing.
Indicates whether syncing is enabled or disabled.
The syncing interval.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
The date and time of the most recent Sync in UTC.
The list of Syncs of the Data Pool.
Arguments
The filter to apply when listing the Syncs for a Data Pool.
EMPTY: Returns only Syncs with empty records.
NOT_EMPTY: Returns only Syncs that contain one or more records.
ALL: Returns all Syncs, regardless of whether they contain records or not.
See SyncConnection
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
Response returned by the validateExpression query for validating expressions in Custom Metrics.
Returns whether the expression is valid or not with a reason explaining why.
True if the expression is valid, false otherwise.
The reason why the expression is not valid, if it isn’t; null otherwise.
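A minimal sketch of validating an expression (the argument name expression and the result fields valid and reason are assumptions based on the field descriptions above; the ID and expression values are illustrative):

query {
  dataPool(id: "DPO00000000000000000000000000") {
    validateExpression(expression: "SUM(price)") {
      valid
      reason
    }
  }
}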
The Data Pool’s table settings.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
See TableEngine
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
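As a sketch, the table settings could be inspected like this (the field names tableSettings, partitionBy, primaryKey, orderBy, and ttl are assumed from the clause descriptions above; the ID value is illustrative):

query {
  dataPool(id: "DPO00000000000000000000000000") {
    tableSettings {
      partitionBy
      primaryKey
      orderBy
      ttl
    }
  }
}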
The Data Pool’s columns that participate in its PARTITION BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool’s columns that participate in its PRIMARY KEY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool’s columns that participate in its ORDER BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
A Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
Returns the Data Pool specified by the given unique name.
A Data Pool is a cached table hydrated from your data warehouse optimized for high-concurrency and low-latency queries.
Arguments
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
The status of a Data Pool.
CREATED: The Data Pool has been created and will be set up soon.
PENDING: Propel is attempting to set up the Data Pool.
LIVE: The Data Pool is set up and serving data. Check its Syncs to monitor data ingestion.
SETUP_FAILED: The Data Pool setup failed. Check its Setup Tasks before re-attempting setup.
CONNECTING
CONNECTED
BROKEN
PAUSING
PAUSED
DELETING: Propel is deleting the Data Pool and all of its associated data.
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The Data Pool’s primary timestamp column, if any.
A Data Pool’s primary timestamp column. Propel uses the primary timestamp to order and partition your data in Data Pools. It will serve as the time dimension for your Metrics.
The name of the column that represents the primary timestamp.
The primary timestamp column’s type.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
The list of measures (numeric columns) in the Data Pool.
Arguments
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column connection’s nodes.
See DataPoolColumn
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
The Data Pool Setup Task object.
Data Pool Setup Tasks are executed when setting up your Data Pool. They ensure Propel will be able to sync records from your Data Source to your Data Pool.
The exact Setup Tasks to perform vary by Data Source. For example, Data Pools pointing to Snowflake-backed Data Sources will have their own specific Setup Tasks.
The name of the Data Pool Setup Task to be performed.
A description of the Data Pool Setup Task to be performed.
The status of the Data Pool Setup Task (all setup tasks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Pool Setup Task failed, this field includes a descriptive error message.
See Error
The time at which the Data Pool Setup Task was completed.
Settings related to Data Pool syncing.
Settings related to Data Pool syncing.
Indicates whether syncing is enabled or disabled.
The syncing interval.
Note that the syncing interval is approximate. For example, setting the syncing interval to EVERY_1_HOUR does not mean that syncing will occur exactly on the hour. Instead, the syncing interval starts relative to when the Data Pool goes LIVE, and Propel will attempt to sync approximately every hour. Additionally, if you pause or resume syncing, this too can shift the syncing interval around.
The date and time of the most recent Sync in UTC.
The list of Syncs of the Data Pool.
Arguments
The filter to apply when listing the Syncs for a Data Pool.
EMPTY: Returns only Syncs with empty records.
NOT_EMPTY: Returns only Syncs that contain one or more records.
ALL: Returns all Syncs, regardless of whether they contain records or not.
See SyncConnection
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
Response returned by the validateExpression query for validating expressions in Custom Metrics.
Returns whether the expression is valid or not with a reason explaining why.
True if the expression is valid, false otherwise.
The reason why the expression is not valid, if it isn’t; null otherwise.
The Data Pool’s table settings.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
See TableEngine
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The Data Pool’s columns that participate in its PARTITION BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool’s columns that participate in its PRIMARY KEY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool’s columns that participate in its ORDER BY clause.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.
A Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
The name of the column that represents the unique ID.
Returns the Data Pools within the Environment.
A Data Pool is a cached table hydrated from your data warehouse optimized for high-concurrency and low-latency queries. Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads.
The dataPools query uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
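As a sketch, a backward-paginated listing could look like this (the cursor value is illustrative; startCursor and hasPreviousPage are assumed names for the page info fields described below):

query {
  dataPools(last: 10, before: "cursor-of-first-result-on-current-page") {
    nodes {
      id
      uniqueName
      status
    }
    pageInfo {
      startCursor
      hasPreviousPage
    }
  }
}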
Arguments
The Data Pool connection object.
Learn more about pagination in GraphQL.
The Data Pool connection’s edges. See DataPoolEdge
The Data Pool connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
Returns the Environment specified by the given ID.
Arguments
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
Returns the Materialized Views within the Environment.
The materializedViews query uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
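A minimal sketch of this query (field selections are illustrative; sql is assumed as the name of the field holding the SQL that the Materialized View executes):

query {
  materializedViews(first: 10) {
    nodes {
      id
      uniqueName
      sql
    }
  }
}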
Arguments
The Materialized View connection object.
Learn more about pagination in GraphQL.
The Materialized View connection’s edges.
The Materialized View edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
See MaterializedView
The Materialized View connection’s nodes.
The Materialized View’s unique identifier.
The Materialized View’s unique name.
The Materialized View’s description.
The Materialized View’s Environment.
See Environment
The Materialized View’s creation date and time in UTC.
The Materialized View’s last modification date and time in UTC.
The Materialized View’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Materialized View’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The SQL that the Materialized View executes.
The Materialized View’s destination (AKA “target”) Data Pool.
See DataPool
The Materialized View connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
Returns the Data Pool Access Policy specified by the given ID.
A Data Pool Access Policy limits the data that Applications can access within a Data Pool.
Arguments
The ID of the Data Pool Access Policy.
The Data Pool Access Policy’s unique name.
The Data Pool Access Policy’s description.
The Data Pool Access Policy’s Account.
The Account object.
The Account’s unique identifier.
The Data Pool Access Policy’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool Access Policy’s creation date and time in UTC.
The Data Pool Access Policy’s last modification date and time in UTC.
The Data Pool Access Policy’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool Access Policy’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
Columns that the Access Policy makes available for querying.
Row-level filters that the Access Policy applies before executing queries, in the form of SQL.
Applications that are assigned to this Data Pool Access Policy.
Arguments
Row-level filters that the Access Policy applies before executing queries.
deprecated: Use filtersSql instead
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write
{
"column": "value",
"operator": "GREATER_THAN",
"value": "0",
"and": [{
"column": "value",
"operator": "LESS_THAN_OR_EQUAL_TO",
"value": "0"
}],
"or": [{
"column": "status",
"operator": "EQUALS",
"value": "confirmed"
}]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
Returns the Metric specified by the given ID.
A Metric is a business indicator measured over time.
Arguments
The Metric object.
A Metric is a business indicator measured over time.
The Metric’s unique identifier.
The Metric’s unique name.
The Metric’s description.
The Metric’s Account.
The Account object.
The Account’s unique identifier.
The Metric’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Metric’s creation date and time in UTC.
The Metric’s last modification date and time in UTC.
The Metric’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Metric’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Metric’s Dimensions. These Dimensions are available to Query Filters.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
The Metric’s timestamp, if any. This is the same as its Data Pool’s timestamp, if any.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
List the Boosters associated with the Metric.
Arguments
The Metric’s type. The different Metric types determine how the values are calculated.
The available Metric types.
COUNT: Counts the number of records that match the Metric Filters. For time series, it will count the values for each time granularity.
SUM: Sums the values of the specified column for every record that matches the Metric Filters. For time series, it will sum the values for each time granularity.
COUNT_DISTINCT: Counts the number of distinct values in the specified column for every record that matches the Metric Filters. For time series, it will count the distinct values for each time granularity.
AVERAGE: Averages the values of the specified column for every record that matches the Metric Filters. For time series, it will average the values for each time granularity.
MIN: Selects the minimum value of the specified column for every record that matches the Metric Filters. For time series, it will select the minimum value for each time granularity.
MAX: Selects the maximum value of the specified column for every record that matches the Metric Filters. For time series, it will select the maximum value for each time granularity.
CUSTOM: Aggregates values based on the provided custom expression.
The settings for the Metric. The settings are specific to the Metric’s type.
A Metric’s settings, depending on its type.
The Metric’s measure. Access this from the Metric’s settings object instead.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
Query the Metric in counter format. Returns the Metric’s value for the given time range and filters.
deprecated: Use the top-level counter query instead
Arguments
The fields for querying a Metric in counter format.
A Metric’s counter query returns a single value over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
See MetricInput
The time range for calculating the counter.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The Query Filters to apply before retrieving the counter data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead
The Query Filters to apply before retrieving the counter data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead
See FilterInput
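A minimal sketch of a counter query using these fields (the MetricInput and TimeRangeInput internals shown here, name, relative, and n, as well as the response field value, are assumptions, not confirmed by this reference):

query {
  counter(
    input: {
      metric: { name: "revenue" }
      timeRange: { relative: LAST_N_DAYS, n: 30 }
      timeZone: "America/Los_Angeles"
      filterSql: "country = 'US'"
    }
  ) {
    value
  }
}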
Query the Metric in time series format. Returns arrays of timestamps and the Metric’s values for the given time range and filters.
deprecated: Use the top-level timeSeries query instead
Arguments
The fields for querying a Metric in time series format.
A Metric’s time series query returns the values over a given time range, aggregated by a given time granularity: day, month, or year, for example.
The Metric to query. It can be a pre-created one or it can be inlined here.
See MetricInput
The time range for calculating the time series.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The time granularity (hour, day, month, etc.) to aggregate the Metric values by.
The Query Filters to apply before retrieving the time series data, in the form of SQL. If no Query Filters are provided, all data is included.
Columns to group by.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead
The Query Filters to apply before retrieving the time series data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead
See FilterInput
The time series response object. It contains an array of time series labels and an array of Metric values for the given time range and Query Filters.
The time series labels.
The time series values.
The time series values for each group in groupBy, if specified.
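A minimal sketch of a time series query (labels and values come from the response object described above; the MetricInput and TimeRangeInput internals and the DAY granularity value are assumptions):

query {
  timeSeries(
    input: {
      metric: { name: "revenue" }
      timeRange: { relative: LAST_N_DAYS, n: 7 }
      granularity: DAY
    }
  ) {
    labels
    values
  }
}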
Query the Metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the Metric’s corresponding values for the given time range and filters.
deprecated: Use the top-level leaderboard query instead
Arguments
The fields for querying a Metric in leaderboard format.
A Metric’s leaderboard query returns an ordered table of Dimension and Metric values over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
See MetricInput
The time range for calculating the leaderboard.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
One or many Dimensions to group the Metric values by. Typically, Dimensions in a leaderboard are what you want to compare and rank.
See DimensionInput
The sort order of the rows. It can be ascending (ASC) or descending (DESC) order. Defaults to descending (DESC) order when not provided.
See Sort
The number of rows to be returned. It can be a number between 1 and 1,000.
The Query Filters to apply before retrieving the leaderboard data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead
The Query Filters to apply before retrieving the leaderboard data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead
See FilterInput
The leaderboard response object. It contains an array of headers and a table (array of rows) with the selected Dimensions and corresponding Metric values for the given time range and Query Filters.
The table headers. It contains the Dimension and Metric names.
An ordered array of rows. Each row contains the Dimension values and the corresponding Metric value. A Dimension value can be empty. A Metric value will never be empty.
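A minimal sketch of a leaderboard query (headers and rows come from the response object described above; the DimensionInput shape and the rowLimit argument name are assumptions):

query {
  leaderboard(
    input: {
      metric: { name: "revenue" }
      timeRange: { relative: LAST_N_DAYS, n: 30 }
      dimensions: [{ columnName: "country" }]
      sort: DESC
      rowLimit: 10
    }
  ) {
    headers
    rows
  }
}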
List the Policies associated with the Metric.
deprecated: Use Data Pool Access Policies instead
Arguments
See PolicyConnection
Whether or not access control is enabled for the Metric.
deprecated: Use Data Pool Access Policies instead
Returns the Metric specified by the given unique name.
A Metric is a business indicator measured over time.
Arguments
The Metric object.
A Metric is a business indicator measured over time.
The Metric’s unique identifier.
The Metric’s unique name.
The Metric’s description.
The Metric’s Account.
The Account object.
The Account’s unique identifier.
The Metric’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Metric’s creation date and time in UTC.
The Metric’s last modification date and time in UTC.
The Metric’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Metric’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Metric’s Dimensions. These Dimensions are available to Query Filters.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
The Metric’s timestamp, if any. This is the same as its Data Pool’s timestamp, if any.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
List the Boosters associated with the Metric.
Arguments
The Metric’s type. The different Metric types determine how the values are calculated.
The available Metric types.
COUNT: Counts the number of records that match the Metric Filters. For time series, it will count the values for each time granularity.
SUM: Sums the values of the specified column for every record that matches the Metric Filters. For time series, it will sum the values for each time granularity.
COUNT_DISTINCT: Counts the number of distinct values in the specified column for every record that matches the Metric Filters. For time series, it will count the distinct values for each time granularity.
AVERAGE: Averages the values of the specified column for every record that matches the Metric Filters. For time series, it will average the values for each time granularity.
MIN: Selects the minimum value of the specified column for every record that matches the Metric Filters. For time series, it will select the minimum value for each time granularity.
MAX: Selects the maximum value of the specified column for every record that matches the Metric Filters. For time series, it will select the maximum value for each time granularity.
CUSTOM: Aggregates values based on the provided custom expression.
The settings for the Metric. The settings are specific to the Metric’s type.
A Metric’s settings, depending on its type.
The Metric’s measure. Access this from the Metric’s settings object instead.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
Query the Metric in counter format. Returns the Metric’s value for the given time range and filters.
deprecated: Use the top-level counter query instead
Arguments
The fields for querying a Metric in counter format.
A Metric’s counter query returns a single value over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
See MetricInput
The time range for calculating the counter.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The Query Filters to apply before retrieving the counter data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead
The Query Filters to apply before retrieving the counter data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead
See FilterInput
Query the Metric in time series format. Returns arrays of timestamps and the Metric’s values for the given time range and filters.
deprecated: Use the top-level timeSeries query instead
Arguments
The fields for querying a Metric in time series format.
A Metric’s time series query returns the values over a given time range, aggregated by a given time granularity: day, month, or year, for example.
The Metric to query. It can be a pre-created one or it can be inlined here.
See MetricInput
The time range for calculating the time series.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The time granularity (hour, day, month, etc.) to aggregate the Metric values by.
The Query Filters to apply before retrieving the time series data, in the form of SQL. If no Query Filters are provided, all data is included.
Columns to group by.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead
The Query Filters to apply before retrieving the time series data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead
See FilterInput
The time series response object. It contains an array of time series labels and an array of Metric values for the given time range and Query Filters.
The time series labels.
The time series values.
The time series values for each group in groupBy, if specified.
Query the Metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the Metric’s corresponding values for the given time range and filters.
deprecated: Use the top-level leaderboard query instead
Arguments
The fields for querying a Metric in leaderboard format.
A Metric’s leaderboard query returns an ordered table of Dimension and Metric values over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
See MetricInput
The time range for calculating the leaderboard.
See TimeRangeInput
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
One or many Dimensions to group the Metric values by. Typically, Dimensions in a leaderboard are what you want to compare and rank.
See DimensionInput
The sort order of the rows. It can be ascending (ASC) or descending (DESC) order. Defaults to descending (DESC) order when not provided.
See Sort
The number of rows to be returned. It can be a number between 1 and 1,000.
The Query Filters to apply before retrieving the leaderboard data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead
The Query Filters to apply before retrieving the leaderboard data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead
See FilterInput
The leaderboard response object. It contains an array of headers and a table (array of rows) with the selected Dimensions and corresponding Metric values for the given time range and Query Filters.
The table headers. It contains the Dimension and Metric names.
An ordered array of rows. Each row contains the Dimension values and the corresponding Metric value. A Dimension value can be empty. A Metric value will never be empty.
List the Policies associated with the Metric.
deprecated: Use Data Pool Access Policies instead
Arguments
See PolicyConnection
Whether or not access control is enabled for the Metric.
deprecated: Use Data Pool Access Policies instead
Returns the Metrics within the Environment.
A Metric is a business indicator measured over time. Each Metric is associated with one Data Pool, which is a cached table hydrated from your data warehouse optimized for high-concurrency and low-latency queries. Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads.
The metrics query uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
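As a sketch, paging through Metrics with edges and cursors could look like this (field selections are illustrative; endCursor and hasNextPage are assumed names for the page info fields described below):

query {
  metrics(first: 10) {
    edges {
      cursor
      node {
        id
        uniqueName
        type
      }
    }
    pageInfo {
      endCursor
      hasNextPage
    }
  }
}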
Arguments
The Metric connection object.
Learn more about pagination in GraphQL.
The Metric connection’s edges. See MetricEdge
The Metric connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
Returns the Booster specified by the given ID.
A Booster significantly improves the query performance for a Metric.
Arguments
Boosters allow you to optimize Metric Queries for a subset of commonly used Dimensions. A Metric can have one or many Boosters to optimize for the different Query patterns.
Boosters can be understood as an aggregating index. The index is formed from left to right as follows:
- The Data Pool’s Tenant ID column (if present)
- Metric Filter columns (if present)
- Query Filter Dimensions (see dimensions)
- The Data Pool’s timestamp column
The Booster’s unique identifier.
The Booster’s Account.
The Account object.
The Account’s unique identifier.
The Booster’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Booster’s creation date and time in UTC.
The Booster’s last modification date and time in UTC.
The Booster’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Booster’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The status of the Booster (once LIVE it will be available for speeding up Metric queries).
The Booster status.
CREATED: The Booster has been created. Propel will start optimizing the Data Pool soon.
OPTIMIZING: Propel is setting up the Booster and optimizing the Data Pool.
LIVE: The Booster is now live and available to speed up Metric queries.
FAILED: Propel failed to set up the Booster. Please write to support. Alternatively, you can delete the Booster and try again.
DELETING: Propel is deleting the Booster and all of its associated data.
When the Booster is OPTIMIZING, this represents its progress as a number from 0 to 1. In all other states, progress is null.
Dimensions included in the Booster.
The Dimension object that represents a column in a table.
The column name it represents.
The column data type.
Whether the column is nullable.
Whether the column is a unique key.
deprecated: This is Snowflake-specific, and will be removed
The statistics for the dimension values. Fetching statistics incurs query costs.
deprecated: Issue normal queries for calculating stats
The number of records in the Booster.
The amount of storage in terabytes used by the Booster.
Returns a Policy by ID.
Arguments
The Policy type. It governs an Application’s access to a Metric’s data.
The Policy’s unique identifier.
The Policy’s Account.
The Account object.
The Account’s unique identifier.
The Policy’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Policy’s creation date and time in UTC.
The Policy’s last modification date and time in UTC.
The Policy’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Policy’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The type of Policy.
The types of Policies that can be applied to a Metric.
ALL_ACCESS: Grants access to all Metric data.
TENANT_ACCESS: Grants access to a specified tenant’s Metric data.
The Application that is granted access. See Application
Returns a Sync by ID.
Arguments
The Sync object.
This represents the process of syncing data from your Data Source (for example, a Snowflake data warehouse) to your Data Pool.
The Sync’s unique identifier.
The Sync’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Sync’s creation date and time in UTC.
The Sync’s last modification date and time in UTC.
The Sync’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Sync’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Sync’s Data Pool’s Data Source.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get Data Pools for the Data Source.
The dataPools field uses cursor-based pagination typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
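As a concrete sketch, forward pagination over a Data Source's Data Pools might look like the following query (the dataSource field name and the selection set are assumptions based on the conventions in this reference):
query {
  dataSource(id: "DATA_SOURCE_ID") {
    dataPools(first: 10, after: "CURSOR_OF_LAST_RESULT") {
      nodes {
        id
        uniqueName
      }
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
}
To fetch the next page, read pageInfo.endCursor from the response and pass it as after in the next request.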
Arguments
The number of new, updated, and deleted records contained within the Sync, if known. This excludes filtered records.
The (compressed) size of the Sync, in bytes, if known.
The status of the Sync (all Syncs begin as SYNCING before transitioning to SUCCEEDED or FAILED).
The status of a Sync.
SYNCING: Propel is actively syncing records contained within the Sync.
SUCCEEDED: The Sync succeeded. Propel successfully synced all records contained within the Sync.
FAILED: The Sync failed. Propel failed to sync some or all records contained within the Sync.
DELETING: Propel is deleting the Sync.
The time at which the Sync started.
The time at which the Sync succeeded.
The time at which the Sync failed.
The number of new records contained within the Sync, if known. This excludes filtered records.
deprecated: All records are considered to be processed; see processedRecords instead.
The number of updated records contained within the Sync, if known. This excludes filtered records.
deprecated: All records are considered to be processed; see processedRecords instead.
The number of deleted records contained within the Sync, if known. This excludes filtered records.
deprecated: All records are considered to be processed; see processedRecords instead.
The number of filtered records contained within the Sync, due to issues such as a missing timestamp Dimension, if any are known to be invalid.
deprecated: All records are considered to be processed; see processedRecords instead.
Returns a table by ID.
Arguments
The table object.
Once a table introspection succeeds, it creates a new table object for every table it introspected.
The table’s ID.
The table’s name.
The Data Source to which the table belongs. See DataSource
The number of rows contained within the table at the time of introspection. Check the table’s cachedAt time, since this info can become out of date.
The size of the table (in bytes) at the time of introspection. Check the table’s cachedAt time, since this info can become out of date.
The time at which the table was cached (i.e., the time at which it was introspected).
The time at which the table was created. This is the same as its cachedAt time.
The table’s creator. This corresponds to the initiator of the table Introspection. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The table’s columns.
Arguments
The column connection object.
Learn more about pagination in GraphQL.
The time at which the columns were cached (i.e., the time at which they were introspected).
The column connection’s edges.
See ColumnEdge
The table’s columns which can be used as a timestamp for a Data Pool.
Arguments
The column connection object.
Learn more about pagination in GraphQL.
The time at which the columns were cached (i.e., the time at which they were introspected).
The column connection’s edges.
See ColumnEdge
The table’s columns which can be used as a measure for a Metric.
Arguments
The column connection object.
Learn more about pagination in GraphQL.
The time at which the columns were cached (i.e., the time at which they were introspected).
The column connection’s edges.
See ColumnEdge
Information about the table obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed.
(The same description and deprecation notice apply to the remaining Snowflake-specific table fields.)
Build a report, or table, consisting of multiple Metrics broken down by one or more dimensions.
The first few columns of the report are the dimensions you choose to break down by. The subsequent columns are the Metrics you choose to query. By default, the report sorts on the first Metric in descending order, but you can configure this with the orderByMetric and sort inputs.
Finally, reports use cursor-based pagination. You can control page size with the first and last inputs.
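Putting these pieces together, a Metric Report query might look like this sketch (the metricReport field name, the enum value, and the exact input shape are assumptions based on the fields described below; the column and Metric names are hypothetical):
query {
  metricReport(input: {
    timeRange: { relative: LAST_N_DAYS, n: 30 }
    dimensions: [{ columnName: "country" }]
    metrics: [{ metric: { name: "Revenue" }, displayName: "Revenue" }]
    first: 10
  }) {
    headers
    rows
  }
}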
Arguments
The fields for querying a Metric Report.
A Metric Report is a table whose columns include dimensions and Metric values, calculated over a given time range.
The time range for calculating the Metric Report.
The fields required to specify the time range for a time series, counter, or leaderboard Metric query.
If no relative or absolute time ranges are provided, Propel defaults to an absolute time range beginning with the earliest record in the Metric’s Data Pool and ending with the latest record.
If both relative and absolute time ranges are provided, the relative time range will take precedence.
If a LAST_N relative time period is selected, an n ≥ 1 must be provided. If no n is provided or n < 1, a BAD_REQUEST error will be returned.
The timestamp field to use when querying. Defaults to the timestamp configured on the Data Pool or Metric, if any. Set this to filter on an alternative timestamp field.
The relative time period.
The number of time units for the LAST_N relative periods.
The optional start timestamp (inclusive). Defaults to the timestamp of the earliest record in the Data Pool.
The optional end timestamp (exclusive). Defaults to the timestamp of the latest record in the Data Pool.
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
One or many dimensions to group the Metric values by. Typically, dimensions in a report are what you want to compare and rank.
The fields for specifying a dimension to include in a Metric Report.
The column name of the dimension to include in a Metric Report. This must match the name of a Data Pool column.
The name to display in the headers array when displaying the report. This defaults to the column name if unspecified.
One or more Metrics to include in the Metric Report. These will be broken down by dimensions.
The fields for specifying a Metric to include in a Metric Report.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
See MetricInput
The name to display in the headers array when displaying the report. This defaults to the Metric’s unique name if unspecified.
The Query Filters to apply when calculating the Metric, in the form of SQL.
The sort order for the Metric. It can be ascending (ASC) or descending (DESC) order. Defaults to descending (DESC) order when not provided.
See Sort
The Metric’s unique name. If not specified, Propel will look up the Metric by ID.
deprecated: Use metric instead.
The Metric’s ID. If not specified, Propel will look up the Metric by unique name.
deprecated: Use metric instead.
The Query Filters to apply when calculating the Metric.
deprecated: Use filterSql instead.
See FilterInput
The Query Filters to apply when building the Metric Report, in the form of SQL. These can be used to filter out rows.
The index of the column to order the Metric Report by. The index is 1-based and defaults to the first Metric column. In other words, by default, reports are ordered by the first Metric; however, you can order by the second Metric, third Metric, etc., by overriding the orderByColumn input. You can also order by dimensions this way.
The number of rows to be returned when paging forward. It can be a number between 1 and 1,000.
The cursor to use when paging forward.
The number of rows to be returned when paging backward. It can be a number between 1 and 1,000.
The cursor to use when paging backward.
The Query Filters to apply when building the Metric Report. These can be used to filter out rows.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The Metric Report connection object.
It includes headers and rows for a single page of a report. It also allows paging forward and backward to other pages of the report.
The report connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
The report connection’s edges.
The report connection’s nodes.
The Metric Report node object.
This type represents a single row of a report.
An ordered array of display names for your dimensions and Metrics, as defined in the report input. Use this to display your table’s header.
An ordered array of columns. Each column contains the dimension and Metric values for a single row, as defined in the report input. Use this to display a single row within your table.
An ordered array of display names for your dimensions and Metrics, as defined in the report input. Use this to display your table’s header.
An ordered array of rows. Each row contains dimension and Metric values, as defined in the report input. Use these to display the rows of your table.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Query status.
See QueryStatus
The Query subtype.
See QuerySubtype
The SQL the query executed.
Query Data Pools using SQL.
Arguments
Input to the SqlV1 API.
The SQL query.
Response from the SQL API.
The column names in the same order as present in the data field.
The name of the returned column.
The returned column’s type.
See ColumnType
Whether the column is nullable, meaning whether it accepts a null value.
The data gathered by the SQL query. The data is returned in an N x M matrix format, where the first dimension is the rows retrieved and the second dimension is the columns. Each cell can be either a string or null, and the string can represent a number, text, date, or boolean value.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Query status.
See QueryStatus
The Query subtype.
See QuerySubtype
The SQL the query executed.
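A minimal sketch of querying a Data Pool with SQL (the sqlV1 field name and selection set are assumptions based on this reference; the table name is hypothetical):
query {
  sqlV1(input: { query: "SELECT status, COUNT(*) FROM my_data_pool GROUP BY status" }) {
    columns {
      columnName
      type
      isNullable
    }
    data
  }
}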
Describes SQL statements against Data Pools.
Arguments
Input for describing SqlV1 inputs.
The SQL query.
Response from the describe SQL API.
The columns that the query would return.
The name of the returned column.
The returned column’s type.
See ColumnType
Whether the column is nullable, meaning whether it accepts a null value.
Returns the individual records of a Data Pool with the convenience of built-in pagination, filtering, and sorting.
Arguments
The fields for querying Data Grid records.
The time range for retrieving the records.
The fields required to specify the time range for a time series, counter, or leaderboard Metric query.
If no relative or absolute time ranges are provided, Propel defaults to an absolute time range beginning with the earliest record in the Metric’s Data Pool and ending with the latest record.
If both relative and absolute time ranges are provided, the relative time range will take precedence.
If a LAST_N relative time period is selected, an n ≥ 1 must be provided. If no n is provided or n < 1, a BAD_REQUEST error will be returned.
The timestamp field to use when querying. Defaults to the timestamp configured on the Data Pool or Metric, if any. Set this to filter on an alternative timestamp field.
The relative time period.
The number of time units for the LAST_N relative periods.
The optional start timestamp (inclusive). Defaults to the timestamp of the earliest record in the Data Pool.
The optional end timestamp (exclusive). Defaults to the timestamp of the latest record in the Data Pool.
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The columns to retrieve.
The index of the column to order the table by. The index is 1-based. If not provided, records will be ordered by their timestamp by default.
The sort order of the rows. It can be ascending (ASC) or descending (DESC) order. Defaults to descending (DESC) order when not provided.
The available sort orders.
ASC: Sort in ascending order.
DESC: Sort in descending order.
The filters to apply to the records, in the form of SQL. You may only filter on columns included in the columns array input.
The number of rows to be returned when paging forward. It can be a number between 1 and 1,000.
The cursor to use when paging forward.
The number of rows to be returned when paging backward. It can be a number between 1 and 1,000.
The cursor to use when paging backward.
The filters to apply to the records. You may only filter on columns included in the columns array input.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The Data Grid connection.
It includes headers and rows for a single page of a Data Grid table. It also allows paging forward and backward to other pages of the Data Grid table.
The Data Grid table’s headers.
An array of arrays containing the values of the Data Grid table’s rows.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Query status.
See QueryStatus
The Query subtype.
See QuerySubtype
The SQL the query executed.
The Data Grid table’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
The Data Grid table’s edges.
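As a sketch, retrieving the latest records from a Data Pool might look like this (the dataGrid field name, the dataPool selector, and the other field names are assumptions based on the inputs described above; the column names are hypothetical):
query {
  dataGrid(input: {
    dataPool: { name: "my_data_pool" }
    timeRange: { relative: LAST_N_DAYS, n: 1 }
    columns: ["order_id", "status", "amount"]
    filterSql: "status = 'confirmed'"
    first: 100
  }) {
    headers
    rows
    pageInfo {
      endCursor
      hasNextPage
    }
  }
}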
Returns records by the given unique IDs.
Arguments
The fields for querying records by unique ID.
The columns to retrieve.
The unique IDs of the records to retrieve.
The Data Pool columns for the record.
An array of values for the record.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Query status.
See QueryStatus
The Query subtype.
See QuerySubtype
The SQL the query executed.
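A sketch of fetching specific records by their unique IDs (field names are assumptions based on the inputs and response fields described here; the Data Pool and IDs are hypothetical):
query {
  recordsByUniqueId(input: {
    dataPool: { name: "my_data_pool" }
    columns: ["order_id", "status"]
    uniqueIds: ["abc-123", "def-456"]
  }) {
    columns
    values
  }
}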
Returns an array of the most frequent values in a given column. The resulting array is sorted in descending order of approximate frequency of values.
Arguments
The fields for querying the top values in a given column.
The column to fetch the unique values from.
The time range for calculating the top values.
The fields required to specify the time range for a time series, counter, or leaderboard Metric query.
If no relative or absolute time ranges are provided, Propel defaults to an absolute time range beginning with the earliest record in the Metric’s Data Pool and ending with the latest record.
If both relative and absolute time ranges are provided, the relative time range will take precedence.
If a LAST_N relative time period is selected, an n ≥ 1 must be provided. If no n is provided or n < 1, a BAD_REQUEST error will be returned.
The timestamp field to use when querying. Defaults to the timestamp configured on the Data Pool or Metric, if any. Set this to filter on an alternative timestamp field.
The relative time period.
The number of time units for the LAST_N relative periods.
The optional start timestamp (inclusive). Defaults to the timestamp of the earliest record in the Data Pool.
The optional end timestamp (exclusive). Defaults to the timestamp of the latest record in the Data Pool.
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The maximum number of values to return. It can be a number between 1 and 1,000. If the parameter is omitted, the default value of 10 is used.
An array with the list of values.
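A sketch of querying the most frequent values in a column (the topValues field name and input shape are assumptions based on the fields described above; the Data Pool and column names are hypothetical):
query {
  topValues(input: {
    dataPool: { name: "my_data_pool" }
    columnName: "country"
    timeRange: { relative: LAST_N_DAYS, n: 30 }
    maxValues: 5
  }) {
    values
  }
}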
Query a metric in counter format. Returns a single metric value for the given time range and filters.
Arguments
The fields for querying a Metric in counter format.
A Metric’s counter query returns a single value over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
The ID of a pre-configured Metric.
The name of a pre-configured Metric.
An ad hoc Custom Metric.
An ad hoc Count Metric.
An ad hoc Sum Metric.
An ad hoc Average Metric.
An ad hoc Min Metric.
An ad hoc Max Metric.
An ad hoc Count Distinct Metric.
The time range for calculating the counter.
The fields required to specify the time range for a time series, counter, or leaderboard Metric query.
If no relative or absolute time ranges are provided, Propel defaults to an absolute time range beginning with the earliest record in the Metric’s Data Pool and ending with the latest record.
If both relative and absolute time ranges are provided, the relative time range will take precedence.
If a LAST_N relative time period is selected, an n ≥ 1 must be provided. If no n is provided or n < 1, a BAD_REQUEST error will be returned.
The timestamp field to use when querying. Defaults to the timestamp configured on the Data Pool or Metric, if any. Set this to filter on an alternative timestamp field.
The relative time period.
The number of time units for the LAST_N relative periods.
The optional start timestamp (inclusive). Defaults to the timestamp of the earliest record in the Data Pool.
The optional end timestamp (exclusive). Defaults to the timestamp of the latest record in the Data Pool.
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The Query Filters to apply before retrieving the counter data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead.
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead.
The Query Filters to apply before retrieving the counter data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The counter response object. It contains a single Metric value for the given time range and Query Filters.
The value of the counter.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Query status.
See QueryStatus
The Query subtype.
See QuerySubtype
The SQL the query executed.
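Putting the counter inputs together, a query might look like this sketch (the counter field name and input shape are assumptions based on the fields described above; the Metric name and filter are hypothetical):
query {
  counter(input: {
    metric: { name: "Revenue" }
    timeRange: { relative: LAST_N_DAYS, n: 7 }
    filterSql: "status = 'confirmed'"
  }) {
    value
  }
}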
Query metrics in counter format. Returns a metric value for each input in the array of inputs.
Arguments
The fields for querying a Metric in counter format.
A Metric’s counter query returns a single value over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
The ID of a pre-configured Metric.
The name of a pre-configured Metric.
An ad hoc Custom Metric.
An ad hoc Count Metric.
An ad hoc Sum Metric.
An ad hoc Average Metric.
An ad hoc Min Metric.
An ad hoc Max Metric.
An ad hoc Count Distinct Metric.
The time range for calculating the counter.
The fields required to specify the time range for a time series, counter, or leaderboard Metric query.
If no relative or absolute time ranges are provided, Propel defaults to an absolute time range beginning with the earliest record in the Metric’s Data Pool and ending with the latest record.
If both relative and absolute time ranges are provided, the relative time range will take precedence.
If a LAST_N relative time period is selected, an n ≥ 1 must be provided. If no n is provided or n < 1, a BAD_REQUEST error will be returned.
The timestamp field to use when querying. Defaults to the timestamp configured on the Data Pool or Metric, if any. Set this to filter on an alternative timestamp field.
The relative time period.
The number of time units for the LAST_N relative periods.
The optional start timestamp (inclusive). Defaults to the timestamp of the earliest record in the Data Pool.
The optional end timestamp (exclusive). Defaults to the timestamp of the latest record in the Data Pool.
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The Query Filters to apply before retrieving the counter data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead.
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead.
The Query Filters to apply before retrieving the counter data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The counter response object. It contains a single Metric value for the given time range and Query Filters.
The value of the counter.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Query status.
See QueryStatus
The Query subtype.
See QuerySubtype
The SQL the query executed.
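The batch form accepts an array of the same inputs and returns one counter response per input, which is useful for loading several stats in a single round trip. A sketch, with the same assumptions as the single-counter example:
query {
  counters(input: [
    { metric: { name: "Revenue" }, timeRange: { relative: LAST_N_DAYS, n: 7 } }
    { metric: { name: "Orders" }, timeRange: { relative: LAST_N_DAYS, n: 7 } }
  ]) {
    value
  }
}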
Query a metric in time series format. Returns arrays of timestamps and metric values for the given time range and filters.
Arguments
The fields for querying a Metric in time series format.
A Metric’s time series query returns the values over a given time range aggregated by a given time granularity; day, month, or year, for example.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
The ID of a pre-configured Metric.
The name of a pre-configured Metric.
An ad hoc Custom Metric.
An ad hoc Count Metric.
An ad hoc Sum Metric.
An ad hoc Average Metric.
An ad hoc Min Metric.
An ad hoc Max Metric.
An ad hoc Count Distinct Metric.
The time range for calculating the time series.
The fields required to specify the time range for a time series, counter, or leaderboard Metric query.
If no relative or absolute time ranges are provided, Propel defaults to an absolute time range beginning with the earliest record in the Metric’s Data Pool and ending with the latest record.
If both relative and absolute time ranges are provided, the relative time range will take precedence.
If a LAST_N relative time period is selected, an n ≥ 1 must be provided. If no n is provided or n < 1, a BAD_REQUEST error will be returned.
The timestamp field to use when querying. Defaults to the timestamp configured on the Data Pool or Metric, if any. Set this to filter on an alternative timestamp field.
The relative time period.
The number of time units for the LAST_N relative periods.
The optional start timestamp (inclusive). Defaults to the timestamp of the earliest record in the Data Pool.
The optional end timestamp (exclusive). Defaults to the timestamp of the latest record in the Data Pool.
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
The time granularity (hour, day, month, etc.) to aggregate the Metric values by.
The available time series granularities. Granularities define the unit of time to aggregate the Metric data for a time series query.
For example, if the granularity is set to DAY, then the time series query will return a label and a value for each day.
If there are no records for a given time series granularity, Propel will return the label and a value of “0” so that the time series can be properly visualized.
MINUTE: Aggregates values by minute intervals.
FIVE_MINUTES: Aggregates values by 5-minute intervals.
TEN_MINUTES: Aggregates values by 10-minute intervals.
FIFTEEN_MINUTES: Aggregates values by 15-minute intervals.
HOUR: Aggregates values by hourly intervals.
DAY: Aggregates values by daily intervals.
WEEK: Aggregates values by weekly intervals.
MONTH: Aggregates values by monthly intervals.
YEAR: Aggregates values by yearly intervals.
The Query Filters to apply before retrieving the time series data, in the form of SQL. If no Query Filters are provided, all data is included.
Columns to group by.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead.
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead.
The Query Filters to apply before retrieving the time series data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The time series response object. It contains an array of time series labels and an array of Metric values for the given time range and Query Filters.
The time series labels.
The time series values.
The time series values for each group in groupBy, if specified.
The time series response object for a group specified in groupBy. It contains an array of time series labels and an array of Metric values for a particular group.
The time series group’s columns.
The time series group’s labels.
The time series group’s values.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Query status.
See QueryStatus
The Query subtype.
See QuerySubtype
The SQL the query executed.
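A time series query sketch, aggregating a Metric by day and grouping by a column (the timeSeries field name, enum values, and selection set are assumptions based on the fields described above; the Metric and column names are hypothetical):
query {
  timeSeries(input: {
    metric: { name: "Revenue" }
    timeRange: { relative: LAST_N_DAYS, n: 30 }
    granularity: DAY
    groupBy: ["country"]
  }) {
    labels
    values
    groups {
      labels
      values
    }
  }
}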
Query a metric in leaderboard format. Returns a table (array of rows) with the selected dimensions and the metric’s corresponding values for the given time range and filters.
Arguments
The fields for querying a Metric in leaderboard format.
A Metric’s leaderboard query returns an ordered table of Dimension and Metric values over a given time range.
The Metric to query. You can query a pre-configured Metric by ID or name, or you can query an ad hoc Metric that you define inline.
The ID of a pre-configured Metric.
The name of a pre-configured Metric.
An ad hoc Custom Metric.
An ad hoc Count Metric.
An ad hoc Sum Metric.
An ad hoc Average Metric.
An ad hoc Min Metric.
An ad hoc Max Metric.
An ad hoc Count Distinct Metric.
The time range for calculating the leaderboard.
The fields required to specify the time range for a time series, counter, or leaderboard Metric query.
If no relative or absolute time ranges are provided, Propel defaults to an absolute time range beginning with the earliest record in the Metric’s Data Pool and ending with the latest record.
If both relative and absolute time ranges are provided, the relative time range will take precedence.
If a LAST_N relative time period is selected, an n ≥ 1 must be provided. If no n is provided or n < 1, a BAD_REQUEST error will be returned.
The timestamp field to use when querying. Defaults to the timestamp configured on the Data Pool or Metric, if any. Set this to filter on an alternative timestamp field.
The relative time period.
The number of time units for the LAST_N relative periods.
The optional start timestamp (inclusive). Defaults to the timestamp of the earliest record in the Data Pool.
The optional end timestamp (exclusive). Defaults to the timestamp of the latest record in the Data Pool.
The time zone to use. Dates and times are always returned in UTC, but setting the time zone influences relative time ranges and granularities.
You can set this to “America/Los_Angeles”, “Europe/Berlin”, or any other value in the IANA time zone database. Defaults to “UTC”.
One or many Dimensions to group the Metric values by. Typically, Dimensions in a leaderboard are what you want to compare and rank.
The fields for creating or modifying a Dimension.
The name of the column to create the Dimension from.
The sort order of the rows. It can be ascending (ASC) or descending (DESC) order. Defaults to descending (DESC) order when not provided.
The available sort orders.
ASC: Sort in ascending order.
DESC: Sort in descending order.
The number of rows to be returned. It can be a number between 1 and 1,000.
The Query Filters to apply before retrieving the leaderboard data, in the form of SQL. If no Query Filters are provided, all data is included.
The ID of the Metric to query. Required if metricName is not specified.
deprecated: Use metric instead.
The name of the Metric to query. Required if metricId is not specified.
deprecated: Use metric instead.
The Query Filters to apply before retrieving the leaderboard data. If no Query Filters are provided, all data is included.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
The leaderboard response object. It contains an array of headers and a table (array of rows) with the selected Dimensions and corresponding Metric values for the given time range and Query Filters.
The table headers. It contains the Dimension and Metric names.
An ordered array of rows. Each row contains the Dimension values and the corresponding Metric value. A Dimension value can be empty. A Metric value will never be empty.
The Query statistics and metadata.
The Query Info object. It contains metadata and statistics about a Query performed.
The Query’s unique identifier.
The date and time in UTC when the Query was created.
The unique identifier of the actor that performed the Query.
The date and time in UTC when the Query was last modified.
The unique identifier of the actor that modified the Query.
The bytes processed by the Query.
The duration of the Query in milliseconds.
The number of records processed by the Query.
The bytes returned by the Query.
The number of records returned by the Query.
The Query status.
See QueryStatus
The Query subtype.
See QuerySubtype
The SQL the query executed.
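A leaderboard query sketch, ranking a Metric by a Dimension (the leaderboard field name and input shape, including rowLimit, are assumptions based on the fields described above; the Metric and column names are hypothetical):
query {
  leaderboard(input: {
    metric: { name: "Revenue" }
    timeRange: { relative: LAST_N_DAYS, n: 30 }
    dimensions: [{ columnName: "country" }]
    sort: DESC
    rowLimit: 10
  }) {
    headers
    rows
  }
}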
Returns the Deletion Job specified by the given ID.
The Deletion Job represents the asynchronous process of deleting data given some filters inside a Data Pool.
Arguments
Deletion Job scheduled for a specific Data Pool.
The Deletion Job represents the asynchronous process of deleting data given some filters inside a Data Pool. It tracks the deletion process until it is finished, showing the progress and the outcome when it is finished.
The Deletion Job’s ID.
The Deletion Job’s creation date and time in UTC.
Who created the Deletion Job.
The Deletion Job’s last modification date and time in UTC.
Who last modified the Deletion Job.
Account to which the Deletion Job belongs.
The Account object.
The Account’s unique identifier.
Environment to which the Deletion Job belongs.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool whose records will be deleted by the Deletion Job. See DataPool
The current Deletion Job’s status.
CREATED: The Job was created, but is not yet being executed.
IN_PROGRESS: The Job is executing.
SUCCEEDED: The Job succeeded.
FAILED: The Job failed. Check the error message.
The filters that will be used for deleting data, in the form of SQL. Data matching the filters will be deleted.
The current progress of the Deletion Job, from 0.0 to 1.0.
The time at which the Deletion Job started.
The time at which the Deletion Job succeeded.
The time at which the Deletion Job failed.
The list of filters that will be used for deleting data. Data matching the filters will be deleted.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
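Because a Deletion Job is asynchronous, you would typically poll it until it reaches SUCCEEDED or FAILED. A sketch (the deletionJob field name and selection set are assumptions based on this reference):
query {
  deletionJob(id: "DELETION_JOB_ID") {
    status
    progress
    filterSql
  }
}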
Returns the AddColumnToDataPoolJob specified by the given ID.
The AddColumnToDataPoolJob represents the asynchronous process of adding a column, given its name and type, to a Data Pool.
Arguments
AddColumnToDataPoolJob scheduled for a specific Data Pool.
The Add Column Job represents the asynchronous process of adding a column, given its name and type, to a Data Pool. It tracks the process of adding a column until it is finished, showing the progress and the outcome when it is finished.
The AddColumnToDataPoolJob’s ID.
The AddColumnToDataPoolJob’s creation date and time in UTC.
Who created the AddColumnToDataPoolJob.
The AddColumnToDataPoolJob’s last modification date and time in UTC.
Who modified the AddColumnToDataPoolJob last.
Account to which the AddColumnToDataPoolJob belongs.
The Account object.
The Account’s unique identifier.
Environment to which the AddColumnToDataPoolJob belongs.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The current AddColumnToDataPoolJob’s status.
CREATED: The Job was created, but is not yet being executed.
IN_PROGRESS: The Job is executing.
SUCCEEDED: The Job succeeded.
FAILED: The Job failed. Check the error message.
Name of the new column.
Type of the new column.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
The ClickHouse type of the new column when columnType is set to CLICKHOUSE.
JSON property to which the new column corresponds.
The current progress of the AddColumnToDataPool Job, from 0.0 to 1.0.
The time at which the AddColumnToDataPool Job started.
The time at which the AddColumnToDataPool Job succeeded.
The time at which the AddColumnToDataPool Job failed.
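As with the other asynchronous jobs, you can poll this job by ID until it finishes. A sketch (the addColumnToDataPoolJob field name and selection set are assumptions based on this reference):
query {
  addColumnToDataPoolJob(id: "JOB_ID") {
    status
    progress
    columnName
  }
}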
Returns the AddColumnToDataPool Jobs with the given status.
Arguments
CREATED: The Job was created, but is not yet being executed.
IN_PROGRESS: The Job is executing.
SUCCEEDED: The Job succeeded.
FAILED: The Job failed. Check the error message.
The Add column to Data Pool Job connection object.
Learn more about pagination in GraphQL.
The Add column to Data Pool Job connection’s edges. See AddColumnToDataPoolJobEdge
The Add column to Data Pool Job connection’s nodes. See AddColumnToDataPoolJob
The Add column to Data Pool Job connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
Returns the UpdateDataPoolRecords Job specified by the given ID.
The UpdateDataPoolRecords Job represents the asynchronous process of updating records inside a Data Pool.
Arguments
UpdateDataPoolRecords Job scheduled for a specific Data Pool. The Update Data Pool Records Job represents the asynchronous process of updating records given some filters, inside a Data Pool. It tracks the process of updating records until it is finished, showing the progress and the outcome when it is finished.
The UpdateDataPoolRecords Job’s ID.
The UpdateDataPoolRecords Job’s creation date and time in UTC.
Who created the UpdateDataPoolRecords Job.
The UpdateDataPoolRecords Job’s last modification date and time in UTC.
Who last modified the UpdateDataPoolRecords Job.
Account to which the UpdateDataPoolRecords Job belongs.
The Account object.
The Account’s unique identifier.
Environment to which the UpdateDataPoolRecords Job belongs.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Pool whose records will be updated by the UpdateDataPoolRecords Job. See DataPool
The current UpdateDataPoolRecords Job’s status.
CREATED: The Job was created, but is not yet being executed.
IN_PROGRESS: The Job is executing.
SUCCEEDED: The Job succeeded.
FAILED: The Job failed. Check the error message.
The filters that will be used for updating data, in the form of SQL. Data matching the filters will be updated.
Describes how the job will update the records.
The fields for creating an Update Data Pool Records Job. For example, the following changes set a status, increment a counter, and compute a full name, respectively:
{ "column": "status", "expression": "'completed'" }
{ "column": "counter", "expression": "counter + 1" }
{ "column": "full_name", "expression": "concat(first_name, ' ', last_name)" }
The name of the column to update.
The value to which the column will be updated. Once evaluated, it should be of the same data type as the column.
The current progress of the UpdateDataPoolRecords Job, from 0.0 to 1.0.
The time at which the UpdateDataPoolRecords Job started.
The time at which the UpdateDataPoolRecords Job succeeded.
The time at which the UpdateDataPoolRecords Job failed.
The list of filters that will be used for updating data. Data matching the filters will be updated.
deprecated: Use filterSql instead.
The fields of a filter.
You can construct more complex filters using and and or. For example, to construct a filter equivalent to
(value > 0 AND value <= 100) OR status = "confirmed"
you could write
{
  "column": "value",
  "operator": "GREATER_THAN",
  "value": "0",
  "and": [{
    "column": "value",
    "operator": "LESS_THAN_OR_EQUAL_TO",
    "value": "100"
  }],
  "or": [{
    "column": "status",
    "operator": "EQUALS",
    "value": "confirmed"
  }]
}
Note that and takes precedence over or.
The name of the column to filter on.
The operation to perform when comparing the column and filter values.
See FilterOperator
The value to compare the column to.
Additional filters to AND with this one. AND takes precedence over OR.
Additional filters to OR with this one. AND takes precedence over OR.
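Like the other jobs, the UpdateDataPoolRecords Job can be polled by ID until it reaches SUCCEEDED or FAILED. A sketch (the updateDataPoolRecordsJob field name and selection set are assumptions based on this reference):
query {
  updateDataPoolRecordsJob(id: "JOB_ID") {
    status
    progress
    filterSql
  }
}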
Returns the UpdateDataPoolRecords Jobs with the given status.
Arguments
CREATED: The Job was created, but is not yet being executed.
IN_PROGRESS: The Job is executing.
SUCCEEDED: The Job succeeded.
FAILED: The Job failed. Check the error message.
The Update Data Pool records Job connection object.
Learn more about pagination in GraphQL.
The Update Data Pool records Job connection’s edges. See UpdateDataPoolRecordsJobEdge
The Update Data Pool records Job connection’s nodes. See UpdateDataPoolRecordsJob
The Update Data Pool records Job connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
Returns the Materialized View specified by the given ID.
Arguments
The Materialized View’s unique identifier.
The Materialized View’s unique name.
The Materialized View’s description.
The Materialized View’s Account.
The Account object.
The Account’s unique identifier.
The Materialized View’s Environment.
The Environments object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Materialized View’s creation date and time in UTC.
The Materialized View’s last modification date and time in UTC.
The Materialized View’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Materialized View’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The SQL that the Materialized View executes.
The Materialized View’s destination (AKA “target”) Data Pool.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Environment.
See Environment
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
See DataPoolStatus
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The list of measures (numeric columns) in the Data Pool.
Arguments
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
Settings related to Data Pool syncing.
See DataPoolSyncing
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
The Data Pool’s table settings.
See TableSettings
The Data Pool’s columns that participate in its PARTITION BY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its PRIMARY KEY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its ORDER BY clause.
See DataPoolColumn
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead. See Tenant
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key. See UniqueId
The Materialized View’s source Data Pool.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Environment.
See Environment
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
See DataPoolStatus
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The list of measures (numeric columns) in the Data Pool.
Arguments
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
Settings related to Data Pool syncing.
See DataPoolSyncing
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
The Data Pool’s table settings.
See TableSettings
The Data Pool’s columns that participate in its PARTITION BY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its PRIMARY KEY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its ORDER BY clause.
See DataPoolColumn
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead. See Tenant
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key. See UniqueId
Other Data Pools queried by the Materialized View.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Environment.
See Environment
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
See DataPoolStatus
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The list of measures (numeric columns) in the Data Pool.
Arguments
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
Settings related to Data Pool syncing.
See DataPoolSyncing
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
The Data Pool’s table settings.
See TableSettings
The Data Pool’s columns that participate in its PARTITION BY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its PRIMARY KEY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its ORDER BY clause.
See DataPoolColumn
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead. See Tenant
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key. See UniqueId
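As an illustration, here is a minimal sketch of this lookup. The ID is a placeholder, and field names such as createdAt, sql, and destination are inferred from the field descriptions above, so verify them against the schema:

query {
  # Fetch a single Materialized View by its unique identifier.
  materializedView(id: "MAT00000000000000000000000000") {
    id
    uniqueName
    description
    createdAt
    sql
    # The destination ("target") Data Pool the Materialized View writes into.
    destination {
      id
      uniqueName
    }
  }
}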
Returns the Materialized View specified by its unique name. (A sketch of this query appears after the field list below.)
Arguments
The Materialized View’s unique identifier.
The Materialized View’s unique name.
The Materialized View’s description.
The Materialized View’s Account.
The Account object.
The Account’s unique identifier.
The Materialized View’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Materialized View’s creation date and time in UTC.
The Materialized View’s last modification date and time in UTC.
The Materialized View’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Materialized View’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The SQL that the Materialized View executes.
The Materialized View’s destination (AKA “target”) Data Pool.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Environment.
See Environment
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
See DataPoolStatus
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The list of measures (numeric columns) in the Data Pool.
Arguments
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
Settings related to Data Pool syncing.
See DataPoolSyncing
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
The Data Pool’s table settings.
See TableSettings
The Data Pool’s columns that participate in its PARTITION BY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its PRIMARY KEY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its ORDER BY clause.
See DataPoolColumn
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead. See Tenant
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key. See UniqueId
The Materialized View’s source Data Pool.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Environment.
See Environment
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
See DataPoolStatus
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The list of measures (numeric columns) in the Data Pool.
Arguments
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
Settings related to Data Pool syncing.
See DataPoolSyncing
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
The Data Pool’s table settings.
See TableSettings
The Data Pool’s columns that participate in its PARTITION BY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its PRIMARY KEY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its ORDER BY clause.
See DataPoolColumn
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead. See Tenant
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key. See UniqueId
Other Data Pools queried by the Materialized View.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Environment.
See Environment
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
See DataPoolStatus
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The list of measures (numeric columns) in the Data Pool.
Arguments
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
Settings related to Data Pool syncing.
See DataPoolSyncing
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
The Data Pool’s table settings.
See TableSettings
The Data Pool’s columns that participate in its PARTITION BY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its PRIMARY KEY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its ORDER BY clause.
See DataPoolColumn
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies instead. See Tenant
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key. See UniqueId
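A matching sketch for the by-name lookup, assuming it is exposed as materializedViewByName with a uniqueName argument (verify the exact query and field names against the schema):

query {
  materializedViewByName(uniqueName: "my_materialized_view") {
    id
    sql
    # The source Data Pool the Materialized View reads from (field name assumed).
    source {
      id
      uniqueName
    }
  }
}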
AddColumnToDataPoolJobConnection
The Add column to Data Pool Job connection object.
Learn more about pagination in GraphQL.
The Add column to Data Pool Job connection’s edges. See AddColumnToDataPoolJobEdge
The Add column to Data Pool Job connection’s nodes. See AddColumnToDataPoolJob
The Add column to Data Pool Job connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
AddColumnToDataPoolJobEdge
The Add column to Data Pool Job edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node. See AddColumnToDataPoolJob
ApplicationConnection
The Application connection object.
Learn more about pagination in GraphQL.
The Application connection’s edges.
The Application edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
The Application object.
Propel Applications represent the web or mobile app you are building. They provide the API credentials that allow your client- or server-side app to access the Propel API. The Application’s Propeller determines the speed and cost of your Metric Queries.
The Application’s unique identifier.
The Application’s unique name.
The Application’s description.
The Application’s Environment.
See Environment
The Application’s creation date and time in UTC.
The Application’s last modification date and time in UTC.
The Application’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Application’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Application’s OAuth 2.0 client identifier.
The Application’s OAuth 2.0 client secret.
The Application’s OAuth 2.0 scopes.
See ApplicationScope
A paginated list of Data Pool Access Policies associated with the Application.
Arguments
The Application connection’s nodes.
The Application object.
Propel Applications represent the web or mobile app you are building. They provide the API credentials that allow your client- or server-side app to access the Propel API. The Application’s Propeller determines the speed and cost of your Metric Queries.
The Application’s unique identifier.
The Application’s unique name.
The Application’s description.
The Application’s Account.
The Account object.
The Account’s unique identifier.
The Application’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Application’s creation date and time in UTC.
The Application’s last modification date and time in UTC.
The Application’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Application’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Application’s OAuth 2.0 client identifier.
The Application’s OAuth 2.0 client secret.
The Application’s Propeller.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max records per second: 5,000,000
P1_SMALL: Max records per second: 25,000,000
P1_MEDIUM: Max records per second: 100,000,000
P1_LARGE: Max records per second: 250,000,000
P1_X_LARGE: Max records per second: 500,000,000
The Application’s OAuth 2.0 scopes.
The API operations an Application is authorized to perform.
ADMIN: Grant read/write access to Data Sources, Data Pools, Metrics and Policies.
APPLICATION_ADMIN: Grant read/write access to Applications.
DATA_POOL_QUERY: Grant read access to query Data Pools.
DATA_POOL_READ: Grant read access to read Data Pools.
DATA_POOL_STATS: Grant read access to fetch column statistics from Data Pools.
ENVIRONMENT_ADMIN: Grant read/write access to Environments.
METRIC_QUERY: Grant read access to query Metrics.
METRIC_STATS: Grant read access to fetch Dimension statistics from Metrics.
METRIC_READ: Grant read access to Metrics. This does not allow querying Metrics; for that, see METRIC_QUERY.
A paginated list of Data Pool Access Policies associated with the Application.
Arguments
The Application connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
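A sketch of paging through Applications while reading the Propeller and scope enums documented above; the top-level applications connection field and the node field names (propeller, scopes) are assumptions based on the descriptions:

query {
  applications(first: 10) {
    edges {
      cursor
      node {
        id
        uniqueName
        propeller # One of P1_X_SMALL through P1_X_LARGE.
        scopes    # For example: [ADMIN, METRIC_QUERY].
      }
    }
    pageInfo {
      endCursor   # Cursor of the last item; pass to `after` to page forward.
      hasNextPage # Can drive a "next page" button.
    }
  }
}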
ApplicationEdge
The Application edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
The Application object.
Propel Applications represent the web or mobile app you are building. They provide the API credentials that allow your client- or server-side app to access the Propel API. The Application’s Propeller determines the speed and cost of your Metric Queries.
The Application’s unique identifier.
The Application’s unique name.
The Application’s description.
The Application’s Account.
The Account object.
The Account’s unique identifier.
The Application’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Application’s creation date and time in UTC.
The Application’s last modification date and time in UTC.
The Application’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Application’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Application’s OAuth 2.0 client identifier.
The Application’s OAuth 2.0 client secret.
The Application’s Propeller.
A Propeller determines your Application’s query processing power. The larger the Propeller, the faster the queries and the higher the cost. Every Propel Application (and therefore every set of API credentials) has a Propeller that determines the speed and cost of queries.
P1_X_SMALL: Max records per second: 5,000,000
P1_SMALL: Max records per second: 25,000,000
P1_MEDIUM: Max records per second: 100,000,000
P1_LARGE: Max records per second: 250,000,000
P1_X_LARGE: Max records per second: 500,000,000
The Application’s OAuth 2.0 scopes.
The API operations an Application is authorized to perform.
ADMIN: Grant read/write access to Data Sources, Data Pools, Metrics and Policies.
APPLICATION_ADMIN: Grant read/write access to Applications.
DATA_POOL_QUERY: Grant read access to query Data Pools.
DATA_POOL_READ: Grant read access to read Data Pools.
DATA_POOL_STATS: Grant read access to fetch column statistics from Data Pools.
ENVIRONMENT_ADMIN: Grant read/write access to Environments.
METRIC_QUERY: Grant read access to query Metrics.
METRIC_STATS: Grant read access to fetch Dimension statistics from Metrics.
METRIC_READ: Grant read access to Metrics. This does not allow querying Metrics; for that, see METRIC_QUERY.
A paginated list of Data Pool Access Policies associated with the Application.
Arguments
BoosterConnection
The Booster connection object.
Learn more about pagination in GraphQL.
The Booster connection’s edges. See BoosterEdge
The Booster connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
BoosterEdge
The Booster edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node. See Booster
ColumnConnection
The column connection object.
Learn more about pagination in GraphQL.
The time at which the columns were cached (i.e., the time at which they were introspected).
The column connection’s edges.
The column edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
The column object.
Once a table introspection succeeds, it creates a new table object for every table it introspected. Within each table object, it also creates a column object for every column it introspected.
The column’s name.
The column’s type.
Whether the column is nullable, meaning whether it accepts a null value.
The time at which the column was cached (i.e., the time at which it was introspected).
The time at which the column was created. This is the same as its cachedAt time.
The column’s creator. This corresponds to the initiator of the table introspection. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
This is the suggested Data Pool column type to use when converting this Data Source column to a Data Pool column.
Propel makes this suggestion based on the Data Source column type. If the Data Source column type is unsupported, this field returns null.
Sometimes, you know better which Data Pool column type to convert to. In these cases, you can refer to supportedDataPoolColumnTypes for the full set of supported conversions.
See ColumnType
This is the set of supported Data Pool column types you can use when converting this Data Source column to a Data Pool column. If the Data Source column type is unsupported, this field returns an empty array.
For example, a numeric Data Source column type could be converted to a narrower or wider numeric Data Pool column type; a string-valued Data Source column type could be mapped to a date or timestamp Data Pool column type.
See ColumnType
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
The column connection’s nodes.
The column object.
Once a table introspection succeeds, it creates a new table object for every table it introspected. Within each table object, it also creates a column object for every column it introspected.
The column’s name.
The column’s type.
Whether the column is nullable, meaning whether it accepts a null value.
The time at which the column was cached (i.e., the time at which it was introspected).
The time at which the column was created. This is the same as its cachedAt time.
The column’s creator. This corresponds to the initiator of the table introspection. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
This is the suggested Data Pool column type to use when converting this Data Source column to a Data Pool column.
Propel makes this suggestion based on the Data Source column type. If the Data Source column type is unsupported, this field returns null.
Sometimes, you know better which Data Pool column type to convert to. In these cases, you can refer to supportedDataPoolColumnTypes for the full set of supported conversions.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit single-precision floating point number.
DOUBLE: A 64-bit double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
This is the set of supported Data Pool column types you can use when converting this Data Source column to a Data Pool column. If the Data Source column type is unsupported, this field returns an empty array.
For example, a numeric Data Source column type could be converted to a narrower or wider numeric Data Pool column type; a string-valued Data Source column type could be mapped to a date or timestamp Data Pool column type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit single-precision floating point number.
DOUBLE: A 64-bit double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
The column connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
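A sketch of introspecting a Data Source’s tables and columns using the fields described above; the top-level dataSource lookup and names such as isNullable are assumptions:

query {
  dataSource(id: "DSO00000000000000000000000000") {
    tables(first: 20) {
      nodes {
        name
        columns(first: 100) {
          cachedAt # When these columns were introspected.
          nodes {
            name
            type
            isNullable
            suggestedDataPoolColumnType  # Null if the source type is unsupported.
            supportedDataPoolColumnTypes # Empty if the source type is unsupported.
          }
        }
      }
    }
  }
}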
ColumnEdge
The column edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
The column object.
Once a table introspection succeeds, it creates a new table object for every table it introspected. Within each table object, it also creates a column object for every column it introspected.
The column’s name.
The column’s type.
Whether the column is nullable, meaning whether it accepts a null value.
The time at which the column was cached (i.e., the time at which it was introspected).
The time at which the column was created. This is the same as its cachedAt time.
The column’s creator. This corresponds to the initiator of the table introspection. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
This is the suggested Data Pool column type to use when converting this Data Source column to a Data Pool column.
Propel makes this suggestion based on the Data Source column type. If the Data Source column type is unsupported, this field returns null.
Sometimes, you know better which Data Pool column type to convert to. In these cases, you can refer to supportedDataPoolColumnTypes for the full set of supported conversions.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit single-precision floating point number.
DOUBLE: A 64-bit double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
This is the set of supported Data Pool column types you can use when converting this Data Source column to a Data Pool column. If the Data Source column type is unsupported, this field returns an empty array.
For example, a numeric Data Source column type could be converted to a narrower or wider numeric Data Pool column type; a string-valued Data Source column type could be mapped to a date or timestamp Data Pool column type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit single-precision floating point number.
DOUBLE: A 64-bit double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
Information about the column obtained from Snowflake.
deprecated: This is Snowflake-specific, and will be removed
DataPoolAccessPolicyConnection
The Data Pool Access Policy connection object.
Learn more about pagination in GraphQL.
The Data Pool Access Policy connection’s edges. See DataPoolAccessPolicyEdge
The Data Pool Access Policy connection’s nodes. See DataPoolAccessPolicy
The Data Pool Access Policy connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
DataPoolAccessPolicyEdge
The Data Pool Access Policy edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node. See DataPoolAccessPolicy
DataPoolColumnConnection
The Data Pool column connection object.
Learn more about pagination in GraphQL.
The Data Pool column connection’s edges.
The Data Pool column edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
See ColumnType
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool column connection’s nodes.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit single-precision floating point number.
DOUBLE: A 64-bit double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
The Data Pool column connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
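A sketch of listing a Data Pool’s columns with the fields described above; columnName comes from the deprecation note, while the dataPool lookup, clickHouseType, and isNullable are assumed names:

query {
  dataPool(id: "DPO00000000000000000000000000") {
    columns(first: 100) {
      nodes {
        columnName
        type           # A ColumnType value, for example STRING or INT64.
        clickHouseType # The exact representation of the type in ClickHouse.
        isNullable
      }
    }
  }
}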
DataPoolColumnEdge
The Data Pool column edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
The name of the Data Source column that this Data Pool column derives from.
The Data Pool column’s type. This may differ from the corresponding Data Source column’s type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit single-precision floating point number.
DOUBLE: A 64-bit double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
The ClickHouse type. This is the exact representation of the type in ClickHouse.
Whether the column is nullable, meaning whether it accepts a null value.
The name of the Data Source column that this Data Pool column derives from.
deprecated: Start using columnName instead
DataPoolConnection
The Data Pool connection object.
Learn more about pagination in GraphQL.
The Data Pool connection’s edges. See DataPoolEdge
The Data Pool connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
DataPoolEdge
The Data Pool edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node. See DataPool
DataSourceConnection
The Data Source connection object.
Learn more about pagination in GraphQL.
The Data Source connection’s edges.
The Data Source edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Environment.
See Environment
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
See DataSourceType
The Data Source’s status.
See DataSourceStatus
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
See DataSourceCheck
If you list Data Pools via the dataPools field on a Data Source, you will get the Data Pools for that Data Source.
The dataPools field uses cursor-based pagination, typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after, as in the sketch below.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
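To make the forward-pagination flow concrete, here is a sketch of two successive requests; the top-level dataSource lookup is assumed, and the cursor string is a placeholder copied from the previous response:

# Request 1: fetch the first page.
query {
  dataSource(id: "DSO00000000000000000000000000") {
    dataPools(first: 25) {
      edges {
        cursor
        node { id uniqueName }
      }
      pageInfo { hasNextPage }
    }
  }
}

# Request 2: pass the cursor of the last edge from request 1 to `after`.
query {
  dataSource(id: "DSO00000000000000000000000000") {
    dataPools(first: 25, after: "CURSOR_OF_LAST_EDGE") {
      edges {
        cursor
        node { id uniqueName }
      }
      pageInfo { hasNextPage }
    }
  }
}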
The Data Source connection’s nodes.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Account.
The Account object.
The Account’s unique identifier.
The Data Source’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
The types of Data Sources.
WEBHOOK: Indicates a Webhook Data Source.
TWILIO_SEGMENT: Indicates a Twilio Segment Data Source.
S3: Indicates an Amazon S3 Data Source.
Redshift: Indicates a Redshift Data Source.
POSTGRESQL: Indicates a PostgreSQL Data Source.
KAFKA: Indicates a Kafka Data Source.
Http: Indicates an HTTP Data Source.
CLICKHOUSE: Indicates a ClickHouse Data Source.
AMAZON_DYNAMODB: Indicates an Amazon DynamoDB Data Source.
AMAZON_DATA_FIREHOSE: Indicates an Amazon Data Firehose Data Source.
Snowflake: Indicates a Snowflake Data Source.
INTERNAL: Indicates an internal Data Source.
The Data Source’s status.
The status of a Data Source.
CREATED: The Data Source has been created, but it is not connected yet.
CONNECTING: Propel is attempting to connect the Data Source.
CONNECTED: The Data Source is connected.
BROKEN: The Data Source failed to connect.
DELETING: Propel is deleting the Data Source.
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
The Data Source Check object.
Data Source Checks are executed when setting up your Data Source. They check that Propel will be able to receive data and set up Data Pools.
The exact Checks to perform vary by Data Source. For example, Snowflake-backed Data Sources will have their own specific Checks.
The name of the Data Source Check to be performed.
A description of the Data Source Check to be performed.
The status of the Data Source Check (all checks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Source Check failed, this field includes a descriptive error message.
See Error
The time at which the Data Source Check was performed.
If you list Data Pools via the dataPools field on a Data Source, you will get the Data Pools for that Data Source.
The dataPools field uses cursor-based pagination, typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before, as in the sketch below.
Arguments
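The backward direction mirrors the forward one. A sketch under the same assumptions, passing the cursor of the first edge of the current page to before:

query {
  dataSource(id: "DSO00000000000000000000000000") {
    dataPools(last: 25, before: "CURSOR_OF_FIRST_EDGE") {
      edges {
        cursor
        node { id uniqueName }
      }
      pageInfo {
        startCursor     # Cursor of the first item on this page.
        hasPreviousPage # Can drive a "previous page" button.
      }
    }
  }
}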
The Data Source connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
DataSourceEdge
The Data Source edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
The Data Source object.
A Data Source is a connection to your data warehouse. It has the necessary connection details for Propel to access Snowflake or any other supported Data Source.
The Data Source’s unique identifier.
The Data Source’s unique name.
The Data Source’s description.
The Data Source’s Account.
The Account object.
The Account’s unique identifier.
The Data Source’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Data Source’s creation date and time in UTC.
The Data Source’s last modification date and time in UTC.
The Data Source’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Source’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Source’s type.
The types of Data Sources.
WEBHOOK: Indicates a Webhook Data Source.
TWILIO_SEGMENT: Indicates a Twilio Segment Data Source.
S3: Indicates an Amazon S3 Data Source.
Redshift: Indicates a Redshift Data Source.
POSTGRESQL: Indicates a PostgreSQL Data Source.
KAFKA: Indicates a Kafka Data Source.
Http: Indicates an HTTP Data Source.
CLICKHOUSE: Indicates a ClickHouse Data Source.
AMAZON_DYNAMODB: Indicates an Amazon DynamoDB Data Source.
AMAZON_DATA_FIREHOSE: Indicates an Amazon Data Firehose Data Source.
Snowflake: Indicates a Snowflake Data Source.
INTERNAL: Indicates an internal Data Source.
The Data Source’s status.
The status of a Data Source.
CREATED: The Data Source has been created, but it is not connected yet.
CONNECTING: Propel is attempting to connect the Data Source.
CONNECTED: The Data Source is connected.
BROKEN: The Data Source failed to connect.
DELETING: Propel is deleting the Data Source.
The Data Source’s connection settings.
The tables contained within the Data Source, according to the most recent table introspection.
Arguments
See TableConnection
A list of table introspections performed for the Data Source. You can see how tables and columns changed over time by paging through this list.
Arguments
A list of checks performed on the Data Source during its most recent connection attempt.
The Data Source Check object.
Data Source Checks are executed when setting up your Data Source. They check that Propel will be able to receive data and set up Data Pools.
The exact Checks to perform vary by Data Source. For example, Snowflake-backed Data Sources will have their own specific Checks.
The name of the Data Source Check to be performed.
A description of the Data Source Check to be performed.
The status of the Data Source Check (all checks begin as NOT_STARTED before transitioning to SUCCEEDED or FAILED).
If the Data Source Check failed, this field includes a descriptive error message.
See Error
The time at which the Data Source Check was performed.
If you list Data Pools via the dataPools field on a Data Source, you will get the Data Pools for that Data Source.
The dataPools field uses cursor-based pagination, typical of GraphQL APIs. You can use the pairs of parameters first and after or last and before to page forward or backward through the results, respectively.
For forward pagination, the first parameter defines the number of results to return, and the after parameter defines the cursor to continue from. You should pass the cursor for the last result of the current page to after.
For backward pagination, the last parameter defines the number of results to return, and the before parameter defines the cursor to continue from. You should pass the cursor for the first result of the current page to before.
Arguments
DeletionJobConnection
The Deletion Job connection object.
Learn more about pagination in GraphQL.
The Deletion Job connection’s edges. See DeletionJobEdge
The Deletion Job connection’s nodes. See DeletionJob
The Deletion Job connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
DeletionJobEdge
The Deletion Job edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node. See DeletionJob
MaterializedViewConnection
The Materialized View connection object.
Learn more about pagination in GraphQL.
The Materialized View connection’s edges.
The Materialized View edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node.
The Materialized View’s unique identifier.
The Materialized View’s unique name.
The Materialized View’s description.
The Materialized View’s Environment.
See Environment
The Materialized View’s creation date and time in UTC.
The Materialized View’s last modification date and time in UTC.
The Materialized View’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Materialized View’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The SQL that the Materialized View executes.
The Materialized View’s destination (AKA “target”) Data Pool.
See DataPool
The Materialized View connection’s nodes.
The Materialized View’s unique identifier.
The Materialized View’s unique name.
The Materialized View’s description.
The Materialized View’s Account.
The Account object.
The Account’s unique identifier.
The Materialized View’s Environment.
The Environment object.
Environments are independent and isolated Propel workspaces for development, staging (testing), and production workloads. Environments are hosted in a specific region, initially in us-east-2 only.
The Environment’s unique identifier.
The Environment’s unique name.
The Environment’s description.
The Environment’s creation date and time in UTC.
The Environment’s last modification date and time in UTC.
The Environment’s creator. It can be either a User ID, an Environment ID, or “system” if it was created by Propel.
The Environment’s last modifier. It can be either a User ID, an Environment ID, or “system” if it was modified by Propel.
The Materialized View’s creation date and time in UTC.
The Materialized View’s last modification date and time in UTC.
The Materialized View’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Materialized View’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The SQL that the Materialized View executes.
The Materialized View’s destination (AKA “target”) Data Pool.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache.
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Environment.
See Environment
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
See DataPoolStatus
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The list of measures (numeric columns) in the Data Pool.
Arguments
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
Settings related to Data Pool syncing.
See DataPoolSyncing
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
The Data Pool’s table settings.
See TableSettings
The Data Pool’s columns that participate in its PARTITION BY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its PRIMARY KEY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its ORDER BY clause.
See DataPoolColumn
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies insteadSee Tenant
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.See UniqueId
The Materialized View’s source Data Pool.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Environment.
See Environment
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
See DataPoolStatus
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The list of measures (numeric columns) in the Data Pool.
Arguments
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
Settings related to Data Pool syncing.
See DataPoolSyncing
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
The Data Pool’s table settings.
See TableSettings
The Data Pool’s columns that participate in its PARTITION BY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its PRIMARY KEY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its ORDER BY clause.
See DataPoolColumn
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies insteadSee Tenant
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.See UniqueId
Other Data Pools queried by the Materialized View.
The Data Pool object. Data Pools are Propel’s high-speed data store and cache
The Data Pool’s unique identifier.
The Data Pool’s unique name.
The Data Pool’s description.
The Data Pool’s Environment.
See Environment
The Data Pool’s creation date and time in UTC.
The Data Pool’s last modification date and time in UTC.
The Data Pool’s creator. It can be either a User ID, an Application ID, or “system” if it was created by Propel.
The Data Pool’s last modifier. It can be either a User ID, an Application ID, or “system” if it was modified by Propel.
The Data Pool’s Data Source. See DataSource
The Data Pool’s status.
See DataPoolStatus
The Data Pool’s data retention in days (not yet supported).
The name of the Data Pool’s table.
The number of records in the Data Pool.
The amount of storage in terabytes used by the Data Pool.
The Data Pool’s columns.
Arguments
The list of measures (numeric columns) in the Data Pool.
Arguments
A list of setup tasks performed on the Data Pool during its most recent setup attempt.
Settings related to Data Pool syncing.
See DataPoolSyncing
The Deletion Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The Add Column Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
The UpdateDataPoolRecords Jobs that were historically issued to this Data Pool, sorted by creation time, in descending order.
Arguments
Whether the Data Pool has access control enabled or not.
If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
A paginated list of Data Pool Access Policies available on the Data Pool.
Arguments
Validates a custom expression against the Data Pool’s available columns. If the provided expression is invalid, the ValidateExpressionResult response will contain a reason explaining why.
Arguments
The Data Pool’s table settings.
See TableSettings
The Data Pool’s columns that participate in its PARTITION BY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its PRIMARY KEY clause.
See DataPoolColumn
The Data Pool’s columns that participate in its ORDER BY clause.
See DataPoolColumn
The Data Pool’s tenant ID, if configured.
deprecated: Will be removed; use Data Pool Access Policies insteadSee Tenant
The Data Pool’s unique ID column. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated within the Data Pool.
deprecated: Will be removed; use table settings to define the primary key.See UniqueId
The Materialized View connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
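As a hedged sketch of reading this connection’s nodes directly (the top-level materializedViews field and the node field names are assumptions based on the type descriptions above, not verified signatures):

# List Materialized Views, reading each view's SQL and its
# destination ("target") Data Pool.
query ListMaterializedViews {
  materializedViews(first: 20) {
    nodes {
      id
      uniqueName
      sql                # the SQL the Materialized View executes
      destination { id } # the destination Data Pool
    }
    pageInfo { hasNextPage endCursor }
  }
}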
MaterializedViewEdge
The Materialized View edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node. See MaterializedView
MetricConnection
The Metric connection object.
Learn more about pagination in GraphQL.
The Metric connection’s edges. See MetricEdge
The Metric connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
MetricEdge
The Metric edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node. See Metric
PolicyConnection
The Policy connection object.
Learn more about pagination in GraphQL.
The Policy connection’s edges. See PolicyEdge
The Policy connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
PolicyEdge
The Policy edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node. See Policy
SyncConnection
The Sync connection object.
Learn more about pagination in GraphQL.
The Sync connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
SyncEdge
The Sync edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node. See Sync
TableConnection
The table connection object.
Learn more about pagination in GraphQL.
The time at which the tables were cached (i.e., the time at which they were introspected).
The table connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
TableEdge
The table edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node. See Table
TableIntrospectionConnection
The table introspection connection object.
Learn more about pagination in GraphQL.
The table introspection connection’s edges. See TableIntrospectionEdge
The table introspection connection’s nodes. See TableIntrospection
The table introspection connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
TableIntrospectionEdge
The table introspection edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node. See TableIntrospection
UpdateDataPoolRecordsJobConnection
The Update Data Pool records Job connection object.
Learn more about pagination in GraphQL.
The Update Data Pool records Job connection’s edges. See UpdateDataPoolRecordsJobEdge
The Update Data Pool records Job connection’s nodes. See UpdateDataPoolRecordsJob
The Update Data Pool records Job connection’s page info.
The page info object used for pagination.
Points to the first item returned in the results. Used when paginating backward.
Points to the last item returned in the results. Used when paginating forward.
A boolean that indicates whether a next page of results exists. Can be used to display a “next page” button in user interfaces, for example.
A boolean that indicates whether a previous page of results exists. Can be used to display a “previous page” button in user interfaces, for example.
UpdateDataPoolRecordsJobEdge
The Update Data Pool records Job edge object.
Learn more about pagination in GraphQL.
The edge’s cursor.
The edge’s node. See UpdateDataPoolRecordsJob
AmazonDataFirehoseConnectionSettings
The Amazon Data Firehose Data Source’s connection settings.
Enables or disables access control for the Data Pool. If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
HTTP basic access authentication credentials. You must configure these same credentials to be included in the X-Amz-Firehose-Access-Key header when Amazon Data Firehose issues requests to its custom HTTP endpoint.
The HTTP Basic authentication settings.
Username for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
Password for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
Additional columns for the table in Propel.
A column in an Amazon Data Firehose Data Source’s table.
The column name.
The JSON property that the column will be derived from. For example, if you send a JSON event like this:
{ "greeting": { "message": "hello, world" } }
Then you can use the JSON property “greeting.message” to extract “hello, world” to a column.
The column type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Whether the column’s type is nullable or not.
Copy this value into the URL field when configuring your Amazon Data Firehose to deliver to a custom HTTP endpoint.
Override the Data Pool’s table settings. These describe how the Data Pool’s table is created in ClickHouse, and a default will be chosen based on the Data Pool’s timestamp value, if any. You can override these defaults in order to specify a custom table engine, custom ORDER BY, etc.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
A Data Pool’s table engine.
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The primary timestamp column, if any.
Copy this value into the X-Amz-Firehose-Access-Key header when configuring your Amazon Data Firehose to deliver to a custom HTTP endpoint.
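Putting these settings together, the following is a hedged sketch of creating an Amazon Data Firehose Data Source; the mutation name and input shape are assumptions inferred from the fields described in this section, not a verified API:

# Hypothetical mutation shape (names are assumptions): HTTP Basic
# credentials, an extra column derived from a JSON property, and a
# table settings override.
mutation {
  createAmazonDataFirehoseDataSource(input: {
    uniqueName: "my_firehose"
    connectionSettings: {
      basicAuth: { username: "user", password: "secret" }
      columns: [{
        name: "greeting_message"
        jsonProperty: "greeting.message" # extracts "hello, world"
        type: STRING
        nullable: true
      }]
      tableSettings: {
        orderBy: ["timestamp"]
        ttl: "timestamp + INTERVAL 90 DAY"
      }
    }
  }) {
    dataSource { id }
  }
}

Once the Data Source is created, copy the generated endpoint URL into the Firehose URL field and the access key into the X-Amz-Firehose-Access-Key header, as described above.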
AmazonDynamoDBConnectionSettings
The Amazon DynamoDB Data Source’s connection settings.
Enables or disables access control for the Data Pool. If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
HTTP basic access authentication credentials. You must configure these same credentials to be included in the X-Amz-Firehose-Access-Key header when Amazon Data Firehose transmits records from your DynamoDB table to its custom HTTP endpoint.
The HTTP Basic authentication settings.
Username for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
Password for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
Additional columns for the table in Propel.
A column in an Amazon DynamoDB Data Source’s table.
The column name.
The JSON property that the column will be derived from. For example, if you send a JSON event like this:
{ "greeting": { "message": "hello, world" } }
Then you can use the JSON property “greeting.message” to extract “hello, world” to a column.
The column type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Whether the column’s type is nullable or not.
Copy this value into the URL field when configuring your Amazon Data Firehose to transmit records from your DynamoDB table to a custom HTTP endpoint.
Override the Data Pool’s table settings. These describe how the Data Pool’s table is created in ClickHouse, and a default will be chosen based on the Data Pool’s timestamp value, if any. You can override these defaults in order to specify a custom table engine, custom ORDER BY, etc.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
A Data Pool’s table engine.
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The primary timestamp column, if any.
Copy this value into the X-Amz-Firehose-Access-Key header when configuring your Amazon Data Firehose to transmit records from your DynamoDB table to its custom HTTP endpoint.
ClickHouseConnectionSettings
The ClickHouse Data Source connection settings.
Which database to connect to.
The password for the provided user.
Whether or not the user has read-only permissions for querying ClickHouse.
The URL where the ClickHouse host is listening for HTTP[S] connections.
The user for authenticating against the ClickHouse host.
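As a hedged sketch (the dataSource query and the inline fragment’s field names are assumptions based on the descriptions above), these settings can be read back as follows:

# Read a ClickHouse Data Source's connection settings.
query ClickHouseSettings($id: ID!) {
  dataSource(id: $id) {
    connectionSettings {
      ... on ClickHouseConnectionSettings {
        url      # where the ClickHouse host listens for HTTP[S]
        database # which database to connect to
        user     # the user for authenticating
        readonly # whether the user has read-only permissions
      }
    }
  }
}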
HttpConnectionSettings
The HTTP Data Source connection settings.
The HTTP Basic authentication settings for uploading new data.
If this parameter is not provided, anyone with the URL to your tables will be able to upload data. While it’s OK to test without HTTP Basic authentication, we recommend enabling it.
The HTTP Basic authentication settings.
Username for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
Password for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
The HTTP Data Source’s tables.
An HTTP Data Source’s table.
The ID of the table.
The name of the table.
All the columns present in the table.
A column in an HTTP Data Source’s table.
The column name. It has to be unique within a table.
The column type.
See ColumnType
The ClickHouse type to use when type is set to CLICKHOUSE.
Whether the column’s type is nullable or not.
KafkaConnectionSettings
The Kafka Data Source connection settings.
The type of authentication to use. Can be SCRAM-SHA-256, SCRAM-SHA-512, PLAIN, or NONE.
The bootstrap server(s) to connect to.
The password for the provided user.
Whether or not the connection to the Kafka servers is encrypted.
The user for authenticating against the Kafka servers.
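A hedged sketch of these settings as a creation input (the mutation name and input field names are assumptions inferred from the descriptions above):

# Hypothetical Kafka Data Source creation input.
mutation {
  createKafkaDataSource(input: {
    uniqueName: "my_kafka"
    connectionSettings: {
      auth: "SCRAM-SHA-256" # or SCRAM-SHA-512, PLAIN, NONE
      bootstrapServers: ["broker-1:9092", "broker-2:9092"]
      user: "propel"
      password: "secret"
      tls: true             # encrypt the connection
    }
  }) {
    dataSource { id }
  }
}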
PostgreSqlConnectionSettings
The PostgreSQL Data Source connection settings.
Which database to connect to.
The host where PostgreSQL is listening.
The port where PostgreSQL is listening (usually 5432).
Which schema to use.
The user for authenticating against PostgreSQL.
S3ConnectionSettings
The connection settings for an Amazon S3 Data Source. These include the Amazon S3 bucket name, the AWS access key ID, and the tables (along with their paths). We do not allow fetching the AWS secret access key after it has been set.
The AWS access key ID for an IAM user with sufficient access to the Amazon S3 bucket.
The name of the Amazon S3 bucket.
The Amazon S3 Data Source’s tables.
An Amazon S3 Data Source’s table.
The ID of the table.
The name of the table.
The path to the table’s files in Amazon S3.
All the columns present in the table.
A column in an Amazon S3 Data Source’s table.
The column name.
The column type.
See ColumnType
Whether the column’s type is nullable or not.
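As a hedged sketch (query and field names are assumptions based on the descriptions above; note that the AWS secret access key cannot be fetched once set):

# Read an Amazon S3 Data Source's bucket, access key ID, and tables.
query S3Settings($id: ID!) {
  dataSource(id: $id) {
    connectionSettings {
      ... on S3ConnectionSettings {
        bucket
        awsAccessKeyId
        tables {
          id
          name
          path # the path to the table's files in Amazon S3
          columns { name type nullable }
        }
      }
    }
  }
}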
TwilioSegmentConnectionSettings
The Twilio Segment Data Source connection settings.
Enables or disables access control for the Data Pool. If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
The HTTP basic authentication settings for the Twilio Segment Data Source URL. If this parameter is not provided, anyone with the webhook URL will be able to send events. While it’s OK to test without HTTP Basic authentication, we recommend enabling it.
The HTTP Basic authentication settings.
Username for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
Password for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
The additional columns for the table in Propel.
A column in a Twilio Segment Data Source’s table.
The column name.
The JSON property that the column will be derived from. For example, if you POST a JSON event like this:
{ "greeting": { "message": "hello, world" } }
Then you can use the JSON property “greeting.message” to extract “hello, world” to a column.
The column type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Whether the column’s type is nullable or not.
Override the Data Pool’s table settings. These describe how the Data Pool’s table is created in ClickHouse, and a default will be chosen based on the Data Pool’s timestamp and uniqueId values, if any. You can override these defaults in order to specify a custom table engine, custom ORDER BY, etc.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
A Data Pool’s table engine.
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The primary timestamp column, if any.
The webhook URL that Twilio Segment should POST events to.
WebhookConnectionSettings
The Webhook Data Source connection settings.
Enables or disables access control for the Data Pool. If the Data Pool has access control enabled, Applications must be assigned Data Pool Access Policies in order to query the Data Pool and its Metrics.
The HTTP basic authentication settings for the Webhook Data Source URL. If this parameter is not provided, anyone with the webhook URL will be able to send events. While it’s OK to test without HTTP Basic authentication, we recommend enabling it.
The HTTP Basic authentication settings.
Username for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
Password for HTTP Basic authentication that must be included in the Authorization header when uploading new data.
The additional columns for the table in Propel.
A column in a Webhook Data Source’s table.
The column name.
The JSON property that the column will be derived from. For example, if you POST a JSON event like this:
{ "greeting": { "message": "hello, world" } }
Then you can use the JSON property “greeting.message” to extract “hello, world” to a column.
The column type.
The Propel data types.
BOOLEAN: True or false.
STRING: A variable-length string.
FLOAT: A 32-bit signed single-precision floating point number.
DOUBLE: A 64-bit signed double-precision floating point number.
INT8: An 8-bit signed integer, with a minimum value of -2⁷ and a maximum value of 2⁷-1.
INT16: A 16-bit signed integer, with a minimum value of -2¹⁵ and a maximum value of 2¹⁵-1.
INT32: A 32-bit signed integer, with a minimum value of -2³¹ and a maximum value of 2³¹-1.
INT64: A 64-bit signed integer, with a minimum value of -2⁶³ and a maximum value of 2⁶³-1.
DATE: A date without a timestamp. For example, “YYYY-MM-DD”.
TIMESTAMP: A date with a timestamp. For example, “yyyy-MM-dd HH:mm:ss”.
JSON: A JavaScript Object Notation (JSON) document.
CLICKHOUSE: A ClickHouse-specific type.
Whether the column’s type is nullable or not.
Override the Data Pool’s table settings. These describe how the Data Pool’s table is created in ClickHouse, and a default will be chosen based on the Data Pool’s timestamp and uniqueId values, if any. You can override these defaults in order to specify a custom table engine, custom ORDER BY, etc.
A Data Pool’s table settings.
These describe how the Data Pool’s table is created in ClickHouse.
The ClickHouse table engine for the Data Pool’s table.
A Data Pool’s table engine.
The PARTITION BY clause for the Data Pool’s table.
The PRIMARY KEY clause for the Data Pool’s table.
The ORDER BY clause for the Data Pool’s table.
The TTL clause for the Data Pool’s table.
The primary timestamp column, if any.
The Webhook URL for posting JSON events.
The tenant ID column, if any.
deprecated: Will be removed; use Data Pool Access Policies instead.
The unique ID column, if any. Propel uses the primary timestamp and a unique ID to compose a primary key for determining whether records should be inserted, deleted, or updated.
deprecated: Will be removed; use Table Settings to define the primary key.
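As a final hedged sketch (field names are assumptions based on the descriptions above), a client can fetch the webhook URL and Basic authentication settings needed to POST JSON events:

# Read a Webhook Data Source's URL and Basic auth settings.
query WebhookSettings($id: ID!) {
  dataSource(id: $id) {
    connectionSettings {
      ... on WebhookConnectionSettings {
        webhookUrl # POST JSON events to this URL
        basicAuth { username password }
      }
    }
  }
}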