Quotas and limits
This document lists the quotas and limits that apply to BigQuery. For more information on quotas, see the Cloud Quotas overview.
A quota restricts how much of a shared Google Cloud resource your Google Cloud project can use, including hardware, software, and network components. Therefore, quotas are a part of a system that does the following:
- Monitors your use or consumption of Google Cloud products and services.
- Restricts your consumption of those resources, for reasons that include ensuring fairness and reducing spikes in usage.
- Maintains configurations that automatically enforce prescribed restrictions.
- Provides a means to request or make changes to the quota.
In most cases, when a quota is exceeded, the system immediately blocks access to the relevant Google Cloud resource, and the task that you're trying to perform fails. In most cases, quotas apply to each Google Cloud project and are shared across all applications and IP addresses that use that Google Cloud project.
There are also limits on BigQuery resources. These limits are unrelated to the quota system. Limits cannot be changed unless otherwise stated.
By default, BigQuery quotas and limits apply on a per-project basis. Quotas and limits that apply on a different basis are indicated as such; for example, the maximum number of columns per table, or the maximum number of concurrent API requests per user.
Specific policies vary depending on resource availability, user profile, service usage history, and other factors, and are subject to change without notice.
Quota replenishment
Daily quotas are replenished at regular intervals throughout the day, reflecting their intent to guide rate-limiting behaviors. Quotas are also refreshed intermittently to avoid long disruptions when a quota is exhausted. More quota typically becomes available within minutes rather than being replenished globally once per day.
Request a quota increase
To increase or decrease most quotas, use the Google Cloud console. For more information, see Request a higher quota.
For step-by-step guidance through the process of requesting a quota increase in the Google Cloud console, click Guide me.
Cap quota usage
To learn how you can limit usage of a particular resource by specifying a smaller quota than the default, see Capping usage.
Required permissions
To view and update your BigQuery quotas in the Google Cloud console, you need the same permissions as for any Google Cloud quota. For more information, see Google Cloud quota permissions.
Troubleshoot
For information about troubleshooting errors related to quotas and limits, see Troubleshooting BigQuery quota errors.
Jobs
Quotas and limits apply to jobs that BigQuery runs on your behalf, whether they are run by using the Google Cloud console, the bq command-line tool, or programmatically by using the REST API or client libraries.
Query jobs
The following quotas and limits apply to query jobs created automatically by running interactive queries, scheduled queries, and jobs submitted by using the jobs.query and query-type jobs.insert API methods:
| Limit | Default | Notes |
| --- | --- | --- |
| Maximum number of queued interactive queries | 1,000 queries | Your project can queue up to 1,000 interactive queries. Additional interactive queries that exceed this limit return a quota error. |
| Maximum number of queued batch queries | 20,000 queries | Your project can queue up to 20,000 batch queries. Additional batch queries that exceed this limit return a quota error. |
| Maximum number of concurrent interactive queries against Bigtable external data sources | 16 queries | Your project can run up to sixteen concurrent queries against a Bigtable external data source. |
| Maximum number of concurrent queries that contain remote functions | 10 queries | You can run up to ten concurrent queries with remote functions per project. |
| Maximum number of concurrent multi-statement queries | 1,000 multi-statement queries | Your project can run up to 1,000 concurrent multi-statement queries. For other quotas and limits related to multi-statement queries, see Multi-statement queries. |
| Maximum number of concurrent legacy SQL queries that contain UDFs | 6 queries | Your project can run up to six concurrent legacy SQL queries with user-defined functions (UDFs). This limit includes both interactive and batch queries. Interactive queries that contain UDFs also count toward the concurrent limit for interactive queries. This limit does not apply to GoogleSQL queries. |
| Daily query size limit | Unlimited | By default, there is no daily query size limit. However, you can set limits on the amount of data users can query by creating custom quotas to control query usage per day or query usage per day per user. |
| Daily destination table update limit | See Maximum number of table operations per day. | Updates to destination tables in a query job count toward the limit on the maximum number of table operations per day for the destination tables. Destination table updates include append and overwrite operations that are performed by queries that you run by using the Google Cloud console, using the bq command-line tool, or calling the jobs.query and query-type jobs.insert API methods. |
| Query/multi-statement query execution-time limit | 6 hours | A query or multi-statement query can execute for up to 6 hours, and then it fails. However, sometimes queries are retried. A query can be tried up to three times, and each attempt can run for up to 6 hours. As a result, it's possible for a query to have a total runtime of more than 6 hours. The CREATE MODEL job timeout defaults to 24 hours, with the exception of time series, AutoML, and hyperparameter tuning jobs, which time out at 72 hours. |
| Maximum number of resources referenced per query | 1,000 resources | A query can reference up to 1,000 total unique tables, unique views, unique user-defined functions (UDFs), and unique table functions after full expansion. This limit includes tables, views, UDFs, and table functions directly referenced by the query, as well as those referenced by other views, UDFs, or table functions referenced in the query. |
| Maximum unresolved legacy SQL query length | 256 KB | An unresolved legacy SQL query can be up to 256 KB long. If your query is longer, you receive the following error: The query is too large. To stay within this limit, consider replacing large arrays or lists with query parameters. |
| Maximum unresolved GoogleSQL query length | 1 MB | An unresolved GoogleSQL query can be up to 1 MB long. If your query is longer, you receive the following error: The query is too large. To stay within this limit, consider replacing large arrays or lists with query parameters (see the sketch after this table). |
| Maximum resolved legacy and GoogleSQL query length | 12 MB | The limit on resolved query length includes the length of all views and wildcard tables referenced by the query. |
| Maximum number of GoogleSQL query parameters | 10,000 parameters | A GoogleSQL query can have up to 10,000 parameters. |
| Maximum request size | 10 MB | The request size can be up to 10 MB, including additional properties like query parameters. |
| Maximum response size | 10 GB compressed | Sizes vary depending on compression ratios for the data. The actual response size might be significantly larger than 10 GB. The maximum response size is unlimited when writing large query results to a destination table. |
| Maximum row size | 100 MB | The maximum row size is approximate, because the limit is based on the internal representation of row data. The maximum row size limit is enforced during certain stages of query job execution. |
| Maximum columns in a table, query result, or view definition | 10,000 columns | A table, query result, or view definition can have up to 10,000 columns. |
| Maximum concurrent slots for on-demand pricing | 2,000 slots per project; 20,000 slots per organization | With on-demand pricing, your project can have up to 2,000 concurrent slots. There is also a 20,000 concurrent slots cap at the organization level. BigQuery tries to allocate slots fairly between projects within an organization if their total demand is higher than 20,000 slots. BigQuery slots are shared among all queries in a single project. BigQuery might burst beyond this limit to accelerate your queries. To check how many slots you're using, see Monitoring BigQuery using Cloud Monitoring. |
| Maximum CPU usage per scanned data for on-demand pricing | 256 CPU seconds per MiB scanned | With on-demand pricing, your query can use up to approximately 256 CPU seconds per MiB of scanned data. If your query is too CPU-intensive for the amount of data being processed, the query fails with a billingTierLimitExceeded error. For more information, see billingTierLimitExceeded. |
| Multi-statement transaction table mutations | 100 tables | A transaction can mutate data in at most 100 tables. |
| Multi-statement transaction partition modifications | 100,000 partition modifications | A transaction can perform at most 100,000 partition modifications. |
| BigQuery Omni maximum query result size | 20 GiB uncompressed | The maximum result size is 20 GiB logical bytes when querying Azure or AWS data. If your query result is larger than 20 GiB, consider exporting the results to Amazon S3 or Blob Storage. For more information, see BigQuery Omni limitations. |
| BigQuery Omni total query result size per day | 1 TB | The total query result size for a project is 1 TB per day. For more information, see BigQuery Omni limitations. |
| BigQuery Omni maximum row size | 10 MiB | The maximum row size is 10 MiB when querying Azure or AWS data. For more information, see BigQuery Omni limitations. |
Although scheduled queries use features of the BigQuery Data Transfer Service, scheduled queries are not transfers and are not subject to load job limits.
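Several of the query-length limits above can be avoided by passing values as query parameters instead of inlining them. The following is a minimal sketch using the google-cloud-bigquery Python client; the project, dataset, and table names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Passing a large list as an array parameter instead of inlining it keeps
# the unresolved query text well under the 1 MB GoogleSQL limit.
customer_ids = ["id-%d" % i for i in range(100_000)]

job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ArrayQueryParameter("ids", "STRING", customer_ids),
    ]
)
query = """
    SELECT customer_id, total_spend
    FROM `my-project.my_dataset.orders`  -- hypothetical table
    WHERE customer_id IN UNNEST(@ids)
"""
for row in client.query(query, job_config=job_config).result():
    print(row.customer_id, row.total_spend)
```

Note that parameters still count toward the 10,000-parameter and 10 MB request-size limits listed above.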
Export jobs
The following limits apply to jobs that export data from BigQuery by using the bq command-line tool, the Google Cloud console, or the export-type jobs.insert API method.
| Limit | Default | Notes |
| --- | --- | --- |
| Maximum number of exported bytes per day | 50 TB | You can export up to 50 TB of data per day from a project for free using the shared slot pool. You can set up a Cloud Monitoring alert policy that provides notification of the number of bytes exported. To export more than 50 TB of data per day, do one of the following: |
| Maximum number of export jobs per day | 100,000 exports | You can run up to 100,000 exports per day in a project. To run more than 100,000 exports per day, do one of the following: |
| Maximum table size exported to a single file | 1 GB | You can export up to 1 GB of table data to a single file. To export more than 1 GB of data, use a wildcard to export the data into multiple files (see the sketch after this table). When you export data to multiple files, the size of the files varies. In some cases, the size of the output files is more than 1 GB. |
| Wildcard URIs per export | 500 URIs | An export can have up to 500 wildcard URIs. |
For more information about viewing your current export job usage, see View current quota usage.
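A minimal sketch of a wildcard export with the Python client; the bucket and table names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()

# A wildcard URI shards the output into multiple files, which is required
# when the exported table data exceeds the 1 GB single-file limit.
destination_uri = "gs://my-export-bucket/orders-*.csv"  # hypothetical bucket

extract_job = client.extract_table(
    "my-project.my_dataset.orders",  # hypothetical table
    destination_uri,
    location="US",
)
extract_job.result()  # Wait for the export job to complete.
```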
Load jobs
The following limits apply when you load data into BigQuery by using the Google Cloud console, the bq command-line tool, or the load-type jobs.insert API method.
| Limit | Default | Notes |
| --- | --- | --- |
| Load jobs per table per day | 1,500 jobs | Load jobs, including failed load jobs, count toward the limit on the number of table operations per day for the destination table. For information about limits on the number of table operations per day for standard tables and partitioned tables, see Tables. |
| Load jobs per day | 100,000 jobs | Your project is replenished with a maximum of 100,000 load jobs quota every 24 hours. Failed load jobs count toward this limit. In some cases, it is possible to run more than 100,000 load jobs in 24 hours if a prior day's quota is not fully used. |
| Maximum columns per table | 10,000 columns | A table can have up to 10,000 columns. |
| Maximum size per load job | 15 TB | The total size for all of your CSV, JSON, Avro, Parquet, and ORC input files can be up to 15 TB. |
| Maximum number of source URIs in job configuration | 10,000 URIs | A job configuration can have up to 10,000 source URIs. |
| Maximum number of files per load job | 10,000,000 files | A load job can have up to 10 million total files, including all files matching all wildcard URIs. |
| Maximum number of files in the source Cloud Storage bucket | Approximately 60,000,000 files | A load job can read from a Cloud Storage bucket containing up to approximately 60,000,000 files. |
| Load job execution-time limit | 6 hours | A load job fails if it executes for longer than six hours. |
| Avro: Maximum size for file data blocks | 16 MB | The size limit for Avro file data blocks is 16 MB. |
| CSV: Maximum cell size | 100 MB | CSV cells can be up to 100 MB in size. |
| CSV: Maximum row size | 100 MB | CSV rows can be up to 100 MB in size. |
| CSV: Maximum file size - compressed | 4 GB | The size limit for a compressed CSV file is 4 GB. |
| CSV: Maximum file size - uncompressed | 5 TB | The size limit for an uncompressed CSV file is 5 TB. |
| JSON: Maximum row size | 100 MB | JSON rows can be up to 100 MB in size. |
| JSON: Maximum file size - compressed | 4 GB | The size limit for a compressed JSON file is 4 GB. |
| JSON: Maximum file size - uncompressed | 5 TB | The size limit for an uncompressed JSON file is 5 TB. |
If you regularly exceed the load job limits due to frequent updates, consider streaming data into BigQuery instead. For information on viewing your current load job usage, see View current quota usage.
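A minimal load job sketch with the Python client; the bucket and table names are hypothetical. All files matched by the wildcard count toward the per-job file limits, but the whole job counts as a single table operation for the destination table:

```python
from google.cloud import bigquery

client = bigquery.Client()

# One load job can match up to 10 million files across all wildcard URIs
# and up to 15 TB of total input data.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
)
load_job = client.load_table_from_uri(
    "gs://my-ingest-bucket/events/*.csv",  # hypothetical bucket
    "my-project.my_dataset.events",        # hypothetical table
    job_config=job_config,
)
load_job.result()  # Wait for the load job to complete.
```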
BigQuery Data Transfer Service load job quota considerations
Load jobs created by BigQuery Data Transfer Service transfers are included in BigQuery's quotas on load jobs. It's important to consider how many transfers you enable in each project to prevent transfers and other load jobs from producing quotaExceeded errors.
You can use the following equation to estimate how many load jobs are required by your transfers (a worked example follows the list below):
Number of daily jobs = Number of transfers x Number of tables x Schedule frequency x Refresh window
Where:
- Number of transfers is the number of transfer configurations you enable in your project.
- Number of tables is the number of tables created by each specific transfer type. The number of tables varies by transfer type:
  - Campaign Manager transfers create approximately 25 tables.
  - Google Ads transfers create approximately 60 tables.
  - Google Ad Manager transfers create approximately 40 tables.
  - Google Play transfers create approximately 25 tables.
  - Search Ads 360 transfers create approximately 50 tables.
  - YouTube transfers create approximately 50 tables.
- Schedule frequency describes how often the transfer runs. Transfer run schedules are provided for each transfer type.
- Refresh window is the number of days to include in the data transfer. If you enter 1, there is no daily backfill.
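As a worked example under assumed inputs (the transfer count, schedule, and refresh window below are illustrative):

```python
# Rough estimate of daily load jobs generated by transfers, using the
# equation above. All input values are illustrative assumptions.
number_of_transfers = 2     # two transfer configurations enabled
number_of_tables = 60       # a Google Ads transfer creates ~60 tables
schedule_frequency = 1      # the transfer runs once per day
refresh_window = 7          # seven days of backfill per run

daily_jobs = (number_of_transfers * number_of_tables
              * schedule_frequency * refresh_window)
print(daily_jobs)  # 840 load jobs per day, well under the 100,000 limit
```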
Copy jobs
The following limits apply to BigQuery jobs for copying tables, including jobs that create a copy, clone, or snapshot of a standard table, table clone, or table snapshot. The limits apply to jobs created by using the Google Cloud console, the bq command-line tool, or the jobs.insert method that specifies the copy field in the job configuration. Copy jobs count toward these limits whether they succeed or fail.
| Limit | Default | Notes |
| --- | --- | --- |
| Copy jobs per destination table per day | | See Table operations per day. |
| Copy jobs per day | 100,000 jobs | Your project can run up to 100,000 copy jobs per day. |
| Cross-region copy jobs per destination table per day | 100 jobs | Your project can run up to 100 cross-region copy jobs for a destination table per day. |
| Cross-region copy jobs per day | 2,000 jobs | Your project can run up to 2,000 cross-region copy jobs per day. |
| Number of source tables to copy | 1,200 source tables | You can copy from up to 1,200 source tables per copy job. |
For information on viewing your current copy job usage, see Copy jobs - View current quota usage.
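A minimal copy job sketch with the Python client; the table names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Each copy job counts toward the destination table's daily operation
# limit whether it succeeds or fails.
copy_job = client.copy_table(
    "my-project.my_dataset.orders",         # hypothetical source
    "my-project.my_dataset.orders_backup",  # hypothetical destination
)
copy_job.result()  # Wait for the copy job to complete.
```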
The following limits apply to copying datasets:
| Limit | Default | Notes |
| --- | --- | --- |
| Maximum number of tables in the source dataset | 20,000 tables | A source dataset can have up to 20,000 tables. |
| Maximum number of tables that can be copied per run to a destination dataset in the same region | 20,000 tables | Your project can copy 20,000 tables per run to a destination dataset that is in the same region. |
| Maximum number of tables that can be copied per run to a destination dataset in a different region | 1,000 tables | Your project can copy 1,000 tables per run to a destination dataset that is in a different region. For example, if you configure a cross-region copy of a dataset with 8,000 tables in it, then BigQuery Data Transfer Service automatically creates eight runs in a sequential manner. The first run copies 1,000 tables. Twenty-four hours later, the second run copies 1,000 tables. This process continues until all tables in the dataset are copied, up to the maximum of 20,000 tables per dataset. |
Reservations
The following quotas apply to reservations:

| Quota | Default | Notes |
| --- | --- | --- |
| Total number of slots for the EU region | 5,000 slots | The maximum number of BigQuery slots you can purchase in the EU multi-region by using the Google Cloud console. View quotas in Google Cloud console |
| Total number of slots for the US region | 10,000 slots | The maximum number of BigQuery slots you can purchase in the US multi-region by using the Google Cloud console. View quotas in Google Cloud console |
| Total number of slots for the us-east1 region | 4,000 slots | The maximum number of BigQuery slots that you can purchase in the listed region by using the Google Cloud console. View quotas in Google Cloud console |
| Total number of slots for the following regions: asia-south1, asia-southeast1, europe-west2, us-central1, us-west1 | 2,000 slots | The maximum number of BigQuery slots that you can purchase in each of the listed regions by using the Google Cloud console. View quotas in Google Cloud console |
| Total number of slots for the following regions: asia-east1, asia-northeast1, asia-northeast3, asia-southeast2, australia-southeast1, europe-north1, europe-west1, europe-west3, europe-west4, northamerica-northeast1, us-east4, southamerica-east1 | 1,000 slots | The maximum number of BigQuery slots you can purchase in each of the listed regions by using the Google Cloud console. View quotas in Google Cloud console |
| Total number of slots for BigQuery Omni regions | 100 slots | The maximum number of BigQuery slots you can purchase in the BigQuery Omni regions by using the Google Cloud console. View quotas in Google Cloud console |
| Total number of slots for all other regions | 500 slots | The maximum number of BigQuery slots you can purchase in each other region by using the Google Cloud console. View quotas in Google Cloud console |
The following limits apply to reservations:

| Limit | Value | Notes |
| --- | --- | --- |
| Number of administration projects for slot reservations | 5 projects per organization | The maximum number of projects within an organization that can contain a reservation or an active commitment for slots for a given location or region. |
| Maximum number of standard edition reservations | 10 reservations per project | The maximum number of standard edition reservations per administration project within an organization for a given location or region. |
| Maximum number of Enterprise or Enterprise Plus edition reservations | 200 reservations per project | The maximum number of Enterprise or Enterprise Plus edition reservations per administration project within an organization for a given location or region. |
Datasets
The following limits apply to BigQuery datasets:

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum number of datasets | Unlimited | There is no limit on the number of datasets that a project can have. |
| Number of tables per dataset | Unlimited | When you use an API call, enumeration performance slows as you approach 50,000 tables in a dataset. The Google Cloud console can display up to 50,000 tables for each dataset. |
| Number of authorized resources in a dataset's access control list | 2,500 resources | A dataset's access control list can have up to 2,500 total authorized resources, including authorized views, authorized datasets, and authorized functions. If you exceed this limit due to a large number of authorized views, consider grouping the views into authorized datasets. |
| Number of dataset update operations per dataset per 10 seconds | 5 operations | Your project can make up to five dataset update operations every 10 seconds. The dataset update limit includes all metadata update operations performed by the Google Cloud console, the bq command-line tool, the BigQuery client libraries, the corresponding API methods, and the corresponding DDL statements. |
| Maximum length of a dataset description | 16,384 characters | When you add a description to a dataset, the text can be at most 16,384 characters. |
Tables
All tables
The following limits apply to all BigQuery tables.
| Limit | Default | Notes |
| --- | --- | --- |
| Maximum length of a column name | 300 characters | Your column name can be at most 300 characters. |
| Maximum length of a column description | 1,024 characters | When you add a description to a column, the text can be at most 1,024 characters. |
| Maximum depth of nested records | 15 levels | Columns of type RECORD can contain nested RECORD types, also called child records. The maximum nested depth limit is 15 levels. This limit is independent of whether the records are scalar or array-based (repeated). |
Standard tables
The following limits apply to BigQuery standard (built-in) tables:

| Limit | Default | Notes |
| --- | --- | --- |
| Table modifications per day | 1,500 modifications | Your project can make up to 1,500 table modifications per table per day, whether the modification appends data, updates data, or truncates the table. This limit cannot be changed and includes the combined total of all load jobs, copy jobs, and query jobs that append to or overwrite a destination table. DML statements do not count toward the number of table modifications per day. Streaming data does not count toward the number of table modifications per day. |
| Maximum rate of table metadata update operations per table | 5 operations per 10 seconds | Your project can make up to five table metadata update operations per 10 seconds per table. This limit applies to all table metadata update operations performed by the Google Cloud console, the bq command-line tool, the BigQuery client libraries, the corresponding API methods, and DDL statements on tables. It also includes the combined total of all load jobs, copy jobs, and query jobs that append to or overwrite a destination table or that use a DML DELETE, INSERT, MERGE, TRUNCATE TABLE, or UPDATE statement to write data to a table. Note that while DML statements count toward this limit, they are not subject to it if it is reached; DML operations have dedicated rate limits. If you exceed this limit, you get an error message like Exceeded rate limits: too many table update operations for this table. This error is transient; you can retry with an exponential backoff (see the sketch after this table). To identify the operations that count toward this limit, you can inspect your logs. Refer to Troubleshoot quota errors for guidance on diagnosing and resolving this error. |
| Maximum number of columns per table | 10,000 columns | Each table, query result, or view definition can have up to 10,000 columns. |
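A minimal sketch of the backoff retry suggested above, using the Python client. Treating a Forbidden error whose message contains rateLimitExceeded as the transient signal is an assumption; adapt it to the errors your client actually surfaces. The table name is hypothetical:

```python
import time

from google.api_core import exceptions
from google.cloud import bigquery

client = bigquery.Client()

def update_description_with_backoff(table_id, description, max_attempts=5):
    """Retry a table metadata update with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            table = client.get_table(table_id)
            table.description = description
            return client.update_table(table, ["description"])
        except exceptions.Forbidden as exc:
            # Assumed signal for the transient per-table limit of five
            # metadata updates per 10 seconds.
            if "rateLimitExceeded" not in str(exc) or attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...

update_description_with_backoff(
    "my-project.my_dataset.orders",  # hypothetical table
    "Orders fact table",
)
```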
External tables
The following limits apply to BigQuery tables with data stored on Cloud Storage in Parquet, ORC, Avro, CSV, or JSON format:

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum number of source URIs per external table | 10,000 URIs | Each external table can have up to 10,000 source URIs. |
| Maximum number of files per external table | 10,000,000 files | An external table can have up to 10 million files, including all files matching all wildcard URIs. |
| Maximum size of stored data on Cloud Storage per external table | 600 TB | An external table can have up to 600 terabytes across all input files. This limit applies to the file sizes as stored on Cloud Storage; this size is not the same as the size used in the query pricing formula. For externally partitioned tables, the limit is applied after partition pruning. |
| Maximum number of files in the source Cloud Storage bucket | Approximately 60,000,000 files | An external table can reference a Cloud Storage bucket containing up to approximately 60,000,000 files. For externally partitioned tables, this limit is applied before partition pruning. |
Partitioned tables
The following limits apply to BigQuery partitioned tables.
Partition limits apply to the combined total of all load jobs, copy jobs, and query jobs that append to or overwrite a destination partition.
A single job can affect multiple partitions. For example, query jobs and load jobs can write to multiple partitions. BigQuery uses the number of partitions affected by a job when determining how much of the limit the job consumes. Streaming inserts do not affect this limit.
For information about strategies to stay within the limits for partitioned tables, see Troubleshooting quota errors.
| Limit | Default | Notes |
| --- | --- | --- |
| Number of partitions per partitioned table | 10,000 partitions | Each partitioned table can have up to 10,000 partitions. If you exceed this limit, consider using clustering in addition to, or instead of, partitioning. |
| Number of partitions modified by a single job | 4,000 partitions | Each job operation (query or load) can affect up to 4,000 partitions. BigQuery rejects any query or load job that attempts to modify more than 4,000 partitions. |
| Number of partition modifications per ingestion-time partitioned table per day | 5,000 modifications | Your project can make up to 5,000 partition modifications per day, whether the modification appends data, updates data, or truncates an ingestion-time partitioned table. DML statements do not count toward the number of partition modifications per day. |
| Number of partition modifications per column-partitioned table per day | 30,000 modifications | Your project can make up to 30,000 partition modifications per day for a column-partitioned table. DML statements do not count toward the number of partition modifications per day. Streaming data does not count toward the number of partition modifications per day. |
| Maximum rate of table metadata update operations per partitioned table | 50 modifications per 10 seconds | Your project can make up to 50 modifications per partitioned table every 10 seconds. This limit applies to all partitioned table metadata update operations performed by the Google Cloud console, the bq command-line tool, the BigQuery client libraries, the corresponding API methods, and DDL statements on tables. It also includes the combined total of all load jobs, copy jobs, and query jobs that append to or overwrite a destination table or that use a DML DELETE, INSERT, MERGE, TRUNCATE TABLE, or UPDATE statement to write data to a table. If you exceed this limit, you get an error message like Exceeded rate limits: too many partitioned table update operations for this table. This error is transient; you can retry with an exponential backoff. To identify the operations that count toward this limit, you can inspect your logs. |
| Number of possible ranges for range partitioning | 10,000 ranges | A range-partitioned table can have up to 10,000 possible ranges. This limit applies to the partition specification when you create the table. After you create the table, the limit also applies to the actual number of partitions. |
Table clones
The following limits apply to BigQuery table clones:

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum number of clones and snapshots in a chain | 3 table clones or snapshots | Clones and snapshots in combination are limited to a depth of 3. When you clone or snapshot a base table, you can clone or snapshot the result only two more times; attempting to clone or snapshot the result a third time results in an error. For example, you can create clone A of the base table, create snapshot B of clone A, and create clone C of snapshot B. To make additional duplicates of the third-level clone or snapshot, use a copy operation instead. |
| Maximum number of clones and snapshots for a base table | 1,000 table clones or snapshots | You can have no more than 1,000 existing clones and snapshots combined of a given base table. For example, if you have 600 snapshots and 400 clones, you reach the limit. |
Table snapshots
The following limits apply to BigQuery table snapshots:

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum number of concurrent table snapshot jobs | 100 jobs | Your project can run up to 100 concurrent table snapshot jobs. |
| Maximum number of table snapshot jobs per day | 50,000 jobs | Your project can run up to 50,000 table snapshot jobs per day. |
| Maximum number of table snapshot jobs per table per day | 50 jobs | Your project can run up to 50 table snapshot jobs per table per day. |
| Maximum number of metadata updates per table snapshot per 10 seconds | 5 updates | Your project can update a table snapshot's metadata up to five times every 10 seconds. |
| Maximum number of clones and snapshots in a chain | 3 table clones or snapshots | Clones and snapshots in combination are limited to a depth of 3. When you clone or snapshot a base table, you can clone or snapshot the result only two more times; attempting to clone or snapshot the result a third time results in an error. For example, you can create clone A of the base table, create snapshot B of clone A, and create clone C of snapshot B. To make additional duplicates of the third-level clone or snapshot, use a copy operation instead. |
| Maximum number of clones and snapshots for a base table | 1,000 table clones or snapshots | You can have no more than 1,000 existing clones and snapshots combined of a given base table. For example, if you have 600 snapshots and 400 clones, you reach the limit. |
Views
The following quotas and limits apply to views and materialized views.
Logical views
The following limits apply to BigQuery standard views:

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum number of nested view levels | 16 levels | BigQuery supports up to 16 levels of nested views. Creating views up to this limit is possible, but querying is limited to 15 levels. If the limit is exceeded, BigQuery returns an INVALID_INPUT error. |
| Maximum length of a GoogleSQL query used to define a view | 256 K characters | A single GoogleSQL query that defines a view can be up to 256 K characters long. This limit applies to a single query and does not include the length of the views referenced in the query. |
| Maximum number of authorized views per dataset | | See Datasets. |
Materialized views
The following limits apply to BigQuery materialized views:

| Limit | Default | Notes |
| --- | --- | --- |
| Base table references (same dataset) | 20 materialized views | Each base table can be referenced by up to 20 materialized views from the same dataset. |
| Base table references (same project) | 100 materialized views | Each base table can be referenced by up to 100 materialized views from the same project. |
| Base table references (entire organization) | 500 materialized views | Each base table can be referenced by up to 500 materialized views from the entire organization. |
| Maximum number of authorized views per dataset | | See Datasets. |
Search indexes
The following limits apply to BigQuery search indexes:

| Limit | Default | Notes |
| --- | --- | --- |
| Number of CREATE INDEX DDL statements per project per region per day | 500 operations | Your project can issue up to 500 CREATE INDEX DDL operations every day within a region. |
| Number of search index DDL statements per table per day | 20 operations | Your project can issue up to 20 CREATE INDEX or DROP INDEX DDL operations per table per day. |
| Maximum total size of table data per organization allowed for search index creation that does not run in a reservation | 100 TB in multi-regions; 20 TB in all other regions | You can create a search index for a table if the overall size of tables with indexes in your organization is below your region's limit: 100 TB for the US and EU multi-regions, and 20 TB for all other regions. If your index-management jobs run in your own reservation, then this limit doesn't apply. |
Vector indexes
The following limits apply to BigQuery vector indexes:

| Limit | Default | Notes |
| --- | --- | --- |
| Base table minimum number of rows | 5,000 rows | A table must have at least 5,000 rows to create a vector index. |
| Base table maximum number of rows | 1,000,000,000 rows | A table can have at most 1,000,000,000 rows to create a vector index. |
| Maximum size of the array in the indexed column | 1,600 elements | The column to index can have at most 1,600 elements in the array. |
| Minimum table size for vector index population | 10 MB | If you create a vector index on a table that is under 10 MB, then the index is not populated. Similarly, if you delete data from a vector-indexed table such that the table size is under 10 MB, then the vector index is temporarily disabled. This happens regardless of whether you use your own reservation for your index-management jobs. Once a vector-indexed table's size again exceeds 10 MB, its index is populated automatically. |
| Number of CREATE VECTOR INDEX DDL statements per project per region per day | 500 operations | For each project, you can issue up to 500 CREATE VECTOR INDEX operations per day for each region. |
| Number of vector index DDL statements per table per day | 10 operations | You can issue up to 10 CREATE VECTOR INDEX or DROP VECTOR INDEX operations per table per day. |
| Maximum total size of table data per organization allowed for vector index creation that does not run in a reservation | 20 TB | You can create a vector index for a table if the total size of tables with indexes in your organization is under 20 TB. If your index-management jobs run in your own reservation, then this limit doesn't apply. |
Routines
The following quotas and limits apply to routines.
User-defined functions
The following limits apply to both temporary and persistent user-defined functions (UDFs) in GoogleSQL queries.
| Limit | Default | Notes |
| --- | --- | --- |
| Maximum output per row | 5 MB | The maximum amount of data that your JavaScript UDF can output when processing a single row is approximately 5 MB. |
| Maximum concurrent legacy SQL queries with JavaScript UDFs | 6 queries | Your project can have up to six concurrent legacy SQL queries that contain UDFs in JavaScript. This limit includes both interactive and batch queries. This limit does not apply to GoogleSQL queries. |
| Maximum JavaScript UDF resources per query | 50 resources | A query job can have up to 50 JavaScript UDF resources, such as inline code blobs or external files. |
| Maximum size of inline code blob | 32 KB | An inline code blob in a UDF can be up to 32 KB in size (see the sketch after this table). |
| Maximum size of each external code resource | 1 MB | The maximum size of each JavaScript code resource is one MB. |
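As a reference point for the inline-blob limit above, here is a minimal sketch that creates a persistent JavaScript UDF through the Python client; the dataset and function names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()

# The inline JavaScript between the raw triple quotes is the "inline code
# blob" and must stay under the 32 KB limit.
ddl = """
CREATE OR REPLACE FUNCTION `my_dataset.multiply`(x FLOAT64, y FLOAT64)
RETURNS FLOAT64
LANGUAGE js
AS r'''
  return x * y;
'''
"""
client.query(ddl).result()
```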
The following limits apply to persistent UDFs:
| Limit | Default | Notes |
| --- | --- | --- |
| Maximum length of a UDF name | 256 characters | A UDF name can be up to 256 characters long. |
| Maximum number of arguments | 256 arguments | A UDF can have up to 256 arguments. |
| Maximum length of an argument name | 128 characters | A UDF argument name can be up to 128 characters long. |
| Maximum depth of a UDF reference chain | 16 references | A UDF reference chain can be up to 16 references deep. |
| Maximum depth of a STRUCT type argument or output | 15 levels | A STRUCT type UDF argument or output can be up to 15 levels deep. |
| Maximum number of fields in STRUCT type arguments or output per UDF | 1,024 fields | A UDF can have up to 1,024 fields in STRUCT type arguments and output. |
| Maximum number of JavaScript libraries in a CREATE FUNCTION statement | 50 libraries | A CREATE FUNCTION statement can have up to 50 JavaScript libraries. |
| Maximum length of included JavaScript library paths | 5,000 characters | The path for a JavaScript library included in a UDF can be up to 5,000 characters long. |
| Maximum update rate per UDF per 10 seconds | 5 updates | Your project can update a UDF up to five times every 10 seconds. |
| Maximum number of authorized UDFs per dataset | | See Datasets. |
Remote functions
The following limits apply to remote functions in BigQuery (a sketch of a remote function definition follows the table).

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum number of concurrent queries that contain remote functions | 10 queries | You can run up to ten concurrent queries with remote functions per project. |
| Maximum input size | 5 MB | The maximum total size of all input arguments from a single row is 5 MB. |
| HTTP response size limit (Cloud Functions 1st gen) | 10 MB | The HTTP response body from your Cloud Function 1st gen can be up to 10 MB. Exceeding this value causes query failures. |
| HTTP response size limit (Cloud Functions 2nd gen or Cloud Run) | 15 MB | The HTTP response body from your Cloud Function 2nd gen or Cloud Run service can be up to 15 MB. Exceeding this value causes query failures. |
| Maximum HTTP invocation time limit (Cloud Functions 1st gen) | 9 minutes | You can set your own time limit for your Cloud Function 1st gen for an individual HTTP invocation, but the maximum time limit is 9 minutes. Exceeding the time limit set for your Cloud Function 1st gen may cause HTTP invocation failures and query failure after a limited number of retries. |
| HTTP invocation time limit (Cloud Functions 2nd gen or Cloud Run) | 20 minutes | The time limit for an individual HTTP invocation to your Cloud Function 2nd gen or Cloud Run service. Exceeding this value may cause HTTP invocation failures and query failure after a limited number of retries. |
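A minimal sketch of a remote function definition issued through the Python client, showing where the limits above apply; the connection, dataset, and endpoint names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Each batched HTTP response from the endpoint is subject to the response
# size limits above; smaller batches keep individual responses small.
ddl = """
CREATE OR REPLACE FUNCTION `my_dataset.add_suffix`(x STRING)
RETURNS STRING
REMOTE WITH CONNECTION `my-project.us.my_connection`
OPTIONS (
  endpoint = 'https://us-central1-my-project.cloudfunctions.net/add_suffix',
  max_batching_rows = 50
)
"""
client.query(ddl).result()
```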
Table functions
The following limits apply to BigQuery table functions:

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum length of a table function name | 256 characters | The name of a table function can be up to 256 characters in length. |
| Maximum length of an argument name | 128 characters | The name of a table function argument can be up to 128 characters in length. |
| Maximum number of arguments | 256 arguments | A table function can have up to 256 arguments. |
| Maximum depth of a table function reference chain | 16 references | A table function reference chain can be up to 16 references deep. |
| Maximum depth of argument or output of type STRUCT | 15 levels | A STRUCT argument for a table function can be up to 15 levels deep. Similarly, a STRUCT record in a table function's output can be up to 15 levels deep. |
| Maximum number of fields in argument or return table of type STRUCT per table function | 1,024 fields | A STRUCT argument for a table function can have up to 1,024 fields. Similarly, a STRUCT record in a table function's output can have up to 1,024 fields. |
| Maximum number of columns in return table | 1,024 columns | A table returned by a table function can have up to 1,024 columns. |
| Maximum length of return table column names | 128 characters | Column names in returned tables can be up to 128 characters long. |
| Maximum number of updates per table function per 10 seconds | 5 updates | Your project can update a table function up to five times every 10 seconds. |
Stored procedures for Apache Spark
The following limits apply to BigQuery stored procedures for Apache Spark:

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum number of concurrent stored procedure queries | 50 | You can run up to 50 concurrent stored procedure queries for each project. |
| Maximum number of in-use CPUs | 12,000 | You can use up to 12,000 CPUs for each project. Queries that have already been processed don't consume this limit. You can use up to 2,400 CPUs for each location for each project, except in the following locations: asia-south2, australia-southeast2, europe-central2, europe-west8, northamerica-northeast2, southamerica-west1. In these locations, you can use up to 500 CPUs for each location for each project. If you run concurrent queries in a multi-region location and a single-region location in the same geographic area, then your queries might consume the same concurrent CPU quota. |
| Maximum total size of in-use standard persistent disks | 204.8 TB | You can use up to 204.8 TB of standard persistent disks for each location for each project. Queries that have already been processed don't consume this limit. If you run concurrent queries in a multi-region location and a single-region location in the same geographic area, then your queries might consume the same standard persistent disk quota. |
Notebooks
All Dataform quotas and limits and Colab Enterprise quotas and limits apply to notebooks in BigQuery. The following limits also apply:

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum notebook size | 20 MB | A notebook's size is the total of its content, metadata, and encoding overhead. You can view the size of notebook content by expanding the notebook header, clicking View, and then clicking Notebook info. |
| Maximum number of requests per second to Dataform | 100 | Notebooks are created and managed through Dataform. Any action that creates or modifies a notebook counts against this quota. This quota is shared with saved queries. For example, if you make 50 changes to notebooks and 50 changes to saved queries within 1 second, you reach the quota. |
Saved queries
All Dataform quotas and limits apply to saved queries. The following limits also apply:

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum saved query size | 10 MB | |
| Maximum number of requests per second to Dataform | 100 | Saved queries are created and managed through Dataform. Any action that creates or modifies a saved query counts against this quota. This quota is shared with notebooks. For example, if you make 50 changes to notebooks and 50 changes to saved queries within 1 second, you reach the quota. |
Data manipulation language
The following limits apply to BigQuery data manipulation language (DML) statements:

| Limit | Default | Notes |
| --- | --- | --- |
| DML statements per day | Unlimited | The number of DML statements your project can run per day is unlimited. DML statements do not count toward the number of table modifications per day or the number of partitioned table modifications per day for partitioned tables. DML statements have certain limitations to be aware of. |
| Concurrent mutating DML statements per table | 2 statements | BigQuery runs up to two concurrent mutating DML statements (UPDATE, DELETE, and MERGE) for each table. Additional mutating DML statements for a table are queued. |
| Queued mutating DML statements per table | 20 statements | A table can have up to 20 mutating DML statements in the queue waiting to run. If you submit additional mutating DML statements for the table, then those statements fail. |
| Maximum time in queue for DML statement | 6 hours | An interactive priority DML statement can wait in the queue for up to six hours. If the statement has not run after six hours, it fails. |
| Maximum rate of DML statements for each table | 25 statements every 10 seconds | Your project can run up to 25 DML statements every 10 seconds for each table. Both INSERT and mutating DML statements contribute to this limit. |

For more information about mutating DML statements, see INSERT DML concurrency and UPDATE, DELETE, MERGE DML concurrency.
Multi-statement queries
The following limits apply to multi-statement queries in BigQuery.

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum number of concurrent multi-statement queries | 1,000 multi-statement queries | Your project can run up to 1,000 concurrent multi-statement queries. |
| Cumulative time limit | 24 hours | The cumulative time limit for a multi-statement query is 24 hours. |
| Statement time limit | 6 hours | The time limit for an individual statement within a multi-statement query is 6 hours. |
Recursive CTEs in queries
The following limits apply to recursive common table expressions (CTEs) in BigQuery.

| Limit | Default | Notes |
| --- | --- | --- |
| Iteration limit | 500 iterations | The recursive CTE can execute this number of iterations. If this limit is exceeded, an error is produced. To work around iteration limits, see Troubleshoot iteration limit errors. A bounded example follows this table. |
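A minimal sketch of a bounded recursive CTE run through the Python client, staying well under the 500-iteration limit:

```python
from google.cloud import bigquery

client = bigquery.Client()

# The WHERE clause bounds the recursion explicitly, so the CTE stops
# after 100 iterations, far below the 500-iteration limit.
query = """
WITH RECURSIVE numbers AS (
  SELECT 1 AS n
  UNION ALL
  SELECT n + 1 FROM numbers WHERE n < 100
)
SELECT n FROM numbers
"""
for row in client.query(query).result():
    print(row.n)
```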
Row-level security
The following limits apply to BigQuery row-level access policies:

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum number of row-access policies per table | 400 policies | A table can have up to 400 row-access policies. |
| Maximum number of row-access policies per query | 6,000 policies | A query can access up to a total of 6,000 row-access policies. |
| Maximum number of CREATE / DROP DDL statements per policy per 10 seconds | 5 statements | Your project can make up to five CREATE or DROP statements per row-access policy resource every 10 seconds. |
| DROP ALL ROW ACCESS POLICIES statements per table per 10 seconds | 5 statements | Your project can make up to five DROP ALL ROW ACCESS POLICIES statements per table every 10 seconds. |
Data policies
The following limits apply to column-level dynamic data masking:

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum number of data policies per policy tag | 8 policies per policy tag | A policy tag can have up to eight data policies. One of these policies can be used for column-level access controls. Duplicate masking expressions are not supported. |
BigQuery ML
The following limits apply to BigQuery ML.
Query jobs
All query job quotas and limits apply to GoogleSQL query jobs that use BigQuery ML statements and functions.
CREATE MODEL statements
The following limits apply to CREATE MODEL jobs (a minimal example follows the table):

| Limit | Default | Notes |
| --- | --- | --- |
| CREATE MODEL statement queries per 48 hours for each project | 20,000 statement queries | Some models are trained by using Vertex AI services, which have their own resource and quota management. |
| Execution-time limit | 24 hours or 72 hours | The CREATE MODEL job timeout defaults to 24 hours, with the exception of time series, AutoML, and hyperparameter tuning jobs, which time out at 72 hours. |
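A minimal CREATE MODEL sketch issued through the Python client; the dataset, table, and column names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()

# One CREATE MODEL statement counts against the 20,000-per-48-hours
# limit. A non-time-series model like this linear regression defaults
# to the 24-hour job timeout.
ddl = """
CREATE OR REPLACE MODEL `my_dataset.my_model`
OPTIONS (model_type = 'linear_reg', input_label_cols = ['label']) AS
SELECT feature1, feature2, label
FROM `my_dataset.training_data`
"""
client.query(ddl).result()
```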
Vertex AI and Cloud AI service functions
The following limits apply to functions that use Vertex AI large language models (LLMs) and Cloud AI services:

| Function | Requests per minute | Rows per job | Number of concurrently running jobs |
| --- | --- | --- | --- |
| ML.ANNOTATE_IMAGE | 900 | 307,800 | 5 |
| ML.TRANSLATE | 3,000 | 1,026,000 | 5 |
| ML.UNDERSTAND_TEXT | 300 | 102,600 | 5 |
| ML.GENERATE_TEXT when using a remote model over a gemini-1.5-pro model | 60 | 21,600 | 5 |
| ML.GENERATE_TEXT when using a remote model over a gemini-1.5-flash model | 200 | 72,000 | 5 |
| ML.GENERATE_TEXT when using a remote model over the gemini-1.0-pro-vision model | 60 | 20,000 | 1 |
| ML.GENERATE_TEXT when using a remote model over a gemini-1.0-pro, text-bison, text-bison-32, or text-unicorn model | 60 | 30,000 | 5 |
| ML.GENERATE_EMBEDDING when used with remote models over Vertex AI multimodalembedding models | 120 | 25,000 | 1 |
| ML.GENERATE_EMBEDDING when used with remote models over Vertex AI textembedding-gecko and textembedding-gecko-multilingual models | 1,500 | 1,000,000 | 1 |
| ML.PROCESS_DOCUMENT | 600 | 205,200 | 5 |
| ML.TRANSCRIBE | 60 | 1,000 | 5 |
For more information about quota for Vertex AI LLMs and the Cloud AI service APIs, see the following documents:
To request more quota for the BigQuery ML functions, adjust the quota for the associated Vertex AI LLM or Cloud AI service first, and then send an email to bqml-feedback@google.com and include information about the adjusted LLM or Cloud AI service quota. For more information about how to request more quota for these services, see Request a higher quota.
Quota definitions
The following list describes the quotas that apply to Vertex AI and Cloud AI service functions:
The following examples show how to interpret quota limitations in typical situations:
I have a quota of 1,000 QPM in Vertex AI, so a query with 100,000 rows should take around 100 minutes. Why is the job running longer?
Job runtimes can vary even for the same input data. In Vertex AI, remote procedure calls (RPCs) have different priorities in order to avoid quota drainage. When there isn't enough quota, RPCs with lower priorities wait and possibly fail if it takes too long to process them.
How should I interpret the rows per job quota?
In BigQuery, a query can execute for up to six hours. The maximum supported rows is a function of this timeline and your Vertex AI QPM quota, in order to ensure that BigQuery can complete query processing in six hours. Since typically a query can't use the whole quota, this is a lower number than your QPM quota multiplied by 360. The arithmetic behind this bound is sketched below.
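Under assumed numbers, the bound works out as follows (both input values are illustrative):

```python
# Illustrative arithmetic for the rows-per-job bound described above.
qpm_quota = 1_000        # assumed Vertex AI requests-per-minute quota
max_query_minutes = 360  # six-hour maximum query lifetime

theoretical_ceiling = qpm_quota * max_query_minutes  # 360,000 rows
# The published rows-per-job quota sits below this ceiling because a
# single query typically can't consume the entire QPM quota.
print(theoretical_ceiling)
```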
What happens if I run a batch inference job on a table with more rows than the rows per job quota, for example 10,000,000 rows?
BigQuery only processes the number of rows specified by the rows per job quota. You are only charged for the successful API calls for that number of rows, instead of the full 10,000,000 rows in your table. For the rest of the rows, BigQuery responds to the request with the error A retryable error occurred: the maximum size quota per query has reached, which is returned in the status column of the result. You can use this set of SQL scripts or this Dataform package to iterate through inference calls until all rows are successfully processed.
I have many more rows to process than the rows per job quota. Will splitting my rows across multiple queries and running them simultaneously help?
No, because these queries are consuming the same BigQuery ML requests per minute quota and Vertex AI QPM quota. If there are multiple queries that all stay within the rows per job quota and number of concurrently running jobs quota, the cumulative processing exhausts the requests per minute quota.
BI Engine
The following limits apply to BigQuery BI Engine.

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum reservation size per project per location (SQL Interface) | 250 GiB | Applies when using BI Engine with BigQuery. Applies in all cases except Looker Studio without native integration. You can request an increase of the maximum reservation capacity for your projects. Reservation increases are available in most regions and might take from three days to one week to process. |
| Maximum reservation size per project per location (Looker Studio) | 100 GB | Applies when using BI Engine with Looker Studio without native integration. This limit does not affect the size of the tables that you query, because BI Engine loads in memory only the columns used in your queries, not the entire table. |
| Maximum data model size per table (Looker Studio) | 10 GB | Applies when using BI Engine with Looker Studio without native integration. If you have a 100 GB reservation per project per location, BI Engine limits the reservation per table to 10 GB. The rest of the available reservation is used for other tables in the project. |
| Maximum partitions per table (Looker Studio) | 500 partitions | Applies when using BI Engine with Looker Studio without native integration. BI Engine for Looker Studio supports up to a maximum of 500 partitions per table. |
| Maximum rows per query (Looker Studio) | 150 million | Applies when using BI Engine with Looker Studio without native integration. BI Engine for Looker Studio supports up to 150 million rows of queried data, depending on query complexity. |
Analytics Hub
The following limits apply to Analytics Hub:

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum number of data exchanges per project | 500 exchanges | You can create up to 500 data exchanges in a project. |
| Maximum number of listings per data exchange | 1,000 listings | You can create up to 1,000 listings in a data exchange. |
| Maximum number of linked datasets per shared dataset | 1,000 linked datasets | All Analytics Hub subscribers, combined, can have a maximum of 1,000 linked datasets per shared dataset. |
API quotas and limits
These quotas and limits apply to BigQuery API requests.
BigQuery API
The following quotas apply to BigQuery API (core) requests:

| Quota | Default | Notes |
| --- | --- | --- |
| Requests per day | Unlimited | Your project can make an unlimited number of BigQuery API requests per day. View quota in Google Cloud console |
| Maximum tabledata.list bytes per minute | 7.5 GB in multi-regions; 3.7 GB in all other regions | Your project can return a maximum of 7.5 GB of table row data per minute via tabledata.list in the us and eu multi-regions, and 3.7 GB of table row data per minute in all other regions. This quota applies to the project that contains the table being read. Other APIs, including jobs.getQueryResults and fetching results from jobs.query and jobs.insert, can also consume this quota. The BigQuery Storage Read API can sustain significantly higher throughput than tabledata.list. If you need more throughput than allowed under this quota, consider using the BigQuery Storage Read API. View quota in Google Cloud console |
The following limits apply to BigQuery API (core) requests:

| Limit | Default | Notes |
| --- | --- | --- |
| Maximum number of API requests per second per user per method | 100 requests | A user can make up to 100 API requests per second to an API method. If a user makes more than 100 requests per second to a method, then throttling can occur. This limit does not apply to streaming inserts. |
| Maximum number of concurrent API requests per user | 300 requests | If a user makes more than 300 concurrent requests, throttling can occur. This limit does not apply to streaming inserts. |
| Maximum jobs.get requests per second | 1,000 requests | Your project can make up to 1,000 jobs.get requests per second. |
| Maximum jobs.query response size | 20 MB | By default, there is no maximum row count for the number of rows of data returned by jobs.query per page of results. However, you are limited to the 20 MB maximum response size. You can alter the number of rows to return by using the maxResults parameter. |
| Maximum jobs.getQueryResults row size | 20 MB | The maximum row size is approximate because the limit is based on the internal representation of row data. The limit is enforced during transcoding. |
| Maximum projects.list requests per second | 2 requests | Your project can make up to two projects.list requests per second. |
| Maximum number of tabledata.list requests per second | 1,000 requests | Your project can make up to 1,000 tabledata.list requests per second. |
| Maximum rows per tabledata.list response | 100,000 rows | A tabledata.list call can return up to 100,000 table rows. For more information, see Paging through results using the API, and the sketch after this table. |
| Maximum tabledata.list row size | 100 MB | The maximum row size is approximate because the limit is based on the internal representation of row data. The limit is enforced during transcoding. |
| Maximum tables.insert requests per second | 10 requests | Your project can make up to 10 tables.insert requests per second. The tables.insert method creates a new, empty table in a dataset. The limit includes SQL statements that create tables, such as CREATE TABLE, and queries that write results to destination tables. |
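A minimal paging sketch with the Python client, which wraps tabledata.list and pages automatically; the table name is hypothetical, and page_size simply caps how many rows each underlying response carries:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Each underlying tabledata.list response returns at most 100,000 rows;
# page_size caps it lower so individual responses stay small.
table = client.get_table("my-project.my_dataset.orders")  # hypothetical
row_iterator = client.list_rows(table, page_size=50_000)
for page in row_iterator.pages:
    print(f"fetched {page.num_items} rows")
```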
BigQuery Connection API
The following quotas apply to BigQuery Connection API requests:

| Quota | Default | Notes |
| --- | --- | --- |
| Read requests per minute | 1,000 requests per minute | Your project can make up to 1,000 requests per minute to BigQuery Connection API methods that read connection data. View quota in Google Cloud console |
| Write requests per minute | 100 requests per minute | Your project can make up to 100 requests per minute to BigQuery Connection API methods that create or update connections. View quota in Google Cloud console |
| AWS connections per region | 50 connections per region | Your project can have up to 50 AWS connections per AWS region. |
| Azure connections per region | 50 connections per region | Your project can have up to 50 Azure connections per Azure region. |
BigQuery Migration API
The following limits apply to the BigQuery Migration API:
| Limit | Default | Notes |
|---|---|---|
| Individual file size for batch SQL translation | 10 MB | Each individual source and metadata file can be up to 10 MB. This limit does not apply to the metadata zip file produced by the `dwh-migration-dumper` command-line extraction tool. A pre-upload size check is sketched after this table. |
| Total size of source files for batch SQL translation | 1 GB | The total size of all input files uploaded to Cloud Storage can be up to 1 GB. This includes all source files, and all metadata files if you choose to include them. |
| Input string size for interactive SQL translation | 1 MB | The string that you enter for interactive SQL translation must not exceed 1 MB. |
| Maximum configuration file size for interactive SQL translation | 50 MB | Individual metadata files (compressed) and YAML configuration files in Cloud Storage must not exceed 50 MB. If a file exceeds 50 MB, the interactive translator skips that configuration file during translation and produces an error message. One way to reduce the metadata file size is to use the `--database` or `--schema` flags to filter on databases when you generate the metadata. |
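Because oversized inputs cause translation failures, a local pre-flight check before uploading to Cloud Storage can save a round trip. A minimal sketch against the 10-MB per-file and 1-GB total limits; the directory path, file glob, and the decimal interpretation of MB/GB are assumptions:

```python
from pathlib import Path

MAX_FILE_BYTES = 10 * 10**6   # 10 MB per source or metadata file (assumed decimal MB)
MAX_TOTAL_BYTES = 1 * 10**9   # 1 GB across all uploaded input files

def check_translation_inputs(source_dir: str) -> None:
    """Raise if any batch SQL translation input exceeds the documented limits."""
    total = 0
    for path in Path(source_dir).rglob("*"):
        if not path.is_file():
            continue
        size = path.stat().st_size
        if size > MAX_FILE_BYTES:
            raise ValueError(f"{path}: {size} bytes exceeds the 10 MB per-file limit")
        total += size
    if total > MAX_TOTAL_BYTES:
        raise ValueError(f"inputs total {total} bytes, exceeding the 1 GB limit")

check_translation_inputs("sql_sources/")  # placeholder directory
```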
The following quotas apply to the BigQuery Migration API. The following default values apply in most cases; the defaults for your project might be different:
| Quota | Default | Notes |
|---|---|---|
| EDWMigration Service List Requests per minute | 12,000 requests | Your project can make up to 12,000 Migration API List requests per minute. View quotas in Google Cloud console |
| EDWMigration Service List Requests per minute per user | 2,500 requests | Each user can make up to 2,500 Migration API List requests per minute. |
| EDWMigration Service Get Requests per minute | 25,000 requests | Your project can make up to 25,000 Migration API Get requests per minute. View quotas in Google Cloud console |
| EDWMigration Service Get Requests per minute per user | 2,500 requests | Each user can make up to 2,500 Migration API Get requests per minute. |
| EDWMigration Service Other Requests per minute | 25 requests | Your project can make up to 25 other Migration API requests per minute. View quotas in Google Cloud console |
| EDWMigration Service Other Requests per minute per user | 5 requests | Each user can make up to 5 other Migration API requests per minute. |
| Interactive SQL translation requests per minute | 200 requests | Your project can make up to 200 SQL translation service requests per minute. View quotas in Google Cloud console |
| Interactive SQL translation requests per minute per user | 50 requests | Each user can make up to 50 SQL translation service requests per minute. |
BigQuery Reservation API
The following quotas apply to BigQuery Reservation API requests:
BigQuery Data Policy API
The following limits apply for the Data Policy API (preview):
| Limit | Default | Notes |
|---|---|---|
| Maximum number of `dataPolicy.list` calls | 400 requests per minute per project; 600 requests per minute per organization | |
| Maximum number of `dataPolicy.testIamPermissions` calls | 400 requests per minute per project; 600 requests per minute per organization | |
| Maximum number of read requests | 1,200 requests per minute per project; 1,800 requests per minute per organization | This includes calls to `dataPolicy.get` and `dataPolicy.getIamPolicy`. |
| Maximum number of write requests | 600 requests per minute per project; 900 requests per minute per organization | This includes calls to the data policy write methods. |
IAM API
The following quotas apply when you use Identity and Access Management features in BigQuery to retrieve and set IAM policies, and to test IAM permissions. Data control language (DCL) statements count toward the `SetIAMPolicy` quota.
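For example, running a DCL statement such as `GRANT` through a query job consumes `SetIAMPolicy` quota just as a direct policy update would. A minimal sketch with the Python client; the role, table, and principal are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

# This DCL statement counts toward the SetIAMPolicy quota, not only query quota.
client.query(
    "GRANT `roles/bigquery.dataViewer` "
    "ON TABLE `my-project.my_dataset.my_table` "
    "TO 'user:analyst@example.com'"
).result()
```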
Storage Read API
The following quotas apply to BigQuery Storage Read API requests:
| Quota | Default | Notes |
|---|---|---|
| Read data plane requests per minute per user | 25,000 requests | Each user can make up to 25,000 `ReadRows` calls per minute per project. View quota in Google Cloud console |
| Read control plane requests per minute per user | 5,000 requests | Each user can make up to 5,000 Storage Read API metadata operation calls per minute per project. The metadata calls include the `CreateReadSession` and `SplitReadStream` methods. View quota in Google Cloud console |
The following limits apply to BigQuery Storage Read API requests:
| Limit | Default | Notes |
|---|---|---|
| Maximum row/filter length | 1 MB | When you use the Storage Read API `CreateReadSession` call, you are limited to a maximum length of 1 MB for each row or filter (see the read-session example after this table). |
| Maximum serialized data size | 128 MB | When you use the Storage Read API `ReadRows` call, the serialized representation of the data in an individual `ReadRowsResponse` message cannot be larger than 128 MB. |
| Maximum concurrent connections | 2,000 in multi-regions; 400 in regions | You can open a maximum of 2,000 concurrent `ReadRows` connections per project in the `us` and `eu` multi-regions, and 400 concurrent `ReadRows` connections in other regions. In some cases, you might be limited to fewer concurrent connections than this limit. |
| Maximum per-stream memory usage | 1.5 GB | The maximum per-stream memory is approximate because the limit is based on the internal representation of the row data. Streams that use more than 1.5 GB of memory for a single row might fail. For more information, see Troubleshoot resources exceeded issues. |
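For reference, the 1-MB filter limit applies to the `row_restriction` string passed to `CreateReadSession`. A minimal read-session sketch with the Python Storage Read API client (`google-cloud-bigquery-storage`); the project, dataset, table, and filter are placeholders:

```python
from google.cloud.bigquery_storage import BigQueryReadClient, types

client = BigQueryReadClient()

session = client.create_read_session(
    parent="projects/my-project",  # billing/quota project (placeholder)
    read_session=types.ReadSession(
        table="projects/my-project/datasets/my_dataset/tables/my_table",
        data_format=types.DataFormat.AVRO,
        read_options=types.ReadSession.TableReadOptions(
            # The serialized filter below is subject to the 1-MB row/filter limit.
            row_restriction="state = 'CA'",
        ),
    ),
    # Each stream you read from counts toward the concurrent-connections limit.
    max_stream_count=1,
)

reader = client.read_rows(session.streams[0].name)
for row in reader.rows(session):
    print(row)
```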
Storage Write API
The following quotas apply to Storage Write API requests. These quotas can be applied at the folder level, in which case they are aggregated and shared across all child projects; to enable this configuration, contact Cloud Customer Care.
If you plan to request a higher quota limit, include the quota error message in your request to expedite processing.
| Quota | Default | Notes |
|---|---|---|
| Concurrent connections | 1,000 in a region; 10,000 in a multi-region | The concurrent connections quota is based on the client project that initiates the Storage Write API request, not the project containing the BigQuery dataset resource. The initiating project is the project associated with the API key or the service account. Your project can operate with up to 1,000 concurrent connections in a region, or 10,000 concurrent connections in the `us` and `eu` multi-regions. When you use the default stream in Java or Go, we recommend using Storage Write API multiplexing to write to multiple destination tables with shared connections, which reduces the overall number of connections needed. If you use the Beam connector with at-least-once semantics, you can set `UseStorageApiConnectionPool` to `TRUE` to enable multiplexing. View quota in Google Cloud console. You can view usage quota and limits metrics for your projects in Cloud Monitoring; select the concurrent connections limit name based on your region: `ConcurrentWriteConnectionsPerProject`, `ConcurrentWriteConnectionsPerProjectEU`, and `ConcurrentWriteConnectionsPerProjectRegion` for `us`, `eu`, and other regions, respectively. We strongly recommend that you set up alerts to monitor your quota usage and limits. In addition, if your traffic experiences spikes or regular organic growth, consider over-provisioning your quota by 25% to 50% to handle unexpected demand. |
| Throughput | 3 GB per second in multi-regions; 300 MB per second in regions | You can stream up to 3 GB per second in the `us` and `eu` multi-regions, and 300 MB per second in other regions, per project. View quota in Google Cloud console. You can view usage quota and limits metrics for your projects in Cloud Monitoring; select the throughput limit name based on your region: `AppendBytesThroughputPerProject`, `AppendBytesThroughputPerProjectEU`, and `AppendBytesThroughputPerProjectRegion` for `us`, `eu`, and other regions, respectively. Write throughput quota is metered based on the project where the target dataset resides, not the client project. We strongly recommend that you set up alerts to monitor your quota usage and limits. In addition, if your traffic experiences spikes or regular organic growth, consider over-provisioning your quota by 25% to 50% to handle unexpected demand. |
| `CreateWriteStream` requests | 10,000 streams per hour, per project per region | You can call `CreateWriteStream` up to 10,000 times per hour per project per region. Consider using the default stream if you don't need exactly-once semantics (see the sketch after this table). This quota is per hour, but the metric shown in the Google Cloud console is per minute. |
| Pending stream bytes | 10 TB in multi-regions; 1 TB in regions | For every commit that you trigger, you can commit up to 10 TB in the `us` and `eu` multi-regions, and 1 TB in other regions. There is no quota reporting for this quota. |
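One way to stay clear of the `CreateWriteStream` quota is to append to the built-in default stream, whose resource name is fixed per table and never requires a stream-creation call. A minimal naming sketch (project, dataset, and table are placeholders):

```python
from google.cloud import bigquery_storage_v1

client = bigquery_storage_v1.BigQueryWriteClient()

# Appends to the `_default` stream skip CreateWriteStream entirely, so they
# don't consume the 10,000-streams-per-hour creation quota.
table_path = client.table_path("my-project", "my_dataset", "my_table")
default_stream = f"{table_path}/_default"
```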
The following limits apply to Storage Write API requests:
| Limit | Default | Notes |
|---|---|---|
| Batch commits | 10,000 streams per table | You can commit up to 10,000 streams in each `BatchCommitWriteStream` call (see the chunking example after this table). |
| `AppendRows` request size | 10 MB | The maximum request size is 10 MB. |
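If a load produces more pending streams than one call can commit, the stream names can be committed in chunks of at most 10,000. A minimal sketch with the Python Storage Write client; the project, dataset, and table names are placeholders:

```python
from google.cloud import bigquery_storage_v1

client = bigquery_storage_v1.BigQueryWriteClient()
table_path = client.table_path("my-project", "my_dataset", "my_table")

def commit_in_chunks(stream_names, chunk_size=10_000):
    """Commit pending write streams without exceeding the per-call limit."""
    for i in range(0, len(stream_names), chunk_size):
        client.batch_commit_write_streams(
            parent=table_path,
            write_streams=stream_names[i : i + chunk_size],
        )
```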
Streaming inserts
The following quotas and limits apply when you stream data into BigQuery by using the legacy streaming API. For information about strategies to stay within these limits, see Troubleshooting quota errors. If you exceed these quotas, you get `quotaExceeded` errors.
| Limit | Default | Notes |
|---|---|---|
| Maximum bytes per second per project in the `us` and `eu` multi-regions | 1 GB per second | Your project can stream up to 1 GB per second. This quota is cumulative within a given multi-region; that is, the sum of bytes per second streamed to all tables for a given project within a multi-region is limited to 1 GB. Exceeding this limit causes `quotaExceeded` errors. If necessary, you can request a quota increase by contacting Cloud Customer Care. Request any increase as early as possible, at minimum two weeks before you need it. A quota increase takes time to become available, especially a significant one. |
| Maximum bytes per second per project in all other locations | 300 MB per second | Your project can stream up to 300 MB per second in all locations except the `us` and `eu` multi-regions. This quota is cumulative within a given region; that is, the sum of bytes per second streamed to all tables for a given project within a region is limited to 300 MB. Exceeding this limit causes `quotaExceeded` errors. If necessary, you can request a quota increase by contacting Cloud Customer Care. Request any increase as early as possible, at minimum two weeks before you need it. A quota increase takes time to become available, especially a significant one. |
| Maximum row size | 10 MB | Exceeding this value causes `invalid` errors. |
| HTTP request size limit | 10 MB | Exceeding this value causes `invalid` errors. Internally, the request is translated from HTTP JSON into an internal data structure, which has its own enforced size limit. It's hard to predict the size of the resulting internal structure, but if you keep your HTTP requests to 10 MB or less, the chance of hitting the internal limit is low. |
| Maximum rows per request | 50,000 rows | A maximum of 500 rows per request is recommended (see the example after this table). Batching can increase performance and throughput up to a point, at the cost of per-request latency. With too few rows per request, the overhead of each request can make ingestion inefficient; with too many, throughput can drop. Experiment with representative data (schema and data sizes) to determine the ideal batch size for your data. |
| `insertId` field length | 128 characters | Exceeding this value causes `invalid` errors. |
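As a worked example of the row-count and `insertId` limits above, the following sketch streams rows with the Python client in batches of the recommended 500, supplying `insertId` values (up to 128 characters each) through `row_ids`; the table name, sample rows, and ID scheme are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.my_dataset.my_table"  # placeholder

rows = [{"name": f"user_{i}", "score": i} for i in range(2_000)]  # sample data

BATCH = 500  # the documented recommendation; the hard limit is 50,000 rows
for start in range(0, len(rows), BATCH):
    batch = rows[start : start + BATCH]
    # row_ids supplies insertId values (max 128 characters each) for
    # best-effort deduplication.
    errors = client.insert_rows_json(
        table_id,
        batch,
        row_ids=[f"row-{start + i}" for i in range(len(batch))],
    )
    if errors:
        raise RuntimeError(f"streaming insert failed: {errors}")
```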
For additional streaming quota, see Request a quota increase.