google-bigquery

Google BigQuery API integration with managed OAuth. Run SQL queries, manage datasets and tables, and analyze data at scale. Use this skill when users want to query BigQuery data, create or manage datasets/tables, run analytics jobs, or work with BigQuery resources. For other third-party apps, use the api-gateway skill (https://clawhub.ai/byungkyu/api-gateway).

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "google-bigquery" with this command: npx skills add maton/google-bigquery

Google BigQuery

Access the Google BigQuery API with managed OAuth authentication. Run SQL queries, manage datasets and tables, and analyze data at scale.

Quick Start

# Run a simple query
python <<'EOF'
import urllib.request, os, json
data = json.dumps({'query': 'SELECT 1 as test_value', 'useLegacySql': False}).encode()
req = urllib.request.Request('https://gateway.maton.ai/google-bigquery/bigquery/v2/projects/{projectId}/queries', data=data, method='POST')  # replace {projectId} with your project ID
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF

Base URL

https://gateway.maton.ai/google-bigquery/bigquery/v2/{resource-path}

Replace {resource-path} with the actual BigQuery API endpoint path. The gateway proxies requests to bigquery.googleapis.com and automatically injects your OAuth token.

Authentication

All requests require the Maton API key in the Authorization header:

Authorization: Bearer $MATON_API_KEY

Environment Variable: Set your API key as MATON_API_KEY:

export MATON_API_KEY="YOUR_API_KEY"

Getting Your API Key

  1. Sign in or create an account at maton.ai
  2. Go to maton.ai/settings
  3. Copy your API key

Connection Management

Manage your Google BigQuery OAuth connections at https://ctrl.maton.ai.

List Connections

python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://ctrl.maton.ai/connections?app=google-bigquery&status=ACTIVE')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF

Create Connection

python <<'EOF'
import urllib.request, os, json
data = json.dumps({'app': 'google-bigquery'}).encode()
req = urllib.request.Request('https://ctrl.maton.ai/connections', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF

Get Connection

python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://ctrl.maton.ai/connections/{connection_id}')  # replace {connection_id} with a real connection ID
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF

Response:

{
  "connection": {
    "connection_id": "c8463a31-e5b4-4e52-9a32-e78dcd7ba7b1",
    "status": "ACTIVE",
    "creation_time": "2026-02-14T09:02:02.780520Z",
    "last_updated_time": "2026-02-14T09:02:19.977436Z",
    "url": "https://connect.maton.ai/?session_token=...",
    "app": "google-bigquery",
    "metadata": {}
  }
}

Open the returned url in a browser to complete OAuth authorization.

Delete Connection

python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://ctrl.maton.ai/connections/{connection_id}', method='DELETE')  # replace {connection_id} with a real connection ID
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF

Specifying Connection

If you have multiple Google BigQuery connections, specify which one to use with the Maton-Connection header:

python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://gateway.maton.ai/google-bigquery/bigquery/v2/projects')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Maton-Connection', 'c8463a31-e5b4-4e52-9a32-e78dcd7ba7b1')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF

If omitted, the gateway uses the default (oldest) active connection.

API Reference

Projects

List Projects

List all projects accessible to the authenticated user.

GET /google-bigquery/bigquery/v2/projects

Response:

{
  "kind": "bigquery#projectList",
  "projects": [
    {
      "id": "my-project-123",
      "numericId": "822245862053",
      "projectReference": {
        "projectId": "my-project-123"
      },
      "friendlyName": "My Project"
    }
  ],
  "totalItems": 1
}

Datasets

List Datasets

GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets

Query Parameters:

  • maxResults - Maximum number of results to return
  • pageToken - Token for pagination
  • all - Include hidden datasets if true

Get Dataset

GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}

Create Dataset

POST /google-bigquery/bigquery/v2/projects/{projectId}/datasets
Content-Type: application/json

{
  "datasetReference": {
    "datasetId": "my_dataset",
    "projectId": "{projectId}"
  },
  "description": "My dataset description",
  "location": "US"
}

Response:

{
  "kind": "bigquery#dataset",
  "id": "my-project:my_dataset",
  "datasetReference": {
    "datasetId": "my_dataset",
    "projectId": "my-project"
  },
  "location": "US",
  "creationTime": "1771059780773"
}
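The request above can be wrapped in a small helper, in the same urllib style as the Quick Start. This is a sketch rather than part of the skill itself; the payload builder mirrors the body shown above, and the project/dataset names are placeholders.

```python
import json
import os
import urllib.request

def dataset_payload(project_id, dataset_id, description="", location="US"):
    """Build the datasets.insert request body shown above."""
    return {
        "datasetReference": {"datasetId": dataset_id, "projectId": project_id},
        "description": description,
        "location": location,
    }

def create_dataset(project_id, dataset_id, **kwargs):
    """POST the payload through the gateway; requires MATON_API_KEY."""
    url = f"https://gateway.maton.ai/google-bigquery/bigquery/v2/projects/{project_id}/datasets"
    req = urllib.request.Request(
        url,
        data=json.dumps(dataset_payload(project_id, dataset_id, **kwargs)).encode(),
        method="POST")
    req.add_header("Authorization", f"Bearer {os.environ['MATON_API_KEY']}")
    req.add_header("Content-Type", "application/json")
    return json.load(urllib.request.urlopen(req))

if __name__ == "__main__":
    # create_dataset("my-project-123", "my_dataset", description="Demo dataset")
    pass
```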

Update Dataset (PATCH)

PATCH /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}
Content-Type: application/json

{
  "description": "Updated description"
}

Delete Dataset

DELETE /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}

Query Parameters:

  • deleteContents - If true, delete all tables in the dataset (default: false)

Tables

List Tables

GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables

Query Parameters:

  • maxResults - Maximum number of results to return
  • pageToken - Token for pagination

Get Table

GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}

Create Table

POST /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables
Content-Type: application/json

{
  "tableReference": {
    "projectId": "{projectId}",
    "datasetId": "{datasetId}",
    "tableId": "my_table"
  },
  "schema": {
    "fields": [
      {"name": "id", "type": "INTEGER", "mode": "REQUIRED"},
      {"name": "name", "type": "STRING", "mode": "NULLABLE"},
      {"name": "created_at", "type": "TIMESTAMP", "mode": "NULLABLE"}
    ]
  }
}

Response:

{
  "kind": "bigquery#table",
  "id": "my-project:my_dataset.my_table",
  "tableReference": {
    "projectId": "my-project",
    "datasetId": "my_dataset",
    "tableId": "my_table"
  },
  "schema": {
    "fields": [
      {"name": "id", "type": "INTEGER", "mode": "REQUIRED"},
      {"name": "name", "type": "STRING", "mode": "NULLABLE"},
      {"name": "created_at", "type": "TIMESTAMP", "mode": "NULLABLE"}
    ]
  },
  "numRows": "0",
  "type": "TABLE"
}
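The same urllib pattern works for creating tables; a sketch where the schema field list is passed in directly (names and IDs are placeholders):

```python
import json
import os
import urllib.request

def table_payload(project_id, dataset_id, table_id, fields):
    """Build the tables.insert body shown above from a list of field dicts."""
    return {
        "tableReference": {"projectId": project_id,
                           "datasetId": dataset_id,
                           "tableId": table_id},
        "schema": {"fields": fields},
    }

def create_table(project_id, dataset_id, table_id, fields):
    """POST the table definition through the gateway; requires MATON_API_KEY."""
    url = (f"https://gateway.maton.ai/google-bigquery/bigquery/v2/projects/{project_id}"
           f"/datasets/{dataset_id}/tables")
    req = urllib.request.Request(
        url,
        data=json.dumps(table_payload(project_id, dataset_id, table_id, fields)).encode(),
        method="POST")
    req.add_header("Authorization", f"Bearer {os.environ['MATON_API_KEY']}")
    req.add_header("Content-Type", "application/json")
    return json.load(urllib.request.urlopen(req))
```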

Update Table (PATCH)

PATCH /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}
Content-Type: application/json

{
  "description": "Updated table description"
}

Delete Table

DELETE /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}

Table Data

List Table Data

Retrieve rows from a table.

GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}/data

Query Parameters:

  • maxResults - Maximum number of results to return
  • pageToken - Token for pagination
  • startIndex - Zero-based index of the starting row

Response:

{
  "kind": "bigquery#tableDataList",
  "totalRows": "100",
  "rows": [
    {
      "f": [
        {"v": "1"},
        {"v": "Alice"},
        {"v": "1.7710597807E9"}
      ]
    }
  ],
  "pageToken": "..."
}
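Because tabledata responses use the compact f/v structure without field names, pairing them with the schema from Get Table is a common step. A minimal sketch (the schema here is assumed to come from a separate tables.get call):

```python
def rows_to_dicts(schema_fields, rows):
    """Zip tables.get schema field names with tabledata f/v rows."""
    names = [f["name"] for f in schema_fields]
    return [dict(zip(names, (cell["v"] for cell in row["f"]))) for row in rows]

# Example using the response shown above:
schema = [{"name": "id"}, {"name": "name"}, {"name": "created_at"}]
rows = [{"f": [{"v": "1"}, {"v": "Alice"}, {"v": "1.7710597807E9"}]}]
print(rows_to_dicts(schema, rows))
# [{'id': '1', 'name': 'Alice', 'created_at': '1.7710597807E9'}]
```

Note that all values arrive as strings (timestamps as epoch seconds in scientific notation); convert types yourself as needed.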

Insert Table Data (Streaming)

Insert rows into a table using streaming insert. Note: Requires BigQuery paid tier.

POST /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}/insertAll
Content-Type: application/json

{
  "rows": [
    {"json": {"id": 1, "name": "Alice"}},
    {"json": {"id": 2, "name": "Bob"}}
  ]
}
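The insertAll body can be built from plain dicts. One detail worth handling: per-row failures are reported in an insertErrors array on a 200 response, not as an HTTP error. A sketch (table names are placeholders):

```python
import json
import os
import urllib.request

def insert_all_payload(records):
    """Wrap plain dicts in the rows/json envelope insertAll expects."""
    return {"rows": [{"json": r} for r in records]}

def stream_insert(project_id, dataset_id, table_id, records):
    """Streaming insert via the gateway; requires the paid tier (see note above)."""
    url = (f"https://gateway.maton.ai/google-bigquery/bigquery/v2/projects/{project_id}"
           f"/datasets/{dataset_id}/tables/{table_id}/insertAll")
    req = urllib.request.Request(
        url, data=json.dumps(insert_all_payload(records)).encode(), method="POST")
    req.add_header("Authorization", f"Bearer {os.environ['MATON_API_KEY']}")
    req.add_header("Content-Type", "application/json")
    resp = json.load(urllib.request.urlopen(req))
    # Per-row failures come back in insertErrors rather than as an HTTP error.
    if resp.get("insertErrors"):
        raise RuntimeError(f"some rows failed: {resp['insertErrors']}")
    return resp
```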

Jobs and Queries

Run Query (Synchronous)

Execute a SQL query and return results directly.

POST /google-bigquery/bigquery/v2/projects/{projectId}/queries
Content-Type: application/json

{
  "query": "SELECT * FROM `my_dataset.my_table` LIMIT 10",
  "useLegacySql": false,
  "maxResults": 100
}

Response:

{
  "kind": "bigquery#queryResponse",
  "schema": {
    "fields": [
      {"name": "id", "type": "INTEGER"},
      {"name": "name", "type": "STRING"}
    ]
  },
  "jobReference": {
    "projectId": "my-project",
    "jobId": "job_abc123",
    "location": "US"
  },
  "totalRows": "2",
  "rows": [
    {"f": [{"v": "1"}, {"v": "Alice"}]},
    {"f": [{"v": "2"}, {"v": "Bob"}]}
  ],
  "jobComplete": true,
  "totalBytesProcessed": "1024"
}

Request Body Fields:

  • useLegacySql - Use legacy SQL syntax (default: false for GoogleSQL)
  • maxResults - Maximum results per page
  • timeoutMs - Query timeout in milliseconds

Create Job (Asynchronous)

Submit a job for asynchronous execution.

POST /google-bigquery/bigquery/v2/projects/{projectId}/jobs
Content-Type: application/json

{
  "configuration": {
    "query": {
      "query": "SELECT * FROM `my_dataset.my_table`",
      "useLegacySql": false,
      "destinationTable": {
        "projectId": "{projectId}",
        "datasetId": "{datasetId}",
        "tableId": "results_table"
      },
      "writeDisposition": "WRITE_TRUNCATE"
    }
  }
}
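A common pattern is to submit the job, then poll Get Job (below) until status.state is DONE. A sketch under the same gateway assumptions; the config builder mirrors the body above, and the polling interval is arbitrary:

```python
import json
import os
import time
import urllib.request

GATEWAY = "https://gateway.maton.ai/google-bigquery/bigquery/v2"

def query_job_config(sql, dest=None, write_disposition="WRITE_TRUNCATE"):
    """Build the jobs.insert configuration shown above."""
    query = {"query": sql, "useLegacySql": False}
    if dest:
        query["destinationTable"] = dest
        query["writeDisposition"] = write_disposition
    return {"configuration": {"query": query}}

def _call(path, body=None):
    """GET or POST a gateway path; requires MATON_API_KEY."""
    req = urllib.request.Request(
        f"{GATEWAY}{path}",
        data=json.dumps(body).encode() if body else None,
        method="POST" if body else "GET")
    req.add_header("Authorization", f"Bearer {os.environ['MATON_API_KEY']}")
    if body:
        req.add_header("Content-Type", "application/json")
    return json.load(urllib.request.urlopen(req))

def run_job(project_id, sql):
    """Submit an async query job and poll until it reaches DONE."""
    job = _call(f"/projects/{project_id}/jobs", query_job_config(sql))
    ref = job["jobReference"]
    while job["status"]["state"] != "DONE":
        time.sleep(2)
        job = _call(f"/projects/{project_id}/jobs/{ref['jobId']}?location={ref['location']}")
    return job
```

A DONE state alone does not mean success; check the returned job's status for an errorResult before reading the destination table.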

List Jobs

GET /google-bigquery/bigquery/v2/projects/{projectId}/jobs

Query Parameters:

  • maxResults - Maximum number of results to return
  • pageToken - Token for pagination
  • stateFilter - Filter by job state: done, pending, running
  • projection - full or minimal

Response:

{
  "kind": "bigquery#jobList",
  "jobs": [
    {
      "id": "my-project:US.job_abc123",
      "jobReference": {
        "projectId": "my-project",
        "jobId": "job_abc123",
        "location": "US"
      },
      "state": "DONE",
      "statistics": {
        "creationTime": "1771059781456",
        "startTime": "1771059782203",
        "endTime": "1771059782324"
      }
    }
  ]
}

Get Job

GET /google-bigquery/bigquery/v2/projects/{projectId}/jobs/{jobId}

Query Parameters:

  • location - Job location (e.g., "US", "EU")

Get Query Results

Retrieve results from a completed query job.

GET /google-bigquery/bigquery/v2/projects/{projectId}/queries/{jobId}

Query Parameters:

  • location - Job location
  • maxResults - Maximum results per page
  • pageToken - Token for pagination
  • startIndex - Zero-based starting row

Cancel Job

POST /google-bigquery/bigquery/v2/projects/{projectId}/jobs/{jobId}/cancel

Query Parameters:

  • location - Job location

Pagination

BigQuery uses token-based pagination. List responses include a pageToken when more results exist:

GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets?maxResults=10&pageToken={token}

Response:

{
  "datasets": [...],
  "nextPageToken": "eyJvZmZzZXQiOjEwfQ=="
}

Use the nextPageToken value as pageToken in subsequent requests.
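The paging loop can be written once against any list endpoint. A sketch that takes a fetch callable, so the token logic stays independent of transport (the fake fetch below stands in for real gateway calls):

```python
def paginate(fetch, items_key):
    """Follow nextPageToken until exhausted.

    fetch(page_token) should return one parsed list response;
    items_key is e.g. 'datasets', 'tables', or 'jobs'.
    """
    token = None
    while True:
        page = fetch(token)
        yield from page.get(items_key, [])
        token = page.get("nextPageToken")
        if not token:
            break

# Usage with a fake two-page fetch:
pages = {None: {"datasets": [1, 2], "nextPageToken": "t1"},
         "t1": {"datasets": [3]}}
print(list(paginate(lambda t: pages[t], "datasets")))
# [1, 2, 3]
```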

Code Examples

JavaScript

// Run a query
const response = await fetch(
  'https://gateway.maton.ai/google-bigquery/bigquery/v2/projects/my-project/queries',
  {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.MATON_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      query: 'SELECT * FROM `my_dataset.my_table` LIMIT 10',
      useLegacySql: false
    })
  }
);
const data = await response.json();
console.log(data.rows);

Python

import os
import requests

# Run a query
response = requests.post(
    'https://gateway.maton.ai/google-bigquery/bigquery/v2/projects/my-project/queries',
    headers={'Authorization': f'Bearer {os.environ["MATON_API_KEY"]}'},
    json={
        'query': 'SELECT * FROM `my_dataset.my_table` LIMIT 10',
        'useLegacySql': False
    }
)
data = response.json()
for row in data.get('rows', []):
    print([field['v'] for field in row['f']])

Schema Field Types

Common BigQuery data types for table schemas:

  • STRING - Variable-length character data
  • INTEGER - 64-bit signed integer
  • FLOAT - 64-bit IEEE floating point
  • BOOLEAN - True or false
  • TIMESTAMP - Absolute point in time
  • DATE - Calendar date
  • TIME - Time of day
  • DATETIME - Date and time
  • BYTES - Variable-length binary data
  • NUMERIC - Exact numeric value with 38 digits of precision
  • BIGNUMERIC - Exact numeric value with 76+ digits of precision
  • GEOGRAPHY - Geographic data
  • JSON - JSON data
  • RECORD - Nested fields (also called STRUCT)

Field Modes:

  • NULLABLE - Field can be null (default)
  • REQUIRED - Field cannot be null
  • REPEATED - Field is an array
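REPEATED and RECORD combine to model arrays of nested structs. A hypothetical schema fragment (field names are illustrative, not from the skill):

```python
# Schema for a table where each row holds a repeated nested "events" field.
schema = {
    "fields": [
        {"name": "id", "type": "INTEGER", "mode": "REQUIRED"},
        {"name": "events", "type": "RECORD", "mode": "REPEATED",
         "fields": [
             {"name": "kind", "type": "STRING", "mode": "NULLABLE"},
             {"name": "at", "type": "TIMESTAMP", "mode": "NULLABLE"},
         ]},
    ]
}
```

This dict drops into the "schema" key of a Create Table request body.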

Notes

  • Project IDs are typically in the format project-name or project-name-12345
  • Dataset IDs follow naming rules: letters, numbers, underscores (max 1024 characters)
  • Table IDs follow same naming rules as datasets
  • Job IDs are generated by BigQuery and include location prefix
  • Query results use f (fields) and v (value) structure
  • Streaming inserts require BigQuery paid tier (not available in free tier)
  • Use useLegacySql: false for GoogleSQL (standard SQL) syntax
  • IMPORTANT: When using curl commands, use curl -g when URLs contain brackets to disable glob parsing
  • IMPORTANT: When piping curl output to jq or other commands, environment variables like $MATON_API_KEY may not expand correctly in some shell environments

Error Handling

  • 400 - Missing Google BigQuery connection or invalid request
  • 401 - Invalid or missing Maton API key
  • 403 - Access denied (insufficient permissions or quota exceeded)
  • 404 - Resource not found (project, dataset, table, or job)
  • 409 - Resource already exists
  • 429 - Rate limited
  • 4xx/5xx - Passthrough error from the BigQuery API

Troubleshooting: API Key Issues

  1. Check that the MATON_API_KEY environment variable is set:
echo $MATON_API_KEY
  2. Verify the API key is valid by listing connections:
python <<'EOF'
import urllib.request, os, json
req = urllib.request.Request('https://ctrl.maton.ai/connections')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF

Troubleshooting: Invalid App Name

  1. Ensure your URL path starts with google-bigquery. For example:
  • Correct: https://gateway.maton.ai/google-bigquery/bigquery/v2/projects
  • Incorrect: https://gateway.maton.ai/bigquery/v2/projects
