BigQuery to Cloud Storage TFRecords template

The BigQuery to Cloud Storage TFRecords template is a pipeline that reads the results of a BigQuery query and writes them to a Cloud Storage bucket in TFRecord format. You can specify the training, testing, and validation percentage splits. By default, the split is 1 (100%) for the training set and 0 (0%) for the testing and validation sets. When you set the dataset split, the training, testing, and validation percentages must add up to 1 (100%), for example, 0.6+0.2+0.2. Dataflow automatically determines the optimal number of shards for each output dataset.
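The split semantics can be illustrated with a short sketch. This is not the template's actual implementation (Dataflow manages partitioning and sharding internally); it only shows how a sum-to-1 split such as 0.6+0.2+0.2 divides rows, here using a hypothetical hash-based assignment keyed on a row ID:

```python
import hashlib

def assign_split(row_id: str,
                 training: float = 0.6,
                 testing: float = 0.2,
                 validation: float = 0.2) -> str:
    """Deterministically map a row to a dataset by hashing its ID."""
    assert abs(training + testing + validation - 1.0) < 1e-9, \
        "training + testing + validation must equal 1 (100%)"
    # Hash the ID to a stable fraction in [0, 1).
    digest = hashlib.sha256(row_id.encode()).digest()
    fraction = int.from_bytes(digest[:8], "big") / 2**64
    if fraction < training:
        return "train"
    if fraction < training + testing:
        return "test"
    return "validation"

# With 10,000 rows, the counts land close to 6,000 / 2,000 / 2,000.
splits = [assign_split(f"row-{i}") for i in range(10_000)]
print({name: splits.count(name) for name in ("train", "test", "validation")})
```

Because the assignment is keyed on the row ID, re-running the sketch yields the same split for the same rows.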

Pipeline requirements

  • The BigQuery dataset and table must exist.
  • The output Cloud Storage bucket must exist before pipeline execution. Training, testing, and validation subdirectories don't need to exist beforehand; they are generated automatically.

Template parameters

Required parameters

  • readQuery: A BigQuery SQL query that extracts data from the source. For example, select * from dataset1.sample_table.
  • outputDirectory: The top-level Cloud Storage path prefix to use when writing the training, testing, and validation TFRecord files. Subdirectories for resulting training, testing, and validation TFRecord files are automatically generated from outputDirectory. For example, gs://mybucket/output.

Optional parameters

  • readIdColumn: Name of the BigQuery column storing the unique identifier of the row.
  • invalidOutputPath: The Cloud Storage path to write BigQuery rows that can't be converted to target entities. For example, gs://your-bucket/your-path.
  • outputSuffix: The file suffix for the training, testing, and validation TFRecord files that are written. The default value is .tfrecord.
  • trainingPercentage: The percentage of query data allocated to training TFRecord files. The default value is 1, or 100%.
  • testingPercentage: The percentage of query data allocated to testing TFRecord files. The default value is 0, or 0%.
  • validationPercentage: The percentage of query data allocated to validation TFRecord files. The default value is 0, or 0%.
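As a sketch of the split invariant, the parameter map passed to the template (by either gcloud or the REST API) can be assembled and checked like this. The build_template_parameters helper and the example values are illustrative, not part of the template:

```python
def build_template_parameters(read_query, output_directory,
                              training=1.0, testing=0.0, validation=0.0,
                              output_suffix=".tfrecord"):
    """Assemble the template's parameter map, enforcing the split invariant."""
    if abs(training + testing + validation - 1.0) > 1e-9:
        raise ValueError("training + testing + validation must equal 1")
    # Template parameters are passed as strings.
    return {
        "readQuery": read_query,
        "outputDirectory": output_directory,
        "trainingPercentage": str(training),
        "testingPercentage": str(testing),
        "validationPercentage": str(validation),
        "outputSuffix": output_suffix,
    }

params = build_template_parameters(
    "select * from dataset1.sample_table",
    "gs://mybucket/output",
    training=0.7, testing=0.15, validation=0.15)
print(params["trainingPercentage"])  # 0.7
```

With the defaults (training=1.0, testing=0.0, validation=0.0), every row goes to the training set, matching the template's default behavior.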

Run the template

Console

  1. Go to the Dataflow Create job from template page.
  2. In the Job name field, enter a unique job name.
  3. Optional: For Regional endpoint, select a value from the drop-down menu. The default region is us-central1.

    For a list of regions where you can run a Dataflow job, see Dataflow locations.

  4. From the Dataflow template drop-down menu, select the BigQuery to TFRecords template.
  5. In the provided parameter fields, enter your parameter values.
  6. Click Run job.

gcloud

In your shell or terminal, run the template:

gcloud dataflow jobs run JOB_NAME \
    --gcs-location gs://dataflow-templates-REGION_NAME/VERSION/Cloud_BigQuery_to_GCS_TensorFlow_Records \
    --region REGION_NAME \
    --parameters \
readQuery=READ_QUERY,\
outputDirectory=OUTPUT_DIRECTORY,\
trainingPercentage=TRAINING_PERCENTAGE,\
testingPercentage=TESTING_PERCENTAGE,\
validationPercentage=VALIDATION_PERCENTAGE,\
outputSuffix=OUTPUT_FILENAME_SUFFIX

Replace the following:

  • JOB_NAME: a unique job name of your choice
  • VERSION: the version of the template that you want to use

    You can use the following values:

      ◦ latest to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/latest/
      ◦ a version name, to use a specific version of the template, which is nested in its dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/

  • REGION_NAME: the region where you want to deploy your Dataflow job—for example, us-central1
  • READ_QUERY: the BigQuery query to run
  • OUTPUT_DIRECTORY: the Cloud Storage path prefix for output datasets
  • TRAINING_PERCENTAGE: the decimal percentage split for the training dataset
  • TESTING_PERCENTAGE: the decimal percentage split for the testing dataset
  • VALIDATION_PERCENTAGE: the decimal percentage split for the validation dataset
  • OUTPUT_FILENAME_SUFFIX: the preferred output TensorFlow Record file suffix

API

To run the template using the REST API, send an HTTP POST request. For more information on the API and its authorization scopes, see projects.templates.launch.

POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/templates:launch?gcsPath=gs://dataflow-templates-LOCATION/VERSION/Cloud_BigQuery_to_GCS_TensorFlow_Records
{
   "jobName": "JOB_NAME",
   "parameters": {
       "readQuery":"READ_QUERY",
       "outputDirectory":"OUTPUT_DIRECTORY",
       "trainingPercentage":"TRAINING_PERCENTAGE",
       "testingPercentage":"TESTING_PERCENTAGE",
       "validationPercentage":"VALIDATION_PERCENTAGE",
       "outputSuffix":"OUTPUT_FILENAME_SUFFIX"
   },
   "environment": { "zone": "us-central1-f" }
}

Replace the following:

  • PROJECT_ID: the Google Cloud project ID where you want to run the Dataflow job
  • JOB_NAME: a unique job name of your choice
  • VERSION: the version of the template that you want to use

    You can use the following values:

      ◦ latest to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-LOCATION/latest/
      ◦ a version name, to use a specific version of the template, which is nested in its dated parent folder in the bucket: gs://dataflow-templates-LOCATION/

  • LOCATION: the region where you want to deploy your Dataflow job—for example, us-central1
  • READ_QUERY: the BigQuery query to run
  • OUTPUT_DIRECTORY: the Cloud Storage path prefix for output datasets
  • TRAINING_PERCENTAGE: the decimal percentage split for the training dataset
  • TESTING_PERCENTAGE: the decimal percentage split for the testing dataset
  • VALIDATION_PERCENTAGE: the decimal percentage split for the validation dataset
  • OUTPUT_FILENAME_SUFFIX: the preferred output TensorFlow Record file suffix
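As an illustrative sketch, the request URL and body shown above can be assembled programmatically before sending them with any HTTP client and an OAuth 2.0 bearer token (for example, one obtained from gcloud auth print-access-token). The project ID, job name, and parameter values below are placeholders:

```python
import json

# Placeholders; replace with your own values.
PROJECT_ID = "my-project"
LOCATION = "us-central1"
VERSION = "latest"

# Launch endpoint for classic templates, matching the POST request above.
url = (f"https://dataflow.googleapis.com/v1b3/projects/{PROJECT_ID}"
       f"/locations/{LOCATION}/templates:launch"
       f"?gcsPath=gs://dataflow-templates-{LOCATION}/{VERSION}"
       f"/Cloud_BigQuery_to_GCS_TensorFlow_Records")

# JSON request body; all template parameters are passed as strings.
body = json.dumps({
    "jobName": "bq-to-tfrecords-example",
    "parameters": {
        "readQuery": "select * from dataset1.sample_table",
        "outputDirectory": "gs://mybucket/output",
        "trainingPercentage": "0.7",
        "testingPercentage": "0.15",
        "validationPercentage": "0.15",
        "outputSuffix": ".tfrecord",
    },
})
print(url)
```

POST the body to the printed URL with a Content-Type of application/json and an Authorization header of the form Bearer ACCESS_TOKEN.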

What's next