
Commit b5f33bf

Simplify BigQuery samples according to our standard. (#207)
* Simplify BigQuery samples according to our standard.
* Address comments
* Add region tag.
* Re-enable cache. Remove .travis.yml
1 parent 8554ebd commit b5f33bf

15 files changed: +671, -1033 lines

.travis.yml

Lines changed: 0 additions & 95 deletions
This file was deleted.

bigquery/README.md

Lines changed: 50 additions & 50 deletions
@@ -11,7 +11,6 @@ analytics data warehouse.
 
 * [Setup](#setup)
 * [Samples](#samples)
-  * [Create A Simple Application With the API](#create-a-simple-application-with-the-api)
   * [Datasets](#datasets)
   * [Queries](#queries)
   * [Tables](#tables)
@@ -28,17 +27,6 @@ analytics data warehouse.
 
 ## Samples
 
-### Create A Simple Application With the API
-
-View the [documentation][basics_docs] or the [source code][basics_code].
-
-__Run the sample:__
-
-    node getting_started
-
-[basics_docs]: https://cloud.google.com/bigquery/create-simple-app-api
-[basics_code]: getting_started.js
-
 ### Datasets
 
 View the [documentation][datasets_docs] or the [source code][datasets_code].
@@ -47,25 +35,22 @@ __Usage:__ `node datasets --help`
 
 ```
 Commands:
-  create <name>       Create a new dataset.
-  delete <datasetId>  Delete the specified dataset.
-  list                List datasets in the authenticated project.
+  create <datasetId>  Create a new dataset with the specified ID.
+  delete <datasetId>  Delete the dataset with the specified ID.
+  list                List datasets in the specified project.
   size <datasetId>    Calculate the size of the specified dataset.
 
 Options:
-  --projectId, -p  Optionally specify the project ID to use.
-                                                                      [string]
-  --help           Show help                                         [boolean]
+  --projectId, -p  Optionally specify the project ID to use.  [string] [default: "nodejs-docs-samples"]
+  --help           Show help                                  [boolean]
 
 Examples:
-  node datasets create my_dataset             Create a new dataset named "my_dataset".
-  node datasets delete my_dataset             Delete "my_dataset".
-  node datasets list                          List datasets.
-  node datasets list -p bigquery-public-data  List datasets in a project other than the
-                                              authenticated project.
-  node datasets size my_dataset               Calculate the size of "my_dataset".
-  node datasets size hacker_news -p           Calculate the size of
-    bigquery-public-data                      "bigquery-public-data:hacker_news".
+  node datasets create my_dataset                         Create a new dataset with the ID "my_dataset".
+  node datasets delete my_dataset                         Delete a dataset identified as "my_dataset".
+  node datasets list                                      List datasets.
+  node datasets list -p bigquery-public-data              List datasets in the "bigquery-public-data" project.
+  node datasets size my_dataset                           Calculate the size of "my_dataset".
+  node datasets size hacker_news -p bigquery-public-data  Calculate the size of "bigquery-public-data:hacker_news".
 
 For more information, see https://cloud.google.com/bigquery/docs
 ```
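The renamed commands map one positional argument per ID. As a rough illustration of how a CLI like this routes a command name plus positional arguments to a handler, here is a minimal hand-rolled dispatch sketch; the real sample uses yargs and the BigQuery client, so every name below is hypothetical:

```javascript
// Hypothetical stand-ins for the datasets sample's handlers; the strings
// returned here only simulate what the real BigQuery calls would do.
const handlers = {
  // create <datasetId>: create a new dataset with the specified ID.
  create: (datasetId) => `created dataset ${datasetId}`,
  // delete <datasetId>: delete the dataset with the specified ID.
  delete: (datasetId) => `deleted dataset ${datasetId}`,
  // size <datasetId>: calculate the size of the specified dataset.
  size: (datasetId) => `size of ${datasetId}`
};

// Route ['command', arg1, ...] to the matching handler.
function dispatch(argv) {
  const [command, ...args] = argv;
  if (!handlers[command]) {
    throw new Error(`Unknown command: ${command}`);
  }
  return handlers[command](...args);
}

console.log(dispatch(['create', 'my_dataset'])); // → created dataset my_dataset
```

In the actual sample, yargs derives the `<datasetId>` usage strings shown above from the command definitions, which is why renaming `<name>` to `<datasetId>` changes the help output.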
@@ -81,17 +66,19 @@ __Usage:__ `node queries --help`
 
 ```
 Commands:
-  sync <query>   Run a synchronous query.
-  async <query>  Start an asynchronous query.
-  poll <jobId>   Get the status of a job.
+  sync <sqlQuery>   Run the specified synchronous query.
+  async <sqlQuery>  Start the specified asynchronous query.
+  wait <jobId>      Wait for the specified job to complete and retrieve its results.
 
 Options:
-  --help  Show help                                                  [boolean]
+  --help  Show help  [boolean]
 
 Examples:
-  node queries sync "SELECT * FROM publicdata:samples.natality LIMIT 5;"
-  node queries async "SELECT * FROM publicdata:samples.natality LIMIT 5;"
-  node queries poll 12345
+  node queries sync "SELECT * FROM
+  `publicdata.samples.natality` LIMIT 5;"
+  node queries async "SELECT * FROM
+  `publicdata.samples.natality` LIMIT 5;"
+  node queries wait job_VwckYXnR8yz54GBDMykIGnrc2
 
 For more information, see https://cloud.google.com/bigquery/docs
 ```
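The diff replaces `poll` (check a job's status once) with `wait` (block until the job completes, then fetch its results). A minimal promise-based sketch of that start-then-wait pattern follows; the job store and function names are invented for illustration and are not the `@google-cloud/bigquery` API:

```javascript
// Hypothetical in-memory stand-in for a job service, to illustrate the
// "async <sqlQuery>" / "wait <jobId>" flow from the help text above.
const jobs = new Map();
let nextId = 0;

// "async <sqlQuery>": start a query and return a job ID immediately.
function startAsyncQuery(sqlQuery) {
  const jobId = `job_${++nextId}`;
  jobs.set(jobId, { done: false, rows: null });
  // Simulate the service finishing the query shortly afterwards.
  setTimeout(() => jobs.set(jobId, { done: true, rows: [{ query: sqlQuery }] }), 10);
  return jobId;
}

// "wait <jobId>": poll until the job is done, then resolve with its rows.
function waitForJob(jobId) {
  return new Promise((resolve, reject) => {
    const check = () => {
      const job = jobs.get(jobId);
      if (!job) return reject(new Error(`No such job: ${jobId}`));
      if (job.done) return resolve(job.rows);
      setTimeout(check, 5); // back off briefly before polling again
    };
    check();
  });
}

const jobId = startAsyncQuery('SELECT * FROM `publicdata.samples.natality` LIMIT 5;');
waitForJob(jobId).then((rows) => console.log(rows.length)); // prints 1 once the job completes
```

Note also that the updated examples switch from the legacy `publicdata:samples.natality` table reference to the backtick-quoted `publicdata.samples.natality` form used by BigQuery standard SQL.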
@@ -107,27 +94,40 @@ __Usage:__ `node tables --help`
 
 ```
 Commands:
-  create <dataset> <table>                  Create a new table in the specified dataset.
-  list <dataset>                            List tables in the specified dataset.
-  delete <dataset> <table>                  Delete a table in the specified dataset.
-  import <dataset> <table> <file>           Import data from a local file or a Google Cloud Storage
-                                            file into BigQuery.
-  export <dataset> <table> <bucket> <file>  Export a table from BigQuery to Google Cloud Storage.
+  create <datasetId> <tableId>                          Create a new table with the specified ID in the
+                                                        specified dataset.
+  list <datasetId>                                      List tables in the specified dataset.
+  delete <datasetId> <tableId>                          Delete the specified table from the specified dataset.
+  copy <srcDatasetId> <srcTableId> <destDatasetId>      Make a copy of an existing table.
+  <destTableId>
+  browse <datasetId> <tableId>                          List the rows from the specified table.
+  import <datasetId> <tableId> <fileName>               Import data from a local file or a Google Cloud Storage
+                                                        file into the specified table.
+  export <datasetId> <tableId> <bucketName> <fileName>  Export a table from BigQuery to Google Cloud Storage.
+  insert <datasetId> <tableId> <json_or_file>           Insert a JSON array (as a string or newline-delimited
+                                                        file) into a BigQuery table.
 
 Options:
-  --help  Show help                                                  [boolean]
+  --help  Show help  [boolean]
 
 Examples:
-  node tables create my_dataset my_table             Create table "my_table" in "my_dataset".
-  node tables list my_dataset                        List tables in "my_dataset".
-  node tables delete my_dataset my_table             Delete "my_table" from "my_dataset".
-  node tables import my_dataset my_table ./data.csv  Import a local file into a table.
-  node tables import my_dataset my_table data.csv    Import a GCS file into a table.
-    --bucket my-bucket
-  node tables export my_dataset my_table my-bucket   Export my_dataset:my_table to
-    my-file                                          gcs://my-bucket/my-file as raw CSV
-  node tables export my_dataset my_table my-bucket   Export my_dataset:my_table to
-    my-file -f JSON --gzip                           gcs://my-bucket/my-file as gzipped JSON
+  node tables create my_dataset my_table                       Create table "my_table" in "my_dataset".
+  node tables list my_dataset                                  List tables in "my_dataset".
+  node tables browse my_dataset my_table                       Display rows from "my_table" in "my_dataset".
+  node tables delete my_dataset my_table                       Delete "my_table" from "my_dataset".
+  node tables import my_dataset my_table ./data.csv            Import a local file into a table.
+  node tables import my_dataset my_table data.csv --bucket     Import a GCS file into a table.
+  my-bucket
+  node tables export my_dataset my_table my-bucket my-file     Export my_dataset:my_table to gcs://my-bucket/my-file as
+                                                               raw CSV.
+  node tables export my_dataset my_table my-bucket my-file -f  Export my_dataset:my_table to gcs://my-bucket/my-file as
+  JSON --gzip                                                  gzipped JSON.
+  node tables insert my_dataset my_table json_string           Insert the JSON array represented by json_string into
+                                                               my_dataset:my_table.
+  node tables insert my_dataset my_table json_file             Insert the JSON objects contained in json_file (one per
+                                                               line) into my_dataset:my_table.
+  node tables copy src_dataset src_table dest_dataset          Copy src_dataset:src_table to dest_dataset:dest_table.
+  dest_table
 
 For more information, see https://cloud.google.com/bigquery/docs
 ```
