
Commit 50c6b8b

Merge branch 'master' into spanner-gapic-migration
2 parents: b3b7f04 + c8f3624

412 files changed: +23491 additions, -2082 deletions


.circleci/config.yml

Lines changed: 17 additions & 0 deletions
@@ -79,6 +79,19 @@ jobs:
       - run:
           name: Run integration tests for google-cloud-bigquery
           command: ./utilities/verify_single_it.sh google-cloud-bigquery
+
+  bigtable_it:
+    working_directory: ~/googleapis
+    <<: *anchor_docker
+    <<: *anchor_auth_vars
+    steps:
+      - checkout
+      - run:
+          <<: *anchor_run_decrypt
+      - run:
+          name: Run integration tests for google-cloud-bigtable
+          command: ./utilities/verify_single_it.sh google-cloud-bigtable -Dbigtable.env=prod -Dbigtable.table=projects/gcloud-devel/instances/google-cloud-bigtable/tables/integration-tests
+
   compute_it:
     working_directory: ~/googleapis
     <<: *anchor_docker
@@ -220,6 +233,10 @@ workflows:
           filters:
             branches:
               only: master
+      - bigtable_it:
+          filters:
+            branches:
+              only: master
      - compute_it:
          filters:
            branches:

README.md

Lines changed: 12 additions & 5 deletions
@@ -13,6 +13,7 @@ Java idiomatic client for [Google Cloud Platform][cloud-platform] services.
 - [Client Library Documentation][client-lib-docs]
 
 This library supports the following Google Cloud Platform services with clients at a [GA](#versioning) quality level:
+- [BigQuery](google-cloud-bigquery) (GA)
 - [Stackdriver Logging](google-cloud-logging) (GA)
 - [Cloud Datastore](google-cloud-datastore) (GA)
 - [Cloud Natural Language](google-cloud-language) (GA)
@@ -22,7 +23,6 @@ This library supports the following Google Cloud Platform services with clients
 
 This library supports the following Google Cloud Platform services with clients at a [Beta](#versioning) quality level:
 
-- [BigQuery](google-cloud-bigquery) (Beta)
 - [Cloud Data Loss Prevention](google-cloud-dlp) (Beta)
 - [Stackdriver Error Reporting](google-cloud-errorreporting) (Beta)
 - [Cloud Firestore](google-cloud-firestore) (Beta)
@@ -31,6 +31,7 @@ This library supports the following Google Cloud Platform services with clients
 - [Cloud Spanner](google-cloud-spanner) (Beta)
 - [Cloud Video Intelligence](google-cloud-video-intelligence) (Beta)
 - [Stackdriver Trace](google-cloud-trace) (Beta)
+- [Text-to-Speech](google-cloud-texttospeech) (Beta)
 
 This library supports the following Google Cloud Platform services with clients at an [Alpha](#versioning) quality level:
 
@@ -58,22 +59,28 @@ If you are using Maven, add this to your pom.xml file
 <dependency>
   <groupId>com.google.cloud</groupId>
   <artifactId>google-cloud</artifactId>
-  <version>0.38.0-alpha</version>
+  <version>0.43.0-alpha</version>
 </dependency>
 ```
 If you are using Gradle, add this to your dependencies
 ```Groovy
-compile 'com.google.cloud:google-cloud:0.38.0-alpha'
+compile 'com.google.cloud:google-cloud:0.43.0-alpha'
 ```
 If you are using SBT, add this to your dependencies
 ```Scala
-libraryDependencies += "com.google.cloud" % "google-cloud" % "0.38.0-alpha"
+libraryDependencies += "com.google.cloud" % "google-cloud" % "0.43.0-alpha"
 ```
 [//]: # ({x-version-update-end})
 
 It also works just as well to declare a dependency only on the specific clients that you need. See the README of
 each client for instructions.
 
+If you're using IntelliJ or Eclipse, you can add client libraries to your project using these IDE plugins:
+* [Cloud Tools for IntelliJ](https://cloud.google.com/tools/intellij/docs/client-libraries)
+* [Cloud Tools for Eclipse](https://cloud.google.com/eclipse/docs/libraries)
+
+Besides adding client libraries, the plugins provide additional functionality, such as service account key management. Refer to the documentation for each plugin for more details.
+
 These client libraries can be used on App Engine standard for Java 8 runtime, App Engine flexible (including the Compat runtime). Most of the libraries do not work on the App Engine standard for Java 7 runtime, however, Datastore, Storage, and Bigquery should work.
 
 If you are running into problems with version conflicts, see [Version Management](#version-management).
@@ -285,7 +292,7 @@ The easiest way to solve version conflicts is to use google-cloud's BOM. In Mave
   <dependency>
     <groupId>com.google.cloud</groupId>
     <artifactId>google-cloud-bom</artifactId>
-    <version>0.38.0-alpha</version>
+    <version>0.43.0-alpha</version>
     <type>pom</type>
     <scope>import</scope>
   </dependency>
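
The README hunks above only bump dependency coordinates. As a quick, hypothetical sanity check that the updated artifact resolves on the classpath, here is a minimal sketch using the Storage client with application-default credentials; the class name and the bucket listing are illustrative and not part of this commit.

```java
import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class QuickstartCheck {
  public static void main(String[] args) {
    // Uses application-default credentials (GOOGLE_APPLICATION_CREDENTIALS or gcloud auth).
    Storage storage = StorageOptions.getDefaultInstance().getService();
    // List the buckets visible to the default project as a simple end-to-end check.
    for (Bucket bucket : storage.list().iterateAll()) {
      System.out.println(bucket.getName());
    }
  }
}
```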

RELEASING.md

Lines changed: 2 additions & 4 deletions
@@ -109,11 +109,9 @@ Go to the [releases page](https://github.com/GoogleCloudPlatform/google-cloud-ja
 
 Ensure that the format is consistent with previous releases (for an example, see the [0.1.0 release](https://github.com/GoogleCloudPlatform/google-cloud-java/releases/tag/v0.1.0)). After adding any missing updates and reformatting as necessary, publish the draft.
 
-11. Create a new draft for the next release. Note any commits not included in the release that have been submitted before the release commit, to ensure they are documented in the next release.
+11. Run `python utilities/bump_versions.py next_snapshot patch` to include "-SNAPSHOT" in the current project version (Alternatively, update the versions in `versions.txt` to the correct versions for the next release.). Then, run `python utilities/replace_versions.py` to update the `pom.xml` files. (If you see updates in `README.md` files at this step, you probably did something wrong.)
 
-12. Run `python utilities/bump_versions next_snapshot patch` to include "-SNAPSHOT" in the current project version (Alternatively, update the versions in `versions.txt` to the correct versions for the next release.). Then, run `python utilities/replace_versions.py` to update the `pom.xml` files. (If you see updates in `README.md` files at this step, you probably did something wrong.)
-
-13. Create and merge in another PR to reflect the updated project version. For an example of what this PR should look like, see [#227](https://github.com/GoogleCloudPlatform/google-cloud-java/pull/227).
+13. Create and merge in another PR to reflect the updated project version.
 
 Improvements
 ============

TESTING.md

Lines changed: 24 additions & 0 deletions
@@ -3,6 +3,7 @@
 This library provides tools to help write tests for code that uses the following google-cloud services:
 
 - [BigQuery](#testing-code-that-uses-bigquery)
+- [Bigtable](#testing-code-that-uses-bigtable)
 - [Compute](#testing-code-that-uses-compute)
 - [Datastore](#testing-code-that-uses-datastore)
 - [DNS](#testing-code-that-uses-dns)
@@ -41,6 +42,29 @@ Here is an example that clears the dataset created in Step 3.
 RemoteBigQueryHelper.forceDelete(bigquery, dataset);
 ```
 
+### Testing code that uses Bigtable
+
+Bigtable integration tests can either be run against an emulator or a real Bigtable table. The
+target environment can be selected via the `bigtable.env` system property. By default it is set to
+`emulator` and the other option is `prod`.
+
+To use the `emulator` environment, please install the gcloud sdk and use it to install the
+`cbtemulator` via `gcloud components install bigtable`.
+
+To use the `prod` environment:
+1. Set up the target table using `google-cloud-bigtable/scripts/setup-test-table.sh`
+2. Download the [JSON service account credentials file][create-service-account] from the Google
+   Developer's Console.
+3. Set the environment variable `GOOGLE_APPLICATION_CREDENTIALS` to the path of the credentials file
+4. Set the system property `bigtable.env=prod` and `bigtable.table` to the full table name you
+   created earlier. Example:
+```shell
+mvn verify -am -pl google-cloud-bigtable \
+  -Dbigtable.env=prod \
+  -Dbigtable.table=projects/my-project/instances/my-instance/tables/my-table
+```
+
 ### Testing code that uses Compute
 
 Currently, there isn't an emulator for Google Compute, so an alternative is to create a test
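
For orientation, here is a minimal hypothetical sketch of how a test might branch on the `bigtable.env` and `bigtable.table` system properties described in the added TESTING.md section; the class name and messages are illustrative and not the repository's actual test harness.

```java
public class BigtableEnvExample {
  public static void main(String[] args) {
    // "emulator" is the documented default; "prod" targets a real table.
    String env = System.getProperty("bigtable.env", "emulator");
    if ("prod".equals(env)) {
      // Expected form: projects/my-project/instances/my-instance/tables/my-table
      String tableName = System.getProperty("bigtable.table");
      if (tableName == null) {
        throw new IllegalStateException(
            "bigtable.env=prod also requires -Dbigtable.table=projects/.../instances/.../tables/...");
      }
      System.out.println("Running integration tests against real table: " + tableName);
    } else {
      System.out.println("Running integration tests against the local cbtemulator");
    }
  }
}
```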

google-cloud-bigquery/README.md

Lines changed: 3 additions & 6 deletions
@@ -12,9 +12,6 @@ Java idiomatic client for [Google Cloud BigQuery][cloud-bigquery].
 - [Product Documentation][bigquery-product-docs]
 - [Client Library Documentation][bigquery-client-lib-docs]
 
-> Note: This client is a work-in-progress, and may occasionally
-> make backwards-incompatible changes.
-
 Quickstart
 ----------
 [//]: # ({x-version-update-start:google-cloud-bigquery:released})
@@ -23,16 +20,16 @@ If you are using Maven, add this to your pom.xml file
 <dependency>
   <groupId>com.google.cloud</groupId>
   <artifactId>google-cloud-bigquery</artifactId>
-  <version>0.38.0-beta</version>
+  <version>1.25.0</version>
 </dependency>
 ```
 If you are using Gradle, add this to your dependencies
 ```Groovy
-compile 'com.google.cloud:google-cloud-bigquery:0.38.0-beta'
+compile 'com.google.cloud:google-cloud-bigquery:1.25.0'
 ```
 If you are using SBT, add this to your dependencies
 ```Scala
-libraryDependencies += "com.google.cloud" % "google-cloud-bigquery" % "0.38.0-beta"
+libraryDependencies += "com.google.cloud" % "google-cloud-bigquery" % "1.25.0"
 ```
 [//]: # ({x-version-update-end})
 

google-cloud-bigquery/pom.xml

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
   <modelVersion>4.0.0</modelVersion>
   <artifactId>google-cloud-bigquery</artifactId>
-  <version>0.38.1-beta-SNAPSHOT</version><!-- {x-version-update:google-cloud-bigquery:current} -->
+  <version>1.25.1-SNAPSHOT</version><!-- {x-version-update:google-cloud-bigquery:current} -->
   <packaging>jar</packaging>
   <name>Google Cloud BigQuery</name>
   <url>https://github.com/GoogleCloudPlatform/google-cloud-java/tree/master/google-cloud-bigquery</url>
@@ -12,7 +12,7 @@
   <parent>
     <groupId>com.google.cloud</groupId>
     <artifactId>google-cloud-pom</artifactId>
-    <version>0.38.1-alpha-SNAPSHOT</version><!-- {x-version-update:google-cloud-pom:current} -->
+    <version>0.43.1-alpha-SNAPSHOT</version><!-- {x-version-update:google-cloud-pom:current} -->
   </parent>
   <properties>
     <site.installationModule>google-cloud-bigquery</site.installationModule>

google-cloud-bigquery/src/main/java/com/google/cloud/bigquery/BigQuery.java

Lines changed: 45 additions & 42 deletions
@@ -522,7 +522,7 @@ public int hashCode() {
  * } catch (BigQueryException e) {
  *   // the dataset was not created
  * }
- * } </pre>
+ * }</pre>
  *
  * @throws BigQueryException upon failure
  */
@@ -538,7 +538,7 @@ public int hashCode() {
  * String fieldName = "string_field";
  * TableId tableId = TableId.of(datasetName, tableName);
  * // Table field definition
- * Field field = Field.of(fieldName, Field.Type.string());
+ * Field field = Field.of(fieldName, LegacySQLTypeName.STRING);
  * // Table schema definition
  * Schema schema = Schema.of(field);
  * TableDefinition tableDefinition = StandardTableDefinition.of(schema);
@@ -553,6 +553,32 @@ public int hashCode() {
 /**
  * Creates a new job.
  *
+ * <p>Example of loading a newline-delimited-json file with textual fields from GCS to a table.
+ * <pre> {@code
+ * String datasetName = "my_dataset_name";
+ * String tableName = "my_table_name";
+ * String sourceUri = "gs://cloud-samples-data/bigquery/us-states/us-states.json";
+ * TableId tableId = TableId.of(datasetName, tableName);
+ * // Table field definition
+ * Field[] fields = new Field[] {
+ *     Field.of("name", LegacySQLTypeName.STRING),
+ *     Field.of("post_abbr", LegacySQLTypeName.STRING)
+ * };
+ * // Table schema definition
+ * Schema schema = Schema.of(fields);
+ * LoadJobConfiguration configuration = LoadJobConfiguration.builder(tableId, sourceUri)
+ *     .setFormatOptions(FormatOptions.json())
+ *     .setCreateDisposition(CreateDisposition.CREATE_IF_NEEDED)
+ *     .setSchema(schema)
+ *     .build();
+ * // Load the table
+ * Job remoteLoadJob = bigquery.create(JobInfo.of(configuration));
+ * remoteLoadJob = remoteLoadJob.waitFor();
+ * // Check the table
+ * System.out.println("State: " + remoteLoadJob.getStatus().getState());
+ * return ((StandardTableDefinition) bigquery.getTable(tableId).getDefinition()).getNumRows();
+ * }</pre>
+ *
  * <p>Example of creating a query job.
  * <pre> {@code
  * String query = "SELECT field FROM my_dataset_name.my_table_name";
@@ -861,8 +887,7 @@ public int hashCode() {
  * Lists the table's rows.
  *
  * <p>Example of listing table rows, specifying the page size.
- *
- * <pre>{@code
+ * <pre> {@code
  * String datasetName = "my_dataset_name";
  * String tableName = "my_table_name";
  * // This example reads the result 100 rows per RPC call. If there's no need to limit the number,
@@ -882,16 +907,15 @@ public int hashCode() {
  * Lists the table's rows.
  *
  * <p>Example of listing table rows, specifying the page size.
- *
- * <pre>{@code
+ * <pre> {@code
  * String datasetName = "my_dataset_name";
  * String tableName = "my_table_name";
  * TableId tableIdObject = TableId.of(datasetName, tableName);
  * // This example reads the result 100 rows per RPC call. If there's no need to limit the number,
  * // simply omit the option.
  * TableResult tableData =
  *     bigquery.listTableData(tableIdObject, TableDataListOption.pageSize(100));
- * for (FieldValueList row : rowIterator.hasNext()) {
+ * for (FieldValueList row : tableData.iterateAll()) {
  *   // do something with the row
  * }
  * }</pre>
@@ -904,17 +928,16 @@ public int hashCode() {
  * Lists the table's rows. If the {@code schema} is not {@code null}, it is available to the
  * {@link FieldValueList} iterated over.
  *
- * <p>Example of listing table rows.
- *
- * <pre>{@code
+ * <p>Example of listing table rows with schema.
+ * <pre> {@code
  * String datasetName = "my_dataset_name";
  * String tableName = "my_table_name";
  * Schema schema = ...;
- * String field = "my_field";
+ * String field = "field";
  * TableResult tableData =
  *     bigquery.listTableData(datasetName, tableName, schema);
  * for (FieldValueList row : tableData.iterateAll()) {
- *   row.get(field)
+ *   row.get(field);
  * }
  * }</pre>
 *
@@ -927,9 +950,8 @@ TableResult listTableData(
  * Lists the table's rows. If the {@code schema} is not {@code null}, it is available to the
  * {@link FieldValueList} iterated over.
  *
- * <p>Example of listing table rows.
- *
- * <pre>{@code
+ * <p>Example of listing table rows with schema.
+ * <pre> {@code
  * Schema schema =
  *     Schema.of(
  *         Field.of("word", LegacySQLTypeName.STRING),
@@ -1047,28 +1069,21 @@ TableResult listTableData(
  * queries. Since dry-run queries are not actually executed, there's no way to retrieve results.
  *
  * <p>Example of running a query.
- *
- * <pre>{@code
- * String query = "SELECT distinct(corpus) FROM `bigquery-public-data.samples.shakespeare`";
- * QueryJobConfiguration queryConfig = QueryJobConfiguration.of(query);
- *
- * // To run the legacy syntax queries use the following code instead:
- * // String query = "SELECT unique(corpus) FROM [bigquery-public-data:samples.shakespeare]"
- * // QueryJobConfiguration queryConfig =
- * //     QueryJobConfiguration.newBuilder(query).setUseLegacySql(true).build();
- *
+ * <pre> {@code
+ * String query = "SELECT unique(corpus) FROM [bigquery-public-data:samples.shakespeare]";
+ * QueryJobConfiguration queryConfig =
+ *     QueryJobConfiguration.newBuilder(query).setUseLegacySql(true).build();
  * for (FieldValueList row : bigquery.query(queryConfig).iterateAll()) {
  *   // do something with the data
  * }
  * }</pre>
  *
  * <p>Example of running a query with query parameters.
- *
- * <pre>{@code
- * String query =
- *     "SELECT distinct(corpus) FROM `bigquery-public-data.samples.shakespeare` where word_count > ?";
+ * <pre> {@code
+ * String query = "SELECT distinct(corpus) FROM `bigquery-public-data.samples.shakespeare` where word_count > @wordCount";
+ * // Note, standard SQL is required to use query parameters. Legacy SQL will not work.
  * QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(query)
- *     .addPositionalParameter(QueryParameterValue.int64(5))
+ *     .addNamedParameter("wordCount", QueryParameterValue.int64(5))
  *     .build();
  * for (FieldValueList row : bigquery.query(queryConfig).iterateAll()) {
  *   // do something with the data
@@ -1092,18 +1107,6 @@ TableResult query(QueryJobConfiguration configuration, JobOption... options)
  * <p>See {@link #query(QueryJobConfiguration, JobOption...)} for examples on populating a {@link
  * QueryJobConfiguration}.
  *
- * <p>The recommended way to create a randomly generated JobId is the following:
- *
- * <pre>{@code
- * JobId jobId = JobId.of();
- * }</pre>
- *
- * For a user specified job id with an optional prefix use the following:
- *
- * <pre>{@code
- * JobId jobId = JobId.of("my_prefix-my_unique_job_id");
- * }</pre>
- *
  * @throws BigQueryException upon failure
  * @throws InterruptedException if the current thread gets interrupted while waiting for the query
  *     to complete
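
Outside javadoc form, the named-parameter query example added above corresponds roughly to this self-contained sketch; creating the client via BigQueryOptions and printing the first result column are assumptions added for completeness, not part of the diff.

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.QueryParameterValue;

public class NamedParameterQueryExample {
  public static void main(String[] args) throws InterruptedException {
    // Client built from application-default credentials.
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
    // Standard SQL is required for query parameters; legacy SQL will not work.
    String query =
        "SELECT distinct(corpus) FROM `bigquery-public-data.samples.shakespeare` "
            + "WHERE word_count > @wordCount";
    QueryJobConfiguration queryConfig =
        QueryJobConfiguration.newBuilder(query)
            .addNamedParameter("wordCount", QueryParameterValue.int64(5))
            .build();
    // Runs the query job and iterates over all result rows.
    for (FieldValueList row : bigquery.query(queryConfig).iterateAll()) {
      System.out.println(row.get(0).getStringValue());
    }
  }
}
```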
