Conversation

@dhermes
Contributor

@dhermes dhermes commented Dec 4, 2015

A `CreateCluster` request response doesn't actually indicate
success or failure. Rather, it returns a `Cluster` message with
the validated request parts inside and a `current_operation`
attached.

We implement `_process_operation` so that we can determine the
ID of that long-running operation (so it can be checked for
completion / success, if desired by the user). In addition,
we want to notify the user of when the request began.

From the [service definition](https://github.com/GoogleCloudPlatform/cloud-bigtable-client/blob/8e363d72eb39d921dfdf5daf4a36032aa9d003e2/bigtable-protos/src/main/proto/google/bigtable/admin/cluster/v1/bigtable_cluster_service.proto#L64) we know that the `current_operation`
is a [long-running operation](https://github.com/GoogleCloudPlatform/cloud-bigtable-client/blob/8e363d72eb39d921dfdf5daf4a36032aa9d003e2/bigtable-protos/src/main/proto/google/bigtable/admin/cluster/v1/bigtable_cluster_data.proto#L74) and that:

> The embedded operation's "metadata" field type is `CreateClusterMetadata`,
> The embedded operation's "response" field type is `Cluster`, if successful.

The [`Operation` metadata](https://github.com/GoogleCloudPlatform/cloud-bigtable-client/blob/8e363d72eb39d921dfdf5daf4a36032aa9d003e2/bigtable-protos/src/main/proto/google/longrunning/operations.proto#L82) is of type [`Any`](https://github.com/GoogleCloudPlatform/cloud-bigtable-client/blob/8e363d72eb39d921dfdf5daf4a36032aa9d003e2/bigtable-protos/src/main/proto/google/protobuf/any.proto#L58) (which uses a `type_url`
and raw bytes to carry **any** protobuf message type in a single
field, while still allowing it to be parsed into its true type after
the fact). So we expect `CreateCluster` responses to have long-running
operations with a type URL matching [`CreateClusterMetadata`](https://github.com/GoogleCloudPlatform/cloud-bigtable-client/blob/8e363d72eb39d921dfdf5daf4a36032aa9d003e2/bigtable-protos/src/main/proto/google/bigtable/admin/cluster/v1/bigtable_cluster_service_messages.proto#L83-L92).
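
The `Any` round trip can be demonstrated with a well-known type. This sketch assumes the `protobuf` package is installed and uses `Timestamp` in place of `CreateClusterMetadata`, since the latter needs the Bigtable-generated modules:

```python
from google.protobuf import any_pb2, timestamp_pb2

# Pack a Timestamp into an Any, inspect the type URL, then recover
# the original message from the raw bytes.
ts = timestamp_pb2.Timestamp(seconds=1449270000)
any_val = any_pb2.Any()
any_val.Pack(ts)  # stores the type URL plus the serialized message bytes

print(any_val.type_url)  # ends with 'google.protobuf.Timestamp'
recovered = timestamp_pb2.Timestamp.FromString(any_val.value)
print(recovered.seconds)
```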

As a result, we introduce a utility (`_parse_pb_any_to_native`) for
parsing an `Any` field into the native protobuf type specified by the
type URL. Since we know we need to handle `CreateClusterMetadata` values,
we add a default mapping (`_TYPE_URL_MAP`) from the corresponding type URL
for that message type to the native Python type.
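
A minimal sketch of such a utility, again substituting `Timestamp` for `CreateClusterMetadata`; the mapping contents and the error handling in the PR itself may differ:

```python
from google.protobuf import any_pb2, timestamp_pb2

# Hypothetical mapping for illustration; the PR's default map covers
# the CreateClusterMetadata type URL instead.
_TYPE_URL_MAP = {
    'type.googleapis.com/google.protobuf.Timestamp': timestamp_pb2.Timestamp,
}

def _parse_pb_any_to_native(any_val, expected_type=None):
    """Parse an ``Any`` message into the native type named by its type URL."""
    if expected_type is not None and expected_type != any_val.type_url:
        raise ValueError('Expected %r, got %r.' % (
            expected_type, any_val.type_url))
    container_class = _TYPE_URL_MAP[any_val.type_url]
    return container_class.FromString(any_val.value)

any_val = any_pb2.Any()
any_val.Pack(timestamp_pb2.Timestamp(seconds=1449270000))
parsed = _parse_pb_any_to_native(any_val)
print(type(parsed).__name__, parsed.seconds)
```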

The `CreateClusterMetadata` type has `request_time` and
`finish_time` fields of type [`Timestamp`](https://github.com/GoogleCloudPlatform/cloud-bigtable-client/blob/8e363d72eb39d921dfdf5daf4a36032aa9d003e2/bigtable-protos/src/main/proto/google/protobuf/timestamp.proto#L78), so we also
add the `_pb_timestamp_to_datetime` helper for converting protobuf
messages into native Python `datetime.datetime` objects.

@dhermes dhermes added the api: bigtable Issues related to the Bigtable API. label Dec 4, 2015
@googlebot googlebot added the cla: yes This human has signed the Contributor License Agreement. label Dec 4, 2015
@dhermes
Contributor Author

dhermes commented Dec 4, 2015

@tseaver Sorry this is kind of a doozy. I wrote a long description to try to give you some context and quick windows into the actual data returned from the API.

This PR is doing something very simple: parse the information from the `CreateCluster` response. But it takes a bit of machinery to do it.

@dhermes dhermes force-pushed the bigtable-parse-operation branch from 9bcb777 to 388bf37 Compare December 4, 2015 23:32
@tseaver
Contributor

tseaver commented Dec 5, 2015

LGTM

dhermes added a commit that referenced this pull request Dec 5, 2015
Adding helpers to parse Bigtable create cluster operation.
@dhermes dhermes merged commit cc7a2ed into googleapis:master Dec 5, 2015
@dhermes dhermes deleted the bigtable-parse-operation branch December 5, 2015 02:43
parthea pushed a commit that referenced this pull request Nov 24, 2025
Co-authored-by: release-please[bot] <55107282+release-please[bot]@users.noreply.github.com>