diff --git a/google-api-grpc/proto-google-cloud-dialogflow-v2/src/main/java/com/google/cloud/dialogflow/v2/StreamingDetectIntentRequest.java b/google-api-grpc/proto-google-cloud-dialogflow-v2/src/main/java/com/google/cloud/dialogflow/v2/StreamingDetectIntentRequest.java
index 8520e9bd3e77..f9c25cebf5e7 100644
--- a/google-api-grpc/proto-google-cloud-dialogflow-v2/src/main/java/com/google/cloud/dialogflow/v2/StreamingDetectIntentRequest.java
+++ b/google-api-grpc/proto-google-cloud-dialogflow-v2/src/main/java/com/google/cloud/dialogflow/v2/StreamingDetectIntentRequest.java
@@ -8,13 +8,27 @@
*
* <pre>
* The top-level message sent by the client to the
- * `StreamingDetectIntent` method.
+ * [StreamingDetectIntent][] method.
* Multiple request messages should be sent in order:
- * 1. The first message must contain `session`, `query_input` plus optionally
- *    `query_params`. The message must not contain `input_audio`.
- * 2. If `query_input` was set to a streaming input audio config,
- *    all subsequent messages must contain only `input_audio`.
- *    Otherwise, finish the request stream.
+ * 1. The first message must contain [StreamingDetectIntentRequest.session][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.session],
+ * [StreamingDetectIntentRequest.query_input] plus optionally
+ * [StreamingDetectIntentRequest.query_params]. If the client wants to
+ * receive an audio response, it should also contain
+ * [StreamingDetectIntentRequest.output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]. The message
+ * must not contain [StreamingDetectIntentRequest.input_audio][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.input_audio].
+ * 2. If [StreamingDetectIntentRequest.query_input][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_input] was set to
+ * [StreamingDetectIntentRequest.query_input.audio_config][], all subsequent
+ * messages must contain [StreamingDetectIntentRequest.input_audio] to
+ * continue with Speech recognition.
+ * If you decide to rather detect an intent from text input after you
+ * already started Speech recognition, please send a message with
+ * [StreamingDetectIntentRequest.query_input.text][].
+ * However, note that:
+ * * Dialogflow will bill you for the audio duration so far.
+ * * Dialogflow discards all Speech recognition results in favor of the
+ * input text.
+ * * Dialogflow will use the language code from the first message.
+ * After you sent all input, you must half-close or abort the request stream.
* </pre>
*
* Protobuf type {@code google.cloud.dialogflow.v2.StreamingDetectIntentRequest}
@@ -616,13 +630,27 @@ protected Builder newBuilderForType(com.google.protobuf.GeneratedMessageV3.Build
*
* <pre>
* The top-level message sent by the client to the
- * `StreamingDetectIntent` method.
+ * [StreamingDetectIntent][] method.
* Multiple request messages should be sent in order:
- * 1. The first message must contain `session`, `query_input` plus optionally
- * `query_params`. The message must not contain `input_audio`.
- * 2. If `query_input` was set to a streaming input audio config,
- * all subsequent messages must contain only `input_audio`.
- * Otherwise, finish the request stream.
+ * 1. The first message must contain [StreamingDetectIntentRequest.session][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.session],
+ * [StreamingDetectIntentRequest.query_input] plus optionally
+ * [StreamingDetectIntentRequest.query_params]. If the client wants to
+ * receive an audio response, it should also contain
+ * [StreamingDetectIntentRequest.output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]. The message
+ * must not contain [StreamingDetectIntentRequest.input_audio][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.input_audio].
+ * 2. If [StreamingDetectIntentRequest.query_input][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_input] was set to
+ * [StreamingDetectIntentRequest.query_input.audio_config][], all subsequent
+ * messages must contain [StreamingDetectIntentRequest.input_audio] to
+ * continue with Speech recognition.
+ * If you decide to rather detect an intent from text input after you
+ * already started Speech recognition, please send a message with
+ * [StreamingDetectIntentRequest.query_input.text][].
+ * However, note that:
+ * * Dialogflow will bill you for the audio duration so far.
+ * * Dialogflow discards all Speech recognition results in favor of the
+ * input text.
+ * * Dialogflow will use the language code from the first message.
+ * After you sent all input, you must half-close or abort the request stream.
* </pre>
*
* Protobuf type {@code google.cloud.dialogflow.v2.StreamingDetectIntentRequest}
diff --git a/google-api-grpc/proto-google-cloud-dialogflow-v2/src/main/proto/google/cloud/dialogflow/v2/session.proto b/google-api-grpc/proto-google-cloud-dialogflow-v2/src/main/proto/google/cloud/dialogflow/v2/session.proto
index b87f313d4de1..0cb45fa9e63d 100644
--- a/google-api-grpc/proto-google-cloud-dialogflow-v2/src/main/proto/google/cloud/dialogflow/v2/session.proto
+++ b/google-api-grpc/proto-google-cloud-dialogflow-v2/src/main/proto/google/cloud/dialogflow/v2/session.proto
@@ -268,16 +268,32 @@ message QueryResult {
}
// The top-level message sent by the client to the
-// `StreamingDetectIntent` method.
+// [StreamingDetectIntent][] method.
//
// Multiple request messages should be sent in order:
//
-// 1. The first message must contain `session`, `query_input` plus optionally
-// `query_params`. The message must not contain `input_audio`.
+// 1. The first message must contain [StreamingDetectIntentRequest.session][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.session],
+// [StreamingDetectIntentRequest.query_input] plus optionally
+// [StreamingDetectIntentRequest.query_params]. If the client wants to
+// receive an audio response, it should also contain
+// [StreamingDetectIntentRequest.output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]. The message
+// must not contain [StreamingDetectIntentRequest.input_audio][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.input_audio].
+// 2. If [StreamingDetectIntentRequest.query_input][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_input] was set to
+// [StreamingDetectIntentRequest.query_input.audio_config][], all subsequent
+// messages must contain [StreamingDetectIntentRequest.input_audio] to
+// continue with Speech recognition.
+// If you decide to rather detect an intent from text input after you
+// already started Speech recognition, please send a message with
+// [StreamingDetectIntentRequest.query_input.text][].
+//
+// However, note that:
+//
+// * Dialogflow will bill you for the audio duration so far.
+// * Dialogflow discards all Speech recognition results in favor of the
+// input text.
+// * Dialogflow will use the language code from the first message.
//
-// 2. If `query_input` was set to a streaming input audio config,
-// all subsequent messages must contain only `input_audio`.
-// Otherwise, finish the request stream.
+// After you sent all input, you must half-close or abort the request stream.
message StreamingDetectIntentRequest {
// Required. The name of the session the query is sent to.
// Format of the session name:
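The request ordering that the new comment spells out (first a configuration message carrying `session` and `query_input` without `input_audio`, then audio-only messages, then half-close) can be sketched as a small self-contained check. This is an illustrative model only: `Msg` is a hypothetical stand-in that records which fields are set, not the real `StreamingDetectIntentRequest` or any Dialogflow client API.

```java
// Minimal model of the documented ordering rules for the request stream.
// "Msg" is a hypothetical stand-in for StreamingDetectIntentRequest that
// only tracks which of the relevant fields are populated.
import java.util.List;

public class StreamingOrderCheck {

    /** Stand-in for StreamingDetectIntentRequest. */
    static final class Msg {
        final boolean hasSession;
        final boolean hasQueryInput;
        final boolean hasInputAudio;

        Msg(boolean session, boolean queryInput, boolean inputAudio) {
            this.hasSession = session;
            this.hasQueryInput = queryInput;
            this.hasInputAudio = inputAudio;
        }
    }

    /**
     * Returns true iff the stream follows the documented order:
     * 1. the first message carries session + query_input and no input_audio;
     * 2. every subsequent message carries only input_audio.
     */
    static boolean isValidStream(List<Msg> stream) {
        if (stream.isEmpty()) {
            return false;
        }
        Msg first = stream.get(0);
        if (!first.hasSession || !first.hasQueryInput || first.hasInputAudio) {
            return false;
        }
        for (Msg m : stream.subList(1, stream.size())) {
            if (m.hasSession || m.hasQueryInput || !m.hasInputAudio) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Msg config = new Msg(true, true, false); // session + query_input
        Msg audio = new Msg(false, false, true); // input_audio chunk
        System.out.println(isValidStream(List.of(config, audio, audio))); // true
        System.out.println(isValidStream(List.of(audio, config)));        // false
    }
}
```

In the real API the "half-close" step is performed by the gRPC client (for example by completing the request observer), which the model above does not attempt to represent.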
diff --git a/google-api-grpc/proto-google-cloud-dialogflow-v2beta1/src/main/java/com/google/cloud/dialogflow/v2beta1/StreamingDetectIntentRequest.java b/google-api-grpc/proto-google-cloud-dialogflow-v2beta1/src/main/java/com/google/cloud/dialogflow/v2beta1/StreamingDetectIntentRequest.java
index 129baeaacbfa..d0fdf6c9f465 100644
--- a/google-api-grpc/proto-google-cloud-dialogflow-v2beta1/src/main/java/com/google/cloud/dialogflow/v2beta1/StreamingDetectIntentRequest.java
+++ b/google-api-grpc/proto-google-cloud-dialogflow-v2beta1/src/main/java/com/google/cloud/dialogflow/v2beta1/StreamingDetectIntentRequest.java
@@ -8,14 +8,27 @@
*
* <pre>
* The top-level message sent by the client to the
- * `StreamingDetectIntent` method.
+ * [StreamingDetectIntent][] method.
* Multiple request messages should be sent in order:
- * 1. The first message must contain `session`, `query_input` plus optionally
- *    `query_params`. If the client wants to receive an audio response, it
- *    should also contain `output_audio_config`. The message must not contain
- *    `input_audio`.
- * 2. If `query_input` was set to a streaming input audio config,
- *    all subsequent messages must contain `input_audio`. Otherwise, finish the request stream.
+ * 1. The first message must contain [StreamingDetectIntentRequest.session][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.session],
+ * [StreamingDetectIntentRequest.query_input] plus optionally
+ * [StreamingDetectIntentRequest.query_params]. If the client wants to
+ * receive an audio response, it should also contain
+ * [StreamingDetectIntentRequest.output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config]. The message
+ * must not contain [StreamingDetectIntentRequest.input_audio][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.input_audio].
+ * 2. If [StreamingDetectIntentRequest.query_input][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.query_input] was set to
+ * [StreamingDetectIntentRequest.query_input.audio_config][], all subsequent
+ * messages must contain [StreamingDetectIntentRequest.input_audio] to
+ * continue with Speech recognition.
+ * If you decide to rather detect an intent from text input after you
+ * already started Speech recognition, please send a message with
+ * [StreamingDetectIntentRequest.query_input.text][].
+ * However, note that:
+ * * Dialogflow will bill you for the audio duration so far.
+ * * Dialogflow discards all Speech recognition results in favor of the
+ * input text.
+ * * Dialogflow will use the language code from the first message.
+ * After you sent all input, you must half-close or abort the request stream.
* </pre>
*
* Protobuf type {@code google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest}
@@ -628,14 +641,27 @@ protected Builder newBuilderForType(com.google.protobuf.GeneratedMessageV3.Build
*
* <pre>
* The top-level message sent by the client to the
- * `StreamingDetectIntent` method.
+ * [StreamingDetectIntent][] method.
* Multiple request messages should be sent in order:
- * 1. The first message must contain `session`, `query_input` plus optionally
- * `query_params`. If the client wants to receive an audio response, it
- * should also contain `output_audio_config`. The message must not contain
- * `input_audio`.
- * 2. If `query_input` was set to a streaming input audio config,
- * all subsequent messages must contain `input_audio`. Otherwise, finish the request stream.
+ * 1. The first message must contain [StreamingDetectIntentRequest.session][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.session],
+ * [StreamingDetectIntentRequest.query_input] plus optionally
+ * [StreamingDetectIntentRequest.query_params]. If the client wants to
+ * receive an audio response, it should also contain
+ * [StreamingDetectIntentRequest.output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config]. The message
+ * must not contain [StreamingDetectIntentRequest.input_audio][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.input_audio].
+ * 2. If [StreamingDetectIntentRequest.query_input][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.query_input] was set to
+ * [StreamingDetectIntentRequest.query_input.audio_config][], all subsequent
+ * messages must contain [StreamingDetectIntentRequest.input_audio] to
+ * continue with Speech recognition.
+ * If you decide to rather detect an intent from text input after you
+ * already started Speech recognition, please send a message with
+ * [StreamingDetectIntentRequest.query_input.text][].
+ * However, note that:
+ * * Dialogflow will bill you for the audio duration so far.
+ * * Dialogflow discards all Speech recognition results in favor of the
+ * input text.
+ * * Dialogflow will use the language code from the first message.
+ * After you sent all input, you must half-close or abort the request stream.
* </pre>
*
* Protobuf type {@code google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest}
diff --git a/google-api-grpc/proto-google-cloud-dialogflow-v2beta1/src/main/proto/google/cloud/dialogflow/v2beta1/session.proto b/google-api-grpc/proto-google-cloud-dialogflow-v2beta1/src/main/proto/google/cloud/dialogflow/v2beta1/session.proto
index 412eb25350a1..dbe2631db1ae 100644
--- a/google-api-grpc/proto-google-cloud-dialogflow-v2beta1/src/main/proto/google/cloud/dialogflow/v2beta1/session.proto
+++ b/google-api-grpc/proto-google-cloud-dialogflow-v2beta1/src/main/proto/google/cloud/dialogflow/v2beta1/session.proto
@@ -363,16 +363,32 @@ message KnowledgeAnswers {
}
// The top-level message sent by the client to the
-// `StreamingDetectIntent` method.
+// [StreamingDetectIntent][] method.
//
// Multiple request messages should be sent in order:
//
-// 1. The first message must contain `session`, `query_input` plus optionally
-// `query_params`. If the client wants to receive an audio response, it
-// should also contain `output_audio_config`. The message must not contain
-// `input_audio`.
-// 2. If `query_input` was set to a streaming input audio config,
-// all subsequent messages must contain `input_audio`. Otherwise, finish the request stream.
+// 1. The first message must contain [StreamingDetectIntentRequest.session][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.session],
+// [StreamingDetectIntentRequest.query_input] plus optionally
+// [StreamingDetectIntentRequest.query_params]. If the client wants to
+// receive an audio response, it should also contain
+// [StreamingDetectIntentRequest.output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config]. The message
+// must not contain [StreamingDetectIntentRequest.input_audio][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.input_audio].
+// 2. If [StreamingDetectIntentRequest.query_input][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.query_input] was set to
+// [StreamingDetectIntentRequest.query_input.audio_config][], all subsequent
+// messages must contain [StreamingDetectIntentRequest.input_audio] to
+// continue with Speech recognition.
+// If you decide to rather detect an intent from text input after you
+// already started Speech recognition, please send a message with
+// [StreamingDetectIntentRequest.query_input.text][].
+//
+// However, note that:
+//
+// * Dialogflow will bill you for the audio duration so far.
+// * Dialogflow discards all Speech recognition results in favor of the
+// input text.
+// * Dialogflow will use the language code from the first message.
+//
+// After you sent all input, you must half-close or abort the request stream.
message StreamingDetectIntentRequest {
// Required. The name of the session the query is sent to.
// Format of the session name:
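The mid-stream switch from audio to text that the updated comments describe has three documented consequences: the audio sent so far is still billed, any Speech recognition results are discarded in favor of the text, and the language code from the first message stays in effect. A small sketch of those accounting rules, under the assumption that follow-up messages are reduced to plain strings (this models only the documented semantics, not the Dialogflow API):

```java
// Illustrative model of switching from audio input to text input mid-stream,
// per the rules documented above. Follow-up messages are reduced to strings:
// "AUDIO" for an input_audio chunk, "TEXT:<query>" for a query_input.text
// switch. These names are hypothetical, not part of the real API.
import java.util.List;

public class AudioToTextSwitch {

    /** What Dialogflow would act on after draining the stream. */
    static final class Outcome {
        final String languageCode;   // always taken from the first message
        final int billedAudioChunks; // audio sent before the switch is billed
        final String effectiveInput; // text wins over any speech transcript

        Outcome(String lang, int billed, String input) {
            this.languageCode = lang;
            this.billedAudioChunks = billed;
            this.effectiveInput = input;
        }
    }

    static Outcome drain(String firstMessageLanguage, List<String> followUps) {
        int audioChunks = 0;
        String text = null;
        for (String m : followUps) {
            if (m.startsWith("TEXT:")) {
                text = m.substring("TEXT:".length());
                break; // a text message ends Speech recognition
            }
            audioChunks++; // audio duration so far is billed regardless
        }
        String effective = (text != null) ? text : "<speech transcript>";
        return new Outcome(firstMessageLanguage, audioChunks, effective);
    }

    public static void main(String[] args) {
        Outcome o = drain("en-US", List.of("AUDIO", "AUDIO", "TEXT:book a table"));
        // Two audio chunks are billed, the text replaces the transcript, and
        // the first message's language code is kept.
        System.out.println(o.languageCode + " " + o.billedAudioChunks + " " + o.effectiveInput);
    }
}
```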
diff --git a/google-cloud-clients/google-cloud-dialogflow/synth.metadata b/google-cloud-clients/google-cloud-dialogflow/synth.metadata
index 79546a9ea6da..e3dcd8b6ab0f 100644
--- a/google-cloud-clients/google-cloud-dialogflow/synth.metadata
+++ b/google-cloud-clients/google-cloud-dialogflow/synth.metadata
@@ -1,19 +1,19 @@
{
- "updateTime": "2019-08-16T07:45:34.882148Z",
+ "updateTime": "2019-08-22T07:44:04.854710Z",
"sources": [
{
"generator": {
"name": "artman",
- "version": "0.33.0",
- "dockerImage": "googleapis/artman@sha256:c6231efb525569736226b1f7af7565dbc84248efafb3692a5bb1d2d8a7975d53"
+ "version": "0.34.0",
+ "dockerImage": "googleapis/artman@sha256:38a27ba6245f96c3e86df7acb2ebcc33b4f186d9e475efe2d64303aec3d4e0ea"
}
},
{
"git": {
"name": "googleapis",
"remote": "https://github.com/googleapis/googleapis.git",
- "sha": "2a02e33c79cbf23d316c57e1c78f915e1d905eee",
- "internalRef": "263682410"
+ "sha": "92bebf78345af8b2d3585220527115bda8bdedf8",
+ "internalRef": "264715111"
}
}
],