@@ -288,10 +288,10 @@ def create(
 
           truncation: The truncation strategy to use for the model response.
 
-            - `auto`: If the context of this response and previous ones exceeds the model's
-              context window size, the model will truncate the response to fit the context
-              window by dropping input items in the middle of the conversation.
-            - `disabled` (default): If a model response will exceed the context window size
+            - `auto`: If the input to this Response exceeds the model's context window size,
+              the model will truncate the response to fit the context window by dropping
+              items from the beginning of the conversation.
+            - `disabled` (default): If the input size will exceed the context window size
               for a model, the request will fail with a 400 error.
 
           user: This field is being replaced by `safety_identifier` and `prompt_cache_key`. Use
@@ -527,10 +527,10 @@ def create(
 
           truncation: The truncation strategy to use for the model response.
 
-            - `auto`: If the context of this response and previous ones exceeds the model's
-              context window size, the model will truncate the response to fit the context
-              window by dropping input items in the middle of the conversation.
-            - `disabled` (default): If a model response will exceed the context window size
+            - `auto`: If the input to this Response exceeds the model's context window size,
+              the model will truncate the response to fit the context window by dropping
+              items from the beginning of the conversation.
+            - `disabled` (default): If the input size will exceed the context window size
               for a model, the request will fail with a 400 error.
 
           user: This field is being replaced by `safety_identifier` and `prompt_cache_key`. Use
@@ -766,10 +766,10 @@ def create(
 
           truncation: The truncation strategy to use for the model response.
 
-            - `auto`: If the context of this response and previous ones exceeds the model's
-              context window size, the model will truncate the response to fit the context
-              window by dropping input items in the middle of the conversation.
-            - `disabled` (default): If a model response will exceed the context window size
+            - `auto`: If the input to this Response exceeds the model's context window size,
+              the model will truncate the response to fit the context window by dropping
+              items from the beginning of the conversation.
+            - `disabled` (default): If the input size will exceed the context window size
               for a model, the request will fail with a 400 error.
 
           user: This field is being replaced by `safety_identifier` and `prompt_cache_key`. Use
@@ -1719,10 +1719,10 @@ async def create(
 
           truncation: The truncation strategy to use for the model response.
 
-            - `auto`: If the context of this response and previous ones exceeds the model's
-              context window size, the model will truncate the response to fit the context
-              window by dropping input items in the middle of the conversation.
-            - `disabled` (default): If a model response will exceed the context window size
+            - `auto`: If the input to this Response exceeds the model's context window size,
+              the model will truncate the response to fit the context window by dropping
+              items from the beginning of the conversation.
+            - `disabled` (default): If the input size will exceed the context window size
               for a model, the request will fail with a 400 error.
 
           user: This field is being replaced by `safety_identifier` and `prompt_cache_key`. Use
@@ -1958,10 +1958,10 @@ async def create(
 
           truncation: The truncation strategy to use for the model response.
 
-            - `auto`: If the context of this response and previous ones exceeds the model's
-              context window size, the model will truncate the response to fit the context
-              window by dropping input items in the middle of the conversation.
-            - `disabled` (default): If a model response will exceed the context window size
+            - `auto`: If the input to this Response exceeds the model's context window size,
+              the model will truncate the response to fit the context window by dropping
+              items from the beginning of the conversation.
+            - `disabled` (default): If the input size will exceed the context window size
               for a model, the request will fail with a 400 error.
 
           user: This field is being replaced by `safety_identifier` and `prompt_cache_key`. Use
@@ -2197,10 +2197,10 @@ async def create(
 
           truncation: The truncation strategy to use for the model response.
 
-            - `auto`: If the context of this response and previous ones exceeds the model's
-              context window size, the model will truncate the response to fit the context
-              window by dropping input items in the middle of the conversation.
-            - `disabled` (default): If a model response will exceed the context window size
+            - `auto`: If the input to this Response exceeds the model's context window size,
+              the model will truncate the response to fit the context window by dropping
+              items from the beginning of the conversation.
+            - `disabled` (default): If the input size will exceed the context window size
               for a model, the request will fail with a 400 error.
 
           user: This field is being replaced by `safety_identifier` and `prompt_cache_key`. Use
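The hunks above only rewrite the docstring for the `truncation` parameter; the accepted values (`"auto"` and `"disabled"`, the default) are unchanged. As a minimal sketch of the documented behavior, a local helper can validate and attach the strategy to request parameters. The `build_request` helper and the model name are hypothetical illustrations, not part of the SDK:

```python
# Sketch (hypothetical helper, not SDK code): validate a truncation strategy
# before building Responses API request parameters.
#
#   "auto"     - if the input exceeds the model's context window, items are
#                dropped from the beginning of the conversation to fit.
#   "disabled" - (default) an oversized input makes the request fail with a
#                400 error instead of being truncated.

def build_request(input_text: str, truncation: str = "disabled") -> dict:
    if truncation not in ("auto", "disabled"):
        raise ValueError("truncation must be 'auto' or 'disabled'")
    return {
        "model": "gpt-4.1",  # placeholder model name
        "input": input_text,
        "truncation": truncation,
    }
```

With the real client, the same parameter would be passed straight through, e.g. `client.responses.create(**build_request("...", truncation="auto"))`.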