Commit 9d4c603

release: 0.5.0-alpha.2 (#319)
Automated Release PR

---

## 0.5.0-alpha.2 (2026-02-05)

Full Changelog: [v0.5.0-alpha.1...v0.5.0-alpha.2](v0.5.0-alpha.1...v0.5.0-alpha.2)

### Features

* Adds support for the `safety_identifier` parameter ([f20696b](f20696b))

---

This pull request is managed by Stainless's [GitHub App](https://github.com/apps/stainless-app). The [semver version number](https://semver.org/#semantic-versioning-specification-semver) is based on the included [commit messages](https://www.conventionalcommits.org/en/v1.0.0/). Alternatively, you can manually set the version number in the title of this pull request. For a better experience, it is recommended to use either rebase-merge or squash-merge when merging this pull request.

🔗 Stainless [website](https://www.stainlessapi.com)
📚 Read the [docs](https://app.stainlessapi.com/docs)
🙋 [Reach out](mailto:[email protected]) for help or questions

Co-authored-by: stainless-app[bot] <142633134+stainless-app[bot]@users.noreply.github.com>
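The headline change is the new `safety_identifier` parameter. As a hedged sketch (the helper name, model name, and request shape below are illustrative, not taken from this commit), the parameter typically carries a stable, non-reversible identifier for the end user, for example a hash of an internal user ID rather than the ID itself:

```python
import hashlib


def safety_identifier_for(internal_user_id: str) -> str:
    """Derive a stable, non-reversible value to pass as `safety_identifier`.

    Hashing avoids sending raw user IDs while keeping the identifier
    consistent across requests from the same user.
    """
    return hashlib.sha256(internal_user_id.encode("utf-8")).hexdigest()


# Hypothetical request parameters; the surrounding API call is not shown
# in this commit, only the parameter itself.
params = {
    "model": "example-model",  # illustrative, not a real model name
    "messages": [{"role": "user", "content": "Hello"}],
    "safety_identifier": safety_identifier_for("user-1234"),
}
```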
1 parent: 75e3efa · commit: 9d4c603

122 files changed

Lines changed: 3620 additions & 1250 deletions


.release-please-manifest.json

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,3 +1,3 @@
 {
-  ".": "0.5.0-alpha.1"
+  ".": "0.5.0-alpha.2"
 }
```

.stats.yml

Lines changed: 3 additions & 3 deletions
```diff
@@ -1,4 +1,4 @@
 configured_endpoints: 108
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-faa8aea30f68f4757456ffabbaa687cace33f1dc3b3eba9cb074ca4500a6fa43.yml
-openapi_spec_hash: 8cea736f660e8842c3a2580469d331aa
-config_hash: aa28e451064c13a38ddc44df99ebf52a
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-958e990011d6b4c27513743a151ec4c80c3103650a80027380d15f1d6b108e32.yml
+openapi_spec_hash: 5b49d825dbc2a26726ca752914a65114
+config_hash: 19b84a0a93d566334ae134dafc71991f
```

CHANGELOG.md

Lines changed: 8 additions & 0 deletions
```diff
@@ -1,5 +1,13 @@
 # Changelog
 
+## 0.5.0-alpha.2 (2026-02-05)
+
+Full Changelog: [v0.5.0-alpha.1...v0.5.0-alpha.2](https://github.com/llamastack/llama-stack-client-python/compare/v0.5.0-alpha.1...v0.5.0-alpha.2)
+
+### Features
+
+* Adds support for the `safety_identifier` parameter ([f20696b](https://github.com/llamastack/llama-stack-client-python/commit/f20696b6c1855c40e191980812ba3fd70b1f3577))
+
 ## 0.5.0-alpha.1 (2026-02-04)
 
 Full Changelog: [v0.4.0-alpha.15...v0.5.0-alpha.1](https://github.com/llamastack/llama-stack-client-python/compare/v0.4.0-alpha.15...v0.5.0-alpha.1)
```

api.md

Lines changed: 18 additions & 18 deletions
````diff
@@ -103,10 +103,10 @@ Methods:
 
 - <code title="post /v1/prompts">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">create</a>(\*\*<a href="src/llama_stack_client/types/prompt_create_params.py">params</a>) -> <a href="./src/llama_stack_client/types/prompt.py">Prompt</a></code>
 - <code title="get /v1/prompts/{prompt_id}">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">retrieve</a>(prompt_id, \*\*<a href="src/llama_stack_client/types/prompt_retrieve_params.py">params</a>) -> <a href="./src/llama_stack_client/types/prompt.py">Prompt</a></code>
-- <code title="post /v1/prompts/{prompt_id}">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">update</a>(prompt_id, \*\*<a href="src/llama_stack_client/types/prompt_update_params.py">params</a>) -> <a href="./src/llama_stack_client/types/prompt.py">Prompt</a></code>
+- <code title="put /v1/prompts/{prompt_id}">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">update</a>(prompt_id, \*\*<a href="src/llama_stack_client/types/prompt_update_params.py">params</a>) -> <a href="./src/llama_stack_client/types/prompt.py">Prompt</a></code>
 - <code title="get /v1/prompts">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">list</a>() -> <a href="./src/llama_stack_client/types/prompt_list_response.py">PromptListResponse</a></code>
 - <code title="delete /v1/prompts/{prompt_id}">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">delete</a>(prompt_id) -> None</code>
-- <code title="post /v1/prompts/{prompt_id}/set-default-version">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">set_default_version</a>(prompt_id, \*\*<a href="src/llama_stack_client/types/prompt_set_default_version_params.py">params</a>) -> <a href="./src/llama_stack_client/types/prompt.py">Prompt</a></code>
+- <code title="put /v1/prompts/{prompt_id}/set-default-version">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">set_default_version</a>(prompt_id, \*\*<a href="src/llama_stack_client/types/prompt_set_default_version_params.py">params</a>) -> <a href="./src/llama_stack_client/types/prompt.py">Prompt</a></code>
 
 ## Versions
 
@@ -442,18 +442,6 @@ Methods:
 
 # Alpha
 
-## Inference
-
-Types:
-
-```python
-from llama_stack_client.types.alpha import InferenceRerankResponse
-```
-
-Methods:
-
-- <code title="post /v1alpha/inference/rerank">client.alpha.inference.<a href="./src/llama_stack_client/resources/alpha/inference.py">rerank</a>(\*\*<a href="src/llama_stack_client/types/alpha/inference_rerank_params.py">params</a>) -> <a href="./src/llama_stack_client/types/alpha/inference_rerank_response.py">InferenceRerankResponse</a></code>
-
 ## PostTraining
 
 Types:
@@ -486,9 +474,9 @@ from llama_stack_client.types.alpha.post_training import (
 Methods:
 
 - <code title="get /v1alpha/post-training/jobs">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">list</a>() -> <a href="./src/llama_stack_client/types/alpha/post_training/job_list_response.py">JobListResponse</a></code>
-- <code title="get /v1alpha/post-training/job/artifacts">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">artifacts</a>(\*\*<a href="src/llama_stack_client/types/alpha/post_training/job_artifacts_params.py">params</a>) -> <a href="./src/llama_stack_client/types/alpha/post_training/job_artifacts_response.py">JobArtifactsResponse</a></code>
-- <code title="post /v1alpha/post-training/job/cancel">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">cancel</a>(\*\*<a href="src/llama_stack_client/types/alpha/post_training/job_cancel_params.py">params</a>) -> None</code>
-- <code title="get /v1alpha/post-training/job/status">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">status</a>(\*\*<a href="src/llama_stack_client/types/alpha/post_training/job_status_params.py">params</a>) -> <a href="./src/llama_stack_client/types/alpha/post_training/job_status_response.py">JobStatusResponse</a></code>
+- <code title="get /v1alpha/post-training/job/artifacts">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">artifacts</a>() -> <a href="./src/llama_stack_client/types/alpha/post_training/job_artifacts_response.py">JobArtifactsResponse</a></code>
+- <code title="post /v1alpha/post-training/job/cancel">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">cancel</a>() -> None</code>
+- <code title="get /v1alpha/post-training/job/status">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">status</a>() -> <a href="./src/llama_stack_client/types/alpha/post_training/job_status_response.py">JobStatusResponse</a></code>
 
 ## Benchmarks
 
@@ -538,6 +526,18 @@ Methods:
 - <code title="get /v1alpha/admin/inspect/routes">client.alpha.admin.<a href="./src/llama_stack_client/resources/alpha/admin.py">list_routes</a>(\*\*<a href="src/llama_stack_client/types/alpha/admin_list_routes_params.py">params</a>) -> <a href="./src/llama_stack_client/types/route_list_response.py">RouteListResponse</a></code>
 - <code title="get /v1alpha/admin/version">client.alpha.admin.<a href="./src/llama_stack_client/resources/alpha/admin.py">version</a>() -> <a href="./src/llama_stack_client/types/shared/version_info.py">VersionInfo</a></code>
 
+## Inference
+
+Types:
+
+```python
+from llama_stack_client.types.alpha import InferenceRerankResponse
+```
+
+Methods:
+
+- <code title="post /v1alpha/inference/rerank">client.alpha.inference.<a href="./src/llama_stack_client/resources/alpha/inference.py">rerank</a>(\*\*<a href="src/llama_stack_client/types/alpha/inference_rerank_params.py">params</a>) -> <a href="./src/llama_stack_client/types/alpha/inference_rerank_response.py">InferenceRerankResponse</a></code>
+
 # Beta
 
 ## Datasets
@@ -558,7 +558,7 @@ Methods:
 
 - <code title="get /v1beta/datasets/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">retrieve</a>(dataset_id) -> <a href="./src/llama_stack_client/types/beta/dataset_retrieve_response.py">DatasetRetrieveResponse</a></code>
 - <code title="get /v1beta/datasets">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">list</a>() -> <a href="./src/llama_stack_client/types/beta/dataset_list_response.py">DatasetListResponse</a></code>
-- <code title="post /v1beta/datasetio/append-rows/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">appendrows</a>(dataset_id, \*\*<a href="src/llama_stack_client/types/beta/dataset_appendrows_params.py">params</a>) -> None</code>
+- <code title="post /v1beta/datasetio/append-rows/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">appendrows</a>(path_dataset_id, \*\*<a href="src/llama_stack_client/types/beta/dataset_appendrows_params.py">params</a>) -> None</code>
 - <code title="get /v1beta/datasetio/iterrows/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">iterrows</a>(dataset_id, \*\*<a href="src/llama_stack_client/types/beta/dataset_iterrows_params.py">params</a>) -> <a href="./src/llama_stack_client/types/beta/dataset_iterrows_response.py">DatasetIterrowsResponse</a></code>
 - <code title="post /v1beta/datasets">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">register</a>(\*\*<a href="src/llama_stack_client/types/beta/dataset_register_params.py">params</a>) -> <a href="./src/llama_stack_client/types/beta/dataset_register_response.py">DatasetRegisterResponse</a></code>
 - <code title="delete /v1beta/datasets/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">unregister</a>(dataset_id) -> None</code>
````
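The api.md changes are mostly HTTP-verb corrections: `prompts.update` and `prompts.set_default_version` now map to PUT instead of POST, and the post-training job endpoints no longer take request params. A minimal sketch of what the verb change means at the request level (the helper name and dict shape are hypothetical, not the SDK's internals):

```python
def build_update_request(prompt_id: str, params: dict) -> dict:
    """Sketch of the HTTP request a regenerated `prompts.update` call would make."""
    return {
        "method": "PUT",  # was POST before this release
        "path": f"/v1/prompts/{prompt_id}",
        "json": params,
    }


req = build_update_request("my-prompt", {"prompt": "You are a helpful assistant."})
```

PUT is the more conventional verb here since the call replaces the state of an existing, addressable resource.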

pyproject.toml

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 [project]
 name = "llama_stack_client"
-version = "0.5.0-alpha.1"
+version = "0.5.0-alpha.2"
 description = "The official Python library for the llama-stack-client API"
 dynamic = ["readme"]
 license = "MIT"
```

src/llama_stack_client/_version.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -7,4 +7,4 @@
 # File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
 
 __title__ = "llama_stack_client"
-__version__ = "0.5.0-alpha.1"  # x-release-please-version
+__version__ = "0.5.0-alpha.2"  # x-release-please-version
```

src/llama_stack_client/resources/alpha/__init__.py

Lines changed: 6 additions & 6 deletions
```diff
@@ -56,12 +56,6 @@
 )
 
 __all__ = [
-    "InferenceResource",
-    "AsyncInferenceResource",
-    "InferenceResourceWithRawResponse",
-    "AsyncInferenceResourceWithRawResponse",
-    "InferenceResourceWithStreamingResponse",
-    "AsyncInferenceResourceWithStreamingResponse",
     "PostTrainingResource",
     "AsyncPostTrainingResource",
     "PostTrainingResourceWithRawResponse",
@@ -86,6 +80,12 @@
     "AsyncAdminResourceWithRawResponse",
     "AdminResourceWithStreamingResponse",
     "AsyncAdminResourceWithStreamingResponse",
+    "InferenceResource",
+    "AsyncInferenceResource",
+    "InferenceResourceWithRawResponse",
+    "AsyncInferenceResourceWithRawResponse",
+    "InferenceResourceWithStreamingResponse",
+    "AsyncInferenceResourceWithStreamingResponse",
     "AlphaResource",
     "AsyncAlphaResource",
     "AlphaResourceWithRawResponse",
```

src/llama_stack_client/resources/alpha/alpha.py

Lines changed: 24 additions & 24 deletions
```diff
@@ -55,10 +55,6 @@
 
 
 class AlphaResource(SyncAPIResource):
-    @cached_property
-    def inference(self) -> InferenceResource:
-        return InferenceResource(self._client)
-
     @cached_property
     def post_training(self) -> PostTrainingResource:
         return PostTrainingResource(self._client)
@@ -75,6 +71,10 @@ def eval(self) -> EvalResource:
     def admin(self) -> AdminResource:
         return AdminResource(self._client)
 
+    @cached_property
+    def inference(self) -> InferenceResource:
+        return InferenceResource(self._client)
+
     @cached_property
     def with_raw_response(self) -> AlphaResourceWithRawResponse:
         """
@@ -96,10 +96,6 @@ def with_streaming_response(self) -> AlphaResourceWithStreamingResponse:
 
 
 class AsyncAlphaResource(AsyncAPIResource):
-    @cached_property
-    def inference(self) -> AsyncInferenceResource:
-        return AsyncInferenceResource(self._client)
-
     @cached_property
     def post_training(self) -> AsyncPostTrainingResource:
         return AsyncPostTrainingResource(self._client)
@@ -116,6 +112,10 @@ def eval(self) -> AsyncEvalResource:
     def admin(self) -> AsyncAdminResource:
         return AsyncAdminResource(self._client)
 
+    @cached_property
+    def inference(self) -> AsyncInferenceResource:
+        return AsyncInferenceResource(self._client)
+
     @cached_property
     def with_raw_response(self) -> AsyncAlphaResourceWithRawResponse:
         """
@@ -140,10 +140,6 @@ class AlphaResourceWithRawResponse:
     def __init__(self, alpha: AlphaResource) -> None:
         self._alpha = alpha
 
-    @cached_property
-    def inference(self) -> InferenceResourceWithRawResponse:
-        return InferenceResourceWithRawResponse(self._alpha.inference)
-
     @cached_property
     def post_training(self) -> PostTrainingResourceWithRawResponse:
         return PostTrainingResourceWithRawResponse(self._alpha.post_training)
@@ -160,15 +156,15 @@ def eval(self) -> EvalResourceWithRawResponse:
     def admin(self) -> AdminResourceWithRawResponse:
         return AdminResourceWithRawResponse(self._alpha.admin)
 
+    @cached_property
+    def inference(self) -> InferenceResourceWithRawResponse:
+        return InferenceResourceWithRawResponse(self._alpha.inference)
+
 
 class AsyncAlphaResourceWithRawResponse:
     def __init__(self, alpha: AsyncAlphaResource) -> None:
         self._alpha = alpha
 
-    @cached_property
-    def inference(self) -> AsyncInferenceResourceWithRawResponse:
-        return AsyncInferenceResourceWithRawResponse(self._alpha.inference)
-
     @cached_property
     def post_training(self) -> AsyncPostTrainingResourceWithRawResponse:
         return AsyncPostTrainingResourceWithRawResponse(self._alpha.post_training)
@@ -185,15 +181,15 @@ def eval(self) -> AsyncEvalResourceWithRawResponse:
     def admin(self) -> AsyncAdminResourceWithRawResponse:
         return AsyncAdminResourceWithRawResponse(self._alpha.admin)
 
+    @cached_property
+    def inference(self) -> AsyncInferenceResourceWithRawResponse:
+        return AsyncInferenceResourceWithRawResponse(self._alpha.inference)
+
 
 class AlphaResourceWithStreamingResponse:
     def __init__(self, alpha: AlphaResource) -> None:
         self._alpha = alpha
 
-    @cached_property
-    def inference(self) -> InferenceResourceWithStreamingResponse:
-        return InferenceResourceWithStreamingResponse(self._alpha.inference)
-
     @cached_property
     def post_training(self) -> PostTrainingResourceWithStreamingResponse:
         return PostTrainingResourceWithStreamingResponse(self._alpha.post_training)
@@ -210,15 +206,15 @@ def eval(self) -> EvalResourceWithStreamingResponse:
     def admin(self) -> AdminResourceWithStreamingResponse:
         return AdminResourceWithStreamingResponse(self._alpha.admin)
 
+    @cached_property
+    def inference(self) -> InferenceResourceWithStreamingResponse:
+        return InferenceResourceWithStreamingResponse(self._alpha.inference)
+
 
 class AsyncAlphaResourceWithStreamingResponse:
     def __init__(self, alpha: AsyncAlphaResource) -> None:
         self._alpha = alpha
 
-    @cached_property
-    def inference(self) -> AsyncInferenceResourceWithStreamingResponse:
-        return AsyncInferenceResourceWithStreamingResponse(self._alpha.inference)
-
     @cached_property
     def post_training(self) -> AsyncPostTrainingResourceWithStreamingResponse:
         return AsyncPostTrainingResourceWithStreamingResponse(self._alpha.post_training)
@@ -234,3 +230,7 @@ def eval(self) -> AsyncEvalResourceWithStreamingResponse:
     @cached_property
     def admin(self) -> AsyncAdminResourceWithStreamingResponse:
         return AsyncAdminResourceWithStreamingResponse(self._alpha.admin)
+
+    @cached_property
+    def inference(self) -> AsyncInferenceResourceWithStreamingResponse:
+        return AsyncInferenceResourceWithStreamingResponse(self._alpha.inference)
```
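The alpha.py diff only moves the `inference` accessor below `admin`; behavior is unchanged. The underlying pattern, a lazily constructed, per-instance cached sub-resource, can be sketched independently of the SDK (class names here are simplified stand-ins, not the real implementation):

```python
from functools import cached_property


class InferenceResource:
    """Stand-in for a generated sub-resource."""

    def __init__(self, client: object) -> None:
        self._client = client


class AlphaResource:
    """Stand-in for the parent resource that exposes sub-resources."""

    def __init__(self, client: object) -> None:
        self._client = client

    @cached_property
    def inference(self) -> InferenceResource:
        # Built on first access, then stored on the instance, so every
        # later access returns the same object without reconstruction.
        return InferenceResource(self._client)


alpha = AlphaResource(client=object())
```

Because `cached_property` stores the result in the instance `__dict__`, declaration order within the class has no effect on behavior, which is why this reordering is cosmetic.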

src/llama_stack_client/resources/alpha/eval/eval.py

Lines changed: 40 additions & 8 deletions
```diff
@@ -86,7 +86,13 @@ def evaluate_rows(
         Evaluate a list of rows on a benchmark.
 
         Args:
-          benchmark_config: A benchmark configuration for evaluation.
+          benchmark_id: The ID of the benchmark
+
+          benchmark_config: The configuration for the benchmark
+
+          input_rows: The rows to evaluate
+
+          scoring_functions: The scoring functions to use for the evaluation
 
           extra_headers: Send extra headers
 
@@ -132,7 +138,13 @@
         Evaluate a list of rows on a benchmark.
 
         Args:
-          benchmark_config: A benchmark configuration for evaluation.
+          benchmark_id: The ID of the benchmark
+
+          benchmark_config: The configuration for the benchmark
+
+          input_rows: The rows to evaluate
+
+          scoring_functions: The scoring functions to use for the evaluation
 
           extra_headers: Send extra headers
 
@@ -176,7 +188,9 @@ def run_eval(
         Run an evaluation on a benchmark.
 
         Args:
-          benchmark_config: A benchmark configuration for evaluation.
+          benchmark_id: The ID of the benchmark
+
+          benchmark_config: The configuration for the benchmark
 
         extra_headers: Send extra headers
 
@@ -213,7 +227,9 @@
         Run an evaluation on a benchmark.
 
         Args:
-          benchmark_config: A benchmark configuration for evaluation.
+          benchmark_id: The ID of the benchmark
+
+          benchmark_config: The configuration for the benchmark
 
           extra_headers: Send extra headers
 
@@ -279,7 +295,13 @@ async def evaluate_rows(
         Evaluate a list of rows on a benchmark.
 
         Args:
-          benchmark_config: A benchmark configuration for evaluation.
+          benchmark_id: The ID of the benchmark
+
+          benchmark_config: The configuration for the benchmark
+
+          input_rows: The rows to evaluate
+
+          scoring_functions: The scoring functions to use for the evaluation
 
           extra_headers: Send extra headers
 
@@ -325,7 +347,13 @@
         Evaluate a list of rows on a benchmark.
 
         Args:
-          benchmark_config: A benchmark configuration for evaluation.
+          benchmark_id: The ID of the benchmark
+
+          benchmark_config: The configuration for the benchmark
+
+          input_rows: The rows to evaluate
+
+          scoring_functions: The scoring functions to use for the evaluation
 
           extra_headers: Send extra headers
 
@@ -369,7 +397,9 @@ async def run_eval(
         Run an evaluation on a benchmark.
 
         Args:
-          benchmark_config: A benchmark configuration for evaluation.
+          benchmark_id: The ID of the benchmark
+
+          benchmark_config: The configuration for the benchmark
 
           extra_headers: Send extra headers
 
@@ -408,7 +438,9 @@
         Run an evaluation on a benchmark.
 
         Args:
-          benchmark_config: A benchmark configuration for evaluation.
+          benchmark_id: The ID of the benchmark
+
+          benchmark_config: The configuration for the benchmark
 
           extra_headers: Send extra headers
 
```
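The eval.py changes are docstring-only: all four `evaluate_rows` arguments are now documented instead of just `benchmark_config`. A hedged illustration of the argument shape (every value below is invented for demonstration; consult the generated params types for the real schema):

```python
# Illustrative keyword arguments matching the newly documented parameters.
evaluate_rows_kwargs = {
    "benchmark_id": "my-benchmark",              # The ID of the benchmark
    "benchmark_config": {},                      # The configuration for the benchmark (schema not shown here)
    "input_rows": [                              # The rows to evaluate
        {"input": "2 + 2 = ?", "expected": "4"},
    ],
    "scoring_functions": ["exact_match"],        # The scoring functions to use
}
```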
