Update dependency huggingface-hub to v0.36.0 #62
This PR contains the following updates:
huggingface-hub: `==0.15.1` -> `==0.36.0`

Warning: Some dependencies could not be looked up. Check the warning logs for more information.
Release Notes
huggingface/huggingface_hub (huggingface-hub)
v0.36.0: [v0.36.0] Last Stop Before 1.0
This is the final minor release before v1.0.0. It focuses on performance optimizations to `HfFileSystem` and adds a new `get_organization_overview` API endpoint. We'll continue to release security patches as needed, but there will be no v0.37: the next release will be 1.0.0. We're also deeply grateful to the entire Hugging Face community for the feedback, bug reports, and suggestions that have shaped this library.
Full Changelog: huggingface/huggingface_hub@v0.35.0...v0.36.0
📁 HfFileSystem
Major optimizations have been implemented in `HfFileSystem`:
The `fs` instance cache can now be reused. This is particularly useful when streaming datasets in a distributed training environment: each worker no longer has to rebuild its cache.
Listing files with `.glob()` has been greatly optimized:
* `maxdepth`: make fewer `/tree` calls in `glob()` by @lhoestq in #3389
Minor updates:
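The `maxdepth` argument mentioned above bounds how deep a glob recurses, which is what saves listing calls. Since `HfFileSystem` follows the fsspec interface, the semantics can be sketched offline with a plain local walk; the helper below is illustrative only, not the hub implementation:

```python
import fnmatch
import os
import tempfile

def glob_with_maxdepth(root: str, pattern: str, maxdepth: int) -> list[str]:
    """List files under `root` whose relative path matches `pattern`,
    looking at most `maxdepth` levels down (illustrative helper only)."""
    matches = []
    for dirpath, dirnames, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        dir_depth = 0 if rel == "." else rel.count(os.sep) + 1
        if dir_depth + 1 >= maxdepth:
            dirnames.clear()  # prune the walk: fewer directory listings
        if dir_depth < maxdepth:
            for name in filenames:
                relfile = name if rel == "." else os.path.join(rel, name)
                if fnmatch.fnmatch(relfile, pattern):
                    matches.append(relfile)
    return sorted(matches)

# Tiny demo tree: a.parquet at depth 1, deep/b.parquet at depth 2.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "deep"))
    for p in ("a.parquet", os.path.join("deep", "b.parquet")):
        open(os.path.join(root, p), "w").close()
    print(glob_with_maxdepth(root, "*.parquet", maxdepth=1))  # ['a.parquet']
    print(glob_with_maxdepth(root, "*", maxdepth=2))
```

With `maxdepth=1` the walk never even enters `deep/`, which is the whole point of the optimization.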
🌍 HfApi
It is now possible to get high-level information about an organization, the same way as is already possible for users:
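A sketch of what the new call might look like. The method name `get_organization_overview` comes from the release notes above, but the signature and the REST path underneath are assumptions on my part, modeled on the existing user-overview endpoint:

```python
# With huggingface_hub installed, the high-level call presumably looks like:
#
#   from huggingface_hub import HfApi
#   overview = HfApi().get_organization_overview("huggingface")  # signature assumed
#
# Under the hood this is a plain GET against the Hub REST API; the exact
# path below is an assumption mirroring the user overview endpoint.
from urllib.request import Request

def org_overview_request(org: str) -> Request:
    return Request(f"https://huggingface.co/api/organizations/{org}/overview")

req = org_overview_request("huggingface")
print(req.full_url)
```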
🛠️ Small fixes and maintenance
🐛 Bug and typo fixes
* `sentence_similarity` docstring by @tolgaakar in #3374
🏗️ internal
* `ty` quality by @hanouticelina in #3441
Community contributions
The following contributors have made changes to the library over the last release. Thank you!
* Add quotes for better shell compatibility (#3369)
* Update the `sentence_similarity` docstring (#3374) (#3375)
* Use all tools unless explicit allowed_tools (#3397)
* The error message as previously displayed... (#3405)
* Add client support for the organization overview endpoint (#3436)
v0.35.3: [v0.35.3] Fix `image-to-image` target size parameter mapping & tiny-agents allow-tools list bug
This release includes two bug fixes:
Full Changelog: huggingface/huggingface_hub@v0.35.2...v0.35.3
v0.35.2: [v0.35.2] Welcoming Z.ai as Inference Provider!
Full Changelog: huggingface/huggingface_hub@v0.35.1...v0.35.2
New inference provider! 🔥
Z.ai is now officially an Inference Provider on the Hub. See full documentation here: https://huggingface.co/docs/inference-providers/providers/zai-org.
Misc:
v0.35.1: [v0.35.1] Do not retry on 429 and skip forward ref in strict dataclass
* Skip forward ref in `strict` dataclasses #3376
Full Changelog: huggingface/huggingface_hub@v0.35.0...v0.35.1
v0.35.0: [v0.35.0] Announcing Scheduled Jobs: run cron jobs on GPU on the Hugging Face Hub!
Scheduled Jobs
In the v0.34.0 release, we announced Jobs, a new way to run compute on the Hugging Face Hub. In this release, we are announcing Scheduled Jobs, to run Jobs on a regular basis. Think "cron jobs running on GPU".
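For readers new to cron: a schedule is five space-separated fields (minute, hour, day-of-month, month, day-of-week). A minimal matcher for the most common field forms, purely to illustrate the semantics (this is not the Hub's scheduler):

```python
def cron_field_matches(field: str, value: int) -> bool:
    """Match one cron field against a value: supports '*', '*/step',
    comma lists, and plain numbers (a deliberately small subset)."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return value in {int(part) for part in field.split(",")}

def cron_matches(expr: str, minute: int, hour: int) -> bool:
    """Check only the minute and hour fields of a 5-field expression."""
    fields = expr.split()
    return cron_field_matches(fields[0], minute) and cron_field_matches(fields[1], hour)

# "Every 15 minutes during hour 6":
print(cron_matches("*/15 6 * * *", minute=30, hour=6))   # True
print(cron_matches("*/15 6 * * *", minute=30, hour=7))   # False
```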
This comes with a fully-fledged CLI:
It is now possible to run a command with `uv run`:
* `hf jobs uv run` by @lhoestq in #3303
Some other improvements have been added to the existing Jobs API for a better UX.
And finally, Jobs documentation has been updated with new examples (and some fixes):
CLI updates
In addition to Scheduled Jobs, some improvements have been added to the `hf` CLI.
Inference Providers
Welcome Scaleway and PublicAI!
Two new partners have been integrated into Inference Providers: Scaleway and PublicAI (as part of releases `0.34.5` and `0.34.6`).
Image-to-video
Image-to-video is now supported in the `InferenceClient`:
Miscellaneous
Header `content-type` is now correctly set when sending an image or audio request (e.g. for the `image-to-image` task). It is inferred either from the filename or the URL provided by the user. If the user passes raw bytes directly, the `content-type` header has to be set manually.
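The inference described above can be reproduced with the standard library: Python's `mimetypes` guesses a content type from a filename or URL suffix. This is a simplified stand-in for whatever the client does internally, not its actual code:

```python
import mimetypes

def guess_content_type(name_or_url: str, default: str = "application/octet-stream") -> str:
    """Guess a content-type from a filename or URL suffix; fall back to a
    generic binary type, e.g. for raw bytes with no name attached."""
    guessed, _ = mimetypes.guess_type(name_or_url)
    return guessed or default

print(guess_content_type("cat.png"))                          # image/png
print(guess_content_type("https://example.com/speech.mp3"))   # audio/mpeg
print(guess_content_type("rawbytes"))                         # application/octet-stream
```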
A `.reasoning` field has been added to the Chat Completion output. This is used by some providers to return reasoning tokens separately from the `.content` stream of tokens.
MCP & tiny-agents updates
`tiny-agents` now handles the `AGENTS.md` instruction file (see https://agents.md/).
Tools filtering has also been improved to avoid loading non-relevant tools from an MCP server:
🛠️ Small fixes and maintenance
🐛 Bug and typo fixes
* `HF_HUB_DISABLE_XET` in the environment dump by @hanouticelina in #3290
* `apps` as a parameter to `HfApi.list_models` by @anirbanbasu in #3322
🏗️ internal
* `ty` type checker by @hanouticelina in #3294
* `ty` check quality by @hanouticelina in #3320
* `is_jsonable` if circular reference by @Wauplin in #3348
Community contributions
The following contributors have made changes to the library over the last release. Thank you!
* `apps` as a parameter to `HfApi.list_models` (#3322)
v0.34.6: [v0.34.6]: Welcoming PublicAI as Inference Provider!
Full Changelog: huggingface/huggingface_hub@v0.34.5...v0.34.6
⚡ New provider: PublicAI
Public AI Inference Utility is a nonprofit, open-source project building products and organizing advocacy to support the work of public AI model builders like the Swiss AI Initiative, AI Singapore, AI Sweden, and the Barcelona Supercomputing Center. Think of a BBC for AI, a public utility for AI, or public libraries for AI.
v0.34.5: [v0.34.5]: Welcoming Scaleway as Inference Provider!
Full Changelog: huggingface/huggingface_hub@v0.34.4...v0.34.5
⚡ New provider: Scaleway
Scaleway is a European cloud provider, serving the latest LLMs through its Generative APIs alongside a complete cloud ecosystem.
v0.34.4: [v0.34.4] Support Image to Video inference + QoL in jobs API, auth and utilities
The biggest update is support for the image-to-video task with the inference provider Fal AI.
And some quality of life improvements:
Full Changelog: huggingface/huggingface_hub@v0.34.3...v0.34.4
v0.34.3: [v0.34.3] Jobs improvements and `whoami` user prefix
Full Changelog: huggingface/huggingface_hub@v0.34.2...v0.34.3
v0.34.2: [v0.34.2] Bug fixes: Windows path handling & resume download size fix
Full Changelog: huggingface/huggingface_hub@v0.34.1...v0.34.2
v0.34.1: [v0.34.1] [CLI] print help if no command provided
Full Changelog: huggingface/huggingface_hub@v0.34.0...v0.34.1
v0.34.0: [v0.34.0] Announcing Jobs: a new way to run compute on Hugging Face!
🔥🔥🔥 Announcing Jobs: a new way to run compute on Hugging Face!
We're thrilled to introduce a powerful new command-line interface for running and managing compute jobs on Hugging Face infrastructure! With the new `hf jobs` command, you can now seamlessly launch, monitor, and manage jobs using a familiar Docker-like experience. Run any command in Docker images (from Docker Hub, Hugging Face Spaces, or your own custom images) on a variety of hardware including CPUs, GPUs, and TPUs, all with simple, intuitive commands.
Key features:
* Familiar Docker-like commands (`run`, `ps`, `logs`, `inspect`, `cancel`) to run and manage jobs
* `uv` runner (experimental)
All features are available both from Python (`run_job`, `list_jobs`, etc.) and the CLI (`hf jobs`).
Example usage:
You can also pass environment variables and secrets, select hardware flavors, run jobs in organizations, and use the experimental `uv` runner for Python scripts with inline dependencies.
Check out the Jobs guide for more examples and details.
🚀 The CLI is now `hf`! (formerly `huggingface-cli`)
We're glad to announce a long-awaited quality-of-life improvement: the Hugging Face CLI has been officially renamed from `huggingface-cli` to `hf`! The legacy `huggingface-cli` remains available without any breaking change, but is officially deprecated. We took the opportunity to update the syntax to a more modern command format, `hf <resource> <action> [options]` (e.g. `hf auth login`, `hf repo create`, `hf jobs run`).
Run `hf --help` to learn more about the CLI options.
⚡ Inference
🖼️ Image-to-image
Added support for the `image-to-image` task in the `InferenceClient` for the Replicate and fal.ai providers, allowing quick image generation using FLUX.1-Kontext-dev:
* `image-to-image` support for Replicate provider by @hanouticelina in #3188
* `image-to-image` support for fal.ai provider by @hanouticelina in #3187
In addition, it is now possible to directly pass a `PIL.Image` as input to the `InferenceClient`.
🤖 Tiny-Agents
`tiny-agents` got a nice update to deal with environment variables and secrets. We've also changed its input format to follow the VSCode config format more closely. Here is an up-to-date config to run the GitHub MCP Server with a token:
🐛 Bug fixes
`InferenceClient` and `tiny-agents` got a few quality-of-life improvements and bug fixes:
📤 Xet
Integration of Xet is now stable and production-ready. The majority of file transfers are now handled using this protocol on new repos. A few improvements have been shipped to ease the developer experience during uploads:
Documentation has been written to better explain the protocol and its options:
🛠️ Small fixes and maintenance
🐛 Bug and typo fixes
* `healthRoute` instead of `GET /` to check status by @mfuntowicz in #3165
* `expand` argument when listing files in repos by @lhoestq in #3195
* `libcst` incompatibility with Python 3.13 by @hanouticelina in #3251
🏗️ internal
v0.33.5: [v0.33.5] [Inference] Fix a `UserWarning` when streaming with `AsyncInferenceClient`
* Fix a `UserWarning` when streaming with `AsyncInferenceClient` #3252
Full Changelog: huggingface/huggingface_hub@v0.33.4...v0.33.5
v0.33.4: [v0.33.4] [Tiny-Agent]: Fix schema validation error for default MCP tools
Full Changelog: huggingface/huggingface_hub@v0.33.3...v0.33.4
v0.33.3: [v0.33.3] [Tiny-Agent]: Update tiny-agents example
Full Changelog: huggingface/huggingface_hub@v0.33.2...v0.33.3
v0.33.2: [v0.33.2] [Tiny-Agent]: Switch to VSCode MCP format
Full Changelog: huggingface/huggingface_hub@v0.33.1...v0.33.2
Breaking changes:
Example of `agent.json`:
Find more examples in https://huggingface.co/datasets/tiny-agents/tiny-agents
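The example file itself was not captured above. A minimal `agent.json` in the VSCode-style MCP format might look like the following; the model, provider, and server values are illustrative, so check the dataset linked above for canonical examples:

```json
{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "servers": [
    {
      "type": "stdio",
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  ]
}
```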
v0.33.1: [v0.33.1]: Inference Providers Bug Fixes, Tiny-Agents Message Handling Improvement, and Inference Endpoints Health Check Update
Full Changelog: huggingface/huggingface_hub@v0.33.0...v0.33.1
This release introduces bug fixes for chat completion type compatibility and feature extraction parameters, enhanced message handling in tiny-agents, and updated inference endpoint health check:
v0.33.0: [v0.33.0]: Welcoming Featherless.AI and Groq as Inference Providers!
⚡ New provider: Featherless.AI
Featherless AI is a serverless AI inference provider with unique model loading and GPU orchestration abilities that makes an exceptionally large catalog of models available for users. Providers often offer either a low cost of access to a limited set of models, or an unlimited range of models with users managing servers and the associated costs of operation. Featherless provides the best of both worlds offering unmatched model range and variety but with serverless pricing. Find the full list of supported models on the models page.
⚡ New provider: Groq
At the heart of Groq's technology is the Language Processing Unit (LPU™), a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component, such as Large Language Models (LLMs). LPUs are designed to overcome the limitations of GPUs for inference, offering significantly lower latency and higher throughput. This makes them ideal for real-time AI applications.
Groq offers fast AI inference for openly-available models. They provide an API that allows developers to easily integrate these models into their applications. It offers an on-demand, pay-as-you-go model for accessing a wide range of openly-available LLMs.
🤖 MCP and Tiny-agents
It is now possible to run tiny-agents against a local server, e.g. llama.cpp. 100% local agents are right around the corner!
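Pointing an agent at a local OpenAI-compatible server presumably only takes a config along these lines; the field names (in particular `endpointUrl`), model, and port are assumptions, so consult the tiny-agents examples for the exact format:

```json
{
  "model": "Qwen/Qwen3-4B",
  "endpointUrl": "http://localhost:8080/v1",
  "servers": []
}
```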
Fixed some DX issues in the `tiny-agents` CLI:
* `tiny-agents` CLI exit issues by @Wauplin in #3125
📚 Documentation
New translation from the Hindi-speaking community, for the community!
🛠️ Small fixes and maintenance
😌 QoL improvements
🐛 Bug and typo fixes
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
To execute skipped test pipelines, write the comment `/ok-to-test`.
Documentation
Find out how to configure dependency updates in MintMaker documentation or see all available configuration options in Renovate documentation.