
Enabling prompt_cache makes LocalAI panic #553

@mudler

Description


LocalAI version:
v0.18.0

Environment, CPU architecture, OS, and Version:

Describe the bug

Enabling prompt_cache_all together with prompt_cache_path in a model configuration makes LocalAI panic.

To Reproduce
For a llama.cpp compatible model, enable prompt_cache_all and set prompt_cache_path in the model configuration (see the sketch below).
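
For reference, prompt caching is configured per model in its YAML definition. A minimal sketch of a config that should reproduce this, assuming a llama.cpp compatible model file (the model name and file below are illustrative placeholders, not taken from this report):

```yaml
# Hypothetical model definition, e.g. models/my-model.yaml.
# The name and model file are placeholders.
name: my-model
backend: llama
parameters:
  model: ggml-model-q4_0.bin        # any llama.cpp compatible model file
# The two options below are what trigger the panic:
prompt_cache_all: true              # cache the prompt state for every request
prompt_cache_path: "cache/my-model" # cache location, relative to the models directory
```

Any completion request routed to this model via the OpenAI-compatible API should then hit the panic.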

Expected behavior

No panic; the prompt cache is written to prompt_cache_path and reused on subsequent requests.

Logs
N/A (will collect soon)

Additional context

Labels

bug (Something isn't working)
