Commit d12ca8f
Minor readme file changes
1 parent 95b1dca

File tree

1 file changed: +11 -11 lines changed


README.md

Lines changed: 11 additions & 11 deletions
@@ -1,12 +1,12 @@

# AVP Agents Share Thoughts, Not Text

[![PyPI](https://img.shields.io/pypi/v/avp.svg)](https://pypi.org/project/avp/)
[![CI](https://github.com/VectorArc/avp-python/actions/workflows/ci.yml/badge.svg)](https://github.com/VectorArc/avp-python/actions/workflows/ci.yml)
[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](LICENSE)
[![Python](https://img.shields.io/badge/python-3.9+-blue.svg)](https://python.org)
[![Spec](https://img.shields.io/badge/spec-v0.3-blue.svg)](https://github.com/VectorArc/avp-spec)

When LLM agents hand off work as text, the next agent re-processes everything from scratch. AVP transfers the actual computation (KV-cache, hidden states, attention) so the receiving agent picks up where the sender left off: 46-78% fewer tokens, 2-4x faster, and sometimes more accurate than text. Built on [LatentMAS](https://arxiv.org/abs/2511.20639).

```bash
pip install avp
```
@@ -47,9 +47,9 @@ answer = connector.generate(prompt, context=context)

| Qwen 7B | Llama 3B | 74.5% | 47.0% |
| Llama 3B | Qwen 7B | **90.0%** | **79.3%** |

A small 3B model sharing its reasoning lifts a 7B solver to 90% on math and 79.3% on code. The projection is vocabulary-mediated: no learned parameters, no training data, and it works across model families.

Full results in **[Benchmarks](docs/BENCHMARKS.md)**: 8 benchmarks, 5 models, 2 families, reproducible.

## How It Works

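One way to read "vocabulary-mediated" is a logit-lens-style bridge: decode the sender's hidden state through its own unembedding matrix into the shared vocabulary, then re-embed that distribution with the receiver's embedding matrix. The NumPy toy below illustrates that idea under made-up dimensions; it is my sketch of the concept, not the LatentMAS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = 16                 # shared vocabulary size
d_sender, d_recv = 8, 12   # different hidden sizes across model families

W_unembed = rng.normal(size=(d_sender, vocab))  # sender: hidden -> vocab logits
E_recv = rng.normal(size=(vocab, d_recv))       # receiver: token -> embedding

h_sender = rng.normal(size=(d_sender,))

# 1) Decode the sender's hidden state into a distribution over the shared vocab.
logits = h_sender @ W_unembed
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# 2) Re-embed in the receiver's space as a probability-weighted embedding mix.
h_recv = probs @ E_recv

assert h_recv.shape == (d_recv,)  # bridged without any learned parameters
```

Because both matrices already exist inside each model, the bridge needs no training data, which matches the "no learned parameters" claim above.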
@@ -87,7 +87,7 @@ Replace `llm.invoke()` with `avp.generate()`. Your framework sees text in, text

| **CrewAI** | `BaseLLM.call()` override |
| **PydanticAI** | `FunctionModel` callback |
| **LlamaIndex** | `CustomLLM.complete()` override |
| **A2A / MCP** | Complementary: AVP handles tensor transfer, they handle routing |
| **HuggingFace** | Full latent pipeline (KV-cache + hidden states) |

See **[Framework Integration Guide](docs/FRAMEWORK_INTEGRATION.md)** for working examples.
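The "text in, text out" integration pattern can be sketched framework-agnostically: an adapter exposes a plain string `call()` (the shape overrides like CrewAI's `BaseLLM.call()` expect) while threading latent context through behind the scenes. All names below are hypothetical; the real overrides live in the Framework Integration Guide.

```python
class TextOnlyAdapter:
    """Expose latent-aware generation behind a plain text call() interface."""

    def __init__(self, generate_fn, context_store: dict):
        self.generate_fn = generate_fn      # an avp.generate-style callable
        self.context_store = context_store  # latent contexts keyed per session

    def call(self, prompt: str, session: str = "default") -> str:
        # The framework passes text; the adapter supplies the latent context.
        ctx = self.context_store.get(session)
        return self.generate_fn(prompt, context=ctx)

# Hypothetical backend standing in for avp.generate in this sketch.
def fake_generate(prompt, context=None):
    tag = "latent" if context is not None else "cold"
    return f"[{tag}] {prompt}"

adapter = TextOnlyAdapter(fake_generate, {"default": object()})
print(adapter.call("Summarize the plan"))  # the framework only ever sees strings
```

The design point: the framework's contract stays string-to-string, so no framework code changes; only the adapter knows tensors moved underneath.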
@@ -130,7 +130,7 @@ answer = avp.generate("Solve: 24 * 17 + 3",

<details>
<summary><strong>vLLM</strong></summary>

**Latent transfer is not supported on vLLM yet.** The latent pipeline (`think()`/`generate()` with context) requires HuggingFace Transformers. `VLLMConnector` exists for text-only generation and model identity; it will error if you pass latent context. vLLM latent support is on the roadmap.

</details>

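The fallback behaviour described for `VLLMConnector` amounts to a guard: serve plain text generation, but reject latent context loudly rather than silently dropping the transferred state. A hypothetical mirror of that behaviour, not the connector's actual source:

```python
class TextOnlyConnector:
    """Sketch of a backend that supports text generation but not latent transfer."""

    def generate(self, prompt: str, context=None) -> str:
        if context is not None:
            # Fail loudly rather than silently discarding the sender's state.
            raise NotImplementedError(
                "latent context requires the HuggingFace Transformers pipeline"
            )
        return f"text-only completion for: {prompt}"

conn = TextOnlyConnector()
print(conn.generate("hello"))  # works: plain text path
try:
    conn.generate("hello", context={})
except NotImplementedError as err:
    print("rejected:", err)
```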
@@ -158,12 +158,12 @@ answer = connector.generate(prompt, context=restored)

## Documentation

- **[AVP Specification](https://github.com/VectorArc/avp-spec)**: binary format, handshake, transport
- **[Benchmarks](docs/BENCHMARKS.md)**: 8 benchmarks, 5 models, 2 families
- **[Framework Integration](docs/FRAMEWORK_INTEGRATION.md)**: LangGraph, CrewAI, PydanticAI, LlamaIndex
- **[Examples](examples/)**: quickstart, cross-model, and agent demos
- **[CHANGELOG](CHANGELOG.md)**

## License

Apache 2.0; see [LICENSE](LICENSE)
