flyto-core is the local AI tool layer I always wanted.
It's an MCP server that lets models call real, executable tools — instead of generating glue code you have to fix.
No cloud. No SaaS. No copy-paste scripts.
Model says what to do → tool actually runs → structured result comes back.
If you've worked with agents or tool chains, you've probably seen this:
- Generated Playwright code that half works
- Copy → paste → debug → repeat
- Long prompts trying to "stabilize" behavior
- Small models forgetting steps mid-chain
- Multi-step workflows collapsing halfway through
flyto-core doesn't ask the model to write scripts.
It lets the model call tools that are already written and tested.
Execution > code generation.
Think in terms of real tasks:
- Launch a browser, navigate, click, fill forms, extract text, take screenshots
- Make HTTP requests and test APIs
- Read and write to SQLite / Postgres
- Parse PDF / Excel / Word files
- Resize / convert images, run OCR
- Send Slack / Telegram / Email notifications
- Run loops, conditionals, parallel flows
Not "generate code that does this".
Actually do it.
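The contrast fits in a few lines of Python. Everything below is a hypothetical sketch — the module name `text.word_count`, its parameters, and the executor are illustrative stand-ins, not flyto-core's actual API — but it shows the shape: the model emits a structured call, and a pre-written, tested tool runs it.

```python
# Hypothetical sketch of "execution over code generation".
# "text.word_count" and this executor are stand-ins, not flyto-core's real API.
import json

def execute_module(name: str, params: dict) -> dict:
    """Run one pre-written tool and return a structured result."""
    if name == "text.word_count":
        return {"words": len(params["text"].split())}
    return {"error": f"unknown module: {name}"}

# What the model emits is data (a tool call), not a script to debug:
call = json.loads('{"module": "text.word_count", '
                  '"params": {"text": "Example Domain"}}')
result = execute_module(call["module"], call["params"])
print(result)  # {'words': 2}
```

The model's output is validated JSON instead of freeform code, so there is nothing to copy, paste, or fix.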
Everything runs on your machine. Model, tools, execution.
The model does not see 300+ tool schemas.
Only 6 MCP tools are exposed:
- `list_modules`
- `search_modules`
- `get_module_info`
- `get_module_examples`
- `execute_module`
- `validate_params`
Modules are discovered dynamically and executed through `execute_module()`.
Tool schema size stays constant.
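A minimal sketch of that pattern (the registry entries, docstrings, and return shapes below are assumptions for illustration; only the tool names come from the list above):

```python
# Sketch: a fixed MCP surface over a dynamic module registry.
# Registry contents and return shapes are assumptions for illustration.

MODULES = {
    "browser.goto":       {"doc": "Navigate to a URL", "run": lambda p: {"ok": True, "url": p["url"]}},
    "browser.screenshot": {"doc": "Capture the page",  "run": lambda p: {"ok": True, "path": p["path"]}},
    "http.request":       {"doc": "Make an HTTP call", "run": lambda p: {"ok": True}},
}

def list_modules() -> list[str]:
    return sorted(MODULES)

def search_modules(query: str) -> list[str]:
    q = query.lower()
    return [n for n, m in MODULES.items() if q in n or q in m["doc"].lower()]

def execute_module(name: str, params: dict) -> dict:
    if name not in MODULES:
        return {"error": f"unknown module: {name}"}
    return MODULES[name]["run"](params)
```

Adding a new module grows the registry, not the MCP schema: the model always sees the same handful of entry points.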
In practice, model size and reasoning stability matter more than clever prompt engineering.
Say you ask:
"Open example.com, extract the H1, and take a screenshot."
The flow looks like:
1. `search_modules("browser")`
2. `execute_module("browser.goto", {...})`
3. `execute_module("browser.extract_text", {...})`
4. `execute_module("browser.screenshot", {...})`
The model isn't writing browser automation code.
It's orchestrating real execution.
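Spelled out as plain calls, the chain looks like this. The executor here is a stub — real modules would drive a live browser, while these stand-ins return canned structured results:

```python
# The chain above, as explicit tool calls. The executor is a stub:
# real modules drive a live browser; these return canned results.

def execute_module(name: str, params: dict) -> dict:
    canned = {
        "browser.goto":         {"ok": True},
        "browser.extract_text": {"h1_text": "Example Domain"},
        "browser.screenshot":   {"screenshot_path": "/tmp/example.png"},
    }
    return canned.get(name, {"error": f"unknown module: {name}"})

output: dict = {}
for name, params in [
    ("browser.goto",         {"url": "https://example.com"}),
    ("browser.extract_text", {"selector": "h1"}),
    ("browser.screenshot",   {"path": "/tmp/example.png"}),
]:
    result = execute_module(name, params)
    output.update({k: v for k, v in result.items() if k != "ok"})

print(output)  # {'h1_text': 'Example Domain', 'screenshot_path': '/tmp/example.png'}
```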
Structured output comes back like:

```json
{
  "h1_text": "Example Domain",
  "screenshot_path": "/tmp/example.png"
}
```

Install with pip (a virtualenv is recommended):

```bash
python -m venv .venv
source .venv/bin/activate
pip install flyto-core
```

Example MCP config:
```json
{
  "flyto-core": {
    "command": "python",
    "args": ["-m", "core.mcp_server"]
  }
}
```

From my testing:
- 70B-class models → stable multi-step chaining
- 13B → inconsistent
- 7B → fragile beyond single-step calls
Tool chaining reliability scales more with model capacity than prompt tweaks.
- **Does the model have to load 300+ tool schemas?** No. Only 6 tool schemas are registered; modules are invoked dynamically.
- **Does this need a fine-tuned or special model?** No. Standard MCP tool-calling works fine.
- **Does it work in a virtual environment?** Yes. Fully compatible with venv.
flyto-core isn't just a tool list.
It's an execution layer for local models.
If local AI is going to move beyond demos, we need reliable multi-step execution — not just better text generation.
This is one attempt at that direction.