For technical details, please see our paper (extended version), "Agentic Specification Generator for Move Programs", published at ASE'25.
The tool is called msg.
Most of the code under deps/ is not our work; our contributions there are limited to some patches in deps/aptos-core and deps/movefmt.
RUSTFLAGS="--cfg tokio_unstable" cargo build --releaseThe binary is in target/release/msg
```bash
cd deps/aptos-core
RUSTFLAGS="--cfg tokio_unstable" cargo build --release
```

The binary is `target/release/aptos`.
Please copy it to aptos-local and put it in a directory on your PATH (e.g.,
$HOME/.local/bin).
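For example, assuming `$HOME/.local/bin` is already on your `PATH`:

```bash
cp target/release/aptos "$HOME/.local/bin/aptos-local"
which aptos-local  # confirm the renamed binary resolves on PATH
```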
Please check deps/aptos-core/scripts/dev_setup.sh for the dependencies the
Move Prover needs to work.
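A typical invocation might look like the sketch below; the flags reflect recent aptos-core versions of the script and are an assumption, so inspect the script before running it.

```bash
cd deps/aptos-core
# -b: batch (non-interactive) mode; -y: install Move Prover dependencies
# (z3, boogie, dotnet). These flags may not match your aptos-core
# revision -- check scripts/dev_setup.sh itself first.
./scripts/dev_setup.sh -b -y
```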
MSG will call aptos-local internally.
Building is tested to work with gcc-14 for the current Cargo.lock:

```bash
export CC=gcc-14
export CXX=g++-14
```

For easier environment management, use the provided .envrc.example:
```bash
# Copy the example file
cp .envrc.example .envrc

# Edit .envrc and add your API keys
vim .envrc  # or your preferred editor

# If using direnv (recommended):
direnv allow

# Or source manually:
source .envrc
```

The .envrc file lets you manage all environment variables in one place, including:
- LLM API keys (OpenAI, Claude, DeepSeek, LiteLLM)
- Compiler settings (GCC version, Rust flags)
- Development settings (logging, debugging)
Note: Make sure .envrc is in your .gitignore to avoid committing secrets!
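As a rough sketch of a populated .envrc (all values are placeholders; the repository's .envrc.example is the authoritative template):

```bash
# LLM API keys -- set only the ones for the backends you use
export OPENAI_API_KEY=sk-your-openai-key
export CLAUDE_API_KEY=sk-ant-your-claude-key

# Compiler settings
export CC=gcc-14
export CXX=g++-14
export RUSTFLAGS="--cfg tokio_unstable"

# Development settings (RUST_LOG is illustrative; adjust to your setup)
export RUST_LOG=info
```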
You only need to set the API key for the backend you're using:
For GPT-3.5, GPT-4, o1, and o3-mini models:

```bash
export OPENAI_API_KEY=sk-your-openai-key
```

For Claude 3.5 and 3.7 Sonnet:

```bash
export CLAUDE_API_KEY=sk-ant-your-claude-key
```

For DeepSeek v3 and R1:

```bash
export DEEPSEEK_API_KEY=your-deepseek-key
```

For unified access to multiple LLM providers via a LiteLLM proxy:

```bash
export LITELLM_API_KEY=sk-your-litellm-key
export LITELLM_BASE_URL=http://localhost:4000  # Required: your LiteLLM proxy URL
```

LiteLLM Setup:
- Install: `pip install litellm[proxy]`
- Create `litellm_config.yaml` with your provider configs
- Start the proxy: `litellm --config litellm_config.yaml --port 4000`
- Set both environment variables above
See the LiteLLM documentation for configuration details.
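Putting those steps together, a session might look like the following sketch (the contents of litellm_config.yaml are provider-specific and omitted here):

```bash
# Install and start the proxy (config schema is documented by LiteLLM)
pip install 'litellm[proxy]'
litellm --config litellm_config.yaml --port 4000

# In another shell, point msg at the proxy
export LITELLM_API_KEY=sk-your-litellm-key
export LITELLM_BASE_URL=http://localhost:4000
msg -p demo/BasicCoin --llm-backend litellm-gpt4o -m BasicCoin -f transfer -r3
```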
```bash
msg --llm-backend <backend> -m <module> -f <function> [OPTIONS]
```

MSG automatically searches for Move.toml in the current directory and parent directories.
- `-p, --path <PATH>` - Path to the Move project directory (optional, defaults to the current directory)
- `--llm-backend <BACKEND>` - LLM backend to use (see Available Backends below)
- `-m, --module <MODULE>` - Module name to analyze
- `-f, --function <FUNCTION>` - Function name to generate specs for
- `-r, --rounds <N>` - Number of refinement rounds (default: 3)
- `--agent` - Use agent-based refinement (iterative, more thorough)
- `--inline` - Enable function inlining (shows both original and inlined versions to the LLM)
- `--eval-output-file <FILE>` - Save detailed results to a JSON file
- `--file <PATH>` - Custom output path for the generated spec (default: `msg.spec.move`)
Agent-based generation with GPT-4o (from the project directory):

```bash
cd demo/BasicCoin
msg --llm-backend gpt-4o -m BasicCoin -f transfer -r3 --agent
```

Or specify the path explicitly:

```bash
msg -p demo/BasicCoin --llm-backend gpt-4o -m BasicCoin -f transfer -r3 --agent
```

All-in-one generation with Claude:

```bash
msg -p demo/BasicCoin --llm-backend claude-3-5-sonnet -m BasicCoin -f transfer -r3
```

With function inlining and o3-mini:

```bash
msg -p demo/BasicCoin --llm-backend o3-mini -m BasicCoin -f transfer -r3 --inline --agent
```

Using a LiteLLM proxy:

```bash
export LITELLM_API_KEY=sk-1234
export LITELLM_BASE_URL=http://localhost:4000
msg -p demo/BasicCoin --llm-backend litellm-gpt4o -m BasicCoin -f transfer -r3 --agent
```

Save detailed results:

```bash
msg -p demo/BasicCoin --llm-backend o3-mini -m BasicCoin -f transfer -r3 \
    --agent --eval-output-file results.json
```

Custom output file:

```bash
msg -p demo/BasicCoin --llm-backend o3-mini -m BasicCoin -f transfer -r3 \
    --agent --file my-spec.move
```

MSG generates the following output files:
- `msg.spec.move` (default) - The final generated Move specification for the target function
  - Contains the spec block with `ensures`, `aborts_if`, `modifies`, etc.
  - Includes any error messages as comments if generation failed
  - Can be customized with the `--file <path>` argument
- JSON evaluation file (if `--eval-output-file` is specified) - Detailed generation logs including:
  - All refinement rounds with prompts and responses
  - Prover results for each iteration
  - Timing information
  - Final success/failure status
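To skim a saved evaluation file without assuming its schema, something like this works (jq is not part of MSG; the keys shown depend on what msg actually emitted):

```bash
jq 'keys' results.json    # discover the top-level fields
jq . results.json | less  # pretty-print the full generation log
```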
Available Backends:

OpenAI:

- `gpt-3.5-turbo` - GPT-3.5 Turbo
- `gpt-4-turbo` - GPT-4 Turbo
- `gpt-4o` - GPT-4 Omni
- `gpt-4o-mini` - GPT-4 Omni Mini
- `o1` - OpenAI o1 (reasoning model)
- `o1-mini` - OpenAI o1 Mini
- `o3-mini` - OpenAI o3 Mini (medium reasoning effort)
- `o3-mini-high` - OpenAI o3 Mini (high reasoning effort)
- `o3-mini-low` - OpenAI o3 Mini (low reasoning effort)

Claude:

- `claude-3-5-sonnet` - Claude 3.5 Sonnet
- `claude-3-7-sonnet` - Claude 3.7 Sonnet

DeepSeek:

- `deepseek-v3` - DeepSeek v3
- `deepseek-r1` - DeepSeek R1 (reasoning model)

LiteLLM Proxy:

- `litellm` - Generic (default: gpt-4)
- `litellm-gpt4o` - GPT-4o via LiteLLM
- `litellm-claude` - Claude 3.5 Sonnet via LiteLLM
- `litellm-o1` - OpenAI o1 via LiteLLM
- `litellm-o3-mini` - OpenAI o3-mini via LiteLLM
- `litellm-deepseek` - DeepSeek via LiteLLM
The demo/BasicCoin directory contains a complete example with:
- A simple coin transfer implementation
- Reference specification outputs (agent-based and all-in-one)
- Automated end-to-end test script
Run on BasicCoin demo:
```bash
cd demo/BasicCoin
msg -p . --llm-backend o3-mini -m BasicCoin -f transfer -r3 --agent
```

Automated testing:
The demo/run-e2e-test.sh script provides comprehensive testing:
```bash
cd demo

# Run with default settings
./run-e2e-test.sh

# Test different backends
./run-e2e-test.sh --llm-backend gpt-4o

# With inlining enabled
./run-e2e-test.sh --inline --rounds 2

# Show help
./run-e2e-test.sh --help
```

See demo/README.md for detailed testing instructions and output examples.
To add support for new LLM models, edit src/llm_config.rs:
In the LLMBackends lazy_static block (around line 72), add your new backend:
backends.insert("your-model-name".to_string(), LLMBackend {
model_iden: ModelIden::new(AdapterKind::OpenAI, "actual-model-id"),
chat_options: ChatOptions {
max_tokens: Some(8192), // Optional, adjust as needed
..Default::default()
},
is_reasoning: false, // Set true for reasoning models (o1, o3, deepseek-r1)
api_key_env_var: "YOUR_API_KEY_ENV_VAR".to_string(),
custom_base_url: None, // Or Some("https://api.example.com") for custom endpoints
});AdapterKind::OpenAI- OpenAI and OpenAI-compatible APIsAdapterKind::Anthropic- Claude models (requires max_tokens)AdapterKind::DeepSeek- DeepSeek models- Other adapters available in the genai crate
- `model_iden`: Adapter type + the actual model ID from the provider
- `chat_options`: Model-specific options (max_tokens, temperature, etc.)
- `is_reasoning`: Set to `true` for reasoning models (affects output handling)
- `api_key_env_var`: Environment variable name for the API key
- `custom_base_url`: Optional custom endpoint (for proxies or custom deployments)
backends.insert("gpt-4.5-turbo".to_string(), LLMBackend {
model_iden: ModelIden::new(AdapterKind::OpenAI, "gpt-4.5-turbo-2025-01-15"),
chat_options: ChatOptions {
..Default::default()
},
is_reasoning: false,
api_key_env_var: "OPENAI_API_KEY".to_string(),
custom_base_url: None,
});backends.insert("custom-llm".to_string(), LLMBackend {
model_iden: ModelIden::new(AdapterKind::OpenAI, "your-model"),
chat_options: ChatOptions {
..Default::default()
},
is_reasoning: false,
api_key_env_var: "CUSTOM_API_KEY".to_string(),
custom_base_url: Some("https://your-api.example.com".to_string()),
});After adding the backend:
```bash
cargo build --release
export YOUR_API_KEY_ENV_VAR=your-key
msg -p demo/BasicCoin --llm-backend your-model-name -m BasicCoin -f transfer -r1
```

Agentic Specification Generator for Move Programs
```bibtex
@inproceedings{fu:msg,
  title     = {{Agentic Specification Generator for Move Programs}},
  author    = {Yu-Fu Fu and Meng Xu and Taesoo Kim},
  booktitle = {40th IEEE/ACM International Conference on Automated Software Engineering},
  month     = nov,
  year      = 2025,
  address   = {Seoul, Korea},
}
```