
MSG (Move Specification Generator)

For technical details, please see our paper (extended version), "Agentic Specification Generator for Move Programs," published at ASE'25.

The tool is called msg.

Most of the code under deps/ is not our work; our contributions there are limited to some patches in deps/aptos-core and deps/movefmt.

Build

MSG

RUSTFLAGS="--cfg tokio_unstable" cargo build --release

The binary is at target/release/msg.

Build Aptos Move Toolchain

cd deps/aptos-core
RUSTFLAGS="--cfg tokio_unstable" cargo build --release

The binary is target/release/aptos. Copy it, renamed to aptos-local, into a directory on your PATH (e.g., $HOME/.local/bin).

See deps/aptos-core/scripts/dev_setup.sh for the dependencies the Move Prover needs.

MSG will call aptos-local internally.
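
For example, one way to install it (assuming $HOME/.local/bin is already on your PATH; the --version call is just a quick sanity check):

cp target/release/aptos "$HOME/.local/bin/aptos-local"
aptos-local --version  # verify the binary resolves from PATH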

Building is tested to work with gcc-14 under the current Cargo.lock:

export CC=gcc-14
export CXX=g++-14

Environment Setup

Using .envrc (Recommended)

For easier environment management, use the provided .envrc.example:

# Copy the example file
cp .envrc.example .envrc

# Edit .envrc and add your API keys
vim .envrc  # or your preferred editor

# If using direnv (recommended):
direnv allow

# Or source manually:
source .envrc

The .envrc file lets you manage all environment variables in one place, including:

  • LLM API keys (OpenAI, Claude, DeepSeek, LiteLLM)
  • Compiler settings (GCC version, Rust flags)
  • Development settings (logging, debugging)

Note: Make sure .envrc is in your .gitignore to avoid committing secrets!
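
As a sketch, a minimal .envrc might look like this (set only the key for the backend you use; the variable names match the sections below):

# LLM API key (pick the one for your backend)
export OPENAI_API_KEY=sk-your-openai-key

# Compiler settings (see Build)
export CC=gcc-14
export CXX=g++-14
export RUSTFLAGS="--cfg tokio_unstable"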

Manual Environment Setup

You only need to set the API key for the backend you're using:

OpenAI Models

For GPT-3.5, GPT-4, o1, and o3-mini models:

export OPENAI_API_KEY=sk-your-openai-key

Claude Models

For Claude 3.5 and 3.7 Sonnet:

export CLAUDE_API_KEY=sk-ant-your-claude-key

DeepSeek Models

For DeepSeek v3 and R1:

export DEEPSEEK_API_KEY=your-deepseek-key

LiteLLM Proxy

For unified access to multiple LLM providers via LiteLLM proxy:

export LITELLM_API_KEY=sk-your-litellm-key
export LITELLM_BASE_URL=http://localhost:4000  # Required: your LiteLLM proxy URL

LiteLLM Setup:

  1. Install: pip install 'litellm[proxy]' (quoted so the brackets survive shells like zsh)
  2. Create litellm_config.yaml with your provider configs
  3. Start proxy: litellm --config litellm_config.yaml --port 4000
  4. Set both environment variables above

See LiteLLM documentation for configuration details.
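
As a rough sketch, a minimal litellm_config.yaml that routes one model name to OpenAI could look like the following (the schema is LiteLLM's, not MSG's; see the LiteLLM docs for the authoritative format):

cat > litellm_config.yaml <<'EOF'
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
EOF
litellm --config litellm_config.yaml --port 4000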

Usage

Basic Command Structure

msg --llm-backend <backend> -m <module> -f <function> [OPTIONS]

MSG automatically searches for Move.toml in the current directory and parent directories.

Common Options

  • -p, --path <PATH> - Path to Move project directory (optional, defaults to current directory)
  • --llm-backend <BACKEND> - LLM backend to use (see Available Backends below)
  • -m, --module <MODULE> - Module name to analyze
  • -f, --function <FUNCTION> - Function name to generate specs for
  • -r, --rounds <N> - Number of refinement rounds (default: 3)
  • --agent - Use agent-based refinement (iterative, more thorough)
  • --inline - Enable function inlining (shows both original and inlined versions to LLM)
  • --eval-output-file <FILE> - Save detailed results to JSON file
  • --file <PATH> - Custom output path for generated spec (default: msg.spec.move)

Examples

Agent-based generation with GPT-4o (from project directory):

cd demo/BasicCoin
msg --llm-backend gpt-4o -m BasicCoin -f transfer -r3 --agent

Or specify path explicitly:

msg -p demo/BasicCoin --llm-backend gpt-4o -m BasicCoin -f transfer -r3 --agent

All-in-one generation with Claude:

msg -p demo/BasicCoin --llm-backend claude-3-5-sonnet -m BasicCoin -f transfer -r3

With function inlining and o3-mini:

msg -p demo/BasicCoin --llm-backend o3-mini -m BasicCoin -f transfer -r3 --inline --agent

Using LiteLLM proxy:

export LITELLM_API_KEY=sk-1234
export LITELLM_BASE_URL=http://localhost:4000
msg -p demo/BasicCoin --llm-backend litellm-gpt4o -m BasicCoin -f transfer -r3 --agent

Save detailed results:

msg -p demo/BasicCoin --llm-backend o3-mini -m BasicCoin -f transfer -r3 \
  --agent --eval-output-file results.json

Custom output file:

msg -p demo/BasicCoin --llm-backend o3-mini -m BasicCoin -f transfer -r3 \
  --agent --file my-spec.move

Output Files

MSG generates the following output files:

  • msg.spec.move (default) - The final generated Move specification for the target function

    • Contains the spec block with ensures, aborts_if, modifies, etc.
    • Includes any error messages as comments if generation failed
    • Can be customized with --file <path> argument
  • JSON evaluation file (if --eval-output-file specified) - Detailed generation logs including:

    • All refinement rounds with prompts and responses
    • Prover results for each iteration
    • Timing information
    • Final success/failure status
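
For example, after a run you can inspect both artifacts (jq is optional, and the exact JSON fields depend on the msg version):

cat msg.spec.move   # the generated spec block
jq . results.json   # pretty-print the evaluation log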

Available Backends

OpenAI:

  • gpt-3.5-turbo - GPT-3.5 Turbo
  • gpt-4-turbo - GPT-4 Turbo
  • gpt-4o - GPT-4 Omni
  • gpt-4o-mini - GPT-4 Omni Mini
  • o1 - OpenAI o1 (reasoning model)
  • o1-mini - OpenAI o1 Mini
  • o3-mini - OpenAI o3 Mini (medium reasoning effort)
  • o3-mini-high - OpenAI o3 Mini (high reasoning effort)
  • o3-mini-low - OpenAI o3 Mini (low reasoning effort)

Claude:

  • claude-3-5-sonnet - Claude 3.5 Sonnet
  • claude-3-7-sonnet - Claude 3.7 Sonnet

DeepSeek:

  • deepseek-v3 - DeepSeek v3
  • deepseek-r1 - DeepSeek R1 (reasoning model)

LiteLLM Proxy:

  • litellm - Generic (default: gpt-4)
  • litellm-gpt4o - GPT-4o via LiteLLM
  • litellm-claude - Claude 3.5 Sonnet via LiteLLM
  • litellm-o1 - OpenAI o1 via LiteLLM
  • litellm-o3-mini - OpenAI o3-mini via LiteLLM
  • litellm-deepseek - DeepSeek via LiteLLM

Examples and Testing

BasicCoin Demo

The demo/BasicCoin directory contains a complete example with:

  • A simple coin transfer implementation
  • Reference specification outputs (agent-based and all-in-one)
  • Automated end-to-end test script

Run on BasicCoin demo:

cd demo/BasicCoin
msg -p . --llm-backend o3-mini -m BasicCoin -f transfer -r3 --agent

Automated testing: the demo/run-e2e-test.sh script exercises the pipeline end to end:

cd demo

# Run with default settings
./run-e2e-test.sh

# Test different backends
./run-e2e-test.sh --llm-backend gpt-4o

# With inlining enabled
./run-e2e-test.sh --inline --rounds 2

# Show help
./run-e2e-test.sh --help

See demo/README.md for detailed testing instructions and output examples.

Adding New LLM Backends

To add support for new LLM models, edit src/llm_config.rs:

1. Add Backend Configuration

In the LLMBackends lazy_static block (around line 72), add your new backend:

backends.insert("your-model-name".to_string(), LLMBackend {
    model_iden: ModelIden::new(AdapterKind::OpenAI, "actual-model-id"),
    chat_options: ChatOptions {
        max_tokens: Some(8192),  // Optional, adjust as needed
        ..Default::default()
    },
    is_reasoning: false,  // Set true for reasoning models (o1, o3, deepseek-r1)
    api_key_env_var: "YOUR_API_KEY_ENV_VAR".to_string(),
    custom_base_url: None,  // Or Some("https://api.example.com") for custom endpoints
});

2. Supported Adapter Types

  • AdapterKind::OpenAI - OpenAI and OpenAI-compatible APIs
  • AdapterKind::Anthropic - Claude models (requires max_tokens)
  • AdapterKind::DeepSeek - DeepSeek models
  • Other adapters available in the genai crate

3. Configuration Fields

  • model_iden: Adapter type + actual model ID from provider
  • chat_options: Model-specific options (max_tokens, temperature, etc.)
  • is_reasoning: Set to true for reasoning models (affects output handling)
  • api_key_env_var: Environment variable name for API key
  • custom_base_url: Optional custom endpoint (for proxies or custom deployments)

4. Example: Adding a New OpenAI Model

backends.insert("gpt-4.5-turbo".to_string(), LLMBackend {
    model_iden: ModelIden::new(AdapterKind::OpenAI, "gpt-4.5-turbo-2025-01-15"),
    chat_options: ChatOptions {
        ..Default::default()
    },
    is_reasoning: false,
    api_key_env_var: "OPENAI_API_KEY".to_string(),
    custom_base_url: None,
});

5. Example: Adding a Custom API Endpoint

backends.insert("custom-llm".to_string(), LLMBackend {
    model_iden: ModelIden::new(AdapterKind::OpenAI, "your-model"),
    chat_options: ChatOptions {
        ..Default::default()
    },
    is_reasoning: false,
    api_key_env_var: "CUSTOM_API_KEY".to_string(),
    custom_base_url: Some("https://your-api.example.com".to_string()),
});

6. Rebuild and Test

After adding the backend:

cargo build --release
export YOUR_API_KEY_ENV_VAR=your-key
msg -p demo/BasicCoin --llm-backend your-model-name -m BasicCoin -f transfer -r1

Publications

Agentic Specification Generator for Move Programs

@inproceedings{fu:msg,
  title        = {{Agentic Specification Generator for Move Programs}},
  author       = {Yu-Fu Fu and Meng Xu and Taesoo Kim},
  booktitle    = {40th IEEE/ACM International Conference on Automated Software Engineering},
  month        = nov,
  year         = 2025,
  address      = {Seoul, Korea},
}
