A framework for creating and storing FollowTheMoney entities, used by OpenLobbying.
> **Warning**
> This is a work in progress. Expect breaking changes and incomplete features.
Muckrake is the data pipeline. It is partially inspired by zavod and other FollowTheMoney tools.
Run `uv run muckrake --help` for a full list of available commands.
You can find crawlers for various datasets in `datasets/`. At a minimum, each dataset consists of a `config.yml` with metadata and a `crawl.py` script that outputs FollowTheMoney statements in CSV format.

To crawl a dataset, run `uv run muckrake crawl {dataset_name}`. Run `uv run muckrake list` to see available datasets.
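To illustrate the crawler contract, here is a minimal sketch of a `crawl.py` emitting statement rows as CSV. The column names below are assumptions for illustration; check an existing dataset in `datasets/` for the real statement schema.

```python
import csv
import io

# Illustrative only: the real statement-CSV schema is defined by
# FollowTheMoney / Muckrake; these column names are assumptions.
COLUMNS = ["entity_id", "schema", "prop", "value", "dataset"]

def crawl(dataset: str):
    # A real crawl.py would fetch and parse a source here; we hard-code
    # one record to show the output shape.
    yield {"entity_id": "p-001", "schema": "Person", "prop": "name",
           "value": "Jane Doe", "dataset": dataset}

def write_statements(rows, fh):
    writer = csv.DictWriter(fh, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)

buf = io.StringIO()
write_statements(crawl("example_dataset"), buf)
```

Each property of an entity becomes its own statement row, which is what makes the later dedupe and load steps composable.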
Many data sources have composite fields that contain multiple entities. We use LLMs to extract unique entities and relationships from these fields, and store them as candidates in the database for review and approval. See NER docs for details.
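In Muckrake the extraction itself is done by an LLM; the rule-based splitter below is only a stand-in to show the shape of the review candidates. The field names and `pending` status are assumptions.

```python
import re

# Hedged sketch: an LLM does the real extraction. This splitter just
# illustrates turning one composite field into reviewable candidates.
def extract_candidates(source_id: str, composite: str):
    parts = re.split(r",|\band\b|;", composite)
    for name in filter(None, (p.strip() for p in parts)):
        # Candidates are stored for human review, not applied directly.
        yield {"source_id": source_id, "name": name, "status": "pending"}

cands = list(extract_candidates("row-42", "Google, Meta and Amazon"))
```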
```
# Create extraction candidates for one dataset
uv run muckrake ner-extract open_access --extractor llm --limit 50

# Review candidates in a terminal UI
uv run muckrake ner-review
```

Our goal is to link entities across datasets to provide a unified view of lobbying and political finance for any given person, company, or organisation.
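A hedged sketch of how cross-dataset dedupe candidates might be generated: normalise names, then pair entities that collide. Muckrake's actual `xref` logic may differ; the normalisation rules here are assumptions.

```python
# Illustrative candidate generation for cross-dataset entity linking.
def norm(name: str) -> str:
    # Assumed normalisation: lowercase, expand a common abbreviation,
    # collapse whitespace. Real matching is likely fuzzier.
    return " ".join(name.lower().replace("ltd", "limited").split())

def xref(a_entities, b_entities):
    index = {}
    for e in a_entities:
        index.setdefault(norm(e["name"]), []).append(e)
    for e in b_entities:
        for match in index.get(norm(e["name"]), []):
            # Each pair is a candidate for human review, not a merge.
            yield (match["id"], e["id"])

a = [{"id": "orcl-1", "name": "Acme Ltd"}]
b = [{"id": "prca-9", "name": "ACME Limited"}]
pairs = list(xref(a, b))
```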
```
# Create dedupe candidates across all datasets
uv run muckrake xref

# Review candidates in a terminal UI
uv run muckrake dedupe
```

We also want to collapse duplicate relationship edges across datasets, especially for ORCL and PRCA. This is done automatically; no review step is required.
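Since edge dedupe needs no review, it can be as simple as collapsing edges that share a key. The choice of `(source, target, schema)` as the key is an assumption for illustration.

```python
# Hedged sketch: collapse duplicate relationship edges across datasets
# by keeping the first edge seen per (source, target, schema) key.
def dedupe_edges(edges):
    seen = {}
    for e in edges:
        key = (e["source"], e["target"], e["schema"])
        seen.setdefault(key, e)  # first edge per key wins
    return list(seen.values())

edges = [
    {"source": "a", "target": "b", "schema": "Representation", "dataset": "orcl"},
    {"source": "a", "target": "b", "schema": "Representation", "dataset": "prca"},
]
deduped = dedupe_edges(edges)
```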
```
uv run muckrake dedupe-edges
```

Statements are loaded into a working store (SQLite for local development, Postgres in production) with `uv run muckrake load`. This reads the statement CSV files and applies any approved NER candidates before materialising entities and relationships.
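Conceptually, materialisation folds per-property statement rows back into entities. A minimal sketch, assuming the same illustrative statement columns as above (the real load step goes through the working store):

```python
# Hedged sketch: group statements by entity_id and fold each property
# into a multi-valued props dict.
def materialise(statements):
    entities = {}
    for s in statements:
        ent = entities.setdefault(
            s["entity_id"], {"schema": s["schema"], "props": {}}
        )
        ent["props"].setdefault(s["prop"], []).append(s["value"])
    return entities

stmts = [
    {"entity_id": "p-001", "schema": "Person", "prop": "name", "value": "Jane Doe"},
    {"entity_id": "p-001", "schema": "Person", "prop": "country", "value": "gb"},
]
ents = materialise(stmts)
```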
The primary user of Muckrake data is OpenLobbying, an open database of lobbying and political finance data.
Start the API server:
```
uv run muckrake server
```

Start the Svelte frontend:

```
cd openlobbying
npm run dev
```

In development, frontend requests to `/api/*` are proxied to `http://127.0.0.1:8000` via Vite.
- Local default: SQLite at `data/muckrake.db`.
- Production: set `MUCKRAKE_DATABASE_URL` to a SQLAlchemy-compatible Postgres URL, for example:

```
export MUCKRAKE_DATABASE_URL="postgresql+psycopg://muckrake:[email protected]:5432/muckrake"
```

- VPS guide and templates: `docs/deploy/README.md`
- MVP deploy model: promote one curated DB artifact (includes dedupe + NER state), not dataset files alone.
- One-command deploy (code + data):

```
./scripts/deploy_to_vps.sh {ip_address}
```