MarkCrawl by iD8 🕷️📝

Turn any webpage or website into clean Markdown for LLM pipelines — in one command.


pip install markcrawl
markcrawl --base https://docs.example.com --out ./output --show-progress

MarkCrawl is a crawl-and-structure engine. It fetches one page or crawls an entire website, strips navigation/scripts/boilerplate, and writes clean Markdown files with a structured JSONL index. Every page includes a citation with the access date. No API keys needed.

Everything else — LLM extraction, Supabase upload, MCP server, LangChain tools — is optional and installed separately.

LLM agents: Load docs/LLM_PROMPT.md as a system prompt to generate correct MarkCrawl commands automatically.

Quickstart (2 minutes)

pip install markcrawl
markcrawl --base https://quotes.toscrape.com --out ./demo --max-pages 5 --show-progress

Your ./demo folder now contains:

demo/
├── index__a4f3b2c1d0.md    ← clean Markdown of the page
├── page-2__b7e2d1f0a3.md
├── ...
└── pages.jsonl              ← structured index (one JSON line per page)

Each line in pages.jsonl:

{
  "url": "https://quotes.toscrape.com/",
  "title": "Quotes to Scrape",
  "crawled_at": "2026-04-04T12:30:00Z",
  "citation": "Quotes to Scrape. quotes.toscrape.com. Available at: https://quotes.toscrape.com/ [Accessed April 04, 2026].",
  "tool": "markcrawl",
  "text": "# Quotes to Scrape\n\n> \"The world as we have created it is a process of our thinking...\" — Albert Einstein\n\nTags: change, deep-thoughts, thinking, world..."
}
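
The index is plain JSON Lines, so downstream code needs only the standard library to consume it. A minimal loader sketch (the record shape matches the example above; the one-line sample file written here is just for illustration):

```python
import json
from pathlib import Path

def load_pages(index_path):
    """Yield one dict per crawled page from a pages.jsonl index."""
    with open(index_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Write a one-record sample index shaped like the example above, then read it back.
sample = {
    "url": "https://quotes.toscrape.com/",
    "title": "Quotes to Scrape",
    "crawled_at": "2026-04-04T12:30:00Z",
    "tool": "markcrawl",
}
Path("pages.jsonl").write_text(json.dumps(sample) + "\n", encoding="utf-8")
for page in load_pages("pages.jsonl"):
    print(page["title"], "->", page["url"])
```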

Common Recipes

Scrape a single page:

markcrawl --base https://example.com/pricing --no-sitemap --max-pages 1

Scrape a single JS-rendered page (React, Vue, YouTube, etc.):

markcrawl --base "https://www.youtube.com/@channel/videos" \
  --no-sitemap --max-pages 1 --render-js
# → outputs one .md file with video titles, view counts, and dates

For infinite-scroll pages like YouTube, this captures the first ~28 videos from the initial render.

Crawl a docs site:

markcrawl --base https://docs.example.com --max-pages 500 --concurrency 5 --show-progress

Crawl a subsection without sitemap wandering:

Large sites (YouTube, GitHub, etc.) have sitemaps with thousands of unrelated pages. Use --no-sitemap to crawl only from your target URL:

markcrawl --base https://docs.example.com/guides \
  --no-sitemap --max-pages 50 --show-progress

Competitive analysis (crawl 3 competitors, extract pricing):

markcrawl --base https://competitor-one.com/pricing --no-sitemap --max-pages 1 --out ./comp1
markcrawl --base https://competitor-two.com/pricing --no-sitemap --max-pages 1 --out ./comp2
markcrawl --base https://competitor-three.com/pricing --no-sitemap --max-pages 1 --out ./comp3
markcrawl-extract \
  --jsonl ./comp1/pages.jsonl ./comp2/pages.jsonl ./comp3/pages.jsonl \
  --fields pricing_tiers features free_trial --show-progress
# → extracted.jsonl with structured pricing data across all three

Docs site → RAG chatbot (full pipeline: crawl, embed, query):

markcrawl --base https://docs.example.com --out ./docs --max-pages 500 --concurrency 5 --show-progress
markcrawl-upload --jsonl ./docs/pages.jsonl --show-progress
# → pages are chunked, embedded, and uploaded to Supabase/pgvector
# Wire your chatbot to query the vector table — see docs/SUPABASE.md

API docs → code generation prompt:

markcrawl --base https://api.example.com/docs --out ./api-docs --max-pages 200 --show-progress
# Feed the output to an LLM:
# "Using the API documentation in ./api-docs/pages.jsonl, generate a
#  typed Python client with methods for each endpoint."

Back up a blog before it shuts down:

markcrawl --base https://engineering.example.com/blog \
  --no-sitemap --max-pages 1000 --concurrency 5 --out ./blog-archive --show-progress
# → every post saved as clean Markdown with citations and access dates

Resume an interrupted crawl:

markcrawl --base https://docs.example.com --out ./docs --resume --show-progress

How it compares to other crawlers

Different tools make different tradeoffs. This table summarizes the main differences:

|  | MarkCrawl | FireCrawl | Crawl4AI | Scrapy |
|---|---|---|---|---|
| License | MIT | AGPL-3.0 | Apache-2.0 | BSD-3 |
| Install | pip install markcrawl | SaaS or self-host | pip + Playwright | pip + framework |
| Output | Markdown + JSONL | Markdown + JSON | Markdown | Custom pipelines |
| JS rendering | Optional (--render-js) | Built-in | Built-in | Plugin |
| LLM extraction | Optional add-on | Via API | Built-in | None |
| Best for | Single-site crawl → Markdown | Hosted scraping API | AI-native crawling | Large-scale distributed |

Each tool has strengths: FireCrawl excels as a hosted API, Crawl4AI has deep browser automation, and Scrapy handles massive distributed workloads. MarkCrawl focuses on simple local crawls that produce LLM-ready Markdown.

Benchmark results (8 tools, 8 sites, 92 queries)

Speed: scrapy+md is fastest (9.1 pages/sec), colly+md second (5.8), markcrawl third (2.9). Firecrawl (self-hosted) is slowest at 0.8 pages/sec and crawled only 6 of 8 sites (react-dev and stripe-docs failed with connection resets). See speed comparison.

Output cleanliness: markcrawl has the lowest nav pollution (4 words of preamble per page vs 275-398 for others), at the cost of 84% recall vs 97% for crawlee. See quality comparison.

RAG answer quality: markcrawl produces the highest LLM answer scores (3.91/5) with the fewest chunks per page (10.1). Firecrawl scores well on answer quality (4.04/5) but on a smaller query set (70 vs 92 queries). The gap between tools is small but consistent:

| Tool | Chunks/page | Answer Quality (/5) | Annual cost (100K pages, 1K queries/day) |
|---|---|---|---|
| markcrawl | 10.1 | 3.91 | $4,505 |
| scrapy+md | 12.6 | 3.86 | $5,464 |
| firecrawl | 13.0 | 4.04 | $5,835 |
| crawl4ai | 16.9 | 3.82 | $6,960 |
| colly+md | 19.0 | 3.83 | $7,213 |
| playwright | 19.8 | 3.74 | $7,320 |
| crawlee | 21.1 | 3.80 | $7,467 |

Fewer chunks = lower storage and embedding costs. Cleaner chunks = better answers with less context. For the complete cost analysis across all scales (100 to 1M pages) with full methodology, see COST_AT_SCALE.md.
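
Embedding and storage cost scale roughly linearly with total chunk count, so the chunks-per-page column translates directly into spend. An illustrative back-of-envelope model using the figures from the table above (the per-chunk cost is an assumed placeholder, not a quoted price):

```python
PAGES = 100_000
COST_PER_CHUNK = 0.0002  # assumed placeholder price, not a real quote

def embedding_cost(chunks_per_page, pages=PAGES, per_chunk=COST_PER_CHUNK):
    """Linear cost model: total chunks times per-chunk cost."""
    return pages * chunks_per_page * per_chunk

markcrawl_cost = embedding_cost(10.1)  # chunks/page from the table above
crawlee_cost = embedding_cost(21.1)
print(f"markcrawl: ${markcrawl_cost:,.0f}  crawlee: ${crawlee_cost:,.0f}")
print(f"ratio: {crawlee_cost / markcrawl_cost:.2f}x")
```

Whatever the absolute price, the ratio (about 2.1x here) holds: half the chunks means roughly half the embedding bill.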

See also: Retrieval quality | Answer quality

Installation

The core crawler is the only thing you need. Everything else is optional.

pip install markcrawl                # Core crawler (free, no API keys)

Optional add-ons:

pip install markcrawl[extract]       # + LLM extraction (OpenAI, Claude, Gemini, Grok)
pip install markcrawl[js]            # + JavaScript rendering (Playwright)
pip install markcrawl[upload]        # + Supabase upload with embeddings
pip install markcrawl[mcp]           # + MCP server for AI agents
pip install markcrawl[langchain]     # + LangChain tool wrappers
pip install markcrawl[all]           # Everything

For Playwright, also run playwright install chromium after installing.

Install from source (for development)
git clone https://github.com/AIMLPM/markcrawl.git
cd markcrawl
python -m venv .venv
source .venv/bin/activate
pip install -e ".[all]"

Crawling

markcrawl --base https://www.example.com --out ./output --show-progress

Add flags as needed:

markcrawl \
  --base https://www.example.com \
  --out ./output \
  --include-subdomains \
  --render-js \
  --concurrency 5 \
  --proxy http://proxy:8080 \
  --max-pages 200 \
  --format markdown \
  --show-progress

Each flag is described in the argument table below.

Resume an interrupted crawl:

markcrawl --base https://www.example.com --out ./output --resume --show-progress

Output

Each page becomes a .md file with a citation header:

# Getting Started

> URL: https://docs.example.com/getting-started
> Crawled: April 04, 2026
> Citation: Getting Started. docs.example.com. Available at: https://docs.example.com/getting-started [Accessed April 04, 2026].

Welcome to the platform. This guide walks you through installation...

Navigation, footer, cookie banners, and scripts are stripped. Only the main content remains.
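
Because the header follows a fixed "> Key: value" pattern, the metadata can be recovered from any saved .md file without consulting pages.jsonl. A minimal parser sketch (assumes the exact header layout shown above):

```python
def parse_header(markdown_text):
    """Extract the > Key: value citation header from a MarkCrawl .md file."""
    meta = {}
    for line in markdown_text.splitlines():
        if line.startswith("> ") and ": " in line:
            key, _, value = line[2:].partition(": ")
            meta[key] = value
        elif meta and line.strip() and not line.startswith(">"):
            break  # first body line after the header: stop scanning
    return meta

page = """# Getting Started

> URL: https://docs.example.com/getting-started
> Crawled: April 04, 2026

Welcome to the platform."""
print(parse_header(page))
```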

All crawler CLI arguments
| Argument | Description |
|---|---|
| --base | Base site URL to crawl |
| --out | Output directory |
| --format | markdown or text (default: markdown) |
| --show-progress | Print progress and crawl events |
| --render-js | Render JavaScript with Playwright before extracting |
| --concurrency | Pages to fetch in parallel (default: 1) |
| --proxy | HTTP/HTTPS proxy URL |
| --resume | Resume from saved state |
| --include-subdomains | Include subdomains under the base domain |
| --max-pages | Max pages to save; 0 = unlimited (default: 500) |
| --delay | Minimum delay between requests in seconds (default: 0; the adaptive throttle adjusts automatically) |
| --timeout | Per-request timeout in seconds (default: 15) |
| --min-words | Skip pages with fewer words (default: 20) |
| --user-agent | Override the default user agent |
| --use-sitemap / --no-sitemap | Enable/disable sitemap discovery. Use --no-sitemap to scrape a specific page or subsection; without it, large sites (YouTube, GitHub) may discover thousands of unrelated pages via their sitemap |

Optional: structured extraction

If you need structured data (not just text), the extraction add-on uses an LLM to pull specific fields from each page.

pip install markcrawl[extract]

markcrawl-extract \
  --jsonl ./output/pages.jsonl \
  --fields company_name pricing features \
  --show-progress

Auto-discover fields across multiple crawled sites:

markcrawl-extract \
  --jsonl ./comp1/pages.jsonl ./comp2/pages.jsonl ./comp3/pages.jsonl \
  --auto-fields \
  --context "competitor pricing analysis" \
  --show-progress

Supports OpenAI, Anthropic (Claude), Google Gemini, and xAI (Grok) via --provider.

Extraction details

Provider and model selection

markcrawl-extract --jsonl ... --fields pricing --provider openai         # default
markcrawl-extract --jsonl ... --fields pricing --provider anthropic      # Claude
markcrawl-extract --jsonl ... --fields pricing --provider gemini         # Gemini
markcrawl-extract --jsonl ... --fields pricing --provider grok           # Grok
markcrawl-extract --jsonl ... --fields pricing --model gpt-4o           # override model

| Provider | API key env var | Default model |
|---|---|---|
| OpenAI | OPENAI_API_KEY | gpt-4o-mini |
| Anthropic | ANTHROPIC_API_KEY | claude-sonnet-4-20250514 |
| Google Gemini | GEMINI_API_KEY | gemini-2.0-flash |
| xAI (Grok) | XAI_API_KEY | grok-3-mini-fast |

All extraction CLI arguments

| Argument | Description |
|---|---|
| --jsonl | Path(s) to pages.jsonl; pass multiple for cross-site analysis |
| --fields | Field names to extract (space-separated) |
| --auto-fields | Auto-discover fields by sampling pages |
| --context | Describe your goal for auto-discovery |
| --sample-size | Pages to sample for auto-discovery (default: 3) |
| --provider | openai, anthropic, gemini, or grok |
| --model | Override the default model |
| --output | Output path (default: extracted.jsonl) |
| --delay | Delay between LLM calls in seconds (default: 0.25) |
| --show-progress | Print progress |

Output format

Extracted rows include LLM attribution:

{
  "url": "https://competitor.com/pricing",
  "citation": "Pricing. competitor.com. Available at: ... [Accessed April 04, 2026].",
  "pricing_tiers": "Starter ($29/mo), Pro ($99/mo), Enterprise (contact sales)",
  "extracted_by": "gpt-4o-mini (openai)",
  "extraction_note": "Field values were extracted by an LLM and may be interpreted, not verbatim."
}
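
Comparing sites downstream is then a matter of grouping rows by domain. A sketch that pivots extracted rows into a field-by-site mapping (the rows here are illustrative, shaped like the record above):

```python
from urllib.parse import urlparse

def pivot_by_site(rows, field):
    """Map each row's domain to its extracted value for one field."""
    return {urlparse(r["url"]).netloc: r.get(field) for r in rows}

rows = [
    {"url": "https://competitor-one.com/pricing", "pricing_tiers": "Starter ($29/mo)"},
    {"url": "https://competitor-two.com/pricing", "pricing_tiers": "Free, Team ($49/mo)"},
]
print(pivot_by_site(rows, "pricing_tiers"))
```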

Optional: Supabase vector search (RAG)

Chunk pages, generate embeddings, and upload to Supabase with pgvector:

pip install markcrawl[upload]

markcrawl --base https://docs.example.com --out ./output --show-progress
markcrawl-upload --jsonl ./output/pages.jsonl --show-progress

Requires SUPABASE_URL, SUPABASE_KEY, and OPENAI_API_KEY. See docs/SUPABASE.md for table setup, query examples, and recommendations.

Optional: agent integrations

MarkCrawl includes integrations for AI agents. Each is an optional add-on.

MCP Server (Claude Desktop, Cursor, Windsurf)
pip install markcrawl[mcp]
{
  "mcpServers": {
    "markcrawl": {
      "command": "python",
      "args": ["-m", "markcrawl.mcp_server"]
    }
  }
}

Tools: crawl_site, list_pages, read_page, search_pages, extract_data

LangChain Tool
pip install markcrawl[langchain]
from markcrawl.langchain import all_tools
from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType

agent = initialize_agent(tools=all_tools, llm=ChatOpenAI(model="gpt-4o-mini"),
                         agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION)
agent.run("Crawl docs.example.com and summarize their auth guide")

OpenClaw Skill (WhatsApp, Telegram, Slack)
npx clawhub install markcrawl-skill

See AIMLPM/markcrawl-clawhub-skill.

LLM assistant prompt

Copy the system prompt from docs/LLM_PROMPT.md into any LLM to get an assistant that generates correct MarkCrawl commands.

When NOT to use MarkCrawl

  • Sites behind login/auth — no cookie or session support
  • Aggressive bot protection (Cloudflare, Akamai) — no anti-bot evasion
  • Millions of pages — designed for hundreds to low thousands; use Scrapy for scale
  • PDF content — HTML only (PDF support is on the roadmap)
  • JavaScript SPAs — add markcrawl[js] and use --render-js for React/Vue/Angular
  • Infinite-scroll pages — --render-js renders the initial page load but does not scroll; you'll get the first screenful of content (e.g., ~28 of 82 YouTube videos). For complete listings, combine with the platform's API or RSS feed (e.g., YouTube's /feeds/videos.xml?channel_id=...)
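
For the YouTube case specifically, the channel feed is plain Atom XML, so a complete video list can be pulled with the standard library and merged with MarkCrawl's output. A sketch against an embedded sample feed (fetching the live feed from the /feeds/videos.xml URL is left out to keep this self-contained):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace used by YouTube feeds

def video_titles(feed_xml):
    """Return the title of every <entry> in an Atom feed."""
    root = ET.fromstring(feed_xml)
    return [entry.findtext(f"{ATOM}title") for entry in root.iter(f"{ATOM}entry")]

sample = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Channel</title>
  <entry><title>Video one</title></entry>
  <entry><title>Video two</title></entry>
</feed>"""
print(video_titles(sample))
```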

Architecture

MarkCrawl is a web crawler. The optional layers (extraction, upload, agents) are separate add-ons that work with the crawler's output.

CORE (free, no API keys)              OPTIONAL ADD-ONS
┌──────────────────────────┐
│ 1. Discover URLs         │          markcrawl[extract]  — LLM field extraction
│    (sitemap or links)    │          markcrawl[upload]   — Supabase/pgvector RAG
│ 2. Fetch & clean HTML    │          markcrawl[js]       — Playwright JS rendering
│ 3. Write Markdown + JSONL│          markcrawl[mcp]      — MCP server for agents
│    + auto-citation       │          markcrawl[langchain] — LangChain tools
└──────────────────────────┘

For internals, see docs/ARCHITECTURE.md.

Extending MarkCrawl

from markcrawl import crawl

result = crawl("https://example.com", out_dir="./output")
print(f"Saved {result.pages_saved} pages")

# Process output in your own pipeline
import json
with open(result.index_file) as f:
    for line in f:
        page = json.loads(line)
        your_db.insert(page)  # Pinecone, Weaviate, Elasticsearch, etc.

# Use individual components
from markcrawl import chunk_text
from markcrawl.extract import LLMClient, extract_fields

See docs/ARCHITECTURE.md for the full module map and extensibility guide.
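
If you only need the chunking idea without the package, the overlap scheme is easy to reproduce. A standalone illustrative chunker (not MarkCrawl's actual chunk_text implementation; window and overlap sizes are arbitrary here):

```python
def chunk_words(text, size=200, overlap=40):
    """Split text into word windows of `size` words, overlapping the previous window by `overlap`."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    words = text.split()
    chunks, step = [], size - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break  # last window reached the end of the text
    return chunks

doc = " ".join(f"w{i}" for i in range(500))
parts = chunk_words(doc)
print(len(parts), [len(p.split()) for p in parts])
```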

Cost

The core crawler is free. Two optional features have API costs:

| Feature | Cost | When |
|---|---|---|
| Structured extraction | ~$0.01-0.03 per page | markcrawl-extract |
| Supabase upload | ~$0.0001 per page | markcrawl-upload |

Setting up API keys

Only needed for extraction and upload. The core crawler requires no keys.

# .env — in your working directory
OPENAI_API_KEY="sk-..."           # extraction (--provider openai) + upload
ANTHROPIC_API_KEY="sk-ant-..."    # extraction (--provider anthropic)
GEMINI_API_KEY="AI..."            # extraction (--provider gemini)
XAI_API_KEY="xai-..."             # extraction (--provider grok)
SUPABASE_URL="https://..."        # upload
SUPABASE_KEY="eyJ..."             # upload (service-role key)
set -a; source .env; set +a   # export the variables so child processes inherit them

Project structure
.
├── README.md
├── LICENSE
├── PRIVACY.md
├── SECURITY.md
├── CONTRIBUTING.md
├── CODE_OF_CONDUCT.md
├── Dockerfile
├── Makefile
├── glama.json
├── pyproject.toml
├── requirements.txt
├── .github/
│   ├── pull_request_template.md
│   └── workflows/
│       ├── ci.yml
│       └── publish.yml
├── docs/
│   ├── ARCHITECTURE.md
│   ├── LLM_PROMPT.md
│   ├── MCP_SUBMISSION.md
│   ├── RAG_RETRIEVAL_RESEARCH.md
│   └── SUPABASE.md
├── tests/
│   ├── __init__.py
│   ├── test_chunker.py
│   ├── test_core.py
│   ├── test_extract.py
│   └── test_upload.py
└── markcrawl/
    ├── __init__.py
    ├── cli.py
    ├── core.py               # orchestrator
    ├── fetch.py              # HTTP/Playwright fetching
    ├── robots.py             # robots.txt parsing
    ├── throttle.py           # adaptive rate limiting
    ├── state.py              # crawl state & resume
    ├── urls.py               # URL normalization & filtering
    ├── extract_content.py    # HTML → Markdown conversion
    ├── chunker.py
    ├── exceptions.py
    ├── utils.py
    ├── extract.py            # LLM field extraction
    ├── extract_cli.py
    ├── upload.py
    ├── upload_cli.py
    ├── langchain.py
    └── mcp_server.py

Roadmap

  • Canonical URL support
  • Fuzzy duplicate-content detection
  • PDF support
  • Authenticated crawling
  • Multi-provider embeddings

Shipped features

  • pip install markcrawl on PyPI
  • 102 automated tests + GitHub Actions CI (Python 3.10-3.13) + ruff linting
  • Markdown and plain text output with auto-citation
  • Sitemap-first crawling with robots.txt compliance
  • Text chunking with configurable overlap
  • Supabase/pgvector upload for RAG
  • JavaScript rendering via Playwright
  • Concurrent fetching and proxy support
  • Resume interrupted crawls
  • LLM extraction (OpenAI, Claude, Gemini, Grok) with auto-field discovery
  • MCP server, LangChain tools, OpenClaw skill

Contributing

See CONTRIBUTING.md. If you used an LLM to generate code, include the prompt in your PR.

Security

See SECURITY.md.

Privacy

MarkCrawl runs locally. No telemetry, no analytics, no data sent anywhere. See PRIVACY.md.

License

MIT. See LICENSE.
