* chore: change license to MIT, and add ws dependency
Closes #29
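For context, the package.json side of this commit would look roughly like the sketch below; the exact `ws` version range is an assumption, not copied from the commit:

```json
{
  "license": "MIT",
  "dependencies": {
    "ws": "^8.18.0"
  }
}
```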
* refactor: standardize Firecrawl naming, update README and CHANGELOG, and adjust license year
- Updated instances of "FireCrawl" to "Firecrawl" for consistency across documentation and code
- Enhanced README with additional configuration instructions and acknowledgments
- Revised CHANGELOG to reflect recent changes and optimizations
- Updated license year from 2023 to 2025
Closes #30
README.md (+41 −18)
@@ -2,7 +2,9 @@

 A Model Context Protocol (MCP) server implementation that integrates with [Firecrawl](https://github.com/mendableai/firecrawl) for web scraping capabilities.

-Big thanks to [@vrknetha](https://github.com/vrknetha), [@cawstudios](https://caw.tech) for the initial implementation!
+> Big thanks to [@vrknetha](https://github.com/vrknetha), [@cawstudios](https://caw.tech) for the initial implementation!
+>
+> You can also play around with [our MCP Server on MCP.so's playground](https://mcp.so/playground?server=firecrawl-mcp-server). Thanks to MCP.so for hosting and [@gstarwd](https://github.com/gstarwd) for integrating our server.

 ## Features
@@ -11,10 +13,10 @@ Big thanks to [@vrknetha](https://github.com/vrknetha), [@cawstudios](https://ca
 - URL discovery and crawling
 - Web search with content extraction
 - Automatic retries with exponential backoff
-- Efficient batch processing with built-in rate limiting
+- Efficient batch processing with built-in rate limiting
 - Credit usage monitoring for cloud API
 - Comprehensive logging system
-- Support for cloud and self-hosted FireCrawl instances
+- Support for cloud and self-hosted Firecrawl instances
 - Mobile/Desktop viewport support
 - Smart content filtering with tag inclusion/exclusion
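The "automatic retries with exponential backoff" item above is the standard doubling-delay pattern; see the sketch below. This is a minimal illustration, not the server's actual code — the attempt count, base delay, and helper name are assumptions:

```typescript
// Illustrative retry helper with exponential backoff (assumed defaults).
async function withRetry<T>(
  operation: () => Promise<T>,
  maxAttempts = 3,
  initialDelayMs = 1000
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await operation();
    } catch (error) {
      if (attempt >= maxAttempts) throw error;
      // Delay doubles after each failed attempt: 1s, 2s, 4s, ...
      const delayMs = initialDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```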
@@ -36,22 +38,44 @@ npm install -g firecrawl-mcp

 Configuring Cursor 🖥️
 Note: Requires Cursor version 0.45.6+
+
+For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers:
+
+[Cursor MCP Server Configuration Guide](https://docs.cursor.com/context/model-context-protocol#configuring-mcp-servers)
…
 > If you are using Windows and are running into issues, try `cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"`

-Replace `your-api-key` with your FireCrawl API key.
+Replace `your-api-key` with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys

-After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use FireCrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.
+After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.
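The collapsed lines above hold the actual Cursor JSON config. As a hedged sketch, an `mcpServers` entry of the usual shape would look like this (the `firecrawl-mcp` server key is an assumption; only the command and env variable are confirmed by the surrounding diff):

```json
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "your-api-key"
      }
    }
  }
}
```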
 ### Running on Windsurf

@@ -64,17 +88,16 @@ Add this to your `./codeium/windsurf/model_config.json`:
       "command": "npx",
       "args": ["-y", "firecrawl-mcp"],
       "env": {
-        "FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE"
+        "FIRECRAWL_API_KEY": "YOUR_API_KEY"
       }
     }
   }
 }
 ```
-
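Assembled from the fragment above, the complete `model_config.json` would plausibly read as follows; the outer `mcpServers` key and the server entry name are inferred from the nesting of closing braces, not shown verbatim in the diff:

```json
{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}
```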
 ### Installing via Smithery (Legacy)

-To install FireCrawl for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@mendableai/mcp-server-firecrawl):
+To install Firecrawl for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@mendableai/mcp-server-firecrawl):

 ```bash
 npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude
…
 Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.

 ```json
…
@@ -422,10 +447,8 @@ Arguments:
 - showFullText (boolean, optional): Whether to include llms-full.txt contents in the response.

 Returns:
-- Generated llms.txt file contents and optionally the llms-full.txt (data.llmstxt and/or data.llmsfulltxt)
-
-

+- Generated llms.txt file contents and optionally the llms-full.txt (data.llmstxt and/or data.llmsfulltxt)
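A hypothetical call shape for this tool — apart from `showFullText`, the tool and argument names here are assumptions based on the server's `firecrawl_*` naming convention:

```json
{
  "name": "firecrawl_generate_llmstxt",
  "arguments": {
    "url": "https://example.com",
    "maxUrls": 10,
    "showFullText": true
  }
}
```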
 ## Logging System

@@ -440,7 +463,7 @@ The server includes comprehensive logging:
 Example log messages:

 ```
-[INFO] FireCrawl MCP Server initialized successfully
+[INFO] Firecrawl MCP Server initialized successfully
 [INFO] Starting scrape for URL: https://example.com
 [INFO] Batch operation queued with ID: batch_1
 [WARNING] Credit usage has reached warning threshold
package.json (+4 −3)
@@ -1,7 +1,7 @@
 {
   "name": "firecrawl-mcp",
   "version": "1.7.2",
-  "description": "MCP server for FireCrawl web scraping integration. Supports both cloud and self-hosted instances. Features include web scraping, batch processing, structured data extraction, and LLM-powered content analysis.",
+  "description": "MCP server for Firecrawl web scraping integration. Supports both cloud and self-hosted instances. Features include web scraping, batch processing, structured data extraction, and LLM-powered content analysis.",