This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
This is a Python-based documentation scraper that converts ANY documentation website into a Claude skill. The core is a single-file tool (cli/doc_scraper.py) that scrapes documentation, extracts code patterns, detects programming languages, and generates structured skill files ready for use with Claude.
pip3 install requests beautifulsoup4
python3 cli/doc_scraper.py --config configs/godot.json
python3 cli/doc_scraper.py --config configs/react.json
python3 cli/doc_scraper.py --config configs/vue.json
python3 cli/doc_scraper.py --config configs/django.json
python3 cli/doc_scraper.py --config configs/fastapi.json
python3 cli/doc_scraper.py --interactive
python3 cli/doc_scraper.py --name react --url https://react.dev/ --description "React framework"
python3 cli/doc_scraper.py --config configs/godot.json --skip-scrape
# If scrape was interrupted
python3 cli/doc_scraper.py --config configs/godot.json --resume
# Start fresh (clear checkpoint)
python3 cli/doc_scraper.py --config configs/godot.json --fresh
# 1. Estimate page count
python3 cli/estimate_pages.py configs/godot.json
# 2. Split into focused sub-skills
python3 cli/split_config.py configs/godot.json --strategy router
# 3. Generate router skill
python3 cli/generate_router.py configs/godot-*.json
# 4. Package multiple skills
python3 cli/package_multi.py output/godot*/
# Option 1: During scraping (API-based, requires ANTHROPIC_API_KEY)
pip3 install anthropic
export ANTHROPIC_API_KEY=sk-ant-...
python3 cli/doc_scraper.py --config configs/react.json --enhance
# Option 2: During scraping (LOCAL, no API key - uses Claude Code Max)
python3 cli/doc_scraper.py --config configs/react.json --enhance-local
# Option 3: Standalone after scraping (API-based)
python3 cli/enhance_skill.py output/react/
# Option 4: Standalone after scraping (LOCAL, no API key)
python3 cli/enhance_skill_local.py output/react/
The LOCAL enhancement option (--enhance-local or enhance_skill_local.py) opens a new terminal with Claude Code, which analyzes the reference files and enhances SKILL.md automatically. This requires a Claude Code Max plan but no API key.
# One-time setup
./setup_mcp.sh
# Then in Claude Code, use natural language:
"List all available configs"
"Generate config for Tailwind at https://tailwindcss.com/docs"
"Split configs/godot.json using router strategy"
"Generate router for configs/godot-*.json"
"Package skill at output/react/"
9 MCP tools available: list_configs, generate_config, validate_config, estimate_pages, scrape_docs, package_skill, upload_skill, split_config, generate_router
Set "max_pages": 20 in the config file to test with fewer pages.
The core scraper is contained in cli/doc_scraper.py (~737 lines). It follows a class-based architecture: a single DocToSkillConverter class handles both phases (a simplified sketch follows the repository layout below):
Scrape Phase: crawls the site and writes output/{name}_data/pages/*.json plus summary.json
Build Phase: reads output/{name}_data/ and generates output/{name}/SKILL.md plus output/{name}/references/*.md

Skill_Seekers/
├── cli/ # CLI tools
│ ├── doc_scraper.py # Main scraping & building tool
│ ├── enhance_skill.py # AI enhancement (API-based)
│ ├── enhance_skill_local.py # AI enhancement (LOCAL, no API)
│ ├── estimate_pages.py # Page count estimator
│ ├── split_config.py # Large docs splitter (NEW)
│ ├── generate_router.py # Router skill generator (NEW)
│ ├── package_skill.py # Single skill packager
│ └── package_multi.py # Multi-skill packager (NEW)
├── mcp/ # MCP server
│ ├── server.py # 9 MCP tools (includes upload)
│ └── README.md
├── configs/ # Preset configurations
│ ├── godot.json
│ ├── godot-large-example.json # Large docs example (NEW)
│ ├── react.json
│ └── ...
├── docs/ # Documentation
│ ├── CLAUDE.md # Technical architecture (this file)
│ ├── LARGE_DOCUMENTATION.md # Large docs guide (NEW)
│ ├── ENHANCEMENT.md
│ ├── MCP_SETUP.md
│ └── ...
└── output/ # Generated output (git-ignored)
├── {name}_data/ # Raw scraped data (cached)
│ ├── pages/ # Individual page JSONs
│ ├── summary.json # Scraping summary
│ └── checkpoint.json # Resume checkpoint (NEW)
└── {name}/ # Generated skill
├── SKILL.md # Main skill file with examples
├── SKILL.md.backup # Backup (if enhanced)
├── references/ # Categorized documentation
│ ├── index.md
│ ├── getting_started.md
│ ├── api.md
│ └── ...
├── scripts/ # Empty (for user scripts)
└── assets/ # Empty (for user assets)
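The two phases map onto the converter roughly as follows. This is a simplified sketch: the class and method names come from the architecture notes and the key-function list below, but the signatures and bodies are illustrative, not the actual implementation.

```python
# Simplified sketch of DocToSkillConverter -- illustrative only.
class DocToSkillConverter:
    def __init__(self, config: dict):
        self.config = config   # parsed configs/{name}.json
        self.pages = []        # scraped page records

    def scrape_all(self):
        """Scrape phase: crawl from base_url, keep URLs that pass
        is_valid_url(), pull content and code via the configured selectors
        (extract_content, detect_language, extract_patterns), and cache
        everything under output/{name}_data/."""
        ...

    def create_enhanced_skill_md(self):
        """Build phase: categorize cached pages (smart_categorize /
        infer_categories), build a quick reference, and write
        output/{name}/SKILL.md plus references/*.md."""
        ...
```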
Config files in configs/*.json contain:

- name: Skill identifier (e.g., "godot", "react")
- description: When to use this skill
- base_url: Starting URL for scraping
- selectors: CSS selectors for content extraction
  - main_content: Main documentation content (e.g., "article", "div[role='main']")
  - title: Page title selector
  - code_blocks: Code sample selector (e.g., "pre code", "pre")
- url_patterns: URL filtering
  - include: Only scrape URLs containing these patterns
  - exclude: Skip URLs containing these patterns
- categories: Keyword-based categorization mapping
- rate_limit: Delay between requests (seconds)
- max_pages: Maximum pages to scrape
- split_strategy: (Optional) How to split large docs: "auto", "category", "router", "size"
- split_config: (Optional) Split configuration
  - target_pages_per_skill: Pages per sub-skill (default: 5000)
  - create_router: Create router/hub skill (default: true)
  - split_by_categories: Category names to split by
- checkpoint: (Optional) Checkpoint/resume configuration
  - enabled: Enable checkpointing (default: false)
  - interval: Save every N pages (default: 1000)

Auto-detect existing data: The tool checks for output/{name}_data/ and prompts to reuse it, avoiding a re-scrape.
Language detection: Detects code languages from:
- CSS class names (language-*, lang-*)
- Language keywords in the code itself (def, const, func, etc.)

Pattern extraction: Looks for "Example:", "Pattern:", "Usage:" markers in the content and extracts the code blocks that follow (up to 5 per page).
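To make both heuristics concrete, here is a rough sketch of how such detection and extraction can work. It is illustrative only, not the actual detect_language()/extract_patterns() code:

```python
import re

def detect_language(css_classes: list[str], code: str) -> str:
    # 1. CSS class hints such as "language-python" or "lang-js"
    for cls in css_classes:
        m = re.match(r"(?:language|lang)-(\w+)", cls)
        if m:
            return m.group(1)
    # 2. Keyword heuristics on the code itself (assumed rules)
    if re.search(r"\bdef \w+\(", code):
        return "python"
    if re.search(r"\bconst \w+\s*=", code):
        return "javascript"
    if re.search(r"\bfunc \w+\(", code):
        return "gdscript"   # func also appears in Go; real code would disambiguate
    return "unknown"

def extract_patterns(page_text: str, code_blocks: list[str]) -> list[str]:
    # Keep code blocks only when an "Example:"/"Pattern:"/"Usage:" marker
    # appears in the surrounding text, capped at 5 per page.
    if re.search(r"\b(Example|Pattern|Usage):", page_text):
        return code_blocks[:5]
    return []
```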
Smart categorization: Assigns each page to a category using the keyword mapping from the config (smart_categorize()); when no mapping is provided, categories are inferred automatically (infer_categories()).
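A rough sketch of the keyword-matching idea (illustrative; the real logic lives in smart_categorize() at doc_scraper.py:280-321):

```python
def smart_categorize(url: str, title: str, categories: dict[str, list[str]]) -> str:
    # categories comes straight from the config,
    # e.g. {"api": ["reference", "class"], "getting_started": ["install"]}
    haystack = f"{url} {title}".lower()
    for category, keywords in categories.items():
        if any(kw in haystack for kw in keywords):
            return category
    return "general"   # assumed fallback bucket
```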
Enhanced SKILL.md: Generated with real code examples and an auto-built quick reference (see generate_quick_reference() and create_enhanced_skill_md()).
AI-Powered Enhancement: Two scripts dramatically improve SKILL.md quality:
- enhance_skill.py: Uses the Anthropic API (~$0.15-$0.30 per skill, requires an API key)
- enhance_skill_local.py: Uses Claude Code Max (free, no API key needed)

Large Documentation Support (NEW): Handles 10K-40K+ page documentation sets:
- split_config.py: Splits large configs into multiple focused sub-skills
- generate_router.py: Creates intelligent router/hub skills that direct queries
- package_multi.py: Packages multiple skills at once

Checkpoint/Resume (NEW): Never lose progress on long scrapes:
- Resume an interrupted scrape with the --resume flag
- Start fresh (clear the checkpoint) with the --fresh flag

Key functions (line ranges in doc_scraper.py):
- is_valid_url(): doc_scraper.py:47-62
- extract_content(): doc_scraper.py:64-131
- detect_language(): doc_scraper.py:133-163
- extract_patterns(): doc_scraper.py:165-181
- scrape_all(): doc_scraper.py:226-249
- smart_categorize(): doc_scraper.py:280-321
- infer_categories(): doc_scraper.py:323-349
- generate_quick_reference(): doc_scraper.py:351-370
- create_enhanced_skill_md(): doc_scraper.py:424-540
- main(): doc_scraper.py:661-733

# 1. Scrape + Build
python3 cli/doc_scraper.py --config configs/godot.json
# Time: 20-40 minutes
# 2. Package
python3 cli/package_skill.py output/godot/
# Result: godot.zip
# 1. Use existing data
python3 cli/doc_scraper.py --config configs/godot.json --skip-scrape
# Time: 1-3 minutes
# 2. Package
python3 cli/package_skill.py output/godot/
# Option 1: Interactive
python3 cli/doc_scraper.py --interactive
# Option 2: Copy and modify
cp configs/react.json configs/myframework.json
# Edit configs/myframework.json
python3 cli/doc_scraper.py --config configs/myframework.json
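For orientation, here is what a minimal configs/myframework.json might contain, written out with Python so the keys can be annotated. Every value below is an illustrative assumption to adapt, not a shipped preset:

```python
import json

# Illustrative config for a hypothetical framework; adjust selectors,
# URL patterns, and categories to the target documentation site.
config = {
    "name": "myframework",
    "description": "Use when working with MyFramework APIs and patterns",
    "base_url": "https://docs.example.com/",
    "selectors": {
        "main_content": "article",   # e.g. "main" or "div[role='main']" on other sites
        "title": "h1",
        "code_blocks": "pre code",
    },
    "url_patterns": {
        "include": ["/docs/"],
        "exclude": ["/blog/", "/changelog/"],
    },
    "categories": {
        "getting_started": ["install", "quickstart", "tutorial"],
        "api": ["reference", "class", "function"],
    },
    "rate_limit": 0.5,   # seconds between requests
    "max_pages": 20,     # keep small while testing, then raise
    "checkpoint": {"enabled": True, "interval": 1000},
}

with open("configs/myframework.json", "w") as f:
    json.dump(config, f, indent=2)
```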
# 1. Estimate page count (fast, 1-2 minutes)
python3 cli/estimate_pages.py configs/godot.json
# 2. Split into focused sub-skills
python3 cli/split_config.py configs/godot.json --strategy router --target-pages 5000
# Creates: godot-scripting.json, godot-2d.json, godot-3d.json, etc.
# 3. Scrape all in parallel (4-8 hours instead of 20-40!)
for config in configs/godot-*.json; do
python3 cli/doc_scraper.py --config "$config" &
done
wait
# 4. Generate intelligent router skill
python3 cli/generate_router.py configs/godot-*.json
# 5. Package all skills
python3 cli/package_multi.py output/godot*/
# 6. Upload all .zip files to Claude
# Result: Router automatically directs queries to the right sub-skill!
Time savings: Parallel scraping cuts a 20-40 hour sequential scrape to roughly 4-8 hours, i.e. the sequential time divided by the number of sub-skills scraped concurrently (about five in this example).
See the full guide: docs/LARGE_DOCUMENTATION.md
To find the right CSS selectors for a documentation site:
import requests
from bs4 import BeautifulSoup

url = "https://docs.example.com/page"
soup = BeautifulSoup(requests.get(url, timeout=30).content, 'html.parser')

# Try different selectors until one returns the main content
print(soup.select_one('article'))
print(soup.select_one('main'))
print(soup.select_one('div[role="main"]'))
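Continuing the same session, the code_blocks selector can be sanity-checked too (e.g., "pre code", as in the config notes above):

```python
# Count the code samples the code_blocks selector would capture
print(len(soup.select('pre code')))
```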
IMPORTANT: You must install the package before running tests
# 1. Install package in editable mode (one-time setup)
pip install -e .
# 2. Run all tests
pytest
# 3. Run specific test files
pytest tests/test_config_validation.py
pytest tests/test_github_scraper.py
# 4. Run with verbose output
pytest -v
# 5. Run with coverage report
pytest --cov=src/skill_seekers --cov-report=html
Why install first?
- Tests import from skill_seekers.cli, which requires the package to be installed
- pip install -e . installs the package in editable mode, so source changes take effect without reinstalling

Test Coverage: Tests live in tests/ (e.g., test_config_validation.py, test_github_scraper.py).
No content extracted: Check main_content selector. Common values: article, main, div[role="main"], div.content
Poor categorization: Edit categories section in config with better keywords specific to the documentation structure
Force re-scrape: Delete cached data with rm -rf output/{name}_data/
Rate limiting issues: Increase rate_limit value in config (e.g., from 0.5 to 1.0 seconds)
After building, verify quality:
cat output/godot/SKILL.md # Should have real code examples
cat output/godot/references/index.md # Should show categories
ls output/godot/references/ # Should have category .md files
Skill_Seekers automatically detects llms.txt files before HTML scraping:
1. {base_url}/llms-full.txt (complete documentation)
2. {base_url}/llms.txt (standard version)
3. {base_url}/llms-small.txt (quick reference)

If no llms.txt is found, the tool automatically falls back to HTML scraping.
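The probing order can be sketched as follows (an assumed helper for illustration, not the actual Skill_Seekers code):

```python
import requests

def find_llms_txt(base_url: str) -> str | None:
    # Probe the three variants in priority order; return the first hit.
    for candidate in ("llms-full.txt", "llms.txt", "llms-small.txt"):
        url = base_url.rstrip("/") + "/" + candidate
        try:
            resp = requests.get(url, timeout=10)
            if resp.ok and resp.text.strip():
                return url
        except requests.RequestException:
            continue
    return None   # caller falls back to HTML scraping
```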