Bugcrowd Scraper is a Python tool that collects engagement data and scope targets from Bugcrowd and exports structured items using Scrapy feeds.
## Features

- Collects engagements from the Bugcrowd public listings endpoint
- Follows each engagement brief to fetch scope details
- Automatic pagination across engagement pages
- Rotating proxy support via proxies.txt
- Deploy and run with Scrapyd + ScrapydWeb
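The listing-and-pagination flow above can be sketched in plain Python. The payload field names (`engagements`, `briefUrl`) and the page-counter convention are illustrative assumptions, not Bugcrowd's actual response schema:

```python
# Sketch of the pagination logic used when walking a paged listings
# endpoint. Field names and the page-counter convention are assumed
# for illustration only.

def parse_listing_page(payload: dict, current_page: int):
    """Extract engagement entries and decide whether to paginate.

    Returns (engagements, next_page), where next_page is None once
    the listing is exhausted.
    """
    engagements = payload.get("engagements", [])
    # An empty page signals the end of the listing.
    next_page = current_page + 1 if engagements else None
    return engagements, next_page


page_1 = {"engagements": [{"name": "demo", "briefUrl": "/demo"}]}
items, nxt = parse_listing_page(page_1, 1)
# One engagement extracted; pagination continues with page 2.
items2, nxt2 = parse_listing_page({"engagements": []}, 2)
# Empty page: nxt2 is None and pagination stops.
```

In the spider itself, each extracted `briefUrl` would be followed with a further request to collect scope details.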
## Requirements

- Python 3.11+
- Scrapyd and Scrapyd Client
- ScrapydWeb and Logparser (optional, recommended)
## Installation

Install the Python dependencies:

```bash
pip install -r requirements.txt
```

## Proxy Configuration

Add proxies to `proxies.txt`, one per line, in the following format:

```
username:password@ip:port
```
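A minimal sketch of turning one of those lines into the URL form Scrapy's `HttpProxyMiddleware` expects, with naive random rotation. The helper names are ours, not part of the project:

```python
import random


def proxy_line_to_url(line: str, scheme: str = "http") -> str:
    """Convert a 'username:password@ip:port' line into a proxy URL."""
    return f"{scheme}://{line.strip()}"


def pick_proxy(lines: list[str]) -> str:
    """Naive rotation: choose a random proxy for each request."""
    return proxy_line_to_url(random.choice(lines))


print(proxy_line_to_url("user:pass@10.0.0.1:8080"))
# http://user:pass@10.0.0.1:8080
```

In a Scrapy middleware, the chosen URL would be assigned to `request.meta["proxy"]`.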
## Deployment

Install the deployment dependencies:

```bash
pip install scrapyd scrapyd-client
```

Start Scrapyd:

```bash
scrapyd
```

Deploy the project:

```bash
scrapyd-client deploy
```

Install the web panel:

```bash
pip install scrapydweb logparser
```

Start the services:

```bash
scrapyd &
scrapydweb &
logparser &
```

## Scrapyd API Endpoints

- POST /schedule.json → Run spider
- GET /listspiders.json → List spiders
- GET /listprojects.json → Projects
- GET /listjobs.json → Active jobs
- POST /cancel.json → Stop job
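The endpoints above can be wrapped in a small helper that builds the request URL and body. The base URL matches Scrapyd's default port; the function names are our own sketch, not a published client API:

```python
from urllib.parse import urlencode

# Scrapyd's default bind address and port.
SCRAPYD = "http://localhost:6800"


def schedule_request(project: str, spider: str):
    """Build the URL and form body for POST /schedule.json."""
    return f"{SCRAPYD}/schedule.json", urlencode(
        {"project": project, "spider": spider}
    )


def listjobs_url(project: str) -> str:
    """Build the URL for GET /listjobs.json."""
    return f"{SCRAPYD}/listjobs.json?{urlencode({'project': project})}"


url, body = schedule_request("bugcrowdscraper", "engagementspider")
# POST `body` to `url` with any HTTP client to start a crawl.
```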
To schedule the spider via API:
```bash
curl http://localhost:6800/schedule.json -d "project=bugcrowdscraper" -d "spider=engagementspider"
```

## Output

Items are exported according to the FEEDS setting in Scrapy or via spider run parameters.
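For reference, a FEEDS configuration in `settings.py` might look like this; the output path and format are examples, not the project's defaults:

```python
# settings.py (sketch): export scraped items as JSON Lines.
FEEDS = {
    "output/engagements.jsonl": {
        "format": "jsonlines",
        "encoding": "utf8",
        "overwrite": True,
    },
}
```

The per-run equivalent is `scrapy crawl engagementspider -O output/engagements.jsonl`, which overwrites the file on each run.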
