
📦 New: S3-Compatible Storage Integration

WebcrawlerAPI now supports direct export to any S3-compatible storage.

What's new:

  • Export crawl results directly to Amazon S3 buckets, or to any S3-compatible storage: Cloudflare R2, DigitalOcean Spaces, Wasabi, Backblaze B2, etc.
  • Simple setup with API keys and bucket information
  • Your keys are not stored after the job ends

How it works:

When starting a job via the API, add a few extra parameters, such as access_key_id, secret_access_key, and a few others. Crawled data will be placed under the path you specify in your bucket, and your keys are deleted once the job ends. Read the Upload to S3 docs for detailed information.
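As a rough illustration, a job request with S3 export enabled might carry a payload like the sketch below. Only access_key_id and secret_access_key are named above; the other field names (url, bucket, path) are illustrative assumptions, so check the Upload to S3 docs for the real schema.

```python
import json

# Hypothetical job payload with S3 export parameters.
# access_key_id / secret_access_key come from the announcement above;
# "url", "bucket", and "path" are assumed names for illustration only.
payload = {
    "url": "https://example.com",        # site to crawl
    "access_key_id": "AKIA...",          # your S3 access key (deleted after the job ends)
    "secret_access_key": "<secret>",     # your S3 secret key (deleted after the job ends)
    "bucket": "my-crawl-results",        # assumed: destination bucket name
    "path": "crawls/example-com/",       # assumed: prefix the crawled data lands under
}

# Serialize for the request body of your HTTP client of choice.
body = json.dumps(payload)
print(body)
```

Because the storage only needs to speak the S3 API, the same fields work for Cloudflare R2, DigitalOcean Spaces, Wasabi, or Backblaze B2 by pointing the credentials at that provider.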