Enable the user to pass a `timeout` parameter to both the scrape and the crawl endpoints. If the timeout is exceeded, send the user a clear error message. On the crawl endpoint, return any pages that have already been scraped, along with a message notifying the user that the timeout was exceeded.
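A minimal sketch of the requested semantics (the function name `crawlWithTimeout` and the result shape are hypothetical illustrations, not Firecrawl's actual API): scrape pages until the deadline passes, then return whatever was collected plus a clear timeout message.

```typescript
// Hypothetical result shape for a crawl that may time out.
interface CrawlResult {
  pages: string[];   // pages scraped before the deadline
  timedOut: boolean; // true if the timeout was exceeded
  message?: string;  // clear error message for the caller
}

// Sketch: scrape URLs one by one, stopping once `timeoutMs` has elapsed.
// Pages scraped so far are still returned, per the issue description.
async function crawlWithTimeout(
  urls: string[],
  scrape: (url: string) => Promise<string>,
  timeoutMs: number,
): Promise<CrawlResult> {
  const deadline = Date.now() + timeoutMs;
  const pages: string[] = [];
  for (const url of urls) {
    if (Date.now() >= deadline) {
      return {
        pages,
        timedOut: true,
        message: `Timeout of ${timeoutMs}ms exceeded; returning ${pages.length} page(s) scraped so far.`,
      };
    }
    pages.push(await scrape(url));
  }
  return { pages, timedOut: false };
}
```

For the scrape endpoint, the same check would simply reject with the error message instead of returning partial results, since there is only one page in flight.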
If the task is completed within two days, we'll include a $10 tip :)
This is an intro bounty. We are looking for enthusiastic contributors who will buy in as we start to ramp up.
/attempt #59
/claim #59
Thank you for contributing to mendableai/firecrawl!
DO NOT START WORKING ON THIS BEFORE GETTING ASSIGNED, OTHERWISE WE CAN'T AWARD THE BOUNTY. Other attempts will be allowed if the assigned user does not open a PR within 48 hours.
@nickscamara Can I get assigned?
@ezhil56x all yours!
@nickscamara Do we need a default timeout, or is one not required?
Hi, is this issue still open, or is someone already working on it?
@parthusun8, the issue is still open, but fixing it would require some fairly complex changes to our Bull queue system to allow the /crawl route to time out. So far, we've found that stopping an active job in Bull isn't possible, which means we'd have to change the deepest parts of our system to add a timeout feature to Firecrawl.