I am scraping a website using Playwright with @sparticuz/chromium in AWS Lambda. It sometimes throws the following error on page.goto():

page.goto: net::ERR_INSUFFICIENT_RESOURCES
Code structure:
An API call is made that scrapes data from a website. Since scraping can fail for various reasons, the code retries up to 2 times: when an error occurs, a new context is created from the existing browser instance and scraping starts again. At the end we iterate over the opened contexts, close them, and then close the browser.
If the code still fails after the 2 internal retries, we return an error response.
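This is not the actual code, just a minimal sketch of the structure described above, assuming playwright-core with @sparticuz/chromium; the handler signature, MAX_RETRIES, and the return shape are illustrative:

```ts
import chromium from "@sparticuz/chromium";
import { chromium as playwright, Browser, BrowserContext } from "playwright-core";

const MAX_RETRIES = 2; // 1 initial attempt + 2 retries

export const handler = async (event: { url: string }) => {
  const browser: Browser = await playwright.launch({
    args: chromium.args,
    executablePath: await chromium.executablePath(),
    headless: true,
  });

  const contexts: BrowserContext[] = [];
  try {
    let lastError: unknown;
    for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
      // On each retry a fresh context is created from the same browser instance.
      const context = await browser.newContext();
      contexts.push(context);
      try {
        const page = await context.newPage();
        await page.goto(event.url, { waitUntil: "domcontentloaded" });
        return { statusCode: 200, body: await page.content() };
      } catch (err) {
        lastError = err; // retry with a new context
      }
    }
    return { statusCode: 500, body: String(lastError) };
  } finally {
    // Iterate over the opened contexts, close them, then close the browser.
    for (const context of contexts) {
      await context.close();
    }
    await browser.close();
  }
};
```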
ERR_INSUFFICIENT_RESOURCES error scenario:
After the error response mentioned above is returned, the next API call to scrape starts failing with ERR_INSUFFICIENT_RESOURCES, and every call from that point on fails with the same error.
I tried setting Lambda memory and ephemeral storage to 10,000 MB, and the error still persists.
Please ensure the browser instance is fully closed before returning the Lambda error response.
It seems the ERR_INSUFFICIENT_RESOURCES error relates to browser resources persisting between Lambda invocations: the execution environment is frozen while the browser is still shutting down and is then reused, with those leftover Chromium resources, for the next invocation.
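A minimal sketch of that suggestion (not a required pattern from Playwright or @sparticuz/chromium): do the context and browser cleanup in a finally block and await it, so the handler only returns, and Lambda only freezes the environment, after Chromium has fully shut down. The scrape() helper here is a hypothetical stand-in for the actual scraping logic.

```ts
import chromium from "@sparticuz/chromium";
import { chromium as playwright, Browser } from "playwright-core";

// Hypothetical placeholder for the real scraping/retry logic.
async function scrape(browser: Browser, url: string): Promise<string> {
  const context = await browser.newContext();
  const page = await context.newPage();
  await page.goto(url, { waitUntil: "domcontentloaded" });
  return page.content();
}

export const handler = async (event: { url: string }) => {
  const browser = await playwright.launch({
    args: chromium.args,
    executablePath: await chromium.executablePath(),
    headless: true,
  });

  try {
    return { statusCode: 200, body: await scrape(browser, event.url) };
  } catch (err) {
    // This return only completes after the finally block has finished,
    // so no browser shutdown work is left pending when Lambda freezes
    // the execution environment.
    return { statusCode: 500, body: String(err) };
  } finally {
    for (const context of browser.contexts()) {
      await context.close().catch(() => {});
    }
    await browser.close();
  }
};
```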