[feature request]: Script to scrape Coursera in Python #1440
Comments
@THEGAMECHANGER416 Welcome to Rotten-Scripts 🥳 Thanks for opening this issue 🙌, it will definitely improve our project 💖. While we take a look at it, if you want to work on this, feel free to self-assign and start working on it. 📄 Use
/assign
This issue has been assigned to @THEGAMECHANGER416!
@THEGAMECHANGER416, this issue hasn't had any activity in 7 days. It will become unassigned in 14 days to make room for someone else to contribute.
Can I be /assigned?
/assign
This issue has been assigned to @RafaelJohn9!
@RafaelJohn9, this issue hasn't had any activity in 7 days. It will become unassigned in 14 days to make room for someone else to contribute.
Is there an existing issue for this?
Describe the feature.
I would like to add a script that scrapes the Coursera website for courses using the BeautifulSoup or Selenium library.
Problem/Motivation.
In today's vast online education landscape, platforms like Coursera offer a wealth of valuable courses, but efficiently accessing detailed course information for research or educational tools is difficult. Manual data collection is time-consuming and error-prone, hindering accurate analysis and tool development.
Possible Solution/Pitch.
Create a Coursera scraper using BeautifulSoup. This would streamline the collection of Coursera course data for developers and researchers: by automating data extraction, it enables effortless creation of educational resources, research studies, and data-driven insights. A rough sketch of the idea follows.
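A minimal sketch of what the script could look like, assuming Coursera's public search page (https://www.coursera.org/search) serves course cards as static HTML. The CSS selectors below are placeholders, not Coursera's real markup; the site's markup changes often and parts of it are JavaScript-rendered, so Selenium may be needed in practice, as noted above.

```python
import requests
from bs4 import BeautifulSoup


def scrape_courses(query: str) -> list[dict]:
    """Fetch the Coursera search page for `query` and extract basic course info."""
    url = "https://www.coursera.org/search"
    # A browser-like User-Agent to avoid being served a bot/consent page.
    headers = {"User-Agent": "Mozilla/5.0"}
    response = requests.get(url, params={"query": query}, headers=headers, timeout=10)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")
    courses = []
    # Hypothetical selector: replace with the actual course-card markup
    # observed in the page source at implementation time.
    for card in soup.select("li.course-card"):
        title = card.select_one("h3")
        partner = card.select_one("p")
        if title:
            courses.append({
                "title": title.get_text(strip=True),
                "partner": partner.get_text(strip=True) if partner else None,
            })
    return courses


if __name__ == "__main__":
    for course in scrape_courses("machine learning"):
        print(course)
```

If the search results turn out to be rendered client-side, the same parsing logic can be reused by swapping `requests` for a Selenium-driven browser and feeding `driver.page_source` to BeautifulSoup.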
Anything else?
No response
Code of Conduct