All the Places

A project to generate point of interest (POI) data, sourced primarily from major websites with 'store location' pages. The project uses scrapy, a popular Python-based web scraping framework, to write individual site spiders that retrieve POI data and publish the results in a standard format. There are various scrapy tutorials available online; this series on YouTube is a reasonable introduction.
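
To give a sense of what a spider looks like, here is a minimal sketch of a scrapy spider. The spider name, domain, start URL, and CSS selectors are all hypothetical placeholders for illustration, not a real site's markup:

    import scrapy

    class ExampleStoreSpider(scrapy.Spider):
        # Hypothetical spider: the name, domain, start URL, and CSS
        # selectors below are illustrative placeholders only.
        name = "example_store"
        allowed_domains = ["example.com"]
        start_urls = ["https://example.com/stores/"]

        def parse(self, response):
            # Yield one POI record per store element on the page.
            for store in response.css("div.store"):
                yield {
                    "ref": store.attrib.get("id"),
                    "name": store.css("h2::text").get(),
                    "addr_full": store.css(".address::text").get(),
                    "lat": store.attrib.get("data-lat"),
                    "lon": store.attrib.get("data-lon"),
                }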

Getting started

Development setup

Windows users may need to follow some extra steps; please follow the scrapy docs for up-to-date details.

  1. Clone a copy of the project from the GitHub All The Places repo (or your own fork if you are considering contributing to the project):

    $ git clone git@github.com:alltheplaces/alltheplaces.git
    
  2. If you haven't done so already, install pipenv and check that it runs:

    $ pipenv --version
    pipenv, version 2022.8.30
    
  3. Use pipenv to install the project dependencies:

    $ cd alltheplaces
    $ pipenv install
    
  4. Test for successful project installation:

    $ pipenv run scrapy
    

    If the above runs without complaint, then you have a functional installation and are ready to run and write spiders.
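
    From there you can list the project's spiders and run one by name. The spider name below is a placeholder for any name printed by `scrapy list`:

    $ pipenv run scrapy list
    $ pipenv run scrapy crawl example_store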

Contributing code

Many of the sites provide their data in a standard, structured format. Others export their data via simple APIs. We have a number of guides to help you develop spiders; a rough sketch of the API case follows below.
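
For the API case, a spider can often fetch a JSON endpoint and yield one item per location. This is a minimal sketch; the endpoint URL and JSON field names are assumptions for illustration, not any real site's API:

    import scrapy

    class ExampleApiSpider(scrapy.Spider):
        # Hypothetical API spider: the endpoint URL and JSON field
        # names are illustrative assumptions only.
        name = "example_api"
        start_urls = ["https://example.com/api/locations"]

        def parse(self, response):
            # Scrapy's TextResponse.json() parses the response body.
            for location in response.json()["locations"]:
                yield {
                    "ref": location["id"],
                    "name": location["name"],
                    "lat": location["latitude"],
                    "lon": location["longitude"],
                }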

The weekly run

The output from running the project is published on a regular cadence to our website, alltheplaces.xyz. You should not run all the spiders yourself to obtain the output: the less the project "bothers" a website, the more likely we are to be tolerated.
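
If you want to work with the published data rather than run spiders, each spider's output is a GeoJSON file. A minimal sketch of reading one downloaded file follows; the filename is a placeholder for a file obtained from alltheplaces.xyz:

    import json

    # Placeholder filename: substitute a spider's GeoJSON file
    # downloaded from alltheplaces.xyz.
    with open("example_store.geojson") as f:
        data = json.load(f)

    for feature in data["features"]:
        # Some features may lack a geometry; skip those.
        geometry = feature.get("geometry")
        if not geometry:
            continue
        lon, lat = geometry["coordinates"]
        print(feature["properties"].get("name"), lat, lon)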

Contact us

Communication is primarily through tickets on the project GitHub issue tracker. Many contributors are also present on OSM US Slack; in particular, we watch the #poi channel.

License

The data generated by our spiders is provided on our website and released under Creative Commons’ CC-0 waiver.

The spider software that produces this data (this repository) is licensed under the MIT license.
