Open, serverless, and local-friendly Data Platform for the Filecoin Ecosystem.
This repository contains all the code and related artifacts to process Filecoin data from diverse sources (on-chain and off-chain). You can go directly to the processed datasets or explore the different metrics and pipelines.
- Open: Code and data are open source and rely on open standards and formats.
- Permissionless Collaboration: Collaborate on data, models, and pipelines. Fork the repo and run the platform locally in minutes. No constraints or platform lock-in.
- Decentralization Options: Runs on a laptop, server, CI runner, or even on decentralized compute networks like Bacalhau. No local setup required; it even works seamlessly in GitHub Codespaces.
- Data as Code: Each commit generates and pushes all table files to R2.
- Modular Flexibility: Replace, extend, or remove individual components. Compatible with tons of tools. At the end of the day, tables are Parquet files.
- Low Friction Data Usage: Raw and processed data is openly available to anyone. Use whatever tool you want (see the query sketch after this list)!
- Modern Data Engineering: Supports data engineering essentials such as typing, testing, materialized views, and development branches. Follows best practices like declarative transformations and builds on state-of-the-art tools like DuckDB.
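Since every table ends up as a Parquet file, you can query the published datasets directly from object storage. Here is a minimal sketch using the DuckDB CLI; the dataset URL is a hypothetical example, so substitute the path of whichever table you want to explore:

```bash
# Query a published Parquet table over HTTP with the DuckDB CLI.
# The URL is illustrative; swap in the table you are interested in.
duckdb -c "
SELECT *
FROM read_parquet('https://data.filecoindataportal.xyz/filecoin_daily_metrics.parquet')
LIMIT 5;
"
```

The same file works with pandas, Polars, Spark, or any other tool that reads Parquet.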
This project is in active development. You can help by sharing ideas, answering questions, reporting and fixing bugs, proposing enhancements, and improving the documentation. Feel free to open issues and pull requests!
Some ways you can contribute to this project:
- Adding new data sources.
- Improving the data quality of existing datasets.
- Adding tests to the data pipelines.
You can run the Filecoin Data Portal locally using a Python virtual environment or VS Code Development Containers. You'll need the following secrets in your environment (see the export sketch after the list):
- A `SPACESCOPE_TOKEN` to access the Spacescope API.
- A Google Cloud Platform `GOOGLE_APPLICATION_CREDENTIALS` for accessing BigQuery.
- A `SPARK_API_BEARER_TOKEN` for accessing the Spark retrievals API.
- A `DUNE_API_KEY` for accessing Dune Analytics.
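A minimal sketch of setting these up, assuming you export them directly in your shell (placeholder values shown; a local `.env` file works just as well):

```bash
# Placeholder values: replace with your own credentials.
export SPACESCOPE_TOKEN="your-spacescope-token"
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/gcp-service-account.json"
export SPARK_API_BEARER_TOKEN="your-spark-bearer-token"
export DUNE_API_KEY="your-dune-api-key"
```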
Clone the repository and run the following commands (or `make setup`) from the root folder:
```bash
# Create a virtual environment
pip install uv && uv venv

# Install the package and dependencies
uv pip install -U -e .[dev]
```
Now, you should be able to spin up the Dagster UI (`make dev`) and access it locally.
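If you want to skip the Makefile, a rough equivalent, assuming `make dev` wraps Dagster's local development server (check the Makefile for the exact target), is:

```bash
# Assumption: `make dev` starts Dagster's local development server.
# Run from the root folder with the virtual environment active.
dagster dev
```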
You can jump into the repository's Development Container. Once inside the development environment, you'll only need to run `make dev` to spin up the Dagster UI locally. The development environment can also run in your browser thanks to GitHub Codespaces!
The datasets provided by this service are made available "as is", without any warranties or guarantees of any kind, either expressed or implied. By using these datasets, you agree that you do so at your own risk.