"Complex" addresses produce Covalent portfolio_v2 timeouts #66
Approaches

I see 2 main types of solutions:
My take:
@acemasterjb What are your thoughts?
@lajarre For solution 2, we might still end up relying on at least one other API. The way I see it, we'd need to query a given treasury for all the ERC20 tokens it holds (this might have to be via Ethplorer), then query the historical balance of each of those assets using Eth RPC getter calls (via a node operator like Infura or Pokt). At that point we can store this data in our db and use it for our benefit.

As for alternative data providers: Ethplorer can return up to 1000 responses per call per account, and there's also Moralis, which is a pretty popular solution (though from personal experience with this service, I think it should be used as a last resort).

Edit: an additional remark on solution 2: we could build and then append to our own database as suggested, but I think we'd still need two queries. One to periodically check for new tokens held by each treasury in our db, and another to fetch the transfer history between the last tick and the current tick (where a "tick" is each time this query process runs).
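The two-query tick loop described above could be sketched roughly like this. This is only an in-memory illustration: `discover_tokens` and `fetch_transfers` are hypothetical stand-ins for the Ethplorer and RPC calls, and `db` is a plain dict standing in for our actual database.

```python
def run_tick(db, treasuries, now, discover_tokens, fetch_transfers):
    """One 'tick' of the proposed process: two queries per treasury."""
    for treasury in treasuries:
        # Query 1: any tokens this treasury holds that we haven't seen yet?
        known = db.setdefault(("tokens", treasury), set())
        known |= set(discover_tokens(treasury))

        # Query 2: transfer history per token since the last tick
        last = db.get(("tick", treasury), 0)
        for token in known:
            db.setdefault(("transfers", treasury, token), []).extend(
                fetch_transfers(treasury, token, last, now)
            )
        db[("tick", treasury)] = now
```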
I ran the scheduler locally with a frequency of 30 seconds for about a day or so. While it was running I logged every address that produced these Covalent ReadTimeout errors, which are most likely "complex" addresses. Here are the results, where
Thanks for that! Which list did the addresses you used come from?
I'm wondering if we couldn't just:
I'm not sure how expensive this would be (using Infura for example, though I'm also wondering whether a private node would be faster here). Also, there might be some tokens which incorrectly implement the

All in all, this option is really about building our own transfers database right on top of the blockchain data. @vaughnmck any thoughts on that?
This is coming from CryptoStats' simple treasury list/adapter.
Just a rough estimate, considering we would need data for each treasury, for each asset/token it holds, and that a new block is mined every ~14s:

Where

Again, this is just a worst-case scenario, but for reference here are Infura's pricing plans.
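A back-of-the-envelope version of this estimate could look as follows. The ~14s block time comes from this thread; the treasury and asset counts below are made-up example numbers, not figures from our data.

```python
# Worst case: one historical-balance RPC call per (treasury, asset, block).
SECONDS_PER_YEAR = 365 * 24 * 3600
BLOCK_TIME = 14  # ~14s per block, per the estimate above

blocks_per_year = SECONDS_PER_YEAR // BLOCK_TIME  # ~2.25M blocks

n_treasuries = 100        # hypothetical count
assets_per_treasury = 25  # hypothetical average

calls = n_treasuries * assets_per_treasury * blocks_per_year
print(f"~{calls:,} RPC calls for one year of data")
```

Even with these modest assumed counts, the total lands in the billions of calls, which is the order of magnitude discussed below.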
I guess this is even worse, because we would need to start indexing from the genesis block, not just from 1 year ago. If we have the blockchain on disk, this is a nested loop like:

Then we would need to dump memory regularly to a database, which we would then need to re-process in order to get a clean treasury.asset balance series.
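The nested loop over local chain data could be sketched like this. It is only an illustration: `blocks` stands in for whatever on-disk block iterator we'd actually use, and only ERC20 Transfer events are modelled.

```python
from collections import defaultdict

def index_treasury_balances(blocks, treasuries):
    """Scan every block's transfers, accumulating per-(treasury, token) balances."""
    balances = defaultdict(int)           # (treasury, token) -> running balance
    for block in blocks:                  # outer loop: every block since genesis
        for t in block["transfers"]:      # inner loop: every Transfer in the block
            if t["to"] in treasuries:
                balances[(t["to"], t["token"])] += t["value"]
            if t["from"] in treasuries:
                balances[(t["from"], t["token"])] -= t["value"]
        # in a real run, `balances` would be dumped to the database periodically
    return balances
```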
My answer is based entirely on resources. We do not have the resources to pay Infura for ~6bn calls, so we should rule that out until we do. We may have enough resources to run our own node, however (running a full node on AWS is ~$1k/y).

I'd recommend maintaining Covalent/Bitquery as a quick retrieval system for when a user searches for new tokens/treasury addresses. We can always backfill the response with more accurate transaction data once we've re-processed the chain.
@acemasterjb What about the following solution: in

We should probably have, on top of our
@lajarre Interesting... What do you think about appending those treasuries to that Redis set you proposed when they fail, and having an hourly/frequent Celery task that runs through, and exhausts, that set, retrying the Covalent fetch?

Also, I should mention that a few commits ago I bumped up the read and connect timeouts for
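The fail-and-retry flow might look something like the sketch below. It uses an in-memory set as a stand-in for the Redis set (SADD/SPOP), and `fetch_portfolio` is a hypothetical stub for the Covalent portfolio_v2 call; in the real service this would be a scheduled Celery task.

```python
failed_treasuries = set()  # stand-in for the Redis set of failed addresses

def fetch_portfolio(address, flaky=frozenset()):
    # Stub for the Covalent call; "complex" addresses (in `flaky`) time out.
    if address in flaky:
        raise TimeoutError(address)
    return {"address": address, "items": []}

def record_failure(address):
    # Called wherever a Covalent ReadTimeout is caught.
    failed_treasuries.add(address)

def retry_failed(flaky=frozenset()):
    """Hourly task body: retry every failed address until the set is exhausted."""
    succeeded = []
    for address in list(failed_treasuries):
        failed_treasuries.discard(address)
        try:
            fetch_portfolio(address, flaky)
            succeeded.append(address)
        except TimeoutError:
            failed_treasuries.add(address)  # put it back for the next run
    return succeeded
```

Addresses that keep timing out simply stay in the set across runs, so the task is naturally idempotent and cheap when the set is empty.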
Sounds like a good approach!
Why not increase the timeout? Ideally we want different timeouts when we're running a cron job compared to when we're serving live requests.
Wouldn't this be inconsequential? I previously bumped up the timeouts under that assumption, since the deployment will time out after 30s anyway.
Yes, that's true for now. Let's just keep this in mind. It will have an impact whenever we start changing the backend architecture (notably #91).
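One way the cron-versus-live distinction could eventually be wired up, as a hypothetical sketch (the environment variable name and timeout values are illustrative, not from our codebase):

```python
import os

def covalent_timeouts():
    """Return (connect, read) timeouts in seconds, depending on run context."""
    if os.environ.get("RUN_CONTEXT") == "cron":
        # Background job: no gateway cutoff, so wait out "complex" addresses.
        return (5, 120)
    # Live request: fail before the 30s deployment timeout mentioned above.
    return (5, 25)
```

The returned tuple is in the shape `requests` accepts for its `timeout` parameter, so it could be passed straight into the Covalent client calls.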
See #24 (comment)
What we mean by "complex address" is an address that has a lot of ERC20 or native token transfers. Covalent doesn't play well with these.
TBD: