Create a standardized (probably-inefficient) frozen database. #820
Comments
Reading CSV files is error-prone, and CSV is not well suited to expressing relations. There's little point in each implementation duplicating this functionality when they're probably just going to load the data into a relational DB anyway. Consider using SQLite for any interop format: it is simple, available everywhere, and probably what any mobile wallet is going to use anyway. If you need a readable CSV file, it can be extracted from the db with a simple query.
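For illustration, a minimal sketch of that extraction using Python's built-in sqlite3 and csv modules (the `channels.db` filename and the `channels` table are hypothetical names, not part of any actual format):

```python
import csv
import sqlite3

# Open the (hypothetical) standardized SQLite export.
conn = sqlite3.connect("channels.db")

# Dump one table to a human-readable CSV file with a simple query.
with open("channels.csv", "w", newline="") as f:
    writer = csv.writer(f)
    cursor = conn.execute("SELECT * FROM channels")
    writer.writerow([col[0] for col in cursor.description])  # header row
    writer.writerows(cursor)

conn.close()
```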
Hi @fluxdeveloper, thanks for the reply. I never said that it should be a CSV.
I agree that this would be desirable for end-users, and implementing export/import to/from a single format is a much better idea than implementing one for each implementation's format. Curious to hear other implementers' feedback, but I'm in favor of this.
Note that one difficulty may be that seeds also differ between implementations...
A big issue that does not seem to have been considered yet: should the standardized database be complete enough to also include private keys?

Each implementation has its own way of generating private keys for channels. I believe most or all of them use some deterministic derivation method, but because this is not standardized, each implementation uses a different one (which may or may not be BIP32-compatible). If the database omits private keys, it seems to me that it is fairly useless: an implementation cannot do anything with the channels exported by another implementation, even if it were given the root private key through a separate mechanism. We would first have to standardize the derivation from the root private key to the per-channel keys, which will be painful for at least one implementation (basically, if the chosen method is not the one your implementation uses now, you are going to cry rivers of blood).

If the standardized database does contain private keys, then it is sufficient to steal funds, and any mechanism that creates it must be put in a higher-security domain (in e.g. the same way that C-Lightning cordons off the private keys in a separate process).

Probably the wisest thing to do would be to have both:
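To make the derivation problem above concrete, here is a hedged sketch of one *invented* deterministic per-channel scheme (HMAC-SHA256 over the funding outpoint, keyed by the root secret). No implementation necessarily does it this way, which is exactly the interop problem:

```python
import hashlib
import hmac

def derive_channel_key(root_secret: bytes, funding_txid: bytes, output_index: int) -> bytes:
    """One *hypothetical* deterministic per-channel derivation.

    Two nodes only agree on the resulting key if they agree on this
    exact construction; today each implementation picks its own,
    so a shared root secret alone is not enough for interop.
    """
    msg = funding_txid + output_index.to_bytes(4, "big")
    return hmac.new(root_secret, msg, hashlib.sha256).digest()
```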
I understand that
But can I assume that implementing it now will cost much less blood than implementing it a year from now? About the channel private keys: I don't know how large this file is, but my assumption is that it is not very complicated and not very large. If this is true, then maybe readability is not very important for this db, and it can be secured/compressed/encrypted. Thanks
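As a hedged sketch of that compressed/encrypted idea: compress the export with the standard library, then encrypt it with the third-party `cryptography` package (filenames here are placeholders):

```python
import gzip

from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Compress the exported database, then encrypt it at rest.
with open("export.db", "rb") as f:
    compressed = gzip.compress(f.read())

key = Fernet.generate_key()  # this key must live in a higher-security domain
with open("export.db.gz.enc", "wb") as f:
    f.write(Fernet(key).encrypt(compressed))
```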
We discussed this during the spec meeting (#821) and the overall consensus from implementers was that this would be undoable in practice. Maybe in a future world when the lightning stack is more stable, but not now. |
I wish this issue would be revisited to allow migration between lnd and CLN without channel closures.
This proposal is for the standardization of a (probably-inefficient) frozen database. The emphasis of this database is on readability and clarity, not efficiency. The db contains enough information that one can resume a node using this file alone.
The proposal is that all implementations will support exporting to this standardized database (the ability to import from the standardized database into each implementation's own database format is a separate matter).
I can imagine something like this:
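For instance, a minimal sketch of what the export container could look like, assuming SQLite; the table and column names below are invented purely for illustration:

```python
import sqlite3

conn = sqlite3.connect("frozen_export.db")
conn.executescript("""
-- Hypothetical schema: names and columns are illustrative only.
CREATE TABLE node (
    node_id      TEXT NOT NULL,   -- hex-encoded public key
    alias        TEXT,
    exported_at  TEXT NOT NULL    -- ISO-8601 timestamp
);
CREATE TABLE channels (
    channel_id    TEXT NOT NULL,
    peer_node_id  TEXT NOT NULL,
    funding_txid  TEXT NOT NULL,
    output_index  INTEGER NOT NULL,
    capacity_sat  INTEGER NOT NULL,
    local_state   TEXT NOT NULL   -- readable JSON blob, not a binary dump
);
""")
conn.commit()
conn.close()
```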
Why?
Now that we have got this out of the way:
It would be much safer to resume a node with something like lnd load than to just copy all files and hope that it works.

Others
A readable standardized-db will probably take more storage, but I don't think that is a problem, as this step will not be executed regularly, and only by people who understand what they are doing. It is important to estimate the amount of storage required before starting the export.
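A hedged sketch of such an estimate, assuming an invented rough per-channel size and checking free disk space with the standard library before exporting:

```python
import shutil

EST_BYTES_PER_CHANNEL = 64 * 1024  # rough, invented figure for illustration

def enough_space_for_export(num_channels: int, target_dir: str = ".") -> bool:
    """Compare a crude size estimate against free space before exporting."""
    estimate = num_channels * EST_BYTES_PER_CHANNEL
    free = shutil.disk_usage(target_dir).free
    return free > 2 * estimate  # keep a generous safety margin

print(enough_space_for_export(500))
```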
csv_data_from_bigsun
@cdecker @rustyrussell @ZmnSCPxj @Roasbeef
Please feel free to edit my proposal to add more reasons. I am aware of my lack of knowledge in the field.