API to retrieve column blob into slice of arbitrary length #116
Hi,

First of all, thank you for creating this library. I've been using `database/sql` for my personal projects for a while now, having to jump through hoops to get any kind of concurrency working and living without goodies such as savepoints. This library seems to provide an easy and lightweight alternative that maps much better to SQLite.

I have a column that holds binary protos. These are control messages with scalar fields, so I'm not worried about unbounded growth and don't need streaming blobs. The natural choice to access them would be `(*Stmt).ColumnBytes`.

The problem is that the only way to use `(*Stmt).ColumnBytes` correctly when you don't know the size of the blob in advance seems to be by doing something like the sketch below:
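(Here `readBlob`, `stmt`, and `col` are placeholder names, and the import path is an assumption; the point is only the shape of the call sequence.)

```go
import "crawshaw.io/sqlite" // import path assumed

// readBlob sizes a buffer via ColumnLen, then has ColumnBytes copy the
// blob into it, trimming to the number of bytes actually copied.
func readBlob(stmt *sqlite.Stmt, col int) []byte {
	buf := make([]byte, stmt.ColumnLen(col))
	n := stmt.ColumnBytes(col, buf)
	return buf[:n]
}
```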
This is wasteful, since `(*Stmt).ColumnBytes` already calls `(*Stmt).ColumnLen` internally. It's also rather verbose, and it makes reusing a previously allocated slice somewhat cumbersome (we'd possibly have to extend the slice so its length matches `n`).
Would you consider changing this to be similar to e.g. `(hash.Hash).Sum` from the standard library? I.e. instead of `copy`ing the result of `(*Stmt).ColumnBytes` into the user's slice, `append` it.

You may not want to modify an existing API to avoid breaking users. This could be a new method, e.g. `(*Stmt).ColumnBytesAppend`, or some better-sounding name.
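For illustration, a rough sketch of what such a method could do, built on the existing calls (`columnBytesAppend` is a hypothetical name, not an API this library provides; same assumed import as above):

```go
// columnBytesAppend appends the blob in column col to buf and returns the
// extended slice, mirroring how hash.Hash's Sum appends to its argument.
func columnBytesAppend(stmt *sqlite.Stmt, col int, buf []byte) []byte {
	start := len(buf)
	buf = append(buf, make([]byte, stmt.ColumnLen(col))...) // grow by the blob's length
	n := stmt.ColumnBytes(col, buf[start:])                 // copy the blob into the new tail
	return buf[:start+n]
}
```

With something like this, callers could reuse a buffer across rows, e.g. `buf = columnBytesAppend(stmt, col, buf[:0])`.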