Replies: 1 comment 2 replies
-
For a new file we don't know how much data will be written, and it is expensive to allocate 4MB of memory when only 4KB may actually be written. The 64KB pages are used to allocate memory more efficiently while the final size is still unknown.
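As an illustration of that trade-off, here is a minimal, self-contained sketch of such an adaptive allocation; the `pageSizeFor` helper and its threshold are hypothetical names for illustration, not the library's actual code:

```go
package main

import "fmt"

const (
	smallPage = 64 << 10 // 64KB: cautious allocation while the size is unknown
	blockSize = 4 << 20  // 4MB: the full block size
)

// pageSizeFor picks an allocation size for the next page based on how
// much has been written so far (hypothetical helper, for illustration).
func pageSizeFor(written int) int {
	if written < blockSize {
		// Still inside the first block: grow in 64KB steps, so a 4KB
		// file pins one 64KB page instead of a whole 4MB block.
		return smallPage
	}
	// Past the first block: this is clearly a large write, so allocate
	// full 4MB blocks up front.
	return blockSize
}

func main() {
	fmt.Println(pageSizeFor(4 << 10)) // 65536: small file, small page
	fmt.Println(pageSizeFor(8 << 20)) // 4194304: large file, full block
}
```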
-
I want to add this:
When I test writing a large file, the file is split into 64MB chunks; we all know this. Every time data is written to c.pages[][], when indx is 0 the block is divided into 64 pages, and from the second block onward the block size (BS) is set to 4MB. However, the same subdivision is repeated at the start of the next chunk. I think we can add a judgment on the chunk offset, so that the indx-0 block is only divided into 64 pages for the first chunk, not for every chunk after it.
The relevant code, an excerpt of the existing `WriteAt`:

```go
func (c *wChunk) WriteAt(p []byte, off int64) (n int, err error) {
	// Reject writes that would cross the 64MB chunk boundary.
	if int(off)+len(p) > chunkSize {
		return 0, fmt.Errorf("write out of chunk boundary: %d > %d", int(off)+len(p), chunkSize)
	}
	// Blocks that have already been uploaded must not be overwritten.
	if off < int64(c.uploaded) {
		return 0, fmt.Errorf("cannot overwrite uploaded block: %d < %d", off, c.uploaded)
	}
	// ... rest of WriteAt elided ...
}
```
Why is this needed:
This way, everything except the very first write of the first chunk is written to pages in 4MB units, which can improve write speed somewhat.
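To make the proposal concrete, here is a hedged sketch of what that chunk-offset judgment could look like; the `off` field and the `pageSize` helper are assumptions for illustration, not the existing implementation:

```go
type wChunk struct {
	off      int64 // assumed field: offset of this chunk within the file
	uploaded int
	// ... other fields elided ...
}

// pageSize is a hypothetical helper: only block 0 of the very first
// chunk (file offset 0) keeps the cautious 64KB pages; every other
// block, including block 0 of later chunks, gets a full 4MB page.
func (c *wChunk) pageSize(indx int) int {
	if c.off == 0 && indx == 0 {
		return 64 << 10 // total file size still unknown: small pages
	}
	return 4 << 20 // a later chunk or block: clearly a large file, allocate 4MB
}
```

With a check like this, block 0 of every chunk after the first would skip the 64KB subdivision entirely.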