[wip] get rid of joint PK, just use local_*_id #15

Draft
matt-codecov wants to merge 1 commit into main

Conversation

@matt-codecov (Collaborator) commented Jul 31, 2024

not currently working on this, just an idea for later

the idea behind the joint PK is that it makes records globally unique if we distribute processing to different hosts. when we merge all the reports together at the end, we can just union the tables and not worry about non-unique IDs or updating foreign keys

while the joint PK is way better than UUIDs perf-wise, profile data still shows a lot of time is spent updating indexes for the joint PK. if we switch to INTEGER PRIMARY KEY it's a good deal faster. a specific comparison:

  • using the joint PK, one profile spends 1.94s in sqlite3VdbeExec, with 1.04s of that in sqlite3BTreeIndexMoveto
  • without the joint PK, the profile spends 868ms in sqlite3VdbeExec with 324ms of that in sqlite3BTreeIndexMoveto
  • overall runtime with the joint PK was 6.34s; overall runtime without it was 5.22s

merging is also really an INSERT, so it also has to pay for index updates. while switching to INTEGER PRIMARY KEY would necessitate more work to update foreign keys, it's not clear whether that cost outweighs the savings from making index updates faster in both places
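for reference, the two shapes being compared look roughly like this (column names here are illustrative, lifted from the snippets further down, not the exact schema):

-- joint PK: the two-column PRIMARY KEY is a separate index that every INSERT has to update
CREATE TABLE coverage_sample (
  raw_upload_id INTEGER,
  local_sample_id INTEGER,
  hits INTEGER, -- stand-in for the remaining payload columns
  PRIMARY KEY (raw_upload_id, local_sample_id)
);

-- INTEGER PRIMARY KEY: local_sample_id becomes an alias for the rowid, so there is no
-- extra index to keep up to date on insert (alternative definition of the same table)
CREATE TABLE coverage_sample (
  local_sample_id INTEGER PRIMARY KEY,
  raw_upload_id INTEGER,
  hits INTEGER -- stand-in for the remaining payload columns
);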

to decide whether to make this change, we need some data around merging:

  • how much does this change the perf of merging (good or bad) for small inputs? large inputs?
  • we expect to merge incremental results one-by-one into a final report, rather than merging two large final reports. what size is the typical incremental result?

@Swatinem (Collaborator)

we expect to merge incremental results one-by-one into a final report, rather than merging two large final reports. what size is the typical incremental result?

I believe the coverage reports are both commutative and associative (in other words, order-independent and grouping-independent). That is also something that should be easy to verify with a fuzzer / property testing.

But yes, it might be interesting to figure out what the actual perf implications here are. That info should help decide whether we want to pre-merge all individual coverage files within one upload, and maybe even batch multiple uploads?

@matt-codecov (Collaborator, Author) commented Jul 31, 2024

my idea for how to update foreign keys when merging isn't great:

  • create a temp table (old_id, new_id) mapping old PKs to new PKs
  • insert a single seed row with new_id = largest_existing_id and old_id = NULL, taking largest_existing_id from the report we're merging into. sqlite will now assign rowids starting one past largest_existing_id
  • insert all of the local_sample_ids from the report we're merging from into the ID map's old_id column. let sqlite assign new_id. we now have our map
  • when inserting coverage_sample, branches_data, method_data, span_data from the src database into the dest database, join id_map and use id_map.new_id for local_sample_id

if the temp table (or maybe we should attach a temp database) is in memory, building the ID map should be extremely fast? so the big question is how painful those joins would be. at this point, we don't have indexes on anything other than PKs (creating them is slow, having them slows down inserts, and they also bloat the db, so my plan was to defer creating them until post-merge if possible)
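for context, the statements below assume both databases are attached under the `src` and `dest` aliases, something like this (file names made up):

-- attach the report we're merging into as `dest` and the report we're merging from as `src`.
-- the temp table created below lives in sqlite's temp database, whose storage (memory vs.
-- a temp file) is controlled by the temp_store pragma
ATTACH DATABASE 'final_report.sqlite' AS dest;
ATTACH DATABASE 'incremental_report.sqlite' AS src;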

the sql would look something like this (ish):

-- Create a temp table mapping old PKs to new PKs
CREATE TEMPORARY TABLE id_map (
  new_id INTEGER PRIMARY KEY,
  old_id INTEGER
);

-- Insert a row with the largest rowid in the destination in the `new_id` column
-- This resets where new `rowid`s will be assigned from
INSERT INTO id_map (new_id, old_id)
  SELECT
    MAX(dest.coverage_sample.local_sample_id) AS new_id,
    null as old_id
  FROM dest.coverage_sample;

-- Create the actual map of old PKs to new PKs. Sqlite auto-assigns `new_id`
INSERT INTO id_map (old_id)
  SELECT src.coverage_sample.local_sample_id FROM src.coverage_sample;

-- Move coverage samples from src to dest with the new IDs
-- You can maybe skip this join and rely on sqlite assigning the exact same sequence
-- of rowids when inserting here as it did when inserting in `id_map`
INSERT INTO dest.coverage_sample
  SELECT id_map.new_id, src.coverage_sample.raw_upload_id, src.coverage_sample.hits...
  FROM src.coverage_sample
  INNER JOIN id_map ON id_map.old_id = src.coverage_sample.local_sample_id;

-- Move related tables, updating their FKs
-- Let their PKs be assigned by sqlite
INSERT INTO dest.branches_data (local_sample_id, ...)
  SELECT id_map.new_id as local_sample_id, ...
  FROM src.branches_data
  INNER JOIN id_map ON id_map.old_id = src.branches_data.local_sample_id;

-- and so on

what i wanted to make work was:

INSERT INTO dest.coverage_sample
  SELECT * FROM src.coverage_sample
  RETURNING
    dest.coverage_sample.local_sample_id AS new_id,
    src.coverage_sample.local_sample_id AS old_id;

but sqlite RETURNING clauses are only allowed to reference the table being modified
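the allowed form only sees the table being written, so it can hand back the newly assigned IDs but has no way to pair them with the old ones (same illustrative columns as above):

-- RETURNING may only reference dest.coverage_sample, the table being inserted into
INSERT INTO dest.coverage_sample (raw_upload_id, hits)
  SELECT raw_upload_id, hits FROM src.coverage_sample
  RETURNING local_sample_id; -- new IDs come back, but with nothing to pair them to old_id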

@matt-codecov (Collaborator, Author)

I believe the coverage reports are both commutative and associative (in other words, order independent and grouping independent). That is also something that should easily be verifiable with a fuzzer / property testing.

i think this is true, so i suppose we could do a mergesort-style merge rather than merging all of the incremental results one-by-one
