Conversation

@Bobobalink
Contributor

Instead of iterating over the linked list (which also requires a hash lookup per element) and erasing entries one by one, just reset the backing linked list and value storage, then zero out the bucket array.

This allows us to directly find the bucket that points to each value in
the value array, which makes erasing elements given an iterator faster
(before we needed one key lookup for each element erased). Practically,
this makes clearing large "chunks" of the FixedMap faster.
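The clear-path idea above can be sketched as follows. This is a minimal illustration under assumed names (`FixedMapSketch`, `buckets`, `slots`, `backlink` are hypothetical, not the PR's actual code): values live in a dense array, each bucket stores an index into that array, and each value slot records which bucket points at it, so `clear()` only has to reset the bucket array and the size counter rather than erase entries one by one.

```cpp
#include <array>
#include <cstddef>
#include <functional>

// Sketch of a FixedMap-like open-addressing map (illustrative only).
template <typename K, typename V, std::size_t N>
struct FixedMapSketch {
    struct Bucket { bool used = false; std::size_t value_index = 0; };
    struct Slot { K key; V value; std::size_t bucket_index = 0; };

    std::array<Bucket, N> buckets{};
    std::array<Slot, N> slots{};   // dense value storage
    std::size_t size_ = 0;

    // Linear probing; assumes the map is never completely full.
    std::size_t find_bucket(const K& key) const {
        std::size_t i = std::hash<K>{}(key) % N;
        while (buckets[i].used && slots[buckets[i].value_index].key != key)
            i = (i + 1) % N;
        return i;
    }

    void insert(const K& key, const V& value) {
        std::size_t b = find_bucket(key);
        if (!buckets[b].used) {
            buckets[b].used = true;
            buckets[b].value_index = size_;
            slots[size_] = {key, value, b};  // backlink: slot -> bucket
            ++size_;
        } else {
            slots[buckets[b].value_index].value = value;  // overwrite
        }
    }

    // Fast clear: no per-element hash lookups. Resetting size_ makes
    // the dense slots dead, so only the bucket array needs zeroing.
    void clear() {
        buckets.fill(Bucket{});
        size_ = 0;
    }

    std::size_t size() const { return size_; }
};
```

The asymptotic cost of `clear()` becomes proportional to the bucket array size rather than to the number of elements times the cost of a hash lookup.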
{
    // update the backlink of the value pointed to by the bucket
    // we're about to put in table_loc
    bucket_for_value_index(bucket.value_index_) = table_loc;
Contributor Author


This is actually quite a lot of extra work, and it is required on every single insertion and deletion. We expect many buckets to need moving, and each of those moves now also requires touching this separate array.
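To make the objection concrete, here is a tiny sketch (the function name `move_bucket` and the flat `buckets`/`backlinks` arrays are assumptions for illustration, not the PR's code) of what one bucket relocation costs once backlinks exist: besides copying the bucket, the value it points to must be told which bucket now points at it.

```cpp
#include <cstddef>
#include <vector>

// Move the bucket at `from` into `table_loc` (as backward shifting
// does after an erase). The extra work per move is the second write:
// updating the backlink of the value the moved bucket points to.
std::size_t move_bucket(std::vector<std::size_t>& buckets,
                        std::vector<std::size_t>& backlinks,
                        std::size_t from, std::size_t table_loc) {
    buckets[table_loc] = buckets[from];
    backlinks[buckets[table_loc]] = table_loc;  // touch the separate array
    return buckets[table_loc];
}
```

Every relocation thus turns one cache-friendly write into two writes in two different arrays, which is the per-operation overhead the comment is weighing against the faster clear.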

