

Attempt to Fix merge of Sliced NodeCollections#3720

Open
med-ayssar wants to merge 11 commits into nest:master from med-ayssar:issue-3706

Conversation

@med-ayssar
Contributor

No description provided.

@gtrensch added the S: Normal (Handle this with default priority) label on Jan 23, 2026
@gtrensch
Contributor

@med-ayssar, thank you for your contribution!

Could you please add a description to this PR? Since you called it an attempt, perhaps this PR should be considered a draft?

Contributor

@heplesser left a comment


@med-ayssar I am a bit concerned about this PR. In some places it seems to introduce unnecessary complexity into the code and in other places I fear considerable performance hits. Until now, adding sliced node collections has been prohibited, because it is difficult to implement. I have not seen (but may have overlooked) user requests to support this. In any case, it must not reduce performance. I also noticed that some tests now fail (see https://github.com/nest/nest-simulator/actions/runs/20521568875/job/58957854360#step:32:10332) while there are no tests for the new functionality.

node_ids.reserve( n );

for ( auto node_ptr = array; node_ptr != array + n; ++node_ptr )
if ( n == 0 )
Contributor

Am I overlooking something here or does this change essentially turn 8 lines of code into 20 without changing the logic? I suspect that the new code will be less efficient, because it requires n-1 temporary single-element NodeCollections to be created and merged, while the original code just creates a vector of node IDs and then creates the NodeCollection in one go.
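
For comparison, a rough sketch of the two approaches contrasted above; NodeCollection::create and merge are placeholders for whatever factory and join operation the surrounding code actually uses, not exact NEST signatures.

// Original approach: collect all node IDs first, build the collection once.
std::vector< size_t > node_ids;
node_ids.reserve( n );
for ( auto node_ptr = array; node_ptr != array + n; ++node_ptr )
{
  node_ids.push_back( *node_ptr );
}
NodeCollectionPTR nc = NodeCollection::create( node_ids ); // one pass, one allocation

// Approach in the diff as read above: n-1 temporary single-element
// collections, each joined into the running result.
NodeCollectionPTR result = NodeCollection::create( std::vector< size_t > { array[ 0 ] } );
for ( size_t i = 1; i < n; ++i )
{
  // merge() is a hypothetical join helper; each call pays the full merge cost.
  result = merge( result, NodeCollection::create( std::vector< size_t > { array[ i ] } ) );
}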

Predicate< typename Container::value_type > pred = []( const auto& current, const auto& next )
{ return next == current + 1; },
Filter< typename Container::value_type > filter = []( const auto& ) { return true; } )
{
Contributor

I find this code challenging to follow without comments explaining the logic.
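
For orientation, here is a self-contained sketch of what a helper with this signature and these defaults plausibly computes; it is one possible reading, not the actual vector_util implementation.

#include <utility>
#include <vector>

// Sketch: group a container into [ first, last ] runs. A new run starts when
// the predicate rejects the ( previous, current ) pair; elements rejected by
// the filter are skipped entirely.
template < typename Container, typename Pred, typename Filter >
std::vector< std::pair< typename Container::value_type, typename Container::value_type > >
split_into_contiguous_slices_sketch( const Container& c, Pred pred, Filter filter )
{
  std::vector< std::pair< typename Container::value_type, typename Container::value_type > > slices;
  for ( const auto& v : c )
  {
    if ( not filter( v ) )
    {
      continue; // filtered elements belong to no slice
    }
    if ( slices.empty() or not pred( slices.back().second, v ) )
    {
      slices.push_back( { v, v } ); // start a new slice
    }
    else
    {
      slices.back().second = v; // extend the current slice
    }
  }
  return slices;
}

// With the defaults above ( next == current + 1, always-true filter ), a
// made-up input { 1, 2, 3, 7, 8, 10 } yields the slices 1-3, 7-8 and 10.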

std::span< const bool > span { array, n };

auto slices = vector_util::split_into_contiguous_slices(
span, true, []( auto current, auto next ) { return current == next; }, []( const auto& v ) { return v == true; } );
Contributor

This code seems even more complex and likely more performance-problematic than the variant working on node IDs directly.

// But here we iterate c1 and check if c2 contains.
for ( const auto& elem : c1 )
{
if ( c2.contains( elem.node_id ) )
Contributor

For composite collections c2, contains() is a complex operation, so I am afraid this is too costly. See below for other ideas.

}
return NodeCollectionPTR( *this + *rhs_ptr );

std::vector< NodeCollectionPrimitive > new_parts = to_primitives_();
Contributor

I think the following could be integrated with the code below if that is revised as suggested below.

{
return std::make_shared< NodeCollectionComposite >( new_parts );
}
}
Contributor

If I read the code above right, then execution of the if-branch will always lead to either a throw or a return, so execution will in that case never pass this point. That needs to be clear from the code; right now it is hidden.

Comment on lines 1036 to 1037
std::sort( new_parts.begin(), new_parts.end(), primitive_sort_op );
merge_parts_( new_parts );
Contributor

Wouldn't it be more efficient to apply merge-sort logic here, iterating over both sides in parallel? In that process, one would also detect overlaps, so that no extra overlap detection is needed, improving efficiency. Since parts1 and parts2 are sorted per precondition, no sorting here is needed. One can then cover the case where rhs_ptr is a primitive collection by just constructing a single-element vector containing it.
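
A rough sketch of that suggestion, assuming each primitive can report its first and last node ID (first_id() and last_id() are placeholder accessors) and that parts1 and parts2 are each sorted and internally non-overlapping; what to do on overlap is left open.

// Two-pointer merge of two sorted vectors of primitives; no std::sort needed.
std::vector< NodeCollectionPrimitive > merged;
merged.reserve( parts1.size() + parts2.size() );

auto it1 = parts1.begin();
auto it2 = parts2.begin();
while ( it1 != parts1.end() and it2 != parts2.end() )
{
  // Take whichever primitive starts first.
  const auto& next = ( it1->first_id() <= it2->first_id() ) ? *it1++ : *it2++;
  if ( not merged.empty() and merged.back().last_id() >= next.first_id() )
  {
    // Overlap detected as a by-product of the merge; throw or fuse ranges here.
  }
  merged.push_back( next );
}
// Append the remainder of whichever side is left; a full version would also
// check its first element against merged.back().
merged.insert( merged.end(), it1, parts1.end() );
merged.insert( merged.end(), it2, parts2.end() );

// If rhs_ptr is a single primitive, the same loop covers that case with
// parts2 holding just that one element.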

if ( is_sliced_ )
{
if ( part_it->overlapping( rhs ) )
for ( const auto& elem : *this )
Contributor

Iterating over all elements of a collection is potentially very expensive, since collections can be large. We need to find a way to avoid it.

auto it = begin();
const auto end_it = end();

// Initialize the first range
Contributor

Might want to point out here that the algorithm is very similar to create_().
