Conversation

Contributor

@AurelienFT commented Sep 13, 2024

REVIEWER NOTE: Review #2162 first for context

Linked Issues/PRs

Closes #2186

Description

This PR makes it possible to insert transactions into the TxPool v2 service, performing all the necessary verifications before the insertion.

Changes in code logic from TxPool v1

  • The verifications are performed in a new order, specified in #2186 (Allow insertion of transactions in the TxPool v2). The goal is to avoid doing the computation-heavy work when the simple checks already fail. In this new version we also ensure that verifications run in order by wrapping each step in its own type, so that only one verification path is possible (see the sketch after this list).

  • The insertion is performed in a separate thread pool; the goal is to avoid blocking the pool on verifications/insertions and to control the resources allocated to this work.

  • The insertion rules and conditions have changed to the following:

    • A transaction with dependencies can collide with only one other transaction
    • A transaction without dependencies can collide with multiple transactions
    • Rules to free up space for a new transaction:
      • If a transaction collides with another, verify whether deleting the colliding transaction and its dependent subtree frees enough space; otherwise refuse the transaction
      • If a transaction has dependencies and there is not enough space, don't accept the transaction
      • If a transaction is executable, try to free as much space as possible by evicting less profitable transactions from the pool in order to include it
  • New limits on the size of the pool: max_pool_bytes_size and max_pool_gas
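To illustrate the "only one verification path" idea above, here is a minimal hedged sketch of the wrapper-type (typestate) approach; the type and method names are hypothetical, not the ones used in the PR, and the checks are only placeholders.

```rust
// Hypothetical sketch: each verification step consumes the previous wrapper
// and returns the next one, so the only way to obtain a `FullyVerifiedTx`
// is to run every check, in the intended order, cheapest first.
struct Transaction; // placeholder for the real transaction type
#[derive(Debug)]
struct Error; // placeholder error type

struct ReceivedTx(Transaction);
struct BasicallyCheckedTx(Transaction);
struct InputsCheckedTx(Transaction);
struct FullyVerifiedTx(Transaction);

impl ReceivedTx {
    fn check_basic_rules(self) -> Result<BasicallyCheckedTx, Error> {
        // cheap checks first: blacklist, blob existence, size limits, ...
        Ok(BasicallyCheckedTx(self.0))
    }
}

impl BasicallyCheckedTx {
    fn check_inputs(self) -> Result<InputsCheckedTx, Error> {
        // input/UTXO validation against the latest view
        Ok(InputsCheckedTx(self.0))
    }
}

impl InputsCheckedTx {
    fn check_signatures_and_predicates(self) -> Result<FullyVerifiedTx, Error> {
        // the computation-heavy work runs last, only if everything else passed
        Ok(FullyVerifiedTx(self.0))
    }
}
```

Skipping a step or reordering the steps then fails to type-check, which is the guarantee described in the first bullet above.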

Changes from the base branch txpool-v2

All the tests have been refactored to be much clearer and easier to read. They also now all follow the naming and Given-When-Then (GWT) conventions.

Checklist

  • Breaking changes are clearly marked as such in the PR description and changelog
  • New behavior is reflected in tests
  • The specification matches the implemented behavior (link update PR if changes are needed)

Before requesting review

  • I have reviewed the code myself
  • I have created follow-up issues caused by this PR and linked them here

@AurelienFT changed the title from Txpool v2 api to Txpool v2 insertion Sep 13, 2024
@AurelienFT mentioned this pull request Sep 13, 2024
@AurelienFT marked this pull request as ready for review September 18, 2024 09:14
@AurelienFT requested a review from a team September 18, 2024 09:15
@AurelienFT self-assigned this Sep 18, 2024
@AurelienFT marked this pull request as ready for review September 23, 2024 14:53
) -> Result<Collisions<S::StorageIndex>, Error> {
let mut collisions = Collisions::new();
) -> Result<Option<S::StorageIndex>, Error> {
let mut collision: Option<S::StorageIndex> = None;
Collaborator

We need to document the rule that we agreed on in the code since it is not clear why we allow only one collision=)

Collaborator

Also, I want to highlight that we agreed that this is applicable to dependent transactions. In the case of executable transactions, we can collide with more than one transaction if it is more profitable than all collided subtrees=)

But we can handle it in a separate PR if you want=) But I think it is better to restore the full behavior of the BasicCollisionManager (returning all collisions) and just check the number of collided transactions in another place
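For illustration only, a small sketch of the rule under discussion, assuming the collision manager returns all collisions and the count is checked afterwards; all names here are made up.

```rust
// Hypothetical check: a dependent transaction may collide with at most one
// transaction, while an executable one may collide with several, provided it
// is more profitable than every collided subtree.
fn check_collisions(
    is_executable: bool,
    incoming_ratio: u128,             // scaled tip/gas of the incoming tx
    collided_subtree_ratios: &[u128], // scaled tip/gas of each collided subtree
) -> Result<(), &'static str> {
    if !is_executable && collided_subtree_ratios.len() > 1 {
        return Err("dependent transaction collides with more than one transaction");
    }
    if collided_subtree_ratios.iter().any(|r| incoming_ratio <= *r) {
        return Err("incoming transaction does not beat a collided subtree");
    }
    Ok(())
}
```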

Contributor Author

@AurelienFT Sep 25, 2024

OK, I will add the documentation and change the behavior. However, I don't like checking the number of collisions depending on whether the transaction is executable outside of the CollisionManager; maybe I can add a new separate method to the collision manager. For me it's not only a detector: it should be able to resolve collisions and make sure that the ones left are less worthwhile.

Contributor Author

Done. I documented it in the collision manager and in the pool-full check. I have split the detection and resolution of collisions into two different functions and reworked the whole logic. The collision-resolution iteration could be optimised a bit, but I don't think it will be our pain point for now.

Comment on lines 256 to 288
        let mut gas_left = self.current_gas.saturating_add(tx_gas);
        let mut bytes_left = self.current_bytes_size.saturating_add(bytes_size);
        let mut txs_left = self.storage.count().saturating_add(1);
        let current_ratio = Ratio::new(tx.tip(), tx_gas);
        let mut sorted_txs = self
            .storage
            .get_worst_ratio_tip_gas_subtree_roots()?
            .into_iter();
        while gas_left > self.config.pool_limits.max_gas
            || bytes_left > self.config.pool_limits.max_bytes_size
            || txs_left > self.config.pool_limits.max_txs
        {
            let storage_id = sorted_txs.next().ok_or(Error::NotInsertedLimitHit)?;
            let storage_data = self.storage.get(&storage_id)?;
            let mut dependencies = self.storage.get_dependencies(storage_id)?;
            match dependencies.next() {
                Some(_) => {}
                None => {
                    let stored_ratio = Ratio::new(
                        storage_data.transaction.tip(),
                        storage_data.transaction.max_gas(),
                    );
                    if stored_ratio >= current_ratio {
                        return Err(Error::NotInsertedLimitHit);
                    }
                }
            }
            gas_left = gas_left.saturating_sub(storage_data.dependents_cumulative_gas);
            bytes_left =
                bytes_left.saturating_sub(storage_data.dependents_cumulative_bytes_size);
            txs_left = txs_left.saturating_sub(1);
            removed_transactions.push(storage_id);
        }
Collaborator

If you store the total_gas_of_children and the total_size_of_children in the storage data, you don't need to do iterations here=) Plus, I think removed_transactions doesn't belong in this function either.
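A hedged sketch of that suggestion: keep cumulative subtree totals inside the storage entry and update them on every insert/remove, so the eviction loop can subtract them directly instead of iterating over children. Field and method names here are illustrative, not the actual ones.

```rust
// Hypothetical storage entry carrying cumulative totals for the subtree
// rooted at this transaction (itself plus all of its dependents).
struct StorageData {
    tip: u64,
    max_gas: u64,
    bytes_size: usize,
    dependents_cumulative_tip: u64,
    dependents_cumulative_gas: u64,
    dependents_cumulative_bytes_size: usize,
}

impl StorageData {
    // Called on every ancestor when a new dependent transaction is inserted,
    // so the totals stay correct without walking the subtree later.
    fn on_dependent_inserted(&mut self, tip: u64, gas: u64, bytes: usize) {
        self.dependents_cumulative_tip =
            self.dependents_cumulative_tip.saturating_add(tip);
        self.dependents_cumulative_gas =
            self.dependents_cumulative_gas.saturating_add(gas);
        self.dependents_cumulative_bytes_size =
            self.dependents_cumulative_bytes_size.saturating_add(bytes);
    }
}
```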

Contributor Author

@AurelienFT Sep 25, 2024

The iteration is there to delete multiple subtrees if needed. However, the dependencies part is useless and I have simplified the function.
If this function doesn't return the removed_transactions, we won't know which transactions to delete to make some space.

Comment on lines 98 to 105
        self.config.black_list.check_blacklisting(&tx)?;
        Self::check_blob_does_not_exist(&tx, &latest_view)?;
        self.storage
            .validate_inputs(&tx, &latest_view, self.config.utxo_validation)?;
        let collision = self
            .collision_manager
            .collect_colliding_transaction(&tx, &self.storage)?;
        let dependencies = self.storage.collect_transaction_dependencies(&tx)?;
Collaborator

Why can't we call can_insert_transaction instead?

Contributor Author

In this case I would need to make can_insert_transaction return the deleted transactions and the dependencies, which, IMO, makes the function prototype very strange. Weighing that against a bit of code duplication, I went for the code duplication.

Comment on lines 106 to 113
        if let Some(collision) = collision {
            if self
                .storage
                .is_in_dependencies_subtrees(collision, &dependencies)?
            {
                return Err(Error::NotInsertedCollisionIsDependency);
            }
        }
Collaborator

I don't like that this functionality moved from storage to TxPool. I think it should be part of the validate_inputs logic

Contributor Author

OK, I have moved it to the storage's can_store_transaction, which IMO is more appropriate.

        let current_ratio = Ratio::new(tx.tip(), tx_gas);
        let mut sorted_txs = self
            .storage
            .get_worst_ratio_tip_gas_subtree_roots()?
Collaborator

I think get_worst_ratio_tip_gas_subtree_roots shouldn't live inside the storage. We overload the storage's responsibility. It is the responsibility of the TxPool to decide how to squeeze out non-profitable transactions.

Plus, this function is too heavy. It iterates over all nodes in the graph.

I think we should use the simplest implementation and leave the proper functionality for a follow-up PR.

The simplest solution would be to have a sorted map in the TxPool that orders all dependent transactions (and maybe executable ones as well). When the TxPool is full, we just go over the less profitable transactions in the mapping and delete them with all their children, while there is no space and the new transaction is more profitable (taking into account the profitability of the children).
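A rough, hedged sketch of that simplest solution, with invented names and a scaled-integer ratio: a map ordered by tip/gas ratio whose first entries are the least profitable subtree roots, popped while the pool is over its limits and the incoming transaction stays more profitable.

```rust
use std::collections::BTreeMap;

// Scaled tip/gas ratio used as the ordering key (avoids floating point).
type RatioKey = u128;
// (ratio, tx id) so equal ratios still give unique keys.
type SortKey = (RatioKey, u64);

#[derive(Clone, Copy)]
struct SubtreeInfo {
    cumulative_gas: u64,
    cumulative_bytes: u64,
}

struct PoolIndex {
    by_ratio: BTreeMap<SortKey, SubtreeInfo>,
}

impl PoolIndex {
    /// Pick the subtree roots to evict so that `gas_over`/`bytes_over` reach
    /// zero, refusing if that would evict a subtree at least as profitable
    /// as the incoming transaction.
    fn find_evictions(
        &self,
        incoming_ratio: RatioKey,
        mut gas_over: u64,
        mut bytes_over: u64,
    ) -> Option<Vec<SortKey>> {
        let mut evicted = Vec::new();
        for (&key, info) in &self.by_ratio {
            if gas_over == 0 && bytes_over == 0 {
                break;
            }
            if key.0 >= incoming_ratio {
                return None; // the new transaction is not profitable enough
            }
            gas_over = gas_over.saturating_sub(info.cumulative_gas);
            bytes_over = bytes_over.saturating_sub(info.cumulative_bytes);
            evicted.push(key);
        }
        (gas_over == 0 && bytes_over == 0).then_some(evicted)
    }
}
```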

Contributor Author

The problem is that the sorting will be redone each time we delete someone's child, since the cumulative_gas/cumulative_tip of a lot of transactions will change. So I was thinking that computing it during insertion could be best, but I will change it.
However, I don't really know how this can be done outside of the storage, as it's the storage that is in charge of modifying the cumulative_gas/cumulative_tip, and the rest of the txpool doesn't know when these values are updated.

Contributor Author

Following our discussion, I changed this: it now uses a function inside the selection algorithm, which seems to be the most appropriate place for it. I have also fixed a bug that was there.

@AurelienFT requested a review from xgreenx September 25, 2024 23:50
@AurelienFT merged commit c128c75 into txpool-v2 Sep 27, 2024
@AurelienFT deleted the txpool-v2-api branch September 27, 2024 16:04
AurelienFT added a commit that referenced this pull request Oct 7, 2024
REVIEWER NOTE: Review #2193
first for context
## Linked Issues/PRs
Closes #2187

## Description

(2k of the additions are the tests from TxPool v1, which have been reinstated)

This PR adds all the logic for the service and shared state of the TxPool so that they can be used by all the other modules, and makes all the auxiliary mechanisms work.

The PR changes the way the `SharedState` interacts with the pool and the service. Now the `SharedState` only sends commands to the service, which answers them whenever it has time, depending on the priority of the request (managed through a biased tokio select). This makes all methods async, even `find`, which I'm not a big fan of, but I didn't find any better solution.
**EDIT: I have run some tests, in particular the `execute_suite` tests, and it seems that blocking can cause "double blocking forever", so I'm open to an alternative.**
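For illustration, a minimal hedged sketch of such a command loop, with invented channel and command names: the `biased` option of `tokio::select!` polls the branches top to bottom, so pool-modifying commands are always served before read requests such as `find`.

```rust
use tokio::sync::{mpsc, oneshot};

// Hypothetical commands; the real service defines its own request types.
enum WriteCommand {
    InsertTransactions(Vec<String>),
    RemoveExecutedTransactions(Vec<String>),
}
enum ReadCommand {
    Find { tx_id: String, reply: oneshot::Sender<Option<String>> },
}

async fn run_service(
    mut write_rx: mpsc::Receiver<WriteCommand>,
    mut read_rx: mpsc::Receiver<ReadCommand>,
) {
    loop {
        tokio::select! {
            // `biased` disables random branch selection: higher branches win.
            biased;
            Some(cmd) = write_rx.recv() => {
                // apply the state-changing command to the pool first
                match cmd {
                    WriteCommand::InsertTransactions(_txs) => { /* insert */ }
                    WriteCommand::RemoveExecutedTransactions(_ids) => { /* prune */ }
                }
            }
            Some(cmd) = read_rx.recv() => {
                let ReadCommand::Find { tx_id: _, reply } = cmd;
                // answer the SharedState caller asynchronously
                let _ = reply.send(None);
            }
            else => break, // both channels closed: shut the service down
        }
    }
}
```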

Here is a list of all the interactions made by the different modules and their new name/behavior/reason:

### Block producer
| Existing | New |
| - | - |
| select_transactions(max_gas) | select_transactions(max_gas) |

### PoA
| Existing | New |
| - | - |
| total_consumable_gas() | Nothing (unused) |
| transaction_status_events() | Replaced by new_txs_notifier ([reason](https://github.com/FuelLabs/fuel-core/blob/d5ba082264e1e3db54c6c81f6036eb06b9d3044c/crates/services/consensus_module/poa/src/service.rs#L590)) |
| pending_number() | Not used anymore after the transaction_status_events() change |
| remove_txs() | broadcast_txs_skipped_reason() |

### Block Importer
| Existing | New |
| - | - |
| Stream with blockEvents | Stream with block events |

### P2P
| Existing | New |
| - | - |
| gossiped_tx_stream | tx_from_p2p_stream |
| new_tx_gossip_subscription | new_peers_subscribed_stream |
| broadcast_transaction | broadcast_transaction |
| get_tx_ids | get_tx_ids |
| get_txs | get_txs |
| notify_gossip_transaction_validity | notify_gossip_transaction_validity |

### GraphQL API

| Existing | New |
| - | - |
| transaction | transaction |
| tx_update_subscribe | tx_update_subscribe |


### Other changes

- The PR also changes the error type to help separate subdomains of errors and be clearer about when errors can happen.
- The TTL management of the transactions is also in place, as the lowest-priority task of the service.
- A new thread pool has been added to manage P2P sync requests that could take a long time (same behavior as TxPool v1).
- The transactions are now stored as Arc because they need to be shareable with other modules (API, P2P).
- All the tests from the previous pool have been reinstated and pass.
- The TxPool no longer has unused lint allowances and matches the correct linting requirements.


## Checklist
- [x] Breaking changes are clearly marked as such in the PR description
and changelog
- [x] New behavior is reflected in tests
- [x] [The specification](https://github.com/FuelLabs/fuel-specs/)
matches the implemented behavior (link update PR if changes are needed)

### Before requesting review
- [x] I have reviewed the code myself
- [x] I have created follow-up issues caused by this PR and linked them
here

---------

Co-authored-by: Green Baneling <[email protected]>