Increase the default outgoing bandwidth #10096
Conversation
Summary of Changes
Hello @yyforyongyu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly increases the default outgoing gossip bandwidth limits within the system. The primary goal is to improve the efficiency and speed at which the node can propagate critical network information, such as channel and node announcements, especially when responding to gossip requests. This change aims to alleviate potential bottlenecks in network synchronization and message dissemination.
Highlights
- Increased Outgoing Gossip Bandwidth: The default values for `DefaultMsgBytesBurst` and `DefaultMsgBytesPerSecond` have been increased tenfold, from 100KB/s to 1MB/s for the rate, and from 200KB to 2MB for the burst. This significantly boosts the capacity for sending gossip messages.
- Improved Configuration Documentation: The descriptions for the `MsgRateBytes` and `MsgBurstBytes` configuration options in `lncfg/gossip.go` and `sample-lnd.conf` have been updated to provide clearer and more detailed explanations of their purpose and interaction within the token bucket rate-limiting scheme (see the sketch below).
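For a concrete picture of what these settings look like on the operator side, here is a hypothetical `sample-lnd.conf` excerpt reflecting the new defaults. The option names follow the `MsgRateBytes`/`MsgBurstBytes` settings mentioned above (e.g. `msg-rate-bytes`); the exact section name and comment wording in the real file may differ.

```
[gossip]
; Bytes of outgoing gossip allowed per second, shared across all peers.
; The new default corresponds to 1000 * 1024 = 1,024,000 bytes/s.
; gossip.msg-rate-bytes=1024000

; Token-bucket burst size in bytes. It should be no smaller than the rate
; (and the largest message we may send), otherwise the limit may not take
; effect. The new default corresponds to 2 * 1000 * 1024 = 2,048,000 bytes.
; gossip.msg-burst-bytes=2048000
```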
Warning: Gemini is unable to generate a review due to a potential policy violation.
```diff
 	// this, will block indefinitely. Once tokens (bytes are depleted),
 	// they'll be refilled at the DefaultMsgBytesPerSecond rate.
-	DefaultMsgBytesBurst = 2 * 100 * 1_024
+	DefaultMsgBytesBurst = 2 * 1000 * 1_024
```
Why do we need to also increase this?
The burst must be greater than `msg-rate-bytes`, otherwise `msg-rate-bytes` may not take effect; more details here.
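For intuition, here is a minimal Go sketch (not lnd code) using the standard golang.org/x/time/rate token bucket: if a single message needs more bytes than the configured burst, the request can never be satisfied, so the send fails outright instead of being rate-limited. The specific numbers below are made up for the example.

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Hypothetical limiter: 1,024,000 bytes/s refill rate, but only a
	// 1 KB burst (bucket capacity).
	limiter := rate.NewLimiter(rate.Limit(1_024_000), 1_024)

	// A 65,535-byte message (the largest wire message) exceeds the burst,
	// so WaitN fails immediately rather than waiting for tokens to refill.
	err := limiter.WaitN(context.Background(), 65_535)
	fmt.Println(err) // non-nil: the request exceeds the limiter's burst
}
```

Once the burst is at least as large as the biggest message we may send, the same call simply blocks until enough tokens accumulate, so the configured rate actually applies.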
Force-pushed from 6fef5b6 to d0c07d7.
morehouse left a comment:
I think 1 MB/s is a reasonable default outgoing rate limit. At least in the US, typical upload speeds are higher than this.
Lower-end setups may want to rate limit more aggressively.
Roasbeef left a comment:
LGTM 🏊♀️
When replying to `gossip_timestamp_filter`, we will send `ChannelAnnouncement1`, `NodeAnnouncement`, and two `ChannelUpdate1`s per channel. Their sizes are:
- `ChannelAnnouncement1`: this message has a mostly fixed size.
- `NodeAnnouncement`: this message has both fixed and variable size components.
- `ChannelUpdate1`: this message has a mostly fixed size.

Best case: assuming the minimal sizes, we will send 430 + 140 + 136*2 = 842 bytes per channel; with the current default of 100KB/s, we can process 121 channels per second.

Worst case: assuming the max size, when all the msgs are using extra bytes, we will send 65535*4 = 262,140 bytes per channel, and with the current default we can only process about 0.4 channels per second. But this is highly unlikely.

Note that this value is shared among all peers, which means if we have 10 peers, we can only process 12 channels per peer per second.
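As a quick back-of-the-envelope check of the best-case numbers above, here is a small Go sketch (not lnd code) that mirrors the per-channel message composition and compares the old and new `DefaultMsgBytesPerSecond` values:

```go
package main

import "fmt"

func main() {
	const (
		chanAnn    = 430 // minimal ChannelAnnouncement1 size, bytes
		nodeAnn    = 140 // minimal NodeAnnouncement size, bytes
		chanUpdate = 136 // minimal ChannelUpdate1 size, bytes

		oldRate = 100 * 1_024  // previous DefaultMsgBytesPerSecond
		newRate = 1000 * 1_024 // new DefaultMsgBytesPerSecond
	)

	// One channel's reply: one channel announcement, one node
	// announcement, and two channel updates.
	perChannel := chanAnn + nodeAnn + 2*chanUpdate

	fmt.Printf("best case: %d bytes per channel\n", perChannel)     // 842
	fmt.Printf("old default: ~%d channels/s\n", oldRate/perChannel) // ~121
	fmt.Printf("new default: ~%d channels/s\n", newRate/perChannel) // ~1216
}
```

With ten peers sharing the budget, the old default's ~121 channels/s works out to the ~12 channels per peer per second mentioned above.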