Merged
8 changes: 7 additions & 1 deletion .wordlist.txt
Original file line number Diff line number Diff line change
@@ -12,8 +12,8 @@ ACR
acr
adaptor
adaptors
addr
ADDR
addr
agnostically
amd
apache
@@ -44,9 +44,11 @@ backend
balancer
Balancer
balancers
bg
BGP
bgp
bldr
bidir
bm
BMV
bmv
@@ -406,6 +408,7 @@ PyTest
pytests
Pyunit
qcow
qos
QoS
qos
Radv
@@ -571,6 +574,7 @@ vcpus
veth
VFP
vip
VIP
virsh
virt
virtio
@@ -605,6 +609,8 @@ VTEP
VTEPs
VxLAN
VXLAN
VxLAN
VxLan
vxlan
warmboots
wflow
7 changes: 3 additions & 4 deletions dash-pipeline/README-dash-docker.md
@@ -131,7 +131,7 @@ The following factors complicate the workflow a little bit:
The net result is that a CI build or test step might initially fail after a Git pull request or merge, because it depends on a new Docker image which is also being built simultaneously and not published yet. Furthermore, some Docker images depend upon base docker images (`FROM` clause), which also might not yet be published at the time the derived image's build begins. Fortunately this can be remedied with a manual re-run of the failed jobs in the Git Actions page of your project. We'll explain everything ahead.
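A common mitigation for this race, besides manually re-running failed jobs, is to poll the registry for the expected image tag before starting a dependent job. A minimal sketch, assuming a hypothetical `image_published` probe (this helper is not an existing DASH script):

```python
# Hypothetical retry helper: wait for a just-built Docker image tag to
# become visible in the registry before running a job that pulls it.
import time

def wait_for_image(image_published, tag: str,
                   retries: int = 5, delay: float = 0.0) -> bool:
    # image_published is a callable probing the registry (an assumption).
    for _ in range(retries):
        if image_published(tag):
            return True
        time.sleep(delay)
    return False

# Simulate a tag that becomes visible on the third probe.
probes = iter([False, False, True])
assert wait_for_image(lambda tag: next(probes), "dash-bmv2:latest")
```

In a real pipeline the probe would be a registry API call; the retry loop is the point, not the probe itself.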

## Separate CI scripts for forks vs. main repo
As explained above, credentials are required to push docker images to ACR. Therefore, two different CI scripts exist for each Dockerfile. Each Docker CI script has two variants, and `dash-xxx-docker-acr.yml`. The first variant only builds the docker image to catch regressions. The second variant also publishes to ACR.
As explained above, credentials are required to push docker images to ACR. Therefore, two different CI scripts exist for each Dockerfile. Each Docker CI script has two variants, for example `dash-xxx-docker.yml` and `dash-xxx-docker-acr.yml`. The first variant only builds the docker image to catch regressions. The second variant also publishes to ACR.
* Forked projects (forks of `Azure/DASH`) will execute a build of any changed Dockerfile to verify correctness, but will not attempt to publish.
* Non-forked projects (branches of `Azure/DASH`) will execute a docker build and publish the images to ACR.

@@ -183,9 +183,8 @@ See the figure and descriptions below.
4. Create or modify Dockerfiles, associated `.env` files containing image names and tags, Makefiles, etc. Build and test this new work in your development machine. All Docker images are stored to and retrieved from the local machine's docker environment.
5. Commit and push changes to the forked project repo. This will trigger CI pipelines in the fork only. The main CI job will fail because it won't be able to pull the new docker images since they have not been published yet.
6. Do a pull request from `featureX-dev` in your fork, to the `featureX` branch in the main repo. This will trigger CI actions to build and publish the new docker images. In parallel it will attempt a CI pipeline run to build and test everything, which will likely fail because required new images won't be published yet.
7. Re-run the failed CI job, which should pass this time. From here you can proceed to step 8 (final PR) or return to step 4 and continue incremental development in your fork. Assuming the newly-published docker image published in step 6 is satisfactory, you can do more work on your fork and pushes to your forked project will trigger CI runs which should pass, since new docker images got published. If you need to make more docker updates, repeat s
8. When all changes created in your form have been pushed to `featureX` branch, issue a pull request to the main project. Once merged, it should pass all CI pipelines.

7. Re-run the failed CI job, which should pass this time. From here you can proceed to step 8 (final PR) or return to step 4 and continue incremental development in your fork. Assuming the newly-published docker image published in step 6 is satisfactory, you can do more work on your fork and pushes to your forked project will trigger CI runs which should pass, since new docker images got published. If you need to make more docker updates, repeat steps 4-7 as needed.
8. When all changes created in your fork have been pushed to `featureX` branch, issue a pull request to the main project. Once merged, it should pass all CI pipelines. (Be sure to sync to `main` before the final push to avoid surprises).

## Publishing Docker Images to Azure Container Registry
Docker images are stored in Azure Container Registry (ACR). Pushing new images requires proper authentication credentials for the registry. These credentials are stored in Git Project "Secrets" accessible only to the project administrator. Therefore, publishing new images is done via Git Actions which reference the secrets as "environment variables" which are available in the CI action's runner context. These CI actions are triggered by anything which changes the docker image contents or even the tag, including Dockerfiles, image names, Makefiles, etc.
2 changes: 1 addition & 1 deletion dash-pipeline/images/dash-docker-fork-workflow.svg
2 changes: 1 addition & 1 deletion documentation/general/design/dash-sonic-hld.md
@@ -262,7 +262,7 @@ key = DASH_ACL_RULE:group_id:rule_num ; unique rule num wit
priority = INT32 value ; priority of the rule, lower the value, higher the priority
action = allow/deny
terminating = true/false ; if true, stop processing further rules
protocols = list of INT ',' separated; E.g. 6-udp, 17-tcp; if not provided, match on all protocols
protocols = list of INT ',' separated; E.g. 6-tcp, 17-udp; if not provided, match on all protocols
src_addr = list of source ip prefixes ',' separated
dst_addr = list of destination ip prefixes ',' separated
src_port = list of range of source ports ',' separated
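The corrected `protocols` encoding (IANA numbers: 6 = TCP, 17 = UDP) can be illustrated with a hedged Python sketch. The key format and field names come from the schema above; the dict representation and helper are illustrative assumptions, not an existing API:

```python
# Illustrative sketch of a DASH_ACL_RULE entry; field names follow the
# schema above, the helper function is hypothetical.
def make_acl_rule_key(group_id: str, rule_num: int) -> str:
    # key = DASH_ACL_RULE:group_id:rule_num
    return f"DASH_ACL_RULE:{group_id}:{rule_num}"

rule = {
    "priority": "10",          # lower value = higher priority
    "action": "allow",         # allow/deny
    "terminating": "true",     # stop processing further rules on match
    "protocols": "6,17",       # IANA protocol numbers: 6 = TCP, 17 = UDP
    "src_addr": "10.0.0.0/24,10.0.1.0/24",
    "dst_addr": "10.1.0.0/16",
    "src_port": "1024-2048,443",
}

key = make_acl_rule_key("group1", 1)
```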
@@ -297,6 +297,31 @@ The following scale of policies and routes are at minimum required to be
configured during validation and test plan needs to be executed covering
both scenarios:

**NEW Values Start** ###################################################

| | per ENI | 200G (DPU) | 400G | 800G | 1.6T (smart switch) |
|---------------|------------|--------------|-------|-------|-------|
| VNETs | | 1024 | 2048 | 4096 | 8192 |
| ENIs | | 64 | 128 | 256 | 512 |
| Routes | 100K | 6.4M | 12.8M | 25.6M | 51.2M |
| NSGs | 5in + 5out | 640 | 1280 | 2560 | 5120 |
| ACLs prefixes | 10x100K | 64M | 128M | 256M | 512M |
| ACLs Ports | 10x10K | 6.4M | 12.8M | 25.6M | 51.2M |
| Mappings (CA to PA) | 160K | 10M | 20M | 40M | 80M |
| Act Con | 1M (bidir) | 64M | 128M | 256M | 512M |
| CPS | | 3.75M | 7.5M | 15M | 30M |
| bg flows TCP | | 1M (bidir) | 2M | 4M | 8M |
| bg flows UDP | | 1M (bidir) | 2M | 4M | 8M |

- ACL rules per NSG = 1000
- Prefixes per ACL rule = 100
- Prefixes mapped per ACL rule = 16
- Prefixes routed per ACL rule = 84
- Routes per ACL rule = 10
- -> Change Above: NSG per ENI changed since 5 Inbound & 5 Outbound stages are required

**NEW Values End** ####################################################
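The per-device columns in the table follow from the per-ENI column multiplied by the ENI count for each speed class (e.g. 100K routes per ENI × 64 ENIs = 6.4M for a 200G DPU). A quick arithmetic check, as an illustrative sketch rather than part of the spec:

```python
# Verify that per-device scale values equal per-ENI values times ENI count.
enis = {"200G": 64, "400G": 128, "800G": 256, "1.6T": 512}

per_eni = {
    "routes": 100_000,             # 100K routes per ENI
    "nsgs": 10,                    # 5 inbound + 5 outbound stages
    "acl_prefixes": 10 * 100_000,  # 10 NSGs x 100K prefixes each
    "acl_ports": 10 * 10_000,      # 10 NSGs x 10K ports each
}

def scaled(metric: str, speed: str) -> int:
    return per_eni[metric] * enis[speed]

# 200G DPU: 100K x 64 = 6.4M routes, matching the table above.
assert scaled("routes", "200G") == 6_400_000
assert scaled("nsgs", "200G") == 640
assert scaled("acl_prefixes", "800G") == 256_000_000
assert scaled("acl_ports", "1.6T") == 51_200_000
```

(The CA-to-PA mapping row rounds: 160K × 64 = 10.24M, listed as 10M.)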
<!--Comment Out
1. &nbsp; 8 ENI Scenario
- 8 ENIs/VPorts
- 200k \* 8 = 1.6M routes
@@ -312,6 +337,7 @@ both scenarios:
- 48000 ACL rules
- 9.6M Prefixes (upper limit per DPU - sum of the above)
- 2M Mapping Table
-->

## MSFT LAB IXIA Configuration

@@ -54,7 +54,7 @@ Each DPU sends heartbeat messages at a configured interval to its peer. When a p

## State Synchronization

State synchronization between the 2 DPUs uses the CNIP IP. All state synchronization happens at the granularity of the DP-VIP and happens from the primary of the DP-VIP towards the secondary. State synchronization happens in 2 stages
State synchronization between the 2 DPUs uses the CNIP IP. All state synchronization happens at the granularity of the DP-VIP and happens from the primary of the DP-VIP towards the secondary. State synchronization happens in 2 parallel stages

1. Bulk Sync
1. Data path sync
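The two parallel stages can be sketched as follows. This is an illustrative model only: the actual DASH sync transport and protocol are not specified here. It shows a one-shot bulk sync of the primary's existing flow state running alongside a continuous data-path sync of newly created flows:

```python
# Sketch of the two parallel sync stages: a one-time bulk walk of the
# existing flow table, concurrent with streaming of live flow events.
import queue
import threading

def bulk_sync(flow_table, send):
    # Stage 1: replicate the primary's existing flows once.
    for flow, state in flow_table.items():
        send(("bulk", flow, state))

def datapath_sync(events: queue.Queue, send, stop: threading.Event):
    # Stage 2: replicate new flow events as the data path creates them.
    while not stop.is_set() or not events.empty():
        try:
            flow, state = events.get(timeout=0.1)
        except queue.Empty:
            continue
        send(("live", flow, state))

received = []
send = received.append          # stand-in for sending to the secondary
events = queue.Queue()
stop = threading.Event()

t = threading.Thread(target=datapath_sync, args=(events, send, stop))
t.start()
events.put(("flow-b", "SYN"))               # new flow arrives during sync
bulk_sync({"flow-a": "ESTABLISHED"}, send)  # runs in parallel with stage 2
stop.set()
t.join()
```

The point of the sketch is that neither stage blocks the other: bulk sync catches up on pre-existing state while data-path sync keeps the secondary current for flows created mid-sync.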
Binary file not shown.