Add AdvSimd optimised GetPointerToFirstInvalidChar #121383
The intrinsic codepath was disabled for AdvSimd due to slower performance than the Vector128 codepath. Since there is no equivalent instruction for ExtractMostSignificantBits in AdvSimd, using the SSE2 algorithm will be slow. Therefore a new specialized algorithm, based on the generic Vector128 algorithm, is added to optimise for AdvSimd.

Fixes dotnet#41699 and dotnet#42052.

Performance results on Neoverse-V2 (lower is better):

| Method       | Input              | Version | Mean      | Error     | Ratio |
|------------- |------------------- |-------- |----------:|----------:|------:|
| GetByteCount | EnglishAllAscii    | Before  |  4.414 us | 0.0852 us | 1.000 |
| GetByteCount | EnglishAllAscii    | After   |  4.420 us | 0.0858 us | 1.001 |
| GetByteCount | EnglishMostlyAscii | Before  | 20.168 us | 0.0613 us | 1.000 |
| GetByteCount | EnglishMostlyAscii | After   | 18.366 us | 0.1092 us | 0.911 |
| GetByteCount | Chinese            | Before  |  9.132 us | 0.0052 us | 1.000 |
| GetByteCount | Chinese            | After   |  8.157 us | 0.0342 us | 0.893 |
| GetByteCount | Cyrillic           | Before  |  7.929 us | 0.0069 us | 1.000 |
| GetByteCount | Cyrillic           | After   |  7.157 us | 0.0521 us | 0.903 |
| GetByteCount | Greek              | Before  | 10.042 us | 0.0093 us | 1.000 |
| GetByteCount | Greek              | After   |  9.109 us | 0.0646 us | 0.907 |
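For context, here is a minimal sketch (assumed shape, not the actual runtime code) of how a generic Vector128 scan can find the first non-ASCII char using `ExtractMostSignificantBits` — the operation that lowers to a single instruction on x86 but needs a multi-instruction emulation on AdvSimd, which is what motivates this PR:

```csharp
using System;
using System.Numerics;
using System.Runtime.InteropServices;
using System.Runtime.Intrinsics;

static class AsciiScan
{
    // Illustrative sketch only: returns the index of the first non-ASCII char,
    // or text.Length if every char is ASCII. The mask from GreaterThan is
    // collapsed to one bit per lane via ExtractMostSignificantBits, so a
    // scalar trailing-zero count locates the first failing lane directly.
    public static int FirstNonAscii(ReadOnlySpan<char> text)
    {
        ReadOnlySpan<ushort> s = MemoryMarshal.Cast<char, ushort>(text);
        int i = 0;
        if (Vector128.IsHardwareAccelerated)
        {
            Vector128<ushort> ascii = Vector128.Create((ushort)0x7F);
            for (; i + Vector128<ushort>.Count <= s.Length; i += Vector128<ushort>.Count)
            {
                Vector128<ushort> v = Vector128.Create(s.Slice(i, Vector128<ushort>.Count));
                uint bits = Vector128.GreaterThan(v, ascii).ExtractMostSignificantBits();
                if (bits != 0)
                    return i + BitOperations.TrailingZeroCount(bits);
            }
        }
        for (; i < s.Length; i++)   // scalar tail for the remaining chars
            if (s[i] > 0x7F)
                return i;
        return i;
    }
}
```

On x86, `ExtractMostSignificantBits` maps onto a movemask-style instruction; AdvSimd has no such instruction, so the JIT must emit several instructions for it, which is why the SSE2-shaped algorithm underperforms on arm64.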
I think @tannergooding is a better person to review this as it's Libraries code. Can any of this be replaced with the newly added "IndexOf"-like Vector APIs?
👍. This is not a large improvement and makes the code harder to maintain. I'd personally lean more towards removing the platform-specific path instead.

The general goal is to reduce the amount of platform-specific code we have to maintain over time. This is a goal even if there are relatively minor regressions in doing so. We really only want to have platform-specific paths if we know there is a "significant" advantage, such as much higher throughput numbers and/or real-world benchmarks (not microbenchmarks) showing the gains.

Not all 10% is created equal. 10% of 1 us (100 ns) is significantly less than 10% of 1 ms (100 us), for example. And while a 10% gain on 10 us can be impactful to some apps, it is likely not the bottleneck, nor likely to have any kind of measurable impact on typical workloads. This is particularly true when things like Tiered Compilation don't kick in until there's a 100 ms gap between Tier 0 compilations after startup. So we typically want to see a reduction in code complexity, or something showing the complexity increase is worthwhile.
Thanks @tannergooding for your comment and explanation. I certainly agree; we will close this PR.
Thanks all, that makes sense. I will bear that in mind and explore the suggestions of using the newly added Vector APIs.
The intrinsic codepath was temporarily disabled (#42052) for AdvSimd due to a performance regression (#41699), but was never re-enabled. Since there is no equivalent instruction for ExtractMostSignificantBits in AdvSimd, using the same algorithm as SSE2 is inherently slower.

This PR adds optimizations specific to arm64 AdvSimd, based on the generic Vector128 algorithm. It makes use of the Unzip instruction to convert vector comparison masks into scalars for processing, which offers some speedups over the generic Vector128 path.

Performance results
Run on Neoverse-V2 (lower is better)
cc @dotnet/arm64-contrib @SwapnilGaikwad
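For illustration, the narrowing trick the description refers to can be sketched with the portable Vector128 API (hypothetical helper, not the PR's actual code). On arm64, `Vector128.Narrow` lowers to a NEON narrowing instruction, the same family as the uzp1 "Unzip" used here: instead of building a per-lane bitmask, the 8x16-bit comparison mask is packed into 8 bytes, moved to a general register as one 64-bit scalar, and scanned with a trailing-zero count:

```csharp
using System;
using System.Numerics;
using System.Runtime.Intrinsics;

static class MaskDemo
{
    // Hypothetical helper: given a per-lane comparison mask (each ushort lane
    // is all-ones or all-zeros), return the index of the first set lane, or -1.
    // Narrow(mask, mask) packs each 16-bit lane into one byte; the lower 8
    // bytes of the result (element 0 when viewed as UInt64) hold the narrowed
    // mask, so each original lane occupies 8 bits of the scalar and the first
    // set lane is TrailingZeroCount(bits) / 8.
    public static int FirstSetLane(Vector128<ushort> mask)
    {
        ulong bits = Vector128.Narrow(mask, mask).AsUInt64().ToScalar();
        return bits == 0 ? -1 : BitOperations.TrailingZeroCount(bits) >> 3;
    }
}
```

This avoids the expensive movemask emulation entirely: one narrowing instruction plus one vector-to-scalar move replaces the per-lane bit extraction that AdvSimd lacks.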