@gpshead gpshead commented Dec 29, 2025

Optimize base64 encoding/decoding by eliminating loop-carried dependencies. Key changes:

  • Add base64_encode_trio() and base64_decode_quad() helper functions that process complete groups independently (see the sketch after this list)
  • Add base64_encode_fast() and base64_decode_fast() wrappers
  • Update b2a_base64 and a2b_base64 to use fast path for complete groups
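
A minimal sketch of the trio idea, with assumed bodies (the merged helpers in Modules/binascii.c may differ in detail): each group reads a fixed 3-byte slice of the input and writes a fixed 4-byte slice of the output, so no value computed in one iteration feeds the next and the CPU can keep several groups in flight in its pipeline.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical copy of the encode alphabet; the real table lives in
       Modules/binascii.c. */
    static const unsigned char table_b2a_base64[64] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    /* Encode one complete 3-byte group into 4 base64 characters.  The group
       is addressed through plain offsets, so consecutive calls are
       independent of each other. */
    static inline void
    base64_encode_trio(const unsigned char *in, unsigned char *out)
    {
        uint32_t v = ((uint32_t)in[0] << 16) | ((uint32_t)in[1] << 8) | in[2];
        out[0] = table_b2a_base64[(v >> 18) & 0x3f];
        out[1] = table_b2a_base64[(v >> 12) & 0x3f];
        out[2] = table_b2a_base64[(v >> 6) & 0x3f];
        out[3] = table_b2a_base64[v & 0x3f];
    }

    /* Encode all complete groups; any trailing 1-2 bytes are left for the
       existing slow path to pad with '='. */
    static void
    base64_encode_fast(const unsigned char *in, size_t n_trios,
                       unsigned char *out)
    {
        for (size_t i = 0; i < n_trios; i++) {
            base64_encode_trio(in + i * 3, out + i * 4);
        }
    }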

The binasciibench benchmark I used to measure base64 encoding/decoding throughput is included in the commit history, but I pulled it out of the PR in favor of adding it to pyperformance.

Performance gains (encode/decode speedup vs main, PGO builds):

             64 bytes    64K        1M
  Zen2:      1.2x/1.8x   1.7x/2.8x  1.5x/2.8x
  Zen4:      1.2x/1.7x   1.6x/3.0x  1.5x/3.0x  [old data, likely faster]
  M4:        1.3x/1.9x   2.3x/2.8x  2.4x/2.9x  [old data, likely faster]
  RPi5-32:   1.2x/1.2x   2.4x/2.4x  2.0x/2.1x
  RPi4-64:   1.3x/2.0x   2.4x/5.0x  1.8x/5.0x

Additional SIMD implementations (NEON, AVX-512 VBMI) can achieve +50% (M4) to +1500% (!! Zen4) further gains and are planned for follow-on work if deemed simple to maintain.

Widely used third-party libraries such as simdutf (C++ based, unfortunately) contain the industry-canonical SIMD-accelerated variants, so the decision of how, when, and whether to link against and use those is best kept separate.

This PR's wins, which come simply from better use of modern CPU functional-unit pipelining in pure C, make sense regardless.

Based on my exploratory work done in main...gpshead:cpython:claude/vectorize-base64-c-S7Hku

Add Tools/binasciibench/binasciibench.py benchmark for measuring base64
encoding/decoding throughput.

Optimize base64 encoding/decoding by eliminating loop-carried dependencies.
Key changes:
- Add base64_encode_trio() and base64_decode_quad() helper functions
  that process complete groups independently
- Add base64_encode_fast() and base64_decode_fast() wrappers
- Update b2a_base64 and a2b_base64 to use fast path for complete groups

Performance gains (encode/decode speedup vs main, PGO builds):

             64 bytes    64K        1M
  Zen2:      1.1x/1.6x   1.6x/2.4x  1.4x/2.4x
  Zen4:      1.2x/1.7x   1.6x/3.0x  1.5x/3.0x
  M4:        1.3x/1.9x   2.3x/2.8x  2.4x/2.9x
  RPi5-32:   1.4x/1.4x   2.4x/2.0x  2.0x/1.9x

Additional SIMD implementations (NEON, AVX-512 VBMI) can achieve
+50% to +1500% further gains and are planned for follow-on work.

Co-authored-by: Claude Opus 4.5 <[email protected]>
@gpshead gpshead changed the title Optimize base64 encode and decode for an easy 2-3x performance win gh-124951: Optimize base64 encode and decode for an easy 2-3x performance win [no SIMD required] Dec 29, 2025
@gpshead gpshead added the performance Performance or resource usage label Dec 29, 2025
@gpshead gpshead changed the title gh-124951: Optimize base64 encode and decode for an easy 2-3x performance win [no SIMD required] gh-124951: Optimize base64 encode and decode for an easy 2-3x speedup [no SIMD required] Dec 29, 2025
@gpshead gpshead changed the title gh-124951: Optimize base64 encode and decode for an easy 2-3x speedup [no SIMD required] gh-124951: Optimize base64 encode & decode for an easy 2-3x speedup [no SIMD] Dec 29, 2025
gpshead and others added 3 commits December 29, 2025 00:30
MSVC doesn't support forward declarations of arrays without explicit
size. Move the table definition before the inline functions that use
it, eliminating the need for a forward declaration.

Co-authored-by: Claude Opus 4.5 <[email protected]>
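
For illustration, a minimal sketch of the portability issue this commit fixes (names hypothetical; not the exact binascii.c code): a file-scope static array declared without a size is an incomplete type, which GCC and Clang tolerate as a forward declaration but MSVC rejects, so the definition simply has to come before the inline helpers that use it.

    /* What the earlier revision did, which MSVC rejects (incomplete type): */
    #if 0
    static const unsigned char decode_table[];   /* forward declaration, no size */
    /* ... inline helpers that index decode_table ... */
    static const unsigned char decode_table[256] = { /* ... */ };
    #endif

    /* Portable fix: define the table first, then the helpers that use it. */
    static const unsigned char decode_table[256] = { 0 };  /* real entries elided */

    static inline unsigned char
    decode_one(unsigned char c)
    {
        return decode_table[c];
    }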
@gpshead gpshead marked this pull request as ready for review December 29, 2025 01:08
@gpshead gpshead requested a review from AA-Turner as a code owner December 29, 2025 01:08
@gpshead gpshead self-assigned this Dec 29, 2025
gpshead and others added 3 commits December 29, 2025 01:40
Add Py_ALIGNED(64) to both lookup tables to ensure each fits
within a single L1 cache line, reducing potential cache misses
during encoding/decoding loops.

Co-authored-by: Claude Opus 4.5 <[email protected]>
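
A sketch of what that looks like for the 64-entry encode table (attribute placement is an assumption; the merged declaration may differ), with Py_ALIGNED coming from CPython's pyport.h:

    #include "Python.h"   /* Py_ALIGNED */

    /* 64 entries of 1 byte each: with 64-byte alignment the whole encode
       table occupies exactly one cache line on CPUs with 64-byte L1 lines. */
    static Py_ALIGNED(64) const unsigned char table_b2a_base64[64] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";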
Replace hardcoded '=' characters with the BASE64_PAD macro
for consistency with the rest of the codebase.

Co-authored-by: Claude Opus 4.5 <[email protected]>

@serhiy-storchaka serhiy-storchaka left a comment

Looks pretty simple with a large benefit.

BTW, I'm going to add support for ignorechars in the decoder, so it can support multiline input without ignoring all other errors. The decoder will return to the fast path for each line.

gpshead and others added 4 commits January 2, 2026 04:28
Address review feedback from serhiy-storchaka: the fast path was doing
two checks per group - an explicit PAD comparison and the invalid char
check in base64_decode_quad().

Change PAD's table entry from 0 to 64 so the existing (v0|v1|v2|v3)&0xc0
check catches it, eliminating 4 comparisons per group.

The slow path is unaffected since it checks for PAD character before
the table lookup.

Decode is ~16% faster at 64K (1.62 GB/s → 1.88 GB/s).

Co-authored-by: Claude Opus 4.5 <[email protected]>
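
A sketch of the check this relies on (the table values for valid and invalid characters are assumptions; the function name matches the PR, the body is illustrative): valid characters decode to 0..63, invalid ones to a value with high bits set (0xff here), and PAD now maps to 64, so one OR-and-mask over the four lookups rejects all of them at once.

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    base64_decode_quad(const unsigned char *in, unsigned char *out,
                       const unsigned char *table)
    {
        unsigned char v0 = table[in[0]];
        unsigned char v1 = table[in[1]];
        unsigned char v2 = table[in[2]];
        unsigned char v3 = table[in[3]];
        /* 0..63 passes; 0xff (invalid) and 64 (PAD) both have a bit inside
           0xc0 set, so a single branch sends the group to the slow path. */
        if ((v0 | v1 | v2 | v3) & 0xc0) {
            return false;
        }
        uint32_t v = ((uint32_t)v0 << 18) | ((uint32_t)v1 << 12)
                   | ((uint32_t)v2 << 6) | v3;
        out[0] = (unsigned char)(v >> 16);
        out[1] = (unsigned char)(v >> 8);
        out[2] = (unsigned char)v;
        return true;
    }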
Suggested by serhiy-storchaka: replace index math (in + i*3, out + i*4)
with pointer increments. Encode is ~7% faster at 64K (2.11 → 2.25 GB/s).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
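
For comparison, a sketch of the two loop shapes (hypothetical wrapper; the merged code may differ): the index-math form recomputes in + i*3 and out + i*4 on every iteration, while the pointer-increment form just bumps the two pointers.

    #include <stddef.h>

    /* See the encode-trio sketch above for one possible body. */
    static void base64_encode_trio(const unsigned char *in, unsigned char *out);

    static void
    base64_encode_fast(const unsigned char *in, size_t n_trios,
                       unsigned char *out)
    {
        /* Index math (before):
         *     base64_encode_trio(in + i * 3, out + i * 4);
         * Pointer increments (after): */
        for (size_t i = 0; i < n_trios; i++, in += 3, out += 4) {
            base64_encode_trio(in, out);
        }
    }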
@gpshead gpshead merged commit 61fc72a into python:main Jan 2, 2026
46 checks passed
Py_ssize_t i;

for (i = 0; i < n_quads; i++) {
    if (!base64_decode_quad(in + i * 4, out + i * 3, table)) {

Did incrementing in and out by 4 and 3 have any benefit?
