Add C1Z options to tweak decoder. #458
Conversation
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Bumps github.com/klauspost/compress to v1.18.0 and vendors the new internal/le helpers; refactors many zstd/huff0 internals to use le loaders and cursor-based bit readers; fixes snappy error propagation. Adds per-decoder options (memory, concurrency) and wires decoder options through C1Z loaders, managers, and readers.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Caller
    participant API as NewC1ZFile / NewC1FileReader
    participant Loader as loadC1z
    participant DecCtor as NewDecoder
    participant Zstd as zstd.NewReader
    Caller->>API: NewC1ZFile(ctx,..., WithDecoderOptions(...))
    API->>Loader: loadC1z(path, tmpDir, decoderOptions...)
    Loader->>DecCtor: NewDecoder(c1zFile, decoderOptions...)
    DecCtor->>Zstd: zstd.NewReader(opts: lowmem, maxMem, [concurrency if set])
    Zstd-->>DecCtor: configured decoder
    DecCtor-->>Loader: decoded io.Reader
    Loader->>Loader: write DB file from decoded stream
    Loader-->>API: DB path
    API-->>Caller: C1ZFile (ready)
```
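A minimal caller-side sketch of the flow above. The option names (WithTmpDir, WithDecoderOptions, WithContext, WithDecoderConcurrency) come from this PR's discussion; the import path and the exact NewC1ZFile signature are assumptions and may differ from the final API:

```go
package main

import (
	"context"
	"log"

	"github.com/conductorone/baton-sdk/pkg/dotc1z" // import path assumed
)

func main() {
	ctx := context.Background()

	// Open a .c1z file, threading decoder options down to the zstd reader.
	c1zFile, err := dotc1z.NewC1ZFile(ctx, "sync.c1z",
		dotc1z.WithTmpDir("/tmp"),
		dotc1z.WithDecoderOptions(
			dotc1z.WithContext(ctx),
			dotc1z.WithDecoderConcurrency(1), // 1 = synchronous decoding, 0 = GOMAXPROCS
		),
	)
	if err != nil {
		log.Fatal(err)
	}
	_ = c1zFile // use the file via the connector store APIs, then close it
}
```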
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
pkg/dotc1z/file.go (1)
41-52: Ensure decoder is closed on all paths (avoid leak on io.Copy error).

If io.Copy fails, r.Close() isn’t called. Defer closing right after creation and drop the explicit close at the end.
Apply:

```diff
-	r, err := NewDecoder(c1zFile, opts...)
-	if err != nil {
-		return "", err
-	}
-	_, err = io.Copy(dbFile, r)
-	if err != nil {
-		return "", err
-	}
-	err = r.Close()
-	if err != nil {
-		return "", err
-	}
+	r, err := NewDecoder(c1zFile, opts...)
+	if err != nil {
+		return "", err
+	}
+	defer func() {
+		if cerr := r.Close(); cerr != nil && err == nil {
+			err = cerr
+		}
+	}()
+	_, err = io.Copy(dbFile, r)
+	if err != nil {
+		return "", err
+	}
```

vendor/github.com/klauspost/compress/zstd/bitreader.go (1)
120-127: Critical: close() zeroes cursor before validation, hiding unread bits

Setting b.cursor = 0 before calling finished()/remain() can incorrectly pass validation (and under-report remaining bits) when there are still bytes left. Move the release after validations.

Apply:

```diff
 func (b *bitReader) close() error {
-	// Release reference.
-	b.in = nil
-	b.cursor = 0
-	if !b.finished() {
-		return fmt.Errorf("%d extra bits on block, should be 0", b.remain())
-	}
-	if b.bitsRead > 64 {
-		return io.ErrUnexpectedEOF
-	}
-	return nil
+	// Validate before clearing state.
+	if !b.finished() {
+		return fmt.Errorf("%d extra bits on block, should be 0", b.remain())
+	}
+	if b.bitsRead > 64 {
+		return io.ErrUnexpectedEOF
+	}
+	// Release references last.
+	b.in = nil
+	b.cursor = 0
+	return nil
 }
```

pkg/dotc1z/decoder.go (2)
167-171: Fix: nil deref when zstd.NewReader fails

If zstd.NewReader(...) returns an error, d.zd stays nil and the current check won’t trigger, leading to a panic at d.zd.Read. Check decoderInitErr directly.

```diff
-	// Check we have a valid decoder
-	if d.zd != nil && d.decoderInitErr != nil {
-		return 0, d.decoderInitErr
-	}
+	// Check decoder initialization error
+	if d.decoderInitErr != nil {
+		return 0, d.decoderInitErr
+	}
```
177-197: Enforce max decoded size per Read to prevent overruns

The pre-read check allows a single Read call to exceed the cap. Cap p to remaining bytes before calling the underlying reader.

```diff
 	// Check we have not exceeded our max decoded size
 	maxDecodedSize := d.o.maxDecodedSize
 	if maxDecodedSize == 0 {
 		maxDecodedSize = defaultMaxDecodedSize
 	}
-	if d.decodedBytes > maxDecodedSize {
+	if d.decodedBytes > maxDecodedSize {
 		return 0, ErrMaxSizeExceeded
 	}
 	// Do underlying read
-	n, err := d.zd.Read(p)
+	if maxDecodedSize > 0 {
+		if d.decodedBytes >= maxDecodedSize {
+			return 0, ErrMaxSizeExceeded
+		}
+		if remaining := maxDecodedSize - d.decodedBytes; uint64(len(p)) > remaining {
+			p = p[:remaining]
+		}
+	}
+	n, err := d.zd.Read(p)
 	//nolint:gosec // No risk of overflow/underflow because n is always >= 0.
 	d.decodedBytes += uint64(n)
```
🧹 Nitpick comments (21)
go.mod (1)
3-3: Use major.minor in the go directive; optionally pin toolchain

Go expects “go 1.23”, not “go 1.23.4”. If you want to pin a patch toolchain, add a separate toolchain directive.

```diff
-go 1.23.4
+go 1.23
+toolchain go1.23.4
```

vendor/github.com/klauspost/compress/README.md (1)
30-38: Vendor docs trip markdownlint; prefer excluding vendor/ from docs lint.

The changelog intentionally uses bare URLs and mixed indentation; fixing upstream text locally adds churn. Recommend updating lint config to skip vendor/**.

If desired, I can propose a minimal markdownlint config change that excludes vendor paths.
pkg/dotc1z/file.go (2)
3-12: Use os.O_TRUNC and drop syscall import.

Minor cleanup; reduces imports and is more idiomatic.

```diff
 import (
 	"errors"
 	"io"
 	"os"
 	"path/filepath"
-	"syscall"
@@
-	outFile, err := os.OpenFile(outputFilePath, os.O_RDWR|os.O_CREATE|syscall.O_TRUNC, 0644)
+	outFile, err := os.OpenFile(outputFilePath, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0644)
```

Also applies to: 74-76
86-94: Close zstd writer on error path of io.Copy.

If io.Copy fails, c1z remains open. Close it before returning to free resources.

```diff
 _, err = io.Copy(c1z, dbFile)
 if err != nil {
-	return err
+	_ = c1z.Close()
+	return err
 }
 err = c1z.Flush()
```

Also applies to: 100-103
pkg/dotc1z/c1file.go (2)
124-128: Make WithDecoderOptions additive to support multiple calls.

Current implementation overwrites any previously provided options. Match WithPragma’s append semantics.

```diff
-func WithDecoderOptions(opts ...DecoderOption) C1ZOption {
-	return func(o *c1zOptions) {
-		o.decoderOptions = opts
-	}
-}
+func WithDecoderOptions(opts ...DecoderOption) C1ZOption {
+	return func(o *c1zOptions) {
+		if len(opts) == 0 {
+			return
+		}
+		o.decoderOptions = append(o.decoderOptions, opts...)
+	}
+}
```
289-291: Typo in error message.

“datbase” → “database”.

```diff
-	return fmt.Errorf("c1file: datbase has not been opened")
+	return fmt.Errorf("c1file: database has not been opened")
```

pkg/dotc1z/dotc1z.go (1)
52-59: Consider exposing decoder options in NewExternalC1FileReader.

Optional: accept opts ...DecoderOption to configure external loads consistently.

```diff
-func NewExternalC1FileReader(ctx context.Context, tmpDir string, externalResourceC1ZPath string) (connectorstore.Reader, error) {
-	dbFilePath, err := loadC1z(externalResourceC1ZPath, tmpDir)
+func NewExternalC1FileReader(ctx context.Context, tmpDir string, externalResourceC1ZPath string, opts ...DecoderOption) (connectorstore.Reader, error) {
+	dbFilePath, err := loadC1z(externalResourceC1ZPath, tmpDir, opts...)
```

vendor/github.com/klauspost/compress/zstd/seqdec_amd64.s (1)
10-13: Asm offsets updated for bitReader layout — verify with compile-time checks

Moving bitsRead to 40(CX) and introducing cursor at 32(CX) matches bitReader{ in(0..), value@24, cursor@32, bitsRead@40 }. Prolog/epilog stores mirror this. Recommend adding/retaining Go-side unsafe.Offsetof static assertions (if not already present upstream) to guard against future layout drift.

If you want, I can draft a tiny Go file with var _ [1]int-style offset checks to live alongside the asm; a sketch follows below.

Also applies to: 301-304
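A minimal sketch of such compile-time checks, assuming the field names and offsets quoted in this comment (value@24, cursor@32, bitsRead@40); the real struct in zstd/bitreader.go may differ:

```go
package zstd

import "unsafe"

// Guard against layout drift between bitReader and the offsets hard-coded
// in seqdec_amd64.s. Each index is a compile-time constant; if a field
// moves, the index falls outside the 1-element array and the build fails.
func _() {
	var b bitReader
	var check [1]struct{}
	_ = check[unsafe.Offsetof(b.value)-24]
	_ = check[unsafe.Offsetof(b.cursor)-32]
	_ = check[unsafe.Offsetof(b.bitsRead)-40]
}
```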
pkg/dotc1z/decoder.go (8)
107-116: Document negative concurrency and add basic validation

Docs mention 0 uses GOMAXPROCS and default 1 disables async; they don’t mention negative. Either document “n < 0 = don’t set option (use library default)” or reject negatives. Also guard against absurdly large n.

```diff
-// WithDecoderConcurrency sets the number of created decoders.
-// Default is 1, which disables async decoding/concurrency.
-// 0 uses GOMAXPROCS.
+// WithDecoderConcurrency sets the number of created decoders.
+// Default is 1 (disables async decoding/concurrency). 0 uses GOMAXPROCS.
+// If n < 0, no explicit concurrency option is set and the library default applies.
 func WithDecoderConcurrency(n int) DecoderOption {
 	return func(o *decoderOptions) error {
-		o.decoderConcurrency = n
+		// Optional: constrain to a sane range; adjust/remove as desired.
+		// if n > 1<<16 { return errors.New("c1z: WithDecoderConcurrency too large") }
+		o.decoderConcurrency = n
 		return nil
 	}
 }
```
142-154: Option assembly LGTM; tiny allocation nit

Pre-size slice to avoid one grow on append.

```diff
-	zstdOpts := []zstd.DOption{
+	zstdOpts := make([]zstd.DOption, 0, 3)
+	zstdOpts = append(zstdOpts,
 		zstd.WithDecoderLowmem(true), // uses lower memory, trading potentially more allocations
-		zstd.WithDecoderMaxMemory(maxMemSize), // sets limit on maximum memory used when decoding stream
-	}
+		zstd.WithDecoderMaxMemory(maxMemSize), // sets limit on maximum memory used when decoding stream
+	)
 	if d.o.decoderConcurrency >= 0 {
 		zstdOpts = append(zstdOpts, zstd.WithDecoderConcurrency(d.o.decoderConcurrency))
 	}
 	zd, err := zstd.NewReader(
 		d.f,
 		zstdOpts...,
 	)
```

Also applies to: 147-149, 151-154
227-229: Default concurrency = 1: confirm intent

Setting 1 preserves prior behavior (no async). If you want to keep upstream defaults instead, consider -1 here to omit setting the option unless explicitly provided.
75-79: Update default documentation (now 128MiB)

Comment says 32MiB, but defaultDecoderMaxMemory is 128MiB.

```diff
-// Maximum is 1 << 63 bytes. Default is 32MiB.
+// Maximum is 1 << 63 bytes. Default is 128MiB.
```
91-95: Update decoded size default in docs (now 3GiB)

Comment says 1GiB; constant is 3GiB.

```diff
-// Maximum is 1 << 63 bytes. Default is 1GiB.
+// Maximum is 1 << 63 bytes. Default is 3GiB.
```

Also applies to: 16-21
67-73: Typo in comment

“subequent” → “subsequent”.

```diff
-// WithContext sets a context, when cancelled, will cause subequent calls to Read() to return ctx.Error().
+// WithContext sets a context that, when cancelled, will cause subsequent calls to Read() to return ctx.Err().
```
35-47: Harden header read against short reads and use io.SeekStart

Use io.ReadFull to guarantee full header and io.SeekStart for clarity.

```diff
-	_, err := rs.Seek(0, 0)
+	_, err := rs.Seek(0, io.SeekStart)
 	if err != nil {
 		return err
 	}
 }
 headerBytes := make([]byte, len(C1ZFileHeader))
-_, err := reader.Read(headerBytes)
+_, err := io.ReadFull(reader, headerBytes)
```

Also applies to: 43-47
28-29: Nit: double space in error message

Minor formatting.

```diff
-ErrWindowSizeExceeded = fmt.Errorf("c1z: window size exceeded, increase DecoderMaxMemory using %v environment variable", maxDecoderMemorySizeEnv)
+ErrWindowSizeExceeded = fmt.Errorf("c1z: window size exceeded, increase DecoderMaxMemory using %v environment variable", maxDecoderMemorySizeEnv)
```
3-3: Confirm architecture coverage (consider mips64le/loong64).

If unaligned LE loads are safe on mips64le and loong64 for your targets/toolchains, consider adding them here; otherwise, documenting the omission would help.
39-55: Document alignment and bounds preconditions.

These functions assume sufficient length and allow unaligned access on the listed archs. Add brief preconditions to avoid misuse by future callers.

Example:

```diff
-// Store16 will store v at b.
+// Store16 stores v at b[0:]. Caller must ensure len(b) >= 2.
```

(Repeat for 32/64; for Load* note that len(b) - int(i) >= N.)
11-16: Remove stale commented code.

Comments referencing Uint16 inside Load8 are misleading.

```diff
-	//return binary.LittleEndian.Uint16(b[i:])
-	//return *(*uint16)(unsafe.Pointer(&b[i]))
+	// Fast LE byte load without bounds checks.
```
1-1: Build tags: confirm intended fallbacks.

This complements the unsafe path. If you add new LE 64-bit archs to the unsafe file, mirror the condition here to keep sets complementary.
29-42: Clarify Store* semantics.

Minor doc nit: note that Store* writes starting at b[0:], mirroring PutUintXX behavior.

```diff
-// Store32 will store v at b.
+// Store32 stores v at b[0:].
```

(Repeat for 16/64.)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (25)
- go.mod (1 hunks)
- pkg/dotc1z/c1file.go (3 hunks)
- pkg/dotc1z/decoder.go (4 hunks)
- pkg/dotc1z/dotc1z.go (1 hunks)
- pkg/dotc1z/file.go (2 hunks)
- vendor/github.com/klauspost/compress/README.md (8 hunks)
- vendor/github.com/klauspost/compress/huff0/bitreader.go (7 hunks)
- vendor/github.com/klauspost/compress/internal/le/le.go (1 hunks)
- vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go (1 hunks)
- vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go (1 hunks)
- vendor/github.com/klauspost/compress/s2sx.mod (1 hunks)
- vendor/github.com/klauspost/compress/zstd/README.md (1 hunks)
- vendor/github.com/klauspost/compress/zstd/bitreader.go (6 hunks)
- vendor/github.com/klauspost/compress/zstd/blockdec.go (0 hunks)
- vendor/github.com/klauspost/compress/zstd/blockenc.go (3 hunks)
- vendor/github.com/klauspost/compress/zstd/decoder.go (2 hunks)
- vendor/github.com/klauspost/compress/zstd/enc_base.go (1 hunks)
- vendor/github.com/klauspost/compress/zstd/matchlen_generic.go (1 hunks)
- vendor/github.com/klauspost/compress/zstd/seqdec.go (1 hunks)
- vendor/github.com/klauspost/compress/zstd/seqdec_amd64.s (16 hunks)
- vendor/github.com/klauspost/compress/zstd/seqdec_generic.go (1 hunks)
- vendor/github.com/klauspost/compress/zstd/seqenc.go (0 hunks)
- vendor/github.com/klauspost/compress/zstd/snappy.go (2 hunks)
- vendor/github.com/klauspost/compress/zstd/zstd.go (2 hunks)
- vendor/modules.txt (1 hunks)
💤 Files with no reviewable changes (2)
- vendor/github.com/klauspost/compress/zstd/seqenc.go
- vendor/github.com/klauspost/compress/zstd/blockdec.go
🧰 Additional context used
🪛 LanguageTool
vendor/github.com/klauspost/compress/README.md
[style] ~25-~25: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...isable all assembly across packages.
Use the links above for more information on...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[grammar] ~40-~40: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1017
- s2: Don't use stack for index tables htt...
(QB_NEW_EN)
[grammar] ~41-~41: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1014
- gzhttp: No content-type on no body respo...
(QB_NEW_EN)
[grammar] ~42-~42: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1011
- gzhttp: Do not set the content-type when...
(QB_NEW_EN)
[grammar] ~94-~94: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/871
* flate: Reduce allocations in decompresso...
(QB_NEW_EN)
[grammar] ~95-~95: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/869
* s2: Fix EstimateBlockSize on 6&7 length ...
(QB_NEW_EN)
[grammar] ~153-~153: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/739
* zstd: Add delta encoding support by @gre...
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.17.2)
vendor/github.com/klauspost/compress/README.md
31-31: Bare URL used
(MD034, no-bare-urls)
32-32: Bare URL used
(MD034, no-bare-urls)
33-33: Bare URL used
(MD034, no-bare-urls)
34-34: Bare URL used
(MD034, no-bare-urls)
35-35: Bare URL used
(MD034, no-bare-urls)
36-36: Bare URL used
(MD034, no-bare-urls)
37-37: Bare URL used
(MD034, no-bare-urls)
40-40: Bare URL used
(MD034, no-bare-urls)
41-41: Bare URL used
(MD034, no-bare-urls)
42-42: Bare URL used
(MD034, no-bare-urls)
43-43: Bare URL used
(MD034, no-bare-urls)
94-94: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
94-94: Hard tabs
Column: 1
(MD010, no-hard-tabs)
94-94: Bare URL used
(MD034, no-bare-urls)
95-95: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
95-95: Hard tabs
Column: 1
(MD010, no-hard-tabs)
95-95: Bare URL used
(MD034, no-bare-urls)
96-96: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
96-96: Hard tabs
Column: 1
(MD010, no-hard-tabs)
96-96: Bare URL used
(MD034, no-bare-urls)
153-153: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
153-153: Hard tabs
Column: 1
(MD010, no-hard-tabs)
153-153: Bare URL used
(MD034, no-bare-urls)
196-196: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
196-196: Hard tabs
Column: 1
(MD010, no-hard-tabs)
196-196: Bare URL used
(MD034, no-bare-urls)
238-238: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
238-238: Hard tabs
Column: 1
(MD010, no-hard-tabs)
238-238: Bare URL used
(MD034, no-bare-urls)
265-265: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
265-265: Hard tabs
Column: 1
(MD010, no-hard-tabs)
266-266: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
266-266: Hard tabs
Column: 1
(MD010, no-hard-tabs)
267-267: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
267-267: Hard tabs
Column: 1
(MD010, no-hard-tabs)
268-268: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
268-268: Hard tabs
Column: 1
(MD010, no-hard-tabs)
269-269: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
269-269: Hard tabs
Column: 1
(MD010, no-hard-tabs)
270-270: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
270-270: Hard tabs
Column: 1
(MD010, no-hard-tabs)
287-287: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
287-287: Hard tabs
Column: 1
(MD010, no-hard-tabs)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Cursor Bugbot
- GitHub Check: go-test (1.23.x, windows-latest)
🔇 Additional comments (24)
vendor/github.com/klauspost/compress/s2sx.mod (1)
3-3: Go 1.22 floor in vendor looks goodThis aligns with klauspost/compress v1.18.0’s minimum. Ensure CI uses Go ≥1.22.
See the go.mod comment for a script that checks CI Go versions.
vendor/github.com/klauspost/compress/zstd/enc_base.go (1)
119-121: Panic message fix (t vs s) is correctImproves diagnostics without changing behavior.
vendor/github.com/klauspost/compress/zstd/README.md (1)
9-9: Doc note about noasm/nounsafe build tags: good additionClear guidance that matches the new internal/le loaders. No action needed on our side.
vendor/modules.txt (1)
269-276: Vendor manifest updated to v1.18.0 with internal/le: looks consistentVersion and new package entries line up with the dependency bump.
vendor/github.com/klauspost/compress/zstd/snappy.go (1)
198-201: Correct error propagation on Write paths — good fix.

Switching to return r.err (the actual Write error) eliminates leakage of an unrelated err value.

Also applies to: 241-243
vendor/github.com/klauspost/compress/zstd/blockenc.go (2)
12-13: Using slices.Max requires Go ≥ 1.21 — confirm toolchain.

Ensure the module CI/go.mod uses a Go version that includes stdlib slices (1.21+), otherwise builds will fail.

Would you like a small script to assert Go toolchain version and fail early in CI? One possible approach is sketched below.
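Instead of a CI script, one lightweight pattern is a build-tag-guarded source file that only compiles (and intentionally fails) on old toolchains. This is just a sketch; the package name is illustrative and the 1.22 floor is taken from the vendor note below:

```go
//go:build !go1.22

package compress

// Toolchains at Go 1.22 or newer satisfy the go1.22 build tag and skip this
// file entirely. Older toolchains include it and fail the build on the
// undefined identifier below, surfacing the minimum-version requirement.
var _ = __requires_go_1_22_or_newer__
```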
889-891: LGTM: cleaner max count via slices.Max.

The +1 range here correctly includes the top symbol; consistent with expected histogram bounds.

vendor/github.com/klauspost/compress/zstd/decoder.go (2)
126-127: Docstring correction for Read — accurate now.“bytes read” (not written) clarifies the contract. No further action.
323-327: State reuse fix: reset bitreader cursor between DecodeAll calls — good.

Zeroing frame.history.decoders.br.cursor alongside in = nil prevents stale cursor state on pooled decoders.

Consider adding a regression test that runs consecutive DecodeAll calls on the same Decoder, verifying identical output without reallocation spikes; a sketch follows below.
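A minimal regression-test sketch for the suggestion above, written against the public zstd API (NewWriter/EncodeAll, NewReader/DecodeAll). It only checks output stability across reuses; the payload and test name are made up, and it does not measure allocation spikes:

```go
package zstd_test

import (
	"bytes"
	"testing"

	"github.com/klauspost/compress/zstd"
)

func TestDecodeAllReuse(t *testing.T) {
	enc, err := zstd.NewWriter(nil)
	if err != nil {
		t.Fatal(err)
	}
	compressed := enc.EncodeAll(bytes.Repeat([]byte("baton"), 1024), nil)
	_ = enc.Close()

	dec, err := zstd.NewReader(nil)
	if err != nil {
		t.Fatal(err)
	}
	defer dec.Close()

	var first []byte
	for i := 0; i < 4; i++ {
		got, err := dec.DecodeAll(compressed, nil)
		if err != nil {
			t.Fatalf("DecodeAll #%d: %v", i, err)
		}
		if i == 0 {
			first = got
			continue
		}
		if !bytes.Equal(first, got) {
			t.Fatalf("DecodeAll #%d output differs from first call", i)
		}
	}
}
```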
12-13: Switch to internal le loaders — aligns with v1.18.0 and nounsafe/noasm paths.

Using internal/le for LE loads centralizes unsafe/portable variants. Looks good.

Double-check that vendor/github.com/klauspost/compress/internal/le is present and builds under -tags nounsafe in CI.

Also applies to: 114-119
vendor/github.com/klauspost/compress/zstd/seqdec_generic.go (1)
32-32: Fast-path guard invariant holds.bitReader.cursoris set tolen(in)and only ever decremented byfillFast/fill, so it reliably equals the remaining input bytes. The conditioncursor > 4+((maxOffsetBits+16+16)>>3)thus guarantees sufficient data for both inlinedfillFast+getBitssequences without risk of overread.pkg/dotc1z/file.go (1)
14-14: API: loadC1z now takes decoder options (opts ...DecoderOption).Nice addition. Call sites with zero opts remain source-compatible.
vendor/github.com/klauspost/compress/internal/le/le.go (1)
1-5: LGTM: generic Indexer constraint is clear and minimal.No issues noted.
pkg/dotc1z/c1file.go (2)
106-109: New decoderOptions field looks good.Plumbs options cleanly without affecting existing behavior.
140-143: Forwarding options to loadC1z is correct.This preserves zero-config behavior while enabling customization.
pkg/dotc1z/dotc1z.go (1)
17-19: No internal call sites; approving NewC1FileReader API update.vendor/github.com/klauspost/compress/zstd/bitreader.go (2)
12-14: Struct/import changes look correctImporting
internal/leand addingcursor intaligns with the new loader approach and updated asm offsets. No issues spotted.Also applies to: 22-24
73-76: Bitreader cursor-based loads LGTMThe
fillFast,fillFastStart,fill,finished, andremaintransitions tole.Load32/64with a reverse-moving cursor are consistent and preserve semantics.Also applies to: 80-83, 90-102, 105-107, 115-117
vendor/github.com/klauspost/compress/zstd/matchlen_generic.go (1)
18-29: LGTM: switch tole.Load64and cursorless loop is soundThe 8-byte loop and tail comparison preserve behavior; slicing after
nis correct givenais shortest.pkg/dotc1z/decoder.go (1)
61-65: LGTM: options struct extensionAdding decoderConcurrency here cleanly threads the setting through the decoder.
vendor/github.com/klauspost/compress/zstd/seqdec.go (1)
248-306: LGTM: inlined fast path matches upstream intentCursor-based boundary and inlined nextFast logic look consistent with v1.18.0 performance work.
vendor/github.com/klauspost/compress/huff0/bitreader.go (2)
13-14: LGTM: switch to internal le loads and widened 4-byte fast pathsChanges align with upstream micro-opts and reduce bounds checks.
Also applies to: 69-75, 79-82, 89-101, 177-181, 185-188, 196-201
190-201: Retainoff > 4inbitReaderShifted.fill
This guard matches the upstream implementation inbitReaderShifted.fill(usesif b.off > 4) and is intentionally different from the>= 4fast-path inbitReaderBytes.fill(github.com).Likely an incorrect or invalid review comment.
vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go (1)
1-9: No action needed – go.mod specifies Go 1.23.4, so unsafe.SliceData is supported.
```go
// Load8 will load from b at index i.
func Load8[I Indexer](b []byte, i I) byte {
	//return binary.LittleEndian.Uint16(b[i:])
	//return *(*uint16)(unsafe.Pointer(&b[i]))
	return *(*byte)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}
```
🛠️ Refactor suggestion
Cast index to uintptr in unsafe.Add.
Making the offset explicit avoids coupling the generic Indexer to a concrete uintptr at call sites and improves readability.
Apply:
```diff
-	return *(*byte)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
+	return *(*byte)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), uintptr(i)))
```
🤖 Prompt for AI Agents
In vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go around
lines 11 to 16, the unsafe.Add call passes the generic index i directly; cast
the index to uintptr to make the offset explicit and avoid coupling the Indexer
type to uintptr at the call site — convert i to uintptr in the unsafe.Add second
argument so the pointer arithmetic uses uintptr(i) instead of i.
```go
// Load16 will load from b at index i.
func Load16[I Indexer](b []byte, i I) uint16 {
	//return binary.LittleEndian.Uint16(b[i:])
	//return *(*uint16)(unsafe.Pointer(&b[i]))
	return *(*uint16)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}
```
🛠️ Refactor suggestion
Cast index to uintptr in unsafe.Add.
Same rationale as above.
```diff
-	return *(*uint16)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
+	return *(*uint16)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), uintptr(i)))
```
🤖 Prompt for AI Agents
In vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go around
lines 18 to 23, the call to unsafe.Add passes i directly which is not the
correct type; cast the generic index i to uintptr before passing it to
unsafe.Add (i.e., convert i to uintptr) so the pointer arithmetic uses the
correct type and matches the same rationale used elsewhere in the file.
```go
// Load32 will load from b at index i.
func Load32[I Indexer](b []byte, i I) uint32 {
	//return binary.LittleEndian.Uint32(b[i:])
	//return *(*uint32)(unsafe.Pointer(&b[i]))
	return *(*uint32)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}
```
🛠️ Refactor suggestion
Cast index to uintptr in unsafe.Add.
```diff
-	return *(*uint32)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
+	return *(*uint32)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), uintptr(i)))
```
🤖 Prompt for AI Agents
In vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go around
lines 25 to 30, the call to unsafe.Add passes the generic index i directly which
is invalid for the offset parameter; cast the index to uintptr when calling
unsafe.Add (i.e. use uintptr(i)) so the offset is the correct uintptr type,
keeping the pointer to the slice data and returning the dereferenced uint32 as
before.
```go
// Load64 will load from b at index i.
func Load64[I Indexer](b []byte, i I) uint64 {
	//return binary.LittleEndian.Uint64(b[i:])
	//return *(*uint64)(unsafe.Pointer(&b[i]))
	return *(*uint64)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}
```
🛠️ Refactor suggestion
Cast index to uintptr in unsafe.Add.
```diff
-	return *(*uint64)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
+	return *(*uint64)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), uintptr(i)))
```
🤖 Prompt for AI Agents
In vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go around
lines 32 to 37, the call to unsafe.Add passes the generic index i directly; cast
i to uintptr before passing it to unsafe.Add (e.g., replace
unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i) with
unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), uintptr(i))) so the offset
argument has the correct type.
6127e1e to b109c95 — Compare
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
pkg/dotc1z/manager/s3/s3.go (1)
141-145: Close file handle to prevent FD leak in SaveC1Z

The os.Open result isn’t closed, leaking a descriptor on error/success paths.

```diff
 f, err := os.Open(s.tmpFile)
 if err != nil {
 	return err
 }
+defer f.Close()
```

pkg/dotc1z/decoder.go (1)
167-170: Fix nil dereference when decoder initialization fails

If zstd.NewReader returns an error, d.zd remains nil and d.decoderInitErr is set. The current check won’t trigger, leading to a nil deref at d.zd.Read(p).

```diff
-	// Check we have a valid decoder
-	if d.zd != nil && d.decoderInitErr != nil {
-		return 0, d.decoderInitErr
-	}
+	// Check decoder init error (zd will be nil on failure)
+	if d.decoderInitErr != nil {
+		return 0, d.decoderInitErr
+	}
+	if d.zd == nil {
+		return 0, fmt.Errorf("c1z: decoder not initialized")
+	}
```
🧹 Nitpick comments (8)
pkg/dotc1z/c1file.go (1)
124-128: Do not overwrite decoder options; append to preserve composability

If WithDecoderOptions is called multiple times, the latest call overwrites earlier ones. Append instead to match WithPragma semantics.

```diff
 func WithDecoderOptions(opts ...DecoderOption) C1ZOption {
 	return func(o *c1zOptions) {
-		o.decoderOptions = opts
+		o.decoderOptions = append(o.decoderOptions, opts...)
 	}
 }
```

pkg/dotc1z/manager/manager.go (1)
33-37: Append decoder options instead of overwriting

Avoid discarding previously set options when WithDecoderOptions is invoked more than once.

```diff
 func WithDecoderOptions(opts ...dotc1z.DecoderOption) ManagerOption {
 	return func(o *managerOptions) {
-		o.decoderOptions = opts
+		o.decoderOptions = append(o.decoderOptions, opts...)
 	}
 }
```

pkg/dotc1z/manager/local/local.go (1)
33-37: Append decoder options to avoid overwriting prior configuration

Aligns with other option helpers and enables composition.

```diff
 func WithDecoderOptions(opts ...dotc1z.DecoderOption) Option {
 	return func(o *localManager) {
-		o.decoderOptions = opts
+		o.decoderOptions = append(o.decoderOptions, opts...)
 	}
 }
```

pkg/dotc1z/manager/s3/s3.go (1)
37-41: Append, don’t overwrite, decoder options

Maintain composability when multiple callers set options.

```diff
 func WithDecoderOptions(opts ...dotc1z.DecoderOption) Option {
 	return func(o *s3Manager) {
-		o.decoderOptions = opts
+		o.decoderOptions = append(o.decoderOptions, opts...)
 	}
 }
```

pkg/dotc1z/decoder.go (1)
91-94: Docs mismatch: default decoded size is 3GiB, not 1GiB

Constant defaultMaxDecodedSize is 3 GiB. Update the comment to avoid confusion.

```diff
-// Maximum is 1 << 63 bytes. Default is 1GiB.
+// Maximum is 1 << 63 bytes. Default is 3GiB.
```

vendor/github.com/klauspost/compress/huff0/bitreader.go (3)
65-74: fillFast underflow guard is caller-dependent

b.off-4 assumes ≥4 bytes available. If any caller can hit off<4, this will underflow in unsafe mode. If the contract is guaranteed, consider a brief comment here; otherwise add a fast-path guard.

```diff
 func (b *bitReaderBytes) fillFast() {
 	if b.bitsRead < 32 {
 		return
 	}
-	// 2 bounds checks.
+	// Caller must ensure at least 4 bytes remain; keep fast path branchless.
+	// If this guarantee ever changes, add: if b.off < 4 { return }
 	low := le.Load32(b.in, b.off-4)
 	b.value |= uint64(low) << (b.bitsRead - 32)
 	b.bitsRead -= 32
 	b.off -= 4
 }
```
173-181: Same contract concern for shifted reader’s fillFast

As above, b.off-4 requires ≥4 bytes. Confirm caller contract or add a minimal guard/comment.

```diff
 func (b *bitReaderShifted) fillFast() {
 	if b.bitsRead < 32 {
 		return
 	}
-	low := le.Load32(b.in, b.off-4)
+	// Caller guarantees ≥4 bytes; add guard if that ever changes.
+	low := le.Load32(b.in, b.off-4)
 	b.value |= uint64(low) << ((b.bitsRead - 32) & 63)
 	b.bitsRead -= 32
 	b.off -= 4
 }
```
195-201: Minor consistency/perf nit: use >=4 like bytes variant

Enables fast 4-byte path instead of falling back to byte-by-byte for exactly 4 bytes.

```diff
-	if b.off > 4 {
+	if b.off >= 4 {
 		low := le.Load32(b.in, b.off-4)
 		b.value |= uint64(low) << ((b.bitsRead - 32) & 63)
 		b.bitsRead -= 32
 		b.off -= 4
 		return
 	}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (28)
go.mod(1 hunks)pkg/dotc1z/c1file.go(3 hunks)pkg/dotc1z/decoder.go(5 hunks)pkg/dotc1z/dotc1z.go(1 hunks)pkg/dotc1z/file.go(2 hunks)pkg/dotc1z/manager/local/local.go(3 hunks)pkg/dotc1z/manager/manager.go(3 hunks)pkg/dotc1z/manager/s3/s3.go(3 hunks)vendor/github.com/klauspost/compress/README.md(8 hunks)vendor/github.com/klauspost/compress/huff0/bitreader.go(7 hunks)vendor/github.com/klauspost/compress/internal/le/le.go(1 hunks)vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go(1 hunks)vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go(1 hunks)vendor/github.com/klauspost/compress/s2sx.mod(1 hunks)vendor/github.com/klauspost/compress/zstd/README.md(1 hunks)vendor/github.com/klauspost/compress/zstd/bitreader.go(6 hunks)vendor/github.com/klauspost/compress/zstd/blockdec.go(0 hunks)vendor/github.com/klauspost/compress/zstd/blockenc.go(3 hunks)vendor/github.com/klauspost/compress/zstd/decoder.go(2 hunks)vendor/github.com/klauspost/compress/zstd/enc_base.go(1 hunks)vendor/github.com/klauspost/compress/zstd/matchlen_generic.go(1 hunks)vendor/github.com/klauspost/compress/zstd/seqdec.go(1 hunks)vendor/github.com/klauspost/compress/zstd/seqdec_amd64.s(16 hunks)vendor/github.com/klauspost/compress/zstd/seqdec_generic.go(1 hunks)vendor/github.com/klauspost/compress/zstd/seqenc.go(0 hunks)vendor/github.com/klauspost/compress/zstd/snappy.go(2 hunks)vendor/github.com/klauspost/compress/zstd/zstd.go(2 hunks)vendor/modules.txt(1 hunks)
💤 Files with no reviewable changes (2)
- vendor/github.com/klauspost/compress/zstd/blockdec.go
- vendor/github.com/klauspost/compress/zstd/seqenc.go
✅ Files skipped from review due to trivial changes (1)
- go.mod
🚧 Files skipped from review as they are similar to previous changes (14)
- vendor/github.com/klauspost/compress/s2sx.mod
- vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go
- vendor/github.com/klauspost/compress/zstd/decoder.go
- pkg/dotc1z/file.go
- pkg/dotc1z/dotc1z.go
- vendor/github.com/klauspost/compress/zstd/enc_base.go
- vendor/github.com/klauspost/compress/zstd/blockenc.go
- vendor/github.com/klauspost/compress/zstd/bitreader.go
- vendor/github.com/klauspost/compress/zstd/seqdec.go
- vendor/github.com/klauspost/compress/zstd/seqdec_generic.go
- vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go
- vendor/github.com/klauspost/compress/zstd/zstd.go
- vendor/modules.txt
- vendor/github.com/klauspost/compress/zstd/seqdec_amd64.s
🧰 Additional context used
🧬 Code graph analysis (7)
pkg/dotc1z/manager/s3/s3.go (3)
pkg/dotc1z/decoder.go (1)
DecoderOption(57-57)pkg/dotc1z/c1file.go (5)
WithDecoderOptions(124-128)C1ZOption(110-110)WithTmpDir(112-116)WithPragma(118-122)NewC1ZFile(131-158)pkg/dotc1z/manager/local/local.go (3)
WithDecoderOptions(33-37)Option(25-25)WithTmpDir(27-31)
pkg/dotc1z/manager/manager.go (4)
pkg/dotc1z/decoder.go (1)
DecoderOption(57-57)pkg/dotc1z/c1file.go (2)
WithDecoderOptions(124-128)WithTmpDir(112-116)pkg/dotc1z/manager/local/local.go (3)
WithDecoderOptions(33-37)Option(25-25)WithTmpDir(27-31)pkg/dotc1z/manager/s3/s3.go (4)
WithDecoderOptions(37-41)NewS3Manager(175-195)Option(29-29)WithTmpDir(31-35)
vendor/github.com/klauspost/compress/zstd/matchlen_generic.go (2)
vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go (1)
Load64(25-27)vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go (1)
Load64(33-37)
pkg/dotc1z/c1file.go (4)
pkg/dotc1z/decoder.go (1)
DecoderOption(57-57)pkg/dotc1z/manager/local/local.go (1)
WithDecoderOptions(33-37)pkg/dotc1z/manager/manager.go (1)
WithDecoderOptions(33-37)pkg/dotc1z/manager/s3/s3.go (1)
WithDecoderOptions(37-41)
pkg/dotc1z/decoder.go (1)
vendor/github.com/klauspost/compress/zstd/decoder.go (1)
NewReader(87-123)
pkg/dotc1z/manager/local/local.go (4)
pkg/dotc1z/decoder.go (1)
DecoderOption(57-57)pkg/dotc1z/c1file.go (4)
WithDecoderOptions(124-128)C1ZOption(110-110)WithTmpDir(112-116)NewC1ZFile(131-158)pkg/dotc1z/manager/manager.go (2)
WithDecoderOptions(33-37)WithTmpDir(27-31)pkg/dotc1z/manager/s3/s3.go (3)
WithDecoderOptions(37-41)Option(29-29)WithTmpDir(31-35)
vendor/github.com/klauspost/compress/huff0/bitreader.go (2)
vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go (2)
Load32(20-22)Load64(25-27)vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go (2)
Load32(26-30)Load64(33-37)
🪛 LanguageTool
vendor/github.com/klauspost/compress/README.md
[style] ~25-~25: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...isable all assembly across packages.
Use the links above for more information on...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[grammar] ~40-~40: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1017
- s2: Don't use stack for index tables htt...
(QB_NEW_EN)
[grammar] ~41-~41: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1014
- gzhttp: No content-type on no body respo...
(QB_NEW_EN)
[grammar] ~42-~42: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1011
- gzhttp: Do not set the content-type when...
(QB_NEW_EN)
[grammar] ~94-~94: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/871
* flate: Reduce allocations in decompresso...
(QB_NEW_EN)
[grammar] ~95-~95: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/869
* s2: Fix EstimateBlockSize on 6&7 length ...
(QB_NEW_EN)
[grammar] ~153-~153: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/739
* zstd: Add delta encoding support by @gre...
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.17.2)
vendor/github.com/klauspost/compress/README.md
31-31: Bare URL used
(MD034, no-bare-urls)
32-32: Bare URL used
(MD034, no-bare-urls)
33-33: Bare URL used
(MD034, no-bare-urls)
34-34: Bare URL used
(MD034, no-bare-urls)
35-35: Bare URL used
(MD034, no-bare-urls)
36-36: Bare URL used
(MD034, no-bare-urls)
37-37: Bare URL used
(MD034, no-bare-urls)
40-40: Bare URL used
(MD034, no-bare-urls)
41-41: Bare URL used
(MD034, no-bare-urls)
42-42: Bare URL used
(MD034, no-bare-urls)
43-43: Bare URL used
(MD034, no-bare-urls)
94-94: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
94-94: Hard tabs
Column: 1
(MD010, no-hard-tabs)
94-94: Bare URL used
(MD034, no-bare-urls)
95-95: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
95-95: Hard tabs
Column: 1
(MD010, no-hard-tabs)
95-95: Bare URL used
(MD034, no-bare-urls)
96-96: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
96-96: Hard tabs
Column: 1
(MD010, no-hard-tabs)
96-96: Bare URL used
(MD034, no-bare-urls)
153-153: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
153-153: Hard tabs
Column: 1
(MD010, no-hard-tabs)
153-153: Bare URL used
(MD034, no-bare-urls)
196-196: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
196-196: Hard tabs
Column: 1
(MD010, no-hard-tabs)
196-196: Bare URL used
(MD034, no-bare-urls)
238-238: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
238-238: Hard tabs
Column: 1
(MD010, no-hard-tabs)
238-238: Bare URL used
(MD034, no-bare-urls)
265-265: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
265-265: Hard tabs
Column: 1
(MD010, no-hard-tabs)
266-266: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
266-266: Hard tabs
Column: 1
(MD010, no-hard-tabs)
267-267: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
267-267: Hard tabs
Column: 1
(MD010, no-hard-tabs)
268-268: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
268-268: Hard tabs
Column: 1
(MD010, no-hard-tabs)
269-269: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
269-269: Hard tabs
Column: 1
(MD010, no-hard-tabs)
270-270: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
270-270: Hard tabs
Column: 1
(MD010, no-hard-tabs)
287-287: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
287-287: Hard tabs
Column: 1
(MD010, no-hard-tabs)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: go-test (1.23.x, windows-latest)
- GitHub Check: go-test (1.23.x, ubuntu-latest)
- GitHub Check: go-lint
- GitHub Check: Cursor Bugbot
🔇 Additional comments (21)
vendor/github.com/klauspost/compress/zstd/snappy.go (2)
198-201: Fix: propagate write error from w.Write correctlyReturning
r.errhere ensures the actual write error is surfaced instead of a potentially staleerrfrom earlier decoding/encoding steps. Good catch.
240-243: Same correct error propagation for uncompressed pathMirrors the fix above; returning
r.erravoids leaking an unrelatederr. Looks good.pkg/dotc1z/c1file.go (2)
105-109: LGTM: decoder options plumbed into c1zOptionsField addition looks correct and non-breaking.
140-141: LGTM: forwarding decoder options to loaderPassing
options.decoderOptions...intoloadC1zcleanly threads the settings.pkg/dotc1z/manager/manager.go (2)
20-23: LGTM: manager options extended with decoder optionsStruct expansion is straightforward and keeps backward compatibility.
56-58: LGTM: propagating decoder options to S3 and local managersConditional propagation is correct and avoids passing empty varargs.
Also applies to: 65-67
pkg/dotc1z/manager/local/local.go (2)
18-23: LGTM: local manager extended with decoder optionsNo behavioral regressions; fields remain private.
107-115: LGTM: option list assembly including decoder optionsConstruction is clear and maintainable.
pkg/dotc1z/manager/s3/s3.go (2)
21-27: LGTM: s3Manager extended with decoder optionsNon-breaking and consistent with local manager.
126-134: LGTM: pass decoder options through to NewC1ZFileGood conditional inclusion to avoid empty option plumbing.
pkg/dotc1z/decoder.go (3)
61-65: LGTM: add decoderConcurrency to optionsMatches intended behavior; default handled in
NewDecoder.
107-115: LGTM: concurrency option and semanticsInterface is clear (1=sync, 0=GOMAXPROCS, <0 skip). Consider documenting “<0 disables explicit setting” in package docs if not already.
143-150: LGTM: structured zstd options and conditional concurrencyLow-mem + max memory always applied; concurrency applied when non-negative.
Also applies to: 151-154
vendor/github.com/klauspost/compress/zstd/README.md (1)
9-9: Vendored doc tweak looks goodAccurately reflects upstream’s build tags (noasm/nounsafe). No action needed on our side.
vendor/github.com/klauspost/compress/README.md (1)
17-27: Upstream README updates acknowledgedThese are vendor docs synced with v1.18.0; fine to keep as-is. Please avoid local edits to vendor docs to reduce churn.
Also applies to: 30-38, 39-44, 594-602
vendor/github.com/klauspost/compress/internal/le/le.go (1)
3-5: Indexer constraint LGTMCovers all integer kinds needed for generic slice indexing across safe/unsafe paths.
vendor/github.com/klauspost/compress/zstd/matchlen_generic.go (1)
18-29: Precondition relies on ‘a’ being shortest — double-check call sites

The new le.Load64 path assumes b has ≥ len(a) bytes at each 8-byte step. Verify all callers uphold “a is shortest” to avoid OOB in unsafe-enabled builds; a standalone sketch of the pattern follows below.
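For reference, a self-contained sketch of the 8-byte-at-a-time match-length idea using encoding/binary instead of the internal le package. The explicit guard on both lengths makes it safe even without the “a is shortest” precondition; names are illustrative, not the vendored implementation:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math/bits"
)

// matchLenSketch returns the length of the common prefix of a and b,
// comparing 8 bytes per iteration and using the trailing zero count of the
// XOR to locate the first differing byte.
func matchLenSketch(a, b []byte) (n int) {
	for len(a) >= 8 && len(b) >= 8 {
		diff := binary.LittleEndian.Uint64(a) ^ binary.LittleEndian.Uint64(b)
		if diff != 0 {
			return n + bits.TrailingZeros64(diff)/8
		}
		n += 8
		a, b = a[8:], b[8:]
	}
	for i := 0; i < len(a) && i < len(b); i++ {
		if a[i] != b[i] {
			break
		}
		n++
	}
	return n
}

func main() {
	fmt.Println(matchLenSketch([]byte("zstandard!"), []byte("zstandarD!"))) // 8
}
```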
vendor/github.com/klauspost/compress/huff0/bitreader.go (4)
13-14: Switch to internal/le is correctUnified 32/64-bit loads via le.* match the rest of the upgrade.
79-82: fillFastStart change is fine64-bit initial load via le.Load64 preserves semantics.
89-95: Nice: handle exactly 4 bytes via fast pathThe >=4 check + early return improves the edge case without extra work.
185-188: fillFastStart (shifted) is goodMatches the bytes variant; semantics preserved.
b109c95 to aa4e704 — Compare
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
pkg/dotc1z/file.go (1)
41-52: Close the decoder on all paths (defer) to avoid leaks.
If io.Copy fails, r.Close is skipped. Defer it after construction.

Apply:

```diff
 r, err := NewDecoder(c1zFile, opts...)
 if err != nil {
 	return "", err
 }
+defer r.Close()
 _, err = io.Copy(dbFile, r)
 if err != nil {
 	return "", err
 }
-err = r.Close()
-if err != nil {
-	return "", err
-}
```

pkg/dotc1z/decoder.go (2)
167-171: Fix error gating to prevent nil deref on init failure.
If zstd.NewReader fails, d.zd is nil and d.decoderInitErr is set; the current check skips returning the error and then calls d.zd.Read, panicking.

Apply:

```diff
-// Check we have a valid decoder
-if d.zd != nil && d.decoderInitErr != nil {
-	return 0, d.decoderInitErr
-}
+// Check we have a valid decoder
+if d.decoderInitErr != nil {
+	return 0, d.decoderInitErr
+}
```
43-47: Read full header to avoid short-read false negatives.
reader.Read can return fewer bytes than requested.

Apply:

```diff
 headerBytes := make([]byte, len(C1ZFileHeader))
-_, err := reader.Read(headerBytes)
+_, err := io.ReadFull(reader, headerBytes)
 if err != nil {
 	return err
 }
```

vendor/github.com/klauspost/compress/huff0/bitreader.go (1)
173-181: Guard b.off before calling fillFast() to prevent OOB reads.

In the ZSTD sequence decoders (vendor/.../zstd/seqdec_generic.go at ~lines 43 & 46 and vendor/.../zstd/seqdec.go at ~lines 259 & 262), br.fillFast() is invoked unconditionally, but fillFast performs an unsafe le.Load32(b.in, b.off-4) which can read out of bounds if br.off < 4. Wrap each call in if br.off >= 4 { br.fillFast() } or add a debug-only guard at the top of fillFast() (e.g. if b.off < 4 { return }).
♻️ Duplicate comments (1)
pkg/dotc1z/decoder.go (1)
107-115: Document negative concurrency semantics (or validate).
Negative n silently bypasses setting zstd concurrency; this surprised reviewers previously.

Option A (document intent):

```diff
-// WithDecoderConcurrency sets the number of created decoders.
-// Default is 1, which disables async decoding/concurrency.
-// 0 uses GOMAXPROCS.
+// WithDecoderConcurrency sets the number of created decoders.
+// Default is 1 (disables async decoding).
+// 0 uses GOMAXPROCS.
+// n < 0 leaves concurrency unspecified (use library default).
```

Option B (validate and disallow negatives):

```diff
 func WithDecoderConcurrency(n int) DecoderOption {
 	return func(o *decoderOptions) error {
-		o.decoderConcurrency = n
+		if n < 0 {
+			return fmt.Errorf("c1z: WithDecoderConcurrency cannot be negative; omit to use library default")
+		}
+		o.decoderConcurrency = n
 		return nil
 	}
 }
```
🧹 Nitpick comments (13)
vendor/github.com/klauspost/compress/README.md (5)
94-97: Markdown lint noise (tabs/indent/bare URLs) — exclude vendor from docs linters.These upstream lines will keep triggering MD007/MD010/MD034. Prefer excluding vendor/** in markdownlint/LanguageTool configs rather than patching vendored text.
153-156: Same markdownlint caveat for vendored changelog formatting.Recommend ignoring vendor/** in doc-lint jobs to avoid churn.
196-197: Vendored changelog line — keep as-is; ignore in linters.Upstream formatting; do not modify.
238-239: Vendored changelog line — keep as-is; ignore in linters.Upstream formatting; do not modify.
265-271: v1.15.0 changelog snippet — OK; note mixed list styles from upstream.Suggest linter exclusion for vendor to prevent false positives.
pkg/dotc1z/manager/manager.go (1)
33-37: Allow multiple WithDecoderOptions calls (append instead of overwrite).
Current setter discards earlier options if called twice.Apply:
func WithDecoderOptions(opts ...dotc1z.DecoderOption) ManagerOption { return func(o *managerOptions) { - o.decoderOptions = opts + o.decoderOptions = append(o.decoderOptions, opts...) } }pkg/dotc1z/decoder.go (4)
182-190: Tighten decoded-size enforcement.
Use >= before read and add a post-increment check to signal overflow promptly.Apply:
-if d.decodedBytes > maxDecodedSize { +if d.decodedBytes >= maxDecodedSize { return 0, ErrMaxSizeExceeded } ... d.decodedBytes += uint64(n) +if d.decodedBytes > maxDecodedSize { + return n, ErrMaxSizeExceeded +}
91-96: Fix doc default: comment says 1GiB, code uses 3GiB.
Align the comment or constant for clarity.Apply (comment-only):
-// Maximum is 1 << 63 bytes. Default is 1GiB. +// Maximum is 1 << 63 bytes. Default is 3GiB.
27-29: Minor: remove double space in error message.Apply:
-ErrWindowSizeExceeded = fmt.Errorf("c1z: window size exceeded, increase DecoderMaxMemory using %v environment variable", maxDecoderMemorySizeEnv) +ErrWindowSizeExceeded = fmt.Errorf("c1z: window size exceeded, increase DecoderMaxMemory using %v environment variable", maxDecoderMemorySizeEnv)
67-73: Typo in comment.
“subequent” -> “subsequent”.Apply:
-// WithContext sets a context, when cancelled, will cause subequent calls to Read() to return ctx.Error(). +// WithContext sets a context which, when cancelled, will cause subsequent calls to Read() to return ctx.Err().vendor/github.com/klauspost/compress/huff0/bitreader.go (3)
50-50: Clarify comment: “exactly one byte”.“Requires that at least one byte is requested every time” reads oddly. This API always peeks exactly one byte.
Apply:
-// peekByteFast requires that at least one byte is requested every time. +// peekByteFast returns the next byte; callers must request exactly one byte and ensure the buffer is filled.
69-74: Stale “2 bounds checks” comment.With le.Load32 the bounds-check story differs (purego: one via slicing; unsafe: none). Update the comment to avoid confusion.
Apply:
- // 2 bounds checks. + // Fast-path 32-bit load; caller must ensure at least 4 bytes remain.
195-201: Consider enabling the off == 4 fast path for shifted reader as well.Using >= 4 here mirrors the bytes reader and avoids the 4× byte loop when exactly 4 remain. If there’s no semantic reason to prefer the byte path, this is a small win.
Apply:
- if b.off > 4 { + if b.off >= 4 { low := le.Load32(b.in, b.off-4) b.value |= uint64(low) << ((b.bitsRead - 32) & 63) b.bitsRead -= 32 b.off -= 4 return }If intentional, add a short comment explaining why the shifted variant keeps the byte path at off == 4.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
go.sum is excluded by !**/*.sum
📒 Files selected for processing (28)
- go.mod (1 hunks)
- pkg/dotc1z/c1file.go (3 hunks)
- pkg/dotc1z/decoder.go (5 hunks)
- pkg/dotc1z/dotc1z.go (1 hunks)
- pkg/dotc1z/file.go (2 hunks)
- pkg/dotc1z/manager/local/local.go (3 hunks)
- pkg/dotc1z/manager/manager.go (3 hunks)
- pkg/dotc1z/manager/s3/s3.go (3 hunks)
- vendor/github.com/klauspost/compress/README.md (8 hunks)
- vendor/github.com/klauspost/compress/huff0/bitreader.go (7 hunks)
- vendor/github.com/klauspost/compress/internal/le/le.go (1 hunks)
- vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go (1 hunks)
- vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go (1 hunks)
- vendor/github.com/klauspost/compress/s2sx.mod (1 hunks)
- vendor/github.com/klauspost/compress/zstd/README.md (1 hunks)
- vendor/github.com/klauspost/compress/zstd/bitreader.go (6 hunks)
- vendor/github.com/klauspost/compress/zstd/blockdec.go (0 hunks)
- vendor/github.com/klauspost/compress/zstd/blockenc.go (3 hunks)
- vendor/github.com/klauspost/compress/zstd/decoder.go (2 hunks)
- vendor/github.com/klauspost/compress/zstd/enc_base.go (1 hunks)
- vendor/github.com/klauspost/compress/zstd/matchlen_generic.go (1 hunks)
- vendor/github.com/klauspost/compress/zstd/seqdec.go (1 hunks)
- vendor/github.com/klauspost/compress/zstd/seqdec_amd64.s (16 hunks)
- vendor/github.com/klauspost/compress/zstd/seqdec_generic.go (1 hunks)
- vendor/github.com/klauspost/compress/zstd/seqenc.go (0 hunks)
- vendor/github.com/klauspost/compress/zstd/snappy.go (2 hunks)
- vendor/github.com/klauspost/compress/zstd/zstd.go (2 hunks)
- vendor/modules.txt (1 hunks)
💤 Files with no reviewable changes (2)
- vendor/github.com/klauspost/compress/zstd/seqenc.go
- vendor/github.com/klauspost/compress/zstd/blockdec.go
✅ Files skipped from review due to trivial changes (1)
- vendor/github.com/klauspost/compress/s2sx.mod
🚧 Files skipped from review as they are similar to previous changes (20)
- vendor/github.com/klauspost/compress/zstd/seqdec.go
- go.mod
- vendor/github.com/klauspost/compress/zstd/zstd.go
- vendor/github.com/klauspost/compress/zstd/enc_base.go
- vendor/github.com/klauspost/compress/zstd/README.md
- pkg/dotc1z/manager/local/local.go
- pkg/dotc1z/dotc1z.go
- vendor/github.com/klauspost/compress/zstd/matchlen_generic.go
- pkg/dotc1z/c1file.go
- vendor/github.com/klauspost/compress/zstd/decoder.go
- vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go
- vendor/github.com/klauspost/compress/zstd/snappy.go
- vendor/github.com/klauspost/compress/zstd/blockenc.go
- vendor/github.com/klauspost/compress/zstd/seqdec_generic.go
- vendor/github.com/klauspost/compress/zstd/bitreader.go
- vendor/github.com/klauspost/compress/zstd/seqdec_amd64.s
- pkg/dotc1z/manager/s3/s3.go
- vendor/github.com/klauspost/compress/internal/le/le.go
- vendor/modules.txt
- vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go
🧰 Additional context used
🧬 Code graph analysis (4)
pkg/dotc1z/file.go (1)
  pkg/dotc1z/decoder.go (2)
    DecoderOption (57-57)
    NewDecoder (208-241)
pkg/dotc1z/manager/manager.go (4)
  pkg/dotc1z/decoder.go (1)
    DecoderOption (57-57)
  pkg/dotc1z/c1file.go (2)
    WithDecoderOptions (124-128)
    WithTmpDir (112-116)
  pkg/dotc1z/manager/local/local.go (3)
    WithDecoderOptions (33-37)
    Option (25-25)
    WithTmpDir (27-31)
  pkg/dotc1z/manager/s3/s3.go (4)
    WithDecoderOptions (37-41)
    NewS3Manager (175-195)
    Option (29-29)
    WithTmpDir (31-35)
pkg/dotc1z/decoder.go (1)
  vendor/github.com/klauspost/compress/zstd/decoder.go (1)
    NewReader (87-123)
vendor/github.com/klauspost/compress/huff0/bitreader.go (2)
  vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go (2)
    Load32 (20-22)
    Load64 (25-27)
  vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go (2)
    Load32 (26-30)
    Load64 (33-37)
🪛 LanguageTool
vendor/github.com/klauspost/compress/README.md
[style] ~25-~25: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...isable all assembly across packages.
Use the links above for more information on...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[grammar] ~40-~40: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1017
- s2: Don't use stack for index tables htt...
(QB_NEW_EN)
[grammar] ~41-~41: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1014
- gzhttp: No content-type on no body respo...
(QB_NEW_EN)
[grammar] ~42-~42: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1011
- gzhttp: Do not set the content-type when...
(QB_NEW_EN)
[grammar] ~94-~94: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/871
* flate: Reduce allocations in decompresso...
(QB_NEW_EN)
[grammar] ~95-~95: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/869
* s2: Fix EstimateBlockSize on 6&7 length ...
(QB_NEW_EN)
[grammar] ~153-~153: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/739
* zstd: Add delta encoding support by @gre...
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.17.2)
vendor/github.com/klauspost/compress/README.md
31-31: Bare URL used
(MD034, no-bare-urls)
32-32: Bare URL used
(MD034, no-bare-urls)
33-33: Bare URL used
(MD034, no-bare-urls)
34-34: Bare URL used
(MD034, no-bare-urls)
35-35: Bare URL used
(MD034, no-bare-urls)
36-36: Bare URL used
(MD034, no-bare-urls)
37-37: Bare URL used
(MD034, no-bare-urls)
40-40: Bare URL used
(MD034, no-bare-urls)
41-41: Bare URL used
(MD034, no-bare-urls)
42-42: Bare URL used
(MD034, no-bare-urls)
43-43: Bare URL used
(MD034, no-bare-urls)
94-94: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
94-94: Hard tabs
Column: 1
(MD010, no-hard-tabs)
94-94: Bare URL used
(MD034, no-bare-urls)
95-95: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
95-95: Hard tabs
Column: 1
(MD010, no-hard-tabs)
95-95: Bare URL used
(MD034, no-bare-urls)
96-96: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
96-96: Hard tabs
Column: 1
(MD010, no-hard-tabs)
96-96: Bare URL used
(MD034, no-bare-urls)
153-153: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
153-153: Hard tabs
Column: 1
(MD010, no-hard-tabs)
153-153: Bare URL used
(MD034, no-bare-urls)
196-196: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
196-196: Hard tabs
Column: 1
(MD010, no-hard-tabs)
196-196: Bare URL used
(MD034, no-bare-urls)
238-238: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
238-238: Hard tabs
Column: 1
(MD010, no-hard-tabs)
238-238: Bare URL used
(MD034, no-bare-urls)
265-265: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
265-265: Hard tabs
Column: 1
(MD010, no-hard-tabs)
266-266: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
266-266: Hard tabs
Column: 1
(MD010, no-hard-tabs)
267-267: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
267-267: Hard tabs
Column: 1
(MD010, no-hard-tabs)
268-268: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
268-268: Hard tabs
Column: 1
(MD010, no-hard-tabs)
269-269: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
269-269: Hard tabs
Column: 1
(MD010, no-hard-tabs)
270-270: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
270-270: Hard tabs
Column: 1
(MD010, no-hard-tabs)
287-287: Unordered list indentation
Expected: 2; Actual: 1
(MD007, ul-indent)
287-287: Hard tabs
Column: 1
(MD010, no-hard-tabs)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Cursor Bugbot
- GitHub Check: go-test (1.23.x, windows-latest)
🔇 Additional comments (13)
vendor/github.com/klauspost/compress/README.md (5)
30-38: Changelog (1.18.0) block — OK to vendor as-is. Docs-only; safe to keep aligned with upstream.
39-44: Changelog (1.17.11) block — OK to vendor as-is. Docs-only; safe to keep aligned with upstream.
287-287: Vendored changelog line — OK. No local edits; allow as-is.
594-602: Deflate “drop-in” table addition — looks correct and helpful. Links and package paths are accurate; no action needed.
17-27: Vendored README: module bump to v1.18.0 is consistent in go.mod, go.sum, and vendor/modules.txt.
pkg/dotc1z/file.go (1)
14-14: Signature change LGTM; options propagation is clear.
No behavior change when opts are omitted.
pkg/dotc1z/manager/manager.go (2)
21-23: New decoderOptions field: SGTM.
Enables plumbing without affecting existing callers.
56-58: Forwarding options conditionally is fine.
Keeps zero-alloc when no opts were set. Also applies to: 65-67
pkg/dotc1z/decoder.go (1)
143-149: Concurrency plumbing looks good.
Defaults to 1; opt in to 0/GOMAXPROCS or the library default via a negative value (if documented). Also applies to: 227-229
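As a reference, a minimal sketch of forwarding such a knob to the vendored library. The helper and its parameters are illustrative, but NewReader and the option constructors (WithDecoderLowmem, WithDecoderMaxMemory, WithDecoderConcurrency) are the upstream zstd APIs:

```go
package main

import (
	"bytes"
	"io"
	"log"

	"github.com/klauspost/compress/zstd"
)

// newReader only overrides concurrency when the caller set it explicitly;
// otherwise it leaves the library default (GOMAXPROCS) in place.
func newReader(src io.Reader, maxMem uint64, concurrency int) (*zstd.Decoder, error) {
	opts := []zstd.DOption{
		zstd.WithDecoderLowmem(true),
		zstd.WithDecoderMaxMemory(maxMem),
	}
	if concurrency > 0 {
		opts = append(opts, zstd.WithDecoderConcurrency(concurrency))
	}
	return zstd.NewReader(src, opts...)
}

func main() {
	// Empty input just to show construction; a real caller passes a zstd stream.
	d, err := newReader(bytes.NewReader(nil), 1<<30, 1)
	if err != nil {
		log.Fatal(err)
	}
	defer d.Close()
}
```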
vendor/github.com/klauspost/compress/huff0/bitreader.go (4)
12-14: Switch to internal le loader looks good. Adopts internal LE loads consistently; matches broader PR direction and should reduce bounds-check overhead on hot paths.
79-82: LGTM: initial 64-bit load via le.Load64. Correctly matches the precondition (>= 8 bytes) and removes redundant checks.
89-95: LGTM: widen 4-byte fast path to off >= 4. Safe (reads at index off-4) and avoids the byte-wise tail loop for off == 4.
185-188: LGTM: shifted init via le.Load64. Consistent with bytes-reader variant; no issues.
Summary by CodeRabbit
New Features
Bug Fixes
Chores