
Conversation

@ggreer (Contributor) commented Sep 4, 2025

Summary by CodeRabbit

  • New Features

    • Add configurable decoder options for C1Z loading (including decoder concurrency and max memory; default memory raised to 128MiB).
    • Reader/File creation now accepts optional configuration parameters for file loading.
  • Bug Fixes

    • Fix error propagation in Snappy→Zstandard conversion.
    • Improve decoder state/reuse handling for more reliable decoding.
  • Chores

    • Bump compression dependency to v1.18.0 and update vendored documentation.

coderabbitai bot commented Sep 4, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Bumps github.com/klauspost/compress to v1.18.0 and vendors new internal/le helpers; refactors many zstd/huff0 internals to use le loaders and cursor-based bit readers; fixes snappy error propagation. Adds per-decoder options (memory, concurrency) and wires decoder options through C1Z loaders, managers, and readers.

Changes

  • Module bump (go.mod, vendor/modules.txt, vendor/github.com/klauspost/compress/s2sx.mod): Upgrade klauspost/compress v1.17.11 → v1.18.0; update vendored module Go version to 1.22 and the modules list.
  • C1Z options wiring (pkg/dotc1z/c1file.go, pkg/dotc1z/file.go, pkg/dotc1z/dotc1z.go): Add a WithDecoderOptions(...) C1Z option; loadC1z accepts opts ...DecoderOption and forwards them to NewDecoder; NewC1FileReader gains an optional opts ...C1FOption parameter and forwards options.
  • Decoder options & concurrency (pkg/dotc1z/decoder.go): Add DecoderOption WithDecoderConcurrency(n int); introduce a decoderConcurrency field, validate WithDecoderMaxMemory, default concurrency to 1, and assemble zstd options dynamically (lowmem, maxMem, conditional concurrency semantics: 1 = disable async, 0 = GOMAXPROCS, negative = unset). See the sketch after this list.
  • Manager plumbing (pkg/dotc1z/manager/manager.go, pkg/dotc1z/manager/local/local.go, pkg/dotc1z/manager/s3/s3.go): Add a WithDecoderOptions(...) option to the manager, local, and s3 packages; store and forward decoderOptions to dotc1z.NewC1ZFile when present.
  • Vendor le package, new (vendor/.../compress/internal/le/le.go, .../le/unsafe_disabled.go, .../le/unsafe_enabled.go): Add internal le package with a generic Indexer constraint and Load*/Store* helpers; provide unsafe-enabled and unsafe-disabled implementations selected via build tags.
  • huff0 bitreader refactor (vendor/github.com/klauspost/compress/huff0/bitreader.go): Replace encoding/binary usage with le.Load32/Load64; adjust the 4-byte fast path, bounds checks, and comments; use le loaders for faster, unified memory access.
  • zstd bitreader & core changes (vendor/github.com/klauspost/compress/zstd/bitreader.go, vendor/.../zstd/zstd.go): bitReader switches to a cursor-based boundary (no longer mutating the input slice); replace binary loads with le.Load32/Load64; update finished/remain/close behavior.
  • zstd algorithm & internals (vendor/.../zstd/matchlen_generic.go, .../seqdec_generic.go, .../seqdec.go, .../seqdec_amd64.s, .../blockdec.go, .../blockenc.go, .../seqenc.go, .../enc_base.go): Multiple internal refactors: use le.Load64 in matchlen, gate fast paths on the cursor, adjust AMD64 decode-context offsets, remove instrumentation/test emission, use slices.Max for histograms, minor docs/panic-message fixes; no exported API changes.
  • Snappy error fix (vendor/github.com/klauspost/compress/zstd/snappy.go): Fix error propagation in SnappyConverter.Convert to return the per-call writer error when a write fails.
  • Docs / READMEs (vendor/github.com/klauspost/compress/README.md, vendor/github.com/klauspost/compress/zstd/README.md): Documentation/changelog updates; clarify noasm/nounsafe guidance and stateless compression examples; no code/API changes.
  • Vendored small edits (vendor/github.com/klauspost/compress/huff0/*, vendor/github.com/klauspost/compress/zstd/*): Misc internal cleanups: removed unused imports, minor comment and doc fixes across multiple files.
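
For orientation, the sketch below shows one way the concurrency semantics summarized above (1 = disable async, 0 = GOMAXPROCS, negative = leave unset) could be turned into klauspost/compress decoder options. The zstd option constructors are the library's public API; the helper function and its package are illustrative assumptions, not code from pkg/dotc1z/decoder.go.

```go
// Sketch only: mirrors the option-assembly behavior described in the
// walkthrough, under the stated assumptions.
package sketch

import (
	"io"

	"github.com/klauspost/compress/zstd"
)

// newZstdReader builds a zstd decoder with low-memory mode, a memory cap, and
// optional concurrency: 1 disables async decoding, 0 uses GOMAXPROCS, and a
// negative value leaves the option unset so the library default applies.
func newZstdReader(r io.Reader, maxMem uint64, concurrency int) (*zstd.Decoder, error) {
	opts := []zstd.DOption{
		zstd.WithDecoderLowmem(true),      // lower memory, possibly more allocations
		zstd.WithDecoderMaxMemory(maxMem), // cap memory used while decoding
	}
	if concurrency >= 0 {
		opts = append(opts, zstd.WithDecoderConcurrency(concurrency))
	}
	return zstd.NewReader(r, opts...)
}
```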

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Caller
  participant API as NewC1ZFile / NewC1FileReader
  participant Loader as loadC1z
  participant DecCtor as NewDecoder
  participant Zstd as zstd.NewReader
  Caller->>API: NewC1ZFile(ctx,..., WithDecoderOptions(...))
  API->>Loader: loadC1z(path, tmpDir, decoderOptions...)
  Loader->>DecCtor: NewDecoder(c1zFile, decoderOptions...)
  DecCtor->>Zstd: zstd.NewReader(opts: lowmem, maxMem, [concurrency if set])
  Zstd-->>DecCtor: configured decoder
  DecCtor-->>Loader: decoded io.Reader
  Loader->>Loader: write DB file from decoded stream
  Loader-->>API: DB path
  API-->>Caller: C1ZFile (ready)
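To make the call flow in the diagram concrete, here is a hedged caller-side sketch. WithDecoderOptions, WithDecoderConcurrency, and WithDecoderMaxMemory are named in the walkthrough; the exact NewC1ZFile signature and the module import path are assumptions and may differ from the repository.

```go
// Hypothetical caller wiring based on the sequence diagram; adjust names and
// signatures to the real pkg/dotc1z API before use.
package main

import (
	"context"
	"log"

	"github.com/conductorone/baton-sdk/pkg/dotc1z" // import path assumed
)

func main() {
	ctx := context.Background()

	f, err := dotc1z.NewC1ZFile(ctx, "sync.c1z",
		dotc1z.WithDecoderOptions(
			dotc1z.WithDecoderConcurrency(1),     // 1 keeps decoding synchronous
			dotc1z.WithDecoderMaxMemory(128<<20), // 128MiB decoder memory cap
		),
	)
	if err != nil {
		log.Fatal(err)
	}
	_ = f // read/write via the returned C1Z file as usual
}
```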

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Poem

I nibbled through the bytes with care,
Tuned concurrency, set cursors fair.
le-loads hop swift, unsafe or shy,
Options stitched so streams comply.
Thump-thump—new compressions hum on by.



coderabbitai bot left a comment

Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
pkg/dotc1z/file.go (1)

41-52: Ensure decoder is closed on all paths (avoid leak on io.Copy error).

If io.Copy fails, r.Close() isn’t called. Defer closing right after creation and drop the explicit close at the end.

Apply:

-        r, err := NewDecoder(c1zFile, opts...)
-        if err != nil {
-            return "", err
-        }
-        _, err = io.Copy(dbFile, r)
-        if err != nil {
-            return "", err
-        }
-        err = r.Close()
-        if err != nil {
-            return "", err
-        }
+        r, err := NewDecoder(c1zFile, opts...)
+        if err != nil {
+            return "", err
+        }
+        defer func() {
+            if cerr := r.Close(); cerr != nil && err == nil {
+                err = cerr
+            }
+        }()
+        _, err = io.Copy(dbFile, r)
+        if err != nil {
+            return "", err
+        }
vendor/github.com/klauspost/compress/zstd/bitreader.go (1)

120-127: Critical: close() zeroes cursor before validation, hiding unread bits

Setting b.cursor = 0 before calling finished()/remain() can incorrectly pass validation (and under-report remaining bits) when there are still bytes left. Move the release after validations.

Apply:

 func (b *bitReader) close() error {
-	// Release reference.
-	b.in = nil
-	b.cursor = 0
-	if !b.finished() {
-		return fmt.Errorf("%d extra bits on block, should be 0", b.remain())
-	}
-	if b.bitsRead > 64 {
-		return io.ErrUnexpectedEOF
-	}
-	return nil
+	// Validate before clearing state.
+	if !b.finished() {
+		return fmt.Errorf("%d extra bits on block, should be 0", b.remain())
+	}
+	if b.bitsRead > 64 {
+		return io.ErrUnexpectedEOF
+	}
+	// Release references last.
+	b.in = nil
+	b.cursor = 0
+	return nil
 }
pkg/dotc1z/decoder.go (2)

167-171: Fix: nil deref when zstd.NewReader fails

If zstd.NewReader(...) returns an error, d.zd stays nil and the current check won’t trigger, leading to a panic at d.zd.Read. Check decoderInitErr directly.

-	// Check we have a valid decoder
-	if d.zd != nil && d.decoderInitErr != nil {
-		return 0, d.decoderInitErr
-	}
+	// Check decoder initialization error
+	if d.decoderInitErr != nil {
+		return 0, d.decoderInitErr
+	}

177-197: Enforce max decoded size per Read to prevent overruns

The pre-read check allows a single Read call to exceed the cap. Cap p to remaining bytes before calling the underlying reader.

 	// Check we have not exceeded our max decoded size
 	maxDecodedSize := d.o.maxDecodedSize
 	if maxDecodedSize == 0 {
 		maxDecodedSize = defaultMaxDecodedSize
 	}
-	if d.decodedBytes > maxDecodedSize {
+	if d.decodedBytes > maxDecodedSize {
 		return 0, ErrMaxSizeExceeded
 	}
 
 	// Do underlying read
-	n, err := d.zd.Read(p)
+	if maxDecodedSize > 0 {
+		if d.decodedBytes >= maxDecodedSize {
+			return 0, ErrMaxSizeExceeded
+		}
+		if remaining := maxDecodedSize - d.decodedBytes; uint64(len(p)) > remaining {
+			p = p[:remaining]
+		}
+	}
+	n, err := d.zd.Read(p)
 	//nolint:gosec // No risk of overflow/underflow because n is always >= 0.
 	d.decodedBytes += uint64(n)
🧹 Nitpick comments (21)
go.mod (1)

3-3: Use major.minor in the go directive; optionally pin toolchain

Go expects “go 1.23”, not “go 1.23.4”. If you want to pin a patch toolchain, add a separate toolchain directive.

- go 1.23.4
+ go 1.23
+ toolchain go1.23.4
vendor/github.com/klauspost/compress/README.md (1)

30-38: Vendor docs trip markdownlint; prefer excluding vendor/ from docs lint.

The changelog intentionally uses bare URLs and mixed indentation; fixing upstream text locally adds churn. Recommend updating lint config to skip vendor/**.

If desired, I can propose a minimal markdownlint config change that excludes vendor paths.

pkg/dotc1z/file.go (2)

3-12: Use os.O_TRUNC and drop syscall import.

Minor cleanup; reduces imports and is more idiomatic.

 import (
   "errors"
   "io"
   "os"
   "path/filepath"
-  "syscall"
@@
- outFile, err := os.OpenFile(outputFilePath, os.O_RDWR|os.O_CREATE|syscall.O_TRUNC, 0644)
+ outFile, err := os.OpenFile(outputFilePath, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0644)

Also applies to: 74-76


86-94: Close zstd writer on error path of io.Copy.

If io.Copy fails, c1z remains open. Close it before returning to free resources.

 _, err = io.Copy(c1z, dbFile)
 if err != nil {
-  return err
+  _ = c1z.Close()
+  return err
 }
 
 err = c1z.Flush()

Also applies to: 100-103

pkg/dotc1z/c1file.go (2)

124-128: Make WithDecoderOptions additive to support multiple calls.

Current implementation overwrites any previously provided options. Match WithPragma’s append semantics.

-func WithDecoderOptions(opts ...DecoderOption) C1ZOption {
-  return func(o *c1zOptions) {
-    o.decoderOptions = opts
-  }
-}
+func WithDecoderOptions(opts ...DecoderOption) C1ZOption {
+  return func(o *c1zOptions) {
+    if len(opts) == 0 {
+      return
+    }
+    o.decoderOptions = append(o.decoderOptions, opts...)
+  }
+}

289-291: Typo in error message.

“datbase” → “database”.

- return fmt.Errorf("c1file: datbase has not been opened")
+ return fmt.Errorf("c1file: database has not been opened")
pkg/dotc1z/dotc1z.go (1)

52-59: Consider exposing decoder options in NewExternalC1FileReader.

Optional: accept opts ...DecoderOption to configure external loads consistently.

-func NewExternalC1FileReader(ctx context.Context, tmpDir string, externalResourceC1ZPath string) (connectorstore.Reader, error) {
-  dbFilePath, err := loadC1z(externalResourceC1ZPath, tmpDir)
+func NewExternalC1FileReader(ctx context.Context, tmpDir string, externalResourceC1ZPath string, opts ...DecoderOption) (connectorstore.Reader, error) {
+  dbFilePath, err := loadC1z(externalResourceC1ZPath, tmpDir, opts...)
vendor/github.com/klauspost/compress/zstd/seqdec_amd64.s (1)

10-13: Asm offsets updated for bitReader layout — verify with compile-time checks

Moving bitsRead to 40(CX) and introducing cursor at 32(CX) matches bitReader{ in(0..), value@24, cursor@32, bitsRead@40 }. Prolog/epilog stores mirror this. Recommend adding/retaining Go-side unsafe.Offsetof static assertions (if not already present upstream) to guard against future layout drift.

If you want, I can draft a tiny Go file with var _ [1]int-style offset checks to live alongside the asm.

Also applies to: 301-304
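
A minimal sketch of such compile-time checks, assuming the field names and offsets cited above (value at 24, cursor at 32, bitsRead at 40); the build tag and exact placement are illustrative rather than upstream code.

```go
//go:build amd64

package zstd

import "unsafe"

// Illustrative offset guards for the layout seqdec_amd64.s assumes. For each
// field, both zero-length array types compile only when the offset matches the
// expected constant; any drift makes one array bound negative and breaks the
// build.
var (
	_ [unsafe.Offsetof(bitReader{}.value) - 24]byte
	_ [24 - unsafe.Offsetof(bitReader{}.value)]byte

	_ [unsafe.Offsetof(bitReader{}.cursor) - 32]byte
	_ [32 - unsafe.Offsetof(bitReader{}.cursor)]byte

	_ [unsafe.Offsetof(bitReader{}.bitsRead) - 40]byte
	_ [40 - unsafe.Offsetof(bitReader{}.bitsRead)]byte
)
```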

pkg/dotc1z/decoder.go (8)

107-116: Document negative concurrency and add basic validation

Docs mention 0 uses GOMAXPROCS and default 1 disables async; they don’t mention negative. Either document “n < 0 = don’t set option (use library default)” or reject negatives. Also guard against absurdly large n.

-// WithDecoderConcurrency sets the number of created decoders.
-// Default is 1, which disables async decoding/concurrency.
-// 0 uses GOMAXPROCS.
+// WithDecoderConcurrency sets the number of created decoders.
+// Default is 1 (disables async decoding/concurrency). 0 uses GOMAXPROCS.
+// If n < 0, no explicit concurrency option is set and the library default applies.
 func WithDecoderConcurrency(n int) DecoderOption {
 	return func(o *decoderOptions) error {
-		o.decoderConcurrency = n
+		// Optional: constrain to a sane range; adjust/remove as desired.
+		// if n > 1<<16 { return errors.New("c1z: WithDecoderConcurrency too large") }
+		o.decoderConcurrency = n
 		return nil
 	}
 }

142-154: Option assembly LGTM; tiny allocation nit

Pre-size slice to avoid one grow on append.

-		zstdOpts := []zstd.DOption{
+		zstdOpts := make([]zstd.DOption, 0, 3)
+		zstdOpts = append(zstdOpts,
 			zstd.WithDecoderLowmem(true),          // uses lower memory, trading potentially more allocations
-			zstd.WithDecoderMaxMemory(maxMemSize), // sets limit on maximum memory used when decoding stream
-		}
+			zstd.WithDecoderMaxMemory(maxMemSize), // sets limit on maximum memory used when decoding stream
+		)
 		if d.o.decoderConcurrency >= 0 {
 			zstdOpts = append(zstdOpts, zstd.WithDecoderConcurrency(d.o.decoderConcurrency))
 		}
 
 		zd, err := zstd.NewReader(
 			d.f,
 			zstdOpts...,
 		)

Also applies to: 147-149, 151-154


227-229: Default concurrency = 1: confirm intent

Setting 1 preserves prior behavior (no async). If you want to keep upstream defaults instead, consider -1 here to omit setting the option unless explicitly provided.


75-79: Update default documentation (now 128MiB)

Comment says 32MiB, but defaultDecoderMaxMemory is 128MiB.

-// Maximum is 1 << 63 bytes. Default is 32MiB.
+// Maximum is 1 << 63 bytes. Default is 128MiB.

91-95: Update decoded size default in docs (now 3GiB)

Comment says 1GiB; constant is 3GiB.

-// Maximum is 1 << 63 bytes. Default is 1GiB.
+// Maximum is 1 << 63 bytes. Default is 3GiB.

Also applies to: 16-21


67-73: Typo in comment

“subequent” → “subsequent”.

-// WithContext sets a context, when cancelled, will cause subequent calls to Read() to return ctx.Error().
+// WithContext sets a context that, when cancelled, will cause subsequent calls to Read() to return ctx.Err().

35-47: Harden header read against short reads and use io.SeekStart

Use io.ReadFull to guarantee full header and io.SeekStart for clarity.

-		_, err := rs.Seek(0, 0)
+		_, err := rs.Seek(0, io.SeekStart)
 		if err != nil {
 			return err
 		}
 	}
 
 	headerBytes := make([]byte, len(C1ZFileHeader))
-	_, err := reader.Read(headerBytes)
+	_, err := io.ReadFull(reader, headerBytes)

Also applies to: 43-47


28-29: Nit: double space in error message

Minor formatting.

-	ErrWindowSizeExceeded = fmt.Errorf("c1z: window size exceeded, increase DecoderMaxMemory using %v  environment variable", maxDecoderMemorySizeEnv)
+	ErrWindowSizeExceeded = fmt.Errorf("c1z: window size exceeded, increase DecoderMaxMemory using %v environment variable", maxDecoderMemorySizeEnv)
vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go (3)

3-3: Confirm architecture coverage (consider mips64le/loong64).

If unaligned LE loads are safe on mips64le and loong64 for your targets/toolchains, consider adding them here; otherwise, documenting the omission would help.


39-55: Document alignment and bounds preconditions.

These functions assume sufficient length and allow unaligned access on the listed archs. Add brief preconditions to avoid misuse by future callers.

Example:

-// Store16 will store v at b.
+// Store16 stores v at b[0:]. Caller must ensure len(b) >= 2.

(Repeat for 32/64; for Load* note that len(b) - int(i) >= N.)


11-16: Remove stale commented code.

Comments referencing Uint16 inside Load8 are misleading.

- //return binary.LittleEndian.Uint16(b[i:])
- //return *(*uint16)(unsafe.Pointer(&b[i]))
+ // Fast LE byte load without bounds checks.
vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go (2)

1-1: Build tags: confirm intended fallbacks.

This complements the unsafe path. If you add new LE 64-bit archs to the unsafe file, mirror the condition here to keep sets complementary.


29-42: Clarify Store* semantics.

Minor doc nit: note that Store* writes starting at b[0:], mirroring PutUintXX behavior.

-// Store32 will store v at b.
+// Store32 stores v at b[0:].

(Repeat for 16/64.)

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 8210db3 and 6127e1e.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (25)
  • go.mod (1 hunks)
  • pkg/dotc1z/c1file.go (3 hunks)
  • pkg/dotc1z/decoder.go (4 hunks)
  • pkg/dotc1z/dotc1z.go (1 hunks)
  • pkg/dotc1z/file.go (2 hunks)
  • vendor/github.com/klauspost/compress/README.md (8 hunks)
  • vendor/github.com/klauspost/compress/huff0/bitreader.go (7 hunks)
  • vendor/github.com/klauspost/compress/internal/le/le.go (1 hunks)
  • vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go (1 hunks)
  • vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go (1 hunks)
  • vendor/github.com/klauspost/compress/s2sx.mod (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/README.md (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/bitreader.go (6 hunks)
  • vendor/github.com/klauspost/compress/zstd/blockdec.go (0 hunks)
  • vendor/github.com/klauspost/compress/zstd/blockenc.go (3 hunks)
  • vendor/github.com/klauspost/compress/zstd/decoder.go (2 hunks)
  • vendor/github.com/klauspost/compress/zstd/enc_base.go (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/matchlen_generic.go (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/seqdec.go (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/seqdec_amd64.s (16 hunks)
  • vendor/github.com/klauspost/compress/zstd/seqdec_generic.go (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/seqenc.go (0 hunks)
  • vendor/github.com/klauspost/compress/zstd/snappy.go (2 hunks)
  • vendor/github.com/klauspost/compress/zstd/zstd.go (2 hunks)
  • vendor/modules.txt (1 hunks)
💤 Files with no reviewable changes (2)
  • vendor/github.com/klauspost/compress/zstd/seqenc.go
  • vendor/github.com/klauspost/compress/zstd/blockdec.go
🧰 Additional context used
🪛 LanguageTool
vendor/github.com/klauspost/compress/README.md

[style] ~25-~25: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...isable all assembly across packages.

Use the links above for more information on...

(ENGLISH_WORD_REPEAT_BEGINNING_RULE)


[grammar] ~40-~40: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1017

  • s2: Don't use stack for index tables htt...

(QB_NEW_EN)


[grammar] ~41-~41: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1014

  • gzhttp: No content-type on no body respo...

(QB_NEW_EN)


[grammar] ~42-~42: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1011

  • gzhttp: Do not set the content-type when...

(QB_NEW_EN)


[grammar] ~94-~94: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/871
* flate: Reduce allocations in decompresso...

(QB_NEW_EN)


[grammar] ~95-~95: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/869
* s2: Fix EstimateBlockSize on 6&7 length ...

(QB_NEW_EN)


[grammar] ~153-~153: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/739
* zstd: Add delta encoding support by @gre...

(QB_NEW_EN)

🪛 markdownlint-cli2 (0.17.2)
vendor/github.com/klauspost/compress/README.md

31-31: Bare URL used

(MD034, no-bare-urls)


32-32: Bare URL used

(MD034, no-bare-urls)


33-33: Bare URL used

(MD034, no-bare-urls)


34-34: Bare URL used

(MD034, no-bare-urls)


35-35: Bare URL used

(MD034, no-bare-urls)


36-36: Bare URL used

(MD034, no-bare-urls)


37-37: Bare URL used

(MD034, no-bare-urls)


40-40: Bare URL used

(MD034, no-bare-urls)


41-41: Bare URL used

(MD034, no-bare-urls)


42-42: Bare URL used

(MD034, no-bare-urls)


43-43: Bare URL used

(MD034, no-bare-urls)


94-94: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


94-94: Hard tabs
Column: 1

(MD010, no-hard-tabs)


94-94: Bare URL used

(MD034, no-bare-urls)


95-95: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


95-95: Hard tabs
Column: 1

(MD010, no-hard-tabs)


95-95: Bare URL used

(MD034, no-bare-urls)


96-96: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


96-96: Hard tabs
Column: 1

(MD010, no-hard-tabs)


96-96: Bare URL used

(MD034, no-bare-urls)


153-153: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


153-153: Hard tabs
Column: 1

(MD010, no-hard-tabs)


153-153: Bare URL used

(MD034, no-bare-urls)


196-196: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


196-196: Hard tabs
Column: 1

(MD010, no-hard-tabs)


196-196: Bare URL used

(MD034, no-bare-urls)


238-238: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


238-238: Hard tabs
Column: 1

(MD010, no-hard-tabs)


238-238: Bare URL used

(MD034, no-bare-urls)


265-265: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


265-265: Hard tabs
Column: 1

(MD010, no-hard-tabs)


266-266: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


266-266: Hard tabs
Column: 1

(MD010, no-hard-tabs)


267-267: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


267-267: Hard tabs
Column: 1

(MD010, no-hard-tabs)


268-268: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


268-268: Hard tabs
Column: 1

(MD010, no-hard-tabs)


269-269: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


269-269: Hard tabs
Column: 1

(MD010, no-hard-tabs)


270-270: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


270-270: Hard tabs
Column: 1

(MD010, no-hard-tabs)


287-287: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


287-287: Hard tabs
Column: 1

(MD010, no-hard-tabs)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Cursor Bugbot
  • GitHub Check: go-test (1.23.x, windows-latest)
🔇 Additional comments (24)
vendor/github.com/klauspost/compress/s2sx.mod (1)

3-3: Go 1.22 floor in vendor looks good

This aligns with klauspost/compress v1.18.0’s minimum. Ensure CI uses Go ≥1.22.

See the go.mod comment for a script that checks CI Go versions.

vendor/github.com/klauspost/compress/zstd/enc_base.go (1)

119-121: Panic message fix (t vs s) is correct

Improves diagnostics without changing behavior.

vendor/github.com/klauspost/compress/zstd/README.md (1)

9-9: Doc note about noasm/nounsafe build tags: good addition

Clear guidance that matches the new internal/le loaders. No action needed on our side.

vendor/modules.txt (1)

269-276: Vendor manifest updated to v1.18.0 with internal/le: looks consistent

Version and new package entries line up with the dependency bump.

vendor/github.com/klauspost/compress/zstd/snappy.go (1)

198-201: Correct error propagation on Write paths — good fix.

Switching to return r.err (the actual Write error) eliminates leakage of an unrelated err value.

Also applies to: 241-243

vendor/github.com/klauspost/compress/zstd/blockenc.go (2)

12-13: Using slices.Max requires Go ≥ 1.21 — confirm toolchain.

Ensure the module CI/go.mod uses a Go version that includes stdlib slices (1.21+), otherwise builds will fail.

Would you like a small script to assert Go toolchain version and fail early in CI?


889-891: LGTM: cleaner max count via slices.Max.

The +1 range here correctly includes the top symbol; consistent with expected histogram bounds.

vendor/github.com/klauspost/compress/zstd/decoder.go (2)

126-127: Docstring correction for Read — accurate now.

“bytes read” (not written) clarifies the contract. No further action.


323-327: State reuse fix: reset bitreader cursor between DecodeAll calls — good.

Zeroing frame.history.decoders.br.cursor alongside in=nil prevents stale cursor state on pooled decoders.

Consider adding a regression test that runs consecutive DecodeAll calls on the same Decoder verifying identical output without reallocation spikes.
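
A hedged sketch of that regression test follows: it runs repeated DecodeAll calls on one reused Decoder and checks the output stays identical. The zstd calls are the library's public API; the test name, package, and placement are assumptions.

```go
package zstd_test

import (
	"bytes"
	"testing"

	"github.com/klauspost/compress/zstd"
)

// TestDecodeAllReuse decodes the same frame repeatedly with one Decoder to
// catch stale bitreader cursor state between pooled DecodeAll calls.
func TestDecodeAllReuse(t *testing.T) {
	plain := bytes.Repeat([]byte("cursor state must reset between calls "), 1024)

	enc, err := zstd.NewWriter(nil)
	if err != nil {
		t.Fatal(err)
	}
	compressed := enc.EncodeAll(plain, nil)

	dec, err := zstd.NewReader(nil)
	if err != nil {
		t.Fatal(err)
	}
	defer dec.Close()

	for i := 0; i < 10; i++ {
		got, err := dec.DecodeAll(compressed, nil)
		if err != nil {
			t.Fatalf("call %d: %v", i, err)
		}
		if !bytes.Equal(got, plain) {
			t.Fatalf("call %d: output mismatch", i)
		}
	}
}
```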

vendor/github.com/klauspost/compress/zstd/zstd.go (1)

12-13: Switch to internal le loaders — aligns with v1.18.0 and nounsafe/noasm paths.

Using internal/le for LE loads centralizes unsafe/portable variants. Looks good.

Double-check that vendor/github.com/klauspost/compress/internal/le is present and builds under -tags nounsafe in CI.

Also applies to: 114-119

vendor/github.com/klauspost/compress/zstd/seqdec_generic.go (1)

32-32: Fast-path guard invariant holds. bitReader.cursor is set to len(in) and only ever decremented by fillFast/fill, so it reliably equals the remaining input bytes. The condition cursor > 4+((maxOffsetBits+16+16)>>3) thus guarantees sufficient data for both inlined fillFast+getBits sequences without risk of overread.

pkg/dotc1z/file.go (1)

14-14: API: loadC1z now takes decoder options (opts ...DecoderOption).

Nice addition. Call sites with zero opts remain source-compatible.

vendor/github.com/klauspost/compress/internal/le/le.go (1)

1-5: LGTM: generic Indexer constraint is clear and minimal.

No issues noted.

pkg/dotc1z/c1file.go (2)

106-109: New decoderOptions field looks good.

Plumbs options cleanly without affecting existing behavior.


140-143: Forwarding options to loadC1z is correct.

This preserves zero-config behavior while enabling customization.

pkg/dotc1z/dotc1z.go (1)

17-19: No internal call sites; approving NewC1FileReader API update.

vendor/github.com/klauspost/compress/zstd/bitreader.go (2)

12-14: Struct/import changes look correct

Importing internal/le and adding cursor int aligns with the new loader approach and updated asm offsets. No issues spotted.

Also applies to: 22-24


73-76: Bitreader cursor-based loads LGTM

The fillFast, fillFastStart, fill, finished, and remain transitions to le.Load32/64 with a reverse-moving cursor are consistent and preserve semantics.

Also applies to: 80-83, 90-102, 105-107, 115-117

vendor/github.com/klauspost/compress/zstd/matchlen_generic.go (1)

18-29: LGTM: switch to le.Load64 and cursorless loop is sound

The 8-byte loop and tail comparison preserve behavior; slicing after n is correct given a is shortest.

pkg/dotc1z/decoder.go (1)

61-65: LGTM: options struct extension

Adding decoderConcurrency here cleanly threads the setting through the decoder.

vendor/github.com/klauspost/compress/zstd/seqdec.go (1)

248-306: LGTM: inlined fast path matches upstream intent

Cursor-based boundary and inlined nextFast logic look consistent with v1.18.0 performance work.

vendor/github.com/klauspost/compress/huff0/bitreader.go (2)

13-14: LGTM: switch to internal le loads and widened 4-byte fast paths

Changes align with upstream micro-opts and reduce bounds checks.

Also applies to: 69-75, 79-82, 89-101, 177-181, 185-188, 196-201


190-201: Retain off > 4 in bitReaderShifted.fill
This guard matches the upstream implementation in bitReaderShifted.fill (uses if b.off > 4) and is intentionally different from the >= 4 fast-path in bitReaderBytes.fill.

Likely an incorrect or invalid review comment.

vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go (1)

1-9: No action needed – go.mod specifies Go 1.23.4, so unsafe.SliceData is supported.

Comment on lines +11 to +16
// Load8 will load from b at index i.
func Load8[I Indexer](b []byte, i I) byte {
//return binary.LittleEndian.Uint16(b[i:])
//return *(*uint16)(unsafe.Pointer(&b[i]))
return *(*byte)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}

🛠️ Refactor suggestion

Cast index to uintptr in unsafe.Add.

Making the offset explicit avoids coupling the generic Indexer to a concrete uintptr at call sites and improves readability.

Apply:

- return *(*byte)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
+ return *(*byte)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), uintptr(i)))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Load8 will load from b at index i.
func Load8[I Indexer](b []byte, i I) byte {
//return binary.LittleEndian.Uint16(b[i:])
//return *(*uint16)(unsafe.Pointer(&b[i]))
return *(*byte)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}
// Load8 will load from b at index i.
func Load8[I Indexer](b []byte, i I) byte {
//return binary.LittleEndian.Uint16(b[i:])
//return *(*uint16)(unsafe.Pointer(&b[i]))
return *(*byte)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), uintptr(i)))
}
🤖 Prompt for AI Agents
In vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go around
lines 11 to 16, the unsafe.Add call passes the generic index i directly; cast
the index to uintptr to make the offset explicit and avoid coupling the Indexer
type to uintptr at the call site — convert i to uintptr in the unsafe.Add second
argument so the pointer arithmetic uses uintptr(i) instead of i.

Comment on lines +18 to +23
// Load16 will load from b at index i.
func Load16[I Indexer](b []byte, i I) uint16 {
//return binary.LittleEndian.Uint16(b[i:])
//return *(*uint16)(unsafe.Pointer(&b[i]))
return *(*uint16)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}

🛠️ Refactor suggestion

Cast index to uintptr in unsafe.Add.

Same rationale as above.

- return *(*uint16)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
+ return *(*uint16)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), uintptr(i)))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Load16 will load from b at index i.
func Load16[I Indexer](b []byte, i I) uint16 {
//return binary.LittleEndian.Uint16(b[i:])
//return *(*uint16)(unsafe.Pointer(&b[i]))
return *(*uint16)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}
// Load16 will load from b at index i.
func Load16[I Indexer](b []byte, i I) uint16 {
//return binary.LittleEndian.Uint16(b[i:])
//return *(*uint16)(unsafe.Pointer(&b[i]))
return *(*uint16)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), uintptr(i)))
}
🤖 Prompt for AI Agents
In vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go around
lines 18 to 23, the call to unsafe.Add passes i directly which is not the
correct type; cast the generic index i to uintptr before passing it to
unsafe.Add (i.e., convert i to uintptr) so the pointer arithmetic uses the
correct type and matches the same rationale used elsewhere in the file.

Comment on lines +25 to +30
// Load32 will load from b at index i.
func Load32[I Indexer](b []byte, i I) uint32 {
//return binary.LittleEndian.Uint32(b[i:])
//return *(*uint32)(unsafe.Pointer(&b[i]))
return *(*uint32)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}

🛠️ Refactor suggestion

Cast index to uintptr in unsafe.Add.

- return *(*uint32)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
+ return *(*uint32)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), uintptr(i)))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Load32 will load from b at index i.
func Load32[I Indexer](b []byte, i I) uint32 {
//return binary.LittleEndian.Uint32(b[i:])
//return *(*uint32)(unsafe.Pointer(&b[i]))
return *(*uint32)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}
// Load32 will load from b at index i.
func Load32[I Indexer](b []byte, i I) uint32 {
//return binary.LittleEndian.Uint32(b[i:])
//return *(*uint32)(unsafe.Pointer(&b[i]))
return *(*uint32)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), uintptr(i)))
}
🤖 Prompt for AI Agents
In vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go around
lines 25 to 30, the call to unsafe.Add passes the generic index i directly which
is invalid for the offset parameter; cast the index to uintptr when calling
unsafe.Add (i.e. use uintptr(i)) so the offset is the correct uintptr type,
keeping the pointer to the slice data and returning the dereferenced uint32 as
before.

Comment on lines +32 to +37
// Load64 will load from b at index i.
func Load64[I Indexer](b []byte, i I) uint64 {
//return binary.LittleEndian.Uint64(b[i:])
//return *(*uint64)(unsafe.Pointer(&b[i]))
return *(*uint64)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}

🛠️ Refactor suggestion

Cast index to uintptr in unsafe.Add.

- return *(*uint64)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
+ return *(*uint64)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), uintptr(i)))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Load64 will load from b at index i.
func Load64[I Indexer](b []byte, i I) uint64 {
//return binary.LittleEndian.Uint64(b[i:])
//return *(*uint64)(unsafe.Pointer(&b[i]))
return *(*uint64)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}
// Load64 will load from b at index i.
func Load64[I Indexer](b []byte, i I) uint64 {
//return binary.LittleEndian.Uint64(b[i:])
//return *(*uint64)(unsafe.Pointer(&b[i]))
return *(*uint64)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), uintptr(i)))
}
🤖 Prompt for AI Agents
In vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go around
lines 32 to 37, the call to unsafe.Add passes the generic index i directly; cast
i to uintptr before passing it to unsafe.Add (e.g., replace
unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i) with
unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), uintptr(i))) so the offset
argument has the correct type.

@ggreer force-pushed the ggreer/decoder-concurrency-option branch from 6127e1e to b109c95 on September 4, 2025 at 23:12
cursor[bot]

This comment was marked as outdated.

coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
pkg/dotc1z/manager/s3/s3.go (1)

141-145: Close file handle to prevent FD leak in SaveC1Z

os.Open result isn’t closed, leaking a descriptor on error/success paths.

 f, err := os.Open(s.tmpFile)
 if err != nil {
 	return err
 }
+defer f.Close()
pkg/dotc1z/decoder.go (1)

167-170: Fix nil dereference when decoder initialization fails

If zstd.NewReader returns an error, d.zd remains nil and d.decoderInitErr is set. The current check won’t trigger, leading to a nil deref at d.zd.Read(p).

-	// Check we have a valid decoder
-	if d.zd != nil && d.decoderInitErr != nil {
-		return 0, d.decoderInitErr
-	}
+	// Check decoder init error (zd will be nil on failure)
+	if d.decoderInitErr != nil {
+		return 0, d.decoderInitErr
+	}
+	if d.zd == nil {
+		return 0, fmt.Errorf("c1z: decoder not initialized")
+	}
🧹 Nitpick comments (8)
pkg/dotc1z/c1file.go (1)

124-128: Do not overwrite decoder options; append to preserve composability

If WithDecoderOptions is called multiple times, the latest call overwrites earlier ones. Append instead to match WithPragma semantics.

 func WithDecoderOptions(opts ...DecoderOption) C1ZOption {
 	return func(o *c1zOptions) {
-		o.decoderOptions = opts
+		o.decoderOptions = append(o.decoderOptions, opts...)
 	}
 }
pkg/dotc1z/manager/manager.go (1)

33-37: Append decoder options instead of overwriting

Avoid discarding previously set options when WithDecoderOptions is invoked more than once.

 func WithDecoderOptions(opts ...dotc1z.DecoderOption) ManagerOption {
 	return func(o *managerOptions) {
-		o.decoderOptions = opts
+		o.decoderOptions = append(o.decoderOptions, opts...)
 	}
 }
pkg/dotc1z/manager/local/local.go (1)

33-37: Append decoder options to avoid overwriting prior configuration

Aligns with other option helpers and enables composition.

 func WithDecoderOptions(opts ...dotc1z.DecoderOption) Option {
 	return func(o *localManager) {
-		o.decoderOptions = opts
+		o.decoderOptions = append(o.decoderOptions, opts...)
 	}
 }
pkg/dotc1z/manager/s3/s3.go (1)

37-41: Append, don’t overwrite, decoder options

Maintain composability when multiple callers set options.

 func WithDecoderOptions(opts ...dotc1z.DecoderOption) Option {
 	return func(o *s3Manager) {
-		o.decoderOptions = opts
+		o.decoderOptions = append(o.decoderOptions, opts...)
 	}
 }
pkg/dotc1z/decoder.go (1)

91-94: Docs mismatch: default decoded size is 3GiB, not 1GiB

Constant defaultMaxDecodedSize is 3 GiB. Update comment to avoid confusion.

-// Maximum is 1 << 63 bytes. Default is 1GiB.
+// Maximum is 1 << 63 bytes. Default is 3GiB.
vendor/github.com/klauspost/compress/huff0/bitreader.go (3)

65-74: fillFast underflow guard is caller-dependent

b.off-4 assumes ≥4 bytes available. If any caller can hit off<4, this will underflow in unsafe mode. If the contract is guaranteed, consider a brief comment here; otherwise add a fast-path guard.

 func (b *bitReaderBytes) fillFast() {
   if b.bitsRead < 32 {
     return
   }
-  // 2 bounds checks.
+  // Caller must ensure at least 4 bytes remain; keep fast path branchless.
+  // If this guarantee ever changes, add: if b.off < 4 { return }
   low := le.Load32(b.in, b.off-4)
   b.value |= uint64(low) << (b.bitsRead - 32)
   b.bitsRead -= 32
   b.off -= 4
 }

173-181: Same contract concern for shifted reader’s fillFast

As above, b.off-4 requires ≥4 bytes. Confirm caller contract or add a minimal guard/comment.

 func (b *bitReaderShifted) fillFast() {
   if b.bitsRead < 32 {
     return
   }
-  low := le.Load32(b.in, b.off-4)
+  // Caller guarantees ≥4 bytes; add guard if that ever changes.
+  low := le.Load32(b.in, b.off-4)
   b.value |= uint64(low) << ((b.bitsRead - 32) & 63)
   b.bitsRead -= 32
   b.off -= 4
 }

195-201: Minor consistency/perf nit: use >=4 like bytes variant

Enables fast 4-byte path instead of falling back to byte-by-byte for exactly 4 bytes.

- if b.off > 4 {
+ if b.off >= 4 {
     low := le.Load32(b.in, b.off-4)
     b.value |= uint64(low) << ((b.bitsRead - 32) & 63)
     b.bitsRead -= 32
     b.off -= 4
     return
   }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 6127e1e and b109c95.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (28)
  • go.mod (1 hunks)
  • pkg/dotc1z/c1file.go (3 hunks)
  • pkg/dotc1z/decoder.go (5 hunks)
  • pkg/dotc1z/dotc1z.go (1 hunks)
  • pkg/dotc1z/file.go (2 hunks)
  • pkg/dotc1z/manager/local/local.go (3 hunks)
  • pkg/dotc1z/manager/manager.go (3 hunks)
  • pkg/dotc1z/manager/s3/s3.go (3 hunks)
  • vendor/github.com/klauspost/compress/README.md (8 hunks)
  • vendor/github.com/klauspost/compress/huff0/bitreader.go (7 hunks)
  • vendor/github.com/klauspost/compress/internal/le/le.go (1 hunks)
  • vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go (1 hunks)
  • vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go (1 hunks)
  • vendor/github.com/klauspost/compress/s2sx.mod (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/README.md (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/bitreader.go (6 hunks)
  • vendor/github.com/klauspost/compress/zstd/blockdec.go (0 hunks)
  • vendor/github.com/klauspost/compress/zstd/blockenc.go (3 hunks)
  • vendor/github.com/klauspost/compress/zstd/decoder.go (2 hunks)
  • vendor/github.com/klauspost/compress/zstd/enc_base.go (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/matchlen_generic.go (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/seqdec.go (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/seqdec_amd64.s (16 hunks)
  • vendor/github.com/klauspost/compress/zstd/seqdec_generic.go (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/seqenc.go (0 hunks)
  • vendor/github.com/klauspost/compress/zstd/snappy.go (2 hunks)
  • vendor/github.com/klauspost/compress/zstd/zstd.go (2 hunks)
  • vendor/modules.txt (1 hunks)
💤 Files with no reviewable changes (2)
  • vendor/github.com/klauspost/compress/zstd/blockdec.go
  • vendor/github.com/klauspost/compress/zstd/seqenc.go
✅ Files skipped from review due to trivial changes (1)
  • go.mod
🚧 Files skipped from review as they are similar to previous changes (14)
  • vendor/github.com/klauspost/compress/s2sx.mod
  • vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go
  • vendor/github.com/klauspost/compress/zstd/decoder.go
  • pkg/dotc1z/file.go
  • pkg/dotc1z/dotc1z.go
  • vendor/github.com/klauspost/compress/zstd/enc_base.go
  • vendor/github.com/klauspost/compress/zstd/blockenc.go
  • vendor/github.com/klauspost/compress/zstd/bitreader.go
  • vendor/github.com/klauspost/compress/zstd/seqdec.go
  • vendor/github.com/klauspost/compress/zstd/seqdec_generic.go
  • vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go
  • vendor/github.com/klauspost/compress/zstd/zstd.go
  • vendor/modules.txt
  • vendor/github.com/klauspost/compress/zstd/seqdec_amd64.s
🧰 Additional context used
🧬 Code graph analysis (7)
pkg/dotc1z/manager/s3/s3.go (3)
pkg/dotc1z/decoder.go (1)
  • DecoderOption (57-57)
pkg/dotc1z/c1file.go (5)
  • WithDecoderOptions (124-128)
  • C1ZOption (110-110)
  • WithTmpDir (112-116)
  • WithPragma (118-122)
  • NewC1ZFile (131-158)
pkg/dotc1z/manager/local/local.go (3)
  • WithDecoderOptions (33-37)
  • Option (25-25)
  • WithTmpDir (27-31)
pkg/dotc1z/manager/manager.go (4)
pkg/dotc1z/decoder.go (1)
  • DecoderOption (57-57)
pkg/dotc1z/c1file.go (2)
  • WithDecoderOptions (124-128)
  • WithTmpDir (112-116)
pkg/dotc1z/manager/local/local.go (3)
  • WithDecoderOptions (33-37)
  • Option (25-25)
  • WithTmpDir (27-31)
pkg/dotc1z/manager/s3/s3.go (4)
  • WithDecoderOptions (37-41)
  • NewS3Manager (175-195)
  • Option (29-29)
  • WithTmpDir (31-35)
vendor/github.com/klauspost/compress/zstd/matchlen_generic.go (2)
vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go (1)
  • Load64 (25-27)
vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go (1)
  • Load64 (33-37)
pkg/dotc1z/c1file.go (4)
pkg/dotc1z/decoder.go (1)
  • DecoderOption (57-57)
pkg/dotc1z/manager/local/local.go (1)
  • WithDecoderOptions (33-37)
pkg/dotc1z/manager/manager.go (1)
  • WithDecoderOptions (33-37)
pkg/dotc1z/manager/s3/s3.go (1)
  • WithDecoderOptions (37-41)
pkg/dotc1z/decoder.go (1)
vendor/github.com/klauspost/compress/zstd/decoder.go (1)
  • NewReader (87-123)
pkg/dotc1z/manager/local/local.go (4)
pkg/dotc1z/decoder.go (1)
  • DecoderOption (57-57)
pkg/dotc1z/c1file.go (4)
  • WithDecoderOptions (124-128)
  • C1ZOption (110-110)
  • WithTmpDir (112-116)
  • NewC1ZFile (131-158)
pkg/dotc1z/manager/manager.go (2)
  • WithDecoderOptions (33-37)
  • WithTmpDir (27-31)
pkg/dotc1z/manager/s3/s3.go (3)
  • WithDecoderOptions (37-41)
  • Option (29-29)
  • WithTmpDir (31-35)
vendor/github.com/klauspost/compress/huff0/bitreader.go (2)
vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go (2)
  • Load32 (20-22)
  • Load64 (25-27)
vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go (2)
  • Load32 (26-30)
  • Load64 (33-37)
🪛 LanguageTool
vendor/github.com/klauspost/compress/README.md

[style] ~25-~25: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...isable all assembly across packages.

Use the links above for more information on...

(ENGLISH_WORD_REPEAT_BEGINNING_RULE)


[grammar] ~40-~40: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1017

  • s2: Don't use stack for index tables htt...

(QB_NEW_EN)


[grammar] ~41-~41: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1014

  • gzhttp: No content-type on no body respo...

(QB_NEW_EN)


[grammar] ~42-~42: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1011

  • gzhttp: Do not set the content-type when...

(QB_NEW_EN)


[grammar] ~94-~94: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/871
* flate: Reduce allocations in decompresso...

(QB_NEW_EN)


[grammar] ~95-~95: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/869
* s2: Fix EstimateBlockSize on 6&7 length ...

(QB_NEW_EN)


[grammar] ~153-~153: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/739
* zstd: Add delta encoding support by @gre...

(QB_NEW_EN)

🪛 markdownlint-cli2 (0.17.2)
vendor/github.com/klauspost/compress/README.md

31-31: Bare URL used

(MD034, no-bare-urls)


32-32: Bare URL used

(MD034, no-bare-urls)


33-33: Bare URL used

(MD034, no-bare-urls)


34-34: Bare URL used

(MD034, no-bare-urls)


35-35: Bare URL used

(MD034, no-bare-urls)


36-36: Bare URL used

(MD034, no-bare-urls)


37-37: Bare URL used

(MD034, no-bare-urls)


40-40: Bare URL used

(MD034, no-bare-urls)


41-41: Bare URL used

(MD034, no-bare-urls)


42-42: Bare URL used

(MD034, no-bare-urls)


43-43: Bare URL used

(MD034, no-bare-urls)


94-94: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


94-94: Hard tabs
Column: 1

(MD010, no-hard-tabs)


94-94: Bare URL used

(MD034, no-bare-urls)


95-95: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


95-95: Hard tabs
Column: 1

(MD010, no-hard-tabs)


95-95: Bare URL used

(MD034, no-bare-urls)


96-96: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


96-96: Hard tabs
Column: 1

(MD010, no-hard-tabs)


96-96: Bare URL used

(MD034, no-bare-urls)


153-153: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


153-153: Hard tabs
Column: 1

(MD010, no-hard-tabs)


153-153: Bare URL used

(MD034, no-bare-urls)


196-196: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


196-196: Hard tabs
Column: 1

(MD010, no-hard-tabs)


196-196: Bare URL used

(MD034, no-bare-urls)


238-238: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


238-238: Hard tabs
Column: 1

(MD010, no-hard-tabs)


238-238: Bare URL used

(MD034, no-bare-urls)


265-265: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


265-265: Hard tabs
Column: 1

(MD010, no-hard-tabs)


266-266: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


266-266: Hard tabs
Column: 1

(MD010, no-hard-tabs)


267-267: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


267-267: Hard tabs
Column: 1

(MD010, no-hard-tabs)


268-268: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


268-268: Hard tabs
Column: 1

(MD010, no-hard-tabs)


269-269: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


269-269: Hard tabs
Column: 1

(MD010, no-hard-tabs)


270-270: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


270-270: Hard tabs
Column: 1

(MD010, no-hard-tabs)


287-287: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


287-287: Hard tabs
Column: 1

(MD010, no-hard-tabs)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: go-test (1.23.x, windows-latest)
  • GitHub Check: go-test (1.23.x, ubuntu-latest)
  • GitHub Check: go-lint
  • GitHub Check: Cursor Bugbot
🔇 Additional comments (21)
vendor/github.com/klauspost/compress/zstd/snappy.go (2)

198-201: Fix: propagate write error from w.Write correctly

Returning r.err here ensures the actual write error is surfaced instead of a potentially stale err from earlier decoding/encoding steps. Good catch.


240-243: Same correct error propagation for uncompressed path

Mirrors the fix above; returning r.err avoids leaking an unrelated err. Looks good.

pkg/dotc1z/c1file.go (2)

105-109: LGTM: decoder options plumbed into c1zOptions

Field addition looks correct and non-breaking.


140-141: LGTM: forwarding decoder options to loader

Passing options.decoderOptions... into loadC1z cleanly threads the settings.

pkg/dotc1z/manager/manager.go (2)

20-23: LGTM: manager options extended with decoder options

Struct expansion is straightforward and keeps backward compatibility.


56-58: LGTM: propagating decoder options to S3 and local managers

Conditional propagation is correct and avoids passing empty varargs.

Also applies to: 65-67

pkg/dotc1z/manager/local/local.go (2)

18-23: LGTM: local manager extended with decoder options

No behavioral regressions; fields remain private.


107-115: LGTM: option list assembly including decoder options

Construction is clear and maintainable.

pkg/dotc1z/manager/s3/s3.go (2)

21-27: LGTM: s3Manager extended with decoder options

Non-breaking and consistent with local manager.


126-134: LGTM: pass decoder options through to NewC1ZFile

Good conditional inclusion to avoid empty option plumbing.

pkg/dotc1z/decoder.go (3)

61-65: LGTM: add decoderConcurrency to options

Matches intended behavior; default handled in NewDecoder.


107-115: LGTM: concurrency option and semantics

Interface is clear (1=sync, 0=GOMAXPROCS, <0 skip). Consider documenting “<0 disables explicit setting” in package docs if not already.


143-150: LGTM: structured zstd options and conditional concurrency

Low-mem + max memory always applied; concurrency applied when non-negative.

Also applies to: 151-154

vendor/github.com/klauspost/compress/zstd/README.md (1)

9-9: Vendored doc tweak looks good

Accurately reflects upstream’s build tags (noasm/nounsafe). No action needed on our side.

vendor/github.com/klauspost/compress/README.md (1)

17-27: Upstream README updates acknowledged

These are vendor docs synced with v1.18.0; fine to keep as-is. Please avoid local edits to vendor docs to reduce churn.

Also applies to: 30-38, 39-44, 594-602

vendor/github.com/klauspost/compress/internal/le/le.go (1)

3-5: Indexer constraint LGTM

Covers all integer kinds needed for generic slice indexing across safe/unsafe paths.
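
For reviewers who haven't opened the vendored package: the purego (nounsafe) side of these helpers is, as far as I can tell, just generic little-endian loads over a slice expression. The snippet below is a local re-creation to show the idea, not the vendored file itself, and the exact set of integer kinds in the real constraint may differ slightly.

package main

import (
	"encoding/binary"
	"fmt"
)

// indexer mirrors the generic constraint described above.
type indexer interface {
	~int | ~int8 | ~int16 | ~int32 | ~int64 |
		~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64
}

// load32/load64 show the purego (nounsafe) behavior: little-endian loads at an
// arbitrary integer offset, with bounds checks done by the slice expression.
func load32[I indexer](b []byte, i I) uint32 { return binary.LittleEndian.Uint32(b[i:]) }
func load64[I indexer](b []byte, i I) uint64 { return binary.LittleEndian.Uint64(b[i:]) }

func main() {
	buf := []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
	fmt.Printf("%#x %#x\n", load32(buf, uint8(2)), load64(buf, 0))
}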

vendor/github.com/klauspost/compress/zstd/matchlen_generic.go (1)

18-29: Precondition relies on ‘a’ being shortest — double-check call sites

The new le.Load64 path assumes b has ≥ len(a) bytes at each 8-byte step. Verify all callers uphold “a is shortest” to avoid OOB in unsafe-enabled builds.
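
To illustrate why the precondition matters, here is a stand-alone matchlen sketch in the same style, using encoding/binary instead of the internal le package; it is not the vendored implementation. Note that the byte-wise tail also indexes b by a's length, so any caller passing a longer a would read out of bounds.

package main

import (
	"encoding/binary"
	"fmt"
	"math/bits"
)

// matchLen compares 8 bytes at a time and relies on the precondition that a is
// the shorter slice.
func matchLen(a, b []byte) int {
	var n int
	for len(a) >= 8 {
		diff := binary.LittleEndian.Uint64(a) ^ binary.LittleEndian.Uint64(b)
		if diff != 0 {
			return n + bits.TrailingZeros64(diff)>>3
		}
		a, b = a[8:], b[8:]
		n += 8
	}
	for i := range a {
		if a[i] != b[i] {
			break
		}
		n++
	}
	return n
}

func main() {
	fmt.Println(matchLen([]byte("zstandard!"), []byte("zstandXrd!"))) // 6
}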

vendor/github.com/klauspost/compress/huff0/bitreader.go (4)

13-14: Switch to internal/le is correct

Unified 32/64-bit loads via le.* match the rest of the upgrade.


79-82: fillFastStart change is fine

64-bit initial load via le.Load64 preserves semantics.


89-95: Nice: handle exactly 4 bytes via fast path

The >= 4 check plus early return handles the exactly-four-bytes edge case without extra work.
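
A stripped-down stand-in showing the control flow only; the field names and bit-packing convention below are assumptions, not the vendored reader. The point is that with off == 4 the 32-bit load at off-4 is still in bounds, so the byte-wise tail loop can be skipped.

package main

import (
	"encoding/binary"
	"fmt"
)

// bitReader is a simplified stand-in for the huff0 byte reader.
type bitReader struct {
	in       []byte
	off      uint
	value    uint64
	bitsRead uint8
}

func (b *bitReader) fill() {
	if b.bitsRead < 32 {
		return
	}
	if b.off >= 4 { // also covers the "exactly 4 bytes left" case
		low := binary.LittleEndian.Uint32(b.in[b.off-4:])
		b.value = b.value<<32 | uint64(low)
		b.bitsRead -= 32
		b.off -= 4
		return
	}
	for b.off > 0 && b.bitsRead >= 8 { // byte-wise tail for < 4 remaining bytes
		b.value = b.value<<8 | uint64(b.in[b.off-1])
		b.bitsRead -= 8
		b.off--
	}
}

func main() {
	br := &bitReader{in: []byte{0xAA, 0xBB, 0xCC, 0xDD}, off: 4, bitsRead: 64}
	br.fill()
	fmt.Printf("value=%#x off=%d bitsRead=%d\n", br.value, br.off, br.bitsRead)
}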


185-188: fillFastStart (shifted) is good

Matches the bytes variant; semantics preserved.

@ggreer ggreer force-pushed the ggreer/decoder-concurrency-option branch from b109c95 to aa4e704 Compare September 4, 2025 23:57

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
pkg/dotc1z/file.go (1)

41-52: Close the decoder on all paths (defer) to avoid leaks.
If io.Copy fails, r.Close is skipped. Defer it after construction.

Apply:

 r, err := NewDecoder(c1zFile, opts...)
 if err != nil {
   return "", err
 }
+defer r.Close()
 _, err = io.Copy(dbFile, r)
 if err != nil {
   return "", err
 }
-err = r.Close()
-if err != nil {
-  return "", err
-}
pkg/dotc1z/decoder.go (2)

167-171: Fix error gating to prevent nil deref on init failure.
If zstd.NewReader fails, d.zd is nil and d.decoderInitErr is set; the current check skips returning the error and then calls d.zd.Read, which panics on the nil decoder.

Apply:

-// Check we have a valid decoder
-if d.zd != nil && d.decoderInitErr != nil {
-  return 0, d.decoderInitErr
-}
+// Check we have a valid decoder
+if d.decoderInitErr != nil {
+  return 0, d.decoderInitErr
+}

43-47: Read full header to avoid short-read false negatives.
reader.Read can return fewer bytes than requested.

Apply:

 headerBytes := make([]byte, len(C1ZFileHeader))
-_, err := reader.Read(headerBytes)
+_, err := io.ReadFull(reader, headerBytes)
 if err != nil {
   return err
 }
vendor/github.com/klauspost/compress/huff0/bitreader.go (1)

173-181: Guard b.off before calling fillFast() to prevent OOB reads.
In the ZSTD sequence decoders (vendor/.../zstd/seqdec_generic.go at ~lines 43 & 46 and vendor/.../zstd/seqdec.go at ~lines 259 & 262), br.fillFast() is invoked unconditionally, but fillFast performs an unsafe le.Load32(b.in, b.off-4) which can read out of bounds if br.off < 4. Wrap each call in

if br.off >= 4 {
    br.fillFast()
}

or add a debug‐only guard at the top of fillFast() (e.g. if b.off < 4 { return }).

♻️ Duplicate comments (1)
pkg/dotc1z/decoder.go (1)

107-115: Document negative concurrency semantics (or validate).
Negative n silently bypasses setting zstd concurrency; this surprised reviewers previously.

Option A (document intent):

-// WithDecoderConcurrency sets the number of created decoders.
-// Default is 1, which disables async decoding/concurrency.
-// 0 uses GOMAXPROCS.
+// WithDecoderConcurrency sets the number of created decoders.
+// Default is 1 (disables async decoding).
+// 0 uses GOMAXPROCS.
+// n < 0 leaves concurrency unspecified (use library default).

Option B (validate and disallow negatives):

 func WithDecoderConcurrency(n int) DecoderOption {
   return func(o *decoderOptions) error {
-    o.decoderConcurrency = n
+    if n < 0 {
+      return fmt.Errorf("c1z: WithDecoderConcurrency cannot be negative; omit to use library default")
+    }
+    o.decoderConcurrency = n
     return nil
   }
 }
🧹 Nitpick comments (13)
vendor/github.com/klauspost/compress/README.md (5)

94-97: Markdown lint noise (tabs/indent/bare URLs) — exclude vendor from docs linters.

These upstream lines will keep triggering MD007/MD010/MD034. Prefer excluding vendor/** in markdownlint/LanguageTool configs rather than patching vendored text.


153-156: Same markdownlint caveat for vendored changelog formatting.

Recommend ignoring vendor/** in doc-lint jobs to avoid churn.


196-197: Vendored changelog line — keep as-is; ignore in linters.

Upstream formatting; do not modify.


238-239: Vendored changelog line — keep as-is; ignore in linters.

Upstream formatting; do not modify.


265-271: v1.15.0 changelog snippet — OK; note mixed list styles from upstream.

Suggest linter exclusion for vendor to prevent false positives.

pkg/dotc1z/manager/manager.go (1)

33-37: Allow multiple WithDecoderOptions calls (append instead of overwrite).
Current setter discards earlier options if called twice.

Apply:

 func WithDecoderOptions(opts ...dotc1z.DecoderOption) ManagerOption {
   return func(o *managerOptions) {
-    o.decoderOptions = opts
+    o.decoderOptions = append(o.decoderOptions, opts...)
   }
 }
pkg/dotc1z/decoder.go (4)

182-190: Tighten decoded-size enforcement.
Use >= before the read and add a post-increment check to signal overflow promptly.

Apply:

-if d.decodedBytes > maxDecodedSize {
+if d.decodedBytes >= maxDecodedSize {
   return 0, ErrMaxSizeExceeded
 }
 ...
 d.decodedBytes += uint64(n)
+if d.decodedBytes > maxDecodedSize {
+  return n, ErrMaxSizeExceeded
+}
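
As a self-contained illustration of the suggested ordering (check before the read, re-check after incrementing), here is a generic limiting reader; the type and error names are stand-ins, not the package's actual code.

package main

import (
	"errors"
	"fmt"
	"io"
	"strings"
)

var errMaxSizeExceeded = errors.New("max decoded size exceeded")

// limitedReader counts decoded bytes and fails once the cap is crossed,
// checking both before the read and after incrementing, as suggested above.
type limitedReader struct {
	r       io.Reader
	decoded uint64
	max     uint64
}

func (l *limitedReader) Read(p []byte) (int, error) {
	if l.decoded >= l.max {
		return 0, errMaxSizeExceeded
	}
	n, err := l.r.Read(p)
	l.decoded += uint64(n)
	if l.decoded > l.max {
		return n, errMaxSizeExceeded
	}
	return n, err
}

func main() {
	lr := &limitedReader{r: strings.NewReader("0123456789"), max: 4}
	n, err := lr.Read(make([]byte, 8))
	fmt.Println(n, err) // 8 max decoded size exceeded
}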

91-96: Fix doc default: comment says 1GiB, code uses 3GiB.
Align the comment or constant for clarity.

Apply (comment-only):

-// Maximum is 1 << 63 bytes. Default is 1GiB.
+// Maximum is 1 << 63 bytes. Default is 3GiB.

27-29: Minor: remove double space in error message.

Apply:

-ErrWindowSizeExceeded = fmt.Errorf("c1z: window size exceeded, increase DecoderMaxMemory using %v  environment variable", maxDecoderMemorySizeEnv)
+ErrWindowSizeExceeded = fmt.Errorf("c1z: window size exceeded, increase DecoderMaxMemory using %v environment variable", maxDecoderMemorySizeEnv)

67-73: Typo in comment.
“subequent” -> “subsequent”.

Apply:

-// WithContext sets a context, when cancelled, will cause subequent calls to Read() to return ctx.Error().
+// WithContext sets a context which, when cancelled, will cause subsequent calls to Read() to return ctx.Err().
vendor/github.com/klauspost/compress/huff0/bitreader.go (3)

50-50: Clarify comment: “exactly one byte”.

“Requires that at least one byte is requested every time” reads oddly. This API always peeks exactly one byte.

Apply:

-// peekByteFast requires that at least one byte is requested every time.
+// peekByteFast returns the next byte; callers must request exactly one byte and ensure the buffer is filled.

69-74: Stale “2 bounds checks” comment.

With le.Load32 the bounds-check story differs (purego: one via slicing; unsafe: none). Update the comment to avoid confusion.

Apply:

-	// 2 bounds checks.
+	// Fast-path 32-bit load; caller must ensure at least 4 bytes remain.

195-201: Consider enabling the off == 4 fast path for shifted reader as well.

Using >= 4 here mirrors the bytes reader and avoids the 4× byte loop when exactly 4 remain. If there’s no semantic reason to prefer the byte path, this is a small win.

Apply:

-	if b.off > 4 {
+	if b.off >= 4 {
 		low := le.Load32(b.in, b.off-4)
 		b.value |= uint64(low) << ((b.bitsRead - 32) & 63)
 		b.bitsRead -= 32
 		b.off -= 4
 		return
 	}

If intentional, add a short comment explaining why the shifted variant keeps the byte path at off == 4.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between b109c95 and aa4e704.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (28)
  • go.mod (1 hunks)
  • pkg/dotc1z/c1file.go (3 hunks)
  • pkg/dotc1z/decoder.go (5 hunks)
  • pkg/dotc1z/dotc1z.go (1 hunks)
  • pkg/dotc1z/file.go (2 hunks)
  • pkg/dotc1z/manager/local/local.go (3 hunks)
  • pkg/dotc1z/manager/manager.go (3 hunks)
  • pkg/dotc1z/manager/s3/s3.go (3 hunks)
  • vendor/github.com/klauspost/compress/README.md (8 hunks)
  • vendor/github.com/klauspost/compress/huff0/bitreader.go (7 hunks)
  • vendor/github.com/klauspost/compress/internal/le/le.go (1 hunks)
  • vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go (1 hunks)
  • vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go (1 hunks)
  • vendor/github.com/klauspost/compress/s2sx.mod (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/README.md (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/bitreader.go (6 hunks)
  • vendor/github.com/klauspost/compress/zstd/blockdec.go (0 hunks)
  • vendor/github.com/klauspost/compress/zstd/blockenc.go (3 hunks)
  • vendor/github.com/klauspost/compress/zstd/decoder.go (2 hunks)
  • vendor/github.com/klauspost/compress/zstd/enc_base.go (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/matchlen_generic.go (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/seqdec.go (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/seqdec_amd64.s (16 hunks)
  • vendor/github.com/klauspost/compress/zstd/seqdec_generic.go (1 hunks)
  • vendor/github.com/klauspost/compress/zstd/seqenc.go (0 hunks)
  • vendor/github.com/klauspost/compress/zstd/snappy.go (2 hunks)
  • vendor/github.com/klauspost/compress/zstd/zstd.go (2 hunks)
  • vendor/modules.txt (1 hunks)
💤 Files with no reviewable changes (2)
  • vendor/github.com/klauspost/compress/zstd/seqenc.go
  • vendor/github.com/klauspost/compress/zstd/blockdec.go
✅ Files skipped from review due to trivial changes (1)
  • vendor/github.com/klauspost/compress/s2sx.mod
🚧 Files skipped from review as they are similar to previous changes (20)
  • vendor/github.com/klauspost/compress/zstd/seqdec.go
  • go.mod
  • vendor/github.com/klauspost/compress/zstd/zstd.go
  • vendor/github.com/klauspost/compress/zstd/enc_base.go
  • vendor/github.com/klauspost/compress/zstd/README.md
  • pkg/dotc1z/manager/local/local.go
  • pkg/dotc1z/dotc1z.go
  • vendor/github.com/klauspost/compress/zstd/matchlen_generic.go
  • pkg/dotc1z/c1file.go
  • vendor/github.com/klauspost/compress/zstd/decoder.go
  • vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go
  • vendor/github.com/klauspost/compress/zstd/snappy.go
  • vendor/github.com/klauspost/compress/zstd/blockenc.go
  • vendor/github.com/klauspost/compress/zstd/seqdec_generic.go
  • vendor/github.com/klauspost/compress/zstd/bitreader.go
  • vendor/github.com/klauspost/compress/zstd/seqdec_amd64.s
  • pkg/dotc1z/manager/s3/s3.go
  • vendor/github.com/klauspost/compress/internal/le/le.go
  • vendor/modules.txt
  • vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go
🧰 Additional context used
🧬 Code graph analysis (4)
pkg/dotc1z/file.go (1)
pkg/dotc1z/decoder.go (2)
  • DecoderOption (57-57)
  • NewDecoder (208-241)
pkg/dotc1z/manager/manager.go (4)
pkg/dotc1z/decoder.go (1)
  • DecoderOption (57-57)
pkg/dotc1z/c1file.go (2)
  • WithDecoderOptions (124-128)
  • WithTmpDir (112-116)
pkg/dotc1z/manager/local/local.go (3)
  • WithDecoderOptions (33-37)
  • Option (25-25)
  • WithTmpDir (27-31)
pkg/dotc1z/manager/s3/s3.go (4)
  • WithDecoderOptions (37-41)
  • NewS3Manager (175-195)
  • Option (29-29)
  • WithTmpDir (31-35)
pkg/dotc1z/decoder.go (1)
vendor/github.com/klauspost/compress/zstd/decoder.go (1)
  • NewReader (87-123)
vendor/github.com/klauspost/compress/huff0/bitreader.go (2)
vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go (2)
  • Load32 (20-22)
  • Load64 (25-27)
vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go (2)
  • Load32 (26-30)
  • Load64 (33-37)
🪛 LanguageTool
vendor/github.com/klauspost/compress/README.md

[style] ~25-~25: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...isable all assembly across packages.

Use the links above for more information on...

(ENGLISH_WORD_REPEAT_BEGINNING_RULE)


[grammar] ~40-~40: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1017

  • s2: Don't use stack for index tables htt...

(QB_NEW_EN)


[grammar] ~41-~41: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1014

  • gzhttp: No content-type on no body respo...

(QB_NEW_EN)


[grammar] ~42-~42: There might be a mistake here.
Context: .../github.com/klauspost/compress/pull/1011

  • gzhttp: Do not set the content-type when...

(QB_NEW_EN)


[grammar] ~94-~94: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/871
* flate: Reduce allocations in decompresso...

(QB_NEW_EN)


[grammar] ~95-~95: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/869
* s2: Fix EstimateBlockSize on 6&7 length ...

(QB_NEW_EN)


[grammar] ~153-~153: There might be a mistake here.
Context: ...//github.com/klauspost/compress/pull/739
* zstd: Add delta encoding support by @gre...

(QB_NEW_EN)

🪛 markdownlint-cli2 (0.17.2)
vendor/github.com/klauspost/compress/README.md

31-31: Bare URL used

(MD034, no-bare-urls)


32-32: Bare URL used

(MD034, no-bare-urls)


33-33: Bare URL used

(MD034, no-bare-urls)


34-34: Bare URL used

(MD034, no-bare-urls)


35-35: Bare URL used

(MD034, no-bare-urls)


36-36: Bare URL used

(MD034, no-bare-urls)


37-37: Bare URL used

(MD034, no-bare-urls)


40-40: Bare URL used

(MD034, no-bare-urls)


41-41: Bare URL used

(MD034, no-bare-urls)


42-42: Bare URL used

(MD034, no-bare-urls)


43-43: Bare URL used

(MD034, no-bare-urls)


94-94: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


94-94: Hard tabs
Column: 1

(MD010, no-hard-tabs)


94-94: Bare URL used

(MD034, no-bare-urls)


95-95: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


95-95: Hard tabs
Column: 1

(MD010, no-hard-tabs)


95-95: Bare URL used

(MD034, no-bare-urls)


96-96: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


96-96: Hard tabs
Column: 1

(MD010, no-hard-tabs)


96-96: Bare URL used

(MD034, no-bare-urls)


153-153: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


153-153: Hard tabs
Column: 1

(MD010, no-hard-tabs)


153-153: Bare URL used

(MD034, no-bare-urls)


196-196: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


196-196: Hard tabs
Column: 1

(MD010, no-hard-tabs)


196-196: Bare URL used

(MD034, no-bare-urls)


238-238: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


238-238: Hard tabs
Column: 1

(MD010, no-hard-tabs)


238-238: Bare URL used

(MD034, no-bare-urls)


265-265: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


265-265: Hard tabs
Column: 1

(MD010, no-hard-tabs)


266-266: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


266-266: Hard tabs
Column: 1

(MD010, no-hard-tabs)


267-267: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


267-267: Hard tabs
Column: 1

(MD010, no-hard-tabs)


268-268: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


268-268: Hard tabs
Column: 1

(MD010, no-hard-tabs)


269-269: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


269-269: Hard tabs
Column: 1

(MD010, no-hard-tabs)


270-270: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


270-270: Hard tabs
Column: 1

(MD010, no-hard-tabs)


287-287: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


287-287: Hard tabs
Column: 1

(MD010, no-hard-tabs)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Cursor Bugbot
  • GitHub Check: go-test (1.23.x, windows-latest)
🔇 Additional comments (13)
vendor/github.com/klauspost/compress/README.md (5)

30-38: Changelog (1.18.0) block — OK to vendor as-is.

Docs-only; safe to keep aligned with upstream.


39-44: Changelog (1.17.11) block — OK to vendor as-is.

Docs-only; safe to keep aligned with upstream.


287-287: Vendored changelog line — OK.

No local edits; allow as-is.


594-602: Deflate “drop-in” table addition — looks correct and helpful.

Links and package paths are accurate; no action needed.


17-27: Vendored README: module bump to v1.18.0 is consistent in go.mod, go.sum, and vendor/modules.txt.

pkg/dotc1z/file.go (1)

14-14: Signature change LGTM; options propagation is clear.
No behavior change when opts are omitted.

pkg/dotc1z/manager/manager.go (2)

21-23: New decoderOptions field: SGTM.
Enables plumbing without affecting existing callers.


56-58: Forwarding options conditionally is fine.
Keeps zero-alloc when no opts were set.

Also applies to: 65-67

pkg/dotc1z/decoder.go (1)

143-149: Concurrency plumbing looks good.
Defaults to 1; callers can opt into GOMAXPROCS with 0, or fall back to the library default with a negative value (assuming that semantics is documented).

Also applies to: 227-229

vendor/github.com/klauspost/compress/huff0/bitreader.go (4)

12-14: Switch to internal le loader looks good.

Adopts internal LE loads consistently; matches broader PR direction and should reduce bounds-check overhead on hot paths.


79-82: LGTM: initial 64-bit load via le.Load64.

Correctly matches the precondition (>= 8 bytes) and removes redundant checks.


89-95: LGTM: widen 4-byte fast path to off >= 4.

Safe (reads at index off-4) and avoids the byte-wise tail loop for off == 4.


185-188: LGTM: shifted init via le.Load64.

Consistent with bytes-reader variant; no issues.

@ggreer ggreer merged commit 7dba25a into main Sep 5, 2025
6 checks passed
@ggreer ggreer deleted the ggreer/decoder-concurrency-option branch September 5, 2025 17:45
