@mohini-crl

Replicated from original PR cockroachdb#138283

Original author: tbg
Original creation date: 2025-01-06T12:12:32Z

Original reviewers: rickystewart

Original description:

See cockroachdb#136278 (comment).

gRPC has gotten a little worse at allocations, but it is similarly fast overall, perhaps even a little faster in the smaller RPCs we care most about.

Benchmark results

$ benchdiff --old lastmerge ./pkg/rpc -b -r 'BenchmarkGRPCPing' -d 1s -c 10
old:  3ce8f44 Merge #138561 #138779 #138793
new:  3708ee5 DEPS: add resolve hints and update packages

name                                        old time/op    new time/op    delta
GRPCPing/bytes=____256/rpc=UnaryUnary-24       126µs ± 3%     124µs ± 2%  -1.59%  (p=0.035 n=9+10)
GRPCPing/bytes=___8192/rpc=StreamStream-24     126µs ± 3%     124µs ± 1%  -1.32%  (p=0.011 n=10+10)
GRPCPing/bytes=______1/rpc=UnaryUnary-24       124µs ± 4%     123µs ± 3%    ~     (p=0.315 n=10+10)
GRPCPing/bytes=______1/rpc=StreamStream-24    70.3µs ± 3%    70.8µs ± 2%    ~     (p=0.393 n=10+10)
GRPCPing/bytes=____256/rpc=StreamStream-24    74.5µs ± 3%    75.1µs ± 2%    ~     (p=0.105 n=10+10)
GRPCPing/bytes=___1024/rpc=UnaryUnary-24       123µs ± 6%     120µs ± 4%    ~     (p=0.661 n=10+9)
GRPCPing/bytes=___1024/rpc=StreamStream-24    67.4µs ± 8%    67.4µs ± 6%    ~     (p=0.720 n=10+9)
GRPCPing/bytes=___2048/rpc=UnaryUnary-24       133µs ± 5%     133µs ± 4%    ~     (p=0.986 n=10+10)
GRPCPing/bytes=___2048/rpc=StreamStream-24    73.9µs ± 1%    74.6µs ± 2%    ~     (p=0.234 n=8+8)
GRPCPing/bytes=___4096/rpc=UnaryUnary-24       150µs ± 2%     151µs ± 3%    ~     (p=0.182 n=9+10)
GRPCPing/bytes=___4096/rpc=StreamStream-24    97.4µs ±10%    95.3µs ±10%    ~     (p=0.393 n=10+10)
GRPCPing/bytes=___8192/rpc=UnaryUnary-24       175µs ± 1%     176µs ± 2%    ~     (p=0.720 n=9+10)
GRPCPing/bytes=__16384/rpc=UnaryUnary-24       252µs ± 1%     253µs ± 1%    ~     (p=0.315 n=9+10)
GRPCPing/bytes=__16384/rpc=StreamStream-24     190µs ± 1%     189µs ± 2%    ~     (p=0.497 n=9+10)
GRPCPing/bytes=__32768/rpc=UnaryUnary-24       363µs ± 1%     366µs ± 1%    ~     (p=0.079 n=10+9)
GRPCPing/bytes=__32768/rpc=StreamStream-24     305µs ± 3%     305µs ± 1%    ~     (p=0.579 n=10+10)
GRPCPing/bytes=__65536/rpc=UnaryUnary-24       512µs ± 2%     515µs ± 1%    ~     (p=0.095 n=9+10)
GRPCPing/bytes=__65536/rpc=StreamStream-24     449µs ± 1%     452µs ± 1%    ~     (p=0.059 n=9+8)
GRPCPing/bytes=_262144/rpc=UnaryUnary-24      1.48ms ± 3%    1.48ms ± 2%    ~     (p=0.739 n=10+10)
GRPCPing/bytes=_262144/rpc=StreamStream-24    1.42ms ± 1%    1.41ms ± 2%    ~     (p=0.182 n=9+10)
GRPCPing/bytes=1048576/rpc=UnaryUnary-24      5.90ms ± 2%    5.86ms ± 1%    ~     (p=0.278 n=10+9)
GRPCPing/bytes=1048576/rpc=StreamStream-24    5.81ms ± 2%    5.84ms ± 3%    ~     (p=0.631 n=10+10)

name                                        old speed      new speed      delta
GRPCPing/bytes=____256/rpc=UnaryUnary-24    4.44MB/s ± 3%  4.51MB/s ± 2%  +1.58%  (p=0.033 n=9+10)
GRPCPing/bytes=___8192/rpc=StreamStream-24   130MB/s ± 3%   132MB/s ± 1%  +1.32%  (p=0.010 n=10+10)
GRPCPing/bytes=______1/rpc=UnaryUnary-24     386kB/s ± 4%   391kB/s ± 3%    ~     (p=0.378 n=10+10)
GRPCPing/bytes=______1/rpc=StreamStream-24   682kB/s ± 3%   676kB/s ± 2%    ~     (p=0.189 n=10+9)
GRPCPing/bytes=____256/rpc=StreamStream-24  7.52MB/s ± 3%  7.46MB/s ± 2%    ~     (p=0.100 n=10+10)
GRPCPing/bytes=___1024/rpc=UnaryUnary-24    17.1MB/s ± 6%  17.4MB/s ± 4%    ~     (p=0.645 n=10+9)
GRPCPing/bytes=___1024/rpc=StreamStream-24  31.1MB/s ± 8%  31.1MB/s ± 6%    ~     (p=0.720 n=10+9)
GRPCPing/bytes=___2048/rpc=UnaryUnary-24    31.1MB/s ± 5%  31.2MB/s ± 4%    ~     (p=0.986 n=10+10)
GRPCPing/bytes=___2048/rpc=StreamStream-24  56.1MB/s ± 1%  55.6MB/s ± 2%    ~     (p=0.224 n=8+8)
GRPCPing/bytes=___4096/rpc=UnaryUnary-24    55.1MB/s ± 2%  54.6MB/s ± 3%    ~     (p=0.189 n=9+10)
GRPCPing/bytes=___4096/rpc=StreamStream-24  85.1MB/s ±11%  87.0MB/s ±11%    ~     (p=0.393 n=10+10)
GRPCPing/bytes=___8192/rpc=UnaryUnary-24    93.7MB/s ± 1%  93.5MB/s ± 2%    ~     (p=0.720 n=9+10)
GRPCPing/bytes=__16384/rpc=UnaryUnary-24     130MB/s ± 1%   130MB/s ± 1%    ~     (p=0.305 n=9+10)
GRPCPing/bytes=__16384/rpc=StreamStream-24   173MB/s ± 1%   173MB/s ± 2%    ~     (p=0.497 n=9+10)
GRPCPing/bytes=__32768/rpc=UnaryUnary-24     180MB/s ± 1%   179MB/s ± 1%    ~     (p=0.079 n=10+9)
GRPCPing/bytes=__32768/rpc=StreamStream-24   215MB/s ± 2%   215MB/s ± 1%    ~     (p=0.579 n=10+10)
GRPCPing/bytes=__65536/rpc=UnaryUnary-24     256MB/s ± 2%   255MB/s ± 1%    ~     (p=0.095 n=9+10)
GRPCPing/bytes=__65536/rpc=StreamStream-24   292MB/s ± 1%   290MB/s ± 1%    ~     (p=0.059 n=9+8)
GRPCPing/bytes=_262144/rpc=UnaryUnary-24     353MB/s ± 3%   353MB/s ± 2%    ~     (p=0.447 n=10+9)
GRPCPing/bytes=_262144/rpc=StreamStream-24   369MB/s ± 1%   371MB/s ± 2%    ~     (p=0.182 n=9+10)
GRPCPing/bytes=1048576/rpc=UnaryUnary-24     355MB/s ± 2%   358MB/s ± 1%    ~     (p=0.278 n=10+9)
GRPCPing/bytes=1048576/rpc=StreamStream-24   361MB/s ± 2%   359MB/s ± 3%    ~     (p=0.631 n=10+10)

name                                        old alloc/op   new alloc/op   delta
GRPCPing/bytes=______1/rpc=UnaryUnary-24      16.9kB ± 1%    16.9kB ± 3%    ~     (p=0.579 n=10+10)
GRPCPing/bytes=____256/rpc=UnaryUnary-24      19.8kB ± 2%    19.9kB ± 2%    ~     (p=0.755 n=10+10)
GRPCPing/bytes=____256/rpc=StreamStream-24    7.35kB ± 2%    7.43kB ± 2%    ~     (p=0.052 n=10+10)
GRPCPing/bytes=___1024/rpc=UnaryUnary-24      29.8kB ± 2%    29.8kB ± 1%    ~     (p=0.853 n=10+10)
GRPCPing/bytes=___1024/rpc=StreamStream-24    17.7kB ± 1%    17.7kB ± 1%    ~     (p=0.796 n=10+10)
GRPCPing/bytes=___2048/rpc=UnaryUnary-24      43.2kB ± 1%    43.0kB ± 1%    ~     (p=0.218 n=10+10)
GRPCPing/bytes=___2048/rpc=StreamStream-24    31.0kB ± 0%    31.1kB ± 1%    ~     (p=0.278 n=9+10)
GRPCPing/bytes=___4096/rpc=UnaryUnary-24      73.0kB ± 1%    73.2kB ± 1%    ~     (p=0.393 n=10+10)
GRPCPing/bytes=___4096/rpc=StreamStream-24    61.6kB ± 1%    61.7kB ± 0%    ~     (p=0.573 n=10+8)
GRPCPing/bytes=___8192/rpc=UnaryUnary-24       127kB ± 0%     127kB ± 1%    ~     (p=0.393 n=10+10)
GRPCPing/bytes=___8192/rpc=StreamStream-24     118kB ± 1%     118kB ± 0%    ~     (p=0.796 n=10+10)
GRPCPing/bytes=__16384/rpc=UnaryUnary-24       237kB ± 1%     237kB ± 1%    ~     (p=0.579 n=10+10)
GRPCPing/bytes=__16384/rpc=StreamStream-24     227kB ± 1%     227kB ± 1%    ~     (p=0.481 n=10+10)
GRPCPing/bytes=__32768/rpc=UnaryUnary-24       500kB ± 1%     500kB ± 1%    ~     (p=0.912 n=10+10)
GRPCPing/bytes=__32768/rpc=StreamStream-24     492kB ± 0%     492kB ± 0%    ~     (p=0.968 n=9+10)
GRPCPing/bytes=__65536/rpc=UnaryUnary-24       873kB ± 0%     872kB ± 0%    ~     (p=0.780 n=9+10)
GRPCPing/bytes=__65536/rpc=StreamStream-24     868kB ± 0%     868kB ± 0%    ~     (p=1.000 n=9+9)
GRPCPing/bytes=_262144/rpc=UnaryUnary-24      3.50MB ± 0%    3.51MB ± 0%    ~     (p=0.436 n=10+10)
GRPCPing/bytes=_262144/rpc=StreamStream-24    3.49MB ± 0%    3.50MB ± 0%    ~     (p=0.436 n=10+10)
GRPCPing/bytes=1048576/rpc=UnaryUnary-24      13.5MB ± 0%    13.5MB ± 0%    ~     (p=0.515 n=8+10)
GRPCPing/bytes=1048576/rpc=StreamStream-24    13.5MB ± 0%    13.5MB ± 0%    ~     (p=0.549 n=10+9)
GRPCPing/bytes=______1/rpc=StreamStream-24    4.08kB ± 3%    4.18kB ± 3%  +2.28%  (p=0.008 n=9+10)

name                                        old allocs/op  new allocs/op  delta
GRPCPing/bytes=_262144/rpc=UnaryUnary-24         282 ± 4%       286 ± 4%    ~     (p=0.223 n=10+10)
GRPCPing/bytes=_262144/rpc=StreamStream-24       147 ± 3%       149 ± 3%    ~     (p=0.053 n=9+8)
GRPCPing/bytes=1048576/rpc=UnaryUnary-24         510 ± 2%       513 ± 3%    ~     (p=0.656 n=8+9)
GRPCPing/bytes=1048576/rpc=StreamStream-24       370 ± 6%       377 ± 3%    ~     (p=0.168 n=9+9)
GRPCPing/bytes=____256/rpc=UnaryUnary-24         183 ± 0%       184 ± 0%  +0.71%  (p=0.000 n=8+10)
GRPCPing/bytes=______1/rpc=UnaryUnary-24         183 ± 0%       184 ± 0%  +0.77%  (p=0.000 n=10+8)
GRPCPing/bytes=__32768/rpc=UnaryUnary-24         211 ± 0%       213 ± 0%  +0.95%  (p=0.000 n=10+10)
GRPCPing/bytes=__16384/rpc=UnaryUnary-24         195 ± 0%       197 ± 0%  +1.03%  (p=0.000 n=10+10)
GRPCPing/bytes=___8192/rpc=UnaryUnary-24         184 ± 0%       186 ± 0%  +1.09%  (p=0.000 n=10+10)
GRPCPing/bytes=___2048/rpc=UnaryUnary-24         183 ± 0%       185 ± 0%  +1.09%  (p=0.000 n=10+10)
GRPCPing/bytes=___4096/rpc=UnaryUnary-24         183 ± 0%       185 ± 0%  +1.09%  (p=0.000 n=10+10)
GRPCPing/bytes=___1024/rpc=UnaryUnary-24         182 ± 0%       184 ± 0%  +1.10%  (p=0.000 n=10+10)
GRPCPing/bytes=__65536/rpc=UnaryUnary-24         219 ± 0%       221 ± 0%  +1.10%  (p=0.000 n=10+8)
GRPCPing/bytes=__32768/rpc=StreamStream-24      75.0 ± 0%      77.0 ± 0%  +2.67%  (p=0.000 n=10+10)
GRPCPing/bytes=__65536/rpc=StreamStream-24      83.0 ± 0%      85.3 ± 1%  +2.77%  (p=0.000 n=9+10)
GRPCPing/bytes=__16384/rpc=StreamStream-24      57.0 ± 0%      59.0 ± 0%  +3.51%  (p=0.000 n=10+10)
GRPCPing/bytes=___8192/rpc=StreamStream-24      51.0 ± 0%      53.0 ± 0%  +3.92%  (p=0.000 n=10+10)
GRPCPing/bytes=___4096/rpc=StreamStream-24      49.0 ± 0%      51.0 ± 0%  +4.08%  (p=0.000 n=10+10)
GRPCPing/bytes=___2048/rpc=StreamStream-24      48.0 ± 0%      50.0 ± 0%  +4.17%  (p=0.000 n=10+10)
GRPCPing/bytes=______1/rpc=StreamStream-24      47.0 ± 0%      49.0 ± 0%  +4.26%  (p=0.000 n=10+10)
GRPCPing/bytes=____256/rpc=StreamStream-24      47.0 ± 0%      49.0 ± 0%  +4.26%  (p=0.000 n=10+10)
GRPCPing/bytes=___1024/rpc=StreamStream-24      47.0 ± 0%      49.0 ± 0%  +4.26%  (p=0.000 n=10+10)
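For context on how the speed rows above are produced: the MB/s column appears when a benchmark calls `b.SetBytes`. Below is a minimal, self-contained sketch of the size-parameterized pattern behind a benchmark like `BenchmarkGRPCPing`; `echo` and `pingBench` are hypothetical stand-ins for the actual RPC round trip, not the CockroachDB code.

```go
package main

import (
	"fmt"
	"testing"
)

// echo is a placeholder for the RPC round trip being timed.
func echo(p []byte) []byte { return append([]byte(nil), p...) }

// pingBench builds a benchmark for one payload size. SetBytes is what
// makes the testing package report throughput (the MB/s "speed" rows).
func pingBench(size int) func(b *testing.B) {
	payload := make([]byte, size)
	return func(b *testing.B) {
		b.SetBytes(int64(size))
		for i := 0; i < b.N; i++ {
			echo(payload)
		}
	}
}

func main() {
	// testing.Benchmark runs a benchmark outside of "go test".
	for _, size := range []int{1, 256, 8192} {
		res := testing.Benchmark(pingBench(size))
		fmt.Printf("bytes=%d: %v\n", size, res)
	}
}
```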

Epic: None
Release note: None

fqazi and others added 30 commits October 30, 2024 16:16
Previously, if the correct overloads were not found for sequence
builtins, it was possible for the server to panic. This could happen
when rewriting a CREATE TABLE expression with an invalid sequence
builtin call. To address this, this patch updates the sequence logic to
return the error instead of panicking.

Fixes: cockroachdb#133399

Release note (bug fix): Address a panic inside CREATE TABLE AS if
sequence builtin expressions had invalid function overloads.
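The shape of the fix can be sketched as follows: an overload lookup that returns an error for the caller to surface, instead of panicking. This is a hedged illustration only; `resolveSeqOverload` and its signature are hypothetical, not the actual CockroachDB sequence-builtin code.

```go
package main

import (
	"errors"
	"fmt"
)

// resolveSeqOverload is a hypothetical stand-in for the sequence-builtin
// overload lookup. When no overload matches, it returns an error rather
// than panicking, so the server can report it to the client.
func resolveSeqOverload(name string, overloads map[string]func() int64) (func() int64, error) {
	fn, ok := overloads[name]
	if !ok {
		return nil, errors.New("invalid function overload for sequence builtin: " + name)
	}
	return fn, nil
}

func main() {
	overloads := map[string]func() int64{"nextval": func() int64 { return 1 }}
	// An invalid overload now yields an error, not a server panic.
	if _, err := resolveSeqOverload("nextval(int)", overloads); err != nil {
		fmt.Println(err)
	}
}
```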
This commit fixes type schema corruption in the vectorized engine in an
edge case. In particular, consider the following circumstances:
- during the physical planning, when creating a new stage of processors,
we often reuse the same type slice (stored in
`InputSyncSpec.ColumnTypes`) that we get from the previous stage. In
other words, we might have memory aliasing, but only on the gateway
node because the remote nodes get their specs deserialized and each has
its own memory allocation.
- throughout the vectorized operator planning, as of
85fd4fb, for each newly projected
vector we append the corresponding type to the type slice we have in
scope. We also capture intermediate state of the type slice by some
operators (e.g. `BatchSchemaSubsetEnforcer`).
- as expected, when appending a type to the slice, if there is enough
capacity, we reuse it, meaning that we often append to the slice that
came to us via `InputSyncSpec.ColumnTypes`.
- now, if we have two stages of processors that happened to share the
same underlying type slice with some free capacity AND we needed to
append vectors for each stage, then we might corrupt the type schema
captured by an operator for the earlier stage when performing vectorized
planning for the later stage.

The bug is effectively the same as the comment deleted by
85fd4fb outlined:
```
// As an example, consider the following scenario in the context of
// planFilterExpr method:
// 1. r.ColumnTypes={types.Bool} with len=1 and cap=4
// 2. planSelectionOperators adds another types.Int column, so
//    filterColumnTypes={types.Bool, types.Int} with len=2 and cap=4
//    Crucially, it uses exact same underlying array as r.ColumnTypes
//    uses.
// 3. we project out second column, so r.ColumnTypes={types.Bool}
// 4. later, we add another types.Float column, so
//    r.ColumnTypes={types.Bool, types.Float}, but there is enough
//    capacity in the array, so we simply overwrite the second slot
//    with the new type which corrupts filterColumnTypes to become
//    {types.Bool, types.Float}, and we can get into a runtime type
//    mismatch situation.
```
The only differences are:
- aliasing of the type slice occurs via `InputSyncSpec.ColumnTypes`,
which is often used as the starting point for populating
`NewColOperatorResult.ColumnTypes`, which in turn is used throughout
the vectorized operator planning;
- columns are "projected out" by sharing the type schema between two
stages of DistSQL processors.

This commit addresses this issue by capping the slice to its length
right before we get into the vectorized planning. This will make it so
that if we need to append a type, then we'll make a fresh allocation,
and any possible memory aliasing with a different stage of processors
will be gone.
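
The capping trick relies on Go's full slice expression, `s[:len(s):len(s)]`, which clamps capacity to length so any subsequent append must allocate fresh storage. A self-contained sketch of both the aliasing bug and the fix (the string "types" here are illustrative, not the actual `*types.T` values):

```go
package main

import "fmt"

// capToLen returns s with its capacity clamped to its length, so any
// append on the result allocates a new backing array.
func capToLen(s []string) []string {
	return s[:len(s):len(s)]
}

func main() {
	// A shared type slice with spare capacity, as when two processor
	// stages alias the same InputSyncSpec.ColumnTypes on the gateway.
	backing := make([]string, 1, 4)
	backing[0] = "Bool"

	// The bug: both appends write into backing's spare slot, so the
	// second append silently corrupts the first stage's schema.
	stage1 := append(backing, "Int")
	stage2 := append(backing, "Float")
	fmt.Println(stage1[1], stage2[1]) // both are "Float"

	// The fix: capping forces each append onto its own array.
	s1 := append(capToLen(backing), "Int")
	s2 := append(capToLen(backing), "Float")
	fmt.Println(s1[1], s2[1]) // "Int" "Float"
}
```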

I haven't quite figured out the exact conditions that are needed for
this bug to occur, but my intuition says that it should be quite rare in
practice (otherwise we'd have seen this much sooner given that the
offending commit was merged more than a year ago and was backported to
older branches).

Release note (bug fix): Previously, CockroachDB could encounter an
internal error of the form `interface conversion: coldata.Column is` in
an edge case and this is now fixed. The bug is present in versions
22.2.13+, 23.1.9+, 23.2+.
This commit fixes type schema corruption in the vectorized engine in an
edge case. In particular, consider the following circumstances:
- during the physical planning, when creating a new stage of processors,
we often reuse the same type slice (stored in
`InputSyncSpec.ColumnTypes`) that we get from the previous stage. In
other words, we might have memory aliasing, but only on the gateway
node because the remote nodes get their specs deserialized and each has
its own memory allocation.
- throughout the vectorized operator planning, as of
85fd4fb, for each newly projected
vector we append the corresponding type to the type slice we have in
scope. We also capture intermediate state of the type slice by some
operators (e.g. `BatchSchemaSubsetEnforcer`).
- as expected, when appending a type to the slice, if there is enough
capacity, we reuse it, meaning that we often append to the slice that
came to us via `InputSyncSpec.ColumnTypes`.
- now, if we have two stages of processors that happened to share the
same underlying type slice with some free capacity AND we needed to
append vectors for each stage, then we might corrupt the type schema
captured by an operator for the earlier stage when performing vectorized
planning for the later stage.

The bug is effectively the same as the comment deleted by
85fd4fb outlined:
```
// As an example, consider the following scenario in the context of
// planFilterExpr method:
// 1. r.ColumnTypes={types.Bool} with len=1 and cap=4
// 2. planSelectionOperators adds another types.Int column, so
//    filterColumnTypes={types.Bool, types.Int} with len=2 and cap=4
//    Crucially, it uses exact same underlying array as r.ColumnTypes
//    uses.
// 3. we project out second column, so r.ColumnTypes={types.Bool}
// 4. later, we add another types.Float column, so
//    r.ColumnTypes={types.Bool, types.Float}, but there is enough
//    capacity in the array, so we simply overwrite the second slot
//    with the new type which corrupts filterColumnTypes to become
//    {types.Bool, types.Float}, and we can get into a runtime type
//    mismatch situation.
```
The only differences are:
- aliasing of the type slice occurs via the
`InputSyncSpec.ColumnTypes` that is often used as the starting points
for populating `NewColOperatorResult.ColumnTypes` which is used
throughout the vectorized operator planning
- columns are "projected out" by sharing the type schema between two
stages of DistSQL processors.

This commit addresses this issue by capping the slice to its length
right before we get into the vectorized planning. This will make it so
that if we need to append a type, then we'll make a fresh allocation,
and any possible memory aliasing with a different stage of processors
will be gone.

I haven't quite figured out the exact conditions that are needed for
this bug to occur, but my intuition says that it should be quite rare in
practice (otherwise we'd have seen this much sooner given that the
offending commit was merged more than a year ago and was backported to
older branches).

Release note (bug fix): Previously, CockroachDB could encounter an
internal error of the form `interface conversion: coldata.Column is` in
an edge case and this is now fixed. The bug is present in versions
22.2.13+, 23.1.9+, 23.2+.
This commit fixes type schema corruption in the vectorized engine in an
edge case. In particular, consider the following circumstances:
- during the physical planning, when creating a new stage of processors,
we often reuse the same type slice (stored in
`InputSyncSpec.ColumnTypes`) that we get from the previous stage. In
other words, we might have memory aliasing, but only on the gateway
node because the remote nodes get their specs deserialized and each has
its own memory allocation.
- throughout the vectorized operator planning, as of
85fd4fb, for each newly projected
vector we append the corresponding type to the type slice we have in
scope. We also capture intermediate state of the type slice by some
operators (e.g. `BatchSchemaSubsetEnforcer`).
- as expected, when appending a type to the slice, if there is enough
capacity, we reuse it, meaning that we often append to the slice that
came to us via `InputSyncSpec.ColumnTypes`.
- now, if we have two stages of processors that happened to share the
same underlying type slice with some free capacity AND we needed to
append vectors for each stage, then we might corrupt the type schema
captured by an operator for the earlier stage when performing vectorized
planning for the later stage.

The bug is effectively the same as the comment deleted by
85fd4fb outlined:
```
// As an example, consider the following scenario in the context of
// planFilterExpr method:
// 1. r.ColumnTypes={types.Bool} with len=1 and cap=4
// 2. planSelectionOperators adds another types.Int column, so
//    filterColumnTypes={types.Bool, types.Int} with len=2 and cap=4
//    Crucially, it uses exact same underlying array as r.ColumnTypes
//    uses.
// 3. we project out second column, so r.ColumnTypes={types.Bool}
// 4. later, we add another types.Float column, so
//    r.ColumnTypes={types.Bool, types.Float}, but there is enough
//    capacity in the array, so we simply overwrite the second slot
//    with the new type which corrupts filterColumnTypes to become
//    {types.Bool, types.Float}, and we can get into a runtime type
//    mismatch situation.
```
The only differences are:
- aliasing of the type slice occurs via the
`InputSyncSpec.ColumnTypes` that is often used as the starting points
for populating `NewColOperatorResult.ColumnTypes` which is used
throughout the vectorized operator planning
- columns are "projected out" by sharing the type schema between two
stages of DistSQL processors.

This commit addresses this issue by capping the slice to its length
right before we get into the vectorized planning. This will make it so
that if we need to append a type, then we'll make a fresh allocation,
and any possible memory aliasing with a different stage of processors
will be gone.

I haven't quite figured out the exact conditions that are needed for
this bug to occur, but my intuition says that it should be quite rare in
practice (otherwise we'd have seen this much sooner given that the
offending commit was merged more than a year ago and was backported to
older branches).

Release note (bug fix): Previously, CockroachDB could encounter an
internal error of the form `interface conversion: coldata.Column is` in
an edge case and this is now fixed. The bug is present in versions
22.2.13+, 23.1.9+, 23.2+.
This commit fixes type schema corruption in the vectorized engine in an
edge case. In particular, consider the following circumstances:
- during the physical planning, when creating a new stage of processors,
we often reuse the same type slice (stored in
`InputSyncSpec.ColumnTypes`) that we get from the previous stage. In
other words, we might have memory aliasing, but only on the gateway
node because the remote nodes get their specs deserialized and each has
its own memory allocation.
- throughout the vectorized operator planning, as of
85fd4fb, for each newly projected
vector we append the corresponding type to the type slice we have in
scope. We also capture intermediate state of the type slice by some
operators (e.g. `BatchSchemaSubsetEnforcer`).
- as expected, when appending a type to the slice, if there is enough
capacity, we reuse it, meaning that we often append to the slice that
came to us via `InputSyncSpec.ColumnTypes`.
- now, if we have two stages of processors that happened to share the
same underlying type slice with some free capacity AND we needed to
append vectors for each stage, then we might corrupt the type schema
captured by an operator for the earlier stage when performing vectorized
planning for the later stage.

The bug is effectively the same as the comment deleted by
85fd4fb outlined:
```
// As an example, consider the following scenario in the context of
// planFilterExpr method:
// 1. r.ColumnTypes={types.Bool} with len=1 and cap=4
// 2. planSelectionOperators adds another types.Int column, so
//    filterColumnTypes={types.Bool, types.Int} with len=2 and cap=4
//    Crucially, it uses exact same underlying array as r.ColumnTypes
//    uses.
// 3. we project out second column, so r.ColumnTypes={types.Bool}
// 4. later, we add another types.Float column, so
//    r.ColumnTypes={types.Bool, types.Float}, but there is enough
//    capacity in the array, so we simply overwrite the second slot
//    with the new type which corrupts filterColumnTypes to become
//    {types.Bool, types.Float}, and we can get into a runtime type
//    mismatch situation.
```
The only differences are:
- aliasing of the type slice occurs via the
`InputSyncSpec.ColumnTypes` that is often used as the starting points
for populating `NewColOperatorResult.ColumnTypes` which is used
throughout the vectorized operator planning
- columns are "projected out" by sharing the type schema between two
stages of DistSQL processors.

This commit addresses this issue by capping the slice to its length
right before we get into the vectorized planning. This will make it so
that if we need to append a type, then we'll make a fresh allocation,
and any possible memory aliasing with a different stage of processors
will be gone.

I haven't quite figured out the exact conditions that are needed for
this bug to occur, but my intuition says that it should be quite rare in
practice (otherwise we'd have seen this much sooner given that the
offending commit was merged more than a year ago and was backported to
older branches).

Release note (bug fix): Previously, CockroachDB could encounter an
internal error of the form `interface conversion: coldata.Column is` in
an edge case and this is now fixed. The bug is present in versions
22.2.13+, 23.1.9+, 23.2+.
This commit fixes type schema corruption in the vectorized engine in an
edge case. In particular, consider the following circumstances:
- during the physical planning, when creating a new stage of processors,
we often reuse the same type slice (stored in
`InputSyncSpec.ColumnTypes`) that we get from the previous stage. In
other words, we might have memory aliasing, but only on the gateway
node because the remote nodes get their specs deserialized and each has
its own memory allocation.
- throughout the vectorized operator planning, as of
85fd4fb, for each newly projected
vector we append the corresponding type to the type slice we have in
scope. We also capture intermediate state of the type slice by some
operators (e.g. `BatchSchemaSubsetEnforcer`).
- as expected, when appending a type to the slice, if there is enough
capacity, we reuse it, meaning that we often append to the slice that
came to us via `InputSyncSpec.ColumnTypes`.
- now, if we have two stages of processors that happened to share the
same underlying type slice with some free capacity AND we needed to
append vectors for each stage, then we might corrupt the type schema
captured by an operator for the earlier stage when performing vectorized
planning for the later stage.

The bug is effectively the same as the comment deleted by
85fd4fb outlined:
```
// As an example, consider the following scenario in the context of
// planFilterExpr method:
// 1. r.ColumnTypes={types.Bool} with len=1 and cap=4
// 2. planSelectionOperators adds another types.Int column, so
//    filterColumnTypes={types.Bool, types.Int} with len=2 and cap=4
//    Crucially, it uses exact same underlying array as r.ColumnTypes
//    uses.
// 3. we project out second column, so r.ColumnTypes={types.Bool}
// 4. later, we add another types.Float column, so
//    r.ColumnTypes={types.Bool, types.Float}, but there is enough
//    capacity in the array, so we simply overwrite the second slot
//    with the new type which corrupts filterColumnTypes to become
//    {types.Bool, types.Float}, and we can get into a runtime type
//    mismatch situation.
```
The only differences are:
- aliasing of the type slice occurs via the
`InputSyncSpec.ColumnTypes` that is often used as the starting points
for populating `NewColOperatorResult.ColumnTypes` which is used
throughout the vectorized operator planning
- columns are "projected out" by sharing the type schema between two
stages of DistSQL processors.

This commit addresses this issue by capping the slice to its length
right before we get into the vectorized planning. This will make it so
that if we need to append a type, then we'll make a fresh allocation,
and any possible memory aliasing with a different stage of processors
will be gone.

I haven't quite figured out the exact conditions that are needed for
this bug to occur, but my intuition says that it should be quite rare in
practice (otherwise we'd have seen this much sooner given that the
offending commit was merged more than a year ago and was backported to
older branches).

Release note (bug fix): Previously, CockroachDB could encounter an
internal error of the form `interface conversion: coldata.Column is` in
an edge case and this is now fixed. The bug is present in versions
22.2.13+, 23.1.9+, 23.2+.
This commit fixes type schema corruption in the vectorized engine in an
edge case. In particular, consider the following circumstances:
- during the physical planning, when creating a new stage of processors,
we often reuse the same type slice (stored in
`InputSyncSpec.ColumnTypes`) that we get from the previous stage. In
other words, we might have memory aliasing, but only on the gateway
node because the remote nodes get their specs deserialized and each has
its own memory allocation.
- throughout the vectorized operator planning, as of
85fd4fb, for each newly projected
vector we append the corresponding type to the type slice we have in
scope. We also capture intermediate state of the type slice by some
operators (e.g. `BatchSchemaSubsetEnforcer`).
- as expected, when appending a type to the slice, if there is enough
capacity, we reuse it, meaning that we often append to the slice that
came to us via `InputSyncSpec.ColumnTypes`.
- now, if we have two stages of processors that happened to share the
same underlying type slice with some free capacity AND we needed to
append vectors for each stage, then we might corrupt the type schema
captured by an operator for the earlier stage when performing vectorized
planning for the later stage.

The bug is effectively the same as the comment deleted by
85fd4fb outlined:
```
// As an example, consider the following scenario in the context of
// planFilterExpr method:
// 1. r.ColumnTypes={types.Bool} with len=1 and cap=4
// 2. planSelectionOperators adds another types.Int column, so
//    filterColumnTypes={types.Bool, types.Int} with len=2 and cap=4
//    Crucially, it uses exact same underlying array as r.ColumnTypes
//    uses.
// 3. we project out second column, so r.ColumnTypes={types.Bool}
// 4. later, we add another types.Float column, so
//    r.ColumnTypes={types.Bool, types.Float}, but there is enough
//    capacity in the array, so we simply overwrite the second slot
//    with the new type which corrupts filterColumnTypes to become
//    {types.Bool, types.Float}, and we can get into a runtime type
//    mismatch situation.
```
The only differences are:
- aliasing of the type slice occurs via the
`InputSyncSpec.ColumnTypes` that is often used as the starting points
for populating `NewColOperatorResult.ColumnTypes` which is used
throughout the vectorized operator planning
- columns are "projected out" by sharing the type schema between two
stages of DistSQL processors.

This commit addresses this issue by capping the slice to its length
right before we get into the vectorized planning. This will make it so
that if we need to append a type, then we'll make a fresh allocation,
and any possible memory aliasing with a different stage of processors
will be gone.

I haven't quite figured out the exact conditions that are needed for
this bug to occur, but my intuition says that it should be quite rare in
practice (otherwise we'd have seen this much sooner given that the
offending commit was merged more than a year ago and was backported to
older branches).

Release note (bug fix): Previously, CockroachDB could encounter an
internal error of the form `interface conversion: coldata.Column is` in
an edge case and this is now fixed. The bug is present in versions
22.2.13+, 23.1.9+, 23.2+.
Currently, roachtest configures the CRDB log to write to a file so that
any logs emitted during a roachtest run are captured; these usually come
from a shared test util that uses the CRDB log. By default, the file
sink on the CRDB logger logs the program arguments, which can leak
sensitive information.

This PR introduces a log redirect that uses the CRDB log interceptor
functionality instead of using a file sink. This way we can avoid logging the
program arguments.

Epic: None
Release note: None
This replaces the initiation of the file sink based CRDB log with the new
interceptor log redirect. It will log to a file in the artifacts directory.

Epic: None
Release note: None
…ort-release-23.1.29-rc-133241

release-23.1.29-rc: opt: relax max stack size in test for stack overflow
When testing the license throttling behavior, it's helpful to easily
override the telemetry ping timestamp. This timestamp will only be
accepted if it's smaller than the current recorded timestamp, which
reduces the chance that this ability can be used to extend the grace
period.

Epic: CRDB-40209
Release note: None
Instead of asking a user to "renew" we tell them to "add" a new
license, which matches other language we've used.

Epic: CRDB-40853
Release note: None
Prior to 24.3 release, we added a notification in the DB Console to
alert customers to the licensing changes and give them time to
prepare. Now that they'll be rolling out, the notice is removed since
it's no longer in the future.

Epic: CRDB-40853
Release note: None
Previously, the redaction logic for `Sensitive` settings in
the diagnostics payload was conditional on the value of the
`"server.redact_sensitive_settings.enabled"` cluster setting.

This commit modifies the behavior of `RedactedValue` used to render
modified cluster settings by the `diagnostics` package to always
fully redact the values of string settings and any sensitive or non-
reportable settings.

Because the `MaskedSetting` struct is now in use by code in the `SHOW
CLUSTER SETTING` code path, we no longer rely on it for redaction
behavior of string settings.

Note: This is a backport of a PR from `master` and this branch
does not contain the concept of `sensitive` settings so only `non-
reportable` ones are included.

Resolves: CRDB-43457
Epic: None

Release note (security update): all cluster settings that accept
strings are now fully redacted when transmitted as part of our
diagnostics telemetry. This payload includes a record of modified
cluster settings and their values when they are not strings.
Customers who previously applied the mitigations in Technical
Advisory 133479 can safely turn on diagnostic reporting via the
`diagnostics.reporting.enabled` cluster setting without leaking
sensitive cluster settings values.
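The redaction policy described above can be sketched as follows. This is a simplified illustration only: the `Setting` struct, its fields, and `redactedValue` are hypothetical stand-ins, not CockroachDB's actual settings types.

```go
package main

import "fmt"

// Setting is a hypothetical, simplified stand-in for a cluster
// setting's metadata.
type Setting struct {
	Name       string
	Value      string
	IsString   bool // the setting accepts arbitrary strings
	Reportable bool // safe to report in diagnostics
}

// redactedValue mirrors the policy described in the commit message:
// string settings and non-reportable settings are always fully
// redacted in the diagnostics payload, unconditionally, rather than
// depending on a redaction cluster setting.
func redactedValue(s Setting) string {
	if s.IsString || !s.Reportable {
		return "<redacted>"
	}
	return s.Value
}

func main() {
	// A string setting is redacted even though it is reportable.
	fmt.Println(redactedValue(Setting{
		Name: "some.string.setting", Value: "secret",
		IsString: true, Reportable: true,
	})) // <redacted>

	// A non-string, reportable setting keeps its value.
	fmt.Println(redactedValue(Setting{
		Name: "some.enum.setting", Value: "auto",
		IsString: false, Reportable: true,
	})) // auto
}
```

The key point is that redaction no longer branches on any runtime cluster setting, so a misconfigured cluster cannot opt back into leaking string values through telemetry.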
Update pkg/testutils/release/cockroach_releases.yaml with recent values.

Epic: None
Release note: None
Release justification: test-only updates
Previously, the test was fooling itself: the regex for the `KV gRPC
calls` line was incorrect, so it never matched, and we ended up with an
unset counter (which happened to pass the test); this is now fixed.

Release note: None
mohini-crl and others added 28 commits January 5, 2025 10:17
…3-20241230125146

Revert "[Replicated] release-23.1: kvstreamer: fix pathological behavior in InOrder mode"
…02314

[Replicated] release-23.1: kvstreamer: fix pathological behavior in InOrder mode
…02326

[Replicated] release-23.1: Update pkg/testutils/release/cockroach_releases.yaml
…0-20250105102326

Revert "[Replicated] release-23.1: Update pkg/testutils/release/cockroach_releases.yaml"
…te-pr-135360-20250105102326

Revert "Revert "[Replicated] release-23.1: Update pkg/testutils/release/cockroach_releases.yaml""
…127-replicate-pr-135360-20250105102326

Revert "Revert "Revert "[Replicated] release-23.1: Update pkg/testutils/release/cockroach_releases.yaml"""
…/testutils/release/cockroach_releases.yaml""""
…130-revert-127-replicate-pr-135360-20250105102326

Revert "Revert "Revert "Revert "[Replicated] release-23.1: Update pkg/testutils/release/cockroach_releases.yaml""""
…date pkg/testutils/release/cockroach_releases.yaml"""""
…131-revert-130-revert-127-replicate-pr-135360-20250105102326

Revert "Revert "Revert "Revert "Revert "[Replicated] release-23.1: Update pkg/testutils/release/cockroach_releases.yaml"""""
…02335

[Replicated] release-23.1: pgwire,authccl: use pgx for TestAuthenticationAndHBARules
…02346

[Replicated] release-23.1: colexecerror: improve the catcher due to a recent regression
…33412

[Replicated] release-23.1: kvstreamer: fix pathological behavior in InOrder mode
…34015

[Replicated] release-23.1: kvstreamer: fix pathological behavior in InOrder mode
…35713

[Replicated] release-23.1: kvstreamer: fix pathological behavior in InOrder mode
…42940

[Replicated] release-23.1: kvstreamer: fix pathological behavior in InOrder mode
…44136

[Replicated] release-23.1: kvstreamer: fix pathological behavior in InOrder mode
…95913

[Replicated] release-23.1: kvstreamer: fix pathological behavior in InOrder mode
A few former subpackages are now proper modules, which
requires some gazelle/bzl wrangling.

This caused a few spurious diffs where unrelated lines
got reordered. I didn't do this manually.
…33443

[Replicated] release-23.1: colexecerror: improve the catcher due to a recent regression
…33432

[Replicated] release-23.1: pgwire,authccl: use pgx for TestAuthenticationAndHBARules