
fix(cogs): adjust cogs data based on retention#7855

Open
volokluev wants to merge 2 commits into master from volo/cogs_retention_days

Conversation

@volokluev
Member

Depending on retention days, a trace item is more expensive. The default retention is 30 days; a 90-day retention for an item means we are storing 3 times the payload size.

@volokluev volokluev requested a review from a team as a code owner March 31, 2026 18:23
@onewland
Contributor

Is consumer compute cost based off this number? Because it shouldn't be affected.

// Depending on retention days, a trace item is more expensive.
// The default retention is 30 days, a 90-day retention for an
// item would effectively mean that we are storing 3 times the payload size
let retention_days_multiplier = eap_item.retention_days / 30;

Is `/` with ints truncating (integer) division in Rust?

If we ever had a 14-day or 7-day retention, that would be a problem.
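To make the concern concrete: `/` on Rust integers does truncate toward zero, so any retention shorter than 30 days would yield a multiplier of 0 and the item's COGS would not be recorded at all. A minimal sketch of the behavior, with a hypothetical `retention_multiplier` helper (not from the PR) that clamps the result to at least 1:

```rust
// Default retention used as the baseline for the COGS multiplier.
const DEFAULT_RETENTION_DAYS: u64 = 30;

// Hypothetical helper (illustration only): clamp the multiplier so
// retentions shorter than the default never zero out recorded COGS.
fn retention_multiplier(retention_days: u64) -> u64 {
    (retention_days / DEFAULT_RETENTION_DAYS).max(1)
}

fn main() {
    // Integer division truncates toward zero in Rust.
    assert_eq!(90_u64 / 30, 3); // 90-day retention: 3x the payload
    assert_eq!(14_u64 / 30, 0); // 14-day retention would count as free

    // The clamped helper avoids the zero-multiplier case.
    assert_eq!(retention_multiplier(14), 1);
    assert_eq!(retention_multiplier(90), 3);
}
```

An alternative would be floating-point division (`retention_days as f64 / 30.0`) if fractional multipliers are acceptable for sub-default retentions.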

@xurui-c
Member

xurui-c commented Mar 31, 2026

@onewland no, query usage ratio is applied to compute

@onewland
Contributor

onewland commented Apr 1, 2026

> @onewland no, query usage ratio is applied to compute

I think we're talking about two different things.

The consumers use compute (e.g. 32 or 64 pods or whatever) and ClickHouse uses compute to execute queries. For both, COGS should do some kind of division across item type.

For the former, retention_days doesn't affect anything. For the latter, retention_days does probably matter (though it probably has less of an effect on compute than storage)



3 participants