
Conversation

@zhangyue19921010 (Contributor):

Close #5186

@github-actions github-actions bot added the enhancement New feature or request label Nov 13, 2025

codecov-commenter commented Nov 13, 2025

Codecov Report

❌ Patch coverage is 96.82540% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.23%. Comparing base (f084677) to head (18eea6d).
⚠️ Report is 14 commits behind head on main.

Files with missing lines Patch % Lines
rust/lance/src/dataset/optimize.rs 96.82% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #5233      +/-   ##
==========================================
+ Coverage   82.05%   82.23%   +0.18%     
==========================================
  Files         342      344       +2     
  Lines      141516   144881    +3365     
  Branches   141516   144881    +3365     
==========================================
+ Hits       116115   119149    +3034     
- Misses      21561    21800     +239     
- Partials     3840     3932      +92     
Flag Coverage Δ
unittests 82.23% <96.82%> (+0.18%) ⬆️

Flags with carried forward coverage won't be shown.


@westonpace (Member) left a comment:

A few (mostly minor) suggestions

Comment on lines 214 to 229
// get all fragments by default
fn get_fragments(&self, dataset: &Dataset, _options: &CompactionOptions) -> Vec<FileFragment> {
    // get_fragments should be returning fragments in sorted order (by id)
    // and fragment ids should be unique
    dataset.get_fragments()
}

// no filter by default
async fn filter_fragments(
    &self,
    _dataset: &Dataset,
    fragments: Vec<FileFragment>,
    _options: &CompactionOptions,
) -> Result<Vec<FileFragment>> {
    Ok(fragments)
}
@westonpace (Member):

Do these really need to be trait methods? I think we can probably leave them out and just let individual implementations use them if they want to. It will keep the trait simpler.
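The slimmer trait could look roughly like the following. This is a self-contained sketch with hypothetical stand-in types (the real lance `Dataset`, `CompactionOptions`, and `CompactionPlan` are richer, and the real `plan` is async): only `plan` remains required, and each implementation fetches and filters fragments itself.

```rust
// Hypothetical stand-in types for illustration only; the real lance types
// carry far more state than this.
struct Dataset {
    fragment_ids: Vec<u64>,
}
struct CompactionOptions;
struct CompactionPlan {
    fragment_ids: Vec<u64>,
}

// With the helper methods dropped, the trait surface is just `plan`.
trait CompactionPlanner {
    fn plan(&self, dataset: &Dataset, options: &CompactionOptions) -> CompactionPlan;
}

struct DefaultPlanner;

impl CompactionPlanner for DefaultPlanner {
    fn plan(&self, dataset: &Dataset, _options: &CompactionOptions) -> CompactionPlan {
        // The implementation gathers and sorts fragments itself instead of
        // relying on default trait methods.
        let mut ids = dataset.fragment_ids.clone();
        ids.sort_unstable();
        CompactionPlan { fragment_ids: ids }
    }
}
```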

@zhangyue19921010 (Author):

removed.

Ok(fragments)
}

async fn plan(&self, dataset: &Dataset, options: &CompactionOptions) -> Result<CompactionPlan>;
@westonpace (Member):

Let's document this method.

@zhangyue19921010 (Author):

added docs

Ok(fragments)
}

async fn plan(&self, dataset: &Dataset, options: &CompactionOptions) -> Result<CompactionPlan>;
@westonpace (Member):

Will CompactionOptions be flexible enough for all possible strategies? Should we maybe accept options as a JSON string or a Map<String, String>? This way different strategies can expose their own custom options. That would leave the API a little less defined but it would be more flexible.

@wjones127 (Contributor):

Do we even need to take CompactionOptions? Maybe it should be an argument to the constructor of the individual structs. That way each could have their own arguments but also be strongly typed.

@zhangyue19921010 (Author):

> Will CompactionOptions be flexible enough for all possible strategies? Should we maybe accept options as a JSON string or a Map<String, String>? This way different strategies can expose their own custom options. That would leave the API a little less defined but it would be more flexible.

Hi @westonpace, thanks a lot for your review. Added a Map<String, String> for flexibility.

> Do we even need to take CompactionOptions? Maybe it should be an argument to the constructor of the individual structs. That way each could have their own arguments but also be strongly typed.

Hi @wjones127, thanks a lot for your review.

I have tried several ways to eliminate the CompactionOptions parameter in the plan method, but none of them are perfect :( The main tension is that after users build a planner and start planning compactions with it, they may dynamically adjust the options under certain conditions, such as

https://github.com/lancedb/lance/blob/254a8217ac26666585983aa7ec8c4234f4c3f99f/rust/lance/src/dataset/optimize.rs#L225

If the options are passed in when building the planner, then any subsequent modification to the options must also be visible inside the planner. That would require Arc + Mutex; a plain clone would not work.

By contrast, it is simpler and more flexible to pass in the desired options each time the plan method is called.
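A minimal sketch of how string-keyed options passed per call could be consumed by a strategy. The key name (`target_rows_per_fragment`) and the fallback value below are made up for illustration, not the actual lance option names:

```rust
use std::collections::HashMap;

// Sketch: strategy-specific options carried as string key/value pairs, so
// each planner implementation can define its own keys.
// The key name and default here are hypothetical.
fn parse_target_rows(opts: &HashMap<String, String>) -> u64 {
    opts.get("target_rows_per_fragment")
        .and_then(|v| v.parse().ok())
        .unwrap_or(1_048_576) // fall back to a default when absent or invalid
}
```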

@wjones127 (Contributor):

> If the options are passed in when building the planner, then any subsequent modification to the options must also be visible inside the planner. That would require Arc + Mutex; a plain clone would not work.

I don't understand this. The logic of validate() can live in the planner and be internal.

If I were to rewrite compact_files, I would do:

pub async fn compact_files(
    dataset: &mut Dataset,
    mut options: CompactionOptions,
    remap_options: Option<Arc<dyn IndexRemapperOptions>>, // These will be deprecated later
) -> Result<CompactionMetrics> {
    info!(target: TRACE_DATASET_EVENTS, event=DATASET_COMPACTING_EVENT, uri = &dataset.uri);
    // .validate() now happens inside of `from_options`
    let planner = DefaultCompactionPlanner::from_options(options);

    compact_files_with_planner(dataset, &planner, remap_options).await
}
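The `from_options` idea can be sketched in a self-contained way as follows. The types and field are stand-ins (the real `CompactionOptions` has more fields, and the real validation lives elsewhere); the point is that validation and normalization happen once, at construction:

```rust
#[derive(Clone)]
struct CompactionOptions {
    // Illustrative field; the real struct has many more knobs.
    target_rows_per_fragment: u64,
}

struct DefaultCompactionPlanner {
    options: CompactionOptions,
}

impl DefaultCompactionPlanner {
    // Validation/normalization happens once at construction, so callers can
    // never hold a planner built from inconsistent options.
    fn from_options(mut options: CompactionOptions) -> Self {
        if options.target_rows_per_fragment == 0 {
            // Normalize instead of erroring (value is a hypothetical default).
            options.target_rows_per_fragment = 1_048_576;
        }
        Self { options }
    }
}
```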

/// Compacts the files in the dataset without reordering them.
///
/// This does a few things:
/// By default, his does a few things:
@westonpace (Member):

Suggested change:
- /// By default, his does a few things:
+ /// By default, this does a few things:

@zhangyue19921010 (Author):

changed.

pub async fn compact_files(
    dataset: &mut Dataset,
    options: CompactionOptions,
    remap_options: Option<Arc<dyn IndexRemapperOptions>>, // These will be deprecated later
@westonpace (Member):

👍

@zhangyue19921010 (Author):

Hi @westonpace and @wjones127, thanks a lot for your review. All comments are addressed. PTAL :)

@wjones127 (Contributor) left a comment:

Sorry for the delay, I accidentally left my comments pending. I'm still not sure about the design. It seems like it could be simplified further.

Comment on lines +214 to +219
// get all fragments by default
fn get_fragments(&self, dataset: &Dataset, _options: &CompactionOptions) -> Vec<FileFragment> {
    // get_fragments should be returning fragments in sorted order (by id)
    // and fragment ids should be unique
    dataset.get_fragments()
}
@wjones127 (Contributor):

Was already commented on, but do we need this? It seems like individual implementations can just call dataset.get_fragments() and then do whatever filtering they would like.



Labels

enhancement New feature or request

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Strategized Compaction Plan

4 participants