Merged

84 commits
9657b75
feat: support array_append (#1072)
NoeB Nov 13, 2024
c32bf0c
chore: Simplify CometShuffleMemoryAllocator to use Spark unified memo…
viirya Nov 14, 2024
f3da844
docs: Update benchmarking.md (#1085)
rluvaton-flarion Nov 14, 2024
2c832b4
feat: Require offHeap memory to be enabled (always use unified memory…
andygrove Nov 14, 2024
7cec285
test: Restore one test in CometExecSuite by adding COMET_SHUFFLE_MODE…
viirya Nov 15, 2024
10ef62a
Add changelog for 0.4.0 (#1089)
andygrove Nov 15, 2024
0c9a403
chore: Prepare for 0.5.0 development (#1090)
andygrove Nov 15, 2024
406ffef
build: Skip installation of spark-integration and fuzz testing modul…
parthchandra Nov 15, 2024
bfd7054
Add hint for finding the GPG key to use when publishing to maven (#1093)
andygrove Nov 15, 2024
59da6ce
docs: Update documentation for 0.4.0 release (#1096)
andygrove Nov 18, 2024
ca3a529
fix: Unsigned type related bugs (#1095)
kazuyukitanimura Nov 19, 2024
b64c13d
chore: Include first ScanExec batch in metrics (#1105)
andygrove Nov 20, 2024
19dd58d
chore: Improve CometScan metrics (#1100)
andygrove Nov 20, 2024
e602305
chore: Add custom metric for native shuffle fetching batches from JVM…
andygrove Nov 21, 2024
9990b34
feat: support array_insert (#1073)
SemyonSinchenko Nov 22, 2024
500895d
feat: enable decimal to decimal cast of different precision and scale…
himadripal Nov 22, 2024
7b1a290
docs: fix readme FGPA/FPGA typo (#1117)
gstvg Nov 24, 2024
5400fd7
fix: Use RDD partition index (#1112)
viirya Nov 25, 2024
ebdde77
fix: Various metrics bug fixes and improvements (#1111)
andygrove Dec 2, 2024
9b250c4
fix: Don't create CometScanExec for subclasses of ParquetFileFormat (…
Kimahriman Dec 2, 2024
95727aa
fix: Fix metrics regressions (#1132)
andygrove Dec 3, 2024
36a2307
docs: Add more technical detail and new diagram to Comet plugin overv…
andygrove Dec 3, 2024
2671e0c
Stop passing Java config map into native createPlan (#1101)
andygrove Dec 4, 2024
8d7bcb8
feat: Improve ScanExec native metrics (#1133)
andygrove Dec 6, 2024
587c29b
chore: Remove unused StringView struct (#1143)
andygrove Dec 6, 2024
b95dc1d
docs: Add some documentation explaining how shuffle works (#1148)
andygrove Dec 6, 2024
1c6c7a9
test: enable more Spark 4.0 tests (#1145)
kazuyukitanimura Dec 6, 2024
8d83cc1
chore: Refactor cast to use SparkCastOptions param (#1146)
andygrove Dec 6, 2024
21503ca
Enable more scenarios in CometExecBenchmark. (#1151)
mbutrovich Dec 7, 2024
73f1405
chore: Move more expressions from core crate to spark-expr crate (#1152)
andygrove Dec 9, 2024
5c45fdc
remove dead code (#1155)
andygrove Dec 10, 2024
2c1a6b9
fix: Spark 4.0-preview1 SPARK-47120 (#1156)
kazuyukitanimura Dec 11, 2024
49cf0d7
chore: Move string kernels and expressions to spark-expr crate (#1164)
andygrove Dec 12, 2024
7db9aa6
chore: Move remaining expressions to spark-expr crate + some minor re…
andygrove Dec 12, 2024
f1d0879
chore: Add ignored tests for reading complex types from Parquet (#1167)
andygrove Dec 12, 2024
b9ac78b
feat: Add Spark-compatible implementation of SchemaAdapterFactory (#1…
andygrove Dec 17, 2024
46a28db
fix: Document enabling comet explain plan usage in Spark (4.0) (#1176)
parthchandra Dec 17, 2024
655081b
test: enabling Spark tests with offHeap requirement (#1177)
kazuyukitanimura Dec 18, 2024
e297d23
feat: Improve shuffle metrics (second attempt) (#1175)
andygrove Dec 18, 2024
8f4a8a5
fix: stddev_pop should not directly return 0.0 when count is 1.0 (#1184)
viirya Dec 19, 2024
ea6d205
feat: Make native shuffle compression configurable and respect `spark…
andygrove Dec 20, 2024
053b7cc
minor: move shuffle classes from common to spark (#1193)
andygrove Dec 22, 2024
639fa2f
minor: refactor decodeBatches to make private in broadcast exchange (…
andygrove Dec 22, 2024
58dee73
minor: refactor prepare_output so that it does not require an Executi…
andygrove Dec 22, 2024
5432e03
fix: fix missing explanation for then branch in case when (#1200)
rluvaton Dec 27, 2024
103f82f
minor: remove unused source files (#1202)
andygrove Dec 28, 2024
5d2c909
chore: Upgrade to DataFusion 44.0.0-rc2 (#1154)
andygrove Dec 28, 2024
4f8ce75
feat: add support for array_contains expression (#1163)
dharanad Jan 2, 2025
9320aed
feat: Add a `spark.comet.exec.memoryPool` configuration for experimen…
Kontinuation Jan 3, 2025
2e0f00a
feat: Reenable tests for filtered SMJ anti join (#1211)
comphead Jan 3, 2025
4333dce
chore: Add safety check to CometBuffer (#1050)
viirya Jan 3, 2025
4b56c52
remove unreachable code (#1213)
andygrove Jan 4, 2025
5f1e998
test: Enable Comet by default except some tests in SparkSessionExten…
kazuyukitanimura Jan 4, 2025
e39ffa6
extract struct expressions to folders based on spark grouping (#1216)
rluvaton Jan 6, 2025
5c389d1
chore: extract static invoke expressions to folders based on spark gr…
rluvaton Jan 6, 2025
e72beb1
chore: Follow-on PR to fully enable onheap memory usage (#1210)
andygrove Jan 6, 2025
74a6a8d
feat: Move shuffle block decompression and decoding to native code an…
andygrove Jan 7, 2025
3f0d442
chore: extract agg_funcs expressions to folders based on spark groupi…
rluvaton Jan 7, 2025
4cf840f
extract datetime_funcs expressions to folders based on spark grouping…
rluvaton Jan 7, 2025
508db06
chore: use datafusion from crates.io (#1232)
rluvaton Jan 7, 2025
c19202c
chore: extract strings file to `strings_func` like in spark grouping …
rluvaton Jan 8, 2025
fbcf025
chore: extract predicate_functions expressions to folders based on sp…
rluvaton Jan 8, 2025
ca7b4a8
build(deps): bump protobuf version to 3.21.12 (#1234)
wForget Jan 8, 2025
c6acc9d
extract json_funcs expressions to folders based on spark grouping (#1…
rluvaton Jan 8, 2025
0a68f1c
test: Enable shuffle by default in Spark tests (#1240)
kazuyukitanimura Jan 9, 2025
e731b6e
chore: extract hash_funcs expressions to folders based on spark group…
rluvaton Jan 9, 2025
be48839
fix: Fall back to Spark for unsupported partition or sort expressions…
andygrove Jan 9, 2025
d15d051
perf: Improve query planning to more reliably fall back to columnar s…
andygrove Jan 9, 2025
d52038e
fix regression (#1259)
andygrove Jan 10, 2025
c25060e
feat: add support for array_remove expression (#1179)
jatin510 Jan 12, 2025
e8261fb
fix: Fall back to Spark for distinct aggregates (#1262)
andygrove Jan 13, 2025
d7a7812
feat: Implement custom RecordBatch serde for shuffle for improved per…
andygrove Jan 13, 2025
1eb932a
docs: Update TPC-H benchmark results (#1257)
andygrove Jan 13, 2025
9fe5420
fix: disable initCap by default (#1276)
kazuyukitanimura Jan 14, 2025
cbe50e1
chore: Add changelog for 0.5.0 (#1278)
andygrove Jan 14, 2025
08d892a
update TPC-DS results for 0.5.0 (#1277)
andygrove Jan 14, 2025
9c1f0ee
fix: cast timestamp to decimal is unsupported (#1281)
wForget Jan 14, 2025
d36e8d7
chore: Start 0.6.0 development (#1286)
andygrove Jan 14, 2025
3eced67
docs: Fix links and provide complete benchmarking scripts (#1284)
andygrove Jan 14, 2025
82022af
feat: Add HasRowIdMapping interface (#1288)
viirya Jan 15, 2025
9e4e5e8
Merge branch 'main' into comet-parquet-exec-merge-20240116
parthchandra Jan 17, 2025
8083086
fix style
parthchandra Jan 17, 2025
5a31ba3
fix
parthchandra Jan 17, 2025
8fee4ca
fix for plan serialization
parthchandra Jan 18, 2025
2 changes: 1 addition & 1 deletion .github/workflows/spark_sql_test.yml
@@ -71,7 +71,7 @@ jobs:
         with:
           spark-version: ${{ matrix.spark-version.full }}
           spark-short-version: ${{ matrix.spark-version.short }}
-          comet-version: '0.5.0-SNAPSHOT' # TODO: get this from pom.xml
+          comet-version: '0.6.0-SNAPSHOT' # TODO: get this from pom.xml
       - name: Run Spark tests
         run: |
           cd apache-spark
2 changes: 1 addition & 1 deletion .github/workflows/spark_sql_test_ansi.yml
@@ -69,7 +69,7 @@ jobs:
         with:
           spark-version: ${{ matrix.spark-version.full }}
           spark-short-version: ${{ matrix.spark-version.short }}
-          comet-version: '0.5.0-SNAPSHOT' # TODO: get this from pom.xml
+          comet-version: '0.6.0-SNAPSHOT' # TODO: get this from pom.xml
       - name: Run Spark tests
         run: |
           cd apache-spark
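Both workflow hunks above carry the same `TODO: get this from pom.xml`. As a hedged sketch of one way that TODO might be resolved (not part of this PR; the step below is hypothetical and assumes maven-help-plugin 3.1.0+ is available on the runner):

```shell
# Hypothetical step: derive the Comet version from the root pom.xml
# instead of hardcoding '0.6.0-SNAPSHOT' in each workflow file.
COMET_VERSION=$(mvn -q help:evaluate -Dexpression=project.version -DforceStdout)
# Expose it to later workflow steps via the standard GitHub Actions output file.
echo "comet-version=$COMET_VERSION" >> "$GITHUB_OUTPUT"
```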
2 changes: 1 addition & 1 deletion common/pom.xml
@@ -26,7 +26,7 @@ under the License.
   <parent>
     <groupId>org.apache.datafusion</groupId>
     <artifactId>comet-parent-spark${spark.version.short}_${scala.binary.version}</artifactId>
-    <version>0.5.0-SNAPSHOT</version>
+    <version>0.6.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
39 changes: 39 additions & 0 deletions common/src/main/java/org/apache/comet/vector/HasRowIdMapping.java
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.comet.vector;
+
+/**
+ * An interface that can be implemented by vectors that have a row id mapping.
+ *
+ * <p>For example, Iceberg's DeleteFile has a row id mapping to map row id to position. This
+ * interface is used to set and get the row id mapping. The row id mapping is an array of integers,
+ * where the index is the row id and the value is the position. Here is an example:
+ * [0,1,2,3,4,5,6,7] -- original state of the row id mapping array
+ * [0,1,3,4,5,7,-,-] -- after applying position deletes of rows 2 and 6 (num records set to 6)
+ */
+public interface HasRowIdMapping {
+  default void setRowIdMapping(int[] rowIdMapping) {
+    throw new UnsupportedOperationException("setRowIdMapping is not supported");
+  }
+
+  default int[] getRowIdMapping() {
+    throw new UnsupportedOperationException("getRowIdMapping is not supported");
+  }
+}
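For illustration, here is a minimal sketch of how a vector class might implement this interface; `PositionDeleteVector` is a hypothetical name, not a class added by this PR:

```java
import org.apache.comet.vector.HasRowIdMapping;

// Hypothetical vector that stores the row id mapping described in the Javadoc above.
public class PositionDeleteVector implements HasRowIdMapping {
  private int[] rowIdMapping;

  @Override
  public void setRowIdMapping(int[] rowIdMapping) {
    this.rowIdMapping = rowIdMapping;
  }

  @Override
  public int[] getRowIdMapping() {
    return rowIdMapping;
  }
}
```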
65 changes: 63 additions & 2 deletions docs/source/contributor-guide/benchmark-results/tpc-ds.md
@@ -19,8 +19,8 @@ under the License.

 # Apache DataFusion Comet: Benchmarks Derived From TPC-DS
 
-The following benchmarks were performed on a two node Kubernetes cluster with
-data stored locally in Parquet format on NVMe storage. Performance characteristics will vary in different environments
+The following benchmarks were performed on a Linux workstation with PCIe 5, AMD 7950X CPU (16 cores), 128 GB RAM, and
+data stored locally in Parquet format on NVMe storage. Performance characteristics will vary in different environments
 and we encourage you to run these benchmarks in your own environments.
 
 The tracking issue for improving TPC-DS performance is [#858](https://github.com/apache/datafusion-comet/issues/858).
@@ -43,3 +43,64 @@ The raw results of these benchmarks in JSON format is available here:

 - [Spark](0.5.0/spark-tpcds.json)
 - [Comet](0.5.0/comet-tpcds.json)

+# Scripts
+
+Here are the scripts that were used to generate these results.
+
+## Apache Spark
+
+```shell
+#!/bin/bash
+$SPARK_HOME/bin/spark-submit \
+    --master $SPARK_MASTER \
+    --conf spark.driver.memory=8G \
+    --conf spark.executor.memory=32G \
+    --conf spark.executor.instances=2 \
+    --conf spark.executor.cores=8 \
+    --conf spark.cores.max=16 \
+    --conf spark.eventLog.enabled=true \
+    tpcbench.py \
+    --benchmark tpcds \
+    --name spark \
+    --data /mnt/bigdata/tpcds/sf100/ \
+    --queries ../../tpcds/ \
+    --output . \
+    --iterations 5
+```
+
+## Apache Spark + Comet
+
+```shell
+#!/bin/bash
+$SPARK_HOME/bin/spark-submit \
+    --master $SPARK_MASTER \
+    --conf spark.driver.memory=8G \
+    --conf spark.executor.instances=2 \
+    --conf spark.executor.memory=16G \
+    --conf spark.executor.cores=8 \
+    --total-executor-cores=16 \
+    --conf spark.eventLog.enabled=true \
+    --conf spark.driver.maxResultSize=2G \
+    --conf spark.memory.offHeap.enabled=true \
+    --conf spark.memory.offHeap.size=24g \
+    --jars $COMET_JAR \
+    --conf spark.driver.extraClassPath=$COMET_JAR \
+    --conf spark.executor.extraClassPath=$COMET_JAR \
+    --conf spark.plugins=org.apache.spark.CometPlugin \
+    --conf spark.comet.enabled=true \
+    --conf spark.comet.cast.allowIncompatible=true \
+    --conf spark.comet.exec.replaceSortMergeJoin=false \
+    --conf spark.comet.exec.shuffle.enabled=true \
+    --conf spark.comet.exec.shuffle.mode=auto \
+    --conf spark.comet.exec.shuffle.fallbackToColumnar=true \
+    --conf spark.comet.exec.shuffle.compression.codec=lz4 \
+    --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \
+    tpcbench.py \
+    --name comet \
+    --benchmark tpcds \
+    --data /mnt/bigdata/tpcds/sf100/ \
+    --queries ../../tpcds/ \
+    --output . \
+    --iterations 5
+```
71 changes: 67 additions & 4 deletions docs/source/contributor-guide/benchmark-results/tpc-h.md
@@ -25,21 +25,84 @@ and we encourage you to run these benchmarks in your own environments.

The tracking issue for improving TPC-H performance is [#391](https://github.com/apache/datafusion-comet/issues/391).

-![](../../_static/images/benchmark-results/0.5.0-SNAPSHOT-2025-01-09/tpch_allqueries.png)
+![](../../_static/images/benchmark-results/0.5.0/tpch_allqueries.png)

Here is a breakdown showing relative performance of Spark and Comet for each query.

-![](../../_static/images/benchmark-results/0.5.0-SNAPSHOT-2025-01-09/tpch_queries_compare.png)
+![](../../_static/images/benchmark-results/0.5.0/tpch_queries_compare.png)

The following chart shows how much Comet currently accelerates each query from the benchmark in relative terms.

-![](../../_static/images/benchmark-results/0.5.0-SNAPSHOT-2025-01-09/tpch_queries_speedup_rel.png)
+![](../../_static/images/benchmark-results/0.5.0/tpch_queries_speedup_rel.png)

The following chart shows how much Comet currently accelerates each query from the benchmark in absolute terms.

-![](../../_static/images/benchmark-results/0.5.0-SNAPSHOT-2025-01-09/tpch_queries_speedup_abs.png)
+![](../../_static/images/benchmark-results/0.5.0/tpch_queries_speedup_abs.png)

The raw results of these benchmarks in JSON format is available here:

 - [Spark](0.5.0/spark-tpch.json)
 - [Comet](0.5.0/comet-tpch.json)

+# Scripts
+
+Here are the scripts that were used to generate these results.
+
+## Apache Spark
+
+```shell
+#!/bin/bash
+$SPARK_HOME/bin/spark-submit \
+    --master $SPARK_MASTER \
+    --conf spark.driver.memory=8G \
+    --conf spark.executor.instances=1 \
+    --conf spark.executor.cores=8 \
+    --conf spark.cores.max=8 \
+    --conf spark.executor.memory=16g \
+    --conf spark.memory.offHeap.enabled=true \
+    --conf spark.memory.offHeap.size=16g \
+    --conf spark.eventLog.enabled=true \
+    tpcbench.py \
+    --name spark \
+    --benchmark tpch \
+    --data /mnt/bigdata/tpch/sf100/ \
+    --queries ../../tpch/queries \
+    --output . \
+    --iterations 5
+
+```
+
+## Apache Spark + Comet
+
+```shell
+#!/bin/bash
+$SPARK_HOME/bin/spark-submit \
+    --master $SPARK_MASTER \
+    --conf spark.driver.memory=8G \
+    --conf spark.executor.instances=1 \
+    --conf spark.executor.cores=8 \
+    --conf spark.cores.max=8 \
+    --conf spark.executor.memory=16g \
+    --conf spark.memory.offHeap.enabled=true \
+    --conf spark.memory.offHeap.size=16g \
+    --conf spark.comet.exec.replaceSortMergeJoin=true \
+    --conf spark.eventLog.enabled=true \
+    --jars $COMET_JAR \
+    --driver-class-path $COMET_JAR \
+    --conf spark.driver.extraClassPath=$COMET_JAR \
+    --conf spark.executor.extraClassPath=$COMET_JAR \
+    --conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \
+    --conf spark.comet.enabled=true \
+    --conf spark.comet.exec.shuffle.enabled=true \
+    --conf spark.comet.exec.shuffle.mode=auto \
+    --conf spark.comet.exec.shuffle.fallbackToColumnar=true \
+    --conf spark.comet.exec.shuffle.compression.codec=lz4 \
+    --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \
+    tpcbench.py \
+    --name comet \
+    --benchmark tpch \
+    --data /mnt/bigdata/tpch/sf100/ \
+    --queries ../../tpch/queries \
+    --output . \
+    --iterations 5
+```
56 changes: 0 additions & 56 deletions docs/source/contributor-guide/benchmarking.md
@@ -24,62 +24,6 @@ benchmarking documentation and scripts are available in the [DataFusion Benchmar

We also have many micro benchmarks that can be run from an IDE located [here](https://github.com/apache/datafusion-comet/tree/main/spark/src/test/scala/org/apache/spark/sql/benchmark).

-Here are example commands for running the benchmarks against a Spark cluster. This command will need to be
-adapted based on the Spark environment and location of data files.
-
-These commands are intended to be run from the `runners/datafusion-comet` directory in the `datafusion-benchmarks`
-repository.
-
-## Running Benchmarks Against Apache Spark
-
-```shell
-$SPARK_HOME/bin/spark-submit \
-    --master $SPARK_MASTER \
-    --conf spark.driver.memory=8G \
-    --conf spark.executor.instances=1 \
-    --conf spark.executor.memory=32G \
-    --conf spark.executor.cores=8 \
-    --conf spark.cores.max=8 \
-    tpcbench.py \
-    --benchmark tpch \
-    --data /mnt/bigdata/tpch/sf100/ \
-    --queries ../../tpch/queries \
-    --iterations 3
-```
-
-## Running Benchmarks Against Apache Spark with Apache DataFusion Comet Enabled
-
-### TPC-H
-
-```shell
-$SPARK_HOME/bin/spark-submit \
-    --master $SPARK_MASTER \
-    --conf spark.driver.memory=8G \
-    --conf spark.executor.instances=1 \
-    --conf spark.executor.memory=16G \
-    --conf spark.executor.cores=8 \
-    --conf spark.cores.max=8 \
-    --conf spark.memory.offHeap.enabled=true \
-    --conf spark.memory.offHeap.size=16g \
-    --jars $COMET_JAR \
-    --conf spark.driver.extraClassPath=$COMET_JAR \
-    --conf spark.executor.extraClassPath=$COMET_JAR \
-    --conf spark.plugins=org.apache.spark.CometPlugin \
-    --conf spark.comet.cast.allowIncompatible=true \
-    --conf spark.comet.exec.replaceSortMergeJoin=true \
-    --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \
-    --conf spark.comet.exec.shuffle.enabled=true \
-    --conf spark.comet.exec.shuffle.mode=auto \
-    --conf spark.comet.exec.shuffle.enableFastEncoding=true \
-    --conf spark.comet.exec.shuffle.fallbackToColumnar=true \
-    --conf spark.comet.exec.shuffle.compression.codec=lz4 \
-    tpcbench.py \
-    --benchmark tpch \
-    --data /mnt/bigdata/tpch/sf100/ \
-    --queries ../../tpch/queries \
-    --iterations 3
-```

### TPC-DS

For TPC-DS, use `spark.comet.exec.replaceSortMergeJoin=false`.
2 changes: 1 addition & 1 deletion docs/source/contributor-guide/debugging.md
@@ -130,7 +130,7 @@ Then build the Comet as [described](https://github.com/apache/arrow-datafusion-c
Start Comet with `RUST_BACKTRACE=1`

```console
-RUST_BACKTRACE=1 $SPARK_HOME/spark-shell --jars spark/target/comet-spark-spark3.4_2.12-0.5.0-SNAPSHOT.jar --conf spark.plugins=org.apache.spark.CometPlugin --conf spark.comet.enabled=true --conf spark.comet.exec.enabled=true
+RUST_BACKTRACE=1 $SPARK_HOME/spark-shell --jars spark/target/comet-spark-spark3.4_2.12-0.6.0-SNAPSHOT.jar --conf spark.plugins=org.apache.spark.CometPlugin --conf spark.comet.enabled=true --conf spark.comet.exec.enabled=true
```

Get the expanded exception details
4 changes: 2 additions & 2 deletions docs/source/user-guide/installation.md
@@ -74,7 +74,7 @@ See the [Comet Kubernetes Guide](kubernetes.md) guide.
Make sure `SPARK_HOME` points to the same Spark version as Comet was built for.

```console
-export COMET_JAR=spark/target/comet-spark-spark3.4_2.12-0.5.0-SNAPSHOT.jar
+export COMET_JAR=spark/target/comet-spark-spark3.4_2.12-0.6.0-SNAPSHOT.jar

 $SPARK_HOME/bin/spark-shell \
     --jars $COMET_JAR \
@@ -130,7 +130,7 @@ explicitly contain Comet otherwise Spark may use a different class-loader for th
components which will then fail at runtime. For example:

```
---driver-class-path spark/target/comet-spark-spark3.4_2.12-0.5.0-SNAPSHOT.jar
+--driver-class-path spark/target/comet-spark-spark3.4_2.12-0.6.0-SNAPSHOT.jar
```

Some cluster managers may require additional configuration, see <https://spark.apache.org/docs/latest/cluster-overview.html>
2 changes: 1 addition & 1 deletion fuzz-testing/pom.xml
@@ -25,7 +25,7 @@ under the License.
   <parent>
     <groupId>org.apache.datafusion</groupId>
     <artifactId>comet-parent-spark${spark.version.short}_${scala.binary.version}</artifactId>
-    <version>0.5.0-SNAPSHOT</version>
+    <version>0.6.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
6 changes: 3 additions & 3 deletions native/Cargo.lock

Some generated files are not rendered by default.

6 changes: 3 additions & 3 deletions native/Cargo.toml
@@ -20,7 +20,7 @@ members = ["core", "spark-expr", "proto"]
resolver = "2"

[workspace.package]
version = "0.5.0"
version = "0.6.0"
homepage = "https://datafusion.apache.org/comet"
repository = "https://github.com/apache/datafusion-comet"
authors = ["Apache DataFusion <[email protected]>"]
@@ -48,8 +48,8 @@ datafusion-expr-common = { version = "44.0.0", default-features = false }
datafusion-execution = { version = "44.0.0", default-features = false }
datafusion-physical-plan = { version = "44.0.0", default-features = false }
datafusion-physical-expr = { version = "44.0.0", default-features = false }
-datafusion-comet-spark-expr = { path = "spark-expr", version = "0.5.0" }
-datafusion-comet-proto = { path = "proto", version = "0.5.0" }
+datafusion-comet-spark-expr = { path = "spark-expr", version = "0.6.0" }
+datafusion-comet-proto = { path = "proto", version = "0.6.0" }
chrono = { version = "0.4", default-features = false, features = ["clock"] }
chrono-tz = { version = "0.8" }
futures = "0.3.28"
2 changes: 1 addition & 1 deletion pom.xml
@@ -30,7 +30,7 @@ under the License.
   </parent>
   <groupId>org.apache.datafusion</groupId>
   <artifactId>comet-parent-spark${spark.version.short}_${scala.binary.version}</artifactId>
-  <version>0.5.0-SNAPSHOT</version>
+  <version>0.6.0-SNAPSHOT</version>
   <packaging>pom</packaging>
   <name>Comet Project Parent POM</name>
2 changes: 1 addition & 1 deletion spark-integration/pom.xml
@@ -26,7 +26,7 @@ under the License.
   <parent>
     <groupId>org.apache.datafusion</groupId>
     <artifactId>comet-parent-spark${spark.version.short}_${scala.binary.version}</artifactId>
-    <version>0.5.0-SNAPSHOT</version>
+    <version>0.6.0-SNAPSHOT</version>
    <relativePath>../pom.xml</relativePath>
  </parent>