[SPARK-28570][CORE][SHUFFLE] Make UnsafeShuffleWriter use the new API. #25304
ShuffleExecutorComponents.java:

```diff
@@ -19,6 +19,7 @@
 import java.io.IOException;
+import java.util.Optional;

 import org.apache.spark.annotation.Private;
```

```diff
@@ -39,17 +40,39 @@ public interface ShuffleExecutorComponents {
   /**
    * Called once per map task to create a writer that will be responsible for persisting all the
    * partitioned bytes written by that map task.
-   * @param shuffleId Unique identifier for the shuffle the map task is a part of
+   *
+   * @param shuffleId Unique identifier for the shuffle the map task is a part of
    * @param mapId Within the shuffle, the identifier of the map task
    * @param mapTaskAttemptId Identifier of the task attempt. Multiple attempts of the same map task
-   *                        with the same (shuffleId, mapId) pair can be distinguished by the
-   *                        different values of mapTaskAttemptId.
+   *                         with the same (shuffleId, mapId) pair can be distinguished by the
+   *                         different values of mapTaskAttemptId.
    * @param numPartitions The number of partitions that will be written by the map task. Some of
-   *                     these partitions may be empty.
+   *                      these partitions may be empty.
    */
   ShuffleMapOutputWriter createMapOutputWriter(
       int shuffleId,
       int mapId,
       long mapTaskAttemptId,
       int numPartitions) throws IOException;
+
+  /**
+   * An optional extension for creating a map output writer that can optimize the transfer of a
+   * single partition file, as the entire result of a map task, to the backing store.
+   * <p>
+   * Most implementations should return the default {@link Optional#empty()} to indicate that
+   * they do not support this optimization. This primarily is for backwards-compatibility in
+   * preserving an optimization in the local disk shuffle storage implementation.
+   *
+   * @param shuffleId Unique identifier for the shuffle the map task is a part of
+   * @param mapId Within the shuffle, the identifier of the map task
+   * @param mapTaskAttemptId Identifier of the task attempt. Multiple attempts of the same map task
+   *                         with the same (shuffleId, mapId) pair can be distinguished by the
+   *                         different values of mapTaskAttemptId.
+   */
+  default Optional<SingleFileShuffleMapOutputWriter> createSingleFileMapOutputWriter(
+      int shuffleId,
+      int mapId,
+      long mapTaskAttemptId) throws IOException {
+    return Optional.empty();
+  }
 }
```

Review thread on the new `createSingleFileMapOutputWriter` javadoc:

Review comment: Probably it's better to indicate what kinds of implementations may support this optimization? Otherwise it's confusing. I think the storage that has API like

Contributor (author): Truth be told, even plugins that support remote FS move would be unlikely to support this well - one would still have to transfer the whole file up to the remote storage layer, but that could just as easily be done by writing the data from the file through an output stream. I think only implementations that stage the files locally could support this in any meaningful way at all.

Contributor (author): I'm ok with leaving out the docs, if only because very, very few implementations should even care about this API.

Contributor: Yeah, I don't think it's necessary to go into more detail. I would say the "casual" plugin developer wouldn't bother with this, and if they're really serious, they can look at what the existing implementation does. The comment is sufficient for that.
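As the thread notes, only plugins that stage map output locally stand to gain from opting in. A minimal sketch of what an opt-in might look like, under stated assumptions: the class and directory names are hypothetical, and the class is left abstract so the sketch only has to show the new extension point rather than the rest of `ShuffleExecutorComponents`.

```java
package com.example.shuffle; // hypothetical plugin package

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Optional;

import org.apache.spark.shuffle.api.ShuffleExecutorComponents;
import org.apache.spark.shuffle.api.SingleFileShuffleMapOutputWriter;

// Hypothetical plugin that stages map output on the executor's local disk.
// Left abstract so this sketch only shows the new extension point; writer
// creation and initialization are omitted.
public abstract class LocalStagingShuffleComponents implements ShuffleExecutorComponents {

  // Hypothetical staging directory managed by the plugin.
  private final Path stagingDir = Paths.get("/tmp/shuffle-staging");

  @Override
  public Optional<SingleFileShuffleMapOutputWriter> createSingleFileMapOutputWriter(
      int shuffleId,
      int mapId,
      long mapTaskAttemptId) throws IOException {
    // SingleFileShuffleMapOutputWriter has a single abstract method, so a
    // lambda suffices: accept the fully merged file and move it into place.
    return Optional.of((mapOutputFile, partitionLengths) ->
        Files.move(
            mapOutputFile.toPath(),
            stagingDir.resolve(shuffleId + "_" + mapId + "_" + mapTaskAttemptId + ".data")));
  }
}
```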
SingleFileShuffleMapOutputWriter.java (new file):

```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.shuffle.api;

import java.io.File;

import java.io.IOException;
import org.apache.spark.annotation.Private;

/**
 * Optional extension for partition writing that is optimized for transferring a single
 * file to the backing store.
 */
@Private
public interface SingleFileShuffleMapOutputWriter {

  /**
   * Transfer a file that contains the bytes of all the splits written by this map task.
   */
  void transferMapOutputFile(File mapOutputFile, long[] partitionLengths) throws IOException;
}
```

Contributor (on the import block): import grouping
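The nit refers to Spark's Java import style, which keeps `java.*`/`javax.*` imports in one group, separated by a blank line from third-party and Spark imports. Presumably the requested fix is:

```java
import java.io.File;
import java.io.IOException;

import org.apache.spark.annotation.Private;
```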
UnsafeShuffleWriter.java:

```diff
@@ -18,6 +18,7 @@
 package org.apache.spark.shuffle.sort;

 import java.nio.channels.Channels;
+import java.util.Optional;
 import javax.annotation.Nullable;
 import java.io.*;
 import java.nio.channels.FileChannel;
```

```diff
@@ -53,6 +54,7 @@
 import org.apache.spark.shuffle.api.ShuffleExecutorComponents;
 import org.apache.spark.shuffle.api.ShuffleMapOutputWriter;
 import org.apache.spark.shuffle.api.ShufflePartitionWriter;
+import org.apache.spark.shuffle.api.SingleFileShuffleMapOutputWriter;
 import org.apache.spark.shuffle.api.WritableByteChannelWrapper;
 import org.apache.spark.storage.BlockManager;
 import org.apache.spark.unsafe.Platform;
```

```diff
@@ -215,31 +217,15 @@ void closeAndWriteOutput() throws IOException {
     serOutputStream = null;
     final SpillInfo[] spills = sorter.closeAndGetSpills();
     sorter = null;
-    final ShuffleMapOutputWriter mapWriter = shuffleExecutorComponents
-        .createMapOutputWriter(
-            shuffleId,
-            mapId,
-            taskContext.taskAttemptId(),
-            partitioner.numPartitions());
     final long[] partitionLengths;
     try {
-      try {
-        partitionLengths = mergeSpills(spills, mapWriter);
-      } finally {
-        for (SpillInfo spill : spills) {
-          if (spill.file.exists() && !spill.file.delete()) {
-            logger.error("Error while deleting spill file {}", spill.file.getPath());
-          }
-        }
-      }
-      mapWriter.commitAllPartitions();
-    } catch (Exception e) {
-      try {
-        mapWriter.abort(e);
-      } catch (Exception innerE) {
-        logger.error("Failed to abort the Map Output Writer", innerE);
-      }
-      throw e;
+      partitionLengths = mergeSpills(spills);
+    } finally {
+      for (SpillInfo spill : spills) {
+        if (spill.file.exists() && !spill.file.delete()) {
+          logger.error("Error while deleting spill file {}", spill.file.getPath());
+        }
+      }
     }
     mapStatus = MapStatus$.MODULE$.apply(blockManager.shuffleServerId(), partitionLengths);
   }
```
@@ -273,57 +259,93 @@ (the rewritten merge path, shown as the resulting code)

```java
  /**
   * Merge zero or more spill files together, choosing the fastest merging strategy based on the
   * number of spills and the IO compression codec.
   *
   * @return the partition lengths in the merged file.
   */
  private long[] mergeSpills(SpillInfo[] spills) throws IOException {
    long[] partitionLengths;
    if (spills.length == 0) {
      final ShuffleMapOutputWriter mapWriter = shuffleExecutorComponents
          .createMapOutputWriter(
              shuffleId,
              mapId,
              taskContext.taskAttemptId(),
              partitioner.numPartitions());
      mapWriter.commitAllPartitions();
      return new long[partitioner.numPartitions()];
    } else if (spills.length == 1) {
      Optional<SingleFileShuffleMapOutputWriter> maybeSingleFileWriter =
          shuffleExecutorComponents.createSingleFileMapOutputWriter(
              shuffleId, mapId, taskContext.taskAttemptId());
      if (maybeSingleFileWriter.isPresent()) {
        // Here, we don't need to perform any metrics updates because the bytes written to this
        // output file would have already been counted as shuffle bytes written.
        partitionLengths = spills[0].partitionLengths;
        maybeSingleFileWriter.get().transferMapOutputFile(spills[0].file, partitionLengths);
      } else {
        partitionLengths = mergeSpillsUsingStandardWriter(spills);
      }
    } else {
      partitionLengths = mergeSpillsUsingStandardWriter(spills);
    }
    return partitionLengths;
  }
```

Review thread on the metrics comment in the single-file branch:

Contributor: This comment is only true for the local disk implementation. E.g., if some other implementation did take advantage of the single merged file somehow and wrote it all to a remote store, it would be doing another write. But I am not really worried about this, as I don't think any other store will actually use this ...

Contributor (author): Where could we move this?

Review comment: @squito I think this

Contributor: I dunno if there is a great alternative. We could say it's the job of individual implementations to increment the metrics, and then move this comment into
It all kinda feels like overkill to me. @gczsjdy I agree it's possible for another store to take advantage of this, but do you have a specific case in mind? I'd like to avoid adding too many things to the API and keep things simple (with odd cases just to support the existing implementation).
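To make the reviewer's point concrete: a remote-store implementation of this interface would itself be performing another full write during the "transfer", so the comment's assumption would not hold and such an implementation would have to account for its own metrics. A sketch under stated assumptions; all names here are hypothetical, and `RemoteStoreClient` is defined inline only to keep the sketch self-contained, not a real API.

```java
import java.io.File;
import java.io.IOException;

import org.apache.spark.shuffle.api.SingleFileShuffleMapOutputWriter;

// Hypothetical remote-store writer: the "transfer" is itself another full
// write of the merged file, so the local-disk assumption that the bytes were
// already counted as shuffle bytes written does not hold here.
public class RemoteStoreSingleFileMapOutputWriter implements SingleFileShuffleMapOutputWriter {

  /** Hypothetical minimal client, declared here only to make the sketch self-contained. */
  public interface RemoteStoreClient {
    void upload(File source, String key) throws IOException;
  }

  private final RemoteStoreClient client;
  private final String remoteKey;

  public RemoteStoreSingleFileMapOutputWriter(RemoteStoreClient client, String remoteKey) {
    this.client = client;
    this.remoteKey = remoteKey;
  }

  @Override
  public void transferMapOutputFile(File mapOutputFile, long[] partitionLengths)
      throws IOException {
    // This is the "extra write" from the review thread: the merged file is
    // uploaded in full, so streaming the single spill through the standard
    // ShuffleMapOutputWriter would have cost roughly the same.
    client.upload(mapOutputFile, remoteKey);
    // Unlike the local-disk case, these bytes were never counted as shuffle
    // bytes written, so a real implementation would update write metrics here.
  }
}
```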
The standard merge path, now factored into its own method:

```java
  private long[] mergeSpillsUsingStandardWriter(SpillInfo[] spills) throws IOException {
    long[] partitionLengths;
    final boolean compressionEnabled = (boolean) sparkConf.get(package$.MODULE$.SHUFFLE_COMPRESS());
    final CompressionCodec compressionCodec = CompressionCodec$.MODULE$.createCodec(sparkConf);
    final boolean fastMergeEnabled =
        (boolean) sparkConf.get(package$.MODULE$.SHUFFLE_UNSAFE_FAST_MERGE_ENABLE());
    final boolean fastMergeIsSupported = !compressionEnabled ||
        CompressionCodec$.MODULE$.supportsConcatenationOfSerializedStreams(compressionCodec);
    final boolean encryptionEnabled = blockManager.serializerManager().encryptionEnabled();
    final ShuffleMapOutputWriter mapWriter = shuffleExecutorComponents
        .createMapOutputWriter(
            shuffleId,
            mapId,
            taskContext.taskAttemptId(),
            partitioner.numPartitions());
    try {
      // There are multiple spills to merge, so none of these spill files' lengths were counted
      // towards our shuffle write count or shuffle write time. If we use the slow merge path,
      // then the final output file's size won't necessarily be equal to the sum of the spill
      // files' sizes. To guard against this case, we look at the output file's actual size when
      // computing shuffle bytes written.
      //
      // We allow the individual merge methods to report their own IO times since different merge
      // strategies use different IO techniques. We count IO during merge towards the shuffle
      // write time, which appears to be consistent with the "not bypassing merge-sort" branch in
      // ExternalSorter.
      if (fastMergeEnabled && fastMergeIsSupported) {
        // Compression is disabled or we are using an IO compression codec that supports
        // decompression of concatenated compressed streams, so we can perform a fast spill merge
        // that doesn't need to interpret the spilled bytes.
        if (transferToEnabled && !encryptionEnabled) {
          logger.debug("Using transferTo-based fast merge");
          partitionLengths = mergeSpillsWithTransferTo(spills, mapWriter);
        } else {
          logger.debug("Using fileStream-based fast merge");
          partitionLengths = mergeSpillsWithFileStream(spills, mapWriter, null);
        }
      } else {
        logger.debug("Using slow merge");
        partitionLengths = mergeSpillsWithFileStream(spills, mapWriter, compressionCodec);
      }
      // When closing an UnsafeShuffleExternalSorter that has already spilled once but also has
      // in-memory records, we write out the in-memory records to a file but do not count that
      // final write as bytes spilled (instead, it's accounted as shuffle write). The merge needs
      // to be counted as shuffle write, but this will lead to double-counting of the final
      // SpillInfo's bytes.
      writeMetrics.decBytesWritten(spills[spills.length - 1].file.length());
      mapWriter.commitAllPartitions();
    } catch (Exception e) {
      try {
        mapWriter.abort(e);
      } catch (Exception e2) {
        logger.warn("Failed to abort writing the map output.", e2);
        e.addSuppressed(e2);
      }
      throw e;
    }
    return partitionLengths;
  }
```
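The double-counting guard at the end of the try block is easiest to see with concrete numbers (invented purely for illustration):

```java
// Hypothetical run: the sorter spilled once (spill0), then was closed while
// still holding in-memory records, which were flushed to a second file (spill1).
//
//   spill0.file.length() == 100 bytes -> counted as bytes *spilled*
//   spill1.file.length() ==  40 bytes -> already counted as shuffle bytes
//                                        *written* when flushed at close time
//
// The merge writes all 140 bytes through mapWriter, adding 140 to shuffle
// bytes written. Without a correction the metric would read 40 + 140 = 180,
// overstating the real output by the final spill's 40 bytes. Hence:
writeMetrics.decBytesWritten(spills[spills.length - 1].file.length()); // -40
// Net shuffle bytes written: 180 - 40 = 140, matching the merged file's size.
```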
LocalDiskSingleFileMapOutputWriter.java (new file):

```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.shuffle.sort.io;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import org.apache.spark.shuffle.IndexShuffleBlockResolver;
import org.apache.spark.shuffle.api.SingleFileShuffleMapOutputWriter;
import org.apache.spark.util.Utils;

public class LocalDiskSingleFileMapOutputWriter
    implements SingleFileShuffleMapOutputWriter {

  private final int shuffleId;
  private final int mapId;
  private final IndexShuffleBlockResolver blockResolver;

  public LocalDiskSingleFileMapOutputWriter(
      int shuffleId,
      int mapId,
      IndexShuffleBlockResolver blockResolver) {
    this.shuffleId = shuffleId;
    this.mapId = mapId;
    this.blockResolver = blockResolver;
  }

  @Override
  public void transferMapOutputFile(
      File mapOutputFile,
      long[] partitionLengths) throws IOException {
    File outputFile = blockResolver.getDataFile(shuffleId, mapId);
    File tempFile = Utils.tempFileWith(outputFile);
    Files.move(mapOutputFile.toPath(), tempFile.toPath());
    blockResolver.writeIndexFileAndCommit(shuffleId, mapId, partitionLengths, tempFile);
  }
}
```

Contributor (on the body of `transferMapOutputFile`): I think a brief comment here would help, e.g. "we've only got one spill file, which is already in the right format of the final data file. So no merging to do, just move it to the right location"
Contributor (on the imports of this file): nit: import grouping
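For context, the local-disk executor components elsewhere in this PR presumably opt in to the new extension point by returning this writer. A sketch under stated assumptions: `LocalDiskShuffleExecutorComponents` itself is not part of this diff, so the class shape and `blockResolver` field below are assumptions for illustration.

```java
import java.io.IOException;
import java.util.Optional;

import org.apache.spark.shuffle.IndexShuffleBlockResolver;
import org.apache.spark.shuffle.api.SingleFileShuffleMapOutputWriter;
import org.apache.spark.shuffle.sort.io.LocalDiskSingleFileMapOutputWriter;

// Sketch of how the local-disk components would plug the new writer in.
class LocalDiskComponentsSketch {

  private final IndexShuffleBlockResolver blockResolver;

  LocalDiskComponentsSketch(IndexShuffleBlockResolver blockResolver) {
    this.blockResolver = blockResolver;
  }

  public Optional<SingleFileShuffleMapOutputWriter> createSingleFileMapOutputWriter(
      int shuffleId,
      int mapId,
      long mapTaskAttemptId) throws IOException {
    // Opt in: hand the fully merged file straight to the index/data file commit path.
    return Optional.of(
        new LocalDiskSingleFileMapOutputWriter(shuffleId, mapId, blockResolver));
  }
}
```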