diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 7d1311924fe..05b102e75dc 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -5,7 +5,6 @@ your contribution to Apple and the community, and agree by submitting the patch that your contributions are licensed under the Apache 2.0 license (see `LICENSE.txt`). - ## How to submit a bug report Please ensure to specify the following: @@ -20,7 +19,6 @@ Please ensure to specify the following: * OS version and the output of `uname -a` * Network configuration - ### Example ``` @@ -89,7 +87,7 @@ SwiftNIO has been created to be high performance. The integration tests cover s ### Formatting -Try to keep your lines less than 120 characters long so github can correctly display your changes. +Try to keep your lines less than 120 characters long so GitHub can correctly display your changes. SwiftNIO uses the [swift-format](https://github.com/swiftlang/swift-format) tool to bring consistency to code formatting. There is a specific [.swift-format](./.swift-format) configuration file. This will be checked and enforced on PRs. Note that the check will run on the current most recent stable version target which may not match that in your own local development environment. @@ -106,7 +104,6 @@ act --container-architecture linux/amd64 --action-offline-mode --bind workflow_c This will run the format checks, binding to your local checkout so the edits made are to your own source. - ### Extensibility Try to make sure your code is robust to future extensions. The public interface is very hard to change after release - please refer to the [API guidelines](./docs/public-api.md) @@ -115,4 +112,4 @@ Try to make sure your code is robust to future extensions. The public interface Please open a pull request at https://github.com/apple/swift-nio. Make sure the CI passes, and then wait for code review. -After review you may be asked to make changes. 
When you are ready, use the request re-review feature of github or mention the reviewers by name in a comment. +After review you may be asked to make changes. When you are ready, use the request re-review feature of GitHub or mention the reviewers by name in a comment. diff --git a/Package.swift b/Package.swift index 4bbbe48e722..021ebf8e50d 100644 --- a/Package.swift +++ b/Package.swift @@ -23,7 +23,7 @@ let swiftAtomics: PackageDescription.Target.Dependency = .product(name: "Atomics let swiftCollections: PackageDescription.Target.Dependency = .product(name: "DequeModule", package: "swift-collections") let swiftSystem: PackageDescription.Target.Dependency = .product(name: "SystemPackage", package: "swift-system") -// These platforms require a depdency on `NIOPosix` from `NIOHTTP1` to maintain backward +// These platforms require a dependency on `NIOPosix` from `NIOHTTP1` to maintain backward // compatibility with previous NIO versions. let historicalNIOPosixDependencyRequired: [Platform] = [.macOS, .iOS, .tvOS, .watchOS, .linux, .android] diff --git a/README.md b/README.md index 055c5359c34..48d8b249746 100644 --- a/README.md +++ b/README.md @@ -86,7 +86,7 @@ SwiftNIO | Minimum Swift Version `2.43.0 ..< 2.51.0` | 5.5.2 `2.51.0 ..< 2.60.0` | 5.6 `2.60.0 ..< 2.65.0` | 5.7 -`2.65.0 ..< 2.76.0 | 5.8 +`2.65.0 ..< 2.76.0` | 5.8 `2.76.0 ...` | 5.9 ### SwiftNIO 1 @@ -116,7 +116,7 @@ SemVer and SwiftNIO's Public API guarantees should result in a working program w SwiftNIO is fundamentally a low-level tool for building high-performance networking applications in Swift. It particularly targets those use-cases where using a "thread-per-connection" model of concurrency is inefficient or untenable. This is a common limitation when building servers that use a large number of relatively low-utilization connections, such as HTTP servers. -To achieve its goals SwiftNIO extensively uses "non-blocking I/O": hence the name! 
Non-blocking I/O differs from the more common blocking I/O model because the application does not wait for data to be sent to or received from the network: instead, SwiftNIO asks for the kernel to notify it when I/O operations can be performed without waiting. +To achieve its goals, SwiftNIO extensively uses "non-blocking I/O": hence the name! Non-blocking I/O differs from the more common blocking I/O model because the application does not wait for data to be sent to or received from the network: instead, SwiftNIO asks for the kernel to notify it when I/O operations can be performed without waiting. SwiftNIO does not aim to provide high-level solutions like, for example, web frameworks do. Instead, SwiftNIO is focused on providing the low-level building blocks for these higher-level applications. When it comes to building a web application, most users will not want to use SwiftNIO directly: instead, they'll want to use one of the many great web frameworks available in the Swift ecosystem. Those web frameworks, however, may choose to use SwiftNIO under the covers to provide their networking support. @@ -139,7 +139,7 @@ All SwiftNIO applications are ultimately constructed of these various components #### EventLoops and EventLoopGroups -The basic I/O primitive of SwiftNIO is the event loop. The event loop is an object that waits for events (usually I/O related events, such as "data received") to happen and then fires some kind of callback when they do. In almost all SwiftNIO applications there will be relatively few event loops: usually only one or two per CPU core the application wants to use. Generally speaking event loops run for the entire lifetime of your application, spinning in an endless loop dispatching events. +The basic I/O primitive of SwiftNIO is the event loop. The event loop is an object that waits for events (usually I/O related events, such as "data received") to happen and then fires some kind of callback when they do. 
In almost all SwiftNIO applications there will be relatively few event loops: usually only one or two per CPU core the application wants to use. Generally speaking, event loops run for the entire lifetime of your application, spinning in an endless loop dispatching events. Event loops are gathered together into event loop *groups*. These groups provide a mechanism to distribute work around the event loops. For example, when listening for inbound connections the listening socket will be registered on one event loop. However, we don't want all connections that are accepted on that listening socket to be registered with the same event loop, as that would potentially overload one event loop while leaving the others empty. For that reason, the event loop group provides the ability to spread load across multiple event loops. diff --git a/Sources/NIOFileSystem/CopyStrategy.swift b/Sources/NIOFileSystem/CopyStrategy.swift index 790206daefc..ae8b4770e12 100644 --- a/Sources/NIOFileSystem/CopyStrategy.swift +++ b/Sources/NIOFileSystem/CopyStrategy.swift @@ -13,16 +13,16 @@ //===----------------------------------------------------------------------===// /// How to perform copies. Currently only relevant to directory level copies when using -/// ``FileSystemProtocol/copyItem(at:to:strategy:shouldProceedAfterError:shouldCopyItem:)`` -/// or other overloads that use the default behaviour +/// ``FileSystemProtocol/copyItem(at:to:strategy:shouldProceedAfterError:shouldCopyItem:)`` or other +/// overloads that use the default behaviour. public struct CopyStrategy: Hashable, Sendable { - // Avoid exposing to prevent alterations being breaking changes + // Avoid exposing to prevent breaking changes internal enum Wrapped: Hashable, Sendable { // platformDefault is reified into one of the concrete options below: case sequential - // Constraints on this value are enforced only on making `CopyStrategy`, - // the early error check there is desirable over validating on downstream use. 
+ // Constraints on this value are enforced only on creation of `CopyStrategy`. The early + // error check is preferable to validating on downstream use. case parallel(_ maxDescriptors: Int) } @@ -33,14 +33,13 @@ public struct CopyStrategy: Hashable, Sendable { // These selections are relatively arbitrary but the rationale is as follows: // // - Never exceed the default OS limits even if 4 such operations - // were happening at once + // - Never exceed the default OS limits even if 4 such operations were happening at once. // - Sufficient to enable significant speed up from parallelism - // - Not wasting effort by pushing contention to the underlying storage device - // Further we assume an SSD or similar underlying storage tech. - // Users on spinning rust need to account for that themselves anyway + // - Not wasting effort by pushing contention to the underlying storage device. Further, we + // assume an SSD or similar underlying storage tech. Users on spinning rust need to account + // for that themselves anyway. // - // That said, empirical testing for this has not been performed, suggestions welcome + // That said, empirical testing for this has not been performed; suggestions welcome. // // Note: for now we model the directory scan as needing two handles because, during the creation // of the destination directory we hold the handle for a while copying attributes // This may not result in a faster copy though so things are left simple internal static func determinePlatformDefault() -> Wrapped { #if os(macOS) || os(Linux) || os(Windows) - // 4 concurrent file copies/directory scans. - // Avoiding storage system contention is the dominant aspect here.
- // Empirical testing on an SSD copying to the same volume with a dense directory of small - // files and sub directories of similar shape totalling 12GB showed improvements in elapsed - // time for (expected) increases in CPU time up to parallel(8), beyond this the increases - // in CPU came with only moderate gains. + // 4 concurrent file copies/directory scans. Avoiding storage system contention is of utmost + // importance. + // + // Testing was performed on an SSD, while copying objects (a dense directory of small files + // and subdirectories of similar shape) to the same volume, totalling 12GB. Results showed + // improvements in elapsed time for (expected) increases in CPU time up to parallel(8). + // Beyond this, the increases in CPU led to only moderate gains. + // + // Anyone tuning this is encouraged to cover worst-case scenarios. return .parallel(8) #elseif os(iOS) || os(tvOS) || os(watchOS) || os(Android) - // Reduced maximum descriptors in embedded world - // This is chosen based on biasing to safety, not empirical testing. + // Reduced maximum descriptors in the embedded world. This is chosen based on biasing towards + // safety, not empirical testing. return .parallel(4) #else - // Safety first, if we have no view on it keep it simple. + // Safety first. If we do not know what system we run on, we keep it simple. return .sequential #endif } @@ -69,19 +70,19 @@ extension CopyStrategy { // A copy fundamentally can't work without two descriptors unless you copy - // everything into memory which is infeasible/inefficeint for large copies. + // everything into memory which is infeasible/inefficient for large copies. private static let minDescriptorsAllowed = 2 - /// Operate in whatever manner is deemed a reasonable default for the platform. - /// This will limit the maximum file descriptors usage based on 'reasonable' defaults. + /// Operate in whatever manner is deemed a reasonable default for the platform.
This will limit + /// the maximum file descriptor usage based on reasonable defaults. /// /// Current assumptions (which are subject to change): /// - Only one copy operation would be performed at once /// - The copy operation is not intended to be the primary activity on the device public static let platformDefault: Self = Self(Self.determinePlatformDefault()) - /// The copy is done asynchronously, but only one operation will occur at a time. - /// This is the only way to guarantee only one callback to the `shouldCopyItem` will happen at a time + /// The copy is done asynchronously, but only one operation will occur at a time. This is the + /// only way to guarantee that only one callback to `shouldCopyItem` will happen at a time. public static let sequential: Self = Self(.sequential) /// Allow multiple IO operations to run concurrently, including file copies/directory creation and scanning diff --git a/Sources/NIOFileSystem/FileSystem.swift b/Sources/NIOFileSystem/FileSystem.swift index 76a1bcb8a92..33bf8c605e6 100644 --- a/Sources/NIOFileSystem/FileSystem.swift +++ b/Sources/NIOFileSystem/FileSystem.swift @@ -32,12 +32,11 @@ import Bionic /// /// ### Creating a `FileSystem` /// -/// You should prefer using the `shared` instance of the file system. The -/// `shared` instance uses two threads unless the `SWIFT_FILE_SYSTEM_THREAD_COUNT` -/// environment variable is set. +/// You should prefer using the `shared` instance of the file system. The `shared` instance uses two +/// threads unless the `SWIFT_FILE_SYSTEM_THREAD_COUNT` environment variable is set. /// -/// If you require more granular control you can create a ``FileSystem`` with the required number -/// of threads by calling ``withFileSystem(numberOfThreads:_:)`` or by using ``init(threadPool:)``. +/// If you require more granular control, you can create a ``FileSystem`` with the required number of +/// threads by calling ``withFileSystem(numberOfThreads:_:)`` or by using ``init(threadPool:)``.
/// /// ### Errors /// @@ -54,8 +53,8 @@ public struct FileSystem: Sendable, FileSystemProtocol { /// Returns a shared global instance of the ``FileSystem``. /// /// The file system executes blocking work in a thread pool which defaults to having two - /// threads. This can be modified by `blockingPoolThreadCountSuggestion` or by - /// setting the `NIO_SINGLETON_BLOCKING_POOL_THREAD_COUNT` environment variable. + /// threads. This can be modified by `blockingPoolThreadCountSuggestion` or by setting the + /// `NIO_SINGLETON_BLOCKING_POOL_THREAD_COUNT` environment variable. public static var shared: FileSystem { globalFileSystem } private let threadPool: NIOThreadPool @@ -70,8 +69,8 @@ public struct FileSystem: Sendable, FileSystemProtocol { /// Creates a new ``FileSystem`` using the provided thread pool. /// /// - Parameter threadPool: A started thread pool to execute blocking system calls on. The - /// ``FileSystem`` doesn't take ownership of the thread pool and you remain responsible - /// for shutting it down when necessary. + /// ``FileSystem`` doesn't take ownership of the thread pool and you remain responsible for + /// shutting it down when necessary. public init(threadPool: NIOThreadPool) { self.init(threadPool: threadPool, ownsThreadPool: false) } @@ -217,8 +216,8 @@ public struct FileSystem: Sendable, FileSystemProtocol { /// Error codes thrown include: /// - ``FileSystemError/Code-swift.struct/fileAlreadyExists`` if a file or directory already /// exists . - /// - ``FileSystemError/Code-swift.struct/invalidArgument`` if a component in the `path` - /// prefix does not exist and `createIntermediateDirectories` is `false`. + /// - ``FileSystemError/Code-swift.struct/invalidArgument`` if a component in the `path` prefix + /// does not exist and `createIntermediateDirectories` is `false`. 
/// /// #### Implementation details /// @@ -248,10 +247,10 @@ public struct FileSystem: Sendable, FileSystemProtocol { /// #### Errors /// /// Error codes thrown include: - /// - ``FileSystemError/Code-swift.struct/invalidArgument`` if the template doesn't end - /// in at least 3 'X's. - /// - ``FileSystemError/Code-swift.struct/permissionDenied`` if the user doesn't have - /// permission to create a directory at the path specified in the template. + /// - ``FileSystemError/Code-swift.struct/invalidArgument`` if the template doesn't end in at + /// least 3 'X's. + /// - ``FileSystemError/Code-swift.struct/permissionDenied`` if the user doesn't have permission + /// to create a directory at the path specified in the template. /// /// #### Implementation details /// @@ -277,9 +276,9 @@ public struct FileSystem: Sendable, FileSystemProtocol { /// /// - Parameters: /// - path: The path of the file. - /// - infoAboutSymbolicLink: If the file is a symbolic link and this parameter is `true` then information - /// about the link will be returned, otherwise information about the destination of the - /// symbolic link is returned. + /// - infoAboutSymbolicLink: If the file is a symbolic link and this parameter is `true`, + /// then information about the link will be returned. Otherwise, information about the + /// destination of the symbolic link is returned. /// - Returns: Information about the file at the given path or `nil` if no file exists. public func info( forFileAt path: FilePath, @@ -299,18 +298,18 @@ public struct FileSystem: Sendable, FileSystemProtocol { /// - symbolic link, or /// - directory. /// - /// But `shouldCopyItem` can be used to ignore things outside this supported set. + /// `shouldCopyItem` can be used to ignore objects not part of this set. 
/// /// #### Errors /// /// In addition to the already documented errors these may be thrown - /// - ``FileSystemError/Code-swift.struct/unsupported`` if an item to be copied is not a - /// regular file, symbolic link or directory. + /// - ``FileSystemError/Code-swift.struct/unsupported`` if an item to be copied is not a regular + /// file, symbolic link or directory. /// /// #### Implementation details /// - /// This function is platform dependent. On Darwin the `copyfile(2)` system call is - /// used and items are cloned where possible. On Linux the `sendfile(2)` system call is used. + /// This function is platform dependent. On Darwin the `copyfile(2)` system call is used and + /// items are cloned where possible. On Linux the `sendfile(2)` system call is used. public func copyItem( at sourcePath: FilePath, to destinationPath: FilePath, @@ -334,8 +333,8 @@ public struct FileSystem: Sendable, FileSystemProtocol { ) } - // By doing this before looking at the type we allow callers to decide whether - // unanticipated kinds of entries can be safely ignored without needing changes upstream + // By doing this before looking at the type, we allow callers to decide whether + // unanticipated kinds of entries can be safely ignored without needing changes upstream. if await shouldCopyItem(.init(path: sourcePath, type: info.type)!, destinationPath) { switch info.type { case .regular: @@ -400,7 +399,7 @@ public struct FileSystem: Sendable, FileSystemProtocol { switch result { case .success: - // Great; we removed 1 whole item. + // Great; we removed an entire item. return 1 case .failure(.noSuchFileOrDirectory): @@ -413,8 +412,8 @@ public struct FileSystem: Sendable, FileSystemProtocol { code: .notEmpty, message: """ Can't remove directory at path '\(path)', it isn't empty and \ - 'removeItemRecursively' is false. Remove items from the directory first \ - or set 'removeItemRecursively' to true when calling \ + 'removeItemRecursively' is false. 
Remove items from the directory first or \ + set 'removeItemRecursively' to true when calling \ 'removeItem(at:recursively:)'. """, cause: nil, @@ -674,8 +673,8 @@ private let globalFileSystem: FileSystem = { extension NIOSingletons { /// Returns a shared global instance of the ``FileSystem``. /// - /// The file system executes blocking work in a thread pool see `blockingPoolThreadCountSuggestion` - /// for the default behaviour and ways to control it. + /// The file system executes blocking work in a thread pool. See + /// `blockingPoolThreadCountSuggestion` for the default behaviour and ways to control it. public static var fileSystem: FileSystem { globalFileSystem } } @@ -737,7 +736,8 @@ extension FileSystem { } } - /// Opens `path` for reading and writing and returns ``ReadWriteFileHandle`` or ``FileSystemError``. + /// Opens `path` for reading and writing and returns ``ReadWriteFileHandle`` or + /// ``FileSystemError``. private func _openFile( forReadingAndWritingAt path: FilePath, options: OpenOptions.Write @@ -777,15 +777,16 @@ extension FileSystem { withIntermediateDirectories createIntermediateDirectories: Bool, permissions: FilePermissions ) -> Result { - // Logic, assuming we are creating intermediate directories: try creating the directory, - // if it fails with ENOENT (no such file or directory) then drop the last component and - // append it to a buffer. Repeat until the path is empty meaning we cannot create the - // directory, or we succeed in which case we can append build up our original path - // creating directories one at a time. + // We assume that we will be creating intermediate directories: + // - Try creating the directory. If it fails with ENOENT (no such file or directory), then + // drop the last component and append it to a buffer. + // - Repeat until either the path is empty (meaning we cannot create the directory) or we + // succeed, in which case we build up our original path again, creating directories one + // at a time.
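The drop-then-rebuild loop described in the rewritten comment can be sketched in isolation. The following is an illustrative, self-contained simulation of the idea, not NIO code: the in-memory `mkdir` stand-in and the `[String]` path representation are assumptions made for the example.

```swift
// Illustrative simulation of "create directory with intermediates":
// try the full path first; on the equivalent of ENOENT, drop the last
// component and retry one level up; once a creation succeeds, replay the
// dropped components one at a time.
func createDirectoryWithIntermediates(
    _ path: [String],
    existing: inout Set<[String]>
) -> Bool {
    // Stand-in for mkdir(2): succeeds only when the parent already exists.
    func mkdir(_ p: [String]) -> Bool {
        guard p.count == 1 || existing.contains(Array(p.dropLast())) else {
            return false  // Models ENOENT: a path component is missing.
        }
        existing.insert(p)
        return true
    }

    var dropped: [String] = []
    var current = path
    while !current.isEmpty {
        if mkdir(current) { break }
        dropped.append(current.removeLast())
    }
    if current.isEmpty { return false }  // Nothing at all could be created.
    // Replay the dropped components, creating one directory per step.
    while let component = dropped.popLast() {
        current.append(component)
        guard mkdir(current) else { return false }
    }
    return true
}
```

The early-exit on an empty path mirrors the "cannot create the directory" branch; the replay loop mirrors building the original path back up one component at a time.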
var droppedComponents: [FilePath.Component] = [] var path = fullPath - // Normalize the path to remove any '..' which may not be necessary. + // Normalize the path to remove any superfluous '..'. path.lexicallyNormalize() if path.isEmpty { @@ -857,28 +858,24 @@ extension FileSystem { } } - /// Represents an item in a directory that needs copying, or - /// an explicit indication of the end of items. - /// The provision of the ``endOfDir`` case significantly simplifies the parallel code + /// Represents an item in a directory that needs copying, or an explicit indication of the end + /// of items. The provision of the ``endOfDir`` case significantly simplifies the parallel code. enum DirCopyItem: Hashable, Sendable { case endOfDir case toCopy(from: DirectoryEntry, to: FilePath) } - /// Creates the directory ``destinationPath`` based on the directory at ``sourcePath`` - /// including any permissions/attributes. - /// It does not copy the contents but does indicate the items directly within ``sourcePath`` which - /// should be copied - /// - /// This is a little cumbersome because it is used by ``copyDirectorySequential`` and - /// ``copyDirectoryParallel``. - /// It is desirable to use the file descriptor for the directory itself for as little time as possible - /// (certainly not across async invocations). - /// The down stream paths in the parallel and sequential paths are very different - /// - Returns: - /// An array of `DirCopyItem` which have passed the ``shouldCopyItem```filter - /// The target file paths will all be in ``destinationPath`` - /// The array will always finish with an ``DirCopyItem.endOfDir`` + /// Creates the directory ``destinationPath`` based on the directory at ``sourcePath`` including + /// any permissions/attributes. It does not copy the contents but indicates the items within + /// ``sourcePath`` which should be copied.
+ /// + /// This is a little cumbersome because it is used by ``copyDirectorySequential`` and + /// ``copyDirectoryParallel``. It is desirable to use the directory's file descriptor for as + /// little time as possible, and certainly not across asynchronous invocations. The downstream + /// paths in the parallel and sequential cases are very different. + /// - Returns: An array of `DirCopyItem` which have passed the ``shouldCopyItem`` filter. The + /// target file paths will all be in ``destinationPath``. The array will always finish with + /// a ``DirCopyItem.endOfDir``. private func prepareDirectoryForRecusiveCopy( from sourcePath: FilePath, to destinationPath: FilePath, @@ -917,26 +914,24 @@ extension FileSystem { } } } catch let error as FileSystemError where error.code == .unsupported { - // Not all file systems support extended attributes. Swallow errors which indicate - // that is the case. + // Not all file systems support extended attributes. Swallow errors indicating this. () } #endif - // Build a list of items the caller needs to deal with, - // they then do any further work after closing the current directory + // Build a list of items the caller needs to deal with; the caller then does any further + // work after closing the current directory. var contentsToCopy = [DirCopyItem]() for try await batch in dir.listContents().batched() { for entry in batch { - // Any further work is pointless, we are under no obligation to cleanup - // so exit as fast and cleanly as possible. + // Any further work is pointless. We are under no obligation to clean up. Exit as + // fast and cleanly as possible. try Task.checkCancellation() let entryDestination = destinationPath.appending(entry.name) if await shouldCopyItem(entry, entryDestination) { - // Assume there's a good chance of everything in the batch - // being included in the common case. - // Let geometric growth go from this point though.
+ // Assume there's a good chance of everything in the batch being included in + // the common case. Let geometric growth go from this point though. if contentsToCopy.isEmpty { // Reserve space for the endOfDir entry too. contentsToCopy.reserveCapacity(batch.count + 1) @@ -967,8 +962,8 @@ extension FileSystem { } } - /// This could be achieved through quite complicated special casing of the parallel copy. - /// The resulting code is far harder to read and debug though so this is kept as a special case + /// This could be achieved through quite complicated special casing of the parallel copy. The + /// resulting code is far harder to read and debug, so this is kept as a special case. private func copyDirectorySequential( from sourcePath: FilePath, to destinationPath: FilePath, @@ -981,9 +976,9 @@ extension FileSystem { _ destination: FilePath ) async -> Bool ) async throws { - // Strategy: find all needed items to copy/recurse into while the directory is open; - // defer actual copying and recursion until after the source directory has been closed - // to avoid consuming too many file descriptors. + // Strategy: find all needed items to copy/recurse into while the directory is open; defer + // actual copying and recursion until after the source directory has been closed to avoid + // consuming too many file descriptors. let toCopy = try await self.prepareDirectoryForRecusiveCopy( from: sourcePath, to: destinationPath, @@ -997,9 +992,9 @@ extension FileSystem { // Sequential cases doesn't need to worry about this, it uses simple recursion. continue case let .toCopy(source, destination): - // Note: The entry type could have changed between finding it and acting on it. - // This is inherent in file systems, just more likely in an async environment - // we just accept those coming through as regular errors. + // Note: The entry type could have changed between finding it and acting on it. 
This + // is inherent in file systems, just more likely in an asynchronous environment. We + // just accept those coming through as regular errors. switch source.type { case .regular: do { @@ -1069,12 +1064,14 @@ extension FileSystem { shouldCopyItem: shouldCopyItem ) case let .parallel(maxDescriptors): - // Note that maxDescriptors was validated on construction of CopyStrategy. - // See notes on CopyStrategy about assumptions on descriptor use. - // For now we take the worst case peak for every operation, which is 2 descriptors, - // this keeps the downstream limiting code simple - // We do not preclude the use of more granular limiting in future (e.g. a directory - // scan only requires 1), for now we just drop any excess remainder entirely. + // Note that maxDescriptors was validated on construction of CopyStrategy. See notes on + // CopyStrategy about assumptions on descriptor use. For now, we take the worst case + // peak for every operation, which is two file descriptors. This keeps the downstream + // limiting code simple. + // + // We do not preclude the use of more granular limiting in the future (e.g. a directory + // scan requires only a single file descriptor). For now we just drop any excess + // remainder entirely. let limitValue = maxDescriptors / 2 return try await self.copyDirectoryParallel( from: sourcePath, @@ -1086,9 +1083,9 @@ extension FileSystem { } } - /// Building block of the parallel directory copy implementation - /// Each invovation of this is allowed to consume two file descriptors, - /// any further work (if any) should be sent to `yield` for future processing + /// Building block of the parallel directory copy implementation. Each invocation of this is + /// allowed to consume two file descriptors. Any further work (if any) should be sent to `yield` + /// for future processing. func copySelfAndEnqueueChildren( from: DirectoryEntry, to: FilePath, @@ -1344,7 +1341,7 @@ extension FileSystem { // Check that the destination doesn't exist. 
'rename' will remove it otherwise! switch self._info(forFileAt: destinationPath, infoAboutSymbolicLink: true) { case .success(.none): - // Doens't exist: continue + // Doesn't exist: continue () case .success(.some): @@ -1375,8 +1372,7 @@ extension FileSystem { return .success(.moved) case .failure(.improperLink): - // The two paths are on different logical devices; copy and then remove the - // original. + // The two paths are on different logical devices; copy and then remove the original. return .success(.differentLogicalDevices) case let .failure(errno): @@ -1410,7 +1406,8 @@ extension FileSystem { let lastComponent = lastComponentPath.string - // Finding the index of the last non-'X' character in `lastComponent.string` and advancing it by one. + // Finding the index of the last non-'X' character in `lastComponent.string` and advancing + // it by one. let prefix: String var index = lastComponent.lastIndex(where: { $0 != "X" }) if index != nil { @@ -1484,7 +1481,8 @@ extension FileSystem { case let .failure(error): if let systemCallError = error.cause as? FileSystemError.SystemCallError { switch systemCallError.errno { - // If the file at the generated path already exists, we generate a new file path. + // If the file at the generated path already exists, we generate a new file + // path. case .fileExists, .isDirectory: break default: diff --git a/Sources/NIOFileSystem/Internal/ParallelDirCopy.swift b/Sources/NIOFileSystem/Internal/ParallelDirCopy.swift index 8f1d6b3813e..ce793264a1c 100644 --- a/Sources/NIOFileSystem/Internal/ParallelDirCopy.swift +++ b/Sources/NIOFileSystem/Internal/ParallelDirCopy.swift @@ -16,16 +16,18 @@ import NIOCore @available(macOS 10.15, iOS 13.0, watchOS 6.0, tvOS 13.0, *) extension FileSystem { - - /// Iterative implementation of a recursive parallel copy of the directory from `sourcePath` to `destinationPath`. + /// Iterative implementation of a recursive parallel copy of the directory from `sourcePath` to + /// `destinationPath`. 
+ /// + /// The parallelism is solely at the level of individual items (files, symbolic links and + /// directories). A larger file is not 'split' into concurrent reads or writes. /// - /// The parallelism is solely at the level of individual items (so files, symbolic links and directories), a larger file - /// is not considered for being 'split' into concurrent reads or wites. - /// If any symbolic link is encountered then only the link is copied. - /// The copied items will preserve permissions and any extended attributes (if supported by the file system). + /// If a symbolic link is encountered, then only the link is copied. If supported by the file + /// system, the copied items will preserve permissions and any extended attributes. /// - /// Note: `maxConcurrentOperations` is used as a hard (conservative) limit on the number of open file descriptors at any point. - /// Operations are assume to consume 2 descriptors so maximum open descriptors is `maxConcurrentOperations * 2` + /// Note: `maxConcurrentOperations` is used as a hard (conservative) limit on the number of open + /// file descriptors at any point. Operations are assumed to consume 2 descriptors, so the + /// maximum number of open descriptors is `maxConcurrentOperations * 2`. @usableFromInline func copyDirectoryParallel( from sourcePath: FilePath, @@ -40,9 +42,9 @@ extension FileSystem { _ destination: FilePath ) async -> Bool ) async throws { - // Implemented with NIOAsyncSequenceProducer rather than AsyncStream. - // It is approximately the same speed in the best case but has significantly less variance. - // NIOAsyncSequenceProducer also enforces a multi producer single consumer access pattern. + // Implemented with NIOAsyncSequenceProducer rather than AsyncStream. It is approximately + // the same speed in the best case, but has significantly less variance. + // NIOAsyncSequenceProducer also enforces a multi-producer, single-consumer access pattern.
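The descriptor budgeting described above is simple arithmetic. A minimal standalone sketch (the function name is illustrative, not NIO API) of turning a descriptor budget into a concurrency limit, with integer division dropping an odd remainder as the `copyItem` comments describe:

```swift
// Each copy/scan operation is budgeted two file descriptors, so a
// descriptor budget halves into a concurrency limit; integer division
// silently drops any odd remainder.
func concurrencyLimit(maxDescriptors: Int) -> Int {
    precondition(maxDescriptors >= 2, "a copy fundamentally needs two descriptors")
    return maxDescriptors / 2
}
```

With the platform default of eight descriptors this yields four concurrent operations; a budget of five yields two, with the spare descriptor deliberately left unused.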
         let copyRequiredQueue = NIOAsyncSequenceProducer.makeSequence(
             elementType: DirCopyItem.self,
             backPressureStrategy: NoBackPressureStrategy(),
@@ -50,33 +52,35 @@ extension FileSystem {
             delegate: DirCopyDelegate()
         )
-        // We ignore the result of yield in all cases because we are not implementing back pressure
-        // and cancellation is dealt with separately.
+        // We ignore the result of yield in all cases, because we are not implementing back
+        // pressure, and cancellation is dealt with separately.
         @Sendable
         func yield(_ contentsOf: [DirCopyItem]) {
             _ = copyRequiredQueue.source.yield(contentsOf: contentsOf)
         }
-        // Kick start the procees by enqueuing the root entry,
-        // the calling function already validated the root needed copying.
+        // Kick-start the process by enqueuing the root entry. The calling function already
+        // validated the root needed copying, so it is safe to force unwrap the value.
         _ = copyRequiredQueue.source.yield(
             .toCopy(from: .init(path: sourcePath, type: .directory)!, to: destinationPath)
         )
-        // The processing of the very first item (the root) will increment this,
-        // after then when it hits zero we've finished.
+        // The processing of the very first item (the root) will increment this counter. Processing
+        // will finish when the counter hits zero again.
+        //
         // This does not need to be a ManagedAtomic or similar because:
-        // - All maintenance of state is done in the withThrowingTaskGroup callback
+        // - All state maintenance is done within the withThrowingTaskGroup closure
        // - All actual file system work is done by tasks created on the `taskGroup`
         var activeDirCount = 0
-        // Despite there being no 'result' of each operation we cannot use a discarding task group
+        // Despite there being no 'result' for each operation, we cannot use a discarding task group,
         // because we use the 'drain results' queue as a concurrency limiting side effect.
         try await withThrowingTaskGroup(of: Void.self) { taskGroup in
-
-            // Code handling each item to process on the current task
+            // Process each item in the current task.
+            //
             // Side Effects:
             // - Updates activeDirCount and finishes the stream if required.
-            // - Either adds a single task on `taskGroup` or none.
+            // - Might add a task to `taskGroup`.
+            //
             // Returns true if it added a task, false otherwise.
             func onNextItem(_ item: DirCopyItem) -> Bool {
                 switch item {
@@ -105,7 +109,7 @@ extension FileSystem {
             let iter = copyRequiredQueue.sequence.makeAsyncIterator()
-            // inProgress counts the number of tasks we have added to the task group
+            // inProgress counts the number of tasks we have added to the task group.
             // Get up to the maximum concurrency first.
             // We haven't started monitoring for task completion, so inProgress is 'worst case'.
             var inProgress = 0
@@ -116,16 +120,15 @@ extension FileSystem {
                         inProgress += 1
                     }
                 } else {
-                    // Either we completed things before we hit the limit or we were cancelled.
-                    // In the latter case we choose to propagate the cancel clearly.
-                    // This makes testing for the cancellation more reliable.
+                    // Either we completed things before we hit the limit or we were cancelled. In
+                    // the latter case we choose to propagate the cancellation clearly. This makes
+                    // testing for it more reliable.
                     try Task.checkCancellation()
                     return
                 }
             }
-            // Then operate one in (finish) -> one out (start another),
-            // but only for items that trigger a task.
+            // One in (finish) -> one out (start another), but only for items that trigger a task.
             while let _ = try await taskGroup.next() {
                 var keepConsuming = true
                 while keepConsuming {
@@ -133,7 +136,7 @@ extension FileSystem {
                     if let item = item {
                         keepConsuming = !onNextItem(item)
                     } else {
-                        // To accurately propagate the cancellation we must check here too
+                        // We must also check here to accurately propagate the cancellation.
                         try Task.checkCancellation()
                         keepConsuming = false
                     }
@@ -143,7 +146,7 @@ extension FileSystem {
         }
     }
-/// An 'always ask for more' no back-pressure strategy for a ``NIOAsyncSequenceProducer``.
+/// An 'always ask for more' no back-pressure strategy for a ``NIOAsyncSequenceProducer``.
 @available(macOS 10.15, iOS 13, tvOS 13, watchOS 6, *)
 private struct NoBackPressureStrategy: NIOAsyncSequenceProducerBackPressureStrategy {
     mutating func didYield(bufferDepth: Int) -> Bool { true }
diff --git a/Sources/NIOPosix/NIOThreadPool.swift b/Sources/NIOPosix/NIOThreadPool.swift
index 6d3bc08f730..75dfd08749e 100644
--- a/Sources/NIOPosix/NIOThreadPool.swift
+++ b/Sources/NIOPosix/NIOThreadPool.swift
@@ -441,11 +441,11 @@ extension NIOThreadPool {
     }
     /// Runs the submitted closure if the thread pool is still active, otherwise throw an error.
-    /// The closure will be run on the thread pool so can do blocking work.
+    /// The closure will be run on the thread pool, so it can do blocking work.
     ///
     /// - Parameters:
     ///   - body: The closure which performs some blocking work to be done on the thread pool.
-    /// - Returns: result of the passed closure.
+    /// - Returns: Result of the passed closure.
     @available(macOS 10.15, iOS 13, tvOS 13, watchOS 6, *)
     public func runIfActive<T: Sendable>(_ body: @escaping @Sendable () throws -> T) async throws -> T {
         let workID = self.nextWorkID.loadThenWrappingIncrement(ordering: .relaxed)