Merged

sync #14

Changes from all commits
56 commits
4537b34
YARN-11089. Fix typo in RM audit log. Contributed by Junfan Zhang.
szilard-nemeth Mar 21, 2022
e2701e2
YARN-11086. Add space in debug log of ParentQueue. Contributed by Jun…
szilard-nemeth Mar 21, 2022
c3124a3
YARN-10565. Refactor CS queue initialization to simplify weight mode …
9uapaw Mar 11, 2022
1d5650c
HDFS-13248: Namenode needs to use the actual client IP when going thr…
omalley Mar 17, 2022
2beb729
YARN-11087. Introduce the config to control the refresh interval in R…
9uapaw Mar 22, 2022
708a0ce
HADOOP-13704. Optimized S3A getContentSummary()
steveloughran Mar 22, 2022
8897549
HDFS-14617. Improve oiv tool to parse fsimage file in parallel with d…
Hexiaoqiao Mar 22, 2022
59d07bd
HADOOP-18160 Avoid shading wildfly.openssl runtime dependency (#4074)
andreAmorimF Mar 22, 2022
81879eb
HDFS-16471. Make HDFS ls tool cross platform (#4086)
GauthamBanasandra Mar 22, 2022
26ba384
Revert "HDFS-14617. Improve oiv tool to parse fsimage file in paralle…
Hexiaoqiao Mar 23, 2022
ef8bff0
HDFS-15987. Improve oiv tool to parse fsimage file in parallel with d…
Hexiaoqiao Mar 23, 2022
45ce1cc
HDFS-16501. Print the exception when reporting a bad block (#4062)
liubingxing Mar 23, 2022
921267c
YARN-11084. Introduce new config to specify AM default node-label whe…
9uapaw Mar 23, 2022
9edfe30
HADOOP-14661. Add S3 requester pays bucket support to S3A (#3962)
dannycjones Mar 23, 2022
077c6c6
YARN-10547. Decouple job parsing logic from SLSRunner. Contributed by…
9uapaw Mar 24, 2022
5261424
YARN-10552. Eliminate code duplication in SLSCapacityScheduler and SL…
9uapaw Mar 24, 2022
ffa0eab
YARN-11094. Follow up changes for YARN-10547. Contributed by Szilard …
brumi1024 Mar 25, 2022
565e848
HDFS-16434. Add opname to read/write lock for remaining operations (#…
tomscut Mar 25, 2022
08a77a7
YARN-10548. Decouple AM runner logic from SLSRunner. Contributed by S…
9uapaw Mar 25, 2022
da09d68
YARN-11069. Dynamic Queue ACL handling in Legacy and Flexible Auto Cr…
tomicooler Jan 27, 2022
61e809b
HADOOP-13386. Upgrade Avro to 1.9.2 (#3990)
pjfanning Mar 26, 2022
adbaf48
YARN-11100. Fix StackOverflowError in SLS scheduler event handling. C…
9uapaw Mar 26, 2022
046a620
HDFS-16355. Improve the description of dfs.block.scanner.volume.bytes…
GuoPhilipse Mar 27, 2022
1087633
Make upstream aware of 3.2.3 release.
iwasakims Mar 28, 2022
0fbd96a
Make upstream aware of 3.2.3 release.
iwasakims Mar 28, 2022
eb16421
HDFS-16517 Distance metric is wrong for non-DN machines in 2.10. Fixe…
omalley Mar 22, 2022
a9b4396
HDFS-16518: Add shutdownhook to invalidate the KeyProviders in the cache
li-leyang Mar 23, 2022
e386d6a
YARN-10549. Decouple RM runner logic from SLSRunner. Contributed by S…
9uapaw Mar 29, 2022
4e32318
HDFS-16523. Fix dependency error in hadoop-hdfs on M1 Mac (#4112)
aajisaka Mar 29, 2022
6eea28c
HDFS-16498. Fix NPE for checkBlockReportLease #4057. Contributed by t…
Hexiaoqiao Mar 30, 2022
08e6d0c
HADOOP-18145. Fileutil's unzip method causes unzipped files to lose t…
smallzhongfeng Mar 30, 2022
dc4a680
MAPREDUCE-7373. Building MapReduce NativeTask fails on Fedora 34+ (#4…
sekikn Mar 30, 2022
ac50657
HDFS-16413. Reconfig dfs usage parameters for datanode (#3863)
tomscut Mar 30, 2022
6e00a79
YARN-11106. Fix the test failure due to missing conf of yarn.resource…
zuston Mar 30, 2022
ab8c360
YARN-10550. Decouple NM runner logic from SLSRunner. Contributed by S…
szilard-nemeth Dec 26, 2020
2bf78e2
HDFS-16511. Improve lock type for ReplicaMap under fine-grain lock mo…
Hexiaoqiao Mar 31, 2022
9a4dddd
HDFS-16507. [SBN read] Avoid purging edit log which is in progress (#…
tomscut Mar 31, 2022
e044a46
YARN-11088. Introduce the config to control the AM allocated to non-e…
zuston Mar 24, 2022
94031b7
YARN-11103. SLS cleanup after previously merged SLS refactor jiras. C…
szilard-nemeth Mar 29, 2022
15a5ea2
HADOOP-18169. getDelegationTokens in ViewFs should also fetch the tok…
xinglin Mar 31, 2022
4b1a6bf
YARN-11102. Fix spotbugs error in hadoop-sls module. Contributed by S…
9uapaw Apr 1, 2022
34b3275
HDFS-16477. [SPS]: Add metric PendingSPSPaths for getting the number …
tomscut Apr 2, 2022
4ef1d3e
HDFS-16472. Make HDFS setrep tool cross platform (#4130)
GauthamBanasandra Apr 5, 2022
966b773
HDFS-16527. Add global timeout rule for TestRouterDistCpProcedure (#4…
tomscut Apr 6, 2022
bbfe350
HDFS-16530. setReplication debug log creates a new string even if deb…
sodonnel Apr 6, 2022
61bbdfd
HDFS-16529. Remove unnecessary setObserverRead in TestConsistentReads…
wzhallright Apr 6, 2022
7c20602
HDFS-16522. Set Http and Ipc ports for Datanodes in MiniDFSCluster (#…
virajjasani Apr 6, 2022
4b786c7
HADOOP-18178. Upgrade jackson to 2.13.2 and jackson-databind to 2.13.…
pjfanning Apr 7, 2022
f709355
HADOOP-18188. Support touch command for directory (#4135)
virajjasani Apr 7, 2022
807a428
HDFS-16457.Make fs.getspaceused.classname reconfigurable (#4069)
singer-bin Apr 8, 2022
5412fbf
HDFS-16460. [SPS]: Handle failure retries for moving tasks (#4001)
tomscut Apr 8, 2022
bfde910
HADOOP-18195. Make jackson 1 a runtime scope dependency (#4149)
pjfanning Apr 8, 2022
37650ce
HDFS-16497. EC: Add param comment for liveBusyBlockIndices with HDFS-…
tasanuma Apr 8, 2022
b69ede7
HADOOP-18191. Log retry count while handling exceptions in RetryInvoc…
virajjasani Apr 8, 2022
d5e97fe
HDFS-16473. Make HDFS stat tool cross platform (#4145)
GauthamBanasandra Apr 8, 2022
5de78ce
HDFS-16516. Fix Fsshell wrong params (#4090). Contributed by GuoPhili…
GuoPhilipse Apr 11, 2022
14 changes: 7 additions & 7 deletions LICENSE-binary
@@ -218,12 +218,12 @@ com.aliyun.oss:aliyun-sdk-oss:3.13.2
com.amazonaws:aws-java-sdk-bundle:1.11.901
com.cedarsoftware:java-util:1.9.0
com.cedarsoftware:json-io:2.5.1
-com.fasterxml.jackson.core:jackson-annotations:2.13.0
-com.fasterxml.jackson.core:jackson-core:2.13.0
-com.fasterxml.jackson.core:jackson-databind:2.13.0
-com.fasterxml.jackson.jaxrs:jackson-jaxrs-base:2.13.0
-com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider:2.13.0
-com.fasterxml.jackson.module:jackson-module-jaxb-annotations:2.13.0
+com.fasterxml.jackson.core:jackson-annotations:2.13.2
+com.fasterxml.jackson.core:jackson-core:2.13.2
+com.fasterxml.jackson.core:jackson-databind:2.13.2.2
+com.fasterxml.jackson.jaxrs:jackson-jaxrs-base:2.13.2
+com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider:2.13.2
+com.fasterxml.jackson.module:jackson-module-jaxb-annotations:2.13.2
com.fasterxml.uuid:java-uuid-generator:3.1.4
com.fasterxml.woodstox:woodstox-core:5.3.0
com.github.davidmoten:rxjava-extras:0.8.0.17
@@ -283,7 +283,7 @@ log4j:log4j:1.2.17
net.java.dev.jna:jna:5.2.0
net.minidev:accessors-smart:1.2
net.minidev:json-smart:2.4.7
-org.apache.avro:avro:1.7.7
+org.apache.avro:avro:1.9.2
org.apache.commons:commons-collections4:4.2
org.apache.commons:commons-compress:1.21
org.apache.commons:commons-configuration2:2.1.1
3 changes: 3 additions & 0 deletions hadoop-client-modules/hadoop-client-api/pom.xml
@@ -161,6 +161,9 @@
<!-- Exclude snappy-java -->
<exclude>org/xerial/snappy/*</exclude>
<exclude>org/xerial/snappy/**/*</exclude>
<!-- Exclude org.wildfly.openssl -->
<exclude>org/wildfly/openssl/*</exclude>
<exclude>org/wildfly/openssl/**/*</exclude>
</excludes>
</relocation>
<relocation>
@@ -19,6 +19,7 @@

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
import org.apache.hadoop.classification.VisibleForTesting;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@@ -154,10 +155,20 @@ boolean running() {
/**
* How long in between runs of the background refresh.
*/
-long getRefreshInterval() {
+@VisibleForTesting
+public long getRefreshInterval() {
return refreshInterval;
}

/**
* Randomize the refresh interval timing by this amount, the actual interval will be chosen
* uniformly between {@code interval-jitter} and {@code interval+jitter}.
*/
@VisibleForTesting
public long getJitter() {
return jitter;
}
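
The jitter contract documented above (the actual interval is drawn uniformly from `[interval - jitter, interval + jitter]`) can be sketched standalone. The class and method names below are illustrative, not part of the patch:

```java
import java.util.concurrent.ThreadLocalRandom;

public class JitterSketch {
    // Pick the next refresh delay uniformly from
    // [intervalMs - jitterMs, intervalMs + jitterMs].
    static long nextRefreshMs(long intervalMs, long jitterMs) {
        if (jitterMs <= 0) {
            return intervalMs;
        }
        // nextLong(bound) is exclusive at the top, so add 1 to make the
        // upper end of the interval reachable.
        return intervalMs - jitterMs
            + ThreadLocalRandom.current().nextLong(2 * jitterMs + 1);
    }
}
```

Spreading refresh times this way keeps many datanode volumes from recomputing their usage in lockstep.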

/**
* Reset the current used data amount. This should be called
* when the cached value is re-computed.
@@ -36,15 +36,18 @@
import java.nio.charset.CharsetEncoder;
import java.nio.charset.StandardCharsets;
import java.nio.file.AccessDeniedException;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.EnumSet;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
@@ -53,13 +56,13 @@
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;
import java.util.zip.GZIPInputStream;
-import java.util.zip.ZipEntry;
-import java.util.zip.ZipFile;
-import java.util.zip.ZipInputStream;

import org.apache.commons.collections.map.CaseInsensitiveMap;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
+import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
+import org.apache.commons.compress.archivers.zip.ZipArchiveInputStream;
+import org.apache.commons.compress.archivers.zip.ZipFile;
import org.apache.commons.io.FileUtils;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
@@ -644,12 +647,12 @@ public static long getDU(File dir) {
*/
public static void unZip(InputStream inputStream, File toDir)
throws IOException {
-try (ZipInputStream zip = new ZipInputStream(inputStream)) {
+try (ZipArchiveInputStream zip = new ZipArchiveInputStream(inputStream)) {
int numOfFailedLastModifiedSet = 0;
String targetDirPath = toDir.getCanonicalPath() + File.separator;
-for(ZipEntry entry = zip.getNextEntry();
+for(ZipArchiveEntry entry = zip.getNextZipEntry();
entry != null;
-entry = zip.getNextEntry()) {
+entry = zip.getNextZipEntry()) {
if (!entry.isDirectory()) {
File file = new File(toDir, entry.getName());
if (!file.getCanonicalPath().startsWith(targetDirPath)) {
@@ -668,6 +671,9 @@ public static void unZip(InputStream inputStream, File toDir)
if (!file.setLastModified(entry.getTime())) {
numOfFailedLastModifiedSet++;
}
if (entry.getPlatform() == ZipArchiveEntry.PLATFORM_UNIX) {
Files.setPosixFilePermissions(file.toPath(), permissionsFromMode(entry.getUnixMode()));
}
}
}
if (numOfFailedLastModifiedSet > 0) {
@@ -677,6 +683,49 @@
}
}

/**
* Translate a numeric POSIX mode into permissions for the owner, the
* group, and others. Special mode bits are not preserved: if SUID is
* set, only the execute permission is retained.
* @param mode the permissions encoded as a numeric mode value
* @return the set of POSIX permissions represented by the mode
*/
private static Set<PosixFilePermission> permissionsFromMode(int mode) {
EnumSet<PosixFilePermission> permissions =
EnumSet.noneOf(PosixFilePermission.class);
addPermissions(permissions, mode, PosixFilePermission.OTHERS_READ,
PosixFilePermission.OTHERS_WRITE, PosixFilePermission.OTHERS_EXECUTE);
addPermissions(permissions, mode >> 3, PosixFilePermission.GROUP_READ,
PosixFilePermission.GROUP_WRITE, PosixFilePermission.GROUP_EXECUTE);
addPermissions(permissions, mode >> 6, PosixFilePermission.OWNER_READ,
PosixFilePermission.OWNER_WRITE, PosixFilePermission.OWNER_EXECUTE);
return permissions;
}

/**
* Add the read, write, and execute permissions encoded in the low three
* bits of {@code mode} to the given permission set.
* @param permissions the set to add permissions to
* @param mode the numeric mode whose low three bits are inspected
* @param r the read permission to add if the read bit is set
* @param w the write permission to add if the write bit is set
* @param x the execute permission to add if the execute bit is set
*/
private static void addPermissions(
Set<PosixFilePermission> permissions,
int mode,
PosixFilePermission r,
PosixFilePermission w,
PosixFilePermission x) {
if ((mode & 1L) == 1L) {
permissions.add(x);
}
if ((mode & 2L) == 2L) {
permissions.add(w);
}
if ((mode & 4L) == 4L) {
permissions.add(r);
}
}
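
Mirroring the bit tests above, a self-contained sketch (the class name `UnixModeDemo` is illustrative) shows how a numeric mode such as `0755` maps onto `PosixFilePermission` values:

```java
import java.nio.file.attribute.PosixFilePermission;
import java.util.EnumSet;
import java.util.Set;

public class UnixModeDemo {
    // Same scheme as the patch: bit 1 = execute, 2 = write, 4 = read,
    // with the mode shifted right by 3 for group and by 6 for owner.
    static Set<PosixFilePermission> fromMode(int mode) {
        EnumSet<PosixFilePermission> perms = EnumSet.noneOf(PosixFilePermission.class);
        add(perms, mode, PosixFilePermission.OTHERS_READ,
            PosixFilePermission.OTHERS_WRITE, PosixFilePermission.OTHERS_EXECUTE);
        add(perms, mode >> 3, PosixFilePermission.GROUP_READ,
            PosixFilePermission.GROUP_WRITE, PosixFilePermission.GROUP_EXECUTE);
        add(perms, mode >> 6, PosixFilePermission.OWNER_READ,
            PosixFilePermission.OWNER_WRITE, PosixFilePermission.OWNER_EXECUTE);
        return perms;
    }

    static void add(Set<PosixFilePermission> perms, int mode,
            PosixFilePermission r, PosixFilePermission w, PosixFilePermission x) {
        if ((mode & 1) != 0) perms.add(x);
        if ((mode & 2) != 0) perms.add(w);
        if ((mode & 4) != 0) perms.add(r);
    }
}
```

`PosixFilePermissions.toString(fromMode(0755))` renders the familiar `rwxr-xr-x` form, matching what `ls -l` would show for the extracted file.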

/**
* Given a File input it will unzip it in the unzip directory.
* passed as the second parameter
@@ -685,14 +734,14 @@ public static void unZip(File inFile, File unzipDir) throws IOException {
* @throws IOException An I/O exception has occurred
*/
public static void unZip(File inFile, File unzipDir) throws IOException {
-Enumeration<? extends ZipEntry> entries;
+Enumeration<? extends ZipArchiveEntry> entries;
ZipFile zipFile = new ZipFile(inFile);

try {
-entries = zipFile.entries();
+entries = zipFile.getEntries();
String targetDirPath = unzipDir.getCanonicalPath() + File.separator;
while (entries.hasMoreElements()) {
-ZipEntry entry = entries.nextElement();
+ZipArchiveEntry entry = entries.nextElement();
if (!entry.isDirectory()) {
InputStream in = zipFile.getInputStream(entry);
try {
@@ -717,6 +766,9 @@ public static void unZip(File inFile, File unzipDir) throws IOException {
} finally {
out.close();
}
if (entry.getPlatform() == ZipArchiveEntry.PLATFORM_UNIX) {
Files.setPosixFilePermissions(file.toPath(), permissionsFromMode(entry.getUnixMode()));
}
} finally {
in.close();
}
@@ -147,9 +147,6 @@ protected void processOptions(LinkedList<String> args) {

@Override
protected void processPath(PathData item) throws IOException {
-if (item.stat.isDirectory()) {
-  throw new PathIsDirectoryException(item.toString());
-}
touch(item);
}

@@ -746,6 +746,17 @@ public List<Token<?>> getDelegationTokens(String renewer) throws IOException {
result.addAll(tokens);
}
}

// Add tokens from fallback FS
if (this.fsState.getRootFallbackLink() != null) {
AbstractFileSystem rootFallbackFs =
this.fsState.getRootFallbackLink().getTargetFileSystem();
List<Token<?>> tokens = rootFallbackFs.getDelegationTokens(renewer);
if (tokens != null) {
result.addAll(tokens);
}
}

return result;
}

@@ -387,12 +387,12 @@ private RetryInfo handleException(final Method method, final int callId,
throw retryInfo.getFailException();
}

-log(method, retryInfo.isFailover(), counters.failovers, retryInfo.delay, e);
+log(method, retryInfo.isFailover(), counters.failovers, counters.retries, retryInfo.delay, e);
return retryInfo;
}

-private void log(final Method method, final boolean isFailover,
-    final int failovers, final long delay, final Exception ex) {
+private void log(final Method method, final boolean isFailover, final int failovers,
+    final int retries, final long delay, final Exception ex) {
boolean info = true;
// If this is the first failover to this proxy, skip logging at INFO level
if (!failedAtLeastOnce.contains(proxyDescriptor.getProxyInfo().toString()))
@@ -408,13 +408,15 @@ private void log(final Method method, final boolean isFailover,
}

final StringBuilder b = new StringBuilder()
-.append(ex + ", while invoking ")
+.append(ex)
+.append(", while invoking ")
.append(proxyDescriptor.getProxyInfo().getString(method.getName()));
if (failovers > 0) {
b.append(" after ").append(failovers).append(" failover attempts");
}
b.append(isFailover? ". Trying to failover ": ". Retrying ");
b.append(delay > 0? "after sleeping for " + delay + "ms.": "immediately.");
b.append(" Current retry count: ").append(retries).append(".");

if (info) {
LOG.info(b.toString());
@@ -43,6 +43,11 @@
@InterfaceStability.Evolving
public final class CallerContext {
public static final Charset SIGNATURE_ENCODING = StandardCharsets.UTF_8;

// field names
public static final String CLIENT_IP_STR = "clientIp";
public static final String CLIENT_PORT_STR = "clientPort";

/** The caller context.
*
* It will be truncated if it exceeds the maximum allowed length in
@@ -1508,7 +1508,19 @@ public UserGroupInformation getRealUser() {
return null;
}


/**
* If this is a proxy user, get the real user. Otherwise, return
* this user.
* @param user the user to check
* @return the real user or self
*/
public static UserGroupInformation getRealUserOrSelf(UserGroupInformation user) {
if (user == null) {
return null;
}
UserGroupInformation real = user.getRealUser();
return real != null ? real : user;
}
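
The null-safe resolution this helper performs can be sketched with a minimal stand-in for `UserGroupInformation` (the `User` record below is illustrative only, not the Hadoop type):

```java
public class RealUserDemo {
    // Stand-in for UserGroupInformation: a proxy user carries a
    // reference to the real (authenticated) user behind it.
    record User(String name, User realUser) {}

    // Proxy user -> its real user; ordinary user -> itself; null -> null.
    static User realUserOrSelf(User user) {
        if (user == null) {
            return null;
        }
        User real = user.realUser();
        return real != null ? real : user;
    }
}
```

Callers such as audit logging can then always record the authenticated identity without first checking whether the caller came through a proxy.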

/**
* This class is used for storing the groups for testing. It stores a local
@@ -328,16 +328,16 @@ Returns 0 on success and -1 on error.
get
---

-Usage: `hadoop fs -get [-ignorecrc] [-crc] [-p] [-f] [-t <thread count>] [-q <thread pool queue size>] <src> ... <localdst> `
+Usage: `hadoop fs -get [-ignoreCrc] [-crc] [-p] [-f] [-t <thread count>] [-q <thread pool queue size>] <src> ... <localdst> `

-Copy files to the local file system. Files that fail the CRC check may be copied with the -ignorecrc option. Files and CRCs may be copied using the -crc option.
+Copy files to the local file system. Files that fail the CRC check may be copied with the -ignoreCrc option. Files and CRCs may be copied using the -crc option.

Options:

* `-p` : Preserves access and modification times, ownership and the permissions.
(assuming the permissions can be propagated across filesystems)
* `-f` : Overwrites the destination if it already exists.
-* `-ignorecrc` : Skip CRC checks on the file(s) downloaded.
+* `-ignoreCrc` : Skip CRC checks on the file(s) downloaded.
* `-crc`: write CRC checksums for the files downloaded.
* `-t <thread count>` : Number of threads to be used, default is 1.
Useful when downloading directories containing more than 1 file.
@@ -299,6 +299,7 @@ Each metrics record contains tags such as HAState and Hostname as additional inf
| `FSN(Read/Write)Lock`*OperationName*`NanosAvgTime` | Average time of holding the lock by operations in nanoseconds |
| `FSN(Read/Write)LockOverallNanosNumOps` | Total number of acquiring lock by all operations |
| `FSN(Read/Write)LockOverallNanosAvgTime` | Average time of holding the lock by all operations in nanoseconds |
| `PendingSPSPaths` | The number of paths to be processed by storage policy satisfier |

JournalNode
-----------
@@ -453,6 +453,26 @@ The function `getLocatedFileStatus(FS, d)` is as defined in
The atomicity and consistency constraints are as for
`listStatus(Path, PathFilter)`.


### `ContentSummary getContentSummary(Path path)`

Given a path, return its content summary.

`getContentSummary()` first checks whether the given path is a file; if so, it
returns a summary with a directory count of 0 and a file count of 1.

#### Preconditions

exists(FS, path) else raise FileNotFoundException

#### Postconditions

Returns a `ContentSummary` object with information such as directory count
and file count for a given path.

The atomicity and consistency constraints are as for
`listStatus(Path, PathFilter)`.
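
The counting semantics of this contract can be modelled against a local filesystem with `java.nio`. This is a sketch of the spec, not the Hadoop or S3A implementation; note that for a directory the walk counts the directory itself, as Hadoop's `ContentSummary` does:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class ContentSummarySketch {
    // Returns {directoryCount, fileCount} for the given path.
    static long[] summarize(Path path) throws IOException {
        if (Files.isRegularFile(path)) {
            // Spec: a plain file reports 0 directories and 1 file.
            return new long[] {0L, 1L};
        }
        long[] counts = new long[2];
        try (Stream<Path> tree = Files.walk(path)) {
            tree.forEach(p -> {
                if (Files.isDirectory(p)) {
                    counts[0]++;   // includes the root directory itself
                } else {
                    counts[1]++;
                }
            });
        }
        return counts;
    }
}
```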

### `BlockLocation[] getFileBlockLocations(FileStatus f, int s, int l)`

#### Preconditions