diff --git a/LICENSE.txt b/LICENSE.txt index 92ae3b1ade435..f4d869d1bd0cd 100644 --- a/LICENSE.txt +++ b/LICENSE.txt @@ -1659,3 +1659,180 @@ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. The views and conclusions contained in the software and documentation are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of the FreeBSD Project. + +The binary distribution of this product bundles these dependencies under the +following license: +Java Concurrency in Practice book annotations 1.0 +-------------------------------------------------------------------------------- +THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE COMMONS +PUBLIC LICENSE ("CCPL" OR "LICENSE"). THE WORK IS PROTECTED BY COPYRIGHT AND/OR +OTHER APPLICABLE LAW. ANY USE OF THE WORK OTHER THAN AS AUTHORIZED UNDER THIS +LICENSE OR COPYRIGHT LAW IS PROHIBITED. + +BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE TO BE +BOUND BY THE TERMS OF THIS LICENSE. THE LICENSOR GRANTS YOU THE RIGHTS CONTAINED +HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND CONDITIONS. + +1. Definitions + +"Collective Work" means a work, such as a periodical issue, anthology or +encyclopedia, in which the Work in its entirety in unmodified form, along with a +number of other contributions, constituting separate and independent works in +themselves, are assembled into a collective whole. A work that constitutes a +Collective Work will not be considered a Derivative Work (as defined below) for +the purposes of this License. +"Derivative Work" means a work based upon the Work or upon the Work and other +pre-existing works, such as a translation, musical arrangement, dramatization, +fictionalization, motion picture version, sound recording, art reproduction, +abridgment, condensation, or any other form in which the Work may be recast, +transformed, or adapted, except that a work that constitutes a Collective Work +will not be considered a Derivative Work for the purpose of this License. For +the avoidance of doubt, where the Work is a musical composition or sound +recording, the synchronization of the Work in timed-relation with a moving image +("synching") will be considered a Derivative Work for the purpose of this +License. +"Licensor" means the individual or entity that offers the Work under the terms +of this License. +"Original Author" means the individual or entity who created the Work. +"Work" means the copyrightable work of authorship offered under the terms of +this License. +"You" means an individual or entity exercising rights under this License who has +not previously violated the terms of this License with respect to the Work, or +who has received express permission from the Licensor to exercise rights under +this License despite a previous violation. +2. Fair Use Rights. Nothing in this license is intended to reduce, limit, or +restrict any rights arising from fair use, first sale or other limitations on +the exclusive rights of the copyright owner under copyright law or other +applicable laws. + +3. License Grant. 
Subject to the terms and conditions of this License, Licensor +hereby grants You a worldwide, royalty-free, non-exclusive, perpetual (for the +duration of the applicable copyright) license to exercise the rights in the Work +as stated below: + +to reproduce the Work, to incorporate the Work into one or more Collective +Works, and to reproduce the Work as incorporated in the Collective Works; +to create and reproduce Derivative Works; +to distribute copies or phonorecords of, display publicly, perform publicly, and +perform publicly by means of a digital audio transmission the Work including as +incorporated in Collective Works; +to distribute copies or phonorecords of, display publicly, perform publicly, and +perform publicly by means of a digital audio transmission Derivative Works. +For the avoidance of doubt, where the work is a musical composition: + +Performance Royalties Under Blanket Licenses. Licensor waives the exclusive +right to collect, whether individually or via a performance rights society (e.g. +ASCAP, BMI, SESAC), royalties for the public performance or public digital +performance (e.g. webcast) of the Work. +Mechanical Rights and Statutory Royalties. Licensor waives the exclusive right +to collect, whether individually or via a music rights agency or designated +agent (e.g. Harry Fox Agency), royalties for any phonorecord You create from the +Work ("cover version") and distribute, subject to the compulsory license created +by 17 USC Section 115 of the US Copyright Act (or the equivalent in other +jurisdictions). +Webcasting Rights and Statutory Royalties. For the avoidance of doubt, where the +Work is a sound recording, Licensor waives the exclusive right to collect, +whether individually or via a performance-rights society (e.g. SoundExchange), +royalties for the public digital performance (e.g. webcast) of the Work, subject +to the compulsory license created by 17 USC Section 114 of the US Copyright Act +(or the equivalent in other jurisdictions). +The above rights may be exercised in all media and formats whether now known or +hereafter devised. The above rights include the right to make such modifications +as are technically necessary to exercise the rights in other media and formats. +All rights not expressly granted by Licensor are hereby reserved. + +4. Restrictions.The license granted in Section 3 above is expressly made subject +to and limited by the following restrictions: + +You may distribute, publicly display, publicly perform, or publicly digitally +perform the Work only under the terms of this License, and You must include a +copy of, or the Uniform Resource Identifier for, this License with every copy or +phonorecord of the Work You distribute, publicly display, publicly perform, or +publicly digitally perform. You may not offer or impose any terms on the Work +that alter or restrict the terms of this License or the recipients' exercise of +the rights granted hereunder. You may not sublicense the Work. You must keep +intact all notices that refer to this License and to the disclaimer of +warranties. You may not distribute, publicly display, publicly perform, or +publicly digitally perform the Work with any technological measures that control +access or use of the Work in a manner inconsistent with the terms of this +License Agreement. The above applies to the Work as incorporated in a Collective +Work, but this does not require the Collective Work apart from the Work itself +to be made subject to the terms of this License. 
If You create a Collective +Work, upon notice from any Licensor You must, to the extent practicable, remove +from the Collective Work any credit as required by clause 4(b), as requested. If +You create a Derivative Work, upon notice from any Licensor You must, to the +extent practicable, remove from the Derivative Work any credit as required by +clause 4(b), as requested. +If you distribute, publicly display, publicly perform, or publicly digitally +perform the Work or any Derivative Works or Collective Works, You must keep +intact all copyright notices for the Work and provide, reasonable to the medium +or means You are utilizing: (i) the name of the Original Author (or pseudonym, +if applicable) if supplied, and/or (ii) if the Original Author and/or Licensor +designate another party or parties (e.g. a sponsor institute, publishing entity, +journal) for attribution in Licensor's copyright notice, terms of service or by +other reasonable means, the name of such party or parties; the title of the Work +if supplied; to the extent reasonably practicable, the Uniform Resource +Identifier, if any, that Licensor specifies to be associated with the Work, +unless such URI does not refer to the copyright notice or licensing information +for the Work; and in the case of a Derivative Work, a credit identifying the use +of the Work in the Derivative Work (e.g., "French translation of the Work by +Original Author," or "Screenplay based on original Work by Original Author"). +Such credit may be implemented in any reasonable manner; provided, however, that +in the case of a Derivative Work or Collective Work, at a minimum such credit +will appear where any other comparable authorship credit appears and in a manner +at least as prominent as such other comparable authorship credit. +5. Representations, Warranties and Disclaimer + +UNLESS OTHERWISE MUTUALLY AGREED TO BY THE PARTIES IN WRITING, LICENSOR OFFERS +THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING +THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE, INCLUDING, WITHOUT +LIMITATION, WARRANTIES OF TITLE, MERCHANTIBILITY, FITNESS FOR A PARTICULAR +PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, +OR THE PRESENCE OF ABSENCE OF ERRORS, WHETHER OR NOT DISCOVERABLE. SOME +JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO SUCH +EXCLUSION MAY NOT APPLY TO YOU. + +6. Limitation on Liability. EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE LAW, IN +NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR ANY SPECIAL, +INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES ARISING OUT OF THIS +LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS BEEN ADVISED OF THE +POSSIBILITY OF SUCH DAMAGES. + +7. Termination + +This License and the rights granted hereunder will terminate automatically upon +any breach by You of the terms of this License. Individuals or entities who have +received Derivative Works or Collective Works from You under this License, +however, will not have their licenses terminated provided such individuals or +entities remain in full compliance with those licenses. Sections 1, 2, 5, 6, 7, +and 8 will survive any termination of this License. +Subject to the above terms and conditions, the license granted here is perpetual +(for the duration of the applicable copyright in the Work). 
Notwithstanding the +above, Licensor reserves the right to release the Work under different license +terms or to stop distributing the Work at any time; provided, however that any +such election will not serve to withdraw this License (or any other license that +has been, or is required to be, granted under the terms of this License), and +this License will continue in full force and effect unless terminated as stated +above. +8. Miscellaneous + +Each time You distribute or publicly digitally perform the Work or a Collective +Work, the Licensor offers to the recipient a license to the Work on the same +terms and conditions as the license granted to You under this License. +Each time You distribute or publicly digitally perform a Derivative Work, +Licensor offers to the recipient a license to the original Work on the same +terms and conditions as the license granted to You under this License. +If any provision of this License is invalid or unenforceable under applicable +law, it shall not affect the validity or enforceability of the remainder of the +terms of this License, and without further action by the parties to this +agreement, such provision shall be reformed to the minimum extent necessary to +make such provision valid and enforceable. +No term or provision of this License shall be deemed waived and no breach +consented to unless such waiver or consent shall be in writing and signed by the +party to be charged with such waiver or consent. +This License constitutes the entire agreement between the parties with respect +to the Work licensed here. There are no understandings, agreements or +representations with respect to the Work not specified here. Licensor shall not +be bound by any additional provisions that may appear in any communication from +You. This License may not be modified without the mutual written agreement of +the Licensor and You. diff --git a/NOTICE.txt b/NOTICE.txt index 0c729e80f70c6..33c12ed0857d4 100644 --- a/NOTICE.txt +++ b/NOTICE.txt @@ -281,3 +281,175 @@ which has the following notices: Copyright 2004 Jason Paul Kitchen TypeUtil.java Copyright 2002-2012 Ramnivas Laddad, Juergen Hoeller, Chris Beams + +The binary distribution of this product bundles binaries of +Java Concurrency in Practice book annotations 1.0, +which has the following notices: + * Copyright (c) 2005 Brian Goetz and Tim Peierls Released under the Creative + Commons Attribution License (http://creativecommons.org/licenses/by/2.5) + Official home: http://www.jcip.net Any republication or derived work + distributed in source code form must include this copyright and license + notice. + +The binary distribution of this product bundles binaries of +Jetty 6.1.26, +which has the following notices: + * ============================================================== + Jetty Web Container + Copyright 1995-2016 Mort Bay Consulting Pty Ltd. + ============================================================== + + The Jetty Web Container is Copyright Mort Bay Consulting Pty Ltd + unless otherwise noted. + + Jetty is dual licensed under both + + * The Apache 2.0 License + http://www.apache.org/licenses/LICENSE-2.0.html + + and + + * The Eclipse Public 1.0 License + http://www.eclipse.org/legal/epl-v10.html + + Jetty may be distributed under either license. + + ------ + Eclipse + + The following artifacts are EPL. + * org.eclipse.jetty.orbit:org.eclipse.jdt.core + + The following artifacts are EPL and ASL2. + * org.eclipse.jetty.orbit:javax.security.auth.message + + + The following artifacts are EPL and CDDL 1.0. 
+ * org.eclipse.jetty.orbit:javax.mail.glassfish + + + ------ + Oracle + + The following artifacts are CDDL + GPLv2 with classpath exception. + https://glassfish.dev.java.net/nonav/public/CDDL+GPL.html + + * javax.servlet:javax.servlet-api + * javax.annotation:javax.annotation-api + * javax.transaction:javax.transaction-api + * javax.websocket:javax.websocket-api + + ------ + Oracle OpenJDK + + If ALPN is used to negotiate HTTP/2 connections, then the following + artifacts may be included in the distribution or downloaded when ALPN + module is selected. + + * java.sun.security.ssl + + These artifacts replace/modify OpenJDK classes. The modififications + are hosted at github and both modified and original are under GPL v2 with + classpath exceptions. + http://openjdk.java.net/legal/gplv2+ce.html + + + ------ + OW2 + + The following artifacts are licensed by the OW2 Foundation according to the + terms of http://asm.ow2.org/license.html + + org.ow2.asm:asm-commons + org.ow2.asm:asm + + + ------ + Apache + + The following artifacts are ASL2 licensed. + + org.apache.taglibs:taglibs-standard-spec + org.apache.taglibs:taglibs-standard-impl + + + ------ + MortBay + + The following artifacts are ASL2 licensed. Based on selected classes from + following Apache Tomcat jars, all ASL2 licensed. + + org.mortbay.jasper:apache-jsp + org.apache.tomcat:tomcat-jasper + org.apache.tomcat:tomcat-juli + org.apache.tomcat:tomcat-jsp-api + org.apache.tomcat:tomcat-el-api + org.apache.tomcat:tomcat-jasper-el + org.apache.tomcat:tomcat-api + org.apache.tomcat:tomcat-util-scan + org.apache.tomcat:tomcat-util + + org.mortbay.jasper:apache-el + org.apache.tomcat:tomcat-jasper-el + org.apache.tomcat:tomcat-el-api + + + ------ + Mortbay + + The following artifacts are CDDL + GPLv2 with classpath exception. + + https://glassfish.dev.java.net/nonav/public/CDDL+GPL.html + + org.eclipse.jetty.toolchain:jetty-schemas + + ------ + Assorted + + The UnixCrypt.java code implements the one way cryptography used by + Unix systems for simple password protection. Copyright 1996 Aki Yoshida, + modified April 2001 by Iris Van den Broeke, Daniel Deville. + Permission to use, copy, modify and distribute UnixCrypt + for non-commercial or commercial purposes and without fee is + granted provided that the copyright notice appears in all copies./ + +The binary distribution of this product bundles binaries of +Snappy for Java 1.0.4.1, +which has the following notices: + * This product includes software developed by Google + Snappy: http://code.google.com/p/snappy/ (New BSD License) + + This product includes software developed by Apache + PureJavaCrc32C from apache-hadoop-common http://hadoop.apache.org/ + (Apache 2.0 license) + + This library containd statically linked libstdc++. This inclusion is allowed by + "GCC RUntime Library Exception" + http://gcc.gnu.org/onlinedocs/libstdc++/manual/license.html + + == Contributors == + * Tatu Saloranta + * Providing benchmark suite + * Alec Wysoker + * Performance and memory usage improvement + +The binary distribution of this product bundles binaries of +Xerces2 Java Parser 2.9.1, +which has the following notices: + * ========================================================================= + == NOTICE file corresponding to section 4(d) of the Apache License, == + == Version 2.0, in this case for the Apache Xerces Java distribution. 
==
 + =========================================================================
 +
 + Apache Xerces Java
 + Copyright 1999-2007 The Apache Software Foundation
 +
 + This product includes software developed at
 + The Apache Software Foundation (http://www.apache.org/).
 +
 + Portions of this software were originally based on the following:
 +   - software copyright (c) 1999, IBM Corporation., http://www.ibm.com.
 +   - software copyright (c) 1999, Sun Microsystems., http://www.sun.com.
 +   - voluntary contributions made by Paul Eng on behalf of the
 +     Apache Software Foundation that were originally developed at iClick, Inc.,
 +     software copyright (c) 1999.
diff --git a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
index 6565d1d6a7631..f4493f1f20a09 100644
--- a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
+++ b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
@@ -115,29 +115,34 @@ esac
 #
 # A note about classpaths.
 #
-# The classpath is configured such that entries are stripped prior
-# to handing to Java based either upon duplication or non-existence.
-# Wildcards and/or directories are *NOT* expanded as the
-# de-duplication is fairly simple. So if two directories are in
-# the classpath that both contain awesome-methods-1.0.jar,
-# awesome-methods-1.0.jar will still be seen by java. But if
-# the classpath specifically has awesome-methods-1.0.jar from the
-# same directory listed twice, the last one will be removed.
-#
-
-# An additional, custom CLASSPATH. This is really meant for
-# end users, but as an administrator, one might want to push
-# something extra in here too, such as the jar to the topology
-# method. Just be sure to append to the existing HADOOP_USER_CLASSPATH
-# so end users have a way to add stuff.
-# export HADOOP_USER_CLASSPATH="/some/cool/path/on/your/machine"
-
-# Should HADOOP_USER_CLASSPATH be first in the official CLASSPATH?
+# By default, Apache Hadoop overrides Java's CLASSPATH
+# environment variable. It is configured such
+# that it starts out blank with new entries added after passing
+# a series of checks (file/dir exists, not already listed aka
+# de-duplication). During de-duplication, wildcards and/or
+# directories are *NOT* expanded to keep it simple. Therefore,
+# if the computed classpath has two specific mentions of
+# awesome-methods-1.0.jar, only the first one added will be seen.
+# If two directories are in the classpath that both contain
+# awesome-methods-1.0.jar, then Java will pick up both versions.
+
+# An additional, custom CLASSPATH. Site-wide configs should be
+# handled via the shellprofile functionality, utilizing the
+# hadoop_add_classpath function, which offers greater control
+# and is much harder for apps/end-users to accidentally override.
+# Similarly, end users should utilize ${HOME}/.hadooprc .
+# This variable should ideally only be used as a short-cut,
+# interactive way for temporary additions on the command line.
+# export HADOOP_CLASSPATH="/some/cool/path/on/your/machine"
+
+# Should HADOOP_CLASSPATH be first in the official CLASSPATH?
 # export HADOOP_USER_CLASSPATH_FIRST="yes"
 
-# If HADOOP_USE_CLIENT_CLASSLOADER is set, HADOOP_CLASSPATH along with the main
-# jar are handled by a separate isolated client classloader. If it is set,
-# HADOOP_USER_CLASSPATH_FIRST is ignored. Can be defined by doing
+# If HADOOP_USE_CLIENT_CLASSLOADER is set, the classpath along
+# with the main jar are handled by a separate isolated
+# client classloader when 'hadoop jar', 'yarn jar', or 'mapred job'
+# is utilized. If it is set, HADOOP_CLASSPATH and
+# HADOOP_USER_CLASSPATH_FIRST are ignored.
 # export HADOOP_USE_CLIENT_CLASSLOADER=true
 
 # HADOOP_CLIENT_CLASSLOADER_SYSTEM_CLASSES overrides the default definition of
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/EmptyStorageStatistics.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/EmptyStorageStatistics.java
index 1bcfe23ee90cd..1ef30dd7dbd86 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/EmptyStorageStatistics.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/EmptyStorageStatistics.java
@@ -29,15 +29,22 @@ class EmptyStorageStatistics extends StorageStatistics {
     super(name);
   }
 
+  @Override
   public Iterator<LongStatistic> getLongStatistics() {
     return Collections.emptyIterator();
   }
 
+  @Override
   public Long getLong(String key) {
     return null;
   }
 
+  @Override
   public boolean isTracked(String key) {
     return false;
   }
+
+  @Override
+  public void reset() {
+  }
 }
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index 9e13a7a80832f..146bce5b43d96 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -3619,8 +3619,11 @@ public StorageStatistics provide() {
    * Reset all statistics for all file systems
    */
   public static synchronized void clearStatistics() {
-    for(Statistics stat: statisticsTable.values()) {
-      stat.reset();
+    final Iterator<StorageStatistics> iterator =
+        GlobalStorageStatistics.INSTANCE.iterator();
+    while (iterator.hasNext()) {
+      final StorageStatistics statistics = iterator.next();
+      statistics.reset();
     }
   }
 
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemStorageStatistics.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemStorageStatistics.java
index 8a1eb54c05d83..8c633f6f359fd 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemStorageStatistics.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemStorageStatistics.java
@@ -138,6 +138,7 @@ public Long getLong(String key) {
    *
    * @return True only if the statistic is being tracked.
    */
+  @Override
   public boolean isTracked(String key) {
     for (String k: KEYS) {
       if (k.equals(key)) {
@@ -146,4 +147,9 @@ public boolean isTracked(String key) {
     }
     return false;
   }
+
+  @Override
+  public void reset() {
+    stats.reset();
+  }
 }
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GlobalStorageStatistics.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GlobalStorageStatistics.java
index 750296577c03e..2dba525e5d9d1 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GlobalStorageStatistics.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GlobalStorageStatistics.java
@@ -66,8 +66,9 @@ public synchronized StorageStatistics get(String name) {
    * @param provider An object which can create a new StorageStatistics
    *                 object if needed.
   * @return The StorageStatistics object with the given name.
-   * @throws RuntimeException If the StorageStatisticsProvider provides a new
-   *           StorageStatistics object with the wrong name.
+   * @throws RuntimeException If the StorageStatisticsProvider provides a null
+   *           object or a new StorageStatistics object with the
+   *           wrong name.
    */
   public synchronized StorageStatistics put(String name,
       StorageStatisticsProvider provider) {
@@ -78,6 +79,10 @@ public synchronized StorageStatistics put(String name,
       return stats;
     }
     stats = provider.provide();
+    if (stats == null) {
+      throw new RuntimeException("StorageStatisticsProvider for " + name +
+          " should not provide a null StorageStatistics object.");
+    }
     if (!stats.getName().equals(name)) {
       throw new RuntimeException("StorageStatisticsProvider for " + name +
           " provided a StorageStatistics object for " + stats.getName() +
@@ -87,6 +92,15 @@ public synchronized StorageStatistics put(String name,
     return stats;
   }
 
+  /**
+   * Reset all global storage statistics.
+   */
+  public synchronized void reset() {
+    for (StorageStatistics statistics : map.values()) {
+      statistics.reset();
+    }
+  }
+
   /**
    * Get an iterator that we can use to iterate through all the global storage
    * statistics objects.
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageStatistics.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageStatistics.java
index 0971f10b4bd6c..d987ad084d3ef 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageStatistics.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageStatistics.java
@@ -132,8 +132,7 @@ public String getScheme() {
    * Get the value of a statistic.
    *
    * @return null if the statistic is not being tracked or is not a
-   *         long statistic.
-   *         The value of the statistic, otherwise.
+   *         long statistic. The value of the statistic, otherwise.
    */
   public abstract Long getLong(String key);
 
@@ -143,4 +142,9 @@ public String getScheme() {
    * @return True only if the statistic is being tracked.
    */
   public abstract boolean isTracked(String key);
+
+  /**
+   * Reset all the statistic data.
+   */
+  public abstract void reset();
 }
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/UnionStorageStatistics.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/UnionStorageStatistics.java
index d9783e6cde99f..3d5b6af794682 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/UnionStorageStatistics.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/UnionStorageStatistics.java
@@ -20,6 +20,7 @@
 import java.util.Iterator;
 import java.util.NoSuchElementException;
 
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 
@@ -77,6 +78,16 @@ public void remove() {
 
   public UnionStorageStatistics(String name, StorageStatistics[] stats) {
     super(name);
+
+    Preconditions.checkArgument(name != null,
+        "The name of union storage statistics cannot be null!");
+    Preconditions.checkArgument(stats != null,
+        "The stats of union storage statistics cannot be null!");
+    for (StorageStatistics stat : stats) {
+      Preconditions.checkArgument(stat != null,
+          "The stats of union storage statistics cannot have a null element!");
+    }
+
     this.stats = stats;
   }
 
@@ -87,8 +98,8 @@ public Iterator<LongStatistic> getLongStatistics() {
 
   @Override
   public Long getLong(String key) {
-    for (int i = 0; i < stats.length; i++) {
-      Long val = stats[i].getLong(key);
+    for (StorageStatistics stat : stats) {
+      Long val = stat.getLong(key);
       if (val != null) {
         return val;
       }
@@ -103,11 +114,18 @@ public Long getLong(String key) {
    */
   @Override
   public boolean isTracked(String key) {
-    for (int i = 0; i < stats.length; i++) {
-      if (stats[i].isTracked(key)) {
+    for (StorageStatistics stat : stats) {
+      if (stat.isTracked(key)) {
         return true;
       }
     }
     return false;
   }
+
+  @Override
+  public void reset() {
+    for (StorageStatistics stat : stats) {
+      stat.reset();
+    }
+  }
 }
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
index e8d76a336e1cb..11a93de7040ec 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
@@ -22,7 +22,6 @@
 import java.io.StringWriter;
 import java.net.URI;
 import java.net.URISyntaxException;
-import java.text.DateFormat;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
@@ -322,26 +321,6 @@ public static String formatTimeSortable(long timeDiff) {
     return buf.toString();
   }
 
-  /**
-   * @param dateFormat date format to use
-   * @param finishTime finish time
-   * @param startTime start time
-   * @return formatted value.
-   * Formats time in ms and appends difference (finishTime - startTime)
-   * as returned by formatTimeDiff().
-   * If finish time is 0, empty string is returned, if start time is 0
-   * then difference is not appended to return value.
-   * @deprecated Use
-   * {@link StringUtils#getFormattedTimeWithDiff(FastDateFormat, long, long)} or
-   * {@link StringUtils#getFormattedTimeWithDiff(String, long, long)} instead.
-   */
-  @Deprecated
-  public static String getFormattedTimeWithDiff(DateFormat dateFormat,
-      long finishTime, long startTime){
-    String formattedFinishTime = dateFormat.format(finishTime);
-    return getFormattedTimeWithDiff(formattedFinishTime, finishTime, startTime);
-  }
-
   /**
    * Formats time in ms and appends difference (finishTime - startTime)
    * as returned by formatTimeDiff().
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md b/hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md
index b6d7517faf211..940627dd52f0d 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md
@@ -32,12 +32,14 @@ HADOOP_CLIENT_OPTS="-Xmx1g -Dhadoop.socks.server=localhost:4000" hadoop fs -ls /
 
 will increase the memory and send this command via a SOCKS proxy server.
 
-### `HADOOP_USER_CLASSPATH`
+### `HADOOP_CLASSPATH`
+
+  NOTE: Site-wide settings should be configured via a shellprofile entry, and permanent user-wide settings should be configured via ${HOME}/.hadooprc using the `hadoop_add_classpath` function. See below for more information.
 
 The Apache Hadoop scripts have the capability to inject more content into the classpath of the running command by setting this environment variable. It should be a colon-delimited list of directories, files, or wildcard locations.
 
 ```bash
-HADOOP_USER_CLASSPATH=${HOME}/lib/myjars/*.jar hadoop classpath
+HADOOP_CLASSPATH=${HOME}/lib/myjars/*.jar hadoop classpath
 ```
 
 A user can provide hints to the location of the paths via the `HADOOP_USER_CLASSPATH_FIRST` variable. Setting this to any value will tell the system to try and push these paths near the front.
 
@@ -53,8 +55,6 @@
 For example:
 
 # my custom Apache Hadoop settings!
 #
-HADOOP_USER_CLASSPATH=${HOME}/hadoopjars/*
-HADOOP_USER_CLASSPATH_FIRST=yes
 HADOOP_CLIENT_OPTS="-Xmx1g"
 ```
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemStorageStatistics.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemStorageStatistics.java
index 59c3b8d7fd3ff..8debb69717198 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemStorageStatistics.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemStorageStatistics.java
@@ -77,7 +77,7 @@ public void setup() {
   }
 
   @Test
-  public void testgetLongStatistics() {
+  public void testGetLongStatistics() {
     Iterator<LongStatistic> iter = storageStatistics.getLongStatistics();
     while (iter.hasNext()) {
       final LongStatistic longStat = iter.next();
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
index 84fc925e5bf54..83d880a248d6d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
@@ -171,4 +171,11 @@ public boolean isTracked(String key) {
     return OpType.fromSymbol(key) != null;
   }
 
+  @Override
+  public void reset() {
+    for (AtomicLong count : opsCount.values()) {
+      count.set(0);
+    }
+  }
+
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/TestDFSOpsCountStatistics.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/TestDFSOpsCountStatistics.java
index 7578930a4b68d..d63ef1079dd98 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/TestDFSOpsCountStatistics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/TestDFSOpsCountStatistics.java
@@ -23,19 +23,27 @@
 
 import org.apache.hadoop.hdfs.DFSOpsCountStatistics.OpType;
 
-import org.junit.BeforeClass;
+import org.junit.Before;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.ExpectedException;
 import org.junit.rules.Timeout;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.Map;
 import java.util.Set;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.atomic.AtomicReference;
 
+import static org.apache.hadoop.util.concurrent.HadoopExecutors.newFixedThreadPool;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotNull;
@@ -47,25 +55,25 @@
  */
 public class TestDFSOpsCountStatistics {
 
-  private static final DFSOpsCountStatistics STORAGE_STATISTICS =
-      new DFSOpsCountStatistics();
-  private static final Map<String, Long> OP_COUNTER_MAP = new HashMap<>();
+  private static final Logger LOG = LoggerFactory.getLogger(
+      TestDFSOpsCountStatistics.class);
 
   private static final String NO_SUCH_OP = "no-such-dfs-operation-dude";
 
+  private final DFSOpsCountStatistics statistics =
+      new DFSOpsCountStatistics();
+  private final Map<OpType, AtomicLong> expectedOpsCountMap = new HashMap<>();
+
   @Rule
   public final Timeout globalTimeout = new Timeout(10 * 1000);
   @Rule
   public final ExpectedException exception
      = ExpectedException.none();
 
-  @BeforeClass
-  public static void setup() {
+  @Before
+  public void setup() {
     for (OpType opType : OpType.values()) {
-      final Long opCount = RandomUtils.nextLong(0, 100);
-      OP_COUNTER_MAP.put(opType.getSymbol(), opCount);
-      for (long i = 0; i < opCount; i++) {
-        STORAGE_STATISTICS.incrementOpCounter(opType);
-      }
+      expectedOpsCountMap.put(opType, new AtomicLong());
     }
+    incrementOpsCountByRandomNumbers();
   }
 
   /**
@@ -84,13 +92,15 @@ public void testOpTypeSymbolsAreUnique() {
   @Test
   public void testGetLongStatistics() {
     short iterations = 0; // number of iter.hasNext() calls
-    final Iterator<LongStatistic> iter = STORAGE_STATISTICS.getLongStatistics();
+    final Iterator<LongStatistic> iter = statistics.getLongStatistics();
     while (iter.hasNext()) {
       final LongStatistic longStat = iter.next();
       assertNotNull(longStat);
-      assertTrue(OP_COUNTER_MAP.containsKey(longStat.getName()));
-      assertEquals(OP_COUNTER_MAP.get(longStat.getName()).longValue(),
+      final OpType opType = OpType.fromSymbol(longStat.getName());
+      assertNotNull(opType);
+      assertTrue(expectedOpsCountMap.containsKey(opType));
+      assertEquals(expectedOpsCountMap.get(opType).longValue(),
           longStat.getValue());
       iterations++;
     }
@@ -101,22 +111,103 @@ public void testGetLongStatistics() {
 
   @Test
   public void testGetLong() {
-    assertNull(STORAGE_STATISTICS.getLong(NO_SUCH_OP));
-
-    for (OpType opType : OpType.values()) {
-      final String key = opType.getSymbol();
-      assertEquals(OP_COUNTER_MAP.get(key), STORAGE_STATISTICS.getLong(key));
-    }
+    assertNull(statistics.getLong(NO_SUCH_OP));
+    verifyStatistics();
   }
 
   @Test
   public void testIsTracked() {
-    assertFalse(STORAGE_STATISTICS.isTracked(NO_SUCH_OP));
+    assertFalse(statistics.isTracked(NO_SUCH_OP));
 
-    final Iterator<LongStatistic> iter = STORAGE_STATISTICS.getLongStatistics();
+    final Iterator<LongStatistic> iter = statistics.getLongStatistics();
     while (iter.hasNext()) {
       final LongStatistic longStatistic = iter.next();
-      assertTrue(STORAGE_STATISTICS.isTracked(longStatistic.getName()));
+      assertTrue(statistics.isTracked(longStatistic.getName()));
+    }
+  }
+
+  @Test
+  public void testReset() {
+    statistics.reset();
+    for (OpType opType : OpType.values()) {
+      expectedOpsCountMap.get(opType).set(0);
+    }
+
+    final Iterator<LongStatistic> iter = statistics.getLongStatistics();
+    while (iter.hasNext()) {
+      final LongStatistic longStat = iter.next();
+      assertEquals(0, longStat.getValue());
+    }
+
+    incrementOpsCountByRandomNumbers();
+    verifyStatistics();
+  }
+
+  @Test
+  public void testConcurrentAccess() throws InterruptedException {
+    final int numThreads = 10;
+    final ExecutorService threadPool = newFixedThreadPool(numThreads);
+
+    try {
+      final CountDownLatch allReady = new CountDownLatch(numThreads);
+      final CountDownLatch startBlocker = new CountDownLatch(1);
+      final CountDownLatch allDone = new CountDownLatch(numThreads);
+      final AtomicReference<Throwable> childError = new AtomicReference<>();
+
+      for (int i = 0; i < numThreads; i++) {
+        threadPool.submit(new Runnable() {
+          @Override
+          public void run() {
+            allReady.countDown();
+            try {
+              startBlocker.await();
+              incrementOpsCountByRandomNumbers();
+            } catch (Throwable t) {
+              LOG.error("Child failed when incrementing op counters", t);
+              childError.compareAndSet(null, t);
+            } finally {
+              allDone.countDown();
+            }
+          }
+        });
+      }
+
+      allReady.await();          // wait until all threads are ready
+      startBlocker.countDown();  // all threads start incrementing counters
+      allDone.await();           // wait until all threads are done
+
+      assertNull("Child failed with exception.", childError.get());
+      verifyStatistics();
+    } finally {
      threadPool.shutdownNow();
+    }
+  }
+
+  /**
+   * Helper method that increments the statistics by random counts.
+   */
+  private void incrementOpsCountByRandomNumbers() {
+    for (OpType opType : OpType.values()) {
+      final Long randomCount = RandomUtils.nextLong(0, 100);
+      expectedOpsCountMap.get(opType).addAndGet(randomCount);
+      for (long i = 0; i < randomCount; i++) {
+        statistics.incrementOpCounter(opType);
+      }
+    }
+  }
+
+  /**
+   * We have the expected ops count in {@link #expectedOpsCountMap}, and this
+   * method verifies that the ops count in {@link #statistics} matches it.
+   */
+  private void verifyStatistics() {
+    for (OpType opType : OpType.values()) {
+      assertNotNull(expectedOpsCountMap.get(opType));
+      assertNotNull(statistics.getLong(opType.getSymbol()));
+      assertEquals("Not expected count for operation " + opType.getSymbol(),
+          expectedOpsCountMap.get(opType).longValue(),
+          statistics.getLong(opType.getSymbol()).longValue());
+    }
+  }
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
index b4842d47908f5..9ede4dab5b673 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
@@ -350,6 +350,8 @@ File and Directory Operations
         Location: webhdfs://<HOST>:<PORT>/<PATH>
         Content-Length: 0
 
+    If no permissions are specified, the newly created file will be assigned the default 644 permission. No umask mode will be applied from the server side (so the "fs.permissions.umask-mode" configuration set on the Namenode side will have no effect).
+
 **Note** that the reason for the two-step create/append is to prevent clients from sending out data before the redirect. This issue is addressed by the "`Expect: 100-continue`" header in HTTP/1.1; see [RFC 2616, Section 8.2.3](http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.2.3). Unfortunately, there are software library bugs (e.g. Jetty 6 HTTP server and Java 6 HTTP client), which do not correctly implement "`Expect: 100-continue`". The two-step create/append is a temporary workaround for the software library bugs.
 
 See also: [`overwrite`](#Overwrite), [`blocksize`](#Block_Size), [`replication`](#Replication), [`permission`](#Permission), [`buffersize`](#Buffer_Size), [FileSystem](../../api/org/apache/hadoop/fs/FileSystem.html).create
@@ -442,6 +444,8 @@ See also: [`offset`](#Offset), [`length`](#Length), [`buffersize`](#Buffer_Size)
 
         {"boolean": true}
 
+    If no permissions are specified, the newly created directory will have the default 755 permission. No umask mode will be applied from the server side (so the "fs.permissions.umask-mode" configuration set on the Namenode side will have no effect).
+
 See also: [`permission`](#Permission), [FileSystem](../../api/org/apache/hadoop/fs/FileSystem.html).mkdirs
 
 ### Create a Symbolic Link
@@ -1957,7 +1961,7 @@ See also: [`SETOWNER`](#Set_Owner)
 |:---- |:---- |
 | Description | The permission of a file/directory. |
 | Type | Octal |
-| Default Value | 755 |
+| Default Value | 644 for files, 755 for directories |
 | Valid Values | 0 - 1777 |
 | Syntax | Any radix-8 integer (leading zeros may be omitted.)
| diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java index a82e9dceba314..50f9f36b23ca3 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java @@ -573,7 +573,44 @@ public void testDFSClient() throws Exception { if (cluster != null) {cluster.shutdown();} } } - + + /** + * This is to test that the {@link FileSystem#clearStatistics()} resets all + * the global storage statistics. + */ + @Test + public void testClearStatistics() throws Exception { + final Configuration conf = getTestConfiguration(); + final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build(); + try { + cluster.waitActive(); + FileSystem dfs = cluster.getFileSystem(); + + final Path dir = new Path("/testClearStatistics"); + final long mkdirCount = getOpStatistics(OpType.MKDIRS); + long writeCount = DFSTestUtil.getStatistics(dfs).getWriteOps(); + dfs.mkdirs(dir); + checkOpStatistics(OpType.MKDIRS, mkdirCount + 1); + assertEquals(++writeCount, + DFSTestUtil.getStatistics(dfs).getWriteOps()); + + final long createCount = getOpStatistics(OpType.CREATE); + FSDataOutputStream out = dfs.create(new Path(dir, "tmpFile"), (short)1); + out.write(40); + out.close(); + checkOpStatistics(OpType.CREATE, createCount + 1); + assertEquals(++writeCount, + DFSTestUtil.getStatistics(dfs).getWriteOps()); + + FileSystem.clearStatistics(); + checkOpStatistics(OpType.MKDIRS, 0); + checkOpStatistics(OpType.CREATE, 0); + checkStatistics(dfs, 0, 0, 0); + } finally { + cluster.shutdown(); + } + } + @Test public void testStatistics() throws IOException { FileSystem.getStatistics(HdfsConstants.HDFS_URI_SCHEME, diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/MockNameNodeResourceChecker.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/MockNameNodeResourceChecker.java new file mode 100644 index 0000000000000..745ef8c11383e --- /dev/null +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/MockNameNodeResourceChecker.java @@ -0,0 +1,49 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hdfs.server.namenode; + +import java.io.IOException; + +import org.apache.hadoop.conf.Configuration; + +/** + * Mock NameNodeResourceChecker with resource availability flag which will be + * used to simulate the Namenode resource status. 
+ */
+public class MockNameNodeResourceChecker extends NameNodeResourceChecker {
+  private volatile boolean hasResourcesAvailable = true;
+
+  public MockNameNodeResourceChecker(Configuration conf) throws IOException {
+    super(conf);
+  }
+
+  @Override
+  public boolean hasAvailableDiskSpace() {
+    return hasResourcesAvailable;
+  }
+
+  /**
+   * Sets the resource availability flag.
+   *
+   * @param resourceAvailable true if resources are available,
+   *          false otherwise
+   */
+  public void setResourcesAvailable(boolean resourceAvailable) {
+    hasResourcesAvailable = resourceAvailable;
+  }
+}
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeResourceChecker.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeResourceChecker.java
index 2012b6aabe17e..f86ce5fc06772 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeResourceChecker.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeResourceChecker.java
@@ -97,9 +97,10 @@ public void testCheckThatNameNodeResourceMonitorIsRunning()
     cluster = new MiniDFSCluster.Builder(conf)
         .numDataNodes(1).build();
 
-    NameNodeResourceChecker mockResourceChecker = Mockito.mock(NameNodeResourceChecker.class);
-    Mockito.when(mockResourceChecker.hasAvailableDiskSpace()).thenReturn(true);
-    cluster.getNameNode().getNamesystem().nnResourceChecker = mockResourceChecker;
+    MockNameNodeResourceChecker mockResourceChecker =
+        new MockNameNodeResourceChecker(conf);
+    cluster.getNameNode()
+        .getNamesystem().nnResourceChecker = mockResourceChecker;
 
     cluster.waitActive();
 
@@ -117,8 +118,8 @@ public void testCheckThatNameNodeResourceMonitorIsRunning()
         isNameNodeMonitorRunning);
     assertFalse("NN should not presently be in safe mode",
         cluster.getNameNode().isInSafeMode());
-
-    Mockito.when(mockResourceChecker.hasAvailableDiskSpace()).thenReturn(false);
+
+    mockResourceChecker.setResourcesAvailable(false);
 
     // Make sure the NNRM thread has a chance to run.
long startMillis = Time.now(); diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestNNHealthCheck.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestNNHealthCheck.java index 4fca63b9fe826..e0f794f285db0 100644 --- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestNNHealthCheck.java +++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestNNHealthCheck.java @@ -20,7 +20,8 @@ import static org.apache.hadoop.fs.CommonConfigurationKeys.HA_HM_RPC_TIMEOUT_DEFAULT; import static org.apache.hadoop.fs.CommonConfigurationKeys.HA_HM_RPC_TIMEOUT_KEY; import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_LIFELINE_RPC_ADDRESS_KEY; -import static org.junit.Assert.*; +import static org.junit.Assert.assertTrue; +import static org.junit.Assert.fail; import java.io.IOException; @@ -30,14 +31,13 @@ import org.apache.hadoop.hdfs.DFSUtil; import org.apache.hadoop.hdfs.MiniDFSCluster; import org.apache.hadoop.hdfs.MiniDFSNNTopology; -import org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker; +import org.apache.hadoop.hdfs.server.namenode.MockNameNodeResourceChecker; import org.apache.hadoop.hdfs.tools.NNHAServiceTarget; import org.apache.hadoop.ipc.RemoteException; import org.apache.hadoop.test.GenericTestUtils; import org.junit.After; import org.junit.Before; import org.junit.Test; -import org.mockito.Mockito; public class TestNNHealthCheck { @@ -77,9 +77,8 @@ public void testNNHealthCheckWithLifelineAddress() throws IOException { } private void doNNHealthCheckTest() throws IOException { - NameNodeResourceChecker mockResourceChecker = Mockito.mock( - NameNodeResourceChecker.class); - Mockito.doReturn(true).when(mockResourceChecker).hasAvailableDiskSpace(); + MockNameNodeResourceChecker mockResourceChecker = + new MockNameNodeResourceChecker(conf); cluster.getNameNode(0).getNamesystem() .setNNResourceChecker(mockResourceChecker); @@ -101,7 +100,7 @@ private void doNNHealthCheckTest() throws IOException { // Should not throw error, which indicates healthy. rpc.monitorHealth(); - Mockito.doReturn(false).when(mockResourceChecker).hasAvailableDiskSpace(); + mockResourceChecker.setResourcesAvailable(false); try { // Should throw error - NN is unhealthy. 
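
The test changes above and below share one pattern: a Mockito stub of NameNodeResourceChecker is replaced by the concrete MockNameNodeResourceChecker added in this patch. The likely motivation is that the NameNode's resource monitor polls hasAvailableDiskSpace() from its own thread, and re-stubbing a Mockito mock from the test thread while that monitor thread is reading it is not a safe handoff; a volatile-backed flag is. The following self-contained sketch uses hypothetical names (FlagBackedChecker, MockPatternDemo), not Hadoop classes, to illustrate why the flipped flag is reliably observed by the polling thread:

```java
import java.util.concurrent.TimeUnit;

/** Minimal stand-in for MockNameNodeResourceChecker: a volatile flag. */
class FlagBackedChecker {
  private volatile boolean available = true; // read by the monitor thread

  boolean hasAvailableDiskSpace() {
    return available;
  }

  void setResourcesAvailable(boolean flag) {
    available = flag;
  }
}

public class MockPatternDemo {
  public static void main(String[] args) throws InterruptedException {
    FlagBackedChecker checker = new FlagBackedChecker();

    // A stand-in for the NameNode resource monitor: polls the checker.
    Thread monitor = new Thread(() -> {
      while (checker.hasAvailableDiskSpace()) {
        try {
          TimeUnit.MILLISECONDS.sleep(10); // healthy; keep polling
        } catch (InterruptedException e) {
          return;
        }
      }
      System.out.println("monitor observed resource exhaustion");
    });
    monitor.start();

    // The test thread flips the flag at runtime. This is safe because the
    // field is volatile; no mid-test re-stubbing of a shared Mockito mock.
    checker.setResourcesAvailable(false);
    monitor.join(TimeUnit.SECONDS.toMillis(5));
  }
}
```

A side benefit of the concrete mock, visible in the diffs, is that the tests no longer need Mockito imports at all.
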
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSZKFailoverController.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSZKFailoverController.java
index 94cccedc02f33..dfdcf3483c37e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSZKFailoverController.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSZKFailoverController.java
@@ -36,9 +36,9 @@
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.MiniDFSNNTopology;
 import org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream;
-import org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil;
+import org.apache.hadoop.hdfs.server.namenode.MockNameNodeResourceChecker;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
-import org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker;
+import org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.test.MultithreadedTestUtil.TestContext;
 import org.apache.hadoop.test.MultithreadedTestUtil.TestingThread;
@@ -47,7 +47,6 @@
 import org.junit.Test;
 
 import com.google.common.base.Supplier;
-import org.mockito.Mockito;
 
 public class TestDFSZKFailoverController extends ClientBaseWithFixes {
   private Configuration conf;
@@ -135,9 +134,9 @@ public void shutdown() throws Exception {
    */
   @Test(timeout=60000)
   public void testThreadDumpCaptureAfterNNStateChange() throws Exception {
-    NameNodeResourceChecker mockResourceChecker = Mockito.mock(
-        NameNodeResourceChecker.class);
-    Mockito.doReturn(false).when(mockResourceChecker).hasAvailableDiskSpace();
+    MockNameNodeResourceChecker mockResourceChecker =
+        new MockNameNodeResourceChecker(conf);
+    mockResourceChecker.setResourcesAvailable(false);
     cluster.getNameNode(0).getNamesystem()
         .setNNResourceChecker(mockResourceChecker);
     waitForHAState(0, HAServiceState.STANDBY);
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
index 1d5b7f2b1142c..8dee03c5f827b 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/MapReduceTutorial.md
@@ -1040,7 +1040,7 @@ public class WordCount2 {
     Configuration conf = new Configuration();
     GenericOptionsParser optionParser = new GenericOptionsParser(conf, args);
     String[] remainingArgs = optionParser.getRemainingArgs();
-    if (!(remainingArgs.length != 2 | | remainingArgs.length != 4)) {
+    if ((remainingArgs.length != 2) && (remainingArgs.length != 4)) {
       System.err.println("Usage: wordcount <in> <out> [-skip skipPatternFile]");
       System.exit(2);
     }
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
index b1595e81460ca..09177d74f1625 100644
---
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
+++
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java @@ -228,57 +228,45 @@ public static void afterClass() throws Exception { public static void testWrite() throws Exception { FileSystem fs = cluster.getFileSystem(); - long tStart = System.currentTimeMillis(); - bench.writeTest(fs); - long execTime = System.currentTimeMillis() - tStart; + long execTime = bench.writeTest(fs); bench.analyzeResult(fs, TestType.TEST_TYPE_WRITE, execTime); } @Test (timeout = 3000) public void testRead() throws Exception { FileSystem fs = cluster.getFileSystem(); - long tStart = System.currentTimeMillis(); - bench.readTest(fs); - long execTime = System.currentTimeMillis() - tStart; + long execTime = bench.readTest(fs); bench.analyzeResult(fs, TestType.TEST_TYPE_READ, execTime); } @Test (timeout = 3000) public void testReadRandom() throws Exception { FileSystem fs = cluster.getFileSystem(); - long tStart = System.currentTimeMillis(); bench.getConf().setLong("test.io.skip.size", 0); - bench.randomReadTest(fs); - long execTime = System.currentTimeMillis() - tStart; + long execTime = bench.randomReadTest(fs); bench.analyzeResult(fs, TestType.TEST_TYPE_READ_RANDOM, execTime); } @Test (timeout = 3000) public void testReadBackward() throws Exception { FileSystem fs = cluster.getFileSystem(); - long tStart = System.currentTimeMillis(); bench.getConf().setLong("test.io.skip.size", -DEFAULT_BUFFER_SIZE); - bench.randomReadTest(fs); - long execTime = System.currentTimeMillis() - tStart; + long execTime = bench.randomReadTest(fs); bench.analyzeResult(fs, TestType.TEST_TYPE_READ_BACKWARD, execTime); } @Test (timeout = 3000) public void testReadSkip() throws Exception { FileSystem fs = cluster.getFileSystem(); - long tStart = System.currentTimeMillis(); bench.getConf().setLong("test.io.skip.size", 1); - bench.randomReadTest(fs); - long execTime = System.currentTimeMillis() - tStart; + long execTime = bench.randomReadTest(fs); bench.analyzeResult(fs, TestType.TEST_TYPE_READ_SKIP, execTime); } @Test (timeout = 6000) public void testAppend() throws Exception { FileSystem fs = cluster.getFileSystem(); - long tStart = System.currentTimeMillis(); - bench.appendTest(fs); - long execTime = System.currentTimeMillis() - tStart; + long execTime = bench.appendTest(fs); bench.analyzeResult(fs, TestType.TEST_TYPE_APPEND, execTime); } @@ -286,9 +274,7 @@ public void testAppend() throws Exception { public void testTruncate() throws Exception { FileSystem fs = cluster.getFileSystem(); bench.createControlFile(fs, DEFAULT_NR_BYTES / 2, DEFAULT_NR_FILES); - long tStart = System.currentTimeMillis(); - bench.truncateTest(fs); - long execTime = System.currentTimeMillis() - tStart; + long execTime = bench.truncateTest(fs); bench.analyzeResult(fs, TestType.TEST_TYPE_TRUNCATE, execTime); } @@ -430,12 +416,14 @@ public Long doIO(Reporter reporter, } } - private void writeTest(FileSystem fs) throws IOException { + private long writeTest(FileSystem fs) throws IOException { Path writeDir = getWriteDir(config); fs.delete(getDataDir(config), true); fs.delete(writeDir, true); - + long tStart = System.currentTimeMillis(); runIOTest(WriteMapper.class, writeDir); + long execTime = System.currentTimeMillis() - tStart; + return execTime; } private void runIOTest( @@ -496,10 +484,13 @@ public Long doIO(Reporter reporter, } } - private void appendTest(FileSystem fs) throws IOException { + private long appendTest(FileSystem fs) throws IOException { Path appendDir = 
getAppendDir(config); fs.delete(appendDir, true); + long tStart = System.currentTimeMillis(); runIOTest(AppendMapper.class, appendDir); + long execTime = System.currentTimeMillis() - tStart; + return execTime; } /** @@ -539,10 +530,13 @@ public Long doIO(Reporter reporter, } } - private void readTest(FileSystem fs) throws IOException { + private long readTest(FileSystem fs) throws IOException { Path readDir = getReadDir(config); fs.delete(readDir, true); + long tStart = System.currentTimeMillis(); runIOTest(ReadMapper.class, readDir); + long execTime = System.currentTimeMillis() - tStart; + return execTime; } /** @@ -620,10 +614,13 @@ private long nextOffset(long current) { } } - private void randomReadTest(FileSystem fs) throws IOException { + private long randomReadTest(FileSystem fs) throws IOException { Path readDir = getRandomReadDir(config); fs.delete(readDir, true); + long tStart = System.currentTimeMillis(); runIOTest(RandomReadMapper.class, readDir); + long execTime = System.currentTimeMillis() - tStart; + return execTime; } /** @@ -665,10 +662,13 @@ public Long doIO(Reporter reporter, } } - private void truncateTest(FileSystem fs) throws IOException { + private long truncateTest(FileSystem fs) throws IOException { Path TruncateDir = getTruncateDir(config); fs.delete(TruncateDir, true); + long tStart = System.currentTimeMillis(); runIOTest(TruncateMapper.class, TruncateDir); + long execTime = System.currentTimeMillis() - tStart; + return execTime; } private void sequentialTest(FileSystem fs, diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml index 80b6995db00ed..1274d787b6745 100644 --- a/hadoop-project/pom.xml +++ b/hadoop-project/pom.xml @@ -1246,8 +1246,34 @@ - true + true + + + cglib:cglib: + com.sun.jersey:* + com.sun.jersey.contribs:* + com.sun.jersey.jersey-test-framework:* + com.google.inject:guice + org.ow2.asm:asm + + + + cglib:cglib:3.2.0 + com.google.inject:guice:4.0 + com.sun.jersey:jersey-core:1.19 + com.sun.jersey:jersey-servlet:1.19 + com.sun.jersey:jersey-json:1.19 + com.sun.jersey:jersey-server:1.19 + com.sun.jersey:jersey-client:1.19 + com.sun.jersey:jersey-grizzly2:1.19 + com.sun.jersey:jersey-grizzly2-servlet:1.19 + com.sun.jersey.jersey-test-framework:jersey-test-framework-core:1.19 + com.sun.jersey.jersey-test-framework:jersey-test-framework-grizzly2:1.19 + com.sun.jersey.contribs:jersey-guice:1.19 + org.ow2.asm:asm:5.0.0 + + diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AStorageStatistics.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AStorageStatistics.java index 3a90c6bf83e33..c1cf7cfcef6c1 100644 --- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AStorageStatistics.java +++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AStorageStatistics.java @@ -107,4 +107,11 @@ public boolean isTracked(String key) { return Statistic.fromSymbol(key) != null; } + @Override + public void reset() { + for (AtomicLong value : opsCount.values()) { + value.set(0); + } + } + } diff --git a/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.sh b/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.sh index 719a6ae43ed42..708d5685bf525 100644 --- a/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.sh +++ b/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.sh @@ -56,10 +56,10 @@ function hadoop_subproject_init HADOOP_YARN_HOME="${HADOOP_YARN_HOME:-$HADOOP_HOME}" # YARN-1429 added the completely superfluous YARN_USER_CLASSPATH - # env var. 
We're going to override HADOOP_USER_CLASSPATH to keep + # env var. We're going to override HADOOP_CLASSPATH to keep # consistency with the rest of the duplicate/useless env vars - hadoop_deprecate_envvar YARN_USER_CLASSPATH HADOOP_USER_CLASSPATH + hadoop_deprecate_envvar YARN_USER_CLASSPATH HADOOP_CLASSPATH hadoop_deprecate_envvar YARN_USER_CLASSPATH_FIRST HADOOP_USER_CLASSPATH_FIRST }
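
Taken together, the core of this patch is the new StorageStatistics#reset() contract: every implementation shown above (EmptyStorageStatistics, FileSystemStorageStatistics, UnionStorageStatistics, DFSOpsCountStatistics, S3AStorageStatistics) zeroes its counters in place, and FileSystem.clearStatistics() now walks GlobalStorageStatistics.INSTANCE instead of the per-scheme statisticsTable. The minimal sketch below mirrors that pattern with illustrative names (OpCountStatistics is not a Hadoop class): per-op AtomicLong counters, and a reset() that zeroes values rather than clearing the map, so iteration and isTracked()-style lookups see the same keys before and after a reset.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative analogue of the counter-backed StorageStatistics pattern. */
class OpCountStatistics {
  private final Map<String, AtomicLong> opsCount = new ConcurrentHashMap<>();

  void increment(String op) {
    // Create the counter on first use, then bump it atomically.
    opsCount.computeIfAbsent(op, k -> new AtomicLong()).incrementAndGet();
  }

  /** Returns null for untracked ops, like StorageStatistics#getLong. */
  Long getLong(String op) {
    AtomicLong value = opsCount.get(op);
    return value == null ? null : value.get();
  }

  /** Mirrors the reset() added by this patch: zero counters, keep keys. */
  void reset() {
    for (AtomicLong value : opsCount.values()) {
      value.set(0);
    }
  }

  public static void main(String[] args) {
    OpCountStatistics stats = new OpCountStatistics();
    stats.increment("mkdirs");
    stats.increment("create");
    stats.reset();
    System.out.println(stats.getLong("mkdirs")); // prints 0, not null
  }
}
```

Clearing the map instead of zeroing its values would make previously tracked statistics appear untracked after a reset, which is presumably why the patch consistently uses set(0) in DFSOpsCountStatistics and S3AStorageStatistics.
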