author    Andrew McDermott <andrew.mcdermott@linaro.org>  2014-02-12 16:59:54 +0000
committer Andrew McDermott <andrew.mcdermott@linaro.org>  2014-02-12 16:59:54 +0000
commit    86a9909c577bdd641e7483f56ffbec998ea421eb (patch)
tree      3f67536f5008b7d434d0a25e2d1aa97d9179b74b
download  openjdk8-hadoop-LCA14-86a9909c577bdd641e7483f56ffbec998ea421eb.tar.gz
Initial import
Signed-off-by: Andrew McDermott <andrew.mcdermott@linaro.org>
-rw-r--r--  README  61
-rwxr-xr-x  aarch64/bin/container-executor  bin 0 -> 164559 bytes
-rwxr-xr-x  aarch64/bin/hadoop  136
-rwxr-xr-x  aarch64/bin/hadoop.cmd  240
-rwxr-xr-x  aarch64/bin/hdfs  203
-rwxr-xr-x  aarch64/bin/hdfs.cmd  171
-rwxr-xr-x  aarch64/bin/mapred  148
-rwxr-xr-x  aarch64/bin/mapred.cmd  195
-rwxr-xr-x  aarch64/bin/rcc  61
-rwxr-xr-x  aarch64/bin/test-container-executor  bin 0 -> 222809 bytes
-rwxr-xr-x  aarch64/bin/yarn  235
-rwxr-xr-x  aarch64/bin/yarn.cmd  254
-rw-r--r--  aarch64/etc/hadoop/capacity-scheduler.xml  111
-rw-r--r--  aarch64/etc/hadoop/configuration.xsl  40
-rw-r--r--  aarch64/etc/hadoop/container-executor.cfg  4
-rw-r--r--  aarch64/etc/hadoop/core-site.xml  20
-rw-r--r--  aarch64/etc/hadoop/hadoop-env.cmd  81
-rw-r--r--  aarch64/etc/hadoop/hadoop-env.sh  77
-rw-r--r--  aarch64/etc/hadoop/hadoop-metrics.properties  75
-rw-r--r--  aarch64/etc/hadoop/hadoop-metrics2.properties  44
-rw-r--r--  aarch64/etc/hadoop/hadoop-policy.xml  219
-rw-r--r--  aarch64/etc/hadoop/hdfs-site.xml  21
-rw-r--r--  aarch64/etc/hadoop/httpfs-env.sh  41
-rw-r--r--  aarch64/etc/hadoop/httpfs-log4j.properties  35
-rw-r--r--  aarch64/etc/hadoop/httpfs-signature.secret  1
-rw-r--r--  aarch64/etc/hadoop/httpfs-site.xml  17
-rw-r--r--  aarch64/etc/hadoop/log4j.properties  231
-rw-r--r--  aarch64/etc/hadoop/mapred-env.cmd  20
-rw-r--r--  aarch64/etc/hadoop/mapred-env.sh  27
-rw-r--r--  aarch64/etc/hadoop/mapred-queues.xml.template  92
-rw-r--r--  aarch64/etc/hadoop/mapred-site.xml.template  21
-rw-r--r--  aarch64/etc/hadoop/slaves  1
-rw-r--r--  aarch64/etc/hadoop/ssl-client.xml.example  80
-rw-r--r--  aarch64/etc/hadoop/ssl-server.xml.example  77
-rw-r--r--  aarch64/etc/hadoop/yarn-env.cmd  60
-rw-r--r--  aarch64/etc/hadoop/yarn-env.sh  112
-rw-r--r--  aarch64/etc/hadoop/yarn-site.xml  19
-rw-r--r--  aarch64/include/Pipes.hh  260
-rw-r--r--  aarch64/include/SerialUtils.hh  170
-rw-r--r--  aarch64/include/StringUtils.hh  81
-rw-r--r--  aarch64/include/TemplateFactory.hh  96
-rw-r--r--  aarch64/include/hdfs.h  692
-rw-r--r--  aarch64/lib/native/libhadoop.a  bin 0 -> 1036576 bytes
l---------  aarch64/lib/native/libhadoop.so  1
-rwxr-xr-x  aarch64/lib/native/libhadoop.so.1.0.0  bin 0 -> 552720 bytes
-rw-r--r--  aarch64/lib/native/libhadooppipes.a  bin 0 -> 1757732 bytes
-rw-r--r--  aarch64/lib/native/libhadooputils.a  bin 0 -> 527602 bytes
-rw-r--r--  aarch64/lib/native/libhdfs.a  bin 0 -> 431394 bytes
l---------  aarch64/lib/native/libhdfs.so  1
-rwxr-xr-x  aarch64/lib/native/libhdfs.so.0.0.0  bin 0 -> 251622 bytes
-rwxr-xr-x  aarch64/libexec/hadoop-config.cmd  292
-rwxr-xr-x  aarch64/libexec/hadoop-config.sh  295
-rwxr-xr-x  aarch64/libexec/hdfs-config.cmd  43
-rwxr-xr-x  aarch64/libexec/hdfs-config.sh  36
-rwxr-xr-x  aarch64/libexec/httpfs-config.sh  174
-rwxr-xr-x  aarch64/libexec/mapred-config.cmd  43
-rwxr-xr-x  aarch64/libexec/mapred-config.sh  52
-rwxr-xr-x  aarch64/libexec/yarn-config.cmd  72
-rwxr-xr-x  aarch64/libexec/yarn-config.sh  65
-rwxr-xr-x  aarch64/sbin/distribute-exclude.sh  81
-rwxr-xr-x  aarch64/sbin/hadoop-daemon.sh  202
-rwxr-xr-x  aarch64/sbin/hadoop-daemons.sh  36
-rwxr-xr-x  aarch64/sbin/hdfs-config.cmd  43
-rwxr-xr-x  aarch64/sbin/hdfs-config.sh  36
-rwxr-xr-x  aarch64/sbin/httpfs.sh  62
-rwxr-xr-x  aarch64/sbin/mr-jobhistory-daemon.sh  146
-rwxr-xr-x  aarch64/sbin/refresh-namenodes.sh  48
-rwxr-xr-x  aarch64/sbin/slaves.sh  67
-rwxr-xr-x  aarch64/sbin/start-all.cmd  52
-rwxr-xr-x  aarch64/sbin/start-all.sh  38
-rwxr-xr-x  aarch64/sbin/start-balancer.sh  27
-rwxr-xr-x  aarch64/sbin/start-dfs.cmd  41
-rwxr-xr-x  aarch64/sbin/start-dfs.sh  117
-rwxr-xr-x  aarch64/sbin/start-secure-dns.sh  33
-rwxr-xr-x  aarch64/sbin/start-yarn.cmd  47
-rwxr-xr-x  aarch64/sbin/start-yarn.sh  35
-rwxr-xr-x  aarch64/sbin/stop-all.cmd  52
-rwxr-xr-x  aarch64/sbin/stop-all.sh  38
-rwxr-xr-x  aarch64/sbin/stop-balancer.sh  28
-rwxr-xr-x  aarch64/sbin/stop-dfs.cmd  41
-rwxr-xr-x  aarch64/sbin/stop-dfs.sh  89
-rwxr-xr-x  aarch64/sbin/stop-secure-dns.sh  33
-rwxr-xr-x  aarch64/sbin/stop-yarn.cmd  47
-rwxr-xr-x  aarch64/sbin/stop-yarn.sh  35
-rwxr-xr-x  aarch64/sbin/yarn-daemon.sh  160
-rwxr-xr-x  aarch64/sbin/yarn-daemons.sh  38
-rw-r--r--  aarch64/share/doc/hadoop/common/CHANGES.txt  13861
-rw-r--r--  aarch64/share/doc/hadoop/common/LICENSE.txt  284
-rw-r--r--  aarch64/share/doc/hadoop/common/NOTICE.txt  2
-rw-r--r--  aarch64/share/doc/hadoop/common/README.txt  31
-rw-r--r--  aarch64/share/doc/hadoop/hdfs/CHANGES.txt  6945
-rw-r--r--  aarch64/share/doc/hadoop/hdfs/LICENSE.txt  271
-rw-r--r--  aarch64/share/doc/hadoop/hdfs/NOTICE.txt  2
-rw-r--r--  aarch64/share/doc/hadoop/mapreduce/CHANGES.txt  6904
-rw-r--r--  aarch64/share/doc/hadoop/mapreduce/LICENSE.txt  244
-rw-r--r--  aarch64/share/doc/hadoop/mapreduce/NOTICE.txt  2
-rw-r--r--  aarch64/share/doc/hadoop/yarn/CHANGES.txt  1698
-rw-r--r--  aarch64/share/doc/hadoop/yarn/LICENSE.txt  244
-rw-r--r--  aarch64/share/doc/hadoop/yarn/NOTICE.txt  2
-rw-r--r--  aarch64/share/hadoop/common/hadoop-common-2.2.0-tests.jar  bin 0 -> 1352335 bytes
-rw-r--r--  aarch64/share/hadoop/common/hadoop-common-2.2.0.jar  bin 0 -> 2677324 bytes
-rw-r--r--  aarch64/share/hadoop/common/hadoop-nfs-2.2.0.jar  bin 0 -> 139540 bytes
-rw-r--r--  aarch64/share/hadoop/common/jdiff/hadoop-core_0.20.0.xml  32308
-rw-r--r--  aarch64/share/hadoop/common/jdiff/hadoop-core_0.21.0.xml  25944
-rw-r--r--  aarch64/share/hadoop/common/jdiff/hadoop-core_0.22.0.xml  28377
-rw-r--r--  aarch64/share/hadoop/common/jdiff/hadoop_0.17.0.xml  43272
-rw-r--r--  aarch64/share/hadoop/common/jdiff/hadoop_0.18.1.xml  44778
-rw-r--r--  aarch64/share/hadoop/common/jdiff/hadoop_0.18.2.xml  38788
-rw-r--r--  aarch64/share/hadoop/common/jdiff/hadoop_0.18.3.xml  38826
-rw-r--r--  aarch64/share/hadoop/common/jdiff/hadoop_0.19.0.xml  43972
-rw-r--r--  aarch64/share/hadoop/common/jdiff/hadoop_0.19.1.xml  44195
-rw-r--r--  aarch64/share/hadoop/common/jdiff/hadoop_0.19.2.xml  44204
-rw-r--r--  aarch64/share/hadoop/common/jdiff/hadoop_0.20.0.xml  52140
-rw-r--r--  aarch64/share/hadoop/common/jdiff/hadoop_0.20.1.xml  53832
-rw-r--r--  aarch64/share/hadoop/common/jdiff/hadoop_0.20.2.xml  53959
-rw-r--r--  aarch64/share/hadoop/common/lib/activation-1.1.jar  bin 0 -> 62983 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/asm-3.2.jar  bin 0 -> 43398 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/avro-1.7.4.jar  bin 0 -> 303139 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/commons-beanutils-1.7.0.jar  bin 0 -> 188671 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar  bin 0 -> 206035 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/commons-cli-1.2.jar  bin 0 -> 41123 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/commons-codec-1.4.jar  bin 0 -> 58160 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/commons-collections-3.2.1.jar  bin 0 -> 575389 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/commons-compress-1.4.1.jar  bin 0 -> 241367 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/commons-configuration-1.6.jar  bin 0 -> 298829 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/commons-digester-1.8.jar  bin 0 -> 143602 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/commons-el-1.0.jar  bin 0 -> 112341 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/commons-httpclient-3.1.jar  bin 0 -> 305001 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/commons-io-2.1.jar  bin 0 -> 163151 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/commons-lang-2.5.jar  bin 0 -> 279193 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/commons-logging-1.1.1.jar  bin 0 -> 60686 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/commons-math-2.1.jar  bin 0 -> 832410 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/commons-net-3.1.jar  bin 0 -> 273370 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/guava-11.0.2.jar  bin 0 -> 1648200 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar  bin 0 -> 16781 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/hadoop-auth-2.2.0.jar  bin 0 -> 49779 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar  bin 0 -> 227500 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar  bin 0 -> 17884 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar  bin 0 -> 668564 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jackson-xc-1.8.8.jar  bin 0 -> 32353 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jasper-compiler-5.5.23.jar  bin 0 -> 408133 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jasper-runtime-5.5.23.jar  bin 0 -> 76844 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jaxb-api-2.2.2.jar  bin 0 -> 105134 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar  bin 0 -> 890168 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jersey-core-1.9.jar  bin 0 -> 458739 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jersey-json-1.9.jar  bin 0 -> 147952 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jersey-server-1.9.jar  bin 0 -> 713089 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jets3t-0.6.1.jar  bin 0 -> 321806 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jettison-1.1.jar  bin 0 -> 67758 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jetty-6.1.26.jar  bin 0 -> 539912 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jetty-util-6.1.26.jar  bin 0 -> 177131 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jsch-0.1.42.jar  bin 0 -> 185746 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jsp-api-2.1.jar  bin 0 -> 100636 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/jsr305-1.3.9.jar  bin 0 -> 33015 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/junit-4.8.2.jar  bin 0 -> 237344 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/log4j-1.2.17.jar  bin 0 -> 489884 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/mockito-all-1.8.5.jar  bin 0 -> 1419869 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/netty-3.6.2.Final.jar  bin 0 -> 1199572 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/paranamer-2.3.jar  bin 0 -> 29555 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/protobuf-java-2.5.0.jar  bin 0 -> 533455 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/servlet-api-2.5.jar  bin 0 -> 105112 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/slf4j-api-1.7.5.jar  bin 0 -> 26084 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar  bin 0 -> 8869 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/snappy-java-1.0.4.1.jar  bin 0 -> 995968 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/stax-api-1.0.1.jar  bin 0 -> 26514 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/xmlenc-0.52.jar  bin 0 -> 15010 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/xz-1.0.jar  bin 0 -> 94672 bytes
-rw-r--r--  aarch64/share/hadoop/common/lib/zookeeper-3.4.5.jar  bin 0 -> 779974 bytes
-rw-r--r--  aarch64/share/hadoop/common/sources/hadoop-common-2.2.0-sources.jar  bin 0 -> 1681090 bytes
-rw-r--r--  aarch64/share/hadoop/common/sources/hadoop-common-2.2.0-test-sources.jar  bin 0 -> 746234 bytes
-rw-r--r--  aarch64/share/hadoop/common/templates/core-site.xml  20
-rw-r--r--  aarch64/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar  bin 0 -> 1988555 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar  bin 0 -> 5242564 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar  bin 0 -> 71670 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/jdiff/hadoop-hdfs_0.20.0.xml  10389
-rw-r--r--  aarch64/share/hadoop/hdfs/jdiff/hadoop-hdfs_0.21.0.xml  16220
-rw-r--r--  aarch64/share/hadoop/hdfs/jdiff/hadoop-hdfs_0.22.0.xml  18589
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/asm-3.2.jar  bin 0 -> 43398 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/commons-cli-1.2.jar  bin 0 -> 41123 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/commons-codec-1.4.jar  bin 0 -> 58160 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar  bin 0 -> 24239 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/commons-el-1.0.jar  bin 0 -> 112341 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/commons-io-2.1.jar  bin 0 -> 163151 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/commons-lang-2.5.jar  bin 0 -> 279193 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar  bin 0 -> 60686 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/guava-11.0.2.jar  bin 0 -> 1648200 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar  bin 0 -> 227500 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar  bin 0 -> 668564 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar  bin 0 -> 76844 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/jersey-core-1.9.jar  bin 0 -> 458739 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/jersey-server-1.9.jar  bin 0 -> 713089 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/jetty-6.1.26.jar  bin 0 -> 539912 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar  bin 0 -> 177131 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/jsp-api-2.1.jar  bin 0 -> 100636 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/jsr305-1.3.9.jar  bin 0 -> 33015 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/log4j-1.2.17.jar  bin 0 -> 489884 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar  bin 0 -> 1199572 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar  bin 0 -> 533455 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/servlet-api-2.5.jar  bin 0 -> 105112 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/lib/xmlenc-0.52.jar  bin 0 -> 15010 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/sources/hadoop-hdfs-2.2.0-sources.jar  bin 0 -> 1979061 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/sources/hadoop-hdfs-2.2.0-test-sources.jar  bin 0 -> 1300644 bytes
-rw-r--r--  aarch64/share/hadoop/hdfs/templates/hdfs-site.xml  21
-rw-r--r--  aarch64/share/hadoop/hdfs/webapps/datanode/WEB-INF/web.xml  59
-rw-r--r--  aarch64/share/hadoop/hdfs/webapps/datanode/robots.txt  2
-rw-r--r--  aarch64/share/hadoop/hdfs/webapps/hdfs/WEB-INF/web.xml  109
-rw-r--r--  aarch64/share/hadoop/hdfs/webapps/hdfs/decommission.xsl  139
-rw-r--r--  aarch64/share/hadoop/hdfs/webapps/hdfs/dfsclusterhealth.xsl  170
-rw-r--r--  aarch64/share/hadoop/hdfs/webapps/hdfs/dfsclusterhealth_utils.xsl  88
-rw-r--r--  aarch64/share/hadoop/hdfs/webapps/hdfs/index.html  35
-rw-r--r--  aarch64/share/hadoop/hdfs/webapps/journal/WEB-INF/web.xml  39
-rw-r--r--  aarch64/share/hadoop/hdfs/webapps/journal/index.html  29
-rw-r--r--  aarch64/share/hadoop/hdfs/webapps/secondary/WEB-INF/web.xml  39
-rw-r--r--  aarch64/share/hadoop/hdfs/webapps/secondary/index.html  29
-rw-r--r--  aarch64/share/hadoop/hdfs/webapps/static/hadoop.css  190
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/LICENSE  707
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/NOTICE  16
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/RELEASE-NOTES  234
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/RUNNING.txt  454
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/bin/bootstrap.jar  bin 0 -> 22706 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/bin/catalina-tasks.xml  58
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/bin/catalina.bat  286
-rwxr-xr-x  aarch64/share/hadoop/httpfs/tomcat/bin/catalina.sh  506
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/bin/commons-daemon-native.tar.gz  bin 0 -> 202519 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/bin/commons-daemon.jar  bin 0 -> 24242 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/bin/cpappend.bat  35
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/bin/digest.bat  56
-rwxr-xr-x  aarch64/share/hadoop/httpfs/tomcat/bin/digest.sh  48
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/bin/setclasspath.bat  82
-rwxr-xr-x  aarch64/share/hadoop/httpfs/tomcat/bin/setclasspath.sh  116
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/bin/shutdown.bat  59
-rwxr-xr-x  aarch64/share/hadoop/httpfs/tomcat/bin/shutdown.sh  48
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/bin/startup.bat  59
-rwxr-xr-x  aarch64/share/hadoop/httpfs/tomcat/bin/startup.sh  65
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/bin/tomcat-juli.jar  bin 0 -> 32278 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/bin/tomcat-native.tar.gz  bin 0 -> 258558 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/bin/tool-wrapper.bat  85
-rwxr-xr-x  aarch64/share/hadoop/httpfs/tomcat/bin/tool-wrapper.sh  99
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/bin/version.bat  59
-rwxr-xr-x  aarch64/share/hadoop/httpfs/tomcat/bin/version.sh  48
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/conf/catalina.policy  222
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/conf/catalina.properties  81
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/conf/context.xml  35
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/conf/logging.properties  67
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/conf/server.xml  150
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/conf/tomcat-users.xml  36
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/conf/web.xml  1249
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/annotations-api.jar  bin 0 -> 15240 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/catalina-ant.jar  bin 0 -> 54565 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/catalina-ha.jar  bin 0 -> 132132 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/catalina-tribes.jar  bin 0 -> 237521 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/catalina.jar  bin 0 -> 1243752 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/ecj-3.7.2.jar  bin 0 -> 1749257 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/el-api.jar  bin 0 -> 33314 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/jasper-el.jar  bin 0 -> 112554 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/jasper.jar  bin 0 -> 527671 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/jsp-api.jar  bin 0 -> 76691 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/servlet-api.jar  bin 0 -> 88499 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/tomcat-coyote.jar  bin 0 -> 771696 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/tomcat-dbcp.jar  bin 0 -> 253633 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/tomcat-i18n-es.jar  bin 0 -> 70018 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/tomcat-i18n-fr.jar  bin 0 -> 51901 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/lib/tomcat-i18n-ja.jar  bin 0 -> 54509 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/temp/safeToDelete.tmp  0
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/ROOT/WEB-INF/web.xml  16
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/ROOT/index.html  21
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/default-log4j.properties  20
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/httpfs-default.xml  237
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/httpfs.properties  21
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/client/HttpFSFileSystem$1.class  bin 0 -> 1136 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/client/HttpFSFileSystem$2.class  bin 0 -> 1399 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/client/HttpFSFileSystem$3.class  bin 0 -> 1761 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/client/HttpFSFileSystem$4.class  bin 0 -> 2037 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/client/HttpFSFileSystem$5.class  bin 0 -> 1863 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/client/HttpFSFileSystem$6.class  bin 0 -> 1009 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/client/HttpFSFileSystem$FILE_TYPE.class  bin 0 -> 2141 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/client/HttpFSFileSystem$HttpFSDataInputStream.class  bin 0 -> 1589 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/client/HttpFSFileSystem$HttpFSDataOutputStream.class  bin 0 -> 1406 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/client/HttpFSFileSystem$Operation.class  bin 0 -> 2723 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/client/HttpFSFileSystem.class  bin 0 -> 24410 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/client/HttpFSKerberosAuthenticator$DelegationTokenOperation.class  bin 0 -> 2203 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/client/HttpFSKerberosAuthenticator.class  bin 0 -> 7214 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/client/HttpFSPseudoAuthenticator.class  bin 0 -> 1339 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/client/HttpFSUtils.class  bin 0 -> 5333 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/CheckUploadContentTypeFilter.class  bin 0 -> 3107 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSAppend.class  bin 0 -> 2191 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSConcat.class  bin 0 -> 1918 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSContentSummary.class  bin 0 -> 1827 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSCreate.class  bin 0 -> 2908 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSDelete.class  bin 0 -> 1988 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSFileChecksum.class  bin 0 -> 1807 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSFileStatus.class  bin 0 -> 1791 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSHomeDir.class  bin 0 -> 1848 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSListStatus.class  bin 0 -> 2294 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSMkdirs.class  bin 0 -> 2110 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSOpen.class  bin 0 -> 1997 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSRename.class  bin 0 -> 1963 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSSetOwner.class  bin 0 -> 1809 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSSetPermission.class  bin 0 -> 1918 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSSetReplication.class  bin 0 -> 2037 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations$FSSetTimes.class  bin 0 -> 1746 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/FSOperations.class  bin 0 -> 6459 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSAuthenticationFilter.class  bin 0 -> 3633 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSExceptionProvider.class  bin 0 -> 2899 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSKerberosAuthenticationHandler$1.class  bin 0 -> 1290 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSKerberosAuthenticationHandler.class  bin 0 -> 7845 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$AccessTimeParam.class  bin 0 -> 969 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$BlockSizeParam.class  bin 0 -> 965 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$DataParam.class  bin 0 -> 946 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$DestinationParam.class  bin 0 -> 901 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$DoAsParam.class  bin 0 -> 1529 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$FilterParam.class  bin 0 -> 881 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$GroupParam.class  bin 0 -> 1007 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$LenParam.class  bin 0 -> 944 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$ModifiedTimeParam.class  bin 0 -> 981 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$OffsetParam.class  bin 0 -> 942 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$OperationParam.class  bin 0 -> 1385 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$OverwriteParam.class  bin 0 -> 966 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$OwnerParam.class  bin 0 -> 1007 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$PermissionParam.class  bin 0 -> 1006 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$RecursiveParam.class  bin 0 -> 966 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$ReplicationParam.class  bin 0 -> 966 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider$SourcesParam.class  bin 0 -> 885 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.class  bin 0 -> 4087 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSReleaseFilter.class  bin 0 -> 1028 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSServer$1.class  bin 0 -> 2000 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSServer.class  bin 0 -> 19404 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/fs/http/server/HttpFSServerWebApp.class  bin 0 -> 3194 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/lang/RunnableCallable.class  bin 0 -> 2159 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/lang/XException$ERROR.class  bin 0 -> 261 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/lang/XException.class  bin 0 -> 2674 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/server/BaseService.class  bin 0 -> 3470 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/server/Server$Status.class  bin 0 -> 2043 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/server/Server.class  bin 0 -> 18236 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/server/ServerException$ERROR.class  bin 0 -> 3109 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/server/ServerException.class  bin 0 -> 1140 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/server/Service.class  bin 0 -> 915 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/server/ServiceException.class  bin 0 -> 910 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/DelegationTokenIdentifier.class  bin 0 -> 1206 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/DelegationTokenManager.class  bin 0 -> 1576 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/DelegationTokenManagerException$ERROR.class  bin 0 -> 2000 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/DelegationTokenManagerException.class  bin 0 -> 1106 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/FileSystemAccess$FileSystemExecutor.class  bin 0 -> 500 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/FileSystemAccess.class  bin 0 -> 1250 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/FileSystemAccessException$ERROR.class  bin 0 -> 2612 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/FileSystemAccessException.class  bin 0 -> 1070 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/Groups.class  bin 0 -> 575 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/Instrumentation$Cron.class  bin 0 -> 323 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/Instrumentation$Variable.class  bin 0 -> 364 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/Instrumentation.class  bin 0 -> 1417 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/ProxyUser.class  bin 0 -> 566 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/Scheduler.class  bin 0 -> 644 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService$1.class  bin 0 -> 1400 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService$2.class  bin 0 -> 1389 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService$3.class  bin 0 -> 3036 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService$4.class  bin 0 -> 1437 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService$CachedFileSystem.class  bin 0 -> 1557 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService$FileSystemCachePurger.class  bin 0 -> 2630 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.class  bin 0 -> 15405 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/instrumentation/InstrumentationService$1.class  bin 0 -> 1316 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/instrumentation/InstrumentationService$2.class  bin 0 -> 1315 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/instrumentation/InstrumentationService$3.class  bin 0 -> 1317 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/instrumentation/InstrumentationService$Cron.class  bin 0 -> 1567 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/instrumentation/InstrumentationService$Sampler.class  bin 0 -> 2942 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/instrumentation/InstrumentationService$SamplersRunnable.class  bin 0 -> 1734 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/instrumentation/InstrumentationService$Timer.class  bin 0 -> 3008 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/instrumentation/InstrumentationService$VariableHolder.class  bin 0 -> 2192 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/instrumentation/InstrumentationService.class  bin 0 -> 9958 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/scheduler/SchedulerService$1.class  bin 0 -> 3280 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/scheduler/SchedulerService.class  bin 0 -> 5109 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/security/DelegationTokenManagerService$DelegationTokenSecretManager.class  bin 0 -> 1345 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/security/DelegationTokenManagerService.class  bin 0 -> 7282 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/security/GroupsService.class  bin 0 -> 1812 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/security/ProxyUserService$ERROR.class  bin 0 -> 1983 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/service/security/ProxyUserService.class  bin 0 -> 7297 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/servlet/FileSystemReleaseFilter.class  bin 0 -> 2345 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/servlet/HostnameFilter.class  bin 0 -> 2871 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/servlet/MDCFilter.class  bin 0 -> 2242 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/servlet/ServerWebApp.class  bin 0 -> 5411 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/util/Check.class  bin 0 -> 4014 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/util/ConfigurationUtils.class  bin 0 -> 5061 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/BooleanParam.class  bin 0 -> 1765 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/ByteParam.class  bin 0 -> 1354 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/EnumParam.class  bin 0 -> 2070 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/ExceptionProvider.class  bin 0 -> 3335 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/InputStreamEntity.class  bin 0 -> 1446 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/IntegerParam.class  bin 0 -> 1384 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/JSONMapProvider.class  bin 0 -> 3997 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/JSONProvider.class  bin 0 -> 4081 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/LongParam.class  bin 0 -> 1354 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/Param.class  bin 0 -> 2130 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/Parameters.class  bin 0 -> 1412 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/ParametersProvider.class  bin 0 -> 5689 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/ShortParam.class  bin 0 -> 1560 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/StringParam.class  bin 0 -> 2498 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/UserProvider$1.class  bin 0 -> 1015 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/UserProvider$UserParam.class  bin 0 -> 1337 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/classes/org/apache/hadoop/lib/wsrs/UserProvider.class  bin 0 -> 4065 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/activation-1.1.jar  bin 0 -> 62983 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/asm-3.2.jar  bin 0 -> 43398 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/avro-1.7.4.jar  bin 0 -> 303139 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-beanutils-1.7.0.jar  bin 0 -> 188671 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-beanutils-core-1.8.0.jar  bin 0 -> 206035 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-cli-1.2.jar  bin 0 -> 41123 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-codec-1.4.jar  bin 0 -> 58160 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-collections-3.2.1.jar  bin 0 -> 575389 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-compress-1.4.1.jar  bin 0 -> 241367 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-configuration-1.6.jar  bin 0 -> 298829 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-daemon-1.0.13.jar  bin 0 -> 24239 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-digester-1.8.jar  bin 0 -> 143602 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-io-2.1.jar  bin 0 -> 163151 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-lang-2.5.jar  bin 0 -> 279193 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-logging-1.1.1.jar  bin 0 -> 60686 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-math-2.1.jar  bin 0 -> 832410 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-net-3.1.jar  bin 0 -> 273370 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/guava-11.0.2.jar  bin 0 -> 1648200 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/hadoop-annotations-2.2.0.jar  bin 0 -> 16781 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/hadoop-auth-2.2.0.jar  bin 0 -> 49779 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/hadoop-common-2.2.0.jar  bin 0 -> 2677324 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/hadoop-hdfs-2.2.0.jar  bin 0 -> 5242564 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/jackson-core-asl-1.8.8.jar  bin 0 -> 227500 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/jackson-jaxrs-1.8.8.jar  bin 0 -> 17884 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/jackson-mapper-asl-1.8.8.jar  bin 0 -> 668564 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/jackson-xc-1.8.8.jar  bin 0 -> 32353 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/jaxb-api-2.2.2.jar  bin 0 -> 105134 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/jaxb-impl-2.2.3-1.jar  bin 0 -> 890168 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/jersey-core-1.9.jar  bin 0 -> 458739 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/jersey-json-1.9.jar  bin 0 -> 147952 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/jersey-server-1.9.jar  bin 0 -> 713089 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/jettison-1.1.jar  bin 0 -> 67758 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/jsch-0.1.42.jar  bin 0 -> 185746 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/json-simple-1.1.jar  bin 0 -> 16046 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/jsr305-1.3.9.jar  bin 0 -> 33015 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/log4j-1.2.17.jar  bin 0 -> 489884 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/paranamer-2.3.jar  bin 0 -> 29555 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/protobuf-java-2.5.0.jar  bin 0 -> 533455 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/slf4j-api-1.7.5.jar  bin 0 -> 26084 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/slf4j-log4j12-1.7.5.jar  bin 0 -> 8869 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/snappy-java-1.0.4.1.jar  bin 0 -> 995968 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/stax-api-1.0.1.jar  bin 0 -> 26514 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/xmlenc-0.52.jar  bin 0 -> 15010 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/xz-1.0.jar  bin 0 -> 94672 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/zookeeper-3.4.5.jar  bin 0 -> 779974 bytes
-rw-r--r--  aarch64/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/web.xml  98
-rw-r--r--  aarch64/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar  bin 0 -> 482132 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar  bin 0 -> 656310 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar  bin 0 -> 1455462 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar  bin 0 -> 117197 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar  bin 0 -> 4057 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar  bin 0 -> 1434955 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar  bin 0 -> 35209 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar  bin 0 -> 21538 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar  bin 0 -> 270272 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib-examples/hsqldb-2.0.0.jar  bin 0 -> 1256297 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/aopalliance-1.0.jar  bin 0 -> 4467 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/asm-3.2.jar  bin 0 -> 43398 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/avro-1.7.4.jar  bin 0 -> 303139 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar  bin 0 -> 241367 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/commons-io-2.1.jar  bin 0 -> 163151 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/guice-3.0.jar  bin 0 -> 710492 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar  bin 0 -> 65012 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar  bin 0 -> 16781 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar  bin 0 -> 76643 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar  bin 0 -> 227500 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar  bin 0 -> 668564 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/javax.inject-1.jar  bin 0 -> 2497 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/jersey-core-1.9.jar  bin 0 -> 458739 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar  bin 0 -> 14786 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/jersey-server-1.9.jar  bin 0 -> 713089 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/junit-4.10.jar  bin 0 -> 253160 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/log4j-1.2.17.jar  bin 0 -> 489884 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar  bin 0 -> 1199572 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/paranamer-2.3.jar  bin 0 -> 29555 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar  bin 0 -> 533455 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar  bin 0 -> 995968 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/lib/xz-1.0.jar  bin 0 -> 94672 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-app-2.2.0-sources.jar  bin 0 -> 278860 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-app-2.2.0-test-sources.jar  bin 0 -> 144052 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-common-2.2.0-sources.jar  bin 0 -> 244744 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-common-2.2.0-test-sources.jar  bin 0 -> 24308 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-core-2.2.0-sources.jar  bin 0 -> 1008323 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-core-2.2.0-test-sources.jar  bin 0 -> 67089 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-hs-2.2.0-sources.jar  bin 0 -> 72681 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-hs-2.2.0-test-sources.jar  bin 0 -> 63255 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-hs-plugins-2.2.0-sources.jar  bin 0 -> 2394 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-hs-plugins-2.2.0-test-sources.jar  bin 0 -> 2352 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-jobclient-2.2.0-sources.jar  bin 0 -> 21193 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-jobclient-2.2.0-test-sources.jar  bin 0 -> 694739 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-shuffle-2.2.0-sources.jar  bin 0 -> 10600 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-client-shuffle-2.2.0-test-sources.jar  bin 0 -> 6453 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.2.0-sources.jar  bin 0 -> 695908 bytes
-rw-r--r--  aarch64/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.2.0-test-sources.jar  bin 0 -> 12964 bytes
-rw-r--r--  aarch64/share/hadoop/tools/lib/hadoop-archives-2.2.0.jar  bin 0 -> 21487 bytes
-rw-r--r--  aarch64/share/hadoop/tools/lib/hadoop-datajoin-2.2.0.jar  bin 0 -> 14547 bytes
-rw-r--r--  aarch64/share/hadoop/tools/lib/hadoop-distcp-2.2.0.jar  bin 0 -> 80387 bytes
-rw-r--r--  aarch64/share/hadoop/tools/lib/hadoop-extras-2.2.0.jar  bin 0 -> 62040 bytes
-rw-r--r--  aarch64/share/hadoop/tools/lib/hadoop-gridmix-2.2.0.jar  bin 0 -> 215354 bytes
-rw-r--r--  aarch64/share/hadoop/tools/lib/hadoop-rumen-2.2.0.jar  bin 0 -> 277586 bytes
-rw-r--r--  aarch64/share/hadoop/tools/lib/hadoop-streaming-2.2.0.jar  bin 0 -> 102790 bytes
-rw-r--r--  aarch64/share/hadoop/tools/sources/hadoop-archives-2.2.0-sources.jar  bin 0 -> 9636 bytes
-rw-r--r--  aarch64/share/hadoop/tools/sources/hadoop-archives-2.2.0-test-sources.jar  bin 0 -> 3185 bytes
-rw-r--r--  aarch64/share/hadoop/tools/sources/hadoop-datajoin-2.2.0-sources.jar  bin 0 -> 12200 bytes
-rw-r--r--  aarch64/share/hadoop/tools/sources/hadoop-datajoin-2.2.0-test-sources.jar  bin 0 -> 7197 bytes
-rw-r--r--  aarch64/share/hadoop/tools/sources/hadoop-distcp-2.2.0-sources.jar  bin 0 -> 59176 bytes
-rw-r--r--  aarch64/share/hadoop/tools/sources/hadoop-distcp-2.2.0-test-sources.jar  bin 0 -> 38610 bytes
-rw-r--r--  aarch64/share/hadoop/tools/sources/hadoop-extras-2.2.0-sources.jar  bin 0 -> 30647 bytes
-rw-r--r--  aarch64/share/hadoop/tools/sources/hadoop-extras-2.2.0-test-sources.jar  bin 0 -> 13893 bytes
-rw-r--r--  aarch64/share/hadoop/tools/sources/hadoop-gridmix-2.2.0-sources.jar  bin 0 -> 121404 bytes
-rw-r--r--  aarch64/share/hadoop/tools/sources/hadoop-gridmix-2.2.0-test-sources.jar  bin 0 -> 71676 bytes
-rw-r--r--  aarch64/share/hadoop/tools/sources/hadoop-rumen-2.2.0-sources.jar  bin 0 -> 170000 bytes
-rw-r--r--  aarch64/share/hadoop/tools/sources/hadoop-rumen-2.2.0-test-sources.jar  bin 0 -> 9314 bytes
-rw-r--r--  aarch64/share/hadoop/tools/sources/hadoop-streaming-2.2.0-sources.jar  bin 0 -> 71829 bytes
-rw-r--r--  aarch64/share/hadoop/tools/sources/hadoop-streaming-2.2.0-test-sources.jar  bin 0 -> 76355 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar  bin 0 -> 1158740 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar  bin 0 -> 32509 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar  bin 0 -> 13299 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar  bin 0 -> 94754 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar  bin 0 -> 1301644 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar  bin 0 -> 175522 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar  bin 0 -> 467789 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar  bin 0 -> 615701 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar  bin 0 -> 2137 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar  bin 0 -> 25701 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar  bin 0 -> 1930 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib-examples/hsqldb-2.0.0.jar  bin 0 -> 1256297 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/aopalliance-1.0.jar  bin 0 -> 4467 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/asm-3.2.jar  bin 0 -> 43398 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/avro-1.7.4.jar  bin 0 -> 303139 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/commons-compress-1.4.1.jar  bin 0 -> 241367 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/commons-io-2.1.jar  bin 0 -> 163151 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/guice-3.0.jar  bin 0 -> 710492 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/guice-servlet-3.0.jar  bin 0 -> 65012 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar  bin 0 -> 16781 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/hamcrest-core-1.1.jar  bin 0 -> 76643 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar  bin 0 -> 227500 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar  bin 0 -> 668564 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/javax.inject-1.jar  bin 0 -> 2497 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/jersey-core-1.9.jar  bin 0 -> 458739 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/jersey-guice-1.9.jar  bin 0 -> 14786 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/jersey-server-1.9.jar  bin 0 -> 713089 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/junit-4.10.jar  bin 0 -> 253160 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/log4j-1.2.17.jar  bin 0 -> 489884 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/netty-3.6.2.Final.jar  bin 0 -> 1199572 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/paranamer-2.3.jar  bin 0 -> 29555 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar  bin 0 -> 533455 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar  bin 0 -> 995968 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/lib/xz-1.0.jar  bin 0 -> 94672 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-api-2.2.0-sources.jar  bin 0 -> 360318 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-applications-distributedshell-2.2.0-sources.jar  bin 0 -> 19273 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-applications-distributedshell-2.2.0-test-sources.jar  bin 0 -> 6355 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0-sources.jar  bin 0 -> 6265 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0-test-sources.jar  bin 0 -> 4941 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-client-2.2.0-sources.jar  bin 0 -> 59384 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-client-2.2.0-test-sources.jar  bin 0 -> 35662 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-common-2.2.0-sources.jar  bin 0 -> 634756 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-common-2.2.0-test-sources.jar  bin 0 -> 79714 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-server-common-2.2.0-sources.jar  bin 0 -> 76814 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-server-common-2.2.0-test-sources.jar  bin 0 -> 7884 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-server-nodemanager-2.2.0-sources.jar  bin 0 -> 262437 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-server-nodemanager-2.2.0-test-sources.jar  bin 0 -> 158721 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-server-resourcemanager-2.2.0-sources.jar  bin 0 -> 387489 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-server-resourcemanager-2.2.0-test-sources.jar  bin 0 -> 246635 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-server-tests-2.2.0-test-sources.jar  bin 0 -> 18425 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-server-web-proxy-2.2.0-sources.jar  bin 0 -> 17741 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/sources/hadoop-yarn-server-web-proxy-2.2.0-test-sources.jar  bin 0 -> 5907 bytes
-rw-r--r--  aarch64/share/hadoop/yarn/test/hadoop-yarn-server-tests-2.2.0-tests.jar  bin 0 -> 35375 bytes
-rw-r--r--  conf/capacity-scheduler.xml  111
-rw-r--r--  conf/configuration.xsl  40
-rw-r--r--  conf/container-executor.cfg  4
-rw-r--r--  conf/core-site.xml  20
-rw-r--r--  conf/hadoop-env.cmd  81
-rw-r--r--  conf/hadoop-env.sh  77
-rw-r--r--  conf/hadoop-metrics.properties  75
-rw-r--r--  conf/hadoop-metrics2.properties  44
-rw-r--r--  conf/hadoop-policy.xml  219
-rw-r--r--  conf/hdfs-site.xml  21
-rw-r--r--  conf/httpfs-env.sh  41
-rw-r--r--  conf/httpfs-log4j.properties  35
-rw-r--r--  conf/httpfs-signature.secret  1
-rw-r--r--  conf/httpfs-site.xml  17
-rw-r--r--  conf/log4j.properties  231
-rw-r--r--  conf/mapred-env.cmd  20
-rw-r--r--  conf/mapred-env.sh  27
-rw-r--r--  conf/mapred-queues.xml.template  92
-rw-r--r--  conf/mapred-site.xml.template  21
-rw-r--r--  conf/slaves  1
-rw-r--r--  conf/ssl-client.xml.example  80
-rw-r--r--  conf/ssl-server.xml.example  77
-rw-r--r--  conf/yarn-env.cmd  60
-rw-r--r--  conf/yarn-env.sh  112
-rw-r--r--  conf/yarn-site.xml  19
-rwxr-xr-x  demo/teragen  14
-rwxr-xr-x  demo/terasort  11
-rwxr-xr-x  demo/teravalidate  11
-rw-r--r--  env.sh  40
595 files changed, 635502 insertions, 0 deletions
diff --git a/README b/README
new file mode 100644
index 0000000..916aa48
--- /dev/null
+++ b/README
@@ -0,0 +1,61 @@
+This directory contains a pre-built version of Hadoop for
+demonstrating OpenJDK-8 on aarch64 systems. This build of Hadoop
+deliberately contains no native code.
+
+Setup
+=====
+
+To set up the environment, source the env.sh script:
+
+ $ . env.sh
+
+You can check that the installation is complete by verifying that
+hadoop is on your PATH:
+
+ $ which hadoop
+ $ hadoop version
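+
+If the environment is set up correctly, "hadoop version" should report
+the bundled release (2.2.0 in this build).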
+
+TeraSort Demo
+=============
+
+The goal of TeraSort is to sort a large amount of data as fast as
+possible. The example comprises the following steps:
+
+ 1) Generating the input data via teragen
+ 2) Running the actual terasort on the input data
+ 3) Validating the sorted output data via teravalidate
+
+Those discrete steps map to the following shell scripts:
+
+ $ teragen <n-gigabytes> <output-filename>
+ $ terasort <input-filename> <output-filename>
+ $ teravalidate <input-filename> <output-filename>
+
+For example:
+
+ $ teragen 1 teragen-1GB
+ $ terasort teragen-1GB terasort-1GB-sorted
+ $ teravalidate terasort-1GB-sorted terasort-1GB-validated
+
+Available Demos
+===============
+
+  aggregatewordcount: An Aggregate-based map/reduce program that counts the words in the input files.
+  aggregatewordhist: An Aggregate-based map/reduce program that computes the histogram of the words in the input files.
+  dbcount: An example job that counts pageviews from a database.
+ grep: A map/reduce program that counts the matches of a regex in the input.
+  join: A job that effects a join over sorted, equally partitioned datasets.
+ multifilewc: A job that counts words from several files.
+  pentomino: A map/reduce tile-laying program to find solutions to pentomino problems.
+  pi: A map/reduce program that estimates Pi using a Monte Carlo method.
+ randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
+ randomwriter: A map/reduce program that writes 10GB of random data per node.
+ secondarysort: An example defining a secondary sort to the reduce.
+ sleep: A job that sleeps at each map and reduce task.
+ sort: A map/reduce program that sorts the data written by the random writer.
+ sudoku: A sudoku solver.
+  teragen: Generates input data for the terasort.
+  terasort: Runs the terasort.
+  teravalidate: Checks the results of the terasort.
+ wordcount: A map/reduce program that counts the words in the input files.
+
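+Each demo can be run through the standard "hadoop jar" mechanism. As an
+illustrative sketch (the examples jar location below is an assumption based
+on this directory layout, not something the demo scripts define):
+
+ $ hadoop fs -mkdir -p input
+ $ hadoop fs -put some-local-text-file.txt input
+ $ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount input output
+ $ hadoop fs -cat 'output/part-r-*'
+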
diff --git a/aarch64/bin/container-executor b/aarch64/bin/container-executor
new file mode 100755
index 0000000..04f9973
--- /dev/null
+++ b/aarch64/bin/container-executor
Binary files differ
diff --git a/aarch64/bin/hadoop b/aarch64/bin/hadoop
new file mode 100755
index 0000000..be91771
--- /dev/null
+++ b/aarch64/bin/hadoop
@@ -0,0 +1,136 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This script runs the hadoop core commands.
+
+bin=`which $0`
+bin=`dirname ${bin}`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hadoop-config.sh
+
+function print_usage(){
+ echo "Usage: hadoop [--config confdir] COMMAND"
+ echo " where COMMAND is one of:"
+ echo " fs run a generic filesystem user client"
+ echo " version print the version"
+ echo " jar <jar> run a jar file"
+ echo " checknative [-a|-h] check native hadoop and compression libraries availability"
+ echo " distcp <srcurl> <desturl> copy file or directories recursively"
+ echo " archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive"
+ echo " classpath prints the class path needed to get the"
+ echo " Hadoop jar and the required libraries"
+ echo " daemonlog get/set the log level for each daemon"
+ echo " or"
+ echo " CLASSNAME run the class named CLASSNAME"
+ echo ""
+ echo "Most commands print help when invoked w/o parameters."
+}
+
+if [ $# = 0 ]; then
+ print_usage
+ exit
+fi
+
+COMMAND=$1
+case $COMMAND in
+ # usage flags
+ --help|-help|-h)
+ print_usage
+ exit
+ ;;
+
+ #hdfs commands
+ namenode|secondarynamenode|datanode|dfs|dfsadmin|fsck|balancer|fetchdt|oiv|dfsgroups|portmap|nfs3)
+ echo "DEPRECATED: Use of this script to execute hdfs command is deprecated." 1>&2
+ echo "Instead use the hdfs command for it." 1>&2
+ echo "" 1>&2
+ #try to locate hdfs and if present, delegate to it.
+ shift
+ if [ -f "${HADOOP_HDFS_HOME}"/bin/hdfs ]; then
+ exec "${HADOOP_HDFS_HOME}"/bin/hdfs ${COMMAND/dfsgroups/groups} "$@"
+ elif [ -f "${HADOOP_PREFIX}"/bin/hdfs ]; then
+ exec "${HADOOP_PREFIX}"/bin/hdfs ${COMMAND/dfsgroups/groups} "$@"
+ else
+ echo "HADOOP_HDFS_HOME not found!"
+ exit 1
+ fi
+ ;;
+
+ #mapred commands for backwards compatibility
+ pipes|job|queue|mrgroups|mradmin|jobtracker|tasktracker)
+ echo "DEPRECATED: Use of this script to execute mapred command is deprecated." 1>&2
+ echo "Instead use the mapred command for it." 1>&2
+ echo "" 1>&2
+ #try to locate mapred and if present, delegate to it.
+ shift
+ if [ -f "${HADOOP_MAPRED_HOME}"/bin/mapred ]; then
+ exec "${HADOOP_MAPRED_HOME}"/bin/mapred ${COMMAND/mrgroups/groups} "$@"
+ elif [ -f "${HADOOP_PREFIX}"/bin/mapred ]; then
+ exec "${HADOOP_PREFIX}"/bin/mapred ${COMMAND/mrgroups/groups} "$@"
+ else
+ echo "HADOOP_MAPRED_HOME not found!"
+ exit 1
+ fi
+ ;;
+
+ classpath)
+ echo $CLASSPATH
+ exit
+ ;;
+
+ #core commands
+ *)
+ # the core commands
+ if [ "$COMMAND" = "fs" ] ; then
+ CLASS=org.apache.hadoop.fs.FsShell
+ elif [ "$COMMAND" = "version" ] ; then
+ CLASS=org.apache.hadoop.util.VersionInfo
+ elif [ "$COMMAND" = "jar" ] ; then
+ CLASS=org.apache.hadoop.util.RunJar
+ elif [ "$COMMAND" = "checknative" ] ; then
+ CLASS=org.apache.hadoop.util.NativeLibraryChecker
+ elif [ "$COMMAND" = "distcp" ] ; then
+ CLASS=org.apache.hadoop.tools.DistCp
+ CLASSPATH=${CLASSPATH}:${TOOL_PATH}
+ elif [ "$COMMAND" = "daemonlog" ] ; then
+ CLASS=org.apache.hadoop.log.LogLevel
+ elif [ "$COMMAND" = "archive" ] ; then
+ CLASS=org.apache.hadoop.tools.HadoopArchives
+ CLASSPATH=${CLASSPATH}:${TOOL_PATH}
+ elif [[ "$COMMAND" = -* ]] ; then
+ # class and package names cannot begin with a -
+ echo "Error: No command named \`$COMMAND' was found. Perhaps you meant \`hadoop ${COMMAND#-}'"
+ exit 1
+ else
+ CLASS=$COMMAND
+ fi
+ shift
+
+ # Always respect HADOOP_OPTS and HADOOP_CLIENT_OPTS
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
+
+ #make sure security appender is turned off
+ HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,NullAppender}"
+
+ export CLASSPATH=$CLASSPATH
+ exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
+ ;;
+
+esac
diff --git a/aarch64/bin/hadoop.cmd b/aarch64/bin/hadoop.cmd
new file mode 100755
index 0000000..63b2945
--- /dev/null
+++ b/aarch64/bin/hadoop.cmd
@@ -0,0 +1,240 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+
+
+@rem This script runs the hadoop core commands.
+
+@rem Environment Variables
+@rem
+@rem   JAVA_HOME        The java implementation to use.
+@rem
+@rem HADOOP_CLASSPATH Extra Java CLASSPATH entries.
+@rem
+@rem HADOOP_USER_CLASSPATH_FIRST When defined, the HADOOP_CLASSPATH is
+@rem added in the beginning of the global
+@rem classpath. Can be defined, for example,
+@rem by doing
+@rem export HADOOP_USER_CLASSPATH_FIRST=true
+@rem
+@rem HADOOP_HEAPSIZE The maximum amount of heap to use, in MB.
+@rem Default is 1000.
+@rem
+@rem HADOOP_OPTS Extra Java runtime options.
+@rem
+@rem   HADOOP_CLIENT_OPTS       Extra options applied when a client-side
+@rem                            command (fs, dfs, fsck, dfsadmin etc) is run.
+@rem
+@rem   HADOOP_{COMMAND}_OPTS    Extra options for one specific command, e.g.
+@rem                            HADOOP_JT_OPTS applies only to the JobTracker.
+@rem
+@rem HADOOP_CONF_DIR Alternate conf dir. Default is ${HADOOP_HOME}/conf.
+@rem
+@rem HADOOP_ROOT_LOGGER The root appender. Default is INFO,console
+@rem
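+@rem Illustrative example (an assumption, not part of this script): to give
+@rem client commands a larger heap before invoking hadoop, you might run
+@rem
+@rem   set HADOOP_HEAPSIZE=2000
+@rem   hadoop jar myjob.jar
+@rem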
+
+if not defined HADOOP_BIN_PATH (
+ set HADOOP_BIN_PATH=%~dp0
+)
+
+if "%HADOOP_BIN_PATH:~-1%" == "\" (
+ set HADOOP_BIN_PATH=%HADOOP_BIN_PATH:~0,-1%
+)
+
+call :updatepath %HADOOP_BIN_PATH%
+
+:main
+ setlocal enabledelayedexpansion
+
+ set DEFAULT_LIBEXEC_DIR=%HADOOP_BIN_PATH%\..\libexec
+ if not defined HADOOP_LIBEXEC_DIR (
+ set HADOOP_LIBEXEC_DIR=%DEFAULT_LIBEXEC_DIR%
+ )
+
+ call %HADOOP_LIBEXEC_DIR%\hadoop-config.cmd %*
+ if "%1" == "--config" (
+ shift
+ shift
+ )
+
+ set hadoop-command=%1
+ if not defined hadoop-command (
+ goto print_usage
+ )
+
+ call :make_command_arguments %*
+
+ set hdfscommands=namenode secondarynamenode datanode dfs dfsadmin fsck balancer fetchdt oiv dfsgroups
+ for %%i in ( %hdfscommands% ) do (
+ if %hadoop-command% == %%i set hdfscommand=true
+ )
+ if defined hdfscommand (
+ @echo DEPRECATED: Use of this script to execute hdfs command is deprecated. 1>&2
+ @echo Instead use the hdfs command for it. 1>&2
+ if exist %HADOOP_HDFS_HOME%\bin\hdfs.cmd (
+ call %HADOOP_HDFS_HOME%\bin\hdfs.cmd %*
+ goto :eof
+ ) else if exist %HADOOP_HOME%\bin\hdfs.cmd (
+ call %HADOOP_HOME%\bin\hdfs.cmd %*
+ goto :eof
+ ) else (
+ echo HADOOP_HDFS_HOME not found!
+ goto :eof
+ )
+ )
+
+ set mapredcommands=pipes job queue mrgroups mradmin jobtracker tasktracker
+ for %%i in ( %mapredcommands% ) do (
+ if %hadoop-command% == %%i set mapredcommand=true
+ )
+ if defined mapredcommand (
+ @echo DEPRECATED: Use of this script to execute mapred command is deprecated. 1>&2
+ @echo Instead use the mapred command for it. 1>&2
+ if exist %HADOOP_MAPRED_HOME%\bin\mapred.cmd (
+ call %HADOOP_MAPRED_HOME%\bin\mapred.cmd %*
+ goto :eof
+ ) else if exist %HADOOP_HOME%\bin\mapred.cmd (
+ call %HADOOP_HOME%\bin\mapred.cmd %*
+ goto :eof
+ ) else (
+ echo HADOOP_MAPRED_HOME not found!
+ goto :eof
+ )
+ )
+
+ if %hadoop-command% == classpath (
+ @echo %CLASSPATH%
+ goto :eof
+ )
+
+ set corecommands=fs version jar checknative distcp daemonlog archive
+ for %%i in ( %corecommands% ) do (
+ if %hadoop-command% == %%i set corecommand=true
+ )
+ if defined corecommand (
+ call :%hadoop-command%
+ ) else (
+ set CLASSPATH=%CLASSPATH%;%CD%
+ set CLASS=%hadoop-command%
+ )
+
+ set path=%PATH%;%HADOOP_BIN_PATH%
+
+ @rem Always respect HADOOP_OPTS and HADOOP_CLIENT_OPTS
+ set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_CLIENT_OPTS%
+
+ @rem make sure security appender is turned off
+ if not defined HADOOP_SECURITY_LOGGER (
+ set HADOOP_SECURITY_LOGGER=INFO,NullAppender
+ )
+ set HADOOP_OPTS=%HADOOP_OPTS% -Dhadoop.security.logger=%HADOOP_SECURITY_LOGGER%
+
+ call %JAVA% %JAVA_HEAP_MAX% %HADOOP_OPTS% -classpath %CLASSPATH% %CLASS% %hadoop-command-arguments%
+
+ goto :eof
+
+:fs
+ set CLASS=org.apache.hadoop.fs.FsShell
+ goto :eof
+
+:version
+ set CLASS=org.apache.hadoop.util.VersionInfo
+ goto :eof
+
+:jar
+ set CLASS=org.apache.hadoop.util.RunJar
+ goto :eof
+
+:checknative
+ set CLASS=org.apache.hadoop.util.NativeLibraryChecker
+ goto :eof
+
+:distcp
+ set CLASS=org.apache.hadoop.tools.DistCp
+ set CLASSPATH=%CLASSPATH%;%TOOL_PATH%
+ goto :eof
+
+:daemonlog
+ set CLASS=org.apache.hadoop.log.LogLevel
+ goto :eof
+
+:archive
+ set CLASS=org.apache.hadoop.tools.HadoopArchives
+ set CLASSPATH=%CLASSPATH%;%TOOL_PATH%
+ goto :eof
+
+:updatepath
+ set path_to_add=%*
+ set current_path_comparable=%path%
+ set current_path_comparable=%current_path_comparable: =_%
+ set current_path_comparable=%current_path_comparable:(=_%
+ set current_path_comparable=%current_path_comparable:)=_%
+ set path_to_add_comparable=%path_to_add%
+ set path_to_add_comparable=%path_to_add_comparable: =_%
+ set path_to_add_comparable=%path_to_add_comparable:(=_%
+ set path_to_add_comparable=%path_to_add_comparable:)=_%
+
+ for %%i in ( %current_path_comparable% ) do (
+ if /i "%%i" == "%path_to_add_comparable%" (
+ set path_to_add_exist=true
+ )
+ )
+ set system_path_comparable=
+ set path_to_add_comparable=
+ if not defined path_to_add_exist path=%path_to_add%;%path%
+ set path_to_add=
+ goto :eof
+
+@rem This changes %1, %2 etc. Hence those cannot be used after calling this.
+:make_command_arguments
+ if "%1" == "--config" (
+ shift
+ shift
+ )
+ if [%2] == [] goto :eof
+ shift
+ set _arguments=
+ :MakeCmdArgsLoop
+ if [%1]==[] goto :EndLoop
+
+ if not defined _arguments (
+ set _arguments=%1
+ ) else (
+ set _arguments=!_arguments! %1
+ )
+ shift
+ goto :MakeCmdArgsLoop
+ :EndLoop
+ set hadoop-command-arguments=%_arguments%
+ goto :eof
+
+:print_usage
+ @echo Usage: hadoop [--config confdir] COMMAND
+ @echo where COMMAND is one of:
+ @echo fs run a generic filesystem user client
+ @echo version print the version
+ @echo jar ^<jar^> run a jar file
+ @echo checknative [-a^|-h] check native hadoop and compression libraries availability
+ @echo distcp ^<srcurl^> ^<desturl^> copy file or directories recursively
+ @echo archive -archiveName NAME -p ^<parent path^> ^<src^>* ^<dest^> create a hadoop archive
+ @echo classpath prints the class path needed to get the
+ @echo Hadoop jar and the required libraries
+ @echo daemonlog get/set the log level for each daemon
+ @echo or
+ @echo CLASSNAME run the class named CLASSNAME
+ @echo.
+ @echo Most commands print help when invoked w/o parameters.
+
+endlocal
diff --git a/aarch64/bin/hdfs b/aarch64/bin/hdfs
new file mode 100755
index 0000000..24bb11f
--- /dev/null
+++ b/aarch64/bin/hdfs
@@ -0,0 +1,203 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Environment Variables
+#
+#   JSVC_HOME            directory containing the jsvc binary. Required for
+#                        starting a secure datanode.
+#
+# JSVC_OUTFILE path to jsvc output file. Defaults to
+# $HADOOP_LOG_DIR/jsvc.out.
+#
+# JSVC_ERRFILE path to jsvc error file. Defaults to $HADOOP_LOG_DIR/jsvc.err.
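+#
+# Illustrative example (the JSVC_HOME path is an assumption, not part of
+# this script): a secure datanode might be started as root with
+#
+#   export HADOOP_SECURE_DN_USER=hdfs
+#   export JSVC_HOME=/opt/commons-daemon
+#   hdfs datanode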
+
+bin=`which $0`
+bin=`dirname ${bin}`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hdfs-config.sh
+
+function print_usage(){
+ echo "Usage: hdfs [--config confdir] COMMAND"
+ echo " where COMMAND is one of:"
+ echo " dfs run a filesystem command on the file systems supported in Hadoop."
+ echo " namenode -format format the DFS filesystem"
+ echo " secondarynamenode run the DFS secondary namenode"
+ echo " namenode run the DFS namenode"
+ echo " journalnode run the DFS journalnode"
+ echo " zkfc run the ZK Failover Controller daemon"
+ echo " datanode run a DFS datanode"
+ echo " dfsadmin run a DFS admin client"
+ echo " haadmin run a DFS HA admin client"
+ echo " fsck run a DFS filesystem checking utility"
+ echo " balancer run a cluster balancing utility"
+ echo " jmxget get JMX exported values from NameNode or DataNode."
+ echo " oiv apply the offline fsimage viewer to an fsimage"
+ echo " oev apply the offline edits viewer to an edits file"
+ echo " fetchdt fetch a delegation token from the NameNode"
+ echo " getconf get config values from configuration"
+ echo " groups get the groups which users belong to"
+ echo " snapshotDiff diff two snapshots of a directory or diff the"
+ echo " current directory contents with a snapshot"
+ echo " lsSnapshottableDir list all snapshottable dirs owned by the current user"
+ echo " Use -help to see options"
+ echo " portmap run a portmap service"
+ echo " nfs3 run an NFS version 3 gateway"
+ echo ""
+ echo "Most commands print help when invoked w/o parameters."
+}
+
+if [ $# = 0 ]; then
+ print_usage
+ exit
+fi
+
+COMMAND=$1
+shift
+
+case $COMMAND in
+ # usage flags
+ --help|-help|-h)
+ print_usage
+ exit
+ ;;
+esac
+
+# Determine if we're starting a secure datanode, and if so, redefine appropriate variables
+if [ "$COMMAND" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n "$HADOOP_SECURE_DN_USER" ]; then
+ if [ -n "$JSVC_HOME" ]; then
+ if [ -n "$HADOOP_SECURE_DN_PID_DIR" ]; then
+ HADOOP_PID_DIR=$HADOOP_SECURE_DN_PID_DIR
+ fi
+
+ if [ -n "$HADOOP_SECURE_DN_LOG_DIR" ]; then
+ HADOOP_LOG_DIR=$HADOOP_SECURE_DN_LOG_DIR
+ HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.log.dir=$HADOOP_LOG_DIR"
+ fi
+
+ HADOOP_IDENT_STRING=$HADOOP_SECURE_DN_USER
+ HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.id.str=$HADOOP_IDENT_STRING"
+ starting_secure_dn="true"
+ else
+ echo "It looks like you're trying to start a secure DN, but \$JSVC_HOME"\
+ "isn't set. Falling back to starting insecure DN."
+ fi
+fi
+
+if [ "$COMMAND" = "namenode" ] ; then
+ CLASS='org.apache.hadoop.hdfs.server.namenode.NameNode'
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_NAMENODE_OPTS"
+elif [ "$COMMAND" = "zkfc" ] ; then
+ CLASS='org.apache.hadoop.hdfs.tools.DFSZKFailoverController'
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_ZKFC_OPTS"
+elif [ "$COMMAND" = "secondarynamenode" ] ; then
+ CLASS='org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode'
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_SECONDARYNAMENODE_OPTS"
+elif [ "$COMMAND" = "datanode" ] ; then
+ CLASS='org.apache.hadoop.hdfs.server.datanode.DataNode'
+ if [ "$starting_secure_dn" = "true" ]; then
+ HADOOP_OPTS="$HADOOP_OPTS -jvm server $HADOOP_DATANODE_OPTS"
+ else
+ HADOOP_OPTS="$HADOOP_OPTS -server $HADOOP_DATANODE_OPTS"
+ fi
+elif [ "$COMMAND" = "journalnode" ] ; then
+ CLASS='org.apache.hadoop.hdfs.qjournal.server.JournalNode'
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_JOURNALNODE_OPTS"
+elif [ "$COMMAND" = "dfs" ] ; then
+ CLASS=org.apache.hadoop.fs.FsShell
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
+elif [ "$COMMAND" = "dfsadmin" ] ; then
+ CLASS=org.apache.hadoop.hdfs.tools.DFSAdmin
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
+elif [ "$COMMAND" = "haadmin" ] ; then
+ CLASS=org.apache.hadoop.hdfs.tools.DFSHAAdmin
+ CLASSPATH=${CLASSPATH}:${TOOL_PATH}
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
+elif [ "$COMMAND" = "fsck" ] ; then
+ CLASS=org.apache.hadoop.hdfs.tools.DFSck
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
+elif [ "$COMMAND" = "balancer" ] ; then
+ CLASS=org.apache.hadoop.hdfs.server.balancer.Balancer
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_BALANCER_OPTS"
+elif [ "$COMMAND" = "jmxget" ] ; then
+ CLASS=org.apache.hadoop.hdfs.tools.JMXGet
+elif [ "$COMMAND" = "oiv" ] ; then
+ CLASS=org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer
+elif [ "$COMMAND" = "oev" ] ; then
+ CLASS=org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer
+elif [ "$COMMAND" = "fetchdt" ] ; then
+ CLASS=org.apache.hadoop.hdfs.tools.DelegationTokenFetcher
+elif [ "$COMMAND" = "getconf" ] ; then
+ CLASS=org.apache.hadoop.hdfs.tools.GetConf
+elif [ "$COMMAND" = "groups" ] ; then
+ CLASS=org.apache.hadoop.hdfs.tools.GetGroups
+elif [ "$COMMAND" = "snapshotDiff" ] ; then
+ CLASS=org.apache.hadoop.hdfs.tools.snapshot.SnapshotDiff
+elif [ "$COMMAND" = "lsSnapshottableDir" ] ; then
+ CLASS=org.apache.hadoop.hdfs.tools.snapshot.LsSnapshottableDir
+elif [ "$COMMAND" = "portmap" ] ; then
+ CLASS=org.apache.hadoop.portmap.Portmap
+elif [ "$COMMAND" = "nfs3" ] ; then
+ CLASS=org.apache.hadoop.hdfs.nfs.nfs3.Nfs3
+else
+ CLASS="$COMMAND"
+fi
+
+export CLASSPATH=$CLASSPATH
+
+HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,NullAppender}"
+
+# Check to see if we should start a secure datanode
+if [ "$starting_secure_dn" = "true" ]; then
+ if [ "$HADOOP_PID_DIR" = "" ]; then
+ HADOOP_SECURE_DN_PID="/tmp/hadoop_secure_dn.pid"
+ else
+ HADOOP_SECURE_DN_PID="$HADOOP_PID_DIR/hadoop_secure_dn.pid"
+ fi
+
+ JSVC=$JSVC_HOME/jsvc
+ if [ ! -f $JSVC ]; then
+ echo "JSVC_HOME is not set correctly so jsvc cannot be found. Jsvc is required to run secure datanodes. "
+ echo "Please download and install jsvc from http://archive.apache.org/dist/commons/daemon/binaries/ "\
+ "and set JSVC_HOME to the directory containing the jsvc binary."
+ exit
+ fi
+
+ if [[ ! $JSVC_OUTFILE ]]; then
+ JSVC_OUTFILE="$HADOOP_LOG_DIR/jsvc.out"
+ fi
+
+ if [[ ! $JSVC_ERRFILE ]]; then
+ JSVC_ERRFILE="$HADOOP_LOG_DIR/jsvc.err"
+ fi
+
+ exec "$JSVC" \
+ -Dproc_$COMMAND -outfile "$JSVC_OUTFILE" \
+ -errfile "$JSVC_ERRFILE" \
+ -pidfile "$HADOOP_SECURE_DN_PID" \
+ -nodetach \
+ -user "$HADOOP_SECURE_DN_USER" \
+ -cp "$CLASSPATH" \
+ $JAVA_HEAP_MAX $HADOOP_OPTS \
+ org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter "$@"
+else
+ # run it
+ exec "$JAVA" -Dproc_$COMMAND $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
+fi
+
diff --git a/aarch64/bin/hdfs.cmd b/aarch64/bin/hdfs.cmd
new file mode 100755
index 0000000..70af80c
--- /dev/null
+++ b/aarch64/bin/hdfs.cmd
@@ -0,0 +1,171 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+@rem
+setlocal enabledelayedexpansion
+
+if not defined HADOOP_BIN_PATH (
+ set HADOOP_BIN_PATH=%~dp0
+)
+
+if "%HADOOP_BIN_PATH:~-1%" == "\" (
+ set HADOOP_BIN_PATH=%HADOOP_BIN_PATH:~0,-1%
+)
+
+set DEFAULT_LIBEXEC_DIR=%HADOOP_BIN_PATH%\..\libexec
+if not defined HADOOP_LIBEXEC_DIR (
+ set HADOOP_LIBEXEC_DIR=%DEFAULT_LIBEXEC_DIR%
+)
+
+call %HADOOP_LIBEXEC_DIR%\hdfs-config.cmd %*
+if "%1" == "--config" (
+ shift
+ shift
+)
+
+:main
+ if exist %HADOOP_CONF_DIR%\hadoop-env.cmd (
+ call %HADOOP_CONF_DIR%\hadoop-env.cmd
+ )
+
+ set hdfs-command=%1
+ call :make_command_arguments %*
+
+ if not defined hdfs-command (
+ goto print_usage
+ )
+
+ call :%hdfs-command% %hdfs-command-arguments%
+ set java_arguments=%JAVA_HEAP_MAX% %HADOOP_OPTS% -classpath %CLASSPATH% %CLASS% %hdfs-command-arguments%
+ call %JAVA% %java_arguments%
+
+goto :eof
+
+:namenode
+ set CLASS=org.apache.hadoop.hdfs.server.namenode.NameNode
+ set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_NAMENODE_OPTS%
+ goto :eof
+
+:zkfc
+ set CLASS=org.apache.hadoop.hdfs.tools.DFSZKFailoverController
+ set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_ZKFC_OPTS%
+ goto :eof
+
+:secondarynamenode
+ set CLASS=org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
+ set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_SECONDARYNAMENODE_OPTS%
+ goto :eof
+
+:datanode
+ set CLASS=org.apache.hadoop.hdfs.server.datanode.DataNode
+ set HADOOP_OPTS=%HADOOP_OPTS% -server %HADOOP_DATANODE_OPTS%
+ goto :eof
+
+:dfs
+ set CLASS=org.apache.hadoop.fs.FsShell
+ set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_CLIENT_OPTS%
+ goto :eof
+
+:dfsadmin
+ set CLASS=org.apache.hadoop.hdfs.tools.DFSAdmin
+ set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_CLIENT_OPTS%
+ goto :eof
+
+:haadmin
+ set CLASS=org.apache.hadoop.hdfs.tools.DFSHAAdmin
+ set CLASSPATH=%CLASSPATH%;%TOOL_PATH%
+ set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_CLIENT_OPTS%
+ goto :eof
+
+:fsck
+ set CLASS=org.apache.hadoop.hdfs.tools.DFSck
+ set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_CLIENT_OPTS%
+ goto :eof
+
+:balancer
+ set CLASS=org.apache.hadoop.hdfs.server.balancer.Balancer
+ set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_BALANCER_OPTS%
+ goto :eof
+
+:jmxget
+ set CLASS=org.apache.hadoop.hdfs.tools.JMXGet
+ goto :eof
+
+:oiv
+ set CLASS=org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer
+ goto :eof
+
+:oev
+ set CLASS=org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer
+ goto :eof
+
+:fetchdt
+ set CLASS=org.apache.hadoop.hdfs.tools.DelegationTokenFetcher
+ goto :eof
+
+:getconf
+ set CLASS=org.apache.hadoop.hdfs.tools.GetConf
+ goto :eof
+
+:groups
+ set CLASS=org.apache.hadoop.hdfs.tools.GetGroups
+ goto :eof
+
+@rem This changes %1, %2 etc. Hence those cannot be used after calling this.
+:make_command_arguments
+ if "%1" == "--config" (
+ shift
+ shift
+ )
+ if [%2] == [] goto :eof
+ shift
+ set _hdfsarguments=
+ :MakeCmdArgsLoop
+ if [%1]==[] goto :EndLoop
+
+ if not defined _hdfsarguments (
+ set _hdfsarguments=%1
+ ) else (
+ set _hdfsarguments=!_hdfsarguments! %1
+ )
+ shift
+ goto :MakeCmdArgsLoop
+ :EndLoop
+ set hdfs-command-arguments=%_hdfsarguments%
+ goto :eof
+
+:print_usage
+ @echo Usage: hdfs [--config confdir] COMMAND
+ @echo where COMMAND is one of:
+ @echo dfs run a filesystem command on the file systems supported in Hadoop.
+ @echo namenode -format format the DFS filesystem
+ @echo secondarynamenode run the DFS secondary namenode
+ @echo namenode run the DFS namenode
+ @echo zkfc run the ZK Failover Controller daemon
+ @echo datanode run a DFS datanode
+ @echo dfsadmin run a DFS admin client
+ @echo fsck run a DFS filesystem checking utility
+ @echo balancer run a cluster balancing utility
+ @echo jmxget get JMX exported values from NameNode or DataNode.
+ @echo oiv apply the offline fsimage viewer to an fsimage
+ @echo oev apply the offline edits viewer to an edits file
+ @echo fetchdt fetch a delegation token from the NameNode
+ @echo getconf get config values from configuration
+ @echo groups get the groups which users belong to
+ @echo Use -help to see options
+ @echo.
+ @echo Most commands print help when invoked w/o parameters.
+
+endlocal
diff --git a/aarch64/bin/mapred b/aarch64/bin/mapred
new file mode 100755
index 0000000..531fd95
--- /dev/null
+++ b/aarch64/bin/mapred
@@ -0,0 +1,148 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+bin=`which $0`
+bin=`dirname ${bin}`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+if [ -e ${HADOOP_LIBEXEC_DIR}/mapred-config.sh ]; then
+ . ${HADOOP_LIBEXEC_DIR}/mapred-config.sh
+else
+ . "$bin/mapred-config.sh"
+fi
+
+function print_usage(){
+ echo "Usage: mapred [--config confdir] COMMAND"
+ echo " where COMMAND is one of:"
+ echo " pipes run a Pipes job"
+ echo " job manipulate MapReduce jobs"
+ echo " queue get information regarding JobQueues"
+ echo " classpath prints the class path needed for running"
+ echo " mapreduce subcommands"
+ echo " historyserver run job history servers as a standalone daemon"
+ echo " distcp <srcurl> <desturl> copy file or directories recursively"
+ echo " archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive"
+ echo ""
+ echo "Most commands print help when invoked w/o parameters."
+}
+
+if [ $# = 0 ]; then
+ print_usage
+ exit
+fi
+
+COMMAND=$1
+shift
+
+case $COMMAND in
+ # usage flags
+ --help|-help|-h)
+ print_usage
+ exit
+ ;;
+esac
+
+if [ "$COMMAND" = "job" ] ; then
+ CLASS=org.apache.hadoop.mapred.JobClient
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
+elif [ "$COMMAND" = "queue" ] ; then
+ CLASS=org.apache.hadoop.mapred.JobQueueClient
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
+elif [ "$COMMAND" = "pipes" ] ; then
+ CLASS=org.apache.hadoop.mapred.pipes.Submitter
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
+elif [ "$COMMAND" = "sampler" ] ; then
+ CLASS=org.apache.hadoop.mapred.lib.InputSampler
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
+elif [ "$COMMAND" = "classpath" ] ; then
+ echo -n
+elif [ "$COMMAND" = "historyserver" ] ; then
+ CLASS=org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer
+ HADOOP_OPTS="$HADOOP_OPTS -Dmapred.jobsummary.logger=${HADOOP_JHS_LOGGER:-INFO,console} $HADOOP_JOB_HISTORYSERVER_OPTS"
+ if [ "$HADOOP_JOB_HISTORYSERVER_HEAPSIZE" != "" ]; then
+ JAVA_HEAP_MAX="-Xmx""$HADOOP_JOB_HISTORYSERVER_HEAPSIZE""m"
+ fi
+elif [ "$COMMAND" = "mradmin" ] \
+ || [ "$COMMAND" = "jobtracker" ] \
+ || [ "$COMMAND" = "tasktracker" ] \
+ || [ "$COMMAND" = "groups" ] ; then
+ echo "Sorry, the $COMMAND command is no longer supported."
+ echo "You may find similar functionality with the \"yarn\" shell command."
+ print_usage
+ exit
+elif [ "$COMMAND" = "distcp" ] ; then
+ CLASS=org.apache.hadoop.tools.DistCp
+ CLASSPATH=${CLASSPATH}:${TOOL_PATH}
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
+elif [ "$COMMAND" = "archive" ] ; then
+ CLASS=org.apache.hadoop.tools.HadoopArchives
+ CLASSPATH=${CLASSPATH}:${TOOL_PATH}
+ HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
+else
+ echo $COMMAND - invalid command
+ print_usage
+ exit
+fi
+
+# for developers, add mapred classes to CLASSPATH
+if [ -d "$HADOOP_MAPRED_HOME/build/classes" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_MAPRED_HOME/build/classes
+fi
+if [ -d "$HADOOP_MAPRED_HOME/build/webapps" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_MAPRED_HOME/build
+fi
+if [ -d "$HADOOP_MAPRED_HOME/build/test/classes" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_MAPRED_HOME/build/test/classes
+fi
+if [ -d "$HADOOP_MAPRED_HOME/build/tools" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_MAPRED_HOME/build/tools
+fi
+
+# for releases, add core mapred jar & webapps to CLASSPATH
+if [ -d "$HADOOP_PREFIX/${MAPRED_DIR}/webapps" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_PREFIX/${MAPRED_DIR}
+fi
+for f in $HADOOP_MAPRED_HOME/${MAPRED_DIR}/*.jar; do
+ CLASSPATH=${CLASSPATH}:$f;
+done
+
+# Need YARN jars also
+for f in $HADOOP_YARN_HOME/${YARN_DIR}/*.jar; do
+ CLASSPATH=${CLASSPATH}:$f;
+done
+
+# add libs to CLASSPATH
+for f in $HADOOP_MAPRED_HOME/${MAPRED_LIB_JARS_DIR}/*.jar; do
+ CLASSPATH=${CLASSPATH}:$f;
+done
+
+# add modules to CLASSPATH
+for f in $HADOOP_MAPRED_HOME/modules/*.jar; do
+ CLASSPATH=${CLASSPATH}:$f;
+done
+
+if [ "$COMMAND" = "classpath" ] ; then
+ echo $CLASSPATH
+ exit
+fi
+
+HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,NullAppender}"
+
+export CLASSPATH
+exec "$JAVA" -Dproc_$COMMAND $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
diff --git a/aarch64/bin/mapred.cmd b/aarch64/bin/mapred.cmd
new file mode 100755
index 0000000..b2d53fa
--- /dev/null
+++ b/aarch64/bin/mapred.cmd
@@ -0,0 +1,195 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+
+@rem The Hadoop mapred command script
+
+setlocal enabledelayedexpansion
+
+if not defined HADOOP_BIN_PATH (
+ set HADOOP_BIN_PATH=%~dp0
+)
+
+if "%HADOOP_BIN_PATH:~`%" == "\" (
+ set HADOOP_BIN_PATH=%HADOOP_BIN_PATH:~0,-1%
+)
+
+set DEFAULT_LIBEXEC_DIR=%HADOOP_BIN_PATH%\..\libexec
+if not defined HADOOP_LIBEXEC_DIR (
+ set HADOOP_LIBEXEC_DIR=%DEFAULT_LIBEXEC_DIR%
+)
+
+call %HADOOP_LIBEXEC_DIR%\mapred-config.cmd %*
+if "%1" == "--config" (
+ shift
+ shift
+)
+
+:main
+ if exist %MAPRED_CONF_DIR%\mapred-env.cmd (
+ call %MAPRED_CONF_DIR%\mapred-env.cmd
+ )
+ set mapred-command=%1
+ call :make_command_arguments %*
+
+ if not defined mapred-command (
+ goto print_usage
+ )
+
+  @rem JAVA and JAVA_HEAP_MAX are set in hadoop-config.cmd
+
+ if defined MAPRED_HEAPSIZE (
+ @rem echo run with Java heapsize %MAPRED_HEAPSIZE%
+    set JAVA_HEAP_MAX=-Xmx%MAPRED_HEAPSIZE%m
+ )
+
+ @rem CLASSPATH initially contains HADOOP_CONF_DIR and MAPRED_CONF_DIR
+ if not defined HADOOP_CONF_DIR (
+ echo NO HADOOP_CONF_DIR set.
+ echo Please specify it either in mapred-env.cmd or in the environment.
+ goto :eof
+ )
+
+ set CLASSPATH=%HADOOP_CONF_DIR%;%MAPRED_CONF_DIR%;%CLASSPATH%
+
+ @rem for developers, add Hadoop classes to CLASSPATH
+ if exist %HADOOP_MAPRED_HOME%\build\classes (
+ set CLASSPATH=%CLASSPATH%;%HADOOP_MAPRED_HOME%\build\classes
+ )
+
+ if exist %HADOOP_MAPRED_HOME%\build\webapps (
+ set CLASSPATH=%CLASSPATH%;%HADOOP_MAPRED_HOME%\build
+ )
+
+ if exist %HADOOP_MAPRED_HOME%\build\test\classes (
+ set CLASSPATH=%CLASSPATH%;%HADOOP_MAPRED_HOME%\build\test\classes
+ )
+
+ if exist %HADOOP_MAPRED_HOME%\build\tools (
+ set CLASSPATH=%CLASSPATH%;%HADOOP_MAPRED_HOME%\build\tools
+ )
+
+ @rem Need YARN jars also
+ set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\%YARN_DIR%\*
+
+ @rem add libs to CLASSPATH
+ set CLASSPATH=%CLASSPATH%;%HADOOP_MAPRED_HOME%\%MAPRED_LIB_JARS_DIR%\*
+
+ @rem add modules to CLASSPATH
+ set CLASSPATH=%CLASSPATH%;%HADOOP_MAPRED_HOME%\modules\*
+
+ call :%mapred-command% %mapred-command-arguments%
+  set java_arguments=%JAVA_HEAP_MAX% %HADOOP_OPTS% -classpath %CLASSPATH% %CLASS% %mapred-command-arguments%
+ call %JAVA% %java_arguments%
+
+goto :eof
+
+
+:classpath
+ @echo %CLASSPATH%
+ goto :eof
+
+:job
+ set CLASS=org.apache.hadoop.mapred.JobClient
+ set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_CLIENT_OPTS%
+ goto :eof
+
+:queue
+ set CLASS=org.apache.hadoop.mapred.JobQueueClient
+ set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_CLIENT_OPTS%
+ goto :eof
+
+:sampler
+ set CLASS=org.apache.hadoop.mapred.lib.InputSampler
+ set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_CLIENT_OPTS%
+ goto :eof
+
+:historyserver
+ set CLASS=org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer
+  set HADOOP_OPTS=%HADOOP_OPTS% -Dmapred.jobsummary.logger=%HADOOP_JHS_LOGGER% %HADOOP_JOB_HISTORYSERVER_OPTS%
+ if defined HADOOP_JOB_HISTORYSERVER_HEAPSIZE (
+ set JAVA_HEAP_MAX=-Xmx%HADOOP_JOB_HISTORYSERVER_HEAPSIZE%m
+ )
+ goto :eof
+
+:distcp
+ set CLASS=org.apache.hadoop.tools.DistCp
+  set CLASSPATH=%CLASSPATH%;%TOOL_PATH%
+ set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_CLIENT_OPTS%
+ goto :eof
+
+:archive
+  set CLASS=org.apache.hadoop.tools.HadoopArchives
+  set CLASSPATH=%CLASSPATH%;%TOOL_PATH%
+  set HADOOP_OPTS=%HADOOP_OPTS% %HADOOP_CLIENT_OPTS%
+  goto :eof
+
+:pipes
+ goto not_supported
+
+:mradmin
+ goto not_supported
+
+:jobtracker
+ goto not_supported
+
+:tasktracker
+ goto not_supported
+
+:groups
+ goto not_supported
+
+
+@rem This changes %1, %2 etc. Hence those cannot be used after calling this.
+:make_command_arguments
+  if "%1" == "--config" (
+     shift
+     shift
+  )
+  if [%2] == [] goto :eof
+ shift
+ set _mapredarguments=
+ :MakeCmdArgsLoop
+ if [%1]==[] goto :EndLoop
+
+ if not defined _mapredarguments (
+ set _mapredarguments=%1
+ ) else (
+ set _mapredarguments=!_mapredarguments! %1
+ )
+ shift
+ goto :MakeCmdArgsLoop
+ :EndLoop
+ set mapred-command-arguments=%_mapredarguments%
+ goto :eof
+
+:not_supported
+  @echo Sorry, the %mapred-command% command is no longer supported.
+ @echo You may find similar functionality with the "yarn" shell command.
+ goto print_usage
+
+:print_usage
+ @echo Usage: mapred [--config confdir] COMMAND
+ @echo where COMMAND is one of:
+ @echo job manipulate MapReduce jobs
+ @echo queue get information regarding JobQueues
+ @echo classpath prints the class path needed for running
+ @echo mapreduce subcommands
+ @echo historyserver run job history servers as a standalone daemon
+ @echo distcp ^<srcurl^> ^<desturl^> copy file or directories recursively
+ @echo archive -archiveName NAME -p ^<parent path^> ^<src^>* ^<dest^> create a hadoop archive
+  @echo.
+ @echo Most commands print help when invoked w/o parameters.
+
+endlocal
diff --git a/aarch64/bin/rcc b/aarch64/bin/rcc
new file mode 100755
index 0000000..22bffff
--- /dev/null
+++ b/aarch64/bin/rcc
@@ -0,0 +1,61 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# The Hadoop record compiler
+#
+# Environment Variables
+#
+#   JAVA_HOME        The java implementation to use.
+#
+# HADOOP_OPTS Extra Java runtime options.
+#
+# HADOOP_CONF_DIR Alternate conf dir. Default is ${HADOOP_PREFIX}/conf.
+#
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hadoop-config.sh
+
+if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
+ . "${HADOOP_CONF_DIR}/hadoop-env.sh"
+fi
+
+# some Java parameters
+if [ "$JAVA_HOME" != "" ]; then
+ #echo "run java in $JAVA_HOME"
+ JAVA_HOME=$JAVA_HOME
+fi
+
+if [ "$JAVA_HOME" = "" ]; then
+ echo "Error: JAVA_HOME is not set."
+ exit 1
+fi
+
+JAVA=$JAVA_HOME/bin/java
+JAVA_HEAP_MAX=-Xmx1000m
+
+# restore ordinary behaviour
+unset IFS
+
+CLASS='org.apache.hadoop.record.compiler.generated.Rcc'
+
+# run it
+exec "$JAVA" $HADOOP_OPTS -classpath "$CLASSPATH" $CLASS "$@"
diff --git a/aarch64/bin/test-container-executor b/aarch64/bin/test-container-executor
new file mode 100755
index 0000000..e2992cf
--- /dev/null
+++ b/aarch64/bin/test-container-executor
Binary files differ
diff --git a/aarch64/bin/yarn b/aarch64/bin/yarn
new file mode 100755
index 0000000..8d907be
--- /dev/null
+++ b/aarch64/bin/yarn
@@ -0,0 +1,235 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# The Hadoop command script
+#
+# Environment Variables
+#
+#   JAVA_HOME        The java implementation to use.
+#
+# YARN_CLASSPATH Extra Java CLASSPATH entries.
+#
+# YARN_HEAPSIZE The maximum amount of heap to use, in MB.
+# Default is 1000.
+#
+# YARN_{COMMAND}_HEAPSIZE overrides YARN_HEAPSIZE for a given command
+# eg YARN_NODEMANAGER_HEAPSIZE sets the heap
+# size for the NodeManager. If you set the
+# heap size in YARN_{COMMAND}_OPTS or YARN_OPTS
+# they take precedence.
+#
+# YARN_OPTS Extra Java runtime options.
+#
+#   YARN_CLIENT_OPTS       Extra options applied when a client-side command
+#                          (rmadmin, application, node, logs etc) is run.
+#
+#   YARN_{COMMAND}_OPTS    Extra options for one specific command, e.g.
+#                          YARN_NODEMANAGER_OPTS applies only to the
+#                          NodeManager.
+#
+# YARN_CONF_DIR Alternate conf dir. Default is ${HADOOP_YARN_HOME}/conf.
+#
+# YARN_ROOT_LOGGER The root appender. Default is INFO,console
+#
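+# Illustrative example (an assumption, not part of this script): to give
+# the NodeManager a larger heap you might run
+#
+#   export YARN_NODEMANAGER_HEAPSIZE=2048
+#   yarn nodemanager
+#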
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/yarn-config.sh
+
+function print_usage(){
+ echo "Usage: yarn [--config confdir] COMMAND"
+ echo "where COMMAND is one of:"
+ echo " resourcemanager run the ResourceManager"
+ echo " nodemanager run a nodemanager on each slave"
+ echo " rmadmin admin tools"
+ echo " version print the version"
+ echo " jar <jar> run a jar file"
+ echo " application prints application(s) report/kill application"
+ echo " node prints node report(s)"
+ echo " logs dump container logs"
+ echo " classpath prints the class path needed to get the"
+ echo " Hadoop jar and the required libraries"
+ echo " daemonlog get/set the log level for each daemon"
+ echo " or"
+ echo " CLASSNAME run the class named CLASSNAME"
+ echo "Most commands print help when invoked w/o parameters."
+}
+
+# if no args specified, show usage
+if [ $# = 0 ]; then
+ print_usage
+ exit 1
+fi
+
+# get arguments
+COMMAND=$1
+shift
+
+case $COMMAND in
+ # usage flags
+ --help|-help|-h)
+ print_usage
+ exit
+ ;;
+esac
+
+if [ -f "${YARN_CONF_DIR}/yarn-env.sh" ]; then
+ . "${YARN_CONF_DIR}/yarn-env.sh"
+fi
+
+# some Java parameters
+if [ "$JAVA_HOME" != "" ]; then
+ #echo "run java in $JAVA_HOME"
+ JAVA_HOME=$JAVA_HOME
+fi
+
+if [ "$JAVA_HOME" = "" ]; then
+ echo "Error: JAVA_HOME is not set."
+ exit 1
+fi
+
+JAVA=$JAVA_HOME/bin/java
+JAVA_HEAP_MAX=-Xmx1000m
+
+# check envvars which might override default args
+if [ "$YARN_HEAPSIZE" != "" ]; then
+ #echo "run with heapsize $YARN_HEAPSIZE"
+ JAVA_HEAP_MAX="-Xmx""$YARN_HEAPSIZE""m"
+ #echo $JAVA_HEAP_MAX
+fi
+
+# CLASSPATH initially contains $HADOOP_CONF_DIR & $YARN_CONF_DIR
+if [ ! -d "$HADOOP_CONF_DIR" ]; then
+ echo No HADOOP_CONF_DIR set.
+ echo Please specify it either in yarn-env.sh or in the environment.
+ exit 1
+fi
+
+CLASSPATH="${HADOOP_CONF_DIR}:${YARN_CONF_DIR}:${CLASSPATH}"
+
+# for developers, add Hadoop classes to CLASSPATH
+if [ -d "$HADOOP_YARN_HOME/yarn-api/target/classes" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/yarn-api/target/classes
+fi
+if [ -d "$HADOOP_YARN_HOME/yarn-common/target/classes" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/yarn-common/target/classes
+fi
+if [ -d "$HADOOP_YARN_HOME/yarn-mapreduce/target/classes" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/yarn-mapreduce/target/classes
+fi
+if [ -d "$HADOOP_YARN_HOME/yarn-master-worker/target/classes" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/yarn-master-worker/target/classes
+fi
+if [ -d "$HADOOP_YARN_HOME/yarn-server/yarn-server-nodemanager/target/classes" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/yarn-server/yarn-server-nodemanager/target/classes
+fi
+if [ -d "$HADOOP_YARN_HOME/yarn-server/yarn-server-common/target/classes" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/yarn-server/yarn-server-common/target/classes
+fi
+if [ -d "$HADOOP_YARN_HOME/yarn-server/yarn-server-resourcemanager/target/classes" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/yarn-server/yarn-server-resourcemanager/target/classes
+fi
+if [ -d "$HADOOP_YARN_HOME/build/test/classes" ]; then
+  CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/build/test/classes
+fi
+if [ -d "$HADOOP_YARN_HOME/build/tools" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/build/tools
+fi
+
+CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/${YARN_DIR}/*
+CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/${YARN_LIB_JARS_DIR}/*
+
+# so that filenames w/ spaces are handled correctly in loops below
+IFS=
+
+# default log directory & file
+if [ "$YARN_LOG_DIR" = "" ]; then
+ YARN_LOG_DIR="$HADOOP_YARN_HOME/logs"
+fi
+if [ "$YARN_LOGFILE" = "" ]; then
+ YARN_LOGFILE='yarn.log'
+fi
+
+# restore ordinary behaviour
+unset IFS
+
+# figure out which class to run
+if [ "$COMMAND" = "classpath" ] ; then
+ echo $CLASSPATH
+ exit
+elif [ "$COMMAND" = "rmadmin" ] ; then
+ CLASS='org.apache.hadoop.yarn.client.cli.RMAdminCLI'
+ YARN_OPTS="$YARN_OPTS $YARN_CLIENT_OPTS"
+elif [ "$COMMAND" = "application" ] ; then
+ CLASS=org.apache.hadoop.yarn.client.cli.ApplicationCLI
+ YARN_OPTS="$YARN_OPTS $YARN_CLIENT_OPTS"
+elif [ "$COMMAND" = "node" ] ; then
+ CLASS=org.apache.hadoop.yarn.client.cli.NodeCLI
+ YARN_OPTS="$YARN_OPTS $YARN_CLIENT_OPTS"
+elif [ "$COMMAND" = "resourcemanager" ] ; then
+ CLASSPATH=${CLASSPATH}:$YARN_CONF_DIR/rm-config/log4j.properties
+ CLASS='org.apache.hadoop.yarn.server.resourcemanager.ResourceManager'
+ YARN_OPTS="$YARN_OPTS $YARN_RESOURCEMANAGER_OPTS"
+ if [ "$YARN_RESOURCEMANAGER_HEAPSIZE" != "" ]; then
+ JAVA_HEAP_MAX="-Xmx""$YARN_RESOURCEMANAGER_HEAPSIZE""m"
+ fi
+elif [ "$COMMAND" = "nodemanager" ] ; then
+ CLASSPATH=${CLASSPATH}:$YARN_CONF_DIR/nm-config/log4j.properties
+ CLASS='org.apache.hadoop.yarn.server.nodemanager.NodeManager'
+ YARN_OPTS="$YARN_OPTS -server $YARN_NODEMANAGER_OPTS"
+ if [ "$YARN_NODEMANAGER_HEAPSIZE" != "" ]; then
+ JAVA_HEAP_MAX="-Xmx""$YARN_NODEMANAGER_HEAPSIZE""m"
+ fi
+elif [ "$COMMAND" = "proxyserver" ] ; then
+ CLASS='org.apache.hadoop.yarn.server.webproxy.WebAppProxyServer'
+ YARN_OPTS="$YARN_OPTS $YARN_PROXYSERVER_OPTS"
+ if [ "$YARN_PROXYSERVER_HEAPSIZE" != "" ]; then
+ JAVA_HEAP_MAX="-Xmx""$YARN_PROXYSERVER_HEAPSIZE""m"
+ fi
+elif [ "$COMMAND" = "version" ] ; then
+ CLASS=org.apache.hadoop.util.VersionInfo
+ YARN_OPTS="$YARN_OPTS $YARN_CLIENT_OPTS"
+elif [ "$COMMAND" = "jar" ] ; then
+ CLASS=org.apache.hadoop.util.RunJar
+ YARN_OPTS="$YARN_OPTS $YARN_CLIENT_OPTS"
+elif [ "$COMMAND" = "logs" ] ; then
+ CLASS=org.apache.hadoop.yarn.client.cli.LogsCLI
+ YARN_OPTS="$YARN_OPTS $YARN_CLIENT_OPTS"
+elif [ "$COMMAND" = "daemonlog" ] ; then
+ CLASS=org.apache.hadoop.log.LogLevel
+ YARN_OPTS="$YARN_OPTS $YARN_CLIENT_OPTS"
+else
+ CLASS=$COMMAND
+fi
+
+YARN_OPTS="$YARN_OPTS -Dhadoop.log.dir=$YARN_LOG_DIR"
+YARN_OPTS="$YARN_OPTS -Dyarn.log.dir=$YARN_LOG_DIR"
+YARN_OPTS="$YARN_OPTS -Dhadoop.log.file=$YARN_LOGFILE"
+YARN_OPTS="$YARN_OPTS -Dyarn.log.file=$YARN_LOGFILE"
+YARN_OPTS="$YARN_OPTS -Dyarn.home.dir=$HADOOP_YARN_HOME"
+YARN_OPTS="$YARN_OPTS -Dhadoop.home.dir=$HADOOP_YARN_HOME"
+YARN_OPTS="$YARN_OPTS -Dhadoop.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
+YARN_OPTS="$YARN_OPTS -Dyarn.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
+if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
+ YARN_OPTS="$YARN_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH"
+fi
+
+exec "$JAVA" -Dproc_$COMMAND $JAVA_HEAP_MAX $YARN_OPTS -classpath "$CLASSPATH" $CLASS "$@"
diff --git a/aarch64/bin/yarn.cmd b/aarch64/bin/yarn.cmd
new file mode 100755
index 0000000..955df46
--- /dev/null
+++ b/aarch64/bin/yarn.cmd
@@ -0,0 +1,254 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+
+@rem The Hadoop command script
+@rem
+@rem Environment Variables
+@rem
+@rem   JAVA_HOME        The java implementation to use.
+@rem
+@rem YARN_CLASSPATH Extra Java CLASSPATH entries.
+@rem
+@rem YARN_HEAPSIZE The maximum amount of heap to use, in MB.
+@rem Default is 1000.
+@rem
+@rem YARN_{COMMAND}_HEAPSIZE overrides YARN_HEAPSIZE for a given command
+@rem eg YARN_NODEMANAGER_HEAPSIZE sets the heap
+@rem size for the NodeManager. If you set the
+@rem heap size in YARN_{COMMAND}_OPTS or YARN_OPTS
+@rem they take precedence.
+@rem
+@rem YARN_OPTS Extra Java runtime options.
+@rem
+@rem   YARN_CLIENT_OPTS       Extra options applied when a client-side
+@rem                          command (rmadmin, application, node, logs etc)
+@rem                          is run.
+@rem
+@rem   YARN_{COMMAND}_OPTS    Extra options for one specific command, e.g.
+@rem                          YARN_NODEMANAGER_OPTS applies only to the
+@rem                          NodeManager.
+@rem
+@rem YARN_CONF_DIR Alternate conf dir. Default is ${HADOOP_YARN_HOME}/conf.
+@rem
+@rem YARN_ROOT_LOGGER The root appender. Default is INFO,console
+@rem
+
+setlocal enabledelayedexpansion
+
+if not defined HADOOP_BIN_PATH (
+ set HADOOP_BIN_PATH=%~dp0
+)
+
+if "%HADOOP_BIN_PATH:~-1%" == "\" (
+ set HADOOP_BIN_PATH=%HADOOP_BIN_PATH:~0,-1%
+)
+
+set DEFAULT_LIBEXEC_DIR=%HADOOP_BIN_PATH%\..\libexec
+if not defined HADOOP_LIBEXEC_DIR (
+ set HADOOP_LIBEXEC_DIR=%DEFAULT_LIBEXEC_DIR%
+)
+
+call %HADOOP_LIBEXEC_DIR%\yarn-config.cmd %*
+if "%1" == "--config" (
+ shift
+ shift
+)
+
+:main
+ if exist %YARN_CONF_DIR%\yarn-env.cmd (
+ call %YARN_CONF_DIR%\yarn-env.cmd
+ )
+
+ set yarn-command=%1
+ call :make_command_arguments %*
+
+ if not defined yarn-command (
+ goto print_usage
+ )
+
+  @rem JAVA and JAVA_HEAP_MAX are set in hadoop-config.cmd
+
+ if defined YARN_HEAPSIZE (
+ @rem echo run with Java heapsize %YARN_HEAPSIZE%
+ set JAVA_HEAP_MAX=-Xmx%YARN_HEAPSIZE%m
+ )
+
+ @rem CLASSPATH initially contains HADOOP_CONF_DIR & YARN_CONF_DIR
+ if not defined HADOOP_CONF_DIR (
+ echo No HADOOP_CONF_DIR set.
+ echo Please specify it either in yarn-env.cmd or in the environment.
+ goto :eof
+ )
+
+ set CLASSPATH=%HADOOP_CONF_DIR%;%YARN_CONF_DIR%;%CLASSPATH%
+
+ @rem for developers, add Hadoop classes to CLASSPATH
+ if exist %HADOOP_YARN_HOME%\yarn-api\target\classes (
+ set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\yarn-api\target\classes
+ )
+
+ if exist %HADOOP_YARN_HOME%\yarn-common\target\classes (
+ set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\yarn-common\target\classes
+ )
+
+ if exist %HADOOP_YARN_HOME%\yarn-mapreduce\target\classes (
+ set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\yarn-mapreduce\target\classes
+ )
+
+ if exist %HADOOP_YARN_HOME%\yarn-master-worker\target\classes (
+ set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\yarn-master-worker\target\classes
+ )
+
+ if exist %HADOOP_YARN_HOME%\yarn-server\yarn-server-nodemanager\target\classes (
+ set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\yarn-server\yarn-server-nodemanager\target\classes
+ )
+
+ if exist %HADOOP_YARN_HOME%\yarn-server\yarn-server-common\target\classes (
+ set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\yarn-server\yarn-server-common\target\classes
+ )
+
+ if exist %HADOOP_YARN_HOME%\yarn-server\yarn-server-resourcemanager\target\classes (
+ set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\yarn-server\yarn-server-resourcemanager\target\classes
+ )
+
+ if exist %HADOOP_YARN_HOME%\build\test\classes (
+ set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\build\test\classes
+ )
+
+ if exist %HADOOP_YARN_HOME%\build\tools (
+ set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\build\tools
+ )
+
+ set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\%YARN_DIR%\*
+ set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\%YARN_LIB_JARS_DIR%\*
+
+ call :%yarn-command% %yarn-command-arguments%
+
+ if defined JAVA_LIBRARY_PATH (
+ set YARN_OPTS=%YARN_OPTS% -Djava.library.path=%JAVA_LIBRARY_PATH%
+ )
+
+ set java_arguments=%JAVA_HEAP_MAX% %YARN_OPTS% -classpath %CLASSPATH% %CLASS% %yarn-command-arguments%
+ call %JAVA% %java_arguments%
+
+goto :eof
+
+:classpath
+ @echo %CLASSPATH%
+ goto :eof
+
+:rmadmin
+ set CLASS=org.apache.hadoop.yarn.server.resourcemanager.tools.RMAdmin
+ set YARN_OPTS=%YARN_OPTS% %YARN_CLIENT_OPTS%
+ goto :eof
+
+:application
+ set CLASS=org.apache.hadoop.yarn.client.cli.ApplicationCLI
+ set YARN_OPTS=%YARN_OPTS% %YARN_CLIENT_OPTS%
+ goto :eof
+
+:node
+ set CLASS=org.apache.hadoop.yarn.client.cli.NodeCLI
+ set YARN_OPTS=%YARN_OPTS% %YARN_CLIENT_OPTS%
+ goto :eof
+
+:resourcemanager
+ set CLASSPATH=%CLASSPATH%;%YARN_CONF_DIR%\rm-config\log4j.properties
+ set CLASS=org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
+  set YARN_OPTS=%YARN_OPTS% %YARN_RESOURCEMANAGER_OPTS%
+ if defined YARN_RESOURCEMANAGER_HEAPSIZE (
+ set JAVA_HEAP_MAX=-Xmx%YARN_RESOURCEMANAGER_HEAPSIZE%m
+ )
+ goto :eof
+
+:nodemanager
+ set CLASSPATH=%CLASSPATH%;%YARN_CONF_DIR%\nm-config\log4j.properties
+ set CLASS=org.apache.hadoop.yarn.server.nodemanager.NodeManager
+  set YARN_OPTS=%YARN_OPTS% -server %YARN_NODEMANAGER_OPTS%
+ if defined YARN_NODEMANAGER_HEAPSIZE (
+ set JAVA_HEAP_MAX=-Xmx%YARN_NODEMANAGER_HEAPSIZE%m
+ )
+ goto :eof
+
+:proxyserver
+ set CLASS=org.apache.hadoop.yarn.server.webproxy.WebAppProxyServer
+  set YARN_OPTS=%YARN_OPTS% %YARN_PROXYSERVER_OPTS%
+ if defined YARN_PROXYSERVER_HEAPSIZE (
+ set JAVA_HEAP_MAX=-Xmx%YARN_PROXYSERVER_HEAPSIZE%m
+ )
+ goto :eof
+
+:version
+ set CLASS=org.apache.hadoop.util.VersionInfo
+ set YARN_OPTS=%YARN_OPTS% %YARN_CLIENT_OPTS%
+ goto :eof
+
+:jar
+ set CLASS=org.apache.hadoop.util.RunJar
+ set YARN_OPTS=%YARN_OPTS% %YARN_CLIENT_OPTS%
+ goto :eof
+
+:logs
+ set CLASS=org.apache.hadoop.yarn.logaggregation.LogDumper
+ set YARN_OPTS=%YARN_OPTS% %YARN_CLIENT_OPTS%
+ goto :eof
+
+:daemonlog
+ set CLASS=org.apache.hadoop.log.LogLevel
+ set YARN_OPTS=%YARN_OPTS% %YARN_CLIENT_OPTS%
+ goto :eof
+
+@rem This changes %1, %2 etc. Hence those cannot be used after calling this.
+:make_command_arguments
+ if "%1" == "--config" (
+ shift
+ shift
+ )
+ if [%2] == [] goto :eof
+ shift
+ set _yarnarguments=
+ :MakeCmdArgsLoop
+ if [%1]==[] goto :EndLoop
+
+ if not defined _yarnarguments (
+ set _yarnarguments=%1
+ ) else (
+ set _yarnarguments=!_yarnarguments! %1
+ )
+ shift
+ goto :MakeCmdArgsLoop
+ :EndLoop
+ set yarn-command-arguments=%_yarnarguments%
+ goto :eof
+
+:print_usage
+ @echo Usage: yarn [--config confdir] COMMAND
+ @echo where COMMAND is one of:
+ @echo resourcemanager run the ResourceManager
+ @echo nodemanager run a nodemanager on each slave
+ @echo historyserver run job history servers as a standalone daemon
+ @echo rmadmin admin tools
+ @echo version print the version
+ @echo jar ^<jar^> run a jar file
+ @echo application prints application(s) report/kill application
+ @echo node prints node report(s)
+ @echo logs dump container logs
+ @echo classpath prints the class path needed to get the
+ @echo Hadoop jar and the required libraries
+ @echo daemonlog get/set the log level for each daemon
+ @echo or
+ @echo CLASSNAME run the class named CLASSNAME
+ @echo Most commands print help when invoked w/o parameters.
+
+endlocal
diff --git a/aarch64/etc/hadoop/capacity-scheduler.xml b/aarch64/etc/hadoop/capacity-scheduler.xml
new file mode 100644
index 0000000..80a9fec
--- /dev/null
+++ b/aarch64/etc/hadoop/capacity-scheduler.xml
@@ -0,0 +1,111 @@
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+<configuration>
+
+ <property>
+ <name>yarn.scheduler.capacity.maximum-applications</name>
+ <value>10000</value>
+ <description>
+ Maximum number of applications that can be pending and running.
+ </description>
+ </property>
+
+ <property>
+ <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
+ <value>0.1</value>
+ <description>
+      Maximum percent of resources in the cluster that can be used to run
+      application masters, i.e. it controls the number of concurrently
+      running applications.
+ </description>
+ </property>
+
+ <property>
+ <name>yarn.scheduler.capacity.resource-calculator</name>
+ <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
+ <description>
+ The ResourceCalculator implementation to be used to compare
+ Resources in the scheduler.
+      The default, DefaultResourceCalculator, only uses memory, while
+      DominantResourceCalculator uses the dominant resource to compare
+      multi-dimensional resources such as memory and CPU.
+ </description>
+ </property>
+
+ <property>
+ <name>yarn.scheduler.capacity.root.queues</name>
+ <value>default</value>
+ <description>
+      The queues at this level (root is the root queue).
+ </description>
+ </property>
+
+ <property>
+ <name>yarn.scheduler.capacity.root.default.capacity</name>
+ <value>100</value>
+ <description>Default queue target capacity.</description>
+ </property>
+
+ <property>
+ <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
+ <value>1</value>
+ <description>
+      Default queue user limit, expressed as a percentage from 0.0 to 1.0.
+ </description>
+ </property>
+
+ <property>
+ <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
+ <value>100</value>
+ <description>
+ The maximum capacity of the default queue.
+ </description>
+ </property>
+
+ <property>
+ <name>yarn.scheduler.capacity.root.default.state</name>
+ <value>RUNNING</value>
+ <description>
+ The state of the default queue. State can be one of RUNNING or STOPPED.
+ </description>
+ </property>
+
+ <property>
+ <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
+ <value>*</value>
+ <description>
+ The ACL of who can submit jobs to the default queue.
+ </description>
+ </property>
+
+ <property>
+ <name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
+ <value>*</value>
+ <description>
+ The ACL of who can administer jobs on the default queue.
+ </description>
+ </property>
+
+ <property>
+ <name>yarn.scheduler.capacity.node-locality-delay</name>
+ <value>-1</value>
+ <description>
+ Number of missed scheduling opportunities after which the CapacityScheduler
+ attempts to schedule rack-local containers.
+      Typically this should be set to the number of racks in the cluster;
+      the feature is disabled by default, with a value of -1.
+ </description>
+ </property>
+
+</configuration>
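For orientation, the root.queues value above is where additional queues are declared. A minimal sketch of splitting root into two queues (the "batch" queue name and the 70/30 split are hypothetical, not shipped defaults):

  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,batch</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>70</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.batch.capacity</name>
    <value>30</value>
  </property>

Sibling capacities under a parent queue must sum to 100, so the default queue's capacity has to shrink to make room for any new sibling.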
diff --git a/aarch64/etc/hadoop/configuration.xsl b/aarch64/etc/hadoop/configuration.xsl
new file mode 100644
index 0000000..d50d80b
--- /dev/null
+++ b/aarch64/etc/hadoop/configuration.xsl
@@ -0,0 +1,40 @@
+<?xml version="1.0"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
+<xsl:output method="html"/>
+<xsl:template match="configuration">
+<html>
+<body>
+<table border="1">
+<tr>
+ <td>name</td>
+ <td>value</td>
+ <td>description</td>
+</tr>
+<xsl:for-each select="property">
+<tr>
+ <td><a name="{name}"><xsl:value-of select="name"/></a></td>
+ <td><xsl:value-of select="value"/></td>
+ <td><xsl:value-of select="description"/></td>
+</tr>
+</xsl:for-each>
+</table>
+</body>
+</html>
+</xsl:template>
+</xsl:stylesheet>
diff --git a/aarch64/etc/hadoop/container-executor.cfg b/aarch64/etc/hadoop/container-executor.cfg
new file mode 100644
index 0000000..d68cee8
--- /dev/null
+++ b/aarch64/etc/hadoop/container-executor.cfg
@@ -0,0 +1,4 @@
+yarn.nodemanager.linux-container-executor.group=#configured value of yarn.nodemanager.linux-container-executor.group
+banned.users=#comma-separated list of users who cannot run applications
+min.user.id=1000#Prevent other super-users
+allowed.system.users=##comma-separated list of system users who CAN run applications
diff --git a/aarch64/etc/hadoop/core-site.xml b/aarch64/etc/hadoop/core-site.xml
new file mode 100644
index 0000000..d2ddf89
--- /dev/null
+++ b/aarch64/etc/hadoop/core-site.xml
@@ -0,0 +1,20 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+
+<!-- Put site-specific property overrides in this file. -->
+
+<configuration>
+</configuration>
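As a hedged illustration of the kind of site-specific override that typically lands in this file, a single-node setup usually points the default filesystem at a local HDFS namenode (the host and port below are assumptions, not part of this import):

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>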
diff --git a/aarch64/etc/hadoop/hadoop-env.cmd b/aarch64/etc/hadoop/hadoop-env.cmd
new file mode 100644
index 0000000..05badc2
--- /dev/null
+++ b/aarch64/etc/hadoop/hadoop-env.cmd
@@ -0,0 +1,81 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+
+@rem Set Hadoop-specific environment variables here.
+
+@rem The only required environment variable is JAVA_HOME. All others are
+@rem optional. When running a distributed configuration it is best to
+@rem set JAVA_HOME in this file, so that it is correctly defined on
+@rem remote nodes.
+
+@rem The java implementation to use. Required.
+set JAVA_HOME=%JAVA_HOME%
+
+@rem The jsvc implementation to use. Jsvc is required to run secure datanodes.
+@rem set JSVC_HOME=%JSVC_HOME%
+
+@rem set HADOOP_CONF_DIR=
+
+@rem Extra Java CLASSPATH elements. Automatically insert capacity-scheduler.
+if exist %HADOOP_HOME%\contrib\capacity-scheduler (
+ if not defined HADOOP_CLASSPATH (
+ set HADOOP_CLASSPATH=%HADOOP_HOME%\contrib\capacity-scheduler\*.jar
+ ) else (
+ set HADOOP_CLASSPATH=%HADOOP_CLASSPATH%;%HADOOP_HOME%\contrib\capacity-scheduler\*.jar
+ )
+)
+
+@rem The maximum amount of heap to use, in MB. Default is 1000.
+@rem set HADOOP_HEAPSIZE=
+@rem set HADOOP_NAMENODE_INIT_HEAPSIZE=""
+
+@rem Extra Java runtime options. Empty by default.
+@rem set HADOOP_OPTS=%HADOOP_OPTS% -Djava.net.preferIPv4Stack=true
+
+@rem Command specific options appended to HADOOP_OPTS when specified
+if not defined HADOOP_SECURITY_LOGGER (
+ set HADOOP_SECURITY_LOGGER=INFO,RFAS
+)
+if not defined HDFS_AUDIT_LOGGER (
+ set HDFS_AUDIT_LOGGER=INFO,NullAppender
+)
+
+set HADOOP_NAMENODE_OPTS=-Dhadoop.security.logger=%HADOOP_SECURITY_LOGGER% -Dhdfs.audit.logger=%HDFS_AUDIT_LOGGER% %HADOOP_NAMENODE_OPTS%
+set HADOOP_DATANODE_OPTS=-Dhadoop.security.logger=ERROR,RFAS %HADOOP_DATANODE_OPTS%
+set HADOOP_SECONDARYNAMENODE_OPTS=-Dhadoop.security.logger=%HADOOP_SECURITY_LOGGER% -Dhdfs.audit.logger=%HDFS_AUDIT_LOGGER% %HADOOP_SECONDARYNAMENODE_OPTS%
+
+@rem The following applies to multiple commands (fs, dfs, fsck, distcp etc)
+set HADOOP_CLIENT_OPTS=-Xmx128m %HADOOP_CLIENT_OPTS%
+@rem set HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData %HADOOP_JAVA_PLATFORM_OPTS%"
+
+@rem On secure datanodes, user to run the datanode as after dropping privileges
+set HADOOP_SECURE_DN_USER=%HADOOP_SECURE_DN_USER%
+
+@rem Where log files are stored. %HADOOP_HOME%/logs by default.
+@rem set HADOOP_LOG_DIR=%HADOOP_LOG_DIR%\%USERNAME%
+
+@rem Where log files are stored in the secure data environment.
+set HADOOP_SECURE_DN_LOG_DIR=%HADOOP_LOG_DIR%\%HADOOP_HDFS_USER%
+
+@rem The directory where pid files are stored. /tmp by default.
+@rem NOTE: this should be set to a directory that can only be written to by
+@rem the user that will run the hadoop daemons. Otherwise there is the
+@rem potential for a symlink attack.
+set HADOOP_PID_DIR=%HADOOP_PID_DIR%
+set HADOOP_SECURE_DN_PID_DIR=%HADOOP_PID_DIR%
+
+@rem A string representing this instance of hadoop. %USERNAME% by default.
+set HADOOP_IDENT_STRING=%USERNAME%
diff --git a/aarch64/etc/hadoop/hadoop-env.sh b/aarch64/etc/hadoop/hadoop-env.sh
new file mode 100644
index 0000000..5836a8a
--- /dev/null
+++ b/aarch64/etc/hadoop/hadoop-env.sh
@@ -0,0 +1,77 @@
+# Copyright 2011 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Set Hadoop-specific environment variables here.
+
+# The only required environment variable is JAVA_HOME. All others are
+# optional. When running a distributed configuration it is best to
+# set JAVA_HOME in this file, so that it is correctly defined on
+# remote nodes.
+
+# The java implementation to use.
+export JAVA_HOME=${JAVA_HOME}
+
+# The jsvc implementation to use. Jsvc is required to run secure datanodes.
+#export JSVC_HOME=${JSVC_HOME}
+
+export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
+
+# Extra Java CLASSPATH elements. Automatically insert capacity-scheduler.
+for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
+  [ -f "$f" ] || continue   # skip the unexpanded glob when no jars are present
+  if [ "$HADOOP_CLASSPATH" ]; then
+    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
+  else
+    export HADOOP_CLASSPATH=$f
+  fi
+done
+
+# The maximum amount of heap to use, in MB. Default is 1000.
+#export HADOOP_HEAPSIZE=
+#export HADOOP_NAMENODE_INIT_HEAPSIZE=""
+
+# Extra Java runtime options. Empty by default.
+export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
+
+# Command specific options appended to HADOOP_OPTS when specified
+export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
+export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
+
+export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"
+
+# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
+export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
+#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"
+
+# On secure datanodes, user to run the datanode as after dropping privileges
+export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}
+
+# Where log files are stored. $HADOOP_HOME/logs by default.
+#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
+
+# Where log files are stored in the secure data environment.
+export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}
+
+# The directory where pid files are stored. /tmp by default.
+# NOTE: this should be set to a directory that can only be written to by
+# the user that will run the hadoop daemons. Otherwise there is the
+# potential for a symlink attack.
+export HADOOP_PID_DIR=${HADOOP_PID_DIR}
+export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}
+
+# A string representing this instance of hadoop. $USER by default.
+export HADOOP_IDENT_STRING=$USER
diff --git a/aarch64/etc/hadoop/hadoop-metrics.properties b/aarch64/etc/hadoop/hadoop-metrics.properties
new file mode 100644
index 0000000..c1b2eb7
--- /dev/null
+++ b/aarch64/etc/hadoop/hadoop-metrics.properties
@@ -0,0 +1,75 @@
+# Configuration of the "dfs" context for null
+dfs.class=org.apache.hadoop.metrics.spi.NullContext
+
+# Configuration of the "dfs" context for file
+#dfs.class=org.apache.hadoop.metrics.file.FileContext
+#dfs.period=10
+#dfs.fileName=/tmp/dfsmetrics.log
+
+# Configuration of the "dfs" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+# dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+# dfs.period=10
+# dfs.servers=localhost:8649
+
+
+# Configuration of the "mapred" context for null
+mapred.class=org.apache.hadoop.metrics.spi.NullContext
+
+# Configuration of the "mapred" context for file
+#mapred.class=org.apache.hadoop.metrics.file.FileContext
+#mapred.period=10
+#mapred.fileName=/tmp/mrmetrics.log
+
+# Configuration of the "mapred" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+# mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+# mapred.period=10
+# mapred.servers=localhost:8649
+
+
+# Configuration of the "jvm" context for null
+#jvm.class=org.apache.hadoop.metrics.spi.NullContext
+
+# Configuration of the "jvm" context for file
+#jvm.class=org.apache.hadoop.metrics.file.FileContext
+#jvm.period=10
+#jvm.fileName=/tmp/jvmmetrics.log
+
+# Configuration of the "jvm" context for ganglia
+# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+# jvm.period=10
+# jvm.servers=localhost:8649
+
+# Configuration of the "rpc" context for null
+rpc.class=org.apache.hadoop.metrics.spi.NullContext
+
+# Configuration of the "rpc" context for file
+#rpc.class=org.apache.hadoop.metrics.file.FileContext
+#rpc.period=10
+#rpc.fileName=/tmp/rpcmetrics.log
+
+# Configuration of the "rpc" context for ganglia
+# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+# rpc.period=10
+# rpc.servers=localhost:8649
+
+
+# Configuration of the "ugi" context for null
+ugi.class=org.apache.hadoop.metrics.spi.NullContext
+
+# Configuration of the "ugi" context for file
+#ugi.class=org.apache.hadoop.metrics.file.FileContext
+#ugi.period=10
+#ugi.fileName=/tmp/ugimetrics.log
+
+# Configuration of the "ugi" context for ganglia
+# ugi.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+# ugi.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+# ugi.period=10
+# ugi.servers=localhost:8649
+
diff --git a/aarch64/etc/hadoop/hadoop-metrics2.properties b/aarch64/etc/hadoop/hadoop-metrics2.properties
new file mode 100644
index 0000000..c3ffe31
--- /dev/null
+++ b/aarch64/etc/hadoop/hadoop-metrics2.properties
@@ -0,0 +1,44 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# syntax: [prefix].[source|sink].[instance].[options]
+# See javadoc of package-info.java for org.apache.hadoop.metrics2 for details
+
+*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
+# default sampling period, in seconds
+*.period=10
+
+# The namenode-metrics.out will contain metrics from all contexts
+#namenode.sink.file.filename=namenode-metrics.out
+# Specifying a special sampling period for namenode:
+#namenode.sink.*.period=8
+
+#datanode.sink.file.filename=datanode-metrics.out
+
+# the following examples split metrics of different
+# contexts into different sinks (in this case files)
+#jobtracker.sink.file_jvm.context=jvm
+#jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
+#jobtracker.sink.file_mapred.context=mapred
+#jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
+
+#tasktracker.sink.file.filename=tasktracker-metrics.out
+
+#maptask.sink.file.filename=maptask-metrics.out
+
+#reducetask.sink.file.filename=reducetask-metrics.out
+
diff --git a/aarch64/etc/hadoop/hadoop-policy.xml b/aarch64/etc/hadoop/hadoop-policy.xml
new file mode 100644
index 0000000..491dbe7
--- /dev/null
+++ b/aarch64/etc/hadoop/hadoop-policy.xml
@@ -0,0 +1,219 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+
+ Copyright 2011 The Apache Software Foundation
+
+ Licensed to the Apache Software Foundation (ASF) under one
+ or more contributor license agreements. See the NOTICE file
+ distributed with this work for additional information
+ regarding copyright ownership. The ASF licenses this file
+ to you under the Apache License, Version 2.0 (the
+ "License"); you may not use this file except in compliance
+ with the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+-->
+
+<!-- Put site-specific property overrides in this file. -->
+
+<configuration>
+ <property>
+ <name>security.client.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for ClientProtocol, which is used by user code
+ via the DistributedFileSystem.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+ <property>
+ <name>security.client.datanode.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for ClientDatanodeProtocol, the client-to-datanode protocol
+ for block recovery.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+ <property>
+ <name>security.datanode.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for DatanodeProtocol, which is used by datanodes to
+ communicate with the namenode.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+ <property>
+ <name>security.inter.datanode.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for InterDatanodeProtocol, the inter-datanode protocol
+ for updating generation timestamp.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+ <property>
+ <name>security.namenode.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for NamenodeProtocol, the protocol used by the secondary
+ namenode to communicate with the namenode.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+ <property>
+ <name>security.admin.operations.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for AdminOperationsProtocol. Used for admin commands.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+ <property>
+ <name>security.refresh.usertogroups.mappings.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for RefreshUserMappingsProtocol. Used to refresh
+ users mappings. The ACL is a comma-separated list of user and
+ group names. The user and group list is separated by a blank. For
+ e.g. "alice,bob users,wheel". A special value of "*" means all
+ users are allowed.</description>
+ </property>
+
+ <property>
+ <name>security.refresh.policy.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for RefreshAuthorizationPolicyProtocol, used by the
+ dfsadmin and mradmin commands to refresh the security policy in-effect.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+ <property>
+ <name>security.ha.service.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for HAService protocol used by HAAdmin to manage the
+ active and stand-by states of namenode.</description>
+ </property>
+
+ <property>
+ <name>security.zkfc.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for access to the ZK Failover Controller
+ </description>
+ </property>
+
+ <property>
+ <name>security.qjournal.service.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for QJournalProtocol, used by the NN to communicate with
+ JNs when using the QuorumJournalManager for edit logs.</description>
+ </property>
+
+ <property>
+ <name>security.mrhs.client.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for HSClientProtocol, used by job clients to
+    communicate with the MR History Server to query job status etc.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+ <!-- YARN Protocols -->
+
+ <property>
+ <name>security.resourcetracker.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for ResourceTrackerProtocol, used by the
+ ResourceManager and NodeManager to communicate with each other.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+ <property>
+ <name>security.resourcemanager-administration.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for ResourceManagerAdministrationProtocol, for admin commands.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+ <property>
+ <name>security.applicationclient.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for ApplicationClientProtocol, used by the ResourceManager
+ and applications submission clients to communicate with each other.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+ <property>
+ <name>security.applicationmaster.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for ApplicationMasterProtocol, used by the ResourceManager
+ and ApplicationMasters to communicate with each other.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+ <property>
+ <name>security.containermanagement.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for ContainerManagementProtocol protocol, used by the NodeManager
+ and ApplicationMasters to communicate with each other.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+ <property>
+ <name>security.resourcelocalizer.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for ResourceLocalizer protocol, used by the NodeManager
+ and ResourceLocalizer to communicate with each other.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+ <property>
+ <name>security.job.task.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for TaskUmbilicalProtocol, used by the map and reduce
+ tasks to communicate with the parent tasktracker.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+ <property>
+ <name>security.job.client.protocol.acl</name>
+ <value>*</value>
+ <description>ACL for MRClientProtocol, used by job clients to
+    communicate with the MR ApplicationMaster to query job status etc.
+ The ACL is a comma-separated list of user and group names. The user and
+ group list is separated by a blank. For e.g. "alice,bob users,wheel".
+ A special value of "*" means all users are allowed.</description>
+ </property>
+
+</configuration>
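All of the ACL descriptions above share one grammar: a comma-separated user list, a blank, then a comma-separated group list. As a sketch, locking down the client protocol to the users and groups from the documented example (instead of the open "*" default) would read:

  <property>
    <name>security.client.protocol.acl</name>
    <value>alice,bob users,wheel</value>
  </property>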
diff --git a/aarch64/etc/hadoop/hdfs-site.xml b/aarch64/etc/hadoop/hdfs-site.xml
new file mode 100644
index 0000000..50ec146
--- /dev/null
+++ b/aarch64/etc/hadoop/hdfs-site.xml
@@ -0,0 +1,21 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+
+<!-- Put site-specific property overrides in this file. -->
+
+<configuration>
+
+</configuration>
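A hedged sketch of a common single-node override for this file, using the standard dfs.replication property (the value 1 is an assumption suited to a single datanode, not a shipped default):

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>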
diff --git a/aarch64/etc/hadoop/httpfs-env.sh b/aarch64/etc/hadoop/httpfs-env.sh
new file mode 100644
index 0000000..84c67b7
--- /dev/null
+++ b/aarch64/etc/hadoop/httpfs-env.sh
@@ -0,0 +1,41 @@
+#!/bin/bash
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License. See accompanying LICENSE file.
+#
+
+# Set httpfs specific environment variables here.
+
+# Settings for the Embedded Tomcat that runs HttpFS
+# Java System properties for HttpFS should be specified in this variable
+#
+# export CATALINA_OPTS=
+
+# HttpFS logs directory
+#
+# export HTTPFS_LOG=${HTTPFS_HOME}/logs
+
+# HttpFS temporary directory
+#
+# export HTTPFS_TEMP=${HTTPFS_HOME}/temp
+
+# The HTTP port used by HttpFS
+#
+# export HTTPFS_HTTP_PORT=14000
+
+# The Admin port used by HttpFS
+#
+# export HTTPFS_ADMIN_PORT=`expr ${HTTPFS_HTTP_PORT} + 1`
+
+# The hostname HttpFS server runs on
+#
+# export HTTPFS_HTTP_HOSTNAME=`hostname -f`
diff --git a/aarch64/etc/hadoop/httpfs-log4j.properties b/aarch64/etc/hadoop/httpfs-log4j.properties
new file mode 100644
index 0000000..284a819
--- /dev/null
+++ b/aarch64/etc/hadoop/httpfs-log4j.properties
@@ -0,0 +1,35 @@
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License. See accompanying LICENSE file.
+#
+
+# If the Java system property 'httpfs.log.dir' is not defined at HttpFS server
+# start-up time, it is set to '${httpfs.home}/logs'.
+
+log4j.appender.httpfs=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.httpfs.DatePattern='.'yyyy-MM-dd
+log4j.appender.httpfs.File=${httpfs.log.dir}/httpfs.log
+log4j.appender.httpfs.Append=true
+log4j.appender.httpfs.layout=org.apache.log4j.PatternLayout
+log4j.appender.httpfs.layout.ConversionPattern=%d{ISO8601} %5p %c{1} [%X{hostname}][%X{user}:%X{doAs}] %X{op} %m%n
+
+log4j.appender.httpfsaudit=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.httpfsaudit.DatePattern='.'yyyy-MM-dd
+log4j.appender.httpfsaudit.File=${httpfs.log.dir}/httpfs-audit.log
+log4j.appender.httpfsaudit.Append=true
+log4j.appender.httpfsaudit.layout=org.apache.log4j.PatternLayout
+log4j.appender.httpfsaudit.layout.ConversionPattern=%d{ISO8601} %5p [%X{hostname}][%X{user}:%X{doAs}] %X{op} %m%n
+
+log4j.logger.httpfsaudit=INFO, httpfsaudit
+
+log4j.logger.org.apache.hadoop.fs.http.server=INFO, httpfs
+log4j.logger.org.apache.hadoop.lib=INFO, httpfs
diff --git a/aarch64/etc/hadoop/httpfs-signature.secret b/aarch64/etc/hadoop/httpfs-signature.secret
new file mode 100644
index 0000000..56466e9
--- /dev/null
+++ b/aarch64/etc/hadoop/httpfs-signature.secret
@@ -0,0 +1 @@
+hadoop httpfs secret
diff --git a/aarch64/etc/hadoop/httpfs-site.xml b/aarch64/etc/hadoop/httpfs-site.xml
new file mode 100644
index 0000000..4a718e1
--- /dev/null
+++ b/aarch64/etc/hadoop/httpfs-site.xml
@@ -0,0 +1,17 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<configuration>
+
+</configuration>
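Overrides for the HttpFS server go here. A sketch of a typical proxy-user grant, assuming a hypothetical "hue" front-end user (the user name and the open host/group lists are illustrative only):

  <property>
    <name>httpfs.proxyuser.hue.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>httpfs.proxyuser.hue.groups</name>
    <value>*</value>
  </property>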
diff --git a/aarch64/etc/hadoop/log4j.properties b/aarch64/etc/hadoop/log4j.properties
new file mode 100644
index 0000000..7e0834a
--- /dev/null
+++ b/aarch64/etc/hadoop/log4j.properties
@@ -0,0 +1,231 @@
+# Copyright 2011 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Define some default values that can be overridden by system properties
+hadoop.root.logger=INFO,console
+hadoop.log.dir=.
+hadoop.log.file=hadoop.log
+
+# Define the root logger to the system property "hadoop.root.logger".
+log4j.rootLogger=${hadoop.root.logger}, EventCounter
+
+# Logging Threshold
+log4j.threshold=ALL
+
+# Null Appender
+log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
+
+#
+# Rolling File Appender - cap space usage at 5gb.
+#
+hadoop.log.maxfilesize=256MB
+hadoop.log.maxbackupindex=20
+log4j.appender.RFA=org.apache.log4j.RollingFileAppender
+log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
+
+log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
+log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}
+
+log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
+
+# Pattern format: Date LogLevel LoggerName LogMessage
+log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+# Debugging Pattern format
+#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
+
+
+#
+# Daily Rolling File Appender
+#
+
+log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
+
+# Roll over at midnight
+log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
+
+# 30-day backup
+#log4j.appender.DRFA.MaxBackupIndex=30
+log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
+
+# Pattern format: Date LogLevel LoggerName LogMessage
+log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+# Debugging Pattern format
+#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
+
+
+#
+# console
+# Add "console" to rootlogger above if you want to use this
+#
+
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.target=System.err
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
+
+#
+# TaskLog Appender
+#
+
+#Default values
+hadoop.tasklog.taskid=null
+hadoop.tasklog.iscleanup=false
+hadoop.tasklog.noKeepSplits=4
+hadoop.tasklog.totalLogFileSize=100
+hadoop.tasklog.purgeLogSplits=true
+hadoop.tasklog.logsRetainHours=12
+
+log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
+log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
+log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
+log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}
+
+log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
+log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+
+#
+# HDFS block state change log from block manager
+#
+# Uncomment the following to suppress normal block state change
+# messages from BlockManager in NameNode.
+#log4j.logger.BlockStateChange=WARN
+
+#
+#Security appender
+#
+hadoop.security.logger=INFO,NullAppender
+hadoop.security.log.maxfilesize=256MB
+hadoop.security.log.maxbackupindex=20
+log4j.category.SecurityLogger=${hadoop.security.logger}
+hadoop.security.log.file=SecurityAuth-${user.name}.audit
+log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
+log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}
+log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}
+
+#
+# Daily Rolling Security appender
+#
+log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
+log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
+log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd
+
+#
+# hadoop configuration logging
+#
+
+# Uncomment the following line to turn off configuration deprecation warnings.
+# log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN
+
+#
+# hdfs audit logging
+#
+hdfs.audit.logger=INFO,NullAppender
+hdfs.audit.log.maxfilesize=256MB
+hdfs.audit.log.maxbackupindex=20
+log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
+log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
+log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
+log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
+log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
+log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
+
+#
+# mapred audit logging
+#
+mapred.audit.logger=INFO,NullAppender
+mapred.audit.log.maxfilesize=256MB
+mapred.audit.log.maxbackupindex=20
+log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}
+log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
+log4j.appender.MRAUDIT=org.apache.log4j.RollingFileAppender
+log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
+log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
+log4j.appender.MRAUDIT.MaxFileSize=${mapred.audit.log.maxfilesize}
+log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex}
+
+# Custom Logging levels
+
+#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG
+#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG
+#log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG
+
+# Jets3t library
+log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
+
+#
+# Event Counter Appender
+# Sends counts of logging messages at different severity levels to Hadoop Metrics.
+#
+log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
+
+#
+# Job Summary Appender
+#
+# Use the following logger to send job summaries to a separate file defined by
+# hadoop.mapreduce.jobsummary.log.file:
+# hadoop.mapreduce.jobsummary.logger=INFO,JSA
+#
+hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
+hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
+hadoop.mapreduce.jobsummary.log.maxfilesize=256MB
+hadoop.mapreduce.jobsummary.log.maxbackupindex=20
+log4j.appender.JSA=org.apache.log4j.RollingFileAppender
+log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
+log4j.appender.JSA.MaxFileSize=${hadoop.mapreduce.jobsummary.log.maxfilesize}
+log4j.appender.JSA.MaxBackupIndex=${hadoop.mapreduce.jobsummary.log.maxbackupindex}
+log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
+log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
+log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger}
+log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false
+
+#
+# Yarn ResourceManager Application Summary Log
+#
+# Set the ResourceManager summary log filename
+yarn.server.resourcemanager.appsummary.log.file=rm-appsummary.log
+# Set the ResourceManager summary log level and appender
+yarn.server.resourcemanager.appsummary.logger=${hadoop.root.logger}
+#yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY
+
+# To enable AppSummaryLogging for the RM,
+# set yarn.server.resourcemanager.appsummary.logger to
+# <LEVEL>,RMSUMMARY in hadoop-env.sh
+
+# Appender for ResourceManager Application Summary Log
+# Requires the following properties to be set
+# - hadoop.log.dir (Hadoop Log directory)
+# - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename)
+# - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender)
+
+log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}
+log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false
+log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender
+log4j.appender.RMSUMMARY.File=${hadoop.log.dir}/${yarn.server.resourcemanager.appsummary.log.file}
+log4j.appender.RMSUMMARY.MaxFileSize=256MB
+log4j.appender.RMSUMMARY.MaxBackupIndex=20
+log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout
+log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
diff --git a/aarch64/etc/hadoop/mapred-env.cmd b/aarch64/etc/hadoop/mapred-env.cmd
new file mode 100644
index 0000000..610d593
--- /dev/null
+++ b/aarch64/etc/hadoop/mapred-env.cmd
@@ -0,0 +1,20 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+
+set HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
+
+set HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA
+
diff --git a/aarch64/etc/hadoop/mapred-env.sh b/aarch64/etc/hadoop/mapred-env.sh
new file mode 100644
index 0000000..6be1e27
--- /dev/null
+++ b/aarch64/etc/hadoop/mapred-env.sh
@@ -0,0 +1,27 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
+
+export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
+
+export HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA
+
+#export HADOOP_JOB_HISTORYSERVER_OPTS=
+#export HADOOP_MAPRED_LOG_DIR="" # Where log files are stored. $HADOOP_MAPRED_HOME/logs by default.
+#export HADOOP_JHS_LOGGER=INFO,RFA # Hadoop JobSummary logger.
+#export HADOOP_MAPRED_PID_DIR= # Where the pid files are stored. /tmp by default.
+#export HADOOP_MAPRED_IDENT_STRING= # A string representing this instance of hadoop. $USER by default.
+#export HADOOP_MAPRED_NICENESS= # The scheduling priority for daemons. Defaults to 0.
diff --git a/aarch64/etc/hadoop/mapred-queues.xml.template b/aarch64/etc/hadoop/mapred-queues.xml.template
new file mode 100644
index 0000000..ce6cd20
--- /dev/null
+++ b/aarch64/etc/hadoop/mapred-queues.xml.template
@@ -0,0 +1,92 @@
+<?xml version="1.0"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<!-- This is the template for queue configuration. The format supports nesting of
+ queues within queues - a feature called hierarchical queues. All queues are
+ defined within the 'queues' tag which is the top level element for this
+ XML document. The queue acls configured here for different queues are
+ checked for authorization only if the configuration property
+ mapreduce.cluster.acls.enabled is set to true. -->
+<queues>
+
+ <!-- Configuration for a queue is specified by defining a 'queue' element. -->
+ <queue>
+
+ <!-- Name of a queue. Queue name cannot contain a ':' -->
+ <name>default</name>
+
+ <!-- properties for a queue, typically used by schedulers,
+ can be defined here -->
+ <properties>
+ </properties>
+
+ <!-- State of the queue. If running, the queue will accept new jobs.
+ If stopped, the queue will not accept new jobs. -->
+ <state>running</state>
+
+ <!-- Specifies the ACLs to check for submitting jobs to this queue.
+ If set to '*', it allows all users to submit jobs to the queue.
+ If set to ' '(i.e. space), no user will be allowed to do this
+ operation. The default value for any queue acl is ' '.
+ For specifying a list of users and groups the format to use is
+ user1,user2 group1,group2
+
+ It is only used if authorization is enabled in Map/Reduce by setting
+ the configuration property mapreduce.cluster.acls.enabled to true.
+
+ Irrespective of this ACL configuration, the user who started the
+ cluster and cluster administrators configured via
+ mapreduce.cluster.administrators can do this operation. -->
+ <acl-submit-job> </acl-submit-job>
+
+ <!-- Specifies the ACLs to check for viewing and modifying jobs in this
+ queue. Modifications include killing jobs, tasks of jobs or changing
+ priorities.
+ If set to '*', it allows all users to view, modify jobs of the queue.
+ If set to ' '(i.e. space), no user will be allowed to do this
+ operation.
+ For specifying a list of users and groups the format to use is
+ user1,user2 group1,group2
+
+ It is only used if authorization is enabled in Map/Reduce by setting
+ the configuration property mapreduce.cluster.acls.enabled to true.
+
+ Irrespective of this ACL configuration, the user who started the
+ cluster and cluster administrators configured via
+ mapreduce.cluster.administrators can do the above operations on all
+ the jobs in all the queues. The job owner can do all the above
+ operations on his/her job irrespective of this ACL configuration. -->
+ <acl-administer-jobs> </acl-administer-jobs>
+ </queue>
+
+ <!-- Here is a sample of a hierarchical queue configuration
+ where q2 is a child of q1. In this example, q2 is a leaf level
+ queue as it has no queues configured within it. Currently, ACLs
+ and state are only supported for the leaf level queues.
+ Note also the usage of properties for the queue q2.
+ <queue>
+ <name>q1</name>
+ <queue>
+ <name>q2</name>
+ <properties>
+ <property key="capacity" value="20"/>
+ <property key="user-limit" value="30"/>
+ </properties>
+ </queue>
+ </queue>
+ -->
+</queues>
diff --git a/aarch64/etc/hadoop/mapred-site.xml.template b/aarch64/etc/hadoop/mapred-site.xml.template
new file mode 100644
index 0000000..761c352
--- /dev/null
+++ b/aarch64/etc/hadoop/mapred-site.xml.template
@@ -0,0 +1,21 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+
+<!-- Put site-specific property overrides in this file. -->
+
+<configuration>
+
+</configuration>
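Copying this template to mapred-site.xml and declaring the framework is the usual minimum for running MapReduce on YARN; a minimal sketch, assuming a YARN-backed cluster:

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>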
diff --git a/aarch64/etc/hadoop/slaves b/aarch64/etc/hadoop/slaves
new file mode 100644
index 0000000..2fbb50c
--- /dev/null
+++ b/aarch64/etc/hadoop/slaves
@@ -0,0 +1 @@
+localhost
diff --git a/aarch64/etc/hadoop/ssl-client.xml.example b/aarch64/etc/hadoop/ssl-client.xml.example
new file mode 100644
index 0000000..a50dce4
--- /dev/null
+++ b/aarch64/etc/hadoop/ssl-client.xml.example
@@ -0,0 +1,80 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<configuration>
+
+<property>
+ <name>ssl.client.truststore.location</name>
+ <value></value>
+ <description>Truststore to be used by clients like distcp. Must be
+ specified.
+ </description>
+</property>
+
+<property>
+ <name>ssl.client.truststore.password</name>
+ <value></value>
+ <description>Optional. Default value is "".
+ </description>
+</property>
+
+<property>
+ <name>ssl.client.truststore.type</name>
+ <value>jks</value>
+ <description>Optional. The keystore file format, default value is "jks".
+ </description>
+</property>
+
+<property>
+ <name>ssl.client.truststore.reload.interval</name>
+ <value>10000</value>
+ <description>Truststore reload check interval, in milliseconds.
+ Default value is 10000 (10 seconds).
+ </description>
+</property>
+
+<property>
+ <name>ssl.client.keystore.location</name>
+ <value></value>
+ <description>Keystore to be used by clients like distcp. Must be
+ specified.
+ </description>
+</property>
+
+<property>
+ <name>ssl.client.keystore.password</name>
+ <value></value>
+ <description>Optional. Default value is "".
+ </description>
+</property>
+
+<property>
+ <name>ssl.client.keystore.keypassword</name>
+ <value></value>
+ <description>Optional. Default value is "".
+ </description>
+</property>
+
+<property>
+ <name>ssl.client.keystore.type</name>
+ <value>jks</value>
+ <description>Optional. The keystore file format, default value is "jks".
+ </description>
+</property>
+
+</configuration>
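A filled-in sketch of the mandatory client truststore entry, with a hypothetical path (the location must point at an existing JKS file; the password entry stays optional, as described above):

  <property>
    <name>ssl.client.truststore.location</name>
    <value>/etc/hadoop/ssl/client-truststore.jks</value>
  </property>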
diff --git a/aarch64/etc/hadoop/ssl-server.xml.example b/aarch64/etc/hadoop/ssl-server.xml.example
new file mode 100644
index 0000000..4b363ff
--- /dev/null
+++ b/aarch64/etc/hadoop/ssl-server.xml.example
@@ -0,0 +1,77 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<configuration>
+
+<property>
+ <name>ssl.server.truststore.location</name>
+ <value></value>
+ <description>Truststore to be used by NN and DN. Must be specified.
+ </description>
+</property>
+
+<property>
+ <name>ssl.server.truststore.password</name>
+ <value></value>
+ <description>Optional. Default value is "".
+ </description>
+</property>
+
+<property>
+ <name>ssl.server.truststore.type</name>
+ <value>jks</value>
+ <description>Optional. The keystore file format, default value is "jks".
+ </description>
+</property>
+
+<property>
+ <name>ssl.server.truststore.reload.interval</name>
+ <value>10000</value>
+ <description>Truststore reload check interval, in milliseconds.
+  Default value is 10000 (10 seconds).
+  </description>
+</property>
+
+<property>
+ <name>ssl.server.keystore.location</name>
+ <value></value>
+ <description>Keystore to be used by NN and DN. Must be specified.
+ </description>
+</property>
+
+<property>
+ <name>ssl.server.keystore.password</name>
+ <value></value>
+ <description>Must be specified.
+ </description>
+</property>
+
+<property>
+ <name>ssl.server.keystore.keypassword</name>
+ <value></value>
+ <description>Must be specified.
+ </description>
+</property>
+
+<property>
+ <name>ssl.server.keystore.type</name>
+ <value>jks</value>
+ <description>Optional. The keystore file format, default value is "jks".
+ </description>
+</property>
+
+</configuration>
diff --git a/aarch64/etc/hadoop/yarn-env.cmd b/aarch64/etc/hadoop/yarn-env.cmd
new file mode 100644
index 0000000..3329f8f
--- /dev/null
+++ b/aarch64/etc/hadoop/yarn-env.cmd
@@ -0,0 +1,60 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+
+@rem User for YARN daemons
+if not defined HADOOP_YARN_USER (
+  set HADOOP_YARN_USER=yarn
+)
+
+if not defined YARN_CONF_DIR (
+ set YARN_CONF_DIR=%HADOOP_YARN_HOME%\conf
+)
+
+if defined YARN_HEAPSIZE (
+ @rem echo run with Java heapsize %YARN_HEAPSIZE%
+ set JAVA_HEAP_MAX=-Xmx%YARN_HEAPSIZE%m
+)
+
+if not defined YARN_LOG_DIR (
+ set YARN_LOG_DIR=%HADOOP_YARN_HOME%\logs
+)
+
+if not defined YARN_LOGFILE (
+ set YARN_LOGFILE=yarn.log
+)
+
+@rem default policy file for service-level authorization
+if not defined YARN_POLICYFILE (
+ set YARN_POLICYFILE=hadoop-policy.xml
+)
+
+if not defined YARN_ROOT_LOGGER (
+ set YARN_ROOT_LOGGER=INFO,console
+)
+
+set YARN_OPTS=%YARN_OPTS% -Dhadoop.log.dir=%YARN_LOG_DIR%
+set YARN_OPTS=%YARN_OPTS% -Dyarn.log.dir=%YARN_LOG_DIR%
+set YARN_OPTS=%YARN_OPTS% -Dhadoop.log.file=%YARN_LOGFILE%
+set YARN_OPTS=%YARN_OPTS% -Dyarn.log.file=%YARN_LOGFILE%
+set YARN_OPTS=%YARN_OPTS% -Dyarn.home.dir=%HADOOP_YARN_HOME%
+set YARN_OPTS=%YARN_OPTS% -Dyarn.id.str=%YARN_IDENT_STRING%
+set YARN_OPTS=%YARN_OPTS% -Dhadoop.home.dir=%HADOOP_YARN_HOME%
+set YARN_OPTS=%YARN_OPTS% -Dhadoop.root.logger=%YARN_ROOT_LOGGER%
+set YARN_OPTS=%YARN_OPTS% -Dyarn.root.logger=%YARN_ROOT_LOGGER%
+if defined JAVA_LIBRARY_PATH (
+ set YARN_OPTS=%YARN_OPTS% -Djava.library.path=%JAVA_LIBRARY_PATH%
+)
+set YARN_OPTS=%YARN_OPTS% -Dyarn.policy.file=%YARN_POLICYFILE%
\ No newline at end of file
diff --git a/aarch64/etc/hadoop/yarn-env.sh b/aarch64/etc/hadoop/yarn-env.sh
new file mode 100644
index 0000000..cfce28d
--- /dev/null
+++ b/aarch64/etc/hadoop/yarn-env.sh
@@ -0,0 +1,107 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# User for YARN daemons
+export HADOOP_YARN_USER=${HADOOP_YARN_USER:-yarn}
+
+# default configuration directory
+export YARN_CONF_DIR="${YARN_CONF_DIR:-$HADOOP_YARN_HOME/conf}"
+
+# some Java parameters
+# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
+if [ "$JAVA_HOME" != "" ]; then
+ #echo "run java in $JAVA_HOME"
+ JAVA_HOME=$JAVA_HOME
+fi
+
+if [ "$JAVA_HOME" = "" ]; then
+ echo "Error: JAVA_HOME is not set."
+ exit 1
+fi
+
+JAVA=$JAVA_HOME/bin/java
+JAVA_HEAP_MAX=-Xmx1000m
+
+# For setting YARN specific HEAP sizes please use this
+# Parameter and set appropriately
+# YARN_HEAPSIZE=1000
+
+# check envvars which might override default args
+if [ "$YARN_HEAPSIZE" != "" ]; then
+ JAVA_HEAP_MAX="-Xmx""$YARN_HEAPSIZE""m"
+fi
+
+# Resource Manager specific parameters
+
+# Specify the max heap size for the ResourceManager, as a numerical value
+# in MB. For example, to specify a JVM option of -Xmx1000m, set
+# the value to 1000.
+# This value will be overridden by an Xmx setting specified in either YARN_OPTS
+# and/or YARN_RESOURCEMANAGER_OPTS.
+# If not specified, the default value will be picked from either YARN_HEAPSIZE
+# or JAVA_HEAP_MAX, with YARN_HEAPSIZE as the preferred option of the two.
+#export YARN_RESOURCEMANAGER_HEAPSIZE=1000
+
+# Specify the JVM options to be used when starting the ResourceManager.
+# These options will be appended to the options specified as YARN_OPTS
+# and therefore may override any similar flags set in YARN_OPTS
+#export YARN_RESOURCEMANAGER_OPTS=
+
+# Node Manager specific parameters
+
+# Specify the max heap size for the NodeManager, as a numerical value
+# in MB. For example, to specify a JVM option of -Xmx1000m, set
+# the value to 1000.
+# This value will be overridden by an Xmx setting specified in either YARN_OPTS
+# and/or YARN_NODEMANAGER_OPTS.
+# If not specified, the default value will be picked from either YARN_HEAPSIZE
+# or JAVA_HEAP_MAX, with YARN_HEAPSIZE as the preferred option of the two.
+#export YARN_NODEMANAGER_HEAPSIZE=1000
+
+# Specify the JVM options to be used when starting the NodeManager.
+# These options will be appended to the options specified as YARN_OPTS
+# and therefore may override any similar flags set in YARN_OPTS
+#export YARN_NODEMANAGER_OPTS=
+
+# so that filenames w/ spaces are handled correctly
+IFS=
+
+
+# default log directory & file
+if [ "$YARN_LOG_DIR" = "" ]; then
+ YARN_LOG_DIR="$HADOOP_YARN_HOME/logs"
+fi
+if [ "$YARN_LOGFILE" = "" ]; then
+ YARN_LOGFILE='yarn.log'
+fi
+
+# default policy file for service-level authorization
+if [ "$YARN_POLICYFILE" = "" ]; then
+ YARN_POLICYFILE="hadoop-policy.xml"
+fi
+
+# restore ordinary behaviour
+unset IFS
+
+
+YARN_OPTS="$YARN_OPTS -Dhadoop.log.dir=$YARN_LOG_DIR"
+YARN_OPTS="$YARN_OPTS -Dyarn.log.dir=$YARN_LOG_DIR"
+YARN_OPTS="$YARN_OPTS -Dhadoop.log.file=$YARN_LOGFILE"
+YARN_OPTS="$YARN_OPTS -Dyarn.log.file=$YARN_LOGFILE"
+YARN_OPTS="$YARN_OPTS -Dyarn.home.dir=$YARN_COMMON_HOME"
+YARN_OPTS="$YARN_OPTS -Dyarn.id.str=$YARN_IDENT_STRING"
+YARN_OPTS="$YARN_OPTS -Dhadoop.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
+YARN_OPTS="$YARN_OPTS -Dyarn.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
+if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
+ YARN_OPTS="$YARN_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH"
+fi
+YARN_OPTS="$YARN_OPTS -Dyarn.policy.file=$YARN_POLICYFILE"
+
+
diff --git a/aarch64/etc/hadoop/yarn-site.xml b/aarch64/etc/hadoop/yarn-site.xml
new file mode 100644
index 0000000..25292c7
--- /dev/null
+++ b/aarch64/etc/hadoop/yarn-site.xml
@@ -0,0 +1,19 @@
+<?xml version="1.0"?>
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+<configuration>
+
+<!-- Site specific YARN configuration properties -->
+
+</configuration>
diff --git a/aarch64/include/Pipes.hh b/aarch64/include/Pipes.hh
new file mode 100644
index 0000000..b5d0ddd
--- /dev/null
+++ b/aarch64/include/Pipes.hh
@@ -0,0 +1,260 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef HADOOP_PIPES_HH
+#define HADOOP_PIPES_HH
+
+#ifdef SWIG
+%module (directors="1") HadoopPipes
+%include "std_string.i"
+%feature("director") Mapper;
+%feature("director") Reducer;
+%feature("director") Partitioner;
+%feature("director") RecordReader;
+%feature("director") RecordWriter;
+%feature("director") Factory;
+#else
+#include <string>
+#endif
+
+#include <stdint.h>
+
+namespace HadoopPipes {
+
+/**
+ * This header defines the interface between application code and the
+ * foreign-code (Pipes) side of Hadoop Map/Reduce.
+ */
+
+/**
+ * A JobConf defines the properties for a job.
+ */
+class JobConf {
+public:
+ virtual bool hasKey(const std::string& key) const = 0;
+ virtual const std::string& get(const std::string& key) const = 0;
+ virtual int getInt(const std::string& key) const = 0;
+ virtual float getFloat(const std::string& key) const = 0;
+ virtual bool getBoolean(const std::string&key) const = 0;
+ virtual ~JobConf() {}
+};
+
+/**
+ * Task context provides the information about the task and job.
+ */
+class TaskContext {
+public:
+ /**
+ * Counter to keep track of a property and its value.
+ */
+ class Counter {
+ private:
+ int id;
+ public:
+ Counter(int counterId) : id(counterId) {}
+ Counter(const Counter& counter) : id(counter.id) {}
+
+ int getId() const { return id; }
+ };
+
+ /**
+ * Get the JobConf for the current task.
+ */
+ virtual const JobConf* getJobConf() = 0;
+
+ /**
+ * Get the current key.
+ * @return the current key
+ */
+ virtual const std::string& getInputKey() = 0;
+
+ /**
+ * Get the current value.
+ * @return the current value
+ */
+ virtual const std::string& getInputValue() = 0;
+
+ /**
+ * Generate an output record
+ */
+ virtual void emit(const std::string& key, const std::string& value) = 0;
+
+ /**
+ * Mark your task as having made progress without changing the status
+ * message.
+ */
+ virtual void progress() = 0;
+
+ /**
+ * Set the status message and call progress.
+ */
+ virtual void setStatus(const std::string& status) = 0;
+
+ /**
+ * Register a counter with the given group and name.
+ */
+ virtual Counter*
+ getCounter(const std::string& group, const std::string& name) = 0;
+
+ /**
+ * Increment the value of the counter with the given amount.
+ */
+ virtual void incrementCounter(const Counter* counter, uint64_t amount) = 0;
+
+ virtual ~TaskContext() {}
+};
+
+class MapContext: public TaskContext {
+public:
+
+ /**
+ * Access the InputSplit of the mapper.
+ */
+ virtual const std::string& getInputSplit() = 0;
+
+ /**
+ * Get the name of the key class of the input to this task.
+ */
+ virtual const std::string& getInputKeyClass() = 0;
+
+ /**
+ * Get the name of the value class of the input to this task.
+ */
+ virtual const std::string& getInputValueClass() = 0;
+
+};
+
+class ReduceContext: public TaskContext {
+public:
+ /**
+ * Advance to the next value.
+ */
+ virtual bool nextValue() = 0;
+};
+
+class Closable {
+public:
+ virtual void close() {}
+ virtual ~Closable() {}
+};
+
+/**
+ * The application's mapper class to do map.
+ */
+class Mapper: public Closable {
+public:
+ virtual void map(MapContext& context) = 0;
+};
+
+/**
+ * The application's reducer class to do reduce.
+ */
+class Reducer: public Closable {
+public:
+ virtual void reduce(ReduceContext& context) = 0;
+};
+
+/**
+ * User code to decide where each key should be sent.
+ */
+class Partitioner {
+public:
+ virtual int partition(const std::string& key, int numOfReduces) = 0;
+ virtual ~Partitioner() {}
+};
+
+/**
+ * Applications that want to read the input directly for the map function
+ * can define RecordReaders in C++.
+ */
+class RecordReader: public Closable {
+public:
+ virtual bool next(std::string& key, std::string& value) = 0;
+
+ /**
+ * The progress of the record reader through the split as a value between
+ * 0.0 and 1.0.
+ */
+ virtual float getProgress() = 0;
+};
+
+/**
+ * An object to write key/value pairs as they are emitted from the reduce.
+ */
+class RecordWriter: public Closable {
+public:
+ virtual void emit(const std::string& key,
+ const std::string& value) = 0;
+};
+
+/**
+ * A factory to create the necessary application objects.
+ */
+class Factory {
+public:
+ virtual Mapper* createMapper(MapContext& context) const = 0;
+ virtual Reducer* createReducer(ReduceContext& context) const = 0;
+
+ /**
+ * Create a combiner, if this application has one.
+ * @return the new combiner or NULL, if one is not needed
+ */
+ virtual Reducer* createCombiner(MapContext& context) const {
+ return NULL;
+ }
+
+ /**
+ * Create an application partitioner object.
+ * @return the new partitioner or NULL, if the default partitioner should be
+ * used.
+ */
+ virtual Partitioner* createPartitioner(MapContext& context) const {
+ return NULL;
+ }
+
+ /**
+ * Create an application record reader.
+ * @return the new RecordReader or NULL, if the Java RecordReader should be
+ * used.
+ */
+ virtual RecordReader* createRecordReader(MapContext& context) const {
+ return NULL;
+ }
+
+ /**
+ * Create an application record writer.
+ * @return the new RecordWriter or NULL, if the Java RecordWriter should be
+ * used.
+ */
+ virtual RecordWriter* createRecordWriter(ReduceContext& context) const {
+ return NULL;
+ }
+
+ virtual ~Factory() {}
+};
+
+/**
+ * Run the assigned task in the framework.
+ * The user's main function should construct a Factory for the
+ * application classes and then call this.
+ * @return true, if the task succeeded.
+ */
+bool runTask(const Factory& factory);
+
+}
+
+#endif
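
The usual pattern for the Pipes API above is to subclass Mapper and Reducer and hand them to runTask() through a factory (see TemplateFactory.hh below). A minimal word-count sketch, not part of this import (WordCountMapper and WordCountReducer are illustrative names), assuming the program is linked against libhadooppipes.a and libhadooputils.a from aarch64/lib/native:

    #include <string>
    #include <vector>

    #include "Pipes.hh"
    #include "StringUtils.hh"
    #include "TemplateFactory.hh"

    // Emits (word, "1") for each whitespace-separated token in the value.
    class WordCountMapper : public HadoopPipes::Mapper {
    public:
      WordCountMapper(HadoopPipes::TaskContext& /*context*/) {}
      void map(HadoopPipes::MapContext& context) {
        std::vector<std::string> words =
            HadoopUtils::splitString(context.getInputValue(), " ");
        for (size_t i = 0; i < words.size(); ++i) {
          context.emit(words[i], "1");
        }
      }
    };

    // Sums the per-word counts.
    class WordCountReducer : public HadoopPipes::Reducer {
    public:
      WordCountReducer(HadoopPipes::TaskContext& /*context*/) {}
      void reduce(HadoopPipes::ReduceContext& context) {
        int sum = 0;
        while (context.nextValue()) {
          sum += HadoopUtils::toInt(context.getInputValue());
        }
        context.emit(context.getInputKey(), HadoopUtils::toString(sum));
      }
    };

    int main() {
      // Serves the task protocol until the framework is done with this task.
      return HadoopPipes::runTask(
          HadoopPipes::TemplateFactory<WordCountMapper, WordCountReducer>());
    }

A binary built this way is typically submitted with the pipes subcommand of the bin/mapred script in this tree.
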
diff --git a/aarch64/include/SerialUtils.hh b/aarch64/include/SerialUtils.hh
new file mode 100644
index 0000000..cadfd76
--- /dev/null
+++ b/aarch64/include/SerialUtils.hh
@@ -0,0 +1,170 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef HADOOP_SERIAL_UTILS_HH
+#define HADOOP_SERIAL_UTILS_HH
+
+#include <string>
+#include <stdint.h>
+
+namespace HadoopUtils {
+
+ /**
+ * A simple exception class that records a message for the user.
+ */
+ class Error {
+ private:
+ std::string error;
+ public:
+
+ /**
+ * Create an error object with the given message.
+ */
+ Error(const std::string& msg);
+
+ /**
+ * Construct an error object with the given message that was created on
+     * the given file, line, and function.
+ */
+ Error(const std::string& msg,
+ const std::string& file, int line, const std::string& function);
+
+ /**
+ * Get the error message.
+ */
+ const std::string& getMessage() const;
+ };
+
+ /**
+ * Check to make sure that the condition is true, and throw an exception
+ * if it is not. The exception will contain the message and a description
+ * of the source location.
+ */
+ #define HADOOP_ASSERT(CONDITION, MESSAGE) \
+ { \
+ if (!(CONDITION)) { \
+ throw HadoopUtils::Error((MESSAGE), __FILE__, __LINE__, \
+ __func__); \
+ } \
+ }
+
+ /**
+ * An interface for an input stream.
+ */
+ class InStream {
+ public:
+ /**
+ * Reads len bytes from the stream into the buffer.
+ * @param buf the buffer to read into
+     * @param len the number of bytes to read
+ * @throws Error if there are problems reading
+ */
+ virtual void read(void *buf, size_t len) = 0;
+ virtual ~InStream() {}
+ };
+
+ /**
+ * An interface for an output stream.
+ */
+ class OutStream {
+ public:
+ /**
+ * Write the given buffer to the stream.
+ * @param buf the data to write
+ * @param len the number of bytes to write
+ * @throws Error if there are problems writing
+ */
+ virtual void write(const void *buf, size_t len) = 0;
+ /**
+ * Flush the data to the underlying store.
+ */
+ virtual void flush() = 0;
+ virtual ~OutStream() {}
+ };
+
+ /**
+ * A class to read a file as a stream.
+ */
+ class FileInStream : public InStream {
+ public:
+ FileInStream();
+ bool open(const std::string& name);
+ bool open(FILE* file);
+ void read(void *buf, size_t buflen);
+ bool skip(size_t nbytes);
+ bool close();
+ virtual ~FileInStream();
+ private:
+ /**
+     * The file to read from.
+ */
+ FILE *mFile;
+ /**
+     * Is this class responsible for closing the FILE*?
+ */
+ bool isOwned;
+ };
+
+ /**
+ * A class to write a stream to a file.
+ */
+ class FileOutStream: public OutStream {
+ public:
+
+ /**
+ * Create a stream that isn't bound to anything.
+ */
+ FileOutStream();
+
+ /**
+ * Create the given file, potentially overwriting an existing file.
+ */
+ bool open(const std::string& name, bool overwrite);
+ bool open(FILE* file);
+ void write(const void* buf, size_t len);
+ bool advance(size_t nbytes);
+ void flush();
+ bool close();
+ virtual ~FileOutStream();
+ private:
+ FILE *mFile;
+ bool isOwned;
+ };
+
+ /**
+ * A stream that reads from a string.
+ */
+ class StringInStream: public InStream {
+ public:
+ StringInStream(const std::string& str);
+ virtual void read(void *buf, size_t buflen);
+ private:
+ const std::string& buffer;
+ std::string::const_iterator itr;
+ };
+
+ void serializeInt(int32_t t, OutStream& stream);
+ int32_t deserializeInt(InStream& stream);
+ void serializeLong(int64_t t, OutStream& stream);
+ int64_t deserializeLong(InStream& stream);
+ void serializeFloat(float t, OutStream& stream);
+ float deserializeFloat(InStream& stream);
+ void serializeString(const std::string& t, OutStream& stream);
+ void deserializeString(std::string& t, InStream& stream);
+}
+
+#endif
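
A rough illustration of the stream and serialization helpers above, as a sketch rather than part of the import (record.bin is a hypothetical local file name):

    #include <iostream>
    #include <string>

    #include "SerialUtils.hh"

    int main() {
      // Serialize an int and a string to a local file.
      HadoopUtils::FileOutStream out;
      if (!out.open("record.bin", true)) {  // true = overwrite if present
        std::cerr << "cannot open record.bin for writing\n";
        return 1;
      }
      HadoopUtils::serializeInt(42, out);
      HadoopUtils::serializeString("hello", out);
      out.flush();
      out.close();

      // Read them back in the same order they were written.
      HadoopUtils::FileInStream in;
      if (!in.open("record.bin")) {
        std::cerr << "cannot open record.bin for reading\n";
        return 1;
      }
      int32_t n = HadoopUtils::deserializeInt(in);
      std::string s;
      HadoopUtils::deserializeString(s, in);
      in.close();

      std::cout << n << " " << s << std::endl;  // prints: 42 hello
      return 0;
    }
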
diff --git a/aarch64/include/StringUtils.hh b/aarch64/include/StringUtils.hh
new file mode 100644
index 0000000..4720172
--- /dev/null
+++ b/aarch64/include/StringUtils.hh
@@ -0,0 +1,81 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef HADOOP_STRING_UTILS_HH
+#define HADOOP_STRING_UTILS_HH
+
+#include <stdint.h>
+#include <string>
+#include <vector>
+
+namespace HadoopUtils {
+
+ /**
+ * Convert an integer to a string.
+ */
+ std::string toString(int32_t x);
+
+ /**
+ * Convert a string to an integer.
+ * @throws Error if the string is not a valid integer
+ */
+ int32_t toInt(const std::string& val);
+
+ /**
+ * Convert the string to a float.
+ * @throws Error if the string is not a valid float
+ */
+ float toFloat(const std::string& val);
+
+ /**
+ * Convert the string to a boolean.
+ * @throws Error if the string is not a valid boolean value
+ */
+ bool toBool(const std::string& val);
+
+ /**
+ * Get the current time in the number of milliseconds since 1970.
+ */
+ uint64_t getCurrentMillis();
+
+ /**
+   * Split a string into "words". Multiple delimiters are treated as a single
+ * word break, so no zero-length words are returned.
+ * @param str the string to split
+ * @param separator a list of characters that divide words
+ */
+ std::vector<std::string> splitString(const std::string& str,
+ const char* separator);
+
+ /**
+ * Quote a string to avoid "\", non-printable characters, and the
+   * delimiters.
+   * @param str the string to quote
+   * @param delimiters the set of characters to always quote
+   */
+  std::string quoteString(const std::string& str,
+                          const char* delimiters);
+
+ /**
+ * Unquote the given string to return the original string.
+ * @param str the string to unquote
+ */
+ std::string unquoteString(const std::string& str);
+
+}
+
+#endif
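
A small sketch of the conversion and splitting helpers (the values are made up, and note that toInt, toFloat, and toBool throw HadoopUtils::Error on malformed input):

    #include <iostream>
    #include <string>
    #include <vector>

    #include "StringUtils.hh"

    int main() {
      // Consecutive separators collapse, so this yields 3 words, not 4.
      std::vector<std::string> words =
          HadoopUtils::splitString("alpha beta  gamma", " ");
      std::cout << words.size() << std::endl;                  // 3

      // String/number round trips.
      int32_t n = HadoopUtils::toInt("41");
      std::cout << HadoopUtils::toString(n + 1) << std::endl;  // 42

      // Quoting escapes the listed delimiters; unquoting reverses it.
      std::string quoted = HadoopUtils::quoteString("a\tb", "\t");
      std::cout << (HadoopUtils::unquoteString(quoted) == "a\tb")
                << std::endl;                                  // 1
      return 0;
    }
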
diff --git a/aarch64/include/TemplateFactory.hh b/aarch64/include/TemplateFactory.hh
new file mode 100644
index 0000000..22e10ae
--- /dev/null
+++ b/aarch64/include/TemplateFactory.hh
@@ -0,0 +1,96 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef HADOOP_PIPES_TEMPLATE_FACTORY_HH
+#define HADOOP_PIPES_TEMPLATE_FACTORY_HH
+
+namespace HadoopPipes {
+
+ template <class mapper, class reducer>
+ class TemplateFactory2: public Factory {
+ public:
+ Mapper* createMapper(MapContext& context) const {
+ return new mapper(context);
+ }
+ Reducer* createReducer(ReduceContext& context) const {
+ return new reducer(context);
+ }
+ };
+
+ template <class mapper, class reducer, class partitioner>
+ class TemplateFactory3: public TemplateFactory2<mapper,reducer> {
+ public:
+ Partitioner* createPartitioner(MapContext& context) const {
+ return new partitioner(context);
+ }
+ };
+
+ template <class mapper, class reducer>
+ class TemplateFactory3<mapper, reducer, void>
+ : public TemplateFactory2<mapper,reducer> {
+ };
+
+ template <class mapper, class reducer, class partitioner, class combiner>
+ class TemplateFactory4
+ : public TemplateFactory3<mapper,reducer,partitioner>{
+ public:
+ Reducer* createCombiner(MapContext& context) const {
+ return new combiner(context);
+ }
+ };
+
+ template <class mapper, class reducer, class partitioner>
+ class TemplateFactory4<mapper,reducer,partitioner,void>
+ : public TemplateFactory3<mapper,reducer,partitioner>{
+ };
+
+ template <class mapper, class reducer, class partitioner,
+ class combiner, class recordReader>
+ class TemplateFactory5
+ : public TemplateFactory4<mapper,reducer,partitioner,combiner>{
+ public:
+ RecordReader* createRecordReader(MapContext& context) const {
+ return new recordReader(context);
+ }
+ };
+
+ template <class mapper, class reducer, class partitioner,class combiner>
+ class TemplateFactory5<mapper,reducer,partitioner,combiner,void>
+ : public TemplateFactory4<mapper,reducer,partitioner,combiner>{
+ };
+
+ template <class mapper, class reducer, class partitioner=void,
+ class combiner=void, class recordReader=void,
+ class recordWriter=void>
+ class TemplateFactory
+ : public TemplateFactory5<mapper,reducer,partitioner,combiner,recordReader>{
+ public:
+ RecordWriter* createRecordWriter(ReduceContext& context) const {
+ return new recordWriter(context);
+ }
+ };
+
+ template <class mapper, class reducer, class partitioner,
+ class combiner, class recordReader>
+ class TemplateFactory<mapper, reducer, partitioner, combiner, recordReader,
+ void>
+ : public TemplateFactory5<mapper,reducer,partitioner,combiner,recordReader>{
+ };
+
+}
+
+#endif
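
The void specializations above exist so trailing template arguments can be omitted: TemplateFactory<mapper, reducer> supplies only the required pieces, while further arguments plug in a partitioner, combiner, record reader, and record writer. A sketch with a custom partitioner (the Identity* and FirstBytePartitioner classes are illustrative only, not part of the library):

    #include <string>

    #include "Pipes.hh"
    #include "TemplateFactory.hh"

    class IdentityMapper : public HadoopPipes::Mapper {
    public:
      IdentityMapper(HadoopPipes::TaskContext& /*context*/) {}
      void map(HadoopPipes::MapContext& context) {
        context.emit(context.getInputKey(), context.getInputValue());
      }
    };

    class IdentityReducer : public HadoopPipes::Reducer {
    public:
      IdentityReducer(HadoopPipes::TaskContext& /*context*/) {}
      void reduce(HadoopPipes::ReduceContext& context) {
        while (context.nextValue()) {
          context.emit(context.getInputKey(), context.getInputValue());
        }
      }
    };

    // Routes each key by its first byte; any deterministic function of
    // the key that lands in [0, numOfReduces) will do.
    class FirstBytePartitioner : public HadoopPipes::Partitioner {
    public:
      FirstBytePartitioner(HadoopPipes::MapContext& /*context*/) {}
      int partition(const std::string& key, int numOfReduces) {
        return key.empty() ? 0 : (unsigned char)key[0] % numOfReduces;
      }
    };

    int main() {
      // The third template argument enables the partitioner hook; the
      // combiner, reader, and writer stay at their void defaults.
      return HadoopPipes::runTask(
          HadoopPipes::TemplateFactory<IdentityMapper, IdentityReducer,
                                       FirstBytePartitioner>());
    }
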
diff --git a/aarch64/include/hdfs.h b/aarch64/include/hdfs.h
new file mode 100644
index 0000000..1871665
--- /dev/null
+++ b/aarch64/include/hdfs.h
@@ -0,0 +1,692 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef LIBHDFS_HDFS_H
+#define LIBHDFS_HDFS_H
+
+#include <errno.h> /* for EINTERNAL, etc. */
+#include <fcntl.h> /* for O_RDONLY, O_WRONLY */
+#include <stdint.h> /* for uint64_t, etc. */
+#include <time.h> /* for time_t */
+
+#ifndef O_RDONLY
+#define O_RDONLY 1
+#endif
+
+#ifndef O_WRONLY
+#define O_WRONLY 2
+#endif
+
+#ifndef EINTERNAL
+#define EINTERNAL 255
+#endif
+
+
+/** All APIs set errno to meaningful values */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+ /**
+ * Some utility decls used in libhdfs.
+ */
+ struct hdfsBuilder;
+ typedef int32_t tSize; /// size of data for read/write io ops
+ typedef time_t tTime; /// time type in seconds
+ typedef int64_t tOffset;/// offset within the file
+ typedef uint16_t tPort; /// port
+ typedef enum tObjectKind {
+ kObjectKindFile = 'F',
+ kObjectKindDirectory = 'D',
+ } tObjectKind;
+
+
+ /**
+   * The C reflection of org.apache.hadoop.fs.FileSystem.
+ */
+ struct hdfs_internal;
+ typedef struct hdfs_internal* hdfsFS;
+
+ struct hdfsFile_internal;
+ typedef struct hdfsFile_internal* hdfsFile;
+
+ /**
+ * Determine if a file is open for read.
+ *
+ * @param file The HDFS file
+ * @return 1 if the file is open for read; 0 otherwise
+ */
+ int hdfsFileIsOpenForRead(hdfsFile file);
+
+ /**
+ * Determine if a file is open for write.
+ *
+ * @param file The HDFS file
+ * @return 1 if the file is open for write; 0 otherwise
+ */
+ int hdfsFileIsOpenForWrite(hdfsFile file);
+
+ struct hdfsReadStatistics {
+ uint64_t totalBytesRead;
+ uint64_t totalLocalBytesRead;
+ uint64_t totalShortCircuitBytesRead;
+ };
+
+ /**
+ * Get read statistics about a file. This is only applicable to files
+ * opened for reading.
+ *
+ * @param file The HDFS file
+ * @param stats (out parameter) on a successful return, the read
+ * statistics. Unchanged otherwise. You must free the
+ * returned statistics with hdfsFileFreeReadStatistics.
+ * @return 0 if the statistics were successfully returned,
+ * -1 otherwise. On a failure, please check errno against
+ * ENOTSUP. webhdfs, LocalFilesystem, and so forth may
+ * not support read statistics.
+ */
+ int hdfsFileGetReadStatistics(hdfsFile file,
+ struct hdfsReadStatistics **stats);
+
+ /**
+ * @param stats HDFS read statistics for a file.
+ *
+ * @return the number of remote bytes read.
+ */
+ int64_t hdfsReadStatisticsGetRemoteBytesRead(
+ const struct hdfsReadStatistics *stats);
+
+ /**
+ * Free some HDFS read statistics.
+ *
+ * @param stats The HDFS read statistics to free.
+ */
+ void hdfsFileFreeReadStatistics(struct hdfsReadStatistics *stats);
+
+ /**
+   * hdfsConnectAsUser - Connect to an hdfs file system as a specific
+   * user.
+ * @param nn The NameNode. See hdfsBuilderSetNameNode for details.
+ * @param port The port on which the server is listening.
+   * @param user the user name (this is a hadoop domain user). Passing NULL is equivalent to hdfsConnect(nn, port).
+ * @return Returns a handle to the filesystem or NULL on error.
+ * @deprecated Use hdfsBuilderConnect instead.
+ */
+ hdfsFS hdfsConnectAsUser(const char* nn, tPort port, const char *user);
+
+ /**
+   * hdfsConnect - Connect to an hdfs file system.
+   *
+ * @param nn The NameNode. See hdfsBuilderSetNameNode for details.
+ * @param port The port on which the server is listening.
+ * @return Returns a handle to the filesystem or NULL on error.
+ * @deprecated Use hdfsBuilderConnect instead.
+ */
+ hdfsFS hdfsConnect(const char* nn, tPort port);
+
+ /**
+ * hdfsConnect - Connect to an hdfs file system.
+ *
+ * Forces a new instance to be created
+ *
+ * @param nn The NameNode. See hdfsBuilderSetNameNode for details.
+ * @param port The port on which the server is listening.
+ * @param user The user name to use when connecting
+ * @return Returns a handle to the filesystem or NULL on error.
+ * @deprecated Use hdfsBuilderConnect instead.
+ */
+ hdfsFS hdfsConnectAsUserNewInstance(const char* nn, tPort port, const char *user );
+
+ /**
+ * hdfsConnect - Connect to an hdfs file system.
+ *
+ * Forces a new instance to be created
+ *
+ * @param nn The NameNode. See hdfsBuilderSetNameNode for details.
+ * @param port The port on which the server is listening.
+ * @return Returns a handle to the filesystem or NULL on error.
+ * @deprecated Use hdfsBuilderConnect instead.
+ */
+ hdfsFS hdfsConnectNewInstance(const char* nn, tPort port);
+
+ /**
+ * Connect to HDFS using the parameters defined by the builder.
+ *
+ * The HDFS builder will be freed, whether or not the connection was
+ * successful.
+ *
+ * Every successful call to hdfsBuilderConnect should be matched with a call
+ * to hdfsDisconnect, when the hdfsFS is no longer needed.
+ *
+ * @param bld The HDFS builder
+ * @return Returns a handle to the filesystem, or NULL on error.
+ */
+ hdfsFS hdfsBuilderConnect(struct hdfsBuilder *bld);
+
+ /**
+ * Create an HDFS builder.
+ *
+ * @return The HDFS builder, or NULL on error.
+ */
+ struct hdfsBuilder *hdfsNewBuilder(void);
+
+ /**
+ * Force the builder to always create a new instance of the FileSystem,
+ * rather than possibly finding one in the cache.
+ *
+ * @param bld The HDFS builder
+ */
+ void hdfsBuilderSetForceNewInstance(struct hdfsBuilder *bld);
+
+ /**
+ * Set the HDFS NameNode to connect to.
+ *
+ * @param bld The HDFS builder
+ * @param nn The NameNode to use.
+ *
+ * If the string given is 'default', the default NameNode
+ * configuration will be used (from the XML configuration files)
+ *
+ * If NULL is given, a LocalFileSystem will be created.
+ *
+ * If the string starts with a protocol type such as file:// or
+ * hdfs://, this protocol type will be used. If not, the
+ * hdfs:// protocol type will be used.
+ *
+ * You may specify a NameNode port in the usual way by
+ * passing a string of the format hdfs://<hostname>:<port>.
+ * Alternately, you may set the port with
+ * hdfsBuilderSetNameNodePort. However, you must not pass the
+ * port in two different ways.
+ */
+ void hdfsBuilderSetNameNode(struct hdfsBuilder *bld, const char *nn);
+
+ /**
+ * Set the port of the HDFS NameNode to connect to.
+ *
+ * @param bld The HDFS builder
+ * @param port The port.
+ */
+ void hdfsBuilderSetNameNodePort(struct hdfsBuilder *bld, tPort port);
+
+ /**
+ * Set the username to use when connecting to the HDFS cluster.
+ *
+ * @param bld The HDFS builder
+ * @param userName The user name. The string will be shallow-copied.
+ */
+ void hdfsBuilderSetUserName(struct hdfsBuilder *bld, const char *userName);
+
+ /**
+ * Set the path to the Kerberos ticket cache to use when connecting to
+ * the HDFS cluster.
+ *
+ * @param bld The HDFS builder
+ * @param kerbTicketCachePath The Kerberos ticket cache path. The string
+ * will be shallow-copied.
+ */
+ void hdfsBuilderSetKerbTicketCachePath(struct hdfsBuilder *bld,
+ const char *kerbTicketCachePath);
+
+ /**
+ * Free an HDFS builder.
+ *
+ * It is normally not necessary to call this function since
+ * hdfsBuilderConnect frees the builder.
+ *
+ * @param bld The HDFS builder
+ */
+ void hdfsFreeBuilder(struct hdfsBuilder *bld);
+
+ /**
+ * Set a configuration string for an HdfsBuilder.
+ *
+ * @param key The key to set.
+ * @param val The value, or NULL to set no value.
+ * This will be shallow-copied. You are responsible for
+ * ensuring that it remains valid until the builder is
+ * freed.
+ *
+ * @return 0 on success; nonzero error code otherwise.
+ */
+ int hdfsBuilderConfSetStr(struct hdfsBuilder *bld, const char *key,
+ const char *val);
+
+ /**
+ * Get a configuration string.
+ *
+ * @param key The key to find
+ * @param val (out param) The value. This will be set to NULL if the
+ * key isn't found. You must free this string with
+ * hdfsConfStrFree.
+ *
+ * @return 0 on success; nonzero error code otherwise.
+ * Failure to find the key is not an error.
+ */
+ int hdfsConfGetStr(const char *key, char **val);
+
+ /**
+ * Get a configuration integer.
+ *
+ * @param key The key to find
+ * @param val (out param) The value. This will NOT be changed if the
+ * key isn't found.
+ *
+ * @return 0 on success; nonzero error code otherwise.
+ * Failure to find the key is not an error.
+ */
+ int hdfsConfGetInt(const char *key, int32_t *val);
+
+ /**
+ * Free a configuration string found with hdfsConfGetStr.
+ *
+ * @param val A configuration string obtained from hdfsConfGetStr
+ */
+ void hdfsConfStrFree(char *val);
+
+ /**
+   * hdfsDisconnect - Disconnect from the hdfs file system.
+   *
+ * @param fs The configured filesystem handle.
+ * @return Returns 0 on success, -1 on error.
+ * Even if there is an error, the resources associated with the
+ * hdfsFS will be freed.
+ */
+ int hdfsDisconnect(hdfsFS fs);
+
+
+ /**
+   * hdfsOpenFile - Open an hdfs file in a given mode.
+ * @param fs The configured filesystem handle.
+ * @param path The full path to the file.
+   * @param flags - a bitwise OR of fcntl.h file flags. Supported flags are O_RDONLY, O_WRONLY (meaning create or overwrite, i.e. implies O_TRUNC),
+   * and O_WRONLY|O_APPEND. Other flags are generally ignored, except (O_RDWR || (O_EXCL & O_CREAT)), which returns NULL and sets errno to ENOTSUP.
+ * @param bufferSize Size of buffer for read/write - pass 0 if you want
+ * to use the default configured values.
+ * @param replication Block replication - pass 0 if you want to use
+ * the default configured values.
+ * @param blocksize Size of block - pass 0 if you want to use the
+ * default configured values.
+ * @return Returns the handle to the open file or NULL on error.
+ */
+ hdfsFile hdfsOpenFile(hdfsFS fs, const char* path, int flags,
+ int bufferSize, short replication, tSize blocksize);
+
+
+ /**
+ * hdfsCloseFile - Close an open file.
+ * @param fs The configured filesystem handle.
+ * @param file The file handle.
+ * @return Returns 0 on success, -1 on error.
+ * On error, errno will be set appropriately.
+ * If the hdfs file was valid, the memory associated with it will
+ * be freed at the end of this call, even if there was an I/O
+ * error.
+ */
+ int hdfsCloseFile(hdfsFS fs, hdfsFile file);
+
+
+ /**
+   * hdfsExists - Checks if a given path exists on the filesystem
+ * @param fs The configured filesystem handle.
+ * @param path The path to look for
+ * @return Returns 0 on success, -1 on error.
+ */
+ int hdfsExists(hdfsFS fs, const char *path);
+
+
+ /**
+ * hdfsSeek - Seek to given offset in file.
+ * This works only for files opened in read-only mode.
+ * @param fs The configured filesystem handle.
+ * @param file The file handle.
+ * @param desiredPos Offset into the file to seek into.
+ * @return Returns 0 on success, -1 on error.
+ */
+ int hdfsSeek(hdfsFS fs, hdfsFile file, tOffset desiredPos);
+
+
+ /**
+ * hdfsTell - Get the current offset in the file, in bytes.
+ * @param fs The configured filesystem handle.
+ * @param file The file handle.
+ * @return Current offset, -1 on error.
+ */
+ tOffset hdfsTell(hdfsFS fs, hdfsFile file);
+
+
+ /**
+ * hdfsRead - Read data from an open file.
+ * @param fs The configured filesystem handle.
+ * @param file The file handle.
+ * @param buffer The buffer to copy read bytes into.
+ * @param length The length of the buffer.
+ * @return On success, a positive number indicating how many bytes
+ * were read.
+ * On end-of-file, 0.
+ * On error, -1. Errno will be set to the error code.
+ * Just like the POSIX read function, hdfsRead will return -1
+ * and set errno to EINTR if data is temporarily unavailable,
+ * but we are not yet at the end of the file.
+ */
+ tSize hdfsRead(hdfsFS fs, hdfsFile file, void* buffer, tSize length);
+
+ /**
+ * hdfsPread - Positional read of data from an open file.
+ * @param fs The configured filesystem handle.
+ * @param file The file handle.
+ * @param position Position from which to read
+ * @param buffer The buffer to copy read bytes into.
+ * @param length The length of the buffer.
+ * @return See hdfsRead
+ */
+ tSize hdfsPread(hdfsFS fs, hdfsFile file, tOffset position,
+ void* buffer, tSize length);
+
+
+ /**
+ * hdfsWrite - Write data into an open file.
+ * @param fs The configured filesystem handle.
+ * @param file The file handle.
+ * @param buffer The data.
+   * @param length The number of bytes to write.
+ * @return Returns the number of bytes written, -1 on error.
+ */
+ tSize hdfsWrite(hdfsFS fs, hdfsFile file, const void* buffer,
+ tSize length);
+
+
+ /**
+   * hdfsFlush - Flush the data.
+ * @param fs The configured filesystem handle.
+ * @param file The file handle.
+ * @return Returns 0 on success, -1 on error.
+ */
+ int hdfsFlush(hdfsFS fs, hdfsFile file);
+
+
+ /**
+   * hdfsHFlush - Flush out the data in the client's user buffer. After the
+ * return of this call, new readers will see the data.
+ * @param fs configured filesystem handle
+ * @param file file handle
+ * @return 0 on success, -1 on error and sets errno
+ */
+ int hdfsHFlush(hdfsFS fs, hdfsFile file);
+
+
+ /**
+   * hdfsHSync - Similar to posix fsync: flush out the data in the
+   * client's user buffer all the way to the disk device (though the
+   * disk may still have it in its cache).
+ * @param fs configured filesystem handle
+ * @param file file handle
+ * @return 0 on success, -1 on error and sets errno
+ */
+ int hdfsHSync(hdfsFS fs, hdfsFile file);
+
+
+ /**
+ * hdfsAvailable - Number of bytes that can be read from this
+ * input stream without blocking.
+ * @param fs The configured filesystem handle.
+ * @param file The file handle.
+ * @return Returns available bytes; -1 on error.
+ */
+ int hdfsAvailable(hdfsFS fs, hdfsFile file);
+
+
+ /**
+ * hdfsCopy - Copy file from one filesystem to another.
+ * @param srcFS The handle to source filesystem.
+ * @param src The path of source file.
+ * @param dstFS The handle to destination filesystem.
+ * @param dst The path of destination file.
+ * @return Returns 0 on success, -1 on error.
+ */
+ int hdfsCopy(hdfsFS srcFS, const char* src, hdfsFS dstFS, const char* dst);
+
+
+ /**
+ * hdfsMove - Move file from one filesystem to another.
+ * @param srcFS The handle to source filesystem.
+ * @param src The path of source file.
+ * @param dstFS The handle to destination filesystem.
+ * @param dst The path of destination file.
+ * @return Returns 0 on success, -1 on error.
+ */
+ int hdfsMove(hdfsFS srcFS, const char* src, hdfsFS dstFS, const char* dst);
+
+
+ /**
+ * hdfsDelete - Delete file.
+ * @param fs The configured filesystem handle.
+ * @param path The path of the file.
+ * @param recursive if path is a directory and set to
+   * non-zero, the directory and its contents are deleted; otherwise a
+   * non-empty directory causes an error. For a file, recursive is irrelevant.
+ * @return Returns 0 on success, -1 on error.
+ */
+ int hdfsDelete(hdfsFS fs, const char* path, int recursive);
+
+ /**
+ * hdfsRename - Rename file.
+ * @param fs The configured filesystem handle.
+ * @param oldPath The path of the source file.
+ * @param newPath The path of the destination file.
+ * @return Returns 0 on success, -1 on error.
+ */
+ int hdfsRename(hdfsFS fs, const char* oldPath, const char* newPath);
+
+
+ /**
+ * hdfsGetWorkingDirectory - Get the current working directory for
+ * the given filesystem.
+ * @param fs The configured filesystem handle.
+ * @param buffer The user-buffer to copy path of cwd into.
+ * @param bufferSize The length of user-buffer.
+ * @return Returns buffer, NULL on error.
+ */
+ char* hdfsGetWorkingDirectory(hdfsFS fs, char *buffer, size_t bufferSize);
+
+
+ /**
+ * hdfsSetWorkingDirectory - Set the working directory. All relative
+ * paths will be resolved relative to it.
+ * @param fs The configured filesystem handle.
+ * @param path The path of the new 'cwd'.
+ * @return Returns 0 on success, -1 on error.
+ */
+ int hdfsSetWorkingDirectory(hdfsFS fs, const char* path);
+
+
+ /**
+   * hdfsCreateDirectory - Make the given path and all non-existent
+ * parents into directories.
+ * @param fs The configured filesystem handle.
+ * @param path The path of the directory.
+ * @return Returns 0 on success, -1 on error.
+ */
+ int hdfsCreateDirectory(hdfsFS fs, const char* path);
+
+
+ /**
+ * hdfsSetReplication - Set the replication of the specified
+ * file to the supplied value
+ * @param fs The configured filesystem handle.
+ * @param path The path of the file.
+ * @return Returns 0 on success, -1 on error.
+ */
+ int hdfsSetReplication(hdfsFS fs, const char* path, int16_t replication);
+
+
+ /**
+ * hdfsFileInfo - Information about a file/directory.
+ */
+ typedef struct {
+ tObjectKind mKind; /* file or directory */
+ char *mName; /* the name of the file */
+ tTime mLastMod; /* the last modification time for the file in seconds */
+ tOffset mSize; /* the size of the file in bytes */
+ short mReplication; /* the count of replicas */
+ tOffset mBlockSize; /* the block size for the file */
+ char *mOwner; /* the owner of the file */
+ char *mGroup; /* the group associated with the file */
+ short mPermissions; /* the permissions associated with the file */
+ tTime mLastAccess; /* the last access time for the file in seconds */
+ } hdfsFileInfo;
+
+
+ /**
+ * hdfsListDirectory - Get list of files/directories for a given
+ * directory-path. hdfsFreeFileInfo should be called to deallocate memory.
+ * @param fs The configured filesystem handle.
+ * @param path The path of the directory.
+ * @param numEntries Set to the number of files/directories in path.
+ * @return Returns a dynamically-allocated array of hdfsFileInfo
+ * objects; NULL on error.
+ */
+ hdfsFileInfo *hdfsListDirectory(hdfsFS fs, const char* path,
+ int *numEntries);
+
+
+ /**
+ * hdfsGetPathInfo - Get information about a path as a (dynamically
+ * allocated) single hdfsFileInfo struct. hdfsFreeFileInfo should be
+ * called when the pointer is no longer needed.
+ * @param fs The configured filesystem handle.
+ * @param path The path of the file.
+ * @return Returns a dynamically-allocated hdfsFileInfo object;
+ * NULL on error.
+ */
+ hdfsFileInfo *hdfsGetPathInfo(hdfsFS fs, const char* path);
+
+
+ /**
+ * hdfsFreeFileInfo - Free up the hdfsFileInfo array (including fields)
+ * @param hdfsFileInfo The array of dynamically-allocated hdfsFileInfo
+ * objects.
+ * @param numEntries The size of the array.
+ */
+ void hdfsFreeFileInfo(hdfsFileInfo *hdfsFileInfo, int numEntries);
+
+
+ /**
+ * hdfsGetHosts - Get hostnames where a particular block (determined by
+ * pos & blocksize) of a file is stored. The last element in the array
+ * is NULL. Due to replication, a single block could be present on
+ * multiple hosts.
+ * @param fs The configured filesystem handle.
+ * @param path The path of the file.
+ * @param start The start of the block.
+ * @param length The length of the block.
+ * @return Returns a dynamically-allocated 2-d array of blocks-hosts;
+ * NULL on error.
+ */
+ char*** hdfsGetHosts(hdfsFS fs, const char* path,
+ tOffset start, tOffset length);
+
+
+ /**
+ * hdfsFreeHosts - Free up the structure returned by hdfsGetHosts
+   * @param blockHosts The dynamically-allocated 2-d array of
+   * blocks-hosts returned by hdfsGetHosts.
+   *
+ */
+ void hdfsFreeHosts(char ***blockHosts);
+
+
+ /**
+ * hdfsGetDefaultBlockSize - Get the default blocksize.
+ *
+ * @param fs The configured filesystem handle.
+ * @deprecated Use hdfsGetDefaultBlockSizeAtPath instead.
+ *
+ * @return Returns the default blocksize, or -1 on error.
+ */
+ tOffset hdfsGetDefaultBlockSize(hdfsFS fs);
+
+
+ /**
+ * hdfsGetDefaultBlockSizeAtPath - Get the default blocksize at the
+ * filesystem indicated by a given path.
+ *
+ * @param fs The configured filesystem handle.
+ * @param path The given path will be used to locate the actual
+ * filesystem. The full path does not have to exist.
+ *
+ * @return Returns the default blocksize, or -1 on error.
+ */
+ tOffset hdfsGetDefaultBlockSizeAtPath(hdfsFS fs, const char *path);
+
+
+ /**
+ * hdfsGetCapacity - Return the raw capacity of the filesystem.
+ * @param fs The configured filesystem handle.
+ * @return Returns the raw-capacity; -1 on error.
+ */
+ tOffset hdfsGetCapacity(hdfsFS fs);
+
+
+ /**
+ * hdfsGetUsed - Return the total raw size of all files in the filesystem.
+ * @param fs The configured filesystem handle.
+ * @return Returns the total-size; -1 on error.
+ */
+ tOffset hdfsGetUsed(hdfsFS fs);
+
+ /**
+ * Change the user and/or group of a file or directory.
+ *
+ * @param fs The configured filesystem handle.
+ * @param path the path to the file or directory
+ * @param owner User string. Set to NULL for 'no change'
+ * @param group Group string. Set to NULL for 'no change'
+ * @return 0 on success else -1
+ */
+ int hdfsChown(hdfsFS fs, const char* path, const char *owner,
+ const char *group);
+
+ /**
+ * hdfsChmod
+ * @param fs The configured filesystem handle.
+ * @param path the path to the file or directory
+ * @param mode the bitmask to set it to
+ * @return 0 on success else -1
+ */
+ int hdfsChmod(hdfsFS fs, const char* path, short mode);
+
+ /**
+ * hdfsUtime
+ * @param fs The configured filesystem handle.
+ * @param path the path to the file or directory
+ * @param mtime new modification time or -1 for no change
+ * @param atime new access time or -1 for no change
+ * @return 0 on success else -1
+ */
+ int hdfsUtime(hdfsFS fs, const char* path, tTime mtime, tTime atime);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /*LIBHDFS_HDFS_H*/
+
+/**
+ * vim: ts=4: sw=4: et
+ */
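
A minimal libhdfs round trip against the declarations above, as a sketch, not part of the import (the path, message, and buffer size are arbitrary choices; error handling is abbreviated). Linking typically needs libhdfs.so from aarch64/lib/native plus the JVM's libjvm, with the Hadoop jars on the CLASSPATH; the source compiles as C or C++:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    #include "hdfs.h"

    int main() {
      /* "default" makes the builder pick the NameNode from the XML config. */
      struct hdfsBuilder *bld = hdfsNewBuilder();
      if (!bld) { fprintf(stderr, "hdfsNewBuilder failed\n"); return 1; }
      hdfsBuilderSetNameNode(bld, "default");
      hdfsFS fs = hdfsBuilderConnect(bld);  /* frees bld, success or not */
      if (!fs) { fprintf(stderr, "connect failed, errno=%d\n", errno); return 1; }

      const char *path = "/tmp/libhdfs-smoke-test";
      const char *msg = "hello, hdfs\n";

      /* O_WRONLY creates or overwrites; the 0s request configured defaults. */
      hdfsFile out = hdfsOpenFile(fs, path, O_WRONLY, 0, 0, 0);
      if (!out) { fprintf(stderr, "open for write failed\n"); return 1; }
      hdfsWrite(fs, out, msg, (tSize)strlen(msg));
      hdfsFlush(fs, out);
      hdfsCloseFile(fs, out);

      /* Read it back. */
      char buf[64];
      hdfsFile in = hdfsOpenFile(fs, path, O_RDONLY, 0, 0, 0);
      if (!in) { fprintf(stderr, "open for read failed\n"); return 1; }
      tSize nread = hdfsRead(fs, in, buf, (tSize)(sizeof(buf) - 1));
      if (nread >= 0) { buf[nread] = '\0'; printf("%s", buf); }
      hdfsCloseFile(fs, in);

      hdfsDisconnect(fs);
      return 0;
    }

There is no matching hdfsFreeBuilder call here because, as documented above, hdfsBuilderConnect frees the builder whether or not the connection succeeds.
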
diff --git a/aarch64/lib/native/libhadoop.a b/aarch64/lib/native/libhadoop.a
new file mode 100644
index 0000000..28ffd82
--- /dev/null
+++ b/aarch64/lib/native/libhadoop.a
Binary files differ
diff --git a/aarch64/lib/native/libhadoop.so b/aarch64/lib/native/libhadoop.so
new file mode 120000
index 0000000..e9aafc2
--- /dev/null
+++ b/aarch64/lib/native/libhadoop.so
@@ -0,0 +1 @@
+libhadoop.so.1.0.0
\ No newline at end of file
diff --git a/aarch64/lib/native/libhadoop.so.1.0.0 b/aarch64/lib/native/libhadoop.so.1.0.0
new file mode 100755
index 0000000..c516c9e
--- /dev/null
+++ b/aarch64/lib/native/libhadoop.so.1.0.0
Binary files differ
diff --git a/aarch64/lib/native/libhadooppipes.a b/aarch64/lib/native/libhadooppipes.a
new file mode 100644
index 0000000..6f8eab2
--- /dev/null
+++ b/aarch64/lib/native/libhadooppipes.a
Binary files differ
diff --git a/aarch64/lib/native/libhadooputils.a b/aarch64/lib/native/libhadooputils.a
new file mode 100644
index 0000000..e2bfa57
--- /dev/null
+++ b/aarch64/lib/native/libhadooputils.a
Binary files differ
diff --git a/aarch64/lib/native/libhdfs.a b/aarch64/lib/native/libhdfs.a
new file mode 100644
index 0000000..845dc91
--- /dev/null
+++ b/aarch64/lib/native/libhdfs.a
Binary files differ
diff --git a/aarch64/lib/native/libhdfs.so b/aarch64/lib/native/libhdfs.so
new file mode 120000
index 0000000..2f587b5
--- /dev/null
+++ b/aarch64/lib/native/libhdfs.so
@@ -0,0 +1 @@
+libhdfs.so.0.0.0
\ No newline at end of file
diff --git a/aarch64/lib/native/libhdfs.so.0.0.0 b/aarch64/lib/native/libhdfs.so.0.0.0
new file mode 100755
index 0000000..a134032
--- /dev/null
+++ b/aarch64/lib/native/libhdfs.so.0.0.0
Binary files differ
diff --git a/aarch64/libexec/hadoop-config.cmd b/aarch64/libexec/hadoop-config.cmd
new file mode 100755
index 0000000..3e6e457
--- /dev/null
+++ b/aarch64/libexec/hadoop-config.cmd
@@ -0,0 +1,292 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+
+@rem included in all the hadoop scripts with source command
+@rem should not be executable directly
+@rem also should not be passed any arguments, since we need original %*
+
+if not defined HADOOP_COMMON_DIR (
+ set HADOOP_COMMON_DIR=share\hadoop\common
+)
+if not defined HADOOP_COMMON_LIB_JARS_DIR (
+ set HADOOP_COMMON_LIB_JARS_DIR=share\hadoop\common\lib
+)
+if not defined HADOOP_COMMON_LIB_NATIVE_DIR (
+ set HADOOP_COMMON_LIB_NATIVE_DIR=lib\native
+)
+if not defined HDFS_DIR (
+ set HDFS_DIR=share\hadoop\hdfs
+)
+if not defined HDFS_LIB_JARS_DIR (
+ set HDFS_LIB_JARS_DIR=share\hadoop\hdfs\lib
+)
+if not defined YARN_DIR (
+ set YARN_DIR=share\hadoop\yarn
+)
+if not defined YARN_LIB_JARS_DIR (
+ set YARN_LIB_JARS_DIR=share\hadoop\yarn\lib
+)
+if not defined MAPRED_DIR (
+ set MAPRED_DIR=share\hadoop\mapreduce
+)
+if not defined MAPRED_LIB_JARS_DIR (
+ set MAPRED_LIB_JARS_DIR=share\hadoop\mapreduce\lib
+)
+
+@rem the root of the Hadoop installation
+set HADOOP_HOME=%~dp0
+for %%i in (%HADOOP_HOME%.) do (
+ set HADOOP_HOME=%%~dpi
+)
+if "%HADOOP_HOME:~-1%" == "\" (
+ set HADOOP_HOME=%HADOOP_HOME:~0,-1%
+)
+
+if not exist %HADOOP_HOME%\share\hadoop\common\hadoop-common-*.jar (
+ @echo +================================================================+
+ @echo ^| Error: HADOOP_HOME is not set correctly ^|
+ @echo +----------------------------------------------------------------+
+ @echo ^| Please set your HADOOP_HOME variable to the absolute path of ^|
+ @echo ^| the directory that contains the hadoop distribution ^|
+ @echo +================================================================+
+ exit /b 1
+)
+
+set HADOOP_CONF_DIR=%HADOOP_HOME%\etc\hadoop
+
+@rem
+@rem Allow alternate conf dir location.
+@rem
+
+if "%1" == "--config" (
+ set HADOOP_CONF_DIR=%2
+ shift
+ shift
+)
+
+@rem
+@rem check whether the slaves or the masters file was specified
+@rem on the command line
+@rem
+
+if "%1" == "--hosts" (
+ set HADOOP_SLAVES=%HADOOP_CONF_DIR%\%2
+ shift
+ shift
+)
+
+if exist %HADOOP_CONF_DIR%\hadoop-env.cmd (
+ call %HADOOP_CONF_DIR%\hadoop-env.cmd
+)
+
+@rem
+@rem setup java environment variables
+@rem
+
+if not defined JAVA_HOME (
+ echo Error: JAVA_HOME is not set.
+ goto :eof
+)
+
+if not exist %JAVA_HOME%\bin\java.exe (
+ echo Error: JAVA_HOME is incorrectly set.
+ echo Please update %HADOOP_HOME%\conf\hadoop-env.cmd
+ goto :eof
+)
+
+set JAVA=%JAVA_HOME%\bin\java
+@rem some Java parameters
+set JAVA_HEAP_MAX=-Xmx1000m
+
+@rem
+@rem check envvars which might override default args
+@rem
+
+if defined HADOOP_HEAPSIZE (
+ set JAVA_HEAP_MAX=-Xmx%HADOOP_HEAPSIZE%m
+)
+
+@rem
+@rem CLASSPATH initially contains %HADOOP_CONF_DIR%
+@rem
+
+set CLASSPATH=%HADOOP_CONF_DIR%
+
+if not defined HADOOP_COMMON_HOME (
+ if exist %HADOOP_HOME%\share\hadoop\common (
+ set HADOOP_COMMON_HOME=%HADOOP_HOME%
+ )
+)
+
+@rem
+@rem for releases, add core hadoop jar & webapps to CLASSPATH
+@rem
+
+if exist %HADOOP_COMMON_HOME%\%HADOOP_COMMON_DIR%\webapps (
+ set CLASSPATH=!CLASSPATH!;%HADOOP_COMMON_HOME%\%HADOOP_COMMON_DIR%
+)
+
+if exist %HADOOP_COMMON_HOME%\%HADOOP_COMMON_LIB_JARS_DIR% (
+ set CLASSPATH=!CLASSPATH!;%HADOOP_COMMON_HOME%\%HADOOP_COMMON_LIB_JARS_DIR%\*
+)
+
+set CLASSPATH=!CLASSPATH!;%HADOOP_COMMON_HOME%\%HADOOP_COMMON_DIR%\*
+
+@rem
+@rem add user-specified CLASSPATH last
+@rem
+
+if defined HADOOP_CLASSPATH (
+ if defined HADOOP_USER_CLASSPATH_FIRST (
+ set CLASSPATH=%HADOOP_CLASSPATH%;%CLASSPATH%;
+ ) else (
+ set CLASSPATH=%CLASSPATH%;%HADOOP_CLASSPATH%;
+ )
+)
+
+@rem
+@rem default log directory and file
+@rem
+
+if not defined HADOOP_LOG_DIR (
+ set HADOOP_LOG_DIR=%HADOOP_HOME%\logs
+)
+
+if not defined HADOOP_LOGFILE (
+ set HADOOP_LOGFILE=hadoop.log
+)
+
+if not defined HADOOP_ROOT_LOGGER (
+ set HADOOP_ROOT_LOGGER=INFO,console
+)
+
+@rem
+@rem default policy file for service-level authorization
+@rem
+
+if not defined HADOOP_POLICYFILE (
+ set HADOOP_POLICYFILE=hadoop-policy.xml
+)
+
+@rem
+@rem Determine the JAVA_PLATFORM
+@rem
+
+for /f "delims=" %%A in ('%JAVA% -Xmx32m %HADOOP_JAVA_PLATFORM_OPTS% -classpath "%CLASSPATH%" org.apache.hadoop.util.PlatformName') do set JAVA_PLATFORM=%%A
+@rem replace space with underscore
+set JAVA_PLATFORM=%JAVA_PLATFORM: =_%
+
+@rem
+@rem setup 'java.library.path' for native hadoop code if necessary
+@rem
+
+@rem Check if we're running hadoop directly from the build
+set JAVA_LIBRARY_PATH=
+if exist %HADOOP_COMMON_HOME%\target\bin (
+ set JAVA_LIBRARY_PATH=%HADOOP_COMMON_HOME%\target\bin
+)
+
+@rem For the distro case, check the bin folder
+if exist %HADOOP_COMMON_HOME%\bin (
+ set JAVA_LIBRARY_PATH=%JAVA_LIBRARY_PATH%;%HADOOP_COMMON_HOME%\bin
+)
+
+@rem
+@rem setup a default TOOL_PATH
+@rem
+set TOOL_PATH=%HADOOP_HOME%\share\hadoop\tools\lib\*
+
+set HADOOP_OPTS=%HADOOP_OPTS% -Dhadoop.log.dir=%HADOOP_LOG_DIR%
+set HADOOP_OPTS=%HADOOP_OPTS% -Dhadoop.log.file=%HADOOP_LOGFILE%
+set HADOOP_OPTS=%HADOOP_OPTS% -Dhadoop.home.dir=%HADOOP_HOME%
+set HADOOP_OPTS=%HADOOP_OPTS% -Dhadoop.id.str=%HADOOP_IDENT_STRING%
+set HADOOP_OPTS=%HADOOP_OPTS% -Dhadoop.root.logger=%HADOOP_ROOT_LOGGER%
+
+if defined JAVA_LIBRARY_PATH (
+ set HADOOP_OPTS=%HADOOP_OPTS% -Djava.library.path=%JAVA_LIBRARY_PATH%
+)
+set HADOOP_OPTS=%HADOOP_OPTS% -Dhadoop.policy.file=%HADOOP_POLICYFILE%
+
+@rem
+@rem Disable ipv6 as it can cause issues
+@rem
+
+set HADOOP_OPTS=%HADOOP_OPTS% -Djava.net.preferIPv4Stack=true
+
+@rem
+@rem put hdfs in classpath if present
+@rem
+
+if not defined HADOOP_HDFS_HOME (
+ if exist %HADOOP_HOME%\%HDFS_DIR% (
+ set HADOOP_HDFS_HOME=%HADOOP_HOME%
+ )
+)
+
+if exist %HADOOP_HDFS_HOME%\%HDFS_DIR%\webapps (
+ set CLASSPATH=!CLASSPATH!;%HADOOP_HDFS_HOME%\%HDFS_DIR%
+)
+
+if exist %HADOOP_HDFS_HOME%\%HDFS_LIB_JARS_DIR% (
+ set CLASSPATH=!CLASSPATH!;%HADOOP_HDFS_HOME%\%HDFS_LIB_JARS_DIR%\*
+)
+
+set CLASSPATH=!CLASSPATH!;%HADOOP_HDFS_HOME%\%HDFS_DIR%\*
+
+@rem
+@rem put yarn in classpath if present
+@rem
+
+if not defined HADOOP_YARN_HOME (
+ if exist %HADOOP_HOME%\%YARN_DIR% (
+ set HADOOP_YARN_HOME=%HADOOP_HOME%
+ )
+)
+
+if exist %HADOOP_YARN_HOME%\%YARN_DIR%\webapps (
+ set CLASSPATH=!CLASSPATH!;%HADOOP_YARN_HOME%\%YARN_DIR%
+)
+
+if exist %HADOOP_YARN_HOME%\%YARN_LIB_JARS_DIR% (
+ set CLASSPATH=!CLASSPATH!;%HADOOP_YARN_HOME%\%YARN_LIB_JARS_DIR%\*
+)
+
+set CLASSPATH=!CLASSPATH!;%HADOOP_YARN_HOME%\%YARN_DIR%\*
+
+@rem
+@rem put mapred in classpath if present AND different from YARN
+@rem
+
+if not defined HADOOP_MAPRED_HOME (
+ if exist %HADOOP_HOME%\%MAPRED_DIR% (
+ set HADOOP_MAPRED_HOME=%HADOOP_HOME%
+ )
+)
+
+if not "%HADOOP_MAPRED_HOME%\%MAPRED_DIR%" == "%HADOOP_YARN_HOME%\%YARN_DIR%" (
+
+ if exist %HADOOP_MAPRED_HOME%\%MAPRED_DIR%\webapps (
+ set CLASSPATH=!CLASSPATH!;%HADOOP_MAPRED_HOME%\%MAPRED_DIR%
+ )
+
+ if exist %HADOOP_MAPRED_HOME%\%MAPRED_LIB_JARS_DIR% (
+ set CLASSPATH=!CLASSPATH!;%HADOOP_MAPRED_HOME%\%MAPRED_LIB_JARS_DIR%\*
+ )
+
+ set CLASSPATH=!CLASSPATH!;%HADOOP_MAPRED_HOME%\%MAPRED_DIR%\*
+)
+
+:eof
diff --git a/aarch64/libexec/hadoop-config.sh b/aarch64/libexec/hadoop-config.sh
new file mode 100755
index 0000000..e5c40fc
--- /dev/null
+++ b/aarch64/libexec/hadoop-config.sh
@@ -0,0 +1,295 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# included in all the hadoop scripts with source command
+# should not be executable directly
+# also should not be passed any arguments, since we need original $*
+
+# Resolve links ($0 may be a softlink) and convert a relative path
+# to an absolute path. NB: The -P option requires bash built-ins
+# or POSIX:2001 compliant cd and pwd.
+
+# HADOOP_CLASSPATH Extra Java CLASSPATH entries.
+#
+# HADOOP_USER_CLASSPATH_FIRST When defined, the HADOOP_CLASSPATH is
+# added at the beginning of the global
+# classpath. Can be defined, for example,
+# by doing
+# export HADOOP_USER_CLASSPATH_FIRST=true
+#
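+# A hypothetical illustration (the jar path is not a shipped default):
+#   export HADOOP_CLASSPATH=/opt/extra-jars/*
+#   export HADOOP_USER_CLASSPATH_FIRST=true
+# would place /opt/extra-jars/* ahead of the bundled jars set up below.
+#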
+
+this="${BASH_SOURCE-$0}"
+common_bin=$(cd -P -- "$(dirname -- "$this")" && pwd -P)
+script="$(basename -- "$this")"
+this="$common_bin/$script"
+
+[ -f "$common_bin/hadoop-layout.sh" ] && . "$common_bin/hadoop-layout.sh"
+
+HADOOP_COMMON_DIR=${HADOOP_COMMON_DIR:-"share/hadoop/common"}
+HADOOP_COMMON_LIB_JARS_DIR=${HADOOP_COMMON_LIB_JARS_DIR:-"share/hadoop/common/lib"}
+HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_COMMON_LIB_NATIVE_DIR:-"lib/native"}
+HDFS_DIR=${HDFS_DIR:-"share/hadoop/hdfs"}
+HDFS_LIB_JARS_DIR=${HDFS_LIB_JARS_DIR:-"share/hadoop/hdfs/lib"}
+YARN_DIR=${YARN_DIR:-"share/hadoop/yarn"}
+YARN_LIB_JARS_DIR=${YARN_LIB_JARS_DIR:-"share/hadoop/yarn/lib"}
+MAPRED_DIR=${MAPRED_DIR:-"share/hadoop/mapreduce"}
+MAPRED_LIB_JARS_DIR=${MAPRED_LIB_JARS_DIR:-"share/hadoop/mapreduce/lib"}
+
+# the root of the Hadoop installation
+# See HADOOP-6255 for directory structure layout
+HADOOP_DEFAULT_PREFIX=$(cd -P -- "$common_bin"/.. && pwd -P)
+HADOOP_PREFIX=${HADOOP_PREFIX:-$HADOOP_DEFAULT_PREFIX}
+export HADOOP_PREFIX
+
+# check to see if the conf dir is given as an optional argument
+if [ $# -gt 1 ]
+then
+ if [ "--config" = "$1" ]
+ then
+ shift
+ confdir=$1
+ if [ ! -d "$confdir" ]; then
+ echo "Error: Cannot find configuration directory: $confdir"
+ exit 1
+ fi
+ shift
+ HADOOP_CONF_DIR=$confdir
+ fi
+fi
+
+# Allow alternate conf dir location.
+if [ -e "${HADOOP_PREFIX}/conf/hadoop-env.sh" ]; then
+ DEFAULT_CONF_DIR="conf"
+else
+ DEFAULT_CONF_DIR="etc/hadoop"
+fi
+
+export HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-$HADOOP_PREFIX/$DEFAULT_CONF_DIR}"
+
+# User can specify hostnames or a file where the hostnames are (not both)
+if [[ ( "$HADOOP_SLAVES" != '' ) && ( "$HADOOP_SLAVE_NAMES" != '' ) ]] ; then
+ echo \
+ "Error: Please specify one variable HADOOP_SLAVES or " \
+ "HADOOP_SLAVE_NAME and not both."
+ exit 1
+fi
+
+# Process command line options that specify hosts or file with host
+# list
+if [ $# -gt 1 ]
+then
+ if [ "--hosts" = "$1" ]
+ then
+ shift
+    export HADOOP_SLAVES="${HADOOP_CONF_DIR}/$1"
+ shift
+ elif [ "--hostnames" = "$1" ]
+ then
+ shift
+ export HADOOP_SLAVE_NAMES=$1
+ shift
+ fi
+fi
+
+# User can specify hostnames or a file where the hostnames are (not both)
+# (same check as above but now we know it's command line options that cause
+# the problem)
+if [[ ( "$HADOOP_SLAVES" != '' ) && ( "$HADOOP_SLAVE_NAMES" != '' ) ]] ; then
+ echo \
+ "Error: Please specify one of --hosts or --hostnames options and not both."
+ exit 1
+fi
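+
+# For example (hypothetical file/host names), callers pass exactly one of:
+#   sbin/slaves.sh --hosts myslaves uptime           # file under ${HADOOP_CONF_DIR}
+#   sbin/slaves.sh --hostnames "host1 host2" uptime  # literal space-separated list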
+
+if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
+ . "${HADOOP_CONF_DIR}/hadoop-env.sh"
+fi
+
+# check if net.ipv6.bindv6only is set to 1
+bindv6only=$(/sbin/sysctl -n net.ipv6.bindv6only 2> /dev/null)
+if [ -n "$bindv6only" ] && [ "$bindv6only" -eq "1" ] && [ "$HADOOP_ALLOW_IPV6" != "yes" ]
+then
+ echo "Error: \"net.ipv6.bindv6only\" is set to 1 - Java networking could be broken"
+ echo "For more info: http://wiki.apache.org/hadoop/HadoopIPv6"
+ exit 1
+fi
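+
+# (Possible remedies, as a sketch: either clear the flag as root with
+#   sysctl -w net.ipv6.bindv6only=0
+# or export HADOOP_ALLOW_IPV6=yes to bypass this check.)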
+
+# Newer versions of glibc use an arena memory allocator that causes virtual
+# memory usage to explode. This interacts badly with the many threads that
+# we use in Hadoop. Tune the variable down to prevent vmem explosion.
+export MALLOC_ARENA_MAX=${MALLOC_ARENA_MAX:-4}
+
+# Attempt to set JAVA_HOME if it is not set
+if [[ -z $JAVA_HOME ]]; then
+ # On OSX use java_home (or /Library for older versions)
+ if [ "Darwin" == "$(uname -s)" ]; then
+ if [ -x /usr/libexec/java_home ]; then
+      export JAVA_HOME=$(/usr/libexec/java_home)
+    else
+      export JAVA_HOME=/Library/Java/Home
+ fi
+ fi
+
+ # Bail if we did not detect it
+ if [[ -z $JAVA_HOME ]]; then
+ echo "Error: JAVA_HOME is not set and could not be found." 1>&2
+ exit 1
+ fi
+fi
+
+JAVA=$JAVA_HOME/bin/java
+# some Java parameters
+JAVA_HEAP_MAX=-Xmx1000m
+
+# check envvars which might override default args
+if [ "$HADOOP_HEAPSIZE" != "" ]; then
+ #echo "run with heapsize $HADOOP_HEAPSIZE"
+ JAVA_HEAP_MAX="-Xmx""$HADOOP_HEAPSIZE""m"
+ #echo $JAVA_HEAP_MAX
+fi
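+# (For example, export HADOOP_HEAPSIZE=2000 yields -Xmx2000m.)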
+
+# CLASSPATH initially contains $HADOOP_CONF_DIR
+CLASSPATH="${HADOOP_CONF_DIR}"
+
+# so that filenames w/ spaces are handled correctly in loops below
+IFS=
+
+if [ "$HADOOP_COMMON_HOME" = "" ]; then
+ if [ -d "${HADOOP_PREFIX}/$HADOOP_COMMON_DIR" ]; then
+ export HADOOP_COMMON_HOME=$HADOOP_PREFIX
+ fi
+fi
+
+# for releases, add core hadoop jar & webapps to CLASSPATH
+if [ -d "$HADOOP_COMMON_HOME/$HADOOP_COMMON_DIR/webapps" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_COMMON_HOME/$HADOOP_COMMON_DIR
+fi
+
+if [ -d "$HADOOP_COMMON_HOME/$HADOOP_COMMON_LIB_JARS_DIR" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_COMMON_HOME/$HADOOP_COMMON_LIB_JARS_DIR'/*'
+fi
+
+CLASSPATH=${CLASSPATH}:$HADOOP_COMMON_HOME/$HADOOP_COMMON_DIR'/*'
+
+# default log directory & file
+if [ "$HADOOP_LOG_DIR" = "" ]; then
+ HADOOP_LOG_DIR="$HADOOP_PREFIX/logs"
+fi
+if [ "$HADOOP_LOGFILE" = "" ]; then
+ HADOOP_LOGFILE='hadoop.log'
+fi
+
+# default policy file for service-level authorization
+if [ "$HADOOP_POLICYFILE" = "" ]; then
+ HADOOP_POLICYFILE="hadoop-policy.xml"
+fi
+
+# restore ordinary behaviour
+unset IFS
+
+# set up 'java.library.path' for native-hadoop code if necessary
+
+if [ -d "${HADOOP_PREFIX}/build/native" -o -d "${HADOOP_PREFIX}/$HADOOP_COMMON_LIB_NATIVE_DIR" ]; then
+
+ if [ -d "${HADOOP_PREFIX}/$HADOOP_COMMON_LIB_NATIVE_DIR" ]; then
+ if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
+ JAVA_LIBRARY_PATH=${JAVA_LIBRARY_PATH}:${HADOOP_PREFIX}/$HADOOP_COMMON_LIB_NATIVE_DIR
+ else
+ JAVA_LIBRARY_PATH=${HADOOP_PREFIX}/$HADOOP_COMMON_LIB_NATIVE_DIR
+ fi
+ fi
+fi
+
+# set up a default TOOL_PATH
+TOOL_PATH="${TOOL_PATH:-$HADOOP_PREFIX/share/hadoop/tools/lib/*}"
+
+HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.log.dir=$HADOOP_LOG_DIR"
+HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.log.file=$HADOOP_LOGFILE"
+HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.home.dir=$HADOOP_PREFIX"
+HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.id.str=$HADOOP_IDENT_STRING"
+HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.root.logger=${HADOOP_ROOT_LOGGER:-INFO,console}"
+if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
+ HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH"
+ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$JAVA_LIBRARY_PATH
+fi
+HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.policy.file=$HADOOP_POLICYFILE"
+
+# Disable IPv6 as it can cause issues
+HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
+
+# put hdfs in classpath if present
+if [ "$HADOOP_HDFS_HOME" = "" ]; then
+ if [ -d "${HADOOP_PREFIX}/$HDFS_DIR" ]; then
+ export HADOOP_HDFS_HOME=$HADOOP_PREFIX
+ fi
+fi
+
+if [ -d "$HADOOP_HDFS_HOME/$HDFS_DIR/webapps" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_HDFS_HOME/$HDFS_DIR
+fi
+
+if [ -d "$HADOOP_HDFS_HOME/$HDFS_LIB_JARS_DIR" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_HDFS_HOME/$HDFS_LIB_JARS_DIR'/*'
+fi
+
+CLASSPATH=${CLASSPATH}:$HADOOP_HDFS_HOME/$HDFS_DIR'/*'
+
+# put yarn in classpath if present
+if [ "$HADOOP_YARN_HOME" = "" ]; then
+ if [ -d "${HADOOP_PREFIX}/$YARN_DIR" ]; then
+ export HADOOP_YARN_HOME=$HADOOP_PREFIX
+ fi
+fi
+
+if [ -d "$HADOOP_YARN_HOME/$YARN_DIR/webapps" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/$YARN_DIR
+fi
+
+if [ -d "$HADOOP_YARN_HOME/$YARN_LIB_JARS_DIR" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/$YARN_LIB_JARS_DIR'/*'
+fi
+
+CLASSPATH=${CLASSPATH}:$HADOOP_YARN_HOME/$YARN_DIR'/*'
+
+# put mapred in classpath if present AND different from YARN
+if [ "$HADOOP_MAPRED_HOME" = "" ]; then
+ if [ -d "${HADOOP_PREFIX}/$MAPRED_DIR" ]; then
+ export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
+ fi
+fi
+
+if [ "$HADOOP_MAPRED_HOME/$MAPRED_DIR" != "$HADOOP_YARN_HOME/$YARN_DIR" ] ; then
+ if [ -d "$HADOOP_MAPRED_HOME/$MAPRED_DIR/webapps" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_MAPRED_HOME/$MAPRED_DIR
+ fi
+
+ if [ -d "$HADOOP_MAPRED_HOME/$MAPRED_LIB_JARS_DIR" ]; then
+ CLASSPATH=${CLASSPATH}:$HADOOP_MAPRED_HOME/$MAPRED_LIB_JARS_DIR'/*'
+ fi
+
+ CLASSPATH=${CLASSPATH}:$HADOOP_MAPRED_HOME/$MAPRED_DIR'/*'
+fi
+
+# Add the user-specified CLASSPATH via HADOOP_CLASSPATH
+# Add it first or last depending on whether the user has
+# set the env-var HADOOP_USER_CLASSPATH_FIRST
+if [ "$HADOOP_CLASSPATH" != "" ]; then
+  # Prefix it if it is to come first
+ if [ "$HADOOP_USER_CLASSPATH_FIRST" != "" ]; then
+ CLASSPATH=${HADOOP_CLASSPATH}:${CLASSPATH}
+ else
+ CLASSPATH=${CLASSPATH}:${HADOOP_CLASSPATH}
+ fi
+fi
+
diff --git a/aarch64/libexec/hdfs-config.cmd b/aarch64/libexec/hdfs-config.cmd
new file mode 100755
index 0000000..f3aa733
--- /dev/null
+++ b/aarch64/libexec/hdfs-config.cmd
@@ -0,0 +1,43 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+
+@rem included in all the hdfs scripts with source command
+@rem should not be executed directly
+
+if not defined HADOOP_BIN_PATH (
+ set HADOOP_BIN_PATH=%~dp0
+)
+
+if "%HADOOP_BIN_PATH:~-1%" == "\" (
+ set HADOOP_BIN_PATH=%HADOOP_BIN_PATH:~0,-1%
+)
+
+set DEFAULT_LIBEXEC_DIR=%HADOOP_BIN_PATH%\..\libexec
+if not defined HADOOP_LIBEXEC_DIR (
+ set HADOOP_LIBEXEC_DIR=%DEFAULT_LIBEXEC_DIR%
+)
+
+if exist %HADOOP_LIBEXEC_DIR%\hadoop-config.cmd (
+ call %HADOOP_LIBEXEC_DIR%\hadoop-config.cmd %*
+) else if exist %HADOOP_COMMON_HOME%\libexec\hadoop-config.cmd (
+ call %HADOOP_COMMON_HOME%\libexec\hadoop-config.cmd %*
+) else if exist %HADOOP_HOME%\libexec\hadoop-config.cmd (
+ call %HADOOP_HOME%\libexec\hadoop-config.cmd %*
+) else (
+ echo Hadoop common not found.
+)
+
+:eof
diff --git a/aarch64/libexec/hdfs-config.sh b/aarch64/libexec/hdfs-config.sh
new file mode 100755
index 0000000..2aabf53
--- /dev/null
+++ b/aarch64/libexec/hdfs-config.sh
@@ -0,0 +1,36 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# included in all the hdfs scripts with source command
+# should not be executed directly
+
+bin=`which "$0"`
+bin=`dirname "${bin}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+if [ -e "${HADOOP_LIBEXEC_DIR}/hadoop-config.sh" ]; then
+ . ${HADOOP_LIBEXEC_DIR}/hadoop-config.sh
+elif [ -e "${HADOOP_COMMON_HOME}/libexec/hadoop-config.sh" ]; then
+ . "$HADOOP_COMMON_HOME"/libexec/hadoop-config.sh
+elif [ -e "${HADOOP_HOME}/libexec/hadoop-config.sh" ]; then
+ . "$HADOOP_HOME"/libexec/hadoop-config.sh
+else
+ echo "Hadoop common not found."
+  exit 1
+fi
diff --git a/aarch64/libexec/httpfs-config.sh b/aarch64/libexec/httpfs-config.sh
new file mode 100755
index 0000000..02e1a71
--- /dev/null
+++ b/aarch64/libexec/httpfs-config.sh
@@ -0,0 +1,174 @@
+#!/bin/bash
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# resolve links - $0 may be a softlink
+PRG="${0}"
+
+while [ -h "${PRG}" ]; do
+ ls=`ls -ld "${PRG}"`
+ link=`expr "$ls" : '.*-> \(.*\)$'`
+ if expr "$link" : '/.*' > /dev/null; then
+ PRG="$link"
+ else
+ PRG=`dirname "${PRG}"`/"$link"
+ fi
+done
+
+BASEDIR=`dirname ${PRG}`
+BASEDIR=`cd ${BASEDIR}/..;pwd`
+
+
+function print() {
+ if [ "${HTTPFS_SILENT}" != "true" ]; then
+ echo "$@"
+ fi
+}
+
+# if HTTPFS_HOME is already set, warn that it will be ignored
+#
+if [ "${HTTPFS_HOME}" != "" ]; then
+ echo "WARNING: current setting of HTTPFS_HOME ignored"
+fi
+
+print
+
+# set HTTPFS_HOME to the installation dir; it cannot be changed
+#
+export HTTPFS_HOME=${BASEDIR}
+httpfs_home=${HTTPFS_HOME}
+print "Setting HTTPFS_HOME: ${HTTPFS_HOME}"
+
+# if the installation has an env file, source it
+# this is for native package installations
+#
+if [ -e "${HTTPFS_HOME}/bin/httpfs-env.sh" ]; then
+ print "Sourcing: ${HTTPFS_HOME}/bin/httpfs-env.sh"
+  source ${HTTPFS_HOME}/bin/httpfs-env.sh
+ grep "^ *export " ${HTTPFS_HOME}/bin/httpfs-env.sh | sed 's/ *export/ setting/'
+fi
+
+# verify that the sourced env file didn't change HTTPFS_HOME
+# if so, warn and revert
+#
+if [ "${HTTPFS_HOME}" != "${httpfs_home}" ]; then
+ print "WARN: HTTPFS_HOME resetting to ''${HTTPFS_HOME}'' ignored"
+ export HTTPFS_HOME=${httpfs_home}
+ print " using HTTPFS_HOME: ${HTTPFS_HOME}"
+fi
+
+if [ "${HTTPFS_CONFIG}" = "" ]; then
+ export HTTPFS_CONFIG=${HTTPFS_HOME}/etc/hadoop
+ print "Setting HTTPFS_CONFIG: ${HTTPFS_CONFIG}"
+else
+ print "Using HTTPFS_CONFIG: ${HTTPFS_CONFIG}"
+fi
+httpfs_config=${HTTPFS_CONFIG}
+
+# if the configuration dir has an env file, source it
+#
+if [ -e "${HTTPFS_CONFIG}/httpfs-env.sh" ]; then
+ print "Sourcing: ${HTTPFS_CONFIG}/httpfs-env.sh"
+ source ${HTTPFS_CONFIG}/httpfs-env.sh
+ grep "^ *export " ${HTTPFS_CONFIG}/httpfs-env.sh | sed 's/ *export/ setting/'
+fi
+
+# verify that the sourced env file didn't change HTTPFS_HOME
+# if so, warn and revert
+#
+if [ "${HTTPFS_HOME}" != "${httpfs_home}" ]; then
+ echo "WARN: HTTPFS_HOME resetting to ''${HTTPFS_HOME}'' ignored"
+ export HTTPFS_HOME=${httpfs_home}
+fi
+
+# verify that the sourced env file didn't change HTTPFS_CONFIG
+# if so, warn and revert
+#
+if [ "${HTTPFS_CONFIG}" != "${httpfs_config}" ]; then
+ echo "WARN: HTTPFS_CONFIG resetting to ''${HTTPFS_CONFIG}'' ignored"
+ export HTTPFS_CONFIG=${httpfs_config}
+fi
+
+if [ "${HTTPFS_LOG}" = "" ]; then
+ export HTTPFS_LOG=${HTTPFS_HOME}/logs
+ print "Setting HTTPFS_LOG: ${HTTPFS_LOG}"
+else
+ print "Using HTTPFS_LOG: ${HTTPFS_LOG}"
+fi
+
+if [ ! -d "${HTTPFS_LOG}" ]; then
+ mkdir -p ${HTTPFS_LOG}
+fi
+
+if [ "${HTTPFS_TEMP}" = "" ]; then
+ export HTTPFS_TEMP=${HTTPFS_HOME}/temp
+ print "Setting HTTPFS_TEMP: ${HTTPFS_TEMP}"
+else
+ print "Using HTTPFS_TEMP: ${HTTPFS_TEMP}"
+fi
+
+if [ ! -d "${HTTPFS_TEMP}" ]; then
+ mkdir -p ${HTTPFS_TEMP}
+fi
+
+if [ "${HTTPFS_HTTP_PORT}" = "" ]; then
+ export HTTPFS_HTTP_PORT=14000
+ print "Setting HTTPFS_HTTP_PORT: ${HTTPFS_HTTP_PORT}"
+else
+ print "Using HTTPFS_HTTP_PORT: ${HTTPFS_HTTP_PORT}"
+fi
+
+if [ "${HTTPFS_ADMIN_PORT}" = "" ]; then
+ export HTTPFS_ADMIN_PORT=`expr $HTTPFS_HTTP_PORT + 1`
+ print "Setting HTTPFS_ADMIN_PORT: ${HTTPFS_ADMIN_PORT}"
+else
+ print "Using HTTPFS_ADMIN_PORT: ${HTTPFS_ADMIN_PORT}"
+fi
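+# (With the default HTTPFS_HTTP_PORT of 14000, the admin port becomes 14001.)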
+
+if [ "${HTTPFS_HTTP_HOSTNAME}" = "" ]; then
+ export HTTPFS_HTTP_HOSTNAME=`hostname -f`
+ print "Setting HTTPFS_HTTP_HOSTNAME: ${HTTPFS_HTTP_HOSTNAME}"
+else
+ print "Using HTTPFS_HTTP_HOSTNAME: ${HTTPFS_HTTP_HOSTNAME}"
+fi
+
+if [ "${CATALINA_BASE}" = "" ]; then
+ export CATALINA_BASE=${HTTPFS_HOME}/share/hadoop/httpfs/tomcat
+ print "Setting CATALINA_BASE: ${CATALINA_BASE}"
+else
+ print "Using CATALINA_BASE: ${CATALINA_BASE}"
+fi
+
+if [ "${HTTPFS_CATALINA_HOME}" = "" ]; then
+ export HTTPFS_CATALINA_HOME=${CATALINA_BASE}
+ print "Setting HTTPFS_CATALINA_HOME: ${HTTPFS_CATALINA_HOME}"
+else
+ print "Using HTTPFS_CATALINA_HOME: ${HTTPFS_CATALINA_HOME}"
+fi
+
+if [ "${CATALINA_OUT}" = "" ]; then
+ export CATALINA_OUT=${HTTPFS_LOG}/httpfs-catalina.out
+ print "Setting CATALINA_OUT: ${CATALINA_OUT}"
+else
+ print "Using CATALINA_OUT: ${CATALINA_OUT}"
+fi
+
+if [ "${CATALINA_PID}" = "" ]; then
+ export CATALINA_PID=/tmp/httpfs.pid
+ print "Setting CATALINA_PID: ${CATALINA_PID}"
+else
+ print "Using CATALINA_PID: ${CATALINA_PID}"
+fi
+
+print
diff --git a/aarch64/libexec/mapred-config.cmd b/aarch64/libexec/mapred-config.cmd
new file mode 100755
index 0000000..f3aa733
--- /dev/null
+++ b/aarch64/libexec/mapred-config.cmd
@@ -0,0 +1,43 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+
+@rem included in all the mapred scripts with source command
+@rem should not be executed directly
+
+if not defined HADOOP_BIN_PATH (
+ set HADOOP_BIN_PATH=%~dp0
+)
+
+if "%HADOOP_BIN_PATH:~-1%" == "\" (
+ set HADOOP_BIN_PATH=%HADOOP_BIN_PATH:~0,-1%
+)
+
+set DEFAULT_LIBEXEC_DIR=%HADOOP_BIN_PATH%\..\libexec
+if not defined HADOOP_LIBEXEC_DIR (
+ set HADOOP_LIBEXEC_DIR=%DEFAULT_LIBEXEC_DIR%
+)
+
+if exist %HADOOP_LIBEXEC_DIR%\hadoop-config.cmd (
+ call %HADOOP_LIBEXEC_DIR%\hadoop-config.cmd %*
+) else if exist %HADOOP_COMMON_HOME%\libexec\hadoop-config.cmd (
+ call %HADOOP_COMMON_HOME%\libexec\hadoop-config.cmd %*
+) else if exist %HADOOP_HOME%\libexec\hadoop-config.cmd (
+ call %HADOOP_HOME%\libexec\hadoop-config.cmd %*
+) else (
+ echo Hadoop common not found.
+)
+
+:eof
diff --git a/aarch64/libexec/mapred-config.sh b/aarch64/libexec/mapred-config.sh
new file mode 100755
index 0000000..254e0a0
--- /dev/null
+++ b/aarch64/libexec/mapred-config.sh
@@ -0,0 +1,52 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# included in all the mapred scripts with source command
+# should not be executed directly
+
+bin=`which "$0"`
+bin=`dirname "${bin}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+if [ -e "${HADOOP_LIBEXEC_DIR}/hadoop-config.sh" ]; then
+ . "${HADOOP_LIBEXEC_DIR}/hadoop-config.sh"
+elif [ -e "${HADOOP_COMMON_HOME}/libexec/hadoop-config.sh" ]; then
+ . "$HADOOP_COMMON_HOME"/libexec/hadoop-config.sh
+elif [ -e "${HADOOP_COMMON_HOME}/bin/hadoop-config.sh" ]; then
+ . "$HADOOP_COMMON_HOME"/bin/hadoop-config.sh
+elif [ -e "${HADOOP_HOME}/bin/hadoop-config.sh" ]; then
+ . "$HADOOP_HOME"/bin/hadoop-config.sh
+elif [ -e "${HADOOP_MAPRED_HOME}/bin/hadoop-config.sh" ]; then
+ . "$HADOOP_MAPRED_HOME"/bin/hadoop-config.sh
+else
+ echo "Hadoop common not found."
+  exit 1
+fi
+
+# Only set locally to use in HADOOP_OPTS. No need to export.
+# The following defaults are useful when somebody directly invokes bin/mapred.
+HADOOP_MAPRED_LOG_DIR=${HADOOP_MAPRED_LOG_DIR:-${HADOOP_MAPRED_HOME}/logs}
+HADOOP_MAPRED_LOGFILE=${HADOOP_MAPRED_LOGFILE:-hadoop.log}
+HADOOP_MAPRED_ROOT_LOGGER=${HADOOP_MAPRED_ROOT_LOGGER:-INFO,console}
+
+HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.log.dir=$HADOOP_MAPRED_LOG_DIR"
+HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.log.file=$HADOOP_MAPRED_LOGFILE"
+export HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.root.logger=${HADOOP_MAPRED_ROOT_LOGGER}"
+
+
diff --git a/aarch64/libexec/yarn-config.cmd b/aarch64/libexec/yarn-config.cmd
new file mode 100755
index 0000000..41c1434
--- /dev/null
+++ b/aarch64/libexec/yarn-config.cmd
@@ -0,0 +1,72 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+
+@rem included in all the yarn scripts with source command
+@rem should not be executed directly
+
+if not defined HADOOP_BIN_PATH (
+ set HADOOP_BIN_PATH=%~dp0
+)
+
+if "%HADOOP_BIN_PATH:~-1%" == "\" (
+ set HADOOP_BIN_PATH=%HADOOP_BIN_PATH:~0,-1%
+)
+
+set DEFAULT_LIBEXEC_DIR=%HADOOP_BIN_PATH%\..\libexec
+if not defined HADOOP_LIBEXEC_DIR (
+ set HADOOP_LIBEXEC_DIR=%DEFAULT_LIBEXEC_DIR%
+)
+
+if exist %HADOOP_LIBEXEC_DIR%\hadoop-config.cmd (
+ call %HADOOP_LIBEXEC_DIR%\hadoop-config.cmd %*
+) else if exist %HADOOP_COMMON_HOME%\libexec\hadoop-config.cmd (
+ call %HADOOP_COMMON_HOME%\libexec\hadoop-config.cmd %*
+) else if exist %HADOOP_HOME%\libexec\hadoop-config.cmd (
+ call %HADOOP_HOME%\libexec\hadoop-config.cmd %*
+) else (
+ echo Hadoop common not found.
+)
+
+@rem
+@rem Allow alternate conf dir location.
+@rem
+
+if "%1" == "--config" (
+ shift
+ set YARN_CONF_DIR=%2
+ shift
+)
+
+if not defined YARN_CONF_DIR (
+ if not defined HADOOP_CONF_DIR (
+ set YARN_CONF_DIR=%HADOOP_YARN_HOME%\conf
+ ) else (
+ set YARN_CONF_DIR=%HADOOP_CONF_DIR%
+ )
+)
+
+@rem
+@rem check to see whether the slaves or the masters file was
+@rem specified
+@rem
+
+if "%1" == "--hosts" (
+ set YARN_SLAVES=%YARN_CONF_DIR%\%2
+ shift
+ shift
+)
+
+:eof
diff --git a/aarch64/libexec/yarn-config.sh b/aarch64/libexec/yarn-config.sh
new file mode 100755
index 0000000..3d67801
--- /dev/null
+++ b/aarch64/libexec/yarn-config.sh
@@ -0,0 +1,65 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# included in all the hadoop scripts with source command
+# should not be executed directly
+bin=`which "$0"`
+bin=`dirname "${bin}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+if [ -e "${HADOOP_LIBEXEC_DIR}/hadoop-config.sh" ]; then
+ . ${HADOOP_LIBEXEC_DIR}/hadoop-config.sh
+elif [ -e "${HADOOP_COMMON_HOME}/libexec/hadoop-config.sh" ]; then
+ . "$HADOOP_COMMON_HOME"/libexec/hadoop-config.sh
+elif [ -e "${HADOOP_HOME}/libexec/hadoop-config.sh" ]; then
+ . "$HADOOP_HOME"/libexec/hadoop-config.sh
+else
+ echo "Hadoop common not found."
+  exit 1
+fi
+
+# Same glibc arena-allocator issue as discovered in Hadoop.
+# Without this you can see very large vmem settings on containers.
+export MALLOC_ARENA_MAX=${MALLOC_ARENA_MAX:-4}
+
+# check to see if the conf dir is given as an optional argument
+if [ $# -gt 1 ]
+then
+ if [ "--config" = "$1" ]
+ then
+ shift
+ confdir=$1
+ shift
+ YARN_CONF_DIR=$confdir
+ fi
+fi
+
+# Allow alternate conf dir location.
+export YARN_CONF_DIR="${HADOOP_CONF_DIR:-$HADOOP_YARN_HOME/conf}"
+
+# check to see whether the slaves or the masters file was
+# specified
+if [ $# -gt 1 ]
+then
+ if [ "--hosts" = "$1" ]
+ then
+ shift
+ slavesfile=$1
+ shift
+ export YARN_SLAVES="${YARN_CONF_DIR}/$slavesfile"
+ fi
+fi
diff --git a/aarch64/sbin/distribute-exclude.sh b/aarch64/sbin/distribute-exclude.sh
new file mode 100755
index 0000000..66fc14a
--- /dev/null
+++ b/aarch64/sbin/distribute-exclude.sh
@@ -0,0 +1,81 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# ------------------------------------------------------------------
+#
+# The purpose of this script is to distribute the exclude file (see
+# "dfs.hosts.exclude" in hdfs-site.xml).
+#
+# Input of the script is a local exclude file. The exclude file
+# will be distributed to all the namenodes. The location on the namenodes
+# is determined by the configuration "dfs.hosts.exclude" in hdfs-site.xml
+# (this value is read from the local copy of hdfs-site.xml and must be the same
+# on all the namenodes).
+#
+# The user running this script needs write permissions on the target
+# directory on namenodes.
+#
+# After this command, run refresh-namenodes.sh so that namenodes start
+# using the new exclude file.
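+#
+# Example (hypothetical file name):
+#   sbin/distribute-exclude.sh /tmp/dfs-exclude-hosts
+#   sbin/refresh-namenodes.sh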
+
+bin=`dirname "$0"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hdfs-config.sh
+
+if [ "$1" = '' ] ; then
+ "Error: please specify local exclude file as a first argument"
+ exit 1
+else
+ excludeFilenameLocal=$1
+fi
+
+if [ ! -f "$excludeFilenameLocal" ] ; then
+ echo "Error: exclude file [$excludeFilenameLocal] does not exist."
+ exit 1
+fi
+
+namenodes=$("$HADOOP_PREFIX/bin/hdfs" getconf -namenodes)
+excludeFilenameRemote=$("$HADOOP_PREFIX/bin/hdfs" getconf -excludeFile)
+
+if [ "$excludeFilenameRemote" = '' ] ; then
+ echo \
+ "Error: hdfs getconf -excludeFile returned empty string, " \
+ "please setup dfs.hosts.exclude in hdfs-site.xml in local cluster " \
+ "configuration and on all namenodes"
+ exit 1
+fi
+
+echo "Copying exclude file [$excludeFilenameRemote] to namenodes:"
+
+for namenode in $namenodes ; do
+ echo " [$namenode]"
+ scp "$excludeFilenameLocal" "$namenode:$excludeFilenameRemote"
+ if [ "$?" != '0' ] ; then errorFlag='1' ; fi
+done
+
+if [ "$errorFlag" = '1' ] ; then
+ echo "Error: transfer of exclude file failed, see error messages above."
+ exit 1
+else
+ echo "Transfer of exclude file to all namenodes succeeded."
+fi
+
+# eof
diff --git a/aarch64/sbin/hadoop-daemon.sh b/aarch64/sbin/hadoop-daemon.sh
new file mode 100755
index 0000000..ece40ef
--- /dev/null
+++ b/aarch64/sbin/hadoop-daemon.sh
@@ -0,0 +1,202 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Runs a Hadoop command as a daemon.
+#
+# Environment Variables
+#
+# HADOOP_CONF_DIR Alternate conf dir. Default is ${HADOOP_PREFIX}/conf.
+# HADOOP_LOG_DIR Where log files are stored. $HADOOP_PREFIX/logs by default.
+# HADOOP_MASTER host:path where hadoop code should be rsync'd from
+# HADOOP_PID_DIR Where the pid files are stored. /tmp by default.
+# HADOOP_IDENT_STRING A string representing this instance of hadoop. $USER by default
+# HADOOP_NICENESS The scheduling priority for daemons. Defaults to 0.
+##
+
+usage="Usage: hadoop-daemon.sh [--config <conf-dir>] [--hosts hostlistfile] [--script script] (start|stop) <hadoop-command> <args...>"
+
+# if no args specified, show usage
+if [ $# -le 1 ]; then
+ echo $usage
+ exit 1
+fi
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hadoop-config.sh
+
+# get arguments
+
+#default value
+hadoopScript="$HADOOP_PREFIX"/bin/hadoop
+if [ "--script" = "$1" ]
+ then
+ shift
+ hadoopScript=$1
+ shift
+fi
+startStop=$1
+shift
+command=$1
+shift
+
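+# Rotate $1, keeping at most $2 (default 5) numbered generations, e.g.
+#   hadoop_rotate_log "$log" 3   # shifts $log -> $log.1 -> ... -> $log.3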
+hadoop_rotate_log ()
+{
+ log=$1;
+ num=5;
+ if [ -n "$2" ]; then
+ num=$2
+ fi
+ if [ -f "$log" ]; then # rotate logs
+ while [ $num -gt 1 ]; do
+ prev=`expr $num - 1`
+ [ -f "$log.$prev" ] && mv "$log.$prev" "$log.$num"
+ num=$prev
+ done
+ mv "$log" "$log.$num";
+ fi
+}
+
+if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
+ . "${HADOOP_CONF_DIR}/hadoop-env.sh"
+fi
+
+# Determine if we're starting a secure datanode, and if so, redefine appropriate variables
+if [ "$command" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n "$HADOOP_SECURE_DN_USER" ]; then
+ export HADOOP_PID_DIR=$HADOOP_SECURE_DN_PID_DIR
+ export HADOOP_LOG_DIR=$HADOOP_SECURE_DN_LOG_DIR
+ export HADOOP_IDENT_STRING=$HADOOP_SECURE_DN_USER
+ starting_secure_dn="true"
+fi
+
+if [ "$HADOOP_IDENT_STRING" = "" ]; then
+ export HADOOP_IDENT_STRING="$USER"
+fi
+
+
+# get log directory
+if [ "$HADOOP_LOG_DIR" = "" ]; then
+ export HADOOP_LOG_DIR="$HADOOP_PREFIX/logs"
+fi
+
+if [ ! -w "$HADOOP_LOG_DIR" ] ; then
+ mkdir -p "$HADOOP_LOG_DIR"
+ chown $HADOOP_IDENT_STRING $HADOOP_LOG_DIR
+fi
+
+if [ "$HADOOP_PID_DIR" = "" ]; then
+ HADOOP_PID_DIR=/tmp
+fi
+
+# some variables
+export HADOOP_LOGFILE=hadoop-$HADOOP_IDENT_STRING-$command-$HOSTNAME.log
+export HADOOP_ROOT_LOGGER=${HADOOP_ROOT_LOGGER:-"INFO,RFA"}
+export HADOOP_SECURITY_LOGGER=${HADOOP_SECURITY_LOGGER:-"INFO,RFAS"}
+export HDFS_AUDIT_LOGGER=${HDFS_AUDIT_LOGGER:-"INFO,NullAppender"}
+log=$HADOOP_LOG_DIR/hadoop-$HADOOP_IDENT_STRING-$command-$HOSTNAME.out
+pid=$HADOOP_PID_DIR/hadoop-$HADOOP_IDENT_STRING-$command.pid
+HADOOP_STOP_TIMEOUT=${HADOOP_STOP_TIMEOUT:-5}
+
+# Set default scheduling priority
+if [ "$HADOOP_NICENESS" = "" ]; then
+ export HADOOP_NICENESS=0
+fi
+
+case $startStop in
+
+ (start)
+
+ [ -w "$HADOOP_PID_DIR" ] || mkdir -p "$HADOOP_PID_DIR"
+
+ if [ -f $pid ]; then
+ if kill -0 `cat $pid` > /dev/null 2>&1; then
+ echo $command running as process `cat $pid`. Stop it first.
+ exit 1
+ fi
+ fi
+
+ if [ "$HADOOP_MASTER" != "" ]; then
+ echo rsync from $HADOOP_MASTER
+ rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*' --exclude='contrib/hod/logs/*' $HADOOP_MASTER/ "$HADOOP_PREFIX"
+ fi
+
+ hadoop_rotate_log $log
+ echo starting $command, logging to $log
+ cd "$HADOOP_PREFIX"
+ case $command in
+ namenode|secondarynamenode|datanode|journalnode|dfs|dfsadmin|fsck|balancer|zkfc)
+ if [ -z "$HADOOP_HDFS_HOME" ]; then
+ hdfsScript="$HADOOP_PREFIX"/bin/hdfs
+ else
+ hdfsScript="$HADOOP_HDFS_HOME"/bin/hdfs
+ fi
+ nohup nice -n $HADOOP_NICENESS $hdfsScript --config $HADOOP_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null &
+ ;;
+ (*)
+ nohup nice -n $HADOOP_NICENESS $hadoopScript --config $HADOOP_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null &
+ ;;
+ esac
+ echo $! > $pid
+ sleep 1
+ head "$log"
+ # capture the ulimit output
+ if [ "true" = "$starting_secure_dn" ]; then
+ echo "ulimit -a for secure datanode user $HADOOP_SECURE_DN_USER" >> $log
+ # capture the ulimit info for the appropriate user
+ su --shell=/bin/bash $HADOOP_SECURE_DN_USER -c 'ulimit -a' >> $log 2>&1
+ else
+ echo "ulimit -a for user $USER" >> $log
+ ulimit -a >> $log 2>&1
+ fi
+ sleep 3;
+ if ! ps -p $! > /dev/null ; then
+ exit 1
+ fi
+ ;;
+
+ (stop)
+
+ if [ -f $pid ]; then
+ TARGET_PID=`cat $pid`
+ if kill -0 $TARGET_PID > /dev/null 2>&1; then
+ echo stopping $command
+ kill $TARGET_PID
+ sleep $HADOOP_STOP_TIMEOUT
+ if kill -0 $TARGET_PID > /dev/null 2>&1; then
+ echo "$command did not stop gracefully after $HADOOP_STOP_TIMEOUT seconds: killing with kill -9"
+ kill -9 $TARGET_PID
+ fi
+ else
+ echo no $command to stop
+ fi
+ else
+ echo no $command to stop
+ fi
+ ;;
+
+ (*)
+ echo $usage
+ exit 1
+ ;;
+
+esac
+
+
diff --git a/aarch64/sbin/hadoop-daemons.sh b/aarch64/sbin/hadoop-daemons.sh
new file mode 100755
index 0000000..181d7ac
--- /dev/null
+++ b/aarch64/sbin/hadoop-daemons.sh
@@ -0,0 +1,36 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Run a Hadoop command on all slave hosts.
+
+usage="Usage: hadoop-daemons.sh [--config confdir] [--hosts hostlistfile] [start|stop] command args..."
+
+# if no args specified, show usage
+if [ $# -le 1 ]; then
+ echo $usage
+ exit 1
+fi
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hadoop-config.sh
+
+exec "$bin/slaves.sh" --config $HADOOP_CONF_DIR cd "$HADOOP_PREFIX" \; "$bin/hadoop-daemon.sh" --config $HADOOP_CONF_DIR "$@"
diff --git a/aarch64/sbin/hdfs-config.cmd b/aarch64/sbin/hdfs-config.cmd
new file mode 100755
index 0000000..f3aa733
--- /dev/null
+++ b/aarch64/sbin/hdfs-config.cmd
@@ -0,0 +1,43 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+
+@rem included in all the hdfs scripts with source command
+@rem should not be executed directly
+
+if not defined HADOOP_BIN_PATH (
+ set HADOOP_BIN_PATH=%~dp0
+)
+
+if "%HADOOP_BIN_PATH:~-1%" == "\" (
+ set HADOOP_BIN_PATH=%HADOOP_BIN_PATH:~0,-1%
+)
+
+set DEFAULT_LIBEXEC_DIR=%HADOOP_BIN_PATH%\..\libexec
+if not defined HADOOP_LIBEXEC_DIR (
+ set HADOOP_LIBEXEC_DIR=%DEFAULT_LIBEXEC_DIR%
+)
+
+if exist %HADOOP_LIBEXEC_DIR%\hadoop-config.cmd (
+ call %HADOOP_LIBEXEC_DIR%\hadoop-config.cmd %*
+) else if exist %HADOOP_COMMON_HOME%\libexec\hadoop-config.cmd (
+ call %HADOOP_COMMON_HOME%\libexec\hadoop-config.cmd %*
+) else if exist %HADOOP_HOME%\libexec\hadoop-config.cmd (
+ call %HADOOP_HOME%\libexec\hadoop-config.cmd %*
+) else (
+ echo Hadoop common not found.
+)
+
+:eof
diff --git a/aarch64/sbin/hdfs-config.sh b/aarch64/sbin/hdfs-config.sh
new file mode 100755
index 0000000..2aabf53
--- /dev/null
+++ b/aarch64/sbin/hdfs-config.sh
@@ -0,0 +1,36 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# included in all the hdfs scripts with source command
+# should not be executed directly
+
+bin=`which "$0"`
+bin=`dirname "${bin}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+if [ -e "${HADOOP_LIBEXEC_DIR}/hadoop-config.sh" ]; then
+ . ${HADOOP_LIBEXEC_DIR}/hadoop-config.sh
+elif [ -e "${HADOOP_COMMON_HOME}/libexec/hadoop-config.sh" ]; then
+ . "$HADOOP_COMMON_HOME"/libexec/hadoop-config.sh
+elif [ -e "${HADOOP_HOME}/libexec/hadoop-config.sh" ]; then
+ . "$HADOOP_HOME"/libexec/hadoop-config.sh
+else
+ echo "Hadoop common not found."
+  exit 1
+fi
diff --git a/aarch64/sbin/httpfs.sh b/aarch64/sbin/httpfs.sh
new file mode 100755
index 0000000..c83a143
--- /dev/null
+++ b/aarch64/sbin/httpfs.sh
@@ -0,0 +1,62 @@
+#!/bin/bash
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# resolve links - $0 may be a softlink
+PRG="${0}"
+
+while [ -h "${PRG}" ]; do
+ ls=`ls -ld "${PRG}"`
+ link=`expr "$ls" : '.*-> \(.*\)$'`
+ if expr "$link" : '/.*' > /dev/null; then
+ PRG="$link"
+ else
+ PRG=`dirname "${PRG}"`/"$link"
+ fi
+done
+
+BASEDIR=`dirname ${PRG}`
+BASEDIR=`cd ${BASEDIR}/..;pwd`
+
+source ${HADOOP_LIBEXEC_DIR:-${BASEDIR}/libexec}/httpfs-config.sh
+
+# The Java system property 'httpfs.http.port' is not used by HttpFS itself;
+# it is used in Tomcat's server.xml configuration file
+#
+print "Using CATALINA_OPTS: ${CATALINA_OPTS}"
+
+catalina_opts="-Dhttpfs.home.dir=${HTTPFS_HOME}";
+catalina_opts="${catalina_opts} -Dhttpfs.config.dir=${HTTPFS_CONFIG}";
+catalina_opts="${catalina_opts} -Dhttpfs.log.dir=${HTTPFS_LOG}";
+catalina_opts="${catalina_opts} -Dhttpfs.temp.dir=${HTTPFS_TEMP}";
+catalina_opts="${catalina_opts} -Dhttpfs.admin.port=${HTTPFS_ADMIN_PORT}";
+catalina_opts="${catalina_opts} -Dhttpfs.http.port=${HTTPFS_HTTP_PORT}";
+catalina_opts="${catalina_opts} -Dhttpfs.http.hostname=${HTTPFS_HTTP_HOSTNAME}";
+
+print "Adding to CATALINA_OPTS: ${catalina_opts}"
+
+export CATALINA_OPTS="${CATALINA_OPTS} ${catalina_opts}"
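+
+# For example, a hypothetical heap override supplied by the caller:
+#   CATALINA_OPTS="-Xmx1024m" sbin/httpfs.sh start
+# is preserved, since the httpfs.* properties above are appended to it.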
+
+# Due to a bug, the catalina.sh script does not use CATALINA_OPTS when stopping the server
+#
+if [ "${1}" = "stop" ]; then
+ export JAVA_OPTS=${CATALINA_OPTS}
+fi
+
+if [ "${HTTPFS_SILENT}" != "true" ]; then
+ exec ${HTTPFS_CATALINA_HOME}/bin/catalina.sh "$@"
+else
+ exec ${HTTPFS_CATALINA_HOME}/bin/catalina.sh "$@" > /dev/null
+fi
+
diff --git a/aarch64/sbin/mr-jobhistory-daemon.sh b/aarch64/sbin/mr-jobhistory-daemon.sh
new file mode 100755
index 0000000..9ef3d45
--- /dev/null
+++ b/aarch64/sbin/mr-jobhistory-daemon.sh
@@ -0,0 +1,146 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+#
+# Environment Variables
+#
+# HADOOP_JHS_LOGGER Hadoop JobSummary logger.
+# HADOOP_CONF_DIR Alternate conf dir. Default is ${HADOOP_MAPRED_HOME}/conf.
+# HADOOP_MAPRED_PID_DIR Where the pid files are stored. /tmp by default.
+# HADOOP_MAPRED_NICENESS The scheduling priority for daemons. Defaults to 0.
+##
+
+usage="Usage: mr-jobhistory-daemon.sh [--config <conf-dir>] (start|stop) <mapred-command> "
+
+# if no args specified, show usage
+if [ $# -le 1 ]; then
+ echo $usage
+ exit 1
+fi
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+if [ -e ${HADOOP_LIBEXEC_DIR}/mapred-config.sh ]; then
+ . $HADOOP_LIBEXEC_DIR/mapred-config.sh
+fi
+
+# get arguments
+startStop=$1
+shift
+command=$1
+shift
+
+hadoop_rotate_log ()
+{
+ log=$1;
+ num=5;
+ if [ -n "$2" ]; then
+ num=$2
+ fi
+ if [ -f "$log" ]; then # rotate logs
+ while [ $num -gt 1 ]; do
+ prev=`expr $num - 1`
+ [ -f "$log.$prev" ] && mv "$log.$prev" "$log.$num"
+ num=$prev
+ done
+ mv "$log" "$log.$num";
+ fi
+}
+
+if [ "$HADOOP_MAPRED_IDENT_STRING" = "" ]; then
+ export HADOOP_MAPRED_IDENT_STRING="$USER"
+fi
+
+export HADOOP_MAPRED_HOME=${HADOOP_MAPRED_HOME:-${HADOOP_PREFIX}}
+export HADOOP_MAPRED_LOGFILE=mapred-$HADOOP_MAPRED_IDENT_STRING-$command-$HOSTNAME.log
+export HADOOP_MAPRED_ROOT_LOGGER=${HADOOP_MAPRED_ROOT_LOGGER:-INFO,RFA}
+export HADOOP_JHS_LOGGER=${HADOOP_JHS_LOGGER:-INFO,JSA}
+
+if [ -f "${HADOOP_CONF_DIR}/mapred-env.sh" ]; then
+ . "${HADOOP_CONF_DIR}/mapred-env.sh"
+fi
+
+mkdir -p "$HADOOP_MAPRED_LOG_DIR"
+chown $HADOOP_MAPRED_IDENT_STRING $HADOOP_MAPRED_LOG_DIR
+
+if [ "$HADOOP_MAPRED_PID_DIR" = "" ]; then
+ HADOOP_MAPRED_PID_DIR=/tmp
+fi
+
+HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.id.str=$HADOOP_MAPRED_IDENT_STRING"
+
+log=$HADOOP_MAPRED_LOG_DIR/mapred-$HADOOP_MAPRED_IDENT_STRING-$command-$HOSTNAME.out
+pid=$HADOOP_MAPRED_PID_DIR/mapred-$HADOOP_MAPRED_IDENT_STRING-$command.pid
+
+HADOOP_MAPRED_STOP_TIMEOUT=${HADOOP_MAPRED_STOP_TIMEOUT:-5}
+
+# Set default scheduling priority
+if [ "$HADOOP_MAPRED_NICENESS" = "" ]; then
+ export HADOOP_MAPRED_NICENESS=0
+fi
+
+case $startStop in
+
+ (start)
+
+ mkdir -p "$HADOOP_MAPRED_PID_DIR"
+
+ if [ -f $pid ]; then
+ if kill -0 `cat $pid` > /dev/null 2>&1; then
+ echo $command running as process `cat $pid`. Stop it first.
+ exit 1
+ fi
+ fi
+
+ hadoop_rotate_log $log
+ echo starting $command, logging to $log
+ cd "$HADOOP_MAPRED_HOME"
+ nohup nice -n $HADOOP_MAPRED_NICENESS "$HADOOP_MAPRED_HOME"/bin/mapred --config $HADOOP_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null &
+ echo $! > $pid
+ sleep 1; head "$log"
+ ;;
+
+ (stop)
+
+ if [ -f $pid ]; then
+ TARGET_PID=`cat $pid`
+ if kill -0 $TARGET_PID > /dev/null 2>&1; then
+ echo stopping $command
+ kill $TARGET_PID
+ sleep $HADOOP_MAPRED_STOP_TIMEOUT
+ if kill -0 $TARGET_PID > /dev/null 2>&1; then
+ echo "$command did not stop gracefully after $HADOOP_MAPRED_STOP_TIMEOUT seconds: killing with kill -9"
+ kill -9 $TARGET_PID
+ fi
+ else
+ echo no $command to stop
+ fi
+ else
+ echo no $command to stop
+ fi
+ ;;
+
+ (*)
+ echo $usage
+ exit 1
+ ;;
+
+esac
diff --git a/aarch64/sbin/refresh-namenodes.sh b/aarch64/sbin/refresh-namenodes.sh
new file mode 100755
index 0000000..d3f6759
--- /dev/null
+++ b/aarch64/sbin/refresh-namenodes.sh
@@ -0,0 +1,48 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# ------------------------------------------------------------------
+# This script refreshes all namenodes; it is a simple wrapper
+# around dfsadmin to support multiple namenodes.
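+#
+# Typical use (the script takes no arguments):
+#   sbin/refresh-namenodes.sh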
+
+bin=`dirname "$0"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hdfs-config.sh
+
+namenodes=$("$HADOOP_PREFIX/bin/hdfs" getconf -nnRpcAddresses)
+if [ "$?" != '0' ] ; then errorFlag='1' ;
+else
+ for namenode in $namenodes ; do
+ echo "Refreshing namenode [$namenode]"
+ "$HADOOP_PREFIX/bin/hdfs" dfsadmin -fs hdfs://$namenode -refreshNodes
+ if [ "$?" != '0' ] ; then errorFlag='1' ; fi
+ done
+fi
+
+if [ "$errorFlag" = '1' ] ; then
+ echo "Error: refresh of namenodes failed, see error messages above."
+ exit 1
+else
+ echo "Refresh of namenodes done."
+fi
+
+
+# eof
diff --git a/aarch64/sbin/slaves.sh b/aarch64/sbin/slaves.sh
new file mode 100755
index 0000000..016392f
--- /dev/null
+++ b/aarch64/sbin/slaves.sh
@@ -0,0 +1,67 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Run a shell command on all slave hosts.
+#
+# Environment Variables
+#
+# HADOOP_SLAVES File naming remote hosts.
+# Default is ${HADOOP_CONF_DIR}/slaves.
+# HADOOP_CONF_DIR Alternate conf dir. Default is ${HADOOP_PREFIX}/conf.
+# HADOOP_SLAVE_SLEEP Seconds to sleep between spawning remote commands.
+# HADOOP_SSH_OPTS Options passed to ssh when running remote commands.
+##
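+
+# For example (hypothetical values):
+#   HADOOP_SSH_OPTS="-o ConnectTimeout=5" sbin/slaves.sh uptime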
+
+usage="Usage: slaves.sh [--config confdir] command..."
+
+# if no args specified, show usage
+if [ $# -le 0 ]; then
+ echo $usage
+ exit 1
+fi
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hadoop-config.sh
+
+if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
+ . "${HADOOP_CONF_DIR}/hadoop-env.sh"
+fi
+
+# Where to start the script; see hadoop-config.sh
+# (it sets up the variables based on command line options)
+if [ "$HADOOP_SLAVE_NAMES" != '' ] ; then
+ SLAVE_NAMES=$HADOOP_SLAVE_NAMES
+else
+ SLAVE_FILE=${HADOOP_SLAVES:-${HADOOP_CONF_DIR}/slaves}
+ SLAVE_NAMES=$(cat "$SLAVE_FILE" | sed 's/#.*$//;/^$/d')
+fi
+
+# start the daemons
+for slave in $SLAVE_NAMES ; do
+ ssh $HADOOP_SSH_OPTS $slave $"${@// /\\ }" \
+ 2>&1 | sed "s/^/$slave: /" &
+ if [ "$HADOOP_SLAVE_SLEEP" != "" ]; then
+ sleep $HADOOP_SLAVE_SLEEP
+ fi
+done
+
+wait
diff --git a/aarch64/sbin/start-all.cmd b/aarch64/sbin/start-all.cmd
new file mode 100755
index 0000000..9f65b5d
--- /dev/null
+++ b/aarch64/sbin/start-all.cmd
@@ -0,0 +1,52 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+
+setlocal enabledelayedexpansion
+
+@rem Start all hadoop daemons. Run this on master node.
+
+echo This script is deprecated. Instead use start-dfs.cmd and start-yarn.cmd
+
+if not defined HADOOP_BIN_PATH (
+ set HADOOP_BIN_PATH=%~dp0
+)
+
+if "%HADOOP_BIN_PATH:~-1%" == "\" (
+ set HADOOP_BIN_PATH=%HADOOP_BIN_PATH:~0,-1%
+)
+
+set DEFAULT_LIBEXEC_DIR=%HADOOP_BIN_PATH%\..\libexec
+if not defined HADOOP_LIBEXEC_DIR (
+ set HADOOP_LIBEXEC_DIR=%DEFAULT_LIBEXEC_DIR%
+)
+
+call %HADOOP_LIBEXEC_DIR%\hadoop-config.cmd %*
+if "%1" == "--config" (
+ shift
+ shift
+)
+
+@rem start hdfs daemons if hdfs is present
+if exist %HADOOP_HDFS_HOME%\sbin\start-dfs.cmd (
+ call %HADOOP_HDFS_HOME%\sbin\start-dfs.cmd --config %HADOOP_CONF_DIR%
+)
+
+@rem start yarn daemons if yarn is present
+if exist %HADOOP_YARN_HOME%\sbin\start-yarn.cmd (
+ call %HADOOP_YARN_HOME%\sbin\start-yarn.cmd --config %HADOOP_CONF_DIR%
+)
+
+endlocal
diff --git a/aarch64/sbin/start-all.sh b/aarch64/sbin/start-all.sh
new file mode 100755
index 0000000..3124328
--- /dev/null
+++ b/aarch64/sbin/start-all.sh
@@ -0,0 +1,38 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Start all hadoop daemons. Run this on master node.
+
+echo "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh"
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hadoop-config.sh
+
+# start hdfs daemons if hdfs is present
+if [ -f "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh ]; then
+ "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh --config $HADOOP_CONF_DIR
+fi
+
+# start yarn daemons if yarn is present
+if [ -f "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh ]; then
+ "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh --config $HADOOP_CONF_DIR
+fi
diff --git a/aarch64/sbin/start-balancer.sh b/aarch64/sbin/start-balancer.sh
new file mode 100755
index 0000000..2c14a59
--- /dev/null
+++ b/aarch64/sbin/start-balancer.sh
@@ -0,0 +1,27 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hdfs-config.sh
+
+# Start balancer daemon.
+
+"$HADOOP_PREFIX"/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script "$bin"/hdfs start balancer $@
diff --git a/aarch64/sbin/start-dfs.cmd b/aarch64/sbin/start-dfs.cmd
new file mode 100755
index 0000000..9f20e5a
--- /dev/null
+++ b/aarch64/sbin/start-dfs.cmd
@@ -0,0 +1,41 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+@rem
+setlocal enabledelayedexpansion
+
+if not defined HADOOP_BIN_PATH (
+ set HADOOP_BIN_PATH=%~dp0
+)
+
+if "%HADOOP_BIN_PATH:~-1%" == "\" (
+ set HADOOP_BIN_PATH=%HADOOP_BIN_PATH:~0,-1%
+)
+
+set DEFAULT_LIBEXEC_DIR=%HADOOP_BIN_PATH%\..\libexec
+if not defined HADOOP_LIBEXEC_DIR (
+ set HADOOP_LIBEXEC_DIR=%DEFAULT_LIBEXEC_DIR%
+)
+
+call %HADOOP_LIBEXEC_DIR%\hdfs-config.cmd %*
+if "%1" == "--config" (
+ shift
+ shift
+)
+
+start "Apache Hadoop Distribution" hadoop namenode
+start "Apache Hadoop Distribution" hadoop datanode
+
+endlocal
diff --git a/aarch64/sbin/start-dfs.sh b/aarch64/sbin/start-dfs.sh
new file mode 100755
index 0000000..8cbea16
--- /dev/null
+++ b/aarch64/sbin/start-dfs.sh
@@ -0,0 +1,117 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Start hadoop dfs daemons.
+# Optionally upgrade or roll back dfs state.
+# Run this on the master node.
+
+usage="Usage: start-dfs.sh [-upgrade|-rollback] [other options such as -clusterId]"
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hdfs-config.sh
+
+# get arguments
+if [ $# -ge 1 ]; then
+ nameStartOpt="$1"
+ shift
+ case "$nameStartOpt" in
+ (-upgrade)
+ ;;
+ (-rollback)
+ dataStartOpt="$nameStartOpt"
+ ;;
+ (*)
+ echo $usage
+ exit 1
+ ;;
+ esac
+fi
+
+# Add other possible options
+nameStartOpt="$nameStartOpt $@"
+
+#---------------------------------------------------------
+# namenodes
+
+NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -namenodes)
+
+echo "Starting namenodes on [$NAMENODES]"
+
+"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
+ --config "$HADOOP_CONF_DIR" \
+ --hostnames "$NAMENODES" \
+ --script "$bin/hdfs" start namenode $nameStartOpt
+
+#---------------------------------------------------------
+# datanodes (using default slaves file)
+
+if [ -n "$HADOOP_SECURE_DN_USER" ]; then
+ echo \
+ "Attempting to start secure cluster, skipping datanodes. " \
+ "Run start-secure-dns.sh as root to complete startup."
+else
+ "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
+ --config "$HADOOP_CONF_DIR" \
+ --script "$bin/hdfs" start datanode $dataStartOpt
+fi
+
+#---------------------------------------------------------
+# secondary namenodes (if any)
+
+SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 2>/dev/null)
+
+if [ -n "$SECONDARY_NAMENODES" ]; then
+ echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"
+
+ "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
+ --config "$HADOOP_CONF_DIR" \
+ --hostnames "$SECONDARY_NAMENODES" \
+ --script "$bin/hdfs" start secondarynamenode
+fi
+
+#---------------------------------------------------------
+# quorumjournal nodes (if any)
+
+SHARED_EDITS_DIR=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.namenode.shared.edits.dir 2>&-)
+
+case "$SHARED_EDITS_DIR" in
+qjournal://*)
+ JOURNAL_NODES=$(echo "$SHARED_EDITS_DIR" | sed 's,qjournal://\([^/]*\)/.*,\1,g; s/;/ /g; s/:[0-9]*//g')
+ echo "Starting journal nodes [$JOURNAL_NODES]"
+ "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
+ --config "$HADOOP_CONF_DIR" \
+ --hostnames "$JOURNAL_NODES" \
+ --script "$bin/hdfs" start journalnode ;;
+esac
+
+#---------------------------------------------------------
+# ZK Failover controllers, if auto-HA is enabled
+AUTOHA_ENABLED=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.ha.automatic-failover.enabled)
+if [ "$(echo "$AUTOHA_ENABLED" | tr A-Z a-z)" = "true" ]; then
+ echo "Starting ZK Failover Controllers on NN hosts [$NAMENODES]"
+ "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
+ --config "$HADOOP_CONF_DIR" \
+ --hostnames "$NAMENODES" \
+ --script "$bin/hdfs" start zkfc
+fi
+
+# eof
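The first argument, when given, must be -upgrade or -rollback; anything else echoes the usage line and exits 1. Illustrative invocations (per the usage string above, further options such as -clusterId may follow):

    sbin/start-dfs.sh              # normal start
    sbin/start-dfs.sh -upgrade     # upgrade dfs state while starting
    sbin/start-dfs.sh -rollback    # roll back dfs state while starting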
diff --git a/aarch64/sbin/start-secure-dns.sh b/aarch64/sbin/start-secure-dns.sh
new file mode 100755
index 0000000..7ddf687
--- /dev/null
+++ b/aarch64/sbin/start-secure-dns.sh
@@ -0,0 +1,33 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Run as root to start secure datanodes in a security-enabled cluster.
+
+usage="Usage (run as root in order to start secure datanodes): start-secure-dns.sh"
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hdfs-config.sh
+
+if [ "$EUID" -eq 0 ] && [ -n "$HADOOP_SECURE_DN_USER" ]; then
+ "$HADOOP_PREFIX"/sbin/hadoop-daemons.sh --config $HADOOP_CONF_DIR --script "$bin"/hdfs start datanode $dataStartOpt
+else
+ echo $usage
+fi
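The guard above means the script does real work only when run as root with HADOOP_SECURE_DN_USER set; otherwise it just prints the usage line. A sketch, where the user name hdfs is an assumption:

    # hdfs as the secure datanode user is illustrative, not a default
    sudo HADOOP_SECURE_DN_USER=hdfs sbin/start-secure-dns.sh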
diff --git a/aarch64/sbin/start-yarn.cmd b/aarch64/sbin/start-yarn.cmd
new file mode 100755
index 0000000..989510b
--- /dev/null
+++ b/aarch64/sbin/start-yarn.cmd
@@ -0,0 +1,47 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+@rem
+setlocal enabledelayedexpansion
+
+echo starting yarn daemons
+
+if not defined HADOOP_BIN_PATH (
+ set HADOOP_BIN_PATH=%~dp0
+)
+
+if "%HADOOP_BIN_PATH:~-1%" == "\" (
+ set HADOOP_BIN_PATH=%HADOOP_BIN_PATH:~0,-1%
+)
+
+set DEFAULT_LIBEXEC_DIR=%HADOOP_BIN_PATH%\..\libexec
+if not defined HADOOP_LIBEXEC_DIR (
+ set HADOOP_LIBEXEC_DIR=%DEFAULT_LIBEXEC_DIR%
+)
+
+call %HADOOP_LIBEXEC_DIR%\yarn-config.cmd %*
+if "%1" == "--config" (
+ shift
+ shift
+)
+
+@rem start resourceManager
+start "Apache Hadoop Distribution" yarn resourcemanager
+@rem start nodeManager
+start "Apache Hadoop Distribution" yarn nodemanager
+@rem start proxyserver
+@rem start "Apache Hadoop Distribution" yarn proxyserver
+
+endlocal
diff --git a/aarch64/sbin/start-yarn.sh b/aarch64/sbin/start-yarn.sh
new file mode 100755
index 0000000..40b77fb
--- /dev/null
+++ b/aarch64/sbin/start-yarn.sh
@@ -0,0 +1,35 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Start all yarn daemons. Run this on the master node.
+
+echo "starting yarn daemons"
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/yarn-config.sh
+
+# start resourceManager
+"$bin"/yarn-daemon.sh --config $YARN_CONF_DIR start resourcemanager
+# start nodeManager
+"$bin"/yarn-daemons.sh --config $YARN_CONF_DIR start nodemanager
+# start proxyserver
+#"$bin"/yarn-daemon.sh --config $YARN_CONF_DIR start proxyserver
diff --git a/aarch64/sbin/stop-all.cmd b/aarch64/sbin/stop-all.cmd
new file mode 100755
index 0000000..1d22c79
--- /dev/null
+++ b/aarch64/sbin/stop-all.cmd
@@ -0,0 +1,52 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+
+setlocal enabledelayedexpansion
+
+@rem Stop all hadoop daemons. Run this on the master node.
+
+echo This script is deprecated. Use stop-dfs.cmd and stop-yarn.cmd instead.
+
+if not defined HADOOP_BIN_PATH (
+ set HADOOP_BIN_PATH=%~dp0
+)
+
+if "%HADOOP_BIN_PATH:~-1%" == "\" (
+ set HADOOP_BIN_PATH=%HADOOP_BIN_PATH:~0,-1%
+)
+
+set DEFAULT_LIBEXEC_DIR=%HADOOP_BIN_PATH%\..\libexec
+if not defined HADOOP_LIBEXEC_DIR (
+ set HADOOP_LIBEXEC_DIR=%DEFAULT_LIBEXEC_DIR%
+)
+
+call %HADOOP_LIBEXEC_DIR%\hadoop-config.cmd %*
+if "%1" == "--config" (
+ shift
+ shift
+)
+
+@rem stop hdfs daemons if hdfs is present
+if exist %HADOOP_HDFS_HOME%\sbin\stop-dfs.cmd (
+ call %HADOOP_HDFS_HOME%\sbin\stop-dfs.cmd --config %HADOOP_CONF_DIR%
+)
+
+@rem stop yarn daemons if yarn is present
+if exist %HADOOP_YARN_HOME%\sbin\stop-yarn.cmd (
+ call %HADOOP_YARN_HOME%\sbin\stop-yarn.cmd --config %HADOOP_CONF_DIR%
+)
+
+endlocal
diff --git a/aarch64/sbin/stop-all.sh b/aarch64/sbin/stop-all.sh
new file mode 100755
index 0000000..9a2fe98
--- /dev/null
+++ b/aarch64/sbin/stop-all.sh
@@ -0,0 +1,38 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Stop all hadoop daemons. Run this on the master node.
+
+echo "This script is deprecated. Use stop-dfs.sh and stop-yarn.sh instead."
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hadoop-config.sh
+
+# stop hdfs daemons if hdfs is present
+if [ -f "${HADOOP_HDFS_HOME}"/sbin/stop-dfs.sh ]; then
+ "${HADOOP_HDFS_HOME}"/sbin/stop-dfs.sh --config $HADOOP_CONF_DIR
+fi
+
+# stop yarn daemons if yarn is present
+if [ -f "${HADOOP_HDFS_HOME}"/sbin/stop-yarn.sh ]; then
+ "${HADOOP_HDFS_HOME}"/sbin/stop-yarn.sh --config $HADOOP_CONF_DIR
+fi
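A quick way to confirm the shutdown took effect is the JDK's jps process lister; the daemon class names below are the usual ones and are shown for illustration:

    # prints nothing once all HDFS and YARN daemons are down
    jps | grep -E 'NameNode|DataNode|SecondaryNameNode|ResourceManager|NodeManager'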
diff --git a/aarch64/sbin/stop-balancer.sh b/aarch64/sbin/stop-balancer.sh
new file mode 100755
index 0000000..df82456
--- /dev/null
+++ b/aarch64/sbin/stop-balancer.sh
@@ -0,0 +1,28 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hdfs-config.sh
+
+# Stop balancer daemon.
+# Run this on the machine where the balancer is running
+
+"$HADOOP_PREFIX"/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script "$bin"/hdfs stop balancer
diff --git a/aarch64/sbin/stop-dfs.cmd b/aarch64/sbin/stop-dfs.cmd
new file mode 100755
index 0000000..f0cf015
--- /dev/null
+++ b/aarch64/sbin/stop-dfs.cmd
@@ -0,0 +1,41 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+@rem
+setlocal enabledelayedexpansion
+
+if not defined HADOOP_BIN_PATH (
+ set HADOOP_BIN_PATH=%~dp0
+)
+
+if "%HADOOP_BIN_PATH:~-1%" == "\" (
+ set HADOOP_BIN_PATH=%HADOOP_BIN_PATH:~0,-1%
+)
+
+set DEFAULT_LIBEXEC_DIR=%HADOOP_BIN_PATH%\..\libexec
+if not defined HADOOP_LIBEXEC_DIR (
+ set HADOOP_LIBEXEC_DIR=%DEFAULT_LIBEXEC_DIR%
+)
+
+call %HADOOP_LIBEXEC_DIR%\hadoop-config.cmd %*
+if "%1" == "--config" (
+ shift
+ shift
+)
+
+Taskkill /FI "WINDOWTITLE eq Apache Hadoop Distribution - hadoop namenode"
+Taskkill /FI "WINDOWTITLE eq Apache Hadoop Distribution - hadoop datanode"
+
+endlocal
diff --git a/aarch64/sbin/stop-dfs.sh b/aarch64/sbin/stop-dfs.sh
new file mode 100755
index 0000000..6a622fa
--- /dev/null
+++ b/aarch64/sbin/stop-dfs.sh
@@ -0,0 +1,89 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hdfs-config.sh
+
+#---------------------------------------------------------
+# namenodes
+
+NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -namenodes)
+
+echo "Stopping namenodes on [$NAMENODES]"
+
+"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
+ --config "$HADOOP_CONF_DIR" \
+ --hostnames "$NAMENODES" \
+ --script "$bin/hdfs" stop namenode
+
+#---------------------------------------------------------
+# datanodes (using default slaves file)
+
+if [ -n "$HADOOP_SECURE_DN_USER" ]; then
+ echo \
+ "Attempting to stop secure cluster, skipping datanodes. " \
+ "Run stop-secure-dns.sh as root to complete shutdown."
+else
+ "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
+ --config "$HADOOP_CONF_DIR" \
+ --script "$bin/hdfs" stop datanode
+fi
+
+#---------------------------------------------------------
+# secondary namenodes (if any)
+
+SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 2>/dev/null)
+
+if [ -n "$SECONDARY_NAMENODES" ]; then
+ echo "Stopping secondary namenodes [$SECONDARY_NAMENODES]"
+
+ "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
+ --config "$HADOOP_CONF_DIR" \
+ --hostnames "$SECONDARY_NAMENODES" \
+ --script "$bin/hdfs" stop secondarynamenode
+fi
+
+#---------------------------------------------------------
+# quorumjournal nodes (if any)
+
+SHARED_EDITS_DIR=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.namenode.shared.edits.dir 2>&-)
+
+case "$SHARED_EDITS_DIR" in
+qjournal://*)
+ JOURNAL_NODES=$(echo "$SHARED_EDITS_DIR" | sed 's,qjournal://\([^/]*\)/.*,\1,g; s/;/ /g; s/:[0-9]*//g')
+ echo "Stopping journal nodes [$JOURNAL_NODES]"
+ "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
+ --config "$HADOOP_CONF_DIR" \
+ --hostnames "$JOURNAL_NODES" \
+ --script "$bin/hdfs" stop journalnode ;;
+esac
+
+#---------------------------------------------------------
+# ZK Failover controllers, if auto-HA is enabled
+AUTOHA_ENABLED=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.ha.automatic-failover.enabled)
+if [ "$(echo "$AUTOHA_ENABLED" | tr A-Z a-z)" = "true" ]; then
+ echo "Stopping ZK Failover Controllers on NN hosts [$NAMENODES]"
+ "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
+ --config "$HADOOP_CONF_DIR" \
+ --hostnames "$NAMENODES" \
+ --script "$bin/hdfs" stop zkfc
+fi
+# eof
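The sed pipeline above reduces a qjournal URI to a whitespace-separated host list. It can be checked in isolation; the hostnames are made up:

    echo 'qjournal://jn1.example.com:8485;jn2.example.com:8485/mycluster' \
      | sed 's,qjournal://\([^/]*\)/.*,\1,g; s/;/ /g; s/:[0-9]*//g'
    # prints: jn1.example.com jn2.example.com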
diff --git a/aarch64/sbin/stop-secure-dns.sh b/aarch64/sbin/stop-secure-dns.sh
new file mode 100755
index 0000000..fdd47c3
--- /dev/null
+++ b/aarch64/sbin/stop-secure-dns.sh
@@ -0,0 +1,33 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Run as root to stop secure datanodes in a security-enabled cluster.
+
+usage="Usage (run as root in order to stop secure datanodes): stop-secure-dns.sh"
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/hdfs-config.sh
+
+if [ "$EUID" -eq 0 ] && [ -n "$HADOOP_SECURE_DN_USER" ]; then
+ "$HADOOP_PREFIX"/sbin/hadoop-daemons.sh --config $HADOOP_CONF_DIR --script "$bin"/hdfs stop datanode
+else
+ echo $usage
+fi
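As with start-secure-dns.sh, the stop path requires root plus HADOOP_SECURE_DN_USER; run unprivileged it only echoes the usage line. A sketch with the same assumed hdfs user:

    sudo HADOOP_SECURE_DN_USER=hdfs sbin/stop-secure-dns.sh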
diff --git a/aarch64/sbin/stop-yarn.cmd b/aarch64/sbin/stop-yarn.cmd
new file mode 100755
index 0000000..0914337
--- /dev/null
+++ b/aarch64/sbin/stop-yarn.cmd
@@ -0,0 +1,47 @@
+@echo off
+@rem Licensed to the Apache Software Foundation (ASF) under one or more
+@rem contributor license agreements. See the NOTICE file distributed with
+@rem this work for additional information regarding copyright ownership.
+@rem The ASF licenses this file to You under the Apache License, Version 2.0
+@rem (the "License"); you may not use this file except in compliance with
+@rem the License. You may obtain a copy of the License at
+@rem
+@rem http://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+@rem
+setlocal enabledelayedexpansion
+
+echo stopping yarn daemons
+
+if not defined HADOOP_BIN_PATH (
+ set HADOOP_BIN_PATH=%~dp0
+)
+
+if "%HADOOP_BIN_PATH:~-1%" == "\" (
+ set HADOOP_BIN_PATH=%HADOOP_BIN_PATH:~0,-1%
+)
+
+set DEFAULT_LIBEXEC_DIR=%HADOOP_BIN_PATH%\..\libexec
+if not defined HADOOP_LIBEXEC_DIR (
+ set HADOOP_LIBEXEC_DIR=%DEFAULT_LIBEXEC_DIR%
+)
+
+call %HADOOP_LIBEXEC_DIR%\yarn-config.cmd %*
+if "%1" == "--config" (
+ shift
+ shift
+)
+
+@rem stop resourceManager
+Taskkill /FI "WINDOWTITLE eq Apache Hadoop Distribution - yarn resourcemanager"
+@rem stop nodeManager
+Taskkill /FI "WINDOWTITLE eq Apache Hadoop Distribution - yarn nodemanager"
+@rem stop proxy server
+Taskkill /FI "WINDOWTITLE eq Apache Hadoop Distribution - yarn proxyserver"
+
+endlocal
diff --git a/aarch64/sbin/stop-yarn.sh b/aarch64/sbin/stop-yarn.sh
new file mode 100755
index 0000000..a8498ef
--- /dev/null
+++ b/aarch64/sbin/stop-yarn.sh
@@ -0,0 +1,35 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Stop all yarn daemons. Run this on master node.
+
+echo "stopping yarn daemons"
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/yarn-config.sh
+
+# stop resourceManager
+"$bin"/yarn-daemon.sh --config $YARN_CONF_DIR stop resourcemanager
+# stop nodeManager
+"$bin"/yarn-daemons.sh --config $YARN_CONF_DIR stop nodemanager
+# stop proxy server
+"$bin"/yarn-daemon.sh --config $YARN_CONF_DIR stop proxyserver
diff --git a/aarch64/sbin/yarn-daemon.sh b/aarch64/sbin/yarn-daemon.sh
new file mode 100755
index 0000000..527ae42
--- /dev/null
+++ b/aarch64/sbin/yarn-daemon.sh
@@ -0,0 +1,160 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Runs a yarn command as a daemon.
+#
+# Environment Variables
+#
+# YARN_CONF_DIR Alternate conf dir. Default is ${HADOOP_YARN_HOME}/conf.
+# YARN_LOG_DIR Where log files are stored. Default is ${HADOOP_YARN_HOME}/logs.
+# YARN_MASTER host:path where hadoop code should be rsync'd from.
+# YARN_PID_DIR Where the pid files are stored. /tmp by default.
+# YARN_IDENT_STRING A string representing this instance of hadoop. $USER by default.
+# YARN_NICENESS The scheduling priority for daemons. Defaults to 0.
+##
+
+usage="Usage: yarn-daemon.sh [--config <conf-dir>] [--hosts hostlistfile] (start|stop) <yarn-command> "
+
+# if no args specified, show usage
+if [ $# -le 1 ]; then
+ echo $usage
+ exit 1
+fi
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/yarn-config.sh
+
+# get arguments
+startStop=$1
+shift
+command=$1
+shift
+
+hadoop_rotate_log ()
+{
+ log=$1;
+ num=5;
+ if [ -n "$2" ]; then
+ num=$2
+ fi
+ if [ -f "$log" ]; then # rotate logs
+ while [ $num -gt 1 ]; do
+ prev=`expr $num - 1`
+ [ -f "$log.$prev" ] && mv "$log.$prev" "$log.$num"
+ num=$prev
+ done
+ mv "$log" "$log.$num";
+ fi
+}
+
+if [ -f "${YARN_CONF_DIR}/yarn-env.sh" ]; then
+ . "${YARN_CONF_DIR}/yarn-env.sh"
+fi
+
+if [ "$YARN_IDENT_STRING" = "" ]; then
+ export YARN_IDENT_STRING="$USER"
+fi
+
+# get log directory
+if [ "$YARN_LOG_DIR" = "" ]; then
+ export YARN_LOG_DIR="$HADOOP_YARN_HOME/logs"
+fi
+
+if [ ! -w "$YARN_LOG_DIR" ] ; then
+ mkdir -p "$YARN_LOG_DIR"
+ chown $YARN_IDENT_STRING $YARN_LOG_DIR
+fi
+
+if [ "$YARN_PID_DIR" = "" ]; then
+ YARN_PID_DIR=/tmp
+fi
+
+# some variables
+export YARN_LOGFILE=yarn-$YARN_IDENT_STRING-$command-$HOSTNAME.log
+export YARN_ROOT_LOGGER=${YARN_ROOT_LOGGER:-INFO,RFA}
+log=$YARN_LOG_DIR/yarn-$YARN_IDENT_STRING-$command-$HOSTNAME.out
+pid=$YARN_PID_DIR/yarn-$YARN_IDENT_STRING-$command.pid
+YARN_STOP_TIMEOUT=${YARN_STOP_TIMEOUT:-5}
+
+# Set default scheduling priority
+if [ "$YARN_NICENESS" = "" ]; then
+ export YARN_NICENESS=0
+fi
+
+case $startStop in
+
+ (start)
+
+ [ -w "$YARN_PID_DIR" ] || mkdir -p "$YARN_PID_DIR"
+
+ if [ -f $pid ]; then
+ if kill -0 `cat $pid` > /dev/null 2>&1; then
+ echo $command running as process `cat $pid`. Stop it first.
+ exit 1
+ fi
+ fi
+
+ if [ "$YARN_MASTER" != "" ]; then
+ echo rsync from $YARN_MASTER
+ rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*' --exclude='contrib/hod/logs/*' $YARN_MASTER/ "$HADOOP_YARN_HOME"
+ fi
+
+ hadoop_rotate_log $log
+ echo starting $command, logging to $log
+ cd "$HADOOP_YARN_HOME"
+ nohup nice -n $YARN_NICENESS "$HADOOP_YARN_HOME"/bin/yarn --config $YARN_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null &
+ echo $! > $pid
+ sleep 1
+ head "$log"
+ # capture the ulimit output
+ echo "ulimit -a" >> $log
+ ulimit -a >> $log 2>&1
+ ;;
+
+ (stop)
+
+ if [ -f $pid ]; then
+ TARGET_PID=`cat $pid`
+ if kill -0 $TARGET_PID > /dev/null 2>&1; then
+ echo stopping $command
+ kill $TARGET_PID
+ sleep $YARN_STOP_TIMEOUT
+ if kill -0 $TARGET_PID > /dev/null 2>&1; then
+ echo "$command did not stop gracefully after $YARN_STOP_TIMEOUT seconds: killing with kill -9"
+ kill -9 $TARGET_PID
+ fi
+ else
+ echo no $command to stop
+ fi
+ else
+ echo no $command to stop
+ fi
+ ;;
+
+ (*)
+ echo $usage
+ exit 1
+ ;;
+
+esac
+
+
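Every knob documented in the header is a plain environment variable, so a one-off run can set them inline; the directories here are assumptions, not shipped defaults:

    # custom log and pid locations for a single resourcemanager start
    YARN_LOG_DIR=/var/log/yarn YARN_PID_DIR=/var/run/yarn \
      sbin/yarn-daemon.sh --config "$YARN_CONF_DIR" start resourcemanager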
diff --git a/aarch64/sbin/yarn-daemons.sh b/aarch64/sbin/yarn-daemons.sh
new file mode 100755
index 0000000..a7858e4
--- /dev/null
+++ b/aarch64/sbin/yarn-daemons.sh
@@ -0,0 +1,38 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Run a Yarn command on all slave hosts.
+
+usage="Usage: yarn-daemons.sh [--config confdir] [--hosts hostlistfile] [start
+|stop] command args..."
+
+# if no args specified, show usage
+if [ $# -le 1 ]; then
+ echo $usage
+ exit 1
+fi
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin"; pwd`
+
+DEFAULT_LIBEXEC_DIR="$bin"/../libexec
+HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
+. $HADOOP_LIBEXEC_DIR/yarn-config.sh
+
+exec "$bin/slaves.sh" --config $YARN_CONF_DIR cd "$HADOOP_YARN_HOME" \; "$bin/yarn-daemon.sh" --config $YARN_CONF_DIR "$@"
+
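Since the command fans out through slaves.sh, --hosts picks the machines it runs on. A sketch with a hypothetical host list file, one hostname per line:

    # nm-hosts.txt is a made-up file naming the nodemanager hosts
    sbin/yarn-daemons.sh --hosts nm-hosts.txt start nodemanager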
diff --git a/aarch64/share/doc/hadoop/common/CHANGES.txt b/aarch64/share/doc/hadoop/common/CHANGES.txt
new file mode 100644
index 0000000..6fefb12
--- /dev/null
+++ b/aarch64/share/doc/hadoop/common/CHANGES.txt
@@ -0,0 +1,13861 @@
+Hadoop Change Log
+
+Release 2.2.0 - 2013-10-13
+
+ INCOMPATIBLE CHANGES
+
+ HADOOP-10020. Disable symlinks temporarily (branch-2.1-beta only change)
+ (sanjay via suresh)
+
+ NEW FEATURES
+
+ HDFS-4817. Make HDFS advisory caching configurable on a per-file basis.
+ (Contributed by Colin Patrick McCabe)
+
+ IMPROVEMENTS
+
+ HADOOP-9948. Add a config value to CLITestHelper to skip tests on Windows.
+ (Chuan Liu via cnauroth)
+
+ HADOOP-9976. Different versions of avro and avro-maven-plugin (Karthik
+ Kambatla via Sandy Ryza)
+
+ HADOOP-9758. Provide configuration option for FileSystem/FileContext
+ symlink resolution (Andrew Wang via Colin Patrick McCabe)
+
+ HADOOP-8315. Support SASL-authenticated ZooKeeper in ActiveStandbyElector
+ (todd)
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+
+ HADOOP-9776. HarFileSystem.listStatus() returns invalid authority if port
+ number is empty. (Shanyu Zhao via ivanmi)
+
+ HADOOP-9761. ViewFileSystem#rename fails when using DistributedFileSystem.
+ (Andrew Wang via Colin Patrick McCabe)
+
+ HADOOP-10003. HarFileSystem.listLocatedStatus() fails.
+ (Jason Dere and suresh via suresh)
+
+ HADOOP-10017. Fix NPE in DFSClient#getDelegationToken when doing Distcp
+ from a secured cluster to an insecure cluster. (Haohui Mai via jing9)
+
+Release 2.1.1-beta - 2013-09-23
+
+ INCOMPATIBLE CHANGES
+
+ HADOOP-9944. Fix RpcRequestHeaderProto.callId to be sint32 rather than
+ uint32 since ipc.Client.CONNECTION_CONTEXT_CALL_ID is signed (i.e. -3)
+ (acmurthy)
+
+ NEW FEATURES
+
+ IMPROVEMENTS
+
+ HADOOP-9910. proxy server start and stop documentation wrong
+ (Andre Kelpe via harsh)
+
+ HADOOP-9787. ShutdownHelper util to shutdown threads and threadpools.
+ (Karthik Kambatla via Sandy Ryza)
+
+ HADOOP-9803. Add a generic type parameter to RetryInvocationHandler.
+ (szetszwo)
+
+ HADOOP-9821. ClientId should have getMsb/getLsb methods.
+ (Tsuyoshi OZAWA via jing9)
+
+ HADOOP-9435. Support building the JNI code against the IBM JVM.
+ (Tian Hong Wang via Colin Patrick McCabe)
+
+ HADOOP-9355. Abstract symlink tests to use either FileContext or
+ FileSystem. (Andrew Wang via Colin Patrick McCabe)
+
+ HADOOP-9833 move slf4j to version 1.7.5 (Kousuke Saruta via stevel)
+
+ HADOOP-9672. Upgrade Avro dependency to 1.7.4. (sandy via kihwal)
+
+ HADOOP-8814. Replace string equals "" by String#isEmpty().
+ (Brandon Li via suresh)
+
+ HADOOP-9789. Support server advertised kerberos principals (daryn)
+
+ HADOOP-9802. Support Snappy codec on Windows. (cnauroth)
+
+ HADOOP-9879. Move the version info of zookeeper dependencies to
+ hadoop-project/pom (Karthik Kambatla via Sandy Ryza)
+
+ HADOOP-9886. Turn warning message in RetryInvocationHandler to debug (arpit)
+
+ HADOOP-9906. Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes
+ public (Karthik Kambatla via Sandy Ryza)
+
+ HADOOP-9918. Add addIfService to CompositeService (Karthik Kambatla via
+ Sandy Ryza)
+
+ HADOOP-9945. HAServiceState should have a state for stopped services.
+ (Karthik Kambatla via atm)
+
+ HADOOP-9962. In order to avoid dependency divergence within Hadoop itself,
+ let's enable DependencyConvergence. (rvs via tucu)
+
+ HADOOP-9487 Deprecation warnings in Configuration should go to their
+ own log or otherwise be suppressible (Chu Tong via stevel)
+
+ HADOOP-9669. Reduce the number of byte array creations and copies in
+ XDR data manipulation. (Haohui Mai via brandonli)
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+
+ HADOOP-9916. Fix race in ipc.Client retry. (Binglin Chang via llu)
+
+ HADOOP-9768. chown and chgrp reject users and groups with spaces on platforms
+ where spaces are otherwise acceptable. (cnauroth)
+
+ HADOOP-9801. Configuration#writeXml uses platform defaulting encoding, which
+ may mishandle multi-byte characters. (cnauroth)
+
+ HADOOP-9806 PortmapInterface should check if the procedure is out-of-range
+ (brandonli)
+
+ HADOOP-9315. Port HADOOP-9249 hadoop-maven-plugins Clover fix to branch-2 to
+ fix build failures. (Dennis Y via cnauroth)
+
+ HADOOP-9831. Make checknative shell command accessible on Windows. (cnauroth)
+
+ HADOOP-9675. use svn:eol-style native for html to prevent line ending
+ issues (Colin Patrick McCabe)
+
+ HADOOP-9757. Har metadata cache can grow without limit (Cristina Abad via daryn)
+
+ HADOOP-9858. Remove unused private RawLocalFileSystem#execCommand method from
+ branch-2. (cnauroth)
+
+ HADOOP-9857. Tests block and sometimes timeout on Windows due to invalid
+ entropy source. (cnauroth)
+
+ HADOOP-9527. Add symlink support to LocalFileSystem on Windows.
+ (Arpit Agarwal)
+
+ HADOOP-9381. Document dfs cp -f option. (Keegan Witt, suresh via suresh)
+
+ HADOOP-9868. Server must not advertise kerberos realm. (daryn via kihwal)
+
+ HADOOP-9880. SASL changes from HADOOP-9421 breaks Secure HA NN. (daryn via
+ jing9)
+
+ HADOOP-9899. Remove the debug message, added by HADOOP-8855, from
+ KerberosAuthenticator. (szetszwo)
+
+ HADOOP-9894. Race condition in Shell leads to logged error stream handling
+ exceptions (Arpit Agarwal)
+
+ HADOOP-9774. RawLocalFileSystem.listStatus() return absolute paths when
+ input path is relative on Windows. (Shanyu Zhao via ivanmi)
+
+ HADOOP-9924. FileUtil.createJarWithClassPath() does not generate relative
+ classpath correctly. (Shanyu Zhao via ivanmi)
+
+ HADOOP-9932. Improper synchronization in RetryCache. (kihwal)
+
+ HADOOP-9958. Add old constructor back to DelegationTokenInformation to
+ unbreak downstream builds. (Andrew Wang)
+
+ HADOOP-9960. Upgrade Jersey version to 1.9. (Karthik Kambatla via atm)
+
+ HADOOP-9557. hadoop-client excludes commons-httpclient. (Lohit Vijayarenu via
+ cnauroth)
+
+ HADOOP-9350. Hadoop not building against Java7 on OSX
+ (Robert Kanter via stevel)
+
+ HADOOP-9961. versions of a few transitive dependencies diverged between hadoop
+ subprojects. (rvs via tucu)
+
+ HADOOP-9977. Hadoop services won't start with different keypass and
+ keystorepass when https is enabled. (cnauroth)
+
+Release 2.1.0-beta - 2013-08-22
+
+ INCOMPATIBLE CHANGES
+
+ HADOOP-8886. Remove KFS support. (eli)
+
+ HADOOP-9163. [RPC v9] The rpc msg in ProtobufRpcEngine.proto should be moved out to
+ avoid an extra copy (Sanjay Radia)
+
+ HADOOP-9151. [RPC v9] Include RPC error info in RpcResponseHeader instead of sending
+ it separately (sanjay Radia)
+
+ HADOOP-9380. [RPC v9] Add totalLength to rpc response (sanjay Radia)
+
+ HADOOP-9425. [RPC v9] Add error codes to rpc-response (sanjay Radia)
+
+ HADOOP-9194. [RPC v9] RPC support for QoS. (Junping Du via llu)
+
+ HADOOP-9630. [RPC v9] Remove IpcSerializationType. (Junping Du via llu)
+
+ HADOOP-9421. [RPC v9] Convert SASL to use ProtoBuf and provide
+ negotiation capabilities (daryn)
+
+ HADOOP-9688. Add globally unique Client ID to RPC requests. (suresh)
+
+ HADOOP-9683. [RPC v9] Wrap IpcConnectionContext in RPC headers (daryn)
+
+ HADOOP-9698. [RPC v9] Client must honor server's SASL negotiate response (daryn)
+
+ HADOOP-9832. [RPC v9] Add RPC header to client ping (daryn)
+
+ HADOOP-9820. [RPC v9] Wire protocol is insufficient to support multiplexing. (daryn via jitendra)
+
+ NEW FEATURES
+
+ HADOOP-9283. Add support for running the Hadoop client on AIX. (atm)
+
+ HADOOP-8415. Add getDouble() and setDouble() in
+ org.apache.hadoop.conf.Configuration (Jan van der Lugt via harsh)
+
+ HADOOP-9338. FsShell Copy Commands Should Optionally Preserve File
+ Attributes. (Nick White via atm)
+
+ HADOOP-8562. Enhancements to support Hadoop on Windows Server and Windows
+ Azure environments. (See breakdown of tasks below for subtasks and
+ contributors)
+
+ HADOOP-8469. Make NetworkTopology class pluggable. (Junping Du via
+ szetszwo)
+
+ HADOOP-8470. Add NetworkTopologyWithNodeGroup, a 4-layer implementation
+ of NetworkTopology. (Junping Du via szetszwo)
+
+ HADOOP-9763. Extends LightWeightGSet to support eviction of expired
+ elements. (Tsz Wo (Nicholas) SZE via jing9)
+
+ HADOOP-9762. RetryCache utility for implementing RPC retries.
+ (Suresh Srinivas via jing9)
+
+ HADOOP-9792. Retry the methods that are tagged @AtMostOnce along
+ with @Idempotent. (suresh)
+
+ HADOOP-9509. Implement ONCRPC and XDR. (brandonli)
+
+ HADOOP-9515. Add general interface for NFS and Mount. (brandonli)
+
+ IMPROVEMENTS
+
+ HADOOP-9164. Print paths of loaded native libraries in
+ NativeLibraryChecker. (Binglin Chang via llu)
+
+ HADOOP-9253. Capture ulimit info in the logs at service start time.
+ (Arpit Gupta via suresh)
+
+ HADOOP-8924. Add maven plugin alternative to shell script to save
+ package-info.java. (Chris Nauroth via suresh)
+
+ HADOOP-9117. replace protoc ant plugin exec with a maven plugin. (tucu)
+
+ HADOOP-9279. Document the need to build hadoop-maven-plugins for
+ eclipse and separate project builds. (Tsuyoshi Ozawa via suresh)
+
+ HADOOP-9334. Upgrade netty version. (Nicolas Liochon via suresh)
+
+ HADOOP-9343. Allow additional exceptions through the RPC layer. (sseth)
+
+ HADOOP-9318. When exiting on a signal, print the signal name first. (Colin
+ Patrick McCabe via atm)
+
+ HADOOP-9358. "Auth failed" log should include exception string (todd)
+
+ HADOOP-9401. CodecPool: Add counters for number of (de)compressors
+ leased out. (kkambatl via tucu)
+
+ HADOOP-9450. HADOOP_USER_CLASSPATH_FIRST is not honored; CLASSPATH
+ is PREpended instead of APpended. (Chris Nauroth and harsh via harsh)
+
+ HADOOP-9496. Bad merge of HADOOP-9450 on branch-2 breaks all bin/hadoop
+ calls that need HADOOP_CLASSPATH. (harsh)
+
+ HADOOP-9503. Remove sleep between IPC client connect timeouts.
+ (Varun Sharma via szetszwo)
+
+ HADOOP-9322. LdapGroupsMapping doesn't seem to set a timeout for
+ its directory search. (harsh)
+
+ HADOOP-9523. Provide a generic IBM java vendor flag in PlatformName.java
+ to support non-Sun JREs. (Tian Hong Wang via suresh)
+
+ HADOOP-9511. Adding support for additional input streams (FSDataInputStream
+ and RandomAccessFile) in SecureIOUtils so as to help YARN-578. (Omkar Vinit
+ Joshi via vinodkv)
+
+ HADOOP-9560. metrics2#JvmMetrics should have max memory size of JVM.
+ (Tsuyoshi Ozawa via suresh)
+
+ HADOOP-9140 Cleanup rpc PB protos (sanjay Radia)
+
+ HADOOP-9218 Document the Rpc-wrappers used internally (sanjay Radia)
+
+ HADOOP-9574. Added new methods in AbstractDelegationTokenSecretManager for
+ helping YARN ResourceManager to reuse code for RM restart. (Jian He via
+ vinodkv)
+
+ HADOOP-7391 Document Interface Classification from HADOOP-5073 (sanjay Radia)
+
+ HADOOP-9287. Parallel-testing hadoop-common (Andrey Klochkov via jlowe)
+
+ HADOOP-9604. Javadoc of FSDataOutputStream is slightly inaccurate. (Jingguo
+ Yao via atm)
+
+ HADOOP-9625. HADOOP_OPTS not picked up by hadoop command.
+ (Paul Han via arpit)
+
+ HADOOP-9649. Promoted YARN service life-cycle libraries into Hadoop Common
+ for usage across all Hadoop projects. (Zhijie Shen via vinodkv)
+
+ HADOOP-9517. Documented various aspects of compatibility for Apache
+ Hadoop. (Karthik Kambatla via acmurthy)
+
+ HADOOP-8608. Add Configuration API for parsing time durations. (cdouglas)
+
+ HADOOP-9619 Mark stability of .proto files (sanjay Radia)
+
+ HADOOP-9676. Make maximum RPC buffer size configurable (Colin Patrick
+ McCabe)
+
+ HADOOP-9691. RPC clients can generate call ID using AtomicInteger instead of
+ synchronizing on the Client instance. (cnauroth)
+
+ HADOOP-9661. Allow metrics sources to be extended. (sandyr via tucu)
+
+ HADOOP-9370. Write FSWrapper class to wrap FileSystem and FileContext for
+ better test coverage. (Andrew Wang via Colin Patrick McCabe)
+
+ HADOOP-9673. NetworkTopology: when a node can't be added, print out its
+ location for diagnostic purposes. (Colin Patrick McCabe)
+
+ HADOOP-9414. Refactor out FSLinkResolver and relevant helper methods.
+ (Andrew Wang via Colin Patrick McCabe)
+
+ HADOOP-9416. Add new symlink resolution methods in FileSystem and
+ FileSystemLinkResolver. (Andrew Wang via Colin Patrick McCabe)
+
+ HADOOP-9720. Rename Client#uuid to Client#clientId. (Arpit Agarwal via
+ suresh)
+
+ HADOOP-9734. Common protobuf definitions for GetUserMappingsProtocol,
+ RefreshAuthorizationPolicyProtocol and RefreshUserMappingsProtocol (jlowe)
+
+ HADOOP-9716. Rpc retries should use the same call ID as the original call.
+ (szetszwo)
+
+ HADOOP-9717. Add retry attempt count to the RPC requests. (jing9)
+
+ HADOOP-9751. Add clientId and retryCount to RpcResponseHeaderProto.
+ (szetszwo)
+
+ HADOOP-9754. Remove unnecessary "throws IOException/InterruptedException",
+ and fix generic and other javac warnings. (szetszwo)
+
+ HADOOP-9760. Move GSet and related classes to common from HDFS.
+ (suresh)
+
+ HADOOP-9756. Remove the deprecated getServer(..) methods from RPC.
+ (Junping Du via szetszwo)
+
+ HADOOP-9770. Make RetryCache#state non volatile. (suresh)
+
+ HADOOP-9786. RetryInvocationHandler#isRpcInvocation should support
+ ProtocolTranslator. (suresh and jing9)
+
+ OPTIMIZATIONS
+
+ HADOOP-9150. Avoid unnecessary DNS resolution attempts for logical URIs
+ (todd)
+
+ HADOOP-9845. Update protobuf to 2.5 from 2.4.x. (tucu)
+
+ HADOOP-9872. Improve protoc version handling and detection. (tucu)
+
+ BUG FIXES
+
+ HADOOP-9451. Fault single-layer config if node group topology is enabled.
+ (Junping Du via llu)
+
+ HADOOP-9294. GetGroupsTestBase fails on Windows. (Chris Nauroth via suresh)
+
+ HADOOP-9305. Add support for running the Hadoop client on 64-bit AIX. (atm)
+
+ HADOOP-9245. mvn clean without running mvn install before fails.
+ (Karthik Kambatla via suresh)
+
+ HADOOP-9246 Execution phase for hadoop-maven-plugin should be
+ process-resources (Karthik Kambatla and Chris Nauroth via jlowe)
+
+ HADOOP-9297. remove old record IO generation and tests. (tucu)
+
+ HADOOP-9154. SortedMapWritable#putAll() doesn't add key/value classes to
+ the map. (Karthik Kambatla via tomwhite)
+
+ HADOOP-9304. remove addition of avro generated-sources dirs to build. (tucu)
+
+ HADOOP-9267. hadoop -help, -h, --help should show usage instructions.
+ (Andrew Wang via atm)
+
+ HADOOP-8569. CMakeLists.txt: define _GNU_SOURCE and _LARGEFILE_SOURCE.
+ (Colin Patrick McCabe via atm)
+
+ HADOOP-9323. Fix typos in API documentation. (suresh)
+
+ HADOOP-7487. DF should throw a more reasonable exception when mount cannot
+ be determined. (Andrew Wang via atm)
+
+ HADOOP-8917. add LOCALE.US to toLowerCase in SecurityUtil.replacePattern.
+ (Arpit Gupta via suresh)
+
+ HADOOP-9342. Remove jline from distribution. (thw via tucu)
+
+ HADOOP-9230. TestUniformSizeInputFormat fails intermittently.
+ (kkambatl via tucu)
+
+ HADOOP-9349. Confusing output when running hadoop version from one hadoop
+ installation when HADOOP_HOME points to another. (sandyr via tucu)
+
+ HADOOP-9337. org.apache.hadoop.fs.DF.getMount() does not work on Mac OS.
+ (Ivan A. Veselovsky via atm)
+
+ HADOOP-9369. DNS#reverseDns() can return hostname with . appended at the
+ end. (Karthik Kambatla via atm)
+
+ HADOOP-9379. capture the ulimit info after printing the log to the
+ console. (Arpit Gupta via suresh)
+
+ HADOOP-9399. protoc maven plugin doesn't work on mvn 3.0.2 (todd)
+
+ HADOOP-9407. commons-daemon 1.0.3 dependency has bad group id causing
+ build issues. (Sangjin Lee via suresh)
+
+ HADOOP-9405. TestGridmixSummary#testExecutionSummarizer is broken. (Andrew
+ Wang via atm)
+
+ HADOOP-9430. TestSSLFactory fails on IBM JVM. (Amir Sanjar via suresh)
+
+ HADOOP-9125. LdapGroupsMapping threw CommunicationException after some
+ idle time. (Kai Zheng via atm)
+
+ HADOOP-9429. TestConfiguration fails with IBM JAVA. (Amir Sanjar via
+ suresh)
+
+ HADOOP-9222. Cover package with org.apache.hadoop.io.lz4 unit tests (Vadim
+ Bondarev via jlowe)
+
+ HADOOP-9233. Cover package org.apache.hadoop.io.compress.zlib with unit
+ tests (Vadim Bondarev via jlowe)
+
+ HADOOP-9211. Set default max heap size in HADOOP_CLIENT_OPTS to 512m
+ in order to avoid OOME. (Plamen Jeliazkov via shv)
+
+ HADOOP-9473. Typo in FileUtil copy() method. (Glen Mazza via suresh)
+
+ HADOOP-9504. MetricsDynamicMBeanBase has concurrency issues in
+ createMBeanInfo (Liang Xie via jlowe)
+
+ HADOOP-9455. HADOOP_CLIENT_OPTS appended twice causes JVM failures.
+ (Chris Nauroth via suresh)
+
+ HADOOP-9550. Remove aspectj dependency. (kkambatl via tucu)
+
+ HADOOP-9549. WebHdfsFileSystem hangs on close(). (daryn via kihwal)
+
+ HADOOP-9485. No default value in the code for
+ hadoop.rpc.socket.factory.class.default. (Colin Patrick McCabe via atm)
+
+ HADOOP-9459. ActiveStandbyElector can join election even before
+ Service HEALTHY, and results in null data at ActiveBreadCrumb.
+ (Vinay and todd via todd)
+
+ HADOOP-9307. BufferedFSInputStream.read returns wrong results
+ after certain seeks. (todd)
+
+ HADOOP-9220. Unnecessary transition to standby in ActiveStandbyElector.
+ (tom and todd via todd)
+
+ HADOOP-9563. Fix incompatibility introduced by HADOOP-9523.
+ (Tian Hong Wang via suresh)
+
+ HADOOP-9566. Performing direct read using libhdfs sometimes raises SIGPIPE
+ (which in turn throws SIGABRT) causing client crashes. (Colin Patrick
+ McCabe via atm)
+
+ HADOOP-9481. Broken conditional logic with HADOOP_SNAPPY_LIBRARY. (Vadim
+ Bondarev via atm)
+
+ HADOOP-9593. stack trace printed at ERROR for all yarn clients without
+ hadoop.home set (stevel)
+
+ HADOOP-8957. AbstractFileSystem#IsValidName should be overridden for
+ embedded file systems like ViewFs (Chris Nauroth via Sanjay Radia)
+
+ HADOOP-9607. Fixes in Javadoc build (Timothy St. Clair via cos)
+
+ HADOOP-9605. Update junit dependency. (Timothy St. Clair via cos)
+
+ HADOOP-9581. hadoop --config non-existent directory should result in error
+ (Ashwin Shankar via jlowe)
+
+ HADOOP-9638. Parallel test changes caused invalid test path for several HDFS
+ tests on Windows (Andrey Klochkov via cnauroth)
+
+ HADOOP-9632. TestShellCommandFencer will fail if there is a 'host' machine in
+ the network. (Chuan Liu via cnauroth)
+
+ HADOOP-9624. TestFSMainOperationsLocalFileSystem failed when the Hadoop test
+ root path has "X" in its name. (Xi Fang via cnauroth)
+
+ HADOOP-9439. JniBasedUnixGroupsMapping: fix some crash bugs. (Colin
+ Patrick McCabe)
+
+ HADOOP-9656. Gridmix unit tests fail on Windows and Linux. (Chuan Liu via
+ cnauroth)
+
+ HADOOP-9707. Fix register lists for crc32c inline assembly. (todd via
+ kihwal)
+
+ HADOOP-9738. TestDistCh fails. (jing9 via kihwal)
+
+ HADOOP-9759. Add support for NativeCodeLoader#getLibraryName on Windows.
+ (Chuan Liu via cnauroth)
+
+ HADOOP-9773. TestLightWeightCache should not set size limit to zero when
+ testing it. (szetszwo)
+
+ HADOOP-9507. LocalFileSystem rename() is broken in some cases when
+ destination exists. (cnauroth)
+
+ HADOOP-9816. RPC Sasl QOP is broken (daryn)
+
+ HADOOP-9850. RPC kerberos errors don't trigger relogin. (daryn via kihwal)
+
+ BREAKDOWN OF HADOOP-8562 SUBTASKS AND RELATED JIRAS
+
+ HADOOP-8924. Hadoop Common creating package-info.java must not depend on
+ sh. (Chris Nauroth via suresh)
+
+ HADOOP-8945. Merge winutils from branch-1-win to branch-trunk-win.
+ (Bikas Saha, Chuan Liu, Giridharan Kesavan, Ivan Mitic, and Steve Maine
+ ported by Chris Nauroth via suresh)
+
+ HADOOP-8946. winutils: compile codebase during Maven build on
+ branch-trunk-win. (Chris Nauroth via suresh)
+
+ HADOOP-8947. Merge FileUtil and Shell changes from branch-1-win to
+ branch-trunk-win to enable initial test pass. (Raja Aluri, David Lao,
+ Sumadhur Reddy Bolli, Ahmed El Baz, Kanna Karanam, Chuan Liu,
+ Ivan Mitic, Chris Nauroth, and Bikas Saha via suresh)
+
+ HADOOP-8954. "stat" executable not found on Windows. (Bikas Saha, Ivan Mitic
+ ported by Chris Nauroth via suresh)
+
+ HADOOP-8959. TestUserGroupInformation fails on Windows due to "id" executable
+ not found. (Bikas Saha, Ivan Mitic, ported by Chris Nauroth via suresh)
+
+ HADOOP-8955. "chmod" executable not found on Windows.
+ (Chris Nauroth via suresh)
+
+ HADOOP-8960. TestMetricsServlet fails on Windows. (Ivan Mitic via suresh)
+
+ HADOOP-8961. GenericOptionsParser URI parsing failure on Windows.
+ (Ivan Mitic via suresh)
+
+ HADOOP-8949. Remove FileUtil.CygPathCommand dead code. (Chris Nauroth via
+ suresh)
+
+ HADOOP-8956. FileSystem.primitiveMkdir failures on Windows cause multiple
+ test suites to fail. (Chris Nauroth via suresh)
+
+ HADOOP-8978. TestTrash fails on Windows. (Chris Nauroth via suresh)
+
+ HADOOP-8979. TestHttpServer fails on Windows. (Chris Nauroth via suresh)
+
+ HADOOP-8953. Shell PathData parsing failures on Windows. (Arpit Agarwal via
+ suresh)
+
+ HADOOP-8975. TestFileContextResolveAfs fails on Windows. (Chris Nauroth via
+ suresh)
+
+ HADOOP-8977. Multiple FsShell test failures on Windows. (Chris Nauroth via
+ suresh)
+
+ HADOOP-9005. Merge hadoop cmd line scripts from branch-1-win. (David Lao,
+ Bikas Saha, Lauren Yang, Chuan Liu, Thejas M Nair and Ivan Mitic via suresh)
+
+ HADOOP-9008. Building hadoop tarball fails on Windows. (Chris Nauroth via
+ suresh)
+
+ HADOOP-9011. saveVersion.py does not include branch in version annotation.
+ (Chris Nauroth via suresh)
+
+ HADOOP-9110. winutils ls off-by-one error indexing MONTHS array can cause
+ access violation. (Chris Nauroth via suresh)
+
+ HADOOP-9056. Build native library on Windows. (Chuan Liu, Arpit Agarwal via
+ suresh)
+
+ HADOOP-9144. Fix findbugs warnings. (Chris Nauroth via suresh)
+
+ HADOOP-9081. Add TestWinUtils. (Chuan Liu, Ivan Mitic, Chris Nauroth,
+ and Bikas Saha via suresh)
+
+ HADOOP-9146. Fix sticky bit regression on branch-trunk-win.
+ (Chris Nauroth via suresh)
+
+ HADOOP-9266. Fix javac, findbugs, and release audit warnings on
+ branch-trunk-win. (Chris Nauroth via suresh)
+
+ HADOOP-9270. Remove a stale java comment from FileUtil. (Chris Nauroth via
+ szetszwo)
+
+ HADOOP-9271. Revert Python build scripts from branch-trunk-win.
+ (Chris Nauroth via suresh)
+
+ HADOOP-9313. Remove spurious mkdir from hadoop-config.cmd.
+ (Ivan Mitic via suresh)
+
+ HADOOP-9309. Test failures on Windows due to UnsatisfiedLinkError
+ in NativeCodeLoader#buildSupportsSnappy. (Arpit Agarwal via suresh)
+
+ HADOOP-9347. Add instructions to BUILDING.txt describing how to
+ build on Windows. (Chris Nauroth via suresh)
+
+ HADOOP-9348. Address TODO in winutils to add more command line usage
+ and examples. (Chris Nauroth via suresh)
+
+ HADOOP-9354. Windows native project files missing license headers.
+ (Chris Nauroth via suresh)
+
+ HADOOP-9356. Remove remaining references to cygwin/cygpath from scripts.
+ (Chris Nauroth via suresh)
+
+ HADOOP-9232. JniBasedUnixGroupsMappingWithFallback fails on Windows
+ with UnsatisfiedLinkError. (Ivan Mitic via suresh)
+
+ HADOOP-9368. Add timeouts to new tests in branch-trunk-win.
+ (Arpit Agarwal via suresh)
+
+ HADOOP-9373. Merge CHANGES.branch-trunk-win.txt to CHANGES.txt.
+ (suresh)
+
+ HADOOP-9372. Fix bad timeout annotations on tests.
+ (Arpit Agarwal via suresh)
+
+ HADOOP-9376. TestProxyUserFromEnv fails on a Windows domain joined machine.
+ (Ivan Mitic via suresh)
+
+ HADOOP-9365. TestHAZKUtil fails on Windows. (Ivan Mitic via suresh)
+
+ HADOOP-9364. PathData#expandAsGlob does not return correct results for
+ absolute paths on Windows. (Ivan Mitic via suresh)
+
+ HADOOP-8973. DiskChecker cannot reliably detect an inaccessible disk on
+ Windows with NTFS ACLs. (Chris Nauroth via suresh)
+
+ HADOOP-9388. TestFsShellCopy fails on Windows. (Ivan Mitic via suresh)
+
+ HADOOP-9387. Fix DF so that it won't execute a shell command on Windows
+ to compute the file system/mount point. (Ivan Mitic via szetszwo)
+
+ HADOOP-9353. Activate native-win maven profile by default on Windows.
+ (Arpit Agarwal via szetszwo)
+
+ HADOOP-9437. TestNativeIO#testRenameTo fails on Windows due to assumption
+ that POSIX errno is embedded in NativeIOException. (Chris Nauroth via
+ suresh)
+
+ HADOOP-9443. Port winutils static code analysis change to trunk.
+ (Chuan Liu via suresh)
+
+ HADOOP-9290. Some tests cannot load native library on windows.
+ (Chris Nauroth via suresh)
+
+ HADOOP-9500. TestUserGroupInformation#testGetServerSideGroups fails on
+ Windows due to failure to find winutils.exe. (Chris Nauroth via suresh)
+
+ HADOOP-9490. LocalFileSystem#reportChecksumFailure not closing the
+ checksum file handle before rename. (Ivan Mitic via suresh)
+
+ HADOOP-9524. Fix ShellCommandFencer to work on Windows.
+ (Arpit Agarwal via suresh)
+
+ HADOOP-9413. Add common utils for File#setReadable/Writable/Executable &
+ File#canRead/Write/Execute that work cross-platform. (Ivan Mitic via suresh)
+
+ HADOOP-9532. HADOOP_CLIENT_OPTS is appended twice by Windows cmd scripts.
+ (Chris Nauroth via suresh)
+
+ HADOOP-9043. Disallow in winutils creating symlinks with forwards slashes.
+ (Chris Nauroth and Arpit Agarwal via suresh)
+
+ HADOOP-9483. winutils support for readlink command.
+ (Arpit Agarwal via suresh)
+
+ HADOOP-9488. FileUtil#createJarWithClassPath only substitutes environment
+ variables from current process environment/does not support overriding
+ when launching new process (Chris Nauroth via bikas)
+
+ HADOOP-9556. disable HA tests on Windows that fail due to ZooKeeper client
+ connection management bug. (Chris Nauroth via suresh)
+
+ HADOOP-9553. TestAuthenticationToken fails on Windows.
+ (Arpit Agarwal via suresh)
+
+ HADOOP-9397. Incremental dist tar build fails. (Chris Nauroth via jlowe)
+
+ HADOOP-9131. Turn off TestLocalFileSystem#testListStatusWithColons on
+ Windows. (Chris Nauroth via suresh)
+
+ HADOOP-9526. TestShellCommandFencer and TestShell fail on Windows.
+ (Arpit Agarwal via suresh)
+
+ HADOOP-8982. TestSocketIOWithTimeout fails on Windows.
+ (Chris Nauroth via suresh)
+
+ HADOOP-8958. ViewFs:Non absolute mount name failures when running
+ multiple tests on Windows. (Chris Nauroth via suresh)
+
+ HADOOP-9599. hadoop-config.cmd doesn't set JAVA_LIBRARY_PATH correctly.
+ (Mostafa Elhemali via ivanmi)
+
+ HADOOP-9637. Adding Native Fstat for Windows as needed by YARN. (Chuan Liu
+ via cnauroth)
+
+ HADOOP-9264. Port change to use Java untar API on Windows from
+ branch-1-win to trunk. (Chris Nauroth via suresh)
+
+ HADOOP-9678. TestRPC#testStopsAllThreads intermittently fails on Windows.
+ (Ivan Mitic via cnauroth)
+
+ HADOOP-9681. FileUtil.unTarUsingJava() should close the InputStream upon
+ finishing. (Chuan Liu via cnauroth)
+
+ HADOOP-9665. Fixed BlockDecompressorStream#decompress to return -1 rather
+ than throw EOF at end of file. (Zhijie Shen via acmurthy)
+
+ HADOOP-8440. HarFileSystem.decodeHarURI fails for URIs whose host contains
+ numbers. (Ivan Mitic via cnauroth)
+
+ HADOOP-9643. org.apache.hadoop.security.SecurityUtil calls
+ toUpperCase(Locale.getDefault()) as well as toLowerCase(Locale.getDefault())
+ on hadoop.security.authentication value. (markrmiller@gmail.com via tucu)
+
+ HADOOP-9701. mvn site ambiguous links in hadoop-common. (kkambatl via tucu)
+
+Release 2.0.5-alpha - 2013-06-06
+
+ INCOMPATIBLE CHANGES
+
+ NEW FEATURES
+
+ IMPROVEMENTS
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+
+ HADOOP-9407. commons-daemon 1.0.3 dependency has bad group id causing
+ build issues. (Sangjin Lee via suresh)
+
+Release 2.0.4-alpha - 2013-04-25
+
+ INCOMPATIBLE CHANGES
+
+ NEW FEATURES
+
+ IMPROVEMENTS
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+
+ HADOOP-9467. Metrics2 record filter should check name as well as tags.
+ (Chris Nauroth and Ganeshan Iyler via llu)
+
+ HADOOP-9406. hadoop-client leaks dependency on JDK tools jar. (tucu)
+
+ HADOOP-9301. hadoop client servlet/jsp/jetty/tomcat JARs creating
+ conflicts in Oozie & HttpFS. (tucu)
+
+ HADOOP-9299. kerberos name resolution is kicking in even when kerberos
+ is not configured (daryn)
+
+ HADOOP-9408. misleading description for net.topology.table.file.name
+ property in core-default.xml. (rajeshbabu via suresh)
+
+ HADOOP-9444. Modify hadoop-policy.xml to replace unexpanded variables to a
+ default value of '*'. (Roman Shaposhnik via vinodkv)
+
+ HADOOP-9471. hadoop-client wrongfully excludes jetty-util JAR,
+ breaking webhdfs. (tucu)
+
+Release 2.0.3-alpha - 2013-02-06
+
+ INCOMPATIBLE CHANGES
+
+ HADOOP-8999. SASL negotiation is flawed (daryn)
+
+ NEW FEATURES
+
+ HADOOP-8561. Introduce HADOOP_PROXY_USER for secure impersonation in child
+ hadoop client processes. (Yu Gao via llu)
+
+ HADOOP-8597. Permit FsShell's text command to read Avro files.
+ (Ivan Vladimirov Ivanov via cutting)
+
+ HADOOP-9020. Add a SASL PLAIN server (daryn via bobby)
+
+ HADOOP-9090. Support on-demand publish of metrics. (Mostafa Elhemali via
+ suresh)
+
+ HADOOP-9054. Add AuthenticationHandler that uses Kerberos but allows for
+ an alternate form of authentication for browsers. (rkanter via tucu)
+
+ IMPROVEMENTS
+
+ HADOOP-8789. Tests setLevel(Level.OFF) should be Level.ERROR.
+ (Andy Isaacson via eli)
+
+ HADOOP-8755. Print thread dump when tests fail due to timeout. (Andrey
+ Klochkov via atm)
+
+ HADOOP-8806. libhadoop.so: dlopen should be better at locating
+ libsnappy.so, etc. (Colin Patrick McCabe via eli)
+
+ HADOOP-8812. ExitUtil#terminate should print Exception#toString. (eli)
+
+ HADOOP-8736. Add Builder for building RPC server. (Brandon Li via Suresh)
+
+ HDFS-3957. Change MutableQuantiles to use a shared thread for rolling
+ over metrics. (Andrew Wang via todd)
+
+ HADOOP-8851. Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked
+ tests. (Ivan A. Veselovsky via atm)
+
+ HADOOP-8783. Improve RPC.Server's digest auth (daryn)
+
+ HADOOP-8889. Upgrade to Surefire 2.12.3 (todd)
+
+ HADOOP-8804. Improve Web UIs when the wildcard address is used.
+ (Senthil Kumar via eli)
+
+ HADOOP-8894. GenericTestUtils.waitFor should dump thread stacks on timeout
+ (todd)
+
+ HADOOP-8909. Hadoop Common Maven protoc calls must not depend on external
+ sh script. (Chris Nauroth via suresh)
+
+ HADOOP-8911. CRLF characters in source and text files.
+ (Raja Aluri via suresh)
+
+ HADOOP-8912. Add .gitattributes file to prevent CRLF and LF mismatches
+ for source and text files. (Raja Aluri via suresh)
+
+ HADOOP-8784. Improve IPC.Client's token use (daryn)
+
+ HADOOP-8929. Add toString, other improvements for SampleQuantiles (todd)
+
+ HADOOP-8922. Provide alternate JSONP output for JMXJsonServlet to allow
+ javascript in browser (Damien Hardy via bobby)
+
+ HADOOP-8931. Add Java version to startup message. (eli)
+
+ HADOOP-8925. Remove the packaging. (eli)
+
+ HADOOP-8985. Add namespace declarations in .proto files for languages
+ other than java. (Binglin Chan via suresh)
+
+ HADOOP-9009. Add SecurityUtil methods to get/set authentication method
+ (daryn via bobby)
+
+ HADOOP-9010. Map UGI authenticationMethod to RPC authMethod (daryn via
+ bobby)
+
+ HADOOP-9013. UGI should not hardcode loginUser's authenticationType (daryn
+ via bobby)
+
+ HADOOP-9014. Standardize creation of SaslRpcClients (daryn via bobby)
+
+ HADOOP-9015. Standardize creation of SaslRpcServers (daryn via bobby)
+
+ HADOOP-8860. Split MapReduce and YARN sections in documentation navigation.
+ (tomwhite via tucu)
+
+ HADOOP-9021. Enforce configured SASL method on the server (daryn via
+ bobby)
+
+ HADOOP-8998. set Cache-Control no-cache header on all dynamic content. (tucu)
+
+ HADOOP-9035. Generalize setup of LoginContext (daryn via bobby)
+
+ HADOOP-9093. Move all the Exception in PathExceptions to o.a.h.fs package.
+ (suresh)
+
+ HADOOP-9042. Add a test for umask in FileSystemContractBaseTest.
+ (Colin McCabe via eli)
+
+ HADOOP-9127. Update documentation for ZooKeeper Failover Controller.
+ (Daisuke Kobayashi via atm)
+
+ HADOOP-9004. Allow security unit tests to use external KDC. (Stephen Chu
+ via suresh)
+
+ HADOOP-9147. Add missing fields to FileStatus.toString.
+ (Jonathan Allen via suresh)
+
+ HADOOP-8427. Convert Forrest docs to APT, incremental. (adi2 via tucu)
+
+ HADOOP-9162. Add utility to check native library availability.
+ (Binglin Chang via suresh)
+
+ HADOOP-9173. Add security token protobuf definition to common and
+ use it in hdfs. (suresh)
+
+ HADOOP-9119. Add test to FileSystemContractBaseTest to verify integrity
+ of overwritten files. (Steve Loughran via suresh)
+
+ HADOOP-9192. Move token related request/response messages to common.
+ (suresh)
+
+ HADOOP-8712. Change default hadoop.security.group.mapping to
+ JniBasedUnixGroupsNetgroupMappingWithFallback (Robert Parker via todd)
+
+ HADOOP-9106. Allow configuration of IPC connect timeout.
+ (Robert Parker via suresh)
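+
+ For illustration, a minimal sketch of using the new knob from client
+ code; the key name and millisecond unit are assumptions based on this
+ entry:
+
+   import org.apache.hadoop.conf.Configuration;
+
+   Configuration conf = new Configuration();
+   // fail fast after 5 seconds rather than waiting for the default
+   conf.setInt("ipc.client.connect.timeout", 5000);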
+
+ HADOOP-9216. CompressionCodecFactory#getCodecClasses should trim the
+ result of parsing by Configuration. (Tsuyoshi Ozawa via todd)
+
+ HADOOP-9231. Parametrize staging URL for the uniformity of
+ distributionManagement. (Konstantin Boudnik via suresh)
+
+ HADOOP-9276. Allow BoundedByteArrayOutputStream to be resettable.
+ (Arun Murthy via hitesh)
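+
+ A minimal sketch of reusing one buffer across records, assuming the
+ reset() method on org.apache.hadoop.io.BoundedByteArrayOutputStream:
+
+   import org.apache.hadoop.io.BoundedByteArrayOutputStream;
+
+   BoundedByteArrayOutputStream out =
+       new BoundedByteArrayOutputStream(64 * 1024);
+   byte[] payload = "record".getBytes();
+   out.write(payload, 0, payload.length);
+   // consume out.getBuffer() up to out.size(), then reuse the buffer
+   out.reset();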
+
+ HADOOP-7688. Add servlet handler check in HttpServer.start().
+ (Uma Maheswara Rao G via szetszwo)
+
+ HADOOP-7886. Add toString to FileStatus. (SreeHari via jghoman)
+
+ OPTIMIZATIONS
+
+ HADOOP-8866. SampleQuantiles#query is O(N^2) instead of O(N). (Andrew Wang
+ via atm)
+
+ HADOOP-8926. hadoop.util.PureJavaCrc32 cache hit-ratio is low for static
+ data (Gopal V via bobby)
+
+ BUG FIXES
+
+ HADOOP-9041. FsUrlStreamHandlerFactory could cause an infinite loop in
+ FileSystem initialization. (Yanbo Liang and Radim Kolar via llu)
+
+ HADOOP-8418. Update UGI Principal classes name for running with
+ IBM JDK on 64 bits Windows. (Yu Gao via eyang)
+
+ HADOOP-8795. BASH tab completion doesn't look in PATH, assumes path to
+ executable is specified. (Sean Mackrory via atm)
+
+ HADOOP-8780. Update DeprecatedProperties apt file. (Ahmed Radwan via
+ tomwhite)
+
+ HADOOP-8833. fs -text should make sure to call inputstream.seek(0)
+ before using input stream. (tomwhite and harsh)
+
+ HADOOP-8791. Fix rm command documentation to indicate it deletes
+ files and not directories. (Jing Zhao via suresh)
+
+ HADOOP-8855. SSL-based image transfer does not work when Kerberos
+ is disabled. (todd via eli)
+
+ HADOOP-8616. ViewFS configuration requires a trailing slash. (Sandy Ryza
+ via atm)
+
+ HADOOP-8756. Fix SEGV when libsnappy is in java.library.path but
+ not LD_LIBRARY_PATH. (Colin Patrick McCabe via eli)
+
+ HADOOP-8881. FileBasedKeyStoresFactory initialization logging should
+ be debug not info. (tucu)
+
+ HADOOP-8913. hadoop-metrics2.properties should give units in comment
+ for sampling period. (Sandy Ryza via suresh)
+
+ HADOOP-8878. Uppercase namenode hostname causes hadoop dfs calls with
+ webhdfs filesystem and fsck to fail when security is on.
+ (Arpit Gupta via suresh)
+
+ HADOOP-8901. GZip and Snappy support may not work without unversioned
+ libraries (Colin Patrick McCabe via todd)
+
+ HADOOP-8883. Anonymous fallback in KerberosAuthenticator is broken.
+ (rkanter via tucu)
+
+ HADOOP-8900. BuiltInGzipDecompressor throws IOException - stored gzip size
+ doesn't match decompressed size. (Andy Isaacson via suresh)
+
+ HADOOP-8948. TestFileUtil.testGetDU fails on Windows due to incorrect
+ assumption of line separator. (Chris Nauroth via suresh)
+
+ HADOOP-8951. RunJar to fail with user-comprehensible error
+ message if jar missing. (stevel via suresh)
+
+ HADOOP-8713. TestRPCCompatibility fails intermittently with JDK7
+ (Trevor Robinson via tgraves)
+
+ HADOOP-9012. IPC Client sends wrong connection context (daryn via bobby)
+
+ HADOOP-7115. Add a cache for getpwuid_r and getpwgid_r calls (tucu)
+
+ HADOOP-6607. Add different variants of non caching HTTP headers. (tucu)
+
+ HADOOP-9049. DelegationTokenRenewer needs to be a singleton and FileSystems
+ should register/deregister to/from it. (Karthik Kambatla via tomwhite)
+
+ HADOOP-9064. Augment DelegationTokenRenewer API to cancel the tokens on
+ calls to removeRenewAction. (kkambatl via tucu)
+
+ HADOOP-9103. UTF8 class does not properly decode Unicode characters
+ outside the basic multilingual plane. (todd)
+
+ HADOOP-9070. Kerberos SASL server cannot find kerberos key. (daryn via atm)
+
+ HADOOP-6762. Exception while doing RPC I/O closes channel
+ (Sam Rash and todd via todd)
+
+ HADOOP-9126. FormatZK and ZKFC startup can fail due to zkclient connection
+ establishment delay. (Rakesh R and todd via todd)
+
+ HADOOP-9113. o.a.h.fs.TestDelegationTokenRenewer is failing intermittently.
+ (Karthik Kambatla via eli)
+
+ HADOOP-9135. JniBasedUnixGroupsMappingWithFallback should log at debug
+ rather than info during fallback. (Colin Patrick McCabe via todd)
+
+ HADOOP-9152. HDFS can report negative DFS Used on clusters with very small
+ amounts of data. (Brock Noland via atm)
+
+ HADOOP-9153. Support createNonRecursive in ViewFileSystem.
+ (Sandy Ryza via tomwhite)
+
+ HADOOP-9181. Set daemon flag for HttpServer's QueuedThreadPool.
+ (Liang Xie via suresh)
+
+ HADOOP-9155. FsPermission should have different default value, 777 for
+ directory and 666 for file. (Binglin Chang via atm)
+
+ HADOOP-9183. Potential deadlock in ActiveStandbyElector. (tomwhite)
+
+ HADOOP-9203. RPCCallBenchmark should find a random available port.
+ (Andrew Purtell via suresh)
+
+ HADOOP-9178. src/main/conf is missing hadoop-policy.xml.
+ (Sandy Ryza via eli)
+
+ HADOOP-8816. HTTP Error 413 full HEAD if using kerberos authentication.
+ (moritzmoeller via tucu)
+
+ HADOOP-9212. Potential deadlock in FileSystem.Cache/IPC/UGI. (tomwhite)
+
+ HADOOP-8589 ViewFs tests fail when tests and home dirs are nested.
+ (sanjay Radia)
+
+ HADOOP-9193. hadoop script can inadvertently expand wildcard arguments
+ when delegating to hdfs script. (Andy Isaacson via todd)
+
+ HADOOP-9215. when using cmake-2.6, libhadoop.so doesn't get created
+ (only libhadoop.so.1.0.0) (Colin Patrick McCabe via todd)
+
+ HADOOP-8857. hadoop.http.authentication.signature.secret.file docs
+ should not state that secret is randomly generated. (tucu)
+
+ HADOOP-9190. packaging docs is broken. (Andy Isaacson via tgraves)
+
+ HADOOP-9221. Convert remaining xdocs to APT. (Andy Isaacson via atm)
+
+ HADOOP-8981. TestMetricsSystemImpl fails on Windows. (Xuan Gong via suresh)
+
+ HADOOP-9124. SortedMapWritable violates contract of Map interface for
+ equals() and hashCode(). (Surenkumar Nihalani via tomwhite)
+
+ HADOOP-9278. Fix the file handle leak in HarMetaData.parseMetaData() in
+ HarFileSystem. (Chris Nauroth via szetszwo)
+
+ HADOOP-9252. In StringUtils, humanReadableInt(..) has a race condition and
+ the synchronization of limitDecimalTo2(double) can be avoided. (szetszwo)
+
+ HADOOP-9260. Hadoop version may not be correct when starting name node or
+ data node. (Chris Nauroth via jlowe)
+
+ HADOOP-9289. FsShell rm -f fails for non-matching globs. (Daryn Sharp via
+ suresh)
+
+Release 2.0.2-alpha - 2012-09-07
+
+ INCOMPATIBLE CHANGES
+
+ HADOOP-8388. Remove unused BlockLocation serialization.
+ (Colin Patrick McCabe via eli)
+
+ HADOOP-8689. Make trash a server side configuration option. (eli)
+
+ HADOOP-8710. Remove ability for users to easily run the trash emptier. (eli)
+
+ HADOOP-8794. Rename YARN_HOME to HADOOP_YARN_HOME. (vinodkv via acmurthy)
+
+ NEW FEATURES
+
+ HDFS-3042. Automatic failover support for NameNode HA (todd)
+ (see dedicated section below for breakdown of subtasks)
+
+ HADOOP-8135. Add ByteBufferReadable interface to FSDataInputStream. (Henry
+ Robinson via atm)
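+
+ A sketch of the direct-ByteBuffer read path this adds; note that not
+ every underlying stream implements ByteBufferReadable, in which case
+ read(ByteBuffer) is expected to throw UnsupportedOperationException:
+
+   import java.nio.ByteBuffer;
+   import org.apache.hadoop.conf.Configuration;
+   import org.apache.hadoop.fs.FSDataInputStream;
+   import org.apache.hadoop.fs.FileSystem;
+   import org.apache.hadoop.fs.Path;
+
+   FileSystem fs = FileSystem.get(new Configuration());
+   FSDataInputStream in = fs.open(new Path("/data/file"));
+   ByteBuffer buf = ByteBuffer.allocate(4096);
+   int n = in.read(buf);  // bytes read, or -1 at end of stream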
+
+ HADOOP-8458. Add management hook to AuthenticationHandler to enable
+ delegation token operations support (tucu)
+
+ HADOOP-8465. hadoop-auth should support ephemeral authentication (tucu)
+
+ HADOOP-8644. AuthenticatedURL should be able to use SSLFactory. (tucu)
+
+ HADOOP-8581. add support for HTTPS to the web UIs. (tucu)
+
+ HADOOP-7754. Expose file descriptors from Hadoop-wrapped local
+ FileSystems (todd and ahmed via tucu)
+
+ HADOOP-8240. Add a new API to allow users to specify a checksum type
+ on FileSystem.create(..). (Kihwal Lee via szetszwo)
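+
+ A sketch of requesting a checksum type at create time; the exact
+ overload and argument order shown here are assumptions:
+
+   import java.util.EnumSet;
+   import org.apache.hadoop.conf.Configuration;
+   import org.apache.hadoop.fs.*;
+   import org.apache.hadoop.fs.permission.FsPermission;
+   import org.apache.hadoop.util.DataChecksum;
+
+   FileSystem fs = FileSystem.get(new Configuration());
+   FSDataOutputStream out = fs.create(new Path("/data/out"),
+       FsPermission.getFileDefault(), EnumSet.of(CreateFlag.CREATE),
+       4096, (short) 3, 128L * 1024 * 1024, null,
+       new Options.ChecksumOpt(DataChecksum.Type.CRC32C, 512));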
+
+ IMPROVEMENTS
+
+ HADOOP-8340. SNAPSHOT build versions should compare as less than their eventual
+ final release. (todd)
+
+ HADOOP-8361. Avoid out-of-memory problems when deserializing strings.
+ (Colin Patrick McCabe via eli)
+
+ HADOOP-8224. Don't hardcode hdfs.audit.logger in the scripts.
+ (Tomohiko Kinebuchi via eli)
+
+ HADOOP-8398. Cleanup BlockLocation. (eli)
+
+ HADOOP-8422. Deprecate FileSystem#getDefault* and getServerDefault
+ methods that don't take a Path argument. (eli)
+
+ HADOOP-8323. Add javadoc and tests for Text.clear() behavior (harsh)
+
+ HADOOP-8358. Config-related WARN for dfs.web.ugi can be avoided. (harsh)
+
+ HADOOP-8450. Remove src/test/system. (eli)
+
+ HADOOP-8244. Improve comments on ByteBufferReadable.read. (Henry Robinson
+ via atm)
+
+ HADOOP-8368. Use CMake rather than autotools to build native code (cmccabe via tucu)
+
+ HADOOP-8524. Allow users to get source of a Configuration
+ parameter (harsh)
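+
+ A sketch of querying where an effective value came from; the accessor
+ name below (getPropertySources) matches later releases and is an
+ assumption here:
+
+   import org.apache.hadoop.conf.Configuration;
+
+   Configuration conf = new Configuration();
+   String[] sources = conf.getPropertySources("fs.defaultFS");
+   // e.g. the resource ("core-site.xml") that last set the value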
+
+ HADOOP-8449. hadoop fs -text fails with compressed sequence files
+ with the codec file extension (harsh)
+
+ HADOOP-6802. Remove FS_CLIENT_BUFFER_DIR_KEY = "fs.client.buffer.dir"
+ from CommonConfigurationKeys.java (not used, deprecated)
+ (Sho Shimauchi via harsh)
+
+ HADOOP-3450. Add tests to Local Directory Allocator for
+ asserting their URI-returning capability (Sho Shimauchi via harsh)
+
+ HADOOP-8463. hadoop.security.auth_to_local needs a key definition and doc.
+ (Madhukara Phatak via eli)
+
+ HADOOP-8533. Remove unused parallel call capability in RPC.
+ (Brandon Li via suresh)
+
+ HADOOP-8423. MapFile.Reader.get() crashes jvm or throws
+ EOFException on Snappy or LZO block-compressed data
+ (todd via harsh)
+
+ HADOOP-8541. Better high-percentile latency metrics. (Andrew Wang via atm)
+
+ HADOOP-8362. Improve exception message when Configuration.set() is
+ called with a null key or value. (Madhukara Phatak
+ and Suresh Srinivas via harsh)
+
+ HADOOP-7818. DiskChecker#checkDir should fail if the directory is
+ not executable. (Madhukara Phatak via harsh)
+
+ HADOOP-8531. SequenceFile Writer can throw out a better error if a
+ serializer or deserializer isn't available
+ (Madhukara Phatak via harsh)
+
+ HADOOP-8609. IPC server logs a useless message when shutting down socket.
+ (Jon Zuanich via atm)
+
+ HADOOP-8620. Add -Drequire.fuse and -Drequire.snappy. (Colin
+ Patrick McCabe via eli)
+
+ HADOOP-8687. Upgrade log4j to 1.2.17. (eli)
+
+ HADOOP-8278. Make sure components declare correct set of dependencies.
+ (tomwhite)
+
+ HADOOP-8700. Use enum to define the checksum constants in DataChecksum.
+ (szetszwo)
+
+ HADOOP-8686. Fix warnings in native code. (Colin Patrick McCabe via eli)
+
+ HADOOP-8239. Add subclasses of MD5MD5CRC32FileChecksum to support file
+ checksum with CRC32C. (Kihwal Lee via szetszwo)
+
+ HADOOP-8619. WritableComparator must implement no-arg constructor.
+ (Chris Douglas via Suresh)
+
+ HADOOP-8075. Lower native-hadoop library log from info to debug.
+ (Hızır Sefa İrken via eli)
+
+ HADOOP-8748. Refactor DFSClient retry utility methods to a new class
+ in org.apache.hadoop.io.retry. (Arun C Murthy via szetszwo)
+
+ HADOOP-8754. Deprecate all the RPC.getServer() variants. (Brandon Li
+ via szetszwo)
+
+ HADOOP-8801. ExitUtil#terminate should capture the exception stack trace. (eli)
+
+ HADOOP-8819. Incorrectly & is used instead of && in some file system
+ implementations. (Brandon Li via suresh)
+
+ HADOOP-7808. Port HADOOP-7510 - Add configurable option to use original
+ hostname in token instead of IP to allow server IP change.
+ (Daryn Sharp via suresh)
+
+ BUG FIXES
+
+ HADOOP-8372. NetUtils.normalizeHostName() incorrectly handles hostname
+ starting with a numeric character. (Junping Du via suresh)
+
+ HADOOP-8393. hadoop-config.sh missing variable exports, causes Yarn jobs
+ to fail with ClassNotFoundException MRAppMaster. (phunt via tucu)
+
+ HADOOP-8316. Audit logging should be disabled by default. (eli)
+
+ HADOOP-8400. All commands warn "Kerberos krb5 configuration not found"
+ when security is not enabled. (tucu)
+
+ HADOOP-8406. CompressionCodecFactory.CODEC_PROVIDERS iteration is
+ thread-unsafe (todd)
+
+ HADOOP-8287. etc/hadoop is missing hadoop-env.sh (eli)
+
+ HADOOP-8408. MR doesn't work with a non-default ViewFS mount table
+ and security enabled. (atm via eli)
+
+ HADOOP-8329. Build fails with Java 7. (eli)
+
+ HADOOP-8268. A few pom.xml across Hadoop project
+ may fail XML validation. (Radim Kolar via harsh)
+
+ HADOOP-8444. Fix the tests FSMainOperationsBaseTest.java and
+ FileContextMainOperationsBaseTest.java to avoid potential
+ test failure (Madhukara Phatak via harsh)
+
+ HADOOP-8452. DN logs backtrace when running under jsvc and /jmx is loaded
+ (Andy Isaacson via bobby)
+
+ HADOOP-8460. Document proper setting of HADOOP_PID_DIR and
+ HADOOP_SECURE_DN_PID_DIR (bobby)
+
+ HADOOP-8466. hadoop-client POM incorrectly excludes avro. (bmahe via tucu)
+
+ HADOOP-8481. update BUILDING.txt to talk about cmake rather than autotools.
+ (Colin Patrick McCabe via eli)
+
+ HADOOP-8485. Don't hardcode "Apache Hadoop 0.23" in the docs. (eli)
+
+ HADOOP-8488. test-patch.sh gives +1 even if the native build fails.
+ (Colin Patrick McCabe via eli)
+
+ HADOOP-8507. Avoid OOM while deserializing DelegationTokenIdentifer.
+ (Colin Patrick McCabe via eli)
+
+ HADOOP-8433. Don't set HADOOP_LOG_DIR in hadoop-env.sh.
+ (Brahma Reddy Battula via eli)
+
+ HADOOP-8509. JarFinder duplicate entry: META-INF/MANIFEST.MF exception (tucu)
+
+ HADOOP-8512. AuthenticatedURL should reset the Token when the server returns
+ other than OK on authentication (tucu)
+
+ HADOOP-8168. empty-string owners or groups cause MissingFormatWidthException
+ in o.a.h.fs.shell.Ls.ProcessPath() (ekoontz via tucu)
+
+ HADOOP-8438. hadoop-validate-setup.sh refers to examples jar file which doesn't exist
+ (Devaraj K via umamahesh)
+
+ HADOOP-8538. CMake builds fail on ARM. (Trevor Robinson via eli)
+
+ HADOOP-8547. Package hadoop-pipes examples/bin directory (again).
+ (Colin Patrick McCabe via eli)
+
+ HADOOP-8563. don't package hadoop-pipes examples/bin
+ (Colin Patrick McCabe via tgraves)
+
+ HADOOP-8566. AvroReflectSerializer.accept(Class) throws a NPE if the class has no
+ package (primitive types and arrays). (tucu)
+
+ HADOOP-8586. Fixup a bunch of SPNEGO misspellings. (eli)
+
+ HADOOP-3886. Error in javadoc of Reporter, Mapper and Progressable
+ (Jingguo Yao via harsh)
+
+ HADOOP-8587. HarFileSystem access of harMetaCache isn't threadsafe. (eli)
+
+ HADOOP-8585. Fix initialization circularity between UserGroupInformation
+ and HadoopConfiguration. (Colin Patrick McCabe via atm)
+
+ HADOOP-8552. Conflict: Same security.log.file for multiple users.
+ (kkambatl via tucu)
+
+ HADOOP-8537. Fix TFile tests to pass even when native zlib support is not
+ compiled. (todd)
+
+ HADOOP-8626. Typo in default setting for
+ hadoop.security.group.mapping.ldap.search.filter.user. (Jonathan Natkins
+ via atm)
+
+ HADOOP-8480. The native build should honor -DskipTests.
+ (Colin Patrick McCabe via eli)
+
+ HADOOP-8659. Native libraries must build with soft-float ABI for Oracle JVM
+ on ARM. (Trevor Robinson via todd)
+
+ HADOOP-8654. TextInputFormat delimiter bug (Gelesh and Jason Lowe via
+ bobby)
+
+ HADOOP-8614. IOUtils#skipFully hangs forever on EOF.
+ (Colin Patrick McCabe via eli)
+
+ HADOOP-8720. TestLocalFileSystem should use test root subdirectory.
+ (Vlad Rozov via eli)
+
+ HADOOP-8721. ZKFC should not retry 45 times when attempting a graceful
+ fence during a failover. (Vinayakumar B via atm)
+
+ HADOOP-8632. Configuration leaking class-loaders (Costin Leau via bobby)
+
+ HADOOP-4572. Can not access user logs - Jetty is not configured by default
+ to serve aliases/symlinks (ahmed via tucu)
+
+ HADOOP-8660. TestPseudoAuthenticator failing with NPE. (tucu)
+
+ HADOOP-8699. some common testcases create core-site.xml in test-classes
+ making other testcases to fail. (tucu)
+
+ HADOOP-8031. Configuration class fails to find embedded .jar resources;
+ should use URL.openStream() (genman via tucu)
+
+ HADOOP-8737. cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h.
+ (Colin Patrick McCabe via eli)
+
+ HADOOP-8747. Syntax error on cmake version 2.6 patch 2 in JNIFlags.cmake. (cmccabe via tucu)
+
+ HADOOP-8722. Update BUILDING.txt with latest snappy info.
+ (Colin Patrick McCabe via eli)
+
+ HADOOP-8764. CMake: HADOOP-8737 broke ARM build. (Trevor Robinson via eli)
+
+ HADOOP-8770. NN should not RPC to self to find trash defaults. (eli)
+
+ HADOOP-8648. libhadoop: native CRC32 validation crashes when
+ io.bytes.per.checksum=1. (Colin Patrick McCabe via eli)
+
+ HADOOP-8766. FileContextMainOperationsBaseTest should randomize the root
+ dir. (Colin Patrick McCabe via atm)
+
+ HADOOP-8749. HADOOP-8031 changed the way in which relative xincludes are handled in
+ Configuration. (ahmed via tucu)
+
+ HADOOP-8431. Running distcp wo args throws IllegalArgumentException.
+ (Sandy Ryza via eli)
+
+ HADOOP-8775. MR2 distcp permits non-positive value to -bandwidth option
+ which causes job never to complete. (Sandy Ryza via atm)
+
+ HADOOP-8781. hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH. (tucu)
+
+ BREAKDOWN OF HDFS-3042 SUBTASKS
+
+ HADOOP-8220. ZKFailoverController doesn't handle failure to become active
+ correctly (todd)
+
+ HADOOP-8228. Auto HA: Refactor tests and add stress tests. (todd)
+
+ HADOOP-8215. Security support for ZK Failover controller (todd)
+
+ HADOOP-8245. Fix flakiness in TestZKFailoverController (todd)
+
+ HADOOP-8257. TestZKFailoverControllerStress occasionally fails with Mockito
+ error (todd)
+
+ HADOOP-8260. Replace ClientBaseWithFixes with our own modified copy of the
+ class (todd)
+
+ HADOOP-8246. Auto-HA: automatically scope znode by nameservice ID (todd)
+
+ HADOOP-8247. Add a config to enable auto-HA, which disables manual
+ FailoverController (todd)
+
+ HADOOP-8306. ZKFC: improve error message when ZK is not running. (todd)
+
+ HADOOP-8279. Allow manual failover to be invoked when auto-failover is
+ enabled. (todd)
+
+ HADOOP-8276. Auto-HA: add config for java options to pass to zkfc daemon
+ (todd via eli)
+
+ HADOOP-8405. ZKFC tests leak ZK instances. (todd)
+
+Release 2.0.0-alpha - 2012-05-23
+
+ INCOMPATIBLE CHANGES
+
+ HADOOP-7920. Remove Avro Rpc. (suresh)
+
+ NEW FEATURES
+
+ HADOOP-7773. Add support for protocol buffer based RPC engine.
+ (suresh)
+
+ HADOOP-7875. Add helper class to unwrap protobuf ServiceException.
+ (suresh)
+
+ HADOOP-7454. Common side of High Availability Framework (HDFS-1623)
+ Contributed by Todd Lipcon, Aaron T. Myers, Eli Collins, Uma Maheswara Rao G,
+ Bikas Saha, Suresh Srinivas, Jitendra Nath Pandey, Hari Mankude, Brandon Li,
+ Sanjay Radia, Mingjie Lai, and Gregory Chanan
+
+ HADOOP-8121. Active Directory Group Mapping Service. (Jonathan Natkins via
+ atm)
+
+ HADOOP-7030. Add TableMapping topology implementation to read host to rack
+ mapping from a file. (Patrick Angeles and tomwhite via tomwhite)
+
+ HADOOP-8206. Common portion of a ZK-based failover controller (todd)
+
+ HADOOP-8210. Common side of HDFS-3148: The client should be able
+ to use multiple local interfaces for data transfer. (eli)
+
+ HADOOP-8343. Allow configuration of authorization for JmxJsonServlet and
+ MetricsServlet (tucu)
+
+ IMPROVEMENTS
+
+ HADOOP-7524. Change RPC to allow multiple protocols including multiple
+ versions of the same protocol (sanjay Radia)
+
+ HADOOP-7607. Simplify the RPC proxy cleanup process. (atm)
+
+ HADOOP-7687. Make getProtocolSignature public (sanjay)
+
+ HADOOP-7693. Enhance AvroRpcEngine to support the new #addProtocol
+ interface introduced in HADOOP-7524. (cutting)
+
+ HADOOP-7716. RPC protocol registration on SS does not log the protocol name
+ (only the class which may be different) (sanjay)
+
+ HADOOP-7776. Make the Ipc-Header in a RPC-Payload an explicit header.
+ (sanjay)
+
+ HADOOP-7862. Move the support for multiple protocols to lower layer so
+ that Writable, PB and Avro can all use it (Sanjay)
+
+ HADOOP-7876. Provided access to encoded key in DelegationKey for
+ use in protobuf based RPCs. (suresh)
+
+ HADOOP-7899. Generate proto java files as part of the build. (tucu)
+
+ HADOOP-7957. Classes deriving GetGroupsBase should be able to override
+ proxy creation. (jitendra)
+
+ HADOOP-7965. Support for protocol version and signature in PB. (jitendra)
+
+ HADOOP-8070. Add a standalone benchmark for RPC call performance. (todd)
+
+ HADOOP-8084. Updates ProtoBufRpc engine to not do an unnecessary copy
+ for RPC request/response. (ddas)
+
+ HADOOP-8085. Add RPC metrics to ProtobufRpcEngine. (Hari Mankude via
+ suresh)
+
+ HADOOP-8098. KerberosAuthenticatorHandler should use _HOST replacement to
+ resolve principal name (tucu)
+
+ HADOOP-8118. In metrics2.util.MBeans, change log level to trace for the
+ stack trace of InstanceAlreadyExistsException. (szetszwo)
+
+ HADOOP-8125. make hadoop-client set of curated jars available in a
+ distribution tarball (rvs via tucu)
+
+ HADOOP-7717. Move handling of concurrent client fail-overs to
+ RetryInvocationHandler (atm)
+
+ HADOOP-7728. Enable task memory management to be configurable in hadoop
+ config setup script. (ramya)
+
+ HADOOP-7358. Improve log levels when exceptions caught in RPC handler
+ (Todd Lipcon via shv)
+
+ HADOOP-7557 Make IPC header be extensible (sanjay radia)
+
+ HADOOP-7806. Support binding to sub-interfaces (eli)
+
+ HADOOP-6941. Adds support for building Hadoop with IBM's JDK
+ (Stephen Watt, Eli and ddas)
+
+ HADOOP-8183. Stop using "mapred.used.genericoptions.parser" (harsh)
+
+ HADOOP-6924. Adds a directory to the list of directories to search
+ for the libjvm.so file. The new directory is found by running a 'find'
+ command and the first output is taken. This was done to handle the
+ build of Hadoop with IBM's JDK. (Stephen Watt, Guillermo Cabrera and ddas)
+
+ HADOOP-8200. Remove HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS. (eli)
+
+ HADOOP-8184. ProtoBuf RPC engine uses the IPC layer reply packet.
+ (Sanjay Radia via szetszwo)
+
+ HADOOP-8163. Improve ActiveStandbyElector to provide hooks for
+ fencing old active. (todd)
+
+ HADOOP-8193. Refactor FailoverController/HAAdmin code to add an abstract
+ class for "target" services. (todd)
+
+ HADOOP-8212. Improve ActiveStandbyElector's behavior when session expires
+ (todd)
+
+ HADOOP-8216. Address log4j.properties inconsistencies btw main and
+ template dirs. (Patrick Hunt via eli)
+
+ HADOOP-8149. Cap space usage of default log4j rolling policy.
+ (Patrick Hunt via eli)
+
+ HADOOP-8211. Update commons-net version to 3.1. (eli)
+
+ HADOOP-8236. haadmin should have configurable timeouts for failover
+ commands. (todd)
+
+ HADOOP-8242. AbstractDelegationTokenIdentifier: add getter methods
+ for owner and realuser. (Colin Patrick McCabe via eli)
+
+ HADOOP-8007. Use substitution tokens for fencing argument (todd)
+
+ HADOOP-8077. HA: fencing method should be able to be configured on
+ a per-NN or per-NS basis (todd)
+
+ HADOOP-8086. KerberosName silently sets defaultRealm to "" if the
+ Kerberos config is not found; it should log a WARN (tucu)
+
+ HADOOP-8280. Move VersionUtil/TestVersionUtil and GenericTestUtils from
+ HDFS into Common. (Ahmed Radwan via atm)
+
+ HADOOP-8117. Upgrade test build to Surefire 2.12 (todd)
+
+ HADOOP-8152. Expand public APIs for security library classes. (atm via eli)
+
+ HADOOP-7549. Use JDK ServiceLoader mechanism to find FileSystem implementations. (tucu)
+
+ HADOOP-8185. Update namenode -format documentation and add -nonInteractive
+ and -force. (Arpit Gupta via atm)
+
+ HADOOP-8214. make hadoop script recognize a full set of deprecated commands (rvs via tucu)
+
+ HADOOP-8347. Hadoop Common logs misspell 'successful'.
+ (Philip Zeyliger via eli)
+
+ HADOOP-8350. Improve NetUtils.getInputStream to return a stream which has
+ a tunable timeout. (todd)
+
+ HADOOP-8356. FileSystem service loading mechanism should print the FileSystem
+ impl it is failing to load (tucu)
+
+ HADOOP-8353. hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop.
+ (Roman Shaposhnik via atm)
+
+ HADOOP-8113. Correction to BUILDING.txt: HDFS needs ProtocolBuffer, too
+ (not just MapReduce). Contributed by Eugene Koontz.
+
+ HADOOP-8285 Use ProtoBuf for RpcPayLoadHeader (sanjay radia)
+
+ HADOOP-8366 Use ProtoBuf for RpcResponseHeader (sanjay radia)
+
+ HADOOP-7729. Send back valid HTTP response if user hits IPC port with
+ HTTP GET. (todd)
+
+ HADOOP-7987. Support setting the run-as user in unsecure mode. (jitendra)
+
+ HADOOP-7994. Remove getProtocolVersion and getProtocolSignature from the
+ client side translator and server side implementation. (jitendra)
+
+ HADOOP-8367 Improve documentation of declaringClassProtocolName in
+ rpc headers. (Sanjay Radia)
+
+ HADOOP-8624. ProtobufRpcEngine should log all RPCs if TRACE logging is
+ enabled (todd)
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+
+ HADOOP-8199. Fix issues in start-all.sh and stop-all.sh (Devaraj K via umamahesh)
+
+ HADOOP-7635. RetryInvocationHandler should release underlying resources on
+ close. (atm)
+
+ HADOOP-7695. RPC.stopProxy can throw unintended exception while logging
+ error. (atm)
+
+ HADOOP-7833. Fix findbugs warnings in protobuf generated code.
+ (John Lee via suresh)
+
+ HADOOP-7897. ProtobufRpcEngine client side exception mechanism is not
+ consistent with WritableRpcEngine. (suresh)
+
+ HADOOP-7913. Fix bug in ProtoBufRpcEngine. (sanjay)
+
+ HADOOP-7892. IPC logs too verbose after "RpcKind" introduction. (todd)
+
+ HADOOP-7968. Errant println left in RPC.getHighestSupportedProtocol. (Sho
+ Shimauchi via harsh)
+
+ HADOOP-7931. o.a.h.ipc.WritableRpcEngine should have a way to force
+ initialization. (atm)
+
+ HADOOP-8104. Inconsistent Jackson versions (tucu)
+
+ HADOOP-8119. Fix javac warnings in TestAuthenticationFilter in hadoop-auth.
+ (szetszwo)
+
+ HADOOP-7888. TestFailoverProxy fails intermittently on trunk. (Jason Lowe
+ via atm)
+
+ HADOOP-8154. DNS#getIPs shouldn't silently return the local host
+ IP for bogus interface names. (eli)
+
+ HADOOP-8169. javadoc generation fails with java.lang.OutOfMemoryError:
+ Java heap space (tgraves via bobby)
+
+ HADOOP-8167. Configuration deprecation logic breaks backwards compatibility (tucu)
+
+ HADOOP-8189. LdapGroupsMapping shouldn't throw away IOException. (Jonathan Natkins via atm)
+
+ HADOOP-8191. SshFenceByTcpPort uses netcat incorrectly (todd)
+
+ HADOOP-8157. Fix race condition in Configuration that could cause spurious
+ ClassNotFoundExceptions after a GC. (todd)
+
+ HADOOP-8197. Configuration logs WARNs on every use of a deprecated key (tucu)
+
+ HADOOP-8159. NetworkTopology: getLeaf should check for invalid topologies.
+ (Colin Patrick McCabe via eli)
+
+ HADOOP-8204. TestHealthMonitor fails occasionally (todd)
+
+ HADOOP-8202. RPC stopProxy() does not close the proxy correctly.
+ (Hari Mankude via suresh)
+
+ HADOOP-8218. RPC.closeProxy shouldn't throw error when closing a mock
+ (todd)
+
+ HADOOP-8238. NetUtils#getHostNameOfIP blows up if given ip:port
+ string w/o port. (eli)
+
+ HADOOP-8243. Security support broken in CLI (manual) failover controller
+ (todd)
+
+ HADOOP-8251. Fix SecurityUtil.fetchServiceTicket after HADOOP-6941 (todd)
+
+ HADOOP-8249. invalid hadoop-auth cookies should trigger authentication
+ if info is avail before returning HTTP 401 (tucu)
+
+ HADOOP-8261. Har file system doesn't deal with FS URIs with a host but no
+ port. (atm)
+
+ HADOOP-8263. Stringification of IPC calls not useful (todd)
+
+ HADOOP-8264. Remove irritating double double quotes in front of hostname
+ (Bernd Fondermann via bobby)
+
+ HADOOP-8270. hadoop-daemon.sh stop action should return 0 for an
+ already stopped service. (Roman Shaposhnik via eli)
+
+ HADOOP-8144. pseudoSortByDistance in NetworkTopology doesn't work
+ properly if no local node and first node is local rack node.
+ (Junping Du)
+
+ HADOOP-8282. start-all.sh incorrectly checks for the existence of
+ start-dfs.sh when starting start-yarn.sh. (Devaraj K via eli)
+
+ HADOOP-7350. Use ServiceLoader to discover compression codec classes.
+ (tomwhite)
+
+ HADOOP-8284. clover integration broken, also mapreduce poms are pulling
+ in clover as a dependency. (phunt via tucu)
+
+ HADOOP-8309. Pseudo & Kerberos AuthenticationHandler should use
+ getType() to create token (tucu)
+
+ HADOOP-8314. HttpServer#hasAdminAccess should return false if
+ authorization is enabled but user is not authenticated. (tucu)
+
+ HADOOP-8296. hadoop/yarn daemonlog usage wrong (Devaraj K via tgraves)
+
+ HADOOP-8310. FileContext#checkPath should handle URIs with no port. (atm)
+
+ HADOOP-8321. TestUrlStreamHandler fails. (tucu)
+
+ HADOOP-8325. Add a ShutdownHookManager to be used by different
+ components instead of the JVM shutdownhook (tucu)
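+
+ A minimal sketch of registering a hook through the new manager instead
+ of Runtime#addShutdownHook (priority ordering is an assumption: higher
+ priorities are expected to run first):
+
+   import org.apache.hadoop.util.ShutdownHookManager;
+
+   ShutdownHookManager.get().addShutdownHook(new Runnable() {
+     @Override
+     public void run() {
+       // flush and close resources here
+     }
+   }, 10);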
+
+ HADOOP-8275. Range check DelegationKey length.
+ (Colin Patrick McCabe via eli)
+
+ HADOOP-8342. HDFS command fails with exception following merge of
+ HADOOP-8325 (tucu)
+
+ HADOOP-8346. Makes oid changes to make SPNEGO work. Was broken due
+ to fixes introduced by the IBM JDK compatibility patch. (ddas)
+
+ HADOOP-8355. SPNEGO filter throws/logs exception when authentication fails (tucu)
+
+ HADOOP-8349. ViewFS doesn't work when the root of a file system is mounted. (atm)
+
+ HADOOP-8328. Duplicate FileSystem Statistics object for 'file' scheme.
+ (tomwhite)
+
+ HADOOP-8359. Fix javadoc warnings in Configuration. (Anupam Seth via
+ szetszwo)
+
+ HADOOP-7988. Upper case in hostname part of the principals doesn't work with
+ kerberos. (jitendra)
+
+ BREAKDOWN OF HADOOP-7454 SUBTASKS
+
+ HADOOP-7455. HA: Introduce HA Service Protocol Interface. (suresh)
+
+ HADOOP-7774. HA: Administrative CLI to control HA daemons. (todd)
+
+ HADOOP-7896. HA: if both NNs are in Standby mode, client needs to try failing
+ back and forth several times with sleeps. (atm)
+
+ HADOOP-7922. Improve some logging for client IPC failovers and
+ StandbyExceptions (todd)
+
+ HADOOP-7921. StandbyException should extend IOException (todd)
+
+ HADOOP-7928. HA: Client failover policy is incorrectly trying to fail over all
+ IOExceptions (atm)
+
+ HADOOP-7925. Add interface and update CLI to query current state to
+ HAServiceProtocol (eli via todd)
+
+ HADOOP-7932. Make client connection retries on socket time outs configurable.
+ (Uma Maheswara Rao G via todd)
+
+ HADOOP-7924. FailoverController for client-based configuration (eli)
+
+ HADOOP-7961. Move HA fencing to common. (eli)
+
+ HADOOP-7970. HAServiceProtocol methods must throw IOException. (Hari Mankude
+ via suresh).
+
+ HADOOP-7992. Add ZKClient library to facilitate leader election. (Bikas Saha
+ via suresh).
+
+ HADOOP-7983. HA: failover should be able to pass args to fencers. (eli)
+
+ HADOOP-7938. HA: the FailoverController should optionally fence the active
+ during failover. (eli)
+
+ HADOOP-7991. HA: the FailoverController should check the standby is ready
+ before failing over. (eli)
+
+ HADOOP-8038. Add 'ipc.client.connect.max.retries.on.timeouts' entry in
+ core-default.xml file. (Uma Maheswara Rao G via atm)
+
+ HADOOP-8041. Log a warning when a failover is first attempted (todd)
+
+ HADOOP-8068. void methods can swallow exceptions when going through failover
+ path (todd)
+
+ HADOOP-8116. RetriableCommand is using RetryPolicy incorrectly after
+ HADOOP-7896. (atm)
+
+ HADOOP-8317. Update maven-assembly-plugin to 2.3 - fix build on FreeBSD
+ (Radim Kolar via bobby)
+
+ HADOOP-8172. Configuration no longer sets all keys in a deprecated key
+ list. (Anupam Seth via bobby)
+
+ HADOOP-7868. Hadoop native fails to compile when default linker
+ option is -Wl,--as-needed. (Trevor Robinson via eli)
+
+ HADOOP-8655. Fix TextInputFormat for large deliminators. (Gelesh via
+ bobby)
+
+ HADOOP-7900. LocalDirAllocator confChanged() accesses conf.get() twice
+ (Ravi Gummadi via Uma Maheswara Rao G)
+
+ HADOOP-8146. FsShell commands cannot be interrupted
+ (Daryn Sharp via Uma Maheswara Rao G)
+
+ HADOOP-8018. Hudson auto test for HDFS has started throwing javadoc
+ warnings (Jon Eagles via bobby)
+
+ HADOOP-8001 ChecksumFileSystem's rename doesn't correctly handle checksum
+ files. (Daryn Sharp via bobby)
+
+ HADOOP-8006 TestFSInputChecker is failing in trunk.
+ (Daryn Sharp via bobby)
+
+ HADOOP-7998. CheckFileSystem does not correctly honor setVerifyChecksum
+ (Daryn Sharp via bobby)
+
+ HADOOP-7606. Upgrade Jackson to version 1.7.1 to match the version required
+ by Jersey (Alejandro Abdelnur via atm)
+
+Release 0.23.9 - UNRELEASED
+
+ INCOMPATIBLE CHANGES
+
+ NEW FEATURES
+
+ IMPROVEMENTS
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+
+Release 0.23.8 - 2013-06-05
+
+ INCOMPATIBLE CHANGES
+
+ NEW FEATURES
+
+ IMPROVEMENTS
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+
+ HADOOP-9222. Cover package with org.apache.hadoop.io.lz4 unit tests (Vadim
+ Bondarev via jlowe)
+
+ HADOOP-9233. Cover package org.apache.hadoop.io.compress.zlib with unit
+ tests (Vadim Bondarev via jlowe)
+
+ HADOOP-9469. mapreduce/yarn source jars not included in dist tarball
+ (Robert Parker via tgraves)
+
+ HADOOP-9504. MetricsDynamicMBeanBase has concurrency issues in
+ createMBeanInfo (Liang Xie via jlowe)
+
+ HADOOP-9614. smart-test-patch.sh hangs for new version of patch (2.7.1)
+ (Ravi Prakash via jeagles)
+
+Release 0.23.7 - 2013-04-18
+
+ INCOMPATIBLE CHANGES
+
+ NEW FEATURES
+
+ IMPROVEMENTS
+
+ HADOOP-8849. FileUtil#fullyDelete should grant the target directories +rwx
+ permissions (Ivan A. Veselovsky via bobby)
+
+ HADOOP-9067. provide test for LocalFileSystem.reportChecksumFailure
+ (Ivan A. Veselovsky via bobby)
+
+ HADOOP-9336. Allow UGI of current connection to be queried. (Daryn Sharp
+ via kihwal)
+
+ HADOOP-9352. Expose UGI.setLoginUser for tests (daryn)
+
+ HADOOP-9209. Add shell command to dump file checksums (Todd Lipcon via
+ jeagles)
+
+ HADOOP-9374. Add tokens from -tokenCacheFile into UGI (daryn)
+
+ HADOOP-8711. IPC Server supports adding exceptions for which
+ the message is printed and the stack trace is not printed to avoid chatter.
+ (Brandon Li via Suresh)
+
+ OPTIMIZATIONS
+
+ HADOOP-8462. Native-code implementation of bzip2 codec. (Govind Kamat via
+ jlowe)
+
+ BUG FIXES
+
+ HADOOP-9302. HDFS docs not linked from top level (Andy Isaacson via
+ tgraves)
+
+ HADOOP-9303. command manual dfsadmin missing entry for restoreFailedStorage
+ option (Andy Isaacson via tgraves)
+
+ HADOOP-9339. IPC.Server incorrectly sets UGI auth type (Daryn Sharp via
+ kihwal)
+
+Release 0.23.6 - 2013-02-06
+
+ INCOMPATIBLE CHANGES
+
+ NEW FEATURES
+
+ IMPROVEMENTS
+
+ HADOOP-9217. Print thread dumps when hadoop-common tests fail.
+ (Andrey Klochkov via suresh)
+
+ HADOOP-9247. Parametrize Clover "generateXxx" properties to make them
+ re-definable via -D in mvn calls. (Ivan A. Veselovsky via suresh)
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+
+ HADOOP-9072. Hadoop-Common-0.23-Build Fails to build in Jenkins
+ (Robert Parker via tgraves)
+
+ HADOOP-8992. Enhance unit-test coverage of class HarFileSystem (Ivan A.
+ Veselovsky via bobby)
+
+ HADOOP-9038. unit-tests for AllocatorPerContext.PathIterator (Ivan A.
+ Veselovsky via bobby)
+
+ HADOOP-9105. FsShell -moveFromLocal erroneously fails (daryn via bobby)
+
+ HADOOP-9097. Maven RAT plugin is not checking all source files (tgraves)
+
+ HADOOP-9255. relnotes.py missing last jira (tgraves)
+
+Release 0.23.5 - 2012-11-28
+
+ INCOMPATIBLE CHANGES
+
+ NEW FEATURES
+
+ IMPROVEMENTS
+
+ HADOOP-8932. JNI-based user-group mapping modules can be too chatty on
+ lookup failures. (Kihwal Lee via suresh)
+
+ HADOOP-8930. Cumulative code coverage calculation (Andrey Klochkov via
+ bobby)
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+
+ HADOOP-8906. paths with multiple globs are unreliable. (Daryn Sharp via
+ jlowe)
+
+ HADOOP-8811. Compile hadoop native library in FreeBSD (Radim Kolar via
+ bobby)
+
+ HADOOP-8962. RawLocalFileSystem.listStatus fails when a child filename
+ contains a colon (jlowe via bobby)
+
+ HADOOP-8986. Server$Call object is never released after it is sent (bobby)
+
+ HADOOP-9022. Hadoop distcp tool fails to copy file if -m 0 specified
+ (Jonathan Eagles via bobby)
+
+ HADOOP-9025. org.apache.hadoop.tools.TestCopyListing failing (Jonathan
+ Eagles via jlowe)
+
+Release 0.23.4
+
+ INCOMPATIBLE CHANGES
+
+ NEW FEATURES
+
+ IMPROVEMENTS
+
+ HADOOP-8822. relnotes.py was deleted post mavenization (bobby)
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+
+ HADOOP-8843. Old trash directories are never deleted on upgrade
+ from 1.x (jlowe)
+
+ HADOOP-8684. Deadlock between WritableComparator and WritableComparable.
+ (Jing Zhao via suresh)
+
+Release 0.23.3
+
+ INCOMPATIBLE CHANGES
+
+ HADOOP-7967. Need generalized multi-token filesystem support (daryn)
+
+ NEW FEATURES
+
+ IMPROVEMENTS
+
+ HADOOP-8108. Move method getHostPortString() from NameNode to NetUtils.
+ (Brandon Li via jitendra)
+
+ HADOOP-8288. Remove references of mapred.child.ulimit etc. since they are
+ not being used any more (Ravi Prakash via bobby)
+
+ HADOOP-8535. Cut hadoop build times in half (Jon Eagles via bobby)
+
+ HADOOP-8525. Provide Improved Traceability for Configuration (bobby)
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+
+ HADOOP-8088. User-group mapping cache incorrectly does negative caching on
+ transient failures (Kihwal Lee via bobby)
+
+ HADOOP-8179. risk of NPE in CopyCommands processArguments() (Daryn Sharp
+ via bobby)
+
+ HADOOP-6963. In FileUtil.getDU(..), neither include the size of directories
+ nor follow symbolic links. (Ravi Prakash via szetszwo)
+
+ HADOOP-8180. Remove hsqldb since its not needed from pom.xml (Ravi Prakash
+ via tgraves)
+
+ HADOOP-8014. ViewFileSystem does not correctly implement getDefaultBlockSize,
+ getDefaultReplication, getContentSummary (John George via bobby)
+
+ HADOOP-7510. Tokens should use original hostname provided instead of ip
+ (Daryn Sharp via bobby)
+
+ HADOOP-8283. Allow tests to control token service value (Daryn Sharp via
+ bobby)
+
+ HADOOP-8286. Simplify getting a socket address from conf (Daryn Sharp via
+ bobby)
+
+ HADOOP-8227. Allow RPC to limit ephemeral port range. (bobby)
+
+ HADOOP-8305. distcp over viewfs is broken (John George via bobby)
+
+ HADOOP-8334. HttpServer sometimes returns incorrect port (Daryn Sharp via
+ bobby)
+
+ HADOOP-8330. Update TestSequenceFile.testCreateUsesFsArg() for HADOOP-8305.
+ (John George via szetszwo)
+
+ HADOOP-8335. Improve Configuration's address handling (Daryn Sharp via
+ bobby)
+
+ HADOOP-8327. distcpv2 and distcpv1 jars should not coexist (Dave Thompson
+ via bobby)
+
+ HADOOP-8341. Fix or filter findbugs issues in hadoop-tools (bobby)
+
+ HADOOP-8373. Port RPC.getServerAddress to 0.23 (Daryn Sharp via bobby)
+
+ HADOOP-8495. Update Netty to avoid leaking file descriptors during shuffle
+ (Jason Lowe via tgraves)
+
+ HADOOP-8129. ViewFileSystemTestSetup setupForViewFileSystem is erring
+ (Ahmed Radwan and Ravi Prakash via bobby)
+
+ HADOOP-8573. Configuration tries to read from an inputstream resource
+ multiple times (Robert Evans via tgraves)
+
+ HADOOP-8599. Non empty response from FileSystem.getFileBlockLocations when
+ asking for data beyond the end of file. (Andrey Klochkov via todd)
+
+ HADOOP-8606. FileSystem.get may return the wrong filesystem (Daryn Sharp
+ via bobby)
+
+ HADOOP-8551. fs -mkdir creates parent directories without the -p option
+ (John George via bobby)
+
+ HADOOP-8613. AbstractDelegationTokenIdentifier#getUser() should set token
+ auth type. (daryn)
+
+ HADOOP-8627. FS deleteOnExit may delete the wrong path (daryn via bobby)
+
+ HADOOP-8634. Ensure FileSystem#close doesn't squawk for deleteOnExit paths
+ (daryn via bobby)
+
+ HADOOP-8550. hadoop fs -touchz automatically created parent directories
+ (John George via bobby)
+
+ HADOOP-8635. Cannot cancel paths registered deleteOnExit (daryn via bobby)
+
+ HADOOP-8637. FilterFileSystem#setWriteChecksum is broken (daryn via bobby)
+
+ HADOOP-8370. Native build failure: javah: class file for
+ org.apache.hadoop.classification.InterfaceAudience not found (Trevor
+ Robinson via tgraves)
+
+ HADOOP-8633. Interrupted FsShell copies may leave tmp files (Daryn Sharp
+ via tgraves)
+
+ HADOOP-8703. distcpV2: turn CRC checking off for 0 byte size (Dave
+ Thompson via bobby)
+
+ HADOOP-8390. TestFileSystemCanonicalization fails with JDK7 (Trevor
+ Robinson via tgraves)
+
+ HADOOP-8692. TestLocalDirAllocator fails intermittently with JDK7
+ (Trevor Robinson via tgraves)
+
+ HADOOP-8693. TestSecurityUtil fails intermittently with JDK7 (Trevor
+ Robinson via tgraves)
+
+ HADOOP-8697. TestWritableName fails intermittently with JDK7 (Trevor
+ Robinson via tgraves)
+
+ HADOOP-8695. TestPathData fails intermittently with JDK7 (Trevor
+ Robinson via tgraves)
+
+ HADOOP-8611. Allow fall-back to the shell-based implementation when
+ JNI-based users-group mapping fails (Robert Parker via bobby)
+
+ HADOOP-8225. DistCp fails when invoked by Oozie (daryn via bobby)
+
+ HADOOP-8709. globStatus changed behavior from 0.20/1.x (Jason Lowe via
+ bobby)
+
+ HADOOP-8725. MR is broken when security is off (daryn via bobby)
+
+ HADOOP-8726. The Secrets in Credentials are not available to MR tasks
+ (daryn and Benoy Antony via bobby)
+
+ HADOOP-8727. Gracefully deprecate dfs.umaskmode in 2.x onwards (Harsh J
+ via bobby)
+
+Release 0.23.2 - UNRELEASED
+
+ INCOMPATIBLE CHANGES
+
+ NEW FEATURES
+
+ IMPROVEMENTS
+
+ HADOOP-8048. Allow merging of Credentials (Daryn Sharp via tgraves)
+
+ HADOOP-8032. mvn site:stage-deploy should be able to use the scp protocol
+ to stage documents (Ravi Prakash via tgraves)
+
+ HADOOP-7923. Automate the updating of version numbers in the doc system.
+ (szetszwo)
+
+ HADOOP-8137. Added links to CLI manuals to the site. (tgraves via
+ acmurthy)
+
+ OPTIMIZATIONS
+
+ HADOOP-8071. Avoid an extra packet in client code when nagling is
+ disabled. (todd)
+
+ HADOOP-6502. Improve the performance of Configuration.getClassByName when
+ the class is not found by caching negative results.
+ (sharad, todd via todd)
+
+ BUG FIXES
+
+ HADOOP-7660. Maven generated .classpath does not include
+ "target/generated-test-source/java" as a source directory.
+ (Laxman via bobby)
+
+ HADOOP-8042 When copying a file out of HDFS, modifying it, and uploading
+ it back into HDFS, the put fails due to a CRC mismatch
+ (Daryn Sharp via bobby)
+
+ HADOOP-8035 Hadoop Maven site is inefficient and runs phases redundantly
+ (abayer via tucu)
+
+ HADOOP-8051 HttpFS documentation is not wired to the generated site (tucu)
+
+ HADOOP-8055. Hadoop tarball distribution lacks a core-site.xml (harsh)
+
+ HADOOP-8052. Hadoop Metrics2 should emit Float.MAX_VALUE (instead of
+ Double.MAX_VALUE) to avoid making Ganglia's gmetad core. (Varun Kapoor
+ via mattf)
+
+ HADOOP-8074. Small bug in hadoop error message for unknown commands.
+ (Colin Patrick McCabe via eli)
+
+ HADOOP-8082 add hadoop-client and hadoop-minicluster to the
+ dependency-management section. (tucu)
+
+ HADOOP-8066 The full docs build intermittently fails (abayer via tucu)
+
+ HADOOP-8083 javadoc generation for some modules is not done under target/ (tucu)
+
+ HADOOP-8036. TestViewFsTrash assumes the user's home directory is
+ 2 levels deep. (Colin Patrick McCabe via eli)
+
+ HADOOP-8046 Revert StaticMapping semantics to the existing ones, add DNS
+ mapping diagnostics in progress (stevel)
+
+ HADOOP-8057 hadoop-setup-conf.sh not working because of some extra spaces.
+ (Vinayakumar B via stevel)
+
+ HADOOP-7680 TestHardLink fails on Mac OS X, when gnu stat is in path.
+ (Milind Bhandarkar via stevel)
+
+ HADOOP-8050. Deadlock in metrics. (Kihwal Lee via mattf)
+
+ HADOOP-8131. FsShell put doesn't correctly handle a non-existent dir
+ (Daryn Sharp via bobby)
+
+ HADOOP-8123. Use java.home rather than env.JAVA_HOME for java in the
+ project. (Jonathan Eagles via acmurthy)
+
+ HADOOP-8064. Remove unnecessary dependency on w3c.org in document processing
+ (Kihwal Lee via bobby)
+
+ HADOOP-8140. dfs -getmerge should process its arguments better (Daryn Sharp
+ via bobby)
+
+ HADOOP-8164. Back slash as path separator is handled for Windows only.
+ (Daryn Sharp via suresh)
+
+ HADOOP-8173. FsShell needs to handle quoted metachars. (Daryn Sharp via
+ szetszwo)
+
+ HADOOP-8175. Add -p option to mkdir in FsShell. (Daryn Sharp via szetszwo)
+
+ HADOOP-8176. Disambiguate the destination of FsShell copies (Daryn Sharp
+ via bobby)
+
+ HADOOP-8208. Disallow self failover. (eli)
+
+Release 0.23.1 - 2012-02-17
+
+ INCOMPATIBLE CHANGES
+
+ NEW FEATURES
+
+ HADOOP-7777 Implement a base class for DNSToSwitchMapping implementations
+ that can offer extra topology information. (stevel)
+
+ HADOOP-7657. Add support for LZ4 compression. (Binglin Chang via todd)
+
+ HADOOP-7910. Add Configuration.getLongBytes to handle human readable byte size values. (Sho Shimauchi via harsh)
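+
+ A small sketch of the new accessor (the property name is only an
+ example):
+
+   import org.apache.hadoop.conf.Configuration;
+
+   Configuration conf = new Configuration();
+   conf.set("dfs.blocksize", "128m");  // human readable value
+   long bytes = conf.getLongBytes("dfs.blocksize", 64L * 1024 * 1024);
+   // bytes == 134217728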
+
+ IMPROVEMENTS
+
+ HADOOP-7801. HADOOP_PREFIX cannot be overridden. (Bruno Mahé via tomwhite)
+
+ HADOOP-7802. Hadoop scripts unconditionally source
+ "$bin"/../libexec/hadoop-config.sh. (Bruno Mahé via tomwhite)
+
+ HADOOP-7858. Drop some info logging to DEBUG level in IPC,
+ metrics, and HTTP. (todd via eli)
+
+ HADOOP-7424. Log an error if the topology script doesn't handle multiple args.
+ (Uma Maheswara Rao G via eli)
+
+ HADOOP-7804. Enable hadoop config generator to set configurations to enable
+ short circuit read. (Arpit Gupta via jitendra)
+
+ HADOOP-7877. Update balancer CLI usage documentation to include the new
+ -policy option. (szetszwo)
+
+ HADOOP-6840. Support non-recursive create() in FileSystem and
+ SequenceFile.Writer. (jitendra and eli via eli)
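+
+ A sketch of the non-recursive create; the overload shape is assumed
+ from the FileSystem API:
+
+   import org.apache.hadoop.conf.Configuration;
+   import org.apache.hadoop.fs.*;
+
+   FileSystem fs = FileSystem.get(new Configuration());
+   // throws an exception if /existing/dir is absent instead of
+   // silently creating the parent chain
+   FSDataOutputStream out = fs.createNonRecursive(
+       new Path("/existing/dir/part-00000"), true, 4096, (short) 3,
+       128L * 1024 * 1024, null);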
+
+ HADOOP-6886. LocalFileSystem Needs createNonRecursive API.
+ (Nicolas Spiegelberg and eli via eli)
+
+ HADOOP-7912. test-patch should run eclipse:eclipse to verify that it does
+ not break again. (Robert Joseph Evans via tomwhite)
+
+ HADOOP-7890. Redirect hadoop script's deprecation message to stderr.
+ (Koji Noguchi via mahadev)
+
+ HADOOP-7504. Add the missing Ganglia31 opts to hadoop-metrics.properties as a comment. (harsh)
+
+ HADOOP-7933. Add a getDelegationTokens api to FileSystem which checks
+ for known tokens in the passed Credentials object. (sseth)
+
+ HADOOP-7737. normalize hadoop-mapreduce & hadoop-dist dist/tar build with
+ common/hdfs. (tucu)
+
+ HADOOP-7743. Add Maven profile to create a full source tarball. (tucu)
+
+ HADOOP-7758. Make GlobFilter class public. (tucu)
+
+ HADOOP-7590. Mavenize streaming and MR examples. (tucu)
+
+ HADOOP-7934. Normalize dependencies versions across all modules. (tucu)
+
+ HADOOP-7348. Change 'addnl' in getmerge util to be a flag '-nl' instead.
+ (XieXianshan via harsh)
+
+ HADOOP-7975. Add LZ4 as an entry in the default codec list, missed by HADOOP-7657 (harsh)
+
+ HADOOP-4515. Configuration#getBoolean must not be case sensitive. (Sho Shimauchi via harsh)
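+
+ A tiny sketch of the behavior this fixes (the key is hypothetical):
+
+   import org.apache.hadoop.conf.Configuration;
+
+   Configuration conf = new Configuration();
+   conf.set("my.feature.enabled", "TRUE");   // hypothetical key
+   boolean on = conf.getBoolean("my.feature.enabled", false);
+   // on == true; "TRUE", "True" and "true" all parse the same way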
+
+ HADOOP-6490. Use StringUtils over String#replace in Path#normalizePath.
+ (Uma Maheswara Rao G via harsh)
+
+ HADOOP-7574. Improve FSShell -stat, add user/group elements.
+ (XieXianshan via harsh)
+
+ HADOOP-7736. Remove duplicate Path#normalizePath call. (harsh)
+
+ HADOOP-7919. Remove the unused hadoop.logfile.* properties from the
+ core-default.xml file. (harsh)
+
+ HADOOP-7939. Improve Hadoop subcomponent integration in Hadoop 0.23. (rvs via tucu)
+
+ HADOOP-8002. SecurityUtil acquired token message should be a debug rather than info.
+ (Arpit Gupta via mahadev)
+
+ HADOOP-8009. Create hadoop-client and hadoop-minicluster artifacts for downstream
+ projects. (tucu)
+
+ HADOOP-7470. Move up to Jackson 1.8.8. (Enis Soztutar via szetszwo)
+
+ HADOOP-8027. Visiting /jmx on the daemon web interfaces may print
+ unnecessary error in logs. (atm)
+
+ HADOOP-7792. Add verifyToken method to AbstractDelegationTokenSecretManager.
+ (jitendra)
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+
+ HADOOP-7811. TestUserGroupInformation#testGetServerSideGroups test fails in chroot.
+ (Jonathan Eagles via mahadev)
+
+ HADOOP-7813. Fix test-patch to use proper numerical comparison when checking
+ javadoc and findbugs warning counts. (Jonathan Eagles via tlipcon)
+
+ HADOOP-7841. Run tests with non-secure random. (tlipcon)
+
+ HADOOP-7851. Configuration.getClasses() never returns the default value.
+ (Uma Maheswara Rao G via amarrk)
+
+ HADOOP-7787. Make source tarball use conventional name.
+ (Bruno Mahé via tomwhite)
+
+ HADOOP-6614. RunJar should provide more diags when it can't create
+ a temp file. (Jonathan Hsieh via eli)
+
+ HADOOP-7859. TestViewFsHdfs.testgetFileLinkStatus is failing an assert. (eli)
+
+ HADOOP-7864. Building mvn site with Maven < 3.0.2 causes OOM errors.
+ (Andrew Bayer via eli)
+
+ HADOOP-7854. UGI getCurrentUser is not synchronized. (Daryn Sharp via jitendra)
+
+ HADOOP-7870. fix SequenceFile#createWriter with boolean
+ createParent arg to respect createParent. (Jon Hsieh via eli)
+
+ HADOOP-7898. Fix javadoc warnings in AuthenticationToken.java. (suresh)
+
+ HADOOP-7878 Regression: HADOOP-7777 switch changes break HDFS tests when the
+ isSingleSwitch() predicate is used. (stevel)
+
+ HADOOP-7914. Remove the duplicated declaration of hadoop-hdfs test-jar in
+ hadoop-project/pom.xml. (szetszwo)
+
+ HADOOP-7837. no NullAppender in the log4j config. (eli)
+
+ HADOOP-7948. Shell scripts created by hadoop-dist/pom.xml to build tar do not
+ properly propagate failure. (cim_michajlomatijkiw via tucu)
+
+ HADOOP-7949. Updated maxIdleTime default in the code to match
+ core-default.xml (eli)
+
+ HADOOP-7853. multiple javax security configurations cause conflicts.
+ (daryn via tucu)
+
+ HDFS-2614. hadoop dist tarball is missing hdfs headers. (tucu)
+
+ HADOOP-7874. native libs should be under lib/native/ dir. (tucu)
+
+ HADOOP-7887. KerberosAuthenticatorHandler is not setting
+ KerberosName name rules from configuration. (tucu)
+
+ HADOOP-7902. skipping name rules setting (if already set) should be done
+ on UGI initialization only. (tucu)
+
+ HADOOP-7810. move hadoop archive to core from tools. (tucu)
+
+ HADOOP-7917. compilation of protobuf files fails in windows/cygwin. (tucu)
+
+ HADOOP-7907. hadoop-tools JARs are not part of the distro. (tucu)
+
+ HADOOP-7936. There's a Hoop README in the root dir of the tarball. (tucu)
+
+ HADOOP-7963. Fix ViewFS to catch a null canonical service-name and pass
+ tests TestViewFileSystem* (Siddharth Seth via vinodkv)
+
+ HADOOP-7964. Deadlock in NetUtils and SecurityUtil class initialization.
+ (Daryn Sharp via suresh)
+
+ HADOOP-7974. TestViewFsTrash incorrectly determines the user's home
+ directory. (harsh via eli)
+
+ HADOOP-7971. Adding back job/pipes/queue commands to bin/hadoop for
+ backward compatibility. (Prashath Sharma via acmurthy)
+
+ HADOOP-7982. UserGroupInformation fails to login if thread's context
+ classloader can't load HadoopLoginModule. (todd)
+
+ HADOOP-7986. Adding config for MapReduce History Server protocol in
+ hadoop-policy.xml for service level authorization. (Mahadev Konar via vinodkv)
+
+ HADOOP-7981. Improve documentation for org.apache.hadoop.io.compress.
+ Decompressor.getRemaining (Jonathan Eagles via mahadev)
+
+ HADOOP-7997. SequenceFile.createWriter(...createParent...) no
+ longer works on existing file. (Gregory Chanan via eli)
+
+ HADOOP-7993. Hadoop ignores old-style config options for enabling compressed
+ output. (Anupam Seth via mahadev)
+
+ HADOOP-8000. fetchdt command not available in bin/hadoop.
+ (Arpit Gupta via mahadev)
+
+ HADOOP-7999. "hadoop archive" fails with ClassNotFoundException.
+ (Jason Lowe via mahadev)
+
+ HADOOP-8012. hadoop-daemon.sh and yarn-daemon.sh are trying to mkdir
+ and chown log/pid dirs which can fail. (Roman Shaposhnik via eli)
+
+ HADOOP-8013. ViewFileSystem does not honor setVerifyChecksum
+ (Daryn Sharp via bobby)
+
+ HADOOP-8054 NPE with FilterFileSystem (Daryn Sharp via bobby)
+
+Release 0.23.0 - 2011-11-01
+
+ INCOMPATIBLE CHANGES
+
+ HADOOP-6904. Support method based RPC compatibility. (hairong)
+
+ HADOOP-6432. Add Statistics support in FileContext. (jitendra)
+
+ HADOOP-7136. Remove failmon contrib component. (nigel)
+
+ NEW FEATURES
+
+ HADOOP-7324. Ganglia plugins for metrics v2. (Priyo Mustafi via llu)
+
+ HADOOP-7342. Add a utility API in FileUtil for JDK File.list to
+ avoid NPEs on File.list() (Bharath Mundlapudi via mattf)
+
+ HADOOP-7322. Adding a util method in FileUtil for directory listing,
+ avoid NPEs on File.listFiles() (Bharath Mundlapudi via mattf)
+
+ HADOOP-7023. Add listCorruptFileBlocks to FileSystem. (Patrick Kling
+ via hairong)
+
+ HADOOP-7096. Allow setting of end-of-record delimiter for TextInputFormat
+ (Ahmed Radwan via todd)
+
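+ For illustration, a minimal Java sketch of the HADOOP-7096 option above;
+ the property name "textinputformat.record.delimiter" is the key this
+ feature reads, but treat the exact key as an assumption for your version:
+
+     // Sketch: give TextInputFormat a custom end-of-record delimiter.
+     import org.apache.hadoop.conf.Configuration;
+
+     public class RecordDelimiterSketch {
+       public static void main(String[] args) {
+         Configuration conf = new Configuration();
+         // Records now end at ";" instead of at newlines (key assumed).
+         conf.set("textinputformat.record.delimiter", ";");
+       }
+     }
+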
+ HADOOP-6994. Api to get delegation token in AbstractFileSystem. (jitendra)
+
+ HADOOP-7171. Support UGI in FileContext API. (jitendra)
+
+ HADOOP-7257 Client side mount tables (sanjay)
+
+ HADOOP-6919. New metrics2 framework. (Luke Lu via acmurthy)
+
+ HADOOP-6920. Metrics instrumentation to move to the new metrics2 framework.
+ (Luke Lu via suresh)
+
+ HADOOP-7214. Add Common functionality necessary to provide an equivalent
+ of /usr/bin/groups for Hadoop. (Aaron T. Myers via todd)
+
+ HADOOP-6832. Add an authentication plugin using a configurable static user
+ for the web UI. (Owen O'Malley and Todd Lipcon via cdouglas)
+
+ HADOOP-7144. Expose JMX metrics via JSON servlet. (Robert Joseph Evans via
+ cdouglas)
+
+ HADOOP-7379. Add the ability to serialize and deserialize protocol buffers
+ in ObjectWritable. (todd)
+
+ HADOOP-7206. Support Snappy compression. (Issei Yoshida and
+ Alejandro Abdelnur via eli)
+
+ HADOOP-7329. Add the capability of getting an individual attribute of an mbean
+ using JMXProxyServlet. (tanping)
+
+ HADOOP-7380. Add client failover functionality to o.a.h.io.(ipc|retry).
+ (atm via eli)
+
+ HADOOP-7460. Support pluggable trash policies. (Usman Masoon via suresh)
+
+ HADOOP-6385. dfs should support -rmdir (was HDFS-639). (Daryn Sharp
+ via mattf)
+
+ HADOOP-7119. add Kerberos HTTP SPNEGO authentication support to Hadoop
+ JT/NN/DN/TT web-consoles. (Alejandro Abdelnur via atm)
+
+ IMPROVEMENTS
+
+ HADOOP-7655. Provide a small validation script that smoke tests the installed
+ cluster. (Arpit Gupta via mattf)
+
+ HADOOP-7042. Updates to test-patch.sh to include failed test names and
+ improve other messaging. (nigel)
+
+ HADOOP-7001. Configuration changes can occur via the Reconfigurable
+ interface. (Patrick Kling via dhruba)
+
+ HADOOP-6764. Add number of reader threads and queue length as
+ configuration parameters in RPC.getServer. (Dmytro Molkov via hairong)
+
+ HADOOP-7049. TestReconfiguration should use JUnit v4.
+ (Patrick Kling via eli)
+
+ HADOOP-7054 Change NN LoadGenerator to use FileContext APIs
+ (Sanjay Radia)
+
+ HADOOP-7060. A more elegant FileSystem#listCorruptFileBlocks API.
+ (Patrick Kling via hairong)
+
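+ As illustration only, a sketch of the reworked API, assuming the
+ FileSystem#listCorruptFileBlocks(Path) form that returns a RemoteIterator:
+
+     import java.io.IOException;
+     import org.apache.hadoop.conf.Configuration;
+     import org.apache.hadoop.fs.FileSystem;
+     import org.apache.hadoop.fs.Path;
+     import org.apache.hadoop.fs.RemoteIterator;
+
+     public class CorruptBlocksSketch {
+       public static void main(String[] args) throws IOException {
+         FileSystem fs = FileSystem.get(new Configuration());
+         // Iterate over paths of files that contain corrupt blocks.
+         RemoteIterator<Path> it = fs.listCorruptFileBlocks(new Path("/data"));
+         while (it.hasNext()) {
+           System.out.println("corrupt: " + it.next());
+         }
+       }
+     }
+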
+ HADOOP-7058. Expose number of bytes in FSOutputSummer buffer to
+ implementations. (Todd Lipcon via hairong)
+
+ HADOOP-7061. imprecise javadoc for CompressionCodec. (Jingguo Yao via eli)
+
+ HADOOP-7059. Remove "unused" warning in native code. (Noah Watkins via eli)
+
+ HADOOP-6864. Provide a JNI-based implementation of
+ ShellBasedUnixGroupsNetgroupMapping
+ (implementation of GroupMappingServiceProvider) (Erik Seffl via boryas)
+
+ HADOOP-7078. Improve javadocs for RawComparator interface.
+ (Harsh J Chouraria via todd)
+
+ HADOOP-6995. Allow wildcards to be used in ProxyUsers configurations.
+ (todd)
+
+ HADOOP-6376. Add a comment header to conf/slaves that specifies the file
+ format. (Kay Kay via todd)
+
+ HADOOP-7151. Document need for stable hashCode() in WritableComparable.
+ (Dmitriy V. Ryaboy via todd)
+
+ HADOOP-7112. Issue a warning when GenericOptionsParser libjars are not on
+ local filesystem. (tomwhite)
+
+ HADOOP-7114. FsShell should dump all exceptions at DEBUG level.
+ (todd via tomwhite)
+
+ HADOOP-7159. RPC server should log the client hostname when a read
+ exception happens. (Scott Chen via todd)
+
+ HADOOP-7167. Allow using a file to exclude certain tests from build. (todd)
+
+ HADOOP-7133. Batch the calls in DataStorage to FileUtil.createHardLink().
+ (Matt Foley via jghoman)
+
+ HADOOP-7166. Add DaemonFactory to common. (Erik Steffl & jitendra)
+
+ HADOOP-7175. Add isEnabled() to Trash. (Daryn Sharp via szetszwo)
+
+ HADOOP-7180. Better support in CommandFormat for the API and exceptions.
+ (Daryn Sharp via szetszwo)
+
+ HADOOP-7202. Improve shell Command base class. (Daryn Sharp via szetszwo)
+
+ HADOOP-7224. Add CommandFactory to shell. (Daryn Sharp via szetszwo)
+
+ HADOOP-7014. Generalize CLITest structure and interfaces to facilitate
+ upstream adoption (e.g. for web testing). (cos)
+
+ HADOOP-7230. Move "fs -help" shell command tests from HDFS to COMMON; see
+ also HDFS-1844. (Daryn Sharp via szetszwo)
+
+ HADOOP-7233. Refactor ls to conform to new FsCommand class. (Daryn Sharp
+ via szetszwo)
+
+ HADOOP-7235. Refactor the tail command to conform to new FsCommand class.
+ (Daryn Sharp via szetszwo)
+
+ HADOOP-7179. Federation: Improve HDFS startup scripts. (Erik Steffl
+ and Tanping Wang via suresh)
+
+ HADOOP-7227. Remove protocol version check at proxy creation in Hadoop
+ RPC. (jitendra)
+
+ HADOOP-7236. Refactor the mkdir command to conform to new FsCommand class.
+ (Daryn Sharp via szetszwo)
+
+ HADOOP-7250. Refactor the setrep command to conform to new FsCommand class.
+ (Daryn Sharp via szetszwo)
+
+ HADOOP-7249. Refactor the chmod/chown/chgrp command to conform to new
+ FsCommand class. (Daryn Sharp via szetszwo)
+
+ HADOOP-7251. Refactor the getmerge command to conform to new FsCommand
+ class. (Daryn Sharp via szetszwo)
+
+ HADOOP-7265. Keep track of relative paths in PathData. (Daryn Sharp
+ via szetszwo)
+
+ HADOOP-7238. Refactor the cat and text commands to conform to new FsCommand
+ class. (Daryn Sharp via szetszwo)
+
+ HADOOP-7271. Standardize shell command error messages. (Daryn Sharp
+ via szetszwo)
+
+ HADOOP-7272. Remove unnecessary security related info logs. (suresh)
+
+ HADOOP-7275. Refactor the stat command to conform to new FsCommand
+ class. (Daryn Sharp via szetszwo)
+
+ HADOOP-7237. Refactor the touchz command to conform to new FsCommand
+ class. (Daryn Sharp via szetszwo)
+
+ HADOOP-7267. Refactor the rm/rmr/expunge commands to conform to new
+ FsCommand class. (Daryn Sharp via szetszwo)
+
+ HADOOP-7285. Refactor the test command to conform to new FsCommand
+ class. (Daryn Sharp via todd)
+
+ HADOOP-7289. In ivy.xml, test conf should not extend common conf.
+ (Eric Yang via szetszwo)
+
+ HADOOP-7291. Update Hudson job not to run test-contrib. (Nigel Daley via eli)
+
+ HADOOP-7286. Refactor the du/dus/df commands to conform to new FsCommand
+ class. (Daryn Sharp via todd)
+
+ HADOOP-7301. FSDataInputStream should expose a getWrappedStream method.
+ (Jonathan Hsieh via eli)
+
+ HADOOP-7306. Start metrics system even if config files are missing
+ (Luke Lu via todd)
+
+ HADOOP-7302. webinterface.private.actions should be renamed and moved to
+ the MapReduce project. (Ari Rabkin via todd)
+
+ HADOOP-7329. Improve help message for "df" to include "-h" flag.
+ (Xie Xianshan via todd)
+
+ HADOOP-7320. Refactor the copy and move commands to conform to new
+ FsCommand class. (Daryn Sharp via todd)
+
+ HADOOP-7312. Update value of hadoop.common.configuration.version.
+ (Harsh J Chouraria via todd)
+
+ HADOOP-7337. Change PureJavaCrc32 annotations to public stable. (szetszwo)
+
+ HADOOP-7331. Make hadoop-daemon.sh return exit code 1 if daemon processes
+ did not get started. (Tanping Wang via todd)
+
+ HADOOP-7316. Add public javadocs to FSDataInputStream and
+ FSDataOutputStream. (eli)
+
+ HADOOP-7323. Add capability to resolve compression codec based on codec
+ name. (Alejandro Abdelnur via tomwhite)
+
+ HADOOP-1886. Undocumented parameters in FileSystem. (Frank Conrad via eli)
+
+ HADOOP-7375. Add resolvePath method to FileContext. (Sanjay Radia via eli)
+
+ HADOOP-7383. HDFS needs to export protobuf library dependency in pom.
+ (todd via eli)
+
+ HADOOP-7374. Don't add tools.jar to the classpath when running Hadoop.
+ (eli)
+
+ HADOOP-7106. Reorganize project SVN layout to "unsplit" the projects.
+ (todd, nigel)
+
+ HADOOP-6605. Add JAVA_HOME detection to hadoop-config. (eli)
+
+ HADOOP-7384. Allow test-patch to be more flexible about patch format. (todd)
+
+ HADOOP-6929. RPC should have a way to pass Security information other than
+ protocol annotations. (sharad and omalley via mahadev)
+
+ HADOOP-7385. Remove StringUtils.stringifyException(ie) in logger functions.
+ (Bharath Mundlapudi via Tanping Wang).
+
+ HADOOP-310. Additional constructor requested in BytesWritable. (Brock
+ Noland via atm)
+
+ HADOOP-7429. Add another IOUtils#copyBytes method. (eli)
+
+ HADOOP-7451. Generalize StringUtils#join. (Chris Douglas via mattf)
+
+ HADOOP-7449. Add Data(In,Out)putByteBuffer to work with ByteBuffer similar
+ to Data(In,Out)putBuffer for byte[]. Merge from yahoo-merge branch,
+ -r 1079163. Fix missing Apache license headers. (Chris Douglas via mattf)
+
+ HADOOP-7361. Provide an option, -overwrite/-f, in put and copyFromLocal
+ shell commands. (Uma Maheswara Rao G via szetszwo)
+
+ HADOOP-7430. Improve error message when moving to trash fails due to
+ quota issue. (Ravi Prakash via mattf)
+
+ HADOOP-7444. Add Checksum API to verify and calculate checksums "in bulk"
+ (todd)
+
+ HADOOP-7443. Add CRC32C as another DataChecksum implementation (todd)
+
+ HADOOP-7305. Eclipse project files are incomplete. (Niels Basjes via eli)
+
+ HADOOP-7314. Add support for throwing UnknownHostException when a host doesn't
+ resolve. (Jeffrey Naisbitt via jitendra)
+
+ HADOOP-7465. Several tiny improvements to the LOG format.
+ (Xie Xianshan via eli)
+
+ HADOOP-7434. Display error when using "daemonlog -setlevel" with
+ illegal level. (yanjinshuang via eli)
+
+ HADOOP-7463. Adding a configuration parameter to SecurityInfo interface.
+ (mahadev)
+
+ HADOOP-7298. Add test utility for writing multi-threaded tests. (todd and
+ Harsh J Chouraria via todd)
+
+ HADOOP-7485. Add -h option to ls to list file sizes in human readable
+ format. (XieXianshan via suresh)
+
+ HADOOP-7378. Add -d option to ls to not expand directories.
+ (Daryn Sharp via suresh)
+
+ HADOOP-7474. Refactor ClientCache out of WritableRpcEngine. (jitendra)
+
+ HADOOP-7491. hadoop command should respect HADOOP_OPTS when given
+ a class name. (eli)
+
+ HADOOP-7178. Add a parameter, useRawLocalFileSystem, to copyToLocalFile(..)
+ in FileSystem. (Uma Maheswara Rao G via szetszwo)
+
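+ A minimal sketch of the parameter added by HADOOP-7178, assuming the
+ four-argument copyToLocalFile overload described above; passing true for
+ useRawLocalFileSystem is intended to skip the local .crc checksum files:
+
+     import java.io.IOException;
+     import org.apache.hadoop.conf.Configuration;
+     import org.apache.hadoop.fs.FileSystem;
+     import org.apache.hadoop.fs.Path;
+
+     public class CopyToLocalSketch {
+       public static void main(String[] args) throws IOException {
+         FileSystem fs = FileSystem.get(new Configuration());
+         // delSrc=false; the last arg requests the raw local file system,
+         // so no .crc file is written beside the local copy.
+         fs.copyToLocalFile(false, new Path("/data/part-00000"),
+             new Path("file:///tmp/part-00000"), true);
+       }
+     }
+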
+ HADOOP-6671. Use maven for hadoop common builds. (Alejandro Abdelnur
+ via tomwhite)
+
+ HADOOP-7502. Make generated sources IDE friendly.
+ (Alejandro Abdelnur via llu)
+
+ HADOOP-7501. Publish Hadoop Common artifacts (post HADOOP-6671) to Apache
+ SNAPSHOTs repo. (Alejandro Abdelnur via tomwhite)
+
+ HADOOP-7525. Make arguments to test-patch optional. (tomwhite)
+
+ HADOOP-7472. RPC client should deal with IP address change.
+ (Kihwal Lee via suresh)
+
+ HADOOP-7499. Add method for doing a sanity check on hostnames in NetUtils.
+ (Jeffrey Naisbit via mahadev)
+
+ HADOOP-6158. Move CyclicIteration to HDFS. (eli)
+
+ HADOOP-7526. Add TestPath tests for URI conversion and reserved
+ characters. (eli)
+
+ HADOOP-7531. Add servlet util methods for handling paths in requests. (eli)
+
+ HADOOP-7493. Add ShortWritable. (Uma Maheswara Rao G via szetszwo)
+
+ HADOOP-7555. Add eclipse-generated files to .gitignore. (atm)
+
+ HADOOP-7264. Bump avro version to at least 1.4.1. (Alejandro Abdelnur via
+ tomwhite)
+
+ HADOOP-7498. Remove legacy TAR layout creation. (Alejandro Abdelnur via
+ tomwhite)
+
+ HADOOP-7496. Break Maven TAR & bintar profiles into just LAYOUT & TAR proper.
+ (Alejandro Abdelnur via tomwhite)
+
+ HADOOP-7561. Make test-patch only run tests for changed modules. (tomwhite)
+
+ HADOOP-7547. Add generic type in WritableComparable subclasses.
+ (Uma Maheswara Rao G via szetszwo)
+
+ HADOOP-7579. Rename package names from alfredo to auth.
+ (Alejandro Abdelnur via szetszwo)
+
+ HADOOP-7594. Support HTTP REST in HttpServer. (szetszwo)
+
+ HADOOP-7552. FileUtil#fullyDelete doesn't throw IOE but lists it
+ in the throws clause. (eli)
+
+ HADOOP-7580. Add a version of getLocalPathForWrite to LocalDirAllocator
+ which doesn't create dirs. (Chris Douglas & Siddharth Seth via acmurthy)
+
+ HADOOP-7507. Allow ganglia metrics to include the metrics system tags
+ in the gmetric names. (Alejandro Abdelnur via todd)
+
+ HADOOP-7612. Change test-patch to run tests for all nested modules.
+ (tomwhite)
+
+ HADOOP-7599. Script improvements to setup a secure Hadoop cluster
+ (Eric Yang via ddas)
+
+ HADOOP-7639. Enhance HttpServer to allow passing path-specs for filtering,
+ so that servers like the Yarn WebApp can have the paths served by their
+ own injected servlets filtered. (Thomas Graves via vinodkv)
+
+ HADOOP-7575. Enhanced LocalDirAllocator to support fully-qualified
+ paths. (Jonathan Eagles via vinodkv)
+
+ HADOOP-7469 Add a standard handler for socket connection problems which
+ improves diagnostics (Uma Maheswara Rao G and stevel via stevel)
+
+ HADOOP-7710. Added hadoop-setup-application.sh for creating
+ application directory (Arpit Gupta via Eric Yang)
+
+ HADOOP-7707. Added toggle for dfs.support.append, webhdfs and hadoop proxy
+ user to setup config script. (Arpit Gupta via Eric Yang)
+
+ HADOOP-7720. Added parameter for HBase user to setup config script.
+ (Arpit Gupta via Eric Yang)
+
+ HADOOP-7624. Set things up for a top level hadoop-tools module. (tucu)
+
+ HADOOP-7627. Improve MetricsAsserts to give more understandable output
+ on failure. (todd)
+
+ HADOOP-7642. create hadoop-dist module where TAR stitching would happen.
+ (Thomas White via tucu)
+
+ HADOOP-7709. Running a set of methods in a Single Test Class.
+ (Jonathan Eagles via mahadev)
+
+ HADOOP-7705. Add a log4j back end that can push out JSON data,
+ one per line. (stevel)
+
+ HADOOP-7749. Add a NetUtils createSocketAddr call which provides more
+ help in exception messages. (todd)
+
+ HADOOP-7762. Common side of MR-2736. (eli)
+
+ HADOOP-7668. Add a NetUtils method that can tell if an InetAddress
+ belongs to local host. (suresh)
+
+ HADOOP-7509. Improve exception message thrown when Authentication is
+ required. (Ravi Prakash via suresh)
+
+ HADOOP-7745. Fix wrong variable name in exception message introduced
+ in HADOOP-7509. (Ravi Prakash via suresh)
+
+ MAPREDUCE-2764. Fix renewal of dfs delegation tokens. (Owen via jitendra)
+
+ HADOOP-7360. Preserve relative paths that do not contain globs in FsShell.
+ (Daryn Sharp and Kihwal Lee via szetszwo)
+
+ HADOOP-7771. FsShell -copyToLocal, -get, etc. commands throw NPE if the
+ destination directory does not exist. (John George and Daryn Sharp
+ via szetszwo)
+
+ HADOOP-7782. Aggregate project javadocs. (tomwhite)
+
+ HADOOP-7789. Improvements to site navigation. (acmurthy)
+
+ OPTIMIZATIONS
+
+ HADOOP-7333. Performance improvement in PureJavaCrc32. (Eric Caspole
+ via todd)
+
+ HADOOP-7445. Implement bulk checksum verification using efficient native
+ code. (todd)
+
+ HADOOP-7753. Support fadvise and sync_file_range in NativeIO. Add
+ ReadaheadPool infrastructure for use in HDFS and MR. (todd)
+
+ HADOOP-7446. Implement CRC32C native code using SSE4.2 instructions.
+ (Kihwal Lee and todd via todd)
+
+ HADOOP-7763. Add top-level navigation to APT docs. (tomwhite)
+
+ HADOOP-7785. Add equals, hashcode, toString to DataChecksum (todd)
+
+ BUG FIXES
+
+ HADOOP-7740. Fixed security audit logger configuration. (Arpit Gupta via Eric Yang)
+
+ HADOOP-7630. hadoop-metrics2.properties should have a property *.period
+ set to a default value for metrics. (Eric Yang via mattf)
+
+ HADOOP-7327. FileSystem.listStatus() throws NullPointerException instead of
+ IOException upon access permission failure. (mattf)
+
+ HADOOP-7015. RawLocalFileSystem#listStatus does not deal with a directory
+ whose entries are changing (e.g. in a multi-thread or multi-process
+ environment). (Sanjay Radia via eli)
+
+ HADOOP-7045. TestDU fails on systems with local file systems with
+ extended attributes. (eli)
+
+ HADOOP-6939. Inconsistent lock ordering in
+ AbstractDelegationTokenSecretManager. (Todd Lipcon via tomwhite)
+
+ HADOOP-7129. Fix typo in method name getProtocolSigature (todd)
+
+ HADOOP-7048. Wrong description of Block-Compressed SequenceFile Format in
+ SequenceFile's javadoc. (Jingguo Yao via tomwhite)
+
+ HADOOP-7153. MapWritable violates contract of Map interface for equals()
+ and hashCode(). (Nicholas Telford via todd)
+
+ HADOOP-6754. DefaultCodec.createOutputStream() leaks memory.
+ (Aaron Kimball via tomwhite)
+
+ HADOOP-7098. Tasktracker property not set in conf/hadoop-env.sh.
+ (Bernd Fondermann via tomwhite)
+
+ HADOOP-7131. Exceptions thrown by Text methods should include the causing
+ exception. (Uma Maheswara Rao G via todd)
+
+ HADOOP-6912. Guard against NPE when calling UGI.isLoginKeytabBased().
+ (Kan Zhang via jitendra)
+
+ HADOOP-7204. remove local unused fs variable from CmdHandler
+ and FsShellPermissions.changePermissions (boryas)
+
+ HADOOP-7210. Chown command is not working from FSShell
+ (Uma Maheswara Rao G via todd)
+
+ HADOOP-7215. RPC clients must use network interface corresponding to
+ the host in the client's kerberos principal key. (suresh)
+
+ HADOOP-7019. Refactor build targets to enable faster cross project dev
+ cycles. (Luke Lu via cos)
+
+ HADOOP-7216. Add FsCommand.runAll() with deprecated annotation for the
+ transition of Command base class improvement. (Daryn Sharp via szetszwo)
+
+ HADOOP-7207. fs member of FSShell is not really needed (boryas)
+
+ HADOOP-7223. FileContext createFlag combinations are not clearly defined.
+ (suresh)
+
+ HADOOP-7231. Fix synopsis for -count. (Daryn Sharp via eli).
+
+ HADOOP-7261. Disable IPV6 for junit tests. (suresh)
+
+ HADOOP-7268. FileContext.getLocalFSFileContext() behavior needs to be fixed
+ w.r.t tokens. (jitendra)
+
+ HADOOP-7290. Unit test failure in
+ TestUserGroupInformation.testGetServerSideGroups. (Trevor Robison via eli)
+
+ HADOOP-7292. Fix racy test case TestSinkQueue. (Luke Lu via todd)
+
+ HADOOP-7282. ipc.Server.getRemoteIp() may return null. (John George
+ via szetszwo)
+
+ HADOOP-7208. Fix implementation of equals() and hashCode() in
+ StandardSocketFactory. (Uma Maheswara Rao G via todd)
+
+ HADOOP-7336. TestFileContextResolveAfs will fail with default
+ test.build.data property. (jitendra)
+
+ HADOOP-7284 Trash and shell's rm do not work for viewfs (Sanjay Radia)
+
+ HADOOP-7341. Fix options parsing in CommandFormat (Daryn Sharp via todd)
+
+ HADOOP-7353. Cleanup FsShell and prevent masking of RTE stack traces.
+ (Daryn Sharp via todd)
+
+ HADOOP-7356. RPM packages broke bin/hadoop script in developer environment.
+ (Eric Yang via todd)
+
+ HADOOP-7389. Use of TestingGroups by tests causes subsequent tests to fail.
+ (atm via tomwhite)
+
+ HADOOP-7377. Fix command name handling affecting DFSAdmin. (Daryn Sharp
+ via mattf)
+
+ HADOOP-7402. TestConfiguration doesn't clean up after itself. (atm via eli)
+
+ HADOOP-7428. IPC connection is orphaned with null 'out' member.
+ (todd via eli)
+
+ HADOOP-7437. IOUtils.copyBytes will suppress the stream closure exceptions.
+ (Uma Maheswara Rao G via szetszwo)
+
+ HADOOP-7090. Fix resource leaks in s3.INode, BloomMapFile, WritableUtils
+ and CBZip2OutputStream. (Uma Maheswara Rao G via szetszwo)
+
+ HADOOP-7440. HttpServer.getParameterValues throws NPE for missing
+ parameters. (Uma Maheswara Rao G and todd via todd)
+
+ HADOOP-7442. Docs in core-default.xml still reference deprecated config
+ "topology.script.file.name" (atm)
+
+ HADOOP-7419. new hadoop-config.sh doesn't manage classpath for
+ HADOOP_CONF_DIR correctly. (Bing Zheng and todd via todd)
+
+ HADOOP-7448. merge from yahoo-merge branch (via mattf):
+ -r 1079157: Fix content type for /stacks servlet to be
+ plain text (Luke Lu)
+ -r 1079164: No need to escape plain text (Luke Lu)
+
+ HADOOP-7471. The saveVersion.sh script sometimes fails to extract SVN URL.
+ (Alejandro Abdelnur via eli)
+
+ HADOOP-2081. Configuration getInt, getLong, and getFloat replace
+ invalid numbers with the default value. (Harsh J via eli)
+
+ HADOOP-7111. Several TFile tests failing when native libraries are
+ present. (atm)
+
+ HADOOP-7438. Fix deprecated warnings from hadoop-daemon.sh script.
+ (Ravi Prakash via suresh)
+
+ HADOOP-7468 hadoop-core JAR contains a log4j.properties file.
+ (Jolly Chen)
+
+ HADOOP-7508. Compiled nativelib is in wrong directory and it is not picked
+ up by surefire setup. (Alejandro Abdelnur via tomwhite)
+
+ HADOOP-7520. Fix to add distribution management info to hadoop-main
+ (Alejandro Abdelnur via gkesavan)
+
+ HADOOP-7515. test-patch reports the wrong number of javadoc warnings.
+ (tomwhite)
+
+ HADOOP-7523. Test org.apache.hadoop.fs.TestFilterFileSystem fails due to
+ java.lang.NoSuchMethodException. (John Lee via tomwhite)
+
+ HADOOP-7528. Maven build fails in Windows. (Alejandro Abdelnur via
+ tomwhite)
+
+ HADOOP-7533. Allow test-patch to be run from any subproject directory.
+ (tomwhite)
+
+ HADOOP-7512. Fix example mistake in WritableComparable javadocs.
+ (Harsh J via eli)
+
+ HADOOP-7357. hadoop.io.compress.TestCodec#main() should exit with
+ non-zero exit code if test failed. (Philip Zeyliger via eli)
+
+ HADOOP-6622. Token should not print the password in toString. (eli)
+
+ HADOOP-7529. Fix lock cycles in metrics system. (llu)
+
+ HADOOP-7545. Common -tests JAR should not include properties and configs.
+ (todd)
+
+ HADOOP-7536. Correct the dependency version regressions introduced in
+ HADOOP-6671. (Alejandro Abdelnur via tomwhite)
+
+ HADOOP-7566. MR tests are failing with webapps/hdfs not found in CLASSPATH.
+ (Alejandro Abdelnur via mahadev)
+
+ HADOOP-7567. 'mvn eclipse:eclipse' fails for hadoop-alfredo (auth).
+ (Alejandro Abdelnur via tomwhite)
+
+ HADOOP-7563. Set up HADOOP_HDFS_HOME, HADOOP_MAPRED_HOME and correct the
+ classpath. (Eric Yang via acmurthy)
+
+ HADOOP-7560. Change src layout to be hierarchical. (Alejandro Abdelnur
+ via acmurthy)
+
+ HADOOP-7576. Fix findbugs warnings and javac warnings in hadoop-auth.
+ (szetszwo)
+
+ HADOOP-7593. Fix AssertionError in TestHttpServer.testMaxThreads().
+ (Uma Maheswara Rao G via szetszwo)
+
+ HADOOP-7598. Fix smart-apply-patch.sh to handle patching from a sub
+ directory correctly. (Robert Evans via acmurthy)
+
+ HADOOP-7328. When a serializer class is missing, return null, not throw
+ an NPE. (Harsh J Chouraria via todd)
+
+ HADOOP-7626. Bugfix for a config generator (Eric Yang via ddas)
+
+ HADOOP-7629. Allow immutable FsPermission objects to be used as IPC
+ parameters. (todd)
+
+ HADOOP-7608. SnappyCodec check for Hadoop native lib is wrong
+ (Alejandro Abdelnur via todd)
+
+ HADOOP-7637. Fix to include FairScheduler configuration file in
+ RPM. (Eric Yang via ddas)
+
+ HADOOP-7633. Adds log4j.properties to the hadoop-conf dir on
+ deploy (Eric Yang via ddas)
+
+ HADOOP-7631. Fixes a config problem to do with running streaming jobs
+ (Eric Yang via ddas)
+
+ HADOOP-7662. Fixed logs servlet to use the pathspec '/*' instead of '/'
+ for correct filtering. (Thomas Graves via vinodkv)
+
+ HADOOP-7691. Fixed conflicting uid for install packages. (Eric Yang)
+
+ HADOOP-7603. Set hdfs, mapred uid, and hadoop uid to fixed numbers.
+ (Eric Yang)
+
+ HADOOP-7658. Fixed HADOOP_SECURE_DN_USER environment variable in
+ hadoop-env.sh (Eric Yang)
+
+ HADOOP-7684. Added init.d script for jobhistory server and
+ secondary namenode. (Eric Yang)
+
+ HADOOP-7715. Removed unnecessary security logger configuration. (Eric Yang)
+
+ HADOOP-7685. Improved directory ownership check function in
+ hadoop-setup-conf.sh. (Eric Yang)
+
+ HADOOP-7711. Fixed recursive sourcing of HADOOP_OPTS environment
+ variables (Arpit Gupta via Eric Yang)
+
+ HADOOP-7681. Fixed security and hdfs audit log4j properties
+ (Arpit Gupta via Eric Yang)
+
+ HADOOP-7708. Fixed hadoop-setup-conf.sh to handle config files
+ consistently. (Eric Yang)
+
+ HADOOP-7724. Fixed hadoop-setup-conf.sh to put proxy user in
+ core-site.xml. (Arpit Gupta via Eric Yang)
+
+ HADOOP-7755. Detect MapReduce PreCommit Trunk builds silently failing
+ when running test-patch.sh. (Jonathan Eagles via tomwhite)
+
+ HADOOP-7744. Ensure failed tests exit with proper error code. (Jonathan
+ Eagles via acmurthy)
+
+ HADOOP-7764. Allow HttpServer to set both ACL list and path spec filters.
+ (Jonathan Eagles via acmurthy)
+
+ HADOOP-7766. The auth to local mappings are not being respected, with webhdfs
+ and security enabled. (jitendra)
+
+ HADOOP-7721. Add log before login in KerberosAuthenticationHandler.
+ (jitendra)
+
+ HADOOP-7778. FindBugs warning in Token.getKind(). (tomwhite)
+
+ HADOOP-7798. Add support for gpg signatures for maven release artifacts.
+ (cutting via acmurthy)
+
+ HADOOP-7797. Fix top-level pom.xml to refer to correct staging maven
+ repository. (omalley via acmurthy)
+
+ HADOOP-7101. UserGroupInformation.getCurrentUser() fails when called from
+ non-Hadoop JAAS context. (todd)
+
+Release 0.22.1 - Unreleased
+
+ INCOMPATIBLE CHANGES
+
+ NEW FEATURES
+
+ IMPROVEMENTS
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+
+ HADOOP-7937. Forward port SequenceFile#syncFs and friends from Hadoop 1.x.
+ (tomwhite)
+
+Release 0.22.0 - 2011-11-29
+
+ INCOMPATIBLE CHANGES
+
+ HADOOP-7137. Remove hod contrib. (nigel via eli)
+
+ NEW FEATURES
+
+ HADOOP-6791. Refresh for proxy superuser config
+ (common part for HDFS-1096) (boryas)
+
+ HADOOP-6581. Add authenticated TokenIdentifiers to UGI so that
+ they can be used for authorization (Kan Zhang and Jitendra Pandey
+ via jghoman)
+
+ HADOOP-6584. Provide Kerberized SSL encryption for webservices.
+ (jghoman and Kan Zhang via jghoman)
+
+ HADOOP-6853. Common component of HDFS-1045. (jghoman)
+
+ HADOOP-6859 - Introduce additional statistics to FileSystem to track
+ file system operations (suresh)
+
+ HADOOP-6870. Add a new API getFiles to FileSystem and FileContext that
+ lists all files under the input path or the subtree rooted at the
+ input path if recursive is true. Block locations are returned together
+ with each file's status. (hairong)
+
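+ A minimal sketch of the recursive listing described above, using the
+ iterator form the API eventually took (FileSystem#listFiles, later
+ refined by HADOOP-6890 below):
+
+     import java.io.IOException;
+     import org.apache.hadoop.conf.Configuration;
+     import org.apache.hadoop.fs.FileSystem;
+     import org.apache.hadoop.fs.LocatedFileStatus;
+     import org.apache.hadoop.fs.Path;
+     import org.apache.hadoop.fs.RemoteIterator;
+
+     public class ListFilesSketch {
+       public static void main(String[] args) throws IOException {
+         FileSystem fs = FileSystem.get(new Configuration());
+         // Recursive listing; block locations ride along with each status.
+         RemoteIterator<LocatedFileStatus> it =
+             fs.listFiles(new Path("/data"), true);
+         while (it.hasNext()) {
+           LocatedFileStatus st = it.next();
+           System.out.println(st.getPath() + " blocks="
+               + st.getBlockLocations().length);
+         }
+       }
+     }
+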
+ HADOOP-6888. Add a new FileSystem API closeAllForUGI(..) for closing all
+ file systems associated with a particular UGI. (Devaraj Das and Kan Zhang
+ via szetszwo)
+
+ HADOOP-6892. Common component of HDFS-1150 (Verify datanodes' identities
+ to clients in secure clusters) (jghoman)
+
+ HADOOP-6889. Make RPC to have an option to timeout. (hairong)
+
+ HADOOP-6996. Allow CodecFactory to return a codec object given a codec's
+ class name. (hairong)
+
+ HADOOP-7013. Add boolean field isCorrupt to BlockLocation.
+ (Patrick Kling via hairong)
+
+ HADOOP-6978. Adds support for NativeIO using JNI.
+ (Todd Lipcon, Devaraj Das & Owen O'Malley via ddas)
+
+ HADOOP-7134. configure files that are generated as part of the released
+ tarball need to have executable bit set. (Roman Shaposhnik via cos)
+
+ IMPROVEMENTS
+
+ HADOOP-6644. util.Shell getGROUPS_FOR_USER_COMMAND method name
+ - should use common naming convention (boryas)
+
+ HADOOP-6778. add isRunning() method to
+ AbstractDelegationTokenSecretManager (for HDFS-1044) (boryas)
+
+ HADOOP-6633. normalize property names for JT/NN kerberos principal
+ names in configuration (boryas)
+
+ HADOOP-6627. "Bad Connection to FS" message in FSShell should print
+ message from the exception (boryas)
+
+ HADOOP-6600. mechanism for authorization check for inter-server
+ protocols. (boryas)
+
+ HADOOP-6623. Add StringUtils.split for non-escaped single-character
+ separator. (Todd Lipcon via tomwhite)
+
+ HADOOP-6761. The Trash Emptier has the ability to run more frequently.
+ (Dmytro Molkov via dhruba)
+
+ HADOOP-6714. Resolve compressed files using CodecFactory in FsShell::text.
+ (Patrick Angeles via cdouglas)
+
+ HADOOP-6661. User document for UserGroupInformation.doAs.
+ (Jitendra Pandey via jghoman)
+
+ HADOOP-6674. Makes use of the SASL authentication options in the
+ SASL RPC. (Jitendra Pandey via ddas)
+
+ HADOOP-6526. Need mapping from long principal names to local OS
+ user names. (boryas)
+
+ HADOOP-6814. Adds an API in UserGroupInformation to get the real
+ authentication method of a passed UGI. (Jitendra Pandey via ddas)
+
+ HADOOP-6756. Documentation for common configuration keys.
+ (Erik Steffl via shv)
+
+ HADOOP-6835. Add support for concatenated gzip input. (Greg Roelofs via
+ cdouglas)
+
+ HADOOP-6845. Renames the TokenStorage class to Credentials.
+ (Jitendra Pandey via ddas)
+
+ HADOOP-6826. FileStatus needs unit tests. (Rodrigo Schmidt via Eli
+ Collins)
+
+ HADOOP-6905. add buildDTServiceName method to SecurityUtil
+ (as part of MAPREDUCE-1718) (boryas)
+
+ HADOOP-6632. Adds support for using different keytabs for different
+ servers in a Hadoop cluster. In the earlier implementation, all servers
+ of a certain type (like TaskTracker), would have the same keytab and the
+ same principal. Now the principal name is a pattern that has _HOST in it.
+ (Kan Zhang & Jitendra Pandey via ddas)
+
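+ For illustration, a sketch of the _HOST pattern; the configuration key
+ and principal below are hypothetical examples, not part of this change:
+
+     import org.apache.hadoop.conf.Configuration;
+
+     public class HostPatternSketch {
+       public static void main(String[] args) {
+         Configuration conf = new Configuration();
+         // _HOST is substituted with the local host name at login time,
+         // so one config file can serve every node (key is illustrative).
+         conf.set("dfs.datanode.kerberos.principal", "dn/_HOST@EXAMPLE.COM");
+       }
+     }
+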
+ HADOOP-6861. Adds new non-static methods in Credentials to read and
+ write token storage file. (Jitendra Pandey & Owen O'Malley via ddas)
+
+ HADOOP-6877. Common part of HDFS-1178 (NameNode servlets should communicate
+ with NameNode directly). (Kan Zhang via jghoman)
+
+ HADOOP-6475. Adding some javadoc to Server.RpcMetrics, UGI.
+ (Jitendra Pandey and borya via jghoman)
+
+ HADOOP-6656. Adds a thread in the UserGroupInformation to renew TGTs
+ periodically. (Owen O'Malley and ddas via ddas)
+
+ HADOOP-6890. Improve listFiles API introduced by HADOOP-6870. (hairong)
+
+ HADOOP-6862. Adds api to add/remove user and group to AccessControlList
+ (amareshwari)
+
+ HADOOP-6911. doc update for DelegationTokenFetcher (boryas)
+
+ HADOOP-6900. Make the iterator returned by FileSystem#listLocatedStatus to
+ throw IOException rather than RuntimeException when there is an IO error
+ fetching the next file. (hairong)
+
+ HADOOP-6905. Better logging messages when a delegation token is invalid.
+ (Kan Zhang via jghoman)
+
+ HADOOP-6693. Add metrics to track kerberos login activity. (suresh)
+
+ HADOOP-6803. Add native gzip read/write coverage to TestCodec.
+ (Eli Collins via tomwhite)
+
+ HADOOP-6950. Suggest that HADOOP_CLASSPATH should be preserved in
+ hadoop-env.sh.template. (Philip Zeyliger via Eli Collins)
+
+ HADOOP-6922. Make AccessControlList a writable and update documentation
+ for Job ACLs. (Ravi Gummadi via vinodkv)
+
+ HADOOP-6965. Introduces checks for whether the original tgt is valid
+ in the reloginFromKeytab method.
+
+ HADOOP-6856. Simplify constructors for SequenceFile, and MapFile. (omalley)
+
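+ A minimal sketch of the simplified form, assuming the option-style
+ SequenceFile.Writer factories this change introduces:
+
+     import java.io.IOException;
+     import org.apache.hadoop.conf.Configuration;
+     import org.apache.hadoop.fs.Path;
+     import org.apache.hadoop.io.IntWritable;
+     import org.apache.hadoop.io.SequenceFile;
+     import org.apache.hadoop.io.Text;
+
+     public class SeqFileSketch {
+       public static void main(String[] args) throws IOException {
+         Configuration conf = new Configuration();
+         // Option-style factory replaces the long list of constructors.
+         SequenceFile.Writer writer = SequenceFile.createWriter(conf,
+             SequenceFile.Writer.file(new Path("/tmp/demo.seq")),
+             SequenceFile.Writer.keyClass(Text.class),
+             SequenceFile.Writer.valueClass(IntWritable.class));
+         writer.append(new Text("answer"), new IntWritable(42));
+         writer.close();
+       }
+     }
+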
+ HADOOP-6987. Use JUnit Rule to optionally fail test cases that run more
+ than 10 seconds (jghoman)
+
+ HADOOP-7005. Update test-patch.sh to remove callback to Hudson. (nigel)
+
+ HADOOP-6985. Suggest that HADOOP_OPTS be preserved in
+ hadoop-env.sh.template. (Ramkumar Vadali via cutting)
+
+ HADOOP-7007. Update the hudson-test-patch ant target to work with the
+ latest test-patch.sh script (gkesavan)
+
+ HADOOP-7010. Typo in FileSystem.java. (Jingguo Yao via eli)
+
+ HADOOP-7009. MD5Hash provides a public factory method that creates an
+ instance of thread local MessageDigest. (hairong)
+
+ HADOOP-7008. Enable test-patch.sh to have a configured number of
+ acceptable findbugs and javadoc warnings. (nigel and gkesavan)
+
+ HADOOP-6818. Provides a JNI implementation of group resolution. (ddas)
+
+ HADOOP-6943. The GroupMappingServiceProvider interface should be public.
+ (Aaron T. Myers via tomwhite)
+
+ HADOOP-4675. Current Ganglia metrics implementation is incompatible with
+ Ganglia 3.1. (Brian Bockelman via tomwhite)
+
+ HADOOP-6977. Herriot daemon clients should vend statistics (cos)
+
+ HADOOP-7024. Create a test method for adding file systems during tests.
+ (Kan Zhang via jghoman)
+
+ HADOOP-6903. Make AbstractFileSystem methods and some FileContext methods
+ to be public. (Sanjay Radia)
+
+ HADOOP-7034. Add TestPath tests to cover dot, dot dot, and slash
+ normalization. (eli)
+
+ HADOOP-7032. Assert type constraints in the FileStatus constructor. (eli)
+
+ HADOOP-6562. FileContextSymlinkBaseTest should use FileContextTestHelper.
+ (eli)
+
+ HADOOP-7028. ant eclipse does not include requisite ant.jar in the
+ classpath. (Patrick Angeles via eli)
+
+ HADOOP-6298. Add copyBytes to Text and BytesWritable. (omalley)
+
+ HADOOP-6578. Configuration should trim whitespace around a lot of value
+ types. (Michele Catasta via eli)
+
+ HADOOP-6811. Remove EC2 bash scripts. They are replaced by Apache Whirr
+ (incubating, http://incubator.apache.org/whirr). (tomwhite)
+
+ HADOOP-7102. Remove "fs.ramfs.impl" field from core-default.xml (shv)
+
+ HADOOP-7104. Remove unnecessary DNS reverse lookups from RPC layer
+ (Kan Zhang via todd)
+
+ HADOOP-6056. Use java.net.preferIPv4Stack to force IPv4.
+ (Michele Catasta via shv)
+
+ HADOOP-7110. Implement chmod with JNI. (todd)
+
+ HADOOP-6812. Change documentation for correct placement of configuration
+ variables: mapreduce.reduce.input.buffer.percent,
+ mapreduce.task.io.sort.factor, mapreduce.task.io.sort.mb
+ (Chris Douglas via shv)
+
+ HADOOP-6436. Remove auto-generated native build files. (rvs via eli)
+
+ HADOOP-6970. SecurityAuth.audit should be generated under /build. (boryas)
+
+ HADOOP-7154. Should set MALLOC_ARENA_MAX in hadoop-env.sh (todd)
+
+ HADOOP-7187. Fix socket leak in GangliaContext. (Uma Maheswara Rao G
+ via szetszwo)
+
+ HADOOP-7241. fix typo of command 'hadoop fs -help tail'.
+ (Wei Yongjun via eli)
+
+ HADOOP-7244. Documentation change for updated configuration keys.
+ (tomwhite via eli)
+
+ HADOOP-7189. Add ability to enable 'debug' property in JAAS configuration.
+ (Ted Yu via todd)
+
+ HADOOP-7192. Update fs -stat docs to reflect the format features. (Harsh
+ J Chouraria via todd)
+
+ HADOOP-7355 Add audience and stability annotations to HttpServer class
+ (stack)
+
+ HADOOP-7346. Send back nicer error message to clients using outdated IPC
+ version. (todd)
+
+ HADOOP-7335. Force entropy to come from non-true random for tests.
+ (todd via eli)
+
+ HADOOP-7325. The hadoop command should not accept class names starting with
+ a hyphen. (Brock Noland via todd)
+
+ HADOOP-7772. javadoc the topology classes (stevel)
+
+ HADOOP-7786. Remove HDFS-specific config keys defined in FsConfig. (eli)
+
+ HADOOP-7861. changes2html.pl generates links to HADOOP, HDFS, and MAPREDUCE
+ jiras. (shv)
+
+ OPTIMIZATIONS
+
+ HADOOP-6884. Add LOG.isDebugEnabled() guard for each LOG.debug(..).
+ (Erik Steffl via szetszwo)
+
+ HADOOP-6683. ZlibCompressor does not fully utilize the buffer.
+ (Kang Xiao via eli)
+
+ HADOOP-6949. Reduce RPC packet size of primitive arrays using
+ ArrayPrimitiveWritable instead of ObjectWritable. (Matt Foley via suresh)
+
+ BUG FIXES
+
+ HADOOP-6638. try to relogin in a case of failed RPC connection (expired
+ tgt) only in case the subject is loginUser or proxyUgi.realUser. (boryas)
+
+ HADOOP-6781. security audit log shouldn't have exception in it. (boryas)
+
+ HADOOP-6612. Protocols RefreshUserToGroupMappingsProtocol and
+ RefreshAuthorizationPolicyProtocol will fail with security enabled (boryas)
+
+ HADOOP-6764. Remove verbose logging from the Groups class. (Boris Shkolnik)
+
+ HADOOP-6730. Bug in FileContext#copy and provide base class for
+ FileContext tests. (Ravi Phulari via jghoman)
+
+ HADOOP-6669. Respect compression configuration when creating DefaultCodec
+ instances. (Koji Noguchi via cdouglas)
+
+ HADOOP-6747. TestNetUtils fails on Mac OS X. (Todd Lipcon via jghoman)
+
+ HADOOP-6787. Factor out glob pattern code from FileContext and
+ Filesystem. Also fix bugs identified in HADOOP-6618 and make the
+ glob pattern code less restrictive and more POSIX standard
+ compliant. (Luke Lu via eli)
+
+ HADOOP-6649. login object in UGI should be inside the subject (jnp via
+ boryas)
+
+ HADOOP-6687. user object in the subject in UGI should be reused in case
+ of a relogin. (jnp via boryas)
+
+ HADOOP-6603. Provide workaround for issue with Kerberos not resolving
+ cross-realm principal (Kan Zhang and Jitendra Pandey via jghoman)
+
+ HADOOP-6620. NPE if renewer is passed as null in getDelegationToken.
+ (Jitendra Pandey via jghoman)
+
+ HADOOP-6613. Moves the RPC version check ahead of the AuthMethod check.
+ (Kan Zhang via ddas)
+
+ HADOOP-6682. NetUtils:normalizeHostName does not process hostnames starting
+ with [a-f] correctly. (jghoman)
+
+ HADOOP-6652. Removes the unnecessary cache from
+ ShellBasedUnixGroupsMapping. (ddas)
+
+ HADOOP-6815. refreshSuperUserGroupsConfiguration should use server side
+ configuration for the refresh (boryas)
+
+ HADOOP-6648. Adds a check for null tokens in Credentials.addToken api.
+ (ddas)
+
+ HADOOP-6647. balancer fails with "is not authorized for protocol
+ interface NamenodeProtocol" in secure environment (boryas)
+
+ HADOOP-6834. TFile.append compares initial key against null lastKey
+ (hong tang via mahadev)
+
+ HADOOP-6670. Use the UserGroupInformation's Subject as the criteria for
+ equals and hashCode. (Owen O'Malley and Kan Zhang via ddas)
+
+ HADOOP-6536. Fixes FileUtil.fullyDelete() not to delete the contents of
+ the sym-linked directory. (Ravi Gummadi via amareshwari)
+
+ HADOOP-6873. using delegation token over hftp for long
+ running clients (boryas)
+
+ HADOOP-6706. Improves the sasl failure handling due to expired tickets,
+ and other server detected failures. (Jitendra Pandey and ddas via ddas)
+
+ HADOOP-6715. Fixes AccessControlList.toString() to return a descriptive
+ String representation of the ACL. (Ravi Gummadi via amareshwari)
+
+ HADOOP-6885. Fix java doc warnings in Groups and
+ RefreshUserMappingsProtocol. (Eli Collins via jghoman)
+
+ HADOOP-6482. GenericOptionsParser constructor that takes Options and
+ String[] ignores options. (Eli Collins via jghoman)
+
+ HADOOP-6906. FileContext copy() utility doesn't work with recursive
+ copying of directories. (vinod k v via mahadev)
+
+ HADOOP-6453. Hadoop wrapper script shouldn't ignore an existing
+ JAVA_LIBRARY_PATH. (Chad Metcalf via jghoman)
+
+ HADOOP-6932. Namenode start (init) fails because of invalid kerberos
+ key, even when security set to "simple" (boryas)
+
+ HADOOP-6913. Circular initialization between UserGroupInformation and
+ KerberosName (Kan Zhang via boryas)
+
+ HADOOP-6907. Rpc client doesn't use the per-connection conf to figure
+ out server's Kerberos principal (Kan Zhang via hairong)
+
+ HADOOP-6938. ConnectionId.getRemotePrincipal() should check if security
+ is enabled. (Kan Zhang via hairong)
+
+ HADOOP-6930. AvroRpcEngine doesn't work with generated Avro code.
+ (sharad)
+
+ HADOOP-6940. RawLocalFileSystem's markSupported method misnamed
+ markSupport. (Tom White via eli).
+
+ HADOOP-6951. Distinct minicluster services (e.g. NN and JT) overwrite each
+ other's service policies. (Aaron T. Myers via tomwhite)
+
+ HADOOP-6879. Provide SSH based (Jsch) remote execution API for system
+ tests (cos)
+
+ HADOOP-6989. Correct the parameter for SetFile to set the value type
+ for SetFile to be NullWritable instead of the key. (cdouglas via omalley)
+
+ HADOOP-6984. Combine the compress kind and the codec in the same option
+ for SequenceFiles. (cdouglas via omalley)
+
+ HADOOP-6933. TestListFiles is flaky. (Todd Lipcon via tomwhite)
+
+ HADOOP-6947. Kerberos relogin should set refreshKrb5Config to true.
+ (Todd Lipcon via tomwhite)
+
+ HADOOP-7006. Fix 'fs -getmerge' command to not be a no-op.
+ (Chris Nauroth via cutting)
+
+ HADOOP-6663. BlockDecompressorStream gets EOF exception when decompressing
+ a file compressed from an empty file. (Kang Xiao via tomwhite)
+
+ HADOOP-6991. Fix SequenceFile::Reader to honor file lengths and call
+ openFile (cdouglas via omalley)
+
+ HADOOP-7011. Fix KerberosName.main() to not throw an NPE.
+ (Aaron T. Myers via tomwhite)
+
+ HADOOP-6975. Integer overflow in S3InputStream for blocks > 2GB.
+ (Patrick Kling via tomwhite)
+
+ HADOOP-6758. MapFile.fix does not allow index interval definition.
+ (Gianmarco De Francisci Morales via tomwhite)
+
+ HADOOP-6926. SocketInputStream incorrectly implements read().
+ (Todd Lipcon via tomwhite)
+
+ HADOOP-6899 RawLocalFileSystem#setWorkingDir() does not work for relative names
+ (Sanjay Radia)
+
+ HADOOP-6496. HttpServer sends wrong content-type for CSS files
+ (and others). (Todd Lipcon via tomwhite)
+
+ HADOOP-7057. IOUtils.readFully and IOUtils.skipFully have typo in
+ exception creation's message. (cos)
+
+ HADOOP-7038. saveVersion script includes an additional \r while running
+ whoami under windows. (Wang Xu via cos)
+
+ HADOOP-7082. Configuration.writeXML should not hold lock while outputting
+ (todd)
+
+ HADOOP-7070. JAAS configuration should delegate unknown application names
+ to pre-existing configuration. (todd)
+
+ HADOOP-7087. SequenceFile.createWriter ignores FileSystem parameter (todd)
+
+ HADOOP-7091. reloginFromKeytab() should happen even if TGT can't be found.
+ (Kan Zhang via jghoman)
+
+ HADOOP-7100. Fix build to not refer to contrib/ec2 removed by HADOOP-6811
+ (todd)
+
+ HADOOP-7097. JAVA_LIBRARY_PATH missing base directory. (Noah Watkins via
+ todd)
+
+ HADOOP-7093. Servlets should default to text/plain (todd)
+
+ HADOOP-7089. Fix link resolution logic in hadoop-config.sh. (eli)
+
+ HADOOP-7046. Fix Findbugs warning in Configuration. (Po Cheung via shv)
+
+ HADOOP-7118. Fix NPE in Configuration.writeXml (todd)
+
+ HADOOP-7122. Fix thread leak when shell commands time out. (todd)
+
+ HADOOP-7126. Fix file permission setting for RawLocalFileSystem on Windows.
+ (Po Cheung via shv)
+
+ HADOOP-6642. Fix javac, javadoc, findbugs warnings related to security work.
+ (Chris Douglas, Po Cheung via shv)
+
+ HADOOP-7140. IPC Reader threads do not stop when server stops (todd)
+
+ HADOOP-7094. hadoop.css got lost during project split (cos)
+
+ HADOOP-7145. Configuration.getLocalPath should trim whitespace from
+ the provided directories. (todd)
+
+ HADOOP-7156. Workaround for unsafe implementations of getpwuid_r (todd)
+
+ HADOOP-6898. FileSystem.copyToLocal creates files with 777 permissions.
+ (Aaron T. Myers via tomwhite)
+
+ HADOOP-7229. Do not default to an absolute path for kinit in Kerberos
+ auto-renewal thread. (Aaron T. Myers via todd)
+
+ HADOOP-7172. SecureIO should not check owner on non-secure
+ clusters that have no native support. (todd via eli)
+
+ HADOOP-7184. Remove deprecated config local.cache.size from
+ core-default.xml (todd)
+
+ HADOOP-7245. FsConfig should use constants in CommonConfigurationKeys.
+ (tomwhite via eli)
+
+ HADOOP-7068. Ivy resolve force mode should be turned off by default.
+ (Luke Lu via tomwhite)
+
+ HADOOP-7296. The FsPermission(FsPermission) constructor does not use the
+ sticky bit. (Siddharth Seth via tomwhite)
+
+ HADOOP-7300. Configuration methods that return collections are inconsistent
+ about mutability. (todd)
+
+ HADOOP-7305. Eclipse project classpath should include tools.jar from JDK.
+ (Niels Basjes via todd)
+
+ HADOOP-7318. MD5Hash factory should reset the digester it returns.
+ (todd via eli)
+
+ HADOOP-7287. Configuration deprecation mechanism doesn't work properly for
+ GenericOptionsParser and Tools. (Aaron T. Myers via todd)
+
+ HADOOP-7146. RPC server leaks file descriptors (todd)
+
+ HADOOP-7276. Hadoop native builds fail on ARM due to -m32 (Trevor Robinson
+ via eli)
+
+ HADOOP-7121. Exceptions while serializing IPC call responses are not
+ handled well. (todd)
+
+ HADOOP-7351 Regression: HttpServer#getWebAppsPath used to be protected
+ so subclasses could supply alternate webapps path but it was made private
+ by HADOOP-6461 (Stack)
+
+ HADOOP-7349. HADOOP-7121 accidentally disabled some tests in TestIPC.
+ (todd)
+
+ HADOOP-7390. VersionInfo not generated properly in git after unsplit. (todd
+ via atm)
+
+ HADOOP-7568. SequenceFile should not print to stdout.
+ (Plamen Jeliazkov via shv)
+
+ HADOOP-7663. Fix TestHDFSTrash failure. (Mayank Bansal via shv)
+
+ HADOOP-7457. Remove out-of-date Chinese language documentation.
+ (Jakob Homan via eli)
+
+ HADOOP-7783. Add more symlink tests that cover intermediate links. (eli)
+
+Release 0.21.1 - Unreleased
+
+ IMPROVEMENTS
+
+ HADOOP-6934. Test for ByteWritable comparator.
+ (Johannes Zillmann via Eli Collins)
+
+ HADOOP-6786. test-patch needs to verify Herriot integrity (cos)
+
+ HADOOP-7177. CodecPool should report which compressor it is using.
+ (Allen Wittenauer via eli)
+
+ BUG FIXES
+
+ HADOOP-6925. BZip2Codec incorrectly implements read().
+ (Todd Lipcon via Eli Collins)
+
+ HADOOP-6833. IPC leaks call parameters when exceptions thrown.
+ (Todd Lipcon via Eli Collins)
+
+ HADOOP-6971. Clover build doesn't generate per-test coverage (cos)
+
+ HADOOP-6993. Broken link on cluster setup page of docs. (eli)
+
+ HADOOP-6944. [Herriot] Implement a functionality for getting proxy users
+ definitions like groups and hosts. (Vinay Thota via cos)
+
+ HADOOP-6954. Sources JARs are not correctly published to the Maven
+ repository. (tomwhite)
+
+ HADOOP-7052. misspelling of threshold in conf/log4j.properties.
+ (Jingguo Yao via eli)
+
+ HADOOP-7053. wrong FSNamesystem Audit logging setting in
+ conf/log4j.properties. (Jingguo Yao via eli)
+
+ HADOOP-7120. Fix a syntax error in test-patch.sh. (szetszwo)
+
+ HADOOP-7162. Remove a duplicated call FileSystem.listStatus(..) in FsShell.
+ (Alexey Diomin via szetszwo)
+
+ HADOOP-7117. Remove fs.checkpoint.* from core-default.xml and replace
+ fs.checkpoint.* with dfs.namenode.checkpoint.* in documentations.
+ (Harsh J Chouraria via szetszwo)
+
+ HADOOP-7193. Correct the "fs -touchz" command help message.
+ (Uma Maheswara Rao G via szetszwo)
+
+ HADOOP-7174. Null is displayed in the "fs -copyToLocal" command.
+ (Uma Maheswara Rao G via szetszwo)
+
+ HADOOP-7194. Fix resource leak in IOUtils.copyBytes(..).
+ (Devaraj K via szetszwo)
+
+ HADOOP-7183. WritableComparator.get should not cache comparator objects.
+ (tomwhite via eli)
+
+Release 0.21.0 - 2010-08-13
+
+ INCOMPATIBLE CHANGES
+
+ HADOOP-4895. Remove deprecated methods DFSClient.getHints(..) and
+ DFSClient.isDirectory(..). (szetszwo)
+
+ HADOOP-4941. Remove deprecated FileSystem methods: getBlockSize(Path f),
+ getLength(Path f) and getReplication(Path src). (szetszwo)
+
+ HADOOP-4648. Remove obsolete, deprecated InMemoryFileSystem and
+ ChecksumDistributedFileSystem. (cdouglas via szetszwo)
+
+ HADOOP-4940. Remove a deprecated method FileSystem.delete(Path f). (Enis
+ Soztutar via szetszwo)
+
+ HADOOP-4010. Change semantics for LineRecordReader to read an additional
+ line per split, rather than moving back one character in the stream, to
+ work with splittable compression codecs. (Abdul Qadeer via cdouglas)
+
+ HADOOP-5094. Show hostname and separate live/dead datanodes in DFSAdmin
+ report. (Jakob Homan via szetszwo)
+
+ HADOOP-4942. Remove deprecated FileSystem methods getName() and
+ getNamed(String name, Configuration conf). (Jakob Homan via szetszwo)
+
+ HADOOP-5486. Removes the CLASSPATH string from the command line and instead
+ exports it in the environment. (Amareshwari Sriramadasu via ddas)
+
+ HADOOP-2827. Remove deprecated NetUtils::getServerAddress. (cdouglas)
+
+ HADOOP-5681. Change examples RandomWriter and RandomTextWriter to
+ use new mapreduce API. (Amareshwari Sriramadasu via sharad)
+
+ HADOOP-5680. Change org.apache.hadoop.examples.SleepJob to use new
+ mapreduce api. (Amareshwari Sriramadasu via sharad)
+
+ HADOOP-5699. Change org.apache.hadoop.examples.PiEstimator to use
+ new mapreduce api. (Amareshwari Sriramadasu via sharad)
+
+ HADOOP-5720. Introduces new task types - JOB_SETUP, JOB_CLEANUP
+ and TASK_CLEANUP. Removes the isMap methods from TaskID/TaskAttemptID
+ classes. (ddas)
+
+ HADOOP-5668. Change TotalOrderPartitioner to use new API. (Amareshwari
+ Sriramadasu via cdouglas)
+
+ HADOOP-5738. Split "waiting_tasks" JobTracker metric into waiting maps and
+ waiting reduces. (Sreekanth Ramakrishnan via cdouglas)
+
+ HADOOP-5679. Resolve findbugs warnings in core/streaming/pipes/examples.
+ (Jothi Padmanabhan via sharad)
+
+ HADOOP-4359. Support for data access authorization checking on Datanodes.
+ (Kan Zhang via rangadi)
+
+ HADOOP-5690. Change org.apache.hadoop.examples.DBCountPageView to use
+ new mapreduce api. (Amareshwari Sriramadasu via sharad)
+
+ HADOOP-5694. Change org.apache.hadoop.examples.dancing to use new
+ mapreduce api. (Amareshwari Sriramadasu via sharad)
+
+ HADOOP-5696. Change org.apache.hadoop.examples.Sort to use new
+ mapreduce api. (Amareshwari Sriramadasu via sharad)
+
+ HADOOP-5698. Change org.apache.hadoop.examples.MultiFileWordCount to
+ use new mapreduce api. (Amareshwari Sriramadasu via sharad)
+
+ HADOOP-5913. Provide ability to an administrator to stop and start
+ job queues. (Rahul Kumar Singh and Hemanth Yamijala via yhemanth)
+
+ MAPREDUCE-711. Removed Distributed Cache from Common, to move it
+ under Map/Reduce. (Vinod Kumar Vavilapalli via yhemanth)
+
+ HADOOP-6201. Change FileSystem::listStatus contract to throw
+ FileNotFoundException if the directory does not exist, rather than letting
+ this be implementation-specific. (Jakob Homan via cdouglas)
+
+ HADOOP-6230. Moved process tree and memory calculator related classes
+ from Common to Map/Reduce. (Vinod Kumar Vavilapalli via yhemanth)
+
+ HADOOP-6203. FsShell rm/rmr error message indicates exceeding Trash quota
+ and suggests using -skipTrash, when moving to trash fails.
+ (Boris Shkolnik via suresh)
+
+ HADOOP-6303. Eclipse .classpath template has outdated jar files and is
+ missing some new ones. (cos)
+
+ HADOOP-6396. Fix uninformative exception message when unable to parse
+ umask. (jghoman)
+
+ HADOOP-6299. Reimplement the UserGroupInformation to use the OS
+ specific and Kerberos JAAS login. (omalley)
+
+ HADOOP-6686. Remove redundant exception class name from the exception
+ message for the exceptions thrown at RPC client. (suresh)
+
+ HADOOP-6701. Fix incorrect exit codes returned from chmod, chown and chgrp
+ commands from FsShell. (Ravi Phulari via suresh)
+
+ NEW FEATURES
+
+ HADOOP-6332. Large-scale Automated Test Framework. (sharad, Sreekanth
+ Ramakrishnan, et al. via cos)
+
+ HADOOP-4268. Change fsck to use ClientProtocol methods so that the
+ corresponding permission requirement for running the ClientProtocol
+ methods will be enforced. (szetszwo)
+
+ HADOOP-3953. Implement sticky bit for directories in HDFS. (Jakob Homan
+ via szetszwo)
+
+ HADOOP-4368. Implement df in FsShell to show the status of a FileSystem.
+ (Craig Macdonald via szetszwo)
+
+ HADOOP-3741. Add a web ui to the SecondaryNameNode for showing its status.
+ (szetszwo)
+
+ HADOOP-5018. Add pipelined writers to Chukwa. (Ari Rabkin via cdouglas)
+
+ HADOOP-5052. Add an example computing exact digits of pi using the
+ Bailey-Borwein-Plouffe algorithm. (Tsz Wo (Nicholas), SZE via cdouglas)
+
+ HADOOP-4927. Adds a generic wrapper around outputformat to allow creation of
+ output on demand (Jothi Padmanabhan via ddas)
+
+ HADOOP-5144. Add a new DFSAdmin command for changing the setting of restore
+ failed storage replicas in namenode. (Boris Shkolnik via szetszwo)
+
+ HADOOP-5258. Add a new DFSAdmin command to print a tree of the rack and
+ datanode topology as seen by the namenode. (Jakob Homan via szetszwo)
+
+ HADOOP-4756. A command line tool to access JMX properties on NameNode
+ and DataNode. (Boris Shkolnik via rangadi)
+
+ HADOOP-4539. Introduce backup node and checkpoint node. (shv)
+
+ HADOOP-5363. Add support for proxying connections to multiple clusters with
+ different versions to hdfsproxy. (Zhiyong Zhang via cdouglas)
+
+ HADOOP-5528. Add a configurable hash partitioner operating on ranges of
+ BinaryComparable keys. (Klaas Bosteels via shv)
+
+ HADOOP-5257. HDFS servers may start and stop external components through
+ a plugin interface. (Carlos Valiente via dhruba)
+
+ HADOOP-5450. Add application-specific data types to streaming's typed bytes
+ interface. (Klaas Bosteels via omalley)
+
+ HADOOP-5518. Add contrib/mrunit, a MapReduce unit test framework.
+ (Aaron Kimball via cutting)
+
+ HADOOP-5469. Add /metrics servlet to daemons, providing metrics
+ over HTTP as either text or JSON. (Philip Zeyliger via cutting)
+
+ HADOOP-5467. Introduce offline fsimage image viewer. (Jakob Homan via shv)
+
+ HADOOP-5752. Add a new hdfs image processor, Delimited, to oiv. (Jakob
+ Homan via szetszwo)
+
+ HADOOP-5266. Adds the capability to do mark/reset of the reduce values
+ iterator in the Context object API. (Jothi Padmanabhan via ddas)
+
+ HADOOP-5745. Allow setting the default value of maxRunningJobs for all
+ pools. (dhruba via matei)
+
+ HADOOP-5643. Adds a way to decommission TaskTrackers while the JobTracker
+ is running. (Amar Kamat via ddas)
+
+ HADOOP-4829. Allow FileSystem shutdown hook to be disabled.
+ (Todd Lipcon via tomwhite)
+
+ HADOOP-5815. Sqoop: A database import tool for Hadoop.
+ (Aaron Kimball via tomwhite)
+
+ HADOOP-4861. Add disk usage with human-readable size (-duh).
+ (Todd Lipcon via tomwhite)
+
+ HADOOP-5844. Use mysqldump when connecting to local mysql instance in Sqoop.
+ (Aaron Kimball via tomwhite)
+
+ HADOOP-5976. Add a new command, classpath, to the hadoop script. (Owen
+ O'Malley and Gary Murry via szetszwo)
+
+ HADOOP-6120. Add support for Avro specific and reflect data.
+ (sharad via cutting)
+
+ HADOOP-6226. Moves BoundedByteArrayOutputStream from the tfile package to
+ the io package and makes it available to other users (MAPREDUCE-318).
+ (Jothi Padmanabhan via ddas)
+
+ HADOOP-6105. Adds support for automatically handling deprecation of
+ configuration keys. (V.V.Chaitanya Krishna via yhemanth)
+
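+ A minimal sketch of the deprecation mechanism, with hypothetical key
+ names; the static addDeprecation hook is assumed to match the
+ Configuration API this change added:
+
+     import org.apache.hadoop.conf.Configuration;
+
+     public class DeprecationSketch {
+       public static void main(String[] args) {
+         // Map an old key onto its replacement; reads and writes of the
+         // old key are redirected to the new one (keys are hypothetical).
+         Configuration.addDeprecation("old.key.name",
+             new String[] {"new.key.name"});
+         Configuration conf = new Configuration();
+         conf.set("old.key.name", "value");
+         System.out.println(conf.get("new.key.name")); // prints "value"
+       }
+     }
+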
+ HADOOP-6235. Adds new method to FileSystem for clients to get server
+ defaults. (Kan Zhang via suresh)
+
+ HADOOP-6234. Add new option dfs.umaskmode to set umask in configuration
+ to use octal or symbolic instead of decimal. (Jakob Homan via suresh)
+
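+ For illustration, the two value forms the new option accepts (the values
+ themselves are arbitrary examples):
+
+     import org.apache.hadoop.conf.Configuration;
+
+     public class UmaskSketch {
+       public static void main(String[] args) {
+         Configuration conf = new Configuration();
+         conf.set("dfs.umaskmode", "022");               // octal form
+         conf.set("dfs.umaskmode", "u=rwx,g=r-x,o=r-x"); // symbolic form
+       }
+     }
+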
+ HADOOP-5073. Add annotation mechanism for interface classification.
+ (Jakob Homan via suresh)
+
+ HADOOP-4012. Provide splitting support for bzip2 compressed files. (Abdul
+ Qadeer via cdouglas)
+
+ HADOOP-6246. Add backward compatibility support to use deprecated decimal
+ umask from old configuration. (Jakob Homan via suresh)
+
+ HADOOP-4952. Add new improved file system interface FileContext for the
+ application writer (Sanjay Radia via suresh)
+
+ HADOOP-6170. Add facility to tunnel Avro RPCs through Hadoop RPCs.
+ This permits one to take advantage of both Avro's RPC versioning
+ features and Hadoop's proven RPC scalability. (cutting)
+
+ HADOOP-6267. Permit building contrib modules located in external
+ source trees. (Todd Lipcon via cutting)
+
+ HADOOP-6240. Add new FileContext rename operation that is posix compliant
+ and allows overwriting the existing destination. (suresh)
+
+ HADOOP-6204. Implementing aspects development and fault injection
+ framework for Hadoop (cos)
+
+ HADOOP-6313. Implement Syncable interface in FSDataOutputStream to expose
+ flush APIs to application users. (Hairong Kuang via suresh)
+
+ HADOOP-6284. Add a new parameter, HADOOP_JAVA_PLATFORM_OPTS, to
+ hadoop-config.sh so that it allows setting java command options for
+ JAVA_PLATFORM. (Koji Noguchi via szetszwo)
+
+ HADOOP-6337. Updates FilterInitializer class to be more visible,
+ and the init of the class is made to take a Configuration argument.
+ (Jakob Homan via ddas)
+
+ HADOOP-6223. Add new file system interface AbstractFileSystem with
+ implementation of some file systems that delegate to old FileSystem.
+ (Sanjay Radia via suresh)
+
+ HADOOP-6433. Introduce asynchronous deletion of files via a pool of
+ threads. This can be used to delete files in the Distributed
+ Cache. (Zheng Shao via dhruba)
+
+ HADOOP-6415. Adds a common token interface for both job token and
+ delegation token. (Kan Zhang via ddas)
+
+ HADOOP-6408. Add a /conf servlet to dump running configuration.
+ (Todd Lipcon via tomwhite)
+
+ HADOOP-6520. Adds APIs to read/write Token and secret keys. Also
+ adds the automatic loading of tokens into UserGroupInformation
+ upon login. The tokens are read from a file specified in the
+ environment variable. (ddas)
+
+ HADOOP-6419. Adds SASL based authentication to RPC.
+ (Kan Zhang via ddas)
+
+ HADOOP-6510. Adds a way for superusers to impersonate other users
+ in a secure environment. (Jitendra Nath Pandey via ddas)
+
+ HADOOP-6421. Adds Symbolic links to FileContext, AbstractFileSystem.
+ It also adds a limited implementation for the local file system
+ (RawLocalFs) that allows local symlinks. (Eli Collins via Sanjay Radia)
+
+ HADOOP-6577. Add hidden configuration option "ipc.server.max.response.size"
+ to change the default 1 MB, the maximum size when large IPC handler
+ response buffer is reset. (suresh)
+
+ HADOOP-6568. Adds authorization for the default servlets.
+ (Vinod Kumar Vavilapalli via ddas)
+
+ HADOOP-6586. Log authentication and authorization failures and successes
+ for RPC (boryas)
+
+ HADOOP-6580. UGI should contain authentication method. (jnp via boryas)
+
+ HADOOP-6657. Add a capitalization method to StringUtils for MAPREDUCE-1545.
+ (Luke Lu via Steve Loughran)
+
+ HADOOP-6692. Add FileContext#listStatus that returns an iterator.
+ (hairong)
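+
+ A short sketch of consuming the listing, assuming listStatus returns a
+ RemoteIterator<FileStatus>; the path is illustrative:
+
+   import org.apache.hadoop.fs.FileContext;
+   import org.apache.hadoop.fs.FileStatus;
+   import org.apache.hadoop.fs.Path;
+   import org.apache.hadoop.fs.RemoteIterator;
+
+   FileContext fc = FileContext.getFileContext();
+   RemoteIterator<FileStatus> it = fc.listStatus(new Path("/user/example"));
+   while (it.hasNext()) {            // entries are fetched incrementally
+     System.out.println(it.next().getPath());
+   }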
+
+ HADOOP-6869. Add functionality to create a file or folder on the remote
+ daemon side. (Vinay Thota via cos)
+
+ IMPROVEMENTS
+
+ HADOOP-6798. Align Ivy version for all Hadoop subprojects. (cos)
+
+ HADOOP-6777. Implement functionality to suspend and resume a process.
+ (Vinay Thota via cos)
+
+ HADOOP-6772. Utilities specific to system tests. (Vinay Thota via cos)
+
+ HADOOP-6771. Herriot's artifact id for Maven deployment should be set to
+ hadoop-core-instrumented (cos)
+
+ HADOOP-6752. Remote cluster control functionality needs JavaDocs
+ improvement. (Balaji Rajagopalan via cos)
+
+ HADOOP-4565. Added CombineFileInputFormat to use data locality information
+ to create splits. (dhruba via zshao)
+
+ HADOOP-4936. Improvements to TestSafeMode. (shv)
+
+ HADOOP-4985. Remove unnecessary "throw IOException" declarations in
+ FSDirectory related methods. (szetszwo)
+
+ HADOOP-5017. Change NameNode.namesystem declaration to private. (szetszwo)
+
+ HADOOP-4794. Add branch information from the source version control into
+ the version information that is compiled into Hadoop. (cdouglas via
+ omalley)
+
+ HADOOP-5070. Increment copyright year to 2009, remove assertions of ASF
+ copyright to licensed files. (Tsz Wo (Nicholas), SZE via cdouglas)
+
+ HADOOP-5037. Deprecate static FSNamesystem.getFSNamesystem(). (szetszwo)
+
+ HADOOP-5088. Include releaseaudit target as part of developer test-patch
+ target. (Giridharan Kesavan via nigel)
+
+ HADOOP-2721. Uses setsid when creating new tasks so that subprocesses of
+ this process will be within this new session (and this process will be
+ the process leader for all the subprocesses). Killing the process leader,
+ or the main Java task in Hadoop's case, kills the entire subtree of
+ processes. (Ravi Gummadi via ddas)
+
+ HADOOP-5097. Remove static variable JspHelper.fsn, a static reference to
+ a non-singleton FSNamesystem object. (szetszwo)
+
+ HADOOP-3327. Improves handling of READ_TIMEOUT during map output copying.
+ (Amareshwari Sriramadasu via ddas)
+
+ HADOOP-5124. Choose datanodes randomly instead of starting from the first
+ datanode for providing fairness. (hairong via szetszwo)
+
+ HADOOP-4930. Implement a Linux native executable that can be used to
+ launch tasks as users. (Sreekanth Ramakrishnan via yhemanth)
+
+ HADOOP-5122. Fix format of fs.default.name value in libhdfs test conf.
+ (Craig Macdonald via tomwhite)
+
+ HADOOP-5038. Direct daemon trace to debug log instead of stdout. (Jerome
+ Boulon via cdouglas)
+
+ HADOOP-5101. Improve packaging by adding 'all-jars' target building core,
+ tools, and example jars. Let findbugs depend on this rather than the 'tar'
+ target. (Giridharan Kesavan via cdouglas)
+
+ HADOOP-4868. Splits the hadoop script into three parts - bin/hadoop,
+ bin/mapred and bin/hdfs. (Sharad Agarwal via ddas)
+
+ HADOOP-1722. Adds support for TypedBytes and RawBytes in Streaming.
+ (Klaas Bosteels via ddas)
+
+ HADOOP-4220. Changes the JobTracker restart tests so that they take much
+ less time. (Amar Kamat via ddas)
+
+ HADOOP-4885. Try to restore failed name-node storage directories at
+ checkpoint time. (Boris Shkolnik via shv)
+
+ HADOOP-5209. Update year to 2009 for javadoc. (szetszwo)
+
+ HADOOP-5279. Remove unnecessary targets from test-patch.sh.
+ (Giridharan Kesavan via nigel)
+
+ HADOOP-5120. Remove the use of FSNamesystem.getFSNamesystem() from
+ UpgradeManagerNamenode and UpgradeObjectNamenode. (szetszwo)
+
+ HADOOP-5222. Add offset to datanode clienttrace. (Lei Xu via cdouglas)
+
+ HADOOP-5240. Skip re-building javadoc when it is already
+ up-to-date. (Aaron Kimball via cutting)
+
+ HADOOP-5042. Add a cleanup stage to log rollover in Chukwa appender.
+ (Jerome Boulon via cdouglas)
+
+ HADOOP-5264. Removes redundant configuration object from the TaskTracker.
+ (Sharad Agarwal via ddas)
+
+ HADOOP-5232. Enable patch testing to occur on more than one host.
+ (Giri Kesavan via nigel)
+
+ HADOOP-4546. Fix DF reporting for AIX. (Bill Habermaas via cdouglas)
+
+ HADOOP-5023. Add Tomcat support to HdfsProxy. (Zhiyong Zhang via cdouglas)
+
+ HADOOP-5317. Provide documentation for LazyOutput Feature.
+ (Jothi Padmanabhan via johan)
+
+ HADOOP-5455. Document rpc metrics context to the extent dfs, mapred, and
+ jvm contexts are documented. (Philip Zeyliger via cdouglas)
+
+ HADOOP-5358. Provide scripting functionality to the synthetic load
+ generator. (Jakob Homan via hairong)
+
+ HADOOP-5442. Paginate jobhistory display and add some search
+ capabilities. (Amar Kamat via acmurthy)
+
+ HADOOP-4842. Streaming now allows specifying a command for the combiner.
+ (Amareshwari Sriramadasu via ddas)
+
+ HADOOP-5196. Avoid unnecessary byte[] allocation in
+ SequenceFile.CompressedBytes and SequenceFile.UncompressedBytes.
+ (hong tang via mahadev)
+
+ HADOOP-4655. New method FileSystem.newInstance() that always returns
+ a newly allocated FileSystem object. (dhruba)
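+
+ A sketch contrasting the cached factory with the new one, assuming
+ FileSystem.newInstance(conf) bypasses the cache as described:
+
+   import org.apache.hadoop.conf.Configuration;
+   import org.apache.hadoop.fs.FileSystem;
+
+   Configuration conf = new Configuration();
+   FileSystem shared = FileSystem.get(conf);        // cached instance
+   FileSystem owned = FileSystem.newInstance(conf); // always a new object
+   owned.close();  // safe: the shared, cached instance stays open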
+
+ HADOOP-4788. Set Fair scheduler to assign both a map and a reduce on each
+ heartbeat by default. (matei)
+
+ HADOOP-5491. In contrib/index, better control memory usage.
+ (Ning Li via cutting)
+
+ HADOOP-5423. Include option of preserving file metadata in
+ SequenceFile::sort. (Michael Tamm via cdouglas)
+
+ HADOOP-5331. Add support for KFS appends. (Sriram Rao via cdouglas)
+
+ HADOOP-4365. Make Configuration::getProps protected in support of
+ meaningful subclassing. (Steve Loughran via cdouglas)
+
+ HADOOP-2413. Remove the static variable FSNamesystem.fsNamesystemObject.
+ (Konstantin Shvachko via szetszwo)
+
+ HADOOP-4584. Improve datanode block reports and associated file system
+ scan to avoid interfering with normal datanode operations.
+ (Suresh Srinivas via rangadi)
+
+ HADOOP-5502. Documentation for backup and checkpoint nodes.
+ (Jakob Homan via shv)
+
+ HADOOP-5485. Mask actions in the fair scheduler's servlet UI based on
+ value of webinterface.private.actions.
+ (Vinod Kumar Vavilapalli via yhemanth)
+
+ HADOOP-5581. HDFS should throw FileNotFoundException when opening
+ a file that does not exist. (Brian Bockelman via rangadi)
+
+ HADOOP-5509. PendingReplicationBlocks does not start monitor in the
+ constructor. (shv)
+
+ HADOOP-5494. Modify sorted map output merger to lazily read values,
+ rather than buffering at least one record for each segment. (Devaraj Das
+ via cdouglas)
+
+ HADOOP-5396. Provide ability to refresh queue ACLs in the JobTracker
+ without having to restart the daemon.
+ (Sreekanth Ramakrishnan and Vinod Kumar Vavilapalli via yhemanth)
+
+ HADOOP-4490. Provide ability to run tasks as job owners.
+ (Sreekanth Ramakrishnan via yhemanth)
+
+ HADOOP-5697. Change org.apache.hadoop.examples.Grep to use new
+ mapreduce api. (Amareshwari Sriramadasu via sharad)
+
+ HADOOP-5625. Add operation duration to clienttrace. (Lei Xu via cdouglas)
+
+ HADOOP-5705. Improve TotalOrderPartitioner efficiency by updating the trie
+ construction. (Dick King via cdouglas)
+
+ HADOOP-5589. Eliminate source limit of 64 for map-side joins imposed by
+ TupleWritable encoding. (Jingkei Ly via cdouglas)
+
+ HADOOP-5734. Correct block placement policy description in HDFS
+ Design document. (Konstantin Boudnik via shv)
+
+ HADOOP-5657. Validate data in TestReduceFetch to improve merge test
+ coverage. (cdouglas)
+
+ HADOOP-5613. Change S3Exception to checked exception.
+ (Andrew Hitchcock via tomwhite)
+
+ HADOOP-5717. Create public enum class for the Framework counters in
+ org.apache.hadoop.mapreduce. (Amareshwari Sriramadasu via sharad)
+
+ HADOOP-5217. Split AllTestDriver for core, hdfs and mapred. (sharad)
+
+ HADOOP-5364. Add certificate expiration warning to HsftpFileSystem and HDFS
+ proxy. (Zhiyong Zhang via cdouglas)
+
+ HADOOP-5733. Add map/reduce slot capacity and blacklisted capacity to
+ JobTracker metrics. (Sreekanth Ramakrishnan via cdouglas)
+
+ HADOOP-5596. Add EnumSetWritable. (He Yongqiang via szetszwo)
+
+ HADOOP-5727. Simplify hashcode for ID types. (Shevek via cdouglas)
+
+ HADOOP-5500. In DBOutputFormat, where field names are absent permit the
+ number of fields to be sufficient to construct the select query. (Enis
+ Soztutar via cdouglas)
+
+ HADOOP-5081. Split TestCLI into HDFS, Mapred and Core tests. (sharad)
+
+ HADOOP-5015. Separate block management code from FSNamesystem. (Suresh
+ Srinivas via szetszwo)
+
+ HADOOP-5080. Add new test cases to TestMRCLI and TestHDFSCLI
+ (V.Karthikeyan via nigel)
+
+ HADOOP-5135. Splits the tests into different directories based on the
+ package. Four new test targets have been defined - run-test-core,
+ run-test-mapred, run-test-hdfs and run-test-hdfs-with-mr.
+ (Sharad Agarwal via ddas)
+
+ HADOOP-5771. Implements unit tests for LinuxTaskController.
+ (Sreekanth Ramakrishnan and Vinod Kumar Vavilapalli via yhemanth)
+
+ HADOOP-5419. Provide a facility to query the Queue ACLs for the
+ current user.
+ (Rahul Kumar Singh via yhemanth)
+
+ HADOOP-5780. Improve the per-block message printed by "-metaSave" in HDFS.
+ (Raghu Angadi)
+
+ HADOOP-5823. Added a new class DeprecatedUTF8 to help with removing
+ UTF8 related javac warnings. These warnings are removed in
+ FSEditLog.java as a use case. (Raghu Angadi)
+
+ HADOOP-5824. Deprecate DataTransferProtocol.OP_READ_METADATA and remove
+ the corresponding unused codes. (Kan Zhang via szetszwo)
+
+ HADOOP-5721. Factor out EditLogFileInputStream and EditLogFileOutputStream
+ into independent classes. (Luca Telloli & Flavio Junqueira via shv)
+
+ HADOOP-5838. Fix a few javac warnings in HDFS. (Raghu Angadi)
+
+ HADOOP-5854. Fix a few "Inconsistent Synchronization" warnings in HDFS.
+ (Raghu Angadi)
+
+ HADOOP-5369. Small tweaks to reduce MapFile index size. (Ben Maurer
+ via sharad)
+
+ HADOOP-5858. Eliminate UTF8 and fix warnings in test/hdfs-with-mr package.
+ (shv)
+
+ HADOOP-5866. Move DeprecatedUTF8 from o.a.h.io to o.a.h.hdfs since it may
+ not be used outside hdfs. (Raghu Angadi)
+
+ HADOOP-5857. Move normal java methods from hdfs .jsp files to .java files.
+ (szetszwo)
+
+ HADOOP-5873. Remove deprecated methods randomDataNode() and
+ getDatanodeByIndex(..) in FSNamesystem. (szetszwo)
+
+ HADOOP-5572. Improves the progress reporting for the sort phase for both
+ maps and reduces. (Ravi Gummadi via ddas)
+
+ HADOOP-5839. Fix EC2 scripts to allow remote job submission.
+ (Joydeep Sen Sarma via tomwhite)
+
+ HADOOP-5877. Fix javac warnings in TestHDFSServerPorts, TestCheckpoint,
+ TestNameEditsConfig, TestStartup and TestStorageRestore.
+ (Jakob Homan via shv)
+
+ HADOOP-5438. Provide a single FileSystem method to create or
+ open-for-append to a file. (He Yongqiang via dhruba)
+
+ HADOOP-5472. Change DistCp to support globbing of input paths. (Dhruba
+ Borthakur and Rodrigo Schmidt via szetszwo)
+
+ HADOOP-5175. Don't unpack libjars on classpath. (Todd Lipcon via tomwhite)
+
+ HADOOP-5620. Add an option to DistCp for preserving modification and access
+ times. (Rodrigo Schmidt via szetszwo)
+
+ HADOOP-5664. Change map serialization so a lock is obtained only where
+ contention is possible, rather than for each write. (cdouglas)
+
+ HADOOP-5896. Remove the dependency of GenericOptionsParser on
+ Option.withArgPattern. (Giridharan Kesavan and Sharad Agarwal via
+ sharad)
+
+ HADOOP-5784. Makes the number of heartbeats that should arrive a second
+ at the JobTracker configurable. (Amareshwari Sriramadasu via ddas)
+
+ HADOOP-5955. Changes TestFileOutputFormat so that it uses LOCAL_MR
+ instead of CLUSTER_MR. (Jothi Padmanabhan via das)
+
+ HADOOP-5948. Changes TestJavaSerialization to use LocalJobRunner
+ instead of MiniMR/DFS cluster. (Jothi Padmanabhan via das)
+
+ HADOOP-2838. Add mapred.child.env to pass environment variables to
+ tasktracker's child processes. (Amar Kamat via sharad)
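+
+ An illustrative sketch of setting the property from a job client; the
+ variable name and path are hypothetical. The X=Y form sets a variable
+ and X=$X:Y appends to an inherited value:
+
+   import org.apache.hadoop.mapred.JobConf;
+
+   JobConf job = new JobConf();
+   job.set("mapred.child.env",
+           "LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/native/lib");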
+
+ HADOOP-5961. The DataNode process understands generic Hadoop command line
+ options (like -Ddfs.property=value). (Raghu Angadi)
+
+ HADOOP-5938. Change org.apache.hadoop.mapred.jobcontrol to use new
+ api. (Amareshwari Sriramadasu via sharad)
+
+ HADOOP-2141. Improves the speculative execution heuristic. The heuristic
+ is currently based on the progress-rates of tasks and the expected time
+ to complete. Also, statistics about trackers are collected, and speculative
+ tasks are not given to the ones deduced to be slow.
+ (Andy Konwinski and ddas)
+
+ HADOOP-5952. Change "-1 tests included" wording in test-patch.sh.
+ (Gary Murry via szetszwo)
+
+ HADOOP-6106. Provides an option in ShellCommandExecutor to timeout
+ commands that do not complete within a certain amount of time.
+ (Sreekanth Ramakrishnan via yhemanth)
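+
+ A minimal sketch, assuming the timeout is supplied to the
+ ShellCommandExecutor constructor in milliseconds:
+
+   import java.io.IOException;
+   import org.apache.hadoop.util.Shell.ShellCommandExecutor;
+
+   ShellCommandExecutor exec = new ShellCommandExecutor(
+       new String[] {"sleep", "30"}, null, null, 5000L);  // 5 s limit
+   try {
+     exec.execute();            // killed if it exceeds the timeout
+   } catch (IOException e) {
+     // the command failed or timed out
+   }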
+
+ HADOOP-5925. EC2 scripts should exit on error. (tomwhite)
+
+ HADOOP-6109. Change Text to grow its internal buffer exponentially, rather
+ than the max of the current length and the proposed length to improve
+ performance reading large values. (thushara wijeratna via cdouglas)
+
+ HADOOP-2366. Support trimmed strings in Configuration. (Michele Catasta
+ via szetszwo)
+
+ HADOOP-6099. The RPC module can be configured to not send periodic pings.
+ The default behaviour of sending periodic pings remains unchanged. (dhruba)
+
+ HADOOP-6142. Update documentation and use of harchives for relative paths
+ added in MAPREDUCE-739. (Mahadev Konar via cdouglas)
+
+ HADOOP-6148. Implement a fast, pure Java CRC32 calculator which outperforms
+ java.util.zip.CRC32. (Todd Lipcon and Scott Carey via szetszwo)
+
+ HADOOP-6146. Upgrade to JetS3t version 0.7.1. (tomwhite)
+
+ HADOOP-6161. Add get/setEnum methods to Configuration. (cdouglas)
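+
+ For example, a sketch with a hypothetical enum and key:
+
+   import org.apache.hadoop.conf.Configuration;
+
+   enum Codec { NONE, GZIP }   // illustrative enum
+
+   Configuration conf = new Configuration();
+   conf.setEnum("example.codec", Codec.GZIP);
+   Codec c = conf.getEnum("example.codec", Codec.NONE);  // default NONE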
+
+ HADOOP-6160. Fix releaseaudit target to run on specific directories.
+ (gkesavan)
+
+ HADOOP-6169. Removing deprecated method calls in TFile. (hong tang via
+ mahadev)
+
+ HADOOP-6176. Add a couple package private methods to AccessTokenHandler
+ for testing. (Kan Zhang via szetszwo)
+
+ HADOOP-6182. Fix ReleaseAudit warnings (Giridharan Kesavan and Lee Tucker
+ via gkesavan)
+
+ HADOOP-6173. Change src/native/packageNativeHadoop.sh to package all
+ native library files. (Hong Tang via szetszwo)
+
+ HADOOP-6184. Provide an API to dump Configuration in a JSON format.
+ (V.V.Chaitanya Krishna via yhemanth)
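+
+ A sketch of the dump API, assuming a static
+ Configuration.dumpConfiguration(conf, writer) entry point:
+
+   import java.io.StringWriter;
+   import org.apache.hadoop.conf.Configuration;
+
+   Configuration conf = new Configuration();
+   StringWriter out = new StringWriter();
+   Configuration.dumpConfiguration(conf, out);  // JSON, one record per key
+   System.out.println(out.toString());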
+
+ HADOOP-6224. Add a method to WritableUtils performing a bounded read of an
+ encoded String. (Jothi Padmanabhan via cdouglas)
+
+ HADOOP-6133. Add a caching layer to Configuration::getClassByName to
+ alleviate a performance regression introduced in a compatibility layer.
+ (Todd Lipcon via cdouglas)
+
+ HADOOP-6252. Provide a method to determine if a deprecated key is set in
+ config file. (Jakob Homan via suresh)
+
+ HADOOP-5879. Read compression level and strategy from Configuration for
+ gzip compression. (He Yongqiang via cdouglas)
+
+ HADOOP-6216. Support comments in host files. (Ravi Phulari and Dmytro
+ Molkov via szetszwo)
+
+ HADOOP-6217. Update documentation for project split. (Corinne Chandel via
+ omalley)
+
+ HADOOP-6268. Add ivy jar to .gitignore. (Todd Lipcon via cdouglas)
+
+ HADOOP-6270. Support deleteOnExit in FileContext. (Suresh Srinivas via
+ szetszwo)
+
+ HADOOP-6233. Rename configuration keys towards API standardization and
+ backward compatibility. (Jitendra Pandey via suresh)
+
+ HADOOP-6260. Add additional unit tests for FileContext util methods.
+ (Gary Murry via suresh).
+
+ HADOOP-6309. Change build.xml to run tests with java asserts. (Eli
+ Collins via szetszwo)
+
+ HADOOP-6326. Hudson runs should check for AspectJ warnings and report
+ failure if any is present. (cos)
+
+ HADOOP-6329. Add build-fi directory to the ignore lists. (szetszwo)
+
+ HADOOP-5107. Use Maven ant tasks to publish the subproject jars.
+ (Giridharan Kesavan via omalley)
+
+ HADOOP-6343. Log unexpected throwable object caught in RPC. (Jitendra Nath
+ Pandey via szetszwo)
+
+ HADOOP-6367. Removes Access Token implementation from common.
+ (Kan Zhang via ddas)
+
+ HADOOP-6395. Upgrade some libraries to be consistent across common, hdfs,
+ and mapreduce. (omalley)
+
+ HADOOP-6398. Build is broken after HADOOP-6395 patch has been applied (cos)
+
+ HADOOP-6413. Move TestReflectionUtils to Common. (Todd Lipcon via tomwhite)
+
+ HADOOP-6283. Improve the exception messages thrown by
+ FileUtil$HardLink.getLinkCount(..). (szetszwo)
+
+ HADOOP-6279. Add Runtime::maxMemory to JVM metrics. (Todd Lipcon via
+ cdouglas)
+
+ HADOOP-6305. Unify build property names to facilitate cross-project
+ modifications. (cos)
+
+ HADOOP-6312. Remove unnecessary debug logging in Configuration constructor.
+ (Aaron Kimball via cdouglas)
+
+ HADOOP-6366. Reduce ivy console output to observable level. (cos)
+
+ HADOOP-6400. Log errors getting Unix UGI. (Todd Lipcon via tomwhite)
+
+ HADOOP-6346. Add support for specifying unpack pattern regex to
+ RunJar.unJar. (Todd Lipcon via tomwhite)
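+
+ A sketch of the new overload, assuming unJar accepts a
+ java.util.regex.Pattern selecting which entries to unpack; the paths
+ are illustrative:
+
+   import java.io.File;
+   import java.util.regex.Pattern;
+   import org.apache.hadoop.util.RunJar;
+
+   // Unpack only the native libraries from the jar.
+   RunJar.unJar(new File("job.jar"), new File("/tmp/unpacked"),
+                Pattern.compile(".*\\.so$"));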
+
+ HADOOP-6422. Make RPC backend pluggable, protocol-by-protocol, to
+ ease evolution towards Avro. (cutting)
+
+ HADOOP-5958. Use JDK 1.6 File APIs in DF.java wherever possible.
+ (Aaron Kimball via tomwhite)
+
+ HADOOP-6222. Core doesn't have TestCommonCLI facility. (cos)
+
+ HADOOP-6394. Add a helper class to simplify FileContext related tests and
+ improve code reusability. (Jitendra Nath Pandey via suresh)
+
+ HADOOP-4656. Add a user to groups mapping service. (boryas, acmurthy)
+
+ HADOOP-6435. Make RPC.waitForProxy with timeout public. (Steve Loughran
+ via tomwhite)
+
+ HADOOP-6472. Add tokenCache option to GenericOptionsParser for passing
+ a file with secret keys to a map reduce job. (boryas)
+
+ HADOOP-3205. Read multiple chunks directly from FSInputChecker subclass
+ into user buffers. (Todd Lipcon via tomwhite)
+
+ HADOOP-6479. TestUTF8 assertions could fail with better text.
+ (Steve Loughran via tomwhite)
+
+ HADOOP-6155. Deprecate RecordIO anticipating Avro. (Tom White via cdouglas)
+
+ HADOOP-6492. Make some Avro serialization APIs public.
+ (Aaron Kimball via cutting)
+
+ HADOOP-6497. Add an adapter for Avro's SeekableInput interface, so
+ that Avro can read FileSystem data.
+ (Aaron Kimball via cutting)
+
+ HADOOP-6495. Identifier should be serialized after the password is
+ created in the Token constructor. (jnp via boryas)
+
+ HADOOP-6518. Makes the UGI honor the env var KRB5CCNAME.
+ (Owen O'Malley via ddas)
+
+ HADOOP-6531. Enhance FileUtil with an API to delete all contents of a
+ directory. (Amareshwari Sriramadasu via yhemanth)
+
+ HADOOP-6547. Move DelegationToken into Common, so that it can be used by
+ MapReduce also. (devaraj via omalley)
+
+ HADOOP-6552. Puts renewTGT=true and useTicketCache=true for the keytab
+ Kerberos options. (ddas)
+
+ HADOOP-6534. Trim whitespace from directory lists initializing
+ LocalDirAllocator. (Todd Lipcon via cdouglas)
+
+ HADOOP-6559. Makes the RPC client automatically re-login when the SASL
+ connection setup fails. This is applicable only to keytab based logins.
+ (Devaraj Das)
+
+ HADOOP-6551. Delegation token renewing and cancelling should provide
+ meaningful exceptions when there are failures instead of returning
+ false. (omalley)
+
+ HADOOP-6583. Captures authentication and authorization metrics. (ddas)
+
+ HADOOP-6543. Allows secure clients to talk to unsecure clusters.
+ (Kan Zhang via ddas)
+
+ HADOOP-6579. Provide a mechanism for encoding/decoding Tokens from
+ a url-safe string and change the commons-codec library to 1.4. (omalley)
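+
+ A round-trip sketch, assuming encode/decode methods on Token as
+ described:
+
+   import org.apache.hadoop.security.token.Token;
+   import org.apache.hadoop.security.token.TokenIdentifier;
+
+   Token<TokenIdentifier> t = new Token<TokenIdentifier>();
+   String s = t.encodeToUrlString();    // base64, safe inside a URL
+   Token<TokenIdentifier> copy = new Token<TokenIdentifier>();
+   copy.decodeFromUrlString(s);         // restores the original token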
+
+ HADOOP-6596. Add a version field to the AbstractDelegationTokenIdentifier's
+ serialized value. (omalley)
+
+ HADOOP-6573. Support for persistent delegation tokens.
+ (Jitendra Pandey via shv)
+
+ HADOOP-6594. Provide a fetchdt tool via bin/hdfs. (jhoman via acmurthy)
+
+ HADOOP-6589. Provide better error messages when RPC authentication fails.
+ (Kan Zhang via omalley)
+
+ HADOOP-6599. Split existing RpcMetrics into RpcMetrics & RpcDetailedMetrics.
+ (Suresh Srinivas via Sanjay Radia)
+
+ HADOOP-6537. Declare more detailed exceptions in FileContext and
+ AbstractFileSystem. (Suresh Srinivas via Sanjay Radia)
+
+ HADOOP-6486. Fix common classes to work with Avro 1.3 reflection.
+ (cutting via tomwhite)
+
+ HADOOP-6591. HarFileSystem can handle paths with whitespace characters.
+ (Rodrigo Schmidt via dhruba)
+
+ HADOOP-6407. Have a way to automatically update Eclipse .classpath file
+ when new libs are added to the classpath through Ivy. (tomwhite)
+
+ HADOOP-3659. Patch to allow hadoop native to compile on Mac OS X.
+ (Colin Evans and Allen Wittenauer via tomwhite)
+
+ HADOOP-6471. StringBuffer -> StringBuilder - conversion of references
+ as necessary. (Kay Kay via tomwhite)
+
+ HADOOP-6646. Move HarFileSystem out of Hadoop Common. (mahadev)
+
+ HADOOP-6566. Add methods supporting, enforcing narrower permissions on
+ local daemon directories. (Arun Murthy and Luke Lu via cdouglas)
+
+ HADOOP-6705. Fix to work with 1.5 version of jiracli
+ (Giridharan Kesavan)
+
+ HADOOP-6658. Exclude Private elements from generated Javadoc. (tomwhite)
+
+ HADOOP-6635. Install/deploy source jars to Maven repo.
+ (Patrick Angeles via jghoman)
+
+ HADOOP-6717. Log levels in o.a.h.security.Groups too high
+ (Todd Lipcon via jghoman)
+
+ HADOOP-6667. RPC.waitForProxy should retry through NoRouteToHostException.
+ (Todd Lipcon via tomwhite)
+
+ HADOOP-6677. InterfaceAudience.LimitedPrivate should take a string not an
+ enum. (tomwhite)
+
+ HADOOP-6678. Remove FileContext#isFile, isDirectory, and exists.
+ (Eli Collins via hairong)
+
+ HADOOP-6515. Make maximum number of http threads configurable.
+ (Scott Chen via zshao)
+
+ HADOOP-6563. Add more symlink tests to cover intermediate symlinks
+ in paths. (Eli Collins via suresh)
+
+ HADOOP-6585. Add FileStatus#isDirectory and isFile. (Eli Collins via
+ tomwhite)
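+
+ For example, a sketch using the new predicates; the path is
+ illustrative:
+
+   import org.apache.hadoop.conf.Configuration;
+   import org.apache.hadoop.fs.FileStatus;
+   import org.apache.hadoop.fs.FileSystem;
+   import org.apache.hadoop.fs.Path;
+
+   FileSystem fs = FileSystem.get(new Configuration());
+   FileStatus st = fs.getFileStatus(new Path("/user/example"));
+   if (st.isDirectory()) {
+     System.out.println("directory");
+   } else if (st.isFile()) {
+     System.out.println("regular file");
+   }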
+
+ HADOOP-6738. Move cluster_setup.xml from MapReduce to Common.
+ (Tom White via tomwhite)
+
+ HADOOP-6794. Move configuration and script files post split. (tomwhite)
+
+ HADOOP-6403. Deprecate EC2 bash scripts. (tomwhite)
+
+ HADOOP-6769. Add an API in FileSystem to get FileSystem instances based
+ on users. (ddas via boryas)
+
+ HADOOP-6813. Add a new newInstance method in FileSystem that takes
+ a "user" as argument (ddas via boryas)
+
+ HADOOP-6668. Apply audience and stability annotations to classes in
+ common. (tomwhite)
+
+ HADOOP-6821. Document changes to memory monitoring. (Hemanth Yamijala
+ via tomwhite)
+
+ OPTIMIZATIONS
+
+ HADOOP-5595. NameNode does not need to run a replicator to choose a
+ random DataNode. (hairong)
+
+ HADOOP-5603. Improve NameNode's block placement performance. (hairong)
+
+ HADOOP-5638. More improvement on block placement performance. (hairong)
+
+ HADOOP-6180. NameNode slowed down when many files with same filename
+ were moved to Trash. (Boris Shkolnik via hairong)
+
+ HADOOP-6166. Further improve the performance of the pure-Java CRC32
+ implementation. (Tsz Wo (Nicholas), SZE via cdouglas)
+
+ HADOOP-6271. Add recursive and non-recursive create and mkdir to
+ FileContext. (Sanjay Radia via suresh)
+
+ HADOOP-6261. Add URI based tests for FileContext.
+ (Ravi Phulari via suresh).
+
+ HADOOP-6307. Add a new SequenceFile.Reader constructor in order to support
+ reading an un-closed file. (szetszwo)
+
+ HADOOP-6467. Improve the performance on HarFileSystem.listStatus(..).
+ (mahadev via szetszwo)
+
+ HADOOP-6569. FsShell#cat should avoid calling unnecessary getFileStatus
+ before opening a file to read. (hairong)
+
+ HADOOP-6689. Add directory renaming test to existing FileContext tests.
+ (Eli Collins via suresh)
+
+ HADOOP-6713. The RPC server Listener thread is a scalability bottleneck.
+ (Dmytro Molkov via hairong)
+
+ BUG FIXES
+
+ HADOOP-6748. Removes hadoop.cluster.administrators, cluster administrators
+ acl is passed as parameter in constructor. (amareshwari)
+
+ HADOOP-6828. Herriot uses the old way of accessing log directories (Sreekanth
+ Ramakrishnan via cos)
+
+ HADOOP-6788. [Herriot] Exception exclusion functionality is not working
+ correctly. (Vinay Thota via cos)
+
+ HADOOP-6773. Ivy folder contains redundant files (cos)
+
+ HADOOP-5379. CBZip2InputStream to throw IOException on data crc error.
+ (Rodrigo Schmidt via zshao)
+
+ HADOOP-5326. Fixes CBZip2OutputStream data corruption problem.
+ (Rodrigo Schmidt via zshao)
+
+ HADOOP-4963. Fixes logging to do with getting the location of a
+ map output file. (Amareshwari Sriramadasu via ddas)
+
+ HADOOP-2337. Trash should close FileSystem on exit and should not start
+ the emptying thread if disabled. (shv)
+
+ HADOOP-5072. Fix failure in TestCodec because testSequenceFileGzipCodec
+ won't pass without native gzip codec. (Zheng Shao via dhruba)
+
+ HADOOP-5050. TestDFSShell.testFilePermissions should not assume umask
+ setting. (Jakob Homan via szetszwo)
+
+ HADOOP-4975. Set classloader for nested mapred.join configs. (Jingkei Ly
+ via cdouglas)
+
+ HADOOP-5078. Remove invalid AMI kernel in EC2 scripts. (tomwhite)
+
+ HADOOP-5045. FileSystem.isDirectory() should not be deprecated. (Suresh
+ Srinivas via szetszwo)
+
+ HADOOP-4960. Use datasource time, rather than system time, during metrics
+ demux. (Eric Yang via cdouglas)
+
+ HADOOP-5032. Export conf dir set in config script. (Eric Yang via cdouglas)
+
+ HADOOP-5176. Fix a typo in TestDFSIO. (Ravi Phulari via szetszwo)
+
+ HADOOP-4859. Distinguish daily rolling output dir by adding a timestamp.
+ (Jerome Boulon via cdouglas)
+
+ HADOOP-4959. Correct system metric collection from top on Redhat 5.1. (Eric
+ Yang via cdouglas)
+
+ HADOOP-5039. Fix log rolling regex to process only the relevant
+ subdirectories. (Jerome Boulon via cdouglas)
+
+ HADOOP-5095. Update Chukwa watchdog to accept config parameter. (Jerome
+ Boulon via cdouglas)
+
+ HADOOP-5147. Correct reference to agent list in Chukwa bin scripts. (Ari
+ Rabkin via cdouglas)
+
+ HADOOP-5148. Fix logic disabling watchdog timer in Chukwa daemon scripts.
+ (Ari Rabkin via cdouglas)
+
+ HADOOP-5100. Append, rather than truncate, when creating log4j metrics in
+ Chukwa. (Jerome Boulon via cdouglas)
+
+ HADOOP-5204. Fix broken trunk compilation on Hudson by letting
+ task-controller be an independent target in build.xml.
+ (Sreekanth Ramakrishnan via yhemanth)
+
+ HADOOP-5212. Fix the path translation problem introduced by HADOOP-4868
+ running on cygwin. (Sharad Agarwal via omalley)
+
+ HADOOP-5226. Add license headers to html and jsp files. (szetszwo)
+
+ HADOOP-5172. Disable misbehaving Chukwa unit test until it can be fixed.
+ (Jerome Boulon via nigel)
+
+ HADOOP-4933. Fixes a ConcurrentModificationException problem that shows up
+ when the history viewer is accessed concurrently.
+ (Amar Kamat via ddas)
+
+ HADOOP-5253. Remove duplicate call to cn-docs target.
+ (Giri Kesavan via nigel)
+
+ HADOOP-5251. Fix classpath for contrib unit tests to include clover jar.
+ (nigel)
+
+ HADOOP-5206. Synchronize "unprotected*" methods of FSDirectory on the root.
+ (Jakob Homan via shv)
+
+ HADOOP-5292. Fix NPE in KFS::getBlockLocations. (Sriram Rao via lohit)
+
+ HADOOP-5219. Adds a new property io.seqfile.local.dir for use by
+ SequenceFile, which earlier used mapred.local.dir. (Sharad Agarwal
+ via ddas)
+
+ HADOOP-5300. Fix ant javadoc-dev target and the typo in the class name
+ NameNodeActivtyMBean. (szetszwo)
+
+ HADOOP-5218. libhdfs unit test failed because it was unable to
+ start namenode/datanode. Fixed. (dhruba)
+
+ HADOOP-5273. Add license header to TestJobInProgress.java. (Jakob Homan
+ via szetszwo)
+
+ HADOOP-5229. Remove duplicate version variables in build files
+ (Stefan Groschupf via johan)
+
+ HADOOP-5383. Avoid building an unused string in NameNode's
+ verifyReplication(). (Raghu Angadi)
+
+ HADOOP-5347. Create a job output directory for the bbp examples. (szetszwo)
+
+ HADOOP-5341. Make hadoop-daemon scripts backwards compatible with the
+ changes in HADOOP-4868. (Sharad Agarwal via yhemanth)
+
+ HADOOP-5456. Fix javadoc links to ClientProtocol#restoreFailedStorage(..).
+ (Boris Shkolnik via szetszwo)
+
+ HADOOP-5458. Remove leftover Chukwa entries from build, etc. (cdouglas)
+
+ HADOOP-5386. Modify hdfsproxy unit test to start on a random port,
+ implement clover instrumentation. (Zhiyong Zhang via cdouglas)
+
+ HADOOP-5511. Add Apache License to EditLogBackupOutputStream. (shv)
+
+ HADOOP-5507. Fix JMXGet javadoc warnings. (Boris Shkolnik via szetszwo)
+
+ HADOOP-5191. Accessing HDFS with any ip or hostname should work as long
+ as it points to the interface NameNode is listening on. (Raghu Angadi)
+
+ HADOOP-5561. Add javadoc.maxmemory parameter to build, preventing OOM
+ exceptions from javadoc-dev. (Jakob Homan via cdouglas)
+
+ HADOOP-5149. Modify HistoryViewer to ignore unfamiliar files in the log
+ directory. (Hong Tang via cdouglas)
+
+ HADOOP-5477. Fix rare failure in TestCLI for hosts returning variations of
+ 'localhost'. (Jakob Homan via cdouglas)
+
+ HADOOP-5194. Disables setsid for tasks run on cygwin.
+ (Ravi Gummadi via ddas)
+
+ HADOOP-5322. Fix misleading/outdated comments in JobInProgress.
+ (Amareshwari Sriramadasu via cdouglas)
+
+ HADOOP-5198. Fixes a problem to do with the task PID file being absent and
+ the JvmManager trying to look for it. (Amareshwari Sriramadasu via ddas)
+
+ HADOOP-5464. DFSClient did not treat write timeout of 0 properly.
+ (Raghu Angadi)
+
+ HADOOP-4045. Fix processing of IO errors in EditsLog.
+ (Boris Shkolnik via shv)
+
+ HADOOP-5462. Fixed a double free bug in the task-controller
+ executable. (Sreekanth Ramakrishnan via yhemanth)
+
+ HADOOP-5652. Fix a bug where in-memory segments are incorrectly retained in
+ memory. (cdouglas)
+
+ HADOOP-5533. Recovery duration shown on the jobtracker webpage is
+ inaccurate. (Amar Kamat via sharad)
+
+ HADOOP-5647. Fix TestJobHistory to not depend on /tmp. (Ravi Gummadi
+ via sharad)
+
+ HADOOP-5661. Fixes some findbugs warnings in o.a.h.mapred* packages and
+ suppresses a bunch of them. (Jothi Padmanabhan via ddas)
+
+ HADOOP-5704. Fix compilation problems in TestFairScheduler and
+ TestCapacityScheduler. (Chris Douglas via szetszwo)
+
+ HADOOP-5650. Fix safemode messages in the Namenode log. (Suresh Srinivas
+ via szetszwo)
+
+ HADOOP-5488. Removes the pidfile management for the Task JVM from the
+ framework and instead passes the PID back and forth between the
+ TaskTracker and the Task processes. (Ravi Gummadi via ddas)
+
+ HADOOP-5658. Fix Eclipse templates. (Philip Zeyliger via shv)
+
+ HADOOP-5709. Remove redundant synchronization added in HADOOP-5661. (Jothi
+ Padmanabhan via cdouglas)
+
+ HADOOP-5715. Add conf/mapred-queue-acls.xml to the ignore lists.
+ (szetszwo)
+
+ HADOOP-5592. Fix typo in Streaming doc in reference to GzipCodec.
+ (Corinne Chandel via tomwhite)
+
+ HADOOP-5656. Counter for S3N Read Bytes does not work. (Ian Nowland
+ via tomwhite)
+
+ HADOOP-5406. Fix JNI binding for ZlibCompressor::setDictionary. (Lars
+ Francke via cdouglas)
+
+ HADOOP-3426. Fix/provide handling when DNS lookup fails on the loopback
+ address. Also cache the result of the lookup. (Steve Loughran via cdouglas)
+
+ HADOOP-5476. Close the underlying InputStream in SequenceFile::Reader when
+ the constructor throws an exception. (Michael Tamm via cdouglas)
+
+ HADOOP-5675. Do not launch a job if DistCp has no work to do. (Tsz Wo
+ (Nicholas), SZE via cdouglas)
+
+ HADOOP-5737. Fixes a problem in the way the JobTracker used to talk to
+ other daemons like the NameNode to get the job's files. Also adds APIs
+ in the JobTracker to get the FileSystem objects as per the JobTracker's
+ configuration. (Amar Kamat via ddas)
+
+ HADOOP-5648. Not able to generate gridmix.jar on the already compiled
+ version of hadoop. (gkesavan)
+
+ HADOOP-5808. Fix import never used javac warnings in hdfs. (szetszwo)
+
+ HADOOP-5203. TT's version build is too restrictive. (Rick Cox via sharad)
+
+ HADOOP-5818. Revert the renaming from FSNamesystem.checkSuperuserPrivilege
+ to checkAccess by HADOOP-5643. (Amar Kamat via szetszwo)
+
+ HADOOP-5820. Fix findbugs warnings for http related codes in hdfs.
+ (szetszwo)
+
+ HADOOP-5822. Fix javac warnings in several dfs tests related to unnecessary
+ casts. (Jakob Homan via szetszwo)
+
+ HADOOP-5842. Fix a few javac warnings under packages fs and util.
+ (Hairong Kuang via szetszwo)
+
+ HADOOP-5845. Build successful despite test failure on test-core target.
+ (sharad)
+
+ HADOOP-5314. Prevent unnecessary saving of the file system image during
+ name-node startup. (Jakob Homan via shv)
+
+ HADOOP-5855. Fix javac warnings for DisallowedDatanodeException and
+ UnsupportedActionException. (szetszwo)
+
+ HADOOP-5582. Fixes a problem in Hadoop Vaidya to do with reading
+ counters from job history files. (Suhas Gogate via ddas)
+
+ HADOOP-5829. Fix javac warnings found in ReplicationTargetChooser,
+ FSImage, Checkpointer, SecondaryNameNode and a few other hdfs classes.
+ (Suresh Srinivas via szetszwo)
+
+ HADOOP-5835. Fix findbugs warnings found in Block, DataNode, NameNode and
+ a few other hdfs classes. (Suresh Srinivas via szetszwo)
+
+ HADOOP-5853. Undeprecate HttpServer.addInternalServlet method. (Suresh
+ Srinivas via szetszwo)
+
+ HADOOP-5801. Fixes the problem: If the hosts file is changed across restart
+ then it should be refreshed upon recovery so that the excluded hosts are
+ lost and the maps are re-executed. (Amar Kamat via ddas)
+
+ HADOOP-5841. Resolve findbugs warnings in DistributedFileSystem,
+ DatanodeInfo, BlocksMap, DataNodeDescriptor. (Jakob Homan via szetszwo)
+
+ HADOOP-5878. Fix import and Serializable javac warnings found in hdfs jsp.
+ (szetszwo)
+
+ HADOOP-5782. Revert a few formatting changes introduced in HADOOP-5015.
+ (Suresh Srinivas via rangadi)
+
+ HADOOP-5687. NameNode throws NPE if fs.default.name is the default value.
+ (Philip Zeyliger via shv)
+
+ HADOOP-5867. Fix javac warnings found in NNBench and NNBenchWithoutMR.
+ (Konstantin Boudnik via szetszwo)
+
+ HADOOP-5728. Fixed FSEditLog.printStatistics IndexOutOfBoundsException.
+ (Wang Xu via johan)
+
+ HADOOP-5847. Fixed failing Streaming unit tests (gkesavan)
+
+ HADOOP-5252. Streaming overrides -inputformat option (Klaas Bosteels
+ via sharad)
+
+ HADOOP-5710. Counter MAP_INPUT_BYTES missing from new mapreduce api.
+ (Amareshwari Sriramadasu via sharad)
+
+ HADOOP-5809. Fix job submission, broken by errant directory creation.
+ (Sreekanth Ramakrishnan and Jothi Padmanabhan via cdouglas)
+
+ HADOOP-5635. Change distributed cache to work with other distributed file
+ systems. (Andrew Hitchcock via tomwhite)
+
+ HADOOP-5856. Fix "unsafe multithreaded use of DateFormat" findbugs warning
+ in DataBlockScanner. (Kan Zhang via szetszwo)
+
+ HADOOP-4864. Fixes a problem to do with -libjars with multiple jars when
+ client and cluster reside on different OSs. (Amareshwari Sriramadasu via
+ ddas)
+
+ HADOOP-5623. Fixes a problem to do with status messages getting overwritten
+ in streaming jobs. (Rick Cox and Jothi Padmanabhan via ddas)
+
+ HADOOP-5895. Fixes computation of count of merged bytes for logging.
+ (Ravi Gummadi via ddas)
+
+ HADOOP-5805. Fix problem using top-level S3 buckets as input/output
+ directories. (Ian Nowland via tomwhite)
+
+ HADOOP-5940. trunk eclipse-plugin build fails while trying to copy
+ commons-cli jar from the lib dir (Giridharan Kesavan via gkesavan)
+
+ HADOOP-5864. Fix DMI and OBL findbugs in packages hdfs and metrics.
+ (hairong)
+
+ HADOOP-5935. Fix Hudson's broken release audit warnings link.
+ (Giridharan Kesavan via gkesavan)
+
+ HADOOP-5947. Delete empty TestCombineFileInputFormat.java.
+
+ HADOOP-5899. Move a log message in FSEditLog to the right place for
+ avoiding unnecessary log. (Suresh Srinivas via szetszwo)
+
+ HADOOP-5944. Add Apache license header to BlockManager.java. (Suresh
+ Srinivas via szetszwo)
+
+ HADOOP-5891. SecondaryNamenode is able to converse with the NameNode
+ even when the default value of dfs.http.address is not overridden.
+ (Todd Lipcon via dhruba)
+
+ HADOOP-5953. The isDirectory(..) and isFile(..) methods in KosmosFileSystem
+ should not be deprecated. (szetszwo)
+
+ HADOOP-5954. Fix javac warnings in TestFileCreation, TestSmallBlock,
+ TestFileStatus, TestDFSShellGenericOptions, TestSeekBug and
+ TestDFSStartupVersions. (szetszwo)
+
+ HADOOP-5956. Fix ivy dependency in hdfsproxy and capacity-scheduler.
+ (Giridharan Kesavan via szetszwo)
+
+ HADOOP-5836. Bug in S3N handling of directory markers using an object with
+ a trailing "/" causes jobs to fail. (Ian Nowland via tomwhite)
+
+ HADOOP-5861. s3n files are not getting split by default. (tomwhite)
+
+ HADOOP-5762. Fix a problem that DistCp does not copy empty directory.
+ (Rodrigo Schmidt via szetszwo)
+
+ HADOOP-5859. Fix "wait() or sleep() with locks held" findbugs warnings in
+ DFSClient. (Kan Zhang via szetszwo)
+
+ HADOOP-5457. Fix to continue to run builds even if contrib test fails
+ (Giridharan Kesavan via gkesavan)
+
+ HADOOP-5963. Remove an unnecessary exception catch in NNBench. (Boris
+ Shkolnik via szetszwo)
+
+ HADOOP-5989. Fix streaming test failure. (gkesavan)
+
+ HADOOP-5981. Fix a bug in HADOOP-2838 in parsing mapred.child.env.
+ (Amar Kamat via sharad)
+
+ HADOOP-5420. Fix LinuxTaskController to kill tasks using the process
+ groups they are launched with.
+ (Sreekanth Ramakrishnan via yhemanth)
+
+ HADOOP-6031. Remove @author tags from Java source files. (Ravi Phulari
+ via szetszwo)
+
+ HADOOP-5980. Fix LinuxTaskController so tasks get passed
+ LD_LIBRARY_PATH and other environment variables.
+ (Sreekanth Ramakrishnan via yhemanth)
+
+ HADOOP-4041. IsolationRunner does not work as documented.
+ (Philip Zeyliger via tomwhite)
+
+ HADOOP-6004. Fixes BlockLocation deserialization. (Jakob Homan via
+ szetszwo)
+
+ HADOOP-6079. Serialize proxySource as DatanodeInfo in DataTransferProtocol.
+ (szetszwo)
+
+ HADOOP-6096. Fix Eclipse project and classpath files following project
+ split. (tomwhite)
+
+ HADOOP-6122. The greater-than operator in test-patch.sh should be "-gt",
+ not ">". (szetszwo)
+
+ HADOOP-6114. Fix javadoc documentation for FileStatus.getLen.
+ (Dmitry Rzhevskiy via dhruba)
+
+ HADOOP-6131. A sysproperty should not be set unless the property
+ is set on the ant command line in build.xml (hong tang via mahadev)
+
+ HADOOP-6137. Fix project specific test-patch requirements
+ (Giridharan Kesavan)
+
+ HADOOP-6138. Eliminate the deprecated warnings introduced by H-5438.
+ (He Yongqiang via szetszwo)
+
+ HADOOP-6132. RPC client creates an extra connection because of incorrect
+ key for connection cache. (Kan Zhang via rangadi)
+
+ HADOOP-6123. Add missing classpaths in hadoop-config.sh. (Sharad Agarwal
+ via szetszwo)
+
+ HADOOP-6172. Fix jar file names in hadoop-config.sh and include
+ ${build.src} as a part of the source list in build.xml. (Hong Tang via
+ szetszwo)
+
+ HADOOP-6124. Fix javac warning detection in test-patch.sh. (Giridharan
+ Kesavan via szetszwo)
+
+ HADOOP-6177. FSInputChecker.getPos() would return a position greater
+ than the file size. (Hong Tang via hairong)
+
+ HADOOP-6188. TestTrash uses java.io.File api but not hadoop FileSystem api.
+ (Boris Shkolnik via szetszwo)
+
+ HADOOP-6192. Fix Shell.getUlimitMemoryCommand to not rely on Map-Reduce
+ specific configs. (acmurthy)
+
+ HADOOP-6103. Clones the classloader as part of Configuration clone.
+ (Amareshwari Sriramadasu via ddas)
+
+ HADOOP-6152. Fix classpath variables in bin/hadoop-config.sh and some
+ other scripts. (Aaron Kimball via szetszwo)
+
+ HADOOP-6215. Fix GenericOptionsParser to deal with -D with '=' in the
+ value. (Amar Kamat via sharad)
+
+ HADOOP-6227. Fix Configuration to allow final parameters to be set to null
+ and prevent them from being overridden.
+ (Amareshwari Sriramadasu via yhemanth)
+
+ HADOOP-6199. Move io.map.skip.index property to core-default from mapred.
+ (Amareshwari Sriramadasu via cdouglas)
+
+ HADOOP-6229. Attempt to make a directory under an existing file on
+ LocalFileSystem should throw an Exception. (Boris Shkolnik via tomwhite)
+
+ HADOOP-6243. Fix a NullPointerException in processing deprecated keys.
+ (Sreekanth Ramakrishnan via yhemanth)
+
+ HADOOP-6009. S3N listStatus incorrectly returns null instead of empty
+ array when called on empty root. (Ian Nowland via tomwhite)
+
+ HADOOP-6181. Fix .eclipse.templates/.classpath for avro and jets3t jar
+ files. (Carlos Valiente via szetszwo)
+
+ HADOOP-6196. Fix a bug in SequenceFile.Reader where syncing within the
+ header would cause the reader to read the sync marker as a record. (Jay
+ Booth via cdouglas)
+
+ HADOOP-6250. Modify test-patch to delete copied XML files before running
+ patch build. (Rahul Kumar Singh via yhemanth)
+
+ HADOOP-6257. Two TestFileSystem classes are confusing
+ hadoop-hdfs-hdfwithmr. (Philip Zeyliger via tomwhite)
+
+ HADOOP-6151. Added an input filter to all of the http servlets that quotes
+ html characters in the parameters, to prevent cross site scripting
+ attacks. (omalley)
+
+ HADOOP-6274. Fix TestLocalFSFileContextMainOperations test failure.
+ (Gary Murry via suresh).
+
+ HADOOP-6281. Avoid null pointer exceptions when the jsps don't have
+ parameters. (omalley)
+
+ HADOOP-6285. Fix the result type of the getParameterMap method in the
+ HttpServer.QuotingInputFilter. (omalley)
+
+ HADOOP-6286. Fix bugs related to URI handling in glob methods in
+ FileContext. (Boris Shkolnik via suresh)
+
+ HADOOP-6292. Update native libraries guide. (Corinne Chandel via cdouglas)
+
+ HADOOP-6327. FileContext tests should not use /tmp and should clean up
+ files. (Sanjay Radia via szetszwo)
+
+ HADOOP-6318. Upgrade to Avro 1.2.0. (cutting)
+
+ HADOOP-6334. Fix GenericOptionsParser to understand URI for -files,
+ -libjars and -archives options and fix Path to support URI with fragment.
+ (Amareshwari Sriramadasu via szetszwo)
+
+ HADOOP-6344. Fix rm and rmr, which immediately deleted files rather than
+ sending them to trash if a user was over quota. (Jakob Homan via suresh)
+
+ HADOOP-6347. run-test-core-fault-inject runs a test case twice if
+ -Dtestcase is set (cos)
+
+ HADOOP-6375. Sync documentation for FsShell du with its implementation.
+ (Todd Lipcon via cdouglas)
+
+ HADOOP-6441. Protect web ui from cross site scripting attacks (XSS) on
+ the host http header and using encoded utf-7. (omalley)
+
+ HADOOP-6451. Fix build to run contrib unit tests. (Tom White via cdouglas)
+
+ HADOOP-6374. JUnit tests should never depend on anything in conf.
+ (Anatoli Fomenko via cos)
+
+ HADOOP-6290. Prevent duplicate slf4j-simple jar via Avro's classpath.
+ (Owen O'Malley via cdouglas)
+
+ HADOOP-6293. Fix FsShell -text to work on filesystems other than the
+ default. (cdouglas)
+
+ HADOOP-6341. Fix test-patch.sh for checkTests function. (gkesavan)
+
+ HADOOP-6314. Fix "fs -help" for the "-count" command. (Ravi Phulari via
+ szetszwo)
+
+ HADOOP-6405. Update Eclipse configuration to match changes to Ivy
+ configuration (Edwin Chan via cos)
+
+ HADOOP-6411. Remove deprecated file src/test/hadoop-site.xml. (cos)
+
+ HADOOP-6386. NameNode's HttpServer can't instantiate InetSocketAddress:
+ IllegalArgumentException is thrown (cos)
+
+ HADOOP-6254. Slow reads cause s3n to fail with SocketTimeoutException.
+ (Andrew Hitchcock via tomwhite)
+
+ HADOOP-6428. HttpServer sleeps with negative values. (cos)
+
+ HADOOP-6414. Add command line help for -expunge command.
+ (Ravi Phulari via tomwhite)
+
+ HADOOP-6391. Classpath should not be part of command line arguments.
+ (Cristian Ivascu via tomwhite)
+
+ HADOOP-6462. Target "compile" does not exist in contrib/cloud. (tomwhite)
+
+ HADOOP-6402. testConf.xsl is not well-formed XML. (Steve Loughran
+ via tomwhite)
+
+ HADOOP-6489. Fix 3 findbugs warnings. (Erik Steffl via suresh)
+
+ HADOOP-6517. Fix UserGroupInformation so that tokens are saved/retrieved
+ to/from the embedded Subject (Owen O'Malley & Kan Zhang via ddas)
+
+ HADOOP-6538. Sets hadoop.security.authentication to simple by default.
+ (ddas)
+
+ HADOOP-6540. Contrib unit tests have invalid XML for core-site, etc.
+ (Aaron Kimball via tomwhite)
+
+ HADOOP-6521. User specified umask using deprecated dfs.umask must override
+ server configured using new dfs.umaskmode for backward compatibility.
+ (suresh)
+
+ HADOOP-6522. Fix decoding of codepoint zero in UTF8. (cutting)
+
+ HADOOP-6505. Use tr rather than sed to effect literal substitution in the
+ build script. (Allen Wittenauer via cdouglas)
+
+ HADOOP-6548. Replace mortbay imports with commons logging. (cdouglas)
+
+ HADOOP-6560. Handle invalid har:// uri in HarFileSystem. (szetszwo)
+
+ HADOOP-6549. TestDoAsEffectiveUser should use the ip address of the host
+ for the superuser ip check. (jnp via boryas)
+
+ HADOOP-6570. RPC#stopProxy throws NPE if getProxyEngine(proxy) returns
+ null. (hairong)
+
+ HADOOP-6558. Return null in HarFileSystem.getFileChecksum(..) since no
+ checksum algorithm is implemented. (szetszwo)
+
+ HADOOP-6572. Makes sure that SASL encryption and the push to the responder
+ queue for the RPC response happen atomically. (Kan Zhang via ddas)
+
+ HADOOP-6545. Changes the Key for the FileSystem cache to be UGI (ddas)
+
+ HADOOP-6609. Fixed deadlock in RPC by replacing shared static
+ DataOutputBuffer in the UTF8 class with a thread local variable. (omalley)
+
+ HADOOP-6504. Invalid example in the documentation of
+ org.apache.hadoop.util.Tool. (Benoit Sigoure via tomwhite)
+
+ HADOOP-6546. BloomMapFile can return false negatives. (Clark Jefcoat
+ via tomwhite)
+
+ HADOOP-6593. TextRecordInputStream doesn't close SequenceFile.Reader.
+ (Chase Bradford via tomwhite)
+
+ HADOOP-6175. Incorrect version compilation with es_ES.ISO8859-15 locale
+ on Solaris 10. (Urko Benito via tomwhite)
+
+ HADOOP-6645. Bugs on listStatus for HarFileSystem (rodrigo via mahadev)
+
+ HADOOP-6645. Follow-up fix for bugs on listStatus for HarFileSystem
+ (rodrigo via mahadev)
+
+ HADOOP-6654. Fix code example in WritableComparable javadoc. (Tom White
+ via szetszwo)
+
+ HADOOP-6640. FileSystem.get() does RPC retries within a static
+ synchronized block. (hairong)
+
+ HADOOP-6691. TestFileSystemCaching sometimes hangs. (hairong)
+
+ HADOOP-6507. Hadoop Common Docs - delete 3 doc files that do not belong
+ under Common. (Corinne Chandel via tomwhite)
+
+ HADOOP-6439. Fixes handling of deprecated keys to follow order in which
+ keys are defined. (V.V.Chaitanya Krishna via yhemanth)
+
+ HADOOP-6690. FilterFileSystem correctly handles setTimes call.
+ (Rodrigo Schmidt via dhruba)
+
+ HADOOP-6703. Prevent renaming a file, directory or symbolic link to
+ itself. (Eli Collins via suresh)
+
+ HADOOP-6710. Symbolic umask for file creation is not conformant with posix.
+ (suresh)
+
+ HADOOP-6719. Insert all missing methods in FilterFs.
+ (Rodrigo Schmidt via dhruba)
+
+ HADOOP-6724. IPC doesn't properly handle IOEs thrown by socket factory.
+ (Todd Lipcon via tomwhite)
+
+ HADOOP-6722. NetUtils.connect should check that it hasn't connected a socket
+ to itself. (Todd Lipcon via tomwhite)
+
+ HADOOP-6634. Fix AccessControlList to use short names to verify access
+ control. (Vinod Kumar Vavilapalli via sharad)
+
+ HADOOP-6709. Re-instate deprecated FileSystem methods that were removed
+ after 0.20. (tomwhite)
+
+ HADOOP-6630. hadoop-config.sh fails to get executed if hadoop wrapper
+ scripts are in path. (Allen Wittenauer via tomwhite)
+
+ HADOOP-6742. Add the methods from HADOOP-6709 to TestFilterFileSystem.
+ (Eli Collins via tomwhite)
+
+ HADOOP-6727. Remove UnresolvedLinkException from public FileContext APIs.
+ (Eli Collins via tomwhite)
+
+ HADOOP-6631. Fix FileUtil.fullyDelete() to continue deleting other files
+ despite failure at any level. (Contributed by Ravi Gummadi and
+ Vinod Kumar Vavilapalli)
+
+ HADOOP-6723. Unchecked exceptions thrown in IPC Connection should not
+ orphan clients. (Todd Lipcon via tomwhite)
+
+ HADOOP-6404. Rename the generated artifacts to common instead of core.
+ (tomwhite)
+
+ HADOOP-6461. Webapps aren't located correctly post-split.
+ (Todd Lipcon and Steve Loughran via tomwhite)
+
+ HADOOP-6826. Revert FileSystem create method that takes CreateFlags.
+ (tomwhite)
+
+ HADOOP-6800. Harmonize JAR library versions. (tomwhite)
+
+ HADOOP-6847. Problem staging 0.21.0 artifacts to Apache Nexus Maven
+ Repository (Giridharan Kesavan via cos)
+
+ HADOOP-6819. [Herriot] Shell command for getting the new exceptions in
+ the logs returning exitcode 1 after executing successfully. (Vinay Thota
+ via cos)
+
+ HADOOP-6839. [Herriot] Implement functionality for getting the user list
+ for creating proxy users. (Vinay Thota via cos)
+
+ HADOOP-6836. [Herriot]: Generic method for adding/modifying the attributes
+ for new configuration. (Vinay Thota via cos)
+
+ HADOOP-6860. 'compile-fault-inject' should never be called directly.
+ (Konstantin Boudnik)
+
+ HADOOP-6790. Instrumented (Herriot) build uses too wide a mask to include
+ aspect files. (Konstantin Boudnik)
+
+ HADOOP-6875. [Herriot] Cleanup of temp. configurations is needed upon
+ restart of a cluster (Vinay Thota via cos)
+
+Release 0.20.3 - Unreleased
+
+ NEW FEATURES
+
+ HADOOP-6637. Benchmark for establishing RPC session. (shv)
+
+ BUG FIXES
+
+ HADOOP-6760. WebServer shouldn't increase port number in case of negative
+ port setting caused by Jetty's race (cos)
+
+ HADOOP-6881. Make WritableComparator initialize classes when
+ looking for their raw comparator, as classes often register raw
+ comparators in initializers, which are no longer automatically run
+ in Java 6 when a class is referenced. (cutting via omalley)
+
+ HADOOP-7072. Remove java5 dependencies from build. (cos)
+
+Release 0.20.204.0 - Unreleased
+
+ NEW FEATURES
+
+ HADOOP-6255. Create RPM and Debian packages for common. Changes deployment
+ layout to be consistent across the binary tgz, rpm, and deb. Adds setup
+ scripts for easy one node cluster configuration and user creation.
+ (Eric Yang via omalley)
+
+Release 0.20.203.0 - 2011-5-11
+
+ BUG FIXES
+
+ HADOOP-7258. The Gzip codec should not return null decompressors. (omalley)
+
+Release 0.20.2 - 2010-2-16
+
+ NEW FEATURES
+
+ HADOOP-6218. Adds a feature where TFile can be split by Record
+ Sequence number. (Hong Tang and Raghu Angadi via ddas)
+
+ BUG FIXES
+
+ HADOOP-6231. Allow caching of filesystem instances to be disabled on a
+ per-instance basis. (tomwhite)
+
+ HADOOP-5759. Fix for IllegalArgumentException when CombineFileInputFormat
+ is used as job InputFormat. (Amareshwari Sriramadasu via dhruba)
+
+ HADOOP-6097. Fix Path conversion in makeQualified and reset LineReader byte
+ count at the start of each block in Hadoop archives. (Ben Slusky, Tom
+ White, and Mahadev Konar via cdouglas)
+
+ HADOOP-6269. Fix threading issue with defaultResource in Configuration.
+ (Sreekanth Ramakrishnan via cdouglas)
+
+ HADOOP-6460. Reinitializes buffers used for serializing responses in ipc
+ server on exceeding maximum response size to free up Java heap. (suresh)
+
+ HADOOP-6315. Avoid incorrect use of BuiltInflater/BuiltInDeflater in
+ GzipCodec. (Aaron Kimball via cdouglas)
+
+ HADOOP-6498. IPC client bug may cause rpc call hang. (Ruyue Ma and
+ hairong via hairong)
+
+ IMPROVEMENTS
+
+ HADOOP-5611. Fix C++ libraries to build on Debian Lenny. (Todd Lipcon
+ via tomwhite)
+
+ HADOOP-5612. Some c++ scripts are not chmodded before ant execution.
+ (Todd Lipcon via tomwhite)
+
+ HADOOP-1849. Add undocumented configuration parameter for per handler
+ call queue size in IPC Server. (shv)
+
+Release 0.20.1 - 2009-09-01
+
+ INCOMPATIBLE CHANGES
+
+ HADOOP-5726. Remove pre-emption from capacity scheduler code base.
+ (Rahul Kumar Singh via yhemanth)
+
+ HADOOP-5881. Simplify memory monitoring and scheduling related
+ configuration. (Vinod Kumar Vavilapalli via yhemanth)
+
+ NEW FEATURES
+
+ HADOOP-6080. Introduce -skipTrash option to rm and rmr.
+ (Jakob Homan via shv)
+
+ HADOOP-3315. Add a new, binary file format, TFile. (Hong Tang via cdouglas)
+
+ IMPROVEMENTS
+
+ HADOOP-5711. Change Namenode file close log to info. (szetszwo)
+
+ HADOOP-5736. Update the capacity scheduler documentation for features
+ like memory based scheduling, job initialization and removal of pre-emption.
+ (Sreekanth Ramakrishnan via yhemanth)
+
+ HADOOP-5714. Add a metric for NameNode getFileInfo operation. (Jakob Homan
+ via szetszwo)
+
+ HADOOP-4372. Improves the way history filenames are obtained and manipulated.
+ (Amar Kamat via ddas)
+
+ HADOOP-5897. Add name-node metrics to capture java heap usage.
+ (Suresh Srinivas via shv)
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+
+ HADOOP-5691. Makes org.apache.hadoop.mapreduce.Reducer a concrete class
+ instead of an abstract one. (Amareshwari Sriramadasu via sharad)
+
+ HADOOP-5646. Fixes a problem in TestQueueCapacities.
+ (Vinod Kumar Vavilapalli via ddas)
+
+ HADOOP-5655. TestMRServerPorts fails on java.net.BindException. (Devaraj
+ Das via hairong)
+
+ HADOOP-5654. TestReplicationPolicy.<init> fails on java.net.BindException.
+ (hairong)
+
+ HADOOP-5688. Fix HftpFileSystem checksum path construction. (Tsz Wo
+ (Nicholas) Sze via cdouglas)
+
+ HADOOP-4674. Fix fs help messages for -test, -text, -tail, -stat
+ and -touchz options. (Ravi Phulari via szetszwo)
+
+ HADOOP-5718. Remove the check for the default queue in capacity scheduler.
+ (Sreekanth Ramakrishnan via yhemanth)
+
+ HADOOP-5719. Remove jobs that failed initialization from the waiting queue
+ in the capacity scheduler. (Sreekanth Ramakrishnan via yhemanth)
+
+ HADOOP-4744. Attaching another fix to the jetty port issue. The TaskTracker
+ kills itself if it ever discovers that the port to which jetty is actually
+ bound is invalid (-1). (ddas)
+
+ HADOOP-5349. Fixes a problem in LocalDirAllocator to check for the return
+ path value that is returned for the case where the file we want to write
+ is of an unknown size. (Vinod Kumar Vavilapalli via ddas)
+
+ HADOOP-5636. Prevents a job from going to RUNNING state after it has been
+ KILLED (this used to happen when the SetupTask would come back with a
+ success after the job has been killed). (Amar Kamat via ddas)
+
+ HADOOP-5641. Fix a NullPointerException in capacity scheduler's memory
+ based scheduling code when jobs get retired. (yhemanth)
+
+ HADOOP-5828. Use absolute path for mapred.local.dir of JobTracker in
+ MiniMRCluster. (yhemanth)
+
+ HADOOP-4981. Fix capacity scheduler to schedule speculative tasks
+ correctly in the presence of High RAM jobs.
+ (Sreekanth Ramakrishnan via yhemanth)
+
+ HADOOP-5210. Solves a problem in the progress report of the reduce task.
+ (Ravi Gummadi via ddas)
+
+ HADOOP-5850. Fixes a problem to do with not being able to run jobs with
+ 0 maps/reduces. (Vinod K V via ddas)
+
+ HADOOP-4626. Correct the API links in hdfs forrest doc so that they
+ point to the same version of hadoop. (szetszwo)
+
+ HADOOP-5883. Fixed tasktracker memory monitoring to account for
+ momentary spurts in memory usage due to java's fork() model.
+ (yhemanth)
+
+ HADOOP-5539. Fixes a problem to do with not preserving intermediate
+ output compression for merged data.
+ (Jothi Padmanabhan and Billy Pearson via ddas)
+
+ HADOOP-5932. Fixes a problem in capacity scheduler in computing
+ available memory on a tasktracker.
+ (Vinod Kumar Vavilapalli via yhemanth)
+
+ HADOOP-5908. Fixes a problem to do with ArithmeticException in the
+ JobTracker when there are jobs with 0 maps. (Amar Kamat via ddas)
+
+ HADOOP-5924. Fixes a corner case problem to do with job recovery with
+ empty history files. Also, after a JT restart, sends KillTaskAction to
+ tasks that report back but the corresponding job hasn't been initialized
+ yet. (Amar Kamat via ddas)
+
+ HADOOP-5882. Fixes a reducer progress update problem for new mapreduce
+ api. (Amareshwari Sriramadasu via sharad)
+
+ HADOOP-5746. Fixes a corner case problem in Streaming, where if an exception
+ happens in MROutputThread after the last call to the map/reduce method, the
+ exception goes undetected. (Amar Kamat via ddas)
+
+ HADOOP-5884. Fixes accounting in capacity scheduler so that high RAM jobs
+ take more slots. (Vinod Kumar Vavilapalli via yhemanth)
+
+ HADOOP-5937. Correct a safemode message in FSNamesystem. (Ravi Phulari
+ via szetszwo)
+
+ HADOOP-5869. Fix bug in assignment of setup / cleanup task that was
+ causing TestQueueCapacities to fail.
+ (Sreekanth Ramakrishnan via yhemanth)
+
+ HADOOP-5921. Fixes a problem in the JobTracker where it sometimes
+ failed to come up because a file creation in the JobTracker's
+ system-dir failed. This problem would sometimes show up only when the
+ FS for the system-dir (usually HDFS) is started at nearly the same
+ time as the JobTracker. (Amar Kamat via ddas)
+
+ HADOOP-5920. Fixes a testcase failure for TestJobHistory.
+ (Amar Kamat via ddas)
+
+ HADOOP-6139. Fix the FsShell help messages for rm and rmr. (Jakob Homan
+ via szetszwo)
+
+ HADOOP-6145. Fix FsShell rm/rmr error messages when there is a FNFE.
+ (Jakob Homan via szetszwo)
+
+ HADOOP-6150. Users should be able to instantiate a comparator using
+ the TFile API. (Hong Tang via rangadi)
+
+Release 0.20.0 - 2009-04-15
+
+ INCOMPATIBLE CHANGES
+
+ HADOOP-4210. Fix findbugs warnings for equals implementations of mapred ID
+ classes. Removed public, static ID::read and ID::forName; made ID an
+ abstract class. (Suresh Srinivas via cdouglas)
+
+ HADOOP-4253. Fix various warnings generated by findbugs.
+ Following deprecated methods in RawLocalFileSystem are removed:
+ public String getName()
+ public void lock(Path p, boolean shared)
+ public void release(Path p)
+ (Suresh Srinivas via johan)
+
+ HADOOP-4618. Move http server from FSNamesystem into NameNode.
+ FSNamesystem.getNameNodeInfoPort() is removed.
+ FSNamesystem.getDFSNameNodeMachine() and FSNamesystem.getDFSNameNodePort()
+ replaced by FSNamesystem.getDFSNameNodeAddress().
+ NameNode(bindAddress, conf) is removed.
+ (shv)
+
+ HADOOP-4567. GetFileBlockLocations returns the NetworkTopology
+ information of the machines where the blocks reside. (dhruba)
+
+ HADOOP-4435. The JobTracker WebUI displays the amount of heap memory
+ in use. (dhruba)
+
+ HADOOP-4628. Move Hive into a standalone subproject. (omalley)
+
+ HADOOP-4188. Removes task's dependency on concrete filesystems.
+ (Sharad Agarwal via ddas)
+
+ HADOOP-1650. Upgrade to Jetty 6. (cdouglas)
+
+ HADOOP-3986. Remove static Configuration from JobClient. (Amareshwari
+ Sriramadasu via cdouglas)
+ JobClient::setCommandLineConfig is removed
+ JobClient::getCommandLineConfig is removed
+ JobShell, TestJobShell classes are removed
+
+ HADOOP-4422. S3 file systems should not create bucket.
+ (David Phillips via tomwhite)
+
+ HADOOP-4035. Support memory based scheduling in capacity scheduler.
+ (Vinod Kumar Vavilapalli via yhemanth)
+
+ HADOOP-3497. Fix bug in overly restrictive file globbing with a
+ PathFilter. (tomwhite)
+
+ HADOOP-4445. Replace running task counts with running task
+ percentage in capacity scheduler UI. (Sreekanth Ramakrishnan via
+ yhemanth)
+
+ HADOOP-4631. Splits the configuration into three parts - one for core,
+ one for mapred and the last one for HDFS. (Sharad Agarwal via cdouglas)
+
+ HADOOP-3344. Fix libhdfs build to use autoconf and build the same
+ architecture (32 vs 64 bit) of the JVM running Ant. The libraries for
+ pipes, utils, and libhdfs are now all in c++/<os_osarch_jvmdatamodel>/lib.
+ (Giridharan Kesavan via nigel)
+
+ HADOOP-4874. Remove LZO codec because of licensing issues. (omalley)
+
+ HADOOP-4970. The full path name of a file is preserved inside Trash.
+ (Prasad Chakka via dhruba)
+
+ HADOOP-4103. NameNode keeps a count of missing blocks. It warns on
+ WebUI if there are such blocks. '-report' and '-metaSave' have extra
+ info to track such blocks. (Raghu Angadi)
+
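+ For reference, a minimal sketch of how the commands mentioned above
+ are invoked from a standard Hadoop layout (the -metaSave variant also
+ takes an output filename; exact output varies by version):
+
+   bin/hadoop dfsadmin -report
+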
+ HADOOP-4783. Change permissions on history files on the jobtracker
+ to be only group readable instead of world readable.
+ (Amareshwari Sriramadasu via yhemanth)
+
+ NEW FEATURES
+
+ HADOOP-4575. Add a proxy service for relaying HsftpFileSystem requests.
+ Includes client authentication via user certificates and config-based
+ access control. (Kan Zhang via cdouglas)
+
+ HADOOP-4661. Add DistCh, a new tool for distributed ch{mod,own,grp}.
+ (szetszwo)
+
+ HADOOP-4709. Add several new features and bug fixes to Chukwa.
+ Added Hadoop Infrastructure Care Center (UI for visualizing data
+ collected by Chukwa)
+ Added FileAdaptor for streaming small files in one chunk
+ Added compression to archive and demux output
+ Added unit tests and validation for agent, collector, and demux map
+ reduce job
+ Added database loader for loading demux output (sequence files) into a
+ JDBC-connected database
+ Added algorithm to distribute collector load more evenly
+ (Jerome Boulon, Eric Yang, Andy Konwinski, Ariel Rabkin via cdouglas)
+
+ HADOOP-4179. Add Vaidya tool to analyze map/reduce job logs for
+ performance problems. (Suhas Gogate via omalley)
+
+ HADOOP-4029. Add NameNode storage information to the dfshealth page and
+ move DataNode information to a separated page. (Boris Shkolnik via
+ szetszwo)
+
+ HADOOP-4348. Add service-level authorization for Hadoop. (acmurthy)
+
+ HADOOP-4826. Introduce admin command saveNamespace. (shv)
+
+ HADOOP-3063. BloomMapFile - a fail-fast version of MapFile for a
+ sparsely populated key space. (Andrzej Bialecki via stack)
+
+ HADOOP-1230. Add new map/reduce API and deprecate the old one. Generally,
+ the old code should work without problem. The new api is in
+ org.apache.hadoop.mapreduce and the old classes in org.apache.hadoop.mapred
+ are deprecated. Differences in the new API:
+ 1. All of the methods take Context objects that allow us to add new
+ methods without breaking compatibility.
+ 2. Mapper and Reducer now have a "run" method that is called once and
+ contains the control loop for the task, which lets applications
+ replace it.
+ 3. Mapper and Reducer by default are Identity Mapper and Reducer.
+ 4. The FileOutputFormats use part-r-00000 for the output of reduce 0 and
+ part-m-00000 for the output of map 0.
+ 5. The reduce grouping comparator now uses the raw compare instead of
+ object compare.
+ 6. The number of maps in FileInputFormat is controlled by min and max
+ split size rather than min size and the desired number of maps.
+ (omalley)
+
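+ A minimal sketch of a mapper written against the new API; the
+ word-count logic is purely illustrative, while Mapper and Context are
+ the org.apache.hadoop.mapreduce classes described above:
+
+   import java.io.IOException;
+   import org.apache.hadoop.io.IntWritable;
+   import org.apache.hadoop.io.LongWritable;
+   import org.apache.hadoop.io.Text;
+   import org.apache.hadoop.mapreduce.Mapper;
+
+   // New-style mapper: map() takes a Context, which replaces the old
+   // OutputCollector/Reporter pair; run() may also be overridden to
+   // replace the per-task control loop.
+   public class WordMapper
+       extends Mapper<LongWritable, Text, Text, IntWritable> {
+     private static final IntWritable ONE = new IntWritable(1);
+
+     @Override
+     protected void map(LongWritable key, Text value, Context context)
+         throws IOException, InterruptedException {
+       for (String word : value.toString().split("\\s+")) {
+         context.write(new Text(word), ONE);
+       }
+     }
+   }
+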
+ HADOOP-3305. Use Ivy to manage dependencies. (Giridharan Kesavan
+ and Steve Loughran via cutting)
+
+ IMPROVEMENTS
+
+ HADOOP-4749. Added a new counter REDUCE_INPUT_BYTES. (Yongqiang He via
+ zshao)
+
+ HADOOP-4234. Fix KFS "glue" layer to allow applications to interface
+ with multiple KFS metaservers. (Sriram Rao via lohit)
+
+ HADOOP-4245. Update to latest version of KFS "glue" library jar.
+ (Sriram Rao via lohit)
+
+ HADOOP-4244. Change test-patch.sh to check the Eclipse classpath no
+ matter whether it is run by Hudson or not. (szetszwo)
+
+ HADOOP-3180. Add name of missing class to WritableName.getClass
+ IOException. (Pete Wyckoff via omalley)
+
+ HADOOP-4178. Make the capacity scheduler's default values configurable.
+ (Sreekanth Ramakrishnan via omalley)
+
+ HADOOP-4262. Generate better error message when client exception has null
+ message. (stevel via omalley)
+
+ HADOOP-4226. Refactor and document LineReader to make it more readily
+ understandable. (Yuri Pradkin via cdouglas)
+
+ HADOOP-4238. When listing jobs, if scheduling information isn't available
+ print NA instead of empty output. (Sreekanth Ramakrishnan via johan)
+
+ HADOOP-4284. Support filters that apply to all requests, or global filters,
+ to HttpServer. (Kan Zhang via cdouglas)
+
+ HADOOP-4276. Improve the hashing functions and deserialization of the
+ mapred ID classes. (omalley)
+
+ HADOOP-4485. Add a compile-native ant task, as a shorthand. (enis)
+
+ HADOOP-4454. Allow # comments in slaves file. (Rama Ramasamy via omalley)
+
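+ For example, a slaves file may now contain comment lines (host names
+ hypothetical):
+
+   # rack 1
+   worker-01.example.com
+   worker-02.example.com
+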
+ HADOOP-3461. Remove hdfs.StringBytesWritable. (szetszwo)
+
+ HADOOP-4437. Use Halton sequence instead of java.util.Random in
+ PiEstimator. (szetszwo)
+
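+ For background, a Halton sequence is a deterministic low-discrepancy
+ sequence; a minimal sketch of one way to generate it (illustrative,
+ not the actual PiEstimator code):
+
+   // i-th Halton element in the given base: reverse the base-b digits
+   // of i around the radix point, e.g. halton(5, 2) = 0.101b = 0.625.
+   static double halton(int index, int base) {
+     double result = 0.0;
+     double f = 1.0 / base;
+     for (int i = index; i > 0; i /= base) {
+       result += f * (i % base);
+       f /= base;
+     }
+     return result;
+   }
+   // Point k of a 2-D quasi-random sample: (halton(k, 2), halton(k, 3)).
+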
+ HADOOP-4572. Change INode and its sub-classes to package private.
+ (szetszwo)
+
+ HADOOP-4187. Does a runtime lookup for JobConf/JobConfigurable, and if
+ found, invokes the appropriate configure method. (Sharad Agarwal via ddas)
+
+ HADOOP-4453. Improve ssl configuration and handling in HsftpFileSystem,
+ particularly when used with DistCp. (Kan Zhang via cdouglas)
+
+ HADOOP-4583. Several code optimizations in HDFS. (Suresh Srinivas via
+ szetszwo)
+
+ HADOOP-3923. Remove org.apache.hadoop.mapred.StatusHttpServer. (szetszwo)
+
+ HADOOP-4622. Explicitly specify interpreter for non-native
+ pipes binaries. (Fredrik Hedberg via johan)
+
+ HADOOP-4505. Add a unit test to test faulty setup task and cleanup
+ task killing the job. (Amareshwari Sriramadasu via johan)
+
+ HADOOP-4608. Don't print a stack trace when the example driver gets an
+ unknown program to run. (Edward Yoon via omalley)
+
+ HADOOP-4645. Package HdfsProxy contrib project without the extra level
+ of directories. (Kan Zhang via omalley)
+
+ HADOOP-4126. Allow access to HDFS web UI on EC2. (tomwhite via omalley)
+
+ HADOOP-4612. Removes RunJar's dependency on JobClient.
+ (Sharad Agarwal via ddas)
+
+ HADOOP-4185. Adds setVerifyChecksum() method to FileSystem.
+ (Sharad Agarwal via ddas)
+
+ HADOOP-4523. Prevent too many tasks scheduled on a node from bringing
+ it down by monitoring for cumulative memory usage across tasks.
+ (Vinod Kumar Vavilapalli via yhemanth)
+
+ HADOOP-4640. Adds an input format that can split lzo compressed
+ text files. (johan)
+
+ HADOOP-4666. Launch reduces only after a few maps have run in the
+ Fair Scheduler. (Matei Zaharia via johan)
+
+ HADOOP-4339. Remove redundant calls from FileSystem/FsShell when
+ generating/processing ContentSummary. (David Phillips via cdouglas)
+
+ HADOOP-2774. Add counters tracking records spilled to disk in MapTask and
+ ReduceTask. (Ravi Gummadi via cdouglas)
+
+ HADOOP-4513. Initialize jobs asynchronously in the capacity scheduler.
+ (Sreekanth Ramakrishnan via yhemanth)
+
+ HADOOP-4649. Improve abstraction for spill indices. (cdouglas)
+
+ HADOOP-3770. Add gridmix2, an iteration on the gridmix benchmark. (Runping
+ Qi via cdouglas)
+
+ HADOOP-4708. Add support for dfsadmin commands in TestCLI. (Boris Shkolnik
+ via cdouglas)
+