Software Configuration - User Guide - E-MapReduce - Alibaba Cloud

Purpose of software configuration

Software such as Hadoop, Hive, and Pig has a large number of configuration settings, and the software configuration feature lets you modify them. For example, the number of HDFS NameNode service threads, dfs.namenode.handler.count, defaults to 10, and you may want to raise it to 50; the HDFS block size, dfs.blocksize, defaults to 128 MB, and if your system stores mostly small files you may want to lower it to 64 MB.

Currently, this operation can be performed only once, when the cluster is started.
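For instance, the two changes mentioned above could be expressed with a JSON snippet like the following. This is a minimal sketch in the configuration format described below; it assumes dfs.blocksize is given in bytes, so 64 MB is written as 67108864.

    {
      "configurations": [
        {
          "classification": "hdfs-site",
          "properties": {
            "dfs.namenode.handler.count": "50",
            "dfs.blocksize": "67108864"
          }
        }
      ]
    }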

How to use it

  1. Log in to the cluster list in the Alibaba Cloud E-MapReduce console.

  2. Select the region at the top of the page; the cluster will be created in that region.

  3. Click Create Cluster to open the cluster creation page.

  4. In the Software Configuration step of cluster creation, you can see all of the included software and its versions. To modify the cluster's configuration, use the Software Configuration (optional) field to select a configuration file in JSON format, which overrides or adds to the cluster's default parameters. A sample JSON file is shown below.

    {
      "configurations": [
        {
          "classification": "core-site",
          "properties": {
            "fs.trash.interval": "61"
          }
        },
        {
          "classification": "hadoop-log4j",
          "properties": {
            "hadoop.log.file": "hadoop1.log",
            "hadoop.root.logger": "INFO",
            "a.b.c": "ABC"
          }
        },
        {
          "classification": "hdfs-site",
          "properties": {
            "dfs.namenode.handler.count": "12"
          }
        },
        {
          "classification": "mapred-site",
          "properties": {
            "mapreduce.task.io.sort.mb": "201"
          }
        },
        {
          "classification": "yarn-site",
          "properties": {
            "hadoop.security.groups.cache.secs": "251",
            "yarn.nodemanager.remote-app-log-dir": "/tmp/logs1"
          }
        },
        {
          "classification": "httpsfs-site",
          "properties": {
            "a.b.c.d": "200"
          }
        },
        {
          "classification": "capacity-scheduler",
          "properties": {
            "yarn.scheduler.capacity.maximum-am-resource-percent": "0.2"
          }
        },
        {
          "classification": "hadoop-env",
          "properties": {
            "BC": "CD"
          },
          "configurations": [
            {
              "classification": "export",
              "properties": {
                "AB": "${BC}",
                "HADOOP_CLIENT_OPTS": "\"-Xmx512m -Xms512m $HADOOP_CLIENT_OPTS\""
              }
            }
          ]
        },
        {
          "classification": "httpfs-env",
          "properties": {},
          "configurations": [
            {
              "classification": "export",
              "properties": {
                "HTTPFS_SSL_KEYSTORE_PASS": "passwd"
              }
            }
          ]
        },
        {
          "classification": "mapred-env",
          "properties": {},
          "configurations": [
            {
              "classification": "export",
              "properties": {
                "HADOOP_JOB_HISTORYSERVER_HEAPSIZE": "1001"
              }
            }
          ]
        },
        {
          "classification": "yarn-env",
          "properties": {},
          "configurations": [
            {
              "classification": "export",
              "properties": {
                "HADOOP_YARN_USER": "${HADOOP_YARN_USER:-yarn1}"
              }
            }
          ]
        },
        {
          "classification": "pig",
          "properties": {
            "pig.tez.auto.parallelism": "false"
          }
        },
        {
          "classification": "pig-log4j",
          "properties": {
            "log4j.logger.org.apache.pig": "error, A"
          }
        },
        {
          "classification": "hive-env",
          "properties": {
            "BC": "CD"
          },
          "configurations": [
            {
              "classification": "export",
              "properties": {
                "AB": "${BC}",
                "HADOOP_CLIENT_OPTS1": "\"-Xmx512m -Xms512m $HADOOP_CLIENT_OPTS1\""
              }
            }
          ]
        },
        {
          "classification": "hive-site",
          "properties": {
            "hive.tez.java.opts": "-Xmx3900m"
          }
        },
        {
          "classification": "hive-exec-log4j",
          "properties": {
            "log4j.logger.org.apache.zookeeper.ClientCnxnSocketNIO": "INFO,FA"
          }
        },
        {
          "classification": "hive-log4j",
          "properties": {
            "log4j.logger.org.apache.zookeeper.server.NIOServerCnxn": "INFO,DRFA"
          }
        }
      ]
    }

The classification parameter specifies which configuration file to modify, and the properties parameter holds the key-value pairs to set. If a key already exists in the default configuration file, only its value is overwritten; otherwise the key-value pair is added.
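For instance, the fs.trash.interval entry in the sample above targets a key that already exists in core-site.xml, so the generated file would end up containing a property roughly like the following. This is a sketch of the expected XML output, not captured from an actual cluster:

    <property>
      <name>fs.trash.interval</name>
      <value>61</value>
    </property>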

The mapping between configuration files and classification values is shown in the following tables:

Hadoop

Filename                    classification
core-site.xml               core-site
log4j.properties            hadoop-log4j
hdfs-site.xml               hdfs-site
mapred-site.xml             mapred-site
yarn-site.xml               yarn-site
httpsfs-site.xml            httpsfs-site
capacity-scheduler.xml      capacity-scheduler
hadoop-env.sh               hadoop-env
httpfs-env.sh               httpfs-env
mapred-env.sh               mapred-env
yarn-env.sh                 yarn-env

Pig

Filename                    classification
pig.properties              pig
log4j.properties            pig-log4j

Hive

Filename                    classification
hive-env.sh                 hive-env
hive-site.xml               hive-site
hive-exec-log4j.properties  hive-exec-log4j
hive-log4j.properties       hive-log4j

Flat XML files such as core-site have only one level, so all settings go directly into properties. Shell files such as hadoop-env may have a two-level structure, which you configure by nesting configurations blocks; see the hadoop-env part of the example above, which adds -Xmx512m -Xms512m to the exported HADOOP_CLIENT_OPTS variable, as sketched below.
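A hedged sketch of the export line that such a nested export block would produce in the generated hadoop-env.sh; the exact rendering is up to E-MapReduce, and this only illustrates the intent of the example:

    export HADOOP_CLIENT_OPTS="-Xmx512m -Xms512m $HADOOP_CLIENT_OPTS"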

After completing the settings, confirm them and click Next.

Last updated: 2016-11-23 16:03:59
