

Hadoop 2.7.3 Cluster Setup: Problems Encountered and Their Solutions

  • System installation
    1. To install CentOS you need a bootable drive made with UltraISO: open the ISO image first, then choose "Write Disk Image" from the Bootable menu.
    2. Error: CentOS 7 hangs at "starting timeout" and, after a long trace, prints dev/root does not exist. This means the installer could not find the image file. At the error screen, cd into /dev to see the attached devices; there are a few dozen, and those starting with sd are storage. In my case they were sda, sda4, and sdb. Reboot, press e on the install menu entry to edit it, and change vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=CentOS\x207\x20x86_64 rd.live.check quiet to vmlinuz initrd=initrd.img inst.stage2=hd:/dev/sda quiet, then press Ctrl+X to install. If that does not work, try sdb or sda4; one of them will succeed.
  • SSH

    1. Passwordless SSH is clearly configured and working, so why does start-all.sh still prompt for the passwords of s2 and s3?

    A: SSH was configured for the hadoop user; run su hadoop to switch to the hadoop user, then run start-all.sh.

    2. SSH was not configured successfully; how do I redo the configuration from scratch?

    A: Switch to the hadoop user, go into the ~/.ssh directory, and delete every file in it (do this on all three nodes). When done, run ssh localhost to reinitialize SSH; a fuller sketch follows.
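
    A minimal re-setup sketch, assuming you run it as the hadoop user on master and that the other nodes are named s2 and s3 as elsewhere in this article:

        # regenerate the key pair (accept the defaults, empty passphrase)
        ssh-keygen -t rsa
        # recreate ~/.ssh and known_hosts on this node
        ssh localhost
        # push the public key to the other two nodes
        ssh-copy-id hadoop@s2
        ssh-copy-id hadoop@s3
        # verify that no password prompt appears
        ssh s2 hostname
        ssh s3 hostname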

  • Hadoop

1. start-all.sh fails with: ssh: Could not resolve hostname master: Temporary failure in name resolution
A: Make sure the hostnames of the three machines are entered correctly in every configuration file; check /etc/hosts, core-site.xml, hdfs-site.xml, and mapred-site.xml. A sketch of the hosts file follows.
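A minimal /etc/hosts sketch for all three nodes; 192.168.1.113 for s3 matches the DataNode logs below, while the other two addresses are hypothetical, so substitute your own:

        # /etc/hosts (identical on master, s2, and s3)
        192.168.1.111   master
        192.168.1.112   s2
        192.168.1.113   s3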
2. The DataNode does not come up; the log shows that the path /user/hadoop/dfs/data does not exist.
A: Make sure the directory given as the value of dfs.datanode.data.dir in hdfs-site.xml has actually been created; see the sketch below.
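A hedged sketch, assuming the data directory /usr/hadoop/dfs/data that appears in the clusterID error later in this article:

        <!-- hdfs-site.xml: this directory must already exist on every DataNode -->
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/usr/hadoop/dfs/data</value>
        </property>

Create it on each DataNode before starting:

        mkdir -p /usr/hadoop/dfs/data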
3. The DataNode does not come up; the node's log shows that it has no permission on the directory and cannot operate on it.
A: Run chown -R hadoop:hadoop /usr/hadoop on all three nodes to give ownership of that directory to the hadoop user of the hadoop group, as shown below.
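A quick sketch, run as root on each node, assuming the /usr/hadoop tree used above:

        chown -R hadoop:hadoop /usr/hadoop
        ls -ld /usr/hadoop/dfs/data    # should now list hadoop hadoop as owner and group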
4. Cannot create file/business1/2017-08-02/15/.logs.1501660021526.tmp. Name node is in safe mode.
A: The NameNode is currently in safe mode; simply leaving safe mode fixes it: hadoop dfsadmin -safemode leave (see the sketch below).
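A short sketch; in Hadoop 2.x, hdfs dfsadmin is the non-deprecated form of the same command:

        hdfs dfsadmin -safemode get      # prints whether safe mode is ON or OFF
        hdfs dfsadmin -safemode leave    # exit safe mode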
5. DataXceiver error processing WRITE_BLOCK operation
        2017-08-03 01:27:55,667 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: s3:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.1.113:47061 dst: /192.168.1.113:50010
        java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/192.168.1.113:50010 remote=/192.168.1.113:47061]. 60000 millis timeout left.
        at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:897)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:802)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
        at java.lang.Thread.run(Thread.java:748)

    Taken literally this reads as a file-operation lease timeout; in practice it means the file was deleted while the data stream operation was still in progress. I have run into it before; it is usually caused by multiple MapReduce tasks operating on the same file, with one task deleting it after it finishes.
Solution:
Raise xceiverCount further, to 8192, and restart the cluster for the change to take effect.
Edit hdfs-site.xml (this property name is for 2.x; in 1.x it should be dfs.datanode.max.xcievers):
<property>
        <name>dfs.datanode.max.transfer.threads</name> 
        <value>8192</value> 
</property>
Copy the file to every DataNode and restart the DataNodes; see the propagation sketch below.
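A hedged propagation sketch, assuming HADOOP_HOME is set and the DataNodes are s2 and s3 as elsewhere in this article:

        # push the updated config from master to each DataNode
        scp $HADOOP_HOME/etc/hadoop/hdfs-site.xml hadoop@s2:$HADOOP_HOME/etc/hadoop/
        scp $HADOOP_HOME/etc/hadoop/hdfs-site.xml hadoop@s3:$HADOOP_HOME/etc/hadoop/
        # then restart the DataNode on s2 and s3
        $HADOOP_HOME/sbin/hadoop-daemon.sh stop datanode
        $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode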

    6. java.io.IOException: Incompatible clusterIDs

        java.io.IOException: Incompatible clusterIDs in /usr/hadoop/dfs/data: namenode clusterID = CID-86e16085-c061-4806-aac1-6f125689d567; datanode clusterID = CID-888eeac4-405f-4e3e-a5c3-c5195da71455
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:775)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
        at java.lang.Thread.run(Thread.java:748)

        Solution:

        Copy the clusterID from the VERSION file under name/current into the VERSION file under data/current, overwriting the old clusterID, as sketched below.
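
        A minimal sketch, assuming the dfs directories from the error above (/usr/hadoop/dfs/name on the NameNode, /usr/hadoop/dfs/data on the DataNode):

            # on the NameNode: read the authoritative clusterID
            grep clusterID /usr/hadoop/dfs/name/current/VERSION
            # on each affected DataNode: replace the clusterID= line in its VERSION
            # file with the value printed above, then restart the DataNode
            vi /usr/hadoop/dfs/data/current/VERSION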
  • Flume

    1. ChannelFullException: Space for commit to queue couldn't be acquired. Sinks are likely not keeping up with sources, or the buffer size is too tight
    A:
    1) Adjust the channel configuration parameters:
    agent.channels.memoryChanne3.keep-alive = 60
    agent.channels.memoryChanne3.capacity = 1000000 (pick a value appropriate for your load)
    2) Increase the maximum Java heap size:
    vim bin/flume-ng
    JAVA_OPTS="-Xmx2048m"
    A fuller sketch of the channel section follows.
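
    For context, a minimal sketch of the memory channel section of the agent's properties file, keeping the channel name memoryChanne3 from above; the transactionCapacity line is an assumption not mentioned in the original:

        # memory channel sized to absorb bursts from the sources
        agent.channels = memoryChanne3
        agent.channels.memoryChanne3.type = memory
        agent.channels.memoryChanne3.keep-alive = 60
        agent.channels.memoryChanne3.capacity = 1000000
        agent.channels.memoryChanne3.transactionCapacity = 10000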

Last updated: 2017-08-13 22:39:20
