

Beat the Best Results in Minutes: A Transfer Learning Example with Deeplearning4j on Distributed CPUs


Deep learning took off through 2016 and 2017 and now dominates the AI conversation. Most of that work assumes GPUs, and GPUs are indeed the fastest way to train large networks from scratch, but a great deal of existing enterprise compute is CPU-based: Apache Hadoop clusters built from commodity hardware. Deeplearning4j is designed to run on exactly that kind of infrastructure.
This post walks through a concrete example that combines Apache Spark, Apache Hadoop, and Deeplearning4j to train an image classifier on commodity hardware, without any GPUs.
Deeplearning4j

Deeplearning4j (DL4J) was started in 2014 and is backed by the startup Skymind. It is written in Java, runs on the JVM, and integrates with Spark for distributed training. Its numerical engine, ND4J, provides interchangeable CPU and GPU backends, so the same Java or Scala code runs on either kind of hardware.
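As a rough illustration of that backend choice (the coordinates and version numbers below are assumptions for illustration, not taken from the original post), an sbt build pulling in DL4J, its Spark integration, and the CPU-only ND4J backend might look like this:

// illustrative sbt dependencies (version numbers are assumptions);
// swapping nd4j-native-platform for an nd4j-cuda-*-platform artifact
// would run the same code on GPUs instead of CPUs
libraryDependencies ++= Seq(
  "org.deeplearning4j" %  "deeplearning4j-core"  % "0.8.0",
  "org.deeplearning4j" %% "dl4j-spark"           % "0.8.0_spark_2",
  "org.nd4j"           %  "nd4j-native-platform" % "0.8.0"
)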
The Caltech-256 image classification task

In this example we use Apache Spark, Apache Hadoop, and deeplearning4j to classify images from the Caltech-256 dataset. Caltech-256 contains 257 categories (256 object classes plus a clutter class); each category has at least 80 images and the largest has roughly 800, for a total of 30,607 images. The best published results on this benchmark sit around 72% to 75% accuracy, and below we show DL4J reaching that level with transfer learning on a CPU-only cluster.


State-of-the-art image classifiers are normally trained on the ImageNet Large Scale Visual Recognition Challenge data, roughly 1.4 million labeled images, and training them from scratch takes serious GPU time. Caltech-256, at only about 30,000 images, is far too small to train such a network from scratch. Instead we transfer what a network has already learned on ImageNet to the Caltech-256 task.

The pretrained model we start from is VGG16, one of the top performers in the 2014 ImageNet challenge. VGG16 has roughly 140 million parameters and weighs in at about 500 MB on disk.

DL4J can import the pretrained VGG16 model and save it in its own native format. The Scala snippet below uses that API to fetch the model and write it to disk:

val modelImportHelper = new TrainedModelHelper(TrainedModels.VGG16)
val vgg16 = modelImportHelper.loadModel()
val savePath = "./dl4j-models/vgg16.zip"
val locationToSave = new File(savePath)
// save the model in DL4J native format, which is faster for future reads
ModelSerializer.writeModel(vgg16, locationToSave, saveUpdater = true)

The saved model can then be loaded back in DL4J and its structure printed:

val modelFile = new File("./dl4j-models/vgg16.zip")
val vgg16 = ModelSerializer.restoreComputationGraph(modelFile)
println(vgg16.summary())


==================================================================================================

VertexName (VertexType)                 nIn,nOut       TotalParams    ParamsShape                    Vertex Inputs

==================================================================================================

input_2 (InputVertex)                   -,-            -              -                              -

block1_conv1 (ConvolutionLayer)         3,64           1792           b:{1,64}, W:{64,3,3,3}         [input_2]

block1_conv2 (ConvolutionLayer)         64,64          36928          b:{1,64}, W:{64,64,3,3}        [block1_conv1]

block1_pool (SubsamplingLayer)          -,-            0              -                              [block1_conv2]

block2_conv1 (ConvolutionLayer)         64,128         73856          b:{1,128}, W:{128,64,3,3}      [block1_pool]

block2_conv2 (ConvolutionLayer)         128,128        147584         b:{1,128}, W:{128,128,3,3}     [block2_conv1]

block2_pool (SubsamplingLayer)          -,-            0              -                              [block2_conv2]

block3_conv1 (ConvolutionLayer)         128,256        295168         b:{1,256}, W:{256,128,3,3}     [block2_pool]

block3_conv2 (ConvolutionLayer)         256,256        590080         b:{1,256}, W:{256,256,3,3}     [block3_conv1]

block3_conv3 (ConvolutionLayer)         256,256        590080         b:{1,256}, W:{256,256,3,3}     [block3_conv2]

block3_pool (SubsamplingLayer)          -,-            0              -                              [block3_conv3]

block4_conv1 (ConvolutionLayer)         256,512        1180160        b:{1,512}, W:{512,256,3,3}     [block3_pool]

block4_conv2 (ConvolutionLayer)         512,512        2359808        b:{1,512}, W:{512,512,3,3}     [block4_conv1]

block4_conv3 (ConvolutionLayer)         512,512        2359808        b:{1,512}, W:{512,512,3,3}     [block4_conv2]

block4_pool (SubsamplingLayer)          -,-            0              -                              [block4_conv3]

block5_conv1 (ConvolutionLayer)         512,512        2359808        b:{1,512}, W:{512,512,3,3}     [block4_pool]

block5_conv2 (ConvolutionLayer)         512,512        2359808        b:{1,512}, W:{512,512,3,3}     [block5_conv1]

block5_conv3 (ConvolutionLayer)         512,512        2359808        b:{1,512}, W:{512,512,3,3}     [block5_conv2]

block5_pool (SubsamplingLayer)          -,-            0              -                              [block5_conv3]

flatten (PreprocessorVertex)            -,-            -              -                              [block5_pool]

fc1 (DenseLayer)                        25088,4096     102764544      b:{1,4096}, W:{25088,4096}     [flatten]

fc2 (DenseLayer)                        4096,4096      16781312       b:{1,4096}, W:{4096,4096}      [fc1]

predictions (DenseLayer)                4096,1000      4097000        b:{1,1000}, W:{4096,1000}      [fc2]

--------------------------------------------------------------------------------------------------------------------------------------------

           Total Parameters:  138357544

       Trainable Parameters:  138357544

          Frozen Parameters:  0

==================================================================================================


The summary shows the familiar VGG16 architecture: five blocks of ConvolutionLayer plus pooling (SubsamplingLayer) stages, followed by three fully connected layers.

[Figure: the VGG16 architecture — 13 convolutional layers in five blocks, followed by three fully connected layers]

Our approach is to use the pretrained VGG16 network as a fixed feature extractor and to train only a small new classifier on top:

1. Load the pretrained VGG16 model and keep everything up to the fc2 layer, frozen, as a feature extractor.
2. Pass every Caltech-256 image (training and validation sets) through the frozen network once and save the resulting features to HDFS.
3. Train a new, lightweight output layer on the featurized data — the only part that is actually trained, which is why commodity CPUs are enough.
First, we use Deeplearning4j's transfer learning API to split VGG16 into the layers we will freeze and the layers we will discard:

val modelFile = new File("./dl4j-models/vgg16.zip")
val vgg16 = ModelSerializer.restoreComputationGraph(modelFile)
val (frozenLayers: Array[Layer], unfrozenLayers: Array[Layer]) = {
 vgg16.getLayers.splitAt(vgg16.getLayers.map(_.conf().getLayer.getLayerName).indexOf("fc2") + 1)
}

The code above walks the org.deeplearning4j.nn layer list, finds the layer named fc2, and splits the array just after it: everything up to and including fc2 becomes the frozen feature extractor, while the remaining layers (including the original 1,000-way predictions layer) will be removed. Next we build a graph containing only that frozen part:



val builder = new TransferLearning.GraphBuilder(vgg16)
 .setFeatureExtractor(frozenLayers.last.conf().getLayer.getLayerName)
// remove all the unfrozen layers, leaving just the un-trainable part of the model
unfrozenLayers.foreach { layer =>
 builder.removeVertexAndConnections(layer.conf().getLayer.getLayerName)
}
builder.setOutputs(frozenLayers.last.conf().getLayer.getLayerName)
val frozenGraph = builder.build()

The Caltech-256 images live as JPEGs on HDFS and are read with sc.binaryFiles. DataVec, DL4J's ETL library, converts them into the INDArrays DL4J works with (the original post links to its complete code for this step). Each image is then pushed through the frozen VGG16 graph, and the featurized output is written back to HDFS.
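That ETL code is not reproduced in this post, but as a rough sketch of what it might look like — the HDFS path, the directory-name label parsing, and the 224x224 resize below are assumptions, not the article's actual implementation — the data RDD handed to Utils.getPredictions could be assembled along these lines:

// sketch only: read JPEGs from HDFS and turn each one into a DataSet of
// (preprocessed image, one-hot label) for the frozen VGG16 graph
import org.datavec.image.loader.NativeImageLoader
import org.nd4j.linalg.dataset.DataSet
import org.nd4j.linalg.dataset.api.preprocessor.VGG16ImagePreProcessor
import org.nd4j.linalg.factory.Nd4j

val data = sc.binaryFiles("hdfs:///user/leon/caltech256/train/*/*.jpg")
  .mapPartitions { files =>
    val loader = new NativeImageLoader(224, 224, 3)  // VGG16 expects 224x224 RGB input
    val scaler = new VGG16ImagePreProcessor()        // subtracts the ImageNet channel means
    files.map { case (path, stream) =>
      val in = stream.open()
      val image = try loader.asMatrix(in) finally in.close()
      scaler.transform(image)
      // Caltech-256 directories are named like "001.ak47"; use the numeric
      // prefix as the class index (assumed layout, adjust to your paths)
      val classIdx = path.split("/").init.last.takeWhile(_.isDigit).toInt - 1
      val label = Nd4j.zeros(1, 257)
      label.putScalar(classIdx, 1.0)
      new DataSet(image, label)
    }
  }

With data in hand, the frozen graph featurizes every image and the results are saved to HDFS as Parquet: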

val finalOutput = Utils.getPredictions(data, frozenGraph, sc)
val df = finalOutput.map { ds =>
 (Nd4j.toByteArray(ds.getFeatureMatrix), Nd4j.toByteArray(ds.getLabels))
}.toDF()

df.write.parquet("hdfs:///user/leon/featurizedPredictions/train")

After this step, HDFS holds a 4,096-dimensional feature vector — the activations of VGG16's fc2 layer — plus a label for each of the 30,607 images. These records are tiny compared to the original images, which is what makes the remaining training cheap.
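The trainRDD and validRDD consumed by the training loop later in the post can be rebuilt from these Parquet files. A minimal sketch, assuming a SparkSession named spark and the two-column byte-array schema produced by the write step above:

// sketch only: turn the featurized parquet records back into an RDD of DataSets
// (convert with .toJavaRDD() if the downstream helpers expect a JavaRDD)
import org.nd4j.linalg.dataset.DataSet
import org.nd4j.linalg.factory.Nd4j

val trainRDD = spark.read.parquet("hdfs:///user/leon/featurizedPredictions/train")
  .rdd
  .map { row =>
    val features = Nd4j.fromByteArray(row.getAs[Array[Byte]](0)) // 1x4096 fc2 activations
    val labels   = Nd4j.fromByteArray(row.getAs[Array[Byte]](1)) // one-hot Caltech-256 label
    new DataSet(features, labels)
  }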

Training a new classifier on the featurized data
VGG16 was trained on ImageNet, so its final predictions layer emits probabilities for ImageNet's 1,000 classes. Caltech-256 has 257 categories, so everything after fc2 is replaced by a single new output layer that maps the 4,096-dimensional fc2 features to 257 class probabilities. Only this layer will be trained:

val conf = new NeuralNetConfiguration.Builder()
 .seed(42)
 .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
 .iterations(1)
 .activation(Activation.SOFTMAX)
 .weightInit(WeightInit.XAVIER)
 .learningRate(0.01)
 .updater(Updater.NESTEROVS)
 .momentum(0.8)
 .graphBuilder()
 .addInputs("in")
 .addLayer("layer0",
   new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD)
     .activation(Activation.SOFTMAX)
     .nIn(4096)   // 4,096-dimensional fc2 features from VGG16
     .nOut(257)   // 256 Caltech-256 categories plus the clutter class
     .build(),
   "in")
 .setOutputs("layer0")
 .backprop(true)
 .build()
val graph = new ComputationGraph(conf)
graph.init()


To train that layer across the cluster, DL4J plugs into Spark: the ComputationGraph is wrapped in a SparkComputationGraph, the featurized data is supplied as a Spark RDD of DataSets, and a ParameterAveragingTrainingMaster decides how often the workers' parameters are averaged:

val tm = new ParameterAveragingTrainingMaster.Builder(1)
 .averagingFrequency(5)
 .workerPrefetchNumBatches(2)
 .batchSizePerWorker(32)
 .rddTrainingApproach(RDDTrainingApproach.Export)
 .build()
val model = new SparkComputationGraph(sc, graph, tm)

With the SparkComputationGraph in place, training runs one epoch at a time, logging the score every iteration and the full train/validation metrics every five epochs:

model.setListeners(new ScoreIterationListener(1))
(1 to param.numEpochs).foreach { i =>
 logger4j.info(s"epoch $i starting")
 model.fit(trainRDD)

 // print model accuracy and score on the full train and validation sets every 5 epochs
 if (i % 5 == 0) {
   logger4j.info(s"Train score: ${model.calculateScore(trainRDD, true)}")
   logger4j.info(s"Train stats:\n${Utils.evaluate(model.getNetwork, trainRDD, 16)}")
   if (validRDD.isDefined) {
     logger4j.info(s"Validation stats:\n${Utils.evaluate(model.getNetwork, validRDD.get, 16)}")
     logger4j.info(s"Validation score: ${model.calculateScore(validRDD.get, true)}")
   }
 }
}

While the job runs, progress can be followed through the Spark UI and DL4J's score listener / training web UI, which report the model's score for every minibatch.

[Figure: model score per minibatch during training]

Even though this classifier is far simpler than the ImageNet-scale network its features come from, VGG16's ImageNet features transfer well to Caltech-256. After a few epochs the training log reads:

17/05/12 16:06:12 INFO caltech256.TrainFeaturized$: Train score: 0.6663876733861492
17/05/12 16:06:39 INFO caltech256.TrainFeaturized$: Train stats:
Accuracy: 0.8877570632327504
Precision: 0.8937314411403346
Recall: 0.876864905154427

17/05/12 16:07:17 INFO caltech256.TrainFeaturized$: Validation stats:
Accuracy: 0.7625918867410836
Precision: 0.7703367671469078
Recall: 0.7383574179140013

17/05/12 16:07:26 INFO caltech256.TrainFeaturized$: Validation score: 1.08481537405921

The model reaches 88.8% accuracy on the training set and 76.3% on the validation set. Evaluated on the held-out test set it scores:

Accuracy: 0.7530218882718066
Precision: 0.7613121478786196
Recall: 0.7286152891276695
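These test numbers can be produced the same way the loop above reports train and validation metrics, by pointing the article's evaluation helper at a held-out set (testRDD below is an assumption, loaded the same way as trainRDD):

// sketch only: reuse the same evaluation helper on a featurized, held-out test RDD
logger4j.info(s"Test stats:\n${Utils.evaluate(model.getNetwork, testRDD, 16)}")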

That is roughly 75.3% test accuracy, at the top of the 72-75% range of the best published Caltech-256 results, produced on a Hadoop cluster using nothing but CPUs.

This example shows how deeplearning4j slots into infrastructure many organizations already run: it is a JVM (Java) library, so it deploys onto Hadoop and Spark without new machinery, and because ND4J offers interchangeable CPU and GPU backends, the same Deeplearning4j code can move to GPUs later if they become available. With transfer learning doing most of the work, the training that remains is light enough for commodity CPUs.

Nisha Muktewar is a data scientist at Cloudera.

Seth Hendrickson is a former electrical engineer, now a data scientist and software engineer focused on distributed machine learning.


Original article: "Deep learning on Apache Spark and Apache Hadoop with Deeplearning4j", Cloudera Engineering Blog, by Seth Hendrickson.

