Learning scikit-learn: Neural Network Algorithms
This series of posts follows the algorithms documented on the official Scikit-Learn website, with partial translations; if there are errors, please point them out.
Please credit the source when reposting. Thanks.
======================================================================
This post uses scikit-learn 0.17, the stable release at the time of writing; version 0.18 has since been released and the two differ in places, so check the official site if interested.
In scikit-learn 0.17 (and earlier), support for neural network algorithms is limited to BernoulliRBM.
scikit-learn 0.18 provides three neural network classes: neural_network.BernoulliRBM, neural_network.MLPClassifier, and neural_network.MLPRegressor.
For details, see the scikit-learn documentation.
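As a quick taste of the 0.18 API (not part of the original post, which targets 0.17), here is a minimal MLPClassifier sketch on the XOR problem that Part V solves by hand; the hidden_layer_sizes, solver, and random_state values are illustrative choices, not prescribed anywhere in the original:

from sklearn.neural_network import MLPClassifier  # requires scikit-learn >= 0.18

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# a small multilayer perceptron; lbfgs converges reliably on tiny datasets
clf = MLPClassifier(hidden_layer_sizes=(4,), activation='tanh',
                    solver='lbfgs', random_state=0)
clf.fit(X, y)
print(clf.predict(X))   # should recover [0 1 1 0] on most seeds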
1: Introduction to neural networks
2: The Backpropagation algorithm in detail
3: Examples of nonlinear transfer functions
4: Implementing the NeuralNetwork algorithm myself
5: An XOR example based on NeuralNetwork
6: Handwritten-digit recognition based on NeuralNetwork
7: Using BernoulliRBM in scikit-learn
8: Handwritten-digit recognition in scikit-learn
I: Introduction to neural networks
1: Background
Inspired by the neural networks of the human brain; many variants have appeared over the years, the best known being backpropagation.
2: Multilayer feed-forward neural networks (Multilayer Feed-Forward Neural Network)
A multilayer feed-forward network is composed of:
an input layer, one or more hidden layers, and an output layer
3: Designing the network structure
(The original illustration is missing. In brief: the input layer has one unit per feature, the output layer one unit per class for multi-class problems, and the hidden-layer sizes are chosen experimentally, as in the [64, 100, 10] network of Part VI.)
4: Validating the algorithm: cross-validation (Cross-Validation)
Reading: split the available data into three folds. In the first round, hold out the first fold for testing and train on the remaining two, obtaining one accuracy; in the second round, hold out the second fold, obtaining a second accuracy; in the third round, the third fold, obtaining a third accuracy; finally, average the three accuracies. A minimal sketch follows.
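Here is a concrete 3-fold cross-validation sketch in plain NumPy (not from the original post; make_model is a hypothetical factory returning any classifier exposing fit/predict, and X, y are assumed to be NumPy arrays):

import numpy as np

def cross_validate_3fold(X, y, make_model):
    indices = np.random.permutation(len(X))   # shuffle before splitting
    folds = np.array_split(indices, 3)        # three roughly equal folds
    accuracies = []
    for k in range(3):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(3) if j != k])
        model = make_model()
        model.fit(X[train_idx], y[train_idx])
        acc = np.mean(model.predict(X[test_idx]) == y[test_idx])
        accuracies.append(acc)
    return np.mean(accuracies)                # average of the three accuracies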
II: The Backpropagation algorithm in detail
1: Process the instances in the training set iteratively
2: Feed the input values into the input layer
The inputs are combined with the weights to produce the first layer's outputs; these serve as inputs to the next layer, which again applies its weights to produce a result. The final result differs from the true value by some error, and that error is then propagated backwards to update the weight on each connection.
3: The algorithm in detail
Input: D, the training dataset; ℓ, the learning rate (learning rate); a multilayer feed-forward network.
(The step-by-step rules were shown as images in the original and are missing; a reconstruction follows.)
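Below is a reconstruction of the standard per-instance backpropagation rules for a sigmoid network, matching the code in Part IV; the notation ($I_j$, $O_j$, $Err_j$, $\theta_j$) follows the usual textbook convention and is not taken from the lost figures.

For a unit $j$ receiving outputs $O_i$ from the previous layer, with bias $\theta_j$:

$$I_j = \sum_i w_{ij} O_i + \theta_j, \qquad O_j = \frac{1}{1 + e^{-I_j}}$$

Error at an output unit, with target value $T_j$:

$$Err_j = O_j\,(1 - O_j)\,(T_j - O_j)$$

Error at a hidden unit, with $k$ ranging over the units of the next layer:

$$Err_j = O_j\,(1 - O_j)\sum_k Err_k\, w_{jk}$$

Weight and bias updates with learning rate $\ell$:

$$w_{ij} \leftarrow w_{ij} + \ell\, Err_j\, O_i, \qquad \theta_j \leftarrow \theta_j + \ell\, Err_j$$

Training stops when the weight updates become negligible, the error drops below a threshold, or a preset number of epochs is reached.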
4: Walking through the algorithm on an example
(The worked example was an image and is missing; in it, 0.9 is the value used for ℓ, the learning rate.)
III: Examples of nonlinear transfer functions
In Part II, the activation function f transforms each unit's weighted input into its output, which becomes the next layer's input; this f is the nonlinear transfer function. Sigmoid functions (S-shaped curves) are commonly used as f. "Sigmoid" names a family of functions: any S-curve satisfying certain properties can serve as the activation function.
Sigmoid functions: two common choices are listed below.
1: The hyperbolic functions (see a standard reference; tanh is used as the example here), together with the derivative of tanh.
2: The logistic function (Logistic function), together with its derivative.
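The formula images in the original post are missing; the functions and their derivatives can be reconstructed from the code in Part IV:

$$\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}, \qquad \tanh'(x) = 1 - \tanh^{2}(x)$$

$$f(x) = \frac{1}{1 + e^{-x}}, \qquad f'(x) = f(x)\bigl(1 - f(x)\bigr)$$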
IV: Implementing the NeuralNetwork algorithm myself
Create NeuralNetwork.py and add the following code:
#coding:utf-8
'''
Created on 2016/4/27
@author: Gamer Think
'''
import numpy as np

# the hyperbolic and logistic activation functions and their derivatives
def tanh(x):
    return np.tanh(x)

def tanh_deriv(x):
    return 1.0 - np.tanh(x)**2

def logistic(x):
    return 1/(1 + np.exp(-x))

def logistic_derivative(x):
    return logistic(x)*(1 - logistic(x))

# the NeuralNetwork class
class NeuralNetwork:
    # layers is a list, e.g. [10, 10, 3]: 10 units in the first layer,
    # 10 in the second, 3 in the third
    def __init__(self, layers, activation='tanh'):
        """
        :param layers: A list containing the number of units in each layer.
        Should be at least two values
        :param activation: The activation function to be used. Can be
        "logistic" or "tanh"
        """
        if activation == 'logistic':
            self.activation = logistic
            self.activation_deriv = logistic_derivative
        elif activation == 'tanh':
            self.activation = tanh
            self.activation_deriv = tanh_deriv

        self.weights = []
        # start the loop at 1, i.e. initialize the weights taking each
        # hidden layer as the reference point
        for i in range(1, len(layers) - 1):
            # weights connecting to the preceding layer (+1 for the bias unit)
            self.weights.append((2*np.random.random((layers[i - 1] + 1, layers[i] + 1))-1)*0.25)
            # weights connecting to the following layer
            self.weights.append((2*np.random.random((layers[i] + 1, layers[i + 1]))-1)*0.25)

    # training: X has one instance per row, y the corresponding targets,
    # learning_rate is the step size, and epochs is the number of random
    # single-instance updates applied to the network
    def fit(self, X, y, learning_rate=0.2, epochs=10000):
        X = np.atleast_2d(X)                # make sure X is at least 2-D
        temp = np.ones([X.shape[0], X.shape[1]+1])
        temp[:, 0:-1] = X                   # adding the bias unit to the input layer
        X = temp
        y = np.array(y)                     # convert list to array

        for k in range(epochs):
            # pick one random instance and update the network with it
            i = np.random.randint(X.shape[0])
            a = [X[i]]

            # forward pass through all layers
            for l in range(len(self.weights)):
                a.append(self.activation(np.dot(a[l], self.weights[l])))

            # error at the output layer
            error = y[i] - a[-1]
            deltas = [error * self.activation_deriv(a[-1])]

            # backward pass: propagate the error and update the weights
            for l in range(len(a) - 2, 0, -1):  # we need to begin at the second-to-last layer
                deltas.append(deltas[-1].dot(self.weights[l].T)*self.activation_deriv(a[l]))
            deltas.reverse()

            for i in range(len(self.weights)):
                layer = np.atleast_2d(a[i])
                delta = np.atleast_2d(deltas[i])
                self.weights[i] += learning_rate * layer.T.dot(delta)

    # prediction: a forward pass for a single instance
    def predict(self, x):
        x = np.array(x)
        temp = np.ones(x.shape[0]+1)
        temp[0:-1] = x                      # add the bias unit
        a = temp
        for l in range(0, len(self.weights)):
            a = self.activation(np.dot(a, self.weights[l]))
        return a
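As a quick sanity check (not in the original) on how the layers list maps to weight-matrix shapes:

nn = NeuralNetwork([2, 2, 1])
# one (input+bias) x (hidden+bias) matrix and one (hidden+bias) x output matrix
print([w.shape for w in nn.weights])   # [(3, 3), (3, 1)]; the +1s are bias units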
V: An XOR (exclusive-or) example based on NeuralNetwork
The code:
#coding:utf-8
'''
Created on 2016/4/27
@author: Gamer Think
'''
import numpy as np
from NeuralNetwork import NeuralNetwork

'''
[2, 2, 1]
first 2:  the data is 2-dimensional, so two input units
second 2: two units in the hidden layer
1:        one output unit
tanh:     use the tanh hyperbolic function as the activation
'''
nn = NeuralNetwork([2,2,1], 'tanh')
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])
nn.fit(X, y)
for i in [[0, 0], [0, 1], [1, 0], [1, 1]]:
    print(i, nn.predict(i))
([0, 0], array([ 0.02150876]))
([0, 1], array([ 0.99857695]))
([1, 0], array([ 0.99859837]))
([1, 1], array([ 0.04854689]))
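The raw outputs are continuous values; thresholding at 0.5 recovers the XOR truth table (a small illustrative step, not in the original):

for i in [[0, 0], [0, 1], [1, 0], [1, 1]]:
    print(i, (nn.predict(i) > 0.5).astype(int))   # 0, 1, 1, 0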
VI: Handwritten-digit recognition based on NeuralNetwork
The code:
import numpy as np
from sklearn.datasets import load_digits
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.preprocessing import LabelBinarizer
from sklearn.cross_validation import train_test_split  # sklearn.model_selection in 0.18+
from NeuralNetwork import NeuralNetwork

digits = load_digits()
X = digits.data
y = digits.target
X -= X.min()          # scale the pixel values into [0, 1]
X /= X.max()

# 64 input units (8x8 pixels), 100 hidden units, 10 output units (digits 0-9)
nn = NeuralNetwork([64, 100, 10], 'logistic')
X_train, X_test, y_train, y_test = train_test_split(X, y)
# one-hot encode the labels so each output unit predicts one digit
labels_train = LabelBinarizer().fit_transform(y_train)
labels_test = LabelBinarizer().fit_transform(y_test)
print("start fitting")
nn.fit(X_train, labels_train, epochs=3000)

predictions = []
for i in range(X_test.shape[0]):
    o = nn.predict(X_test[i])
    predictions.append(np.argmax(o))   # the most strongly activated output unit
print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))
VII: Using BernoulliRBM in scikit-learn
from sklearn.neural_network import BernoulliRBM

X = [[0, 0], [1, 1]]
y = [0, 1]
clf = BernoulliRBM().fit(X, y)
print(clf)
Output:
BernoulliRBM(batch_size=10, learning_rate=0.1, n_components=256, n_iter=10,
random_state=None, verbose=0)
Note that this model does not provide a predict method, unlike most of the algorithms covered so far.
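What BernoulliRBM does offer is transform, which maps each input to the activation probabilities of the hidden units, for use as learned features. A minimal sketch (n_components=2 is an illustrative choice):

from sklearn.neural_network import BernoulliRBM
import numpy as np

X = np.array([[0, 0], [1, 1]])
rbm = BernoulliRBM(n_components=2, random_state=0).fit(X)
print(rbm.transform(X))   # hidden-unit activation probabilities, shape (2, 2)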
VIII: Handwritten-digit recognition in scikit-learn (adapted from the RBM feature-extraction example in the scikit-learn documentation)
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import convolve
from sklearn import linear_model, datasets, metrics
from sklearn.cross_validation import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

###############################################################################
# Setting up

def nudge_dataset(X, Y):
    """
    This produces a dataset 5 times bigger than the original one,
    by moving the 8x8 images in X around by 1px to left, right, down, up
    """
    direction_vectors = [
        [[0, 1, 0],
         [0, 0, 0],
         [0, 0, 0]],

        [[0, 0, 0],
         [1, 0, 0],
         [0, 0, 0]],

        [[0, 0, 0],
         [0, 0, 1],
         [0, 0, 0]],

        [[0, 0, 0],
         [0, 0, 0],
         [0, 1, 0]]]

    shift = lambda x, w: convolve(x.reshape((8, 8)), mode='constant',
                                  weights=w).ravel()
    X = np.concatenate([X] +
                       [np.apply_along_axis(shift, 1, X, vector)
                        for vector in direction_vectors])
    Y = np.concatenate([Y for _ in range(5)], axis=0)
    return X, Y

# Load Data
digits = datasets.load_digits()
X = np.asarray(digits.data, 'float32')
X, Y = nudge_dataset(X, digits.target)
X = (X - np.min(X, 0)) / (np.max(X, 0) + 0.0001)  # 0-1 scaling

X_train, X_test, Y_train, Y_test = train_test_split(X, Y,
                                                    test_size=0.2,
                                                    random_state=0)

# Models we will use
logistic = linear_model.LogisticRegression()
rbm = BernoulliRBM(random_state=0, verbose=True)

classifier = Pipeline(steps=[('rbm', rbm), ('logistic', logistic)])

###############################################################################
# Training

# Hyper-parameters. These were set by cross-validation,
# using a GridSearchCV. Here we are not performing cross-validation to
# save time.
rbm.learning_rate = 0.06
rbm.n_iter = 20
# More components tend to give better prediction performance, but larger
# fitting time
rbm.n_components = 100
logistic.C = 6000.0

# Training RBM-Logistic Pipeline
classifier.fit(X_train, Y_train)

# Training Logistic regression
logistic_classifier = linear_model.LogisticRegression(C=100.0)
logistic_classifier.fit(X_train, Y_train)

###############################################################################
# Evaluation

print()
print("Logistic regression using RBM features:\n%s\n" % (
    metrics.classification_report(
        Y_test,
        classifier.predict(X_test))))

print("Logistic regression using raw pixel features:\n%s\n" % (
    metrics.classification_report(
        Y_test,
        logistic_classifier.predict(X_test))))

###############################################################################
# Plotting

plt.figure(figsize=(4.2, 4))
for i, comp in enumerate(rbm.components_):
    plt.subplot(10, 10, i + 1)
    plt.imshow(comp.reshape((8, 8)), cmap=plt.cm.gray_r,
               interpolation='nearest')
    plt.xticks(())
    plt.yticks(())
plt.suptitle('100 components extracted by RBM', fontsize=16)
plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.23)
plt.show()
Output: the script prints the two classification reports and displays a figure of the 100 components extracted by the RBM (the figure is not reproduced here).
A note from the author: I now understand the theory in the earlier sections well, but this scikit-learn handwritten-digit code still leaves me somewhat confused; if a passing expert understands it, please enlighten me. In short, the pipeline first trains the BernoulliRBM without labels to learn 100 latent features from the raw pixels, then trains a logistic regression on those learned features; the second, raw-pixel logistic regression serves only as a baseline for comparison.