A simple little neural network program

This is a small neural network program,
adapted from a YouTuber's code I found online.
I originally wanted to study scikit-learn or Keras, which we covered in class,
but after watching the video it seemed so simple that not writing one myself would have felt like letting myself down.

The video is here:
Below is my adapted code.
I'll fill in an explanation of the code whenever I have time (though I probably never will, haha).
#NN_test.py
#modified from Siraj Raval's work

import numpy as np

#sigmoid
def nonlin(x, deriv=False):
    if deriv:
        return x*(1-x)           # used when computing the deltas in backprop
    return 1/(1+np.exp(-x))      # squashes values into the range (0, 1)
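# Note: the sigmoid derivative is s'(z) = s(z)*(1-s(z)). The training loop
# below passes nonlin(..., deriv=True) the layer *outputs* L1 and L2, which
# are already sigmoid values, so the derivative reduces to x*(1-x) with no
# need to re-evaluate the sigmoid.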

#training data: 5 examples in total (5 rows)
#input data
X = np.array([[0,   0,    1],     # 3 nodes in the input layer (3 cols)
             [0.24, 0.1,  0.62],  
             [0.52, 1,    0.3],
             [0.14, 0.55, 0.37],
             [1,    0.43, 0.72]])

#output data
Y = np.array([[0],        # 1 node in the output layer (1 col)
             [0.72],
             [1],
             [0.36],
             [0.01]])

#initialize the synapses (weight matrices)
np.random.seed(1)                     # fix the seed so the initial weights are reproducible
syn0 = 2*np.random.random((3,8)) - 1  # 3x8 weight matrix in [-1, 1) with mean 0; produces the 5x8 hidden layer L1
syn1 = 2*np.random.random((8,1)) - 1  # 8x1 weight matrix in [-1, 1) with mean 0; produces the 5x1 output layer L2
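# network architecture: 3 input nodes -> 8 hidden nodes -> 1 output node,
# fully connected, with no bias terms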

#training
for j in range(80000):      

    # Calculate forward through the network.
    L0 = X
    L1 = nonlin(np.dot(L0, syn0))  # only weights, no bias
    L2 = nonlin(np.dot(L1, syn1))

    # Back propagation of errors using the chain rule. 
    L2_error = Y - L2
    if j % 10000 == 0:
        print("Error: " + str(np.mean(np.abs(L2_error))))
    L2_delta = L2_error*nonlin(L2, deriv=True)
    L1_error = L2_delta.dot(syn1.T)
    L1_delta = L1_error*nonlin(L1, deriv=True)
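    # Chain rule in action: L2_delta is the output error scaled by the
    # sigmoid slope at L2; L1_error propagates that delta back through syn1
    # to apportion blame to each hidden node, and L1_delta scales it by the
    # sigmoid slope at L1.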

    #update weights (no separate learning rate term, i.e. learning rate = 1)
    syn1 += L1.T.dot(L2_delta)
    syn0 += L0.T.dot(L1_delta)

#print result
print("Output Y is:")
print(Y)
print("Output result after training is:")
print(L2)
Result: (output not shown)

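As a small extra (not in the original video), here is a minimal sketch of how the trained weights could be used on a new input; the name x_new and its values are made up for illustration:

#predict on an unseen example using the trained synapses
x_new = np.array([[0.3, 0.8, 0.5]])    # hypothetical input row (made-up values)
h = nonlin(np.dot(x_new, syn0))        # 1x8 hidden layer activations
y_pred = nonlin(np.dot(h, syn1))       # 1x1 predicted output
print("Prediction for x_new:", y_pred)
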
Alright, that wraps up today's notes.
I hope they help my future self after I've forgotten all of this, as well as anyone else who needs them.
