Python implements the BackPropagation algorithm

• 2020-06-15 09:40:31
• OfStack

An important part of implementing weight and bias updates for neural networks is the BackPropagation (back propagation) algorithm. Specifically, back propagation propagates the error backward through the network to compute the partial derivatives of the objective function with respect to w (weight) and b (bias), so that the original w and b can be updated by subtracting those partial derivatives. backprop(x, y) is the method that implements back propagation. (Note: the code is not original to this article; the implementation is available on GitHub at https://github.com/LCAIZJ/neural-networks-and-deep-learning.)

```python
def backprop(self, x, y):
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    # Feedforward: starting from the input x, compute each layer's output
    activation = x
    activations = [x]  # stores the activations of every layer
    zs = []            # stores the weighted inputs z of every layer
    for b, w in zip(self.biases, self.weights):
        z = np.dot(w, activation) + b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)
    # Compute the error of the output layer
    delta = self.cost_derivative(activations[-1], y) * sigmoid_prime(zs[-1])
    nabla_b[-1] = delta
    nabla_w[-1] = np.dot(delta, activations[-2].transpose())
    # Propagate the error backward through the hidden layers
    for l in range(2, self.num_layers):
        z = zs[-l]
        sp = sigmoid_prime(z)
        delta = np.dot(self.weights[-l + 1].transpose(), delta) * sp
        nabla_b[-l] = delta
        nabla_w[-l] = np.dot(delta, activations[-l - 1].transpose())
    return (nabla_b, nabla_w)
```

The x and y passed in represent a single training example: the input and its target output.
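The gradients returned by backprop are then used in a gradient-descent update, subtracting each partial derivative (scaled by a learning rate) from the corresponding parameter, as the introduction describes. A minimal sketch of that step, where the function name `update_step` and the learning rate `eta` are illustrative assumptions rather than part of the original code:

```python
import numpy as np

def update_step(weights, biases, nabla_w, nabla_b, eta):
    """One gradient-descent step: subtract eta times each partial derivative."""
    new_w = [w - eta * nw for w, nw in zip(weights, nabla_w)]
    new_b = [b - eta * nb for b, nb in zip(biases, nabla_b)]
    return new_w, new_b

# Example with the parameter shapes of a tiny 2-3-1 network
weights = [np.ones((3, 2)), np.ones((1, 3))]
biases = [np.zeros((3, 1)), np.zeros((1, 1))]
nabla_w = [np.full((3, 2), 0.5), np.full((1, 3), 0.5)]
nabla_b = [np.full((3, 1), 0.5), np.full((1, 1), 0.5)]
new_w, new_b = update_step(weights, biases, nabla_w, nabla_b, eta=1.0)
```

In practice such an update is usually averaged over a mini-batch of examples rather than applied per example.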

```python
def cost_derivative(self, output_activation, y):
    return (output_activation - y)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1 - sigmoid(z))
```
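To see the routine run end to end, the functions above can be wrapped in a minimal network class. The class below is a sketch assembled for illustration (the name `TinyNetwork` and the random initialization are assumptions, not part of the original article); it checks that each returned gradient has the same shape as the parameter it updates:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1 - sigmoid(z))

class TinyNetwork:
    # Minimal container so backprop can run; sizes is e.g. [2, 3, 1]
    def __init__(self, sizes):
        self.num_layers = len(sizes)
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]

    def cost_derivative(self, output_activation, y):
        return output_activation - y

    def backprop(self, x, y):
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        activation = x
        activations = [x]
        zs = []
        for b, w in zip(self.biases, self.weights):
            z = np.dot(w, activation) + b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        delta = self.cost_derivative(activations[-1], y) * sigmoid_prime(zs[-1])
        nabla_b[-1] = delta
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        for l in range(2, self.num_layers):
            delta = (np.dot(self.weights[-l + 1].transpose(), delta)
                     * sigmoid_prime(zs[-l]))
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l - 1].transpose())
        return nabla_b, nabla_w

net = TinyNetwork([2, 3, 1])
x = np.array([[0.5], [0.1]])  # one input example, shape (2, 1)
y = np.array([[1.0]])         # its target output, shape (1, 1)
nabla_b, nabla_w = net.backprop(x, y)
# Each gradient matches the shape of the parameter it updates
assert all(nb.shape == b.shape for nb, b in zip(nabla_b, net.biases))
assert all(nw.shape == w.shape for nw, w in zip(nabla_w, net.weights))
```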
