Technology sharing

Artificial Intelligence Algorithms (Intermediate) Course 8 - PyTorch Neural Networks: Neural Network Basics and Detailed Code Explanation

2024-07-12


Hello everyone, I am Wei Xue AI. A neural network is a computational model that imitates the connections between neurons in the human brain and is widely used in image recognition, natural language processing, and other fields. This article introduces the concept, structure, samples, training, and evaluation of neural networks, together with complete runnable code.

Detailed explanation of neural network basics and code

1. The concept of a neural network

A neural network consists of multiple nodes (or neurons) and the edges connecting those nodes. Each node represents a neuron, and each edge represents a connection between neurons. The main function of a neural network is to extract useful information from data through weighted summation and nonlinear transformation of the input.

2. The structure of a neural network

Neural networks are usually divided into an input layer, hidden layers, and an output layer. The input layer receives external data, the hidden layers process the data, and the output layer produces the final result. Nodes in each layer are connected to nodes in the next layer, and each connection has a weight.

2.1 Neural network calculation formula

The output of a neural network can be computed with the following formulas:
$a^{(l)} = f(z^{(l)})$
$z^{(l)} = w^{(l)} a^{(l-1)} + b^{(l)}$
where $a^{(l)}$ denotes the output of layer $l$, $z^{(l)}$ denotes the weighted sum of layer $l$, $w^{(l)}$ and $b^{(l)}$ denote the weights and biases of layer $l$ respectively, and $f(\cdot)$ denotes the activation function.
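To make the formulas concrete, here is a minimal single-neuron sketch in Python; the numbers are hand-picked for illustration and are not taken from the article:

import numpy as np

# One neuron with two inputs: z = w·a + b, a_out = f(z)
a_prev = np.array([1.0, 2.0])   # output of the previous layer
w = np.array([0.1, 0.3])        # weights of this layer
b = 0.5                         # bias of this layer

z = np.dot(w, a_prev) + b       # weighted sum: 0.1*1 + 0.3*2 + 0.5 = 1.2
a_out = 1 / (1 + np.exp(-z))    # sigmoid activation, f(1.2) ≈ 0.769
print(z, a_out)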

2.2 Runnable code

Here is an example of a simple neural network structure:

import numpy as np
def sigmoid(x):
    # Sigmoid activation function
    return 1 / (1 + np.exp(-x))

def feedforward(X, weights, biases):
    # Forward pass: weighted sum followed by sigmoid activation, layer by layer
    a = X
    for w, b in zip(weights, biases):
        z = np.dot(a, w) + b
        a = sigmoid(z)
    return a

# Define the input data (2 samples, 2 features each)
X = np.array([[1, 2], [3, 4]])
# Define the weights and biases (2 inputs -> 2 hidden units -> 1 output)
weights = [np.array([[0.1, 0.2], [0.3, 0.4]]), np.array([[0.5], [0.6]])]
biases = [np.array([0.1, 0.2]), np.array([0.3])]
# Compute the output
output = feedforward(X, weights, biases)
print(output)

3. Neural network samples

A neural network's training samples consist of input data and the corresponding labels. During training, the neural network continuously adjusts its weights and biases so that its output gets as close to the labels as possible.

3.1 Loss function

The goal of training a neural network is to minimize a loss function. Common loss functions include mean squared error (MSE) and cross-entropy loss. The formula for mean squared error is:
$J(w, b) = \frac{1}{2m}\sum_{i=1}^{m}\left(y^{(i)} - a^{(i)}\right)^2$
where $m$ is the number of samples, and $y^{(i)}$ and $a^{(i)}$ denote the label and the model's prediction for the $i$-th sample, respectively.

3.2 Runnable code

Here is a simple example of defining labels and computing the loss:

import numpy as np
def mse_loss(y_true, y_pred):
    # Mean squared error between labels and predictions
    return np.mean((y_true - y_pred) ** 2)

# Define the labels
y_true = np.array([[1], [0]])
# Compute the loss for the output obtained in section 2.2
loss = mse_loss(y_true, output)
print(loss)
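Cross-entropy loss, which is mentioned above alongside MSE, can be sketched in the same style. This minimal version assumes binary labels and clips the predictions so that the logarithm never receives exactly 0 or 1:

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Clip predictions to avoid log(0)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

ce_loss = binary_cross_entropy(y_true, output)
print(ce_loss)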

4. Training neural networks

Training a neural network involves forward propagation and backpropagation. Forward propagation computes the network's output, while backpropagation computes the gradient of the loss function with respect to the weights and biases and then updates the weights and biases.

4.1 Backpropagation

The backpropagation algorithm computes gradients using the chain rule. For the weights $w^{(l)}$ of layer $l$, the gradient can be expressed as:
$\frac{\partial J}{\partial w^{(l)}} = a^{(l-1)} \cdot \left(f'(z^{(l)}) \cdot \delta^{(l)}\right)$
where $\delta^{(l)}$ denotes the error of layer $l$ and $f'(\cdot)$ denotes the derivative of the activation function.

4.2 Code for neural network training

Here is a simple example of training the neural network:

import numpy as np

def sigmoid_derivative(x):
    # Derivative of the sigmoid activation function
    return sigmoid(x) * (1 - sigmoid(x))

def backpropagation(X, y_true, weights, biases):
    gradients_w = [np.zeros(w.shape) for w in weights]
    gradients_b = [np.zeros(b.shape) for b in biases]
    # Forward pass, keeping intermediate values for the backward pass
    a = X
    activations = [a]
    zs = []
    for w, b in zip(weights, biases):
        z = np.dot(a, w) + b
        zs.append(z)
        a = sigmoid(z)
        activations.append(a)
    # Output-layer error for the MSE loss with a sigmoid activation
    delta = (activations[-1] - y_true) * sigmoid_derivative(zs[-1])
    gradients_b[-1] = np.sum(delta, axis=0)
    gradients_w[-1] = np.dot(activations[-2].T, delta)
    # Backward pass through the remaining layers
    for l in range(2, len(weights) + 1):
        z = zs[-l]
        sp = sigmoid_derivative(z)
        delta = np.dot(delta, weights[-l + 1].T) * sp
        gradients_b[-l] = np.sum(delta, axis=0)
        gradients_w[-l] = np.dot(activations[-l - 1].T, delta)
    return gradients_w, gradients_b

# Define the learning rate
learning_rate = 0.1
# Perform one gradient-descent step
gradients_w, gradients_b = backpropagation(X, y_true, weights, biases)
# Update the weights and biases in place
for w, grad_w in zip(weights, gradients_w):
    w -= learning_rate * grad_w
for b, grad_b in zip(biases, gradients_b):
    b -= learning_rate * grad_b
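The snippet above performs a single gradient-descent step. Repeating the forward pass, backpropagation, and update inside a loop gives a complete, if minimal, training procedure. The sketch below reuses the functions already defined; the number of epochs and the logging interval are illustrative choices, not values from the article:

# Minimal training loop: repeat forward pass, backpropagation, and update
epochs = 1000  # illustrative value
for epoch in range(epochs):
    gradients_w, gradients_b = backpropagation(X, y_true, weights, biases)
    for w, grad_w in zip(weights, gradients_w):
        w -= learning_rate * grad_w
    for b, grad_b in zip(biases, gradients_b):
        b -= learning_rate * grad_b
    if epoch % 200 == 0:
        loss = mse_loss(y_true, feedforward(X, weights, biases))
        print(f"epoch {epoch}, loss {loss:.4f}")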

5. Evaluating neural networks

Neural networks are usually evaluated by the accuracy or the loss value on a test set. Accuracy refers to the ratio of the number of samples the model predicts correctly to the total number of samples.

5.1 Accuracy

The accuracy is computed as:
$\text{Accuracy} = \frac{\text{Number of correct predictions}}{\text{Total number of predictions}}$

5.2 Code implementation

Here is a simple example of evaluating the neural network:

def predict(X, weights, biases):
    # Forward pass followed by rounding to obtain 0/1 class predictions
    a = X
    for w, b in zip(weights, biases):
        z = np.dot(a, w) + b
        a = sigmoid(z)
    return np.round(a)

# Define the test data
X_test = np.array([[2, 3], [4, 5]])
# Make predictions
predictions = predict(X_test, weights, biases)
print(predictions)
# Compute the accuracy
y_test = np.array([[1], [0]])
accuracy = np.mean(predictions == y_test)
print("Accuracy:", accuracy)

Summary

In this article we implemented a simple neural network, covering forward propagation, backpropagation, gradient descent, and the evaluation process. Although this network is simple, it demonstrates the fundamental principles and implementation approach of neural networks. In practical applications, neural network structures are more complex, involving more layers and nodes as well as optimization algorithms and regularization techniques. In addition, modern deep learning frameworks (such as TensorFlow and PyTorch) provide more efficient implementations and automatic differentiation, making it much more convenient to build and train neural networks.
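Since the course title refers to PyTorch, here is a rough sketch of how the same two-layer network could be expressed with PyTorch. The layer sizes mirror the NumPy example above, while the learning rate and number of epochs are illustrative choices rather than values from the article:

import torch
import torch.nn as nn

# A two-layer network matching the NumPy example: 2 inputs -> 2 hidden units -> 1 output
model = nn.Sequential(
    nn.Linear(2, 2),
    nn.Sigmoid(),
    nn.Linear(2, 1),
    nn.Sigmoid(),
)

criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

X_t = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
y_t = torch.tensor([[1.0], [0.0]])

for epoch in range(1000):
    optimizer.zero_grad()          # clear gradients from the previous step
    y_pred = model(X_t)            # forward pass
    loss = criterion(y_pred, y_t)  # MSE loss
    loss.backward()                # backpropagation via autograd
    optimizer.step()               # gradient-descent update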