Engee documentation
Notebook

Artificial neuron

An artificial neuron is a node of an artificial neural network and a simplified model of a biological neuron. It computes a non-linear function of a linear combination of all its input signals and passes the result to a single output. Artificial neurons are combined into networks by connecting the outputs of some neurons to the inputs of others, and they are the basic elements of an idealized neurocomputer.

Artificial neurons and networks are used for various practical purposes, including forecasting, pattern recognition, and control tasks; they can solve complex problems through the interaction of many simple processing units. Artificial neurons can also operate in a biological environment, as biohybrid neurons consisting of artificial and living components, although the use of such models raises serious moral, social, and ethical questions. The general form of a neuron's computation can be written as follows:

y = ∑ᵢ wᵢ · xᵢ + b

Where:

y is the neuron's output signal,

wᵢ is the weighting factor for the i-th input,

xᵢ is the value of the i-th input,

b is the bias.

This formula describes a linear combination of the input values, each multiplied by its corresponding weight, plus the bias. In this example the neuron assigns an input to class 1 when the weighted sum exceeds the bias threshold and to class 0 otherwise, which is equivalent to checking the sign of the weighted sum minus the threshold.
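
To make this concrete, a single neuron with a threshold activation can be written in a few lines of Julia. This is only an illustrative sketch; the name neuron and the sample numbers below are not part of the model trained later:

In [ ]:
# A single neuron: weighted sum of the inputs plus bias, passed through a step activation
neuron(x, w, b) = (sum(w .* x) + b > 0) ? 1 : 0

# Example call with arbitrary values: returns 1 if the sum is positive, 0 otherwise
neuron([0.5, 1.2], [0.3, -0.4], 0.1)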

In this example, a simple neural network learning model with one neuron and two weighting coefficients is implemented. The input data is divided into two classes, and the weights are trained against the corresponding target values.

The learning process runs for 100,000 iterations, during which the weights are updated according to the prediction error: when the prediction is below the target, both weights are increased by a fixed step, and when it is above the target, they are decreased. After training is completed, the model is tested on new data, and the results are displayed on a graph.

In [ ]:
# Define the network parameters
bias = 10
WT1 = 0.1
WT2 = 0.1
step = 0.05

X1 = [0.323, 0.45, 0.33, 0.22]
X2 = [0.9, 0.76, 1.3123, 1.17]
X = vcat(X1, X2)

Y1 = [0.67, 0.445, 0.633, 0.312]
Y2 = [0.112, 0.22, 0.35, 0.42]
Y = vcat(Y1, Y2)

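# Target labels: the first four points are class 0, the last four are class 1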
rezult = [0, 0, 0, 0, 1, 1, 1, 1];
In [ ]:
# Training
for i in 1:100000
    for j in 1:8
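        # Predict the class: 1 if the weighted sum exceeds the bias threshold, otherwise 0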
        r = (WT1 * X[j] + WT2 * Y[j] > bias) ? 1 : 0

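        # Perceptron-style update: move both weights toward the target class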
        if r < rezult[j]
            WT1 += step
            WT2 += step
        elseif r > rezult[j]
            WT1 -= step
            WT2 -= step
        end
    end
end
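
Before testing on new data, it is worth checking that the learned weights reproduce the training labels. The following cell is an illustrative addition (it assumes the cells above have been run) and simply prints the predictions next to the targets:

In [ ]:
# Sanity check: apply the trained neuron to the training set
train_pred = [(WT1 * X[j] + WT2 * Y[j] > bias) ? 1 : 0 for j in 1:8]
println("Predictions: ", train_pred)
println("Targets:     ", rezult)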
In [ ]:
# Test
test1 = [0.431, 0.6954, 0.12, 0.71]
test2 = [0.444, 0.43, 0.23, 0.3213]
output = Vector{Int}(undef, 4)

for k in 1:4
    output[k] = (WT1 * test1[k] + WT2 * test2[k] > bias) ? 1 : 0
end

# Print the results and define the decision boundary for plotting
allweight = [WT1, WT2]
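# Decision boundary: WT1 * x + WT2 * y = bias, solved for y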
define_line(x, weights, bias) = (-weights[1] * x .+ bias) ./ weights[2]
println("Output: ", output)
Output: [0, 1, 0, 1]

As the test results show, two of the test points fall into the first class and the other two into the second. Let's build a graph to visualize the classification results; the decision boundary is the line where WT1 * x + WT2 * y = bias.

In [ ]:
using Plots

plot(X1, Y1, seriestype = :scatter, label = "Class 0", color = :blue)
plot!(X2, Y2, seriestype = :scatter, label = "Class 1", color = :red)
plot!(test1, test2, seriestype = :scatter, label = "Test", color = :green, marker=:circle)

x_vals = range(minimum(X), stop = maximum(X), length = 100)
y_vals = define_line(x_vals, allweight, bias)
plot!(x_vals, y_vals, label = "Decision Boundary", color = :black)
Out[0]: (scatter plot of the Class 0, Class 1, and Test points with the decision boundary line)

Conclusion

In this example, we have examined the structure of an artificial neuron at a basic level. The neuron is one of the building blocks of more complex neural networks, such as convolutional neural networks.