
Artificial neuron

An artificial neuron is a node of an artificial neural network that serves as a simplified model of a natural neuron. It computes a non-linear function of a linear combination of all its input signals and transmits the result to a single output. Artificial neurons are combined into networks by connecting the outputs of some neurons to the inputs of others; they are the basic elements of an ideal neurocomputer. Artificial neurons and networks are used in a variety of practical applications, including prediction, pattern recognition and control, and can perform complex tasks through the interaction of many simple processors. Artificial neurons can also operate in a biological environment as part of biohybrid neurons, which combine artificial and living components; the use of such models, however, raises serious moral, social and ethical questions. In general form, the neuron formula can be written as follows:

y = ∑ᵢ wᵢ · xᵢ + b

Where:

y is the output signal of the neuron,

wᵢ is the weight coefficient of the i-th input,

xᵢ is the i-th input value,

b is the bias.

This formula describes a linear combination of the input values multiplied by the corresponding weight coefficients, plus the bias. The neuron classifies the input data based on the sign of the resulting value, i.e. by comparing the weighted sum against a threshold.
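To make the formula concrete, here is a minimal sketch in Julia. It is not part of the original notebook: the function names neuron and classify and the sample values are chosen purely for illustration.

In [ ]:
# A single artificial neuron: weighted sum of the inputs plus a bias,
# followed by a threshold that turns the sum into a class label
neuron(x, w, b) = sum(w .* x) + b
classify(x, w, b) = neuron(x, w, b) > 0 ? 1 : 0

x = [0.5, 1.2]     # input values
w = [0.8, -0.3]    # weights
b = 0.1            # bias

println(neuron(x, w, b))    # weighted sum ≈ 0.14
println(classify(x, w, b))  # 1, because the sum is positive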

This example implements the training of a simple model with a single neuron and two weight coefficients. The input data are divided into two groups, and the weights are trained against the corresponding target values.

Training runs for 100,000 iterations, during which the weights are updated according to the prediction error. Once training is complete, the model is tested on new data and the results are plotted.
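Note that the update rule used below is a simplified variant of the perceptron rule: whenever the prediction differs from the target, both weights are shifted by the fixed step of 0.05, rather than in proportion to the corresponding input values as in the classical perceptron algorithm.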

In [ ]:
# Define the network parameters
bias = 10     # classification threshold
WT1 = 0.1     # initial weight of the first input
WT2 = 0.1     # initial weight of the second input
step = 0.05   # weight update step

# x-coordinates of the class 0 points (X1) and class 1 points (X2)
X1 = [0.323, 0.45, 0.33, 0.22]
X2 = [0.9, 0.76, 1.3123, 1.17]
X = vcat(X1, X2)

# y-coordinates of the class 0 points (Y1) and class 1 points (Y2)
Y1 = [0.67, 0.445, 0.633, 0.312]
Y2 = [0.112, 0.22, 0.35, 0.42]
Y = vcat(Y1, Y2)

# Target labels: the first four points belong to class 0, the last four to class 1
result = [0, 0, 0, 0, 1, 1, 1, 1];
In [ ]:
# Training
for i in 1:100000
    for j in 1:8
        # Predicted class: 1 if the weighted sum exceeds the threshold
        r = (WT1 * X[j] + WT2 * Y[j] > bias) ? 1 : 0

        # If the prediction is wrong, shift both weights by a fixed step
        # towards the target label
        if r < result[j]
            WT1 += step
            WT2 += step
        elseif r > result[j]
            WT1 -= step
            WT2 -= step
        end
    end
end
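As a quick sanity check, which is not part of the original notebook, we can verify that the trained weights reproduce the target labels on the training set:

In [ ]:
# Predictions on the training set; after successful training they match the targets
train_pred = [(WT1 * X[j] + WT2 * Y[j] > bias) ? 1 : 0 for j in 1:8]
println("Predictions: ", train_pred)
println("Targets:     ", result)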
In [ ]:
# Test on new points
test1 = [0.431, 0.6954, 0.12, 0.71]    # x-coordinates of the test points
test2 = [0.444, 0.43, 0.23, 0.3213]    # y-coordinates of the test points
output = Vector{Int}(undef, 4)

for k in 1:4
    output[k] = (WT1 * test1[k] + WT2 * test2[k] > bias) ? 1 : 0
end

# Print the results and define the decision boundary for plotting
allweight = [WT1, WT2]
define_line(x, weights, bias) = (-weights[1] * x .+ bias) ./ weights[2]
println("Output: ", output)
Output: [0, 1, 0, 1]

As the test results show, two of the test points fall into class 0 and the other two into class 1. Let's draw a graph to visualise the classification results.
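The decision boundary drawn below is the set of points where the weighted sum equals the threshold, WT1 · x + WT2 · y = bias; solving for y gives the line y = (bias − WT1 · x) / WT2, which is exactly what define_line computes.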

In [ ]:
using Plots

# Scatter the two training groups and the test points
plot(X1, Y1, seriestype = :scatter, label = "Class 0", color = :blue)
plot!(X2, Y2, seriestype = :scatter, label = "Class 1", color = :red)
plot!(test1, test2, seriestype = :scatter, label = "Test", color = :green, marker = :circle)

# Draw the learned decision boundary
x_vals = range(minimum(X), stop = maximum(X), length = 100)
y_vals = define_line(x_vals, allweight, bias)
plot!(x_vals, y_vals, label = "Decision Boundary", color = :black)
Out[0]:

Conclusion

In this example, we have examined the structure of a neuron at a basic level. The neuron is one of the basic elements from which more complex neural networks, such as convolutional neural networks, are built.