Learning in neural networks

How does learning occur in neural networks? What is the concept behind it? And how does it relate to "Deep Learning"?

3 answers



You really should buy a book on the subject if you are truly interested. But the basic concepts (and I do mean basic) are as follows:

1) A neuron has a certain number of inputs and a single output. The output can be seen as a "decision" made based on the inputs.


2) The output of the neuron is well-behaved, that is, it is a value in a predetermined range (something like "between 0 and +1"), even if the inputs of the neuron are of much greater magnitude.

3) To calculate the output, the neuron assigns a different "weight" to each of its inputs and computes a weighted linear sum of them. The weight of each input can be changed.

output_linear = weight_a * input_a + weight_b * input_b + ...

Of course, if one of the inputs is very large, even if its weight is small it will end up dominating the output.

The "weights" stored in each neuron are the memory of the system.
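A minimal Python sketch of this weighted sum (the function and variable names are my own, not from the answer):

```python
def linear_output(inputs, weights):
    """Weighted linear sum of a neuron's inputs."""
    return sum(x * w for x, w in zip(inputs, weights))

# A small input with a large weight versus a huge input with a small
# weight: the huge input still dominates the result.
print(linear_output([0.5, 100.0], [0.8, 0.1]))  # about 10.4 (0.4 + 10.0)
```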

4) For output to be "well-behaved" the linear sum result is compressed by a nonlinear function, such as the sigmoid function:

output = 1 / (1 + exp(-output_linear))

The use of a non-linear function on the output is one of the properties that allows a neural network to "learn" arbitrary functions.
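The sigmoid formula from item 4, written out in Python:

```python
import math

def sigmoid(x):
    """Compress any real value into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))    # 0.5 exactly
print(sigmoid(10.0))   # close to 1
print(sigmoid(-10.0))  # close to 0
```

However large the linear sum gets, the output stays "well-behaved" between 0 and 1.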

5) A single neuron, also called a Perceptron, is already useful for some simple decisions, for example whether a car should stop or go at an intersection. One input is the red light; another could be an approaching ambulance (whose weight should be high, because it has priority over the red light), and so on.

A Perceptron could also calculate how much soap a washing machine should use based on some variables, or the selling price a product needs in order to make a profit.
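The intersection example can be sketched as a single sigmoid neuron; the weights and bias below are illustrative guesses, not values from the answer:

```python
import math

def perceptron(inputs, weights, bias):
    """A single neuron: weighted sum of the inputs squashed by a sigmoid."""
    linear = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-linear))

# Inputs: [red_light, ambulance_approaching], each 0 or 1.
# Output near 1 means "stop"; the ambulance weight is higher
# because it has priority over the red light.
weights = [2.0, 5.0]
bias = -1.0

nothing   = perceptron([0, 0], weights, bias)  # low score: keep going
red_light = perceptron([1, 0], weights, bias)  # higher: stop
ambulance = perceptron([0, 1], weights, bias)  # highest: definitely stop
```

Changing the weights changes the decision, which is exactly what training does.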

6) A neural network more capable than the Perceptron has one or more hidden layers, i.e., groups of neurons that are not directly linked to the input or the output, forming a network of synapses (links between neurons).


An extremely simple function, such as XOR (exclusive or), cannot be learned by a Perceptron, because its outputs are not linearly separable: no single straight line divides the inputs that should map to 1 from those that should map to 0. It can, however, be learned by a neural network with a hidden layer. Abusing the metaphor a little, a Perceptron does not learn functions with "altruistic" characteristics.
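One way to see this is a brute-force search: for a truth table over two binary inputs, look for any weights and threshold whose linear rule reproduces it. A sketch (the grid of candidate values is an arbitrary choice of mine):

```python
import itertools

def linearly_separable(truth_table):
    """Search for w1, w2, t such that (w1*x1 + w2*x2 > t) matches the table."""
    grid = [i / 2 for i in range(-4, 5)]  # candidates -2.0 .. 2.0, step 0.5
    for w1, w2, t in itertools.product(grid, repeat=3):
        if all((w1 * x1 + w2 * x2 > t) == out
               for (x1, x2), out in truth_table.items()):
            return True
    return False

AND = {(0, 0): False, (0, 1): False, (1, 0): False, (1, 1): True}
XOR = {(0, 0): False, (0, 1): True, (1, 0): True, (1, 1): False}

print(linearly_separable(AND))  # True  -- e.g. w1 = w2 = 1, t = 1.5
print(linearly_separable(XOR))  # False -- no linear rule reproduces XOR
```

AND is learnable by a single neuron; XOR needs the hidden layer.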

7) Through the "backpropagation" mechanism, it is possible to "train" a neural network. For this, there must be a learning phase, in which the network's neurons are subjected to a certain set of inputs, and the error (the difference between the observed and the expected output) is calculated. The error is used to recalculate the weights of the network, from back to front (starting with the output neuron and working from there toward the inputs).

If the neurons use a non-linear function on their output, it can be proved that a network with a hidden layer can approximate any continuous function (the "universal approximation" result); backpropagation is the usual way to find the weights that do so.
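A minimal pure-Python sketch of this training loop on XOR, assuming a 2-2-1 sigmoid architecture; the initial weights and learning rate are arbitrary illustrative choices, and convergence on XOR can depend on them:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2 inputs -> 2 hidden neurons -> 1 output neuron, all sigmoid.
# Weight layout per neuron: [w_input1, w_input2, bias].
w_hidden = [[0.5, -0.5, 0.1], [-0.3, 0.4, 0.2]]
w_out = [0.6, -0.4, 0.05]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, y

def train_epoch(lr=0.5):
    total_error = 0.0
    for x, target in data:
        h, y = forward(x)
        total_error += (target - y) ** 2
        # Backpropagation: compute the output delta first, then push the
        # error back to the hidden layer, before updating any weights.
        delta_out = (y - target) * y * (1 - y)
        deltas_h = [delta_out * w_out[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i in range(2):
            w_out[i] -= lr * delta_out * h[i]
        w_out[2] -= lr * delta_out
        for i in range(2):
            for j in range(2):
                w_hidden[i][j] -= lr * deltas_h[i] * x[j]
            w_hidden[i][2] -= lr * deltas_h[i]
    return total_error

errors = [train_epoch() for _ in range(5000)]
print(errors[0], "->", errors[-1])
```

Note the back-to-front order inside each step: output delta first, then the hidden deltas, exactly as described above.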

The learning process and the operation of a neural network are essentially statistical, analogous to fuzzy logic. A neural network trained to recognize letters will always respond with a degree of uncertainty (instead of "this letter is A", the output would be something like "95% chance of being the letter A").

Finally, here is an article where I compare neurons with economic agents, which may be of interest: https://epxx.co/artigos/economianeural.php

  • I’ve actually been studying the subject for some time, @epx. But in any case I wanted to enrich the site and learn something about this subject that is a bit in vogue. :D Thanks

  • 1

    I hope I have contributed to the expectations.


Neural networks are one of the most famous types of machine learning algorithms, and their main idea is basically to mimic the behavior of the human brain. If you have some knowledge of programming and statistics, you will understand these algorithms better.

The difference between one neural network and another lies in the training process. From the examples shown to it, the neural network adjusts its parameters according to the answers. For example, to train a neural network to classify news, examples of news articles must be shown to it. That is, the neural network regulates the "synapses" of its "brain" so as to classify new examples automatically.

Deep Learning is machine learning that is deeper and broader, with more complex structures.

For more information on the subject and some concepts visit this link.


We can think of learning in neural networks the same way we think about how humans learn. Here is a practical example: imagine Michael Jordan training to make a 3-point shot from a great distance. What does he do to make it? He surely tries again and again. Each attempt has input variables, for example: right direction, left direction, upward direction, breath control, foot thrust, hand thrust.

We will call each variable X, listed as X1, X2, X3, X4, X5, X6. Notice that I listed up to 6 (X6) precisely because that is the number of inputs I listed; in general there are n inputs, up to Xn, and to index them in a formula we will use the letter j.

Once we have the inputs, we attach a weight to each one (a value that multiplies the variable; we will call it W, representing its power of influence over the variable X). Every input is multiplied by, and therefore influenced by, its weight.

Now imagine Michael Jordan suffered a brain injury and forgot everything about basketball. The first attempt would be pretty wild: he might throw the ball to the side of the court! But with each attempt he would improve, "calibrating" himself; that is, at each new attempt the weights would be updated, aiming for a smaller error in making the basket. After numerous attempts, Michael would drastically reduce the error and make that basket.

Now let's put this in a more mathematical form: the basket is made if the sum of Xj·Wj is greater than T; otherwise he misses.

X = the inputs; j = the index used in the sum, just to write everything in one formula; W = the weights; T = the activation cutoff value (threshold).
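The rule above (basket if the sum of Xj·Wj exceeds T) can be sketched as follows; the input ratings, weights, and threshold are made-up illustrative numbers:

```python
def made_basket(X, W, T):
    """True when the weighted sum of the inputs exceeds the threshold T."""
    return sum(x * w for x, w in zip(X, W)) > T

# Six inputs rated 0.0-1.0 for one attempt: right direction, left
# direction, upward direction, breath control, foot thrust, hand thrust.
W = [1.0, -1.0, 1.5, 0.5, 1.0, 1.0]  # each weight is the input's influence
T = 2.5                              # activation threshold

X_good = [0.9, 0.1, 0.8, 0.7, 0.6, 0.8]  # a well-calibrated attempt
X_bad  = [0.1, 0.9, 0.2, 0.1, 0.1, 0.2]  # an attempt right after the "injury"

print(made_basket(X_good, W, T))  # True
print(made_basket(X_bad, W, T))   # False
```

Learning, in this picture, is the process of adjusting W so that more attempts land above T.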

The concept is a bit more complex than that, but I think this example can help you reason more abstractly from a practical case. As for how deep learning relates to neural networks, it is simple: deep learning just has many more layers, i.e., it is much more complex, and nothing more than that.
