These are algorithms applied to NP (complexity class) problems. They belong to the class of nondeterministic algorithms; more precisely, they search not necessarily for an optimal solution but for a good one, using stochastic moves and taking less time than deterministic search algorithms. Typical examples of the problems they try to solve are combinatorial ones, where testing all possible combinations would take centuries.
To make it concrete, I’ll use a classic problem. Consider any graph with nodes and edges, where each edge connects a pair of nodes, and suppose the goal is to start at one node and reach another while covering the shortest possible distance.
As an illustrative example, in the graph above, one wants to leave node D and reach node E at the lowest possible cost. Although in this small graph it is feasible to test all possible combinations, that becomes impractical for huge graphs.
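A minimal sketch of how that graph could be represented in Python, assuming only the edges and weights that appear in the cost calculations further down (the original image may contain additional nodes and edges):

```python
# Weighted, undirected graph assumed from the example costs below:
# D-A = 5, A-B = 7, B-E = 7, D-F = 6, F-G = 11, G-E = 9.
GRAPH = {
    "D": {"A": 5, "F": 6},
    "A": {"D": 5, "B": 7},
    "B": {"A": 7, "E": 7},
    "E": {"B": 7, "G": 9},
    "F": {"D": 6, "G": 11},
    "G": {"F": 11, "E": 9},
}

def path_cost(path, graph=GRAPH):
    """Sum of edge weights along a path; infinity if some edge does not exist."""
    total = 0
    for a, b in zip(path, path[1:]):
        if b not in graph[a]:
            return float("inf")
        total += graph[a][b]
    return total
```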
This is where these algorithms, considered metaheuristic searches, come in. As I said before, the solutions these models seek are not necessarily optimal; rather, within the search space of all possible combinations, they manage to move in the direction of the best choices. A well-defined evolutionary algorithm has the following structure (a sketch of this loop follows the list):
- Fitness Function: evaluates the cost (quality) of a solution.
- Individual Pool (population): each individual in the pool carries a complete candidate solution.
- Search Mechanism: the model that stochastically combines or perturbs individuals, driving them toward the best regions of the search space.
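A minimal, generic sketch of that structure in Python; the function names `fitness`, `random_individual` and `vary` are placeholders standing in for the three components above:

```python
import random

def evolve(fitness, random_individual, vary, pool_size=20, generations=100):
    """Generic evolutionary loop: keep a pool of candidate solutions and
    repeatedly replace the worst ones with stochastic variations of the best."""
    pool = [random_individual() for _ in range(pool_size)]
    for _ in range(generations):
        pool.sort(key=fitness)               # lower cost = better individual
        parents = pool[: pool_size // 2]     # keep the better half
        children = [vary(random.choice(parents), random.choice(parents))
                    for _ in range(pool_size - len(parents))]
        pool = parents + children
    return min(pool, key=fitness)
```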
As mentioned, evolutionary algorithms are a subset of computational intelligence algorithms, and several evolutionary models have been proposed that in general share the structure above.
To illustrate, a classical model for solving the routing problem proposed on the graph would be a genetic algorithm. To build the model, we take a fitness function that sums the weights of the traversed edges, and a pool of "individuals", each made up of a vector of nodes. To make it more concrete, one individual could be described as {D, A, B, E} and another as {D, F, G, E}; the first has a cost of 5+7+7=19 and the second 6+11+9=26, so the former has a better fitness value and is therefore a better individual than the latter. As the update mechanism for these individuals, we can use something very simple, such as a crossover that combines two individuals from the pool to generate a third. Note that in this graph problem there are constraints that the algorithm you are modeling needs to respect (a sketch respecting them follows the list):
- Do not connect nodes that are not joined by an edge in the graph.
- Every individual must have D at the beginning and E at the end.
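A minimal sketch of such a genetic algorithm for the path problem, reusing the `GRAPH` and `path_cost` definitions above; the random-walk initialization and the crossover-at-a-common-node operator are my assumptions about one simple way to keep children valid:

```python
import random

def random_path(graph, start="D", end="E", max_steps=20):
    """Random walk from start until end is reached (retries if it takes too long)."""
    while True:
        path, node = [start], start
        for _ in range(max_steps):
            node = random.choice(list(graph[node]))
            path.append(node)
            if node == end:
                return path

def crossover(parent_a, parent_b):
    """Splice two valid D->E paths at a node they share, preserving validity."""
    common = set(parent_a[1:-1]) & set(parent_b[1:-1])
    if not common:
        return list(parent_a)                      # no common node: copy a parent
    cut = random.choice(list(common))
    return parent_a[: parent_a.index(cut)] + parent_b[parent_b.index(cut):]

def genetic_shortest_path(graph, pool_size=10, generations=50):
    pool = [random_path(graph) for _ in range(pool_size)]
    for _ in range(generations):
        pool.sort(key=lambda p: path_cost(p, graph))   # fitness = total edge weight
        parents = pool[: pool_size // 2]
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(pool_size - len(parents))]
        pool = parents + children
    return min(pool, key=lambda p: path_cost(p, graph))

# For this small graph, genetic_shortest_path(GRAPH) tends to
# converge to ['D', 'A', 'B', 'E'] with cost 19.
```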
In short, evolutionary algorithms are most often modeled for specific problems: they are built with problem-specific constructs to handle the constraints, and they stochastically modify the individuals (the solution carriers) to improve the solutions within the search space. However general the underlying idea may be, they are built according to the problem being solved.
In general, I advise you to look into how NP-hard problems are solved; these algorithms belong to the discipline of computational intelligence and are often quick ways of finding good solutions. Also look into optimizers, which are at the core of Artificial Intelligence models.
Considerations and Corrections.
The problem I mentioned is the shortest-path problem, which has an easy solution; its inverse is the longest-path problem. The shortest-path problem in a directed or undirected graph with non-negative edge weights is solved in polynomial time by Dijkstra’s algorithm.
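For comparison, a minimal sketch of Dijkstra’s algorithm on the same assumed `GRAPH` structure used above:

```python
import heapq

def dijkstra(graph, start="D", end="E"):
    """Exact shortest path for non-negative edge weights (assumes end is reachable)."""
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue                       # stale queue entry
        for neighbor, weight in graph[node].items():
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[end]

# dijkstra(GRAPH) -> (['D', 'A', 'B', 'E'], 19)
```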
Just one thing: the shortest distance between two points, for edges with non-negative weights, is a problem in P. Proof? Dijkstra’s algorithm solves it in polynomial time in the number of vertices. I think you’d want to use the Hamiltonian path problem or the traveling salesman problem...
– Jefferson Quesado
Yes, correction noted. Thank you very much!
– Felipe Franco