Formally, operational research is the search for a point in the domain of a function in order to minimize its value.
Problems of maximizing a function f(x) can be treated as minimizing the function g(x) = -f(x).
You mentioned Linear Programming; that's good. It is a great entry point, and it makes it easy to explore some of the more technical concepts.
Linear Programming
For those who do not know, Linear Programming is the discipline within Operational Research that seeks the best possible result for a given objective function subject to constraints. But not just any objective function, nor just any restriction, belongs to Linear Programming: only linear functions are allowed.
The objective function is of the form F(X) = C . X, where C is a vector of constants, X is the vector of variables, and C . X is the scalar product between the two vectors.
Restrictions may be provided individually: A1 . X <= r1, where A1 is a vector and r1 is the scalar bound that the product must satisfy. Each constraint has its own inequality.
But they can also be provided as a multiplication of a matrix by a vector: A * X <= r, where A is the matrix with one row per constraint (remember the vector A1? It is the first row of the matrix A) and r is the vector with the maximum value of each restriction.
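As a minimal sketch (with made-up numbers, not taken from the text), checking whether a candidate X satisfies A * X <= r is just a row-by-row scalar product comparison:

```python
# Hypothetical constraint system A * X <= r: each row of A is one
# constraint vector A_i, paired with the bound r_i.
A = [[2, 1],
     [1, 3]]
r = [10, 15]

def satisfies(A, x, r):
    """True when every row satisfies the scalar product A_i . x <= r_i."""
    return all(sum(a * v for a, v in zip(row, x)) <= ri
               for row, ri in zip(A, r))

print(satisfies(A, [3, 2], r))   # 2*3 + 1*2 = 8 <= 10 and 1*3 + 3*2 = 9 <= 15
print(satisfies(A, [5, 1], r))   # 2*5 + 1*1 = 11 > 10, so infeasible
```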
When you have a minimum-value inequality (a restriction of the form Bi . X >= ri), you can turn it into a maximum-value one (taking Ai = -Bi, the restriction is equivalent to Ai . X <= -ri). If you have an equality, you can turn it into two inequalities:
B . X = r
is identical to the pair:
B . X <= r
B . X >= r
I can't remember how it works for non-inclusive (strict) inequalities, like Ai . X < ri.
If we only have two variables, the domain is a subset of the plane. Each constraint cuts the plane in half. The feasible region (the domain restricted by the constraints) is the intersection of all the half-planes delimited by the restrictions.
For example, with 2 variables and 4 constraints, we could have this domain (marked in red):
In a Linear Programming problem each constraint is a line; the small perpendicular ticks indicate which half-plane is valid.
In the case of linear constraints, this resulting intersection will be convex: it has no "carved-out" parts. Because of this convexity, and because the objective function is linear, the solution lies at one of the vertices of this convex shape. There may be a whole set of solutions with the same objective value when one of the faces is perpendicular to the direction of growth of the objective function; in that case, all the points of that face are also optimal.
One of the algorithms used in Linear Programming is Dantzig's simplex, normally referred to simply as the simplex method. In geometric terms, it takes the domain (a convex hyper-solid) and, starting from a vertex, walks along the edge that best improves the objective function (if I am not mistaken, the edge of steepest decrease of C . X for minimization problems) to the vertex at its other end, until it arrives at a vertex whose edges all offer only a worsening of the objective function. That means the algorithm found a local optimum (more on this later). As the solution space is convex and the objective function is linear, the local optimum is also a global optimum.
Wikipedia itself provides an example of this "walk" along the vertices of the hyper-solid domain:
By the way, in the general case the simplex method is not polynomial: there are hyper-solids built specifically so that simplex takes exponential processing time. But the problem does admit algorithms that solve it in polynomial time, with different strategies. One of them is the ellipsoid algorithm, which navigates the interior points of the solution space.
Integer Linear Programming
A variation of Linear Programming is Integer Linear Programming (PLI, from the Portuguese initials). It is integer because the solution needs to be in integers. That is, X must be a vector of integers.
The simplex method naturally uses divisions to navigate from vertex to vertex, so most of the time its results are rational, not integer. But this does not prevent it from being used to assist in PLI problems.
One of the strategies to find the exact solution is:
- run the simplex method and find the best rational solution
- fix one of the variables
- two fixations are made: one with the value truncated down, the other with it rounded up
- you can start by fixing the variable whose coefficient in the objective function is most significant
- once fixed, the variable is treated as a constant
- run this same algorithm again for each distinct fixed value determined in the second step
- the recursion ends when all variables are fixed at integer values
- the answer is the branch of the recursion that finished with the best objective function value
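The recursion above can be sketched as follows; the one-variable "relaxation solver" standing in for simplex, and all the numbers, are assumptions for illustration only:

```python
import math

def branch_and_bound(solve_relax, lo, hi):
    """Maximize over the integers in [lo, hi]: solve the continuous
    relaxation, and if its optimum is fractional, branch on the value
    truncated down and on the value rounded up."""
    relax = solve_relax(lo, hi)
    if relax is None:                      # relaxation infeasible
        return None
    x, value = relax
    if abs(x - round(x)) < 1e-9:           # already integral: leaf answer
        return (round(x), value)
    best = None
    for new_lo, new_hi in ((lo, math.floor(x)), (math.ceil(x), hi)):
        cand = branch_and_bound(solve_relax, new_lo, new_hi)
        if cand is not None and (best is None or cand[1] > best[1]):
            best = cand
    return best

# Toy relaxation: maximize 3x subject to x <= 2.5 and lo <= x <= hi.
def relax(lo, hi):
    x = min(hi, 2.5)
    return (x, 3 * x) if x >= lo else None

print(branch_and_bound(relax, 0, 10))   # (2, 6): best integer solution
```

On a real PLI instance the relaxation solver would be simplex itself and the branching would fix one variable at a time, but the control flow is the same.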
I won't prove it, but this algorithm runs O(2^size(X)) recursions. The decision version of the problems handled by PLI fits into NP, so this is a much more difficult problem than a Linear Programming problem.
There are also mixed problems, part Linear Programming and part PLI; to solve such a mixed problem, the integer fixations are made only for the integer variables, the PLI part of the problem.
The travelling salesman problem (duly mentioned in @Pagotti's reply) can easily be converted into a PLI problem. Let the costs of crossing from city A to city B make up the objective function. Each edge constitutes a variable of the problem. The constraints are:
- each edge can only be traversed or ignored: let x_ab be the variable representing the directed edge a-->b; if the edge is ignored, x_ab is 0; if it is traversed, x_ab is 1; therefore 0 <= x_ab <= 1
- each city can only be visited once, and it must be visited (since the travelling salesman returns to the starting point): the sum of x_ib entering b (from any incident city i) must be 1
- every city must be entered and left (again because the travelling salesman returns to the starting point): the sum of x_aj to any destination city j must equal the sum of x_ia from any origin city i, for each and every city a
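As a sketch, the degree constraints above can be checked for a hypothetical 3-city instance (the city names and tours are made up for illustration):

```python
# x[(a, b)] = 1 when the directed edge a->b is traversed, 0 (or absent)
# otherwise.
cities = ["A", "B", "C"]

def degree_constraints_ok(x):
    """Each city must be entered exactly once and left exactly once."""
    for c in cities:
        entered = sum(x.get((i, c), 0) for i in cities if i != c)
        left = sum(x.get((c, j), 0) for j in cities if j != c)
        if entered != 1 or left != 1:
            return False
    return True

tour = {("A", "B"): 1, ("B", "C"): 1, ("C", "A"): 1}   # the cycle A->B->C->A
print(degree_constraints_ok(tour))                      # True
broken = {("A", "B"): 1, ("B", "A"): 1}                 # C is never visited
print(degree_constraints_ok(broken))                    # False
```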
Nonlinear Programming
When you are in non-linear programming, there are two things to take into account:
- if the objective function is non-linear
- if the restrictions are non-linear
There may also be the case of constraints that are linear in one range but not in others, and therefore not continuous. Both genuinely non-linear constraints and constraints made non-linear by discontinuity can make the solution space concave.
Non-linear constraints may occasionally generate solution spaces whose faces are not pieces of hyper-planes, yet still form a convex hyper-solid. In that case it would still be possible to go after the best point of such a face for the objective function in question.
With these new kinds of restrictions, the possibility of concavity brings an additional problem when going after solutions: local optima.
In a minimization problem, you may end up finding a solution such that, whichever way you try to move away from it, you cannot improve the objective function. In this case, you are in a valley. Maybe there is something even better, but to find it you would have to "climb the hill" (hill climbing) to check.
Take as an example this fourth-degree polynomial, taken from Wikipedia:
Imagine that it is your objective function and your restrictions are -10 <= x <= 10. If you found the local minimum around +2, you would need to climb the hill toward 0 to find the other local minimum, around -3.
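A minimal sketch of getting stuck in a valley, using an assumed quartic with two minima of different depths (a stand-in for the Wikipedia curve, not the actual one):

```python
def f(x):
    # assumed quartic with a shallow valley near +2 and a deeper one
    # near -3 (illustrative, not the curve from the text)
    return (x + 3) ** 2 * (x - 2) ** 2 + x

def descend(x, step=1e-3):
    """Greedy descent: keep moving to a better neighbor, stop in a valley."""
    while True:
        if f(x - step) < f(x):
            x -= step
        elif f(x + step) < f(x):
            x += step
        else:
            return x

a = descend(4.0)    # slides into the valley near +2 and gets stuck there
b = descend(-5.0)   # slides into the deeper valley near -3
print(round(a, 1), round(b, 1), f(b) < f(a))
```

Starting from the right, the greedy search never sees the better valley: it would have to climb the hill between them.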
Strategies for solving non-linear programming problems need to take hill climbing into account.
It is quite common to attack these problems with heuristics that produce good-enough answers. One hill-climbing heuristic is simulated annealing, which accepts uphill moves with a probability that decreases as the "temperature" of the system falls (and the temperature, in turn, falls with the passage of time).
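A minimal simulated annealing sketch over the same kind of quartic; the function, the cooling schedule and all the parameters are assumptions for illustration:

```python
import math
import random

def f(x):
    # assumed quartic with a shallow valley near +2 and a deeper one near -3
    return (x + 3) ** 2 * (x - 2) ** 2 + x

def simulated_annealing(x, temp=10.0, cooling=0.99, steps=2000, seed=0):
    """Minimize f over [-10, 10], starting from x."""
    rng = random.Random(seed)
    best_x, best_val = x, f(x)
    for _ in range(steps):
        neighbor = min(10.0, max(-10.0, x + rng.uniform(-1, 1)))
        delta = f(neighbor) - f(x)
        # always accept improvements; accept an uphill move ("climbing
        # the hill") with a probability that shrinks as temperature falls
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = neighbor
        if f(x) < best_val:
            best_x, best_val = x, f(x)
        temp *= cooling
    return best_x, best_val

x, val = simulated_annealing(2.0)   # start near the shallower valley
print(round(x, 1), round(val, 1))
```

Early on, the high temperature lets the search jump over the hill; near the end it behaves like plain descent.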
And, yes, there is the integer variation of all these problems. And, no, I don't know how to enforce the integrality restriction in the non-linear case.
Multiple objectives
Did you think it was over? Think again!
In addition to linear, integer linear and non-linear problems, we also have problems with multiple objective functions.
The most interesting thing about these many-objective problems is that there are co-dominant solutions. I can alter my solution in a way that increases the salesman's profit but makes the journey more time-consuming (taking money as an objective to maximize and time as an objective to minimize). How should the computer treat these problems? What answer should it give? Common sense says it should be something resembling a Pareto front:
This means that the more mutually non-dominated points (none objectively better than another in all objectives) we obtain, the better it will be for a manager choosing a solution.
Here, metaheuristics able to work with many objectives are used to try to obtain this set of answers:
- MOSA (multi-objective simulated annealing)
- genetic algorithms
- ant colony optimization (there is a variation with a spider that eats ants that try to do hill climbing, without much success)
- bees
I have never seen (though that doesn't mean they don't exist) artificial neural networks used to solve these problems. Nor do I know of learning agents that return a Pareto set as the answer (the same caveat applies). For both, I only know strategies that return a single solution.
Direct answers (tl;dr)
- Operational research is about researching the best solution for a particular operation (or better solutions)
- Several algorithms can be used depending on the problem
- Simplex Method/Ellipsoid Method for Linear Programming Problems
- Recursion of Linear Programming methods with integer value fixation for Integer Linear Programming
- Heuristics and metaheuristics for non-linear programming and for multi-objective problems
- simulated annealing, ants, bees, genetic algorithms, among others
- It is used to solve optimization problems, not decision problems
Examples
Okay, the answer is very abstract: it talks about the characteristics of each problem (linear, integer linear and non-linear), resolution algorithms, algebraic characteristics of the problems... but an actual worked problem, Jefferson, you didn't show, right?
Well, the intention of this section and its subsections is to provide examples, up to the limit of 30,000 characters (okay, I exaggerate, I won't need that much).
PUC-Rio: chairs and tables
Extracted from slides available to the public: http://www-di.inf.puc-rio.br/~laber/LP2010-2.pdf
You earn 45 bucks for every chair you sell, and 80 bucks for every table. You want to maximize your profit. So just put all the manpower into tables, right?
Of course not! Looking only at the profit per item, I can't conclude that. It depends on how much I have available to turn into chairs and how much I have available to turn into tables.
In this case, I have two limiting factors:
- time, only 450h of work
- boards, I only have 400 in stock
To build a table, I spend 20 boards and 15h. A chair takes 5 boards and 10h.
Therefore, I have to meet the following conditions:
work hours: 10 * cadeiras + 15 * mesas <= 450
boards: 5 * cadeiras + 20 * mesas <= 400
And my goal is to maximize profit:
objective: max 45 * cadeiras + 80 * mesas
It is interesting to notice that 21 tables is outside the solution space because it exceeds my board restriction (21 tables would need 420 boards). You can see the solution space on slide 6 of the PDF of the source material:
Trusting that the answer on the slide is correct, the optimal answer is:
cadeiras = 24, mesas = 14
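Since the optimum of a linear program lies at a vertex, a brute-force sketch in pure Python can enumerate the intersections of constraint pairs, keep the feasible ones, and pick the best (this is just for illustration; real solvers use simplex or interior-point methods):

```python
# Constraints as a * cadeiras + b * mesas <= r, non-negativity included.
constraints = [
    (10, 15, 450),   # work hours
    (5, 20, 400),    # boards
    (-1, 0, 0),      # cadeiras >= 0
    (0, -1, 0),      # mesas >= 0
]

def intersect(c1, c2):
    """Intersection of the boundary lines a*x + b*y = r, or None if parallel."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= r + 1e-9 for a, b, r in constraints)

vertices = [p for i, c1 in enumerate(constraints)
              for c2 in constraints[i + 1:]
              if (p := intersect(c1, c2)) is not None and feasible(p)]

best = max(vertices, key=lambda p: 45 * p[0] + 80 * p[1])
print(best, 45 * best[0] + 80 * best[1])   # (24.0, 14.0) 2200.0
```

It agrees with the slide: 24 chairs and 14 tables, for a profit of 2200.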
Variation of the previous problem: lumberjack
Imagine you have exactly the same situation as before, but now you have access to lumberjacks. They can provide new boards for you, consuming your working hours; there is also a cost for delivering the new wood.
How does that happen? Well, your furniture factory is in Poland, in a pine forest, and your workers look like this:
That justifies the hours spent. And the money spent? That would be the wear on the precision axes they use to cut the trees.
As I am just sketching the problem here, I won't try to pick numbers tuned to a nice answer. Let's say that every 3 hours in the forest generates 50 boards at a cost of 10 bucks. This modifies the objective function and the restrictions as well. Normally the objective function is presented first and only then the restrictions, so I will present the problem that way:
max: cadeiras * 45 + mesas * 80 - lenhadores * 10
Restrictions:
work hours: 10 * cadeiras + 15 * mesas + 3 * lenhadores <= 450
boards: 5 * cadeiras + 20 * mesas <= 400 + 50 * lenhadores
I can rewrite the board restriction like this:
5 * cadeiras + 20 * mesas - 50 * lenhadores <= 400
As using the lumberjacks implies a cost, I honestly do not know if it is worth using them. I think so, but I don't have a program at hand to do the calculation. Note also that the solution space now has 3 dimensions, so it would be a geometric solid.
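The same vertex-enumeration idea works one dimension up, so a quick sketch can settle the doubt; exact rational arithmetic (Python's Fraction) keeps the feasibility tests free of rounding noise:

```python
from fractions import Fraction as F
from itertools import combinations

# Constraints as a*cadeiras + b*mesas + c*lenhadores <= r.
constraints = [
    (F(10), F(15), F(3), F(450)),    # work hours; lumberjacks cost 3h each
    (F(5), F(20), F(-50), F(400)),   # boards; each lumberjack adds 50
    (F(-1), F(0), F(0), F(0)),       # cadeiras >= 0
    (F(0), F(-1), F(0), F(0)),       # mesas >= 0
    (F(0), F(0), F(-1), F(0)),       # lenhadores >= 0
]

def solve3(rows):
    """Vertex where three constraints are tight: solve the 3x3 system
    by Cramer's rule; None when the system is singular."""
    A = [row[:3] for row in rows]
    rhs = [row[3] for row in rows]
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    if d == 0:
        return None
    def repl(col):
        return [[rhs[i] if j == col else A[i][j] for j in range(3)]
                for i in range(3)]
    return tuple(det(repl(j)) / d for j in range(3))

def feasible(p):
    return all(a * p[0] + b * p[1] + c * p[2] <= r
               for a, b, c, r in constraints)

vertices = [p for trio in combinations(constraints, 3)
              if (p := solve3(trio)) is not None and feasible(p)]

def profit(p):
    return 45 * p[0] + 80 * p[1] - 10 * p[2]

best = max(vertices, key=profit)
print(best, profit(best))
```

If the arithmetic above is right, the optimum builds only tables (790/27 of them) with about 3.7 lumberjacks, for a profit of 62200/27, roughly 2303.7, beating the 2200 of the original problem, so the lumberjacks do pay off. The fractional counts are exactly where the PLI machinery from earlier would come in.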
This is a problem to be attacked with Linear Programming. The question linked in the title shows the general version of the problem; later I may put an instance of it here to illustrate.
(1) optimization (2) searches (3) counting/optimization problems ; then I elaborate a complete answer
– Jefferson Quesado
I didn’t know there was a difference in computing :P
– Maniero