Text of the book "Effective Methods and Transportation Processes Management Models at the Railway Transport. Textbook"
Author: Igor Shapkin
Genre: Other educational literature, Science and Education
Choosing the wrong KPI is dangerous, as it can lead to incorrect recommendations. Operations organized from the point of view of an ill-chosen indicator can entail large unjustified costs and losses (recall at least the notorious «gross output» indicator once used as the main criterion of the economic activity of enterprises).
2.3 Different types of operations research problems and methods for solving them
Operations research problems are divided into two categories: (a) direct and (b) inverse. Direct problems answer the question: what will happen if, under the given conditions, we make such-and-such a decision? In particular, what value will the chosen performance indicator W take for this decision?
Inverse problems answer the question: how should the elements of the solution be chosen so that the efficiency indicator W reaches its maximum?
Naturally, direct problems are simpler than inverse ones. It is also obvious that in order to solve an inverse problem, one must first of all be able to solve the direct one. This purpose is served by the mathematical model of the operation, which makes it possible to calculate the efficiency indicator W (and, if necessary, other characteristics) for any given conditions and any solution.
If the number of possible solutions is small, then by calculating the value of W for each of them and comparing the values obtained, one can directly indicate one or several optimal options for which the efficiency indicator reaches its maximum. However, when the number of possible solutions is large, searching for the optimal one among them «blindly», by simple enumeration, is difficult and in some cases practically impossible. For this purpose, special methods of directed search are used (we will get acquainted with some of them later). For now, we will limit ourselves to formulating the solution optimization problem in its most general form.
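The direct enumeration just described can be sketched in a few lines. This is a minimal illustration, not a method from the book: the model function and the candidate list are hypothetical stand-ins for a real operation model.

```python
# Direct enumeration: when the set of feasible solutions is small,
# compute W for every candidate and keep the best one.

def efficiency_W(x):
    # Hypothetical model of the operation: W for a solution element x
    return -(x - 3) ** 2 + 10  # peaks at x = 3

candidates = [0, 1, 2, 3, 4, 5]           # small, fully enumerable set
best = max(candidates, key=efficiency_W)  # direct comparison of W values

print(best, efficiency_W(best))  # → 3 10
```

When the candidate set grows combinatorially, this approach breaks down, which is exactly why the directed-search methods mentioned above are needed.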
Let there be an operation «O» whose success we can influence to some extent by choosing, in one way or another, the parameters (elements of the solution) that depend on us. The efficiency of the operation is characterized by the indicator W, which we wish to maximize.
Suppose that the direct problem is solved and the mathematical model allows you to calculate the value of W for any chosen solution, for any set of conditions.
Let us first consider the simplest (so-called «deterministic») case, when the conditions for performing the operation are fully known, i.e. contain no element of uncertainty. Then all the factors on which the success of the operation depends are divided into two groups:
1) predetermined factors (conditions) α1, α2, … over which we have no influence (including, in particular, the restrictions imposed on the solution);
2) factors depending on us (elements of the solution) x1, x2, … which we, within certain limits, can choose at our discretion.
The W performance indicator depends on both groups of factors. We will write this in the form of a formula:
W = W (α1, α2, …; x1, x2, …). (1)
We assume that the form of dependence (1) is known to us and that, with the help of the mathematical model, we can calculate the value of W for any given α1, α2, …, x1, x2, … (i.e., the direct problem is solved). Then the inverse problem can be formulated as follows:
Under given conditions α1, α2, …, find such elements of the solution x1, x2, … as maximize the indicator W.
Before us is a typical mathematical problem belonging to the class of so-called variational problems. Methods for solving such problems are analyzed in detail in mathematics. The simplest of them (the well-known «maximum and minimum problems») are familiar to every engineer: to find the maximum or minimum (in short, the «extremum») of a function, they prescribe differentiating it with respect to its arguments, equating the derivatives to zero and solving the resulting system of equations. However, this classical method has only limited application in operations research. First, when there are many arguments, solving the system of equations is often not easier but harder than a direct search for the extremum. Second, the extremum is often reached not at the point where the derivatives vanish (such a point may not exist at all), but somewhere on the boundary of the region over which the arguments vary. All the specific difficulties of the so-called «multidimensional variational problem with constraints» arise, sometimes overwhelming in complexity even for modern computers. Moreover, we must not forget that the function W may have no derivatives at all: for example, it may be integer-valued, or defined only for integer values of the arguments. All this makes the task of finding an extremum far less easy than it seems at first glance. The optimization method should always be chosen based on the features of the function W and the type of constraints imposed on the elements of the solution. For example, if W depends linearly on the elements of the solution x1, x2, …, and the constraints imposed on x1, x2, … have the form of linear equalities or inequalities, the problem of linear programming arises, which is solved by relatively simple methods (we will get acquainted with some of them later).
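For two variables, the key fact behind linear programming can be shown directly: since a linear W attains its optimum at a vertex of the feasible polygon, one can simply enumerate all pairwise intersections of the constraint lines. This is only a sketch of the geometric idea, not a real LP solver; the objective coefficients and constraints are hypothetical.

```python
# Tiny 2-variable linear program by vertex enumeration.
# Constraints are stored as (a1, a2, b), meaning a1*x1 + a2*x2 <= b;
# the non-negativity conditions are rewritten in the same form.
from itertools import combinations

constraints = [
    (1, 1, 4),    # x1 + x2 <= 4
    (1, 0, 3),    # x1 <= 3
    (0, 1, 2),    # x2 <= 2
    (-1, 0, 0),   # -x1 <= 0  (i.e. x1 >= 0)
    (0, -1, 0),   # -x2 <= 0  (i.e. x2 >= 0)
]

def objective(x1, x2):
    return 3 * x1 + 2 * x2   # hypothetical linear efficiency indicator W

best = None
for (a1, a2, b), (c1, c2, d) in combinations(constraints, 2):
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        continue                      # parallel lines: no vertex
    x1 = (b * c2 - a2 * d) / det      # Cramer's rule for the 2x2 system
    x2 = (a1 * d - b * c1) / det
    if all(p * x1 + q * x2 <= r + 1e-9 for p, q, r in constraints):
        if best is None or objective(x1, x2) > best[0]:
            best = (objective(x1, x2), x1, x2)

print(best)  # → (11.0, 3.0, 1.0)
```

Real problems have many variables, where this enumeration explodes combinatorially; the simplex method and its relatives exist precisely to walk the vertices efficiently.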
If the function W is convex, special methods of «convex programming» are used, with their special case, «quadratic programming». To optimize the management of multi-stage operations, the method of «dynamic programming» can be applied. Finally, there is a whole set of numerical methods for finding the extrema of functions of many arguments, specially adapted for implementation on computers. Thus, in the deterministic case considered, the problem of optimizing the solution reduces to the mathematical problem of finding the extremum of a function, which may present computational, but not fundamental, difficulties.
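The idea of dynamic programming for a multi-stage operation can be sketched as follows: at each stage we choose a control, and Bellman's recursion maximizes the total gain stage by stage. The stage/state/control table below is entirely hypothetical, a toy stand-in for a real multi-stage operation model.

```python
# Dynamic programming over a two-stage operation (toy numbers).
# reward[stage][state][control] -> (gain, next_state)
reward = [
    {0: {'a': (3, 0), 'b': (1, 1)}},
    {0: {'a': (1, 0), 'b': (3, 1)},
     1: {'a': (4, 0), 'b': (0, 1)}},
]

def best_value(stage, state):
    # Bellman recursion: the best total gain from this stage onward
    if stage == len(reward):
        return 0
    return max(gain + best_value(stage + 1, nxt)
               for gain, nxt in reward[stage][state].values())

print(best_value(0, 0))  # → 6
```

Note that the greedy stage-by-stage choice would take control 'a' at stage 0 in any case here, but in general dynamic programming matters exactly when the locally best control spoils the later stages.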
Let us not forget, however, that we have so far considered the simplest case, when only two groups of factors appear in the problem: the given conditions α1, α2, … and the elements of the solution x1, x2, … Real operations research problems are often reduced to a scheme in which, in addition to the two groups of factors α1, α2, …, x1, x2, …, there is a third: unknown factors ξ1, ξ2, …, whose values cannot be predicted in advance.
In this case, the W performance indicator depends on all three groups of factors:
W = W (α1, α2, …; x1, x2, …; ξ1, ξ2, …)
And the problem of solution optimization can be formulated as follows:
Under given conditions α1, α2, …, and taking into account the presence of unknown factors ξ1, ξ2, …, find such elements of the solution x1, x2, … as will, if possible, maximize the efficiency indicator W.
This is a different problem, no longer purely mathematical (it is not for nothing that the reservation «if possible» is made in its formulation). The presence of unknown factors moves the problem into a new quality: it turns into a problem of choosing a solution under conditions of uncertainty.
However, uncertainty is uncertainty. If the conditions of the operation are unknown, we cannot optimize the solution as successfully as we could with more information. Therefore, any decision made under conditions of uncertainty is worse than a decision made under fully determined conditions. It is our business to impart to our decision, as far as possible, the features of reasonableness. It is not for nothing that one of the prominent foreign experts in operations research, T. L. Saaty, defining his subject, writes that «operations research is the art of giving bad answers to those practical questions to which even worse answers are given by other methods.»
The task of making a decision under uncertainty is encountered at every step in life. Suppose, for example, that we are going on a trip and are packing a suitcase. The size of the suitcase is limited (conditions α1, α2, …); the weather in the travel areas is not known in advance (ξ1, ξ2, …). Which items of clothing (x1, x2, …) should we take with us? We solve this operations research problem, of course, without any mathematical apparatus, though relying on some statistical data, say, about the weather in different areas, as well as on our own susceptibility to colds; consciously or unconsciously, we carry out something like an optimization of the decision. Curiously, different people seem to use different performance indicators. A young person is likely to seek to maximize the number of pleasant impressions from the trip, while an elderly traveler, perhaps, wants to minimize the likelihood of illness.
Now let us take a more serious task. A system of protective structures is being designed to protect an area from floods. Neither the moments of onset of floods nor their size are known in advance. Yet the system must still be designed.
In order to make such decisions not at random, by inspiration, but soberly, with open eyes, modern science has a number of methodological techniques. The use of one or the other of them depends on the nature of the unknown factors, where they come from and by whom they are controlled.
The simplest case of uncertainty is the case when the unknown factors ξ1, ξ2,… are random variables (or random functions) whose statistical characteristics (say, distribution laws) are known to us or, in principle, can be obtained. We will call such problems of operations research stochastic problems, and the inherent uncertainty – stochastic uncertainty.
Here is an example of a stochastic operations research problem. Suppose the work of a catering enterprise is being organized. We do not know in advance exactly how many visitors will come to it, how long the service of each of them will take, and so on. However, the characteristics of these random variables, if not already at our disposal, can be obtained statistically.
Let us now assume that we have before us a stochastic problem of operations research, and the unknown factors ξ1, ξ2, … are ordinary random variables with some (in principle known) probabilistic characteristics. Then the efficiency indicator W, depending on these factors, will also be a random variable.
The first thing that comes to mind is to take as an indicator of efficiency not the random variable W itself, but its average value (mathematical expectation)
W̄ = M [W (α1, α2, …; x1, x2, …; ξ1, ξ2, …)]
and choose such a solution x1, x2, … for which this average value is maximized.
Note that this is exactly what we did when, in a number of examples of operations whose outcome depends on random factors, we chose as the efficiency indicator the average value of the quantity we wished to maximize (or minimize): the «average income» per unit of time, the «average relative downtime», and so on. In most cases, this approach to solving stochastic problems of operations research is fully justified. If we choose a solution based on the requirement that the average value of the performance indicator be maximized, we will certainly do better than if we chose a solution at random.
But what about the element of uncertainty? Of course, to some extent it remains. The success of each individual operation carried out with random values of the parameters ξ1, ξ2, …, can be very different from the expected average, both upwards and, unfortunately, downwards. We should be comforted by the following: by organizing the operation so that the average value of W is maximized and repeating the same (or similar) operations many times, we will ultimately gain more than if we did not use the calculation at all.
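The «maximize the mean» rule is easy to mechanize by Monte Carlo: for each candidate decision we average W over many simulated draws of the random factor and keep the decision with the best average. The model W(x, ξ) and the distribution of ξ below are hypothetical, a sketch of the approach rather than any particular railway model.

```python
# Maximizing the expected efficiency M[W] by Monte Carlo simulation.
import random

def W(x, xi):
    # Hypothetical model: the payoff is best when the decision x
    # matches the realized value of the random factor xi
    return 10 - (x - xi) ** 2

def mean_W(x, trials=10_000, seed=1):
    # Fixed seed: every candidate is scored on the same draws of xi,
    # which makes the comparison between candidates fair
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xi = rng.gauss(mu=3.0, sigma=1.0)   # assumed known distribution
        total += W(x, xi)
    return total / trials

candidates = [1, 2, 3, 4, 5]
best = max(candidates, key=mean_W)
print(best)  # → 3
```

Here M[W] = 10 − (x − 3)² − 1, so the decision x = 3 is optimal on average, even though any single realization of the operation may do much better or much worse, which is precisely the residual uncertainty discussed above.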
Thus, choosing the solution that maximizes the average value of the efficiency indicator W is fully justified when the operation possesses repeatability. A loss in one case is compensated by a gain in another, and in the end our solution will be profitable.
But what if we are dealing with an operation that is not repeatable but unique, carried out only once? Here a solution that simply maximizes the average value of W would be imprudent. It would be more cautious to guard against unnecessary risk by demanding, for example, that the probability of obtaining an unacceptably small value of W, say W < w0, be sufficiently small:
P (W < w0) ≤ γ,
where γ is some small number, so small that an event with probability γ can be considered practically impossible. This constraint can be taken into account, along with others, when solving the optimization problem. Then we will look for a solution that maximizes the average value of W, subject to an additional «reinsurance» condition.
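The «reinsurance» condition can be sketched in the same Monte Carlo style: first discard every candidate whose estimated P(W < w0) exceeds γ, then maximize the average W over the survivors. The newsvendor-style payoff model, the demand distribution, and the thresholds below are all hypothetical illustration values.

```python
# Maximizing the mean W under the chance constraint P(W < w0) <= gamma.
import random

W0, GAMMA = 0.0, 0.07   # "unacceptably small" threshold and risk level

def outcomes(x, trials=20_000, seed=7):
    # Hypothetical model: stock x units at cost 1, sell min(x, demand)
    # at price 2, with random demand ~ N(10, 3)
    rng = random.Random(seed)
    return [2 * min(x, rng.gauss(10.0, 3.0)) - x for _ in range(trials)]

def mean(v):
    return sum(v) / len(v)

def risk(v):
    return sum(w < W0 for w in v) / len(v)   # estimate of P(W < w0)

candidates = [8, 10, 12, 14]
feasible = [x for x in candidates if risk(outcomes(x)) <= GAMMA]
best = max(feasible, key=lambda x: mean(outcomes(x)))
print(feasible, best)  # → [8, 10] 10
```

Without the constraint, stocking more looks tempting on average, but the larger stocks fail the risk test: the condition-constraint cuts them off before the averages are even compared.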
The case of stochastic uncertainty of conditions considered above is relatively benign. The situation is much worse when the unknown factors ξ1, ξ2, … cannot be described by statistical methods. This happens in two cases: either the probability distribution of the parameters ξ1, ξ2, … exists in principle but the corresponding statistical data cannot be obtained, or the probability distribution does not exist at all.
Let us give an example related to the last, most «harmful» category of uncertainty. Suppose that some commercial and industrial operation is planned whose success depends on the length of skirts ξ that women will wear in the coming year. A probability distribution for the parameter ξ cannot, in principle, be obtained from any statistical data. One can only try to guess its plausible values in a purely speculative way.
Let us consider just such a case of «bad uncertainty»: the effectiveness of the operation depends on unknown parameters ξ1, ξ2, … about which we have no information and can only make assumptions. Let us try to solve the problem.
The first thing that comes to mind is to assign some (more or less plausible) values to the parameters ξ1, ξ2, … and find a conditionally optimal solution for them. Suppose that, having spent a great deal of effort and time (our own and the machine's), we have done so. So what? Will the conditionally optimal solution found be good under other conditions? As a rule, no. Therefore, its value is limited. In this case it is reasonable to seek, not a solution that is strictly optimal for some particular conditions, but a compromise solution that, while not optimal for any conditions, turns out to be acceptable over their whole range. At present, a full-fledged scientific «theory of compromise» does not yet exist (although there are some attempts in this direction in decision theory). Usually the final choice of a compromise solution is made by a person. Based on preliminary calculations, in the course of which a large number of direct problems are solved for different conditions and different solutions, he can assess the strengths and weaknesses of each option and make a choice based on these estimates. For this it is not necessary (although sometimes curious) to know the exact conditional optimum for each set of conditions. Mathematical variational methods recede into the background in this case.
When considering operations research problems with «bad uncertainty», it is always useful to confront different approaches and different points of view. Among the latter, one should note a position often used because of its mathematical definiteness, which can be called the «position of extreme pessimism». It boils down to this: always count on the worst conditions and choose the solution that gives the maximum effect under these worst conditions. If, under these conditions, it gives a value of the efficiency indicator equal to W*, this means that under no circumstances will the efficiency of the operation be less than W* (the «guaranteed winnings»). This approach is tempting because it gives a clear formulation of the optimization problem and the possibility of solving it by correct mathematical methods. But in using it we must not forget that this point of view is an extreme one, on the basis of which one can obtain only an excessively cautious, «reinsurance» decision, which is unlikely to be reasonable. Calculations based on the point of view of «extreme pessimism» should always be adjusted with a reasonable dose of optimism. The opposite point of view, that of extreme or «dashing» optimism which always counts on the most favorable conditions, is hardly advisable either; still, a certain amount of risk in making a decision should be present.
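The «guaranteed winnings» rule has a simple mechanical form: for each decision, take the worst value of W over the possible conditions, then pick the decision whose worst case is best (the maximin). The payoff table below is hypothetical.

```python
# The "position of extreme pessimism": a maximin choice over a payoff table.
# W[decision] is the list of efficiency values under each possible condition.
W = {
    'cautious':   [5, 5, 5],
    'balanced':   [3, 7, 6],
    'aggressive': [1, 4, 12],
}

guarantees = {x: min(row) for x, row in W.items()}   # worst case per decision
best = max(guarantees, key=guarantees.get)           # maximin decision
print(best, guarantees[best])  # → cautious 5
```

The example also shows why the rule over-insures: the «aggressive» option can pay 12 and the «balanced» one is nearly as safe, yet the pessimist locks in a guaranteed 5 and forgoes both.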
Let us mention one rather original method used in choosing a solution under «bad uncertainty»: the so-called method of expert assessments. It is often used in other fields, such as futurology. Roughly speaking, it consists in gathering a team of competent people («experts»), each of whom is asked to answer a question (for example, to name the date when this or that discovery will be made); the answers obtained are then processed like statistical material, making it possible (to paraphrase T. L. Saaty) «to give a bad answer to a question that cannot be answered in any other way.» Such expert assessments of unknown conditions can also be applied to operations research problems under «bad uncertainty». Each expert evaluates the degree of plausibility of the various variants of conditions, attributing to them certain subjective probabilities. Despite the subjective nature of each expert's estimates, by averaging the estimates of the whole team one can obtain something more objective and useful. Incidentally, the subjective assessments of different experts differ far less than one might expect. In this way, the problem of operations research with «bad uncertainty» is, as it were, reduced to a relatively benign stochastic problem. Of course, the result obtained cannot be treated too trustingly, forgetting its dubious origin, but together with results arising from other points of view it can still help in choosing a solution.
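The reduction described here can be sketched directly: average the experts' subjective probabilities over the condition variants, then maximize the expected W under that averaged distribution, exactly as in the stochastic case. All experts, probabilities, and payoffs below are hypothetical.

```python
# Expert assessments: averaging subjective probabilities turns
# "bad uncertainty" into an ordinary stochastic choice.
experts = [
    {'mild': 0.6, 'severe': 0.4},
    {'mild': 0.5, 'severe': 0.5},
    {'mild': 0.7, 'severe': 0.3},
]
conditions = ['mild', 'severe']

# Panel-averaged subjective distribution over the condition variants
avg = {c: sum(e[c] for e in experts) / len(experts) for c in conditions}

# W[decision][condition]: hypothetical payoff table
W = {'light_kit': {'mild': 9, 'severe': 2},
     'heavy_kit': {'mild': 6, 'severe': 7}}

best = max(W, key=lambda x: sum(avg[c] * W[x][c] for c in conditions))
print(avg, best)
```

With these numbers the averaged panel leans toward mild conditions, yet the expected payoffs (6.2 against 6.4) still favor the safer «heavy_kit»: a small shift in the subjective probabilities could flip the choice, which is why the result should not be trusted uncritically.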
Let us name another approach to choosing a solution under conditions of uncertainty: so-called «adaptive algorithms» of control. Suppose the operation O in question belongs to the category of operations repeated many times, and some of its conditions ξ1, ξ2, … are random and unknown in advance. However, we have no statistics on the probability distribution of these conditions and no time to collect such data (for example, collecting the statistics would take considerable time, and the operation must be performed now). Then it is possible to build and apply a self-adapting control algorithm, which is gradually improved in the course of its application. At first some (probably not the best) algorithm is taken, but as it is applied it is improved from time to time, since the experience of its application indicates how it should be changed. The result is something like the activity of a person who, as is well known, «learns from mistakes.» Such adaptive control algorithms seem to have a great future.
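One minimal sketch of such an adapting algorithm is an epsilon-greedy rule: start with no statistics at all, occasionally try each control at random, and otherwise repeat whichever control has paid best so far. The two candidate controls and their random payoffs are hypothetical stand-ins for a repeatedly performed operation.

```python
# An adapting control rule: learn which control is better while operating.
import random

rng = random.Random(0)
payoff = {'A': lambda: rng.gauss(5.0, 1.0),   # truly better on average
          'B': lambda: rng.gauss(3.0, 1.0)}

totals = {'A': 0.0, 'B': 0.0}   # accumulated payoff per control
counts = {'A': 0, 'B': 0}       # number of times each control was used

for t in range(2000):
    if min(counts.values()) == 0 or rng.random() < 0.1:
        choice = rng.choice(['A', 'B'])    # explore occasionally
    else:                                   # exploit the best average so far
        choice = max(totals, key=lambda k: totals[k] / counts[k])
    totals[choice] += payoff[choice]()
    counts[choice] += 1

print(max(totals, key=lambda k: totals[k] / counts[k]))  # → A
```

After enough repetitions the accumulated averages identify the better control, even though no distribution was known at the start; the early runs, performed under the «probably not the best» rule, are the price paid for learning.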
Finally, we will consider a special case of uncertainty, not just «bad» but «hostile.» This kind of uncertainty arises in so-called «conflict situations», in which the interests of two (or more) parties with different goals collide. Conflict situations are characteristic of military operations and, in part, of sports competitions; in a capitalist society, of market competition. Such situations are dealt with by a special branch of mathematics, game theory. (It is often presented as part of the discipline «operations research.») The most pronounced case of a conflict situation is direct antagonism, when two sides A and B («us» and the «adversary») clash in a conflict, pursuing directly opposite goals.
The theory of antagonistic games is based on the proposition that we are dealing with a reasonable and far-sighted adversary who always chooses his behavior so as to prevent us from achieving our goal. Under these assumptions, game theory makes it possible to choose a solution that is optimal in a certain sense, i.e. the least risky in the struggle against a cunning and malicious opponent.
However, such a point of view on the conflict situation cannot be absolutized either. Life experience suggests that in conflict situations (for example, in hostilities), it is not the most cautious, but the most inventive who wins, who knows how to take advantage of the enemy’s weakness, deceive him, go beyond the conditions and methods of behavior known to him. So in conflict situations, game theory provides an extreme solution arising from a pessimistic, «reinsurance» position. Yet, if treated with due criticism, it, along with other considerations, can help in the final choice.
Closely related to game theory is the so-called «statistical decision theory». It is concerned with the preliminary mathematical justification of rational decisions under uncertainty, with the development of reasonable «strategies of behavior» in these conditions. One possible approach to such problems is to consider the uncertain situation as a kind of «game», but played not against a consciously opposing, reasonable adversary, but against «nature». By «nature» the theory of statistical decisions understands a certain disinterested third party, indifferent to the outcome of the game, whose behavior is not known in advance.
Finally, one general remark. When justifying a decision under conditions of uncertainty, whatever we do, an element of uncertainty remains. Therefore it is unreasonable to impose excessive demands on the accuracy of solving such problems. Instead of unambiguously indicating a single, exactly «optimal» (from some point of view) solution, it is better to single out a whole area of acceptable solutions that turn out to be insignificantly worse than the others, no matter what point of view we adopt. Within this area, the persons responsible for the decision should make their final choice.