The 0-1 multidimensional knapsack problem: bounds on the sum of the variables at the optimum
The purpose of this paper is to apply the second order η-approximation method, introduced to optimization theory by Antczak [2], to obtain new second order η-saddle point criteria for vector optimization problems involving second order invex functions. To this end, a second order η-saddle point and the second order η-Lagrange function are defined for the second order η-approximated vector optimization problem constructed in this approach. Then, the equivalence between a (weakly) efficient solution of the...
In this paper, by using the second order η-approximation method introduced by Antczak [3], new saddle point results are obtained for a nonlinear mathematical programming problem involving second order invex functions with respect to the same function η. Moreover, a second order η-saddle point and a second order η-Lagrange function are defined for the so-called second order η-approximated optimization problem constructed in this method. Then, the equivalence between an optimal solution in the original...
We study the existence of sample path average cost (SPAC) optimal policies for Markov control processes on Borel spaces with strictly unbounded costs, i.e., costs that grow without bound on the complements of compact subsets. Assuming only that the cost function is lower semicontinuous and that the transition law is weakly continuous, we show the existence of a relaxed policy with 'minimal' expected average cost, and that the optimal average cost is the limit of discounted programs. Moreover, we...
We deal with semi-Markov control processes (SMCPs) on Borel spaces with unbounded cost and mean holding time. Under suitable growth conditions on the cost function and the mean holding time, together with stability properties of the embedded Markov chains, we show the equivalence of several average cost criteria as well as the existence of stationary optimal policies with respect to each of these criteria.
In this paper, we present a method for generating scenarios for two-stage stochastic programs, using multivariate distributions specified by their marginal distributions and their correlation matrix. The margins are described by their cumulative distribution functions, and we allow each margin to be of a different type. We demonstrate the method on a model from stochastic service network design and show that it improves the stability of the scenario-generation process, compared to both sampling and a...
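One common way to realize such a margins-plus-correlation specification is a Gaussian (NORTA-style) copula: sample correlated standard normals, map them to uniforms through the normal CDF, then push each uniform through the inverse CDF of its margin. The sketch below is illustrative only, not the authors' scenario-generation method; the two margins (exponential and uniform) and the target normal correlation are invented for the example.

```python
import math
import random

def gen_scenarios(n, rho, lam=0.5, a=10.0, b=30.0, seed=1):
    """Gaussian-copula sketch: two margins, exponential(lam) and uniform[a, b],
    coupled through a target correlation rho on the underlying normals."""
    random.seed(seed)
    # Standard normal CDF via the error function.
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    scenarios = []
    for _ in range(n):
        z1 = random.gauss(0.0, 1.0)
        # Correlate the second normal with the first (2-dim Cholesky factor).
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
        u1, u2 = phi(z1), phi(z2)
        x1 = -math.log(1.0 - u1) / lam   # inverse CDF of exponential(lam)
        x2 = a + (b - a) * u2            # inverse CDF of uniform[a, b]
        scenarios.append((x1, x2))
    return scenarios

scens = gen_scenarios(20000, rho=0.7)
```

Note that this construction preserves rank correlation exactly, while the Pearson correlation of the transformed margins is somewhat distorted; matching a prescribed Pearson correlation matrix exactly requires adjusting the normal correlation beforehand.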
This paper describes a procedure that uses particle swarm optimization (PSO) combined with the Lagrangian relaxation (LR) framework to solve a power-generator scheduling problem known as the unit commitment problem (UCP). The UCP consists of determining the schedule and production amounts of the generating units within a power system, subject to operating constraints. The LR framework is applied to relax the coupling constraints of the optimization problem. Thus, the UCP is separated into independent optimization...
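As a rough illustration of the decomposition idea (not the paper's actual procedure), the toy below relaxes the single demand constraint of a two-unit, one-period commitment problem with a multiplier λ, solves the resulting per-unit subproblems in closed form, and uses a minimal PSO to maximize the dual function over λ. All unit data and PSO parameters are invented for the example.

```python
import random

# Invented two-unit, one-period data: (marginal cost a, no-load cost b, pmin, pmax).
UNITS = [(10.0, 100.0, 20.0, 100.0), (20.0, 50.0, 20.0, 100.0)]
DEMAND = 150.0  # primal optimum here: both units on, cost 2150

def dual(lam):
    """Lagrangian dual of: min sum_i u_i*(a_i*p_i + b_i) s.t. sum_i u_i*p_i = DEMAND.
    Relaxing the demand constraint decouples the problem into per-unit subproblems."""
    q = lam * DEMAND
    for a, b, pmin, pmax in UNITS:
        p = pmax if a < lam else pmin       # minimize (a - lam)*p over [pmin, pmax]
        q += min(0.0, b + (a - lam) * p)    # commit the unit only if it pays off
    return q

def pso_max(f, lo, hi, n=10, iters=60, seed=0):
    """Minimal one-dimensional PSO maximizing f over [lo, hi]."""
    random.seed(seed)
    xs = [random.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest, pval = xs[:], [f(x) for x in xs]
    g = max(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]
    for _ in range(iters):
        for i in range(n):
            vs[i] = (0.7 * vs[i]
                     + 1.5 * random.random() * (pbest[i] - xs[i])
                     + 1.5 * random.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            v = f(xs[i])
            if v > pval[i]:
                pbest[i], pval[i] = xs[i], v
                if v > gval:
                    gbest, gval = xs[i], v
    return gbest, gval

lam_star, q_star = pso_max(dual, 0.0, 40.0)
```

By weak duality, the dual value found this way never exceeds the primal optimum; the residual gap reflects the nonconvexity introduced by the commitment decisions.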
A previous paper by the same authors presented a general theory solving (finite horizon) feasibility and optimization problems for linear dynamic discrete-time systems with polyhedral constraints. We derived necessary and sufficient conditions for the existence of solutions without assuming any restrictive hypothesis. For the solvable cases we also provided the inequative feedback dynamic system, which generates by forward recursion all and only the feasible (or optimal, depending on the case)...
A second order optimality condition for multiobjective optimization with a set constraint is developed; this condition is expressed as the impossibility of nonhomogeneous linear systems. When the constraint is given in terms of inequalities and equalities, it can be turned into a John-type multiplier rule, using a nonhomogeneous Motzkin theorem of the alternative. Under weak second order regularity assumptions, Karush-Kuhn-Tucker type conditions are then deduced.
The article is devoted to Markov reward chains in a discrete-time setting with finite state spaces. Unfortunately, the usual optimization criteria examined in the literature on Markov decision chains, such as total discounted reward, total reward up to reaching some specific state (the so-called first passage models), or mean (average) reward optimality, may be quite insufficient to characterize the problem from the point of view of a decision maker. To this end it seems that it may be preferable, if not necessary...