Displaying 41 – 60 of 101

État de l’art des méthodes d’«optimisation globale»

Gérard Berthiau, Patrick Siarry (2001)

RAIRO - Operations Research - Recherche Opérationnelle

We present a review of the main “global optimization” methods. The paper comprises one introduction and two parts. In the introduction, we recall some generalities about nonlinear unconstrained optimization and we list some classifications which have been proposed for global optimization methods. We then describe, in the first part, various “classical” global optimization methods, most of which were available long before the appearance of Simulated Annealing (a key event in this field). There...
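The abstract above singles out Simulated Annealing as a key event in global optimization. As a minimal sketch of the idea (not a reproduction of any method surveyed in the paper), the following accepts uphill moves with a temperature-dependent Boltzmann probability; the test function, cooling schedule, and neighbor move are illustrative assumptions:

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.95, iters=2000):
    """Minimize f from x0 with a basic simulated annealing loop."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x)
        fy = f(y)
        # Always accept downhill moves; accept uphill moves with
        # probability exp(-(fy - fx) / t), which shrinks as t cools.
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

random.seed(0)
# A multimodal 1-D test function with global minimum at x = 0.
f = lambda x: x * x + 10 * (1 - math.cos(2 * math.pi * x))
xbest, fbest = simulated_annealing(f, 5.0, lambda x: x + random.uniform(-0.5, 0.5))
print(xbest, fbest)
```

A purely local descent from x = 5 would stall in one of the cosine wells; the occasional uphill acceptance is what lets the search escape them.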

Experiences with Stochastic Algorithms for a Class of Constrained Global Optimisation Problems

Abdellah Salhi, L.G. Proll, D. Rios Insua, J.I. Martin (2010)

RAIRO - Operations Research

The solution of a variety of classes of global optimisation problems is required in the implementation of a framework for sensitivity analysis in multicriteria decision analysis. These problems have linear constraints, some of which have a particular structure, and a variety of objective functions, which may be smooth or non-smooth. The context in which they arise implies a need for a single, robust solution method. The literature contains few experimental results relevant to such a need. We...

Finding the principal points of a random variable

Emilio Carrizosa, E. Conde, A. Castaño, D. Romero-Morales (2001)

RAIRO - Operations Research - Recherche Opérationnelle

The p-principal points of a random variable X with finite second moment are those p points minimizing the expected squared distance from X to the closest point. Although the determination of principal points in general involves the resolution of a multiextremal optimization problem, existing procedures in the literature provide just a local optimum. In this paper we show that standard Global Optimization techniques can be applied.
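The objective above can be estimated from a Monte Carlo sample: the p principal points are then the p centres minimizing the mean squared distance over the sample. The sketch below uses a Lloyd-style alternating minimization, which is exactly the kind of *local* method the abstract cautions about (it is not the paper's global technique); the sample size and iteration count are illustrative assumptions. For a standard normal, the two principal points are known to be ±√(2/π) ≈ ±0.80:

```python
import random
import statistics

def principal_points(sample, p, iters=50):
    """Sample-based estimate of p principal points by alternating
    nearest-centre assignment and centre recomputation (a local method)."""
    centers = sorted(random.sample(sample, p))
    for _ in range(iters):
        clusters = [[] for _ in range(p)]
        for x in sample:
            j = min(range(p), key=lambda k: (x - centers[k]) ** 2)
            clusters[j].append(x)
        # Keep the old centre if a cluster happens to go empty.
        centers = [statistics.fmean(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

random.seed(1)
sample = [random.gauss(0.0, 1.0) for _ in range(10000)]
pts = principal_points(sample, 2)
print(pts)  # roughly [-0.80, 0.80] for N(0, 1)
```

On a symmetric unimodal sample like this one the local method happens to find the global optimum; the multiextremality the paper addresses shows up for less well-behaved distributions and larger p.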

First-Order Conditions for Optimization Problems with Quasiconvex Inequality Constraints

Ginchev, Ivan, Ivanov, Vsevolod I. (2008)

Serdica Mathematical Journal

2000 Mathematics Subject Classification: 90C46, 90C26, 26B25, 49J52. The constrained optimization problem min f(x), gj(x) ≤ 0 (j = 1,…,p) is considered, where f : X → R and gj : X → R are nonsmooth functions with domain X ⊂ Rn. First-order necessary and first-order sufficient optimality conditions are obtained when the gj are quasiconvex functions. The paper has two main features: to treat nonsmooth problems it makes use of Dini derivatives; to obtain more sensitive conditions, it admits directionally...

Full convergence of the proximal point method for quasiconvex functions on Hadamard manifolds

Erik A. Papa Quiroz, P. Roberto Oliveira (2012)

ESAIM: Control, Optimisation and Calculus of Variations

In this paper we propose an extension of the proximal point method to solve minimization problems with quasiconvex objective functions on Hadamard manifolds. To reach this goal, we initially extend the concepts of regular and generalized subgradient from Euclidean spaces to Hadamard manifolds and prove that, in the convex case, these concepts coincide with the classical one. For the minimization problem, assuming that the function is bounded from below, in the quasiconvex and lower semicontinuous...

Funzioni semiconcave, singolarità e pile di sabbia

Piermarco Cannarsa (2005)

Bollettino dell'Unione Matematica Italiana

Semiconcavity is a notion that generalizes concavity, preserving most of its properties while allowing a wider range of applications. This is a survey of the main points of the theory of semiconcave functions, with particular attention to the study of their singular sets. As an application, a representation formula for the solution of a dynamical model for granular matter is discussed.

How much do approximate derivatives hurt filter methods?

Caroline Sainvitu (2009)

RAIRO - Operations Research

In this paper, we examine the influence of approximate first and/or second derivatives on the filter-trust-region algorithm designed for solving unconstrained nonlinear optimization problems and proposed by Gould, Sainvitu and Toint in [12]. Numerical experiments carried out on small-scale unconstrained problems from the CUTEr collection describe the effect of the use of approximate derivatives on the robustness and the efficiency of the filter-trust-region method.
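A common source of the approximate derivatives studied above is finite differencing. This minimal sketch (not the paper's algorithm) shows a forward-difference gradient on the Rosenbrock function, a standard unconstrained test problem of the kind found in CUTEr; the step size h is an illustrative choice:

```python
def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient approximation of f at x.
    Truncation error is O(h); rounding error grows as h shrinks,
    so the total error is smallest for intermediate h."""
    fx = f(x)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h  # perturb one coordinate at a time
        g.append((f(xp) - fx) / h)
    return g

# Rosenbrock function; exact gradient at (1.2, 1.0) is (211.6, -88.0).
rosen = lambda x: 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2
g = fd_gradient(rosen, [1.2, 1.0])
print(g)
```

Each gradient estimate costs n + 1 function evaluations, which is one reason trust-region methods that tolerate inexact derivatives are attractive.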

Interpretation and optimization of the k-means algorithm

Kristian Sabo, Rudolf Scitovski (2014)

Applications of Mathematics

The paper gives a new interpretation and a possible optimization of the well-known k-means algorithm for searching for a locally optimal partition of the set 𝒜 = {aᵢ ∈ ℝⁿ : i = 1, …, m} which consists of k disjoint nonempty subsets π₁, …, πₖ, 1 ≤ k ≤ m. For this purpose, a new divided k-means algorithm was constructed as a limit case of the known smoothed k-means algorithm. It is shown that the algorithm constructed in this way coincides with the k-means algorithm if during the iterative procedure no data points appear in the Voronoi diagram....
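For reference, the classical k-means (Lloyd) iteration that the paper reinterprets alternates a nearest-centre (Voronoi) assignment with a centroid update; the smoothed and divided variants discussed in the abstract are not reproduced here. The two-cluster data and the deterministic seeding are illustrative assumptions:

```python
import random

def kmeans(points, k, init=None, iters=100):
    """Lloyd's k-means: alternate Voronoi assignment and centroid
    update until the centres stop changing."""
    centers = list(init) if init else random.sample(points, k)
    parts = [[] for _ in range(k)]
    for _ in range(iters):
        parts = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            parts[j].append(p)
        # Recompute centroids; keep the old centre if a part goes empty.
        new = [tuple(sum(c) / len(part) for c in zip(*part)) if part else centers[i]
               for i, part in enumerate(parts)]
        if new == centers:
            break
        centers = new
    return centers, parts

random.seed(0)
pts = [(random.gauss(0, 0.2), random.gauss(0, 0.2)) for _ in range(50)] + \
      [(random.gauss(3, 0.2), random.gauss(3, 0.2)) for _ in range(50)]
# Deterministic seeding (one point from each blob) keeps the sketch reproducible.
centers, parts = kmeans(pts, 2, init=[pts[0], pts[-1]])
print(sorted(centers))
```

Like the paper's setting, this finds only a locally optimal partition; the quality of the result depends on the initial centres.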
