Neural networks learning as a multiobjective optimal control problem.

Maciej Krawczak (1997)

Mathware and Soft Computing

The supervised learning process of multilayer feedforward neural networks can be considered as a class of multi-objective, multi-stage optimal control problems. An iterative parametric minimax method is proposed in which the original optimization problem is embedded into a weighted minimax formulation. The resulting auxiliary parametric optimization problems at the lower level have simple structures that are readily tackled by efficient solution methods, such as dynamic programming or the error...
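As a generic illustration only (the symbols u, J_i and w are mine, not the paper's), the weighted minimax embedding mentioned in the abstract has the form

\[
\min_{u}\ \max_{1\le i\le m}\ w_i\, J_i(u), \qquad w_i \ge 0,\quad \sum_{i=1}^{m} w_i = 1,
\]

where u collects the network weights, the J_i are the individual (per-output or per-stage) error criteria, and varying the weight vector w traces out candidate Pareto-optimal trade-offs between them.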

New algorithms for coupled tasks scheduling – a survey

Jacek Blazewicz, Grzegorz Pawlak, Michal Tanas, Wojciech Wojciechowicz (2012)

RAIRO - Operations Research - Recherche Opérationnelle

The coupled tasks scheduling problem is a class of scheduling problems introduced for the beam steering software of sophisticated radar devices called phased arrays. Due to the increasing popularity of such radars, the importance of coupled task scheduling is constantly growing. Unfortunately, most coupled task problems are NP-hard, and only a few practically usable algorithms for them have been found. This paper provides a survey of the known complexity results for various variants of...
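For orientation, the standard coupled-task model surveyed in this line of work (notation mine) consists of tasks

\[
T_j = (a_j,\ L_j,\ b_j),
\]

to be scheduled on a single machine: the first operation of length a_j and the second operation of length b_j of the same task must be separated by an exact gap of L_j time units, during which operations of other tasks may be interleaved, and the usual objective is to minimize the makespan.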

New complexity analysis of a full Nesterov-Todd step infeasible interior-point algorithm for symmetric optimization

Behrouz Kheirfam, Nezam Mahdavi-Amiri (2013)

Kybernetika

A full Nesterov-Todd step infeasible interior-point algorithm is proposed for solving linear programming problems over symmetric cones by using the Euclidean Jordan algebra. Using a new approach, we also provide a search direction and show that the iteration bound coincides with the best known bound for infeasible interior-point methods.
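As a generic backdrop in Euclidean Jordan algebra notation (this is the standard model, not the paper's specific search direction), the symmetric cone linear program and its central path read

\[
\min\ \langle c, x\rangle \quad \text{s.t.}\quad Ax = b,\ x \in \mathcal{K},
\]

\[
Ax = b, \qquad A^{*}y + s = c, \qquad x \circ s = \mu e, \qquad x, s \in \operatorname{int}\mathcal{K},
\]

where \(\circ\) is the Jordan product, e the identity element of the algebra, and \(\mu > 0\) the barrier parameter. A full Nesterov-Todd step method takes unit steps along the NT-scaled Newton direction for this system without any line search; infeasible variants additionally drive the residuals of the two linear equations to zero.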

New Farkas-type constraint qualifications in convex infinite programming

Nguyen Dinh, Miguel A. Goberna, Marco A. López, Ta Quang Son (2007)

ESAIM: Control, Optimisation and Calculus of Variations

This paper provides KKT and saddle point optimality conditions, duality theorems and stability theorems for consistent convex optimization problems posed in locally convex topological vector spaces. The feasible sets of these optimization problems are formed by those elements of a given closed convex set which satisfy a (possibly infinite) convex system. Moreover, all the involved functions are assumed to be convex, lower semicontinuous and proper (but not necessarily real-valued). The key result...
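In generic notation (mine, not the paper's), the problem class is the convex infinite program

\[
\min\ f(x) \quad \text{s.t.}\quad f_t(x) \le 0\ \ (t \in T), \quad x \in C,
\]

where C is a closed convex subset of a locally convex topological vector space, T is an arbitrary (possibly infinite) index set, and f and all f_t are convex, lower semicontinuous and proper; Farkas-type constraint qualifications then govern when the KKT, saddle point and duality statements hold for this model.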

New hybrid conjugate gradient method for nonlinear optimization with application to image restoration problems

Youcef Elhamam Hemici, Samia Khelladi, Djamel Benterki (2024)

Kybernetika

The conjugate gradient method is one of the most effective algorithms for unconstrained nonlinear optimization problems, owing to its low storage requirements and simple structure; these properties motivate us to propose a new hybrid conjugate gradient method through a convex combination of β_k^{RMIL} and β_k^{HS}. We compute the convex parameter θ_k using the Newton direction. Global convergence is established under the strong Wolfe conditions. Numerical experiments show the...
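In standard conjugate gradient notation (the formulas for the two classical parameters are the usual ones from the literature; the paper's specific rule for choosing θ_k via the Newton direction is not reproduced here), the hybrid direction described in the abstract reads

\[
d_0 = -g_0, \qquad d_k = -g_k + \beta_k d_{k-1}, \qquad \beta_k = \theta_k\,\beta_k^{RMIL} + (1-\theta_k)\,\beta_k^{HS}, \quad \theta_k \in [0,1],
\]

\[
\beta_k^{RMIL} = \frac{g_k^{\top}(g_k - g_{k-1})}{\|d_{k-1}\|^{2}}, \qquad \beta_k^{HS} = \frac{g_k^{\top}(g_k - g_{k-1})}{d_{k-1}^{\top}(g_k - g_{k-1})},
\]

where g_k is the gradient at the k-th iterate and the step length along d_k is required to satisfy the strong Wolfe conditions.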

New regions of stability in input optimization

Sheng Huang, Sanjo Zlobec (1988)

Aplikace matematiky

Using point-to-set mappings we identify two new regions of stability in input optimization. Then we extend various results from the literature on optimality conditions, continuity of Lagrange multipliers, and the marginal value formula over the new and some old regions of stability.
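Input optimization, in the sense used here, studies a convex program whose data depend on a parameter (the input); in generic notation (mine, not the paper's)

\[
\min_{x}\ f^{0}(x,\theta) \quad \text{s.t.}\quad f^{i}(x,\theta) \le 0,\ \ i = 1,\dots,m,
\]

a region of stability is a set of inputs θ over which the perturbed problem behaves continuously (for instance, the feasible-set mapping is lower semicontinuous there), and it is on such regions that the optimality conditions and the marginal value formula for the optimal value function are established.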

New results on semidefinite bounds for ℓ1-constrained nonconvex quadratic optimization

Yong Xia (2013)

RAIRO - Operations Research - Recherche Opérationnelle

In this paper, we show that the direct semidefinite programming (SDP) bound for the nonconvex quadratic optimization problem over the ℓ1 unit ball (QPL1) is equivalent to the optimal d.c. (difference of convex) bound for the standard quadratic programming reformulation of QPL1. Then we disprove a conjecture about the tightness of the direct SDP bound. Finally, as an extension of QPL1, we study the relaxation problem of sparse principal component analysis, denoted by QPL2L1. We show that the...
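In generic notation (the precise construction of the bounds compared in the paper is not reproduced here), QPL1 is

\[
\min\ x^{\top} Q x + c^{\top} x \quad \text{s.t.}\quad \|x\|_{1} \le 1,
\]

with Q indefinite in general; a typical direct SDP bound is obtained by lifting to a matrix variable X, requiring \(\begin{pmatrix} 1 & x^{\top} \\ x & X \end{pmatrix} \succeq 0\) in place of X = x x^{\top}, and encoding the ℓ1 ball constraint through linear constraints on the lifted variables.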
