
Displaying 1 – 20 of 25


Self-adaptation of parameters in a learning classifier system ensemble machine

Maciej Troć, Olgierd Unold (2010)

International Journal of Applied Mathematics and Computer Science

Self-adaptation is a key feature of evolutionary algorithms (EAs). Although EAs have been used successfully to solve a wide variety of problems, the performance of this technique depends heavily on the selection of the EA parameters. Moreover, the process of setting such parameters is considered a time-consuming task. Several research works have tried to deal with this problem; however, the construction of algorithms letting the parameters adapt themselves to the problem is a critical and open problem...

Cluster analysis (Shluková analysa)

Adolf Filáček, Václav Koutník, Jiří Vondráček (1977)

Časopis pro pěstování matematiky

Some methods of constructing kernels in statistical learning

Tomasz Górecki, Maciej Łuczak (2010)

Discussiones Mathematicae Probability and Statistics

This paper is a collection of numerous methods and results concerning the design of kernel functions. It gives a short overview of methods of building kernels in metric spaces, especially ℝⁿ and Sⁿ. However, we also present a new theory. Introducing kernels was motivated by the search for non-linear patterns by using linear functions in a feature space created by a non-linear feature map.
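The feature-map motivation mentioned in the abstract can be illustrated with a minimal sketch: a degree-2 polynomial kernel on ℝⁿ computes exactly the inner product of an explicit quadratic feature map, without ever materializing that map. The kernel choice and vectors below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def poly_kernel(x, z):
    """Degree-2 homogeneous polynomial kernel: k(x, z) = (x . z)^2."""
    return np.dot(x, z) ** 2

def phi(x):
    """Explicit feature map for that kernel: all pairwise products x_i * x_j."""
    return np.outer(x, x).ravel()

x = np.array([1.0, 2.0, 3.0])
z = np.array([0.5, -1.0, 2.0])

# The kernel value and the feature-space inner product coincide.
assert np.isclose(poly_kernel(x, z), np.dot(phi(x), phi(z)))
```

The point of working in a general metric space is that only `poly_kernel` needs to be defined there; the (possibly huge) feature map `phi` is implicit.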

Sparsity in penalized empirical risk minimization

Vladimir Koltchinskii (2009)

Annales de l'I.H.P. Probabilités et statistiques

Let (X, Y) be a random couple in S×T with unknown distribution P. Let (X1, Y1), …, (Xn, Yn) be i.i.d. copies of (X, Y), Pn being their empirical distribution. Let h1, …, hN : S ↦ [−1, 1] be a dictionary consisting of N functions. For λ ∈ ℝN, denote fλ := ∑j=1N λjhj. Let ℓ : T×ℝ ↦ ℝ be a given loss function, which is convex with respect to the second variable. Denote (ℓ•f)(x, y) := ℓ(y; f(x)). We study the following penalized empirical risk minimization problem λ̂ε := argminλ∈ℝN [Pn(ℓ•fλ) + ε‖λ‖ₚᵖ], which is an empirical version of the problem λε := argminλ∈ℝN [P(ℓ•fλ) + ε‖λ‖ₚᵖ] (here ε ≥ 0...
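For the special case of squared loss and p = 2, the penalized empirical risk problem above has a closed-form minimizer, which makes it easy to sketch numerically. With H[i, j] = hj(Xi), the stationarity condition of (1/n)‖y − Hλ‖² + ε‖λ‖² is (HᵀH/n + εI)λ = Hᵀy/n. The cosine dictionary and synthetic sample below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 200, 5

# Synthetic sample and an assumed dictionary h_j(x) = cos(j * x).
X = rng.uniform(-1.0, 1.0, size=n)
dictionary = [lambda x, k=k: np.cos(k * x) for k in range(N)]
H = np.column_stack([h(X) for h in dictionary])   # H[i, j] = h_j(X_i)
y = 2.0 * H[:, 1] + 0.1 * rng.standard_normal(n)  # response driven by h_1

eps = 0.05  # penalty level epsilon

# Closed-form ridge solution of the penalized empirical risk problem.
lam_hat = np.linalg.solve(H.T @ H / n + eps * np.eye(N), H.T @ y / n)

def objective(lam):
    """Empirical risk (squared loss) plus the l2-penalty, p = 2."""
    return np.mean((y - H @ lam) ** 2) + eps * np.sum(lam ** 2)

# The solver output should not be beaten by a nearby perturbation.
assert objective(lam_hat) <= objective(lam_hat + 0.01)
```

For p = 1 the penalty is the lasso and no closed form exists; the abstract's general formulation covers both regimes.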

Stacked regression with restrictions

Tomasz Górecki (2005)

Discussiones Mathematicae Probability and Statistics

When we apply stacked regression to classification, we need only discriminant indices, which can be negative. In many situations we want these indices to be positive, e.g., if we want to use them to compute posterior probabilities, or when we want to use stacked regression for combining classifiers. In such situations we have to use least-squares regression under the constraint βₖ ≥ 0, k = 1, 2, ..., K. In their earlier work [5], LeBlanc and Tibshirani used an algorithm given in [4]. However, in this paper...
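The constrained step described in the abstract, least squares under βₖ ≥ 0, can be sketched with a standard non-negative least-squares (NNLS) solver; whether this matches the algorithm of [4] is not claimed here. The matrix below stands in for the predictions of K component classifiers and is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Assumed setup: columns of A are predictions of K = 3 component classifiers.
A = rng.uniform(0.0, 1.0, size=(50, 3))
y = 0.7 * A[:, 0] + 0.3 * A[:, 2]  # target built from two of the classifiers

# Least squares under the constraint beta_k >= 0 for all k.
beta, residual = nnls(A, y)

assert np.all(beta >= 0)  # the non-negativity constraint holds by construction
```

The non-negative weights can then be normalized to sum to one if the combined indices are to be read as posterior probabilities.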

Statistical models for deformable templates in image and shape analysis

Stéphanie Allassonnière, Jérémie Bigot, Joan Alexis Glaunès, Florian Maire, Frédéric J.P. Richard (2013)

Annales mathématiques Blaise Pascal

High-dimensional data are more and more frequent in many application fields. It becomes particularly important to be able to extract meaningful features from such data sets. The deformable template model is a popular way to achieve this. This paper reviews the statistical aspects of this model as well as its generalizations. We describe the different mathematical frameworks used to handle different data types as well as the deformations. We recall the theoretical convergence properties of the estimators...

