Displaying 1 – 8 of 8

Improving feature selection process resistance to failures caused by curse-of-dimensionality effects

Petr Somol, Jiří Grim, Jana Novovičová, Pavel Pudil (2011)

Kybernetika

The purpose of feature selection in machine learning is at least two-fold: saving measurement acquisition costs and reducing the negative effects of the curse of dimensionality, with the aim of improving the accuracy of models and the classification rate of classifiers on previously unknown data. Yet it has been shown recently that the feature selection process itself can be negatively affected by the very same curse of dimensionality: feature selection methods may easily over-fit...
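The over-fitting effect described in this abstract can be illustrated with a toy sketch (all numbers here are illustrative and not from the paper): when features are selected on the same small sample used for evaluation, even pure-noise features appear predictive, while a fresh sample reveals chance-level accuracy.

```python
import random

rng = random.Random(0)

def draw(n, d):
    # pure-noise binary features and labels: nothing is actually predictive
    X = [[rng.randint(0, 1) for _ in range(d)] for _ in range(n)]
    y = [rng.randint(0, 1) for _ in range(n)]
    return X, y

def accuracy(j, X, y):
    # fraction of samples where feature j equals the label
    return sum(int(xi[j] == yi) for xi, yi in zip(X, y)) / len(y)

n, d = 40, 500            # few samples, many candidate features
X, y = draw(n, d)

# "select" the feature that best matches the labels on this same sample
best = max(range(d), key=lambda j: accuracy(j, X, y))
apparent = accuracy(best, X, y)

# evaluate the selected feature on a large fresh sample from the same source
X2, y2 = draw(400, d)
holdout = accuracy(best, X2, y2)

print(f"apparent accuracy {apparent:.2f} vs hold-out accuracy {holdout:.2f}")
```

The apparent accuracy of the selected feature is well above 0.5 despite the data being pure noise, while the hold-out accuracy falls back to chance level, which is the selection-bias mechanism the abstract refers to.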

Information contained in design points of experiments with correlated observations

Andrej Pázman (2010)

Kybernetika

A random process (field) with given parametrized mean and covariance function is observed at a finite number of chosen design points. The information about its parameters is measured via the Fisher information matrix (for normally distributed observations) or using information functionals depending on that matrix. Conditions are stated, under which the contribution of one design point to this information is zero. Explicit expressions are obtained for the amount of information coming from a selected...
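For normally distributed observations with known covariance, the Fisher information matrix mentioned in this abstract takes the standard form M = FᵀΣ⁻¹F, where F is the Jacobian of the parametrized mean at the design points and Σ the covariance between those points. A minimal sketch, assuming a linear mean and an exponential covariance function (both chosen here only for illustration):

```python
from math import exp

def solve(A, b):
    # Gaussian elimination with partial pivoting; returns x with A x = b
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# design points and a linear mean E Y(x) = theta0 + theta1 * x
xs = [0.0, 1.0, 2.5]
F = [[1.0, x] for x in xs]                       # Jacobian of the mean in theta

# exponential covariance C(x, x') = exp(-|x - x'|) between design points
Sigma = [[exp(-abs(a - b)) for b in xs] for a in xs]

# Fisher information for the mean parameters: M = F^T Sigma^{-1} F
Z = [solve(Sigma, [F[i][j] for i in range(len(xs))]) for j in range(2)]
FIM = [[sum(F[i][r] * Z[c][i] for i in range(len(xs))) for c in range(2)]
       for r in range(2)]
print(FIM)
```

The paper's question of when one design point contributes zero information amounts to asking when removing that point leaves this matrix (or a functional of it) unchanged.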

Insensitivity region for variance components in general linear model

Hana Boháčová (2008)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

In linear regression models, the estimator of variance components requires a suitable choice of starting point for the iterative procedure that determines the estimate. The aim of this paper is to find a criterion for deciding whether a linear regression model makes it possible to determine the estimate reasonably, and whether this can be done with the given data.

Interval linear regression analysis based on Minkowski difference – a bridge between traditional and interval linear regression models

Masahiro Inuiguchi, Tetsuzo Tanino (2006)

Kybernetika

In this paper, we extend the traditional linear regression methods to the (numerical input)-(interval output) data case assuming both the observation/measurement error and the indeterminacy of the input-output relationship. We propose three different models based on three different assumptions of interval output data. In each model, the errors are defined as intervals by solving the interval equation representing the relationship among the interval output, the interval function and the interval...
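The interval error the abstract alludes to can be sketched with the Hukuhara (Minkowski-type) difference of intervals: the interval C solving B + C = A, which exists when A is at least as wide as B. This toy code is only an illustration of that interval operation, not the paper's models:

```python
def h_diff(A, B):
    # Hukuhara difference: the interval C with B + C = A,
    # defined only when width(A) >= width(B)
    (al, au), (bl, bu) = A, B
    if (au - al) < (bu - bl):
        return None
    return (al - bl, au - bu)

def add(A, B):
    # Minkowski sum of two intervals
    return (A[0] + B[0], A[1] + B[1])

Y = (2.0, 6.0)           # observed interval output
Yhat = (2.5, 5.0)        # an interval prediction (illustrative values)
E = h_diff(Y, Yhat)      # interval error solving Yhat + E = Y
print(E, add(Yhat, E))
```

Defining errors this way, rather than by ordinary set subtraction, is what lets an interval equation among the output, the model function, and the error have a well-defined interval solution.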

Iterative feature selection in least square regression estimation

Pierre Alquier (2008)

Annales de l'I.H.P. Probabilités et statistiques

This paper presents a new algorithm to perform regression estimation, in both the inductive and transductive settings. The estimator is defined as a linear combination of functions in a given dictionary. Coefficients of the combination are computed sequentially using projections onto some simple sets. These sets are defined as confidence regions provided by a deviation (PAC) inequality on an estimator in one-dimensional models. We prove that every projection step in the algorithm actually improves the performance...
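A loose toy sketch of the flavor of such sequential one-dimensional updates (not the paper's algorithm; the threshold here is a crude stand-in for a confidence region, and all data and constants are invented): each coordinate of the dictionary is revisited in turn, its one-dimensional least-squares step is computed on the current residual, and the update is kept only if it is large enough to be distinguishable from noise.

```python
import random

rng = random.Random(1)
n, d = 200, 10
# toy data: only the first two dictionary functions actually matter
X = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(n)]
y = [3.0 * x[0] - 2.0 * x[1] + rng.gauss(0, 0.1) for x in X]

theta = [0.0] * d
for _ in range(5):                  # a few sweeps over the dictionary
    for j in range(d):
        # residual of the current combination
        r = [y[i] - sum(theta[k] * X[i][k] for k in range(d)) for i in range(n)]
        # one-dimensional least-squares step along coordinate j
        num = sum(r[i] * X[i][j] for i in range(n))
        den = sum(X[i][j] ** 2 for i in range(n))
        step = num / den
        # crude stand-in for a confidence test: keep only sizable updates
        if abs(step) > 0.05:
            theta[j] += step

print([round(t, 2) for t in theta[:3]])
```

On this synthetic data the two informative coefficients are recovered close to their true values 3 and -2, while most noise coordinates stay at zero, mimicking the iterative-feature-selection behavior the title describes.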
