Displaying 1 – 12 of 12

Imitation learning of car driving skills with decision trees and random forests

Paweł Cichosz, Łukasz Pawełczak (2014)

International Journal of Applied Mathematics and Computer Science

Machine learning is an appealing and useful approach to creating vehicle control algorithms, for both simulated and real vehicles. One commonly applicable learning scenario is learning by imitation, in which the behavior of an exemplary driver provides training instances for a supervised learning algorithm. This article follows this approach in the domain of simulated car racing, using the TORCS simulator. In contrast to most prior work on imitation learning, a symbolic decision...
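The imitation step described in this abstract can be sketched in miniature: record state/action pairs from a hand-coded "expert" and fit a symbolic classifier to them. The single feature, the expert rule, and the depth-one tree below are illustrative assumptions, not the TORCS sensor interface or the forests used in the paper.

```python
import random

# Toy imitation learning: record (state, action) pairs from a
# hand-coded "expert" steering rule, then fit a one-level decision
# tree (stump) that reproduces its behaviour. The feature and the
# expert rule are hypothetical, not the TORCS interface.

def expert_action(track_angle):
    # The "exemplary driver": steer toward the track axis.
    return "left" if track_angle > 0.0 else "right"

def fit_stump(samples):
    # Pick the threshold that minimises disagreement with the expert.
    best_thr, best_err = None, None
    for thr, _ in samples:
        err = sum(("left" if x > thr else "right") != a for x, a in samples)
        if best_err is None or err < best_err:
            best_thr, best_err = thr, err
    return best_thr

random.seed(0)
states = [random.uniform(-1.0, 1.0) for _ in range(200)]
samples = [(x, expert_action(x)) for x in states]

thr = fit_stump(samples)
policy = lambda x: "left" if x > thr else "right"
agreement = sum(policy(x) == a for x, a in samples) / len(samples)
```

A real pipeline would use many sensor features and a full decision-tree or random-forest learner; the stump only shows the supervised imitation step.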

Implementation of adaptive generalized sidelobe cancellers using efficient complex valued arithmetic

George-Othon Glentis (2003)

International Journal of Applied Mathematics and Computer Science

Low-complexity realizations of Least Mean Square (LMS) error Generalized Sidelobe Cancellers (GSCs) applied to adaptive beamforming are considered. The GSC method provides a simple way of implementing adaptive Linearly Constrained Minimum Variance (LCMV) beamformers. Low-complexity realizations of adaptive GSCs are of great importance for the design of high-sampling-rate, small-size, and/or low-power adaptive beamforming systems. The LMS algorithm and its Transform Domain (TD-LMS) counterpart...
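The LMS update at the heart of these structures can be illustrated with a real-valued system-identification toy; the paper's actual contribution, efficient complex-valued GSC realisations, is not reproduced here.

```python
import random

# Plain real-valued LMS: adapt FIR weights w by w <- w + mu * e * x
# to identify an unknown 3-tap filter from its input/output signals.
# A minimal sketch only; the paper works with complex arithmetic and
# low-complexity GSC beamformer structures.

def lms(x, d, taps, mu):
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        frame = x[n - taps + 1:n + 1][::-1]   # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, frame))
        e = d[n] - y                          # error vs. desired output
        w = [wi + mu * e * xi for wi, xi in zip(w, frame)]
    return w

random.seed(1)
unknown = [0.5, -0.3, 0.2]                    # system to be identified
x = [random.gauss(0.0, 1.0) for _ in range(5000)]
d = [sum(unknown[k] * (x[n - k] if n - k >= 0 else 0.0)
         for k in range(len(unknown))) for n in range(len(x))]
w = lms(x, d, taps=3, mu=0.01)
```

With noiseless data the weights converge to the unknown taps; the complex-valued and transform-domain variants discussed in the paper change the arithmetic and the cost, not this basic update.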

Improving feature selection process resistance to failures caused by curse-of-dimensionality effects

Petr Somol, Jiří Grim, Jana Novovičová, Pavel Pudil (2011)

Kybernetika

The purpose of feature selection in machine learning is at least two-fold: saving measurement acquisition costs and reducing the negative effects of the curse of dimensionality, with the aim of improving the accuracy of the models and the classification rate of classifiers on previously unknown data. Yet it has been shown recently that the process of feature selection itself can be negatively affected by the very same curse of dimensionality: feature selection methods may easily over-fit...
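The over-fitting effect the authors warn about is easy to reproduce: with few samples and many pure-noise features, the feature that looks best on the selection data carries no signal at all. A self-contained toy, with all sizes and thresholds chosen arbitrarily for illustration:

```python
import random

# With 30 samples and 500 pure-noise features, the feature selected
# on the training data looks far better than chance there, yet
# generalises no better than chance. Sizes are illustrative choices.

random.seed(2)
n_samples, n_features = 30, 500

def make_data():
    # Labels and features are independent: there is nothing to learn.
    y = [random.choice([0, 1]) for _ in range(n_samples)]
    X = [[random.gauss(0.0, 1.0) for _ in range(n_features)]
         for _ in range(n_samples)]
    return X, y

def accuracy(X, y, j):
    # Threshold classifier on feature j (sign flip allowed).
    correct = sum((1 if X[i][j] > 0.0 else 0) == y[i] for i in range(len(y)))
    return max(correct, len(y) - correct) / len(y)

X_train, y_train = make_data()
best_j = max(range(n_features), key=lambda j: accuracy(X_train, y_train, j))
train_acc = accuracy(X_train, y_train, best_j)

X_test, y_test = make_data()          # fresh, equally structureless data
test_acc = accuracy(X_test, y_test, best_j)
```

On the fresh sample the selected feature falls back toward the chance level: the selection process itself over-fit, which is exactly the failure mode the paper addresses.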

Improving the generalization ability of neuro-fuzzy systems by ε-insensitive learning

Jacek Łęski (2002)

International Journal of Applied Mathematics and Computer Science

A new learning method tolerant of imprecision is introduced and used in neuro-fuzzy modelling. The proposed method makes it possible to dispose of an intrinsic inconsistency of neuro-fuzzy modelling, where zero-tolerance learning is used to obtain a fuzzy model tolerant of imprecision. This new method can be called ε-insensitive learning, where, in order to fit the fuzzy model to real data, the ε-insensitive loss function is used. ε-insensitive learning leads to a model with minimal Vapnik-Chervonenkis...
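The ε-insensitive loss itself is simple to state: residuals inside a tolerance band cost nothing, and the cost grows linearly outside it. The sketch below fits a constant model with this loss by grid search; it illustrates only the loss, not the paper's neuro-fuzzy learning procedure, and the data and ε value are made up.

```python
# The eps-insensitive loss: residuals inside the tolerance band
# |r| <= eps are free; outside it the penalty grows linearly.
# Data and eps are illustrative choices.

def eps_insensitive(residual, eps=0.1):
    return max(0.0, abs(residual) - eps)

data = [0.0, 0.05, -0.04, 0.02, 5.0]       # small cluster + one outlier

def total_loss(c, eps=0.1):
    return sum(eps_insensitive(y - c, eps) for y in data)

candidates = [i / 100.0 for i in range(-100, 601)]
c_eps = min(candidates, key=total_loss)    # eps-insensitive constant fit
c_ls = sum(data) / len(data)               # least-squares constant fit
```

The ε-insensitive fit stays near the cluster of small values while the least-squares mean is dragged toward the outlier, a small illustration of the tolerance to imprecision the abstract describes.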

Interpretability of linguistic variables: a formal account

Ulrich Bodenhofer, Peter Bauer (2005)

Kybernetika

This contribution is concerned with the interpretability of fuzzy rule-based systems. While this property is widely considered to be a crucial one in fuzzy rule-based modeling, a more detailed formal investigation of what “interpretability” actually means is not available. So far, interpretability has most often been associated with rather heuristic assumptions about shape and mutual overlapping of fuzzy membership functions. In this paper, we attempt to approach this problem from a more general...
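One of the heuristic criteria the abstract alludes to, overlap of neighbouring membership functions such that the degrees sum to one everywhere (a Ruspini partition), can be checked mechanically. The three triangular terms below are an illustrative toy variable; the paper itself argues for a more general formal treatment.

```python
# Checking one common interpretability heuristic: the membership
# functions of a linguistic variable should form a Ruspini partition,
# i.e. their degrees sum to 1 at every point of the domain.

def tri(x, a, b, c):
    # Triangular membership function with support (a, c) and peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# "low", "medium", "high" on the unit interval (toy linguistic variable).
terms = {"low": (-1.0, 0.0, 0.5),
         "medium": (0.0, 0.5, 1.0),
         "high": (0.5, 1.0, 2.0)}

def coverage(x):
    # Sum of membership degrees at x; 1.0 means a Ruspini partition there.
    return sum(tri(x, *p) for p in terms.values())
```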

Intrinsic dimensionality and small sample properties of classifiers

Šarūnas Raudys (1998)

Kybernetika

Small learning-set properties of the Euclidean distance, the Parzen window, the minimum empirical error and the nonlinear single-layer perceptron classifiers depend on an “intrinsic dimensionality” of the data; the Fisher linear discriminant function, however, is sensitive to all dimensions. There is no unique definition of the “intrinsic dimensionality”. The dimensionality of the subspace in which the data points are situated is not a sufficient definition of the “intrinsic dimensionality”. An exact...

Iterative feature selection in least square regression estimation

Pierre Alquier (2008)

Annales de l'I.H.P. Probabilités et statistiques

This paper presents a new algorithm to perform regression estimation, in both the inductive and transductive setting. The estimator is defined as a linear combination of functions in a given dictionary. Coefficients of the combination are computed sequentially using projection on some simple sets. These sets are defined as confidence regions provided by a deviation (PAC) inequality on an estimator in one-dimensional models. We prove that every projection step of the algorithm actually improves the performance...
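The flavour of the sequential scheme, updating one dictionary coefficient at a time by projecting the current residual onto that function, can be sketched as plain cyclic projection. This omits the paper's actual contribution (the confidence regions from a PAC deviation inequality), and the dictionary and target are toy choices.

```python
# Sequential coefficient updates over a dictionary: each pass projects
# the current residual onto one dictionary function and adds the
# projection coefficient, skipping negligible updates.

xs = [i / 50.0 for i in range(50)]
target = [2.0 * x + 0.5 for x in xs]              # noiseless toy target
dictionary = [lambda x: 1.0, lambda x: x]         # toy dictionary

coef = [0.0] * len(dictionary)
residual = target[:]
for _ in range(200):                              # sequential passes
    for j, g in enumerate(dictionary):
        gx = [g(x) for x in xs]
        step = (sum(r * v for r, v in zip(residual, gx))
                / sum(v * v for v in gx))
        if abs(step) < 1e-12:                     # drop negligible updates
            continue
        coef[j] += step
        residual = [r - step * v for r, v in zip(residual, gx)]

mse = sum(r * r for r in residual) / len(residual)
```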

Iterative Learning Control - monotonicity and optimization

David H. Owens, Steve Daley (2008)

International Journal of Applied Mathematics and Computer Science

The area of Iterative Learning Control (ILC) has great potential for applications to systems with a naturally repetitive action, where the transfer of data from repetition to repetition (trial or iteration) can lead to substantial improvements in tracking performance. There are several serious issues arising from the "2D" structure of ILC, and a number of new problems requiring new ways of thinking and design. This paper introduces some of these issues from the point of view of the research group at Sheffield University...
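The repetition-to-repetition data transfer can be sketched with the classic P-type ILC update u_{k+1}(t) = u_k(t) + gamma * e_k(t) on a toy plant; the plant and gain below are illustrative choices, not taken from the paper.

```python
# P-type iterative learning control: after each repeated trial the
# input is corrected with the previous trial's tracking error,
#     u_{k+1}(t) = u_k(t) + gamma * e_k(t).
# First-order plant and gamma = 0.8 are illustrative assumptions.

def run_trial(u):
    # Discrete plant y[t+1] = 0.5 * y[t] + u[t], output read at t+1.
    y = [0.0] * (len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = 0.5 * y[t] + u[t]
    return y[1:]

ref = [1.0] * 20                        # same reference every trial
u = [0.0] * 20
errors = []                             # max tracking error per trial
for trial in range(30):
    y = run_trial(u)
    e = [r - yi for r, yi in zip(ref, y)]
    errors.append(max(abs(v) for v in e))
    u = [ui + 0.8 * ei for ui, ei in zip(u, e)]   # ILC update
```

Over the trials the worst-case tracking error shrinks toward zero, the kind of trial-to-trial improvement whose monotonicity the paper studies.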
