We propose a new linkage learning genetic algorithm called the Factor Graph based Genetic Algorithm (FGGA). In the FGGA, a factor graph is used to encode the underlying dependencies between variables of the problem. In order to learn the factor graph from a population of potential solutions, a symmetric non-negative matrix factorization is employed to factorize the matrix of pair-wise dependencies. To show the performance of the FGGA, encouraging experimental results on different separable problems...
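A minimal sketch of the kind of symmetric non-negative matrix factorization step the abstract refers to, applied to a toy pairwise-dependency matrix; the damped multiplicative update rule and all names here are illustrative assumptions, not the FGGA implementation.

```python
import numpy as np

def sym_nmf(S, k, n_iter=500, beta=0.5, eps=1e-9, seed=0):
    """Approximate a symmetric non-negative matrix S by H @ H.T with H >= 0."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    H = rng.random((n, k))
    for _ in range(n_iter):
        numer = S @ H
        denom = H @ (H.T @ H) + eps
        # damped multiplicative update; keeps H non-negative
        H *= (1.0 - beta) + beta * numer / denom
    return H

# toy pairwise-dependency matrix for a 4-variable problem with two blocks
S = np.array([[1.0, 0.9, 0.1, 0.1],
              [0.9, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.8],
              [0.1, 0.1, 0.8, 1.0]])
H = sym_nmf(S, k=2)
print(np.round(H, 2))   # rows with similar profiles share a factor node
```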
We combine a new data model, in which the random classification is subject to rather weak restrictions based on the Mammen–Tsybakov [E. Mammen and A.B. Tsybakov, Ann. Statist. 27 (1999) 1808–1829; A.B. Tsybakov, Ann. Statist. 32 (2004) 135–166] small margin conditions, with the statistical query (SQ) model due to Kearns [M.J. Kearns, J. ACM 45 (1998) 983–1006], yielding what we refer to as the PAC+SQ model. We generalize the class conditional constant noise (CCCN) model introduced by Decatur...
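For readers unfamiliar with the SQ model, the sketch below simulates a statistical query oracle from a finite sample: the learner submits a bounded query and receives its expectation up to an additive tolerance. The query and all names are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def sq_oracle(samples, labels, chi, tau, rng):
    """Answer E[chi(x, y)] up to additive tolerance tau: here the empirical
    mean plus bounded noise, standing in for any valid SQ answer."""
    values = np.array([chi(x, y) for x, y in zip(samples, labels)])
    return values.mean() + rng.uniform(-tau, tau)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)   # noisy labels

# query: how often does the sign of the first coordinate agree with the label?
chi = lambda x, label: float((x[0] > 0) == bool(label))
print(sq_oracle(X, y, chi, tau=0.05, rng=rng))
```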
A major drawback of orthogonal frequency division multiplexing (OFDM) signals is their high peak-to-average power ratio (PAPR). Partial transmit sequences (PTS) is a popular PAPR reduction method with good reduction performance, but its search complexity is high. In this paper, in order to reduce the PTS search complexity, we propose a new technique based on biogeography-based optimization (BBO). More specifically, we present a new Generalized Oppositional Biogeography Based Optimization...
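As a rough illustration of what PTS does, the sketch below computes the PAPR of an OFDM symbol and searches random phase combinations over sub-blocks, with plain random search standing in for the BBO search proposed in the paper; the block count, phase alphabet, and names are assumptions.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts_search(X, V=4, phases=(1, -1, 1j, -1j), n_trials=200, seed=0):
    """Randomly search PTS phase combinations and keep the best one
    (a stand-in for the BBO search described in the abstract)."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(np.arange(len(X)), V)    # disjoint sub-blocks
    best_papr, best_x = np.inf, None
    for _ in range(n_trials):
        b = rng.choice(phases, size=V)               # candidate phase vector
        Xr = X.copy()
        for v, idx in enumerate(blocks):
            Xr[idx] = Xr[idx] * b[v]
        x = np.fft.ifft(Xr)                          # OFDM modulation
        p = papr_db(x)
        if p < best_papr:
            best_papr, best_x = p, x
    return best_papr, best_x

# random QPSK frequency-domain symbol on 64 subcarriers
rng = np.random.default_rng(1)
X = (rng.choice([1, -1], 64) + 1j * rng.choice([1, -1], 64)) / np.sqrt(2)
print(papr_db(np.fft.ifft(X)), pts_search(X)[0])    # PAPR before vs. after PTS
```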
We study properties of Linear Genetic Programming (LGP) through several regression and classification benchmarks. In each problem, we decompose the results into bias and variance components, and explore the effect of varying certain key parameters on the overall error and its decomposed contributions. These parameters are the maximum program size, the initial population, and the function set used. We confirm and quantify several insights into the practical usage of GP, most notably that (a) the...
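The bias/variance decomposition mentioned in the abstract can be estimated by Monte Carlo over repeated training sets; the sketch below does this for a simple polynomial regressor standing in for an LGP learner. The stand-in learner and all names are assumptions, not the paper's setup.

```python
import numpy as np

def bias_variance(fit, predict, make_data, x_test, f_true, n_repeats=200, seed=0):
    """Monte-Carlo estimate of squared bias and variance of a regression
    method's predictions at fixed test points, over repeated training sets."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_repeats):
        X, y = make_data(rng)
        preds.append(predict(fit(X, y), x_test))
    preds = np.array(preds)                          # (n_repeats, n_test)
    bias2 = np.mean((preds.mean(axis=0) - f_true(x_test)) ** 2)
    variance = np.mean(preds.var(axis=0))
    return bias2, variance

# stand-in learner: degree-5 polynomial least squares (not LGP itself)
f_true = lambda x: np.sin(2 * np.pi * x)

def make_data(rng, n=30, noise=0.3):
    x = rng.uniform(0, 1, n)
    return x, f_true(x) + noise * rng.normal(size=n)

fit = lambda X, y: np.polyfit(X, y, 5)
predict = lambda coef, x: np.polyval(coef, x)

x_test = np.linspace(0, 1, 50)
print(bias_variance(fit, predict, make_data, x_test, f_true))
```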
The theory of partially observable Markov decision processes (POMDPs) is a useful tool for developing various intelligent agents, and learning hierarchical POMDP models is one of the key approaches to building such agents when their environments are unknown and large. To learn hierarchical models, bottom-up methods, in which learning proceeds layer by layer from the lowest to the highest layer, are already extensively used in some research fields such as hidden...
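The hierarchical learning method itself is beyond a short example, but the basic POMDP belief update that any such model builds on is easy to state; the sketch below implements it for a toy two-state model with made-up transition and observation tables (all numbers and names are illustrative assumptions).

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Standard POMDP belief update:
    b_new[s'] is proportional to O[a, s', o] * sum_s T[a, s, s'] * b[s]."""
    b_pred = b @ T[a]                # predicted state distribution after action a
    b_new = O[a, :, o] * b_pred      # weight by the observation likelihood
    return b_new / b_new.sum()

# tiny 2-state, 2-action, 2-observation example (made-up numbers)
T = np.array([[[0.9, 0.1], [0.2, 0.8]],      # T[a, s, s']
              [[0.5, 0.5], [0.5, 0.5]]])
O = np.array([[[0.8, 0.2], [0.3, 0.7]],      # O[a, s', o]
              [[0.5, 0.5], [0.5, 0.5]]])
b = np.array([0.5, 0.5])
print(belief_update(b, a=0, o=1, T=T, O=O))
```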
We propose a feature selection method for density estimation with
quadratic loss. This method relies on the study of unidimensional
approximation models and on the definition of confidence regions for
the density derived from these models. The method is quite general and covers
cases of interest such as the detection of relevant wavelet coefficients
or the selection of support vectors in SVMs. In the general case, we
prove that every selected feature actually improves the performance
of the estimator. In the case...
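As a crude illustration of coefficient selection for density estimation under quadratic loss, the sketch below estimates cosine-basis projection coefficients and keeps only those whose naive confidence interval excludes zero; this interval is a stand-in for the paper's confidence regions, and the basis, threshold, and names are assumptions.

```python
import numpy as np

def cosine_basis(j, x):
    """Orthonormal cosine basis on [0, 1]: phi_0 = 1, phi_j = sqrt(2) cos(pi j x)."""
    return np.ones_like(x) if j == 0 else np.sqrt(2) * np.cos(np.pi * j * x)

def select_coefficients(x, max_j=30, width=2.0):
    """Keep coefficient c_j = E[phi_j(X)] only if |c_hat| > width * stderr,
    i.e. its naive confidence interval excludes zero."""
    n = len(x)
    kept = {}
    for j in range(max_j + 1):
        vals = cosine_basis(j, x)
        c_hat, se = vals.mean(), vals.std(ddof=1) / np.sqrt(n)
        if j == 0 or abs(c_hat) > width * se:
            kept[j] = c_hat
    return kept

def density_estimate(kept, grid):
    return sum(c * cosine_basis(j, grid) for j, c in kept.items())

rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=2000)            # sample from a smooth density on [0, 1]
kept = select_coefficients(x)
grid = np.linspace(0, 1, 5)
print(sorted(kept), np.round(density_estimate(kept, grid), 2))
```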
Current powerful graphics cards, which provide stunning real-time visual effects for computer-based entertainment, rely on hardware components able to deliver photo-realistic simulation to the end user. Given the vast computing power of this graphics hardware, its producers often offer a programming interface that makes it possible to use the computational resources of the graphics processors (GPUs) for more general purposes. This step gave birth to the so-called...
We define the class of discrete classical categorial grammars, similar in spirit
to the notion of reversible classes of languages introduced by Angluin and
Sakakibara. We show that the class of discrete classical categorial grammars is identifiable from positive structured examples. For this, we provide an original algorithm, which runs in quadratic time in the size of the examples. This work extends the previous results of Kanazawa. Indeed, in our work, several types can be associated with a word...
We study the problem of learning regular tree languages from text.
We show that the framework of function distinguishability, as introduced by the author in [Theoret. Comput. Sci. 290 (2003) 1679–1711], can be generalized
from the case of string languages towards tree languages. This provides a large source of identifiable classes of regular tree languages. Each of these classes can be characterized in various ways. Moreover, we present a generic inference algorithm with polynomial update time...
We study sample-based estimates of the expectation of the function
produced by the empirical minimization algorithm. We investigate the
extent to which one can estimate the rate of convergence of the
empirical minimizer in a data dependent manner. We establish three
main results. First, we provide an algorithm that upper bounds the
expectation of the empirical minimizer in a completely
data-dependent manner. This bound is based on a structural result
due to Bartlett and Mendelson, which relates...
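A simpler, fully data-dependent bound of the same flavour (though not the localized estimate built on Bartlett and Mendelson's structural result) is the standard empirical Rademacher-complexity bound for a finite class; the sketch below computes it for toy threshold classifiers. All names and the toy class are assumptions.

```python
import numpy as np

def empirical_rademacher(losses, n_sigma=500, seed=0):
    """Monte-Carlo estimate of the empirical Rademacher complexity of a finite
    class, given its loss matrix (n_functions x n_samples) on the sample."""
    rng = np.random.default_rng(seed)
    n = losses.shape[1]
    sups = []
    for _ in range(n_sigma):
        sigma = rng.choice([-1.0, 1.0], size=n)
        sups.append(np.max(losses @ sigma) / n)
    return float(np.mean(sups))

# toy finite class: threshold classifiers h_t(x) = 1[x > t] on 1-D data
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = (x > 0.4).astype(float)
thresholds = np.linspace(0, 1, 51)
losses = np.array([(np.where(x > t, 1.0, 0.0) != y).astype(float) for t in thresholds])

emp_risks = losses.mean(axis=1)
erm_index = int(np.argmin(emp_risks))          # empirical minimizer
delta, n = 0.05, len(x)
# standard bound: true risk <= empirical risk + 2 * Rademacher + confidence term
bound = emp_risks[erm_index] + 2 * empirical_rademacher(losses) \
        + 3 * np.sqrt(np.log(2 / delta) / (2 * n))
print(thresholds[erm_index], emp_risks[erm_index], round(bound, 3))
```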
A PAC teaching model – under helpful distributions – is proposed which introduces the classical ideas of teaching models within the PAC setting: a polynomial-sized teaching set is associated with each target concept; the criterion of success is PAC identification; an additional parameter, namely the inverse of the minimum probability assigned to any example in the teaching set, is associated with each distribution; the learning algorithm running time takes this new parameter into account. An Occam...
We establish rates of convergence in statistical learning for time series forecasting. Using the PAC-Bayesian approach, slow rates of convergence of order √(d/n) for the Gibbs estimator under the absolute loss were given in a previous work [7], where n is the sample size and d the dimension of the set of predictors. Under the same weak dependence conditions, we extend this result to any convex Lipschitz loss function. We also identify a condition on the parameter space that ensures similar rates for the...
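A minimal sketch of a Gibbs (PAC-Bayesian) estimator over a finite set of predictors under the absolute loss, in the spirit of the estimator discussed here: each predictor is weighted by exp(-λ × empirical risk) under the prior, and the estimator is drawn from this posterior. The AR(1) predictor set and the temperature λ = √n are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def gibbs_posterior_weights(emp_risks, lam, prior=None):
    """Gibbs posterior over a finite parameter set:
    rho(theta) is proportional to prior(theta) * exp(-lam * emp_risk(theta))."""
    prior = np.ones_like(emp_risks) / len(emp_risks) if prior is None else prior
    logw = np.log(prior) - lam * emp_risks
    logw -= logw.max()                       # numerical stabilization
    w = np.exp(logw)
    return w / w.sum()

# toy forecasting task: predictors are AR(1) maps x_t -> a * x_{t-1}
rng = np.random.default_rng(0)
n = 300
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + 0.3 * rng.normal()

coeffs = np.linspace(-1, 1, 41)              # finite set of candidate predictors
emp_risks = np.array([np.mean(np.abs(x[1:] - a * x[:-1])) for a in coeffs])
w = gibbs_posterior_weights(emp_risks, lam=np.sqrt(n))
theta_hat = rng.choice(coeffs, p=w)          # draw the Gibbs estimator
print(round(theta_hat, 3), round(coeffs[np.argmax(w)], 3))
```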
The last few years have witnessed important new developments in the theory and practice of pattern classification. We intend to survey some of the main new ideas that have led to these recent results.