Bias-variance decomposition in Genetic Programming

Taras Kowaliw, René Doursat (2016)

Open Mathematics

We study properties of Linear Genetic Programming (LGP) through several regression and classification benchmarks. In each problem, we decompose the results into bias and variance components, and explore the effect of varying certain key parameters on the overall error and its decomposed contributions. These parameters are the maximum program size, the initial population, and the function set used. We confirm and quantify several insights into the practical usage of GP, most notably that (a) the...
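As background (a standard textbook identity, not quoted from the paper), the bias-variance decomposition named in the title splits the expected squared error of a model $\hat f_D$, trained on a random dataset $D$, at a point $x$ with target $y = f(x) + \varepsilon$ and noise variance $\operatorname{Var}(\varepsilon) = \sigma^2$:

$$
\mathbb{E}_{D,\varepsilon}\!\left[\big(y - \hat f_D(x)\big)^2\right]
= \underbrace{\big(\mathbb{E}_D[\hat f_D(x)] - f(x)\big)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}_D\!\left[\big(\hat f_D(x) - \mathbb{E}_D[\hat f_D(x)]\big)^2\right]}_{\text{variance}}
+ \sigma^2 .
$$

The symbols $f$, $\hat f_D$, and $\sigma^2$ are generic notation rather than the paper's own; for classification losses the decomposition takes a different form.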

Bottom-up learning of hierarchical models in a class of deterministic POMDP environments

Hideaki Itoh, Hisao Fukumoto, Hiroshi Wakuya, Tatsuya Furukawa (2015)

International Journal of Applied Mathematics and Computer Science

The theory of partially observable Markov decision processes (POMDPs) is a useful tool for developing various intelligent agents, and learning hierarchical POMDP models is one of the key approaches to building such agents when their environments are unknown and large. To learn hierarchical models, bottom-up learning methods, in which learning proceeds layer by layer from the lowest to the highest layer, are already extensively used in some research fields such as hidden...
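For context (textbook background, not the paper's own formulation), a POMDP is commonly given as a tuple

$$
\langle S, A, O, T, Z, R \rangle ,
$$

with state set $S$, action set $A$, observation set $O$, transition model $T(s' \mid s, a)$, observation model $Z(o \mid s', a)$, and reward function $R(s, a)$. In the deterministic class the title refers to, transitions and observations are presumably deterministic maps, e.g. $s' = \tau(s, a)$ and $o = \zeta(s')$; the symbols $\tau$ and $\zeta$ are illustrative only, not taken from the paper.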
