Displaying 261 – 280 of 838


Estimation of the hazard rate function with a reduction of bias and variance at the boundary

Bożena Janiszewska, Roman Różański (2005)

Discussiones Mathematicae Probability and Statistics

In the article, we propose a new estimator of the hazard rate function in the framework of the multiplicative point process intensity model. The technique combines the reflection method with the transformation method. For suitably selected transformations, the new method eliminates the boundary effect, reducing the bias at the boundary while preserving the asymptotic behaviour of the variance. The transformation depends on a pre-estimate of the logarithmic derivative of the hazard function at the boundary.
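The reflection idea underlying the boundary correction can be illustrated on a plain kernel density estimator on $[0,\infty)$: each observation $t$ is mirrored to $-t$, which removes the mass that an ordinary kernel estimator would leak past the boundary. This is a minimal sketch of the reflection step only, not the paper's combined reflection-and-transformation estimator, and the function name is ours.

```python
import numpy as np

def reflection_kernel_density(times, grid, h):
    """Gaussian-kernel density estimate on [0, inf) using the
    reflection method: each observation t is also mirrored to -t,
    which removes the first-order bias at the boundary x = 0.
    Sketch only; the paper additionally transforms the data using a
    pre-estimate of the log-derivative of the hazard at the boundary."""
    times = np.asarray(times, dtype=float)

    def k(u):  # standard Gaussian kernel
        return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

    est = np.empty_like(grid, dtype=float)
    for i, x in enumerate(grid):
        # original sample plus its reflection about 0
        est[i] = (k((x - times) / h).sum() + k((x + times) / h).sum())
        est[i] /= len(times) * h
    return est
```

A useful property of the reflected estimator is that it integrates to exactly one over $[0,\infty)$, unlike the uncorrected estimator, which loses mass across the boundary.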

Estimation of the spectral moment by means of the extrema.

Enrique M. Cabaña (1985)

Trabajos de Estadística e Investigación Operativa

For a stationary Gaussian process with known variance and two continuous derivatives, an estimator of the standard deviation of the first derivative, based on the values of the relative maxima and minima, is proposed, and some of its properties are considered.

Estimation of the transition density of a Markov chain

Mathieu Sart (2014)

Annales de l'I.H.P. Probabilités et statistiques

We present two data-driven procedures to estimate the transition density of a homogeneous Markov chain. The first yields a piecewise constant estimator on a suitable random partition. By using a Hellinger-type loss, we establish non-asymptotic risk bounds for our estimator when the square root of the transition density belongs to possibly inhomogeneous Besov spaces with possibly small regularity index. Some simulations are also provided. The second procedure is of theoretical interest and leads...

Estimation of variances in a heteroscedastic RCA(1) model

Hana Janečková (2002)

Kybernetika

The paper deals with a heteroscedastic random coefficient autoregressive model (RCA) of the form $X_t = b_t X_{t-1} + Y_t$. Two different procedures for estimating $\sigma_t^2 = E Y_t^2$, $\sigma_b^2 = E b_t^2$ or $\sigma_B^2 = E(b_t - E b_t)^2$, respectively, are described under a special seasonal behaviour of $\sigma_t^2$. For both types of estimators, strong consistency and asymptotic normality are proved.
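The second moments in the RCA(1) model admit a simple moment-based check: since $E[X_t^2 \mid X_{t-1}] = (E b_t^2)\, X_{t-1}^2 + E Y_t^2$, regressing $X_t^2$ on $X_{t-1}^2$ recovers $E b_t^2$ as the slope and $E Y_t^2$ as the intercept. The following sketch simulates the model and applies this regression; it is our illustration with a constant noise variance, not the paper's estimators, which handle the seasonal behaviour of $\sigma_t^2$.

```python
import numpy as np

def simulate_rca1(n, beta=0.5, sigma_b=0.1, sigma_y=1.0, seed=None):
    """Simulate X_t = b_t X_{t-1} + Y_t with random coefficient
    b_t ~ N(beta, sigma_b^2) and noise Y_t ~ N(0, sigma_y^2).
    Illustrative: the paper allows a seasonally varying Var(Y_t);
    here the noise variance is held constant."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        b = rng.normal(beta, sigma_b)
        x[t] = b * x[t - 1] + rng.normal(0.0, sigma_y)
    return x

def second_moment_estimates(x):
    """Regress X_t^2 on X_{t-1}^2.  Because
    E[X_t^2 | X_{t-1}] = (E b_t^2) X_{t-1}^2 + E Y_t^2,
    the slope estimates E b_t^2 and the intercept estimates E Y_t^2."""
    slope, intercept = np.polyfit(x[:-1] ** 2, x[1:] ** 2, 1)
    return slope, intercept
```

With $\beta = 0.5$ and $\sigma_b = 0.1$ the true slope is $E b_t^2 = \beta^2 + \sigma_b^2 = 0.26$, comfortably below the stationarity threshold of $1$.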

Estimation variances for parameterized marked Poisson processes and for parameterized Poisson segment processes

Tomáš Mrkvička (2004)

Commentationes Mathematicae Universitatis Carolinae

A complete and sufficient statistic is found for stationary marked Poisson processes with a parametric distribution of marks. Then this statistic is used to derive the uniformly best unbiased estimator for the length density of a Poisson or Cox segment process with a parametric primary grain distribution. It is the number of segments with reference point within the sampling window divided by the window volume and multiplied by the uniformly best unbiased estimator of the mean segment length.
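The estimator described in the abstract has a direct numerical form: count the segments whose reference point falls in the window, divide by the window volume, and multiply by an estimate of the mean segment length. The sketch below uses the sample mean of the observed lengths in place of the uniformly best unbiased estimator for the parametric length distribution; the function name and simulation setup are ours.

```python
import numpy as np

def length_density_estimate(n_segments, lengths, window_volume):
    """Length-density estimate for a segment process observed in a
    window W: (number of reference points in W) / |W|, times an
    estimate of the mean segment length.  Here the plain sample mean
    stands in for the uniformly best unbiased estimator that the
    paper derives for the assumed parametric length distribution."""
    if n_segments == 0:
        return 0.0
    return (n_segments / window_volume) * np.mean(lengths)
```

For a stationary Poisson segment process with intensity $\lambda$ of reference points and mean segment length $\mu$, the true length density is $\lambda\mu$, which the estimate recovers on average.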

Estimators of the asymptotic variance of stationary point processes - a comparison

Michaela Prokešová (2011)

Kybernetika

We investigate estimators of the asymptotic variance $\sigma^2$ of a $d$-dimensional stationary point process $\Psi$ which can be observed in a convex and compact sampling window $W_n = nW$. The asymptotic variance of $\Psi$ is defined by the asymptotic relation $\mathrm{Var}(\Psi(W_n)) \sim \sigma^2 |W_n|$ (as $n \to \infty$), and its existence is guaranteed whenever the corresponding reduced covariance measure $\gamma^{(2)}_{\mathrm{red}}(\cdot)$ has finite total variation. The three estimators discussed in the paper are the kernel estimator, the estimator based on the second-order intensity of the point process and the...
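The defining relation $\mathrm{Var}(\Psi(W_n)) \sim \sigma^2 |W_n|$ can be checked empirically when replicated observations of the window count are available: the sample variance of the counts divided by the window volume approximates $\sigma^2$. The sketch below does this for the simplest case, a stationary Poisson process, where $\sigma^2$ equals the intensity $\lambda$; it is our illustration of the definition, not one of the three estimators compared in the paper, which work from a single realization.

```python
import numpy as np

def empirical_asymptotic_variance(counts, window_volume):
    """Empirical check of Var(Psi(W_n)) ~ sigma^2 |W_n|: divide the
    sample variance of replicated window counts by |W_n|.  For a
    stationary Poisson process, sigma^2 equals the intensity lambda."""
    return np.var(counts, ddof=1) / window_volume
```

Note that replicated counts are rarely available in practice; the kernel and second-order-intensity estimators exist precisely because $\sigma^2$ must usually be recovered from one observed pattern.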

Evolutionary learning of rich neural networks in the Bayesian model selection framework

Matteo Matteucci, Dario Spadoni (2004)

International Journal of Applied Mathematics and Computer Science

In this paper we focus on the problem of using a genetic algorithm for model selection within a Bayesian framework. We propose to reduce the model selection problem to a search problem solved using evolutionary computation to explore a posterior distribution over the model space. As a case study, we introduce ELeaRNT (Evolutionary Learning of Rich Neural Network Topologies), a genetic algorithm which evolves a particular class of models, namely, Rich Neural Networks (RNN), in order to find an optimal...

Exploring the impact of post-training rounding in regression models

Jan Kalina (2024)

Applications of Mathematics

Post-training rounding, also known as quantization, of estimated parameters is a widely adopted technique for mitigating energy consumption and latency in machine learning models. This theoretical work examines the impact of rounding estimated parameters in key regression methods within statistics and machine learning. The proposed approach allows for the perturbation of parameters through an additive error with values within a specified interval. This...

Exponential inequalities for VLMC empirical trees

Antonio Galves, Véronique Maume-Deschamps, Bernard Schmitt (2008)

ESAIM: Probability and Statistics

A seminal paper by Rissanen, published in 1983, introduced the class of Variable Length Markov Chains and the algorithm Context, which estimates the probabilistic tree generating the chain. Although the subject has recently been considered in several papers, the central question of the rate of convergence of the algorithm remained open. This is the question we address here. We provide an exponential upper bound for the probability of incorrect estimation of the probabilistic tree, as a function...
