
Displaying 1 – 20 of 24


Bayes sharpening of imprecise information

Piotr Kulczycki, Małgorzata Charytanowicz (2005)

International Journal of Applied Mathematics and Computer Science

A complete algorithm is presented for the sharpening of imprecise information, based on the methodology of kernel estimators and the Bayes decision rule, including conditioning factors. The use of the Bayes rule with a nonsymmetrical loss function enables the different consequences of under- and overestimation of a sharp value (a real number) to be taken into account, as well as minimizing potential losses. A conditional approach allows one to obtain a more precise result by using information entered as the...
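
Under an asymmetric linear loss, the Bayes point estimate reduces to a quantile of the estimated density, with the quantile level set by the ratio of the two loss slopes. The following is a minimal illustrative sketch of that idea only, with hypothetical function names and a fixed bandwidth; it is not the paper's full conditional algorithm:

```python
import math

def kde_pdf(x, data, h):
    """Gaussian kernel density estimate at point x with bandwidth h."""
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / (
        len(data) * h * math.sqrt(2 * math.pi))

def bayes_sharpen(data, loss_under, loss_over, h=0.5, grid_n=2000):
    """Point estimate minimizing expected asymmetric linear loss.
    Under that loss the Bayes rule is the q-th quantile of the estimated
    density, with q = loss_under / (loss_under + loss_over)."""
    lo, hi = min(data) - 4 * h, max(data) + 4 * h
    step = (hi - lo) / grid_n
    q = loss_under / (loss_under + loss_over)
    cum = 0.0
    for i in range(grid_n):
        x = lo + (i + 0.5) * step
        cum += kde_pdf(x, data, h) * step   # numerical CDF of the KDE
        if cum >= q:
            return x
    return hi

data = [1.9, 2.1, 2.0, 2.3, 1.8, 2.2]      # imprecise measurements
est_sym = bayes_sharpen(data, 1.0, 1.0)    # symmetric losses: the median
est_asym = bayes_sharpen(data, 3.0, 1.0)   # underestimation 3x as costly
```

Making underestimation costlier shifts the sharpened value upward, which is exactly the asymmetry the abstract refers to.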

Bayesian and Frequentist Two-Sample Predictions of the Inverse Weibull Model Based on Generalized Order Statistics

Abd Ellah, A. H. (2011)

Serdica Mathematical Journal

2000 Mathematics Subject Classification: 62E16, 62F15, 62H12, 62M20. This paper is concerned with the problem of deriving Bayesian prediction bounds for future observations (two-sample prediction) from the inverse Weibull distribution based on generalized order statistics (GOS). We study two-sided interval Bayesian prediction, point prediction under symmetric and asymmetric loss functions, and maximum likelihood (ML) prediction using the "plug-in" procedure for future observations from the inverse...

Bayesian inference in life tests based on exponential model with outliers when sample size is a random variable.

G. S. Lingappaiah (1990)

Trabajos de Estadística

This paper deals with the problem of prediction of the order statistics in a future sample. The underlying model is exponential. An outlier is present in the sample drawn, and the sample size is considered a random variable. First, an outlier of type θδ in the exponential model is treated, and the actual predictive distribution of the order statistics is obtained. As an extension, the two-sample problem is also taken up. Finally, an outlier of type θ + δ is dealt with, and the predictive distribution is now expressed...
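
The two-sample prediction setup can be illustrated by Monte Carlo in the simplest outlier-free exponential case: place a conjugate gamma prior on the rate, update on the observed sample, then simulate a future order statistic. This is an assumed toy setting for illustration, not the paper's closed-form predictive distribution:

```python
import random

random.seed(3)

# observed exponential sample with rate theta; conjugate Gamma(a, b) prior
data = [0.8, 1.3, 0.4, 2.1, 0.9]
a, b = 1.0, 1.0
a_post = a + len(data)        # posterior shape
b_post = b + sum(data)        # posterior rate

def predictive_order_stat(r, m, draws=5000):
    """Simulate the r-th order statistic of a future sample of size m:
    draw theta from the posterior, then a fresh sample from Exp(theta)."""
    sims = []
    for _ in range(draws):
        theta = random.gammavariate(a_post, 1.0 / b_post)  # scale = 1/rate
        future = sorted(random.expovariate(theta) for _ in range(m))
        sims.append(future[r - 1])
    return sims

sims = predictive_order_stat(2, 5)  # second smallest of a future sample of 5
```

Quantiles of `sims` then give simulation-based prediction bounds for that future order statistic.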

Bayesian like R- and M-estimators of change points

Jaromír Antoch, Marie Hušková (2000)

Discussiones Mathematicae Probability and Statistics

The purpose of this paper is to study Bayesian like R- and M-estimators of change point(s). These estimators have smaller variance than the related argmax-type estimators. Confidence intervals for the change point, based on exchangeability arguments, are constructed. Finally, the theoretical results are illustrated on a real data set.
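
The contrast between an argmax-type estimator and a smoothed, Bayesian-like one can be sketched for a change in mean with a CUSUM statistic: the smoothed version replaces the argmax by an exponentially weighted average over candidate change points. A toy illustration of that general idea, not the paper's R- or M-estimators:

```python
import math

def cusum_stats(x):
    """|S_k| with S_k = sum_{i<=k}(x_i - mean), for candidates k = 1..n-1."""
    n = len(x)
    xbar = sum(x) / n
    s, stats = 0.0, []
    for k in range(n - 1):
        s += x[k] - xbar
        stats.append(abs(s))
    return stats

def argmax_change_point(x):
    """Classical argmax-type estimator of the change point."""
    stats = cusum_stats(x)
    return max(range(len(stats)), key=lambda k: stats[k]) + 1

def smoothed_change_point(x, c=1.0):
    """Bayesian-like estimator: an exponentially weighted average of the
    candidates, which smooths the argmax and typically has smaller variance."""
    stats = cusum_stats(x)
    top = max(stats)
    w = [math.exp(c * (s - top)) for s in stats]   # stabilized weights
    return sum((k + 1) * wk for k, wk in enumerate(w)) / sum(w)

x = [0.0] * 20 + [2.0] * 20   # mean shifts after observation 20
k_hat = argmax_change_point(x)
k_tilde = smoothed_change_point(x)
```

On noisier data the weighted average varies less from sample to sample than the raw argmax, which is the variance reduction the abstract claims.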

Bayesian nonparametric estimation of hazard rate in monotone Aalen model

Jana Timková (2014)

Kybernetika

This text describes a method of estimating the hazard rate of survival data following the monotone Aalen regression model. The proposed approach is based on techniques introduced by Arjas and Gasbarra [4]. The unknown functional parameters are assumed to be a priori piecewise constant on intervals of varying count and size. The estimates are obtained with the aid of the Gibbs sampler and its variants. The performance of the method is explored by simulations. The results indicate that the...

Bias-variance decomposition in Genetic Programming

Taras Kowaliw, René Doursat (2016)

Open Mathematics

We study properties of Linear Genetic Programming (LGP) through several regression and classification benchmarks. In each problem, we decompose the results into bias and variance components, and explore the effect of varying certain key parameters on the overall error and its decomposed contributions. These parameters are the maximum program size, the initial population, and the function set used. We confirm and quantify several insights into the practical usage of GP, most notably that (a) the...
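
The decomposition the abstract relies on is the standard identity MSE = bias² + variance, estimated by refitting a model on many independent training sets and comparing the spread of its predictions with their systematic offset. A self-contained numerical check of the identity, using two hypothetical estimators rather than LGP programs:

```python
import random

random.seed(0)

def simulate(estimator, target, n_train, n_reps):
    """Fit the estimator on many independent training sets and decompose
    its mean squared error at `target` into bias^2 and variance."""
    preds = []
    for _ in range(n_reps):
        sample = [target + random.gauss(0, 1) for _ in range(n_train)]
        preds.append(estimator(sample))
    mean_pred = sum(preds) / len(preds)
    bias_sq = (mean_pred - target) ** 2
    variance = sum((p - mean_pred) ** 2 for p in preds) / len(preds)
    mse = sum((p - target) ** 2 for p in preds) / len(preds)
    return bias_sq, variance, mse

# unbiased but noisier: the sample mean
b1, v1, m1 = simulate(lambda s: sum(s) / len(s), 3.0, 10, 2000)
# shrunken toward zero: biased, but with smaller variance
b2, v2, m2 = simulate(lambda s: 0.5 * sum(s) / len(s), 3.0, 10, 2000)
```

The shrunken estimator trades bias for variance, the same trade-off explored in the GP benchmarks when varying program size and other parameters.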

Blended φ-divergences with examples

Václav Kůs (2003)

Kybernetika

Several new examples of divergences, called blended divergences, have emerged in the recent literature. Mostly these examples are constructed by modifying or parametrizing the well-known φ-divergences. The newly introduced parameter is often called the blending parameter. In this paper we present a compact theory of blended divergences which provides a generally applicable method for finding new classes of divergences containing any two divergences D₀ and D₁ given in advance. Several examples...
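
One simple construction in this spirit is to take a convex combination of two φ-divergence generators: the blend is again convex and vanishes at 1, hence a valid φ-divergence for every blending parameter β in [0, 1]. A sketch of that construction (one particular way to blend, not the paper's general theory):

```python
import math

def phi_divergence(p, q, phi):
    """D_phi(P||Q) = sum_i q_i * phi(p_i / q_i) for discrete distributions."""
    return sum(qi * phi(pi / qi) for pi, qi in zip(p, q))

phi_kl = lambda t: t * math.log(t) - t + 1   # Kullback-Leibler generator
phi_chi2 = lambda t: (t - 1) ** 2            # Pearson chi-square generator

def blended(beta):
    """Convex blend of the two generators; still convex with phi(1) = 0,
    hence a valid phi-divergence for every beta in [0, 1]."""
    return lambda t: (1 - beta) * phi_kl(t) + beta * phi_chi2(t)

p = [0.2, 0.3, 0.5]
q = [0.4, 0.4, 0.2]
d0 = phi_divergence(p, q, blended(0.0))  # pure KL divergence
d1 = phi_divergence(p, q, blended(1.0))  # pure chi-square divergence
dh = phi_divergence(p, q, blended(0.5))  # an intermediate blend
```

Because the divergence is linear in the generator, the half-blend is exactly the average of the two endpoint divergences.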

Bootstrap in nonstationary autoregression

Zuzana Prášková (2002)

Kybernetika

The first-order autoregression model with heteroskedastic innovations is considered, and it is shown that the classical bootstrap procedure based on estimated residuals fails for the least-squares estimator of the autoregression coefficient. A different procedure, called the wild bootstrap, and its modification are considered, and their consistency in the strong sense is established under very mild moment conditions.
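
The wild bootstrap can be sketched in a few lines: instead of resampling residuals (which destroys a changing variance pattern), each fitted residual is kept in place and multiplied by an independent mean-zero sign. A minimal AR(1) illustration under assumed simulation settings, not the paper's exact modification:

```python
import random

random.seed(1)

def ols_ar1(x):
    """Least-squares estimate of rho in x_t = rho * x_{t-1} + eps_t."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

# simulate an AR(1) series with heteroskedastic innovations
n, rho = 300, 0.6
x = [0.0]
for t in range(1, n):
    sigma_t = 0.5 if t < n // 2 else 1.5   # innovation variance changes mid-sample
    x.append(rho * x[-1] + random.gauss(0, sigma_t))

rho_hat = ols_ar1(x)
resid = [x[t] - rho_hat * x[t - 1] for t in range(1, n)]

def wild_bootstrap_replicate():
    """Multiply each residual by an independent Rademacher sign, which
    preserves its (possibly changing) variance, then rebuild the series."""
    xb = [x[0]]
    for e in resid:
        xb.append(rho_hat * xb[-1] + e * random.choice([-1.0, 1.0]))
    return ols_ar1(xb)

boot = sorted(wild_bootstrap_replicate() for _ in range(500))
```

The sorted replicates `boot` approximate the sampling distribution of the estimator; the key point is that the multipliers keep each residual's variance, unlike i.i.d. residual resampling.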

Bootstrap method for central and intermediate order statistics under power normalization

Haroon Mohamed Barakat, E. M. Nigm, O. M. Khaled (2015)

Kybernetika

It has been known for a long time that for bootstrapping the distribution of the extremes under the traditional linear normalization of a sample consistently, the bootstrap sample size needs to be of smaller order than the original sample size. In this paper, we show that the same is true if we use the bootstrap for estimating a central, or an intermediate quantile under power normalization. A simulation study illustrates and corroborates theoretical results.
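
The m-out-of-n idea, resampling with a size of smaller order than the original sample, is easy to sketch for a central quantile under ordinary (linear) normalization; the power-normalization theory of the paper is beyond this toy example, and the chosen m = n^(2/3) is just one admissible rate:

```python
import math
import random

random.seed(2)

def sample_quantile(data, q):
    """Empirical q-th quantile (simple order-statistic version)."""
    s = sorted(data)
    return s[min(len(s) - 1, int(q * len(s)))]

n = 1000
data = [random.expovariate(1.0) for _ in range(n)]  # true median is ln 2

m = int(n ** (2 / 3))   # resample size of smaller order than n
reps = sorted(
    sample_quantile([random.choice(data) for _ in range(m)], 0.5)
    for _ in range(400))
lo, hi = reps[4], reps[395]   # rough 98% percentile interval for the median
```

Each replicate draws only m observations (with replacement) from the n available, which is the "smaller order" requirement the abstract refers to.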
