An alternative class of estimators in double sampling.
Most earlier work on clustering has focused on numerical data, whose inherent geometric properties can be exploited to naturally define distance functions between data points. Recently, the problem of clustering categorical data has begun to draw interest. However, the computational cost makes most of the previous algorithms unacceptable for clustering very large databases. The k-means algorithm is well known for its efficiency in this respect. At the same time, working only on...
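One standard way to carry the k-means idea over to categorical data is the k-modes scheme: replace Euclidean distance with simple matching dissimilarity (count of mismatched attributes) and replace cluster means with per-attribute modes. The sketch below is an illustrative, minimal implementation under those assumptions, not the exact algorithm of the abstract; categories are assumed to be integer-coded.

```python
import numpy as np

def k_modes(X, k, n_iter=10, seed=0):
    """Minimal k-modes sketch: a k-means-style clustering for categorical
    data using simple matching dissimilarity and per-attribute modes.
    Illustrative only; X must contain nonnegative integer-coded categories."""
    rng = np.random.default_rng(seed)
    modes = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign each row to the mode with the fewest mismatched attributes
        d = (X[:, None, :] != modes[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                # new mode: the most frequent category in each attribute
                modes[j] = [np.bincount(col).argmax() for col in members.T]
    return labels, modes

# toy demo: two well-separated categorical groups (integer-coded)
X = np.array([[0, 0, 0]] * 5 + [[1, 1, 1]] * 5)
labels, modes = k_modes(X, k=2)
```

Because the update step only needs a frequency count per attribute, each iteration is linear in the number of rows, which is the efficiency property the abstract attributes to k-means.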
We revisit Sklar’s Theorem and give another proof, primarily based on the use of right quantile functions. To this end we slightly generalise the distributional transform approach of Rüschendorf and establish some new results, including a rigorous characterisation of an almost surely existing “left-invertibility” of distribution functions.
The main purpose of the paper is to present a statistical model-based iterative approach to the problem of image reconstruction from projections. This newly formulated reconstruction algorithm is based on a maximum likelihood method with an objective adjusted to the probability distribution of measured signals obtained from an x-ray computed tomograph with parallel beam geometry. Various forms of objectives are tested. Experimental results show that an objective that is exactly tailored statistically...
It is known that the identifiability of multivariate mixtures reduces to a question in algebraic geometry. We solve the question by studying certain generators in the ring of polynomials in vector variables, invariant under the action of the symmetric group.
The contribution deals with an application of the nonparametric version of the Cox regression model to the analysis and modeling of the failure rate of technical devices. The objective is to recall the method of statistical analysis of such a model, to adapt it to a real case study, and thereby to demonstrate the flexibility of the Cox model. The goodness-of-fit of the model is also tested, with the aid of a graphical test procedure based on generalized residuals.
Study of the distribution of the product of two normally distributed variables dates from the first part of the twentieth century. The first works on this issue, [1] and [2], showed that under certain conditions the product could be considered normally distributed. A more recent approach is [3], which studied approximations to the density function of the product using three methods: numerical integration, Monte Carlo simulation, and an analytical approximation to the result using the normal distribution....
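The Monte Carlo route mentioned above can be sketched as follows: simulate the two normals, form the product, and compare the sample moments with the exact moments of the product of independent normals. The parameter values below are illustrative choices, not taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(42)

# illustrative parameters for two independent normals X and Y
mu_x, sigma_x = 2.0, 0.5
mu_y, sigma_y = 3.0, 0.4
n = 1_000_000

x = rng.normal(mu_x, sigma_x, n)
y = rng.normal(mu_y, sigma_y, n)
z = x * y  # Monte Carlo sample of the product

# exact moments of the product of independent normals:
# E[XY] = mu_x * mu_y
# Var(XY) = mu_x^2 sigma_y^2 + mu_y^2 sigma_x^2 + sigma_x^2 sigma_y^2
mean_exact = mu_x * mu_y                                   # 6.0
var_exact = (mu_x**2 * sigma_y**2 + mu_y**2 * sigma_x**2
             + sigma_x**2 * sigma_y**2)                    # 2.93

mean_mc, var_mc = z.mean(), z.var()
```

When the coefficients of variation sigma/mu are small, as here, a normal density with these two moments is already a good approximation to the product's distribution, which is the regime identified in the early works.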
An approximate necessary condition for the optimal bandwidth choice is derived. This condition is used to construct an iterative bandwidth selector. The algorithm is based on resampling and step-wise fitting of the bandwidth to the density estimator from the previous iteration. Examples show fast convergence of the algorithm to a bandwidth value surprisingly close to the optimal one, regardless of the initial knowledge of the unknown density.
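The resample-and-refit loop described above can be illustrated with a smoothed-bootstrap sketch: at each step, draw a bootstrap sample from the current kernel density estimate, then pick the candidate bandwidth whose estimate on the bootstrap sample best matches the previous iteration's estimate. This is a hedged illustration of the general idea, not the authors' exact selector or criterion.

```python
import numpy as np

def kde(x_eval, data, h):
    """Gaussian kernel density estimate evaluated on a grid."""
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def iterate_bandwidth(data, h0, n_iter=5, seed=0):
    """Step-wise bandwidth refinement via smoothed-bootstrap resampling
    (an illustrative sketch of the iterative scheme, with assumed details)."""
    rng = np.random.default_rng(seed)
    n = len(data)
    grid = np.linspace(data.min() - 1.0, data.max() + 1.0, 200)
    dx = grid[1] - grid[0]
    h = h0
    for _ in range(n_iter):
        f_ref = kde(grid, data, h)  # density estimate from the previous step
        # smoothed bootstrap: resample a data point, add N(0, h^2) noise
        boot = rng.choice(data, size=n) + rng.normal(0.0, h, size=n)
        # refit: candidate bandwidth whose KDE of the bootstrap sample is
        # closest (integrated squared error) to the reference estimate
        candidates = h * np.geomspace(0.5, 2.0, 21)
        ise = [np.sum((kde(grid, boot, hc) - f_ref) ** 2) * dx
               for hc in candidates]
        h = candidates[int(np.argmin(ise))]
    return h

rng = np.random.default_rng(1)
sample = rng.normal(0.0, 1.0, 200)
h_final = iterate_bandwidth(sample, h0=1.0)  # deliberately crude start
```

The deliberately crude starting value h0 = 1.0 mimics the abstract's point that the selector does not require good initial knowledge of the unknown density.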