A solution to the marginal problem is obtained in the form of a parametric exponential (Gibbs–Markov) distribution, whose unknown parameters are obtained by an optimization procedure that agrees with the maximum likelihood (ML) estimate. Since the method is computationally demanding, we also propose an alternative approach, provided that the original basis of marginals can be appropriately extended. Then the (numerically feasible) solution can be obtained either by the maximum pseudo-likelihood...
This paper deals with a scalar response conditioned on a functional random variable. The main goal is to estimate the conditional hazard function. An asymptotic formula for the mean squared error of this estimator is derived, taking into account, as usual, the bias and variance terms.
We consider a deconvolution problem of estimating a signal blurred with random noise. The noise is assumed to be a stationary Gaussian process multiplied by a weight function εh, where h ∈ L₂(ℝ¹) and ε is a small parameter. The underlying solution is assumed to be infinitely differentiable.
For this model we find asymptotically minimax and
Bayes estimators. In the case of solutions having a finite number of derivatives, similar results were obtained in [G.K. Golubev and R.Z. Khasminskii,...
We observe an infinite-dimensional Gaussian random vector x = ξ + v, where ξ is a sequence of standard Gaussian variables and v ∈ l₂ is an unknown mean. We consider the hypothesis testing problem H₀ : v = 0 versus the alternatives H₁ : v ∈ Vε.
The sets Vε are lq-ellipsoids of semi-axes aᵢ = i⁻ˢR/ε with an lp-ellipsoid of semi-axes bᵢ = i⁻ʳρε/ε removed, or similar Besov bodies Bq,t;s(R/ε) with Besov bodies Bp,h;r(ρε/ε) removed. Here p, q, s, r (and t, h in the Besov case) are the parameters which define the sets Vε for given radii...
Let U₀ be a random vector taking values in a measurable space and having an unknown distribution P, and let U₁,...,Uₙ and Uₙ₊₁,...,Uₙ₊ₘ be independent simple random samples from P of sizes n and m, respectively. Further, let real-valued functions defined on the same space be given. Assuming that only the first sample is observed, we find a minimax predictor d⁰(n,U₁,...,Uₙ) of the vector with respect to a quadratic error loss function.
The problem of predicting integrals of stochastic processes is
considered. Linear estimators have been constructed by means of
samples at N discrete times for processes having a fixed
Hölderian regularity s > 0 in quadratic mean. It is known
that the rate of convergence of the mean squared error is of
order N⁻⁽²ˢ⁺¹⁾. In the class of analytic processes
Hₚ, p ≥ 1, we show that among all estimators,
the linear ones are optimal. Moreover, using optimal coefficient
estimators derived through...
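As an illustration of the setting only (a generic sketch, not the optimal coefficients derived in the paper), the simplest linear estimator of ∫₀¹ X(t) dt from N regular samples is a weighted sum of the observed values; the function name and the default trapezoidal weights below are hypothetical choices:

```python
import numpy as np

def linear_integral_estimator(samples, weights=None):
    """Linear estimator of integral_0^1 X(t) dt from N observations
    X(t_1), ..., X(t_N) on the regular grid t_i = (i-1)/(N-1).

    By default the (hypothetical) trapezoidal weights are used; the
    optimal coefficients in the paper depend on the regularity class
    of the process and are not reproduced here.
    """
    x = np.asarray(samples, dtype=float)
    N = len(x)
    if weights is None:
        weights = np.full(N, 1.0 / (N - 1))       # interior weights
        weights[0] = weights[-1] = 0.5 / (N - 1)  # endpoint weights
    return float(np.dot(weights, x))              # estimator is linear in x
```

For instance, on the deterministic path X(t) = t sampled at 11 regular points, the trapezoidal weights recover the exact integral 1/2; for rougher paths the weights would be tuned to the regularity s.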
Let Y ∈ ℝⁿ be a random vector with mean s and covariance matrix σ²PₙᵗPₙ, where Pₙ is some known n × n matrix. We construct a statistical procedure to estimate s, both under a moment condition on Y and under a Gaussian hypothesis. Both cases are developed for known or unknown σ². Our approach is free from any prior assumption on s and is based on non-asymptotic model selection methods. Given some collection of linear spaces {Sₘ, m ∈ ℳ}, we consider, for any m ∈ ℳ, the least-squares estimator ŝₘ of s in Sₘ....
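To illustrate the general shape of such a procedure (a minimal sketch, not the authors' exact method), penalized least-squares model selection over a given collection of linear spaces can be written as below; the constant c and the dimension-proportional penalty form are placeholder assumptions:

```python
import numpy as np

def select_model(y, bases, sigma2, c=2.0):
    """Pick the linear space minimizing ||y - s_m||^2 + pen(m).

    bases: list of (n, d_m) matrices with orthonormal columns, one per
    model m in the collection. pen(m) = c * sigma2 * d_m is a placeholder
    penalty of the usual dimension-proportional form; the penalty used
    in the paper may differ.
    """
    best = None
    for m, B in enumerate(bases):
        coef = B.T @ y                  # least-squares coefficients
        s_hat = B @ coef                # projection of y onto S_m
        crit = np.sum((y - s_hat) ** 2) + c * sigma2 * B.shape[1]
        if best is None or crit < best[0]:
            best = (crit, m, s_hat)
    return best[1], best[2]             # selected index and estimator
```

The selected estimator ŝ = ŝ_m̂ then trades off the residual sum of squares against the dimension of the fitted space.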
In this paper, we study the problem of nonparametric estimation of an unknown regression function from dependent data with sub-Gaussian errors. As a particular case, we handle the autoregressive framework. For this purpose, we consider a collection of finite-dimensional linear spaces (e.g. linear spaces spanned by wavelets or piecewise polynomials on a possibly irregular grid) and we estimate the regression function by a least-squares estimator built on a data-driven selected linear space among...
We propose a method, based on a penalised likelihood criterion, for estimating the number of non-zero components of the mean of a Gaussian vector. Following the work of Birgé and Massart on Gaussian model selection, we choose the penalty function such that the resulting estimator minimises the Kullback risk.
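A minimal sketch of this kind of procedure, assuming a Birgé–Massart-style penalty with placeholder weights and constant (not the exact Kullback-risk penalty chosen in the paper):

```python
import numpy as np

def estimate_support_size(x, c=1.1):
    """Estimate the number of non-zero mean components of a standard
    Gaussian vector x = mu + xi by minimizing a penalized criterion.

    For each candidate k, the fit keeps the k largest |x_i|. The penalty
    pen(k) = c * k * (1 + sqrt(2 * (log(n/k) + 1)))**2 uses hypothetical
    Birge-Massart-style weights L_k = log(n/k) + 1 and constant c; the
    paper's penalty, calibrated on the Kullback risk, may differ.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    sq = np.sort(x ** 2)[::-1]          # squared entries, largest first
    # rss[k] = residual sum of squares when the k largest entries are kept
    rss = np.concatenate(([np.sum(sq)], np.sum(sq) - np.cumsum(sq)))
    best_k, best_crit = 0, rss[0]       # pen(0) = 0
    for k in range(1, n + 1):
        pen = c * k * (1.0 + np.sqrt(2.0 * (np.log(n / k) + 1.0))) ** 2
        if rss[k] + pen < best_crit:
            best_k, best_crit = k, rss[k] + pen
    return best_k
```

On a vector with a few large entries and the rest near zero, the criterion stops adding components as soon as the decrease in residual no longer pays for the penalty.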