Previous Page 3

Displaying 41 – 59 of 59


Posterior odds ratios for selected regression hypotheses.

Arnold Zellner, Aloysius Siow (1980)

Trabajos de Estadística e Investigación Operativa

Bayesian posterior odds ratios for frequently encountered hypotheses about parameters of the normal linear multiple regression model are derived and discussed. For the particular prior distributions utilized, it is found that the posterior odds ratios can be well approximated by functions that are monotonic in the usual sampling theory F statistics. Some implications of these findings and the relation of our work to the pioneering work of Jeffreys and others are considered. Tabulations of odds ratios...
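The sampling-theory F statistics mentioned in the abstract arise from comparing nested linear regression models. As a minimal illustration of that classical statistic (the data and variable names below are hypothetical, and this is the standard F test, not anything specific to the Zellner–Siow analysis):

```python
import numpy as np
from scipy.stats import f as f_dist

def f_test_nested(y, X_full, X_restricted):
    """Classical F statistic for nested linear models: tests that the
    extra columns of X_full (beyond X_restricted) have zero coefficients."""
    n = len(y)
    p_full = X_full.shape[1]
    q = p_full - X_restricted.shape[1]  # number of restrictions

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return r @ r

    rss_r, rss_f = rss(X_restricted), rss(X_full)
    F = ((rss_r - rss_f) / q) / (rss_f / (n - p_full))
    p_value = f_dist.sf(F, q, n - p_full)
    return F, p_value

# Hypothetical data: does x2 explain anything beyond x1?
rng = np.random.default_rng(0)
n = 100
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)   # x2 is truly irrelevant
X_r = np.column_stack([np.ones(n), x1])
X_f = np.column_stack([np.ones(n), x1, x2])
F, p = f_test_nested(y, X_f, X_r)
```

The paper's point is that, under the priors used, the posterior odds ratio is (approximately) a monotone function of this same F, so the two approaches order the evidence in the same way even though they calibrate it differently.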

Quadratic estimations in mixed linear models

Štefan Varga (1991)

Applications of Mathematics

In the paper, four types of estimators of a linear function of the variance components are presented for the mixed linear model 𝐘 = 𝐗β + 𝐞 with expectation E(𝐘) = 𝐗β and covariance matrix D(𝐘) = θ₁𝐕₁ + … + θₘ𝐕ₘ.

Selective F tests for sub-normal models

Célia Maria Pinto Nunes, João Tiago Mexia (2003)

Discussiones Mathematicae Probability and Statistics

F tests that are especially powerful against selected alternatives are built for sub-normal models. In these models the observation vector is the sum of a vector representing the measured quantities and an independent normal error vector. The results presented here generalize the treatment given by Dias (1994) for normal fixed-effects models, and consider the testing of hypotheses on the ordering of mean values and components.

Sparsity in penalized empirical risk minimization

Vladimir Koltchinskii (2009)

Annales de l'I.H.P. Probabilités et statistiques

Let (X, Y) be a random couple in S×T with unknown distribution P. Let (X1, Y1), …, (Xn, Yn) be i.i.d. copies of (X, Y), Pn being their empirical distribution. Let h1, …, hN: S↦[−1, 1] be a dictionary consisting of N functions. For λ ∈ ℝ^N, denote f_λ := ∑_{j=1}^N λ_j h_j. Let ℓ: T×ℝ↦ℝ be a given loss function, which is convex with respect to the second variable. Denote (ℓ•f)(x, y) := ℓ(y; f(x)). We study the following penalized empirical risk minimization problem λ̂_ε := argmin_{λ∈ℝ^N} [P_n(ℓ•f_λ) + ε‖λ‖_p^p], which is an empirical version of the problem λ_ε := argmin_{λ∈ℝ^N} [P(ℓ•f_λ) + ε‖λ‖_p^p] (here ε ≥ 0...
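For p = 1 and squared-error loss, the penalized problem in the abstract is the lasso, and a direct (if naive) way to compute the minimizer in that special case is proximal gradient descent (ISTA). A minimal sketch on hypothetical synthetic data — this illustrates the estimator's sparsity, not the paper's oracle-inequality analysis:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: coordinatewise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, eps, n_iter=2000):
    """Minimize (1/n)||y - X @ lam||^2 + eps * ||lam||_1 by ISTA."""
    n, N = X.shape
    # 1/L step size, L a Lipschitz constant of the smooth part's gradient.
    L = 2.0 * np.linalg.norm(X, 2) ** 2 / n
    lam = np.zeros(N)
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ lam - y) / n
        lam = soft_threshold(lam - grad / L, eps / L)
    return lam

# Hypothetical sparse setting: only the first 3 of N = 20 dictionary
# functions actually enter the regression function.
rng = np.random.default_rng(1)
n, N = 200, 20
X = rng.normal(size=(n, N))
lam_true = np.zeros(N)
lam_true[:3] = [3.0, -2.0, 1.5]
y = X @ lam_true + 0.1 * rng.normal(size=n)
lam_hat = lasso_ista(X, y, eps=0.1)   # inactive coordinates shrink to zero
```

The ℓ1 penalty sets most coordinates of λ̂_ε exactly to zero, which is the "sparsity" phenomenon the title refers to.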

The linear model with variance-covariance components and jackknife estimation

Jaromír Kudeláš (1994)

Applications of Mathematics

Let θ* be a biased estimate of the parameter ϑ based on all observations x₁, …, xₙ, and let θ*₋ᵢ (i = 1, 2, …, n) be the same estimate of the parameter ϑ obtained after deletion of the i-th observation. If the expectations of the estimators θ* and θ*₋ᵢ are expressed as E(θ*) = ϑ + a(n)b(ϑ) and E(θ*₋ᵢ) = ϑ + a(n−1)b(ϑ), i = 1, 2, …, n, where a(n) is a known sequence of real numbers and b(ϑ) is a function of ϑ, then this system of equations can be regarded as a linear model. The least squares method gives the generalized jackknife estimator. Using this method, it is possible to obtain the unbiased...
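In the special case a(n) = 1/n, the bias structure above is removed exactly by Quenouille's classical jackknife, n·θ* − (n−1)·(mean of the leave-one-out estimates). A small sketch — the variance-MLE example, for which E(θ*) = σ² − σ²/n, is our own illustration, not taken from the paper:

```python
import numpy as np

def jackknife(estimator, x):
    """Quenouille's jackknife: n*theta - (n-1)*mean of leave-one-out
    estimates.  Removes bias of the form b(theta)/n exactly."""
    x = np.asarray(x)
    n = len(x)
    theta = estimator(x)
    loo = np.array([estimator(np.delete(x, i)) for i in range(n)])
    return n * theta - (n - 1) * loo.mean()

# The variance MLE divides by n, so E = sigma^2 * (1 - 1/n):
# here a(n) = 1/n and b(theta) = -sigma^2.
mle_var = lambda x: np.var(x)
rng = np.random.default_rng(2)
x = rng.normal(size=30)
theta_jack = jackknife(mle_var, x)
# For this estimator, the jackknife correction coincides exactly with
# the unbiased sample variance np.var(x, ddof=1), which divides by n - 1.
```

This identity is easy to check by hand: n·E(θ*) − (n−1)·E(θ*₋ᵢ) = n(ϑ + b/n) − (n−1)(ϑ + b/(n−1)) = ϑ.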

Variance function estimation via model selection

Teresa Ledwina, Jan Mielniczuk (2010)

Applicationes Mathematicae

The problem of estimating an unknown variance function in a random design Gaussian heteroscedastic regression model is considered. Both the regression function and the logarithm of the variance function are modelled by piecewise polynomials. A finite collection of such parametric models based on a family of partitions of support of an explanatory variable is studied. Penalized model selection criteria as well as post-model-selection estimates are introduced based on Maximum Likelihood (ML) and Restricted...
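A toy version of such penalized selection — using a BIC-type criterion to pick the degree of a polynomial model for the log-variance of a mean-zero heteroscedastic Gaussian sample. This is a deliberate simplification of the paper's setting (a single global polynomial rather than piecewise polynomials over a family of partitions), and all names and data below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def fit_logvar_poly(x, y, degree):
    """ML fit of a mean-zero Gaussian model with polynomial log-variance:
    y_i ~ N(0, exp(P(x_i))).  The negative log-likelihood (up to an
    additive constant) is convex in the polynomial coefficients."""
    P = np.vander(x, degree + 1)   # columns x^degree, ..., x, 1

    def nll(c):
        g = P @ c                  # log sigma^2 at each x_i
        return 0.5 * np.sum(g + y ** 2 * np.exp(-g))

    res = minimize(nll, np.zeros(degree + 1), method="BFGS")
    return res.x, nll(res.x)

def select_degree(x, y, max_degree=4):
    """BIC-type penalized criterion over candidate polynomial degrees."""
    n = len(y)
    scores = []
    for d in range(max_degree + 1):
        _, nll_val = fit_logvar_poly(x, y, d)
        scores.append(2.0 * nll_val + (d + 1) * np.log(n))
    return int(np.argmin(scores))

# Hypothetical heteroscedastic sample with log-variance linear in x.
rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, size=500)
y = np.exp(0.5 * (1.0 + 2.0 * x)) * rng.normal(size=500)  # log var = 1 + 2x
best_d = select_degree(x, y)   # the penalty discourages overfitting
```

The penalty term (d + 1)·log n plays the role of the model-dimension penalties studied in the paper; with a clearly non-constant variance, the criterion rejects the constant model.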

