Displaying 61 – 80 of 107

On the connection between cherry-tree copulas and truncated R-vine copulas

Edith Kovács, Tamás Szántai (2017)

Kybernetika

Vine copulas are a flexible way of modeling dependences using only pair-copulas as building blocks. However, as the number of variables grows, the problem quickly becomes intractable. To deal with this problem, Brechmann et al. proposed truncated R-vine copulas. A truncated R-vine copula has the very useful property that it can be constructed using only pair-copulas and a smaller number of conditional pair-copulas. In our earlier papers we introduced the concept of cherry-tree copulas. In this...
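For orientation (a standard textbook factorization, not quoted from the paper), a three-variable D-vine density factors into marginals and pair-copula densities as

\[
f(x_1,x_2,x_3)=\prod_{i=1}^{3} f_i(x_i)\, c_{12}\bigl(F_1(x_1),F_2(x_2)\bigr)\, c_{23}\bigl(F_2(x_2),F_3(x_3)\bigr)\, c_{13|2}\bigl(F_{1|2}(x_1\mid x_2),F_{3|2}(x_3\mid x_2)\bigr),
\]

and truncating the vine at level 1 replaces the conditional pair-copula density c_{13|2} by the independence copula density 1, so that only unconditional pair-copulas remain.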

On the control of the difference between two Brownian motions: a dynamic copula approach

Thomas Deschatre (2016)

Dependence Modeling

We propose new copulae to model the dependence between two Brownian motions and to control the distribution of their difference. Our approach is based on the copula between the Brownian motion and its reflection. We show that the class of admissible copulae for the Brownian motions is not limited to the class of Gaussian copulae and that it also contains asymmetric copulae. These copulae allow the survival function of the difference between two Brownian motions to have a higher value in the right...
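As a minimal numerical sketch (not the paper's construction, whose details may differ): one common way to build a reflected copy of a Brownian motion is the reflection coupling about a barrier b, and one can then look at the empirical distribution of the difference of the two paths. All parameter names below (n_paths, n_steps, b) are ours.

import numpy as np

# Simulate Brownian paths and a reflected copy: identical to B before the
# first hitting time of b, reflected as 2*b - B afterwards.
rng = np.random.default_rng(0)
n_paths, n_steps, T, b = 10000, 1000, 1.0, 0.5
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dW, axis=1)                    # Brownian paths
hit = np.maximum.accumulate(B, axis=1) >= b  # running max has reached b?
R = np.where(hit, 2.0 * b - B, B)            # reflected copy of B
diff = B[:, -1] - R[:, -1]                   # difference at terminal time
print("empirical P(B_T - R_T > 0.5):", np.mean(diff > 0.5))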

On the control of the difference between two Brownian motions: an application to energy markets modeling

Thomas Deschatre (2016)

Dependence Modeling

We derive a model based on the structure of dependence between a Brownian motion and its reflection with respect to a barrier. The dependence structure presents two states of correlation: one of comonotonicity with positive correlation and one of countermonotonicity with negative correlation. This model of dependence between two Brownian motions B1 and B2 allows the value of [...] to be higher than 1/2 when x is close to 0, which is not the case when the dependence is modeled by a constant...
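A hedged sketch of the two-state idea (our toy version, not the paper's model): let the increments of B2 track those of B1 with sign +1 (comonotone state) until B1 first reaches a barrier b, and with sign -1 (countermonotone state) afterwards. The switching rule and parameter names here are illustrative.

import numpy as np

# Two-state dependence: increments perfectly positively correlated before
# the barrier is hit, perfectly negatively correlated after (up to
# discretization of the hitting time).
rng = np.random.default_rng(1)
n_steps, dt, b = 1000, 1e-3, 0.3
dB1 = rng.normal(0.0, np.sqrt(dt), n_steps)
B1 = np.cumsum(dB1)
state = np.where(np.maximum.accumulate(B1) < b, 1.0, -1.0)  # +1 then -1
B2 = np.cumsum(state * dB1)  # comonotone with B1, then countermonotone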

On the convergence of the Bhattacharyya bounds in the multiparametric case

Abdulghani Alharbi (1994)

Applicationes Mathematicae

Shanbhag (1972, 1979) showed that the diagonality of the Bhattacharyya matrix characterizes the set of normal, Poisson, binomial, negative binomial, gamma or Meixner hypergeometric distributions. In this note, using Shanbhag's techniques, we show that if a certain generalized version of the Bhattacharyya matrix is diagonal, then the bivariate distribution is either normal, Poisson, binomial, negative binomial, gamma or Meixner hypergeometric. Bartoszewicz (1980) extended the result of Blight and...
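For orientation (the standard definition of the matrix in question, not quoted from the paper): for a family of densities f(x; θ), the (i, j) entry of the Bhattacharyya matrix is

\[
B_{ij}(\theta)=\mathbb{E}_{\theta}\!\left[\left(\frac{1}{f(X;\theta)}\frac{\partial^{i} f(X;\theta)}{\partial \theta^{i}}\right)\left(\frac{1}{f(X;\theta)}\frac{\partial^{j} f(X;\theta)}{\partial \theta^{j}}\right)\right],\qquad i,j\ge 1,
\]

with B_{11} the Fisher information; diagonality means B_{ij} = 0 for all i ≠ j.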

On the equality of the ordinary least squares estimators and the best linear unbiased estimators in multivariate growth-curve models.

Gabriela Beganu (2007)

RACSAM

It is well known that several necessary and sufficient conditions have been proved for the ordinary least squares estimators (OLSE) to be the best linear unbiased estimators (BLUE) of the fixed effects in general linear models. The purpose of this article is to verify one of these conditions, given by Zyskind [39, 40]: there exists a matrix Q such that ΩX = XQ, where X and Ω are the design matrix and the covariance matrix, respectively. The accessibility of this condition will be shown in some...
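Zyskind's condition can be checked numerically: a matrix Q with ΩX = XQ exists exactly when every column of ΩX lies in the column space of X. A minimal sketch (the function name and the toy X, Ω are ours, not the paper's):

import numpy as np

def zyskind_condition_holds(X, Omega, tol=1e-10):
    # Does some Q satisfy X @ Q == Omega @ X? Solve in the least-squares
    # sense and test whether the residual vanishes.
    OX = Omega @ X
    Q, *_ = np.linalg.lstsq(X, OX, rcond=None)
    return np.linalg.norm(X @ Q - OX) <= tol * max(1.0, np.linalg.norm(OX))

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
Omega = np.eye(3)                          # with Omega = I, OLSE = BLUE trivially
print(zyskind_condition_holds(X, Omega))   # True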

On the exact distribution of L1(vc) of Votaw.

Giorgio Pederzoli, Puspha N. Rathie (1987)

Trabajos de Estadística

This paper deals with the exact distribution of L1(vc) of Votaw. The results are given in terms of Meijer's G-function as well as in series form suitable for computation of percentage points.
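For readers who want to evaluate such G-function expressions numerically, the mpmath library provides meijerg. A generic sketch (the parameter lists below are illustrative, not those appearing in the distribution of L1(vc)):

from mpmath import meijerg, exp

# Sanity check via the known identity G^{1,0}_{0,1}(z | -; 0) = exp(-z);
# the parameters here are illustrative only.
z = 1.0
val = meijerg([[], []], [[0], []], z)
print(val, exp(-z))  # both ~0.3678794...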

On the Jensen-Shannon divergence and the variation distance for categorical probability distributions

Jukka Corander, Ulpu Remes, Timo Koski (2021)

Kybernetika

We establish a decomposition of the Jensen-Shannon divergence into a linear combination of a scaled Jeffreys' divergence and a reversed Jensen-Shannon divergence. Upper and lower bounds for the Jensen-Shannon divergence are then found in terms of the squared (total) variation distance. The derivations rely upon the Pinsker inequality and the reverse Pinsker inequality. We use these bounds to prove the asymptotic equivalence of the maximum likelihood estimate and minimum Jensen-Shannon divergence...
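A minimal sketch of the two quantities the paper's bounds relate, for categorical distributions (the bound constants themselves are in the paper and are not reproduced here):

import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence, with the convention 0 * log(0/q) = 0.
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jensen_shannon(p, q):
    # JSD(P, Q) = (1/2) KL(P || M) + (1/2) KL(Q || M) with M the mixture.
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def variation_distance(p, q):
    # L1 distance; some authors call half of this the total variation.
    return float(np.sum(np.abs(p - q)))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.5, 0.3])
print(jensen_shannon(p, q), variation_distance(p, q))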