Displaying similar documents to “A modification of the Hartung-Knapp confidence interval on the variance component in two-variance-component models”

Distributivity of strong implications over conjunctive and disjunctive uninorms

Daniel Ruiz-Aguilera, Joan Torrens (2006)

Kybernetika

Similarity:

This paper deals with implications defined from disjunctive uninorms U by the expression I(x, y) = U(N(x), y), where N is a strong negation. The main goal is to solve the functional equation derived from the distributivity condition of these implications over conjunctive and disjunctive uninorms. Special cases are considered when the conjunctive and disjunctive uninorm are a t-norm or a t-conorm, respectively. The obtained results show many new solutions generalizing those obtained in previous works...
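
A LaTeX sketch of the objects described in this abstract; the displayed distributivity equation is the standard formulation for implications over a binary operation and is an assumption about the paper's exact statement:

% Implication generated by a disjunctive uninorm U_d and a strong negation N
% (as in the abstract), and the usual distributivity condition over a
% uninorm U (assumed formulation, not quoted from the paper):
\[
  I(x, y) = U_d\bigl(N(x), y\bigr),
  \qquad
  I\bigl(x, U(y, z)\bigr) = U\bigl(I(x, y),\, I(x, z)\bigr),
  \qquad x, y, z \in [0, 1].
\]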

On Ozeki’s inequality for power sums

Horst Alzer (2000)

Czechoslovak Mathematical Journal

Similarity:

Let p ∈ (0, 1) be a real number and let n ≥ 2 be an even integer. We determine the largest value c_n(p) such that the inequality ∑_{i=1}^{n} |a_i|^p ≥ c_n(p) holds for all real numbers a_1, ..., a_n which are pairwise distinct and satisfy min_{i≠j} |a_i − a_j| = 1. Our theorem completes results of Ozeki, Mitrinović-Kalajdžić, and Russell, who found the optimal value c_n(p) in the case p > 0 and n odd, and in the case p ≥ 1 and n even.
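
The extremal problem from the abstract, written out in LaTeX (the notation c_n(p) follows the abstract):

% Largest constant c_n(p) such that the power-sum inequality holds
% for pairwise distinct reals at mutual distance at least 1:
\[
  \sum_{i=1}^{n} |a_i|^{p} \;\ge\; c_n(p)
  \qquad\text{whenever } a_1, \dots, a_n \text{ are pairwise distinct and }
  \min_{i \ne j} |a_i - a_j| = 1 .
\]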

On the structure of continuous uninorms

Paweł Drygaś (2007)

Kybernetika

Similarity:

Uninorms were introduced by Yager and Rybalov [13] as a generalization of triangular norms and conorms. We ask about properties of an increasing, associative, continuous binary operation U on the unit interval with neutral element e ∈ [0, 1]. If the operation U is continuous, then e = 0 or e = 1. So, we consider operations which are continuous on the open unit square. As a result, every associative, increasing binary operation with neutral element e ∈ (0, 1) which is continuous on the open unit square may be...
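
For reference, a LaTeX statement of the neutral-element axiom mentioned above, with a comment on the well-known block structure of uninorms (a standard fact, not a claim taken from this particular paper):

% An operation U : [0,1]^2 -> [0,1] as in the abstract: increasing,
% associative, with neutral element e:
\[
  U(e, x) = U(x, e) = x \qquad \text{for all } x \in [0, 1].
\]
% Classically, a uninorm with e in (0,1) acts (after rescaling) as a
% t-norm on [0,e]^2 and as a t-conorm on [e,1]^2; the abstract concerns
% operations that are continuous only on the open unit square (0,1)^2.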

Optimality conditions for maximizers of the information divergence from an exponential family

František Matúš (2007)

Kybernetika

Similarity:

The information divergence of a probability measure P from an exponential family over a finite set is defined as the infimum of the divergences of P from Q, where Q ranges over the family. All directional derivatives of this divergence are explicitly found. To this end, the behaviour of the conjugate of a log-Laplace transform on the boundary of its domain is analysed. The first order conditions for P to be a maximizer of the divergence from the family are presented, including new ones when P is not projectable...
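
In LaTeX, the quantity the abstract refers to (writing ℰ for the exponential family and D(·‖·) for the divergence; these symbols are notational assumptions, not quoted from the paper):

% Information divergence of P from an exponential family E over a finite set X:
\[
  D(P \,\|\, \mathcal{E}) \;=\; \inf_{Q \in \mathcal{E}} D(P \,\|\, Q),
  \qquad
  D(P \,\|\, Q) \;=\; \sum_{x \in X} P(x) \log \frac{P(x)}{Q(x)} .
\]
% The paper studies maximizers of the map P -> D(P || E) and the
% directional derivatives of this function.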