Displaying 1 – 20 of 68

A comparison of parametric models for mortality graduation. Application to mortality data for the Valencia Region (Spain).

Ana Debón, Francisco Montes, Ramón Sala (2005)


Parametric graduation of mortality data aims to estimate death rates satisfactorily by means of an age-dependent function whose parameters are fitted to the crude rates obtained directly from the data. This paper reviews the most commonly used parametric models and compares the results obtained when each of them is applied to the mortality data for the Valencia Region. As a result of the comparison, we conclude that...
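The excerpt does not reproduce the models compared, but the classical Gompertz law μ(x) = a·e^(bx) is one standard parametric graduation model and illustrates the fitting idea: because the law is linear on the log scale, the parameters can be adjusted to crude rates by ordinary least squares. The sketch below uses synthetic crude rates (the parameter values and noise level are assumptions for illustration, not the paper's data).

```python
import numpy as np

# Synthetic crude death rates following a Gompertz law mu(x) = a * exp(b * x).
# (a, b and the noise level are illustrative, not from the Valencia data.)
ages = np.arange(30, 91)
a_true, b_true = 5e-5, 0.09
rng = np.random.default_rng(0)
crude = a_true * np.exp(b_true * ages) * rng.lognormal(0.0, 0.05, ages.size)

# Gompertz is linear on the log scale: log mu(x) = log a + b * x,
# so least squares on log(crude) graduates the rates.
b_hat, log_a_hat = np.polyfit(ages, np.log(crude), 1)
a_hat = np.exp(log_a_hat)
graduated = a_hat * np.exp(b_hat * ages)
print(round(b_hat, 3))
```

Graduated rates are then the fitted curve evaluated at each age, smoothing out the sampling noise in the crude rates.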

A posteriori disclosure risk measure for tabular data based on conditional entropy.

Anna Oganian, Josep Domingo-Ferrer (2003)


Statistical database protection, also known as Statistical Disclosure Control (SDC), is a part of information security which tries to prevent published statistical information (tables, individual records) from disclosing the contribution of specific respondents. This paper deals with the assessment of the disclosure risk associated with the release of tabular data. So-called sensitivity rules are currently used to measure the disclosure risk for tables. These rules operate on an a priori basis:...
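The paper's specific measure is not given in the excerpt, but the underlying quantity, Shannon conditional entropy H(X|Y), can be computed from a joint frequency table as a generic sketch: low H(X|Y) means the released table Y reveals much about the sensitive attribute X.

```python
import numpy as np

def conditional_entropy(joint):
    """H(X|Y) in bits for a joint frequency table with X in rows, Y in columns."""
    p = joint / joint.sum()          # joint probabilities p(x, y)
    p_y = p.sum(axis=0)              # marginal p(y)
    h = 0.0
    for i in range(p.shape[0]):
        for j in range(p.shape[1]):
            if p[i, j] > 0:          # zero cells contribute nothing
                h -= p[i, j] * np.log2(p[i, j] / p_y[j])
    return h

# If X is determined by Y, H(X|Y) = 0 (maximal disclosure risk);
# if X is independent of Y and uniform over 2 values, H(X|Y) = 1 bit.
determined = np.array([[10, 0], [0, 10]])
independent = np.array([[5, 5], [5, 5]])
print(conditional_entropy(determined), conditional_entropy(independent))
```

This reading of entropy as residual uncertainty about respondents, given the published table, is what makes it usable as an a posteriori risk measure.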

Análisis factorial de tablas mixtas: nuevas equivalencias entre ACP normado y ACM.

M.ª Isabel Landaluce Calvo (1997)


This paper shows that factor analysis of mixed tables is possible without altering the nature of either of the two sets, qualitative and quantitative, that make them up. We propose coding the indicators of each qualitative variable appropriately, respecting its initial structure as far as possible, and then applying normed Principal Component Analysis (PCA) to the whole set of variables. The factors obtained for...
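The paper's contribution is the coding of the qualitative indicators, which the excerpt does not detail; as background only, normed PCA itself amounts to standardizing the variables and diagonalizing their correlation matrix. A minimal sketch on synthetic quantitative data (the data and dimensions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
X[:, 1] += 0.8 * X[:, 0]        # induce correlation between two variables

# Normed PCA: centre, scale to unit variance, then diagonalize
# the correlation matrix of the standardized data.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
corr = Z.T @ Z / Z.shape[0]
eigvals, eigvecs = np.linalg.eigh(corr)          # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
scores = Z @ eigvecs                             # factor scores per observation

# Eigenvalues of a correlation matrix sum to the number of variables.
print(round(eigvals.sum(), 6))
```

Standardization is what makes the analysis "normed": each variable contributes unit variance, so no variable dominates through its scale alone.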

Análisis factorial múltiple como técnica de estudio de la estabilidad de los resultados de un análisis de componentes principales.

Elena Abascal Fernández, M.ª Isabel Landaluce Calvo (2002)


A characteristic of factorial methods is that they always produce results, and these results are not a mere description: they reveal the structure present in the data, hence the need to study their validity. The nature of this structure must be analysed and the stability of the results studied. We consider that the best criterion is the analysis of the stability of the maps obtained in the factor analysis. Multiple Factor Analysis (AFM),...

Analyse géométrique des données : une enquête sur le racisme

Philippe Bonnet, Brigitte Le Roux, Gérard Lemaine (1996)

Mathématiques et Sciences Humaines

In this article, we present a statistical approach to the analysis of a questionnaire, applied to a survey on racism. The methodology followed is that of structured data analysis, inspired by specific comparisons in analysis of variance and applied to geometric data (Euclidean cloud). The implementation is carried out using the data query language (LID) built into the EyeLID software.

Application of HLM to data with multilevel structure

Vítor Valente, Teresa A. Oliveira (2011)

Discussiones Mathematicae Probability and Statistics

Many data sets analyzed in human and social sciences have a multilevel or hierarchical structure. By hierarchy we mean that units of a certain level (also referred to as micro units) are grouped into, or nested within, higher-level (or macro) units. In these cases, the units within a cluster tend to be more alike than units from other clusters, i.e., they are correlated. Thus, unlike in the classical setting where there is a single source of variation between observational units, the heterogeneity...
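The within-cluster similarity described above is conventionally summarized by the intraclass correlation ICC = σ²_between / (σ²_between + σ²_within), the baseline quantity behind HLM. A sketch using method-of-moments estimates from a one-way ANOVA on simulated balanced data (cluster counts and variances are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n_groups, n_per = 50, 20
sigma_b, sigma_w = 1.0, 2.0     # assumed between- and within-cluster std devs

# Simulate y_ij = u_j + e_ij: a random intercept per cluster plus noise.
u = rng.normal(0, sigma_b, size=(n_groups, 1))
y = u + rng.normal(0, sigma_w, size=(n_groups, n_per))

# One-way ANOVA method of moments for balanced data:
# MSW estimates sigma_w^2, and (MSB - MSW) / n_per estimates sigma_b^2.
group_means = y.mean(axis=1)
msw = ((y - group_means[:, None]) ** 2).sum() / (n_groups * (n_per - 1))
msb = n_per * ((group_means - y.mean()) ** 2).sum() / (n_groups - 1)
var_b = max((msb - msw) / n_per, 0.0)
icc = var_b / (var_b + msw)     # intraclass correlation
print(round(icc, 3))
```

A nonzero ICC is exactly the violation of independence that makes classical single-level regression inappropriate and motivates hierarchical linear models.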

Automatic error localisation for categorical, continuous and integer data.

Ton de Waal (2005)


Data collected by statistical offices generally contain errors, which have to be corrected before reliable data can be published. This correction process is referred to as statistical data editing. At statistical offices, certain rules, so-called edits, are often used during the editing process to determine whether a record is consistent or not. Inconsistent records are considered to contain errors, while consistent records are considered error-free. In this article we focus on automatic error localisation...
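Error localisation in this tradition follows the Fellegi–Holt principle: change as few fields as possible so that every edit rule is satisfied. A brute-force sketch over categorical domains (the record, domains, and edits below are hypothetical, and exhaustive search stands in for the paper's actual algorithm):

```python
from itertools import combinations, product

# Hypothetical categorical record, domains, and edit rules.
domains = {
    "age_class": ["child", "adult"],
    "marital":   ["single", "married"],
    "employed":  ["yes", "no"],
}
edits = [
    lambda r: not (r["age_class"] == "child" and r["marital"] == "married"),
    lambda r: not (r["age_class"] == "child" and r["employed"] == "yes"),
]

def localise(record):
    """Smallest set of fields that can be changed to satisfy all edits."""
    fields = list(domains)
    for k in range(len(fields) + 1):                 # try 0, 1, 2, ... changes
        for subset in combinations(fields, k):
            for values in product(*(domains[f] for f in subset)):
                candidate = dict(record, **dict(zip(subset, values)))
                if all(edit(candidate) for edit in edits):
                    return set(subset)

faulty = {"age_class": "child", "marital": "married", "employed": "yes"}
print(localise(faulty))
```

Here both edits implicate `age_class`, so changing that single field repairs the record; a consistent record returns the empty set, i.e. no error is localised.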
