About the maximum information and maximum likelihood principles

Igor Vajda, Jiří Grim (1998)

Kybernetika

Neural networks with radial basis functions are considered, together with the Shannon information that their output carries about the input. The role of information-preserving input transformations is discussed when the network is specified by the maximum information principle and by the maximum likelihood principle. A transformation is found which simplifies the input structure in the sense that it minimizes the entropy in the class of all information-preserving transformations. Such a transformation need not be unique...
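
To make the quantities in this abstract concrete, the following minimal sketch estimates the Shannon information between a one-dimensional input and the output of a small Gaussian radial-basis layer using a histogram estimator. The centres, the width, and the scalar output are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_layer(x, centers, width):
    """Outputs of a Gaussian radial-basis layer for 1-D inputs x."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

def mutual_information(a, b, bins=20):
    """Histogram estimate of the Shannon information I(a; b) in nats."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    pa, pb = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (pa @ pb)[nz])).sum())

x = rng.normal(size=5000)                            # network input
h = rbf_layer(x, centers=np.linspace(-2, 2, 5), width=0.7)
y = h.sum(axis=1)                                    # a scalar network output

print("I(input; output) ~", round(mutual_information(x, y), 3))
```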

Aplicación de redes neuronales artificiales a la previsión de series temporales no estacionarias o no invertibles.

Raúl Pino, David de la Fuente, José Parreño, Paolo Priore (2002)

Qüestiió

In recent years there has been a marked increase of interest in applying Artificial Neural Networks to time series forecasting, seeking to exploit the undoubted advantages of these tools. In this article, forecasts are computed for non-stationary or non-invertible series, which present difficulties when one tries to forecast them with the Box-Jenkins ARIMA methodology. The advantages of applying neural networks become clearer when...
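
As a rough illustration of the setting (not the authors' experiments), the sketch below trains a small multilayer perceptron on lagged values of a synthetic non-stationary series using scikit-learn's MLPRegressor; the trend-plus-random-walk series and the lag order of 4 are assumptions made only for this example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic non-stationary series: linear trend plus a random walk (illustrative).
n = 300
series = 0.05 * np.arange(n) + np.cumsum(rng.normal(scale=0.5, size=n))

# Turn the series into a supervised problem: predict y_t from the last 4 values.
lags = 4
X = np.column_stack([series[i:n - lags + i] for i in range(lags)])
y = series[lags:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X[:-20], y[:-20])                      # hold out the last 20 points

forecast = model.predict(X[-20:])
print("mean absolute error on hold-out:", np.abs(forecast - y[-20:]).mean())
```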

Artificial neural networks in time series forecasting: a comparative analysis

Héctor Allende, Claudio Moraga, Rodrigo Salas (2002)

Kybernetika

Artificial neural networks (ANN) have received a great deal of attention in many fields of engineering and science. Inspired by the study of brain architecture, ANN represent a class of non-linear models capable of learning from data. ANN have been applied in many areas where statistical methods are traditionally employed. They have been used in pattern recognition, classification, prediction and process control. The purpose of this paper is to discuss ANN and compare them to non-linear time series...

Evolutionary learning of rich neural networks in the Bayesian model selection framework

Matteo Matteucci, Dario Spadoni (2004)

International Journal of Applied Mathematics and Computer Science

In this paper we focus on the problem of using a genetic algorithm for model selection within a Bayesian framework. We propose to reduce the model selection problem to a search problem solved using evolutionary computation to explore a posterior distribution over the model space. As a case study, we introduce ELeaRNT (Evolutionary Learning of Rich Neural Network Topologies), a genetic algorithm which evolves a particular class of models, namely, Rich Neural Networks (RNN), in order to find an optimal...
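
The sketch below is a much-simplified stand-in for ELeaRNT, not the authors' algorithm: a tiny genetic algorithm searches over a one-integer genome (the hidden-layer size) and scores each candidate with BIC as a crude proxy for the Bayesian model evidence. The data, population size, and mutation scheme are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Toy regression data (illustrative, not from the paper).
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=120)

def bic_score(hidden):
    """BIC of a fitted MLP, used here as a rough stand-in for model evidence."""
    net = MLPRegressor(hidden_layer_sizes=(int(hidden),), max_iter=2000, random_state=0)
    net.fit(X, y)
    rss = ((net.predict(X) - y) ** 2).sum()
    k = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
    n = len(y)
    return n * np.log(rss / n) + k * np.log(n)

# A tiny genetic algorithm over a single integer "genome": the hidden-layer size.
population = list(rng.integers(2, 30, size=6))
for generation in range(3):
    scored = sorted(population, key=bic_score)
    parents = scored[:3]                                                 # selection
    children = [max(2, p + int(rng.integers(-4, 5))) for p in parents]   # mutation
    population = parents + children

print("selected hidden-layer size:", sorted(population, key=bic_score)[0])
```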

Exploring the impact of post-training rounding in regression models

Jan Kalina (2024)

Applications of Mathematics

Post-training rounding, also known as quantization, of estimated parameters is a widely adopted technique for mitigating energy consumption and latency in machine learning models. This theoretical work examines the impact of rounding estimated parameters in key regression methods from statistics and machine learning. The proposed approach allows for the perturbation of parameters through an additive error with values within a specified interval. This...
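
A minimal sketch of the setting, assuming ordinary least squares: the estimated coefficients are rounded to a grid of step delta, so each coefficient is perturbed by an additive error in [-delta/2, delta/2], and the predictive loss before and after rounding is compared. The grid step and the data are illustrative; the paper's analysis is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ordinary least squares on synthetic data (illustrative).
X = rng.normal(size=(200, 3))
beta_true = np.array([1.37, -0.52, 2.08])
y = X @ beta_true + rng.normal(scale=0.3, size=200)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Post-training rounding: snap each estimate to a grid of step `delta`.
# The rounding error per coefficient then lies in [-delta/2, delta/2].
delta = 0.1
beta_rounded = np.round(beta_hat / delta) * delta

mse_full = np.mean((X @ beta_hat - y) ** 2)
mse_rounded = np.mean((X @ beta_rounded - y) ** 2)
print("MSE with exact coefficients:  ", round(mse_full, 4))
print("MSE with rounded coefficients:", round(mse_rounded, 4))
```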

Extraction of fuzzy logic rules from data by means of artificial neural networks

Martin Holeňa (2005)

Kybernetika

The extraction of logical rules from data has been, for nearly fifteen years, a key application of artificial neural networks in data mining. Although Boolean rules have been extracted in the majority of cases, methods for the extraction of fuzzy logic rules have also been studied increasingly often. In the paper, those methods are discussed within a five-dimensional classification scheme for neural-network-based rule extraction, and it is pointed out that all of them share the feature of being...

Fault location in EHV transmission lines using artificial neural networks

Tahar Bouthiba (2004)

International Journal of Applied Mathematics and Computer Science

This paper deals with the application of artificial neural networks (ANNs) to fault detection and location in extra high voltage (EHV) transmission lines for high speed protection using terminal line data. The proposed neural fault detector and locator were trained using various sets of data available from a selected power network model and simulating different fault scenarios (fault types, fault locations, fault resistances and fault inception angles) and different power system data (source capacities,...
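
The workflow in the abstract (simulate labelled fault scenarios, then train a neural locator on terminal measurements) can be sketched roughly as follows. The synthetic voltage/current features below are purely illustrative and are not a power-system model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Synthetic stand-in for simulated fault scenarios on a 100 km line: the
# features loosely mimic terminal voltage/current magnitudes, the target is
# the fault location in km (purely illustrative, not a power-system model).
n = 1000
location = rng.uniform(0, 100, size=n)              # km from the sending end
resistance = rng.uniform(0, 50, size=n)             # fault resistance, ohms
v_mag = 1.0 - 0.004 * location + 0.002 * resistance + rng.normal(0, 0.01, n)
i_mag = 5.0 / (0.5 + 0.01 * location + 0.02 * resistance) + rng.normal(0, 0.05, n)
X = np.column_stack([v_mag, i_mag])

locator = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=4000, random_state=0)
locator.fit(X[:800], location[:800])

error_km = np.abs(locator.predict(X[800:]) - location[800:])
print("mean location error on unseen scenarios: %.2f km" % error_km.mean())
```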

Locally weighted neural networks for an analysis of the biosensor response

Romas Baronas, Feliksas Ivanauskas, Romualdas Maslovskis, Marijus Radavičius, Pranas Vaitkus (2007)

Kybernetika

This paper presents a semi-global mathematical model for an analysis of a signal of amperometric biosensors. Artificial neural networks were applied to an analysis of the biosensor response to multi-component mixtures. A large amount of learning and test data was synthesized using computer simulation of the biosensor response. The biosensor signal was analyzed with respect to the concentration of each component of the mixture. The paradigm of locally weighted linear regression was used for retraining...
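
The locally weighted linear regression paradigm mentioned here can be sketched in a few lines: for each query point, a straight line is fitted by weighted least squares with weights decaying with distance from the query. The synthetic response curve, the Gaussian kernel, and the bandwidth are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "biosensor response" curve (illustrative), observed with noise.
x = np.linspace(0, 1, 200)
y = 1.0 / (1.0 + np.exp(-10 * (x - 0.5))) + rng.normal(scale=0.03, size=x.size)

def locally_weighted_fit(x_query, x, y, bandwidth=0.08):
    """Predict at x_query from a weighted least-squares line around x_query."""
    w = np.exp(-0.5 * ((x - x_query) / bandwidth) ** 2)   # Gaussian weights
    A = np.column_stack([np.ones_like(x), x])
    Aw = A * w[:, None]
    coef = np.linalg.solve(A.T @ Aw, Aw.T @ y)
    return coef[0] + coef[1] * x_query

queries = np.array([0.2, 0.5, 0.8])
print([round(locally_weighted_fit(q, x, y), 3) for q in queries])
```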

Neural network realizations of Bayes decision rules for exponentially distributed data

Igor Vajda, Belomír Lonek, Viktor Nikolov, Arnošt Veselý (1998)

Kybernetika

Perceptron approximations based on sufficient-statistic inputs are considered for general Bayes decision rules. Particular attention is paid to Bayes discrimination and classification. In the case of exponentially distributed data with a known model it is shown that a perceptron with one hidden layer is sufficient and the learning is restricted to the synaptic weights of the output neuron. If only the dimension of the exponential model is known, then the number of hidden layers will increase...
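
The exponential case can be made concrete: for two exponential densities the log-likelihood ratio is linear in the sufficient statistic x, so the Bayes rule reduces to a single linear threshold, i.e. one output neuron acting on the sufficient statistic. A numerical check with illustrative rates and priors:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two classes with exponentially distributed data (known model):
lam0, lam1 = 1.0, 3.0          # rate parameters
pi0, pi1 = 0.5, 0.5            # prior probabilities

# The log-likelihood ratio is linear in the sufficient statistic x, so the
# Bayes rule is a single linear threshold on x:
#   decide class 1  iff  (lam0 - lam1) * x > log(pi0 * lam0 / (pi1 * lam1))
threshold = np.log(pi0 * lam0 / (pi1 * lam1)) / (lam0 - lam1)

# Check the rule empirically.
n = 100000
labels = rng.random(n) < pi1
x = np.where(labels, rng.exponential(1 / lam1, n), rng.exponential(1 / lam0, n))
decisions = x < threshold if lam1 > lam0 else x > threshold   # sign flip from division
error_rate = np.mean(decisions != labels)
print("Bayes threshold on x: %.3f, empirical error: %.3f" % (threshold, error_rate))
```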

Neural networks using Bayesian training

Gabriela Andrejková, Miroslav Levický (2003)

Kybernetika

Bayesian probability theory provides a framework for data modeling. In this framework it is possible to find models that are well-matched to the data, and to use these models to make nearly optimal predictions. In connection with neural networks, and especially with neural network learning, the theory is interpreted as inference of the most probable parameters for the model and the given training data. This article describes an application of neural networks with Bayesian training to the problem...
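
The Bayesian view of training can be illustrated in a case where the posterior is available in closed form: a linear-in-parameters model with a Gaussian prior on the weights, whose posterior mean coincides with the MAP (weight-decay) solution. The prior precision alpha, noise precision beta, and polynomial features are illustrative assumptions, not the article's network.

```python
import numpy as np

rng = np.random.default_rng(7)

# Bayesian linear-in-parameters model with a Gaussian prior on the weights.
# alpha = prior precision, beta = noise precision (illustrative values).
alpha, beta = 1.0, 25.0

x = rng.uniform(-1, 1, size=30)
t = np.sin(np.pi * x) + rng.normal(scale=0.2, size=x.size)
Phi = np.column_stack([x ** k for k in range(6)])      # polynomial features

# Posterior over weights: N(m, S) with S = (alpha I + beta Phi^T Phi)^(-1)
# and m = beta S Phi^T t.  m is also the MAP / weight-decay solution.
S = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
m = beta * S @ Phi.T @ t

x_new = 0.3
phi_new = np.array([x_new ** k for k in range(6)])
pred_mean = phi_new @ m
pred_var = 1.0 / beta + phi_new @ S @ phi_new          # predictive variance
print("prediction at x=0.3: %.3f +/- %.3f" % (pred_mean, np.sqrt(pred_var)))
```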

Neuromorphic features of probabilistic neural networks

Jiří Grim (2007)

Kybernetika

We summarize the main results on probabilistic neural networks recently published in a series of papers. Considering the framework of statistical pattern recognition, we assume approximation of class-conditional distributions by finite mixtures of product components. The probabilistic neurons correspond to mixture components and can be interpreted in neurophysiological terms. In this way we can find a possible theoretical background for the functional properties of neurons. For example, the general...
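
A class-conditional mixture of product components can be sketched for binary data with Bernoulli components fitted by EM, followed by Bayes classification via the class likelihoods. The number of components, the data, and the initialization below are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(8)

def fit_bernoulli_mixture(X, n_components=3, n_iter=50):
    """EM for a mixture of product-Bernoulli components (one class's model)."""
    n, d = X.shape
    weights = np.full(n_components, 1.0 / n_components)
    probs = rng.uniform(0.3, 0.7, size=(n_components, d))
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each sample.
        log_p = (X[:, None, :] * np.log(probs) +
                 (1 - X[:, None, :]) * np.log(1 - probs)).sum(axis=2)
        log_p += np.log(weights)
        resp = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and component parameters.
        nk = resp.sum(axis=0)
        weights = nk / n
        probs = np.clip((resp.T @ X) / nk[:, None], 1e-3, 1 - 1e-3)
    return weights, probs

def log_likelihood(X, weights, probs):
    log_p = (X[:, None, :] * np.log(probs) +
             (1 - X[:, None, :]) * np.log(1 - probs)).sum(axis=2) + np.log(weights)
    m = log_p.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(log_p - m).sum(axis=1, keepdims=True))).ravel()

# Two synthetic binary classes (illustrative), one mixture model per class.
X0 = (rng.random((300, 10)) < 0.2).astype(float)
X1 = (rng.random((300, 10)) < 0.7).astype(float)
model0, model1 = fit_bernoulli_mixture(X0), fit_bernoulli_mixture(X1)

X_test = (rng.random((100, 10)) < 0.7).astype(float)      # drawn like class 1
decide_1 = log_likelihood(X_test, *model1) > log_likelihood(X_test, *model0)
print("fraction assigned to class 1:", decide_1.mean())
```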

Piecewise approximation and neural networks

Martina Révayová, Csaba Török (2007)

Kybernetika

The paper deals with the recently proposed autotracking piecewise cubic approximation (APCA) based on the discrete projective transformation, and neural networks (NN). The suggested new approach facilitates the analysis of data with complex dependence and relatively small errors. We introduce a new representation of polynomials that can provide different local approximation models. We demonstrate how APCA can be applied to especially noisy data thanks to NN and local estimations. On the other hand,...
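
A minimal sketch of piecewise cubic approximation of noisy data: fit a cubic polynomial by least squares on each segment of a partition. The fixed, equal-length segments are an illustrative simplification; the paper's APCA chooses the break points adaptively.

```python
import numpy as np

rng = np.random.default_rng(9)

x = np.linspace(0, 4, 400)
y = np.sin(2 * x) * np.exp(-0.3 * x) + rng.normal(scale=0.05, size=x.size)

# Fixed, equal-length segments (APCA would choose break points adaptively).
breaks = np.linspace(0, 4, 5)                    # 4 segments
fit = np.empty_like(y)
for a, b in zip(breaks[:-1], breaks[1:]):
    mask = (x >= a) & (x <= b)
    coef = np.polyfit(x[mask], y[mask], deg=3)   # local cubic least squares
    fit[mask] = np.polyval(coef, x[mask])

print("RMS residual of the piecewise cubic fit: %.4f"
      % np.sqrt(np.mean((fit - y) ** 2)))
```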

Relative Measurement and Its Generalization in Decision Making. Why Pairwise Comparisons are Central in Mathematics for the Measurement of Intangible Factors. The Analytic Hierarchy/Network Process.

Thomas L. Saaty (2008)

RACSAM

According to the great mathematician Henri Lebesgue, making direct comparisons of objects with regard to a property is a fundamental mathematical process for deriving measurements. Measuring objects by using a known scale first and then comparing the measurements works well for properties for which scales of measurement exist. The theme of this paper is that direct comparisons are necessary to establish measurements for intangible properties that have no scales of measurement. In that case the value...
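
The pairwise-comparison machinery behind the Analytic Hierarchy Process can be shown numerically: the priority vector of a reciprocal comparison matrix is its principal eigenvector, and a consistency index measures departure from perfect consistency. The 3x3 matrix below is an illustrative example, not taken from the paper.

```python
import numpy as np

# A reciprocal pairwise-comparison matrix for three alternatives (illustrative):
# entry A[i, j] says how strongly alternative i is preferred to alternative j.
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   2.0],
              [1/5.0, 1/2.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
k = np.argmax(eigenvalues.real)
priorities = np.abs(eigenvectors[:, k].real)
priorities /= priorities.sum()                 # normalized priority vector

n = A.shape[0]
consistency_index = (eigenvalues[k].real - n) / (n - 1)
print("priorities:", np.round(priorities, 3))
print("consistency index: %.4f" % consistency_index)
```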
