Decomposable (probabilistic) models are log-linear models generated by acyclic hypergraphs, and a number of nice properties enjoyed by them are known. In many applications the following selection problem naturally arises: given a probability distribution over a finite set of discrete variables and a positive integer , find a decomposable model with tree-width that best fits . If is the generating hypergraph of a decomposable model and is the estimate of under the model, we can measure...
Negotiation is an interaction that happens in multi-agent systems when agents have conflicting objectives and must look for an acceptable agreement. A typical negotiating situation involves two agents that cannot reach their goals by themselves because they do not have some resources they need or they do not know how to use them to reach their goals. Therefore, they must start a negotiation dialogue, taking also into account that they might have incomplete or wrong beliefs about the other agent's...
We present a test for identifying clusters in high dimensional data based on the k-means algorithm when the null hypothesis is spherical normal. We show that projection techniques used for evaluating validity of clusters may be misleading for such data. In particular, we demonstrate that increasingly well-separated clusters are identified as the dimensionality increases, when no such clusters exist. Furthermore, in a case of true bimodality, increasing the dimensionality makes identifying the correct...
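The dimensionality effect described above can be probed with a minimal experiment (a sketch, not the paper's test; the 2-means routine, sample sizes, and separation statistic are all illustrative): run k-means with k = 2 on draws from a single spherical normal, where no true clusters exist, and inspect the resulting split as the dimension grows.

```python
import numpy as np

def kmeans2(X, iters=50, seed=0):
    """Plain Lloyd-style 2-means with a fixed random initialization."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=2, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in (0, 1):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def separation(X, centers, labels):
    """Between-center distance relative to mean within-cluster spread."""
    between = np.linalg.norm(centers[0] - centers[1])
    within = np.mean([np.linalg.norm(X[labels == j] - centers[j], axis=1).mean()
                      for j in (0, 1)])
    return between / within

rng = np.random.default_rng(1)
for dim in (2, 50, 500):
    X = rng.standard_normal((200, dim))  # one spherical Gaussian: no real clusters
    centers, labels = kmeans2(X)
    print(dim, round(separation(X, centers, labels), 3))
```

k-means always reports a confident two-way split here even though the null hypothesis (a single spherical normal) is true, which is exactly why a formal test against that null is needed.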
The objective of this paper is to develop feasible gait patterns that could be used to control a real hexapod walking robot. These gaits should enable the fastest movement that is possible with the given robot's mechanics and drives on a flat terrain. Biological inspirations are commonly used in the design of walking robots and their control algorithms. However, legged robots differ significantly from their biological counterparts. Hence we believe that gait patterns should be learned using the...
In this note, we propose a general definition of shape which is both compatible with the one proposed in phenomenology (gestaltism) and with a computer vision implementation. We reverse the usual order in Computer Vision. We do not define “shape recognition” as a task which requires a “model” pattern which is searched in all images of a certain kind. We give instead a “blind” definition of shapes relying only on invariance and repetition arguments. Given a set of images , we call shape of this...
We continue in the direction of the ideas from Zhang's paper [Z] about the relationship between Chu spaces and Formal Concept Analysis. We adapt this categorical point of view on the classical concept lattice to the generalized concept lattice (in the sense of Krajči [K1]): we define generalized Chu spaces and show that, together with (a special type of) their morphisms, they form a category. Moreover, we define the corresponding modifications of the image / inverse image operator and show their commutativity...
In this paper, a new characterization of interval-valued residuated fuzzy implication operators is presented, which makes it possible to use them in a simple and efficient way, since the calculation of the values of an interval-valued implication applied to two intervals reduces to the study of a fuzzy implication applied to the extremes of these intervals. This result is very important for extracting knowledge from an L-fuzzy context with incomplete information. Finally, some...
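A hedged sketch of the endpoint reduction described above, instantiated with the Łukasiewicz residuated implication (the choice of operator and the exact form of the reduction are illustrative assumptions, not a restatement of the paper's characterization): since a fuzzy implication is decreasing in its first argument and increasing in its second, the value on two intervals is determined by their extremes.

```python
def lukasiewicz(x, y):
    """Łukasiewicz (residuated) fuzzy implication on [0, 1]."""
    return min(1.0, 1.0 - x + y)

def interval_implication(a, b, c, d):
    """Interval-valued implication [a, b] -> [c, d], computed only from
    the endpoints: the lower bound pairs the largest antecedent with the
    smallest consequent, the upper bound does the opposite."""
    return (lukasiewicz(b, c), lukasiewicz(a, d))

print(interval_implication(0.2, 0.5, 0.4, 0.9))
```

The payoff is computational: two evaluations of an ordinary fuzzy implication replace any optimization over the intervals.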
The problem of extracting more compact rules from a rule-based knowledge base is approached by means of a chunking mechanism implemented via a neural system. Taking advantage of the parallel processing capabilities of neural systems, the computational problem that normally arises when introducing chunking processes is overcome. The memory saturation effect is also dealt with by a forgetting mechanism that allows the system to eliminate previously stored, but less often used, chunks....
We examine worst-case analysis from the standpoint of classical Decision Theory. We elucidate how this analysis is expressed in the framework of Wald's famous Maximin paradigm for decision-making under strict uncertainty. We illustrate the subtlety required in modeling this paradigm by showing that information-gap's robustness model is in fact a Maximin model in disguise.
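Wald's Maximin rule itself fits in a few lines (the payoff table below is a made-up illustration, not from the paper): each decision is scored by its worst-case payoff over the possible states, with no probabilities over states assumed, and the decision with the best security level wins.

```python
import numpy as np

# Hypothetical payoff table: rows = decisions, columns = states of nature
# (strict uncertainty: no probability distribution over states is assumed).
payoff = np.array([
    [3.0, 1.0, 4.0],   # decision A
    [2.0, 2.0, 2.0],   # decision B
    [5.0, 0.0, 6.0],   # decision C
])

worst_case = payoff.min(axis=1)   # security level of each decision
best = int(worst_case.argmax())   # Wald's Maximin choice
print(worst_case, best)           # B wins: its worst case (2.0) is largest
```

Note that decision C has the best outcomes in two of three states yet is rejected, since Maximin judges a decision solely by its worst state.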
The need for feature selection in medium and large problems is increasing in many fields, including medical and image processing applications. Previous comparative studies of feature selection algorithms are not satisfactory in problem size or in criterion function. In addition, no way has been shown to compare algorithms with different objectives. In this study, we propose a unified way to compare a large variety of algorithms. Our results show that the sequential floating algorithms are promising for up to medium...
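The sequential floating idea singled out above can be sketched as follows (an illustrative implementation, not the paper's: the criterion, data, and stopping rule are assumptions). Forward steps add the single best feature; floating steps conditionally discard features whenever a removal beats the best value previously seen at the smaller subset size.

```python
import numpy as np

def sffs(features, J, k):
    """Sequential Floating Forward Selection (sketch).
    features: candidate feature ids; J: criterion on a subset; k: target size."""
    S = []
    best_J = {0: -np.inf}  # best criterion value recorded at each subset size
    while len(S) < k:
        # Forward step: add the feature that most improves the criterion.
        f = max((f for f in features if f not in S), key=lambda f: J(S + [f]))
        S.append(f)
        best_J[len(S)] = J(S)
        # Floating step: drop features while a removal beats the best
        # value previously recorded at that smaller size.
        while len(S) > 2:
            g = max(S, key=lambda g: J([x for x in S if x != g]))
            reduced = [x for x in S if x != g]
            if J(reduced) > best_J[len(reduced)]:
                S = reduced
                best_J[len(S)] = J(S)
            else:
                break
    return S

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X[:, 1] + X[:, 3]              # only features 1 and 3 are informative
J = lambda S: np.corrcoef(X[:, S].sum(axis=1), y)[0, 1] if S else -np.inf
selected = sffs(list(range(5)), J, k=2)
print(sorted(selected))            # -> [1, 3]
```

Unlike plain sequential forward selection, the floating variant can undo an early greedy choice, which is the property the comparative results above attribute to it.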
Several counterparts of Bayesian networks based on different paradigms have been proposed in evidence theory. Nevertheless, none of them is completely satisfactory. In this paper we will present a new one, based on a recently introduced concept of conditional independence. We define a conditioning rule for variables, and the relationship between conditional independence and irrelevance is studied with the aim of constructing a Bayesian-network-like model. Then, through a simple example, we will...
The objective of this paper is to present and make a comparative study of several inverse kinematics methods for serial manipulators, based on the Jacobian matrix. Besides the well-known Jacobian transpose and Jacobian pseudo-inverse methods, three others, borrowed from numerical analysis, are presented. Among them, two approximation methods avoid the explicit manipulability matrix inversion, while the third one is a slightly modified version of the Levenberg-Marquardt method (mLM). Their comparison...
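A minimal sketch of the two best-known updates the comparison starts from, on a hypothetical 2-link planar arm (link lengths, target, gains, and iteration counts are all illustrative, and the three numerical-analysis variants are omitted): the Jacobian transpose performs gradient-like steps on the task-space error, while the pseudo-inverse takes Newton-like steps.

```python
import numpy as np

L1, L2 = 1.0, 1.0  # link lengths of a toy 2-link planar arm

def fk(q):
    """End-effector position for joint angles q = (q1, q2)."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def solve_ik(target, q0, method="pinv", alpha=0.1, steps=500):
    q = np.array(q0, dtype=float)
    for _ in range(steps):
        e = target - fk(q)                # task-space error
        J = jacobian(q)
        if method == "transpose":
            dq = alpha * J.T @ e          # Jacobian transpose (gradient) step
        else:
            dq = np.linalg.pinv(J) @ e    # pseudo-inverse (Newton-like) step
        q += dq
    return q

target = np.array([1.2, 0.7])             # reachable, away from singularities
for m in ("transpose", "pinv"):
    q = solve_ik(target, q0=(0.3, 0.3), method=m)
    print(m, np.linalg.norm(target - fk(q)))
```

Even on this toy arm the trade-off behind the paper's comparison is visible: the transpose avoids any matrix inversion but needs many small, gain-tuned steps, while the pseudo-inverse converges fast at the cost of an explicit (pseudo-)inversion per step.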
We present a framework of L-fuzzy modifiers for L being a complete lattice. They are used to model linguistic hedges that act on linguistic terms represented by L-fuzzy sets. In the modelling process the context is taken into account by means of L-fuzzy relations, endowing the L-fuzzy modifiers with a clear inherent semantics. To our knowledge, these L-fuzzy modifiers are the first ones proposed that are suitable to perform this representation task for a lattice L different from the unit interval....
Designing classifiers may follow different goals. Which goal to prefer among others depends on the given cost situation and the class distribution. For example, a classifier designed for best accuracy in terms of misclassifications may fail when the cost of misclassification of one class is much higher than that of the other. This paper presents a decision-theoretic extension to make fuzzy rule generation cost-sensitive. Furthermore, it will be shown how interpretability aspects and the costs of...
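The failure mode mentioned above (an accuracy-optimal classifier under asymmetric misclassification costs) reduces to a one-line decision rule; the cost matrix and posterior probabilities below are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical cost matrix: cost[i][j] = cost of predicting class j when the
# true class is i (here, missing class 1 is ten times costlier).
cost = np.array([[0.0, 1.0],
                 [10.0, 0.0]])

def cost_sensitive_decision(posterior):
    """Choose the class with minimum expected cost, not maximum probability."""
    expected = posterior @ cost   # expected cost of each possible prediction
    return int(expected.argmin())

p = np.array([0.8, 0.2])              # classifier is fairly sure it's class 0
print(cost_sensitive_decision(p))     # expected costs [2.0, 0.8] -> predicts 1
```

An accuracy-maximizing rule would predict class 0 here and incur an expected cost of 2.0; weighting the posterior by the costs flips the decision, which is the effect the cost-sensitive rule generation above builds into the fuzzy rules themselves.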