Displaying 21 – 40 of 122
The motivation for this paper comes from classification problems in which data cannot be clearly divided into positive and negative examples, especially data with a monotone hierarchy (degree, preference) of more or less positive (negative) examples. We present a new formulation of the fuzzy inductive logic programming task in the framework of fuzzy logic in the narrow sense. Our construction is based on a syntactic equivalence of fuzzy logic programs FLP and a restricted class of generalised annotated...
The concept of usability of man-machine interfaces is usually judged in terms of a number of aspects or attributes that are known to be subject to some rough correlations, and that are in many cases given different importance, depending on the context of use of the application. In consequence, the automation of judgment processes regarding the overall usability of concrete interfaces requires the design of aggregation operators that are capable of modeling approximate or ill-defined interactions...
Fuzzy Rule-Based Systems have been successfully applied to pattern classification problems. In this type of classification system, the classical Fuzzy Reasoning Method classifies a new example with the consequent of the rule with the greatest degree of association. By using this reasoning method, we do not consider the information provided by the other rules that are also compatible with (i.e. have also been fired by) this example. In this paper we analyze this problem and propose to use FRMs that combine...
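A minimal Python sketch of the contrast described above: the classical single-winner fuzzy reasoning method versus a simple additive-combination FRM. The rule base, membership functions and aggregation by sum are illustrative assumptions only, not the operators studied in the paper.

# Illustrative sketch (not the paper's exact operators): classify an example
# either by the single rule with the greatest degree of association
# (classical FRM) or by aggregating the firing strengths of all compatible
# rules per class (a simple additive-combination FRM).

def triangular(a, b, c):
    """Triangular membership function on the real line."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Hypothetical rule base: antecedent membership functions and a class label.
rules = [
    ({"x1": triangular(0, 2, 4), "x2": triangular(0, 3, 6)}, "A"),
    ({"x1": triangular(2, 4, 6), "x2": triangular(3, 6, 9)}, "B"),
    ({"x1": triangular(1, 3, 5), "x2": triangular(2, 5, 8)}, "A"),
]

def firing_strength(antecedent, example):
    # Minimum t-norm over the antecedent conditions.
    return min(mu(example[var]) for var, mu in antecedent.items())

def classify_winner(example):
    # Classical FRM: only the rule with the greatest association counts.
    strengths = [(firing_strength(ant, cls_example), cls) for ant, cls in rules for cls_example in [example]]
    return max(strengths)[1]

def classify_additive(example):
    # Combination FRM: sum the strengths of all fired rules per class.
    per_class = {}
    for ant, cls in rules:
        per_class[cls] = per_class.get(cls, 0.0) + firing_strength(ant, example)
    return max(per_class, key=per_class.get)

example = {"x1": 3.0, "x2": 4.0}
print(classify_winner(example), classify_additive(example))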
A synthesis of recent development of regime-switching models based on aggregation operators is presented. It comprises procedures for model specification and identification, parameter estimation and model adequacy testing. Constructions of models for real life data from hydrology and finance are presented.
Lattice-valued possibilistic measures, conceived and developed in more detail by G. De Cooman in 1997 [2], made it possible to apply the main ideas on which real-valued possibilistic measures are founded also to situations, often occurring in the real world, in which the degrees of possibility ascribed to various events charged with uncertainty are comparable only qualitatively, by relations like “greater than” or “not smaller than”, including the particular cases when such degrees are not...
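For orientation, a standard (not paper-specific) formulation: a lattice-valued possibilistic measure assigns to events values in a complete lattice $(L,\leq)$ and, in the supremum-preserving setting, satisfies
$$\Pi(\emptyset)=0_L,\qquad \Pi(\Omega)=1_L,\qquad \Pi\Bigl(\bigcup_{i\in I}A_i\Bigr)=\sup_{i\in I}\Pi(A_i),$$
so degrees of possibility are compared only through the (possibly partial) lattice ordering $\leq$ rather than through numerical magnitudes.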
In this paper, we propose a new method to generate a continuous belief function from a multimodal probability distribution function defined over a continuous domain. We generalize Smets' approach in the sense that focal elements of the resulting continuous belief function can be disjoint sets of the extended real space of dimension n. We then derive the continuous belief function from multimodal probability density functions using the least commitment principle. We illustrate the approach on two...
Rough sets, developed by Zdzisław Pawlak [12], are an important tool for describing the state of incomplete or partially unknown information. In this article, which is essentially a continuation of [8], we try to give a characterization of approximation operators in terms of ordinary properties of the underlying relations (some of them, such as serial and mediate relations, were not previously available in the Mizar Mathematical Library [11]). Here we drop the classical equivalence- and tolerance-based models of rough...
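For context, the relation-based approximation operators in question have the standard form (written here for an arbitrary binary relation $R$ rather than an equivalence or tolerance):
$$\underline{R}(X)=\{x : R(x)\subseteq X\},\qquad \overline{R}(X)=\{x : R(x)\cap X\neq\emptyset\},$$
where $R(x)=\{y : x\,R\,y\}$ is the successor neighbourhood of $x$; properties of $R$ then translate into properties of these operators, e.g. seriality of $R$ (every $x$ has some successor) gives $\underline{R}(X)\subseteq\overline{R}(X)$.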
We propose a framework for building decision strategies using Bayesian network models and discuss its application to adaptive testing. Dynamic programming and heuristic search are used to find optimal adaptive tests. The proposed algorithm is based on a new admissible heuristic function.
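A toy illustration only, not the paper's framework or its admissible heuristic: greedy selection of the next test question by expected reduction of posterior entropy in a two-state skill model. All numbers and names are made up; it merely shows the kind of quantity an adaptive-testing strategy evaluates.

# Toy illustration: pick the next question that minimizes the expected
# posterior entropy of a binary skill variable S. This greedy one-step
# criterion is NOT the optimal dynamic-programming / heuristic-search
# strategy of the paper.
from math import log2

prior = {"low": 0.5, "high": 0.5}            # P(S)
# Hypothetical P(answer correct | S) for two questions.
questions = {"Q1": {"low": 0.2, "high": 0.9},
             "Q2": {"low": 0.4, "high": 0.6}}

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def posterior(q, correct):
    like = questions[q]
    unnorm = {s: prior[s] * (like[s] if correct else 1 - like[s]) for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

def expected_posterior_entropy(q):
    p_correct = sum(prior[s] * questions[q][s] for s in prior)
    return (p_correct * entropy(posterior(q, True))
            + (1 - p_correct) * entropy(posterior(q, False)))

best = min(questions, key=expected_posterior_entropy)
print(best, {q: round(expected_posterior_entropy(q), 3) for q in questions})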
To represent a set whose members are only partially known, the graded ill-known set has been proposed. In this paper, we investigate the calculation of function values of graded ill-known sets. Because a graded ill-known set is characterized by a possibility distribution on the power set, the calculation of function values of graded ill-known sets is based on the extension principle but is generally complex. To reduce the complexity, lower and upper approximations of a given graded ill-known set are used at the...
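For reference, the classical extension principle underlying these calculations, stated in its standard form for a possibility distribution $\pi_A$ and a function $f$:
$$\pi_{f(A)}(y)=\sup\{\pi_A(x) : f(x)=y\}.$$
Applied to a graded ill-known set, whose possibility distribution lives on the power set, the supremum ranges over subsets rather than points, which is what makes the direct computation complex and motivates the lower and upper approximations.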
The paper deals with practical aspects of decision making under uncertainty on finite sets. The model is based on the marginal problem. The numerical behaviour of 10 different algorithms is compared in the form of a case study on data from the field of rheumatology. (Five of the algorithm types were suggested by A. Perez.) The algorithms (expert systems, inference engines) are studied in different situations (combinations of parameters).
Let $P$ be a discrete multidimensional probability distribution over a finite set of variables $N$ which is only partially specified by the requirement that it has prescribed given marginals $\{P_A;\ A\in\mathcal{S}\}$, where $\mathcal{S}$ is a class of subsets of $N$ with $\bigcup\mathcal{S}=N$. The paper deals with the problem of approximating $P$ on the basis of those given marginals. The divergence of an approximation $\hat{P}$ from $P$ is measured by the relative entropy $H(P|\hat{P})$. Two methods for approximating $P$ are compared. One of them uses the formerly introduced concept of...
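For concreteness, the divergence measure referred to is the standard relative entropy (Kullback–Leibler divergence) of an approximation $\hat{P}$ from $P$:
$$H(P|\hat{P})=\sum_{x}P(x)\,\log\frac{P(x)}{\hat{P}(x)},$$
which is nonnegative and equals zero exactly when $\hat{P}=P$ (with the conventions $0\log 0=0$ and $H(P|\hat{P})=+\infty$ when $\hat{P}(x)=0<P(x)$ for some $x$).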
We present a variation of a classification method based on uncertainty measured on credal sets. Like the original method, it uses the imprecise Dirichlet model to create the credal set and the same uncertainty measures. It takes into account sets of two variables in order to reduce the uncertainty and to seek direct relations between the variables in the database and the variable to be classified. The success rates are equivalent to those of the first method, except in cases where there are direct relations between...
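As background, the standard formulation of the imprecise Dirichlet model (not a result of the paper): with $N$ observations, $n_c$ of them in class $c$, and hyperparameter $s>0$, the IDM yields the credal set of all distributions whose class probabilities satisfy
$$P(c)\in\Bigl[\frac{n_c}{N+s},\ \frac{n_c+s}{N+s}\Bigr],$$
and the uncertainty measures are then evaluated on this credal set.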
Compositional models are used to construct probability distributions from lower-order probability distributions. On the other hand, Bayesian models are used to represent probability distributions that factorize according to acyclic digraphs. We introduce a class of models, called recursive factorization models, to represent probability distributions that recursively factorize according to sequences of sets of variables, and prove that they have the same representation power as both compositional...
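For orientation, the factorization according to an acyclic digraph $G$ referred to here is the standard one:
$$P(x_1,\dots,x_n)=\prod_{i=1}^{n}P\bigl(x_i \mid x_{\mathrm{pa}(i)}\bigr),$$
where $\mathrm{pa}(i)$ denotes the parents of node $i$ in $G$; compositional models assemble an analogous joint distribution by successively composing lower-order distributions.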
Computing with words is a way towards artificial, human-like thinking. The paper shows some new possibilities for solving difficult problems of computing with words which are offered by relative-distance-measure (RDM) models of fuzzy membership functions. Such models are based on RDM interval arithmetic. The approach to calculating with words is demonstrated on a specific flight-delay problem formulated by Lotfi Zadeh. The problem seems easy at first sight, but to the authors' knowledge it has not...
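A brief note on the RDM representation as it is usually defined by its authors (Piegat and Landowski), stated here as background, so details may differ from this paper: an interval $[a,b]$ is modeled not as a set of unrelated points but as
$$x = a + \alpha_x\,(b-a),\qquad \alpha_x\in[0,1],$$
where the RDM variable $\alpha_x$ records which point of the interval is meant; carrying the $\alpha$'s through a computation avoids the excessive widening of results typical of standard interval arithmetic (for instance, $x-x=0$ for every value of $\alpha_x$).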
In this paper we argue that for fuzzy unification we need both a procedural and a declarative semantics (as opposed to the two-valued case, where the declarative semantics is hidden in the requirement that unified terms are syntactically – letter by letter – identical). We present an extension of the syntactic model of unification that allows near matches, defined using a similarity relation. We work in Hájek’s fuzzy logic in the narrow sense. We base our semantics on a formal model of fuzzy logic programming extended...
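A minimal sketch of the similarity-based matching idea for ground terms, aggregating with the minimum; this only illustrates "near matches defined by a similarity relation" and is not the paper's procedural or declarative semantics in Hájek's fuzzy logic. All symbols and similarity values are made up.

# Illustrative only: degree to which two ground first-order terms match,
# given a similarity relation on function/constant symbols. Exact syntactic
# unification is the special case where the similarity is 0/1-valued.

# Hypothetical similarity relation on symbols (reflexive, symmetric).
SIM = {("a", "b"): 0.8, ("f", "g"): 0.6}

def sim(s, t):
    if s == t:
        return 1.0
    return SIM.get((s, t), SIM.get((t, s), 0.0))

def match(t1, t2):
    """Terms are constant symbols (str) or tuples (functor, arg1, ..., argn)."""
    if isinstance(t1, str) and isinstance(t2, str):
        return sim(t1, t2)
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and len(t1) == len(t2) and len(t1) > 0):
        degrees = [sim(t1[0], t2[0])]
        degrees += [match(u, v) for u, v in zip(t1[1:], t2[1:])]
        return min(degrees)   # minimum t-norm: the weakest link decides
    return 0.0

print(match(("f", "a"), ("g", "b")))   # 0.6 = min(0.6, 0.8)
print(match(("f", "a"), ("f", "a")))   # 1.0 (classical syntactic identity)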
Probability logic studies the properties resulting from the probabilistic interpretation of logical argument forms. Typical examples are probabilistic Modus Ponens and Modus Tollens. Argument forms with two premises usually lead from precise probabilities of the premises to imprecise or interval probabilities of the conclusion. In this contribution, we study generalized inference forms having three or more premises. Recently, Gilio has shown that these generalized forms “degrade” – more premises...
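As a concrete instance of how two precise premises yield only an interval conclusion (a standard probability-logic result, included for illustration): probabilistic Modus Ponens with premises $P(A)=a$ and $P(B\mid A)=b$ gives
$$P(B)=P(B\mid A)\,P(A)+P(B\mid\neg A)\,P(\neg A)\in\bigl[ab,\ ab+1-a\bigr],$$
since nothing is assumed about $P(B\mid\neg A)\in[0,1]$; with more premises of this kind the resulting intervals typically become wider, which is the degradation phenomenon studied here.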
The research on incomplete soft sets is an integral part of the research on soft sets and has been initiated recently. However, the existing approach for dealing with incomplete soft sets is only applicable to decision making and has low forecasting accuracy. In order to solve these problems, in this paper we propose a novel data filling approach for incomplete soft sets. The missing data are filled in terms of the association degree between the parameters when a stronger association exists between...
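A rough sketch of the general idea of association-based filling. The agreement-ratio measure, the threshold, and the complement rule below are assumptions made for illustration; the paper's actual association degree and decision rule may differ.

# Illustrative only: fill a missing entry of a binary (object x parameter)
# soft-set table using the parameter most strongly associated with it,
# where association is measured here simply as the agreement ratio over
# objects with both values known. Measure and threshold are assumptions.
MISSING = None
table = {            # rows: objects, columns: parameters e1..e3
    "o1": {"e1": 1, "e2": 1, "e3": 0},
    "o2": {"e1": 0, "e2": 0, "e3": 1},
    "o3": {"e1": 1, "e2": 1, "e3": 0},
    "o4": {"e1": 1, "e2": MISSING, "e3": 0},
}

def association(p, q):
    pairs = [(row[p], row[q]) for row in table.values()
             if row[p] is not MISSING and row[q] is not MISSING]
    if not pairs:
        return 0.0
    return sum(1 for a, b in pairs if a == b) / len(pairs)

def fill(obj, param, threshold=0.5):
    known = [q for q in table[obj] if q != param and table[obj][q] is not MISSING]
    best = max(known, key=lambda q: association(param, q))
    if association(param, best) >= threshold:
        return table[obj][best]       # strong positive association: copy value
    return 1 - table[obj][best]       # otherwise assume an inverse association

table["o4"]["e2"] = fill("o4", "e2")
print(table["o4"])                    # e2 filled as 1 (e1 and e2 always agree)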