
Lecture Notes in Artificial Intelligence Subseries of Lecture Notes in Computer Science Edited by J. G. Carbonell and J. Siekmann

Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis, and J. van Leeuwen


Klaus P. Jantke Ayumi Shinohara (Eds.)

Discovery Science 4th International Conference, DS 2001 Washington, DC, USA, November 25-28, 2001 Proceedings


Series Editors
Jaime G. Carbonell, Carnegie Mellon University, Pittsburgh, PA, USA
Jörg Siekmann, University of Saarland, Saarbrücken, Germany

Volume Editors
Klaus P. Jantke
DFKI GmbH Saarbrücken, 66123 Saarbrücken, Germany
E-mail: [email protected]

Ayumi Shinohara
Kyushu University, Department of Informatics
6-10-1 Hakozaki, Higashi-ku, Fukuoka 812-8581, Japan
E-mail: [email protected]

Cataloging-in-Publication Data applied for Die Deutsche Bibliothek - CIP-Einheitsaufnahme Discovery science : 4th international conference ; proceedings / DS 2001, Washington, DC, USA, November 25 - 28, 2001. Klaus P. Jantke ; Ayumi Shinohara (ed.). - Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Tokyo : Springer, 2001 (Lecture notes in computer science ; Vol. 2226 : Lecture notes in artiﬁcial intelligence) ISBN 3-540-42956-5

CR Subject Classiﬁcation (1998): I.2, H.2.8, H.3, J.1, J.2 ISBN 3-540-42956-5 Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, speciﬁcally the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microﬁlms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer-Verlag Berlin Heidelberg New York a member of BertelsmannSpringer Science+Business Media GmbH http://www.springer.de © Springer-Verlag Berlin Heidelberg 2001 Printed in Germany Typesetting: Camera-ready by author Printed on acid-free paper SPIN: 10840973



Table of Contents

Invited Papers

The Discovery Science Project in Japan . . . 1
   Setsuo Arikawa
Discovering Mechanisms: A Computational Philosophy of Science Perspective . . . 3
   Lindley Darden

Queries Revisited . . . 16
   Dana Angluin
Inventing Discovery Tools: Combining Information Visualization with Data Mining . . . 17
   Ben Shneiderman
Robot Baby 2001 . . . 29
   Paul R. Cohen, Tim Oates, Niall Adams, and Carole R. Beal

Regular Papers

VML: A View Modeling Language for Computational Knowledge Discovery . . . 30
   Hideo Bannai, Yoshinori Tamada, Osamu Maruyama, and Satoru Miyano
Computational Discovery of Communicable Knowledge: Symposium Report . . . 45
   Sašo Džeroski and Pat Langley
Bounding Negative Information in Frequent Sets Algorithms . . . 50
   I. Fortes, J.L. Balcázar, and R. Morales
Functional Trees . . . 59
   João Gama
Spherical Horses and Shared Toothbrushes: Lessons Learned from a Workshop on Scientific and Technological Thinking . . . 74
   Michael E. Gorman, Alexandra Kincannon, and Matthew M. Mehalik
Clipping and Analyzing News Using Machine Learning Techniques . . . 87
   Hans Gründel, Tino Naphtali, Christian Wiech, Jan-Marian Gluba, Maiken Rohdenburg, and Tobias Scheffer
Towards Discovery of Deep and Wide First-Order Structures: A Case Study in the Domain of Mutagenicity . . . 100
   Tamás Horváth and Stefan Wrobel


Eliminating Useless Parts in Semi-structured Documents Using Alternation Counts . . . 113
   Daisuke Ikeda, Yasuhiro Yamada, and Sachio Hirokawa
Multicriterially Best Explanations . . . 128
   Naresh S. Iyer and John R. Josephson
Constructing Approximate Informative Basis of Association Rules . . . 141
   Kouta Kanda, Makoto Haraguchi, and Yoshiaki Okubo
Passage-Based Document Retrieval as a Tool for Text Mining with User's Information Needs . . . 155
   Koichi Kise, Markus Junker, Andreas Dengel, and Keinosuke Matsumoto
Automated Formulation of Reactions and Pathways in Nuclear Astrophysics: New Results . . . 170
   Sakir Kocabas
An Integrated Framework for Extended Discovery in Particle Physics . . . 182
   Sakir Kocabas and Pat Langley
Stimulating Discovery . . . 196
   Ronald N. Kostoff
Assisting Model-Discovery in Neuroendocrinology . . . 214
   Ashesh Mahidadia and Paul Compton
A General Theory of Deduction, Induction, and Learning . . . 228
   Eric Martin, Arun Sharma, and Frank Stephan
Learning Conformation Rules . . . 243
   Osamu Maruyama, Takayoshi Shoudai, Emiko Furuichi, Satoru Kuhara, and Satoru Miyano
Knowledge Navigation on Visualizing Complementary Documents . . . 258
   Naohiro Matsumura, Yukio Ohsawa, and Mitsuru Ishizuka
KeyWorld: Extracting Keywords from a Document as a Small World . . . 271
   Yutaka Matsuo, Yukio Ohsawa, and Mitsuru Ishizuka
A Method for Discovering Purified Web Communities . . . 282
   Tsuyoshi Murata
Divide and Conquer Machine Learning for a Genomics Analogy Problem . . . 290
   Ming Ouyang, John Case, and Joan Burnside


Towards a Method of Searching a Diverse Theory Space for Scientific Discovery . . . 304
   Joseph Phillips
Efficient Local Search in Conceptual Clustering . . . 323
   Céline Robardet and Fabien Feschet
Computational Revision of Quantitative Scientific Models . . . 336
   Kazumi Saito, Pat Langley, Trond Grenager, Christopher Potter, Alicia Torregrosa, and Steven A. Klooster
An Efficient Derivation for Elementary Formal Systems Based on Partial Unification . . . 350
   Noriko Sugimoto, Hiroki Ishizaka, and Takeshi Shinohara
Worst-Case Analysis of Rule Discovery . . . 365
   Einoshin Suzuki
Mining Semi-structured Data by Path Expressions . . . 378
   Katsuaki Taniguchi, Hiroshi Sakamoto, Hiroki Arimura, Shinichi Shimozono, and Setsuo Arikawa
Theory Revision in Equation Discovery . . . 389
   Ljupčo Todorovski and Sašo Džeroski
Simplified Training Algorithms for Hierarchical Hidden Markov Models . . . 401
   Nobuhisa Ueda and Taisuke Sato
Discovering Repetitive Expressions and Affinities from Anthologies of Classical Japanese Poems . . . 416
   Koichiro Yamamoto, Masayuki Takeda, Ayumi Shinohara, Tomoko Fukuda, and Ichirō Nanri

Poster Papers

Web Site Rating and Improvement Based on Hyperlink Structure . . . 429
   Hironori Hiraishi, Hisayoshi Kato, Naonori Ohtsuka, and Fumio Mizoguchi
A Practical Algorithm to Find the Best Episode Patterns . . . 435
   Masahiro Hirao, Shunsuke Inenaga, Ayumi Shinohara, Masayuki Takeda, and Setsuo Arikawa
Interactive Exploration of Time Series Data . . . 441
   Harry Hochheiser and Ben Shneiderman
Clustering Rules Using Empirical Similarity of Support Sets . . . 447
   Shreevardhan Lele, Bruce Golden, Kimberly Ozga, and Edward Wasil


Computational Lessons from a Cognitive Study of Invention . . . 452
   Marin Simina, Michael E. Gorman, and Janet L. Kolodner
Component-Based Framework for Virtual Information Materialization . . . 458
   Yuzuru Tanaka and Tsuyoshi Sugibuchi
Dynamic Aggregation to Support Pattern Discovery: A Case Study with Web Logs . . . 464
   Lida Tang and Ben Shneiderman
Separation of Photoelectrons via Multivariate Maxwellian Mixture Model . . . 470
   Genta Ueno, Nagatomo Nakamura, and Tomoyuki Higuchi
Logic of Drug Discovery: A Descriptive Model of a Practice in Neuropharmacology . . . 476
   Alexander P.M. van den Bosch
SCOOP: A Record Extractor without Knowledge on Input . . . 482
   Yasuhiro Yamada, Daisuke Ikeda, and Sachio Hirokawa
Meta-analysis of Mutagenes Discovery . . . 488
   Premysl Zak, Pavel Spacil, and Jaroslava Halova

Author Index . . . 493

Theory Revision in Equation Discovery

Ljupčo Todorovski and Sašo Džeroski

Department of Intelligent Systems, Jožef Stefan Institute
Jamova 39, 1000 Ljubljana, Slovenia
[email protected], [email protected]

Abstract. State-of-the-art equation discovery systems start the discovery process from scratch, rather than from an initial hypothesis in the space of equations. Theory revision systems, on the other hand, start from a given theory as an initial hypothesis and use new examples to improve its quality. Two quality criteria are usually used in theory revision systems: the first is the accuracy of the theory on new examples and the second is the minimality of change of the original theory. In this paper, we formulate the problem of theory revision in the context of equation discovery. Moreover, we propose a theory revision method suitable for use with the equation discovery system Lagramge. Both the accuracy of the revised theory and the minimality of theory change are considered. The use of the method is illustrated on the problem of improving an existing equation-based model of the net production of carbon in the Earth ecosystem. Experiments show that small changes in the model parameters and structure considerably improve the accuracy of the model.

1 Introduction

Most of the existing equation discovery systems make use of a very limited portion of the theoretical knowledge available in the domain of interest. Usually, the domain knowledge is used to constrain the search space of possible equations to the equations that make sense from the point of view of the domain experts. One aspect of the domain knowledge that is usually neglected by equation discovery systems is the existing models in the domain. Rather than starting the search with an existing equation-based model, equation discovery systems always start their search from scratch. In contrast, theory revision systems [9,3] start with an existing model and use heuristic search to revise the model in order to improve its fit to observational data. Most of the work on theory revision systems concerns the revision of theories in propositional and first-order logic [9]. In this paper, we propose a flexible grammar-based framework for theory revision in equation discovery. The existing initial model is transformed into a grammar, and alternative productions are used to define a space of possible revised equation models. The grammar-based equation discovery system Lagramge [6] is then used to search through the space of revised models and find the one that fits the observational data best. The use of the proposed framework is illustrated on revising an equation-based earth-science model of the net production of carbon in the Earth ecosystem.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 389–400, 2001.
© Springer-Verlag Berlin Heidelberg 2001


The paper is organized as follows. The following section gives a brief introduction to grammar-based equation discovery. Typical approaches to the revision of theories in propositional and first-order logic are briefly reviewed in Section 3. The grammar-based framework for theory revision in equation discovery is presented in Section 4. Section 5 presents the experiments with revising the earth-science equation model. The last section summarizes the paper, discusses related work, and gives directions for further work.

2 Equation Discovery

Equation discovery is the area of machine learning that develops methods for automated discovery of quantitative laws, expressed in the form of equations, in collections of measured data [1]. Equation discovery systems heuristically search through a subset of the space of all possible equations and try to find the equation that fits the measured data best.

Different equation discovery systems explore different spaces of possible equations. Early equation discovery systems used pre-defined (built-in) spaces that were small enough to allow effective heuristic (or exhaustive) search. However, this approach does not allow the user of the equation discovery system to tailor the space of possible equations to the domain of interest. Recent equation discovery systems, on the other hand, use different approaches to allow the user to restrict the space of possible equations. In equation discovery systems based on genetic programming, the user is allowed to specify a set of algebraic operators that can be used. A similar approach has been used in the EF [10] equation discovery system. The equation discovery system SDS [7] effectively uses user-provided scale-type information about the dimensions of the system variables and is capable of discovering complex equations from noisy data. Finally, the equation discovery system Lagramge [6] allows the user to specify the space of possible equations using a context-free grammar. Note that grammars are a more general and powerful mechanism for tailoring the space of equations to the domain of use than the ones used in SDS [7] and EF [10]. In the rest of this section we describe the grammar-based approach to equation discovery used in Lagramge.

2.1 Grammar-Based Equation Discovery

The problem of grammar-based equation discovery can be formalized as follows.

Given:
– a set of variables V = v1, v2, ..., vn of the observed system, including a target dependent variable vd ∈ V;
– a grammar G; and
– a table M of observations (measured values) of the system variables.

Find a model E in the form of one or more algebraic or differential equations defining the target variable vd that:
1. is derived by the grammar G; and


2. minimizes the discrepancy between the observed values of the target variable vd and the values of vd obtained by simulating the model.

An example of a grammar for equation discovery is given in Table 1. The grammar contains a set of two nonterminal symbols {P_Vdiff, Vdiff}, with a set of productions attached to each of them, and a set of three terminal symbols {v1, v2, const[0:1]}. The semantics of the terminal and nonterminal symbols in the grammar are explained below.

There are two types of terminal symbols used in the grammars for equation discovery. The first group is used to denote the variables of the observed system (v1 and v2 in the example grammar from Table 1). Another group of terminal symbols, of the form const[l:h], is used to denote a constant parameter in the equation model whose value has to be fitted against the observational data from M. The constraint [l:h] specifies that the value of the constant parameter should be within the interval l ≤ v ≤ h.

Table 1. An example of a grammar for equation discovery defining the space of polynomials of a single variable vdiff = v1 − v2.

P_Vdiff -> const[0:1]
P_Vdiff -> const[0:1] + (P_Vdiff) * (Vdiff)
Vdiff -> v1 - v2

The nonterminal symbol Vdiff defines an intermediate variable which is the difference between the two system variables v1 and v2. This is done with the single production for the nonterminal symbol Vdiff. The other nonterminal symbol, P_Vdiff, is used to build polynomials of an arbitrary degree.
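As a rough illustration of how the recursive production spans polynomials of arbitrary degree, the following sketch (our own, not Lagramge's implementation; the dictionary encoding and function names are made up for this example) randomly derives expressions from the Table 1 grammar:

```python
import random

# Grammar from Table 1: each nonterminal maps to its alternative productions.
# "const" stands for a constant parameter to be fitted later.
GRAMMAR = {
    "P_Vdiff": [["const"],
                ["const", "+", "(", "P_Vdiff", ")", "*", "(", "Vdiff", ")"]],
    "Vdiff":   [["v1", "-", "v2"]],
}

def derive(symbol, rng, depth=0, max_depth=4):
    """Expand a symbol into an expression string; terminals are returned as-is."""
    if symbol not in GRAMMAR:
        return symbol
    prods = GRAMMAR[symbol]
    # Beyond max_depth, always take the first (non-recursive) production,
    # so the derivation is guaranteed to terminate.
    prod = prods[0] if depth >= max_depth else rng.choice(prods)
    return " ".join(derive(s, rng, depth + 1, max_depth) for s in prod)

rng = random.Random(1)
for _ in range(3):
    print(derive("P_Vdiff", rng))
```

Each run prints polynomials in the difference v1 − v2 of increasing nesting depth, mirroring how Lagramge enumerates parse trees of the user-supplied grammar.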

2.2 Lagramge

The equation discovery system Lagramge applies heuristic (or exhaustive) search through the space of models generated by the user-provided grammar G. The values of the constant parameters (terminal symbols const) in the generated models are fitted against the input data M using a standard non-linear constrained optimization method. After fitting the values of the constant parameters, the model is evaluated according to the sum of squared errors (the SSE heuristic function [6]), i.e., the differences between the observed values of the target variable vd and the values of vd calculated by the model. An alternative MDL heuristic function that also takes into account the complexity of the model can be used [6].
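For a fixed equation structure, this parameter fitting reduces to bounded nonlinear least squares. A stdlib-only sketch of the idea (Lagramge itself uses a dedicated constrained optimizer [6]; the model y = c·x, the data, and the bounds below are illustrative only):

```python
def sse(c, xs, ys):
    """Sum of squared errors of the candidate model y = c * x."""
    return sum((y - c * x) ** 2 for x, y in zip(xs, ys))

def fit_const(xs, ys, lo, hi, iters=60):
    """Golden-section search for c in [lo, hi] minimizing the SSE,
    mimicking a const[lo:hi] terminal in a Lagramge grammar."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c1, c2 = b - phi * (b - a), a + phi * (b - a)
        if sse(c1, xs, ys) < sse(c2, xs, ys):
            b = c2
        else:
            a = c1
    return (a + b) / 2

xs = [1.0, 2.0, 3.0, 4.0]
ys = [0.4 * x for x in xs]            # noiseless data generated with c = 0.4
print(fit_const(xs, ys, 0.0, 0.778))  # should approach 0.4
```

Since the SSE is quadratic (hence unimodal) in a single constant, the bracketing search converges; real models with many interacting constants need a genuine multivariate optimizer.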

3 Theory Revision

The problem of theory revision can be defined as follows: given an imperfect domain theory in the form of classification rules and a set of classified examples, find an approximately minimal syntactic revision of the domain theory that correctly classifies all of the examples. A representative system that addresses this problem is Either [3]. Either refines propositional Horn-clause theories using a suite of abductive, deductive, and inductive techniques. Deduction is used to identify the problems with the domain theory, while abduction and induction are used to correct them. The problem of theory revision has received a lot of attention in the field of inductive logic programming [2], where a number of approaches have been developed for revising theories in the form of first-order Horn clauses. For an overview, we refer the reader to [9].

Two kinds of problems are encountered with imperfect domain theories: over-generality occurs when an example is classified into a class other than the correct one, while over-specificity occurs when an example cannot be proven to belong to the correct class. Note that a single example can be misclassified both ways at the same time. Overly general rules are either specialized by adding new conditions to their antecedents or are deleted from the knowledge base. Problems of over-specificity are solved by generalizing the antecedents of existing rules, e.g., by removing conditions from them, or by the induction of new rules.

4 Grammar-Based Theory Revision of Equation Models

4.1 Problem Definition

The problem of grammar-based theory revision can be formalized as follows.

Given:
– a set of variables V = v1, v2, ..., vn of the observed system, including a target dependent variable vd ∈ V;
– an existing model E, represented as an equation (or equations) defining the target variable vd. Note that this can actually be a set of (algebraic or differential) equations defining the value of the target variable vd;
– a grammar G that derives the model E; and
– a table M of observations (measured values) of the system variables.

Find a revised model E′ (an equation/set of equations as above) that:
1. is derived by the grammar G;
2. minimizes the discrepancy between the observed values of the target variable vd and the values of vd obtained by simulating the model; and
3. differs from the existing model E as little as possible.

Items 2 and 3 above would typically appear in a formulation of a general theory revision problem, regardless of the language in which the theories are expressed. In contrast to our formulation, however, the possible changes to the initial theory would be specified in terms of revision operators that can be applied to the initial and intermediate theories. As theories in theory revision settings are typically logical theories, operators typically include addition/deletion of entire rules (propositional or first-order Horn clauses) and addition/deletion of conditions in individual rules.


4.2 From an Initial Model to a Grammar

In a typical setting of revising an existing scientific model, we would only have observational data and a model, i.e., an equation developed by scientists to explain a particular phenomenon. A grammar that would explain how this model was actually derived and provide options for alternative models is typically not available.

The above is especially true for simpler models. However, when the model (equation) is complex, it is only rarely written as a single equation defining the target variable, but rather as a set of equations, which typically contains equations defining intermediate variables. The latter typically define meaningful concepts in the domain of discourse. Often, alternative equations defining an intermediate variable would be possible and the modeling scientist would choose one of these: the alternatives would rarely (if ever) be documented in the model itself, but might be mentioned in a scientific article describing the derived model and the modeling process.

In a typical setting of revising an existing scientiﬁc model, we would only have observational data and a model, i.e., an equation developed by scientists to explain a particular phenomenon. A grammar that would explain how this model was actually derived and provide options for alternative models is typically not available. The above is especially true for simpler models. However, when the model (equation) is complex, it is only rarely written as a single equation deﬁning the target variable, but rather as a set of equations deﬁning the target variable, which typically contains equations deﬁning intermediate variables. The latter typically deﬁne meaningful concepts in the domain of discourse. Often, alternative equations deﬁning an intermediate variable would be possible and the modeling scientist would choose one of these: the alternatives would rarely (if ever) be documented in the model itself, but might be mentioned in a scientiﬁc article describing the derived model and the modeling process. Table 2. Equations deﬁning the NPPc variable in the CASA earth-science model. NPPc = max(0, E · IP AR) E = 0.389 · T 1 · T 2 · W T 1 = 0.8 + 0.02 · topt − 0.0005 · topt 2 T 2 = 1.1814/((1 + e0.2·(TDIFF −10) ) · (1 + e0.3·(−TDIFF −10) )) TDIFF = topt − tempc W = 0.5 + 0.5 · eet/P ET P ET = 1.6 · (10 · max(tempc, 0)/ahi)A · pet tw m A = 0.000000675 · ahi 3 − 0.0000771 · ahi 2 + 0.01792 · ahi + 0.49239 IP AR = FPAR FAS · monthly solar · SOL CONV · 0.5 FPAR FAS = min((SR FAS − 1.08)/srdiﬀ , 0.95) SR FAS = (1 + fas ndvi/1000)/(1 − fas ndvi/1000) SOL CONV = 0.0864 · days per month

A set of equations defining a target variable through some intermediate variables can easily be turned into a grammar, as demonstrated in Tables 2 and 3, which give an earth-science model and a grammar that derives this model only. Having the grammar in Table 3, however, enables us to specify alternative models by providing additional productions for the nonterminal symbols in the grammar. Additional productions for intermediate variables would specify alternative choices, only one of which will eventually be chosen for the final model. Observational data would then be used to select among combinations of such choices, by applying a grammar-based equation discovery system (such as Lagramge) to the observational data with the grammar that includes the additional productions.

Table 3. Grammar derived from the equations for the NPPc variable in the CASA earth-science model in Table 2. The grammar generates the original equations only.

NPPc -> max(const[0:0], E * IPAR)
E -> const[0.389:0.389] * T1 * T2 * W
T1 -> const[0.8:0.8] + const[0.02:0.02] * topt - const[0.0005:0.0005] * topt * topt
T2 -> const[1.1814:1.1814] / ((const[1:1] + exp(const[0.2:0.2] * (TDIFF - const[10:10]))) * (const[1:1] + exp(const[0.3:0.3] * (-TDIFF - const[10:10]))))
TDIFF -> topt - tempc
W -> const[0.5:0.5] + const[0.5:0.5] * eet / max(PET, const[0:0])
PET -> const[1.6:1.6] * pow(const[10:10] * max(tempc, const[0:0]) / ahi, A) * pet_tw_m
A -> const[0.000000675:0.000000675] * ahi * ahi * ahi - const[0.0000771:0.0000771] * ahi * ahi + const[0.01792:0.01792] * ahi + const[0.49239:0.49239]
IPAR -> FPAR_FAS * solar * SOL_CONV * const[0.5:0.5]
FPAR_FAS -> min((SR_FAS - const[1.08:1.08]) / srdiff, const[0.95:0.95])
SR_FAS -> (const[1:1] + fas_ndvi / const[1000:1000]) / (const[1:1] - fas_ndvi / const[1000:1000])
SOL_CONV -> const[0.0864:0.0864] * days_per_month

While the approach presented above does take the initial model into account, it may allow for a completely different model to be derived, depending on whether productions for alternative definitions are provided for each of the intermediate variables. It is here that the minimal revision/change principle comes into play: among theories of similar quality (fit to the data), theories that are closer to the original theory are to be preferred. Since we are dealing with theories that are not necessarily expressed in logic (e.g., equations), only syntactic criteria of minimality of change are applicable in a straightforward fashion.

4.3 Typical Alternative Productions

Note that when alternative productions are specified for an intermediate variable, there are (at least in principle) no restrictions on these productions. For example, they can introduce new intermediate variables and productions defining them. They can also specify arbitrary functional forms (in the case of equations). However, they do have to eventually derive (in the context of the entire grammar) valid sub-expressions involving the set of terminal symbols (system variables) associated with the initial model.

A very common alternative production would replace the particular constants on the right-hand side with generic constants, allowing the equation discovery system to re-fit them to the given observational data. In the grammar from Table 3 this change can be achieved by replacing a terminal symbol of the form const[v:v], denoting a constant parameter with fixed value v, with a generic symbol const that allows for an arbitrary value of the particular constant parameter. In our experiments with the earth-science CASA model we allow for a 100% change of the original values of the constant parameters in the initial model. This can be specified by replacing the terminal symbol const[v:v] with const[0:2·v], where the interval [0 : 2·v] is equal to [(v − 100%·v) : (v + 100%·v)] (a 100% relative change).

A slightly more complex alternative production would replace a particular polynomial on the right-hand side of a production with an arbitrary polynomial of the same (intermediate) variables. For example, the production for T1 in the grammar from Table 3 can be replaced by productions, similar to those in the example grammar from Table 1, that generate an arbitrary polynomial of the variable topt.
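The mechanical part of this relaxation, widening each fixed const[v:v] terminal to const[0:2v], can be sketched as a simple rewrite over the grammar text (our own illustration; the production syntax follows Table 3, and the function name is made up):

```python
import re

def relax_constants(production, rel_change=1.0):
    """Replace each fixed const[v:v] terminal with const[lo:hi], where
    [lo, hi] allows a rel_change relative deviation from v."""
    def widen(match):
        v = float(match.group(1))
        lo, hi = v * (1 - rel_change), v * (1 + rel_change)
        return "const[%g:%g]" % (lo, hi)
    # Only widen terminals whose lower and upper bounds coincide;
    # already-relaxed terminals like const[0:1] are left untouched.
    return re.sub(r"const\[([0-9.eE+-]+):\1\]", widen, production)

print(relax_constants("E -> const[0.389:0.389] * T1 * T2 * W"))
# -> E -> const[0:0.778] * T1 * T2 * W
```

With rel_change=0.2 the same rewrite yields the 20% relaxation used for SR_FAS in the experiments below.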

4.4 Current Implementation

Our current implementation of the theory revision approach to equation discovery outlined above involves applying Lagramge to the given observational data and a grammar specifying the possible alternative productions to be used in theory revision. The observational data are used to select a particular combination of the possible alternatives: note that these also include leaving parts of the model unchanged (as the original productions are also part of the grammar) even if alternative productions for them exist.

We currently do not have an implementation of the minimal change preference integrated within Lagramge. This, however, can be achieved in a relatively straightforward manner. One of the heuristic functions used by Lagramge to search the space of equations, called MDL, takes into account the degree of fit (sum of squared errors) as well as the size of the equation model. A reasonable approach to implementing a minimality-of-change principle would be to replace the second term in the MDL heuristic: replace the size of the equation with a distance between the current model and the initial model. The distance measure can be a distance on tree-structured terms, which would take into account the number and complexity of the alternative productions taken to derive the current equation.
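One crude way to realize such a heuristic (our own illustrative sketch, not part of Lagramge; representing a derivation as a flat list of production labels is a simplification of the tree distance suggested above):

```python
def revision_score(sse, derivation, initial_derivation, weight=1.0):
    """MDL-style heuristic: degree of fit plus a minimal-change penalty.
    Derivations are lists of production labels; the penalty counts
    productions used by the candidate but absent from the initial model."""
    changed = sum(1 for p in derivation if p not in set(initial_derivation))
    return sse + weight * changed

initial = ["E->0.389*T1*T2*W", "T1->poly2(topt)"]
candidate = ["E->const*T1*T2*W", "T1->poly2(topt)"]
print(revision_score(10.0, candidate, initial))  # SSE 10.0 plus one changed production
```

Lower scores are better: among candidates with similar SSE, the one reusing more of the initial derivation wins, which is exactly the minimality-of-change preference.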

5 Experiments in Revising an Earth-Science Model

We illustrate the use of the proposed framework for theory revision in equation discovery on the problem of revising one part of the earth-science CASA model [4]. The CASA model predicts annual global ﬂuxes in trace gas production on the basis of a number of measured (observed) variables, such as surface temperatures, satellite observations of the land surface, soil properties, etc. Because the whole CASA model is a quite complex system of diﬀerence and algebraic equations, we focused on the revision of the NPPc part of CASA (CASA-NPPc), presented in Table 2, that is used to predict the monthly net production of carbon at a given location.


The values of the input variables (terminal symbols in the grammar in Table 3) were measured (and/or calculated) for 303 locations on the Earth, providing a data set with 303 examples. In order to evaluate the accuracy of the model on unseen data we applied standard ten-fold cross-validation. The error of the original and revised models was calculated as the root mean squared error, defined as

RMSE = √( Σ_{i=1}^{N} (NPPc_i − NPPĉ_i)² / N ),

where N is the number of data points, and NPPc_i and NPPĉ_i are the observed value and the value calculated by the model, respectively.
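The evaluation procedure can be sketched as follows (a generic stdlib-only illustration; simulating the actual CASA-NPPc model is, of course, far more involved, and the toy "model" here is just a training-set mean):

```python
from math import sqrt

def rmse(observed, predicted):
    """Root mean squared error between observed and model-predicted values."""
    n = len(observed)
    return sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

def ten_fold_rmse(data, fit, predict):
    """Average test-set RMSE over ten folds; `fit` builds a model from a
    training list of (inputs, target) pairs, `predict` applies it to inputs."""
    folds = [data[i::10] for i in range(10)]
    errors = []
    for i, test in enumerate(folds):
        train = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        model = fit(train)
        errors.append(rmse([y for _, y in test],
                           [predict(model, x) for x, _ in test]))
    return sum(errors) / len(errors)

# Toy usage: predict every location's target by the training mean.
data = [(x, 2.0 * x) for x in range(303)]
avg = ten_fold_rmse(data,
                    fit=lambda tr: sum(y for _, y in tr) / len(tr),
                    predict=lambda m, x: m)
print(round(avg, 2))
```

The reported error reductions below compare this cross-validated RMSE of each revised model against that of the original CASA-NPPc model.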

5.1 Revisions Used in the Experiments

As described in Section 4, we first transformed the given NPPc model into a grammar (given in Table 3) that derives that model only. Furthermore, we added alternative productions to the grammar that define the space of possible revisions. We used six alternative possibilities for the revision of the NPPc model, described below.

E-c-100: we allowed a 100% relative change of the constant parameter 0.389 in the equation defining the intermediate variable E. Therefore, we replaced the original production for the nonterminal symbol E in the grammar with E -> const[0:0.778] * T1 * T2 * W, i.e., changed the constraint on the value of the constant parameter from the original const[0.389:0.389], which fixes the value of the constant parameter, to const[0:0.778], which allows a 100% relative change of the original value ([0 : 0.778] being equal to [(0.389 − 100%·0.389) : (0.389 + 100%·0.389)]).

T1-c-100, T2-c-100: we allowed the same kind of revision as the one described above on the right-hand sides of the productions for T1 and T2.

SR_FAS-c-20: we allowed a 20% relative change of the constant parameter values in the equation defining the intermediate variable SR_FAS. The relative change of 20% was used to avoid values of the constant parameters lower than 800, which would cause singularity (division by zero) problems in the formula for calculating SR_FAS.

T1-s: we allowed the original second-degree polynomial for calculating T1 = 0.8 + 0.02 · topt − 0.0005 · topt² to be replaced with an arbitrary polynomial of the same variable topt. The following alternative productions were added to the grammar from Table 3 for this purpose: T1 -> const and T1 -> const + (T1) * topt.

T2-s: the graph of the dependency between the T2 and TDIFF variables shows a Gaussian-like, slightly asymmetrical dependency curve. Since this kind of dependency can also be approximated with a higher-degree polynomial, we replaced the original T2 production in the grammar from Table 3 with two productions (similar to the ones for T1-s, presented above) that define an arbitrary polynomial of the TDIFF variable.

In addition to these six possibilities for revising the CASA-NPPc model we also used different combinations of them.

Theory Revision in Equation Discovery

5.2 Results of the Experiments

The results of the experiments with the different alternative grammars for revision are presented in Table 4.

Table 4. Error reduction (in %) gained by revising the original CASA-NPPc model using different grammars for revision.

  Grammar                                        Reduction of RMSE (in %)
  SR FAS-c-20                                    14.93
  T2-c-100                                       13.25
  T1-s                                           13.05
  T2-s                                           12.90
  E-c-100                                        12.59
  T1-c-100                                       12.39
  SR FAS-c-20 + T2-s                             15.56
  SR FAS-c-20 + T1-s                             15.46
  T2-c-100 + T1-s                                13.92
  T2-s + T1-s                                    13.30
  SR FAS-c-20 + T2-c-100                         11.55
  SR FAS-c-20 + T2-s + T1-s + E-c-100            16.19
  SR FAS-c-20 + T2-s + T1-c-100 + E-c-100        15.44
  SR FAS-c-20 + T2-c-100 + T1-s + E-c-100        14.82
  SR FAS-c-20 + T2-c-100 + T1-c-100 + E-c-100    12.92

The first six rows of Table 4 show that revising the values of the constant parameters in the equation for calculating SR FAS gives the greatest improvement of the original model. The original value of the parameters (equal to 1000) defines an almost linear dependence of SR FAS on the observed variable srdiff. The revised values of the constant parameters were equal to 800 (the lowest possible values), which increases the non-linearity of the dependence. Allowing lower values of the parameters in the equation would give further improvement, but singularity (division by zero) problems appear due to the range of the srdiff variable. The analysis of the results of the structural revisions shows the following. The T1-s revision causes the second-degree polynomial for calculating the T1 variable to be replaced by a fourth-degree polynomial. On the other hand, the structural revision T2-s reduced the complex formula for calculating T2 to a constant value. This is a surprising result that would have to be discussed with the Earth science experts that built the CASA model. Furthermore, we tested pairwise combinations of the six model refinements. The results are presented in the second part of Table 4. They show that the improvements gained using individual refinement grammars do not combine additively. However, combinations do increase the improvements: the maximal improvement gained with pairwise combinations is 15.56%, compared with the highest improvement of 14.93% gained using individual revisions.
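The "Reduction of RMSE (in %)" column is a relative error reduction; presumably it is computed as below (our reconstruction; the concrete RMSE values used here are made up for illustration):

```python
# Relative RMSE reduction in percent, as presumably reported in Table 4.
# The concrete RMSE values below are hypothetical, for illustration only.

def rmse_reduction_percent(rmse_original, rmse_revised):
    return 100.0 * (rmse_original - rmse_revised) / rmse_original

# e.g. if the original model had RMSE 100.0 and a revision lowered it to
# 85.07, the reported reduction would be 14.93%.
example = rmse_reduction_percent(100.0, 85.07)
```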


L. Todorovski and S. Džeroski

Finally, the results of the experiments combining all the refinements are presented in the last four rows of Table 4. Note, however, that the revisions of the T1 and T2 structures (T1-s and T2-s) are mutually exclusive with the respective revisions of the T1 and T2 constants (T1-c-100 and T2-c-100). Therefore, only four combinations are possible; the one combining the structural revisions of the T1 and T2 formulas with revisions of the values of the constant parameters in the formulas for SR FAS and E gives the maximal improvement in accuracy of 16.19%. In sum, the presented experimental results show that small revisions of the CASA-NPPc model parameters and structure considerably improve the accuracy of the model, the maximal improvement being above 16%. However, Earth science experts should also evaluate the comprehensibility and acceptability of the revised models. If some of the revisions generate models that do not make sense from their point of view, new alternative productions would have to be defined to reflect the experts' comments and allow only revisions that lead to acceptable models. Note here that most of the error reduction is gained using a fairly simple revision operator that changes the values of the constant parameters in the SR FAS equation. Only minor additional reductions can be obtained by combining this revision with any of the other five revision operators described above. Therefore, this revision would probably be the optimal one from the point of view of the minimality of change criterion discussed in Section 4.

6 Conclusions and Discussion

We have presented a general framework for the revision of theories in the form of (sets of) quantitative equations. The method is based on grammars, which can be derived from the original theory. Domain experts can focus the revision process on parts of the model and guide it by providing relevant alternative productions. In this way, the revision process can be interactive, as is quite often the case when revising theories expressed in logic. We have applied our approach to the problem of revising an existing equation-based model of the net production of carbon in the Earth ecosystem. Experimental results show that small revisions of both the values of the constant parameters and the structure of the equations considerably reduce the error of the model, by about 16%. Saito et al. [5] address the same task of revising scientific models in the form of equations. Their approach is based on transforming parts of the model into a neural network, training the neural network, and then transforming the trained network back into an expression/equation. This indirect approach is limited to revising the parameters or form of one equation in the model at a time. It also requires some handcrafting to encode the equations as a neural network – the authors state that "the need to translate the existing CASA model into a declarative form that our discovery system can manipulate" is a challenge to their approach.


Our approach allows for a straightforward representation of existing scientific models as grammars, which can then be directly manipulated and used to perform theory revision. The transition from the initial model to a grammar is so straightforward that we consider automating this process one of the topics for immediate further work. Revisions to several equations of the original model may be considered simultaneously, as illustrated by the experiments performed. Whigham and Recknagel [8] also consider the specific task of revising an existing model, for predicting chlorophyll-a, using measured data. They use a genetic algorithm to calibrate the equation parameters. They also use a grammar-based genetic programming approach to revise the structure of two sub-parts (one at a time) of the initial model. A maximally general grammar, which can derive an arbitrary expression using the allowed arithmetic operators and functions, was used for each of the two sub-parts. Unlike this paper, Whigham and Recknagel [8] do not present a general framework for the revision of quantitative scientific models. Their approach is similar to ours in that they use grammars to specify possible revisions. However, the grammars they use are too general to provide much information about the domain at hand. Also, they do not consider the notion of minimality of revision, and genetic programming typically produces very large expressions without a simplicity bias. As already mentioned, an immediate topic for further work is to automate the generation of the grammar from the initial model. Another challenge is to provide the domain experts with an interactive tool for testing out different alternatives for revision. Furthermore, integrating the minimality of change criterion in Lagramge is also an open issue. The minimal description length (MDL) heuristics in Lagramge can be adapted to take into account the distance between the current and the initial equation model.
Finally, we plan to apply the proposed framework to the revision of other portions of the CASA model, as well as to the revision of other equation-based environmental models.
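One possible shape of the MDL adaptation mentioned above, sketched under our own assumptions (this is not part of Lagramge): a score that adds to the model error a penalty proportional to a structural distance from the initial model, so that among equally accurate revisions the minimal change wins.

```python
# Hypothetical sketch of a minimality-of-change score (not Lagramge's actual
# MDL heuristic): lower is better.  The symmetric difference of term sets is
# a crude stand-in for a real structural edit distance between equations.

def revision_score(error, revised_terms, initial_terms, penalty=1.0):
    distance = len(set(revised_terms) ^ set(initial_terms))
    return error + penalty * distance

initial      = {"0.8", "0.02*topt", "-0.0005*topt^2"}
small_change = {"0.7", "0.02*topt", "-0.0005*topt^2"}     # one constant tweaked
big_change   = {"c0", "c1*topt", "c2*topt^2", "c3*topt^3"}  # new structure
```

With equal error, `small_change` scores better than `big_change`, which is exactly the bias toward minimal revisions discussed in the text.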

Acknowledgments

We thank Christopher Potter, Steven Klooster, and Alicia Torregrosa from NASA-Ames Research Center for making available both the CASA model and the relevant data set.

References

1. P. Langley, H. A. Simon, G. L. Bradshaw, and J. M. Żytkow. Scientific Discovery. MIT Press, Cambridge, MA, 1987.
2. N. Lavrač and S. Džeroski. Inductive Logic Programming: Techniques and Applications. Ellis Horwood, Chichester, 1994. Freely available at http://www-ai.ijs.si/SasoDzeroski/ILPBook/.
3. D. Ourston and R. J. Mooney. Theory refinement combining analytical and empirical methods. Artificial Intelligence, 66:273–309, 1994.


4. C. S. Potter and S. A. Klooster. Interannual variability in soil trace gas (CO2, N2O, NO) fluxes and analysis of controllers on regional to global scales. Global Biogeochemical Cycles, 12:621–635, 1998.
5. K. Saito, P. Langley, and T. Grenager. The computational revision of quantitative scientific models. 2001. Submitted to the Discovery Science conference.
6. L. Todorovski and S. Džeroski. Declarative bias in equation discovery. In Proceedings of the Fourteenth International Conference on Machine Learning, pages 376–384, Nashville, TN, 1997. Morgan Kaufmann.
7. T. Washio and H. Motoda. Discovering admissible models of complex systems based on scale-types and identity constraints. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, volume 2, pages 810–817, Nagoya, Japan, 1997. Morgan Kaufmann.
8. P. A. Whigham and F. Recknagel. Predicting chlorophyll-a in freshwater lakes by hybridising process-based models and genetic algorithms. In Book of Abstracts of the Second International Conference on Applications of Machine Learning to Ecological Modeling. Adelaide University, 2000.
9. S. Wrobel. First order theory refinement. In L. De Raedt, editor, Advances in Inductive Logic Programming, pages 14–33. IOS Press, 1996.
10. R. Zembowicz and J. M. Żytkow. Discovery of equations: Experimental evaluation of convergence. In Proceedings of the Tenth National Conference on Artificial Intelligence, pages 70–75, San Jose, CA, 1992. Morgan Kaufmann.

Mining Semi-structured Data by Path Expressions

Katsuaki Taniguchi¹, Hiroshi Sakamoto¹, Hiroki Arimura¹,², Shinichi Shimozono³, and Setsuo Arikawa¹

¹ Department of Informatics, Kyushu University, Fukuoka 812-8581, Japan, {k-tani, hiroshi, arim, arikawa}@i.kyushu-u.ac.jp
² PRESTO, Japan Science Technology Co., Japan
³ Department of Artificial Intelligence, Kyushu Institute of Technology, Iizuka 820-8502, Japan, [email protected]

Abstract. A new data model for filtering semi-structured texts is presented. Given positive and negative examples of HTML pages, labeled by a labelling function, the HTML pages are divided into sets of paths using an XML parser. A path is a sequence of element nodes and text nodes such that a text node appears only at the tail of the path. The labels of an element node and a text node are called a tag and a text, respectively. The goal of a mining algorithm is to find an interesting pattern, called an association path, which is a pair of a tag sequence t and a word sequence w represented by a word-association pattern [1]. An association path (t, w) agrees with a labelling function on a path p if t is a subsequence of the tag sequence of p and w matches the text of p iff p is in a positive example. The importance of such an association path α is measured by the agreement with the labelling function on the given data, i.e., the number of paths on which α agrees with the labelling function. We present a mining algorithm for this problem and show the efficiency of this model by experiments.

1 Introduction

In information extraction, one of the central problems in Web mining is to detect the occurrences or regions of useful texts. For Web data, this problem is particularly difficult because the limited tags of HTML cannot represent a rich logical structure. The framework of wrapper induction introduced by Kushmerick [13] is a new approach to handling this difficulty. The most interesting result of his study is showing the effectiveness and efficiency of simple wrappers with string delimiters in information extraction tasks. Besides his work, other extraction models can be found, for example, in [8,9,10,11,15,17]. The target class of the wrapper induction model, called HTML pages, is restricted such that a page is defined by a finite repetition of a sequence of attributes.

K. P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 378–388, 2001. © Springer-Verlag Berlin Heidelberg 2001

The attributes are the data which an algorithm has to extract. In a learning model, a learning algorithm takes as input labeled examples, where the labels indicate whether the examples are positive or negative. This strategy is useful for learning a concept in the wrapper class. However, when a concept class is hard to learn from a small number of examples, the model may not be effective. This difficulty is critical from the implementation point of view, since the labeled examples are actually produced by human inspection. Thus, we would like to present a mining model that decides which portion of given data is important, together with an automatic process for constructing a large labeled sample. The aim of this paper is to find rules for filtering semi-structured texts according to users' interests. An HTML/XML file can be considered as an ordered labeled tree. We assume that each node is either an element node or a text node. Each node has two types of labels, called the name and the value. An element node corresponds to a tag; the name of the node is the tag name (such as html, body, or p) and the value of the node is empty. A text node corresponds to a portion of plain text in an HTML page; its name is the reserved string Text, and its value is the text itself. A filtering rule is a sequence s = α1, . . . , αk, β, where each αi is a tag name and β is a word-association pattern [1], which is a string consisting of several words and the wild card ∗. A word-association pattern matches a string if there is a possible substitution for all occurrences of ∗. Given s and a semi-structured text, using an XML parser we can easily construct the tree structure and decompose the tree into the set P of paths. Each path contains at most one text node, at its tail. The semantics of the filtering rule s for P is defined as follows. For each p ∈ P, s matches p if α1, . . . , αk is a subsequence of the sequence of tag names of p and the tail of p is a text node such that β matches the value of the node. Such a filtering rule can be considered as a simple decision tree for extracting texts from paths in HTML trees. Each αi represents a test on a node. If the test does not fail, we continue with the next test αi+1. Finally, the value of the text node is extracted according to the pattern β. In other words, this rule is a pair (α, β) of a tag pattern and an association pattern, where a tag pattern is a sequence α = (α1, . . . , αk) of tag names such that these tags frequently appear in positive examples together with the association pattern. Such a filtering rule is called an association path in this paper. We can use this notion as a measure to decide the importance of a keyword in a text. We show the efficiency of association paths by experiments. This paper is organized as follows. In Section 2, we define the data model: HTML pages, HTML trees, and path expressions. In Section 3, we review the definition of the word-association pattern from [1] and formulate the mining problem of this paper, called the Association Path problem.
Next we describe a mining algorithm which finds an association path for a given large collection of HTML pages. In Section 4, we show several experimental results. In the first experiment, the set of positive examples is a collection of HTML texts containing the keyword "TSP" and the set of negative examples is that containing "NP". These keywords refer to the travelling salesman problem and NP-optimization problems in computational complexity theory, respectively. The aim is to find an association path that characterizes the notion TSP compared to NP. In this experiment, the algorithm found some interesting association paths. For the next experiment, we chose the keyword "DNA" for positive examples. Compared to the first result, the algorithm found few interesting paths. In Section 5, we conclude this study.
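To make the filtering-rule semantics from the introduction concrete, here is a minimal sketch; the function names and the regex-based handling of the wild card ∗ are our own assumptions, not the paper's implementation:

```python
import re

# A filtering rule is (tags, pattern): `tags` must be a subsequence of the
# path's tag names, and `pattern` is a word pattern in which '*' stands for
# an arbitrary substring of the text value at the path's tail.

def is_subsequence(needle, haystack):
    it = iter(haystack)
    return all(x in it for x in needle)  # `in` consumes the iterator

def word_pattern_matches(pattern, text):
    regex = ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.search(regex, text) is not None

def rule_matches(tags, pattern, path_tags, text_value):
    return is_subsequence(tags, path_tags) and word_pattern_matches(pattern, text_value)
```

For example, the rule (["p", "b"], "local * search") matches a path with tag names html, body, p, b, Text whose text value contains "local ... search".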

2 The Data Model

In this section, we introduce the data model considered in this paper. First, we fix the notation. IN denotes the set of all nonnegative integers. An alphabet Σ is a finite set of symbols. A finite sequence a1, . . . , an of elements of Σ is called a string, denoted by w = a1 · · · an for short. The empty string, of length zero, is ε. The set of all strings is denoted by Σ∗, and Σ+ = Σ∗ − {ε}. For a string w, if w = αβγ, then the strings α and γ are called a prefix and a suffix of w, respectively. For a string s, we denote by s[i], with 1 ≤ i ≤ |s|, the i-th symbol of s, where |s| is the length of s. For an HTML page, the HTML tree is an ordered node-labeled tree defined as follows. For each tree T, the set of all nodes of T is a finite subset of IN, where 0 is the root. A node is called a leaf if it has no child, and an internal node otherwise. If nodes n, m ∈ IN have the same parent, then n and m are siblings. A sequence n1, . . . , nk of nodes of T is called a path if n1 is the root and ni is the parent of ni+1 for all i = 1, . . . , k − 1. For a path p = n1, . . . , nk, the number k is called the length of p and the node nk is called the tail of p. With each node n, the pair NL(n) = (N(n), V(n)), called the node label of n, is attached, where N(n) and V(n) are strings called the node name and node value, respectively. If N(n) ∈ Σ+ and V(n) = ε, then the node n is called an element node and the string N(n) is called the tag. If N(n) = Text, for the reserved string Text, and V(n) ∈ Σ+, then n is called a text node and V(n) is called the text value. We assume that every node n ∈ IN is categorized as an element node or a text node. If a page P contains a beginning tag of the form <tag> and P contains no corresponding ending tag, then the tag is called an empty tag in P.
If a page P contains a string of the form t1 · w · t2 such that t1, t2 are either beginning or ending tags and w is a string not containing any tag, then the string w is called a text in P. An HTML file is called a page. A page P corresponds to an ordered labeled tree. For simplicity, we assume that P contains no comments, i.e., strings beginning with <!-- and ending with -->.

Definition 1. For a page P, we define the HTML tree Pt recursively as follows.

1. If P contains an empty tag of the form <tag>, then Pt has an element node n such that n is a leaf of Pt, N(n) = tag, and V(n) = ε.


2. If P contains a text w, then Pt has a text node n such that n is a leaf of Pt, N(n) = Text, and V(n) = w.
3. If P contains a string of the form <tag> s </tag> for a string s ∈ Σ∗, then Pt has the subtree n(n1, . . . , nk), where N(n) = tag, V(n) = ε, and n1, . . . , nk are the roots of the trees t1, . . . , tk obtained from s by recursively applying 1, 2, and 3 above.

Next we define functions to get the node names, node values, and HTML attributes from the nodes of the HTML trees defined above. These functions are useful for explaining the algorithms in the next section. They return the values indicated below, and return null if such values do not exist.

– Parent(n): the parent of the node n ∈ IN.
– ChildNodes(n): the sequence of all children of n.
– Name(n): the node name N(n) of n.
– Value(n): the concatenation V(n1) · · · V(nk) for the leaves n1, . . . , nk of the subtree rooted at n, in left-to-right order.

Recall that V(n) is non-empty only if n is a text node. Thus, Value(n) is equal to the concatenation of the values of all text nodes below n. Let Pt be an HTML tree for a page P and let N = {0, . . . , n} be the set of nodes in Pt. For nodes i, j ∈ N, if there is a sequence pi,j = i1, . . . , ik of nodes in N such that i1 = i, ik = j, and iℓ = Parent(iℓ+1) for all 1 ≤ ℓ ≤ k − 1, then pi,j is called the path from i to j. If i is the root, then pi,j is denoted by pj for short. For each path p = i1, . . . , ik of Pt, we also define the following notations.

– Name(p): the sequence Name(i1), . . . , Name(ik).
– Value(p): V(ik).

Definition 2. Let Pt be an HTML tree over the set N of nodes. Let p = i1, . . . , in be a path of Pt and let Name_t = {Name(n) | n ∈ N}. A sequence α = name1, . . . , namem (namei ∈ Name_t) is called a path expression over Name_t. We say that α matches p if there exists a subsequence j1, . . . , jm of p such that Name(jℓ) = nameℓ for all 1 ≤ ℓ ≤ m.

In the next section, we define a measure of the matching of path expressions with the paths of HTML trees. We also define the problem of finding a path expression that maximizes this measure.
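The decomposition of a tree into pairs (Name(p), Value(p)), one per root-to-leaf path, can be sketched as follows; the tuple encoding of trees is our own minimal choice, not the paper's data structure:

```python
# Each node is a tuple (name, value, children); `paths` yields, for every
# root-to-leaf path p, the pair (Name(p), Value(p)), where Value(p) is the
# value of the tail node (non-empty only for text nodes).

def paths(node, prefix=()):
    name, value, children = node
    here = prefix + (name,)
    if not children:          # leaf: end of a root-to-leaf path
        yield here, value
    for child in children:
        yield from paths(child, here)

tree = ("html", "", [
    ("body", "", [
        ("p", "", [("Text", "local search for tsp", [])]),
        ("p", "", [("b", "", [("Text", "euclidean tsp", [])])]),
    ]),
])
```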

3 Mining HTML Texts

In this section we first define the problem of finding an expression, called an association pattern, for filtering semi-structured texts. The pattern is a pair of a word-association pattern and a path expression. The semantics of the patterns is defined by the matching semantics of word-association patterns and path expressions.

3.1 The Problem

A word-association pattern [1] π over Σ is a pair π = (p1, . . . , pd; k) of a finite sequence of strings in Σ∗ and a parameter k, called the proximity, which is either a nonnegative integer or infinity. A word-association pattern π matches a string s ∈ Σ∗ if there exists a sequence i1, . . . , id of integers such that every pj in π occurs in s at position ij and 0 ≤ ij+1 − ij ≤ k for all 1 ≤ j ≤ d − 1. The notion (d, k)-pattern refers to a d-word k-proximity word-association pattern (p1, . . . , pd; k). Let S = {s1, . . . , sm} be a finite set of strings in Σ∗ and let ψ : S → {0, 1} be a labeling function. Then, for a string s ∈ S, we say that a word-association pattern π agrees with ψ on s if π matches s iff ψ(s) = 1. Given (Σ, S, ψ, d, k) of an alphabet Σ, a finite set S ⊂ Σ∗ of strings, a labeling function ψ : S → {0, 1}, and positive integers d and k, the problem Max Agreement by (d, k)-Pattern [1] is to find a (d, k)-pattern π that maximizes the agreement with ψ, i.e., the number of strings in S on which π agrees with ψ.

Definition 3. An association path is an expression of the form α#π, where α is a path expression whose tail is Text, π is a word-association pattern, and # is a special symbol not occurring in α or π. Let q = α#π be an association path and p be a path in a tree. We say that q matches p if α matches p and π matches Value(p). For a finite set T of HTML trees, let Text_T = {(Name(p), Value(p)) | p is a path of some t ∈ T, Value(p) ≠ ε}. The intuitive meaning of a path p appearing in Text_T is a path of an HTML tree whose tail is a text node. Let Name_T be the set of all Name(p) and Value_T the set of all Value(p) in Text_T.

Definition 4 (Association Path). An instance is (Σ, Text_T, ψ, d, k) of an alphabet Σ, a set Text_T of pairs for a finite set T of HTML trees, a labeling function ψ : Value_T → {0, 1}, and positive integers d, k. A solution is an association path α#π.
The pattern π is a (d, k)-pattern solving the max agreement problem for the input (Σ, Value_T, ψ, d, k). The pattern α is a (d, k)-pattern solving the max agreement problem for the input (Σ, Name_T, ψ′, d, k), where ψ′ is defined by ψ′(Name(p)) = 1 iff ψ(Value(p)) = 1. The goal of the problem is to maximize the sum of the agreements with ψ and ψ′ over all association paths α#π.

3.2 The Algorithm

To find association paths, the data of HTML texts are transformed to path expressions as follows. Given a large set S of HTML texts, it is divided into two disjoint sets S1 and S2 by a labeling function. The labeling function is given as a keyword or phrase by a user, i.e., any text in S is labeled by 1 if it contains the keyword and by 0 otherwise. Next, all texts in S1 and S2 are parsed into HTML trees; let Pos be the set of all paths from S1 and Neg the set of all paths from S2. Fig. 1 shows the process of our algorithm briefly.

[Figure: a user keyword splits the HTML collection into positive and negative texts, which are parsed into trees (Tree_p, Tree_n); mining then yields an association path α # π, where α is a pattern for tags and π is a pattern for texts.]

Fig. 1. The process of the mining algorithm.

Algorithm Path-Find(Σ, Text, ψ, d, k)
/* Input: a set P of HTML pages over Σ, a labeling function ψ, nonnegative integers d, k */
/* Output: a solution of Association Path for the input */

1. Let P1 be the set of all pages in P labeled by 1 and let P2 = P − P1. Parse P1 and P2 into sets T1 and T2 of HTML trees; compute the set Pos of all paths of trees in T1 and the set Neg of all paths of trees in T2.
2. Let Pos = {pi | 1 ≤ i ≤ m} and Neg = {qj | 1 ≤ j ≤ n} (m, n ≥ 0). Compute the sets Name_Pos = {Name(p) | p ∈ Pos}, Value_Pos = {Value(p) | p ∈ Pos}, Name_Neg = {Name(q) | q ∈ Neg}, and Value_Neg = {Value(q) | q ∈ Neg}.
3. Find a (d, k)-pattern π for the max agreement problem over Value_Pos and Value_Neg, and find a (d, k)-pattern α for the max agreement problem over Name_Pos and Name_Neg.
4. Output the pattern α#π which maximizes the sum of the agreements of α and π.

We estimate the running time of Path-Find. This algorithm finds an association path only for the paths whose tails are text nodes, i.e., paths of the form p = n1, . . . , nk where each ni (1 ≤ i ≤ k − 1) is an element node and nk is a text node. Thus, for such paths p, we regard the mining problem as the problem of finding two phrases, α from the strings Name(p) and β from the strings Value(p), for constant parameters d (the number of phrases) and k (the proximity of phrases). If the maximum number of phrases in a pattern is bounded by a constant d, then the max agreement problem for (d, k)-patterns is solvable by the Enumerate-Scan algorithm [19], a modification of a naive generate-and-test algorithm, in O(n^(d+1)) time and O(n^d) scans, although this is still too slow for real-world problems. Adopting the framework of optimized pattern discovery, we have developed an efficient algorithm, called Split-Merge [1], that finds all the optimal patterns in the class of (d, k)-patterns for various statistical measures, including classification error and information entropy. The algorithm quickly searches the hypothesis space using dynamic reconstruction of a content index, namely a suffix array, combined with several techniques from computational geometry and string algorithms. We showed that the Split-Merge algorithm runs in almost linear time on average, more precisely in O(k^(d−1) N (log N)^(d+1)) time using O(k^(d−1) N) space for nearly random texts of size N [1]. We also showed that the problem of finding one of the best phrase patterns with arbitrarily many strings is MAX SNP-hard [1]. Thus, there is no efficient approximation algorithm with arbitrarily small error for the problem when the number d of phrases is unbounded.
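For illustration only, the agreement maximization in steps 2–4 can be sketched as a naive generate-and-test over candidate words; the real system uses the far faster Split-Merge algorithm [1], and this simplification fixes d = 1 and ignores the proximity k:

```python
# Naive sketch of max agreement (our simplification, not Split-Merge):
# a pattern agrees with the labeling on a string when it matches the
# string iff the label is 1; we pick the single word maximizing agreement.

def agreement(pattern, labeled_strings):
    return sum((pattern in s) == (label == 1) for s, label in labeled_strings)

def best_pattern(labeled_strings):
    candidates = {w for s, label in labeled_strings if label == 1 for w in s.split()}
    return max(candidates, key=lambda p: agreement(p, labeled_strings))

# Hypothetical toy data in the spirit of the TSP-vs-NP experiment below.
data = [
    ("local search for tsp", 1),
    ("euclidean tsp approximation", 1),
    ("np hardness of optimization", 0),
    ("np complete problems", 0),
]
```

On this toy data the pattern "tsp" agrees with the labeling on all four strings, so it is the best single-word pattern.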

4 Experimental Results

In this section, we show the experimental results. The text data is a collection from ResearchIndex (http://citeseer.nj.nec.com/), a scientific literature digital library. The positive data is the set Pos of HTML pages containing the keyword "TSP" and the negative data is the set Neg of HTML pages containing the keyword "NP". The set Neg covers many topics in computational complexity, while Pos is concerned with one of the most popular NP-hard problems, the Travelling Salesman Problem, and is not properly contained in Neg. The aim of this experiment is to find an association path which characterizes TSP with respect to NP. In this experiment, on a collection of 8.4 MB, the algorithm Path-Find found the best 600 patterns under the entropy measure in seconds for d = 2 and in three minutes for d = 3, with k = 10 words, using 200 megabytes of main memory on an IBM PC (Pentium III 600 MHz, g++ on Windows 98). The result obtained by our algorithm is shown in Fig. 2.

[Table content garbled in extraction: the ranked association paths (Ranks 5, 38, 90, 171, 213, 276, 394, 455, and 552) pair tag sequences such as font, p, body, html with phrases such as 'local search' and 'euclidean tsp'.]

Fig. 2. The association paths found in the experiments, which characterize the Web pages on the TSP problem against those on NP-optimization problems. The parameters are (2, 10) for (d, k), where α is a path and π is a phrase.

Our system found several interesting association paths which may be difficult for human users to find by inspection. Fig. 2 contains some association paths whose tag sequences include the font tag. This means that phrases such as 'local search' and 'euclidean tsp' are emphasized by this tag, so we consider these phrases to be interesting. In fact, these phrases are remarkable for the following reasons. The phrase 'local search' in Rank 171 indicates the local search heuristics for TSP, such as [14]. In this path, the font tag (font style and size) on the left-hand side indicates the importance of the phrase on the right-hand side. The phrase 'tsp and other' in Rank 276 is a substring of the title of the outstanding 1996 paper by Arora [2] on the approximation algorithm for Euclidean TSP. The euclidean graph is an important geometric structure for constructing an approximation algorithm for TSP. These keywords appear in Ranks 394, 455, and 552, respectively. Next, we examine the same text data with the association pattern algorithm [1] and compare the resulting phrases with our result. The list of 400 phrases found by the association pattern algorithm is partially presented in Fig. 1. As shown in that list, almost all phrases are trivial except 'local search'.

[Table content garbled in extraction: ranks 0–9 listed association paths such as 'solutions for tsp'.]

Fig. 5. The other result of the experiments, for DNA against the NP-optimization problem. The parameters are also (2, 10) for (d, k), where α is a path and π is a phrase.

5 Conclusion

We introduced a new method for mining HTML texts and presented an algorithm to find an association path, which is a pair of association patterns over tag sequences and text sequences. By experiments on HTML data of scientific literature, the algorithm found interesting association paths from positive and negative examples on the travelling salesman problem and other NP-optimization problems.

Acknowledgments

The authors are grateful to the anonymous referees for their careful reading of the draft and useful comments. Shinichi Shimozono thanks Miho Matsui for the suggestive discussions and observations obtained while supervising her graduation thesis.

References

1. Shimozono, S., Arimura, H., and Arikawa, S. Efficient discovery of optimal word-association patterns in large text databases. New Generation Computing 18:49–60, 2000.
2. Arora, S. Polynomial-time approximation schemes for Euclidean TSP and other geometric problems. Proc. 37th IEEE Symposium on Foundations of Computer Science, 2–12, 1996.
3. Abiteboul, S., Buneman, P., and Suciu, D. Data on the Web: From Relations to Semistructured Data and XML. Morgan Kaufmann, San Francisco, CA, 2000.
4. Angluin, D. Queries and concept learning. Machine Learning 2:319–342, 1988.
5. Buneman, P., Davidson, S., Hillebrand, G., and Suciu, D. A query language and optimization techniques for unstructured data. University of Pennsylvania, Computer and Information Science Department, Technical Report MS-CIS-96-09, 1996.
6. Cohen, W. W. and Fan, W. Learning page-independent heuristics for extracting data from Web pages. Proc. WWW-99, 1999.
7. Craven, M., DiPasquo, D., Freitag, D., McCallum, A., Mitchell, T., Nigam, K., and Slattery, S. Learning to construct knowledge bases from the World Wide Web. Artificial Intelligence 118:69–113, 2000.
8. Freitag, D. Information extraction from HTML: Application of a general machine learning approach. Proc. 15th National Conference on Artificial Intelligence, 517–523, 1998.
9. Grieser, G., Jantke, K. P., Lange, S., and Thomas, B. A unifying approach to HTML wrapper representation and learning. Proc. 3rd International Conference on Discovery Science (DS 2000), Lecture Notes in Artificial Intelligence 1967:50–64, 2000.
10. Hammer, J., Garcia-Molina, H., Cho, J., and Crespo, A. Extracting semistructured information from the Web. Proc. Workshop on Management of Semistructured Data, 18–25, 1997.
11. Hsu, C.-N. Initial results on wrapping semistructured web pages with finite-state transducers and contextual rules. Proc. 1998 Workshop on AI and Information Integration, 66–73, 1998.
12. Kamada, T. Compact HTML for small information appliances. W3C NOTE 09-Feb-1998, www.w3.org/TR/1998/NOTE-compactHTML-19980209, 1998.

388

K. Taniguchi et al.

13. Kushmerick, N. Wrapper induction: eﬃciency and expressiveness. Artiﬁcial Intelligence 118:15–68, 2000. 14. Lin, S., and Kernighan, B. W. An eﬀective heuristic algorithm for the travelling salesman problem. Operations Research 21:498-516, 1973. 15. Muslea, I., Minton, S., and Knoblock, C. A. Wrapper induction for semistructured, web-based information sources. Proc. Conference on Automated Learning and Discovery, 1998. 16. Sakamoto, H., Arimura, H., and Arikawa, S. Identiﬁcation of tree translation rules from examples. Proc. the 5th International Colloquium on Grammatical Inference, LNAI 1891:241–255, 2000. 17. Thomas, B. Anti-uniﬁcation based learning of T-Wrappers for information extraction, Proc. AAAI Workshop on Machine Learning for IE, 15–20, AAAI, 1999. 18. Valiant, L. G. A theory of the learnable. Comm. ACM 27:1134–1142, 1984. 19. Wang, J. T., Chirn, G. W., Marr, T. G., Shapiro, B., Shasha, D., and Zhang, K. Combinatorial pattern discovery for scientiﬁc data: Some preliminary results. Proc. SIGMOD’94, 115-125, 1994.

Worst-Case Analysis of Rule Discovery

Einoshin Suzuki

Electrical and Computer Engineering, Yokohama National University,
79-5 Tokiwadai, Hodogaya, Yokohama 240-8501, Japan
[email protected]

Abstract. In this paper, we perform a worst-case analysis of rule discovery. A rule is defined as a probabilistic constraint on the true assignment to the class attribute of the corresponding examples. In data mining, a rule can be considered to represent an important class of discovered patterns. We accomplish the aforementioned objective by extending a preliminary version of PAC learning, which represents a worst-case analysis for classification. Our analysis consists of two cases: the case in which we try to avoid finding a bad rule, and the case in which we try to avoid overlooking a good rule. We also discuss related work on PAC learning, multiple comparison, analysis of association rule discovery, and simultaneous reliability evaluation of a discovered rule.

1 Introduction

Data mining [2] can be defined as the extraction of useful knowledge from massive data, and is gaining increasing attention due to the advancement of various information technologies. Data mining can be regarded as advanced data analysis, and a typical process of analysis consists of several steps [2]. Pattern extraction represents an important step in such a process. A rule is defined as a probabilistic constraint inherent in a data set, and is widely recognized as one of the most important kinds of patterns in data mining. Although rule discovery has been extensively studied in data mining, its theoretical analyses are surprisingly rare. Several exceptions include Agrawal et al.'s analysis of association rule discovery [1] and our analysis of a discovered rule based on simultaneous reliability evaluation [10]. However, these studies ignore the total number of rules that can be discovered from a data set. This means that these studies fail to relate the size of a discovery problem to the number of examples needed for successful discovery, and suggests that a more solid foundation of data mining should be established. As a first step toward this objective, we extend a preliminary version of PAC learning [7], which represents a worst-case analysis of classification. Our analysis consists of two cases: the case in which we try to avoid finding a bad rule, and the case in which we try to avoid overlooking a good rule. We also discuss related work including PAC learning [5,7], Jensen and Cohen's multiple comparison [4], Agrawal et al.'s analysis of association rule discovery [1], and our previous analysis of a discovered rule based on simultaneous reliability evaluation

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 365–377, 2001. © Springer-Verlag Berlin Heidelberg 2001


[10]. In the rest of this paper, technical terms and symbols of referenced papers are uniﬁed to those of this paper.

2 Rule Discovery Problem

2.1 Rule

Let a data set contain n examples each of which is expressed by b discrete attributes and a class attribute. Typically, rule discovery assumes no specific class attribute, unlike classification. However, for the sake of formalization, we consider a rule which predicts a specific class attribute to be true. Let an assignment A = v of a value v to an attribute A be an atom. In this paper, we regard a given data set as a result of sampling with replacement from a true data set. We call the probability of examples each of which satisfies a propositional logical formula f the true probability Pr(f) of f. Similarly, an estimated probability which is obtained from a given data set for Pr(f) is denoted by Pr^(f) (read: Pr-hat). Note that Pr^(f) can be calculated by the Laplace estimate or simply by the ratio of examples which satisfy f in the data set. We employ the latter method in this paper. A rule r is represented as follows with a premise y, which represents a propositional formula of atoms, and a conclusion x, which represents a true assignment to the class attribute.

r : y → x

An intuitive interpretation of r is that many examples satisfy y and those examples are likely to satisfy x with high probability. We define Pr(y) and Pr(x|y) as the generality and the accuracy of r respectively. Similarly, we call Pr^(y) and Pr^(x|y) the estimated generality and the estimated accuracy of r respectively.

2.2 Related Classes of Rules

This section presents several classes of rules which are related to ours. A probabilistic if-then rule [9] is defined as follows, where each yi represents a single atom.

y1 ∧ y2 ∧ · · · ∧ yK → x

In [10], a probabilistic if-then rule is called a conjunction rule, and this paper follows this paraphrasing. A conjunction rule can be regarded as a special case of our rule: the premise is restricted to either a single atom or a conjunction of atoms. Since a premise of a conjunction rule is represented by a combination of atoms, the number |R| of possible conjunction rules is typically huge. The following gives |R|, where a data set contains b attributes and each of these attributes can take one of a values.

|R| = (a + 1)^b − 1

(1)


This formula can be explained by the fact that each of the b attributes can either take one of a values or be excluded from the premise. A typical value for |R| is huge: for example, |R| = 3,486,784,400 for a data set of 20 binary attributes. A realistic measure would be to restrict the number of atoms allowed in the premise to at most K. The possible number |R_K|, in this case, is given as follows, where C(b, i) denotes the binomial coefficient.

|R_K| = Σ_{i=1}^{K} a^i C(b, i)   (2)

Note that (1) can also be derived by setting K = b in (2) and using the binomial theorem. In association rule discovery [1], a data set is restricted to a transactional data set which consists of binary attributes. A true assignment to a binary attribute is called an item. Let an itemset be either a single item or a conjunction of items. An association rule, in its original form, consists of a premise and a conclusion each of which is represented by an itemset. In the framework of Section 2.1, an association rule can be regarded as a special case of our rule: the premise is restricted to either a single atom or a conjunction of atoms, and only the value "true" is allowed. The cases of |R| and |R_K| for association rule discovery are obtained by setting a = 1 in (1) and (2).

2.3 Discovery Problem

In this paper, the objective of a user is to obtain, with high probability 1 − δ, a rule whose generality and accuracy are no smaller than 1 − ζ and 1 − ε respectively. Typically, multiple rules are obtained in rule discovery, but we restrict ourselves to single-rule discovery for the sake of analysis.

Objective: Find y → x which satisfies Pr[Pr(y) ≥ 1 − ζ, Pr(x|y) ≥ 1 − ε] ≥ 1 − δ, where ζ, ε, δ > 0

(3)

A discovery algorithm to be analyzed obtains a rule whose estimated generality and accuracy are no smaller than user-given thresholds θS and θF respectively. As stated above, since a given data set is a result of sampling from a true data set, the user employs thresholds θS = 1 − ζ, θF = 1 − ε in applying the algorithm.

Algorithm: Find y → x which satisfies Pr^(y) ≥ θS, Pr^(x|y) ≥ θF

(4)

An interesting problem here is to bound the required number m of examples to accomplish (3) under (4). This problem can be named as PAGA (Probably Approximately General and Accurate) discovery after the well-known PAC (Probably Approximately Correct) learning [5,7], and can be regarded as a foundation of data mining.
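The paper analyzes the algorithm (4) abstractly and never commits to an implementation; purely for concreteness, a naive brute-force instantiation of (4) for conjunction rules over a toy data set might look as follows (all names are hypothetical):

```python
from itertools import combinations, product

def discover(data, attrs, cls, theta_s, theta_f, max_k=2):
    """Naively instantiate (4): return every conjunction rule y -> (cls is True)
    whose estimated generality Pr^(y) >= theta_s and estimated accuracy
    Pr^(cls|y) >= theta_f on the given examples (dicts attribute -> value)."""
    n = len(data)
    rules = []
    for k in range(1, max_k + 1):
        for names in combinations(attrs, k):
            values = [sorted({ex[a] for ex in data}) for a in names]
            for vs in product(*values):
                premise = dict(zip(names, vs))
                covered = [ex for ex in data
                           if all(ex[a] == v for a, v in premise.items())]
                if not covered:
                    continue
                gen = len(covered) / n          # estimated generality
                acc = sum(ex[cls] for ex in covered) / len(covered)  # est. accuracy
                if gen >= theta_s and acc >= theta_f:
                    rules.append((premise, gen, acc))
    return rules

# Toy data: A = 1 covers 6 of 10 examples, all of them with class True.
data = [{'A': 1, 'B': 0, 'c': True}] * 6 + [{'A': 0, 'B': 1, 'c': False}] * 4
found = discover(data, ['A', 'B'], 'c', theta_s=0.5, theta_f=0.9)
print(any(prem == {'A': 1} for prem, _, _ in found))  # True
```

The point of the analysis below is exactly that such an algorithm, run on a sampled data set, may return rules that are bad with respect to the true data set unless m is large enough.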

3 Case 1: Exclusion of a Bad Rule

In this section, we derive a lower bound of the number of examples for the problem defined in the previous section. The assumed condition is to avoid finding a bad rule. This condition can be considered important in several domains where reliability represents a crucial concern.

3.1 Preliminaries

First we introduce preliminaries which are needed in subsequent analyses. If the domain of a random variable X is {0, 1, · · · , m} and the probability distribution of the variable is represented as follows, X is said to follow a binary distribution [3].

Pr(X = k) = B(k; m, p) = C(m, k) p^k (1 − p)^{m−k}

(5)

where p represents a constant with 0 < p < 1 and k = 0, 1, · · · , m. The Chernoff bound states that the following holds for an arbitrary constant a > p [1].

Pr(X > am) < exp[−2m(a − p)^2]

(6)
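As a sanity check (ours, not the paper's), the bound (6) can be compared against the exact binomial tail; a small Python sketch:

```python
from math import comb, exp

def binomial_tail(m: int, p: float, a: float) -> float:
    """Exact Pr(X > a*m) for X following B(k; m, p), by summing the pmf."""
    k0 = int(a * m) + 1  # smallest k with k > a*m when a*m is integral
    return sum(comb(m, k) * p ** k * (1 - p) ** (m - k) for k in range(k0, m + 1))

def chernoff(m: int, p: float, a: float) -> float:
    """Right-hand side of (6): exp[-2m(a - p)^2]."""
    return exp(-2 * m * (a - p) ** 2)

m, p, a = 200, 0.1, 0.2
assert binomial_tail(m, p, a) < chernoff(m, p, a)  # the bound holds (loosely)
```

The bound is far from tight, which is part of why the resulting sample-size bounds below are worst-case figures.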

3.2 Theoretical Analysis

From (3), a bad rule rb : y → x satisfies

Pr(y) < 1 − ζ or Pr(x|y) < 1 − ε.

(7)

Since we assume, in this section, that we avoid finding a bad rule, the employed thresholds for generality and accuracy are relatively large. This assumption, together with (3) and (4), necessitates the following.

θS > 1 − ζ and θF > 1 − ε   (8)

From (7) and (8),

θS > Pr(y) or θF > Pr(x|y).   (9)

Since rb : y → x is discovered,

Pr^(y) ≥ θS and Pr^(x|y) ≥ θF.

(10)

Let the number of examples in the given data set be m. If and only if y and xy are satisfied by at least mθS and mPr^(y)θF examples respectively in the data set, rb happens to be discovered. Since each of the numbers of examples which satisfy y and xy follows a binary distribution,

Pr(rb discovered)
≤ MAX[ Σ_{k=mθS}^{m} B(k; m, Pr(y)), Σ_{k=mPr^(y)θF}^{mPr^(y)} B(k; mPr^(y), Pr(x|y)) ]   (11)
< MAX[ exp[−2m(θS − Pr(y))^2], exp[−2mPr^(y)(θF − Pr(x|y))^2] ]   (12)
< MAX[ exp[−2m(θS − 1 + ζ)^2], exp[−2mθS(θF − 1 + ε)^2] ].   (13)

Note that, in (11), we consider separately the case in which a bad rule rb1 in terms of generality is discovered and the case in which a bad rule rb2 in terms of accuracy is discovered. The first and second terms correspond to the left inequality and the right inequality of (7) respectively. Since Pr(rb1 discovered) and Pr(rb2 discovered) are unknown, we bound Pr(rb discovered) by MAX[Pr(rb1 discovered), Pr(rb2 discovered)]. In (12), the Chernoff bound (6) is employed from (9). Finally, in (13), we employ (7) and the left inequality of (10).

Let the set of all possible rules and the set of all bad rules be R and Rb respectively, and let the cardinality of a set S be |S|. The probability of discovering a bad rule satisfies the following inequalities.

Pr(Rb contains a discovered rule)
< |Rb| MAX[ exp[−2m(θS − 1 + ζ)^2], exp[−2mθS(θF − 1 + ε)^2] ]   (14)
≤ |R| MAX[ exp[−2m(θS − 1 + ζ)^2], exp[−2mθS(θF − 1 + ε)^2] ]   (15)

Note that in (14) we allow counting multiple times the cases in which several bad rules satisfy the discovery condition, and (15) uses |R| ≥ |Rb|. Our objective (3) requires the following with respect to a sufficiently small δ.

|R| MAX[ exp[−2m(θS − 1 + ζ)^2], exp[−2mθS(θF − 1 + ε)^2] ] ≤ δ   (16)

We obtain a lower bound of the number m of examples for discovery in which finding a bad rule is avoided with high probability.

m ≥ ln(|R|/δ) / ( 2 MIN[ (θS − 1 + ζ)^2, θS(θF − 1 + ε)^2 ] )   (17)

The above inequality quantitatively describes the influence of each parameter on the minimum number of examples. As we have seen in Section 2.2, |R| is typically large and is thus important even if its influence is tempered by a logarithmic function. The second most important factors are θS − 1 + ζ and θF − 1 + ε. Since they influence the lower bound of m by the inverse of their squares, they can be problematic when they are small. Since each of these terms represents the difference between a threshold and the user-expected value, θS − 1 + ζ and θF − 1 + ε can be named the margin of generality and the margin of accuracy respectively. In a typical setting of rule discovery, we can assume θS = 0.1, and we assume that (θS − 1 + ζ) = 10^{-1} or 10^{-2}. We also assume that (θF − 1 + ε) = 10^{-1} or 10^{-2}. Under these assumptions, the denominator is either 2 × 10^{-3} or 2 × 10^{-5}. Finally, δ can be considered a moderately important factor in a typical situation δ = 0.01 – 0.05, since it appears only inside the logarithm, as the denominator of |R|.
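Plugging these typical values into (17) gives concrete sample sizes; a quick sketch (our helper, with |R| taken from the 20-binary-attribute example):

```python
from math import log

def lower_bound_m(num_rules: int, delta: float,
                  theta_s: float, margin_g: float, margin_a: float) -> float:
    """(17): m >= ln(|R| / delta) / (2 * MIN[margin_g^2, theta_s * margin_a^2]),
    where margin_g = theta_s - 1 + zeta and margin_a = theta_f - 1 + eps."""
    return log(num_rules / delta) / (2 * min(margin_g ** 2,
                                             theta_s * margin_a ** 2))

# theta_s = 0.1 with both margins 10^-1: denominator 2 * min(1e-2, 1e-3) = 2e-3.
m = lower_bound_m(num_rules=3_486_784_400, delta=0.05,
                  theta_s=0.1, margin_g=0.1, margin_a=0.1)
print(round(m))  # on the order of 1e4 examples even for this modest problem
```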

3.3 Application to Conjunction Rule Discovery

From (1) and (17), the lower bound of the number m of examples is given as follows if we restrict the discovered rule to a conjunction rule.

m ≥ ( ln[(a + 1)^b − 1] + ln(1/δ) ) / ( 2 MIN[ (θS − 1 + ζ)^2, θS(θF − 1 + ε)^2 ] )

(18)

Note that setting a = 1 gives the case of association rule discovery. Firstly, ln(1/δ) can typically be ignored when δ = 0.01 – 0.05, since ln[(a + 1)^b − 1] ≫ ln(1/δ); thus the lower bound of m is approximately proportional to b. Secondly, since the number a of possible values for an attribute only affects the right-hand side through a logarithmic function, a is typically not as important as b and the margins of generality and accuracy. We show, in Figure 1, a plot of the lower bound against MIN[(θS − 1 + ζ)^2, θS(θF − 1 + ε)^2] for b = 10^2, 10^3, 10^4, where we set a = 2 and δ = 0.05. Note that both the x axis and the y axis use a logarithmic scale.

[Figure 1: log-log plot of the lower bound for m against MIN, for b = 100, 1,000, and 10,000.]

Fig. 1. Minimum number of examples needed for conjunction rule discovery without finding a bad rule. In the figure, MIN represents MIN[(θS − 1 + ζ)^2, θS(θF − 1 + ε)^2].


We discuss the lower bound of the number of examples for a typical setting with Figure 1. The examples described in Section 3.2 give MIN[(θS − 1 + ζ)^2, θS(θF − 1 + ε)^2] = 10^{-3} or 10^{-5}. For these cases, the lower bound is approximately 5.6 × 10^4 – 5.6 × 10^6 or 5.6 × 10^6 – 5.6 × 10^8 for b = 10^2 – 10^4. These results indicate that the required number of examples for successful discovery can be prohibitively large for small margins. Note that large margins represent large thresholds, and no rules are usually discovered for large thresholds. A realistic and effective measure for this problem would be to adjust thresholds according to a discovery process such as [11]. It should anyway be noted that our analyses in this paper correspond to the worst case, and the required number of examples in a real discovery problem can be much smaller than those mentioned above.

From (2) and (17), the lower bound of the number m of examples is given as follows if we restrict the discovered rule to a conjunction rule with at most K atoms in its premise.

m ≥ ( ln[ Σ_{i=1}^{K} a^i C(b, i) ] + ln(1/δ) ) / ( 2 MIN[ (θS − 1 + ζ)^2, θS(θF − 1 + ε)^2 ] )   (19)

Note that setting a = 1 gives the case of association rule discovery. Similarly as in Figure 1, we show, in Figure 2, two plots of the lower bound for a = 2 and δ = 0.05. The left plot represents a case in which we varied b = 10^2, 10^3, 10^4 under K = 2, and in the right plot we varied K = 1, 2, 3, 4, 100 (= b) under b = 10^2.

[Figure 2: two log-log plots of the lower bound for m against MIN; left: b = 100, 1,000, 10,000 with K = 2; right: K = 1, 2, 3, 4, 100 with b = 100.]

Fig. 2. Minimum number of examples needed for conjunction rule discovery without finding a bad rule, where at most K atoms are allowed in the premise. The left and right plots assume K = 2 and b = 100 respectively.
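The magnitudes quoted for Figures 1 and 2 follow directly from (18) and (19) and can be reproduced in a few lines (our helper functions, not from the paper):

```python
from math import comb, log

def bound_all(a: int, b: int, delta: float, min_term: float) -> float:
    """(18): (ln[(a+1)^b - 1] + ln(1/delta)) / (2 * MIN)."""
    return (log((a + 1) ** b - 1) + log(1 / delta)) / (2 * min_term)

def bound_at_most_k(a: int, b: int, K: int, delta: float, min_term: float) -> float:
    """(19): same, with |R_K| = sum_{i=1}^{K} a^i * C(b, i) in place of |R|."""
    r_k = sum(a ** i * comb(b, i) for i in range(1, K + 1))
    return (log(r_k) + log(1 / delta)) / (2 * min_term)

print(bound_all(2, 100, 0.05, 1e-3))           # ~5.6e4, as quoted in the text
print(bound_all(2, 10_000, 0.05, 1e-5))        # ~5.5e8
print(bound_at_most_k(2, 100, 2, 0.05, 1e-3))  # roughly an order of magnitude less
```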

From the left plot, we see that the influence of b is relatively small for K = 2. On the other hand, the right plot of Figure 2 shows that, for K ≤ 4, the minimum required number of examples is smaller by approximately an order of magnitude than the case of considering all conjunction rules (K = b = 100). It is widely accepted that a rule with a short premise exhibits high readability, and the above results suggest that such rules are also attractive in terms of the required number of examples.

4 Case 2: Inclusion of a Good Rule

In this section, we derive another lower bound of the number of examples for the problem defined in Section 2.3. The assumed condition is to avoid overlooking a good rule. This condition can be considered important in several domains where not missing a possible discovery is crucial. From (3), a good rule rg : y → x satisfies

Pr(y) ≥ 1 − ζ and Pr(x|y) ≥ 1 − ε.

(20)

Since we assume, in this section, that we avoid overlooking a good rule, the employed thresholds for generality and accuracy are relatively small. This assumption, together with (3) and (4), necessitates the following.

θS < 1 − ζ and θF < 1 − ε   (21)

From (20) and (21),

θS < Pr(y) and θF < Pr(x|y).   (22)

Let the number of examples in the given data set be m. If and only if y is satisfied by at most mθS − 1 examples or xy is satisfied by at most mPr^(y)θF − 1 examples in the data set, rg happens to be undiscovered. Since each of the numbers of examples which satisfy y and xy follows a binary distribution,

Pr(rg undiscovered)
≤ Σ_{k=0}^{mθS − 1} B(k; m, Pr(y)) + Σ_{k=0}^{mPr^(y)θF − 1} B(k; mPr^(y), Pr(x|y))   (23)

[Equations (24)–(29), the remainder of Section 4, and the opening of Section 5 are garbled beyond recovery in this copy of the text.]

Next, [7] assumes that a classification algorithm returns a classifier which is consistent with all training examples. This corresponds to assuming θF = 1. To sum up, compared to our study, [7] ignores the case of learning a classifier with low generality and the case of learning a classifier which is inconsistent with the training examples. In this case, application of the Chernoff bound can be skipped, and for a bad classifier hb, we obtain Pr(hb learned) = (1 − ε)^m. In [7], a lower bound of the required number of examples m is given by the following, where H represents the set of all classifiers.

m ≥ (1/ε) ln(|H|/δ)   (30)

Note that (30) resembles (17): it only ignores generality (θS = 1 and no ζ), assumes θF = 1, and omits the square in ε^2 and the 2 in the denominator. The last omissions are due to skipping the Chernoff bound.
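The gap between (30) and the corresponding specialization of (17) can be made concrete; a small sketch (our own comparison, not from the paper):

```python
from math import log

def pac_bound(h: int, delta: float, eps: float) -> float:
    """(30): m >= (1/eps) * ln(|H| / delta), for a consistent learner."""
    return log(h / delta) / eps

def paga_special(h: int, delta: float, eps: float) -> float:
    """(17) with theta_s = 1 and theta_f = 1, ignoring the generality term:
    the denominator collapses to 2 * eps^2."""
    return log(h / delta) / (2 * eps ** 2)

# Going through the Chernoff bound costs a factor of 1/(2*eps):
eps = 0.1
print(round(paga_special(1000, 0.05, eps) / pac_bound(1000, 0.05, eps), 6))  # 5.0
```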

5.2 Jensen and Cohen's Multiple Comparison

Jensen and Cohen's multiple comparison [4] proposes a prudent view of classification. Its essential point can be stated as a probabilistic explanation: the more candidate classifiers are inspected by a learning algorithm, the smaller the accuracy exhibited by the obtained classifier. The multiple comparison provides a comprehensive unified view of several studies including overfitting [8] and oversearching [6], and [4] also proposes several realistic measures. Since this study deals with classification, as PAC learning does, it ignores generality. This corresponds to considering only the second term in (11). Since [4] considers the case of θF < 1, it provides a more realistic framework for learning than [7]. The multiple comparison differs from our study in that it directly calculates, based on a binary distribution without using the Chernoff bound, the probability for a bad classifier to satisfy at least mθF examples. Moreover, they calculate exactly the probability that no bad classifier is learned, while we, in (14), allow counting multiple times the cases in which more than one bad rule satisfies the discovery condition. Let the set of all bad classifiers be Hb; the probability in [4] is given by the following.

Pr(Hb contains a learned classifier) = 1 − [1 − Pr(hb learned)]^{|H|}   (31)
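To see how (31) compares with the over-counting in our (14)-style bound, consider |H| candidates, each bad-and-learnable with the same probability p (independence is implicit in (31)); a small sketch:

```python
def exact_prob(p: float, h: int) -> float:
    """(31): 1 - [1 - Pr(hb learned)]^|H|, which never exceeds 1."""
    return 1.0 - (1.0 - p) ** h

def union_bound(p: float, h: int) -> float:
    """(14)-style over-count: |H| * Pr(hb learned), which may exceed 1."""
    return h * p

p, h = 1e-4, 10_000
print(round(exact_prob(p, h), 3))  # 0.632
print(union_bound(p, h))           # 1.0 -- looser, but easy to solve for m
```

The looseness of the union-style count is exactly the price paid for the analytical solvability discussed next.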

Pursuing strictness in calculation can be considered a double-edged sword. Jensen and Cohen give no analytical solution for the number of examples required for successful learning. We attribute this to the fact that solving (31) for m is relatively difficult. We have employed several approximations in our theoretical analyses, and these were necessary to bound m analytically. Another difference between [4] and our analyses is rather philosophical: while they are pessimistic about classification, we are realistic about rule discovery. The study in [4] emphasizes that |H| is huge, and demonstrates various examples in which it is difficult to avoid learning a bad classifier. We also recognize that |R| is huge, but we bound the required number of examples m analytically with respect to |R|.

5.3 Theoretical Analysis of Association Rule Discovery

Analyses of association rule discovery [1] are threefold: a lower bound of the number of queries under the use of a database system, the expected number of itemsets each of which is satisfied by at least a required number of examples in a random data set, and the number of examples satisfied by an itemset in a sampled data set. The third analysis is highly related to our study in that both deal with the case of sampling m examples from a true data set in rule discovery. The analysis provides a specialization of the Chernoff bound (6), where X is regarded as mPr^(f) for an itemset f. It first regards the right-hand side exp[−2m(a − p)^2] as the upper bound of the probability for Pr^(f) to deviate by at least a − p from its value p (= Pr(f)) in the true data set. Next, it gives several examples of values for a − p and δ in exp[−2m(a − p)^2] = δ, and represents the corresponding values of m in a table. The discovery algorithm employed in [1] first obtains, by an algorithm called Apriori, a set of itemsets f each of which satisfies Pr^(f) ≥ θS. Then, it generates a set of association rules from this set. One of the motivations of the above analysis was to reduce the run-time of Apriori by the use of a sampled data set. Due to this motivation, [1] ignores accuracy, unlike our study. Moreover, since it considers a single association rule, the study fails to relate the size of a discovery problem to the number of examples needed for successful discovery.

5.4 Simultaneous Reliability Evaluation of a Discovered Rule

Simultaneous reliability evaluation of a discovered rule [10] also deals with the case of sampling m examples from a true data set in rule discovery, as in Section 5.3 and our study. Unlike the analysis in Section 5.3, this study considers both generality and accuracy. The objective considered in [10] is identical to ours, and is represented by (3). Let x̄ represent the negation of x. The analysis fixes m and employs neither θS nor θF. It assumes that (mPr^(xy), mPr^(x̄y)) follows a two-dimensional normal distribution, and obtains the exact condition for accomplishing the objective analytically. This is a different framework from ours: we use a discovery algorithm with fixed thresholds θS, θF in (4) and bound the number m of sampled examples. The problem dealt with in [10] can be reduced to the problem of deriving and analyzing two tangent lines of an ellipse, and applying Lagrange's multiplier method gives the following analytical solutions.

Pr^(y) [ 1 − β(δ) √( (1 − Pr^(y)) / (n Pr^(y)) ) ] ≥ 1 − ζ   (32)

Pr^(x|y) [ 1 − β(δ) √( (1 − Pr^(x, y)) / ( Pr^(x, y) {(n + β(δ)^2) Pr^(y) − β(δ)^2} ) ) ] ≥ 1 − ε   (33)

Here β(δ) represents a positive constant which defines the size of a 1 − δ confidence region, i.e., the ellipse for (mPr^(xy), mPr^(x̄y)), and can be obtained by a simple numerical integration. Note that (32) and (33) represent conditions for generality and accuracy respectively. Each of them states that the corresponding estimated probability, multiplied by a coefficient related to the size of the confidence region, is no smaller than the corresponding user-expected value (1 − ζ or 1 − ε). Since the study [10] assumes a specific distribution for the simultaneous occurrence of random variables, it does not fall into the category of worst-case analysis. Similarly to the analysis in Section 5.3, the study fails to relate the size of a discovery problem to the number of examples needed for successful discovery.

6 Conclusions

The main contribution of this paper is threefold. 1) We formalized a worst-case analysis of rule discovery. The proposed framework employs thresholds θS, θF for generality and accuracy which are different from the user-expected values 1 − ζ, 1 − ε respectively. We considered the case in which we try to avoid finding a bad rule, and the case in which we try to avoid overlooking a good rule. 2) We derived a lower bound of the number m of required examples. By using probabilistic formalization and appropriate approximations, two lower bounds are obtained for the aforementioned two cases. Quantitative analysis of one of the lower bounds revealed that the total number |R| of rules, the margin θS − 1 + ζ for generality, and the margin θF − 1 + ε for accuracy are important. 3) We analyzed one of the lower bounds for a set of specific problems of conjunction rule discovery. Various useful conclusions are obtained by inspecting the lower bound for a set of typical settings.

The contribution of 1) means that this paper has provided, in rule discovery, a framework which corresponds to PAC learning. This framework can be named PAGA (Probably Approximately General and Accurate) discovery. PAGA discovery can be regarded as promising as a theoretical foundation of active mining, which requests new examples in a discovery process. The contributions of 2) and 3) suggest various useful policies in applying rule discovery algorithms in practice. Such policies include sampling/extension of a data set and modification of the class of discovered rules. We can safely conclude that our comprehension of rule discovery has deepened with these contributions and the discussions in Section 5. Ongoing work focuses on analyses of more realistic algorithms, especially an algorithm which discovers multiple rules with various conclusions.

Acknowledgement. We are grateful to Setsuo Arikawa for enabling us to initiate this study by suggesting that we pursue the relationship between one of our previous studies and PAC learning. This work was partially supported by the grant-in-aid for scientific research on priority area "Active Mining" from the Japanese Ministry of Education, Culture, Sports, Science and Technology.

References

1. R. Agrawal, H. Mannila, R. Srikant, H. Toivonen, and A. I. Verkamo: Fast Discovery of Association Rules, Advances in Knowledge Discovery and Data Mining, pp. 307–328, AAAI/MIT Press, Menlo Park, Calif. (1996).
2. U. M. Fayyad, G. Piatetsky-Shapiro, and P. Smyth: "From Data Mining to Knowledge Discovery: An Overview", Advances in Knowledge Discovery and Data Mining, AAAI/MIT Press, pp. 1–34, Menlo Park, Calif. (1996).
3. W. Feller: An Introduction to Probability Theory and Its Applications, John Wiley & Sons, New York (1957).
4. D. D. Jensen and P. R. Cohen: "Multiple Comparisons in Induction Algorithms", Machine Learning, Vol. 38, No. 3, pp. 309–338 (2000).
5. M. J. Kearns and U. V. Vazirani: An Introduction to Computational Learning Theory, MIT Press, Cambridge, Mass. (1994).
6. J. R. Quinlan and R. Cameron-Jones: "Oversearching and Layered Search in Empirical Learning", Proc. Fourteenth Int'l Joint Conf. on Artificial Intelligence (IJCAI), pp. 1019–1024 (1995).
7. S. Russell and P. Norvig: Artificial Intelligence: A Modern Approach, pp. 552–558, Prentice Hall, Upper Saddle River, N. J. (1995).
8. C. Schaffer: "Overfitting Avoidance as Bias", Machine Learning, Vol. 10, No. 2, pp. 153–178 (1993).
9. P. Smyth and R. M. Goodman: "An Information Theoretic Approach to Rule Induction from Databases", IEEE Trans. Knowledge and Data Eng., Vol. 4, No. 4, pp. 301–316 (1992).
10. E. Suzuki: "Simultaneous Reliability Evaluation of Generality and Accuracy for Rule Discovery in Databases", Proc. Fourth Int'l Conf. on Knowledge Discovery and Data Mining (KDD), pp. 339–343 (1998).
11. E. Suzuki: "Scheduled Discovery of Exception Rules", Discovery Science, LNAI 1721 (DS), pp. 184–195, Springer-Verlag (1999).

An Efficient Derivation for Elementary Formal Systems Based on Partial Unification

Noriko Sugimoto, Hiroki Ishizaka, and Takeshi Shinohara

Department of Artificial Intelligence, Kyushu Institute of Technology,
Kawazu 680-4, Iizuka 820-8502, Japan
{sugimoto, ishizaka, shino}@ai.kyutech.ac.jp

Abstract. An EFS is a kind of logic program for expressing various formal languages. We propose an efficient derivation for EFS's called an S-derivation, where all possible unifiers are evaluated at one step of the derivation. In the S-derivation, each unifier is partially applied to each goal clause by assigning the variables whose values are uniquely determined from the set of all possible unifiers. This helps reduce the amount of backtracking, and thus the S-derivation works efficiently. In this paper, the S-derivation is shown to be complete for the class of regular EFS's. We implement an EFS interpreter based on the S-derivation in the Prolog programming language, and compare its parsing time with that of the DCG's provided by the Prolog interpreter. Our experimental results verify the efficiency of the S-derivation for accepting context-free languages.

1 Introduction

In the area of machine learning and discovery science, it is an important issue to develop efficient systems dealing with formal languages under a theoretical background. An elementary formal system (EFS, for short) is a kind of logic program over the domain of strings [3,11,15]. EFS's are well known to be flexible enough to represent not only the classes of languages in the Chomsky hierarchy [3] but also binary relations over strings [12,13]. It has been shown that the EFS is suitable for discussing learnability in the framework of inductive inference and machine learning of languages [2,3,9,10]. Mukouchi and Arikawa [8] developed a theoretical framework for machine discovery, where refutability of the search space is shown to be the most important factor, and one such refutably learnable class is the class of length-bounded EFS's. Theoretically, EFS's can be used as working systems, like Prolog programs, because a derivation based on the resolution principle [7] is also defined for EFS's. In EFS's, a derivation procedure is formalized as an acceptor for formal languages [3,15]. Furthermore, the derivation can be used to generate languages [14]. The purpose of this research

The research reported here is partially supported by the Telecommunication Advancement Foundation, Japan.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 350–364, 2001. c Springer-Verlag Berlin Heidelberg 2001


is to develop an efficient derivation and construct an EFS interpreter based on the derivation. Since an EFS deals with strings as its domain, unifications over strings should be computed efficiently at each step of the derivation. However, it is known that the unification problem for strings is computationally hard, and the unifier is not always uniquely determined even if it is restricted to maximally general unifiers [5,6]. On the other hand, for the first-order terms used in the Prolog programming language, the unifier is uniquely determined as the most general unifier. Therefore, in an EFS, backtracking occurs for each selection of unifiers as well as clauses. Harada et al. [4] introduced restricted EFS's called variable-separated EFS's, in which no variable occurs successively in any term. In a variable-separated EFS, the number of possible unifiers is decreased, and the derivation works efficiently. However, the size of a variable-separated EFS can be much larger than that of an equivalent non-variable-separated EFS. This causes inefficiency in parsing languages. Here, we introduce another approach to developing an efficient EFS interpreter. When strings have successive occurrences of variables, the number of unifiers becomes large, as pointed out by Harada et al. [4]. For example, the string xyz of variables and the string a1 a2 · · · an of constant symbols have O(n^2) unifiers, because, for each i (i = 1, 2, . . . , n − 2) and j (j = i + 1, i + 2, . . . , n − 1), the substitution replacing x with a1 a2 · · · ai, y with ai+1 ai+2 · · · aj, and z with aj+1 aj+2 · · · an is a unifier of them. In EFS's, since there are many selections of unifiers at each step of a derivation, it has been difficult to construct an efficient interpreter. Thus, we propose a new approach that evaluates all possible unifiers at one step of the derivation. We formalize a derivation with sets of unifiers (an S-derivation, for short).
In the S-derivation, each unifier is partially applied to each goal clause by assigning only those variables whose values are uniquely determined by the set of all possible unifiers. The S-derivation is a natural extension of the standard derivation for EFS's, because the set of unifiers can be regarded as the unique unifier in EFS's corresponding to the most general unifier in the first-order language. We show that the S-derivation is complete for restricted EFS's called regular EFS's, which define exactly the class of context-free languages. We implement an S-derivation for regular EFS's in the Prolog programming language, and verify its efficiency by comparing its running time with that of the definite clause grammars (DCG's) provided by the Prolog interpreter. In our EFS interpreter, each unifier is efficiently computed by using the Aho-Corasick pattern matching algorithm [1]. The Aho-Corasick algorithm finds all occurrences of a set of patterns in a text in time linear in the length of the text. A regular EFS is well suited to this computation of unifications, because each string in the derivation is a substring of the initially given text. Therefore, all unifiers used in a derivation can be computed with a single scan of the given text. Our experiments show that the S-derivation using the Aho-Corasick algorithm is efficient with respect to the length of the given text and the number of variables in the EFS.


N. Sugimoto, H. Ishizaka, and T. Shinohara

This paper is organized as follows. In Section 2, we give notations and definitions, including the derivation and the semantics of EFS's. In Section 3, we introduce the S-derivation and prove its completeness. In Section 4, we outline the EFS interpreter based on the S-derivation and show experimental results for typical examples of EFS's, on which the S-derivation works efficiently. Finally, we summarize the results of this research and describe some open problems.

2 Preliminaries

In this section, we give some basic definitions and notations according to [3,14,15].

2.1 Elementary Formal Systems

For a given set A, the set of all finite strings of symbols from A is denoted by A∗. The empty string is denoted by ε, and A+ denotes the set A∗ − {ε}. Let Σ, X, and Π be mutually disjoint sets. We assume that Σ is a finite set of constant symbols, X is a set of variables, and Π is a finite set of predicate symbols. Each predicate symbol is associated with a non-negative integer called its arity.

A term is an element of (Σ ∪ X)+. A term is said to be regular if every variable occurs at most once in it. An atomic formula (atom, for short) is of the form p(π1, π2, . . . , πn), where p is a predicate symbol with arity n and each πi is a term (i = 1, 2, . . . , n). A definite clause (clause, for short) is of the form A ← B1, . . . , Bn (n ≥ 0), where A, B1, . . . , Bn are atoms. The atom A and the sequence B1, . . . , Bn are called the head and the body of the clause, respectively. A goal clause (goal, for short) is of the form ← B1, . . . , Bn (n ≥ 0), and the goal with n = 0 is called the empty goal. An expression is a term, an atom, a clause, or a goal. An expression E is said to be ground if E has no variable. For an expression E and a variable x, var(E) and oc(x, E) denote the set of all variables occurring in E and the number of occurrences of x in E, respectively. An elementary formal system (EFS, for short) is a finite set of clauses.

A substitution θ is a (semi-group) homomorphism from (Σ ∪ X)+ to itself satisfying the following conditions:
1. aθ = a for each a ∈ Σ, and
2. the set {x ∈ X | xθ ≠ x}, denoted by D(θ), is finite.

For a substitution θ, if D(θ) = {x1, x2, . . . , xn} and xiθ = πi for every i (i = 1, 2, . . . , n), then θ is denoted by the set {x1/π1, x2/π2, . . . , xn/πn}. For an expression E and a substitution θ, Eθ is defined as the expression obtained by simultaneously replacing each variable x in E with xθ. Let (E1, E2) be a pair of expressions. Then a substitution θ is said to be a unifier of E1 and E2 if E1θ = E2θ.
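The homomorphism view of substitutions can be sketched as follows (a toy encoding in Python: terms are lists of symbols, variables are the strings in VARS; all names are ours, not the paper's):

```python
# Variables of the toy signature; every other symbol is a constant.
VARS = {"x", "y", "z"}

def apply_subst(term, theta):
    """Apply theta homomorphically: constants map to themselves,
    each variable maps to (the symbol list of) its image."""
    out = []
    for s in term:
        out.extend(theta[s] if s in VARS and s in theta else [s])
    return out

def is_unifier(t1, t2, theta):
    """theta unifies t1 and t2 iff t1*theta == t2*theta."""
    return apply_subst(t1, theta) == apply_subst(t2, theta)
```

For example, θ = {x/ab} unifies the pair (xc, abc): `is_unifier(["x","c"], ["a","b","c"], {"x": ["a","b"]})` holds.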
The set of all uniﬁers θ of E1 and E2 satisfying D(θ) ⊆ var(E1 ) ∪ var(E2 ) is denoted by U (E1 , E2 ). We say that E1


and E2 are unifiable if the set U(E1, E2) is not empty. An expression E1 is a variant of E2 if there exist two substitutions θ and δ such that E1θ = E2 and E2δ = E1.

2.2 The Semantics of EFS's

We give two semantics for EFS's, by provability relations and by derivations. First, we introduce the provability semantics. Let Γ be an EFS and C a clause. Then, the provability relation Γ ⊢ C is defined inductively as follows:
1. If C ∈ Γ then Γ ⊢ C.
2. If Γ ⊢ C then Γ ⊢ Cθ for any substitution θ.
3. If Γ ⊢ A ← B1, . . . , Bm and Γ ⊢ Bm ← then Γ ⊢ A ← B1, . . . , Bm−1.

A clause C is provable from Γ if Γ ⊢ C holds. The provability semantics of the EFS Γ, denoted by PS(Γ), is defined as the set of all ground atoms A such that Γ ⊢ A ←. For an EFS Γ and a unary predicate symbol p, the language defined by Γ and p, denoted by L(Γ, p), is the set of all strings w ∈ Σ+ such that p(w) ∈ PS(Γ).

The second semantics is based on a derivation for EFS's. We assume a computation rule R that selects an atom from every goal. Let Γ be an EFS, G a goal, and R a computation rule. A derivation from G is a (finite or infinite) sequence of triplets (Gi, Ci, θi) (i = 0, 1, . . .) which satisfies the following conditions:
1. Gi is a goal, θi is a substitution, Ci is a variant of a clause in Γ, and G0 = G.
2. var(Ci) ∩ var(Cj) = ∅ for every i and j (i ≠ j), and var(Ci) ∩ var(Gi) = ∅ for every i.
3. If Gi = ← A1, . . . , Ak, and Am is the atom selected by R, then Ci is of the form A ← B1, . . . , Bn such that A and Am are unifiable, θi ∈ U(A, Am), and Gi+1 is of the following form:
(← A1, . . . , Am−1, B1, . . . , Bn, Am+1, . . . , Ak)θi.

The atom Am is called the selected atom of Gi, and Gi+1 is called the resolvent of Gi and Ci by θi. A refutation is a finite derivation ending with the empty goal. The procedural semantics of an EFS Γ, denoted by RS(Γ), is defined as the set of all ground atoms A such that there exists a refutation of Γ from the goal ← A. It has been shown that PS(Γ) = RS(Γ) for every EFS Γ [15].
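The derivation procedure as an acceptor can be sketched as follows (a naive Python reconstruction for ground goals only; the EFS below, defining {aⁿbⁿ | n ≥ 1}, and all names are our own illustration, not the paper's Prolog code):

```python
VARS = {"x"}

def unify(pattern, w, theta=None):
    """Enumerate all unifiers of a term (list of constants/variables)
    with a ground string w, backtracking over variable bindings."""
    theta = theta or {}
    if not pattern:
        if not w:
            yield dict(theta)
        return
    s, rest = pattern[0], pattern[1:]
    if s not in VARS:                       # constant symbol must match literally
        if w.startswith(s):
            yield from unify(rest, w[len(s):], theta)
    elif s in theta:                        # variable already bound
        if w.startswith(theta[s]):
            yield from unify(rest, w[len(theta[s]):], theta)
    else:                                   # try every non-empty prefix
        for k in range(1, len(w) + 1):
            theta[s] = w[:k]
            yield from unify(rest, w[k:], theta)
            del theta[s]

def provable(efs, pred, w):
    """Is the ground atom pred(w) refutable?  Backtracks over both clause
    selection and unifier selection, as described in the text."""
    for head_pred, head_term, body in efs:
        if head_pred != pred:
            continue
        for theta in unify(head_term, w):
            if all(provable(efs, q, theta[v]) for q, v in body):
                return True
    return False

# p(ab) <- ;  p(axb) <- p(x)   -- defines the language {a^n b^n | n >= 1}
EFS = [("p", ["a", "b"], []),
       ("p", ["a", "x", "b"], [("p", "x")])]
```

Here `provable(EFS, "p", "aaabbb")` holds, while `provable(EFS, "p", "aabbb")` fails, so the procedure accepts exactly the language L(Γ, p).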
This implies that a string w ∈ Σ+ is in the language defined by an EFS Γ and a predicate symbol p if and only if there exists a refutation of Γ from ← p(w). Thus, the derivation procedure can be regarded as an acceptor for the language. Finally, we define the finite-failure set of an EFS. Let Γ be an EFS, and (Gi, Ci, θi) (i = 0, 1, . . . , n) be a finite derivation of Γ. The derivation is said to be finitely failed with the length n if there exists no clause in Γ such that its head and the selected atom of Gn are unifiable. Furthermore, we define FFS(Γ) as the set of all ground atoms A such that all derivations of Γ from ← A are finitely failed within some length n.

3 Extended Derivations with Sets of Unifiers

In this section, we introduce a derivation with sets of unifiers (S-derivation, for short). In the S-derivation, each unifier is partially applied to each goal clause by assigning only those variables whose values are uniquely determined by the set of all possible unifiers. Since there may be infinitely many unifiers for terms containing variables, it is difficult in general to compute the derivation from a goal containing variables. However, for restricted terms, all unifiers are computable by using maximally general unifiers, and the S-derivation works efficiently with them. Furthermore, in this section, the S-derivation is shown to be complete for accepting and generating languages defined by restricted EFS's called regular EFS's.

3.1 Maximally General Unifiers

Let θ = {x1/π1, x2/π2, . . . , xm/πm} and δ = {y1/τ1, y2/τ2, . . . , yn/τn} be substitutions. Then, we define the composition of θ and δ as follows:

θ · δ = {xi/πiδ | xi ≠ πiδ} ∪ {yi/τi | yi ∉ D(θ)}.

Let θ, δ, and γ be substitutions, and E be an expression. Then, we can prove the following equations along the same line of argument as for definite programs [7]:
1. (Eθ)δ = E(θ · δ), and
2. (θ · δ) · γ = θ · (δ · γ).

Let V be a finite set of variables, and (θ, δ) be a pair of substitutions. Then, we say that θ and δ are equivalent on V if πθ is a variant of πδ for every π ∈ (Σ ∪ V)+. The following lemma shows that the problem of determining whether or not θ and δ are equivalent on V is solvable.

Lemma 1. Let θ and δ be substitutions, and V = {x1, x2, . . . , xn} be a finite set of variables. Then, θ and δ are equivalent on V if and only if the following statements hold:
1. xθ is a variant of xδ for every x ∈ V, and
2. (x1x2···xn)θ is a variant of (x1x2···xn)δ.

Proof. We can prove this lemma by induction on the length of π ∈ (Σ ∪ V)+.

Let (E1, E2) be a pair of expressions. A maximally general unifier (mxgu, for short) of E1 and E2 is a unifier θ ∈ U(E1, E2) such that, for any δ ∈ U(E1, E2) that is not equivalent to θ on var(E1) ∪ var(E2), there is no substitution γ with θ = δ · γ. The set of all mxgu's of E1 and E2 is denoted by MXGU(E1, E2). For two terms π and τ, we define the number of mxgu's of π and τ as the number of equivalence classes of MXGU(π, τ) under equivalence on var(π) ∪ var(τ); thus, we say that MXGU(π, τ) is finite if the number of mxgu's is finite up to equivalent substitutions on var(π) ∪ var(τ). From the definition of maximally general unifiers, the following lemmas hold [5,6,14].
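The composition θ·δ can be sketched directly from its definition (a toy Python encoding, terms as symbol lists and substitutions as dicts; the names are ours):

```python
VARS = {"x", "y", "z", "y1", "y2"}

def apply_subst(term, sub):
    """Apply a substitution homomorphically to a term (list of symbols)."""
    out = []
    for s in term:
        out.extend(sub.get(s, [s]) if s in VARS else [s])
    return out

def compose(theta, delta):
    """theta . delta = {x/(pi delta) | x/pi in theta, x != pi delta}
                       union {y/tau in delta | y not in D(theta)}"""
    result = {x: apply_subst(pi, delta) for x, pi in theta.items()}
    result = {x: pi for x, pi in result.items() if pi != [x]}   # drop identities
    result.update({y: tau for y, tau in delta.items() if y not in theta})
    return result

theta = {"x": ["a", "y", "a"], "z": ["y"]}   # θ = {x/aya, z/y}
delta = {"y": ["y1", "y2"]}                  # δ = {y/y1y2}
```

Here `compose(theta, delta)` yields {x/ay1y2a, y/y1y2, z/y1y2}, i.e. the composition θ1 · δ computed in Example 2 below, and the associativity equation (θ·δ)·γ = θ·(δ·γ) can be spot-checked on these values.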


Lemma 2. Let π and τ be regular terms such that var(π) ∩ var(τ) = ∅. Then, the set MXGU(π, τ) is finite and computable.

Lemma 3. Let π and τ be terms. If π is ground, then the set MXGU(π, τ) is finite and computable, and MXGU(π, τ) = U(π, τ) holds.

Lemma 4. Let x be a variable and π be a term which does not include x. Then, MXGU(π, x) is a singleton set consisting of the substitution {x/π}.

3.2 S-Derivation

In the following argument, we assume that every substitution θ satisfies θ · θ = θ, that is, var(xθ) ∩ D(θ) = ∅ for every variable x ∈ D(θ).

Definition 1. For two substitutions θ and δ, we define θ ◦ δ as the set of all substitutions σ satisfying σ = θ · δ · γ = δ · θ · γ for some substitution γ.

Note that, for each element σ of the set θ ◦ δ, xσ is an element of the intersection of the sets of strings unifiable with xθ and with xδ. Substitutions θ and δ are said to be inconsistent if θ ◦ δ = ∅, and consistent otherwise. We define MIN(θ ◦ δ) as the minimum subset of θ ◦ δ satisfying that, for any σ ∈ θ ◦ δ, there exists σ′ ∈ MIN(θ ◦ δ) such that σ = σ′ · γ for some substitution γ. For two finite sets Θ and ∆ of substitutions, we define

1. MIN(Θ ◦ ∆) = ∪_{(θ,δ)∈Θ×∆} MIN(θ ◦ δ), and
2. INT(Θ) = ∩_{θ∈Θ} θ.

Lemma 5. Let θ and δ be substitutions. If δ is ground, then the set MIN(θ ◦ δ) is finite and computable.

Proof. Let θ and δ be the substitutions {xi/πi | i ∈ {1, 2, . . . , m}} and {yi/ti | i ∈ {1, 2, . . . , n}}, respectively. If σ ∈ θ ◦ δ then, from Definition 1, there exists a substitution γ satisfying
1. σ = θ · δ · γ = {xi/πiδγ | i = 1, 2, . . . , m} ∪ δ ∪ γ, and
2. πiδγ = tj for every xi = yj ∈ D(θ) ∩ D(δ).
Let S be the set of all possible γ satisfying the above conditions and D(γ) ⊆ var(π1δ) ∪ var(π2δ) ∪ · · · ∪ var(πmδ). Since, from Lemma 3, the set U(πiδ, tj) is finite for each xi = yj ∈ D(θ) ∩ D(δ), the set S is also finite and computable. It is clear that σ = θ · δ · γ ∈ MIN(θ ◦ δ) for each γ ∈ S, because every γ ∈ S is ground. Furthermore, we can show that, for every substitution γ′ with θ · δ · γ′ = δ · θ · γ′, there exists γ ∈ S such that γ′ = γ · γ′′ for some substitution γ′′. Thus, the set MIN(θ ◦ δ) consists of θ · δ · γ for every γ ∈ S, and it is clearly finite and computable.
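The construction in the proof of Lemma 5 can be sketched as follows (single-character variables, substitutions as dicts of strings; a toy reconstruction with Example 1's substitutions as the test case, not the paper's implementation):

```python
VARS = {"x", "y", "z"}

def subst(term, sigma):
    """Apply a ground substitution to a string term with single-char variables."""
    return "".join(sigma.get(c, c) for c in term)

def unify_ground(pattern, w, gamma=None):
    """Enumerate all gamma with pattern*gamma == w, w ground (cf. Lemma 3)."""
    gamma = gamma or {}
    if not pattern:
        if not w:
            yield dict(gamma)
        return
    c, rest = pattern[0], pattern[1:]
    if c not in VARS:                      # constant symbol
        if w.startswith(c):
            yield from unify_ground(rest, w[1:], gamma)
    elif c in gamma:                       # variable already bound
        if w.startswith(gamma[c]):
            yield from unify_ground(rest, w[len(gamma[c]):], gamma)
    else:                                  # bind to every non-empty prefix
        for k in range(1, len(w) + 1):
            gamma[c] = w[:k]
            yield from unify_ground(rest, w[k:], gamma)
            del gamma[c]

def min_compose_ground(theta, delta):
    """MIN(theta o delta) for ground delta, following the proof of Lemma 5:
    solve pi_i*delta*gamma = t_j on the shared variables, then build
    sigma = theta . delta . gamma for each solution gamma."""
    gammas = [{}]
    for v in sorted(set(theta) & set(delta)):
        gammas = [g2 for g in gammas
                  for g2 in unify_ground(subst(theta[v], delta), delta[v], g)]
    out = []
    for g in gammas:
        sigma = {x: subst(pi, {**delta, **g}) for x, pi in theta.items()}
        sigma.update(delta)
        sigma.update(g)
        out.append(sigma)
    return out
```

With θ3 = {x/aaz, y/zb} and δ = {x/aaa, y/ab} this returns the single substitution {x/aaa, y/ab, z/a} of Example 1, and for the inconsistent θ1 and θ2 of that example it returns the empty list.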


Example 1. We consider the substitutions θ1 = {x/y}, θ2 = {x/aaz, y/az}, θ3 = {x/aaz, y/zb}, and δ = {x/aaa, y/ab}.

From xθ1δ = yδ = ab and xδ = aaa, the set U(xθ1δ, xδ) is empty. Thus, θ1 ◦ δ = ∅, and θ1 and δ are inconsistent.

From xθ2δ = aazδ = aaz and xδ = aaa, U(xθ2δ, xδ) is the set {{z/a}}. From yθ2δ = azδ = az and yδ = ab, U(yθ2δ, yδ) is the set {{z/b}}. Hence there exists no substitution γ such that xθ2δγ = aaa and yθ2δγ = ab. Therefore, θ2 ◦ δ = ∅, and θ2 and δ are inconsistent.

From xθ3δ = aazδ = aaz and xδ = aaa, U(xθ3δ, xδ) is the set {{z/a}}. From yθ3δ = zbδ = zb and yδ = ab, U(yθ3δ, yδ) is the set {{z/a}}. Hence only the substitution γ = {z/a} satisfies xθ3δγ = aaa and yθ3δγ = ab. Therefore, MIN(θ3 ◦ δ) has the single element θ3 · δ · γ = {x/aaa, y/ab, z/a}, and θ3 and δ are consistent.

Lemma 6. Let θ = {xi/πi | i = 1, 2, . . . , m} and δ = {y/τ} be substitutions such that, for each i (i = 1, 2, . . . , m), πi and τ are regular, and var(πi) ∩ var(τ) = D(θ) ∩ var(τ) = ∅. Then, the set MIN(θ ◦ δ) is finite and computable.

Proof. If y ∉ D(θ) then δ · θ · δ = θ · δ and θ · δ · δ = θ · δ, from the assumption and δ · δ = δ. Thus, θ · δ ∈ θ ◦ δ holds. Furthermore, for every σ ∈ θ ◦ δ, σ = θ · δ · γ holds for some substitution γ. Thus, the set MIN(θ ◦ δ) is the singleton set {θ · δ}.

If y = xk ∈ D(θ) then y ∉ var(πi) for every i (i = 1, 2, . . . , m), from the assumption θ · θ = θ. Thus, θ · δ = θ holds. Furthermore, from the assumption of this lemma, var(τ) ∩ D(θ) = ∅ and δ · θ = δ ∪ {xi/πi | i ≠ k}. If πk and τ are not unifiable, then θ and δ are inconsistent. Otherwise, for any γ ∈ MXGU(πk, τ), θ · γ ∈ θ ◦ δ, because θ · δ · γ = δ · θ · γ = θ · γ holds. Furthermore, from the definition of mxgu's, for any substitution σ such that θ · δ · σ ∈ θ ◦ δ, there exists γ ∈ MXGU(πk, τ) satisfying σ = γ · γ′ for some substitution γ′. Thus, MIN(θ ◦ δ) is the set {θ · γ | γ ∈ MXGU(πk, τ)}. Since the set MXGU(πk, τ) is finite and computable from Lemma 2, MIN(θ ◦ δ) is also finite and computable.

Example 2. Let θ1 = {x/aya, z/y}, θ2 = {y/aza}, and δ = {y/y1y2}. Then, MIN(θ1 ◦ δ) is the singleton set consisting of θ1 · δ = {x/ay1y2a, y/y1y2, z/y1y2}. On the other hand, since

MXGU(aza, y1y2) = { {y1/a, y2/za}, {y1/az, y2/a}, {y1/az1, y2/z2a, z/z1z2} },

we obtain the following set:

MIN(θ2 ◦ δ) = { {y/aza, y1/a, y2/za}, {y/aza, y1/az, y2/a}, {y/az1z2a, y1/az1, y2/z2a, z/z1z2} }.

Definition 2. Let Γ be an EFS, G be a goal of Γ, and R be a computation rule. An S-derivation from G is a (finite or infinite) sequence of triplets (Gi, Ci, Θi) (i = 0, 1, . . .) which satisfies the following conditions:


1. Gi is a goal, Θi is a finite set of substitutions, Ci is a variant of a clause in Γ, and G0 = G.
2. var(Ci) ∩ var(Cj) = ∅ for every i and j (i ≠ j), and var(Ci) ∩ var(Gi) = ∅ for every i.
3. Let Gi = ← A1, . . . , Ak, Ci = A ← B1, . . . , Bq, and let Am be the atom selected by R. If i = 0, then Θi = MXGU(Am, A); otherwise, Θi = MIN(Θi−1 ◦ MXGU(Am, A)). The next goal Gi+1 is of the following form:
(← A1, . . . , Am−1, B1, . . . , Bq, Am+1, . . . , Ak)INT(Θi).

If the S-derivation ends with the empty goal Gn, then it is said to be an S-refutation from G, and each substitution in Θn−1 is called an answer substitution for G by Γ.

Definition 3. Let Γ be an EFS, and (Gi, Ci, Θi) (i = 0, 1, . . . , n) be a finite S-derivation of Γ. The derivation is said to be finitely failed with the length n if
1. Θn = ∅, or
2. there exists no clause in Γ such that its head and the selected atom of Gn are unifiable.

For an EFS Γ, we define the following two sets: SFFS(Γ) is the set of all ground atoms A such that all S-derivations of Γ from ← A are finitely failed within some length n, and SRS(Γ) is the set of all ground atoms A such that there exists an S-refutation of Γ from ← A.

3.3 Completeness of S-Derivation

An EFS Γ is said to be regular if all predicate symbols in Γ are unary and each clause A ← B1, B2, . . . , Bn in Γ satisfies the following conditions:
1. the term in A is regular,
2. the terms in B1, B2, . . . , Bn are mutually distinct variables, and
3. var(B1) ∪ var(B2) ∪ · · · ∪ var(Bn) ⊆ var(A).

It has been shown that the class of languages defined by regular EFS's is equivalent to that of context-free languages [3]. For regular EFS's, we show that the S-derivation is complete by the following theorem.

Theorem 1. For every regular EFS Γ, PS(Γ) = RS(Γ) = SRS(Γ) holds.

The above theorem can be proved by the following lemmas and proposition.

Lemma 7. Let Γ be a regular EFS, G0 be a ground goal, and (Gi, Ci, Θi) (i = 0, 1, . . . , n) be an S-derivation from G0. Then, for every σ ∈ Θn−1, σ is ground, and there exists a derivation (G′i, Ci, θi) (i = 0, 1, . . . , n) such that G′0 = G0 and G′i = Giσ for each i (i = 1, 2, . . . , n).


Proof. Let p(w) be the selected atom of G0, and p(π) the head of C0. Then, from the definition of an S-derivation, Θ0 = U(w, π) holds. Furthermore, for every σ ∈ Θ0, INT(Θ0) ⊆ σ holds. Let G′1 be the resolvent of G0 and C0 by σ. Then, the derivation (G0, C0, σ), (G′1, ·, ·) satisfies the statement.

Next, assume that p(τ) is the selected atom of Gn−1, and p(π) the head of Cn−1. Then, from the definitions of an S-derivation and a regular EFS, τ is a ground term w ∈ Σ+ or a variable x ∈ D(σn−2) for every σn−2 ∈ Θn−2. If σ ∈ Θn−1 then there exists σn−2 ∈ Θn−2 such that σ ∈ σn−2 ◦ δ for some δ ∈ MXGU(τ, π). If the selected atom is p(w) then δ is ground, and thus σ is also ground. If the selected atom is p(x) then δ = {x/π} from Lemma 4. Since x/w ∈ σn−2 for some w ∈ Σ+, σ ∈ σn−2 ◦ δ = {σn−2 · γ | γ ∈ MXGU(w, π)}. Thus, σ is ground. From the induction hypothesis, there exists a derivation (G′i, Ci, θi) (i = 0, 1, . . . , n−1) such that G′0 = G0 and G′i = Giσn−2 for each i (i = 1, 2, . . . , n−1). Since σn−2 is ground, it is clear that σn−2 ⊆ σ. Let G′n be the resolvent of G′n−1 and Cn−1 by θn−1 ∈ U(w, π); then it is clear that the derivation (G′i, Ci, θi) (i = 0, 1, . . . , n) satisfies the statement.

Lemma 8. Let Γ be a regular EFS, G0 be a ground goal, and (Gi, Ci, θi) (i = 0, 1, . . .) be a derivation from G0. Then, there exists an S-derivation (G′i, Ci, Θi) (i = 0, 1, . . .) and a substitution σi ∈ Θi such that G′0 = G0 and G′i+1σi = Gi+1 for each i (i = 0, 1, . . .).

Proof. Let p(w) be the selected atom of G0, and p(π) the head of C0. Then, from the definition of a derivation, θ0 ∈ U(w, π). On the other hand, from the definition of an S-derivation, Θ0 = MXGU(w, π) = U(w, π). Thus, θ0 ∈ Θ0, and G′1θ0 = G1. Next, assume that there exists σk−1 ∈ Θk−1 such that G′kσk−1 = Gk. Let p(w) be the selected atom of Gk, and p(π) the head of Ck. Then, from the definition of a derivation, θk ∈ U(w, π).
On the other hand, from the definition of an S-derivation and the induction hypothesis, the selected atom of G′k has the form p(w) or p(x) for some w ∈ Σ+ and x ∈ D(σk−1). If the selected atom is p(w) then Θk = MIN(Θk−1 ◦ U(w, π)). Since σk−1 ∈ Θk−1 and θk ∈ U(w, π), σk−1 ◦ θk ⊆ Θk holds. Furthermore, from the definition of an S-derivation, D(σk−1) ∩ D(θk) = ∅. Thus, σk−1 and θk are consistent, and σk−1 ∪ θk ∈ σk−1 ◦ θk. If the selected atom is p(x) then Θk = MIN(Θk−1 ◦ MXGU(x, π)), and MXGU(x, π) is the singleton set {{x/π}} from Lemma 4. Since σk−1 ∈ Θk−1, MIN(σk−1 ◦ {x/π}) ⊆ Θk holds. Furthermore, from x/w ∈ σk−1 and U(w, π) ≠ ∅, {x/π} and σk−1 are consistent. From the argument in the proof of Lemma 6, MIN({x/π} ◦ σk−1) = {σk−1 · γ | γ ∈ MXGU(w, π)}. Since θk ∈ MXGU(w, π), σk−1 · θk = σk−1 ∪ θk ∈ Θk. It is clear that σk−1 ∪ θk is a substitution and satisfies the statement.

From Lemmas 7 and 8, we can prove the following proposition.


Proposition 1. Let Γ be a regular EFS, and G0 be a ground goal. Then, there exists a refutation from G0 if and only if there exists an S-refutation from G0.

Furthermore, we can also prove the following theorem.

Theorem 2. For every regular EFS Γ, FFS(Γ) = SFFS(Γ) holds.

Example 3. For the EFS

Γ = { (1) p(xy) ← q(x), r(y);  (2) q(aⁿ) ←;  (3) r(aa) ← }

and a goal ← p(aⁿ⁺¹), Fig. 1 describes the derivation and the S-derivation as trees like SLD-trees [7]. In both trees, the label (k, θ) on each edge represents a derivation step by the clause (k) and the unifier (or set of unifiers) θ. The derivation needs n + 1 backtrackings to determine p(aⁿ⁺¹) ∈ FFS(Γ); in the S-derivation, this is determined with only two backtrackings.

[Figure: two trees in the style of SLD-trees. The derivation tree for ← p(aⁿ⁺¹) branches on the n + 1 unifiers (1, {x/aⁱ, y/aⁿ⁺¹⁻ⁱ}); each branch reaches a goal such as ← q(aⁱ), r(aⁿ⁺¹⁻ⁱ) and fails, the branch with x/aⁿ failing only at ← r(a) after applying clause (2). The S-derivation tree carries the single set of unifiers {{x/a, y/aⁿ}, {x/a², y/aⁿ⁻¹}, . . . , {x/aⁿ, y/a}} along one branch; resolving with (2, {{x/aⁿ, y/a}}) narrows the set, and the goal ← r(a) fails.]

Fig. 1. Backtracking by a derivation and an S-derivation

4 An Implementation of EFS Interpreter

In this section, we outline an implementation of an EFS interpreter based on the S-derivation, and give some results of experiments on typical examples of EFS's for which the S-derivation works efficiently. In order to construct an efficient interpreter, we adopt two ideas:
1. computing unifiers by using the Aho-Corasick pattern matching algorithm, and
2. reducing the amount of backtracking in a derivation by using the S-derivation.

4.1 Unifications by the Aho-Corasick Pattern Matching Algorithm

The Aho-Corasick pattern matching algorithm finds all occurrence positions of a set of patterns by scanning the given text once. From a given EFS, the EFS interpreter builds a pattern matching machine in advance for all ground strings in the EFS. For each given ground goal, the pattern matching machine scans the ground term in the goal clause and outputs all occurrence positions of the patterns in the term. From these occurrence positions, each unifier is efficiently computed.

Example 4. Let w = aaabaaabaaabaaa and τ = xbyabz be terms, where a, b ∈ Σ and x, y, z ∈ X. For the constant substrings b and ab of τ, the pattern matching machine finds the following occurrence positions in w:

b : (4 : 4), (8 : 8), (12 : 12),
ab : (3 : 4), (7 : 8), (11 : 12),

where (i : j) in the line of b (resp. ab) means that the substring from the ith to the jth position of w is b (resp. ab). For each occurrence (ib : jb) of b and (iab : jab) of ab such that jb < iab, we obtain the following unifiers of w and τ:

{x/(1 : 3), y/(5 : 6), z/(9 : 15)} from ((4 : 4), (7 : 8)),
{x/(1 : 3), y/(5 : 10), z/(13 : 15)} from ((4 : 4), (11 : 12)),
{x/(1 : 7), y/(9 : 10), z/(13 : 15)} from ((8 : 8), (11 : 12)).

A regular EFS is well suited to this computation of unifiers, because every ground term in each resolvent of the derivation is a substring of the term in the given initial goal. This implies that all unifiers used in the derivation can be computed by scanning the given initial goal only once with the pattern matching machine.
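Example 4's computation can be sketched as follows (a naive scan stands in for the Aho-Corasick machine, which would report the same positions in a single pass; the function names and interval encoding are ours):

```python
def occurrences(pat, text):
    """1-based (start, end) occurrence positions of pat in text."""
    return [(i + 1, i + len(pat))
            for i in range(len(text) - len(pat) + 1)
            if text[i:i + len(pat)] == pat]

def unifiers_xbyabz(text):
    """Unifiers of text with the term  x b y ab z  of Example 4, assembled
    from occurrence positions; variables map to 1-based index intervals."""
    n = len(text)
    subs = []
    for ib, jb in occurrences("b", text):
        for iab, jab in occurrences("ab", text):
            # x, y and z must all receive non-empty segments
            if ib >= 2 and iab >= jb + 2 and jab <= n - 1:
                subs.append({"x": (1, ib - 1),
                             "y": (jb + 1, iab - 1),
                             "z": (jab + 1, n)})
    return subs

w = "aaabaaabaaabaaa"
```

Here `occurrences("b", w)` gives (4:4), (8:8), (12:12) and `occurrences("ab", w)` gives (3:4), (7:8), (11:12), from which the three unifiers listed above are assembled.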

4.2 An Implementation of S-Derivation

Since an S-derivation deals with the set of all possible unifiers at each step of the derivation, it is important to adopt a compact representation of this set. By the property of a regular EFS, all terms in the derivation are substrings of the term in the given initial goal. Thus, the set of all possible unifiers can be divided into several parts, as shown in the next example.


Example 5. Let w = aabaabaabaa and τ = xybz be terms, where a, b ∈ Σ and x, y, z ∈ X. Then, the set U(w, τ) of all unifiers, namely

{x/a, y/a, z/aabaabaa}, {x/a, y/abaa, z/aabaa}, {x/aa, y/baa, z/aabaa},
{x/aab, y/aa, z/aabaa}, {x/aaba, y/a, z/aabaa}, {x/a, y/abaabaa, z/aa},
{x/aa, y/baabaa, z/aa}, {x/aab, y/aabaa, z/aa}, {x/aaba, y/abaa, z/aa},
{x/aabaa, y/baa, z/aa}, {x/aabaab, y/aa, z/aa}, {x/aabaaba, y/a, z/aa},

can be divided into the following three parts:

U1 = { {x/a, y/a, z/aabaabaa} },
U2 = { {x/a, y/abaa, z/aabaa}, {x/aa, y/baa, z/aabaa}, {x/aab, y/aa, z/aabaa}, {x/aaba, y/a, z/aabaa} },
U3 = { {x/a, y/abaabaa, z/aa}, {x/aa, y/baabaa, z/aa}, {x/aab, y/aabaa, z/aa}, {x/aaba, y/abaa, z/aa}, {x/aabaa, y/baa, z/aa}, {x/aabaab, y/aa, z/aa}, {x/aabaaba, y/a, z/aa} }.

Furthermore, each Ui (i = 1, 2, 3) is represented as follows:

U1 = { {x/(1:1), y/(2:2), z/(4:11)} },
U2 = { {x/(1:k), y/(k+1:5), z/(7:11)} | 1 ≤ k ≤ 4 },
U3 = { {x/(1:k), y/(k+1:8), z/(10:11)} | 1 ≤ k ≤ 7 },

where each (i : j) represents the substring from the ith to the jth position of w. For the EFS

Γ = { p(xybz) ← q1(x), q2(y), q3(z);  q1(aa) ←;  q2(baa) ←;  q3(aabaa) ← }

and the set U2, the S-derivation from ← p(w) is shown in Fig. 2.

[Figure: the S-derivation from the goal ← p(1:11), i.e. ← p(xy[6:6]z), with the set U2 = {x/(1:k), y/(k+1:5), z/(7:11)}. The first resolvent is ← q1(1:k), q2(k+1:5), q3(7:11); resolving q1 narrows the set to {x/(1:2), y/(3:5), z/(7:11)}, which survives the remaining goals ← q2(3:5) and ← q3(7:11).]

Fig. 2. An S-derivation from the goal ← p(w).
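The divided representation of Example 5 can be computed directly from occurrence positions (a toy Python sketch; the function name and encoding are ours, not the paper's implementation):

```python
def unifier_groups(text):
    """U(text, xybz) grouped by which occurrence of the constant b is used
    (the parts U1, U2, U3 of Example 5); variables map to 1-based intervals."""
    n = len(text)
    groups = []
    for p in range(3, n):                 # position of b: room for x, y before and z after
        if text[p - 1] != "b":
            continue
        groups.append([{"x": (1, k), "y": (k + 1, p - 1), "z": (p + 1, n)}
                       for k in range(1, p - 1)])
    return groups

groups = unifier_groups("aabaabaabaa")
```

For w = aabaabaabaa this yields groups of sizes 1, 4, and 7, i.e. the twelve unifiers of U(w, τ), with the singleton group being U1 = {{x/(1:1), y/(2:2), z/(4:11)}}; each group is described by a single interval pattern, which is what makes the representation compact.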


We can easily show that the derivation with these divided sets of unifiers is equivalent to the S-derivation. From the occurrence positions given by the pattern matching machine, each set Ui is efficiently computed. Thus, the S-derivation is efficient.

4.3 Experimental Results

We construct three types of EFS interpreters C1, C2, and C3, where C1, C2, and C3 use a derivation with naive unifications, a derivation with unifications by the Aho-Corasick algorithm, and an S-derivation with unifications by the Aho-Corasick algorithm, respectively. We verify the efficiency of the S-derivation and of the unifications using the Aho-Corasick algorithm by comparing the running times of these interpreters with that of the definite clause grammar (DCG) provided by the Prolog interpreter. We consider the following EFS's and DCG's:

Γ1 = { p0(x1x2) ← p1(x1), p2(x2);
       p1(aaxaa) ← p1(x);  p1(bbxbb) ← p1(x);
       p1(aaaa) ←;  p1(bbbb) ←;
       p2(a) ←;  p2(aa) ← }

D1 = { p0 → p1, p2;
       p1 → aa p1 aa;  p1 → bb p1 bb;
       p1 → aaaa;  p1 → bbbb;
       p2 → a;  p2 → aa }

Γ2 = { p0(x1x2x3x4aaa) ← p1(x1), p1(x2), p1(x3), p1(x4);
       p0(aaa) ←;
       p1(axa) ← p1(x);  p1(bxb) ← p1(x);
       p1(a) ←;  p1(b) ←;  p1(aa) ←;  p1(bb) ← }

D2 = { p0 → p1 p1 p1 p1 aaa;  p0 → aaa;
       p1 → a p1 a;  p1 → b p1 b;
       p1 → a;  p1 → b;  p1 → aa;  p1 → bb }

The DCG Di and the EFS Γi represent the same language (i = 1, 2).

Table 1. The running time for the EFS Γ1 and the DCG D1 (sec.)

The length of the text     C1        C2       C3    DCG
100                       18.17     50.46    2.03   0.2
200                       64.05    195.12    3.93   0.4
300                      137.86    435.54    5.86   0.54
400                      238.4     762.86    7.78   0.76
500                      367.11   1181.39    9.62   0.89

The running times of the EFS interpreters for Γ1 and of the DCG for D1 are shown in Table 1. The input data consist of 30 strings over {a, b}. The results of this experiment show that, if an EFS has successive occurrences of variables, the S-derivation is more efficient than the ordinary derivation, as indicated by the difference between the running times of C2 and C3.


Table 2. The running time for the EFS Γ2 and the DCG D2 (sec.)

The length of the text     C1        C2        C3      DCG
5                          8.71      8.75      9.25     6.49
10                        88.42     17.42     20.05    22.43
15                       473.15     52.62     73.24   115.5
20                      1619.83    168.24    351.96   528.6
25                      4200.69    424.1    1175.22  1648.99

In Table 2, we present the running times of each EFS interpreter and of the DCG for Γ2 and D2. The input data consist of 1000 strings over {a, b}. The unification by the Aho-Corasick algorithm is efficient, as shown by the difference between the running times of C1 and C2. Furthermore, we find that C2 and C3 are more efficient than the DCG. This result indicates that the EFS interpreters backtrack less than the DCG does.

5 Conclusion

We have proposed an efficient derivation for EFS's, called the S-derivation, in which all possible unifiers are evaluated at one step of the derivation. We have shown that the S-derivation is complete for accepting context-free languages. Furthermore, we have implemented the S-derivation and verified its efficiency by comparing its running time with that of DCG's.

One open problem is the computability of the S-derivation for classes extending regular EFS's. Since, in the S-derivation, each resolvent contains variables even if the initial goal is ground, unification should be computed efficiently for terms containing variables. However, it is known that the unification problem for non-regular terms is NP-complete. Therefore, we have to consider another approach for extended classes of EFS's.

The S-derivation can also be applied to translations over strings. We have already constructed a translator for regular TEFS's [12], which represent binary relations over context-free languages. Future work includes formalizing language generation by the S-derivation in the framework of TEFS's and designing a translator for real data by using our results.

References

1. A. V. Aho and M. J. Corasick: Efficient string matching: An aid to bibliographic search, Communications of the ACM 18, No. 6, 333–340 (1975).
2. S. Arikawa, S. Miyano, A. Shinohara, T. Shinohara, and A. Yamamoto: Algorithmic learning theory with elementary formal systems, IEICE Transactions on Information and Systems E75-D, 405–414 (1992).
3. S. Arikawa, T. Shinohara, and A. Yamamoto: Learning elementary formal systems, Theoretical Computer Science 95, 97–113 (1992).


4. N. Harada, S. Arikawa, and H. Ishizaka: A class of elementary formal systems that has an efficient parsing algorithm, Information Modeling and Knowledge Bases IX, 89–101 (1997).
5. J. Jaffar: Minimal and complete word unification, Journal of the ACM 37, 47–85 (1990).
6. D. Kapur: Complexity of unification problems with associative-commutative operators, Journal of Automated Reasoning 9, 261–288 (1992).
7. J. W. Lloyd: Foundations of logic programming (second edition), Springer-Verlag (1987).
8. Y. Mukouchi and S. Arikawa: Towards a mathematical theory of machine discovery from facts, Theoretical Computer Science 137, 53–84 (1995).
9. T. Shinohara: Inductive inference on monotonic formal systems from positive data, New Generation Computing 8, 371–384 (1991).
10. T. Shinohara: Rich classes inferable from positive data: Length-bounded elementary formal systems, Information and Computation 108, 175–186 (1994).
11. R. Smullyan: Theory of formal systems, Princeton Univ. Press, Princeton (1961).
12. N. Sugimoto, K. Hirata, and H. Ishizaka: Constructive learning of translations based on dictionaries, In Proceedings of the Seventh International Workshop on Algorithmic Learning Theory, Lecture Notes in Artificial Intelligence 1160, 177–184 (1996).
13. N. Sugimoto: Learnability of translations from positive examples, In Proceedings of the Ninth International Conference on Algorithmic Learning Theory, Lecture Notes in Artificial Intelligence 1501, 169–178 (1998).
14. N. Sugimoto and H. Ishizaka: Generating languages by a derivation procedure for elementary formal systems, Information Processing Letters 69, 161–166 (1999).
15. A. Yamamoto: Procedural semantics and negative information of elementary formal system, Journal of Logic Programming 13, 89–97 (1992).

Computational Revision of Quantitative Scientific Models

Kazumi Saito¹, Pat Langley², Trond Grenager², Christopher Potter³, Alicia Torregrosa³, and Steven A. Klooster³

¹ NTT Communication Science Laboratories, 2-4 Hikaridai, Seika, Soraku, Kyoto 619-0237 Japan, saito@cslab.kecl.ntt.co.jp
² Computational Learning Laboratory, CSLI, Stanford University, Stanford, California 94305 USA, {langley,grenager}@cs.stanford.edu
³ Ecosystem Science and Technology Branch, NASA Ames Research Center, MS 242-4, Moffett Field, California 94035 USA, {cpotter,lisy,sklooster}@gaia.arc.nasa.gov

Abstract. Research on the computational discovery of numeric equations has focused on constructing laws from scratch, whereas work on theory revision has emphasized qualitative knowledge. In this paper, we describe an approach to improving scientific models that are cast as sets of equations. We review one such model for aspects of the Earth ecosystem, then recount its application to revising parameter values, intrinsic properties, and functional forms, in each case achieving reduction in error on Earth science data while retaining the communicability of the original model. After this, we consider earlier work on computational scientific discovery and theory revision, then close with suggestions for future research on this topic.

1 Research Goals and Motivation

Research on computational approaches to scientific knowledge discovery has a long history in artificial intelligence, dating back over two decades (e.g., Langley, 1979; Lenat, 1977). This body of work has led steadily to more powerful methods and, in recent years, to new discoveries deemed worth publication in the scientific literature, as reviewed by Langley (1998). However, despite this progress, mainstream work on the topic retains some important limitations.

One drawback is that few approaches to the intelligent analysis of scientific data can use available knowledge about the domain to constrain search for laws or explanations. Moreover, although early work on computational discovery cast discovered knowledge in notations familiar to scientists, more recent efforts have not. Rather, influenced by the success of machine learning and data mining, many researchers have adopted formalisms developed by these fields, such as decision trees and Bayesian networks. A return to methods that operate on established scientific notations seems necessary for scientists to understand their results.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 336-349, 2001. © Springer-Verlag Berlin Heidelberg 2001


Like earlier research on computational scientific discovery, our general approach involves defining a space of possible models stated in an established scientific formalism, specifically sets of numeric equations, and developing techniques to search that space. However, it differs from previous work in this area by starting from an existing scientific model and using heuristic search to revise the model in ways that improve its fit to observations. Although there exists some research on theory refinement (e.g., Ourston & Mooney, 1990; Towell, 1991), it has emphasized qualitative knowledge rather than quantitative models that relate continuous variables, which play a central role in many sciences.

In the pages that follow, we describe an approach to revising quantitative models of complex systems. We believe that our approach is a general one appropriate for many scientific domains, but we have focused our efforts on one area (certain aspects of the Earth ecosystem) for which we have a viable model, existing data, and domain expertise. We briefly review the domain and model before moving on to describe our approach to knowledge discovery and model revision. After this, we present some initial results that suggest our approach can improve substantially the model's fit to available data. We close with a discussion of related discovery work and directions for future research.

2 A Quantitative Model of the Earth Ecosystem

Data from the latest generation of satellites, combined with readings from ground sources, hold great promise for testing and improving existing scientific models of the Earth's biosphere. One such model, CASA, developed by Potter and Klooster (1997, 1998) at NASA Ames Research Center, accounts for the global production and absorption of biogenic trace gases in the Earth's atmosphere, as well as predicting changes in the geographic patterns of major vegetation types (e.g., grasslands, forest, tundra, and desert) on the land. CASA predicts, with reasonable accuracy, annual global fluxes in trace gas production as a function of surface temperature, moisture levels, and soil properties, together with global satellite observations of the land surface.

The model incorporates difference equations that represent the terrestrial carbon cycle, as well as processes that mineralize nitrogen and control vegetation type. These equations describe relations among quantitative variables and lead to changes in the modeled outputs over time. Some processes are contingent on the values of discrete variables, such as soil type and vegetation, which take on different values at different locations. CASA operates on gridded input at different levels of resolution, but typical usage involves grid cells that are eight kilometers square, which matches the resolution for satellite observations of the land surface. To run the CASA model, the difference equations are repeatedly applied to each grid cell independently to produce new variable values on a daily or monthly basis, leading to predictions about how each variable changes, at each location, over time.

Although CASA has been quite successful at modeling Earth's ecosystem, there remain ways in which its predictions differ from observations, suggesting that we invoke computational discovery methods to improve its ability to fit the data. The result would be a revised model, cast in the same notation as the


Table 1. Variables used in the NPPc portion of the CASA ecosystem model.

NPPc is the net plant production of carbon at a site during the year.
E is the photosynthetic efficiency at a site after factoring various sources of stress.
T1 is a temperature stress factor (0 < T1 < 1) for cold weather.
T2 is a temperature stress factor (0 < T2 < 1), nearly Gaussian in form but falling off more quickly at higher temperatures.
W is a water stress factor (0.5 < W < 1) for dry regions.
Topt is the average temperature for the month at which MON-FAS-NDVI takes on its maximum value at a site.
Tempc is the average temperature at a site for a given month.
EET is the estimated evapotranspiration (water loss due to evaporation and transpiration) at a site.
PET is the potential evapotranspiration (water loss due to evaporation and transpiration given an unlimited water supply) at a site.
PET-TW-M is a component of potential evapotranspiration that takes into account the latitude, time of year, and days in the month.
A is a polynomial function of the annual heat index at a site.
AHI is the annual heat index for a given site.
MON-FAS-NDVI is the relative vegetation greenness for a given month as measured from space.
IPAR is the energy from the sun that is intercepted by vegetation after factoring in time of year and days in the month.
FPAR-FAS is the fraction of energy intercepted from the sun that is absorbed photosynthetically after factoring in vegetation type.
MONTHLY-SOLAR is the average solar irradiance for a given month at a site.
SOL-CONVER is 0.0864 times the number of days in each month.
UMD-VEG is the type of ground cover (vegetation) at a site.

original one, that incorporates changes which are interesting to Earth scientists and which improve our understanding of the environment.

Because the overall CASA model is quite complex, involving many variables and equations, we decided to focus on one portion that lies on the model's 'fringes' and that does not involve any difference equations. Table 1 describes the variables that occur in this submodel, in which the dependent variable, NPPc, represents the net production of carbon. As Table 2 indicates, the model predicts this quantity as the product of two unobservable variables, the photosynthetic efficiency, E, at a site and the solar energy intercepted, IPAR, at that site. Photosynthetic efficiency is in turn calculated as the product of the maximum efficiency (0.56) and three stress factors that reduce this efficiency. One stress term, T2, takes into account the difference between the optimum temperature, Topt, and actual temperature, Tempc, for a site. A second factor, T1, involves


Table 2. Equations used in the NPPc portion of the CASA ecosystem model.

NPPc = Σ_month max(E · IPAR, 0)
E = 0.56 · T1 · T2 · W
T1 = 0.8 + 0.02 · Topt − 0.0005 · Topt^2
T2 = 1.18 / [(1 + e^(0.2·(Topt−Tempc−10))) · (1 + e^(0.3·(Tempc−Topt−10)))]
W = 0.5 + 0.5 · EET / PET
PET = 1.6 · (10 · Tempc / AHI)^A · PET-TW-M   if Tempc > 0
PET = 0   if Tempc ≤ 0
A = 0.000000675 · AHI^3 − 0.0000771 · AHI^2 + 0.01792 · AHI + 0.49239
IPAR = 0.5 · FPAR-FAS · MONTHLY-SOLAR · SOL-CONVER
FPAR-FAS = min((SR-FAS − 1.08) / SRDIFF(UMD-VEG), 0.95)
SR-FAS = −(MON-FAS-NDVI + 1000) / (MON-FAS-NDVI − 1000)

the nearness of Topt to a global optimum for all sites, reflecting the intuition that plants which are better adapted to harsh temperatures are less efficient overall. The third term, W, represents stress that results from lack of moisture as reflected by EET, the estimated water loss due to evaporation and transpiration, and PET, the water loss due to these processes given an unlimited water supply. In turn, PET is defined in terms of the annual heat index, AHI, for a site, and PET-TW-M, another component of potential evapotranspiration.

The energy intercepted from the sun, IPAR, is computed as the product of FPAR-FAS, the fraction of energy absorbed photosynthetically for a given vegetation type, MONTHLY-SOLAR, the average radiation for a given month, and SOL-CONVER, the number of days in that month. FPAR-FAS is a function of MON-FAS-NDVI, which indicates relative greenness at a site as observed from space, and SRDIFF, an intrinsic property that takes on different numeric values for different vegetation types as specified by the discrete variable UMD-VEG.

Of the variables we have mentioned, NPPc, Tempc, MONTHLY-SOLAR, SOL-CONVER, MON-FAS-NDVI, and UMD-VEG are observable. Three additional terms (EET, PET-TW-M, and AHI) are defined elsewhere in the model, but we assume their definitions are correct and thus we can treat them as observables. The remaining variables are unobservable and must be computed from the others using their definitions. This portion of the model also contains a number of numeric parameters, as shown in the equations in Table 2.
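To make the structure of Table 2 concrete, here is a minimal Python sketch of the NPPc submodel. The functions follow the equations above; any input values used with them are illustrative, and the fallback of W to 0.5 when PET is zero (the Tempc ≤ 0 case) is our assumption, not part of the original model.

```python
import math

def t1(topt):
    # T1: temperature stress for cold weather (Table 2)
    return 0.8 + 0.02 * topt - 0.0005 * topt ** 2

def t2(topt, tempc):
    # T2: near-Gaussian temperature stress around the optimum Topt
    return 1.18 / ((1 + math.exp(0.2 * (topt - tempc - 10))) *
                   (1 + math.exp(0.3 * (tempc - topt - 10))))

def water_stress(eet, pet):
    # W: water stress; we assume W = 0.5 when PET is zero (Tempc <= 0)
    return 0.5 + 0.5 * eet / pet if pet > 0 else 0.5

def efficiency(topt, tempc, eet, pet):
    # E: photosynthetic efficiency, scaled by the 0.56 maximum
    return 0.56 * t1(topt) * t2(topt, tempc) * water_stress(eet, pet)

def ipar(fpar_fas, monthly_solar, sol_conver):
    # IPAR: intercepted solar energy for one month
    return 0.5 * fpar_fas * monthly_solar * sol_conver

def nppc(months):
    # NPPc: sum over months of max(E * IPAR, 0)
    return sum(
        max(efficiency(m['topt'], m['tempc'], m['eet'], m['pet']) *
            ipar(m['fpar_fas'], m['monthly_solar'], m['sol_conver']), 0.0)
        for m in months)
```

Note that when Tempc equals Topt the T2 factor is close to, but below, one, so even at the optimum temperature E stays under the 0.56 maximum.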

3 An Approach to Quantitative Model Revision

As noted earlier, our approach to scientific discovery involves refining models like CASA that involve relations among quantitative variables. We adopt the traditional view of discovery as heuristic search through a space of models, with the search process directed by candidates' ability to fit the data. However, we assume this process starts not from scratch, but rather with an existing model,


and the search operators involve making changes to this model, rather than constructing entirely new structures. Our long-term goal is not to automate the revision process, but instead to provide an interactive tool that scientists can direct and use to aid their model development. As a result, the approach we describe in this section addresses the task of making local changes to a model rather than carrying out global optimization, as assumed by Chown and Dietterich (2000). Thus, our software takes as input not only observations about measurable variables and an existing model stated as equations, but also information about which portion of the model should be altered. The output is a revised model that fits the observed data better than the initial one.

Below we review two discovery algorithms that we utilize to improve the specified part of a model, then describe three distinct types of revision they support. We consider these in order of increasing complexity, starting with simple changes to parameter values, moving on to revisions in the values of intrinsic properties, and ending with changes in an equation's functional form.

3.1 The RF5 and RF6 Discovery Algorithms

Our approach relies on RF5 and RF6, two algorithms for discovering numeric equations described by Saito and Nakano (1997, 2000). Given data for some continuous variable y that is dependent on continuous predictive variables x1, ..., xn, the RF5 system searches for multivariate polynomial equations of the form

y = w0 + Σ_{j=1..J} wj · Π_{k=1..K} xk^wjk = w0 + Σ_{j=1..J} wj · exp(Σ_{k=1..K} wjk · ln(xk)) .   (1)
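The equivalence of the two sides of equation (1) can be checked with a small numpy sketch; the weights below are illustrative, and the real RF5 system learns them with the BPQ second-order method rather than setting them by hand.

```python
import numpy as np

def rf5_form(x, w0, w, exponents):
    """Evaluate y = w0 + sum_j w_j * prod_k x_k ** w_jk.

    x: (n, K) array of positive inputs; w: (J,) term weights;
    exponents: (J, K) matrix of exponents w_jk.
    """
    # The rightmost form of equation (1): exp of a weighted sum of logs,
    # so each hidden 'product unit' is linear in log space.
    hidden = np.exp(np.log(x) @ exponents.T)   # (n, J)
    return w0 + hidden @ w

# Illustrative model: y = 2 + 3 * x1**2 * x2**(-1)
exponents = np.array([[2.0, -1.0]])
y = rf5_form(np.array([[2.0, 4.0]]), 2.0, np.array([3.0]), exponents)
```

Here y equals 2 + 3 · 2^2 · 4^(-1) = 5, computed through the log-space encoding that the product units implement.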

Such functional relations subsume many of the numeric laws found by previous computational discovery systems like Bacon (Langley, 1979) and Fahrenheit (Żytkow, Zhu, & Hussam, 1990). RF5's first step involves transforming a candidate functional form with J summed terms into a three-layer neural network based on the rightmost form of expression (1), in which the J hidden nodes in this network correspond to product units (Durbin & Rumelhart, 1989). The system then carries out search through the weight space using the BPQ algorithm, a second-order learning technique that calculates both the descent direction and the step size automatically. This process halts when it finds a set of weights that minimize the squared error on the dependent variable y. RF5 runs the BPQ method on networks with different numbers of hidden units, then selects the one that gives the best score on an MDL metric. Finally, the program transforms the resulting network into a polynomial equation, with weights on hidden units becoming exponents and other weights becoming coefficients.

The RF6 algorithm extends RF5 by adding the ability to find conditions on a numeric equation that involve nominal variables, which it encodes using one input variable for each nominal value. To this end, the system first generates one such condition for each training case, then utilizes k-means clustering to generate


a smaller set of more general conditions, with the number of clusters determined through cross validation. Finally, RF6 invokes decision-tree induction to construct a classifier that discriminates among these clusters, which it transforms into rules that form the nominal conditions on the polynomial equation that RF5 has generated.

3.2 Three Types of Model Refinement

There exist three natural types of refinement within the class of models, like CASA, that are stated as sets of equations that refer to unobservable variables. These include revising the parameter values in equations, altering the values for an intrinsic property, and changing the functional form of an existing equation.

Improving the parameters for an equation is the most straightforward process. The NPPc portion of CASA contains some parameterized equations that our Earth science team members believe are reliable, like that for computing the variable A from AHI, the annual heat index. However, it also includes equations with parameters about which there is less certainty, like the expression that predicts the temperature stress factor T2 from Tempc and Topt. Our approach to revising such parameters relies on creating a specialized neural network that encodes the equation's functional form using ideas from RF5, but also including a term for the unchanged portion of the model. We then run the BPQ algorithm to find revised parameter values, initializing weights based on those in the model.

We can utilize a similar scheme to improve the values for an intrinsic property like SRDIFF that the model associates with the discrete values for some nominal variable like UMD-VEG (vegetation type). We encode each nominal term as a set of dummy variables, one for each discrete value, making the dummy variable equal to one if the discrete value occurs and zero otherwise. We introduce one hidden unit for the intrinsic property, with links from each of the dummy variables and with weights that correspond to the intrinsic values associated with each discrete value. To revise these weights, we create a neural network that incorporates the intrinsic values but also includes a term for the unchanging parts of the model. We can then run BPQ to revise the weights that correspond to intrinsic values, again initializing them to those in the initial model.
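The dummy-variable scheme for intrinsic properties can be sketched as follows; the vegetation categories and intrinsic values here are invented for illustration, not taken from CASA.

```python
import numpy as np

def one_hot(values, categories):
    # One dummy variable per discrete value: 1 if the value occurs, else 0
    index = {c: i for i, c in enumerate(categories)}
    out = np.zeros((len(values), len(categories)))
    for row, v in enumerate(values):
        out[row, index[v]] = 1.0
    return out

# A single 'hidden unit' whose incoming weights hold the intrinsic values:
categories = ['grassland', 'forest', 'tundra']   # hypothetical types
intrinsic_weights = np.array([3.0, 4.0, 2.5])    # illustrative values
sites = one_hot(['forest', 'tundra'], categories)
intrinsic = sites @ intrinsic_weights   # selects each site's intrinsic value
```

Revising the intrinsic values then amounts to updating `intrinsic_weights` by gradient descent while the rest of the model is held fixed.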
Altering the form of an existing equation requires somewhat more effort, but maps more directly onto previous work in equation discovery. In this case, the details depend on the specific functional form that we provide, but because we have available the RF5 and RF6 algorithms, the approach supports any of the forms that they can discover or specializations of them. Again, having identified a particular equation that we want to improve, we create a neural network that encodes the desired form, then invoke the BPQ algorithm to determine its parametric values, in this case initializing the network weights randomly.

This approach to model refinement supports changes to only one equation or intrinsic property at a time, but this is consistent with the interactive process described earlier. We envision the scientist identifying a portion of the model that he thinks could be better, running one of the three revision methods to improve its fit to the data, and repeating this process until he is satisfied.
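As a worked (synthetic) illustration of the parameter-revision idea, the PET equation of Table 2 can be log-transformed so that its two constants, 1.6 and 10, become an ordinary linear regression problem; the numpy sketch below generates artificial data and uses least squares in place of BPQ.

```python
import numpy as np

def revise_pet_params(A, fixed_part, pet):
    # PET = exp(v0 + A*v1) * (max(Tempc,0)/AHI)**A * PET-TW-M, so
    # ln PET - ln(fixed part) = v0 + A*v1 is linear in (v0, v1).
    target = np.log(pet) - np.log(fixed_part)
    X = np.column_stack([np.ones_like(A), A])
    (v0, v1), *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.exp(v0), np.exp(v1)   # back to the scale of 1.6 and 10

# Synthetic data generated with the original constants 1.6 and 10:
rng = np.random.default_rng(1)
A = rng.uniform(0.5, 2.0, 50)
tempc = rng.uniform(1.0, 30.0, 50)
ahi = rng.uniform(20.0, 60.0, 50)
pet_tw_m = rng.uniform(0.5, 1.5, 50)
fixed_part = (np.maximum(tempc, 0) / ahi) ** A * pet_tw_m
pet = 1.6 * (10 * np.maximum(tempc, 0) / ahi) ** A * pet_tw_m
c0, c1 = revise_pet_params(A, fixed_part, pet)   # recovers 1.6 and 10
```

The point of the sketch is the recipe rather than the regression: encode the chosen equation with free parameters, hold the unchanged portion of the model fixed, and fit only the free parameters against the observations.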

4 Initial Results on Ecosystem Data

In order to evaluate our approach to scientific model revision, we utilized data relevant to the NPPc model available to the Earth science members of our team. These data consisted of observations from 303 distinct sites with known vegetation type and for which measurements of Tempc, MON-FAS-NDVI, MONTHLY-SOLAR, SOL-CONVER, and UMD-VEG were available for each month during the year. In addition, other portions of CASA were able to compute values for the variables AHI, EET, and PET-TW-M. The resulting 303 training cases seemed sufficient for initial tests of our revision methods, so we used them to drive a variety of changes to the handcrafted model of carbon production.

4.1 Results on Parameter Revision

Our Earth science team members identified the equation for T2, one of the temperature stress variables, as a likely candidate for revision. As noted earlier, the handcrafted expression for this term was

T2 = 1.8 / [(1 + e^(0.2·(Topt−Tempc−10))) · (1 + e^(−0.3·(Tempc−Topt−10)))] ,

which produces a Gaussian-like curve that is slightly asymmetrical. This reflects the intuition that photosynthetic efficiency will decrease when temperature (Tempc) is either below or above the optimal (Topt).

To improve upon this equation, we defined x = Topt − Tempc as an intermediate variable and recast the expression for T2 as the product of two sigmoidal functions of the form σ(a) = 1/(1 + exp(−a)) and a parameter. We transformed these into a neural network and used BPQ to minimize the error function

F1 = Σ_sample (NPPc − Σ_month w0 · σ(v10 + v11·x) · σ(v20 − v21·x) · Rest)² ,

over the parameters {w0, v10, v11, v20, v21}, where Rest = 0.56 · T1 · W · IPAR. The resulting equation generated in this manner was

T2 = 1.80 / [(1 + e^(0.05·(Topt−Tempc−10.8))) · (1 + e^(−0.03·(Tempc−Topt−90.33)))] ,

which has reasonably similar values to the original ones for some parameters but quite different values for others.

The root mean squared error (RMSE) for the original model on the available data was 467.910. In contrast, the error for the revised model was 457.757 on the training data and 461.466 using leave-one-out cross validation. Thus, RF6's modification of parameters in the T2 equation produced slightly more than a one percent reduction in overall model error, which is somewhat disappointing. However, inspection of the resulting curves reveals a more interesting picture. Plotting the temperature stress factor T2 using the revised equations as a function of the difference Topt − Tempc still gives a Gaussian-like curve, but within the effective range (from −30 to 30 Celsius) its values decrease monotonically. This seems counterintuitive but interesting from an Earth science perspective,


as it suggests this stress factor has little influence on NPPc. Moreover, the original equation for T2 was not well grounded in first principles of plant physiology, making empirical improvements of this sort beneficial to the modeling enterprise.

As another candidate for parameter revision, we selected the PET equation,

PET = 1.6 · (10 · max(Tempc, 0) / AHI)^A · PET-TW-M ,

which calculates potential water loss due to evaporation and transpiration given an unlimited water supply. By transforming this expression into

PET = exp(ln(1.6) + A · ln(10)) · (max(Tempc, 0) / AHI)^A · PET-TW-M

and replacing the parameter values ln(1.6) and ln(10) with the variables v0 and v1, we constructed a neural network and used BPQ for error minimization. When transforming the trained network back into the original form, the equation that resulted was

PET = 1.56 · (9.16 · max(Tempc, 0) / AHI)^A · PET-TW-M ,

which has values that are very similar to those in the original model's equation. Moreover, since the RMSE for the obtained model was 464.358 on the training data and 467.643 using leave-one-out cross validation, the revision process did not improve the model's accuracy substantially. However, since the PET equation is based on Thornthwaite's (1948) method, which has been used continuously for over 50 years, we should not be overly surprised at this negative result. Indeed, we are encouraged by the fact that our approach did not revise parameters that have stood the test of time in Earth science.

4.2 Results on Intrinsic Value Revision

Another portion of the NPPc model that held potential for revision concerns the intrinsic property SRDIFF associated with the vegetation type UMD-VEG. For each site, the latter variable takes on one of 11 nominal values, such as grasslands, forest, tundra, and desert, each with an associated numeric value for SRDIFF that plays a role in the FPAR-FAS equation. This gives 11 parameters to revise, which seems manageable given the number of observations available.

As outlined earlier, to revise these intrinsic values, we introduced one dummy variable, UMD-VEGk, for each vegetation type such that UMD-VEGk = 1 if UMD-VEG = k and 0 otherwise. We then defined SRDIFF(UMD-VEG) as exp(−Σ_k vk · UMD-VEGk) and, since SRDIFF's value is independent of the month, we used BPQ to minimize, over the weights {vk}, the error function

F2 = Σ_site (NPPc − exp(Σ_k vk · UMD-VEGk) · Rest)² ,

where Rest = Σ_month E · 0.5 · (SR-FAS − 1.08) · MONTHLY-SOLAR · SOL-CONVER. Table 3 shows the initial values for this intrinsic property, as set by the CASA developers, along with the revised values produced by the above approach when


Table 3. Original and revised values for the SRDIFF intrinsic property, along with the frequency for each vegetation type.

vegetation type    A     B     C     D     E     F     G     H     I     J     K
original          3.06  4.35  4.35  4.05  5.09  3.06  4.05  4.05  4.05  5.09  4.05
revised           2.57  4.77  2.20  3.99  3.70  3.46  2.34  0.34  2.72  3.46  1.60
clustered         2.42  3.75  2.42  3.75  3.75  3.75  2.42  0.34  2.42  3.75  2.42
frequency (%)      3.3   8.9   0.3   3.6  21.1  19.1  15.2   3.3  19.1   2.3   3.6

we fixed other parts of the NPPc model. The most striking result is that the revised intrinsic values are nearly always lower than the initial values. The RMSE for the original model was 467.910, whereas the error using the revised values was 432.410 on the training set and 448.376 using cross validation. The latter constitutes an error reduction of over four percent, which seems substantial.

However, since the original 11 intrinsic values were grouped into only four distinct values, we applied RF6's clustering procedure over the trained neural network to group the revised values in the same manner. We examined the effect on error rate as we varied the number of clusters from one to five; as expected, the training RMSE decreased monotonically, but the cross-validation RMSE was minimized for three clusters of values. The estimated error for this revised model is slightly better than for the one with 11 distinct values. Again, the clustered values are nearly always lower than the initial ones, a result that is certainly interesting from an Earth science viewpoint.

We suspect that measurements of NPPc and related variables from a wider range of sites would produce intrinsic values closer to those in the original model. However, such a test must await additional observations and, for now, empirical fit to the available data should outweigh the theoretical basis for the initial settings.

In another approach to revising intrinsic values, we retained the original grouping of vegetation types into sets, with each type in a given set having the same value. We utilized a weight-sharing technique to encode this background knowledge in a neural network. For example, let vA and vF be weights corresponding to the SRDIFF values for vegetation types A and F, respectively; to ensure these values remained the same, we treated them as a single weight, say vAF.
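The effect of such weight sharing on gradients is easy to see in a toy setting; the error function below is invented for illustration and is not the model's F2.

```python
# Toy error with separate slots for vegetation types A and F:
# F(vA, vF) = (vA - 3)**2 + (vF - 1)**2, evaluated with vA = vF = vAF.
def d_vA(v):
    return 2 * (v - 3.0)   # partial derivative w.r.t. the A slot

def d_vF(v):
    return 2 * (v - 1.0)   # partial derivative w.r.t. the F slot

def d_vAF(v):
    # Shared-weight gradient: the sum of the per-slot derivatives
    return d_vA(v) + d_vF(v)

v = 2.0                    # minimizer of the shared-weight error
shared_grad = d_vAF(v)     # zero at the shared optimum
unlikeness = abs(d_vA(v)) + abs(d_vF(v))   # nonzero: the slots disagree
```

At v = 2 the shared gradient vanishes, yet each slot would still prefer to move (toward 3 and 1, respectively); the sum of their absolute derivatives is exactly the 'unlikeness' criterion used in the paper.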
Here we can see that BPQ calculates the derivative of the error function over vAF as the sum of the individual derivatives over vA and vF:

∂F2/∂vAF = ∂F2/∂vA + ∂F2/∂vF .

In the trained neural network, the derivative over vAF becomes zero, but there is no guarantee that each derivative over vA or vF will do so. Therefore, we can treat the sum of the absolute values of the derivatives over shared weights, like vA and vF, as a criterion for the 'unlikeness' among the elements of such a grouping. Table 4 shows the revised values for the intrinsic property SRDIFF that result from this approach, along with values for the unlikeness criterion defined above.


Table 4. Original and revised values, using the original groupings, for the SRDIFF intrinsic property, along with the frequency and unlikeness for each vegetation group.

vegetation group   A∨F   B∨C   E∨J   D∨G∨H∨I∨K
original          3.06  4.35  5.09  4.05
revised           2.23  3.27  2.54  1.81
frequency (%)     22.4   9.2  23.4  44.9
unlikeness        26.1   0.3   2.3  13.6

As before, the obtained intrinsic values are always lower than the initial ones, and our criterion suggests that the group containing vegetation types A and F has the least coherence. The RMSE for the revised model was 442.782 on the training data and 449.097 using leave-one-out cross validation, again indicating about a four percent reduction in the model's overall error.

4.3 Results on Revising Equation Structure

We also wanted to demonstrate our approach's ability to improve the functional form of the NPPc model. For this purpose, we selected the equation for photosynthetic efficiency, E = 0.56 · T1 · T2 · W, which states that this term is a product of the water stress term, W, and the two temperature stress terms, T1 and T2. Because each stress factor takes on values less than one, multiplication has the effect of reducing photosynthetic efficiency E below the maximum 0.56 possible (Potter & Klooster, 1998).

Since E is calculated as a simple product of the three variables, one natural extension was to consider an equation that included exponents on these terms. To this end, we borrowed techniques from the RF5 system to create a neural network for such an expression, then used BPQ to minimize the error function

F3 = Σ_site (NPPc − Σ_month u0 · T1^u1 · T2^u2 · W^u3 · IPAR)² ,

over the parameters {u0, u1, u2, u3}, which assumes the equations that predict IPAR remain unchanged. We initialized u0 to 0.56 and the other parameters to 1.0, as in the original model, and constrained the latter to be positive. The revised equation found in this manner,

E = 0.521 · T1^0.00 · T2^0.03 · W^0.00 ,

has a small exponent for T2 and zero exponents for T1 and W, suggesting the former influences photosynthetic efficiency in minor ways and the latter not at all.

On the available data, the root mean squared error for the original model was 467.910. In contrast, the revised model has an RMSE of 443.307 on the training set and an RMSE of 446.270 using cross validation. Thus, the revised


equation produces a substantially better fit to the observations than does the original model, in this case reducing error by almost five percent. With regard to Earth science, these results are plausible and the most interesting of all, as they suggest that the T1 and W stress terms are unnecessary for predicting NPPc. One explanation is that the influence of these factors is already being captured by the NDVI measure available from space, for which the signal-to-noise ratio has been steadily improving since CASA was first developed.

These results encouraged us to explore more radical revisions to the functional form for photosynthetic efficiency. Thus, we told our system to consider a form that omitted the three stress factors but included the four variables that appear in their definitions (Topt, Tempc, EET, and PET):

E = v0 · exp(−0.5 · (v1 · Topt + v2 · Tempc + v3 · EET + v4 · PET + v5)^2) .

This Gaussian-like activation function satisfies the constraint that E is positive and less than one. Running BPQ to minimize the error function over {v0, ..., v5} produced the equation

E = 0.57 · exp(−0.5 · (−0.04 · Topt + 0.03 · Tempc − 0.03 · EET + 0.01 · PET)^2) ,

where we eliminated the parameter v5 because its value was −0.003. The RMSE for the revised model was 439.101 on the training data and 444.470 using leave-one-out cross validation, indicating more than a five percent reduction in error.

These results are very similar to those from our first approach, which produced a cross-validation RMSE of 446.270. In this case, the revised model is simpler in that it defines E directly in terms of Topt, Tempc, EET, and PET, rather than relying on the theoretical terms T1, T2, and W, two of which provide no predictive power. On the other hand, the original form for E had a clear theoretical interpretation, whereas the new version does not.
In such situations, the final decision should be left to domain scientists, who are best suited to balance a model's simplicity against its interpretability.
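For reference, the revised Gaussian-like form for E is easy to state directly in code, using the fitted parameter values reported above; any inputs passed to it here are illustrative.

```python
import math

def efficiency_revised(topt, tempc, eet, pet):
    # E = 0.57 * exp(-0.5 * (-0.04*Topt + 0.03*Tempc - 0.03*EET + 0.01*PET)**2)
    a = -0.04 * topt + 0.03 * tempc - 0.03 * eet + 0.01 * pet
    return 0.57 * math.exp(-0.5 * a * a)
```

By construction the value is always positive and never exceeds 0.57, which is how this form satisfies the constraint that E stay between zero and one.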

5 Related Research on Computational Discovery

Our research on computational scientific discovery draws on two previous lines of work. One approach, which has an extended history within artificial intelligence, addresses the discovery of explicit quantitative laws. Early systems for numeric law discovery like Bacon (Langley, 1979; Langley et al., 1987) carried out a heuristic search through a space of new terms and simple equations. Numerous successors like Fahrenheit (Żytkow et al., 1990) and RF5 (Saito & Nakano, 1997) incorporate more sophisticated and more extensive search through a larger space of numeric equations.

The most relevant equation discovery systems take into account domain knowledge to constrain the search for numeric laws. For example, Kokar's (1986) Coper utilized knowledge about the dimensions of variables to focus attention and, more recently, Washio and Motoda's (1998) SDS extends this idea to support different types of variables and sets of simultaneous equations. Todorovski


and Džeroski's (1997) LaGramge takes a quite different approach, using domain knowledge in the form of context-free grammars to constrain its search through a space of differential equation models that describe temporal behavior.

Although research on computational discovery of numeric laws has emphasized communicable scientific notations, it has focused on constructing such laws rather than revising existing ones. In contrast, another line of research has addressed the refinement of existing models to improve their fit to observations. For example, Ourston and Mooney (1990) developed a method that used training data to revise models stated as sets of propositional Horn clauses. Towell (1991) reports another approach that transforms such models into multilayer neural networks, then uses backpropagation to improve their fit to observations, much as we have done for numeric equations. Work in this paradigm has emphasized classification rather than regression tasks, but one can view our work as adapting the basic approach to equation discovery.

We should also mention related work on the automated improvement of ecosystem models. Most AI work on Earth science domains focuses on learning classifiers that predict vegetation from satellite measures like NDVI, as contrasted with our concern for numeric prediction. Chown and Dietterich (2000) describe an approach that improves an existing ecosystem model's fit to continuous data, but their method only alters parameter values and does not revise equation structure. On another front, Schwabacher and Langley (2001) use a rule-induction algorithm to discover piecewise linear models that predict NDVI from climate variables, but their method takes no advantage of existing models.

6 Directions for Future Research

Although we have been encouraged by our results to date, there remain a number of directions in which we must extend our approach before it can become a useful tool for scientists. As noted earlier, we envision an interactive discovery aid that lets the user focus the system's attention on those portions of the model it should attempt to improve. To this end, we need a graphical interface that supports marking of parameters, intrinsic properties, and equations that can be revised, as well as tools for displaying errors as a function of space, time, and predictive variables. In addition, the current system is limited to revising the parameters or form of one equation in the model at a time, and it requires some handcrafting to encode the equations as a neural network. Future versions should support revision of multiple equations at the same time, preferably invoking the same variants of backpropagation as we have used to date, and also provide a library that maps functional forms to neural network encodings, so the system can transform the former into the latter automatically. We should also explore using other approaches to equation discovery, such as Todorovski and Džeroski's LaGramge, in place of the RF6 algorithm.

Naturally, we also hope to evaluate our approach on its ability to improve other portions of the CASA model, as additional data become available. Another test of generality would be application of the same methods to other scientific domains in which there already exist formal models that can be revised. In the longer term, we should evaluate our interactive system not only on its ability to increase the predictive accuracy of an existing model, but also in terms of the satisfaction of the scientists who use the system to that end.

Another challenge that we have encountered in our research has been the need to translate the existing CASA model into a declarative form that our discovery system can manipulate. In response, another long-term goal involves developing a modeling language in which scientists can cast their initial models and carry out simulations, but that can also serve as the declarative representation for our discovery methods. The ability to automatically revise models places novel constraints on such a language, but we are confident that the result will prove a useful aid to the discovery process.

7 Concluding Remarks

In this paper, we addressed the computational task of improving an existing scientific model that is composed of numeric equations. We illustrated this problem with an example model from the Earth sciences that predicts carbon production as a function of temperature, sunlight, and other variables. We identified three activities that can improve a model (revising an equation's parameters, altering the values of an intrinsic property, and changing the functional form of an equation), and then presented results for each type on an ecosystem modeling task, results that reduced the model's prediction error, sometimes substantially. Our research on model revision builds on previous work in numeric law discovery and qualitative theory refinement, but it combines these two themes in novel ways to enable new capabilities. Clearly, we remain some distance from our goal of an interactive discovery tool that scientists can use to improve their models, but we have taken some important steps along the path, and we are encouraged by our initial results on an important scientific problem.

References

Chown, E., & Dietterich, T. G. (2000). A divide and conquer approach to learning from prior knowledge. Proceedings of the Seventeenth International Conference on Machine Learning (pp. 143-150). San Francisco: Morgan Kaufmann.
Durbin, R., & Rumelhart, D. E. (1989). Product units: A computationally powerful and biologically plausible extension. Neural Computation, 1, 133-142.
Kokar, M. M. (1986). Determining arguments of invariant functional descriptions. Machine Learning, 1, 403-422.
Langley, P. (1979). Rediscovering physics with Bacon.3. Proceedings of the Sixth International Joint Conference on Artificial Intelligence (pp. 505-507). Tokyo, Japan: Morgan Kaufmann.
Langley, P. (1998). The computer-aided discovery of scientific knowledge. Proceedings of the First International Conference on Discovery Science. Fukuoka, Japan: Springer.
Langley, P., Simon, H. A., Bradshaw, G. L., & Żytkow, J. M. (1987). Scientific discovery: Computational explorations of the creative processes. Cambridge, MA: MIT Press.
Lenat, D. B. (1977). Automated theory formation in mathematics. Proceedings of the Fifth International Joint Conference on Artificial Intelligence (pp. 833-842). Cambridge, MA: Morgan Kaufmann.
Ourston, D., & Mooney, R. (1990). Changing the rules: A comprehensive approach to theory refinement. Proceedings of the Eighth National Conference on Artificial Intelligence (pp. 815-820). Boston: AAAI Press.
Potter, C. S., & Klooster, S. A. (1997). Global model estimates of carbon and nitrogen storage in litter and soil pools: Response to change in vegetation quality and biomass allocation. Tellus, 49B, 1-17.
Potter, C. S., & Klooster, S. A. (1998). Interannual variability in soil trace gas (CO2, N2O, NO) fluxes and analysis of controllers on regional to global scales. Global Biogeochemical Cycles, 12, 621-635.
Saito, K., & Nakano, R. (1997). Law discovery using neural networks. Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence (pp. 1078-1083). Yokohama: Morgan Kaufmann.
Saito, K., & Nakano, R. (2000). Discovery of nominally conditioned polynomials using neural networks, vector quantizers and decision trees. Proceedings of the Third International Conference on Discovery Science (pp. 325-329). Kyoto: Springer.
Schwabacher, M., & Langley, P. (2001). Discovering communicable scientific knowledge from spatio-temporal data. Proceedings of the Eighteenth International Conference on Machine Learning (pp. 489-496). Williamstown: Morgan Kaufmann.
Thornthwaite, C. W. (1948). An approach toward rational classification of climate. Geographic Review, 38, 55-94.
Todorovski, L., & Džeroski, S. (1997). Declarative bias in equation discovery. Proceedings of the Fourteenth International Conference on Machine Learning (pp. 376-384). San Francisco: Morgan Kaufmann.
Towell, G. (1991). Symbolic knowledge and neural networks: Insertion, refinement, and extraction. Doctoral dissertation, Computer Sciences Department, University of Wisconsin, Madison.
Washio, T., & Motoda, H. (1998). Discovering admissible simultaneous equations of large scale systems. Proceedings of the Fifteenth National Conference on Artificial Intelligence (pp. 189-196). Madison, WI: AAAI Press.
Żytkow, J. M., Zhu, J., & Hussam, A. (1990). Automated discovery in a chemistry laboratory. Proceedings of the Eighth National Conference on Artificial Intelligence (pp. 889-894). Boston, MA: AAAI Press.

Efficient Local Search in Conceptual Clustering

Céline Robardet and Fabien Feschet

Laboratoire d'Analyse des Systèmes de Santé, Université Lyon 1, UMR 5823, bât. 101, 43 bd du 11 nov. 1918, 69622 Villeurbanne cedex, France
robardet@univ-lyon1.fr

Abstract. In this paper, we consider unsupervised clustering as a combinatorial optimization problem. We focus on the use of local search procedures to optimize an association coefficient whose aim is to construct a couple of conceptual partitions, one on the set of objects and the other on the set of attribute-value pairs. We present a study of the variation of the function in order to decrease the complexity of local search and to propose stochastic local search. The performance of the given algorithms is tested on synthetic data sets and on the real data set Vote taken from the UCI repository.

Keywords: Unsupervised conceptual clustering, optimization procedure, local search.

1 Introduction

In the early steps of knowledge discovery from large databases, structuring the data is a fundamental procedure which permits a better understanding of the data and the definition of groups with regard to an a priori similarity measure. In the unsupervised learning context, this is usually referred to as clustering. The data are composed of a set of objects described by a set of attributes, such that each object has a value for every attribute. In classification/regression, we have a target attribute which can be used to construct the groups. Knowledge discovery can then be done through the learning of rules which explain the values of the target attribute using the other attributes; in this way, each group of objects is associated with a set of attribute-value pairs [Rak97]. When no prior information is available, clustering procedures can be used to discover the underlying structure of the data. They construct a partition of the set of objects such that the most similar objects belong to the same cluster whereas the most dissimilar ones belong to different clusters. Hence, those procedures synthesize the data into a few clusters.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 323-335, 2001. © Springer-Verlag Berlin Heidelberg 2001

One of the key points in clustering is the a priori definition of similarity. When dealing with numerical attributes, it is usual to relate the similarity between two objects to their distance. Clustering is then reduced to the determination of groups minimizing the intra-cluster dissimilarity and maximizing the inter-cluster one. For instance, in the K-MEANS algorithm [JD88,CDG+88], Euclidean distances between representative vectors of objects are used. This can also be extended to ordinal data and even to symbolic data, but distances become less representative in this case. Instead, probabilistic representations are preferred. The difference between the probability of appearance of an attribute-value pair on the whole set of objects and its restriction to the set of objects belonging to a particular cluster is used to guide the search for a good partition. It is a trade-off between intra-class similarity and inter-class dissimilarity of the objects. For example, in the COBWEB algorithm [Fis87,Fis96], the category utility function is used as an objective function; it is a weighted averaging of the well-known Gini index which does not fix the number of clusters. Other methods like AUTOCLASS [CS96] also use Bayesian classification, modeling objects by finite mixture distributions.

Another key point in clustering is the optimization procedure. The cardinality of the set of all possible partitions increases exponentially with the size n of the set of objects, which leads to the use of fast but often rough heuristics. In the K-MEANS algorithm, a heuristic based on the principle of reallocation is used. At each step, cluster centroids are computed and each object is assigned to the cluster whose centroid is closest. After a few such steps the procedure stops improving the partition. Unfortunately, the algorithm makes only local changes to the initial partition and thus typically gets trapped in the first local minimum. The COBWEB method uses an incremental procedure which classifies objects one by one. For each object, the procedure evaluates two options: classifying the object in one of the existing clusters, or creating a new cluster containing only this object. The operation which leads to the largest increase in the function is selected. The main drawback of this heuristic is that it often constructs a local optimum which is dependent on the order of the objects in the incremental process.
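The reallocation principle described above can be made concrete with a minimal sketch (our own illustration, not the implementation of any cited system; it works on one-dimensional points to stay short):

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Reallocation heuristic: assign each point to the nearest centroid,
    then recompute centroids, until assignments stop changing."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    assign = None
    for _ in range(iters):
        new_assign = [min(range(k), key=lambda c: (p - centroids[c]) ** 2)
                      for p in points]
        if new_assign == assign:       # converged: a local minimum is reached
            break
        assign = new_assign
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:                # keep the old centroid for empty clusters
                centroids[c] = sum(members) / len(members)
    return assign, centroids

# Two well-separated one-dimensional groups.
data = [0.9, 1.0, 1.1, 9.9, 10.0, 10.1]
assign, cents = kmeans(data, k=2)
```

On well-separated data this converges in a handful of passes, but, as the text notes, only local changes are made and a poor initial choice of centroids can leave the procedure in a local minimum.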
In AUTOCLASS, optimization is done for maximum a posteriori (MAP) parameters with the EM algorithm: among a set of models, each specifying an a priori number of clusters and probability distribution functions, the method estimates parameters using the EM algorithm and chooses the best model using a MAP estimator.

Optimization can be global or local. The first is usually unreachable and the second is very sensitive to initial conditions. Popular methods like Tabu Search or Genetic Algorithms are widely used without a clear understanding of how they work. In this paper, we restrict ourselves to local optimization procedures, and more precisely to the simplest one, the local search procedure. Local optimization seems to be a promising method for clustering, since it has provided good results at low cost on many combinatorial optimization problems. We base our study on a variational approach to an objective function which is described in Section 2. Variations of the function through elementary modifications are studied in Section 4, where a single model of modification is given. This permits us to introduce five optimization procedures (one deterministic and four stochastic) which are experimentally studied on two different data sets: the first is an artificial data set and the second is the Vote data from the UCI repository. We then propose some conclusions and future work.

2 Clustering Method

To strengthen the semantic knowledge held by partitions, we study an algorithm for the construction of two linked partitions, one on the set of objects and the other on the set of attribute-value pairs; we call this couple a bi-partition. Similar methods have already been proposed. We can cite the data reorganization methods [MSW72,SCH75], which consist in permuting the rows and columns of a data table on the basis of a distance to be minimized. Another is the simultaneous clustering algorithm [Gov84]: it searches for a couple of partitions into a priori fixed numbers K and L of clusters, together with an ideal binary table of dimensions K × L, such that the gap between the initial data table, structured by the two partitions, and the ideal table is minimized. Those two procedures have important drawbacks: the first methods do not themselves produce partitions, which must be constructed by the user; the second determines a couple of partitions with a priori fixed numbers of clusters, and the resulting couple of partitions is often far from the global optimum. To enforce the knowledge contribution brought by the bi-partition, we favor couples of partitions which satisfy the following property.

Property: The functional link, which restores one partition on the basis of the knowledge of the second one, must be as strong as possible. Furthermore, both partitions must have the same numbers of clusters.

To evaluate the quality of a bi-partition regarding this property, we construct a function over PO × PQ, where PO is the set of partitions of the set of objects and PQ is the set of partitions of the set of attribute-value pairs. This function must satisfy some properties [Rak97,RF01] to be adapted to the clustering structure, such as independence under cluster permutations or the ability to treat bi-partitions whose partitions have different numbers of clusters.
These properties are partially satisfied by association measures, which have been built to evaluate the link between two qualitative attributes X and Y considered as partitions of the same set. Association measures are widely used in supervised clustering [LdC96], whereas few unsupervised clustering algorithms use them [MH91]. We propose [RF01] to use an adaptation of the τb measure constructed by Goodman and Kruskal [GK54], which we call τQ:

\tau_Q = \frac{\sum_i \sum_j p_{ij}^2 / p_{i.} \;-\; \sum_j p_{.j}^2}{1 - \sum_j p_{.j}^2}

We name τO the above measure obtained when exchanging the attributes (τO is used to determine an adequate partition on PO, and τQ an adequate one on PQ). We denote by p_{i.} (resp. p_{.j}) the frequency estimate of the probability associated with the attribute-value pair i (resp. j) of the attribute X (resp. Y), and by p_{ij} the frequency estimate of the probability that the attribute-value pair i of X and the attribute-value pair j of Y arise simultaneously. The τQ


coefficient evaluates the proportional reduction in error given by the knowledge of the attribute X on the prediction of Y. It takes into account the whole structure of the distribution when estimating the variation in prediction. Using this measure, we do not need to fix the number of clusters in the partitions. It measures how the knowledge of a partition P of PO improves the prediction of the cluster of an attribute-value pair in a partition Q of PQ, knowing the cluster(s) of P which contain objects described by that attribute-value pair. The measure is normalized, and consequently neither the discrete nor the single-cluster partition is favored. Moreover, experiments comparing several association measures on different synthetic data sets have been carried out by M. Olszak [Ols95] and also by us [RF01]; in both studies, the authors find that τQ has an appropriate behavior.

To overcome the fact that our two partitions are not based on the same set, we build a co-occurrence table. In the data, each object is described by h attributes V_a such that V_a : O → dom_a, and Q = \bigcup_{a=1}^{h} dom_a is the set of all attribute-value pairs, distinguishing equal values of different attributes. The co-occurrence table between a partition P = (P_1, ..., P_K) on the set O of objects and a partition Q = (Q_1, ..., Q_K) on the set Q is (n_{ij})_{i,j} with

n_{ij} = \sum_{x \in P_i} \sum_{y \in Q_j} \sum_{a=1}^{h} \delta_{V_a(x), y}

where δ is the Kronecker symbol (δ_{V_a(x),y} = 1 if V_a(x) = y, and 0 otherwise). Consequently, we replace the previous p_{ij} (resp. p_{i.}) notation by n_{ij}/n_{..} (resp. n_{i.}/n_{..}), where n_{i.} = \sum_j n_{ij} and n_{..} = \sum_i \sum_j n_{ij}.

To determine the best bi-partition, we search for a bi-partition which maximizes the τQ and τO measures. The problem is now to find an adequate optimization procedure, remembering that we face a combinatorial optimization problem. Note that the search space PO × PQ is huge (exponential in n):

|\mathcal{P}_X| = \sum_{c=1}^{m} \frac{1}{c!} \sum_{i=1}^{c} (-1)^{c-i} \binom{c}{i} i^m \qquad \text{with } |X| = m \text{ and } X \in \{O, Q\}

Consequently, exhaustive or potentially exhaustive search procedures, like Branch and Bound, are unrealistic in terms of time efficiency. Using other procedures, we have no guarantee that the obtained solution is a global optimum. Choosing a local optimization method is a trade-off between computation cost and quality of the result.
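The count |P_X| above is the Bell number of an m-element set; a short exact computation (our own illustrative helper, not part of the paper) shows how quickly it explodes:

```python
from math import comb, factorial

def partition_count(m):
    # Bell number B_m via the inclusion-exclusion formula in the text:
    # sum over c of (1/c!) * sum_i (-1)**(c-i) * C(c, i) * i**m,
    # where the inner sum is c! times the Stirling number S(m, c).
    total = 0
    for c in range(1, m + 1):
        s = sum((-1) ** (c - i) * comb(c, i) * i ** m for i in range(1, c + 1))
        total += s // factorial(c)   # division is exact in the integers
    return total

print([partition_count(m) for m in range(1, 6)])   # [1, 2, 5, 15, 52]
```

Already for a few hundred objects the count dwarfs what any exhaustive search could enumerate, which is the point the text makes against Branch and Bound.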
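Similarly, the τ measure defined earlier in this section translates directly into code (an illustrative transcription of the formula; the function name is ours). Rows index the categories of X and columns those of Y:

```python
def tau(table):
    # Goodman-Kruskal tau: proportional reduction in the error of
    # predicting the column category once the row category is known.
    n = sum(sum(row) for row in table)
    p = [[v / n for v in row] for row in table]
    p_i = [sum(row) for row in p]                 # marginals p_i.
    p_j = [sum(col) for col in zip(*p)]           # marginals p_.j
    num = sum(pij ** 2 / pi
              for row, pi in zip(p, p_i) if pi > 0
              for pij in row)
    d = sum(pj ** 2 for pj in p_j)
    return (num - d) / (1 - d)

print(tau([[5, 0], [0, 5]]))   # perfect association: 1.0
print(tau([[2, 2], [2, 2]]))   # independence: 0.0
```

The two extreme cases illustrate the normalization property claimed in the text: the measure reaches 1 only under a perfect functional link and 0 under independence.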

3 Local Search

We consider general-purpose methods based on the definition of a neighborhood of a given partition. At each step, a new solution is chosen from the neighborhood of the previous one, so that the algorithm converges towards at least a local optimum. Generating several candidate solutions at each step allows the search to be directed towards the candidates which most improve the function. The main difficulty is to construct an efficient neighborhood that is sufficiently rich yet of tractable complexity. Recent works [FK00,GKLN00] attempt to apply local search algorithms to clustering problems. [GKLN00] propose six operators for generating a partition from another one. They apply these operators first successively, and then stochastically according to their frequency of improving the function; they observe that the second algorithm is more robust than the first. [FK00] couples local search with the K-MEANS algorithm: the neighborhood function randomly swaps a cluster centroid with another object and then applies the K-MEANS procedure. This procedure is less dependent on the initialization of the algorithm and provides robust results. Both papers introduce randomness into the neighborhood-generation process and observe an increase in the quality of the results.

Local search is often compared with Tabu Search, Genetic Algorithms, and Simulated Annealing, which attempt to reach a possibly global optimum without visiting all possible solutions. Tabu Search chooses a better solution than the current one when one exists, and accepts a sub-optimal solution otherwise; a tabu list prevents returning to a recently evaluated candidate. The procedure can thus pass through local optima, but often at a high computing cost. Simulated Annealing relies on a stochastic process which allows escaping from local optima. Solutions which improve the objective function are not necessarily kept: the selection process accepts solutions according to an associated probability, which increases for solutions improving the function. The probability is also influenced by a global parameter called the temperature, which gradually decreases to force the convergence of the algorithm to an optimum. Whereas the other methods generate a single new solution at each step, the particularity of Genetic Algorithms [Rud94] is to maintain a set of best solutions, called a population, at each step. The neighborhood of the population is defined using genetic operators such as reproduction, mutation, and crossover. New candidates which surpass their parents are always kept, which guarantees convergence to a good solution. [BRE91,Col98] apply such algorithms to clustering problems.
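The acceptance rule that lets Simulated Annealing escape local optima can be stated in a few lines (a generic Metropolis-style sketch with names of our own choosing, not tied to any of the cited systems):

```python
import math
import random

def accept(delta, temperature, rng):
    # For minimization: improving moves (delta <= 0) are always kept;
    # worsening moves are kept with probability exp(-delta / temperature).
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

rng = random.Random(0)
# At high temperature most worsening moves pass; near zero almost none do,
# which is how the cooling schedule forces convergence.
hot = sum(accept(1.0, 10.0, rng) for _ in range(1000))
cold = sum(accept(1.0, 0.1, rng) for _ in range(1000))
```

The contrast between `hot` and `cold` is exactly the effect of the decreasing temperature described above: early on the walk explores freely, and later it behaves like plain local search.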

4 Variational Approach

When using local optimization procedures, one usually defines operators, applies them to the current solution to generate the neighborhood, computes the measure on each member of the neighborhood, and compares the values with the one obtained on the current solution. This procedure is expensive in both memory and computing time. In our problem, computing the measure on a new partition may require duplicating the co-occurrence table, and consequently doubling the memory used, which is a drawback for the scalability of the method. Furthermore, the complexity of evaluating the τQ


measure is in O(p × q), with p the number of clusters of P and q that of Q. This cost is multiplied by the cardinality of the neighborhood. To overcome these drawbacks, we propose a variational approach for evaluating the objective function. We define three operators for generating neighboring partitions: the transfer of one element from a cluster to another, the split of a cluster into two, and the merging of two clusters into one. These operators constitute a complete generating system, because whatever the current partition is, we can reach any other one by applying a finite number of such operators. We evaluate the variation of the τQ measure when modifying the current partition by one of the three operators.

We first consider the variation of τQ when transferring, in the partition Q, one attribute-value pair y from a cluster denoted b to another denoted e. Given that each cluster of Q is linked to a column of the co-occurrence table, the transfer of y from Q_b to Q_e moves a quantity λ_i^y from the cell in row i and column b to the one in row i and column e. Let us denote by n_{ij} the elements of the old co-occurrence table, and by m_{ij} those of the new one. The transfer of y induces the following relations between n_{ij} and m_{ij}:

m_{ib} = n_{ib} - \lambda_i^y \,;\qquad m_{ie} = n_{ie} + \lambda_i^y \,;\qquad m_{ij} = n_{ij} \text{ otherwise} \qquad (1)

The variation of τQ induced by the transfer is then

\tau_Q^{old} - \tau_Q^{new} = \frac{\sum_i \sum_j \frac{n_{ij}^2}{n_{i.} n_{..}} - \sum_j \frac{n_{.j}^2}{n_{..}^2}}{1 - \sum_j \frac{n_{.j}^2}{n_{..}^2}} \;-\; \frac{\sum_i \sum_j \frac{m_{ij}^2}{n_{i.} n_{..}} - \sum_j \frac{m_{.j}^2}{n_{..}^2}}{1 - \sum_j \frac{m_{.j}^2}{n_{..}^2}}

Simplifying using equations (1), we obtain

\tau_Q^{old} - \tau_Q^{new} = \frac{I \cdot \frac{2}{n_{..}} \sum_i \frac{\lambda_i^y}{n_{i.}} \left[ n_{ib} - n_{ie} - \lambda_i^y \right] \;+\; C \cdot \frac{2 \lambda^y}{n_{..}^2} \left[ n_{.e} - n_{.b} + \lambda^y \right]}{I^2 - I \, \frac{2}{n_{..}^2} \, \lambda^y \left( n_{.e} - n_{.b} + \lambda^y \right)}

where \lambda^y = \sum_i \lambda_i^y, and I and C are the following constants with respect to b and e:

I = 1 - \sum_j \frac{n_{.j}^2}{n_{..}^2} \qquad\qquad C = 1 - \sum_i \sum_j \frac{n_{ij}^2}{n_{i.} n_{..}}

The transfer of several attribute-value pairs in a single move leads to the same expression. Indeed, considering the transfer of a set S of attribute-value pairs, we compute the λ_i^S vectors as follows:

\lambda_i^y = \sum_{x \in P_i} \sum_{a=1}^{h} \delta_{V_a(x), y} \qquad \text{and} \qquad \lambda_i^S = \sum_{y \in S} \lambda_i^y

Consequently, the λ_i^S vectors are linear combinations of the (λ_i^y), and transferring a single attribute-value pair or a set of them is evaluated by the same expression.


Furthermore, the fusion of two clusters into a single one can be considered as the transfer of all attribute-value pairs of one cluster into another, which empties the first cluster. The computational expression is similar to the transfer's one. When two columns b and e are merged into column e, we have

\tau_Q^{old} - \tau_Q^{new} = \frac{I \cdot \frac{-2}{n_{..}} \sum_i \frac{\lambda_i^b}{n_{i.}} \, n_{ie} \;+\; C \cdot \frac{2 \lambda^b}{n_{..}^2} \, n_{.e}}{I^2 - I \, \frac{2}{n_{..}^2} \, \lambda^b \, n_{.e}}

Splitting a cluster into two is also a transfer-like operation: it can be viewed as the transfer of a set S of attribute-value pairs into a new, empty cluster. When a column b is split into columns e and b, the variation of τQ is

\tau_Q^{old} - \tau_Q^{new} = \frac{I \cdot \frac{2}{n_{..}} \sum_i \frac{\lambda_i^S}{n_{i.}} \left( n_{ib} - \lambda_i^S \right) \;+\; C \cdot \frac{2 \lambda^S}{n_{..}^2} \left( \lambda^S - n_{.b} \right)}{I^2 - I \, \frac{2}{n_{..}^2} \, \lambda^S \left( \lambda^S - n_{.b} \right)}

Similar expressions are found for the τO measure when moving a subset of objects from one row to another. The above expressions show that the variation of the τQ measure can be evaluated using only the co-occurrence table, for the n_{ij} parameters, and the data table, for computing the λ_i expressions. The partition itself is not needed for computing the variations. Furthermore, we have shown that the three operators lead to a unique expression of the τQ variation, which we denote Δ(λ_i^S, b, e): the fusion and the split are particular cases of the transfer modification. Computing Δ(λ_i^S, b, e) has a lower computational complexity than evaluating τ_Q^{old} - τ_Q^{new} directly. In the variational approach, the evaluation of the first partition is in O(p × q), because we need to compute the constant C. Then, once the constants I and C are fixed, the complexity of evaluating a new partition is in O(max(p, q)). Updating the constants I and C takes O(1) and O(p) respectively. Consequently, we reduce the complexity from O(p × q) to O(max(p, q)), except for the first evaluation.

Globally, the dimension of the problem is reduced. It is now expressed as a function of the elementary vectors (λ_i^y), with \lambda_i^y = \sum_{x \in P_i} \sum_{a=1}^{h} \delta_{V_a(x), y} for all attribute-value pairs y. All λ_i^S vectors can be generated from the elementary vectors as

\lambda_i^S = \sum_{y \in Q} \epsilon_y \, \lambda_i^y \qquad \epsilon_y \in \{0, 1\}

The problem is now to find a way to determine the λ_i^S vectors which lead to the largest increase in the measure. In the next section, we propose five algorithms which differ in the way they choose such vectors.
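The saving claimed above can be checked numerically. The sketch below is our own illustrative code, not the authors' implementation: it evaluates a transfer both by full recomputation on the modified table and by correcting only the terms of the two affected columns, following the variational expressions above. For brevity the cached sums s1 and s2 (that is, 1 - C and 1 - I) are recomputed inside the function, whereas the actual algorithm would maintain them across moves.

```python
def tau_q(n):
    # tau_Q of a co-occurrence table (rows: clusters of P, columns: of Q).
    tot = sum(sum(r) for r in n)
    ri = [sum(r) for r in n]
    s1 = sum(v * v / (ri[i] * tot) for i, r in enumerate(n) for v in r)
    s2 = sum(c * c for c in (sum(col) for col in zip(*n))) / tot ** 2
    return (s1 - s2) / (1 - s2)

def tau_q_after_transfer(n, lam, b, e):
    # Variational evaluation: tau_Q after moving lam[i] units from column b
    # to column e in every row i. Row sums (and n..) are unchanged, so only
    # the terms of the two affected columns need correcting.
    tot = sum(sum(r) for r in n)
    ri = [sum(r) for r in n]
    cj = [sum(col) for col in zip(*n)]
    s1 = sum(v * v / (ri[i] * tot) for i, r in enumerate(n) for v in r)
    s2 = sum(c * c for c in cj) / tot ** 2
    for i, li in enumerate(lam):
        s1 += ((n[i][b] - li) ** 2 - n[i][b] ** 2
               + (n[i][e] + li) ** 2 - n[i][e] ** 2) / (ri[i] * tot)
    lt = sum(lam)
    s2 += ((cj[b] - lt) ** 2 - cj[b] ** 2
           + (cj[e] + lt) ** 2 - cj[e] ** 2) / tot ** 2
    return (s1 - s2) / (1 - s2)

n = [[4, 1, 0], [0, 3, 2], [1, 0, 5]]
lam = [1, 0, 1]            # move one unit of rows 0 and 2 from column 0 to 2
incremental = tau_q_after_transfer(n, lam, b=0, e=2)
full = tau_q([[3, 1, 1], [0, 3, 2], [0, 0, 6]])   # the table after the move
```

Both evaluations agree to floating-point precision, which confirms that the column-wise correction carries exactly the same information as rebuilding the whole table.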

5 Algorithms

Using the previous variational approach in a local search procedure leads to the following deterministic algorithm:

For each cluster P_b do
  For each attribute-value pair y of P_b do
    For each cluster P_e ≠ P_b do
      Compute Δ((λ_i^y), b, e)
    End For
    If (min Δ < 0) then modify the co-occurrence table
  End For
End For

At each step, we consider one attribute-value pair per cluster and try to transfer it to another cluster. We modify the co-occurrence table for the transfer with the largest decrease. It is well known that randomness usually increases the performance of a deterministic algorithm. Stochastic optimization can be considered as a random walk over the set of all partitions; if this walk is guided so as to be attracted by high values of some measure on the partitions, the probability of visiting the partitions with the global maximum value is increased [FK00]. We thus propose four randomized versions, depending on which For loop of the deterministic version is randomized.

Stochastic 1 algorithm
  Randomly choose a cluster P_b
  Randomly choose y, an attribute-value pair
  For each cluster P_e ≠ P_b do
    Compute Δ((λ_i^y), b, e)
  End For
  If (min Δ < 0) then modify the co-occurrence table

Stochastic 2 algorithm
  Randomly choose a cluster P_b
  Randomly choose a subset S in P_b
  For each cluster P_e ≠ P_b do
    Compute Δ(λ_i^S, b, e)
  End For
  If (min Δ < 0) then modify the co-occurrence table

Stochastic 3 algorithm
  For each cluster P_b do
    Randomly choose y in P_b
    For each cluster P_e ≠ P_b do
      Compute Δ((λ_i^y), b, e)
    End For
    If (min Δ < 0) then modify the co-occurrence table
  End For

Stochastic 4 algorithm
  For each cluster P_b do
    Randomly choose a subset S in P_b
    For each cluster P_e ≠ P_b do
      Compute Δ(λ_i^S, b, e)
    End For
    If (min Δ < 0) then modify the co-occurrence table
  End For


These algorithms are different combinations of randomness and deterministic choice of the cluster and of the attribute-value pair(s) to move. Note that the odd-numbered versions choose a single attribute-value pair, whereas the even-numbered ones choose a subset of them. In the first two algorithms, the cluster from which y or S is removed is chosen randomly, whereas in the last two all clusters are examined. In all these algorithms, the best destination cluster is chosen after examining all possible ones.
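To make the control flow of the Stochastic 1 scheme concrete, here is an illustrative rendering (our own sketch: it minimizes a generic within-cluster scatter on numbers instead of evaluating Δ on the τ measure, and all names are ours):

```python
import random

def within_scatter(clusters):
    # Objective to minimize: summed squared deviation to each cluster mean.
    cost = 0.0
    for c in clusters:
        if c:
            mu = sum(c) / len(c)
            cost += sum((x - mu) ** 2 for x in c)
    return cost

def stochastic1(clusters, steps=500, seed=1):
    # Stochastic 1 flavour: pick a random source cluster and a random
    # element, evaluate every destination, and apply the transfer only
    # if the best one strictly improves the objective.
    rng = random.Random(seed)
    for _ in range(steps):
        b = rng.randrange(len(clusters))
        if not clusters[b]:
            continue
        y = rng.choice(clusters[b])
        best, best_e = 0.0, None
        for e in range(len(clusters)):
            if e == b:
                continue
            trial = [list(c) for c in clusters]
            trial[b].remove(y)
            trial[e].append(y)
            delta = within_scatter(trial) - within_scatter(clusters)
            if delta < best:
                best, best_e = delta, e
        if best_e is not None:
            clusters[b].remove(y)
            clusters[best_e].append(y)
    return clusters

start = [[0.9, 10.1, 1.1], [9.9, 1.0, 10.0]]
result = stochastic1([list(c) for c in start])
```

As in the algorithms above, only the best strictly improving destination is applied, so the objective decreases monotonically and the walk stops changing once a local optimum is reached.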

6 Experimentation

To optimize a bi-partition, we alternately execute the algorithm with τQ as objective function, which improves the partition Q of PQ, and then apply the same algorithm with τO as objective function, which improves the partition P of PO. We must underline that modifying Q (resp. P) greatly influences τQ (resp. τO) and, to a lesser extent, also influences τO (resp. τQ). This explains why, on some of the following graphics, we observe a decrease in the measure. We first apply the algorithms to a perfect synthetic data set which contains 200 objects and 30 attributes with 5 different values each. This data set is composed of 5 blocks of homogeneous data, forming a bi-partition into 5 clusters. Starting from the discrete partition (Fig. 1, left) or from a random partition (Fig. 1, right), we apply the five algorithms to this data set. In Fig. 1, the value of τQ is plotted at each step.


Fig. 1. Perfect synthetic data set, starting from the discrete partition (left), or from a random one (right)

On the synthetic data set, we observe that the deterministic and the third stochastic procedures find the optimal partition in fewer steps than the other procedures. This can be explained by the fact that in those procedures all possible clusters P_b and all possible clusters P_e are evaluated, and that at each step the best move for a given single y is chosen. The first stochastic procedure is also really impressive. When the first partition is the discrete one (see Fig. 1, left), it has the same behavior as the deterministic procedure. When the first partition is constructed randomly (see Fig. 1, right), it takes more steps to find the goal partition. The second and the fourth stochastic procedures are the slowest. They rely on a random choice of a subset of attribute-value pairs; when the subset is composed of dissimilar attribute-value pairs, the procedure cannot improve the value of the measure. This explains why those procedures perform better on the left graphic, where the first partition is the discrete one and consequently the possible subsets S are of small cardinality.

0.9

0.6 Determinist Stochastic 1 Stochastic 2 Stochastic 3 Stochastic 4

0.8

Determinist Stochastic 1 Stochastic 2 Stochastic 3 Stochastic 4

0.5

0.7

0.6

0.4

0.5 0.3 0.4

0.3

0.2

0.2 0.1 0.1

0

0 0

200

400

600

800

1000

0

200

400

600

800

1000

Fig. 2. Synthetic data set with 10% noise (left) and 30% noise (right)

The results obtained are similar to those found in the perfect case. The convergence speeds are in the same order. The previous graphics mask an important point: the time required for each step. Table 1 gathers the computation time, expressed in seconds, used for 10000 iterations. For information, the times were obtained on a Pentium II 300 MHz with 32 MB of memory.

Table 1. Computation time (in seconds) used by the several algorithms on different data sets

Data set    Deterministic  Stochastic 1  Stochastic 2  Stochastic 3  Stochastic 4
Perfect     204            2.66          23            39            81
10% noise   250            4             31            130           110
30% noise   275            4             31            130           120

Eﬃcient Local Search in Conceptual Clustering

333

The deterministic procedure is very time-consuming. The first stochastic procedure seems to be a good compromise between accuracy and time consumption. We then apply the algorithms to a well-known benchmark: the 1984 United States Congressional Voting Records Database. We remove the attribute expressing the vote. Fig. 3 (left) plots the τQ values at each iteration of the algorithms. Contrary to the previous experiments, the distinction between the second and the fourth stochastic procedures on the one hand, and the other procedures on the other hand, is less obvious. All the procedures find nearly the same partition. We can observe an unexpected decrease in the function. This is due to the fact that an increase in τO can lead to a decrease in τQ. Such a phenomenon appears rarely, and when it does, the algorithm quickly restores a better partition. Consequently this is not a handicap in the optimization process. To visualize the influence of the τO optimization on the τQ one, we plot (see Fig. 3, right) the value of both functions at each iteration. On this graph we clearly observe the compensation process in the optimization of both functions.

[Plots: left, the τQ value per iteration for the Deterministic and Stochastic 1–4 procedures; right, the τO and τQ values per iteration.]

Fig. 3. Vote Data Set (left), Values of τO and τQ when using Stochastic 1 (right)

The quality of the obtained partition of voters can be evaluated by comparing it with the results of the election, which decided between a Democrat and a Republican congress. We also obtain a partition in two clusters. Table 2 crosses the two partitions.

Table 2. Cross table of the votes and the results obtained by algorithm 4

Our results vs Vote   Democrat  Republican  Total
P1                    221       14          235
P2                    46        154         200
Total                 267       168         435
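The accuracy reported in the text can be recomputed directly from the cross table; a quick check:

```python
# Recomputing the prediction accuracy from the cross table: diagonal
# agreement (P1/Democrat and P2/Republican) over all 435 voters.
cross = {("P1", "Democrat"): 221, ("P1", "Republican"): 14,
         ("P2", "Democrat"): 46, ("P2", "Republican"): 154}
total = sum(cross.values())
correct = cross[("P1", "Democrat")] + cross[("P2", "Republican")]
accuracy = correct / total
print(round(100 * accuracy, 1))  # 86.2
```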

334

C. Robardet and F. Feschet

The group P1 of the obtained partition clearly corresponds to the Democrat population, whereas the group P2 fits the Republican population. The rate of accurate prediction is 86.2%, whereas about 90% accuracy appears to be STAGGER's asymptote. The quality of the partition on the set of attribute-value pairs is also very good. We denote by G1 the cluster of attribute-value pairs associated with the group P1, which fits the Democrat population well. This set gathers all the attribute-value pairs whose conditional probability of appearance, given that the voter is a Democrat, is greater than the one associated with the Republican voters. The probability of being a Democrat knowing the voter owns this attribute-value pair is also greater than the one obtained for the Republican voters, and this holds for all the attribute-value pairs. All attributes are binary (yes/no), and for each attribute, the yes value belongs to one cluster and the no value to the other. Consequently, we can say that the obtained partition is ideal regarding our criteria of a good partition.

7

Conclusion

In this article, we have presented a variational study of a function used for guiding the search for a partition in conceptual clustering. It consists in evaluating the variation of the function when transfer, merge or split operators are applied to modify a partition. We showed that using this approach in an optimization procedure decreases the computational cost. Furthermore, it simplifies the problem by expressing the three operators as a single one. We mix this approach with stochastic local search optimization procedures and apply them to a synthetic data set and to the real data set Vote taken from the UCI repository. The experimentation leads us to conclude that some randomness is needed in the local search procedure to speed up the convergence to the best partition. But too much randomness, when the procedure examines a random subset of attribute-value pairs of a cluster, slows down the convergence much more significantly. The partitions obtained on the Vote data set are both of excellent accuracy. The partition on the voters set is nearly the same as the one given by the result of the election, without taking this information into account. The partition on the set of attribute-value pairs follows exactly the conditional probabilities of appearance of those attribute-value pairs given the vote class. In future work, we plan to analytically approximate the combination of (λyi) which most improves the quality of the partition. This would reduce the number of optimization steps required to obtain an optimum.


Towards a Method of Searching a Diverse Theory Space for Scientific Discovery

Joseph Phillips
University of Pittsburgh, Computer Science Dept.
Pittsburgh, PA 15260, USA
josephp@cs.pitt.edu

Abstract. Scientists need customizable tools to help them with discovery. We present an adjustable heuristic function for scientific discovery. This function may be considered in either a Minimum Message Length (MML) or a Bayesian Net manner. The function is approximate because the default method of specifying theory prior probabilities is a gross estimate and because there is more to theory choice than maximizing probability. We do, however, effectively capture some user preferences with our technique. We show this for the qualitatively different domains of geophysics and sociology.

1 Introduction

Our ultimate goal is to write a general program to assist scientists in creating and improving scientific models. Realizing this goal requires progress in machine learning, knowledge discovery in databases, data visualization and search algorithms. It also requires progress in scientific model preferencing. The scientific model preference problem is compounded by the fact that several scientists with very similar background knowledge may see the same data but prefer different models. This paper is the first in an ongoing study to address the scientific model preferencing issue.

Scientific discovery can be viewed as a parameter search in a large and extremely inhomogeneous space. Physicists, for example, prefer strong relationships between numeric values (e.g., equations) when they can be found. They also, however, use knowledge that is more conveniently expressed hierarchically in decision trees and semantic nets. This is exemplified by the classification of, and the assigning of fundamental properties to, subatomic particles.

The minimum message length (MML) criterion is a mathematically well-grounded approach for choosing the most probable theory given data [21][8][24][5]. Inspired by information theory, the criterion states that the most probable model has the smallest encoding of both the theory and data. Ideally, the theory's encoding

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 304-322, 2001. © Springer-Verlag Berlin Heidelberg 2001

Towards a Method of Searching a Diverse Theory Space for Scientific Discovery

305

results from a domain expert's estimation of its prior probability and is language independent. The encoding of the data should also be probabilistic: as a function of a given theory. Despite its generality and power for finding parameters in single classes of models (e.g., the class of polynomials), many have expressed skepticism about whether MML may meaningfully be applied to finding parameters in inhomogeneous model spaces (e.g., general scientific discovery). Cheeseman, for example, states "although finding the most probable domain model is often regarded as the goal of scientific investigation, in general, it is not the optimal means of making predictions." [5]

Our immediate, limited goal is to devise a heuristic function that can help users in large and inhomogeneous model spaces. Ideally, a search algorithm that is informed with our heuristic will return several regions in the model space that contain promising models, some known and some novel. Our approach is to adapt MML in a customizable manner.

1. We make MML applicable to a larger set of scientific discovery problems by mapping its terms onto those used by scientists: theory, laws and data. The MML theory is mapped to scientific theory. The MML data is split into scientific laws and data.

2. We make our heuristic function adjustable, but in a principled manner, by giving the user only two calibration parameters. These parameters directly correspond to the relationship between scientific theory and law, and scientific theory and data.

It would be nice if we could ignore differences between theories and pretend that there is one "best" theory for all scientists. This, however, ignores significant evidence that scientists differ in opinion, e.g., see [10][15]. We judge our function based on criteria for heuristic functions: generality, ease of computation, simplicity and smoothness. We do not claim that we have "solved" this problem. The feature set by which to judge theories and the identification of the "best" model remain unsolved problems.

1. We offer no good guidance in developing the theory's prior probability. Cheeseman and others have stressed the importance of using domain knowledge to specify the theory's prior probability. They have also stated that syntactic features are often a poor substitute. We are aware of no general algorithm for the estimation of a theory's prior probability. Although our technique is not limited to syntactic features, we use them in this paper. Our approach is compatible with more principled prior-probability-specifying techniques.

2. We make no claim that the "best" theory will result from this approach. This is due to (1) the unsolved prior probability problem, (2) the difficulty in searching a large and inhomogeneous model space, and (3) the fact that the most probable model may or may not be the best model.

306

J. Phillips

We have developed a useful heuristic function despite these two major limitations. Its generality is tested by analyzing its performance in two completely different domains: sociology and geophysics. This paper is organized as follows. Section 2 discusses previous approaches to automated scientific discovery. Section 3 briefly introduces MML. Our approach is detailed in section 4. Section 5 presents and discusses our experiments. Section 6 concludes.

2 Scientific Discovery

Several criteria have been proposed by philosophers of science for comparing competing hypotheses [3]. Among them are accuracy/empirical support, simplicity, novelty and cost/utility. Most automated approaches consider accuracy and simplicity.

IDS by Nordhausen and Langley was perhaps the first general program for scientific discovery [18][19]. IDS takes as input an initial hierarchy of abstracted states and a sequential list of "histories" (qualitative states, see [6]). Using each history IDS modifies the affected nodes of the abstracted state tree to incorporate any new knowledge gained from that history. Its output is a fuller, richer hierarchy of nodes representing history abstractions.

Thagard introduced Processes of Induction (or PI) to propose a computational scheme for scientific reasoning and discovery, but not as a working discovery tool [23]. PI represents models as having theories, laws and data. It evaluates scientific models by multiplying a simplicity metric by a data coverage metric. The simplicity metric is a function of how many facts have been explained and of how many co-hypotheses were needed to help explain them. The evaluation scheme is fixed and has no notion of degree of inaccuracy.

Zytkow and Zembowicz developed 49er, a general knowledge discovery tool [27][26]. It has a two-stage process for finding regularities in databases. The first stage creates contingency tables (counts of how often values of one attribute co-occur with those of another) for pairings of database attributes. The second stage uses the contingency tables to constrain the search for other, higher order, regularities (e.g., taxonomies, equations, subset relations, etc.).

Valdes-Perez has suggested searching the space of scientific models from the simplest to ones with increasingly more complexity, stopping at the first that fits the data. MECHEM uses this approach to find chemical reaction mechanisms [25].
Such orderings would be easy to encode as heuristic functions. We extend these approaches by using an adjustable, explicitly mentioned heuristic function that does not require enumerating all possible models. Our approach is to generalize Thagard’s scheme and place it on sounder theoretical footing.


3 Information Theory and Diverse Model Discovery

The MML criterion is to minimize the sum of the length of a theory and data given the theory. Some data will have a smaller combined compressed length than the original message. For example, the pitch and relative durations of some bird calls may be written in musical notation. This notation dramatically reduces the information from the original time-dependent air-pressure signal that the bird produced. However, many sounds are not appropriately described by musical notation (e.g., human speech). The original time-dependent air-pressure signal will be a better representation than musical notation.

The equation that relates these terms for data set D; context c; discrete, mutually exclusive and exhaustive hypotheses {H0, H1 .. Hn} with assigned prior probabilities p(Hi|c); and computed conditional data probabilities p(D|Hi,c) is:

    −log p(Hi | D, c) = −log p(Hi | c) − log p(D | Hi, c) + const    (1)

which is equation (2) of [5]. Recall that −log(p(choice)) is the Shannon lower bound on the information needed to distinguish choice from other possibilities. The constant term serves to "normalize" the probabilities and may be ignored if you only want their relative order.

Cheeseman gives this iterative process for applying MML:
1. Define the theory space.
2. Use domain knowledge to assign prior probabilities to the theories.
3. Use Bayes' theorem to obtain the posterior probabilities of the theories given the data from adequate descriptions of the theories (i.e., from descriptions that let you compute p(D|Hi,c)).
4. Search the space with an appropriate algorithm.
5. Stop the search when a probable enough theory has been found (subject to computational constraints), or redefine the theory space or prior probabilities.

Several obstacles hamper efforts to apply MML to general scientific discovery. Among them are the specification of the initial theory prior probabilities, the inherently iterative nature of MML, and the difficulty in searching this space for a true "highest probability" theory. Like other MML efforts, there is no good rule for specifying an initial set of prior probabilities. Although Cheeseman and others warn about using syntactic features, this may be the easiest approach to try in a new domain. MML is an inherently iterative process of redefining theory spaces and prior probabilities. This complicates the usage of any function that needs calibration.
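Equation (1) can be applied directly: dropping the shared constant, each hypothesis is scored by the bits needed to state it plus the bits needed to encode the data given it, and the smallest total wins. A minimal sketch with made-up priors and likelihoods:

```python
import math

def mml_score(prior, likelihood):
    # Two-part message length (eq. 1 without the shared constant):
    # bits to state the hypothesis plus bits to encode the data given it.
    return -math.log2(prior) - math.log2(likelihood)

# Hypothetical numbers: H0 is a priori likelier but compresses the data
# poorly; H1 pays a larger prior cost but explains the data far better.
h0 = mml_score(prior=0.5, likelihood=2 ** -40)    # 1 + 40 = 41 bits
h1 = mml_score(prior=0.125, likelihood=2 ** -20)  # 3 + 20 = 23 bits
preferred = "H1" if h1 < h0 else "H0"
```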


The scientific theory search space is expected to be highly irregular, hampering the search for the "best" model. This is true of other domains as well. Cheeseman suggests simulated annealing and the EM algorithm as potential search mechanisms.

4 Our Approach

We do not claim to have an optimal heuristic function in terms of returning the truly "best" model. Rather, our goal is to create a decent heuristic function that may help scientists on their initial searches of large, inhomogeneous spaces. Good heuristics for real-world problems are often tricky to design [16]. We evaluate our function based on four criteria:

1. Generality over different sciences: We seek a function that is applicable to primarily conceptual models as well as primarily numeric ones.
2. Ease of computation: The function should not rely too heavily on values that are computationally difficult to obtain. And, once it has its values, it should be rapidly computable.
3. Simplicity of form: There are several competing beliefs for how scientific models should be evaluated. The function's design should be as transparent as possible so that its assumptions are readily comprehended.
4. Smoothness: The function should give similar models similar scores.

We chose these criteria because they are important to our long-term goal of creating a general program to assist a variety of scientists. Our contributions are the improvements in generality and ease of computation over Thagard's function. Generality is improved in three ways. First, the function is adjustable to the tastes of a particular scientist. Second, it is able to handle degrees of inaccuracy. Lastly, it may use statistical arguments as well as proofs. Statistical arguments also improve the ease of computation: the function does not have to try to formally prove laws or data using a perhaps undecidable theory. The form of our function, however, is a little more detailed than Thagard's. The smoothness of both approaches critically depends upon how the user designs models.

Following Thagard, models have three components: a theory that specifies the details of the model, the data to predict, and a set of laws found from the data and predicted by the theory.
The theory and the law set are both composed of assertions in some language. We use first order predicate logic with the data structure extensions of Prolog as our language in this paper. The distinction between which assertions are theory and which are laws is given by Lakatos. He distinguishes between commonly accepted knowledge (the "hard core", i.e., theory) and more tentatively held knowledge (the "auxiliary hypotheses", i.e., laws) of a given research program


[12][13]. The auxiliary hypotheses are the statements that are not commonly held (i.e., have lower prior probability), and are the main objects manipulated during Kuhnian normal scientific discovery [10]. The data are assumed to be in tabular form with associated uncertainties and error bars. It is simplest to assume that:
1. all measurements are independent of each other,
2. the data influence the choice of law set, and
3. the law set influences the choice of theory assertions.
Figure 1 depicts these assumptions graphically as a Bayesian network.

[Diagram: data → laws → theory]

Fig. 1. Bayesian network underlying the relationship between data, laws and theory

We are interested in the most probable total model. We derive the following starting from the Bayesian network of figure 1. Let T denote theory, LS denote a set of laws, and D denote data:

    p(T, D) = p(T | D) · p(D)    (2)

Using Bayes’ rule we may re-write this as:

    p(T, D) = Σ_i p(T) · p(LS_i | T) · p(D | LS_i)    (3)

The last expression sums over all law sets and is appropriate when there may be disagreement over which law set is best (e.g., several scientists combining their beliefs). However, for an individual scientist, a particular law set may appear much more probable than any of its competitors. In this case we may simplify the expression to:

    p(T, D) = p(T) · p(LS | T) · p(D | LS)    (4)

Now we consider the meaning of each term.


The first term of equation 4 tells us the a priori probability of a theory, without reference to the law set or data. It encodes the biases on theories. It may be used, for example, to prefer one type of assertion over another. A commonly mentioned bias in science is one for syntactic simplicity, which is often measured as the length of an expression in a given language. This first term is the natural place to encode such a bias because this common measure of simplicity is only a function of the length of the expression.

    p(T) = −log2(s(T))    (5)

The function s(T) returns a measure of the size of T in some language. The function p(T) uses Shannon information theory to convert from a size to a probability. We admit that the syntactic length metric is crude. We welcome scientists to redefine p(T) as they choose based upon their own domain knowledge. In defense of this initial estimate of p(T) we note that syntactic metrics: (1) are easy to compute, (2) are well agreed upon as being relevant (if not completely correct), (3) are common to many or all sciences (as opposed to symmetry, for example, which enjoys larger support among physicists than among other scientists), and (4) favor syntactically simple theories, which may be easier to comprehend. The last point is especially relevant for initial probability distributions, which may return several interesting model space regions that scientists must understand before determining whether they warrant further exploration.

The second term tells us how likely the assertions of the law set are given the theory that we have chosen. At one extreme, if all laws are logically entailed by the theory, the term is 1.0 because they must be true (given the theory as premises). It is also 1.0 if the law set is empty because the theory is used to directly compute the data. At the other extreme, the term must be 0.0 if the theory contradicts any statement of the law set. Values in between signify that the law set may or may not follow, depending on specific values of free parameters in the theory. Free parameters are values that the theory refers to that do not have definite values, but distributions over sets of values. Examples include coefficients with standard deviations, and random numbers used during stochastic experiments. In these cases, the second term is set equal to the fraction of the free parameter space in which all of the statements of the law set are found to hold. For random numbers it will be more practical to estimate this value by sampling the space.
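The sampling estimate of the second term can be sketched as follows; `laws_hold` and `sample_parameters` are hypothetical callbacks, not part of the paper:

```python
import random

def law_set_probability(laws_hold, sample_parameters, n_samples=10000, seed=1):
    """Estimate the second term p(LS | T) as the fraction of the theory's
    free-parameter space in which every law statement holds, by sampling."""
    rng = random.Random(seed)
    hits = sum(laws_hold(sample_parameters(rng)) for _ in range(n_samples))
    return hits / n_samples

# Toy theory with one free coefficient k, uniform on [0, 1); the single
# law "k > 0.25" then holds on 75% of the parameter space.
est = law_set_probability(lambda k: k > 0.25, lambda rng: rng.random())
```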
Laws are limited to refer to the theoretical terms introduced in theories. The third term measures empirical support and the degree of data coverage by telling us how likely the data are given the statements of the law set and theory. The


same extremes hold when all of the data are logically entailed or some of it is contradicted by the law set or theory. Again, values in between 0.0 and 1.0 represent the fraction of the free parameter space in which the data are observed. Statistical assertions have an implicit free parameter that tells from which data set the statistic was collected. For example, consider two integers, each in the set [0..9], with an average value of 1. The implicit free parameter must denote one of three sets: (1,1), (0,2) or (2,0).

Please consider this (propositional) example. Let our theory be the assertion "a>b", our law be "a" and our data be two occurrences of "b." We would pay the appropriate (perhaps syntactic) price for the theory. The law is not derivable from the theory, so we set its probability to p(a) (the a priori probability that free variable A, which ranges over "a" and "not(a)", actually is "a"). From our theory and law we may deduce our data with probability 1. If, however, we add the assertions "c->a" and "c" to our theory then we have (perhaps) increased theory cost, but the law is now deducible from the theory. Thus, the law has probability 1 and has no cost.

A problem with the heuristic function as given is that it has no parameters to be tuned to a particular scientist's preferences. This implies that it always returns the same value for the same arguments. This contradicts our goal of not imposing one ideal form on all scientific models. Scientists should be able to fine-tune the heuristic function, but any adjustment should be general enough to be applicable to all models. Further, we want the number of parameters to be relatively small, both because it will make the function easier to calibrate and because we want to guard against potential abuse by choosing a set of parameters that happen to make one model score well and a similar one score poorly. Our solution was to generalize the function in the following manner:

    h_tm+(T, LS, D) = p(T)^A · p(LS | T)^B · p(D | LS)^C    (6)

The "tm" signifies that the function is over total models (i.e., theory, law set and data) and the "+" reminds us that this is a function to maximize (i.e., larger values are better). The three parameters A, B and C allow us to independently vary the relative weights of the a priori model probability, the law set probability and the data probability. Instead of maximizing probability, we may view it as minimizing information:

    h_tm−(T, LS, D) = A · s(T) − B · log2(p(LS | T)) − C · log2(p(D | LS))    (7)

The “-” subscript denotes that this function should be minimized.
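A direct transcription of the minimized form (eq. 7); the sizes and probabilities below are made-up illustrations, not calibrated values:

```python
import math

def h_tm_minus(size_T, p_LS_given_T, p_D_given_LS, A=1.0, B=1.0, C=1.0):
    # Weighted total-model information to be minimized (eq. 7): theory
    # size plus the information of the law set given the theory and of
    # the data given the law set, each scaled by its calibration weight.
    return (A * size_T
            - B * math.log2(p_LS_given_T)
            - C * math.log2(p_D_given_LS))

# With data coverage weighted heavily (C = 4), a model whose laws predict
# the data with probability 0.25 pays 8 bits on the data term, while one
# predicting it with probability 0.5 pays only 4.
m1 = h_tm_minus(size_T=10, p_LS_given_T=1.0, p_D_given_LS=0.25, C=4.0)  # 18.0
m2 = h_tm_minus(size_T=10, p_LS_given_T=1.0, p_D_given_LS=0.5, C=4.0)   # 14.0
```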


Equation 7 generalizes the original MML equation 1 in two ways. First, equation 1's −log p(D|Hi,c) has been split into two terms, one for the law set and one for the data. Both are graded probabilistically. Second, the coefficients A, B and C act as linear weights for the information terms. The linear weights may seem to grossly overgeneralize equation 1, but it really depends on how they are used. This is discussed in more detail in the next section. There are two advantages to this weighting approach. First, it conforms to our notion that some sciences value theory conciseness and hard predictions more than others. Set the values of A and C higher in these sciences. Second, it does not allow arbitrary and contrived exceptions to make two similar total models score significantly differently. Although we have offered a syntactic feature-based approach to specifying a theory's a priori probability, we have not limited scientists to using it. Further, we admit that this is an iterative approach where probabilities are refined. Revisiting our criteria we find:

1. Generality is achieved with the adjustable weights, the usage of probabilities of laws instead of counts of "explained facts", the usage of prior distributions instead of "co-hypotheses", and the potential use of proofs or statistical arguments.
2. The ease of computation is limited by our proof or statistical argument method, not by the heuristic.
3. Simplicity is achieved because the form is a weighted sum with terms for theory, law and data.
4. Smoothness is achieved because lumping all theory together, all laws together and all data together hampers a user's ability to create one model that scores well and another very similar one that scores poorly.

Further generalizations of h_tm+ and h_tm− may be envisioned. Each of the coefficients A, B and C may split into several coefficients A[1..n1], B[1..n2] and C[1..n3]. These finer-grained coefficients may be used to weigh specific aspects of the theory (e.g.,
A[1] for equations, A[2] for decision trees, etc.), specific laws of the law set (e.g., B[1] for equations, B[2] for simple logical assertions, etc.), and specific types of data (e.g., C[1] for spatial measurements, C[2] for temporal measurements, etc.). Using the finer-grained coefficients is justifiable in some cases, such as when there are large differences in precision. For example, in seismology, earthquake times are known with very high precision: to within a few seconds per century. Earthquake locations are known with less precision: to only within tens of kilometers per 40,000 km (the Earth's circumference). Earthquake energies are known with far less precision, frequently only to an order of magnitude. We may want to weigh each type of


data separately, taking into consideration how much precision is given and how much we want this data to be fit at the expense of other data. Parameters A, B and C from equations 6 and 7 were not subdivided, to simplify analysis and presentation.

5 Experiments and Discussion

This section discusses the rough calibration of the heuristic function to models in two sciences. Geophysics and sociology were chosen because they cover a broad spectrum of acceptable scientific models. We do not evaluate this function by comparing its output with that of IDS, PI, 49er, or MECHEM. Which model a scientist believes in given specific data is, at least to some degree, subjective. Rather, we seek a method of calibrating our heuristic such that if it is given examples of models that users like then it can prefer similar models in the future.

The heuristic function's parameters may be calibrated for each science by analyzing its accepted models. Although there are three parameters, we only care about their relative values. Accordingly, we may set A to 1 and let B and C vary. Equivalently, borrowing from physical chemistry, we can plot B/A versus C/A to create a "phase diagram" that tells which of the various total models are preferred by the heuristic. Each phase diagram constrains the area of each scientific model. This in turn constrains B/A and C/A for all models. Comparing B/A with C/A makes the linear weights of equation 7 a conservative generalization of equation 1. The plots are primarily a comparison between B and C, and represent a value judgement on how much scientists want their uncertainty in the laws rather than in the data. There is no "correct" answer to this question. As we will see, it varies from scientist to scientist. This also strengthens our argument for an adjustable heuristic function. If a scientist prefers model X then that scientist should set the parameters to where X is preferred. If the scientist is strongly tempted by model Y, then the scientist should adjust the parameters to be in the region of X but leaning towards that of Y. The scientist may iteratively update the parameter values as new models are evaluated by both the scientist and the heuristic.
Please recall our limited goal: to do an initial search in a large and inhomogeneous space for areas that contain potentially promising models. We do not promise the best models. Also, this may be an iterative process where theory prior probabilities are revised according to previous results.

314

J. Phillips

The Knowledge Base and How It Predicts

The experiments were designed for a variant of the knowledge base discussed in [20]. The knowledge base has two lists of assertions, one for the theory and one for the laws. These assertions describe a standard is_a frame hierarchy of knowledge. Assertions may be frame inheritance statements, equations, or Prolog-like logic sentences. A Prolog-like resolution engine drives inference, but dedicated code handles frame inheritance and equations for efficiency. The output of the knowledge base for a given query is either an answer, or FAILURE, signifying no prediction is possible. An information cost is accrued by the data when a prediction is wrong or missing. For symbolic values this cost is the Shannon information cost of the prior probability of the recorded answer. Thus, the default model to try to beat is the product of the prior probabilities of each datum. For integers and fixed and floating point values the cost is¹:

–log₂(1 / (DistinctValDiff(predict, record) + 1))    (8)

where DistinctValDiff() returns the number of distinct, representable values between the predicted and recorded values in the attribute's given precision. (For example, if an attribute is limited to multiples of 0.1, then DistinctValDiff(0.2, 0.4) is 2.) When predict is missing, the function is set to its highest value for that attribute.

Sociology Data

This technique requires large amounts of calibration data. We focused on models of family structure because United States Census data on family structure are readily available [4]. Data are not available for specific individuals, but they are summarized in several tables. From these summaries the number of families with 1, 2, 3, 4, 5, and 6 or more "own children" may be calculated for each family type. The family types are married family, male-householder family, female-householder family, married subfamily, male-householder subfamily, and female-householder subfamily. Additionally, the number of childless families (but not subfamilies) may be calculated. The term "own children" means children related by birth, marriage, or adoption. The U.S. Census

¹ Equation 8 corresponds to the last term of equation 7. It defines a maximal probability at the recorded value, and exponentially decaying probability above and below that value. This distribution may be replaced by others and is not a critical aspect of this approach.
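To make equation 8 concrete, the cost can be sketched in a few lines of Python. The function names and the explicit step-size argument are ours, not the paper's; the paper leaves the attribute's precision implicit.

```python
import math

def distinct_val_diff(predict, record, step):
    """Number of distinct representable values between the predicted and
    recorded values, given the attribute's precision (step size).
    With step 0.1, distinct_val_diff(0.2, 0.4, 0.1) is 2, as in the text."""
    return round(abs(predict - record) / step)

def information_cost(predict, record, step):
    """Equation 8: -log2 of 1/(DistinctValDiff + 1). The cost is zero bits
    when the prediction is exact and grows as the prediction drifts from
    the recorded value."""
    return -math.log2(1.0 / (distinct_val_diff(predict, record, step) + 1))
```

An exact prediction costs 0 bits; a prediction two representable values away costs log₂(3) bits, matching the footnote's picture of probability decaying away from the recorded value.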

Towards a Method of Searching a Diverse Theory Space for Scientific Discovery

315

Bureau switched from "head of house" to "householder" to emphasize the sharing of responsibilities prevalent in modern American families. The term "subfamily" refers to parent(s) who live with other adult(s) who are the householder(s) (e.g., their own parent(s)). We randomly created a database of 10,000 people in proportion to the distribution of household types and number of children computed from the U.S. Census data. This database underrepresents the number of children a little because the U.S. Census data do not distinguish among cases of 6 or more children. We treated such cases as exactly 6 children. It underrepresents the number of adults more, because we made no attempt to include all cases of adults living with other adults. Our interest is only in predicting where children live as a function of their parents. The database lists each person, their address, and, when the person is a child, their mother and father. Children who did not live with their father got illegal values as their father attribute. This was also done for the mother attribute. All attributes are symbolic.

Sociology Models

After surveying ethnographic reports on 250 societies, Murdock came to the anticlimactic conclusion that the form of families in all societies is of ". . . a married man and woman with their offspring. [17]" (This is a minimal family structure because that unit may be embedded in larger structures.) We take this statement as the theory. We encode it in the structure of the virtual relations of figure 2, augmented with some extra semantics. For example, from the structure of the database we may deduce that all families have one address, one childset, one mother, and one father, that a set of children may have 0 or more children, etc. The additional rules allow members to inherit selected properties of their families. Predicate prop(frame, attribute, value) notes that property attribute of frame frame has value value.

family: address, childset, mother, father

child: childset, family

∀ ( child(C) ∧ fam(F) ∧ prop(C, family, F) ∧ prop(F, A, V) → prop(C, A, V) )
∀ ( fam(F) ∧ prop(F, mother, M) ∧ prop(F, addr, ADDR) → prop(M, addr, ADDR) )
etc.

Fig. 2. Codification of Murdock’s theory


The laws operationalize the theory by making direct predictions about recorded values. For example, assume the child database included address information. We may then note a correlation between a child's address and that of their parents.

∀( child(C) ∧ mom(M) ∧ fam(F) ∧ prop(C, mom, M) ∧ prop(M, fam, F) ∧ prop(F, addr, A) → prop(C, addr, A) )

∀( child(C) ∧ dad(P) ∧ fam(F) ∧ prop(C, dad, P) ∧ prop(P, fam, F) ∧ prop(F, addr, A) → prop(C, addr, A) )

Fig. 3. Codification of potential Murdock laws (atoms mother, father and family have been abbreviated as mom, dad and fam)
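How such a law produces predictions can be sketched with a toy forward-chaining step over prop(frame, attribute, value) facts. This is an illustrative stand-in, not the paper's Prolog-like resolution engine, and the individuals (c1, m1, f1) and the address value are hypothetical.

```python
# Fact base: type facts are 2-tuples, prop/3 facts are 4-tuples.
facts = {
    ("child", "c1"), ("mom", "m1"), ("fam", "f1"),
    ("prop", "c1", "mom", "m1"),
    ("prop", "m1", "fam", "f1"),
    ("prop", "f1", "addr", "addr42"),
}

def prop(facts, frame, attr):
    """All values V with prop(frame, attr, V) in the fact base."""
    return [f[3] for f in facts if len(f) == 4 and f[1] == frame and f[2] == attr]

def apply_mother_law(facts):
    """child(C) ∧ mom(M) ∧ fam(F) ∧ prop(C,mom,M) ∧ prop(M,fam,F) ∧
    prop(F,addr,A) → prop(C,addr,A)  (the mother law of figure 3)."""
    derived = set()
    for c in [f[1] for f in facts if f[0] == "child"]:
        for m in prop(facts, c, "mom"):
            if ("mom", m) not in facts:
                continue
            for fam in prop(facts, m, "fam"):
                if ("fam", fam) not in facts:
                    continue
                for a in prop(facts, fam, "addr"):
                    derived.add(("prop", c, "addr", a))
    return derived
```

Running the law over these facts derives prop(c1, addr, addr42): the child's address is predicted from the mother's family, exactly the kind of direct prediction the law set is meant to make.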

The competing sociological model is due to Adams [1]. After examining Latin American and some ethnic societies, Adams concluded that the evidence for the nuclear family as described by Murdock was "marginal at best" [14]. Instead he proposed the mother-child dyad as the primary unit. This new model is created by removing the father attribute, or merely disallowing its use in proofs. We also delete the father law of figure 3 from the law set. We bound the parameters by considering two unacceptable models at opposite extremes. The first is the "data" model. It uses neither theory nor laws to predict values. It merely reflects the prior probability of any one value. The second is the "theory" model. It explicitly memorizes each value individually as a statement in the theory. It has neither general statements nor laws, and overfits the data. Table 1 gives the sizes of each component of each total model. Both Murdock's and Adams' models must memorize adult addresses. Adams' must also memorize those of children who live with their fathers but not their mothers. The law sentences in figure 3 logically follow from the theory, so they have size 0. Unfortunately, the zero size prevents this experiment from constraining the B parameter.

Table 1. Sizes of sociological models

Model     Abbr   Theory    Law    Data
data      d           0      0  107637
Adams     a         240      0   79582
Murdock   m         480      0   77739
theory    t      960960      0       0
Adams'    A         240  23429   77739


Figure 4a gives the "phase diagram" plot of the data. Where a model outscores all others, its abbreviating letter appears in the parameter space. log2(C/A) is plotted on the X axis and log2(B/A) on the Y axis.

Fig. 4. Sociology model “phase diagrams”
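A phase diagram like figure 4a can be sketched directly from the sizes in table 1, under the assumption (consistent with the paper's description of equation 7 as a weighted sum) that a total model's score is A·theory + B·law + C·data and that the lowest-scoring model wins. A is fixed at 1 while log2(B/A) and log2(C/A) sweep a grid.

```python
# model -> (theory size, law size, data size), from table 1
SIZES = {
    "d": (0, 0, 107637),        # data model
    "a": (240, 0, 79582),       # Adams
    "m": (480, 0, 77739),       # Murdock
    "t": (960960, 0, 0),        # theory (memorization) model
    "A": (240, 23429, 77739),   # Adams' with the father law (figure 4b)
}

def winner(log2_b, log2_c, sizes=SIZES):
    """Model abbreviation minimizing A*theory + B*law + C*data, with A = 1."""
    b, c = 2.0 ** log2_b, 2.0 ** log2_c
    return min(sizes,
               key=lambda k: sizes[k][0] + b * sizes[k][1] + c * sizes[k][2])

# Crude text rendering of the phase diagram: Y axis log2(B/A), X axis log2(C/A).
for log2_b in range(4, -9, -2):
    row = "".join(winner(log2_b, log2_c) for log2_c in range(-8, 9, 2))
    print(f"log2(B/A)={log2_b:+3d}  {row}")
```

With these numbers the sweep reproduces the qualitative structure the text describes: the data model wins when C/A is tiny, the memorizing theory model wins when data are very expensive, Murdock wins in the middle, and Adams' augmented model takes over Murdock's region once B/A is small enough.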

To place bounds on B we consider adding the father sentence to Adams' law set. However, we cannot prove it from our theory. Therefore, we accept father in the model as a free variable with its (data-specified) prior probabilities. This results in a model with predictive power equivalent to Murdock's. It can now predict the addresses of children living with only their fathers. The price we pay is the Shannon information cost of the prior probability of each usage of the prop(Child, father, Father) predicate for these predictions. See Adams' in table 1. The revised "phase diagram" with Adams' new model is plotted in figure 4b.

Geophysics Data

We obtained data from the United States Geological Survey's National Earthquake Information Center. We retrieved all recorded earthquakes in the catalog in a rectangular box from 139E to 162E and from 41N to 55N, from 1976 to 2000. The Kuril subduction zone, the Japanese island of Hokkaido, and the Kuril island chain are the most prominent geophysical features in this area. Non-tectonic events were removed and the remaining ones were fit to a great circle. This great circle was taken to be the "length" of the fault, and events greater than 512 km from it were removed. The time, distance-along-fault, (signed) distance-from-fault, and depth of the remaining 11031 events were entered into our earthquake database.

Geophysics Model

In the theory of plate tectonics, a subduction zone is a region where one (oceanic) plate sinks beneath another (continental) plate. A Wadati-Benioff zone is the seismically active portion of this interface [2][23].


A Wadati-Benioff zone may be modeled as a plane that increases in depth the further one goes into the continental plate. We did so by stating the assertions of figure 5 in the theory, where the slope and intercept were found by least-squares fit.

DistFromFault = slope × depth + intercept

inherit(kuril_quakes, slope, 1.05682). inherit(kuril_quakes, intercept, -85.9936 km).
Fig. 5. The theory of the planar Wadati-Benioff zone model.
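The slope and intercept of figure 5 come from an ordinary least-squares fit of distance-from-fault against depth; a minimal sketch follows. The event list here is hypothetical, not the actual Kuril catalog.

```python
def least_squares(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical (depth, signed distance-from-fault) pairs, in km.
depths = [10.0, 50.0, 100.0, 200.0, 400.0]
dist_from_fault = [-75.0, -33.0, 20.0, 125.0, 337.0]
slope, intercept = least_squares(depths, dist_from_fault)
print(f"DistFromFault = {slope:.4f} * depth + {intercept:.4f}")
```

The same two-parameter fit over the 11031 catalogued events yields the slope and intercept asserted in figure 5.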

The law set was left empty. As before, the “data” model did not try to predict, and the “theory” model overfit by memorization. The results are given in Table 2 and are plotted in Figure 6a. Table 2. Sizes of geophysical models.

Model       Abbr    Theory    Law   Data
data        d            0      0  97750
planar      p          618      0  63904
theory      t      1369230      0   9775
aftershock  a          618  13759  63103

The non-zero entry for the theory model's data size is due to round-off error. That is, there is a slight difference between the decimal recording of the values in the logical assertions that comprise the theory (which have a fixed number of significant digits, given by the precision of the values) and the binary recording of the values in the database.

Fig. 6. Geophysics model “phase diagrams”

To place bounds on B we add a law to the planar model. When a particular aftershock labeling procedure is used, there is an average distance of 43.5 km between


an aftershock and its mainshock. Encoding this as a law permits better predictions of some distances. We include no theory to predict aftershocks, only an empirical procedure for labeling them after the fact. Therefore, we let mainshock be a free variable. The aftershock model results are given in Table 2 and in figure 6b.

We now evaluate our heuristic with the criteria in section 4. Recall, they were (1) generality, (2) ease of computation, (3) simplicity of form, and (4) smoothness. The function is general because it was applied to symbolic sociology and numeric geophysics with equal ease, and because it has been applied to a domain where predictions have varying degrees of accuracy. Its ease of computation is limited by the ability to predict data, prove (or argue for) laws, and know data distributions. Also, its weighted sum form is simple. The function's "smoothness," its ability to give similar models similar scores, is limited by how honest people are with the law set. When some condition is true over the whole parameter space, one could move it from theory to laws to avoid paying the syntactic cost. This is against the philosophy of this approach. Also, trying to estimate data distributions when there is little data may be a serious problem. Distributions may be used as "fudge factors" to vary a model's score on the B/A axis. However, a potential advantage is that this will force such assumptions to be explicitly stated.

We do not argue for one particular ratio for C/A or B/A. Rather, we seek a method for calibration. That said, we note that both the geophysics and the sociology models had similar C/A bounds. Having B be too great may lead to "overfitting" the laws to the theory and ruling out as yet unknown secondary effects. For discovery it may be best to fix A and C and let B vary as the model becomes more refined. That is a subject for another study. Note that this was truly a test of scientific rediscovery. Both the sociology and the geophysics theories were applied to new data.
Neither Adams nor Murdock was trying to fit U.S. demographics for 1998. Benioff stated his hypothesis after examining events from South America and the Hindu Kush, not the Kurils. (Wadati probably had data for Honshu, not the Kurils.)

6 Conclusion

Scientists have different opinions on what the same data entail. To ignore that is to ignore the history of science. We have developed a heuristic function that takes some of these differences into account, and that may be calibrated to a particular scientist along our given axes. This heuristic function is a generalization of single-model-family, parameter-finding MML. It generalizes MML in a principled fashion to consider how much faith to put in laws versus data. Our approach also extends [23] to be applied to scientific discovery. It is general and has been applied to both symbolic and numeric scientific models.


We do not claim to have solved the whole scientific model preference problem. Serious limitations remain, including (1) the specification of the original model prior probability, (2) the inhomogeneity of the search space, and (3) the fact that the "most probable" model is not necessarily the best one. The purpose of this heuristic is to help scientists identify interesting regions in the model space, i.e., models that are the immediate neighbors of their favorite models in the B/A-C/A plots. This is an initial step of an iterative process. Computer scientists might believe that a heuristic function could not sufficiently constrain search in a domain as rich as scientific discovery. However, the heuristic function is only part of the search algorithm. The search algorithm may employ rules to suggest when to apply scientific operators (e.g., [11]), or may use metalearning to discover which operators are best in a particular domain. Preliminary results from rediscovery in geophysics show that rules and metalearning may be combined or employed separately to significantly speed scientific discovery [20].

Acknowledgments. I thank my geophysicist Larry Ruff for his patience, my former advisors John Laird and Nandit Soparkar, and the National Physical Science Consortium and the Rackham Merit Fellowship for funding.

References

1. Adams, R.N. 1960. An inquiry into the nature of the family. p. 30-49 in Dole, G. and Carneiro, R.L. (eds.), Essays in the Science of Culture: In Honor of Leslie A. White. Thomas Y. Crowell, New York.
2. Benioff, H. 1948. Earthquakes and rock creep. Geol. Soc. Am. Bull., 59, p. 1391.
3. Buchanan, B., Phillips, J. 2001. Towards a computational model of hypothesis formation and model building in science. In Model Based Reasoning: Scientific Discovery, Technological Innovation, Values. Kluwer.
4. Casper, L., Bryson, K. 1998. Current Population Reports: Population Characteristics: Household and Family Characteristics, March 1998 (Update). United States Census Bureau.
5. Cheeseman, P. 1995. On Bayesian model selection. In Wolpert, D. (ed.), The Mathematics of Generalization: Proceedings of the SFI/CNLS Workshop on Formal Approaches to Supervised Learning. Addison-Wesley, Reading, MA.
6. Forbus, K. 1985. Qualitative process theory. In Bobrow, D. (ed.), Qualitative Reasoning about Physical Systems. MIT Press, Cambridge, MA.
7. Fuller, S. 1993. Philosophy of Science and its Discontents, Second Edition. Guilford Press, New York.

8. Georgeff, M.P., Wallace, C.S. 1984. A general selection criterion for inductive inference. In Proceedings of the European Conference on Artificial Intelligence, p. 473-482. Elsevier, Amsterdam.
9. Korf, R.E. 1988. Search: a survey of recent results. In Shrobe, H.E. (ed.), Exploring Artificial Intelligence: Survey Talks from the National Conferences on Artificial Intelligence, p. 197-237. Morgan Kaufmann.
10. Kuhn, T. 1962. The Structure of Scientific Revolutions. University of Chicago Press, Chicago.
11. Kulkarni, D., Simon, H. 1988. The processes of scientific discovery: the strategy of experimentation. Cognitive Science, 12, p. 139-175.
12. Lakatos, I. 1970. Falsification and the methodology of scientific research programmes. In Lakatos, I. and Musgrave, A. (eds.), Criticism and the Growth of Knowledge. Cambridge University Press, Cambridge.
13. Lakatos, I. 1971. History of science and its rational reconstructions. In Buck, R.C. and Cohen, R.S. (eds.), Boston Studies in the Philosophy of Science, vol. 8, p. 91-135. Reidel, Dordrecht.
14. Lee, G. 1977. Family Structure and Interaction: A Comparative Analysis. J.B. Lippincott, Philadelphia.
15. McAllister, J. 1996. Beauty and Revolution in Science. Cornell University Press, Ithaca.
16. Michalewicz, Z., Fogel, D. 2000. How to Solve It: Modern Heuristics. Springer-Verlag, Berlin.
17. Murdock, G.P. 1949. Social Structure. The Free Press, New York.
18. Nordhausen, B., Langley, P. 1987. Towards an integrated discovery system. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence. Morgan Kaufmann, Milan, Italy.
19. Nordhausen, B., Langley, P. 1990. An integrated approach to empirical discovery. In Shrager, J. and Langley, P. (eds.), Computational Models of Scientific Discovery and Theory Formation. Morgan Kaufmann, San Mateo.
20. Phillips, J. 2000. Representation Reducing Heuristics for Semi-Automated Scientific Discovery. Ph.D. Thesis, University of Michigan.
21. Rissanen, J. 1978. Modeling by shortest data description. Automatica, 14, p. 465-471.
22. Sleep, N., Fujita, K. 1997. Principles of Geophysics. Blackwell Science, Malden.
23. Thagard, P. 1988. Computational Philosophy of Science. MIT Press, Cambridge, MA.
24. Wallace, C.S., Freeman, P.R. 1987. Estimation and inference by compact encoding. J. Roy. Stat. Soc., Series B, 49, p. 233-265.
Sleep, N., Fujita, K. 1997. Principles of Geophysics. Blackwell Science. Malden. Thagard, P. 1988. Computational Philosophy of Science, MIT Press, Cambridge MA. Wallace, C.S., and Freeman, P.R. 1987. Estimation and inference by compact encoding. J. Roy. Stat. Soc., Series B, 49, p 233-265.


25. Valdes-Perez, R. 1995. Machine discovery in chemistry: new results. Artificial Intelligence, 74(1), p. 191-201.
26. Zembowicz, R., Zytkow, J. 1996. From contingency tables to various forms of knowledge in databases. In Fayyad et al. (eds.), Advances in Knowledge Discovery and Data Mining. AAAI Press, San Mateo.
27. Zytkow, J., Zembowicz, R. 1993. Database exploration in the search for regularities. J. Intelligent Information Systems, 2:39-81.

Divide and Conquer Machine Learning for a Genomics Analogy Problem (Progress Report)

Ming Ouyang¹, John Case², and Joan Burnside³

¹ Environmental and Occupational Health Sciences Institute, UMDNJ – Robert Wood Johnson Medical School and Rutgers, The State University of New Jersey, Piscataway, NJ 08854 USA. ouyang@fidelio.rutgers.edu
² Department of CIS, University of Delaware, Newark, DE 19716 USA. case@cis.udel.edu
³ Department of Animal & Food Sciences, University of Delaware, Newark, DE 19716 USA. joan@udel.edu

Abstract. Genomic strings are not of fixed length, but provide one-dimensional spatial data that do not divide, for conquering by machine learning, into manageable fixed-size chunks obeying Dietterich's independent and identically distributed assumption. We nonetheless need to divide genomic strings for conquering by machine learning — in this case for genomic prediction. Orthologs are genomic strings derived from a common ancestor and having the same biological function. Ortholog detection is biologically interesting since it informs us about protein divergence through evolution and, in the present context, also has important agricultural applications. The present paper indicates a means to obtain an associated (fixed-size) attribute vector for genomic string data, and for dividing and conquering the machine learning problem of ortholog detection, herein seen as an analogy problem. The attributes are based both on the typical string similarity measures of bioinformatics and on a large number of differential metrics, many new to bioinformatics. Many of the differential metrics are based on evolutionary considerations, both theoretical and empirically observed, in some cases observed by the authors. C5.0 with AdaBoosting activated was employed, and the preliminary results reported herein regarding complete cDNA strings are very encouraging for eventually and usefully employing the techniques described for ortholog detection on the more readily available EST (incomplete) genomic data.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 290–303, 2001. © Springer-Verlag Berlin Heidelberg 2001

1 Introduction

Genomic strings are strings of one of two types: nucleotide strings and amino acid strings. Nucleotide strings are what genes are, and they code for amino acid strings, which are proteins. We can model each as strings of letters, where the letters are standard names for the nucleotides or the amino acids. For machine learning¹ purposes, it is not practical to process genomic strings as fixed-size vectors (of letters). However, genomic strings can be thought of as one-dimensional spatial structures.² Dietterich [Die00] discusses in detail the problem for machine learning of employing divide and conquer on spatial and temporal data which can't be practically completely represented as fixed-size vectors. Of course such data can be divided into manageable fixed-size chunks. He notes, though, that divide and conquer is problematic if the data fails to satisfy the independent and identically distributed (iid) assumption. As we will see below, the problem discussed in this paper does not satisfy this assumption, and this paper provides, then, among other things, a case study of how in our problem domain we circumvent the difficulty.

In GenBank (a major repository of genomic information) there are many human and mouse (mammal) genomic sequences with known associated functions; there are some, but fewer, (food animal) chicken sequences with known associated functions. Poultry is the third largest agricultural commodity, and the main meat consumed, in the U.S.³ Control of disease in these birds is important for both agricultural economics and human health. The identification of candidate genes for disease resistance, or the development of immune enhancers to make vaccines more effective or even obsolete, are among the more contemporary approaches to disease control in this important food animal. However, gene sequence information for birds is currently too limited.
Fortunately, as just noted above, some information is available, so there is some basis for training a machine learning procedure. Orthologs are (genomic) sequences which are from different species but which have common descent and the same function. Crucially, in a number of cases one can locate and compare human, mouse, and chicken orthologs. We've been concerned, then, with an analogy problem: find/exploit patterns in the known orthologs between human, mouse, and chicken, and apply those patterns to human and mouse orthologs X, Y with known function, but whose chicken ortholog Z is unknown, to detect the unknown Z. To find patterns between relatively closely related species, e.g., human and mouse, it has sufficed to use known local-alignment-based similarity tools such as BLAST (and variants) [AGM+90,AMS+97,KA90,Pea95], which are based on string similarity only. They find "locally maximal segment pairs."

¹ Machine learning [Mit97,RN95] involves algorithmic techniques for fitting programs to data and for outputting the programs fit for subsequent use in predicting future data. A program so fit to data is said to be learned.
² Amino acid sequences fold into 3-D structures, but that, for us, will be taken into account in future work. See Section 6 below.
³ http://www.usda.gov/news/pubs/fbook98/ch1a.htm

This similarity

matching does not suffice for highly divergent orthologs (e.g., some of the orthologs between mammals and birds) since the regions of similarity are too fragmented. For example, Figure 1 depicts an optimal global amino acid sequence alignment between chicken and mouse IL-2 orthologs⁴ (with chicken shown on top). The corresponding nucleotide sequence alignment is also very fragmented (data not shown). The same degree of fragmentation is seen comparing chicken and human IL-2 (data not shown). When searching chicken IL-2 against GenBank, BLAST and variants do not and cannot find any locally maximal segment pairs in mammals which have statistical significance. This problem is not just for IL-2. More generally, it follows from [RYW+00] and recent news releases from Celera that more than 25% of orthologs are not identified by commonly used (local-alignment-based similarity) tools.

[Fig. 1. Optimal Global Amino Acid Alignment Between Chicken and Mouse IL-2]

In the analysis of analogy problems from both cognitive psychology [Ste88] and artificial intelligence [Eva68,RN95], we see that both similarities and differences need to be taken into account. For example, here are a couple of string analogy problems from Hofstadter. These problems are based on alphabetical order, though, not genomics.

abc → abd, ijk → ?
abc → abg, iijjkk → ?

We see that taking into account both string similarities and differences is a necessary part of solving these problems. Other projects have employed differential metrics to some degree and to good effect. The tools for intron-exon⁵ recognition (not what we are doing in the present study), GRAIL [GME+92] and GENSCAN [BK97], employ differential metrics (and there is a similarity metric implicit, for example, in the potential function in GRAIL).

⁴ IL-2 is interleukin 2, an immune system protein.
⁵ Exons contain the coding portions of genes.

chick-human AA identity 25.54:
:...chick-mouse NA identity 57.45: no
:   chick-human NA length/(# gaps) 49.5:
:...chick-human NA length/(# gaps) 103.7143: no
:   chick-mouse NA length/(# gaps) 25.59: no
:   chick T to mouse C 118.0588:
[Rest omitted]
Fig. 2. First Tree Output By C5.0 — With Portion Omitted

A codon is comprised of a contiguous triple of nucleotides

{A, C, G, T}, and 61 of these triples each code for a single corresponding amino acid. Diﬀerential metrics can be based on so-called codon bias [SM82,SCH+ 88, Li97]. Most of the 20 amino acids are encoded by more than one codon; codon bias is, then, the quantiﬁable phenomenon that an organism uses one particular codon for an amino acid signiﬁcantly more often than all the other synonymous codons. [SG94] provides an improvement of BLAST with a measure of codon bias as a diﬀerential metric. In the present project we employ codon bias as one class of diﬀerential metrics or attributes: we count, for each of the 61 codons, how many times it occurs in the orthologs. In our project for (chicken) ortholog detection, we have devised a number of other diﬀerential metrics also to complement standard similarity metrics for genomic sequences. These measures of similarity and diﬀerences provide our attributes (or features) for machine learning and constitute, in many cases, a useful division into parts of the original problem about 1-D strings, a division towards conquering the problem. As noted above, this division yields cases where the iid assumption fails. Instead, the co-evolution of mammal and bird orthologs from common ancestor strings involves whole interdependent string patterns coming out partly diﬀerently and partly similarly.
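The codon-bias attributes described above (one count per sense codon) can be sketched as follows; the coding sequence used in the example is hypothetical.

```python
from itertools import product

STOPS = {"TAA", "TAG", "TGA"}
# The 61 sense codons: all triples over {A, C, G, T} minus the 3 stop codons.
SENSE_CODONS = ["".join(c) for c in product("ACGT", repeat=3)
                if "".join(c) not in STOPS]

def codon_counts(cds):
    """Count occurrences of each of the 61 sense codons in a coding
    nucleotide sequence, read in frame; stop codons are ignored."""
    counts = dict.fromkeys(SENSE_CODONS, 0)
    for i in range(0, len(cds) - len(cds) % 3, 3):
        codon = cds[i:i + 3]
        if codon in counts:
            counts[codon] += 1
    return counts

counts = codon_counts("ATGGCTGCTAAATGA")  # Met-Ala-Ala-Lys-stop (hypothetical)
```

Each ortholog thus contributes a 61-dimensional count vector, one coordinate per sense codon, from which codon-bias differences between species can be measured.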

2 Attributes Based on Similarities and Differences

We mentioned codon bias for diﬀerential metrics above. A straightforward evaluator of similarity is simple percent identity. Studies ([Li97], Chapter 1) have shown with accompanying simple biochemical explanation that, when mutations occur, the nucleotides A and G tend to change to G and A, respectively, and C and T tend to change to T and C, respectively;


these are called transitions. The other 8 substitutions, between the group of {A, G} and the group of {C, T}, are called transversions, and they occur less frequently than transitions. Insertions and deletions of nucleotides are thought to occur rarely; however, when they do occur, several adjacent nucleotides may be involved [BO98]. Therefore, another common way to evaluate the quality of an alignment is to assign a high score to identity matches, a medium score to transitions, a low score to transversions, a large penalty to opening a gap, and a small penalty to extending a gap. Our Table 1 is such a scoring scheme.

Table 1. A Nucleotide Sequence Alignment Scoring Scheme.

From\To   A   C   G   T
A         4   1   2   1
C         1   4   1   2
G         2   1   4   1
T         1   2   1   4

Gap Opening: -5    Gap Extension: -2

Amino acid sequences are what cells translate nucleotide (gene) sequences into (to form proteins). When amino acid sequences are aligned, the scoring matrix is a 20 by 20 table because there are 20 amino acids; some commonly used matrices for amino acid sequence alignment include the PAM and BLOSUM families of matrices (see [BO98] and the references therein). The Needleman-Wunsch algorithm [NW70] finds an optimal global alignment of two sequences. Optimal global alignment has thus far been mostly used in comprehensive studies of orthologs, as in [MB98], where orthology has already been established and researchers want to extract additional information from the aligned sequences. Global alignment involves some increased complexity costs over local alignment schemes, but we've seen, for our applications reported herein, that this increased cost is not prohibitive; furthermore, we have begun using the more efficient variant of Needleman-Wunsch from [Got82]. When we apply (the improved variant of) Needleman-Wunsch to obtain global alignment values for similarity attributes, for nucleotide alignment we apply the scoring scheme in Table 1, and, for amino acid alignment, we apply the scoring scheme from the PAM250 matrix. Needleman-Wunsch and its improvement calculate a global alignment that is optimal in the sense that no other alignment yields a higher score, and global in the sense that the entire lengths of the sequences are taken into consideration. For our similarity attributes we employ both the Nucleotide Alignment (NA) scores and the Amino acid Alignment (AA) scores, comparing chicken with each of mouse and human.⁶ These scores are given as percent identities.

⁶ Applying attribute values for both chicken-mouse and chicken-human comparisons improves performance over just employing comparisons between chicken and one of these mammals.
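A compact sketch of Needleman-Wunsch global alignment scoring with the nucleotide scheme of Table 1 follows. For brevity this sketch uses a single linear gap penalty rather than the paper's affine gap-opening/extension costs, which require the three-matrix Gotoh variant [Got82].

```python
# Table 1's substitution scores: 4 for identity, 2 for transitions,
# 1 for transversions.
SCORE = {"A": {"A": 4, "C": 1, "G": 2, "T": 1},
         "C": {"A": 1, "C": 4, "G": 1, "T": 2},
         "G": {"A": 2, "C": 1, "G": 4, "T": 1},
         "T": {"A": 1, "C": 2, "G": 1, "T": 4}}
GAP = -2  # simplification: one linear gap penalty instead of -5 open / -2 extend

def needleman_wunsch(s, t):
    """Return the optimal global alignment score of nucleotide strings s, t."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * GAP
    for j in range(1, n + 1):
        dp[0][j] = j * GAP
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = max(dp[i - 1][j - 1] + SCORE[s[i - 1]][t[j - 1]],
                           dp[i - 1][j] + GAP,      # gap in t
                           dp[i][j - 1] + GAP)      # gap in s
    return dp[m][n]
```

The score is global: it is computed over the entire lengths of both strings, with every position either matched, mismatched, or penalized as a gap, which is exactly the property that distinguishes it from BLAST's local segment pairs.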


We noticed an intriguing and biologically significant empirical pattern comparing NA and AA for our current full data set of 213 complete orthologs between chicken-mouse-human. In Table 2 this pattern (among other things) is displayed for a representative sample of 20 of our orthologs. Table 2 is shown sorted on the column of chicken-mouse nucleotide alignment. For the top portion of the table (as sorted), chicken-mouse NA percent identity is larger than that of AA, but for the bottom portion of the table, the ordering of the two numbers becomes reversed. The likely biological/biochemical explanation appears to be: in the top portion we see the effects of mutations in the third, redundant position of codons [Li97]; in the bottom portion we see critically preserved amino acids; and in the middle some combination of each. We have employed, then, the values of (NA - AA) and NA/AA as attributes measuring the degree to which the nucleotide and amino acid alignments differ. From the NA and AA alignments themselves we calculate lengths, numbers of gaps, and their average lengths. We then combine these numbers in various ratios to provide differential attributes. The present paper reports on our progress with ortholog prediction for complete cDNA sequences. In the future we plan to apply our methods to ESTs (incomplete sequence data), and making these attributes ratios is one way that, on average, the incompleteness of the ESTs will not bias our attributes compared with their values on complete sequences. We also employ as attributes the percentages of conservations of the four nucleotides, the percentages of transitions, and the percentages of transversions. From above, transversions are those nucleotide mutations (e.g., from C or T to A) that are less likely to occur biochemically. Table 2 also displays, for the 20 representative orthologs (out of our 213), transversion bias percents between mouse and chicken.
We have based a number of additional attributes on various measures suggested by the biologically quite interesting transversion bias trends seen in this table and in the table of all our 213 orthologs. For example, we have various useful attributes measuring deviation from the boldfaced columns for transversion bias. We illustrate these attributes with the example of a particular chicken sequence compared to its mouse ortholog. For such a sequence comparison (corresponding to the ﬁrst four columns of a single row of a table like Table 2) there are four transversion percentages:

(t1, t2, t3, t4) = (% of {C,T} → A, % of {A,G} → C, % of {C,T} → G, % of {A,G} → T).    (1)

We treat these four numbers as coordinates of a four-dimensional point. The general pattern (quite similar to the boldface pattern in Table 2) is that, for transversions from chicken to mouse, a point will usually have a larger second coordinate than its other coordinates; hence, the points reside in a restricted sub-region of the space. Since the distribution of these points is not known, we can use the distribution-free, scaling- and rotation-invariant measure called simplicial depth [BF84,Liu90,LS93,CO01] to measure how near a point is to the center of the cluster of points. We have experimented, to good eﬀect


M. Ouyang, J. Case, and J. Burnside

Table 2. Transversion Bias and Comparative Alignments

                                  Chicken to mouse         Mouse to chicken       % identity
Protein                           CT→A AG→C CT→G AG→T    CT→A AG→C CT→G AG→T     NA    AA
frizzled 7                         2.2  8.2  6.9  2.9     1.7  8.6  7.7  2.0    81.8  87.4
transforming growth factor β 3     6.0  5.7  4.0  1.9     2.9  6.7  5.3  2.6    81.1  87.1
nicotinic Ach receptor α 1         3.1  6.6  6.1  2.5     5.1  6.5  3.2  3.5    79.4  84.3
growth hormone                     4.0  9.1  6.4  5.0     5.0  7.1  8.3  3.9    74.8  73.1
VEGF                               5.9  8.2  3.5  1.6     5.7  3.9  6.1  3.9    74.7  73.4
PDGF receptor α                    4.7  7.0  5.8  3.2     7.0  3.9  4.9  5.2    74.3  79.3
estrogen receptor                  3.2  9.7  6.1  3.3     8.7  4.2  4.7  4.8    73.9  78.3
PDGF α                             5.7  8.2  6.1  2.4     7.9  4.0  5.2  5.6    72.2  76.7
FSH receptor                       4.9  7.9  4.9  6.1     8.1  5.8  4.4  5.2    71.5  71.6
ﬁbroblast growth factor 2          4.9  9.7  8.8  5.5     8.0  5.3  9.0  7.0    70.0  66.0
thyrotropin β                      5.2  8.2  6.2  8.2     7.8  4.8  6.9  8.0    69.8  65.4
growth hormone receptor            9.2  7.0  5.4  7.1     9.8  6.0  6.0  7.0    66.6  56.9
insulin-like growth factor I       6.6 11.4  6.6  5.0    11.5  6.2  5.8  6.2    64.1  62.9
prolactin                          9.5  9.6  7.2  8.7    10.0  8.1  7.4  9.3    62.2  50.8
β 2 microglobulin                 17.8 18.3 11.7 11.7     7.7 22.9 22.1  6.7    54.7  42.9
prolactin receptor                 8.8 10.1  6.7  8.5    14.1  6.6  7.7  6.5    54.6  42.8
interleukin 1 β                   18.8 11.6 11.2 13.2    11.6 21.0 14.2  7.8    51.3  31.7
interleukin 18                    14.6 13.3 11.7 11.3    15.1 10.4 14.3 11.4    51.2  31.8
interleukin 15                    10.8 14.8  8.8 13.7    23.3  6.6  9.8  9.9    49.6  33.8
interleukin 2                     19.3 11.7 13.9 16.7    24.9  9.0 10.4 17.5    42.5  19.9

Left: transversion bias %s (the largest number in each row is boldfaced). Right: nucleotide (NA) vs. amino acid (AA) sequence alignments as % identities. Rows sorted by NA.

(Section 5), with easily computed, one-dimensional projections of the full, more diﬃcult to compute simplicial depth: we see not only that t2 tends to be the largest of the four but also that, when t2 is not the largest, t1 tends to be. As one-dimensional projections we use the following formulas for additional diﬀerential attributes: t2/minimum(t1, t2, t3, t4) and t1/minimum(t1, t2, t3, t4). The ﬁrst we call a major transversion bias, the second a minor one. Similar (but not identical) formulas are used for the transversion biases from mouse to chicken, and for those between human and chicken. Relatively large values of these diﬀerential measures indicate conformance to the typical transversion bias patterns. Lastly, we employ some simple protein class information [AKF+95]7 (see also [TSB00]) for attributes.

7. http://www.tigr.org/docs/tigr-scripts/egad_scripts/role_report.spl
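The diﬀerential transversion attributes described above can be sketched in code. The following is an illustrative reconstruction, not the authors' implementation: it computes the four transversion percentages of Eq. (1) from a pair of aligned nucleotide sequences, and the major/minor bias ratios (assuming, for the ratios, that all four counts are nonzero).

```python
# Illustrative sketch (not the authors' code) of the transversion-bias
# attributes: the quadruple (t1, t2, t3, t4) of Eq. (1) and the major/minor
# bias ratios t2/min(t) and t1/min(t) described in the text.

PYRIMIDINES = {"C", "T"}
PURINES = {"A", "G"}

def transversion_percentages(seq_from, seq_to):
    """Return (t1, t2, t3, t4) = (% {C,T}->A, % {A,G}->C, % {C,T}->G, % {A,G}->T)."""
    assert len(seq_from) == len(seq_to)
    n = len(seq_from)
    counts = [0, 0, 0, 0]
    for a, b in zip(seq_from, seq_to):
        if a in PYRIMIDINES and b == "A":
            counts[0] += 1
        elif a in PURINES and b == "C":
            counts[1] += 1
        elif a in PYRIMIDINES and b == "G":
            counts[2] += 1
        elif a in PURINES and b == "T":
            counts[3] += 1
    return tuple(100.0 * c / n for c in counts)

def bias_ratios(t):
    """Major and minor transversion bias (assumes min(t) > 0)."""
    m = min(t)
    return t[1] / m, t[0] / m
```

A row of Table 2 would then correspond to two such quadruples, one per direction of comparison.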

3 How We Obtain Negative Data for Classiﬁcation

For the classiﬁcation of genomic sequences as orthologous or not, we want to supply both positive and negative instances as training data. Our positive data come from our 213 known orthologs. We employ two groups of negative data. The ﬁrst group is of the form

( human protein Y, mouse protein Y, chicken protein X ),    (2)

where
– X and Y are in our collection,
– X and Y are not orthologs,
– the two diﬀerences in length between the chicken protein and each mammalian protein are each less than 30% of the length of the mammalian protein, and
– at least one of the amino acid global alignment identities between chick X and human Y or between chick X and mouse Y is greater than or equal to 13%.
(The 30% and 13% ﬁgures may be adjusted in the future as necessary.) For our 213 orthologs, there are 1043 data points in the ﬁrst group. This ﬁrst group corresponds to the type of negative data points on which we would want to test a decision program output by a machine learning technique. The second group is of the two forms

( human protein X, mouse protein Y, chicken protein X )    (3)

and

( human protein Y, mouse protein X, chicken protein X ),    (4)

and the constraints on the proteins are the same as in the ﬁrst group. For the 213 orthologs, there are 2086 data points in the second group. The use of this second group considerably improves the performance of the decision programs output by the machine learning technique described in the next section.
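The assembly of the ﬁrst group of negative triples might be sketched as follows. This is an illustration, not the authors' pipeline; `aa_identity` is a hypothetical stand-in for a global amino acid alignment routine, passed in by the caller.

```python
# Illustrative sketch of building the first group of negative data:
# triples (human Y, mouse Y, chicken X) with X, Y non-orthologous that
# still pass the length and alignment-identity filters from the text.

def length_ok(chick, mammal, max_frac=0.30):
    """Length difference below 30% of the mammalian protein's length."""
    return abs(len(chick) - len(mammal)) < max_frac * len(mammal)

def negative_triples(orthologs, aa_identity, min_identity=13.0):
    """orthologs: dict name -> (human_seq, mouse_seq, chicken_seq).
    aa_identity: caller-supplied % identity of a global AA alignment."""
    negatives = []
    for x, (_, _, chick_x) in orthologs.items():
        for y, (human_y, mouse_y, _) in orthologs.items():
            if x == y:  # X and Y must not be orthologs
                continue
            if not (length_ok(chick_x, human_y) and length_ok(chick_x, mouse_y)):
                continue
            if max(aa_identity(chick_x, human_y),
                   aa_identity(chick_x, mouse_y)) >= min_identity:
                negatives.append((human_y, mouse_y, chick_x))
    return negatives
```

The second group, forms (3) and (4), would be generated analogously by mismatching one mammalian protein at a time.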

4 Machine Learning Techniques Employed

We employ as our machine learning technique Quinlan's C5.0, which combines his C4.5 for decision tree induction [Qui93,RN95,Mit97] with an option for AdaBoosting [FS97,FS99]. Decision tree induction involves ﬁtting simple decision trees with unary-predicate tests to classiﬁcation data. C5.0 (and C4.5) employ an information-theoretic heuristic so that decisions at the top of a ﬁtted tree explain more data than decisions further down. This provides both eﬃciency in ﬁtting and some readability of the resultant trees for insight; these are the reasons the decision tree induction component was chosen for the project. AdaBoost is an important technique for improving learners, both in ﬁtting training data [FS96] and in generalization and prediction beyond the training data [FSBL98] (see also [FMS01]). It also handles well the presence of errors in the training data


[FS96]. AdaBoosting, as employed in C5.0, takes a weighted majority vote of the decisions of a sequence of decision trees, where each tree beyond the ﬁrst judiciously concentrates on the cases diﬃcult for its predecessor.8 Since AdaBoosting combines a number of decision trees, its use may involve some tolerable loss of readability and eﬃciency. However, AdaBoosting nonetheless involves linear (i.e., fast) programming [FS99]. The features of AdaBoosting just mentioned are why it was chosen for the project.

Other methods might have been chosen. Reported in [MST94] is a major series of studies, across many domains, comparing machine learning techniques (including decision tree induction and neural net learning) with classical statistical techniques. Decision tree induction was generally robust over the domains studied, including in comparison with the statistical techniques. Again, though, it has the advantage that its products are readable for insight. Of course, each technique compared had its especially good domains. In [BB98], for example, we see many bioinformatics problems tackled with either neural nets or statistical techniques (but not decision tree induction with AdaBoosting). Neural nets and statistical methods tend not to produce classiﬁcation programs readable for insight. We do note that the Morgan system in [SDFH98] does employ decision tree induction, to simplify otherwise complex dynamic programming for doing similarity matching for intron-exon recognition in vertebrates.9 We also note that [AMS+93] employs a decision tree induction method which automatically selects string patterns from a given table and produces a decision program which tests input data against the table to predict transmembrane domains from protein data.

Support vector machines [Vap95,Vap98] and neural nets can, in eﬀect, cut up the attribute space in ways that decision trees do not. For example, in some cases there can be advantages in decision tree induction to suitably rotate the attribute space; however, AdaBoosting (more than) makes up for any such advantage [Qui97] and, in eﬀect, cuts up the attribute space very ﬁnely [Qui98]. Furthermore, support vector machines involve quadratic (i.e., slower) programming [FS99].
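As a rough illustration of the boosting scheme described above (this is not C5.0 itself, only a minimal AdaBoost over one-level decision stumps on one-dimensional data), the sketch below shows the two ingredients the text emphasizes: each round reweights the examples so the next tree concentrates on its predecessor's mistakes, and the ensemble decides by a weighted majority vote in which more accurate trees get bigger voting weights.

```python
import math

# Minimal AdaBoost with decision stumps; labels are in {-1, +1}.
def best_stump(xs, ys, w):
    """Find (error, threshold, sign) minimizing weighted training error."""
    best = None
    for thr in sorted(set(xs)):
        for sign in (1, -1):
            err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                      if sign * (1 if xi >= thr else -1) != yi)
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best

def adaboost(xs, ys, rounds=10):
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []  # (alpha, threshold, sign) triples
    for _ in range(rounds):
        err, thr, sign = best_stump(xs, ys, w)
        err = max(err, 1e-10)
        if err >= 0.5:
            break
        alpha = 0.5 * math.log((1 - err) / err)  # more accurate => bigger vote
        ensemble.append((alpha, thr, sign))
        # Up-weight misclassified examples, then renormalize.
        w = [wi * math.exp(-alpha * yi * sign * (1 if xi >= thr else -1))
             for xi, yi, wi in zip(xs, ys, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * s * (1 if x >= t else -1) for a, t, s in ensemble)
    return 1 if score >= 0 else -1
```

C5.0 uses full (and far more sophisticated) decision trees in place of stumps, but the reweighting and weighted-vote structure is the same.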

5 Results

When we run C5.0 with AdaBoost activated on our 213 orthologs (and associated negative data), we get ensembles of decision trees with an average of about 35 decision nodes per tree. These trees are humanly readable. The attributes tested in ensembles of trees based on all 213 orthologs involve most of our current attributes. Such an ensemble with only three trees10 makes no mistakes on all the positive and negative data points generated by the

8. Importantly, the voting weights are bigger for more accurate trees in the sequence of trees.
9. In the present project we are working only with exons or portions thereof.
10. Recall from Section 4 above that the ensemble of trees obtained from AdaBoosting makes its decisions by a judiciously weighted majority vote among the decisions of its constituent trees; this is even more usefully subtle decision making than that of any single tree.


213 orthologs. More importantly, though, we employed 10-fold cross-validation (i.e., a random tenth of the data is removed from training and employed instead for testing) with 10 repetitions and obtained, with a boosting ensemble size of 25 trees, a low error rate of 2.4% (with standard error less than 0.05%) on the entire data set for all 213 orthologs. Furthermore, for each of the 213 ways of removing one ortholog of the 213, we also tried training on the remaining 212 (with their associated negative data points) and testing the ensembles obtained from C5.0 with AdaBoost activated on the missing ortholog and the (also missing) negative data points associated with it. In 95% of the cases in which the ortholog omitted from the training data was chosen from the important protein class of cell/organism defense (which includes the immune system enhancers we are especially interested in)11, ensembles with only four trees performed perfectly on all the positive and negative cases, including those for the ortholog omitted. On our 213 orthologs and associated negative data, the ﬁrst decision tree produced by C5.0 (with a portion omitted to save space) is shown in Figure 2. The tree should be read essentially as an if-then-else program with nesting indicated by indentation. The decision yes is for orthologous and no is for non-orthologous. From vertical position in the tree we see, for example, that the top test is of an amino acid percent identity, chick-human AA identity

xp = Sum(yq) {q such that q->p},

where q->p indicates that q links to p. In the same manner, if a page points to many good authorities, its hub weight is increased:

yp = Sum(xq) {q such that p->q}.

HITS is a simple algorithm based solely on hyperlink information, except for the acquisition of a root set, and its behavior has been analyzed by several researchers. HITS tends to generalize topics that are not suﬃciently broad, a phenomenon called topic generalization [5]. There are several works that distill the topics of Web communities by using this phenomenon [1][3]. Another point that should be mentioned is that HITS sometimes outputs hubs and authorities which are irrelevant to the input topic. When a good hub page of a community contains hyperlinks pointing to pages on several topics, pointed-to pages irrelevant to the input topic may acquire much authoritative weight and be regarded as authoritative pages of the community. This phenomenon is called topic drift [6]. Another phenomenon is inadvertent topic hijacking [4], which occurs when a base set contains a number of Web pages from the same Web site. Since such pages often contain hyperlinks pointing to the same URL (for example, the top page of the site), the authority weight of irrelevant pages may be increased.

Several attempts have been made to avoid such phenomena, such as using anchor texts and giving weights to hyperlinks [3], and pruning irrelevant pages from the base set prior to the calculation of authority/hub weights [1]. However, we consider that the fundamental cause of such undesirable behavior of the HITS algorithm lies in the generation of the base set. In the HITS algorithm, base sets are generated by collecting neighboring pages of a root set, which is acquired from the results of a keyword-based search engine. The algorithm is based on the assumption that many good authority/hub pages are included in a base set generated in this way. However, this assumption is not always true. Since HITS focuses its attention on the pages of

A Method for Discovering Purified Web Communities


base set in the process of ranking, its results are heavily dependent on the quality of the base set. On the other hand, Murata's Web community discovery method [11] acquires data during the process of discovery. The goal of the method is to ﬁnd a complete bipartite graph composed of centers (informative pages) and fans (pages containing hyperlinks to centers); data acquired from a search engine and from Web servers are used to renew the members of centers and fans iteratively. Since the quality of the data can be improved by data acquisition during discovery, the method is expected to avoid the weakness that HITS suﬀers from. This paper proposes a new method for the puriﬁcation of Web communities, which is a modiﬁed version of the above method. The members of fans and centers are changed iteratively by a kind of majority vote of each other. In this manner, the members of the communities are puriﬁed so that representative fans and authorities are acquired.
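For reference, the HITS update rules quoted above can be written as a short power-iteration sketch. This is an illustration of the standard algorithm, not any particular system's code: the authority weight of p sums the hub weights of pages linking to p, the hub weight of p sums the authority weights of the pages p links to, and the weights are normalized each round.

```python
# Compact HITS power iteration over an adjacency dict: page -> list of targets.
def hits(links, iterations=50):
    pages = set(links) | {q for tgts in links.values() for q in tgts}
    auth = {p: 1.0 for p in pages}
    hub = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # x_p = sum of y_q over q with q -> p
        auth = {p: sum(hub[q] for q in links if p in links[q]) for p in pages}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {p: v / norm for p, v in auth.items()}
        # y_p = sum of x_q over q with p -> q
        hub = {p: sum(auth[q] for q in links.get(p, [])) for p in pages}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {p: v / norm for p, v in hub.items()}
    return auth, hub
```

Note that the iteration operates only on the fixed base set it is given, which is exactly the dependence on base-set quality criticized in the text.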

3 A Method for Purifying Web Communities

The method for discovering Web communities [11] that is the basis of our new method is explained ﬁrst. The method consists of the following three steps:

1. Search for fans using a search engine
2. Addition of a new URL to the centers
3. Repetition of step 1 and step 2

Figure 1 shows the steps of the community discovery method. In the method, some input URLs are accepted as initial centers, and fans which co-refer to all of the centers are searched for. As shown in the ﬁgure, fans are found from centers by backlink search on a search engine. The next step is to add a new URL to the centers based on the hyperlinks included in the acquired fans. The fans' HTML ﬁles are acquired through the Internet, and all the hyperlinks contained in the ﬁles are extracted. The hyperlinks are sorted in order of frequency. Since hyperlinks to related Web pages often co-occur, the top-ranking hyperlink is expected to point to a page whose contents are closely related to the centers. Therefore, the URL of that page is added as a new member of the centers.

In the method for purifying Web communities that is newly proposed in this paper, the above steps of renewing fans and centers are modiﬁed in the following way:

• If there are few fans which co-refer to all the members of the centers, one of the members of the centers is randomly removed, and corresponding fans are then searched for by backlink search, so that the number of fans will exceed a certain threshold.
• After all the hyperlinks contained in the fans' HTML ﬁles are extracted, they are sorted in order of frequency. A few URLs of high-ranking hyperlinks are then added to the centers, and the same number of low-ranking URLs that were members of the previous centers are removed from the centers. This means that the centers are updated according to the references of the corresponding fans. The number of additions/removals of URLs is limited to at most half of the number of centers, so that the topic of the centers will not change drastically.

Fig. 1. A method for discovering Web communities

With these modiﬁcations, the following eﬀects are expected:

• Even if some irrelevant pages are contained in the centers, the quality of fans will not be deteriorated, since pages that refer to most of the centers (not necessarily all of them) are searched for and regarded as fans.
• Since URLs that are linked to by many fans are considered to be representative ones regarding the topic of the Web community, replacing the members of the centers with high-ranking URLs is expected to improve the quality of the centers.
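The puriﬁcation loop just described might be sketched as follows. This is a schematic under stated assumptions: `backlink_search(url)` and `extract_links(fan)` are hypothetical stand-ins for the search-engine backlink query and the HTML fetching/parsing steps, and the thresholds are illustrative, not the system's actual parameters.

```python
import random
from collections import Counter

def purify(centers, backlink_search, extract_links,
           min_fans=10, swap=2, rounds=5):
    centers = list(centers)
    for _ in range(rounds):
        # Fans: pages that co-refer all current centers; if too few,
        # randomly drop a center until enough fans are found.
        fans = set.intersection(*(set(backlink_search(c)) for c in centers))
        while len(fans) < min_fans and len(centers) > 1:
            centers.remove(random.choice(centers))
            fans = set.intersection(*(set(backlink_search(c)) for c in centers))
        # Rank all hyperlinks in the fans' pages by frequency; promote the
        # top-ranking new URLs into the centers and retire the same number
        # of low-ranking current centers (at most half the centers).
        freq = Counter(link for fan in fans for link in extract_links(fan))
        swap = min(swap, len(centers) // 2)
        newcomers = [u for u, _ in freq.most_common() if u not in centers][:swap]
        centers.sort(key=lambda c: freq[c])  # low-ranking centers first
        centers = centers[len(newcomers):] + newcomers
    return centers
```

In a real system the two hook functions would wrap a search engine's backlink query and an HTML parser; here they are injected so the loop itself stays testable.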

4 Experiments

Based on the above method, the author has built a system for purifying Web communities. As input to the system, the bottom ﬁve URLs listed in each topic of 100hot.com (http://www.100hot.com/) are given. These URLs are regarded as the initial centers of a community, which is then puriﬁed by the system so that higher-ranking URLs will be collected as the members of the ﬁnal centers. Average rankings of centers for each topic before/after puriﬁcation are shown in Table 1. The table shows that higher-ranking centers are acquired for some of the topics, such as Macintosh, Election, and Music. The reasons the system performs well for these topics are as follows:


1. The topics of these communities are more focused than the others. In focused communities there are, in many cases, representative pages that are referred to by most of the community members.
2. The graphs of these communities are densely connected. This enables puriﬁcation from hyperlink information alone.

Table 1. Average ranking of centers for each topic

Besides these topics on which our system performs well, there are some other topics for which the system outputs rather unexpected results. For example, the inputs and outputs for the topic Magazine are as follows:

• Inputs: chemweek.com, mysterynet.com, cosmomag.com, playbill.com, si.edu
• Outputs: washingtonpost.com, nytimes.com, usatoday.com, latimes.com, wsj.com

This result shows that the topic of the centers has shifted from Magazine to Newspaper, and it also shows the closeness of the communities of these two topics. Another example is the community for the topic Travel:

• Inputs: smarterliving.com, sheraton.com, ebookers.com, qixo.com, hotel.com
• Outputs: hilton.com, hyatt.com, sheraton.com, marriott.com, holiday-inn.com

In general, when a target topic is so broad that it contains many subtopics, there are several representative pages for the topic. In this example, since many hotel sites are included in the input URLs, the topic of the community is focused to hotels in the process of puriﬁcation. Both HITS and our method are based on graph structures that are extracted locally from the vast Web network. Since our method acquires Web data during the process of puriﬁcation and renews the members of communities iteratively, it is expected to perform well even when the members of the initial communities are not representative ones.


T. Murata

5 Concluding Remarks

This paper proposes a new method for purifying Web communities based on the graph structure of hyperlinks. Results from a system developed on the basis of our method are also shown. The following points should be mentioned for "purifying" our future research targets:

• The method proposed in this paper can be viewed as a method for searching for a complete bipartite subgraph, corresponding to a community, within a graph. Although the eﬀectiveness of our method depends heavily on the graph structure of the target communities, the typical graph structure of Web communities is not clear. We have to study further the models of such structures that ﬁt actual Web communities well.
• There is no standard test data set for evaluating Web mining systems. The above experiments are made on the assumption that the URLs listed in each topic of 100hot.com are ranked in order of relevance to the topic. However, this assumption is not always true: in the ranking used for our experiments, the top-ranking URL for the topic Car is Microsoft.com! In order to evaluate the performance of our system objectively, some kind of standard test data set for Web mining is really needed.

References

1. K. Bharat, M. Henzinger: "Improved Algorithms for Topic Distillation in a Hyperlinked Environment", Proc. of the 21st Int'l ACM SIGIR Conf., pp. 104-111, 1998.
2. A. Broder et al.: "Graph Structure in the Web", Proc. of the WWW9 Conference, 2000.
3. S. Chakrabarti et al.: "Experiments in Topic Distillation", Proc. of the ACM SIGIR Workshop on Hypertext Information Retrieval on the Web, 1998.
4. S. Chakrabarti et al.: "Mining the Web's Link Structure", IEEE Computer, Vol. 32, No. 8, pp. 60-67, 1999.
5. D. Gibson, J. Kleinberg, P. Raghavan: "Inferring Web Communities from Link Topology", Proc. of the 9th Conf. on Hypertext and Hypermedia, 1998.
6. M. Henzinger: "Hyperlink Analysis for the Web", IEEE Internet Computing, Vol. 5, No. 1, pp. 45-50, 2001.
7. J. Kleinberg et al.: "The Web as a Graph: Measurements, Models, and Methods", Proc. of COCOON '99, LNCS 1627, pp. 1-17, Springer, 1999.
8. R. Kosala, H. Blockeel: "Web Mining Research: A Survey", ACM SIGKDD Explorations, Vol. 2, No. 1, pp. 1-15, 2000.
9. R. Kumar et al.: "Trawling the Web for Emerging Cyber-Communities", Proc. of the 8th WWW Conference, 1999.
10. T. Murata: "Machine Discovery Based on the Co-occurrence of References in a Search Engine", Proc. of DS'99, LNAI 1721, pp. 220-229, Springer, 1999.
11. T. Murata: "Discovery of Web Communities Based on the Co-occurrence of References", Proc. of DS2000, LNAI 1967, pp. 65-75, Springer, 2000.
12. L. Page et al.: "The PageRank Citation Ranking: Bringing Order to the Web", online manuscript, http://www-db.stanford.edu/~backrub/pageranksub.ps, 1998.

KeyWorld: Extracting Keywords from a Document as a Small World

Yutaka Matsuo1,2, Yukio Ohsawa2,3, and Mitsuru Ishizuka1

1 University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo 113-8656, Japan, matsuo@miv.t.u-tokyo.ac.jp, http://www.miv.t.u-tokyo.ac.jp/~matsuo/
2 Japan Science and Technology Corporation, Tsutsujigaoka 2-2-11, Miyagino-ku, Sendai, Miyagi 983-0852, Japan
3 University of Tsukuba, Otsuka 3-29-1, Bunkyo-ku, Tokyo 113-0012, Japan

Abstract. The small world topology is known to be widespread in biological, social, and man-made systems. This paper shows that the small world structure also exists in documents, such as papers. A document is represented by a network: the nodes represent terms, and the edges represent the co-occurrence of terms. This network is shown to have the characteristics of a small world, i.e., nodes are highly clustered, yet the path length between them is small. Based on this topology, we develop an indexing system called KeyWorld, which extracts important terms by measuring their contribution to the graph being a small world.

1 Introduction

Graphs that occur in many biological, social, and man-made systems are often neither completely regular nor completely random, but instead have a "small world" topology, in which nodes are highly clustered yet the path length between them is small [11][10]. For instance, if you are introduced to someone at a party in a small world, you can usually ﬁnd a short chain of mutual acquaintances that connects you together. In the 1960s, Stanley Milgram's pioneering work on the small world problem showed that any two randomly chosen individuals in the United States are linked by a chain of six or fewer ﬁrst-name acquaintances, known as "six degrees of separation" [5]. Watts and Strogatz have shown that a social graph (the collaboration graph of actors in feature ﬁlms), a biological graph (the neural network of the nematode worm C. elegans), and a man-made graph (the electrical power grid of the western United States) all have a small world topology [11][10]. The World Wide Web also forms a small world network [1].

In the context of document indexing, an innovative algorithm called KeyGraph [6] has been developed, which utilizes the structure of the document. A document is represented as a graph: each node corresponds to a term,1 and each edge corresponds to the co-occurrence of terms. Based on the segmentation of this graph

1. A term is a word or a word sequence.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 271–281, 2001. © Springer-Verlag Berlin Heidelberg 2001


into clusters, KeyGraph ﬁnds keywords by selecting the terms which co-occur in multiple clusters. Recently, KeyGraph has been applied to several domains, from earthquake sequences [7] to register transaction data of retail stores, and has shown remarkable versatility.

In this paper, inspired by both the small world phenomenon and KeyGraph, we develop a new algorithm, called KeyWorld, to ﬁnd important terms. We ﬁrst show that the graph derived from a document has small world characteristics. To extract important terms, we ﬁnd those terms which contribute to the world being small. The contribution is quantitatively measured by the diﬀerence in "small-worldliness" with and without the term.

The rest of the paper is organized as follows. In the following section, we ﬁrst describe the small world topology in detail, and show that some documents actually have small world characteristics. Then we explain how to extract important terms in Section 3. We evaluate KeyWorld and suggest further improvements in Section 4. Finally, we discuss future work and conclude the paper.

2 Term Co-occurrence Graph and Small World

2.1 Small-Worldliness

We treat an undirected, unweighted, simple, sparse, and connected graph. (We extend this to unconnected graphs in Section 3.) To formalize the notion of a small world, Watts and Strogatz deﬁne the clustering coeﬃcient and the characteristic path length [11][10]:

– The characteristic path length, L, is the path length averaged over all pairs of nodes. The path length d(i, j) is the number of edges in the shortest path between nodes i and j.
– The clustering coeﬃcient is a measure of the cliqueness of the local neighborhoods. For a node with k neighbors, at most kC2 = k(k − 1)/2 edges can exist between them. The clustering of a node is the fraction of these allowable edges that occur. The clustering coeﬃcient, C, is the average clustering over all the nodes in the graph.

Watts and Strogatz deﬁne a small world graph as one in which L ≥ Lrand (or L ≈ Lrand) and C ≫ Crand, where Lrand and Crand are the characteristic path length and clustering coeﬃcient of a random graph with the same number of nodes and edges. They propose several models of graphs, one of which is called β-Graphs. Starting from a regular graph, they introduce disorder into the graph by randomly rewiring each edge with probability p, as shown in Fig. 1. If p = 0 the graph is completely regular and ordered; if p = 1 the graph is completely random and disordered. Intermediate values of p give graphs that are neither completely regular nor completely disordered. They are small worlds. Walsh deﬁnes the proximity ratio

µ = (C/L) / (Crand/Lrand)    (1)
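The two statistics above and the proximity ratio of Eq. (1) can be computed directly from an adjacency representation. The sketch below is illustrative, not Watts and Strogatz's or Walsh's code; graphs are dicts mapping a node to its set of neighbors, the graph is assumed connected, and the random-graph baselines Crand and Lrand are taken as given.

```python
from collections import deque
from itertools import combinations

def path_length(graph, src):
    """BFS shortest-path lengths from src."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def characteristic_path_length(graph):
    """L: path length averaged over all pairs (graph assumed connected)."""
    nodes = list(graph)
    total = sum(path_length(graph, u)[v] for u, v in combinations(nodes, 2))
    return total / (len(nodes) * (len(nodes) - 1) / 2)

def clustering_coefficient(graph):
    """C: fraction of possible edges among each node's neighbors, averaged."""
    cs = []
    for u, nbrs in graph.items():
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in graph[a])
        cs.append(links / (k * (k - 1) / 2))
    return sum(cs) / len(cs)

def proximity_ratio(graph, c_rand, l_rand):
    """Walsh's small-worldliness mu = (C/L) / (Crand/Lrand)."""
    return (clustering_coefficient(graph) / characteristic_path_length(graph)) \
           / (c_rand / l_rand)
```

On a complete triangle both L and C equal 1; on a three-node path C drops to 0 while L grows, which is the trade-off the proximity ratio captures.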

Fig. 1. Random rewiring of a regular ring lattice (regular, p = 0; small world; random, p = 1; increasing randomness from left to right).

Table 1. Characteristic path lengths L, clustering coeﬃcients C, and proximity ratios µ for graphs with a small world topology [9] (studied in [11]).

              L      Lrand   C      Crand     µ
Film actor    3.65   2.99    0.79   0.00027   2396
Power grid    18.7   12.4    0.080  0.005     10.61
C. elegans    2.65   2.55    0.28   0.05      4.755

The graphs are deﬁned as follows. For the ﬁlm actors, two actors are joined by an edge if they have acted in a ﬁlm together. For the power grid, nodes represent generators, transformers, and substations, and edges represent high-voltage transmission lines between them. For C. elegans, an edge joins two neurons if they are connected by either a synapse or a gap junction. Because the number of nodes and edges for each graph is diﬀerent, the magnitudes of L, C, and µ diﬀer.

as the small-worldliness of the graph [9]. As p increases from 0, L drops sharply, since a few long-range edges introduce short cuts into the graph. These short cuts have little eﬀect on C. As a consequence, the proximity ratio µ rises rapidly and the graph develops a small world topology. As p approaches 1, the neighborhood clustering starts to break down, and the short cuts no longer have a dramatic eﬀect in linking up nodes. C and µ therefore drop, and the graph loses its small world topology. In Table 1, we can see that µ is large in the graphs with a small world topology. In short, small world networks are characterized by the distinctive combination of high clustering with short characteristic path length.

2.2 Term Co-occurrence Graph

A graph is constructed from a document as follows. We ﬁrst preprocess the document by stemming and removing stop words, as in [8]. We apply an n-gram model to count phrase frequencies. We then regard the title of the document, each section title, and each caption of ﬁgures and tables as a sentence, and exclude all the ﬁgures, tables, and references. We get a list of sentences, each of which consists of words (or phrases). In other words, we get basket data in which each item is a term, discarding the information of term orderings and document structures.


Table 2. Statistical data on proximity ratios µ for 57 graphs of papers in WWW9.

        L      Lrand   C      Crand   µ
Max.    4.99   3.58    0.38   0.012   22.81
Ave.    5.36   —       0.33   —       15.31
Min.    8.13   2.94    0.31   0.027   4.20

We set fthre = 3. We restrict attention to the giant connected component of the graph, which includes 89% of the nodes on average. We exclude three papers for which the giant connected component covers less than 50% of the nodes. We do not show Lrand and Crand for the average case, because n and k diﬀer depending on the target paper. On average, n = 275 and k = 5.04.

We then pick up frequent terms which appear more than a user-given threshold, fthre, times, and ﬁx them as nodes. For every pair of terms, we count the co-occurrence over all sentences, and add an edge if the Jaccard coeﬃcient exceeds a threshold, Jthre.2 The Jaccard coeﬃcient is simply the number of sentences that contain both terms divided by the number of sentences that contain either term. This idea is also used in constructing a referral network from WWW pages [4]. We assume the length of each edge is 1.

Table 2 shows statistics of the small-worldliness of 57 graphs, each constructed from a technical paper that appeared at the 9th International World Wide Web Conference (WWW9) [12]. From this result, we can conjecture that these papers indeed have small world structures. However, the small-worldliness varies from paper to paper. One possible reason a paper has a small world structure is that the author may mention some concepts step by step (making clusters of related terms), and then try to merge the concepts and build up new ideas (making a 'shortcut' between clusters). The author will keep in mind that the new idea is steadily connected to the fundamental concepts, but not redundantly. As we have seen, however, the small-worldliness varies from paper to paper; certainly it depends on the subject, the aim, and the author's writing style.
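A minimal sketch of this graph construction follows. It is illustrative, not the KeyWorld implementation: sentences arrive as lists of already preprocessed terms, frequent terms become nodes, and an edge joins two terms whose Jaccard coefficient over sentences exceeds the threshold.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_graph(sentences, f_thre=3, j_thre=0.3):
    """Build (nodes, edges) from sentences given as lists of terms."""
    # A term's frequency counts the sentences it appears in.
    freq = Counter(t for s in sentences for t in set(s))
    nodes = [t for t, f in freq.items() if f >= f_thre]
    edges = set()
    for a, b in combinations(sorted(nodes), 2):
        both = sum(1 for s in sentences if a in s and b in s)
        either = sum(1 for s in sentences if a in s or b in s)
        # Jaccard: sentences containing both terms / sentences containing either.
        if either and both / either > j_thre:
            edges.add((a, b))
    return nodes, edges
```

All edges are taken to have length 1, matching the unweighted treatment in the text.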

3 Finding Important Terms

3.1 Shortcut and Contractor

Admitting that a document is a small world, how does it beneﬁt us? We try here to estimate the importance of a term, and to pick up important terms, even if they are rare in the document, based on the small world structure. We consider 'important terms' to be the terms which reﬂect the main topic, the author's ideas, and the fundamental concepts of the document.

2. In this paper, we set Jthre so that the number of neighbors, k, is around 4.5 on average.

KeyWorld: Extracting Keywords from a Document as a Small World


First we introduce the notions of a shortcut and a contractor, following the definitions in [10].

Definition 1. The range R(i, j) is the length of the shortest path between i and j in the absence of the edge (i, j). If R(i, j) > 2, then the edge (i, j) is called a shortcut.

Applying the notion of "shortcut" to nodes, we get the definition of "contractor."

Definition 2. If two nodes u and w are both elements of the same neighborhood Γ(v), and the length of the shortest path between them that does not involve any edges adjacent to v, denoted dv(u, w), satisfies dv(u, w) > 2, then v is said to contract u and w, and v is called a contractor.

At first thought, if dv(u, w) is large, the term corresponding to the contractor v might be interesting, because it bridges distant notions which rarely appear together. However, such a node sometimes connects nodes far from the center of the graph, i.e., far from the main topic of the document. Below we take the whole structure of the graph into account, calculating the contribution of a node to making the world small. To treat disconnected graphs, we extend the definition of path length (though Watts restricts attention to the giant connected component of the graph).

Definition 3. The extended path length d'(i, j) of nodes i and j is defined as follows:

d'(i, j) = d(i, j) if i and j are connected; d'(i, j) = wsum otherwise, (2)

where wsum is a constant, the sum of the widths of all the disconnected subgraphs, and d(i, j) is the length of the shortest path between i and j in a connected graph. If some edges are added to the graph and some parts of the graph get connected, d'(i, j) does not increase, as long as no edge length is negative. Thus d'(i, j) is an upper bound on the path length, whatever edges are later added.

Definition 4. The extended characteristic path length L' is the extended path length averaged over all pairs of nodes.

Definition 5. Lv is the extended path length averaged over all pairs of nodes except node v. LGv is the extended characteristic path length of the graph without node v.

A connected component of a graph is a set of nodes such that every pair of nodes in it is connected by a path. A connected component is called a giant connected component if it contains more than 50% of the nodes in the graph.
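Definitions 1 and 2 can be checked mechanically with breadth-first search. The sketch below is ours (the function names are hypothetical): a graph is an adjacency dict, and we report whether an edge is a shortcut (Definition 1) or a node a contractor (Definition 2), treating an unreachable pair as exceeding any range.

```python
from collections import deque

def bfs_dist(adj, src, dst, banned_edge=None, banned_node=None):
    """Shortest path length from src to dst; None if unreachable.
    banned_edge: an edge {a, b} that may not be traversed.
    banned_node: a node whose incident edges may not be traversed."""
    seen = {src}
    q = deque([(src, 0)])
    while q:
        x, d = q.popleft()
        if x == dst:
            return d
        for y in adj[x]:
            if y in seen or y == banned_node:
                continue
            if banned_edge and frozenset((x, y)) == banned_edge:
                continue
            seen.add(y)
            q.append((y, d + 1))
    return None

def is_shortcut(adj, u, v):
    """Definition 1: (u, v) is a shortcut if the range R(u, v), the shortest
    u-v distance without the edge itself, exceeds 2 (or is infinite)."""
    r = bfs_dist(adj, u, v, banned_edge=frozenset((u, v)))
    return r is None or r > 2

def is_contractor(adj, v):
    """Definition 2: v contracts two neighbors u, w if their shortest path
    avoiding v's edges, d_v(u, w), exceeds 2 (or is infinite)."""
    nbrs = list(adj[v])
    for i in range(len(nbrs)):
        for j in range(i + 1, len(nbrs)):
            d = bfs_dist(adj, nbrs[i], nbrs[j], banned_node=v)
            if d is None or d > 2:
                return True
    return False
```

In a 5-cycle, for instance, every edge is a shortcut and every node a contractor, while in a triangle none are.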


Y. Matsuo, Y. Ohsawa, and M. Ishizuka

Table 3. Frequent terms in this paper.

Term         Frequency
term         39
small        36
world        35
graph        33
small world  27
node         26
document     25
length       20
important    19
paper        18

Table 4. Terms with 10 largest CBv in this paper.

Term            CBv   Frequency
small world     4.38  27
contribution    3.11  11
node            2.98  26
list            2.24   8
author          1.36   7
table           1.10   8
important term  0.80  11
show            0.72   6
structure       0.44   7
KeyWorld        0.44  10

In other words, Lv is the characteristic path length with the node v regarded as a corridor (i.e., a set of edges): for example, if v neighbors u, w, and z, then (u, w), (u, z), and (w, z) are considered to be linked. LGv is the extended characteristic path length assuming the corridor does not exist.

Definition 6. The contribution CBv of the node v to making the world small is defined as follows:

CBv = LGv − Lv. (3)

We pay no attention to the clustering coefficient, because adding or eliminating one node affects it little. If a node v with large CBv is absent from the graph, the graph becomes very large; in the context of documents, the topics are divided. We assume that such a term helps merge the structure of the document, and is thus important.
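The contribution CBv can be sketched directly from Definitions 3-6. This is our own illustrative reading, not the authors' code: wsum is passed in as a constant (the paper defines it as the sum of the widths of the disconnected subgraphs), the 'corridor' version links v's neighbors pairwise, and unreachable pairs count as wsum.

```python
from collections import deque
from itertools import combinations

def _avg_extended(adj, nodes, w_sum):
    """Extended path length (Definition 3) averaged over all node pairs;
    an unreachable pair contributes the constant w_sum."""
    total, pairs = 0, 0
    for i, j in combinations(nodes, 2):
        seen, q, d_ij = {i}, deque([(i, 0)]), None   # BFS from i to j
        while q:
            x, d = q.popleft()
            if x == j:
                d_ij = d
                break
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    q.append((y, d + 1))
        total += d_ij if d_ij is not None else w_sum
        pairs += 1
    return total / pairs

def contribution(adj, v, w_sum):
    """CBv = LGv - Lv (Eq. (3)): average extended path length of the graph
    without v, minus that of the graph with v replaced by a 'corridor'
    linking its neighbors pairwise."""
    nodes = [x for x in adj if x != v]
    without = {x: {y for y in adj[x] if y != v} for x in nodes}
    corridor = {x: set(without[x]) for x in nodes}
    for a, b in combinations(adj[v], 2):   # pairwise-link v's neighbors
        corridor[a].add(b)
        corridor[b].add(a)
    return _avg_extended(without, nodes, w_sum) - _avg_extended(corridor, nodes, w_sum)
```

On the path graph 1-2-3, the bridge node 2 has a large contribution (removing it disconnects 1 and 3), whereas a leaf node contributes nothing.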


Table 5. Pairs of terms with 10 largest CBe.

Pair                           CBe
node – contribution            2.97
list – table                   1.47
contribution – important term  1.20
table – show                   1.10
contribution – structure       0.87
KeyWorld – list                0.87
important term – develop       0.79
network – show                 0.72
contribution – make            0.47
author – idea                  0.47

3.2 Example

We show an example applied to this paper itself, i.e., the one you are reading now. Table 3 shows the frequent terms, and Table 4 the important terms as measured by CBv. Comparing the two tables, the list of important terms includes the author's ideas, e.g., "important term" and "KeyWorld," as well as important basic concepts, e.g., "structure," although these do not appear frequently. The list of frequent terms, by contrast, simply shows the components of the paper and is not of interest. We can also measure the contribution of an edge, CBe, to making the world small, defined analogously to CBv. However, looking at the pairs of terms in Table 5, it is hard to understand what they suggest: there are many possible relations between two terms, so we cannot grasp the relation of a pair right away. Lastly, Fig. 2 shows a graphical visualization of the world of this paper. (Only the giant connected component of the graph is shown, though the other parts of the graph are also used for the calculation.) We can easily point out the terms without which the world would be separated, say "small world" and "contribution".

4 Evaluation and Improvements

This section describes an evaluation of KeyWorld as an indexing system. KeyWorld is not merely an indexing system; it also provides an understandable graphical representation of the document. However, we restrict attention here to the performance of KeyWorld as an indexing tool, in order to compare it with existing indexing techniques such as tf and tfidf. The tf measure is simply term frequency. The tfidf measure is the product of the term frequency and the inverse document frequency [8].

We ignore the effect of self-reference; it is sufficiently small. As the idf we use log(N/nv), where N is the number of documents in the collection and nv is the number of documents that include term v.

[Figure 2 appears here: a graph of the giant connected component of this paper's term co-occurrence network, with nodes such as "small world," "contribution," "important term," "KeyWorld," "KeyGraph," "path length," "characteristic path length," "clustering coefficient," and "Watts."]

Fig. 2. Small world of this paper.

When an author writes a paper, he/she annotates keywords to the paper by selecting the category of the paper (e.g., "text mining"), utilized algorithms (e.g., "small world"), or the proposed method (e.g., "KeyWorld"). The choice depends on the author's criteria. In our definition, a keyword is an important term in the document, which reflects the main topic, the author's ideas, and the fundamental concepts of the document. For example, considering this paper, we think "small world," "document," "contribution," "important term," "path length," and "KeyWorld" are keywords, and "node," "make," and "text mining" are not, because they are too trivial, too broad, or do not occur in this document. In the experiment, we asked the authors of 20 technical papers in the artificial intelligence field to judge, by questionnaire, whether given terms in their papers are keywords. For each document, we first obtain the top 15 weighted terms by tf, tfidf, KeyGraph, and KeyWorld, i.e., four lists of 15 terms. (We denote the list produced by method a as lista.) We merge the four lists and shuffle the terms. Then we ask the author whether each term is a keyword, after explaining the definition of keywords. Counting the number of authorized terms, we obtain the precision of method a as follows:

precisiona = (Number of authorized terms in lista) / (Number of terms in lista). (4)

(As a corpus for computing idf, we used 166 papers from the Journal of Artificial Intelligence Research, Vol. 1 in 1993 to Vol. 14 in 2001.)


Table 6. Precision and coverage.

            tf    KeyWorld  tfidf  KeyWorld+idf
precision  0.53   0.49      0.55   0.71
coverage   0.48   0.50      0.62   0.68

Table 7. Terms with 10 largest CBv × idfv in this paper.

Term            CBv × idfv  Frequency
small world     4.57        27
important term  3.82        11
co-occurrence   1.89         4
KeyWorld        1.58        10
short cut       1.56         4
actor           0.89         5
shortest path   0.66         4
sentence        0.66         4
document        0.66        23
path length     0.59        17

Next, from the shuffled list of all terms, the authors are asked to pick 5 (or more) indispensable terms, which they think are essential to the document and cover its most important concepts. We calculate the coverage of method a as follows:

coveragea = (Number of indispensable terms in lista) / (Number of indispensable terms). (5)
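Given an author's judgments, Eqs. (4) and (5) are straightforward to compute; the sketch below uses our own function names and toy data, not the study's actual judgments.

```python
def precision(list_a, authorized):
    """Eq. (4): fraction of terms in list_a that the author judged
    ('authorized') to be keywords."""
    return len([t for t in list_a if t in authorized]) / len(list_a)

def coverage(list_a, indispensable):
    """Eq. (5): fraction of the author's indispensable terms that
    appear in list_a."""
    la = set(list_a)
    return len([t for t in indispensable if t in la]) / len(indispensable)
```

For instance, a 4-term list with 2 authorized terms has precision 0.5; if it contains 2 of 5 indispensable terms, its coverage is 0.4.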

The results are shown in Table 6. The performance of KeyWorld alone is not good enough: its precision and coverage are almost equal to those of tf. However, the term list produced by KeyWorld includes very important terms alongside very dull words, e.g., "show" or "table" in Table 4. To sieve out these dull terms, we developed an improved weighting method, which weights term v by

CBv × idfv, (6)

where idfv is the idf measure for term v. The improved results are also shown in Table 6; both precision and coverage are now far better than tfidf. Table 7 shows the top 10 terms by KeyWorld with the idf factor for this paper. In summary, KeyWorld can often find important terms, but it also detects less important ones. Combined with the idf measure, KeyWorld can be a very good indexing tool.

If the author remembers other terms, he/she is permitted to add them to the list.
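The improved weighting of Eq. (6) is a one-liner once CBv values and document frequencies are known; the sketch below uses idfv = log(N/nv) as defined earlier, with our own function name and toy inputs.

```python
import math

def keyworld_idf_weights(cb, doc_freq, n_docs):
    """Eq. (6): weight each term v by CBv x idf_v, with
    idf_v = log(N / n_v), where N is the corpus size and n_v the
    number of corpus documents containing v."""
    return {t: cb[t] * math.log(n_docs / doc_freq[t]) for t in cb}
```

A term occurring in every corpus document gets idf 0 and is sieved out regardless of its CBv, which is exactly how the dull words of Table 4 are suppressed in Table 7.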

5 Discussion

The small world phenomenon was inaugurated as an area of experimental study in the social sciences by Stanley Milgram in the 1960s. Since then, numerous studies of network analysis have been done. The importance of weak ties, which are shortcuts between clusters of people, was pointed out 30 years ago [3]. Our measure of contribution is similar to "centrality" in the context of social network studies. Centrality can be measured in a number of ways [2]. Considering an actor's social network, the simplest is to count the number of others with whom an actor maintains relations: the actor with the most connections, i.e., the highest degree, is the most central. Another measure is closeness, which calculates the distance from each person in the network to every other person, based on the connections among all members of the network; central actors are closer to all others than other actors are. A third measure is betweenness, which examines the extent to which an actor is situated between others in the network, i.e., the extent to which information must pass through him/her to reach others, and thus the extent to which he/she is exposed to information circulating in the network. Our measure of contribution is distinctive, however, in that it calculates the difference in the closeness of all nodes with and without a certain node: it measures a node's contribution to the whole structure by temporarily eliminating that node.

6 Conclusion

Watts mentions in [10] possible applications of small world research, including "the train of thought followed in a conversation or succession of ideas leading to a scientific breakthrough." In this paper, we have focused on technical papers rather than conversations or successions of ideas. A future direction of our research is to treat directed or weighted graphs for finer analyses of documents. We expect our approach to be effective not only for document indexing but also for other graphical representations. Finding structurally important parts may bring deeper understanding of a graph, new perspectives, and chances to utilize it. We are interested in big structural changes caused by small changes to a graph. A change which makes the world very small may sometimes be very important.

References

1. R. Albert, H. Jeong, and A.-L. Barabási. The diameter of the World Wide Web. Nature, 401, 1999.
2. L. C. Freeman. Centrality in social networks: Conceptual clarification. Social Networks, 1:215–239, 1979.
3. M. Granovetter. The strength of weak ties. American Journal of Sociology, 78:1360–1380, 1973.
4. H. Kautz, B. Selman, and M. Shah. The hidden Web. AI Magazine, 18(2), 1997.


5. S. Milgram. The small-world problem. Psychology Today, 2:60–67, 1967.
6. Y. Ohsawa, N. E. Benson, and M. Yachida. KeyGraph: Automatic indexing by co-occurrence graph based on building construction metaphor. In Proc. Advanced Digital Library Conference (IEEE ADL'98), 1998.
7. Y. Ohsawa and M. Yachida. Discover risky active faults by indexing an earthquake sequence. In Proc. Discovery Science, pages 208–219, 1999.
8. G. Salton. Automatic Text Processing. Addison-Wesley, 1988.
9. T. Walsh. Search in a small world. In Proc. IJCAI-99, pages 1172–1177, 1999.
10. D. Watts. Small Worlds: The Dynamics of Networks between Order and Randomness. Princeton University Press, 1999.
11. D. Watts and S. Strogatz. Collective dynamics of 'small-world' networks. Nature, 393:440–442, 1998.
12. 9th International World Wide Web Conference. http://www9.org/.

Knowledge Navigation on Visualizing Complementary Documents

Naohiro Matsumura¹,³, Yukio Ohsawa²,³, and Mitsuru Ishizuka¹

¹ Graduate School of Engineering, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656 Japan
{matumura, ishizuka}@miv.t.u-tokyo.ac.jp
² Graduate School of Systems Management, University of Tsukuba, 3-29-1 Otsuka, Bunkyo-ku, Tokyo, 112-0012 Japan
osawa@gssm.otsuka.tsukuba.ac.jp
³ Japanese Science and Technology Corporation, 2-2-11 Tsutsujigaoka, Miyagino-ku, Sendai, Miyagi, 983-0852 Japan

Abstract. It is an up-to-date challenge to get answers to novel questions which nobody has ever considered. Such a question is too rare to be satisfied by a single past document. In this paper, we propose a new framework of knowledge navigation that graphically provides multiple documents relevant to a user's question. Our implemented system, named MACLOD, generates several navigational plans, each forming a complementary document-set rather than a single document, for navigating a user toward understanding a novel question. The obtained plans are mapped onto a 2-dimensional interface in which the documents of each document-set are connected with links, in order to support the user in selecting a plan smoothly. In experiments, the method obtained satisfactory answers to users' unique questions.

1 Introduction

It is an up-to-date challenge to answer a user's novel question that nobody has ever asked. However, such a question is too new to be satisfied by a single past document, and the knowledge required for understanding the documents relevant to a user's question depends on his/her background [4]. In our previous work [3], we proposed a novel information retrieval method, named combination retrieval, for creating novel knowledge by combining complementary documents. Here, a complementary set of documents is composed of documents whose combination supplies satisfactory information. This idea is based on the principle that combining ideas can trigger the creation of new ideas [1,2]. Throughout the discussions of that work, we verified that reading multiple complementary documents generates synergy effects which help us acquire novel knowledge. In this paper, we propose a new framework of knowledge navigation, i.e., supplying a user with new knowledge, satisfying the information request of a user by visualizing complementary documents.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 258–270, 2001.
© Springer-Verlag Berlin Heidelberg 2001

Our implemented system, named


MACLOD (Map of Complementary Links of Documents), generates several navigational plans, each formed by a document-set, for navigating a user toward understanding a novel question, by making use of combination retrieval [3]. The obtained plans are mapped onto a 2-dimensional interface in which the documents of each document-set are connected with links, in order to support the user in selecting complementary documents smoothly. The remainder of this paper is organized as follows. In Section 2, the meaning of our approach is shown by comparison with previous knowledge navigation methods. The mechanism of combination retrieval is described in Section 3, and the mechanism of MACLOD as implemented here is described in Section 4. Section 5 shows the experiments and their results, demonstrating the performance of MACLOD on medical counseling question-answer documents.

2 Previous Methods for Knowledge Navigation

The vision of knowledge navigation was presented by John Sculley (then the president of Apple Computer Inc.), in which an electronic secretary in a computer, named Knowledge Navigator, managed various tasks on behalf of users, e.g., managing schedules. The concept inspired us. However, it is still difficult to realize the Knowledge Navigator because of the complexity of a real secretary's tasks. A knowledge navigation system is a piece of software which answers a user's question. The question may be entered as a word-set query {alcohol, liver, cancer} or as a sentence query "Does alcohol cause a liver cancer?" An intelligent answer to this question may be "No, alcohol does not cause liver cancer directly. You may be confusing liver cancer with other liver damage from alcohol. Alcohol causes cancer in other tissues." To give such an answer, the system should have medical knowledge relevant to the user's query, and reason over that knowledge to answer the question. However, it is not realistic to implement knowledge broad enough to be applied to unique user interests. Another approach to navigating knowledge is to retrieve ready-made documents relevant to the current query from a prepared document collection. In this way, we can skip the process of knowledge acquisition and implementation, because man-made documents represent complex human knowledge directly. A search engine for a word-set query entered by the user may be the simplest realization of this approach. However, as noted in Section 1, existing information retrieval methods, which try to answer a query with ONE of the output documents, cannot satisfy novel interests.

3 The Process of Combination Retrieval

Combination retrieval [3] is a method for selecting meaningful documents which, as a set, serve as a good (readable and satisfactory) answer to the user. In this section, we review the algorithm of combination retrieval.

3.1 The Outline of the Process

The process of combination retrieval is as follows:

The Process of Combination Retrieval
Step 1) Accept the user's query Qg.
Step 2) Obtain G, a word-set representing the goal the user wants to understand, from Qg (G = Qg if Qg is given simply as a word-set).
Step 3) Make the knowledge-base Σ for the abduction of Step 4). For each document Dx in the document-collection Cdoc, a Horn clause is made describing the condition (words that need to be understood for reading Dx) and the effect (words subsequently understood by reading Dx).
Step 4) Obtain h, the optimal hypothesis-set which derives G if combined with Σ, by cost-based abduction (detailed later). The h obtained here is the union of the following, with the least size of K:
S: the document-set the user should read;
K: the keyword-set the user should understand for reading the documents in S.
Step 5) Show the documents in S to the user.

The intuitive meaning of employing abductive inference is to obtain the conditions for understanding the user's goal G. Here, the conditions include the documents to read (S) for understanding G, and the knowledge (K) necessary for reading those documents. That is, S is the combination of documents to be presented to the user.

3.2 The Details of Combination Retrieval's Process

In preparation, the collection Cdoc of existing human-made documents is stored. Key, the set of keyword candidates in the documents of Cdoc, i.e., the word-set that is the union of the keywords extracted from all documents in Cdoc, is obtained and fixed. Here, words are stemmed as in [5], stop words ("does", "is", "a", ...) are deleted, and then a constant number of words with the highest TFIDF values [6] (using Cdoc as the corpus for computing document frequencies of words) are extracted as keywords from each document in Cdoc. Next, let us go into the details of each step in 3.1.

Step 1) to 2) Make goal G from the user's query Qg: Goal G is defined as the set of words in Qg ∩ Key, i.e., the keywords in the user's query. For example, "does alcohol make me warm?" and the query {alcohol, warm} are both put into the same goal {alcohol, warm}, if Cdoc is a set of past question-answer pairs of a medical counselor whose Key does not contain "does", "make", or "me" (some are deleted as stop words).

Step 3) Make Horn clauses from documents: For the abductive inference in Step 4) of Subsection 3.1, the knowledge-base Σ is formed of Horn clauses. A Horn clause is a clause as in Eq. (1), which means that y becomes true under the condition that all of x1, x2, ..., xn are true, where the variables x1, x2, ..., xn, and y are atoms, each corresponding to an event occurrence. A Horn clause can simply describe causes (x1, x2, ..., xn) and their effect (y):

y :− x1, x2, ..., xn. (1)

In combination retrieval, the Horn clause for document Dx describes the cause (reading Dx with enough vocabulary knowledge) and the effect (acquiring new knowledge from Dx) of reading Dx, as:

α :− β1, β2, ..., βmx, Dx. (2)

Here, α is the effect term of Dx, a term (a word or a phrase) one can understand by reading document Dx. β1, β2, ..., βmx are the conditional terms of Dx, which should be understood in order to read and understand Dx. That is, one who knows the words β1, β2, ..., βmx and reads Dx with this knowledge is supposed to acquire knowledge about α. The method for taking the effect and conditional terms from Dx is straightforward. First, the effect terms α1, α2, ... are obtained as the terms in G ∩ (the keywords of Dx). This means that the effect of Dx is evaluated with respect to the user's interest G, rather than by the intention of the author of Dx. For example, a document about cancer symptoms may work as a description of the demerits of smoking, if the reader is a heavy smoker. Focusing on the user's goal in this way also speeds up the response of combination retrieval, as shown in Subsection 5.1. Then, the keywords of Dx other than the effect terms above form the conditional terms β1, β2, ..., βmx. As a result, Horn clauses are obtained as

α1 :− β1, β2, ..., βmx, Dx,
α2 :− β1, β2, ..., βmx, Dx,
... (3)

meaning that one who knows β1, β2, ..., βmx can read Dx and understand all the effect terms α1, α2, ... by reading Dx.

Step 4) Cost-based abduction for obtaining the documents to read: We employ cost-based abduction (CBA hereafter) [7], an inference framework, to obtain the solution h of least |K| in Subsection 3.1. In CBA, the causes of a given effect G are explained. Formally, CBA is described as extracting a minimal hypothesis-set h from a given set H of candidate hypotheses, so that h derives G using the knowledge Σ. That is, h satisfies Eq. (4) under Eq. (5) and Eq. (6). We deal with Σ composed of causal rules, expressed in the Horn clauses mentioned above.

Minimize cost(h), (4)
under the constraints:
h ⊂ H, (5)
h ∪ Σ ⊢ G. (6)


Eq. (4) represents the selection of h to be minimal, i.e., the lowest-cost hypothesis-set h(⊂ H), where the cost, denoted cost(h), is the sum of the weights of the hypotheses in h. The weights of the hypotheses in H, the candidate elements of the solution h, are given initially. Generally speaking, the weight values of hypotheses are closely related to the semantics of the problem to which CBA is applied, as exemplified in [8]. In combination retrieval, weights are given differently to the two types of hypotheses in H:

Type 1: the hypothesis that the user reads a document in Cdoc;
Type 2: the hypothesis that the user knows (has learned) a conditional term in Key.

In giving weights to hypotheses, we considered that the user should be able to understand the output documents in S while learning only a small set K of keywords from external knowledge other than Cdoc. This is reflected in minimizing |K|, the size of K. That is, the weights of Type 2 hypotheses are fixed to 1 and those of Type 1 to 0, and the content of h is S ∪ K. It might be better to give values between 0 and 1 to Type 2 hypotheses, each value representing the difficulty of learning the term; however, we do not know how easy each word is for the user to learn from outside of Cdoc. Further, it might seem necessary to give positive weights to Type 1 hypotheses, each value representing the cost of reading the document. This necessity can be discounted, however, because mx in Eq. (3) is proportional to the length of Dx: the user's cost (effort) of reading a document is implied by the number of meaningful keywords s/he must read in it. If we summed the heterogeneous difficulties of reading documents and of learning words, the meaning of the solution cost would become rather confusing.

3.3 An Example of Combination Retrieval's Execution

For example, combination retrieval runs as follows.
Step 1) Qg = "Does alcohol cause a liver cancer?"
Step 2) G is obtained from Qg as {alcohol, liver, cancer}.
Step 3) From Cdoc, documents D1, D2, and D3 are taken, each including terms in G, and put into Horn clauses as:

alcohol :− cirrhosis, cell, disease, D1.
liver :− cirrhosis, cell, disease, D1.
alcohol :− marijuana, drug, health, D2.
liver :− marijuana, drug, health, D2.
alcohol :− cell, disease, organ, D3.
cancer :− cell, disease, organ, D3.

The hypothesis-set H is formed of D1, D2, and D3 (Type 1 hypotheses, each weighted 0) and "cirrhosis," "cell," "disease," "marijuana," "drug," "health," and "organ" (Type 2 hypotheses, each weighted 1).


Step 4) h is obtained as S ∪ K, where S = {D1, D3} and K = {cirrhosis, cell, disease, organ}, meaning that the user should understand "cirrhosis," "cell," "disease," and "organ" in order to read D1 and D3, served as the answer to Qg. This solution is selected because its cost(h) (i.e., |K|) takes the value 4, less than the value 6 of the only alternative feasible solution, i.e., {marijuana, drug, health, cell, disease, organ} plus {D2, D3}.
Step 5) The user now reads the two documents presented:
D1 (including alcohol and liver), stating that alcohol alters liver function by changing liver cells into cirrhosis.
D3 (including alcohol and cancer), showing the causes of cancer in various organs, including a lot of alcohol; this document recommends that drinkers limit themselves to one ounce of pure alcohol per day.
As a result, the reader learns that s/he should limit drinking alcohol to keep the liver healthy and avoid cancer, and also comes to know that tissues other than the liver can get cancer from alcohol. Thus, the user can understand the answer while learning only a small number of words from outside of Cdoc, as we aimed for in employing CBA. More important than this major effect of combination retrieval is a by-product: the common hypotheses of D1 and D3, i.e., {cell, disease} of Type 2, are discovered as the context of the user's interest underlying the entered query. This effect is due to CBA, which obtains the smallest number of involved contexts for explaining the goal (i.e., answering the query) as solution hypotheses. Presenting such a novel and meaningful context to the user induces the user to create new knowledge [9], satisfying his/her novel interest.
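The abductive selection in this example can be reproduced with a brute-force search over document subsets (feasible here only because the example is tiny; CBA in general is NP-hard and the paper uses a dedicated solver). The document encoding and the function name `cheapest_plan` are ours, not the authors'.

```python
from itertools import combinations

# Each document: (effect terms it lets the reader understand,
#                 conditional terms required to read it), as in Step 3).
docs = {
    "D1": ({"alcohol", "liver"},  {"cirrhosis", "cell", "disease"}),
    "D2": ({"alcohol", "liver"},  {"marijuana", "drug", "health"}),
    "D3": ({"alcohol", "cancer"}, {"cell", "disease", "organ"}),
}

def cheapest_plan(docs, goal):
    """Brute-force cost-based abduction: Type 1 hypotheses (reading a
    document) cost 0, Type 2 hypotheses (learning a conditional term)
    cost 1, so cost(h) = |K|, the union of conditional terms."""
    best = None
    for r in range(1, len(docs) + 1):
        for subset in combinations(docs, r):
            effects = set().union(*(docs[d][0] for d in subset))
            if not goal <= effects:
                continue  # this document-set cannot derive the goal G
            K = set().union(*(docs[d][1] for d in subset))
            if best is None or len(K) < best[1]:
                best = (set(subset), len(K), K)
    return best

S, cost, K = cheapest_plan(docs, {"alcohol", "liver", "cancer"})
```

Running this recovers the solution of Step 4: S = {D1, D3} with cost 4, beating {D2, D3} at cost 6.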

4 MACLOD: Map of Complementary Links of Documents

Combination retrieval imposes two types of tasks on a user: reading the obtained document-set, and understanding the conditional terms of that document-set. These tasks are not always easy, however, since background knowledge differs from user to user. To take such existing knowledge of a user into consideration when generating the document-set to read, we propose a new framework that navigates a user by graphically providing multiple documents from several document-sets, each giving an answer to his/her interest. The concept of knowledge navigation in Section 2 can be realized in this framework. The implemented system, named MACLOD (MAp of Complementary Links Of Documents), visualizes these document-sets (each forming a complementary document-set) to navigate a user toward understanding his/her novel question. The process of MACLOD is as follows:


The Process of MACLOD
Phase 1. Obtain a plan for knowledge navigation: Obtain a plan (document-set S) for the user's query Qg along the procedure of combination retrieval in Section 3. The process is summarized as follows:
Step 1) Accept the user's query Qg.
Step 2) Obtain G, the goal the user wants to understand.
Step 3) Make the knowledge-base Σ for the abduction of Step 4).
Step 4) Obtain h, the optimal hypothesis-set which derives G if combined with Σ, by cost-based abduction.
Step 5) Show the documents obtained in Step 4) to the user.
Phase 2. Iterate Phase 1 to add plans: Iterate Phase 1 to obtain N plans, where inconsistency conditions are added to the knowledge-base Σ of Subsection 3.2 to avoid already obtained plans. The inconsistency condition considered in each cycle of Phase 1 is described as

inc :− Dx1, Dx2, ..., Dxn, (7)

where Dx1, Dx2, ..., Dxn are the documents obtained in the previous cycle of Phase 1. In addition, a document included in previous plans more than three times is forced not to be included in the next plan. This inconsistency condition, also added to the knowledge-base Σ, is described as

inc :− Dx1, (8)

where Dx1 is a document included in plans more than three times. The cycles of Phase 1 continue until the number of iterations reaches N; here, we empirically set N to 10.
Phase 3. Visualize the plans: MACLOD outputs a 2-dimensional interface onto which the plans obtained during the above iterations are mapped. In the interface, the documents of a plan obtained in one cycle of Phase 1 are connected with links to each other, in order to support the user in selecting appropriate documents.
Phase 4. Knowledge navigation: The user goes on reading documents along the links in the 2-dimensional interface until s/he understands, or gives up understanding, Qg.
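Phases 1 and 2 can be sketched as a loop around a brute-force version of the Phase 1 search, with the inconsistency conditions of Eqs. (7) and (8) realized as exclusion sets. This is our own illustrative reading, not the authors' implementation; the toy documents repeat the example of Section 3.3.

```python
from itertools import combinations
from collections import Counter

docs = {
    "D1": ({"alcohol", "liver"},  {"cirrhosis", "cell", "disease"}),
    "D2": ({"alcohol", "liver"},  {"marijuana", "drug", "health"}),
    "D3": ({"alcohol", "cancer"}, {"cell", "disease", "organ"}),
}

def solve(docs, goal, banned_sets, banned_docs):
    """One cycle of Phase 1: cheapest document-set covering the goal,
    excluding previously obtained plans (Eq. (7)) and over-used
    documents (Eq. (8)). Cost = size of the union of conditional terms."""
    best = None
    for r in range(1, len(docs) + 1):
        for subset in combinations(docs, r):
            s = frozenset(subset)
            if s in banned_sets or s & banned_docs:
                continue  # inconsistency conditions
            effects = set().union(*(docs[d][0] for d in subset))
            if not goal <= effects:
                continue
            cost = len(set().union(*(docs[d][1] for d in subset)))
            if best is None or cost < best[1]:
                best = (s, cost)
    return best

def maclod_plans(docs, goal, n_plans=10):
    """Phase 2: iterate Phase 1, banning obtained plans and documents
    used more than three times, until N plans or no plan remains."""
    plans, banned_sets, used = [], set(), Counter()
    for _ in range(n_plans):
        overused = frozenset(d for d in used if used[d] > 3)
        found = solve(docs, goal, banned_sets, overused)
        if found is None:
            break
        s, cost = found
        plans.append((sorted(s), cost))
        banned_sets.add(s)
        used.update(s)
    return plans

plans = maclod_plans(docs, {"alcohol", "liver", "cancer"})
```

On the toy collection this yields the ranked plans {D1, D3} (cost 4), {D2, D3} (cost 6), and {D1, D2, D3} (cost 7), then stops, mirroring a (much smaller) version of Table 1.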

5 Experimental Evaluations of MACLOD

5.1 The Experimental Conditions

MACLOD is implemented on a Celeron 500 MHz machine with 320 MB of memory. Although CBA is time-consuming because of its NP-completeness, most answers in the experiments were returned within ten seconds of the entry of the query, by high-speed abduction as in [12]. Queries from users included 4 or fewer terms in Key, due to which the response time was below 10 sec. This quick response also comes from the goal-oriented construction of Horn clauses shown in Subsection 3.2. The document-collection Cdoc of MACLOD is


1808 question-answer pairs from Alice, a health care question answering service on the WWW (http://www.alice.columbia.edu). A collection as small as 1808 documents is a suitable condition for evaluating MACLOD on a sparse document-collection that is insufficient for answering novel queries.

5.2 An Example of MACLOD's Execution

When a user entered a query as a word-set or a sentence, MACLOD obtained the ten plans (document-sets) in Table 1 and showed the 2-dimensional output in Fig. 1. In this case, the input {alcohol, fat, calorie} was entered as the query Qg, for knowing whether the calories of alcohol change into fat.

Table 1. The top 10 plans for the input query {alcohol, fat, calorie}.

Ranking  Plan (document-set)   Cost
 1       d1459, d0181          25
 2       d1459, d0611          26
 3       d1459, d0426          27
 4       d1802, d0181          27
 5       d0576, d0181          27
 6       d1802, d0882, d0611   39
 7       d1802, d1100, d0611   39
 8       d0746, d0576, d1466   39
 9       d1730, d0576, d1466   39
10       d0746, d0331, d1466   41

The process of understanding the user's interest (expressed as Qg) begins by reading the document-set d1459 and d0181 (double-circle nodes in Fig. 1), the top-ranked plan of MACLOD. Their summaries are as follows: d1459 (including fat and calorie) states that if calories come up short, protein is burned into energy, and that the lack of protein delays recovery from distress or weakens resistance to disease. d0181 (including alcohol) states that drinking too much alcohol damages various tissues, especially the liver and the heart. After reading these two documents, the user's interest was not fully satisfied, since the documents do not directly mention the causality between the calories of alcohol and fat. When the output does not satisfy one's interest, the user begins to select and read other documents linked from the already-read documents, in order to get new information about Qg. MACLOD supports this complementary reading process with a 2-dimensional interface in which the user can grasp the whole set of relations among the documents of the obtained plans. That is, the user can pick another document, one that complements the already-read documents, in order to reach satisfaction.


N. Matsumura, Y. Ohsawa, and M. Ishizuka

Fig. 1. A 2-dimensional interface of MACLOD. Documents are shown as nodes, and complementary documents are connected with links.

The subsequent steps proceed, for example, as follows. In Fig. 1, d0611 and d0426 are linked from d1459, and d1802 and d0576 are linked from d0181. Because the user wanted to know the limit on the amount of alcohol to drink, the user was satisfied by reading d0611, which states the adequate quantity of alcohol per day. Also, d0576, stating the ideal quantity of calories per day, satisfied the user further, because his potential interest was in diet. Thus, MACLOD can supply complementary documents step by step according to the user's interests, until the user is satisfied.

5.3 The Answering System Compared with MACLOD

We compared the performance of MACLOD with the following typical search engine for question answering, which we call here a Vector-based FAQ-finder (VFAQ for short hereafter).

The Procedure of VFAQ
Step1’) Prepare a keyword-vector vx for each question Qx in Cdoc.
Step2’) Obtain the keyword-vector vQ for the current query Qg.


Step3’) Find the top N keyword-vectors prepared in Step1’), in decreasing order of the product value vx · vQ, and return their corresponding answers.

Here, a keyword-vector for a query Q is formed as follows. Each vector has |Key| attributes (Key was introduced in 3.2 as the candidate keywords in Cdoc), each taking the TFIDF value [6] in Q of the corresponding keyword. Each vector v is normalized so that |v| = 1. For example, for the query Qg = {alcohol, warm} (or a question which is put into G: {alcohol, warm}), the vector becomes (0, 0.99, 0, · · · , 0, 0.14, 0, 0, · · ·), where 0.99 and 0.14 are the normalized TFIDF values of "alcohol" and "warm" in Qg. Elements of value 0 correspond to terms that are in Key but not included in Qg. Supplying N documents in Step3’) sets up a condition similar to MACLOD's, so that a fair comparison becomes possible.
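Steps 1'–3' amount to dot-product ranking of normalized keyword-vectors over Key. A minimal sketch of this idea follows; the helper names and the toy IDF table are illustrative, not VFAQ's actual implementation:

```python
import math

def keyword_vector(terms, key, idf):
    # Step1'/Step2': TFIDF weight per keyword in Key, normalized to |v| = 1
    v = [terms.count(t) * idf.get(t, 0.0) for t in key]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v] if norm > 0 else v

def vfaq(query_terms, questions, key, idf, n=3):
    # Step3': rank stored questions by the dot product vx . vQ and
    # return the indices of the top-N (their answers would be returned)
    vq = keyword_vector(query_terms, key, idf)
    scored = sorted(
        ((sum(a * b for a, b in zip(keyword_vector(q, key, idf), vq)), i)
         for i, q in enumerate(questions)),
        reverse=True)
    return [i for _, i in scored[:n]]

key = ["alcohol", "fat", "calorie", "warm"]   # stand-in for Key
idf = {t: 1.0 for t in key}                   # toy IDF values
questions = [["alcohol", "calorie"], ["warm", "warm"], ["fat"]]
top = vfaq(["alcohol", "calorie"], questions, key, idf, n=1)
```

Because every vector is unit length, the dot product is the cosine similarity, so VFAQ always returns the single stored questions closest to the query, never a complementary combination.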

5.4 Result Statistics

The experiment was executed with 5 subjects from 21 to 30 years old, so the subjects were close in age to the past question askers of Alice. A popular method for evaluating the performance of a search engine is to measure recall (the number of relevant documents retrieved, divided by the number of documents in Cdoc relevant to the user's query) and precision (the number of relevant documents retrieved, divided by the number of retrieved documents). However, this traditional manner of evaluation is not appropriate for MACLOD, because MACLOD does not output a sheer list of the documents most relevant to the query. In the traditional evaluation, it is regarded as a success if the user is satisfied by reading a few documents ranked highly in the output list. MACLOD, on the other hand, aims at satisfying a user who reads several documents along the pathways, rather than a few best documents. Therefore, this section presents an original way of evaluating MACLOD.

Here, 42 queries were entered. This may seem quite a small amount of evaluation data, but we compromised on this size because we aimed at having each subject evaluate the returned answers in a natural manner. That is, in order for a subject to report whether s/he was really satisfied with the output, the subject must enter his/her real "rare" interest; otherwise, the subject has to imagine an unreal person who asks the rare query and then imagine what that person would feel about the returned answers. We therefore restricted the evaluation to a small number of queries entered from real novel interests.

The overall result is shown in Fig. 2. The horizontal axis gives the number of documents read in series, and the vertical axis the number of satisfied queries. According to the subjects, MACLOD did better than VFAQ, especially for novel queries. For x = 1, MACLOD and VFAQ equally satisfied 16 queries. On the other hand, for x = 2, MACLOD satisfied 12 queries whereas VFAQ satisfied 4; for x = 3, MACLOD satisfied 6 queries whereas VFAQ satisfied 3. Finally, for x ≥ 4, MACLOD and VFAQ each satisfied 3 queries. Thus, the superiority of MACLOD for x greater than 1 came to be


Fig. 2. Statistical results.

apparent. In all cases, VFAQ retrieved redundant documents, i.e., documents of similar contexts that were equally relevant to the query. These results can be summarized as follows: novel queries for Cdoc were answered satisfactorily by MACLOD. According to the subjects, the answers in the form of document-combinations visualized by MACLOD were easy to read and to browse along the links, and the presented answers were meaningful for the user.

5.5 Comparison with Other Methods

Among the rare systems that combine documents for answering a novel query, Hyper Bridges [10] and NaviPlan [11] produce a plan for the user's reading of documents. They present a plan made of multiple sorted documents, and a user who reads them in the order sorted by Hyper Bridges or NaviPlan incrementally refines his/her own knowledge until s/he learns the meaning of the entered query. A plan made by these tools is a serial set of documents, which guides the user to an understanding of the query starting from a beginner's knowledge, in the order presented by the system. As a result, neither NaviPlan nor Hyper Bridges can obtain an appropriate document to be read last, i.e., the document that directly reaches the goal (i.e., answers the query), in any of the examples above where multiple documents must be mixed to answer the query. The combination retrieval and its advanced version MACLOD, on the other hand, make a complementary set of documents, each supplementing the content of the others to give a satisfactory answer as a whole. The user may read the documents in an obtained document-set in any order s/he likes. In particular, MACLOD gives the user a more flexible search interface than the original combination retrieval. Let us show the merit of MACLOD compared with the previous combination retrieval. In short, the merit is that the user can select documents matching his/her interest, reactively reflecting the context of the documents already read.


The fair extension of the combination retrieval to compare with MACLOD is to have it output as many document-sets as MACLOD obtains. In such an output style, it is difficult to control the context of the documents to read. That is, the order of the sets, sorted on cost, does not always correspond to the user's interest, and it often bothers the user to have to read the document-sets in an undesired order. In this example, if the user feels that d1459 mismatches his/her context, s/he will not reach any satisfactory document-set in the list. Nor does a MACLOD-like output as in Fig. 1 make things better in this case, because d1459 is shared by all the sets. In all trials of obtaining and showing the highly ranked document-sets of the combination retrieval, the user was fixed to the context bound by a "central" document such as d1459, whether or not s/he desired that situation. From this problem with the combination retrieval, we can point out the two-fold merit of MACLOD.
1. By discarding documents that have already appeared many times in the output document-sets during the process (see Section 4), MACLOD can include document-sets of various contexts in the output. This enables the user to choose suitable contexts reactively in the search process.
2. The graphical output makes context control easier, because the links between nodes (documents) represent the complementary relations (i.e., as documents to be read together) between contexts. If the user feels a document is misleading, s/he can open a document linked from the current one without feeling a sudden departure from the current context.

6 Conclusions

The combination retrieval, a method for obtaining a set of documents that answers a novel query, has been fully described, and its visual interface MACLOD has been introduced. Combination retrieval presents the user with a set of documents, not a single document, for answering a new query that cannot be answered by any single past answer to a past query. The MACLOD interface gives the user further comfort in acquiring novel knowledge: MACLOD allows the user to efficiently alter a part of the reading-plan (i.e., document-set) s/he is currently following, improving his/her satisfaction. This effect works especially well if the interest is novel, i.e., if the context is too particular to be captured by past Q&A's.

References

1. Hadamard, J.: The Psychology of Invention in the Mathematical Field. Princeton University Press, 1945.
2. Swanson, D.R. and Smalheiser, N.R.: An Interactive System for Complementary Literatures: a Stimulus to Scientific Discovery. Artificial Intelligence, Vol. 91, 183–203, 1997.
3. Matsumura, N. and Ohsawa, Y.: Combination Retrieval for Creating Knowledge from Sparse Document Collection. Proc. of Discovery Science, 320–324, 2000.


4. Brookes, B.C.: The Foundations of Information Science. Journal of Information Science, Vol. 2, 125–133, 1980.
5. Porter, M.F.: An Algorithm for Suffix Stripping. Automated Library and Information Systems, Vol. 14, No. 3, 130–137, 1980.
6. Salton, G. and Buckley, C.: Term-Weighting Approaches in Automatic Text Retrieval. Readings in Information Retrieval, 323–328, 1998.
7. Charniak, E. and Shimony, S.E.: Probabilistic Semantics for Cost Based Abduction. Proc. of AAAI-90, 106–111, 1990.
8. Ohsawa, Y. and Yachida, M.: An Index Navigator for Understanding and Expressing User's Coherent Interest. Proc. of IJCAI-97, 1: 722–729, 1997.
9. Nonaka, I. and Takeuchi, H.: The Knowledge Creating Company. Oxford University Press, 1995.
10. Ohsawa, Y., Matsuda, K. and Yachida, M.: Personal and Temporary Hyper Bridges: 2-D Interface for Undefined Topics. J. Computer Networks and ISDN Systems, 30: 669–671, 1998.
11. Yamada, S. and Ohsawa, Y.: Planning to Guide Concept Understanding in the WWW. AAAI-98 Workshop on AI and Data Integration, 121–126, 1998.
12. Ohsawa, Y. and Ishizuka, M.: Networked Bubble Propagation: A Polynomial-time Hypothetical Reasoning Method for Computing Near-optimal Solutions. Artificial Intelligence, Vol. 91, 131–154, 1997.

Learning Conformation Rules

Osamu Maruyama¹, Takayoshi Shoudai², Emiko Furuichi³, Satoru Kuhara⁴, and Satoru Miyano⁵

¹ Faculty of Mathematics, Kyushu University, Fukuoka, 812-8581, Japan, om@math.kyushu-u.ac.jp
² Department of Informatics, Kyushu University
³ Fukuoka Women's Junior College
⁴ Graduate School of Genetic Resources Technology, Kyushu University
⁵ Human Genome Center, Institute of Medical Science, University of Tokyo

Abstract. The protein conformation problem, one of the hard and important problems, is to identify conformation rules that transform sequences into their tertiary structures, called conformations. The aim of this work is to give a concrete theoretical foundation for a graph-theoretic approach to the protein conformation problem in the framework of a probabilistic learning model. We formulate the conformation problem as a problem of learning from hypergraphs that capture the conformations of proteins in a loose way. We consider several classes of functions based on conformation rules and show their PAC-learnability. The refutable PAC-learnability of functions is also discussed, which would be helpful when a target function is not in the class of functions under consideration. We also report the conformation rules learned in our preliminary computational experiments.

1 Introduction

A protein is a chain of amino acid residues that folds into a unique native tertiary structure under specific conditions. Biochemical experiments show that an unfolded protein spontaneously refolds into its native structure when those conditions are restored. This is the basis for the hypothesis that the native structure of a protein can be determined from the information contained in the amino acid sequence. Under this hypothesis, various computational methods of predicting protein conformation from sequence have been proposed. Protein conformation is analyzed in terms of free energy, where it is assumed that the free energy of the native structure of a protein is the global minimum, which is known as the "thermodynamic hypothesis." Many computational methods based on this assumption have been extensively developed. For example, Church and Shalloway [1] developed a top-down search procedure in which conformation space is recursively dissected according to the intrinsic hierarchical structure of a landscape's effective-energy barriers, and Konig and Dandekar [4] applied genetic algorithms to this problem. Another interesting heuristic method is the

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 243–257, 2001.
© Springer-Verlag Berlin Heidelberg 2001


hydrophobic zipper method by Dill et al. [2]. Based on the fact that many hydrophobic contacts are topologically local, the hydrophobic zipper method randomly generates hydrophobic contacts between residues near enough in the sequence, which serve as constraints forcing other hydrophobic contacts to be zipped up sequentially. Inspired by this hydrophobic zipper method, but apart from the free-energy minimization problem, we introduce a hypergraph representation of the tertiary structure of a protein, and a conformation rule, which is defined as a rewriting rule on hypergraphs. Many simple conformation models in free-energy minimization problems use lattices, which are periodic graphs in two- or three-dimensional space; the conformation of a protein turns out to be a self-avoiding path in the lattice in which the nodes are labeled by the amino acids. Thus the hypergraph representation model is a generalization of the lattice model. The degree of a node v of a hypergraph is the number of hyperedges including v, and the rank of a hyperedge e is the number of nodes in e. Because of the spatial conditions of conformations, it is natural to require both the degrees and the ranks of a hypergraph representing a tertiary structure to be bounded by constants, which is helpful in learning conformation rules. We capture the tertiary structure of a protein as a hypergraph in a loose way, from which conformation rules are extracted. Conformation rules are repeatedly applied to a hypergraph, where the initial hypergraph represents an amino acid sequence and is called a chain-hypergraph. The procedure searches for a location in the current hypergraph to which a conformation rule is applicable, from local toward global as in the hydrophobic zipper method. Thus we can say that our procedure of applying conformation rules to a sequence obeys the "local to global" folding principle, one of the various folding principles proposed so far.
The resulting hypergraph represents the structure of the protein. We then consider the problem of learning conformation rules from hypergraph representations of proteins. A conformation is defined as a function from sequences to hypergraphs. Thus the problem is to learn functions from examples, each of which is a pair of a protein sequence and the corresponding hypergraph representation. The PAC-learning paradigm was extended to functions by Natarajan and Tadepalli [9], and some results on concept learning have been extended to functions [7,8]. This paper makes three contributions. One is a formulation of conformation rules using hypergraphs; another is a polynomial-time PAC-learning algorithm for a class defined by this new concept of conformation rules. The third is a set of results on the refutable PAC-learnability of functions, which would be helpful when a target function is not in the classes of functions we consider. We have implemented the algorithms for learning conformation rules and for applying conformation rules in the Python language [13]. Preliminary computational experiments have been done using TIM barrel proteins whose data files can be downloaded from the Protein Data Bank (PDB) site [14]. The results of the experiments are also reported.


2 Preliminaries

A hypergraph H = (V, E) consists of a set V of nodes and a set E of hyperedges, each of which is a nonempty subset of V. In this paper we assume that |e| ≥ 2 for all e ∈ E without further notice. The rank of H is r(H) = max_{e∈E} |e|. For a node v, the degree of v is d_H(v) = |{e ∈ E | v ∈ e}|, and the degree of H is d(H) = max_{v∈V} d_H(v). A chain-hypergraph is a hypergraph H = (V, E) such that V = {1, 2, . . . , n} for some n ≥ 1 and each {i, i + 1} is contained in some hyperedge in E for 1 ≤ i ≤ n − 1, i.e., there is e ∈ E with {i, i + 1} ⊆ e. In particular, a chain-hypergraph H = (V, E) is called a rank k linear chain-hypergraph if E = {{i, . . . , i + k − 1} | i = 1, . . . , n − k + 1}. For a set E of hyperedges, we call simplify(E) = E − {e ∈ E | there is e′ in E with e ⊆ e′ and e ≠ e′} the simplification of E. In this paper we consider hypergraphs H = (V, E) whose nodes are labeled by a mapping ψ : V → ∆, where ∆ is an alphabet. Such a hypergraph is denoted by H = (V, E, ψ) and called a hypergraph over ∆. We identify H = (V, E, ψ) with H = (V, E) without further notice. Let H = (V, E, ψ) and V′ ⊆ V. For convenience, we denote by H(V′) the subhypergraph H̃ = (Ṽ, Ẽ, ψ̃) of H where
– Ẽ = ∪_{v∈V′} {e ∈ E | v ∈ e},
– Ṽ = ∪_{e∈Ẽ} e ∪ V′,
– ψ̃ = ψ|_Ṽ, that is, the restriction of ψ to Ṽ.
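These definitions translate almost directly into code. The following small sketch (set-valued hyperedges; our illustration, not the paper's implementation) mirrors rank, degree, simplification, and the chain-hypergraph condition:

```python
def rank(E):
    # r(H) = maximum hyperedge size
    return max(len(e) for e in E)

def degree(E, v):
    # d_H(v) = number of hyperedges containing v
    return sum(1 for e in E if v in e)

def simplify(E):
    # drop every hyperedge that is a proper subset of another hyperedge
    return [e for e in E if not any(e < f for f in E)]

def is_chain_hypergraph(V, E):
    # every consecutive pair {i, i+1} must lie inside some hyperedge
    n = max(V)
    return all(any({i, i + 1} <= e for e in E) for i in range(1, n))

E = [frozenset({1, 2}), frozenset({1, 2, 3}), frozenset({3, 4})]
# simplify(E) drops {1, 2}, since {1, 2} is a proper subset of {1, 2, 3}
```

Here the frozenset comparison `e < f` is Python's proper-subset test, which is exactly the condition e ⊆ e′ and e ≠ e′ used in the simplification.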

This subsection reviews some notions and results on the PAC-learnability of a class of functions, following Natarajan [7,8]. For an alphabet Ω, the set of all strings over Ω is denoted by Ω*. The length of a string x ∈ Ω* is denoted by |x|. For n ≥ 1, Ω^[n] = {x ∈ Ω* | |x| ≤ n}. Here, the alphabet Ω is assumed to be finite.

Definition 1 ([7,8]). Let F be a class of functions from a finite set X to a finite set Y. The generalized VC-dimension of F, denoted by D(F), is the maximum over the sizes |Z| of subsets Z ⊆ X such that there exist two functions f and g in F satisfying the following conditions:
1. f(x) ≠ g(x) for all x ∈ Z.
2. For all Z1 ⊆ Z, there exists h ∈ F that agrees with f on Z1 and with g on Z − Z1.

Lemma 1 ([7,8]). Let F be a class of functions from a finite set X to a finite set Y. Then 2^{D(F)} ≤ |F| ≤ |X|^{D(F)} |Y|^{2·D(F)}.

Let f : Ω* → Ω*. For integers n1, n2 ≥ 1, the projection f^{[n1][n2]} of f on Ω^[n1] × Ω^[n2] is the function f^{[n1][n2]} : Ω^[n1] → Ω^[n2] defined by f^{[n1][n2]}(x) = f(x) if f(x) is in Ω^[n2], for all x in Ω^[n1]. If there is some x in Ω^[n1] such that f(x) is not in Ω^[n2], then f^{[n1][n2]} is undefined. For a class F of functions from Ω* to Ω*, we define F^{[n1][n2]} = {f^{[n1][n2]} | f ∈ F, f^{[n1][n2]} is defined}.
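For intuition, the generalized VC-dimension of Definition 1 can be computed by exhaustive search on toy classes. The sketch below (exponential-time, for illustration only) represents each function as a dict from x to f(x):

```python
from itertools import combinations, product

def realizes_all_splits(F, Z, f, g):
    # every split Z1 / Z - Z1 must be realized by some h in F
    for r in range(len(Z) + 1):
        for Z1 in combinations(Z, r):
            s = set(Z1)
            target = {x: (f[x] if x in s else g[x]) for x in Z}
            if not any(all(h[x] == target[x] for x in Z) for h in F):
                return False
    return True

def generalized_vc_dim(F, X):
    # brute-force D(F): the largest Z on which some pair f, g in F
    # disagrees everywhere while every split of Z is realized by F
    best = 0
    for size in range(1, len(X) + 1):
        for Z in combinations(X, size):
            for f, g in product(F, repeat=2):
                if all(f[x] != g[x] for x in Z) and \
                        realizes_all_splits(F, Z, f, g):
                    best = max(best, size)
    return best

# all four functions {0,1} -> {0,1}; this class has D(F) = 2, and
# Lemma 1's lower bound 2^D(F) <= |F| holds with equality: 4 = 4
F = [{0: a, 1: b} for a in (0, 1) for b in (0, 1)]
```

Running the search on F confirms the bounds of Lemma 1 on this tiny example.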


Definition 2 ([7,8]). Let F be a class of functions from Ω* to Ω* with a representation R. An algorithm A is said to be a polynomial-time fitting for F in representation R if the following conditions hold:
1. A is a polynomial-time algorithm taking as input a finite subset S of Ω* × Ω*.
2. If there exists a function in F that is consistent with S, A outputs a name of such a function in representation R.

We say that F is of polynomial-dimension if there is a polynomial p(n1, n2) in n1 and n2 such that D(F^{[n1][n2]}) ≤ p(n1, n2). We say that F is of polynomial-expansion if there exists a polynomial q(n) such that for all f ∈ F and x ∈ Ω*, |f(x)| ≤ q(|x|). The following theorem will be used to prove a result in Section 5 on the PAC-learnability of conformation rules.

Theorem 1 ([7,8]). Let F be a class of functions from Ω* to Ω* with a representation R. F is polynomial-time PAC-learnable in R if the following hold:
1. F is of polynomial-dimension.
2. F is of polynomial-expansion.
3. There exists a polynomial-time fitting for F in R.

3 Hypergraph Representation of a Protein

Let P be a protein with primary structure A1A2···An, where Ai represents the i-th amino acid residue. Its tertiary structure is usually represented by a sequence of positions of the amino acid residues in three-dimensional space, (p1, A1), (p2, A2), . . . , (pn, An), where pi = (xi, yi, zi) is the position of Ai for 1 ≤ i ≤ n. The distance between pi and pj is denoted by |pi − pj|. Let Σ be the alphabet consisting of symbols representing the amino acid residues, and let μ > 0 be a real number. For a protein P with tertiary structure (p1, A1), (p2, A2), . . . , (pn, An), let G_P^μ = (V, E) be the undirected graph defined as follows:
1. V = {1, 2, . . . , n}.
2. For any distinct i, j in V with |pi − pj| ≤ μ, {i, j} is in E.

We call the undirected graph G_P^μ = (V, E) the structure graph of P with μ-range. For positive integers k, ω, τ and G_P^μ = (V, E), let E_{P,complete}^{μ,k,ω,τ} be the set of hyperedges e ⊆ V satisfying the following conditions:
– 2 ≤ |e| ≤ k,
– max e − min e + 1 ≥ τ, that is, a restriction on the width of e on the sequence 1, 2, . . . , n,
– G_P^μ[e] is a complete graph, where G_P^μ[e] is the node-induced subgraph of e in G_P^μ.

Let
E_{P,backbone}^ω = {{i, i + 1, . . . , j} | j = i + ω − 1, 1 ≤ i ≤ n − ω + 1},


and let ψ : V → Σ be the mapping defined by ψ(i) = Ai for 1 ≤ i ≤ n. Then the hypergraph H_{P,Σ,complete}^{μ,k,ω,τ} = (V, E′, ψ) with

E′ = simplify(E_{P,complete}^{μ,k,ω,τ}) ∪ E_{P,backbone}^ω

is a chain-hypergraph over Σ, which is called the hypergraph representation of P over Σ by complete graphs with μ, k, ω and τ. We say that an undirected graph G = (V, E) with V = {v0, v1, . . . , vk} and E = {{v0, vi} | vi ∈ V, vi ≠ v0} is a star graph. Let E_{P,star}^{μ,k,ω,τ} be the set of hyperedges e ⊆ V satisfying the following conditions: 2 ≤ |e| ≤ k, max e − min e + 1 ≥ τ, and G_P^μ[e] is a star graph. Then the hypergraph H_{P,Σ,star}^{μ,k,ω,τ} = (V, E′, ψ) with

E′ = simplify(E_{P,star}^{μ,k,ω,τ}) ∪ E_{P,backbone}^ω

is a chain-hypergraph over Σ, which is called the hypergraph representation of P over Σ by star graphs with μ, k, ω and τ. Instead of the explicit representation with amino acid residues, it is often useful to classify the amino acid residues into several categories (e.g., [2,10,11]). In order to deal with such cases, we represent a protein in a more general way: namely, we consider chain-hypergraphs whose nodes are labeled with "colors", which are not necessarily the same as the amino acid residues. Let ∆ be an alphabet consisting of such "colors" labeling the nodes of hypergraphs. In this paper, we assume that the tertiary structure of a protein is represented by a chain-hypergraph over some alphabet ∆ in the way mentioned above.
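The constructions above can be sketched in a few lines of Python. The function names are ours, the coordinates are a toy example, and `math.dist` requires Python 3.8+:

```python
from itertools import combinations
import math

def structure_graph(positions, mu):
    # G^mu_P: edge {i, j} whenever |p_i - p_j| <= mu (nodes are 1-based)
    n = len(positions)
    return {frozenset({i, j})
            for i, j in combinations(range(1, n + 1), 2)
            if math.dist(positions[i - 1], positions[j - 1]) <= mu}

def complete_hyperedges(E, n, k, tau):
    # hyperedges e with 2 <= |e| <= k, width max(e) - min(e) + 1 >= tau,
    # and with the node-induced subgraph of e in G^mu_P complete
    out = []
    for size in range(2, k + 1):
        for e in combinations(range(1, n + 1), size):
            if (max(e) - min(e) + 1 >= tau
                    and all(frozenset(p) in E for p in combinations(e, 2))):
                out.append(set(e))
    return out

positions = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (5, 5, 5)]  # toy coordinates
E = structure_graph(positions, mu=1.5)
# residues 1, 2, 3 are mutually within range, so {1, 2, 3} qualifies
```

Applying simplify to the result and adding the backbone hyperedges would then yield the hypergraph representation by complete graphs.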

4 Conformation Rules

In this section, we define a conformation, which transforms strings over ∆ into chain-hypergraphs over ∆. We denote the set of all chain-hypergraphs over ∆ by H_∆.

Definition 3. A conformation over ∆ is a function c : ∆⁺ → H_∆ such that, for a string x = x1···xn ∈ ∆⁺, c(x) = (V, E, ψ) satisfies V = {1, . . . , n} and ψ(i) = xi for 1 ≤ i ≤ n.

We give a way of specifying a conformation by introducing conformation rules over ∆, which are based on hypergraph rewriting rules, defined as follows. A hypergraph rewriting rule over ∆ is a triplet ρ = (B, A, D) of a hypergraph B = (V, E, ψ) over ∆ and subsets A and D of 2^V. The elements of A and D are called additional and removable hyperedges, respectively. The rank of ρ is defined to be max{r(B), max{|a| | a ∈ A}}, and the degree of ρ is defined to be d(B).

Definition 4. Let ρ1 = (B1, A1, D1) and ρ2 = (B2, A2, D2) be hypergraph rewriting rules over ∆, where B1 = (V1, E1, ψ1) and B2 = (V2, E2, ψ2). We say that ρ1 is isomorphic to ρ2, denoted by ρ1 ≈ ρ2, if there is a bijection ι : V1 → V2 such that

1. ψ1(v) = ψ2(ι(v)) for all v ∈ V1,
2. ι(e1) ∈ E2 for all e1 ∈ E1, and ι⁻¹(e2) ∈ E1 for all e2 ∈ E2,
3. ι(e1) ∈ A2 for all e1 ∈ A1, and ι⁻¹(e2) ∈ A1 for all e2 ∈ A2,
4. ι(e1) ∈ D2 for all e1 ∈ D1, and ι⁻¹(e2) ∈ D1 for all e2 ∈ D2.

Definition 5. Let D∆ be a set of hypergraph rewriting rules over ∆. For positive integers P and Q, we define a (P × Q)-conformation rule σ over D∆ as σ = (β1, β2, . . . , βP), where

βp = (γp,1, γp,2, . . . , γp,Q)   with   γp,q ⊆ D∆

for 1 ≤ p ≤ P and 1 ≤ q ≤ Q. Here γp,q is called the (p, q)-unit of σ, βp is the pth unit-sequence of σ, and D∆ is the domain of σ. The rank of D∆ is defined as r(D∆) = max{r(H) | H ∈ D∆}, and the degree of D∆ is d(D∆) = max{d(H) | H ∈ D∆}. The rank of σ is max{r(γp,q) | 1 ≤ p ≤ P, 1 ≤ q ≤ Q}, and the degree of σ is max{d(γp,q) | 1 ≤ p ≤ P, 1 ≤ q ≤ Q}. In this paper, we consider rather limited hypergraph rewriting rules, defined in the following way:

Definition 6. A bundle rule over ∆ is a hypergraph rewriting rule ρ = (B, A, D) over ∆ such that, for B = (V, E, ψ) over ∆,

|A| = 1, say A = {U }. |U | ≥ 2. U ∈ E. For any hyperedge e in E, e ∩ U = ∅. D = {e ∈ E | e ⊂ U }.

For short, we denote such a bundle rule ρ = (B, A, D) by (B, U ). We denote by Γ∆ the set of all bundle rules over ∆, and, for integers k ≥ 2 and d ≥ 1, by Γk,d,∆ , the set of all bundle rules over ∆ such that the rank is at most k and the degree is at most d. Remark 1. Obviously, Γ∆ is inﬁnite. Note that Γk,d,∆ is ﬁnite if ∆ is ﬁnite. On the other hand, k≥2 Γk,d,∆ and d≥1 Γk,d,∆ are inﬁnite. We here describe a concrete conformation, which is a function transforming strings to hypergraphs by using conformation rules. Let σ = (β1 , β2 , . . . , βP ) be a (P × Q)-conformation rule over Γ∆ where βp = (λp,1 , λp,2 , . . . , λp,Q ) and λp,q ⊆ Γk,d,∆ for 1 ≤ p ≤ P and 1 ≤ q ≤ Q. We apply σ to a string x = x1 · · · xn in ∆+ . For a positive integer ω, we start with a rank ω linear chain-hypergraph H = (V, E, ψ), that is, V = {1, . . . , n}, ψs (i) = xi for 1 ≤ i ≤ n, and E = {{i, . . . , i + ω − 1} | 1 ≤ i ≤ n − ω + 1}. At the pth stage (1 ≤ p ≤ P ), the pth unit-sequence βp of σ is used in the following way. In each stage, a window


on the node sequence 1, 2, . . . , n corresponding to the string x = x1···xn is an interval of the sequence, and it is enlarged from smaller to larger. The initial window size is specified by τ. For each window size, the window is slid from left to right on V. Consider a window of size w (≥ τ) at position i, that is, an interval [i, . . . , i + w − 1] consisting of w consecutive nodes in V. Let q = w − τ + 1, whose range is from 1 to Q. The bundle rules in the (p, q)-unit γp,q of σ are applied to create new hyperedges e such that e consists only of nodes in [i, . . . , i + w − 1] and both i and i + w − 1 are in e. The creation of a new hyperedge e in the window depends on the local structure around e in the current hypergraph H = (V, E, ψ); namely, we consider the subhypergraph H(e). A new hyperedge e is created if there is a bundle rule (B, U) ∈ γp,q which is isomorphic to (H(e), e). After creating all new hyperedges in the process of sliding the window from left to right, these hyperedges are added to E and their proper subsets are deleted from E, and this window-sliding process is repeated after the window is enlarged. A formal description is given in Fig. 1.

Input: a (P × Q)-conformation rule σ = (β1, . . . , βP) over Γ∆ where βp = (γp,1, γp,2, . . . , γp,Q) and γp,q ⊆ Γ∆ for 1 ≤ p ≤ P and 1 ≤ q ≤ Q, positive integers ω and τ with ω < τ, and a string x = x1···xn in ∆⁺
Output: a hypergraph H = (V, E, ψ)

Procedure CONFORM(ω, τ, σ, x):
    let k be the rank of σ
    V = {1, . . . , n}
    let ψ be the mapping defined by ψ(i) = xi for 1 ≤ i ≤ n
    E = {{i, . . . , i + ω − 1} | 1 ≤ i ≤ n − ω + 1}
    H = (V, E, ψ)                  # linear chain-hypergraph of rank ω
    for p from 1 to P:
        for q from 1 to min{Q, n}:
            w = τ + q − 1          # w is the window size
            A = ∅; D = ∅
            foreach i from 1 to n − w + 1:
                j = i + w − 1
                foreach e ⊆ {i, . . . , j} such that i, j ∈ e and |e| ≤ k:
                    if a bundle rule (H(e), e) ≈ ρ for some ρ in γp,q:
                        add e to A
                        add the proper subsets of e in E to D
            E = E ∪ A \ D

Fig. 1. Algorithm CONFORM
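The pseudocode in Fig. 1 can be approximated by a runnable skeleton. In this sketch of ours, the bundle-rule isomorphism test "(H(e), e) ≈ ρ for some ρ in γp,q" is abstracted into one caller-supplied predicate per (p, q)-unit; the window sliding, hyperedge creation, and deletion of proper subsets follow the figure:

```python
from itertools import combinations

def candidate_edges(i, j, k):
    # all e with i, j in e, e a subset of {i, ..., j}, and |e| <= k
    for r in range(0, k - 1):
        for mid in combinations(range(i + 1, j), r):
            yield frozenset({i, j, *mid})

def conform_sketch(x, omega, tau, stages, k):
    # stages[p][q](E, e, x) stands in for testing e against the
    # (p, q)-unit of a conformation rule (a simplification of Fig. 1)
    n = len(x)
    E = {frozenset(range(i, i + omega)) for i in range(1, n - omega + 2)}
    for beta in stages:                        # stage p
        for q, applicable in enumerate(beta):  # window size w = tau + q
            w = tau + q
            if w > n:
                break
            A, D = set(), set()
            for i in range(1, n - w + 2):      # slide window left to right
                j = i + w - 1
                for e in candidate_edges(i, j, k):
                    if applicable(E, e, x):
                        A.add(e)
                        D |= {f for f in E if f < e}
            E = (E | A) - D
    return E

# toy rule: link window ends that carry the same letter
same_ends = lambda E, e, x: len(e) == 2 and x[min(e) - 1] == x[max(e) - 1]
H = conform_sketch("abab", omega=2, tau=3, stages=[[same_ends]], k=2)
```

On the string "abab" with ω = 2 and τ = 3, the size-3 window pass adds the hyperedges {1, 3} and {2, 4} on top of the backbone edges, mirroring one (p, q)-unit of Fig. 1.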

The graph G given in Fig. 2 is an example of a graph that cannot be generated by any (1 × Q)-conformation rule for any Q. The following proposition is obvious from the definitions:


[Figure: the graph G and the four bundle rules γ1,1, γ2,1, γ3,1, γ4,1 over {0, 1}]

Fig. 2. σ = ((γ1,1), (γ2,1), (γ3,1), (γ4,1)), which is a (4 × 1)-conformation rule over Γ2,4,{0,1} generating the graph G

Proposition 1. Let σ be a (P × Q)-conformation rule over Γk,d,∆, let ω and τ be positive integers with ω < τ, and let x ∈ ∆⁺. The hypergraph CONFORM(ω, τ, σ, x) given in Fig. 1 is a chain-hypergraph over ∆ of rank at most k.

Definition 7. For a (P × Q)-conformation rule σ over Γ∆ and positive integers ω and τ with ω < τ, we define a conformation c_σ^{ω,τ}, as a function from ∆⁺ to the set of chain-hypergraphs over ∆, by c_σ^{ω,τ}(x) = CONFORM(ω, τ, σ, x) for x ∈ ∆⁺.

5 PAC-Learning of Conformation

For a positive integer n, let H_∆^[n] be the set of all chain-hypergraphs over ∆ with at most n nodes. By c_σ^{ω,τ}[n] we denote the function c_σ^{ω,τ}[n] : ∆^[n] → H_∆^[n] obtained by restricting c_σ^{ω,τ} to ∆^[n]. For integers ω ≥ 2, τ > ω, P, Q ≥ 1, and an alphabet ∆, let

C_∆^{ω,τ,P,Q} = {c_σ^{ω,τ} | σ is a (P × Q)-conformation rule over Γ_∆}.

As noted in Remark 1, the alphabet Γ_∆ is infinite even if ∆ is finite. This causes trouble in discussing the PAC-learnability of a class of conformations. However, if we restrict the rank and degree of conformation rules to constant integers k and d, respectively, the alphabet Γ_{k,d,∆} is finite for finite alphabets ∆. Let

C_{k,d,∆}^{ω,τ,P,Q} = {c_σ^{ω,τ} | σ is a (P × Q)-conformation rule over Γ_{k,d,∆}}

for integers k ≥ 2, d ≥ 1, ω ≥ 2, τ > ω, P ≥ 1 and Q ≥ 1. Our main result is the following theorem:

Theorem 2. The class C_{k,d,∆}^{ω,τ,P,Q} is polynomial-time PAC-learnable.


Theorem 3. The class ∪_{R≥1} C_{k,d,∆}^{ω,τ,1,R} is polynomial-time PAC-learnable.

We can prove these theorems by showing that these classes satisfy the three conditions in Theorem 1. For an integer k ≥ 2, a hypergraph H = (V, E, ψ) of rank k with n = |V| can be expressed, under an appropriate encoding, as a string over ∆ whose length is polynomially bounded with respect to n. Thus we regard a conformation c over ∆ as a function from ∆⁺ to ∆⁺, and therefore any class of conformations over ∆ is of polynomial-expansion.

Next we show that C_{k,d,∆}^{ω,τ,P,Q} and ∪_{R≥1} C_{k,d,∆}^{ω,τ,1,R} are of polynomial-dimension. Let

C_{k,d,∆}^{ω,τ,P,Q}[n] = {c_σ^{ω,τ}[n] | σ is a (P × Q)-conformation rule over Γ_{k,d,∆}}.

By Lemma 1, it suffices to show that |C_{k,d,∆}^{ω,τ,P,Q}[n]| and |∪_{R≥1} C_{k,d,∆}^{ω,τ,1,R}[n]| are bounded by 2^{p(n)} for some polynomial p(n). A (P × Q)-conformation rule σ over Γ_{k,d,∆} can be considered as a P × Q matrix whose elements are subsets of Γ_{k,d,∆}. Since |Γ_{k,d,∆}| is a finite constant, say δ, we have |C_{k,d,∆}^{ω,τ,P,Q}[n]| ≤ 2^{δ·P·Q}, that is, |C_{k,d,∆}^{ω,τ,P,Q}[n]| is bounded by a finite constant. It should be noted here that ∪_{R≥1} C_{k,d,∆}^{ω,τ,1,R}[n] = ∪_{n≥R≥1} C_{k,d,∆}^{ω,τ,1,R}[n]. Thus we can see that |∪_{R≥1} C_{k,d,∆}^{ω,τ,1,R}[n]| ≤ 2^{δ·P·n}, which is bounded by 2^{p(n)} for some polynomial p(n).

Finally we discuss polynomial-time fittings for C_{k,d,∆}^{ω,τ,P,Q} and ∪_{R≥1} C_{k,d,∆}^{ω,τ,1,R}. It is trivial that there is a polynomial-time fitting for C_{k,d,∆}^{ω,τ,P,Q}, since the cardinality of the class is a finite constant.

We then describe a polynomial-time fitting B for ∪_{R≥1} C_{k,d,∆}^{ω,τ,1,R} by employing the algorithm EXTRACT given in Fig. 3. Given chain-hypergraphs H1 = (V1, E1, ψ1), . . . , Ht = (Vt, Et, ψt) over ∆ and positive integers ω and τ with τ > ω, the algorithm B computes, for 1 ≤ h ≤ t, a conformation rule σ̂^(h) = EXTRACT(ω, τ, N, Hh) over Γ_∆, where N = max_{1≤h≤t} |Vh|. We denote by γ̂_{1,q}^(h) the (1, q)-unit of σ̂^(h) for 1 ≤ q ≤ N. For each q with 1 ≤ q ≤ N, let γ̂_{1,q} = ∪_{1≤h≤t} γ̂_{1,q}^(h), and let σ̂ = ((γ̂_{1,1}, γ̂_{1,2}, . . . , γ̂_{1,N})). The algorithm B outputs σ̂ computed from H1, . . . , Ht. Obviously, B runs in polynomial time, since the rank of conformation rules is a constant k. If H1 = CONFORM(ω, τ, σ, s1), H2 = CONFORM(ω, τ, σ, s2), . . . , Ht = CONFORM(ω, τ, σ, st) for some (1 × Q)-conformation rule σ over Γ_{k,d,∆} and strings s1, s2, . . . , st ∈ ∆⁺, then we can show that Hh = CONFORM(ω, τ, σ̂, sh) for 1 ≤ h ≤ t, which means that CONFORM(ω, τ, σ̂, ·) is consistent with the examples {(si, Hi) | 1 ≤ i ≤ t}.
For 1 ≤ h ≤ t and 1 ≤ q ≤ N, let
– C_{h,q} be the contents of E just after the q-th iteration of the for-loop on q of the ﬁrst iteration of the for-loop on p of CONFORM(ω, τ, σ, sh) has ﬁnished, if q ≤ min{Q, |sh|}; C_{h,q} = C_{h,q−1} otherwise.

252

O. Maruyama et al.

Input: a chain-hypergraph H = (V, E, ψ) over ∆ of rank k, and positive integers ω, τ, and R
Output: a conformation rule σ = (β1) over Γ∆ of rank k, where β1 = (γ_{1,1}, γ_{1,2}, ..., γ_{1,R}) with γ_{1,q} ⊆ Γ∆ for 1 ≤ q ≤ R
Procedure EXTRACT(ω, τ, R, H):
  n = |V|
  Ẽ = {{i, ..., i+ω−1} | 1 ≤ i ≤ n−ω+1}
  H̃ = (V, Ẽ, ψ)
  for q from 1 to R:
    w = τ + q − 1
    A = ∅
    D = ∅
    foreach i from 1 to n − w + 1:
      j = i + w − 1
      foreach U ⊆ {i, ..., j} such that i, j ∈ U and |U| ≤ k:
        if U ∈ E:
          ρ = (H̃(U), U)
          add ρ to γ_{1,q}
          add U to A
          add the proper subsets of U in Ẽ to D
    Ẽ = Ẽ ∪ A \ D

Fig. 3. Algorithm EXTRACT
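A minimal Python sketch of EXTRACT (Python is the implementation language the text mentions), under simplifying assumptions of ours: a hypergraph is given by its node count n and a set E of frozenset hyperedges, the node labels ψ are omitted, and the bundle-rule pair ρ = (H̃(U), U) is collapsed to the hyperedge U itself, so each returned unit contains hyperedges rather than full bundle rules:

```python
from itertools import combinations

def extract(omega, tau, R, n, E, k):
    """Sketch of EXTRACT over node set {1..n}; E is a set of frozensets."""
    # initial backbone hyperedges {i, ..., i+omega-1}
    Et = {frozenset(range(i, i + omega)) for i in range(1, n - omega + 2)}
    gamma = []
    for q in range(1, R + 1):
        w = tau + q - 1
        A, D, gamma_q = set(), set(), set()
        for i in range(1, n - w + 2):
            j = i + w - 1
            inner = range(i + 1, j)  # candidate nodes strictly between i and j
            # enumerate U with i, j in U and |U| <= k
            for r in range(min(k, j - i + 1) - 1):
                for mid in combinations(inner, r):
                    U = frozenset((i, j) + mid)
                    if U in E:
                        gamma_q.add(U)   # stands in for the bundle rule (H~(U), U)
                        A.add(U)
                        D |= {e for e in Et if e < U}  # proper subsets of U in E~
        Et = (Et | A) - D
        gamma.append(gamma_q)
    return gamma
```

For instance, with ω = 2, τ = 3, R = 1, k = 3 and a single hyperedge {1, 3} in a 6-node chain, the only unit returned is {{1, 3}}.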

– E_{h,q} be the contents of Ẽ just after the q-th iteration of the for-loop of EXTRACT(ω, τ, N, Hh) has ﬁnished.
– Ĉ_{h,q} be the contents of E just after the q-th iteration of the for-loop on q of the ﬁrst iteration of the for-loop on p of CONFORM(ω, τ, σ̂, sh) has ﬁnished, if q ≤ |sh|; Ĉ_{h,q} = Ĉ_{h,q−1} otherwise.

For convenience, let C_{h,0} = E_{h,0} = Ĉ_{h,0} = {{i, ..., i+ω−1} | 1 ≤ i ≤ |sh|−ω+1} for 1 ≤ h ≤ t, which are the initial hyperedges in the algorithms CONFORM and EXTRACT on the string sh. It is not hard to prove by induction on q that

C_{h,q} = E_{h,q} = Ĉ_{h,q} for 1 ≤ h ≤ t.

This completes the proof. The following theorem can be shown in a similar way, and we omit its proof.

Theorem 4. The class ∪_{R≥1} C_{k,d,∆}^{ω,τ,P,R} is polynomial-time PAC-learnable.

6 Refutably PAC-Learning Functions

In this section, we introduce the refutability of PAC-learning algorithms on functions. The refutability of PAC-learning algorithms on concepts has already been


discussed in [5,6]. PAC-learning algorithms with the ability to refute classes that do not seem to include a target function would be helpful in dealing with real data. Let f be a function from Ω* to Ω*, F a class of functions from Ω* to Ω*, and P a probability distribution on Ω*. We deﬁne opt_f(P, F) by

opt_f(P, F) = min_{f′∈F} P(f △ f′).

We can see that if f ∈ F then opt_f(P, F) = 0 for any P.

Deﬁnition 8. Let F be a class of functions from Ω* to Ω*. The function class F is polynomial-sample refutably learnable if there exist an algorithm A and a polynomial p(·, ·, ·, ·) which satisfy the following conditions:
1. The algorithm A takes as input parameters ε, ε′, δ ∈ (0, 1) and n ≥ 1. We call ε′ a refutation accuracy parameter.
2. Let f be a target function from Ω* to Ω* and P an arbitrary and unknown probability distribution on Ω*. The algorithm A takes a sample of size p(1/ε, 1/ε′, 1/δ, n) using a subroutine EX(f, P), which at each call produces a single example for f according to P.
3. If opt_f(P, F) = 0 then A outputs a function g ∈ F which satisﬁes P(f △ g) < ε with probability at least 1 − δ. If opt_f(P, F) ≥ ε′ then A refutes the function class F with probability at least 1 − δ.

Theorem 5. If a class F of functions is of polynomial dimension, then F is polynomial-sample refutably learnable.

By this theorem the following holds:

Corollary 1. The classes C_{k,d,∆}^{ω,τ,P,Q} and ∪_{R≥1} C_{k,d,∆}^{ω,τ,P,R} are polynomial-sample refutably learnable.

Since F is of polynomial dimension, there exists a polynomial poly(·, ·) such that log2 |F^{[n1][n2]}| ≤ poly(n1, n2) for any n1, n2 ≥ 1. We construct the algorithm described in Figure 4. We introduce a refutation threshold parameter η ∈ (0, 1) so that a learning algorithm produces an approximate function, instead of refuting F, when the minimum error opt_f(P, F) is small enough.

Deﬁnition 9. Let F be a class of functions from Ω* to Ω*. The function class F is polynomial-sample strongly refutably learnable if there exist an algorithm A and a polynomial p(·, ·, ·, ·) which satisfy the following conditions:
1. The algorithm A takes as input parameters ε, ε′, δ, η ∈ (0, 1) and n ≥ 1.
2. Let f be a target function from Ω* to Ω* and P an arbitrary and unknown probability distribution on Ω*. The algorithm A takes a sample of size p(1/ε, 1/ε′, 1/δ, n) using a subroutine EX(f, P), which at each call produces a single example for f according to P.

Input: ε, ε′, δ, n1, n2
Procedure:
  let m = (1/ε + 1/ε′)(1/δ + poly(n1, n2))
  make m calls of EX
  let S be the set of examples seen
  if there is a function g ∈ F consistent with S:
    return g
  else refute F

Fig. 4. Refutable algorithm A_RefuteBySampleComplexity(ε, ε′, δ, n1, n2)

3. If opt_f(P, F) ≤ η then A outputs a function g ∈ F which satisﬁes P(f △ g) < η + ε with probability at least 1 − δ. If opt_f(P, F) ≥ η + ε′ then A refutes the function class F with probability at least 1 − δ.

Theorem 6. If a class F of functions is of polynomial dimension, then F is polynomial-sample strongly refutably learnable.

Corollary 2. The classes C_{k,d,∆}^{ω,τ,P,Q} and ∪_{R≥1} C_{k,d,∆}^{ω,τ,P,R} are polynomial-sample strongly refutably learnable.

We construct the algorithm described in Figure 5. We denote by d(g, S) the number of examples in S with which g does not agree.

Input: ε, ε′, δ, η, n1, n2
Procedure:
  κ = min{ε, ε′}
  m = 4(1/ε + 1/ε′)²(1/δ + poly(n1, n2))
  make m calls of EX
  let S be the set of examples seen
  if there is a function g ∈ F with d(g, S) ≤ m(η + (1/2)κ):
    return g
  else refute F

Fig. 5. Strongly refutable algorithm A_StronglyRefuteBySampleComplexity(ε, ε′, δ, η, n1, n2)
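Both figures share one pattern: draw a sample whose size is polynomial in 1/ε, 1/ε′, 1/δ, and poly(n1, n2), then either return a (near-)consistent function or refute F. A minimal Python sketch of the Fig. 5 variant, assuming a finite candidate class (Fig. 4 corresponds to demanding exact consistency, i.e. η = 0 and an empirical-error threshold of zero); the function names, the `draw_example` oracle standing in for EX(f, P), and the ceiling rounding are ours:

```python
import math

def strongly_refute(eps, eps_r, delta, eta, poly_n, draw_example, F):
    """Sketch of the strongly refutable learner: accept a candidate whose
    empirical error is at most m*(eta + kappa/2), otherwise refute F
    (signalled here by returning None). F is a non-empty finite iterable
    of candidate functions; draw_example() yields one (x, f(x)) pair."""
    kappa = min(eps, eps_r)
    m = math.ceil(4 * (1.0 / eps + 1.0 / eps_r) ** 2 * (1.0 / delta + poly_n))
    S = [draw_example() for _ in range(m)]          # m calls of EX
    best = min(F, key=lambda g: sum(g(x) != y for x, y in S))
    err = sum(best(x) != y for x, y in S)            # d(best, S)
    return best if err <= m * (eta + kappa / 2.0) else None
```

With a target realizable in F the sketch returns a zero-error candidate; with a class whose only member disagrees with every example, it refutes.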

We can easily see that F is of polynomial dimension if F is polynomial-sample refutably learnable or polynomial-sample strongly refutably learnable. Therefore the following three statements are equivalent:
1. F is of polynomial dimension.
2. F is polynomial-sample refutably learnable.
3. F is polynomial-sample strongly refutably learnable.

7 Experiments

In this section, we report our preliminary computational experiments on learning conformation rules from hypergraphs representing tertiary structures of proteins. We have implemented the PAC-learning algorithm, comprising the algorithms CONFORM(ω, τ, σ, x) and EXTRACT(ω, τ, R, H), in the Python language [13].

7.1 Method of Experiments

The hypergraph representation of a protein over ∆ by star graphs is used, with µ, k, ω, τ speciﬁed as follows: µ = 5.8 Å, k = 10, ω = 5, and τ = 8. The choice of the alphabet ∆ for labeling the nodes of a hypergraph is one of the keys to the experiments. The alphabet ∆ represents a classiﬁcation of amino acid residues. Hart and Istrail [3] used the hydrophobic-hydrophilic model, which regards a protein as a linear chain of amino acid residues of two types, H (hydrophobic) and P (hydrophilic). However, some amino acids are neither hydrophobic nor hydrophilic. In our experiments, ∆ is set to {H, P, N}, where the amino acid residues are assigned as follows:

H: ALA, CYS, ILE, LEU, MET, PHE, TRP, VAL
P: ARG, ASN, ASP, GLN, GLU, LYS, PRO, ASX, GLX
N: GLY, HIS, SER, THR, TYR

The class of conformations C_{k,d,∆}^{ω,τ,P,Q} with P = 1 and Q = 2 is considered in the experiments. (Since the degree bound d is less important than the rank bound k, d is left unlimited.) Given examples (s1, H1), ..., (st, Ht), the polynomial-time ﬁtting B, used to prove Theorem 3, outputs a (1, 2)-conformation rule σ̂, which is applied in CONFORM(ω, τ, σ̂, x) for a sequence x. To evaluate how similar a hypergraph predicted by CONFORM is to the target hypergraph, we compare them hyperedge by hyperedge. To this end, we deﬁne a similarity between hyperedges as follows. Let g ≥ 0 and 0 ≤ κ ≤ 1, and let E1 and E2 be subsets of 2^V, where V = {1, 2, ..., n}. For e1 ∈ E1 and e2 ∈ E2, we say that e1 is (g, κ)-similar to e2 if min e2 − g ≤ min e1 ≤ min e2 + g and e1 e2 ≤ κ. We deﬁne

Sim_{g,κ}(E1, E2) = |{e1 ∈ E1 | e1 is (g, κ)-similar to some e2 ∈ E2}|.

TIM-barrel proteins have highly regular conformations, composed of eight parallel β-sheets forming a barrel structure [12]. We downloaded PDB ﬁles of TIM-barrel proteins from the site of PDB [14], which were then screened.
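The residue classification above can be written down directly; the residue-to-class table comes from the text, while the function name and the string encoding of sequences are ours:

```python
# Residue classes from the experiments: H (hydrophobic), P (hydrophilic),
# N (neither), exactly as listed in the text.
RESIDUE_CLASS = {
    **dict.fromkeys(["ALA", "CYS", "ILE", "LEU", "MET", "PHE", "TRP", "VAL"], "H"),
    **dict.fromkeys(["ARG", "ASN", "ASP", "GLN", "GLU", "LYS", "PRO", "ASX", "GLX"], "P"),
    **dict.fromkeys(["GLY", "HIS", "SER", "THR", "TYR"], "N"),
}

def encode(residues):
    """Map a chain of three-letter residue codes to a string over {H, P, N}."""
    return "".join(RESIDUE_CLASS[r] for r in residues)
```

For example, `encode(["ALA", "GLY", "LYS"])` yields the string "HNP", which is then the node labeling used when a protein is turned into a chain-hypergraph.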
Fifteen proteins remained; their tertiary structures are fully determined, and each is composed of a single chain of amino acids. In our experiments, the following small modiﬁcation was made: for a bundle rule ρ = (B, A, D) where A = {U}, D is set to ∅ instead of {e ∈ E | e ⊂ U}, which affects nothing but makes it possible to attain more detailed conformation rules.

7.2 Evaluation

We have executed two kinds of experiments. One is self-conformation, that is, for a single protein p, a (1, 2)-conformation rule α is learned from the hypergraph

256

O. Maruyama et al.

representation of p and used in CONFORM with the sequence of p. The other is the case where a (1, 2)-conformation rule α is extracted from 14 TIM-barrel proteins and applied to the remaining one. In self-conformation, successful results were attained. Let HT = (V, ET, ψ) and HP = (V, EP, ψ) be a target and a predicted hypergraph, respectively. For a set S, we denote by S^c the complement of S. We give a typical result of the self-conformation test in Tab. 1. Since the experiment goes well with window sizes 7 and 8, it should be continued with window sizes over 8. However, the procedure then does not ﬁnish in practical time. The task of hypergraph matching is done repeatedly in our procedure. An efficient and practical algorithm for the hypergraph isomorphism problem should be developed, which is one of our future works.

Table 1. Result of self-conformation with protein 4ALD, whose sequence is of length 363. The backbone hyperedges are excluded.

window size   E_P ∩ E_T   E_P ∩ E_T^c   E_P^c ∩ E_T
7             69          0             0
8             14          0             0

Tab. 2 shows the result of conformation of protein 4ALD obtained by applying a (1,2)-conformation rule learned from the other 14 TIM-barrel proteins. In the stage of window size 7, 23 (= 6 + 17) hyperedges are added, 6 of which are similar or exactly identical to hyperedges in the target HT. However, the remaining 17 hyperedges are wrong, that is, there are no hyperedges in HT similar to them. An interesting observation is that correct hyperedge additions often occur in a neighborhood, which would imply that the conformation rules causing correct hyperedge additions capture some regional property common to several proteins. In the stage of window size 8, no hyperedge is added. This is because, once a wrong hyperedge is added, it makes it difficult to add correct hyperedges in the following stages with larger window sizes. Settling this problem is also future work.

Table 2. Result of conformation of protein 4ALD by applying a (1,2)-conformation rule learned from the other 14 TIM-barrel proteins.

window size   Sim_{2,0.8}(E_P, E_T)   Sim_{2,0.8}(E_T, E_P)   E_P ∩ E_T^c   E_P^c ∩ E_T
7             6                       9                       17            63
8             0                       0                       0             14

8 Concluding Remarks

In this paper, we formulated the protein conformation problem as the problem of PAC-learning hypergraph rewriting rules from hypergraphs. In terms of the protein conformation problem, our graph-theoretic approach is quite distinctive, so this learning problem deserves extensive further study, with appropriate modiﬁcations to the framework proposed here, although the current results of our preliminary computational experiments are far from satisfactory.

Acknowledgments This work was in part supported by Grant-in-Aid for Encouragement of Young Scientists and Grant-in-Aid for Scientiﬁc Research on Priority Areas (C) “Genome Information Science” from MEXT of Japan, and the Research for the Future Program of the Japan Society for the Promotion of Science.

References
1. Church, B.W. and Shalloway, D., Top-down free-energy minimization on protein potential energy landscapes, Proc. Natl. Acad. Sci. U.S.A. 98, 6098–6103, 2001.
2. Dill, K.A., Fiebig, K.M. and Chan, H.S., Cooperativity in protein-folding kinetics, Proc. Natl. Acad. Sci. U.S.A. 90, 1942–1946, 1993.
3. Hart, W.E. and Istrail, S.C., Robust proofs of NP-hardness for protein folding: general lattices and energy potentials, J. Comput. Biol. 4, 1–22, 1997.
4. Konig, R. and Dandekar, T., Improving genetic algorithms for protein folding simulations by systematic crossover, Biosystems 50, 17–25, 1999.
5. Matsumoto, S. and Shinohara, A., Refutably probably approximately correct learning, Proc. 5th International Workshop on Algorithmic Learning Theory, LNAI 872, 469–483, 1994.
6. Matsumoto, S., Studies on the learnability of pattern languages, PhD thesis, Kyushu University, 1998.
7. Natarajan, B.K., Probably approximate learning of sets and functions, SIAM J. Comput. 20, 328–351, 1991.
8. Natarajan, B.K., Machine Learning: A Theoretical Approach, Morgan Kaufmann, 1991.
9. Natarajan, B.K. and Tadepalli, P., Two new frameworks for learning, Proc. Fifth International Symposium on Machine Learning, 402–415, 1988.
10. Shimozono, S., Shinohara, A., Shinohara, T., Miyano, S., Kuhara, S. and Arikawa, S., Knowledge acquisition from amino acid sequences by machine learning system BONSAI, Trans. Information Processing Society of Japan 35, 2009–2018, 1994.
11. Smith, R.F. and Smith, T.F., Automatic generation of primary sequence patterns from sets of related protein sequences, Proc. Natl. Acad. Sci. U.S.A. 87, 118–122, 1990.
12. Wierenga, R.K., The TIM-barrel fold: a versatile framework for efficient enzymes, FEBS Letters 492, 193–198, 2001.
13. http://www.python.org/
14. http://www.rcsb.org/pdb/

A General Theory of Deduction, Induction, and Learning

Eric Martin¹, Arun Sharma¹, and Frank Stephan²

¹ School of Computer Science and Engineering, The University of New South Wales, Sydney, NSW 2052, Australia, {emartin, arun}@cse.unsw.edu.au
² Universität Heidelberg, 69121 Heidelberg, Germany, fstephan@math.uni-heidelberg.de

Abstract. Deduction, induction, and learning are various aspects of a more general scientiﬁc activity: the discovery of truth. We propose to embed them in a common, logical framework. First, we deﬁne a generalized notion of "logical consequence." By alternating compact and "weakly compact" consequences, we stratify the set of generalized logical consequences of a given theory into a hierarchy. Classical ﬁrst-order logic is a particular case of this framework; the fact that it is all about deduction is due to the compactness theorem, and this is reﬂected by the collapse of the corresponding hierarchy to the ﬁrst level. Classical learning paradigms in the inductive inference literature provide other particular cases. Finite learning corresponds exactly to the ﬁrst level (or level Σ1) of the hierarchy, whereas learning in the limit corresponds to another level (namely Σ2). More generally, strong and natural connections exist between our hierarchy of generalized logical consequences, the Borel hierarchy, and the hierarchy which measures the complexity of a formula in terms of alternations of quantiﬁers. It is hoped that this framework provides the foundation of a uniﬁed logic of deduction and induction, and highlights the inductive nature of learning. An essential motivation for our work is to apply the theory presented here to the design of "Inductive Prolog," a system with both deductive and inductive capabilities, based on a natural extension of the resolution principle.

1 Introduction

Let us ﬁrst make a few remarks about the nature of deduction and the nature of induction, before we turn to the nature of learning. If a formula ϕ is a deductive consequence of a set of formulas T , it is clear to anyone that ϕ is a logical consequence of T , in the sense that ϕ is true in every model of T . Many would also agree that we can substitute “inductive” for “deductive” in the previous sentence. What is then the diﬀerence between ϕ being a deductive and ϕ being

Eric Martin is supported by the Australian Research Council Grant A49803051. Frank Stephan is supported by the Deutsche Forschungsgemeinschaft (DFG) Heisenberg Grant Ste 967/1-1.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 228–242, 2001. © Springer-Verlag Berlin Heidelberg 2001


an inductive consequence of T ? Well, it should be possible to discover with certainty that ϕ is a deductive consequence of T , if this is indeed the case. Whereas it should not be possible to discover with certainty that ϕ is an inductive consequence of T , if ϕ is not in fact a deductive consequence of T . How can we discover with certainty that ϕ is true on the basis of T ? A natural answer is: if and only if ϕ is actually a logical consequence of a ﬁnite subset of T . In other words, if and only if ϕ is a compact logical consequence of T . On the other hand, if ϕ is an inductive, but not a deductive, consequence of T , then we need an inﬁnite part of T , if not the whole of T , in order to be able to establish this fact. At this point, two questions emerge: 1. What should count as a model of T ? If it is any structure (as deﬁned in classical logic) in which every member of T is true, and if T consists of ﬁrst-order formulas, then the compactness theorem shows that every logical consequence of T is actually a deductive consequence of T , and there is no scope for a proper notion of induction. Hence, we should be able to consider not all structures, but some of them. This would result in a generalized notion of logical consequence that might not be compact. Are there natural candidates for such sets of structures? 2. Suppose that the class of models of T that have been retained is such that some generalized logical consequence ϕ of T is not a deductive (compact) consequence of T . Is then ϕ automatically “promoted” to the status of inductive consequence of T ? The fact that every model of T is a model of ϕ involves inﬁnitely many members of T . But how diﬃcult is it to conclude that ϕ is true on the basis of T ? If we can deﬁne diﬃculty levels, should one of them be considered as “the inductive level?” Let us now consider learning (for more details on the notions mentioned below, see [7]). 
We claim that the classical paradigms in the inductive inference literature are also about discovering the truth. Suppose that the underlying logical vocabulary consists of a unary predicate symbol P, together with a constant n for each natural number n. A language L can be identiﬁed with T = {P n | n ∈ L}, and its complement with T̄ = {¬P n | n ∈ N \ L}. A text (respect. informant) for L can then be identiﬁed with an enumeration of T (respect. T ∪ T̄). The task of discovering an r.e. index for (respect. the characteristic function of) L can be identiﬁed with the task of discovering the inﬁnitary formula ⋀T (respect. ⋀T ∧ ⋀T̄). Clearly ⋀T is a logical consequence of T in the classical sense, hence also in any more general sense. Retaining only the structures that correspond to languages, i.e., the intended possible realities, will make ⋀T ∧ ⋀T̄ a generalized consequence of T. So in both cases, identiﬁcation in the limit is about discovering a particular generalized logical consequence of T, namely a formula that can be viewed as a description of the language to be learned. On the other hand, the task of discovering the truth of an arbitrary formula ϕ from a background theory is equivalent to partial classiﬁcation (see [4]): a formula ϕ represents the class C of all theories T which logically imply ϕ in a general sense, and the partial classiﬁer has to ﬁnd out, on the basis of data from background


theory T, that ϕ is a generalized logical consequence of T, whenever this is true. Note the following:

1. Considering the inﬁnite formula P n0 ∧ P n1 ∧ P n2 ∧ ... rather than the index of a Turing machine which generates n0, n1, n2, ... provides a logical representation equivalent to a representation in terms of r.e. indices. But inﬁnite formulas are not only a technical device for embedding learning paradigms into a logical framework. It turns out that the extensions of ﬁrst-order languages to countable fragments of L^S_{ω1ω} (see below) are the natural logical languages of our framework.
2. When learning from positive data only, there is an implicit assumption that all positive data are enumerated in a text for a language L. This means that the models of {P n | n ∈ L} to be considered should not be consistent with P n for any n ∈ N \ L. The notion of generalized logical consequence, together with the right set of structures, should be able to accommodate this kind of property.

The previous considerations go beyond epistemological concerns about the nature of induction or learning. Indeed, the aim of this work is also to investigate the foundations of induction in AI. Current work in Inductive Logic Programming (see [14]) focuses on a very speciﬁc inductive task: discovering the minimal model of a potentially inﬁnite set of data. We take a more general view and investigate general inductive abilities. Considering both deduction and induction as particular expressions of the art of discovering the truth opens the door to a uniﬁed framework which can provide the basis of an "Inductive Prolog". If Prolog is the deductive engine of AI, giving an agent the ability to compute solutions to existential queries, Inductive Prolog should be the deductive-inductive engine of AI, giving an agent the ability to compute solutions to existential or Σ2 queries, such as: does there exist a chemical compound which has this effect on all molecules having this and that property?
We proceed as follows. In Section 2, we introduce the necessary notation, and in Section 3, we describe the components of our framework. In Section 4, we deﬁne hierarchies of generalized logical consequences by alternating the use of compactness and weak compactness, and show some of their properties relevant to learning paradigms. In Section 5, we investigate the relationship between the hierarchies of generalized logical consequences and formula complexity. As additional evidence of their naturalness, we also demonstrate links with the Borel hierarchy. Finally, in Section 6, we show how a number of classical learning paradigms can be cast into our framework.

2 Notation

A vocabulary is a countable set of function symbols (possibly including constants) and predicate symbols. A vocabulary can, but does not have to, contain equality. If it does not, it is said to be equality free. From now on, S denotes an arbitrary countable vocabulary. For some results, assumptions on S will be made. We

A General Theory of Deduction, Induction, and Learning

231

denote by L^S_{ωω} the set of all ﬁrst-order S-formulas, and by L^S_{ω1ω} the extension of L^S_{ωω} that accepts countable nonempty conjunctions and disjunctions.1 So for all countable nonempty T ⊆ L^S_{ω1ω}, the disjunction of all members of T, written ⋁T, and the conjunction of all members of T, written ⋀T, both belong to L^S_{ω1ω}. Note that the occurrence or nonoccurrence of = in S determines whether L^S_{ωω} and L^S_{ω1ω} are languages with or without equality. A countable fragment of L^S_{ω1ω} is a countable subset L of L^S_{ω1ω} which contains L^S_{ωω} and is closed under subformulas, boolean operators, and quantiﬁcation.2 From now on, L denotes a countable fragment of L^S_{ω1ω}. It represents the language on the basis of which the core of the theory is developed. Clearly, L^S_{ωω} is the smallest countable fragment of L^S_{ω1ω}. The members of L^S_{ω1ω} which are in Σ0 or Π0 prenex form are the quantiﬁer-free members of L^S_{ωω}. Let a nonnull ordinal α and ϕ ∈ L^S_{ω1ω} be given. We say that ϕ is in Σα (respect. Πα) prenex form just in case one of the following holds:
1. ϕ is in Σβ or Πβ prenex form for some β < α, or
2. ϕ is of the form ∃xψ (respect. ∀xψ) for some ψ ∈ L^S_{ω1ω} which is in Σα (respect. Πα) prenex form, or
3. ϕ is of the form ⋁X (respect. ⋀X) for some (countable) X ⊆ L^S_{ω1ω} all of whose members are in Σα (respect. Πα) prenex form.
It is easy to verify that every member of L^S_{ω1ω} is logically equivalent to a member of L^S_{ω1ω} which is in Σα prenex form for some α. If ϕ ∈ L^S_{ω1ω} is logically equivalent to a closed member of L^S_{ω1ω} which is in Σα (respect. Πα) prenex form, then we say that ϕ is Σα (respect. Πα). Note that the classical deﬁnition of a member of L^S_{ωω} being Σn (respect. Πn) for some n ∈ N is a particular case of the former. The ∼ operator is the function ∼: L^S_{ω1ω} → L^S_{ω1ω} deﬁned as follows: if ϕ ∈ L^S_{ω1ω} is of the form ¬ψ for some ψ ∈ L^S_{ω1ω}, then ∼(ϕ) = ψ; otherwise ∼(ϕ) = ¬ϕ.
Given Γ ⊆ L^S_{ω1ω} and an S-structure M, the Γ-diagram of M, denoted D_Γ(M), is the set of all members of Γ that are true in M. Terms will refer to S-terms, formulas to members of L (not L^S_{ω1ω}), sentences to closed formulas, and structures to S-structures. A Henkin structure is a structure all of whose individuals interpret closed terms.3 A Herbrand structure is a structure each of whose individuals interprets a unique closed term.4 Hence Herbrand structures are Henkin. When we consider a Henkin or a Herbrand structure, or a nonempty class of Henkin or Herbrand structures, we tacitly assume that S contains at least one constant.

2 3

4

Given regular cardinal κ, L S κω denotes the set of all S-formulas built from atomic S-formulas using boolean operators, quantiﬁers, and disjunctions or conjunctions over nonempty sets of cardinality smaller than κ. See [10]. For more details on this deﬁnition, see [2] or [12]. Henkin structures should not be confused with Henkin models, which is the name often given to the general models deﬁned in [6]. Our notion of Henkin structure is closer to the canonical structures deﬁned in [17] for Henkin’s proof of the completeness of ﬁrst-order logic. Herbrand structures are close to the Herbrand models considered in Logic Programming. See [3] or [11].

3 Components of the Theory

We denote by W a class of structures, the class of possible worlds. Classical ﬁrst-order logic would take for W the class of all structures. We have explained that in order to address questions such as deduction versus induction, we need to be free to choose a more restrictive class of possible worlds. The discussion about learning suggests considering for W the class of all Henkin structures, or the class of all Herbrand structures. Henkin and Herbrand structures are interesting in many respects, and play a prominent role in Logic Programming ([11]). Given T ⊆ L, we denote by Mod_W(T) the class of all members of W that are models of T. We denote by O a set of sentences that we call the class of possible observations. For classical ﬁrst-order logic, the choice of O would be irrelevant. Suppose we want to cast learning paradigms into this framework. For learning from positive data only, O will be equal to the set of all atomic sentences; for learning from both positive and negative data, O will be equal to the set of all basic sentences. Other examples can also be found in the literature (see for example [9]). We denote by T a set of sets of sentences that we call the class of possible theories. This corresponds roughly to the class of possible texts in the inductive inference literature. Classical ﬁrst-order logic would take for T the set of all sets of closed members of L^S_{ωω}. The quintuple (S, L, W, O, T) contains all we need to deﬁne the fundamental concepts of this framework. We call this quintuple the paradigm under investigation, and we denote it by P.

Deﬁnition 1. Let T ⊆ L and M ∈ W be given. We say that M is an O-minimal model of T in W iff M ∈ Mod_W(T) and for all N ∈ Mod_W(T), {ϕ ∈ O | N |= ϕ} ⊄ {ϕ ∈ O | M |= ϕ}.

The discussion above about learning should justify the previous deﬁnition.
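In a finite toy setting, Deﬁnition 1 can be checked directly. The sketch below is ours: a possible world is represented simply by the frozenset of sentences true in it, so its O-diagram is its intersection with O, and `models_T` lists the worlds that model T:

```python
def o_minimal_models(O, models_T):
    """Return the O-minimal models among models_T (finite sketch of Def. 1).
    Each model M is a frozenset of true sentences; its O-diagram is M & O.
    M is O-minimal iff no model's O-diagram is properly contained in M's
    (the `<` operator on frozensets tests proper subset)."""
    return [M for M in models_T
            if not any((N & O) < (M & O) for N in models_T)]
```

With observations O = {p, q} and two models {p} and {p, q} of some theory, only the first is O-minimal, since the other's O-diagram properly contains it.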
A similar notion is also encountered in AI in the form of the closed-world assumption deﬁned in [16] (for an overview see [5]), and of course in Logic Programming with the least Herbrand models (see [3,11]). Let T ⊆ L be given. Then T can have exactly one O-minimal model in W, or none, or many. We denote by Mod^O_W(T) the class of all O-minimal models of T in W. Note the following:

Lemma 2. If O is closed under ∼ then for all T ⊆ L, Mod^O_W(T) = Mod_W(T).

We can now generalize the notion of logical consequence:

Deﬁnition 3. Let T ⊆ L and ϕ ∈ L be given. We say that ϕ is a logical consequence of T in W, and we write T |=_W ϕ, iff every member of Mod_W(T) is a model of ϕ. We say that ϕ is an O-minimal logical consequence of T in W, and we write T |=^O_W ϕ, iff every member of Mod^O_W(T) is a model of ϕ.

The notion of O-minimal logical consequence in W is the notion of generalized logical consequence we investigate; the other just proves useful. Although we develop the theory on a very broad basis, here we consider almost exclusively two cases of paradigms, which we now deﬁne.


Deﬁnition 4. We say that P is standard iff T = {D_O(M) | M ∈ W}. If P is standard and for all T ∈ T and sentences ϕ, either T |=^O_W ϕ or T |=^O_W ¬ϕ, then we say that P is ideal.

Standard paradigms are the analogues of the classical paradigms in the inductive inference literature. When no data are missing, the latter even correspond to ideal paradigms.

4 The Hierarchies of Generalized Logical Consequences

We now deﬁne the hierarchies of generalized logical consequences that are basically the fundamental object of study of this framework.5 First we set, for all T ∈ T, Σ0P (T ) = Π0P (T ) = T . Deﬁnition 5. Let nonnull ordinal α and T ∈ T be given. Suppose that ΠβP (T ) has been deﬁned for all β < α. A sentence ϕ belongs to ΣαP (T ) iﬀ there exists ﬁnite E ⊆ T and ﬁnite H ⊆ β 1 and T ∈ T be given. A sentence ϕ belongs to ΣαP (T ) iﬀ there is ψ ∈ β 1, if follows P easily from Lemma 9 and the induction hypothesis that ϕ ∈ Σα+β (DO (M)). P P So we have shown that for all M ∈ W, Σβ (DO (M)) ⊆ Σα+β (DO (M)). Let M ∈ W and ϕ ∈ ΠβP (DO (M)) be given. To complete the proof we show that P (DO (M)). By Lemma 8, choose ψ ∈ ΣβP (DO (M)) such ϕ belongs to Πα+β O P that for all T ∈ T with T |=O W ψ and T |=W ϕ, ¬ϕ ∈ Σβ (T ). Let N ∈ W O with DO (N) |=O W ψ and DO (N) |=W ϕ be given. Since P and P are ideal, O P we infer that DO (N) |=O W ψ and DO (N) |=W ϕ. Hence ¬ϕ ∈ Σβ (DO (N)), so P ¬ϕ ∈ Σα+β (DO (N)) as proved above. Since the same part of the proof also shows P P (DO (M)), we conclude with Lemma 8 that ϕ ∈ Πα+β (DO (M)). that ψ ∈ Σα+β

5 Connections with Other Hierarchies

In order to be able to establish relations between the hierarchies of generalized logical consequences and other hierarchies, we define yet another hierarchy, whose Σ_α levels are better behaved:

Definition 14. For all ordinals α, the sets of sentences Σ̃_α^P and Π̃_α^P are defined by induction on α, as follows.
1. Σ̃_0^P = O ∪ {∼ϕ | ϕ ∈ O}.


E. Martin, A. Sharma, and F. Stephan

2. For all ordinals α, Π̃_α^P = {∼ϕ | ϕ ∈ Σ̃_α^P}.
3. Let α ≠ 0 be given. A sentence ϕ belongs to Σ̃_α^P iff there exists β < α with the following property: for all M ∈ W with M |= ϕ, there exists finite D ⊆ Π̃_β^P such that M |= ∧D and ∧D |=_W ϕ.

Informally, we will refer to the hierarchy defined above as the uniform hierarchy. We will need the following properties. First note that the levels of the uniform hierarchy are ordered as expected in a hierarchy of Borel type.

Proposition 15. For all ordinals α, β, if α < β then Σ̃_α^P ∪ Π̃_α^P ⊆ Σ̃_β^P ∩ Π̃_β^P.

Proof. Trivially, Σ̃_0^P = Π̃_0^P, which implies immediately that for all ordinals β, Σ̃_0^P ∪ Π̃_0^P ⊆ Σ̃_β^P ∩ Π̃_β^P. Let α ≠ 0 be given. For all β > α, the inclusions Σ̃_α^P ⊆ Σ̃_β^P and Π̃_α^P ⊆ Π̃_β^P are straightforward. It is easily verified that Σ̃_α^P ⊆ Π̃_{α+1}^P. We infer that any ϕ ∈ Π̃_α^P is equal to ∼ψ for some ψ ∈ Σ̃_α^P, hence ∼ϕ ∈ Π̃_{α+1}^P, hence ϕ ∈ Σ̃_{α+1}^P. So Π̃_α^P ⊆ Σ̃_{α+1}^P. The result follows.

Remember that L is just a fragment of LSω1ω, hence does not contain the disjunction or conjunction of any of its countable subsets. So ∨X and ∧X in the closure property below are members of LSω1ω, but not necessarily members of L.

Lemma 16. Let α ≠ 0, a sentence ϕ, and countable X ⊆ L be given.
1. If X ⊆ Σ̃_α^P and |=_W (ϕ ↔ ∨X) then ϕ ∈ Σ̃_α^P.
2. If X ⊆ Π̃_α^P and |=_W (ϕ ↔ ∧X) then ϕ ∈ Π̃_α^P.

Adding the requirement that W consists exclusively of Henkin structures enables us to treat existential quantifiers as countable disjunctions, and universal quantifiers as countable conjunctions.

Corollary 17. Suppose that W is a set of Henkin structures. Let a formula ϕ with free variables x1, ..., xn be given. Denote by X the set of all sentences of the form ϕ[t1/x1, ..., tn/xn] for some closed terms t1, ..., tn. Let α ≠ 0 be given.
1. If X ⊆ Σ̃_α^P then ∃x1 ... ∃xn ϕ ∈ Σ̃_α^P.
2. If X ⊆ Π̃_α^P then ∀x1 ... ∀xn ϕ ∈ Π̃_α^P.

We characterize Σ̃_1^P and Π̃_1^P, assuming that O is closed under the ∼ operator.

Proposition 18. Suppose that O is closed under ∼. Let a sentence ϕ be given.
1. ϕ ∈ Σ̃_1^P if and only if |=_W ϕ, or |=_W ¬ϕ, or there is a nonempty set X of finite, nonempty subsets of O such that |=_W (∨{∧D | D ∈ X} ↔ ϕ).
2. ϕ ∈ Π̃_1^P if and only if |=_W ϕ, or |=_W ¬ϕ, or there is a nonempty set X of finite, nonempty subsets of O such that |=_W (∧{∨D | D ∈ X} ↔ ϕ).


Proof. The proof is trivial if |=_W ϕ or |=_W ¬ϕ, so suppose otherwise. Assume that ϕ ∈ Σ̃_1^P. For each M ∈ W such that M |= ϕ, choose a nonempty subset D_M of D_O(M) such that D_M |=_W ϕ. Set X = {D_M | M ∈ W and M |= ϕ}. Then X is nonempty, and it is easy to verify that |=_W (∨{∧D | D ∈ X} ↔ ϕ). Conversely, let a nonempty set X of finite, nonempty subsets of O be such that |=_W (∨{∧D | D ∈ X} ↔ ϕ). Since ∧D ∈ Σ̃_1^P for all D ∈ X, it follows from Lemma 16 that ϕ ∈ Σ̃_1^P. We conclude that 1. holds, and 2. is an immediate consequence.

Then we characterize the other levels:

Proposition 19. Let α > 1 and a sentence ϕ be given.
1. ϕ ∈ Σ̃_α^P iff there is a nonempty X ⊆ ∪_{β<α} Π̃_β^P with |=_W (∨X ↔ ϕ). …

H + 23Na --> 24Mg          + 11.68 (MeV)
H + 23Na --> 24Na + nu     + 6.18
H + 23Na --> 20Ne + 4He    + 2.38 .

_______ 1

The reaction formulations of ASTRA are based on neutral atoms. For this reason, minor differences from textbook notation appear, such as in the second reaction above, whose textbook version is H + 23Na --> 24Na + /e + nu, instead of H + 23Na --> 24Na + nu.
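This excerpt does not spell out ASTRA's quantum constraints, but the neutral-atom bookkeeping the footnote alludes to can be sketched: a candidate reaction must at least conserve total mass number A and total proton number Z. The entity table and the two-body enumeration below are illustrative, not ASTRA's actual representation.

```python
from itertools import combinations_with_replacement

# (mass number A, proton number Z) for a few neutral atoms and particles
ENTITIES = {
    "n": (1, 0), "H": (1, 1), "4He": (4, 2), "16O": (16, 8),
    "19F": (19, 9), "20Ne": (20, 10), "23Na": (23, 11), "24Mg": (24, 12),
}

def totals(side):
    # sum A and Z over one side of a reaction
    a = sum(ENTITIES[x][0] for x in side)
    z = sum(ENTITIES[x][1] for x in side)
    return a, z

def conserved(reactants, products):
    # neutral-atom bookkeeping: total A and total Z must balance
    return totals(reactants) == totals(products)

# enumerate candidate two-body -> up-to-two-body reactions
candidates = [
    (lhs, rhs)
    for lhs in combinations_with_replacement(sorted(ENTITIES), 2)
    for k in (1, 2)
    for rhs in combinations_with_replacement(sorted(ENTITIES), k)
    if sorted(lhs) != sorted(rhs) and conserved(lhs, rhs)
]

print(conserved(("H", "23Na"), ("24Mg",)))        # True
print(conserved(("H", "23Na"), ("20Ne", "4He")))  # True
print(conserved(("H", "19F"), ("24Mg",)))         # False
```

The real system would additionally test exothermicity and further quantum numbers before accepting a candidate.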

Automated Formulation of Reactions and Pathways


In each example, hydrogen and sodium (on the left hand side) combine to form one or more new substances (on the right hand side), along with the total energy emissions in MeV. For the runs described in this paper, we provided ASTRA with information about the elements from hydrogen to sulphur, their isotopes, and a few elementary particles like the electron, proton, neutron and the neutrino, with their antiparticles, giving a total of 68 distinct entities. From these, the system generated more than 600 different reactions. We manually eliminated minor variations such as 3He + 9Be --> 12C + e + /e and 3He + 9Be --> 12C + nu + /nu, leaving 472 reactions that included 344 fusion reactions and 28 decays.

3.2 Generating Reaction Chains

Taking as input the reactions generated by the first stage, ASTRA generates the reaction chains for an element E from a small set of basic elements/isotopes (E) that we assume as given. The system uses a depth-first, backward chaining search to construct the reaction chains. On the first step, ASTRA finds those reactions that give as an output the final element E. Upon selecting one of these reactions, R, it recursively finds those reactions that give as an output one or more of R's input elements. The algorithm continues this process, halting its recursion when it finds a reaction chain for which all the reacting elements are in (E), or when it cannot find a reaction off which to chain. ASTRA generates all possible reaction chains in this systematic manner.
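The depth-first, backward-chaining search just described can be sketched as follows. The reaction list and basic set are placeholder stand-ins (the standard helium-burning steps), and reuse of a reaction within one chain is disallowed here simply to keep the toy search finite; the real system chains over the several hundred reactions generated by its first stage.

```python
# illustrative reaction list: (inputs, outputs)
REACTIONS = [
    (("4He", "4He"), ("8Be",)),
    (("4He", "8Be"), ("12C",)),
    (("4He", "12C"), ("16O",)),
]
BASIC = {"4He"}  # the assumed-given set of basic elements/isotopes

def chains(target, used=()):
    """Depth-first backward chaining: yield reaction lists deriving target."""
    if target in BASIC:
        yield []
        return
    for rxn in REACTIONS:
        ins, outs = rxn
        if target in outs and rxn not in used:
            partial = [[]]
            for elem in ins:  # derive every input of this reaction
                partial = [c + extra
                           for c in partial
                           for extra in chains(elem, used + (rxn,))]
            for c in partial:
                yield c + [rxn]

result = list(chains("16O"))
print(len(result))     # 1: the triple-alpha + alpha-capture chain
print(len(result[0]))  # 3 reactions
```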

4 New Results of ASTRA

In this section we report the new results of our tests with ASTRA concerning hydrogen-, helium-, carbon- and oxygen-burning reactions. We start with proton, electron and neutron capture reactions of heavier elements such as oxygen, fluorine, neon, sodium, magnesium, aluminium, silicon and phosphorus.

4.1 Proton, Electron, and Neutron Captures

Proton captures are an important class of exothermic reactions that also take part in processes transforming hydrogen into helium, as will be described below. Proton capture by an atomic nucleus turns it into another element with an atomic number one higher. ASTRA finds 33 examples of proton captures given in the astrophysics literature (e.g., Fowler et al., 1967, 1975, 1983) for elements from hydrogen to oxygen (16O), and 20 more for elements from oxygen to sulphur. ASTRA's first stage predicts that all elements from hydrogen to sulphur (32S), with the exception of 4He, participate in exothermic proton capture. The program produces 46 such reactions for elements from hydrogen to oxygen, including all 33 examples we have found in texts, but also 13 others which we have not seen in the astrophysics texts that we examined. The program also finds 72 proton captures for elements from oxygen (16O) to sulphur (32S), including the 20 such reactions cited in the same literature. Three examples of such proton captures are,


S. Kocabas

H + 19F --> 20Ne
H + 23Na --> 24Mg
H + 27Al --> 28Si.

In these reactions, proton captures by fluorine, sodium and aluminium transform them into neon, magnesium and silicon, respectively. Also, all the isotopes from oxygen to sulphur, with the exception of the isotopes of neon and magnesium, participate in exothermic proton captures that produce helium (4He). Three examples of such reactions are,

H + 19F --> 4He + 16O
H + 23Na --> 4He + 20Ne
H + 27Al --> 4He + 24Mg.

Electron capture reactions are weak interactions in which an electron is absorbed by the atomic nucleus, which is thereby transformed into one with a smaller atomic number. In the process, the electron combines with a proton in the nucleus, effectively transforming it into a neutron with the emission of a neutrino: e + p --> n + nu. ASTRA's first stage produces 6 electron capture reactions for elements from hydrogen to oxygen, of which only the one just given appears in astrophysics texts. The program also found 8 electron capture reactions for elements from oxygen to sulphur, none of which we have seen in the texts.

In neutron capture, an element combines with a neutron to form a heavier isotope of the same element. We found 17 neutron captures for elements from hydrogen to oxygen in the literature, while ASTRA predicts 59 such reactions that are theoretically possible for the same elements. Some examples of these reactions can be found in Kocabas and Langley (1998). Recent runs of the system generated 76 reactions for elements from oxygen to sulphur. Three examples of such neutron capture reactions are,

n + 18F --> 19F
n + 22Na --> 23Na
n + 31S --> 32S.
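Whether a formulated reaction is exothermic follows from the atomic mass balance: Q = (Σ m_reactants − Σ m_products) · 931.494 MeV/u, with a positive Q marking an exothermic reaction. A check against the proton capture H + 23Na --> 24Mg listed above (the atomic masses are standard tabulated values, not data from the paper):

```python
# Q-value of a capture reaction from neutral atomic masses (u);
# mass values are standard tabulated figures, not taken from the paper
U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit

mass = {"H": 1.007825, "23Na": 22.989770, "24Mg": 23.985042}

def q_value(reactants, products):
    dm = sum(mass[r] for r in reactants) - sum(mass[p] for p in products)
    return dm * U_TO_MEV  # positive => exothermic

q = q_value(["H", "23Na"], ["24Mg"])
print(round(q, 2))  # about 11.69 MeV, an exothermic proton capture
```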

Here, as indicated above, in each case the nucleus that absorbs the neutron turns into a heavier isotope of the same element.

4.2 Hydrogen Burning Processes

The transformation of hydrogen into helium, in a series of nuclear processes which take place in main sequence stars, is the principal source of energy. The standard reaction chains given in astrophysics texts (e.g. Audouze & Vauclair, 1980, p. 52; Williams, 1991, p. 351) for helium synthesis in such stars are the hydrogen-burning processes called "proton-proton" or pp chains. Other hydrogen burning reactions that


appear in texts involve the heavier elements carbon, nitrogen and oxygen, and the pathway is called the CNO-chain. ASTRA produces all known CNO-chains, in addition to one viable variant using the electron capture of 13N (see Kocabas & Langley, 1998). We have tested ASTRA on hydrogen burning reactions involving the elements heavier than oxygen. Such reactions are hypothesized to occur in stars several times larger than the sun. The program found four hydrogen burning chains involving the elements fluorine, neon, sodium, magnesium, silicon, phosphorus and sulphur. One of these processes is

H + 24Mg -> 25Mg + nu
H + 25Mg -> 26Al
H + 26Al -> 27Si
27Si + e -> 27Al + e + nu
H + 27Al -> 24Mg + 4He
---------------------------------
4 H -> 4He + 2 nu .

In this process four hydrogen atoms, in effect, transform into one helium atom, while two neutrinos are also emitted. We did not see any of these processes in the texts that we examined, but we presume that they are known to astrophysicists.

4.3 Helium Burning Processes

The origin and the relative abundance of carbon and oxygen have been among the main concerns of astrophysics. The standard account (e.g., Fowler, 1986, pp. 5-6) relies on the process of helium burning, in which helium nuclei react to form carbon and oxygen in the following steps:

4He + 4He --> 8Be
4He + 8Be --> 12C
4He + 12C --> 16O .

In its earlier runs, ASTRA found an alternative to this process which astrophysicists qualified as more likely in neutron-rich stellar media (see Kocabas & Langley, 2000). ASTRA finds 25 exothermic helium burning reactions involving the range of elements from oxygen to silicon, including the 16 such reactions cited in the texts. Some of these reactions are,

4He + 16O -> 20Ne          5.16 (MeV)
4He + 19F -> 23Na          10.5
4He + 19F -> H + 22Ne      1.72
4He + 20Ne -> 24Mg         9.3
4He + 22Ne -> 26Mg         10.6


4He + 23Na -> 27Al         10.2
4He + 23Na -> /nu + 27Si   5.4
4He + 23Na -> H + 26Mg     1.82

4He + 24Mg -> 28Si         10.1
4He + 25Mg -> 29Si         11.2
4He + 25Mg -> nu + 29Al    7.5
4He + 25Mg -> n + 28Si     2.73
4He + 26Mg -> 30Si         10.8
4He + 26Mg -> /nu + 30P    6.5
4He + 26Mg -> n + 29Si     0.13

4He + 27Al -> 31P          9.7
4He + 27Al -> /nu + 31S    4.2
4He + 27Al -> H + 30Si     2.42

4He + 28Si -> 32S          6.9
4He + 28Si -> nu + 32P     4.6
4He + 29Si -> 33S          7.2

Among these reactions, those that emit neutrinos (nu and /nu) are weak interactions, which are much slower than the other alpha capture reactions. Astrophysicists generally ignore the weak reactions for their slow rates, except in processes that rely on such weak reactions. A careful comparison of the proton capture, neutron capture and helium burning reactions produced by ASTRA with the natural abundances of the elements from oxygen to sulphur in the CRC Handbook (80th ed., D.R. Lide, 1999-2000) reveals an interesting result: the elements fluorine, neon, sodium, magnesium, silicon, phosphorus and sulphur in the solar system must have been formed by alpha capture processes, rather than proton or neutron captures. This is because the stable isotope abundances of these elements indicate a parallelism with the stepwise alpha-capture (helium burning) of the stable lighter isotopes of the elements in the series (see Table 1). Indeed, the two alpha capture chains (16O, 20Ne, 24Mg, 28Si, 32S and 19F, 23Na, 27Al, 31P) contain the most abundant isotopes of these elements. These processes may have been accompanied by carbon, nitrogen and oxygen burning processes, which produce 24Mg, 28Si and 32S respectively, as shown in the next subsection. Although proton capture reactions explain the relative abundance of 19F, 20Ne, 23Na, 24Mg, 27Al, 28Si, and 32S, they fail to explain the relative abundance of 31P. Similarly, neutron capture reactions fail to explain the relative abundances of 20Ne, 24Mg and 28Si. Yet, stepwise alpha capture explains the relative abundances of all the isotopes in the series. We are currently investigating the astrophysical literature on the origins of the elements from fluorine to sulphur before claiming any novelty on this issue.


Table 1. Relative abundances of some isotopes for elements from oxygen to sulphur.

_______________________________________________________
isotope    % abundance        isotope    % abundance

16O        99.76              18O        0.2
19F        100                18F        0
20Ne       90.48              22Ne       9.25
23Na       100                22Na       0
24Mg       78.99              26Mg       11.01
27Al       100                26Al       0
28Si       92.23              29Si       4.67
31P        100                30P        0
32S        95.0               34S        4.21
_______________________________________________________
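The abundance argument can be checked mechanically against Table 1: each step of the two alpha-capture chains adds 4 to the mass number, and every chain member is the majority isotope of its element. The abundances below are transcribed from the table.

```python
abundance = {  # % natural abundance, from Table 1
    "16O": 99.76, "19F": 100.0, "20Ne": 90.48, "23Na": 100.0,
    "24Mg": 78.99, "27Al": 100.0, "28Si": 92.23, "31P": 100.0, "32S": 95.0,
}
chains = [["16O", "20Ne", "24Mg", "28Si", "32S"],
          ["19F", "23Na", "27Al", "31P"]]

# each alpha-capture step adds 4 to the mass number, and every chain
# member is the dominant stable isotope of its element
for c in chains:
    masses = [int("".join(ch for ch in iso if ch.isdigit())) for iso in c]
    assert all(b - a == 4 for a, b in zip(masses, masses[1:]))
    assert all(abundance[iso] > 50 for iso in c)
print("both chains consist of majority isotopes")
```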

4.4 Carbon, Nitrogen, and Oxygen Burning

Carbon burning, in which two carbon atoms fuse together to produce heavier elements, takes place after the helium burning stage in a star. ASTRA finds four carbon burning reactions which produce the elements neon, sodium, and magnesium:

12C + 12C -> 24Mg          + 14.4 (MeV)
12C + 12C -> nu + 24Na     + 8.9
12C + 12C -> H + 23Na      + 2.72
12C + 12C -> 4He + 20Ne    + 5.1

In nitrogen burning, two nitrogen atoms fuse together to form elements ranging from oxygen to silicon. ASTRA finds 10 such reactions:

14N + 14N -> 28Si          + 27.82 (MeV)
14N + 14N -> nu + 28Al     + 23.12
14N + 14N -> n + 27Si      + 10.65
14N + 14N -> H + 27Al      + 16.24
14N + 14N -> D + 26Al      + 5.52
14N + 14N -> 3He + 25Mg    + 4.52
14N + 14N -> 4He + 24Mg    + 17.72
14N + 14N -> 8Be + 20Ne    + 8.32
14N + 14N -> 12C + 16O     + 10.46
14N + 14N -> 13N + 15N     + 0.72

Finally, ASTRA formulates the following oxygen burning reactions in which two oxygen atoms fuse together in exothermic reactions, and the elements magnesium, silicon, phosphorus and sulphur are generated:


16O + 16O -> 32S           + 17.12 (MeV)
16O + 16O -> nu + 32P      + 14.82
16O + 16O -> n + 31S       + 2.05
16O + 16O -> H + 31P       + 8.34
16O + 16O -> 4He + 28Si    + 10.22
16O + 16O -> 8Be + 24Mg    + 0.02

Carbon, nitrogen and oxygen burning reactions happen only in massive stars, as they require higher energies to initiate. The astrophysics texts that we examined mention only a few of these reactions, such as 12C + 12C -> 24Mg, 14N + 14N -> 28Si, and 16O + 16O -> 32S, while ASTRA provides a full account of such reactions.

5 Discussion of Results

We have compared ASTRA's earlier outputs involving the elements from hydrogen to oxygen to those available in astrophysics texts (Clayton, 1983; Audouze & Vauclair, 1980; Kippenhahn & Weigert, 1994; Fowler et al., 1967, 1975, 1983; Cujec & Fowler, 1980; Adelberger et al., 1998), and discussed some of its results with astrophysicists. We received encouraging comments from domain experts on the earlier outputs (see Kocabas & Langley, 2000). However, the reactions and processes of the light elements have already been studied extensively by nuclear astrophysicists. For this reason, we decided to extend the scope of the program to investigate the reactions and the processes of the elements from oxygen to sulphur.

The ASTRA program can handle a very large volume of data for constructing reactions and reaction networks. Astrophysicists normally formulate the reactions by hand, and construct the reaction networks by focusing on the more likely reactions using certain domain criteria. It is in this way that the hydrogen and helium burning processes involving the lighter elements have been dealt with extensively in the current literature. But as the number of possible reactions increases rapidly for the heavier elements, a complete analysis of the reactions and processes can only be carried out with the aid of a computational tool such as our program. Although we tested ASTRA on the reactions of the elements from hydrogen (H) to sulphur (32S) with some interesting results, we plan to extend the system for exploring the reactions of heavier elements from sulphur to iron (56Fe) and further, which take place in stellar and interstellar processes. The understanding of the nuclear processes in which the chemical elements are formed is important in more ways than one, as this provides detailed information about the stellar and interstellar conditions that produced these elements.
This is why cosmologists and astronomers, as well as nuclear astrophysicists, are very much interested in these processes. We have described in Section 4 how an analysis of the reactions and the reaction processes produced by ASTRA, together with the natural abundances of chemical elements and isotopes, can lead to a detailed picture of the conditions in which these elements are formed.

Astrophysicists use reaction rates to rule out slower reactions from their reaction networks. The current version of ASTRA can use reaction rates to rule out candidates, retaining only those reactions with the highest rates to construct reaction networks. But the rate for each reaction must be given by the user; the program cannot calculate them. We attempted to incorporate the rate calculation in ASTRA recently but decided


not to go on with this because of the complexities involved. Rate calculations are based on the reaction cross-sections and element concentrations in stellar media. Astrophysicists first construct a model of the star by making a number of assumptions about its size, mass, temperature, pressure and element distribution. The stellar plasma is also treated in several layers, through which element compositions, dominant reactions and processes change.

Although ASTRA can search a much larger space of reactions and processes than human scientists can, we did not meet any problems with it for the elements and isotopes from hydrogen to sulphur, involving 68 distinct entities. We have yet to see whether we will need to constrain the scope of the reactions for the elements from hydrogen to iron. We plan to extend the program to investigate the reactions of the elements from sulphur to iron. Meanwhile we will continue to investigate the literature about the origins of heavier elements in the solar system.
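The rate-based pruning described above (user-supplied rates, retaining only the fastest candidates) amounts to a simple threshold filter; the reactions and rate values below are placeholders, not ASTRA's data.

```python
# user-supplied relative rates for candidate reactions (illustrative values)
rates = {
    "4He + 16O -> 20Ne": 1.0,
    "4He + 19F -> 23Na": 0.8,
    "4He + 23Na -> /nu + 27Si": 1e-6,  # weak interaction: much slower
}

def keep_fastest(rates, threshold):
    # retain only reactions whose rate is within `threshold` of the maximum
    top = max(rates.values())
    return {r: v for r, v in rates.items() if v >= top * threshold}

print(sorted(keep_fastest(rates, 0.1)))  # the weak reaction is pruned
```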

6 Related Research

The ASTRA system has evolved from our previous work on the computational study of discoveries in particle physics with BR-4 (Kocabas & Langley, 1995), which models the discoveries in this field by prediction and theory revision. BR-4 inherits some of its capabilities from its predecessor BR-3 (Kocabas, 1991), which in turn descends from STAHL (Zytkow & Simon, 1986) and STAHLp (Rose & Langley, 1986), which modelled qualitative discovery in chemistry. Our system shares goals and techniques with the more recent systems MECHEM (Valdes-Perez, 1995), designed to discover new reaction mechanisms in catalytic chemistry, and SYNGEN (Hendrickson, 1995), which constructs pathways for the synthesis of complex organic chemicals from simpler constituents.

There are many similarities between ASTRA and MECHEM in terms of the tasks they perform. Both systems produce reactions and reaction mechanisms in large search spaces, and both are designed as computational aids for scientists. But the two systems differ in their inputs and outputs. MECHEM receives as input the initial and final chemical substances and generates all the simple reaction pathways using a set of constraints on chemical reactivity. Similarly, ASTRA uses a set of quantum constraints to formulate the reactions from which it constructs the reaction links for each element until the final element is reached. The reaction links in a chain constitute what astrophysicists call 'the reaction network'. ASTRA has to deal with a large number of entities (elementary particles, elements and their isotopes), and an even larger number of reactions of these entities, to construct valid reactions and reaction chains, while MECHEM has a relatively smaller search space in its domain of application. MECHEM's reaction pathways are lists of reaction steps, normally with at most two reactants and two products. In contrast, the reactions of ASTRA can have from one to three entities on both sides.
As to the comparison between our system and SYNGEN, the latter addresses the synthesis of organic chemicals, where one needs to determine reaction paths and the initial substances, through a set of known intermediate substances. The constraints of SYNGEN are more similar to those used by MECHEM though they operate in different fields of chemistry. Our program differs from these systems in its field of application and the types of constraints used.

180

S. Kocabas

7 Conclusions

In this paper we described the new results of ASTRA, a computational tool which formulates reactions and reaction chains for researchers in nuclear astrophysics. The system determines all valid reactions for a given set of elements, isotopes and particles using a set of quantum constraints. The system also generates all reaction pathways for an element starting from a set of lighter elements. ASTRA generates all reactions we have seen in the astrophysics literature involving proton, electron and neutron captures, and helium, carbon, nitrogen and oxygen burning. ASTRA also reproduces all reaction chains that scientists have proposed for the synthesis of helium, carbon, nitrogen and oxygen in stellar media. But many of the valid reactions and reaction chains that the system generates do not appear in the related scientific literature. The domain experts that we have contacted suggested that some of these results carry theoretical interest for certain stellar models, but that the vast majority of the reaction chains would be ignored by astrophysicists for their low rates.

Earlier we decided to incorporate the rate calculations in the ASTRA system, but later abandoned this project because of the complexities involved. Instead, we focused on extending the system's knowledge base to investigate the reactions and processes of the heavier elements. Given information about 32 more elements and isotopes from oxygen to sulphur, amounting to a total of 68 distinct entities, the program generated all the proton, electron and neutron capture reactions and all the helium, carbon and nitrogen burning reactions. A close comparison of these reactions with the stability and natural abundances of the 32 isotopes between oxygen and sulphur indicated that the stable isotopes in this range must have been formed by exothermic alpha capture reactions, accompanied by carbon, nitrogen and oxygen burning, rather than by proton or neutron capture reactions.
We are currently investigating the literature for any scientific record on this issue.

References

Adelberger, E.G., et al. (1998). Solar fusion cross sections. Reviews of Modern Physics, 70(4), 1266-1291.
Audouze, J., & Vauclair, S. (1980). An introduction to nuclear astrophysics. Holland: D. Reidel.
Clayton, D.D. (1983). Principles of Stellar Evolution and Nucleosynthesis. Chicago: The University of Chicago Press.
Cujec, B. & Fowler, W.A. (1980). Neglect of D, T, and 3He in advanced stellar evolution. The Astrophysical Journal, 236, 658-660.
Feigenbaum, E.A., Buchanan, B.G., & Lederberg, J. (1971). On generality and problem solving: A case study using the DENDRAL program. In Machine Intelligence (vol. 6). Edinburgh: Edinburgh University Press.
Fowler, W.A. (1986). The synthesis of the chemical elements carbon and oxygen. In S.L. Shapiro & S.A. Teukolsky (Eds.), Highlights of modern astrophysics. New York: John Wiley & Sons.
Fowler, W.A., Caughlan, G.R., & Zimmermann, B.A. (1967). Thermonuclear reaction rates. Ann. Rev. Astron. Astrophysics, 5, 525-570.
Fowler, W.A., Caughlan, G.R., & Zimmermann, B.A. (1975). Thermonuclear reaction rates. Ann. Rev. Astron. Astrophysics, 13, 69-112.


Harris, M.J., Fowler, W.A., Caughlan, G.R., & Zimmermann, B. (1983). Thermonuclear reaction rates. Ann. Rev. Astron. Astrophysics, 21, 165-176.
Hendrickson, J.B. (1995). Systematic synthesis design: The SYNGEN program. Working Notes of the AAAI Spring Symposium on Systematic Methods of Scientific Discovery (pp. 13-17). Stanford, CA: AAAI Press.
Jones, R. (1986). Generating predictions to aid the scientific discovery process. Proceedings of the Fifth National Conference on Artificial Intelligence (pp. 513-517). Philadelphia: Morgan Kaufmann.
Kippenhahn, R. & Weigert, A. (1994). Stellar Structure and Evolution. London: Springer-Verlag.
Kocabas, S. (1991). Conflict resolution as discovery in particle physics. Machine Learning, 6, 277-309.
Kocabas, S., & Langley, P. (1995). Integration of research tasks for modeling discoveries in particle physics. Working Notes of the AAAI Spring Symposium on Systematic Methods of Scientific Discovery (pp. 87-92). Stanford, CA: AAAI Press.
Kocabas, S. & Langley, P. (1998). Generating process explanations in nuclear astrophysics. Proceedings of the ECAI-98 Workshop on Machine Discovery (pp. 4-9), Brighton, UK.
Kocabas, S. & Langley, P. (2000). Computer generation of process explanations in nuclear astrophysics. International Journal of Human-Computer Studies, 53, 1149-1164. Academic Press.
Kulkarni, D., & Simon, H.A. (1990). Experimentation in machine discovery. In J. Shrager & P. Langley (Eds.), Computational models of scientific discovery and theory formation. San Mateo, CA: Morgan Kaufmann.
Lang, K.R. (1974). Astrophysical formulae: A compendium for physicists and astrophysicists. New York: Springer-Verlag.
Langley, P. (1981). Data-driven discovery of physical laws. Cognitive Science, 5, 31-54.
Langley, P. (1998). The computer-aided discovery of scientific knowledge. Proceedings of the First International Conference on Discovery Science. Fukuoka, Japan: Springer.
Langley, P., Simon, H.A., Bradshaw, G.L., & Zytkow, J.M. (1987).
Scientific Discovery: Computational explorations of the creative processes. Cambridge, MA: MIT Press.
Lee, Y., Buchanan, B.G., Mattison, D.R., Klopman, G., & Rosenkranz, H.S. (1995). Learning rules to predict rodent carcinogenicity. Machine Learning, 30, 217-240.
Lide, D.R. (Ed.). (1999-2000). CRC handbook of chemistry and physics (80th ed.). Florida: CRC Press.
Mitchell, F., Sleeman, D., Duffy, J.A., Ingram, M.D., & Young, R.W. (1997). Optical basicity of metallurgical slags: A new computer-based system for data visualisation and analysis. Ironmaking and Steelmaking, 24, 306-320.
Rose, D. & Langley, P. (1986). Chemical discovery as belief revision. Machine Learning, 1, 423-451.
Valdes-Perez, R.E. (1995). Machine discovery in chemistry: New results. Artificial Intelligence, 74, 191-201.
Williams, W.S.C. (1991). Nuclear and Particle Physics. Oxford: Clarendon Press.
Zytkow, J.M., & Simon, H.A. (1986). A theory of historical discovery: The construction of componential models. Machine Learning, 1, 107-137.

Passage-Based Document Retrieval as a Tool for Text Mining with User’s Information Needs Koichi Kise1,2 , Markus Junker1 , Andreas Dengel1 , and Keinosuke Matsumoto2 1

German Research Center for Artiﬁcial Intelligence (DFKI GmbH), P.O.Box 2080, 67608 Kaiserslautern, Germany {Koichi.Kise, Markus.Junker, Andreas.Dengel}@dfki.de 2 Department of Computer and Systems Sciences, Graduate School of Engineering, Osaka Prefecture University 1-1 Gakuencho, Sakai, Osaka 599-8531, Japan {kise, matsu}@cs.osakafu-u.ac.jp

Abstract. Document retrieval can be considered as a basic but important tool for text mining that is capable of taking a user's information need into account. However, document retrieval is a hard task if multi-topic lengthy documents have to be retrieved with a very short description (a few keywords) of the information need. In this paper, we focus on this problem, which is typical in real world applications. We experimentally validate that passage-based document retrieval is advantageous in such circumstances as compared to conventional document retrieval. Passage-based document retrieval is a kind of document retrieval which takes into account only small fractions (passages) of documents to judge the relevance of a document to the information need. As a passage-based method, we employ the method based on density distributions of keywords. This is compared with the following three conventional methods for document retrieval: the vector space model, pseudo-feedback, and latent semantic indexing. Experimental results show that the passage-based method is superior to the conventional methods if long documents have to be retrieved by short queries.

1 Introduction

The growing number of electronic textual documents has created the need for intelligent access to the information implied by them. The goal of text mining is to discover novel nuggets of information from a huge collection of documents to fulfill the need in the ultimate sense [1]. The unstructured nature of documents, however, makes it difficult to realize the goal in a general way. The current state-of-the-art is to approach the goal by integrating the tools developed so far in other related research areas [2], though their functionality and/or domains of interest are still restricted. In order to take a step forward, it would be necessary both to devise novel combinations of the tools and to polish them up. A typical scenario of text mining would be that (1) information extraction is utilized to obtain the information from documents, (2) data mining is applied

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 155-169, 2001.
© Springer-Verlag Berlin Heidelberg 2001


to the extracted information to derive novel information. In this scenario, information of interest is fixed in the first stage of the processing. Another possibility is mining based on a user's ad-hoc information need. In this scenario, document retrieval is a tool applied at the first stage of processing to select documents analyzed at later stages. Although the research area of document retrieval has several decades of history, it is still not trivial to retrieve documents relevant to a user's need. Two major problems uncovered through the research activities are as follows:

Multi-topic documents. If a document is beyond the length of an abstract, it often contains several topics. Even though one of them is relevant to the user's need, the rest are not necessarily relevant. As a result, these irrelevant parts severely disturb the retrieval of documents.

Short queries. It is common that a user's information need is fed to a system as a set of query terms. However, it is not an easy task for a user to transform the need into query terms. From the analysis of Web search logs, for example, it is well known that typical users issue quite short queries consisting of several terms. Such queries are too poor to retrieve documents appropriately.

In conventional document retrieval, the retrieval of multi-topic documents is a hard task since there is no way to avoid the influence of irrelevant parts of documents. In order to tackle this problem, some researchers have proposed a different way of retrieval called passage-based document retrieval [3,4,5]. In passage-based document retrieval, documents are retrieved based only on fractions (passages) of documents in order not to be disturbed by the irrelevant parts. It has been shown in the literature that passage-based document retrieval outperforms conventional document retrieval in processing long documents [5].
Handling passages as units of retrieval is also advantageous for the application to text mining, since passages give a clue for extracting the relevant parts of documents. In this paper, we experimentally validate that, with respect to the second problem (short queries), passage-based document retrieval is also superior to conventional document retrieval. As the passage-based method we use one based on "density distributions" [6], which segments documents into passages dynamically in response to a query. As conventional methods, we employ the following three [7,8]: the vector space model, pseudo-feedback, and latent semantic indexing.

2 Conventional Document Retrieval

Let us begin with an overview of conventional document retrieval methods. The task of document retrieval is to retrieve the documents relevant to a given query from a fixed set of documents, or document database. Documents and queries are commonly represented using a set of index terms (simply called terms from now on), ignoring the positions of the terms in documents and queries. Terms are determined based on the words of the documents in the database. In the following, $t_i$ $(1 \leq i \leq m)$ and $d_j$ $(1 \leq j \leq n)$ represent a term and a document in the database, respectively, where $m$ is the number of terms and $n$ is the number of documents.

2.1 Vector Space Model

The vector space model (VSM) [7,8] is the simplest retrieval model. In the VSM, a document $d_j$ is represented as an $m$-dimensional vector

    d_j = (w_{1j}, \ldots, w_{mj})^T ,                                 (1)

where $T$ indicates the transpose and $w_{ij}$ is the weight of term $t_i$ in document $d_j$. A query $q$ is likewise represented as

    q = (w_{1q}, \ldots, w_{mq})^T ,                                   (2)

where $w_{iq}$ is the weight of term $t_i$ in the query $q$. A variety of weighting schemes have been proposed; in this paper we employ the standard "tf-idf" scheme defined by

    w_{ij} = tf_{ij} \cdot idf_i ,                                     (3)

where $tf_{ij}$ is a weight calculated from the term frequency $f_{ij}$ (the number of occurrences of term $t_i$ in document $d_j$), and $idf_i$ is a weight calculated from the inverse of the document frequency $n_i$ (the number of documents that contain term $t_i$). In computing $tf_{ij}$ and $idf_i$, the raw frequency is usually dampened by a function. We use $tf_{ij} = f_{ij}$ and $idf_i = \log(n / n_i)$, where $n$ is the total number of documents. The weight $w_{iq}$ is defined analogously as $w_{iq} = f_{iq}$, where $f_{iq}$ is the frequency of term $t_i$ in the query $q$.

The result of retrieval is a list of documents ranked according to their similarity to the query. The similarity $sim(d_j, q)$ between a document $d_j$ and a query $q$ is measured by the cosine of the angle between $d_j$ and $q$:

    sim(d_j, q) = \frac{d_j^T q}{\|d_j\| \, \|q\|} ,                   (4)

where $\|\cdot\|$ is the Euclidean norm of a vector.
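The tf-idf weighting and cosine ranking of Eqs. (1)–(4) can be sketched as follows; the toy corpus, the pre-tokenized input and all function names are our own illustrative assumptions, not part of the original experiments.

```python
# Minimal sketch of the VSM with tf-idf weighting (Eqs. 1-4),
# on sparse {term: weight} vectors.
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists. Returns (vectors, idf) where each
    vector maps term -> w_ij = f_ij * log(n / n_i)."""
    n = len(docs)
    df = Counter()                      # n_i: document frequency of each term
    for d in docs:
        df.update(set(d))
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: f * idf[t] for t, f in Counter(d).items()} for d in docs], idf

def cosine(u, v):
    """Cosine of the angle between two sparse vectors (Eq. 4)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank(docs, query):
    """Rank documents by cosine similarity; w_iq = f_iq (raw frequency)."""
    vecs, _ = tfidf_vectors(docs)
    q = dict(Counter(query))
    return sorted(((cosine(d, q), j) for j, d in enumerate(vecs)), reverse=True)
```

For instance, `rank([["cat", "cat", "dog"], ["dog", "bird"], ["cat", "fish"]], ["cat"])` places the first document at the top, since it contains the query term most densely.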

2.2 Pseudo-Feedback

A problem of the VSM is that a query is often too short to rank documents appropriately. To cope with this, it has been proposed to enrich the original query by expanding it with terms from documents. A method called "pseudo-feedback" [8] is a known way to obtain the terms for expansion. In this method, documents are first ranked with the original query; then the highly ranked documents are assumed to be relevant and their terms are incorporated into the query; finally, documents are ranked again with the expanded query.

158

K. Kise et al.

In this paper, we employ a simple variant of pseudo-feedback. Let $E$ be the set of document vectors used for expansion, given by

    E = \left\{ d_j^+ \,\middle|\, \frac{sim(d_j^+, q)}{\max_i sim(d_i, q)} \geq \tau \right\} ,     (5)

where $q$ is the original query vector and $\tau$ is a threshold on the normalized similarity. The sum $d_s$ of the document vectors in $E$,

    d_s = \sum_{d_j^+ \in E} d_j^+ ,                                   (6)

can be considered enriched information about the original query. The expanded query vector $q'$ is then obtained by

    q' = \frac{q}{\|q\|} + \lambda \frac{d_s}{\|d_s\|} ,               (7)

where $\lambda$ is a parameter controlling the weight of the newly incorporated component. Finally, documents are ranked again according to their similarity $sim(d_j, q')$ to the expanded query.
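The expansion of Eqs. (5)–(7) can be sketched on dense NumPy term vectors as follows; the document matrix, the default values of τ and λ, and all names are illustrative assumptions.

```python
# Sketch of pseudo-feedback query expansion (Eqs. 5-7).
import numpy as np

def cosine(u, v):
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return float(u @ v / (nu * nv)) if nu and nv else 0.0

def expand_query(D, q, tau=0.8, lam=1.0):
    """D: (n_docs, m) array of document vectors; q: (m,) query vector.
    Returns the expanded query q' of Eq. 7."""
    sims = np.array([cosine(d, q) for d in D])
    E = D[sims >= tau * sims.max()]     # Eq. 5: sim / max sim >= tau
    ds = E.sum(axis=0)                  # Eq. 6: summed evidence vectors
    return q / np.linalg.norm(q) + lam * ds / np.linalg.norm(ds)  # Eq. 7

def rerank(D, q, tau=0.8, lam=1.0):
    """Final ranking: document indices sorted by similarity to q'."""
    q2 = expand_query(D, q, tau, lam)
    return np.argsort([-cosine(d, q2) for d in D])
```

Note that the top-ranked document always satisfies the condition of Eq. (5), so $E$ is never empty.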

2.3 Latent Semantic Indexing

Latent semantic indexing (LSI) [7,8] is another well-known way to improve the VSM. Let $D$ be the term-by-document matrix defined by

    D = (\hat{d}_1, \ldots, \hat{d}_n) ,                               (8)

where $\hat{d}_j = d_j / \|d_j\|$. By the singular value decomposition, $D$ is decomposed into the product of three matrices,

    D = U S V^T ,                                                      (9)

where $U$ and $V$ are matrices of size $m \times r$ and $n \times r$ $(r = \mathrm{rank}(D))$, respectively, and $S = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)$ is a diagonal matrix of singular values $\sigma_i$ $(\sigma_i \geq \sigma_j$ if $i \leq j)$. Each row vector of $U$ (resp. $V$) is an $r$-dimensional vector representing a term (resp. a document). By keeping only the $k$ $(< r)$ largest singular values in $S$, along with the corresponding columns of $U$ and $V$, $D$ is approximated by

    D_k = U_k S_k V_k^T ,                                              (10)

where $U_k$, $S_k$ and $V_k$ are matrices of size $m \times k$, $k \times k$ and $n \times k$, respectively. This approximation allows us to uncover "latent" semantic relations among terms as well as documents.


The similarity between a document and a query is measured as follows. Let $v_j = (v_{j1}, \ldots, v_{jk})$ be a row vector of $V_k = (v_{ji})$ $(1 \leq j \leq n$, $1 \leq i \leq k)$. In the $k$-dimensional (approximated) space, a document $d_j$ is represented as

    d_j^* = S_k v_j^T .                                                (11)

The original query is also represented in the $k$-dimensional space, as

    q^* = U_k^T q .                                                    (12)

The similarity is then obtained as $sim(d_j^*, q^*)$.
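The LSI computation of Eqs. (8)–(12) can be sketched with NumPy's SVD; the toy matrix and function names are illustrative assumptions.

```python
# Sketch of LSI retrieval (Eqs. 8-12) using numpy.linalg.svd, which
# returns singular values in descending order.
import numpy as np

def lsi(D, k):
    """D: (m, n) term-by-document matrix with unit-norm columns.
    Returns (Uk, Sk, Vk) keeping the k largest singular values."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return U[:, :k], np.diag(s[:k]), Vt[:k].T   # Uk: m x k, Sk: k x k, Vk: n x k

def lsi_similarities(D, q, k):
    """Cosine similarity sim(d_j*, q*) for every document j."""
    Uk, Sk, Vk = lsi(D, k)
    q_star = Uk.T @ q                 # Eq. 12: query in the k-dim space
    sims = []
    for j in range(D.shape[1]):
        d_star = Sk @ Vk[j]           # Eq. 11: document j in the k-dim space
        denom = np.linalg.norm(d_star) * np.linalg.norm(q_star)
        sims.append(float(d_star @ q_star) / denom if denom else 0.0)
    return sims
```

With $k = r$ the similarities coincide with the cosine similarities of the original space; smaller $k$ merges terms that co-occur across documents.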

3 Passage-Based Document Retrieval

Passages used in passage-based methods can be classified into three types: discourse, semantic and window passages [3]. Discourse passages are defined by discourse units such as sentences and paragraphs. Semantic passages are obtained by segmenting the text at the points where its subject changes. Window passages are determined based on the number of terms.

In this paper, we employ a passage-based method with window passages called "density distributions" (DD). The density distribution was first introduced to locate the descriptions of a word [9] and was applied to passage retrieval by some of the authors [6]. The fundamental idea of DD is that the parts of a document that densely contain the terms of a query are relevant to it. Figure 1 shows an example of a density distribution. The horizontal axis indicates the positions of terms in a document. The distribution of query terms in the document is shown as spikes, whose height indicates the weight of a term. The density distribution is obtained by smoothing the spikes with a window function.

The details are as follows. Let $a_j(l)$ $(1 \leq l \leq L_j)$ be the term at position $l$ in a document $d_j$, where $L_j$ is the length of $d_j$ measured in terms. The weighted distribution $b_j(l)$ of the terms of a query $q$ is defined by

    b_j(l) = w_{iq} \cdot idf_i   if a_j(l) = t_i for a term t_i of q,
             0                    otherwise.                           (13)

Smoothing $b_j(l)$ yields the density distribution $dd_j(l)$ of a document $d_j$:

    dd_j(l) = \sum_{x = -W/2}^{W/2} f(x) \, b_j(l - x) ,               (14)

where $f(x)$ is a window function with window size $W$. We employ the Hanning window function defined by

    f(x) = \frac{1}{2} \left( 1 + \cos \frac{2 \pi x}{W} \right)   if |x| \leq W/2,
           0                                                       otherwise,        (15)

[Fig. 1. Density distribution: the weighted distribution of query terms in a document (spikes) and the density distribution obtained by smoothing it; horizontal axis: term position, vertical axis: density / weight.]

[Fig. 2. Hanning window function f(x), plotted for x from −W/2 to W/2.]

whose shape is illustrated in Fig. 2.

In order to use DD as a passage-based document retrieval method, a document score is calculated from the density distribution. The score of $d_j$ for a query $q$ is the maximum value of its density distribution:

    score(d_j, q) = \max_l \, dd_j(l) .                                (16)

This score is used to rank documents with respect to a query.
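The DD scoring of Eqs. (13)–(16) can be sketched as follows; the tokenized documents and the pre-combined query weights $w_{iq} \cdot idf_i$ are toy assumptions.

```python
# Sketch of density-distribution (DD) scoring (Eqs. 13-16).
import math

def hanning(x, W):
    """Hanning window function (Eq. 15)."""
    return 0.5 * (1.0 + math.cos(2.0 * math.pi * x / W)) if abs(x) <= W / 2 else 0.0

def density_distribution(doc_terms, query_weights, W):
    """doc_terms: list of terms a_j(l); query_weights: {term: w_iq * idf_i}.
    Returns dd_j(l) for every position l (Eqs. 13-14)."""
    L = len(doc_terms)
    b = [query_weights.get(t, 0.0) for t in doc_terms]   # Eq. 13
    half = W // 2
    return [sum(hanning(x, W) * b[l - x]                 # Eq. 14
                for x in range(-half, half + 1)
                if 0 <= l - x < L)
            for l in range(L)]

def score(doc_terms, query_weights, W):
    """Document score: the peak of the density distribution (Eq. 16)."""
    dd = density_distribution(doc_terms, query_weights, W)
    return max(dd) if dd else 0.0
```

A document in which the query terms occur close together peaks higher than one in which the same terms are spread apart, which is exactly the proximity effect discussed in Sect. 4.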

4 Experimental Comparison

In this section, we show the results of the experimental comparison. After describing the test collections employed, we describe our evaluation methodology; the results of the experiments are then presented and discussed.


Table 1. Statistics about documents in the test collections.

                 MED     CRAN    CR        FR
size [MB]        1.1     1.6     235       209
no. of doc.      1,033   1,398   27,922    19,789
no. of terms†    4,284   2,550   37,769    43,760
doc. len.‡
  min.           20      23      22        1
  max.           658     662     629,028   315,101
  mean           155     162     1,455     1,792
  median         139     142     324       550
† : counted in words after stemming and eliminating stopwords
‡ : counted in words before stemming and eliminating stopwords

Table 2. Statistics about queries in the test collections.

                 MED   CRAN   CR                     FR
                              title  desc   narr    title  desc   narr
no. of queries    30    225          34                     85
query len.†
  min.             2      3     2      4     12       1      3     12
  max.            33     21     7     19     79       9     22     93
  mean          10.8    9.2   3.0    7.7   28.7     3.5   10.4   37.0
  median         9.0    9.0   3.0    6.5   24.5     3.0   10.0   34.0
† : counted in words after stemming and eliminating stopwords

4.1 Test Collections

We made a comparison using four test collections: MED (medicine), CRAN (aeronautics), FR (Federal Register) and CR (Congressional Record). The collections MED and CRAN are available at [12]; FR and CR are contained in TREC disks No. 2 and No. 4, respectively [13]. All collections are provided with queries and their ground truth (a list of the documents relevant to each query). For these collections, the terms used for document representation were obtained by stemming and eliminating stopwords¹.

Tables 1 and 2 show some statistics about the collections. In Table 1, an important difference is the length of the documents: MED and CRAN consist of abstracts, while FR and CR contain much longer documents. In Table 2, a point to note is the difference in query length. In the TREC collections, each information need is described by query types of different length. In order to investigate the influence of query length, we employed three types: "title" (the shortest representation), "desc" (description; medium length) and "narr" (narrative; the longest).

¹ Words which convey no meaning, such as "the".

4.2 Evaluation

Average Precision. A common way to evaluate the performance of retrieval methods is to compute the (interpolated) precision at several recall levels, which yields a number of recall/precision points displayed in recall-precision graphs [7]. It is often convenient, however, to have a single value that summarizes performance. The (non-interpolated) average precision over all relevant documents [7,12] is such a measure. It is defined as follows. As described in Sect. 2, the result of retrieval is a ranked list of documents. Let r(i) be the rank of the i-th relevant document, counted from the top of the list. The precision for this document is i/r(i). The precision values for all documents relevant to a query are averaged to obtain a single value for the query, and these per-query values are in turn averaged over all queries.

For example, consider two queries q1 and q2 with two and three relevant documents, respectively. Suppose the ranks of the relevant documents for q1 are 2 and 5, and those for q2 are 1, 3 and 10. The average precision for q1 is (1/2 + 2/5)/2 = 0.45 and for q2 it is (1/1 + 2/3 + 3/10)/3 = 0.66; the average precision over both queries is then (0.45 + 0.66)/2 = 0.56.

Statistical Test. The next step of the evaluation is to compare the average-precision values obtained by different methods. An important question is whether a difference in average precision is really meaningful or arose just by chance. To make this distinction, it is necessary to apply a statistical test. Several statistical tests have been applied to information retrieval tasks [10,11]. In this paper, we use the test called the "macro t-test" [11] (called the paired t-test in [10]). The following summary of the test is based on [10].
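The average-precision computation described above can be sketched as follows, checked against the worked example of q1 and q2; the function names are our own.

```python
# Non-interpolated average precision over all relevant documents.
def average_precision(relevant_ranks):
    """relevant_ranks: ranks r(i) of the relevant documents.
    The precision for the i-th relevant document is i / r(i)."""
    ranks = sorted(relevant_ranks)
    return sum(i / r for i, r in enumerate(ranks, start=1)) / len(ranks)

def mean_average_precision(per_query_ranks):
    """Average of the per-query values over all queries."""
    aps = [average_precision(r) for r in per_query_ranks]
    return sum(aps) / len(aps)
```

Here `average_precision([2, 5])` gives 0.45 and `average_precision([1, 3, 10])` gives 0.655…; the value 0.56 in the text results from rounding the latter to 0.66 before averaging.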
Let $a_i$ and $b_i$ be the scores (e.g., the average precision) of retrieval methods A and B for a query $i$, and define $d_i = a_i - b_i$. The test can be applied under the assumptions that the model is additive, i.e., $d_i = \mu + \varepsilon_i$, where $\mu$ is the population mean and $\varepsilon_i$ is an error, and that the errors are normally distributed. The null hypothesis is $\mu = 0$ (A performs equivalently to B in terms of average precision); the alternative hypothesis is $\mu > 0$ (A performs better than B). It is known that the Student's t-statistic

    t = \frac{\bar{d}}{\sqrt{s^2 / n}}                                 (17)

follows the t-distribution with $n - 1$ degrees of freedom, where $n$ is the number of samples (queries) and $\bar{d}$ and $s^2$ are the sample mean and variance:

    \bar{d} = \frac{1}{n} \sum_{i=1}^{n} d_i ,                         (18)

    s^2 = \frac{1}{n - 1} \sum_{i=1}^{n} (d_i - \bar{d})^2 .           (19)

By looking up the value of t in the t-distribution, we obtain the P-value, i.e., the probability of observing the sample results $d_i$ $(1 \leq i \leq n)$ under the assumption that the null hypothesis is true. The P-value is compared to a predetermined significance level $\alpha$ in order to decide whether the null hypothesis should be rejected. As significance levels we use 0.05 and 0.01.

Table 3. Values of parameters.

                          MED, CRAN                CR, FR
PF   weight λ             1.0, 2.0                 1.0, 2.0
     threshold τ          0.71 – 0.99, step 0.02   0.71 – 0.99, step 0.02
LSI  dimension k          60 – 500, step 20        50 – 500, step 50
DD   window size W        20 – 200, step 20        20 – 100, step 10, and 150, 200, 300

Table 4. Best parameter values.

              MED    CRAN   CR                    FR
                            title  desc   narr    title  desc   narr
PF   λ        2.0    1.0    1.0    1.0    1.0     1.0    2.0    1.0
     τ        0.71   0.85   0.85   0.85   0.93    0.83   0.71   0.71
LSI  k        60     260    300    500    400     350    500    500
DD   W        80     100    50     90     200     90     40     40
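The macro t-test of Eqs. (17)–(19) can be sketched as follows; the sample scores are toy values, and in practice the P-value would be taken from a t-distribution table or from `scipy.stats.t.sf(t, n - 1)`.

```python
# Sketch of the macro (paired) t-test on per-query scores (Eqs. 17-19).
import math

def macro_t(a, b):
    """a, b: per-query scores of methods A and B. Returns the
    t statistic of Eq. 17, with n - 1 degrees of freedom."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n                                  # Eq. 18
    var = sum((x - mean) ** 2 for x in d) / (n - 1)    # Eq. 19
    return mean / math.sqrt(var / n)
```

Swapping the two methods negates the statistic, so the one-sided test "A better than B" and "B better than A" use the same magnitude of t.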

4.3 Results for the Whole Collections

The methods PF (pseudo-feedback), LSI (latent semantic indexing) and DD (density distributions) were applied over the ranges of parameter values shown in Table 3. As an example, Figure 3 illustrates the variation of the average precision when varying the threshold τ in PF (λ = 1.0; left) and the window size W in DD (right). The lines in the graphs were obtained from the experiments on the collections CR and FR; since these collections have three query sets (title, desc, narr), six lines are shown in each graph. In the graph for PF, the average precision fluctuated slowly but irregularly with the threshold τ. The average precision of DD, on the other hand, partly changed rapidly at smaller window sizes and tended to converge as the window size became larger. Since the better performance of DD was often obtained with smaller window sizes, DD appears to be more sensitive to the parameter W than PF is to τ. Although developing a method for automatically adjusting the window size is an important topic, it is beyond the scope of this paper; we simply selected the best parameter values, which are shown in Table 4. Table 5 shows the average precision obtained with the best parameter values. In Table 5, the best and the second-best values of average precision

[Fig. 3. Variations in the average precision: threshold τ in PF (λ = 1.0, left) and window size W in DD (right), for CR and FR with the query sets title, desc and narr.]

Table 5. Average precision over all relevant documents.

                        CR                          FR
      MED      CRAN     title    desc     narr     title    desc     narr
VSM   0.530    0.401    0.127    0.172    0.172    0.098    0.094    0.120
PF    0.640    0.450    0.169    0.195    0.184    0.115    0.123    0.119
     (+20.8%) (+12.2%) (+33.1%) (+13.4%) (+7.0%)  (+17.3%) (+30.9%) (−0.8%)
LSI   0.685    0.444    0.101    0.128    0.134    0.043    0.051    0.075
     (+29.2%) (+10.7%) (−20.5%) (−25.6%) (−22.1%) (−56.1%) (−45.7%) (−37.5%)
DD    0.507    0.370    0.165    0.159    0.151    0.177    0.207    0.237
     (−4.3%)  (−7.7%)  (+29.9%) (−7.6%)  (−12.2%) (+80.6%) (+120%)  (+97.5%)
( ) : difference to the VSM

among the methods are indicated in bold and italic fonts, respectively. The parentheses give the ratio of the difference to the VSM: letting x and y be the average precision of the VSM and of the method under comparison, respectively, the ratio is (y − x)/x, so a positive value indicates gain and a negative value loss.

The results of the macro t-test for all pairs of methods are shown in Table 6. The meaning of the symbols "≫", ">" and "∼" (and their mirror images) is summarized at the bottom of the table. For example, the symbol "<" was obtained for DD compared to the VSM on the MED collection. This indicates that, at the significance level α = 0.05, the null hypothesis "DD performs equivalently to the VSM" is rejected and the alternative hypothesis "DD performs worse than the VSM" is accepted; at α = 0.01, however, the null hypothesis cannot be rejected. Roughly speaking, "A ≫ (≪) B" and "A > (<) B" indicate that A performs better (worse) than B at significance levels 0.01 and 0.05, respectively, while "A ∼ B" indicates that no significant difference was found.

[Table 6. Results of the macro t-test, for the method pairs DD–VSM, DD–PF, DD–LSI, PF–VSM, PF–LSI and LSI–VSM on MED, CRAN, and CR/FR with the query sets title, desc and narr; most entries were not recoverable from the extracted text. Legend: ≪, ≫ : P-value ≤ 0.01; <, > : 0.01 < P-value ≤ 0.05; ∼ : 0.05 < P-value.]

The main findings are as follows:

– For the collections of short documents (MED and CRAN), the methods PF and LSI outperformed the VSM and DD.

– For the collection CR, which includes long documents, the methods mostly performed equivalently. The exception was PF: as shown in Table 6, PF was better than the VSM and LSI for the shortest queries (title), and better than DD for the middle-length queries (desc). Note that methods are found equivalent by the statistical test even where the ratios of difference in average precision are bigger than those for MED and CRAN. For example, PF outperformed the VSM for MED and CRAN with ratios of +20.8% and +12.2%, while DD was judged equivalent to the VSM for CR despite a ratio of +29.9% (cf. Table 5). This is because the statistical test takes into account not only the average precision but also its variance and the number of queries.

– For the collection FR, which also includes long documents, DD clearly outperformed the other methods; the advantage of PF and LSI on the collections of short documents did not hold here.

From these results, the influence of the length of documents and queries on the performance of the methods remains unclear. Although DD proved inferior to PF and LSI for short documents, it outperformed the other methods on only one of the two collections containing long documents. This could be due to the nature of the collections CR and FR: although they include much longer documents than MED and CRAN, they also include many short documents, as shown by the gap between the mean and the median in Table 1.

4.4 Results for Partitioned Collections

In order to clarify the relation between performance and the length of documents and queries, we partitioned each of the collections CR and FR into three smaller collections as follows. The documents in each collection were first split

Table 7. Statistics about the partitioned collections.

                 CR                                    FR
                 relevant doc.             irrel.      relevant doc.             irrel.
                 short   middle  long      doc.        short   middle  long      doc.
no. of doc.      251     251     252       27,168      148     148     148       19,345
doc. len.
  min.           67      604     3,055     22          114     1,554   6,037     1
  max.           601     3,029   629,028   385,065     1,512   5,994   315,101   124,353
  mean           334     1,315   33,550    1,169       859     3,075   35,982    1,528
  median         303     1,078   11,236    318         835     2,886   17,037    536
no. of queries   27      30      27        —           43      44      63        —

into two disjoint sets: documents relevant to at least one query, and documents irrelevant to all queries. The set of relevant documents was further divided into three disjoint subsets of almost equal size according to document length: short, middle-length and long relevant documents. By combining each subset with the set of irrelevant documents, we prepared three partitioned collections called "short", "middle" and "long". As the queries for each partitioned collection, we took those queries that are relevant to at least one document in it. Since some documents are relevant to more than one query, the numbers of queries do not add up to the numbers in the original collections (cf. Table 2). Statistics about the partitioned collections are shown in Table 7.

Using the best parameters shown in Table 4, we computed the average precision for the partitioned collections. Figure 4 illustrates the results: each graph represents one combination of a set of partitioned collections and a query length, and the horizontal axes indicate the partitioned collections. These graphs show that the conventional methods (VSM, PF, LSI) performed worse as the documents became longer. DD, on the other hand, yielded almost equivalent results for all document lengths on the CR collection, and for the FR collection even better results as the documents became longer. Table 8 shows the results of the statistical test for the partitioned collections: DD yielded significantly better results in most of the cases on the "long" partitions. These results confirm that passage-based document retrieval is better suited to longer documents, as already reported in the literature [5].

Let us now turn to the influence of query length. Figure 5 shows the same results as Fig. 4, but arranged differently.
Here, each graph corresponds to one partitioned collection and the horizontal axes represent the query lengths. For the "short" partitioned collections, no clear relation between the effectiveness of the methods and the query length could be found. On the other hand, for the "middle" and "long" partitioned collections with the shortest queries (title), DD was always the best among the methods. For the "middle" CR collection with longer queries, DD performed worse than the other methods. For the "long" CR collection and the "middle" FR collection, the advantage of DD shrank as the query length became longer. These tendencies can also be found in Table 8. For the "long" FR collection, the difference in average precision between DD and the best of the other methods was about the same for all query lengths; however, there were disparities in the P-values: the P-value obtained with the shortest queries (title) was about 10 and 100 times smaller than those obtained with the middle-length (desc) and the longest (narr) queries, respectively.

[Fig. 4. Average precision for the partitioned collections of CR and FR (one graph per collection and query type; horizontal axes: document length — short, middle, long; methods: VSM, PF, LSI, DD).]

[Table 8. Results of the macro t-test for the partitioned collections ("short", "middle", "long"), for the method pairs DD–VSM, DD–PF and DD–LSI on CR and FR with the query sets title, desc and narr; most entries were not recoverable from the extracted text. Legend as in Table 6.]

[Fig. 5. Average precision for the partitioned collections (one graph per partitioned collection; horizontal axes: query length — title, desc, narr; methods: VSM, PF, LSI, DD).]

From the results on the partitioned collections, we conclude that passage-based document retrieval outperforms the conventional methods when relatively lengthy documents are retrieved with short queries. A possible explanation is the following. If lengthy documents are retrieved with short queries, it becomes more essential to take into account the proximity of the query terms, as only the passage-based method does. In other words, the passage-based method can distinguish a few query terms that occur in the same context (located close to each other in a document) from those occurring in different contexts (far away from each other).

5 Conclusion

We have experimentally evaluated the effect of the length of documents and queries on document retrieval methods. A passage-based method, which ranks documents based on dynamically segmented passages, has been compared with three conventional document retrieval methods. The results for a variety of document collections show that the passage-based method is superior to the conventional methods for longer documents with shorter queries. This feature of


passage-based retrieval is essential if we consider document retrieval as a tool for text mining based on a user's query, since (1) users tend to issue short queries, and (2) the available documents are often longer than abstracts. In order to use passage-based document retrieval as such a tool, however, the following issues should be considered further. First, the window size appropriate for analyzing the documents should be determined automatically. Second, passage-based document retrieval should be made to work on short documents as well as the best conventional method does. These issues will be the subject of our future research.

Acknowledgment

This work was supported by the German Ministry for Education and Research, bmb+f (Grant: 01 IN 902 B8).

References

1. M. A. Hearst, Untangling Text Data Mining, in Proc. ACL '99: the 37th Annual Meeting of the Association for Computational Linguistics, 1999.
2. M. Grobelnik, D. Mladenic and N. Milic-Frayling, Text Mining as Integration of Several Related Research Areas: Report on KDD'2000 Workshop on Text Mining, http://www.cs.cmu.edu/~dunja/WshKDD2000.html.
3. J. P. Callan, Passage-Level Evidence in Document Retrieval, in Proc. SIGIR '94, pp. 302–310, 1994.
4. G. Salton, A. Singhal and M. Mitra, Automatic Text Decomposition Using Text Segments and Text Themes, in Proc. Hypertext '96, pp. 53–65, 1996.
5. O. de Kretser and A. Moffat, Effective Document Presentation with a Locality-Based Similarity Heuristic, in Proc. SIGIR '99, pp. 113–120, 1999.
6. K. Kise, H. Mizuno, M. Yamaguchi and K. Matsumoto, On the Use of Density Distribution of Keywords for Automated Generation of Hypertext Links from Arbitrary Parts of Documents, in Proc. ICDAR '99, pp. 301–304, 1999.
7. R. Baeza-Yates and B. Ribeiro-Neto, Modern Information Retrieval, Addison-Wesley, 1999.
8. C. D. Manning and H. Schütze, Foundations of Statistical Natural Language Processing, MIT Press, 1999.
9. S. Kurohashi, N. Shiraki and M. Nagao, A Method for Detecting Important Descriptions of a Word Based on Its Density Distribution in Text, Trans. Information Processing Society of Japan, Vol. 38, No. 4, pp. 845–853, 1997 [in Japanese].
10. D. Hull, Using Statistical Testing in the Evaluation of Retrieval Experiments, in Proc. SIGIR '93, pp. 329–338, 1993.
11. Y. Yang and X. Liu, A Re-Examination of Text Categorization Methods, in Proc. SIGIR '99, pp. 42–49, 1999.
12. ftp://ftp.cs.cornell.edu/pub/smart/
13. http://trec.nist.gov/

Constructing Approximate Informative Basis of Association Rules

Kouta Kanda, Makoto Haraguchi, and Yoshiaki Okubo

Division of Electronics and Information Engineering, Hokkaido University
N-13 W-8, Sapporo 060-8628, JAPAN
{makoto, yoshiaki}@db-ei.eng.hokudai.ac.jp

Abstract. In the study of discovering association rules, it is regarded as an important task to reduce the number of generated rules without losing any information about the significant rules. From this point of view, Bastide et al. have proposed to generate only non-redundant rules [2]. Although taking redundancy into account can reduce the number of generated rules drastically, many rules are often still generated. In this paper, we propose a method for reducing the number of generated rules further by extending the original framework. For this purpose, we introduce the notion of an approximate generator and consider an approximate redundancy. Under this new notion of redundancy, many rules that are non-redundant in the original sense are judged redundant and made invisible to users, which reduces the number of generated rules. Furthermore, any redundant rule can easily be reconstructed from one of our non-redundant rules, together with its approximate support and confidence; the maximum errors of these values can be evaluated in terms of a user-defined parameter. We present an algorithm for constructing a set of non-redundant rules called an approximate informative basis. The completeness and weak soundness of the basis are shown theoretically: any significant rule can be reconstructed from the basis, and any rule reconstructed from the basis is (approximately) significant. Experimental results show the effectiveness of our method as well.

1 Introduction

The discovery of association rules is an important task in the research area of data mining. Its main purpose is to identify relationships among items in a given large database. This kind of problem was first introduced by Agrawal et al. [1]. According to their statement, the problem can be divided into two sub-problems:

Finding frequent itemsets: Given a transaction database D, find all frequent itemsets¹ in D.

Generating confident association rules: Generate all confident association rules based on the frequent itemsets.

¹ An itemset is a set of items appearing in D.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 141–154, 2001.
© Springer-Verlag Berlin Heidelberg 2001


In order to solve the former problem, we would in principle have to search an itemset lattice consisting of $2^m$ itemsets, given $m$ possible items. The latter problem, on the other hand, can be solved in a straightforward manner once we have all frequent itemsets. The former is therefore considered primary, and the latter secondary, in an efficient discovery of association rules. In fact, many studies on association rule discovery have concentrated on the efficient computation of the frequent itemsets, and many algorithms for this task have been proposed [1,4,5]. Thus, as many researchers have actually investigated, finding all frequent itemsets is one of the important subjects in the discovery of association rules.

However, there is still another significant issue to be addressed. It concerns the number of rules generated from the obtained frequent itemsets. In general, a large number of rules are generated and presented to the user. Although the generated rules are guaranteed to meet the user's requirements for support and confidence², they often include many rules that are in fact not so interesting to the user. The user therefore has to check each presented rule carefully in order to find the actually interesting ones, which is quite hard given the large number of presented rules; in some cases, unfortunately, several interesting rules might even be missed. It is thus helpful to reduce the number of generated (and presented) rules without losing any information about the possible ones. The purpose of this paper is to propose a method for such a reduction. By introducing a notion of redundancy of association rules, Bastide et al. have proposed to identify only the set of non-redundant rules, called an informative basis, and to present this basis to the user.
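The first sub-problem can be made concrete by a naive enumerator over the itemset lattice; real systems use Apriori-style candidate generation and pruning [1], and the toy transactions here are purely illustrative.

```python
# Naive frequent-itemset miner over the 2^m itemset lattice.
from itertools import combinations

def frequent_itemsets(transactions, minsup):
    """transactions: list of item sets. Returns {itemset: support} for
    every itemset whose support (fraction of transactions containing
    it) is at least minsup."""
    items = sorted({i for t in transactions for i in t})
    n = len(transactions)
    result = {}
    for k in range(1, len(items) + 1):
        found = False
        for cand in combinations(items, k):
            s = sum(1 for t in transactions if set(cand) <= t) / n
            if s >= minsup:
                result[frozenset(cand)] = s
                found = True
        # anti-monotonicity: if no k-itemset is frequent, no larger one can be
        if not found:
            break
    return result
```

The early exit relies on the fact that the support of an itemset never exceeds that of any of its subsets.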
In a word, a non-redundant rule can be viewed as a representative of a set of rules, each of which has exactly the same support and confidence and can easily be reconstructed from the representative. For example, assume we have the following association rules: r1 = i1 → i2 ∧ i3 ∧ i4, r2 = i1 ∧ i2 → i3 ∧ i4 and r3 = i1 ∧ i2 ∧ i3 → i4, where their supports and confidences are exactly identical. Given r1, the others can be reconstructed from it by a quite simple operation, and their precise supports and confidences can be obtained immediately. In this sense, r2 and r3 are considered redundant, and r1 non-redundant³. Identifying the non-redundant rules is thus sufficient to obtain all possible ones. As mentioned above, since a non-redundant rule is a representative of a set of rules, the number of non-redundant rules is expected to be much smaller than that of the possible rules; by considering only non-redundant rules, we can therefore drastically reduce the number of rules to be generated.

From the authors' viewpoint, however, there often still exist many non-redundant rules, and checking them can be a costly task for users. Although we can easily reconstruct any redundant rule from a non-redundant one with its

2. If a rule meets the requirements, we say that the rule is significant.
3. In a word, such a non-redundant rule is characterized as one with the minimal antecedent and the maximal consequent.

Constructing Approximate Informative Basis of Association Rules

K. Kanda, M. Haraguchi, and Y. Okubo


precise support and confidence according to the original framework, the authors would like to claim that, from a practical point of view, it is worth reducing the number of output rules further even if the precise support and confidence of a redundant rule can no longer be derived. In this paper, we propose a method for such a reduction by extending the original approach. For this purpose, the original notion of redundancy is extended according to the claim above. Since the support and confidence of a redundant rule in our sense can only be derived approximately from a non-redundant one, these approximate values might not satisfy users who require highly precise values. In our framework, therefore, the maximum error can be adjusted flexibly by giving an adequate value of a user-defined parameter ε (0 ≤ ε < 1). As ε approaches 1, the maximum error increases, but the number of non-redundant rules decreases. Conversely, as ε approaches 0, the maximum error approaches 0, but the number of non-redundant rules increases. Given a user-defined parameter ε, in order to describe our non-redundancy, we define a set of rules w.r.t. ε called an approximate informative basis (AIB(ε)). It will be proved that every rule r in AIB(ε) has the following property: any rule r′ reconstructable from r has approximately the same support and confidence as r, where the maximum errors of these values are evaluated by formulas determined by ε. Thus a rule reconstructable from r is redundant. For the same reason, such a rule r in AIB(ε) is non-redundant and can approximately represent any rule reconstructable from it. For any significant rule r, there always exists a corresponding non-redundant rule in AIB(ε) from which r can be reconstructed; no significant rule can be lost once AIB(ε) is computed. The completeness in this sense and the weak-soundness of AIB(ε) are summarized in a theorem.
We present an algorithm for constructing AIB(ε), and the effectiveness of our method is shown by some experimental results. This paper is organized as follows. In the next section, we introduce some terminology used throughout this paper. In Section 3, we briefly explain the original framework by Bastide et al. Section 4 discusses our method for constructing AIB(ε) with an example. Our preliminary experimental results are presented in Section 5. We summarize the paper and give some discussion in the last section; in particular, we briefly describe a new interactive strategy, which we are going to develop, for identifying interesting rules based on the method presented in this paper.

2 Preliminaries

Let I be a finite set of items. An itemset l is a non-empty subset of I. A tuple ⟨id, l⟩ is called a transaction, where id is a transaction identifier and l is an


itemset. A transaction database D is a finite set of transactions. We refer to itemset(id) as the itemset associated with id in a transaction. For a transaction t = ⟨id, l⟩, we say that t contains an itemset l′ if l′ ⊆ l. Given a transaction database D, the support of an itemset l, denoted by sup(l), is defined as the ratio of the number of transactions containing l to the number of all transactions in D. Let minsup be a user-defined threshold for the permissible minimum support. An itemset l is called a frequent itemset if sup(l) ≥ minsup. An association rule r is an implication between two itemsets of the form r = l1 → (l2 \ l1), where l1 and l2 are itemsets such that l1 ⊂ l2. The support of r, denoted by sup(r), is defined as sup(r) = sup(l2). Furthermore, the confidence of r, denoted by conf(r), is defined as conf(r) = sup(l2) / sup(l1). Let minconf be a user-defined threshold for the permissible minimum confidence. An association rule r is said to be significant if sup(r) ≥ minsup and conf(r) ≥ minconf. Given a transaction database D, let ID be the set of transaction identifiers in D. We consider a mapping ψ : 2^I → 2^ID defined as ψ(l) = { id | ⟨id, l′⟩ ∈ D ∧ l ⊆ l′ }. Moreover, we consider a mapping ϕ : 2^ID → 2^I defined as ϕ(ID) = ∩_{id ∈ ID} itemset(id). Based on these mappings, a closure operator γ : 2^I → 2^I is defined as γ(l) = ϕ(ψ(l)); that is, γ computes the maximum itemset shared by all transactions containing l. We say that an itemset l is closed if γ(l) = l. Since γ(γ(l)) = γ(l) holds for any itemset l, γ(l) is a closed itemset. It should be noted that for any itemset l′ such that l ⊆ l′ ⊆ γ(l), γ(l′) = γ(l) and sup(l) = sup(l′) = sup(γ(l)) hold. An itemset l is called an exact generator (E-generator) of γ(l). For a frequent closed itemset f, we refer to the set of E-generators of f as EG(f) and the set of minimal E-generators of f as MEG(f), that is, MEG(f) = { g ∈ EG(f) | ¬∃ g′ ∈ EG(f) such that g′ ⊂ g }.
For a frequent closed itemset f and its minimal E-generator g ∈ MEG(f), a tuple (g, f) is called an EGC-tuple. Given an EGC-tuple (g, f), for any itemset l such that g ⊆ l ⊆ f, sup(g) = sup(l) = sup(f) holds. The set of EGC-tuples w.r.t. D is referred to as EGC(D).
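The mappings ψ and ϕ and the closure operator γ can be computed directly from a small transaction database. The following Python sketch (our own illustration, not the authors' implementation) uses the six-transaction example database that appears later in Figure 2:

```python
# Toy transaction database (the example of Fig. 2); itemsets as frozensets.
DB = {1: frozenset("acd"), 2: frozenset("bce"), 3: frozenset("abce"),
      4: frozenset("be"), 5: frozenset("abcd"), 6: frozenset("bce")}

def psi(l):
    """psi(l): identifiers of the transactions whose itemset contains l."""
    return {tid for tid, items in DB.items() if l <= items}

def phi(ids):
    """phi(ID): the largest itemset shared by all transactions in ID."""
    return frozenset.intersection(*(DB[tid] for tid in ids)) if ids else frozenset()

def gamma(l):
    """Closure operator gamma(l) = phi(psi(l))."""
    return phi(psi(l))

def sup(l):
    """Support of l: fraction of transactions containing l."""
    return len(psi(l)) / len(DB)

# 'a' occurs only together with 'c', so gamma(a) = ac, and a and ac share
# the same support, as stated for any l' with l ⊆ l' ⊆ gamma(l).
assert gamma(frozenset("a")) == frozenset("ac")
assert sup(frozenset("a")) == sup(frozenset("ac")) == 0.5
```

With these helpers, an itemset l is closed exactly when gamma(l) == l; for instance, gamma(frozenset("b")) == frozenset("b") in this database.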

3 Informative Basis of Association Rules

In this section, we briefly introduce a method of reducing the number of generated rules [2]. The key notion of this approach is the redundancy of association rules.

Definition 1. (Redundancy of Association Rules) [2] Let r = l1 → (l2 \ l1) be an association rule. r is called a redundant rule iff there exists an association rule r′ = l1′ → (l2′ \ l1′) such that l1′ ⊆ l1, l2 ⊆ l2′, r′ ≠ r, sup(r′) = sup(r) and conf(r′) = conf(r).

Intuitively speaking, a redundant rule r is a rule which has exactly the same support and confidence as some non-redundant rule r′ and can easily be reconstructed from r′ by a simple operation on itemsets. Therefore, a non-redundant rule can be viewed as a representative of a set of redundant ones. This


implies that extracting only non-redundant rules can be considered sufficient for the discovery of all possible rules. Since the number of non-redundant rules is obviously smaller than that of all rules, we can reduce the number of rules to be obtained by simply taking only non-redundant ones into account. Each non-redundant rule is characterized as a rule with the minimal antecedent and maximal consequent and is formally defined in terms of E-generators and closures. It is shown that any rule can be reconstructed from a non-redundant rule with its precise support and confidence. Furthermore, some experimental results show that the number of non-redundant rules is much smaller than that of all possible rules. Therefore, the method can be considered effective and promising for reducing the number of rules to be generated. However, a large number of non-redundant rules often remain even after all redundant ones are discarded. Since the task of checking them would still be costly for users, a further reduction is strongly desired. In the next section, we propose a method for such a reduction by extending the original approach.
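Definition 1 above can be checked mechanically. In the sketch below (our own illustration), a rule l1 → (l2 \ l1) is encoded as the pair (l1, l2), and S is a hypothetical support table, so that sup(rule) = S[l2] and conf(rule) = S[l2] / S[l1]:

```python
# Hypothetical support table for three itemsets (illustrative values only).
S = {frozenset("a"): 0.5, frozenset("ab"): 0.5, frozenset("abc"): 0.25}

def is_redundant(rule, rules):
    """Definition 1: (l1, l2) is redundant iff some distinct rule (m1, m2)
    has m1 ⊆ l1 and l2 ⊆ m2 with identical support and confidence."""
    l1, l2 = rule
    return any(
        (m1, m2) != (l1, l2) and m1 <= l1 and l2 <= m2
        and S[m2] == S[l2] and S[m2] / S[m1] == S[l2] / S[l1]
        for (m1, m2) in rules
    )

r1 = (frozenset("a"), frozenset("abc"))    # a -> bc: minimal antecedent
r2 = (frozenset("ab"), frozenset("abc"))   # ab -> c: same support/confidence
rules = [r1, r2]
assert not is_redundant(r1, rules)  # the representative (non-redundant) rule
assert is_redundant(r2, rules)      # reconstructable from r1
```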

4 Approximate Informative Basis of Association Rules

As just mentioned, we still have a large number of rules even if we consider redundant ones to be unnecessary. Although we can easily reconstruct any redundant rule from a non-redundant one with its precise support and confidence according to the original framework, the authors would like to claim that, from a practical point of view, it is worth reducing the number of output rules further even if the supports and confidences of redundant rules can no longer be derived precisely. In this section, we propose a method for reducing the number of rules to be generated. For this purpose, the original notion of redundancy is extended according to the claim above. In order to present our method, we first introduce the notion of an approximate generator.

4.1 Approximate Generators of Closed Itemsets

An approximate generator is an extension of the E-generator and works more flexibly.

Definition 2. (A-Generators) Let l be an itemset and f a closed itemset. l is called an approximate generator (A-generator) of f if γ(l) ⊆ f and sup(f) / sup(γ(l)) ≥ 1 − ε, where ε is a user-defined parameter (0 ≤ ε < 1).

Note that any E-generator of a closed itemset f is an A-generator of f. The following property plays a very important role in our method.
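Definition 2 translates directly into code. A sketch with our own helper names, reusing the Figure 2 example database:

```python
DB = {1: frozenset("acd"), 2: frozenset("bce"), 3: frozenset("abce"),
      4: frozenset("be"), 5: frozenset("abcd"), 6: frozenset("bce")}

def sup(l):
    """Support of l: fraction of transactions containing l."""
    return sum(1 for t in DB.values() if l <= t) / len(DB)

def gamma(l):
    """Closure of l: intersection of all transactions containing l."""
    covering = [t for t in DB.values() if l <= t]
    return frozenset.intersection(*covering) if covering else frozenset()

def is_a_generator(l, f, eps):
    """Definition 2: l is an A-generator of the closed itemset f if
    gamma(l) ⊆ f and sup(f) / sup(gamma(l)) >= 1 - eps."""
    return gamma(l) <= f and sup(f) / sup(gamma(l)) >= 1 - eps

# With eps = 0, only E-generators qualify; gamma(a) = ac, so:
assert is_a_generator(frozenset("a"), frozenset("ac"), eps=0.0)
# With eps = 0.7 (the paper's example value), 'a' also approximately
# generates abce: sup(abce)/sup(ac) = (1/6)/(3/6) = 1/3 >= 0.3.
assert is_a_generator(frozenset("a"), frozenset("abce"), eps=0.7)
assert not is_a_generator(frozenset("a"), frozenset("abce"), eps=0.1)
```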


Proposition 1. Let g be an A-generator of a closed itemset f. For any itemset l such that g ⊆ l ⊆ f,

    sup(g) ≥ sup(l) ≥ (1 − ε) sup(g)   and   sup(f) / (1 − ε) ≥ sup(l) ≥ sup(f).

Proof. From the definition of an A-generator, 1 ≥ sup(f) / sup(γ(g)) ≥ 1 − ε holds. Since sup(γ(g)) = sup(g), we have sup(g) ≥ sup(f) ≥ (1 − ε) sup(g). From sup(g) ≥ sup(l) ≥ sup(f), therefore, sup(g) ≥ sup(l) ≥ (1 − ε) sup(g) holds. Based on the inequalities above, we can easily obtain sup(f) / (1 − ε) ≥ sup(l) ≥ sup(f) as well.

The proposition implies that sup(g) and sup(f) can be considered approximations of sup(l) if we accept the errors. It should be noted here that the maximum errors are precisely evaluated with the parameter ε. Therefore, we can flexibly adjust the maximum errors so that they are permissible for us. As ε approaches 1, the maximum error becomes larger. Conversely, as ε approaches 0, the maximum error approaches 0; that is, in the case ε = 0, any A-generator corresponds to an E-generator.

4.2 Approximation of EGC-Tuples

As previously mentioned, for any itemset l, the support of l can be precisely identified with an EGC-tuple (g, f) such that g ⊆ l ⊆ f, since sup(g) = sup(l) = sup(f). Therefore, based on the set of EGC-tuples w.r.t. D, EGC(D), we can obtain the precise support of any itemset. We now define an approximation of EGC(D) with the help of A-generators. Using the approximation, we can approximately identify the support of any itemset with the maximum errors just discussed.

Definition 3. (Approximation of EGC(D)) Let F be the set of frequent closed itemsets w.r.t. D and ε a user-defined parameter (0 ≤ ε < 1). Consider a partition of F, {F1, . . . , Fk} 4 . For each Fi, there uniquely exists a closed itemset fi* ∈ Fi such that ∀f ∈ Fi, f ⊆ fi* and sup(fi*) / sup(f) ≥ 1 − ε. For each Fi, let AGC(Fi) = { (g, fi*) | g ∈ min(∪_{f ∈ Fi} MEG(f)) } 5 . An approximation of EGC(D) is defined as

    AGC(D, ε) = ∪_{i=1}^{k} AGC(Fi).

Each tuple in AGC(D, ε) is called an AGC-tuple.

4. That is, F = F1 ∪ · · · ∪ Fk and Fi ∩ Fj = ∅ (i ≠ j), where each Fi is called a cell.
5. For a set S, min(S) denotes the set of minimal elements of S under the set-inclusion ordering.


From the definition, for each EGC-tuple (g, f) ∈ EGC(D), f belongs to exactly one Fi, and there exists an AGC-tuple (g*, fi*) ∈ AGC(D, ε) such that g* ⊆ g and f ⊆ fi*. Moreover, for any AGC-tuple (g*, f*), g* is an A-generator of f*. From these observations and Proposition 1, we obtain the following statement.

Proposition 2. For any frequent itemset l, there exists an AGC-tuple (g, f) ∈ AGC(D, ε) such that g ⊆ l ⊆ f. Furthermore, sup(g) ≥ sup(l) ≥ (1 − ε) sup(g) and sup(f) / (1 − ε) ≥ sup(l) ≥ sup(f) hold.

Proposition 2 implies that AGC(D, ε) can identify the support of any frequent itemset approximately, where the maximum errors are precisely evaluated by functions of ε.

4.3 Approximate Informative Basis of Association Rules

Based on the set of AGC-tuples, AGC(D, ε), we can construct a basis of association rules, called an approximate informative basis (AIB), from which any significant rule can easily be reconstructed with its approximate support and confidence. Before giving the formal definition, we introduce the notion of an approximate source of association rules.

Definition 4. (Approximate Sources of Association Rules) Let D be a transaction database, ε a user-defined parameter (0 ≤ ε < 1) and F the set of frequent closed itemsets. Assume that {F1, . . . , Fk} is the partition of F based on which AGC(D, ε) is constructed. For an EGC-tuple (g, f) ∈ EGC(D), consider the Fi such that f ⊆ fi*. An association rule to which the pair of (g, f) and AGC(Fi) is attached, s = ⟨ g → (fi* \ g) : (g, f), AGC(Fi) ⟩, is called an approximate source (A-source) of association rules 6 . The set of A-sources is referred to as AS(D, ε).

We can reconstruct a set of association rules from an A-source.

Definition 5. (Reconstruction of Association Rules from an A-Source) Let s = ⟨ g → (f* \ g) : (g, f), AGC(F) ⟩ be an A-source. An association rule l1 → (l2 \ l1) is said to be reconstructable from s if g ⊆ l1 ⊆ f and, for some AGC-tuple (g*, f*) ∈ AGC(F), g* ⊆ l2 ⊆ f*.

As shown in the next proposition, for any association rule reconstructed from an A-source, its support and confidence lie within certain ranges determined by the values of the source and ε.

6. In what follows, depending on context, s often denotes only the rule g → (fi* \ g) of s.
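Definition 5 amounts to two containment tests. As a sketch (our own encoding: a rule is the pair (l1, l2), and an A-source carries its EGC-tuple (g, f) and the list AGC(F)):

```python
def reconstructable(rule, source):
    """Definition 5: (l1, l2) is reconstructable from an A-source with
    EGC-tuple (g, f) and tuple set agc if g ⊆ l1 ⊆ f and
    g* ⊆ l2 ⊆ f* for some AGC-tuple (g*, f*) in agc."""
    (g, f), agc = source
    l1, l2 = rule
    return g <= l1 <= f and any(gs <= l2 <= fstar for gs, fstar in agc)

# A-source s1 of the paper's later example: a -> (abce \ a) with
# (g, f) = (a, ac) and AGC(F1) = {(a, abce), (ce, abce)}.
fs = frozenset
s1 = ((fs("a"), fs("ac")), [(fs("a"), fs("abce")), (fs("ce"), fs("abce"))])
assert reconstructable((fs("a"), fs("ac")), s1)        # a -> c
assert reconstructable((fs("ac"), fs("abce")), s1)     # ac -> be
assert not reconstructable((fs("b"), fs("bc")), s1)    # b not between a and ac
```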


Proposition 3. Let s be an A-source and r an association rule reconstructed from s. Then

    sup(s) / (1 − ε) ≥ sup(r) ≥ sup(s)   and   conf(s) / (1 − ε) ≥ conf(r) ≥ conf(s)

hold.

Proof. Let s = ⟨ g → (f* \ g) : (g, f), AGC(F) ⟩ be an A-source and r = l1 → (l2 \ l1) an association rule reconstructed from s. From the definition of reconstruction, g ⊆ l1 ⊆ f and, for an AGC-tuple (g*, f*) in AGC(F), g* ⊆ l2 ⊆ f* hold. Note here that sup(g) = sup(l1) = sup(f). Furthermore, from Proposition 1, sup(g*) ≥ sup(l2) ≥ (1 − ε) sup(g*) and sup(f*) / (1 − ε) ≥ sup(l2) ≥ sup(f*) hold. Since sup(s) = sup(f*) and sup(r) = sup(l2), we immediately obtain sup(s) / (1 − ε) ≥ sup(r) ≥ sup(s). Moreover, since sup(g) = sup(l1) and sup(l2) ≥ sup(f*), sup(l2) / sup(l1) ≥ sup(f*) / sup(g) holds. Similarly, from sup(g) = sup(l1) and sup(f*) / (1 − ε) ≥ sup(l2), sup(f*) / {(1 − ε) sup(g)} ≥ sup(l2) / sup(l1) holds. Therefore, we obtain sup(f*) / {(1 − ε) sup(g)} ≥ sup(l2) / sup(l1) ≥ sup(f*) / sup(g), that is, conf(s) / (1 − ε) ≥ conf(r) ≥ conf(s).

The proposition states that if we accept the errors, then sup(s) and conf(s) can be viewed as approximations of sup(r) and conf(r), respectively. That is, a set of association rules can easily be reconstructed from an A-source with their approximate supports and confidences. In this sense, we consider these rules to be approximately redundant (A-redundant). Now we can define an approximate informative basis of association rules from which any significant rule can be reconstructed with approximate values of its support and confidence.

Definition 6. (Approximate Informative Basis of Association Rules) Let D be a transaction database and ε a user-defined parameter (0 ≤ ε < 1). An approximate informative basis of the significant association rules w.r.t. D and ε, denoted by AIB(D, ε), is defined as the set of A-sources whose confidences are not less than (1 − ε) minconf:

    AIB(D, ε) = { s | s ∈ AS(D, ε) ∧ conf(s) ≥ (1 − ε) minconf }.
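Definition 6 is a one-line filter over the A-sources. A minimal sketch with hypothetical source names and confidences:

```python
def build_aib(a_sources, conf, minconf, eps):
    """Definition 6: keep exactly the A-sources s with
    conf(s) >= (1 - eps) * minconf."""
    return [s for s in a_sources if conf[s] >= (1 - eps) * minconf]

# Three hypothetical sources; with minconf = 0.85 and eps = 0.7 the
# admission threshold is (1 - 0.7) * 0.85 = 0.255.
conf = {"s1": 1 / 3, "s2": 0.9, "s3": 0.2}
assert build_aib(conf, conf, minconf=0.85, eps=0.7) == ["s1", "s2"]
```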

Theorem 1. Weak-Soundness of AIB(D, ε): Any association rule r reconstructed from an A-source s in AIB(D, ε) is significant or at worst A-significant 7 .

7. For an association rule r, if sup(r) ≥ minsup and minconf > conf(r) ≥ (1 − ε) minconf, we say that r is approximately significant (A-significant).


Completeness of AIB(D, ε): For any significant association rule r, there exists an A-source s in AIB(D, ε) from which r can be reconstructed.

Proof. Weak-Soundness: Let r = l1 → (l2 \ l1) be an association rule reconstructed from an A-source s = ⟨ g → (f* \ g) : (g, f), AGC(F) ⟩ in AIB(D, ε). Then there exists an AGC-tuple (g*, f*) in AGC(F) such that g* ⊆ l2 ⊆ f*. From Proposition 3, sup(r) ≥ sup(s) and conf(r) ≥ conf(s). Since f* is a frequent closed itemset, sup(f*) ≥ minsup. From sup(s) = sup(f*), therefore, we have sup(r) ≥ minsup. Furthermore, since conf(s) ≥ (1 − ε) minconf, we immediately have conf(r) ≥ (1 − ε) minconf. Therefore, r is at worst A-significant.

Completeness: Let r = l1 → (l2 \ l1) be a significant association rule. For each li, there exists an EGC-tuple (gi, fi) in EGC(D) such that gi ⊆ li ⊆ fi. It should be noted that since l1 ⊂ l2, f1 ⊆ f2 holds. Assume that AGC(D, ε) is constructed based on a partition PF of F. For the EGC-tuple (g2, f2), we can consider the cell F of PF such that f2 ⊆ f*, where f* is the maximum itemset in F. Therefore, there exists an AGC-tuple (g*, f*) in AGC(F) such that g* ⊆ g2 ⊆ f2 ⊆ f*. Furthermore, f1 ⊆ f* holds. Therefore, s = ⟨ g1 → (f* \ g1) : (g1, f1), AGC(F) ⟩ is an A-source from which r can be reconstructed. Since r is a significant rule, sup(l2) / sup(l1) ≥ minconf holds. Multiplying both sides by (1 − ε), we obtain (1 − ε) sup(l2) / sup(l1) ≥ (1 − ε) minconf. From Proposition 2, sup(f*) / (1 − ε) ≥ sup(l2) holds, that is, sup(f*) ≥ (1 − ε) sup(l2). Therefore, we have sup(f*) / sup(l1) ≥ (1 − ε) minconf. Since sup(l1) = sup(g1) and sup(f*) / sup(g1) = conf(s), AIB(D, ε) contains the A-source s.

From Theorem 1, it is ensured that once we have AIB(D, ε), no significant rule can be lost.

4.4 Constructing the Approximate Informative Basis

Given a transaction database D, minsup, minconf and a user-defined parameter ε, we can construct an approximate informative basis w.r.t. D and ε, AIB(D, ε). The construction process is divided into three sub-tasks:

1. Computing the set of EGC-tuples, EGC(D).
2. Computing an approximation of EGC(D), AGC(D, ε).
3. Constructing the approximate informative basis, AIB(D, ε).

The first task can be performed by adopting a Close [3]-like algorithm, and the last one is straightforward. An algorithm for the second task, computing AGC(D, ε) from EGC(D), is shown in Figure 1. In general, as ε becomes larger, the number of iterations of the while loops decreases. The worst-case complexity of the algorithm is O(N^2), where N is the size of EGC(D) (that is, the number of EGC-tuples in EGC(D)).


Input: EGC(D) and ε.
Output: AGC(D, ε).

AGC(D, ε) ← ∅; EG ← ∅; Rem ← ∅; Min ← ∅;
while EGC(D) ≠ ∅ do
    pick up t = (g, f) from EGC(D);
    while EGC(D) ≠ ∅ do
        remove t′ = (g′, f′) from EGC(D);
        if f′ ⊆ f ∧ sup(f) / sup(f′) ≥ 1 − ε then
            EG ← EG ∪ {g′};
        else
            Rem ← Rem ∪ {t′};
        end
    end
    Min ← the set of minimal elements of EG;
    for g′ ∈ Min do
        AGC(D, ε) ← AGC(D, ε) ∪ {(g′, f)};
    end
    EGC(D) ← Rem; EG ← ∅; Rem ← ∅; Min ← ∅;
end
output AGC(D, ε);

Fig. 1. Algorithm for constructing AGC(D, ε). Note that the picked tuple t itself remains in EGC(D) for the inner loop, so its own generator g enters EG as well.
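A direct Python transcription of the algorithm of Figure 1 might look as follows (our own code, not the authors' implementation). The pseudocode leaves the pick order open; here we pick a tuple whose closure has minimal support first, so that each cell representative f is maximal, an ordering that reproduces the example worked out below:

```python
def build_agc(egc, sup, eps):
    """Construct AGC(D, eps) from the EGC-tuples, following Fig. 1.
    egc: list of (g, f) frozenset pairs; sup: itemset -> support."""
    # Pick tuples with the least-supported (hence maximal) closures first.
    remaining = sorted(egc, key=lambda t: sup[t[1]])
    agc = []
    while remaining:
        _, f = remaining[0]              # cell representative f
        eg, rem = [], []
        for g2, f2 in remaining:         # includes the picked tuple itself
            if f2 <= f and sup[f] / sup[f2] >= 1 - eps:
                eg.append(g2)            # f2 joins the cell of f
            else:
                rem.append((g2, f2))     # postponed to a later cell
        # Keep only the minimal generators under set inclusion.
        for g in eg:
            if not any(h < g for h in eg):
                agc.append((g, f))
        remaining = rem
    return agc

# The 10 EGC-tuples of the running example, with their supports.
fs = frozenset
egc = [(fs("a"), fs("ac")), (fs("b"), fs("b")), (fs("c"), fs("c")),
       (fs("d"), fs("acd")), (fs("e"), fs("be")), (fs("ab"), fs("abc")),
       (fs("ae"), fs("abce")), (fs("bc"), fs("bc")), (fs("bd"), fs("abcd")),
       (fs("ce"), fs("bce"))]
sup = {fs("ac"): 3/6, fs("b"): 5/6, fs("c"): 5/6, fs("acd"): 2/6,
       fs("be"): 4/6, fs("abc"): 2/6, fs("abce"): 1/6, fs("bc"): 4/6,
       fs("abcd"): 1/6, fs("bce"): 3/6}
agc = build_agc(egc, sup, eps=0.7)
assert set(agc) == {(fs("a"), fs("abce")), (fs("ce"), fs("abce")),
                    (fs("d"), fs("abcd")), (fs("b"), fs("be")),
                    (fs("e"), fs("be")), (fs("c"), fs("bc"))}
```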

Example. For the transaction database D shown in Figure 2, we construct an approximate informative basis. In the database, each itemset is represented in a simple form; for example, the itemset {a, b, c} is denoted by abc. We assume here that minsup = 1/6 and ε = 0.7. First, the set of EGC-tuples, EGC(D), is computed. For the database, we obtain the following 10 EGC-tuples, where the value attached to each tuple is its support:

EGC(D) = { (a, ac) : 3/6, (b, b) : 5/6, (c, c) : 5/6, (d, acd) : 2/6, (e, be) : 4/6, (ab, abc) : 2/6, (ae, abce) : 1/6, (bc, bc) : 4/6, (bd, abcd) : 1/6, (ce, bce) : 3/6 }.

Then AGC(D, ε) is constructed from EGC(D) according to the algorithm in Figure 1. For example, we obtain

AGC(D, ε) = { (a, abce), (ce, abce), (d, abcd), (b, be), (e, be), (c, bc) }.

It should be noted here that the set of frequent closed itemsets, F = { abce, abcd, abc, bce, acd, ac, be, bc, b, c }, is divided into the following 4 cells:


ID  itemset
 1  acd
 2  bce
 3  abce
 4  be
 5  abcd
 6  bce

Fig. 2. Example of a transaction database

F1 = { abce, abc, bce, ac }, F2 = { abcd, acd }, F3 = { be, b } and F4 = { bc, c }. That is, AGC(F1) = { (a, abce), (ce, abce) }, AGC(F2) = { (d, abcd) }, AGC(F3) = { (b, be), (e, be) } and AGC(F4) = { (c, bc) }. Based on AGC(D, ε), we can obtain the set of A-sources, AS(D, ε), consisting of 20 sources. Assuming minconf = 0.85, we have the following approximate informative basis consisting of 12 sources:

AIB(D, ε) = {
  s1  = ⟨ a → (abce \ a) : (a, ac), AGC(F1) ⟩,
  s2  = ⟨ a → (abcd \ a) : (a, ac), AGC(F2) ⟩,
  s3  = ⟨ b → (be \ b) : (b, b), AGC(F3) ⟩,
  s4  = ⟨ b → (bc \ b) : (b, b), AGC(F4) ⟩,
  s5  = ⟨ c → (bc \ c) : (c, c), AGC(F4) ⟩,
  s6  = ⟨ d → (abcd \ d) : (d, acd), AGC(F2) ⟩,
  s7  = ⟨ e → (be \ e) : (e, be), AGC(F3) ⟩,
  s8  = ⟨ ab → (abce \ ab) : (ab, abc), AGC(F1) ⟩,
  s9  = ⟨ ab → (abcd \ ab) : (ab, abc), AGC(F2) ⟩,
  s10 = ⟨ ae → (abce \ ae) : (ae, abce), AGC(F1) ⟩,
  s11 = ⟨ bd → (abcd \ bd) : (bd, abcd), AGC(F2) ⟩,
  s12 = ⟨ ce → (abce \ ce) : (ce, bce), AGC(F1) ⟩ }.
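The partition property required by Definition 3, namely that each cell's representative fi* contains every member of its cell and that sup(fi*) / sup(f) ≥ 1 − ε, can be verified mechanically for this example (a checking sketch, not part of the authors' system):

```python
fs = frozenset
DB = [fs("acd"), fs("bce"), fs("abce"), fs("be"), fs("abcd"), fs("bce")]

def sup(l):
    """Support of l in the Fig. 2 database."""
    return sum(1 for t in DB if l <= t) / len(DB)

eps = 0.7
# The four cells F1..F4, keyed by their representatives f1*..f4*.
cells = {fs("abce"): [fs("abce"), fs("abc"), fs("bce"), fs("ac")],
         fs("abcd"): [fs("abcd"), fs("acd")],
         fs("be"):   [fs("be"), fs("b")],
         fs("bc"):   [fs("bc"), fs("c")]}

for f_star, cell in cells.items():
    for f in cell:
        assert f <= f_star                       # f_star is the maximum
        assert sup(f_star) / sup(f) >= 1 - eps   # support-ratio condition
```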


Table 1. Experimental results (number of rules output)

minsup = 0.1            minconf = 0.7   minconf = 0.5   minconf = 0.3
Close                           5,134           9,290          15,048
Our System (ε = 0.1)            1,733           2,985           4,444
Our System (ε = 0.2)            1,196           1,793           2,502

minsup = 0.05           minconf = 0.7   minconf = 0.5   minconf = 0.3
Close                           7,742          15,594          28,712
Our System (ε = 0.1)            3,203           5,817           9,822
Our System (ε = 0.2)            2,194           3,600           5,500

minsup = 0.01           minconf = 0.7   minconf = 0.5   minconf = 0.3
Close                          11,997          28,458          59,153
Our System (ε = 0.1)            6,900          13,290          25,113
Our System (ε = 0.2)            3,824           6,357          10,432

For example, from the A-source s1, the association rule r = a → (ac \ a) can be reconstructed with approximate support and confidence 1/6 (= sup(s1)) and 1/3 (= conf(s1)). On the other hand, its precise support and confidence are 3/6 and 1, respectively. We can easily verify that the errors of these values fall within the bounds of Proposition 3.

5 Experimental Results

In this section, we present our preliminary experimental results. In order to verify the effectiveness of our method, we have implemented a system that computes an AIB based on the algorithms presented in the previous section. The algorithm Close has been implemented as well for comparison with the original method by Bastide et al. Our system and Close are written in C and have been tested on a 400 MHz Pentium II PC with 160 MB of memory. For our experimentation, we used the "1984 United States Congressional Voting Records Database" from the UCI Repository [7]. It consists of 435 transactions, and the number of possible items is 17. Our system computed AIBs for the database under various settings of the parameters minsup, minconf and ε. The numbers of rules output by each system are summarized in Table 1, where the results obtained by the original method are referred to as Close. For each parameter setting, our system output fewer rules than Close. In the most effective case, a reduction of about 70% was achieved


compared to Close 8 . Even in the worst case, a reduction of about 43% was achieved. We therefore consider our method very effective in reducing the number of generated rules.

6 Concluding Remarks

In this paper, we have presented a method for constructing an approximate informative basis (AIB) for significant association rules, from which any significant rule can easily be reconstructed with its approximate support and confidence. The maximum errors of these values are precisely evaluated by formulae determined by a user-defined parameter ε; therefore, we can flexibly adjust the preciseness of the approximate values. Some experimental results have shown that our method can drastically reduce the number of rules to be generated compared to the original framework. Readability and understandability of the rules would thus be improved by providing an adequate value of ε. As a next step of this study, we are planning to formalize a method for identifying actually interesting rules, with their support and confidence, in an interactive manner. In the initial stage, the user gives ε a value close to 1, and we obtain a rough AIB whose contents can easily and completely be checked. By checking them, the user selects several A-sources from which some interesting rules seem to be reconstructable. Then the user decreases the value of ε to obtain a more precise AIB. It should be noted that the system presents only the part of the AIB related to the A-sources previously selected by the user. Therefore, we can obtain a more precise AIB while keeping the number of contents small. For the presented AIB, similar processes are performed iteratively until the user satisfactorily identifies interesting rules with their support and confidence. Since the system keeps the contents of the presented AIB compact at each stage, the selection tasks would not be costly for the user. Such a system would therefore be quite helpful for users who try to discover interesting rules easily. In order to construct such an interactive system, we expect that the efficiency of computing the AIB has to be improved further.
Our AIB is currently computed by adopting an extended version of the Close algorithm [3]. Although Close can efficiently identify the set of frequent closed itemsets, several new algorithms for the same task have been proposed recently, e.g., A-Close [4], CHARM [6] and CLOSET [5]. By adopting these algorithms, the efficiency of computing the AIB could be improved.

References

1. R. Agrawal and R. Srikant: Fast Algorithms for Mining Association Rules, Proc. of the 20th Int'l Conf. on Very Large Data Bases, pp. 478–499, 1994.

8. It has been reported that Close achieves reductions of about 80–90% compared to Apriori.


2. Y. Bastide, N. Pasquier, R. Taouil, G. Stumme and L. Lakhal: Mining Minimal Non-Redundant Association Rules Using Frequent Closed Itemsets, Proc. of the Int'l Conf. on Computational Logic (CL 2000), LNAI 1861, pp. 972–986, 2000.
3. N. Pasquier, Y. Bastide, R. Taouil and L. Lakhal: Efficient Mining of Association Rules Using Closed Itemset Lattices, Information Systems, vol. 24, no. 1, pp. 25–46, 1999.
4. N. Pasquier, Y. Bastide, R. Taouil and L. Lakhal: Discovering Frequent Closed Itemsets for Association Rules, Proc. of ICDT, LNCS 1540, pp. 398–416, 1999.
5. J. Pei, J. Han and R. Mao: CLOSET: An Efficient Algorithm for Mining Frequent Closed Itemsets, Proc. of DMKD 2000, 2000.
6. M. J. Zaki and C. Hsiao: CHARM: An Efficient Algorithm for Closed Association Rule Mining, Technical Report 99-10, Computer Science Dept., Rensselaer Polytechnic Institute, 1999.
7. P. M. Murphy and D. W. Aha: UCI Repository of Machine Learning Databases, http://www.ics.uci.edu/mlearn/MLRepository.html, Univ. of California, Dept. of Information and Computer Science, 1994.

Multicriterially Best Explanations

Naresh S. Iyer and John R. Josephson

The Ohio State University, Laboratory for Artificial Intelligence Research,
Computer and Information Science Department, Columbus, Ohio 43210, USA
{niyer,jj}@cis.ohio-state.edu

Abstract. Inference to the best explanation, IBE (or abduction), requires finding the best explanatory hypothesis, from a set of rival hypotheses, to explain a collection of data. The notion of best, however, is multicriterial, and the available rival hypotheses might be variously good according to different criteria. Thus, one can view the abduction problem as that of choosing the best hypothesis from among a set of multicriterially evaluated hypotheses, i.e., as a multiple-criteria decision-making (MCDM) problem. In the absence of a single hypothesis that is best along all dimensions of goodness, the MCDM problem becomes especially hard. The Seeker-Filter-Viewer architecture provides an effective and natural way to use computer power to assist humans in solving certain classes of MCDM problems. In this paper, we apply an MCDM perspective to the abductive problem of red-cell antibody identification and present the results obtained by using the S-F-V architecture.

1 Introduction

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 128–140, 2001. © Springer-Verlag Berlin Heidelberg 2001

Abductive inference is a ubiquitous form of reasoning in science and common sense. Abduction has been referred to as inference to the best explanation by Harman [3] and as the explanatory inference by Lycan [4]. Typically, the available evidence is insufficient to narrow down conclusively to a single explanation, so multiple hypotheses are available and the problem becomes one of choosing the best among rivals. Josephson & Josephson [2] have described abductions as following this pattern:

    D is a collection of data (facts, observations, givens).
    H explains D (would, if true, explain D).
    No other hypothesis can explain D as well as H does.
    Therefore, H is probably true.

They also suggest that the judgment of likelihood associated with a conclusion should depend upon a number of considerations. Apart from how good a single hypothesis is by itself, it is also desirable that it decisively surpass the


alternative hypotheses. However, there are, in general, multiple kinds of criteria by which hypotheses may be compared; explanatory power and plausibility are examples. Thus, we may view abduction as requiring a choice among multicriterially evaluated hypotheses, that is, as a species of multiple-criteria decision making. MCDM problems have been widely studied across diverse fields, and many techniques exist for solving them [5]. An important concept in MCDM is the idea of dominance. Dominance is very much an all-other-things-being-equal kind of reasoning. Specifically, we say that a multicriterially evaluated alternative A dominates another alternative B if there is some criterion in which A is strictly better than B and no criterion in which B is strictly better than A. An alternative that is not dominated is called a Pareto-optimal alternative. For a given problem, the set of Pareto-optimal alternatives has the property that, within the set, the only way to improve along any dimension is to accept a loss in another dimension; that is, choosing among the Pareto-optimal alternatives is a matter of making trade-offs. It is known that the size of the Pareto-optimal set is typically a very small percentage of the actual number of alternatives [6,7]. Thus, the application of dominance as a filter can be expected to considerably reduce the number of alternatives that need to be considered [1]. It is worth noting that no loss is incurred in the elimination of the dominated alternatives unless significant criteria have been left out, because for every alternative eliminated by the dominance filter there is at least one Pareto-optimal alternative that dominates it and is therefore multicriterially better. The application of dominance minimally requires that an order relation hold among the values of each criterion.
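The dominance filter is easy to state in code. A minimal sketch (our own, assuming numeric scores where larger is better on every criterion):

```python
def dominates(a, b):
    """a dominates b: at least as good on every criterion and strictly
    better on at least one (larger scores are better here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_filter(alternatives):
    """Keep the Pareto-optimal alternatives: those dominated by no other."""
    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives)]

# Hypothetical hypotheses scored as (plausibility, explanatory coverage).
hyps = [(0.9, 0.4), (0.6, 0.8), (0.5, 0.7), (0.9, 0.3)]
assert pareto_filter(hyps) == [(0.9, 0.4), (0.6, 0.8)]
```

Note that only the two alternatives that are strictly worse than some other on every count are removed; the survivors each beat the other on one criterion, so choosing between them is a trade-off.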
The survivors of the dominance filter represent the multicriterially maximal subset of alternatives from the original set. From the definition of dominance it is clear that, for each pair of alternatives that survive the dominance filter, each outperforms the other according to different criteria. In other words, if alternatives A and B are in the Pareto-optimal set, then there must be at least one criterion in which A is better than B and at least one criterion in which B is better than A, preventing either from dominating the other. In an abduction problem, a more plausible hypothesis, H1, might not explain as much as a less plausible one, H2. That is, H2 is better according to explanatory coverage while H1 is better according to plausibility. In such a case, there is no obvious sense in which either H1 or H2 can be said to be a distinctly better hypothesis. However, depending upon the need to explain more, and upon the degree of confidence required for the final choice, a choice between H1 and H2 may become possible. Choosing from the Pareto-optimal set requires that trade-offs be accepted between plausibility and explanatory coverage. This can be a challenge, since such trade-off judgments are often a function of the specific values at hand; for example, a certain level of confidence or of explanatory power may be sufficient.

130

N.S. Iyer and J.R. Josephson

In summary, a general way to solve an MCDM problem is to apply the dominance filter and then to choose from among the set of dominance survivors by applying human trade-off judgments with respect to the various criteria. The Seeker-Filter-Viewer architecture described in [1] is based on this strategy. The Seeker is a module which generates applicable alternatives and produces evaluations for them according to the different criteria. The Filter uses the principle of dominance to produce the Pareto-optimal set from the generated and evaluated set of alternatives, eliminating the distinctly suboptimal ones. The Viewer allows a human to express trade-off judgments on the Pareto-optimal alternatives; it lets the user view the candidate alternatives as points in graphs with the criteria as axes. If multiple criteria need to be considered, the Viewer provides multiple interlinked 2-D plots and histograms. The human expresses preferences by selecting desirable regions in the graphs. The graphically selected points or regions are cross-linked across all the open plots, so that a selection made on one plot shows the values of the selected alternatives according to the other criteria. Apart from explanatory coverage and plausibility, we will describe several other criteria that can be used to evaluate candidate hypotheses in abduction problems. These criteria may or may not apply, depending upon the problem domain and other characteristics of the data. We will briefly describe the S-F-V architecture and, as both an illustration of viewing abduction from an MCDM perspective and a demonstration of applying the S-F-V architecture, we will present the results of experiments in the domain of red-cell antibody identification as described in [2]. We will describe the antibody identification problem as an abduction problem, and define the evaluation criteria used in the experiment.
Finally, we will show the results of viewing this abduction problem as an MCDM problem and applying the S-F-V architecture to help solve the problem.

2  The Seeker-Filter-Viewer Architecture

The S-F-V architecture is described in detail in [1] and [9]. In this section, we provide a brief overview of the architecture and its use in solving MCDM problems. Essentially, the architecture is composed of three modules, the Seeker, the Filter, and the Viewer, each designed to perform a specific set of functions involved in solving the given MCDM problem. We describe these components one at a time.

2.1  The Seeker

The Seeker is responsible for generating the choice alternatives for the MCDM problem. In case the choice alternatives are already present or supplied by the decision-maker, the Seeker makes them accessible to the Filter by reading them from the database. For problems where the decision-maker cannot provide the choice alternatives himself, it is the function of the

Multicriterially Best Explanations

131

Seeker to seek out the alternatives from whatever sources are available, in a form that can be used by the Filter. Abstractly, this could be a search on the Internet looking for choice alternatives pertaining to the problem. The Seeker described in [1] is currently capable of generating choice alternatives as compositions of various components listed in a component library. The Seeker instantiates all possible choice alternatives that can be formed by some distinct composition of a set of components in the library. Having instantiated a choice alternative, it next makes use of simulation models to evaluate various property values for it. For example, for an instantiated car, the Seeker might run simulations to compute the mileage, cost, weight, top speed, and other properties related to cars for which simulation models are available. At the end of the generation process, the Seeker produces a list of choice alternatives along with a set of {property-name, property-value} pairs for each alternative. It makes this list available to the Filter.

2.2  The Filter

The Filter is responsible for applying the dominance rule to the set of alternatives generated by the Seeker. In order to do this, the Filter expects the decision-maker to choose those properties of the choice alternatives which reflect the dimensions of outcome that matter to him, and additionally the directions of goodness for the criteria. For example, if the decision-maker desires to buy a car that is cost-effective, he should choose cost and mileage as properties of interest. Once such a set of properties has been selected by the decision-maker, the Filter uses these properties as the criteria for applying the dominance rule to the set of alternatives. Since the values for the chosen criteria are already made available by the Seeker, the Filter makes use of these values to produce the Pareto-optimal set of alternatives. As mentioned earlier, this step involves no loss, because every alternative outside the Pareto-optimal set is known to be dominated by some Pareto-optimal alternative. By eliminating such alternatives, the Filter prevents the decision-maker from having to consider them at all, and thereby from unintentionally selecting a suboptimal alternative. Finally, as indicated in [6], [7], the Pareto-optimal set often tends to be a very small fraction of the original set; hence the application of the Filter also reduces the size of the set of alternatives that need further consideration. While the Filter can reduce the relevant set of alternatives from a large number to a small fraction, choosing an alternative even from a handful of Pareto-optimal alternatives can be a demanding task for the decision-maker. The next module of the architecture allows the decision-maker to interact graphically with the Pareto-optimal alternatives in various ways, in order to select the final choice alternative(s) of interest to him.

2.3  The Viewer

As mentioned previously, choice among Pareto-optimal alternatives requires the making of trade-offs. The Viewer allows the decision-maker to interact with the Pareto-optimal set by means of various kinds of graphical plots, which enable the decision-maker to express his trade-off preferences in the context of the available alternatives. A more detailed description of all modes of interaction that the Viewer allows, along with a description of an interaction session between the Viewer and a decision-maker, is provided in [9]. Here we will only mention that the Viewer allows the decision-maker to plot the Pareto-optimal alternatives as points in 2-D scatter plots whose axes can be selected by the decision-maker himself; he can further pull up as many plots as he desires. The Viewer also maintains a set of 1-D plots where the Pareto-optimal alternatives are plotted along single property axes. The Viewer allows the decision-maker to select points or collections of points graphically. Upon selection, all points within the selected region are indicated using a separate color, and this indication is provided across all the open plots. Thus, even though the decision-maker makes his selection on a single plot, he gets to examine the implications of his selection in terms of the other properties by examining the colored points on all other plots. This leads the decision-maker to evaluate the consequences of a selection as he makes it, which can be expected to produce a more rational selection process. Apart from making trade-offs, the Viewer enables other kinds of preference expression by the decision-maker. These include: choosing alternatives by categories from bar charts, applying hard constraints on criteria by using the 1-D plots, applying various kinds of constraints based on as yet unconsidered properties, combining alternatives that belong to different Viewer-based selections, looking at a list of all properties of the alternatives in the selected region in tabular form, and so on.
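The cross-linked selection behavior can be sketched non-graphically as follows (a simplified stand-in for the Viewer's linked plots; all criterion names and values are illustrative, not taken from the system):

```python
def select_region(points, x_crit, y_crit, x_range, y_range):
    # Return the points whose (x_crit, y_crit) values fall inside the
    # rectangular region; a GUI would then highlight these same points
    # in every other open plot.
    (x_lo, x_hi), (y_lo, y_hi) = x_range, y_range
    return [p for p in points
            if x_lo <= p[x_crit] <= x_hi and y_lo <= p[y_crit] <= y_hi]

points = [
    {"id": "H1", "coverage": 40, "implausibility": 5, "cardinality": 1},
    {"id": "H2", "coverage": 55, "implausibility": 9, "cardinality": 2},
    {"id": "H3", "coverage": 60, "implausibility": 14, "cardinality": 2},
]
# Select a region on one plot (coverage vs. implausibility) ...
chosen = select_region(points, "coverage", "implausibility", (50, 65), (0, 10))
# ... and read off the selection's values on another criterion.
cardinalities = [p["cardinality"] for p in chosen]
```

Only H2 falls in the selected region, so the "other plots" would highlight just that point.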
Thus, the Viewer complements the Filter by enabling many kinds of preferences that apply when choosing from Pareto-optimal alternatives. The synergy between the three modules of the S-F-V architecture gives it the ability to act as an effective decision-support tool for solving MCDM problems. As an indication of its effectiveness, we point to the experiment described in [1], where close to 2 million choice alternatives (hybrid vehicles) were generated by the Seeker, the Filter reduced this set to 1078 alternatives, and interaction between a decision-maker and these Filter survivors using the Viewer resulted in a final output of 7 alternatives. The architecture has been applied to a number of engineering problems, and our claims about its effectiveness are based on the responses we received from users regarding the ease with which they were able to use it. We realize that a formal usability analysis of the architecture would go a long way towards establishing this. We have compared the Viewer with a few other visualization techniques in the MCDM literature, and our impression is that the Viewer has its own set of unique properties. We direct the interested reader to the survey of such visualization techniques in [10] (pp. 238-249). This brings our description of the S-F-V architecture to a close. We next describe some properties of explanatory hypotheses which can be used as evaluation criteria. The use of such criteria to evaluate hypotheses will allow hypotheses to be viewed as multicriterially evaluated alternatives, thereby allowing the problem of choosing the "multicriterially best" alternative to be seen as an MCDM problem.

3  Evaluation Criteria for Explanatory Hypotheses

As we said, the idea of the best hypothesis from among a set of hypotheses is a multicriterial notion. In [8] the following qualities are suggested as criteria for evaluating hypotheses: Explanatory Power, Plausibility, Internal Consistency, Simplicity, Specificity, Predictive Power, and Theoretical Promise. In order to apply MCDM techniques, the evaluations according to the criteria must be obtainable in numerical form, or in some other form that enables the comparison of criterion values. How this is done may well depend upon the domain for which the abduction problem is being solved. As an illustration, we next describe how evaluations were produced for the hypotheses in the red-cell antibody identification domain.

4  The RED Domain: The Red Cell Antibody Identification Task

As described in [2], the RED systems are medical test-interpretation systems that operate in the knowledge domain of hospital blood banks. Specifically, the RED systems are meant to help with the problem of red-cell antibody identification. We will first briefly describe the problem and then formulate it as an abduction problem.

4.1  The Problem

Before a blood transfusion is carried out, it is imperative to check that the donor's blood matches the patient's blood. The process of matching involves ensuring that the donor's blood does not contain antigens which would be identified as foreign bodies by the patient's immune system. If the immune system does encounter foreign bodies, it produces antibodies directed against them. The antibodies that are produced by the patient's blood against the red-cell antigens of a donor are called red-cell antibodies. If the patient's blood contains antibodies directed against the red-cell antigens of the donor's blood, this is a case of mismatch. Transfusion of badly matched blood can have many bad consequences, including fever, anemia, and life-threatening kidney failure. Hence the red-cell antibody identification task is of crucial importance to blood banks. In addition to the familiar A, B, and Rh, more than 400 red-cell antigens are known. Once the blood has been tested to determine the patient's A-B-O and Rh blood type, it is necessary to test for the presence of antibodies directed toward other red-cell antigens.


Table 1. Red-cell test panel. The various test conditions, or phases, are listed along the left side (AlbuminIS, etc.) and identifiers for donors of the red cells are given across the top (623A, etc.). Entries in the table record reactions graded from 0, for no reaction, to 4+ for the strongest agglutination reaction, or H for hemolysis. Intermediate grades of agglutination are +/- (a trace of reaction), 1+w (a grade of 1+, but with the modifier "weak"), 1+, 1+s (the modifier means "strong"), 2+w, 2+, 2+s, 3+w, 3+, 3+s, 4+w. Thus, cell 623A has a 3+ agglutination reaction in the Coombs phase.

            623A  479  537A  506A  303A  209A  186A  195  164
AlbuminIS    0    0    0     0     0     0     0     0    0
Albumin37    0    0    0     0     0     0     0     2    1
Coombs       3+   0    3+    0     3+    3+    3+    3+   3+
EnzymeIS     0    0    0     0     0     0     0     0    0
Enzyme37     0    0    1+    0     0     1+    0     1+   0

Typically this identification is performed by using one or more reaction panels of the form shown in Table 1. The columns in the table refer to different applicable donors, while the rows refer to different test conditions. Each entry in the table indicates the reaction shown by a mixed sample of the patient's blood serum and the indicated donor's red blood cells, under the specified test conditions. These entries are produced by the blood-bank technologist to record his visual assessment of the strength and type of reaction. Possible reaction types are agglutination (clumping of cells) and hemolysis (splitting of the cell walls). The strength of a reaction is expressed in the blood-banker's vocabulary, some terms of which are shown in Table 1, and which consists of thirteen possible reaction strengths. Hemolysis reactions were ignored for purposes of this experiment. All 3+ entries were converted into the number 3 for our experiment; similarly, the 1+ values were converted to the number 1, and so on. Reactions indicated as 2+s were converted into the number 2.5, while those marked as 2+w were converted into the number 1.5. Additionally, information about the significant antigens present in each of the donor samples is recorded in a table called the antigram. By reasoning about the pattern of reactions displayed by the reaction panel and using the antigen information present in the donor antigram, the blood-bank technologist attempts to determine which antibodies are present in the patient's serum and are causing the observed reactions, and which are absent, or at least not present in enough strength to cause a reaction. The RED systems were built to automate this reasoning process.
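The stated conversions suggest a simple rule: n+ maps to n, with the 's' and 'w' modifiers adding and subtracting half a grade. A sketch of this assumed pattern (grades not mentioned in the text, such as +/- and H, are deliberately left out):

```python
def reaction_to_number(grade: str) -> float:
    # n+ -> n; the 's' (strong) modifier adds half a grade,
    # the 'w' (weak) modifier subtracts half a grade.
    base = float(grade[0])
    if grade.endswith("s"):
        return base + 0.5
    if grade.endswith("w"):
        return base - 0.5
    return base

# The conversions stated in the text:
assert reaction_to_number("3+") == 3.0
assert reaction_to_number("1+") == 1.0
assert reaction_to_number("2+s") == 2.5
assert reaction_to_number("2+w") == 1.5
```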

4.2  The Red Cell Antibody Identification Problem as an Abduction Problem

The reaction panel shown in Table 1 can be considered as data to be explained. Using the antigrams, which give information about the various antigens present in the donor samples, it is possible to construct hypotheses about the existence of various antibodies in the patient's serum. Each such hypothesis will contain two kinds of information:
– A profile similar to Table 1 representing how much this particular hypothesis can offer to explain of each of the reactions in the panel. This is the most that can be consistently explained by the hypothesis.
– A plausibility value, which is the result of applying rules given by domain experts to the data of the case. In our experiment, this value is an integer between -3 and +3, representing plausibility on a symbolic scale from "ruled out" to "highly plausible".
An example for a certain antibody is given in Table 2. It shows how much of the reactions shown for the case from Table 1 is accounted for by hypothesizing that the antibody AntiNMixed is present in the patient's serum. The plausibility value for the hypothesis is -2.

Table 2. Reaction profile for an individual antibody (AntiNMixed) hypothesis. Note by comparison with the overall reaction panel in Table 1 that the hypothesis only offers to partially explain some of the reactions.

AntiNMixed profile; Plausibility = -2

            623A  479  303A  209A  186A  195
AlbuminIS    0    0    0     0     0     0
Albumin37    0    0    0     0     0     0
Coombs       0.5  0    2     0.5   0.5   2
EnzymeIS     0    0    0     0     0     0
Enzyme37     0    0    0     0.5   0     0.5

Table 2 does not contain as many columns as Table 1 because the hypothesis cannot explain any of the reactions pertaining to those columns. The same kind of profile is created for all of the other non-ruled-out antibodies. Hence, given a case, the following inputs are present:
1. The reaction panel as indicated in Table 1.
2. A plausibility value for each antibody.
3. A reaction profile for each antibody whose plausibility value is not -3, i.e., which has not been "ruled out."
The desired output will be a set of antibodies which best explains the reactions, along with the plausibility values associated with them. The above problem can now be seen as an abduction problem with the following mapping:
1. The reaction panel represents the data, D, to be explained.
2. The individual antibodies which have not been "ruled out", and all possible composite hypotheses that can be generated from them, represent the set of possible explanatory hypotheses, E.


The abduction problem is one of finding the hypothesis which best explains the reactions in the reaction panel. However, sometimes the evidence will be insufficient and there will be no unique best explanation.

5  Evaluation Criteria for Hypotheses in the RED Domain

In this section, we describe how the evaluation criteria for the hypotheses in the RED domain were computed from the given information. For a given problem, the set E of all possible explanatory hypotheses was created as described next. First, the antibodies which are ruled out (i.e., those with plausibility value -3) are no longer considered as potentially explanatory hypotheses for the problem; such hypotheses are excluded from the set E. The set S of simple hypotheses may be defined as follows:

S = { Ai : Ai hypothesizes the presence of a particular antibody and the plausibility of Ai is not -3 }

Now, the set E of applicable hypotheses is defined as the set of all possible hypotheses obtainable as combinations (conjunctions) of the simple hypotheses in S. The set C = E − S is therefore the set of all composite hypotheses, which hypothesize the presence of more than a single antibody to explain the reactions. Thus, if we suppose A1, A2, A3, ..., Ak to be the individual hypotheses for the presence of single antibodies which have not been ruled out in advance, then the hypothesis A4 is an example of a simple hypothesis, while {A2, A3, Ak} is an example of a 3-part composite hypothesis. Obviously, the largest composite hypothesis has size k and includes all of the simple hypotheses in the set S as its parts. Next, we discuss how some of the criteria mentioned in Section 3 were computed for the set E above.
1. Explanatory Power: Given the values in the reaction panel and the reaction profiles, Ri, for the simple hypotheses, Ai, one way to quantify the explanatory power of a simple hypothesis is to compute the sum of all the values in its reaction profile table. Since each individual entry in the reaction profile offers to explain an observed reaction as consistently as possible, the sum over the reaction profile matrix is indicative of the overall explanatory power of the simple hypothesis. This value is used as a heuristic measure of the explanatory power of the simple hypothesis. For a composite hypothesis, the reaction profile is constructed from the profiles of its parts: it is the entry-wise sum of the reactions in the individual reaction profiles, with each entry capped at the reaction strength that needs to be explained. More formally, the explanatory power E was computed as

    ∀H ∈ E,  E(H) = Σ_{a,b} R(a, b)    (1)

where R is the reaction profile of H and the sum ranges over all entries (a, b) of the profile.
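The construction of a composite profile and the heuristic E(H) can be sketched as follows (profiles are represented here as dictionaries keyed by (phase, donor) cells; the data values are invented for illustration):

```python
def composite_profile(panel, part_profiles):
    # Entry-wise sum of the parts' profiles, with each entry capped at
    # the reaction strength in the panel that needs to be explained.
    return {cell: min(sum(p.get(cell, 0.0) for p in part_profiles), strength)
            for cell, strength in panel.items()}

def explanatory_power(profile):
    # Heuristic E(H): the sum of all entries in the reaction profile.
    return sum(profile.values())

# A toy panel with two (phase, donor) cells, and two simple hypotheses.
panel = {("Coombs", "623A"): 3.0, ("Enzyme37", "537A"): 1.0}
r1 = {("Coombs", "623A"): 2.0}
r2 = {("Coombs", "623A"): 2.0, ("Enzyme37", "537A"): 1.0}

e = explanatory_power(composite_profile(panel, [r1, r2]))
# The Coombs entry is capped at 3.0, so e = 3.0 + 1.0 = 4.0
```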


2. Implausibility: The plausibility, pi, for each of the simple hypotheses, Ai, is already available as part of the input. Since -3 is the lowest degree of plausibility that is assigned to a simple hypothesis, implausibility can be computed by a heuristic measure which produces low values of implausibility for high values of plausibility, and so on. The exact form of this function is not important, since the values themselves are meant to be used only for relative comparisons. In our experiment, the implausibility I was computed as

    ∀H ∈ E,  I(H) = Σ_{Aj ∈ H} (4 − pj)

3. Simplicity: There are at least two ways to define this criterion.
a) Cardinality: Simplicity can be defined in terms of the number of parts in a hypothesis, that is, its cardinality. Note that we would want to minimize this value in order to maximize simplicity. However, this measure gives a better score to a hypothesis like {A2, A6} relative to another hypothesis like {A1, A3, A7} based merely on the difference in their structural simplicities.
b) Inclusion simplicity: This measure cannot be quantified on a per-hypothesis basis like the previous ones. However, when comparing two composite hypotheses, say H1 and H2, we say that H1 is better in inclusion-simplicity than H2 if and only if all of the constituent parts of H1 are present in H2 as well. In all other cases, the two hypotheses are considered incomparable in simplicity. This measure makes sure that the least complex hypothesis is preferred to a more complex one that explains no more.
For the RED domain experiment, only two of the above criteria were used, because the implausibility value, as defined previously, already carries the information carried by the inclusion-simplicity criterion: adding another part to a composite hypothesis always reduces its plausibility, so an included hypothesis will always be more plausible than the including one. Consequently, if k simple hypotheses are not ruled out in advance, the abduction problem involves as many hypotheses as the total number of possible combinations of the k simple hypotheses. In other words, the problem becomes a (2^k − 1)-alternative, 2-criteria MCDM problem. In the next section, we discuss the results of applying the S-F-V architecture to this MCDM problem.
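Generating the (2^k − 1)-element hypothesis space and computing the implausibility measure can be sketched as follows (the plausibility values here are invented for illustration):

```python
from itertools import combinations

def implausibility(parts, plausibility):
    # I(H) = sum over parts Aj of (4 - pj); lower is better.  Adding a
    # part always increases I, which is why inclusion-simplicity is
    # already reflected in this measure.
    return sum(4 - plausibility[a] for a in parts)

def all_hypotheses(simple):
    # All non-empty subsets of the k surviving simple hypotheses:
    # 2^k - 1 candidates in total.
    for r in range(1, len(simple) + 1):
        for combo in combinations(simple, r):
            yield frozenset(combo)

p = {"A1": 2, "A2": -1, "A3": 0}        # plausibilities on the -3..+3 scale
hyps = list(all_hypotheses(sorted(p)))  # 2^3 - 1 = 7 candidate hypotheses
i12 = implausibility({"A1", "A2"}, p)   # (4 - 2) + (4 - (-1)) = 7
```

Note that implausibility({"A1"}, p) = 2 < 7, illustrating that the included hypothesis is always more plausible than the including one.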

6  Results of Applying the S-F-V Architecture

It is to be noted that the potential number of explanatory hypotheses in the RED domain is exponential in the number of simple hypotheses that have not been ruled out at the outset. Considering that up to 30 clinically significant red-cell antigens are known, the total number of alternatives is potentially quite large. Hence, the ability of the dominance filter to prune effectively becomes valuable in reducing the complexity of the problem. The results shown below are for the case labeled OSU-9 in the RED domain, as described in [2]; the reaction panel shown in Table 1 refers to the same case. This case resulted in 15 simple hypotheses which could not be ruled out based on the evidence at the outset. Thus we have a total of 2^15 − 1 = 32,767 potential explanatory hypotheses, a formidable number. The Seeker generates this set by building the exhaustive set of combinations starting with the 15 simple hypotheses. In the process of generation, the Seeker also evaluates the hypotheses along the various criteria, using the heuristic measures indicated in the previous section. It then makes this set of multicriterially evaluated hypotheses available to the Filter. The Filter, after applying the dominance rule using implausibility and explanatory power as the criteria, produces a Pareto-optimal set containing only 3 hypotheses! In other words, as long as the goal is to find the most plausible hypothesis which best explains the reactions, there is no need to consider the 32,764 eliminated alternatives; dominance ensures that each of them is inferior to one of the survivors. The 3 surviving hypotheses are plotted as points in a Viewer scatter plot shown below, with implausibility and explanatory power as the axes. The labels for each point show the composition

Fig. 1. Plot showing the 3 survivors of dominance applied to the case OSU-9 from the RED domain

of the individual hypotheses. We see that of the 3 survivors, one is a simple hypothesis, and in fact this hypothesis, A8, occurs in each of the remaining two composite hypotheses, {A8, A5} and {A8, A12}. Figure 1 also shows the trade-offs available to the user of such a system. Such a trade-off is typical of many abduction problems, where the ability to explain more comes at a cost in the confidence associated with the explanation. By using this plot, which is displayed by the Viewer in the S-F-V architecture, the user can exercise his trade-off judgments by selecting the point of interest to him. For example, to get greater explanatory coverage than that provided by the simple hypothesis, the user can see from the plot that he will need to accept an increase in implausibility. The composite {A8, A12}, shown as the middle point in the plot, allows for one step of trade-off in the direction of explaining more, with a resulting increase in implausibility. Similarly, the point at the extreme top right explains the most but is also the most implausible of the three potential explanatory hypotheses. Figure 2 shows similar trade-offs for another experimental case. This plot shows more clearly how moving from the leftmost point to the next results in a considerable increase in explanatory coverage, while the resulting increase in implausibility is not as large. Conversely, looking at the rightmost pair of points, we see that a very small increase in explanatory coverage is obtained by incurring a quite large increase in implausibility. This illustrates how different kinds of trade-off judgments can be brought to bear in choosing between competing hypotheses, even if both are Pareto optimal. This plot also shows how choice between

Fig. 2. Plot showing the survivors of dominance applied to the case Pat-32 from the RED domain

multicriterially best explanations involves trade-offs. The choice of an appropriate hypothesis will depend upon the willingness of the user (in this case, the person administering the blood) to hypothesize the presence or absence of an antibody, according to the urgency of the situation and other risk-based considerations. Alternatively, if additional knowledge becomes available at a later stage of the problem, it may be used to rule out some of the surviving hypotheses.

7  Conclusions

We have shown how the MCDM perspective applies to abductive reasoning. IBE (inference-to-the-best-explanation) problems are inherently multicriterial, and the criteria need not be commensurable. Even so, a well-defined notion of multicriterially best explanations can be given. Such best explanations need not be unique; however, computer-aided visualization of the alternatives can help humans choose from among the multicriterially best hypotheses. It is worth noting that if there is indeed a single hypothesis that is the most plausible, explains the most, and so on, then that hypothesis will be the sole survivor of the dominance filter: being best along all of the evaluation criteria, it dominates every other alternative, by the definition of dominance given earlier. Moreover, MCDM techniques can help reduce the complexity of the problem. One can envision scientists using powerful computerized decision aids like the S-F-V architecture in the future to help solve complex problems of discovery. Acknowledgments. This material is based upon work supported by The Office of Naval Research under Grant No. N00014-96-1-0701. The support of ONR and the DARPA RaDEO program is gratefully acknowledged. Standard disclaimers apply.

References
1. Josephson, J.R., Chandrasekaran, B., Carroll, M., Iyer, N., Wasacz, B., Rizzoni, G., Li, Q., Erb, D.A.: An Architecture for Exploring Large Design Spaces. In: Proceedings of the National Conference of the American Association for Artificial Intelligence, Madison, Wisconsin, pp. 143-150 (1998)
2. Josephson, J.R., Josephson, S.G.: Abductive Inference: Computation, Philosophy, Technology. Cambridge University Press (1994)
3. Harman, G.: The Inference to the Best Explanation. Philosophical Review, Vol. 74, pp. 88-95 (1965)
4. Lycan, W.G.: Judgement and Justification. Cambridge University Press (1988)
5. Keeney, R.L., Raiffa, H.: Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley (1976)
6. Calpine, H.C., Golding, A.: Some Properties of Pareto Optimal Choices in Decision Problems. International Journal of Management Science, Vol. 4, No. 2, pp. 141-147 (1976)
7. Bentley, J.L., Kung, H.T., Schkolnick, M., Thompson, C.D.: On the Average Number of Maxima in a Set of Vectors and Applications. Journal of the ACM, Vol. 25, No. 4, pp. 536-543 (1978)
8. Josephson, J.R.: Abduction-Prediction Model of Scientific Discovery Reflected in a Prototype System for Model-Based Diagnosis. Philosophica, Vol. 61, No. 1, pp. 9-17 (1998)
9. Chandrasekaran, B.: Functional and Diagrammatic Representation for Device Libraries. Technical Report, The Ohio State University (2000)
10. Miettinen, K.: Nonlinear Multiobjective Optimization. International Series in Operations Research and Management Science, Kluwer Academic Publishers (1999)

Towards Discovery of Deep and Wide First-Order Structures: A Case Study in the Domain of Mutagenicity

Tamás Horváth (1) and Stefan Wrobel (2)

(1) Institute for Autonomous Intelligent Systems, Fraunhofer Gesellschaft, Schloß Birlinghoven, D-53754 Sankt Augustin, tamas.horvath@fhg.de
(2) Otto-von-Guericke-Universität Magdeburg, IWS, P.O. Box 4120, D-39106 Magdeburg, wrobel@iws.cs.uni-magdeburg.de

Abstract. In recent years, it has been shown that methods from Inductive Logic Programming (ILP) are powerful enough to discover new ﬁrst-order knowledge from data, while employing a clausal representation language that is relatively easy for humans to understand. Despite these successes, it is generally acknowledged that there are issues that present fundamental challenges for the current generation of systems. Among these, two problems are particularly prominent: learning deep clauses, i.e., clauses where a long chain of literals is needed to reach certain variables, and learning wide clauses, i.e., clauses with a large number of literals. In this paper we present a case study to show that by building on positive results on acyclic conjunctive query evaluation in relational database theory, it is possible to construct ILP learning algorithms that are capable of discovering clauses of signiﬁcantly greater depth and width. We give a detailed description of the class of clauses we consider, describe a greedy algorithm to work with these clauses, and show, on the popular ILP challenge problem of mutagenicity, how indeed our method can go beyond the depth and width barriers of current ILP systems.

1  Introduction

In recent years, it has been shown that methods from Inductive Logic Programming (ILP) [23,32] are powerful enough to discover new first-order knowledge from data, while employing a clausal representation language that is relatively easy for humans to understand. Despite these successes, it is generally acknowledged that there are issues that present fundamental challenges for the current generation of systems. Among these, two problems are particularly prominent: learning deep clauses, i.e., clauses where a long chain of literals is needed to reach certain variables, and learning wide clauses, i.e., clauses with a large number of interconnected literals.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 100-112, 2001.
© Springer-Verlag Berlin Heidelberg 2001

Towards Discovery of Deep and Wide First-Order Structures

101

In current ILP systems, these challenges are reflected in system parameters that bound the depth and width of the clauses, respectively. Practical experience in applications shows that tractable runtimes are achieved only when setting these parameters to small values; in fact, it is not uncommon to limit the depth of clauses to two or three, and their width to four or five. In a recent study, Giordana and Saitta [10] have shown, based on empirical simulations, that there indeed seems to be a fundamental limit for current ILP systems, and that this limit might in large part be due to the extreme growth of matching costs, i.e., the cost of determining whether a clause covers a given example.

Thus, if matching costs could be reduced, it should be possible to learn clauses of significantly greater depth and width than currently achievable. In this paper, we present an ILP algorithm and a case study which provide evidence that this is indeed the case. In our algorithm, we build on positive complexity results on conjunctive query evaluation from the area of relational database theory, and employ the class of acyclic conjunctive queries, for which the matching problem is known to be tractable. In the domain of mutagenicity, we show that using our algorithm it is indeed possible to discover structural relationships that must be expressed in clauses of significantly greater depth and width than those currently learnable. In fact, the additional predictive power gained by these deep and wide structures has allowed us to reach a predictive accuracy comparable to that attained in previous studies, without using the additional numerical information available in those experiments.

The paper is organized as follows. In Section 2, we briefly introduce the learning problem usually considered in ILP. In Section 3, we give a more detailed introduction to the matching problem and discuss the state of the art in related work. In Section 4, we then formally define the class of acyclic clauses used in this work and describe its properties. Section 5 discusses our greedy algorithm, which uses this class of clauses to perform ILP learning. Section 6 contains our case study in the domain of mutagenesis, and Section 7 concludes.

2 The ILP Learning Problem

The ILP learning problem is often simply defined as follows (see, e.g., [32]).

Definition 1 (ILP prediction learning problem). Given
– a vocabulary consisting of finite sets of function and predicate symbols,
– a background knowledge language LB, an example language LE, and a hypothesis language LH, all over the given vocabulary,
– background knowledge B expressed in LB, and
– sets E+ and E− of positive and negative examples expressed in LE such that B is consistent with E+ and E− (B ∪ E+ ∪ E− ⊭ □),

find a learning hypothesis H ∈ LH such that

102

T. Horv´ ath and S. Wrobel

(i) H is complete, i.e., together with B it entails the positive examples (H ∪ B |= E+), and
(ii) H is correct, i.e., it is consistent with the negative examples (H ∪ B ∪ E+ ∪ E− ⊭ □).

This problem is called the prediction learning problem because the learning hypothesis H must be such that, together with B, it correctly predicts (derives, covers) the positive examples, and does not predict (derive, cover) the negation of any negative example as true (otherwise the hypothesis would be inconsistent with the negative examples). For instance, if flies(tweety) is a positive example and ¬flies(bello) a negative one, then flies(bello) must not be predicted¹.

In order to decide conditions (i) and (ii) in the above definition, one has to decide for a single e ∈ E+ ∪ E− whether H ∪ B |= e. This decision problem is called the matching or membership problem. We note that in the general problem setting defined above, the membership problem is not decidable. Therefore, in most cases implication is replaced by clause subsumption, defined as follows. Let C1 and C2 be first-order clauses. We say that C1 subsumes C2, denoted by C1 ≤ C2, if there is a substitution θ (a mapping of C1's variables to C2's terms) such that C1θ ⊆ C2 (for more details see, e.g., [25]).
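The subsumption test just defined can be made concrete with a small self-contained sketch. The encoding below (clauses as sets of `(predicate, args)` tuples, uppercase strings as variables) is our own illustration, not from the paper; it also hints at why matching is expensive, since the backtracking search may try exponentially many candidate substitutions.

```python
def subsumes(c1, c2):
    """Decide whether clause C1 theta-subsumes clause C2, i.e. whether there
    is a substitution theta with C1*theta a subset of C2.  Clauses are sets of
    literals (pred, args); strings starting uppercase are variables."""

    def is_var(t):
        return t[:1].isupper()

    def match(lit, ground, theta):
        """Try to extend theta so that lit*theta equals the ground literal."""
        (p, args), (q, gargs) = lit, ground
        if p != q or len(args) != len(gargs):
            return None
        theta = dict(theta)
        for a, g in zip(args, gargs):
            if is_var(a):
                if theta.setdefault(a, g) != g:   # variable already bound?
                    return None
            elif a != g:                          # constant mismatch
                return None
        return theta

    def search(lits, theta):
        # Backtracking over all ways of mapping C1's literals into C2:
        # worst-case exponential in the width of C1.
        if not lits:
            return True
        for ground in c2:
            t = match(lits[0], ground, theta)
            if t is not None and search(lits[1:], t):
                return True
        return False

    return search(sorted(c1), {})
```

For instance, `{e(X, Y), e(Y, Z)}` subsumes `{e(a, b), e(b, c)}` via θ = {X/a, Y/b, Z/c}, while `{e(X, Y), e(Y, X)}` does not, since the second set contains no 2-cycle.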

3 The Matching Problem: State of the Art

One of the reasons why the width and depth of the clauses in the hypothesis language are usually bounded by a small constant is that even in strongly restricted ILP problem settings the membership problem is still NP-complete. For instance, consider the ILP prediction learning problem where (non-constant) function symbols are not allowed in the vocabulary, the background knowledge is an extensional database (i.e., it consists of ground atoms), examples are ground atoms, and the hypothesis language is a subset of the set of definite non-recursive first-order Horn clauses, or in other words, a subset of the set of conjunctive queries [1,30]. This is one of the problem settings most frequently considered in real-world ILP applications.

Although in this setting the membership problem, i.e., the problem of deciding whether a conjunctive query implies a ground atom with respect to an extensional database, and implication between conjunctive queries are both decidable, they are still NP-complete [6]. In the ILP community, both of these problems are viewed as instances of the clause subsumption problem, because implication is equivalent to clause subsumption in the problem setting considered (see, e.g., [11]). These decision problems play a central role, e.g., in top-down ILP approaches (see, e.g., [25] for an overview), where the algorithm starts with an overly general clause, for instance with the empty clause, and specializes it step by step until a clause is found that satisfies the requirements defined by the user.

¹ Strictly speaking, the above setting only refers to the training error of a hypothesis, while ILP systems actually seek to minimize the true error on future examples.


As mentioned above, subsumption between first-order clauses is one of the most important operators used in different ILP methods. Since the clause subsumption problem is known to be NP-complete, different approaches can be found in the literature that try to solve it in polynomial time. Among these, we refer to the technique of identifying tractable subclasses of first-order clauses (see, e.g., [12,18,26]), to the earlier mentioned phase transitions in matching [10], and to stochastic matching [27].

In general, the clause subsumption problem can be considered as a homomorphism problem between the relational structures that correspond to the clauses, as one has to find a function between the universes of the structures that preserves the relations (see, e.g., [16]). Homomorphisms between relational structures appear in query evaluation problems in relational database theory and in the constraint-satisfaction problem in artificial intelligence (see, e.g., [19]). In particular, from the point of view of computational complexity, the query evaluation problem for the above mentioned class of conjunctive queries is well studied. Research in this field goes back to the seminal paper by Chandra and Merlin [6] in the late seventies, who showed that the problem of evaluating a conjunctive query with respect to a relational database is NP-complete. In [33], Yannakakis has shown that query evaluation becomes computationally tractable if the set of literals in the query forms an acyclic hypergraph; this class of conjunctive queries is called the class of acyclic conjunctive queries. In [13], Gottlob, Leone, and Scarcello have shown that acyclic conjunctive query evaluation is LOGCFL-complete. The relevance of this result, besides providing the precise complexity of acyclic conjunctive query evaluation, is that acyclic conjunctive query evaluation is highly parallelizable due to the nature of LOGCFL.
The positive complexity result of Yannakakis was then extended by Chekuri and Rajaraman [7] to cyclic queries of bounded query-width. Despite the fact that the class of conjunctive queries is one of the most frequently considered hypothesis languages in ILP, and that acyclic conjunctive queries form a practically relevant class of database queries, to our knowledge only the recent paper [15] by Hirata has so far been concerned with acyclic conjunctive queries from the point of view of learnability². In that paper, Hirata has shown that, under widely believed complexity assumptions, a single acyclic conjunctive query is not polynomially predictable, and hence not polynomially PAC-learnable [31]. This means that even though the membership problem for acyclic clauses is decidable in polynomial time, under worst-case assumptions the problem of learning these clauses is hard, so that practical learning algorithms, such as the one presented in Section 5, must resort to heuristic methods.

² The notion of acyclicity appears in the literature of ILP (see, e.g., [2]), but is different from the one considered in this paper.

4 Acyclic Conjunctive Queries

In this section we give the necessary notions related to the acyclic conjunctive queries considered in this work. For a detailed introduction to acyclic conjunctive queries the reader is referred to, e.g., [1,30] or to the long version of [13].

For the rest of this paper, we assume that the vocabulary in Definition 1 consists of a set of constant symbols, a distinguished predicate symbol called the target predicate, and a set of predicates called the background predicates. Thus, (non-constant) function symbols are not included in the vocabulary. Examples are ground atoms of the target predicate, and the background knowledge is an extensional database consisting of ground atoms of the background predicates. Furthermore, we assume that hypotheses in LH are definite non-recursive first-order clauses, or in the terminology of relational database theory, conjunctive queries of the form

L0 ← L1, ..., Ll

where L0 is a target atom and Li is a background atom for i = 1, ..., l. In what follows, by Boolean conjunctive queries we mean first-order goal clauses of the form ← L1, ..., Ll, where the Li's are all background atoms.

In order to define a special class of conjunctive queries, called acyclic conjunctive queries, we first need the notion of acyclic hypergraphs. A hypergraph (or set-system) H = (V, E) consists of a finite set V of vertices and a family E of subsets of V called hyperedges. A hypergraph is α-acyclic [9], or simply acyclic, if one can remove all of its vertices and hyperedges by repeatedly deleting either a hyperedge that is empty or is contained in another hyperedge, or a vertex contained in at most one hyperedge [14,34]. Note that acyclicity as defined here is not a hereditary property, in contrast to, e.g., the standard notion of acyclicity in ordinary undirected graphs: it may happen that an acyclic hypergraph has a cyclic subhypergraph.
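The deletion procedure in this definition (often called GYO reduction in the database literature) can be executed directly. The following sketch is our own illustration of it, not code from the paper; it repeatedly applies the two deletion rules until either the hypergraph is empty (acyclic) or neither rule fires (cyclic).

```python
def is_alpha_acyclic(vertices, hyperedges):
    """GYO-style reduction: a hypergraph is alpha-acyclic iff repeatedly
    deleting (a) a hyperedge that is empty or contained in another hyperedge,
    or (b) a vertex contained in at most one hyperedge, empties it."""
    V = set(vertices)
    E = [set(e) for e in hyperedges]
    changed = True
    while changed and (V or E):
        changed = False
        # Rule (a): delete an empty or covered hyperedge.
        for i, e in enumerate(E):
            if not e or any(j != i and e <= f for j, f in enumerate(E)):
                del E[i]
                changed = True
                break
        if changed:
            continue
        # Rule (b): delete a vertex occurring in at most one hyperedge.
        for v in list(V):
            if sum(v in e for e in E) <= 1:
                for e in E:
                    e.discard(v)
                V.discard(v)
                changed = True
                break
    return not V and not E
```

On the example discussed next in the text, the hypergraph with hyperedges {a, b}, {b, c}, {a, c}, {a, b, c} reduces to the empty hypergraph, while its subhypergraph without {a, b, c} gets stuck immediately.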
For example, consider the hypergraph H = ({a, b, c}, {e1, e2, e3, e4}) with e1 = {a, b}, e2 = {b, c}, e3 = {a, c}, and e4 = {a, b, c}. This is an acyclic hypergraph: one can first remove the hyperedges e1, e2, e3 (as they are subsets of e4), then the three vertices, and finally obtain the empty hypergraph by removing the empty hyperedge that remains of e4. On the other hand, the hypergraph H′ = ({a, b, c}, {e1, e2, e3}), which is a subhypergraph of H, is cyclic, as no vertex or hyperedge can be deleted by the above definition. In [9], other degrees of acyclicity are also considered, and it is shown that among them, α-acyclic hypergraphs form the largest class, properly containing the other classes.

Using the above notion of acyclicity, we are now ready to define the class of acyclic conjunctive queries. Let Q be a conjunctive query and L a literal of Q. We denote by Var(Q) (resp. Var(L)) the set of variables occurring in Q (resp. L). We say that Q is acyclic if the hypergraph H(Q) = (V, E) with V = Var(Q) and E = {Var(L) : L is a literal in Q} is acyclic. For instance, from the conjunctive


queries

P(X, Y, X) ← R(X, Y), R(Y, Z), R(Z, X)
P(X, Y, Z) ← R(X, Y), R(Y, Z), R(Z, X)

the first one is cyclic, while the second one is acyclic.

In [3] it is shown that the class of acyclic conjunctive queries is identical to the class of conjunctive queries that can be represented by join forests [4]. Given a conjunctive query Q, a join forest JF(Q) representing Q is an ordinary undirected forest whose vertices are the literals of Q, such that for each variable x ∈ Var(Q) the subgraph of JF(Q) induced by the vertices containing x is connected (i.e., it is a tree).

Now we show how to use join forests for efficient acyclic query evaluation. Let E be a set of ground target atoms and B the background knowledge as defined at the beginning of this section, and let Q be an acyclic conjunctive query with join forest JF(Q). In order to find the subset E′ ⊆ E implied by Q with respect to B, we can apply the following method. Let T0, T1, ..., Tk (k ≥ 0) denote the connected components of JF(Q), where T0 denotes the tree containing the head of Q, and let Qi ⊆ Q denote the query represented by Ti for i = 0, ..., k. The definition of the Qi's implies that they form a partition of the set of literals of Q such that literals belonging to different blocks do not share common variables. Therefore, the subqueries Q0, ..., Qk can be evaluated separately: if there is an i, 1 ≤ i ≤ k, such that the Boolean conjunctive query Qi is false with respect to B, then Q implies none of the elements of E with respect to B; otherwise Q and Q0 imply the same subset of E with respect to B. By definition, Q0 implies an atom e ∈ E if there is a substitution mapping the head of Q0 to e and the atoms in its body into B, and Qi (1 ≤ i ≤ k) is true with respect to B if there is a substitution mapping Qi's atoms into B.
That is, using algorithm Evaluate given below, Q implies E′ ⊆ E with respect to B if and only if

(E′ ⊆ Evaluate(B ∪ E, T0)) ∧ ⋀_{i=1}^{k} (Evaluate(B, Ti) ≠ ∅).
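A minimal executable version of this bottom-up semijoin scheme (algorithm Evaluate itself is given below in the text) can be sketched as follows. The encoding is our own: a database maps predicate names to ground tuples, a join tree is a pair of an atom and its subtrees, and uppercase strings are variables.

```python
def substitutions(atom, db):
    """All substitutions theta (as dicts) such that atom*theta is a fact in db."""
    pred, args = atom
    result = []
    for fact in db.get(pred, ()):
        if len(fact) != len(args):
            continue
        theta, ok = {}, True
        for a, g in zip(args, fact):
            if a[:1].isupper():                   # variable
                if theta.setdefault(a, g) != g:
                    ok = False
                    break
            elif a != g:                          # constant mismatch
                ok = False
                break
        if ok:
            result.append(theta)
    return result


def evaluate(db, tree):
    """Join-tree evaluation: tree = (atom, list_of_subtrees).  Returns the
    substitutions for the root atom that survive a natural semijoin with
    every child subtree, working bottom-up over the join tree."""
    atom, children = tree
    r = substitutions(atom, db)
    for child in children:
        s = evaluate(db, child)
        # Semijoin: keep theta in r agreeing with some sigma in s on shared vars.
        r = [t for t in r
             if any(all(t[v] == sig[v] for v in t.keys() & sig.keys())
                    for sig in s)]
    return r
```

For a root r(X, Y) with child s(Y, Z) over the facts r(a, b), r(c, d), s(b, e), only the substitution {X/a, Y/b} survives the semijoin; each relation is scanned once per tree edge, which is the source of the polynomial bound.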

It remains to discuss the problem of how to compute a join forest for an acyclic conjunctive query. Using maximal weight spanning forests of ordinary graphs, Bernstein and Goodman [4] give the following method for this problem. Let Q be an acyclic conjunctive query, and let G(Q) = (V, E, w) be a weighted graph with vertex set V = {L : L is a literal of Q}, edge set E = {(u, v) : Var(u) ∩ Var(v) ≠ ∅}, and weight function w : E → ℕ defined by

w : (u, v) → |Var(u) ∩ Var(v)|.

Let MSF(Q) be a maximal weight spanning forest of G(Q). Note that maximal weight spanning forests can be computed in polynomial time (see, e.g., [8]). It holds that if Q is acyclic then MSF(Q) is a join forest representing Q. In addition, given a maximal weight spanning forest MSF(Q) of a conjunctive query


algorithm Evaluate
input:  extensional database D and join tree T with root labeled by n0
output: {n0θ : θ is a substitution mapping the nodes of T into D}

  let R = {n0θ : θ is a substitution mapping n0 into D}
  let the children of n0 be labeled by n1, ..., nk (k ≥ 0)
  for i = 1 to k
      S = Evaluate(D, Ti)    // Ti is the subtree of T rooted at ni
      R = the natural semijoin of R and S wrt. n0 and ni
  endfor
  return R

Q, instead of using the method given in the definition of acyclic hypergraphs, one can decide whether Q is acyclic by checking whether the equation

Σ_{(u,v) ∈ MSF(Q)} w(u, v) = Σ_{x ∈ Var(Q)} (Class(x) − 1)    (1)

holds, where Class(x) denotes the number of literals in Q that contain x (see also [4]).³
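Equation (1) yields a simple executable acyclicity test. The sketch below (our own encoding, with literals as `(predicate, args)` tuples and uppercase strings as variables) builds the weighted literal graph and a maximal weight spanning forest with a Kruskal-style greedy pass over edges sorted by descending weight, using union-find to avoid cycles.

```python
from itertools import combinations

def variables(literal):
    """Variables of a literal (pred, args): strings starting uppercase."""
    return {a for a in literal[1] if a[:1].isupper()}

def msf_weight(literals):
    """Total weight of a maximal weight spanning forest of G(Q),
    where w(u, v) = |Var(u) & Var(v)|."""
    var = [variables(l) for l in literals]
    edges = sorted(((len(var[i] & var[j]), i, j)
                    for i, j in combinations(range(len(literals)), 2)
                    if var[i] & var[j]), reverse=True)
    parent = list(range(len(literals)))          # union-find forest
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]        # path halving
            x = parent[x]
        return x
    total = 0
    for w, i, j in edges:                        # greedy: heaviest edges first
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            total += w
    return total

def is_acyclic_query(literals):
    """Equation (1): Q is acyclic iff the MSF weight equals the sum over
    all variables x of (Class(x) - 1)."""
    allvars = set().union(*(variables(l) for l in literals))
    rhs = sum(sum(1 for l in literals if x in variables(l)) - 1
              for x in allvars)
    return msf_weight(literals) == rhs
```

On the two example queries of Section 4, the cyclic query P(X, Y, X) ← R(X, Y), R(Y, Z), R(Z, X) has MSF weight 4 but right-hand side 5, while for the acyclic query P(X, Y, Z) ← R(X, Y), R(Y, Z), R(Z, X) both sides equal 6.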

5 A Greedy Algorithm

The goal of our learning algorithm is to discover sets of acyclic clauses that together are correct and complete. From the results of [16] on learning multiple clauses it follows that this problem is NP-hard, so we resort to a greedy sequential covering algorithm (see, e.g., [21]), as is commonplace in ILP. Our sequential covering algorithm takes as input the background knowledge B and the set E of examples, calls the subroutine SingleClause to find an acyclic conjunctive query Q, then updates E by removing the positive examples implied by Q with respect to B, and starts the process again until no new rule is found by the subroutine. It finally outputs the set of acyclic conjunctive queries discovered.

We now turn to the problem of how to find a single acyclic conjunctive query⁴. In order to give the details of the subroutine SingleClause called by

³ The reason why Class(x) − 1 is used in (1) is that the number of edges in a tree is equal to its number of vertices minus 1.
⁴ We note that the general problem of finding a single consistent and complete (not necessarily acyclic) conjunctive query is PSPACE-hard [17], and it is an open problem whether it belongs to PSPACE (see also [16]). On the other hand, it is not known whether it remains PSPACE-hard for the class of acyclic conjunctive queries considered in this work, or for the other three classes corresponding to β-, γ-, and Berge-acyclicity discussed in [9].


the algorithm, we first need the notion of refinement operators (see Chapter 17 of [25] for an overview). We recall that the special ILP problem setting defined at the beginning of the previous section is considered. Fix the vocabulary and let L denote the set of acyclic conjunctive queries over the vocabulary. A downward refinement operator is a function ρ : L → 2^L such that Q1 ≤ Q2 for every Q1 ∈ L and Q2 ∈ ρ(Q1).

algorithm SingleClause
input:  background knowledge B and set E = E+ ∪ E− of examples
output: either ∅ or an acyclic conjunctive query Best satisfying
        |Covers(Best, B, E+)|/|E+| ≥ Pcov and Accuracy(Best, B, E) ≥ Pacc

  Beam = {P(x1, ..., xn) ←}    // P denotes the target predicate
  Best = ∅
  LastChange = 0
  repeat
      NewBeam = ∅
      forall C ∈ Beam
          forall C′ ∈ ρ(C)
              if |Covers(C′, B, E+)|/|E+| ≥ Pcov then
                  if Accuracy(C′, B, E) ≥ max(Pacc, Accuracy(Best, B, E)) then
                      Best = C′
                      LastChange = 0
                  endif
                  update NewBeam by C′
              endif
          endfor
      endfor
      LastChange = LastChange + 1
      Beam = NewBeam
  until Beam = ∅ or LastChange > Pchange
  return Best

Algorithm SingleClause applies beam search to find a single acyclic conjunctive query. Its input is B and the current set E of examples. It returns an acyclic conjunctive query Best that covers a sufficiently large part (defined by Pcov) of the positive examples and has accuracy at least Pacc, where Pcov and Pacc are user-defined parameters. If no such acyclic conjunctive query is found, it returns the empty set. In each iteration of the outer (repeat) loop, the algorithm computes the refinements of each acyclic conjunctive query in the beam stack, and if a refinement is found that is better than the best one discovered so far, it becomes the new best candidate. The beam stack is updated according to the rules' quality as measured by Accuracy. Finally, we note that the outer loop terminates if no candidate refinement has been generated


or if the best rule has not changed in the last Pchange iterations of the outer loop, where Pchange is a user-defined parameter.
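The outer sequential covering loop described at the beginning of this section can be sketched as follows. This is a schematic illustration only: `single_clause` stands in for algorithm SingleClause (returning None when no acceptable rule is found) and `covers` for the acyclic-query matching test; both are supplied by the caller in this toy version.

```python
def sequential_covering(background, pos, neg, single_clause, covers):
    """Greedy covering: repeatedly find a rule, remove the positives it
    covers, and stop when no further (or no productive) rule is returned."""
    rules, remaining = [], set(pos)
    while remaining:
        rule = single_clause(background, remaining, neg)
        if rule is None:
            break
        covered = {e for e in remaining if covers(rule, background, e)}
        if not covered:            # guard against a rule that makes no progress
            break
        rules.append(rule)
        remaining -= covered
    return rules
```

Even with stand-in numeric "rules" the control flow is visible: a rule covering examples 1 and 2 is found first, then one covering 3 and 4, and the loop stops once every positive example is covered.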

6 Case Study: Mutagenesis

Chemical mutagens are natural or artificial compounds that are capable of causing permanent transmissible changes in DNA. Such changes or mutations may involve small gene segments as well as whole chromosomes. Carcinogenic compounds are chemical mutagens that harmfully alter the DNA's structure or sequence, causing cancer in mammals. A large amount of research in the field of organic chemistry has focused on identifying carcinogenic chemical compounds.

The first study on using ILP for predicting mutagenicity in nitroaromatic compounds, along with a Prolog database, was published in [29]. This database consists of two sets of nitroaromatic compounds, from which we have used the regression-friendly one containing 188 compounds. Depending on the value of log mutagenicity, the compounds were split into two disjoint sets (active, consisting of 125 compounds, and inactive, consisting of 63). The basic structure of the compounds is represented by the background predicates 'atm' and 'bond' of the form

atm(Compound_Id, Atom_Id, Element, Type, Charge),
bond(Compound_Id, Atom1_Id, Atom2_Id, BondType),

respectively. Thus, the background knowledge B can be considered as a labeled directed graph. In order to work with an undirected graph, for each fact bond(c, u, v, t) we have added a corresponding fact bond(c, v, u, t) to B. In addition, in our experiments we have also included the background predicates

– benzene, carbon_6_ring, hetero_aromatic_6_ring, ring6,
– carbon_5_aromatic_ring, carbon_5_ring, hetero_aromatic_5_ring, ring5,
– nitro, and methyl.

These predicates define building blocks for complex chemical patterns (for their definitions see the Appendix of [29]). We note that we have not used the available numeric information (i.e., the charge of atoms, log P, and LUMO).
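The symmetrization of the bond relation just described is a one-liner once facts are encoded as tuples; the encoding below is our own illustration, not from the paper's Prolog database.

```python
def symmetrize_bonds(background):
    """Add bond(c, v, u, t) for every bond(c, u, v, t), so that the directed
    bond relation can be traversed as an undirected graph.  Facts are encoded
    as tuples (predicate, arg1, ..., argn)."""
    reversed_bonds = {("bond", f[1], f[3], f[2], f[4])
                      for f in background if f[0] == "bond"}
    return background | reversed_bonds
```

Non-bond facts (such as 'atm') pass through unchanged, and applying the function twice is a no-op, since each reversed fact's own reverse is already present.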
In our experiments we used a simple refinement operator that only allows adding new literals to the body of an acyclic conjunctive query, without the usual operators such as unification of two variables or specialization of a variable. That is, a refinement of an acyclic conjunctive query is obtained by selecting one of its literals and, depending on the predicate symbol of the selected literal, adding a set of literals to its body as follows. If the selected literal is the head of the clause, we add either a single 'atm' literal or a set of literals corresponding to one of the building blocks. If the selected literal is an 'atm', we add either a new atom connected by a bond fact to the selected one, or a building block containing the selected atom. If a bond literal has been selected, we add a building block containing the current bond. Such building blocks are a common


element specifiable in several declarative bias languages already in use in ILP (see, e.g., the relational clichés of FOCL [28] or the lookahead specifications of TILDE [5]); at present, they are simply given as part of the refinement operator⁵. As an example, let Q : active(x1, x2) ← ..., L, ... be an acyclic conjunctive query, where L = bond(x1, xi, xj, 7). Then a refinement of Q with respect to L and the building block benzene is the acyclic conjunctive query

Q′ = Q ∪ {bond(x1, xj, y1, 7), bond(x1, y1, y2, 7), bond(x1, y2, y3, 7),
          bond(x1, y3, y4, 7), bond(x1, y4, xi, 7),
          atm(x1, y1, c, u1, v1), atm(x1, y2, c, u2, v2),
          atm(x1, y3, c, u3, v3), atm(x1, y4, c, u4, v4),
          benzene(x1, xi, xj, y1, y2, y3, y4)},

where the y's, u's, and v's are all new variables. Note that although the new bond literals together with L form a cycle of length 6, Q′ is acyclic, as we have also attached the benzene literal containing the six corresponding variables. It holds in general that the refinement operator used in our work does not violate the acyclicity property. Finally, we note that only properly subsumed refinements have been considered (i.e., if Q′ is a refinement of Q then Q ≤ Q′ but not Q′ ≤ Q).

In order to see how our restriction on the hypothesis language influences the predictive accuracy, we have used 10-fold cross-validation with the 10 partitions given in [29]. Setting the parameter Pcov to 0.1, Pacc to 125/188 (the default accuracy), the size of the beam stack to 100, and Pchange to 3 (note that this is not a depth bound), we obtained 87% accuracy. Using the ILP system Progol [22], the authors of [29] report 88% accuracy, and a similar result, 89%, was achieved by STILL [27] on the same ten partitions. However, in contrast to our experiment, the Progol and STILL experiments also used the numeric information.
As an example, one of the rules discovered independently in each of the ten runs is

active(x1, x2) ← atm(x1, x3, c, 27, x4),
    bond(x1, x3, x5, x25), bond(x1, x5, x6, x26), bond(x1, x6, x7, x27),
    bond(x1, x7, x8, x28), bond(x1, x8, x9, x29), bond(x1, x9, x3, x30),
    atm(x1, x5, x10, x11, x12), atm(x1, x6, x13, x14, x15),
    atm(x1, x7, x16, x17, x18), atm(x1, x8, x19, x20, x21),
    atm(x1, x9, x22, x23, x24),
    ring6(x1, x3, x5, x6, x7, x8, x9),
    bond(x1, x7, x31, x32), atm(x1, x31, c, 27, x33)    (2)

⁵ Note that the use of such building blocks facilitates the search by making wide and deep clauses reachable in fewer steps, but of course does not change the complexity of the membership problem. Thus, even when given these building blocks, such clauses would be difficult to learn for other ILP learners due to the intractable cost of matching.


(see also Fig. 1). Applying the notion of variable depth given in [25], the depth of the above rule is 7, according to the depth of its deepest variable x22. Furthermore, its width is 15. Finally, we note that using the standard Prolog backtracking technique, just evaluating the single rule above would take on the order of hours.

[Figure: a six-membered ring x3–x5–x6–x7–x8–x9, where x3 and the atom x31 attached to x7 are carbon atoms of atom type 27.]

Fig. 1. A graphical representation of the body of rule (2).

7 Conclusion

In this paper, we have taken the first steps towards the discovery of deep and wide first-order structures in ILP. Taking up the argument recently put forward by [10], our approach centrally focuses on the matching costs caused by deep and wide clauses. To this end, we have introduced from relational database theory [1,30] a new class of clauses, α-acyclic conjunctive queries, which has not previously been used in practical ILP algorithms. Using the algorithms summarized in this paper, the matching problem for acyclic clauses can be solved efficiently. As shown in our case study in the domain of mutagenicity, with an appropriate greedy learner as presented in the paper, it is then possible to learn clauses of significantly greater width and depth than previously feasible, and the additional predictive power gained by these deep and wide structures has in fact allowed us to reach a predictive accuracy comparable to that attained in previous studies, without using the additional numerical information available in those experiments.

Based on these encouraging preliminary results, further work is necessary to substantiate the evidence presented in this paper. Firstly, in the case study presented here, we have used quite a simple greedy algorithm, so further improvements seem possible with more sophisticated search strategies (see, e.g., [20]). Secondly, further experiments are of course necessary to examine in which types of problems the advantages shown here will also materialize; we expect this to be the case in all problems involving structurally complex objects or relationships. To facilitate these experiments, we will switch to a refinement operator based on a declarative bias language (see [24] for an overview), as is commonplace in ILP. Finally, it appears possible to generalize our results to an even


larger class of clauses, by considering certain classes of cyclic conjunctive queries which are also solvable in polynomial time (see e.g. [7]).

References

1. S. Abiteboul, R. Hull, and V. Vianu. Foundations of Databases. Addison-Wesley, Reading, Mass., 1995.
2. H. Arimura. Learning acyclic first-order Horn sentences from entailment. In M. Li and A. Maruoka, editors, Proceedings of the 8th International Workshop on Algorithmic Learning Theory, volume 1316 of LNAI, pages 432–445. Springer, Berlin, 1997.
3. C. Beeri, R. Fagin, D. Maier, and M. Yannakakis. On the desirability of acyclic database schemes. Journal of the ACM, 30(3):479–513, 1983.
4. P. A. Bernstein and N. Goodman. The power of natural semijoins. SIAM Journal on Computing, 10(4):751–771, 1981.
5. H. Blockeel and L. De Raedt. Lookahead and discretization in ILP. In N. Lavrač and S. Džeroski, editors, Proceedings of the 7th International Workshop on Inductive Logic Programming, volume 1297 of LNAI, pages 77–84. Springer, Berlin, 1997.
6. A. K. Chandra and P. M. Merlin. Optimal implementation of conjunctive queries in relational data bases. In Proceedings of the 9th ACM Symposium on Theory of Computing, pages 77–90. ACM Press, 1977.
7. C. Chekuri and A. Rajaraman. Conjunctive query containment revisited. Theoretical Computer Science, 239(2):211–229, 2000.
8. T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Press, Cambridge, Mass., 1990.
9. R. Fagin. Degrees of acyclicity for hypergraphs and relational database schemes. Journal of the ACM, 30(3):514–550, 1983.
10. A. Giordana and L. Saitta. Phase transitions in relational learning. Machine Learning, 41(2):217–251, 2000.
11. G. Gottlob. Subsumption and implication. Information Processing Letters, 24(2):109–111, 1987.
12. G. Gottlob and A. Leitsch. On the efficiency of subsumption algorithms. Journal of the ACM, 32(2):280–295, 1985.
13. G. Gottlob, N. Leone, and F. Scarcello. The complexity of acyclic conjunctive queries. In Proceedings of the 39th Annual Symposium on Foundations of Computer Science, pages 706–715. IEEE Computer Society Press, 1998.
14. M. Graham. On the universal relation. Technical report, University of Toronto, Toronto, Canada, 1979.
15. K. Hirata. On the hardness of learning acyclic conjunctive queries. In Proceedings of the 11th International Conference on Algorithmic Learning Theory, volume 1968 of LNAI, pages 238–251. Springer, Berlin, 2000.
16. T. Horváth and G. Turán. Learning logic programs with structured background knowledge. Artificial Intelligence, 128(1-2):31–97, 2001.
17. J.-U. Kietz. Some lower bounds for the computational complexity of inductive logic programming. In P. Brazdil, editor, Proceedings of the European Conference on Machine Learning, volume 667 of LNAI, pages 115–123. Springer, Berlin, 1993.
18. J.-U. Kietz and M. Lübbe. An efficient subsumption algorithm for inductive logic programming. In W. Cohen and H. Hirsh, editors, Proceedings of the Eleventh International Conference on Machine Learning (ML-94), pages 130–138, 1994.


19. P. G. Kolaitis and M. Y. Vardi. Conjunctive-query containment and constraint satisfaction. Journal of Computer and System Sciences, 61(2):302–332, 2000.
20. N. Lavrač and S. Džeroski. Inductive Logic Programming: Techniques and Applications. Ellis Horwood, 1994.
21. T. M. Mitchell. Machine Learning. McGraw-Hill, 1997.
22. S. Muggleton. Inverse entailment and Progol. New Generation Computing, 13(3-4):245–286, 1995.
23. S. Muggleton and L. De Raedt. Inductive logic programming: Theory and methods. The Journal of Logic Programming, 19/20:629–680, 1994.
24. C. Nédellec, C. Rouveirol, H. Adé, F. Bergadano, and B. Tausend. Declarative bias in ILP. In L. De Raedt, editor, Advances in Inductive Logic Programming, pages 82–103. IOS Press, 1996.
25. S.-H. Nienhuys-Cheng and R. de Wolf. Foundations of Inductive Logic Programming, volume 1228 of LNAI. Springer, Berlin, 1997.
26. T. Scheffer, R. Herbrich, and F. Wysotzki. Efficient Θ-subsumption based on graph algorithms. In S. Muggleton, editor, Proceedings of the 6th International Workshop on Inductive Logic Programming, volume 1314 of LNAI, pages 212–228. Springer, Berlin, 1997.
27. M. Sebag and C. Rouveirol. Resource-bounded relational reasoning: Induction and deduction through stochastic matching. Machine Learning, 38(1/2):41–62, 2000.
28. G. Silverstein and M. Pazzani. Relational clichés: Constraining constructive induction during relational learning. In L. Birnbaum and G. Collins, editors, Proceedings of the 8th International Workshop on Machine Learning, pages 203–207. Morgan Kaufmann, San Mateo, CA, 1991.
29. A. Srinivasan, S. Muggleton, M. J. E. Sternberg, and R. D. King. Theories for mutagenicity: A study in first-order and feature-based induction. Artificial Intelligence, 85(1/2), 1996.
30. J. D. Ullman. Principles of Database and Knowledge-Base Systems, Volumes I and II. Computer Science Press, 1989.
31. L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
32. S. Wrobel. Inductive logic programming. In G. Brewka, editor, Advances in Knowledge Representation and Reasoning, Studies in Logic, Language and Information, pages 153–189. CSLI Publishers, Stanford, CA, 1996.
33. M. Yannakakis. Algorithms for acyclic database schemes. In C. Zaniolo and C. Delobel, editors, Proceedings of the 7th International Conference on Very Large Data Bases (VLDB), Morgan Kaufmann, Los Altos, CA, 1981.
34. C. T. Yu and Z. M. Ozsoyoglu. On determining tree query membership of a distributed query. INFOR, 22(3), 1984.

Clipping and Analyzing News Using Machine Learning Techniques

Hans Gründel, Tino Naphtali, Christian Wiech, Jan-Marian Gluba, Maiken Rohdenburg, and Tobias Scheffer

SemanticEdge, Kaiserin-Augusta-Allee 10-11, 10553 Berlin, Germany
{hansg, tinon, christianw, jang, scheffer}@semanticedge.com

Abstract. Generating press clippings for companies manually requires a considerable amount of resources. We describe a system that monitors online newspapers and discussion boards automatically. The system extracts, classifies, and analyzes messages and generates press clippings automatically, taking the specific needs of client companies into account. Key components of the system are a spider, an information extraction engine, a text classifier based on the Support Vector Machine that categorizes messages by subject, and a second classifier that analyzes which emotional state the author of a newsgroup posting was likely to be in. By analyzing large amounts of messages, the system can summarize the main issues that are being reported on for given business sectors, and can summarize the emotional attitude of customers and shareholders towards companies.

1 Introduction

Monitoring newspaper or journal articles, or postings to discussion boards, is an extremely laborious task when carried out manually. Press clipping agencies employ thousands of personnel in order to satisfy their clients' demand for timely and reliable delivery of publications that relate to their own company, to their competitors, or to the relevant markets. The internet presence of most publications offers the possibility of automating this filtering and analysis process. One challenge that arises is to analyze the content of a news story well enough to judge its relevance for a given client. A second difficulty is to provide appropriate overview and analysis functionality that allows a user to keep track of the key content of a potentially huge amount of relevant publications.

Software systems that spider the web in search of relevant information, and extract and process the information found, are usually referred to as information agents [16,2]. They are being used, for instance, to find interesting web sites or links [19,12], or to filter newsgroup postings (e.g., [26]). One attribute of information agents is how they determine the relevance of a document to a user. Content-based recommendation systems (e.g., [1]) judge the interestingness of a document to the user based on the content of other documents that the user has found interesting. By contrast, collaborative filtering approaches (e.g., [13]),

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 87–99, 2001.
© Springer-Verlag Berlin Heidelberg 2001


draw conclusions based on which documents other users with similar preferences have found interesting. In many applications, it is not reasonable to ask the user to state his or her preferences explicitly. Therefore, information agents often try to learn a function that expresses user interest from user feedback; e.g., [26,18]. By contrast, a user who approaches a press clipping agency usually has specific, elaborated information needs.

The problem of identifying predetermined relevant information in text or hypertext documents from some specific domain is usually referred to as information extraction (IE) (e.g., [4,3]). In the news clipping context, several instances of the information extraction problem occur. Firstly, press articles have to be extracted from HTML pages, where they are usually embedded between link collections, adverts, and other surrounding text. Secondly, named entities such as companies or products have to be identified and extracted, and, thirdly, meta-information such as publication dates or publishers needs to be found. While the first IE algorithms were hand-crafted sets of rules (e.g., [7]), algorithms that learn extraction rules from hand-labeled documents (e.g., [8,14,6]) have now become standard. Unfortunately, rule-based approaches sometimes fail to provide the necessary robustness against the inherent variability of document structure, which has led to the recent interest in the use of hidden Markov models (HMMs) [25,17,21,23] for this purpose.

In order to identify whether the content of a document matches one of the categories the user is interested in, and to summarize the subjects of large amounts of relevant documents, classifiers that are learned from hand-labeled documents (e.g., [24,11]) provide a means of categorizing a document's content that reaches far beyond keyword search. Furthermore, it can be interesting to determine the emotional state [9] of the authors of postings about a company or product.
In this paper, we discuss a press clipping information agent that downloads news stories from selected news sources, classiﬁes the messages by subject and business sector, and recognizes company names. It then generates customized clippings that match the requirement of clients. We describe the general architecture in Section 2, and discuss the machine learning algorithms involved in Section 3. Section 4 concludes.

2 Publication Monitoring System

Figure 1 sketches the general architecture of the system. A user can configure the information service by providing a set of preferences. These include the names of all companies that he or she would like to monitor, as well as all business areas (e.g., biotechnology, computer hardware) of interest. The spider cyclically downloads a set of newspapers, journals, and discussion boards. The set of news sources is fixed in advance and does not depend on the users' choices. All downloaded messages are recorded in a news database after the extraction engine has stripped the HTML code in which the message is embedded (header and footer parts as well as HTML tags, pictures, and advertisements).

Fig. 1. Overview of the SemanticEdge Publication Monitoring System

The spider developed by SemanticEdge is configured by providing a set of patterns which all URLs that are to be downloaded have to match. Typically, online issues of newspapers have a fairly fixed site structure and only vary the dates and story numbers in the URLs daily. Depending on the difficulty of the site structure, configuring the spider so that all current news stories are downloaded, but no advertisements, archives, or documents that do not directly belong to the newspaper, requires between one and four hours.

The text classifier, named entity recognizer, and emotional analyzer operate on this database. The text classifier categorizes all news stories and newsgroup postings, whereas the emotional analyzer is only used for newsgroup postings; it classifies the emotional state that a message was likely to be written in.

For each client company, a customized press clipping is generated, including summarization and visualization functionality. The press clipping consists of a set of dynamically generated web pages that a user can view in a browser after providing a password. The system visualizes the number of publications by source, by subject, and by referred company. For each entry, an emotional score between zero (very negative) and one (very positive) is visualized as a red or green bar, indicating the attitude of the article (Fig. 2), or of the set of summarized articles. Figure 2 shows the list of all articles relevant to a client; Figure 3 shows the summary mode, in which the system summarizes all articles either from one news source, or about one company, or related to one business sector per line. The average positive or negative attitude of the articles summarized in one line is visualized by a red or green bar. Several diagrams visualize the frequency of referrals to business sectors or individual companies and the average expressed attitude (Figure 4).
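The pattern-based spider configuration described above can be sketched roughly as follows. The URL patterns and site names are invented for illustration; the actual SemanticEdge patterns are proprietary and not given in the paper:

```python
import re

# Hypothetical spider configuration: every URL to be downloaded must match
# one of the ALLOW patterns and none of the DENY patterns.
ALLOW = [
    re.compile(r"^https?://news\.example\.com/\d{4}/\d{2}/\d{2}/story-\d+\.html$"),
]
DENY = [
    re.compile(r"/(archive|ads?|shop)/"),   # skip archives and advertisements
]

def should_download(url):
    """Decide whether the spider fetches a given URL."""
    if any(p.search(url) for p in DENY):
        return False
    return any(p.match(url) for p in ALLOW)

print(should_download("https://news.example.com/2001/11/25/story-4711.html"))  # True
print(should_download("https://news.example.com/archive/story-1.html"))        # False
```

Because newspaper sites keep a fixed structure and only vary dates and story numbers, a short list of such patterns usually suffices to cover all current stories of one source.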

3 Intelligent Document Analysis

Document analysis consists of information extraction (including recognition of named entities), subject classification, and emotional state analysis.


Fig. 2. Press clipping for client company: message overview

Fig. 3. Press clipping for client company: company summary


Fig. 4. Top: frequency of messages related to business sectors. Bottom: expressed emotional attitude toward companies

3.1 Information Extraction

Two main paradigms of information extraction agents which can be trained from hand-labeled documents exist: algorithms that learn extraction rules (e.g., [8,14,6]) and statistical approaches such as Markov models [25,17], partially hidden Markov models [21,23], and conditional random fields [15]. Rule-based information extraction algorithms appear to be particularly suited to extracting text from pages with a very strict structure and little variability between documents. In order to learn how to extract the text body from the HTML page of a Yahoo! message board, the proprietary rule learner that we use needs only one example in order to identify where, in the document structure, the information to be extracted is located. We can then extract text bodies from other messages with equal HTML structure with an accuracy of 100%. For many other information extraction tasks, such as recognizing company names or stock recommendations, rule-based learners do not provide enough robustness to deal with the high variability of natural language.

Hidden Markov models (HMMs) (see [20] for an introduction) are a very robust statistical method for the analysis of temporal data. An HMM consists of finitely many states {S_1, ..., S_N} with probabilities π_i = P(q_1 = S_i), the probability of starting in state S_i, and a_ij = P(q_{t+1} = S_j | q_t = S_i), the probability of a transition from state S_i to S_j. Each state is characterized by a probability distribution b_i(O_t) = P(O_t | q_t = S_i) over observations. In the information extraction context, an observation is typically a token. The information items to be extracted correspond to the n target states of the HMM. Background tokens without a label are emitted in all HMM states which are not target states. The HMM parameters can be learned from data using the Baum-Welch algorithm. Once the HMM parameters are given, the model can be used to extract information from a new document.
Firstly, the document has to be transformed into a sequence of tokens; for each token, several attributes are determined, including the word stem, part of speech, the HTML context, and attributes that indicate, for instance, whether the word contains letters or digits or starts with a capital letter. Thus, the document is transformed into a sequence of attribute vectors. Secondly, the forward-backward algorithm [20] is used to determine, for each token, the most likely state of the HMM that it was emitted in. If, for a given token, the most likely state is one of the background states, then this token can be ignored. If the most likely state is one of the target states, and thus corresponds to one of the items to be extracted, then the token is extracted and copied into the corresponding database field.

In order to adapt the HMM parameters, a user first has to manually label the information to be extracted in example documents. Such partially labeled documents form the input to the learning algorithm, which then generates the HMM parameters. We use a variant of the Baum-Welch algorithm [23] to find the model parameters which are most likely to produce the given documents and are consistent with the labels added by the user.
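As a concrete illustration of the posterior decoding step above, the following minimal sketch labels each token with its most likely HMM state via the forward-backward algorithm. The states, token attributes, and all probabilities are invented toy values, not the parameters of the actual system:

```python
# Toy HMM with two states: 0 = background, 1 = company-name target state.
pi = [0.9, 0.1]                     # start probabilities pi_i
A = [[0.8, 0.2],                    # transition probabilities a_ij
     [0.4, 0.6]]
# Emission probabilities b_i over three coarse token attributes:
# 0 = lowercase word, 1 = capitalized word, 2 = legal-form suffix ("AG", "Inc")
B = [[0.70, 0.25, 0.05],            # background emits mostly lowercase words
     [0.05, 0.55, 0.40]]            # target emits capitalized / suffix tokens

def posterior_decode(obs):
    """Return the most likely state for each token (forward-backward)."""
    n, N = len(obs), len(pi)
    # forward pass: alpha[t][i] = P(O_1..O_t, q_t = S_i)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(N)]]
    for t in range(1, n):
        alpha.append([sum(alpha[t - 1][j] * A[j][i] for j in range(N)) * B[i][obs[t]]
                      for i in range(N)])
    # backward pass: beta[t][i] = P(O_{t+1}..O_n | q_t = S_i)
    beta = [[1.0] * N for _ in range(n)]
    for t in range(n - 2, -1, -1):
        beta[t] = [sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] for j in range(N))
                   for i in range(N)]
    # posterior gamma[t][i] is proportional to alpha[t][i] * beta[t][i];
    # for the per-token argmax, normalization can be skipped
    return [max(range(N), key=lambda i: alpha[t][i] * beta[t][i]) for t in range(n)]

# "the shares of Siemens AG rose" encoded as attribute codes
print(posterior_decode([0, 0, 0, 1, 2, 0]))   # → [0, 0, 0, 1, 1, 0]
```

Tokens decoded into the target state (here "Siemens" and "AG") would be copied into the corresponding database field; background tokens are ignored.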


Figure 5 shows the GUI of the SemanticEdge information extraction environment. HMM-based and rule-based learners are plugged into the system.

Fig. 5. GUI of the information extraction engine

For specialized information extraction tasks, such as finding company names in news stories, specially tailored information extraction agents outperform more general approaches such as HMMs. For instance, most companies that are being reported about are listed at some stock exchange. To recognize these companies, we only need to maintain a dynamically growing database.
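The database-lookup idea can be sketched as a simple gazetteer matcher. The company names below are examples only; the paper does not describe the actual database or matching strategy in detail:

```python
# Hypothetical gazetteer of listed companies; in the real system this
# database grows dynamically as new listings appear.
COMPANIES = {"Siemens AG", "Yahoo!", "SemanticEdge"}

def find_companies(text):
    """Return all known company names that occur verbatim in the text."""
    return sorted(name for name in COMPANIES if name in text)

print(find_companies("Shares of Siemens AG fell while Yahoo! gained."))
# → ['Siemens AG', 'Yahoo!']
```

A production matcher would additionally handle abbreviations and inflected forms, but even exact lookup against an exchange listing covers most company mentions in financial news.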

3.2 Subject Classification

For the subject classification step, we have defined a set of message subject categories (e.g., IPO announcement, ad hoc message) and a set of business sector and market categories. The resulting classifiers assign each message a set of relevant subjects and sectors.

The classifier proceeds in several steps. First, a text is tokenized and the resulting tokens are mapped to their word stems. We then count, for each word stem and each example text, how often that word occurs in the text. We thus transform each text into a feature vector, treating a text as a bag of words. Finally, we weight each feature by the inverse frequency of the corresponding word, which has generally been observed to increase the accuracy of the resulting classifiers (e.g., [10,22]). This procedure maps each text to a point in a high-dimensional space. The Support Vector Machine (SVM) [11] is then used to efficiently find a hyperplane which separates positive from negative examples, such that the margin between any example and the plane is maximized. For each category we thus obtain a classifier which can take a new text and map it to a negative or positive value, measuring the document's relevance for the category.

During the application phase, the support vector machine returns, for each category, the value of its decision function, which can range from large negative to large positive values. It is necessary to define a threshold value from which on a document is considered to belong to the corresponding category. There are several criteria by which this threshold can be set; perhaps the most popular is the precision/recall breakeven point. The precision quantifies the probability of a document really belonging to a class given that it is predicted to lie in that class. Recall, on the other hand, quantifies how likely it is that a document really belonging to a category is in fact predicted to be a member of that class by the classifier. By lowering the threshold value of the decision function we can increase recall and decrease precision, and vice versa. The point at which precision equals recall is often used as a normalized measure of the performance of classification and IR methods. Varying the threshold leads to precision and recall curves. Figure 6 shows the GUI of our SVM-based text categorization tool.

Fig. 6. GUI of the text classification engine
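The threshold selection described in Section 3.2 can be illustrated with a small sketch that sweeps a threshold over decision-function values and reports precision and recall at each setting. The scores and labels are invented toy data, not output of the actual SVM:

```python
def precision_recall(scores, labels, threshold):
    """Compute (precision, recall) when predicting positive for score >= threshold."""
    predicted = [s >= threshold for s in scores]
    tp = sum(p and l for p, l in zip(predicted, labels))
    fp = sum(p and not l for p, l in zip(predicted, labels))
    fn = sum((not p) and l for p, l in zip(predicted, labels))
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

scores = [-1.2, -0.4, 0.1, 0.3, 0.8, 1.5]        # toy decision-function values
labels = [False, False, False, True, True, True]  # true category membership

for t in (-0.5, 0.0, 0.2, 1.0):
    p, r = precision_recall(scores, labels, t)
    print(f"threshold {t:+.1f}: precision {p:.2f}, recall {r:.2f}")
```

Lowering the threshold raises recall at the cost of precision, and vice versa; the breakeven point is the threshold where the two curves cross.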


It is also possible to define the accuracy (the probability of the classifier making a correct prediction for a new document) as a performance measure. Unfortunately, many categories (such as IPO announcement) are so infrequent that a classifier which in fact never predicts that a document belongs to this class can achieve an accuracy of as much as 99.9%. This renders the use of accuracy as a performance metric less suitable than precision/recall curves.

For each category, we manually selected about 3000 examples; between 60 and 700 of these examples were positives. Figure 7 shows precision, recall, and accuracy of some randomly selected classes over the threshold value. The curves are based on hold-out testing on 20% of the data. Note that, for many of these classes such as xxx, the prior ratio of positive examples is extremely small (such as 1.4%). Specialized categories, such as "initial public offering announcement", can be recognized almost without error; "fuzzy" concepts like "positive market news" impose greater uncertainties.
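The accuracy pitfall for rare categories can be reproduced with a tiny numeric example (the class ratio below is chosen to match the 99.9% figure in the text):

```python
# A category with 0.1% positives: the trivial "never positive" classifier
# reaches 99.9% accuracy while recalling none of the relevant documents.
n_total, n_positive = 10_000, 10
labels = [True] * n_positive + [False] * (n_total - n_positive)
predictions = [False] * n_total            # always predict "not in category"

accuracy = sum(p == l for p, l in zip(predictions, labels)) / n_total
recall = sum(p and l for p, l in zip(predictions, labels)) / n_positive
print(accuracy, recall)   # 0.999 0.0
```

This is why precision/recall curves, rather than accuracy, are used to evaluate the classifiers here.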

3.3 Emotional Classification

In psychology, a space of universal, culturally independent base emotional states has been identified according to differential emotions theory (e.g., [5,9]); ten clusters within this emotional space are generally considered base emotions. These are interest, happiness, surprise, sorrow, anger, disgust, contempt, shame, fear, and guilt (Figure 4). While it is typically impossible to analyze the emotional state of the author of a sober newspaper article, authors of newsgroup postings often do not conceal their emotions. Given a posting, we use an SVM to determine, for each of the ten emotional states, a score that rates the likelihood of that emotion for the author. We average the scores over all postings related to a company, or to a product, and visualize the result as in Figure 4. We can also project emotional scores onto a "positive-negative" ray and visualize the resulting score as a red or green bar as in Figure 2.

We manually classified postings to discussion boards into positive, negative, and neutral for each of the ten base emotional states. Emotional classification of messages turned out to be a fairly noisy process; the judgment on the emotional content of postings usually varies between individuals. Unfortunately, we found no positive examples for disgust, but between 2 and 21 positive and between 16 and 92 negative examples for each of the other states.

Figure 8 shows precision and recall curves for those emotions for which we found the most positive examples, based on 10-fold cross-validation. As we expected, recognizing emotions seems to be a very difficult task, in particular from the small samples available. Still, the recognizer performs significantly better than random guessing. Emotional classifiers with a rather high threshold can often achieve reasonable precision values. Also, in many cases in which human and classifier disagree, it is not easy to tell whether the human or the classifier is wrong.
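The aggregation and projection step can be sketched as follows. The grouping of the ten base emotions into positive and negative and the mapping onto [0, 1] are illustrative assumptions; the paper does not specify the system's exact projection:

```python
# Assumed grouping of the ten base emotions (illustrative, not the
# system's actual mapping).
POSITIVE = {"interest", "happiness", "surprise"}
NEGATIVE = {"sorrow", "anger", "disgust", "contempt", "shame", "fear", "guilt"}

def attitude(postings):
    """Average per-emotion scores over postings and project onto [0, 1].

    postings: list of dicts mapping emotion name -> SVM-derived score in [0, 1].
    Returns 0 for very negative, 1 for very positive attitude.
    """
    pos = [s for p in postings for e, s in p.items() if e in POSITIVE]
    neg = [s for p in postings for e, s in p.items() if e in NEGATIVE]
    p_avg = sum(pos) / len(pos) if pos else 0.0
    n_avg = sum(neg) / len(neg) if neg else 0.0
    # map the difference from [-1, 1] onto [0, 1]
    return (p_avg - n_avg + 1.0) / 2.0

postings = [{"happiness": 0.8, "anger": 0.1}, {"interest": 0.6, "fear": 0.3}]
print(round(attitude(postings), 2))
```

The resulting score drives the red (near 0) or green (near 1) bar shown next to each company or article summary.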


Fig. 7. Precision, recall, and accuracy for subject classification. First row: "US treasury", "mergers and acquisitions"; second row: "positive" / "negative market and economy news"; third row: "initial public offering announcement", "currency and exchange rates".


Fig. 8. Precision, recall, and accuracy for emotional classification. First row: positive versus negative, anger; second row: contempt, fear.

4 Conclusion

We describe a system that monitors online news sources and discussion boards, downloads the content regularly, extracts the document bodies, analyzes messages by content and emotional state, and generates customer-specific press clippings. A user of the system can specify his or her information needs by entering a list of company names (e.g., the name of the own company and relevant competitors) and selecting from a set of message types and business sectors. Information extraction tasks are addressed by rule induction and hidden Markov models; the Support Vector Machine is used to learn classifiers from hand-labeled data. The customer-specific news stories are listed individually, as well as summarized by several criteria. Diagrams visualize how frequently business sectors or companies are cited over time. The resulting press clippings are generated fully automatically and in near real time.

This tool enables companies to keep track of how they are being perceived in newsgroups and in the press. It is also inexpensive compared to press clipping agencies. On the down side, the system is certain to miss all news stories that appear only in printed issues. Also, the classifier has a certain inaccuracy, which imposes the risk of missing relevant articles; of course, this risk is also present with press clipping agencies. Nearly all studied subject categories can be recognized very reliably using support vector classifiers.


References

1. A. Aamodt and E. Plaza. Case-based reasoning: Foundational issues, methodological variations, and system approaches. AICOM, 7(1):39–59, 1994.
2. N. Belkin and W. Croft. Information filtering and information retrieval: Two sides of the same coin? Communications of the ACM, 35(12):29–38, 1992.
3. M. Craven, D. DiPasquo, D. Freitag, A. K. McCallum, T. M. Mitchell, K. Nigam, and S. Slattery. Learning to construct knowledge bases from the World Wide Web. Artificial Intelligence, 118(1-2):69–113, 2000.
4. L. Eikvil. Information extraction from the world wide web: A survey. Technical Report 945, Norwegian Computing Center, 1999.
5. P. Ekman, W. Friesen, and P. Ellsworth. Emotion in the Human Face: Guidelines for Research and an Integration of Findings. Pergamon Press, 1972.
6. G. Grieser, K. Jantke, S. Lange, and B. Thomas. A unifying approach to HTML wrapper representation and learning. In Proceedings of the Third International Conference on Discovery Science, 2000.
7. R. Grishman and B. Sundheim. Message Understanding Conference - 6: A brief history. In Proceedings of the International Conference on Computational Linguistics, 1996.
8. C.-N. Hsu and M.-T. Dung. Generating finite-state transducers for semistructured data extraction from the web. Journal of Information Systems, Special Issue on Semistructured Data, 23(8), 1998.
9. C. Izard. The Face of Emotion. Appleton-Century-Crofts, 1971.
10. T. Joachims. A probabilistic analysis of the Rocchio algorithm with TFIDF for text categorization. In Proceedings of the 14th International Conference on Machine Learning, 1997.
11. T. Joachims. Text categorization with support vector machines. In Proceedings of the European Conference on Machine Learning, 1998.
12. T. Joachims, D. Freitag, and T. Mitchell. WebWatcher: A tour guide for the World Wide Web. In Proceedings of the 15th International Joint Conference on Artificial Intelligence (IJCAI-97), pages 770–777. Morgan Kaufmann, San Francisco, 1997.
13. J. Konstan, B. Miller, D. Maltz, J. Herlocker, L. Gordon, and J. Riedl. GroupLens: Applying collaborative filtering to Usenet news. Communications of the ACM, 40(3):77–87, 1997.
14. N. Kushmerick. Wrapper induction: Efficiency and expressiveness. Artificial Intelligence, 118:15–68, 2000.
15. J. Lafferty, F. Pereira, and A. McCallum. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the International Conference on Machine Learning, 2001.
16. P. Maes. Agents that reduce work and information overload. Communications of the ACM, 37(7), 1994.
17. A. McCallum, D. Freitag, and F. Pereira. Maximum entropy Markov models for information extraction and segmentation. In Proceedings of the Seventeenth International Conference on Machine Learning, 2000.
18. A. Moukas. Amalthaea: Information discovery and filtering using a multiagent evolving ecosystem. In Proceedings of the Conference on Practical Application of Intelligent Agents and Multi-Agent Technology, 1996.
19. M. Pazzani, J. Muramatsu, and D. Billsus. Syskill & Webert: Identifying interesting web sites. In Proceedings of the National Conference on Artificial Intelligence, pages 54–61, 1996.


20. L. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–285, 1989.
21. T. Scheffer, C. Decomain, and S. Wrobel. Active hidden Markov models for information extraction. In Proceedings of the International Symposium on Intelligent Data Analysis, 2001.
22. T. Scheffer and T. Joachims. Expected error analysis for model selection. In Proceedings of the Sixteenth International Conference on Machine Learning, 1999.
23. T. Scheffer and S. Wrobel. Active learning of partially hidden Markov models. In Proceedings of the ECML/PKDD Workshop on Instance Selection, 2001.
24. M. Sahami, M. Craven, T. Joachims, and A. McCallum, editors. Learning for Text Categorization: Proceedings of the ICML/AAAI Workshop. AAAI Press, 1998.
25. K. Seymore, A. McCallum, and R. Rosenfeld. Learning hidden Markov model structure for information extraction. In AAAI'99 Workshop on Machine Learning for Information Extraction, 1999.
26. B. Sheth. NewT: A learning approach to personalized information filtering. Master's thesis, Department of Electrical Engineering and Computer Science, MIT, 1994.

Spherical Horses and Shared Toothbrushes: Lessons Learned from a Workshop on Scientific and Technological Thinking

Michael E. Gorman (1), Alexandra Kincannon (2), and Matthew M. Mehalik (1)

(1) Technology, Culture & Communications, School of Engineering & Applied Science, P.O. Box 400744, University of Virginia, Charlottesville, VA 22904-4744, USA
{meg3c, mmm2f}@virginia.edu
(2) Department of Psychology, P.O. Box 400400, University of Virginia, Charlottesville, VA 22904-4400, USA
kincannon@virginia.edu

Abstract. We briefly summarize some of the lessons learned in a workshop on cognitive studies of science and technology. Our purpose was to assemble a diverse group of practitioners to discuss the latest research, identify the stumbling blocks to advancement in this field, and brainstorm about directions for the future. Two questions became central themes. First, how can we combine artificial studies involving 'spherical horses' with fine-grained case studies of actual practice? Results obtained in the laboratory may have low applicability to real-world situations. Second, how can we deal with academics' attachments to their theoretical frameworks? Academics often like to develop unique 'toothbrushes' and are reluctant to use anyone else's. The workshop illustrated that toothbrushes can be shared and that spherical horses and fine-grained case studies can complement one another. Theories need to deal rigorously with the distributed character of scientific and technological problem solving. We hope this workshop will suggest directions more sophisticated theories might take.

1 Introduction

At the turn of the 21st century, the most valuable commodity in society is knowledge, particularly new knowledge that may give a culture, a company, or a laboratory an advantage [1-3]. Therefore, it is vital for the science and technology studies community to study the thinking processes that lead to discovery, new knowledge and invention. Knowledge about these processes can enhance the probability of new and useful technologies, clarify the process by which new ideas are turned into marketable realities, make it possible for us to turn students into ethical inventors and entrepreneurs, and facilitate the development of business strategies and social policies based on a genuine understanding of the creative process.

2 A Workshop on Scientific and Technological Thinking

In order to get access to cutting-edge research on techno-scientific thinking, Michael Gorman obtained funding from the National Science Foundation, the Strategic Institute of the Boston Consulting Group, and the National Collegiate Inventors and Innovators Alliance to hold a workshop at the University of Virginia from March 24-27, 2001. With assistance from Alexandra Kincannon, Ryan Tweney, and others, he assembled a diverse group of practitioners, focusing on those in the middle of their careers and also on junior faculty and graduate students who represent the future. There were 29 participants, including 18 senior or mid-career researchers and 11 junior faculty and graduate students. Representatives from the NSF, the Strategic Institute of the BCG, and the NCIIA also attended. Their role was to keep participants focused on lessons learned, even as the participants worked to assess the state of the art and push beyond it, establishing new directions for research on scientific and technological thinking. In the rest of this brief paper, Gorman and Kincannon, two of the organizers of the workshop, and Matthew Mehalik, one of the participants, will highlight results from this workshop, citing the work of participants where appropriate and adding interpretive material of their own.¹

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 74-86, 2001.
© Springer-Verlag Berlin Heidelberg 2001

Two questions dominated the workshop, each illustrated by a metaphor. David Gooding, a philosopher from the University of Bath who has done fine-grained studies of the thinking processes of Michael Faraday, told a joke that set up one theme. In the joke, a multimillionaire offered a prize for predicting the outcome of a horse race to a stockbreeder, a geneticist, and a physicist. The stockbreeder said there were too many variables, the geneticist could not make a prediction about any particular horse, but the physicist claimed the prize, saying he could make the prediction to many decimal places—provided it were a perfectly spherical horse moving through a vacuum. This metaphor led to a question: How can we combine artificial studies involving 'spherical horses' and fine-grained case studies of actual practice? Results obtained under rigorous laboratory conditions may have what psychologists call low ecological validity, or low applicability to real-world situations [4].
Highly abstract computational models often ignore the way in which real-world knowledge is embedded in social contexts and embodied in hands-on practices [5].

The second metaphor came from Christian Schunn, then at George Mason University and now at the University of Pittsburgh, who noted that taxonomies and frameworks are like toothbrushes—no one wants to use anyone else's. This metaphor led to another question: How can we transcend academics' attachments to their individual theoretical frameworks? Academic psychologists, historians, sociologists, and philosophers like to develop and refine unique toothbrushes and are reluctant to use anyone else's. Real-world practitioners are not as fussy; they are willing to assemble a 'bricolage' of elements from various frameworks that academics might regard as incommensurable.

3 A Moratorium against Spherical Horses?

Nancy Nersessian, a philosopher and cognitive scientist from the Georgia Institute of Technology, reminded participants that Bruno Latour declared a ten-year moratorium against cognitive studies of science in 1986. Latour was one of the key figures in promoting a new sociology of scientific knowledge. He and others were reacting 1

The views reflected here are those of the authors, and have not been endorsed by workshop participants, the NSF, BCG or the NCIIA. All participants were taped, with their consent, and we have used these tapes in an effort to reconstruct highlights. Thanks to Pat Langley for his comments on a draft.


against the idea that science was a purely rational enterprise, carried out in an abstract cognitive space.

Cognitive scientists like Herbert Simon contributed to this abstract cognizer view of science.² Simon was one of the founders of a movement Nersessian labeled "Good Old Fashioned Artificial Intelligence" (GOFAI). Simon's toothbrush, or framework, began with the assumption that there is nothing particularly unique about what a Kepler does—the same thinking processes are used on both ordinary and extraordinary problems [6]. Simon was a revolutionary in the Kuhnian sense; he played a major role in creating artificial intelligence and linking it with a new science of thinking, called cognitive science. Peter Slezak used programs like BACON to turn the tables on Latour's moratorium:

    A decisive and sufficient refutation of the 'strong programme' in the sociology of scientific knowledge (SSK) would be the demonstration of a case in which scientific discovery is totally isolated from all social or cultural factors whatever. I want to discuss examples where precisely this circumstance prevails concerning the discovery of fundamental laws of the first importance in science. The work I will describe involves computer programs being developed in the burgeoning interdisciplinary field of cognitive science, and specifically within 'artificial intelligence' (AI). The claim I wish to advance is that these programs constitute a 'pure' or socially uncontaminated instance of inductive inference, and are capable of autonomously deriving classical scientific laws from the raw observational data [7, pp. 563-564].

Slezak argued that if programs like BACON [8, 9] can discover, then there is no need to invoke all the interests and negotiations the sociologists use to explain discovery. His claims sparked a vigorous debate in the November 1989 issue of the journal Social Studies of Science. Latour and Slezak illustrate how academics can create almost incommensurable frameworks.
If the Simon perspective is a toothbrush, then Latour is denying that it even exists—and vice versa. Nersessian reminded participants that, had Simon been at the workshop, he would have argued that his toothbrush does incorporate the social and cultural; it is just that all of this is represented symbolically in memory [10]. On this view, cognition is about symbol processing, and the symbols could be instantiated as easily in a computer as in a brain. In contrast, Greeno and others advocate a position whose roots might be traced to Gibson and Dewey: that knowledge emerges from the interaction between the individual and the situation [11]. Cognition is distributed in the environment as well as the brain, and is shared among individuals [12, 13]. Merlin Donald discusses the role of culture in the evolution of cognition [14]. Nersessian, in her own work, explored how cultural factors can account for differences between the problem-solving approaches of scientists like Maxwell and Ampère [15]. In the symbol-processing view, discovery and invention are merely aspects of a general problem-solving system that can best be represented at the ‘spherical horse’ level. In the situated and distributed view, discovery and invention are practices that

2 Simon intended to be a participant in our workshop, but died shortly before it—a great tragedy and a great loss. During the planning stages, he referred to this as a workshop of ‘right thinkers’. For tributes to him, see http://www.people.virginia.edu/~apk5t/STweb/mainST.html.

Spherical Horses and Shared Toothbrushes


need to be studied in their social context. This situated cognition perspective comes much closer to that of sociologists and anthropologists of science [16], but advocates like Norman and Hutchins still talk about the importance of representations like mental models.

Jim Davies, from the Georgia Institute of Technology, applied Nersessian’s cognitive-historical approach to a case study of the use of visual analogy in scientific discovery. Davies analyzed the process of conceptual change in Maxwell’s work on electromagnetism and applied to it a model of visual analogical problem solving called Galatea. He found that visual analogy played an important role in the development of Maxwell’s theories and demonstrated that the cognitive-historical approach is useful for understanding general cognitive processes.

Ryan Tweney, a co-organizer of the workshop, described his own in vivo case study of Michael Faraday’s work on the interaction of light and gold films [33]. Tweney is in the process of replicating these experiments to unpack the tacit knowledge that is embodied in the cognitive artifacts created by Faraday. He hopes to do a kind of material protocol analysis that goes beyond the verbal material in Faraday’s diary. One end result might be a digital version of Faraday’s diary that includes images and perhaps even QuickTime movies of replications of his experiments. This kind of study potentially bridges the gap between situated and symbolic studies of discovery.

4

A Common Set of Toothbrushes?

David Klahr, a cognitive psychologist at Carnegie Mellon, has shown a preference for spherical horses, conducting experiments on scientific thinking. However, his experiments have used sophisticated, complex tasks. For example, he and two of his students (also workshop participants), Kevin Dunbar and Jeff Shrager, asked participants in a series of experiments to program a device called the Big Trak, and studied the processes they used to solve this problem. The Big Trak was a battery-powered vehicle that could be programmed, via a keypad, to move according to instructions. One of the keys was labeled RPT, and participants had to discover its function. Following in Herb Simon’s footsteps, Klahr, with Dunbar and Schunn, characterized subjects’ performance as a search in two problem spaces, one occupied by possible experiments, the other by hypotheses [17]. They found that one group of subjects (Theorists) preferred to work in the hypothesis space, proposing about half as many experiments as the second group (Experimenters). Almost all of the former’s experiments were guided by a hypothesis, whereas the latter’s were often simply exploratory. Based on this and other work, Klahr proposed a possible general framework, or shareable toothbrush, for classifying the different kinds of cognitive studies. This general framework is based on multiple problem spaces, and on whether the study was a general one, using an abstract task like the Big Trak, or domain-specific, like Nersessian’s studies of Maxwell [18]. Dunbar, currently at McGill University and moving to Dartmouth in the fall, added to this general framework the idea of classifying experiments based on whether they were in vitro (controlled laboratory experiments) or in vivo (case studies). Computational simulations can be based on either in vivo or in vitro studies. A system for classifying studies of scientific discovery might begin with a 2x2 matrix.
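The dual-space idea can be illustrated with a toy program. The device semantics, the candidate hypotheses about RPT, and all names below are illustrative assumptions rather than materials from the actual Big Trak study; the point is only that experiments are moves that prune the hypothesis space.

```python
# Toy dual-space search in the spirit of Klahr and Dunbar's hypothesis and
# experiment spaces. Semantics and hypotheses are simplified illustrations.

def run_device(program, n):
    """Simulated ground truth: RPT n repeats the last n instructions once."""
    return program + program[-n:]

HYPOTHESES = {
    "repeat last n instructions": lambda prog, n: prog + prog[-n:],
    "repeat whole program n times": lambda prog, n: prog + prog * n,
    "repeat last instruction n times": lambda prog, n: prog + [prog[-1]] * n,
}

def search(experiments):
    """Each experiment (a program plus an RPT argument) is run on the device;
    hypotheses inconsistent with the observed behavior are pruned."""
    live = dict(HYPOTHESES)
    for prog, n in experiments:
        observed = run_device(prog, n)
        live = {name: rule for name, rule in live.items()
                if rule(prog, n) == observed}
    return sorted(live)

# A one-instruction program cannot discriminate among the hypotheses...
print(len(search([(["FWD"], 1)])))  # 3
# ...but two further experiments isolate the device's actual rule.
print(search([(["FWD"], 1), (["FWD", "LEFT"], 1), (["FWD", "LEFT"], 2)]))
# ['repeat last n instructions']
```

A Theorist would pick the discriminating experiments deliberately; an Experimenter would stumble on them by exploration; either way, the same two spaces are being searched.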
Big Trak is an example of an in vitro technique; the work on Maxwell described by Nersessian,
on Faraday by Tweney, and on nuclear fission by Andersen, are examples of in vivo work. The three in vivo research programs did not explicitly distinguish between hypothesis and experiment spaces, but the practitioners studied generated both hypotheses and experiments. The rest of this paper will feature highlights from the workshop that force us to expand and transform this classification scheme (see Table 1).

Dunbar’s work has iterated between in vivo and in vitro studies. The value of in vitro work is the way in which it allows for control and isolation of factors—like the way in which the possibility of error encourages experimental participants to adopt a confirmatory heuristic [19]. Dunbar thinks it is important to compare such findings with what scientists actually do. He has conducted a series of in vivo studies of molecular biology laboratories [20, 21]. Group studies have the heuristic value of forcing people to explain their reasoning. Regarding error, the molecular biologists had evolved special controls to check each step in a complex procedure in order to eliminate error. Dunbar ran an in vitro study in which he found that undergraduate molecular biology students would also employ this kind of control on a task that simulated the kind of reasoning used in molecular biology [22]. Dunbar’s work shows the importance of iterating between in vitro and in vivo studies.

Schunn and his colleagues were interested in how scientists deal with unexpected results, or anomalies. In one study, he videotaped two astronomers interacting over a new set of data concerning the formation of ring galaxies. Schunn found that these researchers noticed anomalies as much as expected results, but paid more attention to the anomalies. The researchers developed hypotheses about the anomalies and elaborated on them visually, whereas they used theory to elaborate on expected results.
When the two astronomers discussed the anomalies, they used terms like ‘the funky thing’ and ‘the dipsy-doodle’, staying at a perceptual rather than a theoretical level. Schunn’s astronomers were working in neither the hypothesis space nor the experiment space; instead, they were working in a space of possible visualizations dependent on their domain-specific experience.

Hanne Andersen, from the University of Copenhagen, described the use of a family resemblance view of taxonomic concepts for understanding the dynamics of conceptual change. She noted that the family resemblance account has been criticized for not being able to distinguish sufficiently between different concepts, the problem of wide-open texture. This limitation could be resolved by including dissimilarity as well as similarity between concepts and by focusing on taxonomies instead of individual concepts. Anomalies can be viewed as violations of taxonomic principles that then lead to conceptual change. Andersen applied this approach to the discovery of nuclear fission, finding that early models of disintegration and atomic structure were revised in light of anomalous experimental results of this taxonomic kind.

Shrager, affiliated with the Department of Plant Biology, Carnegie Institution of Washington, and the Institute for the Study of Learning and Expertise, did a reflective study of his own socialization into phytoplankton molecular biology. In the beginning, he had to be told about every step, even when there were explicit instructions; he needed an extensive apprenticeship. As his knowledge grew, he noted that it was “somewhere between his head and his hands.” As his skill developed, he was able to take some of his attention off the immediate task at hand and understand the purpose of the procedures he was using. On at least one occasion, this came together in the “blink of an eye.” The cognitive framework he found most useful was his own toothbrush: view application [23]. To his surprise, Shrager found that, “What passes for theory in molecular biology is the same thing that passes for a manual in car mechanics.” He found less of a need to keep reflective notes in his diary as he became more proficient, though he continued to record the details of experiments, where particular materials were stored, and all the other procedural details that are vital to a molecular biologist. He commented that, “if you lose your lab notebook, you’re hosed.”

David Gooding indicated that more abstract computational models of the spherical horse variety have not worked well for him. For him, “the beauty is in the dirt.” In collaboration with Tom Addis, a computer scientist, he evolved a detailed computational scheme for representing Faraday’s experiments, hypotheses and construals [24]. Gooding thought that communication ought to be added to the matrix proposed by Klahr and Dunbar (see Table 1).

Paul Thagard, from the University of Waterloo, has been gathering ideas from leaders in the field about what it takes to be a successful scientist. According to Herb Simon, one should not work on what everyone else is working on, and one needs to have a secret weapon, in his case computational modeling. As part of a case study, Thagard interviewed a microbiologist, Patrick Lee, who accidentally discovered that a common virus has potential as a treatment for cancer. The discovery was the result of a “stupid” experiment in viral replication done by one of Lee’s graduate students. The “stupid” experiment produced an anomalous result that eventually led to the generation of a new hypothesis about the virus. This chain of events is an example of abductive hypothesis formation, in which hypotheses are generated and evaluated in order to explain data. Once a hypothesis that fit the data had been generated, the researchers used deduction to arrive at the further hypothesis that the virus could kill cancer cells.
Thagard raised the questions of how one decides what experiments to do and how one determines what counts as a good experiment. These questions are a critical part of the cognitive processes involved in discovery. Thagard is also looking at the role of emotions in scientific inquiry, in judgments about potential experiments, in reactions to unexpected results, and in reactions to successful experiments (Thagard’s model of emotions and science: http://cogsci.uwaterloo.ca). Thagard suggested adding a space of questions to the Klahr framework.

Robert Rosenwein, a sociologist at Lehigh, presented an in vitro simulation of science (SCISIM) that comes close to an in vivo environment [25]. Students in a class like Gorman’s Scientific and Technological Thinking (http://128.143.168.25/classes/200R/tcc200rf00.html) take on a variety of social roles in science. Some work in competing labs, others run funding agencies, still others run a journal and a newsletter. The students in the labs try to get funding for their experiments, and then publish the results. They do not carry out the kinds of fine-grained experimental processes done by participants in Big Trak; instead, they choose the variables they want to combine in an experiment, select a level of precision, and are given a result. Experiments cost ‘simbucks’ and salaries have to be paid, so there is continual pressure to fund the lab. There is a group of independent scientists as well, who have to decide which line of research to pursue. SCISIM adds another column to the matrix, for simulation of pursuit decisions. Pursuit decisions concern which research program to seek funding for (see Table 1). Such decisions are usually made within a network of enterprises.

Marin Simina, a cognitive scientist at Tulane, described a computational simulation of Alexander Graham Bell’s network of enterprises. Howard Gruber coined the term ‘network of enterprises’ to describe the way in which Darwin pursued multiple projects that eventually played a synergistic role in his theory of evolution [26]. Similarly, Alexander Graham Bell had two major enterprises in 1873: making speech visible to the deaf, and sending multiple messages down a single wire. These enterprises were synthesized in his patent for a speaking telegraph, which focused on the type of current that would have to be used to transmit and receive speech [27, 28]. Simina created a program called ALEC, which simulated the discovery Bell made on June 2, 1875. At that time, Bell’s primary goal was to reach fame and fortune by solving the problem of multiple telegraphy; he had suspended the goal of transmitting speech because his mental model for a transmitter contained an indefinite number of metal reeds—it was not clear how such a device could be built. On that day, a single tuned reed transmitted multiple tones with sufficient volume to serve as a transmitter for the human voice. Bell was not seeking this result; he wanted the reed to transmit only a single tone. But this serendipitous result allowed him to activate his suspended goal and instruct Watson to build the first telephone [29]. ALEC was able to simulate both the suspension of the goal and the way the serendipitous result primed Bell to reactivate it.

5

Collaboration and Invention

Gary Bradshaw, a cognitive scientist at Mississippi State and a collaborator with Herb Simon, talked about “stepping off Herb’s shoulders into his shadow.” In a study of the Wright Brothers, he adapted Klahr’s framework to invention, creating three spaces: function, hypothesis and design [30]. One of the major reasons the Wrights succeeded where others failed was that the brothers decomposed the problem into separate functions—like vertical lift, horizontal stability, and turning. Other inventors worked primarily in a design space, adding features like additional wings without the careful functional analysis done by the Wrights. This suggests that function and design spaces ought to be added for inventors (see Table 1).

To see how well his framework of invention work-spaces held up, Bradshaw tried another case—the rocket boys from West Virginia, immortalized in a book by Homer Hickam [31] and in the film October Sky [32]. Their problem of rocket construction could be decomposed into multiple spaces, but a complete factorial of all the possible variations would come close to two million cells, so they could not follow the strategy called Vary One Thing at a Time (VOTAT); they did not have the resources. Although the elements of the rocket construction were not completely separable, they tested some variables in isolation, such as fuel mixtures in bottles. They also did careful post-launch inspection, and used theory to reduce the problem space; for example, they used calculus to derive their nozzle shape. They built knowledge as they went along, taking good notes. Team members also took different roles—one was more of a scientist, another more of an engineer and project manager. Tweney argued from his own experience that the rocket system was much less decomposable than suggested by Bradshaw’s analysis and that the West Virginia group seemed to hit upon some serendipitous decompositions.
Tweney’s rocket group was stronger in chemistry, so they used theory to create the fuel, and copied the nozzle design. Both were post-Sputnik groups active during the late 1950s, although Tweney insists that his was a less serious “rocket boy” group than the one studied by Hickam.
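The resource argument can be made concrete with back-of-the-envelope arithmetic. The factor names and level counts below are hypothetical (the text does not enumerate them), chosen only to show the order of magnitude involved:

```python
from math import prod

# Hypothetical rocket-design factors and level counts (illustrative only).
factors = {
    "fuel mixture": 20,
    "nozzle shape": 8,
    "nozzle throat diameter": 6,
    "body length": 5,
    "fin design": 10,
    "casing material": 4,
    "launch angle": 5,
    "propellant packing": 2,
}

# A complete factorial design tests every combination of levels.
full_factorial = prod(factors.values())

# VOTAT: fix a baseline launch, then vary one factor at a time across
# its remaining levels. Valid only if the factors do not interact.
votat_runs = 1 + sum(n - 1 for n in factors.values())

print(full_factorial)  # 1920000 -- the order of the 'two million cells'
print(votat_runs)      # 53 -- a feasible number of test launches
```

The contrast also shows why VOTAT was not really available to them: its small run budget assumes the factors do not interact, and the rocket’s variables were not separable in that way, which is why theory and serendipitous decompositions had to carry so much of the load.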


Mehalik, a systems engineer at the University of Virginia, developed a framework which combined Hutchins’ analysis of distributed cognition ‘in the wild’ [12] with three states or stages in actor networks:

1. A top-down state in which one actor or group of actors controls the research program and tells others what to do.

2. A trading zone state in which no group of actors has a comprehensive view, but all are connected by a boundary object that each sees differently. Peter Galison uses particle detectors as an example of this sort of boundary object [34].

3. A shared representation state in which all actors have a common perspective on what needs to be accomplished, even if there is still some division of labor based on skills, aptitude and expertise.

Mehalik applied this framework to the invention of an environmentally sustainable furniture fabric by a global group. This network began with a shared mental model based on an analogy to nature, then struggled to settle into a stable trading zone in which participants would trade economic benefits and prestige. The resulting fabric has won almost a dozen major environmental awards and is seen as a leading example of innovative environmental design. Klahr suggested that Mehalik’s research might add another dimension to his overall framework: capturing work in groups and teams. It might be possible to take each of the major actants studied by Mehalik, look at what spaces they worked in, then show links between them and their different activities. Tweney raised an important question about distributed cognition—could intra-individual cognition be modeled in a way similar to inter-individual cognition by including the three-state framework?

Michael Hertz, from the University of Virginia, developed a tool for determining causal attribution, and applied it to Monsanto’s initially unsuccessful introduction of GMOs into Europe.
The tool did not allow Hertz to identify a primary cause, but it did reduce the complexity of the decision space for students studying the Monsanto case and trying to determine who or what was at fault. Shrager suggested implementing this tool in an Echo network that would incorporate interaction with the decision-makers themselves. Bernie Carlson raised the question of when it is useful to quantify decision situations, again relating to the theme of balancing the use of a tool to reduce complexity in a decision situation against maintaining contextual validity. Ryan Tweney raised the issue of using Hertz’s framework in a predictive sense: the dynamic complexity of the situation may make prediction too difficult; however, prediction is what a company such as Monsanto may be most interested in. Hertz responded by saying that the act of trying to identify causes has heuristic value, especially if a tool helps Monsanto distinguish between the relative roles of factors it can influence and factors that are largely beyond its control. Decision aids and simulations simplify complex situations; decision-makers need to remember that these simplifications may not accurately reflect all important aspects of the underlying situation, including complex, dynamic interactions among variables.

Thomas Hughes, a historian of technology, talked about his analysis of collective invention in large-scale systems like the development of the Atlas and Polaris missiles [35]. He extolled the virtues of systems management techniques and the benefits of isolating scientists from bureaucracy. Project management and oversight functions change with the size of the group, and management becomes more explicitly needed with larger groups. Without sufficient oversight, large projects can be too diffuse and
inefficient. Dunbar suggested that having this kind of systems management was one reason why the privately funded Celera outperformed the publicly funded Human Genome Project.

William (Chip) Levy, from the Department of Neurosurgery at the University of Virginia, described a neural network that models results of an implicit learning experiment. He uses the model as an illustration of how variability can be an adaptive property in biological terms. Complex systems, like brains and like neural network models, benefit from the random fluctuations of noise. Eliminating variability in these systems would sacrifice too much memory capacity. Variability exists both within and between individuals.

Levy’s research highlights the role of tacit knowledge in discovery and invention. Sociologists of science and technology emphasize the tacit dimension [36, 37]. There is a growing cognitive literature on implicit knowledge in psychology [38, 39], but this literature does not connect directly to discovery and invention. Several conference participants mentioned tacit knowledge. Robert Matthews, a cognitive psychologist at Louisiana State and one of the leading researchers on implicit learning [40], predicted that Dunbar’s scientists would be unable to explain why they did what they did. Dunbar responded that the scientists’ after-the-fact stories about how they did what they did had nothing to do with their actual processes. Schunn noted Karmiloff-Smith’s three stages of learning, in which the second stage means you can do something without being able to explain it, and the third stage involves reflection [41]. The way to become aware of one’s implicit knowledge is to watch oneself, which can interfere with performance.

Maria Ippolito, from the University of Alaska, compared the creative process exhibited in the writings of Virginia Woolf to that used by scientists.
Ippolito offered Woolf as an example of a scientific thinker in a more general sense and constructed a multi-dimensional database using Woolf’s writings. Through the examination of Woolf’s development as a writer, Ippolito investigated the psychological processes of creative problem solving, including heuristics, scripts and schemata, development of expertise, and search of unstructured problem spaces.

Elke Kurz, from the University of Tübingen, commented on two studies in which she observed the softening of often-perceived boundaries between cognitive-historical case study analysis and in-laboratory analyses. She examined how scientists and mathematicians used different representational systems, such as variant forms of the calculus, when problem solving. These differences can be traced to historical developments in the different scientific fields. Such historical developments invite historical case analysis as a necessary part of the study of the conceptual resources these different scientists possessed. Kurz also replicated experiments involving perception of size constancy that had been done earlier by Brunswik. During the attempts at replication, Kurz noted how Brunswik needed to constrain the participants’ agency into forms that he found tolerable in the context of his experiment. Kurz stated that the construction of this context of acceptable agency is a process worth studying using historical case methods, again complementing the in-laboratory style of investigation. Finally, Kurz reported on the difficulties of attempting a replication of a previous experiment because of the changes in many contextual conditions between the original experiment and the replication. This situation again invites the crossing of any perceived boundary between the case study and in-laboratory approaches.

6

Lessons Learned

The workshop illustrates that toothbrushes can be shared. The example we used in this paper was the Simon/Klahr multiple spaces framework. Table 1 summarizes the potential spaces identified in the workshop.

Table 1. Different search spaces identified by participants in the workshop. Asterisks (*) denote computational simulations, a kind of ‘spherical horse’ that can be based on either in vivo or in vitro studies. Entries marked (I) are spaces unique to invention.

Search Spaces               In Vitro            In Vivo
Hypotheses                  Big Trak, SCISIM    Maxwell, Faraday
Experiments                 Big Trak, SCISIM    Maxwell, Faraday
Pursuit                     SCISIM              ALEC*
Communication               SCISIM              Faraday
Embodied knowledge          --                  Faraday, Shrager
Taxonomies                  --                  Nuclear fission
Visualizations              --                  Galatea*, Schunn's astronomers
Questions                   --                  Patrick Lee
Links in a social network   --                  Hughes, Mehalik
Function (I)                --                  Wright brothers, rocket boys
Design (I)                  --                  Wright brothers, rocket boys

The problem with this framework is that each study seemed to suggest the need for yet another space. There is not always a clear line of demarcation between spaces. For example, SCISIM incorporates in vivo cases, which means that it can exist in a kind of gray zone between in vitro and in vivo. Visualizations can be thought experiments, ways of seeing the data, and mental models of a device or even of a social network. Despite its shortcomings, this framework has heuristic value, both for organizing research already done and for suggesting directions for future work. For example, only Bradshaw has worked with function and design spaces, and there is no in vitro work on invention. Mehalik’s work demonstrated the need for mapping movements among spaces across individuals over time. What would happen if we added time-scale to the framework? Schunn suggested that visualizations happen most rapidly, with experiments and hypotheses taking longer, and taxonomies even longer.3 Hughes and Mehalik remind us that time-scale is partly dependent on the extent to which each of these activities depends on network-building. This framework is also general enough to facilitate comparisons between discovery, invention and artistic creation, as Ippolito noted. More comparisons of this sort are needed.

3 Personal communication.

7

Future of Cognitive Studies of Science and Technology

Bruce Seely, a historian of technology on rotation at the NSF’s Science and Technology Studies program, felt that the workshop showed how cognitive studies of science and technology had grown in sophistication, highlighting the creators of new knowledge in ways that complemented studies of users by other STS disciplines. Tiha von Ghyczy, representing the Strategic Institute of the Boston Consulting Group, noted that managers are happy to use any toothbrush that will help them improve their business strategies, and that they are more concerned about practical results than methodological foundations. Still, he felt that managers would find lessons from the workshop interesting. Strategies have a very short half-life; a successful strategy is quickly imitated by competitors. Therefore, original thinking is essential for business survival. Besides business strategy and science-technology studies, a cognitive approach to invention and discovery should also inform work in ‘mainstream’ cognitive science. Theories and frameworks need to be able to deal in a rigorous way with the shared and distributed character of scientific and technological problem solving, and also with its tacit dimension. We hope this workshop will suggest the outlines that more sophisticated theories and models might take. Ideally, anyone building a computational model or decision aid for discovery would base it on one or more fine-grained case studies. Tweney and Dunbar have had particularly good success combining in vitro and in vivo approaches. We hope this workshop will encourage more collaborations between those trained in spherical-horse approaches and those capable of going deeply into the details of particular discoveries and inventions.

References

1. Christensen, C.M., The innovator’s dilemma: When new technologies cause great firms to fail. 1997, Boston: Harvard Business School Press.
2. Evans, P. and T.S. Wurster, Blown to bits: How the new economics of information transforms strategy. 2000, Boston: Harvard Business School Press.
3. Nonaka, I. and H. Takeuchi, The knowledge-creating company: How Japanese companies create the dynamics of innovation. 1995, New York: Oxford University Press.
4. Gorman, M.E., et al., Alexander Graham Bell, Elisha Gray and the Speaking Telegraph: A Cognitive Comparison. History of Technology, 1993. 15: p. 156.
5. Shrager, J. and P. Langley, eds., Computational Models of Scientific Discovery and Theory Formation. 1990, San Mateo, CA: Morgan Kaufmann.
6. Simon, H.A., P.W. Langley, and G. Bradshaw, Scientific discovery as problem solving. Synthese, 1981. 47: p. 1-27.
7. Slezak, P., Scientific discovery by computer as empirical refutation of the Strong Programme. Social Studies of Science, 1989. 19(4): p. 563-600.
8. Langley, P., H.A. Simon, G.L. Bradshaw, and J.M. Zytkow, Scientific Discovery: Computational Explorations of the Creative Processes. 1987, Cambridge, MA: MIT Press.
9. Bradshaw, G.L., P. Langley, and H.A. Simon, Studying scientific discovery by computer simulation. Science, 1983. 222: p. 971-975.
10. Vera, A.H. and H.A. Simon, Situated action: A symbolic interpretation. Cognitive Science, 1993. 17(1): p. 7-48.
11. Greeno, J.G. and J.L. Moore, Situativity and symbols: Response to Vera and Simon. Cognitive Science, 1993. 17: p. 49-59.
12. Hutchins, E., Cognition in the Wild. 1995, Cambridge, MA: MIT Press.
13. Norman, D.A., Things That Make Us Smart: Defending Human Attributes in the Age of the Machine. 1993, New York: Addison-Wesley.
14. Donald, M., Origins of the Modern Mind: Three Stages in the Evolution of Culture and Cognition. 1991, Cambridge, MA: Harvard University Press.
15. Nersessian, N., How do scientists think? Capturing the dynamics of conceptual change in science, in Cognitive Models of Science, R.N. Giere, Editor. 1992, University of Minnesota Press: Minneapolis. p. 3-44.
16. Suchman, L.A., Plans and Situated Actions: The Problem of Human-Machine Communication. 1987, Cambridge: Cambridge University Press.
17. Klahr, D., Exploring Science: The Cognition and Development of Discovery Processes. 2000, Cambridge, MA: MIT Press.
18. Klahr, D. and H.A. Simon, Studies of scientific discovery: Complementary approaches and convergent findings. Psychological Bulletin, 1999. 125(5): p. 524-543.
19. Gorman, M.E., Simulating Science: Heuristics, Mental Models and Technoscientific Thinking. Science, Technology and Society series, ed. T. Gieryn. 1992, Bloomington: Indiana University Press.
20. Dunbar, K., How scientists really reason: Scientific reasoning in real-world laboratories, in The Nature of Insight, R.J. Sternberg and J. Davidson, Editors. 1995, MIT Press: Cambridge, MA. p. 365-396.
21. Dunbar, K., How scientists think, in Creative Thought, T.B. Ward, S.M. Smith, and J. Vaid, Editors. 1997, American Psychological Association: Washington, DC.
22. Dunbar, K., Scientific reasoning strategies in a simulated molecular genetics environment, in Program of the Eleventh Annual Conference of the Cognitive Science Society. 1989, Lawrence Erlbaum Associates: Ann Arbor, MI.
23. Shrager, J., Commonsense perception and the psychology of theory formation, in Computational Models of Scientific Discovery and Theory Formation, J. Shrager and P. Langley, Editors. 1990, Morgan Kaufmann: San Mateo, CA. p. 437-470.
24. Gooding, D.C. and T.R. Addis, Modelling Faraday’s experiments with visual functional programming 1: Models, methods and examples. 1993, Joint Research Councils’ Initiative on Cognitive Science & Human Computer Interaction, Special Project Grant #9107137.
25. Gorman, M. and R. Rosenwein, Simulating social epistemology. Social Epistemology, 1995. 9(1): p. 71-79.
26. Gruber, H., Darwin on Man: A Psychological Study of Scientific Creativity. 2nd ed. 1981, Chicago: University of Chicago Press.
27. Gorman, M.E. and J.K. Robinson, Using History to Teach Invention and Design: The Case of the Telephone. Science and Education, 1998. 7: p. 173-201.
28. Simina, M., Enterprise-directed reasoning: Opportunism and deliberation in creative reasoning, in Cognitive Science. 1999, Georgia Institute of Technology: Atlanta, GA.
29. Gorman, M.E., Transforming Nature: Ethics, Invention and Design. 1998, Boston: Kluwer Academic Publishers.
30. Bradshaw, G., The Airplane and the Logic of Invention, in Cognitive Models of Science, R.N. Giere, Editor. 1992, University of Minnesota Press: Minneapolis. p. 239-250.
31. Hickam, H.H., Rocket Boys: A Memoir. 1998, New York: Delacorte Press.
32. Gordon, C. (producer) and J. Johnston (director), October Sky. 1999, Universal Studios: Universal City, CA.
33. Tweney, R.D., Scientific Thinking: A cognitive-historical approach, in Designing for Science: Implications for Everyday, Classroom, and Professional Settings, K. Crowley, C.D. Schunn, and T. Okada, Editors. 2001, Lawrence Erlbaum Associates: Mahwah, NJ. p. 141-173.
34. Galison, P.L., Image and Logic: A Material Culture of Microphysics. 1997, Chicago: University of Chicago Press.
35. Hughes, T.P., Rescuing Prometheus. 1998, New York: Pantheon Books.
36. Collins, H.M., Tacit knowledge and scientific networks, in Science in Context: Readings in the Sociology of Science, B. Barnes and D. Edge, Editors. 1982, MIT Press: Cambridge, MA.
37. MacKenzie, D. and G. Spinardi, Tacit knowledge, weapons design, and the uninvention of nuclear weapons. American Journal of Sociology, 1995. 101(1): p. 44-99.
38. Berry, D.C., ed., How Implicit Is Implicit Learning? 1997, Oxford University Press: Oxford.
39. Dienes, Z. and J. Perner, A Theory of Implicit and Explicit Knowledge. Behavioral and Brain Sciences, 1999. 22(5).
40. Matthews, R.C. and L.G. Roussel, Abstractness of implicit knowledge: A cognitive evolutionary perspective, in How Implicit Is Implicit Learning?, D.C. Berry, Editor. 1997, Oxford University Press: Oxford. p. 13-47.
41. Karmiloff-Smith, A., From meta-process to conscious access: Evidence from children’s metalinguistic and repair data. Cognition, 1986. 23(2): p. 95-147.

29. 30. 31. 32. 33.

34. 35. 36. 37.

38. 39. 40. 41.

Functional Trees

João Gama
LIACC, FEP - University of Porto
Rua Campo Alegre, 823, 4150 Porto, Portugal
Phone: (+351) 226078830  Fax: (+351) 226003654
jgama@liacc.up.pt  http://www.niaad.liacc.up.pt/~jgama

Abstract. The design of algorithms that explore multiple representation languages and different search spaces has an intuitive appeal. In the context of classification problems, algorithms that generate multivariate trees are able to explore multiple representation languages by using decision tests based on a combination of attributes. The same applies to model tree algorithms in regression domains, but using linear models at leaf nodes. In this paper we study where to use combinations of attributes in regression and classification tree learning. We present an algorithm for multivariate tree learning that combines a univariate decision tree with a linear function by means of constructive induction. This algorithm is able to use decision nodes with multivariate tests, and leaf nodes that make predictions using linear functions. Multivariate decision nodes are built when growing the tree, while functional leaves are built when pruning the tree. The algorithm has been implemented both for classification problems and for regression problems. The experimental evaluation shows that our algorithm has clear advantages in generalization ability when compared against its components and two simplified versions, and that it competes well against the state of the art in multivariate regression and classification trees.

Keywords: Decision Trees, Multiple Models, Supervised Machine Learning.

1 Introduction

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 59-73, 2001.
© Springer-Verlag Berlin Heidelberg 2001

The generalization ability of a learning algorithm depends on the appropriateness of its representation language to express a generalization of the examples for the given task. Different learning algorithms employ different representations, search heuristics, evaluation functions, and search spaces. It is now commonly accepted that each algorithm has its own selective superiority [3]; each is best for some but not all tasks. The design of algorithms that explore multiple representation languages and different search spaces has an intuitive appeal. This paper presents one such algorithm. In the context of supervised learning problems it is useful to distinguish between classification problems and regression problems. In the former the target


variable takes values in a finite and pre-defined set of un-ordered values, and the usual goal is to minimize a 0-1 loss function. In the latter the target variable is ordered and takes values in a subset of ℝ; the usual goal is to minimize a squared error loss function. Mainly due to these differences in the type of the target variable, successful techniques for one type of problem are not directly applicable to the other. The supervised learning problem is to find an approximation to an unknown function given a set of labelled examples. Several methods have been presented in the literature to solve this problem. Two of the most representative are the general linear model and decision trees. The two methods explore different hypothesis spaces and use different search strategies. In the former, the goal is to minimize the sum of squared deviations of the observed values of the dependent variable from those predicted by the model. It is based on the algebraic theory of invariants and has an analytical solution. The description language of the model takes the form of a polynomial that, in its simplest form, is a linear combination of the attributes: w0 + Σi wi·xi. This is the basic idea behind linear regression and discriminant functions [8]. The latter uses a divide-and-conquer strategy: the goal is to decompose a complex problem into simpler problems, recursively applying the same strategy to the sub-problems. Solutions of the sub-problems are combined in the form of a tree. Its hypothesis space is the set of all possible hyper-rectangular regions. The power of this approach comes from the ability to split the space of the attributes into subspaces, whereby each subspace is fitted with a different function. This is the basic idea behind well-known tree-based algorithms [2,13]. In the case of classification problems, a class of algorithms that explore multiple representation languages are the so-called multivariate trees [2,20,12,6,11].
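As a concrete illustration of the least-squares fit behind such a linear model, here is a single-attribute version in pure Python (the function name and data are our illustrative choices, not from the paper):

```python
def fit_linear(xs, ys):
    """Least-squares fit of y = w0 + w1*x (single attribute for brevity),
    minimizing the sum of squared deviations from the observed values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    w1 = sxy / sxx
    return my - w1 * mx, w1

# Points on the exact line y = 2x + 1 are recovered exactly.
w0, w1 = fit_linear([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

The closed-form solution is what gives the general linear model its analytical character, in contrast to the search performed by tree induction.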
In this sort of algorithm, decision nodes can contain tests based on a combination of attributes. The language bias of univariate decision trees (axis-parallel splits) is relaxed, allowing decision surfaces that are oblique with respect to the axes of the instance space. As in the case of classification problems, some authors have studied regression trees that explore multiple representation languages, here denominated model trees [2,13,15,21,18]. But while in classification problems multivariate decisions appear in internal nodes, in regression problems multivariate decisions appear in leaf nodes. The problem that we study in this paper is where to use decisions based on combinations of attributes. Should we restrict combinations of attributes to decision nodes? Should we restrict them to leaf nodes? Could we use combinations of attributes both at decision nodes and at leaf nodes? The algorithm that we present here is an extension of multivariate trees: it is applicable to regression and classification domains, allowing combinations of attributes both at decision nodes and at leaves. In the next section of the paper we describe our proposal for functional trees. In Section 3 we discuss the different variants of multivariate models using an illustrative example on regression domains. In Section 4 we present related work in both the classification and regression settings. In Section 5 we evaluate our algorithm on a set of benchmark regression and classification problems. The last section concludes the paper.

2 The Algorithm for Constructing Functional Trees

The standard algorithm to build univariate trees consists of two phases. In the first phase a large tree is constructed; in the second phase this tree is pruned back. The algorithm to grow the tree follows the standard divide-and-conquer approach. The most relevant aspects are the splitting rule, the termination criterion, and the leaf assignment criterion. With respect to the last criterion, the usual rule is to assign a constant to a leaf node: considering only the examples that fall at this node, the constant is usually the majority class in classification problems or the mean of the y values in the regression setting. With respect to the splitting rule, each attribute value defines a possible partition of the dataset. We distinguish between nominal attributes and continuous ones. In the former the number of partitions is equal to the number of values of the attribute; in the latter a binary partition is obtained. To estimate the merit of the partition obtained by a given attribute we use the gain-ratio heuristic for classification problems and the decrease-in-variance criterion for regression problems. In either case, the attribute that maximizes the criterion is chosen as the test attribute at this node. The pruning phase consists of traversing the tree in a depth-first fashion. At each non-leaf node two measures are estimated: the error of the subtree rooted at this node, computed as a weighted sum of the estimated errors of the leaves of the subtree, and the estimated error of the node if it were pruned to a leaf. If the latter is lower than the former, the entire subtree is replaced by a leaf. All of these aspects have several important variants; see for example [2,14]. Nevertheless, all decision nodes contain conditions based on the values of one attribute, and leaf nodes predict a constant.
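A minimal sketch of the decrease-in-variance splitting rule for a continuous attribute (function names and data are our illustrative choices; the paper's implementation may differ):

```python
def variance(ys):
    """Population variance of a list of target values."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_binary_split(xs, ys):
    """Find the threshold t of a continuous attribute that maximizes the
    decrease in variance for the binary partition x <= t / x > t."""
    parent = variance(ys)
    pairs = sorted(zip(xs, ys))
    best_t, best_gain = None, 0.0
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no valid threshold between equal attribute values
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        w = len(left) / len(pairs)
        gain = parent - (w * variance(left) + (1 - w) * variance(right))
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain

# A clean step in y is found at the midpoint threshold 2.5.
threshold, gain = best_binary_split([1.0, 2.0, 3.0, 4.0], [0.0, 0.0, 10.0, 10.0])
```

For classification, the same scan would score each candidate threshold with the gain-ratio heuristic instead of the variance decrease.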

2.1 Functional Trees

In this section we present the general algorithm to construct a functional tree. Given a set of examples and an attribute constructor, the main algorithm used to build a functional tree is presented in Figure 1. This algorithm is similar to many others, except in the constructive step (steps 2 and 3), where a function is built and mapped to new attributes. Some aspects of this algorithm should be made explicit. In step 2, a model is built using the Constructor function. This is done using only the examples that fall at this node. Later, in step 3, the model is mapped to new attributes. The constructor function should be a classifier or a regressor, depending on the type of the problem. In the case of regression problems the constructor function is mapped to one new attribute: the ŷ value predicted by the constructor. In the case of classification problems the number of new attributes is equal to the number of classes.


Function Tree(Dataset, Constructor)
1. If Stop Criterion(DataSet)
   - Return a Leaf Node with a constant value.
2. Construct a model Φ using Constructor.
3. For each example x ∈ DataSet
   - Compute ŷ = Φ(x).
   - Extend x with a new attribute ŷ.
4. Select the attribute, from both the original and the newly constructed attributes, that maximizes some merit function.
5. For each partition i of the DataSet using the selected attribute
   - Tree_i = Tree(Dataset_i, Constructor)
6. Return a Tree, as a decision node based on the selected attribute, containing the model Φ and descendants Tree_i.
End Function

Fig. 1. Building a functional tree.
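For the regression case, the procedure of Figure 1 can be sketched in pure Python as follows. The dataset layout, the trivial echo-style constructor used in the demonstration, and the decrease-in-variance merit function are our illustrative choices, not the paper's implementation:

```python
from statistics import mean, pvariance

def grow(examples, constructor, min_leaf=5):
    """Sketch of Fig. 1 (regression): examples are (attributes, y) pairs;
    `constructor` fits a model on the examples of a node and returns a
    prediction function, whose output becomes one new attribute."""
    ys = [y for _, y in examples]
    if len(examples) < min_leaf or pvariance(ys) == 0.0:
        return {"leaf": True, "value": mean(ys)}               # step 1
    phi = constructor(examples)                                # step 2
    extended = [(xs + [phi(xs)], y) for xs, y in examples]     # step 3
    best = None                                                # step 4
    for a in range(len(extended[0][0])):
        vals = sorted({xs[a] for xs, _ in extended})
        for lo, hi in zip(vals, vals[1:]):
            t = (lo + hi) / 2
            left = [e for e in extended if e[0][a] <= t]
            right = [e for e in extended if e[0][a] > t]
            w = len(left) / len(extended)
            gain = pvariance(ys) - (w * pvariance([y for _, y in left])
                                    + (1 - w) * pvariance([y for _, y in right]))
            if best is None or gain > best[0]:
                best = (gain, a, t, left, right)
    if best is None or best[0] <= 0:
        return {"leaf": True, "value": mean(ys)}
    _, a, t, left, right = best                                # steps 5-6
    def strip(part):  # the constructed attribute is rebuilt at each node
        return [(xs[:-1], y) for xs, y in part]
    return {"leaf": False, "attr": a, "thr": t, "phi": phi,
            "left": grow(strip(left), constructor, min_leaf),
            "right": grow(strip(right), constructor, min_leaf)}

def predict(node, xs):
    while not node["leaf"]:
        ext = xs + [node["phi"](xs)]
        node = node["left"] if ext[node["attr"]] <= node["thr"] else node["right"]
    return node["value"]

# Demonstration with a trivial "constructor" that just echoes attribute 0.
examples = ([([float(i)], 0.0) for i in range(5)]
            + [([float(i)], 10.0) for i in range(5, 10)])
tree = grow(examples, lambda ex: (lambda xs: xs[0]), min_leaf=2)
```

A real constructor would be a fitted linear model (regression) or a probabilistic classifier (classification); the key point is that its predictions compete with the original attributes in step 4.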

Each new attribute is the probability, given by the constructed model, that the example belongs to one class (at different nodes the system considers a different number of classes, depending on the class distribution of the examples that fall at the node). The merit of each new attribute is evaluated using the merit function of the univariate tree, in competition with the original attributes (step 4). The model built by our algorithm has two types of decision nodes: those based on a test of one of the original attributes, and those based on the values of the constructor function. When using generalized linear models (GLM) [16] as the attribute constructor, each new attribute is a linear combination of the original attributes, and decision nodes based on constructed attributes define a multivariate decision surface. Once a tree has been constructed, it is pruned back. The general algorithm to prune the tree is presented in Figure 2. To estimate the error at each leaf (step 1) we distinguish between classification and regression problems. In the former we assume a binomial distribution, using a process similar to the pessimistic error of C4.5. In the latter we assume a χ² distribution of the variance of the cases, using a process similar to the χ² pruning described in [18]. A similar procedure is used to estimate the constructor error (step 3). The pruning algorithm produces two different types of leaves: ordinary leaves, which predict a constant, and constructor leaves, which predict the value of the constructor function learned (in the growing phase) at this node. By simplifying our algorithm we obtain different conceptual models. Two interesting lesioned variants are described in the following sub-sections.

Bottom-Up Approach. We denote as the bottom-up approach to functional trees the case where the functional models are used exclusively at the leaves.


Function Prune(Tree)
1. Estimate Leaf Error as the error at this node.
2. If Tree is a leaf, Return Leaf Error.
3. Estimate Constructor Error as the error of Φ.
4. For each descendant i
   - Backed Up Error += Prune(Tree_i)
5. Depending on argmin(Leaf Error, Constructor Error, Backed Up Error):
   - Leaf Error: Tree = Leaf; Tree Error = Leaf Error
   - Constructor Error: Tree = Constructor Leaf; Tree Error = Constructor Error
   - Backed Up Error: Tree Error = Backed Up Error
6. Return Tree Error
End Function

Fig. 2. Pruning a functional tree.

This is the strategy used, for example, in M5 [15,21] and in the NBtree system [10]. In our tree algorithm this is done by restricting the selection of the test attribute (step 4 of the growing algorithm) to the original attributes. Nevertheless, we still build the constructor function at each node; the model built by the constructor function is used later, in the pruning phase. In this way, all decision nodes are based on the original attributes, while leaf nodes may contain a constructor model. A leaf node contains a constructor model if and only if, in the pruning algorithm, the estimated error of the constructor model is lower than both the backed-up error and the estimated error of the node if it were replaced by a leaf.

Top-Down Approach. We denote as the top-down approach to functional trees the case where the multivariate models are used exclusively at decision (internal) nodes. In our algorithm, these kinds of models are obtained by restricting the pruning algorithm to choose only between the backed-up error and the leaf error. In this case all leaves predict a constant value. This is the strategy used, for example, in systems like LMDT [20], OC1 [12], and Ltree [6]. Functional trees extend and generalize multivariate trees. Our algorithm can be seen as a hybrid model that performs a tight combination of a univariate tree and a GLM function. The components of the hybrid algorithm use different representation languages and search strategies. While the tree uses a divide-and-conquer method, the linear regression performs a global minimization approach. While the former performs feature selection, the latter uses all (or almost all) the attributes to build a model. From the point of view of the bias-variance


decomposition of the error [1], a decision tree is known to have low bias but high variance, while GLM functions are known to have low variance but high bias. This is the desirable behaviour for the components of hybrid models.
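The pruning choice of Figure 2 can be sketched in pure Python as follows. The pessimistic and χ² error estimates are abstracted here as caller-supplied functions; the names and dictionary layout are our illustrative choices:

```python
def prune(node, leaf_error, constructor_error):
    """Sketch of Fig. 2: at each non-leaf node keep the cheapest of three
    options: prune to a constant (ordinary) leaf, prune to a constructor
    leaf, or keep the subtree (backed-up error)."""
    err_leaf = leaf_error(node)
    if node["leaf"]:
        return err_leaf
    err_ctor = constructor_error(node)
    backed_up = sum(prune(child, leaf_error, constructor_error)
                    for child in node["children"])
    best = min(err_leaf, err_ctor, backed_up)
    if best == err_leaf:
        node.clear(); node.update({"leaf": True, "kind": "ordinary"})
    elif best == err_ctor:
        node.clear(); node.update({"leaf": True, "kind": "constructor"})
    return best

# The constructor leaf (error 3.0) beats both pruning to an ordinary leaf
# (5.0) and keeping the subtree (2.0 + 2.0 = 4.0).
tree = {"leaf": False, "children": [{"leaf": True}, {"leaf": True}]}
pruned_error = prune(tree, lambda n: 2.0 if n["leaf"] else 5.0, lambda n: 3.0)
```

Dropping the constructor-leaf option from the three-way minimum yields the top-down lesioned variant described above.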

3 An Illustrative Example

In this section we use the well-known regression dataset Housing to illustrate the different variants of functional models. The attribute constructor used is the linear regression function. Figure 3(a) presents a univariate tree for the Housing

Fig. 3. (a) The univariate regression tree and (b) the top-down regression tree for the Housing problem.

dataset. Decision nodes contain only tests based on the original attributes. Leaf nodes predict the average of the y values of the examples that fall at the leaf. In a top-down multivariate tree (Figure 3(b)) decision nodes may (but need not) contain tests based on a linear combination of the original attributes. The tree contains a mixture of learned attributes, denoted LR Node, and original attributes, e.g. AGE, DIS. Any of the linear-regression attributes can be used both at the node where it was created and at deeper nodes. For example, LR Node 19 was created at the second level of the tree; it is used as the test attribute at this node and also (due to the constructive ability) as a test attribute at the third level of the tree. Leaf nodes predict the average of the y values of the examples that fall at the leaf. In a bottom-up multivariate tree (Figure 4(a)) decision nodes contain only tests based on the original attributes. Leaf nodes may (but need not) predict values obtained by using a linear-regression function built from the examples that fall at the node. This is

Fig. 4. (a) The bottom-up multivariate regression tree and (b) the multivariate regression tree for the Housing problem.

the kind of multivariate regression tree that usually appears in the literature; for example, the systems M5 [15,21] and RT [18] generate this kind of model. Figure 4(b) presents the full multivariate regression tree, using both the top-down and the bottom-up multivariate approaches. In this case, decision nodes may (but need not) contain tests based on a linear combination of the original attributes, and leaf nodes may (but need not) predict values obtained by using a linear-regression function built from the examples that fall at the node. Figure 5 illustrates the functional models in the case of a classification problem. We have used the UCI dataset Learning Qualitative Structure Activity Relationships (QSARs) - pyrimidines to illustrate the different variants of tree models. This is a complex two-class problem defined by 54 continuous attributes. The attribute constructor used is the LinearBayes [5] classifier. In a bottom-up functional tree (Figure 5(a)) decision nodes contain only tests based on the original attributes. Leaf nodes may (but need not) predict values obtained by using a LinearBayes function built from the examples that fall at the node. Figure 5(b) presents the functional tree using both the top-down and the bottom-up multivariate approaches. In this case, decision nodes may (but need not) contain tests based on a linear combination of the original attributes, and leaf nodes may (but need not) predict values obtained by using a LinearBayes function built from the examples that fall at the node.

4 Related Work

Breiman et al. [2] present the first extensive and in-depth study of the problem of constructing decision and regression trees. But while in the case of decision trees they consider internal nodes with a test based on a linear combination of

Fig. 5. (a) The bottom-up functional tree and (b) the functional tree for the QSARs problem.

attributes, in the case of regression trees internal nodes are always based on a single attribute. In the context of classification problems, several algorithms have been presented that can use, at each decision node, tests based on a linear combination of the attributes [2,12,20,6]. The most comprehensive study on multivariate trees has been presented by Brodley and Utgoff [4], who discuss several methods for constructing multivariate decision trees: representing a multivariate test, including symbolic and numeric features, learning the coefficients of a multivariate test, selecting the features to include in a test, and pruning of multivariate decision trees. Brodley considers multivariate tests only at the inner nodes of a tree. In this context few works consider functional tree leaves. One of the earliest is the perceptron tree algorithm [19], where leaf nodes may implement a general linear discriminant function. Kohavi [10] has presented the naive Bayes tree, which uses functional leaves. NBtree is a hybrid algorithm that generates a regular univariate decision tree, but whose leaves contain a naive Bayes classifier built from the examples that fall at the node. The approach retains the interpretability of naive Bayes and decision trees, while resulting in classifiers that frequently outperform both constituents, especially on large datasets. Gama [7] has presented Cascade Generalization, a method to combine classification algorithms by means of constructive induction. The work presented here closely follows the Cascade method, but extends it to regression domains and allows models with functional leaves. In regression domains, Quinlan [13] has presented the system M5, which builds multivariate trees using linear models at the leaves. In the pruning phase, for each


leaf a linear model is built. More recently, Witten and Eibe [21] have extended M5: a linear model is built at each node of the initial regression tree, and all the models along a particular path from the root to a leaf node are then combined into one linear model in a smoothing step. Karalic [9] has also studied the influence of using linear regression at the leaves of a regression tree. As in the work of Quinlan, Karalic shows that it leads to smaller models with increased performance. Torgo [17] has presented an experimental study of functional models for regression tree leaves, and later [18] presented the system RT. Used with linear models at the leaves, RT builds and prunes a regular univariate tree; then, at each leaf, a linear model is built using the examples that fall at that leaf.

5 Experimental Evaluation

It is commonly accepted that multivariate regression trees should be competitive against univariate models. In this section we evaluate the proposed algorithm, its simplified variants, and its components on a set of classification and regression benchmark problems. In regression problems the constructor is a standard linear regression function; in classification problems the constructor is the LinearBayes classifier [5]. For comparative purposes we also evaluate the system M5 (we have used M5 from version 3.1.8 of the Weka environment; we have used several regression systems, and the most competitive was M5). The main goal of this experimental evaluation is to study the influence, in terms of performance, of the position of the linear models inside a regression or classification tree. We evaluate three situations:
– trees that can use linear combinations at each internal node;
– trees that can use linear combinations at each leaf;
– trees that can use linear combinations both at internal nodes and at leaves.
All evaluated models are based on the same tree growing and pruning algorithm. That is, they use exactly the same splitting criteria, stopping criteria, and pruning mechanism. Moreover, they share many minor heuristics that individually are too small to mention but collectively can make a difference. In this way, differences in the evaluation statistics are due to differences in the conceptual model. We estimate the performance of a learned model using 10-fold cross-validation. To minimize the influence of the variability of the training set, we repeat this process ten times, each time using a different permutation of the dataset. The final estimate is the mean of the performance statistic obtained in each run of the cross-validation. For regression problems, performance is measured by the mean squared error (MSE) statistic; for classification problems, by the error rate. To apply pairwise comparisons we guarantee that, in all runs, all algorithms learn and test on the same partitions of the data.
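The evaluation protocol (ten repetitions of 10-fold cross-validation, each repetition over a different permutation of the dataset) can be sketched as follows; the function name and seed are illustrative:

```python
import random

def repeated_kfold(n_examples, k=10, repeats=10, seed=1):
    """Yield (train_indices, test_indices) for `repeats` runs of k-fold
    cross-validation, each run over a fresh permutation of the data, so
    that all algorithms can be trained and tested on identical splits."""
    rng = random.Random(seed)
    for _ in range(repeats):
        order = list(range(n_examples))
        rng.shuffle(order)
        folds = [order[i::k] for i in range(k)]
        for i in range(k):
            test = folds[i]
            train = [idx for j, fold in enumerate(folds) if j != i
                     for idx in fold]
            yield train, test

# Two repetitions of 5-fold CV on a 25-example dataset: 10 train/test splits.
splits = list(repeated_kfold(25, k=5, repeats=2))
```

Fixing the seed is what guarantees that every algorithm sees exactly the same partitions, which the pairwise Wilcoxon comparisons below require.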


We compare the performance of the functional tree (FT) against its components: the univariate tree (UT) and the constructor function (linear regression (LR) in regression problems, LinearBayes (LB) in classification problems). The functional tree is also compared against the two simplified versions: bottom-up (FT-B) and top-down (FT-T). For each dataset, comparisons between algorithms are done using the Wilcoxon signed-rank paired test. The null hypothesis is that the difference between the performance statistics has median value zero. We consider that a difference in performance is statistically significant if the p value of the Wilcoxon test is less than 0.01.

5.1 Results in Regression Domains

We have chosen 20 datasets from the repository of regression problems at LIACC⁴. The choice of datasets was restricted by the criterion that almost all the attributes be ordered, with few missing values⁵. The number of examples varies from 43 to 40768, and the number of attributes from 5 to 48. The results in terms of MSE and standard deviation are presented in Table 1. The first two columns refer to the results of the components of the hybrid algorithm; the following three columns refer to the simplified versions of our algorithm and to the full model; the last column refers to the M5 system. For each dataset, the algorithms are compared against the full multivariate tree using the Wilcoxon signed-rank test. A − (+) sign indicates that on this dataset the performance of the algorithm was worse (better) than that of the full model, with a p value less than 0.01. Table 1 also presents a comparative summary of the results. The first line presents the geometric mean of the MSE statistic across all datasets. The second line shows the average rank of all models, computed for each dataset by assigning rank 1 to the best algorithm, 2 to the second best, and so on. The third line shows the average ratio of MSE; this is computed for each dataset as the ratio between the MSE of one algorithm and the MSE of M5. The fourth line shows the number of significant differences using the signed-rank test, taking the multivariate tree FT as reference. We use the Wilcoxon matched-pairs signed-ranks test to compare the error of pairs of algorithms across datasets⁶. The last line shows the p values associated with this test for the MSE results on all datasets, taking FT as reference. It is interesting to note that the full model (FT) significantly improves over both of its components (LR and UT) on 14 datasets out of 20. All the multivariate trees have a similar performance; using the significance tests as criterion, FT is the best-performing algorithm.
Among the simplified versions, the bottom-up version is the most competitive: the ratio of significant wins/losses between the bottom-up and top-down versions is 4/3.

6

http://www.ncc.up.pt/∼ltorgo/Datasets In regression problems, the actual implementation ignores missing values at learning time. At application time, if the value of the test attribute is unknown, all descendent branches produce a prediction. The ﬁnal prediction is a weighted average of the predictions. Each pair of data points consists of the estimate MSE on one dataset and for the two learning algorithms being compared.


Table 1. Summary of results in regression problems (MSE). A − (+) sign marks performance significantly worse (better) than FT (p < 0.01).

Data | LR | UT | FT-Top | FT-Bottom | FT | M5
Abalone | −4.908±0.0 | −5.728±0.1 | 4.616±0.0 | −4.759±0.0 | 4.602±0.0 | 4.553±0.5
Auto-mpg | −11.470±0.1 | −19.409±1.2 | +8.921±0.4 | 9.560±0.8 | 9.131±0.5 | 7.958±3.5
Cart | −5.684±0.0 | +0.995±0.0 | −1.016±0.0 | +0.993±0.0 | 1.012±0.0 | 0.994±0.0
Computer | −99.907±0.2 | −10.955±0.6 | −6.426±0.6 | −6.507±0.5 | 6.284±0.6 | −8.081±2.7
Cpu | −3734±1717 | −4111±1657 | −1760±389 | −1197±161 | 1070±137 | 1092±1315
Diabetes | 0.399±0.0 | −0.535±0.0 | −0.500±0.0 | 0.400±0.0 | 0.399±0.0 | 0.446±0.3
Elevators | −1.02e-5±0.0 | −1.4e-5±0.0 | −0.86e-5±0.0 | 0.5e-5±0.0 | 0.5e-5±0.0 | 0.52e-5±0.0
Fried | −6.924±0.0 | −3.474±0.0 | −1.862±0.0 | −2.348±0.0 | 1.850±0.0 | −1.938±0.1
H.Quake | 0.036±0.0 | 0.036±0.0 | 0.036±0.0 | 0.036±0.0 | 0.036±0.0 | 0.036±0.0
House(16H) | −2.06e9±6.1e5 | −1.69e9±3.3e7 | +1.20e9±2.2e7 | 1.19e9±3.0e7 | 1.23e9±2.2e7 | 1.27e9±1.2e8
House(8L) | −1.73e9±8.2e5 | −1.19e9±1.2e7 | +1.01e9±1.3e7 | 1.02e9±9.2e6 | 1.02e9±1.3e7 | 9.97e8±7.1e7
House(Cal) | −4.81e9±2.0e6 | −3.69e9±3.5e7 | −3.09e9±2.7e7 | +2.78e9±2.8e7 | 3.05e9±3.1e7 | 3.07e9±2.8e8
Housing | −23.840±0.2 | −19.591±1.7 | 16.251±1.1 | +13.359±1.7 | 16.538±1.3 | 12.467±7.5
Kinematics | −0.041±0.0 | −0.035±0.0 | −0.027±0.0 | −0.026±0.0 | 0.023±0.0 | −0.025±0.0
Machine | −5952±2053 | −6036±1752 | 3473±673 | 3300±757 | 3032±759 | 3557±4271
Pole | −930.08±0.3 | +48.55±1.2 | 79.48±2.6 | +35.16±0.7 | 79.31±2.4 | +42.0±5.8
Puma32 | −7.2e-4±0.0 | −1.1e-4±0.0 | +0.71e-4±0.0 | 0.82e-4±0.0 | 0.82e-4±0.0 | 0.67e-4±0.0
Puma8 | −19.925±0.0 | −13.307±0.2 | +11.047±0.1 | 11.145±0.1 | 11.241±0.1 | +10.299±0.5
Pyrimidines | −0.018±0.0 | 0.014±0.0 | +0.010±0.0 | 0.013±0.0 | 0.013±0.0 | 0.012±0.0
Triazines | −0.025±0.0 | +0.019±0.0 | −0.018±0.0 | 0.023±0.0 | 0.023±0.0 | 0.017±0.0

Summary of MSE results:

 | LR | UT | FT-T | FT-B | FT | M5
Geometric mean | 39.2 | 23.59 | 17.68 | 16.47 | 16.90 | 16.2
Average rank | 5.4 | 4.9 | 3.15 | 2.9 | 2.5 | 2.3
Average ratio | 4.0 | 1.57 | 1.13 | 1.03 | 1.07 | 1
Wins/losses | 1/19 | 4/16 | 8/12 | 6/11 | – | 11/9
Significant wins/losses | 0/18 | 3/15 | 6/9 | 4/5 | – | 2/3
Wilcoxon test | 0.0 | 0.02 | 0.21 | 0.1 | – | 0.23
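The average-rank statistic used in these summaries (rank 1 to the best algorithm on each dataset, 2 to the second best, and so on) can be computed as below; the function name is ours, and resolving ties by averaged ranks is our assumption, not stated in the paper:

```python
def average_ranks(scores):
    """scores[d][a] = error of algorithm a on dataset d (lower is better).
    Returns the mean rank of each algorithm across datasets, with tied
    scores sharing the average of the 1-based ranks they span."""
    n_algs = len(scores[0])
    totals = [0.0] * n_algs
    for row in scores:
        order = sorted(range(n_algs), key=lambda a: row[a])
        ranks = [0.0] * n_algs
        i = 0
        while i < n_algs:
            j = i
            while j + 1 < n_algs and row[order[j + 1]] == row[order[i]]:
                j += 1  # extend the run of tied scores
            avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
            for p in range(i, j + 1):
                ranks[order[p]] = avg
            i = j + 1
        for a in range(n_algs):
            totals[a] += ranks[a]
    return [t / len(scores) for t in totals]

# Two datasets, three algorithms; the last two tie on the second dataset.
ranks = average_ranks([[3.0, 1.0, 2.0], [2.0, 2.0, 1.0]])
```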

Nevertheless, the observed increase in performance has an associated computational cost: to run all the experiments reported here, FT required almost 1.8 times as much time as the univariate regression tree.

5.2 Results in Classification Problems

We have chosen 30 datasets from the UCI repository. For comparative purposes we also evaluate M5 [21], which decomposes an n-class classification problem into n−1 binary regression problems⁷. The results in terms of error rate and standard deviation are presented in Table 2. The first two columns refer to the results of the components of our system, LinearBayes and the univariate tree. The next two columns refer to the lesioned versions of the algorithm, bottom-up (FT-B) and top-down (FT-T). The fifth column refers to the full proposed

⁷ We have used other multivariate trees; the most competitive was M5.


Table 2. Summary of error rate results. A − (+) sign marks performance significantly worse (better) than FT (p < 0.01).

Dataset | LB | UT | FT-Bottom | FT-Top | FT | M5
Adult | −17.012±0.5 | 14.178±0.5 | −14.307±0.4 | 13.800±0.4 | 13.830±0.4 | −15.182±0.6
Australian | 13.498±0.3 | 14.750±1.0 | −14.343±0.4 | 13.928±0.6 | 13.638±0.6 | 14.643±5.2
Balance | −13.355±0.3 | −22.467±1.1 | −10.445±0.6 | 7.313±0.9 | 7.313±0.9 | −13.894±3.2
Banding | 23.681±1.0 | 23.512±1.8 | 23.512±1.8 | 23.762±2.2 | 23.762±2.2 | 22.619±5.3
Breast(W) | +2.862±0.1 | −5.123±0.2 | −4.337±0.1 | 3.346±0.4 | 3.346±0.4 | 5.137±3.1
Cleveland | 16.134±0.4 | −20.995±1.4 | +15.952±0.5 | 17.369±0.9 | 16.675±0.8 | 17.926±8.0
Credit | +14.228±0.1 | 14.608±0.5 | 14.784±0.5 | 15.103±0.4 | 15.220±0.6 | 14.913±3.7
Diabetes | +22.709±0.2 | −25.348±1.0 | 23.998±1.0 | −25.206±0.9 | 23.658±1.0 | 25.002±4.8
German | 24.520±0.2 | 28.240±0.7 | +23.630±0.5 | 24.870±0.5 | 24.330±0.7 | 26.300±3.1
Glass | −36.647±0.8 | 32.150±2.3 | 32.150±2.3 | 32.509±3.3 | 32.509±3.3 | 29.479±10.4
Heart | 17.704±0.2 | −23.074±1.7 | 17.037±0.6 | 17.333±1.4 | 17.185±0.8 | 16.667±8.9
Hepatitis | +15.481±0.7 | 17.135±1.3 | 17.135±1.3 | 17.135±1.3 | 17.135±1.3 | 19.919±8.5
Ionosphere | 13.379±0.8 | 10.025±0.9 | 10.624±0.9 | 11.175±1.4 | 11.175±1.4 | 9.704±4.1
Iris | 2.000±0.0 | −4.333±0.8 | 2.067±0.2 | −3.733±0.8 | 2.067±0.2 | 5.333±5.3
Letter | −29.821±1.3 | 11.880±0.6 | 12.005±0.6 | 11.799±1.1 | 11.799±1.1 | +9.440±0.5
Monks-1 | −25.009±0.0 | 10.536±1.7 | 11.150±1.9 | 8.752±1.9 | 8.729±1.9 | 10.054±8.9
Monks-2 | −34.186±0.6 | −32.865±0.0 | −33.907±0.4 | 9.004±1.6 | 9.074±1.6 | 27.664±20.9
Monks-3 | −4.163±0.0 | +1.572±0.4 | 3.511±0.9 | 2.884±0.4 | 2.998±0.4 | 1.364±2.4
Mushroom | −3.109±0.0 | +0.000±0.0 | +0.062±0.0 | 0.112±0.0 | 0.112±0.0 | 0.025±0.1
Optdigits | −4.687±0.1 | −9.476±0.3 | −4.732±0.1 | 3.295±0.1 | 3.300±0.1 | −5.429±1.4
Pendigits | −12.425±0.0 | −3.559±0.1 | −3.099±0.1 | 2.890±0.1 | 2.890±0.1 | 2.419±0.4
Pyrimidines | −9.846±0.1 | +5.733±0.2 | 6.115±0.2 | 6.158±0.2 | 6.159±0.2 | 6.175±0.9
Satimage | −16.011±0.1 | −12.894±0.2 | −12.894±0.2 | 11.776±0.3 | 11.776±0.3 | 12.402±3.2
Segment | −8.407±0.1 | 3.381±0.2 | 3.381±0.2 | 3.190±0.2 | 3.190±0.2 | 2.468±0.8
Shuttle | −5.629±0.3 | 0.028±0.0 | 0.028±0.0 | 0.036±0.0 | 0.036±0.0 | 0.067±0.0
Sonar | 24.955±1.2 | 27.654±3.5 | 27.654±3.5 | 27.654±3.5 | 27.654±3.5 | 22.721±9.0
Vehicle | 22.163±0.1 | −27.334±1.2 | +18.282±0.5 | 21.090±1.1 | 21.031±1.1 | 20.900±4.6
Votes | −9.739±0.2 | 3.773±0.5 | 3.773±0.5 | 3.795±0.5 | 3.795±0.5 | 4.172±4.0
Waveform | +14.939±0.2 | −24.036±0.8 | +15.216±0.2 | −16.142±0.3 | 15.863±0.4 | −17.241±1.4
Wine | 1.133±0.5 | −6.609±1.3 | 1.404±0.3 | 1.459±0.3 | 1.404±0.3 | 3.830±3.6

                          LB      UT      FT-B    FT-T    FT      M5
Average Mean              15.31   14.58   12.72   11.89   11.72   12.77
Geometric Mean            11.63   9.03    7.03    6.80    6.63    7.24
Average Rank              4.0     4.1     3.1     3.3     3.0     3.4
Average Ratio             7.545   1.41    1.12    1.032   1       1.23
Wins/Losses               11/19   9/19    13/13   6/10    –       12/18
Significant Wins/Losses   5/15    3/12    5/8     0/3     –       1/4
Wilcoxon Test             0.00    0.00    0.8     0.07    –

Functional Trees

71

model (FT). The last column refers to the results of M5′. For each dataset, the algorithms are compared against the full functional tree using the Wilcoxon signed-rank test. A − (+) sign indicates that for this dataset the performance of the algorithm was worse (better) than the full model with a p value less than 0.01. Table 2 presents a comparative summary of the results. The first two lines present the arithmetic and the geometric mean of the error rate across all datasets. The third line shows the average rank of all models, computed for each dataset by assigning rank 1 to the best algorithm, 2 to the second best, and so on. The fourth line shows the average ratio of error rates, computed for each dataset as the ratio between the error rate of one algorithm and the error rate of the full functional tree FT. The fifth line shows the number of significant differences under the signed-rank test, taking the multivariate tree FT as reference. We use the Wilcoxon matched-pairs signed-ranks test to compare the error rates of pairs of algorithms across datasets. The last line shows the p values associated with this test for the results on all datasets, taking FT as reference. All the evaluation statistics show that FT is a competitive algorithm. The most competitive simplified version is, again, the bottom-up version. The ratio of significant wins/losses between the bottom-up and top-down versions is 10/6. It is interesting to note that the full model (FT) significantly improves over both components (LB and UT) in 6 datasets.

5.3 Discussion

The experimental evaluation points out some interesting observations:
- For both types of problems we obtain similar rankings of the performance of the different versions of the algorithms.
- All multivariate tree versions have similar performance. On these datasets, there is no clear winner among the different versions of functional trees.
- Any functional tree outperforms its constituents on a large set of problems.
In our study the results are consistent on both types of problems. Our experimental study suggests that the full model, that is, a multivariate model using linear functions both at decision nodes and at leaves, is the best-performing algorithm. Another dimension of analysis is the size of the model. Here we consider the number of leaves, which measures the number of different regions into which the instance space is partitioned. On these datasets, the average number of leaves for the univariate tree is 70. Any multivariate tree generates smaller models: the average number of leaves is 50 for the full model, 56 for the bottom-up approach, and 52 for the top-down approach. Nevertheless, there is a computational cost associated with the observed increase in performance. To run all the experiments reported here, FT requires almost 1.7 times more time than the univariate tree.
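The aggregate statistics used above (arithmetic and geometric means, average rank, average error ratio relative to FT) are straightforward to recompute from per-dataset error rates. A sketch in Python, using only the Iris, Glass, and Heart rows of Table 2 for three of the models; the helper names are ours, not the paper's:

```python
# Recomputing Table 2's summary statistics from per-dataset error rates.
# The values are the Iris, Glass, and Heart rows (LB, UT, FT columns).
import math

errors = {  # dataset -> error rate (%) per algorithm
    "Iris":  {"LB": 2.000,  "UT": 4.333,  "FT": 2.067},
    "Glass": {"LB": 36.647, "UT": 32.150, "FT": 32.509},
    "Heart": {"LB": 17.704, "UT": 23.074, "FT": 17.185},
}
ALGS = ["LB", "UT", "FT"]

def arithmetic_mean(a):
    return sum(errors[d][a] for d in errors) / len(errors)

def geometric_mean(a):
    return math.exp(sum(math.log(errors[d][a]) for d in errors) / len(errors))

def average_rank(a):
    # rank 1 = lowest error on a dataset; average the ranks over datasets
    ranks = [sorted(ALGS, key=lambda x: errors[d][x]).index(a) + 1
             for d in errors]
    return sum(ranks) / len(ranks)

def average_ratio(a, ref="FT"):
    # mean over datasets of error(a) / error(reference model FT)
    return sum(errors[d][a] / errors[d][ref] for d in errors) / len(errors)

for a in ALGS:
    print(a, round(arithmetic_mean(a), 2), round(geometric_mean(a), 2),
          round(average_rank(a), 2), round(average_ratio(a), 3))
```

On the full set of 30 datasets these helpers would reproduce the summary rows of Table 2; the Wilcoxon p values would additionally require a signed-rank test (e.g. scipy.stats.wilcoxon) over the paired per-dataset errors.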

6 Conclusions

In this paper we have presented Functional Trees, a new formalism to construct multivariate trees for regression and classification problems. The proposed algorithm is able to use functional decision nodes and functional leaf nodes. Functional decision nodes are built when growing the tree, while functional leaves are built when pruning the tree. A contribution of this work is that it provides a single framework for classification and regression multivariate trees. Functional trees can be seen as a generalization of multivariate trees for decision problems and model-trees for regression problems, allowing functional decisions both at inner and leaf nodes. We have experimentally observed that the unified framework is competitive against the state of the art in model-trees. Another contribution of this work is the study of where to use decisions based on a combination of attributes, both in regression and in classification. In the experimental evaluation on a set of benchmark problems we have compared the performance of a functional tree against its components, two simplified versions, and the state of the art in multivariate trees. The results are consistent on both types of problems. Our experimental study suggests that the full model, that is, a multivariate model using linear functions both at decision nodes and leaves, is the best-performing algorithm. Although most of the work in multivariate classification trees follows the top-down approach, the bottom-up approach seems to be competitive. A similar observation applies to regression problems. This observation points to directions for future research on this topic.

Acknowledgments. Gratitude is expressed for the financial support given by FEDER and PRAXIS XXI, the Plurianual support attributed to LIACC, the Esprit LTR METAL project, the project Data Mining and Decision Support for Business Competitiveness (Sol-Eu-Net), and project ALES. I would like to thank the anonymous reviewers for their constructive comments.

References

1. L. Breiman. Arcing classifiers. The Annals of Statistics, 26(3):801–849, 1998.
2. L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth International Group, 1984.
3. Carla E. Brodley. Recursive automatic bias selection for classifier construction. Machine Learning, 20:63–94, 1995.
4. Carla E. Brodley and Paul E. Utgoff. Multivariate decision trees. Machine Learning, 19:45–77, 1995.
5. J. Gama. A Linear-Bayes classifier. In C. Monard, editor, Advances on Artificial Intelligence - SBIA 2000. LNAI 1952, Springer Verlag, 2000.
6. João Gama. Probabilistic Linear Tree. In D. Fisher, editor, Machine Learning, Proceedings of the 14th International Conference. Morgan Kaufmann, 1997.
7. João Gama and P. Brazdil. Cascade Generalization. Machine Learning, 41:315–343, 2000.
8. Geoffrey J. McLachlan. Discriminant Analysis and Statistical Pattern Recognition. Wiley and Sons, New York, 1992.
9. Aram Karalic. Employing linear regression in regression tree leaves. In Bernard Neumann, editor, European Conference on Artificial Intelligence, 1992.


10. R. Kohavi. Scaling up the accuracy of naive Bayes classifiers: a decision tree hybrid. In Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining. AAAI Press, 1996.
11. W. Loh and Y. Shih. Split selection methods for classification trees. Statistica Sinica, 7:815–840, 1997.
12. S. Murthy, S. Kasif, and S. Salzberg. A system for induction of oblique decision trees. Journal of Artificial Intelligence Research, 1994.
13. R. Quinlan. Learning with continuous classes. In Adams and Sterling, editors, Proceedings of AI'92. World Scientific, 1992.
14. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, Inc., 1993.
15. R. Quinlan. Combining instance-based and model-based learning. In P. Utgoff, editor, ML93, Machine Learning, Proceedings of the 10th International Conference. Morgan Kaufmann, 1993.
16. Paul Taylor. Statistical methods. In M. Berthold and D. Hand, editors, Intelligent Data Analysis - An Introduction. Springer Verlag, 1999.
17. Luis Torgo. Functional models for regression tree leaves. In D. Fisher, editor, Machine Learning, Proceedings of the 14th International Conference. Morgan Kaufmann, 1997.
18. Luis Torgo. Inductive Learning of Tree-based Regression Models. PhD thesis, University of Porto, 2000.
19. P. Utgoff. Perceptron trees - a case study in hybrid concept representation. In Proceedings of the Seventh National Conference on Artificial Intelligence. Morgan Kaufmann, 1988.
20. P. Utgoff and C. Brodley. Linear machine decision trees. COINS Technical Report 91-10, University of Massachusetts, 1991.
21. Ian Witten and Eibe Frank. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann Publishers, 2000.

Bounding Negative Information in Frequent Sets Algorithms

I. Fortes¹, J.L. Balcázar², and R. Morales³

¹ Dept. Applied Mathematics, E.T.S.I. Informática, Univ. Málaga, Campus Teatinos, 29071 Málaga, Spain. ifortes@ctima.uma.es
² Dept. LSI, Univ. Politècnica de Catalunya, Campus Nord, 08034 Barcelona, Spain. balqui@lsi.upc.es
³ Dept. Languages and Computer Science, E.T.S.I. Informática, Univ. Málaga, Campus Teatinos, 29071 Málaga, Spain. morales@lcc.uma.es

Abstract. In Data Mining applications of the frequent sets problem, such as finding association rules, a commonly used generalization is to see each transaction as the characteristic function of the corresponding itemset. This also allows one to find correlations between items not present in the transactions; but it carries the risk of a large and hard-to-interpret output. We propose a bottom-up algorithm in which the exploration of facts corresponding to items absent from the transactions is delayed with respect to positive information about items present in the transactions. This allows the user to control the amount of correlation between absences of items admitted in the association rules found. The algorithm takes advantage of the relationships between the corresponding frequencies of such itemsets. With a slight modification, our algorithm can be used as well to find all frequent itemsets consisting of an arbitrary number of positive attributes and at most a predetermined number k of negative attributes.

Work supported in part by the EU ESPRIT IST-1999-14186 (ALCOM-FT), EU EP27150 (NeuroColt), CIRIT 1997SGR-00366 and PB98-0937-C04 (FRESCO).

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 50–58, 2001.
© Springer-Verlag Berlin Heidelberg 2001

1 Introduction

Data Mining or Knowledge Discovery in Databases (KDD) is a field of increasing interest with strong connections to several research areas such as databases, machine learning, and statistics. It aims at finding useful information in large masses of data; see [5]. One of the most relevant subroutines in applications of this field is finding frequent itemsets within the transactions in the database. This task consists of finding highly frequent itemsets by comparing their frequency of occurrence within the given database with a given parameter σ. This problem can be solved by the well-known Apriori algorithm [2]. The Apriori algorithm is a method of searching the lattice of itemsets with respect to itemset inclusion. The strategy starts from the empty set and scans itemsets from smaller to larger in an incremental manner. The Apriori algorithm uses this strategy to effectively prune away a substantial number of unproductive itemsets. The frequent sets that result from this task can then be used to discover association rules whose support and confidence values are no smaller than the user-specified minimum thresholds [1], or to solve other related Knowledge Discovery problems [7]. We do not discuss here how to form association rules from frequent itemsets, nor any other application of these; we focus on the performance of that very step, finding highly frequent patterns, whose complexity dominates by far the computational cost of many such applications. Here we consider the case where each transaction of the database is a binary-valued function of the attributes. The difference with the itemsets view is that now we look for patterns where the non-occurrence of an item is important too. This is formalized in terms of partial functions which, on each item, may include it (value 1), exclude it (value 0), or not consider it (undefined).
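The levelwise strategy just described can be sketched compactly. The following is a generic, minimal illustration of the Apriori idea on positive itemsets (our own code and names, not any of the implementations referenced in this paper):

```python
# Minimal levelwise (Apriori-style) frequent-itemset miner: extend frequent
# itemsets one item at a time, keeping only candidates all of whose
# one-item-smaller subsets were themselves frequent.
def apriori(transactions, sigma):
    """Return {itemset: support} for all itemsets with support >= sigma."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def freq(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    frequent = {}
    level = [s for s in (frozenset([i]) for i in items) if freq(s) >= sigma]
    while level:
        for s in level:
            frequent[s] = freq(s)
        prev = set(level)
        # candidate generation plus the "a priori" subset pruning
        cand = {s | {i} for s in level for i in items if i not in s}
        level = [c for c in cand
                 if all(c - {j} in prev for j in c) and freq(c) >= sigma]
    return frequent
```

For example, apriori([{"a", "b"}, {"a"}, {"a", "b", "c"}], 0.5) finds {a}, {b}, and {a, b}, while {c} and all its supersets are pruned.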
It was noticed in [6] that essentially the same algorithms, with the same "a priori" pruning strategies, can be applied to many other settings in which one looks for a certain theory in a certain formal language according to a certain predicate that is monotone on a generalization/specialization relation. In particular, our setting with binary-valued attributes falls into this category, and there actually exist implementations of the Apriori algorithm that solve the problem for the setting where each transaction is a function. Thus, they can be used to solve the problem of finding partial functions whose frequency is over some threshold. However, it is known that direct use of these algorithms on real-life data frequently comes up with extremely large numbers of frequent sets consisting "only of zeros"; for example, in the prototypical case of market basket data, the number of items is overwhelmingly larger than the average number of items bought, and this means that the output of any frequent sets algorithm will contain large amounts of information of the sort "most of the times that scotch is not bought, bourbon is not bought either, with large support". If such negative information is not desired at all, the original Apriori version can be used; but there may be cases where limited amounts of negative information are deemed useful, for instance when looking for alternative products that can act


as mutual replacements, and yet one does not want to be forced into a search through the huge space of all partial functions. We are interested in producing algorithms that will provide frequent "itemsets" that have "missing" products, but in a controlled manner, so that they are useful when having some missing products in the itemsets is important, but less so than the products that are present. Here we develop a variant of the Apriori algorithm that, if supplied with a limit k on the maximum number of negated attributes desired in the output frequent sets, will take advantage of this fact and produce frequent itemsets for which this limit is obeyed. Of course, it does so in a much more efficient way than just applying Apriori and discarding the part of the output that does not fulfill this condition. First, because the exploration is organized in a way that naturally reflects the condition on the output. Second, because the fact that each item may be, or not be, in each itemset, but not both, implies complementarity relationships between the frequencies of "itemsets" that contain, or do not contain, a given item. We use these relationships to find out the frequencies of some "itemsets" without actually counting them, thus saving computational work.

2 Preliminaries

Now, we give the concepts that we will use along the paper. We consider a database T = {t1, . . . , tN} with N rows over a set R = {A1, . . . , An} = {Ai : i ∈ I} of binary-valued attributes, which can be seen as either items or columns; actually they just serve as a visual aid for their index set I = {1, . . . , n}. Each row, or transaction, maps R into {0, 1}. For A ∈ R, we also write A ∈ tl for tl(A) = 1 and, departing from standard use, Ā ∈ tl for tl(A) = 0. Obviously, A ∈ tl or Ā ∈ tl, but not both. The database is actually a multiset of transactions. Each transaction has a unique identifier. As for partial functions, they map a subset of R into {0, 1}; those that are defined for exactly ℓ attributes are called ℓ-itemsets. The goal of our algorithm will be to find frequent itemsets with any number of attributes mapped to 0 and any number of attributes mapped to 1, but in some specific order. Our notation for these partial functions is as follows. For p ∈ P(I) and s ∈ P(I − p) (so that s ∩ p = ∅), we denote by Ap,s the partial function mapping the subset Ap = {Ai : i ∈ p} to 1, the subset As = {Aj : j ∈ s} to 0, and undefined on the rest. Itemsets Ap,s with |s| = k are called k-negative itemsets, k = 0, . . . , n. If |s| = 0 then we have the positive itemset Ap,∅. We identify partial functions defined on a single attribute Aj, namely A{j},∅ or A∅,{j}, with the corresponding symbol Aj or Āj respectively. A transaction can be seen as a total function. An itemset can be seen as a partial function. If the partial function can be extended to the total function corresponding to a transaction then we say that the itemset is a subset of the transaction, and we employ the standard symbol ⊆ for this case. The support of an itemset (or partial function) is defined as follows.


Definition 1. Let R = {A1, . . . , An} = {Ai : i ∈ I} be a set of n items and let T = {t1, . . . , tN} be a database of transactions as before. The support or frequency of an itemset A is the ratio of the number of transactions of which it is a subset to the total number of transactions. Therefore:

    fr(A) = |{t ∈ T : A ⊆ t}| / N

Given a user-specified minimum support value (denoted by σ), we say that an itemset A is frequent if its support is at least the minimum support, i.e., fr(A) ≥ σ. We introduce a natural structure in the itemset space by placing itemsets into "floors" and "levels". The floor k contains the itemsets with k negative attributes. In each floor, the itemsets are organized in levels (as usual): the level ℓ is the number of attributes of the itemset. Thus, in floor zero we place positive itemsets, ordered by itemset inclusion (or equivalently, index set inclusion); in the first floor we place all itemsets with one attribute valued 0, organized similarly, and related similarly to the itemsets in floor zero. In floor k we place all the itemsets with k attributes valued 0, organized levelwise in the standard way, and related similarly to the itemsets in other floors. Thus we are considering the order relation defined as follows: Definition 2. For p ∈ P(I), s ∈ P(I − p), q ∈ P(I), and t ∈ P(I − q), given partial functions X = Ap,s and Y = Aq,t, we denote by X ⪯ Y the fact that p ⊆ q and s ⊆ t. With respect to this relation, the property of having frequency larger than any threshold is antimonotone, since X ⪯ Y implies fr(X) ≥ fr(Y). Thus, whenever an itemset is not frequent enough, neither is any of its extensions, and this fact allows one to prune away a substantial number of unproductive itemsets. Therefore, frequent sets algorithms can be applied rather directly to this case. Our purpose now is to aim at a somewhat more refined algorithm. Now, we give a simple example to show the structure of the itemset space. This example will be useful to describe the frequent itemset candidate generation and the path that our algorithm follows. Example: Let R = {A, B, C, D} be the set of four items. In this case, we use four floors to represent the itemsets with any number of negative attributes and any number of positive attributes.
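Definition 1 extends verbatim to the partial functions A_{p,s}. A minimal self-contained sketch (our own encoding, not the authors' code: transactions are 0/1 tuples, and p and s are index sets of attributes required to be 1 and 0 respectively):

```python
# Support of a partial function A_{p,s}: the fraction of transactions t with
# t(Ai) = 1 for every i in p and t(Aj) = 0 for every j in s.
def support(p, s, transactions):
    covers = lambda t: all(t[i] == 1 for i in p) and all(t[j] == 0 for j in s)
    return sum(1 for t in transactions if covers(t)) / len(transactions)

# four transactions over attributes indexed 0..3
T = [(1, 1, 0, 0), (1, 0, 0, 0), (1, 1, 1, 0), (0, 1, 0, 1)]
print(support({0}, set(), T))   # fr of the positive itemset A0
print(support({0}, {3}, T))     # fr of A0 present with A3 absent
```

An itemset is then frequent exactly when support(p, s, T) ≥ σ for the chosen threshold σ.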
In each rectangle, the pair (f, ℓ) indicates the floor f (the number of negative attributes of the itemsets in this rectangle) and the level ℓ (the cardinality of the itemsets in this rectangle). See figure 1.

3 Algorithm Bounded-neg-Apriori

Our algorithm performs the same computations as Apriori on floor zero, but then uses the frequencies computed there to try to reduce the computational effort spent on 1-negative itemsets. This process goes on along all floors. Overall,


Fig. 1. The structure of the itemset space

bounded-neg-Apriori can be seen as a refinement of Apriori in which the explicit evaluation of the frequency of k-negative itemsets is avoided, since it can be obtained from some itemsets of the previous floor, provided they are processed in the appropriate order. We use a number of very easy properties of the frequencies. Of course all of the frequencies are real numbers in [0, 1].

Proposition 1. Let p ∈ P(I) be arbitrary, and s ∈ P(I − p) with |s| ≥ 1.
1. For each j ∈ s, fr(Ap,s) = fr(Ap,s−{j}) − fr(Ap∪{j},s−{j}), and fr(A∅,∅) = 1.
2. Ap,s is frequent iff ∃j ∈ s, fr(Ap,s−{j}) > σ + fr(Ap∪{j},s−{j}).

Remark 1: Each of the up to |s|-many ways of decomposing fr(Ap,s) in part 1 leads to the same result: if fr(Ap,s−{j}) < σ for any j ∈ s, then Ap,s is not frequent.

We will also use the following easy properties regarding the relation of the threshold σ to the value one-half. They allow for some extra pruning for quite high frequency values (although this case might occur infrequently in practice).

Proposition 2. Let p ∈ P(I) be arbitrary, and s ∈ P(I − p), arbitrary for statements not depending on p.
1. |fr(Aj) − 0.5| < |σ − 0.5| ⇔ |fr(Āj) − 0.5| < |σ − 0.5|.
2. If σ < 0.5 then fr(Aj) ≤ σ ⇒ fr(Āj) > σ, and fr(Aj) > 1 − σ ⇔ fr(Āj) < σ.
3. If σ > 0.5 then fr(Aj) ≥ σ ⇒ fr(Āj) < σ, and fr(Aj) < 1 − σ ⇔ fr(Āj) > σ.
4. ∀j ∈ s, if σ > 0.5 and fr(Ap,s−{j}) > σ + fr(Ap∪{j},s−{j}) then fr(Ap∪{j},s−{j}) < σ, i.e. in this case Ap∪{j},s−{j} is not frequent.

Remark 2: If σ > 0.5 and ∃j ∈ s such that fr(Ap∪{j},s−{j}) > σ, then Ap,s is not frequent.
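Proposition 1(1) is the complementarity identity that lets the algorithm derive frequencies without counting. A quick numerical sanity check on random data (our own toy setup, not from the paper), here for the case |s| = 1:

```python
# Check fr(A_{p,{j}}) = fr(A_{p,∅}) - fr(A_{p∪{j},∅}) on a random database.
import itertools
import random

random.seed(0)
n = 4
T = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(200)]

def fr(p, s):
    """Support of the partial function A_{p,s} over the toy database T."""
    return sum(all(t[i] for i in p) and not any(t[j] for j in s)
               for t in T) / len(T)

def check_proposition_1():
    for j in range(n):
        rest = [i for i in range(n) if i != j]
        for r in range(len(rest) + 1):
            for p in map(set, itertools.combinations(rest, r)):
                lhs = fr(p, {j})
                rhs = fr(p, set()) - fr(p | {j}, set())
                if abs(lhs - rhs) > 1e-12:
                    return False
    return True

print(check_proposition_1())  # True
```

The identity holds exactly because the transactions with p all present split into those with Aj present and those with Aj absent.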

3.1 Candidate Generation

Moving to the next round of candidates once all frequent ℓ-itemsets have been identified corresponds to moving up, in all possible ways, one step within the same floor, and climbing up in all possible ways to the next floor. More formally, at floor zero, a frequent set Ap,∅ leads to consideration as potential candidates of the following itemsets: all Aq,∅ where q = p ∪ {i} for i ∉ p, and all Ap,{j} for j ∉ p. Also, itemset Ap,{j} would lead to Aq,{j} for q = p ∪ {i}, for i ∉ p and i ≠ j; our algorithm does not use this last sort of steps. In the other floors the movements have the same form. For all p ∈ P(I) and s ≠ ∅, from Ap,s we can climb up to the next floor to Ap,t where t = s ∪ {j}, for j ∈ I − (s ∪ p). Also, itemset Ap,s would lead to Aq,s for q = p ∪ {i}, for i ∉ p and i ∉ s, but we will not use such steps either. Therefore the scheme of the search for frequent itemsets with k 0-valued attributes (i.e. in floor k) is based on the following: whenever enough frequencies in the previous floor are known to test it, if fr(Ap,s−{j}) > σ + fr(Ap∪{j},s−{j}) where j ∈ s, then we know fr(Ap,s) > σ, so that it can be declared frequent; moreover, for σ > 0.5 this has to be tested only when Ap∪{j},s−{j} turned out to be non-frequent although Ap,s−{j} was frequent.
There we ﬁnd the frequent itemsets are AB, AC, BC, AB, AC, BC. At this moment, we know that there do not exist frequent itemsets in (2, 2). So, there will not exist frequent itemsets in (f, ) with f ≥ 2, > 2 and ≥ f . This information is used in the algorithm by means of the set J (deﬁned later) to reﬁne the search of candidate generation. In the following step we scan for frequent itemsets in (0, 3) and (1, 3) and ABC, ABC are frequent itemsets, and the exploration of the next level proves that, together with AB, they are the maximal frequent itemsets. Along the example it is clear how the algorithm would proceed in case we are given a bound on the number of negative attributes present: this would just discard ﬂoors that do not obey that limitation. 3.2

The Algorithm

Now, we present the algorithm in a more precise form. The algorithm has as input the set of attributes, the database, and the threshold σ on the support. The output of the algorithm is the set of all frequent itemsets, with both negative and positive attributes. Also, a similar algorithm can be easily developed to find the


set of all frequent itemsets with at most k negative attributes: simply impose explicitly the bound k on the corresponding loop in the algorithm. Let us consider the symbol f for the floor (that is, the number of negative attributes of the itemset, 0 ≤ f ≤ n) and the symbol ℓ for the level (the number of attributes of the itemset, 0 ≤ ℓ ≤ n); we will write Cf,ℓ and Lf,ℓ for the sets of candidates and of frequent itemsets respectively. At the beginning we suppose that all Cf,ℓ and Lf,ℓ for f ≤ ℓ ≤ n are empty. With respect to this notation our algorithm traces the following path: (0,1), (1,1); (0,2), (1,2), (2,2); (0,3), (1,3), (2,3), (3,3); etc. (recall the example). Now, we present the algorithm in a pseudocode style. For clarity, main loops are commented. After the algorithm we include additional comments about some instructions that improve the search of frequent itemsets.

Algorithm bounded-neg-Apriori

1. set current floor f := 0
   set current level ℓ := 1
   J := ∅    "This set is explained after the algorithm"
2. "Initially, we find the frequent itemsets with isolated positive attributes"
   Lf,ℓ := {A{i},∅ / i ∈ I, fr(A{i},∅) > σ}
3. "This is the main loop to climb up floors"
   while f ≤ ℓ and ℓ ≤ n do
     while Lf,ℓ ≠ ∅ and f ≤ ℓ and ℓ ≤ n do
       k := f + 1
       Lℓ,ℓ−1 := ∅
       "At this moment we can obtain the frequent itemsets of the upper"
       "floors at the same level from the itemsets in the previous floor"
       "There are two cases according to σ"
       while k ≤ ℓ do
         if k ∈ J then Lk,ℓ := ∅
         else
           if σ ≤ 0.5 then
   (1)       Ck,ℓ := {Ap,s / Ap,s′ ∈ Lk−1,ℓ−1, m ∈ I − (p ∪ s′), s = s′ ∪ {m},
                       ∀i ∈ p, Ap−{i},s ∈ Lk,ℓ−1, ∀j ∈ s, Ap,s−{j} ∈ Lk−1,ℓ−1}
           else
             Ck,ℓ := {Ap,s / Ap,s′ ∈ Lk−1,ℓ−1, m ∈ I − (p ∪ s′), s = s′ ∪ {m},
                      ∀i ∈ p, Ap−{i},s ∈ Lk,ℓ−1, ∀j ∈ s, Ap,s−{j} ∈ Lk−1,ℓ−1,
                      ∀j ∈ s, fr(Ap∪{j},s−{j}) < σ}
           fi
           Lk,ℓ := {Ap,s ∈ Ck,ℓ / ∃j ∈ s, fr(Ap,s−{j}) > σ + fr(Ap∪{j},s−{j})}
           if Lk,ℓ = ∅ then J := J ∪ {k} fi
         fi
         if ℓ = 1 and k = 1 and L1,1 ≠ ∅ then I := {i / A∅,{i} ∈ L1,1} fi    (2)
         set current floor k := k + 1


       od (while k)
       "Having selected a floor, we look for the frequent itemsets at the next"
       "level within this floor"
       set current level ℓ := ℓ + 1
       J := J ∪ {k + 1 / k ∈ J, k < n}
       if f = 0 then
         Cf,ℓ := {Ap,∅ / ∀i ∈ p, Ap−{i},∅ ∈ Lf,ℓ−1}
         Lf,ℓ := {Ap,∅ ∈ Cf,ℓ / fr(Ap,∅) > σ}
       else
         Cf,ℓ := {Ap,s / ∀i ∈ p, Ap−{i},s ∈ Lf,ℓ−1, ∀j ∈ s, Ap,s−{j} ∈ Lf−1,ℓ−1}
         Lf,ℓ := {Ap,s ∈ Cf,ℓ / fr(Ap,s) > σ}    (3)
       fi
     od (while ℓ)
     "If the maximum level within a floor is reached then we must go over"
     "to the next floor at this maximum level"
     set current floor f := f + 1
     Cf,ℓ := {Ap,s / ∀i ∈ p, Ap−{i},s ∈ Lf,ℓ−1, ∀j ∈ s, Ap,s−{j} ∈ Lf−1,ℓ−1}
     Lf,ℓ := {Ap,s ∈ Cf,ℓ / ∃j ∈ s, fr(Ap,s−{j}) > σ + fr(Ap∪{j},s−{j})}
   od (while f)
4. output the union of all Lk,ℓ for k ≤ ℓ ≤ n

The algorithm refines the search for frequent itemsets by means of the set J. At each level, J indicates the floors where no frequent itemsets can exist. In the sentence labeled (3) the generation of candidates and the computation of their frequencies must be done by considering σ (less or more than 0.5), as in the instruction labeled (1). Note that, by the sentence labeled (2), the only negative attributes that can appear in the candidate itemsets are the elements of L1,1. So, we use this set, as soon as it is computed, to refine the index set I used later along the computation. With respect to the complexity of the algorithm, from a theoretical point of view, two aspects are considered: candidate generation and itemset frequency computation. In candidate generation the worst case is reached when the threshold σ is less than or equal to 0.5. In this case, two itemsets, one of them with a particular attribute positive and the other with the same attribute negative, can be frequent simultaneously. If σ > 0.5 then by Remark 2 of Proposition 2 the generation is refined. Independently of the value of σ, the sets I and J refine the candidate generation, so the needed requirements can be reduced. In the itemset frequency computation, only itemsets with positive attributes are counted directly from the database. The frequencies of the other candidate itemsets, with any number of negative attributes, are obtained by using Proposition 1. Therefore, the number of passes through the database is as in Apriori, i.e., n + 1, where n is the size of the largest frequent itemset.
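The central saving, deriving the frequency of any negative itemset from counts of purely positive itemsets via Proposition 1, can be illustrated compactly. The sketch below is our own simplified rendering, not the authors' code: it enumerates floors directly and omits the levelwise pruning and the I/J refinements, but it only ever counts positive itemsets against the database:

```python
# Frequencies of itemsets with negated attributes are derived, never counted:
#   fr(A_{p,s}) = fr(A_{p,s-{j}}) - fr(A_{p∪{j},s-{j}})
from functools import lru_cache
from itertools import combinations

T = [(1, 1, 0), (1, 0, 0), (1, 1, 1), (0, 1, 0)]   # toy 0/1 database
n, N = 3, len(T)

@lru_cache(maxsize=None)
def fr(p, s):                     # p, s: sorted tuples of attribute indices
    if not s:                     # positive itemset: count in the database
        return sum(all(t[i] for i in p) for t in T) / N
    j, rest = s[0], s[1:]         # peel off one negated attribute and derive
    return fr(p, rest) - fr(tuple(sorted(p + (j,))), rest)

def frequent_sets(sigma, max_neg):
    """All (p, s) with fr > sigma and at most max_neg negated attributes."""
    out = []
    for k in range(max_neg + 1):              # floor k: k negated attributes
        for s in combinations(range(n), k):
            rest = [i for i in range(n) if i not in s]
            for r in range(len(rest) + 1):
                for p in combinations(rest, r):
                    if (p or s) and fr(p, s) > sigma:
                        out.append((p, s))
    return out
```

For instance, fr((0,), (2,)) — attribute 0 present, attribute 2 absent — is obtained as fr(A0) − fr(A0A2), without any database pass over negative patterns.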


4 Conclusions and Future Work

In cases where the absence of some items from a transaction is relevant, but one wants to avoid the generation of many rules relating these absences, it can be useful to allow for a maximum of k such absences in the frequent sets; even if no good guess exists for k, it may be useful to organize the search in such a way that the itemsets with m items show up in the order mandated by how many of them are positive: first all positive, then m − 1 positive and one negative, and so on. Our algorithm allows one to do this, and takes advantage of a number of facts, corresponding to relationships between the itemset frequencies, to avoid the counting of some candidates. Of course, it makes sense to try to combine this strategy with other ideas that have been used together with Apriori, like random sampling to evaluate the frequencies, or instead of Apriori, like alternative algorithms such as DIC [4] or Ready-and-Go [3]. Experimental developments, as well as more detailed analyses and a careful formalization of the setting, can lead to improved results, and we continue to work along these two lines.

References

1. Agrawal R., Imielinski T., Swami A.N.: Mining association rules between sets of items in large databases. Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD'93), ACM Press, Washington D.C., May 26-28 (1993) 207–216.
2. Agrawal R., Mannila H., Srikant R., Toivonen H., Verkamo A.I.: Fast discovery of association rules. In Fayyad U.M., Piatetsky-Shapiro G., Smyth P., Uthurusamy R., eds., Advances in Knowledge Discovery and Data Mining, AAAI Press, Menlo Park, CA (1996) 307–328.
3. Baixeries J., Casas-Garriga G. and Balcázar J.L.: Frequent sets, sequences, and taxonomies: new, efficient algorithmic proposals. Tech. Rep. LSI-00-78-R, UPC, Barcelona (2000).
4. Brin S., Motwani R., Ullman J.D., Tsur S.: Dynamic Itemset Counting and Implication Rules for Market Basket Data. Int. Conf. Management of Data, ACM Press (1997) 255–264.
5. Fayyad U.M., Piatetsky-Shapiro G., Smyth P.: From data mining to knowledge discovery: An overview. In Fayyad U.M., Piatetsky-Shapiro G., Smyth P. and Uthurusamy R., eds., Advances in Knowledge Discovery and Data Mining, AAAI Press, Menlo Park, CA (1996) 1–34.
6. Gunopulos D., Khardon R., Mannila H., Toivonen H.: Data Mining, Hypergraph Transversals, and Machine Learning. Proceedings of the Sixteenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, ACM Press, Tucson, Arizona, May 12-14 (1997) 209–216.
7. Mannila H., Toivonen H.: Levelwise search and borders of theories in knowledge discovery. Data Mining and Knowledge Discovery 1(3) (1997) 241–258.

Computational Discovery of Communicable Knowledge: Symposium Report

Sašo Džeroski¹ and Pat Langley²

¹ Jožef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia
  Saso.Dzeroski@ijs.si, www-ai.ijs.si/SasoDzeroski/
² Institute for the Study of Learning and Expertise
  2164 Staunton Court, Palo Alto, CA 94306, USA
  langley@isle.org, www.isle.org/~langley/

Abstract. The Symposium on Computational Discovery of Communicable Knowledge was held from March 24 to 25, 2001, at Stanford University. Fifteen speakers reviewed recent advances in computational approaches to scientific discovery, focusing on their discovery tasks and the generated knowledge, rather than on the discovery algorithms themselves. Despite considerable variety in both tasks and methods, the talks were unified by a concern with the discovery of knowledge cast in formalisms used to communicate among scientists and engineers.

Computational research on scientific discovery has a long history within both artificial intelligence and cognitive science. Early efforts focused on reconstructing episodes from the history of science, but the past decade has seen similar techniques produce a variety of new scientific discoveries, many of them leading to publications in the relevant scientific literatures. Work in this paradigm has emphasized formalisms used to communicate among scientists, including numeric equations, structural models, and reaction pathways. However, in recent years, research on data mining and knowledge discovery has produced another paradigm. Even when applied to scientific domains, this framework employs formalisms developed by artificial intelligence researchers themselves, such as decision trees, rule sets, and Bayesian networks. Although such methods can produce predictive models that are highly accurate, their outputs are not stated in terms familiar to scientists, and thus typically are not very communicable. To highlight this distinction, Pat Langley organized the Symposium on Computational Discovery of Communicable Knowledge, which took place at Stanford University's Center for the Study of Language and Information on March 24 and 25, 2001. The meeting's aim was to bring together researchers who are pursuing computational approaches to the discovery of communicable knowledge and to review recent advances in this area. The primary focus was on discovery in scientific and engineering disciplines, where communication of knowledge is often a central concern.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 45–49, 2001.
© Springer-Verlag Berlin Heidelberg 2001

46

S. Džeroski and P. Langley

Each of the 15 presentations emphasized the discovery tasks (the problem formulation and system input, including data and background knowledge) and the generated knowledge (the system output). Although artificial intelligence and machine learning traditionally focus on differences among algorithms, the meeting addressed the results of computational discovery at a more abstract level. In particular, it explored what methods for the computational discovery of communicable knowledge have in common, rather than the great diversity of methods used to that end.

The commonalities among methods for communicable knowledge discovery were summarized best by Raúl Valdés-Pérez in a presentation titled A Recipe for Designing Discovery Programs on Human Terms. The key step in his recipe was identifying a set of possible solutions for some discovery task, as it is here that one can adopt a formalism that humans already use to represent knowledge. Valdés-Pérez viewed computational discovery as a problem-solving activity to which one can apply heuristic-search methods. He illustrated the recipe on the problem of discovering niche statements, i.e., properties of items that make them unique or distinctive in a given set of items. The knowledge representation formalisms considered in the different presentations were diverse, ranging from equations through qualitative rules to reaction pathways. Most talks at the symposium fell within two broad categories. The first was concerned with equation discovery in either static systems or dynamic ones that change over time. The second addressed communicable knowledge discovery in biomedicine and in the related fields of biochemistry and molecular biology.

One formalism that scientists and engineers rely on heavily is equations. The task of equation discovery involves finding numeric or quantitative laws, expressed as one or more equations, from collections of measured numeric data.
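As a toy illustration of this task (our own sketch, not an example from any of the talks): once the structure of the law is fixed, equation discovery reduces to estimating parameters from the measured data. The OCaml fragment below "discovers" the communicable law y = ax + b from numeric measurements by ordinary least squares; all names here are illustrative.

```ocaml
(* Toy equation discovery: fit y = a*x + b to measurements by ordinary
   least squares. A real discovery system would also search over the
   *structure* of the equation; this sketch only estimates parameters
   for one fixed structure. *)
let fit_line data =
  let n = float_of_int (List.length data) in
  let sx = List.fold_left (fun s (x, _) -> s +. x) 0.0 data in
  let sy = List.fold_left (fun s (_, y) -> s +. y) 0.0 data in
  let sxx = List.fold_left (fun s (x, _) -> s +. x *. x) 0.0 data in
  let sxy = List.fold_left (fun s (x, y) -> s +. x *. y) 0.0 data in
  let a = (n *. sxy -. sx *. sy) /. (n *. sxx -. sx *. sx) in
  let b = (sy -. a *. sx) /. n in
  (a, b)

let () =
  (* noiseless measurements of y = 2x + 1 *)
  let data = [ (0.0, 1.0); (1.0, 3.0); (2.0, 5.0); (3.0, 7.0) ] in
  let a, b = fit_line data in
  Printf.printf "y = %.2f * x + %.2f\n" a b
```

Systems such as those discussed at the symposium go well beyond this, searching the space of equation structures and enforcing constraints such as unit-dimension admissibility.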
Most existing approaches to this problem deal with the discovery of algebraic equations, but recent work has also addressed the task of dynamic system identification, which involves discovering differential equations. Takashi Washio from Osaka University presented a talk about Conditions on Law Equations as Communicable Knowledge, in which he discussed the conditions that equations must satisfy to be considered communicable. In addition to fitting the observed data, these include generic conditions and domain-dependent conditions. The former include objectiveness, generality, and reproducibility, as well as parsimony and mathematical admissibility with respect to unit dimensions and scale-type constraints.

Kazumi Saito from Nippon Telegraph and Telephone and Mark Schwabacher from NASA Ames Research Center presented two related applications of computational equation discovery in the environmental sciences, both concerned with global models of the Earth ecosystem. Saito's talk on Improving an Ecosystem Model Using Earth Science Data addressed the task of revising an existing quantitative scientific model for predicting the net plant production of carbon in the light of new observations. Schwabacher's talk, Discovering Communicable Scientific Knowledge from Spatio-Temporal Data in Earth Science, dealt with the problem of predicting, from climate variables, the Normalized Difference Vegetation Index, a measure of greenness and a key component of the previous ecosystem model.

Four presentations discussed the task of dynamic system identification, which involves identifying the laws that govern the behavior of systems with continuous variables that change over time. Such laws typically take the form of differential equations. Two of these talks described extensions to equation discovery methods to address system identification, whereas the other talks reported work that began with methods for system identification and incorporated artificial intelligence techniques that take advantage of domain knowledge. Sašo Džeroski from the Jožef Stefan Institute, in his talk on Discovering Ordinary and Partial Differential Equations, gave an overview of computational methods for discovering both ordinary and partial differential equations, the latter of which describe dynamic systems that involve change over several dimensions (e.g., space and time). Ljupčo Todorovski, from the same research center, discussed an approach that uses domain knowledge to aid the discovery process in his talk, Using Background Knowledge in Differential Equations Discovery. He showed how knowledge in the form of context-free grammars can constrain discovery in the domain of population dynamics. Reinhard Stolle, from Xerox PARC, spoke about Communicable Models and System Identification. He described a discovery system that handles both structural identification and parameter estimation by integrating qualitative reasoning, numerical simulation, geometric reasoning, constraint reasoning, abstraction, and other mechanisms. Matthew Easley from the University of Colorado, Boulder, reported extensions to Stolle's framework in his presentation, Incorporating Engineering Formalisms into Automated Model Builders.
His approach relied on input-output modeling to plan experiments, using the resulting data, combined with knowledge at different levels of abstraction, to construct a differential equation model. The talk by Feng Zhao from Xerox PARC, Structure Discovery from Massive Spatial Data Sets, described an approach to analyzing spatio-temporal data that relies on the notion of spatial aggregation. This mechanism generates summary descriptions of the raw data, which it characterizes at varying levels of detail. Zhao reported applications to several challenging problems, including the interpretation of weather data, optimization for distributed control, and the analysis of spatio-temporal diffusion-reaction patterns.

The rapid growth of biological databases, such as that for the human genome, has led to increased interest in applying computational discovery to biomedicine and related fields. Five presentations at the symposium focused on this general area. They covered a variety of discovery methods, including both propositional and first-order rule induction, genetic programming, theory revision, and abductive inference, with similar breadth in the biological discovery tasks to which they were applied.

Bruce Buchanan and Joseph Phillips, from the University of Pittsburgh, gave a presentation titled Introducing Semantics into Machine Learning. This focused on their incorporation of domain knowledge into rule-induction algorithms to let them find interesting and novel relations in medicine and science. They reviewed both syntactic and semantic constraints on the rule discovery process and showed that stronger forms of background knowledge increase the chances that discovered rules are understandable, interesting, and novel. Stephen Muggleton from York University, in his talk Knowledge Discovery in Biological and Chemical Domains, described his application of first-order rule induction to predicting the structure of proteins, modeling the relations between a chemical's structure and its activity, and predicting a protein's function from its structure (e.g., identifying precursors of neuropeptides). Knowledge discovered in these efforts has appeared in journals for the respective scientific areas. John Koza from Stanford University presented Reverse Engineering and Automatic Synthesis of Metabolic Pathways from Observed Data. His approach utilized genetic programming to carry out search through a space of metabolic pathway models, with search directed by the models' abilities to fit time-series data on observed chemical concentrations. The target model included an internal feedback loop, a bifurcation point, and an accumulation point, suggesting the method can handle complex metabolic processes. The presentation by Pat Langley, from the Institute for the Study of Learning and Expertise, addressed Knowledge and Data in Computational Biological Discovery. He reported an approach that used data on gene expressions to revise a model of photosynthetic regulation in Cyanobacteria previously developed by plant biologists. The result was an improved model with altered processes that better explains the expression levels observed over time. The ultimate goal is an interactive system to support human biologists in their discovery activities. Marc Weeber from the U.S.
National Library of Medicine reported on a quite different approach in his talk on Literature-based Discovery in Biomedicine. The main idea relies on utilizing bibliographic databases to uncover indirect but plausible connections between disconnected bodies of scientific knowledge. He illustrated this method with a successful example of finding potentially new therapeutic applications for an existing drug, thalidomide.

Sakir Kocabas, from Istanbul Technical University, talked about The Role of Completeness in Particle Physics Discoveries, which dealt with a completely different domain. He described a computational model of historical discovery in particle physics that relies on two main criteria, consistency and completeness, to postulate new quantum properties, determine those properties' values, propose new particles, and predict reactions among particles. Kocabas' system successfully simulated an extended period in the history of this field, including discovery of the neutrino and postulation of the baryon number.

At the close of the symposium, Lorenzo Magnani from the University of Pavia commented on the presentations from a philosophical viewpoint. In particular, he cast the various efforts in terms of his general framework for abduction, which incorporates different types of explanatory reasoning. The gathering also spent time honoring the memory of Herbert Simon and Jan Żytkow, both of whom played seminal roles in the field of computational scientific discovery.


Further information on the symposium is available at the World Wide Web page http://www.isle.org/symposia/comdisc.html. This includes information about the speakers, abstracts of the presentations, and pointers to publications related to their talks. Slides from the presentations can be found at the Web page http://math.nist.gov/~JDevaney/CommKnow/. Sašo Džeroski and Ljupčo Todorovski are currently editing a book based on the talks given at the symposium. Information on the book will appear at the symposium page and the first author's Web page as it becomes available.

Acknowledgements

The Symposium on Computational Discovery of Communicable Knowledge was supported by Grant NAG 2-1335 from NASA Ames Research Center and by the Nippon Telegraph and Telephone Corporation.

References

Bradley, E., Easley, M., & Stolle, R. (in press). Reasoning about nonlinear system identification. Artificial Intelligence.
Kocabas, S., & Langley, P. (in press). An integrated framework for extended discovery in particle physics. Proceedings of the Fourth International Conference on Discovery Science. Washington, D.C.: Springer.
Koza, J. R., Mydlowec, W., Lanza, G., Yu, J., & Keane, M. A. (2001). Reverse engineering and automatic synthesis of metabolic pathways from observed data using genetic programming. Pacific Symposium on Biocomputing, 6, 434–445.
Lee, Y., Buchanan, B. G., & Aronis, J. M. (1998). Knowledge-based learning in exploratory science: Learning rules to predict rodent carcinogenicity. Machine Learning, 30, 217–240.
Muggleton, S. (1999). Scientific knowledge discovery using inductive logic programming. Communications of the ACM, 42, 42–46.
Saito, K., Langley, P., Grenager, T., Potter, C., Torregrosa, A., & Klooster, S. A. (in press). Computational revision of quantitative scientific models. Proceedings of the Fourth International Conference on Discovery Science. Washington, D.C.: Springer.
Schwabacher, M., & Langley, P. (2001). Discovering communicable scientific knowledge from spatio-temporal data. Proceedings of the Eighteenth International Conference on Machine Learning (pp. 489–496). Williamstown, MA: Morgan Kaufmann.
Shrager, J., Langley, P., & Pohorille, A. (2001). Guiding revision of regulatory models with expression data. Unpublished manuscript, Institute for the Study of Learning and Expertise, Palo Alto, CA.
Todorovski, L., & Džeroski, S. (2000). Discovering the structure of partial differential equations from example behavior. Proceedings of the Seventeenth International Conference on Machine Learning (pp. 991–998). San Francisco: Morgan Kaufmann.
Valdés-Pérez, R. E. (1999). Principles of human-computer collaboration for knowledge discovery in science. Artificial Intelligence, 107, 335–346.
Washio, T., Motoda, H., & Niwa, Y. (2000). Enhancing the plausibility of law equation discovery. Proceedings of the Seventeenth International Conference on Machine Learning (pp. 1127–1134). San Francisco: Morgan Kaufmann.
Yip, K., & Zhao, F. (1996). Spatial aggregation: Theory and applications. Journal of Artificial Intelligence Research, 5, 1–26.

VML: A View Modeling Language for Computational Knowledge Discovery

Hideo Bannai¹, Yoshinori Tamada², Osamu Maruyama³, and Satoru Miyano¹

¹ Human Genome Center, Institute of Medical Science, University of Tokyo
4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639, Japan. {bannai,miyano}@ims.u-tokyo.ac.jp
² Department of Mathematical Sciences, Tokai University
1117 Kitakaname, Hiratuka-shi, Kanagawa 259-1292, Japan. tamada@ss.u-tokai.ac.jp
³ Faculty of Mathematics, Kyushu University
Kyushu University 36, Fukuoka 812-8581, Japan. om@math.kyushu-u.ac.jp

Abstract. We present the concept of a functional programming language called VML (View Modeling Language), which provides facilities to increase the efficiency of the iterative, trial-and-error cycle that frequently appears in any knowledge discovery process. In VML, functions can be specified so that returned values implicitly "remember", with a special internal representation, that they were calculated from the corresponding function. VML also provides facilities for "matching" the remembered representation, so that one can easily obtain, from a given value, the functions and/or parameters used to create the value. Further, we describe, as VML programs, successful knowledge discovery tasks which we have actually carried out in the biological domain, and argue that computational knowledge discovery experiments can be efficiently developed and conducted using this language.

1 Introduction

The general flow and components which comprise the knowledge discovery process have come to be recognized [4,10] in the literature. According to these articles, the KDD process can, in general, be divided into several stages: data preparation (selection, preprocessing, transformation), data mining, hypothesis interpretation/evaluation, and knowledge consolidation. It is also well known that a typical process will not simply go one way through these steps, but will involve many feedback loops, due to the trial-and-error nature of knowledge discovery [2]. Most research in the literature concerning KDD focuses on only a single stage of the process, such as the development of efficient and intelligent algorithms for a specific problem in the data mining stage. On the other hand, there has been comparatively little work which considers the process as a whole, concentrating on the iterative nature inherent in any KDD process.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 30–44, 2001.
© Springer-Verlag Berlin Heidelberg 2001

VML: A View Modeling Language for Computational Knowledge Discovery

31

More recently, the concept of a view has been introduced for describing the steps of this process in a uniform manner [1,12,13,14]. Views are essentially functions over data. These functions, as well as their combinations, represent ways of looking at data, and the values they return are attribute values concerning their input arguments. The relationship between the data, a view, and the result obtained by applying the view to the data can be considered knowledge. The goal of KDD can thus be restated as the search for meaningful views. Views also provide an elegant interface for human intervention in the discovery process [1,12], whose need has been stressed in [9]. The iterative cycle of KDD consists largely of the composing and decomposing of views, and facilities should be provided to assist these activities. The purpose of this paper is to present the concept of a programming language, VML (View Modeling Language), which can help speed up this iterative cycle. We consider extending the Objective Caml (OCaml) language [27], a functional language which is a dialect of ML [16]. We chose a functional language as our base, since it can handle higher-order values (functions) just like any other value, which should help in the manipulation of views. Also, functional languages have a reputation for enabling efficient and accurate programming of maintainable code, even for complex applications [6]. We focus on the fact that the primary difference between a view and a function is that a view must always have an interpretable meaning, because knowledge must be interpretable to be of any use. The two extensions we consider are the keywords 'view' and 'vmatch'. 'view' is used to bind a function to a name as well as to instruct the program to remember any value resulting from the function. 'vmatch' is a keyword for decomposing a functional application, enabling the extraction of the origins of remembered values.
Of course, it is not impossible to accomplish the “remembering” with conventional languages. For example, we can have each function return a data structure which contains the resulting value and their representation. However, we wish to free the programmer from the labor of keeping track this data structure: what parameters were used where and when, by packaging this information implicitly into the language. As a result, the following tasks, for example, can be done without much extra eﬀort: – Interpret knowledge (functions and their parameters) obtained from learning/discovery programs. – Reuse knowledge obtained from previous learning/discovery rounds. Although we do not yet have a direct implementation of VML, we have been conducting computational experiments written in the C++ language based on the idea of views, obtaining substantial results [1,25]. We show how such experiments can be conducted comparatively easily by describing the experiments in terms of VML. The structure of this paper is as follows: Basic concepts of views and VML is described in Section 3. We describe, using VML, two actual discovery tasks we have conducted in Section 4. We discuss various issues in Section 5.

32

H. Bannai et al.

2 Related Work

There have been several knowledge discovery systems which address similar problems concerning the KDD process as a whole. KEPLER [21] concentrates on the extensibility of the system, adopting a "plug-in architecture". CLEMENTINE [8] is a successful commercial application which focuses on human intervention, providing components which can be easily put together in many ways through a GUI interface. Our work is different and unique in that it tries to give a solution at a more generic level: until we understand the nature of the data, we must try, literally, any method we can come up with, and therefore universality is desired in our approach.

Concerning the "remembering" of the origin of a value, one way to accomplish this is to remember the source code of the function. For example, some dialects of LISP provide a function called get-lambda-expression, which returns the actual Lisp code of a given closure. However, this can return too much information concerning the value (e.g. the source code of a complicated algorithm). The idea in our work is to limit the information that the user will see, by regarding functions specified with the view keyword as the smallest unit of representation.

3 Concepts

In this section, we first briefly describe the concept of views, as found in [1]. Then, we discuss the basic concepts of VML as an extension to the OCaml language [27], and give simple examples.

3.1 Entity, Views, and View Operation

Here, we review the definitions of entity, view, and view operation, and show how the KDD process can be described in terms of these concepts. An entity set E is a set of objects which may be distinguished from one another, representing the data under consideration. Each object e ∈ E is called an entity. A view v : E → R is a function over E: v takes an entity e and returns some aspect (i.e. attribute value) concerning e. A view operation is an operation which generates new views from existing views and entities. Below are some examples.

Example 1. Given a view v : E → R, a new view v′ : E → R′ may be created with a function ψ : R → R′ (i.e. v′ ≡ ψ ◦ v : E → R → R′).

We can also consider n-ary functions as views. All arguments except for the argument expecting the entity can be regarded as parameters of the view. Hypothesis generation via machine learning algorithms can also be considered a form of view operation, and the generated hypothesis can itself be considered a view.

Example 2. Given a set of data records (entities) and their attributes (views), the ID3 algorithm [18] (view operator) generates a decision tree T. T is also a view because it is a function which returns the class that a given entity is classified to. The generated view T can also be used as an input to other view operations to create new views, which can be regarded as knowledge consolidation.

Views and view operators are combined to create new views. The structure of such combinations of a compound view is called the design of the view. The task of KDD lies in the search for good views which explain the data; knowledge concerning the data is encapsulated in the view's design. Human intervention can be conducted through the hand-crafted design of views by domain experts. To successfully assist the domain expert in the knowledge discovery process, the expert should be able to manipulate and understand the view design with ease.

3.2 Representations

Here, we describe the basic concepts of VML. We shall call the way a certain value was created its representation. For example, if an integer value 55 was created by adding the numbers from 1 to 10, the representation of 55 is, informally, "add the integers from 1 to 10". A value may have multiple representations, but every representation should have only one corresponding value (except if there is some sort of random process in the representation). Intuitively, the representation of any value can be considered the source code for computing that value. However, in VML, representations are limited to primitive (first-order) values and applications of functions specified with the view keyword, so that it is feasible for users to understand and interpret the values, seeing only the information that they want to see. The purpose of the view keyword is to specify that the runtime system should remember the representation of the return value of the function. We shall call such specified functions view functions. Representations of values can be defined as:

rep ::= primv              (* primitive values *)
      | vf rep1 ... repn   (* application to view functions *)
      | x . rep'           (* λ-abstraction of representations *)

vmatch is used to extract components from the representation of a value, by conducting pattern matching against the representation.

3.3 Simple Example

We give a simple example to illustrate basic OCaml syntax and the use of the view and vmatch statements. The syntax and semantics of VML are the same as those of OCaml except for the added keywords. Only descriptions of the extended keywords are given; the reader is requested to consult the Objective Caml Manual [27] for more information.


For the example in the previous subsection, a function which calculates the sum of the positive integers 1 to n can be written in OCaml as follows. (The expressions starting after '#' and ending with ';;' are input by the user; the other lines are responses from the compiler/interpreter. Comments are written between '(*' and '*)'.)

# let rec sumn n = if n <= 0 then 0 else (n + sumn (n-1));;
val sumn : int -> int = <fun>
# sumn 10;; (* apply 10 to sumn *)
- : int = 55

let binds the function (value) to the name sumn. rec specifies that the function is a recursive function (a function that calls itself). n is an argument of the function sumn. int -> int is the type of the function sumn, which reads as follows: "sumn is a function that takes a value of type int as an argument, and returns a value of type int". Notice that the type of sumn is automatically inferred by the compiler/interpreter, and need not be specified. Arguments are applied to functions simply by writing them consecutively.

The syntax of the view keyword is the same as that of the let statement. Here is the above function specified with the view keyword in place of let (we capitalize the first letter of view functions for convenience):

# view rec Sumn n = if n <= 0 then 0 else (n + Sumn (n-1));;
val Sumn : int -> int = <fun>::(n . Sumn n)
# Sumn 10;;
- : int = 55::(Sumn 10)

Sumn is defined as a view function, and therefore values calculated from Sumn are implicitly remembered. In the above example, the return value is 55, and its representation, shown to the right of the double colon '::', is (Sumn 10). If we know the meaning of Sumn, we do not need to see the inside of Sumn to understand this value of 55.

The vmatch keyword is used to decompose the representation of a value and extract the function and/or any parameters that were used to create the value. Its syntax is the same as that of the match statement of OCaml, which is used for pattern matching of miscellaneous data structures.

# let v = Sumn 10;; (* apply 10 to Sumn and bind the value to v *)
val v : int = 55::(Sumn 10)
# vmatch v with (* Extract parameters used to calculate v *)
    (Sumn x) -> printf "%d was applied to Sumn\n" x
  | _ -> printf "Error: v did not match (Sumn x)\n";;
10 was applied to Sumn
- : unit = ()

In the above example, the representation of v, which is (Sumn 10), is matched against the pattern (Sumn x). If the match is successful, 'x' in the pattern is assigned the corresponding value 10. This value can be used in the expression to the right of '->', which is evaluated in the case of a match. Multiple pattern matches can be attempted: each pattern and its corresponding expression are separated by '|', and the expression for the first matching pattern is evaluated. The underscore '_' represents a wild-card pattern, matching any representation. The entire vmatch expression evaluates to the unit type () (similar to void in the C language) in this case, because printf is a function that executes a side-effect (printing a string) and returns ().

3.4 Partial Application

Here, we consider how representations of partial applications to view functions can be done. We note, however, that our description here may contains subtle problems, for example, concerning the order of evaluation of the expressions, which may be counter intuitive when programs contain side-eﬀects. A formal description and sample implementation resolving these issues can be found in [20]. In the previous examples, we added integers from 1 to n. Suppose we want to specify where to start also: add the integers from m to n. We can write the view function as follows: # view rec Sum_m_to_n m n = if (m > n) then 0 else (n + (Sum_m_to_n val Sum_m_to_n : int -> int -> int = ::(m n # let sum3to = Sum_m_to_n 3;; (* partial val sum3to : int -> int = ::(n . Sum_m_to_n # sum3to 5;; - : int = 12::(Sum_m_to_n 3 5)

m (n-1)));; . Sum_m_to_n m n) application *) 3 n)

Sum m to n is a view function of type int->int->int, which can be read as “a function that takes two arguments of type int and returns a value of type int”, or, “a function that takes one argument of type int and returns a value of type int->int”. In deﬁning sum3to, Sum m to n is applied with only one argument, 3, resulting in a function of type int->int. Applying another argument 5 to sum3to will result in the same value as Sum m to n 3 5. Partially applied values are matched as follows. Arguments not applied will only match the underscore ‘ ’: # vmatch sum3to with (Sum_m_to_n x _) -> printf "Sum_m_to_n partially applied with %d\n" x | _ -> printf "failed match\n";; Sum_m_to_n partially applied with 3 - : unit = () We can reverse the order of arguments by the fun keyword, which is essentially lambda abstraction.


# let sum10from = fun m -> Sum_m_to_n m 10;;
val sum10from : int -> int = <fun>::(m . Sum_m_to_n m 10)
# sum10from 5;;
- : int = 45::(Sum_m_to_n 5 10)
# vmatch sum10from with
    (Sum_m_to_n _ x) -> printf "Sum_m_to_n partially applied with %d\n" x
  | _ -> printf "failed match\n";;
Sum_m_to_n partially applied with 10
- : unit = ()

The representation for sum10from is obtained by β-reduction of the representation:

(m . ((m n . Sum_m_to_n m n) m 10)) →β (m . ((n . Sum_m_to_n m n) 10)) →β (m . (Sum_m_to_n m 10))

3.5 Multiple Representations

In the example with Sumn, although Sumn recursively calls itself, the representations of the values generated in the recursive calls are not remembered, because the function '+' is not a view function. If multiple representations are to be remembered, they can be maintained as a list of representations, and vmatch will try to match any of the representations.
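This idea can be sketched in plain OCaml (our own modeling of the mechanism, not VML itself; all type and function names here are illustrative): tag each value with a list of representations and let a vmatch-like helper succeed if any element matches.

```ocaml
(* Illustrative modeling of multiple representations per value. *)
type rep =
  | Prim of string
  | App of string * rep list

type 'a tagged = { value : 'a; reps : rep list }

(* 55 remembered under two designs: Sumn 10 and Sum_m_to_n 1 10 *)
let v =
  { value = 55;
    reps = [ App ("Sumn", [ Prim "10" ]);
             App ("Sum_m_to_n", [ Prim "1"; Prim "10" ]) ] }

(* a vmatch-like helper: try the predicate against every remembered rep *)
let vmatch_any pred tagged = List.exists pred tagged.reps

let () =
  let made_by_sum_m_to_n =
    function App ("Sum_m_to_n", _) -> true | _ -> false in
  if vmatch_any made_by_sum_m_to_n v then
    print_endline "v also matches (Sum_m_to_n _ _)"
```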

4 Actual Knowledge Discovery Tasks

We describe two computational knowledge discovery experiments, showing how VML can assist the programmer in such experiments. As noted in Section 1, VML is not yet fully implemented, and therefore the experiments described here were developed in the C++ language, using the HypothesisCreator library [25], based on the concept of views.

4.1 Detecting Gene Regulatory Sites

It is known that, for many genes, whether or not the gene expresses its function depends on specific proteins, called transcription factors, which bind to specific locations on the DNA, called gene regulatory sites. Gene regulatory sites are usually located in the upstream region of the coding sequence of the gene. Since proteins selectively bind to these sites, it is believed that common motifs exist for genes which are regulated by the same protein. We consider the case where the 2-block motif model is preferred, that is, when the binding site cannot be characterized by a single motif and two motifs should be searched for instead.

VML: A View Modeling Language for Computational Knowledge Discovery


view ListDistAnd min max l1 l2 : int->int->(int list)->(int list)->bool
    Return true if there exist e1 ∈ l1, e2 ∈ l2 such that min ≤ (e2 − e1) ≤ max.

view AstrstrList mm pat str : astr_mismatch->string->string->(int list)
    Return the match positions (using approximate pattern matching) of a pattern as a list of int. The type astr_mismatch is the tuple (int * bool * bool * bool), where the int value is the maximum number of errors allowed, and the bool values are flags to permit the error types insertion, deletion, and substitution, respectively.

Fig. 1. View functions used in the view design for detecting putative gene regulatory sites.
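To make the roles of these two view functions concrete, here is a rough Python sketch of what they compute (a hypothetical illustration allowing substitutions only; the actual HypothesisCreator code and the astr_mismatch flags are richer than this):

```python
# Hypothetical sketches of AstrstrList and ListDistAnd (substitutions only).

def astrstr_list(max_mismatch, pat, s):
    """Positions where pat matches s with at most max_mismatch substitutions."""
    hits = []
    for i in range(len(s) - len(pat) + 1):
        errors = sum(1 for a, b in zip(pat, s[i:i + len(pat)]) if a != b)
        if errors <= max_mismatch:
            hits.append(i)
    return hits

def list_dist_and(lo, hi, l1, l2):
    """True if some e1 in l1 and e2 in l2 satisfy lo <= e2 - e1 <= hi."""
    return any(lo <= e2 - e1 <= hi for e1 in l1 for e2 in l2)
```

A 2-block motif test then amounts to calling list_dist_and on the two match-position lists computed over the same substring.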

We develop a simple, original method based on views. Testing the method on B. subtilis σ^A-dependent promoter sequences taken from [5], our method was able to rediscover the same results, as well as other candidates for 2-block motifs. We started by modeling the 2-block motif for regulatory sites as consisting of three components: the motif patterns (string patterns, with possible mismatches), the gap width of these patterns (how far apart they can be), and their positions (distance in base pairs from the beginning of the coding sequence). We construct a function with the following design (the representation is omitted):

# let orig pos len g_min g_max mm1 mm2 pat1 pat2 str =
    ListDistAnd g_min g_max
      (AstrstrList mm1 pat1 (Substring pos len str))
      (AstrstrList mm2 pat2 (Substring pos len str));;
val orig : int -> int -> int -> int -> astr_mismatch -> astr_mismatch
           -> string -> string -> string -> bool = <fun>

The explanations of the view functions used are given in Figure 1. The arguments except str are parameters, and when all the parameters are applied, a function of type string->bool is generated, returning true if a certain 2-block motif appears in a given string, and false otherwise. To look for good parameters, we took a supervised learning approach: as negative data, we randomly selected genes of B. subtilis not included in the original dataset from the GenBank database [24]. The score of each view is based on its accuracy as a classification function that decides whether or not an input sequence has the motifs. We looked at several top-ranking views in order to evaluate them. Numerous iterations with different search spaces yielded some interesting results. Selected results are shown in Figure 2. By limiting the search space using knowledge obtained from previous work, we were able to come up with views v1 and v2 whose 2-block motifs were consistent with, or identical to, the motifs "TTGACA" and "TATAAT" detected in [5,11].
We also ran the experiments with a wider range of parameters, and found a view v3 that could perfectly discriminate the positive and negative examples. Although a biological


v1: (str . ListDistAnd 20 30
       (AstrstrList (2,false,false,true) "ttgtca" (Substring -40 35 str))
       (AstrstrList (2,false,false,true) "tataat" (Substring -40 35 str)))
    true positive 102  false negative 40 = 71.8 %
    false positive 0   true negative 142 = 100.0 %

v2: (str . ListDistAnd 20 30
       (AstrstrList (2,false,false,true) "ttgaca" (Substring -40 35 str))
       (AstrstrList (2,false,false,true) "tataat" (Substring -40 35 str)))
    true positive 100  false negative 42 = 70.4 %
    false positive 0   true negative 142 = 100.0 %

v3: (str . ListDistAnd 25 35
       (AstrstrList (3,false,false,true) "atgatc" (Substring -50 65 str))
       (AstrstrList (2,false,false,true) "gttata" (Substring -50 65 str)))
    true positive 142  false negative 0 = 100.0 %
    false positive 0   true negative 142 = 100.0 %

Fig. 2. Representations of the results of our method to find regulatory sites.

interpretation must follow for the result to be meaningful, we succeeded in finding a candidate for a novel result. In this kind of experiment, VML can help the expert in the following way: although the views are sorted by some score, it is difficult to check the validity of a view from its score alone, i.e., a valuable view will probably have a high score, but a view with a high score may not be valuable. In the evaluation stage, the expert needs to look at the many different views with adequately high scores, and see what kind of parameters were used to generate each view. This can be written easily in VML, since it amounts to simply obtaining and displaying the representations of the high-scoring functions.

4.2 Characterization of N-Terminal Sorting Signals of Proteins

Proteins are composed of amino acids, and can be regarded as strings over an alphabet of 20 characters. Most proteins are first synthesized in the cytosol and carried to specified locations, called localization sites. In most cases, the information determining the subcellular localization site is represented as a short amino acid sequence segment called a protein sorting signal [17]. Given an amino acid sequence, predicting where the protein will be carried is an important and difficult problem in molecular biology. Although numerous signal sequences have been found, similarities between these sequences for the same localization site are not yet fully understood. Our aim was to come up with a predictor which could challenge TargetP [3], the state-of-the-art neural-network-based predictor, in terms of prediction accuracy, while not sacrificing the interpretability of the classification rule. Data available from the TargetP web-site [28] was used, consisting of 940 sequences: 368 mTP (mitochondrial targeting peptides), 141 cTP (chloroplast transit peptides), 269 SP (signal peptides), and 162 "Other" sequences. The general approach was to discuss with an expert how to design the views, conduct computational experiments with those view designs, present the results to the expert as feedback, and then repeat the process.


We first considered binary classifiers, which distinguish sequences carrying a certain signal. The entity set is the set of amino acid sequences. The views we look for are of type string -> bool: for an amino acid sequence, return a Boolean value, true if the sequence contains a certain signal, and false if it does not. The views we designed (in temporal order) can be written in VML as follows (the meaning of each view function is given in Figure 3):

# let h1 pat mm ind pos len str =
    Astrstr mm pat (AlphInd ind (Substring pos len str));;
val h1 : string -> astr_mismatch -> (char -> char) -> int -> int
         -> string -> bool = <fun>
  ::(pat mm ind pos len str .
       Astrstr mm pat (AlphInd ind (Substring pos len str)))

# let h2 thr ind pos len str =
    GT (Average (AAindex ind (Substring pos len str))) thr;;
val h2 : float -> string -> int -> int -> string -> bool = <fun>
  ::(thr ind pos len str .
       GT (Average (AAindex ind (Substring pos len str))) thr)

# let h3 thr aaind pos1 len1 pat mm alphind pos2 len2 str =
    And (h1 pat mm alphind pos1 len1 str) (h2 thr aaind pos2 len2 str);;
val h3 : float -> string -> int -> int -> string -> astr_mismatch
         -> (char -> char) -> int -> int -> string -> bool = <fun>
  ::(thr aaind pos1 len1 pat mm alphind pos2 len2 str .
       And (h1 pat mm alphind pos1 len1 str) (h2 thr aaind pos2 len2 str))

Notice that after applying all the arguments except for the last string, we obtain functions of type string -> bool, as desired. For example, using view function h2, we can create a view function of type string -> bool:

# let f = h2 3.5 "BIGC670101" 5 20;;
val f : string -> bool = <fun>
  ::(str . GT (Average (AAindex "BIGC670101" (Substring 5 20 str))) 3.5)

Each function is composed of view functions, so the representation of such a function contains the information of the applied arguments. The representation of the above rule can be read as: "For a given amino acid sequence, first look at the substring of length 20 starting from position 5. Then calculate the average volume ("BIGC670101" is the accession id for the amino acid index 'volume') of the amino acids appearing in the substring, and return true if the value is greater than 3.5, and false otherwise." The task is now to find good parameters that define a function which can accurately distinguish the signals. For each view design, a wide range of parameters was applied. For each combination of parameters and view design shown


view Substring pos len str : int -> int -> string -> string
    return the substring [pos, pos+len-1] of str. A negative value for pos means to count from the right end of the string.

view AlphInd ind str : (char -> char) -> string -> string
    convert str according to the alphabet indexing ind. ind is a mapping of char -> char, called an alphabet indexing [19], and can be considered as a classification of the characters of a given alphabet.

view Astrstr mm pat str : astr_mismatch -> string -> string -> bool
    approximate pattern matching [22]: match pat and str with mismatch mm. The type astr_mismatch is explained in Figure 1.

view AAindex ac str : string -> string -> (float array)
    convert str to an array of float according to the amino acid index ac. ac is an accession id of an entry in the AAindex database [7]. Each entry in the database represents some biochemical property of amino acids, such as volume, hydropathy, etc., represented as a mapping of char -> float.

view Average v : float array -> float
    the average of the values in v

view GT x y : 'a -> 'a -> bool
    greater than

view And x y : bool -> bool -> bool
    Boolean 'and'

Fig. 3. View functions used in the view design to distinguish protein sorting signals.

above, we obtain a function of type string->bool. The programmer need not worry about keeping track of the meaning of each function, because the representation can be consulted with the vmatch statement when needed. We apply all the protein sequences to this function, and calculate the score of the function as a classifier of a certain signal. The functions with the best scores are selected. View design h1 looks for a pattern over a sequence converted by a classification of the alphabet [19]. We hoped to find some kind of structural similarity of the signals with this design, but we could not find satisfactory parameters which would let h1 predict the signals accurately. Next, we designed a new view h2 which uses the AAindex database [7], this time looking for characteristics of the amino acid composition of a sequence segment. This turned out to be very effective, especially for the SP set, and was used to distinguish SP from the other signals. For the remaining signals, we tried combining h1 and h2 into h3. This proved to be useful for distinguishing the "Other" set (those which do not have N-terminal signals) from mTP and cTP. We can see that the functional nature of VML enables the easy construction of the view designs. By combining the views and parameters thus obtained for each signal type into a single decision list, we were able to create a rule which competes fairly well with TargetP in terms of prediction accuracy. The scores of a 5-fold cross-validation are shown in Table 1. The knowledge encapsulated in the view design


Table 1. The prediction accuracy of the final hypothesis (scores of TargetP [3] in parentheses). The score is the Matthews correlation coefficient (MCC) [15], defined by

    MCC = (tp × tn − fp × fn) / √((tp + fn)(tp + fp)(tn + fp)(tn + fn))

where tp, tn, fp, fn are the numbers of true positives, true negatives, false positives, and false negatives, respectively.

True      # of   Predicted category                           Sensitivity    MCC
category  seqs   cTP         mTP         SP          Other
cTP       141    96 (120)    26 (14)     0 (2)       19 (5)   0.68 (0.85)    0.64 (0.72)
mTP       368    25 (41)     309 (300)   4 (9)       30 (18)  0.84 (0.82)    0.75 (0.77)
SP        269    6 (2)       9 (7)       244 (245)   10 (15)  0.91 (0.91)    0.92 (0.90)
Other     162    8 (10)      17 (13)     2 (2)       135 (137) 0.83 (0.85)   0.71 (0.77)
Specificity      0.71 (0.69) 0.86 (0.90) 0.98 (0.96) 0.70 (0.78)
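The MCC used as the score in Table 1 can be computed directly from the confusion-matrix counts; a small Python helper (ours, for checking the numbers, not from the paper) might look like:

```python
import math

# Matthews correlation coefficient, as defined for Table 1.
def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fn) * (tp + fp) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom

# Counts derived from the cTP row of Table 1: tp = 96,
# fn = 26 + 0 + 19 = 45, fp = 25 + 6 + 8 = 39,
# tn = 940 - 96 - 45 - 39 = 760.
```

With these counts, mcc(96, 760, 39, 45) rounds to 0.64, matching the cTP entry in the MCC column.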

was consistent with widely believed (but vague) characteristics of each signal, and the expert was surprised that such a simple rule could describe the sorting signals with such accuracy. A system called iPSORT was built based on these rules, and an experimental web service is provided at the iPSORT web-site [26]. vmatch can be useful in the following situation: after obtaining a good view of design h2, we may want to see if we can find a good view of design h1 that uses the same substring segment as h2. This can be regarded as first looking for a segment which has a distinct amino acid composition, and then looking closer at this segment to see if structural characteristics of the segment can be found. This function can be written as:

# let newh f = vmatch f with
    GT (Average (AAindex _ (Substring p l _))) _ ->
      fun pat mm ind str -> h1 pat mm ind p l str;;
val newh : '_a -> string -> astr_mismatch -> (char -> char)
           -> string -> bool = <fun>

If the representation of a function h were, for example:

(str . GT (Average (AAindex ind (Substring 3 16 str))) 3.5)

then the representation of (newh h) would become:

(pat mm ind str . (Astrstr mm pat (AlphInd ind (Substring 3 16 str))))

representing a function of design h1, but using the parameters that h of view design h2 supplied for Substring. Again, we need not worry about explicitly keeping track of what values were applied to h2 to obtain h, since they are implicitly remembered and can be extracted with the vmatch keyword. Thus, we have seen that the design and manipulation of views can be done easily in VML, which assists the trial-and-error cycle of the experiments.
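The "single decision list" that combines the per-signal views can be sketched in Python as follows (a hypothetical illustration; the predicates and their ordering below are placeholders, not the actual iPSORT rule):

```python
# Hypothetical sketch of combining per-signal binary views into a decision
# list: the first rule whose predicate fires determines the predicted label.

def make_decision_list(rules, default):
    """rules: list of (predicate, label) pairs tried in order."""
    def classify(seq):
        for predicate, label in rules:
            if predicate(seq):
                return label
        return default
    return classify

# Placeholder predicates standing in for tuned h1/h2/h3 views.
classify = make_decision_list(
    [(lambda s: s.startswith("M") and "LL" in s[:10], "SP"),
     (lambda s: s[:10].count("R") >= 2, "mTP")],
    "Other")
```

A sequence falls through the list until some view fires; the final default plays the role of the "Other" class.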

5 Discussion

5.1 Implementation

In the C++ library, each view function is encapsulated in an instance of a class derived from the view class. The view class has a method for interpreting the value for an entity. Constructors of the various derived classes can take other instances of view classes as arguments. The view class also has a method which returns the view classes that were used to build the instance (a facility for simulating vmatch, i.e., for decomposing functions). However, after spending much time in development, we came to feel that C++ was error-prone and rather tedious for coding the view functions. Also, although the view classes encapsulate functions, the functions themselves could not easily be reused for other purposes. On the points mentioned above, we can safely say that VML is advantageous over our C++ library. However, an efficient implementation of VML, which is beyond the scope of this paper, is a topic of interest. The implementation given in [20] uses the Camlp4 preprocessor (and printer) [23], which converts a VML program (with a different syntax from this paper) into an OCaml program, and it may be the case that there are optimizations that could be performed by a dedicated compiler.

5.2 Conclusion

We presented the concept of a language called VML, as an extension of the Objective Caml language. The advantages of VML are: 1) Since VML is a functional language, the composition and application of views can be done in a natural way, compared to imperative languages. 2) By defining the unit of knowledge as views, the programmer does not need to explicitly keep track of how each individual view was designed (i.e., manage data structures to remember the sets of parameters). 3) The programmer can take "parts" of a good view, which may only be determined at runtime, and apply them to another view (the example in Section 4.2). 4) In an interactive interface (i.e., a VML interactive interpreter), the user can compose and decompose views and view designs, and apply them to data. When the user accidentally stumbles upon an interesting view, he/she can retrieve its design immediately. Using VML, we modeled and described successful knowledge discovery tasks which we have actually experienced, and showed that the points noted above can lighten the burden on the programmer and, as a result, speed up the iterative trial-and-error cycle of computational knowledge discovery processes.

6 Acknowledgements

The authors would like to thank Eijiro Sumii of the University of Tokyo for his most valuable comments and suggestions.


This research was supported in part by a Grant-in-Aid for Encouragement of Young Scientists and a Grant-in-Aid for Scientific Research on Priority Areas (C) "Genome Information Science" from the Ministry of Education, Culture, Sports, Science and Technology of Japan, and by the Research for the Future Program of the Japan Society for the Promotion of Science.

References

[1] H. Bannai, Y. Tamada, O. Maruyama, K. Nakai, and S. Miyano. Views: Fundamental building blocks in the process of knowledge discovery. In Proceedings of the 14th International FLAIRS Conference, pages 233-238. AAAI Press, 2001.
[2] P. Cheeseman and J. Stutz. Bayesian classification (AutoClass): Theory and results. In Advances in Knowledge Discovery and Data Mining. AAAI Press/MIT Press, 1996.
[3] O. Emanuelsson, H. Nielsen, S. Brunak, and G. von Heijne. Predicting subcellular localization of proteins based on their N-terminal amino acid sequence. J. Mol. Biol., 300(4):1005-1016, July 2000.
[4] U. Fayyad, G. Piatetsky-Shapiro, and P. Smyth. From data mining to knowledge discovery in databases. AI Magazine, 17(3):37-54, 1996.
[5] J. D. Helmann. Compilation and analysis of Bacillus subtilis σ^A-dependent promoter sequences: evidence for extended contact between RNA polymerase and upstream promoter DNA. Nucleic Acids Res., 23(13):2351-2360, 1995.
[6] J. Hughes. Why functional programming matters. Computer Journal, 32(2):98-107, 1989.
[7] S. Kawashima and M. Kanehisa. AAindex: Amino acid index database. Nucleic Acids Res., 28(1):374, 2000.
[8] T. Khabaza and C. Shearer. Data mining with Clementine. IEE Colloquium on 'Knowledge Discovery in Databases', 1995. IEE Digest No. 1995/021(B), London.
[9] P. Langley. The computer-aided discovery of scientific knowledge. In Lecture Notes in Artificial Intelligence, volume 1532, pages 25-39, 1998.
[10] P. Langley and H. A. Simon. Applications of machine learning and rule induction. Communications of the ACM, 38(11):54-64, 1995.
[11] X. Liu, D. L. Brutlag, and J. S. Liu. BioProspector: Discovering conserved DNA motifs in upstream regulatory regions of co-expressed genes. In Pacific Symposium on Biocomputing 2001, volume 6, pages 127-138, 2001.
[12] O. Maruyama and S. Miyano. Design aspects of discovery systems. IEICE Transactions on Information and Systems, E83-D:61-70, 2000.
[13] O. Maruyama, T. Uchida, T. Shoudai, and S. Miyano. Toward genomic hypothesis creator: View designer for discovery. In Discovery Science, volume 1532 of Lecture Notes in Artificial Intelligence, pages 105-116, 1998.
[14] O. Maruyama, T. Uchida, K. L. Sim, and S. Miyano. Designing views in HypothesisCreator: System for assisting in discovery. In Discovery Science, volume 1721 of Lecture Notes in Artificial Intelligence, pages 115-127, 1999.
[15] B. W. Matthews. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim. Biophys. Acta, 405:442-451, 1975.
[16] R. Milner, M. Tofte, R. Harper, and D. MacQueen. The Definition of Standard ML (Revised). MIT Press, 1997.
[17] K. Nakai. Protein sorting signals and prediction of subcellular localization. In P. Bork, editor, Analysis of Amino Acid Sequences, volume 54 of Advances in Protein Chemistry, pages 277-344. Academic Press, San Diego, 2000.
[18] J. Quinlan. Induction of decision trees. Machine Learning, 1:81-106, 1986.
[19] S. Shimozono. Alphabet indexing for approximating features of symbols. Theor. Comput. Sci., 210:245-260, 1999.
[20] E. Sumii and H. Bannai. VMlambda: A functional calculus for scientific discovery. http://www.yl.is.s.u-tokyo.ac.jp/~sumii/pub/, 2001.
[21] S. Wrobel, D. Wettschereck, E. Sommer, and W. Emde. Extensibility in data mining systems. In Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD-96), pages 214-219, 1996.
[22] S. Wu and U. Manber. Fast text searching allowing errors. Commun. ACM, 35:83-91, 1992.
[23] Camlp4 - http://caml.inria.fr/camlp4/.
[24] GenBank - http://www.ncbi.nlm.nih.gov/Genbank.
[25] HypothesisCreator - http://www.hypothesiscreator.net/.
[26] iPSORT - http://www.hypothesiscreator.net/iPSORT/.
[27] Objective Caml - http://caml.inria.fr/ocaml/.
[28] TargetP - http://www.cbs.dtu.dk/services/TargetP/.

Robot Baby 2001

Paul R. Cohen (1), Tim Oates (2), Niall Adams (3), and Carole R. Beal (4)

(1) Department of Computer Science, University of Massachusetts, Amherst
    cohen@cs.umass.edu
(2) Department of Computer Science, University of Maryland, Baltimore County
    oates@cs.umbc.edu
(3) Department of Mathematics, Imperial College, London
    n.adams@ic.ac.uk
(4) Department of Psychology, University of Massachusetts, Amherst
    cbeal@psych.umass.edu

Abstract. In this paper we claim that meaningful representations can be learned by programs, although today they are almost always designed by skilled engineers. We discuss several kinds of meaning that representations might have, and focus on a functional notion of meaning as appropriate for programs to learn. Speciﬁcally, a representation is meaningful if it incorporates an indicator of external conditions and if the indicator relation informs action. We survey methods for inducing kinds of representations we call structural abstractions. Prototypes of sensory time series are one kind of structural abstraction, and though they are not denoting or compositional, they do support planning. Deictic representations of objects and prototype representations of words enable a program to learn the denotational meanings of words. Finally, we discuss two algorithms designed to ﬁnd the macroscopic structure of episodes in a domain-independent way.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, p. 29, 2001. c Springer-Verlag Berlin Heidelberg 2001

Inventing Discovery Tools: Combining Information Visualization with Data Mining

Ben Shneiderman
Department of Computer Science, Human-Computer Interaction Laboratory,
Institute for Advanced Computer Studies, and Institute for Systems Research,
University of Maryland, College Park, MD 20742 USA
ben@cs.umd.edu

Abstract. The growing use of information visualization tools and data mining algorithms stems from two separate lines of research. Information visualization researchers believe in the importance of giving users an overview and insight into the data distributions, while data mining researchers believe that statistical algorithms and machine learning can be relied on to find the interesting patterns. This paper discusses two issues that influence design of discovery tools: statistical algorithms vs. visual data presentation, and hypothesis testing vs. exploratory data analysis. I claim that a combined approach could lead to novel discovery tools that preserve user control, enable more effective exploration, and promote responsibility.

1 Introduction

Genomics researchers, financial analysts, and social scientists hunt for patterns in vast data warehouses using increasingly powerful software tools. These tools are based on emerging concepts such as knowledge discovery, data mining, and information visualization. They also employ specialized methods such as neural networks, decision trees, principal components analysis, and a hundred others. Computers have made it possible to conduct complex statistical analyses that would have been prohibitive to carry out in the past. However, the dangers of using complex computer software grow when user comprehension and control are diminished. Therefore, it seems useful to reflect on the underlying philosophy and appropriateness of the diverse methods that have been proposed. This could lead to a better understanding of when to use given tools and methods, as well as contribute to the invention of new discovery tools and the refinement of existing ones. Each tool conveys an outlook about the importance of human initiative and control as contrasted with machine intelligence and power [16]. The conclusion deals with the central issue of responsibility for failures and successes. Many issues influence the design of discovery tools, but I focus on two: statistical algorithms vs. visual data presentation, and hypothesis testing vs. exploratory data analysis.

Keynote for Discovery Science 2001 Conference, November 25-28, 2001, Washington, DC. Also to appear in Information Visualization, new journal by Palgrave/MacMillan.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 17–28, 2001. c Springer-Verlag Berlin Heidelberg 2001

2 Statistical Algorithms vs. Visual Data Presentation

Early efforts to summarize data generated means, medians, standard deviations, and ranges. These numbers were helpful because their compactness, relative to the full data set, and their clarity supported understanding, comparisons, and decision making. Summary statistics appealed to the rational thinkers who were attracted to the objective nature of data comparisons that avoided human subjectivity. However, they also hid interesting features such as whether distributions were uniform, normal, skewed, bi-modal, or distorted by outliers. A remedy to these problems was the presentation of data as a visual plot so interesting features could be seen by a human researcher. The invention of time-series plots and statistical graphics for economic data is usually attributed to William Playfair (1759-1823), who published The Commercial and Political Atlas in 1786 in London. Visual presentations can be very powerful in revealing trends, highlighting outliers, showing clusters, and exposing gaps. Visual presentations can give users a richer sense of what is happening in the data and suggest possible directions for further study. Visual presentations speak to the intuitive side and the sense-making spirit that is part of exploration. Of course, visual presentations have their limitations in terms of dealing with large data sets, occlusion of data, disorientation, and misinterpretation. By early in the 20th century, statistical approaches, encouraged by the Age of Rationalism, became prevalent in many scientific domains. Ronald Fisher (1890-1962) developed modern statistical methods for experimental designs related to his extensive agricultural studies. His development of analysis of variance for the design of factorial experiments [7] helped advance scientific research in many fields [12]. His approaches are still widely used in cognitive psychology and have influenced most experimental sciences. The appearance of computers heightened the importance of this issue.
Computers can be used to carry out far more complex statistical algorithms, and they can also be used to generate rich visual, animated, and user-controlled displays. Typical presentation of statistical data mining results is by brief summary tables, induced rules, or decision trees. Typical visual data presentations show data-rich histograms, scattergrams, heatmaps, treemaps, dendrograms, parallel coordinates, etc. in multiple coordinated windows that support user-controlled exploration with dynamic queries for filtering (Fig. 1). Comparative studies of statistical summaries and visual presentations demonstrate the importance of user familiarity and training with each approach, and the influence of specific tasks. Of course, statistical summaries and visual presentations can both be misleading or confusing.

Fig. 1. Spotfire (www.spotfire.com) display of chemical elements showing the strong correlation between ionization energy and electronegativity, and two dramatic outliers: radon and helium.


An example may help clarify the distinction. Promoters of statistical methods may use linear correlation coefficients to detect relationships between variables, which works wonderfully when there is a linear relationship between variables and when the data is free from anomalies. However, if the relationship is quadratic (or exponential, sinusoidal, etc.) a linear algorithm may fail to detect the relationship. Similarly if there are data collection problems that add outliers or if there are discontinuities over the range (e.g. freezing or boiling points of water), then linear correlation may fail. A visual presentation is more likely to help researchers find such phenomena and suggest richer hypotheses.
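The point about linear correlation missing a quadratic relationship is easy to check numerically; here is a small self-contained Python illustration (our example, not from the paper):

```python
import math

# Pearson's linear correlation coefficient.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

xs = [-3, -2, -1, 0, 1, 2, 3]
linear = [2 * x + 1 for x in xs]     # perfect linear relationship
quadratic = [x * x for x in xs]      # perfect quadratic relationship
```

Here pearson(xs, linear) is 1.0, while pearson(xs, quadratic) is 0.0: the deterministic quadratic dependence is invisible to the linear coefficient, whereas a scatterplot would reveal it instantly.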

3 Hypothesis Testing vs. Exploratory Data Analysis

Fisher's approach not only promoted statistical methods over visual presentations, but also strongly endorsed theory-driven hypothesis-testing research over casual observation and exploratory data analysis. This philosophical strand goes back to Francis Bacon (1561-1626) and later to John Herschel's 1830 A Preliminary Discourse on the Study of Natural Philosophy. They are usually credited with influencing modern notions of scientific methods based on rules of induction and the hypothetico-deductive method. Believers in scientific methods typically see controlled experiments as the fast path to progress, even though their use of the reductionist approach to test one variable at a time can be disconcertingly slow. Fisher's invention of factorial experiments helped make controlled experimentation more efficient. Advocates of the reductionist approach and controlled experimentation argue that large benefits come when researchers are forced to clearly state their hypotheses in advance of data collection. This enables them to limit the number of independent variables and to measure a small number of dependent variables. They believe that the courageous act of stating hypotheses in advance sharpens thinking, leads to more parsimonious data collection, and encourages precise measurement. Their goals are to understand causal relationships, to produce replicable results, and to emerge with generalizable insights. Critics complain that the reductionist approach, with its laboratory conditions to ensure control, is too far removed from reality (not situated, and therefore stripped of context) and may ignore important variables that affect outcomes. They also argue that by forcing researchers to state an initial hypothesis, their observation will be biased towards finding evidence to support their hypothesis, and they will ignore interesting phenomena that are not related to their dependent variables.
On the other side of this interesting debate are advocates of exploratory data analysis who believe that great gains can be made by collecting voluminous data sets and then searching for interesting patterns. They contend that statistical analyses and machine learning techniques have matured enough to reveal complex relationships that were not anticipated by researchers. They believe that a priori hypotheses limit research and are no longer needed because of the capacity of computers to collect and analyze voluminous data. Skeptics worry that any given set of data, no matter how large, may still be a special case, thereby undermining the generalizability of the results. They also question whether detection of strong statistical relationships can ever lead to an understanding of cause and effect. They declare that correlation does not imply causation.

Inventing Discovery Tools

21

Once again, an example may clarify this issue. If a semiconductor fabrication facility is generating a high rate of failures, promoters of hypothesis testing might list the possible causes, such as contaminants, excessive heat, or too-rapid cooling. They might seek evidence to support these hypotheses, and perhaps conduct trial runs with the equipment to see if they could reproduce the problem. Promoters of exploratory data analysis might instead collect existing data from the past year of production under differing conditions and then run data mining tools against these data sets to discover correlates of high rates of failure. Of course, an experienced supervisor may blend these approaches, gathering exploratory hypotheses from the existing data and then conducting confirmatory tests.

4 The New Paradigms

The emergence of the computer has shaken the methodological edifice. Complex statistical calculations and animated visualizations have become feasible. Elaborate controlled experiments can be run hundreds of times, and exploratory data analysis has become widespread. Devotees of hypothesis testing have new tools to collect data and test their hypotheses. T-tests and analysis of variance (ANOVA) have been joined by linear and non-linear regression, complex forecasting methods, and discriminant analysis. Those who believe in exploratory data analysis have even more new tools, such as neural networks, rule induction, a hundred forms of automated clustering, and still more machine learning methods. These are often covered in the rapidly growing academic discipline of data mining [6,8]. Witten and Frank define data mining as "the extraction of implicit, previously unknown, and potentially useful information from data." They caution that "exaggerated reports appear of the secrets that can be uncovered by setting learning algorithms loose on oceans of data. But there is no magic in machine learning, no hidden power, no alchemy. Instead there is an identifiable body of simple and practical techniques that can often extract useful information from raw data." [19] Similarly, those who believe in data or information visualization are having a great time, as the computer enables rapid display of large data sets with rich user control panels to support exploration [5]. Users can manipulate up to a million data items with 100-millisecond update of displays that present color-coded, size-coded markers for each item. With the right coding, human pre-attentive perceptual skills enable users to recognize patterns, spot outliers, identify gaps, and find clusters in a few hundred milliseconds.
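As a reminder of what two of the classical tools named above actually compute, here is a minimal standard-library sketch of a two-sample t statistic (Welch's form) and a least-squares regression line; the sample numbers are invented.

```python
# Illustrative sketch (invented data): a two-sample t statistic and a
# least-squares line, computed from first principles, stdlib only.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / (variance(a) / na + variance(b) / nb) ** 0.5

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = slope*x + intercept."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

control   = [4.1, 3.9, 4.3, 4.0, 4.2]
treatment = [4.9, 5.1, 4.7, 5.0, 5.3]
print(round(welch_t(treatment, control), 2))  # large |t| suggests a real difference

slope, intercept = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(round(slope, 2), round(intercept, 2))
```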
When data sets grow past a million items and cannot be easily seen on a computer display, users can extract relevant subsets, aggregate data into meaningful units, or randomly sample to create a manageable data set. The commercial success of tools such as SAS JMP (www.sas.com), SPSS Diamond (www.spss.com), and Spotfire (www.spotfire.com) (Fig. 1), especially for pharmaceutical drug discovery and genomic data analysis, demonstrates the attraction of visualization. Other notable products include Inxight’s Eureka (www.inxight.com) for multidimensional tabular data and Visual Insights’ eBizinsights (www.visualinsights.com) for web log visualization. Spence characterizes information visualization with this vignette: "You are the owner of some numerical data which, you feel, is hiding some fundamental relation...you then glance at some visual presentation of that data and exclaim ’Ah ha! - now I understand.’"

22

B. Shneiderman

But Spence [13] also cautions that "information visualization is characterized by so many beautiful images that there is a danger of adopting a ’Gee Whiz’ approach to its presentation."

5 A Spectrum of Discovery Tools

The happy resolution to these debates is to take the best insights from both extremes and create novel discovery tools for many different users and many different domains. Skilled problem solvers often begin with observation, which leads to hypothesis-testing experiments. Alternatively, they may have a precise hypothesis, but if they are careful observers during a controlled experiment, they may spot anomalies that lead to new hypotheses. Skilled problem solvers often combine statistical tests and visual presentation. A visual presentation of data may identify two clusters whose separate analysis can lead to useful results when a combined analysis would fail. Similarly, a visual presentation might show a parabola, indicating a quadratic relationship between variables, where a linear correlation test would find no relationship. Devotees of statistical methods often find that presenting their results visually helps to explain them and suggests further statistical tests. The process of combining statistical methods with visualization tools will take some time because of the conflicting philosophies of their promoters. The famed statistician John Tukey (1915-2000) quickly recognized the power of combined approaches [14]: "As yet I know of no person or group that is taking nearly adequate advantage of the graphical potentialities of the computer... In exploration they are going to be the data analyst’s greatest single resource." The combined strength of visual data mining would enrich both approaches and enable more successful solutions [17]. However, most books on data mining offer only a brief discussion of information visualization, and vice versa. Some researchers have begun to implement interactive visual approaches to data mining [10,2,15]. Accelerating the process of combining hypothesis testing with exploratory data analysis will also bring substantial benefits.
New statistical tests and metrics for uniformity of distributions, outlier-ness, or cluster-ness will be helpful, especially if visual interfaces enable users to examine the distributions rapidly, change some parameters and get fresh metrics and corresponding visualizations.
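As a sketch of what such metrics might look like, the following computes two simple "interesting-ness" scores over a 1-dimensional sample: a moment-based skewness and per-point z-scores for outlier-ness. The formulas are standard; the data set and the |z| > 2 threshold are invented for illustration.

```python
# Hedged sketch of distribution metrics of the kind proposed here.
# Standard formulas; invented data and threshold.
from statistics import mean, pstdev

def skewness(xs):
    """Moment-based sample skewness; positive means right-skewed."""
    m, s = mean(xs), pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

def outlier_scores(xs):
    """z-score of each point; large |z| flags a candidate outlier."""
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

data = [10, 11, 9, 10, 12, 11, 10, 35]   # one suspicious value
print(round(skewness(data), 2))          # strongly positive here
flagged = [x for x, z in zip(data, outlier_scores(data)) if abs(z) > 2]
print(flagged)
```

A visual interface could recompute these scores as the user drags a threshold slider, re-ranking the distributions on each change.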

6 Case Studies of Combining Visualization with Data Mining

One way to combine visual techniques with automated data mining is to provide users with tools that support both components. Users can then explore data with direct manipulation user interfaces that control information visualization components, and apply statistical tests when something interesting appears. Alternatively, they can use data mining as a first pass and then examine the results visually. Direct manipulation strategies with user-controlled visualizations start with a visual presentation of the world of action, which includes the objects of interest and the actions upon them. Early examples included air traffic control and video games. In graphical user interfaces, direct manipulation means dragging files to folders or to the trashcan for deletion. Rapid incremental and reversible actions


encourage exploration and provide continuous feedback, so users can see what they are doing; good examples are moving or resizing a window. Modern applications of direct manipulation principles have led to information visualization tools that show hundreds of thousands of items on the screen at once. Sliders, check boxes, and radio buttons allow users to filter items dynamically, with display updates in less than 100 milliseconds.
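The core of the dynamic-query idea is simple: widget positions become range predicates, and moving a widget just re-runs the filter. The field names and tiny dataset below are invented for illustration.

```python
# Minimal sketch of dynamic-query filtering: sliders map to (lo, hi)
# ranges, check boxes to required flags. Data is invented.
homes = [
    {"price": 250_000, "bedrooms": 3, "fireplace": True},
    {"price": 450_000, "bedrooms": 4, "fireplace": False},
    {"price": 180_000, "bedrooms": 2, "fireplace": True},
]

def dynamic_query(items, ranges, flags):
    """ranges: field -> (lo, hi), inclusive; flags: field -> required value."""
    return [it for it in items
            if all(lo <= it[f] <= hi for f, (lo, hi) in ranges.items())
            and all(it[f] == v for f, v in flags.items())]

# The user drags the price slider down to 300,000 and checks "fireplace";
# every slider movement immediately re-runs this filter.
hits = dynamic_query(homes,
                     ranges={"price": (0, 300_000), "bedrooms": (2, 4)},
                     flags={"fireplace": True})
print(len(hits))
```

In a real tool the filter runs over an indexed data structure so the display can update within the 100-millisecond budget mentioned above.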

Fig. 2. Dynamic Queries HomeFinder with sliders to control the display of markers indicating homes for sale. Users can specify distances to markers, bedrooms, cost, type of house, and features [18].

Early information visualizations included the Dynamic Queries HomeFinder (Fig. 2), which allowed users to select from a database of 1100 homes using sliders on home price, number of bedrooms, and distance from markers, plus buttons for other features such as fireplaces, central air conditioning, etc. [18]. This led to the FilmFinder [1] and then to the successful commercial product, Spotfire (Fig. 1). One Spotfire feature is the View Tip, which uses statistical data mining methods to suggest interesting pair-wise relationships, ranked by linear correlation coefficients (Fig. 3). The View Tip might be improved by giving users more control over the specification of interesting-ness that ranks the outcomes. While some users may be interested in high linear correlation coefficients, others may be interested in low correlation coefficients, or might prefer rankings by quadratic,


exponential, sinusoidal, or other correlations. Other choices might be to rank distributions by existing metrics such as skewness (negative or positive) or outlier-ness [3]. New metrics for degree of uniformity, cluster-ness, or gap-ness are excellent candidates for research. We are in the process of building a control panel that allows users to specify the distributions they are seeking by adjusting sliders and seeing how the rankings shift. Five algorithms have been written for 1-dimensional data and one for 2-dimensional data; more will be prepared soon (Fig. 4).
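A View-Tip-style ranker can be sketched as follows: score every pair of numeric columns by the magnitude of its linear correlation and present the pairs in descending order. The column names and values are invented, and this is not Spotfire's actual implementation, only an illustration of the ranking idea.

```python
# Sketch of ranking attribute pairs by |linear correlation|,
# View-Tip style. Invented columns; not Spotfire's actual code.
from itertools import combinations
from statistics import mean

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

columns = {
    "at_bats": [300, 520, 610, 150, 480],
    "hits":    [80, 150, 180, 40, 140],
    "age":     [24, 31, 28, 35, 27],
}

ranked = sorted(combinations(columns, 2),
                key=lambda p: abs(pearson_r(columns[p[0]], columns[p[1]])),
                reverse=True)
print(ranked[0])  # the suggested first scatter plot
```

Replacing the scoring function (e.g., with a skewness or cluster-ness metric chosen from a control panel) is exactly the kind of user control over interesting-ness proposed above.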

Fig. 3. Spotfire View Tip panel with ranking of possible 2-dimensional scatter plots in descending order by the strength of linear correlation. Here the strong correlation in baseball statistics is shown between Career At Bats and Career Hits. Notice the single outlier in the upper right corner, representing Pete Rose’s long successful career.

A second case study is our work with time-series pattern finding [4]. Current tools for stock market or genomic expression data from DNA microarrays rely on clustering in multidimensional space, but a more user-controlled specification tool might enable analysts to specify carefully what they want [9]. Our tool, TimeSearcher, relies on query specification by drawing boxes to indicate what ranges of values are desired for each time period (Fig. 5). This approach has more of the spirit of hypothesis testing: while it takes somewhat greater effort, it gives users greater control over the query results. Users can move the boxes around in a direct manipulation style and immediately see the new set of results. The opportunity for rapid exploration is dramatic, and users can immediately see where matches are frequent and where they are rare.
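The timebox query itself reduces to a simple predicate: each box constrains the series to a value range over a span of time steps, and a series matches only if it stays inside every box. The data and box coordinates below are invented; TimeSearcher's actual implementation is not shown.

```python
# Rough sketch of TimeSearcher-style timebox matching. Invented data.
def matches(series, timeboxes):
    """timeboxes: list of ((t_start, t_end), (low, high)), inclusive."""
    return all(low <= series[t] <= high
               for (t0, t1), (low, high) in timeboxes
               for t in range(t0, t1 + 1))

stocks = {
    "AAA": [10, 12, 15, 18, 22, 25],
    "BBB": [10, 11, 10, 9, 8, 7],
    "CCC": [20, 18, 16, 17, 21, 24],
}

# Two boxes drawn by the user: start low, end high.
boxes = [((0, 1), (8, 13)), ((4, 5), (20, 30))]
hits = [name for name, s in stocks.items() if matches(s, boxes)]
print(hits)
```

Dragging a box simply changes its tuple and re-runs the filter, which is what makes the immediate-feedback exploration described above feasible.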


Fig. 4. Prototype panel to enable user specification of 1-dimensional distribution requirements. The user has chosen Cluster Finder II in the Algorithm box at the top and has specified the desired cluster tightness in the middle section. The ranking of the Results at the bottom lists all distributions according to the number of identifiable clusters. The M93-007 data set is second in the Results list, with four identifiable clusters. (Implemented by Kartik Parija and Jaime Spacco.)


Fig. 5. TimeSearcher allows users to specify ranges for time-series data and immediately see the result set. In this case two timeboxes have been drawn and 5 of the 225 stocks match this pattern [9].

7 Conclusion and Recommendations

Computational tools for discovery, such as data mining and information visualization, have advanced dramatically in recent years. Unfortunately, these tools have been developed by largely separate communities with different philosophies. Data mining and machine learning researchers tend to believe in the power of their statistical methods to identify interesting patterns without human intervention. Information visualization researchers tend to believe in the importance of user control by domain experts to produce useful visual presentations that provide unanticipated insights.

Recommendation 1: integrate data mining and information visualization to invent discovery tools. By adding visualization to data mining (such as presenting scattergrams to accompany induced rules), users will develop a deeper understanding of their data. By adding data mining to visualization (such as the Spotfire View Tip), users will be able to specify what they seek. Both communities of researchers emphasize exploratory data analysis over hypothesis testing. A middle ground of enabling users to structure their exploratory data analysis by applying their domain knowledge (such as limiting data mining algorithms to specific range values) may also be a source of innovative tools.

Recommendation 2: allow users to specify what they are seeking and what they find interesting. By allowing data mining and information visualization users to constrain and direct their tools, they may produce more rapid innovation. As in the Spotfire View Tip example, users could be given a control panel to indicate what kind of correlations or outliers they are looking for. As users test their hypotheses against the data, they find dead ends and discover new possibilities. Since discovery is a process, not a point event, keeping a history of user actions has a high payoff. Users should be able to save their state (data items and control panel settings), back up to previous states, and send their history to others.
Recommendation 3: recognize that users are situated in a social context. Researchers and practitioners rarely work alone. They need to gather data from multiple sources, consult with domain experts, pass on partial results to others, and then present their findings to colleagues and decision makers. Successful tools enable users to exchange data, ask for consultations from peers and mentors, and report results to others conveniently.

Recommendation 4: respect human responsibility when designing discovery tools. If tools are comprehensible, predictable, and controllable, then users can develop mastery over their tools and experience satisfaction in accomplishing their work. They want to be able to take pride in their successes, and they should be responsible for their failures. When tools become too complex or unpredictable, users will avoid them because the tools are out of their control. Users often perform better when they understand and control what the computer does [11]. If complex statistical algorithms or visual presentations are not well understood by users, they cannot act on the results with confidence. I believe that visibility of the statistical processes and outcomes minimizes the danger of misinterpretation and incorrect results. Comprehension of the algorithms behind the visualizations, and of the implications of layout, encourages effective usage that leads to successful discovery.


Acknowledgements. Thanks to Mary Czerwinski, Lindley Darden, Harry Hochheiser, Jenny Preece, and Ian Witten for comments on drafts.

References

1. Ahlberg, C. and Shneiderman, B., Visual information seeking: Tight coupling of dynamic query filters with starfield displays, Proc. ACM CHI ’94 Human Factors in Computing Systems, ACM Press, New York (April 1994), 313-317 + color plates.
2. Ankerst, M., Ester, M., and Kriegel, H.-P., Towards an effective cooperation of the user and the computer for classification, Proc. 6th ACM SIGKDD International Conf. on Knowledge Discovery and Data Mining, ACM, New York (2000), 179-188.
3. Barnett, V. and Lewis, T., Outliers in Statistical Data, 3rd edition, John Wiley & Sons (April 1994).
4. Bradley, E., Time-series analysis, in Berthold, M. and Hand, D. (Editors), Intelligent Data Analysis: An Introduction, Springer (1999).
5. Card, S., Mackinlay, J., and Shneiderman, B. (Editors), Readings in Information Visualization: Using Vision to Think, Morgan Kaufmann Publishers, San Francisco, CA (1999).
6. Fayyad, U., Piatetsky-Shapiro, G., Smyth, P., and Uthurusamy, R. (Editors), Advances in Knowledge Discovery and Data Mining, MIT Press, Cambridge, MA (1996).
7. Fisher, R.A., The Design of Experiments, Oliver and Boyd, Edinburgh (1935); 9th edition, Macmillan, New York (1971).
8. Han, J. and Kamber, M., Data Mining: Concepts and Techniques, Morgan Kaufmann Publishers, San Francisco (2000).
9. Hochheiser, H. and Shneiderman, B., Interactive exploration of time-series data, in Proc. Discovery Science, Springer (2001).
10. Hinneburg, A., Keim, D., and Wawryniuk, M., HD-Eye: Visual mining of high-dimensional data, IEEE Computer Graphics and Applications 19, 5 (Sept/Oct 1999), 22-31.
11. Koenemann, J. and Belkin, N., A case for interaction: A study of interactive information retrieval behavior and effectiveness, Proc. CHI ’96 Human Factors in Computing Systems, ACM Press, New York (1996), 205-212.
12. Montgomery, D., Design and Analysis of Experiments, 3rd edition, Wiley, New York (1991).
13. Spence, R., Information Visualization, Addison-Wesley, Essex, England (2001).
14. Tukey, J., The technical tools of statistics, American Statistician 19 (1965), 23-28. Available at: http://stat.bell-labs.com/who/tukey/memo/techtools.html
15. Ware, M., Frank, E., Holmes, G., Hall, M., and Witten, I. H., Interactive machine learning: Letting users build classifiers, International Journal of Human-Computer Studies (2001, in press).
16. Weizenbaum, J., Computer Power and Human Reason: From Judgment to Calculation, W. H. Freeman and Co., San Francisco, CA (1976).
17. Westphal, C. and Blaxton, T., Data Mining Solutions: Methods and Tools for Solving Real-World Problems, John Wiley & Sons (1999).
18. Williamson, C. and Shneiderman, B., The Dynamic HomeFinder: Evaluating dynamic queries in a real-estate information exploration system, Proc. ACM SIGIR ’92 Conference, ACM Press (1992), 338-346.
19. Witten, I. and Frank, E., Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations, Morgan Kaufmann Publishers, San Francisco (2000).

Queries Revisited Dana Angluin Computer Science Department Yale University P. O. Box 208285 New Haven, CT 06520-8285 angluin@cs.yale.edu

Abstract. We begin with a brief tutorial on the problem of learning a ﬁnite concept class over a ﬁnite domain using membership queries and/or equivalence queries. We then sketch general results on the number of queries needed to learn a class of concepts, focusing on the various notions of combinatorial dimension that have been employed, including the teaching dimension, the exclusion dimension, the extended teaching dimension, the ﬁngerprint dimension, the sample exclusion dimension, the Vapnik-Chervonenkis dimension, the abstract identiﬁcation dimension, and the general dimension.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, p. 16, 2001. © Springer-Verlag Berlin Heidelberg 2001

Discovering Mechanisms: A Computational Philosophy of Science Perspective Lindley Darden Department of Philosophy University of Maryland College Park, MD 20742 darden@carnap.umd.edu http://www.inform.umd.edu/PHIL/faculty/LDarden/

Abstract. A task in the philosophy of discovery is to ﬁnd reasoning strategies for discovery, which fall into three categories: strategies for generation, evaluation and revision. Because mechanisms are often what is discovered in biology, a new characterization of mechanism aids in their discovery. A computational system for discovering mechanisms is sketched, consisting of a simulator, a library of mechanism schemas and components, and a discoverer for generating, evaluating and revising proposed mechanism schemas. Revisions go through stages from how possibly to how plausibly to how actually.

1 Introduction

Philosophers of discovery look for reasoning strategies that can guide discovery. This work is in the framework of Herbert Simon’s (1997) view of discovery as problem solving. Given a problem to be solved, such as explaining a phenomenon, one goal is to find a mechanism that produces that phenomenon. For example, given the phenomenon of the production of a protein, the goal is to find the mechanism of protein synthesis. The task of the philosopher of discovery is to find reasoning strategies to guide such discoveries. Strategies are heuristics for problem solving; that is, they provide guidance but do not guarantee success. Discovery is not viewed as something that occurs in a single a-ha moment of insight. Instead, discovery is construed as a process that occurs over an extended period of time, going through cycles of generation, evaluation, and revision (Darden 1991). The history of science is a source of “compiled hindsight” (Darden 1987) about reasoning strategies for discovering mechanisms. This paper will use examples from the history of biology to illustrate general reasoning strategies for discovering mechanisms. Section 2 puts this work into the broader context of a matrix of biological knowledge. Section 3 discusses a new characterization of mechanism, based on an ontology of entities, properties, and activities. Section 4 outlines components of a mechanism discovery system, including a simulator, a library of mechanism designs and components, and a discoverer.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 3–15, 2001. © Springer-Verlag Berlin Heidelberg 2001

4

L. Darden

Fig. 1. Matrix of Biological Knowledge (a diagram relating knowledge bases & AI, databases, information storage & retrieval, laboratory and field biology data and knowledge, and physics and chemistry within the biomatrix)

2 Biomatrix

This work is situated in a larger framework. In the 1980s, Harold Morowitz (1985) chaired a National Academy of Sciences workshop on models in biology. As a result of that workshop, a society was formed with the name “Biomatrix: A Society for Biological Computation and Informatics” (Morowitz and Smith 1987). This society was ahead of its time; it has splintered into different groups, and its grand vision has yet to be realized. Nonetheless, its vision is worth revisiting in order to put the work discussed in this paper into a broader context. As Figure 1 shows, the biomatrix vision included relations among three areas: first, databases; second, information storage and retrieval by literature cataloging (e.g., Medline); and third, artificial intelligence and knowledge bases. Discovery science has worked in all three areas since the 1980s. Knowledge discovery in databases is a booming area (e.g., Piatetsky-Shapiro and Frawley, eds., 1991).

Discovering Mechanisms

5

Discovery using abstracts available from literature catalogues has been developed by Don Swanson (1990) and others. Discovery using knowledge-based systems is an active area, especially in computational biology. The meetings on Intelligent Systems in Molecular Biology and the International Society for Computational Biology arose from that part of the biomatrix. It is in the knowledge-based systems box that the present work falls. Connections between mechanism discovery and the database and information-retrieval areas will perhaps occur to the reader.

3 Mechanisms, Schemas, and Sketches

Often in biology, what is to be discovered is a mechanism. Physicists often aim to discover general laws, such as Newton’s laws of motion. However, few biological phenomena are best characterized by universal, mathematical laws (Beatty 1995). The field of molecular biology, for example, studies mechanisms, such as the mechanisms of DNA replication, protein synthesis, and gene regulation. The lively area of functional genomics is now attempting to discover the mechanisms in which the gene sequences act. Such mechanisms include gene expression, during both embryological development and normal gene activities in the adult. The field of biochemistry also studies mechanisms when it finds the activities that transform one stage in a pathway to the next, such as the enzymes, reactants, and products in the Krebs cycle that produces the energy molecule ATP. An important current scientific task is to connect genetic mechanisms studied by molecular biology with metabolic mechanisms studied by biochemistry. As that task is accomplished, science will have a unified picture of the mechanisms that carry out the two essential features of life according to Aristotle: reproduction and nutrition. Given this importance of mechanisms in biology, a correspondingly important task for discovery science is to find methods for discovering mechanisms. If the goal is to discover a mechanism, then the nature of that product shapes the process of discovery. A new characterization of mechanism aids the search for reasoning strategies to discover mechanisms. A mechanism is sought to explain how a phenomenon is produced (Machamer, Darden, Craver 2000), how some task is carried out (Bechtel and Richardson 1993), or how the mechanism as a whole behaves (Glennan 1996). Mechanisms may be characterized in the following way: Mechanisms are entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions (Machamer, Darden, Craver 2000, p. 3). Mechanisms are regular in that they usually work in the same way under the same conditions. The regularity is exhibited in the typical way that the mechanism runs from beginning to end; what makes it regular is the productive continuity between stages. Mechanisms exhibit productive continuity without gaps


from the set up to the termination conditions; that is, each stage gives rise to, allows, drives, or makes the next. The ontology proposed here consists of entities, properties, and activities. Mechanisms are composed of both entities (with their properties) and activities. Activities are the producers of change. Entities are the things that engage in activities. Activities require that entities have speciﬁc types of properties. For example, two entities, a DNA base and its complement, engage in the activity of hydrogen bonding because of their properties of geometric shape and weak polar charges. For a given scientiﬁc ﬁeld, there are typically entities and activities that are accepted as relatively fundamental or taken to be unproblematic for the purposes of a given scientist, research group, or ﬁeld. That is, descriptions of mechanisms in that ﬁeld typically bottom out somewhere. Bottoming out is relative: diﬀerent types of entities and activities are where a given ﬁeld stops when constructing its descriptions of mechanisms. In molecular biology, mechanisms typically bottom out in descriptions of the activities of cell organelles, such as the ribosome, and molecules, including macromolecules, smaller molecules, and ions. The most important kinds of activities in molecular biology are geometrico-mechanical and electro-chemical activities. An example of a geometrico-mechanical activity is the lock and key docking of an enzyme and its substrate. Electro-chemical activities include strong covalent bonding and weak hydrogen bonding. Entities and activities are interdependent (Machamer, Darden, Craver 2000, p. 6). For example, appropriate chemical valences are necessary for covalent bonding. Polar charges are necessary for hydrogen bonding. Appropriate shapes are necessary for lock and key docking. 
This interdependence of entities and activities allows one to reason about one, based on what is known or conjectured about the other, in each stage of the mechanism (Darden and Craver, in press). A mechanism schema is a truncated abstract description of a mechanism that can be filled with more specific descriptions of component entities and activities. An example is the following: DNA → RNA → protein. This is a diagram of the central dogma of molecular biology. It is a very abstract, schematic representation of the mechanism of protein synthesis. A schema may be even more abstract if it merely indicates functional roles played in the mechanism by fillers of a place in the schema (Craver 2001). Consider the schema DNA → template → protein. The schema term “template” indicates the functional role played by the intermediate between DNA and protein. Hypotheses about role-fillers changed during the incremental discovery of the mechanism of protein synthesis in the 1950s and 1960s. Thus, mechanism schemas are particularly good ways of representing functional roles. (For discussion of “local” and “integrated” functions and a less schematic way of representing them in a computational system, see Karp 2000.)
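The notions of schema, role slot, and instantiation can be rendered as a tiny data structure. This toy is not from the paper or from any existing system; all names are invented for illustration. An unfilled role corresponds to a black box in a mechanism sketch; filling every role yields an instantiated schema.

```python
# Toy rendering (invented) of mechanism schemas with fillable role slots.
class Schema:
    def __init__(self, *roles):
        self.roles = list(roles)   # e.g. "DNA", "template", "protein"
        self.fillers = {}          # role -> discovered entity

    def fill(self, role, entity):
        self.fillers[role] = entity

    def is_instantiated(self):
        return all(r in self.fillers for r in self.roles)

    def __str__(self):
        # unfilled roles print as <role>, marking black boxes in a sketch
        return " -> ".join(self.fillers.get(r, f"<{r}>") for r in self.roles)

protein_synthesis = Schema("DNA", "template", "protein")
protein_synthesis.fill("DNA", "DNA")
protein_synthesis.fill("protein", "protein")
print(protein_synthesis)           # the template slot is still a black box
protein_synthesis.fill("template", "messenger RNA")  # the 1950s-60s discovery
print(protein_synthesis.is_instantiated())
```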


Table 1. Constraints on the Organization of Mechanisms

Character of the phenomenon
Componency constraints: entities and activities; modules
Spatial constraints: compartmentalization; localization; connectivity; structure; orientation
Temporal constraints: order; rate; duration; frequency
Hierarchical constraints: integration of levels

(from Craver and Darden 2001)

Mechanism sketches are incomplete schemas. They contain black boxes, which cannot yet be filled with known components. Attempts to instantiate a sketch would leave a gap in the productive continuity; that is, knowledge of the needed particular entities and activities is missing. Thus, sketches indicate what needs to be discovered in order to find a mechanism schema. Once a schema is found and instantiated, a detailed description of a mechanism results. For example, a more detailed description of the protein synthesis mechanism (often depicted in diagrams) meets the constraints that any adequate description of a mechanism must satisfy. It shows how the phenomenon, the synthesis of a protein, is carried out by the operation of the mechanism. It depicts the entities (DNA, RNA, and amino acids) as well as, implicitly, the activities. Hydrogen bonding is the activity operating when messenger RNA is copied from DNA. There is a geometrico-mechanical docking of the messenger RNA and the ribosome, a particle in the cytoplasm. Hydrogen bonding again occurs as the codons on messenger RNA bond to the anticodons on transfer RNAs carrying amino acids. Finally, covalent bonding is the activity that links the amino acids together in the protein. Good mechanism descriptions show the spatial relations of the components and the temporal order of the stages. A detailed description of a mechanism satisfies several general constraints. (They are listed in Table 1 and indicated here by italics.) There is a phenomenon that the mechanism, when working, produces, for example, the synthesis of a protein. The nature of the phenomenon, which may be recharacterized as research on it proceeds, constrains details about the mechanism that produces it. For example, the components of the mechanism, the entities and activities, must be adequate to synthesize a protein, composed of amino acids tightly covalently bonded to each other. There are various spatial constraints. The DNA is located in the nucleus (in eucaryotes) and the rest of the machinery is in the cytoplasm. The ribosome is a particle with a two-part structure that allows it to attach to the messenger RNA and orient the codons of the messenger so that particular transfer RNAs can hydrogen bond to them. There is a particular order in which the steps occur, and they take certain amounts of time. All of these constraints can play roles in the search for mechanisms, and they then become part of an adequate description of a mechanism. (For more discussion of these constraints, see Craver and Darden 2001.) From this list of constraints on an adequate description of a mechanism, it is evident that mere equations do not adequately represent the numerous features of a mechanism, especially spatial constraints. Diagrams that depict structural features, spatial relations, and temporal sequences are good representations of mechanisms. To sum up so far: recent work has provided this new characterization of what a mechanism is, the constraints that any adequate description of a mechanism must satisfy, and an analysis of abstract mechanism schemas and incomplete mechanism sketches that can play roles in guiding discovery.

4 Outline of a System for Constructing Hypothetical Mechanisms

Components of a computational system for discovering mechanisms are outlined in Figure 2. They include a simulator, a hypothesized mechanism schema, a discoverer with reasoning strategies for generation, evaluation, and revision, and a searchable, indexed library.

4.1 Simulator

The goal is to construct a simulator that adequately simulates a biological mechanism. Given the set-up conditions, the simulator can be used to predict specific termination conditions. The simulator is an instantiation of a mechanism schema. It may contain more or less detail about the specific component entities and activities and their structural, spatial, and temporal organization. From a human factors perspective, a video option to display the mechanism simulation in action would aid the user in seeing what the mechanism is doing at each stage. The video could be stopped at each stage, and the entities and activities of that stage examined more closely.
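A minimal sketch of the simulator idea, under the assumption (mine, not the paper's) that a mechanism can be modeled as an ordered list of stages, each an activity transforming a state from set-up to termination conditions. The stage names and state flags are invented placeholders.

```python
# Hedged sketch: a mechanism as ordered stages run from set-up conditions
# to termination conditions. Stage functions are invented placeholders.
def simulate(setup_state, stages):
    """Run each stage in order; return the trace and the final state."""
    trace = [setup_state]
    state = setup_state
    for name, activity in stages:
        state = activity(state)
        trace.append(state)   # the trace could drive a stage-by-stage video
    return trace, state

protein_synthesis = [
    ("transcription", lambda s: s | {"mRNA": True}),
    ("docking",       lambda s: s | {"ribosome_bound": True}),
    ("translation",   lambda s: s | {"protein": True}),
]

trace, final = simulate({"DNA": True}, protein_synthesis)
print(final["protein"])   # termination condition reached
```

The saved trace corresponds to the proposed video option: each recorded state could be displayed and inspected before stepping to the next stage.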


Fig. 2. Outline for a Mechanism Discovery System (components: reasoning strategies for generation, evaluation, and revision; a library of types of schemas, types of modules, types of entities, and types of activities; a hypothetical mechanism schema; and a mechanism simulator)

4.2 Library

A mechanism schema is discovered by iterating through stages of generation, evaluation, and revision. Generation is accomplished by several steps. First, a phenomenon to be explained must be characterized. Its mode of description will guide the search for schemas that can produce it. Search occurs within a library, consisting of several types of entries: types of schemas, types of modules, types of entities, and types of activities. The search among types of schemas is a search for an abstraction of an analogous mechanism (on analogies and schemas, see, e.g., Holyoak and Thagard 1995). Kevin Dunbar (1995) has shown that molecular biologists often use "local analogies" to similar mechanisms in their own field and "regional analogies" to mechanisms in other, neighboring fields. Such analogies are good sources from which to abstract mechanism schemas. Types of schemas, modules, entities, and activities are interconnected. A particular type of schema, for example, a gene regulation schema, may suggest one or more types of modules, such as derepression or negative feedback modules. A type of entity will have activity-enabling properties that indicate it can produce a type of activity. Conversely, a type of activity will require particular types of entities. For example, nucleic acids have polar charged bases that enable them to engage in the activity of hydrogen bonding, a weak form of chemical bonding that can be easily formed and broken between polar molecules. Schemas may be indexed by the kind of phenomenon they produce. For example, for the phenomenon of producing an adaptation, two types of mechanisms have been proposed historically by biologists: selective mechanisms and instructive mechanisms (Darden 1987). At a high degree of abstraction, a selection


L. Darden

schema may be characterized as follows: first comes a stage of variant production; next comes a stage with a selective interaction that poses a challenge to the variants; this is followed by differential benefit for some of the variants. In contrast, instructive mechanisms have a coupling between the stage of variant production and the selective environment, so that an instruction is sent from the environment and interpreted by the adaptive system to produce only the required variant. In evolutionary biology and immunology, selective mechanisms rather than instructive ones have been shown to work in producing evolutionary adaptations and clones of antibody cells (Darden and Cain 1989). A library of modules can be indexed by the functional roles they can fulfill in a schema (e.g., Goel and Chandrasekaran 1989). For example, if a schema requires end-product inhibition, then a feedback control module can be added to the linear schema. If cell-to-cell signaling is indicated, then membrane-spanning proteins serving as receptors are a likely kind of module. Entities and activities can be categorized in numerous ways. Types of macromolecules include nucleic acids, proteins, and carbohydrates. For proteins that perform functions, such as enzymes that catalyze reactions, the kind of function, such as phosphorylation, is a useful indexing method.

4.3 Discoverer: Generation, Evaluation, Revision

During generation, after a phenomenon is characterized, a search is made to see whether an entire schema can be found that produces such a type of phenomenon. If an entire schema can be found, such as a selective or instructive schema, then generation can proceed to further specification with types of modules, entities, and activities. If no entire schema is available, then modules may be put together piecemeal to fulfill various functional roles. If functional roles and modules to fill them are not yet known, then reasoning about types of entities and activities is available. By starting from known set-up conditions, or, conversely, from the end product, a hypothesized string of entities and activities can be constructed. Reasoning forward from the beginning or backward from the end product of the mechanism allows gaps in the middle to be filled. In sum, reasoning strategies for discovering mechanisms include schema instantiation, modular subassembly, and forward chaining/backtracking (Darden, forthcoming).

Evaluation. Once one or more hypothesized mechanism schemas are found or constructed piecemeal, then evaluation occurs. Evaluation proceeds through stages from how possibly to how plausibly to how actually. (Peter Machamer suggests that "how actually" is best read as "how most plausibly," given that all scientific claims are contingent, that is, subject to revision in the light of new evidence.) How possibly a mechanism operates can be shown by building a simulator that begins with the set-up conditions and produces the termination conditions by moving through hypothesized intermediate stages. As additional constraints are fulfilled and evaluation strategies applied, the proposed mechanism becomes more plausible. The constraints of Table 1 must be satisfied. Table 2 (from Darden 1991, Table 15-2) lists strategies for theory assessment often employed by philosophers of science.

Table 2. Strategies for Theory Evaluation (from Darden 1991, p. 257)
1. Internally consistent and nontautologous
2. Systematicity vs. modularity
3. Clarity
4. Explanatory adequacy
5. Predictive adequacy
6. Scope and generality
7. Lack of ad hocness
8. Extendability and fruitfulness
9. Relations with other accepted theories
10. Metaphysical and methodological constraints
11. Relation to rivals

A working simulator will likely show that the proposed schema is internally consistent and consists of modules whose functioning is clearly understood, thus satisfying some of the conditions listed in 1-3. If the simulator can be run to produce the phenomenon to be explained, then condition 4, explanatory adequacy, is at least partially fulfilled. Testing a prediction against data is often viewed as the most important evaluation strategy. The simulator can be run with different initial conditions to produce predictions, which can be tested against data. If a prediction does not match a data point, then an anomaly results and revision is required. We omit further discussion of the other strategies for theory assessment in order to turn our attention to anomaly resolution strategies to use when revision is required.
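The predict-and-compare step can be sketched as follows. The toy simulator, initial conditions, and data points are invented for illustration; only the control flow (run under varied conditions, detect mismatches, characterize each anomaly) reflects the strategy described above.

```python
# Sketch of evaluation by prediction testing: run the simulator under different
# initial conditions and characterize any mismatch with the observed result.

def simulate(initial_conditions):
    # Toy stand-in for a mechanism simulator: predicts an amount of product.
    return initial_conditions["substrate"] * initial_conditions["rate"]

# (initial conditions, observed data point) pairs -- invented data.
observations = [
    ({"substrate": 10, "rate": 2.0}, 20.0),
    ({"substrate": 5,  "rate": 2.0}, 10.0),
    ({"substrate": 8,  "rate": 2.0}, 4.0),   # will not match the prediction
]

anomalies = []
for conditions, observed in observations:
    predicted = simulate(conditions)
    if abs(predicted - observed) > 1e-6:
        # Characterize the difference: its direction and magnitude carry
        # information about the nature of the failure.
        anomalies.append({"conditions": conditions,
                          "predicted": predicted,
                          "observed": observed,
                          "difference": observed - predicted})

print(len(anomalies))               # 1
print(anomalies[0]["difference"])   # -12.0
```

Each recorded anomaly carries the conditions under which it arose, which is the raw material for the localization step discussed next.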

Anomaly resolution. When a prediction does not match a data point, then an anomaly results. Strategies for anomaly resolution require a number of information processing tasks to be carried out. In previous work with John Josephson and Dale Moberg, we investigated computational implementation of such tasks (Moberg and Josephson 1990; Darden et al. 1992; Darden 1998). A list of such tasks is found in Figure 3. Reasoning in anomaly resolution is, ﬁrst, a diagnostic reasoning task, to localize the site(s) of failure, and, then, a redesign task, to improve the simulation to remove the problem. Characterizing the exact diﬀerence between the prediction and the data point is a ﬁrst step. Peter Karp (1990; 1993) discussed this step of anomaly resolution in his implementation of the MOLGEN system to resolve anomalies in a molecular biology simulator. One wants to milk the anomaly itself for all the information one can get about the nature of the failure. Often during diagnosis, the nature of the anomaly allows failures to be localized to one part of the system rather than others, sometimes to a speciﬁc site.


Fig. 3. Information Processing Tasks in Anomaly Resolution (from Darden 1998, p. 69). The flowchart connects the following tasks:
1. Run the simulator with initial conditions to produce a prediction.
2. Compare the prediction by the simulator with the observed result (anomaly detection).
3. Characterize the anomaly.
4. Construct fault site hypotheses, drawing on additional information (e.g., from the research program).
5. Provisionally localize blame.
6. Construct alternative redesign hypotheses, again drawing on additional information.
7. Evaluate the hypotheses and choose one, h*.
8. Construct a modified theory with h*.
9. Test the modified theory (resimulate); on failure, retry localization or retry redesign.
10. Further evaluate the modified theory (e.g., its relation to other theories).
11. Incorporate it into the explanatory repertoire.


Once hypothesized localizations are found by credit assignment, alternative redesign hypotheses for the faulty module can be constructed. Once again, the library of modules, entities, and activities can be consulted to find plausible candidates. The newly redesigned simulator can be run again to see whether the problem is fixed and the prediction now matches the data point.
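The localize-and-redesign cycle just described can be sketched as follows; the modules and the anomalous data point are invented stand-ins for library entries, not from any actual system.

```python
# Sketch of redesign: substitute alternative modules from the library and
# resimulate until the prediction matches the anomalous data point.

def linear_output(x):
    return 2 * x                      # current (faulty) module

def saturating_output(x):
    return min(2 * x, 10)             # alternative module from the library

# Library of modules indexed by the functional role they can fulfill.
library = {"output-stage": [linear_output, saturating_output]}

anomalous_point = (8, 10)             # input 8 was observed to give 10, not 16

chosen = None
for candidate in library["output-stage"]:
    x, observed = anomalous_point
    if candidate(x) == observed:      # resimulate: does the redesign fix it?
        chosen = candidate
        break

print(chosen.__name__)  # saturating_output
```

Blame has already been localized to one module here, so only that slot of the schema is varied; the rest of the simulator is held fixed.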

5 Piecemeal Discovery and Hierarchical Integration

The view of scientific discovery proposed here is that discovery of mechanisms occurs in extended episodes of cycles of generation, evaluation, and revision. Insofar as the constraints are satisfied, the assessment strategies are applied, and any anomalies are resolved, the hypothesized mechanism will have moved through the stages of how possibly to how plausibly to how actually. A new mechanism will have been discovered. Once a new mechanism at a given mechanism level has been discovered, that mechanism needs to be situated within the context of other biological mechanisms. Thus, the general strategy for theory evaluation of consistent relations with other accepted theories in other fields of science (see Table 2, strategy 9) is reinterpreted. By thinking about theories as mechanism schemas, the strategy gets implemented by situating the hypothesized mechanism in a larger context. This larger context consists of mechanisms that occur before and after it, as well as mechanisms up or down in a mechanism hierarchy (Craver 2001). Biological mechanisms are nested within other mechanisms, and finding such a fit in an integrated picture is another measure of the adequacy of a newly proposed mechanism.

6 Conclusion

Integrated mechanism schemas can serve as the scaffolding of the biological matrix. They provide a framework to integrate general biological knowledge of mechanisms, the data that provide evidence for such mechanisms, and the reports in the literature of research to discover mechanisms. This paper has discussed a new characterization of mechanism, based on an ontology of entities, properties, and activities, and has outlined components of a computational system for discovering mechanisms. Discovery is viewed as an extended process, requiring reasoning strategies for generation, evaluation, and revision of hypothesized mechanism schemas. Discovery moves through the stages from how possibly to how plausibly to how actually a mechanism works.

Acknowledgments. This work was supported by the National Science Foundation under grant number SBR-9817942. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect those of the National Science Foundation. Many of the ideas in this paper were worked out in collaboration with Carl Craver and Peter Machamer.

References
1. Beatty, John (1995), "The Evolutionary Contingency Thesis," in James G. Lennox and Gereon Wolters (eds.), Concepts, Theories, and Rationality in the Biological Sciences. Pittsburgh, PA: University of Pittsburgh Press, pp. 45-81.
2. Bechtel, William and Robert C. Richardson (1993), Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research. Princeton, NJ: Princeton University Press.
3. Craver, Carl (2001), "Role Functions, Mechanisms, and Hierarchy," Philosophy of Science 68: 53-74.
4. Craver, Carl and Lindley Darden (2001), "Discovering Mechanisms in Neurobiology: The Case of Spatial Memory," in Peter Machamer, R. Grush, and P. McLaughlin (eds.), Theory and Method in the Neurosciences. Pittsburgh, PA: University of Pittsburgh Press, pp. 112-137.
5. Darden, Lindley (1987), "Viewing the History of Science as Compiled Hindsight," AI Magazine 8(2): 33-41.
6. Darden, Lindley (1990), "Diagnosing and Fixing Faults in Theories," in J. Shrager and P. Langley (eds.), Computational Models of Scientific Discovery and Theory Formation. San Mateo, CA: Morgan Kaufmann, pp. 319-346.
7. Darden, Lindley (1991), Theory Change in Science: Strategies from Mendelian Genetics. New York: Oxford University Press.
8. Darden, Lindley (1998), "Anomaly-Driven Theory Redesign: Computational Philosophy of Science Experiments," in Terrell W. Bynum and James Moor (eds.), The Digital Phoenix: How Computers are Changing Philosophy. Oxford: Blackwell, pp. 62-78. Available: www.inform.umd.edu/PHIL/faculty/LDarden/Research/pubs/
9. Darden, Lindley (forthcoming), "Strategies for Discovering Mechanisms: Schema Instantiation, Modular Subassembly, Forward Chaining/Backtracking," presented at PSA 2000, Vancouver. Preprint available: www.inform.umd.edu/PHIL/faculty/LDarden/Research/pubs
10. Darden, Lindley and Joseph A. Cain (1989), "Selection Type Theories," Philosophy of Science 56: 106-129. Available: www.inform.umd.edu/PHIL/faculty/LDarden/Research/pubs/
11. Darden, Lindley and Carl Craver (in press), "Strategies in the Interfield Discovery of the Mechanism of Protein Synthesis," Studies in History and Philosophy of Biological and Biomedical Sciences.
12. Darden, Lindley, Dale Moberg, Sunil Thadani, and John Josephson (July 1992), "A Computational Approach to Scientific Theory Revision: The TRANSGENE Experiments," Technical Report 92-LD-TRANSGENE, Laboratory for Artificial Intelligence Research, The Ohio State University, Columbus, Ohio, USA.
13. Dunbar, Kevin (1995), "How Scientists Really Reason: Scientific Reasoning in Real-World Laboratories," in R. J. Sternberg and J. E. Davidson (eds.), The Nature of Insight. Cambridge, MA: MIT Press, pp. 365-395.
14. Glennan, Stuart S. (1996), "Mechanisms and The Nature of Causation," Erkenntnis 44: 49-71.
15. Goel, Ashok and B. Chandrasekaran (1989), "Functional Representation of Designs and Redesign Problem Solving," in Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, MI, August 1989, pp. 1388-1394.


16. Holyoak, Keith J. and Paul Thagard (1995), Mental Leaps: Analogy in Creative Thought. Cambridge, MA: MIT Press.
17. Karp, Peter (1990), "Hypothesis Formation as Design," in J. Shrager and P. Langley (eds.), Computational Models of Scientific Discovery and Theory Formation. San Mateo, CA: Morgan Kaufmann, pp. 275-317.
18. Karp, Peter (1993), "A Qualitative Biochemistry and its Application to the Regulation of the Tryptophan Operon," in L. Hunter (ed.), Artificial Intelligence and Molecular Biology. Cambridge, MA: AAAI Press and MIT Press, pp. 289-324.
19. Karp, Peter D. (2000), "An Ontology for Biological Function Based on Molecular Interactions," Bioinformatics 16: 269-285.
20. Machamer, Peter, Lindley Darden, and Carl Craver (2000), "Thinking About Mechanisms," Philosophy of Science 67: 1-25.
21. Moberg, Dale and John Josephson (1990), "Diagnosing and Fixing Faults in Theories, Appendix A: An Implementation Note," in J. Shrager and P. Langley (eds.), Computational Models of Scientific Discovery and Theory Formation. San Mateo, CA: Morgan Kaufmann, pp. 347-353.
22. Morowitz, Harold (1985), "Models for Biomedical Research: A New Perspective," Report of the Committee on Models for Biomedical Research. Washington, D.C.: National Academy Press.
23. Morowitz, Harold and Temple Smith (1987), "Report of the Matrix of Biological Knowledge Workshop, July 13-August 14, 1987," Santa Fe, NM: Santa Fe Institute.
24. Piatetsky-Shapiro, Gregory and William J. Frawley (eds.) (1991), Knowledge Discovery in Databases. Cambridge, MA: MIT Press.
25. Simon, Herbert A. (1977), Models of Discovery. Dordrecht: Reidel.
26. Swanson, Don R. (1990), "Medical Literature as a Potential Source of New Knowledge," Bull. Med. Libr. Assoc. 78: 29-37.

The Discovery Science Project in Japan

Setsuo Arikawa

Department of Informatics, Kyushu University, Fukuoka 812-8581, Japan
arikawa@i.kyushu-u.ac.jp

Abstract. The Discovery Science project in Japan, in which more than sixty scientists participated, was a three-year project sponsored by a Grant-in-Aid for Scientific Research on Priority Areas from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan. This project mainly aimed to (1) develop new methods for knowledge discovery, (2) install network environments for knowledge discovery, and (3) establish Discovery Science as a new area of Computer Science / Artificial Intelligence study. In order to attain these aims we set up five groups for studying the following research areas:

(A) Logic for/of Knowledge Discovery
(B) Knowledge Discovery by Inference/Reasoning
(C) Knowledge Discovery Based on Computational Learning Theory
(D) Knowledge Discovery in Huge Databases and Data Mining
(E) Knowledge Discovery in Network Environments

These research areas and related topics can be regarded as a preliminary definition of Discovery Science by enumeration. Thus Discovery Science ranges over philosophy, logic, reasoning, computational learning, and system development. In addition to these five research groups we organized a steering group for planning, adjustment, and evaluation of the project. The steering group, chaired by the principal investigator of the project, consists of the leaders of the five research groups and their subgroups as well as advisors from outside the project. We invited three scientists to consider Discovery Science across the above five research areas from the viewpoints of knowledge science, natural language processing, and image processing, respectively.

The group A studied discovery from a very broad perspective, taking into account historical and social aspects of discovery as well as computational and logical aspects of discovery. The group B focused on the role of inference/reasoning in knowledge discovery, and obtained many results, in both theory and practice, on statistical abduction, inductive logic programming, and inductive inference. The group C aimed to propose and develop computational models and methodologies for knowledge discovery mainly based on computational learning theory. This group obtained some deep theoretical results on boosting of learning algorithms and the minimax strategy for Gaussian density estimation, and also methodologies specialized to concrete problems, such as an algorithm for finding best subsequence patterns, a biological sequence compression algorithm, text categorization, and MDL-based compression. The group D aimed to create a computational strategy for speeding up the discovery process in total.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 1-2, 2001.
© Springer-Verlag Berlin Heidelberg 2001

For this purpose,


the group D was organized with researchers working in scientific domains and researchers from computer science, so that real issues in the discovery process could be exposed and practical computational techniques could be devised and tested for solving these real issues. This group handled many kinds of data: data from national projects such as genomic data and satellite observations, data generated from laboratory experiments, data collected from personal interests such as literature and medical records, data collected in business and marketing areas, and data for proving the efficiency of algorithms, such as the UCI repository. Many theoretical and practical results were obtained on such a variety of data. The group E aimed to develop a unified media system for knowledge discovery and network agents for knowledge discovery. This group obtained practical results on a new virtual materialization of DB records and scientific computations that helps scientists to make a scientific discovery, a convenient visualization interface for web data, and an efficient algorithm that extracts important information from semi-structured data in the web space.

This lecture describes an outline of our project and the main results, as well as how the project was prepared. We have published and are publishing special issues on our project in several journals [5],[6],[7],[8],[9],[10]. As an activity of the project we organized and sponsored the Discovery Science Conference for three years, where many papers were presented by our members [2],[3],[4]. We also published annual progress reports [1], which were distributed at the DS conferences. We are publishing the final technical report as an LNAI volume [11].

References
1. S. Arikawa, M. Sato, T. Sato, A. Maruoka, S. Miyano, and Y. Kanada. Discovery Science Progress Report No.1 (1998), No.2 (1999), No.3 (2000). Department of Informatics, Kyushu University.
2. S. Arikawa and H. Motoda. Discovery Science. LNAI 1532, Springer, 1998.
3. S. Arikawa and K. Furukawa. Discovery Science. LNAI 1721, Springer, 1999.
4. S. Arikawa and S. Morishita. Discovery Science. LNAI 1967, Springer, 2000.
5. H. Motoda and S. Arikawa (Eds.) Special Feature on Discovery Science. New Generation Computing, 18(1): 13-86, 2000.
6. S. Miyano (Ed.) Special Issue on Surveys on Discovery Science. IEICE Transactions on Information and Systems, E83-D(1): 1-70, 2000.
7. H. Motoda (Ed.) Special Issue on Discovery Science. Journal of Japanese Society for Artificial Intelligence, 15(4): 592-702, 2000.
8. S. Morishita and S. Miyano (Eds.) Discovery Science and Data Mining (in Japanese). bit special volume, Kyoritsu Shuppan, 2000.
9. S. Arikawa, M. Sato, T. Sato, A. Maruoka, S. Miyano, and Y. Kanada. The Discovery Science Project. Journal of Japanese Society for Artificial Intelligence, 15(4): 595-607, 2000.
10. S. Arikawa, H. Motoda, K. Furukawa, and S. Morishita (Eds.) Theoretical Aspects of Discovery Science. Theoretical Computer Science (to appear).
11. S. Arikawa and A. Shinohara (Eds.) Progresses in Discovery Science. LNAI, Springer (2001, to appear).

Discovering Repetitive Expressions and Affinities from Anthologies of Classical Japanese Poems

Koichiro Yamamoto¹, Masayuki Takeda¹,², Ayumi Shinohara¹, Tomoko Fukuda³, and Ichirō Nanri³

¹ Department of Informatics, Kyushu University 33, Fukuoka 812-8581, Japan
² PRESTO, Japan Science and Technology Corporation (JST)
³ Junshin Women's Junior College, Fukuoka 815-0036, Japan
{k-yama, takeda, ayumi}@i.kyushu-u.ac.jp
{tomoko-f@muc, nanri-i@msj}.biglobe.ne.jp

Abstract. The class of pattern languages was introduced by Angluin (1980), and many studies have been undertaken on it from the theoretical viewpoint of learnability. However, there have been few practical studies, except for the one by Shinohara (1982), in which patterns are restricted so that every variable occurs at most once. In this paper, we distinguish repetitive variables from those occurring only once within a pattern, and focus on the number of occurrences of a repetitive variable and the length of the strings it matches, in order to model the rhetorical device based on repetition of words in classical Japanese poems. Preliminary results suggest that this will lead to a characterization of individual anthologies, which has never before been achieved.

1 Introduction

Recently, we have tackled several problems in analyzing classical Japanese poems, Waka. In [12], we successfully discovered from Waka poems characteristic patterns, named Fushi, which are read-once patterns whose constant parts are restricted to sequences of auxiliary verbs and postpositional particles. In [10], we addressed the problem of semi-automatically finding similar poems, and discovered unheeded instances of Honkadori (poetic allusion), one important rhetorical device in Waka poems based on specific allusion to earlier famous poems. In [11], we succeeded in discovering expressions highlighting differences between two anthologies by two closely related poets (e.g., a master poet and disciples). In the present paper, we focus on repetition. Repetition is the basis for many poetic forms, and its use can heighten the emotional impact of a piece. This device, however, has received little attention in the case of Waka poetry. One of the main reasons might be that a Waka poem takes the form of a short poem: it consists of only five lines and thirty-one syllables, arranged 5-7-5-7-7, and therefore the use of repetition is often considered to waste words (letters) under this tight limitation. In fact, some poets/scholars in earlier times taught their disciples never to repeat a word in a Waka poem. They considered word repetition a 'disease' to be avoided.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 416-428, 2001.
© Springer-Verlag Berlin Heidelberg 2001

This


device, however, gives a remarkable effect if skillfully used, even in Waka poetry. The following poem, composed by the priest Egyō (who lived in the latter half of the 10th century), is a good example of repetition, where the two words 'nawo' and 'kiku' are each used twice¹.

Ha-shi-no-na-wo / na-wo-u-ta-ta-ne-to / ki-ku-hi-to-no /
ki-ku-ha-ma-ko-to-ka / u-tsu-tsu-na-ga-ra-ni (Egyō-Shū #195)

Since there have been few studies on this poetic device in the long research history of Waka poetry, it is necessary to develop a method of automatically extracting (candidates for) instances of the repetition from a database. To retrieve instances of repetition like the above, we consider the pattern matching problem for patterns such as *x*x*y*y*, where * is the variable-length don't care (VLDC), a wildcard that matches any string, and x, y are variables that match any non-empty strings. Recall the pattern languages proposed by Angluin [2]. A pattern is a string in Π = (Σ ∪ V)+, where V is an infinite set {x1, x2, . . .} of variables and Σ ∩ V = ∅. For example, ax1bx2x1 is a pattern, where a, b ∈ Σ. The language of a pattern π is the set of strings obtained by replacing variables in π by non-empty strings. For example, L(ax1bx2x1) = {aubvu | u, v ∈ Σ+}. Although the membership problem is NP-complete for the class of Angluin patterns, as shown in [2], it becomes polynomial-time solvable when the number of variables occurring within π is bounded by a fixed number k. Several subclasses have been investigated from the viewpoint of polynomial-time learnability. For example, the classes of read-once patterns (every variable occurs only once) and one-variable patterns (only one variable is contained) are known to be polynomial-time learnable [2]. In the present paper, we study subclasses from the viewpoints of pattern matching and similarity computation. It should be mentioned that the class of regular expressions with back referencing [1] is considered a superclass of the Angluin patterns. The membership problem for this class is also known to be NP-complete. On the other hand, we attempted in [10] to semi-automatically discover similar poems from an accumulation of about 450,000 Waka poems in machine-readable form. As mentioned above, one of the aims was to discover unheeded instances of Honkadori.
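Because regular expressions with back referencing subsume Angluin patterns, an off-the-shelf regex engine can already retrieve such word repetitions. The sketch below is a rough stand-in, not the authors' method: a greedy group inside a zero-width lookahead lists, at each position of the romanized Egyō poem, the longest substring of length at least 3 (an illustrative stand-in for the length bound µ(x) = 3 introduced later) that occurs again farther on.

```python
import re

# Romanized syllables of the Egyo poem quoted above, hyphens removed.
poem = "hashinonawonawoutatanetokikuhitonokikuhamakotokautsutsunagarani"

# (?=(.{3,}).*\1): at each position, the greedy group captures the longest
# substring of length >= 3 whose copy occurs later; the zero-width lookahead
# lets the scan advance one character at a time, so matches may overlap.
repeats = {m.group(1) for m in re.finditer(r"(?=(.{3,}).*\1)", poem)}

print("nawo" in repeats)                            # True
print(any(r.startswith("kiku") for r in repeats))   # True
```

The set also contains shorter suffixes of the same repeats; a real extractor would keep only maximal candidates and rank them, which is where the scoring developed below comes in.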
¹ We inserted the hyphens '-' between syllables, each of which was written as one Kana character, although romanized here. One can see that every syllable consists of either a single vowel or a consonant and a vowel. Thus there can be no consonantal clusters, and every syllable ends in one of the five vowels a, i, u, e, o.

The method is simple: arrange all possible pairs of poems in decreasing order of their similarities, and then have scholars scrutinize the first part of the list. The key to success in this approach is how to develop an appropriate similarity measure. Traditionally, the scheme of weighted edit distance with a weight matrix may have been used to quantify affinities between strings. This scheme, however, requires a fine tuning of quadratically many weights in a matrix with the alphabet size, by hand-coding or a heuristic criterion. As an alternative, we introduced a new framework called string resemblance systems (SRSs for short) [10]. In this framework, the similarity of two strings is evaluated via a pattern that matches both of them, supported by an appropriate function that assigns a quantity of resemblance to candidate patterns. This scheme bridges a gap between optimal pattern discovery (see, e.g., [5]) and similarity computation. An SRS is specified by (1) a pattern set to which common patterns belong, and (2) a pattern score function that maps each pattern in the set to a quantity of resemblance. For example, if we choose the set of patterns with VLDCs and define the score of a pattern to be the number of symbols in it, then the obtained measure is the length of the longest common subsequence (LCS) of two strings. In fact, the strings acdeba and abdac have a common pattern a*d*a*, which contains three symbols. With this framework one can easily design and modify his/her measures. In fact we designed some measures as combinations of pattern set and pattern score function along with the framework, and reported successful results in discovering unnoticed instances of Honkadori [10]. The discovered affinities raised an interesting issue for Waka studies, and we could give a convincing conclusion to it:

1. We have proved that one of the most important poems by Fujiwara-no-Kanesuke, one of the renowned thirty-six poets, was in fact based on a model poem found in Kokin-Shū. The same poem had been interpreted just to show "frank utterance of parents' care for their child." Our study revealed the poet's techniques in composition, half hidden by the heart-warming feature of the poem, by extracting the same structure between the two poems².

2.
We have compared Tametada-Shū, the mysterious anthology unidentified in Japanese literary history, with a number of private anthologies edited after the middle of the Kamakura period (the 13th century), using the same method, and found about 10 pairs of similar poems between Tametada-Shū and Sōkon-Shū, an anthology by Shōtetsu. The result suggests that the mysterious anthology was edited by a poet in the early Muromachi period (the 15th century). There has been dispute over the editing date since one scholar suggested the middle of the Kamakura period as a probable one. We now have strong evidence bearing on this problem.

In this paper, we focus on the class of Angluin patterns and on its subclasses, and discuss the problems of pattern matching, similarity computation, and pattern discovery. It should be emphasized that although many studies have been devoted to the class of Angluin patterns and its subclasses, most of them have been done from the theoretical viewpoint of learnability. The only exception is due to Shinohara [9]. He mentioned practical applications, but they are limited to the subclass called the read-once patterns (referred to as regular patterns in [9]). We show in this paper the first practical application of Angluin patterns that are not limited to the read-once patterns. As our framework quantifies similarities between strings by weighting patterns common to the strings, we modify the definition of patterns as follows:

– Substitute a gap symbol for every variable occurring only once in a pattern.
– Associate each variable x with an integer µ(x) so that the variable x matches a string w only if the length of w is at least µ(x). (In the original setting in [2], µ(x) = 1 for every variable x.)

Since we are interested only in repetitive strings in a Waka poem, there is no need to name non-repetitive strings. It suffices to use gap symbols instead of variables for representing non-repetitive strings. Thus, the first item is rather for the sake of simplification. The second item, on the other hand, is an essential augmentation, by which the score of a pattern π can be sensitive to the values of µ(x) for the variables x in π. In fact, we are strongly interested in the length of the repeated string when analyzing repetitive expressions in Waka poems.

Fig. 1 shows an instance of Honkadori we discovered in [10]. The two poems have several common expressions, such as "na-ka-ra-he-te" and "to-shi-so-he-ni-ke-ru." One can notice that both poems use the repetition of words. Namely, the Kokin-Shū poem and the Shin-Kokin-Shū poem repeat "nakara" (the stem of the verb "nagarafu"; the name of a bridge) and "matsu" (wait; pine tree), respectively. This strengthens the affinities based on the existence of common substrings.

² Asahi, one of Japan's leading newspapers, made a front-page report of this discovery (26 May, 2001).

Poem alluded to. (Kokin-Shū #826) Sakanoue-no-Korenori.
a-fu-ko-to-wo            Without seeing you,
na-ka-ra-no-ha-shi-no    I have lived on
na-ka-ra-he-te           Adoring you ever
ko-hi-wa-ta-ru-ma-ni     Like the ancient bridge of Nagara
to-shi-so-he-ni-ke-ru    And many years have passed on.

Allusive-variation. (Shin-Kokin-Shū #1636) Nijoin Sanuki.
na-ka-ra-he-te Like the ancient pine tree of longevity na-ho-ki-mi-ka-yo-wo On the mount of expectation called “Matsuyama,” ma-tsu-ya-ma-no I have lived on ma-tsu-to-se-shi-ma-ni Expecting your everlasting reign to-shi-so-he-ni-ke-ru And many years have passed on. Fig. 1. Discovered instance of poetic allusion.

It may be relevant to mention that this work is a multidisciplinary study between literature and computer science. In fact, the second-to-last author is a Waka researcher and the last author is a linguist of the Japanese language.

2 A Uniform Framework for String Similarity

This section brieﬂy sketches the framework of string resemblance systems according to [10]. Gusﬁeld [6] pointed out that in dealing with string similarity


the language of alignments is often more convenient than the language of edit operations. Our framework is a generalization of the alignment-based scheme and is based on the notion of common patterns. Before describing our scheme, we introduce some notation. The set of strings over an alphabet Σ is denoted by Σ*. The length of a string u is denoted by |u|. The string of length 0 is called the empty string and is denoted by ε. Let Σ⁺ = Σ* − {ε}. Let us denote by R the set of real numbers.

A pattern system is a triple ⟨Σ, Π, L⟩ of a finite alphabet Σ, a set Π of descriptions called patterns, and a function L that maps a pattern in Π to a subset of Σ*. L(π) is called the language of a pattern π ∈ Π. A pattern π ∈ Π matches a string w ∈ Σ* if w belongs to L(π). A pattern π in Π is a common pattern of strings w1 and w2 in Σ* if π matches both of them.

Definition 1. A string resemblance system (SRS) is a 4-tuple ⟨Σ, Π, L, score⟩, where ⟨Σ, Π, L⟩ is a pattern system and score is a pattern score function that maps a pattern in Π to a real number. The similarity SIM(x, y) between strings x and y with respect to ⟨Σ, Π, L, score⟩ is defined by SIM(x, y) = max{ score(π) | π ∈ Π and x, y ∈ L(π) }. When the set { score(π) | π ∈ Π and x, y ∈ L(π) } is empty or the maximum does not exist, SIM(x, y) is undefined.

The above definition regards similarity computation as optimal pattern discovery. Our framework thus bridges the gap between similarity computation and pattern discovery. In [10], we defined the homomorphic SRSs and showed that the class of homomorphic SRSs covers most of the known similarity (dissimilarity) measures, such as the edit distance, the weighted edit distance, the Hamming distance, and the LCS measure. We also extended this class in [10] to the semi-homomorphic SRSs, and the similarity measures we developed in [8] for musical sequence comparison fall into this class. We can handle a variety of string (dis)similarity measures by changing the pattern system and the pattern score function. The pattern systems appearing in the above examples are, however, restricted to homomorphic ones. Here, we shall mention SRSs with non-homomorphic pattern systems.

An order-free pattern (or fragmentary pattern) is a multiset {u1, . . . , uk} such that k > 0 and u1, . . . , uk ∈ Σ⁺, and is denoted by π[u1, . . . , uk]. The language of the pattern π[u1, . . . , uk] is the set of strings that contain the strings u1, . . . , uk without overlaps. The membership problem for the order-free patterns is NP-complete [7], and the similarity computation is NP-hard in general, as shown in [7]. However, the membership problem is polynomial-time solvable when k is fixed. The class of order-free patterns plays an important role in finding similar poems in anthologies of Waka poems [10].

The pattern languages, introduced by Angluin [2], are also interesting for our framework.

Definition 2 (Angluin pattern system). The Angluin pattern system is a pattern system ⟨Σ, (Σ ∪ V)⁺, L⟩, where V is an infinite set {x1, x2, . . . } of variables with Σ ∩ V = ∅, and L(π) is the set of strings π · θ such that θ is a homomorphism from (Σ ∪ V)⁺ to Σ⁺ with c · θ = c for every c ∈ Σ.
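To make the definitions concrete, here is a brute-force membership sketch of ours (not an algorithm from the paper; the naming convention is hypothetical) that tests whether a string belongs to the language of an Angluin pattern by trying every substitution:

```python
def matches(pattern, w):
    """Brute-force membership test for the Angluin pattern system.
    Variables are uppercase letters, constants lowercase; every variable
    must be substituted by the same non-empty string at each occurrence."""
    def solve(i, pos, subst):
        if i == len(pattern):
            return pos == len(w)
        c = pattern[i]
        if c.islower():                          # constant: match literally
            return pos < len(w) and w[pos] == c and solve(i + 1, pos + 1, subst)
        if c in subst:                           # variable already bound
            s = subst[c]
            return w.startswith(s, pos) and solve(i + 1, pos + len(s), subst)
        for end in range(pos + 1, len(w) + 1):   # try every non-empty binding
            subst[c] = w[pos:end]
            if solve(i + 1, end, subst):
                return True
            del subst[c]
        return False
    return solve(0, 0, {})

print(matches("XX", "abab"))   # True: X -> "ab"
print(matches("XX", "aba"))    # False: no string u with uu = "aba"
```

The exhaustive backtracking over variable bindings is exactly what the NP-completeness result of the next section says cannot be avoided in general.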


In this paper we discuss SRSs with the Angluin pattern system.

3 Computational Complexity

Definition 3 (Membership Problem for a pattern system ⟨Σ, Π, L⟩). Given a pattern π ∈ Π and a string w ∈ Σ*, determine whether or not w ∈ L(π).

Theorem 1 ([2]). The membership problem for the Angluin pattern system is NP-complete.

Definition 4 (Similarity Computation with respect to an SRS ⟨Σ, Π, L, score⟩). Given two strings w1, w2 ∈ Σ*, find a pattern π ∈ Π with {w1, w2} ⊆ L(π) that maximizes score(π).

Theorem 2. For an SRS with the Angluin pattern system, Similarity Computation is NP-hard in general.

Proof. We consider the following problem, which is a decision version of a special case of Similarity Computation with w1 = w2, and show its NP-completeness.

Optimal Pattern with respect to an SRS ⟨Σ, Π, L, score⟩: Given a string w ∈ Σ* and an integer k, determine whether or not there is a pattern π ∈ Π such that w ∈ L(π) and score(π) ≥ k.

We give a reduction from the Membership Problem for the Angluin pattern system ⟨Σ, Π, L⟩ to Optimal Pattern with respect to an SRS ⟨Σ′, Π′, L′, score′⟩ with the Angluin pattern system, for a specific score function score′ defined as follows. Let Σ′ = Σ ∪ {#} with # ∉ Σ. We take a one-to-one mapping ⟨·⟩ from Π = (Σ ∪ V)⁺ to Σ* that is log-space computable with respect to |π|. We define the score function score′ : Π′ → R by score′(π′) = 1 if π′ is of the form π′ = π#⟨π⟩ for some π ∈ Π = (Σ ∪ V)⁺, and score′(π′) = 0 otherwise. For a given instance π ∈ Π and w ∈ Σ* of the Membership Problem for the Angluin pattern system, let us consider w′ = w#⟨π⟩ and k = 1 as an input to Optimal Pattern. Then there is a pattern π′ ∈ Π′ with w′ ∈ L′(π′) and score′(π′) = 1 if and only if w ∈ L(π), since for a pattern π′ with score′(π′) = 1, w′ ∈ L′(π′) holds if and only if π′ = π#⟨π⟩ and w ∈ L(π). This completes the proof.

4 Practical Aspects

Recall that similarities between strings are quantiﬁed by weighting patterns common to them in our framework. For a ﬁner weighting, we augment the descriptive power of Angluin patterns by putting a restriction on the length of a string matched by each variable. Namely, we associate each variable x with an integer µ(x) such that the variable x matches a string w only if µ(x) ≤ |w|. For example, suppose that π1 = z1 xz2 xz3 and π2 = z1 yz2 yz3 , where µ(x) = 2, µ(y) = 3, and µ(z1 ) = µ(z2 ) = µ(z3 ) = 0. Then, π1 is common to the strings bcaaabbaac and acabbaabbbb, but π2 is not. This enables us to deﬁne a score function so that it is sensitive to the lengths of strings substituted for variables.
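The example above can be checked mechanically. Since µ(z1) = µ(z2) = µ(z3) = 0, a pattern of the form z1 x z2 x z3 matches w exactly when some substring of length at least µ(x) occurs twice in w without overlap. The following sketch is ours, not code from the paper:

```python
def has_repeat(w, min_len):
    """Does some substring of length >= min_len occur at least twice in w
    without overlap?  This is membership in z1 x z2 x z3 with
    mu(x) = min_len and mu(z1) = mu(z2) = mu(z3) = 0.  Checking length
    exactly min_len suffices: any longer repeat contains a shorter one
    whose occurrences are still at least min_len apart."""
    n = len(w)
    for i in range(n - min_len + 1):
        s = w[i:i + min_len]
        # a non-overlapping second occurrence must start at i + min_len or later
        if w.find(s, i + min_len) != -1:
            return True
    return False

print(has_repeat("bcaaabbaac", 2))   # True:  x = "aa"
print(has_repeat("acabbaabbbb", 2))  # True:  x = "ab"
print(has_repeat("bcaaabbaac", 3))   # False: no length-3 string repeats
```

The three calls reproduce the claim in the text: π1 (with µ(x) = 2) is common to both strings, while π2 (with µ(y) = 3) fails on the first string.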


On the other hand, as we have seen in the last section, similarity computation as well as the membership problem is intractable in general for the Angluin pattern system. From a practical point of view, it is worthwhile to consider subclasses of the pattern system that are tractable. Let occx(π) denote the number of occurrences of a variable x within a pattern π ∈ (Σ ∪ V)⁺. For example, occx(abxcyxbz) = 2. A variable x is said to be repetitive w.r.t. π if occx(π) > 1. A pattern π is said to be read-once if π contains no repetitive variables. Historically, read-once patterns are called regular patterns because the induced languages are regular [9]. The membership problem for the read-once patterns is solvable in linear time. A k-repetitive-variable pattern is a pattern that has at most k repetitive variables. It is not difficult to see that:

Theorem 3. The membership problem for the k-repetitive-variable patterns can be solved in O(n^(2k+1)) time for input of size n.

That is, non-repetitive variables do not matter. Moreover, we are interested only in repeated strings in text strings. For these reasons, we substitute the gap symbol ∗ for each of the non-repetitive variables in a pattern. Patterns are then strings over Σ ∪ V ∪ {∗}, in which every variable is repetitive. For example, the above pattern abxcyxbz is written as abxc∗xb∗.

Despite the polynomial-time computability, the membership problem for the k-repetitive-variable patterns takes considerable time to solve, and the similarity computation is therefore very slow in practice. For this reason, we restrict ourselves in this paper to the case of k = 1, namely, the one-repetitive-variable patterns. In order to solve the membership problem and the similarity computation efficiently for this class, we utilize a kind of filtering technique. For example, if the pattern a∗xxb∗cx matches a string w, then the candidate strings for substituting for x must occur at least three times in w without overlaps.
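A naive membership test for one-repetitive-variable patterns can be sketched as follows (our own illustration, assuming a gap may match the empty string and that the alphabet does not contain the letters x and ∗; a lower bound µ(x) would simply restrict the candidate lengths). It tries every substring of w as the value of x:

```python
def match_one_var(pattern, w):
    """Membership test for a one-repetitive-variable pattern built from
    constants, the single variable 'x', and the gap symbol '*' (a gap
    matches any, possibly empty, string).  Trying all O(n^2) substrings
    as the value of x, each checked in linear time, stays within the
    O(n^(2k+1)) bound for k = 1."""
    def gap_match(pieces, w):
        # with x substituted, the pattern splits at '*' into literal pieces
        # that must occur in w in order, anchored at both ends
        if len(pieces) == 1:
            return w == pieces[0]
        first, *mid, last = pieces
        if not (w.startswith(first) and w.endswith(last)):
            return False
        pos, end = len(first), len(w) - len(last)
        for p in mid:
            i = w.find(p, pos)
            if i == -1 or i + len(p) > end:
                return False
            pos = i + len(p)
        return pos <= end

    for i in range(len(w)):
        for j in range(i + 1, len(w) + 1):
            if gap_match(pattern.replace("x", w[i:j]).split("*"), w):
                return True
    return False

print(match_one_var("a*xxb*cx", "azdedebzcde"))  # True: x = "de"
```

Trying all quadratically many candidate substitutions is exactly what makes filtering by substring statistics attractive.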
We obtain such substring statistics on a given string w by exploiting data structures such as the minimal augmented suffix trees developed by Apostolico and Preparata [3,4]. The suffix tree [6] of a string w is a tree structure that represents all suffixes of w as paths from the root to the leaves, such that every internal node has at least two children. Suffix trees are useful for a variety of string processing tasks [6]. Each node v corresponds to a substring ṽ of w. With each internal node v we associate the number of leaves of the subtree rooted at v, which equals the number of (possibly overlapping) occurrences of ṽ in w (see Fig. 2 (a)). The minimal augmented suffix tree is an augmented version of the suffix tree in which additional nodes are introduced to count non-overlapping occurrences (see Fig. 2 (b)).
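For intuition, both occurrence statistics can be computed naively without any tree structure (a sketch of ours; the greedy left-to-right scan attains the maximum number of non-overlapping occurrences of a fixed string):

```python
def count_overlapping(s, w):
    """Number of (possibly overlapping) occurrences of s in w."""
    return sum(1 for i in range(len(w) - len(s) + 1) if w.startswith(s, i))

def count_non_overlapping(s, w):
    """Maximum number of non-overlapping occurrences of s in w.  Greedily
    taking each leftmost occurrence is optimal: any occurrence we skip
    could only end later, leaving no more room for subsequent ones."""
    count, i = 0, 0
    while True:
        i = w.find(s, i)
        if i == -1:
            return count
        count += 1
        i += len(s)

print(count_overlapping("aa", "aaaa"))      # 3
print(count_non_overlapping("aa", "aaaa"))  # 2
```

The suffix tree gives the first statistic for all substrings at once; the minimal augmented suffix tree does the same for the second, which is the one needed for the filtering step.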

5 Application to Waka Data

In this section, we present and discuss the results of our experiments carried out on the Eight Imperial Anthologies, the first eight of the anthologies compiled by imperial command, listed in Table 1.


Fig. 2. (a) Suffix tree and (b) minimal augmented suffix tree for the string ababaababa$. The number associated with each internal node denotes the number of occurrences of the corresponding string in ababaababa, where an occurrence means a possibly overlapping occurrence in (a) and a non-overlapping occurrence in (b). For example, the string aba occurs four times in the string ababaababa, but it appears only three times without overlapping.

Table 1. Eight Imperial Anthologies.

no.   anthology        compilation  # poems
I     Kokin-Shū        905          1,111
II    Gosen-Shū        955–958      1,425
III   Shūi-Shū         1005–1006    1,360
IV    Go-Shūi-Shū      1087         1,229
V     Kinyō-Shū        1127         717
VI    Shika-Shū        1151         420
VII   Senzai-Shū       1188         1,290
VIII  Shin-Kokin-Shū   1216         2,005

5.1 Similarity Computation

To succeed in discovery, we want to put appropriate restrictions on the pattern system and on the pattern score function using domain knowledge. However, there are few studies on repetition of words in Waka poems, as stated before, and therefore we do not know in advance what kind of restriction is effective. We take a stepwise-refinement approach: we start with a very simple pattern system and score function, and then improve them based on an analysis of the obtained results. Here we restrict ourselves to one-repetitive-variable patterns. Moreover, we use a simple pattern score function that is not sensitive to the characters or VLDCs (gap symbols) in a pattern; the score of a∗xxb∗cx is identical to that of xxx, for example. Despite this simplification, we wish to pay attention to


how long the strings that match the variable x are. Thus, a one-repetitive-variable pattern π is essentially expressed by two integers: occx(π) and µ(x). We assume that the score function is non-decreasing with respect to occx(π) and to µ(x).

We compared the anthology Kokin-Shū with the two anthologies Gosen-Shū and Shin-Kokin-Shū. The score function we used is defined by score(π) = occx(π) · µ(x). The frequency distributions are shown in Table 2.

Table 2. Frequency distribution of similarity values in the comparison of Kokin-Shū with Gosen-Shū and Shin-Kokin-Shū. Note that similarity values cannot be 1, 2, 3, 5, 7 because of the definition of the pattern score function. The frequencies for any similarity values not shown here are all 0.

similarity       0          4        6      8   10
Gosen-Shū        1,390,030  178,331  1,944  37   8
Shin-Kokin-Shū   1,962,550  244,776  2,173  11   0

From the table, there seem to be relatively higher similarities between Kokin-Shū and Gosen-Shū than between Kokin-Shū and Shin-Kokin-Shū. We examined the first part of a list of poem pairs arranged in decreasing order of similarity value. However, we had the impression that most pairs with a high similarity value are dissimilar, probably because the pattern system we used is too simple to quantify the affinities concerning repetition techniques. See the poems shown in Fig. 3. All the poems are matched by the pattern ∗x∗x∗ with µ(x) = 4. The first three poems are similar to each other, while the other pairs are dissimilar. It seems that information about the locations at which a string occurs repeatedly is important.

ka-su-ka-no-ha / ke-fu-ha-na-ya-ki-so / wa-ka-ku-sa-no /
tsu-ma-mo-ko-mo-re-ri / wa-re-mo-ko-mo-re-ri (Kokin-Shū #17)

to-shi-no-u-chi-ni / ha-ru-ha-ki-ni-ke-ri / hi-to-to-se-wo /
ko-so-to-ya-i-ha-mu / ko-to-shi-to-ya-i-ha-mu (Kokin-Shū #1)

hi-ru-na-re-ya / mi-so-ma-ka-he-tsu-ru / tsu-ki-ka-ke-wo /
ke-fu-to-ya-i-ha-mu / ki-no-fu-to-ya-i-ha-mu (Gosen-Shū #1100)

ha-ru-ka-su-mi / ta-te-ru-ya-i-tsu-ko / mi-yo-shi-no-no /
yo-shi-no-no-ya-ma-ni / yu-ki-ha-fu-ri-tsu-tsu (Kokin-Shū #3)

tsu-ra-ka-ra-ha / o-na-shi-ko-ko-ro-ni / tsu-ra-ka-ra-m /
tsu-re-na-ki-hi-to-wo / ko-hi-m-to-mo-se-su (Gosen-Shū #592)

Fig. 3. Poems that are matched by the same pattern ∗x∗x∗ with µ(x) = 4. All pairs have the same similarity value. The first three poems can be considered to 'share' the same poetic device and are closely similar, while some pairs are dissimilar.
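Under this score function, the similarity of two poems is the best occ · L over all strings of length L that occur at least occ times, without overlap, in both poems. A brute-force sketch of ours (names hypothetical; the actual system relies on suffix-tree filtering rather than enumeration):

```python
def similarity(p, q, occ=2):
    """Best score occ * L, where L is the largest length such that some
    string of length L occurs at least `occ` times without overlap in
    BOTH p and q.  Brute force over substrings of p; adequate for
    strings of poem length."""
    def occurs(s, w, k):          # >= k non-overlapping occurrences of s in w?
        count, i = 0, 0
        while count < k:
            i = w.find(s, i)
            if i == -1:
                return False
            count, i = count + 1, i + len(s)
        return True

    best = 0
    for length in range(1, len(p) + 1):
        for i in range(len(p) - length + 1):
            s = p[i:i + length]
            if occurs(s, p, occ) and occurs(s, q, occ):
                best = max(best, length)
    return occ * best

# Two toy "poems" that both repeat the length-6 string "kakeru" twice:
print(similarity("XkakerukakeruY", "ZkakeruWkakeru"))  # 12 = 2 * 6
```

In the paper's terms, the winning pattern here is ∗x∗x∗ with µ(x) equal to the longest common repeating string.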


Moreover, we observed a lot of meaningless repetitions of strings, especially when µ(x) is relatively small, say, µ(x) = 2. It seems better to restrict ourselves to repetitions of strings occurring at the beginning or the end of a line in order to remove such repetitions. We assume the lines of a poem are parenthesized by [ and ]. Then the pattern [∗][x∗][x∗][∗][∗], for example, matches any poem whose second and third lines begin with the same string. We want to use the set of such patterns as the pattern set, but the number of such patterns is 3^5 = 243, which makes the similarity computation impractical. However, by using the minimal augmented suffix trees, we can filter out wasteful computation and perform the computation in reasonable time. The results are shown in Table 3. By examining the first part of the list, we confirmed that this time the pairs with a high similarity value are closely similar.

Table 3. Improved results: frequency distribution of similarity values in the comparison of Kokin-Shū with Gosen-Shū and Shin-Kokin-Shū. Note that similarity values cannot be 1, 2, 3, 5, 7 because of the definition of the pattern score function. The frequencies for any similarity values not shown here are all 0.

similarity       0          4    6   8  10
Gosen-Shū        1,569,925  407  14  1   3
Shin-Kokin-Shū   2,208,888  583  39  0   0

5.2 Characterization of Anthologies

Table 4 shows the 30 most frequent patterns occurring in Kokin-Shū. The table illustrates the variety of word-repetition techniques.

Table 4. The 30 most frequent patterns in Kokin-Shū.

freq.  pattern        freq.  pattern        freq.  pattern
11     [][][x][x][]   3      [x][][x][][]   1      [x][][x][][x]
10     [x][x][][][]   3      [][x][][][x]   1      [x][][][][x]
10     [][x][x][][]   3      [][x][][x][]   1      [x][][x][][]
7      [x][][][][x]   3      [][][x][][x]   1      [x][][][][x]
5      [][x][][][x]   3      [][][][x][x]   1      [][x][x][][]
5      [][][x][x][]   2      [x][][x][][]   1      [][x][][x][]
5      [][][][x][x]   2      [x][][][x][]   1      [][x][x][][]
4      [x][][][x][]   2      [x][][][][x]   1      [][][x][x][]
4      [x][x][][][]   2      [][x][x][][]   1      [][][x][][x]
4      [][][x][x][]   1      [x][x][][][x]  0      [x][x][x][x][x]


For every pattern of the above-mentioned form, we collected the poems matched by it from the first eight imperial anthologies shown in Table 1. The results are summarized in Table 5.

Table 5. Characterization of anthologies. I, II, III, IV, V, VI, VII, VIII represent Kokin-Shū, Gosen-Shū, Shūi-Shū, Go-Shūi-Shū, Kinyō-Shū, Shika-Shū, Senzai-Shū, Shin-Kokin-Shū, respectively.

(occx(π), µ(x))   I    II   III  IV   V    VI   VII  VIII
(2, 2)            96   104  118  108  24   22   77   112
(2, 3)            23   20   28   31   5    9    17   19
(2, 4)            10   7    13   5    4    5    3    1
(2, 5)            5    5    10   3    2    2    1    0
(3, 2)            2    11   2    3    0    1    1    0
(3, 3)            0    0    0    2    0    1    0    0
(3, 4)            0    0    0    0    0    0    0    0
(3, 5)            0    0    0    0    0    0    0    0
(4, 2)            0    5    0    0    0    0    0    0
(4, 3)            0    0    0    0    0    0    0    0
(4, 4)            0    0    0    0    0    0    0    0
(4, 5)            0    0    0    0    0    0    0    0
(5, 2)            0    1    0    0    0    0    0    0
(5, 3)            0    0    0    0    0    0    0    0
(5, 4)            0    0    0    0    0    0    0    0
(5, 5)            0    0    0    0    0    0    0    0

The first four anthologies have a considerable number of poems that use repetition of words, even for large values of µ(x). This contrasts with Shin-Kokin-Shū, where repetitions are limited to small values of µ(x). This might be a reflection of the editor's preferences or of literary trends. Pursuing the reason for such differences will provide clues for further investigation of literary trends or the editors' personalities.

6 Concluding Remarks

The Angluin pattern languages have been studied mainly from theoretical viewpoints; there have been no practical applications except for those limited to the read-once patterns. This paper presented the first practical application of the Angluin pattern languages that is not limited to read-once patterns. We hope that pattern matching and similarity computation for the patterns discussed in this paper will lead to discovering overlooked aspects of individual poets. We distinguished repetitive variables (i.e., those occurring more than once in a pattern) from non-repetitive variables, and associated each variable x with an integer µ(x) as a lower bound on the length of the strings the variable x matches. This enables us to give a pattern score depending upon the lengths of the strings substituted for the variables. For one-repetitive-variable patterns, we presented a way


of speeding up pattern matching, which uses substring statistics obtained from the minimal augmented suffix tree of a given string as a filter that excludes patterns which cannot match it. A preliminary experiment showed that this idea successfully speeds up pattern matching against many patterns applied repeatedly.

In this paper, we restricted ourselves to one-repetitive-variable patterns and to repetitions of words occurring at the beginning or the end of the lines of a Waka poem. This restriction played an important role, but we want to consider slightly more complex patterns. For example, the following two poems are matched by the pattern [∗][∗][x∗][xx∗][∗]:

[shi-ra-yu-ki-no][ya-he-fu-ri-shi-ke-ru][ka-he-ru-ya-ma]
[ka-he-ru-ka-he-ru-mo][o-i-ni-ke-ru-ka-na] (Kokin-Shū #902)

[a-fu-ko-to-ha][ma-ha-ra-ni-a-me-ru][i-yo-su-ta-re]
[i-yo-i-yo-wa-re-wo][wa-hi-sa-su-ru-ka-na] (Shika-Shū #244)

Moreover, the next poem is matched by the pattern [x∗][y∗][x∗][x∗][y∗], which contains two repetitive variables:

[wa-su-re-shi-to][i-hi-tsu-ru-na-ka-ha][wa-su-re-ke-ri]
[wa-su-re-mu-to-ko-so][i-fu-he-ka-ri-ke-re] (Go-Shūi-Shū #886)

Dealing with more general patterns like these will be future work.

References

1. A. V. Aho. Handbook of Theoretical Computer Science, volume A, Algorithms and Complexity, chapter 5, pages 255–295. Elsevier, Amsterdam, 1990.
2. D. Angluin. Finding patterns common to a set of strings. J. Comput. Syst. Sci., 21:46–62, 1980.
3. A. Apostolico and F. Preparata. Structural properties of the string statistics problem. J. Comput. Syst. Sci., 31(3):394–411, 1985.
4. A. Apostolico and F. Preparata. Data structures and algorithms for the string statistics problem. Algorithmica, 15(5):481–494, 1996.
5. H. Arimura. Text data mining with optimized pattern discovery. In Proc. 17th Workshop on Machine Intelligence, Cambridge, July 2000.
6. D. Gusfield. Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge University Press, New York, 1997.
7. H. Hori, S. Shimozono, M. Takeda, and A. Shinohara. Fragmentary pattern matching: Complexity, algorithms and applications for analyzing classic literary works. In Proc. 12th Annual International Symposium on Algorithms and Computation (ISAAC'01), 2001. To appear.
8. T. Kadota, M. Hirao, A. Ishino, M. Takeda, A. Shinohara, and F. Matsuo. Musical sequence comparison for melodic and rhythmic similarities. In Proc. 8th International Symposium on String Processing and Information Retrieval (SPIRE 2001). IEEE Computer Society, 2001. To appear.
9. T. Shinohara. Polynomial-time inference of pattern languages and its applications. In Proc. 7th IBM Symposium on Mathematical Foundations of Computer Science, pages 191–209, 1982.


10. M. Takeda, T. Fukuda, I. Nanri, M. Yamasaki, and K. Tamari. Discovering instances of poetic allusion from anthologies of classical Japanese poems. Theor. Comput. Sci. To appear.
11. M. Takeda, T. Matsumoto, T. Fukuda, and I. Nanri. Discovering characteristic expressions from literary works. Theor. Comput. Sci. To appear.
12. M. Yamasaki, M. Takeda, T. Fukuda, and I. Nanri. Discovering characteristic patterns from collections of classical Japanese poems. New Gener. Comput., 18(1):61–73, 2000.



Table of Contents

Invited Papers

The Discovery Science Project in Japan . . . 1
  Setsuo Arikawa
Discovering Mechanisms: A Computational Philosophy of Science Perspective . . . 3
  Lindley Darden

Queries Revisited . . . 16
  Dana Angluin
Inventing Discovery Tools: Combining Information Visualization with Data Mining . . . 17
  Ben Shneiderman
Robot Baby 2001 . . . 29
  Paul R. Cohen, Tim Oates, Niall Adams, and Carole R. Beal

Regular Papers

VML: A View Modeling Language for Computational Knowledge Discovery . . . 30
  Hideo Bannai, Yoshinori Tamada, Osamu Maruyama, and Satoru Miyano
Computational Discovery of Communicable Knowledge: Symposium Report . . . 45
  Sašo Džeroski and Pat Langley
Bounding Negative Information in Frequent Sets Algorithms . . . 50
  I. Fortes, J.L. Balcázar, and R. Morales
Functional Trees . . . 59
  João Gama
Spherical Horses and Shared Toothbrushes: Lessons Learned from a Workshop on Scientific and Technological Thinking . . . 74
  Michael E. Gorman, Alexandra Kincannon, and Matthew M. Mehalik
Clipping and Analyzing News Using Machine Learning Techniques . . . 87
  Hans Gründel, Tino Naphtali, Christian Wiech, Jan-Marian Gluba, Maiken Rohdenburg, and Tobias Scheffer
Towards Discovery of Deep and Wide First-Order Structures: A Case Study in the Domain of Mutagenicity . . . 100
  Tamás Horváth and Stefan Wrobel


Eliminating Useless Parts in Semi-structured Documents Using Alternation Counts . . . 113
  Daisuke Ikeda, Yasuhiro Yamada, and Sachio Hirokawa
Multicriterially Best Explanations . . . 128
  Naresh S. Iyer and John R. Josephson
Constructing Approximate Informative Basis of Association Rules . . . 141
  Kouta Kanda, Makoto Haraguchi, and Yoshiaki Okubo
Passage-Based Document Retrieval as a Tool for Text Mining with User's Information Needs . . . 155
  Koichi Kise, Markus Junker, Andreas Dengel, and Keinosuke Matsumoto
Automated Formulation of Reactions and Pathways in Nuclear Astrophysics: New Results . . . 170
  Sakir Kocabas
An Integrated Framework for Extended Discovery in Particle Physics . . . 182
  Sakir Kocabas and Pat Langley
Stimulating Discovery . . . 196
  Ronald N. Kostoff
Assisting Model-Discovery in Neuroendocrinology . . . 214
  Ashesh Mahidadia and Paul Compton
A General Theory of Deduction, Induction, and Learning . . . 228
  Eric Martin, Arun Sharma, and Frank Stephan
Learning Conformation Rules . . . 243
  Osamu Maruyama, Takayoshi Shoudai, Emiko Furuichi, Satoru Kuhara, and Satoru Miyano
Knowledge Navigation on Visualizing Complementary Documents . . . 258
  Naohiro Matsumura, Yukio Ohsawa, and Mitsuru Ishizuka
KeyWorld: Extracting Keywords from a Document as a Small World . . . 271
  Yutaka Matsuo, Yukio Ohsawa, and Mitsuru Ishizuka
A Method for Discovering Purified Web Communities . . . 282
  Tsuyoshi Murata
Divide and Conquer Machine Learning for a Genomics Analogy Problem . . . 290
  Ming Ouyang, John Case, and Joan Burnside


Towards a Method of Searching a Diverse Theory Space for Scientific Discovery . . . 304
  Joseph Phillips
Efficient Local Search in Conceptual Clustering . . . 323
  Céline Robardet and Fabien Feschet
Computational Revision of Quantitative Scientific Models . . . 336
  Kazumi Saito, Pat Langley, Trond Grenager, Christopher Potter, Alicia Torregrosa, and Steven A. Klooster
An Efficient Derivation for Elementary Formal Systems Based on Partial Unification . . . 350
  Noriko Sugimoto, Hiroki Ishizaka, and Takeshi Shinohara
Worst-Case Analysis of Rule Discovery . . . 365
  Einoshin Suzuki
Mining Semi-structured Data by Path Expressions . . . 378
  Katsuaki Taniguchi, Hiroshi Sakamoto, Hiroki Arimura, Shinichi Shimozono, and Setsuo Arikawa
Theory Revision in Equation Discovery . . . 389
  Ljupčo Todorovski and Sašo Džeroski
Simplified Training Algorithms for Hierarchical Hidden Markov Models . . . 401
  Nobuhisa Ueda and Taisuke Sato
Discovering Repetitive Expressions and Affinities from Anthologies of Classical Japanese Poems . . . 416
  Koichiro Yamamoto, Masayuki Takeda, Ayumi Shinohara, Tomoko Fukuda, and Ichirō Nanri

Poster Papers

Web Site Rating and Improvement Based on Hyperlink Structure . . . 429
  Hironori Hiraishi, Hisayoshi Kato, Naonori Ohtsuka, and Fumio Mizoguchi
A Practical Algorithm to Find the Best Episode Patterns . . . 435
  Masahiro Hirao, Shunsuke Inenaga, Ayumi Shinohara, Masayuki Takeda, and Setsuo Arikawa
Interactive Exploration of Time Series Data . . . 441
  Harry Hochheiser and Ben Shneiderman
Clustering Rules Using Empirical Similarity of Support Sets . . . 447
  Shreevardhan Lele, Bruce Golden, Kimberly Ozga, and Edward Wasil


Computational Lessons from a Cognitive Study of Invention . . . 452
  Marin Simina, Michael E. Gorman, and Janet L. Kolodner
Component-Based Framework for Virtual Information Materialization . . . 458
  Yuzuru Tanaka and Tsuyoshi Sugibuchi
Dynamic Aggregation to Support Pattern Discovery: A Case Study with Web Logs . . . 464
  Lida Tang and Ben Shneiderman
Separation of Photoelectrons via Multivariate Maxwellian Mixture Model . . . 470
  Genta Ueno, Nagatomo Nakamura, and Tomoyuki Higuchi
Logic of Drug Discovery: A Descriptive Model of a Practice in Neuropharmacology . . . 476
  Alexander P.M. van den Bosch
SCOOP: A Record Extractor without Knowledge on Input . . . 482
  Yasuhiro Yamada, Daisuke Ikeda, and Sachio Hirokawa
Meta-analysis of Mutagenes Discovery . . . 488
  Premysl Zak, Pavel Spacil, and Jaroslava Halova

Author Index . . . 493

Theory Revision in Equation Discovery

Ljupčo Todorovski and Sašo Džeroski
Department of Intelligent Systems, Jožef Stefan Institute
Jamova 39, 1000 Ljubljana, Slovenia
[email protected], [email protected]

Abstract. State-of-the-art equation discovery systems start the discovery process from scratch, rather than from an initial hypothesis in the space of equations. On the other hand, theory revision systems start from a given theory as an initial hypothesis and use new examples to improve its quality. Two quality criteria are usually used in theory revision systems: the first is the accuracy of the theory on new examples, and the second is the minimality of change of the original theory. In this paper, we formulate the problem of theory revision in the context of equation discovery. Moreover, we propose a theory revision method suitable for use with the equation discovery system Lagramge. Both the accuracy of the revised theory and the minimality of theory change are considered. The use of the method is illustrated on the problem of improving an existing equation-based model of the net production of carbon in the Earth ecosystem. Experiments show that small changes in the model parameters and structure considerably improve the accuracy of the model.

1 Introduction

Most of the existing equation discovery systems make use of a very limited portion of the theoretical knowledge available in the domain of interest. Usually, the domain knowledge is used to constrain the search space of possible equations to those that make sense from the point of view of the domain experts. One aspect of the domain knowledge that is usually neglected by equation discovery systems is the set of existing models in the domain. Rather than starting the search with an existing equation-based model, equation discovery systems always start their search from scratch. In contrast, theory revision systems [9,3] start with an existing model and use heuristic search to revise the model in order to improve its fit to observational data. Most of the work on theory revision concerns the revision of theories in propositional and first-order logic [9]. In this paper, we propose a flexible grammar-based framework for theory revision in equation discovery. The existing initial model is transformed into a grammar, and alternative productions are used to define a space of possible revised equation models. The grammar-based equation discovery system Lagramge [6] is then used to search through the space of revised models and find the one that fits the observational data best. The use of the proposed framework is illustrated on revising an equation-based earth-science model of the net production of carbon in the Earth ecosystem.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 389–400, 2001.
© Springer-Verlag Berlin Heidelberg 2001


The paper is organized as follows. The following section gives a brief introduction to grammar-based equation discovery. Typical approaches to the revision of theories in propositional and first-order logic are briefly reviewed in Section 3. The grammar-based framework for theory revision in equation discovery is presented in Section 4. Section 5 presents the experiments with revising the earth-science equation model. The last section summarizes the paper, discusses related work, and gives directions for further work.

2 Equation Discovery

Equation discovery is the area of machine learning that develops methods for automated discovery of quantitative laws, expressed in the form of equations, in collections of measured data [1]. Equation discovery systems heuristically search through a subset of the space of all possible equations and try to find the equation that fits the measured data best. Different equation discovery systems explore different spaces of possible equations. Early equation discovery systems used pre-defined (built-in) spaces that were small enough to allow effective heuristic (or exhaustive) search. However, this approach does not allow the user of the equation discovery system to tailor the space of possible equations to the domain of interest. Recent equation discovery systems therefore use different approaches to allow the user to restrict the space of possible equations. In equation discovery systems that are based on genetic programming, the user is allowed to specify a set of algebraic operators that can be used. A similar approach has been used in the EF [10] equation discovery system. The equation discovery system SDS [7] effectively uses user-provided scale-type information about the dimensions of the system variables and is capable of discovering complex equations from noisy data. Finally, the equation discovery system Lagramge [6] allows the user to specify the space of possible equations using a context free grammar. Note that grammars are a more general and powerful mechanism for tailoring the space of equations to the domain of use than the ones used in SDS [7] and EF [10]. In the rest of this section we describe the grammar-based approach to equation discovery used in Lagramge.

2.1 Grammar-Based Equation Discovery

The problem of grammar-based equation discovery can be formalized as follows.

Given:
– a set of variables V = v1, v2, . . . , vn of the observed system, including a target dependent variable vd ∈ V;
– a grammar G; and
– a table M of observations (measured values) of the system variables.

Find a model E in the form of one or more algebraic or differential equations defining the target variable vd that:
1. is derived by the grammar G; and


2. minimizes the discrepancy between the observed values of the target variable vd and the values of vd obtained by simulating the model.

An example of a grammar for equation discovery is given in Table 1. The grammar contains a set of two nonterminal symbols {P_Vdiff, Vdiff}, with a set of productions attached to each of them, and a set of three terminal symbols {v1, v2, const[0:1]}. The semantics of the terminal and nonterminal symbols in the grammar are explained below. There are two types of terminal symbols used in grammars for equation discovery. The first group is used to denote the variables of the observed system (v1 and v2 in the example grammar from Table 1). The second group, terminal symbols of the form const[l:h], is used to denote constant parameters in the equation model whose values have to be fitted against the observational data from M. The constraint [l:h] specifies that the value v of the constant parameter should be within the interval l ≤ v ≤ h.

Table 1. An example of a grammar for equation discovery defining the space of polynomials of a single variable vdiff = v1 − v2.

P_Vdiff -> const[0:1]
P_Vdiff -> const[0:1] + (P_Vdiff) * (Vdiff)
Vdiff -> v1 - v2
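To make the derivation process concrete, here is a small sketch (ours, not part of Lagramge) that enumerates the expressions derivable from the Table 1 grammar with a bounded number of production applications; nonterminal names are written with underscores so they can serve as identifiers.

```python
# Illustrative sketch: enumerate expressions derivable from the polynomial
# grammar of Table 1, using at most `depth` production applications.

GRAMMAR = {
    "P_Vdiff": [
        ["const[0:1]"],
        ["const[0:1]", "+", "(", "P_Vdiff", ")", "*", "(", "Vdiff", ")"],
    ],
    "Vdiff": [["v1", "-", "v2"]],
}

def expand(symbols, depth):
    """Yield all fully terminal sentences derivable from `symbols`."""
    for i, sym in enumerate(symbols):
        if sym in GRAMMAR:  # leftmost nonterminal
            if depth == 0:
                return
            for production in GRAMMAR[sym]:
                yield from expand(symbols[:i] + production + symbols[i + 1:],
                                  depth - 1)
            return
    yield " ".join(symbols)  # no nonterminals left: a complete expression

for expr in expand(["P_Vdiff"], 4):
    print(expr)
```

With a budget of four expansions, this prints the constant polynomial and the degree-one polynomial of vdiff; raising the budget yields higher-degree polynomials.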

The nonterminal symbol Vdiff defines an intermediate variable that is the difference between the two system variables v1 and v2. This is done with the single production for the nonterminal symbol Vdiff. The other nonterminal symbol, P_Vdiff, is used to build polynomials of an arbitrary degree.

2.2 Lagramge

The equation discovery system Lagramge applies heuristic (or exhaustive) search through the space of models generated by the user-provided grammar G. The values of the constant parameters (terminal symbols const) in the generated models are fitted against the input data M using a standard non-linear constrained optimization method. After fitting the values of the constant parameters, the model is evaluated according to the sum of squared errors (the SSE heuristic function [6]), i.e., the differences between the observed values of the target variable vd and the values of vd calculated by the model. An alternative MDL heuristic function that also takes into account the complexity of the model can be used instead [6].
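The parameter-fitting step can be illustrated on a hypothetical one-production model vd = c · (v1 − v2) with the constraint const[l:h]. Lagramge itself uses a general non-linear constrained optimizer; in this linear toy case the SSE minimizer has a closed form, which we then clip to [l, h].

```python
# Minimal sketch (not Lagramge's optimizer): fit the single constant c in
# vd = c * (v1 - v2) by minimizing the sum of squared errors, subject to
# the interval constraint const[l:h].

def fit_constant(rows, l, h):
    """rows: list of (v1, v2, vd) observations. Returns (c, sse)."""
    xs = [v1 - v2 for v1, v2, _ in rows]
    ys = [vd for _, _, vd in rows]
    # Unconstrained least-squares solution of min_c sum (y - c*x)^2
    c = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    c = min(max(c, l), h)  # enforce const[l:h]
    sse = sum((y - c * x) ** 2 for x, y in zip(xs, ys))
    return c, sse

data = [(3.0, 1.0, 1.0), (5.0, 1.0, 2.0), (2.0, 0.0, 1.0)]
print(fit_constant(data, 0.0, 1.0))
```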

3 Theory Revision

The problem of theory revision can be deﬁned as follows: Given an imperfect domain theory in the form of classiﬁcation rules and a set of classiﬁed examples,


ﬁnd an approximately minimal syntactic revision of the domain theory that correctly classiﬁes all of the examples. A representative system that addresses this problem is Either [3]. Either reﬁnes propositional Horn-clause theories using a suite of abductive, deductive and inductive techniques. Deduction is used to identify the problems with the domain theory, while abduction and induction are used to correct them. The problem of theory revision has received a lot of attention in the ﬁeld of inductive logic programming [2], where a number of approaches have been developed for revising theories in the form of ﬁrst-order Horn clause theories. For an overview, we refer the reader to [9]. Two kinds of problems are encountered within imperfect domain theories: over-generality occurs when an example is classiﬁed into a class other than the correct one, while over-speciﬁcity occurs when an example cannot be proven to belong to the correct class. Note that a single example can be misclassiﬁed both ways at the same time. Overly general rules are either specialized by adding new conditions to their antecedents or are deleted from the knowledge base. Problems of over-speciﬁcity are solved by generalizing the antecedents of existing rules, e.g., by removing conditions from them, or by the induction of new rules.
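As a toy illustration (ours, with hypothetical rule contents), the two basic revision operators on propositional rules can be sketched by representing a rule as a set of antecedent conditions plus a class label.

```python
# Toy sketch of the two revision operators described above: specialization
# adds a condition to an overly general rule; generalization drops one.
# The rules themselves are hypothetical examples.

def specialize(rule, condition):
    antecedent, cls = rule
    return (antecedent | {condition}, cls)  # stricter rule

def generalize(rule, condition):
    antecedent, cls = rule
    return (antecedent - {condition}, cls)  # more permissive rule

rule = ({"has_wings", "lays_eggs"}, "bird")
print(specialize(rule, "has_feathers"))
print(generalize(rule, "lays_eggs"))
```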

4 Grammar-Based Theory Revision of Equation Models

4.1 Problem Definition

The problem of grammar-based theory revision can be formalized as follows.

Given:
– a set of variables V = v1, v2, . . . , vn of the observed system, including a target dependent variable vd ∈ V;
– an existing model E, represented as an equation (or equations) defining the target variable vd; note that this can actually be a set of (algebraic or differential) equations defining the value of the target variable vd;
– a grammar G that derives the model E; and
– a table M of observations (measured values) of the system variables.

Find a revised model E′ (an equation or set of equations as above) that:
1. is derived by the grammar G;
2. minimizes the discrepancy between the observed values of the target variable vd and the values of vd obtained by simulating the model; and
3. differs from the existing model E as little as possible.

Items 2 and 3 above would typically appear in a formulation of a general theory revision problem, regardless of the language in which the theories are expressed. In contrast to our formulation, however, the possible changes to the initial theory would be specified in terms of revision operators that can be applied to the initial and intermediate theories. As theories in theory revision settings are typically logical theories, the operators typically include addition/deletion of entire rules (propositional or first-order Horn clauses) and addition/deletion of conditions in individual rules.

4.2 From an Initial Model to a Grammar

In a typical setting of revising an existing scientific model, we would only have observational data and a model, i.e., an equation developed by scientists to explain a particular phenomenon. A grammar that would explain how this model was actually derived and provide options for alternative models is typically not available. This is especially true for simpler models. However, when the model (equation) is complex, it is only rarely written as a single equation defining the target variable; rather, it is written as a set of equations that typically contains equations defining intermediate variables. The latter typically define meaningful concepts in the domain of discourse. Often, several alternative equations defining an intermediate variable are possible and the modeling scientist chooses one of them: the alternatives would rarely (if ever) be documented in the model itself, but might be mentioned in a scientific article describing the derived model and the modeling process.

Table 2. Equations defining the NPPc variable in the CASA earth-science model.

NPPc = max(0, E · IPAR)
E = 0.389 · T1 · T2 · W
T1 = 0.8 + 0.02 · topt − 0.0005 · topt²
T2 = 1.1814 / ((1 + e^(0.2·(TDIFF − 10))) · (1 + e^(0.3·(−TDIFF − 10))))
TDIFF = topt − tempc
W = 0.5 + 0.5 · eet / PET
PET = 1.6 · (10 · max(tempc, 0) / ahi)^A · pet_tw_m
A = 0.000000675 · ahi³ − 0.0000771 · ahi² + 0.01792 · ahi + 0.49239
IPAR = FPAR_FAS · monthly_solar · SOL_CONV · 0.5
FPAR_FAS = min((SR_FAS − 1.08) / srdiff, 0.95)
SR_FAS = (1 + fas_ndvi / 1000) / (1 − fas_ndvi / 1000)
SOL_CONV = 0.0864 · days_per_month
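For readers who want to run the model, the equations of Table 2 translate directly into code. The sketch below is a plain transcription; the underscore spellings of the variable names are ours.

```python
import math

# Direct transcription of the Table 2 NPPc equations, for illustration.
# Assumes ahi > 0 and PET != 0 for the supplied inputs.

def nppc(topt, tempc, eet, ahi, pet_tw_m, fas_ndvi,
         monthly_solar, days_per_month, srdiff):
    T1 = 0.8 + 0.02 * topt - 0.0005 * topt ** 2
    TDIFF = topt - tempc
    T2 = 1.1814 / ((1 + math.exp(0.2 * (TDIFF - 10))) *
                   (1 + math.exp(0.3 * (-TDIFF - 10))))
    A = (0.000000675 * ahi ** 3 - 0.0000771 * ahi ** 2
         + 0.01792 * ahi + 0.49239)
    PET = 1.6 * (10 * max(tempc, 0) / ahi) ** A * pet_tw_m
    W = 0.5 + 0.5 * eet / PET          # water stress term
    E = 0.389 * T1 * T2 * W            # combines temperature and water terms
    SR_FAS = (1 + fas_ndvi / 1000) / (1 - fas_ndvi / 1000)
    FPAR_FAS = min((SR_FAS - 1.08) / srdiff, 0.95)
    SOL_CONV = 0.0864 * days_per_month
    IPAR = FPAR_FAS * monthly_solar * SOL_CONV * 0.5
    return max(0, E * IPAR)            # net production is non-negative
```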

A set of equations defining a target variable through some intermediate variables can easily be turned into a grammar, as demonstrated in Tables 2 and 3, which give an earth-science model and a grammar that derives this model only. Having the grammar in Table 3, however, enables us to specify alternative models by providing additional productions for the nonterminal symbols in the grammar. Additional productions for intermediate variables specify alternative choices, only one of which will eventually be chosen for the final model. Observational data would then be used to select among combinations of such choices, by applying a grammar-based equation discovery system (such as Lagramge) to the observational data with the grammar that includes the additional productions. While the presented approach does take into account the initial model, it may allow for a completely different model to be


Table 3. Grammar derived from the equations for the NPPc variable in the CASA earth-science model in Table 2. The grammar generates the original equations only.

NPPc -> max(const[0:0], E * IPAR)
E -> const[0.389:0.389] * T1 * T2 * W
T1 -> const[0.8:0.8] + const[0.02:0.02] * topt - const[0.0005:0.0005] * topt * topt
T2 -> const[1.1814:1.1814] / ((const[1:1] + exp(const[0.2:0.2] * (TDIFF - const[10:10]))) * (const[1:1] + exp(const[0.3:0.3] * (-TDIFF - const[10:10]))))
TDIFF -> topt - tempc
W -> const[0.5:0.5] + const[0.5:0.5] * eet / max(PET, const[0:0])
PET -> const[1.6:1.6] * pow(const[10:10] * max(tempc, const[0:0]) / ahi, A) * pet_tw_m
A -> const[0.000000675:0.000000675] * ahi * ahi * ahi - const[0.0000771:0.0000771] * ahi * ahi + const[0.01792:0.01792] * ahi + const[0.49239:0.49239]
IPAR -> FPAR_FAS * solar * SOL_CONV * const[0.5:0.5]
FPAR_FAS -> min((SR_FAS - const[1.08:1.08]) / srdiff, const[0.95:0.95])
SR_FAS -> (const[1:1] + fas_ndvi / const[1000:1000]) / (const[1:1] - fas_ndvi / const[1000:1000])
SOL_CONV -> const[0.0864:0.0864] * days_per_month

derived, depending on whether productions for alternative definitions are provided for each of the intermediate variables. It is here that the minimal revision/change principle comes into play: among theories of similar quality (fit to the data), theories that are closer to the original theory are to be preferred. Since we are dealing with theories that are not necessarily expressed in logic (e.g., equations), only syntactic criteria of minimality of change are applicable in a straightforward fashion.

4.3 Typical Alternative Productions

Note that when an alternative production is specified for an intermediate variable, there are (at least in principle) no restrictions on these productions. For example, they can introduce new intermediate variables and productions defining them. They can also specify arbitrary functional forms (in the case of equations). However, they do have to eventually derive (in the context of the entire grammar) valid sub-expressions involving the set of terminal symbols (system variables) associated with the initial model. A very common alternative production would replace the particular constants on the right-hand side with generic constants, allowing the equation discovery system to re-fit them to the given observational data. In the grammar from Table 3 that change can be achieved by replacing a terminal symbol of the form const[v:v], denoting a constant parameter with fixed value v, with a


generic symbol const that allows for an arbitrary value of the particular constant parameter. In our experiments with the earth-science CASA model we allow for a 100% change of the original values of the constant parameters in the initial model. This can be specified by replacing the terminal symbol const[v:v] with const[0:2*v], where the interval [0 : 2·v] is equal to [(v − 100%·v) : (v + 100%·v)] (a 100% relative change). A slightly more complex alternative production would replace a particular polynomial on the right-hand side of a production with an arbitrary polynomial of the same (intermediate) variables. For example, the production for T1 in the grammar from Table 3 can be replaced by a grammar, similar to the example grammar from Table 1, for generating an arbitrary polynomial of the variable topt.

4.4 Current Implementation

Our current implementation of the theory revision approach to equation discovery outlined above involves applying Lagramge to the given observational data and a grammar specifying the possible alternative productions to be used in theory revision. The observational data are used to select a particular combination of the possible alternatives: note that these also include leaving parts of the model unchanged (as the original productions are also part of the grammar) even if alternative productions for them exist. We currently do not have an implementation of the minimal change preference integrated within Lagramge. This, however, can be achieved in a relatively straightforward manner. One of the heuristic functions used by Lagramge to search the space of equations, called MDL, takes into account the degree of fit (sum of squared errors) as well as the size of the equation model. A reasonable approach to implementing the minimality of change principle would be to replace the second term in the MDL heuristic: replace the size of the equation with a distance between the current model and the initial model. The distance measure can be a distance on tree-structured terms, which would take into account the number and complexity of the alternative productions taken to derive the current equation.
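One simple instance of such a distance, given here as an illustrative assumption rather than the measure used in Lagramge, counts the nodes on which two expression trees disagree; expressions are encoded as nested tuples of the form (operator, child, child, ...).

```python
# Sketch of a naive distance on tree-structured terms: count the positions
# at which two expression trees disagree. This is an assumed illustration,
# not the distance measure implemented in Lagramge.

def tree_distance(a, b):
    if not isinstance(a, tuple) or not isinstance(b, tuple):
        return 0 if a == b else 1          # compare leaves
    d = 0 if a[0] == b[0] else 1           # compare root labels
    for ca, cb in zip(a[1:], b[1:]):
        d += tree_distance(ca, cb)
    d += abs(len(a) - len(b))              # unmatched children each cost 1
    return d

initial = ("+", ("*", "c1", "topt"), "c2")
revised = ("+", ("*", "c1", "tempc"), "c2")
print(tree_distance(initial, revised))     # the trees differ in a single leaf
```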

5 Experiments in Revising an Earth-Science Model

We illustrate the use of the proposed framework for theory revision in equation discovery on the problem of revising one part of the earth-science CASA model [4]. The CASA model predicts annual global fluxes in trace gas production on the basis of a number of measured (observed) variables, such as surface temperatures, satellite observations of the land surface, soil properties, etc. Because the whole CASA model is quite a complex system of difference and algebraic equations, we focused on the revision of the NPPc part of CASA (CASA-NPPc), presented in Table 2, which is used to predict the monthly net production of carbon at a given location.


The values of the input variables (terminal symbols in the grammar from Table 3) were measured (and/or calculated) for 303 locations on the Earth, providing a data set with 303 examples. In order to evaluate the accuracy of the model on unseen data we applied the standard ten-fold cross-validation method. The error of the original and revised models was calculated as the root mean squared error, defined as

RMSE = sqrt( Σ_{i=1}^{N} (NPPc_i − ^NPPc_i)² / N ),

where N is the number of data points, and NPPc_i and ^NPPc_i are the observed value and the value calculated by the model, respectively.

5.1 Revisions Used in the Experiments

As described in Section 4 we ﬁrst transformed the given NPPc model into a grammar (given in Table 3) that derives that model only. Furthermore, we added alternative productions to the grammar that deﬁne the space of possible revisions. We used six alternative possibilities for the revision of the NPPc model, described below. E-c-100 : we allowed a 100% relative change of the constant parameter 0.389 in the equation deﬁning the intermediate variable E. Therefore, we replaced the original production for nonterminal symbol E in the grammar with E -> const[0:0.778] * T1 * T2 * W, i.e., changed the constraint on the value of the constant parameter from the original const[0.389:0.389], which ﬁxes the value of the constant parameter, to const[0:0.778], which allows a 100% relative change of the original value of the constant parameter ([0 : 0.778] being equal to [(0.389 − 100% · 0.389) : (0.389 + 100% · 0.389)]). T1-c-100, T2-c-100 : we allowed the same revisions as the one described above on the right hand sides of the productions for T1 and T2. SR FAS-c-20 : we allowed 20% relative change of the constant parameters values in the equation deﬁning the intermediate variable SR FAS . The relative change of 20% was used to avoid values of the constant parameters lower than 800, which would cause singularity (division by zero) problems in the formula for calculating SR FAS . T1-s : we allowed the original second degree polynomial for calculation of T1 = 0.8 + 0.02 · topt − 0.0005 · topt 2 with an arbitrary polynomial of the same variable topt. The following alternative productions were added to the grammar from Table 3 for this purpose: T1 -> const and T1 -> const + (T1) * topt. T2-s : the graph of the dependency between the T2 and TDIFF variables shows a Gaussian-like slightly asymmetrical dependency curve. 
Following the fact that this kind of dependency can be approximated also with a higher degree polynomial we replaced the original T1 production in the grammar from Table 3 with two productions (similar to the ones for T1-s, presented above) that deﬁne an arbitrary polynomial of the TDIFF variable. In addition to these six possibilities for revising the CASA-NPPc model we also used diﬀerent combinations of them.
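The constant-relaxation revisions (E-c-100 and its relatives) can be generated mechanically from the grammar text. A small sketch (ours, not part of Lagramge) that widens every fixed const[v:v] by a given relative change:

```python
import re

# Sketch of mechanically generating an E-c-100 style revision: every fixed
# parameter const[v:v] in a production is relaxed to const[v-r*v : v+r*v],
# where r is the allowed relative change (r = 1.0 gives a 100% change).

def relax_constants(production, rel_change=1.0):
    def widen(m):
        v = float(m.group(1))
        lo, hi = v - rel_change * v, v + rel_change * v
        return "const[%g:%g]" % (lo, hi)
    # match const[v:v] whose lower and upper bounds are identical
    return re.sub(r"const\[([0-9.eE+-]+):\1\]", widen, production)

print(relax_constants("E -> const[0.389:0.389] * T1 * T2 * W"))
# -> E -> const[0:0.778] * T1 * T2 * W
```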

5.2 Results of the Experiments

The results of the experiments with different alternative grammars for revision are presented in Table 4.

Table 4. Error reduction (in %) gained with revising the original CASA-NPPc model using different grammars for revision.

Grammar                                          Reduction of RMSE (in %)
SR_FAS-c-20                                      14.93
T2-c-100                                         13.25
T1-s                                             13.05
T2-s                                             12.90
E-c-100                                          12.59
T1-c-100                                         12.39
SR_FAS-c-20 + T2-s                               15.56
SR_FAS-c-20 + T1-s                               15.46
T2-c-100 + T1-s                                  13.92
T2-s + T1-s                                      13.30
SR_FAS-c-20 + T2-c-100                           11.55
SR_FAS-c-20 + T2-s + T1-s + E-c-100              16.19
SR_FAS-c-20 + T2-s + T1-c-100 + E-c-100          15.44
SR_FAS-c-20 + T2-c-100 + T1-s + E-c-100          14.82
SR_FAS-c-20 + T2-c-100 + T1-c-100 + E-c-100      12.92

The first six rows of Table 4 show that revising the values of the constant parameters in the equation for calculating SR_FAS gives the greatest improvement of the original model. The original value of the parameters (equal to 1000) defines an almost linear dependence of SR_FAS on the observed variable srdiff. The revised values of the constant parameters were equal to 800 (the lowest possible values), which increases the non-linearity of the dependence. Allowing lower values of the parameters in the equation gives further improvement, but singularity (division by zero) problems appear due to the range of the srdiff variable. The analysis of the results of the structural revisions shows the following. The T1-s revision causes the second-degree polynomial for calculating the T1 variable to be replaced by a fourth-degree polynomial. On the other hand, the structural revision T2-s reduced the complex formula for calculating T2 to a constant value. This is a surprising result that would have to be discussed with the Earth science experts who built the CASA model. Furthermore, we tested pairwise combinations of the six model refinements. The results are presented in the second part of Table 4. They show that improvements gained using individual refinement grammars do not combine additively. However, combinations do increase the improvements: the maximal improvement gained with pairwise combinations is 15.56%, compared with the highest improvement of 14.93% gained using individual revisions.


Finally, the results of the experiments with combining all the refinements are presented in the last four rows of Table 4. Note, however, that the revisions of the T1 and T2 structures (T1-s and T2-s) are mutually exclusive with the respective revisions of the T1 and T2 constants (T1-c-100 and T2-c-100). Therefore, four combinations are possible; the one combining the structural revisions of the T1 and T2 formulas with the revisions of the values of the constant parameters in the formulas for SR_FAS and E gives the maximal improvement of the accuracy, 16.19%. In sum, the presented results of the experiments show that small revisions of the CASA-NPPc model parameters and structure considerably improve the accuracy of the model, the maximal improvement being above 16%. However, Earth science experts should also evaluate the comprehensibility and acceptability of the revised models. If some of the revisions generate models that do not make sense from their point of view, new alternative productions would have to be defined to reflect the experts' comments and allow only revisions that lead to acceptable models. Note here that most of the error reduction is gained using a fairly simple revision operator that changes the values of the constant parameters in the SR_FAS equation. Only minor additional reductions can be obtained by combining this revision with any of the other five revision operators described above. Therefore, this revision would probably be the optimal one from the point of view of the minimality of change criterion discussed in Section 4.

6 Conclusions and Discussion

We have presented a general framework for the revision of theories in the form of (sets of) quantitative equations. The method is based on grammars, which can be derived from the original theory. Domain experts can focus the revision process on parts of the model and guide it by providing relevant alternative productions. In this way, the revision process can be interactive, as is quite often the case when revising theories expressed in logic. We have applied our approach to the problem of revising an existing equation-based model of the net production of carbon in the Earth ecosystem. Experimental results show that small revisions in both the values of the constant parameters and the structure of the equations considerably reduce the error of the model, by more than 16%. Saito et al. [5] address the same task of revising scientific models in the form of equations. Their approach is based on transforming parts of the model into a neural network, training the neural network, and then transforming the trained network back into an expression/equation. This indirect approach is limited to revising the parameters or form of one equation in the model at a time. It also requires some handcrafting to encode the equations as a neural network: the authors state that "the need to translate the existing CASA model into a declarative form that our discovery system can manipulate" is a challenge to their approach.


Our approach allows for a straightforward representation of existing scientific models as grammars, which can then be directly manipulated and used to perform theory revision. The transition from the initial model to a grammar is so straightforward that we consider automating this process one of the topics for immediate further work. Revisions to several equations of the original model may be considered simultaneously, as illustrated by the experiments performed. Whigham and Recknagel [8] also consider the specific task of revising an existing model, for predicting chlorophyll-a, using measured data. They use a genetic algorithm to calibrate the equation parameters. They also use a grammar-based genetic programming approach to revise the structure of two sub-parts (one at a time) of the initial model. A most general grammar, which can derive an arbitrary expression using the allowed arithmetic operators and functions, was used for each of the two sub-parts. Unlike this paper, Whigham and Recknagel [8] do not present a general framework for the revision of quantitative scientific models. Their approach is similar to ours in that they use grammars to specify possible revisions. However, the grammars they use are too general to provide much information about the domain at hand. Also, they do not consider the notion of minimality of revision, and genetic programming typically produces very large expressions without a simplicity bias. As already mentioned, an immediate topic for further work is to automate the grammar generation from the initial model. Another challenge is to provide domain experts with an interactive tool for testing out different alternatives for revision. Furthermore, integrating the minimality of change criterion in Lagramge is an open issue. The minimal description length (MDL) heuristic in Lagramge can be adapted to take into account the distance between the current and the initial equation model.
Finally, we plan to apply the proposed framework to the task of revision of other portions of the CASA model as well as revision of other equation based environmental models.

Acknowledgments. We thank Christopher Potter, Steven Klooster, and Alicia Torregrosa from NASA Ames Research Center for making available both the CASA model and the relevant data set.

References

1. P. Langley, H. A. Simon, G. L. Bradshaw, and J. M. Żytkow. Scientific Discovery. MIT Press, Cambridge, MA, 1987.
2. N. Lavrač and S. Džeroski. Inductive Logic Programming: Techniques and Applications. Ellis Horwood, Chichester, 1994. Freely available at http://www-ai.ijs.si/SasoDzeroski/ILPBook/.
3. D. Ourston and R. J. Mooney. Theory refinement combining analytical and empirical methods. Artificial Intelligence, 66:273–309, 1994.


4. C. S. Potter and S. A. Klooster. Interannual variability in soil trace gas (CO2, N2O, NO) fluxes and analysis of controllers on regional to global scales. Global Biogeochemical Cycles, 12:621–635, 1998.
5. K. Saito, P. Langley, and T. Grenager. The computational revision of quantitative scientific models. 2001. Submitted to the Discovery Science conference.
6. L. Todorovski and S. Džeroski. Declarative bias in equation discovery. In Proceedings of the Fourteenth International Conference on Machine Learning, pages 376–384, Nashville, TN, 1997. Morgan Kaufmann.
7. T. Washio and H. Motoda. Discovering admissible models of complex systems based on scale-types and identity constraints. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, volume 2, pages 810–817, Nagoya, Japan, 1997. Morgan Kaufmann.
8. P. A. Whigham and F. Recknagel. Predicting chlorophyll-a in freshwater lakes by hybridising process-based models and genetic algorithms. In Book of Abstracts of the Second International Conference on Applications of Machine Learning to Ecological Modeling. Adelaide University, 2000.
9. S. Wrobel. First order theory refinement. In L. De Raedt, editor, Advances in Inductive Logic Programming, pages 14–33. IOS Press, 1996.
10. R. Zembowicz and J. M. Żytkow. Discovery of equations: Experimental evaluation of convergence. In Proceedings of the Tenth National Conference on Artificial Intelligence, pages 70–75, San Jose, CA, 1992. Morgan Kaufmann.

Mining Semi-structured Data by Path Expressions

Katsuaki Taniguchi¹, Hiroshi Sakamoto¹, Hiroki Arimura¹,², Shinichi Shimozono³, and Setsuo Arikawa¹

¹ Department of Informatics, Kyushu University, Fukuoka 812-8581, Japan
{k-tani, hiroshi, arim, arikawa}@i.kyushu-u.ac.jp
² PRESTO, Japan Science and Technology Co., Japan
³ Department of Artificial Intelligence, Kyushu Institute of Technology, Iizuka 820-8502, Japan
[email protected]

Abstract. A new data model for filtering semi-structured texts is presented. Given positive and negative examples of HTML pages labeled by a labelling function, the HTML pages are divided into sets of paths using an XML parser. A path is a sequence of element nodes and text nodes such that a text node appears only at the tail of the path. The labels of an element node and a text node are called a tag and a text, respectively. The goal of a mining algorithm is to find an interesting pattern, called an association path, which is a pair of a tag-sequence t and a word-sequence w represented by the word-association pattern [1]. An association path (t, w) agrees with a labelling function on a path p if t is a subsequence of the tag-sequence of p and w matches the text of p iff p is in a positive example. The importance of such an association path α is measured by the agreement of a labelling function on the given data, i.e., the number of paths on which α agrees with the labelling function. We present a mining algorithm for this problem and show the efficiency of this model by experiments.

1 Introduction

In information extraction, one of the central problems in Web mining is to detect the occurrences or the regions of useful texts. In the case of Web data, this problem is particularly difficult because a rich logical structure cannot be represented by the limited tags of HTML. The framework of wrapper induction introduced by Kushmerick [13] is a new approach to handling this difficulty. The most interesting result of his study is to show the effectiveness and efficiency of simple wrappers with string delimiters in information extraction tasks. Besides his work, other extraction models can be found, for example, in [8,9,10,11,15,17]. The target class, called HTML pages, of the wrapper induction model is restricted such that a page is defined by a finite repetition of a sequence of

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 378–388, 2001.
© Springer-Verlag Berlin Heidelberg 2001

Mining Semi-structured Data by Path Expressions

379

attributes. The attributes are the data which an algorithm has to extract. In a learning model, a learning algorithm takes as input labeled examples such that the labels indicate whether they are positive or negative data. This strategy is useful for learning a concept of the wrapper class. However, in case a concept class is hard to learn from a small number of examples, the model may not be effective. This difficulty is critical from the point of view of implementation, since the labeled examples are actually made by human inspection. Thus, we would like to present a mining model that decides which portion of given data is important, together with an automatic process to construct a large labeled sample.

The aim of this paper is to find rules for filtering semi-structured texts according to users' interests. An HTML/XML file can be considered as an ordered labeled tree. We assume that each node is either an element node or a text node. Each node has two types of labels, called the name and the value. An element node corresponds to a tag: the name of the node is the tag name, and the value of the node is empty. A text node corresponds to a portion of plain text in an HTML file: its name is the reserved string Text, and its value is the text itself.

A filtering rule is a sequence s = α1, . . . , αk, β, where each αi is a tag name and β is a word-association pattern [1], which is a string consisting of several words and the wild card ∗. A word-association pattern matches a string if there is a possible substitution for all occurrences of ∗. Given s and a semi-structured text, using an XML parser, we can easily construct the tree structure and decompose the tree into the set P of paths. Each path contains at most one text node, at its tail. The semantics of the filtering rule s for P is defined as follows. For each p ∈ P, s matches p if α1, . . . , αk is a subsequence of the sequence of tag names of p and the tail of p is a text node such that β matches the value of the node.

Such a filtering rule can be considered as a simple decision tree to extract texts from paths in HTML trees. Each αi represents a test on a node. Unless a test fails, we continue with the next test αi+1. Finally, the value of the text node is extracted according to the pattern β. In other words, this rule is a pair of a tag pattern and an association pattern (α, β), where a tag pattern is a sequence α = (α1, . . . , αk) of tag names such that these tags frequently appear in positive examples together with the association pattern. Such a filtering rule is called an association path in this paper. We can use this notion as a measure to decide the importance of a keyword in a text. We show the efficiency of the association paths by experiments.

This paper is organized as follows. In Section 2, we define the data model for HTML pages, HTML trees, and path expressions. In Section 3, we review the definition of the word-association pattern in [1] and formulate the mining problem, called the Association Path problem, of this paper.
Next we describe a mining algorithm which finds an association path for a given large collection of HTML pages. In Section 4, we show several experimental results. In the first experiment, the set of positive examples is a collection of HTML texts containing the keyword "TSP" and the set of negative examples is one containing "NP". These key-

380

K. Taniguchi et al.

words mean the travelling salesman problem and the NP-optimization problems of computational complexity theory, respectively. The aim is to find an association path that characterizes the notion TSP compared to NP. In this experiment, the algorithm found some interesting association paths. For the next experiment, we choose the keyword "DNA" for the positive examples. Compared to the first result, the algorithm found few interesting paths. In Section 5, we conclude this study.

2

The Data Model

In this section, we introduce the data model considered in this paper. First, we fix the notation used in this paper. IN denotes the set of all nonnegative integers. An alphabet Σ is a finite set of symbols. A finite sequence a1, . . . , an of elements of Σ is called a string and is denoted by w = a1 · · · an for short. The empty string of length zero is ε. The set of all strings is denoted by Σ∗, and let Σ+ = Σ∗ − {ε}. For a string w, if w = αβγ, then the strings α and γ are called a prefix and a suffix of w, respectively. For a string s, we denote by s[i] with 1 ≤ i ≤ |s| the i-th symbol of s, where |s| is the length of s.

For an HTML page, the HTML tree is the ordered node-labeled tree defined as follows. For each tree T, the set of all nodes of T is a finite subset of IN, where 0 is the root. A node is called a leaf if it has no child, and is otherwise called an internal node. If nodes n, m ∈ IN have the same parent, then n and m are siblings. A sequence n1, . . . , nk of nodes of T is called a path if n1 is the root and ni is the parent of ni+1 for all i = 1, . . . , k − 1. For a path p = n1, . . . , nk, the number k is called the length of p and the node nk is called the tail of p. With each node n, the pair NL(n) = ⟨N(n), V(n)⟩, called the node label of n, is associated, where N(n) and V(n) are strings called the node name and the node value, respectively. If N(n) ∈ Σ+ and V(n) = ε, then the node n is called an element node and the string N(n) is called the tag. If N(n) = Text for the reserved string Text and V(n) ∈ Σ+, then n is called a text node and V(n) is called the text value. We assume that every node n ∈ IN is categorized as either an element node or a text node. If a page P contains a beginning tag of the form <tag> and P contains no corresponding ending tag </tag>, then the tag is called an empty tag in P.
If a page P contains a string of the form t1 · w · t2 such that t1, t2 are either beginning or ending tags and w is a string not containing any tag, then the string w is called a text in P. An HTML file is called a page. A page P corresponds to an ordered labeled tree. For simplicity, we assume that P contains no comments, i.e., no string of the form <!-- · · · -->.

Definition 1. For a page P, we define the HTML tree Pt recursively as follows.

1. If P contains an empty tag of the form <tag>, then Pt has an element node n such that it is a leaf of Pt, N(n) = tag, and V(n) = ε.


2. If P contains a text w, then Pt has a text node n such that it is a leaf of Pt, N(n) = Text, and V(n) = w.
3. If P contains a string of the form <tag> s </tag> for a string s ∈ Σ∗, then Pt has the subtree n(n1, . . . , nk), where N(n) = tag, V(n) = ε, and n1, . . . , nk are the roots of the trees t1, . . . , tk which are obtained from s by recursively applying the above 1, 2, and 3.

Next we define the functions that return the node names, node values, and HTML attributes of the nodes and HTML trees defined above. These functions are useful for explaining the algorithms in the next section. They return the values indicated below, and return null if such values do not exist.

– Parent(n): The parent of the node n ∈ IN.
– ChildNodes(n): The sequence of all children of n.
– Name(n): The node name N(n) of n.
– Value(n): The concatenation V(n1) · · · V(nk) for the leaves n1, . . . , nk of the subtree rooted at n, in left-to-right order.

Recall that V(n) is not empty only if n is a text node. Thus, Value(n) is equal to the concatenation of the values of all text nodes below n. Let Pt be an HTML tree for a page P and let N = {0, . . . , n} be the set of nodes in Pt. For nodes i, j ∈ N, if there is a sequence p_{i,j} = i1, . . . , ik of nodes in N such that i1 = i, ik = j, and iℓ = Parent(iℓ+1) for all 1 ≤ ℓ ≤ k − 1, then p_{i,j} is called the path from i to j. If i is the root, then p_{i,j} is denoted by pj for short. For each path p = i1, . . . , ik of Pt, we also define the following useful notations.

– Name(p): The sequence Name(i1), . . . , Name(ik).
– Value(p): V(ik).

Definition 2. Let Pt be an HTML tree over the set N of nodes. Let p = i1, . . . , in be a path of Pt and let Name_t = {Name(n) | n ∈ N}. A sequence α = name1, . . . , namem (namei ∈ Name_t) is called a path expression over Name_t. We say that α matches p if there exists a subsequence j1, . . . , jm of p such that Name(jℓ) = nameℓ for all 1 ≤ ℓ ≤ m. In the next section, we define a measure of the matching of path expressions with the paths of HTML trees. We also define the problem of finding a path expression that maximizes the measure.
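As a concrete illustration of Definitions 1 and 2, the following sketch (assuming Python's standard html.parser in place of the XML parser used in the paper; the HTML fragment and tag names are invented) decomposes a page into root-to-text-node paths and checks a path expression by subsequence matching of node names:

```python
from html.parser import HTMLParser

class PathCollector(HTMLParser):
    """Collects (Name(p), Value(p)) for every path whose tail is a text node."""
    def __init__(self):
        super().__init__()
        self.stack = []   # names of the currently open element nodes
        self.paths = []   # (tag-name sequence ending in Text, text value)

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in self.stack:
            while self.stack.pop() != tag:
                pass

    def handle_data(self, data):
        text = data.strip()
        if text:  # a text node at the tail of the current path
            self.paths.append((tuple(self.stack) + ("Text",), text))

def matches(path_names, expr):
    """A path expression matches p iff it is a subsequence of Name(p)."""
    it = iter(path_names)
    return all(name in it for name in expr)

collector = PathCollector()
collector.feed("<html><body><p><b>local search</b></p></body></html>")
print(collector.paths)
print(matches(collector.paths[0][0], ("body", "b", "Text")))
```

Here `name in it` advances a shared iterator, so the expression names must appear in order, which is exactly the subsequence semantics of Definition 2.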

3

Mining HTML Texts

In this section we first define the problem of finding an expression, called an association path, for filtering semi-structured texts. The pattern is a pair of a word-association pattern and a path expression. The semantics of the patterns is defined by the matching semantics of the word-association patterns and the path expressions.


3.1


The Problem

A word-association pattern [1] π over Σ is a pair π = (p1, . . . , pd; k) of a finite sequence of strings in Σ∗ and a parameter k, called the proximity, which is either a nonnegative integer or infinity. A word-association pattern π matches a string s ∈ Σ∗ if there exists a sequence i1, . . . , id of integers such that every pj in π occurs in s at position ij and 0 ≤ ij+1 − ij ≤ k for all 1 ≤ j ≤ d − 1. The notion (d, k)-pattern refers to a d-word k-proximity word-association pattern (p1, . . . , pd; k). Let S = {s1, . . . , sm} be a finite set of strings in Σ∗ and let ψ be a labeling function ψ : S → {0, 1}. Then, for a string s ∈ S, we say that a word-association pattern π agrees with ψ on s if π matches s iff ψ(s) = 1. Given (Σ, S, ψ, d, k) of an alphabet Σ, a finite set S ⊂ Σ∗ of strings, a labeling function ψ : S → {0, 1}, and positive integers d and k, the problem Max Agreement by (d, k)-Pattern [1] is to find a (d, k)-pattern π that maximizes the agreement with ψ, i.e., the number of strings in S on which π agrees with ψ.

Definition 3. An association path is an expression of the form α#π, where α is a path expression whose tail is Text, π is a word-association pattern, and # is a special symbol not belonging to any α or π. Let p = α#π be an association path and p′ be a path in a tree. We say that p matches p′ if α matches p′ and π matches Value(p′). For a finite set T of HTML trees, let

Text_T = {⟨Name(p), Value(p)⟩ | p is a path of t ∈ T, Value(p) ≠ ε}.

The intuitive meaning of a pair appearing in Text_T is a path p of an HTML tree such that the tail of p is a text node. Let Name_T be the set of Name(p) and let Value_T be the set of Value(p) in Text_T.

Definition 4. Association Path. An instance is (Σ, Text_T, ψ, d, k) of an alphabet Σ, a set Text_T of pairs for a finite set T of HTML trees, a labeling function ψ : Value_T → {0, 1}, and positive integers d, k. A solution is an association path α#π.
The string π is a (d, k)-pattern that solves the max agreement problem for the input (Σ, Value_T, ψ, d, k). The string α is a (d, k)-pattern that solves the max agreement problem for the input (Σ, Name_T, ψ′, d, k), where ψ′ is defined by ψ′(Name(p)) = 1 iff ψ(Value(p)) = 1. The goal of the problem is to maximize the sum of the agreements of ψ and ψ′ over all association paths α#π.

3.2

The Algorithm

To find association paths, the data of HTML texts are transformed into path expressions as follows. Given a large set S of HTML texts, it is divided into two disjoint sets S1 and S2 by a labeling function. The labeling function is given as a keyword or phrase by a user, i.e., any text in S is labeled by 1 if it contains


the keyword, and labeled by 0 otherwise. Next, all texts in S1 and S2 are parsed into HTML trees; let Pos be the set of all paths from S1 and Neg be the set of all paths from S2. Fig. 1 shows the process of our algorithm briefly.

[Figure: the mining process. Positive and negative HTML texts are parsed into trees, from which the algorithm mines an association path α # π, where α is a pattern for tags and π is a pattern for texts.]

Fig. 1. The process of the mining algorithm.
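For reference while reading the algorithm, the matching semantics of a (d, k)-pattern from Section 3.1 can be sketched as a naive backtracking matcher. This is not the Split-Merge search of [1]; character positions are used for occurrences, which is one possible reading of "position", and the sample text is invented:

```python
def occurrences(s, word):
    """All character positions at which word occurs in s."""
    out, pos = [], s.find(word)
    while pos != -1:
        out.append(pos)
        pos = s.find(word, pos + 1)
    return out

def matches_pattern(s, words, k):
    """True iff each word occurs at a position i_j with 0 <= i_{j+1} - i_j <= k."""
    def extend(j, prev):
        if j == len(words):
            return True
        return any(extend(j + 1, i)
                   for i in occurrences(s, words[j])
                   if prev is None or 0 <= i - prev <= k)
    return extend(0, None)

text = "efficient local search heuristics for local optimization"
print(matches_pattern(text, ["local", "search"], 10))
print(matches_pattern(text, ["search", "heuristics"], 3))
```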

Algorithm Path-Find(Σ, Text, ψ, d, k)
/* Input: a set P of HTML pages over Σ, a labeling function ψ, nonnegative integers d, k */
/* Output: a solution of Association Path for the input */
1. Let P1 be the set of all pages in P labeled by 1 and let P2 = P − P1. For the set T1 of HTML trees of P1, compute the set Pos of all paths of trees in T1, and compute the set Neg of all paths of all trees of P2.
2. Let Pos = {pi | 1 ≤ i ≤ m} and Neg = {qj | 1 ≤ j ≤ n} (m, n ≥ 0). Compute the sets Name_Pos = {Name(p) | p ∈ Pos}, Value_Pos = {Value(p) | p ∈ Pos}, Name_Neg = {Name(q) | q ∈ Neg}, and Value_Neg = {Value(q) | q ∈ Neg}.
3. Find a (d, k)-pattern π for the max agreement problem on Value_Pos and Value_Neg, and find a (d, k)-pattern α for the max agreement problem on Name_Pos and Name_Neg.
4. Output the pattern α#π which maximizes the sum of the agreements of α and π.

We estimate the running time of Path-Find. This algorithm finds an association path using only the paths whose tails are text nodes, i.e., paths of the form p = n1, . . . , nk such that each ni (1 ≤ i ≤ k − 1) is an element node and nk is a text node. Thus, for such paths p, we regard the mining problem as the problem of finding two phrases, α from the strings Name(p) and β from the strings Value(p), for constant parameters d (the number of phrases) and k (the proximity of phrases). If the maximum number of phrases in a pattern is bounded by a constant d, then the max agreement problem for (d, k)-patterns is solvable by the EnumerateScan algorithm [19], a modification of a naive generate-and-test algorithm, in


O(n^{d+1}) time with O(n^d) scans, although this is still too slow to apply to real-world problems. Adopting the framework of optimized pattern discovery, we have developed an efficient algorithm, called Split-Merge [1], that finds all the optimal patterns in the class of (d, k)-patterns for various statistical measures including the classification error and the information entropy. The algorithm quickly searches the hypothesis space using dynamic reconstruction of a content index, called a suffix array, combining several techniques from computational geometry and string algorithms. We showed that the Split-Merge algorithm runs in almost linear time on average, more precisely in O(k^{d−1} N (log N)^{d+1}) time using O(k^{d−1} N) space for nearly random texts of size N [1]. We also showed that the problem of finding one of the best phrase patterns with arbitrarily many strings is MAX SNP-hard [1]. Thus, we see that there is no efficient approximation algorithm with arbitrarily small error for the problem when the number d of phrases is unbounded.
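To make the overall flow concrete, here is a toy end-to-end sketch of Path-Find. It assumes the paths have already been extracted as (tag-sequence, text) pairs, and it replaces the Split-Merge search with an exhaustive search over single-word patterns (d = 1), so it illustrates only the agreement-maximization step; all data and helper names below are invented:

```python
def agreement(pattern_matches, items, labels):
    """Number of items on which the pattern agrees with the labeling psi."""
    return sum(pattern_matches(x) == bool(y) for x, y in zip(items, labels))

def best_word(items, labels):
    """Exhaustive search over single-word patterns (d = 1), standing in
    for the (d, k)-pattern search of the max agreement problem."""
    candidates = {w for x in items for w in x}
    return max(candidates,
               key=lambda w: agreement(lambda x: w in x, items, labels))

def path_find(pos_paths, neg_paths):
    """pos_paths, neg_paths: lists of (tag-name tuple, text) pairs."""
    paths = pos_paths + neg_paths
    labels = [1] * len(pos_paths) + [0] * len(neg_paths)
    names = [tags for tags, _ in paths]
    values = [tuple(text.lower().split()) for _, text in paths]
    alpha = best_word(names, labels)   # pattern over tag sequences
    pi = best_word(values, labels)     # pattern over text values
    return (alpha, pi)                 # the association path alpha # pi

pos = [(("html", "body", "b", "Text"), "local search for tsp"),
       (("html", "body", "b", "Text"), "euclidean tsp")]
neg = [(("html", "body", "p", "Text"), "np complete problems")]
print(path_find(pos, neg))
```

On this toy data, the selected tag is "b" and the selected word is "tsp", loosely mirroring the experiments of Section 4.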

4

Experimental Results

In this section, we show the experimental results. The text data is a collection from ResearchIndex (http://citeseer.nj.nec.com/), a scientific literature digital library. The positive data is the set Pos of HTML pages containing the keyword "TSP" and the negative data is the set Neg of HTML pages containing the keyword "NP". The set Neg covers many topics of computational complexity, whereas Pos is concerned with one of the most popular NP-hard problems, the Travelling Salesman Problem, and is not properly contained in Neg. The aim of this experiment is to find an association path which characterizes TSP against NP. In this experiment on a collection of 8.4MB, the algorithm Path-Find finds the best 600 patterns under the entropy measure in seconds for d = 2 and in three minutes for d = 3, with k = 10 words, using 200 megabytes of main memory on an IBM PC (Pentium III 600 MHz, gcc++ on Windows 98). The result obtained by our algorithm is shown in Fig. 2. Our system found several interesting association paths which may be difficult for human users to find by inspection. Fig. 2 contains some association paths whose tag sequences contain the <b> tag. This means that the phrases, e.g., 'local search' and 'euclidean tsp', are emphasized by the <b> tag. Thus we consider these phrases to be interesting. In fact, these phrases are remarkable for the following reasons. The phrase 'local search' in Rank 171 indicates the local search heuristics for TSP such as [14]. In this path, the <b> and <font> (font style and size) tags on the left-hand side indicate the importance of the phrase on the right-hand side. The phrase 'tsp and other' in Rank 276 is a substring of the title of the outstanding paper written by Arora [2] in 1996 on the approximation algorithm for Euclidean TSP. The euclidean graph is an important geometric


[Table: the best association paths at Ranks 5, 38, 90, 171, 213, 276, 394, 455, and 552; each entry has the form α # π, with tag sequences such as ⟨font p body html⟩ and phrases such as ⟨local search⟩ and ⟨euclidean tsp⟩.]

Fig. 2. The association paths found in the experiments, which characterize the Web pages on the TSP problem against those on the NP-optimization problem. The parameters are (2, 10) for (d, k), where α is a path and π is a phrase.

structure used to construct an approximation algorithm for TSP. These keywords appear in Ranks 394, 455, and 552, respectively. Next, we examine the same text data with the association pattern algorithm [1] and compare the resulting phrases with our result. The list of 400 phrases found by the association pattern algorithm is partially presented in Fig. 3. As shown in this list, almost all phrases are trivial except 'local search'.

[Table: phrases at ranks 0–9, including entries such as ⟨solutions for tsp⟩.]

Fig. 5. Another result of the experiments, for DNA against the NP-optimization problem. The parameters are again (2, 10) for (d, k), where α is a path and π is a phrase.


5


Conclusion

We introduced a new method for mining HTML texts and presented an algorithm to find an association path, which is a pair of association patterns over tag sequences and text sequences. In experiments on HTML data of scientific literature, the algorithm found interesting association paths from positive and negative examples on the travelling salesman problem and other NP-optimization problems.

Acknowledgments

The authors are grateful to the anonymous referees for their careful reading of the draft and useful comments. Shinichi Shimozono thanks Miho Matsui for the suggestive discussions and observations obtained while supervising her graduation thesis.

References

1. Shimozono, S., Arimura, H., and Arikawa, S. Efficient discovery of optimal word-association patterns in large text databases. New Generation Computing 18:49–60, 2000.
2. Arora, S. Polynomial-time approximation schemes for Euclidean TSP and other geometric problems. Proc. 37th IEEE Symposium on Foundations of Computer Science, 2–12, 1996.
3. Abiteboul, S., Buneman, P., and Suciu, D. Data on the Web: From Relations to Semistructured Data and XML. Morgan Kaufmann, San Francisco, CA, 2000.
4. Angluin, D. Queries and concept learning. Machine Learning 2:319–342, 1988.
5. Buneman, P., Davidson, S., Hillebrand, G., and Suciu, D. A query language and optimization techniques for unstructured data. University of Pennsylvania, Computer and Information Science Department, Technical Report MS-CIS-96-09, 1996.
6. Cohen, W. W. and Fan, W. Learning page-independent heuristics for extracting data from Web pages. Proc. WWW-99, 1999.
7. Craven, M., DiPasquo, D., Freitag, D., McCallum, A., Mitchell, T., Nigam, K., and Slattery, S. Learning to construct knowledge bases from the World Wide Web. Artificial Intelligence 118:69–113, 2000.
8. Freitag, D. Information extraction from HTML: Application of a general machine learning approach. Proc. the 15th National Conference on Artificial Intelligence, 517–523, 1998.
9. Grieser, G., Jantke, K. P., Lange, S., and Thomas, B. A unifying approach to HTML wrapper representation and learning. Proc. the 3rd International Conference on Discovery Science, DS 2000, Lecture Notes in Artificial Intelligence 1967:50–64, 2000.
10. Hammer, J., Garcia-Molina, H., Cho, J., and Crespo, A. Extracting semistructured information from the Web. Proc. Workshop on Management of Semistructured Data, 18–25, 1997.
11. Hsu, C.-N. Initial results on wrapping semistructured web pages with finite-state transducers and contextual rules. Proc. 1998 Workshop on AI and Information Integration, 66–73, 1998.
12. Kamada, T. Compact HTML for small information appliances. W3C NOTE 09-Feb-1998, www.w3.org/TR/1998/NOTE-compactHTML-19980209, 1998.


13. Kushmerick, N. Wrapper induction: efficiency and expressiveness. Artificial Intelligence 118:15–68, 2000.
14. Lin, S. and Kernighan, B. W. An effective heuristic algorithm for the travelling salesman problem. Operations Research 21:498–516, 1973.
15. Muslea, I., Minton, S., and Knoblock, C. A. Wrapper induction for semistructured, web-based information sources. Proc. Conference on Automated Learning and Discovery, 1998.
16. Sakamoto, H., Arimura, H., and Arikawa, S. Identification of tree translation rules from examples. Proc. the 5th International Colloquium on Grammatical Inference, LNAI 1891:241–255, 2000.
17. Thomas, B. Anti-unification based learning of T-Wrappers for information extraction. Proc. AAAI Workshop on Machine Learning for IE, 15–20, AAAI, 1999.
18. Valiant, L. G. A theory of the learnable. Comm. ACM 27:1134–1142, 1984.
19. Wang, J. T., Chirn, G. W., Marr, T. G., Shapiro, B., Shasha, D., and Zhang, K. Combinatorial pattern discovery for scientific data: Some preliminary results. Proc. SIGMOD'94, 115–125, 1994.

Worst-Case Analysis of Rule Discovery

Einoshin Suzuki

Electrical and Computer Engineering, Yokohama National University,
79-5 Tokiwadai, Hodogaya, Yokohama 240-8501, Japan
[email protected]

Abstract. In this paper, we perform a worst-case analysis of rule discovery. A rule is defined as a probabilistic constraint on the true assignment to the class attribute of the corresponding examples. In data mining, a rule can be considered as representing an important class of discovered patterns. We accomplish the aforementioned objective by extending a preliminary version of PAC learning, which represents a worst-case analysis for classification. Our analysis consists of two cases: the case in which we try to avoid finding a bad rule, and the case in which we try to avoid overlooking a good rule. Discussions of related work are also provided for PAC learning, multiple comparison, the analysis of association rule discovery, and the simultaneous reliability evaluation of a discovered rule.

1

Introduction

Data mining [2] can be defined as the extraction of useful knowledge from massive data, and is gaining increasing attention due to the advancement of various information technologies. Data mining can be regarded as advanced data analysis, and a typical process of analysis consists of several steps [2]. Pattern extraction represents an important step in such a process. A rule is defined as a probabilistic constraint inherent in a data set, and is widely recognized as representing one of the most important kinds of patterns in data mining. Although rule discovery has been extensively studied in data mining, its theoretical analyses are surprisingly rare. Several exceptions include Agrawal et al.'s analysis of association rule discovery [1] and our analysis of a discovered rule based on simultaneous reliability evaluation [10]. However, these studies ignore the total number of rules that can be discovered from a data set. This means that these studies fail to relate the size of a discovery problem to the number of examples needed for successful discovery, and suggests that a more solid foundation of data mining should be established. As a first step toward this objective, we extend a preliminary version of PAC learning [7], which represents a worst-case analysis of classification. Our analysis consists of two cases: the case in which we try to avoid finding a bad rule, and the case in which we try to avoid overlooking a good rule. We also discuss related work including PAC learning [5,7], Jensen and Cohen's multiple comparison [4], Agrawal et al.'s analysis of association rule discovery [1], and our previous analysis of a discovered rule based on simultaneous reliability evaluation
K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 365–377, 2001. © Springer-Verlag Berlin Heidelberg 2001


[10]. In the rest of this paper, the technical terms and symbols of the referenced papers are unified to those of this paper.

2

Rule Discovery Problem

2.1

Rule

Let a data set contain n examples, each of which is expressed by b discrete attributes and a class attribute. Typically, rule discovery assumes no specific class attribute, unlike classification. However, for the sake of formalization, we consider a rule which predicts a specific class attribute to be true. Let an assignment A = v of a value v to an attribute A be an atom. In this paper, we regard a given data set as a result of sampling with replacement from a true data set. We call the probability of the examples each of which satisfies a propositional logical formula f the true probability Pr(f) of f. Similarly, an estimated probability obtained from a given data set for Pr(f) is represented by Pr̂(f). Note that Pr̂(f) can be calculated by the Laplace estimate or simply by the ratio of examples which satisfy f in the data set. We employ the latter method in this paper. A rule r is represented as follows, with a premise y which represents a propositional formula of atoms and a conclusion x which represents a true assignment to the class attribute.

r : y → x

An intuitive interpretation of r is that many examples satisfy y and those examples are likely to satisfy x with high probability. We define Pr(y) and Pr(x|y) as the generality and the accuracy of r respectively. Similarly, we call Pr̂(y) and Pr̂(x|y) the estimated generality and the estimated accuracy of r respectively.

2.2

Related Classes of Rules

This section presents several classes of rules which are related to ours. A probabilistic if-then rule [9] is defined as follows, where each yi represents a single atom.

y1 ∧ y2 ∧ · · · ∧ yK → x

In [10], a probabilistic if-then rule is called a conjunction rule, and this paper follows this paraphrasing. A conjunction rule can be regarded as a special case of our rule: the premise is restricted to either a single atom or a conjunction of atoms. Since a premise of a conjunction rule is represented by a combination of atoms, the number |R| of possible conjunction rules is typically huge. The following gives |R|, where a data set contains b attributes and each of these attributes can have one of a values.

|R| = (a + 1)^b − 1   (1)


This formula can be explained by the fact that each of the b attributes can either take one of a values or be excluded from the premise. A typical value for |R| is huge: for example, |R| = 3,486,784,400 for a data set of 20 binary attributes. A realistic measure would be to restrict the number of atoms allowed in the premise to at most K. The possible number |R_K|, in this case, is given as follows.

|R_K| = Σ_{i=1}^{K} a^i C(b, i)   (2)

where C(b, i) denotes the binomial coefficient.

Note that (1) can also be derived by setting K = b in (2) and using the binomial theorem. In association rule discovery [1], a data set is restricted to a transactional data set which consists of binary attributes. A true assignment to a binary attribute is called an item. Let an itemset be either a single item or a conjunction of items. An association rule, in its original form, consists of a premise and a conclusion, each of which is represented by an itemset. In the framework of Section 2.1, an association rule can be regarded as a special case of our rule: the premise is restricted to either a single atom or a conjunction of atoms, and only the value "true" is allowed. The cases of |R| and |R_K| for association rule discovery are obtained by setting a = 1 in (1) and (2).

2.3

Discovery Problem

In this paper, the objective of a user is to obtain, with high probability 1 − δ, a rule whose generality and accuracy are no smaller than 1 − ζ and 1 − ε respectively. Typically, multiple rules are obtained in rule discovery, but we restrict ourselves to single-rule discovery for the sake of analysis.

Objective: Find y → x which satisfies Pr[ Pr(y) ≥ 1 − ζ, Pr(x|y) ≥ 1 − ε ] ≥ 1 − δ, where ζ, ε, δ > 0   (3)

A discovery algorithm to be analyzed obtains a rule whose estimated generality and accuracy are no smaller than user-given thresholds θS and θF respectively. As stated above, since a given data set is a result of sampling from a true data set, the user employs thresholds θS = 1 − ζ and θF = 1 − ε in applying the algorithm.

Algorithm: Find y → x which satisfies Pr̂(y) ≥ θS, Pr̂(x|y) ≥ θF   (4)
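The estimated generality and accuracy appearing in (4) are simple ratios over the given data set; a minimal sketch (the data set and its encoding are invented):

```python
def estimates(data, premise, conclusion):
    """Estimated generality Pr^(y) and accuracy Pr^(x|y) of a rule y -> x,
    computed as ratios of satisfying examples (no Laplace correction)."""
    covered = [ex for ex in data if premise(ex)]
    generality = len(covered) / len(data)
    accuracy = (sum(conclusion(ex) for ex in covered) / len(covered)
                if covered else 0.0)
    return generality, accuracy

# Hypothetical examples: (color, size, class)
data = [("red", "small", 1), ("red", "large", 1),
        ("red", "small", 0), ("blue", "small", 0)]
g, a = estimates(data,
                 premise=lambda ex: ex[0] == "red",   # y: color = red
                 conclusion=lambda ex: ex[2] == 1)    # x: class is true
print(g, a)
```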

An interesting problem here is to bound the number m of examples required to accomplish (3) under (4). This problem can be named PAGA (Probably Approximately General and Accurate) discovery, after the well-known PAC (Probably Approximately Correct) learning [5,7], and can be regarded as a foundation of data mining.
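The rule-space sizes (1) and (2) of section 2.2 are easy to check numerically (a throwaway sketch):

```python
from math import comb

def num_rules(a, b):
    """|R| = (a + 1)^b - 1: each attribute takes one of a values or is absent."""
    return (a + 1) ** b - 1

def num_rules_k(a, b, k):
    """|R_K| = sum_{i=1..K} a^i * C(b, i): premises with at most K atoms."""
    return sum(a ** i * comb(b, i) for i in range(1, k + 1))

# The example from the text: 20 binary attributes.
print(num_rules(2, 20))          # 3,486,784,400
# Setting K = b recovers (1), via the binomial theorem.
print(num_rules_k(2, 20, 20) == num_rules(2, 20))
```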


3


Case 1: Exclusion of a Bad Rule

In this section, we derive a lower bound on the number of examples for the problem defined in the previous section. The assumed condition is that finding a bad rule is to be avoided. This condition is important in several domains where reliability represents a crucial concern.

3.1

Preliminaries

First we introduce the preliminaries needed in the subsequent analyses. If the domain of a probabilistic variable X is {0, 1, · · · , m} and the probability distribution of the variable is represented as follows, X is said to follow a binomial distribution [3].

Pr(X = k) = B(k; m, p) = C(m, k) p^k (1 − p)^(m−k)   (5)

where C(m, k) denotes the binomial coefficient, p represents a constant with 0 < p < 1, and k = 0, 1, · · · , m. The Chernoff bound states that the following holds for an arbitrary constant a > p [1].

Pr(X > am) < exp[−2m(a − p)^2]   (6)

3.2

Theoretical Analysis

From (3), a bad rule rb : y → x satisfies

Pr(y) < 1 − ζ or Pr(x|y) < 1 − ε.   (7)

Since we assume, in this section, that we avoid finding a bad rule, the employed thresholds for generality and accuracy are relatively large. This assumption, together with (3) and (4), necessitates the following.

θS > 1 − ζ and θF > 1 − ε   (8)

From (7) and (8),

θS > Pr(y) or θF > Pr(x|y).   (9)

Since rb : y → x is discovered,

Pr̂(y) ≥ θS and Pr̂(x|y) ≥ θF.   (10)
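As a numerical sanity check on the use of the Chernoff bound (6) in this analysis, the exact binomial tail Pr(X > am) can be compared against exp[−2m(a − p)^2]; the parameter values here are arbitrary:

```python
from math import comb, exp

def binom_tail(m, p, a):
    """Exact Pr(X > a*m) for X ~ B(m, p)."""
    return sum(comb(m, k) * p ** k * (1 - p) ** (m - k)
               for k in range(int(a * m) + 1, m + 1))

def chernoff_bound(m, p, a):
    """The bound of (6): exp[-2m(a - p)^2], valid for a > p."""
    return exp(-2 * m * (a - p) ** 2)

m, p, a = 100, 0.3, 0.5
print(binom_tail(m, p, a), chernoff_bound(m, p, a))
```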

Let the number of examples in the given data set be m. If and only if y and xy are satisfied by at least mθS and mPr̂(y)θF examples respectively in the data set, rb happens to be discovered. Since each of the numbers of examples which satisfy y and xy follows a binomial distribution,

Pr(rb discovered) ≤ MAX[ Σ_{k=mθS}^{m} B(k; m, Pr(y)),  Σ_{k=mPr̂(y)θF}^{mPr̂(y)} B(k; mPr̂(y), Pr(x|y)) ]   (11)
< MAX[ exp(−2m(θS − Pr(y))^2),  exp(−2mPr̂(y)(θF − Pr(x|y))^2) ]   (12)
< MAX[ exp(−2m(θS − 1 + ζ)^2),  exp(−2mθS(θF − 1 + ε)^2) ].   (13)

Note that, in (11), we consider separately the case in which a bad rule rb1 in terms of generality is discovered and the case in which a bad rule rb2 in terms of accuracy is discovered. The first and the second terms correspond to the left inequality and the right inequality of (7) respectively. Since Pr(rb1) and Pr(rb2) are unknown, we bound Pr(rb discovered) by MAX[Pr(rb1 discovered), Pr(rb2 discovered)]. In (12), the Chernoff bound (6) is employed, which is applicable by (9). Finally, in (13), we employ (7) and the left inequality of (10).

Let the set of all possible rules and the set of all bad rules be R and Rb respectively, and let the cardinality of a set S be |S|. The probability of discovering a bad rule satisfies the following inequalities.

Pr(Rb contains a discovered rule)
< |Rb| MAX[ exp(−2m(θS − 1 + ζ)^2), exp(−2mθS(θF − 1 + ε)^2) ]   (14)
≤ |R| MAX[ exp(−2m(θS − 1 + ζ)^2), exp(−2mθS(θF − 1 + ε)^2) ]   (15)

Note that we allow counting multiple times the cases in which several bad rules satisfy the discovery condition in (14), and (15) uses |R| ≥ |Rb|. Our objective (3) requires the following with respect to a sufficiently small δ.

|R| MAX[ exp(−2m(θS − 1 + ζ)^2), exp(−2mθS(θF − 1 + ε)^2) ] ≤ δ   (16)

We thus obtain a lower bound on the number m of examples for discovery in which finding a bad rule is avoided with high probability.

m ≥ ln(|R|/δ) / ( 2 MIN[ (θS − 1 + ζ)^2, θS(θF − 1 + ε)^2 ] )   (17)

The above inequality quantitatively describes the influence of each parameter on the minimum number of examples. As we have seen in section 2.2, |R| is typically large and is thus important even if its influence is tempered by a logarithmic


E. Suzuki

function. The second most important factors are θS − 1 + ζ and θF − 1 + ε. Since they influence the lower bound of m by the inverse of their squares, they can be problematic when they are small. Since each of these terms represents the difference between a threshold and the corresponding user-expected value, θS − 1 + ζ and θF − 1 + ε can be named the margin of generality and the margin of accuracy respectively. In a typical setting of rule discovery, we can assume θS = 0.1, and we assume that (θS − 1 + ζ) = 10⁻¹ or 10⁻². We also assume that (θF − 1 + ε) = 10⁻¹ or 10⁻². Under these assumptions, the denominator is either 2 × 10⁻³ or 2 × 10⁻⁵. Finally, δ can be considered a moderately important factor in a typical situation δ = 0.01–0.05, since it appears only as the denominator of |R| inside the logarithm.
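Under these typical settings, the lower bound (17) is easy to evaluate; a minimal sketch (the function name, parameter names, and the value |R| = 10⁶ are illustrative assumptions of ours):

```python
import math

def min_examples(num_rules, delta, theta_s, margin_gen, margin_acc):
    """Evaluate the lower bound (17):
    m >= ln(|R|/delta) / (2 * MIN[margin_gen^2, theta_s * margin_acc^2])."""
    denom = 2.0 * min(margin_gen ** 2, theta_s * margin_acc ** 2)
    return math.log(num_rules / delta) / denom

# typical setting from the text: theta_S = 0.1, both margins 0.1, delta = 0.05
m = min_examples(num_rules=10**6, delta=0.05, theta_s=0.1,
                 margin_gen=0.1, margin_acc=0.1)
print(math.ceil(m))
```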

3.3 Application to Conjunction Rule Discovery

From (1) and (17), the lower bound of the number m of examples is given as follows if we restrict the discovered rule to a conjunction rule.

$$m \ge \frac{\ln\left\{(a+1)^b - 1\right\} + \ln\frac{1}{\delta}}{2\,\mathrm{MIN}\left[(\theta_S - 1 + \zeta)^2,\ \theta_S(\theta_F - 1 + \epsilon)^2\right]} \quad (18)$$

Note that setting a = 1 gives the case of association rule discovery. Firstly, ln(1/δ) can typically be ignored when δ = 0.01–0.05 since ln[(a + 1)ᵇ − 1] ≫ ln(1/δ); thus the lower bound of m is approximately proportional to b. Secondly, since the number a of possible values for an attribute affects the right-hand side only through a logarithmic function, a is typically not as important as b and the margins of generality and accuracy. We show, in figure 1, a plot of the lower bound against MIN[(θS − 1 + ζ)², θS(θF − 1 + ε)²] for b = 10², 10³, 10⁴, where we set a = 2 and δ = 0.05. Note that both the x axis and the y axis use a logarithmic scale.

Fig. 1. Minimum number of examples needed for conjunction rule discovery without finding a bad rule, plotted on log-log axes for b = 100, 1,000, and 10,000. In the figure, MIN represents MIN[(θS − 1 + ζ)², θS(θF − 1 + ε)²].


We discuss the lower bound of the number of examples for a typical setting with figure 1. The examples described in section 3.2 give MIN[(θS − 1 + ζ)², θS(θF − 1 + ε)²] = 10⁻³ or 10⁻⁵. For these cases, the lower bound is approximately 5.6 × 10⁴ – 5.6 × 10⁶ or 5.6 × 10⁶ – 5.6 × 10⁸ for b = 10² – 10⁴. These results indicate that the required number of examples for successful discovery can be prohibitively large for small margins. Note that large margins represent large thresholds, and no rules are usually discovered for large thresholds. A realistic and effective remedy for this problem would be to adjust thresholds during a discovery process, as in [11]. It should nevertheless be noted that our analyses in this paper correspond to the worst case, and the required number of examples in a real discovery problem can be much smaller than those mentioned above.

From (2) and (17), the lower bound of the number m of examples is given as follows if we restrict the discovered rule to a conjunction rule with at most K atoms in its premise.

$$m \ge \frac{\ln\left\{\sum_{i=1}^{K}\binom{b}{i}a^i\right\} + \ln\frac{1}{\delta}}{2\,\mathrm{MIN}\left[(\theta_S - 1 + \zeta)^2,\ \theta_S(\theta_F - 1 + \epsilon)^2\right]} \quad (19)$$

Note that setting a = 1 gives the case of association rule discovery. As in figure 1, we show, in figure 2, two plots of the lower bound for a = 2 and δ = 0.05. The left plot represents a case in which we varied b = 10², 10³, 10⁴ under K = 2, and in the right plot we varied K = 1, 2, 3, 4, 100 (= b) under b = 10².

Fig. 2. Minimum number of examples needed for conjunction rule discovery without finding a bad rule, where at most K atoms are allowed in the premise. The left plot assumes K = 2 and varies b = 100, 1,000, 10,000; the right plot assumes b = 100 and varies K = 1, 2, 3, 4, 100, where the K = 100 curve is equivalent to the b = 100 curve in Fig. 1.

From the left plot, we see that the influence of b is relatively small for K = 2. On the other hand, the right plot of figure 2 shows that, for K ≤ 4, the minimum required number of examples is smaller by approximately an order of magnitude than the case of considering all conjunction rules (K = b = 100). It is widely accepted that a rule with a short premise exhibits high readability, and the above results suggest that such rules are also attractive in terms of the required number of examples.
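The order-of-magnitude effect of restricting K can be reproduced by evaluating (19) directly; a sketch with the right plot's settings (function and parameter names are ours):

```python
import math
from math import comb

def min_examples_conj(b, a, K, delta, theta_s, margin_gen, margin_acc):
    """Lower bound (19) for conjunction rules with at most K premise atoms:
    the rule space size is sum_{i=1..K} C(b, i) * a^i."""
    num_rules = sum(comb(b, i) * a**i for i in range(1, K + 1))
    denom = 2.0 * min(margin_gen ** 2, theta_s * margin_acc ** 2)
    return (math.log(num_rules) + math.log(1.0 / delta)) / denom

# right plot of Fig. 2: b = 100, a = 2, delta = 0.05, both margins 0.1
for K in (1, 2, 3, 4, 100):
    m = min_examples_conj(100, 2, K, 0.05, 0.1, 0.1, 0.1)
    print(K, math.ceil(m))
```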

4 Case 2: Inclusion of a Good Rule

In this section, we derive another lower bound of the number of examples for the problem defined in section 2.3. The assumed condition is to avoid overlooking a good rule. This condition can be considered important in domains where not missing a possible rule is highly important. From (3), a good rule rg : y → x satisfies

$$\Pr(y) \ge 1 - \zeta \ \text{ and } \ \Pr(x|y) \ge 1 - \epsilon. \quad (20)$$

Since we assume, in this section, that we avoid overlooking a good rule, the employed thresholds for generality and accuracy are relatively small. This assumption together with (3) and (4) necessitates the following.

$$\theta_S < 1 - \zeta \ \text{ and } \ \theta_F < 1 - \epsilon \quad (21)$$

From (20) and (21),

$$\theta_S < \Pr(y) \ \text{ and } \ \theta_F < \Pr(x|y). \quad (22)$$

Let the number of examples in the given data set be m. If and only if y is satisfied by at most mθS − 1 examples or xy is satisfied by at most mPr(y)θF − 1 examples in the data set, rg happens to be undiscovered. Since each of the numbers of examples which satisfy y and xy follows a binomial distribution,

$$\Pr(r_g \text{ undiscovered}) \le \sum_{k=0}^{m\theta_S - 1} B(k; m, \Pr(y)) + \sum_{k=0}^{m\Pr(y)\theta_F - 1} B(k; m\Pr(y), \Pr(x|y)) \quad (23)$$

$$< \exp\left\{-2m\left(\Pr(y) - \theta_S\right)^2\right\} + \exp\left\{-2m\Pr(y)\left(\Pr(x|y) - \theta_F\right)^2\right\} \quad (24)$$
5 Comparison with Related Analyses

5.1 PAC Learning

Next, [7] assumes that a classification algorithm returns a classifier which is consistent with all training examples. This corresponds to assuming θF = 1. To sum up, compared with our study, [7] ignores the case of learning a classifier with low generality and the case of learning a classifier which is inconsistent with the training examples. In this case, application of the Chernoff bound can be skipped, and for a bad classifier hb, we obtain Pr(hb learned) ≤ (1 − ε)ᵐ. In [7], a lower bound of the required number m of examples is given by the following, where H represents the set of all classifiers.

$$m \ge \frac{\ln\frac{|H|}{\delta}}{\epsilon} \quad (30)$$

Note that (30) resembles (17): it ignores generality (θS = 1 and no ζ), assumes θF = 1, and omits the squares in ζ² and ε² in the denominator. The last omissions are due to skipping the Chernoff bound.
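The gap between (30) and (17) for comparable space sizes can be illustrated as follows (the function names and the parameter values are ours, chosen only for illustration):

```python
import math

def m_pac(H, delta, eps):
    """Eq. (30): PAC-learning bound, generality ignored and theta_F = 1."""
    return math.log(H / delta) / eps

def m_paga(R, delta, theta_s, margin_gen, margin_acc):
    """Eq. (17): the worst-case rule-discovery bound of this paper."""
    return math.log(R / delta) / (2 * min(margin_gen ** 2,
                                          theta_s * margin_acc ** 2))

# for a space of 10^6 hypotheses/rules, the squared margins in (17)
# make its bound much larger than the linear epsilon in (30)
print(m_pac(10**6, 0.05, 0.1), m_paga(10**6, 0.05, 0.1, 0.1, 0.1))
```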

5.2 Jensen and Cohen's Multiple Comparison

Jensen and Cohen's multiple comparison [4] proposes a prudent view of classification. Its essential point can be stated as a probabilistic explanation: the more candidate classifiers are inspected by a learning algorithm, the smaller the accuracy exhibited by the obtained classifier. The multiple comparison provides a comprehensive unified view of several studies including overfitting [8] and oversearching [6], and [4] also proposes several realistic measures. Since this study deals with classification as PAC learning, it ignores generality. This corresponds to considering only the second term in (11). Since [4] considers the case of θF < 1, it provides a more realistic framework for learning than [7]. The multiple comparison differs from our study in that it directly calculates, based on a binomial distribution without using the Chernoff bound, the probability for a bad classifier to satisfy at least mθF examples. Moreover, they calculate exactly the probability that no bad classifier is learned, while we, in (14), allow counting multiple times the cases in which more than one bad rule satisfies the discovery condition. Let the set of all bad classifiers be Hb; the probability in [4] is given by the following.

$$\Pr(H_b \text{ contains a learned classifier}) = 1 - \left[1 - \Pr(h_b \text{ learned})\right]^{|H|} \quad (31)$$

Pursuing strictness in calculation can be considered a double-edged sword. Jensen and Cohen give no analytical solution for the required number of examples for successful learning. We attribute this to the fact that solving (31) for m is relatively difficult. We have employed several approximations in our theoretical analyses, and these were necessary to bound m analytically. Another difference between [4] and our analyses is rather philosophical: while they are pessimistic about classification, we are realistic about rule discovery. The study in [4] emphasizes that |H| is huge, and demonstrates various examples in which it is difficult to avoid learning a bad classifier. We also recognize that |R| is huge, but bound the required number of examples m analytically with respect to |R|.

5.3 Theoretical Analysis of Association Rule Discovery

Analyses of association rule discovery [1] are threefold: a lower bound of the number of queries under the use of a database system, the expected number of itemsets each of which is satisfied by at least a required number of examples in a random data set, and the number of examples satisfied by an itemset in a sampled data set. The third analysis is highly related to our study in that both deal with the case of sampling m examples from a true data set in rule discovery. The analysis provides a specialization of the Chernoff bound (6), where the random variable X is regarded as m P̂r(f) for an itemset f. It first regards exp[−2m(a − p)²] as the upper bound of the probability for P̂r(f) to deviate at least a − p from its value p (= Pr(f)) in the true data set. Next, it gives several examples of values for a − p and δ in exp[−2m(a − p)²] = δ, and represents the corresponding values of m in a table. The discovery algorithm employed in [1] first obtains, by an algorithm called Apriori, a set of itemsets f each of which satisfies P̂r(f) ≥ θS. Then, it generates a set of association rules from this set. One of the motivations of the above analysis was to reduce the run-time of Apriori by the use of a sampled data set. Due to this motivation, [1] ignores accuracy, unlike our study. Moreover, since it considers a single association rule, the study fails to relate the size of a discovery problem to the number of examples needed for successful discovery.

5.4 Simultaneous Reliability Evaluation of a Discovered Rule

Simultaneous reliability evaluation of a discovered rule [10] also deals with the case of sampling m examples from a true data set in rule discovery, as in section 5.3 and our study. Unlike the analysis in section 5.3, this study considers both generality and accuracy. The objective considered in [10] is identical to ours, and is represented by (3). Let x̄ represent the negation of x. The analysis fixes m and employs neither θS nor θF. It assumes that (m P̂r(xy), m P̂r(x̄y)) follows a two-dimensional normal distribution, and obtains the exact condition for accomplishing the objective analytically. This is a different framework from ours: we use a discovery algorithm with fixed thresholds θS, θF in (4) and bound the number m of sampled examples. The problem dealt with in [10] can be reduced to the problem of deriving and analyzing two tangent lines of an ellipse, and applying Lagrange's multiplier method gives the following analytical solutions.

$$\widehat{\Pr}(y)\left\{1 - \beta(\delta)\sqrt{\frac{1 - \widehat{\Pr}(y)}{n\widehat{\Pr}(y)}}\right\} \ge 1 - \zeta \quad (32)$$

$$\widehat{\Pr}(x|y)\left\{1 - \beta(\delta)\sqrt{\frac{\widehat{\Pr}(\bar{x}, y)}{\widehat{\Pr}(x, y)\left\{(n + \beta(\delta)^2)\widehat{\Pr}(y) - \beta(\delta)^2\right\}}}\right\} \ge 1 - \epsilon \quad (33)$$

Here β(δ) represents a positive constant which defines the size of a 1 − δ confidence region, i.e., the ellipse for (m P̂r(xy), m P̂r(x̄y)), and can be obtained by a simple numerical integration. Note that (32) and (33) represent conditions for generality and accuracy respectively. Each of them states that the corresponding estimated probability, multiplied by a coefficient related to the size of the confidence region, is no smaller than the corresponding user-expected value (1 − ζ or 1 − ε). Since the study [10] assumes a specific distribution for the simultaneous occurrence of the random variables, it does not fall into the category of worst-case analysis. Similarly to the analysis in section 5.3, the study fails to relate the size of a discovery problem to the number of examples needed for successful discovery.
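For a two-dimensional normal distribution, the 1 − δ confidence region is governed by the chi-square distribution with two degrees of freedom, which admits a closed form; the sketch below uses that closed form as our simplification of the "simple numerical integration" mentioned above, not necessarily the computation used in [10]:

```python
import math

def beta(delta):
    """Half-size parameter of a (1 - delta) confidence ellipse for a
    2-D normal: the chi-square distribution with 2 degrees of freedom
    satisfies P(chi2 <= b^2) = 1 - exp(-b^2 / 2), so b = sqrt(-2 ln delta)."""
    return math.sqrt(-2.0 * math.log(delta))

print(round(beta(0.05), 3))
```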

6 Conclusions

The main contribution of this paper is threefold. 1) We formalized a worst-case analysis of rule discovery. The proposed framework employs thresholds θS, θF for generality and accuracy which are different from the user-expected values 1 − ζ, 1 − ε respectively. We considered the case in which we try to avoid finding a bad rule, and the case in which we try to avoid overlooking a good rule. 2) We derived a lower bound of the number m of required examples. By using probabilistic formalization and appropriate approximations, two lower bounds are obtained for the aforementioned two cases. Quantitative analysis of one of the lower bounds revealed that the total number |R| of rules, the margin θS − 1 + ζ for generality, and the margin θF − 1 + ε for accuracy are important. 3) We analyzed one of the lower bounds for a set of specific problems of conjunction rule discovery. Various useful conclusions are obtained by inspecting the lower bound for a set of typical settings.

Contribution 1) means that this paper has provided, in rule discovery, a framework which corresponds to PAC learning. This framework can be named PAGA (Probably Approximately General and Accurate) discovery. PAGA discovery is promising as a theoretical foundation of active mining, which requests new examples during a discovery process. Contributions 2) and 3) suggest various useful policies for applying rule discovery algorithms in practice. Such policies include sampling/extension of a data set and modification of the class of discovered rules. We can safely conclude that our comprehension of rule discovery has deepened with these contributions and the discussions in section 5. Ongoing work focuses on analyses of more realistic algorithms, especially an algorithm which discovers multiple rules with various conclusions.

Acknowledgement. We are grateful to Setsuo Arikawa for enabling us to initiate this study by suggesting that we pursue the relationship between one of our previous studies and PAC learning. This work was partially supported by the grant-in-aid for scientific research on priority area "Active Mining" from the Japanese Ministry of Education, Culture, Sports, Science and Technology.

References

1. R. Agrawal, H. Mannila, R. Srikant, H. Toivonen, and A. I. Verkamo: "Fast Discovery of Association Rules", Advances in Knowledge Discovery and Data Mining, pp. 307–328, AAAI/MIT Press, Menlo Park, Calif. (1996).
2. U. M. Fayyad, G. Piatetsky-Shapiro, and P. Smyth: "From Data Mining to Knowledge Discovery: An Overview", Advances in Knowledge Discovery and Data Mining, pp. 1–34, AAAI/MIT Press, Menlo Park, Calif. (1996).
3. W. Feller: An Introduction to Probability Theory and Its Applications, John Wiley & Sons, New York (1957).
4. D. D. Jensen and P. R. Cohen: "Multiple Comparisons in Induction Algorithms", Machine Learning, Vol. 38, No. 3, pp. 309–338 (2000).
5. M. J. Kearns and U. V. Vazirani: An Introduction to Computational Learning Theory, MIT Press, Cambridge, Mass. (1994).
6. J. R. Quinlan and R. Cameron-Jones: "Oversearching and Layered Search in Empirical Learning", Proc. Fourteenth Int'l Joint Conf. on Artificial Intelligence (IJCAI), pp. 1019–1024 (1995).
7. S. Russell and P. Norvig: Artificial Intelligence: A Modern Approach, pp. 552–558, Prentice Hall, Upper Saddle River, N.J. (1995).
8. C. Schaffer: "Overfitting Avoidance as Bias", Machine Learning, Vol. 10, No. 2, pp. 153–178 (1993).
9. P. Smyth and R. M. Goodman: "An Information Theoretic Approach to Rule Induction from Databases", IEEE Trans. Knowledge and Data Eng., Vol. 4, No. 4, pp. 301–316 (1992).
10. E. Suzuki: "Simultaneous Reliability Evaluation of Generality and Accuracy for Rule Discovery in Databases", Proc. Fourth Int'l Conf. on Knowledge Discovery and Data Mining (KDD), pp. 339–343 (1998).
11. E. Suzuki: "Scheduled Discovery of Exception Rules", Discovery Science (DS), LNAI 1721, pp. 184–195, Springer-Verlag (1999).

An Efficient Derivation for Elementary Formal Systems Based on Partial Unification

Noriko Sugimoto, Hiroki Ishizaka, and Takeshi Shinohara

Department of Artificial Intelligence, Kyushu Institute of Technology, Kawazu 680-4, Iizuka 820-8502, Japan
{sugimoto, ishizaka, shino}@ai.kyutech.ac.jp

Abstract. An EFS is a kind of logic program expressing various formal languages. We propose an efficient derivation for EFS's called an S-derivation, where all possible unifiers are evaluated at one step of the derivation. In the S-derivation, each unifier is partially applied to each goal clause by assigning variables whose values are uniquely determined from the set of all possible unifiers. This contributes to reducing the amount of backtracking, and thus the S-derivation works efficiently. In this paper, the S-derivation is shown to be complete for the class of regular EFS's. We implement an EFS interpreter based on the S-derivation in the Prolog programming language, and compare its parsing time with that of the DCG's provided by the Prolog interpreter. As the result of experiments, we verify the efficiency of the S-derivation for accepting context-free languages.

1 Introduction

In the area of machine learning or discovery science, it is an important issue to develop efficient systems dealing with formal languages under a theoretical background. An elementary formal system (EFS, for short) is a kind of logic program over the domain of strings [3,11,15]. EFS's are well known to be flexible enough to represent not only the classes of languages in the Chomsky hierarchy [3] but also binary relations over strings [12,13]. It has been shown that the EFS is suitable for discussing learnability in the framework of inductive inference and machine learning of languages [2,3,9,10]. Mukouchi and Arikawa [8] developed a theoretical framework for machine discovery, where refutability of the search space is shown to be the most important factor, and one of the refutably learnable classes is the class of length-bounded EFS's. Theoretically, EFS's can be used as working systems just as Prolog programs are, because a derivation based on the resolution principle [7] is also defined for EFS's. In EFS's, a derivation procedure is formalized as an acceptor for formal languages [3,15]. Furthermore, the derivation can be used to generate languages [14]. The purpose of this research

The research reported here is partially supported by the Telecommunication Advancement Foundation, Japan.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 350–364, 2001. c Springer-Verlag Berlin Heidelberg 2001


is to develop an efficient derivation and to construct an EFS interpreter based on the derivation. Since an EFS deals with strings as its domain, unifications for strings should be computed efficiently at each step of the derivation. However, it is known that the unification problem for strings is computationally hard, and the unifier is not always uniquely determined even if it is restricted to a maximally general unifier [5,6]. On the other hand, for the first-order terms used in the Prolog programming language, the unifier is uniquely determined as the most general unifier. Therefore, in an EFS, backtracking occurs for each selection of unifiers as well as of clauses. Harada et al. [4] introduced restricted EFS's called variable-separated EFS's, in which no variables occur successively in any term. In a variable-separated EFS, the number of possible unifiers is decreased, and the derivation works efficiently. However, the size of a variable-separated EFS can be much larger than that of an equivalent non-variable-separated EFS. This causes inefficiency in parsing languages.

Here, we introduce another approach to developing an efficient EFS interpreter. When strings have successive occurrences of variables, the number of unifiers becomes large, as pointed out by Harada et al. [4]. For example, the strings xyz of variables and a1a2···an of constant symbols have O(n²) unifiers, because, for each i (i = 1, 2, ..., n − 2) and j (j = i + 1, i + 2, ..., n − 1), every substitution replacing x with a1a2···ai, y with ai+1ai+2···aj, and z with aj+1aj+2···an is a unifier of them. In EFS's, since there are many selections of unifiers at each step of a derivation, it has been difficult to construct an efficient interpreter. Thus, we propose a new approach which evaluates all possible unifiers at one step of the derivation. We formalize a derivation with sets of unifiers (an S-derivation, for short).
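The O(n²) count of unifiers for xyz against a ground string can be enumerated directly; a small sketch (the dict encoding of substitutions is ours):

```python
from itertools import combinations

def unifiers_xyz(text):
    """All unifiers of the term xyz (three distinct variables) against a
    ground string: every split into three non-empty substrings, i.e.
    C(n-1, 2) = O(n^2) substitutions for a string of length n."""
    n = len(text)
    return [{'x': text[:i], 'y': text[i:j], 'z': text[j:]}
            for i, j in combinations(range(1, n), 2)]

print(len(unifiers_xyz('a' * 10)))  # C(9, 2) = 36 splits for n = 10
```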
In the S-derivation, each unifier is partially applied to each goal clause by assigning variables whose values are uniquely determined from the set of all possible unifiers. The S-derivation is a natural extension of the standard derivation for EFS's, because the set of unifiers can be regarded as the unique unifier in EFS's corresponding to the most general unifier in the first-order language. We show that the S-derivation is complete for restricted EFS's called regular EFS's, which define the class of languages equivalent to the class of context-free languages. We implement an S-derivation for regular EFS's in the Prolog programming language, and verify the efficiency of the S-derivation by comparing its running time with that of the definite clause grammars (DCG's) provided by the Prolog interpreter. In our EFS interpreter, each unifier is efficiently computed by using the Aho-Corasick pattern matching algorithm [1]. The Aho-Corasick algorithm finds all occurrences of patterns in a text in time linear in the length of the text. A regular EFS is suitable for this computation of unification, because each string in the derivation is a substring of the initially given text. Therefore, all unifiers used in a derivation can be computed by scanning the given text only once. As the results of experiments, we show that the S-derivation using the Aho-Corasick algorithm is efficient with respect to the length of a given text and the number of variables in the EFS.


This paper is organized as follows: In Section 2, we give some notations and definitions including derivation and semantics for EFS's. In Sections 3 and 4, we introduce the S-derivation and prove completeness of the S-derivation. In Section 5, we outline the EFS interpreter based on the S-derivation, and show experimental results for typical examples of EFS's, where the S-derivation works efficiently. Finally, we summarize the results of this research, and describe some open problems.

2 Preliminaries

In this section, we give some basic definitions and notations according to [3,14,15].

2.1 Elementary Formal Systems

For a given set A, the set of all finite strings of symbols from A is denoted by A*. The empty string is denoted by ε. A⁺ denotes the set A* − {ε}. Let Σ, X, and Π be mutually distinct sets. We assume that Σ is a finite set of constant symbols, X is a set of variables, and Π is a finite set of predicate symbols. Each predicate symbol is associated with a non-negative integer called its arity. A term is an element of (Σ ∪ X)⁺. A term is said to be regular if every variable occurs at most once in the term. An atomic formula (atom, for short) is of the form p(π1, π2, ..., πn), where p is a predicate symbol with arity n and each πi is a term (i = 1, 2, ..., n). A definite clause (clause, for short) is of the form A ← B1, ..., Bn (n ≥ 0), where A, B1, ..., Bn are atoms. The atom A and the sequence B1, ..., Bn are called the head and the body of the clause, respectively. A goal clause (goal, for short) is of the form ← B1, ..., Bn (n ≥ 0), and the goal with n = 0 is called the empty goal. An expression is a term, an atom, a clause, or a goal. An expression E is said to be ground if E has no variable. For an expression E and a variable x, var(E) and oc(x, E) denote the set of all variables occurring in E and the number of occurrences of x in E, respectively. An elementary formal system (EFS, for short) is a finite set of clauses.

A substitution θ is a (semi-group) homomorphism from (Σ ∪ X)⁺ to itself satisfying the following conditions:

1. aθ = a for each a ∈ Σ, and
2. the set {x ∈ X | xθ ≠ x}, denoted by D(θ), is finite.

For a substitution θ, if D(θ) = {x1, x2, ..., xn} and xiθ = πi for every i (i = 1, 2, ..., n), then θ is denoted by the set {x1/π1, x2/π2, ..., xn/πn}. For an expression E and a substitution θ, Eθ is defined as the expression obtained by simultaneously replacing each variable x in E with xθ. Let (E1, E2) be a pair of expressions. Then a substitution θ is said to be a unifier of E1 and E2 if E1θ = E2θ.
The set of all unifiers θ of E1 and E2 satisfying D(θ) ⊆ var(E1) ∪ var(E2) is denoted by U(E1, E2). We say that E1 and E2 are unifiable if the set U(E1, E2) is not empty. An expression E1 is a variant of E2 if there exist two substitutions θ and δ such that E1θ = E2 and E2δ = E1.
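The definitions above can be illustrated with a small sketch, using uppercase letters for variables and lowercase letters for constants (an encoding of ours, not the paper's):

```python
def apply_subst(term, theta):
    """Apply substitution theta (a dict variable -> non-empty string) to a
    term, replacing every variable occurrence simultaneously; constants
    (symbols not in theta) map to themselves, as the homomorphism requires."""
    return ''.join(theta.get(s, s) for s in term)

def is_unifier(t1, t2, theta):
    """theta is a unifier of t1 and t2 iff t1·theta = t2·theta."""
    return apply_subst(t1, theta) == apply_subst(t2, theta)

print(apply_subst('aXbY', {'X': 'cc', 'Y': 'd'}))          # accbd
print(is_unifier('aXb', 'aYYb', {'X': 'cc', 'Y': 'c'}))    # True
```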

2.2 The Semantics of EFS's

We give two semantics of EFS's by using provability relations and derivations. First, we introduce the provability semantics. Let Γ and C be an EFS and a clause. Then, the provability relation Γ ⊢ C is defined inductively as follows:

1. If C ∈ Γ then Γ ⊢ C.
2. If Γ ⊢ C then Γ ⊢ Cθ for any substitution θ.
3. If Γ ⊢ A ← B1, ..., Bm and Γ ⊢ Bm ← then Γ ⊢ A ← B1, ..., Bm−1.

A clause C is provable from Γ if Γ ⊢ C holds. The provability semantics of the EFS Γ, denoted by PS(Γ), is defined as the set of all ground atoms A satisfying Γ ⊢ A ←. For an EFS Γ and a unary predicate symbol p, the language defined by Γ and p is denoted by L(Γ, p), and defined as the set of all strings w ∈ Σ⁺ such that p(w) ∈ PS(Γ).

The second semantics is based on a derivation for EFS's. We assume a computation rule R to select an atom from every goal. Let Γ be an EFS, G be a goal, and R be a computation rule. A derivation from G is a (finite or infinite) sequence of triplets (Gi, Ci, θi) (i = 0, 1, ...) which satisfies the following conditions:

1. Gi is a goal, θi is a substitution, Ci is a variant of a clause in Γ, and G0 = G.
2. var(Ci) ∩ var(Cj) = ∅ for every i and j (i ≠ j), and var(Ci) ∩ var(Gi) = ∅ for every i.
3. If Gi = ← A1, ..., Ak, and Am is the atom selected by R, then Ci is of the form A ← B1, ..., Bn satisfying that A and Am are unifiable, θi ∈ U(A, Am), and Gi+1 is of the following form: (← A1, ..., Am−1, B1, ..., Bn, Am+1, ..., Ak)θi.

The atom Am is called a selected atom of Gi, and Gi+1 is called a resolvent of Gi and Ci by θi. A refutation is a finite derivation ending with the empty goal. The procedural semantics of an EFS Γ, denoted by RS(Γ), is defined as the set of all ground atoms A satisfying that there exists a refutation of Γ from the goal ← A. It has been shown that PS(Γ) = RS(Γ) for every EFS Γ [15].
This implies that a string w ∈ Σ⁺ is in the language defined by an EFS Γ and a predicate symbol p if and only if there exists a refutation of Γ from ← p(w). Thus, the derivation procedure can be regarded as an acceptor for the language. Finally, we define a set disjoint from an EFS language. Let Γ be an EFS, and (Gi, Ci, θi) (i = 0, 1, ..., n) be a finite derivation of Γ. The derivation is said to be finitely failed with length n if there exists no clause in Γ such that its head and the selected atom of Gn are unifiable. Furthermore, we define FFS(Γ) as the set of all ground atoms A satisfying that all derivations of Γ from ← A are finitely failed within length n.
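As an illustration of the derivation procedure acting as an acceptor, consider a toy EFS {p(ab) ←; p(aXb) ← p(X)} defining {aⁿbⁿ | n ≥ 1} (our own example, not from the paper); a refutation search for ground goals can be sketched as:

```python
def accepts(w):
    """Refutation search for the toy EFS {p(ab) <- ; p(aXb) <- p(X)},
    whose language L(Gamma, p) is {a^n b^n | n >= 1}. Each step mirrors
    a resolution step: unify the goal <- p(w) with a clause head."""
    if w == 'ab':                 # unit clause: the refutation ends here
        return True
    if len(w) > 2 and w[0] == 'a' and w[-1] == 'b':
        return accepts(w[1:-1])   # unifier {X / w[1:-1]}; new goal <- p(X)
    return False

print([accepts(s) for s in ('ab', 'aabb', 'aab', 'ba')])
```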


3 Extended Derivations with Sets of Unifiers

In this section, we introduce a derivation with sets of unifiers (S-derivation, for short). In the S-derivation, each unifier is partially applied to each goal clause by assigning variables whose values are uniquely determined from the set of all possible unifiers. Since there are infinitely many unifiers for terms containing variables, it is difficult to compute the derivation from a goal containing variables. However, for restricted terms, all unifiers are computable by using maximally general unifiers. The S-derivation works efficiently by using the maximally general unifiers. Furthermore, in this section, the S-derivation is shown to be complete for accepting and generating languages defined by restricted EFS's called regular EFS's.

3.1 Maximally General Unifiers

Let θ = {x1/π1, x2/π2, ..., xm/πm} and δ = {y1/τ1, y2/τ2, ..., yn/τn} be substitutions. Then, we define a composition of θ and δ as follows:

θ · δ = {xi/πiδ | xi ≠ πiδ} ∪ {yj/τj | yj ∉ D(θ)}.

Let θ, δ, and γ be substitutions, and E be an expression. Then, we can prove the following equations along the same line of argument as for definite programs [7]:

1. (Eθ)δ = E(θ · δ), and
2. (θ · δ) · γ = θ · (δ · γ).

Let V be a finite set of variables, and (θ, δ) be a pair of substitutions. Then, we say that θ and δ are equivalent on V if πθ is a variant of πδ for any π ∈ (Σ ∪ V)⁺. We show that the problem of determining whether or not θ and δ are equivalent on V is solvable by the following lemma.

Lemma 1. Let θ and δ be substitutions, and V = {x1, x2, ..., xn} be a finite set of variables. Then, θ and δ are equivalent on V if and only if the following statements hold:

1. xθ is a variant of xδ for every x ∈ V, and
2. x1x2···xnθ is a variant of x1x2···xnδ.

Proof. We can prove this lemma by induction on the length of π ∈ (Σ ∪ V)⁺.

Let (E1, E2) be a pair of expressions. A maximally general unifier (mxgu, for short) of E1 and E2 is a unifier θ ∈ U(E1, E2) satisfying that, for any δ ∈ U(E1, E2) such that θ and δ are not equivalent on var(E1) ∪ var(E2), there is no substitution γ such that θ = δ · γ. The set of all mxgu's of E1 and E2 is denoted by MXGU(E1, E2). For two terms π and τ, we define the number of mxgu's of π and τ as the cardinality of the set of equivalence classes of substitutions on var(π) ∪ var(τ). Thus, we say that MXGU(π, τ) is finite if the number of mxgu's is finite without equivalent substitutions on var(π) ∪ var(τ). From the definition of maximally general unifiers, the following lemmas hold [5,6,14].


Lemma 2. Let π and τ be regular terms such that var(π) ∩ var(τ) = ∅. Then, the set MXGU(π, τ) is finite and computable.

Lemma 3. Let π and τ be terms. If π is ground, then the set MXGU(π, τ) is finite and computable, and MXGU(π, τ) = U(π, τ) holds.

Lemma 4. Let x be a variable and π be a term which does not include x. Then, MXGU(π, x) is a singleton set which consists of the substitution {x/π}.

3.2 S-Derivation

In the following argument, we assume that every substitution θ satisfies θ · θ = θ, that is, var(xθ) ∩ D(θ) = ∅ for every variable x ∈ D(θ).

Definition 1. For two substitutions θ and δ, we define θ ◦ δ as the set of all substitutions σ satisfying σ = θ · δ · γ = δ · θ · γ for some substitution γ.

Note that, for each element σ of the set θ ◦ δ, xσ is an element of the intersection of the sets of strings which are unifiable with xθ and xδ. Substitutions θ and δ are said to be inconsistent if θ ◦ δ = ∅, and consistent otherwise. We define MIN(θ ◦ δ) as the minimum subset of θ ◦ δ satisfying that, for any σ ∈ θ ◦ δ, there exists σ′ ∈ MIN(θ ◦ δ) such that σ = σ′ · γ for some substitution γ. For two finite sets Θ and Δ of substitutions, we define

$$1.\ \mathrm{MIN}(\Theta \circ \Delta) = \bigcup_{(\theta,\delta) \in \Theta \times \Delta} \mathrm{MIN}(\theta \circ \delta), \quad \text{and} \quad 2.\ \mathrm{INT}(\Theta) = \bigcap_{\theta \in \Theta} \theta.$$

Lemma 5. Let θ and δ be substitutions. If δ is ground, then the set MIN(θ ◦ δ) is finite and computable.

Proof. Let θ and δ be the substitutions {xi/πi | i ∈ {1, 2, ..., m}} and {yi/ti | i ∈ {1, 2, ..., n}}, respectively. If σ ∈ θ ◦ δ then, from Definition 1, there exists a substitution γ satisfying that

1. σ = θ · δ · γ = {xi/πiδγ | i = 1, 2, ..., m} ∪ δ ∪ γ, and
2. πiδγ = tj for every xi = yj ∈ D(θ) ∩ D(δ).

Let S be the set of all possible γ satisfying the above conditions and D(γ) ⊆ var(π1δ) ∪ var(π2δ) ∪ ··· ∪ var(πmδ). Since, from Lemma 3, the set U(πiδ, tj) is finite for each xi = yj ∈ D(θ) ∩ D(δ), the set S is also finite and computable. It is clear that σ = θ · δ · γ ∈ MIN(θ ◦ δ) for each γ ∈ S, because every γ ∈ S is ground. Furthermore, we can show that, for every substitution γ′, if θ · δ · γ′ = δ · θ · γ′ holds, then there exists γ ∈ S such that γ′ = γ · γ′′ for some substitution γ′′. Thus, the set MIN(θ ◦ δ) consists of θ · δ · γ for every γ ∈ S. It is clear that MIN(θ ◦ δ) is finite and computable.

356

N. Sugimoto, H. Ishizaka, and T. Shinohara

Example 1. Consider the substitutions θ1 = {x/y}, θ2 = {x/aaz, y/az}, θ3 = {x/aaz, y/zb}, and δ = {x/aaa, y/ab}.

From xθ1δ = yδ = ab and xδ = aaa, the set U(xθ1δ, xδ) is empty. Thus, θ1 ◦ δ = ∅, and θ1 and δ are inconsistent.

From xθ2δ = aazδ = aaz and xδ = aaa, U(xθ2δ, xδ) is the set {{z/a}}. From yθ2δ = azδ = az and yδ = ab, U(yθ2δ, yδ) is the set {{z/b}}. Then, there exists no substitution γ such that xθ2δγ = aaa and yθ2δγ = ab. Therefore, θ2 ◦ δ = ∅, and θ2 and δ are inconsistent.

From xθ3δ = aazδ = aaz and xδ = aaa, U(xθ3δ, xδ) is the set {{z/a}}. From yθ3δ = zbδ = zb and yδ = ab, U(yθ3δ, yδ) is the set {{z/a}}. Then, only the substitution γ = {z/a} satisfies xθ3δγ = aaa and yθ3δγ = ab. Therefore, MIN(θ3 ◦ δ) has only one element, θ3 · δ · γ = {x/aaa, y/ab, z/a}, and θ3 and δ are consistent.

Lemma 6. Let θ = {xi/πi | i = 1, 2, . . . , m} and δ = {y/τ} be substitutions such that, for each i (i = 1, 2, . . . , m), πi and τ are regular, and var(πi) ∩ var(τ) = D(θ) ∩ var(τ) = ∅. Then, the set MIN(θ ◦ δ) is finite and computable.

Proof. If y ∉ D(θ), then δ · θ · δ = θ · δ and θ · δ · δ = θ · δ from the assumption and δ · δ = δ. Thus, θ · δ ∈ θ ◦ δ holds. Furthermore, for every σ ∈ θ ◦ δ, σ = θ · δ · γ holds for some substitution γ. Thus, the set MIN(θ ◦ δ) is the singleton set {θ · δ}.

If y = xk ∈ D(θ), then y ∉ var(πi) for every i (i = 1, 2, . . . , m) from the assumption θ · θ = θ. Thus, θ · δ = θ holds. Furthermore, from the assumption of this lemma, var(τ) ∩ D(θ) = ∅ and δ · θ = δ ∪ {xi/πi | i ≠ k}. If πk and τ are not unifiable, then θ and δ are inconsistent. Otherwise, for any γ ∈ MXGU(πk, τ), θ · γ ∈ θ ◦ δ, because θ · δ · γ = δ · θ · γ = θ · γ holds. Furthermore, from the definition of mxgu's, for any substitution σ such that θ · δ · σ ∈ θ ◦ δ, there exists γ ∈ MXGU(πk, τ) satisfying σ = γ · γ′ for some substitution γ′. Thus, MIN(θ ◦ δ) is the set {θ · γ | γ ∈ MXGU(πk, τ)}.
Since the set MXGU(πk, τ) is finite and computable from Lemma 2, MIN(θ ◦ δ) is also finite and computable.

Example 2. Let θ1 = {x/aya, z/y}, θ2 = {y/aza}, and δ = {y/y1y2}. Then, MIN(θ1 ◦ δ) is a singleton set which consists of θ1 · δ = {x/ay1y2a, y/y1y2, z/y1y2}. On the other hand, since

MXGU(aza, y1y2) = { {y1/a, y2/za}, {y1/az, y2/a}, {y1/az1, y2/z2a, z/z1z2} },

we can obtain the following set:

MIN(θ2 ◦ δ) = { {y/aza, y1/a, y2/za}, {y/aza, y1/az, y2/a}, {y/az1z2a, y1/az1, y2/z2a, z/z1z2} }.

Definition 2. Let Γ be an EFS, G be a goal of Γ, and R be a computation rule. An S-derivation from G is a (finite or infinite) sequence of triplets (Gi, Ci, Θi) (i = 0, 1, . . .) which satisfies the following conditions:

An Eﬃcient Derivation for Elementary Formal Systems

357

1. Gi is a goal, Θi is a finite set of substitutions, Ci is a variant of a clause in Γ, and G0 = G.
2. var(Ci) ∩ var(Cj) = ∅ for every i and j (i ≠ j), and var(Ci) ∩ var(Gi) = ∅ for every i.
3. Let Gi = ← A1, . . . , Ak, Ci = A ← B1, . . . , Bq, and let Am be the selected atom of Gi. If i = 0, then Θi = MXGU(Am, A). Otherwise, Θi = MIN(Θi−1 ◦ MXGU(Am, A)) for each i. The next goal Gi+1 is of the following form:

(← A1, . . . , Am−1, B1, . . . , Bq, Am+1, . . . , Ak) INT(Θi).

If the S-derivation ends with the empty goal Gn, then it is said to be an S-refutation from G, and each substitution in Θn−1 is called an answer substitution for G by Γ.

Definition 3. Let Γ be an EFS, and let (Gi, Ci, Θi) (i = 0, 1, . . . , n) be a finite S-derivation of Γ. The derivation is said to be finitely failed with the length n if

1. Θn = ∅, or
2. there exists no clause in Γ such that its head and the selected atom of Gn are unifiable.

For an EFS Γ, we define the following two sets: SFFS(Γ) is the set of all ground atoms A satisfying that all S-derivations of Γ from ← A are finitely failed within some length n, and SRS(Γ) is the set of all ground atoms A satisfying that there exists an S-refutation of Γ from ← A.

3.3 Completeness of S-Derivation

An EFS Γ is said to be regular if all predicate symbols in Γ are unary, and each clause A ← B1, B2, . . . , Bn in Γ satisfies the following conditions:

1. the term in A is regular,
2. the terms in B1, B2, . . . , Bn are mutually distinct variables, and
3. var(B1) ∪ var(B2) ∪ · · · ∪ var(Bn) ⊆ var(A).

It has been shown that the class of languages defined by regular EFSs is equivalent to the class of context-free languages [3]. For regular EFSs, we show that the S-derivation is complete by the following theorem.

Theorem 1. For every regular EFS Γ, PS(Γ) = RS(Γ) = SRS(Γ) holds.

The above theorem can be proved by the following lemmas and proposition.

Lemma 7. Let Γ be a regular EFS, G0 be a ground goal, and (Gi, Ci, Θi) (i = 0, 1, . . . , n) be an S-derivation from G0. Then, for every σ ∈ Θn−1, σ is ground, and there exists a derivation (G′i, Ci, θi) (i = 0, 1, . . . , n) such that G′0 = G0 and G′i = Giσ for each i (i = 1, 2, . . . , n).


Proof. Let p(w) be the selected atom of G0, and p(π) be the head of C0. Then, from the definition of an S-derivation, Θ0 = U(w, π) holds. Furthermore, for every σ ∈ Θ0, INT(Θ0) ⊆ σ holds. Let G′1 be the resolvent of G0 and C0 by σ. Then, the derivation (G0, C0, σ), G′1 satisfies the statement.

Next, we assume that p(τ) is the selected atom of Gn−1, and p(π) is the head of Cn−1. Then, from the definitions of an S-derivation and a regular EFS, τ is a ground term w ∈ Σ+ or a variable x ∈ D(σn−2) for every σn−2 ∈ Θn−2. If σ ∈ Θn−1, then there exists σn−2 ∈ Θn−2 such that σ ∈ σn−2 ◦ δ for some δ ∈ MXGU(τ, π). If the selected atom is p(w), then δ is ground; thus, σ is also ground. If the selected atom is p(x), then δ = {x/π} from Lemma 4. Since x/w ∈ σn−2 for some w ∈ Σ+, σ ∈ σn−2 ◦ δ = {σn−2 · γ | γ ∈ MXGU(w, π)}. Thus, σ is ground.

From the induction hypothesis, there exists a derivation (G′i, Ci, θi) (i = 0, 1, . . . , n − 1) such that G′0 = G0 and G′i = Giσn−2 for each i (i = 1, 2, . . . , n − 1). Since σn−2 is ground, it is clear that σn−2 ⊆ σ. Let G′n be the resolvent of G′n−1 and Cn−1 by θn−1 ∈ U(w, π); then it is clear that the derivation (G′i, Ci, θi) (i = 0, 1, . . . , n) satisfies the statement.

Lemma 8. Let Γ be a regular EFS, G0 be a ground goal, and (Gi, Ci, θi) (i = 0, 1, . . .) be a derivation from G0. Then, there exists an S-derivation (G′i, Ci, Θi) (i = 0, 1, . . .) and a substitution σi ∈ Θi such that G′0 = G0 and G′i+1σi = Gi+1 for each i (i = 1, 2, . . .).

Proof. Let p(w) be the selected atom of G0, and p(π) be the head of C0. Then, from the definition of a derivation, θ0 ∈ U(w, π). On the other hand, from the definition of an S-derivation, Θ0 = MXGU(w, π) = U(w, π). Thus, θ0 ∈ Θ0 and G′1θ0 = G1. Next, we assume that there exists σk−1 ∈ Θk−1 such that G′kσk−1 = Gk. Let p(w) be the selected atom of Gk, and let the head of Ck be p(π). Then, from the definition of a derivation, θk ∈ U(w, π).
On the other hand, from the definition of an S-derivation and the induction hypothesis, the selected atom of G′k has the form p(w) or p(x) for some w ∈ Σ+ and x ∈ D(σk−1). If the selected atom is p(w), then Θk = MIN(Θk−1 ◦ U(w, π)). Since σk−1 ∈ Θk−1 and θk ∈ U(w, π), σk−1 ◦ θk ⊆ Θk holds. Furthermore, from the definition of an S-derivation, D(σk−1) ∩ D(θk) = ∅. Thus, σk−1 and θk are consistent, and σk−1 ∪ θk ∈ σk−1 ◦ θk. If the selected atom is p(x), then Θk = MIN(Θk−1 ◦ MXGU(x, π)), and MXGU(x, π) is the singleton set {{x/π}} from Lemma 4. Since σk−1 ∈ Θk−1, MIN(σk−1 ◦ {x/π}) ⊆ Θk holds. Furthermore, from x/w ∈ σk−1 and U(w, π) ≠ ∅, {x/π} and σk−1 are consistent. From the statement in the proof of Lemma 6, MIN({x/π} ◦ σk−1) = {σk−1 · γ | γ ∈ MXGU(w, π)}. Since θk ∈ MXGU(w, π), σk−1 · θk = σk−1 ∪ θk ∈ Θk. It is clear that σk−1 ∪ θk is a substitution and satisfies the statement.

From Lemmas 7 and 8, we can prove the following proposition.


Proposition 1. Let Γ be a regular EFS and G0 be a ground goal. Then, there exists a refutation from G0 if and only if there exists an S-refutation from G0.

Furthermore, we can also prove the following theorem.

Theorem 2. For every regular EFS Γ, FFS(Γ) = SFFS(Γ) holds.

Example 3. For an EFS

Γ = { (1) p(xy) ← q(x), r(y);  (2) q(a^n) ←;  (3) r(aa) ← }

and a goal ← p(a^{n+1}), Fig. 1 describes the derivation and the S-derivation as trees, like SLD-trees [7]. In the derivation and the S-derivation in Fig. 1, the label (k, θ) on each edge represents the derivation by the clause (k) and the unifier or the set of unifiers θ. The derivation needs n + 1 backtrackings to determine p(a^{n+1}) ∈ FF(Γ). On the other hand, in the S-derivation, this is determined with only two backtrackings.
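The behavior in Example 3 can be reproduced with a naive depth-first derivation over ground goals. The sketch below uses a hypothetical encoding (atoms as (predicate, string) pairs, clause-head patterns as token lists with '?'-prefixed variables) and implements the ordinary derivation, not the S-derivation:

```python
def unifiers(ground, pattern, binding=None):
    """All ways to instantiate '?'-variables in `pattern` (a token list)
    with nonempty substrings so that it spells out `ground`."""
    binding = dict(binding or {})
    if not pattern:
        if not ground:
            yield binding
        return
    head, rest = pattern[0], pattern[1:]
    if head.startswith('?'):
        if head in binding:
            v = binding[head]
            if ground.startswith(v):
                yield from unifiers(ground[len(v):], rest, binding)
        else:
            for i in range(1, len(ground) + 1):
                yield from unifiers(ground[i:], rest, {**binding, head: ground[:i]})
    elif ground.startswith(head):
        yield from unifiers(ground[len(head):], rest, binding)

def solve(goals, program):
    """Naive depth-first derivation; goals is a list of (pred, ground_string)."""
    if not goals:
        return True
    (pred, w), rest = goals[0], goals[1:]
    for head_pred, head_pat, body in program:
        if head_pred != pred:
            continue
        for b in unifiers(w, head_pat):       # try every unifier, backtracking on failure
            if solve([(q, b[v]) for q, v in body] + rest, program):
                return True
    return False

def example_program(n):
    """The EFS Γ of Example 3 for a fixed n."""
    return [('p', ['?x', '?y'], [('q', '?x'), ('r', '?y')]),  # (1)
            ('q', ['a' * n], []),                             # (2)
            ('r', ['aa'], [])]                                # (3)
```

With n = 3, the goal ← p(a^5) succeeds (via q(a^3) and r(aa)), while ← p(a^4) finitely fails, as in FF(Γ).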

Fig. 1. Backtracking by a derivation and an S-derivation

4 An Implementation of an EFS Interpreter

In this section, we outline an implementation of an EFS interpreter based on the S-derivation, and give some results of experiments on typical examples of EFSs for which the S-derivation works efficiently. In order to construct an efficient interpreter, we adopt two ideas:

1. computing each unifier by using the Aho-Corasick pattern matching algorithm, and
2. reducing the number of backtrackings of a derivation by using an S-derivation.

4.1 Unifications by the Aho-Corasick Pattern Matching Algorithm

The Aho-Corasick pattern matching algorithm finds all occurrence positions of a set of patterns by scanning the given text. From a given EFS, the EFS interpreter builds a pattern matching machine in advance for all ground strings in the EFS. For each given ground goal, the pattern matching machine scans the ground term in the given goal clause and outputs all occurrence positions of the patterns on the term. From these occurrence positions, each unifier is efficiently computed.

Example 4. Let w = aaabaaabaaabaaa and τ = xbyabz be terms, where a, b ∈ Σ and x, y, z ∈ X. For the constant substrings b and ab of τ, the pattern matching machine finds the occurrence positions on w as follows:

b : (4 : 4), (8 : 8), (12 : 12),
ab : (3 : 4), (7 : 8), (11 : 12),

where (i : j) in the line of b (resp. ab) means that the substring from the ith to the jth position of w is b (resp. ab). For each occurrence (ib : jb) of b and (iab : jab) of ab such that jb ≤ iab, we obtain unifiers of w and τ as follows:

{x/(1 : 3), y/(5 : 6), z/(9 : 15)} from ((4 : 4), (7 : 8)),
{x/(1 : 3), y/(5 : 10), z/(13 : 15)} from ((4 : 4), (11 : 12)),
{x/(1 : 7), y/(8 : 10), z/(13 : 15)} from ((8 : 8), (11 : 12)).

A regular EFS is well suited to this computation of unifiers, because every ground term in each resolvent of the derivation is a substring of the term in the given initial goal. This implies that all unifiers used in the derivation can be computed by scanning the given initial goal only once with the pattern matching machine.

4.2 An Implementation of the S-Derivation

Since an S-derivation deals with the set of all possible unifiers at each step of the derivation, it is important to adopt a compact representation of this set. From the property of a regular EFS, all terms in the derivation are substrings of the term in the given initial goal. Thus, the set of all possible unifiers can be divided into several parts, as shown by the next example.


Example 5. Let w = aabaabaabaa and τ = xybz be terms, where a, b ∈ Σ and x, y, z ∈ X. Then, the set of all unifiers

U(w, τ) = { {x/a, y/a, z/aabaabaa}, {x/a, y/abaa, z/aabaa}, {x/aa, y/baa, z/aabaa}, {x/aab, y/aa, z/aabaa}, {x/aaba, y/a, z/aabaa}, {x/a, y/abaabaa, z/aa}, {x/aa, y/baabaa, z/aa}, {x/aab, y/aabaa, z/aa}, {x/aaba, y/abaa, z/aa}, {x/aabaa, y/baa, z/aa}, {x/aabaab, y/aa, z/aa}, {x/aabaaba, y/a, z/aa} }

can be divided into these three parts:

U1 = { {x/a, y/a, z/aabaabaa} },
U2 = { {x/a, y/abaa, z/aabaa}, {x/aa, y/baa, z/aabaa}, {x/aab, y/aa, z/aabaa}, {x/aaba, y/a, z/aabaa} },
U3 = { {x/a, y/abaabaa, z/aa}, {x/aa, y/baabaa, z/aa}, {x/aab, y/aabaa, z/aa}, {x/aaba, y/abaa, z/aa}, {x/aabaa, y/baa, z/aa}, {x/aabaab, y/aa, z/aa}, {x/aabaaba, y/a, z/aa} }.

Furthermore, each Ui (i = 1, 2, 3) is represented as follows:

U1 = {{x/(1 : 1), y/(2 : 2), z/(4 : 11)}},
U2 = {{x/(1 : k), y/(k + 1 : 5), z/(7 : 11)} | 1 ≤ k ≤ 4},
U3 = {{x/(1 : k), y/(k + 1 : 8), z/(10 : 11)} | 1 ≤ k ≤ 7},

where each (i : j) represents the substring from the ith to the jth position of w. For the EFS

Γ = { p(xybz) ← q1(x), q2(y), q3(z);  q1(aa) ←;  q2(baa) ←;  q3(aabaa) ← }

and the set U2, the S-derivation from ← p(w) is shown in Fig. 2.
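The grouping in Example 5 can be computed mechanically for the pattern τ = xybz: each occurrence of the constant b in w (with room for nonempty x and y before it and a nonempty z after it) yields one family of unifiers, parameterized by the split point between x and y. A sketch, in a hypothetical encoding where substitutions map variables to 1-indexed position pairs:

```python
def unifier_families(w):
    """All unifiers of w with the pattern x y b z (x, y, z nonempty),
    grouped into one family per occurrence of the constant b."""
    n = len(w)
    families = []
    for p in range(1, n + 1):                    # 1-indexed scan for 'b'
        if w[p - 1] != 'b' or p < 3 or p > n - 1:
            continue                             # need nonempty x, y before and z after
        family = [{'x': (1, k), 'y': (k + 1, p - 1), 'z': (p + 1, n)}
                  for k in range(1, p - 1)]      # split point between x and y
        families.append(family)
    return families
```

For w = aabaabaabaa this yields three families of sizes 1, 4, and 7, matching U1, U2, and U3; each family is a compact interval representation that the S-derivation can manipulate in place of the twelve individual unifiers.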

Fig. 2. An S-derivation from the goal ← p(w).


We can easily show that the derivation with the divided sets of unifiers is equivalent to the S-derivation. From the occurrence positions given by the pattern matching machine, each set Ui is efficiently computed. Thus, the S-derivation itself is efficient.

4.3 Experimental Results

We construct three types of EFS interpreters, C1, C2, and C3, where C1, C2, and C3 use a derivation with naive unifications, a derivation with unifications by the Aho-Corasick algorithm, and an S-derivation with unifications by the Aho-Corasick algorithm, respectively. We verify the efficiency of the S-derivation and of the unifications using the Aho-Corasick algorithm by comparing the running times of these interpreters with that of the definite clause grammar (DCG) provided by the Prolog interpreter. We consider the following EFSs and DCGs:

Γ1 = { p0(x1x2) ← p1(x1), p2(x2);  p1(aaxaa) ← p1(x);  p1(bbxbb) ← p1(x);  p1(aaaa) ←;  p1(bbbb) ←;  p2(a) ←;  p2(aa) ← }

D1 = { p0 → p1, p2;  p1 → aap1aa;  p1 → bbp1bb;  p1 → aaaa;  p1 → bbbb;  p2 → a;  p2 → aa }

Γ2 = { p0(x1x2x3x4aaa) ← p1(x1), p1(x2), p1(x3), p1(x4);  p0(aaa) ←;  p1(axa) ← p1(x);  p1(bxb) ← p1(x);  p1(a) ←;  p1(b) ←;  p1(aa) ←;  p1(bb) ← }

D2 = { p0 → p1 p1 p1 p1 aaa;  p0 → aaa;  p1 → ap1a;  p1 → bp1b;  p1 → a;  p1 → b;  p1 → aa;  p1 → bb }

The DCG Di and the EFS Γi represent the same language (i = 1, 2).

Table 1. The running time for the EFS Γ1 and the DCG D1 (sec.)

The length of the text | C1     | C2      | C3   | DCG
100                    | 18.17  | 50.46   | 2.03 | 0.2
200                    | 64.05  | 195.12  | 3.93 | 0.4
300                    | 137.86 | 435.54  | 5.86 | 0.54
400                    | 238.4  | 762.86  | 7.78 | 0.76
500                    | 367.11 | 1181.39 | 9.62 | 0.89

The running time by EFS interpreters for Γ1 and the DCG for D1 are shown in Table 1. The input data consist of 30 strings from {a, b}. From the results of this experiment, If an EFS has successive occurrence of variables, then an S-derivation is more eﬃcient than the derivation as shown by the diﬀerence between the running time of C2 and C3 .


Table 2. The running time for the EFS Γ2 and the DCG D2 (sec.)

The length of the text | C1      | C2     | C3      | DCG
5                      | 8.71    | 8.75   | 9.25    | 6.49
10                     | 88.42   | 17.42  | 20.05   | 22.43
15                     | 473.15  | 52.62  | 73.24   | 115.5
20                     | 1619.83 | 168.24 | 351.96  | 528.6
25                     | 4200.69 | 424.1  | 1175.22 | 1648.99

In Table 2, we present the running times of each EFS interpreter and of the DCG, for Γ2 and D2. The input data consist of 1000 strings over {a, b}. The unification by the Aho-Corasick algorithm is efficient, as shown by the difference between the running times of C1 and C2. Furthermore, we find that C2 and C3 are more efficient than the DCG. This result indicates that the number of backtrackings by the EFS interpreter is smaller than that by the DCG.

5 Conclusion

We have proposed an efficient derivation for EFSs, called the S-derivation, in which all possible unifiers are evaluated together at each step of the derivation. We have shown that the S-derivation is complete for accepting context-free languages. Furthermore, we have implemented the S-derivation and verified its efficiency by comparing its running time with that of DCGs.

One of the open problems is to discuss the computability of the S-derivation for extended classes of regular EFSs. Since, in the S-derivation, each resolvent contains variables even if the initial goal is ground, the unification should be efficiently computable for terms containing variables. However, it is known that the unification problem for non-regular terms is NP-complete. Therefore, we have to consider another approach for the extended classes of EFSs.

The S-derivation can be applied to translations over strings. We have already constructed a translator for regular TEFSs [12], which represent binary relations over context-free languages. It is future work to formalize the generation of languages by the S-derivation in the framework of TEFSs, and to design a translator for real data by using our results.

References

1. A. V. Aho and M. J. Corasick: Efficient string matching: An aid to bibliographic search, Communications of the ACM 18, No. 6, 333–340 (1975).
2. S. Arikawa, S. Miyano, A. Shinohara, T. Shinohara, and A. Yamamoto: Algorithmic learning theory with elementary formal systems, IEICE Transactions on Information and Systems E75-D, 405–414 (1992).
3. S. Arikawa, T. Shinohara, and A. Yamamoto: Learning elementary formal systems, Theoretical Computer Science 95, 97–113 (1992).


4. N. Harada, S. Arikawa, and H. Ishizaka: A class of elementary formal systems that has an efficient parsing algorithm, Information Modeling and Knowledge Bases IX, 89–101 (1997).
5. J. Jaffar: Minimal and complete word unification, Journal of the ACM 37, 47–85 (1990).
6. D. Kapur: Complexity of unification problems with associative-commutative operation, Journal of Automated Reasoning 9, 261–288 (1992).
7. J. W. Lloyd: Foundations of logic programming (second edition), Springer-Verlag (1987).
8. Y. Mukouchi and S. Arikawa: Towards a mathematical theory of machine discovery from facts, Theoretical Computer Science 137, 53–84 (1995).
9. T. Shinohara: Inductive inference on monotonic formal systems from positive data, New Generation Computing 8, 371–384 (1991).
10. T. Shinohara: Rich classes inferable from positive data: Length-bounded elementary formal systems, Information and Computation 108, 175–186 (1994).
11. R. Smullyan: Theory of formal systems, Princeton Univ. Press, Princeton (1961).
12. N. Sugimoto, K. Hirata, and H. Ishizaka: Constructive learning of translations based on dictionaries, In Proceedings of the Seventh International Workshop on Algorithmic Learning Theory, Lecture Notes in Artificial Intelligence 1160, 177–184 (1996).
13. N. Sugimoto: Learnability of translations from positive examples, In Proceedings of the Ninth International Conference on Algorithmic Learning Theory, Lecture Notes in Artificial Intelligence 1501, 169–178 (1998).
14. N. Sugimoto and H. Ishizaka: Generating languages by a derivation procedure for elementary formal systems, Information Processing Letters 69, 161–166 (1999).
15. A. Yamamoto: Procedural semantics and negative information of elementary formal system, Journal of Logic Programming 13, 89–97 (1992).

Computational Revision of Quantitative Scientific Models

Kazumi Saito¹, Pat Langley², Trond Grenager², Christopher Potter³, Alicia Torregrosa³, and Steven A. Klooster³

¹ NTT Communication Science Laboratories, 2-4 Hikaridai, Seika, Soraku, Kyoto 619-0237 Japan, saito@cslab.kecl.ntt.co.jp
² Computational Learning Laboratory, CSLI, Stanford University, Stanford, California 94305 USA, {langley,grenager}@cs.stanford.edu
³ Ecosystem Science and Technology Branch, NASA Ames Research Center, MS 242-4, Moffett Field, California 94035 USA, {cpotter,lisy,sklooster}@gaia.arc.nasa.gov

Abstract. Research on the computational discovery of numeric equations has focused on constructing laws from scratch, whereas work on theory revision has emphasized qualitative knowledge. In this paper, we describe an approach to improving scientific models that are cast as sets of equations. We review one such model for aspects of the Earth ecosystem, then recount its application to revising parameter values, intrinsic properties, and functional forms, in each case achieving a reduction in error on Earth science data while retaining the communicability of the original model. After this, we consider earlier work on computational scientific discovery and theory revision, then close with suggestions for future research on this topic.

1 Research Goals and Motivation

Research on computational approaches to scientific knowledge discovery has a long history in artificial intelligence, dating back over two decades (e.g., Langley, 1979; Lenat, 1977). This body of work has led steadily to more powerful methods and, in recent years, to new discoveries deemed worth publication in the scientific literature, as reviewed by Langley (1998). However, despite this progress, mainstream work on the topic retains some important limitations. One drawback is that few approaches to the intelligent analysis of scientific data can use available knowledge about the domain to constrain the search for laws or explanations. Moreover, although early work on computational discovery cast discovered knowledge in notations familiar to scientists, more recent efforts have not. Rather, influenced by the success of machine learning and data mining, many researchers have adopted formalisms developed by these fields, such as decision trees and Bayesian networks. A return to methods that operate on established scientific notations seems necessary for scientists to understand their results.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 336–349, 2001.
© Springer-Verlag Berlin Heidelberg 2001


Like earlier research on computational scientiﬁc discovery, our general approach involves deﬁning a space of possible models stated in an established scientiﬁc formalism, speciﬁcally sets of numeric equations, and developing techniques to search that space. However, it diﬀers from previous work in this area by starting from an existing scientiﬁc model and using heuristic search to revise the model in ways that improve its ﬁt to observations. Although there exists some research on theory reﬁnement (e.g., Ourston & Mooney 1990; Towell, 1991), it has emphasized qualitative knowledge rather than quantitative models that relate continuous variables, which play a central role in many sciences. In the pages that follow, we describe an approach to revising quantitative models of complex systems. We believe that our approach is a general one appropriate for many scientiﬁc domains, but we have focused our eﬀorts on one area – certain aspects of the Earth ecosystem – for which we have a viable model, existing data, and domain expertise. We brieﬂy review the domain and model before moving on to describe our approach to knowledge discovery and model revision. After this, we present some initial results that suggest our approach can improve substantially the model’s ﬁt to available data. We close with a discussion of related discovery work and directions for future research.

2 A Quantitative Model of the Earth Ecosystem

Data from the latest generation of satellites, combined with readings from ground sources, hold great promise for testing and improving existing scientiﬁc models of the Earth’s biosphere. One such model, CASA, developed by Potter and Klooster (1997, 1998) at NASA Ames Research Center, accounts for the global production and absorption of biogenic trace gases in the Earth atmosphere, as well as predicting changes in the geographic patterns of major vegetation types (e.g., grasslands, forest, tundra, and desert) on the land. CASA predicts, with reasonable accuracy, annual global ﬂuxes in trace gas production as a function of surface temperature, moisture levels, and soil properties, together with global satellite observations of the land surface. The model incorporates diﬀerence equations that represent the terrestrial carbon cycle, as well as processes that mineralize nitrogen and control vegetation type. These equations describe relations among quantitative variables and lead to changes in the modeled outputs over time. Some processes are contingent on the values of discrete variables, such as soil type and vegetation, which take on diﬀerent values at diﬀerent locations. CASA operates on gridded input at diﬀerent levels of resolution, but typical usage involves grid cells that are eight kilometers square, which matches the resolution for satellite observations of the land surface. To run the CASA model, the diﬀerence equations are repeatedly applied to each grid cell independently to produce new variable values on a daily or monthly basis, leading to predictions about how each variable changes, at each location, over time. Although CASA has been quite successful at modeling Earth’s ecosystem, there remain ways in which its predictions diﬀer from observations, suggesting that we invoke computational discovery methods to improve its ability to ﬁt the data. The result would be a revised model, cast in the same notation as the

Table 1. Variables used in the NPPc portion of the CASA ecosystem model.

NPPc is the net plant production of carbon at a site during the year.
E is the photosynthetic efficiency at a site after factoring in various sources of stress.
T1 is a temperature stress factor (0 < T1 < 1) for cold weather.
T2 is a temperature stress factor (0 < T2 < 1), nearly Gaussian in form but falling off more quickly at higher temperatures.
W is a water stress factor (0.5 < W < 1) for dry regions.
Topt is the average temperature for the month at which MON-FAS-NDVI takes on its maximum value at a site.
Tempc is the average temperature at a site for a given month.
EET is the estimated evapotranspiration (water loss due to evaporation and transpiration) at a site.
PET is the potential evapotranspiration (water loss due to evaporation and transpiration given an unlimited water supply) at a site.
PET-TW-M is a component of potential evapotranspiration that takes into account the latitude, time of year, and days in the month.
A is a polynomial function of the annual heat index at a site.
AHI is the annual heat index for a given site.
MON-FAS-NDVI is the relative vegetation greenness for a given month as measured from space.
IPAR is the energy from the sun that is intercepted by vegetation after factoring in time of year and days in the month.
FPAR-FAS is the fraction of energy intercepted from the sun that is absorbed photosynthetically after factoring in vegetation type.
MONTHLY-SOLAR is the average solar irradiance for a given month at a site.
SOL-CONVER is 0.0864 times the number of days in each month.
UMD-VEG is the type of ground cover (vegetation) at a site.

original one, that incorporates changes which are interesting to Earth scientists and which improve our understanding of the environment. Because the overall CASA model is quite complex, involving many variables and equations, we decided to focus on one portion that lies on the model’s ‘fringes’ and that does not involve any diﬀerence equations. Table 1 describes the variables that occur in this submodel, in which the dependent variable, NPPc, represents the net production of carbon. As Table 2 indicates, the model predicts this quantity as the product of two unobservable variables, the photosynthetic eﬃciency, E, at a site and the solar energy intercepted, IPAR, at that site. Photosynthetic eﬃciency is in turn calculated as the product of the maximum eﬃciency (0.56) and three stress factors that reduce this eﬃciency. One stress term, T2, takes into account the diﬀerence between the optimum temperature, Topt, and actual temperature, Tempc, for a site. A second factor, T1, involves


Table 2. Equations used in the NPPc portion of the CASA ecosystem model.

NPPc = Σ_month max(E · IPAR, 0)
E = 0.56 · T1 · T2 · W
T1 = 0.8 + 0.02 · Topt − 0.0005 · Topt²
T2 = 1.18 / [(1 + e^(0.2·(Topt−Tempc−10))) · (1 + e^(0.3·(Tempc−Topt−10)))]
W = 0.5 + 0.5 · EET/PET
PET = 1.6 · (10 · Tempc/AHI)^A · PET-TW-M, if Tempc > 0
PET = 0, if Tempc ≤ 0
A = 0.000000675 · AHI³ − 0.0000771 · AHI² + 0.01792 · AHI + 0.49239
IPAR = 0.5 · FPAR-FAS · MONTHLY-SOLAR · SOL-CONVER
FPAR-FAS = min((SR-FAS − 1.08)/SRDIFF(UMD-VEG), 0.95)
SR-FAS = −(MON-FAS-NDVI + 1000)/(MON-FAS-NDVI − 1000)

the nearness of Topt to a global optimum for all sites, reﬂecting the intuition that plants which are better adapted to harsh temperatures are less eﬃcient overall. The third term, W, represents stress that results from lack of moisture as reﬂected by EET, the estimated water loss due to evaporation and transpiration, and PET, the water loss due to these processes given an unlimited water supply. In turn, PET is deﬁned in terms of the annual heat index, AHI, for a site, and PET-TW-M, another component of potential evapotranspiration. The energy intercepted from the sun, IPAR, is computed as the product of FPAR-FAS, the fraction of energy absorbed photosynthetically for a given vegetation type, MONTHLY-SOLAR, the average radiation for a given month, and SOL-CONVER, the number of days in that month. FPAR-FAS is a function of MON-FAS-NDVI, which indicates relative greenness at a site as observed from space, and SRDIFF, an intrinsic property that takes on diﬀerent numeric values for diﬀerent vegetation types as speciﬁed by the discrete variable UMD-VEG. Of the variables we have mentioned, NPPc, Tempc, MONTHLY-SOLAR, SOL-CONVER, MON-FAS-NDVI, and UMD-VEG are observable. Three additional terms – EET, PET-TW-M, and AHI – are deﬁned elsewhere in the model, but we assume their deﬁnitions are correct and thus we can treat them as observables. The remaining variables are unobservable and must be computed from the others using their deﬁnitions. This portion of the model also contains a number of numeric parameters, as shown in the equations in Table 2.
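The equations of Table 2 transcribe directly into code. The sketch below follows Table 1's variable names and takes the numeric constants verbatim; the SRDIFF table is a hypothetical stand-in, since the per-vegetation values are intrinsic properties not listed here:

```python
import math

SRDIFF = {'grassland': 4.0}  # hypothetical per-vegetation constants

def t1(topt):
    return 0.8 + 0.02 * topt - 0.0005 * topt ** 2

def t2(topt, tempc):
    return 1.18 / ((1 + math.exp(0.2 * (topt - tempc - 10)))
                   * (1 + math.exp(0.3 * (tempc - topt - 10))))

def water_stress(eet, pet_value):
    return 0.5 + 0.5 * eet / pet_value

def heat_poly(ahi):  # the polynomial A of the annual heat index
    return 0.000000675 * ahi ** 3 - 0.0000771 * ahi ** 2 + 0.01792 * ahi + 0.49239

def pet(tempc, ahi, pet_tw_m):
    if tempc <= 0:
        return 0.0
    return 1.6 * (10 * tempc / ahi) ** heat_poly(ahi) * pet_tw_m

def sr_fas(mon_fas_ndvi):
    return -(mon_fas_ndvi + 1000) / (mon_fas_ndvi - 1000)

def fpar_fas(mon_fas_ndvi, umd_veg):
    return min((sr_fas(mon_fas_ndvi) - 1.08) / SRDIFF[umd_veg], 0.95)

def ipar(fpar, monthly_solar, sol_conver):
    return 0.5 * fpar * monthly_solar * sol_conver

def efficiency(topt, tempc, eet, pet_value):
    return 0.56 * t1(topt) * t2(topt, tempc) * water_stress(eet, pet_value)

def nppc(monthly_e_ipar):
    """Annual NPPc from a sequence of monthly (E, IPAR) pairs."""
    return sum(max(e, 0.0) * i if e * i > 0 else max(e * i, 0.0)
               for e, i in monthly_e_ipar)
```

Note how the structure mirrors the text: the stress factors T1, T2, and W multiplicatively reduce the maximum efficiency 0.56, and the monthly products E · IPAR are clipped at zero before summing.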

3 An Approach to Quantitative Model Revision

As noted earlier, our approach to scientiﬁc discovery involves reﬁning models like CASA that involve relations among quantitative variables. We adopt the traditional view of discovery as heuristic search through a space of models, with the search process directed by candidates’ ability to ﬁt the data. However, we assume this process starts not from scratch, but rather with an existing model,


and the search operators involve making changes to this model, rather than constructing entirely new structures. Our long-term goal is not to automate the revision process, but instead to provide an interactive tool that scientists can direct and use to aid their model development. As a result, the approach we describe in this section addresses the task of making local changes to a model rather than carrying out global optimization, as assumed by Chown and Dietterich (2000). Thus, our software takes as input not only observations about measurable variables and an existing model stated as equations, but also information about which portion of the model should be altered. The output is a revised model that fits the observed data better than the initial one. Below we review two discovery algorithms that we utilize to improve the specified part of a model, then describe three distinct types of revision they support. We consider these in order of increasing complexity, starting with simple changes to parameter values, moving on to revisions in the values of intrinsic properties, and ending with changes in an equation's functional form.

3.1 The RF5 and RF6 Discovery Algorithms

Our approach relies on RF5 and RF6, two algorithms for discovering numeric equations described by Saito and Nakano (1997, 2000). Given data for some continuous variable y that is dependent on continuous predictive variables x1, ..., xn, the RF5 system searches for multivariate polynomial equations of the form

y = w_0 + \sum_{j=1}^{J} w_j \prod_{k=1}^{K} x_k^{w_{jk}} = w_0 + \sum_{j=1}^{J} w_j \exp\Bigl( \sum_{k=1}^{K} w_{jk} \ln x_k \Bigr) .   (1)
Such functional relations subsume many of the numeric laws found by previous computational discovery systems like Bacon (Langley, 1979) and Fahrenheit (Żytkow, Zhu, & Hussam, 1990). RF5's first step involves transforming a candidate functional form with J summed terms into a three-layer neural network based on the rightmost form of expression (1), in which the K hidden nodes in this network correspond to product units (Durbin & Rumelhart, 1989). The system then carries out search through the weight space using the BPQ algorithm, a second-order learning technique that calculates both the descent direction and the step size automatically. This process halts when it finds a set of weights that minimize the squared error on the dependent variable y. RF5 runs the BPQ method on networks with different numbers of hidden units, then selects the one that gives the best score on an MDL metric. Finally, the program transforms the resulting network into a polynomial equation, with weights on hidden units becoming exponents and other weights becoming coefficients. The RF6 algorithm extends RF5 by adding the ability to find conditions on a numeric equation that involve nominal variables, which it encodes using one input variable for each nominal value. To this end, the system first generates one such condition for each training case, then utilizes k-means clustering to generate

Computational Revision of Quantitative Scientiﬁc Models

341

a smaller set of more general conditions, with the number of clusters determined through cross validation. Finally, RF6 invokes decision-tree induction to construct a classifier that discriminates among these clusters, which it transforms into rules that form the nominal conditions on the polynomial equation that RF5 has generated.

3.2 Three Types of Model Refinement

There exist three natural types of reﬁnement within the class of models, like CASA, that are stated as sets of equations that refer to unobservable variables. These include revising the parameter values in equations, altering the values for an intrinsic property, and changing the functional form of an existing equation. Improving the parameters for an equation is the most straightforward process. The NPPc portion of CASA contains some parameterized equations that our Earth science team members believe are reliable, like that for computing the variable A from AHI, the annual heat index. However, it also includes equations with parameters about which there is less certainty, like the expression that predicts the temperature stress factor T2 from Tempc and Topt. Our approach to revising such parameters relies on creating a specialized neural network that encodes the equation’s functional form using ideas from RF5, but also including a term for the unchanged portion of the model. We then run the BPQ algorithm to ﬁnd revised parameter values, initializing weights based on those in the model. We can utilize a similar scheme to improve the values for an intrinsic property like SRDIFF that the model associates with the discrete values for some nominal variable like UMD-VEG (vegetation type). We encode each nominal term as a set of dummy variables, one for each discrete value, making the dummy variable equal to one if the discrete value occurs and zero otherwise. We introduce one hidden unit for the intrinsic property, with links from each of the dummy variables and with weights that correspond to the intrinsic values associated with each discrete value. To revise these weights, we create a neural network that incorporates the intrinsic values but also includes a term for the unchanging parts of the model. We can then run BPQ to revise the weights that correspond to intrinsic values, again initializing them to those in the initial model. 
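The dummy-variable encoding of an intrinsic property described above can be sketched as follows. The vegetation types, the sign convention, and the intrinsic values here are hypothetical illustrations, and plain Python stands in for the neural-network hidden unit.

```python
import math

# Hypothetical vegetation types (not CASA's actual categories).
VEG_TYPES = ["grassland", "forest", "tundra"]

def one_hot(veg):
    """Dummy-variable encoding: one indicator per discrete value,
    equal to one if that value occurs and zero otherwise."""
    return [1.0 if veg == t else 0.0 for t in VEG_TYPES]

def intrinsic_value(veg, v):
    """Hidden unit for the intrinsic property: exp of the weighted sum of
    dummies. The exponential keeps the value positive during training,
    and each discrete value selects exactly one weight."""
    dummies = one_hot(veg)
    return math.exp(sum(vk * dk for vk, dk in zip(v, dummies)))

# Initializing the weights to the logs of the model's current intrinsic
# values (made-up numbers) reproduces those values exactly:
v = [math.log(3.06), math.log(4.35), math.log(5.09)]
print(round(intrinsic_value("forest", v), 2))  # selects exp(v_1) = 4.35
```

Gradient updates to the weights v then revise the per-value intrinsic settings while the rest of the model stays fixed.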
Altering the form of an existing equation requires somewhat more eﬀort, but maps more directly onto previous work in equation discovery. In this case, the details depend on the speciﬁc functional form that we provide, but because we have available the RF5 and RF6 algorithms, the approach supports any of the forms that they can discover or specializations of them. Again, having identiﬁed a particular equation that we want to improve, we create a neural network that encodes the desired form, then invoke the BPQ algorithm to determine its parametric values, in this case initializing the network weights randomly. This approach to model reﬁnement supports changes to only one equation or intrinsic property at a time, but this is consistent with the interactive process described earlier. We envision the scientist identifying a portion of the model that he thinks could be better, running one of the three revision methods to improve its ﬁt to the data, and repeating this process until he is satisﬁed.
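The local-revision step shared by all three methods — refit the targeted piece of the model while the frozen remainder contributes a fixed "Rest" factor per observation — can be sketched roughly as below. The data are synthetic, the target value 0.7 is invented, and fixed-step gradient descent stands in for the second-order BPQ method.

```python
def revise_parameter(observations, w_init, lr=0.01, steps=2000):
    """Refit one scalar parameter of y ~ w * rest by gradient descent on
    squared error, starting from the model's current value of w."""
    w = w_init
    for _ in range(steps):
        grad = sum(-2.0 * (y - w * rest) * rest for y, rest in observations)
        w -= lr * grad / len(observations)
    return w

# Synthetic observations generated with w = 0.7; the initial model says 0.56.
# Each "rest" value plays the role of the unchanged portion of the model.
observations = [(0.7 * r, r) for r in (0.5, 1.0, 1.5, 2.0)]
w_revised = revise_parameter(observations, w_init=0.56)
print(round(w_revised, 3))  # converges near 0.7
```

Initializing from the existing parameter value, as the text describes, keeps the search local to the current model rather than restarting from scratch.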

4 Initial Results on Ecosystem Data

In order to evaluate our approach to scientific model revision, we utilized data relevant to the NPPc model available to the Earth science members of our team. These data consisted of observations from 303 distinct sites with known vegetation type and for which measurements of Tempc, MON-FAS-NDVI, MONTHLY-SOLAR, SOL-CONVER, and UMD-VEG were available for each month during the year. In addition, other portions of CASA were able to compute values for the variables AHI, EET, and PET-TW-M. The resulting 303 training cases seemed sufficient for initial tests of our revision methods, so we used them to drive a variety of changes to the handcrafted model of carbon production.

4.1 Results on Parameter Revision

Our Earth science team members identified the equation for T2, one of the temperature stress variables, as a likely candidate for revision. As noted earlier, the handcrafted expression for this term was

T2 = 1.8 / [(1 + e^{0.2(Topt − Tempc − 10)})(1 + e^{−0.3(Tempc − Topt − 10)})] ,

which produces a Gaussian-like curve that is slightly asymmetrical. This reflects the intuition that photosynthetic efficiency will decrease when the temperature (Tempc) is either below or above the optimum (Topt). To improve upon this equation, we defined x = Topt − Tempc as an intermediate variable and recast the expression for T2 as the product of a parameter and two sigmoidal functions of the form σ(a) = 1/(1 + exp(−a)). We transformed these into a neural network and used BPQ to minimize the error function

F_1 = \sum_{sample} ( NPPc − \sum_{month} w_0 · σ(v_{10} + v_{11} · x) · σ(v_{20} − v_{21} · x) · Rest )^2

over the parameters {w_0, v_{10}, v_{11}, v_{20}, v_{21}}, where Rest = 0.56 · T1 · W · IPAR. The resulting equation generated in this manner was

T2 = 1.80 / [(1 + e^{0.05(Topt − Tempc − 10.8)})(1 + e^{−0.03(Tempc − Topt − 90.33)})] ,

which has values reasonably similar to the original ones for some parameters but quite different values for others. The root mean squared error (RMSE) for the original model on the available data was 467.910. In contrast, the error for the revised model was 457.757 on the training data and 461.466 using leave-one-out cross validation. Thus, RF6's modification of parameters in the T2 equation produced slightly more than a one percent reduction in overall model error, which is somewhat disappointing. However, inspection of the resulting curves reveals a more interesting picture. Plotting the temperature stress factor T2 using the revised equations as a function of the difference Topt − Tempc still gives a Gaussian-like curve, but within the effective range (from −30 to 30 Celsius) its values decrease monotonically. This seems counterintuitive but interesting from an Earth science perspective,


as it suggests this stress factor has little influence on NPPc. Moreover, the original equation for T2 was not well grounded in first principles of plant physiology, making empirical improvements of this sort beneficial to the modeling enterprise. As another candidate for parameter revision, we selected the PET equation,

PET = 1.6 · (10 · max(Tempc, 0) / AHI)^A · PET-TW-M ,

which calculates potential water loss due to evaporation and transpiration given an unlimited water supply. By transforming this expression into

PET = exp(ln(1.6) + A · ln(10)) · (max(Tempc, 0) / AHI)^A · PET-TW-M

and replacing the parameter values ln(1.6) and ln(10) with the variables v_0 and v_1, we constructed a neural network and used BPQ for error minimization. When we transformed the trained network back into the original form, the resulting equation was

PET = 1.56 · (9.16 · max(Tempc, 0) / AHI)^A · PET-TW-M ,

which has values that are very similar to those in the original model's equation. Moreover, since the RMSE for the obtained model was 464.358 on the training data and 467.643 using leave-one-out cross validation, the revision process did not improve the model's accuracy substantially. However, since the PET equation is based on Thornthwaite's (1948) method, which has been in continuous use for over 50 years, we should not be overly surprised at this negative result. Indeed, we are encouraged by the fact that our approach did not revise parameters that have stood the test of time in Earth science.
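The sigmoid-product form used in F1 for T2 can be sketched as below. The parameter values are illustrative stand-ins, not the fitted ones; the point is only that a scaled product of two opposing sigmoids of x = Topt − Tempc yields the unimodal, Gaussian-like shape the text describes.

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def t2_form(x, w0, v10, v11, v20, v21):
    """T2 as w0 * sigma(v10 + v11*x) * sigma(v20 - v21*x): one sigmoid
    suppresses the curve for very negative x, the other for very positive x,
    leaving a single interior peak."""
    return w0 * sigmoid(v10 + v11 * x) * sigmoid(v20 - v21 * x)

# Illustrative parameters (not the paper's fitted values):
params = (1.8, 2.0, 0.2, 3.0, 0.3)
curve = [t2_form(x, *params) for x in range(-30, 31, 5)]
peak = max(curve)
# The maximum falls strictly inside the effective range, i.e. the curve
# rises to one interior peak and then falls.
print(curve.index(peak) not in (0, len(curve) - 1))
```

Fitting w0 and the four v parameters by gradient methods, with the remaining model terms frozen into Rest, is exactly the kind of local adjustment described above.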

4.2 Results on Intrinsic Value Revision

Another portion of the NPPc model that held potential for revision concerns the intrinsic property SRDIFF associated with the vegetation type UMD-VEG. For each site, the latter variable takes on one of 11 nominal values, such as grasslands, forest, tundra, and desert, each with an associated numeric value for SRDIFF that plays a role in the FPAR-FAS equation. This gives 11 parameters to revise, which seems manageable given the number of observations available. As outlined earlier, to revise these intrinsic values, we introduced one dummy variable, UMD-VEG_k, for each vegetation type such that UMD-VEG_k = 1 if UMD-VEG = k and 0 otherwise. We then defined SRDIFF(UMD-VEG) as exp(−\sum_k v_k · UMD-VEG_k) and, since SRDIFF's value is independent of the month, we used BPQ to minimize, over the weights {v_k}, the error function

F_2 = \sum_{site} ( NPPc − exp(\sum_k v_k · UMD-VEG_k) · Rest )^2 ,

where Rest = \sum_{month} E · 0.5 · (SR-FAS − 1.08) · MONTHLY-SOLAR · SOL-CONVER. Table 3 shows the initial values for this intrinsic property, as set by the CASA developers, along with the revised values produced by the above approach when


Table 3. Original and revised values for the SRDIFF intrinsic property, along with the frequency for each vegetation type.

vegetation type    A     B     C     D     E     F     G     H     I     J     K
original         3.06  4.35  4.35  4.05  5.09  3.06  4.05  4.05  4.05  5.09  4.05
revised          2.57  4.77  2.20  3.99  3.70  3.46  2.34  0.34  2.72  3.46  1.60
clustered        2.42  3.75  2.42  3.75  3.75  3.75  2.42  0.34  2.42  3.75  2.42
frequency         3.3   8.9   0.3   3.6  21.1  19.1  15.2   3.3  19.1   2.3   3.6

we ﬁxed other parts of the NPPc model. The most striking result is that the revised intrinsic values are nearly always lower than the initial values. The RMSE for the original model was 467.910, whereas the error using the revised values was 432.410 on the training set and 448.376 using cross validation. The latter constitutes an error reduction of over four percent, which seems substantial. However, since the original 11 intrinsic values were grouped into only four distinct values, we applied RF6’s clustering procedure over the trained neural network to group the revised values in the same manner. We examined the eﬀect on error rate as we varied the number of clusters from one to ﬁve; as expected, the training RMSE decreased monotonically, but the cross-validation RMSE was minimized for three clusters of values. The estimated error for this revised model is slightly better than for the one with 11 distinct values. Again, the clustered values are nearly always lower than the initial ones, a result that is certainly interesting from an Earth science viewpoint. We suspect that measurements of NPPc and related variables from a wider range of sites would produce intrinsic values closer to those in the original model. However, such a test must await additional observations and, for now, empirical ﬁt to the available data should outweigh the theoretical basis for the initial settings. In another approach to revising intrinsic values, we retained the original grouping of vegetation types into sets, with each type in a given set having the same value. We utilized a weight-sharing technique to encode this background knowledge in a neural network. For example, let vA and vF be weights corresponding to the SRDIFF values for vegetation types A and F, respectively; to ensure these values remained the same, we treated them as a single weight, say vAF . 
Here we can see that BPQ calculates the derivative of the error function over vAF as the sum of the individual derivatives over vA and vF:

\partial F_2 / \partial v_{AF} = \partial F_2 / \partial v_A + \partial F_2 / \partial v_F .

In the trained neural network, the derivative over vAF becomes zero, but there is no guarantee that each derivative over vA or vF will do so. Therefore, we can treat the sum of the absolute values of the derivatives over shared weights, like vA and vF, as a criterion for the 'unlikeness' among the elements of such a grouping. Table 4 shows the revised values for the intrinsic property SRDIFF that result from this approach, along with values for the unlikeness criterion defined above.
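The shared-weight gradient identity can be checked numerically on a toy error function standing in for F2 (the function below is invented for illustration); finite differences play the role of backpropagation.

```python
def f(vA, vF):
    # Toy error function standing in for F2 (hypothetical form).
    return (vA - 1.0) ** 2 + (vF - 3.0) ** 2 + vA * vF

def num_grad(g, x, h=1e-6):
    """Central-difference estimate of dg/dx."""
    return (g(x + h) - g(x - h)) / (2 * h)

v = 2.0  # the shared value v_AF
dA = num_grad(lambda x: f(x, v), v)    # partial derivative over vA alone
dF = num_grad(lambda x: f(v, x), v)    # partial derivative over vF alone
dAF = num_grad(lambda x: f(x, x), v)   # derivative with the weight shared

# Weight sharing: the derivative over v_AF is the sum of the two parts.
print(abs(dAF - (dA + dF)) < 1e-4)
```

At a trained optimum dAF vanishes, but dA and dF individually need not, which is what motivates the unlikeness criterion in the text.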


Table 4. Original and revised values, using the original groupings, for the SRDIFF intrinsic property, along with the frequency and unlikeness for each vegetation group.

vegetation group   A∨F   B∨C   E∨J   D∨G∨H∨I∨K
original          3.06  4.35  5.09    4.05
revised           2.23  3.27  2.54    1.81
frequency         22.4   9.2  23.4    44.9
unlikeness        26.1   0.3   2.3    13.6

As before, the obtained intrinsic values are always lower than the initial ones, and our criterion suggests that the group containing the vegetation types A and F has the least coherence. The RMSE for the revised model was 442.782 on the training data and 449.097 using leave-one-out cross validation, again indicating about a four percent reduction in the model's overall error.

4.3 Results on Revising Equation Structure

We also wanted to demonstrate our approach's ability to improve the functional form of the NPPc model. For this purpose, we selected the equation for photosynthetic efficiency,

E = 0.56 · T1 · T2 · W ,

which states that this term is a product of the water stress term, W, and the two temperature stress terms, T1 and T2. Because each stress factor takes on values less than one, multiplication has the effect of reducing the photosynthetic efficiency E below the maximum possible value of 0.56 (Potter & Klooster, 1998). Since E is calculated as a simple product of the three variables, one natural extension was to consider an equation that includes exponents on these terms. To this end, we borrowed techniques from the RF5 system to create a neural network for such an expression, then used BPQ to minimize the error function

F_3 = \sum_{site} ( NPPc − \sum_{month} u_0 · T1^{u_1} · T2^{u_2} · W^{u_3} · IPAR )^2

over the parameters {u_0, u_1, u_2, u_3}, which assumes the equations that predict IPAR remain unchanged. We initialized u_0 to 0.56 and the other parameters to 1.0, as in the original model, and constrained the latter to be positive. The revised equation found in this manner,

E = 0.521 · T1^{0.00} · T2^{0.03} · W^{0.00} ,

has a small exponent for T2 and zero exponents for T1 and W, suggesting that the former influences photosynthetic efficiency in minor ways and the latter not at all. On the available data, the root mean squared error for the original model was 467.910. In contrast, the revised model has an RMSE of 443.307 on the training set and an RMSE of 446.270 using cross validation. Thus, the revised


equation produces a substantially better fit to the observations than does the original model, in this case reducing error by almost five percent. With regard to Earth science, these results are plausible and the most interesting of all, as they suggest that the T1 and W stress terms are unnecessary for predicting NPPc. One explanation is that the influence of these factors is already being captured by the NDVI measure available from space, for which the signal-to-noise ratio has been steadily improving since CASA was first developed. These results encouraged us to explore more radical revisions to the functional form for photosynthetic efficiency. Thus, we told our system to consider a form that omitted the three stress factors but that included the four variables – Topt, Tempc, EET, and PET – that appear in their definitions:

E = v_0 · exp(−0.5 · (v_1 · Topt + v_2 · Tempc + v_3 · EET + v_4 · PET + v_5)^2) .

This Gaussian-like activation function satisfies the constraint that E is positive and less than one. Running BPQ to minimize the error function over {v_0, ..., v_5} produced the equation

E = 0.57 · exp(−0.5 · (−0.04 · Topt + 0.03 · Tempc − 0.03 · EET + 0.01 · PET)^2) ,

where we eliminated the parameter v_5 because its value was −0.003. The RMSE for the revised model was 439.101 on the training data and 444.470 using leave-one-out cross validation, indicating more than a five percent reduction in error. These results are very similar to those from our first approach, which produced a cross-validation RMSE of 446.270. In this case, the revised model is simpler in that it defines E directly in terms of Topt, Tempc, EET, and PET, rather than relying on the theoretical terms T1, T2, and W, two of which provide no predictive power. On the other hand, the original form for E had a clear theoretical interpretation, whereas the new version does not.
In such situations, the final decision should be left to domain scientists, who are best suited to balance a model's simplicity against its interpretability.
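The exponent-revision idea of Section 4.3 can be sketched on synthetic data as below. Plain gradient descent stands in for BPQ, a single predictor replaces the three stress terms, the positivity constraint is omitted for brevity, and the data are generated so that the predictor is irrelevant, mimicking the paper's finding of near-zero exponents.

```python
import math

def fit_power_form(data, u0_init=0.56, u1_init=1.0, lr=0.05, steps=5000):
    """Gradient descent on sum over (t, y) of (u0 * t**u1 - y)**2,
    a stand-in for optimizing the exponent form of E with BPQ."""
    u0, u1 = u0_init, u1_init
    for _ in range(steps):
        g0 = g1 = 0.0
        for t, y in data:
            pred = u0 * t ** u1
            err = pred - y
            g0 += 2 * err * t ** u1              # d err^2 / d u0
            g1 += 2 * err * pred * math.log(t)   # d err^2 / d u1
        u0 -= lr * g0 / len(data)
        u1 -= lr * g1 / len(data)
    return u0, u1

# Synthetic data generated with u0 = 0.5 and u1 = 0: the predictor t
# carries no information, so the fitted exponent should shrink toward zero.
data = [(t, 0.5) for t in (0.5, 1.0, 1.5, 2.0)]
u0, u1 = fit_power_form(data)
# u0 ends near 0.5 and u1 near 0, analogous to the zero exponents
# the paper reports for T1 and W.
```

A fitted exponent near zero means the corresponding factor drops out of the product, which is how the revised form signals that a stress term is dispensable.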

5 Related Research on Computational Discovery

Our research on computational scientific discovery draws on two previous lines of work. One approach, which has an extended history within artificial intelligence, addresses the discovery of explicit quantitative laws. Early systems for numeric law discovery like Bacon (Langley, 1979; Langley et al., 1987) carried out a heuristic search through a space of new terms and simple equations. Numerous successors like Fahrenheit (Żytkow et al., 1990) and RF5 (Saito & Nakano, 1997) incorporate more sophisticated and more extensive search through a larger space of numeric equations. The most relevant equation discovery systems take into account domain knowledge to constrain the search for numeric laws. For example, Kokar's (1986) Coper utilized knowledge about the dimensions of variables to focus attention and, more recently, Washio and Motoda's (1998) SDS extends this idea to support different types of variables and sets of simultaneous equations. Todorovski


and Džeroski's (1997) LaGramge takes a quite different approach, using domain knowledge in the form of context-free grammars to constrain its search through a space of differential equation models that describe temporal behavior. Although research on computational discovery of numeric laws has emphasized communicable scientific notations, it has focused on constructing such laws rather than revising existing ones. In contrast, another line of research has addressed the refinement of existing models to improve their fit to observations. For example, Ourston and Mooney (1990) developed a method that used training data to revise models stated as sets of propositional Horn clauses. Towell (1991) reports another approach that transforms such models into multilayer neural networks, then uses backpropagation to improve their fit to observations, much as we have done for numeric equations. Work in this paradigm has emphasized classification rather than regression tasks, but one can view our work as adapting the basic approach to equation discovery. We should also mention related work on the automated improvement of ecosystem models. Most AI work on Earth science domains focuses on learning classifiers that predict vegetation from satellite measures like NDVI, as contrasted with our concern for numeric prediction. Chown and Dietterich (2000) describe an approach that improves an existing ecosystem model's fit to continuous data, but their method only alters parameter values and does not revise equation structure. On another front, Schwabacher and Langley (2001) use a rule-induction algorithm to discover piecewise linear models that predict NDVI from climate variables, but their method takes no advantage of existing models.

6 Directions for Future Research

Although we have been encouraged by our results to date, there remain a number of directions in which we must extend our approach before it can become a useful tool for scientists. As noted earlier, we envision an interactive discovery aide that lets the user focus the system's attention on those portions of the model it should attempt to improve. To this end, we need a graphical interface that supports marking of parameters, intrinsic properties, and equations that can be revised, as well as tools for displaying errors as a function of space, time, and predictive variables. In addition, the current system is limited to revising the parameters or form of one equation in the model at a time, as well as requiring some handcrafting to encode the equations as a neural network. Future versions should support revision of multiple equations at the same time, preferably invoking the same variants of backpropagation as we have used to date, and also provide a library that maps functional forms to neural network encodings, so the system can transform the former into the latter automatically. We should also explore using other approaches to equation discovery, such as Todorovski and Džeroski's LaGramge, in place of the RF6 algorithm. Naturally, we also hope to evaluate our approach on its ability to improve other portions of the CASA model, as additional data become available. Another test of generality would be application of the same methods to other scientific domains in which there already exist formal models that can be revised. In the longer term, we should evaluate our interactive system not only on its ability to increase the predictive accuracy of an existing model, but also in terms of the satisfaction of the scientists who use the system to that end. Another challenge that we have encountered in our research has been the need to translate the existing CASA model into a declarative form that our discovery system can manipulate. In response, another long-term goal involves developing a modeling language in which scientists can cast their initial models and carry out simulations, but that can also serve as the declarative representation for our discovery methods. The ability to automatically revise models places novel constraints on such a language, but we are confident that the result will prove a useful aid to the discovery process.

7 Concluding Remarks

In this paper, we addressed the computational task of improving an existing scientific model that is composed of numeric equations. We illustrated this problem with an example model from the Earth sciences that predicts carbon production as a function of temperature, sunlight, and other variables. We identified three activities that can improve a model – revising an equation's parameters, altering the values of an intrinsic property, and changing the functional form of an equation – then presented results for each type on an ecosystem modeling task that reduced the model's prediction error, sometimes substantially. Our research on model revision builds on previous work in numeric law discovery and qualitative theory refinement, but it combines these two themes in novel ways to enable new capabilities. Clearly, we remain some distance from our goal of an interactive discovery tool that scientists can use to improve their models, but we have also taken some important steps along the path, and we are encouraged by our initial results on an important scientific problem.

References

Chown, E., & Dietterich, T. G. (2000). A divide and conquer approach to learning from prior knowledge. Proceedings of the Seventeenth International Conference on Machine Learning (pp. 143–150). San Francisco: Morgan Kaufmann.
Durbin, R., & Rumelhart, D. E. (1989). Product units: A computationally powerful and biologically plausible extension. Neural Computation, 1, 133–142.
Kokar, M. M. (1986). Determining arguments of invariant functional descriptions. Machine Learning, 1, 403–422.
Langley, P. (1979). Rediscovering physics with Bacon.3. Proceedings of the Sixth International Joint Conference on Artificial Intelligence (pp. 505–507). Tokyo, Japan: Morgan Kaufmann.
Langley, P. (1998). The computer-aided discovery of scientific knowledge. Proceedings of the First International Conference on Discovery Science. Fukuoka, Japan: Springer.


Langley, P., Simon, H. A., Bradshaw, G. L., & Żytkow, J. M. (1987). Scientific discovery: Computational explorations of the creative processes. Cambridge, MA: MIT Press.
Lenat, D. B. (1977). Automated theory formation in mathematics. Proceedings of the Fifth International Joint Conference on Artificial Intelligence (pp. 833–842). Cambridge, MA: Morgan Kaufmann.
Ourston, D., & Mooney, R. (1990). Changing the rules: A comprehensive approach to theory refinement. Proceedings of the Eighth National Conference on Artificial Intelligence (pp. 815–820). Boston: AAAI Press.
Potter, C. S., & Klooster, S. A. (1997). Global model estimates of carbon and nitrogen storage in litter and soil pools: Response to change in vegetation quality and biomass allocation. Tellus, 49B, 1–17.
Potter, C. S., & Klooster, S. A. (1998). Interannual variability in soil trace gas (CO2, N2O, NO) fluxes and analysis of controllers on regional to global scales. Global Biogeochemical Cycles, 12, 621–635.
Saito, K., & Nakano, R. (1997). Law discovery using neural networks. Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence (pp. 1078–1083). Yokohama: Morgan Kaufmann.
Saito, K., & Nakano, R. (2000). Discovery of nominally conditioned polynomials using neural networks, vector quantizers and decision trees. Proceedings of the Third International Conference on Discovery Science (pp. 325–329). Kyoto: Springer.
Schwabacher, M., & Langley, P. (2001). Discovering communicable scientific knowledge from spatio-temporal data. Proceedings of the Eighteenth International Conference on Machine Learning (pp. 489–496). Williamstown: Morgan Kaufmann.
Thornthwaite, C. W. (1948). An approach toward a rational classification of climate. Geographical Review, 38, 55–94.
Todorovski, L., & Džeroski, S. (1997). Declarative bias in equation discovery. Proceedings of the Fourteenth International Conference on Machine Learning (pp. 376–384). San Francisco: Morgan Kaufmann.
Towell, G. (1991). Symbolic knowledge and neural networks: Insertion, refinement, and extraction. Doctoral dissertation, Computer Sciences Department, University of Wisconsin, Madison.
Washio, T., & Motoda, H. (1998). Discovering admissible simultaneous equations of large scale systems. Proceedings of the Fifteenth National Conference on Artificial Intelligence (pp. 189–196). Madison, WI: AAAI Press.
Żytkow, J. M., Zhu, J., & Hussam, A. (1990). Automated discovery in a chemistry laboratory. Proceedings of the Eighth National Conference on Artificial Intelligence (pp. 889–894). Boston, MA: AAAI Press.

Efficient Local Search in Conceptual Clustering

Céline Robardet and Fabien Feschet

Laboratoire d'Analyse des Systèmes de Santé, Université Lyon 1
UMR 5823, bât. 101, 43 bd du 11 novembre 1918
69622 Villeurbanne cedex, France
robardet@univ-lyon1.fr

Abstract. In this paper, we consider unsupervised clustering as a combinatorial optimization problem. We focus on the use of Local Search procedures to optimize an association coefficient whose aim is to construct a couple of conceptual partitions, one on the set of objects and the other on the set of attribute-value pairs. We present a study of the variation of the function in order to decrease the complexity of local search and to propose stochastic local search. The performance of the given algorithms is tested on synthetic data sets and the real data set Vote taken from the UCI repository.

Keywords: Unsupervised conceptual clustering, optimization procedure, local search.

1 Introduction

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 323–335, 2001. © Springer-Verlag Berlin Heidelberg 2001

In the early steps of knowledge discovery from large databases, structuring the data is a fundamental step that makes it possible to better understand the data and to define groups with respect to an a priori similarity measure. In the unsupervised learning context, this is usually referred to as clustering. The data are composed of a set of objects described by a set of attributes such that each object has a value for every attribute. In classification/regression, we have a target attribute which can be used to construct the groups. Knowledge discovery can be done through the learning of rules which explain the values of the target attribute using the other attributes. In this way, each group of objects is associated with a set of attribute-value pairs [Rak97]. When no prior information is available, clustering procedures can be used to discover the underlying structure of the data. They construct a partition on the set of objects such that the most similar objects belong to the same cluster whereas the most dissimilar ones belong to different groups. Hence, those procedures synthesize the data into a few clusters. One of the key points in clustering is the a priori definition of similarity. When dealing with numerical attributes, it is usual to relate the similarity between two objects to their distance. Clustering is then reduced to the determination of groups minimizing the intra-cluster dissimilarity and maximizing the inter-cluster one. For instance, in the K-MEANS algorithm [JD88,CDG+88], Euclidean distances between representative vectors of objects are used. This can


also be extended to ordinal data and even to symbolic data, but distances become less representative in this case. Instead, probabilistic representations are preferred. The difference between the probability of appearance of an attribute-value pair on the whole set of objects and its restriction to the set of objects belonging to a particular cluster is used to guide the search for a good partition. It is a trade-off between intra-class similarity and inter-class dissimilarity of the objects. For example, in the COBWEB algorithm [Fis87,Fis96], the category utility function is used as an objective function. It is a weighted average of the well-known GINI index without fixing the number of clusters. Other methods like AUTOCLASS [CS96] also use Bayesian classification, modeling objects by finite mixture distributions. Another key point in clustering is the optimization procedure. The cardinality of the set of all possible partitions increases exponentially with the size n of the set of objects, which leads to the use of fast but often rough heuristics. In the K-MEANS algorithm, a heuristic based on the principle of reallocation is used. At each step, cluster centroids are computed and each object is assigned to the cluster whose centroid is the closest. After a few such steps, the procedure ceases to improve the partition. But unfortunately, the algorithm makes only local changes to the initial partition and thus typically gets trapped in the first local minimum. The COBWEB method uses an incremental procedure which classifies objects one by one. For each object, the procedure evaluates the two following options: classifying the object in one of the existing clusters or creating a new one containing only this object. The operation which leads to the most important increase in the function is chosen. The main drawback of this heuristic is that it often constructs a local optimum which is dependent on the order of the objects in the incremental process.
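The K-MEANS reallocation step described above can be sketched in a few lines; the one-dimensional points below are made-up toy data, and the stopping rule is simplified to a fixed number of iterations.

```python
def kmeans_step(points, centroids):
    """One reallocation step: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster (1-D for brevity)."""
    clusters = [[] for _ in centroids]
    for p in points:
        nearest = min(range(len(centroids)), key=lambda i: (p - centroids[i]) ** 2)
        clusters[nearest].append(p)
    # An empty cluster keeps its previous centroid.
    return [sum(c) / len(c) if c else m for c, m in zip(clusters, centroids)]

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
centroids = [0.0, 5.0]
for _ in range(10):  # iterate until the assignments stabilize
    centroids = kmeans_step(points, centroids)
print(centroids)  # one mean per natural cluster
```

Because each step only reassigns points relative to the current centroids, the procedure makes exactly the kind of local change the text criticizes: it converges quickly but to whatever local minimum the initial centroids lead to.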
In AUTOCLASS, optimization is done for maximum a posteriori (MAP) parameters with the EM algorithm. Among a set of models, each constituted of an a priori number of clusters and probability distribution functions, the method estimates parameters with the EM algorithm and chooses the best model with a MAP estimator. Optimization can be global or local. Global optimization is usually unreachable, and local optimization is very sensitive to initial conditions. Popular methods like Tabu Search or Genetic Algorithms are widely used without a clear understanding of how they work. In this paper, we restrict ourselves to local optimization procedures, and more precisely to the simplest one, the local search procedure. Local optimization seems a promising method for clustering since it has provided good results at low cost on many combinatorial optimization problems. We base our study on a variational approach to an objective function, which is described in section 2. Variations of the function under elementary modifications are studied in section 3, where a single model of modification is given. This allows us to introduce five stochastic optimization procedures, which are experimentally studied on two different data sets. The first one is an artificial data set and the second one is the Vote data set from the UCI repository. We then propose some conclusions and future work.

Efficient Local Search in Conceptual Clustering

2 Clustering Method

To strengthen the semantic knowledge held by partitions, we study an algorithm for the construction of two linked partitions, one on the set of objects and the other on the set of attribute-value pairs; we call this couple a bi-partition. Similar methods have already been proposed. We can cite the data reorganization methods [MSW72,SCH75], which consist in permuting rows and columns of a data table on the basis of a distance to be minimized. Another one is the simultaneous clustering algorithm [Gov84]. It consists in searching for a couple of partitions into a priori fixed numbers K and L of clusters, and an ideal binary table of dimensions K × L, such that the gap between the initial data table structured by the two partitions and the ideal table is minimized. These two approaches have important drawbacks. The first methods do not produce partitions: these must be constructed by the user. The second one determines a couple of partitions with a priori fixed numbers of clusters. Furthermore, the resulting couple of partitions is often far from the global optimum. To enforce the knowledge contribution brought by the bi-partition, we favor couples of partitions which satisfy the following property.

Property: The functional link, which restores one partition on the basis of the knowledge of the second one, must be as strong as possible. Furthermore, both partitions must have the same numbers of clusters.

To evaluate the quality of a bi-partition with regard to this property, we construct a function over PO × PQ, where PO is the set of partitions of the set of objects and PQ is the set of partitions of the set of attribute-value pairs. This function must satisfy some properties [Rak97,RF01] to be adapted to the clustering structure, such as independence under cluster permutations, the ability to handle bi-partitions whose two partitions have different numbers of clusters, etc.
These properties are partially satisfied by association measures, which have been built to evaluate the link between two qualitative attributes X and Y considered as partitions of a same set. Association measures are widely used in supervised clustering [LdC96], whereas few unsupervised clustering algorithms use them [MH91]. We propose [RF01] to use an adaptation of the τb measure constructed by Goodman and Kruskal [GK54], which we call τQ:

    τQ = ( Σ_i Σ_j p_ij² / p_i.  −  Σ_j p_.j² ) / ( 1 − Σ_j p_.j² )
We name τO the measure obtained when exchanging the two attributes¹. We denote by p_i. (resp. p_.j) the frequency estimate of the probability associated with the attribute-value pair i (resp. j) of attribute X (resp. Y), and by p_ij the frequency estimate of the probability that attribute-value pair i of X and attribute-value pair j of Y arise simultaneously. The τQ

¹ τO is used to determine an adequate partition in PO and τQ is used to obtain an adequate one in PQ.


coefficient evaluates the proportional reduction in error given by the knowledge of attribute X on the prediction of Y. It takes the whole structure of the distribution into account when estimating the variation of the prediction. Using this measure, we do not need to fix the number of clusters in the partitions. It measures how the knowledge of a partition P of PO improves the prediction of the cluster of an attribute-value pair in a partition Q of PQ, knowing the cluster(s) of P which contain objects described by the attribute-value pair. The measure is normalized, so neither the discrete nor the single-cluster partition is favored. Moreover, some experiments have been carried out by M. Olszak [Ols95] and also by us [RF01]. They consist in comparing several association measures on different synthetic data sets. In both studies, the authors find that τQ behaves appropriately.

To overcome the fact that our two partitions are not defined on the same set, we build a co-occurrence table. In the data, each object is described by h attributes V_l such that V_l : O → dom_l. The set Q = ∪_{l=1..h} dom_l is the set of all attribute-value pairs, differentiating each attribute value of the different attributes. The co-occurrence table between a partition P = (P_1, ..., P_K) of the set O of objects and a partition Q = (Q_1, ..., Q_K) of the set Q is (n_ij)_{i,j} with

    n_ij = Σ_{x ∈ P_i} Σ_{y ∈ Q_j} Σ_{l=1..h} δ_{V_l(x), y}

where δ is the Kronecker² symbol. Consequently, we replace the previous notation p_ij (resp. p_i.) by n_ij / n.. (resp. n_i. / n..), where n_i. = Σ_j n_ij and n.. = Σ_i Σ_j n_ij. To determine the best bi-partition, we search for a bi-partition which maximizes the τQ and τO measures. The problem is now to find an adequate optimization procedure, remembering that we are confronted with a combinatorial optimization problem. Note that the search space PO × PQ is huge (exponential in n):

    card(P_X) = Σ_{c=1..m} (1/c!) Σ_{i=1..c} (−1)^{c−i} C(c,i) i^m,  with card(X) = m and X ∈ {O, Q}.
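For concreteness, the co-occurrence table and the τQ measure can be computed directly as follows (a toy Python sketch; the data, the partitions and all names below are ours):

```python
def cooccurrence(data, P, Q):
    """n[i][j] = number of (object, attribute-value pair) incidences
    with the object in cluster P[i] and the pair in cluster Q[j]."""
    n = [[0] * len(Q) for _ in P]
    for i, Pi in enumerate(P):
        for j, Qj in enumerate(Q):
            n[i][j] = sum(1 for x in Pi for (a, v) in Qj if data[x][a] == v)
    return n

def tau(n):
    """Goodman-Kruskal style tau computed from a co-occurrence table."""
    tot = sum(map(sum, n))
    row = [sum(r) for r in n]
    col = [sum(c) for c in zip(*n)]
    A = sum(n[i][j] ** 2 / (row[i] * tot)
            for i in range(len(n)) for j in range(len(n[0])))
    B = sum(c ** 2 / tot ** 2 for c in col)
    return (A - B) / (1 - B)

# two binary attributes, four objects; a perfectly linked bi-partition
data = [{"a": "y", "b": "y"}, {"a": "y", "b": "y"},
        {"a": "n", "b": "n"}, {"a": "n", "b": "n"}]
P = [[0, 1], [2, 3]]                                     # object clusters
Q = [[("a", "y"), ("b", "y")], [("a", "n"), ("b", "n")]]  # pair clusters
```

On this perfectly linked toy bi-partition the measure reaches its maximum value 1.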

Consequently, exhaustive or potentially exhaustive search procedures, like Branch and Bound, are unrealistic in terms of time efficiency. Using other procedures, we have no guarantee that the obtained solution is a global optimum. Choosing a local optimization method is a trade-off between computation cost and quality of the result.
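The cardinality above is the Bell number of an m-element set; a quick computation (our sketch of the formula as reconstructed) illustrates how fast the search space grows:

```python
from math import comb, factorial

def num_partitions(m):
    """Number of partitions of an m-element set (the Bell number),
    via the Stirling-number expansion used in the text; each inner
    sum equals c! times a Stirling number, so // is exact."""
    return sum(
        sum((-1) ** (c - i) * comb(c, i) * i ** m for i in range(1, c + 1))
        // factorial(c)
        for c in range(1, m + 1)
    )

# the count explodes long before realistic data sizes
sizes = {m: num_partitions(m) for m in (5, 10, 15)}
```

Already for a few hundred objects the count is astronomically large, so exhaustive enumeration of PO or PQ is hopeless.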

3 Local Search

We consider general purpose methods which are based on the definition of a neighborhood of a given partition. At each step, a new solution is chosen in the neighborhood of the previous one, such that the algorithm converges towards

² δ_{V_l(x),y} = 1 if V_l(x) = y, and δ_{V_l(x),y} = 0 otherwise.


at least a local optimum. Generating several candidate solutions at each step allows the search to be directed towards the candidates which most improve the function. The main difficulty is to construct an efficient neighborhood, sufficiently rich yet of tractable complexity. Recent works [FK00,GKLN00] apply Local Search algorithms to clustering problems. [GKLN00] propose six operators for generating a partition from another. They apply these operators first successively, and then stochastically according to their frequency of improving the function. They observe that the second algorithm is more robust than the first one. [FK00] couples Local Search with the K-MEANS algorithm. Their neighborhood function consists in randomly swapping a cluster centroid with another object and then applying the K-MEANS procedure. This procedure is less dependent on the initialization of the algorithm and provides robust results. Both papers introduce randomness into the neighborhood generating process and observe an increase in the quality of the results. Local Search is often compared with Tabu Search, Genetic Algorithms, and Simulated Annealing, which attempt to reach a possibly global optimum without visiting all possible solutions. Tabu Search consists in choosing a solution better than the current one when it exists, and accepting a sub-optimal solution otherwise. A Tabu list prevents returning to recently evaluated candidates. The procedure can thus pass through local optima, but often at a high computing cost. Simulated Annealing relies on a stochastic process which allows escaping from local optima. Solutions which improve the objective function are not necessarily kept. The selection process consists in taking solutions according to an associated probability, which is higher for solutions improving the function. The probability is also influenced by a global parameter called temperature, which gradually decreases to force the convergence of the algorithm to an optimum. Whereas the other methods generate a unique new solution at each step, the particularity of Genetic Algorithms [Rud94] is to generate a set of best solutions, called a population, at each step. The neighborhood of the population is defined using genetic operators such as reproduction, mutation and crossover. New candidates which surpass their parents are always kept, which guarantees the convergence to a good solution. [BRE91,Col98] apply such algorithms to clustering problems.
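The Simulated Annealing selection rule can be made concrete with the standard Metropolis acceptance probability (a generic sketch, not part of the method studied in this paper; all names are ours):

```python
import math
import random

def accept(delta, temperature, rng=random.random):
    """Metropolis rule for a minimisation problem: an improving move
    (delta <= 0) is always accepted; a worsening move is accepted
    with probability exp(-delta / temperature), which shrinks as the
    temperature decreases."""
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)

# the same worsening move (delta = 0.5) becomes less and less likely
# to be accepted as the temperature is lowered
probs = [math.exp(-0.5 / t) for t in (5.0, 1.0, 0.2)]
```

Lowering the temperature thus forces the random walk to settle into an optimum, exactly as described above.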

4 Variational Approach

To use local optimization procedures, we usually define operators, apply them to the current solution to generate its neighborhood, then compute the measure on each member of the neighborhood and compare the value with the one obtained on the current solution. This procedure is expensive in both memory and computing time. In our problem, computing the measure on a new partition might require duplicating the co-occurrence table and consequently doubling the memory used, which is a drawback for the scalability of the method. Furthermore, the complexity of evaluating the τQ


measure is in O(p × q), with p denoting the number of clusters of P and q that of Q. This cost is multiplied by the cardinality of the neighborhood. To overcome these drawbacks, we propose a variational approach for evaluating the objective function. We define three operators for generating neighboring partitions: the transfer of one element from a cluster to another, the split of a cluster into two, and the merging of two clusters into one. These operators constitute a complete generating system because, whatever the current partition is, we can reach any other partition by applying a finite number of such operators. We evaluate the variation of the τQ measure when modifying the current partition by one of the three operators. We first consider the variation of τQ when transferring, in the partition Q, one attribute-value pair y from a cluster denoted by b to another denoted by e. Given that each cluster of Q is linked to a column of the co-occurrence table, the transfer of y from Q_b to Q_e moves a quantity λ_i^y from the cell on row i and column b to the one on row i and column e. Let us denote by n_ij the elements of the old co-occurrence table, and by m_ij those of the new one. The transfer of y induces the following equations between n_ij and m_ij:

    m_ib = n_ib − λ_i^y ;   m_ie = n_ie + λ_i^y ;   m_ij = n_ij otherwise    (1)
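In code, applying such a transfer touches only two columns of the co-occurrence table (a small Python sketch of equation (1); the values and names are ours):

```python
def apply_transfer(n, lam, b, e):
    """Move quantity lam[i] from column b to column e on each row i,
    i.e. m_ib = n_ib - lam_i and m_ie = n_ie + lam_i; all other
    cells are unchanged."""
    m = [row[:] for row in n]
    for i, l in enumerate(lam):
        m[i][b] -= l
        m[i][e] += l
    return m

n = [[3, 1, 0],
     [2, 2, 1]]
m = apply_transfer(n, lam=[1, 2], b=0, e=2)
```

Note that the row sums n_i. and the grand total n.. are preserved; only two column sums change, which is what makes the closed-form variation below cheap to evaluate.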

The variation of τQ given by the transfer is then

    τQ^old − τQ^new = [ Σ_i Σ_j n_ij²/(n_i. n..) − Σ_j n_.j²/n..² ] / [ 1 − Σ_j n_.j²/n..² ]
                    − [ Σ_i Σ_j m_ij²/(n_i. n..) − Σ_j m_.j²/n..² ] / [ 1 − Σ_j m_.j²/n..² ]

Simplifying using equations (1), we obtain

    τQ^old − τQ^new = [ I (2/n..) Σ_i (λ_i^y / n_i.) (n_ib − n_ie − λ_i^y) + C (2λ^y/n..²) (n_.e − n_.b + λ^y) ]
                      / [ I² − I (2/n..²) λ^y (n_.e − n_.b + λ^y) ]

where λ^y = Σ_i λ_i^y, and I and C are the following constants with respect to b and e:

    I = 1 − Σ_j n_.j² / n..²        C = 1 − Σ_i Σ_j n_ij² / (n_i. n..)
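Our reading of this simplified expression can be checked numerically against a direct recomputation of τQ on a toy table (a verification sketch; the table, the moved quantities and all names are ours):

```python
def tau(n):
    """tau_Q computed directly from a co-occurrence table."""
    tot = sum(map(sum, n))
    row = [sum(r) for r in n]
    col = [sum(c) for c in zip(*n)]
    A = sum(n[i][j] ** 2 / (row[i] * tot)
            for i in range(len(n)) for j in range(len(n[0])))
    B = sum(c ** 2 / tot ** 2 for c in col)
    return (A - B) / (1 - B)

def delta_transfer(n, lam, b, e):
    """Variation tau_old - tau_new for transferring quantities lam
    from column b to column e, using the closed-form expression."""
    tot = sum(map(sum, n))
    row = [sum(r) for r in n]
    col = [sum(c) for c in zip(*n)]
    I = 1 - sum(c ** 2 for c in col) / tot ** 2
    C = 1 - sum(n[i][j] ** 2 / (row[i] * tot)
                for i in range(len(n)) for j in range(len(n[0])))
    lam_tot = sum(lam)
    num = (I * (2 / tot) * sum(l / row[i] * (n[i][b] - n[i][e] - l)
                               for i, l in enumerate(lam))
           + C * (2 * lam_tot / tot ** 2) * (col[e] - col[b] + lam_tot))
    den = I * I - I * (2 / tot ** 2) * lam_tot * (col[e] - col[b] + lam_tot)
    return num / den

n = [[3, 1, 0], [2, 2, 1], [0, 1, 4]]
lam = [1, 2, 0]                  # quantity moved on each row
m = [r[:] for r in n]
for i, l in enumerate(lam):
    m[i][0] -= l                 # transfer from column b = 0 ...
    m[i][2] += l                 # ... to column e = 2
# the closed form agrees with the direct recomputation
assert abs(delta_transfer(n, lam, 0, 2) - (tau(n) - tau(m))) < 1e-9
```

The point of the closed form is that it needs only the old counts and the moved quantities, never the full new table.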

The transfer of several attribute-value pairs in a single movement leads to the same expression. Indeed, considering the transfer of a set S of attribute-value pairs, we compute the λ_i^S vectors as follows:

    λ_i^y = Σ_{x ∈ P_i} Σ_{l=1..h} δ_{V_l(x), y}    and    λ_i^S = Σ_{y ∈ S} λ_i^y

Consequently, the λ_i^S vectors are linear combinations of the (λ_i^y), and transferring a single attribute-value pair or a set of them is evaluated by the same expression.


Furthermore, the fusion of two clusters into a single one can be considered as a transfer of all the attribute-value pairs of one cluster into another, which empties the first cluster. The computational expression is similar to the transfer's one. When two columns b and e are merged into the e one, with λ_i^b = n_ib and λ^b = n_.b, we have the following expression:

    τQ^old − τQ^new = [ −I (2/n..) Σ_i (λ_i^b / n_i.) n_ie + C (2λ^b/n..²) n_.e ] / [ I² − I (2/n..²) λ^b n_.e ]

Splitting a cluster into two is also a transfer-like operation: it can be viewed as the transfer of a set S of attribute-value pairs into a new, empty cluster. When a column b is split into columns e and b, the variation of τQ is:

    τQ^old − τQ^new = [ I (2/n..) Σ_i (λ_i^S / n_i.) (n_ib − λ_i^S) + C (2λ^S/n..²) (λ^S − n_.b) ]
                      / [ I² − I (2/n..²) λ^S (λ^S − n_.b) ]

Similar expressions are found for the τO measure when moving a subset of objects from one row to another. The above expressions show that the variation of the τQ measure can be evaluated using the co-occurrence table, for the n_ij parameters, and the data table, for computing the λ_i expressions. The partition itself is not needed for computing the variations. Furthermore, we have shown that the three different operators lead to a unique expression of the τQ variation, which we denote by ∆(λ_i^S, b, e); the merge and the split are particular cases of the transfer modification. Computing ∆(λ_i^S, b, e) has a lower computational complexity than evaluating τQ^old − τQ^new directly. In the variational approach, the evaluation of the first partition is in O(p × q), because we need to compute the constant C. Then, once the constants I and C are fixed, the complexity of evaluating a new partition is in O(max(p, q)). When we need to update the constants I and C, it takes O(1) and O(p) respectively. Consequently, we reduce the complexity from O(p × q) to O(max(p, q)), except for the first evaluation. Globally, the dimension of the problem is reduced. It is now expressed as a function of the elementary vectors (λ_i^y), with λ_i^y = Σ_{x ∈ P_i} Σ_{l=1..h} δ_{V_l(x), y} for all attribute-value pairs y. All λ_i^S vectors can be generated from the elementary vectors (λ_i^y) as follows:

    λ_i^S = Σ_{y ∈ Q} ε^y λ_i^y ,   ε^y ∈ {0, 1}

The problem is now to find a way to determine the λ_i^S vectors which lead to the largest increase of the measure. In the next section, we propose five algorithms which differ in the way they choose such vectors.

5 Algorithms

Using the previous variational approach in a Local Search procedure leads to the following deterministic algorithm:

    For each cluster P_b do
      For each attribute-value pair y of P_b do
        For each cluster P_e ≠ P_b do
          Compute ∆((λ_i^y), b, e)
        End For
        If (min ∆ < 0) then modify the co-occurrence table
      End For
    End For

At each step, we consider one attribute-value pair per cluster and try to transfer it to another cluster. We modify the co-occurrence table for the transfer with the largest negative variation. It is well known that randomness usually increases the performance of deterministic algorithms. Stochastic optimization can be considered as a random walk over the set of all partitions. If this walk is guided so as to be attracted by high values of some measure on the partitions, the probability of visiting the partitions with globally maximal value is increased [FK00]. We thus propose four randomized versions, depending on which For loop is randomized in the deterministic version.

    Stochastic 1 algorithm
    Randomly choose a cluster P_b
    Randomly choose y, an attribute-value pair of P_b
    For each cluster P_e ≠ P_b do
      Compute ∆((λ_i^y), b, e)
    End For
    If (min ∆ < 0) then modify the co-occurrence table

    Stochastic 2 algorithm
    Randomly choose a cluster P_b
    Randomly choose a subset S in P_b
    For each cluster P_e ≠ P_b do
      Compute ∆(λ_i^S, b, e)
    End For
    If (min ∆ < 0) then modify the co-occurrence table

    Stochastic 3 algorithm
    For each cluster P_b do
      Randomly choose y in P_b
      For each cluster P_e ≠ P_b do
        Compute ∆((λ_i^y), b, e)
      End For
      If (min ∆ < 0) then modify the co-occurrence table
    End For

    Stochastic 4 algorithm
    For each cluster P_b do
      Randomly choose a subset S in P_b
      For each cluster P_e ≠ P_b do
        Compute ∆(λ_i^S, b, e)
      End For
      If (min ∆ < 0) then modify the co-occurrence table
    End For
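In executable form, the deterministic idea looks roughly as follows (a toy Python sketch that evaluates τQ by direct recomputation rather than through the variational ∆; the data and all names are ours):

```python
def tau(n):
    """tau_Q computed directly from a co-occurrence table."""
    tot = sum(map(sum, n))
    row = [sum(r) for r in n]
    col = [sum(c) for c in zip(*n)]
    B = sum(c ** 2 / tot ** 2 for c in col)
    if B >= 1.0:          # single-cluster partition: no predictive gain
        return 0.0
    A = sum(n[i][j] ** 2 / (row[i] * tot)
            for i in range(len(n)) for j in range(len(n[0])))
    return (A - B) / (1 - B)

def table(vectors, assign, k):
    """Column j of the co-occurrence table sums the count vectors of
    the attribute-value pairs currently assigned to cluster j."""
    rows = len(vectors[0])
    n = [[0] * k for _ in range(rows)]
    for v, j in zip(vectors, assign):
        for i in range(rows):
            n[i][j] += v[i]
    return n

def deterministic_pass(vectors, assign, k):
    """For each pair y, try every destination cluster and keep the
    transfer that most increases tau_Q, i.e. min delta < 0."""
    improved = False
    for y in range(len(vectors)):
        best_e, best_val = assign[y], tau(table(vectors, assign, k))
        for e in range(k):
            if e != assign[y]:
                trial = assign[:]
                trial[y] = e
                val = tau(table(vectors, trial, k))
                if val > best_val:
                    best_e, best_val = e, val
        if best_e != assign[y]:
            assign[y] = best_e
            improved = True
    return improved

# four attribute-value pairs, each summarised by its per-object-cluster
# count vector; pairs 0,1 and pairs 2,3 clearly belong together
vectors = [(4, 0), (3, 1), (0, 4), (1, 3)]
assign = [0, 1, 0, 1]              # deliberately bad starting partition
while deterministic_pass(vectors, assign, k=2):
    pass                           # iterate until a local optimum
```

On this toy instance the loop recovers the intended grouping {0,1} versus {2,3}; the paper's version replaces the direct τQ recomputation with the much cheaper ∆ evaluation.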


These algorithms are different combinations of randomness and deterministic choice of the cluster and of the attribute-value pair(s) to modify. Note that even and odd versions differ in the choice of one versus a subset of attribute-value pairs. In the first two algorithms, the cluster from which y or S is removed is chosen randomly, whereas in the last two all clusters are examined. In all these algorithms, the best destination cluster is chosen after examining all possible ones.

6 Experimentation

To optimize a bi-partition, we alternately execute the algorithm with τQ as objective function, which improves the partition Q of PQ, and then apply the same algorithm with the τO objective function, which improves the partition P of PO. We must underline that modifying Q (resp. P) greatly influences τQ (resp. τO) and to a lesser extent also influences τO (resp. τQ). This explains why, on some of the following graphs, we observe a decrease of the measure. We first apply the algorithms to a perfect synthetic data set which contains 200 objects and 30 attributes with 5 different values each. This data set is composed of 5 blocks of homogeneous data, forming a bi-partition into 5 clusters. Starting from the discrete partition (Fig. 1, left) or from a random partition (Fig. 1, right), we apply the five algorithms to this data set. In Fig. 1, the value of τQ is plotted at each step.

[Two plots of τQ versus iteration for the Determinist and Stochastic 1-4 procedures.]

Fig. 1. Perfect synthetic data set, starting from the discrete partition (left), or from a random one (right)

On the synthetic data set, we observe that the deterministic and the third stochastic procedures find the optimal partition in fewer steps than the other procedures. This can be explained by the fact that in those procedures all possible clusters P_b and P_e are evaluated, and that at each step the best movement for a given single y is chosen. The first stochastic procedure is also really impressive. When the first partition is the discrete one (see Fig. 1, left), it


has the same behavior as the deterministic procedure. When the first partition is constructed randomly (see Fig. 1, right), it takes more steps to find the goal partition. The second and the fourth stochastic procedures are the slowest. They rely on a random choice of a subset of attribute-value pairs: when the subset is composed of dissimilar attribute-value pairs, the procedure cannot improve the value of the measure. This explains why those procedures perform better on the left graph, where the first partition is the discrete one and consequently the possible subsets S are of small cardinality. To simulate a more realistic case, we randomly introduce some noise into the data set (see Fig. 2, which shows the τQ value for each iteration with 10% (left) and 30% (right) of random noise).

[Two plots of τQ versus iteration for the Determinist and Stochastic 1-4 procedures on the noisy data sets.]

Fig. 2. Synthetic Data set with 10% noise (left) and 30% noise (right)

The results obtained are similar to those found in the perfect case, and the convergence speeds are in the same order. The previous graphs, however, mask an important point: the time required for each step. Table 1 gathers the computation times, expressed in seconds, used for 10000 iterations. For information, they were obtained on a Pentium II 300 MHz with 32 MB of memory.

Table 1. Computation time (in seconds) used by the several algorithms on different data sets

                   Perfect   10% noise   30% noise
    Deterministic      204         250         275
    Stochastic 1      2.66           4           4
    Stochastic 2        23          31          31
    Stochastic 3        39         130         130
    Stochastic 4        81         110         120


The deterministic procedure is very time consuming. The first stochastic procedure seems to be a good compromise between accuracy and computation time. We then apply the algorithms to a well known benchmark: the 1984 United States Congressional Voting Records Database. We remove the attribute expressing the vote. In Fig. 3 (left), the τQ values are plotted for each iteration of the algorithms. Contrary to the previous experiments, the distinction between the second and fourth stochastic procedures on one hand and the other procedures on the other hand is less obvious. All the procedures find nearly the same partition. We can observe an unexpected decrease of the function. This is due to the fact that an increase of τO can lead to a decrease of τQ. Such a phenomenon appears rarely, and when it does, the algorithm quickly restores a better partition. Consequently this is not a handicap in the optimization process. To visualize the influence of the τO optimization on the τQ one, we plot (see Fig. 3, right) the values of both functions at each iteration. On this graph we clearly observe the compensation process in the optimization of both functions.

[Left: τQ versus iteration for the five procedures on the Vote data. Right: τO and τQ versus iteration for Stochastic 1.]

Fig. 3. Vote Data Set (left), Values of τO and τQ when using Stochastic 1 (right)

The quality of the obtained partition of voters can be evaluated through its comparison with the results of the elections, which decide between a Democrat and a Republican Congress. We also obtain a partition into two clusters. Table 2 crosses the two partitions.

Table 2. Cross table of the votes and the results obtained by algorithm 4

              Democrat   Republican   Total
    P1             221           14     235
    P2              46          154     200
    Total          267          168     435
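The 86.2% accuracy figure follows directly from the cross table (a trivial check; the dictionary layout is ours):

```python
cross = {"P1": {"Democrat": 221, "Republican": 14},
         "P2": {"Democrat": 46, "Republican": 154}}

# match P1 with Democrat and P2 with Republican, as in the text
correct = cross["P1"]["Democrat"] + cross["P2"]["Republican"]
total = sum(sum(row.values()) for row in cross.values())
accuracy = correct / total            # 375 / 435, about 0.862
```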


The group P1 of the obtained partition clearly corresponds to the Democrat population, whereas the group P2 fits the Republican population. The rate of accurate prediction is here 86.2%, whereas about 90% accuracy appears to be STAGGER's asymptote. The quality of the partition of the set of attribute-value pairs is also very good. We denote by G1 the cluster of attribute-value pairs associated with the group P1, which fits the Democrat population. This set gathers all the attribute-value pairs whose conditional probability of appearance, given that the voter is a Democrat, is greater than the one associated with Republican voters. The probability of being a Democrat, knowing that the voter has such an attribute-value pair, is then also greater than the one obtained for Republican voters, and this for all the attribute-value pairs. All attributes are binary (yes/no), and for each attribute, the yes value belongs to one cluster and the no value to the other. Consequently, we can say that the obtained partition is ideal with respect to our criteria of a good partition.

7 Conclusion

In this article, we have presented a variational study of a function used for guiding the search for a partition in conceptual clustering. It consists in evaluating the variation of the function when transfer, merge or split operators are applied to modify a partition. We showed that using this approach in an optimization procedure decreases the computational cost. Furthermore, it simplifies the problem by expressing the three operators as a single one. We combined this approach with stochastic local search optimization procedures and applied them to a synthetic data set and to the real Vote data set taken from the UCI repository. The experiments lead to the conclusion that some randomness is needed in the local search procedure to speed up the convergence to the best partition. But too much randomness, as when the procedure examines a random subset of the attribute-value pairs of a cluster, slows down the convergence even more. The partitions obtained on the Vote data set are both of excellent accuracy. The partition of the voter set is nearly the same as the one given by the result of the election, without taking this information into account. The partition of the set of attribute-value pairs follows exactly the conditional probabilities of appearance of those attribute-value pairs given the vote class. In future work, we plan to analytically approximate the combination of (λ_i^y) which most improves the quality of the partition. This would reduce the number of optimization steps required to reach an optimum.

References

[BRE91] J. N. Bhuyan, V. V. Raghavan, and V. K. Elayavalli. Genetic algorithm for clustering with ordered representation. In Richard K. Belew and Lashon B. Booker, editors, Proceedings of the Fourth International Conference on Genetic Algorithms, San Mateo, CA, 1991. Morgan Kaufmann Publishers.


[CDG+88] G. Celeux, E. Diday, G. Govaert, Y. Lechevallier, and H. Ralambondrainy. Classification automatique des données. Dunod, Paris, 1988.
[Col98] R. M. Cole. Clustering with genetic algorithms. Master's thesis, University of Western Australia, 1998.
[CS96] P. Cheeseman and J. Stutz. Bayesian classification (AutoClass): Theory and results. Advances in Knowledge Discovery and Data Mining, 1996.
[Fis87] D. H. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2:139-172, 1987.
[Fis96] D. H. Fisher. Iterative optimization and simplification of hierarchical clusterings. Journal of Artificial Intelligence Research, 4:147-180, 1996.
[FK00] P. Fränti and J. Kivijärvi. Randomised local search algorithm for the clustering problem. Pattern Analysis and Applications, pages 358-369, 2000.
[GK54] L. A. Goodman and W. H. Kruskal. Measures of association for cross classification. Journal of the American Statistical Association, 49:732-764, 1954.
[GKLN00] M. Gyllenberg, T. Koski, T. Lund, and O. Nevalainen. Clustering by adaptive local search with multiple search operators. Pattern Analysis and Applications, pages 348-357, 2000.
[Gov84] G. Govaert. Classification simultanée de tableaux binaires. In E. Diday, M. Jambu, L. Lebart, J. Pages, and R. Tomassone, editors, Data Analysis and Informatics III, pages 233-236. North Holland, 1984.
[JD88] A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, Englewood Cliffs, New Jersey, 1988.
[LdC96] I. C. Lerman and J. F. P. da Costa. Coefficients d'association et variables à très grand nombre de catégories dans les arbres de décision : application à l'identification de la structure secondaire d'une protéine. Technical Report 2803, INRIA, février 1996.
[MH91] G. Matthews and J. Hearne. Clustering without a metric. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(2):175-184, 1991.
[MSW72] W. T. McCormick, P. J. Schweitzer, and T. W. White. Problem decomposition and data reorganization by a clustering technique. Operations Research, 20(5):993-1009, 1972.
[Ols95] M. Olszak. Modélisation des relations de causalité entre variables qualitatives. PhD thesis, Université de Genève, 1995.
[Rak97] R. Rakotomalala. Graphes d'induction. PhD thesis, Université Claude Bernard Lyon 1, 1997.
[RF01] C. Robardet and F. Feschet. Comparison of three objective functions for conceptual clustering. In Proceedings of the 5th European Conference on Principles and Practice of Knowledge Discovery in Databases. Springer-Verlag, September 2001.
[Rud94] G. Rudolph. Convergence analysis of canonical genetic algorithms. IEEE Transactions on Neural Networks, 5(1):96-101, 1994.
[SCH75] J. R. Slagle, C. L. Chang, and S. R. Heller. A clustering and data-reorganizing algorithm. IEEE Transactions on Systems, Man and Cybernetics, pages 125-128, January 1975.

Towards a Method of Searching a Diverse Theory Space for Scientific Discovery

Joseph Phillips
University of Pittsburgh, Computer Science Dept.
Pittsburgh, PA 15260, USA
josephp@cs.pitt.edu

Abstract. Scientists need customizable tools to help them with discovery. We present an adjustable heuristic function for scientific discovery. This function may be considered in either a Minimum Message Length (MML) or a Bayesian Net manner. The function is approximate, because the default method of specifying theory prior probabilities is a gross estimate and because there is more to theory choice than maximizing probability. We do, however, effectively capture some user preferences with our technique. We show this for the qualitatively different domains of geophysics and sociology.

1 Introduction

Our ultimate goal is to write a general program to assist scientists in creating and improving scientific models. Realizing this goal requires progress in machine learning, knowledge discovery in databases, data visualization and search algorithms. It also requires progress in scientific model preferencing. The scientific model preference problem is compounded by the fact that several scientists with very similar background knowledge may see the same data but prefer different models. This paper is the first in an ongoing study to address the scientific model preferencing issue. Scientific discovery can be viewed as a parameter search in a large and extremely inhomogeneous space. Physicists, for example, prefer strong relationships between numeric values (e.g., equations) when they can be found. They also, however, use knowledge that is more conveniently expressed hierarchically in decision trees and semantic nets. This is exemplified by the classification of, and the assigning of fundamental properties to, subatomic particles. The minimum message length (MML) criterion is a mathematically well-grounded approach for choosing the most probable theory given data [21][8][24][5]. Inspired by information theory, the criterion states that the most probable model has the smallest encoding of both the theory and the data. Ideally, the theory's encoding

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 304-322, 2001. © Springer-Verlag Berlin Heidelberg 2001
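The two-part MML idea can be illustrated with a crude toy code for polynomial model selection (our own illustrative sketch, not the encoding developed in this paper; all constants and names are ours):

```python
import math

def message_length(xs, ys, coeffs, precision=0.1):
    """Crude two-part code: the theory part charges a fixed cost per
    polynomial coefficient; the data part charges -log2 of each
    residual's probability under a unit-variance Gaussian,
    discretised at the given precision."""
    theory_bits = 32 * len(coeffs)          # cost of stating the theory
    data_bits = 0.0
    for x, y in zip(xs, ys):
        pred = sum(c * x ** k for k, c in enumerate(coeffs))
        r = y - pred
        p = precision * math.exp(-r * r / 2) / math.sqrt(2 * math.pi)
        data_bits += -math.log2(p)
    return theory_bits + data_bits

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 2.1, 3.9, 6.0, 8.1]              # nearly y = 2x
# the line and the zero-padded cubic fit the data equally well, but the
# cubic's theory part costs twice as much, so MML prefers the line
line = message_length(xs, ys, [0.0, 2.0])
cubic = message_length(xs, ys, [0.0, 2.0, 0.0, 0.0])
```

The smallest total encoding wins, which is the sense in which MML trades model complexity against fit.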


results from a domain expert's estimation of its prior probability and is language independent. The encoding of the data should also be probabilistic: as a function of a given theory. Despite its generality and its power for finding parameters within single classes of models (e.g., the class of polynomials), many have expressed skepticism about whether MML may meaningfully be applied to finding parameters in inhomogeneous model spaces (e.g., general scientific discovery). Cheeseman, for example, states "although finding the most probable domain model is often regarded as the goal of scientific investigation, in general, it is not the optimal means of making predictions." [5] Our immediate, limited goal is to devise a heuristic function that can help users in large and inhomogeneous model spaces. Ideally, a search algorithm informed with our heuristic will return several regions of the model space that contain promising models, some known and some novel. Our approach is to adapt MML in a customizable manner.

1. We make MML applicable to a larger class of scientific discovery problems by mapping its terms onto those used by scientists: theory, laws and data. The MML theory is mapped to scientific theory. The MML data is split into scientific laws and data.

2. We make our heuristic function adjustable, but in a principled manner, by giving the user only two calibration parameters. These parameters directly correspond to the relationship between scientific theory and law, and between scientific theory and data.

It would be nice if we could ignore differences in preference and pretend that there is one “best” theory for all scientists. This, however, ignores significant evidence that scientists differ in opinion, e.g., see [10][15]. We judge our function against criteria for heuristic functions: generality, ease of computation, simplicity and smoothness. We do not claim to have “solved” this problem. The feature set by which to judge theories and the identification of the “best” model remain unsolved problems.

1. We offer no good guidance in developing the theory’s prior probability. Cheeseman and others have stressed the importance of using domain knowledge to specify the theory’s prior probability. They have also stated that syntactic features are often a poor substitute. We are aware of no general algorithm for estimating a theory’s prior probability. Although our technique is not limited to syntactic features, we use them in this paper. Our approach is compatible with more principled techniques for specifying prior probabilities.

2. We make no claim that the “best” theory will result from this approach. This is due to (1) the unsolved prior probability problem, (2) the difficulty of searching a large and inhomogeneous model space, and (3) the fact that the most probable model may or may not be the best model.


J. Phillips

We have developed a useful heuristic function despite these two major limitations. Its generality is tested by analyzing its performance in two completely different domains: sociology and geophysics. This paper is organized as follows. Section 2 discusses previous approaches to automated scientific discovery. Section 3 briefly introduces MML. Our approach is detailed in section 4. Section 5 presents and discusses our experiments. Section 6 concludes.

2 Scientific Discovery

Several criteria have been proposed by philosophers of science for comparing competing hypotheses [3]. Among them are accuracy/empirical support, simplicity, novelty and cost/utility. Most automated approaches consider accuracy and simplicity.

IDS by Nordhausen and Langley was perhaps the first general program for scientific discovery [18][19]. IDS takes as input an initial hierarchy of abstracted states and a sequential list of “histories” (qualitative states, see [6]). Using each history, IDS modifies the affected nodes of the abstracted state tree to incorporate any new knowledge gained from that history. Its output is a fuller, richer hierarchy of nodes representing history abstractions.

Thagard introduced Processes of Induction (PI) to propose a computational scheme for scientific reasoning and discovery, not as a working discovery tool [23]. PI represents models as having theories, laws and data. It evaluates scientific models by multiplying a simplicity metric by a data coverage metric. The simplicity metric is a function of how many facts have been explained and of how many co-hypotheses were needed to help explain them. The evaluation scheme is fixed and has no notion of degree of inaccuracy.

Zytkow and Zembowicz developed 49er, a general knowledge discovery tool [27][26]. It has a two-stage process for finding regularities in databases. The first stage creates contingency tables (counts of how often values of one attribute co-occur with those of another) for pairings of database attributes. The second stage uses the contingency tables to constrain the search for other, higher-order regularities (e.g., taxonomies, equations, subset relations, etc.).

Valdes-Perez has suggested searching the space of scientific models from the simplest to increasingly complex ones, stopping at the first that fits the data. MECHEM uses this approach to find chemical reaction mechanisms [25].
Such orderings would be easy to encode as heuristic functions. We extend these approaches by using an adjustable, explicitly stated heuristic function that does not require enumerating all possible models. Our approach generalizes Thagard’s scheme and places it on sounder theoretical footing.


3 Information Theory and Diverse Model Discovery

The MML criterion is to minimize the sum of the length of a theory and of the data given the theory. Some data will have a smaller combined compressed length than the original message. For example, the pitch and relative durations of some bird calls may be written in musical notation. This notation dramatically reduces the information from the original time-dependent air-pressure signal that the bird produced. However, many sounds are not appropriately described by musical notation (e.g., human speech). For these, the original time-dependent air-pressure signal will be a better representation than musical notation.

The equation that relates these terms for data set D; context c; discrete, mutually exclusive and exhaustive hypotheses {H0, H1, ..., Hn} with assigned prior probabilities p(Hi|c); and computed conditional data probabilities p(D|Hi,c) is:

(1) –log p(Hi|D,c) = –log p(Hi|c) – log p(D|Hi,c) + const

which is equation (2) of [5]. Recall that –log p(choice) is the Shannon lower bound on the information needed to distinguish choice from other possibilities. The constant term serves to “normalize” the probabilities and may be ignored if you only want their relative order.

Cheeseman gives this iterative process for applying MML:
1. Define the theory space.
2. Use domain knowledge to assign prior probabilities to the theories.
3. Use Bayes’ theorem to obtain the posterior probabilities of the theories given the data from adequate descriptions of the theories (i.e., from descriptions that let you compute p(D|Hi,c)).
4. Search the space with an appropriate algorithm.
5. Stop the search when a probable enough theory has been found (subject to computational constraints), or redefine the theory space or prior probabilities and repeat.

Several obstacles hamper efforts to apply MML to general scientific discovery. Among them are the specification of the initial theory prior probabilities, the inherently iterative nature of MML, and the difficulty of searching this space for a true “highest probability” theory. Like other MML efforts, there is no good rule for specifying an initial set of prior probabilities. Although Cheeseman and others warn against using syntactic features, this may be the easiest approach to try in a new domain. MML is an inherently iterative process of redefining theory spaces and prior probabilities. This complicates the usage of any function that needs calibration.
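The scoring step of this process (equation 1, with the constant dropped) can be sketched over a toy discrete hypothesis space. The hypotheses, priors and coin-flip data below are invented for illustration:

```python
import math

def mml_score(prior: float, likelihood: float) -> float:
    """Message length in bits: -log2 p(H|c) - log2 p(D|H,c), constant dropped."""
    return -math.log2(prior) - math.log2(likelihood)

def binomial_likelihood(p_heads: float, heads: int, tails: int) -> float:
    """p(D | H, c) for a sequence of independent coin flips."""
    return (p_heads ** heads) * ((1 - p_heads) ** tails)

# Hypothetical theory space: a fair coin vs. a biased coin,
# with priors standing in for "domain knowledge".
hypotheses = {"fair": (0.7, 0.5), "biased": (0.3, 0.8)}  # name -> (prior, p_heads)
data = (8, 2)  # 8 heads, 2 tails

scores = {name: mml_score(prior, binomial_likelihood(ph, *data))
          for name, (prior, ph) in hypotheses.items()}
best = min(scores, key=scores.get)  # smallest message length wins
```

Despite its lower prior, the biased hypothesis wins here because it compresses the data far better, which is exactly the trade-off the criterion encodes.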


The scientific theory search space is expected to be highly irregular, hampering the search for the “best” model. The same is true of many other search domains. Cheeseman suggests simulated annealing and the EM algorithm as potential search mechanisms.

4 Our Approach

We do not claim to have an optimal heuristic function in terms of returning the truly “best” model. Rather, our goal is to create a decent heuristic function that may help scientists on their initial searches of large, inhomogeneous spaces. Good heuristics for real-world problems are often tricky to design [16]. We evaluate our function based on four criteria:
1. Generality over different sciences: We seek a function that is applicable to primarily conceptual models as well as to primarily numeric ones.
2. Ease of computation: The function should not rely too heavily on values that are computationally difficult to obtain. And, once it has its values, it should be rapidly computable.
3. Simplicity of form: There are several competing beliefs about how scientific models should be evaluated. The function’s design should be as transparent as possible so that its assumptions are readily comprehended.
4. Smoothness: The function should give similar models similar scores.

We chose these criteria because they are important to our long-term goal of creating a general program to assist a variety of scientists. Our contributions are the improvements in generality and ease of computation over Thagard’s function. Generality is improved in three ways. First, the function is adjustable to the tastes of a particular scientist. Second, it is able to handle degrees of inaccuracy. Lastly, it may use statistical arguments as well as proofs. Statistical arguments also improve the ease of computation: the function does not have to formally prove laws or data using a possibly undecidable theory. The form of our function, however, is a little more detailed than Thagard’s. The smoothness of both approaches critically depends upon how the user designs models.

Following Thagard, models have three components: a theory that specifies the details of the model, the data to predict, and a set of laws found from the data and predicted by the theory.
The theory and the law set are both composed of assertions in some language. We use first-order predicate logic with the data structure extensions of Prolog as our language in this paper. The distinction between which assertions are theory and which are laws is given by Lakatos. He distinguishes between commonly accepted knowledge (the “hard core”, i.e., theory) and more tentatively held knowledge (the “auxiliary hypotheses”, i.e., laws) of a given research program


[12][13]. The auxiliary hypotheses are the statements that are not commonly held (i.e., have lower prior probability), and are the main objects that are manipulated during Kuhnian normal scientific discovery [10]. The data are assumed to be in tabular form with associated uncertainties and error bars. It is simplest to assume that:
1. all measurements are independent of each other,
2. the data influence the choice of law set, and
3. the law set influences the choice of theory assertions.
Figure 1 depicts these assumptions graphically as a Bayesian network.

[Figure: a three-node Bayesian network, data → laws → theory]

Fig. 1. Bayesian network underlying the relationship between data, laws and theory

We are interested in the most probable total model. We derive the following, starting from the Bayesian network of figure 1. Let T denote theory, LS denote a set of laws, and D denote data:

(2) p(T, D) = p(T|D) ⋅ p(D)

Using Bayes’ rule we may re-write this as:

(3) = Σi p(T) ⋅ p(LSi|T) ⋅ p(D|LSi)

The last expression sums over all law sets and is appropriate when there may be disagreement over which law set is best (e.g., several scientists combining their beliefs). However, for an individual scientist, a particular law set may appear much more probable than any of its competitors. In this case we may simplify the expression to:

(4) p(T, D) = p(T) ⋅ p(LS|T) ⋅ p(D|LS)
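The difference between marginalizing over all law sets (equation 3) and keeping only the dominant one (equation 4) can be shown numerically. The probabilities below are invented for the sketch:

```python
# Hypothetical numbers: one theory T and three candidate law sets,
# with assumed p(LS_i | T) and p(D | LS_i) values.
p_T = 0.1
law_sets = [
    (0.80, 0.60),   # (p(LS|T), p(D|LS)) -- the dominant law set
    (0.15, 0.10),
    (0.05, 0.05),
]

# Equation 3: marginalize over all candidate law sets.
p_full = p_T * sum(p_ls * p_d for p_ls, p_d in law_sets)

# Equation 4: keep only the most probable law-set term.
p_approx = p_T * max(p_ls * p_d for p_ls, p_d in law_sets)
```

When one law set dominates, as here, the single-term approximation is close to (and never exceeds) the full sum.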

Now we consider the meaning of each term.


The first term of equation 4 tells us the a priori probability of a theory, without reference to the law set or data. It encodes the biases on theories. It may be used, for example, to prefer one type of assertion over another. A commonly mentioned bias in science is one for syntactic simplicity, which is often measured as the length of an expression in a given language. This first term is the natural place to encode such a bias because this common measure of simplicity is only a function of the length of the expression.

(5) p(T) = 2^(–s(T))

The function s(T) returns a measure of the size of T in some language. Equation 5 uses Shannon information theory to convert that size into a probability. We admit that the syntactic length metric is crude. We welcome scientists to redefine p(T) as they choose based upon their own domain knowledge. In defense of this initial estimate of p(T) we note that syntactic metrics: (1) are easy to compute, (2) are well agreed upon as being relevant (if not completely correct), (3) are common to many or all sciences (as opposed to symmetry, for example, which enjoys larger support among physicists than among other scientists), and (4) favor syntactically simple theories, which may be easier to comprehend. The last point is especially relevant for initial probability distributions, which may return several interesting model space regions that scientists must understand before determining whether they warrant further exploration.

The second term tells us how likely the assertions of the law set are given the theory that we have chosen. At one extreme, if all laws are logically entailed by the theory, the term is 1.0 because they must be true (given the theory as premises). It is also 1.0 if the law set is empty because the theory is then used to directly compute the data. At the other extreme, the term must be 0.0 if the theory contradicts any statement of the law set. Values in between signify that the law set may or may not follow, depending on specific values of free parameters in the theory.

Free parameters are values that the theory refers to that do not have definite values, but distributions over sets of values. Examples include coefficients with standard deviations, and random numbers used during stochastic experiments. In these cases, the second term is set equal to the fraction of the free parameter space in which all of the statements of the law set are found to hold. For random numbers it will be more practical to estimate this value by sampling the space.
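Estimating the second term by sampling, as suggested above for random numbers, might look like this. The uniform free parameter and the "theta > 0.25" law are hypothetical:

```python
import random

def estimate_law_prob(n_samples: int = 100_000, seed: int = 0) -> float:
    """Estimate p(LS | T) as the fraction of the free-parameter space
    in which every statement of the law set holds, by sampling.

    Hypothetical setup: the theory leaves one coefficient free,
    theta ~ Uniform(0, 1), and the single law holds only when theta > 0.25,
    so the true fraction is 0.75.
    """
    rng = random.Random(seed)
    holds = sum(1 for _ in range(n_samples) if rng.random() > 0.25)
    return holds / n_samples

p_ls_given_t = estimate_law_prob()  # approximately 0.75
```

The same pattern applies to coefficients with standard deviations: draw each free parameter from its distribution and count how often the whole law set holds.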
Laws may refer only to the theoretical terms introduced in theories. The third term measures empirical support and the degree of data coverage by telling us how likely the data are given the statements of the law set and theory. The


same extremes hold: the term is 1.0 when all of the data are logically entailed, and 0.0 when any of the data are contradicted by the law set or theory. Again, values in between 0.0 and 1.0 represent the fraction of the free parameter space in which the data are observed. Statistical assertions have an implicit free parameter that tells from which data set the statistic was collected. For example, consider two integers, each in the set [0..9], with an average value of 1. The implicit free parameter must denote one of three sets: (1,1), (0,2) or (2,0).

Please consider this (propositional) example. Let our theory be the assertion “a -> b”, our law be “a” and our data be two occurrences of “b”. We would pay the appropriate (perhaps syntactic) price for the theory. The law is not derivable from the theory, so we set its probability to p(a) (the a priori probability that the free variable A, which ranges over “a” and “not(a)”, actually is “a”). From our theory and law we may deduce our data with probability 1. If, however, we add the assertions “c -> a” and “c” to our theory then we have (perhaps) increased the theory cost, but the law is now deducible from the theory. Thus, the law has probability 1 and no cost.

A problem with the heuristic function as given is that it has no parameters to be tuned to a particular scientist’s preferences. This implies that it always returns the same value for the same arguments, which contradicts our goal of not imposing one ideal form on all scientific models. Scientists should be able to fine-tune the heuristic function, but any adjustment should be general enough to be applicable to all models. Further, we want the number of parameters to be relatively small, both because it will make the function easier to calibrate and because we want to guard against potential abuse by choosing a set of parameters that happen to make one model score well and a similar one score poorly. Our solution was to generalize the function in the following manner:

(6) htm+(T, LS, D) = p(T)^A ⋅ p(LS|T)^B ⋅ p(D|LS)^C

The “tm” signifies that the function is over total models (i.e., theory, law set and data) and the “+” reminds us that this is a function to maximize (i.e., larger values are better). The three parameters A, B and C allow us to independently vary the relative weights of the a priori model probability, the law set probability and the data probability. Instead of maximizing probability, we may view it as minimizing information:

(7) htm–(T, LS, D) = A ⋅ s(T) – B ⋅ log2(p(LS|T)) – C ⋅ log2(p(D|LS))

The “-” subscript denotes that this function should be minimized.
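A direct transcription of equation 7 as code, assuming the component probabilities have already been computed; with the default weights it reduces to plain MML:

```python
import math

def h_tm_minus(s_theory: float, p_ls_given_t: float, p_d_given_ls: float,
               A: float = 1.0, B: float = 1.0, C: float = 1.0) -> float:
    """Equation 7: total information, in bits, to be minimized.

    s_theory      -- syntactic size of the theory, s(T), in bits
    p_ls_given_t  -- p(LS | T), fraction of free-parameter space where the laws hold
    p_d_given_ls  -- p(D | LS), probability of the data given the laws
    A, B, C       -- the calibration weights
    """
    return (A * s_theory
            - B * math.log2(p_ls_given_t)
            - C * math.log2(p_d_given_ls))

# With A = B = C = 1 and p(T) = 2**(-s(T)), this is ordinary MML:
# -log2 p(T) - log2 p(LS|T) - log2 p(D|LS).
```

For example, a 10-bit theory whose laws are entailed (p = 1.0) and whose data have probability 0.25 scores 10 + 0 + 2 = 12 bits at the default weights.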


Equation 7 generalizes the original MML equation 1 in two ways. First, equation 1’s –log p(D|Hi,c) has been split into two terms, one for the law set and one for the data. Both are graded probabilistically. Second, the coefficients A, B and C act as linear weights for the information terms. The linear weights may seem to grossly overgeneralize equation 1, but it really depends on how they are used. This is discussed in more detail in the next section.

There are two advantages to this weighting approach. First, it conforms to our notion that some sciences value theory conciseness and hard predictions more than others: set the values of A and C higher in those sciences. Second, it does not allow arbitrary and contrived exceptions to make two similar total models score significantly differently. Although we have offered a syntactic feature-based approach to specifying a theory’s prior probability, we have not limited scientists to our function. Further, we admit that this is an iterative approach in which probabilities are refined.

Revisiting our criteria we find:
1. Generality is achieved with the adjustable weights, the usage of probabilities of laws instead of counts of “explained facts”, the usage of prior distributions instead of “co-hypotheses”, and the potential use of proofs or statistical arguments.
2. The ease of computation is limited by our proof or statistical argument method, not by the heuristic.
3. Simplicity is achieved because the form is a weighted sum with terms for theory, law and data.
4. Smoothness is achieved because lumping all theory together, all laws together and all data together hampers a user’s ability to create one model that scores well and another very similar one that scores poorly.

Further generalizations of htm+ and htm– may be envisioned. Each of the coefficients A, B and C may be split into several coefficients A[1..n1], B[1..n2] and C[1..n3]. These finer-grained coefficients may be used to weight specific aspects of the theory (e.g.,
A[1] for equations, A[2] for decision trees, etc.), specific laws of the law set (e.g., B[1] for equations, B[2] for simple logical assertions, etc.), and specific types of data (e.g., C[1] for spatial measurements, C[2] for temporal measurements, etc.). Using the finer-grained coefficients is justifiable in some cases, such as when there are large differences in precision. For example, in seismology, earthquake times are known with very high precision: to within a few seconds per century. Earthquake locations are known with less precision: to only within tens of kilometers per 40,000 km (the Earth’s circumference). Earthquake energies are known with far less precision, frequently only to an order of magnitude. We may want to weight each type of


data separately, taking into consideration how much precision is given and how tightly we want these data fit at the expense of other data. To simplify analysis and presentation, parameters A, B and C from equations 6 and 7 were not subdivided.

5 Experiments and Discussion

This section discusses the rough calibration of the heuristic function to models in two sciences. Geophysics and sociology were chosen because they cover a broad spectrum of acceptable scientific models. We do not evaluate this function by comparing its output with that of IDS, PI, 49er or MECHEM. Which model a scientist believes in given specific data is, at least to some degree, subjective. Rather, we seek a method of calibrating our heuristic such that, if it is given examples of models that users like, it can prefer similar models in the future.

The heuristic function’s parameters may be calibrated for each science by analyzing its accepted models. Although there are three parameters, we care only about their relative values. Accordingly, we may set A to 1 and let B and C vary. Equivalently, borrowing from physical chemistry, we can plot B/A versus C/A to create a “phase diagram” that tells which of the various total models are preferred by the heuristic. Each phase diagram constrains the region of each scientific model. This in turn constrains B/A and C/A for all models. Comparing B/A with C/A makes the linear weights of equation 7 a conservative generalization of equation 1.

The plots are primarily a comparison between B and C, and represent a value judgement on how much scientists want their uncertainty in the laws rather than in the data. There is no “correct” answer to this question. As we will see, it varies from scientist to scientist. This also strengthens our argument for an adjustable heuristic function. If a scientist prefers model X then that scientist should set the parameters to where X is preferred. If the scientist is strongly tempted by model Y, then the scientist should adjust the parameters to be in the region of X but leaning towards that of Y. The scientist may iteratively update the parameter values as new models are evaluated by both the scientist and the heuristic.
Please recall our limited goal: to do an initial search in a large and inhomogeneous space for areas that contain potentially promising models. We do not promise the best models. Also, this may be an iterative process where theory prior probabilities are revised according to previous results.


The Knowledge Base and How It Predicts

The experiments were designed for a variant of the knowledge base discussed in [20]. The knowledge base has two lists of assertions, one for the theory and one for the laws. These assertions describe a standard is_a frame hierarchy of knowledge. Assertions may be frame inheritance statements, equations or Prolog-like logic sentences. A Prolog-like resolution engine drives inference, but dedicated code handles frame inheritance and equations for efficiency. The output of the knowledge base for a given query is either an answer, or FAILURE, signifying that no prediction is possible.

An information cost is accrued by the data when a prediction is wrong or missing. For symbolic values this cost is the Shannon information cost of the prior probability of the recorded answer. Thus, the default model to beat is the product of the prior probabilities of each datum. For integers and fixed- and floating-point values the cost is[1]:

(8) log2(DistinctValDiff(predict, record) + 1)

where DistinctValDiff() returns the number of distinct, representable values between the predicted and recorded values in the attribute’s given precision. (For example, if an attribute is limited to multiples of 0.1 then DistinctValDiff(0.2, 0.4) is 2.) When predict is missing, the function is set to its highest value for that attribute.

Sociology Data

This technique requires large amounts of calibration data. We focused on models of family structure because United States Census data on family structure are readily available [4]. Data are not available for specific individuals, but they are summarized in several tables. From these summaries the number of families with 1, 2, 3, 4, 5, and 6 or more “own children” may be calculated for each family type. The family types are married family, male-householder family, female-householder family, married subfamily, male-householder subfamily and female-householder subfamily. Additionally, the number of childless families (but not subfamilies) may be calculated. The term “own children” means children related by birth, marriage or adoption. The U.S. Census

[1] Equation 8 corresponds to the last term of equation 7. It defines a maximal probability at the recorded value and an exponentially decaying probability above and below that value. This distribution may be replaced by others and is not a critical aspect of this approach.


Bureau switched from “head of house” to “householder” to emphasize the sharing of responsibilities prevalent in modern American families. The term “subfamily” refers to parent(s) who live with other adult(s) who are the householder(s) (e.g., their own parents).

We randomly created a database of 10,000 people in proportion to the distribution of household types and number of children computed from the U.S. Census data. This database slightly underrepresents the number of children because the U.S. Census data do not distinguish among families with 6 or more children. We treated such cases as exactly 6 children. It underrepresents the number of adults more because we made no attempt to include all cases of adults living with other adults. Our interest is only in predicting where children live as a function of their parents. The database lists each person, their address, and, when the person is a child, their mother and father. Children who did not live with their father were given illegal values for their father attribute. The same was done for the mother attribute. All attributes are symbolic.

Sociology Models

After surveying ethnographic reports on 250 societies, Murdock came to the anticlimactic conclusion that the form of families in all societies is that of “. . . a married man and woman with their offspring. [17]” (This is a minimal family structure because that unit may be embedded in larger structures.) We take this statement as the theory. We encode it in the structure of the virtual relations of figure 2, augmented with some extra semantics. For example, from the structure of the database we may deduce that all families have one address, one childset, one mother and one father, that a set of children may have 0 or more children, etc. The additional rules allow members to inherit selected properties of their families. The predicate prop(frame, attribute, value) notes that property attribute of frame has value value.

family: address, childset, mother, father
child: childset, family

∀ ( child(C) ∧ fam(F) ∧ prop(C, family, F) ∧ prop(F, A, V) → prop(C, A, V) )
∀ ( fam(F) ∧ prop(F, mother, M) ∧ prop(F, addr, ADDR) → prop(M, addr, ADDR) )
etc.

Fig. 2. Codification of Murdock’s theory


The laws operationalize the theory by making direct predictions about recorded values. For example, assume the child database included address information. We may then note a correlation between a child’s address and that of their parents.

∀( child(C) ∧ mom(M) ∧ fam(F) ∧ prop(C, mom, M) ∧ prop(M, fam, F) ∧ prop(F, addr, A) → prop(C, addr, A) )

∀( child(C) ∧ dad(P) ∧ fam(F) ∧ prop(C, dad, P) ∧ prop(P, fam, F) ∧ prop(F, addr, A) → prop(C, addr, A) )

Fig. 3. Codification of potential Murdock laws (atoms mother, father and family have been abbreviated as mom, dad and fam)
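The mother-address law of figure 3 can be sketched as a simple rule over a toy prop() database. The frames, names and address values below are hypothetical:

```python
from typing import Optional

# Toy database of prop(frame, attribute, value) triples, keyed by
# (frame, attribute). All names and values here are invented.
props = {
    ("fam1", "mother"): "mary",
    ("fam1", "addr"): "12 Oak St",
    ("child1", "mom"): "mary",
    ("mary", "fam"): "fam1",
}

def predict_child_addr(child: str) -> Optional[str]:
    """child(C) & mom(M) & fam(F) & prop(C,mom,M) & prop(M,fam,F)
    & prop(F,addr,A) -> prop(C,addr,A); None stands for FAILURE."""
    mom = props.get((child, "mom"))
    fam = props.get((mom, "fam")) if mom else None
    return props.get((fam, "addr")) if fam else None

addr = predict_child_addr("child1")  # -> "12 Oak St"
```

Returning None here plays the role of the knowledge base's FAILURE output: no prediction, so the datum pays its full prior-probability cost.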

The competing sociological model is due to Adams [1]. After examining Latin American and some ethnic societies, Adams concluded that the evidence for the nuclear family as described by Murdock was “marginal at best” [14]. Instead he proposed the mother-child dyad as the primary unit. This new model is created by removing the father attribute, or merely disallowing its use in proofs. We also delete the father law of figure 3 from the law set.

We bound the parameters by considering two unacceptable models at opposite extremes. The first is the “data” model. It uses neither theory nor laws to predict values. It merely reflects the prior probability of any one value. The second is the “theory” model. It explicitly memorizes each value individually as a statement in the theory. It has neither general statements nor laws, and overfits the data.

Table 1 gives the sizes of each component of each total model. Both Murdock’s and Adams’ models must memorize adult addresses. Adams’ model must also memorize those of children who live with their fathers but not their mothers. The law sentences in figure 3 logically follow from the theory, so they have size 0. Unfortunately, the zero size prevents this experiment from constraining the B parameter.

Table 1. Sizes of sociological models

Model     Abbr   Theory     Law     Data
data      d           0       0   107637
Adams     a         240       0    79582
Murdock   m         480       0    77739
theory    t      960960       0        0
Adams’    A         240   23429    77739
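With the sizes of table 1 (the law and data entries are already in bits), the phase-diagram winner at a given (B/A, C/A) point reduces to a minimization of equation 7. This sketch fixes A = 1:

```python
# Component sizes from Table 1: (theory bits s(T), law bits, data bits).
models = {
    "data":    (0,      0,     107637),
    "Adams":   (240,    0,     79582),
    "Murdock": (480,    0,     77739),
    "theory":  (960960, 0,     0),
    "Adams'":  (240,    23429, 77739),
}

def winner(B_over_A: float, C_over_A: float) -> str:
    """Equation 7 with A = 1: h = s(T) + (B/A)*lawbits + (C/A)*databits."""
    def h(sizes):
        s_t, law, data = sizes
        return s_t + B_over_A * law + C_over_A * data
    return min(models, key=lambda name: h(models[name]))

w_equal = winner(1.0, 1.0)    # Murdock's model wins at equal weights
w_low_c = winner(1.0, 1e-4)   # with a tiny data weight, the bare "data" model wins
```

Sweeping B/A and C/A over a grid of such calls and recording the winning abbreviation at each point is exactly what the phase diagrams of figure 4 display.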


Figure 4a gives the “phase diagram” plot of the data. Where a model outscores all others, its abbreviating letter appears in the parameter space. log2(C/A) is plotted on the X axis and log2(B/A) on the Y axis.

Fig. 4. Sociology model “phase diagrams”

To place bounds on B we consider adding the father sentence to Adams’ law set. However, we cannot prove it from our theory. Therefore, we accept father in the model as a free variable with its (data-specified) prior probabilities. This results in a model with predictive power equivalent to Murdock’s. It can now predict the addresses of children living with only their fathers. The price we pay is the Shannon information cost of the prior probability of each usage of the prop(Child, father, Father) predicate for these predictions. See Adams’ in table 1. The revised “phase diagram” with Adams’ new model is plotted in figure 4b.

Geophysics Data

We obtained data from the United States Geological Survey’s National Earthquake Information Center. We retrieved all recorded earthquakes in the catalog in a rectangular box from 139E to 162E and from 41N to 55N from 1976 to 2000. The Kuril subduction zone, the Japanese island of Hokkaido, and the Kuril island chain are the most prominent geophysical features in this area. Non-tectonic events were removed and the remaining ones were fit to a great circle. This great circle was taken to be the “length” of the fault, and events greater than 512 km from it were removed. The time, distance-along-fault, (signed) distance-from-fault and depth of the remaining 11031 events were entered into our earthquake database.

Geophysics Model

In the theory of plate tectonics, a subduction zone is a region where one (oceanic) plate sinks beneath another (continental) plate. A Wadati-Benioff zone is the seismically active portion of this interface [2][23].


A Wadati-Benioff zone may be modeled as a plane that increases in depth the further one goes into the continental plate. We did so by stating the assertions of figure 5 in the theory, where the slope and intercept were found by a least-squares fit.

DistFromFault = slope × depth + intercept

inherit(kuril_quakes, slope, 1.05682).
inherit(kuril_quakes, intercept, -85.9936 km).

Fig. 5. The theory of the planar Wadati-Benioff zone model
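The slope and intercept of figure 5 came from a least-squares fit to the event database. A self-contained sketch on synthetic, noise-free events (the real fit used the 11031 catalog events):

```python
# Least-squares fit behind Fig. 5: DistFromFault = slope * depth + intercept.
# The depths and coefficients below are synthetic, for illustration only.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic "events": depth (km) -> distance from fault (km).
depths = [10.0, 50.0, 100.0, 200.0, 400.0]
dists = [1.05 * d - 86.0 for d in depths]  # noise-free for the sketch
slope, intercept = fit_line(depths, dists)
```

On real catalog data the residuals from this fit are what the data term of equation 7 charges for, via the DistinctValDiff cost of equation 8.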

The law set was left empty. As before, the “data” model did not try to predict, and the “theory” model overfit by memorization. The results are given in table 2 and are plotted in figure 6a.

Table 2. Sizes of geophysical models

Model       Abbr   Theory      Law    Data
data        d           0        0   97750
planar      p         618        0   63904
theory      t     1369230        0    9775
aftershock  a         618    13759   63103

The non-zero entry for the theory model’s data size is due to round-off error. That is, there is a slight difference between the decimal recording of the values in the logical assertions that comprise the theory (which have a fixed number of significant digits given by the precision of the values) and the binary recording of the values in the database.

Fig. 6. Geophysics model “phase diagrams”

To place bounds on B we add a law to the planar model. When a particular aftershock labeling procedure is used, there is an average distance of 43.5 km between


an aftershock and its mainshock. Encoding this as a law permits better predictions of some distances. We include no theory to predict aftershocks, only an empirical procedure for labeling them after the fact. Therefore, we let mainshock be a free variable. The aftershock model results are given in Table 2 and in Figure 6B. We now evaluate our heuristic with the criteria in section 4. Recall, they were (1) generality, (2) ease of computation, (3) simplicity of form, and (4) smoothness. The function is general because it was applied to symbolic sociology and numeric geophysics with equal ease, and because it has been applied to a domain where predictions have varying degrees of accuracy. Its ease of computation is limited by the ability to predict data, prove (or argue for) laws, and know data distributions. Also, its weighted sum form is simple. The function’s “smoothness,” its ability to give similar models similar scores, is limited by how honest people are with the law set. When some condition is true over the whole parameter space one could move it from theory to laws to avoid paying the syntactic cost. This is against the philosophy of this approach. Also, trying to estimate data distributions when there is little data may be a serious problem. Distributions may be used as “fudge factors” to vary a model’s score on the B/A axis. However, a potential advantage is that it will force such assumptions to be explicitly stated. We do not argue for one particular ratio for C/A or B/A. Rather, we seek a method for calibration. That said, we note that both geophysics and sociological had similar C/A bounds. Having B be too great may lead to “overfitting” the laws to the theory and ruling out yet unknown secondary effects. For discovery it may be best to fix A and C and let B vary as the model becomes more refined. This is another study. Note that this was truly a test of scientific rediscovery. Both the sociology and the geophysics theories were applied to new data. 
Neither Adams nor Murdock were trying to fit U.S. demographics for 1998. Benioff stated his hypothesis after examining events from S. America and Hindu-Kish, not the Kurils. (Wadati probably had data for Honshu, not the Kurils.)

6 Conclusion

Scientists have different opinions on what the same data entails. To ignore that is to ignore the history of science. We have developed a heuristic function that takes some of these differences into account, and may be calibrated to a particular scientist, along our given axes. This heuristic function is a generalization of single-model-family parameter-finding MML. It generalizes MML in a principled fashion to consider how much faith to put in laws versus data. Our approach also extends [23] to be applied to scientific discovery. It is general and has been applied to both symbolic and numeric scientific models.


We do not claim to have solved the whole scientific model preferencing problem. Serious limitations remain, including (1) the specification of the original model prior probability, (2) the inhomogeneity of the search space, and (3) the fact that the "most probable" model is not necessarily the best one. The purpose of this heuristic is to help scientists identify interesting regions in the model space, i.e., models that are the immediate neighbors of their favorite models in the B/A-C/A plots. This is an initial step of an iterative process. Computer scientists might believe that a heuristic function could not sufficiently constrain search in a domain as rich as scientific discovery. However, the heuristic function is only part of the search algorithm. The search algorithm may employ rules to suggest when to apply scientific operators (e.g., [11]), or may use metalearning to discover which operators are best in a particular domain. Preliminary results from rediscovery in geophysics show that rules and metalearning may be combined or employed separately to significantly speed scientific discovery [20].

Acknowledgments. I thank my geophysicist Larry Ruff for his patience, my former advisors John Laird and Nandit Soparkar, and the National Physical Science Consortium and the Rackham Merit Fellowship for funding.

References

1. Adams, R.N. 1960. An inquiry into the nature of the family. p. 30-49 in Dole, G. and Carneiro, R.L. (eds.), Essays in the Science of Culture: In Honor of Leslie A. White. Thomas Y. Crowell, New York.
2. Benioff, H. 1948. Earthquakes and rock creep. Geol. Soc. Am. Bull., 59, p. 1391.
3. Buchanan, B., Phillips, J. 2001. Towards a computational model of hypothesis formation and model building in science. In Model Based Reasoning: Scientific Discovery, Technological Innovation, Values. Kluwer.
4. Casper, L., Bryson, K. 1998. Current Population Reports: Population Characteristics: Household and Family Characteristics. March 1998 (Update). United States Census Bureau.
5. Cheeseman, P. 1995. On Bayesian model selection. In Wolpert, D. (ed.) The Mathematics of Generalization: Proceedings of the SFI/CNLS Workshop on Formal Approaches to Supervised Learning. Addison-Wesley, Reading, MA.
6. Forbus, K. 1985. Qualitative process theory. In Bobrow, D. (ed.) Qualitative Reasoning about Physical Systems. MIT Press, Cambridge, Mass.
7. Fuller, S. 1993. Philosophy of Science and its Discontents, Second Edition. Guilford Press, New York.

Towards a Method of Searching a Diverse Theory Space for Scientific Discovery

8. Georgeff, M.P. and Wallace, C.S. 1984. A general selection criterion for inductive inference. In Proceedings of the European Conference on Artificial Intelligence, p. 473-482. Elsevier, Amsterdam.
9. Korf, R.E. 1988. Search: A survey of recent results. In Shrobe, H.E. (ed.), Exploring Artificial Intelligence: Survey Talks from the National Conferences on Artificial Intelligence, p. 197-237. Morgan Kaufmann.
10. Kuhn, T. 1962. The Structure of Scientific Revolutions. University of Chicago Press, Chicago.
11. Kulkarni, D. and Simon, H. 1988. The processes of scientific discovery: the strategy of experimentation. Cognitive Science, 12, p. 139-175.
12. Lakatos, I. 1970. Falsification and the methodology of scientific research programmes. In Lakatos, I. and Musgrave, A. (eds.) Criticism and the Growth of Knowledge. Cambridge University Press, Cambridge.
13. Lakatos, I. 1971. History of science and its rational reconstructions. In Buck, R.C. and Cohen, R.S. (eds.) Boston Studies in the Philosophy of Science, vol. 8, p. 91-135. Reidel, Dordrecht.
14. Lee, G. 1977. Family Structure and Interaction: A Comparative Analysis. J.B. Lippincott, Philadelphia.
15. McAllister, J. 1996. Beauty and Revolution in Science. Cornell University Press, Ithaca.
16. Michalewicz, Z., Fogel, D. 2000. How to Solve It: Modern Heuristics. Springer-Verlag, Berlin.
17. Murdock, G.P. 1949. Social Structure. The Free Press, New York.
18. Nordhausen, B., Langley, P. 1987. Towards an integrated discovery system. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence. Morgan Kaufmann, Milan, Italy.
19. Nordhausen, B., Langley, P. 1990. An integrated approach to empirical discovery. In Shrager, J. and Langley, P. (eds.) Computational Models of Scientific Discovery and Theory Formation. Morgan Kaufmann, San Mateo.
20. Phillips, J. 2000. Representation Reducing Heuristics for Semi-Automated Scientific Discovery. Ph.D. Thesis, University of Michigan.
21. Rissanen, J. 1978. Modeling by shortest data description. Automatica, 14, p. 465-471.
22. Sleep, N., Fujita, K. 1997. Principles of Geophysics. Blackwell Science, Malden.
23. Thagard, P. 1988. Computational Philosophy of Science. MIT Press, Cambridge, MA.
24. Wallace, C.S. and Freeman, P.R. 1987. Estimation and inference by compact encoding. J. Roy. Stat. Soc., Series B, 49, p. 233-265.


25. Valdes-Perez, R. 1995. Machine discovery in chemistry: new results. Artificial Intelligence, 74(1), p. 191-201.
26. Zembowicz, R. and Zytkow, J. 1996. From contingency tables to various forms of knowledge in databases. In Fayyad, U. et al. (eds.) Advances in Knowledge Discovery and Data Mining. AAAI Press, San Mateo.
27. Zytkow, J. and Zembowicz, R. 1993. Database exploration in the search for regularities. J. Intelligent Information Systems, 2:39-81.

Divide and Conquer Machine Learning for a Genomics Analogy Problem (Progress Report)

Ming Ouyang (1), John Case (2), and Joan Burnside (3)

(1) Environmental and Occupational Health Sciences Institute, UMDNJ – Robert Wood Johnson Medical School and Rutgers, The State University of New Jersey, Piscataway, NJ 08854 USA, ouyang@fidelio.rutgers.edu
(2) Department of CIS, University of Delaware, Newark, DE 19716 USA, case@cis.udel.edu
(3) Department of Animal & Food Sciences, University of Delaware, Newark, DE 19716 USA, joan@udel.edu

Abstract. Genomic strings are not of fixed length, but provide one-dimensional spatial data that do not divide for conquering by machine learning into manageable fixed-size chunks obeying Dietterich's independent and identically distributed assumption. We nonetheless need to divide genomic strings for conquering by machine learning — in this case for genomic prediction. Orthologs are genomic strings derived from a common ancestor and having the same biological function. Ortholog detection is biologically interesting since it informs us about protein divergence through evolution, and, in the present context, also has important agricultural applications. The present paper indicates a means of obtaining an associated (fixed-size) attribute vector for genomic string data and of dividing and conquering the machine learning problem of ortholog detection, herein seen as an analogy problem. The attributes are based on both the typical string similarity measures of bioinformatics and on a large number of differential metrics, many new to bioinformatics. Many of the differential metrics are based on evolutionary considerations, both theoretical and empirically observed, in some cases observed by the authors. C5.0 with AdaBoosting activated was employed, and the preliminary results reported herein, regarding complete cDNA strings, are very encouraging for eventually and usefully employing the techniques described for ortholog detection on the more readily available EST (incomplete) genomic data.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 290–303, 2001. c Springer-Verlag Berlin Heidelberg 2001

1 Introduction

Genomic strings are strings of one of two types: nucleotide strings and amino acid strings. Nucleotide strings are what genes are, and they code for amino acid strings which are proteins. We can model each as strings of letters where the letters are standard names for the nucleotides or the amino acids. For machine learning (1) purposes, it is not practical to process genomic strings as fixed-size vectors (of letters). However, genomic strings can be thought of as one-dimensional spatial structures. (2) Dietterich [Die00] discusses in detail the problem for machine learning of employing divide and conquer on spatial and temporal data which can't be practically completely represented as fixed-size vectors. Of course such data can be divided into manageable fixed-size chunks. He notes, though, that divide and conquer is problematic if the data fails to satisfy the independent and identically distributed (iid) assumption. As we will see below, the problem discussed in this paper does not satisfy this assumption, and this paper provides, then, among other things, a case study of how in our problem domain we circumvent the difficulty.

In GenBank (a major repository of genomic information) there are many human and mouse (mammal) genomic sequences with known associated functions; there are some but fewer (food animal) chicken sequences with known associated functions. Poultry is the third largest agricultural commodity, and the main meat consumed in the U.S. (3) Control of disease in these birds is important for both agricultural economics and human health. The identification of candidate genes for disease resistance, or the development of immune enhancers to make vaccines more effective or even obsolete, are among the more contemporary approaches to disease control in this important food animal. However, gene sequence information for birds is currently too limited.
Fortunately, as just noted above, some information is available, so there is some basis for training a machine learning procedure. Orthologs are (genomic) sequences which are from different species but which have common descent and the same function. Crucially, in a number of cases one can locate and compare human, mouse, and chicken orthologs. We've been concerned, then, with an analogy problem: find/exploit patterns in the known orthologs between human, mouse, and chicken and apply those patterns to human and mouse orthologs X, Y with known function, but whose chicken ortholog Z is unknown, to detect the unknown Z. To find patterns between relatively closely related species, e.g., human and mouse, it has sufficed to use known local-alignment-based similarity tools such as BLAST (and variants) [AGM+ 90,AMS+ 97,KA90,Pea95] which are based on string similarity only. They find "locally maximal segment pairs." This similarity

Footnotes:
1. Machine learning [Mit97,RN95] involves algorithmic techniques for fitting programs to data and for outputting the programs fit for subsequent use in predicting future data. A program so fit to data is said to be learned.
2. Amino acid sequences fold into 3-D structures, but that, for us, will be taken into account in future work. See Section 6 below.
3. http://www.usda.gov/news/pubs/fbook98/ch1a.htm


matching does not suﬃce for highly divergent orthologs (e.g., some of the orthologs between mammals and birds) since the regions of similarity are too fragmented. For example, Figure 1 depicts an optimal global amino acid sequence alignment between chicken and mouse IL-2 orthologs4 (with chicken shown on top). The corresponding nucleotide sequence alignment is also very fragmented (data not shown). The same degree of fragmentation is seen comparing chicken and human IL-2 (data not shown). When searching chicken IL-2 against GenBank, BLAST and variants do not and cannot ﬁnd any locally maximal segment pairs in mammals which have statistical signiﬁcance. This problem is not just for IL-2. More generally, it follows from [RYW+ 00] and recent news releases from Celera that more than 25% of orthologs are not identiﬁed by commonly used (local-alignment-based similarity) tools. ---------------MMCKVLIFGCISVATLMTTAYGASLSSAKRKPLQTLIKDL-EIL------ENIKNKIH | | || | | | | MYSMQLASCVTLTLVLLVNSAPTSSSTSSSTAEAQQQQQQQQQQQQHLEQLLMDLQELLSRMENYRNLKLPRM LEL--YTPTETQECTQQTLQCY------LGEVVTLKKETEDDTEIKEEFVTAIQNIEKNLKSLTGLNHTGSEC | | | | ||| | | | | | | | || | || LTFKFYLPKQATE--LKDLQCLEDELGPLRHVLDLTQSKSFQLEDAENFISNIRVTVVKLK---G-SDNTFEC KICEANNKKKFPDFLHELTNFVRYLQK---||| | QF--DDESATVVDFLRRWIAFCQSIISTSPQ Fig. 1. Optimal Global Amino Acid Alignment Between Chicken and Mouse IL-2

In the analysis of analogy problems from both cognitive psychology [Ste88] and artificial intelligence [Eva68,RN95], we see that both similarities and differences need to be taken into account. For example, here are a couple of string analogy problems from Hofstadter. (These problems are based on alphabetical order, though, not genomics.)

abc → abd, ijk → ?
abc → abg, iijjkk → ?

We see that taking into account both string similarities and differences is a necessary part of solving these problems. Other projects have employed differential metrics to some degree and to good effect. The tools for intron-exon recognition (not what we are doing in the present study), GRAIL [GME+ 92] and GENSCAN [BK97], employ differential metrics (and there is a similarity metric implicit, for example, in the potential function in GRAIL). A codon is comprised of a contiguous triple of nucleotides

Footnotes:
4. IL-2 is interleukin 2, an immune system protein.
5. Exons contain the coding portions of genes.


chick-human AA identity 25.54:
:...chick-mouse NA identity 57.45: no
:   chick-human NA length/(# gaps) 49.5:
:...chick-human NA length/(# gaps) 103.7143: no
:   chick-mouse NA length/(# gaps) 25.59: no
:   chick T to mouse C 118.0588:
[Rest omitted]

Fig. 2. First Tree Output By C5.0 — With Portion Omitted

{A, C, G, T}, and 61 of these triples each code for a single corresponding amino acid. Diﬀerential metrics can be based on so-called codon bias [SM82,SCH+ 88, Li97]. Most of the 20 amino acids are encoded by more than one codon; codon bias is, then, the quantiﬁable phenomenon that an organism uses one particular codon for an amino acid signiﬁcantly more often than all the other synonymous codons. [SG94] provides an improvement of BLAST with a measure of codon bias as a diﬀerential metric. In the present project we employ codon bias as one class of diﬀerential metrics or attributes: we count, for each of the 61 codons, how many times it occurs in the orthologs. In our project for (chicken) ortholog detection, we have devised a number of other diﬀerential metrics also to complement standard similarity metrics for genomic sequences. These measures of similarity and diﬀerences provide our attributes (or features) for machine learning and constitute, in many cases, a useful division into parts of the original problem about 1-D strings, a division towards conquering the problem. As noted above, this division yields cases where the iid assumption fails. Instead, the co-evolution of mammal and bird orthologs from common ancestor strings involves whole interdependent string patterns coming out partly diﬀerently and partly similarly.
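The codon-count attributes just described (one count per coding codon) can be sketched as follows; the toy sequence and the in-frame reading from position 0 are our own assumptions for illustration:

```python
from collections import Counter
from itertools import product

# The 64 codons minus the 3 stop codons leave the 61 coding codons
# mentioned in the text.
STOP = {"TAA", "TAG", "TGA"}
CODING_CODONS = ["".join(t) for t in product("ACGT", repeat=3)
                 if "".join(t) not in STOP]

def codon_counts(seq):
    """Count, for each of the 61 coding codons, its occurrences in a
    nucleotide string read in frame from position 0."""
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    counts = Counter(c for c in codons if c in CODING_CODONS)
    return [counts[c] for c in CODING_CODONS]  # fixed-size attribute vector

vec = codon_counts("ATGGCTGCTAAA")  # hypothetical toy sequence
print(len(vec), sum(vec))  # 61 4
```

Note how a variable-length string is thereby reduced to a fixed-size vector, which is the division step the machine learner needs.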

2 Attributes Based on Similarities and Differences

We mentioned codon bias for diﬀerential metrics above. A straightforward evaluator of similarity is simple percent identity. Studies ([Li97], Chapter 1) have shown with accompanying simple biochemical explanation that, when mutations occur, the nucleotides A and G tend to change to G and A, respectively, and C and T tend to change to T and C, respectively;


these are called transitions. The other 8 substitutions, between the group of {A, G} and the group of {C, T}, are called transversions, and they occur less frequently than transitions. Insertions and deletions of nucleotides are thought to occur rarely; however, when they do occur, several adjacent nucleotides may be involved [BO98]. Therefore, another common way to evaluate the quality of an alignment is to assign a high score to identity matches, a medium score to transitions, a low score to transversions, a large penalty to opening a gap, and a small penalty to extending a gap. Our Table 1 is such a scoring scheme.

Table 1. A Nucleotide Sequence Alignment Scoring Scheme.

From\To   A   C   G   T
A         4   1   2   1
C         1   4   1   2
G         2   1   4   1
T         1   2   1   4

Gap Opening: -5   Gap Extension: -2

Amino acid sequences are what cells translate nucleotide (gene) sequences into (to form proteins). When amino acid sequences are aligned, the scoring matrix is a 20 by 20 table because there are 20 amino acids; some commonly used matrices for amino acid sequence alignment include the PAM and BLOSUM families of matrices (see [BO98] and the references therein).

The Needleman-Wunsch algorithm [NW70] finds an optimal global alignment of two sequences. Optimal global alignment has thus far been mostly used in comprehensive studies of orthologs, as in [MB98], where orthology has already been established and researchers want to extract additional information from the aligned sequences. Global alignment involves some increased complexity costs over local alignment schemes, but we've seen, for our applications reported herein, that this increased cost is not prohibitive; furthermore, we have begun using the more efficient variant of Needleman-Wunsch from [Got82]. When we apply (the improved variant of) Needleman-Wunsch to obtain global alignment values for similarity attributes, for nucleotide alignment we apply the scoring scheme in Table 1, and, for amino acid alignment, we apply the scoring scheme from the PAM250 matrix. Needleman-Wunsch and its improvement calculate a global alignment optimal in the sense that no other alignment yields a higher score, global in the sense that the entire lengths of the sequences are taken into consideration. For our similarity attributes we employ both the Nucleotide Alignment (NA) scores and the Amino acid Alignment (AA) scores — comparing chicken with each of mouse and human. (6) These scores are given as percent identities.

Footnotes:
6. Applying attribute values for both chicken-mouse and chicken-human comparisons improves performance over just employing comparisons between chicken and one of these mammals.
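A minimal sketch of Needleman-Wunsch global alignment scoring with the Table 1 substitution scores; for brevity it uses a flat per-position gap penalty rather than the affine opening/extension penalties of Table 1 (and of the Gotoh variant [Got82] that the paper uses):

```python
# Substitution scores from Table 1: identity 4, transition 2, transversion 1.
SCORE = {
    ("A", "A"): 4, ("C", "C"): 4, ("G", "G"): 4, ("T", "T"): 4,
    ("A", "G"): 2, ("G", "A"): 2, ("C", "T"): 2, ("T", "C"): 2,
}
GAP = -5  # simplifying assumption: every gap position costs -5

def nw_score(s, t):
    """Optimal global alignment score, O(len(s) * len(t)) time."""
    m, n = len(s), len(t)
    prev = [j * GAP for j in range(n + 1)]
    for i in range(1, m + 1):
        cur = [i * GAP] + [0] * n
        for j in range(1, n + 1):
            match = prev[j - 1] + SCORE.get((s[i - 1], t[j - 1]), 1)
            cur[j] = max(match, prev[j] + GAP, cur[j - 1] + GAP)
        prev = cur
    return prev[n]

print(nw_score("ACGT", "ACGT"))  # 16: four identity matches at 4 each
```

The full affine-gap version keeps three DP matrices (match, gap-in-s, gap-in-t) so that opening and extending a gap can be charged differently.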


We noticed an intriguing and biologically significant empirical pattern comparing NA and AA for our current full data set of 213 complete orthologs between chicken-mouse-human. In Table 2 this pattern (among other things) is displayed for a representative sample of 20 of our orthologs. Table 2 is shown sorted in the column of chicken-mouse nucleotide alignment. For the top portion of the table (as sorted), chicken-mouse NA percent identity is larger than that of AA, but for the bottom portion of the table, the ordering of the two numbers becomes reversed. The likely biological/biochemical explanation appears to be: in the top portion we see the effects of mutations in the third and redundant position in codons [Li97]; in the bottom portion we see critically preserved amino acids; and in the middle some combination of each.

We have employed, then, the values of (NA − AA) and NA/AA as attributes measuring the degree to which nucleotide and amino acid alignments differ. From the NA and AA alignments themselves we calculate lengths, numbers of gaps, and their average lengths. We then combine these numbers importantly in various ratios to provide differential attributes. The present paper reports on our progress with ortholog prediction for complete cDNA sequences. In the future we plan to apply our methods to ESTs (incomplete sequence data), and making these attributes ratios is one way that, on average, the incompleteness of the ESTs will not bias our attributes compared with their values on complete sequences.

We also employ as attributes the percentages of conservations of the four nucleotides, the percentages of transitions, and the percentages of transversions. From above, transversions are those nucleotide mutations (e.g., from C or T to A) that are less likely to occur biochemically. Table 2 also displays, for the 20 representative orthologs (out of our 213), transversion bias percents between mouse and chicken.
We’ve based a number of additional attributes on various measures suggested by the biologically quite interesting transversion bias trends seen in this table and in the table of all our 213 orthologs. E.g., we have various useful attributes measuring deviation from the boldfaced columns for transversion bias. We illustrate these attributes with the example of a particular chicken sequence compared to its mouse ortholog. For such a sequence comparison (corresponding to the ﬁrst four columns of a single row of a table like Table 2) there are four transversion percentages: (t1 , t2 , t3 , t4 ) = (% of {C,T} → A, % of {A,G} → C, % of {C,T} → G, % of {A,G} → T).

(1)

We treat these four numbers as coordinates of a four-dimensional point. The general pattern (quite similar to that of the boldface pattern in Table 2): for transversions from chicken to mouse, a point will usually have a larger/largest second-coordinate than its other coordinates; hence, the points will reside in a restricted sub-region in the space. Since the distribution of these points is not known, we could use the distribution-free, scaling and rotation invariant measure called simplicial depth [BF84,Liu90,LS93,CO01] to measure how near a point is to the center of the cluster of points. We have experimented, to good eﬀect

Table 2. Transversion Bias and Comparative Alignments

Protein                          Chicken to mouse        Mouse to chicken       % identity
                                 CT→A AG→C CT→G AG→T     CT→A AG→C CT→G AG→T    NA    AA
frizzled 7                        2.2  8.2  6.9  2.9      1.7  8.6  7.7  2.0   81.8  87.4
transforming growth factor β 3    6.0  5.7  4.0  1.9      2.9  6.7  5.3  2.6   81.1  87.1
nicotinic Ach receptor α 1        3.1  6.6  6.1  2.5      5.1  6.5  3.2  3.5   79.4  84.3
growth hormone                    4.0  9.1  6.4  5.0      5.0  7.1  8.3  3.9   74.8  73.1
VEGF                              5.9  8.2  3.5  1.6      5.7  3.9  6.1  3.9   74.7  73.4
PDGF receptor α                   4.7  7.0  5.8  3.2      7.0  3.9  4.9  5.2   74.3  79.3
estrogen receptor                 3.2  9.7  6.1  3.3      8.7  4.2  4.7  4.8   73.9  78.3
PDGF α                            5.7  8.2  6.1  2.4      7.9  4.0  5.2  5.6   72.2  76.7
FSH receptor                      4.9  7.9  4.9  6.1      8.1  5.8  4.4  5.2   71.5  71.6
fibroblast growth factor 2        4.9  9.7  8.8  5.5      8.0  5.3  9.0  7.0   70.0  66.0
thyrotropin β                     5.2  8.2  6.2  8.2      7.8  4.8  6.9  8.0   69.8  65.4
growth hormone receptor           9.2  7.0  5.4  7.1      9.8  6.0  6.0  7.0   66.6  56.9
insulin-like growth factor I      6.6 11.4  6.6  5.0     11.5  6.2  5.8  6.2   64.1  62.9
prolactin                         9.5  9.6  7.2  8.7     10.0  8.1  7.4  9.3   62.2  50.8
β 2 microglobulin                17.8 18.3 11.7 11.7      7.7 22.9 22.1  6.7   54.7  42.9
prolactin receptor                8.8 10.1  6.7  8.5     14.1  6.6  7.7  6.5   54.6  42.8
interleukin 1 β                  18.8 11.6 11.2 13.2     11.6 21.0 14.2  7.8   51.3  31.7
interleukin 18                   14.6 13.3 11.7 11.3     15.1 10.4 14.3 11.4   51.2  31.8
interleukin 15                   10.8 14.8  8.8 13.7     23.3  6.6  9.8  9.9   49.6  33.8
interleukin 2                    19.3 11.7 13.9 16.7     24.9  9.0 10.4 17.5   42.5  19.9

Left: Transversion bias %s (the largest number in each row is boldfaced).
Right: Nucleotide (NA) vs. Amino acid (AA) sequence Alignments as % identities. Rows sorted by NA.

(Section 5), with easily computed, one-dimensional projections of the full, more difficult to compute simplicial depth: we see not only that t2 tends to be the largest of the four, but, when t2 is not the largest, that t1 tends to be. We use as one-dimensional projections the following formulas for additional differential attributes:

t2 / minimum(t1, t2, t3, t4)   and   t1 / minimum(t1, t2, t3, t4)

The first we call a major transversion bias, the second a minor. Similar (but not the same) formulas are used for the transversion biases from mouse to chicken, and for those between human and chicken. Relatively large values in these differential measures indicate conformation to the typical transversion bias patterns. Lastly we employ some simple protein class information [AKF+ 95] (7) (see also [TSB00]) for attributes.

Footnotes:
7. http://www.tigr.org/docs/tigr-scripts/egad_scripts/role_report.spl
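The major and minor transversion bias attributes defined above can be sketched as follows; the function name is ours, and the sample values are the interleukin 2 chicken-to-mouse row of Table 2, a case where t1 rather than t2 happens to be the largest:

```python
# One-dimensional projections of the four transversion percentages
# (t1, t2, t3, t4), as defined in the text.
def transversion_biases(t1, t2, t3, t4):
    smallest = min(t1, t2, t3, t4)
    major = t2 / smallest  # t2 = % of {A,G} -> C, usually the largest
    minor = t1 / smallest  # t1 = % of {C,T} -> A, often largest otherwise
    return major, minor

# Interleukin 2, chicken to mouse (Table 2): t1 = 19.3 is the largest here.
major, minor = transversion_biases(19.3, 11.7, 13.9, 16.7)
print(round(major, 3), round(minor, 3))  # 1.0 1.65
```

Large values of either ratio indicate conformance to a typical transversion bias pattern, which is the cheap stand-in for the full simplicial depth computation.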

3 How We Obtain Negative Data for Classification

For the classification of genomic sequences as orthologous or not, we want to supply for training data both positive and negative instances. Our positive data come from our 213 known orthologs. We employ two groups of negative data. The first group is of the form

( human protein Y, mouse protein Y, chicken protein X ),    (2)

where
– X and Y are in our collection,
– X and Y are not orthologs, and
– the two differences in lengths between chicken and each mammal protein are less than 30% of the length of the mammalian protein, and at least one of the amino acid global alignment identities between chick X and human Y or between chick X and mouse Y is greater than or equal to 13% (the 30% and 13% figures may be adjusted in the future as appears necessary, etc.).

For our 213 orthologs, there are 1043 data points in the first group. This first group corresponds to the type of negative data points on which we would want to test a decision program output by a machine learning technique. The second group is of the two forms

( human protein X, mouse protein Y, chicken protein X )    (3)

and

( human protein Y, mouse protein X, chicken protein X ),    (4)

and the constraints on the proteins are the same as in the ﬁrst group. For the 213 orthologs, there are 2086 data points in the second group. The use of this second group considerably improves performance of decision programs output by the machine learning technique described in the next section.
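The construction of the first group of negative triples can be sketched as follows; the data structures, the stub identity function, and the toy sequences are hypothetical illustrations, not the paper's 213-ortholog data:

```python
# Sketch of generating form-(2) negatives (human Y, mouse Y, chicken X),
# X != Y, under the length and identity filters described in the text.
def first_group_negatives(orthologs, aa_identity):
    """orthologs: dict name -> (human_seq, mouse_seq, chicken_seq)."""
    negatives = []
    for x, (_, _, chick_x) in orthologs.items():
        for y, (human_y, mouse_y, _) in orthologs.items():
            if x == y:
                continue  # X and Y must be non-orthologous pairs
            ok_len = all(abs(len(chick_x) - len(m)) < 0.30 * len(m)
                         for m in (human_y, mouse_y))
            ok_id = (aa_identity(chick_x, human_y) >= 13.0 or
                     aa_identity(chick_x, mouse_y) >= 13.0)
            if ok_len and ok_id:
                negatives.append((human_y, mouse_y, chick_x))
    return negatives

# Toy demonstration: geneB's chicken sequence is too long to pair with
# geneA's mammal sequences, so only one negative triple survives.
toy = {"geneA": ("HHHH", "MMMM", "CCCC"),
       "geneB": ("HHHHH", "MMMMM", "CCCCCCCCCC")}
stub_identity = lambda a, b: 50.0  # stub: always passes the 13% threshold
neg = first_group_negatives(toy, stub_identity)
print(len(neg))  # 1
```

The filters matter: without them, most random mismatched triples would be trivially easy negatives and the learner would face only uninformative contrasts.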

4 Machine Learning Techniques Employed

We employ as our machine learning technique Quinlan’s C5.0 which combines his C4.5 for decision tree induction [Qui93,RN95,Mit97] with the option for AdaBoosting [FS97,FS99]. Decision tree induction involves the ﬁtting of simple decision trees with unary-predicate tests to classiﬁcation data. C5.0 (and C4.5) employ an information-theoretic heuristic so that decisions at the top of a tree ﬁtted explain more data than decisions further down. This provides both eﬃciency in ﬁtting and some readability of the resultant trees for insight — the reasons the decision tree induction component was chosen for the project. AdaBoost is an important technique for improving learners both for ﬁtting training data [FS96] and for generalization and prediction beyond the training data [FSBL98] (see also [FMS01]). It also handles well the presence of errors in the training data


[FS96]. AdaBoosting, as employed in C5.0, takes a weighted majority vote of the decisions of a sequence of decision trees, where each tree, beyond the first, judiciously concentrates on the cases difficult for its predecessor. (8) Since AdaBoosting combines a number of decision trees, its use may involve some tolerable loss of readability and efficiency. However, AdaBoosting nonetheless looks like linear (i.e., fast) programming [FS99]. The features of AdaBoosting just mentioned are why it was chosen for the project.

Other methods might have been chosen. Reported in [MST94] is a major series of studies and domains comparing machine learning techniques (including decision tree induction and neural net learning) and classical statistical techniques. Decision tree induction was generally robust over the domains studied, including when compared to statistics. Again, though, it had the advantage that its products are readable for insight. Of course, each technique compared had its especially good domains. In [BB98], for example, we see many bioinformatics problems tackled with either neural nets or statistical techniques (but not decision tree induction with AdaBoosting). Neural nets and statistical methods tend not to produce classification programs readable for insight. We do note that the Morgan system in [SDFH98] does employ decision tree induction — to simplify otherwise complex dynamic programming for doing similarity matching for intron-exon recognition in vertebrates. (9) We also see that [AMS+ 93] employs a decision tree induction which automatically selects string patterns from a given table and produces a decision program which tests input data against the table to predict transmembrane domains from protein data. Support vector machines [Vap95,Vap98] and neural nets can, in effect, cut up the attribute space in ways that decision trees do not.
For example, in some cases, there can be advantages in decision tree induction to suitably rotate the attribute space; however, AdaBoosting (more than) makes up for any such advantage [Qui97], and, in eﬀect, cuts up the attribute space very ﬁnely [Qui98]. Furthermore, support vector machines involve quadratic (i.e., slower) programming [FS99].
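The weighted majority vote described above can be sketched generically; this is textbook AdaBoost-style voting with the standard alpha weighting, not Quinlan's C5.0 implementation:

```python
import math

# Each tree votes +1 (orthologous) or -1, weighted by
# alpha = 0.5 * ln((1 - err) / err), so more accurate trees carry more weight.
def alpha(err):
    return 0.5 * math.log((1.0 - err) / err)

def ensemble_vote(tree_decisions, tree_errors):
    """tree_decisions: +1 or -1 per tree; tree_errors: weighted error rates."""
    total = sum(d * alpha(e) for d, e in zip(tree_decisions, tree_errors))
    return "yes" if total > 0 else "no"

# Two weak trees say "no" but the most accurate tree says "yes" and wins.
print(ensemble_vote([+1, -1, -1], [0.05, 0.40, 0.45]))  # yes
```

This illustrates the footnote's point that voting weights are bigger for more accurate trees: a single 5%-error tree outvotes two near-chance trees combined.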

5 Results

When we run C5.0 with AdaBoost activated on our 213 orthologs (and associated negative data) we get ensembles of decision trees with an average of about 35 decision nodes per tree. These trees are humanly readable. The attributes tested in ensembles of trees based on all 213 orthologs involve most of our current attributes. The decisions made by such an ensemble with only three trees (10) make no mistakes on all the positive and negative data points generated by the

Footnotes:
8. Importantly, the voting weights are bigger for more accurate trees in the sequence of trees.
9. In the present project we are working only with exons or portions thereof.
10. Recall from Section 4 above that the ensemble of trees obtained from AdaBoosting makes its decisions by a judiciously weighted majority vote among the decisions of its constituent trees — even more usefully subtle decision making than that of any single tree.

Divide and Conquer Machine Learning for a Genomics Analogy Problem


213 orthologs. More importantly, though, we employed 10-fold cross-validation (i.e., a random tenth of the data is removed from training and employed instead for testing) with 10 repetitions and obtained, with a boosting ensemble size of 25 trees, a low error rate of 2.4% (with standard error less than 0.05%) on the entire data set for all 213 orthologs. Furthermore, for each of the 213 ways of removing one ortholog of the 213, we also tried training on the remaining 212 (with their associated negative data points) and testing the ensembles obtained from C5.0 with AdaBoost activated on the missing ortholog and the (also missing) negative data points associated with it. In 95% of the cases in which the ortholog omitted from the training data was chosen from the important protein class of cell/organism defense (which includes the immune system enhancers we are especially interested in)11, ensembles with only four trees performed perfectly on all the positive and negative cases, including those for the ortholog omitted. On our 213 orthologs and associated negative data, the first decision tree produced by C5.0 (with a portion omitted to save space) is shown in Figure 2. The tree should be read essentially as an if-then-else program, with nesting indicated by indentation. The decision yes is for orthologous and no is for non-orthologous. From vertical position in the tree we see, for example, that the top test is of an amino acid percent identity (chick-human AA identity).

If a page is pointed to by many good hubs, its authority weight is increased:

xp = Sum(yq) {q such that q->p}

where q->p indicates that q links to p. In the same manner, if a page points to many good authorities, its hub weight is increased:

yp = Sum(xq) {q such that p->q}

HITS is a simple algorithm based solely on hyperlink information, except for the acquisition of a root set, and its behavior has been analyzed by several researchers. HITS tends to generalize topics that are not sufficiently broad, a phenomenon called topic generalization [5]. There are several works that distill the topics of Web communities by using this phenomenon [1][3]. Another point that should be mentioned is that HITS sometimes outputs hubs and authorities which are irrelevant to the input topic. When a good hub page of a community contains hyperlinks pointing to pages on several topics, pointed-to pages irrelevant to the input topic may acquire a large authority weight and be regarded as authoritative pages of the community. This phenomenon is called topic drift [6]. Another phenomenon is inadvertent topic hijacking [4], which occurs when a base set contains a number of Web pages from the same Web site. Since such pages often contain hyperlinks pointing to the same URL (for example, the top page of the site), the authority weight of irrelevant pages may be increased. Several attempts have been made to avoid such phenomena, such as using anchor texts and giving weights to hyperlinks [3], and pruning irrelevant pages from the base set prior to the calculation of authority/hub weights [1]. However, the fundamental cause of such undesirable behavior of the HITS algorithm lies in the generation of the base set. In the HITS algorithm, base sets are generated by collecting the neighboring pages of a root set, which is acquired from the results of a keyword-based search engine. The algorithm is based on the assumption that many good authority/hub pages are included in a base set generated in this way. However, this assumption is not always true. Since HITS focuses its attention on the pages of the

A Method for Discovering Purified Web Communities


base set in the process of ranking, its results are heavily dependent on the quality of the base set. On the other hand, Murata's Web community discovery method [11] acquires data during the process of discovery. The goal of the method is to find a complete bipartite graph composed of centers (informative pages) and fans (pages containing hyperlinks to centers), and data acquired from a search engine and Web servers are used to renew the members of the centers and fans iteratively. Since the quality of the data can be improved by data acquisition during discovery, the method is expected to avoid the weakness from which HITS suffers. This paper proposes a new method for the purification of Web communities, which is a modified version of the above method. The members of fans and centers are changed iteratively by a kind of majority vote of each other. In this manner, the members of the communities are purified so that representative fans and centers are acquired.
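For concreteness, the HITS update rules quoted above can be run as a simple iteration. The link data below is an illustrative toy base set, not data from the paper:

```python
# Minimal HITS iteration on a toy base set. Authority update: x_p = sum of y_q
# over pages q with q -> p; hub update: y_p = sum of x_q over pages q with
# p -> q. Weights are L2-normalized after each round, as in Kleinberg's scheme.
import math

links = {  # illustrative link graph: page -> pages it links to
    "a": ["c", "d"],
    "b": ["c", "d"],
    "c": ["d"],
    "d": [],
}
pages = sorted(links)
auth = {p: 1.0 for p in pages}
hub = {p: 1.0 for p in pages}

for _ in range(50):
    new_auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
    new_hub = {p: sum(new_auth[q] for q in links[p]) for p in pages}
    for vec in (new_auth, new_hub):
        norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
        for p in pages:
            vec[p] /= norm
    auth, hub = new_auth, new_hub

best_authority = max(pages, key=lambda p: auth[p])
print(best_authority)  # "d": linked to by every other page
```

Note that nothing here consults page content, which is exactly why topic drift and hijacking can occur.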

3 A Method for Purifying Web Communities

A method for discovering Web communities [11], which is the basis of our new method in this paper, is explained first. The method consists of the following three steps:

1. Search for fans using a search engine
2. Addition of a new URL to the centers
3. Repetition of step 1 and step 2

Figure 1 shows the steps of the community discovery method. In the method, some input URLs are accepted as initial centers, and fans which co-refer to all of the centers are searched for. As shown in the figure, fans are found from the centers by backlink search on a search engine. The next step is to add a new URL to the centers based on the hyperlinks included in the acquired fans. The fans' HTML files are acquired through the Internet, and all the hyperlinks contained in the files are extracted. The hyperlinks are sorted in order of frequency. Since hyperlinks to related Web pages often co-occur, the top-ranking hyperlink is expected to point to a page whose contents are closely related to the centers. Therefore, the URL of that page is added as a new member of the centers. In the method for purifying Web communities, which is newly proposed in this paper, the above steps of renewing fans and centers are modified in the following way:

• If there are few fans which co-refer to all the members of the centers, one of the members of the centers is randomly removed, and the corresponding fans are then searched for by backlink search, so that the number of fans will exceed a certain threshold.
• After all the hyperlinks contained in the fans' HTML files are extracted, they are sorted in order of frequency. Then a few URLs of high-ranking hyperlinks are added to the centers, and the same number of low-ranking URLs that were members of the previous centers are removed from the centers. This means that the centers are updated according to the references of the corresponding fans. The number of additions/removals of URLs is limited to at most half of the number of centers, so that the topic of the centers will not change drastically.
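The modified renewal steps can be sketched as follows. The function `backlink_search`, the toy link data, and the thresholds are stand-ins for a real search-engine query (illustrative only), and the sketch simplifies the cap on center turnover by simply keeping the top-ranking URLs:

```python
import random
from collections import Counter

# Hypothetical tiny Web: page -> URLs it links to (illustrative data only).
WEB_LINKS = {
    "fan1": ["hub.com", "a.com", "b.com"],
    "fan2": ["hub.com", "a.com", "c.com"],
    "fan3": ["hub.com", "b.com", "noise.com"],
}

def backlink_search(centers):
    """Fans are pages that link to (co-refer) every current center."""
    return [p for p, outs in WEB_LINKS.items() if all(c in outs for c in centers)]

def purify(centers, rounds=3, min_fans=2, rng=random.Random(0)):
    centers = list(centers)
    for _ in range(rounds):
        fans = backlink_search(centers)
        # Too few co-referring fans: randomly drop a center and search again.
        while len(fans) < min_fans and len(centers) > 1:
            centers.remove(rng.choice(centers))
            fans = backlink_search(centers)
        # Majority vote of the fans: keep the most frequently referenced URLs
        # as the new centers (the paper caps the turnover at half the centers;
        # this sketch simply takes the top-ranking URLs).
        counts = Counter(url for f in fans for url in WEB_LINKS[f])
        centers = [url for url, _ in counts.most_common(len(centers))]
    return centers

print(purify(["a.com", "noise.com"]))
```

Even when one of the two initial centers is irrelevant ("noise.com"), the loop converges on the URL that most fans co-refer to.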

Fig. 1. A method for discovering Web communities

With these modifications, the following effects are expected:
• Even if some irrelevant pages are contained in the centers, the quality of the fans will not deteriorate, since pages that refer to most of the centers (not all of them) are searched for and regarded as fans.
• Since URLs that are linked to by many of the fans are considered to be representative of the topic of the Web community, replacing the members of the centers with high-ranking URLs is expected to improve the quality of the centers.

4 Experiments

Based on the above method, the author has built a system for purifying Web communities. As input to the system, the bottom five URLs listed in each topic of 100hot.com (http://www.100hot.com/) are given. These URLs are regarded as the initial centers of a community, which is then purified by the system so that higher-ranking URLs are collected as the members of the final centers. Average rankings of the centers for each topic before/after purification are shown in Table 1. The table shows that higher-ranking centers are acquired for some of the topics, such as Macintosh, Election, and Music. The reasons the system performs well for these topics are as follows:


1. The topics of these communities are more focused than others. In many cases, focused communities have representative pages that are referred to by most of the community members.
2. The graphs of these communities are densely connected. This enables purification from hyperlink information alone.

Table 1. Average ranking of centers for each topic

Besides these topics on which our system performs well, there are some other topics for which the system outputs rather unexpected results. For example, the inputs and outputs for the topic Magazine are as follows:
• Inputs: chemweek.com, mysterynet.com, cosmomag.com, playbill.com, si.edu
• Outputs: washingtonpost.com, nytimes.com, usatoday.com, latimes.com, wsj.com
This result shows that the topic of the centers has shifted from Magazine to Newspaper, and it also shows the closeness of the communities of these two topics. Another example is the community for the topic Travel:
• Inputs: smarterliving.com, sheraton.com, ebookers.com, qixo.com, hotel.com
• Outputs: hilton.com, hyatt.com, sheraton.com, marriott.com, holiday-inn.com
In general, when a target topic is so broad that it contains many subtopics, there are several representative pages for the topic. In this example, since many hotel sites are included in the input URLs, the topic of the community is narrowed to hotels in the process of purification. Both HITS and our method are based on graph structures that are extracted locally from the vast Web network. Since our method acquires Web data during the process of purification and renews the members of the communities iteratively, it is expected that the method performs well even when the members of the initial communities are not representative.


T. Murata

5 Concluding Remarks

This paper proposes a new method for purifying Web communities based on the graph structure of hyperlinks. Results from a system developed on the basis of our method are also shown. The following points should be mentioned for "purifying" our future research targets:
• The method proposed in this paper can be regarded as a method for searching for a complete bipartite subgraph, corresponding to a community, included in a graph. Although the effectiveness of our method depends heavily on the graph structure of the target communities, the typical graph structure of Web communities is not clear. We have to study further the models of such structures that fit actual Web communities well.
• There is no standard test data set for evaluating Web mining systems. The above experimentation rests on the assumption that the URLs listed in each topic of 100hot.com are ranked in order of relevance to the topic. However, this assumption is not always true: in the ranking used for our experimentation, the top-ranking URL for the topic car is Microsoft.com! In order to evaluate the performance of our system objectively, some kind of standard test data set for Web mining is really needed.

References

1. K. Bharat, M. Henzinger: "Improved Algorithms for Topic Distillation in a Hyperlinked Environment", Proc. of the 21st Int'l ACM SIGIR Conf., pp. 104-111, 1998.
2. A. Broder et al.: "Graph Structure in the Web", Proc. of the WWW9 Conference, 2000.
3. S. Chakrabarti et al.: "Experiments in Topic Distillation", Proc. of the ACM SIGIR Workshop on Hypertext Information Retrieval on the Web, 1998.
4. S. Chakrabarti et al.: "Mining the Web's Link Structure", IEEE Computer, Vol. 32, No. 8, pp. 60-67, 1999.
5. D. Gibson, J. Kleinberg, P. Raghavan: "Inferring Web Communities from Link Topology", Proc. of the 9th Conf. on Hypertext and Hypermedia, 1998.
6. M. Henzinger: "Hyperlink Analysis for the Web", IEEE Internet Computing, Vol. 5, No. 1, pp. 45-50, 2001.
7. J. Kleinberg et al.: "The Web as a Graph: Measurements, Models, and Methods", Proc. of COCOON '99, LNCS 1627, pp. 1-17, Springer, 1999.
8. R. Kosala, H. Blockeel: "Web Mining Research: A Survey", ACM SIGKDD Explorations, Vol. 2, No. 1, pp. 1-15, 2000.
9. R. Kumar et al.: "Trawling the Web for Emerging Cyber-Communities", Proc. of the 8th WWW Conference, 1999.
10. T. Murata: "Machine Discovery Based on the Co-occurrence of References in a Search Engine", Proc. of DS99, LNAI 1721, pp. 220-229, Springer, 1999.


11. T. Murata: "Discovery of Web Communities Based on the Co-occurrence of References", Proc. of DS2000, LNAI 1967, pp. 65-75, Springer, 2000.
12. L. Page et al.: "The PageRank Citation Ranking: Bringing Order to the Web", online manuscript, http://www-db.stanford.edu/~backrub/pageranksub.ps, 1998.

KeyWorld: Extracting Keywords from a Document as a Small World

Yutaka Matsuo (1,2), Yukio Ohsawa (2,3), and Mitsuru Ishizuka (1)

1 University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo 113-8656, Japan, matsuo@miv.t.u-tokyo.ac.jp, http://www.miv.t.u-tokyo.ac.jp/~matsuo/
2 Japan Science and Technology Corporation, Tsutsujigaoka 2-2-11, Miyagino-ku, Sendai, Miyagi 983-0852, Japan
3 University of Tsukuba, Otsuka 3-29-1, Bunkyo-ku, Tokyo 113-0012, Japan

Abstract. The small world topology is known to be widespread in biological, social, and man-made systems. This paper shows that the small world structure also exists in documents such as papers. A document is represented by a network: the nodes represent terms, and the edges represent the co-occurrence of terms. This network is shown to have the characteristics of a small world, i.e., nodes are highly clustered, yet the path length between them is small. Based on this topology, we develop an indexing system called KeyWorld, which extracts important terms by measuring their contribution to the graph being a small world.

1 Introduction

Graphs that occur in many biological, social, and man-made systems are often neither completely regular nor completely random, but instead have a "small world" topology in which nodes are highly clustered, yet the path length between them is small [11][10]. For instance, if you are introduced to someone at a party in a small world, you can usually find a short chain of mutual acquaintances that connects the two of you. In the 1960s, Stanley Milgram's pioneering work on the small world problem showed that any two randomly chosen individuals in the United States are linked by a chain of six or fewer first-name acquaintances, known as "six degrees of separation" [5]. Watts and Strogatz have shown that a social graph (the collaboration graph of actors in feature films), a biological graph (the neural network of the nematode worm C. elegans), and a man-made graph (the electrical power grid of the western United States) all have a small world topology [11][10]. The World Wide Web also forms a small world network [1].

In the context of document indexing, an innovative algorithm called KeyGraph [6] was developed, which utilizes the structure of the document. A document is represented as a graph; each node corresponds to a term,1 and each edge corresponds to the co-occurrence of terms. Based on the segmentation of this graph into clusters, KeyGraph finds keywords by selecting terms which co-occur in multiple clusters. Recently, KeyGraph has been applied to several domains, from earthquake sequences [7] to register transaction data of retail stores, and has shown remarkable versatility.

In this paper, inspired by both the small world phenomenon and KeyGraph, we develop a new algorithm, called KeyWorld, to find important terms. We show first that the graph derived from a document has small world characteristics. To extract important terms, we find those terms which contribute to the world being small. The contribution is quantitatively measured by the difference in "small-worldliness" with and without the term.

The rest of the paper is organized as follows. In the following section, we first detail the small world topology and show that some documents actually have small world characteristics. Then we explain how to extract important terms in Section 3. We evaluate KeyWorld and suggest further improvements in Section 4. Finally, we discuss future work and conclude the paper.

1 A term is a word or a word sequence.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 271-281, 2001. © Springer-Verlag Berlin Heidelberg 2001

2 Term Co-occurrence Graph and Small World

2.1 Small-Worldliness

We treat an undirected, unweighted, simple, sparse, and connected graph. (We extend this to unconnected graphs in Section 3.) To formalize the notion of a small world, Watts and Strogatz define the clustering coefficient and the characteristic path length [11][10]:

– The characteristic path length, L, is the path length averaged over all pairs of nodes. The path length d(i, j) is the number of edges in the shortest path between nodes i and j.
– The clustering coefficient is a measure of the cliqueness of the local neighborhoods. For a node with k neighbors, at most kC2 = k(k - 1)/2 edges can exist between them. The clustering of a node is the fraction of these allowable edges that occur. The clustering coefficient, C, is the average clustering over all the nodes in the graph.

Watts and Strogatz define a small world graph as one in which L ≥ Lrand (or L ≈ Lrand) and C ≫ Crand, where Lrand and Crand are the characteristic path length and clustering coefficient of a random graph with the same number of nodes and edges. They propose several models of graphs, one of which is called β-Graphs. Starting from a regular graph, they introduce disorder into the graph by randomly rewiring each edge with probability p, as shown in Fig. 1. If p = 0, the graph is completely regular and ordered. If p = 1, the graph is completely random and disordered. Intermediate values of p give graphs that are neither completely regular nor completely disordered. They are small worlds. Walsh defines the proximity ratio

µ = (C/L) / (Crand/Lrand)    (1)

Fig. 1. Random rewiring of a regular ring lattice: from a regular graph (p = 0), through small world graphs, to a random graph (p = 1), with increasing randomness.

Table 1. Characteristic path lengths L, clustering coefficients C, and proximity ratios µ for graphs with a small world topology [9] (studied in [11]).

             L     Lrand   C      Crand    µ
Film actor   3.65  2.99    0.79   0.00027  2396
Power grid   18.7  12.4    0.080  0.005    10.61
C. elegans   2.65  2.55    0.28   0.05     4.755

The graphs are defined as follows. For the film actors, two actors are joined by an edge if they have acted in a film together. For the power grid, nodes represent generators, transformers, and substations, and edges represent high-voltage transmission lines between them. For C. elegans, an edge joins two neurons if they are connected by either a synapse or a gap junction. Because the number of nodes and edges for each graph is different, the magnitudes of L, C, and µ differ.

as the small-worldliness of the graph [9]. As p increases from 0, L drops sharply, since a few long-range edges introduce short cuts into the graph. These short cuts have little effect on C. As a consequence, the proximity ratio µ rises rapidly, and the graph develops a small world topology. As p approaches 1, the neighborhood clustering starts to break down, and the short cuts no longer have a dramatic effect in linking up nodes. C and µ therefore drop, and the graph loses its small world topology. In Table 1, we can see that µ is large in the graphs with a small world topology. In short, small world networks are characterized by the distinctive combination of high clustering with short characteristic path length.

2.2 Term Co-occurrence Graph

A graph is constructed from a document as follows. We first preprocess the document by stemming and removing stop words, as in [8]. We apply an n-gram analysis to count phrase frequencies. Then we regard the title of the document, each section title, and each caption of figures and tables as a sentence, and exclude all the figures, tables, and references. We get a list of sentences, each of which consists of words (or phrases). In other words, we get basket data in which each item is a term, discarding the information about term orderings and document structure.


Table 2. Statistical data on proximity ratios µ for 57 graphs of papers in WWW9.

       L     Lrand   C     Crand   µ
Max.   4.99  3.58    0.38  0.012   22.81
Ave.   5.36  —       0.33  —       15.31
Min.   8.13  2.94    0.31  0.027   4.20

We set fthre = 3. We restrict attention to the giant connected component of the graph, which includes 89% of the nodes on average. We exclude three papers in which the giant connected component covers less than 50% of the nodes. We do not show Lrand and Crand for the average case, because n and k differ depending on the target paper. On average, n = 275 and k = 5.04.
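The quantities L and C reported in Tables 1 and 2 can be computed directly from a graph's adjacency lists; a minimal sketch on a toy graph follows (µ is then obtained by comparing against a random graph with the same n and k):

```python
# Characteristic path length L (BFS over all pairs) and clustering
# coefficient C (fraction of possible edges among each node's neighbors;
# nodes with fewer than two neighbors contribute 0, one common convention).
from collections import deque
from itertools import combinations

adj = {  # toy graph: a triangle plus a pendant node (illustrative only)
    0: {1, 2},
    1: {0, 2},
    2: {0, 1, 3},
    3: {2},
}

def path_lengths(src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def characteristic_path_length():
    nodes = list(adj)
    total = pairs = 0
    for i in nodes:
        d = path_lengths(i)
        for j in nodes:
            if j > i:
                total += d[j]
                pairs += 1
    return total / pairs

def clustering_coefficient():
    cs = []
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        cs.append(links / (k * (k - 1) / 2))
    return sum(cs) / len(cs)

L = characteristic_path_length()
C = clustering_coefficient()
print(L, C)
```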

Then we pick up frequent terms which appear more than a user-given threshold, fthre, times, and fix them as nodes. For every pair of terms, we count the co-occurrence over all sentences, and add an edge if the Jaccard coefficient exceeds a threshold, Jthre.2 The Jaccard coefficient is simply the number of sentences that contain both terms divided by the number of sentences that contain either term. This idea is also used in constructing a referral network from WWW pages [4]. We assume the length of each edge is 1. Table 2 shows statistics on the small-worldliness of 57 graphs, each constructed from a technical paper that appeared at the 9th International World Wide Web Conference (WWW9) [12]. From this result, we can conjecture that these papers certainly have small world structures. However, the small-worldliness varies depending on the paper. One possible reason why a paper has a small world structure is that the author may introduce some concepts step by step (making clusters of related terms), and then try to merge the concepts and build up new ideas (making a 'shortcut' between clusters). The author will keep in mind that the new idea should be steadily connected to the fundamental concepts, but not redundantly. However, as we have seen, the small-worldliness varies from paper to paper. Certainly it depends on the subject, the aim, and the writing style of the paper.
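The graph construction just described (frequent terms as nodes, edges added when the Jaccard coefficient exceeds a threshold) can be sketched as follows; the sample sentences and thresholds are illustrative, not from the paper:

```python
from collections import Counter
from itertools import combinations

# Each "sentence" is a basket of (already stemmed and filtered) terms.
sentences = [
    {"small", "world", "graph"},
    {"small", "world", "topology"},
    {"graph", "node", "edge"},
    {"small", "world", "node"},
]
F_THRE = 2    # keep terms appearing in at least this many sentences
J_THRE = 0.5  # add an edge if the Jaccard coefficient exceeds this

freq = Counter(t for s in sentences for t in s)
nodes = {t for t, n in freq.items() if n >= F_THRE}

edges = set()
for a, b in combinations(sorted(nodes), 2):
    both = sum(1 for s in sentences if a in s and b in s)
    either = sum(1 for s in sentences if a in s or b in s)
    if either and both / either > J_THRE:
        edges.add((a, b))

print(sorted(edges))
```

Only "small" and "world" co-occur often enough relative to their total occurrences to clear the Jaccard threshold here.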

3 Finding Important Terms

3.1 Shortcut and Contractor

Admitting that a document is a small world, how does it benefit us? We try here to estimate the importance of a term, and to pick up important terms, even if they are rare in the document, based on the small world structure. We consider 'important terms' to be the terms which reflect the main topic, the author's idea, and the fundamental concepts of the document.

2 In this paper, we set Jthre so that the number of neighbors, k, is around 4.5 on average.


First we introduce the notions of a shortcut and a contractor, following the definitions in [10].

Definition 1. The range R(i, j) is the length of the shortest path between i and j in the absence of the edge (i, j). If R(i, j) > 2, then the edge (i, j) is called a shortcut.

Applying the notion of "shortcuts" to nodes rather than edges, we get the definition of "contractor."

Definition 2. If two nodes u and w are both elements of the same neighborhood Γ(v), and the shortest path length between them that does not involve any edges adjacent to v, denoted dv(u, w), satisfies dv(u, w) > 2, then v is said to contract u and w, and v is called a contractor.

At first thought, if dv(u, w) is large, the term corresponding to contractor v might be interesting, because it bridges distant notions which rarely appear together. However, such a node sometimes connects nodes far from the center of the graph, i.e., far from the main topic of the document. Below we take into account the whole structure of the graph, calculating the contribution of a node to making the world small. To treat disconnected graphs, we extend the definition of path length (though Watts restricts attention to the giant connected component3 of the graph).

Definition 3. The extended path length d'(i, j) of nodes i and j is defined as follows:

    d'(i, j) = d(i, j),  if i and j are connected,
               wsum,     otherwise,                    (2)

where wsum is a constant, the sum of the widths of all the disconnected subgraphs, and d(i, j) is the length of the shortest path between i and j in a connected graph. If some edges are added to the graph and some parts of the graph get connected, d'(i, j) will not increase, unless the length of an edge is negative. Thus d'(i, j) is an upper bound on the path length even allowing for edges that may be added.

Definition 4. The extended characteristic path length L' is the extended path length averaged over all pairs of nodes.

Definition 5. Lv is the extended path length averaged over all pairs of nodes except node v. LGv is the extended characteristic path length of the graph without node v.

3 A connected component of a graph is a set of nodes such that each node pair has a path. A connected component is called a giant connected component if it contains more than 50% of the nodes in the graph.


Table 3. Frequent terms in this paper.

Term         Frequency
term         39
small        36
world        35
graph        33
small world  27
node         26
document     25
length       20
important    19
paper        18

Table 4. Terms with 10 largest CBv in this paper.

Term            CBv   Frequency
small world     4.38  27
contribution    3.11  11
node            2.98  26
list            2.24  8
author          1.36  7
table           1.10  8
important term  0.80  11
show            0.72  6
structure       0.44  7
KeyWorld        0.44  10

In other words, Lv is the characteristic path length when the node v is regarded as a corridor (i.e., as a set of edges). For example, if v neighbors u, w, and z, then (u, w), (u, z), and (w, z) are considered to be linked. And LGv is the extended characteristic path length assuming the corridor does not exist.

Definition 6. The contribution, CBv, of the node v to making the world small is defined as follows:

    CBv = LGv − Lv    (3)

We do not pay attention to the clustering coefficient, because adding or eliminating one node affects the clustering coefficient little. If a node v with large CBv is absent from the graph, the graph gets very large; in the context of documents, the topics are divided. We assume that such a term helps merge the structure of the document, and is thus important.
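CBv can be computed with two breadth-first-search passes per candidate node: one on the graph with v treated as a corridor, and one with v deleted. A sketch under the definitions above, with wsum simplified to a fixed large constant (the toy graph and the constant are illustrative):

```python
from collections import deque
from itertools import combinations

W_SUM = 100.0  # stand-in for wsum: path length assigned to disconnected pairs

def extended_cpl(adj):
    """Average d'(i, j) over all pairs; disconnected pairs count as W_SUM."""
    nodes = sorted(adj)
    total = pairs = 0.0
    for i in nodes:
        dist = {i: 0}
        q = deque([i])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for j in nodes:
            if j > i:
                total += dist.get(j, W_SUM)
                pairs += 1
    return total / pairs

def without(adj, v, as_corridor):
    """Remove v; if as_corridor, pairwise connect v's former neighbors."""
    new = {u: set(nbrs) - {v} for u, nbrs in adj.items() if u != v}
    if as_corridor:
        for a, b in combinations(adj[v], 2):
            new[a].add(b)
            new[b].add(a)
    return new

def contribution(adj, v):
    # CB_v = L_Gv (v deleted) minus L_v (v treated as a corridor of edges)
    return extended_cpl(without(adj, v, False)) - extended_cpl(without(adj, v, True))

# Toy graph: two triangles bridged only by node "x".
adj = {
    "a": {"b", "c", "x"}, "b": {"a", "c"}, "c": {"a", "b"},
    "d": {"e", "f", "x"}, "e": {"d", "f"}, "f": {"d", "e"},
    "x": {"a", "d"},
}
print(contribution(adj, "x") > contribution(adj, "b"))
```

The bridging node "x" gets a large contribution because deleting it disconnects the two clusters, while a node inside a triangle contributes almost nothing.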


Table 5. Pairs of terms with 10 largest CBe.

Pair                           CBe
node – contribution            2.97
list – table                   1.47
contribution – important term  1.20
table – show                   1.10
contribution – structure       0.87
KeyWorld – list                0.87
important term – develop       0.79
network – show                 0.72
contribution – make            0.47
author – idea                  0.47

3.2 Example

We show an example run on this paper itself, i.e., the one you are reading now.4 Table 3 shows the frequent terms, and Table 4 shows the important terms as measured by CBv. Comparing the two tables, the list of important terms includes the author's ideas, e.g., "important term" and "KeyWorld," as well as important basic concepts, e.g., "structure," although they do not appear frequently. The list of frequent terms, however, simply shows the components of the paper, and is not of interest. We can also measure the contribution of an edge, CBe, to making the world small, defined similarly to CBv. However, if we look at the pairs of terms in Table 5, it is hard to understand what they suggest. There are numbers of possible relations between two terms, so we cannot grasp the relation of a pair right away. Lastly, Fig. 2 shows a graphical visualization of the world of this paper. (Only the giant connected component of the graph is shown, though the other parts of the graph are also used for the calculation.) We can easily point out the terms without which the world would be separated, say "small world" and "contribution".

4 Evaluation and Improvements

This section describes an evaluation of KeyWorld as an indexing system. KeyWorld is not merely an indexing system; it also provides an understandable graphical representation of the document. However, we restrict attention here to the performance of KeyWorld as an indexing tool, to compare it with existing indexing techniques such as tf and tfidf. The tf measure is simply term frequency. The tfidf measure is obtained by using the product of the term frequency and the inverse document frequency [8].5

4 We ignore the effect of self-reference; it is sufficiently small.
5 We use log N/nv as idf, where N is the number of documents in the collection, and nv is the number of documents which include term v.

Fig. 2. Small world of this paper. (The figure shows the term co-occurrence graph of this paper; node labels include "small world," "world topology," "KeyWorld," "document," "important term," "path length," "contribution," "structure," "KeyGraph," "clustering coefficient," "Watts," and the other frequent terms listed in Tables 3 and 4.)

When an author writes a paper, he/she annotates keywords to the paper by selecting the category of the paper (e.g., "text mining"), utilized algorithms (e.g., "small world"), or the proposed method (e.g., "KeyWorld"). The choice depends on the author's criteria. In our definition, a keyword is an important term in the document, which reflects the main topic, the author's idea, and the fundamental concepts of the document. For example, considering this paper, we think "small world," "document," "contribution," "important term," "path length," and "KeyWorld" are keywords, and "node," "make," and "text mining" are not keywords, because they are too trivial or too broad, or do not occur in this document. In the experimentation, we asked the authors of 20 technical papers in the artificial intelligence field to judge, by a questionnaire, whether certain terms in their papers are keywords or not. For each document, we first get the top 15 weighted terms by tf, tfidf,6 KeyGraph, and KeyWorld, i.e., four lists of 15 terms. (We denote the list by method a as lista.) We merge the four lists and shuffle the terms. Then we ask the author whether each term is a keyword or not, after explaining the definition of keywords. Counting the number of authorized terms, we can get the precision of method a as follows:

    precisiona = (Number of authorized terms in lista) / (Number of terms in lista)    (4)

6 As a corpus, we used 166 papers in the Journal of Artificial Intelligence Research, from Vol. 1 in 1993 to Vol. 14 in 2001.


Table 6. Precision and coverage.

            tf    KeyWorld  tfidf  KeyWorld+idf
precision   0.53  0.49      0.55   0.71
coverage    0.48  0.50      0.62   0.68

Table 7. Terms with 10 largest CBv × idfv in this paper.

Term            CBv × idfv  Frequency
small world     4.57        27
important term  3.82        11
co-occurrence   1.89        4
KeyWorld        1.58        10
short cut       1.56        4
actor           0.89        5
shortest path   0.66        4
sentence        0.66        4
document        0.66        23
path length     0.59        17

Next, from the shuffled list of all terms,7 the authors are asked to pick 5 (or more) terms as indispensable terms, which they think are essential to the document and cover its most important concepts. We calculate the coverage of method a as follows:

    coveragea = (Number of indispensable terms in lista) / (Number of indispensable terms)    (5)
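Formulas (4) and (5) reduce to simple set arithmetic per document; a minimal sketch with hypothetical judgments (the term lists below are made up for illustration):

```python
# Hypothetical judgments for one document: the term list produced by a
# method, the subset the author authorized as keywords, and the author's
# indispensable terms.
list_a = {"small world", "node", "show", "contribution", "table"}
authorized = {"small world", "contribution"}
indispensable = {"small world", "contribution", "path length"}

precision = len(list_a & authorized) / len(list_a)            # Eq. (4)
coverage = len(list_a & indispensable) / len(indispensable)   # Eq. (5)
print(precision, coverage)
```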

The results are shown in Table 6. The performance of KeyWorld alone is not good enough: its precision and coverage are almost equal to those of tf. However, we feel that the term list produced by KeyWorld includes very important terms as well as very dull words, e.g., "show" or "table" in Table 4. To sieve out these dull terms, we develop an improved weighting method, which annotates term v with the weight

    CBv × idfv    (6)

where idfv is the idf measure for term v. The improved results are also shown in Table 6. Both the precision and the coverage are now far better than those of tfidf. Table 7 shows the top 10 terms by KeyWorld with the idf factor for this paper. In summary, KeyWorld can often find important terms; however, it also detects less important terms. By incorporating the idf measure, KeyWorld can be a very good indexing tool.

7 If the author remembers other terms, he/she is permitted to add them to the list.
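The improved weight (6) simply multiplies CBv by the idf factor log N/nv; a sketch with illustrative document frequencies (the CBv values are taken from Table 4, while N matches the paper's corpus size and the document frequencies are assumptions):

```python
import math

N = 166  # corpus size used for idf in the paper (JAIR papers)
doc_freq = {"small world": 3, "show": 120}  # hypothetical document frequencies
cb = {"small world": 4.38, "show": 0.72}    # CB_v values from Table 4

def idf(term):
    return math.log(N / doc_freq[term])

weight = {t: cb[t] * idf(t) for t in cb}
# A term common across the corpus, like "show", is strongly discounted
# relative to a rare, structurally important term like "small world".
print(weight["small world"] > weight["show"])
```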


5 Discussion

The small world phenomenon was inaugurated as an area of experimental study in the social sciences by Stanley Milgram in the 1960s. Since then, numerous studies have been done on network analysis. The importance of weak ties, which are short cuts between clusters of people, was pointed out 30 years ago [3]. Our measure of contribution is similar to "centrality" in the context of social network studies. Centrality can be measured in a number of ways [2]. Considering an actors' social network, the simplest is to count the number of others with whom an actor maintains relations. The actor with the most connections, i.e., the highest degree, is the most central. Another measure is closeness, which calculates the distance from each person in the network to each other person, based on the connections among all members of the network. Central actors are closer to all others than are other actors. A third measure is betweenness, which examines the extent to which an actor is situated between others in the network, i.e., the extent to which information must pass through them to get to others, and thus the extent to which they will be exposed to information circulating in the network. However, our measure of contribution is distinctive in that it calculates the difference in the closeness of all nodes with and without a certain node: it measures a node's contribution to the whole structure by temporarily eliminating the node.

6 Conclusion

Watts mentions in [10] the possible applications of small world research, including "the train of thought followed in a conversation or succession of ideas leading to a scientific breakthrough." In this paper, we have focused on technical papers rather than on conversations or successions of ideas. A future direction of our research is to treat directed or weighted graphs for finer analyses of documents. We expect our approach to be effective not only for document indexing, but also for other graphical representations. Finding structurally important parts may bring us a deeper understanding of the graph, new perspectives, and chances to utilize it. We are interested in large structural changes caused by small changes to the graph. A change which makes the world very small may sometimes be very important.

References

1. R. Albert, H. Jeong, and A.-L. Barabasi. The diameter of the World Wide Web. Nature, 401, 1999.
2. L. C. Freeman. Centrality in social networks: Conceptual clarification. Social Networks, 1:215–239, 1979.
3. M. Granovetter. The strength of weak ties. American Journal of Sociology, 78:1360–1380, 1973.
4. H. Kautz, B. Selman, and M. Shah. The hidden Web. AI Magazine, 18(2), 1997.

KeyWorld: Extracting Keywords from a Document as a Small World


5. S. Milgram. The small-world problem. Psychology Today, 2:60–67, 1967.
6. Y. Ohsawa, N. E. Benson, and M. Yachida. KeyGraph: Automatic indexing by co-occurrence graph based on building construction metaphor. In Proc. Advanced Digital Library Conference (IEEE ADL'98), 1998.
7. Y. Ohsawa and M. Yachida. Discover risky active faults by indexing an earthquake sequence. In Proc. Discovery Science, pages 208–219, 1999.
8. G. Salton. Automatic Text Processing. Addison-Wesley, 1988.
9. T. Walsh. Search in a small world. In Proc. IJCAI-99, pages 1172–1177, 1999.
10. D. Watts. Small Worlds: The Dynamics of Networks between Order and Randomness. Princeton University Press, 1999.
11. D. Watts and S. Strogatz. Collective dynamics of small-world networks. Nature, 393:440–442, 1998.
12. 9th International World Wide Web Conference. http://www9.org/.

Knowledge Navigation on Visualizing Complementary Documents

Naohiro Matsumura1,3, Yukio Ohsawa2,3, and Mitsuru Ishizuka1

1 Graduate School of Engineering, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656 Japan, {matumura, ishizuka}@miv.t.u-tokyo.ac.jp
2 Graduate School of Systems Management, University of Tsukuba, 3-29-1 Otsuka, Bunkyo-ku, Tokyo, 112-0012 Japan, osawa@gssm.otsuka.tsukuba.ac.jp
3 Japanese Science and Technology Corporation, 2-2-11 Tsutsujigaoka, Miyagino-ku, Sendai, Miyagi, 983-0852 Japan

Abstract. It is an up-to-date challenge to get answers to novel questions which nobody has ever considered. Such a question is too rare to be satisfied by a single past document. In this paper, we propose a new framework of knowledge navigation that graphically provides multiple documents relevant to a user's question. Our implemented system, named MACLOD, generates several navigational plans, each forming a complementary document-set rather than a single document, for navigating a user toward understanding a novel question. The obtained plans are mapped onto a 2-dimensional interface in which the documents of each document-set are connected with links, in order to support the user in selecting a plan smoothly. In experiments, the method obtained satisfactory answers to users' unique questions.

1 Introduction

It is an up-to-date challenge to answer a user's novel question that nobody has ever asked. However, such a question is too new to be satisfied by a single past document, and the knowledge required for understanding the documents relevant to a user's question depends on his or her background [4]. In our previous work [3], we proposed a novel information retrieval method named combination retrieval for creating novel knowledge by combining complementary documents. Here, a complementary set of documents is a set whose combination supplies satisfactory information. This idea is based on the principle that combining ideas can trigger the creation of new ideas [1,2]. Throughout the discussions of that work, we verified that reading multiple complementary documents generates synergy effects which help us acquire novel knowledge. In this paper, we propose a new framework of knowledge navigation, i.e., supplying a user with new knowledge, for satisfying the information request of a user by visualizing complementary documents.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 258–270, 2001.
© Springer-Verlag Berlin Heidelberg 2001

Our implemented system named

MACLOD (Map of Complementary Links of Documents) generates several navigational plans, each formed by a document-set, for navigating a user toward understanding a novel question, by making use of combination retrieval [3]. The obtained plans are mapped onto a 2-dimensional interface in which the documents of each document-set are connected with links, in order to support the user in selecting complementary documents smoothly. The remainder of this paper is organized as follows: In Section 2, the significance of our approach is shown by comparison with previous knowledge navigation methods. The mechanism of combination retrieval is described in Section 3, and the mechanism of MACLOD as implemented here is described in Section 4. We show the experiments and the results in Section 5, demonstrating the performance of MACLOD on medical counseling question-answer documents.

2 Previous Methods for Knowledge Navigation

The vision of knowledge navigation was shown by John Sculley (then the president of Apple Computer Inc.), in which an electronic secretary in a computer named Knowledge Navigator managed various tasks on behalf of users, e.g., managing schedules. The concept inspired us. However, it is still difficult to realize the Knowledge Navigator because of the complexity of a real secretary's tasks.

A knowledge navigation system is a piece of software which answers a user's question. The question may be entered as a word-set query {alcohol, liver, cancer} or a sentence query "Does alcohol cause liver cancer?" An intelligent answer to this question may be "No, alcohol does not cause liver cancer directly. You may be confusing liver cancer with other liver damage from alcohol. Alcohol causes cancer in other tissues." To give such an answer, the system should have medical knowledge relevant to the user's query, and infer on that knowledge to answer the question. However, it is not realistic to implement such knowledge widely enough to be applied to unique user interests.

Another approach to navigating knowledge is to retrieve ready-made documents relevant to the current query from a prepared document collection. In this way, we can skip the process of knowledge acquisition and implementation, because man-made documents represent complex human knowledge directly. A search engine for a word-set query entered by the user may be the simplest realization of this approach. However, as noted in Section 1, existing information retrieval methods that try to answer a query with ONE of the output documents cannot satisfy novel interests.

3 The Process of Combination Retrieval

Combination retrieval [3] is a method for selecting meaningful documents which, as a set, serve as a good (readable and satisfactory) answer to the user. In this section, we review the algorithm of combination retrieval.

3.1 The Outline of the Process

The process of combination retrieval is as follows:

The Process of Combination Retrieval
Step 1) Accept the user's query Qg.
Step 2) Obtain G, a word-set representing the goal the user wants to understand, from Qg (G = Qg if Qg is given simply as a word-set).
Step 3) Make the knowledge-base Σ for the abduction of Step 4). For each document Dx in the document-collection Cdoc, a Horn clause is made to describe the condition (words that need to be understood for reading Dx) and the effect (words subsequently understood by reading Dx).
Step 4) Obtain h, the optimal hypothesis-set which derives G if combined with Σ, by cost-based abduction (detailed later). The h obtained here represents the union of the following information, with the least size of K:
    S: The document-set the user should read.
    K: The keyword-set the user should understand for reading the documents in S.
Step 5) Show the documents in S to the user.

The intuitive meaning of employing abductive inference is to obtain the conditions for understanding the user's goal G. Here, the conditions include the documents to read (S) for understanding G, and the necessary knowledge (K) for reading those documents. That is, S is the combination of documents to be presented to the user.

3.2 The Details of Combination Retrieval's Process

In preparation, a collection Cdoc of existing human-made documents is stored. Key, the set of keyword-candidates in the documents in Cdoc, i.e., the word-set which is the union of the keywords extracted from all documents in Cdoc, is obtained and fixed. Here, words are stemmed as in [5] and stop words ("does", "is", "a", ...) are deleted, and then a constant number of words with the highest TFIDF values [6] (using Cdoc as the corpus for computing document frequencies of words) are extracted as keywords from each document in Cdoc. Next, let us go into the details of each step in 3.1.

Step 1) to 2) Make goal G from user's query Qg: Goal G is defined as the set of words in Qg ∩ Key, i.e., the keywords in the user's query. For example, the question "does alcohol make me warm?" and the query {alcohol, warm} are both put into the same goal {alcohol, warm}, if Cdoc is a set of past question-answer pairs of a medical counselor which do not have "does", "make", or "me" in Key (deleted as stop words).

Step 3) Make Horn clauses from documents: For the abductive inference in Step 4) of Subsection 3.1, the knowledge-base Σ is formed of Horn clauses. A Horn clause is a clause as in Eq. (1), which means that y becomes true under the condition that all of x1, x2, ..., xn are true, where the variables x1, x2, ..., xn and y are atoms, each of which corresponds to an event occurrence. A Horn clause can simply describe causes (x1, x2, ..., xn) and their effect (y):

    y :- x1, x2, ..., xn.    (1)
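The keyword-candidate extraction described at the start of this subsection (stop-word removal and per-document TFIDF selection into Key) can be sketched as follows, assuming a tiny corpus, a toy stop-word list, and no stemming (the paper uses Porter's stemmer [5] and a fixed number of keywords per document).

```python
import math
from collections import Counter

STOP = {"does", "is", "a", "the", "me", "make", "in", "of"}  # toy stop list

def extract_key(corpus, keep_top=3):
    """Union of the top-TFIDF words of each document, after stop-word removal."""
    docs = [[w for w in text.lower().split() if w not in STOP] for text in corpus]
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))  # document frequencies
    key = set()
    for d in docs:
        tf = Counter(d)
        tfidf = {w: tf[w] * math.log(n / df[w]) for w in tf}
        key |= {w for w, _ in sorted(tfidf.items(), key=lambda kv: -kv[1])[:keep_top]}
    return key

corpus = [
    "alcohol damages the liver",
    "cancer is a disease of the cell",
    "alcohol does make me warm",
]
key = extract_key(corpus)
```

Stop words never enter Key, while content words such as "alcohol", "liver", "cancer", and "warm" survive the TFIDF cut in this toy corpus.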

In combination retrieval, the Horn clause for document Dx describes the cause (reading Dx with enough vocabulary knowledge) and the effect (acquiring new knowledge from Dx) of reading Dx, as:

    α :- β1, β2, ..., βmx, Dx.    (2)

Here, α is the eﬀect term of Dx , which is a term (a word or a phrase) one can understand by reading document Dx . β1 , β2 · · · βmx are the conditional terms of Dx , which should be understood for reading and understanding Dx . That is, one who knows words β1 , β2 · · · βmx and reads Dx on this knowledge is supposed to acquire knowledge about α. The method for taking the eﬀect and the conditional terms from Dx is straight-forward. First, the eﬀect terms α, α2 , · · · are obtained as terms in G ∩ (the keywords of Dx ). This means that the eﬀect of Dx is expected on the user’s interest G, rather than by the intension of the author of Dx . For example, a document about cancer symptoms may work as a description of the demerit of smoking, if the reader is a heavy smoker. Focusing the consideration onto user’s goal in this way also speeds up the response of combination retrieval as in Subsection 5.1. Then, the keywords of Dx other than the eﬀect terms above form the conditional terms β1 , β2 , · · · βmx . As a result, Horn clauses are obtained as α1 :−β1 , β2 , · · · βmx , Dx , α2 :−β1 , β2 , · · · βmx , Dx , .. .

(3)

meaning that one knowing β1, β2, ..., βmx can read Dx and understand all the effect terms α1, α2, ... by reading Dx.

Step 4) Cost-based abduction for obtaining the documents to read: We employ cost-based abduction (CBA hereafter) [7], an inference framework, for obtaining the solution h with the least |K| in Subsection 3.1. In CBA, the causes of a given effect G are explained. Formally, CBA is described as extracting a minimal hypothesis-set h from a given set H of candidate hypotheses, so that h derives G using the knowledge Σ. That is, h satisfies Eq. (4) under Eq. (5) and Eq. (6). We deal with Σ composed of causal rules, expressed in the Horn clauses mentioned above.

    Minimize cost(h),    (4)
    subject to: h ⊂ H,    (5)
    h ∪ Σ ⊢ G.    (6)
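Putting Step 3) into code, the Horn clauses (2)-(3) for a document can be built from its keywords relative to the goal G; here a clause is just a (head, body) pair, and the keyword sets are assumed precomputed.

```python
# Sketch of Eqs. (2)-(3): turning a document's keywords into Horn clauses
# relative to the user's goal G. Keywords per document are assumed given.
def horn_clauses(doc_id, doc_keywords, goal):
    effects = goal & doc_keywords   # terms the user may learn from Dx
    conds = doc_keywords - effects  # terms needed to read Dx
    # Each clause reads: head :- cond_1, ..., cond_m, Dx.
    return [(a, sorted(conds) + [doc_id]) for a in sorted(effects)]

clauses = horn_clauses("D1",
                       {"alcohol", "liver", "cirrhosis", "cell", "disease"},
                       {"alcohol", "liver", "cancer"})
```

For the document D1 of the example in Subsection 3.3, this yields one clause per effect term (alcohol, liver), each conditioned on the remaining keywords plus D1 itself.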

Eq. (4) represents the selection of h to be minimal, i.e., the lowest-cost hypothesis-set h (⊂ H), where the cost, denoted by cost(h), is the sum of the weights of the hypotheses in h. The weights of the hypotheses in H, the candidate elements of the solution h, are given initially. Generally speaking, the weight-values of hypotheses are closely related to the semantics of the problem to which CBA is applied, as exemplified in [8]. In combination retrieval, weights are given differently to the two types of hypotheses in H:

Type 1: Hypothesis that the user reads a document in Cdoc
Type 2: Hypothesis that the user knows (has learned) a conditional term in Key

In assigning weights to hypotheses, we considered that the user should be able to understand the output documents in S while learning only a small set K of keywords from external knowledge other than Cdoc. This is reflected in minimizing |K|, the size of K. That is, the weights of hypotheses of Type 2 are fixed to 1 and those of Type 1 are fixed to 0, and the content of h is S ∪ K.

It might be good to give values between 0 and 1 to hypotheses of Type 2, each value representing the difficulty of learning the corresponding term. However, we do not know how easy each word is for the user to learn from outside of Cdoc. Further, it might seem necessary to give positive weights to hypotheses of Type 1, each value representing the cost of reading the corresponding document. However, this necessity can be discounted because mx in Eq. (3) is proportional to the length of Dx. That is, the user's cost (effort) for reading a document is implied by the number of meaningful keywords s/he should read in the document. If we summed the heterogeneous difficulties, i.e., of reading documents and of learning words, the meaning of the solution cost would become rather confusing.

3.3 An Example of Combination Retrieval's Execution

For example, combination retrieval runs as follows.

Step 1) Qg = "Does alcohol cause a liver cancer?"
Step 2) G is obtained from Qg as {alcohol, liver, cancer}.
Step 3) From Cdoc, documents D1, D2, and D3 are taken, each including terms in G, and put into Horn clauses as:

    alcohol :- cirrhosis, cell, disease, D1.
    liver :- cirrhosis, cell, disease, D1.
    alcohol :- marijuana, drug, health, D2.
    liver :- marijuana, drug, health, D2.
    alcohol :- cell, disease, organ, D3.
    cancer :- cell, disease, organ, D3.

The hypothesis-set H is formed of the conditional parts of D1, D2, and D3 of Type 1, each weighted 0, and "cirrhosis," "cell," "disease," "marijuana," "drug," "health," and "organ" of Type 2, each weighted 1.

Step 4) h is obtained as S ∪ K, where S = {D1, D3} and K = {cirrhosis, cell, disease, organ}, meaning that the user should understand "cirrhosis", "cell", "disease", and "organ" for reading D1 and D3, which serve as the answer to Qg. This solution is selected because cost(h) (i.e., |K|) takes the value 4, less than the 6 of the only alternative feasible solution, i.e., {marijuana, drug, health, cell, disease, organ} plus {D2, D3}.
Step 5) The user now reads the two documents presented as:

D1 (including alcohol and liver), stating that alcohol alters liver function by changing liver cells into cirrhosis.
D3 (including alcohol and cancer), showing the causes of cancer in various organs, including a lot of alcohol. This document recommends drinkers limit themselves to one ounce of pure alcohol per day.

As a result, the subject learns that s/he should limit drinking alcohol to keep the liver healthy and avoid cancer, and also comes to know that tissues other than the liver can get cancer from alcohol. Thus, the user can understand the answer by learning a small number of words from outside of Cdoc, as we aimed in employing CBA. More important than this major effect of combination retrieval, a by-product is that the common hypotheses between D1 and D3, i.e., {cell, disease} of Type 2, are discovered as the context of the user's interest underlying the entered query. This effect is due to CBA, which obtains the smallest number of involved contexts for explaining the goal (i.e., answering the query) as solution hypotheses. Presenting such a novel and meaningful context to the user induces the user to create new knowledge [9], satisfying his/her novel interest.
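The example above can be reproduced by exhaustive cost-based abduction over the three toy documents. This brute-force sketch assumes unit weights for Type-2 hypotheses and zero for Type-1, as in the paper, and is feasible only for toy inputs since CBA is NP-hard in general.

```python
from itertools import combinations

# Toy reconstruction of the Section 3.3 example. Each document maps to its
# conditional terms (Type-2 hypotheses, weight 1) and effect terms.
docs = {
    "D1": {"conds": {"cirrhosis", "cell", "disease"}, "effects": {"alcohol", "liver"}},
    "D2": {"conds": {"marijuana", "drug", "health"},  "effects": {"alcohol", "liver"}},
    "D3": {"conds": {"cell", "disease", "organ"},     "effects": {"alcohol", "cancer"}},
}
goal = {"alcohol", "liver", "cancer"}

def solve(docs, goal):
    """Exhaustive CBA: choose the document-set S whose union of conditional
    terms K is smallest, provided the effects of S cover every goal term."""
    best = None
    for r in range(1, len(docs) + 1):
        for S in combinations(docs, r):
            covered = set().union(*(docs[d]["effects"] for d in S))
            if not goal <= covered:
                continue  # this S cannot derive every goal term
            K = set().union(*(docs[d]["conds"] for d in S))
            cost = len(K)  # Type-1 weight 0, Type-2 weight 1
            if best is None or cost < best[0]:
                best = (cost, set(S), K)
    return best

cost, S, K = solve(docs, goal)
# → cost 4, S = {'D1', 'D3'}, K = {'cirrhosis', 'cell', 'disease', 'organ'}
```

The alternative {D2, D3} covers the goal too, but at cost 6, so the search returns {D1, D3} exactly as in Step 4) above.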

4 MACLOD: Map of Complementary Links of Documents

In combination retrieval, the user is charged with two types of tasks: reading an obtained document-set and understanding the conditional terms of that document-set. However, these tasks are not always easy, since background knowledge differs among individuals. To take such existing knowledge of a user into consideration when generating the document-set for reading, we propose a new framework that navigates a user by graphically providing multiple documents from several document-sets, each giving an answer to his/her interest. The concept of knowledge navigation in Section 2 can be realized in this framework. The implemented system named MACLOD (MAp of Complementary Links Of Documents) visualizes these document-sets (each forming a complementary document-set) to navigate a user toward understanding his/her novel question. The process of MACLOD is as follows:

The Process of MACLOD

Phase 1. Obtain a plan for knowledge navigation: Obtain a plan (document-set S) for the user's query Qg along the procedure of combination retrieval in Section 3. That is, the process is summarized as follows:
Step 1) Accept the user's query Qg.
Step 2) Obtain G, the goal the user wants to understand.
Step 3) Make the knowledge-base Σ for the abduction of Step 4).
Step 4) Obtain h, the optimal hypothesis-set which derives G if combined with Σ, by cost-based abduction.
Step 5) Show the documents obtained in Step 4) to the user.

Phase 2. Iterate Phase 1 to add plans: Iterate Phase 1 to obtain N sets of plans, where inconsistency conditions are added to the knowledge-base Σ of Subsection 3.2 to avoid already obtained plans. The inconsistency condition to be considered in each cycle of Phase 1 is described as

    inc :- Dx1, Dx2, ..., Dxn,    (7)

where Dx1, Dx2, ..., Dxn are the documents obtained in the previous cycle of Phase 1. In addition, a document included in S more than three times is forced not to be included in the next plan. This inconsistency condition, also added into the knowledge-base Σ, is described as

    inc :- Dx1,    (8)

where Dx1 is a document included in S more than three times. The cycles of Phase 1 continue until the number of iterations reaches N. Here, we empirically set N to 10.

Phase 3. Visualize the plans: MACLOD outputs a 2-dimensional interface onto which the plans obtained during the above iterations are mapped. In the interface, documents in a plan obtained by one cycle of Phase 1 are connected with links to each other in order to support the user in selecting appropriate documents.

Phase 4. Knowledge navigation: The user goes on reading documents along the links in the 2-dimensional interface until s/he understands or gives up understanding Qg.
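Phases 1 and 2 can be sketched as the loop below over the toy documents of Section 3.3. The inconsistency conditions (7) and (8) are modeled directly as exclusion tests; the solver is a simplified stand-in for CBA, and N = 10 follows the text.

```python
from itertools import combinations

# Toy documents from Section 3.3: (conditional terms, effect terms).
docs = {
    "D1": ({"cirrhosis", "cell", "disease"}, {"alcohol", "liver"}),
    "D2": ({"marijuana", "drug", "health"},  {"alcohol", "liver"}),
    "D3": ({"cell", "disease", "organ"},     {"alcohol", "cancer"}),
}
goal = {"alcohol", "liver", "cancer"}

def plans(docs, goal, n_plans=10):
    found, banned = [], set()
    for _ in range(n_plans):
        best = None
        for r in range(1, len(docs) + 1):
            for S in combinations(sorted(set(docs) - banned), r):
                if frozenset(S) in {frozenset(p) for p in found}:
                    continue  # inconsistency (7): this plan was already obtained
                if not goal <= set().union(*(docs[d][1] for d in S)):
                    continue  # S cannot cover every goal term
                K = set().union(*(docs[d][0] for d in S))
                if best is None or len(K) < len(best[1]):
                    best = (set(S), K)
        if best is None:
            break  # no further consistent plan exists
        found.append(best[0])
        # inconsistency (8): ban documents appearing in three or more plans
        for d in docs:
            if sum(d in p for p in found) >= 3:
                banned.add(d)
    return found
```

On this corpus the loop produces {D1, D3}, then {D2, D3}, then {D1, D2, D3}, after which D3 is banned and no further plan can cover the goal, so the iteration stops early.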

5 Experimental Evaluations of MACLOD

5.1 The Experimental Conditions

MACLOD is implemented on a Celeron 500MHz machine with 320MB of memory. Although CBA is time-consuming because of its NP-completeness, most answers in the experiments were returned within ten seconds of query entry, thanks to high-speed abduction as in [12]. Queries from the users included 4 or fewer terms in Key, due to which the response time was below 10 sec. This quick response also comes from the goal-oriented construction of Horn clauses shown in Subsection 3.2. The document-collection Cdoc of MACLOD is
1808 question-answer pairs of Alice, a health care question answering service on the WWW (http://www.alice.columbia.edu). A number as small as 1808 documents is a suitable condition for evaluating MACLOD on a sparse document-collection which is insufficient for answering novel queries.

5.2 An Example of MACLOD's Execution

When a user entered a query as a word-set or a sentence, MACLOD obtained the ten plans (document-sets) in Table 1 and showed the 2-dimensional output in Fig. 1. In this case, {alcohol, fat, calorie} was entered as the query Qg for knowing whether the calories in alcohol change into fat.

Table 1. The top 10 plans for the input query {alcohol, fat, calorie}.

Ranking  Plan (document-set)    Cost
1        d1459, d0181           25
2        d1459, d0611           26
3        d1459, d0426           27
4        d1802, d0181           27
5        d0576, d0181           27
6        d1802, d0882, d0611    39
7        d1802, d1100, d0611    39
8        d0746, d0576, d1466    39
9        d1730, d0576, d1466    39
10       d0746, d0331, d1466    41

The process of understanding the user's interest (expressed as Qg) begins by reading the document-set d1459 and d0181 (double-circle nodes in Fig. 1), the top-ranked plan of MACLOD. Their summaries are as follows:

d1459 (including fat and calorie), stating that if calories come up short, protein is burned into energy. The lack of protein delays recovery from distress, or weakens resistance to disease.
d0181 (including alcohol), stating that drinking too much alcohol damages various tissues, especially the liver or the heart.

After reading these two documents, the user's interest was not fully satisfied, since the documents do not mention the causality between the calories in alcohol and fat directly. If this does not satisfy one's interest, the user then begins to select and read other documents linked from the already-read documents to get new information about Qg. MACLOD helps this complementary reading process with a 2-dimensional interface on which a user can grasp the whole set of relations among the documents of the obtained plans. That is, the user can pick another document, complementing the already-read documents, until s/he is satisfied.

Fig. 1. A 2-dimensional interface of MACLOD. Documents are shown as nodes, and complementary documents are connected with links.

The subsequent steps proceed, for example, as follows. In Fig. 1, d0611 and d0426 are linked from d1459, and d1802 and d0576 are linked from d0181. Here, because the user wanted to know the limit on the amount of alcohol to drink, the user was satisfied by reading d0611, which states the adequate quantity of alcohol per day. Also, d0576, stating the ideal quantity of calories per day, satisfied the user further because his potential interest was in dieting. Thus, MACLOD can supply complementary documents step by step according to the user's interests until the user is satisfied.

5.3 The Answering System Compared with MACLOD

We compared the performance of MACLOD with the following typical search engine for question answering. We call this search engine a Vector-based FAQ-finder (VFAQ for short hereafter).

The Procedure of VFAQ
Step 1') Prepare a keyword-vector vx for each question Qx in Cdoc.
Step 2') Obtain the keyword-vector vQ for the current query Qg.

Step 3') Find the top N keyword-vectors prepared in 1'), in decreasing order of the product value vx · vQ, and return their corresponding answers.

Here, a keyword-vector for a query Q is formed as follows: Each vector has |Key| attributes (Key was introduced in 3.2 as the set of keyword-candidates in Cdoc), each taking the TFIDF value [6] in Q of the corresponding keyword. Each vector v is normalized so that |v| = 1. For example, for the query Qg {alcohol, warm} (or a question which is put into G: {alcohol, warm}), the vector comes to be (0, 0.99, 0, ..., 0, 0.14, 0, 0, ...), where 0.99 and 0.14 are the normalized TFIDF values of "alcohol" and "warm" in Qg. Elements of value 0 correspond to terms which are in Key but not included in Qg. Supplying N documents in Step 3') sets a condition similar to MACLOD's so that a fair comparison becomes possible.

5.4 Result Statistics

The experiment was executed with 5 subjects from 21 to 30 years old, so the subjects were of similar age to the past question askers of Alice. A popular method for evaluating the performance of a search engine is to measure recall (the number of relevant documents retrieved, divided by the number of documents relevant to the user's query in Cdoc) and precision (the number of relevant documents retrieved, divided by the number of retrieved documents). However, this traditional manner of evaluation is not appropriate for MACLOD, because it does not output a sheer list of the documents most relevant to the query. In the traditional evaluation, it is regarded as a success if the user is satisfied by reading a few documents ranked highly in the output list. MACLOD, on the other hand, aims at satisfying a user who reads some documents along the pathways, rather than a few best documents. Therefore, this section presents an original way of evaluating MACLOD.

Here, 42 queries were entered. This may seem quite a small number of evaluation data. However, we compromised with this size because we aimed at having each subject evaluate the returned answers in a natural manner. That is, in order to have the subject report whether s/he was really satisfied with the output, the subject had to enter his/her real "rare" interest. Otherwise, the subject would have to imagine an unreal person who asks the rare query and what that person would feel about the returned answers. Therefore, we restricted ourselves to a small number of queries entered from real novel interests.

The overall result is shown in Fig. 2. The horizontal axis gives the number of documents read in series and the vertical axis the number of satisfied queries. According to the subjects, MACLOD did better than VFAQ, especially for novel queries. For x = 1, MACLOD and VFAQ equally satisfied 16 queries. On the other hand, for x = 2, MACLOD satisfied 12 queries, whereas VFAQ satisfied 4. For x = 3, MACLOD satisfied 6 queries, whereas VFAQ satisfied 3. Finally, for x ≥ 4, MACLOD and VFAQ each satisfied 3 queries. Thus, the superiority of MACLOD for x greater than 1 became apparent. In all cases, VFAQ returned redundant documents, i.e., documents of similar contexts, equally relevant to the query. These results can be summarized as follows: novel queries for Cdoc were answered satisfactorily by MACLOD. According to the subjects, the answers in the form of document-combinations visualized by MACLOD were easy to read and browse along the links, and the presented answers were meaningful for the user.

Fig. 2. Statistical results.

5.5 Comparison with Other Methods

Among the rare systems which combine documents for answering a novel query, Hyper Bridges [10] and NaviPlan [11] produce a plan for the user's reading of documents. They present a plan made of sorted multiple documents, and a user who reads them in the order sorted by Hyper Bridges or NaviPlan incrementally refines his/her own knowledge until s/he learns the meaning of the entered query. A plan made by these tools is a serial set of documents, which guides the user to an understanding of the query starting from a beginner's knowledge, in the order presented by the system. As a result, neither NaviPlan nor Hyper Bridges can obtain an appropriate document to be read last, i.e., the document that directly reaches the goal (i.e., answers the query), in all the examples above where multiple documents have to be mixed to answer the query.

On the other hand, combination retrieval and its advanced version MACLOD make a complementary set of documents, each supplementing the content of the others, to give a satisfactory answer as a whole. The user may read documents in an obtained document-set in any order s/he likes. In particular, MACLOD gives the user a more flexible search interface than the original combination retrieval.

Let us show here the merit of MACLOD compared with the previous combination retrieval. In short, the merit is that the user can select documents matching his/her interest, reactively reflecting the context of the documents read already.

A fair extension of combination retrieval to be compared with MACLOD is to have it output as many document-sets as MACLOD obtains. In such an output style, it is difficult to control the context of the documents to read. That is, the order of the sets, sorted on cost, does not always correspond to the user's interest, and often bothers the user by forcing him/her to read the document-sets in an undesired order. In this example, if the user feels d1459 mismatches his/her context, s/he will not reach any satisfactory document-set in the list. Nor does a MACLOD-like output as in Fig. 1 make things better in this case, because d1459 is shared by all the sets. In all trials of obtaining and showing the highly ranked document-sets of combination retrieval, the user was fixed to the context bound by a "central" document such as d1459, whether or not s/he desired this situation. From this problem with combination retrieval, we can point out the two-fold merit of MACLOD.

1. Due to discarding documents that have already appeared many times in the output document-sets (see Section 4), MACLOD can include document-sets of various contexts in the output. This enables the user to choose suitable contexts reactively in the search process.
2. The graphical output makes context control easier, because the links between nodes (documents) represent the complementary relations (i.e., as documents to be read together) between contexts. If the user feels a document is misleading, s/he can open a document linked from the current document without feeling a sudden departure from the current context.

6 Conclusions

Combination retrieval, a method to obtain a set of documents for answering a novel query, has been fully described, and its visual interface MACLOD has been introduced. Combination retrieval presents the user with a set of documents, not a single document, for answering a new query that cannot be answered by one past answer to a past query. The MACLOD interface supplies the user with further comfort in acquiring novel knowledge. MACLOD allows the user to efficiently alter part of the reading plan (i.e., document-set) s/he is currently following, improving his/her satisfaction. This effect works especially well if the interest is novel, i.e., if the context is too particular to be captured by past Q&A's.

References

1. Hadamard, J.: The Psychology of Invention in the Mathematical Field. Princeton University Press, 1945.
2. Swanson, D.R. and Smalheiser, N.R.: An Interactive System for Complementary Literatures: a Stimulus to Scientific Discovery. Artificial Intelligence, Vol. 91, 183–203, 1997.
3. Matsumura, N. and Ohsawa, Y.: Combination Retrieval for Creating Knowledge from Sparse Document Collection. Proc. of Discovery Science, 320–324, 2000.

4. Brookes, B.C.: The Foundations of Information Science. Journal of Information Science, 2, 125–133, 1980.
5. Porter, M.F.: An Algorithm for Suffix Stripping. Automated Library and Information Systems, Vol. 14, No. 3, 130–137, 1980.
6. Salton, G. and Buckley, C.: Term-Weighting Approaches in Automatic Text Retrieval. Readings in Information Retrieval, 323–328, 1998.
7. Charniak, E. and Shimony, S.E.: Probabilistic Semantics for Cost Based Abduction. Proc. of AAAI-90, 106–111, 1990.
8. Ohsawa, Y. and Yachida, M.: An Index Navigator for Understanding and Expressing User's Coherent Interest. Proc. of IJCAI-97, 1: 722–729, 1997.
9. Nonaka, I. and Takeuchi, H.: The Knowledge-Creating Company. Oxford University Press, 1995.
10. Ohsawa, Y., Matsuda, K. and Yachida, M.: Personal and Temporary Hyper Bridges: 2-D Interface for Undefined Topics. J. Computer Networks and ISDN Systems, 30: 669–671, 1998.
11. Yamada, S. and Ohsawa, Y.: Planning to Guide Concept Understanding in the WWW. AAAI-98 Workshop on AI and Data Integration, 121–126, 1998.
12. Ohsawa, Y. and Ishizuka, M.: Networked Bubble Propagation: A Polynomial-time Hypothetical Reasoning Method for Computing Near-optimal Solutions. Artificial Intelligence, Vol. 91, 131–154, 1997.

Learning Conformation Rules

Osamu Maruyama¹, Takayoshi Shoudai², Emiko Furuichi³, Satoru Kuhara⁴, and Satoru Miyano⁵

¹ Faculty of Mathematics, Kyushu University, Fukuoka, 812-8581, Japan, om@math.kyushu-u.ac.jp
² Department of Informatics, Kyushu University
³ Fukuoka Women's Junior College
⁴ Graduate School of Genetic Resources Technology, Kyushu University
⁵ Human Genome Center, Institute of Medical Science, University of Tokyo

Abstract. The protein conformation problem, a hard and important problem, is to identify conformation rules which transform sequences into their tertiary structures, called conformations. The aim of this work is to give a concrete theoretical foundation for a graph-theoretic approach to the protein conformation problem in the framework of a probabilistic learning model. We formulate the conformation problem as a problem of learning from hypergraphs that capture the conformations of proteins in a loose way. We consider several classes of functions based on conformation rules, and show their PAC-learnability. The refutable PAC-learnability of functions is also discussed, which would be helpful when a target function is not in the class of functions under consideration. We also report the conformation rules learned in our preliminary computational experiments.

1 Introduction

A protein is a chain of amino acid residues that folds into a unique native tertiary structure under specific conditions. Biochemical experiments show that an unfolded protein spontaneously refolds into its native structure when those conditions are restored. This is the basis for the hypothesis that the native structure of a protein can be determined from the information contained in the amino acid sequence. Under this hypothesis, various computational methods of predicting protein conformation from sequence have been proposed. Protein conformation is usually analyzed in terms of free energy, under the assumption that the free energy of the native structure of a protein is the global minimum, which is known as the "thermodynamic hypothesis." Many computational methods based on this assumption have been developed. For example, Church and Shalloway [1] developed a top-down search procedure in which conformation space is recursively dissected according to the intrinsic hierarchical structure of a landscape's effective-energy barriers, and König and Dandekar [4] applied genetic algorithms to this problem.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 243–257, 2001.
© Springer-Verlag Berlin Heidelberg 2001

Another interesting heuristic method is the


hydrophobic zipper method by Dill et al. [2]. Based on the fact that many hydrophobic contacts are topologically local, the hydrophobic zipper method randomly generates hydrophobic contacts between residues near enough in the sequence, which serve as constraints forcing other hydrophobic contacts to be zipped up sequentially. Inspired by this hydrophobic zipper method, but apart from the free-energy minimization problem, we introduce a hypergraph representation of the tertiary structure of a protein, together with conformation rules, defined as rewriting rules on hypergraphs. Many simple conformation models in free-energy minimization problems use lattices, which are periodic graphs in two- or three-dimensional space. The conformation of a protein is then a self-avoiding path in the lattice whose nodes are labeled by the amino acids. Thus the hypergraph representation model is a generalization of the lattice model. The degree of a node v of a hypergraph is the number of hyperedges including v, and the rank of a hyperedge e is the number of nodes in e. Because of the spatial constraints on conformations, it is natural to require that both the degrees and the ranks of a hypergraph representing a tertiary structure be bounded by constants, which is helpful in learning conformation rules. We capture the tertiary structure of a protein as a hypergraph in a loose way, from which conformation rules are extracted. Conformation rules are applied repeatedly to a hypergraph, where the initial hypergraph represents an amino acid sequence and is called a chain-hypergraph. The procedure searches for a location in the current hypergraph to which a conformation rule is applicable, proceeding from local toward global as in the hydrophobic zipper method. Thus we can say that our procedure of applying conformation rules to a sequence obeys the "local to global" folding principle, one of the various folding principles proposed so far.
The resulting hypergraph represents the structure of the protein. We then consider the problem of learning conformation rules from hypergraph representations of proteins. A conformation is defined as a function from sequences to hypergraphs. Thus the problem is to learn functions from examples, each of which is a pair of a protein sequence and the corresponding hypergraph representation. The PAC-learning paradigm was extended to functions by Natarajan and Tadepalli [9], and some results on concept learning have been extended to functions [7,8]. This paper makes three contributions. The first is a formulation of conformation rules using hypergraphs, and the second is a polynomial-time PAC-learning algorithm for a class defined by this new concept of conformation rules. The third is a set of results on the refutable PAC-learnability of functions, which would be helpful when a target function is not in the classes of functions we consider. We have implemented the algorithms for learning conformation rules and for applying conformation rules in the Python language [13]. Preliminary computational experiments have been carried out using TIM-barrel proteins whose data files can be downloaded from the site of the Protein Data Bank (PDB) [14]. The results of the experiments are also reported.

2 Preliminaries

A hypergraph H = (V, E) consists of a set V of nodes and a set E of hyperedges, each of which is a nonempty subset of V. In this paper we assume that |e| ≥ 2 for all e ∈ E without further notice. The rank of H is r(H) = max_{e∈E} |e|. For a node v, the degree of v is d_H(v) = |{e ∈ E | v ∈ e}|, and the degree of H is d(H) = max_{v∈V} d_H(v). A chain-hypergraph is a hypergraph H = (V, E) such that V = {1, 2, ..., n} for some n ≥ 1 and each {i, i + 1} is contained in some hyperedge in E for 1 ≤ i ≤ n − 1, i.e., there is e ∈ E with {i, i + 1} ⊆ e. In particular, a chain-hypergraph H = (V, E) is called a rank k linear chain-hypergraph if E = {{i, ..., i + k − 1} | i = 1, ..., n − k + 1}. For a set E of hyperedges, we call simplify(E) = E − {e ∈ E | there is e′ in E with e ⊆ e′ and e ≠ e′} the simplification of E. In this paper we consider hypergraphs H = (V, E) whose nodes are labeled with a mapping ψ : V → ∆, where ∆ is an alphabet. Such a hypergraph is denoted by H = (V, E, ψ) and called a hypergraph over ∆. We identify H = (V, E, ψ) with H = (V, E) when no confusion arises. Let H = (V, E, ψ) and V′ ⊆ V. For convenience, we denote by H(V′) the subhypergraph H̃ = (Ṽ, Ẽ, ψ̃) of H where

– Ẽ = ∪_{v∈V′} {e ∈ E | v ∈ e},
– Ṽ = (∪_{e∈Ẽ} e) ∪ V′,
– ψ̃ = ψ|_Ṽ, that is, the restriction of ψ to Ṽ.
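These definitions translate directly into executable form. The following Python sketch (our own illustration; the names are not from the paper's implementation) checks rank, degree, simplification, and the chain-hypergraph property, with hyperedges represented as frozensets of node numbers:

```python
def rank(E):
    """r(H): size of the largest hyperedge."""
    return max(len(e) for e in E)

def degree(E, v):
    """d_H(v): number of hyperedges containing node v."""
    return sum(1 for e in E if v in e)

def hypergraph_degree(V, E):
    """d(H): maximum degree over all nodes."""
    return max(degree(E, v) for v in V)

def simplify(E):
    """Drop every hyperedge properly contained in another one."""
    return {e for e in E if not any(e < f for f in E)}

def is_chain_hypergraph(n, E):
    """Each consecutive pair {i, i+1} must lie inside some hyperedge."""
    return all(any({i, i + 1} <= e for e in E) for i in range(1, n))

# A rank 3 linear chain-hypergraph on nodes 1..5: {1,2,3}, {2,3,4}, {3,4,5}.
E = {frozenset(range(i, i + 3)) for i in range(1, 4)}
```

For the example E above, rank(E) is 3 and node 3 has degree 3, since it lies in all three hyperedges.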

This subsection reviews some notions and results on the PAC-learnability of a class of functions, following Natarajan [7,8]. For an alphabet Ω, the set of all strings over Ω is denoted by Ω∗. The length of a string x ∈ Ω∗ is denoted by |x|. For n ≥ 1, Ω[n] = {x ∈ Ω∗ | |x| ≤ n}. Here the alphabet Ω is assumed to be finite.

Definition 1 ([7,8]). Let F be a class of functions from a finite set X to a finite set Y. The generalized VC-dimension of F, denoted by D(F), is the maximum over the sizes |Z| of subsets Z ⊆ X such that there exist two functions f and g in F satisfying the following conditions:
1. f(x) ≠ g(x) for all x ∈ Z.
2. For all Z1 ⊆ Z, there exists h ∈ F that agrees with f on Z1 and with g on Z − Z1.

Lemma 1 ([7,8]). Let F be a class of functions from a finite set X to a finite set Y. Then 2^{D(F)} ≤ |F| ≤ |X|^{D(F)} |Y|^{2·D(F)}.

Let f : Ω∗ → Ω∗. For integers n1, n2 ≥ 1, the projection f[n1][n2] of f on Ω[n1] × Ω[n2] is the function f[n1][n2] : Ω[n1] → Ω[n2] defined by f[n1][n2](x) = f(x), provided f(x) is in Ω[n2] for all x in Ω[n1]. If there is some x in Ω[n1] such that f(x) is not in Ω[n2], then f[n1][n2] is undefined. For a class F of functions from Ω∗ to Ω∗, we define F[n1][n2] = {f[n1][n2] | f ∈ F, f[n1][n2] is defined}.
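For very small classes, Definition 1 can be checked by exhaustive search. The following brute-force computation of D(F) is our own illustration (functions are given as dicts on a finite domain); it is exponential and only meant to make the definition concrete:

```python
from itertools import combinations

def gen_vc_dimension(F, X):
    """Brute-force generalized VC-dimension D(F) per Definition 1."""
    best = 0
    for size in range(1, len(X) + 1):
        for Z in combinations(X, size):
            for f in F:
                for g in F:
                    # Condition 1: f and g must disagree on every point of Z.
                    if any(f[x] == g[x] for x in Z):
                        continue
                    # Condition 2: every split Z1 of Z is realized by some h
                    # agreeing with f on Z1 and with g on Z - Z1.
                    ok = all(
                        any(all(h[x] == f[x] for x in Z1) and
                            all(h[x] == g[x] for x in Z if x not in Z1)
                            for h in F)
                        for r in range(size + 1)
                        for Z1 in combinations(Z, r))
                    if ok:
                        best = max(best, size)
    return best

# All four Boolean functions on a 2-point domain: D(F) = 2,
# consistent with the lower bound 2^{D(F)} <= |F| of Lemma 1.
X = (0, 1)
F = [{0: a, 1: b} for a in (0, 1) for b in (0, 1)]
```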


Definition 2 ([7,8]). Let F be a class of functions from Ω∗ to Ω∗ with a representation R. An algorithm A is said to be a polynomial-time fitting for F in representation R if the following conditions hold:
1. A is a polynomial-time algorithm taking as input a finite subset S of Ω∗ × Ω∗.
2. If there exists a function in F that is consistent with S, A outputs a name of such a function in representation R.

We say that F is of polynomial dimension if there is a polynomial p(n1, n2) in n1 and n2 such that D(F[n1][n2]) ≤ p(n1, n2). We say that F is of polynomial expansion if there exists a polynomial q(n) such that |f(x)| ≤ q(|x|) for all f ∈ F and x ∈ Ω∗. The following theorem will be used in Section 5 to prove a result on the PAC-learnability of conformation rules.

Theorem 1 ([7,8]). Let F be a class of functions from Ω∗ to Ω∗ with a representation R. F is polynomial-time PAC-learnable in R if the following hold:
1. F is of polynomial dimension.
2. F is of polynomial expansion.
3. There exists a polynomial-time fitting for F in R.

3 Hypergraph Representation of a Protein

Let P be a protein with primary structure A1A2···An, where Ai represents the ith amino acid residue. Its tertiary structure is usually represented by a sequence of the positions of the amino acid residues in three-dimensional space, (p1, A1), (p2, A2), ..., (pn, An), where pi = (xi, yi, zi) is the position of Ai for 1 ≤ i ≤ n. The distance between pi and pj is denoted by |pi − pj|. Let Σ be the alphabet consisting of symbols representing the amino acid residues. Let µ > 0 be a real number. For a protein P with tertiary structure (p1, A1), (p2, A2), ..., (pn, An), let G^µ_P = (V, E) be the undirected graph defined as follows:
1. V = {1, 2, ..., n}.
2. For any distinct i, j in V with |pi − pj| ≤ µ, {i, j} is in E.

We call the undirected graph G^µ_P = (V, E) the structure graph of P with µ-range. For positive integers k, ω, τ and G^µ_P = (V, E), let E^{µ,k,ω,τ}_{P,complete} be the set of hyperedges e ⊆ V satisfying the following conditions:
– 2 ≤ |e| ≤ k,
– max e − min e + 1 ≥ τ, that is, a restriction on the width of e on the sequence 1, 2, ..., n,
– G^µ_P[e] is a complete graph, where G^µ_P[e] is the subgraph of G^µ_P induced by the nodes of e.

Let

E^ω_{P,backbone} = {{i, i + 1, ..., j} | j = i + ω − 1, 1 ≤ i ≤ n − ω + 1},


and let ψ : V → Σ be the mapping defined by ψ(i) = Ai for 1 ≤ i ≤ n. Then the hypergraph H^{µ,k,ω,τ}_{P,Σ,complete} = (V, E′, ψ) with

E′ = simplify(E^{µ,k,ω,τ}_{P,complete}) ∪ E^ω_{P,backbone}

is a chain-hypergraph over Σ, which is called the hypergraph representation of P over Σ by complete graphs with µ, k, ω and τ.

We say that an undirected graph G = (V, E) with V = {v0, v1, ..., vk} and E = {{v0, vi} | vi ∈ V, vi ≠ v0} is a star graph. Let E^{µ,k,ω,τ}_{P,star} be the set of hyperedges e ⊆ V satisfying the following conditions: 2 ≤ |e| ≤ k, max e − min e + 1 ≥ τ, and G^µ_P[e] is a star graph. Then the hypergraph H^{µ,k,ω,τ}_{P,Σ,star} = (V, E′, ψ) with

E′ = simplify(E^{µ,k,ω,τ}_{P,star}) ∪ E^ω_{P,backbone}

is a chain-hypergraph over Σ, which is called the hypergraph representation of P over Σ by star graphs with µ, k, ω and τ.

Instead of the explicit representation with amino acid residues, it is often useful to classify the amino acid residues into several categories (e.g., [2,10,11]). In order to deal with such cases, we represent a protein in a more general way: we consider chain-hypergraphs whose nodes are labeled with "colors", which are not necessarily the amino acid residues themselves. Let ∆ be an alphabet consisting of such colors labeling the nodes of hypergraphs. In this paper, we assume that the tertiary structure of a protein is represented by a chain-hypergraph over some alphabet ∆ in the way described above.
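The construction of the structure graph G^µ_P and of the complete-graph hyperedges can be sketched as follows. This is a simplified illustration with made-up coordinates, not the authors' code; it enumerates candidate node sets directly, which is only feasible for tiny n and k:

```python
from itertools import combinations
from math import dist  # Euclidean distance (Python >= 3.8)

def structure_graph(positions, mu):
    """Edges {i, j} between residues (1-indexed) within distance mu."""
    n = len(positions)
    return {frozenset({i, j})
            for i, j in combinations(range(1, n + 1), 2)
            if dist(positions[i - 1], positions[j - 1]) <= mu}

def complete_hyperedges(edges, n, k, tau):
    """Hyperedges e with 2 <= |e| <= k, width >= tau, inducing a clique."""
    result = set()
    for size in range(2, k + 1):
        for e in combinations(range(1, n + 1), size):
            if max(e) - min(e) + 1 < tau:
                continue  # width restriction on the sequence
            if all(frozenset(p) in edges for p in combinations(e, 2)):
                result.add(frozenset(e))
    return result

# Toy "protein": four residues, with residues 1 and 4 folded close together.
pos = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (3.0, 3.0, 0.0), (0.5, 0.5, 0.0)]
G = structure_graph(pos, mu=4.0)
H = complete_hyperedges(G, n=4, k=3, tau=4)   # yields {1,4} and {1,2,4}
```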

4 Conformation Rules

In this section, we define conformations, which transform strings over ∆ into chain-hypergraphs over ∆. We denote the set of all chain-hypergraphs over ∆ by H∆.

Definition 3. A conformation over ∆ is a function c : ∆+ → H∆ such that, for a string x = x1···xn ∈ ∆+, c(x) = (V, E, ψ) satisfies V = {1, ..., n} and ψ(i) = xi for 1 ≤ i ≤ n.

We give a way of specifying a conformation by introducing conformation rules over ∆, which are based on hypergraph rewriting rules defined as follows: a hypergraph rewriting rule over ∆ is a triplet ρ = (B, A, D) of a hypergraph B = (V, E, ψ) over ∆ and subsets A and D of 2^V. The elements of A and D are called additional and removable hyperedges, respectively. The rank of ρ is defined to be max{r(B), max{|a| | a ∈ A}}. The degree of ρ is defined to be d(B).

Definition 4. Let ρ1 = (B1, A1, D1) and ρ2 = (B2, A2, D2) be hypergraph rewriting rules over ∆, where B1 = (V1, E1, ψ1) and B2 = (V2, E2, ψ2). We say that ρ1 is isomorphic to ρ2, denoted by ρ1 ≈ ρ2, if there is a bijection ι : V1 → V2 such that

1. ψ1(v) = ψ2(ι(v)) for all v ∈ V1,
2. ι(e1) ∈ E2 for all e1 ∈ E1, and ι⁻¹(e2) ∈ E1 for all e2 ∈ E2,
3. ι(e1) ∈ A2 for all e1 ∈ A1, and ι⁻¹(e2) ∈ A1 for all e2 ∈ A2,
4. ι(e1) ∈ D2 for all e1 ∈ D1, and ι⁻¹(e2) ∈ D1 for all e2 ∈ D2.

Definition 5. Let D∆ be a set of hypergraph rewriting rules over ∆. For positive integers P and Q, we define a (P × Q)-conformation rule σ over D∆ as σ = (β1, β2, ..., βP), where

βp = (γp,1, γp,2, ..., γp,Q) with γp,q ⊆ D∆

for 1 ≤ p ≤ P and 1 ≤ q ≤ Q. γp,q is called the (p, q)-unit of σ, and βp is the pth unit-sequence of σ. D∆ is the domain of σ. The rank of D∆ is defined as r(D∆) = max{r(H) | H ∈ D∆}, and the degree of D∆ is d(D∆) = max{d(H) | H ∈ D∆}. The rank of σ is max{r(γp,q) | 1 ≤ p ≤ P, 1 ≤ q ≤ Q}, and the degree of σ is max{d(γp,q) | 1 ≤ p ≤ P, 1 ≤ q ≤ Q}.

In this paper, we consider rather limited hypergraph rewriting rules, defined in the following way:

Definition 6. A hypergraph rewriting rule ρ = (B, A, D) over ∆, with B = (V, E, ψ), is a bundle rule over ∆ if
1. |A| = 1, say A = {U},
2. |U| ≥ 2,
3. U ∉ E,
4. e ∩ U ≠ ∅ for any hyperedge e in E, and
5. D = {e ∈ E | e ⊂ U}.
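The conditions of Definition 6 can be checked mechanically. The sketch below is our own illustration; in particular it assumes the reading U ∉ E in condition 3 and e ∩ U ≠ ∅ in condition 4 (the relation symbols are ambiguous in the printed text), and ignores node labels, which play no role in these conditions:

```python
def is_bundle_rule(E, A, D):
    """Check the five conditions of Definition 6 on a rewriting rule
    (B, A, D) with B = (V, E, psi); hyperedges are frozensets."""
    if len(A) != 1:                           # 1. A = {U}
        return False
    (U,) = A
    return (len(U) >= 2                       # 2. |U| >= 2
            and U not in E                    # 3. U is a new hyperedge
            and all(e & U for e in E)         # 4. every e in E touches U
            and D == {e for e in E if e < U}) # 5. removable hyperedges

# A bundle rule that merges three overlapping pairs into one hyperedge.
E = {frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})}
U = frozenset({1, 2, 3, 4})
rule_ok = is_bundle_rule(E, {U}, {e for e in E if e < U})  # True
```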

For short, we denote such a bundle rule ρ = (B, A, D) by (B, U). We denote by Γ∆ the set of all bundle rules over ∆ and, for integers k ≥ 2 and d ≥ 1, by Γk,d,∆ the set of all bundle rules over ∆ whose rank is at most k and whose degree is at most d.

Remark 1. Obviously, Γ∆ is infinite. Note that Γk,d,∆ is finite if ∆ is finite. On the other hand, ∪_{k≥2} Γk,d,∆ and ∪_{d≥1} Γk,d,∆ are infinite.

We here describe a concrete conformation, that is, a function transforming strings to hypergraphs by means of conformation rules. Let σ = (β1, β2, ..., βP) be a (P × Q)-conformation rule over Γ∆ where βp = (γp,1, γp,2, ..., γp,Q) and γp,q ⊆ Γk,d,∆ for 1 ≤ p ≤ P and 1 ≤ q ≤ Q. We apply σ to a string x = x1···xn in ∆+. For a positive integer ω, we start with the rank ω linear chain-hypergraph H = (V, E, ψ), that is, V = {1, ..., n}, ψ(i) = xi for 1 ≤ i ≤ n, and E = {{i, ..., i + ω − 1} | 1 ≤ i ≤ n − ω + 1}. At the pth stage (1 ≤ p ≤ P), the pth unit-sequence βp of σ is used in the following way. In each stage, a window


on the node sequence 1, 2, ..., n corresponding to the string x = x1···xn is an interval of the sequence, and it is enlarged from smaller to larger. The initial window size is specified by τ. For each window size, the window is slid from left to right over V. Consider a window of size w (≥ τ) at position i, that is, an interval [i, ..., i + w − 1] consisting of w consecutive nodes in V. Let q = w − τ + 1, which ranges from 1 to Q. The bundle rules in the (p, q)-unit γp,q of σ are applied to create new hyperedges e such that e consists only of nodes in [i, ..., i + w − 1] and both i and i + w − 1 are in e. The creation of a new hyperedge e in the window depends on the local structure around e in the current hypergraph H = (V, E, ψ); namely, we consider the subhypergraph H(e). A new hyperedge e is created if there is a bundle rule (B, U) ∈ γp,q which is isomorphic to (H(e), e). After all new hyperedges have been created in the process of sliding the window from left to right, these hyperedges are added to E and their proper subsets are deleted from E, and this window-sliding process is repeated after the window is enlarged. A formal description is given in Fig. 1.

Input: a (P × Q)-conformation rule σ = (β1, ..., βP) over Γ∆ where βp = (γp,1, γp,2, ..., γp,Q) and γp,q ⊆ Γ∆ for 1 ≤ p ≤ P and 1 ≤ q ≤ Q; positive integers ω and τ with ω < τ; and a string x = x1···xn in ∆+
Output: a hypergraph H = (V, E, ψ)
Procedure CONFORM(ω, τ, σ, x):
    let k be the rank of σ
    V = {1, ..., n}
    let ψ be the mapping defined by ψ(i) = xi for 1 ≤ i ≤ n
    E = {{i, ..., i + ω − 1} | 1 ≤ i ≤ n − ω + 1}
    H = (V, E, ψ)                      # linear chain-hypergraph of rank ω
    for p from 1 to P:
        for q from 1 to min{Q, n}:
            w = τ + q − 1              # w is the window size
            A = ∅; D = ∅
            for i from 1 to n − w + 1:
                j = i + w − 1
                foreach e ⊆ {i, ..., j} such that i, j ∈ e and |e| ≤ k:
                    if (H(e), e) ≈ ρ for some bundle rule ρ in γp,q:
                        add e to A
                        add the proper subsets of e in E to D
            E = E ∪ A \ D

Fig. 1. Algorithm CONFORM
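A runnable skeleton of the window-sliding loop of CONFORM is sketched below. This is our own simplification, not the authors' implementation: the expensive bundle-rule isomorphism test of Definition 4 is abstracted into a caller-supplied predicate `matches(unit, E, e)`, and the toy predicate used here simply bundles windows whose end labels agree:

```python
from itertools import combinations

def conform(omega, tau, sigma, x, k, matches):
    """Window-sliding skeleton of CONFORM.  sigma[p][q] is the
    (p+1, q+1)-unit; matches(unit, E, e) stands in for the test
    '(H(e), e) is isomorphic to some bundle rule in the unit'."""
    n = len(x)
    E = {frozenset(range(i, i + omega)) for i in range(1, n - omega + 2)}
    P, Q = len(sigma), len(sigma[0])
    for p in range(P):
        for q in range(min(Q, n)):
            w = tau + q                       # window sizes tau, tau+1, ...
            A, D = set(), set()
            for i in range(1, n - w + 2):
                j = i + w - 1
                for size in range(0, k - 1):  # |e| = size + 2 <= k
                    for mid in combinations(range(i + 1, j), size):
                        e = frozenset({i, j, *mid})
                        if matches(sigma[p][q], E, e):
                            A.add(e)
                            D |= {f for f in E if f < e}
            E = (E | A) - D
    return E

# Toy run on the string "abcba" with a single (1, 1)-unit.
x = "abcba"
unit = None  # placeholder; only `matches` below inspects the labels
match = lambda u, E, e: x[min(e) - 1] == x[max(e) - 1]
```

With ω = 2, τ = 3 and k = 2, the only window whose end labels agree is [2, 4] ("b...b"), so the hyperedge {2, 4} is added to the initial chain edges.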

The graph G given in Fig. 2 is an example of a graph which cannot be generated by any (1, Q)-conformation rule for any Q. The following proposition is obvious from the definitions:


[Figure 2 (drawings not reproducible here): the graph G, a chain of nodes labeled 0 and 1 with additional hyperedges, together with the bundle-rule units γ1,1, γ2,1, γ3,1 and γ4,1.]

Fig. 2. σ = ((γ1,1), (γ2,1), (γ3,1), (γ4,1)), a (4 × 1)-conformation rule over Γ2,4,{0,1} generating the graph G

Proposition 1. Let σ be a (P × Q)-conformation rule over Γk,d,∆, let ω and τ be positive integers with ω < τ, and let x ∈ ∆+. The hypergraph CONFORM(ω, τ, σ, x) computed by the algorithm in Fig. 1 is a chain-hypergraph over ∆ of rank at most k.

Definition 7. For a (P × Q)-conformation rule σ over Γ∆ and positive integers ω and τ with ω < τ, we define a conformation c^{ω,τ}_σ, a function from ∆+ to the set of chain-hypergraphs over ∆, by c^{ω,τ}_σ(x) = CONFORM(ω, τ, σ, x) for x ∈ ∆+.

5 PAC-Learning of Conformations

For a positive integer n, let H∆[n] be the set of all chain-hypergraphs over ∆ with at most n nodes. By c^{ω,τ}_σ[n] we denote the function c^{ω,τ}_σ[n] : ∆[n] → H∆[n] obtained by restricting c^{ω,τ}_σ to ∆[n]. For integers ω ≥ 2, τ > ω, P, Q ≥ 1, and an alphabet ∆, let

C^{ω,τ,P,Q}_∆ = {c^{ω,τ}_σ | σ is a (P × Q)-conformation rule over Γ∆}.

As noted in Remark 1, the alphabet Γ∆ is infinite even if ∆ is finite. This causes trouble in discussing the PAC-learnability of a class of conformations. However, if we restrict the rank and degree of conformation rules to constant integers k and d, respectively, the alphabet Γk,d,∆ is finite for finite alphabets ∆. Let

C^{ω,τ,P,Q}_{k,d,∆} = {c^{ω,τ}_σ | σ is a (P × Q)-conformation rule over Γk,d,∆}

for integers k ≥ 2, d ≥ 1, ω ≥ 2, τ > ω, P ≥ 1 and Q ≥ 1. Our main result is the following theorem:

Theorem 2. The class C^{ω,τ,P,Q}_{k,d,∆} is polynomial-time PAC-learnable.


Theorem 3. The class ∪_{R≥1} C^{ω,τ,1,R}_{k,d,∆} is polynomial-time PAC-learnable.

We can prove these theorems by showing that these classes satisfy the three conditions in Theorem 1. For an integer k ≥ 2, a hypergraph H = (V, E, ψ) of rank k with n = |V| can be expressed, under an appropriate encoding, as a string over ∆ whose length is polynomially bounded with respect to n. Thus we regard a conformation c over ∆ as a function from ∆+ to ∆+. Therefore any class of conformations over ∆ is of polynomial expansion.

Next we show that C^{ω,τ,P,Q}_{k,d,∆} and ∪_{R≥1} C^{ω,τ,1,R}_{k,d,∆} are of polynomial dimension. Let

C^{ω,τ,P,Q}_{k,d,∆}[n] = {c^{ω,τ}_σ[n] | σ is a (P × Q)-conformation rule over Γk,d,∆}.

By Lemma 1, it suffices to show that |C^{ω,τ,P,Q}_{k,d,∆}[n]| and |∪_{R≥1} C^{ω,τ,1,R}_{k,d,∆}[n]| are bounded by 2^{p(n)} for some polynomial p(n). A (P × Q)-conformation rule σ over Γk,d,∆ can be considered as a P × Q matrix whose elements are subsets of Γk,d,∆. Since |Γk,d,∆| is a finite constant, say δ, we have |C^{ω,τ,P,Q}_{k,d,∆}[n]| ≤ 2^{δ·P·Q}, that is, it is bounded by a finite constant. It should be noted here that ∪_{R≥1} C^{ω,τ,1,R}_{k,d,∆}[n] = ∪_{n≥R≥1} C^{ω,τ,1,R}_{k,d,∆}[n]. Thus we can see that |∪_{R≥1} C^{ω,τ,1,R}_{k,d,∆}[n]| ≤ 2^{δ·n}, which is bounded by 2^{p(n)} for some polynomial p(n).

Finally we discuss polynomial-time fittings for C^{ω,τ,P,Q}_{k,d,∆} and ∪_{R≥1} C^{ω,τ,1,R}_{k,d,∆}. It is trivial that there is a polynomial-time fitting for C^{ω,τ,P,Q}_{k,d,∆}, since the cardinality of the class is a finite constant. We then describe a polynomial-time fitting B for ∪_{R≥1} C^{ω,τ,1,R}_{k,d,∆} by employing the algorithm EXTRACT given in Fig. 3. Given chain-hypergraphs H1 = (V1, E1, ψ1), ..., Ht = (Vt, Et, ψt) over ∆ and positive integers ω and τ with τ > ω, the algorithm B computes, for 1 ≤ h ≤ t, a conformation rule over Γ∆, σ̂(h) = EXTRACT(ω, τ, N, Hh), where N = max_{1≤h≤t} |Vh|. We denote by γ̂(h)_{1,q} the (1, q)-unit of σ̂(h) for 1 ≤ q ≤ N. For each q with 1 ≤ q ≤ N, let γ̂_{1,q} = ∪_{1≤h≤t} γ̂(h)_{1,q}, and let σ̂ = ((γ̂_{1,1}, γ̂_{1,2}, ..., γ̂_{1,N})). The algorithm B outputs the σ̂ computed from H1, ..., Ht. Obviously, B runs in polynomial time since the rank of conformation rules is a constant k. If Hh = CONFORM(ω, τ, σ, sh) for 1 ≤ h ≤ t, for some (1, Q)-conformation rule σ over Γk,d,∆ and strings s1, s2, ..., st ∈ ∆+, then we can show that Hh = CONFORM(ω, τ, σ̂, sh) for 1 ≤ h ≤ t, which means that CONFORM(ω, τ, σ̂, ·) is consistent with the examples {(si, Hi) | 1 ≤ i ≤ t}.
For 1 ≤ h ≤ t and 1 ≤ q ≤ N, let
– Ch,q be the contents of E just after the qth iteration of the for-loop on q within the 1st iteration of the for-loop on p of CONFORM(ω, τ, σ, sh) has finished, if q ≤ min{Q, |sh|}; and Ch,q = Ch,q−1 otherwise.


Input: a chain-hypergraph H = (V, E, ψ) over ∆ of rank k, and positive integers ω, τ and R
Output: a conformation rule σ = (β1) over Γ∆ of rank k, where β1 = (γ1,1, γ1,2, ..., γ1,R) with γ1,q ⊆ Γ∆ for 1 ≤ q ≤ R
Procedure EXTRACT(ω, τ, R, H):
    n = |V|
    Ẽ = {{i, ..., i + ω − 1} | 1 ≤ i ≤ n − ω + 1}
    H̃ = (V, Ẽ, ψ)
    for q from 1 to R:
        w = τ + q − 1
        A = ∅; D = ∅
        for i from 1 to n − w + 1:
            j = i + w − 1
            foreach U ⊆ {i, ..., j} such that i, j ∈ U and |U| ≤ k:
                if U ∈ E:
                    ρ = (H̃(U), U)
                    add ρ to γ1,q
                    add U to A
                    add the proper subsets of U in Ẽ to D
        Ẽ = Ẽ ∪ A \ D

Fig. 3. Algorithm EXTRACT

– Eh,q be the contents of Ẽ just after the qth iteration of the for-loop of EXTRACT(ω, τ, N, Hh) has finished.
– Ĉh,q be the contents of E just after the qth iteration of the for-loop on q within the 1st iteration of the for-loop on p of CONFORM(ω, τ, σ̂, sh) has finished, if q ≤ |sh|; and Ĉh,q = Ĉh,q−1 otherwise.

For convenience, let Ch,0 = Eh,0 = Ĉh,0 = {{i, ..., i + ω − 1} | 1 ≤ i ≤ |sh| − ω + 1} for 1 ≤ h ≤ t, which are the initial hyperedges in the algorithms CONFORM and EXTRACT on the string sh. It is not hard to prove by induction on q that Ch,q = Eh,q = Ĉh,q for 1 ≤ h ≤ t. This completes the proof.

The following theorem can be shown in a similar way, and we omit its proof.

Theorem 4. The class ∪_{R≥1} C^{ω,τ,P,R}_{k,d,∆} is polynomial-time PAC-learnable.
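The merging step of the fitting B, which forms each γ̂1,q as the union over all examples of the extracted (1, q)-units, is just a position-wise union of rule sets; a minimal sketch (our own naming):

```python
def fit_union(unit_sequences):
    """Merge per-example unit sequences (lists of sets of rules)
    position-wise by union, as the fitting B does to build sigma-hat."""
    N = max(len(units) for units in unit_sequences)
    merged = [set() for _ in range(N)]
    for units in unit_sequences:
        for q, gamma in enumerate(units):
            merged[q] |= gamma
    return merged
```

For example, merging the unit sequences extracted from two hypergraphs, one of length 2 and one of length 1, pads the shorter sequence with empty units.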

6 Refutably PAC-Learning Functions

In this section, we introduce the refutability of PAC-learning algorithms for functions. The refutability of PAC-learning algorithms for concepts has already been


discussed in [5,6]. PAC-learning algorithms having the ability to refute classes which do not seem to include a target function would be helpful in dealing with real data. Let f be a function from Ω∗ to Ω∗, let F be a class of functions from Ω∗ to Ω∗, and let P be a probability distribution on Ω∗. We define optf(P, F) by

optf(P, F) = min_{f′∈F} P(f △ f′),

where f △ f′ denotes the set {x ∈ Ω∗ | f(x) ≠ f′(x)}. We can see that if f ∈ F then optf(P, F) = 0 for any P.

Definition 8. Let F be a class of functions from Ω∗ to Ω∗. The function class F is polynomial-sample refutably learnable if there exist an algorithm A and a polynomial p(·, ·, ·, ·) which satisfy the following conditions:
1. The algorithm A takes as input parameters ε, ε′, δ ∈ (0, 1) and n ≥ 1. We call ε′ a refutation accuracy parameter.
2. Let f be a target function from Ω∗ to Ω∗ and P an arbitrary and unknown probability distribution on Ω∗. The algorithm A takes a sample of size p(1/ε, 1/ε′, 1/δ, n) using a subroutine EX(f, P), which at each call produces a single example for f according to P.
3. If optf(P, F) = 0 then A outputs a function g ∈ F which satisfies P(f △ g) < ε with probability at least 1 − δ. If optf(P, F) ≥ ε′ then A refutes the function class F with probability at least 1 − δ.

Theorem 5. If a class F of functions is of polynomial dimension, then F is polynomial-sample refutably learnable.

By this theorem the following holds:

Corollary 1. The classes C^{ω,τ,P,Q}_{k,d,∆} and ∪_{R≥1} C^{ω,τ,P,R}_{k,d,∆} are polynomial-sample refutably learnable.

Since F is of polynomial dimension, there exists a polynomial poly(·, ·) such that log2 |F[n1][n2]| ≤ poly(n1, n2) for any n1, n2 ≥ 1. We construct the algorithm described in Fig. 4. We introduce a refutation threshold parameter η ∈ (0, 1) so that a learning algorithm produces an approximate function, instead of refuting F, when the minimum error optf(P, F) is small enough.

Definition 9. Let F be a class of functions from Ω∗ to Ω∗. The function class F is polynomial-sample strongly refutably learnable if there exist an algorithm A and a polynomial p(·, ·, ·, ·) which satisfy the following conditions:
1. The algorithm A takes as input parameters ε, ε′, δ, η ∈ (0, 1) and n ≥ 1.
2. Let f be a target function from Ω∗ to Ω∗ and P an arbitrary and unknown probability distribution on Ω∗. The algorithm A takes a sample of size p(1/ε, 1/ε′, 1/δ, n) using a subroutine EX(f, P), which at each call produces a single example for f according to P.


Input: ε, ε′, δ, n1, n2
Procedure:
    let m = (1/ε + 1/ε′)(1/δ + poly(n1, n2))
    make m calls of EX
    let S be the set of examples seen
    if there is a function g ∈ F consistent with S:
        return g
    else:
        refute F

Fig. 4. Refutable algorithm A_RefuteBySampleComplexity(ε, ε′, δ, n1, n2)

3. If optf(P, F) ≤ η then A outputs a function g ∈ F which satisfies P(f △ g) < η + ε with probability at least 1 − δ. If optf(P, F) ≥ η + ε′ then A refutes the function class F with probability at least 1 − δ.

Theorem 6. If a class F of functions is of polynomial dimension, then F is polynomial-sample strongly refutably learnable.

Corollary 2. The classes C^{ω,τ,P,Q}_{k,d,∆} and ∪_{R≥1} C^{ω,τ,P,R}_{k,d,∆} are polynomial-sample strongly refutably learnable.

We construct the algorithm described in Fig. 5. We denote by d(g, S) the number of examples in S with which g does not agree.

Input: ε, ε′, δ, η, n1, n2
Procedure:
    κ = min{ε, ε′}
    m = 4(1/ε + 1/ε′)²(1/δ + poly(n1, n2))
    make m calls of EX
    let S be the set of examples seen
    if there is a function g ∈ F with d(g, S) ≤ m(η + (1/2)κ):
        return g
    else:
        refute F

Fig. 5. Strongly refutable algorithm A_StronglyRefuteBySampleComplexity(ε, ε′, δ, η, n1, n2)

We can easily see that F is of polynomial dimension if F is polynomial-sample refutably learnable or polynomial-sample strongly refutably learnable. Therefore the following three statements are equivalent:
1. F is of polynomial dimension.
2. F is polynomial-sample refutably learnable.
3. F is polynomial-sample strongly refutably learnable.
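On a finite domain, optf(P, F) and the consistency test at the heart of Fig. 4 can be computed directly. The following is our own toy illustration, with the distribution P given as a dict from points to probabilities:

```python
def opt_f(f, F, P):
    """opt_f(P, F): minimum over g in F of the probability mass of the
    set of points where f and g disagree."""
    return min(sum(p for x, p in P.items() if f[x] != g[x]) for g in F)

def refute_by_consistency(F, sample):
    """Core test of Fig. 4: return some g in F consistent with the
    sample of (x, y) pairs, or None to signal 'refute F'."""
    for g in F:
        if all(g[x] == y for x, y in sample):
            return g
    return None

# Toy class of two constant functions on the domain {0, 1, 2}.
F = [{0: 0, 1: 0, 2: 0}, {0: 1, 1: 1, 2: 1}]
P = {0: 0.5, 1: 0.25, 2: 0.25}
target = {0: 0, 1: 0, 2: 1}   # not in F; opt_f(target, F, P) = 0.25
```

A sample containing both (0, 0) and (2, 1) is consistent with neither function in F, so the algorithm refutes F.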

7 Experiments

In this section, we report our preliminary computational experiments on learning conformation rules from hypergraphs representing the tertiary structures of proteins. We have implemented the PAC-learning algorithm given by the procedures CONFORM(ω, τ, σ, x) and EXTRACT(ω, τ, R, H) in the Python language [13].

7.1 Method of Experiments

The hypergraph representations of proteins over ∆ by star graphs are used, with µ, k, ω, τ specified as follows: µ = 5.8 Å, k = 10, ω = 5 and τ = 8. The choice of the alphabet ∆ for labeling the nodes of a hypergraph is one of the keys to the experiments. The alphabet ∆ represents a classification of amino acid residues. Hart and Istrail [3] used the hydrophobic-hydrophilic model, which regards a protein as a linear chain of amino acid residues of two types, H (hydrophobic) and P (hydrophilic). However, some amino acid residues are neither hydrophobic nor hydrophilic. In our experiments, ∆ is set to {H, P, N}, where the amino acid residues are assigned as follows:
H: ALA, CYS, ILE, LEU, MET, PHE, TRP, VAL,
P: ARG, ASN, ASP, GLN, GLU, LYS, PRO, ASX, GLX,
N: GLY, HIS, SER, THR, TYR.

The class of conformations C^{ω,τ,P,Q}_{k,d,∆} with P = 1 and Q = 2 is considered in the experiments (since the degree bound d is less important than the rank bound k, d is left unbounded). Given examples (s1, H1), ..., (st, Ht), the polynomial-time fitting B, used to prove Theorem 3, outputs a (1, 2)-conformation rule σ̂, which is then applied by CONFORM(ω, τ, σ̂, x) to a sequence x. To evaluate how similar a hypergraph predicted by CONFORM is to the target hypergraph, we compare them hyperedge by hyperedge. To this end, we define a similarity between hyperedges as follows: let g ≥ 0 and 0 ≤ κ ≤ 1, and let E1 and E2 be subsets of 2^V, where V = {1, 2, ..., n}. For e1 ∈ E1 and e2 ∈ E2, we say that e1 is (g, κ)-similar to e2 if min e2 − g ≤ min e1 ≤ min e2 + g and |e1 △ e2| / |e1 ∪ e2| ≤ κ. We denote

Simg,κ(E1, E2) = |{e1 ∈ E1 | e1 is (g, κ)-similar to some e2 ∈ E2}|.

TIM-barrel proteins have highly regular conformations, composed of eight parallel β-sheets forming a barrel structure [12]. We downloaded PDB files of TIM-barrel proteins from the site of the PDB [14], and screened them.
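The residue classification above maps directly onto a lookup table; for instance (a straightforward transcription of the assignment, not the authors' code):

```python
# Classification of residues into hydrophobic (H), hydrophilic (P),
# and the remaining residues (N), as used in the experiments.
RESIDUE_CLASS = {
    **dict.fromkeys(["ALA", "CYS", "ILE", "LEU", "MET", "PHE", "TRP", "VAL"], "H"),
    **dict.fromkeys(["ARG", "ASN", "ASP", "GLN", "GLU", "LYS", "PRO", "ASX", "GLX"], "P"),
    **dict.fromkeys(["GLY", "HIS", "SER", "THR", "TYR"], "N"),
}

def color_sequence(residues):
    """Map three-letter residue codes to a string over {H, P, N}."""
    return "".join(RESIDUE_CLASS[r] for r in residues)
```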
Fifteen proteins remained; their tertiary structures are fully determined, and each is composed of a single chain of amino acids. In our experiments, the following small modification has been made: for a bundle rule ρ = (B, A, D) with A = {U}, D is set to ∅ instead of {e ∈ E | e ⊂ U}, which affects nothing but would enable us to obtain more detailed conformation rules.

7.2 Evaluation

We have executed two kinds of experiments. One is self-conformation, that is, for a single protein p, a (1, 2)-conformation rule α is learned from the hypergraph

256

O. Maruyama et al.

representation of p, and used in CONFORM with the sequence of p. The other is the case where a (1, 2)-conformation rule α is extracted from 14 TIM-barrel proteins and applied to the remaining one. In self-conformation, successful results are attained. Let HT = (V, ET, ψ) and HP = (V, EP, ψ) be a target and a predicted hypergraph, respectively. For a set S, by S^c we denote the complement of S. We give typical results of a self-conformation test in Tab. 1. Since the experiment goes well under the window sizes 7 and 8, it should be continued with window sizes over 8. However, if that is done, the procedure does not finish in a practical time. The task of hypergraph matching is done repeatedly in our procedure. An efficient and practical algorithm for the problem of hypergraph isomorphism should be developed, which is one of our future works.

Table 1. Result of self-conformation with protein 4ALD, whose sequence is of length 363. The backbone hyperedges are excluded.

window size    EP ∩ ET    EP ∩ ET^c    EP^c ∩ ET
     7            69          0            0
     8            14          0            0
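The set-theoretic quantities reported in Table 1 can be computed directly once both hypergraphs are given as sets of hyperedges. A minimal sketch of our own (not the authors' code), with hyperedges represented as frozensets of node indices:

```python
def compare_hyperedges(ep, et):
    """Compare a predicted hyperedge set E_P with a target set E_T.
    Returns (|E_P ∩ E_T|, |E_P ∩ E_T^c|, |E_P^c ∩ E_T|): the counts of
    correct, spurious, and missed hyperedges, as in Table 1."""
    ep, et = set(ep), set(et)
    return len(ep & et), len(ep - et), len(et - ep)
```

For instance, with ep = {{1,2}, {3,4}} and et = {{1,2}, {5,6}} (as frozensets), the result is (1, 1, 1).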

Tab. 2 shows the result of conformation of protein 4ALD obtained by applying a (1, 2)-conformation rule learned from the other 14 TIM-barrel proteins. In the stage of window size 7, 23 (= 6 + 17) hyperedges are added, 6 of which are similar or exactly identical to hyperedges in the target HT. However, the remaining 17 hyperedges are wrong, that is, there are no similar hyperedges to them in HT. An interesting observation is that correct hyperedge addition often occurs in a neighborhood, which would imply that the conformation rule causing correct hyperedge addition captures some regional property common to several proteins. In the stage of window size 8, no hyperedge is added. This is because, once a wrong hyperedge is added, it makes it difficult to add correct hyperedges in the following stages with larger window sizes. Settling this problem is also a future work.

Table 2. Result of conformation of protein 4ALD, to which a (1, 2)-conformation rule learned from the other 14 TIM-barrel proteins was applied.

window size    Sim2,0.8(EP, ET)    Sim2,0.8(ET, EP)    EP ∩ ET^c    EP^c ∩ ET
     7                6                   9                17           63
     8                0                   0                 0           14

Learning Conformation Rules

8  Concluding Remarks

In this paper, we formulated the protein conformation problem as the problem of PAC-learning hypergraph rewriting rules from hypergraphs. Since our graph-theoretic approach to the protein conformation problem is unique, this learning problem should be studied extensively, with appropriate modifications added to the framework we have proposed here, although the current results of our preliminary computational experiments are far from satisfactory.

Acknowledgments This work was in part supported by Grant-in-Aid for Encouragement of Young Scientists and Grant-in-Aid for Scientiﬁc Research on Priority Areas (C) “Genome Information Science” from MEXT of Japan, and the Research for the Future Program of the Japan Society for the Promotion of Science.

References

1. Church, B.W. and Shalloway, D., Top-down free-energy minimization on protein potential energy landscapes, Proc. Natl. Acad. Sci. U.S.A. 98, 6098–6103, 2001.
2. Dill, K.A., Fiebig, K.M. and Chan, H.S., Cooperativity in protein-folding kinetics, Proc. Natl. Acad. Sci. U.S.A. 90, 1942–1946, 1993.
3. Hart, W.E. and Istrail, S.C., Robust proofs of NP-hardness for protein folding: general lattices and energy potentials, J. Comput. Biol. 4, 1–22, 1997.
4. Konig, R. and Dandekar, T., Improving genetic algorithms for protein folding simulations by systematic crossover, Biosystems 50, 17–25, 1999.
5. Matsumoto, S. and Shinohara, A., Refutably probably approximately correct learning, Proc. 5th International Workshop on Algorithmic Learning Theory, LNAI 872, 469–483, 1994.
6. Matsumoto, S., Studies on the learnability of pattern languages, PhD thesis, Kyushu University, 1998.
7. Natarajan, B.K., Probably approximate learning of sets and functions, SIAM J. Comput. 20, 328–351, 1991.
8. Natarajan, B.K., Machine Learning: A Theoretical Approach, Morgan Kaufmann, 1991.
9. Natarajan, B.K. and Tadepalli, P., Two new frameworks for learning, Proc. Fifth International Symposium on Machine Learning, 402–415, 1988.
10. Shimozono, S., Shinohara, A., Shinohara, T., Miyano, S., Kuhara, S. and Arikawa, S., Knowledge acquisition from amino acid sequences by machine learning system BONSAI, Trans. Information Processing Society of Japan 35, 2009–2018, 1994.
11. Smith, R.F. and Smith, T.F., Automatic generation of primary sequence patterns from sets of related protein sequences, Proc. Natl. Acad. Sci. U.S.A. 87, 118–122, 1990.
12. Wierenga, R.K., The TIM-barrel fold: a versatile framework for efficient enzymes, FEBS Letters 492, 193–198, 2001.
13. http://www.python.org/
14. http://www.rcsb.org/pdb/

A General Theory of Deduction, Induction, and Learning

Eric Martin1, Arun Sharma1, and Frank Stephan2

1 School of Computer Science and Engineering, The University of New South Wales, Sydney, NSW 2052, Australia, {emartin, arun}@cse.unsw.edu.au
2 Universität Heidelberg, 69121 Heidelberg, Germany, fstephan@math.uni-heidelberg.de

Abstract. Deduction, induction, and learning are various aspects of a more general scientific activity: the discovery of truth. We propose to embed them in a common, logical framework. First, we define a generalized notion of “logical consequence.” By alternating compact and “weakly compact” consequences, we stratify the set of generalized logical consequences of a given theory into a hierarchy. Classical first-order logic is a particular case of this framework; the fact that it is all about deduction is due to the compactness theorem, and this is reflected by the collapsing of the corresponding hierarchy to the first level. Classical learning paradigms in the inductive inference literature provide other particular cases. Finite learning corresponds exactly to the first level (or level Σ1) of the hierarchy, whereas learning in the limit corresponds to another level (namely Σ2). More generally, strong and natural connections exist between our hierarchy of generalized logical consequences, the Borel hierarchy, and the hierarchy which measures the complexity of a formula in terms of alternations of quantifiers. It is hoped that this framework provides the foundation of a unified logic of deduction and induction, and highlights the inductive nature of learning. An essential motivation for our work is to apply the theory presented here to the design of “Inductive Prolog”, a system with both deductive and inductive capabilities, based on a natural extension of the resolution principle.

1  Introduction

Let us ﬁrst make a few remarks about the nature of deduction and the nature of induction, before we turn to the nature of learning. If a formula ϕ is a deductive consequence of a set of formulas T , it is clear to anyone that ϕ is a logical consequence of T , in the sense that ϕ is true in every model of T . Many would also agree that we can substitute “inductive” for “deductive” in the previous sentence. What is then the diﬀerence between ϕ being a deductive and ϕ being

Eric Martin is supported by the Australian Research Council Grant A49803051. Frank Stephan is supported by the Deutsche Forschungsgemeinschaft (DFG) Heisenberg Grant Ste 967/1-1.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 228–242, 2001. c Springer-Verlag Berlin Heidelberg 2001


an inductive consequence of T ? Well, it should be possible to discover with certainty that ϕ is a deductive consequence of T , if this is indeed the case. Whereas it should not be possible to discover with certainty that ϕ is an inductive consequence of T , if ϕ is not in fact a deductive consequence of T . How can we discover with certainty that ϕ is true on the basis of T ? A natural answer is: if and only if ϕ is actually a logical consequence of a ﬁnite subset of T . In other words, if and only if ϕ is a compact logical consequence of T . On the other hand, if ϕ is an inductive, but not a deductive, consequence of T , then we need an inﬁnite part of T , if not the whole of T , in order to be able to establish this fact. At this point, two questions emerge: 1. What should count as a model of T ? If it is any structure (as deﬁned in classical logic) in which every member of T is true, and if T consists of ﬁrst-order formulas, then the compactness theorem shows that every logical consequence of T is actually a deductive consequence of T , and there is no scope for a proper notion of induction. Hence, we should be able to consider not all structures, but some of them. This would result in a generalized notion of logical consequence that might not be compact. Are there natural candidates for such sets of structures? 2. Suppose that the class of models of T that have been retained is such that some generalized logical consequence ϕ of T is not a deductive (compact) consequence of T . Is then ϕ automatically “promoted” to the status of inductive consequence of T ? The fact that every model of T is a model of ϕ involves inﬁnitely many members of T . But how diﬃcult is it to conclude that ϕ is true on the basis of T ? If we can deﬁne diﬃculty levels, should one of them be considered as “the inductive level?” Let us now consider learning (for more details on the notions mentioned below, see [7]). 
We claim that the classical paradigms in the inductive inference literature are also about discovering the truth. Suppose that the underlying logical vocabulary consists of a unary predicate symbol P, together with a constant n for each natural number n. A language L can be identified with T = {P n | n ∈ L}, and its complement with T̄ = {¬P n | n ∈ N \ L}. A text (respect. informant) for L can then be identified with an enumeration of T (respect. T ∪ T̄). The task of discovering an r.e. index for (respect. the characteristic function of) L can be identified with the task of discovering the infinitary formula ⋀T (respect. ⋀T ∧ ⋀T̄). Clearly ⋀T is a logical consequence of T—in the classical sense, hence also in any more general sense. Retaining only the structures that correspond to languages, i.e., the intended possible realities, will make ⋀T ∧ ⋀T̄ a generalized consequence of T. So in both cases, identification in the limit is about discovering a particular generalized logical consequence of T, namely a formula that can be viewed as a description of the language to be learned. On the other hand, the task of discovering the truth of an arbitrary formula ϕ from a background theory is equivalent to partial classification (see [4]): a formula ϕ represents the class C of all theories T which logically imply ϕ in a general sense, and the partial classifier has to find out, on the basis of data from background


theory T, that ϕ is a generalized logical consequence of T, whenever this is true. Note the following:

1. Considering the infinite formula P n0 ∧ P n1 ∧ P n2 ∧ ... rather than the index of a Turing machine which generates n0, n1, n2, ... provides a logical representation equivalent to a representation in terms of r.e. indices. But infinite formulas are not only a technical way of embedding learning paradigms into a logical framework. It turns out that the extension of first-order languages to countable fragments of Lω1ω (see below) provides the natural logical languages of our framework.

2. When learning from positive data only, there is an implicit assumption that all positive data are enumerated in a text for a language L. This means that the models of {P n | n ∈ L} to be considered should not be consistent with P n for any n ∈ N \ L. The notion of generalized logical consequence, together with the right set of structures, should be able to accommodate this kind of property.

The previous considerations go beyond epistemological concerns about the nature of induction or learning. Indeed, the aim of this work is also to investigate the foundations of induction in AI. Current work in Inductive Logic Programming (see [14]) focuses on a very specific inductive task: discovering the minimal model of a potentially infinite set of data. We take a more general view and investigate general inductive abilities. Considering both deduction and induction as particular expressions of the art of discovering the truth opens the door to a unified framework which can provide the basis of an “Inductive Prolog”. If Prolog is the deductive engine of AI, giving an agent the ability to compute solutions to existential queries, Inductive Prolog should be the deductive-inductive engine of AI, giving an agent the ability to compute solutions to existential or Σ2 queries, such as: does there exist a chemical compound which has this effect on all molecules having this and that property?
We proceed as follows. In Section 2, we introduce the necessary notation, and in Section 3, we describe the components of our framework. In Section 4, we define hierarchies of generalized logical consequences by alternating the use of compactness and weak compactness, and show some of their properties relevant to learning paradigms. In Section 5, we investigate the relationship between the hierarchies of generalized logical consequences and formula complexity. As additional evidence of their naturalness, we also demonstrate links with the Borel hierarchy. Finally, in Section 6, we show how a number of classical learning paradigms can be cast into our framework.

2  Notation

A vocabulary is a countable set of function symbols (possibly including constants) and predicate symbols. A vocabulary can, but does not have to, contain equality. If it does not, it is said to be equality free. From now on, S denotes an arbitrary countable vocabulary. For some results, assumptions on S will be made. We


denote by LSωω the set of all first-order S-formulas, and by LSω1ω the extension of LSωω that accepts countable nonempty conjunctions and disjunctions.1 So for all countable nonempty T ⊆ LSω1ω, the disjunction of all members of T, written ⋁T, and the conjunction of all members of T, written ⋀T, both belong to LSω1ω. Note that the occurrence or nonoccurrence of = in S determines whether LSωω and LSω1ω are languages with or without equality. A countable fragment of LSω1ω is a countable subset L of LSω1ω which contains LSωω and is closed under subformulas, boolean operators, and quantification.2 From now on, L denotes a countable fragment of LSω1ω. It represents the language on the basis of which the core of the theory is developed. Clearly, LSωω is the smallest countable fragment of LSω1ω. The members of LSω1ω which are in Σ0 or Π0 prenex form are the quantifier free members of LSωω. Let nonnull ordinal α and ϕ ∈ LSω1ω be given. We say that ϕ is in Σα (respect. Πα) prenex form just in case one of the following holds:

1. ϕ is in Σβ or Πβ prenex form for some β < α, or
2. ϕ is of the form ∃xψ (respect. ∀xψ) for some ψ ∈ LSω1ω which is in Σα (respect. Πα) prenex form, or
3. ϕ is of the form ⋁X (respect. ⋀X) for some (countable) X ⊆ LSω1ω all of whose members are in Σα (respect. Πα) prenex form.

It is easy to verify that every member of LSω1ω is logically equivalent to a member of LSω1ω which is in Σα prenex form for some α. If ϕ ∈ LSω1ω is logically equivalent to a closed member of LSω1ω which is in Σα (respect. Πα) prenex form, then we say that ϕ is Σα (respect. Πα). Note that the classical definition of a member of LSωω being Σn (respect. Πn) for some n ∈ N is a particular case of the former. The ∼ operator is the function ∼: LSω1ω → LSω1ω which is defined as follows. If ϕ ∈ LSω1ω is of the form ¬ψ for some ψ ∈ LSω1ω, then ∼(ϕ) = ψ; otherwise ∼(ϕ) = ¬ϕ.
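For first-order formulas in prenex form, the classical Σn/Πn classification mentioned above depends only on the quantifier prefix: the leading quantifier decides between Σ and Π, and the number of alternation blocks gives n. The following small sketch is our own illustration of that bookkeeping (it is not part of the paper's framework, and covers only the finite, first-order case; 'E' and 'A' stand for ∃ and ∀):

```python
def prenex_class(prefix):
    """Classify a quantifier prefix, given as a string over {'E', 'A'}
    (E = exists, A = forall), into the classical Sigma_n / Pi_n prenex
    classes: n counts maximal blocks of like quantifiers, and the first
    quantifier decides Sigma versus Pi."""
    if not prefix:
        return ('Sigma', 0)  # quantifier free: Sigma_0 = Pi_0
    blocks = 1
    for prev, cur in zip(prefix, prefix[1:]):
        if cur != prev:      # a quantifier alternation starts a new block
            blocks += 1
    kind = 'Sigma' if prefix[0] == 'E' else 'Pi'
    return (kind, blocks)
```

For example, prenex_class('EAA') gives ('Sigma', 2) and prenex_class('AE') gives ('Pi', 2).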
Given Γ ⊆ LSω1ω and S-structure M, the Γ-diagram of M, denoted DΓ(M), is the set of all members of Γ that are true in M. Terms will refer to S-terms, formulas to members of L (not LSω1ω), sentences to closed formulas, and structures to S-structures. A Henkin structure is a structure all of whose individuals interpret closed terms.3 A Herbrand structure is a structure each of whose individuals interprets a unique closed term.4 Hence Herbrand structures are Henkin. When we consider a Henkin or a Herbrand structure, or a nonempty class of Henkin or Herbrand structures, we tacitly assume that S contains at least one constant.

1 Given regular cardinal κ, LSκω denotes the set of all S-formulas built from atomic S-formulas using boolean operators, quantifiers, and disjunctions or conjunctions over nonempty sets of cardinality smaller than κ. See [10].
2 For more details on this definition, see [2] or [12].
3 Henkin structures should not be confused with Henkin models, which is the name often given to the general models defined in [6]. Our notion of Henkin structure is closer to the canonical structures defined in [17] for Henkin’s proof of the completeness of first-order logic.
4 Herbrand structures are close to the Herbrand models considered in Logic Programming. See [3] or [11].

3  Components of the Theory

We denote by W a class of structures, the class of possible worlds. Classical first-order logic would take for W the class of all structures. We have explained that in order to address questions such as deduction versus induction, we need to be free to choose a more restrictive class of possible worlds. The discussion about learning suggests taking for W the class of all Henkin structures, or the class of all Herbrand structures. Henkin and Herbrand structures are interesting in many respects, and play a prominent role in Logic Programming ([11]). Given T ⊆ L, we denote by ModW(T) the class of all members of W that are models of T. We denote by O a set of sentences, that we call the class of possible observations. For classical first-order logic, the choice of O would be irrelevant. Suppose we want to cast learning paradigms into this framework. For learning from positive data only, O will be equal to the set of all atomic sentences; for learning from both positive and negative data, O will be equal to the set of all basic sentences. Other examples can also be found in the literature (see for example [9]). We denote by T a set of sets of sentences, that we call the class of possible theories. This corresponds roughly to the class of possible texts in the inductive inference literature. Classical first-order logic would take for T the set of all sets of closed members of LSωω. The quintuple (S, L, W, O, T) contains all we need to define the fundamental concepts of this framework. We call this quintuple the paradigm under investigation, and we denote it by P.

Definition 1. Let T ⊆ L and M ∈ W be given. We say that M is an O-minimal model of T in W iff M ∈ ModW(T) and for all N ∈ ModW(T), {ϕ ∈ O | N |= ϕ} ⊄ {ϕ ∈ O | M |= ϕ}.

The discussion above about learning should justify the previous definition.
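Over finitely many candidate models, Definition 1 can be illustrated by identifying each model with its O-diagram (the set of observations it makes true). The following toy sketch is our own illustration, not the paper's machinery:

```python
def o_minimal_models(diagrams):
    """Given a list of candidate models, each represented by its
    O-diagram as a frozenset of observation sentences, keep the
    O-minimal ones: those whose diagram has no proper subset among
    the candidates (a finite rendering of Definition 1)."""
    return [m for m in diagrams
            if not any(n < m for n in diagrams)]  # '<' is proper subset
```

With diagrams [{P(0)}, {P(0), P(1)}, {P(2)}], only {P(0)} and {P(2)} are O-minimal: the second candidate's diagram properly contains the first's.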
A similar notion is also encountered in AI in the form of the closed-world assumption deﬁned in [16] (for an overview see [5]), and of course in Logic Programming with the least Herbrand models (see [3,11]). Let T ⊆ L be given. Then T can have exactly one O-minimal model in W, or none, or many. We denote by ModO W (T ) the class of all O-minimal models of T in W. Note the following: Lemma 2. If O is closed under ∼ then for all T ⊆ L, ModO W (T ) = ModW (T ). We can now generalize the notion of logical consequence: Deﬁnition 3. Let T ⊆ L and ϕ ∈ L be given. We say that ϕ is a logical consequence of T in W, and we write T |=W ϕ, iﬀ every member of ModW (T ) is a model of ϕ. We say that ϕ is an O-minimal logical consequence of T in W, O and we write T |=O W ϕ, iﬀ every member of ModW (T ) is a model of ϕ. The notion of O-minimal logical consequence in W is the notion of generalized logical consequence we investigate; the other just proves useful. Although we develop the theory on a very broad basis, here we consider almost exclusively two cases of paradigms, that we now deﬁne.


Deﬁnition 4. We say that P is standard iﬀ T = {DO (M) | M ∈ W}. If P is O standard and for all T ∈ T and sentences ϕ, either T |=O W ϕ or T |=W ¬ϕ, then we say that P is ideal. Standard paradigms are the analogues of the classical paradigms in the inductive inference literature. When no data are missing, the latter even correspond to ideal paradigms.

4  The Hierarchies of Generalized Logical Consequences

We now define the hierarchies of generalized logical consequences that are basically the fundamental object of study of this framework.5 First we set, for all T ∈ T, Σ0P(T) = Π0P(T) = T.

Definition 5. Let nonnull ordinal α and T ∈ T be given. Suppose that ΠβP(T) has been defined for all β < α. A sentence ϕ belongs to ΣαP(T) iff there exist finite E ⊆ T and finite H ⊆ ∪β<α ΠβP(T) such that ... Let α > 1 and T ∈ T be given. A sentence ϕ belongs to ΣαP(T) iff there is ψ ∈ ∪β<α ... For α > 1, it follows easily from Lemma 9 and the induction hypothesis that ϕ ∈ Σα+βP(DO(M)). So we have shown that for all M ∈ W, ΣβP(DO(M)) ⊆ Σα+βP(DO(M)). Let M ∈ W and ϕ ∈ ΠβP(DO(M)) be given. To complete the proof we show that ϕ belongs to Πα+βP(DO(M)). By Lemma 8, choose ψ ∈ ΣβP(DO(M)) such that for all T ∈ T with T |=O W ψ and T |=O W ϕ, ¬ϕ ∈ ΣβP(T). Let N ∈ W with DO(N) |=O W ψ and DO(N) |=O W ϕ be given. Since P and P are ideal, we infer that DO(N) |=O W ψ and DO(N) |=O W ϕ. Hence ¬ϕ ∈ ΣβP(DO(N)), so ¬ϕ ∈ Σα+βP(DO(N)) as proved above. Since the same part of the proof also shows that ψ ∈ Σα+βP(DO(M)), we conclude with Lemma 8 that ϕ ∈ Πα+βP(DO(M)).

5  Connections with Other Hierarchies

In order to be able to establish relations between the hierarchies of generalized logical consequences and other hierarchies, we define still a new hierarchy, where the Σα levels are better behaved:

Definition 14. For all ordinals α, the sets of sentences ΣαP and ΠαP are defined by induction on α, as follows.
1. Σ0P = O ∪ {∼ϕ | ϕ ∈ O}.


2. For all ordinals α, ΠαP = {∼ϕ | ϕ ∈ ΣαP}.
3. Let α ≠ 0 be given. A sentence ϕ belongs to ΣαP iff there exists β < α with the following property: for all M ∈ W with M |= ϕ, there exists finite D ⊆ ΠβP such that M |= ⋀D and D |=W ϕ.

Informally, we will refer to the hierarchy defined above as the uniform hierarchy. We will need the following properties. First note that the levels of the uniform hierarchy are ordered as expected in a hierarchy of a Borel type.

Proposition 15. For all ordinals α, β, if α < β then ΣαP ∪ ΠαP ⊆ ΣβP ∩ ΠβP.

Proof. Trivially, Σ0P = Π0P, which implies immediately that for all ordinals β, Σ0P ∪ Π0P ⊆ ΣβP ∩ ΠβP. Let α ≠ 0 be given. For all β > α, the inclusions ΣαP ⊆ ΣβP and ΠαP ⊆ ΠβP are straightforward. It is easily verified that ΣαP ⊆ Πα+1P. We infer that any ϕ ∈ ΠαP is equal to ∼ψ for some ψ ∈ ΣαP, hence ∼ϕ ∈ Πα+1P, hence ϕ ∈ Σα+1P. So ΠαP ⊆ Σα+1P. The result follows.

Remember that L is just a fragment of LSω1ω, hence does not contain the disjunction or conjunction of any of its countable subsets. So ⋁X and ⋀X in the closure property below are members of LSω1ω, but not necessarily members of L.

Lemma 16. Let α ≠ 0, sentence ϕ, and countable X ⊆ L be given.
1. If X ⊆ ΣαP and |=W (ϕ ↔ ⋁X) then ϕ ∈ ΣαP.
2. If X ⊆ ΠαP and |=W (ϕ ↔ ⋀X) then ϕ ∈ ΠαP.

Adding the requirement that W consists exclusively of Henkin structures enables us to treat existential quantifiers as countable disjunctions, and universal quantifiers as countable conjunctions.

Corollary 17. Suppose that W is a set of Henkin structures. Let formula ϕ with free variables x1, ..., xn be given. Denote by X the set of all sentences of the form ϕ[t1/x1, ..., tn/xn] for some closed terms t1, ..., tn. Let α ≠ 0 be given.
1. If X ⊆ ΣαP then ∃x1 ... ∃xn ϕ ∈ ΣαP.
2. If X ⊆ ΠαP then ∀x1 ... ∀xn ϕ ∈ ΠαP.

We characterize Σ1P and Π1P, assuming that O is closed under the ∼ operator.

Proposition 18. Suppose that O is closed under ∼. Let sentence ϕ be given.
1. ϕ ∈ Σ1P if and only if |=W ϕ, or |=W ¬ϕ, or there is a nonempty set X of finite, nonempty subsets of O such that |=W ⋁{⋀D | D ∈ X} ↔ ϕ.
2. ϕ ∈ Π1P if and only if |=W ϕ, or |=W ¬ϕ, or there is a nonempty set X of finite, nonempty subsets of O such that |=W ⋀{⋁D | D ∈ X} ↔ ϕ.


Proof. The proof is trivial if |=W ϕ or |=W ¬ϕ, so suppose otherwise. Assume that ϕ ∈ Σ1P. Let M ∈ W be such that M |= ϕ. Choose a nonempty subset DM of DO(M) such that DM |=W ϕ. Set X = {DM | M ∈ W and M |= ϕ}. Then X is nonempty, and it is easy to verify that |=W ⋁{⋀D | D ∈ X} ↔ ϕ. Conversely, let nonempty set X of finite, nonempty subsets of O be such that |=W ⋁{⋀D | D ∈ X} ↔ ϕ. Since ⋀D ∈ Σ1P for all D ∈ X, it follows from Lemma 16 that ϕ ∈ Σ1P. We conclude that 1. holds, and 2. is an immediate consequence.

Then we characterize the other levels:

Proposition 19. Let α > 1 and sentence ϕ be given.
1. ϕ ∈ ΣαP iff there is nonempty X ⊆ ∪β<α ΠβP with |=W ⋁X ↔ ϕ.

H + 23Na --> 24Mg          +  11.68 (MeV)
H + 23Na --> 24Na + nu     +   6.18
H + 23Na --> 20Ne + 4He    +   2.38

1 The reaction formulations of ASTRA are based on neutral atoms. For this reason, there appear minor differences with textbook notations, such as in the second reaction above, whose textbook version is H + 23Na --> 24Na + /e + nu instead of H + 23Na --> 24Na + nu.

Automated Formulation of Reactions and Pathways


In each example, hydrogen and sodium (on the left hand side) combine to form one or more new substances (on the right hand side), along with the total energy emissions in MeV. For the runs described in this paper, we provided ASTRA with information about the elements from hydrogen to sulphur, their isotopes, and a few elementary particles like the electron, proton, neutron and the neutrino, with their antiparticles, giving a total of 68 distinct entities. From these, the system generated more than 600 different reactions. We manually eliminated minor variations such as 3He + 9Be --> 12C + e + /e and 3He + 9Be --> 12C + nu + /nu, leaving 472 reactions that included 344 fusion reactions and 28 decays.

3.2 Generating Reaction Chains

Taking as input the reactions generated by the first stage, ASTRA generates the reaction chains for an element E from a small set of basic elements/isotopes (E) that we assume as given. The system uses a depth-first, backward chaining search to construct the reaction chains. On the first step, ASTRA finds those reactions that give as an output the final element E. Upon selecting one of these reactions, R, it recursively finds those reactions that give as an output one or more of R's input elements. The algorithm continues this process, halting its recursion when it finds a reaction chain for which all the reacting elements are in (E), or when it cannot find a reaction off which to chain. ASTRA generates all possible reaction chains in this systematic manner.
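The depth-first, backward-chaining search just described can be sketched as follows. This is a simplified illustration of our own, not ASTRA's implementation; function names, the depth bound, and the choice to resolve one non-basic reactant at a time are our assumptions:

```python
def reaction_chains(target, reactions, basic, max_len=8):
    """Backward-chain from `target` down to the basic elements.
    Each reaction is an (inputs, outputs) pair of element symbols;
    a chain is a list of reactions whose unresolved reactants are
    all in `basic`. The depth bound keeps the search finite."""
    chains = []

    def solve(pending, chain):
        if len(chain) > max_len:
            return                      # bound the recursion depth
        open_elems = [e for e in pending if e not in basic]
        if not open_elems:
            chains.append(list(chain))  # every reactant is basic: done
            return
        elem = open_elems[0]
        i = pending.index(elem)
        rest = pending[:i] + pending[i + 1:]
        for inputs, outputs in reactions:
            if elem in outputs:         # a reaction that yields `elem`
                solve(rest + list(inputs), chain + [(inputs, outputs)])

    solve([target], [])
    return chains
```

For instance, with the (illustrative) reactions H + H --> D and H + D --> 3He and basic set {H}, the search finds a single two-step chain for 3He.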

4 New Results of ASTRA

In this section we report the new results of our tests with ASTRA concerning hydrogen-, helium-, carbon- and oxygen-burning reactions. We start with proton, electron and neutron capture reactions of heavier elements such as oxygen, fluorine, neon, sodium, magnesium, aluminium, silicon and phosphorus.

4.1 Proton, Electron, and Neutron Captures

Proton captures are an important class of exothermic reactions that also take part in processes transforming hydrogen into helium, as will be described below. Proton capture by an atomic nucleus turns it into another element with an atomic number one higher. ASTRA finds 33 examples of proton captures given in the astrophysics literature (e.g., Fowler, et al., 1967, 1975, 1983) for elements from hydrogen to oxygen (16O), and 20 more for elements from oxygen to sulphur. ASTRA's first stage predicts that all elements from hydrogen to sulphur (32S), with the exception of 4He, participate in exothermic proton capture. The program produces 46 such reactions for elements from hydrogen to oxygen, including all 33 examples we have found in texts, but also 13 others which we have not seen in the astrophysics texts that we examined. The program also finds 72 proton captures for elements from oxygen (16O) to sulphur (32S), including the 20 such reactions cited in the same literature. Three examples of such proton captures are,


S. Kocabas

H + 19F --> 20Ne
H + 23Na --> 24Mg
H + 27Al --> 28Si.

In these reactions, proton captures by fluorine, sodium and aluminium transform them into neon, magnesium and silicon, respectively. Also, all the isotopes from oxygen to sulphur, with the exception of the isotopes of neon and magnesium, participate in exothermic proton captures that produce helium (4He). Three examples of such reactions are,

H + 19F --> 4He + 16O
H + 23Na --> 4He + 20Ne
H + 27Al --> 4He + 24Mg.

Electron capture reactions are weak interactions in which an electron is absorbed by the atomic nucleus, which is thereby transformed into one with a smaller atomic number. In the process, the electron combines with a proton in the nucleus, effectively transforming it into a neutron with the emission of a neutrino: e + p --> n + nu. ASTRA's first stage produces 6 electron capture reactions for elements from hydrogen to oxygen, of which only the one just given appears in astrophysics texts. The program also found 8 electron capture reactions for elements from oxygen to sulphur, none of which we have seen in the texts. In neutron capture, an element combines with a neutron to form a heavier isotope of the same element. We found 17 neutron captures for elements from hydrogen to oxygen in the literature, while ASTRA predicts 59 such reactions that are theoretically possible for the same elements. Some examples of these reactions can be found in Kocabas and Langley (1998). Recent runs of the system generated 76 reactions for elements from oxygen to sulphur. Three examples of such neutron capture reactions are,

n + 18F --> 19F
n + 22Na --> 23Na
n + 31S --> 32S.

Here, as indicated above, in each case the nucleus that absorbs the neutron turns into a heavier isotope of the same element.

4.2 Hydrogen Burning Processes

The transformation of hydrogen into helium in a series of nuclear processes taking place in main sequence stars is their principal source of energy. The standard reaction chains given in astrophysics texts (e.g. Audouze & Vauclair, 1980, p. 52; Williams, 1991, p. 351) for helium synthesis in such stars are the hydrogen-burning processes called "proton-proton" or pp chains. Other hydrogen burning reactions that


appear in texts involve the heavier elements carbon, nitrogen and oxygen, and the pathway is called the CNO-chain. ASTRA produces all known CNO-chains, in addition to one viable variant using the electron capture of 13N (see Kocabas & Langley, 1998). We have tested ASTRA on hydrogen burning reactions involving the elements heavier than oxygen. Such reactions are hypothesized to occur in stars several times larger than the sun. The program found four hydrogen burning chains involving the elements fluorine, neon, sodium, magnesium, silicon, phosphorus and sulphur. One of these processes is

H + 24Mg -> 25Mg + nu
H + 25Mg -> 26Al
H + 26Al -> 27Si
27Si + e -> 27Al + e + nu
H + 27Al -> 24Mg + 4He
---------------------------------
4 H -> 4He + 2 nu .

In this process, four hydrogen atoms in effect transform into one helium atom, while two neutrinos are also emitted. We did not see any of these processes in the texts that we examined, but we presume that they are known to astrophysicists.

4.3 Helium Burning Processes

The origin and the relative abundance of carbon and oxygen has been one of the main concerns of astrophysics. The standard account (e.g., Fowler, 1986, pp. 5-6) relies on the process of helium-burning, in which helium nuclei react to form carbon and oxygen in the following steps:

4He + 4He --> 8Be
4He + 8Be --> 12C
4He + 12C --> 16O .
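The bookkeeping behind summary lines such as "4 H -> 4He + 2 nu" in the hydrogen-burning chain above amounts to summing the reactions and cancelling species that appear on both sides. A minimal sketch of our own (not ASTRA's code):

```python
from collections import Counter

def net_reaction(chain):
    """Sum a reaction chain and cancel species occurring on both
    sides (intermediates and catalysts), returning the net inputs
    and outputs as Counters. Each reaction is an (inputs, outputs)
    pair of species lists."""
    left, right = Counter(), Counter()
    for inputs, outputs in chain:
        left.update(inputs)
        right.update(outputs)
    common = left & right          # species on both sides cancel out
    return left - common, right - common
```

Applied to the five-step magnesium chain above, this yields 4 H on the left and one 4He plus two neutrinos on the right, matching the summary line (the catalytic 24Mg and the electron cancel).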

In its earlier runs, ASTRA found an alternative to this process which astrophysicists qualified as more likely in neutron-rich stellar media (see Kocabas & Langley, 2000). ASTRA finds 25 exothermic helium burning reactions involving the range of elements from oxygen to silicon, including the 16 such reactions cited in the texts. Some of these reactions are,

4He + 16O  --> 20Ne             5.16 (MeV)

4He + 19F  --> 23Na            10.5
4He + 19F  --> H + 22Ne         1.72
4He + 20Ne --> 24Mg             9.3
4He + 22Ne --> 26Mg            10.6

4He + 23Na --> 27Al            10.2
4He + 23Na --> /nu + 27Si       5.4
4He + 23Na --> H + 26Mg         1.82

4He + 24Mg --> 28Si            10.1
4He + 25Mg --> 29Si            11.2
4He + 25Mg --> nu + 29Al        7.5
4He + 25Mg --> n + 28Si         2.73
4He + 26Mg --> 30Si            10.8
4He + 26Mg --> /nu + 30P        6.5
4He + 26Mg --> n + 29Si         0.13

4He + 27Al --> 31P              9.7
4He + 27Al --> /nu + 31S        4.2
4He + 27Al --> H + 30Si         2.42

4He + 28Si --> 32S              6.9
4He + 28Si --> nu + 32P         4.6
4He + 29Si --> 33S              7.2

Among these reactions, those that emit neutrinos (nu and /nu) are weak interactions, which are much slower than the other alpha capture reactions. Astrophysicists generally ignore the weak reactions for their slow rates, except in processes that rely on such weak reactions.

A careful comparison of the proton capture, neutron capture and helium burning reactions produced by ASTRA with the natural abundances of the elements from oxygen to sulphur in the CRC Handbook (80th ed., D.R. Lide, 1999-2000) reveals an interesting result: the elements fluorine, neon, sodium, magnesium, silicon, phosphorus and sulphur in the solar system must have been formed by alpha capture processes, rather than by proton or neutron captures. This is because the stable isotope abundances of these elements indicate a parallelism with the stepwise alpha capture (helium burning) of the stable lighter isotopes of the elements in the series (see Table 1). Indeed, the two alpha capture chains (16O, 20Ne, 24Mg, 28Si, 32S and 19F, 23Na, 27Al, 31P) contain the most abundant isotopes of these elements. These processes may have been accompanied by carbon, nitrogen and oxygen burning processes, which produce 24Mg, 28Si and 32S respectively, as shown in the next subsection. Although proton capture reactions explain the relative abundance of 19F, 20Ne, 23Na, 24Mg, 27Al, 28Si, and 32S, they fail to explain the relative abundance of 31P. Similarly, neutron capture reactions fail to explain the relative abundances of 20Ne, 24Mg and 28Si. Yet, stepwise alpha capture explains the relative abundances of all the isotopes in the series. We are currently investigating the astrophysical literature on the origins of the elements from fluorine to sulphur before claiming any novelty on this issue.
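The rate-based pruning described above can be mimicked in a crude, purely illustrative way by tagging neutrino-emitting reactions as weak; the reactions and Q-values below are copied from the helium-burning table above, and the rule "weak iff a neutrino appears among the products" is our simplification, not ASTRA's criterion:

```python
# Separate weak (neutrino-emitting) reactions from the other alpha-capture
# reactions, as astrophysicists do when pruning slow reactions from a
# network.  Q-values (MeV) are those quoted in the text; the weak/strong
# rule here is an illustrative simplification.
reactions = [
    ("4He + 25Mg", ["29Si"], 11.2),
    ("4He + 25Mg", ["nu", "29Al"], 7.5),
    ("4He + 25Mg", ["n", "28Si"], 2.73),
    ("4He + 26Mg", ["30Si"], 10.8),
    ("4He + 26Mg", ["/nu", "30P"], 6.5),
    ("4He + 26Mg", ["n", "29Si"], 0.13),
]

def is_weak(products):
    return any(p in ("nu", "/nu") for p in products)

strong = [r for r in reactions if not is_weak(r[1])]
weak = [r for r in reactions if is_weak(r[1])]
print(len(strong), len(weak))  # 4 strong, 2 weak
```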

Automated Formulation of Reactions and Pathways

177

Table 1. Relative abundances of some isotopes for elements from oxygen to sulphur.

    isotope   % abundance     isotope   % abundance
    16O        99.76          18O        0.2
    19F       100             18F        0
    20Ne       90.48          22Ne       9.25
    23Na      100             22Na       0
    24Mg       78.99          26Mg      11.01
    27Al      100             26Al       0
    28Si       92.23          29Si       4.67
    31P       100             30P        0
    32S        95.0           34S        4.21
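The parallelism claimed above can be checked mechanically against Table 1: every member of the two alpha-capture chains is the dominant stable isotope of its element. A small sketch, with the abundances copied from Table 1:

```python
# % abundances of selected isotopes, copied from Table 1.
abundance = {
    "16O": 99.76, "18O": 0.2, "19F": 100, "18F": 0,
    "20Ne": 90.48, "22Ne": 9.25, "23Na": 100, "22Na": 0,
    "24Mg": 78.99, "26Mg": 11.01, "27Al": 100, "26Al": 0,
    "28Si": 92.23, "29Si": 4.67, "31P": 100, "30P": 0,
    "32S": 95.0, "34S": 4.21,
}

# The two stepwise alpha-capture chains discussed in the text.
chains = [
    ["16O", "20Ne", "24Mg", "28Si", "32S"],
    ["19F", "23Na", "27Al", "31P"],
]

# Each chain member dominates its element's isotope mix.
for chain in chains:
    for isotope in chain:
        assert abundance[isotope] > 75.0, isotope
print("both chains consist of dominant isotopes")
```

The weakest case in the chains is 24Mg at 78.99%; everything else is above 90%.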

4.4 Carbon, Nitrogen, and Oxygen Burning

Carbon burning, in which two carbon atoms fuse together to produce heavier elements, takes place after the helium burning stage in a star. ASTRA finds four carbon burning reactions which produce the elements neon, sodium, and magnesium:

    12C + 12C -> 24Mg          + 14.4 (MeV)
    12C + 12C -> nu + 24Na     +  8.9
    12C + 12C -> H + 23Na      +  2.72
    12C + 12C -> 4He + 20Ne    +  5.1

In nitrogen burning, two nitrogen atoms fuse together to form elements ranging from oxygen to silicon. ASTRA finds 10 such reactions:

    14N + 14N -> 28Si          + 27.82 (MeV)
    14N + 14N -> nu + 28Al     + 23.12
    14N + 14N -> n + 27Si      + 10.65
    14N + 14N -> H + 27Al      + 16.24
    14N + 14N -> D + 26Al      +  5.52
    14N + 14N -> 3He + 25Mg    +  4.52
    14N + 14N -> 4He + 24Mg    + 17.72
    14N + 14N -> 8Be + 20Ne    +  8.32
    14N + 14N -> 12C + 16O     + 10.46
    14N + 14N -> 13N + 15N     +  0.72

Finally, ASTRA formulates the following oxygen burning reactions in which two oxygen atoms fuse together in exothermic reactions, and the elements magnesium, silicon, phosphorus and sulphur are generated:


    16O + 16O -> 32S           + 17.12 (MeV)
    16O + 16O -> nu + 32P      + 14.82
    16O + 16O -> n + 31S       +  2.05
    16O + 16O -> H + 31P       +  8.34
    16O + 16O -> 4He + 28Si    + 10.22
    16O + 16O -> 8Be + 24Mg    +  0.02

Carbon, nitrogen and oxygen burning reactions happen only in massive stars, as they require higher energies to initiate. The astrophysics texts that we examined mention only a few of these reactions, such as 12C + 12C -> 24Mg, 14N + 14N -> 28Si, and 16O + 16O -> 32S, while ASTRA provides a full account of such reactions.

5 Discussion of Results

We have compared ASTRA's earlier outputs involving the elements from hydrogen to oxygen to those available in astrophysics texts (Clayton, 1983; Audouze & Vauclair, 1980; Kippenhahn & Weigert, 1994; Fowler et al., 1967, 1975, 1983; Cujec & Fowler, 1980; Adelberger et al., 1998), and discussed some of its results with astrophysicists. We received encouraging comments from domain experts on the earlier outputs (see Kocabas & Langley, 2000). However, the reactions and processes of the light elements have already been studied extensively by nuclear astrophysicists. For this reason, we decided to extend the scope of the program to investigate the reactions and processes of the elements from oxygen to sulphur.

The ASTRA program can handle a very large volume of data for constructing reactions and reaction networks. Astrophysicists normally formulate the reactions by hand, and construct the reaction networks by focusing on the more likely reactions using certain domain criteria. It is in this way that the hydrogen and helium burning processes involving the lighter elements have been dealt with extensively in the current literature. But as the number of possible reactions increases rapidly for the heavier elements, a complete analysis of the reactions and processes can only be carried out with the aid of a computational tool such as our program.

Although we tested ASTRA on the reactions of the elements from hydrogen (H) to sulphur (32S) with some interesting results, we plan to extend the system for exploring the reactions of the heavier elements from sulphur to iron (56Fe) and beyond, which take place in stellar and interstellar processes. Understanding the nuclear processes in which the chemical elements are formed is important in more ways than one, as it provides detailed information about the stellar and interstellar conditions that produced these elements.
This is why cosmologists and astronomers are also very much interested in these processes, as are nuclear astrophysicists. We have described in Section 4 how an analysis of the reactions and reaction processes produced by ASTRA, together with the natural abundances of chemical elements and isotopes, can lead to a detailed picture of the conditions in which these elements are formed.

Astrophysicists use reaction rates to rule out slower reactions from their reaction networks. The current version of ASTRA can use reaction rates to rule out candidates, retaining only those reactions with the highest rates to construct reaction networks. But the rate for each reaction must be given by the user; the program cannot calculate them. We recently attempted to incorporate rate calculation in ASTRA but decided


not to go on with this, because of the complexities involved. Rate calculations are based on the reaction cross-sections and element concentrations in stellar media. Astrophysicists first construct a model of the star by making a number of assumptions about the star's size, mass, temperature, pressure and element distribution. The stellar plasma is also treated in several layers, through which element compositions, dominant reactions and processes change.

Although ASTRA can search a much larger space of reactions and processes than human scientists can, we did not meet any problems with it for the elements and isotopes from hydrogen to sulphur, involving 68 distinct entities. We have yet to see whether we will need to constrain the scope of the reactions for the elements from hydrogen to iron. We plan to extend the program to investigate the reactions of the elements from sulphur to iron. Meanwhile we will continue to investigate the literature about the origins of heavier elements in the solar system.

6 Related Research

The ASTRA system has evolved from our previous work on the computational study of discoveries in particle physics with BR-4 (Kocabas & Langley, 1995), which models the discoveries in this field by prediction and theory revision. BR-4 inherits some of its capabilities from its predecessor BR-3 (Kocabas, 1991), which in turn descends from STAHL (Zytkow & Simon, 1986) and STAHLp (Rose & Langley, 1986), which modelled qualitative discovery in chemistry. Our system shares goals and techniques with the more recent systems MECHEM (Valdes-Perez, 1995), designed to discover new reaction mechanisms in catalytic chemistry, and SYNGEN (Hendrickson, 1995), which constructs pathways for the synthesis of complex organic chemicals from simpler constituents.

There are many similarities between ASTRA and MECHEM in terms of the tasks they perform. Both systems produce reactions and reaction mechanisms in large search spaces, and both are designed as computational aids for scientists. But the two systems differ in their inputs and outputs. MECHEM receives as input the initial and final chemical substances and generates all the simple reaction pathways using a set of constraints on chemical reactivity. Similarly, ASTRA uses a set of quantum constraints to formulate the reactions from which it constructs the reaction links for each element until the final element is reached. The reaction links in a chain constitute what astrophysicists call 'the reaction network'. ASTRA has to deal with a large number of entities (elementary particles, elements and their isotopes), and an even larger number of reactions of these entities, to construct valid reactions and reaction chains, while MECHEM has a relatively smaller search space in its domain of application. MECHEM's reaction pathways are lists of reaction steps, normally with at most two reactants and two products. In contrast, the reactions of ASTRA can have from one to three entities on both sides.
As for the comparison between our system and SYNGEN, the latter addresses the synthesis of organic chemicals, where one needs to determine the reaction paths and the initial substances through a set of known intermediate substances. The constraints of SYNGEN are more similar to those used by MECHEM, though the two operate in different fields of chemistry. Our program differs from these systems in its field of application and the types of constraints used.


7 Conclusions

In this paper we described the new results of ASTRA, a computational tool which formulates reactions and reaction chains for researchers in nuclear astrophysics. The system determines all valid reactions for a given set of elements, isotopes and particles using a set of quantum constraints. The system also generates all reaction pathways for an element starting from a set of lighter elements. ASTRA generates all reactions we have seen in the astrophysics literature involving proton, electron and neutron captures, and helium, carbon, nitrogen and oxygen burning. ASTRA also reproduces all reaction chains that scientists have proposed for the synthesis of helium, carbon, nitrogen and oxygen in stellar media. But many of the valid reactions and reaction chains that the system generates do not appear in the related scientific literature. The domain experts that we have contacted suggested that some of these results carry theoretical interest for certain stellar models, but the vast majority of the reaction chains would be ignored by astrophysicists for their low rates.

Earlier we decided to incorporate rate calculations in the ASTRA system, but later abandoned this project because of the complexities involved. Instead, we focused on extending the system's knowledge base to investigate the reactions and processes of the heavier elements. Given information about 32 more elements and isotopes from oxygen to sulphur, amounting to a total of 68 distinct entities, the program generated all the proton, electron and neutron capture reactions and all the helium, carbon, nitrogen and oxygen burning reactions. A close comparison of these reactions with the stability and natural abundances of the 32 isotopes between oxygen and sulphur indicated that the stable isotopes in this range must have been formed by exothermic alpha capture reactions accompanied by carbon, nitrogen and oxygen burning, rather than by proton or neutron capture reactions.
We are currently investigating the literature for any scientific record on this issue.

References

Adelberger, E.G., et al. (1998). Solar fusion cross sections. Reviews of Modern Physics, vol. 70, no. 4, pp. 1266-1291.
Audouze, J., & Vauclair, S. (1980). An introduction to nuclear astrophysics. Dordrecht: D. Reidel.
Clayton, D.D. (1983). Principles of Stellar Evolution and Nucleosynthesis. Chicago: The University of Chicago Press.
Cujec, B., & Fowler, W.A. (1980). Neglect of D, T, and 3He in advanced stellar evolution. The Astrophysical Journal, 236, 658-660.
Feigenbaum, E.A., Buchanan, B.G., & Lederberg, J. (1971). On generality and problem solving: A case study using the DENDRAL program. In Machine Intelligence (vol. 6). Edinburgh: Edinburgh University Press.
Fowler, W.A. (1986). The synthesis of the chemical elements carbon and oxygen. In S.L. Shapiro & S.A. Teukolsky (Eds.), Highlights of modern astrophysics. New York: John Wiley & Sons.
Fowler, W.A., Caughlan, G.R., & Zimmermann, B.A. (1967). Thermonuclear reaction rates. Ann. Rev. Astron. Astrophysics, 5, 525-570.
Fowler, W.A., Caughlan, G.R., & Zimmermann, B.A. (1975). Thermonuclear reaction rates. Ann. Rev. Astron. Astrophysics, 13, 69-112.


Harris, M.J., Fowler, W.A., Caughlan, G.R., & Zimmermann, B. (1983). Thermonuclear reaction rates. Ann. Rev. Astron. Astrophysics, 21, 165-176.
Hendrickson, J.B. (1995). Systematic synthesis design: The SYNGEN program. Working Notes of the AAAI Spring Symposium on Systematic Methods of Scientific Discovery (pp. 1317). Stanford, CA: AAAI Press.
Jones, R. (1986). Generating predictions to aid the scientific discovery process. Proceedings of the Fifth National Conference on Artificial Intelligence (pp. 513-517). Philadelphia: Morgan Kaufmann.
Kippenhahn, R., & Weigert, A. (1994). Stellar Structure and Evolution. London: Springer-Verlag.
Kocabas, S. (1991). Conflict resolution as discovery in particle physics. Machine Learning, 6, 277-309.
Kocabas, S., & Langley, P. (1995). Integration of research tasks for modeling discoveries in particle physics. Working Notes of the AAAI Spring Symposium on Systematic Methods of Scientific Discovery (pp. 87-92). Stanford, CA: AAAI Press.
Kocabas, S., & Langley, P. (1998). Generating process explanations in nuclear astrophysics. Proceedings of the ECAI-98 Workshop on Machine Discovery (pp. 4-9), Brighton, UK.
Kocabas, S., & Langley, P. (2000). Computer generation of process explanations in nuclear astrophysics. International Journal of Human-Computer Studies, 53, 1149-1164. Academic Press.
Kulkarni, D., & Simon, H.A. (1990). Experimentation in machine discovery. In J. Shrager & P. Langley (Eds.), Computational models of scientific discovery and theory formation. San Mateo, CA: Morgan Kaufmann.
Lang, K.R. (1974). Astrophysical formulae: A compendium for physicists and astrophysicists. New York: Springer-Verlag.
Langley, P. (1981). Data-driven discovery of physical laws. Cognitive Science, 5, 31-54.
Langley, P. (1998). The computer-aided discovery of scientific knowledge. Proceedings of the 1st International Conference on Discovery Science. Fukuoka, Japan: Springer.
Langley, P., Simon, H.A., Bradshaw, G.L., & Zytkow, J.M. (1987). Scientific Discovery: Computational explorations of the creative processes. Cambridge, MA: MIT Press.
Lee, Y., Buchanan, B.G., Mattison, D.R., Klopman, G., & Rosenkranz, H.S. (1995). Learning rules to predict rodent carcinogenicity. Machine Learning, 30, 217-240.
Lide, D.R. (Ed.). (1999-2000). CRC handbook of chemistry and physics (80th ed.). Florida: CRC Press.
Mitchell, F., Sleeman, D., Duffy, J.A., Ingram, M.D., & Young, R.W. (1997). Optical basicity of metallurgical slags: A new computer-based system for data visualisation and analysis. Ironmaking and Steelmaking, 24, 306-320.
Rose, D., & Langley, P. (1986). Chemical discovery as belief revision. Machine Learning, 1, 423-451.
Valdes-Perez, R.E. (1995). Machine discovery in chemistry: New results. Artificial Intelligence, 74, 191-201.
Williams, W.S.C. (1991). Nuclear and Particle Physics. Oxford: Clarendon Press.
Zytkow, J.M., & Simon, H.A. (1986). A theory of historical discovery: The construction of componential models. Machine Learning, 1, 107-137.

Passage-Based Document Retrieval as a Tool for Text Mining with User's Information Needs

Koichi Kise(1,2), Markus Junker(1), Andreas Dengel(1), and Keinosuke Matsumoto(2)

(1) German Research Center for Artificial Intelligence (DFKI GmbH), P.O. Box 2080, 67608 Kaiserslautern, Germany
{Koichi.Kise, Markus.Junker, Andreas.Dengel}@dfki.de
(2) Department of Computer and Systems Sciences, Graduate School of Engineering, Osaka Prefecture University, 1-1 Gakuencho, Sakai, Osaka 599-8531, Japan
{kise, matsu}@cs.osakafu-u.ac.jp

Abstract. Document retrieval can be considered as a basic but important tool for text mining that is capable of taking a user's information need into account. However, document retrieval is a hard task if multi-topic lengthy documents have to be retrieved with a very short description (a few keywords) of the information need. In this paper, we focus on this problem, which is typical in real world applications. We experimentally validate that passage-based document retrieval is advantageous in such circumstances as compared to conventional document retrieval. Passage-based document retrieval is a kind of document retrieval which takes into account only small fractions (passages) of documents to judge the document relevance to the information need. As a passage-based method, we employ the method based on density distributions of keywords. This is compared with the following three conventional methods for document retrieval: the vector space model, pseudo-feedback, and latent semantic indexing. Experimental results show that the passage-based method is superior to the conventional methods if long documents have to be retrieved by short queries.

1 Introduction

The growing number of electronic textual documents has created the need for intelligent access to the information implied by them. The goal of text mining is to discover novel nuggets of information from a huge collection of documents to fulfill this need in the ultimate sense [1]. The unstructured nature of documents, however, makes it difficult to realize the goal in a general way. The current state of the art is to approach the goal by integrating the tools developed so far in other related research areas [2], though their functionality and/or domains of interest are still restricted. In order to take a step forward, it would be necessary both to devise novel combinations of the tools and to polish them up. A typical scenario of text mining would be that (1) information extraction is utilized to obtain the information from documents, and (2) data mining is applied

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 155-169, 2001.
© Springer-Verlag Berlin Heidelberg 2001


to the extracted information to derive novel information. In this scenario, the information of interest is fixed in the first stage of the processing. Another possibility is mining based on a user's ad-hoc information need. In this scenario, document retrieval is a tool applied at the first stage of processing to select the documents analyzed at later stages. Although the research area of document retrieval has several decades of history, it is still not trivial to retrieve documents relevant to a user's need. Two major problems uncovered through the research activities are as follows:

Multi-topic documents. If a document is beyond the length of an abstract, it often contains several topics. Even though one of them is relevant to the user's need, the rest are not necessarily relevant. As a result, these irrelevant parts severely disturb the retrieval of documents.

Short queries. It is common that a user's information need is fed to a system as a set of query terms. However, it is not an easy task for a user to transform the need into query terms. From the analysis of Web search logs, for example, it is well known that typical users issue quite short queries consisting of only a few terms. Such queries are too poor to retrieve documents appropriately.

In conventional document retrieval, the retrieval of multi-topic documents is a hard task, since there is no way to avoid the influence of irrelevant parts of documents. In order to tackle this problem, some researchers have proposed a different way of retrieval called passage-based document retrieval [3,4,5]. In passage-based document retrieval, documents are retrieved based only on fractions (passages) of documents, in order not to be disturbed by the irrelevant parts. It has been shown in the literature that passage-based document retrieval outperforms conventional document retrieval in processing long documents [5].
Handling passages as units of retrieval is also advantageous for the application to text mining, since it gives a clue for extracting the relevant parts of documents. In this paper, we experimentally validate that, for the second problem (short queries), passage-based document retrieval is likewise superior to conventional document retrieval. As the passage-based method, we utilize a method based on "density distributions" [6]. This method segments documents into passages dynamically in response to a query. As conventional methods, we employ the following three [7,8]: the vector space model, pseudo-feedback and latent semantic indexing.

2 Conventional Document Retrieval

Let us begin with an overview of conventional document retrieval methods. The task of document retrieval is to retrieve documents relevant to a given query from a fixed set of documents, or document database. Documents as well as queries are commonly represented using a set of index terms (simply called terms from now on), ignoring their positions in the documents and queries. Terms are determined based on the words of the documents in the database. In the following, ti (1 ≤ i ≤ m) and dj (1 ≤ j ≤ n) represent a term and a


document in the database, respectively, where m is the number of terms and n is the number of documents.

2.1 Vector Space Model

The vector space model (VSM) [7,8] is the simplest retrieval model. In the VSM, a document d_j is represented as an m-dimensional vector:

    d_j = (w_{1j}, ..., w_{mj})^T ,   (1)

where T indicates the transpose and w_{ij} is the weight of a term t_i in a document d_j. A query q is likewise represented as

    q = (w_{1q}, ..., w_{mq})^T ,   (2)

where w_{iq} is the weight of a term t_i in a query q. So far, a variety of schemes for computing the weights have been proposed. In this paper, we employ the standard "tf-idf" scheme, defined as follows:

    w_{ij} = tf_{ij} · idf_i ,   (3)

where tf_{ij} is the weight calculated from the term frequency f_{ij} (the number of occurrences of a term t_i in a document d_j), and idf_i is the weight calculated from the inverse of the document frequency n_i (the number of documents which contain a term t_i). In computing tf_{ij} and idf_i, the raw frequency is usually dampened by a function. We utilize tf_{ij} = f_{ij} and idf_i = log(n/n_i), where n is the total number of documents. The weight w_{iq} is similarly defined as w_{iq} = f_{iq}, where f_{iq} is the frequency of a term t_i in a query q.

The result of retrieval is represented as a list of documents ranked according to their similarity to the query. The similarity sim(d_j, q) between a document d_j and a query q is measured by the cosine of the angle between d_j and q:

    sim(d_j, q) = (d_j^T q) / (‖d_j‖ ‖q‖) ,   (4)

where ‖·‖ is the Euclidean norm of a vector.

2.2 Pseudo-Feedback

A problem of the VSM is that a query is often too short to rank documents appropriately. To cope with this problem, it has been proposed to enrich an original query by expanding it with terms in documents. A method called “pseudo-feedback” [8] is known as a way to obtain the terms for expansion. In this method, ﬁrst, documents are ranked with an original query. Then, highly ranked documents are assumed to be relevant and their terms are incorporated into the original query. Documents are ranked again by using the expanded query.
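The procedure just described — rank with the original query, treat the top-ranked documents as relevant, fold their terms into the query, and rank again — can be sketched with cosine similarity over term-weight vectors. This is a generic illustration on toy data, not the paper's exact variant (which uses a threshold τ and a mixing weight λ):

```python
# A generic pseudo-feedback sketch: rank documents by cosine similarity,
# assume the top-ranked document is relevant, add its term weights to the
# query, and rank again.  Toy vectors; not the paper's exact variant.
import math

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

docs = {
    "d1": {"text": 2.0, "mining": 3.0, "passage": 1.0},
    "d2": {"text": 1.0, "mining": 1.0, "retrieval": 2.0},
    "d3": {"sports": 3.0, "news": 2.0},
}
query = {"mining": 1.0}

# First-pass ranking with the original query.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)

# Expand the query with the terms of the single top-ranked document.
expanded = dict(query)
for term, w in docs[ranked[0]].items():
    expanded[term] = expanded.get(term, 0.0) + w

# Second-pass ranking with the expanded query.
reranked = sorted(docs, key=lambda d: cosine(expanded, docs[d]), reverse=True)
print(reranked)
```

After expansion the query also contains "text" and "passage", so documents sharing those terms with the assumed-relevant document are pulled upward, which is the intended effect of the feedback step.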


In this paper, we employ a simple variant of pseudo-feedback. Let E be the set of document vectors used for expansion, given by

    E = { d_j^+ | sim(d_j^+, q) / max_i sim(d_i, q) ≥ τ } ,   (5)

where q is the original query vector and τ is a threshold on the similarity. The sum d_s of the document vectors in E,

    d_s = Σ_{d_j^+ ∈ E} d_j^+ ,   (6)

can be considered as enriched information about the original query. The expanded query vector q′ is then obtained by

    q′ = q/‖q‖ + λ d_s/‖d_s‖ ,   (7)

where λ is a parameter controlling the weight of the newly incorporated component. Finally, documents are ranked again according to the similarity sim(d_j, q′) to the expanded query.

2.3 Latent Semantic Indexing

Latent semantic indexing (LSI) [7,8] is another well-known way to improve the VSM. Let D be the term-by-document matrix defined by

    D = (d̂_1, ..., d̂_n) ,   (8)

where d̂_j = d_j/‖d_j‖. By applying the singular value decomposition, D is decomposed into the product of three matrices:

    D = U S V^T ,   (9)

where U and V are matrices of size m × r and n × r (r = rank(D)), respectively, and S = diag(σ_1, ..., σ_r) is a diagonal matrix of singular values σ_i (σ_i ≥ σ_j if i ≤ j). Each row vector in U (V) corresponds to an r-dimensional vector representing a term (document). By keeping only the k (< r) largest singular values in S, along with the corresponding columns in U and V, D is approximated by

    D_k = U_k S_k V_k^T ,   (10)

where U_k, S_k and V_k are matrices of size m × k, k × k and n × k, respectively. This approximation allows us to uncover "latent" semantic relations among terms as well as documents.

Passage-Based Document Retrieval as a Tool for Text Mining

159

The similarity between a document and a query is measured as follows. Let v_j = (v_{j1}, ..., v_{jk}) be a row vector in V_k = (v_{ji}) (1 ≤ j ≤ n, 1 ≤ i ≤ k). In the k-dimensional (approximated) space, a document d_j is represented as

    d*_j = S_k v_j^T .   (11)

The original query is also represented in the k-dimensional space as

    q* = U_k^T q .   (12)

Then the similarity is obtained by sim(d*_j, q*).
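Equations (8)-(12) can be sketched in a few lines of NumPy on a toy term-by-document matrix (in practice the dimension k is tuned; the data here is invented for illustration):

```python
# LSI sketch following Eqs. (8)-(12): SVD of the normalized
# term-by-document matrix, rank-k truncation, and similarity in the
# reduced space.  Toy data for illustration.
import numpy as np

# Columns are documents (already term-weighted), rows are terms.
D = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
D = D / np.linalg.norm(D, axis=0)            # d^_j = d_j / ||d_j||  (8)

U, s, Vt = np.linalg.svd(D, full_matrices=False)   # D = U S V^T     (9)

k = 2
Uk, Sk, Vk = U[:, :k], np.diag(s[:k]), Vt[:k, :].T  # rank-k parts   (10)

d_star = (Sk @ Vk.T).T        # row j is d*_j = Sk v_j^T             (11)

q = np.array([1.0, 1.0, 0.0, 0.0])
q_star = Uk.T @ q             # q* = Uk^T q                          (12)

# Cosine similarity between each reduced document and the reduced query.
sims = d_star @ q_star / (np.linalg.norm(d_star, axis=1)
                          * np.linalg.norm(q_star))
print(sims.round(3))
```

Document 1, which shares both query terms, ends up with the highest similarity in the reduced space.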

3 Passage-Based Document Retrieval

Passages used in passage-based methods can be classified into three types: discourse, semantic and window [3]. Discourse passages are defined based on discourse units such as sentences and paragraphs. Semantic passages are obtained by segmenting text at the points where the subject of the text changes. Window passages are determined based on the number of terms. In this paper, we employ a passage-based method with window passages called "density distributions" (DD). The density distribution was first introduced to locate the descriptions of a word [9] and applied to passage retrieval by some of the authors [6].

The fundamental idea of DD is that parts of documents which densely contain the terms in a query are relevant to it. Figure 1 shows an example of a density distribution. The horizontal axis indicates the positions of terms in a document. The distribution of query terms in the document is shown as spikes in the figure; their height indicates the weight of a term. The density distribution shown in the figure is obtained by smoothing the spikes with a window function.

The details are as follows. Let a_j(l) (1 ≤ l ≤ L_j) be the term at position l in a document d_j, where L_j is the length of document d_j measured in terms. The weighted distribution b_j(l) of the terms in a query q is defined by

    b_j(l) = w_{iq} · idf_i   if a_j(l) = t_i ,
             0                otherwise .        (13)

Smoothing b_j(l) yields the density distribution dd_j(l) for a document d_j:

    dd_j(l) = Σ_{x=−W/2}^{W/2} f(x) b_j(l − x) ,   (14)

where f(x) is a window function with window size W. We employ the Hanning window function defined by

    f(x) = (1/2)(1 + cos(2πx/W))   if |x| ≤ W/2 ,
           0                       otherwise ,     (15)

Fig. 1. Density distribution: spikes show the weighted distribution of query terms over term positions; the smooth curve shows the density distribution obtained from them.

Fig. 2. Hanning window function f(x) on [−W/2, W/2].

whose shape is illustrated in Fig. 2. In order to utilize DD as a passage-based document retrieval method, the score of a document is calculated from its density distribution. The score of d_j for a query q is obtained as the maximum value of the density distribution:

    score(d_j, q) = max_l dd_j(l) .   (16)

This score is used to rank documents according to a query.
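Equations (13)-(16) can be sketched in pure Python on a toy document and query; for brevity the idf weights are assumed to be 1, which is a simplification of the method:

```python
# Density-distribution sketch following Eqs. (13)-(16): spread query-term
# weights over term positions, smooth with a Hanning window, and score a
# document by the maximum of the smoothed curve.  Toy data; idf = 1.
import math

def hanning(x, W):
    """Hanning window function, Eq. (15)."""
    if abs(x) <= W / 2:
        return 0.5 * (1 + math.cos(2 * math.pi * x / W))
    return 0.0

def density(doc_terms, query_weights, W):
    """Density distribution dd_j(l), Eq. (14), from b_j(l), Eq. (13)."""
    b = [query_weights.get(t, 0.0) for t in doc_terms]   # Eq. (13)
    L = len(doc_terms)
    return [
        sum(hanning(x, W) * b[l - x]
            for x in range(-W // 2, W // 2 + 1)
            if 0 <= l - x < L)
        for l in range(L)
    ]

doc = ["a", "b", "q", "q", "b", "a", "a", "a", "q", "a"]
query = {"q": 1.0}

dd = density(doc, query, W=4)
score = max(dd)                # Eq. (16): document score
best = dd.index(score)         # position of the densest passage
print(round(score, 3), best)
```

The two adjacent occurrences of the query term reinforce each other under the window, so the densest passage is found at their position rather than at the isolated occurrence later in the document.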

4 Experimental Comparison

In this section, we show the results of the experimental comparison. After the description of the test collections employed for the experiments, our methods for evaluating the results are described. Then, the results of experiments are presented and discussed.


Table 1. Statistics about documents in the test collections.

                     MED     CRAN    CR        FR
    size [MB]        1.1     1.6     235       209
    no. of doc.      1,033   1,398   27,922    19,789
    no. of terms†    4,284   2,550   37,769    43,760
    doc. len.‡
      min.           20      23      22        1
      max.           658     662     629,028   315,101
      mean           155     162     1,455     1,792
      median         139     142     324       550
    †: counted in words after stemming and eliminating stopwords
    ‡: counted in words before stemming and eliminating stopwords

Table 2. Statistics about queries in the test collections.

                     MED    CRAN    CR                     FR
                                    title  desc   narr     title  desc   narr
    no. of queries   30     225     34                     85
    query len.†
      min.           2      3       2      4      12       1      3      12
      max.           33     21      7      19     79       9      22     93
      mean           10.8   9.2     3.0    7.7    28.7     3.5    10.4   37.0
      median         9.0    9.0     3.0    6.5    24.5     3.0    10.0   34.0
    †: counted in words after stemming and eliminating stopwords

4.1 Test Collections

We made a comparison using four test collections: MED (medicine), CRAN (aeronautics), FR (federal register) and CR (congressional record). The collections MED and CRAN are available at [12], and FR and CR are contained in the TREC disks No. 2 and No. 4, respectively [13]. All collections are provided with queries and their groundtruth (a list of documents relevant to each query). For these collections, the terms used for document representation were obtained by stemming and eliminating stopwords (words which convey no meaning, such as "the"). Tables 1 and 2 show some statistics about the collections. In Table 1, an important difference is the length of documents: MED and CRAN consist of abstracts, while FR and CR contain much longer documents. In Table 2, a point to note is the difference in query length. In the TREC collections, each information need is described by query types of different length. In order to investigate the influence of query length, we employed three types: "title" (the shortest representation), "desc" (description; medium length) and "narr" (narrative; the longest).

4.2 Evaluation

Average Precision. A common way to evaluate the performance of retrieval methods is to compute the (interpolated) precision at some recall levels. This results in a number of recall/precision points which are displayed in recall-precision graphs [7]. However, it is sometimes convenient to have a single value that summarizes the performance. The average precision (non-interpolated) over all relevant documents [7,12] is such a measure. The definition is as follows. As described in Sect. 2, the result of retrieval is represented as a ranked list of documents. Let r(i) be the rank of the i-th relevant document counted from the top of the list. The precision for this document is calculated as i/r(i). The precision values for all documents relevant to a query are averaged to obtain a single value for the query. The average precision over all relevant documents is then obtained by averaging the respective values over all queries. For example, consider two queries q1 and q2 which have two and three relevant documents, respectively. Suppose the ranks of the relevant documents for q1 are 2 and 5, and those for q2 are 1, 3 and 10. The average precision for q1 and q2 is computed as (1/2 + 2/5)/2 = 0.45 and (1/1 + 2/3 + 3/10)/3 = 0.66, respectively. Then the average precision over all relevant documents, which takes both queries into account, is (0.45 + 0.66)/2 = 0.56.

Statistical Test. The next step of the evaluation is to compare the values of the average precision obtained by different methods. An important question here is whether a difference in the average precision is really meaningful or just by chance. In order to make such a distinction, it is necessary to apply a statistical test. Several statistical tests have been applied to the task of information retrieval [10,11]. In this paper, we utilize the test called the "macro t-test" [11] (called the paired t-test in [10]). The following is a summary of the test as described in [10].
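The worked average-precision example above can be reproduced directly (a minimal sketch; note that the document's final value of 0.56 comes from averaging the two already-rounded per-query values):

```python
# Non-interpolated average precision: for the i-th relevant document at
# rank r(i), precision is i / r(i); average over a query's relevant
# documents, then average over all queries.
def average_precision(relevant_ranks):
    ranks = sorted(relevant_ranks)
    return sum(i / r for i, r in enumerate(ranks, start=1)) / len(ranks)

ap_q1 = average_precision([2, 5])      # (1/2 + 2/5) / 2 = 0.45
ap_q2 = average_precision([1, 3, 10])  # (1/1 + 2/3 + 3/10) / 3 ~ 0.66
mean_ap = (ap_q1 + ap_q2) / 2          # ~ 0.55 unrounded; the text reports
                                       # 0.56 after rounding each AP first
print(round(ap_q1, 2), round(ap_q2, 2), round(mean_ap, 2))
```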
Let ai and bi be the scores (e.g., the average precision) of retrieval methods A and B for a query i, and define di = ai − bi. The test can be applied under the assumptions that the model is additive, i.e., di = µ + εi, where µ is the population mean and εi is an error, and that the errors are normally distributed. The null hypothesis here is µ = 0 (A performs equivalently to B in terms of the average precision), and the alternative hypothesis is µ > 0 (A performs better than B). It is known that the Student's t-statistic

    t = d̄ / √(s²/n)    (17)

follows the t-distribution with n − 1 degrees of freedom, where n is the number of samples (queries), and d̄ and s² are the sample mean and variance:

    d̄ = (1/n) Σ_{i=1}^{n} di,    (18)

    s² = (1/(n−1)) Σ_{i=1}^{n} (di − d̄)².    (19)

Passage-Based Document Retrieval as a Tool for Text Mining

Table 3. Values of parameters.

    parameter           MED, CRAN                 CR, FR
    PF   weight λ       1.0, 2.0                  1.0, 2.0
         threshold τ    0.71 ∼ 0.99, step 0.02    0.71 ∼ 0.99, step 0.02
    LSI  dimension k    60 ∼ 500, step 20         50 ∼ 500, step 50
    DD   window size W  20 ∼ 200, step 20         20 ∼ 100, step 10, and 150, 200, 300

Table 4. Best parameter values.

              MED    CRAN   CR title  CR desc  CR narr  FR title  FR desc  FR narr
    PF   λ    2.0    1.0    1.0       1.0      1.0      1.0       2.0      1.0
         τ    0.71   0.85   0.85      0.85     0.93     0.83      0.71     0.71
    LSI  k    60     260    300       500      400      350       500      500
    DD   W    80     100    50        90       200      90        40       40

By looking up the value of t in the t-distribution, we can obtain the P-value, i.e., the probability of observing the sample results di (1 ≤ i ≤ n) under the assumption that the null hypothesis is true. The P-value is compared to a predetermined significance level α in order to decide whether the null hypothesis should be rejected or not. As significance levels, we utilize 0.05 and 0.01.
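The t-statistic of Eqs. (17)-(19) can be computed with the Python standard library alone; the function name and the per-query scores below are our own illustrative choices, and the final decision is made by comparing t against a t-table, as described above:

```python
import math
import statistics

def macro_t_statistic(a_scores, b_scores):
    """t-statistic of Eqs. (17)-(19) for paired per-query scores."""
    d = [a - b for a, b in zip(a_scores, b_scores)]
    n = len(d)
    d_bar = statistics.mean(d)        # Eq. (18), sample mean
    s2 = statistics.variance(d)       # Eq. (19), sample variance (n - 1 divisor)
    return d_bar / math.sqrt(s2 / n)  # Eq. (17)

# Hypothetical average-precision scores of methods A and B on 4 queries.
a = [0.55, 0.60, 0.48, 0.52]
b = [0.45, 0.40, 0.33, 0.47]
t = macro_t_statistic(a, b)
# Compare t with the one-sided critical value of the t-distribution with
# n - 1 = 3 degrees of freedom (2.353 at alpha = 0.05).
print(round(t, 2))  # 3.87
```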

4.3   Results for the Whole Collections

The methods PF (pseudo-feedback), LSI (latent semantic indexing), and DD (density distributions) were applied with their parameter values ranged as shown in Table 3. Figure 3 illustrates, as examples, the variation in the average precision when varying the threshold τ in PF (λ = 1.0; left) and the window size W in DD (right). The lines in the graphs were obtained from the experiments on the collections CR and FR. Since these collections have three query sets (title, desc, narr), six lines are shown in each graph. In the graph of PF, the average precision fluctuated slowly but irregularly with the threshold τ. On the other hand, the average precision of DD partly changed rapidly at smaller window sizes and showed a tendency to converge as the window size became larger. Since better performance of DD was often obtained with smaller window sizes, DD would be more sensitive to the parameter W than PF is to τ. Although developing a method for automated adjustment of the window size is an important topic, it is beyond the scope of this paper; we simply selected the best parameter values, which are shown in Table 4.

[Fig. 3. Variations in the average precision: threshold τ in PF (λ = 1.0, left) and window size W in DD (right), with one line per query set (title, desc, narr) of the collections CR and FR.]

Table 5 shows the average precision obtained by using the best parameter values. In Table 5, the best and the second best values of average precision among the methods are indicated in bold and italic fonts, respectively. In the parentheses, the ratio of the difference to the VSM is noted: letting x and y be the average precision of the VSM and of the method under comparison, respectively, the ratio is calculated as (y − x)/x, so a positive value indicates a gain and a negative value a loss.

Table 5. Average precision over all relevant documents.

           MED             CRAN            CR title        CR desc         CR narr         FR title        FR desc         FR narr
    VSM    0.530           0.401           0.127           0.172           0.172           0.098           0.094           0.120
    PF     0.640 (+20.8%)  0.450 (+12.2%)  0.169 (+33.1%)  0.195 (+13.4%)  0.184 (+7.0%)   0.115 (+17.3%)  0.123 (+30.9%)  0.119 (−0.8%)
    LSI    0.685 (+29.2%)  0.444 (+10.7%)  0.101 (−20.5%)  0.128 (−25.6%)  0.134 (−22.1%)  0.043 (−56.1%)  0.051 (−45.7%)  0.075 (−37.5%)
    DD     0.507 (−4.3%)   0.370 (−7.7%)   0.165 (+29.9%)  0.159 (−7.6%)   0.151 (−12.2%)  0.177 (+80.6%)  0.207 (+120%)   0.237 (+97.5%)
    ( ) : difference to the VSM

The results of the macro t-test for all pairs of methods are shown in Table 6. Roughly speaking, "A ≫ (≪) B" indicates that A performs better (worse) than B at the significance level 0.01, "A > (<) B" that A performs better (worse) than B at the level 0.05, and "A ∼ B" that A and B perform equivalently. For example, the symbol "<" was obtained in the case of DD compared to the VSM for the MED collection. This indicates that, at the significance level α = 0.05, the null hypothesis "DD performs equivalently to the VSM" is rejected and the alternative hypothesis "DD performs worse than the VSM" is accepted. At α = 0.01, however, the null hypothesis cannot be rejected.

[Table 6. Results of the macro t-test. Rows: the method pairs DD–VSM, DD–PF, DD–LSI, PF–VSM, PF–LSI, LSI–VSM; columns: MED, CRAN, and the title/desc/narr query sets of CR and FR. Legend: ≫, ≪ : P-value ≤ 0.01; >, < : 0.01 < P-value ≤ 0.05; ∼ : 0.05 < P-value.]

The results can be summarized as follows:

- For the collections of short documents (MED and CRAN), the methods PF and LSI outperformed the VSM and DD.
- For the collection CR, which includes long documents, the methods mostly performed equivalently. The exception was the performance of PF: as shown in Table 6, PF was better than the VSM and LSI for the shortest queries (title), as well as better than DD for the middle-length queries (desc). Note that methods are found to be equivalent by the statistical test even when the ratios of the difference of the average precision are bigger than those for MED and CRAN. For example, PF outperformed the VSM for MED and CRAN with the ratios +20.8% and +12.2%, while DD was judged equivalent to the VSM for CR with the ratio +29.9% (cf. Table 5). This is because the statistical test takes into account not only the average precision but also its variance and the number of queries.
- For the collection FR, which also includes long documents, DD clearly outperformed the other methods. The advantage of PF and LSI on the collections of short documents did not hold here.

From the above results, the influence of the length of documents and queries on the performance of the methods remains unclear. Although DD was shown to be inferior to PF and LSI for short documents, DD outperformed the other methods only for one of the collections containing long documents. This could be due to the nature of the collections CR and FR: although they include much longer documents than MED and CRAN, they also include many short documents, as shown by the gap between the mean and the median in Table 1.

4.4   Results for Partitioned Collections

In order to clarify the relation between the performance and the length of documents and queries, we partitioned each of the collections CR and FR into three smaller collections as follows. Documents in the collections were first split

Table 7. Statistics about the partitioned collections.

                      CR                                      FR
                      relevant doc.             irrel.       relevant doc.             irrel.
                      short    middle   long    doc.         short    middle   long    doc.
    no. of doc.       251      251      252     27,168       148      148      148     19,345
    doc. len. min.    67       604      3,055   22           114      1,554    6,037   1
              max.    601      3,029    629,028 385,065      1,512    5,994    315,101 124,353
              mean    334      1,315    33,550  1,169        859      3,075    35,982  1,528
              median  303      1,078    11,236  318          835      2,886    17,037  536
    no. of queries    27       30       27      —            43       44       63      —

into two disjoint sets: documents relevant to at least one query, and those irrelevant to all queries. The set of relevant documents was further divided into three disjoint subsets of almost equal size according to the length of documents: short relevant documents, middle-length relevant documents, and long relevant documents. By combining each subset with the set of irrelevant documents, we prepared three partitioned collections called "short", "middle" and "long". As queries for each partitioned collection, we took the queries that are relevant to at least one document in the partitioned collection. Since some documents are relevant to more than one query, the numbers of queries do not sum up to the number of queries in the original collections (cf. Table 2). The statistics about the partitioned collections are shown in Table 7.

Using the best parameters as shown in Table 4, we computed the average precision for the partitioned collections. Figure 4 illustrates the results. Each graph in the figure represents the results for a pair of a set of partitioned collections and a query length; the horizontal axes of the graphs indicate the partitioned collections. These graphs show that the conventional methods (VSM, PF, LSI) performed worse as the documents became longer. On the other hand, DD yielded almost equivalent results for all document lengths on the CR collection, and even better results on the FR collection as the documents became longer. Table 8 shows the results of the statistical test for the partitioned collections. DD yielded significantly better results in most of the cases for the "long" partitions. These results confirm that passage-based document retrieval is better for longer documents, which has already been reported in the literature [5]. Let us now turn to the influence of the query length. Figure 5 illustrates the same results as in Fig. 4 but arranged in a different way.
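Stepping back to the construction of the partitioned collections, the procedure can be sketched as follows; the data layout (dictionaries mapping document ids to lengths and query ids to relevant-document sets) is our own assumption:

```python
def partition_collection(docs, queries, relevance):
    """Split a collection into "short", "middle" and "long" parts.

    docs:      {doc_id: document length}
    relevance: {query_id: set of relevant doc_ids}
    Each part combines roughly one third of the relevant documents
    (grouped by length) with all irrelevant documents, and keeps the
    queries that still have a relevant document in that part.
    """
    relevant = set().union(*relevance.values())
    irrelevant = set(docs) - relevant
    by_length = sorted(relevant, key=lambda d: docs[d])
    size = -(-len(by_length) // 3)  # ceiling division
    chunks = [by_length[i:i + size] for i in range(0, len(by_length), size)]
    parts = {}
    for name, chunk in zip(("short", "middle", "long"), chunks):
        part_queries = [q for q in queries if relevance[q] & set(chunk)]
        parts[name] = (set(chunk) | irrelevant, part_queries)
    return parts
```

Note that, as in Table 7, a query may survive in several parts when its relevant documents span different length classes, which is why the per-part query counts exceed the original count.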
Here, each graph corresponds to a partitioned collection and the horizontal axes represent the query lengths. For the "short" partitioned collections, no clear relation between the effectiveness of the methods and the query length could be found. On the other hand, for the "middle" and "long" partitioned collections with the shortest queries (title), DD was always the best among the methods. For the "middle" CR collection with longer queries, DD performed worse than the other methods. For the "long" CR collection and the "middle" FR collection, the advantage of DD shrank as the query length became longer. These tendencies can also be found in Table 8. For the "long" FR collection, the difference of the average precision between DD and the best of the other methods was about the same for all query lengths. However, there were disparities in their P-values: the P-value obtained with the shortest queries (title) was about 10 and 100 times smaller than those obtained with the middle-length (desc) and the longest queries (narr), respectively.

[Fig. 4. Average precision for the partitioned collections (horizontal axes: document length). Six panels (CR and FR × title, desc, narr queries) compare VSM, PF, LSI, and DD on the "short", "middle", and "long" partitions.]

[Table 8. Results of the macro t-test for the partitioned collections. Rows: DD–VSM, DD–PF, and DD–LSI for each of the "short", "middle", and "long" partitions; columns: the title/desc/narr query sets of CR and FR, using the symbols of Table 6.]

[Fig. 5. Average precision for the partitioned collections (horizontal axes: query length). Six panels (CR and FR × short, middle, long partitions) compare VSM, PF, LSI, and DD over the query sets title, desc, and narr.]

From the results obtained from the partitioned collections, we conclude that passage-based document retrieval outperforms conventional methods if relatively lengthy documents are retrieved with short queries. An explanation for this feature of passage-based document retrieval could be as follows. If lengthy documents are retrieved with short queries, it becomes more essential to take into account the proximity of query terms, as done only by the passage-based method. In other words, the passage-based method is capable of distinguishing a few query terms which are in the same context (located close to each other in a document) from those occurring in different contexts (far away from each other).

5   Conclusion

We have experimentally evaluated the effect of the length of documents and queries on document retrieval methods. The passage-based method, which ranks documents based on segmented passages, has been compared with three conventional document retrieval methods. The results for a variety of document collections show that the passage-based method is superior to the conventional methods for longer documents with shorter queries. This feature of passage-based retrieval is essential if we consider document retrieval as a tool for text mining based on a user's query, since (1) users tend to issue short queries, and (2) available documents are often longer than abstracts. In order to use passage-based document retrieval as such a tool, however, the following issues should be considered further. First, the window size appropriate for analyzing documents should be determined automatically. Second, passage-based document retrieval should work for short documents as well as the best conventional method does. These issues will be a subject of our future research.

Acknowledgment This work was supported by the German Ministry for Education and Research, bmb+f (Grant: 01 IN 902 B8).

References

1. M. A. Hearst, Untangling Text Data Mining, in Proc. ACL '99: the 37th Annual Meeting of the Association for Computational Linguistics, 1999.
2. M. Grobelnik, D. Mladenic and N. Milic-Frayling, Text Mining as Integration of Several Related Research Areas: Report on KDD'2000 Workshop on Text Mining, http://www.cs.cmu.edu/ dunja/WshKDD2000.html.
3. J. P. Callan, Passage-Level Evidence in Document Retrieval, in Proc. SIGIR '94, pp. 302–310, 1994.
4. G. Salton, A. Singhal and M. Mitra, Automatic Text Decomposition Using Text Segments and Text Themes, in Proc. Hypertext '96, pp. 53–65, 1996.
5. O. de Kretser and A. Moffat, Effective Document Presentation with a Locality-Based Similarity Heuristic, in Proc. SIGIR '99, pp. 113–120, 1999.
6. K. Kise, H. Mizuno, M. Yamaguchi and K. Matsumoto, On the Use of Density Distribution of Keywords for Automated Generation of Hypertext Links from Arbitrary Parts of Documents, in Proc. ICDAR '99, pp. 301–304, 1999.
7. R. Baeza-Yates and B. Ribeiro-Neto, Modern Information Retrieval, Addison-Wesley, 1999.
8. C. D. Manning and H. Schütze, Foundations of Statistical Natural Language Processing, MIT Press, 1999.
9. S. Kurohashi, N. Shiraki and M. Nagao, A Method for Detecting Important Descriptions of a Word Based on Its Density Distribution in Text, Trans. Information Processing Society of Japan, Vol. 38, No. 4, pp. 845–853, 1997 [in Japanese].
10. D. Hull, Using Statistical Testing in the Evaluation of Retrieval Experiments, in Proc. SIGIR '93, pp. 329–338, 1993.
11. Y. Yang and X. Liu, A Re-Examination of Text Categorization Methods, in Proc. SIGIR '99, pp. 42–49, 1999.
12. ftp://ftp.cs.cornell.edu/pub/smart/
13. http://trec.nist.gov/

Constructing Approximate Informative Basis of Association Rules

Kouta Kanda, Makoto Haraguchi, and Yoshiaki Okubo

Division of Electronics and Information Engineering, Hokkaido University
N-13 W-8, Sapporo 060-8628, Japan
{makoto, yoshiaki}@db-ei.eng.hokudai.ac.jp

Abstract. In the study of discovering association rules, reducing the number of generated rules without losing any information about the significant rules is regarded as an important task. From this point of view, Bastide et al. have proposed to generate only non-redundant rules [2]. Although the number of generated rules can be reduced drastically by taking this redundancy into account, many rules are often still generated. In this paper, we propose a method for further reducing the number of generated rules by extending the original framework. For this purpose, we introduce a notion of approximate generator and consider an approximate redundancy. Under our new notion of redundancy, many rules that are non-redundant in the original sense are judged redundant and made invisible to users. This achieves the reduction of generated rules. Furthermore, it is shown that any redundant rule can be easily reconstructed from our non-redundant rules together with its approximate support and confidence, where the maximum errors of these values can be evaluated by a user-defined parameter. We present an algorithm for constructing a set of non-redundant rules, called an approximate informative basis. The completeness and weak-soundness of the basis are theoretically shown: any significant rule can be reconstructed from the basis, and any rule reconstructed from the basis is (approximately) significant. Some experimental results show the effectiveness of our method as well.

1   Introduction

The discovery of association rules is an important task in the research area of Data Mining. Its main purpose is to identify relationships among items in a given large database. This kind of problem was first introduced by Agrawal et al. [1]. According to their statement, the problem can be divided into two sub-problems:

Finding frequent itemsets: Given a transaction database D, we try to find all frequent itemsets (1) in D.

Generating confident association rules: All confident association rules are generated based on the frequent itemsets.

(1) An itemset is a set of items appearing in D.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 141–154, 2001. © Springer-Verlag Berlin Heidelberg 2001


K. Kanda, M. Haraguchi, and Y. Okubo

In order to solve the former problem, we would be required to search an itemset lattice consisting of 2^m itemsets if we have m possible items. On the other hand, the latter problem can be solved in a straightforward manner once we have all frequent itemsets. Therefore, the former is considered primary and the latter secondary in an efficient discovery of association rules. In fact, many studies on association rule discovery have tended to concentrate on an efficient computation of the frequent itemsets, and many algorithms for this task have been proposed [1,4,5]. Thus, as many researchers have actually investigated, the task of finding all frequent itemsets is one of the important subjects in the discovery of association rules. However, we still have another significant issue to be addressed. It is concerned with the number of rules generated from the obtained frequent itemsets. In general, a large number of rules are generated and then presented to a user. Although it is ensured that the generated rules meet the requirements for support and confidence given by the user (2), they often include many rules that are in fact not so interesting to the user. Therefore, the user has to check each presented rule carefully in order to obtain the actually interesting ones. However, such a task is quite hard due to the large number of presented rules; in some cases, unfortunately, several interesting rules might even be missed. Therefore, it is helpful for the user to reduce the number of generated (and presented) rules without loss of any information about the possible ones. The purpose of this paper is to propose a method for such a reduction. By introducing a notion of redundancy of association rules, Bastide et al. have proposed to identify only the set of non-redundant ones, called an informative basis, and to present the basis to the user.
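The first sub-problem can be stated concretely. The brute-force miner below only illustrates the contract; real algorithms such as the ones cited above prune the 2^m lattice instead of enumerating it, and the transaction layout is our own assumption:

```python
from itertools import combinations

def frequent_itemsets(db, minsup):
    """Naive enumeration of all frequent itemsets.

    db: list of transactions, each a set of items.
    minsup: minimum support as a fraction of len(db).
    """
    items = sorted(set().union(*db))
    result = {}
    for k in range(1, len(items) + 1):
        for candidate in combinations(items, k):
            # Support = fraction of transactions containing the candidate.
            s = sum(1 for t in db if set(candidate) <= t) / len(db)
            if s >= minsup:
                result[frozenset(candidate)] = s
    return result

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b"}]
freq = frequent_itemsets(db, minsup=0.5)
print(len(freq))  # 5: {a}, {b}, {c}, {a,b}, {a,c}
```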
In a word, a non-redundant rule can be viewed as a representative of a set of rules, each of which has exactly the same support and confidence and can easily be reconstructed from the representative. For example, assume we have the following association rules: r1 = i1 → i2 ∧ i3 ∧ i4, r2 = i1 ∧ i2 → i3 ∧ i4 and r3 = i1 ∧ i2 ∧ i3 → i4, where their supports and confidences are exactly identical. Given r1, the others can be reconstructed from r1 by a quite simple operation, and their precise supports and confidences can be obtained immediately. In this sense, r2 and r3 are considered to be redundant and r1 to be non-redundant (3). Identifying the non-redundant rules is thus sufficient to obtain all the possible ones. As mentioned above, since a non-redundant rule corresponds to a representative of a set of rules, the number of non-redundant rules is expected to be much smaller than that of the possible rules. By considering only non-redundant rules, therefore, we can drastically reduce the number of rules to be generated.

From the authors' viewpoint, however, there often still exist many non-redundant rules, and it might be a costly task for users to check them. Although we can easily reconstruct any redundant rule from a non-redundant one with its precise support and confidence according to the original framework, the authors would like to claim that, from a practical point of view, it would be worth reducing the number of output rules further, even though we then can no longer derive the precise support and confidence of a redundant rule. We try in this paper to propose a method for such a reduction by extending the original approach. Especially for this purpose, the original notion of redundancy is extended according to the claim above. Since the support and confidence of a redundant rule can only be approximately derived from a non-redundant one under such an extended redundancy, these approximate values might not satisfy users who require a high precision of the derived values. In our framework, therefore, we can flexibly adjust the maximum error by giving an adequate value of a user-defined parameter ε (0 ≤ ε < 1). As ε approaches 1, the maximum error increases, but the number of non-redundant rules decreases. Conversely, as ε approaches 0, the maximum error approaches 0, but the number of non-redundant rules increases.

Given a user-defined parameter ε, in order to describe our non-redundancy, we define a set of rules w.r.t. ε, called an approximate informative basis (AIB(ε)). It will be proved that every rule r in AIB(ε) has the following property: any rule r′ reconstructable from r has approximately the same support and confidence as those of r, where the maximum errors of these values are evaluated by some formulas determined by ε. Thus a rule reconstructable from r is redundant. For the same reason, such a rule r in AIB(ε) is non-redundant and can approximately represent any rule reconstructable from it. For any significant rule r, there always exists a corresponding non-redundant rule in AIB(ε) from which r can be reconstructed; no significant rule can be lost once AIB(ε) is computed. The completeness in this sense and the weak-soundness of AIB(ε) are summarized in a theorem.

(2) If a rule meets the requirements, we say that the rule is significant.
(3) In a word, such a non-redundant rule is characterized as one with the minimal antecedent and the maximal consequent.
We present an algorithm for constructing AIB(ε). An eﬀectiveness of our method is shown by some experimental results. This paper is organized as follows. In the next section, we introduce some terminologies used throughout this paper. In Section 3, we brieﬂy explain the original framework by Bastide, et al. Section 4 discusses our method for constructing AIB(ε) with an example. Our preliminary experimental results are presented in Section 5. We summarize this paper and give some discussions in the last section. Especially, we brieﬂy describe a new interactive strategy, which we are going to develop, for identifying interesting rules based on the method presented in this paper.

2   Preliminaries

Let I be a finite set of items. An itemset l is a non-empty subset of I. A tuple ⟨id, l⟩ is called a transaction, where id is a transaction identifier and l is an itemset. A transaction database D is a finite set of transactions. We refer to itemset(id) as the itemset associated with id in a transaction. For a transaction t = ⟨id, l⟩, we say that t contains an itemset l′ if l′ ⊆ l. Given a transaction database D, the support of an itemset l, denoted by sup(l), is defined as the ratio of the number of transactions containing l to the number of all transactions in D. Let minsup be a user-defined threshold for the permissive minimum support. An itemset l is called a frequent itemset if sup(l) ≥ minsup.

An association rule r is an implication between two itemsets of the form r = l1 → (l2 \ l1), where l1 and l2 are itemsets such that l1 ⊂ l2. The support of r, denoted by sup(r), is defined as sup(r) = sup(l2). Furthermore, the confidence of r, denoted by conf(r), is defined as conf(r) = sup(l2) / sup(l1). Let minconf be a user-defined threshold for the permissive minimum confidence. An association rule r is said to be significant if sup(r) ≥ minsup and conf(r) ≥ minconf.

Given a transaction database D, let ID be the set of transaction identifiers in D. We consider a mapping ψ : 2^I → 2^ID defined as ψ(l) = { id | ⟨id, l′⟩ ∈ D ∧ l ⊆ l′ }. Moreover, we consider a mapping ϕ : 2^ID → 2^I defined as ϕ(ID′) = ∩_{id ∈ ID′} itemset(id). Based on these mappings, a closure operator γ : 2^I → 2^I is defined as γ(l) = ϕ(ψ(l)); that is, γ computes the maximum itemset shared by all transactions containing l. We say that an itemset l is closed if γ(l) = l. Since γ(γ(l)) = γ(l) holds for any itemset l, γ(l) is a closed itemset. It should be noted that for any itemset l′ such that l ⊆ l′ ⊆ γ(l), γ(l′) = γ(l) and sup(l) = sup(l′) = sup(γ(l)) hold. An itemset l is called an exact generator (E-generator) of γ(l). For a frequent closed itemset f, we refer to the set of E-generators of f as EG(f) and the set of minimal E-generators of f as MEG(f), that is, MEG(f) = { g | g ∈ EG(f) ∧ ¬∃g′ ∈ EG(f) such that g′ ⊂ g }.
For a frequent closed itemset f and its E-generator g ∈ MEG(f), the tuple (g, f) is called an EGC-tuple. Given an EGC-tuple (g, f), for any itemset l such that g ⊆ l ⊆ f, sup(g) = sup(l) = sup(f) holds. The set of EGC-tuples w.r.t. D is referred to as EGC(D).
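The mappings ψ and ϕ and the closure operator γ can be sketched on a toy database; the helper names and the data are our own:

```python
def sup(itemset, db):
    """Support: fraction of transactions containing the itemset."""
    itemset = set(itemset)
    return sum(1 for _, items in db if itemset <= items) / len(db)

def psi(itemset, db):
    """psi(l): identifiers of the transactions containing l."""
    itemset = set(itemset)
    return {tid for tid, items in db if itemset <= items}

def gamma(itemset, db):
    """Closure gamma(l) = phi(psi(l)): the maximum itemset shared by
    all transactions that contain l."""
    ids = psi(itemset, db)
    covering = [set(items) for tid, items in db if tid in ids]
    return set.intersection(*covering) if covering else set()

# Toy database of (id, itemset) transactions.
db = [(1, {"a", "b", "c"}), (2, {"a", "b"}), (3, {"a", "c"}), (4, {"b"})]
print(gamma({"c"}, db) == {"a", "c"})               # True: every transaction with c also has a
print(sup({"c"}, db) == sup(gamma({"c"}, db), db))  # True: sup(l) = sup(gamma(l))
```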

3   Informative Basis of Association Rules

In this section, we briefly introduce a method of reducing the number of generated rules [2]. The key notion of this approach is the redundancy of association rules.

Definition 1. (Redundancy of Association Rule) [2] Let r = l1 → (l2 \ l1) be an association rule. r is called a redundant rule iff there exists an association rule r′ = l1′ → (l2′ \ l1′) such that l1′ ⊆ l1, l2 ⊆ l2′, r′ ≠ r, sup(r′) = sup(r) and conf(r′) = conf(r).

Intuitively speaking, a redundant rule r is a rule which has exactly the same support and confidence as those of some non-redundant rule r′ and can be easily reconstructed from r′ by a simple operation on itemsets. Therefore, a non-redundant rule can be viewed as a representative of a set of redundant ones. This implies that extracting only non-redundant rules can be considered sufficient for the discovery of all possible rules. Since the number of non-redundant rules is obviously smaller than that of all rules, we can reduce the number of rules to be obtained by simply taking only non-redundant ones into account. Each non-redundant rule is characterized as a rule with the minimal antecedent and maximal consequent, and is formally defined in terms of E-generators and closures. It is shown that any rule can be reconstructed from a non-redundant rule with its precise support and confidence. Furthermore, some experimental results show that the number of non-redundant rules is much smaller than that of all possible rules. Therefore, the method can be considered effective and promising for reducing the number of rules to be generated. However, there often still exists a large number of non-redundant rules even though all redundant ones are discarded. Since the task of checking them would still be costly for users, more reduction is strongly desired to assist the user's task. In the next section, we try to propose a method for such a reduction by extending the original approach.
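Definition 1 translates directly into a filter. The rule representation (l1, l2, sup, conf) is our own, and the toy rules below are the r1, r2, r3 of the introduction with made-up support and confidence values:

```python
def non_redundant(rules):
    """Keep only non-redundant rules in the sense of Definition 1.

    rules: list of (l1, l2, sup, conf), representing l1 -> (l2 \ l1)
    with l1, l2 frozensets and l1 a proper subset of l2.
    """
    def redundant(r):
        l1, l2, s, c = r
        # Redundant iff some other rule has a smaller-or-equal antecedent,
        # a larger-or-equal consequent itemset, and identical sup/conf.
        return any(q is not r and q[0] <= l1 and l2 <= q[1]
                   and q[2] == s and q[3] == c
                   for q in rules)
    return [r for r in rules if not redundant(r)]

rules = [
    (frozenset({"i1"}), frozenset({"i1", "i2", "i3", "i4"}), 0.4, 0.8),               # r1
    (frozenset({"i1", "i2"}), frozenset({"i1", "i2", "i3", "i4"}), 0.4, 0.8),         # r2
    (frozenset({"i1", "i2", "i3"}), frozenset({"i1", "i2", "i3", "i4"}), 0.4, 0.8),   # r3
]
# r2 and r3 are redundant w.r.t. r1 (same sup/conf, larger antecedents).
print(len(non_redundant(rules)))  # 1
```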

4   Approximate Informative Basis of Association Rules

As just mentioned, we still have a large number of rules even if we discard the redundant ones. Although we can easily reconstruct any redundant rule from a non-redundant one with its precise support and confidence according to the original framework, the authors would like to claim that, from a practical point of view, it would be worth reducing the number of output rules further, even though the supports and confidences of redundant rules can then no longer be derived precisely. In this section, we try to propose a method for reducing the number of rules to be generated. Especially for this purpose, the original notion of redundancy is extended according to the claim above. In order to present our method, we first introduce a notion of approximate generator.

4.1   Approximate Generators of Closed Itemsets

An approximate generator is an extension of the E-generator that works more flexibly.

Definition 2. (A-Generators) Let l be an itemset and f a closed itemset. l is called an approximate generator (A-generator) of f if γ(l) ⊆ f and sup(f) / sup(γ(l)) ≥ 1 − ε, where ε is a user-defined parameter (0 ≤ ε < 1).

Note that any E-generator of a closed itemset f is an A-generator of f. The following property plays a very important role in our method.
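Definition 2 can be checked mechanically on a toy database; the inner `sup` and `gamma` helpers and the data are our own minimal stand-ins:

```python
def is_a_generator(l, f, db, eps):
    """Check Definition 2: l is an A-generator of the closed itemset f
    when gamma(l) is a subset of f and sup(f) / sup(gamma(l)) >= 1 - eps.

    db: list of transactions (itemsets); 0 <= eps < 1.
    """
    def sup(x):
        return sum(1 for t in db if set(x) <= t) / len(db)

    def gamma(x):
        covering = [t for t in db if set(x) <= t]
        return set.intersection(*map(set, covering)) if covering else set()

    g = gamma(l)
    return g <= set(f) and sup(f) / sup(g) >= 1 - eps

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"a"}]
# gamma({"b"}) = {"a", "b"} with support 0.5, while f = {"a", "b", "c"}
# has support 0.25; so {"b"} is an A-generator of f only when eps >= 0.5.
print(is_a_generator({"b"}, {"a", "b", "c"}, db, eps=0.5))  # True
print(is_a_generator({"b"}, {"a", "b", "c"}, db, eps=0.2))  # False
```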


Proposition 1. Let g be an A-generator of a closed itemset f. For any itemset l such that g ⊆ l ⊆ f,

    sup(g) ≥ sup(l) ≥ (1 − ε) sup(g)   and   sup(f) / (1 − ε) ≥ sup(l) ≥ sup(f).

Proof. From the definition of an A-generator, 1 ≥ sup(f) / sup(γ(g)) ≥ 1 − ε holds. Since sup(γ(g)) = sup(g), we have sup(g) ≥ sup(f) ≥ (1 − ε) sup(g). From sup(g) ≥ sup(l) ≥ sup(f), therefore, sup(g) ≥ sup(l) ≥ (1 − ε) sup(g) holds. Based on the inequalities above, we can easily obtain sup(f) / (1 − ε) ≥ sup(l) ≥ sup(f) as well.

The proposition implies that sup(g) and sup(f) can be considered as approximations of sup(l) if we can accept the errors. It should be noted here that the maximum errors are precisely evaluated with the parameter ε, so we can flexibly adjust them to a permissible level. As ε approaches 1, the maximum error becomes larger; conversely, as ε approaches 0, the maximum error approaches 0. In the case ε = 0, any A-generator corresponds to an E-generator.

4.2   Approximation of EGC-Tuples

As previously mentioned, for any itemset l, the support of l can be precisely identified with an EGC-tuple (g, f) such that g ⊆ l ⊆ f, since sup(g) = sup(l) = sup(f). Therefore, based on the set of EGC-tuples w.r.t. D, EGC(D), we can obtain the precise support of any itemset. On the other hand, we define here an approximation of EGC(D) with the help of A-generators. Using the approximation, we can approximately identify the support of any itemset within the maximum errors we just discussed.

Definition 3. (Approximation of EGC(D)) Let F be the set of frequent closed itemsets w.r.t. D and ε a user-defined parameter (0 ≤ ε < 1). Consider a partition {F1, . . . , Fk} of F (4) such that for each Fi there uniquely exists a closure fi∗ ∈ Fi with ∀f ∈ Fi, f ⊆ fi∗ and sup(fi∗) / sup(f) ≥ 1 − ε. For each Fi, let us consider AGC(Fi) = { (g, fi∗) | g ∈ min( ∪_{f ∈ Fi} MEG(f) ) } (5). An approximation of EGC(D) is defined as

    AGC(D, ε) = ∪_{i=1}^{k} AGC(Fi).

Each tuple in AGC(D, ε) is called an AGC-tuple.

(4) That is, F = ∪_{i=1}^{k} Fi and Fi ∩ Fj = ∅ (i ≠ j), where each Fi is called a cell.
(5) For a set S, min(S) denotes the set of minimal elements in S under the set-inclusion ordering.
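The definition only requires that some partition {F1, . . . , Fk} with the stated property exists. One simple greedy way to form such cells (our own heuristic, not prescribed by the paper) is to repeatedly pick a currently maximal closed itemset as f∗ and absorb the subsets whose support is close enough:

```python
def greedy_cells(closed, sup, eps):
    """Group frequent closed itemsets into cells {F1, ..., Fk}.

    closed: list of frozensets; sup: {frozenset: support}; 0 <= eps < 1.
    Each returned pair (f_star, cell) has f_star maximal in its cell and
    sup(f_star) / sup(f) >= 1 - eps for every member f, as Definition 3
    requires.  Greedy choice of f_star is our own heuristic.
    """
    remaining = sorted(closed, key=lambda f: (len(f), sorted(f)))
    cells = []
    while remaining:
        f_star = remaining.pop()  # a currently maximal itemset
        cell, rest = [f_star], []
        for f in remaining:
            if f <= f_star and sup[f_star] / sup[f] >= 1 - eps:
                cell.append(f)
            else:
                rest.append(f)
        remaining = rest
        cells.append((f_star, cell))
    return cells

closed = [frozenset({"a"}), frozenset({"a", "b"}), frozenset({"a", "b", "c"})]
supports = {closed[0]: 0.8, closed[1]: 0.75, closed[2]: 0.7}
# With eps = 0.1, {a,b} joins the cell of {a,b,c} (0.7/0.75 >= 0.9)
# but {a} does not (0.7/0.8 < 0.9), so two cells result.
print(len(greedy_cells(closed, supports, eps=0.1)))  # 2
```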

Constructing Approximate Informative Basis of Association Rules

147

From the definition, for each EGC-tuple (g, f) ∈ EGC(D), it is obvious that f belongs to exactly one Fi and that there exists an AGC-tuple (g*, fi*) ∈ AGC(D, ε) such that g* ⊆ g and f ⊆ fi*. Moreover, for any AGC-tuple (g*, f*), g* is an A-generator of f*. From these observations and Proposition 1, therefore, we obtain the following statement.

Proposition 2. For any frequent itemset l, there exists an AGC-tuple (g, f) ∈ AGC(D, ε) such that g ⊆ l ⊆ f. Furthermore, sup(g) ≥ sup(l) ≥ (1 − ε)sup(g) and sup(f) / (1 − ε) ≥ sup(l) ≥ sup(f) hold.

Proposition 2 implies that AGC(D, ε) can identify the support of any frequent itemset approximately, where the maximum errors are precisely bounded by functions of ε.

4.3  Approximate Informative Basis of Association Rules

Based on the set of AGC-tuples, AGC(D, ε), we can construct a basis of association rules, called an approximate informative basis (AIB), from which any significant rule can easily be reconstructed together with its approximate support and confidence. Before giving the formal definition, we introduce the notion of an approximate source of association rules.

Definition 4. (Approximate Sources of Association Rules) Let D be a transaction database, ε a user-defined parameter (0 ≤ ε < 1) and F the set of frequent closed itemsets. Assume that {F1, . . . , Fk} is the partition of F based on which AGC(D, ε) is constructed. For an EGC-tuple (g, f) ∈ EGC(D), consider the Fi such that f ⊆ fi*. The association rule to which the pair of (g, f) and AGC(Fi) is attached,

    s = ⟨g → (fi* \ g) : (g, f), AGC(Fi)⟩,

is called an approximate source (A-source) of association rules. (In what follows, depending on context, s often denotes only the rule g → (fi* \ g) of s.) The set of A-sources is referred to as AS(D, ε).

We can reconstruct a set of association rules from an A-source.

Definition 5. (Reconstruction of Association Rules from an A-source) Let s = ⟨g → (f* \ g) : (g, f), AGC(F)⟩ be an A-source. An association rule l1 → (l2 \ l1) can be reconstructed from s if g ⊆ l1 ⊆ f and, for some AGC-tuple (g*, f*) ∈ AGC(F), g* ⊆ l2 ⊆ f*.

As shown in the next proposition, for any association rule reconstructed from an A-source, its support and confidence fall within ranges determined by the values of the source and ε.
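Definition 5 can be made concrete with a small sketch. The itemsets and helper names below are made up for illustration; itemsets are represented as frozensets, and l1 ⊂ l2 is assumed so that the consequent is non-empty:

```python
from itertools import combinations

def between(lower, upper):
    """All itemsets l with lower ⊆ l ⊆ upper."""
    extra = sorted(upper - lower)
    for r in range(len(extra) + 1):
        for comb in combinations(extra, r):
            yield lower | frozenset(comb)

def reconstruct(g, f, agc_tuples):
    """Enumerate rules l1 → (l2 \\ l1) per Definition 5:
    g ⊆ l1 ⊆ f and g* ⊆ l2 ⊆ f* for some attached AGC-tuple (g*, f*)."""
    rules = set()
    for l1 in between(g, f):
        for g_star, f_star in agc_tuples:
            for l2 in between(g_star, f_star):
                if l1 < l2:  # proper antecedent, non-empty consequent
                    rules.add((l1, frozenset(l2 - l1)))
    return rules

# made-up A-source shaped like ⟨a → (abce \ a) : (a, ac), {(a, abce)}⟩
a, ac, abce = frozenset("a"), frozenset("ac"), frozenset("abce")
rules = reconstruct(a, ac, [(a, abce)])
```

Every reconstructed rule shares its antecedent support with g and its consequent support with the bracketing AGC-tuple, which is what makes the error bounds of the next proposition possible.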

148

K. Kanda, M. Haraguchi, and Y. Okubo

Proposition 3. Let s be an A-source and r an association rule reconstructed from s. Then

    sup(s)/(1 − ε) ≥ sup(r) ≥ sup(s)   and   conf(s)/(1 − ε) ≥ conf(r) ≥ conf(s)

hold.

Proof. Let s = ⟨g → (f* \ g) : (g, f), AGC(F)⟩ be an A-source and r = l1 → (l2 \ l1) an association rule reconstructed from s. From the definition of reconstruction, g ⊆ l1 ⊆ f and, for some AGC-tuple (g*, f*) in AGC(F), g* ⊆ l2 ⊆ f*. Note here that sup(g) = sup(l1) = sup(f). Furthermore, from Proposition 1, sup(g*) ≥ sup(l2) ≥ (1 − ε)sup(g*) and sup(f*)/(1 − ε) ≥ sup(l2) ≥ sup(f*) hold. Since sup(s) = sup(f*) and sup(r) = sup(l2), we immediately obtain sup(s)/(1 − ε) ≥ sup(r) ≥ sup(s). Moreover, since sup(g) = sup(l1) and sup(l2) ≥ sup(f*), sup(l2)/sup(l1) ≥ sup(f*)/sup(g) holds. Similarly, from sup(g) = sup(l1) and sup(f*)/(1 − ε) ≥ sup(l2), sup(f*)/{(1 − ε)sup(g)} ≥ sup(l2)/sup(l1) holds. Therefore, we obtain sup(f*)/{(1 − ε)sup(g)} ≥ sup(l2)/sup(l1) ≥ sup(f*)/sup(g), that is, conf(s)/(1 − ε) ≥ conf(r) ≥ conf(s).

The proposition states that if we accept the errors, then sup(s) and conf(s) can be viewed as approximations of sup(r) and conf(r), respectively. That is, a set of association rules can easily be reconstructed from an A-source together with their approximate supports and confidences. In this sense, we can consider these rules to be approximately redundant (A-redundant). Now we can define an approximate informative basis of association rules from which any significant rule can be reconstructed with approximate values of support and confidence.

Definition 6. (Approximate Informative Basis of Association Rules) Let D be a transaction database and ε a user-defined parameter (0 ≤ ε < 1). An approximate informative basis of the significant association rules w.r.t. D and ε, denoted by AIB(D, ε), is defined as the set of A-sources whose confidences are not less than (1 − ε)minconf:

    AIB(D, ε) = { s | s ∈ AS(D, ε) ∧ conf(s) ≥ (1 − ε)minconf }.

Theorem 1. Weak-Soundness of AIB(D, ε): Any association rule r reconstructed from an s in AIB(D, ε) is significant or, at worst, A-significant. (For an association rule r, if sup(r) ≥ minsup and minconf > conf(r) ≥ (1 − ε)minconf, we say that r is approximately significant (A-significant).)


Completeness of AIB(D, ε): For any significant association rule r, there exists an A-source s in AIB(D, ε) from which r can be reconstructed.

Proof. Weak-Soundness: Let r = l1 → (l2 \ l1) be an association rule reconstructed from an A-source s = ⟨g → (f* \ g) : (g, f), AGC(F)⟩ in AIB(D, ε). Then there exists an AGC-tuple (g*, f*) in AGC(F) such that g* ⊆ l2 ⊆ f*. From Proposition 3, sup(r) ≥ sup(s) and conf(r) ≥ conf(s). Since f* is a frequent closed itemset, sup(f*) ≥ minsup. From sup(s) = sup(f*), therefore, we have sup(r) ≥ minsup. Furthermore, since conf(s) ≥ (1 − ε)minconf, we immediately have conf(r) ≥ (1 − ε)minconf. Therefore, r is at worst A-significant.

Completeness: Let r = l1 → (l2 \ l1) be a significant association rule. For each li, there exists an EGC-tuple (gi, fi) in EGC(D) such that gi ⊆ li ⊆ fi. It should be noted here that since l1 ⊂ l2, f1 ⊆ f2 holds. Assume that AGC(D, ε) is constructed based on a partition PF of F. For the EGC-tuple (g2, f2), we can consider a cell F of PF such that f2 ⊆ f*, where f* is the maximum itemset in F. Therefore, there exists an AGC-tuple (g*, f*) in AGC(F) such that g* ⊆ g2 ⊆ f2 ⊆ f*. Furthermore, f1 ⊆ f* holds. Therefore, s = ⟨g1 → (f* \ g1) : (g1, f1), AGC(F)⟩ is an A-source from which r can be reconstructed. Since r is a significant rule, sup(l2)/sup(l1) ≥ minconf holds. Multiplying both sides by (1 − ε), we obtain (1 − ε)sup(l2)/sup(l1) ≥ (1 − ε)minconf. From Proposition 2, sup(f*)/(1 − ε) ≥ sup(l2) holds, that is, sup(f*) ≥ (1 − ε)sup(l2). Therefore, we have sup(f*)/sup(l1) ≥ (1 − ε)minconf. Since sup(l1) = sup(g1) and sup(f*)/sup(g1) = conf(s), AIB(D, ε) contains the A-source s.

From Theorem 1, it is ensured that once we have AIB(D, ε), no significant rule can be lost.

4.4  Constructing Approximate Informative Basis

Given a transaction database D, minsup, minconf, and a user-defined parameter ε, we can construct an approximate informative basis w.r.t. D and ε, AIB(D, ε). The construction process is divided into three sub-tasks:

1. Computing the set of EGC-tuples, EGC(D).
2. Computing an approximation of EGC(D), AGC(D, ε).
3. Constructing an approximate informative basis, AIB(D, ε).

The first task can be performed by adopting a Close [3]-like algorithm, and the last one is straightforward. An algorithm for the second task, computing AGC(D, ε) from EGC(D), is shown in Figure 1. In general, as ε becomes larger, the number of iterations of the while-loops decreases. The worst-case complexity of the algorithm is O(N²), where N is the size of EGC(D) (that is, the number of EGC-tuples in EGC(D)).


Input:  EGC(D) and ε.
Output: AGC(D, ε).

AGC(D, ε) ← ∅; EG ← ∅; Rem ← ∅; Min ← ∅;
while EGC(D) ≠ ∅ do
    pick up t = (g, f) from EGC(D);
    while EGC(D) ≠ ∅ do
        remove t′ = (g′, f′) from EGC(D);
        if f′ ⊆ f ∧ sup(f)/sup(f′) ≥ 1 − ε then EG ← EG ∪ {g′};
        else Rem ← Rem ∪ {t′};
        end
    end
    Min ← the set of minimal elements of EG;
    for g′ ∈ Min do AGC(D, ε) ← AGC(D, ε) ∪ {(g′, f)}; end
    EGC(D) ← Rem; EG ← ∅; Rem ← ∅; Min ← ∅;
end
Output AGC(D, ε)

Fig. 1. Algorithm for Constructing AGC(D, ε)
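The Figure 1 procedure can be sketched in a few lines of Python. One assumption is made explicit that the figure leaves implicit: tuples are picked in order of decreasing closure size, so the picked closure is always subset-maximal among the remaining ones. Run on the example EGC(D) of the next paragraph with ε = 0.7, the sketch reproduces the AGC(D, ε) given there:

```python
F = frozenset

def minimal(sets):
    """Minimal elements of a collection of sets under set inclusion."""
    return [s for s in sets if not any(t < s for t in sets)]

def build_agc(egc, eps):
    """Compute AGC(D, eps) from EGC(D), following Fig. 1.
    egc: list of EGC-tuples (g, f, sup(f)) with g, f frozensets."""
    remaining = sorted(egc, key=lambda t: len(t[1]), reverse=True)
    agc = []
    while remaining:
        _, f, sup_f = remaining[0]          # pick up a maximal closure f
        eg, rem = [], []
        for g2, f2, sup_f2 in remaining:    # the picked tuple matches itself
            if f2 <= f and sup_f / sup_f2 >= 1 - eps:
                eg.append(g2)               # g2 is an A-generator of f
            else:
                rem.append((g2, f2, sup_f2))
        agc.extend((g, f) for g in minimal(eg))
        remaining = sorted(rem, key=lambda t: len(t[1]), reverse=True)
    return agc

# EGC(D) of the running example, supports over 6 transactions
egc = [(F("a"), F("ac"), 3/6), (F("b"), F("b"), 5/6), (F("c"), F("c"), 5/6),
       (F("d"), F("acd"), 2/6), (F("e"), F("be"), 4/6), (F("ab"), F("abc"), 2/6),
       (F("ae"), F("abce"), 1/6), (F("bc"), F("bc"), 4/6),
       (F("bd"), F("abcd"), 1/6), (F("ce"), F("bce"), 3/6)]
agc = build_agc(egc, 0.7)   # six AGC-tuples, matching the example
```

Each outer iteration groups one cell Fi and emits its minimal A-generators, mirroring Definition 3.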

Example: For the transaction database D shown in Figure 2, we construct an approximate informative basis. In the database, each itemset is represented in a simple form; for example, the itemset {a, b, c} is denoted as abc. We assume here that minsup = 1/6 and ε = 0.7.

First, the set of EGC-tuples, EGC(D), is computed. For the database, we obtain the following 10 EGC-tuples, where the value attached to each tuple is its support:

EGC(D) = { (a, ac) : 3/6, (b, b) : 5/6, (c, c) : 5/6, (d, acd) : 2/6, (e, be) : 4/6, (ab, abc) : 2/6, (ae, abce) : 1/6, (bc, bc) : 4/6, (bd, abcd) : 1/6, (ce, bce) : 3/6 }.

Then AGC(D, ε) is constructed from EGC(D) according to the algorithm in Figure 1. In this case, we have

AGC(D, ε) = { (a, abce), (ce, abce), (d, abcd), (b, be), (e, be), (c, bc) }.

It should be noted here that the set of frequent closed itemsets, F = { abce, abcd, abc, bce, acd, ac, be, bc, b, c }, is divided into the following 4 cells:


ID  itemset
 1  acd
 2  bce
 3  abce
 4  be
 5  abcd
 6  bce

Fig. 2. Example of Transaction Database

F1 = { abce, abc, bce, ac }, F2 = { abcd, acd }, F3 = { be, b } and F4 = { bc, c }. That is, AGC(F1) = { (a, abce), (ce, abce) }, AGC(F2) = { (d, abcd) }, AGC(F3) = { (b, be), (e, be) } and AGC(F4) = { (c, bc) }.

Based on AGC(D, ε), we can obtain the set of A-sources, AS(D, ε), consisting of 20 sources. Assuming minconf = 0.85, we have the following approximate informative basis consisting of 12 sources:

AIB(D, ε) = {
  s1  = ⟨a → (abce \ a) : (a, ac), AGC(F1)⟩,
  s2  = ⟨a → (abcd \ a) : (a, ac), AGC(F2)⟩,
  s3  = ⟨b → (be \ b) : (b, b), AGC(F3)⟩,
  s4  = ⟨b → (bc \ b) : (b, b), AGC(F4)⟩,
  s5  = ⟨c → (bc \ c) : (c, c), AGC(F4)⟩,
  s6  = ⟨d → (abcd \ d) : (d, acd), AGC(F2)⟩,
  s7  = ⟨e → (be \ e) : (e, be), AGC(F3)⟩,
  s8  = ⟨ab → (abce \ ab) : (ab, abc), AGC(F1)⟩,
  s9  = ⟨ab → (abcd \ ab) : (ab, abc), AGC(F2)⟩,
  s10 = ⟨ae → (abce \ ae) : (ae, abce), AGC(F1)⟩,
  s11 = ⟨bd → (abcd \ bd) : (bd, abcd), AGC(F2)⟩,
  s12 = ⟨ce → (abce \ ce) : (ce, bce), AGC(F1)⟩ }.
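The EGC-tuples of this example can be re-derived directly from the Figure 2 database: the closure of an itemset X is the intersection of all transactions containing X, and supports are fractions of the 6 transactions. A minimal sketch:

```python
# The six transactions of Fig. 2
D = [set("acd"), set("bce"), set("abce"), set("be"), set("abcd"), set("bce")]

def closure(x):
    """Intersection of all transactions containing x."""
    covers = [t for t in D if x <= t]
    return frozenset.intersection(*map(frozenset, covers))

def sup(x):
    """Fraction of transactions containing x."""
    return sum(1 for t in D if x <= t) / len(D)

# e.g. the tuples (a, ac) : 3/6 and (b, b) : 5/6 from the example:
assert closure({"a"}) == frozenset("ac") and sup({"a"}) == 3/6
assert closure({"b"}) == frozenset("b") and sup({"b"}) == 5/6
```

The same check succeeds for every tuple listed above, since sup of a generator always equals sup of its closure.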


Table 1. Experimental Results

minsup = 0.1           minconf = 0.7   minconf = 0.5   minconf = 0.3
Close                  5,134           9,290           15,048
Our System (ε = 0.1)   1,733           2,985           4,444
Our System (ε = 0.2)   1,196           1,793           2,502

minsup = 0.05          minconf = 0.7   minconf = 0.5   minconf = 0.3
Close                  7,742           15,594          28,712
Our System (ε = 0.1)   3,203           5,817           9,822
Our System (ε = 0.2)   2,194           3,600           5,500

minsup = 0.01          minconf = 0.7   minconf = 0.5   minconf = 0.3
Close                  11,997          28,458          59,153
Our System (ε = 0.1)   6,900           13,290          25,113
Our System (ε = 0.2)   3,824           6,357           10,432

For example, from the A-source s1, the association rule r = a → (ac \ a) can be reconstructed with approximate support and confidence 1/6 (= sup(s1)) and 1/3 (= conf(s1)). On the other hand, its precise support and confidence are 3/6 and 1, respectively. We can easily verify that the errors in these values are within the bounds given by Proposition 3.

5  Experimental Results

In this section, we present our preliminary experimental results. In order to verify the effectiveness of our method, we have implemented a system to compute an AIB based on the algorithms presented in the previous section. The algorithm Close has been implemented as well, for comparison with the original method by Bastide et al. Our system and Close were written in C and tested on a 400 MHz Pentium II PC with 160 MB of memory. For our experimentation, we used the "1984 United States Congressional Voting Records Database" from the UCI Repository [7]. It consists of 435 transactions, and the number of possible items is 17. Our system computed AIBs for the database under various settings of the parameters minsup, minconf and ε. The numbers of rules output by each system are summarized in Table 1, where the results obtained by the original method are labeled Close. For each parameter setting, our system output fewer rules than Close. In the most effective case, about 70% reduction has been achieved


compared to Close (it has been reported that Close itself achieves about 80–90% reductions compared to Apriori). Even in the worst case, about 43% reduction has been achieved. Therefore, we consider our method very effective in reducing the number of generated rules.

6  Concluding Remarks

In this paper, we have presented a method for constructing an approximate informative basis (AIB) for significant association rules, from which any significant rule can easily be reconstructed together with its approximate support and confidence. The maximum errors of these values are precisely bounded by formulae determined by a user-defined parameter ε, so we can flexibly adjust the preciseness of the approximate values. Our experimental results have shown that the method can drastically reduce the number of generated rules compared to the original framework. Therefore, readability and understandability of the rules would be improved by providing an adequate value of ε.

As a next step in this study, we are planning to formalize a method for identifying actually interesting rules, with their support and confidence, in an interactive manner. In the initial stage, ε is given a value close to 1 by the user, and we obtain a rough AIB whose contents can easily and completely be checked. By checking them, the user selects several A-sources from which some interesting rules seem likely to be reconstructed. Then the user decreases the value of ε to obtain a more precise AIB. It should be noted that the system presents only the part of the AIB related to the A-sources previously selected by the user; therefore, we can obtain a more precise AIB while keeping its contents small. Similar steps are iterated on each presented AIB until the user satisfactorily identifies interesting rules with their support and confidence. Since the system keeps the contents of the presented AIB compact at each stage, the selection tasks would not be costly for the user. Such a system would therefore be quite helpful for users who try to discover interesting rules easily. In order to construct such an interactive system, we expect that the efficiency of computing an AIB will have to be improved further.
Our AIB is currently computed by adopting an extended version of the Close algorithm [3]. Although Close can efficiently identify the set of frequent closed itemsets, several new algorithms for the same task have been proposed recently, e.g., A-Close [4], CHARM [6] and CLOSET [5]. By adopting these algorithms, the efficiency of computing an AIB could be improved.

References

1. R. Agrawal and R. Srikant: Fast Algorithms for Mining Association Rules, Proc. of the 20th Int'l Conf. on Very Large Data Bases, pp. 478–499, 1994.
2. Y. Bastide, N. Pasquier, R. Taouil, G. Stumme and L. Lakhal: Mining Minimal Non-Redundant Association Rules Using Frequent Closed Itemsets, Proc. of the Int'l Conf. on Computational Logic (CL 2000), LNAI 1861, pp. 972–986, 2000.
3. N. Pasquier, Y. Bastide, R. Taouil and L. Lakhal: Efficient Mining of Association Rules Using Closed Itemset Lattices, Information Systems, vol. 24, no. 1, pp. 25–46, 1999.
4. N. Pasquier, Y. Bastide, R. Taouil and L. Lakhal: Discovering Frequent Closed Itemsets for Association Rules, Proc. of ICDT, LNCS 1540, pp. 398–416, 1999.
5. J. Pei, J. Han and R. Mao: CLOSET: An Efficient Algorithm for Mining Frequent Closed Itemsets, Proc. of DMKD 2000, 2000.
6. M. J. Zaki and C. Hsiao: CHARM: An Efficient Algorithm for Closed Association Rule Mining, Technical Report 99-10, Computer Science, Rensselaer Polytechnic Institute, 1999.
7. P. M. Murphy and D. W. Aha: UCI Repository of machine learning databases, http://www.ics.uci.edu/mlearn/MLRepository.html, Univ. of California, Dept. of Information and Computer Science, 1994.

Multicriterially Best Explanations

Naresh S. Iyer and John R. Josephson

The Ohio State University, Laboratory for Artificial Intelligence Research,
Computer and Information Science Department, Columbus, Ohio, 43210 USA
{niyer,jj}@cis.ohio-state.edu

Abstract. Inference to the best explanation (IBE), or abduction, requires finding the best explanatory hypothesis, from a set of rival hypotheses, to explain a collection of data. The notion of best, however, is multicriterial, and the available rival hypotheses might be variously good according to different criteria. Thus, one can view the abduction problem as that of choosing the best hypothesis from among a set of multicriterially evaluated hypotheses, i.e., as a multiple criteria decision making (MCDM) problem. In the absence of a single hypothesis that is best along all dimensions of goodness, the MCDM problem becomes especially hard. The Seeker-Filter-Viewer architecture provides an effective and natural way to use computer power to assist humans in solving certain classes of MCDM problems. In this paper, we apply an MCDM perspective to the abductive problem of red-cell antibody identification and present the results obtained by using the S-F-V architecture.

1  Introduction

Abductive inference is a ubiquitous form of reasoning in science and common sense. Abduction has been referred to as inference to the best explanation by Harman [3] and as the explanatory inference by Lycan [4]. Typically the available evidence is insufficient to narrow conclusively to a single explanation, so multiple hypotheses are available and the problem becomes one of choosing the best among rivals. Josephson & Josephson [2] have described abductions as following this pattern:

    D is a collection of data (facts, observations, givens).
    H explains D (would, if true, explain D).
    No other hypothesis can explain D as well as H does.
    Therefore, H is probably true.

They also suggest that the judgment of likelihood associated with a conclusion should depend upon a number of considerations. Apart from how good a single hypothesis is by itself, it is also desirable that it decisively surpass the alternative hypotheses. However, there are, in general, multiple kinds of criteria by which hypotheses may be compared; explanatory power and plausibility are examples. Thus, we may view abduction as requiring a choice among multicriterially evaluated hypotheses, that is, a species of multiple criteria decision making.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 128–140, 2001.
© Springer-Verlag Berlin Heidelberg 2001

MCDM problems have been widely studied across diverse fields and many techniques abound for solving MCDM problems [5]. An important concept in MCDM is the idea of dominance. Dominance is very much like an all-other-things-being-equal kind of reasoning. Specifically, we say that some multicriterially evaluated alternative A dominates another alternative B if there is some criterion in which A is strictly better than B and there is no criterion in which B is strictly better than A. An alternative that is not dominated is called a Pareto Optimal alternative. For a given problem, the set of Pareto Optimal alternatives has the property that, within the set, the only way to improve along any dimension is to accept a loss in another dimension. That is, choosing among the Pareto Optimal alternatives is a matter of making trade-offs. It is known that the size of the Pareto-optimal set is typically a very small percentage of the actual number of alternatives [6], [7]. Thus, the application of dominance as a filter can be expected to considerably reduce the number of alternatives that need to be considered [1]. It is worth noting that no loss is incurred in the elimination of the dominated alternatives, unless significant criteria have not been considered, because for every alternative eliminated by the dominance filter there is at least one Pareto-optimal alternative that dominates it and is therefore multicriterially better. The application of dominance minimally requires that an order relation hold among the values for each criterion.
The survivors of the dominance filter represent the multicriterially maximal subset of the original set of alternatives. From the definition of dominance it is clear that any two alternatives that survive the filter outperform each other according to different criteria: if alternatives A and B are both in the Pareto Optimal set, then there is at least one criterion in which A is better than B and at least one criterion in which B is better than A, preventing either from dominating the other. In an abduction problem, a more plausible hypothesis, H1, might not explain as much as a less plausible one, H2. That is, H2 is better according to explanatory coverage while H1 is better according to the criterion of plausibility. In such a case, there is no obvious sense in which either H1 or H2 can be said to be a distinctly better hypothesis. However, depending upon the need to explain more, and upon the degree of confidence that is needed for the final choice, a choice between H1 and H2 may become possible. Choosing from among the Pareto Optimal set requires that trade-offs be accepted between plausibility and explanatory coverage. This can be a challenge, since such trade-off judgments are often a function of the specific values at hand; for example, a certain level of confidence or of explanatory power may be sufficient.
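The dominance filter itself is only a few lines of code. A minimal sketch, with made-up hypothesis names and scores, and assuming higher values are better on every criterion:

```python
def dominates(a, b):
    """a dominates b: at least as good everywhere, strictly better somewhere."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_filter(alternatives):
    """Keep the alternatives not dominated by any other (Pareto Optimal set)."""
    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives if b is not a)]

# made-up hypotheses scored on (plausibility, explanatory coverage)
hyps = {"H1": (0.9, 0.5), "H2": (0.6, 0.8), "H3": (0.5, 0.4)}
survivors = [h for h, score in hyps.items()
             if score in pareto_filter(list(hyps.values()))]
# H3 is dominated by both H1 and H2; H1 and H2 survive and trade off
```

H1 and H2 illustrate the situation described above: each is strictly better than the other on one criterion, so neither dominates, and a final choice requires a trade-off judgment.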

130

N.S. Iyer and J.R. Josephson

In summary, a general way to solve an MCDM problem is to apply the dominance filter and then to choose from among the set of dominance survivors by applying human trade-off judgments with respect to the various criteria. The Seeker-Filter-Viewer architecture described in [1] is based on this strategy. The Seeker is a module that generates applicable alternatives and produces evaluations for them according to the different criteria. The Filter uses the principle of dominance to produce the Pareto-optimal set from the generated and evaluated set of alternatives, eliminating the distinctly suboptimal ones. The Viewer allows a human to express trade-off judgments on the Pareto-optimal alternatives by viewing them as points in graphs with the criteria as axes. If multiple criteria need to be considered, the Viewer provides multiple interlinked 2-D plots and histograms. The human expresses preferences by selecting desirable regions in the graphs; the selected points or regions are cross-linked across all the open plots, so that a selection made on one plot shows the values of the selected alternatives according to the other criteria.

Apart from explanatory coverage and plausibility, we will describe several other criteria that can generally be used to evaluate candidate hypotheses in abduction problems. These criteria may or may not apply depending upon the problem domain and other characteristics of the data. We will briefly describe the S-F-V architecture and, as an illustration both of viewing abduction from an MCDM perspective and of applying the S-F-V architecture, we will present the results of experiments in the domain of red-cell antibody identification as described in [2]. We will describe the antibody identification problem as an abduction problem, and define the evaluation criteria used in the experiment.
Finally, we will show the results of viewing this abduction problem as an MCDM problem and applying the S-F-V architecture to help solve the problem.

2  The Seeker-Filter-Viewer Architecture

The S-F-V architecture is described in detail in [1] and [9]. In this section, we provide a brief overview of the architecture and its use in solving MCDM problems. Essentially, the architecture is composed of three modules, the Seeker, the Filter, and the Viewer, each designed to perform a specific set of functions involved in solving the given MCDM problem. We describe these components one at a time.

2.1  The Seeker

The Seeker is responsible for generating the choice alternatives for the MCDM problem. In case the choice alternatives are already present or supplied by the decision-maker, the Seeker makes them accessible to the Filter by reading them from the database. For problems where the decision-maker cannot provide the choice alternatives himself, it is the function of the Seeker to seek out the alternatives from whatever sources are available, in a form that can be used by the Filter. Abstractly, this could be a search on the Internet looking for choice alternatives pertaining to the problem. The Seeker described in [1] is currently capable of generating choice alternatives as compositions of various components listed in a component library: it instantiates all possible choice alternatives that can be formed by some distinct composition of a set of components in the library. Having instantiated a choice alternative, it next makes use of simulation models to evaluate various property values for it. For example, for an instantiated car, the Seeker might run simulations to compute the mileage, cost, weight, top speed and other car-related properties for which simulation models are available. At the end of the generation process, the Seeker produces a list of choice alternatives along with a set of {property-name, property-value} pairs for each alternative, and makes this list available to the Filter.
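The compose-then-evaluate loop can be sketched as follows. The component library, its names and costs, and the toy "simulation" formulas are all made up for illustration:

```python
from itertools import product

# Hypothetical component library: one component per slot.
library = {
    "engine":  [{"name": "E1", "cost": 3000, "mpg": 30},
                {"name": "E2", "cost": 4500, "mpg": 24}],
    "chassis": [{"name": "C1", "cost": 2000, "weight": 900},
                {"name": "C2", "cost": 2600, "weight": 750}],
}

def seek():
    """Instantiate every distinct composition and attach
    {property-name: property-value} pairs for the Filter."""
    alternatives = []
    for engine, chassis in product(library["engine"], library["chassis"]):
        props = {  # stand-in simulations deriving properties from the parts
            "cost": engine["cost"] + chassis["cost"],
            "mileage": round(engine["mpg"] * 900 / chassis["weight"], 1),
        }
        alternatives.append((engine["name"] + "+" + chassis["name"], props))
    return alternatives
```

With two components per slot this yields four evaluated alternatives; the real Seeker of [1] works the same way at a much larger scale.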

2.2  The Filter

The Filter is responsible for applying the dominance rule to the set of alternatives generated by the Seeker. In order to do this, the Filter expects the decision-maker to choose those properties of the choice alternatives that reflect the dimensions of outcome that matter to him, and additionally the directions of goodness for the criteria. For example, a decision-maker who wants a cost-effective car should choose cost and mileage as properties of interest. Once such a set of properties has been selected, the Filter uses them as the criteria by which to apply the dominance rule to the set of alternatives. Since the criterion values are already made available by the Seeker, the Filter uses these values to produce the Pareto-optimal set of alternatives. As mentioned earlier, this step is essential because alternatives not belonging to the Pareto-optimal set are known to be dominated by some Pareto-optimal alternative; there is thus no loss incurred in eliminating them. By doing so, the Filter spares the decision-maker from having to consider such alternatives at all, and thereby from unintentionally selecting a suboptimal alternative. Finally, as indicated in [6], [7], the Pareto-optimal set often tends to be a very small fraction of the original set, so the application of the Filter also reduces the number of alternatives that need further consideration.

While the Filter can reduce the relevant set of alternatives from a large number to a small fraction, choosing an alternative even from a handful of Pareto-optimal alternatives can be a demanding task for the decision-maker. The next module of the architecture allows the decision-maker to interact graphically with the Pareto-optimal alternatives in various ways, in order to select the final choice alternative(s) of interest.

2.3  The Viewer

As mentioned previously, choice among Pareto-optimal alternatives requires the making of trade-offs. The Viewer allows the decision-maker to interact with the Pareto-optimal set by means of various kinds of graphical plots which enable him to express his trade-off preferences in the context of the available alternatives. A more detailed description of all the modes of interaction that the Viewer allows, along with a description of an interaction session between the Viewer and a decision-maker, is provided in [9]. Here we will only mention that the Viewer allows the decision-maker to plot the Pareto-optimal alternatives as points in 2-D scatter plots whose axes he selects himself; he can pull up as many plots as he desires. The Viewer also maintains a set of 1-D plots in which the Pareto-optimal alternatives are plotted along single property axes. The Viewer lets the decision-maker graphically select points or collections of points. Upon selection, all points within the selected region are indicated in a separate color, and this indication is provided across all the open plots. Thus, even though the decision-maker makes his selection on a single plot, he gets to examine the implications of his selection in terms of the other properties by examining the colored points on all the other plots. This forces the decision-maker to make selections and at the same time evaluate their consequences, which can be expected to lead to a more rational selection process. Apart from making trade-offs, the Viewer enables other kinds of preference expression by the decision-maker, including: choosing alternatives by categories from bar charts, applying hard constraints based on criteria by using the 1-D plots, applying various kinds of constraints based on as yet unconsidered properties, combining alternatives that belong to different Viewer-based selections, looking at a list of all properties of the alternatives in the selected region in tabular form, and so on.
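The cross-linked selection mechanism described above can be sketched in a few lines. The alternatives, criterion names and values below are made up; selecting a rectangle on one 2-D plot amounts to filtering on two criteria and then reporting the survivors' values on the remaining criteria:

```python
# made-up Pareto-optimal alternatives with three criterion values each
alternatives = {
    "A1": {"plausibility": 0.9, "coverage": 0.5, "simplicity": 0.7},
    "A2": {"plausibility": 0.6, "coverage": 0.8, "simplicity": 0.9},
    "A3": {"plausibility": 0.7, "coverage": 0.7, "simplicity": 0.4},
}

def select_region(alts, axis_x, axis_y, x_range, y_range):
    """Names of alternatives inside the rectangle selected on one plot."""
    return [name for name, p in alts.items()
            if x_range[0] <= p[axis_x] <= x_range[1]
            and y_range[0] <= p[axis_y] <= y_range[1]]

selected = select_region(alternatives, "plausibility", "coverage",
                         (0.65, 1.0), (0.6, 1.0))
# cross-link: show the selected alternatives' values on a third criterion
linked = {name: alternatives[name]["simplicity"] for name in selected}
```

Here only A3 falls inside the selected region, and the cross-link immediately exposes its weak simplicity score, which is exactly the kind of consequence the Viewer makes visible before a final choice is committed.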
Thus, the Viewer complements the Filter by enabling many kinds of preferences that apply when choosing from Pareto-optimal alternatives. The synergy between the three modules gives the S-F-V architecture the ability to act as an effective decision support for solving MCDM problems. As an indication of its effectiveness, we point to the experiment described in [1], where close to 2 million choice alternatives (hybrid vehicles) were generated by the Seeker, the Filter reduced this set to 1078 alternatives, and interaction between a decision-maker and these Filter survivors using the Viewer resulted in a final output of 7 alternatives. The architecture has been applied to a number of engineering problems, and our claims about its effectiveness are based on the responses we received from users regarding the ease with which they were able to use it. We realize that a formal usability analysis of the architecture would go a long way towards establishing this. We have compared the Viewer with a few other visualization techniques in the MCDM literature, and our impression is that the Viewer has its own set of unique properties; we direct the interested reader to the survey of such visualization techniques in [10] (pp. 238-249).

This brings our description of the S-F-V architecture to a close. We next describe some properties of explanatory hypotheses which can be used as evaluation criteria. The use of such criteria to evaluate hypotheses will allow hypotheses to be viewed as multicriterially evaluated alternatives, thereby allowing the problem of choosing the "multicriterially best" alternative to be seen as an MCDM problem.

3  Evaluation Criteria for Explanatory Hypotheses

As we said, the idea of the best hypothesis from among a set of hypotheses is a multicriterial notion. In [8] the following qualities are suggested as criteria for evaluating hypotheses: Explanatory Power, Plausibility, Internal Consistency, Simplicity, Specificity, Predictive Power, and Theoretical Promise. In order to apply MCDM techniques, the evaluations according to the criteria must be obtainable in numerical form, or in some other form that enables the comparison of criterion values. How this is done may well depend upon the domain for which the abduction problem is being solved. As an illustration, we next describe how evaluations were produced for the hypotheses in the red-cell antibody identification domain.

4  The RED Domain: The Red Cell Antibody Identification Task

As described in [2], the RED systems are medical test-interpretation systems that operate in the knowledge domain of hospital blood banks. Specifically, the RED systems are meant to help with the problem of red-cell antibody identification. We first briefly describe the problem and then formulate it as an abduction problem.

4.1  The Problem

Before a blood transfusion is carried out, it is imperative to check that the donor's blood matches the patient's. The process of matching involves ensuring that the donor's blood does not contain antigens that would be identified as foreign bodies by the patient's immune system. When the immune system encounters foreign bodies, it produces antibodies directed against them. The antibodies produced by the patient's blood against the red-cell antigens of a donor are called red-cell antibodies. If the patient's blood contains antibodies directed against the red-cell antigens of the donor's blood, this is a case of mismatch. Transfusion of badly matched blood can have serious consequences, including fever, anemia, and life-threatening kidney failure. Hence the red-cell antibody identification task is of crucial importance to blood banks. In addition to the familiar A, B, and Rh, more than 400 red-cell antigens are known. Once the blood has been tested to determine the patient's ABO and Rh blood type, it is necessary to test for the presence of antibodies directed toward other red-cell antigens.

134

N.S. Iyer and J.R. Josephson

Table 1. Red-cell test panel. The various test conditions, or phases, are listed along the left side (AlbuminIS, etc.) and identifiers for donors of the red cells are given across the top (623A, etc.). Entries in the table record reactions graded from 0, for no reaction, to 4+ for the strongest agglutination reaction, or H for hemolysis. Intermediate grades of agglutination are +/- (a trace of reaction), 1+w (a grade of 1+, but with the modifier “weak”), 1+, 1+s (the modifier means “strong”), 2+w, 2+, 2+s, 3+w, 3+, 3+s, 4+w. Thus, cell 623A has a 3+ agglutination reaction in the Coombs phase.

           623A   479  537A  506A  303A  209A  186A   195   164
AlbuminIS     0     0     0     0     0     0     0     0     0
Albumin37     0     0     0     0     0     0     0     2     1
Coombs       3+     0    3+     0    3+    3+    3+    3+    3+
EnzymeIS      0     0     0     0     0     0     0     0     0
Enzyme37      0     0    1+     0     0    1+     0    1+     0

Typically this identification is performed using one or more reaction panels of the form shown in Table 1. The columns of the table refer to the different applicable donors, while the rows refer to different test conditions. Each entry indicates the reaction shown by a mixed sample of the patient's blood serum and the indicated donor's red blood cells, under the specified test conditions. These figures are produced by the blood-bank technologist to record a visual assessment of the strength and type of reaction. Possible reaction types are agglutination (clumping of cells) and hemolysis (splitting of the cell walls). The strength of a reaction is expressed in the blood-banker's vocabulary, which consists of thirteen possible reaction strengths, some of which are shown in Table 1. Hemolysis reactions were ignored for the purposes of this experiment. For our experiment, all 3+ entries are converted into the number 3, the 1+ values into the number 1, and so on. Reactions indicated as 2+s are converted into the number 2.5, while those marked as 2+w are converted into the number 1.5. Additionally, information about the significant antigens present in each of the donor samples is recorded in a table called the antigram. By reasoning about the pattern of reactions displayed by the reaction panel and using the antigen information in the donor antigram, the blood-bank technologist attempts to determine which antibodies are present in the patient's serum and are causing the observed reactions, and which are absent, or at least not present in enough strength to cause reactions. The RED systems were built to automate this reasoning process.
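The grade-to-number conversion just described can be sketched as follows. This is a minimal illustration; the function name and the value used for the “+/-” grade are our assumptions, not taken from the RED systems.

```python
# A possible numeric encoding of the blood-banker's reaction vocabulary,
# following the conversions described in the text (3+ -> 3, 2+s -> 2.5,
# 2+w -> 1.5, and so on). The value for "+/-" is assumed.

def reaction_strength(grade: str) -> float:
    """Convert a grade such as '0', '+/-', '1+w', '3+', or '2+s' to a number."""
    if grade == "0":
        return 0.0
    if grade == "+/-":          # a trace of reaction (assumed value)
        return 0.5
    base = float(grade[0])      # the leading digit of 1+, 2+, 3+, 4+
    if grade.endswith("s"):     # "strong" modifier adds half a grade
        return base + 0.5
    if grade.endswith("w"):     # "weak" modifier subtracts half a grade
        return base - 0.5
    return base

print(reaction_strength("3+"))   # 3.0
print(reaction_strength("2+s"))  # 2.5
print(reaction_strength("2+w"))  # 1.5
```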

4.2  The Red Cell Antibody Identification Problem as an Abduction Problem

The reaction panel shown in Table 1 can be considered as the data to be explained. Using the antigram, which gives information about the various antigens present in the donor samples, it is possible to construct hypotheses about the existence

Multicriterially Best Explanations

135

of various antibodies in the patient's serum. Each such hypothesis contains two kinds of information:

– A profile, similar to Table 1, representing how much of each reaction in the panel this particular hypothesis offers to explain. This is the most that can be consistently explained by the hypothesis.
– A plausibility value, the result of applying rules given by domain experts to the data of the case. In our experiment, this value is an integer between -3 and +3, representing plausibility on a symbolic scale from “ruled out” to “highly plausible”.

An example for a particular antibody is given in Table 2. It shows how much of the reactions of the case from Table 1 is accounted for by hypothesizing that the antibody AntiNMixed is present in the patient's serum. The plausibility value for the hypothesis is indicated to be -2.

Table 2. Reaction profile for an individual antibody (AntiNMixed) hypothesis. Note by comparison with the overall reaction panel in Table 1 that the hypothesis only offers to partially explain some of the reactions.

AntiNMixed Profile; Plausibility = -2

           623A   479  303A  209A  186A   195
AlbuminIS     0     0     0     0     0     0
Albumin37     0     0     0     0     0     0
Coombs      0.5     0     2   0.5   0.5     2
EnzymeIS      0     0     0     0     0     0
Enzyme37      0     0     0   0.5     0   0.5

Table 2 does not contain as many columns as Table 1 because the hypothesis cannot explain any of the reactions pertaining to the missing columns. The same kind of profile is created for every other non-ruled-out antibody. Hence, for a given case, the following inputs are present:

1. The reaction panel, as indicated in Table 1.
2. A plausibility value for each antibody.
3. A reaction profile for each antibody whose plausibility value is not -3, i.e., which has not been “ruled out.”

The desired output is a set of antibodies which best explains the reactions, along with the plausibility values associated with them. The above problem can now be seen as an abduction problem with the following mapping:

1. The reaction panel represents the data, D, to be explained.
2. The individual antibodies which have not been “ruled out,” and all possible composite hypotheses that can be generated from them, represent the set of possible explanatory hypotheses, the set E.


The abduction problem is one of finding the hypothesis which best explains the reactions in the reaction panel. However, sometimes the evidence is insufficient and there is no unique best explanation.
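The inputs enumerated above can be captured in a small data structure; the following Python sketch is ours (the field names and layout are assumptions, not the RED systems' internal representation).

```python
# Hedged sketch: a simple hypothesis as (antibody name, plausibility, profile).
from dataclasses import dataclass, field

@dataclass
class AntibodyHypothesis:
    """One simple hypothesis: an antibody with its plausibility and profile."""
    name: str
    plausibility: int                             # integer in [-3, +3]
    profile: dict = field(default_factory=dict)   # (phase, donor) -> offered strength

    @property
    def ruled_out(self) -> bool:
        return self.plausibility == -3

# e.g. the AntiNMixed hypothesis of Table 2 (profile abbreviated)
anti_n = AntibodyHypothesis("AntiNMixed", -2, {("Coombs", "303A"): 2})
print(anti_n.ruled_out)   # False
```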

5  Evaluation Criteria for Hypotheses in the RED Domain

In this section, we describe how the evaluation criteria for hypotheses in the RED domain were computed from the given information. For a given problem, the set E of all possible explanatory hypotheses was created as described next. First, antibodies that are ruled out (i.e., those with plausibility value -3) are no longer considered as potentially explanatory hypotheses for the problem; such hypotheses are excluded from the set E. The set, S, of simple hypotheses may be defined as follows:

S = { Ai : Ai hypothesizes the presence of a particular antibody and the plausibility of Ai is not −3 }

Now, the set E of applicable hypotheses is defined as the set of all possible hypotheses obtainable as combinations (conjunctions) of the simple hypotheses in S. The set C = E − S is therefore the set of all composite hypotheses, which hypothesize the presence of more than a single antibody to explain the reactions. Thus, if we suppose A1, A2, A3, ..., Ak to be the individual hypotheses for the presence of single antibodies that have not been ruled out in advance, then A4 is an example of a simple hypothesis, while {A2, A3, Ak} is an example of a 3-part composite hypothesis. Obviously, the largest composite hypothesis has size k and includes all of the simple hypotheses in S as its parts. Next, we discuss how some of the criteria mentioned in Section 3 were computed for the set E above.

1. Explanatory Power: Given the values in the reaction panel and the reaction profiles, Ri, for the simple hypotheses, Ai, one way to quantify the explanatory power of a simple hypothesis is to compute the sum of all the values in its reaction-profile table. Since each individual entry in the reaction profile offers to explain an observed reaction as consistently as possible, the sum over the reaction-profile matrix is indicative of the overall explanatory power of the simple hypothesis, and is used as its heuristic measure. For a composite hypothesis, the reaction profile is constructed from the profiles of its parts: it is the entry-wise sum of the individual reaction profiles, with each entry capped at the reaction strength that needs to be explained. More formally, with R denoting the reaction profile of a hypothesis H, the Explanatory Power, E, was computed as

∀H ∈ E,   E(H) = Σ_{a,b} R(a, b)     (1)
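A possible reading of this construction in code. The dictionary-based profile layout and the toy panel values are our assumptions; only the capping rule and the sum of Eq. (1) follow the text.

```python
# Hedged sketch: composite profiles (entry-wise sum capped at the panel)
# and explanatory power as the sum of all profile entries.
from itertools import combinations

def composite_profile(panel, profiles, parts):
    """Entry-wise sum of the parts' profiles, capped at the panel values."""
    return {cell: min(sum(profiles[a].get(cell, 0) for a in parts), strength)
            for cell, strength in panel.items()}

def explanatory_power(profile):
    """Eq. (1): sum of all entries of the reaction profile."""
    return sum(profile.values())

# Toy panel with two cells and two simple hypotheses (values assumed).
panel = {("Coombs", "623A"): 3, ("Enzyme37", "537A"): 1}
profiles = {"A1": {("Coombs", "623A"): 2},
            "A2": {("Coombs", "623A"): 2, ("Enzyme37", "537A"): 1}}

# The set E: all non-empty combinations of S, i.e. 2^k - 1 alternatives.
S = sorted(profiles)
E = [H for r in range(1, len(S) + 1) for H in combinations(S, r)]
for H in E:
    print(H, explanatory_power(composite_profile(panel, profiles, H)))
```

Note how the cap keeps a composite from claiming to explain more than the panel actually shows: the two parts each offer 2 for the Coombs/623A cell, but their composite is credited only the observed strength 3.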


2. Implausibility: The plausibility, pi, of each simple hypothesis, Ai, is already available as part of the input. Since -3 is the lowest degree of plausibility assigned to a simple hypothesis, implausibility can be computed by a heuristic measure which produces low implausibility values for high plausibility values, and so on. The exact form of this function is not important, since the values are meant to be used only for relative comparisons. In our experiment, the implausibility, I, was computed as

∀H ∈ E,   I(H) = Σ_{Aj ∈ H} (4 − pj)

3. Simplicity: There are at least two ways to define this criterion.
   a) Cardinality: Simplicity can be defined in terms of the number of parts of a hypothesis, that is, its cardinality. Note that we would want to minimize this value in order to maximize simplicity. However, this measure scores a hypothesis like {A2, A6} better than another hypothesis like {A1, A3, A7} based merely on the difference in their structural simplicities.
   b) Inclusion simplicity: This measure cannot be quantified on a per-hypothesis basis like the previous ones. However, when comparing two composite hypotheses, say H1 and H2, we say that H1 is better in inclusion-simplicity than H2 if and only if all of the constituent parts of H1 are present in H2 as well. In all other cases, the two hypotheses are considered incomparable in simplicity. This measure ensures that the less complex hypothesis is preferred to a more complex one that explains no more.

For the RED-domain experiment, only two of the above criteria were used. This is because the implausibility value, as defined previously, already carries the information carried by the inclusion-simplicity criterion: adding another hypothesis to a composite always reduces its plausibility, so an included hypothesis is always more plausible than the including one. Consequently, if k simple hypotheses are not ruled out in advance, the abduction problem involves as many hypotheses as there are non-empty combinations of the k simple hypotheses. In other words, the problem becomes a (2^k − 1)-alternative, 2-criteria MCDM problem. In the next section, we discuss the results of applying the S-F-V architecture to this MCDM problem.
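A dominance (Pareto) filter over these two criteria can be sketched as follows. This is a minimal illustration; the hypothesis tuples and the plausibility values in the example are invented, not the paper's data.

```python
# Hedged sketch: maximize explanatory power, minimize implausibility
# I(H) = sum over parts Aj of (4 - pj). Alternatives are
# (parts, explanatory_power, implausibility) tuples with assumed values.

def implausibility(plausibilities, parts):
    return sum(4 - plausibilities[a] for a in parts)

def dominates(x, y):
    """x dominates y: at least as good on both criteria, strictly better on one."""
    (_, ex, ix), (_, ey, iy) = x, y
    return ex >= ey and ix <= iy and (ex > ey or ix < iy)

def pareto_filter(alternatives):
    """Keep only the alternatives not dominated by any other (Pareto-optimal set)."""
    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives if b is not a)]

p = {"A8": 3, "A5": -1, "A12": 1}
alts = [
    (("A8",),       10, implausibility(p, ("A8",))),         # I = 1
    (("A8", "A12"), 12, implausibility(p, ("A8", "A12"))),   # I = 4
    (("A8", "A5"),  13, implausibility(p, ("A8", "A5"))),    # I = 6
    (("A5",),        4, implausibility(p, ("A5",))),         # dominated by ("A8",)
]
print(pareto_filter(alts))
```

With these made-up numbers, the three incomparable alternatives survive and the dominated singleton is discarded, mirroring the trade-off structure described in Section 6.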

6  Results of Applying the S-F-V Architecture

It is to be noted that the potential number of explanatory hypotheses in the RED domain is exponential in the number of simple hypotheses that have not been ruled out at the outset. Considering that up to 30 clinically significant red-cell antigens are known, the total number of alternatives is potentially quite large.


Hence, the ability of the dominance filter to prune effectively becomes valuable in reducing the complexity of the problem. The results shown below are for the case labeled OSU-9 in the RED domain, as described in [2]; the reaction panel shown in Table 1 refers to the same case. This case resulted in 15 simple hypotheses which could not be ruled out on the evidence at the outset. Thus we have a total of 2^15 − 1 = 32,767 potential explanatory hypotheses, a formidable number. The Seeker generates this set by building the exhaustive set of combinations of the 15 simple hypotheses. In the process of generation, the Seeker also evaluates the hypotheses along the various criteria using the heuristic measures indicated in the previous section, and makes this set of multicriterially evaluated hypotheses available to the Filter. The Filter, after applying the dominance rule with implausibility and explanatory power as the criteria, produces a Pareto-optimal set containing only 3 hypotheses! In other words, as long as the goal is to find the most plausible hypothesis which best explains the reactions, there is no need to consider the 32,764 eliminated alternatives; dominance ensures that they are inferior to the survivors. The 3 surviving hypotheses are plotted as points in the Viewer scatter plot shown below, with Implausibility and Explanatory Power as the axes. The labels for each point show the composition

Fig. 1. Plot showing the 3 survivors of dominance applied to the case OSU-9 from the RED domain

of the individual hypotheses. We see that of the 3 survivors, one is a simple hypothesis, and in fact this hypothesis, A8, occurs in each of the two remaining composite hypotheses, {A8, A5} and {A8, A12}. Figure 1 also shows the trade-offs available to the user of such a system. Such a trade-off is typical of many abduction problems, where the ability to explain more comes at a cost in the confidence associated with the explanation. By


using this plot, which is displayed by the Viewer in the S-F-V architecture, the user can exercise trade-off judgments by selecting the point of interest. For example, the plot informs the user that greater explanatory coverage than that provided by the simple hypothesis can only be had by incurring an increase in implausibility. The composite {A8, A12}, shown as the middle point in the plot, allows one step of trade-off in the direction of explaining more, with a resulting increase in implausibility. Similarly, the point at the extreme top right explains the most but is also the most implausible of the three potential explanatory hypotheses. Figure 2 shows similar trade-offs for another experimental case. This plot shows more clearly how moving from the leftmost point to the next results in a considerable increase in explanatory coverage, while the resulting increase in implausibility is not as large. Conversely, looking at the rightmost pair of points, we see that a very small increase in explanatory coverage is obtained by incurring a quite large increase in implausibility. This illustrates how different kinds of trade-off judgments can be brought to bear to choose between competing hypotheses, even if both are Pareto-optimal. This plot also shows how choice between

Fig. 2. Plot showing the survivors of dominance applied to the case Pat-32 from the RED domain

multicriterially best explanations involves trade-offs. The choice of an appropriate hypothesis will depend upon the willingness of the user (in this case, the person administering the blood) to hypothesize the presence or absence of an antibody, according to the urgency of the situation and other risk-based considerations. Alternatively, if additional knowledge becomes available at a later stage of the problem, it may be used to rule out some of the surviving hypotheses.

7  Conclusions

We have shown how the MCDM perspective applies to abductive reasoning. IBE problems are inherently multicriterial, and the criteria need not be commensurable. Even so, a well-defined notion of multicriterially best explanations can be given. Such best explanations need not be unique; however, computer-aided visualization of the alternatives can help humans choose from among the multicriterially best hypotheses. It is worth noting that if there is indeed a single hypothesis that is the most plausible, explains the most, and so on, then that hypothesis will be the sole survivor of the dominance filter: being best along all of the evaluation criteria, it dominates every other alternative, by the definition of dominance from page 2. Moreover, MCDM techniques can help reduce the complexity of the problem. One can envision scientists using powerful, computerized decision aids like the S-F-V architecture in the future to help solve complex problems of discovery.

Acknowledgments. This material is based upon work supported by The Office of Naval Research under Grant No. N00014-96-1-0701. The support of ONR and the DARPA RaDEO program is gratefully acknowledged. Standard disclaimers apply.

References

1. Josephson, J.R., Chandrasekaran, B., Carroll, M., Iyer, N., Wasacz, B., Rizzoni, G., Li, Q., Erb, D.A.: An Architecture for Exploring Large Design Spaces. In: Proceedings of the National Conference of the American Association for Artificial Intelligence, Madison, Wisconsin, pp. 143-150 (1998)
2. Josephson, J.R., Josephson, S.G.: Abductive Inference: Computation, Philosophy, Technology. Cambridge University Press (1994)
3. Harman, G.: The Inference to the Best Explanation. Philosophical Review, Vol. 74, pp. 88-95 (1965)
4. Lycan, W.G.: Judgement and Justification. Cambridge University Press (1988)
5. Keeney, R.L., Raiffa, H.: Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley (1976)
6. Calpine, H.C., Golding, A.: Some Properties of Pareto Optimal Choices in Decision Problems. International Journal of Management Science, Vol. 4, No. 2, pp. 141-147 (1976)
7. Bentley, J.L., Kung, H.T., Schkolnick, M., Thompson, C.D.: On the Average Number of Maxima in a Set of Vectors and Applications. Journal of the Association for Computing Machinery, Vol. 25, No. 4, pp. 536-543 (1978)
8. Josephson, J.R.: Abduction-Prediction Model of Scientific Discovery Reflected in a Prototype System for Model-Based Diagnosis. Philosophica, Vol. 61, No. 1, pp. 9-17 (1998)
9. Chandrasekaran, B.: Functional and Diagrammatic Representation for Device Libraries. Technical Report, The Ohio State University (2000)
10. Miettinen, K.M.: Nonlinear Multiobjective Optimization. International Series in Operations Research and Management Science, Kluwer Academic Publishers (1999)

Towards Discovery of Deep and Wide First-Order Structures: A Case Study in the Domain of Mutagenicity

Tamás Horváth¹ and Stefan Wrobel²

¹ Institute for Autonomous Intelligent Systems, Fraunhofer Gesellschaft, Schloß Birlinghoven, D-53754 Sankt Augustin, tamas.horvath@fhg.de
² Otto-von-Guericke-Universität Magdeburg, IWS, P.O. Box 4120, D-39106 Magdeburg, wrobel@iws.cs.uni-magdeburg.de

Abstract. In recent years, it has been shown that methods from Inductive Logic Programming (ILP) are powerful enough to discover new ﬁrst-order knowledge from data, while employing a clausal representation language that is relatively easy for humans to understand. Despite these successes, it is generally acknowledged that there are issues that present fundamental challenges for the current generation of systems. Among these, two problems are particularly prominent: learning deep clauses, i.e., clauses where a long chain of literals is needed to reach certain variables, and learning wide clauses, i.e., clauses with a large number of literals. In this paper we present a case study to show that by building on positive results on acyclic conjunctive query evaluation in relational database theory, it is possible to construct ILP learning algorithms that are capable of discovering clauses of signiﬁcantly greater depth and width. We give a detailed description of the class of clauses we consider, describe a greedy algorithm to work with these clauses, and show, on the popular ILP challenge problem of mutagenicity, how indeed our method can go beyond the depth and width barriers of current ILP systems.

1

Introduction

In recent years, it has been shown that methods from Inductive Logic Programming (ILP) [23,32] are powerful enough to discover new first-order knowledge from data, while employing a clausal representation language that is relatively easy for humans to understand. Despite these successes, it is generally acknowledged that there are issues that present fundamental challenges for the current generation of systems. Among these, two problems are particularly prominent: learning deep clauses, i.e., clauses where a long chain of literals is needed to reach certain variables, and learning wide clauses, i.e., clauses with a large number of interconnected literals.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 100-112, 2001.
© Springer-Verlag Berlin Heidelberg 2001


In current ILP systems, these challenges are reflected in system parameters that bound the depth and width of the clauses, respectively. Practical experience in applications shows that tractable runtimes are achieved only when these parameters are set to small values; in fact it is not uncommon to limit the depth of clauses to two or three, and their width to four or five. In a recent study, Giordana and Saitta [10] have shown, based on empirical simulations, that there indeed seems to be a fundamental limit for current ILP systems, and that this limit might in large part be due to the extreme growth of matching costs, i.e., the cost of determining whether a clause covers a given example. Thus, if matching costs could be reduced, it should be possible to learn clauses of significantly greater depth and width than currently achievable. In this paper, we present an ILP algorithm and a case study which provide evidence that this is indeed the case. In our algorithm, we build on positive complexity results on conjunctive query evaluation in the area of relational database theory, and employ the class of acyclic conjunctive queries, for which the matching problem is known to be tractable. In the domain of mutagenicity, we show that using our algorithm it is indeed possible to discover structural relationships that must be expressed in clauses of significantly greater depth and width than those currently learnable. In fact, the additional predictive power gained by these deep and wide structures has allowed us to reach a predictive accuracy comparable to that attained in previous studies, without using the additional numerical information available in those experiments. The paper is organized as follows. In Section 2, we briefly introduce the learning problem usually considered in ILP. In Section 3, we give a more detailed introduction to the matching problem and discuss the state of the art in related work on the issue.
In Section 4, we then formally deﬁne the class of acyclic clauses that is used in this work, and describe its properties. Section 5 discusses our greedy algorithm which uses this class of clauses to perform ILP learning. Section 6 contains our case study in the domain of mutagenesis, and Section 7 concludes.

2  The ILP Learning Problem

The ILP learning problem is often simply defined as follows (see, e.g., [32]).

Definition 1 (ILP prediction learning problem). Given
– a vocabulary consisting of finite sets of function and predicate symbols,
– a background knowledge language LB, an example language LE, and a hypothesis language LH, all over the given vocabulary,
– background knowledge B expressed in LB, and
– sets E+ and E− of positive and negative examples expressed in LE such that B is consistent with E+ and E− (B ∪ E+ ∪ E− ⊭ □),
find a learning hypothesis H ∈ LH such that


(i) H is complete, i.e., together with B entails the positive examples (H ∪ B |= E+), and
(ii) H is correct, i.e., consistent with the negative examples (H ∪ B ∪ E+ ∪ E− ⊭ □).

This problem is called the prediction learning problem because the learning hypothesis H must be such that together with B it correctly predicts (derives, covers) the positive examples, and does not predict the negation of any negative example as true (otherwise the hypothesis would be inconsistent with the negative examples). For instance, if flies(tweety) is a positive example and ¬flies(bello) a negative one, then flies(bello) must not be predicted¹. In order to decide conditions (i) and (ii) in the above definition, one has to decide for a single e ∈ E+ ∪ E− whether H ∪ B |= e. This decision problem is called the matching or membership problem. We note that in the general problem setting defined above, the membership problem is not decidable. Therefore, in most cases implication is replaced by clause subsumption, defined as follows. Let C1 and C2 be first-order clauses. We say that C1 subsumes C2, denoted by C1 ≤ C2, if there is a substitution θ (a mapping of C1's variables to C2's terms) such that C1θ ⊆ C2 (for more details see, e.g., [25]).
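As an illustration, clause subsumption can be decided by naive backtracking over candidate substitutions. The following sketch (the list-of-tuples clause representation and the uppercase-means-variable convention are ours) makes the definition concrete:

```python
# Hedged sketch of theta-subsumption: C1 subsumes C2 iff there is a
# substitution theta with C1*theta a subset of C2. Literals are
# (predicate, args) tuples; uppercase strings denote variables.
# Plain backtracking -- exponential in the worst case, matching the
# NP-completeness discussed in the next section.

def subsumes(c1, c2, theta=None):
    theta = theta or {}
    if not c1:
        return True                             # all literals of C1 mapped into C2
    lit, rest = c1[0], c1[1:]
    for pred, args in c2:
        if pred != lit[0] or len(args) != len(lit[1]):
            continue
        t = dict(theta)
        ok = True
        for v, a in zip(lit[1], args):
            if v[:1].isupper():                 # variable: bind or check binding
                ok = t.setdefault(v, a) == a
            else:                               # constant: must match exactly
                ok = v == a
            if not ok:
                break
        if ok and subsumes(rest, c2, t):
            return True
    return False

c1 = [("r", ("X", "Y")), ("r", ("Y", "Z"))]
c2 = [("r", ("a", "b")), ("r", ("b", "c"))]
print(subsumes(c1, c2))   # True: theta = {X: a, Y: b, Z: c}
```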

3  The Matching Problem: State of the Art

One of the reasons why the width and depth of the clauses in the hypothesis language are usually bounded by a small constant is that even in strongly restricted ILP problem settings the membership problem is still NP-complete. For instance, consider the ILP prediction learning problem where (non-constant) function symbols are not allowed in the vocabulary, the background knowledge is an extensional database (i.e., it consists of ground atoms), examples are ground atoms, and the hypothesis language is a subset of the set of definite non-recursive first-order Horn clauses, or in other words, a subset of the set of conjunctive queries [1,30]. This is one of the problem settings most frequently considered in ILP real-world applications. Although in this setting the membership problem, i.e., the problem of deciding whether a conjunctive query implies a ground atom with respect to an extensional database, and implication between conjunctive queries are both decidable, they are still NP-complete [6]. In the ILP community, both of these problems are viewed as instances of the clause subsumption problem, because implication is equivalent to clause subsumption in the problem setting considered (see, e.g., [11]). These decision problems play a central role, e.g., in top-down ILP approaches (see, e.g., [25] for an overview), where the algorithm starts with an overly general clause, for instance the empty clause, and specializes it step by step until a clause is found that satisfies the requirements defined by the user.¹

¹ Strictly speaking, the above setting only refers to the training error of a hypothesis, while ILP systems actually seek to minimize the true error on future examples.


As mentioned above, subsumption between first-order clauses is one of the most important operators used in different ILP methods. Since the clause subsumption problem is known to be NP-complete, various approaches can be found in the literature that try to solve it in polynomial time. Among these, we refer to the technique of identifying tractable subclasses of first-order clauses (see, e.g., [12,18,26]), to the earlier-mentioned phase transitions in matching [10], and to stochastic matching [27]. In general, the clause subsumption problem can be considered as a homomorphism problem between the relational structures that correspond to the clauses, as one has to find a function between the universes of the structures that preserves the relations (see, e.g., [16]). Homomorphisms between relational structures appear in the query evaluation problems of relational database theory and in the constraint-satisfaction problem in artificial intelligence (see, e.g., [19]). In particular, from the point of view of computational complexity, the query evaluation problem for the above-mentioned class of conjunctive queries is well studied. Research in this field goes back to the seminal paper by Chandra and Merlin [6] in the late seventies, who showed that the problem of evaluating a conjunctive query with respect to a relational database is NP-complete. In [33], Yannakakis showed that query evaluation becomes computationally tractable if the set of literals in the query forms an acyclic hypergraph; this class of conjunctive queries is called acyclic conjunctive queries. In [13], Gottlob, Leone, and Scarcello showed that acyclic conjunctive query evaluation is LOGCFL-complete. The relevance of this result, besides providing the precise complexity of acyclic conjunctive query evaluation, is that such evaluation is highly parallelizable due to the nature of LOGCFL.
The positive complexity result of Yannakakis was then extended by Chekuri and Rajaraman [7] to cyclic queries of bounded query-width. Despite the fact that the class of conjunctive queries is one of the most frequently considered hypothesis languages in ILP, and that acyclic conjunctive queries form a practically relevant class of database queries, to our knowledge only the recent paper [15] by Hirata has so far been concerned with acyclic conjunctive queries from the point of view of learnability². In that paper, Hirata has shown that, under widely believed complexity assumptions, a single acyclic conjunctive query is not polynomially predictable, and hence not polynomially PAC-learnable [31]. This means that even though the membership problem for acyclic clauses is decidable in polynomial time, under worst-case assumptions the problem of learning these clauses is hard, so that practical learning algorithms, such as the one presented in Section 5, must resort to heuristic methods.

² The notion of acyclicity appears in the ILP literature (see, e.g., [2]), but is different from the one considered in this paper.

4  Acyclic Conjunctive Queries

In this section we give the necessary notions related to the acyclic conjunctive queries considered in this work. For a detailed introduction to acyclic conjunctive queries the reader is referred to, e.g., [1,30], or to the long version of [13]. For the rest of this paper, we assume that the vocabulary in Definition 1 consists of a set of constant symbols, a distinguished predicate symbol called the target predicate, and a set of predicates called the background predicates. Thus, (non-constant) function symbols are not included in the vocabulary. Examples are ground atoms of the target predicate, and the background knowledge is an extensional database consisting of ground atoms of the background predicates. Furthermore, we assume that hypotheses in LH are definite non-recursive first-order clauses, or in the terminology of relational database theory, conjunctive queries of the form

L0 ← L1, . . . , Ll

where L0 is a target atom and Li is a background atom for i = 1, . . . , l. In what follows, by Boolean conjunctive queries we mean first-order goal clauses of the form ← L1, . . . , Ll, where the Li's are all background atoms. In order to define a special class of conjunctive queries, called acyclic conjunctive queries, we first need the notion of acyclic hypergraphs. A hypergraph (or set system) H = (V, E) consists of a finite set V of vertices and a family E of subsets of V called hyperedges. A hypergraph is α-acyclic [9], or simply acyclic, if one can remove all of its vertices and edges by repeatedly deleting either a hyperedge that is empty or contained in another hyperedge, or a vertex contained in at most one hyperedge [14,34]. Note that acyclicity as defined here is not a hereditary property, in contrast to, e.g., the standard notion of acyclicity in ordinary undirected graphs: it may happen that an acyclic hypergraph has a cyclic subhypergraph.
For example, consider the hypergraph H = ({a, b, c}, {e1, e2, e3, e4}) with e1 = {a, b}, e2 = {b, c}, e3 = {a, c}, and e4 = {a, b, c}. This hypergraph is acyclic: one can first remove the hyperedges e1, e2, e3 (as they are subsets of e4), then the three vertices, and finally the empty hyperedge that remains of e4, leaving the empty hypergraph. On the other hand, the hypergraph H′ = ({a, b, c}, {e1, e2, e3}), which is a subhypergraph of H, is cyclic, as no vertex or hyperedge can be deleted by the above definition. In [9], other degrees of acyclicity are also considered, and it is shown that among them, α-acyclic hypergraphs form the largest class, properly containing the other classes.

Using the above notion of acyclicity, we are now ready to define the class of acyclic conjunctive queries. Let Q be a conjunctive query and L a literal of Q. We denote by Var(Q) (resp. Var(L)) the set of variables occurring in Q (resp. L). We say that Q is acyclic if the hypergraph H(Q) = (V, E) with V = Var(Q) and E = {Var(L) : L is a literal in Q} is acyclic. For instance, from the conjunctive


queries

P(X, Y, X) ← R(X, Y), R(Y, Z), R(Z, X)
P(X, Y, Z) ← R(X, Y), R(Y, Z), R(Z, X)

the first one is cyclic, while the second one is acyclic.

In [3] it is shown that the class of acyclic conjunctive queries is identical to the class of conjunctive queries that can be represented by join forests [4]. Given a conjunctive query Q, the join forest JF(Q) representing Q is an ordinary undirected forest whose vertices are the literals of Q such that, for each variable x ∈ Var(Q), the subgraph of JF(Q) consisting of the vertices that contain x is connected (i.e., it is a tree).

Now we show how to use join forests for efficient acyclic query evaluation. Let E be a set of ground target atoms and B the background knowledge as defined at the beginning of this section, and let Q be an acyclic conjunctive query with join forest JF(Q). In order to find the subset E′ ⊆ E implied by Q with respect to B, we can apply the following method. Let T0, T1, . . . , Tk (k ≥ 0) denote the connected components of JF(Q), where T0 denotes the tree containing the head of Q, and let Qi ⊆ Q denote the query represented by Ti for i = 0, . . . , k. The definition of the Qi's implies that they form a partition of the set of literals of Q such that literals belonging to different blocks do not share common variables. Therefore, the subqueries Q0, . . . , Qk can be evaluated separately: if there is an i, 1 ≤ i ≤ k, such that the Boolean conjunctive query Qi is false with respect to B, then Q implies none of the elements of E with respect to B; otherwise Q and Q0 imply the same subset of E with respect to B. By definition, Q0 implies an atom e ∈ E if there is a substitution mapping the head of Q0 to e and the atoms in its body into B, and Qi (1 ≤ i ≤ k) is true with respect to B if there is a substitution mapping Qi's atoms into B.
That is, using algorithm Evaluate given below, Q implies a subset E′ ⊆ E with respect to B if and only if

(E′ ⊆ Evaluate(B ∪ E, T0)) ∧ ⋀_{i=1}^{k} (Evaluate(B, Ti) ≠ ∅).

It remains to discuss how to compute a join forest for an acyclic conjunctive query. Using maximal weight spanning forests of ordinary graphs, Bernstein and Goodman [4] give the following method for this problem. Let Q be an acyclic conjunctive query, and let G(Q) = (V, E, w) be a weighted graph with vertex set V = {L : L is a literal of Q}, edge set E = {(u, v) : Var(u) ∩ Var(v) ≠ ∅}, and weight function w : E → ℕ defined by w : (u, v) → |Var(u) ∩ Var(v)|. Let MSF(Q) be a maximal weight spanning forest of G(Q); note that maximal weight spanning forests can be computed in polynomial time (see, e.g., [8]). It holds that if Q is acyclic then MSF(Q) is a join forest representing Q. In addition, given a maximal weight spanning forest MSF(Q) of a conjunctive query


algorithm Evaluate
input: extensional database D and join tree T with root labeled by n0
output: {n0θ : θ is a substitution mapping the nodes of T into D}
  let R = {n0θ : θ is a substitution mapping n0 into D}
  let the children of n0 be labeled by n1, . . . , nk (k ≥ 0)
  for i = 1 to k
    S = Evaluate(D, Ti)   // Ti is the subtree of T rooted at ni
    R = the natural semijoin of R and S wrt. n0 and ni
  endfor
  return R
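A rough Python sketch of algorithm Evaluate follows; the representation and names are our own illustrative choices, not the authors' code. An atom is a (predicate, args) pair; variables are strings starting with an upper-case letter, everything else is a constant. A join tree is a (root_atom, [subtrees]) pair, and the extensional database D is a set of ground atoms.

```python
def is_var(term):
    return isinstance(term, str) and term[:1].isupper()

def match(atom, fact):
    """Return the substitution mapping atom onto the ground fact, or None."""
    (p, args), (q, vals) = atom, fact
    if p != q or len(args) != len(vals):
        return None
    theta = {}
    for a, v in zip(args, vals):
        if is_var(a):
            if theta.setdefault(a, v) != v:   # repeated variable must agree
                return None
        elif a != v:
            return None
    return theta

def evaluate(D, tree):
    """Return the substitutions for the root atom that survive the natural
    semijoin with every child subtree (a Yannakakis-style bottom-up pass)."""
    root, children = tree
    R = [theta for fact in D if (theta := match(root, fact)) is not None]
    for child in children:
        S = evaluate(D, child)
        # semijoin: keep substitutions of R consistent with some member of S
        # on the variables the two tree nodes share
        R = [t for t in R
             if any(all(t[v] == s[v] for v in t.keys() & s.keys()) for s in S)]
    return R
```

On the join tree of the query r(X, Y), r(Y, Z) over the facts r(a, b) and r(b, c), for instance, only the substitution {X → a, Y → b} survives the semijoin.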

Q, instead of using the method given in the definition of acyclic hypergraphs, one can decide whether Q is acyclic by checking whether the equation

∑_{(u,v) ∈ MSF(Q)} w(u, v) = ∑_{x ∈ Var(Q)} (Class(x) − 1)    (1)

holds, where Class(x) denotes the number of literals in Q that contain x (see also [4]).³
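The spanning-forest construction and the acyclicity test of equation (1) can be sketched as follows; literals are abstracted to their variable sets, and this is an illustration, not the authors' implementation.

```python
def max_weight_spanning_forest(literals):
    """literals: list of variable sets.  Returns forest edges as (i, j) pairs,
    computed greedily (Kruskal) on the weighted intersection graph."""
    candidates = sorted(
        ((len(literals[i] & literals[j]), i, j)
         for i in range(len(literals))
         for j in range(i + 1, len(literals))
         if literals[i] & literals[j]),
        reverse=True)
    parent = list(range(len(literals)))          # union-find over literals
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    forest = []
    for w, i, j in candidates:                   # greedy Kruskal step
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            forest.append((i, j))
    return forest

def is_acyclic_query(literals):
    """Check equation (1): forest weight equals the sum of Class(x) - 1."""
    forest = max_weight_spanning_forest(literals)
    lhs = sum(len(literals[i] & literals[j]) for i, j in forest)
    variables = set().union(*literals)
    rhs = sum(sum(1 for lit in literals if x in lit) - 1 for x in variables)
    return lhs == rhs
```

For the two example queries above, encoded as [{X,Y}, {X,Y}, {Y,Z}, {X,Z}] (cyclic) and [{X,Y,Z}, {X,Y}, {Y,Z}, {X,Z}] (acyclic), the check gives False and True respectively.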

5 A Greedy Algorithm

The goal of our learning algorithm is to discover sets of acyclic clauses that together are correct and complete. From the results of [16] on learning multiple clauses it follows that this problem is NP-hard, so, as is commonplace in ILP, we resort to a greedy sequential covering algorithm (see, e.g., [21]). Our sequential covering algorithm takes as input the background knowledge B and the set E of examples, calls the subroutine SingleClause to find an acyclic conjunctive query Q, updates E by removing the positive examples implied by Q with respect to B, and starts the process again until no new rule is found by the subroutine. It finally prints as output the set of acyclic conjunctive queries discovered.

Now we turn to the problem of how to find a single acyclic conjunctive query.⁴ In order to give the details of the subroutine SingleClause called by

³ The reason why Class(x) − 1 is used in (1) is that the number of edges in a tree equals its number of vertices minus one.

⁴ We note that the general problem of finding a single consistent and complete (not necessarily acyclic) conjunctive query is PSPACE-hard [17], and it is an open problem whether it belongs to PSPACE (see also [16]). On the other hand, it is not known whether the problem remains PSPACE-hard for the class of acyclic conjunctive queries considered in this work, or for the other three classes corresponding to β-, γ-, and Berge-acyclicity discussed in [9].


the algorithm, we first need the notion of refinement operators (see Chapter 17 of [25] for an overview). Recall that the special ILP problem setting defined at the beginning of the previous section is considered. Fix the vocabulary and let L denote the set of acyclic conjunctive queries over the vocabulary. A downward refinement operator is a function ρ : L → 2^L such that Q2 ≤ Q1 for every Q1 ∈ L and Q2 ∈ ρ(Q1).

algorithm SingleClause
input: background knowledge B and set E = E+ ∪ E− of examples
output: either ∅ or an acyclic conjunctive query Best satisfying
        |Covers(Best, B, E+)|/|E+| ≥ Pcov and Accuracy(Best, B, E) ≥ Pacc
  Beam = {P(x1, . . . , xn) ←}   // P denotes the target predicate
  Best = ∅
  LastChange = 0
  repeat
    NewBeam = ∅
    forall C ∈ Beam
      forall C′ ∈ ρ(C)
        if |Covers(C′, B, E+)|/|E+| ≥ Pcov then
          if Accuracy(C′, B, E) ≥ max(Pacc, Accuracy(Best, B, E)) then
            Best = C′
            LastChange = 0
          endif
          update NewBeam by C′
        endif
      endfor
    endfor
    LastChange = LastChange + 1
    Beam = NewBeam
  until Beam = ∅ or LastChange > Pchange
  return Best

Algorithm SingleClause applies beam search to find a single acyclic conjunctive query. Its input is B and the current set E of examples. It returns an acyclic conjunctive query Best that covers a sufficiently large part (defined by Pcov) of the positive examples and has accuracy at least Pacc, where Pcov and Pacc are user-defined parameters. If no such acyclic conjunctive query is found, it returns the empty set. In each iteration of the outer (repeat) loop, the algorithm computes the refinements of each acyclic conjunctive query in the beam stack, and if a refinement is found that is better than the best one discovered so far, then it becomes the new best candidate. The beam stack is updated according to the rules' quality as measured by Accuracy. Finally, we note that the outer loop terminates if no candidate refinement has been generated


or if the best rule has not changed during the last Pchange iterations of the outer loop, where Pchange is a user-defined parameter.
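The outer sequential-covering loop described above can be sketched as follows; this is a toy illustration, where single_clause and covers stand in for the SingleClause subroutine and the coverage test, not the paper's actual code.

```python
def sequential_covering(B, pos, neg, single_clause, covers):
    """Greedily collect clauses until the positives are covered or no
    further acceptable clause is found."""
    theory = []
    remaining = set(pos)
    while remaining:
        Q = single_clause(B, remaining, neg)
        if Q is None:                 # subroutine found no acceptable rule
            break
        covered = covers(Q, B, remaining)
        if not covered:               # guard: a rule must cover something
            break
        theory.append(Q)
        remaining -= covered
    return theory
```

Each accepted clause removes the positives it implies, so the remaining example set shrinks monotonically until the loop stops.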

6 Case Study: Mutagenesis

Chemical mutagens are natural or artificial compounds that are capable of causing permanent transmissible changes in DNA. Such changes, or mutations, may involve small gene segments as well as whole chromosomes. Carcinogenic compounds are chemical mutagens that alter the DNA's structure or sequence harmfully, causing cancer in mammals. A huge amount of research in the field of organic chemistry has been focused on identifying carcinogenic chemical compounds.

The first study on using ILP for predicting mutagenicity in nitroaromatic compounds, along with a Prolog database, was published in [29]. This database consists of two sets of nitroaromatic compounds, of which we have used the regression-friendly one containing 188 compounds. Depending on the value of log mutagenicity, the compounds were split into two disjoint sets (125 active and 63 inactive compounds). The basic structure of the compounds is represented by the background predicates 'atm' and 'bond' of the form

atm(Compound_Id, Atom_Id, Element, Type, Charge),
bond(Compound_Id, Atom1_Id, Atom2_Id, BondType),

respectively. Thus, the background knowledge B can be considered a labeled directed graph. In order to work with undirected graphs, for each fact bond(c, u, v, t) we have added a corresponding fact bond(c, v, u, t) to B. In addition, in our experiments we have also included the background predicates

– benzene, carbon_6_ring, hetero_aromatic_6_ring, ring6,
– carbon_5_aromatic_ring, carbon_5_ring, hetero_aromatic_5_ring, ring5,
– nitro, and methyl.

These predicates define building blocks for complex chemical patterns (for their definitions see the Appendix of [29]). We note that we have not used the available numeric information (i.e., atom charges, log P, and εLUMO).
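The bond-symmetrisation step above can be sketched on an illustrative tuple encoding of the extensional database; the tuples used in the example are made up for the illustration, not taken from the actual mutagenesis data.

```python
def symmetrize_bonds(background):
    """For every bond(c, u, v, t) fact add the reversed fact bond(c, v, u, t)."""
    extra = {('bond', f[1], f[3], f[2], f[4])
             for f in background if f[0] == 'bond'}
    return background | extra

# illustrative facts: one bond and one atom of a hypothetical compound d1
B = {('bond', 'd1', 'a1', 'a2', 7), ('atm', 'd1', 'a1', 'c', 22, -0.117)}
B_undirected = symmetrize_bonds(B)
```

Because the reversed facts are added to the database rather than to the hypotheses, the clause language itself is unchanged.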
In our experiments we used a simple refinement operator that only allows adding new literals to the body of an acyclic conjunctive query, without the usual operators such as unification of two variables or specialization of a variable. That is, a refinement of an acyclic conjunctive query is obtained by selecting one of its literals and, depending on the predicate symbol of the selected literal, adding a set of literals to its body as follows. If the selected literal is the head of the clause, we add either a single 'atm' literal or a set of literals corresponding to one of the building blocks. If the selected literal is an 'atm' literal, we add either a new atom connected by a bond fact to the selected one, or a building block containing the selected atom. If a bond literal has been selected, we add a building block containing the current bond. Such building blocks are a common


element specifiable in several declarative bias languages already in use in ILP (see, e.g., the relational clichés of FOCL [28] or the lookahead specifications of TILDE [5]); at present, they are simply given as part of the refinement operator.⁵

As an example, let Q : active(x1, x2) ← . . . , L, . . . be an acyclic conjunctive query, where L = bond(x1, xi, xj, 7). Then a refinement of Q with respect to L and the building block benzene is the acyclic conjunctive query

Q′ = Q ∪ {bond(x1, xj, y1, 7), bond(x1, y1, y2, 7), bond(x1, y2, y3, 7),
bond(x1, y3, y4, 7), bond(x1, y4, xi, 7),
atm(x1, y1, c, u1, v1), atm(x1, y2, c, u2, v2),
atm(x1, y3, c, u3, v3), atm(x1, y4, c, u4, v4),
benzene(x1, xi, xj, y1, y2, y3, y4)},

where the y's, u's, and v's are all new variables. Note that despite the fact that the new bond literals together with L form a cycle of length 6, Q′ is acyclic, as we have also attached the benzene literal containing the six corresponding variables. It holds in general that the refinement operator used in our work does not violate the acyclicity property. Finally, we note that only properly subsumed refinements have been considered (i.e., if Q′ is a refinement of Q then Q′ < Q).

In order to see how our restriction on the hypothesis language influences the predictive accuracy, we have used 10-fold cross-validation with the 10 partitions given in [29]. Setting the parameter Pcov to 0.1, Pacc to 125/188 (the default accuracy), the size of the beam stack to 100, and Pchange to 3 (note that this is not a depth bound), we obtained 87% accuracy. Using the ILP system Progol [22], the authors of [29] report 88% accuracy, and a similar result, 89%, was achieved by STILL [27] on the same ten partitions. However, in contrast to our experiment, the Progol and STILL experiments also used the numeric information.
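The benzene building-block refinement illustrated above can be sketched as follows; clauses are lists of literal tuples, and the variable naming and fresh-variable scheme are our own choices for the illustration, not the authors' implementation.

```python
import itertools

_fresh = itertools.count(1)

def fresh_var(prefix):
    return f'{prefix}{next(_fresh)}'

def refine_with_benzene(clause, bond_literal):
    """Attach a benzene ring to the selected bond(C, Xi, Xj, 7) literal."""
    _, c, xi, xj, _ = bond_literal
    y = [fresh_var('y') for _ in range(4)]      # four new ring atoms
    ring = [xj] + y + [xi]                      # close the 6-cycle via xi, xj
    new_lits = [('bond', c, ring[k], ring[k + 1], 7) for k in range(5)]
    new_lits += [('atm', c, yi, 'c', fresh_var('u'), fresh_var('v'))
                 for yi in y]
    new_lits.append(('benzene', c, xi, xj, *y))
    return clause + new_lits
```

Note that the added benzene literal contains all six ring variables, which is exactly what keeps the refinement acyclic despite the length-6 bond cycle.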
As an example, one of the rules discovered independently in each of the ten runs is

active(x1, x2) ← atm(x1, x3, c, 27, x4),
bond(x1, x3, x5, x25), bond(x1, x5, x6, x26), bond(x1, x6, x7, x27),
bond(x1, x7, x8, x28), bond(x1, x8, x9, x29), bond(x1, x9, x3, x30),
atm(x1, x5, x10, x11, x12), atm(x1, x6, x13, x14, x15),
atm(x1, x7, x16, x17, x18), atm(x1, x8, x19, x20, x21),
atm(x1, x9, x22, x23, x24),
ring6(x1, x3, x5, x6, x7, x8, x9),
bond(x1, x7, x31, x32), atm(x1, x31, c, 27, x33)    (2)

⁵ Note that the use of such building blocks facilitates the search by making wide and deep clauses reachable in fewer steps, but of course does not change the complexity of the membership problem. Thus, even when given these building blocks, such clauses would be difficult to learn for other ILP learners due to the intractable cost of matching.


(see also Fig. 1). Applying the notion of variable depth given in [25], the depth of the above rule is 7, according to the depth of its deepest variable x22. Furthermore, its width is 15. Finally, we note that, using the standard Prolog backtracking technique, merely evaluating the single rule above would take on the order of hours.

[Fig. 1. A graphical representation of the body of rule (2): a six-membered ring x3-x5-x6-x7-x8-x9 with a further atom x31 bonded to x7; both x3 and x31 are carbon atoms of atom type 27.]

7 Conclusion

In this paper, we have taken the first steps towards the discovery of deep and wide first-order structures in ILP. Taking up the argument recently put forward by [10], our approach centrally focuses on the matching costs caused by deep and wide clauses. To this end, from relational database theory [1,30] we have introduced a new class of clauses, α-acyclic conjunctive queries, which has not previously been used in practical ILP algorithms. Using the algorithms summarized in this paper, the matching problem for acyclic clauses can be solved efficiently. As shown in our case study in the domain of mutagenicity, with an appropriate greedy learner as presented in the paper, it is then possible to learn clauses of significantly greater width and depth than previously feasible, and the additional predictive power gained by these deep and wide structures has in fact allowed us to reach a predictive accuracy comparable to the one attained in previous studies, without using the additional numerical information available in these experiments.

Based on these encouraging preliminary results, further work is necessary to substantiate the evidence presented in this paper. Firstly, in the case study presented here we have used quite a simple greedy algorithm, so that further improvements seem possible with more sophisticated search strategies (see, e.g., [20]). Secondly, further experiments are of course necessary to examine in which types of problems the advantages shown here will also materialize; we expect this to be the case in all problems involving structurally complex objects or relationships. To facilitate these experiments, we will switch to a refinement operator based on a declarative bias language (see [24] for an overview), as is commonplace in ILP. Finally, it appears possible to generalize our results to an even


larger class of clauses, by considering certain classes of cyclic conjunctive queries which are also solvable in polynomial time (see e.g. [7]).

References

1. S. Abiteboul, R. Hull, and V. Vianu. Foundations of Databases. Addison-Wesley, Reading, Mass., 1995.
2. H. Arimura. Learning acyclic first-order Horn sentences from entailment. In M. Li and A. Maruoka, editors, Proceedings of the 8th International Workshop on Algorithmic Learning Theory, volume 1316 of LNAI, pages 432–445. Springer, Berlin, 1997.
3. C. Beeri, R. Fagin, D. Maier, and M. Yannakakis. On the desirability of acyclic database schemes. Journal of the ACM, 30(3):479–513, 1983.
4. P. A. Bernstein and N. Goodman. The power of natural semijoins. SIAM Journal on Computing, 10(4):751–771, 1981.
5. H. Blockeel and L. De Raedt. Lookahead and discretization in ILP. In N. Lavrač and S. Džeroski, editors, Proceedings of the 7th International Workshop on Inductive Logic Programming, volume 1297 of LNAI, pages 77–84. Springer, Berlin, 1997.
6. A. K. Chandra and P. M. Merlin. Optimal implementation of conjunctive queries in relational data bases. In Proceedings of the 9th ACM Symposium on Theory of Computing, pages 77–90. ACM Press, 1977.
7. C. Chekuri and A. Rajaraman. Conjunctive query containment revisited. Theoretical Computer Science, 239(2):211–229, 2000.
8. T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Press, Cambridge, Mass., 1990.
9. R. Fagin. Degrees of acyclicity for hypergraphs and relational database schemes. Journal of the ACM, 30(3):514–550, 1983.
10. A. Giordana and L. Saitta. Phase transitions in relational learning. Machine Learning, 41(2):217–251, 2000.
11. G. Gottlob. Subsumption and implication. Information Processing Letters, 24(2):109–111, 1987.
12. G. Gottlob and A. Leitsch. On the efficiency of subsumption algorithms. Journal of the ACM, 32(2):280–295, 1985.
13. G. Gottlob, N. Leone, and F. Scarcello. The complexity of acyclic conjunctive queries. In Proceedings of the 39th Annual Symposium on Foundations of Computer Science, pages 706–715. IEEE Computer Society Press, 1998.
14. M. Graham. On the universal relation. Technical report, University of Toronto, Toronto, Canada, 1979.
15. K. Hirata. On the hardness of learning acyclic conjunctive queries. In Proceedings of the 11th International Conference on Algorithmic Learning Theory, volume 1968 of LNAI, pages 238–251. Springer, Berlin, 2000.
16. T. Horváth and G. Turán. Learning logic programs with structured background knowledge. Artificial Intelligence, 128(1-2):31–97, 2001.
17. J.-U. Kietz. Some lower bounds for the computational complexity of inductive logic programming. In P. Brazdil, editor, Proceedings of the European Conference on Machine Learning, volume 667 of LNAI, pages 115–123. Springer, Berlin, 1993.
18. J.-U. Kietz and M. Lübbe. An efficient subsumption algorithm for inductive logic programming. In W. Cohen and H. Hirsh, editors, Proceedings of the Eleventh International Conference on Machine Learning (ML-94), pages 130–138, 1994.


19. P. G. Kolaitis and M. Y. Vardi. Conjunctive-query containment and constraint satisfaction. Journal of Computer and System Sciences, 61(2):302–332, 2000.
20. N. Lavrač and S. Džeroski. Inductive Logic Programming: Techniques and Applications. Ellis Horwood, 1994.
21. T. M. Mitchell. Machine Learning. McGraw-Hill, 1997.
22. S. Muggleton. Inverse entailment and Progol. New Generation Computing, 13(3-4):245–286, 1995.
23. S. Muggleton and L. De Raedt. Inductive logic programming: Theory and methods. The Journal of Logic Programming, 19/20:629–680, 1994.
24. C. Nédellec, C. Rouveirol, H. Adé, F. Bergadano, and B. Tausend. Declarative bias in ILP. In L. De Raedt, editor, Advances in Inductive Logic Programming, pages 82–103. IOS Press, 1996.
25. S.-H. Nienhuys-Cheng and R. de Wolf. Foundations of Inductive Logic Programming, volume 1228 of LNAI. Springer, Berlin, 1997.
26. T. Scheffer, R. Herbrich, and F. Wysotzki. Efficient θ-subsumption based on graph algorithms. In S. Muggleton, editor, Proceedings of the 6th International Workshop on Inductive Logic Programming, volume 1314 of LNAI, pages 212–228. Springer, Berlin, 1997.
27. M. Sebag and C. Rouveirol. Resource-bounded relational reasoning: Induction and deduction through stochastic matching. Machine Learning, 38(1/2):41–62, 2000.
28. G. Silverstein and M. Pazzani. Relational clichés: Constraining constructive induction during relational learning. In Birnbaum and Collins, editors, Proceedings of the 8th International Workshop on Machine Learning, pages 203–207. Morgan Kaufmann, San Mateo, CA, 1991.
29. A. Srinivasan, S. Muggleton, M. J. E. Sternberg, and R. D. King. Theories for mutagenicity: A study in first-order and feature-based induction. Artificial Intelligence, 85(1/2), 1996.
30. J. D. Ullman. Principles of Database and Knowledge-Base Systems, Volumes I and II. Computer Science Press, 1989.
31. L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
32. S. Wrobel. Inductive logic programming. In G. Brewka, editor, Advances in Knowledge Representation and Reasoning (Studies in Logic, Language and Information), pages 153–189. CSLI Publishers, Stanford, CA, 1996.
33. M. Yannakakis. Algorithms for acyclic database schemes. In Zaniolo and Delobel, editors, Proceedings of the 7th Conference on Very Large Data Bases. Morgan Kaufmann, Los Altos, CA, 1981.
34. C. T. Yu and Z. M. Ozsoyoglu. On determining tree query membership of a distributed query. INFOR, 22(3), 1984.

Clipping and Analyzing News Using Machine Learning Techniques

Hans Gründel, Tino Naphtali, Christian Wiech, Jan-Marian Gluba, Maiken Rohdenburg, and Tobias Scheffer

SemanticEdge, Kaiserin-Augusta-Allee 10-11, 10553 Berlin, Germany
{hansg, tinon, christianw, jang, scheffer}@semanticedge.com

Abstract. Generating press clippings for companies manually requires a considerable amount of resources. We describe a system that monitors online newspapers and discussion boards automatically. The system extracts, classifies, and analyzes messages and generates press clippings automatically, taking the specific needs of client companies into account. Key components of the system are a spider, an information extraction engine, a text classifier based on the Support Vector Machine that categorizes messages by subject, and a second classifier that analyzes which emotional state the author of a newsgroup posting was likely to be in. By analyzing large amounts of messages, the system can summarize the main issues being reported on for given business sectors, and can summarize the emotional attitude of customers and shareholders towards companies.

1 Introduction

Monitoring newspaper or journal articles, or postings to discussion boards, is an extremely laborious task when carried out manually. Press clipping agencies employ thousands of personnel in order to satisfy their clients' demand for the timely and reliable delivery of publications that relate to their own company, to their competitors, or to the relevant markets. The internet presence of most publications offers the possibility of automating this filtering and analysis process. One challenge that arises is to analyze the content of a news story well enough to judge its relevance for a given client. A second difficulty is to provide appropriate overview and analysis functionality that allows a user to keep track of the key content of a potentially huge number of relevant publications.

Software systems that spider the web in search of relevant information, and extract and process the information found, are usually referred to as information agents [16,2]. They are being used, for instance, to find interesting web sites or links [19,12], or to filter newsgroup postings (e.g., [26]). One attribute of information agents is how they determine the relevance of a document to a user. Content-based recommendation systems (e.g., [1]) judge the interestingness of a document to the user based on the content of other documents that the user has found interesting. By contrast, collaborative filtering approaches (e.g., [13]),

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 87–99, 2001.
© Springer-Verlag Berlin Heidelberg 2001


draw conclusions based on which documents other users with similar preferences have found interesting. In many applications, it is not reasonable to ask the user to state his or her preferences explicitly. Therefore, information agents often try to learn a function that expresses user interest from user feedback; e.g., [26,18]. By contrast, a user who approaches a press clipping agency usually has specific, elaborated information needs.

The problem of identifying predetermined relevant information in text or hypertext documents from some specific domain is usually referred to as information extraction (IE) (e.g., [4,3]). In the news clipping context, several instances of the information extraction problem occur. Firstly, press articles have to be extracted from HTML pages, where they are usually embedded between link collections, adverts, and other surrounding text. Secondly, named entities such as companies or products have to be identified and extracted, and, thirdly, meta-information such as publication dates or publishers needs to be found. While the first IE algorithms were hand-crafted sets of rules (e.g., [7]), algorithms that learn extraction rules from hand-labeled documents (e.g., [8,14,6]) have now become standard. Unfortunately, rule-based approaches sometimes fail to provide the necessary robustness against the inherent variability of document structure, which has led to the recent interest in the use of Hidden Markov Models (HMMs) [25,17,21,23] for this purpose.

In order to identify whether the content of a document matches one of the categories the user is interested in, and to summarize the subjects of large amounts of relevant documents, classifiers that are learned from hand-labeled documents (e.g., [24,11]) provide a means of categorizing a document's content that reaches far beyond keyword search. Furthermore, it can be interesting to determine the emotional state [9] of the authors of postings about a company or product.
In this paper, we discuss a press clipping information agent that downloads news stories from selected news sources, classifies the messages by subject and business sector, and recognizes company names. It then generates customized clippings that match the requirements of clients. We describe the general architecture in Section 2 and discuss the machine learning algorithms involved in Section 3. Section 4 concludes.

2 Publication Monitoring System

Figure 1 sketches the general architecture of the system. A user can configure the information service by providing a set of preferences. These include the names of all companies that he or she would like to monitor, as well as all business areas (e.g., biotechnology, computer hardware) of interest. The spider cyclically downloads a set of newspapers, journals, and discussion boards. The set of news sources is fixed in advance and does not depend on the users' choices. All downloaded messages are recorded in a news database after the extraction engine has stripped the HTML code in which each message is embedded (header and footer parts as well as HTML tags, pictures, and advertisements).

[Fig. 1. Overview of the SemanticEdge Publication Monitoring System: clients access the server via web browser; the server comprises a spider, an information extraction engine, text classification, emotional analysis, and an indexer with keyword search, operating on a news story database and a customer profile database to produce personalized press clippings with analysis functionality; an initial configuration populates the customer profile database.]

The spider developed by SemanticEdge is configured by providing a set of patterns that all URLs to be downloaded have to match. Typically, online issues of newspapers have a fairly fixed site structure and only vary the dates and story numbers in the URLs daily. Depending on the difficulty of the site structure, configuring the spider such that all current news stories, but no advertisements, archives, or documents that do not directly belong to the newspaper, are downloaded requires between one and four hours.

The text classifier, named entity recognizer, and emotional analyzer operate on this database. The text classifier categorizes all news stories and newsgroup postings, whereas the emotional analyzer is only used for newsgroup postings; it classifies the emotional state that a message was likely to be written in. For each client company, a customized press clipping is generated, including summarization and visualization functionality. The press clipping consists of a set of dynamically generated web pages that a user can view in a browser after providing a password.

The system visualizes the number of publications by source, by subject, and by referred company. For each entry, an emotional score between zero (very negative) and one (very positive) is visualized as a red or green bar, indicating the attitude of the article (Fig. 2) or of the set of summarized articles. Figure 2 shows the list of all articles relevant to a client; Figure 3 shows the summary mode, in which the system summarizes all articles either from one news source, or about one company, or related to one business sector per line. The average positive or negative attitude of the articles summarized in one line is visualized by a red or green bar. Several diagrams visualize the frequency of referrals to business sectors or individual companies and the average expressed attitude (Figure 4).

3 Intelligent Document Analysis

Document analysis consists of information extraction (including recognition of named entities), subject classiﬁcation, and emotional state analysis.


Fig. 2. Press clipping for client company: message overview

Fig. 3. Press clipping for client company: company summary


Fig. 4. Top: frequency of messages related to business sectors. Bottom: expressed emotional attitude toward companies


3.1 Information Extraction

Two main paradigms of information extraction agents that can be trained from hand-labeled documents exist: algorithms that learn extraction rules (e.g., [8,14,6]) and statistical approaches such as Markov models [25,17], partially hidden Markov models [21,23], and conditional random fields [15]. Rule-based information extraction algorithms appear to be particularly suited to extracting text from pages with a very strict structure and little variability between documents. In order to learn how to extract the text body from the HTML page of a Yahoo! message board, the proprietary rule learner that we use needs only one example to identify where, in the document structure, the information to be extracted is located. We can then extract text bodies from other messages with equal HTML structure with an accuracy of 100%. For many other information extraction tasks, such as recognizing company names or stock recommendations, rule-based learners do not provide enough robustness to deal with the high variability of natural language.

Hidden Markov models (HMMs) (see [20] for an introduction) are a very robust statistical method for the analysis of temporal data. An HMM consists of finitely many states {S1, . . . , SN} with start probabilities πi = P(q1 = Si) and transition probabilities aij = P(qt+1 = Sj | qt = Si). Each state is characterized by a probability distribution bi(Ot) = P(Ot | qt = Si) over observations. In the information extraction context, an observation is typically a token. The information items to be extracted correspond to the n target states of the HMM; background tokens without a label are emitted in all HMM states which are not target states. The HMM parameters can be learned from data using the Baum-Welch algorithm. When the HMM parameters are given, the model can be used to extract information from a new document.
Firstly, the document has to be transformed into a sequence of tokens; for each token, several attributes are determined, including the word stem, the part of speech, the HTML context, and attributes that indicate whether the token contains letters or digits, starts with a capital letter, and so on. Thus, the document is transformed into a sequence of attribute vectors. Secondly, the forward-backward algorithm [20] is used to determine, for each token, the most likely state of the HMM that emitted it. If, for a given token, the most likely state is one of the background states, then this token can be ignored. If the most likely state is one of the target states, and thus corresponds to one of the items to be extracted, then the token is extracted and copied into the corresponding database field.

In order to adapt the HMM parameters, a user first has to label the information to be extracted manually in example documents. Such partially labeled documents form the input to the learning algorithm, which then generates the HMM parameters. We use a variant of the Baum-Welch algorithm [23] to find the model parameters which are most likely to produce the given documents and are consistent with the labels added by the user.
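The decoding step described above can be sketched as a standard posterior (forward-backward) decoder. The function below is generic: it takes any start/transition/emission arrays of the kind defined in the text and returns, for each token, the index of the state with the highest posterior probability. It is a minimal illustration of the technique, not the SemanticEdge code.

```python
import numpy as np

def posterior_states(pi, A, B, obs):
    """Forward-backward decoding: for each token t, return the index
    of the state with the highest posterior probability P(q_t = S_i | O).
    pi: (N,) start probabilities; A: (N, N) transition matrix;
    B: (N, V) emission matrix; obs: list of token indices."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.ones((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                      # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):             # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)  # normalize per token
    return gamma.argmax(axis=1)
```

Tokens decoded to a background state are discarded; tokens decoded to a target state would be copied into the corresponding database field, as described in the text.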

Clipping and Analyzing News Using Machine Learning Techniques


Figure 5 shows the GUI of the SemanticEdge information extraction environment. HMM-based and rule-based learners are plugged into the system.

Fig. 5. GUI of the information extraction engine

For specialized information extraction tasks, such as finding company names in news stories, specially tailored information extraction agents outperform more general approaches such as HMMs. For instance, most companies that are reported about are listed on some stock exchange. To recognize these companies, we only need to maintain a dynamically growing database.

3.2 Subject Classification

For the subject classification step, we have defined a set of message subject categories (e.g., IPO announcement, ad hoc message) and a set of business sector and market categories. The resulting classifiers assign to each message a set of relevant subjects and sectors. The classification proceeds in several steps. First, a text is tokenized and the resulting tokens are mapped to their word stems. We then count, for each word stem and each example text, how often that word occurs in the text. We thus transform each text into a feature vector, treating a text as a bag of words. Finally, we weight each feature by the inverse frequency of the corresponding word, which has generally been observed to increase the accuracy of the resulting


classifiers (e.g., [10,22]). This procedure maps each text to a point in a high-dimensional space. The Support Vector Machine (SVM) [11] is then used to efficiently find a hyperplane which separates positive from negative examples such that the margin between any example and the plane is maximized. For each category we thus obtain a classifier which can take a new text and map it to a negative or positive value, measuring the document's relevance for the category.

During the application phase, the support vector machine returns, for each category, the value of its decision function, which can range from large negative to large positive values. It is necessary to define a threshold value above which a document is considered to belong to the corresponding category. There are several criteria by which this threshold can be set; perhaps the most popular is the precision-recall breakeven point. Precision quantifies the probability that a document really belongs to a class, given that it is predicted to lie in that class. Recall, on the other hand, quantifies how likely it is that a document really belonging to a category is in fact predicted to be a member of that class by the classifier. By lowering the threshold value of the decision function we can increase recall and decrease precision, and vice versa. The point at which precision equals recall is often used as a normalized measure of the performance of classification and IR methods. Varying the threshold leads to precision and recall curves. Figure 6 shows the GUI of our SVM-based text categorization tool.
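A minimal sketch of the weighting and thresholding steps may clarify the pipeline. The toy functions below compute inverse-document-frequency weights for bag-of-words counts and the precision and recall obtained at a given decision threshold; any texts, scores, and labels used with them are invented examples, and in the real system the decision values come from the SVM [11].

```python
import math
from collections import Counter

def tfidf_vectors(texts):
    """Bag-of-words term counts, each weighted by the (log) inverse
    document frequency of the word across the collection."""
    docs = [Counter(t.lower().split()) for t in texts]
    df = Counter(w for d in docs for w in d)   # document frequency
    n = len(docs)
    return [{w: c * math.log(n / df[w]) for w, c in d.items()}
            for d in docs]

def precision_recall(scores, labels, threshold):
    """Precision and recall when every document whose decision value
    reaches the threshold is assigned to the category."""
    pred = [s >= threshold for s in scores]
    tp = sum(p and y for p, y in zip(pred, labels))
    fp = sum(p and not y for p, y in zip(pred, labels))
    fn = sum(not p and y for p, y in zip(pred, labels))
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall
```

Sweeping the threshold over the observed decision values traces out the precision and recall curves; the breakeven point is the threshold at which the two are equal.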

Fig. 6. GUI of the text classiﬁcation engine


It is also possible to define the accuracy (the probability of the classifier making a correct prediction for a new document) as a performance measure. Unfortunately, many categories (such as IPO announcement) are so infrequent that a classifier which never predicts that a document belongs to the class can achieve an accuracy of as much as 99.9%. This renders accuracy less suited as a performance metric than precision/recall curves.

For each category, we manually selected about 3000 examples; between 60 and 700 of these examples were positive. Figure 7 shows precision, recall, and accuracy of some randomly selected classes over the threshold value. The curves are based on hold-out testing on 20% of the data. Note that, for many of these classes such as xxx, the prior ratio of positive examples is extremely small (such as 1.4%). Specialized categories, such as "initial public offering announcement", can be recognized almost without error; "fuzzy" concepts like "positive market news" impose greater uncertainties.
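The arithmetic behind the accuracy caveat above is easy to reproduce. With a hypothetical corpus of 10,000 documents of which only 10 (a 0.1% base rate) belong to the rare category, a classifier that never predicts the category looks excellent by accuracy and useless by recall:

```python
# Hypothetical corpus: 10,000 documents, 10 of which (0.1%) belong
# to a rare category such as "IPO announcement".
n_docs, n_pos = 10_000, 10

# A degenerate classifier that never predicts the rare category:
tp, fn = 0, n_pos              # it finds none of the positives
fp, tn = 0, n_docs - n_pos     # but is right on every negative

accuracy = (tp + tn) / n_docs  # 99.9%, despite being useless
recall = tp / (tp + fn)        # zero: no positive is ever found
```

This is why the evaluation in the text relies on precision/recall curves rather than accuracy.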

3.3 Emotional Classification

In psychology, a space of universal, culturally independent base emotional states has been identified according to differential emotions theory (e.g., [5,9]); ten clusters within this emotional space are generally considered base emotions. These are interest, happiness, surprise, sorrow, anger, disgust, contempt, shame, fear, and guilt (Figure 4). While it is typically impossible to analyze the emotional state of the author of a sober newspaper article, authors of newsgroup postings often do not conceal their emotions. Given a posting, we use an SVM to determine, for each of the ten emotional states, a score that rates the likelihood of that emotion for the author. We average the scores over all postings related to a company or a product and visualize the result as in Figure 4. We can also project the emotional scores onto a "positive-negative" ray and visualize the resulting score as a red or green bar as in Figure 2.

We manually classified postings to discussion boards as positive, negative, and neutral for each of the ten base emotional states. Emotional classification of messages turned out to be a fairly noisy process; the judgment on the emotional content of postings usually varies between individuals. Unfortunately, we found no positive examples for disgust, but between 2 and 21 positive and between 16 and 92 negative examples for the other states. Figure 8 shows precision and recall curves for those emotions for which we found the most positive examples, based on 10-fold cross validation. As we expected, recognizing emotions is a very difficult task, in particular from the small samples available. Still, the recognizer performs significantly better than random guessing. Emotional classifiers with a rather high threshold can often achieve reasonable precision values. Also, in many cases in which human and classifier disagree, it is not easy to tell whether the human or the classifier is wrong.
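The aggregation just described (per-emotion scores averaged over a company's postings, then projected onto the positive-negative ray) might look as follows. The ten emotion names come from the text, but the +1/-1 valence signs assigned to each emotion are our own illustrative assumption, not taken from the paper:

```python
# Illustrative aggregation of per-posting emotion scores.
EMOTIONS = ["interest", "happiness", "surprise", "sorrow", "anger",
            "disgust", "contempt", "shame", "fear", "guilt"]
# Assumed sign of each emotion on the "positive-negative" ray:
VALENCE = {"interest": 1, "happiness": 1, "surprise": 0, "sorrow": -1,
           "anger": -1, "disgust": -1, "contempt": -1, "shame": -1,
           "fear": -1, "guilt": -1}

def company_profile(postings):
    """Average each emotion's score over all postings about a company.
    Each posting is a dict mapping emotion name to an SVM score."""
    n = len(postings)
    return {e: sum(p[e] for p in postings) / n for e in EMOTIONS}

def valence(profile):
    """Project the averaged scores onto the positive-negative ray;
    positive values would be shown green, negative values red."""
    return sum(VALENCE[e] * s for e, s in profile.items())
```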


Fig. 7. Precision, recall, and accuracy for subject classiﬁcation. First row: “US treasury”, “mergers and acquisition”; second row: “positive” / “negative market and economy news”; third row: “initial public oﬀering announcement”, “currency and exchange rates”.


Fig. 8. Precision, recall, and accuracy for emotional classiﬁcation. First row: positive versus negative, anger; second row: contempt, fear.

4 Conclusion

We describe a system that monitors online news sources and discussion boards, downloads the content regularly, extracts the document bodies, analyzes messages by content and emotional state, and generates customer-specific press clippings. A user of the system can specify his or her information needs by entering a list of company names (e.g., the name of their own company and relevant competitors) and selecting from a set of message types and business sectors. Information extraction tasks are addressed by rule induction and hidden Markov models; the Support Vector Machine is used to learn classifiers from hand-labeled data. The customer-specific news stories are listed individually, as well as summarized by several criteria. Diagrams visualize how frequently business sectors or companies are cited over time. The resulting press clippings are generated in near real-time and fully automatically. This tool enables companies to keep track of how they are being perceived in news groups and in the press. It is also inexpensive compared to press clipping agencies. On the down side, the system is certain to miss all news stories that appear only in printed issues. Also, the classifier has a certain inaccuracy which entails the risk of missing relevant articles. Of course, this risk is also present with press clipping agencies. Nearly all studied subject categories can be recognized very reliably using support vector classifiers.


References

1. A. Aamodt and E. Plaza. Case-based reasoning: foundations, issues, methodological variations, and system approaches. AICOM, 7(1):39-59, 1994.
2. N. Belkin and W. Croft. Information filtering and information retrieval: two sides of the same coin? Communications of the ACM, 35(12):29-38, 1992.
3. M. Craven, D. DiPasquo, D. Freitag, A. K. McCallum, T. M. Mitchell, K. Nigam, and S. Slattery. Learning to construct knowledge bases from the World Wide Web. Artificial Intelligence, 118(1-2):69-113, 2000.
4. L. Eikvil. Information extraction from the world wide web: a survey. Technical Report 945, Norwegian Computing Center, 1999.
5. P. Ekman, W. Friesen, and P. Ellsworth. Emotion in the Human Face: Guidelines for Research and Integration of Findings. Pergamon Press, 1972.
6. G. Grieser, K. Jantke, S. Lange, and B. Thomas. A unifying approach to HTML wrapper representation and learning. In Proceedings of the Third International Conference on Discovery Science, 2000.
7. R. Grishman and B. Sundheim. Message Understanding Conference - 6: a brief history. In Proceedings of the International Conference on Computational Linguistics, 1996.
8. N. Hsu and M. Dung. Generating finite-state transducers for semistructured data extraction from the web. Journal of Information Systems, Special Issue on Semistructured Data, 23(8), 1998.
9. C. Izard. The Face of Emotion. Appleton-Century-Crofts, 1971.
10. T. Joachims. A probabilistic analysis of the Rocchio algorithm with TFIDF for text categorization. In Proceedings of the 14th International Conference on Machine Learning, 1997.
11. T. Joachims. Text categorization with support vector machines. In Proceedings of the European Conference on Machine Learning, 1998.
12. T. Joachims, D. Freitag, and T. Mitchell. WebWatcher: a tour guide for the World Wide Web. In Proceedings of the 15th International Joint Conference on Artificial Intelligence (IJCAI-97), pages 770-777, San Francisco, 1997. Morgan Kaufmann Publishers.
13. J. Konstan, B. Miller, D. Maltz, J. Herlocker, L. Gordon, and J. Riedl. GroupLens: applying collaborative filtering to Usenet news. Communications of the ACM, 40(3):77-87, 1997.
14. N. Kushmerick. Wrapper induction: efficiency and expressiveness. Artificial Intelligence, 118:15-68, 2000.
15. J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In Proceedings of the International Conference on Machine Learning, 2001.
16. P. Maes. Agents that reduce work and information overload. Communications of the ACM, 37(7), 1994.
17. A. McCallum, D. Freitag, and F. Pereira. Maximum entropy Markov models for information extraction and segmentation. In Proceedings of the Seventeenth International Conference on Machine Learning, 2000.
18. A. Moukas. Amalthaea: information discovery and filtering using a multiagent evolving ecosystem. In Proceedings of the Conference on Practical Application of Intelligent Agents and Multi-Agent Technology, 1996.
19. M. Pazzani, J. Muramatsu, and D. Billsus. Syskill and Webert: identifying interesting web sites. In Proceedings of the National Conference on Artificial Intelligence, pages 54-61, 1996.
20. L. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-285, 1989.
21. T. Scheffer, C. Decomain, and S. Wrobel. Active hidden Markov models for information extraction. In Proceedings of the International Symposium on Intelligent Data Analysis, 2001.
22. T. Scheffer and T. Joachims. Expected error analysis for model selection. In Proceedings of the Sixteenth International Conference on Machine Learning, 1999.
23. T. Scheffer and S. Wrobel. Active learning of partially hidden Markov models. In Proceedings of the ECML/PKDD Workshop on Instance Selection, 2001.
24. M. Sahami, M. Craven, T. Joachims, and A. McCallum, editors. Learning for Text Categorization: Proceedings of the ICML/AAAI Workshop. AAAI Press, 1998.
25. K. Seymore, A. McCallum, and R. Rosenfeld. Learning hidden Markov model structure for information extraction. In AAAI'99 Workshop on Machine Learning for Information Extraction, 1999.
26. B. Sheth. NEWT: a learning approach to personalized information filtering. Master's thesis, Department of Electrical Engineering and Computer Science, MIT, 1994.

Spherical Horses and Shared Toothbrushes: Lessons Learned from a Workshop on Scientific and Technological Thinking

Michael E. Gorman¹, Alexandra Kincannon², and Matthew M. Mehalik¹

¹ Technology, Culture & Communications, School of Engineering & Applied Science, P.O. Box 400744, University of Virginia, Charlottesville, VA 22904-4744 USA
{meg3c, mmm2f}@virginia.edu
² Department of Psychology, P.O. Box 400400, University of Virginia, Charlottesville, VA 22904-4400 USA
kincannon@virginia.edu

Abstract. We briefly summarize some of the lessons learned in a workshop on cognitive studies of science and technology. Our purpose was to assemble a diverse group of practitioners to discuss the latest research, identify the stumbling blocks to advancement in this field, and brainstorm about directions for the future. Two questions became central themes. First, how can we combine artificial studies involving 'spherical horses' with fine-grained case studies of actual practice? Results obtained in the laboratory may have low applicability to real world situations. Second, how can we deal with academics' attachments to their theoretical frameworks? Academics often like to develop unique 'toothbrushes' and are reluctant to use anyone else's. The workshop illustrated that toothbrushes can be shared and that spherical horses and fine-grained case studies can complement one another. Theories need to deal rigorously with the distributed character of scientific and technological problem solving. We hope this workshop will suggest directions more sophisticated theories might take.

1 Introduction

At the turn of the 21st century, the most valuable commodity in society is knowledge, particularly new knowledge that may give a culture, a company, or a laboratory an advantage [1-3]. Therefore, it is vital for the science and technology studies community to study the thinking processes that lead to discovery, new knowledge and invention. Knowledge about these processes can enhance the probability of new and useful technologies, clarify the process by which new ideas are turned into marketable realities, make it possible for us to turn students into ethical inventors and entrepreneurs, and facilitate the development of business strategies and social policies based on a genuine understanding of the creative process.

2 A Workshop on Scientific and Technological Thinking

In order to get access to cutting-edge research on techno-scientific thinking, Michael Gorman obtained funding from the National Science Foundation, the Strategic Institute of the Boston Consulting Group, and the National Collegiate Inventors and Innovators Alliance to hold a workshop at the University of Virginia from March 24-27, 2001. With assistance from Alexandra Kincannon, Ryan Tweney, and others, he assembled a diverse group of practitioners, focusing on those in the middle of their careers and also on junior faculty and graduate students who represent the future. There were 29 participants, including 18 senior or mid-career researchers and 11 junior faculty and graduate students. Representatives from the NSF, the Strategic Institute of the BCG, and the NCIIA also attended. Their role was to keep participants focused on lessons learned, even as the participants worked to assess the state of the art and push beyond it, establishing new directions for research on scientific and technological thinking.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 74-86, 2001. © Springer-Verlag Berlin Heidelberg 2001

In the rest of this brief paper, Gorman and Kincannon, two of the organizers of the workshop, and Matthew Mehalik, one of the participants, will highlight results from this workshop, citing the work of participants where appropriate and adding interpretive material of their own.¹

Two questions dominated the workshop, each illustrated by a metaphor. David Gooding, a philosopher from the University of Bath who has done fine-grained studies of the thinking processes of Michael Faraday, told a joke that set up one theme. In the joke, a multimillionaire offered a prize for predicting the outcome of a horse race to a stockbreeder, a geneticist, and a physicist. The stockbreeder said there were too many variables, the geneticist could not make a prediction about any particular horse, but the physicist claimed the prize, saying he could make the prediction to many decimal places—provided it were a perfectly spherical horse moving through a vacuum. This metaphor led to a question: How can we combine artificial studies involving 'spherical horses' and fine-grained case studies of actual practice? Results obtained under rigorous laboratory conditions may have what psychologists call low ecological validity, or low applicability to real world situations [4].
Highly abstract computational models often ignore the way in which real-world knowledge is embedded in social contexts and embodied in hands-on practices [5]. The second metaphor came from Christian Schunn, then at George Mason University and now at the University of Pittsburgh, who noted that taxonomies and frameworks are like toothbrushes—no one wants to use anyone else’s. This metaphor led to another question: How can we transcend academics’ attachments to their individual theoretical frameworks? Academic psychologists, historians, sociologists and philosophers like to develop and refine unique toothbrushes and are reluctant to use anyone else’s. Real-world practitioners are not as fussy; they are willing to assemble a ‘bricolage’ of elements from various frameworks that academics might regard as incommensurable.

3 A Moratorium against Spherical Horses?

Nancy Nersessian, a philosopher and cognitive scientist from the Georgia Institute of Technology, reminded participants that Bruno Latour declared a ten-year moratorium against cognitive studies of science in 1986. Latour was one of the key figures in promoting a new sociology of scientific knowledge. He and others were reacting

¹ The views reflected here are those of the authors, and have not been endorsed by workshop participants, the NSF, BCG, or the NCIIA. All participants were taped, with their consent, and we have used these tapes in an effort to reconstruct highlights. Thanks to Pat Langley for his comments on a draft.


against the idea that science was a purely rational enterprise, carried out in an abstract cognitive space.

Cognitive scientists like Herbert Simon contributed to this abstract cognizer view² of science. Simon was one of the founders of a movement Nersessian labeled "Good Old Fashioned Artificial Intelligence" (GOFAI). Simon's toothbrush, or framework, began with the assumption that there is nothing particularly unique about what a Kepler does—the same thinking processes are used on both ordinary and extraordinary problems [6]. Simon was a revolutionary in the Kuhnian sense; he played a major role in creating artificial intelligence and linking it with a new science of thinking, called cognitive science.

Peter Slezak used programs like BACON to turn the tables on Latour's moratorium:

A decisive and sufficient refutation of the 'strong programme' in the sociology of scientific knowledge (SSK) would be the demonstration of a case in which scientific discovery is totally isolated from all social or cultural factors whatever. I want to discuss examples where precisely this circumstance prevails concerning the discovery of fundamental laws of the first importance in science. The work I will describe involves computer programs being developed in the burgeoning interdisciplinary field of cognitive science, and specifically within 'artificial intelligence' (AI). The claim I wish to advance is that these programs constitute a 'pure' or socially uncontaminated instance of inductive inference, and are capable of autonomously deriving classical scientific laws from the raw observational data [7, pp. 563-564].

Slezak argued that if programs like BACON [8, 9] can discover, then there is no need to invoke all these interests and negotiations the sociologists use to explain discovery. His claims sparked a vigorous debate in the November 1989 issue of the journal Social Studies of Science. Latour and Slezak illustrate how academics can create almost incommensurable frameworks.
If the Simon perspective is a toothbrush, then Latour is denying that it even exists—and vice versa. Nersessian reminded participants that, had Simon been at the workshop, he would have argued that his toothbrush does incorporate the social and cultural; it is just that all of this is represented symbolically in memory [10]. Therefore, cognition is about symbol processing. These symbols could be as easily instantiated in a computer as in a brain.

In contrast, Greeno and others advocate a position whose roots might be traced to Gibson and Dewey: that knowledge emerges from the interaction between the individual and the situation [11]. Cognition is distributed in the environment as well as the brain, and is shared among individuals [12, 13]. Merlin Donald discusses the role of culture in the evolution of cognition [14]. Nersessian in her own work explored how cultural factors can account for differences between the problem-solving approaches of scientists like Maxwell and Ampere [15].

In the symbol-processing view, discovery and invention are merely aspects of a general problem-solving system that can best be represented at the 'spherical horse' level. In the situated and distributed view, discovery and invention are practices that

² Simon intended to be a participant in our workshop, but died shortly before it—a great tragedy and a great loss. During the planning stages, he referred to this as a workshop of 'right thinkers'. For tributes to him, see http://www.people.virginia.edu/~apk5t/STweb/mainST.html.


need to be studied in their social context. This situated cognition perspective comes much closer to that of sociologists and anthropologists of science [16], but advocates like Norman and Hutchins still talk about the importance of representations like mental models.

Jim Davies, from the Georgia Institute of Technology, applied Nersessian's cognitive-historical approach to a case study of the use of visual analogy in scientific discovery. Davies analyzed the process of conceptual change in Maxwell's work on electromagnetism and applied to it a model of visual analogical problem solving called Galatea. He found that visual analogy played an important role in the development of Maxwell's theories and demonstrated that the cognitive-historical approach is useful for understanding general cognitive processes.

Ryan Tweney, a co-organizer of the workshop, described his own in vivo case study of Michael Faraday's work on the interaction of light and gold films [33]. Tweney is in the process of replicating these experiments to unpack the tacit knowledge that is embodied in the cognitive artifacts created by Faraday. He hopes to do a kind of material protocol analysis that goes beyond the verbal material that is in Faraday's diary. One end result might be a digital version of Faraday's diary that includes images and perhaps even QuickTime movies of replications of his experiments. This kind of study potentially bridges the gap between situated and symbolic studies of discovery.

4 A Common Set of Toothbrushes?

David Klahr, a cognitive psychologist at Carnegie Mellon, has shown a preference for spherical horses, conducting experiments on scientific thinking. However, his experiments have used sophisticated, complex tasks. For example, he and two of his students (also workshop participants), Kevin Dunbar and Jeff Shrager, asked participants in a series of experiments to program a device called the Big Trak, and studied the processes they used to solve this problem. The Big Trak was a battery-powered vehicle that could be programmed, via a keypad, to move according to instructions. One of the keys was labeled RPT. Participants had to discover its function. Following in Herb Simon's footsteps, Klahr, with Dunbar and Schunn, characterized subjects' performance as a search in two problem spaces, one occupied by possible experiments, the other by hypotheses [17]. They found that one group of subjects (Theorists) preferred to work in the hypothesis space, proposing about half as many experiments as the second group (Experimenters). Almost all of the former's experiments were guided by a hypothesis, whereas the latter's were often simply exploratory.

Based on this and other work, Klahr proposed a possible general framework, or shareable toothbrush, for classifying the different kinds of cognitive studies. This general framework is based on multiple problem spaces, and whether the study was a general one, using an abstract task like the Big Trak, or domain-specific, like Nersessian's studies of Maxwell [18]. Dunbar, currently at McGill University and moving to Dartmouth in the fall, added to this general framework the idea of classifying experiments based on whether they were in vitro (controlled laboratory experiments) or in vivo (case studies). Computational simulations can be based on either in vivo or in vitro studies. A system for classifying studies of scientific discovery might begin with a 2x2 matrix.
Big Trak is an example of an in vitro technique; the work on Maxwell described by Nersessian,


on Faraday by Tweney, and on nuclear fission by Andersen, are examples of in vivo work. The three in vivo research programs did not explicitly distinguish between hypothesis and experiment spaces, but the practitioners studied generated both hypotheses and experiments. The rest of this paper will feature highlights from the workshop that will force us to expand and transform this classification scheme (see Table 1).

Dunbar's work has iterated between in vivo and in vitro studies. The value of in vitro work is the way in which it allows for control and isolation of factors—like the way in which the possibility of error encourages experimental participants to adopt a confirmatory heuristic [19]. Dunbar thinks it is important to compare such findings with what scientists actually do. He has conducted a series of in vivo studies of molecular biology laboratories [20, 21]. Group studies have the heuristic value of forcing people to explain their reasoning. Regarding error, the molecular biologists had evolved special controls to check each step in a complex procedure in order to eliminate error. Dunbar ran an in vitro study in which he found that undergraduate molecular biology students would also employ this kind of control on a task that simulated the kind of reasoning used in molecular biology [22]. Dunbar's work shows the importance of iterating between in vitro and in vivo studies.

Schunn and his colleagues were interested in how scientists deal with unexpected results, or anomalies. In one study, he videotaped two astronomers interacting over a new set of data concerning the formation of ring galaxies. Schunn found that these researchers noticed anomalies as much as expected results, but paid more attention to the anomalies. The researchers developed hypotheses about the anomalies and elaborated on them visually, whereas they used theory to elaborate on expected results.
When the two astronomers discussed the anomalies, they used terms like 'the funky thing' and 'the dipsy-doodle', staying at a perceptual rather than a theoretical level. Schunn's astronomers were working neither in the hypothesis nor the experimental space; instead, they were working in a space of possible visualizations dependent on their domain-specific experience.

Hanne Andersen, from the University of Copenhagen, described the use of a family resemblance view of taxonomic concepts for understanding the dynamics of conceptual change. She noted that the family resemblance account has been criticized for not being able to distinguish sufficiently between different concepts, the problem of wide-open texture. This limitation could be resolved by including dissimilarity as well as similarity between concepts and by focusing on taxonomies instead of individual concepts. Anomalies can be viewed as violations of taxonomic principles that then lead to conceptual change. Andersen applied this approach to the discovery of nuclear fission, finding that early models of disintegration and atomic structure were revised in light of anomalous experimental results of this taxonomic kind.

Shrager, affiliated with the Department of Plant Biology, Carnegie Institution of Washington, and the Institute for the Study of Learning and Expertise, did a reflective study of his own socialization into phytoplankton molecular biology. In the beginning, he had to be told about every step, even when there were explicit instructions; he needed an extensive apprenticeship. As his knowledge grew, he noted that it was "somewhere between his head and his hands." As his skill developed, he was able to take some of his attention off the immediate task at hand and understand the purpose of the procedures he was using. On at least one occasion, this came together in the "blink of an eye." The cognitive framework he found most useful was his own toothbrush: view application [23]. To his surprise, Shrager found that, "What passes for theory in molecular biology is the same thing that passes for a manual in car mechanics." He found less of a need to keep reflective notes in his diary as he became more proficient, though he continued to record the details of experiments, where particular materials were stored, and all the other procedural details that are vital to a molecular biologist. He commented that, "if you lose your lab notebook, you're hosed."

Gooding indicated that more abstract computational models of the spherical horse variety have not worked well for him. For him, "the beauty is in the dirt." In collaboration with Tom Addis, a computer scientist, he evolved a detailed, computational scheme for representing Faraday's experiments, hypotheses, and construals [24]. Gooding thought that communication ought to be added to the matrix proposed by Klahr and Dunbar (see Table 1).

Paul Thagard, from the University of Waterloo, has been gathering ideas from leaders in the field about what it takes to be a successful scientist. According to Herb Simon, one should not work on what everyone else is working on, and one needs to have a secret weapon, in his case, computational modeling. As part of a case study, Thagard interviewed a microbiologist, Patrick Lee, who accidentally discovered that a common virus has potential as a treatment for cancer. The discovery was the result of a "stupid" experiment in viral replication done by one of Lee's graduate students. The "stupid" experiment produced an anomalous result that eventually led to the generation of a new hypothesis about the virus' ability to kill cancer cells. This chain of events is an example of abductive hypothesis formation, in which hypotheses are generated and evaluated in order to explain data. Once a hypothesis was generated that fit the data, researchers used deduction to arrive at the hypothesis that the virus could kill cancer cells.
Thagard raises the questions of how one decides what experiments to do and how one determines what is a good experiment. These questions are a critical part of the cognitive processes involved in discovery. Thagard is also looking at the role of emotions in scientific inquiry, in judgments about potential experiments, in reactions to unexpected results, and in reactions to successful experiments (Thagard’s model of emotions and science: http://cogsci.uwaterloo.ca). Thagard suggested adding a space of questions to the Klahr framework. Robert Rosenwein, a sociologist at Lehigh, presented an in vitro simulation of science (SCISIM) that comes close to an in vivo environment [25]. Students in a class like Gorman’s Scientific and Technological Thinking (http://128.143.168.25/classes/200R/tcc200rf00.html) take on a variety of social roles in science. Some work in competing labs, others run funding agencies, still others run a journal and a newsletter. The students in the labs try to get funding for their experiments, and then publish the results. They do not do the kinds of fine-grained experimental processes done by participants in Big Trak; instead, they choose the variables they want to combine in an experiment, select a level of precision, and are given a result. Experiments cost ‘simbucks’ and salaries have to be paid, so there is continual pressure to fund the lab. There is a group of independent scientists as well, who have to decide which line of research to pursue. SCISIM adds another column to the matrix, for simulation of pursuit decisions. Pursuit decisions concern which research program to seek funding for (See Table 1). Such decisions are usually made within a network of enterprises. Marin Simina, a cognitive scientist at Tulane, described a computational simulation of Alexander Graham Bell’s network of enterprises.

80
M.E. Gorman, A. Kincannon, and M.M. Mehalik

Howard Gruber coined the term ‘network of enterprises’ to describe the way in which Darwin pursued multiple projects that eventually played a synergistic role in his theory of evolution [26]. Similarly, Alexander Graham Bell had two major enterprises in 1873: making speech visible to the deaf, and sending multiple messages down a single wire. These enterprises were synthesized in his patent for a speaking telegraph, which focused on the type of current that would have to be used to transmit and receive speech [27, 28]. Simina created a program called ALEC, which simulated the discovery Bell made on June 2, 1875. At that time, Bell’s primary goal was to achieve fame and fortune by solving the problem of multiple telegraphy; he had suspended the goal of transmitting speech because his mental model for a transmitter contained an indefinite number of metal reeds—it was not clear how it could be built. On June 2, 1875, a single tuned reed transmitted multiple tones with sufficient volume to serve as a transmitter for the human voice. Bell was not seeking this result; he wanted the reed to transmit only a single tone. But this serendipitous result allowed him to activate his suspended goal and instruct Watson to build the first telephone [29]. ALEC was able to simulate the process of suspending the goal and how Bell was primed to reactivate it by a result.

5 Collaboration and Invention

Gary Bradshaw, a cognitive scientist at Mississippi State and a collaborator with Herb Simon, talked about “stepping off Herb’s shoulders into his shadow.” In a study of the Wright Brothers, he adapted Klahr’s framework to invention, creating three spaces: function, hypothesis and design [30]. One of the major reasons the Wrights succeeded where others failed was that the brothers decomposed the problem into separate functions—like vertical lift, horizontal stability, and turning. Other inventors worked primarily in a design space, adding features like additional wings without the careful functional analysis done by the Wrights. This suggests that function and design spaces ought to be added for inventors (see Table 1). To see how well his framework of invention work-spaces held up, Bradshaw tried another case—the rocket boys from West Virginia, immortalized in a book by Homer Hickam [31], and in the film October Sky [32]. Their problem of rocket construction could be decomposed into multiple spaces, but a complete factorial of all the possible variations would come close to two million cells, so they could not follow the strategy called Vary One Thing at a Time (VOTAT)—they did not have the resources. Although the elements of the rocket construction were not completely separable, they tested some variables in isolation, such as fuel mixtures in bottles. They also did careful post-launch inspection, and used theory to reduce the problem space; for example, they used calculus to derive their nozzle shape. They built knowledge as they went along, taking good notes. Team members also took different roles—one was more of a scientist, another more of an engineer and project manager. Tweney argued from his own experience that the rocket system was much less decomposable than suggested by Bradshaw’s analysis and that the West Virginia group seemed to hit upon some serendipitous decompositions.
Tweney’s rocket group was stronger in chemistry, so they used theory to create the fuel, and copied the nozzle design. Both were post-Sputnik groups active during the late 1950’s, although Tweney insists that his was a less serious “rocket boy” group than the one studied by Hickam.
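The scale problem behind VOTAT can be made concrete with a little arithmetic. In this illustration (ours, not from the workshop), the number of design variables and the number of levels per variable are hypothetical, chosen only to land near the two-million figure mentioned above:

```python
from math import prod

def full_factorial_size(levels):
    """Number of cells in a complete factorial design:
    the product of the number of levels of each variable."""
    return prod(levels)

def votat_trials(levels):
    """Vary One Thing at a Time: one baseline trial, then
    vary each factor alone through its remaining levels."""
    return 1 + sum(n - 1 for n in levels)

# Hypothetical rocket design space: 8 variables with 5-8 levels each.
levels = [8, 8, 7, 6, 6, 5, 5, 5]
print(full_factorial_size(levels))  # 2016000 -- about two million cells
print(votat_trials(levels))         # 43 -- a feasible number of launches
```

The contrast shows why a resource-limited group must either test variables in isolation or use theory to prune the space, as the rocket boys did.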


Mehalik, a systems engineer at the University of Virginia, developed a framework which combined Hutchins’ analysis of distributed cognition ‘in the wild’ [12] with three states or stages in actor networks:
1. A top-down state in which one actor or group of actors controls the research program and tells others what to do.
2. A trading zone state in which no group of actors has a comprehensive view, but all are connected by a boundary object that each sees differently. Peter Galison uses particle detectors as an example of this sort of boundary object [34].
3. A shared representation state in which all actors have a common perspective on what needs to be accomplished, even if there is still some division of labor based on skills, aptitude and expertise.
Mehalik applied this framework to the invention of an environmentally sustainable furniture fabric by a global group. This network began with a shared mental model based on an analogy to nature, then struggled to settle into a stable trading zone in which participants would trade economic benefits and prestige. The resulting fabric has won almost a dozen major environmental awards and is seen as a leading example of innovative environmental design. Klahr suggested that Mehalik’s research might add another dimension to his overall framework: capturing work in groups and teams. It might be possible to take each of the major actants studied by Mehalik, look at what spaces they worked in, then show links between them and their different activities. Tweney raised an important question about distributed cognition—could intra-individual cognition be modeled in a way similar to inter-individual cognition by including the three-state framework? Michael Hertz, from the University of Virginia, developed a tool for determining causal attribution, and applied it to Monsanto’s initially unsuccessful introduction of GMOs into Europe.
The tool did not allow Hertz to identify a primary cause, but it did reduce the complexity of the decision space for students studying the Monsanto case and trying to determine who or what was at fault. Shrager suggested implementing this tool in an Echo network that would incorporate interaction with the decision-makers themselves. Bernie Carlson raised the question of when it is useful to quantify certain decision situations, again relating to the theme of balancing the use of a tool to reduce complexity in a decision situation against maintaining contextual validity. Ryan Tweney raised the issue of using Hertz’s framework in a predictive sense—the dynamic complexity of the situation may make prediction too difficult; however, prediction is what a company such as Monsanto may be most interested in. Hertz responded by saying that the act of trying to identify causes has heuristic value, especially if a tool helps Monsanto distinguish between the relative roles of factors it can influence and factors that are largely beyond its control. Decision aids and simulations simplify complex situations; decision-makers need to remember that these simplifications may not accurately reflect all important aspects of the underlying situation, including complex, dynamic interactions among variables. Thomas Hughes, a historian of technology, talked about his analysis of collective invention in large-scale systems like the development of the Atlas and Polaris missiles [35]. He extolled the virtues of systems management techniques and the benefits of isolating scientists from bureaucracy. Project management and oversight functions change with the size of the group, and management becomes more explicitly needed with larger groups. Without sufficient oversight, large projects can be too diffuse and


inefficient. Dunbar suggested that having this kind of systems management was one reason why the privately funded Celera outperformed the publicly funded Human Genome Project. William (Chip) Levy, from the Department of Neurosurgery at the University of Virginia, described a neural network that models results of an implicit learning experiment. He uses the model as an illustration of how variability can be an adaptive property in biological terms. Complex systems, like brains and like neural network models, benefit from the random fluctuations of noise. Eliminating variability in these systems would sacrifice too much memory capacity. Variability exists both within and between individuals. Levy’s research highlights the role of tacit knowledge in discovery and invention. Sociologists of science and technology emphasize the tacit dimension [36, 37]. There is a growing cognitive literature on implicit knowledge in psychology [38, 39], but this literature does not connect directly to discovery and invention. Several conference participants mentioned tacit knowledge. Robert Matthews, a cognitive psychologist at Louisiana State and one of the leading researchers on implicit learning [40], predicted that Dunbar’s scientists would be unable to explain why they did what they did. Dunbar responded that the scientists’ after-the-fact stories about how they did what they did had nothing to do with their actual processes. Schunn noted Karmiloff-Smith’s three stages of learning, in which the second stage means you can do something without being able to explain it, and the third stage involves reflection [41]. The way to become aware of one’s implicit knowledge is to watch oneself, which can interfere with performance. Maria Ippolito, from the University of Alaska, compared the creative process exhibited in the writings of Virginia Woolf to that used by scientists.
Ippolito offered Woolf as an example of a scientific thinker in a more general sense and constructed a multi-dimensional database using Woolf’s writings. Through the examination of Woolf’s development as a writer, Ippolito investigated the psychological processes of creative problem solving, including heuristics, scripts and schemata, development of expertise, and search of unstructured problem spaces. Elke Kurz, from the University of Tübingen, commented on two studies in which she observed the softening of often-perceived boundaries between cognitive-historical case study analysis and in-laboratory analyses. She examined how scientists and mathematicians used different representational systems, such as variant forms of calculus, when problem solving. These differences can be traced to historical developments in the different scientific fields. Such historical developments invite historical case analysis as a necessary part of the study of the conceptual resources these different scientists possessed. Kurz also replicated experiments involving perception of size constancy that had been done earlier by Brunswik. During the attempts at replication, Kurz noted how Brunswik needed to constrain the participants’ agency into forms that Brunswik found tolerable in the context of his experiment. Kurz stated that the construction of this context of acceptable agency is a process worth studying using historical case methods, again complementing the in-laboratory style of investigation. Finally, Kurz reported on the difficulties of attempting a replication of a previous experiment because of the changes in many contextual events between the original experiment and the replicated experiment. This situation again invites the crossing of any perceived boundary between the case study and in-laboratory approaches.

6 Lessons Learned

The workshop illustrates that toothbrushes can be shared. The example we used in this paper was the Simon/Klahr multiple spaces framework. Table 1 summarizes the potential spaces identified in the workshop.

Table 1. Different search spaces identified by participants in the workshop. Asterisks denote computational simulations, a kind of ‘spherical horse’ that can be based on either in vivo or in vitro studies. Italics denote spaces that are unique to invention.

Search Spaces               In Vitro            In Vivo
Hypotheses                  Big Trak, SciSim    Maxwell, Faraday
Experiments                 Big Trak, SciSim    Maxwell, Faraday
Pursuit                     SciSim              ALEC*
Communication               SciSim              Faraday
Embodied knowledge          —                   Faraday, Shrager
Taxonomies                  —                   Nuclear fission
Visualizations              —                   Galatea*, Schunn’s astronomers
Questions                   —                   Patrick Lee
Links in a social network   —                   Hughes, Mehalik
Function                    —                   Wright brothers, rocket boys
Design                      —                   Wright brothers, rocket boys

The problem with this framework is that each study seemed to suggest the need for yet another space. There is not always a clear line of demarcation between spaces. For example, SciSim incorporates in vivo cases, which means that it can exist in a kind of gray zone between in vitro and in vivo. Visualizations can be thought experiments, ways of seeing the data, and mental models of a device or even of a social network. Despite its shortcomings, this framework has heuristic value, both for organizing research already done and for suggesting directions for future work. For example, only Bradshaw has worked with function and design spaces, and there is no in vitro work on invention. Mehalik’s work demonstrated the need for mapping movements among spaces across individuals over time. What would happen if we added time-scale to the framework? Schunn suggested (personal communication) that visualizations happen most rapidly, with experiments and hypotheses taking longer, and taxonomies even longer. Hughes and Mehalik remind us that time-scale is partly dependent on the extent to which each of these activities depends on network-building. This framework is also general enough to facilitate comparisons between discovery, invention and artistic creation, as Ippolito noted. More comparisons of this sort are needed.


7 Future of Cognitive Studies of Science and Technology

Bruce Seely, a historian of technology on rotation at the NSF’s Science and Technology Studies program, felt that the workshop showed how cognitive studies of science and technology had grown in sophistication, highlighting the creators of new knowledge in ways that complemented studies of users by other STS disciplines. Tiha von Ghyczy, representing the Strategic Institute of the Boston Consulting Group, noted that managers are happy to use any toothbrush that will help them improve their business strategies, and that they are more concerned about practical results than methodological foundations. Still, he felt that managers would find lessons from the workshop interesting. Strategies have a very short half-life; a successful strategy is quickly imitated by competitors. Therefore, original thinking is essential for business survival. Besides business strategy and science-technology studies, a cognitive approach to invention and discovery should also inform work in ‘mainstream’ cognitive science. Theories and frameworks need to be able to deal in a rigorous way with the shared and distributed character of scientific and technological problem solving, and also with its tacit dimension. We hope this workshop will suggest the outlines that more sophisticated theories and models might take. Ideally, anyone doing a computational model or decision aid for discovery would base it on one or more fine-grained case studies. Tweney and Dunbar have had particularly good success combining in vitro and in vivo approaches. We hope this workshop will encourage more collaborations between those trained in spherical-horse approaches and those capable of going deeply into the details of particular discoveries and inventions.

References

1. Christensen, C.M., The innovator’s dilemma: When new technologies cause great firms to fail. 1997, Boston: Harvard Business School Press.
2. Evans, P. and T.S. Wurster, Blown to bits: How the new economics of information transforms strategy. 2000, Boston: Harvard Business School Press.
3. Nonaka, I. and H. Takeuchi, The knowledge-creating company: How Japanese companies create the dynamics of innovation. 1995, New York: Oxford University Press.
4. Gorman, M.E., et al., Alexander Graham Bell, Elisha Gray and the Speaking Telegraph: A Cognitive Comparison. History of Technology, 1993. 15: p. 156.
5. Shrager, J. and P. Langley, Computational Models of Scientific Discovery and Theory Formation. 1990, San Mateo, CA: Morgan Kaufmann.
6. Simon, H.A., Langley, P.W., & Bradshaw, G., Scientific discovery as problem solving. Synthese, 1981. 47: p. 1-27.
7. Slezak, P., Scientific discovery by computer as empirical refutation of the Strong Programme. Social Studies of Science, 1989. 19(4): p. 563-600.
8. Langley, P., Simon, H.A., Bradshaw, G.L., & Zytkow, J.M., Scientific Discovery: Computational Explorations of the Creative Processes. 1987, Cambridge: MIT Press.
9. Bradshaw, G.L., Langley, P., & Simon, H.A., Studying scientific discovery by computer simulation. Science, 1983. 222: p. 971-975.
10. Vera, A.H. and H.A. Simon, Situated action: A symbolic interpretation. Cognitive Science, 1993. 17(1): p. 7-48.
11. Greeno, J.G. and J.L. Moore, Situativity and symbols: Response to Vera and Simon. Cognitive Science, 1993. 17: p. 49-59.
12. Hutchins, E., Cognition in the Wild. 1995, Cambridge, MA: MIT Press.
13. Norman, D.A., Things That Make Us Smart: Defending Human Attributes in the Age of the Machine. 1993, New York: Addison Wesley.
14. Donald, M., Origins of the modern mind: Three stages in the evolution of culture and cognition. 1991, Cambridge, MA: Harvard.
15. Nersessian, N., How do scientists think? Capturing the dynamics of conceptual change in science, in Cognitive Models of Science, R.N. Giere, Editor. 1992, University of Minnesota Press: Minneapolis. p. 3-44.
16. Suchman, L.A., Plans and Situated Actions: The Problem of Human-Machine Interaction. 1987, Cambridge: Cambridge University Press.
17. Klahr, D., Exploring Science: The cognition and development of discovery processes. 2000, Cambridge, MA: MIT Press.
18. Klahr, D. and H.A. Simon, Studies of scientific discovery: Complementary approaches and convergent findings. Psychological Bulletin, 1999. 125(5): p. 524-543.
19. Gorman, M.E., Simulating Science: Heuristics, Mental Models and Technoscientific Thinking. Science, Technology and Society, ed. T. Gieryn. 1992, Bloomington: Indiana University Press. 265.
20. Dunbar, K., How scientists really reason: Scientific reasoning in real-world laboratories, in The Nature of Insight, R.J. Sternberg and J. Davidson, Editors. 1995, MIT Press: Cambridge, MA. p. 365-396.
21. Dunbar, K., How scientists think, in Creative Thought, T.B. Ward, S.M. Smith, and J. Vaid, Editors. 1997, American Psychological Association: Washington, D.C.
22. Dunbar, K., Scientific reasoning strategies in a simulated molecular genetics environment, in Program of the Eleventh Annual Conference of the Cognitive Science Society. 1989. Ann Arbor, MI: Lawrence Erlbaum Associates.
23. Shrager, J., Commonsense perception and the psychology of theory formation, in Computational Models of Scientific Discovery and Theory Formation, J. Shrager and P. Langley, Editors. 1990, Morgan Kaufmann: San Mateo, CA. p. 437-470.
24. Gooding, D.C. and T.R. Addis, Modelling Faraday’s experiments with visual functional programming 1: Models, methods and examples. 1993, Joint Research Councils’ Initiative on Cognitive Science & Human Computer Interaction Special Project Grant #9107137.
25. Gorman, M. and R. Rosenwein, Simulating social epistemology. Social Epistemology, 1995. 9(1): p. 71-79.
26. Gruber, H., Darwin on Man: A Psychological Study of Scientific Creativity. 2nd ed. 1981, Chicago: University of Chicago Press.
27. Gorman, M.E. and J.K. Robinson, Using History to Teach Invention and Design: The Case of the Telephone. Science and Education, 1998. 7: p. 173-201.
28. Simina, M., Enterprise-directed reasoning: Opportunism and deliberation in creative reasoning, in Cognitive Science. 1999, Georgia Institute of Technology: Atlanta, GA.
29. Gorman, M.E., Transforming nature: Ethics, invention and design. 1998, Boston: Kluwer Academic Publishers.
30. Bradshaw, G., The Airplane and the Logic of Invention, in Cognitive Models of Science, R.N. Giere, Editor. 1992, University of Minnesota Press: Minneapolis. p. 239-250.
31. Hickam, H.H., Rocket boys: A memoir. 1998, New York: Delacorte Press. 368.
32. Gordon, C. (producer) and J. Johnston (director), October Sky. 1999, Universal Studios: Universal City, CA.
33. Tweney, R.D., Scientific Thinking: A cognitive-historical approach, in Designing for Science: Implications for Everyday, Classroom, and Professional Settings, K. Crowley, C.D. Schunn, and T. Okada, Editors. 2001, Lawrence Erlbaum Associates: Mahwah, NJ. p. 141-173.
34. Galison, P.L., Image and logic: A material culture of microphysics. 1997, Chicago: University of Chicago Press.
35. Hughes, T.P., Rescuing Prometheus. 1998, New York: Pantheon Books.
36. Collins, H.M., Tacit knowledge and scientific networks, in Science in Context: Readings in the Sociology of Science, B. Barnes and D. Edge, Editors. 1982, The MIT Press: Cambridge, MA.
37. MacKenzie, D. and G. Spinardi, Tacit knowledge, weapons design, and the uninvention of nuclear weapons. American Journal of Sociology, 1995. 101(1): p. 44-99.
38. Berry, D.C., ed., How Implicit Is Implicit Learning? 1997, Oxford University Press: Oxford.
39. Dienes, Z. and J. Perner, A Theory of Implicit and Explicit Knowledge. Behavioral and Brain Sciences, 1999. 22(5).
40. Matthews, R.C. and L.G. Roussel, Abstractness of implicit knowledge: A cognitive evolutionary perspective, in How Implicit Is Implicit Learning?, D.C. Berry, Editor. 1997, Oxford University Press: Oxford. p. 13-47.
41. Karmiloff-Smith, A., From meta-process to conscious access: Evidence from children’s metalinguistic and repair data. Cognition, 1986. 23(2): p. 95-147.

Functional Trees

João Gama

LIACC, FEP - University of Porto
Rua Campo Alegre, 823, 4150 Porto, Portugal
Phone: (+351) 226078830, Fax: (+351) 226003654
jgama@liacc.up.pt
http://www.niaad.liacc.up.pt/~jgama

Abstract. The design of algorithms that explore multiple representation languages and different search spaces has an intuitive appeal. In the context of classification problems, algorithms that generate multivariate trees are able to explore multiple representation languages by using decision tests based on combinations of attributes. The same applies to model tree algorithms in regression domains, but using linear models at leaf nodes. In this paper we study where to use combinations of attributes in regression and classification tree learning. We present an algorithm for multivariate tree learning that combines a univariate decision tree with a linear function by means of constructive induction. This algorithm is able to use decision nodes with multivariate tests, and leaf nodes that make predictions using linear functions. Multivariate decision nodes are built when growing the tree, while functional leaves are built when pruning the tree. The algorithm has been implemented both for classification problems and regression problems. The experimental evaluation shows that our algorithm has clear advantages with respect to generalization ability when compared against its components and two simplified versions, and competes well against the state-of-the-art in multivariate regression and classification trees.

Keywords: Decision Trees, Multiple Models, Supervised Machine Learning.

1 Introduction

The generalization ability of a learning algorithm depends on the appropriateness of its representation language to express a generalization of the examples for the given task. Different learning algorithms employ different representations, search heuristics, evaluation functions, and search spaces. It is now commonly accepted that each algorithm has its own selective superiority [3]; each is best for some but not all tasks. The design of algorithms that explore multiple representation languages and different search spaces has an intuitive appeal. This paper presents one such algorithm. In the context of supervised learning problems it is useful to distinguish between classification problems and regression problems. In the former the target

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 59–73, 2001.
© Springer-Verlag Berlin Heidelberg 2001


variable takes values in a finite and pre-defined set of unordered values, and the usual goal is to minimize a 0-1 loss function. In the latter the target variable is ordered and takes values in a subset of ℝ. The usual goal is to minimize a squared error loss function. Mainly due to the differences in the type of the target variable, successful techniques in one type of problem are not directly applicable to the other. The supervised learning problem is to find an approximation to an unknown function given a set of labelled examples. To solve this problem, several methods have been presented in the literature. Two of the most representative methods are the general linear model and decision trees. The two methods explore different hypothesis spaces and use different search strategies. In the former the goal is to minimize the sum of squared deviations of the observed values for the dependent variable from those predicted by the model. It is based on the algebraic theory of invariants and has an analytical solution. The description language of the model takes the form of a polynomial that, in its simpler form, is a linear combination of the attributes: w0 + Σi wi × xi. This is the basic idea behind linear regression and discriminant functions [8]. The latter uses a divide-and-conquer strategy: the goal is to decompose a complex problem into simpler problems, recursively applying the same strategy to the sub-problems. Solutions of the sub-problems are combined in the form of a tree. Its hypothesis space is the set of all possible hyper-rectangular regions. The power of this approach comes from the ability to split the space of the attributes into subspaces, whereby each subspace is fitted with a different function. This is the basic idea behind well-known tree-based algorithms [2,13]. In the case of classification problems, a class of algorithms that explore multiple representation languages is the so-called multivariate trees [2,20,12,6,11].
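As an aside not in the paper, the "analytical solution" for the simplest such linear model (a single attribute, y ≈ w0 + w1·x) is the familiar closed-form least-squares fit. A minimal sketch:

```python
def fit_linear(xs, ys):
    """Closed-form least squares for y ~ w0 + w1*x (one attribute).

    Minimizes the sum of squared deviations of the observed values
    from those predicted by the model; no search is needed.
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope: covariance of x and y divided by the variance of x.
    w1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    # Intercept: the fitted line passes through the mean point.
    w0 = my - w1 * mx
    return w0, w1

# y = 2x + 1 exactly, so the fit recovers the coefficients.
w0, w1 = fit_linear([0, 1, 2, 3], [1, 3, 5, 7])
print(w0, w1)  # 1.0 2.0
```

The multi-attribute case solves the analogous normal equations, but the contrast with a tree's recursive search over partitions is already visible here.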
In this sort of algorithm, decision nodes can contain tests based on a combination of attributes. The language bias of univariate decision trees (axis-parallel splits) is relaxed, allowing decision surfaces oblique with respect to the axes of the instance space. As in the case of classification problems, in regression problems some authors have studied the use of regression trees that explore multiple representation languages, here denominated model trees [2,13,15,21,18]. But while in classification problems multivariate decisions appear in internal nodes, in regression problems multivariate decisions appear in leaf nodes. The problem that we study in this paper is where to use decisions based on combinations of attributes. Should we restrict combinations of attributes to decision nodes? Should we restrict combinations of attributes to leaf nodes? Could we use combinations of attributes both at decision nodes and leaf nodes? The algorithm that we present here is an extension of multivariate trees. It is applicable to regression and classification domains, allowing combinations of attributes both at decision nodes and leaves. In the next section of the paper we describe our proposal for functional trees. In Section 3 we discuss the different variants of multivariate models using an illustrative example on regression domains. In Section 4 we present related work both in the classification and regression settings. In Section 5 we evaluate our algorithm on a set of benchmark regression and classification problems. The last section concludes the paper.
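The distinction between the two places a linear combination of attributes can appear can be made concrete with a toy illustration (ours, not the paper's code). An oblique decision node tests a weighted sum of the attributes, while a functional (model) leaf predicts with one; all weights below are arbitrary examples:

```python
def oblique_test(x, weights, threshold):
    """Multivariate decision node: branch on whether w . x exceeds
    a threshold, i.e., a split oblique to the attribute axes."""
    return sum(w * xi for w, xi in zip(weights, x)) > threshold

def model_leaf(x, weights, bias):
    """Functional leaf: predict with a linear model of the
    attributes instead of a constant."""
    return bias + sum(w * xi for w, xi in zip(weights, x))

x = [2.0, 3.0]
# Route the example with an oblique split, then predict with a linear leaf.
if oblique_test(x, [1.0, -1.0], 0.0):
    pred = model_leaf(x, [0.5, 0.0], 1.0)
else:
    pred = model_leaf(x, [0.0, 0.5], 1.0)
print(pred)  # 2.5
```

The paper's question is whether to allow the first construct, the second, or both in the same tree.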

2 The Algorithm for Constructing Functional Trees

The standard algorithm to build univariate trees consists of two phases. In the first phase a large tree is constructed. In the second phase this tree is pruned back. The algorithm to grow the tree follows the standard divide-and-conquer approach. The most relevant aspects are the splitting rule, the termination criterion, and the leaf assignment criterion. With respect to the last criterion, the usual rule is to assign a constant to a leaf node: considering only the examples that fall at this node, the constant is usually the majority class in classification problems or the mean of the y values in the regression setting. With respect to the splitting rule, each attribute value defines a possible partition of the dataset. We distinguish between nominal attributes and continuous ones: in the former the number of partitions is equal to the number of values of the attribute, while in the latter a binary partition is obtained. To estimate the merit of the partition obtained by a given attribute we use the gain ratio heuristic for classification problems and the decrease-in-variance criterion for regression problems. In either case, the attribute that maximizes the criterion is chosen as the test attribute at this node. The pruning phase consists of traversing the tree in a depth-first fashion. At each non-leaf node two measures are estimated: the error of the subtree rooted at this node, computed as a weighted sum of the estimated errors of the subtree's leaves, and the estimated error of the node if it were pruned to a leaf. If the latter is lower than the former, the entire subtree is replaced by a leaf. All of these aspects have several important variants, see for example [2,14]. Nevertheless, all decision nodes contain conditions based on the values of one attribute, and leaf nodes predict a constant.
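For the regression case, the decrease-in-variance splitting criterion described above can be sketched as follows (an illustrative snippet, not the paper's implementation):

```python
def variance(ys):
    """Population variance of a list of target values."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def variance_reduction(ys_left, ys_right):
    """Decrease in variance achieved by a binary partition:
    Var(parent) minus the size-weighted average of the
    children's variances. Larger is a better split."""
    ys = ys_left + ys_right
    n, nl, nr = len(ys), len(ys_left), len(ys_right)
    return (variance(ys)
            - (nl / n) * variance(ys_left)
            - (nr / n) * variance(ys_right))

# A split that cleanly separates low from high targets
# removes all of the parent's variance.
print(variance_reduction([1.0, 1.0], [5.0, 5.0]))  # 4.0
```

The classification analogue replaces variance with an entropy-based score (the gain ratio heuristic mentioned above); in both cases the split that maximizes the criterion is chosen.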

2.1 Functional Trees

In this section we present the general algorithm to construct a functional tree. Given a set of examples and an attribute constructor, the main algorithm used to build a functional tree is presented in Figure 1. This algorithm is similar to many others, except in the constructive step (steps 2 and 3), where a function is built and mapped to new attributes. Some aspects of this algorithm should be made explicit. In step 2, a model is built using the Constructor function. This is done using only the examples that fall at this node. Later, in step 3, the model is mapped to new attributes. The constructor function should be a classifier or a regressor, depending on the type of the problem. In the case of regression problems the constructor function is mapped to one new attribute, the ŷ value predicted by the constructor. In the case of classification problems the number of new attributes is equal to the number of classes. Each

62

J. Gama

Function Tree(Dataset, Constructor)
1. If Stop Criterion(DataSet)
   – Return a Leaf Node with a constant value.
2. Construct a model Φ using Constructor.
3. For each example x ∈ DataSet
   – Compute ŷ = Φ(x).
   – Extend x with a new attribute ŷ.
4. Select the attribute, from both the original and all newly constructed attributes, that maximizes some merit function.
5. For each partition i of the DataSet using the selected attribute
   – Treei = Tree(Dataseti, Constructor)
6. Return a Tree, as a decision node based on the selected attribute, containing the Φ model, and descendants Treei.
End Function

Fig. 1. Building a Functional Tree

new attribute is the probability that the example belongs to one class¹, as given by the constructed model. The merit of each new attribute is evaluated using the merit function of the univariate tree, in competition with the original attributes (step 4). The model built by our algorithm has two types of decision nodes: those based on a test of one of the original attributes, and those based on the values of the constructor function. When using Generalized Linear Models (GLM) [16] as the attribute constructor, each new attribute is a linear combination of the original attributes. Decision nodes based on constructed attributes define a multivariate decision surface. Once a tree has been constructed, it is pruned back. The general algorithm to prune the tree is presented in Figure 2. To estimate the error at each leaf (step 1) we distinguish between classification and regression problems. In the former we assume a binomial distribution, using a process similar to the pessimistic error of C4.5. In the latter we assume a χ² distribution of the variance of the cases at the leaf, using a process similar to the χ² pruning described in [18]. A similar procedure is used to estimate the constructor error (step 3). The pruning algorithm produces two different types of leaves: Ordinary Leaves, which predict a constant, and Constructor Leaves, which predict the value of the Constructor function learned (in the growing phase) at this node. By simplifying our algorithm we obtain different conceptual models. Two interesting lesioned variants are described in the following sub-sections. Bottom-Up Approach. We speak of the Bottom-Up Approach to functional trees when the functional models are used exclusively at leaves. This is the strategy

¹ At different nodes the system considers a different number of classes, depending on the class distribution of the examples that fall at this node.


Function Prune(Tree)
1. Estimate Leaf Error as the error at this node.
2. If Tree is a leaf, Return Leaf Error.
3. Estimate Constructor Error as the error of Φ².
4. For each descendant i
   – Backed Up Error += Prune(Treei)
5. If argmin(Leaf Error, Constructor Error, Backed Up Error)
   – Is Leaf Error
     • Tree = Leaf
     • Tree Error = Leaf Error
   – Is Constructor Error
     • Tree = Constructor Leaf
     • Tree Error = Constructor Error
   – Is Backed Up Error
     • Tree Error = Backed Up Error
6. Return Tree Error
End Function

Fig. 2. Pruning a Functional Tree

used, for example, in M5 [15,21] and in the NBtree system [10]. In our tree algorithm this is done by restricting the selection of the test attribute (step 4 of the growing algorithm) to the original attributes. Nevertheless, we still build the constructor function at each node. The model built by the constructor function is used later in the pruning phase. In this way, all decision nodes are based on the original attributes. Leaf nodes may contain a constructor model. A leaf node contains a constructor model if and only if, in the pruning algorithm, the estimated error of the constructor model is lower than both the backed-up error and the estimated error the node would have if a leaf replaced it. Top-Down Approach. We speak of the Top-Down Approach to functional trees when the multivariate models are used exclusively at decision nodes (internal nodes). In our algorithm, these kinds of models are obtained by restricting the pruning algorithm to choose only between the Backed Up Error and the Leaf Error. In this case all leaves predict a constant value. This is the strategy used, for example, in systems like LMDT [20], OC1 [12], and Ltree [6]. Functional trees extend and generalize multivariate trees. Our algorithm can be seen as a hybrid model that performs a tight combination of a univariate tree and a GLM function. The components of the hybrid algorithm use different representation languages and search strategies. While the tree uses a divide-and-conquer method, the linear regression performs a global minimization approach. While the former performs feature selection, the latter uses all (or almost all) the attributes to build a model. From the point of view of the bias-variance


decomposition of the error [1], a decision tree is known to have low bias but high variance, while GLM functions are known to have low variance but high bias. This is the desirable behaviour for the components of hybrid models.
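To make the procedures of Figures 1 and 2 concrete, here is a schematic Python rendering for the regression case. It is a sketch under simplifying assumptions of ours: resubstitution squared error instead of the pessimistic/χ² estimates, a median-based binary split, and toy constructor and merit functions; node keys such as 'attr' and 'phi' are our convention, not the paper's:

```python
def grow(dataset, constructor, merit, depth=0, max_depth=2, min_size=4):
    """Sketch of the growing loop (Figure 1), regression case.
    dataset is a list of (x, y) pairs with x a dict of numeric attributes."""
    ys = [y for _, y in dataset]
    # Step 1: stop criterion -> leaf predicting a constant (the mean).
    if depth >= max_depth or len(dataset) < min_size or len(set(ys)) == 1:
        return {'predict': sum(ys) / len(ys)}
    # Steps 2-3: build the constructor model Phi, map it to a new attribute.
    phi = constructor(dataset)
    ext = [(dict(x, y_hat=phi(x)), y) for x, y in dataset]
    # Step 4: select the attribute (original or constructed) with best merit.
    best = max(ext[0][0], key=lambda a: merit([x[a] for x, _ in ext], ys))
    cut = sorted(x[best] for x, _ in ext)[len(ext) // 2]
    left = [(x, y) for x, y in ext if x[best] < cut]
    right = [(x, y) for x, y in ext if x[best] >= cut]
    if not left or not right:
        return {'predict': sum(ys) / len(ys)}
    # Steps 5-6: recurse on each partition; keep Phi inside the decision node.
    return {'attr': best, 'cut': cut, 'phi': phi,
            'left': grow(left, constructor, merit, depth + 1),
            'right': grow(right, constructor, merit, depth + 1)}

def predict(tree, x):
    while 'attr' in tree:                      # descend decision nodes
        x = dict(x, y_hat=tree['phi'](x))      # recreate constructed attribute
        tree = tree['left'] if x[tree['attr']] < tree['cut'] else tree['right']
    return tree['predict'] if 'predict' in tree else tree['model'](x)

def prune(tree, dataset):
    """Sketch of the pruning loop (Figure 2). Returns (tree, tree_error)."""
    ys = [y for _, y in dataset]
    const = sum(ys) / len(ys)
    leaf_error = sum((y - const) ** 2 for y in ys)          # step 1
    if 'attr' not in tree:                                  # step 2: a leaf
        return tree, leaf_error
    ctor_error = sum((y - tree['phi'](x)) ** 2 for x, y in dataset)  # step 3
    ext = [(dict(x, y_hat=tree['phi'](x)), y) for x, y in dataset]
    left = [(x, y) for x, y in ext if x[tree['attr']] < tree['cut']]
    right = [(x, y) for x, y in ext if x[tree['attr']] >= tree['cut']]
    tree['left'], e_l = prune(tree['left'], left)           # step 4
    tree['right'], e_r = prune(tree['right'], right)
    backed_up = e_l + e_r
    best = min(leaf_error, ctor_error, backed_up)           # step 5
    if best == leaf_error:
        return {'predict': const}, leaf_error               # ordinary leaf
    if best == ctor_error:
        return {'model': tree['phi']}, ctor_error           # constructor leaf
    return tree, backed_up

# Toy usage: a mean-predicting constructor and a covariance-based merit.
def mean_constructor(dataset):
    m = sum(y for _, y in dataset) / len(dataset)
    return lambda x: m

def cov_merit(vals, ys):
    n, mv, my = len(ys), sum(vals) / len(vals), sum(ys) / len(ys)
    return abs(sum((v - mv) * (y - my) for v, y in zip(vals, ys)) / n)

data = [({'a': float(i), 'b': 0.0}, 0.0 if i < 4 else 10.0) for i in range(8)]
tree = grow(data, mean_constructor, cov_merit)
tree, err = prune(tree, data)
```

On this toy dataset the split on the original attribute wins, and pruning keeps the subtree because the backed-up error is lowest.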

3 An Illustrative Example

In this section we use the well-known regression dataset Housing to illustrate the diﬀerent variants of functional models. The attribute constructor used is the linear regression function. Figure 3(a) presents a univariate tree for the Housing

[Figure omitted: two tree diagrams]
Fig. 3. (a) The univariate regression tree and (b) the top-down regression tree for the Housing problem.

dataset. Decision nodes only contain tests based on the original attributes. Leaf nodes predict the average of the y values taken from the examples that fall at the leaf. In a top-down multivariate tree (Figure 3(b)) decision nodes may (but need not) contain tests based on a linear combination of the original attributes. The tree contains a mixture of learned attributes, denoted as LR Node, and original attributes, e.g. AGE, DIS. Any of the linear-regression attributes can be used both at the node where they have been created and at deeper nodes. For example, LR Node 19 has been created at the second level of the tree. It is used as the test attribute at this node and also (due to the constructive ability) as the test attribute at the third level of the tree. Leaf nodes predict the average of the y values of the examples that fall at the leaf. In a bottom-up multivariate tree (Figure 4(a)) decision nodes only contain tests based on the original attributes. Leaf nodes may (but need not) predict values obtained by using a linear-regression function built from the examples that fall at the node. This is

[Figure omitted: two tree diagrams]
Fig. 4. (a) The bottom-up multivariate regression tree and (b) the multivariate regression tree for the Housing problem.

the kind of multivariate regression tree that usually appears in the literature. For example, systems M5 [15,21] and RT [18] generate this kind of model. Figure 4(b) presents the full multivariate regression tree using both the top-down and bottom-up multivariate approaches. In this case, decision nodes may (but need not) contain tests based on a linear combination of the original attributes, and leaf nodes may (but need not) predict values obtained by using a linear-regression function built from the examples that fall at the node. Figure 5 illustrates the functional models in the case of a classification problem. We have used the UCI dataset Learning Qualitative Structure Activity Relationships (QSARs) – pyrimidines to illustrate the different variants of tree models. This is a complex two-class problem defined by 54 continuous attributes. The attribute constructor used is the LinearBayes [5] classifier. In a bottom-up functional tree (Figure 5(a)) decision nodes only contain tests based on the original attributes. Leaf nodes may (but need not) predict values obtained by using a LinearBayes function built from the examples that fall at the node. Figure 5(b) presents the functional tree using both the top-down and bottom-up multivariate approaches. In this case, decision nodes may (but need not) contain tests based on a linear combination of the original attributes, and leaf nodes may (but need not) predict values obtained by using a LinearBayes function built from the examples that fall at the node.

4 Related Work

Breiman et al. [2] present the first extensive and in-depth study of the problem of constructing decision and regression trees. However, while in the case of decision trees they consider internal nodes with a test based on a linear combination of

[Figure omitted: two tree diagrams]
Fig. 5. (a) The bottom-up functional tree and (b) the functional tree for the QSARs problem.

attributes, in the case of regression trees internal nodes are always based on a single attribute. In the context of classification problems, several algorithms have been presented that can use, at each decision node, tests based on a linear combination of the attributes [2,12,20,6]. The most comprehensive study on multivariate trees has been presented by Brodley and Utgoff in [4]. Brodley and Utgoff discuss several methods for constructing multivariate decision trees: representing a multivariate test, including symbolic and numeric features, learning the coefficients of a multivariate test, selecting the features to include in a test, and pruning of multivariate decision trees. Brodley only considers multivariate tests at inner nodes of a tree. In this context few works consider functional tree leaves. One of the earliest works is the Perceptron tree algorithm [19], where leaf nodes may implement a general linear discriminant function. Kohavi [10] has presented the naive Bayes tree, which uses functional leaves. NBtree is a hybrid algorithm that generates a regular univariate decision tree whose leaves contain a naive Bayes classifier built from the examples that fall at the node. The approach retains the interpretability of naive Bayes and decision trees, while resulting in classifiers that frequently outperform both constituents, especially on large datasets. Gama [7] has presented Cascade Generalization, a method to combine classification algorithms by means of constructive induction. The work presented here closely follows the Cascade method, but extends it to regression domains and allows models with functional leaves. In regression domains, Quinlan [13] has presented the system M5. It builds multivariate trees using linear models at the leaves. In the pruning phase, for each


leaf a linear model is built. Recently, Witten and Frank [21] have extended M5. A linear model is built at each node of the initial regression tree. All the models along a particular path from the root to a leaf node are then combined into one linear model in a smoothing step. Karalic [9] has studied the influence of using linear regression in the leaves of a regression tree. As in the work of Quinlan, Karalic shows that it leads to smaller models with increased performance. Torgo [17] has presented an experimental study of functional models for regression tree leaves. Later, the same author [18] presented the system RT. When using linear models at the leaves, RT builds and prunes a regular univariate tree; then, at each leaf, a linear model is built using the examples that fall at that leaf.

5 Experimental Evaluation

It is commonly accepted that multivariate regression trees should be competitive against univariate models. In this section we evaluate the proposed algorithm, its simplified variants, and its components on a set of classification and regression benchmark problems. In regression problems the constructor is a standard linear regression function. In classification problems the constructor is the LinearBayes classifier [5]. For comparative purposes we also evaluate the system M5³. The main goal of this experimental evaluation is to study the influence on performance of the position of the linear models inside a regression or classification tree. We evaluate three situations:
– Trees that can use linear combinations at each internal node.
– Trees that can use linear combinations at each leaf.
– Trees that can use linear combinations both at internal nodes and at leaves.
All evaluated models are based on the same tree growing and pruning algorithm. That is, they use exactly the same splitting criteria, stopping criteria, and pruning mechanism. Moreover, they share many minor heuristics that individually are too small to mention, but collectively can make a difference. In this way, the differences in the evaluation statistics are due to the differences in the conceptual model. In this work we estimate the performance of a learned model using 10-fold cross-validation. To minimize the influence of the variability of the training set, we repeat this process ten times, each time using a different permutation of the dataset. The final estimate is the mean of the performance statistic obtained in each run of the cross-validation. For regression problems the performance is measured in terms of the mean squared error statistic. For classification problems the performance is measured in terms of the error rate statistic. To apply pairwise comparisons we guarantee that, in all runs, all algorithms learn and test on the same partitions of the data. We compare the performance of the

³ We have used M5 from version 3.1.8 of the Weka environment. We have used several regression systems; the most competitive was M5.


functional tree (FT) against its components: the univariate tree (UT) and the constructor function (linear regression (LR) in regression problems, and LinearBayes (LB) in classification problems). The functional tree is also compared against the two simplified versions: bottom-up (FT-B) and top-down (FT-T). For each dataset, comparisons between algorithms are done using the Wilcoxon matched-pairs signed-ranks test. The null hypothesis is that the difference between performance statistics has median value zero. We consider that a difference in performance has statistical significance if the p value of the Wilcoxon test is less than 0.01.
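For readers who want to reproduce the comparison protocol, the Wilcoxon matched-pairs signed-ranks test can be sketched as follows. This uses the normal approximation to the null distribution (adequate for a few dozen paired observations; exact tables should be used for very small samples), and the helper name is ours:

```python
from math import erf, sqrt

def wilcoxon_signed_rank(errors_a, errors_b):
    """Wilcoxon matched-pairs signed-ranks test over paired error estimates.
    Returns (W+, W-, two-sided p) using the normal approximation."""
    d = [a - b for a, b in zip(errors_a, errors_b) if a != b]  # drop ties
    if not d:
        return 0.0, 0.0, 1.0
    ranked = sorted(d, key=abs)
    ranks, i = {}, 0
    while i < len(ranked):                 # average ranks for tied |d|
        j = i
        while j < len(ranked) and abs(ranked[j]) == abs(ranked[i]):
            j += 1
        for k in range(i, j):
            ranks[k] = (i + j + 1) / 2     # 1-based average rank
        i = j
    w_plus = sum(r for k, r in ranks.items() if ranked[k] > 0)
    w_minus = sum(r for k, r in ranks.items() if ranked[k] < 0)
    n = len(ranked)
    mu = n * (n + 1) / 4
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (min(w_plus, w_minus) - mu) / sigma
    p = 1 + erf(z / sqrt(2))               # = 2 * Phi(z), since z <= 0
    return w_plus, w_minus, min(1.0, p)

wp, wm, p = wilcoxon_signed_rank([2.0, 3.0, 1.0, 4.0, 3.0],
                                 [1.0, 1.0, 1.5, 1.0, 1.5])
```

Production code would normally rely on a tested implementation such as SciPy's `scipy.stats.wilcoxon` rather than this sketch.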

5.1 Results in Regression Domains

We have chosen 20 datasets from the repository of regression problems at LIACC⁴. The choice of datasets was restricted by the criterion that almost all the attributes be ordered, with few missing values⁵. The number of examples varies from 43 to 40768. The number of attributes varies from 5 to 48. The results in terms of MSE and standard deviation are presented in Table 1. The first two columns refer to the results of the components of the hybrid algorithm. The following three columns refer to the simplified versions of our algorithm and the full model. The last column refers to the M5 system. For each dataset, the algorithms are compared against the full multivariate tree using the Wilcoxon signed-rank test. A − (+) sign indicates that on this dataset the performance of the algorithm was worse (better) than that of the full model, with a p value less than 0.01. Table 1 also presents a comparative summary of the results. The first line presents the geometric mean of the MSE statistic across all datasets. The second line shows the average rank of all models, computed for each dataset by assigning rank 1 to the best algorithm, 2 to the second best, and so on. The third line shows the average ratio of MSE. This is computed for each dataset as the ratio between the MSE of one algorithm and the MSE of M5. The fourth line shows the number of significant differences using the signed-rank test, taking the multivariate tree FT as reference. We use the Wilcoxon matched-pairs signed-ranks test to compare the error rates of pairs of algorithms across datasets⁶. The last line shows the p values associated with this test for the MSE results on all datasets, taking FT as reference. It is interesting to note that the full model (FT) significantly improves over both components (LR and UT) in 14 datasets out of 20. All the multivariate trees have a similar performance. Using the significance test as criterion, FT is the best-performing algorithm. It is also interesting to note that the bottom-up version is the most competitive simplified variant. The ratio of significant wins/losses between the bottom-up and top-down versions is 4/3.

6

http://www.ncc.up.pt/∼ltorgo/Datasets In regression problems, the actual implementation ignores missing values at learning time. At application time, if the value of the test attribute is unknown, all descendent branches produce a prediction. The ﬁnal prediction is a weighted average of the predictions. Each pair of data points consists of the estimate MSE on one dataset and for the two learning algorithms being compared.


Table 1. Summary of Results in Regression Problems (MSE).

Data         LR                UT                FT-Top           FT-Bottom        FT              M5
Abalone      − 4.908±0.0       − 5.728±0.1       4.616±0.0        − 4.759±0.0      4.602±0.0       4.553±0.5
Auto-mpg     − 11.470±0.1      − 19.409±1.2      + 8.921±0.4      9.560±0.8        9.131±0.5       7.958±3.5
Cart         − 5.684±0.0       + 0.995±0.0       − 1.016±0.0      + 0.993±0.0      1.012±0.0       0.994±0.0
Computer     − 99.907±0.2      − 10.955±0.6      − 6.426±0.6      − 6.507±0.5      6.284±0.6       − 8.081±2.7
Cpu          − 3734±1717       − 4111±1657       − 1760±389       − 1197±161       1070±137        1092±1315
Diabetes     0.399±0.0         − 0.535±0.0       − 0.500±0.0      0.400±0.0        0.399±0.0       0.446±0.3
Elevators    − 1.02e-5±0.0     − 1.4e-5±0.0      − 0.86e-5±0.0    0.5e-5±0.0       0.5e-5±0.0      0.52e-5±0.0
Fried        − 6.924±0.0       − 3.474±0.0       − 1.862±0.0      − 2.348±0.0      1.850±0.0       − 1.938±0.1
H.Quake      0.036±0.0         0.036±0.0         0.036±0.0        0.036±0.0        0.036±0.0       0.036±0.0
House(16H)   − 2.06e9±6.1e5    − 1.69e9±3.3e7    + 1.20e9±2.2e7   1.19e9±3.0e7     1.23e9±2.2e7    1.27e9±1.2e8
House(8L)    − 1.73e9±8.2e5    − 1.19e9±1.2e7    + 1.01e9±1.3e7   1.02e9±9.2e6     1.02e9±1.3e7    9.97e8±7.1e7
House(Cal)   − 4.81e9±2.0e6    − 3.69e9±3.5e7    − 3.09e9±2.7e7   + 2.78e9±2.8e7   3.05e9±3.1e7    3.07e9±2.8e8
Housing      − 23.840±0.2      − 19.591±1.7      16.251±1.1       + 13.359±1.7     16.538±1.3      12.467±7.5
Kinematics   − 0.041±0.0       − 0.035±0.0       − 0.027±0.0      − 0.026±0.0      0.023±0.0       − 0.025±0.0
Machine      − 5952±2053       − 6036±1752       3473±673         3300±757         3032±759        3557±4271
Pole         − 930.08±0.3      + 48.55±1.2       79.48±2.6        + 35.16±0.7      79.31±2.4       + 42.0±5.8
Puma32       − 7.2e-4±0.0      − 1.1e-4±0.0      + 0.71e-4±0.0    0.82e-4±0.0      0.82e-4±0.0     0.67e-4±0.0
Puma8        − 19.925±0.0      − 13.307±0.2      + 11.047±0.1     11.145±0.1       11.241±0.1      + 10.299±0.5
Pyrimidines  − 0.018±0.0       0.014±0.0         + 0.010±0.0      0.013±0.0        0.013±0.0       0.012±0.0
Triazines    − 0.025±0.0       + 0.019±0.0       − 0.018±0.0      0.023±0.0        0.023±0.0       0.017±0.0

Summary of MSE Results
                      LR     UT     FT-T   FT-B   FT     M5
Geometric Mean        39.2   23.59  17.68  16.47  16.90  16.2
Average Rank          5.4    4.9    3.15   2.9    2.5    2.3
Average Ratio         4.0    1.57   1.13   1.03   1.07   1
Wins/Losses           1/19   4/16   8/12   6/11   –      11/9
Signif. Wins/Losses   0/18   3/15   6/9    4/5    –      2/3
Wilcoxon Test         0.0    0.02   0.21   0.1    –      0.23

Nevertheless, there is a computational cost associated with the observed increase in performance. To run all the experiments reported here, FT requires almost 1.8 times as much time as the univariate regression tree.

5.2 Results in Classification Problems

We have chosen 30 datasets from the UCI repository. For comparative purposes we also evaluate M5 [21]. M5 decomposes an n-class classification problem into n−1 binary regression problems⁷. The results in terms of error rate and standard deviation are presented in Table 2. The first two columns refer to the results of the components of our system, LinearBayes and the univariate tree. The next two columns refer to the lesioned versions of the algorithm, bottom-up (FT-B) and top-down (FT-T). The fifth column refers to the full proposed

⁷ We have used other multivariate trees. The most competitive was M5.


Table 2. Summary of Error Rate Results

Dataset      LB               UT               FT-B             FT-T             FT               M5
Adult        − 17.012±0.5     14.178±0.5       − 14.307±0.4     13.800±0.4       13.830±0.4       − 15.182±0.6
Australian   13.498±0.3       14.750±1.0       − 14.343±0.4     13.928±0.6       13.638±0.6       14.643±5.2
Balance      − 13.355±0.3     − 22.467±1.1     − 10.445±0.6     7.313±0.9        7.313±0.9        − 13.894±3.2
Banding      23.681±1.0       23.512±1.8       23.512±1.8       23.762±2.2       23.762±2.2       22.619±5.3
Breast(W)    + 2.862±0.1      − 5.123±0.2      − 4.337±0.1      3.346±0.4        3.346±0.4        5.137±3.1
Cleveland    16.134±0.4       − 20.995±1.4     + 15.952±0.5     17.369±0.9       16.675±0.8       17.926±8.0
Credit       + 14.228±0.1     14.608±0.5       14.784±0.5       15.103±0.4       15.220±0.6       14.913±3.7
Diabetes     + 22.709±0.2     − 25.348±1.0     23.998±1.0       − 25.206±0.9     23.658±1.0       25.002±4.8
German       24.520±0.2       28.240±0.7       + 23.630±0.5     24.870±0.5       24.330±0.7       26.300±3.1
Glass        − 36.647±0.8     32.150±2.3       32.150±2.3       32.509±3.3       32.509±3.3       29.479±10.4
Heart        17.704±0.2       − 23.074±1.7     17.037±0.6       17.333±1.4       17.185±0.8       16.667±8.9
Hepatitis    + 15.481±0.7     17.135±1.3       17.135±1.3       17.135±1.3       17.135±1.3       19.919±8.5
Ionosphere   13.379±0.8       10.025±0.9       10.624±0.9       11.175±1.4       11.175±1.4       9.704±4.1
Iris         2.000±0.0        − 4.333±0.8      2.067±0.2        − 3.733±0.8      2.067±0.2        5.333±5.3
Letter       − 29.821±1.3     11.880±0.6       12.005±0.6       11.799±1.1       11.799±1.1       + 9.440±0.5
Monks-1      − 25.009±0.0     10.536±1.7       11.150±1.9       8.752±1.9        8.729±1.9        10.054±8.9
Monks-2      − 34.186±0.6     − 32.865±0.0     − 33.907±0.4     9.004±1.6        9.074±1.6        27.664±20.9
Monks-3      − 4.163±0.0      + 1.572±0.4      3.511±0.9        2.884±0.4        2.998±0.4        1.364±2.4
Mushroom     − 3.109±0.0      + 0.000±0.0      + 0.062±0.0      0.112±0.0        0.112±0.0        0.025±0.1
Optdigits    − 4.687±0.1      − 9.476±0.3      − 4.732±0.1      3.295±0.1        3.300±0.1        − 5.429±1.4
Pendigits    − 12.425±0.0     − 3.559±0.1      − 3.099±0.1      2.890±0.1        2.890±0.1        2.419±0.4
Pyrimidines  − 9.846±0.1      + 5.733±0.2      6.115±0.2        6.158±0.2        6.159±0.2        6.175±0.9
Satimage     − 16.011±0.1     − 12.894±0.2     − 12.894±0.2     11.776±0.3       11.776±0.3       12.402±3.2
Segment      − 8.407±0.1      3.381±0.2        3.381±0.2        3.190±0.2        3.190±0.2        2.468±0.8
Shuttle      − 5.629±0.3      0.028±0.0        0.028±0.0        0.036±0.0        0.036±0.0        0.067±0.0
Sonar        24.955±1.2       27.654±3.5       27.654±3.5       27.654±3.5       27.654±3.5       22.721±9.0
Vehicle      22.163±0.1       − 27.334±1.2     + 18.282±0.5     21.090±1.1       21.031±1.1       20.900±4.6
Votes        − 9.739±0.2      3.773±0.5        3.773±0.5        3.795±0.5        3.795±0.5        4.172±4.0
Waveform     + 14.939±0.2     − 24.036±0.8     + 15.216±0.2     − 16.142±0.3     15.863±0.4       − 17.241±1.4
Wine         1.133±0.5        − 6.609±1.3      1.404±0.3        1.459±0.3        1.404±0.3        3.830±3.6

Summary of Error Rate Results
                      LB     UT     FT-B   FT-T   FT     M5
Average Mean          15.31  14.58  12.72  11.89  11.72  12.77
Geometric Mean        11.63  9.03   7.03   6.80   6.63   7.24
Average Rank          4.0    4.1    3.1    3.3    3.0    3.4
Average Ratio         7.545  1.41   1.12   1.032  1      1.23
Wins/Losses           11/19  9/19   13/13  6/10   –      12/18
Signif. Wins/Losses   5/15   3/12   5/8    0/3    –      1/4
Wilcoxon Test         0.00   0.00   0.8    0.07   –


model (FT). The last column refers to the results of M5. For each dataset, the algorithms are compared against the full functional tree using the Wilcoxon signed-rank test. A − (+) sign indicates that on this dataset the performance of the algorithm was worse (better) than that of the full model, with a p value less than 0.01. Table 2 also presents a comparative summary of the results. The first two lines present the arithmetic and the geometric mean of the error rate across all datasets. The third line shows the average rank of all models, computed for each dataset by assigning rank 1 to the best algorithm, 2 to the second best, and so on. The fourth line shows the average ratio of error rates. This is computed for each dataset as the ratio between the error rate of one algorithm and the error rate of the full functional tree FT. The fifth line shows the number of significant differences using the signed-rank test, taking the multivariate tree FT as reference. We use the Wilcoxon matched-pairs signed-ranks test to compare the error rates of pairs of algorithms across datasets. The last line shows the p values associated with this test for the results on all datasets, taking FT as reference. All the evaluation statistics show that FT is a competitive algorithm. The most competitive simplified version is, again, the bottom-up version. The ratio of significant wins/losses between the bottom-up and top-down versions is 10/6. It is interesting to note that the full model (FT) significantly improves over both components (LB and UT) in 6 datasets.

5.3 Discussion

The experimental evaluation points out some interesting observations:
– For both types of problems we obtain similar rankings of the performance of the different versions of the algorithm.
– All multivariate tree versions have similar performance. On these datasets, there is no clear winner among the different versions of functional trees.
– A functional tree out-performs its constituents on a large set of problems.
In our study the results are consistent on both types of problems. Our experimental study suggests that the full model, that is, a multivariate model using linear functions both at decision nodes and at leaves, is the best-performing algorithm. Another dimension of analysis is the size of the model. Here we consider the number of leaves. This measures the number of different regions into which the instance space is partitioned. On these datasets, the average number of leaves for the univariate tree is 70. Every multivariate tree generates smaller models: the average number of leaves of the full model is 50, for the bottom-up approach it is 56, and for the top-down approach it is 52. Nevertheless, there is a computational cost associated with the observed increase in performance. To run all the experiments reported here, FT requires almost 1.7 times as much time as the univariate tree.

6 Conclusions

In this paper we have presented Functional Trees, a new formalism to construct multivariate trees for regression and classification problems. The proposed algorithm is able to use functional decision nodes and functional leaf nodes. Functional decision nodes are built when growing the tree, while functional leaves are built when pruning the tree. A contribution of this work is that it provides a single framework for classification and regression multivariate trees. Functional trees can be seen as a generalization of multivariate trees for decision problems and of model trees for regression problems, allowing functional decisions both at inner and leaf nodes. We have experimentally observed that the unified framework is competitive with the state of the art in model trees. Another contribution of this work is the study of where to use decisions based on a combination of attributes, both in regression and in classification. In the experimental evaluation on a set of benchmark problems we have compared the performance of a functional tree against its components, two simplified versions, and the state of the art in multivariate trees. The results are consistent on both types of problems. Our experimental study suggests that the full model, that is, a multivariate model using linear functions both at decision nodes and leaves, is the best-performing algorithm. Although most of the work in multivariate classification trees follows the top-down approach, the bottom-up approach seems to be competitive. A similar observation applies to regression problems. This observation points to directions for future research on this topic.

Acknowledgments. Gratitude is expressed for the financial support given by FEDER and PRAXIS XXI, the Plurianual support attributed to LIACC, the Esprit LTR METAL project, the project Data Mining and Decision Support for Business Competitiveness (Sol-Eu-Net), and project ALES. I would like to thank the anonymous reviewers for their constructive comments.

References
1. L. Breiman. Arcing classifiers. The Annals of Statistics, 26(3):801–849, 1998.
2. L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth International Group, 1984.
3. Carla E. Brodley. Recursive automatic bias selection for classifier construction. Machine Learning, 20:63–94, 1995.
4. Carla E. Brodley and Paul E. Utgoff. Multivariate decision trees. Machine Learning, 19:45–77, 1995.
5. J. Gama. A Linear-Bayes classifier. In C. Monard, editor, Advances in Artificial Intelligence – SBIA 2000. LNAI 1952, Springer-Verlag, 2000.
6. João Gama. Probabilistic linear tree. In D. Fisher, editor, Machine Learning, Proceedings of the 14th International Conference. Morgan Kaufmann, 1997.
7. João Gama and P. Brazdil. Cascade generalization. Machine Learning, 41:315–343, 2000.
8. Geoffrey J. McLachlan. Discriminant Analysis and Statistical Pattern Recognition. Wiley and Sons, New York, 1992.
9. Aram Karalic. Employing linear regression in regression tree leaves. In Bernard Neumann, editor, European Conference on Artificial Intelligence, 1992.


10. R. Kohavi. Scaling up the accuracy of naive Bayes classifiers: a decision-tree hybrid. In Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining. AAAI Press, 1996.
11. W. Loh and Y. Shih. Split selection methods for classification trees. Statistica Sinica, 7:815–840, 1997.
12. S. Murthy, S. Kasif, and S. Salzberg. A system for induction of oblique decision trees. Journal of Artificial Intelligence Research, 1994.
13. R. Quinlan. Learning with continuous classes. In Adams and Sterling, editors, Proceedings of AI'92. World Scientific, 1992.
14. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, 1993.
15. R. Quinlan. Combining instance-based and model-based learning. In P. Utgoff, editor, Machine Learning, Proceedings of the 10th International Conference. Morgan Kaufmann, 1993.
16. Paul Taylor. Statistical methods. In M. Berthold and D. Hand, editors, Intelligent Data Analysis – An Introduction. Springer-Verlag, 1999.
17. Luis Torgo. Functional models for regression tree leaves. In D. Fisher, editor, Machine Learning, Proceedings of the 14th International Conference. Morgan Kaufmann, 1997.
18. Luis Torgo. Inductive Learning of Tree-based Regression Models. PhD thesis, University of Porto, 2000.
19. P. Utgoff. Perceptron trees – a case study in hybrid concept representation. In Proceedings of the Seventh National Conference on Artificial Intelligence. Morgan Kaufmann, 1988.
20. P. Utgoff and C. Brodley. Linear machine decision trees. COINS Technical Report 91-10, University of Massachusetts, 1991.
21. Ian Witten and Eibe Frank. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann Publishers, 2000.

Bounding Negative Information in Frequent Sets Algorithms

I. Fortes¹, J.L. Balcázar², and R. Morales³

¹ Dept. of Applied Mathematics, E.T.S.I. Informática, Univ. Málaga, Campus Teatinos, 29071 Málaga, Spain. ifortes@ctima.uma.es
² Dept. LSI, Univ. Politècnica de Catalunya, Campus Nord, 08034 Barcelona, Spain. balqui@lsi.upc.es
³ Dept. of Languages and Computer Science, E.T.S.I. Informática, Univ. Málaga, Campus Teatinos, 29071 Málaga, Spain. morales@lcc.uma.es

Abstract. In Data Mining applications of the frequent sets problem, such as finding association rules, a commonly used generalization is to see each transaction as the characteristic function of the corresponding itemset. This makes it possible to find correlations also between items not present in the transactions; but it carries the risk of a large and hard-to-interpret output. We propose a bottom-up algorithm in which the exploration of facts corresponding to items not present in the transactions is delayed with respect to positive information about items present in the transactions. This allows the user to control the amount of correlation allowed between absences of items in the association rules found. The algorithm takes advantage of the relationships between the corresponding frequencies of such itemsets. With a slight modification, our algorithm can be used as well to find all frequent itemsets consisting of an arbitrary number of present positive attributes and at most a predetermined number k of present negative attributes.

Work supported in part by the EU ESPRIT IST-1999-14186 (ALCOM-FT), EU EP27150 (NeuroColt), CIRIT 1997SGR-00366 and PB98-0937-C04 (FRESCO).

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 50–58, 2001.
© Springer-Verlag Berlin Heidelberg 2001

1 Introduction

Data Mining, or Knowledge Discovery in Databases (KDD), is a field of increasing interest with strong connections to several research areas such as databases, machine learning, and statistics. It aims at finding useful information in large masses of data; see [5]. One of the most relevant subroutines in applications of this field is finding frequent itemsets within the transactions of a database. This task consists of finding highly frequent itemsets by comparing their frequency of occurrence within the given database with a given parameter σ. This problem can be solved by the well-known Apriori algorithm [2]. The Apriori algorithm is a method of searching the lattice of itemsets with respect to itemset inclusion. The strategy starts from the empty set and scans itemsets from smaller to larger in an incremental manner. The Apriori algorithm uses this strategy to effectively prune away a substantial number of unproductive itemsets. The frequent sets that result from this task can then be used to discover association rules whose support and confidence values are no smaller than the user-specified minimum thresholds [1], or to solve other related Knowledge Discovery problems [7]. We do not discuss here how to form association rules from frequent itemsets, nor any other application of these; we focus on the performance of that very step, finding highly frequent patterns, whose complexity dominates by far the computational cost of many such applications. Here we consider the case where each transaction of the database is a binary-valued function of the attributes. The difference from the itemsets view is that now we look for patterns where the non-occurrence of an item is important too. This is formalized in terms of partial functions, which, on each item, may include it (value 1), exclude it (value 0), or not consider it (undefined).
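The levelwise search just described can be sketched compactly. The following is a minimal Apriori with absolute support counts (variable names are ours), showing the join step together with the "a priori" subset pruning:

```python
def apriori(transactions, min_support):
    """Levelwise Apriori sketch. `transactions` is a list of sets of items,
    `min_support` an absolute count (the sigma threshold). Returns a dict
    mapping each frequent itemset (as a frozenset) to its support."""
    def support(s):
        return sum(1 for t in transactions if s <= t)
    items = {i for t in transactions for i in t}
    frequent = {}
    level = [frozenset([i]) for i in items]
    while level:
        counts = {s: support(s) for s in level}
        current = {s: c for s, c in counts.items() if c >= min_support}
        frequent.update(current)
        # Join step + "a priori" prune: a (k+1)-candidate survives only if
        # every one of its k-subsets is frequent at this level.
        level, seen = [], set()
        cur = list(current)
        for a in cur:
            for b in cur:
                u = a | b
                if (len(u) == len(a) + 1 and u not in seen
                        and all(u - {x} in current for x in u)):
                    seen.add(u)
                    level.append(u)
    return frequent

# Toy market-basket example (hypothetical data).
tx = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'b', 'c'}, {'a', 'b', 'c'}]
freq = apriori(tx, 3)
```

With threshold 3, all singletons and all pairs are frequent here, while the triple {a, b, c} (support 2) is pruned away after counting.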
It was noticed in [6] that essentially the same algorithms, with the same “a priori” pruning strategies, can be applied to many other settings in which one looks for a theory in a formal language according to a predicate that is monotone on a generalization/specialization relation. In particular, our setting with binary-valued attributes falls into this category, and there actually exist implementations of the Apriori algorithm that solve the problem for the setting where each transaction is a function; thus, they can be used to find the partial functions whose frequency is over some threshold. However, it is known that direct use of these algorithms on real-life data frequently comes up with extremely large numbers of frequent sets consisting “only of zeros”. For example, in the prototypical case of market basket data, the number of items is certainly overwhelmingly larger than the average number of items bought, and this means that the output of any frequent sets algorithm will contain large amounts of information of the sort “most of the times that scotch is not bought, bourbon is not bought either, with large support”. If such negative information is not desired at all, the original Apriori version can be used; but there may be cases where limited amounts of negative information are deemed useful, for instance when looking for alternative products that can act
as mutual replacements, and yet one does not want to be forced into a search through the huge space of all partial functions. We are interested in producing algorithms that provide frequent “itemsets” that have “missing” products, but in a controlled manner, so that they are useful when having some missing products in the itemsets is important, but not as important as the products that are present.

Here we develop a variant of the Apriori algorithm that, if supplied with a limit k on the maximum number of negated attributes desired in the output frequent sets, will take advantage of this fact and produce frequent itemsets that obey this limit. Of course, it does so in a much more efficient way than just applying Apriori and discarding the part of the output that does not fulfill the condition. First, because the exploration is organized in a way that naturally reflects the condition on the output. Second, because the fact that an item may be, or not be, in each itemset, but not both, implies complementarity relationships between the frequencies of “itemsets” that contain, or do not contain, a given item. We use these relationships to find out the frequencies of some “itemsets” without actually counting them, thus saving computational work.

2 Preliminaries

Now we give the concepts that we will use along the paper. We consider a database T = {t1, . . . , tN} with N rows over a set R = {A1, . . . , An} = {Ai : i ∈ I} of binary-valued attributes, which can be seen as either items or columns; actually they just serve as a visual aid for their index set I = {1, . . . , n}. Each row, or transaction, maps R into {0, 1}. For A ∈ R, we also write A ∈ tl for tl(A) = 1 and, departing from standard use, Ā ∈ tl for tl(A) = 0. Obviously, A ∈ tl or Ā ∈ tl, but not both. The database is actually a multiset of transactions, each with a unique identifier.

As for partial functions, they map a subset of R into {0, 1}; those that are defined on exactly ℓ attributes are called ℓ-itemsets. The goal of our algorithm will be to find frequent itemsets with any number of attributes mapped to 0 and any number of attributes mapped to 1, but in some specific order. Our notation for these partial functions is as follows. For p ∈ P(I) and s ∈ P(I − p) (so that s ∩ p = ∅), we denote by Ap,s the partial function that maps the subset Ap = {Ai : i ∈ p} to 1 and the subset As = {Aj : j ∈ s} to 0, and is undefined on the rest. Itemsets Ap,s with |s| = k are called k-negative itemsets, for k = 0, . . . , n; if |s| = 0 then we have the positive itemset Ap,∅. We identify partial functions defined on a single attribute Aj, namely A{j},∅ or A∅,{j}, with the corresponding symbol Aj or Āj respectively.

A transaction can be seen as a total function, and an itemset as a partial function. If the partial function can be extended to the total function corresponding to a transaction, then we say that the itemset is a subset of the transaction, and we employ the standard symbol ⊆ for this case. The support of an itemset (or partial function) is defined as follows.

Definition 1. Let R = {A1, . . . , An} = {Ai : i ∈ I} be a set of n items and let T = {t1, . . . , tN} be a database of transactions as before. The support or frequency of an itemset A is the ratio of the number of transactions in which it occurs as a subset to the total number of transactions:

    fr(A) = |{t ∈ T : A ⊆ t}| / N

Given a user-specified minimum support value σ, we say that an itemset A is frequent if its support reaches the minimum support, i.e., fr(A) ≥ σ.

We introduce a natural structure in the itemset space by placing itemsets into “floors” and “levels”. Floor k contains the itemsets with k negative attributes. Within each floor, the itemsets are organized in levels as usual: the level ℓ is the number of attributes of the itemset. Thus, in floor zero we place the positive itemsets, ordered by itemset inclusion (or, equivalently, index set inclusion); in the first floor we place all itemsets with one attribute valued 0, organized similarly and related similarly to the itemsets in floor zero; and, in general, in floor k we place all itemsets with k attributes valued 0, organized levelwise in the standard way and related similarly to the itemsets in the other floors. Thus we are considering the order relation defined as follows:

Definition 2. For p ∈ P(I), s ∈ P(I − p), q ∈ P(I), and t ∈ P(I − q), given partial functions X = Ap,s and Y = Aq,t, we denote by X ⪯ Y the fact that p ⊆ q and s ⊆ t.

With respect to this relation, the property of having frequency larger than a given threshold is antimonotone, since X ⪯ Y implies fr(X) ≥ fr(Y). Thus, whenever an itemset is not frequent enough, neither is any of its extensions, and this fact allows one to prune away a substantial number of unproductive itemsets. Therefore, frequent sets algorithms can be applied rather directly to this case. Our purpose now is to aim at a somewhat more refined algorithm.

Now we give a simple example to show the structure of the itemset space; it will be useful later to describe the frequent itemset candidate generation and the path that our algorithm follows. Example: Let R = {A, B, C, D} be a set of four items. In this case, we use four floors to represent the itemsets with any number of negative attributes and any number of positive attributes.
In each rectangle, the pair (f, ℓ) indicates the floor f (the number of negative attributes of the itemsets in the rectangle) and the level ℓ (the cardinality of the itemsets in the rectangle). See Figure 1.
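As a small illustration of our own (not code from the paper), a partial function Ap,s can be stored as a pair of disjoint index sets, its support computed as in Definition 1, and its (floor, level) coordinates read off as (|s|, |p| + |s|); transactions are assumed to be given as the sets of indices of their 1-valued attributes:

```python
# Illustration only: A_{p,s} as a pair (p, s) of disjoint index sets;
# a transaction is the set of indices whose attribute value is 1.

def support(db, p, s):
    """fr(A_{p,s}): fraction of transactions that A_{p,s} extends to,
    i.e. all attributes in p valued 1 and all attributes in s valued 0."""
    return sum(1 for t in db if p <= t and not (s & t)) / len(db)

def floor_level(p, s):
    """Floor = number of negative attributes, level = total attributes."""
    return (len(s), len(p) + len(s))

# Toy database over R = {A, B, C, D}, indexed 1..4.
db = [{1, 2}, {1, 2, 3}, {1, 3}, {2, 4}]
print(support(db, {1}, set()))   # fr(A)   -> 0.75
print(support(db, {1}, {4}))     # fr(A D̄) -> 0.75
print(floor_level({1}, {4}))     # -> (1, 2)
```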

3 Algorithm Bounded-neg-Apriori

Fig. 1. The structure of the itemset space

Our algorithm performs the same computations as Apriori on the zero floor, but then uses the frequencies computed there to try to reduce the computational effort spent on 1-negative itemsets. This process goes on along all floors. Overall,
bounded-neg-Apriori can be seen as a refinement of Apriori in which the explicit evaluation of the frequency of k-negative itemsets is avoided, since it can be obtained from some itemsets of the previous floor if they are processed in the appropriate order. We use a number of very easy properties of the frequencies. Of course, all frequencies are real numbers in [0, 1].

Proposition 1. Let p ∈ P(I) be arbitrary, and s ∈ P(I − p) with |s| ≥ 1.
1. For each j ∈ s, fr(Ap,s) = fr(Ap,s−{j}) − fr(Ap∪{j},s−{j}); moreover, fr(A∅,∅) = 1.
2. Ap,s is frequent iff ∃j ∈ s, fr(Ap,s−{j}) > σ + fr(Ap∪{j},s−{j}).

Remark 1: Each of the up to |s|-many ways of decomposing fr(Ap,s) in part 1 leads to the same result; in particular, if fr(Ap,s−{j}) < σ for any j ∈ s, then Ap,s is not frequent.

We will also use the following easy properties regarding the relation of the threshold σ to the value one half. They allow for some extra pruning for quite high frequency values (although this case might occur infrequently in practice).

Proposition 2. Let p ∈ P(I) and s ∈ P(I − p) be arbitrary; in the statements not depending on p, the attribute index j is arbitrary.
1. |fr(Aj) − 0.5| < |σ − 0.5| ⇔ |fr(Āj) − 0.5| < |σ − 0.5|.
2. If σ < 0.5 then fr(Aj) ≤ σ ⇒ fr(Āj) > σ, and fr(Aj) > 1 − σ ⇔ fr(Āj) < σ.
3. If σ > 0.5 then fr(Aj) ≥ σ ⇒ fr(Āj) < σ, and fr(Aj) < 1 − σ ⇔ fr(Āj) > σ.
4. For all j ∈ s, if σ > 0.5 and fr(Ap,s−{j}) > σ + fr(Ap∪{j},s−{j}), then fr(Ap∪{j},s−{j}) < σ, i.e., in this case Ap∪{j},s−{j} is not frequent.

Remark 2: If σ > 0.5 and there is some j ∈ s with fr(Ap∪{j},s−{j}) > σ, then Ap,s is not frequent.
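The identity in part 1 of Proposition 1 simply splits the transactions counted for Ap,s−{j} according to whether attribute j is present or not. A quick mechanical check of our own, on hypothetical toy data:

```python
# Illustration only: check fr(A_{p,s}) = fr(A_{p,s-{j}}) - fr(A_{p∪{j},s-{j}})
# for every j in s, over a toy database of sets of 1-valued attribute indices.

def fr(db, p, s):
    return sum(1 for t in db if p <= t and not (s & t)) / len(db)

db = [{1, 2}, {1, 2, 3}, {1, 3}, {2, 4}, {1}]
p, s = {1}, {2, 3}
for j in s:
    lhs = fr(db, p, s)
    rhs = fr(db, p, s - {j}) - fr(db, p | {j}, s - {j})
    assert abs(lhs - rhs) < 1e-12   # every decomposition gives the same value
print(fr(db, p, s))  # -> 0.2 (only the transaction {1} matches)
```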

3.1 Candidate Generation

Moving to the next round of candidates once all frequent ℓ-itemsets have been identified corresponds to moving up, in all possible ways, one step within the same floor, and climbing up, in all possible ways, to the next floor. More formally, at floor zero, a frequent set Ap,∅ leads to considering as potential candidates the following itemsets: all Aq,∅ where q = p ∪ {i} for i ∉ p, and all Ap,{j} for j ∉ p. Also, itemset Ap,{j} would lead to Aq,{j} for q = p ∪ {i}, with i ∉ p and i ≠ j; our algorithm does not use this last sort of step. In the other floors the movements have the same form: for all p ∈ P(I) and s ≠ ∅, from Ap,s we can climb up to the next floor to Ap,t where t = s ∪ {j}, for j ∈ I − (s ∪ p). Also, itemset Ap,s would lead to Aq,s for q = p ∪ {i}, with i ∉ p and i ∉ s, but we will not use such steps either.

Therefore the scheme of the search for frequent itemsets with k 0-valued attributes (i.e., in floor k) is based on the following: whenever enough frequencies in the previous floor are known to test it, if fr(Ap,s−{j}) > σ + fr(Ap∪{j},s−{j}) for some j ∈ s, then we know fr(Ap,s) > σ, so that Ap,s can be declared frequent; moreover, for σ > 0.5 this has to be tested only when Ap∪{j},s−{j} turned out to be nonfrequent although Ap,s−{j} was frequent.

Example: Let us turn our attention again to the example, and suppose that σ < 0.5; we explain the process of candidate generation and the path that our algorithm follows for it. Suppose that the maximal frequent itemsets to be found are ABC, ABC̄, and AB̄. Thus, A, B, C are frequent items, and also B̄ and C̄ are frequent “negative items”. At the initialization, we find that D, Ā, and D̄ cannot appear in any frequent itemset. The algorithm stores this information by means of the set I (defined later). In the following step, we take into consideration as potential candidates first the itemsets in (0, 2), then those in (1, 2), and at last those in (2, 2) that verify the conditions.
There we find that the frequent itemsets are AB, AC, BC, AB̄, AC̄, BC̄. At this moment, we know that there do not exist frequent itemsets in (2, 2); so, there will not exist frequent itemsets in any (f, ℓ) with f ≥ 2, ℓ > 2 and ℓ ≥ f. This information is used in the algorithm by means of the set J (defined later) to refine the search in the candidate generation. In the following step we scan for frequent itemsets in (0, 3) and (1, 3); ABC and ABC̄ are frequent itemsets, and the exploration of the next level proves that, together with AB̄, they are the maximal frequent itemsets. Along the example it is clear how the algorithm would proceed in case we are given a bound on the number of negative attributes present: it would just discard the floors that do not obey that limitation.

3.2 The Algorithm

Now we present the algorithm in a more precise form. The algorithm takes as input the set of attributes, the database, and the threshold σ on the support. The output of the algorithm is the set of all frequent itemsets, with negative as well as positive attributes. Also, a similar algorithm can easily be developed to find the set of all frequent itemsets with at most k negative attributes: simply impose explicitly the bound k on the corresponding loop of the algorithm.

Let us use the symbol f for the floor (that is, the number of negative attributes of the itemset, 0 ≤ f ≤ n) and the symbol ℓ for the level (the number of attributes of the itemset, 0 ≤ ℓ ≤ n); we will write Cf,ℓ and Lf,ℓ for the sets of candidates and of frequent itemsets, respectively. At the beginning we suppose that all Cf,ℓ and Lf,ℓ, for f ≤ ℓ ≤ n, are empty. With respect to this notation, our algorithm traces the following path: (0, 1), (1, 1); (0, 2), (1, 2), (2, 2); (0, 3), (1, 3), (2, 3), (3, 3); etc. (recall the example). Now we present the algorithm in a pseudocode style. For clarity, the main loops are commented. After the algorithm we include additional comments about some instructions that improve the search for frequent itemsets.

Algorithm bounded-neg-Apriori
1. set current floor f := 0
   set current level ℓ := 1
   J := ∅  “This set is explained after the algorithm”
2. “Initially, we find the frequent itemsets with isolated positive attributes”
   Lf,ℓ := {A{i},∅ : i ∈ I, fr(A{i},∅) > σ}
3. “This is the main loop to climb up floors”
   while f ≤ ℓ and ℓ ≤ n do
     while Lf,ℓ ≠ ∅ and f ≤ ℓ and ℓ ≤ n do
       k := f + 1
       Lℓ,ℓ−1 := ∅
       “At this moment we can obtain the frequent itemsets of the upper”
       “floors at the same level from the itemsets in the previous floor”
       “There are two cases according to σ”
       while k ≤ ℓ do
         if k ∈ J then Lk,ℓ := ∅
         else
           if σ ≤ 0.5 then
             (1) Ck,ℓ := {Ap,s : Ap,s′ ∈ Lk−1,ℓ−1, m ∈ I − (p ∪ s′), s = s′ ∪ {m},
                          ∀i ∈ p, Ap−{i},s ∈ Lk,ℓ−1, ∀j ∈ s, Ap,s−{j} ∈ Lk−1,ℓ−1}
           else
             Ck,ℓ := {Ap,s : Ap,s′ ∈ Lk−1,ℓ−1, m ∈ I − (p ∪ s′), s = s′ ∪ {m},
                      ∀i ∈ p, Ap−{i},s ∈ Lk,ℓ−1, ∀j ∈ s, Ap,s−{j} ∈ Lk−1,ℓ−1,
                      ∀j ∈ s, fr(Ap∪{j},s−{j}) < σ}
           fi
           Lk,ℓ := {Ap,s ∈ Ck,ℓ : ∃j ∈ s, fr(Ap,s−{j}) > σ + fr(Ap∪{j},s−{j})}
           if Lk,ℓ = ∅ then J := J ∪ {k} fi
         fi
         if ℓ = 1 and k = 1 and L1,1 ≠ ∅ then I := {i : A∅,{i} ∈ L1,1} fi  (2)
         set current floor k := k + 1
       od (while k)
       “Having selected a floor, we look for the frequent itemsets in the next”
       “level of this floor”
       set current level ℓ := ℓ + 1
       J := J ∪ {k + 1 : k ∈ J, k < n}
       if f = 0 then
         Cf,ℓ := {Ap,∅ : ∀i ∈ p, Ap−{i},∅ ∈ Lf,ℓ−1}
         Lf,ℓ := {Ap,∅ ∈ Cf,ℓ : fr(Ap,∅) > σ}
       else
         Cf,ℓ := {Ap,s : ∀i ∈ p, Ap−{i},s ∈ Lf,ℓ−1, ∀j ∈ s, Ap,s−{j} ∈ Lf−1,ℓ−1}
         Lf,ℓ := {Ap,s ∈ Cf,ℓ : fr(Ap,s) > σ}  (3)
       fi
     od (while ℓ)
     “If the maximum level within a floor is reached, then we must go over”
     “to the next floor at this maximum level”
     set current floor f := f + 1
     Cf,ℓ := {Ap,s : ∀i ∈ p, Ap−{i},s ∈ Lf,ℓ−1, ∀j ∈ s, Ap,s−{j} ∈ Lf−1,ℓ−1}
     Lf,ℓ := {Ap,s ∈ Cf,ℓ : ∃j ∈ s, fr(Ap,s−{j}) > σ + fr(Ap∪{j},s−{j})}
   od (while f)
4. output the union of all the sets Lk,ℓ, for k ≤ ℓ ≤ n

The algorithm refines the search for frequent itemsets by means of the set J: at each level, J indicates the floors where no frequent itemsets can exist. In the sentence labeled (3), the generation of candidates and the computation of their frequencies must distinguish the two cases for σ (below or above 0.5), as in the instruction labeled (1). Note that, by the sentence labeled (2), the only negative attributes that can appear in candidate itemsets are the elements of L1,1; so we use this set, as soon as it is computed, to refine the index set I used later along the computation.

With respect to the complexity of the algorithm, from a theoretical point of view, two aspects are considered: candidate generation and itemset frequency computation. In the candidate generation, the worst case is reached when the threshold σ is at most 0.5: in this case, two itemsets, one with a particular attribute positive and the other with the same attribute negative, can be frequent simultaneously. If σ > 0.5, then by Remark 2 of Proposition 2 the generation is refined. Independently of the value of σ, the sets I and J refine the candidate generation, so the required resources can be reduced. In the itemset frequency computation, only itemsets with positive attributes are counted directly from the database; the frequencies of the other candidate itemsets, with any number of negative attributes, are obtained by using Proposition 1. Therefore, the number of passes through the database is as in Apriori, i.e., n + 1, where n is the size of the largest frequent itemset.
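As a functional summary of this last point, here is a simplified sketch of our own (not the authors' implementation, and without the floor-by-floor pruning via the sets I and J): only purely positive itemsets are counted against the database, while the frequency of any itemset with negative attributes is derived through Proposition 1; the parameter max_neg plays the role of the bound k on negated attributes.

```python
from itertools import combinations

def bounded_neg_frequent(db, n, sigma, max_neg):
    """Simplified sketch: all itemsets A_{p,s} with |s| <= max_neg and
    frequency > sigma. Only positive itemsets are counted in the database;
    negative frequencies follow from Proposition 1:
    fr(A_{p,s}) = fr(A_{p,s-{j}}) - fr(A_{p∪{j},s-{j}})."""
    N = len(db)
    pos_fr = {}                          # cache: frozenset p -> fr(A_{p,∅})

    def fr_pos(p):
        if p not in pos_fr:
            pos_fr[p] = sum(1 for t in db if p <= t) / N
        return pos_fr[p]

    def fr(p, s):
        if not s:
            return fr_pos(p)
        j = min(s)                       # any j in s yields the same value
        return fr(p, s - {j}) - fr(p | {j}, s - {j})

    result = {}
    for level in range(1, n + 1):
        for attrs in combinations(range(1, n + 1), level):
            for negs in range(min(max_neg, level) + 1):
                for s in combinations(attrs, negs):
                    p = frozenset(attrs) - set(s)
                    f = fr(p, frozenset(s))
                    if f > sigma:
                        result[(p, frozenset(s))] = f
    return result
```

On a toy database [{1, 2}, {1}, {2}] with n = 2, sigma = 0.3, and max_neg = 1, the result contains, for instance, ({1}, ∅) with frequency 2/3 and ({1}, {2}) with the derived frequency 2/3 − 1/3 = 1/3, without the pair ever being counted in the database. The paper's algorithm obtains the same derived frequencies, but enumerates and prunes candidates floor by floor instead of exhaustively.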


4 Conclusions and Future Work

In cases where the absence of some items from a transaction is relevant, but one wants to avoid the generation of many rules relating these absences, it can be useful to allow for a maximum of k such absences in the frequent sets; even if no good guess exists for k, it may be useful to organize the search in such a way that the itemsets with m items show up in the order mandated by how many of them are positive: first all positive, then m − 1 positive and one negative, and so on. Our algorithm allows one to do this, and it takes advantage of a number of relationships between the itemset frequencies to avoid the counting of some candidates. Of course, it makes sense to try to combine this strategy with other ideas that have been used together with Apriori, like random sampling to evaluate the frequencies, or instead of Apriori, like alternative algorithms such as DIC [4] or Ready-and-Go [3]. Experimental developments, as well as more detailed analyses and a careful formalization of the setting, can lead to improved results, and we continue to work along these two lines.

References
1. Agrawal R., Imielinski T., Swami A.N.: Mining association rules between sets of items in large databases. Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD’93), ACM Press, Washington D.C., May 26–28 (1993) 207–216.
2. Agrawal R., Mannila H., Srikant R., Toivonen H., Verkamo A.I.: Fast discovery of association rules. In Fayyad U.M., Piatetsky-Shapiro G., Smyth P., Uthurusamy R., eds., Advances in Knowledge Discovery and Data Mining, AAAI Press, Menlo Park, CA (1996) 307–328.
3. Baixeries J., Casas-Garriga G., Balcázar J.L.: Frequent sets, sequences, and taxonomies: new, efficient algorithmic proposals. Tech. Rep. LSI-00-78-R, UPC, Barcelona (2000).
4. Brin S., Motwani R., Ullman J.D., Tsur S.: Dynamic itemset counting and implication rules for market basket data. Int. Conf. Management of Data, ACM Press (1997) 255–264.
5. Fayyad U.M., Piatetsky-Shapiro G., Smyth P.: From data mining to knowledge discovery: An overview. In Fayyad U.M., Piatetsky-Shapiro G., Smyth P., Uthurusamy R., eds., Advances in Knowledge Discovery and Data Mining, AAAI Press, Menlo Park, CA (1996) 1–34.
6. Gunopulos D., Khardon R., Mannila H., Toivonen H.: Data mining, hypergraph transversals, and machine learning. Proceedings of the Sixteenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, ACM Press, Tucson, Arizona, May 12–14 (1997) 209–216.
7. Mannila H., Toivonen H.: Levelwise search and borders of theories in knowledge discovery. Data Mining and Knowledge Discovery 1(3) (1997) 241–258.

Computational Discovery of Communicable Knowledge: Symposium Report

Sašo Džeroski¹ and Pat Langley²

¹ Jožef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia
Saso.Dzeroski@ijs.si, www-ai.ijs.si/SasoDzeroski/

² Institute for the Study of Learning and Expertise
2164 Staunton Court, Palo Alto, CA 94306 USA
langley@isle.org, www.isle.org/~langley/

Abstract. The Symposium on Computational Discovery of Communicable Knowledge was held from March 24 to 25, 2001, at Stanford University. Fifteen speakers reviewed recent advances in computational approaches to scientiﬁc discovery, focusing on their discovery tasks and the generated knowledge, rather than on the discovery algorithms themselves. Despite considerable variety in both tasks and methods, the talks were uniﬁed by a concern with the discovery of knowledge cast in formalisms used to communicate among scientists and engineers.

Computational research on scientific discovery has a long history within both artificial intelligence and cognitive science. Early efforts focused on reconstructing episodes from the history of science, but the past decade has seen similar techniques produce a variety of new scientific discoveries, many of them leading to publications in the relevant scientific literatures. Work in this paradigm has emphasized formalisms used to communicate among scientists, including numeric equations, structural models, and reaction pathways.

However, in recent years, research on data mining and knowledge discovery has produced another paradigm. Even when applied to scientific domains, this framework employs formalisms developed by artificial intelligence researchers themselves, such as decision trees, rule sets, and Bayesian networks. Although such methods can produce predictive models that are highly accurate, their outputs are not stated in terms familiar to scientists, and thus typically are not very communicable.

To highlight this distinction, Pat Langley organized the Symposium on Computational Discovery of Communicable Knowledge, which took place at Stanford University’s Center for the Study of Language and Information on March 24 and 25, 2001. The meeting’s aim was to bring together researchers who are pursuing computational approaches to the discovery of communicable knowledge and to review recent advances in this area. The primary focus was on discovery in scientific and engineering disciplines, where communication of knowledge is often a central concern.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 45–49, 2001. © Springer-Verlag Berlin Heidelberg 2001


Each of the 15 presentations emphasized the discovery tasks (the problem formulation and system input, including data and background knowledge) and the generated knowledge (the system output). Although artificial intelligence and machine learning traditionally focus on differences among algorithms, the meeting addressed the results of computational discovery at a more abstract level. In particular, it explored what methods for the computational discovery of communicable knowledge have in common, rather than the great diversity of methods used to that end.

The commonalities among methods for communicable knowledge discovery were summarized best by Raúl Valdés-Pérez in a presentation titled A Recipe for Designing Discovery Programs on Human Terms. The key step in his recipe was identifying a set of possible solutions for some discovery task, as it is here that one can adopt a formalism that humans already use to represent knowledge. Valdés-Pérez viewed computational discovery as a problem-solving activity to which one can apply heuristic-search methods. He illustrated the recipe on the problem of discovering niche statements, i.e., properties of items that make them unique or distinctive in a given set of items.

The knowledge representation formalisms considered in the different presentations were diverse, ranging from equations through qualitative rules to reaction pathways. Most talks at the symposium fell within two broad categories. The first was concerned with equation discovery in either static systems or dynamic ones that change over time. The second addressed communicable knowledge discovery in biomedicine and the related fields of biochemistry and molecular biology.

One formalism that scientists and engineers rely on heavily is equations. The task of equation discovery involves finding numeric or quantitative laws, expressed as one or more equations, from collections of measured numeric data.
Most existing approaches to this problem deal with the discovery of algebraic equations, but recent work has also addressed the task of dynamic system identification, which involves discovering differential equations.

Takashi Washio from Osaka University presented a talk about Conditions on Law Equations as Communicable Knowledge, in which he discussed the conditions that equations must satisfy to be considered communicable. In addition to fitting the observed data, these include generic conditions and domain-dependent conditions. The former include objectiveness, generality, and reproducibility, as well as parsimony and mathematical admissibility with respect to unit dimensions and scale-type constraints.

Kazumi Saito from Nippon Telegraph and Telephone and Mark Schwabacher from NASA Ames Research Center presented two related applications of computational equation discovery in the environmental sciences, both concerned with global models of the Earth ecosystem. Saito’s talk on Improving an Ecosystem Model Using Earth Science Data addressed the task of revising an existing quantitative scientific model for predicting the net plant production of carbon in the light of new observations. Schwabacher’s talk, Discovering Communicable Scientific Knowledge from Spatio-Temporal Data in Earth Science, dealt with


the problem of predicting, from climate variables, the Normalized Difference Vegetation Index, a measure of greenness and a key component of the previous ecosystem model.

Four presentations discussed the task of dynamic system identification, which involves identifying the laws that govern the behavior of systems with continuous variables that change over time. Such laws typically take the form of differential equations. Two of these talks described extensions to equation discovery methods to address system identification, whereas the other talks reported work that began with methods for system identification and incorporated artificial intelligence techniques that take advantage of domain knowledge.

Sašo Džeroski from the Jožef Stefan Institute, in his talk on Discovering Ordinary and Partial Differential Equations, gave an overview of computational methods for discovering both ordinary and partial differential equations, the second of which describe dynamic systems that involve change over several dimensions (e.g., space and time). Ljupčo Todorovski, from the same research center, discussed an approach that uses domain knowledge to aid the discovery process in his talk, Using Background Knowledge in Differential Equations Discovery. He showed how knowledge in the form of context-free grammars can constrain discovery in the domain of population dynamics.

Reinhard Stolle, from Xerox PARC, spoke about Communicable Models and System Identification. He described a discovery system that handles both structural identification and parameter estimation by integrating qualitative reasoning, numerical simulation, geometric reasoning, constraint reasoning, abstraction, and other mechanisms. Matthew Easley from the University of Colorado, Boulder, reported extensions to Stolle’s framework in his presentation, Incorporating Engineering Formalisms into Automated Model Builders.
His approach relied on input-output modeling to plan experiments and used the resulting data, combined with knowledge at different levels of abstraction, to construct a differential equation model.

The talk by Feng Zhao from Xerox PARC, Structure Discovery from Massive Spatial Data Sets, described an approach to analyzing spatio-temporal data that relies on the notion of spatial aggregation. This mechanism generates summary descriptions of the raw data, which it characterizes at varying levels of detail. Zhao reported applications to several challenging problems, including the interpretation of weather data, optimization for distributed control, and the analysis of spatio-temporal diffusion-reaction patterns.

The rapid growth of biological databases, such as that for the human genome, has led to increased interest in applying computational discovery to biomedicine and related fields. Five presentations at the symposium focused on this general area. They covered a variety of discovery methods, including both propositional and first-order rule induction, genetic programming, theory revision, and abductive inference, with similar breadth in the biological discovery tasks to which they were applied.


on their incorporation of domain knowledge into rule-induction algorithms to let them find interesting and novel relations in medicine and science. They reviewed both syntactic and semantic constraints on the rule discovery process and showed that stronger forms of background knowledge increase the chances that discovered rules are understandable, interesting, and novel.

Stephen Muggleton from York University, in his talk Knowledge Discovery in Biological and Chemical Domains, described his application of first-order rule induction to predicting the structure of proteins, modeling the relations between a chemical’s structure and its activity, and predicting a protein’s function from its structure (e.g., identifying precursors of neuropeptides). Knowledge discovered in these efforts has appeared in journals for the respective scientific areas.

John Koza from Stanford University presented Reverse Engineering and Automatic Synthesis of Metabolic Pathways from Observed Data. His approach utilized genetic programming to carry out search through a space of metabolic pathway models, with search directed by the models’ abilities to fit time-series data on observed chemical concentrations. The target model included an internal feedback loop, a bifurcation point, and an accumulation point, suggesting the method can handle complex metabolic processes.

The presentation by Pat Langley, from the Institute for the Study of Learning and Expertise, addressed Knowledge and Data in Computational Biological Discovery. He reported an approach that used data on gene expressions to revise a model of photosynthetic regulation in Cyanobacteria previously developed by plant biologists. The result was an improved model with altered processes that better explains the expression levels observed over time. The ultimate goal is an interactive system to support human biologists in their discovery activities.

Marc Weeber from the U.S.
National Library of Medicine reported on a quite different approach in his talk on Literature-Based Discovery in Biomedicine. The main idea relies on utilizing bibliographic databases to uncover indirect but plausible connections between disconnected bodies of scientific knowledge. He illustrated this method with a successful example of finding potentially new therapeutic applications for an existing drug, thalidomide.

Şakir Kocabaş, from Istanbul Technical University, talked about The Role of Completeness in Particle Physics Discoveries, which dealt with a completely different domain. He described a computational model of historical discovery in particle physics that relies on two main criteria, consistency and completeness, to postulate new quantum properties, determine those properties’ values, propose new particles, and predict reactions among particles. Kocabaş’ system successfully simulated an extended period in the history of this field, including discovery of the neutrino and postulation of the baryon number.

At the close of the symposium, Lorenzo Magnani from the University of Pavia commented on the presentations from a philosophical viewpoint. In particular, he cast the various efforts in terms of his general framework for abduction, which incorporates different types of explanatory reasoning. The gathering also spent time honoring the memory of Herbert Simon and Jan Żytkow, both of whom played seminal roles in the field of computational scientific discovery.

Computational Discovery of Communicable Knowledge


Further information on the symposium is available at the World Wide Web page http://www.isle.org/symposia/comdisc.html. This includes information about the speakers, abstracts of the presentations, and pointers to publications related to their talks. Slides from the presentations can be found at the Web page http://math.nist.gov/~JDevaney/CommKnow/. Sašo Džeroski and Ljupčo Todorovski are currently editing a book based on the talks given at the symposium. Information on the book will appear at the symposium page and the ﬁrst author's Web page as it becomes available.

Acknowledgements

The Symposium on Computational Discovery of Communicable Knowledge was supported by Grant NAG 2-1335 from NASA Ames Research Center and by the Nippon Telegraph and Telephone Corporation.

References

Bradley, E., Easley, M., & Stolle, R. (in press). Reasoning about nonlinear system identification. Artificial Intelligence.
Kocabas, S., & Langley, P. (in press). An integrated framework for extended discovery in particle physics. Proceedings of the Fourth International Conference on Discovery Science. Washington, D.C.: Springer.
Koza, J. R., Mydlowec, W., Lanza, G., Yu, J., & Keane, M. A. (2001). Reverse engineering and automatic synthesis of metabolic pathways from observed data using genetic programming. Pacific Symposium on Biocomputing, 6, 434-445.
Lee, Y., Buchanan, B. G., & Aronis, J. M. (1998). Knowledge-based learning in exploratory science: Learning rules to predict rodent carcinogenicity. Machine Learning, 30, 217-240.
Muggleton, S. (1999). Scientific knowledge discovery using inductive logic programming. Communications of the ACM, 42, 42-46.
Saito, K., Langley, P., Grenager, T., Potter, C., Torregrosa, A., & Klooster, S. A. (in press). Computational revision of quantitative scientific models. Proceedings of the Fourth International Conference on Discovery Science. Washington, D.C.: Springer.
Schwabacher, M., & Langley, P. (2001). Discovering communicable scientific knowledge from spatio-temporal data. Proceedings of the Eighteenth International Conference on Machine Learning (pp. 489-496). Williamstown, MA: Morgan Kaufmann.
Shrager, J., Langley, P., & Pohorille, A. (2001). Guiding revision of regulatory models with expression data. Unpublished manuscript, Institute for the Study of Learning and Expertise, Palo Alto, CA.
Todorovski, L., & Džeroski, S. (2000). Discovering the structure of partial differential equations from example behavior. Proceedings of the Seventeenth International Conference on Machine Learning (pp. 991-998). San Francisco: Morgan Kaufmann.
Valdés-Pérez, R. E. (1999). Principles of human-computer collaboration for knowledge discovery in science. Artificial Intelligence, 107, 335-346.
Washio, T., Motoda, H., & Niwa, Y. (2000). Enhancing the plausibility of law equation discovery. Proceedings of the Seventeenth International Conference on Machine Learning (pp. 1127-1134). San Francisco: Morgan Kaufmann.
Yip, K., & Zhao, F. (1996). Spatial aggregation: Theory and applications. Journal of Artificial Intelligence Research, 5, 1-26.

VML: A View Modeling Language for Computational Knowledge Discovery

Hideo Bannai¹, Yoshinori Tamada², Osamu Maruyama³, and Satoru Miyano¹

¹ Human Genome Center, Institute of Medical Science, University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639, Japan. {bannai,miyano}@ims.u-tokyo.ac.jp
² Department of Mathematical Sciences, Tokai University, 1117 Kitakaname, Hiratuka-shi, Kanagawa 259-1292, Japan. tamada@ss.u-tokai.ac.jp
³ Faculty of Mathematics, Kyushu University, Kyushu University 36, Fukuoka 812-8581, Japan. om@math.kyushu-u.ac.jp

Abstract. We present the concept of a functional programming language called VML (View Modeling Language), which provides facilities to increase the efficiency of the iterative, trial-and-error cycle that frequently appears in any knowledge discovery process. In VML, functions can be specified so that returned values implicitly "remember", through a special internal representation, that they were calculated by the corresponding function. VML also provides facilities for "matching" the remembered representation, so that one can easily obtain, from a given value, the functions and/or parameters used to create that value. Further, we describe, as VML programs, successful knowledge discovery tasks which we have actually carried out in the biological domain, and argue that computational knowledge discovery experiments can be efficiently developed and conducted using this language.

1 Introduction

The general flow and components which comprise the knowledge discovery process have come to be recognized in the literature [4,10]. According to these articles, the KDD process can, in general, be divided into several stages: data preparation (selection, preprocessing, transformation), data mining, hypothesis interpretation/evaluation, and knowledge consolidation. It is also well known that a typical process will not simply go one way through the steps, but will involve many feedback loops, due to the trial-and-error nature of knowledge discovery [2]. Most research in the literature concerning KDD focuses on only a single stage of the process, such as the development of efficient and intelligent algorithms for a specific problem in the data mining stage. On the other hand, there has been comparatively little work which considers the process as a whole, concentrating on the iterative nature inherent in any KDD process.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 30-44, 2001. © Springer-Verlag Berlin Heidelberg 2001


More recently, the concept of view has been introduced for describing the steps of this process in a uniform manner [1,12,13,14]. Views are essentially functions over data. These functions, as well as their combinations, represent ways of looking at data, and the values they return are attribute values concerning their input arguments. The relationship between data, a view, and the result obtained by applying the view to the data can be considered as knowledge. The goal of KDD can thus be restated as the search for meaningful views. Views also provide an elegant interface for human intervention in the discovery process [1,12], whose need has been stressed in [9]. The iterative cycle of KDD consists largely of the composing and decomposing of views, and facilities should be provided to assist these activities.

The purpose of this paper is to present the concept of a programming language, VML (View Modeling Language), which can help speed up this iterative cycle. We consider extending the Objective Caml (OCaml) language [27], a functional language which is a dialect of ML [16]. We chose a functional language as our base, since it can handle higher-order values (functions) just like any other value, which should help in the manipulation of views. Also, functional languages have a reputation for enabling efficient and accurate programming of maintainable code, even for complex applications [6].

We focus on the fact that the primary difference between a view and a function is that a view must always have an interpretable meaning, because knowledge must be interpretable to be of any use. The two extensions we consider are the keywords 'view' and 'vmatch'. 'view' is used to bind a function to a name, as well as to instruct the program to remember any value resulting from the function. 'vmatch' is a keyword for decomposing a functional application, enabling the extraction of the origins of remembered values.
Of course, it is not impossible to accomplish this "remembering" with conventional languages. For example, we can have each function return a data structure which contains the resulting value and its representation. However, we wish to free the programmer from the labor of keeping track of this data structure (what parameters were used where and when) by packaging this information implicitly into the language. As a result, the following tasks, for example, can be done without much extra effort:

- Interpret knowledge (functions and their parameters) obtained from learning/discovery programs.
- Reuse knowledge obtained from previous learning/discovery rounds.

Although we do not yet have a direct implementation of VML, we have been conducting computational experiments written in the C++ language based on the idea of views, obtaining substantial results [1,25]. We show how such experiments can be conducted comparatively easily by describing them in terms of VML. The structure of this paper is as follows: basic concepts of views and VML are described in Section 3; using VML, we describe two actual discovery tasks we have conducted in Section 4; and we discuss various issues in Section 5.
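Since VML itself is not implemented, the "remembering" the paragraph above describes can be simulated in a conventional language. The following sketch (in Python; the names Viewed, view, and vmatch are our own hypothetical constructions, not VML itself) pairs each result of a view function with a representation recording the function's name and arguments:

```python
# Hypothetical sketch: simulating VML-style "remembering" in a
# conventional language. A Viewed value carries both the computed
# result and the representation that produced it.

class Viewed:
    """A value paired with the representation that produced it."""
    def __init__(self, value, rep):
        self.value = value
        self.rep = rep                      # e.g. ("Sumn", 10)

def view(f):
    """Mark f as a view function: its results remember their origin."""
    def wrapped(*args):
        plain = [a.value if isinstance(a, Viewed) else a for a in args]
        return Viewed(f(*plain), (f.__name__,) + tuple(plain))
    return wrapped

def vmatch(v, name):
    """Extract the arguments of a remembered application, or None."""
    if isinstance(v, Viewed) and v.rep[0] == name:
        return v.rep[1:]
    return None

@view
def Sumn(n):
    return sum(range(1, n + 1))

v = Sumn(10)
print(v.value)            # 55
print(v.rep)              # ('Sumn', 10)
print(vmatch(v, "Sumn"))  # (10,)
```

The point is the same as the paper's: the programmer writes Sumn normally, and the wrapper, not the programmer, keeps track of where each value came from.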


2 Related Work

There have been several knowledge discovery systems which address similar problems concerning the KDD process as a whole. KEPLER [21] concentrates on the extensibility of the system, adopting a "plug-in architecture". CLEMENTINE [8] is a successful commercial application which focuses on human intervention, providing components which can be easily put together in many ways through a GUI. Our work is different and unique in that it tries to give a solution at a more generic level: until we understand the nature of the data, we must try, literally, any method we can come up with, and therefore universality is desired in our approach.

Concerning the "remembering" of the origin of a value, one way to accomplish it is to remember the source code of the function. For example, some dialects of LISP provide a function called get-lambda-expression, which returns the actual Lisp code of a given closure. However, this can return too much information about the value (e.g., the source code of a complicated algorithm). The idea in our work is to limit the information that the user will see, by regarding functions specified by the view keyword as the smallest unit of representation.

3 Concepts

In this section, we first briefly describe the concept of views, as found in [1]. Then, we discuss the basic concepts of VML as an extension to the OCaml language [27], and give simple examples.

3.1 Entity, Views, and View Operation

Here, we review the definitions of entity, view, and view operation, and show how the KDD process can be described in terms of these concepts. An entity set E is a set of objects which may be distinguished from one another, representing the data under consideration. Each object e ∈ E is called an entity. A view v : E → R is a function over E: v takes an entity e and returns some aspect (i.e., attribute value) concerning e. A view operation is an operation which generates new views from existing views and entities. Below are some examples.

Example 1. Given a view v : E → R, a new view v′ : E → R′ may be created with a function ψ : R → R′ (i.e., v′ ≡ ψ ◦ v : E → R → R′).

We can also consider n-ary functions as views: all arguments except the argument expecting the entity can be regarded as parameters of the view. Hypothesis generation via machine learning algorithms can also be considered a form of view operation, and the generated hypothesis can itself be considered a view.

Example 2. Given a set of data records (entities) and their attributes (views), the ID3 algorithm [18] (view operator) generates a decision tree T. T is also a view


because it is a function which returns the class that a given entity is classified to. The generated view T can also be used as an input to other view operations to create new views, which can be regarded as knowledge consolidation.

Views and view operators are combined to create new views. The structure of such a combination, i.e., of a compound view, is called the design of the view. The task of KDD lies in the search for good views which explain the data; knowledge concerning the data is encapsulated in the design. Human intervention can be conducted through the hand-crafted design of views by domain experts. To successfully assist the expert in the knowledge discovery process, the expert should be able to manipulate and understand the view design with ease.

3.2 Representations

Here, we describe the basic concepts of VML. We shall call how a certain value is created its representation. For example, if an integer value 55 was created by adding the numbers from 1 to 10, the representation of 55 is, informally, "add the integers from 1 to 10". A value may have multiple representations, but every representation should have only one corresponding value (except if there is some sort of random process in the representation).

Intuitively, the representation of any value can be considered as the source code for computing that value. However, in VML, the representation is limited to primitive (first-order) values and applications of functions specified with the view keyword, so that it is feasible for users to understand and interpret the values, seeing only the information that they want to see. The purpose of the view keyword is to specify that the runtime system should remember the representation of the return value of the function. We shall call such specified functions view functions. Representations of values can be defined as:

rep ::= primv               (* primitive values *)
      | vf rep1 ... repn    (* application of a view function *)
      | x . rep'            (* λ-abstraction of representations *)

vmatch is used to extract components from the representation of a value, by conducting pattern matching against the representation.

3.3 Simple Example

We give a simple example to illustrate basic OCaml syntax and the use of view and vmatch statements. The syntax and semantics of VML are the same as OCaml except for the added keywords. Only descriptions for the extended keywords are given, and the reader is requested to consult the Objective Caml Manual [27] for more information.


For the example in the previous subsection, a function which calculates the sum of the positive integers 1 to n can be written in OCaml as:¹

# let rec sumn n = if n <= 0 then 0 else (n + (sumn (n-1)));;
val sumn : int -> int = <fun>
# sumn 10;; (* apply 10 to sumn *)
- : int = 55

let binds the function (value) to the name sumn. rec specifies that the function is recursive (a function that calls itself). n is an argument of the function sumn. int -> int is the type of sumn, which reads: "sumn is a function that takes a value of type int as an argument, and returns a value of type int". Notice that the type of sumn is automatically inferred by the compiler/interpreter, and need not be specified. Arguments are applied to functions just by writing them consecutively.

The syntax of the view keyword is the same as that of the let statement. We now specify the above function with the view keyword in place of let (we capitalize the first letter of view functions for convenience):

# view rec Sumn n = if n <= 0 then 0 else (n + (Sumn (n-1)));;
val Sumn : int -> int = <fun>::(n . Sumn n)
# Sumn 10;;
- : int = 55::(Sumn 10)

Sumn is defined as a view function, and therefore values calculated from Sumn are implicitly remembered. In the above example, the return value is 55, and its representation, shown to the right of the double colon '::', is (Sumn 10). If we know the meaning of Sumn, we do not need to see inside it to understand this value of 55.

The vmatch keyword is used to decompose the representation of a value and extract the function and/or any parameters which were used to create the value. Its syntax is the same as that of the match statement of OCaml, which is used for pattern matching of miscellaneous data structures.
# let v = Sumn 10;; (* apply 10 to Sumn and bind the value to v *)
val v : int = 55::(Sumn 10)
# vmatch v with (* extract parameters used to calculate v *)
    (Sumn x) -> printf "%d was applied to Sumn\n" x
  | _ -> printf "Error: v did not match (Sumn x)\n";;
10 was applied to Sumn
- : unit = ()

In the above example, the representation of v, which is (Sumn 10), is matched against the pattern (Sumn x). If the match is successful, 'x' in the pattern is

¹ The expressions starting after '#' and ending with ';;' are input by the user; the others are responses from the compiler/interpreter. Comments are written between '(*' and '*)'.


assigned the corresponding value 10. This value can be used in the expression to the right of '->', which is evaluated in case of a match. Multiple pattern matches can be attempted: each pattern and its corresponding expression are separated by '|', and the expression for the first matching pattern is evaluated. The underscore '_' represents a wildcard pattern, matching any representation. The entire vmatch expression evaluates to the unit type () (similar to void in the C language) in this case, because printf is a function that executes a side effect (printing a string) and returns ().

3.4 Partial Application

Here, we consider how representations of partial applications of view functions can be handled. We note, however, that our description here may contain subtle problems, for example concerning the order of evaluation of expressions, which may be counterintuitive when programs contain side effects. A formal description and a sample implementation resolving these issues can be found in [20].

In the previous examples, we added the integers from 1 to n. Suppose we also want to specify where to start: add the integers from m to n. We can write the view function as follows:

# view rec Sum_m_to_n m n = if (m > n) then 0 else (n + (Sum_m_to_n m (n-1)));;
val Sum_m_to_n : int -> int -> int = <fun>::(m n . Sum_m_to_n m n)
# let sum3to = Sum_m_to_n 3;; (* partial application *)
val sum3to : int -> int = <fun>::(n . Sum_m_to_n 3 n)
# sum3to 5;;
- : int = 12::(Sum_m_to_n 3 5)

Sum_m_to_n is a view function of type int->int->int, which can be read as "a function that takes two arguments of type int and returns a value of type int", or "a function that takes one argument of type int and returns a value of type int->int". In defining sum3to, Sum_m_to_n is applied to only one argument, 3, resulting in a function of type int->int. Applying another argument, 5, to sum3to gives the same value as Sum_m_to_n 3 5. Partially applied values are matched as follows; arguments not yet applied will only match the underscore '_':

# vmatch sum3to with
    (Sum_m_to_n x _) -> printf "Sum_m_to_n partially applied with %d\n" x
  | _ -> printf "failed match\n";;
Sum_m_to_n partially applied with 3
- : unit = ()

We can reverse the order of arguments with the fun keyword, which is essentially lambda abstraction.


# let sum10from = fun m -> Sum_m_to_n m 10;;
val sum10from : int -> int = <fun>::(m . Sum_m_to_n m 10)
# sum10from 5;;
- : int = 45::(Sum_m_to_n 5 10)
# vmatch sum10from with
    (Sum_m_to_n _ x) -> printf "Sum_m_to_n partially applied with %d\n" x
  | _ -> printf "failed match\n";;
Sum_m_to_n partially applied with 10
- : unit = ()

The representation of sum10from is the result obtained by β-reduction of the representation:

(m . ((m n . Sum_m_to_n m n) m 10)) →β (m . ((n . Sum_m_to_n m n) 10)) →β (m . (Sum_m_to_n m 10))
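The same partial-application bookkeeping can be simulated in a conventional language. The sketch below (Python; PartialView and its curried-call protocol are our own hypothetical constructions, not part of VML) accumulates arguments into the representation until the view function is fully applied, mirroring the Sum_m_to_n / sum3to example:

```python
# Hypothetical sketch: tracking representations through partial
# application. A PartialView remembers the arguments applied so far;
# once fully applied, it returns the value together with its full
# representation, analogous to VML's <fun>::(n . Sum_m_to_n 3 n).

def sum_m_to_n(m, n):
    """Sum of the integers from m to n (0 if m > n)."""
    return 0 if m > n else sum(range(m, n + 1))

class PartialView:
    """A curried view function accumulating arguments into its rep."""
    def __init__(self, f, name, arity, args=()):
        self.f, self.name, self.arity, self.args = f, name, arity, args

    def __call__(self, *more):
        args = self.args + more
        if len(args) < self.arity:                    # still partial
            return PartialView(self.f, self.name, self.arity, args)
        return self.f(*args), (self.name,) + args     # value plus full rep

Sum_m_to_n = PartialView(sum_m_to_n, "Sum_m_to_n", 2)
sum3to = Sum_m_to_n(3)            # partial application; remembers the 3
value, rep = sum3to(5)
print(value, rep)                 # 12 ('Sum_m_to_n', 3, 5)
```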

3.5 Multiple Representations

In the example with Sumn, although Sumn recursively calls itself, the representations of the values generated in the recursive calls are not remembered, because the function '+' is not a view function. If multiple representations are to be remembered, they can be maintained as a list of representations, and vmatch will try to match any of them.

4 Actual Knowledge Discovery Tasks

We describe two computational knowledge discovery experiments, showing how VML can assist the programmer in such experiments. As noted in Section 1, VML is not yet fully implemented, and therefore the experiments described here were developed in the C++ language, using the HypothesisCreator library [25], which is based on the concept of views.

4.1 Detecting Gene Regulatory Sites

It is known that, for many genes, whether or not the gene expresses its function depends on specific proteins, called transcription factors, which bind to specific locations on the DNA, called gene regulatory sites. Gene regulatory sites are usually located in the upstream region of the coding sequence of the gene. Since proteins selectively bind to these sites, it is believed that common motifs exist for genes which are regulated by the same protein. We consider the case where the 2-block motif model is preferred, that is, when the binding site cannot be characterized by a single motif and two motifs should be searched for.


view ListDistAnd min max l1 l2 : int->int->(int list)->(int list)->bool
    Return true if there exist e1 ∈ l1, e2 ∈ l2 such that min ≤ (e2 − e1) ≤ max.
view AstrstrList mm pat str : astr_mismatch->string->string->(int list)
    Return the match positions (using approximate pattern matching) of a pattern, as a list of int. The type astr_mismatch is the tuple (int * bool * bool * bool), where the int value is the maximum number of errors allowed, and the bool values are flags permitting the error types insertion, deletion, and substitution, respectively.

Fig. 1. View functions used in the view design for detecting putative gene regulatory sites.
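To make the view functions of Fig. 1 concrete, the following sketch re-implements greatly simplified versions of them in Python (the real experiments used the C++ HypothesisCreator library; this AstrstrList analogue does exact matching only, ignoring the astr_mismatch parameter, and the sequence below is made up):

```python
# Simplified, hypothetical analogues of the Fig. 1 view functions.

def astrstr_list(pat, s):
    """Positions at which pat occurs in s (exact occurrences only)."""
    return [i for i in range(len(s) - len(pat) + 1) if s[i:i + len(pat)] == pat]

def list_dist_and(lo, hi, l1, l2):
    """True if some e1 in l1 and e2 in l2 satisfy lo <= e2 - e1 <= hi."""
    return any(lo <= e2 - e1 <= hi for e1 in l1 for e2 in l2)

def two_block_motif(lo, hi, pat1, pat2, s):
    """The composed design: both motifs occur, separated by a gap in [lo, hi]."""
    return list_dist_and(lo, hi, astrstr_list(pat1, s), astrstr_list(pat2, s))

# A made-up sequence whose two blocks start 25 characters apart.
seq = "cc" + "ttgaca" + "g" * 19 + "tataat"
print(two_block_motif(20, 30, "ttgaca", "tataat", seq))   # True
```

Composing the two primitives into two_block_motif is the same design step as the orig function shown below, just without mismatch handling.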

We develop a simple, original method based on views. Testing the method on B. subtilis σA-dependent promoter sequences taken from [5], our method was able to rediscover known results, as well as other candidates for 2-block motifs. We started by modeling the 2-block motif for regulatory sites as consisting of three components: the motif patterns (string patterns, with possible mismatches), the gap width between these patterns (how far apart they can be), and their positions (distance in base pairs from the beginning of the coding sequence). We construct a function with the following design (the representation is omitted):

# let orig pos len g_min g_max mm1 mm2 pat1 pat2 str =
    ListDistAnd g_min g_max
      (AstrstrList mm1 pat1 (Substring pos len str))
      (AstrstrList mm2 pat2 (Substring pos len str));;
val orig : int -> int -> int -> int -> astr_mismatch -> astr_mismatch ->
           string -> string -> string -> bool = <fun>

The explanations of the view functions used are given in Figure 1. All arguments except str are parameters; when all the parameters are applied, a function of type string->bool is generated, returning true if a certain 2-block motif appears in a given string, and false otherwise.

To look for good parameters, we take a supervised learning approach, using as negative data randomly selected genes of B. subtilis not included in the original dataset, taken from the GenBank database [24]. The score of each view is based on its accuracy as a classification function that decides whether or not an input sequence has the motifs. We looked at several top-ranking views in order to evaluate them. Numerous iterations with different search spaces yielded some interesting results; selected results are shown in Figure 2. By limiting the search space using knowledge obtained from previous work, we were able to come up with views v1 and v2 whose 2-block motifs were consistent with, or the same as, the motifs "TTGACA" and "TATAAT" detected in [5,11].
We also ran the experiments with a wider range of parameters, and found a view, v3, that could perfectly discriminate the positive and negative examples. Although a biological


v1: (str . ListDistAnd 20 30 (AstrstrList (2,false,false,true) "ttgtca" (Substring -40 35 str))
                             (AstrstrList (2,false,false,true) "tataat" (Substring -40 35 str)))
    true positive 102, false negative 40 = 71.8 %; false positive 0, true negative 142 = 100.0 %
v2: (str . ListDistAnd 20 30 (AstrstrList (2,false,false,true) "ttgaca" (Substring -40 35 str))
                             (AstrstrList (2,false,false,true) "tataat" (Substring -40 35 str)))
    true positive 100, false negative 42 = 70.4 %; false positive 0, true negative 142 = 100.0 %
v3: (str . ListDistAnd 25 35 (AstrstrList (3,false,false,true) "atgatc" (Substring -50 65 str))
                             (AstrstrList (2,false,false,true) "gttata" (Substring -50 65 str)))
    true positive 142, false negative 0 = 100.0 %; false positive 0, true negative 142 = 100.0 %

Fig. 2. Representations of the results of our method to ﬁnd regulatory sites.

interpretation must follow for the result to be meaningful, we were successful in finding a candidate for a novel result.

In this kind of experiment, VML can help the expert in the following way. Although the views are sorted by some score, it is difficult to check the validity of a view from the score alone: a valuable view will probably have a high score, but a view with a high score may not be valuable. In the evaluation stage, the expert needs to look at the many different views with adequately high scores and see what kind of parameters were used to generate each view. This can be written easily in VML, since it amounts to obtaining and displaying the representations of high-scoring functions.

4.2 Characterization of N-Terminal Sorting Signals of Proteins

Proteins are composed of amino acids, and can be regarded as strings over an alphabet of 20 characters. Most proteins are first synthesized in the cytosol and carried to specified locations, called localization sites. In most cases, the information determining the subcellular localization site is represented as a short amino acid sequence segment called a protein sorting signal [17]. Given an amino acid sequence, predicting where the protein will be carried is an important and difficult problem in molecular biology. Although numerous signal sequences have been found, similarities between these sequences for the same localization site are not yet fully understood.

Our aim was to come up with a predictor which could challenge TargetP [3], the state-of-the-art neural-network-based predictor, in terms of prediction accuracy, while not sacrificing the interpretability of the classification rule. Data available from the TargetP web site [28] was used, consisting of 940 sequences: 368 mTP (mitochondrial targeting peptides), 141 cTP (chloroplast transit peptides), 269 SP (signal peptides), and 162 "Other" sequences. The general approach was to discuss with an expert how to design the views, conduct computational experiments with those view designs, present the results to the expert as feedback, and then repeat the process.


We first considered binary classifiers, which distinguish sequences carrying a certain signal. The entity set is the set of amino acid sequences. The views we look for are of type string -> bool: for an amino acid sequence, return a Boolean value, true if the sequence contains a certain signal, and false if it does not. The views we designed (in time order) can be written in VML as follows (the meaning of each view function is given in Figure 3):

# let h1 pat mm ind pos len str =
    Astrstr mm pat (AlphInd ind (Substring pos len str));;
val h1 : string -> astr_mismatch -> (char -> char) -> int -> int ->
         string -> bool
  = <fun>::(pat mm ind pos len str .
            Astrstr mm pat (AlphInd ind (Substring pos len str)))

# let h2 thr ind pos len str =
    GT (Average (AAindex ind (Substring pos len str))) thr;;
val h2 : float -> string -> int -> int -> string -> bool
  = <fun>::(thr ind pos len str .
            GT (Average (AAindex ind (Substring pos len str))) thr)
# let h3 thr aaind pos1 len1 pat mm alphind pos2 len2 str =
    And (h1 pat mm alphind pos1 len1 str) (h2 thr aaind pos2 len2 str);;
val h3 : float -> string -> int -> int -> string -> astr_mismatch ->
         (char -> char) -> int -> int -> string -> bool
  = <fun>::(thr aaind pos1 len1 pat mm alphind pos2 len2 str .
            And (h1 pat mm alphind pos1 len1 str) (h2 thr aaind pos2 len2 str))

Notice that after applying all the arguments except for the last string, we obtain functions of type string -> bool, as desired. For example, using view function h2, we can create a view function of type string -> bool:

# let f = h2 3.5 "BIGC670101" 5 20;;
val f : string -> bool
  = <fun>::(str . GT (Average (AAindex "BIGC670101" (Substring 5 20 str))) 3.5)

Each function is composed of view functions, so the representation of such a function contains information about its arguments. The representation of the above rule can be read as: "For a given amino acid sequence, first look at the substring of length 20 starting from position 5. Then calculate the average volume² of the amino acids appearing in the substring, and return true if the value is greater than 3.5, false otherwise."

The task is now to find good parameters defining a function that can accurately distinguish the signals. For each view design, a wide range of parameters was applied. For each combination of parameters and view design shown

² "BIGC670101" is the accession id for the amino acid index 'volume'.


view Substring pos len str : int -> int -> string -> string
    Return the substring [pos, pos+len−1] of str. A negative value for pos means to count from the right end of the string.
view AlphInd ind str : (char -> char) -> string -> string
    Convert str according to the alphabet indexing ind. ind is a mapping of char -> char, called an alphabet indexing [19], and can be considered a classification of the characters of a given alphabet.
view Astrstr mm pat str : astr_mismatch -> string -> string -> bool
    Approximate pattern matching [22]: match pat and str with mismatch mm. The type astr_mismatch is explained in Figure 1.
view AAindex ac str : string -> string -> (float array)
    Convert str to an array of float according to the amino acid index ac. ac is an accession id of an entry in the AAindex database [7]. Each entry in the database represents some biochemical property of amino acids, such as volume, hydropathy, etc., represented as a mapping of char -> float.
view Average v : float array -> float
    The average of the values in v.
view GT x y : 'a -> 'a -> bool
    Greater than.
view And x y : bool -> bool -> bool
    Boolean 'and'.

Fig. 3. View functions used in the view design to distinguish protein sorting signals.
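As an illustration of how the Fig. 3 primitives compose into the h2 design, the sketch below mimics it in Python with a hand-made toy property table standing in for an AAindex entry (the table values are invented, not those of BIGC670101):

```python
# Illustrative sketch of the h2 view design: average a numeric
# per-residue property over a window of the sequence and compare it
# with a threshold. TOY_VOLUME is a made-up stand-in for an AAindex
# entry, covering only a few residues.

TOY_VOLUME = {"A": 0.5, "G": 0.3, "L": 1.0, "K": 1.2}   # hypothetical values

def substring(pos, length, s):
    """The window s[pos : pos+length], as the Substring view does."""
    return s[pos:pos + length]

def average_index(table, s):
    """Average property of the residues in s (AAindex + Average views)."""
    return sum(table[c] for c in s) / len(s)

def h2(thr, table, pos, length, s):
    """True iff the average property over the window exceeds thr (GT view)."""
    return average_index(table, substring(pos, length, s)) > thr

seq = "AKLLGAKL"
print(h2(0.6, TOY_VOLUME, 0, 4, seq))   # True: mean over A,K,L,L is 0.925
```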

above, we obtain a function of type string->bool. The programmer need not worry about keeping track of the meaning of each function, because the representation can be consulted using the vmatch statement when needed. We apply all the protein sequences to this function, and calculate its score as a classifier of a certain signal. The functions with the best scores are selected.

View design h1 looks for a pattern over a sequence converted by a classification of the alphabet [19]. We hoped to find some kind of structural similarity among the signals with this design, but we could not find satisfactory parameters which would let h1 predict the signals accurately. Next, we designed a new view, h2, which uses the AAindex database [7], this time looking for characteristics of the amino acid composition of a sequence segment. This turned out to be very effective, especially for the SP set, and was used to distinguish SP from the other signals. For the remaining signals, we tried combining h1 and h2 into h3. This proved to be useful for distinguishing the "Other" set (those which do not have N-terminal signals) from mTP and cTP. We can see that the functional nature of VML enables the easy construction of the view designs.

By combining the views and parameters thus obtained for each signal type into a single decision list, we were able to create a rule which competes fairly well with TargetP in terms of prediction accuracy. The scores of a 5-fold cross-validation are shown in Table 1. The knowledge encapsulated in the view design


Table 1. The prediction accuracy of the final hypothesis (scores of TargetP [3] in parentheses). The score is the Matthews correlation coefficient (MCC) [15], defined by:

    score = (tp × tn − fp × fn) / √((tp + fn)(tp + fp)(tn + fp)(tn + fn))

where tp, tn, fp, fn are the numbers of true positives, true negatives, false positives, and false negatives, respectively.

True      # of   Predicted category                               Sensitivity    MCC
category  seqs   cTP          mTP          SP           Other
cTP        141    96 (120)     26 (14)      0 (2)       19 (5)    0.68 (0.85)   0.64 (0.72)
mTP        368    25 (41)     309 (300)     4 (9)       30 (18)   0.84 (0.82)   0.75 (0.77)
SP         269     6 (2)        9 (7)     244 (245)     10 (15)   0.91 (0.91)   0.92 (0.90)
Other      162     8 (10)      17 (13)      2 (2)      135 (137)  0.83 (0.85)   0.71 (0.77)
Specificity      0.71 (0.69)  0.86 (0.90) 0.98 (0.96)  0.70 (0.78)

was consistent with widely believed (but vague) characteristics of each signal, and the expert was surprised that such a simple rule could describe the sorting signals with such accuracy. A system called iPSORT was built based on these rules, and an experimental web service is provided at the iPSORT web-site [26].
    vmatch can be useful in the following situation: after obtaining a good view of design h2, we may want to see if we can find a good view of design h1 that uses the same substring of the sequence as h2. This can be regarded as first looking for a segment which has a distinct amino acid composition, and then looking closer at this segment to see if structural characteristics of the segment can be found. This function can be written as:

  # let newh f =
      vmatch f with
        GT (Average (AAindex _ (Substring p l _))) _ ->
          fun pat mm ind str -> h1 pat mm ind p l str;;
  val newh : '_a -> string -> astr_mismatch -> (char -> char) -> string -> bool = <fun>

If the representation of a function h was, for example:

  (str . GT (Average (AAindex ind (Substring 3 16 str))) 3.5)

then the representation of (newh h) would become:

  (pat mm ind str . (Astrstr mm pat (AlphInd ind (Substring 3 16 str))))

representing a function of design h1, but using the parameters of h of view design h2 for Substring. Again, we need not worry about explicitly keeping track of what values were applied to h2 to obtain h, since they are implicitly remembered and can be extracted with the vmatch keyword. Thus, we have seen that views can be designed and manipulated easily in VML, which assists the trial-and-error cycle of the experiments.
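The per-class scores in Table 1 can be recomputed directly from its confusion matrix. The following Python check is ours alone (it is not part of VML or iPSORT) and assumes only the counts printed in the table:

```python
import math

# Recompute the scores of our rule in Table 1 from its confusion matrix.
# Rows: true class; columns: predicted class, in the order cTP, mTP, SP, Other.
confusion = {
    "cTP":   [96, 26, 0, 19],
    "mTP":   [25, 309, 4, 30],
    "SP":    [6, 9, 244, 10],
    "Other": [8, 17, 2, 135],
}
classes = ["cTP", "mTP", "SP", "Other"]
total = sum(sum(row) for row in confusion.values())  # 940 sequences

def scores(cls):
    i = classes.index(cls)
    tp = confusion[cls][i]
    fn = sum(confusion[cls]) - tp
    fp = sum(confusion[c][i] for c in classes if c != cls)
    tn = total - tp - fn - fp
    sens = tp / (tp + fn)
    spec = tp / (tp + fp)  # "specificity" in Table 1's sense
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fn) * (tp + fp) * (tn + fp) * (tn + fn))
    return spec, sens, mcc
```

For the SP class, for instance, this yields 0.98, 0.91, and 0.92 after rounding, matching the published row.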


H. Bannai et al.

5 Discussion

5.1 Implementation

In the C++ library, each view function is encapsulated in an instance of a class derived from the view class. The view class has a method for computing the view's value for an entity. Constructors of the various derived classes can take other instances of view classes as arguments. The view class also has a method which returns the view classes that were used to build the instance (a facility for simulating vmatch, i.e. for decomposing the functions). However, after spending much time in development, we came to feel that coding the view functions in C++ was error-prone and rather tedious. Also, although the view classes encapsulate functions, the functions themselves could not easily be reused for other purposes. On the points mentioned above, we can safely say that VML is advantageous over our C++ library. However, an efficient implementation of VML, which is beyond the scope of this paper, remains a topic of interest. The implementation given in [20] uses the Camlp4 preprocessor (and printer) [23], which converts a VML program (with a different syntax from this paper) into an OCaml program; it may be the case that there are optimizations that could be performed by a dedicated compiler.

5.2 Conclusion

We presented the concept of a language called VML, an extension of the Objective Caml language. The advantages of VML are: 1) since VML is a functional language, views can be composed and applied in a natural way compared to imperative languages; 2) by defining views as the unit of knowledge, the programmer does not need to explicitly keep track of how each individual view was designed (i.e. manage data structures to remember the sets of parameters); 3) the programmer can take "parts" of a good view, which perhaps can only be determined at runtime, and apply them to another (the example in Section 4.2); 4) in an interactive interface (i.e. a VML interactive interpreter), the user can compose and decompose views and view designs and apply them to data, and when the user accidentally stumbles upon an interesting view, he/she can retrieve its design immediately. Using VML, we modeled and described successful knowledge discovery tasks which we have actually experienced, and showed that the points noted above can lighten the burden on the programmer and, as a result, speed up the iterative trial-and-error cycle of computational knowledge discovery processes.

6 Acknowledgements

The authors would like to thank Eijiro Sumii of the University of Tokyo for his most valuable comments and suggestions.


This research was supported in part by Grant-in-Aid for Encouragement of Young Scientists and Grant-in-Aid for Scientiﬁc Research on Priority Areas (C) “Genome Information Science” from the Ministry of Education, Sports, Science and Technology of Japan, and the Research for the Future Program of the Japan Society for the Promotion of Science.

References

[1] H. Bannai, Y. Tamada, O. Maruyama, K. Nakai, and S. Miyano. Views: Fundamental building blocks in the process of knowledge discovery. In Proceedings of the 14th International FLAIRS Conference, pages 233–238. AAAI Press, 2001.
[2] P. Cheeseman and J. Stutz. Bayesian classification (AutoClass): Theory and results. In Advances in Knowledge Discovery and Data Mining. AAAI Press/MIT Press, 1996.
[3] O. Emanuelsson, H. Nielsen, S. Brunak, and G. von Heijne. Predicting subcellular localization of proteins based on their N-terminal amino acid sequence. J. Mol. Biol., 300(4):1005–1016, July 2000.
[4] U. Fayyad, G. Piatetsky-Shapiro, and P. Smyth. From data mining to knowledge discovery in databases. AI Magazine, 17(3):37–54, 1996.
[5] J. D. Helmann. Compilation and analysis of Bacillus subtilis σA-dependent promoter sequences: evidence for extended contact between RNA polymerase and upstream promoter DNA. Nucleic Acids Res., 23(13):2351–2360, 1995.
[6] J. Hughes. Why functional programming matters. Computer Journal, 32(2):98–107, 1989.
[7] S. Kawashima and M. Kanehisa. AAindex: Amino acid index database. Nucleic Acids Res., 28(1):374, 2000.
[8] T. Khabaza and C. Shearer. Data mining with Clementine. IEE Colloquium on 'Knowledge Discovery in Databases', 1995. IEE Digest No. 1995/021(B), London.
[9] P. Langley. The computer-aided discovery of scientific knowledge. In Lecture Notes in Artificial Intelligence, volume 1532, pages 25–39, 1998.
[10] P. Langley and H. A. Simon. Applications of machine learning and rule induction. Communications of the ACM, 38(11):54–64, 1995.
[11] X. Liu, D. L. Brutlag, and J. S. Liu. BioProspector: Discovering conserved DNA motifs in upstream regulatory regions of co-expressed genes. In Pacific Symposium on Biocomputing 2001, volume 6, pages 127–138, 2001.
[12] O. Maruyama and S. Miyano. Design aspects of discovery systems. IEICE Transactions on Information and Systems, E83-D:61–70, 2000.
[13] O. Maruyama, T. Uchida, T. Shoudai, and S. Miyano. Toward genomic hypothesis creator: View designer for discovery. In Discovery Science, volume 1532 of Lecture Notes in Artificial Intelligence, pages 105–116, 1998.
[14] O. Maruyama, T. Uchida, K. L. Sim, and S. Miyano. Designing views in HypothesisCreator: System for assisting in discovery. In Discovery Science, volume 1721 of Lecture Notes in Artificial Intelligence, pages 115–127, 1999.
[15] B. W. Matthews. Comparison of predicted and observed secondary structure of T4 phage lysozyme. Biochim. Biophys. Acta, 405:442–451, 1975.
[16] R. Milner, M. Tofte, R. Harper, and D. MacQueen. The Definition of Standard ML (Revised). MIT Press, 1997.
[17] K. Nakai. Protein sorting signals and prediction of subcellular localization. In P. Bork, editor, Analysis of Amino Acid Sequences, volume 54 of Advances in Protein Chemistry, pages 277–344. Academic Press, San Diego, 2000.


[18] J. Quinlan. Induction of decision trees. Machine Learning, 1:81–106, 1986.
[19] S. Shimozono. Alphabet indexing for approximating features of symbols. Theor. Comput. Sci., 210:245–260, 1999.
[20] E. Sumii and H. Bannai. VMlambda: A functional calculus for scientific discovery. http://www.yl.is.s.u-tokyo.ac.jp/~sumii/pub/, 2001.
[21] S. Wrobel, D. Wettschereck, E. Sommer, and W. Emde. Extensibility in data mining systems. In Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD-96), pages 214–219, 1996.
[22] S. Wu and U. Manber. Fast text searching allowing errors. Commun. ACM, 35:83–91, 1992.
[23] Camlp4 - http://caml.inria.fr/camlp4/.
[24] GenBank - http://www.ncbi.nlm.nih.gov/Genbank.
[25] HypothesisCreator - http://www.hypothesiscreator.net/.
[26] iPSORT - http://www.hypothesiscreator.net/iPSORT/.
[27] Objective Caml - http://caml.inria.fr/ocaml/.
[28] TargetP - http://www.cbs.dtu.dk/services/TargetP/.

Robot Baby 2001

Paul R. Cohen (1), Tim Oates (2), Niall Adams (3), and Carole R. Beal (4)

(1) Department of Computer Science, University of Massachusetts, Amherst. cohen@cs.umass.edu
(2) Department of Computer Science, University of Maryland, Baltimore County. oates@cs.umbc.edu
(3) Department of Mathematics, Imperial College, London. n.adams@ic.ac.uk
(4) Department of Psychology, University of Massachusetts, Amherst. cbeal@psych.umass.edu

Abstract. In this paper we claim that meaningful representations can be learned by programs, although today they are almost always designed by skilled engineers. We discuss several kinds of meaning that representations might have, and focus on a functional notion of meaning as appropriate for programs to learn. Speciﬁcally, a representation is meaningful if it incorporates an indicator of external conditions and if the indicator relation informs action. We survey methods for inducing kinds of representations we call structural abstractions. Prototypes of sensory time series are one kind of structural abstraction, and though they are not denoting or compositional, they do support planning. Deictic representations of objects and prototype representations of words enable a program to learn the denotational meanings of words. Finally, we discuss two algorithms designed to ﬁnd the macroscopic structure of episodes in a domain-independent way.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, p. 29, 2001. © Springer-Verlag Berlin Heidelberg 2001

Inventing Discovery Tools: Combining Information Visualization with Data Mining

Ben Shneiderman

Department of Computer Science, Human-Computer Interaction Laboratory, Institute for Advanced Computer Studies, and Institute for Systems Research, University of Maryland, College Park, MD 20742 USA. ben@cs.umd.edu

Abstract. The growing use of information visualization tools and data mining algorithms stems from two separate lines of research. Information visualization researchers believe in the importance of giving users an overview and insight into the data distributions, while data mining researchers believe that statistical algorithms and machine learning can be relied on to find the interesting patterns. This paper discusses two issues that influence design of discovery tools: statistical algorithms vs. visual data presentation, and hypothesis testing vs. exploratory data analysis. I claim that a combined approach could lead to novel discovery tools that preserve user control, enable more effective exploration, and promote responsibility.

1 Introduction

Genomics researchers, financial analysts, and social scientists hunt for patterns in vast data warehouses using increasingly powerful software tools. These tools are based on emerging concepts such as knowledge discovery, data mining, and information visualization. They also employ specialized methods such as neural networks, decision trees, principal components analysis, and a hundred others. Computers have made it possible to conduct complex statistical analyses that would have been prohibitive to carry out in the past. However, the dangers of using complex computer software grow when user comprehension and control are diminished. Therefore, it seems useful to reflect on the underlying philosophy and appropriateness of the diverse methods that have been proposed. This could lead to a better understanding of when to use given tools and methods, as well as contribute to the invention of new discovery tools and refinement of existing ones. Each tool conveys an outlook about the importance of human initiative and control as contrasted with machine intelligence and power [16]. The conclusion deals with the central issue of responsibility for failures and successes. Many issues influence design of discovery tools, but I focus on two: statistical algorithms vs. visual data presentation and hypothesis testing vs. exploratory data analysis.

Keynote for the Discovery Science 2001 Conference, November 25-28, 2001, Washington, DC. Also to appear in Information Visualization, a new journal from Palgrave/Macmillan.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 17–28, 2001. © Springer-Verlag Berlin Heidelberg 2001

2 Statistical Algorithms vs. Visual Data Presentation

Early efforts to summarize data generated means, medians, standard deviations, and ranges. These numbers were helpful because their compactness, relative to the full data set, and their clarity supported understanding, comparisons, and decision making. Summary statistics appealed to the rational thinkers who were attracted to the objective nature of data comparisons that avoided human subjectivity. However, they also hid interesting features such as whether distributions were uniform, normal, skewed, bi-modal, or distorted by outliers. A remedy to these problems was the presentation of data as a visual plot so interesting features could be seen by a human researcher. The invention of time-series plots and statistical graphics for economic data is usually attributed to William Playfair (1759-1823), who published The Commercial and Political Atlas in 1786 in London. Visual presentations can be very powerful in revealing trends, highlighting outliers, showing clusters, and exposing gaps. Visual presentations can give users a richer sense of what is happening in the data and suggest possible directions for further study. Visual presentations speak to the intuitive side and the sense-making spirit that is part of exploration. Of course visual presentations have their limitations in terms of dealing with large data sets, occlusion of data, disorientation, and misinterpretation. By early in the 20th century, statistical approaches, encouraged by the Age of Rationalism, became prevalent in many scientific domains. Ronald Fisher (1890-1962) developed modern statistical methods for experimental designs related to his extensive agricultural studies. His development of analysis of variance for design of factorial experiments [7] helped advance scientific research in many fields [12]. His approaches are still widely used in cognitive psychology and have influenced most experimental sciences. The appearance of computers heightened the importance of this issue.
Computers can be used to carry out far more complex statistical algorithms, and they can also be used to generate rich visual, animated, and user-controlled displays. Typical presentation of statistical data mining results is by brief summary tables, induced rules, or decision trees. Typical visual data presentations show data-rich histograms, scattergrams, heatmaps, treemaps, dendrograms, parallel coordinates, etc. in multiple coordinated windows that support user-controlled exploration with dynamic queries for filtering (Fig. 1). Comparative studies of statistical summaries and visual presentations demonstrate the importance of user familiarity and training with each approach and the influence of specific tasks. Of course, statistical summaries and visual presentations can both be misleading or confusing.

Fig. 1. Spotfire (www.spotfire.com) display of chemical elements showing the strong correlation between ionization energy and electronegativity, and two dramatic outliers: radon and helium.


An example may help clarify the distinction. Promoters of statistical methods may use linear correlation coefficients to detect relationships between variables, which works wonderfully when there is a linear relationship between variables and when the data is free from anomalies. However, if the relationship is quadratic (or exponential, sinusoidal, etc.) a linear algorithm may fail to detect the relationship. Similarly if there are data collection problems that add outliers or if there are discontinuities over the range (e.g. freezing or boiling points of water), then linear correlation may fail. A visual presentation is more likely to help researchers find such phenomena and suggest richer hypotheses.
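The failure mode described above is easy to reproduce. In the following Python illustration (the data points are invented for the example), Pearson's r detects a perfect linear relationship but is blind to an equally deterministic quadratic one:

```python
# Pearson's linear correlation coefficient, computed from first principles.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

xs = [-3, -2, -1, 0, 1, 2, 3]
linear = [2 * x + 1 for x in xs]   # y = 2x + 1
quadratic = [x * x for x in xs]    # y = x^2

r_lin = pearson(xs, linear)        # 1 up to floating point: perfect linear fit
r_quad = pearson(xs, quadratic)    # 0: a real relationship, invisible to r
```

A scatter plot of the second data set shows the parabola at a glance, which is exactly the point made above about visual presentation.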

3 Hypothesis Testing vs. Exploratory Data Analysis

Fisher's approach not only promoted statistical methods over visual presentations, but also strongly endorsed theory-driven hypothesis-testing research over casual observation and exploratory data analysis. This philosophical strand goes back to Francis Bacon (1561-1626) and later to John Herschel's 1830 A Preliminary Discourse on the Study of Natural Philosophy. They are usually credited with influencing modern notions of scientific methods based on rules of induction and the hypothetico-deductive method. Believers in scientific methods typically see controlled experiments as the fast path to progress, even though the reductionist approach of testing one variable at a time can be disconcertingly slow. Fisher's invention of factorial experiments helped make controlled experimentation more efficient. Advocates of the reductionist approach and controlled experimentation argue that large benefits come when researchers are forced to clearly state their hypotheses in advance of data collection. This enables them to limit the number of independent variables and to measure a small number of dependent variables. They believe that the courageous act of stating hypotheses in advance sharpens thinking, leads to more parsimonious data collection, and encourages precise measurement. Their goals are to understand causal relationships, to produce replicable results, and to emerge with generalizable insights. Critics complain that the reductionist approach, with its laboratory conditions to ensure control, is too far removed from reality (not situated and therefore stripped of context) and may ignore important variables that affect outcomes. They also argue that by forcing researchers to state an initial hypothesis, their observation will be biased towards finding evidence to support their hypothesis and will ignore interesting phenomena that are not related to their dependent variables.
On the other side of this interesting debate are advocates of exploratory data analysis who believe that great gains can be made by collecting voluminous data sets and then searching for interesting patterns. They contend that statistical analyses and machine learning techniques have matured enough to reveal complex relationships that were not anticipated by researchers. They believe that a priori hypotheses limit research and are no longer needed because of the capacity of computers to collect and analyze voluminous data. Skeptics worry that any given set of data, no matter how large, may still be a special case, thereby undermining the generalizability of the results. They also question whether detection of strong statistical relationships can ever lead to an understanding of cause and effect. They declare that correlation does not imply causation.


Once again, an example may clarify this issue. If a semiconductor fabrication facility is generating a high rate of failures, promoters of hypothesis testing might list the possible causes, such as contaminants, excessive heat, or too rapid cooling. They might seek evidence to support these hypotheses and maybe conduct trial runs with the equipment to see if they could regenerate the problem. Promoters of exploratory data analysis might want to collect existing data from the past year of production under differing conditions and then run data mining tools against these data sets to discover correlates of high rates of failure. Of course, an experienced supervisor may blend these approaches, gathering exploratory hypotheses from the existing data and then conducting confirmatory tests.

4 The New Paradigms

The emergence of the computer has shaken the methodological edifice. Complex statistical calculations and animated visualizations have become feasible. Elaborate controlled experiments can be run hundreds of times, and exploratory data analysis has become widespread. Devotees of hypothesis testing have new tools to collect data and prove their hypotheses. T-tests and analysis of variance (ANOVA) have been joined by linear and non-linear regression, complex forecasting methods, and discriminant analysis. Those who believe in exploratory data analysis methods have even more new tools, such as neural networks, rule induction, a hundred forms of automated clustering, and even more machine learning methods. These are often covered in the rapidly growing academic discipline of data mining [6,8]. Witten and Frank define data mining as "the extraction of implicit, previously unknown, and potentially useful information from data." They caution that "exaggerated reports appear of the secrets that can be uncovered by setting learning algorithms loose on oceans of data. But there is no magic in machine learning, no hidden power, no alchemy. Instead there is an identifiable body of simple and practical techniques that can often extract useful information from raw data." [19] Similarly, those who believe in data or information visualization are having a great time as the computer enables rapid display of large data sets with rich user control panels to support exploration [5]. Users can manipulate up to a million data items with 100-millisecond update of displays that present color-coded, size-coded markers for each item. With the right coding, human pre-attentive perceptual skills enable users to recognize patterns, spot outliers, identify gaps, and find clusters in a few hundred milliseconds.
When data sets grow past a million items and cannot be easily seen on a computer display, users can extract relevant subsets, aggregate data into meaningful units, or randomly sample to create a manageable data set. The commercial success of tools such as SAS JMP (www.sas.com), SPSS Diamond (www.spss.com), and Spotfire (www.spotfire.com) (Fig. 1), especially for pharmaceutical drug discovery and genomic data analysis, demonstrates the attraction of visualization. Other notable products include Inxight's Eureka (www.inxight.com) for multidimensional tabular data and Visual Insights' eBizinsights (www.visualinsights.com) for web log visualization. Spence characterizes information visualization with this vignette: "You are the owner of some numerical data which, you feel, is hiding some fundamental relation...you then glance at some visual presentation of that data and exclaim 'Ah ha! - now I understand.'"


[13]. But Spence also cautions that "information visualization is characterized by so many beautiful images that there is a danger of adopting a ’Gee Whiz’ approach to its presentation."

5 A Spectrum of Discovery Tools

The happy resolution to these debates is to take the best insights from both extremes and create novel discovery tools for many different users and many different domains. Skilled problem solvers often combine observation at early stages, which leads to hypothesis-testing experiments. Alternatively, they may have a precise hypothesis, but if they are careful observers during a controlled experiment, they may spot anomalies that lead to new hypotheses. Skilled problem solvers often combine statistical tests and visual presentation. A visual presentation of data may identify two clusters whose separate analysis can lead to useful results when a combined analysis would fail. Similarly, a visual presentation might show a parabola, which indicates a quadratic relationship between variables, but no relationship would be found if a linear correlation test were applied. Devotees of statistical methods often find that presenting their results visually helps to explain them and suggests further statistical tests. The process of combining statistical methods with visualization tools will take some time because of the conflicting philosophies of the promoters. The famed statistician John Tukey (1915-2000) quickly recognized the power of combined approaches [14]: "As yet I know of no person or group that is taking nearly adequate advantage of the graphical potentialities of the computer... In exploration they are going to be the data analyst's greatest single resource." The combined strength of visual data mining would enrich both approaches and enable more successful solutions [17]. However, most books on data mining have only brief discussions of information visualization, and vice versa. Some researchers have begun to implement interactive visual approaches to data mining [10,2,15]. Accelerating the process of combining hypothesis testing with exploratory data analysis will also bring substantial benefits.
New statistical tests and metrics for uniformity of distributions, outlier-ness, or cluster-ness will be helpful, especially if visual interfaces enable users to examine the distributions rapidly, change some parameters and get fresh metrics and corresponding visualizations.

6 Case Studies of Combining Visualization with Data Mining

One way to combine visual techniques with automated data mining is to provide support tools for users with both components. Users can then explore data with direct manipulation user interfaces that control information visualization components and apply statistical tests when something interesting appears. Alternatively, they can use data mining as a first pass and then examine the results visually. Direct manipulation strategies with user-controlled visualizations start with visual presentation of the world of action, which includes the objects of interest and the actions. Early examples included air traffic control and video games. In graphical user interfaces, direct manipulation means dragging files to folders or to the trashcan for deletion. Rapid incremental and reversible actions


encourage exploration and provide continuous feedback so users can see what they are doing. Good examples are moving or resizing a window. Modern applications of direct manipulation principles have led to information visualization tools that show hundreds of thousands of items on the screen at once. Sliders, check boxes, and radio buttons allow users to filter items dynamically with updates in less than 100 milliseconds.
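The slider-and-checkbox interaction described above amounts to re-running a conjunction of range predicates over the data on every adjustment. A minimal Python sketch of that filtering step (the records and field names are invented for illustration; this is not the HomeFinder's code):

```python
# Dynamic-query filtering: slider positions become numeric range predicates,
# checkboxes become required boolean flags, and the visible set is
# recomputed on every control adjustment.
homes = [
    {"price": 150_000, "bedrooms": 3, "fireplace": True},
    {"price": 320_000, "bedrooms": 4, "fireplace": False},
    {"price": 95_000,  "bedrooms": 2, "fireplace": True},
]

def dynamic_query(items, ranges, flags):
    # ranges: field -> (lo, hi) from sliders, inclusive
    # flags:  field -> required boolean value from checkboxes
    return [it for it in items
            if all(lo <= it[f] <= hi for f, (lo, hi) in ranges.items())
            and all(it[f] == v for f, v in flags.items())]

visible = dynamic_query(homes,
                        {"price": (0, 200_000), "bedrooms": (2, 3)},
                        {"fireplace": True})
# visible now holds the two homes under $200,000 with 2-3 bedrooms and a fireplace
```

In the real tools, the 100-millisecond update budget is met by incremental data structures rather than a full rescan, but the query semantics are the same.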

Fig. 2. Dynamic Queries HomeFinder with sliders to control the display of markers indicating homes for sale. Users can specify distances to markers, bedrooms, cost, type of house, and features [18].

Early information visualizations included the Dynamic Queries HomeFinder (Fig. 2), which allowed users to select from a database of 1100 homes using sliders on home price, number of bedrooms, and distance from markers, plus buttons for other features such as fireplaces, central air conditioning, etc. [18]. This led to the FilmFinder [1] and then the successful commercial product, Spotfire (Fig. 1). One Spotfire feature is the View Tip, which uses statistical data mining methods to suggest interesting pair-wise relationships found by linear correlation coefficients (Fig. 3). The View Tip might be improved by giving users more control over the specification of interesting-ness that ranks the outcomes. While some users may be interested in high linear correlation coefficients, others may be interested in low correlation coefficients, or might prefer rankings by quadratic,


exponential, sinusoidal or other correlations. Other choices might be to rank distributions by existing metrics such as skewness (negative or positive) or outlierness [3]. New metrics for degree of uniformity, cluster-ness, or gap-ness are excellent candidates for research. We are in the process of building a control panel that allows users to specify the distributions they are seeking by adjusting sliders and seeing how the rankings shift. Five algorithms have been written for 1-dimensional data and one for 2-dimensional data, but more will be prepared soon (Fig. 4).
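A View Tip-style ranking with a user-swappable definition of interesting-ness can be sketched as follows (the table and column names are invented, and this is not Spotfire's implementation; the default metric is the strength of linear correlation, as in Fig. 3):

```python
import itertools, math

def pearson(xs, ys):
    # Pearson's linear correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

table = {
    "at_bats": [100, 300, 500, 700],
    "hits":    [25, 80, 140, 190],   # nearly proportional to at_bats
    "age":     [31, 24, 35, 27],     # unrelated to the other columns
}

def rank_pairs(columns, interest=lambda xs, ys: abs(pearson(xs, ys))):
    # Rank all column pairs by the chosen interest metric, strongest first.
    pairs = itertools.combinations(columns, 2)
    return sorted(pairs, reverse=True,
                  key=lambda p: interest(columns[p[0]], columns[p[1]]))

ranked = rank_pairs(table)  # ("at_bats", "hits") ranks first
```

Passing a different `interest` function (low correlation, skewness, a cluster-ness metric) changes the ranking without touching the rest of the tool, which is exactly the user control argued for above.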

Fig. 3. Spotfire View Tip panel with ranking of possible 2-dimensional scatter plots in descending order by the strength of linear correlation. Here the strong correlation in baseball statistics is shown between Career At Bats and Career Hits. Notice the single outlier in the upper right corner, representing Pete Rose’s long successful career.

A second case study is our work with time-series pattern finding [4]. Current tools for stock market or genomic expression data from DNA microarrays rely on clustering in multidimensional space, but a more user-controlled specification tool might enable analysts to carefully specify what they want [9]. Our efforts to build a tool, TimeSearcher, have relied on query specification by drawing boxes to indicate what ranges of values are desired for each time period (Fig. 5). It has more of the spirit of hypothesis testing. While this takes somewhat greater effort, it gives users greater control over the query results. Users can move the boxes around in a direct manipulation style and immediately see the new set of results. The opportunity for rapid exploration is dramatic and users can immediately see where matches are frequent and where they are rare.
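The timebox semantics can be stated compactly: a series matches a query when its values stay inside every box over that box's span of time steps. A small Python sketch of that matching rule (the stock data are invented, and TimeSearcher's actual query engine is more elaborate):

```python
# Timebox matching: each box is (t_start, t_end, v_low, v_high), all
# inclusive, and a series matches only if it stays inside every box.
def matches(series, timeboxes):
    return all(v_low <= series[t] <= v_high
               for (t_start, t_end, v_low, v_high) in timeboxes
               for t in range(t_start, t_end + 1))

stocks = {
    "AAA": [10, 12, 15, 18, 22],  # rises through both boxes
    "BBB": [10, 9, 8, 8, 7],      # falls out of the second box
}
boxes = [(0, 1, 8, 13), (3, 4, 15, 25)]  # a "rising" pattern

hits = [name for name, s in stocks.items() if matches(s, boxes)]
# hits contains only "AAA"
```

Dragging a box in the interface corresponds to changing one tuple and re-running the query, which is why the result set can be updated immediately as the user explores.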


Fig. 4. Prototype panel to enable user specification of 1-dimensional distribution requirements. The user has chosen the Cluster Finder II in the Algorithm box at the top. The user has specified the cluster tightness desired in the middle section. The ranking of the Results at the bottom lists all distributions according to the number of identifiable clusters. The M93-007 data is the second one in the Results list and it has four identifiable clusters. (Implemented by Kartik Parija and Jaime Spacco).


Fig. 5. TimeSearcher allows users to specify ranges for time-series data and immediately see the result set. In this case two timeboxes have been drawn and 5 of the 225 stocks match this pattern [9].

7 Conclusion and Recommendations

Computational tools for discovery, such as data mining and information visualization, have advanced dramatically in recent years. Unfortunately, these tools have been developed by largely separate communities with different philosophies. Data mining and machine learning researchers tend to believe in the power of their statistical methods to identify interesting patterns without human intervention. Information visualization researchers tend to believe in the importance of user control by domain experts to produce useful visual presentations that provide unanticipated insights.

Recommendation 1: Integrate data mining and information visualization to invent discovery tools. By adding visualization to data mining (such as presenting scattergrams to accompany induced rules), users will develop a deeper understanding of their data. By adding data mining to visualization (such as the Spotfire View Tip), users will be able to specify what they seek. Both communities of researchers emphasize exploratory data analysis over hypothesis testing. A middle ground of enabling users to structure their exploratory data analysis by applying their domain knowledge (such as limiting data mining algorithms to specific range values) may also be a source of innovative tools.

Recommendation 2: Allow users to specify what they are seeking and what they find interesting. By allowing data mining and information visualization users to constrain and direct their tools, they may produce more rapid innovation. As in the Spotfire View Tip example, users could be given a control panel to indicate what kind of correlations or outliers they are looking for. As users test their hypotheses against the data, they find dead ends and discover new possibilities. Since discovery is a process, not a point event, keeping a history of user actions has a high payoff. Users should be able to save their state (data items and control panel settings), back up to previous states, and send their history to others.
Recommendation 3: Recognize that users are situated in a social context. Researchers and practitioners rarely work alone. They need to gather data from multiple sources, consult with domain experts, pass on partial results to others, and then present their findings to colleagues and decision makers. Successful tools enable users to exchange data, ask for consultations from peers and mentors, and report results to others conveniently.

Recommendation 4: Respect human responsibility when designing discovery tools. If tools are comprehensible, predictable, and controllable, then users can develop mastery over their tools and experience satisfaction in accomplishing their work. They want to be able to take pride in their successes, and they should be responsible for their failures. When tools become too complex or unpredictable, users will avoid using them because the tools are out of their control. Users often perform better when they understand and control what the computer does [11]. If complex statistical algorithms or visual presentations are not well understood by users, they cannot act on the results with confidence. I believe that visibility of the statistical processes and outcomes minimizes the danger of misinterpretation and incorrect results. Comprehension of the algorithms behind the visualizations and the implications of layout encourages effective usage that leads to successful discovery.

28

B. Shneiderman

Acknowledgements. Thanks to Mary Czerwinski, Lindley Darden, Harry Hochheiser, Jenny Preece, and Ian Witten for comments on drafts.

References

1. Ahlberg, C. and Shneiderman, B., Visual Information Seeking: Tight coupling of dynamic query filters with starfield displays, Proc. ACM CHI '94 Human Factors in Computing Systems, ACM Press, New York (April 1994), 313-317 + color plates.
2. Ankerst, M., Ester, M., and Kriegel, H.-P., Towards an effective cooperation of the user and the computer for classification, Proc. 6th ACM SIGKDD International Conf. on Knowledge Discovery and Data Mining, ACM, New York (2000), 179-188.
3. Barnett, V. and Lewis, T., Outliers in Statistical Data, 3rd edition, John Wiley & Sons (April 1994).
4. Bradley, E., Time-series analysis, in Berthold, M. and Hand, D. (Editors), Intelligent Data Analysis: An Introduction, Springer (1999).
5. Card, S., Mackinlay, J., and Shneiderman, B. (Editors), Readings in Information Visualization: Using Vision to Think, Morgan Kaufmann Publishers, San Francisco, CA (1999).
6. Fayyad, U., Piatetsky-Shapiro, G., Smyth, P., and Uthurusamy, R. (Editors), Advances in Knowledge Discovery and Data Mining, MIT Press, Cambridge, MA (1996).
7. Fisher, R. A., The Design of Experiments, Oliver and Boyd, Edinburgh (1935); 9th edition, Macmillan, New York (1971).
8. Han, J. and Kamber, M., Data Mining: Concepts and Techniques, Morgan Kaufmann Publishers, San Francisco (2000).
9. Hochheiser, H. and Shneiderman, B., Interactive exploration of time-series data, in Proc. Discovery Science, Springer (2001).
10. Hinneburg, A., Keim, D., and Wawryniuk, M., HD-Eye: Visual mining of high-dimensional data, IEEE Computer Graphics and Applications 19, 5 (Sept/Oct 1999), 22-31.
11. Koenemann, J. and Belkin, N., A case for interaction: A study of interactive information retrieval behavior and effectiveness, Proc. CHI '96 Human Factors in Computing Systems, ACM Press, New York (1996), 205-212.
12. Montgomery, D., Design and Analysis of Experiments, 3rd ed., Wiley, New York (1991).
13. Spence, R., Information Visualization, Addison-Wesley, Essex, England (2001).
14. Tukey, J., The technical tools of statistics, American Statistician 19 (1965), 23-28. Available at: http://stat.bell-labs.com/who/tukey/memo/techtools.html
15. Ware, M., Frank, E., Holmes, G., Hall, M., and Witten, I. H., Interactive machine learning: Letting users build classifiers, International Journal of Human-Computer Studies (2001, in press).
16. Weizenbaum, J., Computer Power and Human Reason: From Judgment to Calculation, W. H. Freeman and Co., San Francisco, CA (1976).
17. Westphal, C. and Blaxton, T., Data Mining Solutions: Methods and Tools for Solving Real-World Problems, John Wiley & Sons (1999).
18. Williamson, C. and Shneiderman, B., The Dynamic HomeFinder: Evaluating dynamic queries in a real-estate information exploration system, Proc. ACM SIGIR '92 Conference, ACM Press (1992), 338-346.
19. Witten, I. and Frank, E., Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations, Morgan Kaufmann Publishers, San Francisco (2000).

Queries Revisited

Dana Angluin
Computer Science Department, Yale University
P. O. Box 208285, New Haven, CT 06520-8285
angluin@cs.yale.edu

Abstract. We begin with a brief tutorial on the problem of learning a ﬁnite concept class over a ﬁnite domain using membership queries and/or equivalence queries. We then sketch general results on the number of queries needed to learn a class of concepts, focusing on the various notions of combinatorial dimension that have been employed, including the teaching dimension, the exclusion dimension, the extended teaching dimension, the ﬁngerprint dimension, the sample exclusion dimension, the Vapnik-Chervonenkis dimension, the abstract identiﬁcation dimension, and the general dimension.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, p. 16, 2001.
© Springer-Verlag Berlin Heidelberg 2001

Discovering Mechanisms: A Computational Philosophy of Science Perspective

Lindley Darden
Department of Philosophy, University of Maryland
College Park, MD 20742
darden@carnap.umd.edu
http://www.inform.umd.edu/PHIL/faculty/LDarden/

Abstract. A task in the philosophy of discovery is to ﬁnd reasoning strategies for discovery, which fall into three categories: strategies for generation, evaluation and revision. Because mechanisms are often what is discovered in biology, a new characterization of mechanism aids in their discovery. A computational system for discovering mechanisms is sketched, consisting of a simulator, a library of mechanism schemas and components, and a discoverer for generating, evaluating and revising proposed mechanism schemas. Revisions go through stages from how possibly to how plausibly to how actually.

1 Introduction

Philosophers of discovery look for reasoning strategies that can guide discovery. This work is in the framework of Herbert Simon's (1977) view of discovery as problem solving. Given a problem to be solved, such as explaining a phenomenon, one goal is to find a mechanism that produces that phenomenon. For example, given the phenomenon of the production of a protein, the goal is to find the mechanism of protein synthesis. The task of the philosopher of discovery is to find reasoning strategies to guide such discoveries. Strategies are heuristics for problem solving; that is, they provide guidance but do not guarantee success. Discovery is not viewed as something that occurs in a single a-ha moment of insight. Instead, discovery is construed as a process that occurs over an extended period of time, going through cycles of generation, evaluation, and revision (Darden 1991). The history of science is a source of "compiled hindsight" (Darden 1987) about reasoning strategies for discovering mechanisms. This paper will use examples from the history of biology to illustrate general reasoning strategies for discovering mechanisms.

Section 2 puts this work into the broader context of a matrix of biological knowledge. Section 3 discusses a new characterization of mechanism, based on an ontology of entities, properties, and activities. Section 4 outlines components of a mechanism discovery system, including a simulator, a library of mechanism designs and components, and a discoverer.

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 3-15, 2001.
© Springer-Verlag Berlin Heidelberg 2001

4

L. Darden

[Figure 1 depicts the biomatrix as linked boxes: Knowledge Bases & AI, Databases, Information Storage & Retrieval, Laboratory and Field Biology Data and Knowledge, and Physics and Chemistry.]

Fig. 1. Matrix of Biological Knowledge

2 Biomatrix

This work is situated in a larger framework. In the 1980s, Harold Morowitz (1985) chaired a National Academy of Sciences workshop on models in biology. As a result of that workshop, a society was formed with the name, “Biomatrix: A Society for Biological Computation and Informatics” (Morowitz and Smith 1987). This society was ahead of its time; it has splintered into diﬀerent groups and its grand vision has yet to be realized. Nonetheless, its vision is worth revisiting in order to put the work to be discussed in this paper into a broader context. As Figure 1 shows, the biomatrix vision included relations among three areas: ﬁrst, databases; second, information storage and retrieval by literature cataloging (e.g., Medline); and, third, artiﬁcial intelligence and knowledge bases. Discovery science has worked in all three areas since the 1980s. Knowledge discovery in databases is a booming area (e.g., Piatetsky-Shapiro and Frawley, eds., 1991).

Discovery using abstracts available from literature catalogues has been developed by Don Swanson (1990) and others. Discovery using knowledge-based systems is an active area, especially in computational biology. The meetings on Intelligent Systems in Molecular Biology and the International Society for Computational Biology arose from that part of the biomatrix. It is in the knowledge-based systems box that my work today falls. How mechanism discovery relates to databases and information retrieval will perhaps occur to the reader.

3 Mechanisms, Schemas, and Sketches

Often in biology, what is to be discovered is a mechanism. Physicists often aim to discover general laws, such as Newton's laws of motion. However, few biological phenomena are best characterized by universal, mathematical laws (Beatty 1995). The field of molecular biology, for example, studies mechanisms, such as the mechanisms of DNA replication, protein synthesis, and gene regulation. The lively area of functional genomics is now attempting to discover the mechanisms in which the gene sequences act. Such mechanisms include gene expression, during both embryological development and normal gene activities in the adult. The field of biochemistry also studies mechanisms when it finds the activities that transform one stage in a pathway to the next, such as the enzymes, reactants, and products in the Krebs cycle that produces the energy molecule ATP. An important current scientific task is to connect genetic mechanisms studied by molecular biology with metabolic mechanisms studied by biochemistry. As that task is accomplished, science will have a unified picture of the mechanisms that carry out the two essential features of life according to Aristotle: reproduction and nutrition. Given this importance of mechanisms in biology, a correspondingly important task for discovery science is to find methods for discovering mechanisms.

If the goal is to discover a mechanism, then the nature of that product shapes the process of discovery. A new characterization of mechanism aids the search for reasoning strategies to discover mechanisms. A mechanism is sought to explain how a phenomenon is produced (Machamer, Darden, Craver 2000), how some task is carried out (Bechtel and Richardson 1993), or how the mechanism as a whole behaves (Glennan 1996). Mechanisms may be characterized in the following way: Mechanisms are entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions. (Machamer, Darden, Craver 2000, p.
3). Mechanisms are regular in that they usually work in the same way under the same conditions. The regularity is exhibited in the typical way that the mechanism runs from beginning to end; what makes it regular is the productive continuity between stages. Mechanisms exhibit productive continuity without gaps
from the set up to the termination conditions; that is, each stage gives rise to, allows, drives, or makes the next. The ontology proposed here consists of entities, properties, and activities. Mechanisms are composed of both entities (with their properties) and activities. Activities are the producers of change. Entities are the things that engage in activities. Activities require that entities have speciﬁc types of properties. For example, two entities, a DNA base and its complement, engage in the activity of hydrogen bonding because of their properties of geometric shape and weak polar charges. For a given scientiﬁc ﬁeld, there are typically entities and activities that are accepted as relatively fundamental or taken to be unproblematic for the purposes of a given scientist, research group, or ﬁeld. That is, descriptions of mechanisms in that ﬁeld typically bottom out somewhere. Bottoming out is relative: diﬀerent types of entities and activities are where a given ﬁeld stops when constructing its descriptions of mechanisms. In molecular biology, mechanisms typically bottom out in descriptions of the activities of cell organelles, such as the ribosome, and molecules, including macromolecules, smaller molecules, and ions. The most important kinds of activities in molecular biology are geometrico-mechanical and electro-chemical activities. An example of a geometrico-mechanical activity is the lock and key docking of an enzyme and its substrate. Electro-chemical activities include strong covalent bonding and weak hydrogen bonding. Entities and activities are interdependent (Machamer, Darden, Craver 2000, p. 6). For example, appropriate chemical valences are necessary for covalent bonding. Polar charges are necessary for hydrogen bonding. Appropriate shapes are necessary for lock and key docking. 
This interdependence of entities and activities allows one to reason about one, based on what is known or conjectured about the other, in each stage of the mechanism (Darden and Craver, in press).

A mechanism schema is a truncated abstract description of a mechanism that can be filled with more specific descriptions of component entities and activities. An example is the following: DNA → RNA → protein. This is a diagram of the central dogma of molecular biology. It is a very abstract, schematic representation of the mechanism of protein synthesis. A schema may be even more abstract if it merely indicates functional roles played in the mechanism by fillers of a place in the schema (Craver 2001). Consider the schema DNA → template → protein. The schema term "template" indicates the functional role played by the intermediate between DNA and protein. Hypotheses about role-fillers changed during the incremental discovery of the mechanism of protein synthesis in the 1950s and 1960s. Thus, mechanism schemas are particularly good ways of representing functional roles. (For discussion of "local" and "integrated" functions and a less schematic way of representing them in a computational system, see Karp 2000.)
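The idea of a schema whose functional roles are filled incrementally can be sketched as a small data structure. This is only an illustrative sketch, not an implemented system; the class and role names are assumptions for the example.

```python
# Hypothetical sketch: a mechanism schema as an ordered list of functional
# roles, each of which can be filled with a more specific entity or activity.

class MechanismSchema:
    def __init__(self, name, roles):
        self.name = name
        self.roles = list(roles)  # ordered functional roles
        self.fillers = {}         # role -> specific entity or activity

    def fill(self, role, filler):
        """Hypothesize a specific filler for a functional role."""
        if role not in self.roles:
            raise ValueError(f"unknown role: {role}")
        self.fillers[role] = filler

    def is_instantiated(self):
        """A schema becomes a mechanism description once every role is filled."""
        return all(r in self.fillers for r in self.roles)

    def describe(self):
        """Render the schema, marking unfilled roles as black boxes."""
        return " -> ".join(self.fillers.get(r, f"<{r}>") for r in self.roles)


# The central dogma as an abstract schema with a functional "template" role:
protein_synthesis = MechanismSchema("protein synthesis",
                                    ["DNA", "template", "protein"])
protein_synthesis.fill("DNA", "DNA")
protein_synthesis.fill("protein", "protein")
print(protein_synthesis.describe())           # DNA -> <template> -> protein
protein_synthesis.fill("template", "messenger RNA")  # the 1950s-60s role-filler
print(protein_synthesis.is_instantiated())    # True
```

An unfilled role rendered as `<template>` corresponds to a black box in a mechanism sketch; filling every role yields an instantiated schema.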

Table 1. Constraints on the Organization of Mechanisms (from Craver and Darden 2001)

- Character of phenomenon
- Componency constraints: entities and activities; modules
- Spatial constraints: compartmentalization; localization; connectivity; structure; orientation
- Temporal constraints: order; rate; duration; frequency
- Hierarchical constraints: integration of levels

Mechanism sketches are incomplete schemas. They contain black boxes, which cannot yet be filled with known components. Attempts to instantiate a sketch would leave a gap in the productive continuity; that is, knowledge of the needed particular entities and activities is missing. Thus, sketches indicate what needs to be discovered in order to find a mechanism schema. Once a schema is found and instantiated, a detailed description of a mechanism results. For example, a more detailed description of the protein synthesis mechanism (often depicted in diagrams) satisfies the constraints that any adequate description of a mechanism must satisfy. It shows how the phenomenon, the synthesis of a protein, is carried out by the operation of the mechanism. It depicts the entities (DNA, RNA, and amino acids) as well as, implicitly, the activities. Hydrogen bonding is the activity operating when messenger RNA is copied from DNA. There is a geometrico-mechanical docking of the messenger RNA and the ribosome, a particle in the cytoplasm. Hydrogen bonding again occurs as the codons on messenger RNA bond to the anticodons on transfer RNAs carrying amino acids. Finally, covalent bonding is the activity that links the amino acids together in the protein. Good mechanism descriptions show the spatial relations of the components and the temporal order of the stages.

A detailed description of a mechanism satisfies several general constraints. (They are listed in Table 1 and indicated here by italics.) There is a phenomenon that the mechanism, when working, produces, for example, the synthesis of a protein. The nature of the phenomenon, which may be recharacterized as research on it proceeds, constrains details about the mechanism that produces it. For example, the components of the mechanism, the entities and activities, must be adequate to synthesize a protein, composed of amino acids tightly covalently bonded to each other. There are various spatial constraints. The DNA is located in the nucleus (in eucaryotes) and the rest of the machinery is in the cytoplasm. The ribosome is a particle with a two-part structure that allows it to attach to the messenger RNA and orient the codons of the messenger so that particular transfer RNAs can hydrogen bond to them. There is a particular order in which the steps occur, and they take certain amounts of time. All of these constraints can play roles in the search for mechanisms, and then they become part of an adequate description of a mechanism. (For more discussion of these constraints, see Craver and Darden 2001.)

From this list of constraints on an adequate description of a mechanism, it is evident that mere equations do not adequately represent the numerous features of a mechanism, especially spatial constraints. Diagrams that depict structural features, spatial relations, and temporal sequences are good representations of mechanisms.

To sum up so far: Recent work has provided this new characterization of what a mechanism is, the constraints that any adequate description of a mechanism must satisfy, and an analysis of abstract mechanism schemas and incomplete mechanism sketches that can play roles in guiding discovery.

4 Outline of a System for Constructing Hypothetical Mechanisms

Components of a computational system for discovering mechanisms are outlined in Figure 2. They include a simulator, a hypothesized mechanism schema, a discoverer with reasoning strategies for generation, evaluation, and revision, and a searchable, indexed library.

4.1 Simulator

The goal is to construct a simulator that adequately simulates a biological mechanism. Given the set up conditions, the simulator can be used to predict speciﬁc termination conditions. The simulator is an instantiation of a mechanism schema. It may contain more or less detail about the speciﬁc component entities and activities and their structural, spatial and temporal organization. From a human factors perspective, a video option to display the mechanism simulation in action would aid the user in seeing what the mechanism is doing at each stage. The video could be stopped at each stage and details of the entities and activities of that stage examined in more detail.
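One minimal way to realize the simulator as an instantiation of a schema is an ordered list of stage functions, each producing the conditions for the next, so that set-up conditions yield predicted termination conditions. The transcription/translation toy below is only a sketch of this idea, not a biological model; all names are illustrative assumptions.

```python
# Hypothetical sketch: a simulator runs an instantiated schema stage by stage,
# carrying the mechanism's state from set-up to termination conditions.

def transcribe(state):
    # Hydrogen bonding copies messenger RNA from DNA (toy encoding: T -> U).
    state["mRNA"] = state["DNA"].replace("T", "U")
    return state

def translate(state):
    # Ribosome docking, codon/anticodon bonding, and covalent bonding of
    # amino acids, collapsed into a single toy step.
    state["protein"] = f"protein({state['mRNA']})"
    return state

def simulate(stages, setup_conditions):
    """Run each stage in order; each stage gives rise to the next (productive
    continuity). The result is the predicted termination conditions."""
    state = dict(setup_conditions)
    for stage in stages:
        state = stage(state)
    return state

result = simulate([transcribe, translate], {"DNA": "ATGC"})
print(result["protein"])  # the prediction derived from the set-up conditions
```

A video display, as suggested above, would amount to rendering the intermediate `state` after each stage so the user can inspect the entities and activities at that point.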

[Figure 2 shows a hypothetical mechanism schema linked to a discoverer with reasoning strategies for generation, evaluation, and revision; to a library of types of schemas, types of modules, types of entities, and types of activities; and to a mechanism simulator.]

Fig. 2. Outline for a Mechanism Discovery System

4.2 Library

A mechanism schema is discovered by iterating through stages of generation, evaluation, and revision. Generation is accomplished by several steps. First, a phenomenon to be explained must be characterized. Its mode of description will guide the search for schemas that can produce it. Search occurs within a library, consisting of several types of entries: types of schemas, types of modules, types of entities, and types of activities. The search among types of schemas is a search for an abstraction of an analogous mechanism (on analogies and schemas, see, e.g., Holyoak and Thagard 1995). Kevin Dunbar (1995) has shown that molecular biologists often use “local analogies” to similar mechanisms in their own ﬁeld and “regional analogies” to mechanisms in other, neighboring ﬁelds. Such analogies are good sources from which to abstract mechanism schemas. Types of schemas, modules, entities and activities are interconnected. A particular type of schema, for example, a gene regulation schema, may suggest one or more types of modules, such as derepression or negative feedback modules. A type of entity will have activity-enabling properties that indicate it can produce a type of activity. Conversely, a type of activity will require particular types of entities. For example, nucleic acids have polar charged bases that enable them to engage in the activity of hydrogen bonding, a weak form of chemical bonding that can be easily formed and broken between polar molecules. Schemas may be indexed by the kind of phenomenon they produce. For example, for the phenomenon of producing an adaptation, two types of mechanisms have been proposed historically by biologists–selective mechanisms and instructive mechanisms (Darden, 1987). At a high degree of abstraction, a selection
schema may be characterized as follows: first comes a stage of variant production; next comes a stage with a selective interaction that poses a challenge to the variants; this is followed by differential benefit for some of the variants. In contrast, instructive mechanisms have a coupling between the stage of variant production and the selective environment, so that an instruction is sent from the environment and interpreted by the adaptive system to produce only the required variant. In evolutionary biology and immunology, selective mechanisms rather than instructive ones have been shown to work in producing evolutionary adaptations and clones of antibody cells (Darden and Cain 1989).

A library of modules can be indexed by the functional roles they can fulfill in a schema (e.g., Goel and Chandrasekaran 1989). For example, if a schema requires end-product inhibition, then a feedback control module can be added to the linear schema. If cell-to-cell signaling is indicated, then membrane-spanning proteins serving as receptors are a likely kind of module.

Entities and activities can be categorized in numerous ways. Types of macromolecules include nucleic acids, proteins, and carbohydrates. When proteins perform functions, for example as enzymes that catalyze reactions, the kind of function, such as phosphorylation, is a useful indexing method.

4.3 Discoverer: Generation, Evaluation, Revision

During generation, after a phenomenon is characterized, a search is made to see whether an entire schema can be found that produces that type of phenomenon. If an entire schema can be found, such as a selective or instructive schema, then generation can proceed to further specification with types of modules, entities, and activities. If no entire schema is available, then modules may be put together piecemeal to fulfill various functional roles. If functional roles and modules to fill them are not yet known, then reasoning about types of entities and activities is available. By starting from known set up conditions, or, conversely, from the end product, a hypothesized string of entities and activities can be constructed. Reasoning forward from the beginning or backward from the end product of the mechanism will allow gaps in the middle to be filled. In sum, reasoning strategies for discovering mechanisms include schema instantiation, modular subassembly, and forward chaining/backtracking (Darden, forthcoming).

Evaluation. Once one or more hypothesized mechanism schemas are found or constructed piecemeal, then evaluation occurs. Evaluation proceeds through stages from how possibly to how plausibly to how actually. (Peter Machamer suggests that "how actually" is best read as "how most plausibly," given that all scientific claims are contingent, that is, subject to revision in the light of new evidence.) How possibly a mechanism operates can be shown by building a simulator that begins with the set up conditions and produces the termination conditions by moving through hypothesized intermediate stages. As additional constraints are fulfilled and evaluation strategies applied, the proposed mechanism becomes

Table 2. Strategies for Theory Evaluation (from Darden 1991, p. 257)

1. Internally consistent and nontautologous
2. Systematicity vs. modularity
3. Clarity
4. Explanatory adequacy
5. Predictive adequacy
6. Scope and generality
7. Lack of ad hocness
8. Extendability and fruitfulness
9. Relations with other accepted theories
10. Metaphysical and methodological constraints
11. Relation to rivals

more plausible. The constraints of Table 1 must be satisﬁed. Table 2 (from Darden 1991, Table 15-2) lists strategies for theory assessment often employed by philosophers of science. A working simulator will likely show that the proposed schema is internally consistent and consists of modules whose functioning is clearly understood, thus satisfying some of the conditions listed in 1-3. If the simulator can be run to produce the phenomenon to be explained, then condition 4 of explanatory adequacy is at least partially fulﬁlled. Testing a prediction against data is often viewed as the most important evaluation strategy. The simulator can be run with diﬀerent initial conditions to produce predictions, which can be tested against data. If a prediction does not match a data point, then an anomaly results and revision is required. We will omit further discussion of the other strategies for theory assessment in order to turn our attention to anomaly resolution strategies to use when revision is required.
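The prediction-testing step just described can be sketched as a loop that runs the simulator under varying initial conditions and collects mismatches as anomalies. The simulator here is a stand-in function, and the `tolerance` parameter is an assumption of the sketch, not part of the system described above.

```python
# Hypothetical sketch of evaluation: run the simulator with different initial
# conditions, test each prediction against a data point, and report anomalies
# that will trigger revision.

def evaluate(simulator, trials, tolerance=1e-6):
    """Return (initial_conditions, prediction, observation) triples that fail.

    `trials` pairs each set of initial conditions with an observed data point.
    """
    anomalies = []
    for initial_conditions, observed in trials:
        predicted = simulator(initial_conditions)
        if abs(predicted - observed) > tolerance:
            anomalies.append((initial_conditions, predicted, observed))
    return anomalies

# A toy simulator that doubles its input, tested against three data points;
# the last observation does not match the prediction.
toy_simulator = lambda x: 2 * x
trials = [(1, 2), (2, 4), (3, 7)]
for conditions, predicted, observed in evaluate(toy_simulator, trials):
    print(f"anomaly: predicted {predicted}, observed {observed}")  # revision required
```

An empty result partially supports explanatory and predictive adequacy (conditions 4 and 5 of Table 2); any anomaly hands control to the anomaly-resolution tasks discussed next.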

Anomaly resolution. When a prediction does not match a data point, then an anomaly results. Strategies for anomaly resolution require a number of information processing tasks to be carried out. In previous work with John Josephson and Dale Moberg, we investigated computational implementation of such tasks (Moberg and Josephson 1990; Darden et al. 1992; Darden 1998). A list of such tasks is found in Figure 3. Reasoning in anomaly resolution is, ﬁrst, a diagnostic reasoning task, to localize the site(s) of failure, and, then, a redesign task, to improve the simulation to remove the problem. Characterizing the exact diﬀerence between the prediction and the data point is a ﬁrst step. Peter Karp (1990; 1993) discussed this step of anomaly resolution in his implementation of the MOLGEN system to resolve anomalies in a molecular biology simulator. One wants to milk the anomaly itself for all the information one can get about the nature of the failure. Often during diagnosis, the nature of the anomaly allows failures to be localized to one part of the system rather than others, sometimes to a speciﬁc site.

[Figure 3 is a flowchart: the simulator plus initial conditions yields a prediction by the simulator, which is compared with the observed result to detect an anomaly; the anomaly is characterized; fault site hypotheses are constructed, with additional information (e.g., from the research program), to provisionally localize blame; alternative redesign hypotheses are constructed, evaluated, and one (h*) chosen; a modified theory with h* is constructed and tested by resimulation, retrying localization or redesign as needed; the modified theory is further evaluated (e.g., relation to other theories?) and incorporated in the explanatory repertoire.]

Fig. 3. Information Processing Tasks in Anomaly Resolution (from Darden 1998, p. 69)

Once hypothesized localizations are found by doing credit assignment, then alternative redesign hypotheses for that module can be constructed. Once again, the library of modules, entities and activities can be consulted to ﬁnd plausible candidates. The newly redesigned simulator can be run again to see if the problem is ﬁxed and the prediction now matches the data point.
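The redesign-and-resimulate cycle can be sketched as follows, assuming blame has already been provisionally localized to one stage. The module names, the library structure, and the function signature are all hypothetical illustrations, not part of any implemented system.

```python
# Hypothetical sketch of anomaly resolution: at the localized fault site, try
# alternative modules drawn from the library, resimulate, and check whether
# the prediction now matches the data point.

def resolve_anomaly(stages, library, setup, observed, suspect_index):
    """Try each candidate module at the localized fault site; return a fixed
    stage list if resimulation matches the observation, else None."""
    for candidate in library.get(suspect_index, []):
        redesigned = list(stages)
        redesigned[suspect_index] = candidate  # a redesign hypothesis
        prediction = setup
        for stage in redesigned:               # resimulate the whole mechanism
            prediction = stage(prediction)
        if prediction == observed:
            return redesigned
    return None  # no candidate worked: retry localization instead

# Toy modules standing in for library entries:
double = lambda x: 2 * x
triple = lambda x: 3 * x
add_one = lambda x: x + 1

stages = [double, double]         # predicts 4 from setup 1, but 6 is observed
library = {1: [triple, add_one]}  # alternative modules for the suspect stage
fixed = resolve_anomaly(stages, library, setup=1, observed=6, suspect_index=1)
print(fixed is not None)          # True: substituting triple at stage 1 yields 6
```

This mirrors the diagnose-then-redesign division of labor: localization picks `suspect_index`, and redesign searches the library of candidate modules for that site.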

5 Piecemeal Discovery and Hierarchical Integration

The view of scientiﬁc discovery proposed here is that discovery of mechanisms occurs in extended episodes of cycles of generation, evaluation, and revision. In so far as the constraints are satisﬁed, the assessment strategies are applied, and any anomalies are resolved, then the hypothesized mechanism will have moved through the stages of how possibly to how plausibly to how actually. A new mechanism will have been discovered. Once a new mechanism at a given mechanism level has been discovered, then that mechanism needs to be situated within the context of other biological mechanisms. Thus, the general strategy for theory evaluation of consistent relations with other accepted theories in other ﬁelds of science (see Table 2, strategy 9) is reinterpreted. By thinking about theories as mechanism schemas, the strategy gets implemented by situating the hypothesized mechanism in a larger context. This larger context consists of mechanisms that occur before and after it, as well as mechanisms up or down in a mechanism hierarchy (Craver 2001). Biological mechanisms are nested within other mechanisms, and ﬁnding such a ﬁt in an integrated picture is another measure of the adequacy of a newly proposed mechanism.

6 Conclusion

Integrated mechanism schemas can serve as the scaffolding of the biological matrix. They provide a framework to integrate general biological knowledge of mechanisms, the data that provide evidence for such mechanisms, and the reports in the literature of research to discover mechanisms. This paper has discussed a new characterization of mechanism, based on an ontology of entities, properties, and activities, and has outlined components of a computational system for discovering mechanisms. Discovery is viewed as an extended process, requiring reasoning strategies for generation, evaluation, and revision of hypothesized mechanism schemas. Discovery moves through stages from how possibly to how plausibly to how actually a mechanism works.

Acknowledgments. This work was supported by the National Science Foundation under grant number SBR-9817942. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect those of the National Science Foundation. Many of the ideas in this paper were worked out in collaboration with Carl Craver and Peter Machamer.

References

1. Beatty, John (1995), "The Evolutionary Contingency Thesis," in James G. Lennox and Gereon Wolters (eds.), Concepts, Theories, and Rationality in the Biological Sciences. Pittsburgh, PA: University of Pittsburgh Press, pp. 45-81.
2. Bechtel, William and Robert C. Richardson (1993), Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research. Princeton, NJ: Princeton University Press.
3. Craver, Carl (2001), "Role Functions, Mechanisms, and Hierarchy," Philosophy of Science 68: 53-74.
4. Craver, Carl and Lindley Darden (2001), "Discovering Mechanisms in Neurobiology: The Case of Spatial Memory," in Peter Machamer, R. Grush, and P. McLaughlin (eds.), Theory and Method in the Neurosciences. Pittsburgh, PA: University of Pittsburgh Press, pp. 112-137.
5. Darden, Lindley (1987), "Viewing the History of Science as Compiled Hindsight," AI Magazine 8(2): 33-41.
6. Darden, Lindley (1990), "Diagnosing and Fixing Faults in Theories," in J. Shrager and P. Langley (eds.), Computational Models of Scientific Discovery and Theory Formation. San Mateo, CA: Morgan Kaufmann, pp. 319-346.
7. Darden, Lindley (1991), Theory Change in Science: Strategies from Mendelian Genetics. New York: Oxford University Press.
8. Darden, Lindley (1998), "Anomaly-Driven Theory Redesign: Computational Philosophy of Science Experiments," in Terrell W. Bynum and James Moor (eds.), The Digital Phoenix: How Computers are Changing Philosophy. Oxford: Blackwell, pp. 62-78. Available: www.inform.umd.edu/PHIL/faculty/LDarden/Research/pubs/
9. Darden, Lindley (forthcoming), "Strategies for Discovering Mechanisms: Schema Instantiation, Modular Subassembly, Forward Chaining/Backtracking," presented at PSA 2000, Vancouver. Preprint available: www.inform.umd.edu/PHIL/faculty/LDarden/Research/pubs
10. Darden, Lindley and Joseph A. Cain (1989), "Selection Type Theories," Philosophy of Science 56: 106-129. Available: www.inform.umd.edu/PHIL/faculty/LDarden/Research/pubs/
11. Darden, Lindley and Carl Craver (in press), "Strategies in the Interfield Discovery of the Mechanism of Protein Synthesis," Studies in History and Philosophy of Biological and Biomedical Sciences.
12. Darden, Lindley, Dale Moberg, Sunil Thadani, and John Josephson (July 1992), "A Computational Approach to Scientific Theory Revision: The TRANSGENE Experiments," Technical Report 92-LD-TRANSGENE, Laboratory for Artificial Intelligence Research, The Ohio State University, Columbus, Ohio, USA.
13. Dunbar, Kevin (1995), "How Scientists Really Reason: Scientific Reasoning in Real-World Laboratories," in R. J. Sternberg and J. E. Davidson (eds.), The Nature of Insight. Cambridge, MA: MIT Press, pp. 365-395.
14. Glennan, Stuart S. (1996), "Mechanisms and the Nature of Causation," Erkenntnis 44: 49-71.
15. Goel, Ashok and B. Chandrasekaran (1989), "Functional Representation of Designs and Redesign Problem Solving," in Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, MI, August 1989, pp. 1388-1394.
16. Holyoak, Keith J. and Paul Thagard (1995), Mental Leaps: Analogy in Creative Thought. Cambridge, MA: MIT Press.
17. Karp, Peter (1990), "Hypothesis Formation as Design," in J. Shrager and P. Langley (eds.), Computational Models of Scientific Discovery and Theory Formation. San Mateo, CA: Morgan Kaufmann, pp. 275-317.
18. Karp, Peter (1993), "A Qualitative Biochemistry and its Application to the Regulation of the Tryptophan Operon," in L. Hunter (ed.), Artificial Intelligence and Molecular Biology. Cambridge, MA: AAAI Press and MIT Press, pp. 289-324.
19. Karp, Peter D. (2000), "An Ontology for Biological Function Based on Molecular Interactions," Bioinformatics 16: 269-285.
20. Machamer, Peter, Lindley Darden, and Carl Craver (2000), "Thinking About Mechanisms," Philosophy of Science 67: 1-25.
21. Moberg, Dale and John Josephson (1990), "Diagnosing and Fixing Faults in Theories, Appendix A: An Implementation Note," in J. Shrager and P. Langley (eds.), Computational Models of Scientific Discovery and Theory Formation. San Mateo, CA: Morgan Kaufmann, pp. 347-353.
22. Morowitz, Harold (1985), "Models for Biomedical Research: A New Perspective," Report of the Committee on Models for Biomedical Research. Washington, D.C.: National Academy Press.
23. Morowitz, Harold and Temple Smith (1987), "Report of the Matrix of Biological Knowledge Workshop, July 13-August 14, 1987," Santa Fe, NM: Santa Fe Institute.
24. Piatetsky-Shapiro, Gregory and William J. Frawley (eds.) (1991), Knowledge Discovery in Databases. Cambridge, MA: MIT Press.
25. Simon, Herbert A. (1977), Models of Discovery. Dordrecht: Reidel.
26. Swanson, Don R. (1990), "Medical Literature as a Potential Source of New Knowledge," Bull. Med. Libr. Assoc. 78: 29-37.

The Discovery Science Project in Japan

Setsuo Arikawa
Department of Informatics, Kyushu University
Fukuoka 812-8581, Japan
arikawa@i.kyushu-u.ac.jp

Abstract. The Discovery Science project in Japan, in which more than sixty scientists participated, was a three-year project sponsored by a Grant-in-Aid for Scientific Research on Priority Areas from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan. The project mainly aimed to (1) develop new methods for knowledge discovery, (2) install network environments for knowledge discovery, and (3) establish Discovery Science as a new area of Computer Science / Artificial Intelligence study. To attain these aims we set up five groups for studying the following research areas:

(A) Logic for/of Knowledge Discovery
(B) Knowledge Discovery by Inference/Reasoning
(C) Knowledge Discovery Based on Computational Learning Theory
(D) Knowledge Discovery in Huge Databases and Data Mining
(E) Knowledge Discovery in Network Environments

These research areas and related topics can be regarded as a preliminary definition of Discovery Science by enumeration. Thus Discovery Science ranges over philosophy, logic, reasoning, computational learning, and system development. In addition to these five research groups we organized a steering group for planning, adjustment, and evaluation of the project. The steering group, chaired by the principal investigator of the project, consisted of the leaders of the five research groups and their subgroups, as well as advisors from outside the project. We also invited three scientists to consider Discovery Science across the above five research areas from the viewpoints of knowledge science, natural language processing, and image processing, respectively.

Group A studied discovery from a very broad perspective, taking into account historical and social aspects of discovery as well as computational and logical aspects. Group B focused on the role of inference/reasoning in knowledge discovery, and obtained many results, both theoretical and practical, on statistical abduction, inductive logic programming, and inductive inference. Group C aimed to propose and develop computational models and methodologies for knowledge discovery mainly based on computational learning theory. This group obtained some deep theoretical results on boosting of learning algorithms and the minimax strategy for Gaussian density estimation, as well as methodologies specialized to concrete problems, such as an algorithm for finding best subsequence patterns, a biological sequence compression algorithm, text categorization, and MDL-based compression. Group D aimed to create a computational strategy for speeding up the discovery process as a whole. For this purpose, group D was organized with researchers working in scientific domains together with researchers from computer science, so that real issues in the discovery process could be exposed and practical computational techniques could be devised and tested for solving them. This group handled many kinds of data: data from national projects such as genomic data and satellite observations, data generated from laboratory experiments, data collected from personal interests such as literature and medical records, data collected in business and marketing areas, and data for proving the efficiency of algorithms, such as the UCI repository. Many theoretical and practical results were obtained on this variety of data. Group E aimed to develop a unified media system for knowledge discovery and network agents for knowledge discovery. This group obtained practical results on a new virtual materialization of DB records and scientific computations that helps scientists to make discoveries, a convenient visualization interface for web data, and an efficient algorithm that extracts important information from semi-structured data in the web space.

This lecture describes an outline of our project and its main results, as well as how the project was prepared. We have published and are publishing special issues on our project in several journals [5],[6],[7],[8],[9],[10]. As an activity of the project we organized and sponsored the Discovery Science conference for three years, where many papers were presented by our members [2],[3],[4]. We also published annual progress reports [1], which were distributed at the DS conferences. We are publishing the final technical report as an LNAI volume [11].

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 1-2, 2001.
© Springer-Verlag Berlin Heidelberg 2001

References

1. S. Arikawa, M. Sato, T. Sato, A. Maruoka, S. Miyano, and Y. Kanada. Discovery Science Progress Report No.1 (1998), No.2 (1999), No.3 (2000). Department of Informatics, Kyushu University.
2. S. Arikawa and H. Motoda. Discovery Science. LNAI 1532, Springer, 1998.
3. S. Arikawa and K. Furukawa. Discovery Science. LNAI 1721, Springer, 1999.
4. S. Arikawa and S. Morishita. Discovery Science. LNAI 1967, Springer, 2000.
5. H. Motoda and S. Arikawa (Eds.) Special Feature on Discovery Science. New Generation Computing, 18(1): 13-86, 2000.
6. S. Miyano (Ed.) Special Issue on Surveys on Discovery Science. IEICE Transactions on Information and Systems, E83-D(1): 1-70, 2000.
7. H. Motoda (Ed.) Special Issue on Discovery Science. Journal of Japanese Society for Artificial Intelligence, 15(4): 592-702, 2000.
8. S. Morishita and S. Miyano (Eds.) Discovery Science and Data Mining (in Japanese). bit special volume, Kyoritsu Shuppan, 2000.
9. S. Arikawa, M. Sato, T. Sato, A. Maruoka, S. Miyano, and Y. Kanada. The Discovery Science Project. Journal of Japanese Society for Artificial Intelligence, 15(4): 595-607, 2000.
10. S. Arikawa, H. Motoda, K. Furukawa, and S. Morishita (Eds.) Theoretical Aspects of Discovery Science. Theoretical Computer Science (to appear).
11. S. Arikawa and A. Shinohara (Eds.) Progresses in Discovery Science. LNAI, Springer (2001, to appear).

Discovering Repetitive Expressions and Affinities from Anthologies of Classical Japanese Poems

Koichiro Yamamoto¹, Masayuki Takeda¹,², Ayumi Shinohara¹, Tomoko Fukuda³, and Ichirō Nanri³

¹ Department of Informatics, Kyushu University 33, Fukuoka 812-8581, Japan
² PRESTO, Japan Science and Technology Corporation (JST)
³ Junshin Women's Junior College, Fukuoka 815-0036, Japan
{k-yama, takeda, ayumi}@i.kyushu-u.ac.jp
{tomoko-f@muc, nanri-i@msj}.biglobe.ne.jp

Abstract. The class of pattern languages was introduced by Angluin (1980), and many studies have been undertaken on it from the theoretical viewpoint of learnability. However, there have been few practical studies, except for the one by Shinohara (1982), in which patterns are restricted so that every variable occurs at most once. In this paper, we distinguish repetitive variables from those occurring only once within a pattern, and focus on the number of occurrences of a repetitive variable and the length of the strings it matches, in order to model the rhetorical device based on repetition of words in classical Japanese poems. Preliminary results suggest that this will lead to a characterization of individual anthologies, which has never been achieved until now.

1   Introduction

Recently, we have tackled several problems in analyzing classical Japanese poems, Waka. In [12], we successfully discovered from Waka poems characteristic patterns, named Fushi, which are read-once patterns whose constant parts are restricted to sequences of auxiliary verbs and postpositional particles. In [10], we addressed the problem of semi-automatically finding similar poems, and discovered unheeded instances of Honkadori (poetic allusion), an important rhetorical device in Waka poems based on specific allusion to earlier famous poems. In [11], by contrast, we succeeded in discovering expressions that highlight differences between two anthologies by two closely related poets (e.g., a master poet and disciples).

In the present paper, we focus on repetition. Repetition is the basis for many poetic forms, and its use can heighten the emotional impact of a piece. This device, however, has received little attention in the case of Waka poetry. One of the main reasons might be that a Waka poem takes the form of a short poem: it consists of only five lines and thirty-one syllables, arranged 5-7-5-7-7, and therefore the use of repetition is often considered to waste words (letters) under this tight limitation. In fact, some poets/scholars in earlier times taught their disciples never to repeat a word in a Waka poem; they considered word repetition a 'disease' to be avoided. This device, however, gives a remarkable effect if skillfully used, even in Waka poetry. The following poem, composed by the priest Egyō (who lived in the latter half of the 10th century), is a good example of repetition, where the two words 'nawo' and 'kiku' are each used twice¹.

Ha-shi-no-na-wo/na-wo-u-ta-ta-ne-to/ki-ku-hi-to-no/
ki-ku-ha-ma-ko-to-ka/u-tsu-tsu-na-ga-ra-ni (Egyō-Shū #195)

K.P. Jantke and A. Shinohara (Eds.): DS 2001, LNAI 2226, pp. 416-428, 2001.
© Springer-Verlag Berlin Heidelberg 2001

Since there have been few studies on this poetic device in the long research history of Waka poetry, it is necessary to develop a method of automatically extracting (candidates for) instances of the repetition from a database. To retrieve instances of repetition like the above, we consider the pattern matching problem for patterns such as ⋆x⋆x⋆y⋆y⋆, where ⋆ is the variable-length don't care (VLDC), a wildcard that matches any string, and x, y are variables that match any non-empty strings. Recall the pattern languages proposed by Angluin [2]. A pattern is a string in Π = (Σ ∪ V)+, where V is an infinite set {x1, x2, ...} of variables and Σ ∩ V = ∅. For example, ax1bx2x1 is a pattern, where a, b ∈ Σ. The language of a pattern π is the set of strings obtained by replacing the variables in π by non-empty strings. For example, L(ax1bx2x1) = {aubvu | u, v ∈ Σ+}. Although the membership problem is NP-complete for the class of Angluin patterns, as shown in [2], it becomes polynomial-time solvable when the number of variables occurring within π is bounded by a fixed number k. Several subclasses have been investigated from the viewpoint of polynomial-time learnability. For example, the classes of read-once patterns (every variable occurs at most once) and one-variable patterns (only one variable is contained) are known to be polynomial-time learnable [2]. In the present paper, we study subclasses from the viewpoints of pattern matching and similarity computation. It should be mentioned that the class of regular expressions with back referencing [1] can be considered a superclass of the Angluin patterns. Membership for this class is also known to be NP-complete.

On the other hand, we attempted in [10] to semi-automatically discover similar poems from an accumulation of about 450,000 Waka poems in machine-readable form. As mentioned above, one of the aims was to discover unheeded instances of Honkadori.
The method is simple: arrange all possible pairs of poems in decreasing order of their similarities, and then scrutinize the first part of the list from a scholarly viewpoint. The key to success in this approach is how to develop an appropriate similarity measure. Traditionally, the scheme of weighted edit distance with a weight matrix may have been used to quantify affinities between strings. This scheme, however, requires a fine tuning of quadratically many weights (in the alphabet size) in a matrix, by hand-coding or a heuristic criterion. As an alternative, we introduced a new framework called string resemblance systems (SRSs

¹ We inserted the hyphens '-' between syllables, each of which was written as one Kana character, although romanized here. One can see that every syllable consists of either a single vowel or a consonant and a vowel. Thus there can be no consonant clusters, and every syllable ends in one of the five vowels a, i, u, e, o.


for short) [10]. In this framework, the similarity of two strings is evaluated via a pattern that matches both of them, supported by an appropriate function that assigns a quantity of resemblance to candidate patterns. This scheme bridges a gap between optimal pattern discovery (see, e.g., [5]) and similarity computation. An SRS is specified by (1) a pattern set to which common patterns belong, and (2) a pattern score function that maps each pattern in the set to a quantity of resemblance. For example, if we choose the set of patterns with VLDCs and define the score of a pattern to be the number of symbols in it, then the obtained measure is the length of the longest common subsequence (LCS) of two strings. In fact, the strings acdeba and abdac have a common pattern ⋆a⋆d⋆a⋆, which contains three symbols. With this framework one can easily design and modify his/her measures. In fact we designed some measures as combinations of pattern set and pattern score function along with the framework, and reported successful results in discovering unnoticed instances of Honkadori [10]. The discovered affinities raised interesting issues for Waka studies, to which we could give convincing conclusions:

1. We have proved that one of the most important poems by Fujiwara-no-Kanesuke, one of the renowned thirty-six poets, was in fact based on a model poem found in Kokin-Shū. The same poem had been interpreted just to show "frank utterance of parents' care for their child." Our study revealed the poet's techniques in composition, half hidden by the heart-warming feature of the poem, by extracting the same structure from the two poems².
2. We have compared Tametada-Shū, the mysterious anthology unidentified in Japanese literary history, with a number of private anthologies edited after the middle of the Kamakura period (the 13th century) using the same method, and found that there are about 10 pairs of similar poems between Tametada-Shū and Sōkon-Shū, an anthology by Shōtetsu. The result suggests that the mysterious anthology was edited by a poet in the early Muromachi period (the 15th century). There had been dispute about the editing date since one scholar suggested the middle of the Kamakura period as a probable date. We now have strong evidence about this problem.

In this paper, we focus on the class of Angluin patterns and its subclasses, and discuss the problems of pattern matching, similarity computation, and pattern discovery. It should be emphasized that although many studies have been undertaken on the class of Angluin patterns and its subclasses, most of them have been done from the theoretical viewpoint of learnability. The only exception is due to Shinohara [9]. He mentioned practical applications, but they are limited to the subclass called the read-once patterns (referred to as regular patterns in [9]). We show in this paper the first practical application of Angluin

² Asahi, one of Japan's leading newspapers, made a front-page report of this discovery (26 May 2001).


patterns that are not limited to the read-once patterns. As our framework quantifies similarities between strings by weighting patterns common to the strings, we modify the definition of patterns as follows:

– Substitute a gap symbol ⋆ for every variable occurring only once in a pattern.
– Associate each variable x with an integer µ(x) so that the variable x matches a string w only if the length of w is at least µ(x). (In the original setting in [2], µ(x) = 1 for every variable x.)

Since we are interested only in repetitive strings in a Waka poem, there is no need to name non-repetitive strings; it suffices to use gap symbols instead of variables for representing them. Thus, the first item is rather for the sake of simplification. The second item, by contrast, is an essential augmentation by which the score of a pattern π can be made sensitive to the values of µ(x) for the variables x in π. In fact, we are strongly interested in the length of the repeated string when analyzing repetitive expressions in Waka poems.

Fig. 1 shows an instance of Honkadori we discovered in [10]. The two poems have several common expressions, such as "na-ka-ra-he-te" and "to-shi-so-he-ni-keru." One can notice that both poems use repetition of words: the Kokin-Shū poem and the Shin-Kokin-Shū poem repeat "nakara" (stem of the verb "nagarafu"; the name of a bridge) and "matsu" (wait; pine tree), respectively. This strengthens the affinities based on the existence of common substrings.

Poem alluded to. (Kokin-Shū #826) Sakanoue-no-Korenori.
a-fu-ko-to-wo              Without seeing you,
na-ka-ra-no-ha-shi-no      I have lived on
na-ka-ra-he-te             Adoring you ever
ko-hi-wa-ta-ru-ma-ni       Like the ancient bridge of Nagara
to-shi-so-he-ni-ke-ru      And many years have passed on.

Allusive-variation. (Shin-Kokin-Shū #1636) Nijoin Sanuki.
na-ka-ra-he-te             Like the ancient pine tree of longevity
na-ho-ki-mi-ka-yo-wo       On the mount of expectation called "Matsuyama,"
ma-tsu-ya-ma-no            I have lived on
ma-tsu-to-se-shi-ma-ni     Expecting your everlasting reign
to-shi-so-he-ni-ke-ru      And many years have passed on.

Fig. 1. Discovered instance of poetic allusion.

It may be relevant to mention that this work is a multidisciplinary study between literature and computer science. In fact, the second-to-last author is a Waka researcher and the last author is a linguist in the Japanese language.

2   A Uniform Framework for String Similarity

This section briefly sketches the framework of string resemblance systems according to [10]. Gusfield [6] pointed out that in dealing with string similarity the language of alignments is often more convenient than the language of edit operations. Our framework is a generalization of the alignment-based scheme and is based on the notion of common patterns. Before describing our scheme, we need to introduce some notation. The set of strings over an alphabet Σ is denoted by Σ∗. The length of a string u is denoted by |u|. The string of length 0 is called the empty string and is denoted by ε. Let Σ+ = Σ∗ − {ε}. Let us denote by R the set of real numbers.

A pattern system is a triple ⟨Σ, Π, L⟩ of a finite alphabet Σ, a set Π of descriptions called patterns, and a function L that maps a pattern in Π to a subset of Σ∗. L(π) is called the language of a pattern π ∈ Π. A pattern π ∈ Π matches a string w ∈ Σ∗ if w belongs to L(π). A pattern π in Π is a common pattern of strings w1 and w2 in Σ∗ if π matches both of them.

Definition 1. A string resemblance system (SRS) is a 4-tuple ⟨Σ, Π, L, score⟩, where ⟨Σ, Π, L⟩ is a pattern system and score is a pattern score function that maps a pattern in Π to a real number. The similarity SIM(x, y) between strings x and y with respect to ⟨Σ, Π, L, score⟩

is defined by SIM(x, y) = max{ score(π) | π ∈ Π and x, y ∈ L(π) }. When the set { score(π) | π ∈ Π and x, y ∈ L(π) } is empty or the maximum does not exist, SIM(x, y) is undefined.

The above definition regards similarity computation as optimal pattern discovery. Our framework thus bridges a gap between similarity computation and pattern discovery. In [10], we defined the homomorphic SRSs and showed that this class covers most of the known similarity (dissimilarity) measures, such as the edit distance, the weighted edit distance, the Hamming distance, and the LCS measure. We also extended this class in [10] to the semi-homomorphic SRSs, and the similarity measures we developed in [8] for musical sequence comparison fall into this class. We can handle a variety of string (dis)similarities by changing the pattern system and the pattern score function. The pattern systems appearing in the above examples are, however, restricted to homomorphic ones. Here, we shall mention SRSs with non-homomorphic pattern systems.

An order-free pattern (or fragmentary pattern) is a multiset {u1, ..., uk} such that k > 0 and u1, ..., uk ∈ Σ+, and is denoted by π[u1, ..., uk]. The language of the pattern π[u1, ..., uk] is the set of strings that contain the strings u1, ..., uk without overlaps. The membership problem of the order-free patterns is NP-complete [7], and the similarity computation is NP-hard in general, as shown in [7]. However, the membership problem is polynomial-time solvable when k is fixed. The class of order-free patterns plays an important role in finding similar poems in anthologies of Waka poems [10]. The pattern languages introduced by Angluin [2] are also interesting for our framework.

Definition 2 (Angluin pattern system). The Angluin pattern system is a pattern system ⟨Σ, (Σ ∪ V)+, L⟩, where V is an infinite set {x1, x2, ...} of variables with Σ ∩ V = ∅, and L(π) is the set of strings π · θ such that θ is a homomorphism from (Σ ∪ V)+ to Σ+ with c · θ = c for every c ∈ Σ.
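As an illustrative aside (our sketch, not code from the paper), consider the LCS instance of this framework mentioned in Sect. 1: patterns made of constant symbols separated by VLDCs, scored by their number of constants. SIM then equals the LCS length, which the standard dynamic program computes:

```python
def lcs_length(a, b):
    """SIM(a, b) for the SRS whose patterns are constants separated by
    VLDCs and whose score is the number of constant symbols: this equals
    the length of the longest common subsequence of a and b."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1   # extend a common pattern
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]
```

For the strings acdeba and abdac from Sect. 1 this yields 3, the score of the common pattern ⋆a⋆d⋆a⋆.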

Discovering Repetitive Expressions

421

In this paper we discuss SRSs with the Angluin pattern system.
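To make membership in an Angluin pattern language concrete before turning to its complexity, here is a naive backtracking membership test (an illustrative sketch of ours; the token representation and function names are assumptions, not the paper's):

```python
def is_var(tok):
    """Tokens like 'x1', 'x2' are variables; single symbols are constants."""
    return len(tok) > 1 and tok.startswith("x")

def matches(pattern, s):
    """Backtracking membership test: is s in L(pattern)?
    pattern is a token list, e.g. ['a', 'x1', 'b', 'x2', 'x1'].
    Each variable is replaced by a non-empty string, consistently."""
    def go(pi, si, env):
        if pi == len(pattern):                # pattern exhausted: s must be too
            return si == len(s)
        tok = pattern[pi]
        if not is_var(tok):                   # constant: must match literally
            return si < len(s) and s[si] == tok and go(pi + 1, si + 1, env)
        if tok in env:                        # bound variable: value must repeat
            u = env[tok]
            return s.startswith(u, si) and go(pi + 1, si + len(u), env)
        for k in range(si + 1, len(s) + 1):   # try every non-empty binding
            env[tok] = s[si:k]
            if go(pi + 1, k, env):
                return True
            del env[tok]
        return False
    return go(0, 0, {})
```

For example, matches(['a', 'x1', 'b', 'x2', 'x1'], 'acbdc') holds, since acbdc = a·u·b·v·u with u = c, v = d. The exponential worst case of this brute-force search reflects the NP-completeness of the membership problem discussed next.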

3   Computational Complexity

Definition 3 (Membership Problem for a pattern system ⟨Σ, Π, L⟩). Given a pattern π ∈ Π and a string w ∈ Σ∗, determine whether or not w ∈ L(π).

Theorem 1 ([2]). The Membership Problem for the Angluin pattern system is NP-complete.

Definition 4 (Similarity Computation with respect to an SRS ⟨Σ, Π, L, score⟩). Given two strings w1, w2 ∈ Σ∗, find a pattern π ∈ Π with {w1, w2} ⊆ L(π) that maximizes score(π).

Theorem 2. For an SRS with the Angluin pattern system, Similarity Computation is NP-hard in general.

Proof. We consider the following problem, which is a decision version of a special case of Similarity Computation with w1 = w2, and show its NP-completeness.

Optimal Pattern with respect to an SRS ⟨Σ, Π, L, score⟩: Given a string w ∈ Σ∗ and an integer k, determine whether or not there is a pattern π ∈ Π such that w ∈ L(π) and score(π) ≥ k.

We give a reduction from the Membership Problem for the Angluin pattern system ⟨Σ, Π, L⟩ to Optimal Pattern with respect to an SRS with the Angluin pattern system ⟨Σ′, Π′, L′, score′⟩, for a specific score function score′ defined as follows. Let Σ′ = Σ ∪ {#} with # ∉ Σ. We take a one-to-one encoding ⟨·⟩ from Π = (Σ ∪ V)+ to Σ∗ that is log-space computable with respect to |π|. We define the score function score′ : Π′ → R by score′(π′) = 1 if π′ is of the form π′ = π#⟨π⟩ for some π ∈ Π = (Σ ∪ V)+, and score′(π′) = 0 otherwise. For a given instance π ∈ Π and w ∈ Σ∗ of the Membership Problem for the Angluin pattern system, consider w′ = w#⟨π⟩ and k = 1 as an input to Optimal Pattern. Then there is a pattern π′ ∈ Π′ with w′ ∈ L′(π′) and score′(π′) = 1 if and only if w ∈ L(π), since w′ ∈ L′(π′) with score′(π′) = 1 if and only if π′ = π#⟨π⟩ and w ∈ L(π). This completes the proof.

4   Practical Aspects

Recall that similarities between strings are quantiﬁed by weighting patterns common to them in our framework. For a ﬁner weighting, we augment the descriptive power of Angluin patterns by putting a restriction on the length of a string matched by each variable. Namely, we associate each variable x with an integer µ(x) such that the variable x matches a string w only if µ(x) ≤ |w|. For example, suppose that π1 = z1 xz2 xz3 and π2 = z1 yz2 yz3 , where µ(x) = 2, µ(y) = 3, and µ(z1 ) = µ(z2 ) = µ(z3 ) = 0. Then, π1 is common to the strings bcaaabbaac and acabbaabbbb, but π2 is not. This enables us to deﬁne a score function so that it is sensitive to the lengths of strings substituted for variables.
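For patterns of the shape z1 x z2 x z3 as above (with µ(zi) = 0), checking whether the pattern matches a string reduces to finding a repeated substring of length µ(x). A brute-force sketch (ours, for illustration only; the paper's filtering technique below is far more efficient):

```python
def has_repeat(s, min_len):
    """Does z1 x z2 x z3 with µ(x) = min_len (and µ(zi) = 0, so the zi may
    be empty) match s?  Equivalently: does some substring of length min_len
    occur twice in s without overlap?  Checking exactly this length suffices,
    because any longer non-overlapping repeat contains one of this length."""
    for i in range(len(s) - min_len + 1):
        u = s[i:i + min_len]
        if s.find(u, i + min_len) != -1:   # second, non-overlapping occurrence
            return True
    return False
```

On the example above: has_repeat("bcaaabbaac", 2) and has_repeat("acabbaabbbb", 2) both hold (so π1 is common to the two strings), while has_repeat("bcaaabbaac", 3) fails, so π2 is not.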


On the other hand, as we have seen in the last section, similarity computation, as well as the membership problem, is intractable in general for the Angluin pattern system. From a practical point of view, it is valuable to consider subclasses of the pattern system that are tractable. Let occx(π) denote the number of occurrences of a variable x within a pattern π ∈ (Σ ∪ V)+. For example, occx(abxcyxbz) = 2. A variable x is said to be repetitive w.r.t. π if occx(π) > 1. A pattern π is said to be read-once if π contains no repetitive variables. Historically, read-once patterns are called regular patterns because the induced languages are regular [9]. The membership problem of the read-once patterns is solvable in linear time. A k-repetitive-variable pattern is a pattern that has at most k repetitive variables. It is not difficult to see that:

Theorem 3. The membership problem of the k-repetitive-variable patterns can be solved in O(n^{2k+1}) time for input of size n.

That is, non-repetitive variables do not matter. Moreover, we are interested only in repeated strings in text strings. For these reasons, we substitute ⋆ for each of the non-repetitive variables in a pattern. Patterns are then strings over Σ ∪ V ∪ {⋆}, in which every variable is repetitive. For example, the above pattern abxcyxbz is written as abxc⋆xb⋆.

Despite the polynomial-time computability, the membership problem of the k-repetitive-variable patterns still takes considerable time to solve, so the similarity computation is very slow in practice. For this reason, we restrict ourselves in this paper to the case of k = 1, namely, the one-repetitive-variable patterns. In order to efficiently solve the membership problem and the similarity computation for this class, we utilize a kind of filtering technique. For example, when the pattern a⋆xxb⋆cx matches a string w, the candidate strings to substitute for x must occur at least three times in w without overlaps. We obtain such substring statistics on a given string w by exploiting data structures such as the minimal augmented suffix trees developed by Apostolico and Preparata [3,4]. The suffix tree [6] for a string w is a tree structure that represents all suffixes of w as paths from the root to leaves, such that every internal node has at least two children. Suffix trees are useful for various string processing tasks [6]. Each node v corresponds to a substring ṽ of w. With each internal node v we associate the number of leaves of the subtree rooted at v; this equals the number of (possibly overlapping) occurrences of ṽ in w (see Fig. 2 (a)). The minimal augmented suffix tree is an augmented version of the suffix tree in which additional nodes are introduced to count non-overlapping occurrences (see Fig. 2 (b)).
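For a single substring, the two occurrence counts can be illustrated by a simple scan (a sketch we add here; the minimal augmented suffix tree stores these counts for all substrings of w simultaneously, which this naive loop does not):

```python
def count_occurrences(s, u, overlap=True):
    """Count occurrences of u in s.  With overlap=False we scan greedily
    from the left, which yields the maximum number of non-overlapping
    occurrences, since all occurrences of u have equal length."""
    count, start = 0, 0
    while True:
        pos = s.find(u, start)
        if pos == -1:
            return count
        count += 1
        start = pos + (1 if overlap else len(u))
```

For instance, "aa" occurs three times in "aaaa" but only twice without overlapping.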

5   Application to Waka Data

In this section, we present and discuss the results of our experiments, carried out on the Eight Imperial Anthologies, the first eight of the imperial anthologies compiled by imperial command, listed in Table 1.


Fig. 2. (a) Suffix tree and (b) minimal augmented suffix tree for the string ababaababa$. The number associated with each internal node denotes the number of occurrences of the corresponding string, where an occurrence means a possibly overlapping occurrence in (a) and a non-overlapping occurrence in (b). For example, the string aba occurs four times in the string ababaababa, but it appears only twice without overlapping.

Table 1. Eight Imperial Anthologies.

no.    anthology        compilation   # poems
I      Kokin-Shū        905           1,111
II     Gosen-Shū        955–958       1,425
III    Shūi-Shū         1005–1006     1,360
IV     Go-Shūi-Shū      1087          1,229
V      Kinyō-Shū        1127          717
VI     Shika-Shū        1151          420
VII    Senzai-Shū       1188          1,290
VIII   Shin-Kokin-Shū   1216          2,005

5.1   Similarity Computation

For success in discovery, we want to put an appropriate restriction on the pattern system and on the pattern score function by using some domain knowledge. However, as stated before, there are few studies on repetition of words in Waka poems, and therefore we do not know in advance what kind of restriction is effective. We take a stepwise-refinement approach: we start with a very simple pattern system and score function, and then improve them based on an analysis of the obtained results. Here we restrict ourselves to one-repetitive-variable patterns. Moreover, we use a simple pattern score function that is not sensitive to the characters or VLDCs in the patterns; the score of axxbcx is identical to that of xxx, for example. Despite this simplification, we wish to pay attention to how long the strings that match the variable x are. Thus, a one-repetitive-variable pattern π is essentially expressed by two integers: occx(π) and µ(x). We assume that the score function is non-decreasing with respect to occx(π) and to µ(x).

We compared the anthology Kokin-Shū with the two anthologies Gosen-Shū and Shin-Kokin-Shū. The score function we used is defined by score(π) = occx(π) · µ(x). The frequency distributions are shown in Table 2.

Table 2. Frequency distribution of similarity values in the comparison of Kokin-Shū with Gosen-Shū and Shin-Kokin-Shū. Note that similarity values cannot be 1, 2, 3, 5, 7 because of the definition of the pattern score function. The frequencies for any similarity values not present here are all 0.

                      0          4        6     8   10
Gosen-Shū        1,390,030   178,331   1,944   37    8
Shin-Kokin-Shū   1,962,550   244,776   2,173   11    0

From the table, there seem to be relatively higher similarities between Kokin-Shū and Gosen-Shū than between Kokin-Shū and Shin-Kokin-Shū. We examined the first part of a list of poem pairs arranged in decreasing order of similarity value. However, we had the impression that most pairs with a high similarity value are dissimilar, probably because the pattern system we used is too simple to quantify the affinities concerning repetition techniques. See the poems shown in Fig. 3. All the poems are matched by the pattern ⋆x⋆x⋆ with µ(x) = 4. The first three poems are similar to each other, while the other pairs are dissimilar. It seems that information about the locations at which a string occurs repeatedly is important.

ka-su-ka-no-ha/ke-fu-ha-na-ya-ki-so/wa-ka-ku-sa-no/tsu-ma-mo-ko-mo-re-ri/wa-re-mo-ko-mo-re-ri/ (Kokin-Shū #17)
to-shi-no-u-chi-ni/ha-ru-ha-ki-ni-ke-ri/hi-to-to-se-wo/ko-so-to-ya-i-ha-mu/ko-to-shi-to-ya-i-ha-mu/ (Kokin-Shū #1)
hi-ru-na-re-ya/mi-so-ma-ka-he-tsu-ru/tsu-ki-ka-ke-wo/ke-fu-to-ya-i-ha-mu/ki-no-fu-to-ya-i-ha-mu/ (Gosen-Shū #1100)
ha-ru-ka-su-mi/ta-te-ru-ya-i-tsu-ko/mi-yo-shi-no-no/yo-shi-no-no-ya-ma-ni/yu-ki-ha-fu-ri-tsu-tsu/ (Kokin-Shū #3)
tsu-ra-ka-ra-ha/o-na-shi-ko-ko-ro-ni/tsu-ra-ka-ra-m/tsu-re-na-ki-hi-to-wo/ko-hi-m-to-mo-se-su/ (Gosen-Shū #592)

Fig. 3. Poems that are matched by the same pattern x x with µ(x) = 4. All pairs have the same similarity value. The first three poems can be considered to 'share' the same poetic device and are closely similar, while some pairs are dissimilar.
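For the simplest case occx(π) = 2, the score score(π) = occx(π) · µ(x) can be sketched with a brute-force search (a hypothetical, cubic-time sketch with our own function names; the paper's computation relies on suffix-tree statistics instead). Note that the variable x may be substituted independently in each poem, so two poems can score highly without sharing any string:

```python
def longest_repeat(s: str, min_len: int = 2) -> int:
    """Length of the longest string occurring at least twice in s
    (occurrences may overlap); 0 if no repeat of length >= min_len exists."""
    best = 0
    for length in range(min_len, len(s)):
        for i in range(len(s) - length + 1):
            # a second occurrence strictly to the right?
            if s.find(s[i:i + length], i + 1) != -1:
                best = length
                break                  # move on to the next (longer) length
    return best

def similarity(s: str, t: str, min_len: int = 2) -> int:
    """Best score occ_x * mu(x) over patterns 'x x' (two variable
    occurrences) that match both poems; 0 if no such pattern matches."""
    mu = min(longest_repeat(s, min_len), longest_repeat(t, min_len))
    return 2 * mu if mu >= min_len else 0
```

Because each poem only needs some repetition of its own, this reading reproduces the observation above: pairs can receive a high similarity value while being quite dissimilar.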

Discovering Repetitive Expressions


Moreover, we observed that there are a lot of meaningless repetitions of strings, especially when µ(x) is relatively small, say, µ(x) = 2. To remove such repetitions, it seems better to restrict ourselves to repetitions of strings occurring at the beginning or the end of a line. We assume that the lines of a poem are parenthesized by [ and ]. Then the pattern [][x][x][][], for example, matches any poem whose second and third lines begin with the same string. We want to use the set of such patterns as the pattern set, but the number of such patterns is 3^5 = 243, which makes a naive similarity computation impractical. However, by using the minimal augmented suffix trees, we can filter out wasteful computation and perform the computation in reasonable time. The results are shown in Table 3. By examining the first part of the list, we confirmed that this time the pairs with a high similarity value are closely similar.

Table 3. Improved results. Frequency distribution of similarity values in the comparison of Kokin-Shū with Gosen-Shū and Shin-Kokin-Shū. Note that similarity values cannot be 1, 2, 3, 5, or 7 because of the definition of the pattern score function. The frequencies for any similarity values not shown here are all 0.

similarity value   0          4    6   8  10
Gosen-Shū          1,569,925  407  14  1  3
Shin-Kokin-Shū     2,208,888  583  39  0  0

5.2  Characterization of Anthologies

Table 4 shows the 30 most frequent patterns occurring in Kokin-Shū. The table illustrates the variety of word-repetition techniques.

Table 4. The 30 most frequent patterns in Kokin-Shū.

freq.  pattern         freq.  pattern         freq.  pattern
11     [][][x][x][]    3      [x][][x][][]    1      [x][][x][][x]
10     [x][x][][][]    3      [][x][][][x]    1      [x][][][][x]
10     [][x][x][][]    3      [][x][][x][]    1      [x][][x][][]
7      [x][][][][x]    3      [][][x][][x]    1      [x][][][][x]
5      [][x][][][x]    3      [][][][x][x]    1      [][x][x][][]
5      [][][x][x][]    2      [x][][x][][]    1      [][x][][x][]
5      [][][][x][x]    2      [x][][][x][]    1      [][x][x][][]
4      [x][][][x][]    2      [x][][][][x]    1      [][][x][x][]
4      [x][x][][][]    2      [][x][x][][]    1      [][][x][][x]
4      [][][x][x][]    1      [x][x][][][x]   0      [x][x][x][x][x]
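A line-anchored pattern of this form can be tested directly. In the sketch below (a hypothetical encoding of our own, not the paper's implementation), a pattern is a 5-tuple over {None, 'begin', 'end'}: None leaves a line unconstrained, while 'begin'/'end' require the line to begin/end with a shared string x of length at least min_len:

```python
def matches(lines, pattern, min_len=2):
    """Does some string x with len(x) >= min_len satisfy the pattern?"""
    anchored = [(l, kind) for l, kind in zip(lines, pattern) if kind]
    if len(anchored) < 2:                  # x must actually repeat
        return False
    first, kind = anchored[0]
    # try candidate strings x taken from the first constrained line,
    # from the longest possible down to min_len
    for length in range(min(len(l) for l, _ in anchored), min_len - 1, -1):
        x = first[:length] if kind == 'begin' else first[-length:]
        if all(l.startswith(x) if k == 'begin' else l.endswith(x)
               for l, k in anchored):
            return True
    return False

# Kokin-Shū #1 from Fig. 3: lines 4 and 5 end with the same string.
poem = ["to-shi-no-u-chi-ni", "ha-ru-ha-ki-ni-ke-ri", "hi-to-to-se-wo",
        "ko-so-to-ya-i-ha-mu", "ko-to-shi-to-ya-i-ha-mu"]
print(matches(poem, (None, None, None, 'end', 'end')))      # True
print(matches(poem, ('begin', 'begin', None, None, None)))  # False
```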


For every pattern of the above-mentioned form, we collected the poems matched by it from the first eight imperial anthologies shown in Table 1. The results are summarized in Table 5.

Table 5. Characterization of anthologies. I, II, III, IV, V, VI, VII, VIII represent Kokin-Shū, Gosen-Shū, Shūi-Shū, Go-Shūi-Shū, Kinyō-Shū, Shika-Shū, Senzai-Shū, Shin-Kokin-Shū, respectively.

(occx(π), µ(x))   I   II   III  IV   V   VI  VII  VIII
(2, 2)            96  104  118  108  24  22  77   112
(2, 3)            23  20   28   31   5   9   17   19
(2, 4)            10  7    13   5    4   5   3    1
(2, 5)            5   5    10   3    2   2   1    0
(3, 2)            2   11   2    3    0   1   1    0
(3, 3)            0   0    0    2    0   1   0    0
(3, 4)            0   0    0    0    0   0   0    0
(3, 5)            0   0    0    0    0   0   0    0
(4, 2)            0   5    0    0    0   0   0    0
(4, 3)            0   0    0    0    0   0   0    0
(4, 4)            0   0    0    0    0   0   0    0
(4, 5)            0   0    0    0    0   0   0    0
(5, 2)            0   1    0    0    0   0   0    0
(5, 3)            0   0    0    0    0   0   0    0
(5, 4)            0   0    0    0    0   0   0    0
(5, 5)            0   0    0    0    0   0   0    0

The first four anthologies contain a considerable number of poems that use repetition of words, even for large values of µ(x). This contrasts with Shin-Kokin-Shū, where repetitions are limited to small values of µ(x). This might be a reflection of the editors' preferences or of a literary trend. In any case, pursuing the reason for such differences will provide clues for further investigation of literary trends or of the editors' personalities.
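A tabulation in the spirit of Table 5 can be sketched as follows (a hypothetical brute-force sketch over toy data; the function and parameter names are our own, and a poem is simply a list of its romanized lines). Here a poem is counted in cell (occ, µ) when some string of length at least µ at the beginning or the end of a line is shared by at least occ of its lines:

```python
from collections import Counter

def realizes(lines, occ, mu):
    """Is some begin- or end-anchored string of length >= mu shared by
    at least `occ` of the poem's lines?"""
    for anchor in ('begin', 'end'):
        for line in lines:
            if len(line) < mu:
                continue
            # checking the length-mu candidate suffices: any longer shared
            # string implies this one is shared as well
            x = line[:mu] if anchor == 'begin' else line[-mu:]
            hits = sum(1 for l in lines
                       if (l.startswith(x) if anchor == 'begin'
                           else l.endswith(x)))
            if hits >= occ:
                return True
    return False

def tabulate(poems, occs=(2, 3), mus=(2, 3)):
    """Count, per (occ, mu) cell, how many poems realize it."""
    table = Counter()
    for lines in poems:
        for occ in occs:
            for mu in mus:
                if realizes(lines, occ, mu):
                    table[(occ, mu)] += 1
    return table
```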

6  Concluding Remarks

The Angluin pattern languages have been studied mainly from theoretical viewpoints; there have been no practical applications except for those limited to read-once patterns. This paper presented the first practical application of the Angluin pattern languages that is not limited to read-once patterns. We hope that pattern matching and similarity computation for the patterns discussed in this paper may lead to the discovery of overlooked aspects of individual poets. We distinguished repetitive variables (i.e., variables occurring more than once in a pattern) from non-repetitive variables, and associated each variable x with an integer µ(x) as a lower bound on the length of the strings that x matches. This enables us to give a pattern score that depends on the lengths of the strings substituted for the variables. For one-repetitive-variable patterns, we presented a way


of speeding up pattern matching, which uses substring statistics from the minimal augmented suffix tree of a given string as a filter that excludes patterns that cannot match it. A preliminary experiment showed that this idea successfully speeds up matching many patterns against a string repeatedly. In this paper, we restricted ourselves to one-repetitive-variable patterns and to repetitions of words occurring at the beginning or the end of the lines of a Waka poem. This restriction played an important role, but we want to consider slightly more complex patterns. For example, the following two poems are matched by the pattern [][][x][xx][].

[shi-ra-yu-ki-no][ya-he-fu-ri-shi-ke-ru][ka-he-ru-ya-ma][ka-he-ru-ka-he-ru-mo][o-i-ni-ke-ru-ka-na] (Kokin-Shū #902)
[a-fu-ko-to-ha][ma-ha-ra-ni-a-me-ru][i-yo-su-ta-re][i-yo-i-yo-wa-re-wo][wa-hi-sa-su-ru-ka-na] (Shika-Shū #244)
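Under the same line-anchored reading, the pattern [][][x][xx][] asks for a string x such that the third line begins with x and the fourth line begins with xx. This can be checked with a naive sketch (a hypothetical helper of our own, not from the paper):

```python
def match_x_xx(lines, min_len=2):
    """Longest x with len(x) >= min_len such that lines[2] starts with x
    and lines[3] starts with x + x; None if no such x exists."""
    third, fourth = lines[2], lines[3]
    # x + x must fit into the fourth line, and x into the third
    for length in range(min(len(third), len(fourth) // 2), min_len - 1, -1):
        x = third[:length]
        if fourth.startswith(x + x):
            return x
    return None

poem = ["shi-ra-yu-ki-no", "ya-he-fu-ri-shi-ke-ru", "ka-he-ru-ya-ma",
        "ka-he-ru-ka-he-ru-mo", "o-i-ni-ke-ru-ka-na"]
print(match_x_xx(poem))   # 'ka-he-ru-'
```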

Moreover, the next poem is matched by the pattern [x][y][x*][x*][y*], which contains two repetitive variables.

[wa-su-re-shi-to][i-hi-tsu-ru-na-ka-ha][wa-su-re-ke-ri][wa-su-re-mu-to-ko-so][i-fu-he-ka-ri-ke-re] (Go-Shūi-Shū #886)

Dealing with more general patterns like these will be future work.

References

1. A. V. Aho. Handbook of Theoretical Computer Science, volume A, Algorithms and Complexity, chapter 5, pages 255–295. Elsevier, Amsterdam, 1990.
2. D. Angluin. Finding patterns common to a set of strings. J. Comput. Syst. Sci., 21:46–62, 1980.
3. A. Apostolico and F. Preparata. Structural properties of the string statistics problem. J. Comput. Syst. Sci., 31(3):394–411, 1985.
4. A. Apostolico and F. Preparata. Data structures and algorithms for the string statistics problem. Algorithmica, 15(5):481–494, 1996.
5. H. Arimura. Text data mining with optimized pattern discovery. In Proc. 17th Workshop on Machine Intelligence, Cambridge, July 2000.
6. D. Gusfield. Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge University Press, New York, 1997.
7. H. Hori, S. Shimozono, M. Takeda, and A. Shinohara. Fragmentary pattern matching: Complexity, algorithms and applications for analyzing classic literary works. In Proc. 12th Annual International Symposium on Algorithms and Computation (ISAAC'01), 2001. To appear.
8. T. Kadota, M. Hirao, A. Ishino, M. Takeda, A. Shinohara, and F. Matsuo. Musical sequence comparison for melodic and rhythmic similarities. In Proc. 8th International Symposium on String Processing and Information Retrieval (SPIRE 2001). IEEE Computer Society, 2001. To appear.
9. T. Shinohara. Polynomial-time inference of pattern languages and its applications. In Proc. 7th IBM Symp. Math. Found. Comp. Sci., pages 191–209, 1982.


10. M. Takeda, T. Fukuda, I. Nanri, M. Yamasaki, and K. Tamari. Discovering instances of poetic allusion from anthologies of classical Japanese poems. Theor. Comput. Sci. To appear.
11. M. Takeda, T. Matsumoto, T. Fukuda, and I. Nanri. Discovering characteristic expressions from literary works. Theor. Comput. Sci. To appear.
12. M. Yamasaki, M. Takeda, T. Fukuda, and I. Nanri. Discovering characteristic patterns from collections of classical Japanese poems. New Gener. Comput., 18(1):61–73, 2000.
