Exploiting tensor rank-one decomposition in probabilistic inference

Petr Savický; Jiří Vomlel

Kybernetika (2007)

  • Volume: 43, Issue: 5, pages 747-764
  • ISSN: 0023-5954

Abstract

We propose a new additive decomposition of probability tables – tensor rank-one decomposition. The basic idea is to decompose a probability table into a series of tables such that the sum of the series equals the original table. Each table in the series has the same domain as the original table but can be expressed as a product of one-dimensional tables. Entries in the tables are allowed to be any real numbers, i.e., they can also be negative. The possibility of having negative numbers, in contrast to a multiplicative decomposition, opens new possibilities for a compact representation of probability tables. We show that tensor rank-one decomposition can be used to reduce the space and time requirements of probabilistic inference. We provide a closed-form solution for minimal tensor rank-one decomposition of some special tables and propose a numerical algorithm that can be used when the closed-form solution is not known.
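
The decomposition described in the abstract can be made concrete with a small numerical example. The sketch below (Python with NumPy; the OR-gate table, the factor vectors, and the marginals p and q are illustrative choices, not the paper's closed-form construction) writes the 2x2x2 indicator table of a deterministic OR as a sum of two rank-one terms, one of which contains a negative entry, and shows how marginalization distributes over the terms.

```python
# Illustrative sketch only: a probability table psi(a, b, c) written as a sum of
# rank-one terms (outer products of one-dimensional tables), and a marginal
# computed term by term instead of over the full table.
import numpy as np

# The original table: psi[a, b, c] = 1 iff c = a OR b.
psi = np.zeros((2, 2, 2))
for a in (0, 1):
    for b in (0, 1):
        psi[a, b, a | b] = 1.0

# Two rank-one terms; note the negative entry in w1.
u1, v1, w1 = np.array([1., 0.]), np.array([1., 0.]), np.array([1., -1.])
u2, v2, w2 = np.array([1., 1.]), np.array([1., 1.]), np.array([0., 1.])

# Reconstruct the table as a sum of outer products and check it is exact.
approx = (np.einsum('i,j,k->ijk', u1, v1, w1)
          + np.einsum('i,j,k->ijk', u2, v2, w2))
assert np.allclose(approx, psi)

# The payoff in inference: sum_{a,b} psi(a,b,c) p(a) q(b) distributes over the
# rank-one terms, so each variable is summed out against a one-dimensional
# table rather than the full three-dimensional one.
p = np.array([0.3, 0.7])   # hypothetical marginal for a
q = np.array([0.6, 0.4])   # hypothetical marginal for b
marg_direct = np.einsum('ijk,i,j->k', psi, p, q)
marg_decomp = (p @ u1) * (q @ v1) * w1 + (p @ u2) * (q @ v2) * w2
assert np.allclose(marg_direct, marg_decomp)
print(marg_decomp)   # distribution of c = a OR b, here [0.18, 0.82]
```

The negative entry in w1 is what makes the two-term representation possible: a decomposition of this table restricted to nonnegative rank-one terms needs at least three of them. This is the sense in which allowing arbitrary real entries yields a more compact additive representation.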

How to cite


Savický, Petr, and Vomlel, Jiří. "Exploiting tensor rank-one decomposition in probabilistic inference." Kybernetika 43.5 (2007): 747-764. <http://eudml.org/doc/33892>.

@article{Savický2007,
abstract = {We propose a new additive decomposition of probability tables – tensor rank-one decomposition. The basic idea is to decompose a probability table into a series of tables such that the sum of the series equals the original table. Each table in the series has the same domain as the original table but can be expressed as a product of one-dimensional tables. Entries in the tables are allowed to be any real numbers, i.e., they can also be negative. The possibility of having negative numbers, in contrast to a multiplicative decomposition, opens new possibilities for a compact representation of probability tables. We show that tensor rank-one decomposition can be used to reduce the space and time requirements of probabilistic inference. We provide a closed-form solution for minimal tensor rank-one decomposition of some special tables and propose a numerical algorithm that can be used when the closed-form solution is not known.},
author = {Savický, Petr and Vomlel, Jiří},
journal = {Kybernetika},
keywords = {graphical probabilistic models; probabilistic inference; tensor rank},
language = {eng},
number = {5},
pages = {747-764},
publisher = {Institute of Information Theory and Automation AS CR},
title = {Exploiting tensor rank-one decomposition in probabilistic inference},
url = {http://eudml.org/doc/33892},
volume = {43},
year = {2007},
}

TY - JOUR
AU - Savický, Petr
AU - Vomlel, Jiří
TI - Exploiting tensor rank-one decomposition in probabilistic inference
JO - Kybernetika
PY - 2007
PB - Institute of Information Theory and Automation AS CR
VL - 43
IS - 5
SP - 747
EP - 764
AB - We propose a new additive decomposition of probability tables – tensor rank-one decomposition. The basic idea is to decompose a probability table into a series of tables such that the sum of the series equals the original table. Each table in the series has the same domain as the original table but can be expressed as a product of one-dimensional tables. Entries in the tables are allowed to be any real numbers, i.e., they can also be negative. The possibility of having negative numbers, in contrast to a multiplicative decomposition, opens new possibilities for a compact representation of probability tables. We show that tensor rank-one decomposition can be used to reduce the space and time requirements of probabilistic inference. We provide a closed-form solution for minimal tensor rank-one decomposition of some special tables and propose a numerical algorithm that can be used when the closed-form solution is not known.
LA - eng
KW - graphical probabilistic models; probabilistic inference; tensor rank
UR - http://eudml.org/doc/33892
ER -

References

  1. Chavira M., Darwiche A., Compiling Bayesian networks with local structure, In: Proc. 19th Internat. Joint Conference on Artificial Intelligence (IJCAI), Edinburgh 2005, pp. 1306–1312 
  2. Darwiche A., A differential approach to inference in Bayesian networks, J. Assoc. Comput. Mach. 50 (2003), 3, 280–305 MR2146356
  3. Lathauwer L. De, Moor B. De, From matrix to tensor: multilinear algebra and signal processing, In: 4th Internat. Conference on Mathematics in Signal Processing, Part I, IMA Conference Series, Warwick 1996, pp. 1–11 (1996) 
  4. Lathauwer L. De, Moor B. De, Vandewalle J., On the best rank-1 and rank-(R1, R2, ..., RN) approximation of higher-order tensors, SIAM J. Matrix Anal. Appl. 21 (2000), 4, 1324–1342 MR1780276
  5. Díez F. J., Galán S. F., An efficient factorization for the noisy MAX, Internat. J. Intell. Systems 18 (2003), 2, 165–177 
  6. Golub G. H., Loan C. F. Van, Matrix Computations, Third edition. Johns Hopkins University Press, Baltimore 1996 MR1417720
  7. Heckerman D., A tractable inference algorithm for diagnosing multiple diseases, In: Proc. Fifth Annual Conference on Uncertainty in AI (M. Henrion, R. D. Shachter, L. N. Kanal, and J. F. Lemmer, eds.), August 18–21, 1989, Windsor, ON, pp. 163–171 (1989) 
  8. Heckerman D., Causal independence for knowledge acquisition and inference, In: Proc. Ninth Conference on Uncertainty in AI (D. Heckerman and A. Mamdani, eds.), July 9–11, 1993, Washington, D.C., pp. 122–127 (1993) 
  9. Heckerman D., Breese J. S., A new look at causal independence, In: Proc. Tenth Conference on Uncertainty in AI (R. Lopez de Mantaras and D. Poole, eds.), July 29–31, 1994, Seattle, WA, pp. 286–292 (1994) 
  10. Håstad J., Tensor Rank is NP-complete, J. Algorithms 11 (1990), 644–654 (1990) Zbl0716.65043MR1079455
  11. Jensen F. V., Bayesian Networks and Decision Graphs, (Statistics for Engineering and Information Science.) Springer–Verlag, New York – Berlin – Heidelberg 2001 MR1876880
  12. Jensen F. V., Lauritzen S. L., Olesen K. G., Bayesian updating in recursive graphical models by local computation, Computat. Statist. Quart. 4 (1990), 269–282 (1990) MR1073446
  13. Lauritzen S. L., Graphical Models, Clarendon Press, Oxford 1996 Zbl1055.62126MR1419991
  14. Olesen K. G., Kjærulff U., Jensen F., Jensen F. V., Falck B., Andreassen S., Andersen S. K., A MUNIN network for the median nerve – a case study on loops, Appl. Artif. Intell., Special issue: Towards Causal AI Models in Practice 3 (1989), 384–403 (1989) 
  15. Polak E., Computational Methods in Optimization: A Unified Approach, Academic Press, New York 1971 Zbl0301.90040MR0282511
  16. Takikawa M., D’Ambrosio B., Multiplicative factorization of noisy-max, In: Proc. Fifteenth Conference on Uncertainty in AI (K. B. Laskey and H. Prade, eds.), July 30 – August 1, 1999, Stockholm, pp. 622–630 (1999) 
  17. Vomlel J., Exploiting functional dependence in Bayesian network inference, In: Proc. Eighteenth Conference on Uncertainty in AI (UAI) – Edmonton (Canada), Morgan Kaufmann, San Francisco 2002, pp. 528–535 
  18. Zhang N. L., Poole D., Exploiting causal independence in Bayesian network inference, J. Artif. Intell. Res. 5 (1996), 301–328 (1996) Zbl0900.68384MR1426261
