Modeling and simulation with augmented reality

Khaled Hussain; Varol Kaptan

RAIRO - Operations Research (2010)

  • Volume: 38, Issue: 2, pages 89-103
  • ISSN: 0399-0559

Abstract

In applications such as airport operations, military simulations, and medical simulations, conducting simulations in accurate and realistic settings that are represented by real video imaging sequences becomes essential. This paper surveys recent work that enables visually realistic model constructions and the simulation of synthetic objects which are inserted in video sequences, and illustrates how synthetic objects can conduct intelligent behavior within a visual augmented reality.
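The "intelligent behavior" surveyed here draws on behavior-based techniques such as motor schemas and social potential fields (refs. 1 and 34 in the reference list). As a rough illustration only, not code from the paper, a synthetic agent overlaid on a video frame could steer toward a goal while avoiding tracked obstacles using a minimal potential-field update; the function name and field weights below are hypothetical:

```python
import math

def potential_field_step(pos, goal, obstacles, step=1.0, repulse=4.0):
    """One movement step combining goal attraction with obstacle
    repulsion (a simplified potential-field scheme; cf. refs. 1, 34).
    Positions are (x, y) pairs in image or world coordinates."""
    # Attractive component: unit vector pointing toward the goal.
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(gx, gy) or 1e-9
    fx, fy = gx / d, gy / d
    # Repulsive component: inverse-square push away from each obstacle.
    for ox, oy in obstacles:
        rx, ry = pos[0] - ox, pos[1] - oy
        r = math.hypot(rx, ry) or 1e-9
        fx += repulse * rx / r**3
        fy += repulse * ry / r**3
    # Move a fixed step length along the normalized resultant force.
    n = math.hypot(fx, fy) or 1e-9
    return (pos[0] + step * fx / n, pos[1] + step * fy / n)
```

With no obstacles the agent moves straight toward the goal; an obstacle near the path deflects the trajectory around it, which is the kind of reactive steering the surveyed simulations embed into video scenes.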

How to cite


Hussain, Khaled, and Kaptan, Varol. "Modeling and simulation with augmented reality." RAIRO - Operations Research 38.2 (2010): 89-103. <http://eudml.org/doc/105309>.

@article{Hussain2010,
abstract = { In applications such as airport operations, military simulations, and medical simulations, conducting simulations in accurate and realistic settings that are represented by real video imaging sequences becomes essential. This paper surveys recent work that enables visually realistic model constructions and the simulation of synthetic objects which are inserted in video sequences, and illustrates how synthetic objects can conduct intelligent behavior within a visual augmented reality. },
author = {Hussain, Khaled and Kaptan, Varol},
journal = {RAIRO - Operations Research},
language = {eng},
month = {3},
number = {2},
pages = {89-103},
publisher = {EDP Sciences},
title = {Modeling and simulation with augmented reality},
url = {http://eudml.org/doc/105309},
volume = {38},
year = {2010},
}

TY - JOUR
AU - Hussain, Khaled
AU - Kaptan, Varol
TI - Modeling and simulation with augmented reality
JO - RAIRO - Operations Research
DA - 2010/3//
PB - EDP Sciences
VL - 38
IS - 2
SP - 89
EP - 103
AB - In applications such as airport operations, military simulations, and medical simulations, conducting simulations in accurate and realistic settings that are represented by real video imaging sequences becomes essential. This paper surveys recent work that enables visually realistic model constructions and the simulation of synthetic objects which are inserted in video sequences, and illustrates how synthetic objects can conduct intelligent behavior within a visual augmented reality.
LA - eng
UR - http://eudml.org/doc/105309
ER -

References

  1. R.C. Arkin, Motor Schema-Based Mobile Robot Navigation. Int. J. Robot. Res. 8 (1989) 92-112.
  2. R.C. Arkin, Behavior-Based Robotics. The MIT Press, Cambridge, Massachusetts (1998).
  3. M. Bajura and U. Neumann, Dynamic registration correction in video-based reality systems. IEEE Comput. Graph. Appl. 15 (1995) 52-60.
  4. T. Balch, Behavioral Diversity in Learning Robot Teams. Ph.D. Thesis, Georgia Institute of Technology (1998).
  5. E. Gelenbe, Réseaux neuronaux aléatoires stables. C.R. Acad. Sci. II 310 (1990) 177-180.
  6. R. Brooks, A robust layered control system for a mobile robot. IEEE J. Robot. Autom. RA-2 (1986) 14-23.
  7. R. Brooks, Cambrian Intelligence: The Early History of the New AI. The MIT Press, Cambridge, MA (1999). Zbl 0968.68160
  8. C. Cramer, E. Gelenbe and H. Bakircioglu, Low bit rate video compression with neural networks and temporal subsampling. Proc. IEEE 84 (1996) 1529-1543.
  9. C. Cramer and E. Gelenbe, Video quality and traffic QoS in learning-based subsampled and receiver-interpolated video sequences. IEEE J. Selected Areas in Communications 18 (2000) 150-167.
  10. Y. Feng and E. Gelenbe, Adaptive object tracking and video compression. Netw. Inform. Syst. J. 1 (1999) 371-400.
  11. B. Foss, E. Gelenbe, K. Hussain, N. Lobo and H. Bahr, Simulation driven virtual objects in real scenes. Proc. ITSEC 2000, Orlando, FL (Nov. 2000).
  12. E. Gelenbe, Réseaux stochastiques ouverts avec clients négatifs et positifs, et réseaux neuronaux. C.R. Acad. Sci. Paris II 309 (1989) 979-982.
  13. E. Gelenbe, Random neural networks with positive and negative signals and product form solution. Neural Comput. 1 (1989) 502-510.
  14. E. Gelenbe, Stable random neural networks. Neural Comput. 2 (1990) 239-247.
  15. E. Gelenbe, Learning in the recurrent random network. Neural Comput. 5 (1993) 154-164.
  16. E. Gelenbe, Modeling CGF with learning stochastic finite-state machines, in Proc. 8th Conference on Computer Generated Forces, Orlando, May 11-13, 113-116.
  17. E. Gelenbe, Simulation with goal-oriented agents, in EUROSIM 2001. Delft University of Technology, Delft, Netherlands, 26-30 June (2001).
  18. E. Gelenbe, Applications of spiked recurrent stochastic networks, in 13th International Conference on Artificial Neural Networks & 10th International Conference on Neural Information Processing, 26-29 June (2003).
  19. E. Gelenbe, Spiked random neural networks, product forms, learning and approximation, in Conference on Analytical and Stochastic Modeling Techniques and Applications, European Simulation Multi-conference, Nottingham, UK, 9-11 June (2003).
  20. E. Gelenbe, C. Cramer, M. Sungur and P. Gelenbe, Traffic and video quality in adaptive neural compression. Multimedia Systems 4 (1996) 357-369.
  21. E. Gelenbe, T. Feng and K.R.R. Krishnan, Neural network methods for volumetric magnetic resonance imaging of the human brain. Proc. IEEE 84 (1996) 1488-1496.
  22. E. Gelenbe and J.M. Fourneau, Random neural networks with multiple classes of signals. Neural Comput. 11 (1999) 953-963.
  23. E. Gelenbe and K. Hussain, Learning in the multiple class random neural network. IEEE Trans. on Neural Networks 13 (2002) 1257-1267.
  24. E. Gelenbe, K. Hussain and V. Kaptan, Realistic simulation of cooperating robots, in Proc. CTS'03 (International Symposium on Collaborative Technologies and Systems), WMC'03, Society for Computer Simulation, Orlando, 19-23 January (2003) 151-156.
  25. E. Gelenbe, V. Kaptan and K. Hussain, Simulating Autonomous Agents in Augmented Reality. Submitted for publication.
  26. E. Gelenbe, E. Şeref and Z. Xu, Discrete event simulation using goal oriented learning agents, in AI, Simulation & Planning in High Autonomy Systems, SCS, Tucson, Arizona, March 6-8 (2000).
  27. E. Gelenbe, E. Şeref and Z. Xu, Simulation with learning agents. Proc. IEEE 89 (2001) 148-157.
  28. S.W. Lawson, Augmented reality for underground pipe inspection and maintenance, in SPIE Conference on Telemanipulator and Telepresence Technologies, Boston, Massachusetts, 3524 (1998) 98-104.
  29. M. Mataric, Reinforcement learning in the multi-robot domain. Autonomous Robots 4 (1997) 73-83.
  30. J.P. Mellor, Enhanced reality visualization in a surgical environment. M.S. Thesis, Department of Electrical Engineering, MIT (January 1995).
  31. N. Ono and K. Fukumoto, Multi-agent reinforcement learning: A modular approach, in Proc. of the 2nd Int. Conf. on Multi-Agent Systems. AAAI Press (1996) 252-258.
  32. N. Ono and K. Fukumoto, A modular approach to multi-agent reinforcement learning, in Distributed Artificial Intelligence Meets Machine Learning, edited by Gerhard Weiss. Springer-Verlag (1997) 167.
  33. D.C. Pottinger, Implementing Coordinated Movement. Game Developer Magazine (1999).
  34. J.H. Reif and H. Wang, Social Potential Fields: A Distributed Behavioral Control for Autonomous Robots, in International Workshop on Algorithmic Foundations of Robotics (WAFR). A.K. Peters, Wellesley, MA (1998) 431-459.
  35. C.W. Reynolds, Flocks, Herds, and Schools: A Distributed Behavioral Model. Comput. Graph. 21 (1987) 25-34.
  36. C.W. Reynolds, Steering Behaviors for Autonomous Characters. Game Developers Conference (1999).
  37. L. Steels, The Artificial Life Roots of Artificial Intelligence. Artificial Life 1 (1993) 75-110.
  38. M. Tan, Multi-Agent Reinforcement Learning: Independent versus Cooperative Agents, in International Conference on Machine Learning (1993) 330-337.
