Image-Based Multi-Sensor Data Representation and Fusion Via 2D Non-Linear Convolution
Aaron Rababaah
Pages: 138-156    |    Revised: 15-03-2012    |    Published: 16-04-2012
Volume: 6, Issue: 2    |    Publication Date: April 2012
KEYWORDS
Multi-sensor Data Fusion, Image-based Fusion, Data Fusion Via Non-linear Convolution, Situation Assessment
ABSTRACT
Sensor data fusion is the process of combining data collected from multiple sensors of homogeneous or heterogeneous modalities to perform inferences that may not be possible using a single sensor. This process encompasses several stages to arrive at a sound, reliable decision-making end result. These stages include sensor-signal preprocessing, sub-object refinement, object refinement, situation refinement, threat refinement, and process refinement. Every stage draws from different domains to achieve its requirements and goals. Popular methods for sensor data fusion include ad hoc and heuristic-based approaches, classical hypothesis testing, Bayesian inference, fuzzy inference, and neural networks. In this work, we introduce a new data fusion model that contributes to the area of multi-sensor/multi-source data fusion. The new fusion model relies on image processing theory to map stimuli from sensors onto an energy map and uses non-linear convolution to combine the energy responses on the map into a single fused response map. This response map is then fed into a process of transformations to extract an inference that estimates the output state response as a normalized amplitude level. The new data fusion model is helpful for identifying severe events in the monitored environment. An efficiency comparison with a similar fuzzy-logic fusion model showed that our proposed model is superior in time complexity, as validated theoretically and experimentally.
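To make the pipeline described in the abstract concrete, the following Python sketch illustrates one plausible reading of it: sensor stimuli are deposited on a 2D energy map, overlapping responses are combined with a non-linear convolution-style operator, and the fused map is collapsed to a normalized amplitude level. This is a minimal illustration under our own assumptions (Gaussian stimulus kernels, element-wise maximum as the non-linear combiner, amplitudes pre-scaled to [0, 1]); the paper's actual kernels, combining operator, and output transformations may differ.

```python
import numpy as np

# Illustrative sketch only: the function and parameter names below are
# assumptions made for exposition, not the paper's actual implementation.

def gaussian_kernel(size=15, sigma=3.0):
    """2D Gaussian kernel that spreads each sensor stimulus over the map."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.max()  # peak normalized to 1

def energy_map(stimuli, shape=(100, 100), kernel=None):
    """Deposit (row, col, amplitude) stimuli onto a 2D energy map,
    combining overlapping responses non-linearly (element-wise max)."""
    if kernel is None:
        kernel = gaussian_kernel()
    emap = np.zeros(shape)
    r = kernel.shape[0] // 2
    for row, col, amp in stimuli:
        # Clip the kernel footprint at the map borders.
        r0, r1 = max(row - r, 0), min(row + r + 1, shape[0])
        c0, c1 = max(col - r, 0), min(col + r + 1, shape[1])
        kr0, kc0 = r0 - (row - r), c0 - (col - r)
        patch = amp * kernel[kr0:kr0 + (r1 - r0), kc0:kc0 + (c1 - c0)]
        # Non-linear combination: keep the strongest response per cell
        # rather than a linear sum, so one severe stimulus is not diluted.
        emap[r0:r1, c0:c1] = np.maximum(emap[r0:r1, c0:c1], patch)
    return emap

def fused_state_response(emap, max_energy=1.0):
    """Collapse the fused map into a single normalized amplitude level."""
    return float(np.clip(emap.max() / max_energy, 0.0, 1.0))

# Three stimuli from heterogeneous sensors, amplitudes pre-scaled to [0, 1].
stimuli = [(30, 40, 0.7), (32, 43, 0.9), (80, 15, 0.3)]
level = fused_state_response(energy_map(stimuli))
print(f"fused response level: {level:.2f}")  # a high level flags a severe event
```

The element-wise maximum is only one choice of non-linear combiner; under the same structure, a saturating sum or a product-of-complements rule could be substituted in the marked line without changing the rest of the sketch.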
CITED BY (3)  
1 You, J. (2014). Hierarchical Multi-sensors Data Fusion for Enhanced Context Inference. International Journal of Control and Automation, 7(2), 189-196.
2 Bai, J., Ma, Y., Li, J., Li, H., Fang, Y., Wang, R., & Wang, H. (2014). Good match exploration for thermal infrared face recognition based on YWF-SIFT with multi-scale fusion. Infrared Physics & Technology, 67, 91-97.
3 Suh, D., & You, J. (2013). Data Fusion with Reduced Calculation for Contextual Inference.
Dr. Aaron Rababaah
University of Maryland Eastern Shore - United States of America
haroun01@gmail.com