
Title:
Multimodal Surveillance : Sensors, Algorithms, and Systems.
Author:
Zhu, Zhigang.
ISBN:
9781596931855
Physical Description:
1 online resource (448 pages)
Contents:
Multimodal Surveillance: Sensors, Algorithms, and Systems -- Contents -- Foreword -- Preface -- Chapter 1 Multimodal Surveillance: An Introduction -- 1.1 Multimodal Surveillance: A Brief History -- 1.2 Part I: Multimodal Sensors and Sensing Approaches -- 1.2.1 The ARL Multimodal Sensor (Chapter 2) -- 1.2.2 Design and Deployment of Visible-Thermal Biometric Surveillance Systems (Chapter 3) -- 1.2.3 LDV Sensing and Processing for Remote Hearing in a Multimodal Surveillance System (Chapter 4) -- 1.2.4 Sensor and Data Systems, Audio-Assisted Cameras, and Acoustic Doppler Sensors (Chapter 5) -- 1.3 Part II: Multimodal Fusion Algorithms -- 1.3.1 Audiovisual Speech Recognition (Chapter 6) -- 1.3.2 Multimodal Tracking for Smart Videoconferencing and Video Surveillance (Chapter 7) -- 1.3.3 Multimodal Biometrics Involving the Human Ear (Chapter 8) -- 1.3.4 Fusion of Face and Palmprint for Personal Identification Based on Ordinal Features (Chapter 9) -- 1.3.5 Human Identification Using Gait and Face (Chapter 10) -- 1.4 Part III: Multimodal Systems and Issues -- 1.4.1 Sensor Fusion and Environmental Modeling for Multimodal Sentient Computing (Chapter 11) -- 1.4.2 An End-to-End eChronicling System for Mobile Human Surveillance (Chapter 12) -- 1.4.3 Systems Issues in Distributed Multimodal Surveillance (Chapter 13) -- 1.4.4 Multimodal Workbench for Automatic Surveillance Applications (Chapter 14) -- 1.4.5 Automatic 3-D Modeling of Cities with Multimodal Air and Ground Sensors (Chapter 15) -- 1.4.6 Multimodal Biometrics Systems: Applications and Usage Scenarios (Chapter 16) -- 1.4.7 SATware: Middleware for Sentient Spaces (Chapter 17) -- 1.5 Concluding Remarks -- References -- PART I Multimodal Sensors and Sensing Approaches -- Chapter 2 The ARL Multimodal Sensor: A Research Tool for Target Signature Collection, Algorithm Validation, and Emplacement Studies.
2.1 Introduction -- 2.2 Multimodal Sensors -- 2.2.1 Enclosure -- 2.2.2 System Description -- 2.2.3 Algorithms -- 2.2.4 Communications -- 2.2.5 Principles of Operation -- 2.3 Multimodal Sensor (MMS) System -- 2.3.1 Multimodal Sensors -- 2.3.2 Multimodal Gateway (mmGW) -- 2.3.3 PDA -- 2.4 Typical Deployment -- 2.4.1 Mission Planning -- 2.4.2 Sensor Emplacement -- 2.4.3 Survey and Test -- 2.4.4 Operation -- 2.4.5 Sensor Management -- 2.5 Algorithm Development -- 2.5.1 Signature Collection -- 2.5.1.1 Scenarios -- 2.5.1.2 Target Features -- 2.5.2 Validation -- 2.5.2.1 Simulation -- 2.5.2.2 In-Target Testing/Real Time -- 2.6 MMS Applications -- 2.6.1 Cave and Urban Assault (CUA) -- 2.6.1.1 Equipment Configuration -- 2.6.1.2 Test Environment -- 2.6.1.3 Test Scenarios -- 2.6.2 NATO LG-6 -- 2.6.2.1 Environment -- 2.6.3 C4ISR OTM -- 2.6.3.1 Equipment Configuration -- 2.6.3.2 Environment -- 2.6.3.3 Test Scenario -- 2.7 Summary -- References -- Chapter 3 Design and Deployment of Visible-Thermal Biometric Surveillance Systems -- 3.1 A Quick Tour Through the (Relevant) Electromagnetic Spectrum -- 3.2 Why and When to Use a Fused Visible-Thermal System -- 3.3 Optical Design -- 3.4 Choice of Sensors -- 3.5 Biometrically Enabled Visible-Thermal Surveillance -- 3.6 Conclusions -- References -- Chapter 4 LDV Sensing and Processing for Remote Hearing in a Multimodal Surveillance System -- 4.1 Introduction -- 4.2 Multimodal Sensors for Remote Hearing -- 4.2.1 The LDV Sensor -- 4.2.2 The Infrared Camera -- 4.2.3 The PTZ Camera -- 4.3 LDV Hearing: Sensing and Processing -- 4.3.1 Principle and Research Issues -- 4.3.2 LDV Audio Signal Enhancement -- 4.4 Experiment Designs and Analysis -- 4.4.1 Real Data Collections -- 4.4.2 LDV Performance Analysis -- 4.4.3 Enhancement Evaluation and Analysis -- 4.5 Discussions on Sensor Improvements and Multimodal Integration.
4.5.1 Further Research Issues in LDV Acoustic Detection -- 4.5.2 Multimodal Integration and Intelligent Targeting and Focusing -- 4.6 Conclusions -- Acknowledgments -- References -- Chapter 5 Sensor and Data Systems, Audio-Assisted Cameras, and Acoustic Doppler Sensors -- 5.1 Introduction -- 5.2 Audio-Assisted Cameras -- 5.2.1 Prototype Setup -- 5.2.2 Sound Recognition -- 5.2.3 Location Recognition -- 5.2.4 Everything Else -- 5.2.5 Applications -- 5.2.6 Conclusion -- 5.3 Acoustic Doppler Sensors for Gait Recognition -- 5.3.1 The Doppler Effect and Gait Measurement -- 5.3.2 The Acoustic Doppler Sensor for Gait Recognition -- 5.3.3 Signal Processing and Classification -- 5.3.4 Experiments -- 5.3.5 Discussion -- 5.4 Conclusions -- References -- PART II Multimodal Fusion Algorithms -- Chapter 6 Audiovisual Speech Recognition -- 6.1 Introduction -- 6.1.1 Visual Features -- 6.1.2 Fusion Strategy -- 6.2 Sensory Fusion Using Coupled Hidden Markov Models -- 6.2.1 Introduction to Coupled Hidden Markov Models -- 6.2.2 An Inference Algorithm for CHMM -- 6.2.3 Experimental Evaluation -- 6.3 Audiovisual Speech Recognition System Using CHMM -- 6.3.1 Implementation Strategy of CHMM -- 6.3.2 Audiovisual Speech Recognition Experiments -- 6.3.3 Large Vocabulary Continuous Speech Experiments -- 6.4 Conclusions -- References -- Chapter 7 Multimodal Tracking for Smart Videoconferencing and Video Surveillance -- 7.1 Introduction -- 7.2 Automatic Calibration of Multimicrophone Setup -- 7.2.1 ML Estimator -- 7.2.2 Closed-Form Solution -- 7.2.3 Estimator Bias and Variance -- 7.3 System Autocalibration Performance -- 7.3.1 Calibration Signals -- 7.3.2 Time Delay Estimation -- 7.3.3 Speed of Sound -- 7.3.4 Synchronization Error -- 7.3.5 Testbed Setup and Results -- 7.4 The Tracking Algorithm -- 7.4.1 Algorithm Overview -- 7.4.2 Instantiation of the Particle Filter.
7.4.3 Self-Calibration Within the Particle Filter Framework -- 7.5 Setup and Measurements -- 7.5.1 Video Modality -- 7.5.2 Audio Modality -- 7.6 Tracking Performance -- 7.6.1 Synthetic Data -- 7.6.2 Ultrasonic Sounds in Anechoic Room -- 7.6.3 Occlusion Handling -- 7.7 Conclusions -- Acknowledgments -- References -- Appendix 7A Jacobian Computations -- Appendix 7B Converting the Distance Matrix to a Dot Product Matrix -- Chapter 8 Multimodal Biometrics Involving the Human Ear -- 8.1 Introduction -- 8.2 2-D and 3-D Ear Biometrics -- 8.2.1 2-D Ear Biometrics -- 8.2.2 3-D Ear Biometrics -- 8.3 Multibiometric Approaches to Ear Biometrics -- 8.4 Ear Segmentation -- 8.5 Conclusions -- Acknowledgments -- Chapter 9 Fusion of Face and Palmprint for Personal Identification Based on Ordinal Features -- 9.1 Introduction -- 9.2 Ordinal Features -- 9.2.1 Local Ordinal Features -- 9.2.2 Nonlocal Ordinal Features -- 9.3 Multimodal Biometric System Using Ordinal Features -- 9.3.1 Face Recognition -- 9.3.2 Palmprint Recognition -- 9.3.3 Fusion of Face and Palmprint -- 9.4 Experiments -- 9.4.1 Data Description -- 9.4.2 Experimental Results and Evaluation -- 9.5 Conclusions -- Acknowledgments -- References -- Chapter 10 Human Identification Using Gait and Face -- 10.1 Introduction -- 10.2 Framework for View-Invariant Gait Recognition -- 10.3 Face Recognition from Video -- 10.4 Fusion Strategies -- 10.5 Experimental Results -- 10.6 Conclusion -- Acknowledgments -- References -- Appendix 10A Mathematical Details -- PART III Multimodal Systems and Issues -- Chapter 11 Sensor Fusion and Environmental Modeling for Multimodal Sentient Computing -- 11.1 Sentient Computing-Systems and Sensors -- 11.1.1 Overview -- 11.1.2 The SPIRIT System -- 11.1.3 Motivation and Challenges -- 11.1.4 Sentient Computing World Model -- 11.2 Related Work -- 11.3 Sensor Fusion.
11.3.1 Sensory Modalities and Correspondences -- 11.3.2 Vision Algorithms -- 11.3.3 Fusion and Adaptation of Visual Appearance Models -- 11.3.4 Multihypothesis Bayesian Modality Fusion -- 11.4 Environmental Modeling Using Sensor Fusion -- 11.4.1 Experimental Setup -- 11.4.2 Enhanced Tracking and Dynamic State Estimation -- 11.4.3 Modeling of the Office Environment -- 11.5 Summary -- Acknowledgments -- References -- Chapter 12 An End-to-End eChronicling System for Mobile Human Surveillance -- 12.1 Introduction: Mobile Human Surveillance -- 12.2 Related Work -- 12.3 System Architecture and Overview -- 12.4 Event Management -- 12.4.1 Storage -- 12.4.2 Representation -- 12.4.3 Retrieval -- 12.5 Multimodal Analytics -- 12.5.1 Image Classification -- 12.5.2 Face Detection and License Plate Recognition from Images -- 12.5.3 Audio and Speech Analytics -- 12.5.4 Multimodal Integration -- 12.6 Interface: Analysis and Authoring/Reporting -- 12.6.1 Experiential Interface -- 12.7 Experiments and System Evaluation -- 12.7.1 Image Tagging Performance and Observations -- 12.8 Conclusions and Future Work -- Acknowledgments -- References -- Chapter 13 Systems Issues in Distributed Multimodal Surveillance -- 13.1 Introduction -- 13.2 User Interfaces -- 13.2.1 UI-GUI: Understanding Images of Graphical User Interfaces -- 13.2.2 Visualization and Representation -- 13.2.3 Advanced UI-GUI Recognition Algorithm -- 13.2.4 Sensor Fusion with UI-GUI: GUI Is API -- 13.2.5 Formal User Study on UI-GUI -- 13.2.6 Questionnaire Analysis -- 13.2.7 User Interface System Issues Summary -- 13.3 System Issues in Large-Scale Video Surveillance -- 13.3.1 Sensor Selection Issues -- 13.3.2 Computation and Communication Issues -- 13.3.3 Software/Communication Architecture -- 13.3.4 System Issues Summary -- References -- Chapter 14 Multimodal Workbench for Automatic Surveillance Applications.
14.1 Introduction.
Abstract:
From front-end sensors to systems and environmental issues, this practical resource guides you through the many facets of multimodal surveillance. The book examines thermal, vibration, video, and audio sensors in a broad context of civilian and military applications. This cutting-edge volume provides an in-depth treatment of data fusion algorithms that takes you to the core of multimodal surveillance, biometrics, and sentient computing. The book discusses people- and activity-related topics such as tracking people and vehicles and identifying individuals by their speech. Systems designers benefit from discussions on automatic 3-D scene modeling, high-end sensors for long-range tracking or high-fidelity sensing, and low-cost sensor solutions. Developers gain new insight into architectures, frameworks, workbenches, real-time performance, and systems evaluation. Bringing together the work of leading international experts, this book is your authoritative reference on multimodal applications, methodologies, and systems implementation.
Local Note:
Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2017. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.