
Title:
Proportionate-Type Normalized Least Mean Square Algorithms.
Author:
Wagner, Kevin.
ISBN:
9781118579251
Edition:
1st ed.
Physical Description:
1 online resource (155 pages)
Series:
FOCUS Series
Contents:
Title Page -- Contents -- Preface -- Notation -- Acronyms -- Chapter 1. Introduction to PtNLMS Algorithms -- 1.1. Applications motivating PtNLMS algorithms -- 1.2. Historical review of existing PtNLMS algorithms -- 1.3. Unified framework for representing PtNLMS algorithms -- 1.4. Proportionate-type NLMS adaptive filtering algorithms -- 1.4.1. Proportionate-type least mean square algorithm -- 1.4.2. PNLMS algorithm -- 1.4.3. PNLMS++ algorithm -- 1.4.4. IPNLMS algorithm -- 1.4.5. IIPNLMS algorithm -- 1.4.6. IAF-PNLMS algorithm -- 1.4.7. MPNLMS algorithm -- 1.4.8. EPNLMS algorithm -- 1.5. Summary -- Chapter 2. LMS Analysis Techniques -- 2.1. LMS analysis based on small adaptation step-size -- 2.1.1. Statistical LMS theory: small step-size assumptions -- 2.1.2. LMS analysis using stochastic difference equations with constant coefficients -- 2.2. LMS analysis based on independent input signal assumptions -- 2.2.1. Statistical LMS theory: independent input signal assumptions -- 2.2.2. LMS analysis using stochastic difference equations with stochastic coefficients -- 2.3. Performance of statistical LMS theory -- 2.4. Summary -- Chapter 3. PtNLMS Analysis Techniques -- 3.1. Transient analysis of PtNLMS algorithm for white input -- 3.1.1. Link between MSWD and MSE -- 3.1.2. Recursive calculation of the MWD and MSWD for PtNLMS algorithms -- 3.2. Steady-state analysis of PtNLMS algorithm: bias and MSWD calculation -- 3.3. Convergence analysis of the simplified PNLMS algorithm -- 3.3.1. Transient theory and results -- 3.3.2. Steady-state theory and results -- 3.4. Convergence analysis of the PNLMS algorithm -- 3.4.1. Transient theory and results -- 3.4.2. Steady-state theory and results -- 3.5. Summary -- Chapter 4. Algorithms Designed Based on Minimization of User-Defined Criteria -- 4.1. PtNLMS algorithms with gain allocation motivated by MSE minimization for white input.
4.1.1. Optimal gain calculation resulting from MMSE -- 4.1.2. Water-filling algorithm simplifications -- 4.1.3. Implementation of algorithms -- 4.1.4. Simulation results -- 4.2. PtNLMS algorithm obtained by minimization of MSE modeled by exponential functions -- 4.2.1. WD for proportionate-type steepest descent algorithm -- 4.2.2. Water-filling gain allocation for minimization of the MSE modeled by exponential functions -- 4.2.3. Simulation results -- 4.3. PtNLMS algorithm obtained by minimization of the MSWD for colored input -- 4.3.1. Optimal gain algorithm -- 4.3.2. Relationship between minimization of MSE and MSWD -- 4.3.3. Simulation results -- 4.4. Reduced computational complexity suboptimal gain allocation for PtNLMS algorithm with colored input -- 4.4.1. Suboptimal gain allocation algorithms -- 4.4.2. Simulation results -- 4.5. Summary -- Chapter 5. Probability Density of WD for PtLMS Algorithms -- 5.1. Proportionate-type least mean square algorithms -- 5.1.1. Weight deviation recursion -- 5.2. Derivation of the conditional PDF of WD for the PtLMS algorithm -- 5.2.1. Conditional PDF derivation -- 5.3. Applications using the conditional PDF -- 5.3.1. Methodology for finding the steady-state joint PDF using the conditional PDF -- 5.3.2. Algorithm based on constrained maximization of the conditional PDF -- 5.4. Summary -- Chapter 6. Adaptive Step-Size PtNLMS Algorithms -- 6.1. Adaptation of μ-law for compression of weight estimates using the output square error -- 6.2. AMPNLMS and AEPNLMS simplification -- 6.3. Algorithm performance results -- 6.3.1. Learning curve performance of the ASPNLMS, AMPNLMS and AEPNLMS algorithms for a white input signal -- 6.3.2. Learning curve performance of the ASPNLMS, AMPNLMS and AEPNLMS algorithms for a color input signal.
6.3.3. Learning curve performance of the ASPNLMS, AMPNLMS and AEPNLMS algorithms for a voice input signal -- 6.3.4. Parameter effects on algorithms -- 6.4. Summary -- Chapter 7. Complex PtNLMS Algorithms -- 7.1. Complex adaptive filter framework -- 7.2. cPtNLMS and cPtAP algorithm derivation -- 7.2.1. Algorithm simplifications -- 7.2.2. Alternative representations -- 7.2.3. Stability considerations of the cPtNLMS algorithm -- 7.2.4. Calculation of step-size control matrix -- 7.3. Complex water-filling gain allocation algorithm for white input signals: one gain per coefficient case -- 7.3.1. Derivation -- 7.3.2. Implementation -- 7.4. Complex colored water-filling gain allocation algorithm: one gain per coefficient case -- 7.4.1. Problem statement and assumptions -- 7.4.2. Optimal gain allocation resulting from minimization of MSWD -- 7.4.3. Implementation -- 7.5. Simulation results -- 7.5.1. cPtNLMS algorithm simulation results -- 7.5.2. cPtAP algorithm simulation results -- 7.6. Transform domain PtNLMS algorithms -- 7.6.1. Derivation -- 7.6.2. Implementation -- 7.6.3. Simulation results -- 7.7. Summary -- Chapter 8. Computational Complexity for PtNLMS Algorithms -- 8.1. LMS computational complexity -- 8.2. NLMS computational complexity -- 8.3. PtNLMS computational complexity -- 8.4. Computational complexity for specific PtNLMS algorithms -- 8.5. Summary -- Conclusion -- Appendix 1. Calculation of β_i^(0), β_{i,j}^(1) and β_i^(2) -- Appendix 2. Impulse Response Legend -- Bibliography -- Index.
Abstract:
The topic of this book is proportionate-type normalized least mean squares (PtNLMS) adaptive filtering algorithms, which estimate an unknown impulse response by assigning each filter coefficient an adaptation gain proportionate to its current estimated magnitude and updating the estimate with the current measured error. These algorithms offer low computational complexity and fast convergence for the sparse impulse responses encountered in network and acoustic echo cancellation applications. New PtNLMS algorithms are developed by choosing gains that optimize user-defined criteria, such as the mean square error, at all times. PtNLMS algorithms are also extended from real-valued to complex-valued signals, and the computational complexity of the presented algorithms is examined.
Contents: 1. Introduction to PtNLMS Algorithms. 2. LMS Analysis Techniques. 3. PtNLMS Analysis Techniques. 4. Algorithms Designed Based on Minimization of User-Defined Criteria. 5. Probability Density of WD for PtLMS Algorithms. 6. Adaptive Step-Size PtNLMS Algorithms. 7. Complex PtNLMS Algorithms. 8. Computational Complexity for PtNLMS Algorithms.
About the Authors: Kevin Wagner has been a physicist with the Radar Division of the Naval Research Laboratory, Washington, DC, USA since 2001. His research interests are in the areas of adaptive signal processing and non-convex optimization. Milos Doroslovacki has been with the Department of Electrical and Computer Engineering at George Washington University, USA since 1995, where he is now an Associate Professor. His main research interests are in the fields of adaptive signal processing, communication signals and systems, discrete-time signal and system theory, and wavelets and their applications.
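To make the approach described in the abstract concrete, the following is a minimal sketch, in Python with NumPy, of the classical PNLMS update that the PtNLMS family generalizes; it is illustrative only and is not code or notation taken from the book. Each filter tap receives a step-size gain that follows the magnitude of its current estimate, so the few large taps of a sparse impulse response converge quickly, while a floor controlled by the illustrative parameters rho and delta_p keeps inactive taps adapting.

import numpy as np

def pnlms(x, d, L, beta=0.5, rho=0.01, delta_p=0.01, delta=1e-4):
    # Illustrative sketch of the classical PNLMS adaptive filter (standard form
    # from the echo-cancellation literature, not the book's exact formulation):
    # identify an unknown length-L impulse response from input x and desired d.
    w = np.zeros(L)                            # current impulse response estimate
    u = np.zeros(L)                            # regressor: L most recent input samples
    e = np.zeros(len(d))                       # error signal, returned for monitoring
    for n in range(len(d)):
        u = np.concatenate(([x[n]], u[:-1]))   # shift the newest input sample in
        e[n] = d[n] - w @ u                    # current measured error
        # Proportionate gains: follow each tap's estimated magnitude, with a
        # floor (rho, delta_p) so inactive taps still adapt; normalize so the
        # gains average to one across the filter.
        gamma = np.maximum(rho * max(delta_p, np.max(np.abs(w))), np.abs(w))
        g = gamma / np.mean(gamma)
        # Gain-weighted, normalized LMS update; delta regularizes the division.
        w = w + beta * g * u * e[n] / (u @ (g * u) + delta)
    return w, e

Setting the gain vector g identically to one recovers the ordinary NLMS update; the PtNLMS variants surveyed in Chapter 1 (PNLMS++, IPNLMS, MPNLMS, EPNLMS and others) differ chiefly in how this gain vector is computed.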
Local Note:
Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2017. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
Added Author:
Doroslovacki, Milos.