Source: Google Book
Stochastic approximation and its applications
- Author: Chen, Hanfu.
- Publication: New York : Kluwer Academic Publishers, ©2003.
- Physical description: 1 online resource (xv, 357 pages) : illustrations.
- Series: Nonconvex optimization and its applications ; volume 64
- Subjects: Stochastic approximation. ; Approximation stochastique. ; MATHEMATICS -- Probability & Statistics -- General. ; Electronic books.
- ISBN: 0306481669 , 9780306481666
- ISBN: 1402008066 , 9781402008061
- Notes: Includes bibliographical references (pages 347-353) and index.
- Contents:
  - Robbins-Monro algorithm: Finding zeros of a function ; Probabilistic method ; ODE method ; Truncated RM algorithm and TS method ; Weak convergence method
  - Stochastic approximation algorithms with expanding truncations: Motivation ; General convergence theorems by TS method ; Convergence under state-independent conditions ; Necessity of noise condition ; Non-additive noise ; Connection between trajectory convergence and property of limit points ; Robustness of stochastic approximation algorithms ; Dynamic stochastic approximation
  - Asymptotic properties of stochastic approximation algorithms: Convergence rate: nondegenerate case ; Convergence rate: degenerate case ; Asymptotic normality ; Asymptotic efficiency
  - Optimization by stochastic approximation: Kiefer-Wolfowitz algorithm with randomized differences ; Asymptotic properties of KW algorithm ; Global optimization ; Asymptotic behavior of global optimization algorithm ; Application to model reduction
  - Application to signal processing: Recursive blind identification ; Principal component analysis ; Recursive blind identification by PCA ; Constrained adaptive filtering ; Adaptive filtering by sign algorithms ; Asynchronous stochastic approximation
  - Application to systems and control: Application to identification and adaptive control ; Application to adaptive stabilization ; Application to pole assignment for systems with unknown coefficients ; Application to adaptive regulation
  - Appendices: Probability space ; Random variable and distribution function ; Expectation ; Convergence theorems and inequalities ; Conditional expectation ; Independence ; Ergodicity ; Convergence theorems for martingales ; Convergence theorems for MDS I ; Borel-Cantelli-Levy lemma ; Convergence criteria for adapted sequences ; Convergence theorems for MDS II ; Weighted sum of MDS
  - References
- Abstract: This book presents the recent development of stochastic approximation algorithms with expanding truncations based on the TS (trajectory-subsequence) method, a newly developed method for convergence analysis. This approach is so powerful that the conditions used to guarantee convergence are considerably weaker than those required by the classical probability and ODE methods. The general convergence theorem is presented for sample paths and is proved in a purely deterministic way. The sample-path formulation of the theorems is particularly convenient for applications. The convergence theory takes both observation noise and structural error of the regression function into consideration. Convergence rates, asymptotic normality, and other asymptotic properties are presented as well. Applications of the developed theory to global optimization, blind channel identification, adaptive filtering, system parameter identification, adaptive stabilization, and other problems arising from engineering fields are demonstrated.
- Electronic resource: https://dbs.tnua.edu.tw/login?url=https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=99474
- System number: 005306458
- Material type: Electronic book
- Reader tags: Login required
- 引用網址: 複製連結
Estimating unknown parameters based on observation data containing information about the parameters is ubiquitous in diverse areas of both theory and application. For example, in system identification the unknown system coefficients are estimated on the basis of input-output data of the control system; in adaptive control systems the adaptive control gain should be defined based on observation data in such a way that the gain asymptotically tends to the optimal one; in blind channel identification the channel coefficients are estimated using the output data obtained at the receiver; in signal processing the optimal weighting matrix is estimated on the basis of observations; in pattern classification the parameters specifying the partition hyperplane are searched by learning; and more examples may be added to this list.

All these parameter estimation problems can be transformed into a root-seeking problem for an unknown function. To see this, let y_k denote the observation at time k, i.e., the information available about the unknown parameters at time k. It can be assumed that the parameter under estimation, denoted by x0, is a root of some unknown function f(·). This is not a restriction, because, for example, f(x) = x0 - x may serve as such a function.
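The root-seeking formulation above is exactly what the Robbins-Monro algorithm addresses: it drives the iterate toward a root of f using only noisy observations of f at the current iterate. A minimal sketch, assuming the illustrative regression function f(x) = theta - x with additive Gaussian noise (the function names and noise model are assumptions for the example, not the book's notation):

```python
import random

def robbins_monro(noisy_f, x0, n_steps=20000, gain=1.0, seed=0):
    """Robbins-Monro root search: x_{k+1} = x_k + a_k * y_k, where y_k
    is a noisy observation of f(x_k) and the step sizes a_k = gain/(k+1)
    satisfy sum a_k = inf and sum a_k^2 < inf."""
    rng = random.Random(seed)
    x = x0
    for k in range(n_steps):
        a_k = gain / (k + 1)       # diminishing step size
        y_k = noisy_f(x, rng)      # noisy measurement of f at the current iterate
        x = x + a_k * y_k
    return x

# The root of f(x) = theta - x is theta = 3.0; the algorithm never sees
# theta directly, only noisy evaluations of f.
theta = 3.0
est = robbins_monro(lambda x, rng: (theta - x) + rng.gauss(0.0, 0.5), x0=0.0)
```

With this particular f and a_k = 1/(k+1), each update is a running average of the noisy samples theta + noise, which makes the convergence of the iterate easy to see in this special case.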