Source: Google Books
Advances in large margin classifiers
- Other authors: Smola, Alexander J.
- Publication: Cambridge, Mass. : MIT Press, ©2000.
- Physical description: 1 online resource (vi, 412 pages) : illustrations.
- Series: Advances in neural information processing systems [i.e. Neural information processing series]
- Subjects: Machine learning ; Kernel functions ; Algorithms ; Computer science ; Engineering & applied sciences ; COMPUTERS -- Enterprise Applications -- Business Intelligence Tools ; COMPUTERS -- Intelligence (AI) & Semantics ; COMPUTER SCIENCE / General ; Electronic books
- ISBN: 0262194481, 9780262194488
- Note: Includes bibliographical references (pages 389-407) and index.
- Summary: The concept of large margins is a unifying principle for the analysis of many different approaches to the classification of data from examples, including boosting, mathematical programming, neural networks, and support vector machines. The fact that it is the margin, or confidence level, of a classification--that is, a scale parameter--rather than a raw training error that matters has become a key tool for dealing with classifiers. This book shows how this idea applies to both the theoretical analysis and the design of algorithms. The book provides an overview of recent developments in large margin classifiers, examines connections with other methods (e.g., Bayesian inference), and identifies strengths and weaknesses of the method, as well as directions for future research. Among the contributors are Manfred Opper, Vladimir Vapnik, and Grace Wahba. (A brief notation sketch of the margin idea follows this record.)
- Electronic resource: https://dbs.tnua.edu.tw/login?url=https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=138583
- System number: 005320195
- Material type: Electronic book
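Margin sketch: the summary above contrasts the margin, or confidence, of a classification with raw training error. As a minimal illustration in notation of our own choosing (not quoted from the book), take a linear classifier f(x) = \langle w, x \rangle + b and labelled examples (x_i, y_i) with y_i \in \{-1, +1\}. The functional margin of example i is y_i f(x_i) and the geometric margin is y_i f(x_i) / \|w\|; a large positive margin indicates a confident, correct decision, whereas the training error only records whether the sign of f(x_i) agrees with y_i. The canonical hard-margin support vector machine maximizes the smallest geometric margin over the training set, usually written as

    \min_{w, b} \; \tfrac{1}{2} \|w\|^2 \quad \text{subject to} \quad y_i \bigl( \langle w, x_i \rangle + b \bigr) \ge 1, \qquad i = 1, \dots, m .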