Nello Cristianini studied at the University of Trieste in Italy, Royal Holloway, University of London, the University of Bristol, and the University of California, Santa Cruz. He is an accomplished young researcher in the theory and application of support vector machines and other learning systems, and has published many papers in this field in journals and at international conferences. John Shawe-Taylor studied at the University of Cambridge, the University of Ljubljana in Slovenia, Simon Fraser University in Canada, Imperial College London, and Royal Holloway, University of London. He has published many papers on learning systems, discrete mathematics, and computer science. He is a professor in the Department of Computer Science at Royal Holloway, University of London, and coordinator of a European research collaboration of sixteen universities established to study neural and computational learning.
Table of Contents
Preface
Notation
1 The Learning Methodology
  1.1 Supervised Learning
  1.2 Learning and Generalisation
  1.3 Improving Generalisation
  1.4 Attractions and Drawbacks of Learning
  1.5 Support Vector Machines for Learning
  1.6 Exercises
  1.7 Further Reading and Advanced Topics
2 Linear Learning Machines
  2.1 Linear Classification
    2.1.1 Rosenblatt's Perceptron
    2.1.2 Other Linear Classifiers
    2.1.3 Multi-class Discrimination
  2.2 Linear Regression
    2.2.1 Least Squares
    2.2.2 Ridge Regression
  2.3 Dual Representation of Linear Machines
  2.4 Exercises
  2.5 Further Reading and Advanced Topics
3 Kernel-Induced Feature Spaces
  3.1 Learning in Feature Space
  3.2 The Implicit Mapping into Feature Space
  3.3 Making Kernels
    3.3.1 Characterisation of Kernels
    3.3.2 Making Kernels from Kernels
    3.3.3 Making Kernels from Features
  3.4 Working in Feature Space
  3.5 Kernels and Gaussian Processes
  3.6 Exercises
  3.7 Further Reading and Advanced Topics
4 Generalisation Theory
  4.1 Probably Approximately Correct Learning
  4.2 Vapnik Chervonenkis (VC) Theory
  4.3 Margin-Based Bounds on Generalisation
    4.3.1 Maximal Margin Bounds
    4.3.2 Margin Percentile Bounds
    4.3.3 Soft Margin Bounds
  4.4 Other Bounds on Generalisation and Luckiness
  4.5 Generalisation for Regression
  4.6 Bayesian Analysis of Learning
  4.7 Exercises
  4.8 Further Reading and Advanced Topics
5 Optimisation Theory
  5.1 Problem Formulation
  5.2 Lagrangian Theory
  5.3 Duality
  5.4 Exercises
  5.5 Further Reading and Advanced Topics
6 Support Vector Machines
  6.1 Support Vector Classification
    6.1.1 The Maximal Margin Classifier
    6.1.2 Soft Margin Optimisation
    6.1.3 Linear Programming Support Vector Machines
  6.2 Support Vector Regression
    6.2.1 ε-Insensitive Loss Regression
    6.2.2 Kernel Ridge Regression
    6.2.3 Gaussian Processes
  6.3 Discussion
  6.4 Exercises
  6.5 Further Reading and Advanced Topics
7 Implementation Techniques
  7.1 General Issues
  7.2 The Naive Solution: Gradient Ascent
  7.3 General Techniques and Packages
  7.4 Chunking and Decomposition
  7.5 Sequential Minimal Optimisation (SMO)
    7.5.1 Analytical Solution for Two Points
    7.5.2 Selection Heuristics
  7.6 Techniques for Gaussian Processes
  7.7 Exercises
  7.8 Further Reading and Advanced Topics
8 Applications of Support Vector Machines
  8.1 Text Categorisation
    8.1.1 A Kernel from IR Applied to Information Filtering
  8.2 Image Recognition
    8.2.1 Aspect Independent Classification
    8.2.2 Colour-Based Classification
  8.3 Hand-written Digit Recognition
  8.4 Bioinformatics
    8.4.1 Protein Homology Detection
    8.4.2 Gene Expression
  8.5 Further Reading and Advanced Topics
A Pseudocode for the SMO Algorithm
B Background Mathematics
  B.1 Vector Spaces
  B.2 Inner Product Spaces
  B.3 Hilbert Spaces
  B.4 Operators, Eigenvalues and Eigenvectors
References
Index