The most exciting development in parallel computer architecture is the convergence of traditionally disparate approaches on a common machine structure. This book explains the forces behind this convergence of shared-memory, message-passing, data parallel, and data-driven computing architectures. It then examines the design issues that are critical to all parallel architecture across the full range of modern design, covering data access, communication performance, coordination of cooperative work, and correct implementation of useful semantics. It not only describes the hardware and software techniques for addressing each of these issues but also explores how these techniques interact in the same system. Examining architecture from an application-driven perspective, it provides comprehensive discussions of parallel programming for high performance and of workload-driven evaluation, based on understanding hardware-software interactions.
About the Author
No author biography is currently available for 《人工智能(英文版)》.
Table of Contents
Contents
Preface
1 Introduction
1.1 What Is AI?
1.2 Approaches to Artificial Intelligence
1.3 Brief History of AI
1.4 Plan of the Book
1.5 Additional Readings and Discussion
Exercises
I Reactive Machines
2 Stimulus-Response Agents
2.1 Perception and Action
2.1.1 Perception
2.1.2 Action
2.1.3 Boolean Algebra
2.1.4 Classes and Forms of Boolean Functions
2.2 Representing and Implementing Action Functions
2.2.1 Production Systems
2.2.2 Networks
2.2.3 The Subsumption Architecture
2.3 Additional Readings and Discussion
Exercises
3 Neural Networks
3.1 Introduction
3.2 Training Single TLUs
3.2.1 TLU Geometry
3.2.2 Augmented Vectors
3.2.3 Gradient Descent Methods
3.2.4 The Widrow-Hoff Procedure
3.2.5 The Generalized Delta Procedure
3.2.6 The Error-Correction Procedure
3.3 Neural Networks
3.3.1 Motivation
3.3.2 Notation
3.3.3 The Backpropagation Method
3.3.4 Computing Weight Changes in the Final Layer
3.3.5 Computing Changes to the Weights in Intermediate Layers
3.4 Generalization, Accuracy, and Overfitting
3.5 Additional Readings and Discussion
Exercises
4 Machine Evolution
4.1 Evolutionary Computation
4.2 Genetic Programming
4.2.1 Program Representation in GP
4.2.2 The GP Process
4.2.3 Evolving a Wall-Following Robot
4.3 Additional Readings and Discussion
Exercises
5 State Machines
5.1 Representing the Environment by Feature Vectors
5.2 Elman Networks
5.3 Iconic Representations
5.4 Blackboard Systems
5.5 Additional Readings and Discussion
Exercises
6 Robot Vision
6.1 Introduction
6.2 Steering an Automobile
6.3 Two Stages of Robot Vision
6.4 Image Processing
6.4.1 Averaging
6.4.2 Edge Enhancement
6.4.3 Combining Edge Enhancement with Averaging
6.4.4 Region Finding
6.4.5 Using Image Attributes Other Than Intensity
6.5 Scene Analysis
6.5.1 Interpreting Lines and Curves in the Image
6.5.2 Model-Based Vision
6.6 Stereo Vision and Depth Information
6.7 Additional Readings and Discussion
Exercises
II Search in State Spaces
7 Agents That Plan
7.1 Memory Versus Computation
7.2 State-Space Graphs
7.3 Searching Explicit State Spaces
7.4 Feature-Based State Spaces
7.5 Graph Notation
7.6 Additional Readings and Discussion
Exercises
8 Uninformed Search
8.1 Formulating the State Space
8.2 Components of Implicit State-Space Graphs
8.3 Breadth-First Search
8.4 Depth-First or Backtracking Search
8.5 Iterative Deepening
8.6 Additional Readings and Discussion
Exercises
9 Heuristic Search
9.1 Using Evaluation Functions
9.2 A General Graph-Searching Algorithm
9.2.1 Algorithm A*
9.2.2 Admissibility of A*
9.2.3 The Consistency (or Monotone) Condition
9.2.4 Iterative-Deepening A*
9.2.5 Recursive Best-First Search
9.3 Heuristic Functions and Search Efficiency
9.4 Additional Readings and Discussion
Exercises
10 Planning, Acting, and Learning
10.1 The Sense/Plan/Act Cycle
10.2 Approximate Search
10.2.1 Island-Driven Search
10.2.2 Hierarchical Search
10.2.3 Limited-Horizon Search
10.2.4 Cycles
10.2.5 Building Reactive Procedures
10.3 Learning Heuristic Functions
10.3.1 Explicit Graphs
10.3.2 Implicit Graphs
10.4 Rewards Instead of Goals
10.5 Additional Readings and Discussion
Exercises
11 Alternative Search Formulations and Applications
11.1 Assignment Problems
11.2 Constructive Methods
11.3 Heuristic Repair
11.4 Function Optimization
Exercises
12 Adversarial Search
12.1 Two-Agent Games
12.2 The Minimax Procedure
12.3 The Alpha-Beta Procedure
12.4 The Search Efficiency of the Alpha-Beta Procedure
12.5 Other Important Matters
12.6 Games of Chance
12.7 Learning Evaluation Functions
12.8 Additional Readings and Discussion
Exercises
III Knowledge Representation and Reasoning
13 The Propositional Calculus
13.1 Using Constraints on Feature Values
13.2 The Language
13.3 Rules of Inference
13.4 Definition of Proof
13.5 Semantics
13.5.1 Interpretations
13.5.2 The Propositional Truth Table
13.5.3 Satisfiability and Models
13.5.4 Validity
13.5.5 Equivalence
13.5.6 Entailment
13.6 Soundness and Completeness
13.7 The PSAT Problem
13.8 Other Important Topics
13.8.1 Language Distinctions
13.8.2 Metatheorems
13.8.3 Associative Laws
13.8.4 Distributive Laws
Exercises
14 Resolution in the Propositional Calculus
14.1 A New Rule of Inference: Resolution
14.1.1 Clauses as wffs
14.1.2 Resolution on Clauses
14.1.3 Soundness of Resolution
14.2 Converting Arbitrary wffs to Conjunctions of Clauses
14.3 Resolution Refutations
14.4 Resolution Refutation Search Strategies
14.4.1 Ordering Strategies
14.4.2 Refinement Strategies
14.5 Horn Clauses
Exercises
15 The Predicate Calculus
15.1 Motivation
15.2 The Language and Its Syntax
15.3 Semantics
15.3.1 Worlds
15.3.2 Interpretations
15.3.3 Models and Related Notions
15.3.4 Knowledge
15.4 Quantification
15.5 Semantics of Quantifiers
15.5.1 Universal Quantifiers
15.5.2 Existential Quantifiers
15.5.3 Useful Equivalences
15.5.4 Rules of Inference
15.6 Predicate Calculus as a Language for Representing Knowledge
15.6.1 Conceptualizations
15.6.2 Examples
15.7 Additional Readings and Discussion
Exercises
16 Resolution in the Predicate Calculus
16.1 Unification
16.2 Predicate-Calculus Resolution
16.3 Completeness and Soundness
16.4 Converting Arbitrary wffs to Clause Form
16.5 Using Resolution to Prove Theorems
16.6 Answer Extraction
16.7 The Equality Predicate
16.8 Additional Readings and Discussion
Exercises
17 Knowledge-Based Systems
17.1 Confronting the Real World
17.2 Reasoning Using Horn Clauses
17.3 Maintenance in Dynamic Knowledge Bases
17.4 Rule-Based Expert Systems
17.5 Rule Learning
17.5.1 Learning Propositional Calculus Rules
17.5.2 Learning First-Order Logic Rules
17.5.3 Explanation-Based Generalization
17.6 Additional Readings and Discussion
Exercises
18 Representing Commonsense Knowledge
18.1 The Commonsense World
18.1.1 What Is Commonsense Knowledge?
18.1.2 Difficulties in Representing Commonsense Knowledge
18.1.3 The Importance of Commonsense Knowledge
18.1.4 Research Areas
18.2 Time
18.3 Knowledge Representation by Networks
18.3.1 Taxonomic Knowledge
18.3.2 Semantic Networks
18.3.3 Nonmonotonic Reasoning in Semantic Networks
18.3.4 Frames
18.4 Additional Readings and Discussion
Exercises
19 Reasoning with Uncertain Information
19.1 Review of Probability Theory
19.1.1 Fundamental Ideas
19.1.2 Conditional Probabilities
19.2 Probabilistic Inference
19.2.1 A General Method
19.2.2 Conditional Independence
19.3 Bayes Networks
19.4 Patterns of Inference in Bayes Networks
19.5 Uncertain Evidence
19.6 D-Separation
19.7 Probabilistic Inference in Polytrees
19.7.1 Evidence Above
19.7.2 Evidence Below
19.7.3 Evidence Above and Below
19.7.4 A Numerical Example
19.8 Additional Readings and Discussion
Exercises
20 Learning and Acting with Bayes Nets
20.1 Learning Bayes Nets
20.1.1 Known Network Structure
20.1.2 Learning Network Structure
20.2 Probabilistic Inference and Action
20.2.1 The General Setting
20.2.2 An Extended Example
20.2.3 Generalizing the Example
20.3 Additional Readings and Discussion
Exercises
IV Planning Methods Based on Logic
21 The Situation Calculus
21.1 Reasoning about States and Actions
21.2 Some Difficulties
21.2.1 Frame Axioms
21.2.2 Qualifications
21.2.3 Ramifications
21.3 Generating Plans
21.4 Additional Readings and Discussion
Exercises
22 Planning
22.1 STRIPS Planning Systems
22.1.1 Describing States and Goals
22.1.2 Forward Search Methods
22.1.3 Recursive STRIPS
22.1.4 Plans with Run-Time Conditionals
22.1.5 The Sussman Anomaly
22.1.6 Backward Search Methods
22.2 Plan Spaces and Partial-Order Planning
22.3 Hierarchical Planning
22.3.1 ABSTRIPS
22.3.2 Combining Hierarchical and Partial-Order Planning
22.4 Learning Plans
22.5 Additional Readings and Discussion
Exercises
V Communication and Integration
23 Multiple Agents
23.1 Interacting Agents
23.2 Models of Other Agents
23.2.1 Varieties of Models
23.2.2 Simulation Strategies
23.2.3 Simulated Databases
23.2.4 The Intentional Stance
23.3 A Modal Logic of Knowledge
23.3.1 Modal Operators
23.3.2 Knowledge Axioms
23.3.3 Reasoning about Other Agents' Knowledge
23.3.4 Predicting Actions of Other Agents
23.4 Additional Readings and Discussion
Exercises
24 Communication among Agents
24.1 Speech Acts
24.1.1 Planning Speech Acts
24.1.2 Implementing Speech Acts
24.2 Understanding Language Strings
24.2.1 Phrase-Structure Grammars
24.2.2 Semantic Analysis
24.2.3 Expanding the Grammar
24.3 Efficient Communication
24.3.1 Use of Context
24.3.2 Use of Knowledge to Resolve Ambiguities
24.4 Natural Language Processing
24.5 Additional Readings and Discussion
Exercises
25 Agent Architectures
25.1 Three-Level Architectures
25.2 Goal Arbitration
25.3 The Triple-Tower Architecture
25.4 Bootstrapping
25.5 Additional Readings and Discussion
Exercises
Bibliography
Index