Topic  Name  Description 

Course Introduction  Course Syllabus  
Course Terms of Use  
Unit 1: Intelligent Agents and Problems of AI  Unit 1 Learning Outcomes  
1.1: Is It an Agent, or Just a Program?  The University of Memphis: Stan Franklin and Art Graesser's "Is It an Agent, or Just a Program?"  Read this page to learn about the advent of software agents. Memorize the definitions of the AIMA, Maes, KidSim, Hayes-Roth, IBM, SodaBot, Foner, and Brustoloni agents. Make sure you know how to define "agency" and work to memorize Franklin's definition of an agent. Read through the examples of the different taxonomies and classifications of agents. 
1.1.1: John Lloyd on Intelligent Agents  John Lloyd's "Intelligent Agents"  Watch the first part of this three-part video series by John Lloyd. As he lectures, you may wish to work through the slides included on the page. Throughout the lecture, Professor Lloyd talks about AIMA agents and presents some pertinent examples. Compare his thoughts with yours and Franklin's from the previous sections. 
1.1.2: Stan Franklin - A Cognitive Theory of Everything  Stan Franklin's "A Cognitive Theory of Everything"  Watch this video, which explains how intelligent agents fit into "the big picture." Ask yourself whether Franklin's thoughts make sense to you. 
1.2: Overview of AI General Problems  Wikipedia: "Artificial Intelligence"  Read this entry on AI. After you read, you should know the meaning of terms such as knowledge representation, planning, learning, natural language processing, motion and manipulation, perception, social intelligence, creativity, and general intelligence. 
1.2.1: Knowledge Representation  Maurice Pagnucco's "Knowledge Representation and Reasoning"  Watch the first part of this three-part video series by Maurice Pagnucco. After you watch, you should be able to define the terms knowledge, representation, and reasoning; realize the advantages of this approach; and define the forms of knowledge representation. 
2.2.2: Planning  Jussi Rintanen's "Planning"  Watch this lecture. You may wish to work through the slides provided on the right-hand side of the screen as Rintanen lectures. After viewing the lecture, you should understand why planning can be difficult and be able to define the term "transition systems." 
1.2.3: Learning  Olivier Bousquet's "Introduction to Learning Theory"  Watch this lecture, working through the slides provided on the right-hand side of the screen as you listen. After you watch, you should have a general understanding of "learning theory," be able to differentiate between deduction and induction, and describe, in general terms, the concept of probability and Bayes' rule. 
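As a quick preview of the lecture's final topic, Bayes' rule can be captured in a few lines of Python. This is only an illustrative sketch (not part of the assigned materials), and the test/prevalence numbers below are made up for the example:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), where P(E) is expanded
# over H and not-H via the law of total probability.
def bayes_posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H given evidence E."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Illustrative numbers: a test with 90% sensitivity and a 5% false-positive
# rate, applied to a condition with 1% prevalence.
posterior = bayes_posterior(0.01, 0.90, 0.05)
```

Note how the posterior (about 0.15) stays far below the test's sensitivity because the prior is so small; this base-rate effect is a recurring theme in probabilistic learning.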
1.3: Approaches to AI  Wikipedia: "Artificial Intelligence Approaches"  Review this entry on the different paradigms that guide AI research and make sure you know the differences between them. 
1.3.1: Systems with General Intelligence  Michael Thielscher's "Systems with General Intelligence"  Watch this lecture about general problems in AI, working through the slides provided on the right-hand side of the screen. After you watch, you should be familiar with the chess-as-an-intelligent-system example, understand what general game playing is about, and identify the major questions with which general AI is concerned. Do not let yourself get bogged down by the details; work for a general understanding of AI. 
1.4: Agents in Code  National Taiwan Normal University: Tsung-Che Chiang's "Vacuum Cleaner World"  Complete this activity. 
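Before starting the activity, it may help to see how small such an agent can be. The sketch below is a minimal version of the AIMA two-square vacuum world with a simple reflex agent; the environment layout, percept format, and step count are assumptions for illustration, not the activity's specification:

```python
# Simple reflex agent for a two-square vacuum world (squares "A" and "B").
def reflex_vacuum_agent(location, status):
    """Percept is (location, status); returns an action."""
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def run(world, location, steps=4):
    """Run the agent for a few steps; returns final world state and action trace."""
    trace = []
    for _ in range(steps):
        action = reflex_vacuum_agent(location, world[location])
        trace.append(action)
        if action == "Suck":
            world[location] = "Clean"
        else:
            location = "B" if action == "Right" else "A"
    return world, trace
```

Starting from two dirty squares at location "A", the agent sucks, moves right, sucks, and moves left, leaving both squares clean.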
Unit 2: Solving Problems by Searching  Unit 2 Learning Outcomes  
2.1.1: Graph Definition  Cameron McLeman's "Graph"  Study the definition of a graph from this section and draw some examples of your own. 
2.1.2: Binary Tree  Thomas Niemann's "Binary Tree"  Make sure you know how a binary tree differs from a regular tree after the reading this section. 
2.1.3: Example Problem: Minimum Spanning Tree  Cameron McLeman's "Minimum Spanning Tree"  Read about minimum spanning trees and try to figure out how Prim's algorithm works; the solution can be found here. Before you check the solution, try to solve the problem yourself. After you have solved the problem (or if you have spent a couple of hours working on it, and are stumped!), study the solution. 
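If you get stuck, the core idea of Prim's algorithm fits in a few lines: grow the tree from one node, always adding the cheapest edge that reaches a new node. This is an illustrative sketch (the adjacency-dict representation is an assumption, not the reading's notation):

```python
import heapq

# Prim's algorithm on an undirected weighted graph represented as
# an adjacency dict: {node: [(weight, neighbor), ...]}.
def prim_mst_weight(graph, start):
    """Return the total weight of a minimum spanning tree."""
    visited = set()
    frontier = [(0, start)]  # (weight of edge into the tree, node)
    total = 0
    while frontier and len(visited) < len(graph):
        weight, node = heapq.heappop(frontier)
        if node in visited:
            continue  # a cheaper edge already reached this node
        visited.add(node)
        total += weight
        for w, neighbor in graph[node]:
            if neighbor not in visited:
                heapq.heappush(frontier, (w, neighbor))
    return total
```

On a triangle with edge weights 1, 2, and 4, the algorithm keeps the two cheaper edges for a total weight of 3.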
2.2.1: Binary Search Trees  Thomas Niemann's "Binary Search Tree"  Read the article to learn how to build and search binary trees. 
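To make the reading concrete, here is a minimal binary search tree in Python (an illustrative sketch, not Niemann's implementation): keys smaller than a node go in its left subtree, larger keys in its right.

```python
# Minimal binary search tree: insert and membership test.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Insert key and return the (possibly new) root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root  # duplicates are ignored

def search(root, key):
    """Return True if key is in the tree."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False
```

Searching follows a single root-to-leaf path, so lookups cost O(height); the balanced variants in the next sections exist to keep that height logarithmic.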
2.2.2: Red-Black Trees  Thomas Niemann's "Red-Black Trees"  Read this article. After you read, you should know how a binary tree differs from a red-black tree and understand the basics of building and searching red-black trees. 
2.2.3: Skip List  Thomas Niemann's "Skip List"  Read this article to learn how to build and search a skip list. 
2.3.1: Depth-First Search  Wikipedia: "Depth-First Search"  Read this article to learn how depth-first search works. Study the included example. 
2.3.2: Breadth-First Search  Wikipedia: "Breadth-First Search"  Read this article and make sure you know the differences between depth-first and breadth-first search algorithms. 
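The difference between the two algorithms comes down to one line: which frontier node is expanded next. A sketch (illustrative only; the adjacency-dict graph format is an assumption):

```python
from collections import deque

# DFS and BFS differ only in the frontier discipline:
# BFS expands the oldest node (a queue), DFS the newest (a stack).
def traverse(graph, start, bfs=True):
    """Return nodes in visit order; graph is {node: [neighbors]}."""
    frontier = deque([start])
    seen = {start}
    order = []
    while frontier:
        node = frontier.popleft() if bfs else frontier.pop()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return order
```

On the same graph, BFS visits nodes level by level while DFS dives down one branch before backtracking, which is exactly the contrast the two Wikipedia articles develop.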
2.3.3: Dijkstra's Algorithm  Wikipedia: "Dijkstra's Algorithm"  Read this article to learn how Dijkstra's algorithm works. Work through the example in the article. 
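The article's worked example can be reproduced with a compact priority-queue version of the algorithm. This is an illustrative sketch (the graph representation is an assumption); note that edge weights must be non-negative for Dijkstra's algorithm to be correct:

```python
import heapq

# Dijkstra's algorithm: repeatedly settle the unvisited node with the
# smallest tentative distance, relaxing its outgoing edges.
def dijkstra(graph, source):
    """Shortest distances from source; graph is {node: [(cost, neighbor), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    settled = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in settled:
            continue  # stale heap entry
        settled.add(node)
        for cost, neighbor in graph[node]:
            new = d + cost
            if new < dist.get(neighbor, float("inf")):
                dist[neighbor] = new
                heapq.heappush(heap, (new, neighbor))
    return dist
```

In the three-node example below, the indirect path a→b→c (cost 3) beats the direct edge a→c (cost 4), which is the behavior to watch for when working through the article's example.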
2.4: Search Algorithms in General  Wikipedia: "Search Algorithm"  Read this article for an overview of search techniques. 
2.5: Basic Notions in Graph Theory  Zoubin Ghahramani's "Graphical Models: Parts 1-3"  Watch this three-part lecture on graph theory to develop a better understanding of how to use graphs in AI. After viewing the first part, you should know about directed, undirected, and factor graphs, conditional independence, d-separation, and plate notation. The second part will teach you about inference in graphical models, key ideas in belief propagation, and the junction tree algorithm. Watch the third part for fun, trying to follow along as much as possible. 
2.6: Graph Examples in Code  Artificial Intelligence: A Modern Approach: "Route Finding Agent"  Create a route-finding agent given the environment in the form of a graph. One possible solution can be found under the "Route Finding Agent" section. Study the solution code after you have solved the problem, or if you have spent a substantial amount of time and are stuck (this problem could take you up to 12 hours to solve!). 
Unit 3: Logical Agents and Knowledge Representation  Unit 3 Learning Outcomes  
3.1: Logic Programming  Wikipedia: "Logic Programming"  Read this article on logic programming. Make sure you understand the differences between abductive logic, metalogic, constraint logic, concurrent logic, inductive logic, higher-order logic, and linear logic programming. 
Alwen Tiu's "Introduction to Logic: Parts 1-3"  Watch the first lecture on logic and compare it to the reading above. In this lecture, you will learn about the syntax and semantics of propositional logic, boolean functions, satisfiability, and binary decision trees. You will need to know the difference between conjunctive and disjunctive normal forms. Then, watch the second lecture to learn about first-order logic. Finally, watch the third lecture, which presents modal logic. Make sure you know the differences between propositional, first-order, and modal logic. 

3.2.1: Bayesian Network  Paulo C.G. Costa and Kathryn B. Laskey's "Bayesian Networks"  Read this article on Bayesian networks. Focus on the definitions it provides and work through the example provided. 
Christopher Bishop's "Introduction to Bayesian Inference"  Watch this lecture, which discusses Bayesian inference. You may wish to work through the slides provided on the right-hand side of the screen as Bishop lectures. Focus on learning the rules of probability and understanding the terms Bayes' theorem, Bayesian inference, and probabilistic graphical models. Make sure you know how factor graphs are used. 

3.2.2: Hidden Markov Model  Wikipedia: "Hidden Markov Model"  Read this article, which discusses the hidden Markov model. 
3.2.3.1: Kalman Filter  Wikipedia: "Kalman Filter"  Read this article on the Kalman Filter. 
3.2.3.2: Decision Theory  Wikipedia: "Decision Theory"  Read this article, making sure you understand normative and descriptive decision theory and what kinds of decisions need a theory. 
3.3: Knowledge Representation and Reasoning  Maurice Pagnucco's "Knowledge Representation and Reasoning"  Watch this lecture. Focus on learning how to represent what we know and how to use representation to make inferences about that knowledge. Work carefully through the examples included in the lecture. 
Massachusetts Institute of Technology: Randall Davis, Howard Shrobe, and Peter Szolovits's "What Is a Knowledge Representation?"  Read this article, which presents different views on knowledge representation. Contrast these views with your own. 

3.4: Coding Drills  Marty Hall's "N-Queens Problem Demo"  Follow the instructions and solve this problem. 
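If you want a starting point before looking at the demo, the standard approach is backtracking: place one queen per row and undo any placement that leads to a dead end. The sketch below is one illustrative way to do it (not the demo's solution); it tracks attacked columns and diagonals in sets so each placement check is O(1):

```python
# N-Queens by backtracking: one queen per row, constant-time conflict checks.
def solve_n_queens(n):
    """Return one solution as a list of column indices per row, or None."""
    cols, diag1, diag2 = set(), set(), set()  # attacked columns and diagonals
    placement = []

    def place(row):
        if row == n:
            return True
        for col in range(n):
            # A queen at (row, col) is attacked along column col,
            # the "/" diagonal row+col, and the "\" diagonal row-col.
            if col in cols or row + col in diag1 or row - col in diag2:
                continue
            cols.add(col); diag1.add(row + col); diag2.add(row - col)
            placement.append(col)
            if place(row + 1):
                return True
            cols.discard(col); diag1.discard(row + col); diag2.discard(row - col)
            placement.pop()
        return False

    return placement if place(0) else None
```

For n = 2 and n = 3 no solution exists and the function returns None; for n = 8 it finds a valid placement of eight mutually non-attacking queens.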
Unit 4: Learning  Unit 4 Learning Outcomes  
4.1: Machine Learning  Wikipedia: "Machine Learning"  Read this article for an overview of machine learning. Be sure you understand the differences between the learning methods, which you can read about beneath the 'Algorithm types' section. 
4.1.1: Reinforcement Learning  John Lloyd's "Intelligent Agents"  Watch this lecture and pay attention to the AIMA learning agent. Compare Lloyd's explanation of reinforcement learning with others that you have seen. 
4.1.2: Machine Learning, Probability, and Graphical Models  Sam Roweis' "Machine Learning, Probability, and Graphical Models"  Watch this lecture to review the applications of probabilistic learning, the concept of representation, and examples of training and graphical models. You may wish to work through the slides available on the left-hand side of this webpage as you listen to the lecture. 
4.2.1: Introduction to Neural Networks  Wolfram: "Introduction to Neural Networks"  Read this section to learn about general neural networks and how they are mathematically defined. 
4.2.2: Feedforward Neural Networks  Wolfram: "Feedforward Neural Networks"  Read this section, which covers the Feedforward Neural Network. Make sure you understand this network's mathematical definition and that you study the figures in this article. 
4.2.3: Radial Basis Function Networks  Wolfram: "Radial Basis Function Networks"  Read this section, which covers the Radial Basis Function Network. Make sure you understand the network's mathematical definition and be sure to study the figures in this article. 
4.2.4: The Perceptron  Wolfram: "The Perceptron"  Read this section about the Perceptron. Be sure to understand its mathematical definition, learn the training algorithm, and study the figures. 
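The training algorithm the section describes is short enough to sketch directly. This is an illustrative implementation of the classic perceptron learning rule (the data below is a made-up AND-style example, not from the reading): on each misclassified example, nudge the weights toward the correct side.

```python
# Classic perceptron training rule for linearly separable data.
def train_perceptron(samples, labels, epochs=20, lr=1.0):
    """samples: list of feature tuples; labels: +1/-1. Returns (weights, bias)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified (or on the boundary)
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    """Sign of the linear activation."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

Because the AND function is linearly separable, the perceptron convergence theorem guarantees the rule finds a separating hyperplane in a finite number of updates; on data that is not separable it never converges, which motivates the other networks in this unit.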
4.2.5: Vector Quantization (VQ) Networks  Wolfram: "Vector Quantization Networks"  Read this section about the Vector Quantization network. 
4.2.6: Hopfield Network  Wolfram: "Hopfield Network"  Read this section, which presents the Hopfield network and the equations that describe it. 
4.3.1: Kernel Methods  Wikipedia: "Kernel Methods"  Read this article to review Kernel methods. 
4.3.2: k-nearest Neighbor Algorithm  Wikipedia: "k-nearest Neighbor Algorithm"  Make sure you know how the k-nearest neighbor algorithm works (in principle) after reading this entry. 
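In principle the algorithm is just "vote among the k closest training points." A minimal sketch (illustrative; Euclidean distance and the toy two-cluster data are assumptions):

```python
from collections import Counter

# k-nearest-neighbor classification: majority vote among the k closest
# training points under squared Euclidean distance.
def knn_classify(train, query, k=3):
    """train: list of (point, label) pairs; point is a tuple of numbers."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    neighbors = sorted(train, key=lambda pl: dist2(pl[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

Note that there is no training phase at all; the cost is deferred to query time, which is the "lazy learning" property the Wikipedia entry discusses.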
4.3.3: Mixture Model  Wikipedia: "Mixture Model"  Read this article to learn about the different types of Mixture Models. 
4.3.4: Naive Bayes Classifier  Wikipedia: "Naive Bayes Classifier"  Read this article and make sure you know the definition of the naive Bayes classifier. 
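The defining assumption of the classifier is that features are independent given the class, so class scores factor into a product of per-word probabilities. Below is an illustrative multinomial naive Bayes sketch for text (the spam/ham toy data is made up; add-one smoothing is one common choice, not the article's only option):

```python
import math
from collections import Counter, defaultdict

# Multinomial naive Bayes with Laplace (add-one) smoothing.
def train_nb(docs):
    """docs: list of (list_of_words, label). Returns a classify(words) function."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in docs:
        word_counts[label].update(words)
        vocab.update(words)

    def classify(words):
        best, best_score = None, -math.inf
        for label, n in class_counts.items():
            score = math.log(n / len(docs))  # log prior
            total = sum(word_counts[label].values())
            for w in words:
                # Add-one smoothing avoids zero probabilities for unseen words.
                score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
            if score > best_score:
                best, best_score = label, score
        return best

    return classify
```

Working in log space avoids numerical underflow when many small probabilities are multiplied, which is standard practice for this classifier.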
4.3.5: Decision Tree  Wikipedia: "Decision Tree"  Read this article. You should be able to define the term "decision tree" when you are done. 
4.3.6: Kernels and Gaussian Processes  Mark Girolami's "Kernels and Gaussian Processes: Parts 1-3"  Watch the first video about machine learning and compare it to what you have learned already. After watching this video, you should know the basics of linear regression, loss functions, and prediction techniques. Study nonlinear models, probabilistic regression, and uncertainty estimation. Then, watch the second video lecture to learn about Bayesian regression and classification. Finally, watch the last lecture to learn about Gaussian processes, regression, and classification. 
4.4: Machine Learning Coding Drills  "Tic-Tac-Toe Demo"  Code an agent that plays the Tic-Tac-Toe game. You can choose to play the game yourself by selecting board positions or have the agent propose moves. This link shows one possible solution. Work towards a solution for no more than 10 hours and then check your work against the solution code. 
Unit 5: Philosophical Foundations of AI  Unit 5 Learning Outcomes  
5.1.1: Philosophical Issues and Turing Test  John Lloyd's "Intelligent Agents"  Watch the third part of this lecture series and compare Lloyd's interpretation of the Turing test with what you learn later in this unit. 
5.1.2: Computing Machinery and Intelligence  A. M. Turing's "Computing Machinery and Intelligence"  Read this paper by A. M. Turing, a cornerstone in the field of A.I. studies. 
5.1.3: Turing Machine  Paul M.B. Vitanyi's "Turing Machine"  This article is fairly challenging; read through it to the best of your abilities for a detailed description of the Turing Machine. After you have read this article, you should know how to define a Turing machine and summarize the Church-Turing thesis. Make sure you know what the Halting problem is. 
5.1.4: Computability and Incompleteness  Errol Martin's "Computability and Incompleteness"  These videos cover challenging topics mentioned earlier in this unit. Watch the first lecture to learn about Hilbert's consistency program, Gödel's incompleteness theorem, attributes of computable functions, Church's thesis, and three approaches to computability, paying particular attention to the examples. In the second video, you will learn about the Halting problem, the universal Turing machine, and the undecidability proof. 
5.2: Important Propositions in the Philosophy of AI  Wikipedia: "Artificial Brain"  Read this article. 
Wikipedia: "Physical Symbol System"  Read this article. 

5.3: Machine Consciousness  Igor Aleksander's "Machine Consciousness"  Read this article, which discusses machine consciousness. Focus on learning about early models of consciousness and neural models of consciousness. 
Massachusetts Institute of Technology: Marvin Minsky's "Emotion Machine"  Watch this lecture about emotional machines. Ask yourself whether you think one is possible and begin to think about how you would approach its creation. 