Topic | Description |
---|---|
Course Introduction | |
Unit 1: Intelligent Agents and Problems of AI | |
1.1: Is It an Agent, or Just a Program? | Read this page to learn about the advent of software agents. Memorize the definitions of the AIMA, Maes, KidSim, Hayes-Roth, IBM, SodaBot, Foner, and Brustoloni Agents. Make sure you know how to define "agency" and work to memorize Franklin's definition of an agent. Read through the examples of the different taxonomies and classifications of agents. |
1.1.1: John Lloyd on Intelligent Agents | Watch the first part of this three-part video series by John Lloyd. As he lectures, you may wish to work through the slides included on the page. Throughout the lecture, Professor Lloyd talks about AIMA agents and presents some pertinent examples. Compare his thoughts with yours and Franklin's from the previous sections. |
1.1.2: Stan Franklin - A Cognitive Theory of Everything | Watch this video, which explains how intelligent agents fit into "the big picture." Ask yourself whether Franklin's thoughts make sense to you. |
1.2: Overview of AI General Problems | Read this entry on AI. After you read, you should know the meaning of terms such as knowledge representation, planning, learning, natural language processing, motion and manipulation, perception, social intelligence, creativity, and general intelligence. |
1.2.1: Knowledge Representation | Watch the first part of this three-part video series by Maurice Pagnucco. After you watch, you should be able to define the terms knowledge, representation, and reasoning; realize the advantages of this approach; and define the forms of knowledge representation. |
1.2.2: Planning | Watch this lecture. You may wish to work through the slides provided on the right-hand side of the screen as Rintanen lectures. After viewing the lecture, you should understand why planning can be difficult and be able to define the term "transition systems." |
1.2.3: Learning | Watch this lecture, working through the slides provided on the right-hand side of the screen as you listen. After you watch, you should have a general understanding of "learning theory," be able to differentiate between deduction and induction, and describe, in general terms, the concept of probability and Bayes' rule. |
1.3: Approaches to AI | Review this entry on the different paradigms that guide AI research and make sure you know the differences between them. |
1.3.1: Systems with General Intelligence | Watch this lecture about general problems in AI, working through slides provided on the right-hand side of the screen. After you watch, you should be familiar with the chess-as-an-intelligent-system example, understand what general game playing is about, and identify the major questions with which general AI is concerned. Do not let yourself get bogged down by the details; work for a general understanding of AI. |
1.4: Agents in Code | Complete this activity. |
Unit 2: Solving Problems by Searching | |
2.1.1: Graph Definition | Study the definition of a graph from this section and draw some examples of your own. |
2.1.2: Binary Tree | Make sure you know how a binary tree differs from a regular tree after reading this section. |
2.1.3: Example Problem: Minimum Spanning Tree | Read about minimum spanning trees and try to figure out how Prim's algorithm works; the solution can be found here. Before you check the solution, try to solve the problem yourself. After you have solved the problem (or if you have spent a couple of hours working on it, and are stumped!), study the solution. A minimal Python sketch of Prim's algorithm is also included after this table for reference. |
2.2.1: Binary Search Trees | Read the article to learn how to build and search binary trees. |
2.2.2: Red-Black Trees | Read this article. After you read, you should know how a binary tree differs from a red-black tree and understand the basics of building and searching red-black trees. |
2.2.3: Skip List | Read this article to learn how to build and search a skip list. |
2.3.1: Depth-first Search | Read this article to learn how depth-first search works. Study the included example. |
2.3.2: Breadth-First Search | Read this article and make sure you know the differences between depth-first and breadth-first search algorithms. A short sketch contrasting the two traversals appears after this table. |
2.3.3: Dijkstra's Algorithm | Read this article to learn how Dijkstra's algorithm works. Work through the example in the article. A compact Python sketch of the algorithm also appears after this table. |
2.4: Search Algorithms in General | Read this article for an overview of search techniques. |
2.5: Basic Notions in Graph Theory | Watch this three-part lecture on graph theory to develop a better understanding of how to use graphs in AI. After viewing the first part, you should know about directed, undirected, and factor graphs, conditional independence, d-separation, and plate notation. The second part will teach you about inference in graphical models, key ideas in belief propagation, and the junction tree algorithm. Watch the third part for fun, trying to follow along as much as possible. |
2.6: Graph Examples in Code | Create a route-finding agent given the environment in the form of a graph. One possible solution can be found under the "Route Finding Agent" section. Study the solution code after you have already solved the problem, or if you have spent a substantial amount of time and are stuck (this problem could take you up to 12 hours to solve!). A simple uniform-cost route-finding sketch is also included after this table as one possible starting point. |
Unit 3: Logical Agents and Knowledge Representation | |
3.1: Logic Programming | Read this article on logic programming. Make sure you understand the differences between abductive, metalogic, constraint, concurrent, inductive, higher-order, and linear logic programming. Next, watch the first lecture on logic and compare it to the reading. In this lecture, you will learn about the syntax and semantics of propositional logic, boolean functions, satisfiability, and binary decision trees. You will need to know the difference between conjunctive and disjunctive normal forms. Then, watch the second lecture to learn about first-order logic. Finally, watch the third lecture, which presents modal logic. Make sure you know the differences between propositional, first-order, and modal logic. |
3.2.1: Bayesian Network | Read this article on Bayesian networks. Focus on the definitions it provides and work through the example provided. Then, watch this lecture, which discusses Bayesian inference. You may wish to work through the slides provided on the right-hand side of the screen as Bishop lectures. Focus on learning the rules of probability and understanding the terms Bayes' theorem, Bayesian inference, and probabilistic graphical models. Make sure you know how factor graphs are used. |
3.2.2: Hidden Markov Model | Read this article, which discusses the hidden Markov model. |
3.2.3.1: Kalman Filter | Read this article on the Kalman Filter. |
3.2.3.2: Decision Theory | Read this article, making sure you understand normative and descriptive decision theory and what kinds of decisions need a theory. |
3.3: Knowledge Representation and Reasoning | Watch this lecture. Focus on learning how to represent what we know and how to use representation to make inferences about that knowledge. Work carefully through the examples included in the lecture. Then, read this article, which presents different views on knowledge representation. Contrast these views with your own. |
3.4: Coding Drills | Follow the instructions and solve this problem. |
Unit 4: Learning | |
4.1: Machine Learning | Read this article for an overview of machine learning. Be sure you understand the differences between the learning methods described in the 'Algorithm types' section. |
4.1.1: Reinforcement Learning | Watch this lecture and pay attention to the AIMA learning agent. Compare Lloyd's explanation of reinforcement learning with others that you have seen. |
4.1.2: Machine Learning, Probability, and Graphical Models | Watch this lecture to review the applications of probabilistic learning, the concept of representation, and examples of training and graphical models. You may wish to work through the slides available on the left-hand side of this web-page as you listen to the lecture. |
4.2.1: Introduction to Neural Networks | Read this section to learn about general neural networks and how they are mathematically defined. |
4.2.2: Feedforward Neural Networks | Read this section, which covers the feedforward neural network. Make sure you understand the network's mathematical definition and be sure to study the figures in this article. |
4.2.3: Radial Basis Function Networks | Read this section, which covers the Radial Basis Function Network. Make sure you understand the network's mathematical definition and be sure to study the figures in this article. |
4.2.4: The Perceptron | Read this section about the perceptron. Be sure to understand its mathematical definition, learn the training algorithm, and study the figures. A minimal sketch of the perceptron training rule appears after this table. |
4.2.5: Vector Quantization (VQ) Networks | Read this section about the Vector Quantization network. |
4.2.6: Hopfield Network | Read this section, which presents the Hopfield network and the equations that describe it. |
4.3.1: Kernel Methods | Read this article to review Kernel methods. |
4.3.2: k-nearest Neighbor Algorithm | Make sure you know how the k-nearest neighbor algorithm works (in principle) after reading this entry. A small worked sketch appears after this table. |
4.3.3: Mixture Model | Read this article to learn about the different types of Mixture Models. |
4.3.4: Naive Bayes Classifier | Read this article and make sure you know the definition of the naive Bayes classifier. A toy text-classification sketch appears after this table. |
4.3.5: Decision Tree | Read this article. You should be able to define the term "decision tree" when you are done. |
4.3.6: Kernels and Gaussian Processes | Watch the first video about machine learning and compare it to what you have learned already. After watching this video, you should know the basics of linear regression, loss functions, and prediction techniques. Study non-linear models, probabilistic regression, and uncertainty estimation. Then, watch the second video lecture to learn about Bayesian regression and classification. Finally, watch the last lecture to learn about Gaussian processes, regression, and classification. |
4.4: Machine Learning Coding Drills | Code an agent that plays the Tic-Tac-Toe game. You can choose to play the game yourself by selecting board positions or have the agent propose moves. This link shows one possible solution. Work towards a solution for no more than 10 hours and then check your work against the solution code. A minimax starting-point sketch is included after this table. |
Unit 5: Philosophical Foundations of AI | |
5.1.1: Philosophical Issues and Turing Test | Watch the third part of this lecture series and compare the lecturer's interpretation of the Turing test with what you learn later in this unit. |
5.1.2: Computing Machinery and Intelligence | Read this paper by A. M. Turing, a cornerstone in the field of A.I. studies. |
5.1.3: Turing Machine | This article is fairly challenging; read through it to the best of your abilities for a detailed description of the Turing machine. After you have read this article, you should know how to define a Turing machine and summarize the Church-Turing thesis. Make sure you know what the Halting problem is. |
5.1.4: Computability and Incompleteness | These videos cover challenging topics mentioned earlier in this unit. Watch the first lecture to learn about Hilbert's consistency program, Gödel's incompleteness theorem, attributes of computable functions, Church's thesis, and three approaches to computability, paying particular attention to the examples. In the second video, you will learn about the Halting problem, the universal Turing machine, and the undecidability proof. |
5.2: Important Propositions in the Philosophy of AI | Read this article. Then, read this second article. |
5.3: Machine Consciousness | Read this article, which discusses machine consciousness. Focus on learning about early models of consciousness and neural models of consciousness. Then, watch this lecture about emotional machines. Ask yourself whether you think an emotional machine is possible and begin to think about how you would approach creating one. |
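
The sketches below are optional companions to the outline above: each is a minimal, illustrative implementation under stated assumptions, not the solution linked from any activity. First, for subunit 2.1.3, a sketch of Prim's algorithm, assuming the graph is supplied as an adjacency dictionary of non-negative edge weights; the small graph at the end is invented test data.

```python
import heapq

def prim_mst(graph, start):
    """Grow a minimum spanning tree from `start` using Prim's algorithm.

    `graph` maps each vertex to a dict {neighbor: edge_weight} describing an
    undirected, connected, weighted graph. Returns a list of (u, v, w) tree edges.
    """
    visited = {start}
    # Priority queue of candidate edges: (weight, from_vertex, to_vertex).
    frontier = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(frontier)
    tree = []
    while frontier and len(visited) < len(graph):
        w, u, v = heapq.heappop(frontier)
        if v in visited:
            continue                      # This edge would close a cycle; skip it.
        visited.add(v)
        tree.append((u, v, w))
        for nxt, weight in graph[v].items():
            if nxt not in visited:
                heapq.heappush(frontier, (weight, v, nxt))
    return tree

# Invented example graph.
graph = {
    "A": {"B": 2, "C": 3},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 3, "B": 1, "D": 5},
    "D": {"B": 4, "C": 5},
}
print(prim_mst(graph, "A"))  # [('A', 'B', 2), ('B', 'C', 1), ('B', 'D', 4)]
```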
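
For subunits 2.3.1 and 2.3.2, the next sketch contrasts depth-first and breadth-first traversal of the same adjacency-list graph; the graph and function names are assumptions made for illustration.

```python
from collections import deque

def dfs(graph, start):
    """Depth-first traversal: go as deep as possible before backtracking."""
    visited, stack, order = set(), [start], []
    while stack:
        node = stack.pop()                       # LIFO: most recently discovered node first.
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        stack.extend(reversed(graph[node]))      # Reversed so neighbors are tried in listed order.
    return order

def bfs(graph, start):
    """Breadth-first traversal: visit every node at depth d before depth d + 1."""
    visited, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()                   # FIFO: earliest discovered node first.
        order.append(node)
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(nxt)
    return order

# Invented example graph (adjacency lists).
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dfs(graph, "A"))  # ['A', 'B', 'D', 'C']
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```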
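
For subunit 2.3.3, a compact sketch of Dijkstra's algorithm using a binary heap; the weighted graph is made up for illustration and is not the example from the article.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for non-negative edge weights.

    `graph` maps each vertex to a dict {neighbor: weight}. Returns the dict of
    shortest distances from `source`.
    """
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    heap = [(0, source)]                  # (tentative distance, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # Stale entry: a shorter path to u was already found.
        for v, w in graph[u].items():
            if d + w < dist[v]:           # Relax the edge (u, v).
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Invented example graph.
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 6},
    "C": {"D": 3},
    "D": {},
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```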
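
For the coding drill in 2.6, one possible starting point is sketched below: a goal-based agent that plans a cheapest route with uniform-cost search. This is not the "Route Finding Agent" solution referenced in the activity; the class name is an assumption, and the road map is a fragment of the Romania map commonly used in AIMA, included only as test data.

```python
import heapq

class RouteFindingAgent:
    """A simple goal-based agent: given a road map (weighted graph), it plans a
    cheapest route from a start city to a goal city using uniform-cost search."""

    def __init__(self, road_map):
        self.road_map = road_map                  # {city: {neighbor_city: distance}}

    def plan_route(self, start, goal):
        frontier = [(0, start, [start])]          # (cost so far, city, path taken)
        best_cost = {start: 0}
        while frontier:
            cost, city, path = heapq.heappop(frontier)
            if city == goal:
                return path, cost                 # Cheapest route found.
            for neighbor, step in self.road_map[city].items():
                new_cost = cost + step
                if new_cost < best_cost.get(neighbor, float("inf")):
                    best_cost[neighbor] = new_cost
                    heapq.heappush(frontier, (new_cost, neighbor, path + [neighbor]))
        return None, float("inf")                 # Goal is unreachable.

# Test data: a fragment of the Romania road map used throughout AIMA.
road_map = {
    "Arad": {"Sibiu": 140, "Timisoara": 118},
    "Timisoara": {"Arad": 118, "Lugoj": 111},
    "Lugoj": {"Timisoara": 111, "Mehadia": 70},
    "Mehadia": {"Lugoj": 70},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}
agent = RouteFindingAgent(road_map)
print(agent.plan_route("Arad", "Bucharest"))
# (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)
```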
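
For subunit 4.2.4, a minimal sketch of the perceptron training rule; the toy AND-style dataset, learning rate, and epoch limit are illustrative choices.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w·x + b) matches each label in {-1, +1}.

    Classic perceptron rule: whenever a sample is misclassified, move the
    weights toward (label +1) or away from (label -1) that sample.
    """
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            predicted = 1 if activation >= 0 else -1
            if predicted != y:                    # Misclassified: apply the update rule.
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
                errors += 1
        if errors == 0:                           # Converged on this (linearly separable) data.
            break
    return w, b

# Toy AND-style dataset with labels in {-1, +1}.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [-1, -1, -1, 1]
w, b = train_perceptron(samples, labels)
for x, y in zip(samples, labels):
    prediction = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
    print(x, y, prediction)                       # Every prediction should match its label.
```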
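
For subunit 4.3.2, a small k-nearest-neighbor classifier; the 2-D points, labels, and k = 3 are invented for illustration.

```python
import math
from collections import Counter

def knn_classify(train_points, train_labels, query, k=3):
    """Label `query` by majority vote among its k nearest training points (Euclidean distance)."""
    distances = sorted(
        (math.dist(point, query), label)
        for point, label in zip(train_points, train_labels)
    )
    nearest_labels = [label for _, label in distances[:k]]
    return Counter(nearest_labels).most_common(1)[0][0]

# Invented 2-D data: two loose clusters.
train_points = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
train_labels = ["red", "red", "red", "blue", "blue", "blue"]
print(knn_classify(train_points, train_labels, (1.1, 0.9)))  # 'red'
print(knn_classify(train_points, train_labels, (4.9, 5.2)))  # 'blue'
```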
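
For subunit 4.3.4, a sketch of a multinomial naive Bayes text classifier with Laplace smoothing; the tiny spam/ham corpus and the tokenized-word representation are assumptions made purely to exercise the code.

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(documents, labels):
    """Estimate log priors and Laplace-smoothed log likelihoods per word and class."""
    class_counts = Counter(labels)
    word_counts = defaultdict(Counter)            # class -> word -> count
    vocabulary = set()
    for words, label in zip(documents, labels):
        word_counts[label].update(words)
        vocabulary.update(words)
    priors = {c: math.log(n / len(labels)) for c, n in class_counts.items()}
    likelihoods = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        likelihoods[c] = {
            w: math.log((word_counts[c][w] + 1) / (total + len(vocabulary)))
            for w in vocabulary
        }
    return priors, likelihoods, vocabulary

def classify(words, priors, likelihoods, vocabulary):
    """Pick the class that maximizes log P(class) + sum of log P(word | class)."""
    scores = {
        c: priors[c] + sum(likelihoods[c][w] for w in words if w in vocabulary)
        for c in priors
    }
    return max(scores, key=scores.get)

# Invented toy corpus.
documents = [
    ["win", "money", "now"], ["free", "money", "offer"],
    ["meeting", "schedule", "today"], ["project", "meeting", "notes"],
]
labels = ["spam", "spam", "ham", "ham"]
model = train_naive_bayes(documents, labels)
print(classify(["free", "money"], *model))        # 'spam'
print(classify(["project", "schedule"], *model))  # 'ham'
```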
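
For the drill in 4.4, the final sketch is a minimax move-chooser for Tic-Tac-Toe. It is not the linked solution: the board representation (a flat list of nine cells holding "X", "O", or None) and the function names are assumptions, and a complete drill solution would still need a loop that accepts human moves.

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),     # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),     # columns
             (0, 4, 8), (2, 4, 6)]                # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, otherwise None."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (value, move): +1 means X wins with best play, -1 means O wins, 0 a draw."""
    won = winner(board)
    if won == "X":
        return 1, None
    if won == "O":
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                            # Board full with no winner: a draw.
    best_move = None
    best_value = -2 if player == "X" else 2
    for move in moves:
        board[move] = player                      # Try the move...
        value, _ = minimax(board, "O" if player == "X" else "X")
        board[move] = None                        # ...then undo it.
        if (player == "X" and value > best_value) or (player == "O" and value < best_value):
            best_value, best_move = value, move
    return best_value, best_move

# Searching the full tree from an empty board may take a few seconds in plain Python.
board = [None] * 9
value, move = minimax(board, "X")
print(value, move)   # 0 (perfect play draws) and the first cell index achieving that value
```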