• Unit 2: Agent-Based Approach to AI

    Modern AI is built around the concept of "agents". An agent is typically a software entity that encapsulates intelligent behavior through a set of capabilities. In this unit, you will learn about the different types of agents and the capabilities they use to assist users. You will also see how the agent paradigm provides a uniform framework for describing both simple and sophisticated agents. Different agents are typically needed to solve problems in different environments. This unit describes the key technical properties of environments and shows you how to analyze new problems in terms of those properties. Once you know which kind of environment is involved, you can design agents that meet its requirements.

    Completing this unit should take you approximately 2 hours.

    • 2.1: Introduction to Agent-Based AI

      Present-day AI solutions are architected using the building block of an "intelligent agent". An agent is essentially a software component with some autonomy: it senses its environment through sensors, reasons about potential next steps, and acts through actuators to change its own state and the state of the environment. In this section, you will see how some AI solutions use a single agent, while others involve multiple agents cooperating to tackle a complex problem.
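
      As a concrete illustration of this sense-reason-act cycle, here is a minimal sketch in Python of a reflex agent for a two-square vacuum world, a classic teaching example. The class and function names are illustrative choices, not part of any particular framework used in the course.

        # Illustrative sketch only: a tiny reflex agent for a two-square
        # vacuum world. All names here are hypothetical, not from a library.

        class ReflexVacuumAgent:
            def perceive(self, environment, location):
                # Sensor: observe the current location and whether it is dirty.
                return location, environment[location]

            def decide(self, percept):
                # Reasoning: a simple condition-action rule.
                location, is_dirty = percept
                if is_dirty:
                    return "Suck"
                return "Right" if location == "A" else "Left"

            def act(self, action, environment, location):
                # Actuator: change the environment and/or the agent's location.
                if action == "Suck":
                    environment[location] = False  # the square is now clean
                    return location
                return "B" if action == "Right" else "A"

        env = {"A": True, "B": True}  # True means the square is dirty
        agent, loc = ReflexVacuumAgent(), "A"
        for _ in range(4):            # run a few sense-reason-act steps
            action = agent.decide(agent.perceive(env, loc))
            loc = agent.act(action, env, loc)
            print(action, env)

      Each pass through the loop is one complete cycle: the agent reads a percept, selects an action, and applies it, which is exactly the pattern that more sophisticated agents elaborate on.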

    • 2.2: Analyzing Environmental Characteristics

      One of the most important aspects of agent design is analyzing the nature of the environment in which the agent must perform its task. The key characteristics of the environment determine which agents will work best to solve the problem. These properties include:

      • Fully observable vs. partially observable: in a fully observable environment, the agent has access to all the data it needs to formulate a solution, whereas in a partially observable environment, some aspects of the environment are hidden from the agent.
      • Deterministic vs. non-deterministic: in a deterministic environment, the next state of the system is fully predictable from its current state and the chosen action, with no random events affecting the outcome; non-determinism allows for random behavior in the environment. The real world is non-deterministic.
      • Episodic vs. non-episodic: in an episodic environment, prior history does not affect the choices available at present, whereas in a non-episodic environment, past actions determine what choices are available now.
      • Static vs. dynamic: a static environment remains unchanged while the agent deliberates, whereas a dynamic environment can change while the agent is thinking about what action to take.
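
      To make the analysis concrete, the sketch below shows one way these four properties might be recorded and translated into rough design requirements for an agent. The EnvironmentProfile class, the design_notes helper, and the two example classifications (chess and taxi driving) are illustrative assumptions rather than material from the course.

        # Illustrative sketch: capturing the four environment dimensions and
        # deriving rough agent-design requirements. Names are hypothetical.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class EnvironmentProfile:
            fully_observable: bool
            deterministic: bool
            episodic: bool
            static: bool

        def design_notes(profile):
            """Translate environment properties into agent-design requirements."""
            notes = []
            if not profile.fully_observable:
                notes.append("keep internal state to track what cannot be sensed directly")
            if not profile.deterministic:
                notes.append("plan for several possible outcomes of each action")
            if not profile.episodic:
                notes.append("account for how current actions constrain future choices")
            if not profile.static:
                notes.append("deliberate quickly; the world changes while the agent thinks")
            return notes

        # Textbook-style judgments for two familiar tasks:
        chess = EnvironmentProfile(True, True, False, True)
        taxi_driving = EnvironmentProfile(False, False, False, False)

        for name, profile in [("chess", chess), ("taxi driving", taxi_driving)]:
            print(name, "->", design_notes(profile))

      The point of the exercise is the analysis itself: once a new problem has been classified along these dimensions, the requirements that fall out of that classification point toward the kind of agent that fits the environment.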