Session 3

Intelligent Agents and their Goals


810:161 Artificial Intelligence



What I Did on my Summer Vacation

I went to the Iowa State Fair for the first time ever this summer. But I paid to go twice. My wife bought us tickets early at one of the local grocery stores, which sold them at a discount. We set them aside in anticipation of the great event. When we started planning our trip again that week, we couldn't find them. We looked high, we looked low. We tore our house apart. Pretty soon, I was looking in places I knew I had already looked, just out of the frustration of knowing that they had to be there somewhere. Eventually, the day of our trip arrived, so we drove to Des Moines and bought tickets at the gate. I think of the lost tickets as our donation to the fair.

You know how they say that when you lose something you always find it in the last place you look? Well, we still haven't found those tickets. They'll probably still be there when we move out, tucked into some nook somewhere.


Agents, Environments, and Goals

Traditional AI begins with some simple premises:

The environment in which an agent is expected to operate has a large effect on what sort of behaviors it will need and on what we should expect it to be able to do. If we take an agent designed or accustomed to one environment and drop it into a much different one, how well or poorly might it perform? Think of your own experience as an intelligent agent... We are concerned mostly with complex environments, which require agents to be flexible and adaptable. [...]

We have spent a lot of time discussing what AI is, what AI is about, and how we can tell if we have succeeded. We won't rehash that discussion, but I think that we can agree that intelligent agents have goals. Some of those goals are automatic, just a part of being an agent. For example, intelligent agents seem to have a goal of self-preservation, which leads them to seek sustenance and shelter. Other goals are elective, taken on of the agent's own volition, such as doing a crossword puzzle or trying to find a lost ticket. An agent may choose to try to achieve a goal -- or to stop trying.

Focusing on the goals of an agent gives us a framework for discussing and implementing some components of intelligent behavior. For one thing, it gives us a way to operationalize evaluation of systems. "Performing well" means "satisfying some goal with reasonably low cost". How to measure cost, and what counts as reasonable, can be problem-specific or context-specific.

How might an intelligent agent try to achieve its goal? It might respond by reflex. An agent can satisfy its goal with a very simple algorithm -- sketched here in one plausible form:
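
    INPUT:  a percept from the environment
    OUTPUT: an action to take

    STEPS   1. Sense the environment.
            2. Do the action that is hard-wired to the current percept.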

This approach assumes that the agent can react to the world based solely on what it senses from its environment.

Do intelligent agents really use reflex? You bet! Self-preservation is often a matter of good reflexes. In fact, any behavior that is required often, that is easy to specify for all or most circumstances, and for which there is substantial cost to failure is a good candidate for reflex. (Why keep thinking through the same goal and action over and over? Why stop to think before flinching when something comes at your face?)

Reflex is often the right way to achieve a goal. But sometimes it fails because the world is complex. And sometimes it isn't the right approach because the cost of failure is too high.

We can think of reflex as 'hard-wired', either in the agent's program or in muscle memory, outside of the agent's conscious thought. Try changing a reflex sometime... It's hard to do!


Choosing an Action

Another way to satisfy goals is through deliberation: thinking and choosing an action. This leads to an algorithm of this sort -- sketched here in one plausible form:
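
    INPUT:  a goal to achieve
    OUTPUT: an action to take

    STEPS   1. Sense the environment.
            2. Consider the actions available and the effect each one
               would have on the world.
            3. Choose an action whose effect satisfies the goal, and
               do it.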

Choosing an action can be almost as simple as reflex, by "looking up the answer" in a table of responses. Do intelligent agents really use table-based look-up? You bet! Like reflex, table-based look-up can be a useful mechanism for achieving efficient behavior in the face of a common problem whose solution is hard or tedious to derive. Example: what is the decimal value of 1/2?
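
In the same style as the sketches above, table look-up might amount to:

    INPUT:  a problem to answer  (say, "what is the decimal value of 1/2?")
    OUTPUT: an answer

    STEPS   1. Look the problem up in a table of remembered answers.
            2. If an answer is stored there, return it.  (1/2 --> 0.5)
            3. Otherwise, derive the answer by deliberation -- and
               store it in the table for next time.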

What is the distinction between table look-up and reflex? I think of reflex as 'hard-wired' and table look-up as 'programmed' -- which is to say that the two are functionally the same but that the agent has greater control over table look-up. It can always memorize another answer.

Choosing an action can also be arbitrarily complex: An agent can plan ahead before choosing its action.

But planning ahead has its own costs.

Another of AI's basic premises:

Agents are resource-limited.

The size of an agent's memory is bounded. The amount of time available is usually bounded.


Searching for an Answer

Another approach is to search for an answer. If you don't have the solution hard-wired or stored in a table, then you have to use what you know to figure out how to find the answer. An agent can consider what effect each possible action would have on the world and which action -- or sequence of actions -- leads to the goal.

Do intelligent agents do search? All the time...

An agent could search randomly among the actions it can do, trying to find an action that will achieve its goal:

    INPUT:  a goal to achieve
    OUTPUT: an action to take

    STEPS   1. Choose one of the available actions.
            2. If the action achieves the goal, then stop.
               Otherwise, go to Step 1.

This algorithm is called Generate and Test. Generate and Test may be the only way to proceed: if the agent doesn't know what the effect of an action will be, then it can't plan ahead based on that action.

But this isn't a systematic way to proceed. The agent may keep reconsidering the same ineffective action, or it may consider actions in a nonsensical order. (Why should that matter?)

A more systematic search algorithm would look something like this:

    INPUT:  the starting situation (the start state)
            a goal to achieve
            a search strategy

    OUTPUT: a sequence of actions (called operators)
            that transforms the problem's start state into a goal state
            OR an announcement that no such sequence can be found

    STEPS 

    1. Initialize the set of unconsidered states to contain just the
       start state.

    2. Repeat the following:
       a. If the set of unconsidered states is empty, then announce
          failure.
       b. Remove a state from the set, choosing it according to the
          search strategy.
       c. If the chosen state is a goal state, then return the
          sequence of actions that leads to it.
       d. Add to the set of unconsidered states all of the states
          that can be reached from the chosen state by doing one
          action.
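
The search strategy in Step 2b is where particular search algorithms differ: for example, always choosing the state that has waited longest gives breadth-first search, while always choosing the most recently added state gives depth-first search.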

We will begin to consider this algorithm in more detail next time.


Wrap Up


Eugene Wallingford ==== wallingf@cs.uni.edu ==== September 5, 2001