


Question

1. Please use your own words to define (a) intelligence; (b) artificial intelligence; (c) agent; and (d) rational agent. (1 pt)

2. What is a PEAS task environment description for an intelligent agent? For the following agents, develop a PEAS description of their task environment. (1 pt)

Assembly-line part-picking robot
Robot soccer player

3. When developing a vacuum-cleaner agent as shown in Figure 1 (the same as Figures 2.2 and 2.3 of the textbook), please describe the properties of the task environment with respect to "Observable", "Agents", "Deterministic", "Episodic", "Static", and "Discrete". Please explain your answers. (1 pt)

[Figure 1: partial tabulation of the simple vacuum-cleaner agent function, mapping percept sequences such as [A, Clean], [A, Dirty], [B, Clean], [B, Dirty] to the actions Right, Suck, Left, Suck.]

4. Please design pseudo-code for an energy-efficient model-based vacuum-cleaner agent as follows: (1) the environment has two locations, A and B, each of which can be either "Clean" or "Dirty", …

Explanation / Answer

1. (a) Intelligence is the ability to perceive and retain information and to apply it within an environment. It involves reasoning, creativity, logic, and problem-solving, among other capabilities.

(b) Artificial intelligence (AI) is intelligence exhibited by machines: the ability of a machine to understand its environment and take actions in order to achieve its goals.

(c) An agent is an autonomous entity that perceives its environment through sensors and acts upon that environment through actuators. An agent, by itself, need not have any particular preferences over outcomes.

(d) A rational agent is an agent that has clear preferences and chooses its actions to achieve the optimal outcome, that is, to maximize its expected performance measure.

2. PEAS (Performance measure, Environment, Actuators, Sensors) is a description of the task environment: the task environment is the "problem" to which the rational agent we want to design is the "solution".

Assembly-line part-picking robot:

Performance measure: percentage of parts placed in the correct bins

Environment: conveyor belt with parts, bins

Actuators: jointed arm and hand

Sensors: camera, joint angle sensors

Robot soccer player:

Performance measure: goals scored

Environment: field with a ball and goalposts

Actuators: jointed legs and feet

Sensors: camera, joint angle sensors
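
As a rough sketch, a PEAS description can also be captured as a small data structure; the class name PEAS and these field names are my own illustration, not from the textbook:

    from dataclasses import dataclass

    @dataclass
    class PEAS:
        performance_measure: list  # what counts as success
        environment: list          # what the agent operates in
        actuators: list            # how it acts
        sensors: list              # how it perceives

    part_picking_robot = PEAS(
        performance_measure=["percentage of parts in correct bins"],
        environment=["conveyor belt with parts", "bins"],
        actuators=["jointed arm and hand"],
        sensors=["camera", "joint angle sensors"],
    )

    robot_soccer_player = PEAS(
        performance_measure=["goals scored"],
        environment=["field with a ball and goalposts"],
        actuators=["jointed legs and feet"],
        sensors=["camera", "joint angle sensors"],
    )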

3. Fully observable vs. partially observable

A task environment is effectively fully observable if the agent’s sensors are able to detect all the aspects that are relevant to its choice of action.

Single-agent vs. multiagent

An agent either acts alone in the environment or interacts with other agents, which it distinguishes from mere objects by the fact that its own performance depends on their behavior. A multiagent environment can be competitive, cooperative, or partly both.

Deterministic vs. stochastic

If the next state of the environment is completely determined by the current state and the action executed by the agent, with no uncontrolled variation, then the environment is deterministic. Otherwise, it is stochastic.

Episodic vs. sequential

An episodic environment is divided into atomic episodes, each consisting of the agent perceiving and then performing a single action; the next episode does not depend on the actions taken in previous episodes. In contrast, in a sequential environment each decision can affect all future decisions.

Static vs. dynamic

If an environment is changing while an agent is deliberating, then it is dynamic. Static environments do not change over time. Semidynamic environments do not change, but an agent’s performance score does.

Discrete vs. continuous

This distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. Chess is discrete (a finite number of states and a discrete set of percepts and actions); taxi driving is continuous.
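
These six dimensions can be recorded per environment. As an illustrative sketch (the dictionary layout is mine), the two examples above classify as in the textbook's Figure 2.6:

    # Classification of the two example environments along the six dimensions.
    chess_with_clock = {
        "observable":    "fully",
        "agents":        "multi",
        "deterministic": "deterministic",
        "episodic":      "sequential",
        "static":        "semidynamic",   # the board doesn't change, but the clock runs
        "discrete":      "discrete",
    }

    taxi_driving = {
        "observable":    "partially",
        "agents":        "multi",
        "deterministic": "stochastic",
        "episodic":      "sequential",
        "static":        "dynamic",
        "discrete":      "continuous",
    }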

4. Pseudo-Code

The vacuum starts at location A and repeats the following on every time step:

    loop forever:
        if the current location is Dirty:
            Suck
        else if this is the first consecutive Clean percept at this location:
            Idle for one time point    // energy saving: do not move yet
        else:                          // second consecutive Clean percept
            Switch location            // A -> B, or B -> A
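
A minimal runnable sketch of this pseudo-code in Python (the class name and the clean_streak counter are my own naming; the percept format follows Figure 1):

    class EnergyEfficientVacuumAgent:
        """Model-based reflex agent: its internal state counts consecutive
        Clean percepts at the current location, so it idles for one time
        point before paying the energy cost of switching locations."""

        def __init__(self):
            self.clean_streak = 0  # consecutive Clean percepts seen here

        def act(self, percept):
            location, status = percept       # e.g. ("A", "Dirty")
            if status == "Dirty":
                self.clean_streak = 0
                return "Suck"
            self.clean_streak += 1
            if self.clean_streak == 1:
                return "Idle"                # save energy: wait one time point
            self.clean_streak = 0            # state resets when it moves
            return "Switch"                  # A -> B, or B -> A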

Percept sequence                    Action
[A, Dirty]                          Suck
[A, Clean] (1st time point)         Idle
[A, Clean] (2nd time point)         Switch
[B, Dirty]                          Suck
[B, Clean] (1st time point)         Idle
[B, Clean] (2nd time point)         Switch
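
Assuming the EnergyEfficientVacuumAgent sketch above, a short simulation reproduces this percept-action table:

    agent = EnergyEfficientVacuumAgent()
    percepts = [("A", "Dirty"), ("A", "Clean"), ("A", "Clean"),
                ("B", "Dirty"), ("B", "Clean"), ("B", "Clean")]
    for p in percepts:
        print(p, "->", agent.act(p))
    # ('A', 'Dirty') -> Suck
    # ('A', 'Clean') -> Idle
    # ('A', 'Clean') -> Switch
    # ('B', 'Dirty') -> Suck
    # ('B', 'Clean') -> Idle
    # ('B', 'Clean') -> Switch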

5. The difference is that a model-based reflex agent has capabilities beyond those of a simple reflex agent. It handles partial observability by keeping track of the part of the world it cannot see now: its internal state records how the world evolves over time and how the agent's own actions affect the world. It is called model-based because it maintains a model of the world, built from the percept history, that serves as its best guess at the current state, and it chooses actions by consulting that model.
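
To make the contrast concrete, here is a sketch (not the textbook's code) of a simple reflex agent for the same world; it maps the current percept directly to an action via the condition-action rules of Figure 1, with no memory:

    # Simple reflex agent: the action depends ONLY on the current percept.
    RULES = {
        ("A", "Dirty"): "Suck",  ("B", "Dirty"): "Suck",
        ("A", "Clean"): "Right", ("B", "Clean"): "Left",
    }

    def simple_reflex_agent(percept):
        return RULES[percept]

    # Given ("A", "Clean") it always answers "Right": with no internal
    # state it cannot remember a previous Clean percept, so it can never
    # implement the energy-saving "idle once, then switch" behavior.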
