Complete the following AI short-answer questions on Uninformed Search and
related topics.
![UCS](https://upload.wikimedia.org/wikipedia/commons/thumb/5/57/Dijkstra_Animation.gif/220px-Dijkstra_Animation.gif)
Probability Review
This question is meant to review part of the probability prerequisite. It
may be helpful to consult the resources listed under General Resources.
Let A, B, C, D be four random variables.
(a) What is the smallest set of independence or conditional independence
relationships we need to assume for the following scenarios? (A worked
example for case (iii) follows the list.)
- (i) [1 pt] P(A, B) = P(A|B)P(B)
- (ii) [1 pt] P(A, B) = P(A)P(B)
- (iii) [2 pts] P(A, B, C) = P(A|B)P(B|C)P(C)
- (iv) [3 pts] P(A, B, C) = P(A)P(B|C)P(C)
- (v) [3 pts] P(A, B, C) = P(A)P(B)P(C)
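For instance, one way to reason about case (iii), using only the chain rule (a sketch, not the only valid argument):

```latex
P(A,B,C) \;=\; P(A \mid B, C)\, P(B \mid C)\, P(C)
\quad\text{(chain rule, no assumptions needed)} \\
\text{matches } P(A \mid B)\, P(B \mid C)\, P(C)
\;\iff\; P(A \mid B, C) = P(A \mid B)
\;\iff\; A \perp C \mid B .
```

The same pattern, expanding by the chain rule and then comparing factor by factor, identifies the minimal assumptions in the other cases.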
(b) Simplify each of the following expressions into a single probability
expression. Please show your work.
Uninformed Search and Heuristics
Consider the following simplified version of the classic Atari video game,
Montezuma’s Revenge: It is played on the board illustrated below. An agent
(represented by the person icon in cell (1,3)) wishes to grab the key (in cell
(3,0)). A skull starts in cell (5,2) and moves to the right by one cell after
each action is executed until it ends up in the rightmost cell, at which point
it starts moving to the left, and repeats this pattern back and forth.
The agent can be facing either left or right. There are 10 possible actions
for the agent: 2 turning actions (face_left, face_right) and 8 moving actions
(left, right, up, down, left_up, left_down, right_up, right_down). The agent
can move up or down while facing either direction, but can move sideways or
diagonally only if facing in that direction. For example, if the agent is
facing right but tries to move left_up, the agent will not move and nothing
will happen. Furthermore, if the agent is already facing left and a face_left
action is taken, nothing happens.
Lastly, the agent cannot move into a cell currently occupied by the skull or
into a wall.
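To make the bookkeeping concrete, here is a minimal Python sketch of one possible state representation and the skull's motion. The coordinate conventions, board width, and wall handling are placeholders (the board figure is not reproduced here), and including the skull's direction as a separate state component is one common modeling choice, not the only valid one:

```python
from typing import NamedTuple, Tuple

class State(NamedTuple):
    """One full game state (illustrative; coordinates are placeholders)."""
    agent: Tuple[int, int]   # (row, col) cell of the agent
    facing: str              # 'left' or 'right'
    skull_col: int           # the skull only moves horizontally
    skull_dir: int           # +1 heading right, -1 heading left

# Vertical moves are legal regardless of facing direction.
VERTICAL = {'up': (-1, 0), 'down': (1, 0)}
# Sideways and diagonal moves require facing the matching direction.
SIDE = {
    'left':  {'left': (0, -1), 'left_up': (-1, -1), 'left_down': (1, -1)},
    'right': {'right': (0, 1), 'right_up': (-1, 1), 'right_down': (1, 1)},
}

def step_skull(col: int, direction: int, width: int) -> Tuple[int, int]:
    """Skull advances one column per agent action, reversing at the edges."""
    nxt = col + direction
    if nxt < 0 or nxt >= width:   # reached an end cell: turn around
        direction = -direction
        nxt = col + direction
    return nxt, direction

# With N possible agent cells and M possible skull cells, this
# representation suggests a state-space size on the order of
# N * 2 * M * 2 (agent cell x facing x skull cell x skull direction),
# which is one natural starting point for part (a)(i).
```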
- (a) Answer the following questions for the Montezuma's Revenge board above:
- (i) [7 pts] Let N be the number of possible cell locations that the agent can be in, and let M be the number of possible cell locations that the skull can be in. Recall that for “pacman pathing”, the representation of the state was (x, y) where x was the row and y was the column of pacman’s position. Describe a representation of a state in the state space for this game and give an expression for the size of the state space.
- (ii) [3 pts] What is the goal test?
- (b) Now, consider the simplified board below, which has no skull and no facing direction for the agent (i.e., the agent can move in any of the 8 directions as long as it remains on the board). For each of the following three graph search algorithms, perform the search procedure yourself (please show your work) and answer the questions below about the nodes expanded during the search and the final path found by the algorithm.
On this board, assume that a diagonal move has a cost of 3, whereas moving
left, right, up, or down has a cost of 1. Do notice the difference in costs,
and recall which algorithms use this cost information and which algorithms do
not.
Remember that the search procedure should begin at the agent’s starting
position (C). To break ties when adding nodes of equal cost to the frontier,
follow alphabetical order.
Finally, when listing the order/number of nodes expanded, do not include nodes
that are taken off the frontier but discarded immediately because they have
already been expanded. (A generic implementation sketch covering all three
searches follows item (iii) below.)
- (i) [8 pts] Breadth-first graph search
Frontier data structure: FIFO queue.
Recall that BFS selects the shallowest unexpanded node in the frontier for
expansion, which will be the oldest node in the frontier.
- (ii) [8 pts] Uniform-cost graph search
Frontier data structure: priority queue (make sure you update/reorder the
whole frontier after each addition)
Recall that UCS keeps track of the lowest cost, g(v), to get from the start
node to the node v.
- (iii) [8 pts] A* graph search (with Manhattan distance to the goal as the heuristic)
Frontier data structure: priority queue (make sure you update/reorder the
whole frontier after each addition)
Recall that A* computes f(v) for the nodes v that it expands, with f(v) =
g(v) + h(v) where g(v) is the lowest cost to reach v from the start node and
h(v) is an estimate of the distance from v to the goal.
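Below is a minimal, self-contained Python sketch of graph search in which only the frontier ordering changes between BFS, UCS, and A*. Treat it as an illustration under assumptions, not official pseudocode: the `neighbors` function is a hypothetical placeholder you would implement from the board (returning (cell label, cost) pairs, with cost 1 for orthogonal moves and 3 for diagonal ones), and `coords` is a hypothetical dictionary mapping each lettered cell to its (row, col) position.

```python
import heapq

def manhattan(cell, goal):
    """Manhattan-distance heuristic, as used by A* in part (b)(iii)."""
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

def graph_search(start, goal, neighbors, priority):
    """Generic graph search; `priority(g, depth, state)` orders the frontier:
    depth -> BFS, g -> UCS, g + h(state) -> A*.  Equal-priority nodes pop
    in insertion order, so pushing successors alphabetically breaks ties
    the way the problem statement asks."""
    tie = 0                                   # insertion counter for tie-breaking
    frontier = [(priority(0, 0, start), tie, 0, 0, start, [start])]
    expanded, order = set(), []
    while frontier:
        _, _, g, depth, state, path = heapq.heappop(frontier)
        if state in expanded:
            continue                          # discarded, not counted as expanded
        expanded.add(state)
        order.append(state)                   # record the expansion order
        if state == goal:
            return order, path, g
        for nxt, cost in sorted(neighbors(state)):  # alphabetical successors
            if nxt not in expanded:
                tie += 1
                heapq.heappush(frontier,
                               (priority(g + cost, depth + 1, nxt), tie,
                                g + cost, depth + 1, nxt, path + [nxt]))
    return order, None, float('inf')          # goal unreachable

# Frontier orderings for the three searches:
bfs_priority = lambda g, depth, s: depth      # shallowest node first
ucs_priority = lambda g, depth, s: g          # lowest path cost g(v) first

def astar_priority(goal, coords):             # f(v) = g(v) + h(v)
    """`coords`: hypothetical map from cell labels to (row, col)."""
    return lambda g, depth, s: g + manhattan(coords[s], coords[goal])
```

For example, `graph_search('C', goal_label, neighbors, ucs_priority)` would return the expansion order, the path found, and its cost, which is what parts (i)-(iii) ask you to report.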
- (c) [10 pts] Given your answers above, what are the qualitative differences between the results achieved by BFS, UCS, and A*? Which one finds the shortest path (in number of steps)? Which one finds the optimal path (in cost)?
- (d) For the same board and setting as part (b), give an example of each of the following types of heuristics. Please briefly explain why the example you chose satisfies the requested properties. (The formal definitions are restated after this list for reference.)
- (i) [3 pts] Admissible and consistent.
Note: You can use a heuristic that we have frequently used in this class, or
you can just assign any set of numbers to the states that qualifies as an
admissible and consistent heuristic.
- (ii) [4 pts] Admissible but inconsistent
- (iii) [3 pts] Inadmissible and inconsistent
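For reference in parts (d) and (e), recall the standard definitions, where h*(v) is the true cost-to-go from v and c(v, a, v') is the cost of reaching v' from v via action a:

```latex
\text{Admissible:}\quad 0 \le h(v) \le h^{*}(v) \quad \forall v \\
\text{Consistent:}\quad h(v) \le c(v, a, v') + h(v') \quad \forall (v, a, v'),
\qquad h(\text{goal}) = 0
```

Consistency implies admissibility, so a consistent heuristic works for (i), while (ii) requires a heuristic that underestimates everywhere yet violates the triangle inequality on at least one edge.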
- (e) (i) [8 pts] For this new version of the game, your friend Nancy suggests taking the old game setting from part (b) and now adding the ability for the agent to perform a maximum of one “teleportation” action during the game. That is, on one of the agent’s moves, it can choose to jump from its current state to any other non-goal state on the board, and the cost of teleporting is 1.
- How does this new teleportation ability change the state space of the game from part (b), which was (x, y)? Does anything need to be added or removed?
- Nancy argues that in this new game, at least one previously consistent heuristic can become inconsistent.
Note: heuristics for this problem are defined as functions of only the cell
location; they cannot incorporate anything that did not exist in the old
version of the game we are comparing against.
If you believe Nancy is right, give an example of a heuristic that used to be
consistent in the old game but is no longer consistent in this new game. Make
sure to explain why it is no longer consistent (perhaps with a drawing of a
board state and an explanation).
If you believe Nancy is wrong, provide an argument for why a heuristic that
was consistent in the old game must also remain consistent in this new game.
Be specific about your reasoning and use mathematical quantities such as
heuristic costs of states h(v) and true costs of actions c(v, a, v’).
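One way to set up the analysis, whichever side you argue: teleportation adds, from every non-goal state v, an action to every other non-goal state v' with cost 1, so the consistency condition from part (d) must now hold across all such pairs as well:

```latex
h(v) \;\le\; c(v, \text{teleport}, v') + h(v') \;=\; 1 + h(v')
\qquad \text{for all non-goal } v,\, v'.
```

This constrains every pair of non-goal states directly, which is a much stronger requirement than the old per-edge condition.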