If you wish, you can now enter new observations. If you do not specify any observation, the agent will assume that everything went according to its expectations, and will declare that it achieved its goal after a few iterations of the loop.
However, let us make things more interesting once again: let us simulate a fault
in bulb 1. To do this, we will enter the observation that on(b(1)) is false, i.e.
that the bulb did not come on, even though the agent flipped switch 1. You will
need to click on "Set Observations", select "on(b(1))" and click "Set False"
(alternatively, you can just double-click on on(b(1)) until "false" appears).
Notice that the Enter Observations Dialog now contains three columns. The right-most column shows the values that the agent expects the fluents to have. There may be several such columns, one for each trajectory defined by the plan that the agent generated (there is at most one if the domain is deterministic).
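To make the notion of "expected values per trajectory" concrete, here is a minimal sketch in Python. It is not the tool's actual representation; the toy transition function (flipping switch i turns bulb i on) and all names are assumptions for illustration only. In a deterministic domain like this one, the plan yields a single trajectory of expected states.

```python
def apply(state, action):
    """Toy deterministic transition: flip(sw(i)) turns bulb i on (assumed model)."""
    new_state = dict(state)
    if action.startswith("flip(sw("):
        i = action[len("flip(sw("):-2]  # extract the switch index
        new_state[f"on(b({i}))"] = True
    return new_state

def trajectory(initial_state, plan):
    """Sequence of expected states when the plan executes without faults."""
    states = [initial_state]
    for action in plan:
        states.append(apply(states[-1], action))
    return states

init = {"on(b(1))": False, "on(b(2))": False}
print(trajectory(init, ["flip(sw(1))", "flip(sw(2))"]))
```

Each state in the returned list corresponds to one column of expected fluent values; a nondeterministic domain would produce several such lists, one per trajectory.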
Now you can close the dialog by clicking on "Close". The new observation will appear
in the History Window. Let the agent iterate again by clicking "Iterate".
The Log Window shows that the agent detected a contradiction between its expectations and the observations that we entered. The discrepancy can be explained by hypothesizing that brk(b(1)) occurred, unobserved, at time 0. In order to make sure that this is the case, the agent needs additional observations about fluent ab(b(1)). In fact, brk(b(1)) would cause the fluent to become true.
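The diagnostic step above can be sketched as follows. This is an illustrative Python sketch, not the tool's implementation: the effect table for brk(b(1)) and the function names are assumptions. The idea is that an unobserved exogenous action is a candidate explanation when its effects match the observed mismatch, and any of its side effects not yet observed (here, ab(b(1))) can serve as a test fluent.

```python
# Expected vs. observed fluent values after flipping switch 1 (from the scenario).
expected = {"on(b(1))": True}
observed = {"on(b(1))": False}

# Candidate exogenous actions and their effects (hypothetical action model).
exogenous_effects = {
    "brk(b(1))": {"ab(b(1))": True, "on(b(1))": False},
}

def diagnose(expected, observed, exogenous_effects):
    """Return (action, test_fluents) pairs that would explain the mismatch."""
    mismatches = {f for f in expected if observed.get(f) != expected[f]}
    hypotheses = []
    for action, effects in exogenous_effects.items():
        # The action is a candidate if its effects agree with every observed mismatch.
        if all(effects.get(f) == observed[f] for f in mismatches if f in effects):
            # Effects we have not observed yet are fluents worth testing.
            tests = [f for f in effects if f not in observed]
            hypotheses.append((action, tests))
    return hypotheses

print(diagnose(expected, observed, exogenous_effects))
```

Running the sketch singles out brk(b(1)) as the explanation and ab(b(1)) as the fluent whose value the agent asks about next, mirroring the Fluent Test Dialog.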
So, let us answer "Yes" in the Fluent Test Dialog. The agent updates its knowledge with the observation given, together with the occurrence of action brk(b(1)) at time 0. Then, it computes a new plan to achieve goal "allBulbsOn" and decides on the next action to perform.
At this point the bulb has been repaired. The agent is planning to perform flip(sw(2)) at time 2, and flip(sw(3)) at time 3 to achieve the goal. Let us click "Iterate" once...
...and once more.
After performing flip(sw(3)), the agent expects to have achieved its goal. You can see this by looking at the Trajectories Window:
or by clicking on "Set Observations" in the History Window.
Of course, you can add further observations to inform the agent that the goal was not really achieved. Otherwise, if you leave things the way they are and click "Iterate" once again, the agent will inform you that it has achieved its goal and that there is no action it needs to perform.
You completed this section. Click here to move to the next section. You can also go back to the main page.