We do not always learn from new information we receive (read, see, or hear) from the outside world. Very often, we learn from our own failures and successes. For example, imagine Alex is a graduate student at Georgia Tech. He wants to take two courses this fall, but he is not sure which two courses are good options for him. He will automatically remove some courses from his list because he did not have a good experience in those courses. In this case, Alex learned that those courses are not suitable for him by taking his previous experience into account and learning from his failures.
The same idea can be applied to our soccer predictor agent. Of course, we cannot build an agent that always makes correct predictions. However, we can build an agent that learns from its own failures and successes and adapts its underlying model over time. One way to do this is to compare the agent's prediction against the final result and update the model accordingly. This may sound confusing, so let me give you an example.
Suppose there is a game between Germany and France, and we ask our agent to predict the outcome. Our agent predicts that Germany will win. However, after watching the game, we see that France won. In this case, our agent failed to predict the game correctly. What the agent can do now is go back and compare its prediction with the real result. Since the two differ, the agent needs to update its underlying cognitive model.
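The compare-and-update loop above can be sketched in code. This is a minimal illustration, not the agent's actual model: the rating-based prediction scheme, the `SoccerPredictor` class, and the fixed update step are all assumptions made for the example.

```python
class SoccerPredictor:
    """Toy predictor that learns from its own failures.

    Each team gets a numeric rating (an illustrative choice, not the
    agent's real cognitive model); the higher-rated team is predicted
    to win, and a wrong prediction shifts the ratings toward the
    observed outcome.
    """

    def __init__(self):
        self.ratings = {}  # team name -> numeric rating

    def rating(self, team):
        # Unseen teams start at a default rating of 1000.
        return self.ratings.setdefault(team, 1000.0)

    def predict(self, team_a, team_b):
        # Predict the team with the higher current rating.
        return team_a if self.rating(team_a) >= self.rating(team_b) else team_b

    def learn(self, team_a, team_b, actual_winner, step=50.0):
        # Compare the prediction with the real result; on a miss,
        # update the model by moving ratings toward the outcome.
        predicted = self.predict(team_a, team_b)
        if predicted != actual_winner:
            loser = team_a if actual_winner == team_b else team_b
            self.ratings[actual_winner] = self.rating(actual_winner) + step
            self.ratings[loser] = self.rating(loser) - step
        return predicted == actual_winner


# Replaying the Germany vs. France example (initial ratings are made up):
agent = SoccerPredictor()
agent.ratings = {"Germany": 1050.0, "France": 1000.0}
print(agent.predict("Germany", "France"))   # the agent predicts Germany
agent.learn("Germany", "France", "France")  # France actually won: update
print(agent.predict("Germany", "France"))   # after updating: France
```

The key point is in `learn`: the agent only changes its model when its prediction and the real result disagree, which is exactly the failure-driven learning described above.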