CMSC 491M/691M - Spring 2003
Discussion Questions for Class #15, March 17
Reading: Emotion papers: Sloman, Concar, Archer
Sloman
- What is an emotion? What's the difference between an
emotion, an attitude, and a mood (according to
Sloman, at least)?
- What are primary, secondary, and tertiary emotions? Is this
distinction crisp or soft?
- What does Sloman mean by shallow vs. deep models of
emotion?
- What function do you think emotions serve in human thought and
cognition? What are the possible purposes of modeling human emotion?
For what purposes/applications are shallow vs. deep models appropriate?
- How could you evaluate a model of emotion? (That is, how would
you know whether you had a "good" model of emotion?)
- Similar to the "intentional stance," we can take the "emotional
stance" to ascribe emotions to agents. When and why might it be
appropriate to take such a stance?
- Can an agent have emotion without having consciousness? (We can
leave this discussion for Wednesday, when we talk about
consciousness...)
- What is the purpose of the horizontal and vertical layers in the
CogAff architectures? Where are emotions in this architecture? Do
they emerge from the organization of the system or are they imposed
onto the organization?
Concar
- Sloman says that Damasio's claim that intelligence requires
emotion is a fallacy; essentially, Sloman's argument is that this
conclusion is only one possible interpretation of the evidence.
Concar's article starts from the premise that Damasio is correct:
that feelings are needed (at least in humans) in order to make
judgments among alternatives. Without having read Damasio's actual
research, what are your intuitions on this question?
- The examples that Concar gives all seem like very shallow models
by Sloman's definition: recognizing emotional events from surface
features in a video, adapting to a human performer's musical style,
designing "emotional agents" to model simple social interactions.
Are these examples genuinely shallow, or are they just victims of the
"as soon as you model it computationally, the magic is gone" problem
that pervades AI?
Archer
- The focus in this article is not on building intelligent agents
with emotion, but on modeling human emotions to improve interactions
with an agent. Do you think that the same emotional models are
appropriate for these two tasks?
- Picard et al.'s system can recognize eight different emotions with
a reasonable degree of accuracy. Is this level of granularity
sufficient for sophisticated social interactions?
- Do you really want your computer monitoring your pulse rate and
skin conductance? Don't you think that would get awfully annoying?
(Isn't the paper clip already annoying enough?)