Monday, May 02, 2005

Fwd: Book on Fun


From: "Kevin J. Burns"

Shlomo: When I was in the U.K., at the IEEE conference on Computational Intelligence in Games, I learned of a book titled "A Theory of Fun for Game Design" by Raph Koster. He has a degree in English/writing and also studied music and art. He never studied programming, but I guess he is considered a guru of game design. He is currently Chief Creative Officer at Sony Online Entertainment (in San Diego).

The book is an easy read with every other page in cartoons with captions. It's mostly common sense, I think, but I guess game developers are eating it up - maybe because sense is not so common among them. FYI, in a nutshell Koster says:

(1) Games and fun are important because they can tell us a lot about learning. (pg 2)
(2) Games are most fun when they are at the right level, not too hard or easy, and this level changes as players learn. Games are also most fun when they are the right genre to match the personal preferences (style?) of the player (e.g., spatial, social, etc.). [This is, of course, the common sense that underlies my Goldilocks Theory.] (pgs 10, 132)

(3) "Games are puzzles to solve, just like everything else we encounter in life." (pg 34)


(4) "Fun from games arises out of mastery. It arises out of comprehension. It is the act of solving puzzles that makes games fun." [This is, of course, the common sense that underlies my EVE' Model.] (pg 40)

(5) "In some ways games can be compared to music... Music excels at conveying only a few things - emotion being paramount among them. Games do very well at active verbs: controlling, projecting, surrounding, matching, remembering, counting, and so on. By contrast, literature can tackle all of the above and more." [Here I agree that games can be compared to music, but I disagree about the "few things" in music. I think music involves a LOT of controlling, projecting, surrounding, matching, remembering, counting, etc.] (pg 64)

(6) "... fun isn't flow. You can find flow in countless activities, but they aren't all fun. Most of the cases where we typically cite flow relate to exercising mastery, not learning." [And yet he says earlier that "fun from games arises out of mastery". So I don't get his point.] (pg 98)


(7) "To recap... Games aren't stories. Games aren't about beauty or delight... They stand, in their own right, as something incredibly valuable. Fun is about learning in a context where there is no pressure, and that is why games matter. [Here I would disagree because I think pressure matters a lot. Maybe the pressure in games is self-imposed, but it's still pressure.] (pg 98)


(8) "Emergent behavior is a common buzzword. The goal is new patterns that emerge spontaneously out of the rules allowing players to do things that the designer did not foresee... it usually makes games easier, often by generating loopholes and exploits." (pg 128)

(9) "We also hear a lot about storytelling. However, most games melded with stories tend to be Frankenstein monsters. Players tend to either skip the story or skip the game." (pg 128)

Monday, April 25, 2005

Some links on affordances

We have not considered affordances as a topic in the seminar, but here are some interesting links, specifically relating to conceptual models in design (and games?).

http://www.jnd.org/dn.mss/affordance_conventi.html
http://www.jnd.org/dn.mss/affordances-and-design.html

Classification Performance Measures

The Li & Ogihara paper "Detecting Emotion in Music" mentions several criteria for evaluating their classifier. Here is a short summary of the terms they use.


General:
The experimental evaluation of a classifier usually measures its ability to make the right classification decisions. After a classifier is constructed using a training set, its effectiveness is evaluated on a separate test set.
The following counts are computed for each category ci:
–TPi: true positives
TP w.r.t. category ci is the set of documents that both the classifier and the previous judgments (as recorded in the test set) classify under ci
–FPi: false positives
FP w.r.t. category ci is the set of documents that the classifier classifies under ci, but the test set indicates that they do not belong to ci
–TNi: true negatives
TN w.r.t. ci is when both the classifier and the test set agree that the documents in TNi do not belong to ci
–FNi: false negatives
FN w.r.t. ci is when the classifier does not classify the documents in FNi under ci, but the test set indicates that they should be classified under ci

Precision: TPi / (TPi+FPi)
Recall: TPi / (TPi + FNi)

See also http://www.hsl.creighton.edu/hsl/Searching/Recall-Precision.html


A classifier should be evaluated by means of a measure which combines recall and precision.

Example: The trivial acceptor (each document is classified under every category) has recall = 1. In this case, precision would usually be very low.

Some combined measures:
–the breakeven point: the value where precision equals recall
–F1 measure: 2*Prec*Rec / (Prec + Rec)

Example:
The breakeven point of a classifier is always less than or equal to its F1 value.
For the trivial acceptor, Prec -> 0 while Rec = 1, so F1 = 2*Prec/(Prec + 1) -> 0.
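
To make these definitions concrete, here is a minimal Matlab sketch (the labels are made up for illustration, not taken from the Li & Ogihara experiments) that computes the counts and measures for a single category ci:

% hypothetical ground truth and predictions for one category ci
% (1 = document belongs to ci, 0 = it does not)
truth = [1 1 0 0 1 0 1 0 0 1];
pred  = [1 0 0 1 1 0 1 1 0 0];

TP = sum(pred == 1 & truth == 1);    % true positives  (here: 3)
FP = sum(pred == 1 & truth == 0);    % false positives (here: 2)
TN = sum(pred == 0 & truth == 0);    % true negatives  (here: 3)
FN = sum(pred == 0 & truth == 1);    % false negatives (here: 2)

Prec = TP / (TP + FP);               % precision = 0.6
Rec  = TP / (TP + FN);               % recall    = 0.6
F1   = 2*Prec*Rec / (Prec + Rec);    % F1        = 0.6

Setting pred = ones(1,10) reproduces the trivial acceptor: Rec = 1 but Prec = 0.5 on this sample, and Prec (and hence F1) shrinks toward 0 as documents belonging to ci become rare.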

Normal Errors - Q&A

Q.: Why are the boats in the mental and real worlds (Fig. 1) on different sides? Wouldn't the captain keep them both in the same place in terms of left-right? Where are the Potomac River and the port in the figure? (I.e., it was not clear to me why he decided to turn.)

A.: Yes, the boats are depicted on different sides in the mental model (top panels) and real world (bottom panels) of Fig 1. Unlike roadways, there are no "lane lines" in shipping, and no "shoulders" that are close enough to tell which side you are on, so in passing you're never quite sure what side you'll be on until you actually do pass. The Captain's mental model is his perception of the real world, and perceptions do not always match "ground truth".
In Fig. 1, the Chesapeake Bay is the large mass of light gray in which the boats move. The Potomac River is the inlet on the left. The Captain (white boat) turned left to enter the river (panel t4, top and bottom) because he thought (panel t4, top) that's what the black boat would do. It turns out this was not such a good idea, because his mental model (panel t4, top) did not match the real world (panel t4, bottom).

Q.: When you mention predicting at t1, that does not actually mean considering the new evidence (the speed of approach); it is just making an assumption about it, am I correct? The actual evidence comes only at t2, but if you are not using the confidence of this evidence for your decision, why do you need to talk (or reason) about it?

A.: In "predicting at t1", the Captain is anticipating what he might see at t2. The evidence (speed of approach) does not actually come until t2, but what the captain eventually does see at t2 will depend on his mental model of what he thinks can "possibly" be seen. For example, you'll never see an alien if you don't believe in aliens. What the Captain sees at t2 may also depend somewhat on what he expects will "probably" be seen.

Q.: Do you mean that there is some kind of alert mechanism that would have thrown out the wrong hypothesis in stage 1 if the evidence had been a slow approach? Is that what you call "confirm the captain's expectation"?

A.: This is related to the so-called "confirmation bias", and one might say it is a form of "directed cognition". It also relates to "bounded rationality", since the Captain cannot gather an infinite amount of information at t2, so what he looks for (in directed cognition?) is biased (bounded?) by his beliefs at t1. In the case of the Cuyahoga, the Captain's most likely expectation (based on his mental models at t1) is that the ships will draw together rapidly. In my gray paper, this relates to the E part of EVE', where one establishes some "Expectations" for t+1 based on one's mental models at t.

Then at t2, as you note, the actual evidence is that the ships indeed appeared to draw together rapidly. This evidence confirmed the Captain's Expectation (so there was no Violation V in EVE') and it allowed him to refine his "Explanation" (E' in EVE') so that he was left with only one likely hypothesis, namely: it was a fishing vessel moving in the same direction. [Recall that there were two equally-likely hypotheses in the previous "Explanation" at t1.]

Now, as you note, consider what WOULD have happened if the ships had instead been observed to draw together slowly (or not at all) at t2. This evidence would have been a Violation (V in EVE') of the Expectation (E in EVE') and would have led to a different Explanation (E'). Referring to my math, the Bayesian likelihoods for this evidence (ships draw together slowly) given each hypothesis (H1, H2, H3, H4) are: e, e, p, e. The Bayesian priors are n*p, n*e, n*p, n*e. So the Bayesian posteriors would be n*p*e, n*e*e, n*p*p, n*e*e, and the most likely Explanation (E' in EVE') of the Violation (V) in Expectations (E) would be H3 (n*p*p), that is: it's an other (not fishing) vessel moving in the same direction.
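
To make the arithmetic above easy to check, here is a minimal Matlab sketch. The numeric values for n, p and e are my own stand-ins; the paper works only with the symbolic orders of magnitude (n a normalizing constant, p "probable", e "exceptional", with e much smaller than p):

n = 1;                         % normalizing constant (assumed value)
p = 0.1;                       % "probable" order of magnitude (assumed value)
e = 0.001;                     % "exceptional" order of magnitude, e << p (assumed value)

priors = n * [p e p e];        % priors over H1..H4 at t1
like   = [e e p e];            % likelihood of "ships draw together slowly" under H1..H4
post   = priors .* like;       % posteriors: n*p*e, n*e*e, n*p*p, n*e*e

[mx, best] = max(post)         % best = 3, i.e. H3 is the most likely Explanation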

Q.: Talking about the more general context of our discussion and of flow: would pleasure, fun, etc. be a mental model, or an appraisal of the mental models?

A.: Regarding pleasure or "flow", Goldilocks Theory says that pleasure arises from success in forming Expectations (E in EVE') and Explanations (E' in EVE'), via the postulated functions G and G'. As discussed in my gray paper, I propose something like 1-P as a measure of success in Expectations (E) and something like 1-P-R as a measure of success in Explanations (E') - where P and R are defined as Bayesian probabilities. My theory is that things like P and R are psychological "representations" (mental models), while pleasure is an emotional "manifestation" (feeling of flow) that arises from computation of these representations.

Q.: How essential is the geometrical aspect of mental models? If the model itself consisted of some logical derivations (say, not the drawing but some logical representation of possible states), then it might have encompassed all of the bounded rationality inside itself. In that case, how do we go about separating the reasoning from the model? Is it by bounding the model as well?

A.: On "how to separate the reasoning from the model"... I'm not sure it can be done. In fact many people use the term "mental model" to refer to both mental representations and the mental computations that create and employ these representations. In my paper (Normal Errors) I used the terms "model" and "module" to make this distinction. But the distinction gets fuzzy, since the representations (geometric, lexical, etc.) will depend on the computations and the computations will depend on the representations.
Regarding the "drawing" (Fig. 1), I labeled the top panels "mental model" for brevity. A better title would be "artist's illustration of Captain's mental model, drawn to help readers follow the story". I'm not sure the Captain had a "mental picture" like Fig. 1 in his head (although he might have). More to the point of my paper, I'm claiming that the mental models in this case are the "representations" listed in Table 1, along with order-of-magnitude values (n, p, e), not necessarily the artist's drawing in Fig. 1. I thought this would be clear from the text and title of Table 1, but I see now how I may have confused readers with Fig. 1.

Q.: We learned from the Johnson-Laird paper that mental models represent one of several possibilities, usually focusing on one plausible "explanation" of the evidence, and that representing alternative models, such as false relations, is not "natural" to mental reasoning. Do we "jump to conclusions" by constructing mental models, and then cannot realize our errors because the mental modules (stages of computation) that operate on the models are bounded?

A.: I can't speak for JL and I'm not sure where he stands on the "rationality" of mental models. But I suspect he would pretty much agree with me if I said, as I believe, that: people are rational in the context of their mental models, but these models are always bounded - and that's the gist of "bounded rationality". I suspect that Simon would also pretty much agree with a statement like this, because the whole idea of "satisficing" is that subjective utilities bound the computation of subjective optimality. That is, if I don't think it's worth the extra effort to go from my "good enough" solution to an "optimal" solution, per my mental models of "effort" and "value", then I am bounded-rational when I stop at my good-enough solution. Or, said the other way around, I would be irrational (in the context of my mental models) if I went further and computed the high-cost solutions that are rational (optimal) to a decision scientist whose model does not consider my cost-gain tradeoff.

Q.: Can I say that the main difference between the two papers is that JL considers these mental errors irrational, while you claim that this is normative behavior, since it comes from naturally bounded mechanisms of otherwise normative (Bayesian) inference?

A.: In my case, the claim is not quite as you say here: "...while you claim that... the mental modules (stages of computation)... are bounded". Rather, the premise of my paper is that it is important to distinguish between "representations" (which I call "models") and the "computations" (which I call "modules") that take these representations as input and form new representations as output (see Table 1). Then, with this distinction, I argue that human thinking is "bounded Bayesian" in the sense that the representations are bounded and the computations are Bayesian, at least to a first approximation.

Q.: Is that what you mean by referring to heuristics and biases?

A.: My claim (above) is different (I think) from most claims about Heuristics and Biases (H&B) by authors like Tversky and Kahneman (T&K), because H&B argues that people DO NOT reason according to the norms of Bayesian inference. So when you ask: "Is that what you mean by referring to heuristics and biases?", I would say "not quite". My point is that H&B assigns all the "boundedness" to the computations (heuristics) while downplaying (or ignoring) the representations. My approach differs in that I assume the computations are Bayesian (at least to a first approximation) but bounded by the representations that they act on.

Q.: When you describe the mental models by graphs, it suggests to me spatial and temporal reasoning. But if the captain was indeed reasoning this way, wouldn't he probably have noted that there were discrepancies between the actual speed reading of his boat and the speed of the approaching boats? Also, shouldn't he have realized that the Potomac River was close, so that he would have to turn in a way that cuts across the fishing boat, etc.?

A.: When you say: "In your paper you describe the mental models by graphs, which suggests spatial and temporal reasoning", I would clarify as follows. First, I do think that mental models are often used for spatial and temporal reasoning, and in fact my paper talks about this under the name of "mental simulation". But I don't think that I "described mental models by graphs". Of course I did use Fig. 1 to help tell the story, and I did use Fig. 3 to make distinctions between levels/strata and stages/epochs of mental models and mental modules. But the mental models themselves that I list in Table 1 and analyze in the text are representations of Confidence (order-of-magnitude probability), Hypotheses and Evidence - not really "graphs".

When you say "the Captain... should have realized... etc.", well, that's the whole point of the paper! That is, to someone like us (in hindsight) it appears that the Captain should have. But he didn't, and the question is "why not?". Referring to the illustrations in Fig. 1, which are simplified/summary depictions of the Captain's mental model (compared to the real world), the answer to the question is this: the Captain turned because he figured that the other boat would turn, and hence he thought that his turn would avoid a collision (top panels). In fact his turn caused a collision (bottom panels).

Q.: Is your claim then that the fact that the captain did not maintain an alternative hypothesis (contrary to a perfect Bayesian) is due to bounds on reasoning?

A.: Of course we all know that computations (as well as representations) are bounded, simply because cognitive resources are not infinite. So my claim is really one about where scientists should look for the bounds first - in the representations (mental models) or in the computations (mental modules)? And my approach, similar to JL (I think) but different from H&B (I think), is that scientists should look first and foremost at the representations, simply because these representations are the input to the computations! In short, why assume irrational computations from people who are usually rational, when you can predict the same results (irrational behavior) in terms of rational (Bayesian) computations acting on incomplete (bounded) representations?

The value of my approach is that it allows me to explain both Bayesian and non-Bayesian reasoning with a single (Bayesian) framework. In fact I did an experiment on this, and found that people were sometimes Bayesian and sometimes non-Bayesian on THE SAME PROBLEM, depending on how the PROBLEM WAS FRAMED. Since the basic computational problem is the same, and the framing affects only the representational format (mental model), this experiment shows that non-Bayesian reasoning is driven (at least in some cases) by bounded representations (in mental models) rather than non-Bayesian computations (in mental modules). I'll send you more on the experiment in a separate email.


Q.: But wouldn't JL's counterfactual reasoning take that into account? I.e., if the captain had been taught about possible errors in direction/speed judgments before, wouldn't he have reasoned out the whole situation differently?

A.: Regarding "counterfactual reasoning"... A bounded Bayesian (as I define him) is someone who develops Confidence in a roughly Bayesian manner about Hypotheses based on Evidence. The "bounds" arise because he can't possibly represent all Hypotheses and he can't possibly gather all Evidence - simply because he has finite resources for attention, memory and processing. Thus, at a higher stratum of abstraction, a decision maker has to make decisions about what Hypotheses he should entertain as well as what Evidence he should
try to collect so that he can do his bounded Bayesian thing at a lower stratum. I have only a short discussion of this in my paper because the paper was already too complex for some readers - so I limited the analysis
to one stratum of abstraction as shown in Figure 3. But the next higher stratum is where "counterfactual" reasoning would come in. For example, one Hypothesis that the Captain might consider is that aliens will fly down from space and do one of a zillion bad things to his boat. Should the Captain consider all these zillion possibilities? Should he spend his limited time and resources looking for Evidence of alien tampering? I would say no because the probability of aliens is essentially zero. Well the same is true of many other "counterfactuals" that the Captain might consider, so he needs some method for selecting what Hypotheses to consider and what sorts of Evidence to look for. And Bayesian methods are again useful here, for analyzing how a decision maker's reasoning at a higher stratum bounds that decision maker's reasoning at a lower stratum. So counterfactual reasoning is easily incorporated in the Bayesian framework, and in fact that's another reason why this framework is so useful.

Monday, April 04, 2005

From Kevin Burns

At 11:21 AM 4/2/2005, Kevin J. Burns wrote:

Hi Shlomo:
THE BLOG
Thanks for including me! I've never blogged before so I'm looking forward to seeing how it works! You have a comprehensive reading list and it looks like a stimulating seminar. FYI, two easy-reading but insightful (I think) books that I've found on the subject of narrative are:
Steven Brams, (2003), "Biblical Games: Game Theory and the Hebrew Bible"
(MIT Press).
John Allen Paulos, (1998), "Once Upon a Number: The Hidden Mathematical Logic of Stories" (Basic Books).
On the topic of "computational aesthetics", one guy I didn't see on your reading list (maybe on purpose?) was Michael Leyton at Rutgers: http://www.rci.rutgers.edu/~mleyton/homepage.htm. His thing is "process grammars" (much like Stiny) and he runs the "International Society for Mathematical and Computational Aesthetics": http://www.rci.rutgers.edu/~mleyton/ISMA.htm. One of his papers is titled "Musical Works are Maximal Memory Stores": http://www.rci.rutgers.edu/~mleyton/music_theory4.pdf

Sunday, April 03, 2005

More on Sentics

As you have probably read in the Clynes paper, the author claims that there are typical time forms that represent human emotions. The paper does not contain any images of these mysterious shapes, so I thought I would provide them here:

[images of Clynes's sentic time forms]
The paper does give some mathematical equations that describe these shapes in terms of the so-called Laplace transform. For Matlab users, here is a short explanation of how the shapes are related to these equations:

(In order to run the following, you need the Symbolic Math Toolbox.)
Let's assume we want to plot the function F(s) = 2*s/(1+2*s)^3.
Type the following at the Matlab command prompt:

syms s                       % declare the Laplace-domain variable
F = 2*s/(1+2*s)^3;           % the Laplace-domain expression
f = ilaplace(F)              % inverse Laplace transform; no semicolon, so Matlab prints f


What you get is an analytical solution of the form
-1/16*t^2*exp(-1/2*t) + 1/4*t*exp(-1/2*t)

Now let us sample t and plot. Note the elementwise operators (.^ and .*), and that the curve has decayed essentially to zero by t = 20:

t = 0:0.1:20;
plot(t, -1/16*t.^2.*exp(-1/2*t) + 1/4*t.*exp(-1/2*t))

You get a shape that roughly resembles one of the curves.

What are the exact parameters? How can they be estimated from a time form? What are the applications of these forms and how can they be captured?
We welcome discussion on that....
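
As one possible starting point for the estimation question, the parameters of a time form like a1*t*exp(-b*t) + a2*t^2*exp(-b*t) could be fit to sampled data by nonlinear least squares. The sketch below simulates noisy samples of the curve plotted above and recovers the parameters with lsqcurvefit; the model form, noise level and starting guess are my own assumptions, and the Optimization Toolbox is required:

% model: f(t) = a1*t*exp(-b*t) + a2*t^2*exp(-b*t), with prm = [a1 a2 b]
model = @(prm, t) prm(1)*t.*exp(-prm(3)*t) + prm(2)*t.^2.*exp(-prm(3)*t);

t = 0:0.1:20;
y = model([1/4, -1/16, 1/2], t);      % the "true" curve from the example above
y = y + 0.002*randn(size(y));         % simulated measurement noise

prm0 = [1, -1, 1];                    % rough initial guess
prm  = lsqcurvefit(model, prm0, t, y) % should return roughly [0.25 -0.0625 0.5]

Fitted parameters from real measurements (e.g., finger-pressure traces) could then be compared against Clynes's published forms, which also speaks to the question of applications.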

Thursday, March 31, 2005

Suggested Readings

Music 206: Emotions, Style and Meaning…
Reading list with some annotations
The pdf files of most of the papers can be found at http://music.ucsd.edu/~sdubnov/Mu206/

Introduction:
Common sense in AI, Minsky’s ideas on Music, role of body in reasoning and Sentics

“The St. Thomas Common Sense Symposium: Designing Architectures for Human-Level Intelligence”, Marvin Minsky, Push Singh, and Aaron Sloman, AI Magazine, 2004

“Music, Mind, and Meaning”, Marvin Minsky, Computer Music Journal, 1981

“Building Brains for Bodies”, Rodney Brooks and Lynn Andrea Stein, Autonomous Robots, 1 (1994)

“Time-Forms, Nature's Generators and Communicators of Emotion”, Manfred Clynes, IEEE International Workshop on Robot and Human Communication, Tokyo, Japan, Sept. 1992.

Decision Making and Mental Models:
The relation between rational behavior and mental models, the behavioral aspects of decision making, and alternative models for game theory.

“Decision Making and Problem Solving”, by Herbert A. Simon et al., Research Briefings 1986
Greg

“Mental models: a gentle guide for outsiders”, P.N. Johnson-Laird, Vittorio Girotto, and Paolo Legrenzi, 1998
Arshia

“Behavioural studies of strategic thinking in games”, Colin F. Camerer, TRENDS in Cognitive Sciences Vol.7 No.5 May 2003
Mike / Joey

“Mental Models and Normal Errors”, Kevin Burns, 2002
Shlomo

“Game theory and the Cuban missile crisis”, Steven J. Brams, 2001
David

Complexity:
Complexity in nature, cognition and music

Not assigned yet

"The Architecture of Complexity", The Sciences of the Artificial, By Herbert Simon, 1969

“Cognitive complexity and the structure of musical patterns”, Jeff Pressing.

“Complexity measures of musical rhythms”, Shmulevich, I., Povel, D.J. (2000). In P. Desain & L. Windsor, Rhythm perception and production

“Complexity Measures for complex systems and complex objects”, Pablo Funes

Emotions:
Models of Emotions in rationality and in music

Not assigned yet

“Beyond Shallow Models of Emotion”, Aaron Sloman, Cognitive Processing, Vol. 2, No 1, 2001

“Rationality and the Emotions”, Jon Elster, The Economic Journal, 1996

“Detecting Emotion in Music”, Tao Li and Mitsunori Ogihara, 2003

“Disambiguating Music Emotion using Software Agents”, Dan Yang, WonSook Lee, 2004

Affective Processing:
Algorithms and methods for affect processing

David (all 4 papers)

“Digital Processing of Affective Signals”, Jennifer Healey and Rosalind Picard, ICASSP 98.

“Affective Content Detection using HMMs”, Hang-Bong Kang, MM’03

“Using audio features to model the affective response to music”, Marc Leman, Valery Vermeulen, Liesbeth De Voogdt, Dirk Moelants, ISMA2004

“Composing Affective Music with a Generate and Sense Approach”, Sunjung Kim and Elisabeth André, 2004

Improvisation and Emergence:

Greg (all 3 papers)

“Improvisation: Methods and Models”, Jeff Pressing, in Generative processes in music (ed. J. Sloboda), Oxford University Press, 1987

“The Mechanisms of Emergence”, R.K.Sawyer, Philosophy of Social Sciences, 2003

“Improvisational Cultures: Collaborative Emergence and Creativity in Improvisation”, R.K.Sawyer, Mind, Culture and Activity, 2000


Narrative:
Planning, form and narrative

Taken jointly by Michael and Joey

“Understanding Narrative is Like Observing Agents”, Guido Boella, Rossana Damiano, and Leonardo Lesmo, AAAI 1999

“Narrative Intelligence”, Michael Mateas and Phoebe Sengers, AAAI 1999

“Notes on the Use of Plan Structures in the Creation of Interactive Plot”, R. Michael Young, AAAI 99

“Improvisation and Narrative”, R.K.Sawyer, Narrative Inquiry, 2002

Style:
Learning surface features and local structures

“Style Machines”, Matthew Brand and Aaron Hertzmann, SIGGRAPH 2000
Arshia

“Machine Learning of Musical Style”, Dubnov and Assayag, Computer Magazine, 2002
Shlomo

“Separating Style and Content”, J.B. Tenenbaum and W.T. Freeman, Adv. in NIPS, 1997
Arshia

“Style as a Choice of Blending Principles”, Joseph A. Goguen and D. Fox Harrell, 2004
Shlomo

Flow:
Is Flow an alternative for describing artistic experience?

“Quality of Experience in Virtual Environments”, Andrea Gaggioli, Marta Bassi, Antonella Delle Fave, 2003
Michael

“Improvisation Planning and Jam Session Design using concepts of Sequence Variation and Flow Experience”, Dubnov and Assayag, 2005
Shlomo

Musical Forces:
Physical metaphors and emotional forces

“Musical Forces and Melodic Expectations: Comparing Computer Models and Experimental Results”, Steve Larson, Music Perception, 2004
Arshia

“Influences of Large-Scale Form on Continuous Ratings in Response to a Contemporary Piece in a Live Concert Setting”, McAdams et al., Music Perception, 2004
Shlomo

“Structural and Affective Aspects of Music from Statistical Audio Signal Analysis”, Dubnov et al., JASIST 2005
Shlomo

Influential Aspects of Information:
Information is that which affects the listener

“Toward a Theory of Information Processing”, Sinan Sinanović and Don H. Johnson, 2004.
Arshia

“Spectral Anticipations”, Dubnov, 2005
Shlomo

Computational Media Aesthetics:
Semantic gap between features and aesthetics

Joey (all 3 papers)

“Bridging the Semantic Gap in Content Management Systems: Computational Media Aesthetics”, Chitra Dorai, Svetha Venkatesh

“Negotiating the semantic gap: from feature maps to semantic landscapes”, Rong Zhao, W.I. Grosky, Pattern Recognition 35 (2002)

“Where Does Computational Media Aesthetics Fit?”, Brett Adams, 2003

Total assignments so far: (as of April 4)
Arshia: 5
David: 5
Cristyn:
Joey: 3 + 5 jointly with Mike
Greg: 4
Mike: 1 + 5 jointly with Joey
Pei:
Shlomo: 3 + 3 of his own

What is it about?

Hi everyone,

I would like to draw your attention to, and hopefully interest you in, an upcoming seminar on Semantic Music Processing: Emotion, Style and Meaning... that I will be giving in the Spring 05 quarter. The main motivation for this seminar is a general feeling that most of today's approaches to the analysis and understanding of musical content do not actually describe how we feel about music or what we do when we create or engage in music making.
Problems of content analysis are mostly considered today in the context of retrieval and recognition, and are predominantly motivated by work in the visual domain. Accordingly, the common applications of semantic analysis are ones of indexing and annotation-based multimedia storage. This problem definition seems not to be fully adequate for media that do not have a clear denotation or semantics. Our perceptions of music, film or computer games are ones of experience and affect rather than schematic representation or categorization. In the seminar we would like to ask basic questions such as: What constitutes musical content? What is musical experience or affect? Is music making a rational decision-making process? Are there relations between composing or improvising music and games? What aspects of these processes could be formalized or programmed on a computer? And finally, what applications or new perspectives might such models and understanding offer us?
The seminar will involve reading and considering various aspects of musical and media theory in view of related cognitive and computational theories. We will consider both music-theoretic aspects and some of the more technical aspects of formalization in the domain of artificial intelligence research. What we mostly plan is to re-think, critically, the tasks and applications of content-based processing, with a focus on content creation, experimentation and interaction with the computer. Topics to be dealt with include music cognition, theories of metaphor and narrative, affective computing, emotions, style learning, machine improvisation, bounded rationality and possibly some interactive dramaturgy.

Suggested papers: see next posting.

cheers,
Shlomo