Normal Errors - Q&A
Q.: Why are the boats in the mental world and the real world (Fig. 1) on different sides? Wouldn't the captain keep them both in the same place in terms of left-right? Where are the Potomac River and the port in the figure? (i.e., it was not clear to me why he decided to turn.)
A.: Yes, the boats are depicted on different sides in the mental model (top panels) and real world (bottom panels) of Fig 1. Unlike roadways, there are no "lane lines" in shipping, and no "shoulders" that are close enough to tell which side you are on, so in passing you're never quite sure what side you'll be on until you actually do pass. The Captain's mental model is his perception of the real world, and perceptions do not always match "ground truth".
In Fig. 1, the Chesapeake Bay is the large mass of light gray in which the boats move. The Potomac River is the inlet on the left. The Captain (white boat) turned left to enter the river (panel t4, top and bottom) because he thought (panel t4, top) that's what the black boat would do. It turns out this was not such a good idea, because his mental model (panel t4, top) did not match the real world (panel t4, bottom).
Q.: When you mention predicting at t1, it does not actually mean considering the new evidence of the speed of approach. It is just making an assumption about it, am I correct? Then the actual evidence comes only at t2, but if you are not using the confidence of this evidence for your decision, why do you need to talk (or reason) about it?
A.: In "predicting at t1", the Captain is anticipating what he might see at t2. The evidence (speed of approach) does not actually come until t2, but what the captain eventually does see at t2 will depend on his mental model of what he thinks can "possibly" be seen. For example, you'll never see an alien if you don't believe in aliens. What the Captain sees at t2 may also depend somewhat on what he expects will "probably" be seen.
Q.: Do you mean that there is some kind of alert mechanism that would have thrown out the wrong hypothesis in stage 1 if the evidence had been a slow approach? Is that what you call "confirm the captain's expectation"?
A.: This is related to the so-called "confirmation bias", and one might say it is a form of "directed cognition". It also relates to "bounded rationality", since the Captain cannot gather an infinite amount of information at t2, so what he looks for (in directed cognition?) is biased (bounded?) by his beliefs at t1. In the case of the Cuyahoga, the Captain's most likely expectation (based on his mental models at t1) is that the ships will draw together rapidly. In my gray paper, this relates to the E part of EVE', where one establishes some "Expectations" for t+1 based on one's mental models at t. Then at t2, as you note, the actual evidence is that the ships indeed appeared to draw together rapidly. This evidence confirmed the Captain's Expectation (so there was no Violation V in EVE'), and it allowed him to refine his "Explanation" (E' in EVE') so that he was left with only one likely hypothesis, namely: It was a fishing vessel moving in the same direction. [Recall that there were two equally likely hypotheses in the previous "Explanation" at t1.]
Now, as you note, consider what WOULD have happened if the ships had instead been observed to draw together slowly (or not at all) at t2. This evidence would have been a Violation (V in EVE') of the Expectation (E in EVE') and would have led to a different Explanation (E'). Referring to my math, the Bayesian likelihoods for this evidence (ships draw together slowly) given each hypothesis (H1, H2, H3, H4) are: e, e, p, e. The Bayesian priors are n*p, n*e, n*p, n*e. So the Bayesian posteriors would be n*p*e, n*e*e, n*p*p, n*e*e, and the most likely Explanation (E' in EVE') of the Violation (V) in Expectations (E) would be H3 (n*p*p), that is: It is some other (non-fishing) vessel moving in the same direction.
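For readers who want to check this arithmetic, here is a minimal sketch in Python. The numeric stand-ins for the order-of-magnitude values n, p and e are my own illustrative assumptions (chosen only so that n > p >> e); the priors and likelihoods are the ones quoted above.

# Illustrative stand-ins for the order-of-magnitude confidence values
# (assumed here, not taken from the paper): n > p >> e.
n, p, e = 0.9, 0.1, 0.001

# Hypotheses H1..H4 as in the answer above (H1: fishing vessel moving in the
# same direction; H3: other, non-fishing vessel moving in the same direction;
# H2 and H4 are the remaining alternatives, not spelled out in this Q&A).
priors      = [n * p, n * e, n * p, n * e]   # beliefs at t1
likelihoods = [e, e, p, e]                   # P(ships draw together slowly | Hi)

# Unnormalized Bayesian posteriors: prior times likelihood.
posteriors = [pr * lk for pr, lk in zip(priors, likelihoods)]
print(posteriors)                                      # [n*p*e, n*e*e, n*p*p, n*e*e]
print(max(range(4), key=posteriors.__getitem__) + 1)   # -> 3, so H3 is most likely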
Q.: Talking about the more general context of our discussion and of flow: would pleasure, fun, etc. be mental models, or are they an appraisal of the mental models?
A.: Regarding pleasure or "flow", Goldilocks Theory says that pleasure arises from success in forming Expectations (E in EVE') and Explanations (E' in EVE'), via the postulated functions G and G'. As discussed in my gray paper, I propose something like 1-P as a measure of success in Expectations (E) and something like 1-P-R as a measure of success in Explanations (E') - where P and R are defined as Bayesian probabilities. My theory is that things like P and R are psychological "representations" (mental models), while pleasure is an emotional "manifestation" (feeling of flow) that arises from computation of these representations.
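Taken literally, the proposal above can be written as a tiny sketch (my paraphrase; the exact forms of the functions G and G' are left open here, and the values of P and R are placeholders, not data):

# P and R are Bayesian probabilities defined in the gray paper; the values
# below are placeholders for illustration only.
P, R = 0.3, 0.2
success_in_expectation = 1 - P       # fed to G  -> pleasure from Expectations (E)
success_in_explanation = 1 - P - R   # fed to G' -> pleasure from Explanations (E')
print(success_in_expectation, success_in_explanation)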
Q.: How essential is the geometrical aspect of mental models? If the model itself consisted of some logical derivations (say it was not the drawing but some logical representation of possible states), then it might have encompassed inside it all of the bounded rationality. In such a case, how do we go about separating the reasoning from the model? Is it by bounding the model as well?
A.: On "how to separate the reasoning from the model"... I'm not sure it can be done. In fact many people use the term "mental model" to refer to both mental representations and the mental computations that create and employ these representations. In my paper (Normal Errors) I used the terms "model" and "module" to make this distinction. But the distinction gets fuzzy, since the representations (geometric, lexical, etc.) will depend on the computations and the computations will depend on the representations.
Regarding the "drawing" (Fig. 1), I labeled the top panels "mental model" for brevity. A better title would be "artist's illustration of Captain's mental model, drawn to help readers follow the story". I'm not sure the Captain had a "mental picture" like Fig. 1 in his head (although he might have). More to the point of my paper, I'm claiming that the mental models in this case are the "representations" listed in Table 1, along with order-of-magnitude values (n, p, e), not necessarily the artist's drawing in Fig. 1. I thought this would be clear from the text and title of Table 1, but I see now how I may have confused readers with Fig. 1.
Q.: We learned from the Johnson-Laird paper that mental models represent one of several possibilities, usually focusing on one plausible "explanation" of the evidence, and that representing alternative models, such as false relations, is not "natural" to mental reasoning. Do we "jump to conclusions" by constructing mental models, and then cannot recognize our errors because the mental modules (stages of computation) that operate on the models are bounded?
A.: I can't speak for JL and I'm not sure where he stands on the "rationality" of mental models. But I suspect he would pretty much agree with me if I said, as I believe, that people are rational in the context of their mental models, but these models are always bounded - and that's the gist of "bounded rationality". I suspect that Simon would also pretty much agree with a statement like this, because the whole idea of "satisficing" is that subjective utilities bound the computation of subjective optimality. That is, if I don't think it's worth the extra effort to go from my "good enough" solution to an "optimal" solution, per my mental models of "effort" and "value", then I am bounded-rational when I stop at my good enough solution. Or, said the other way around, I would be irrational (in the context of my mental models) if I went further to compute the high-cost solutions that are rational (optimal) to a decision scientist whose model does not consider my cost-gain tradeoff.
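To make the satisficing point concrete, here is a small sketch of a stopping rule (my own illustration, not from the paper or from Simon): the searcher stops as soon as the expected improvement from examining one more candidate no longer exceeds the effort of doing so, where both quantities come from the searcher's own mental models.

def satisfice(candidates, gain, effort, expected_improvement):
    # gain, effort and expected_improvement stand in for the decision maker's
    # *mental models* of value and cost; they are assumptions for illustration.
    best = None
    for i, c in enumerate(candidates):
        if best is None or gain(c) > gain(best):
            best = c
        # Bounded-rational stop: further search is not worth the effort.
        if expected_improvement(i) <= effort(i):
            break
    return best

# Toy usage: diminishing expected improvement, constant effort per step.
solutions = [3, 5, 4, 9, 8, 10]
print(satisfice(solutions,
                gain=lambda c: c,
                effort=lambda i: 1.0,
                expected_improvement=lambda i: 4.0 / (i + 1)))
# Prints 9: a "good enough" stop, not the optimum (10) a cost-blind analyst would find.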
Q.: Can I say that the main difference between the two papers is that JL considers these mental errors irrational, while you claim that this is normative behavior, since it comes from naturally bounded mechanisms of otherwise normative (Bayesian) inference?
A.: In my case, the claim is not quite as you say here: "...while you claim that... the mental modules (stages of computation)... are bounded". Rather, the premise of my paper is that it is important to distinguish between "representations" (which I call "models") and the "computations" (which I call "modules") that take these representations as input and that form new representations as output (see Table 1). Then, with this distinction, I argue that human thinking is "bounded Bayesian" in the sense that the representations are bounded and the computations are Bayesian, at least to a first approximation.
Q.: Is that what you mean by referring to heuristics and biases?
A.: My claim (above) is different (I think) from most claims about Heuristics and Biases (H&B) by authors like Tversky and Kahneman (T&K), because H&B argues that people DO NOT reason according to the norms of Bayesian inference. So when you ask: "Is that what you mean by referring to heuristics and biases?", I would say "not quite". My point is that H&B assigns all the "boundedness" to the computations (heuristics) while downplaying (or ignoring) the representations. My approach differs in that I assume that the computations are Bayesian (at least to a first approximation) but bounded by the representations that they act on.
Q.: When you describe the mental models by graphs, it suggests to me spatial and temporal reasoning. But if the captain was indeed reasoning this way, wouldn't he probably have noted that there were discrepancies between the actual speed reading of his boat and the speed of the approaching boat? Also, shouldn't he have realized that the Potomac River was close, so that he would have to turn in a way that cut across the fishing boat, etc.?
A.: When you say: "In your paper you describe the mental models by graphs,which suggest spatial and temporal reasoning", I would clarify as follows.First, I do think that mental models are often used for spatial and temporal reasoning, and in fact my paper talks about this under the name of "mental simulation". But, I don't think that I "described mental models by graphs". Of course I did use Fig. 1 to help tell the story and I did use Fig. 3 tomake distinctions between levels/strata and stages/epochs of mental modelsand mental modules. But the mental models themselves that I list in Table 1and analyze in the text are representations of Confidence(order-of-magnitude probability), Hypotheses and Evidence - not really"graphs".
When you say "the Captain ... should have realized... etc.", well that'sthe whole point of the paper! That is, to someone like us (in hindsight) itappears that the Captain should have. But he didn't and the question is "whynot"? Referring to the illustrations in Fig. 1, which are simplified/summarydepictions of the Captain's mental model (compared to the real world), theanswer to the question is this: The Captain turned because he figured thatthe other boat would turn and hence he thought that his turn would avoid acollision (top panels). In fact his turned caused a collision (bottompanels).
Q.: Is your claim then that the fact that the captain did not maintain an alternative hypothesis (contrary to a perfect Bayesian) is due to bounds on reasoning?
A.: Of course we all know that computations (as well as representations) are bounded, simply because cognitive resources are not infinite. So my claim is really one about where scientists should look for the bounds first - in the representations (mental models) or in the computations (mental modules)? And my approach, similar to JL (I think) but different from H&B (I think), is that scientists should look first and foremost at the representations simply because these representations are the input to the computations! In short, why assume irrational computations from people who are usually rational when you can predict the same results (irrational behavior) in terms of rational (Bayesian) computations acting on incomplete (bounded) representations?
The value of my approach is that it allows me to explain both Bayesian and non-Bayesian reasoning with a single (Bayesian) framework. In fact I did an experiment on this, and found that people were sometimes Bayesian and sometimes non-Bayesian on THE SAME PROBLEM, depending on how the PROBLEM WAS FRAMED. Since the basic computational problem is the same, and the framing affects only the representational format (mental model), this experiment shows that non-Bayesian reasoning is driven (at least in some cases) by bounded representations (in mental models) rather than non-Bayesian computations (in mental modules). I'll send you more on the experiment in a separate email.
Q.: But wouldn't JL's counterfactual reasoning take that into account? I.e., if the captain had been taught beforehand about possible errors in direction/speed judgments, wouldn't he have reasoned out the whole situation differently?
A.: Regarding "counterfactual reasoning"... A bounded Bayesian (as I define him) is someone who develops Confidence in a roughly Bayesian manner about Hypotheses based on Evidence. The "bounds" arise because he can't possibly represent all Hypotheses and he can't possibly gather all Evidence - simply because he has finite resources for attention, memory and processing. Thus, at a higher stratum of abstraction, a decision maker has to make decisions about what Hypotheses he should entertain as well as what Evidence he should
try to collect so that he can do his bounded Bayesian thing at a lower stratum. I have only a short discussion of this in my paper because the paper was already too complex for some readers - so I limited the analysis
to one stratum of abstraction as shown in Figure 3. But the next higher stratum is where "counterfactual" reasoning would come in. For example, one Hypothesis that the Captain might consider is that aliens will fly down from space and do one of a zillion bad things to his boat. Should the Captain consider all these zillion possibilities? Should he spend his limited time and resources looking for Evidence of alien tampering? I would say no because the probability of aliens is essentially zero. Well the same is true of many other "counterfactuals" that the Captain might consider, so he needs some method for selecting what Hypotheses to consider and what sorts of Evidence to look for. And Bayesian methods are again useful here, for analyzing how a decision maker's reasoning at a higher stratum bounds that decision maker's reasoning at a lower stratum. So counterfactual reasoning is easily incorporated in the Bayesian framework, and in fact that's another reason why this framework is so useful.
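A minimal sketch of that higher-stratum bound (my own illustration, with made-up names and numbers): the higher stratum prunes hypotheses whose prior plausibility is essentially zero, and only the survivors are ever represented - and updated on - at the lower stratum.

def higher_stratum_select(hypotheses, priors, threshold):
    # Higher-stratum decision: which hypotheses are worth representing at all?
    return [h for h, prior in zip(hypotheses, priors) if prior >= threshold]

# Illustrative order-of-magnitude priors (assumed, not from the paper).
hypotheses = ["fishing vessel, same direction", "other vessel, same direction",
              "alien tampering"]
priors     = [0.09, 0.009, 1e-9]

kept = higher_stratum_select(hypotheses, priors, threshold=1e-4)
print(kept)  # the "aliens" counterfactual never reaches the lower-stratum update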