Uncertainty Visualization Study Group Notes – 11/5/2012
Discussion
Joslyn Paper
Overview: (these notes only augment, and do not regurgitate, information explicitly listed on the slides)
All participants had at least basic training in forecasting
Criterion for generating a warning: likelihood of winds over 20 kt (risk tolerance left to the participants)
Signal Detection Theory
Sensitivity – how good you are at actually detecting the effect
Response bias – how much more likely you are to say there is/isn’t an effect, regardless of whether the effect actually exists
A positive ‘C’ value corresponds to a conservative response bias; a negative ‘C’ value to a liberal response bias (see the sketch below for how these measures are computed)
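A minimal sketch of how sensitivity (d') and response bias (c) are typically computed from hit and false-alarm rates; the rates below are made-up numbers and the scipy dependency is an assumption, so this only illustrates the standard formulas, not the paper's own analysis:

    from scipy.stats import norm

    # Hypothetical rates for one participant (made-up numbers, illustration only)
    hit_rate = 0.80          # P(issued advisory | winds really exceeded 20 kt)
    false_alarm_rate = 0.30  # P(issued advisory | winds did not exceed 20 kt)

    z_hit = norm.ppf(hit_rate)         # z-transform of the hit rate
    z_fa = norm.ppf(false_alarm_rate)  # z-transform of the false-alarm rate

    d_prime = z_hit - z_fa             # sensitivity: how well the effect is detected
    c = -(z_hit + z_fa) / 2.0          # response bias: positive = conservative, negative = liberal

    print(f"d' = {d_prime:.2f}, c = {c:.2f}")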
[Bill] The analysis should be based on precisely the information given (the probability product). The answer is explicitly given, as long as you know the “threshold”, which makes this a strange test.
Answer is yes or no – Would you put up a wind advisory?
The probability product is not a quantification of uncertainty; it is giving the distilled answer.
The wind-speed advisory doesn’t have any predefined likelihood; it is based on risk as internally determined by the participants. Participants should simply be making a decision as to what that threshold is and sticking to it (a sketch of this decision rule follows below).
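A minimal sketch of the decision rule implied above; the function name and threshold value are hypothetical, standing in for each participant's internally chosen risk tolerance:

    def issue_advisory(prob_winds_over_20kt, threshold=0.30):
        # Post the wind advisory whenever the forecast probability of winds
        # over 20 kt meets the participant's own risk threshold; otherwise don't.
        return prob_winds_over_20kt >= threshold

    print(issue_advisory(0.45))  # True for this hypothetical threshold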
Clearly users are looking at other data, even when they shouldn’t
Maybe this is the entire point?
Regarding results thrown out because of translational errors between the two maps:
Why weren’t the map positions simply overlaid? Seems like unnecessary introduction of a possible source of error.
Map translation error not explicitly tested for; just assumed to be the natural source of the error
Compression effect seen in the decisions could be explained by the non-linear color coding
A non-linear response to a non-linear scale is not entirely unexpected (although the observed effect seems opposite of our intuitive expectation)
Should note: Same bias has been shown in other papers given explicit probability product values
At what ranges are people going to make different decisions? 10% binning? 20% binning?
Why are users not always posting a warning if the probability is listed as over 90%?
Are they interpreting the data differently?
Is a bias being introduced by the other data / model sources?
Is this the exhibition of a blind preference for deterministic forecasting (a tendency to use deterministic forecasting despite the question’s inherently probabilistic nature)?
Preliminary Experiments
Our simple distribution problem will have the same sorts of data-analysis issues
Have to answer: is this an effect in general, or an effect of the specific encodings employed in this particular visualization?
Administrativa
Where do we go from here as far as the readings are concerned?
Another cognition paper – specifics left to cognition group