Results 1 to 25 of 28
  1. #1
    Part-time epistemologist invisiblehand's Avatar
    Join Date
    Jun 2005
    Location
    Washington, DC
    My Bikes
    Jamis Nova, Bike Friday NWT, STRIDA, Austro Daimler Vent Noir, Haluzak Horizon, Salsa La Raza, Hollands Tourer, Bike Friday tikit
    Posts
    5,202
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)

    Evaluating advocacy and safety arguments in the pursuit of knowledge

    Sweet title to the thread ... right?

    A colleague of mine sent me this article.

    Others have already referenced cognitive biases. I thought that this document was particularly good at describing many of them. Given the nature of the discussions here and the general lack of good data to distinguish between alternative hypotheses in a convincing manner, the way we evaluate information and make decisions seems particularly important. Mind you, reading and understanding cognitive biases can make people more susceptible to them.

    Anyway, the intensity of forum members' beliefs given the rough statistical figures we have has always surprised me. My hope is that consideration for our own inherent flaws in reasoning will help the discussion move forward in a more constructive manner.

  2. #2
    ---- buzzman's Avatar
    Join Date
    Nov 2005
    Location
    Newton, MA
    Posts
    4,551
    Mentioned
    1 Post(s)
    Tagged
    0 Thread(s)
    fascinating article. and much of it directly applicable to many of the threads in A&S.

    I thought this quote was particularly applicable:

    "someone ...should know how terribly dangerous it is to have an answer in your mind before you finish asking the question."

  3. #3
    Part-time epistemologist invisiblehand's Avatar
    Join Date
    Jun 2005
    Location
    Washington, DC
    My Bikes
    Jamis Nova, Bike Friday NWT, STRIDA, Austro Daimler Vent Noir, Haluzak Horizon, Salsa La Raza, Hollands Tourer, Bike Friday tikit
    Posts
    5,202
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Quote Originally Posted by buzzman View Post
    fascinating article. and much of it directly applicable to many of the threads in A&S.

    I thought this quote was particularly applicable:

    "someone ...should know how terribly dangerous it is to have an answer in your mind before you finish asking the question."
    You read quickly Buzzman.

    I totally agree with your assessment on the section that discusses brainstorming on difficult issues.

    -G

  4. #4
    ---- buzzman's Avatar
    Join Date
    Nov 2005
    Location
    Newton, MA
    Posts
    4,551
    Mentioned
    1 Post(s)
    Tagged
    0 Thread(s)
    Quote Originally Posted by invisiblehand
    I totally agree with your assessment on the section that discusses brainstorming on difficult issues.

    if it's not worth it for some to wade through the entire piece (it's a bit dense at times!) the section on prior attitudes and attitude change is rather enlightening as well.

  5. #5
    Part-time epistemologist invisiblehand's Avatar
    Join Date
    Jun 2005
    Location
    Washington, DC
    My Bikes
    Jamis Nova, Bike Friday NWT, STRIDA, Austro Daimler Vent Noir, Haluzak Horizon, Salsa La Raza, Hollands Tourer, Bike Friday tikit
    Posts
    5,202
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Quote Originally Posted by buzzman View Post
    if it's not worth it for some to wade through the entire piece (it's a bit dense at times!) the section on prior attitudes and attitude change is rather enlightening as well.
    Look for confirmation bias, disconfirmation bias, motivated skepticism ...

  6. #6
    Banned
    Join Date
    Oct 2006
    Posts
    2,296
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Quote Originally Posted by invisiblehand View Post
    Sweet title to the thread ... right?

    A colleague of mine sent me this article.

    Others have already referenced cognitive biases. I thought that this document was particularly good at describing many of them. Given the nature of the discussions here and the general lack of good data to distinguish between alternative hypotheses in a convincing manner, the way we evaluate information and make decisions seems particularly important. Mind you, reading and understanding cognitive biases can make people more susceptible to them.

    Anyway, the intensity of forum members' beliefs given the rough statistical figures we have has always surprised me. My hope is that consideration for our own inherent flaws in reasoning will help the discussion move forward in a more constructive manner.
    Very interesting, although I'm not sure if I necessarily agree that it's fair to describe many of the phenomena as "biases".

    The author often appeals to the way human reasoning contradicts cut-and-dried statistical results, yet at the very heart of statistical theory you find the same kinds of heuristics. The very mathematical definition of probability is chosen to appeal to heuristic reasoning. If you pick a different heuristic, you get a different definition of probability, a different "statistical theory", and a different result. Many of these biases are presented as "errors", but that assumes that the original heuristics used in developing accepted statistical theory are absolute truth.

    To the credit of statistical theory, the precision and complexity with which its basic heuristics are carried out is quite impressive. However, the simple fact of the matter is that many real world scenarios don't meet the prerequisites of the heuristics used, and piecing together an indisputable statistical model within which data can be interpreted is a rare occurrence. Things like "availability", "hindsight", "conjunction", "confirmation", "anchoring", "affect", "scope neglect", and "overconfidence" can compensate for shoehorning data/information into an inappropriate statistical model for the sake of numerical computation. Indeed, other theories of analytical reason have been successfully applied with these kinds of heuristics as the very basis for their validity. The main thing setting traditional statistical theory apart is historical precedent, but even the classic "wild ass guess" surely trumps it in that regard.
    Last edited by makeinu; 01-18-08 at 12:21 PM.

  7. #7
    Part-time epistemologist invisiblehand's Avatar
    Join Date
    Jun 2005
    Location
    Washington, DC
    My Bikes
    Jamis Nova, Bike Friday NWT, STRIDA, Austro Daimler Vent Noir, Haluzak Horizon, Salsa La Raza, Hollands Tourer, Bike Friday tikit
    Posts
    5,202
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Quote Originally Posted by makeinu View Post
    The author often appeals to the way human reasoning contradicts cut-and-dried statistical results, yet at the very heart of statistical theory you find the same kinds of heuristics. The very mathematical definition of probability is chosen to appeal to heuristic reasoning. If you pick a different heuristic, you get a different definition of probability, a different "statistical theory", and a different result. Many of these biases are presented as "errors", but that assumes that the original heuristics used in developing accepted statistical theory are absolute truth.
    Hmmmm, do you have an example of this? It isn't clear to me what you are calling a heuristic in this sense.

    At the very core, the randomness of some process is described by some unobserved probability distribution. We can think of several ways to describe that distribution ... by its moments, a scatterplot, or histogram for instance ... which would be different methods to get information on this process. I would call this a heuristic. But I am not sure how one would go about getting a different statistical theory given the initial foundation of a probability distribution: the abstract notion that there is this thing that describes the randomness in the first place.
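    The several "ways to describe that distribution" mentioned above can be sketched directly. A minimal example, assuming samples from a hypothetical process (a standard normal here, chosen purely for illustration):

```python
import random
import statistics

# Draw samples from a hypothetical underlying process.
random.seed(42)
samples = [random.gauss(0, 1) for _ in range(10_000)]

# View 1: moments of the unobserved distribution, estimated from samples.
mean = statistics.mean(samples)          # first moment, near 0
variance = statistics.variance(samples)  # second central moment, near 1

# View 2: a crude histogram, counting samples in unit-wide bins.
bins = {}
for x in samples:
    bins[int(x // 1)] = bins.get(int(x // 1), 0) + 1
```

    Each view throws away information about the process in a different way, which is part of what makes them heuristics rather than the distribution itself.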

  8. #8
    Senior Member
    Join Date
    Aug 2007
    Location
    USA
    Posts
    502
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)

    Thanks for the post

    Quote Originally Posted by invisiblehand View Post
    ... A colleague of mine sent me this article...
    Thanks for posting the link to the article; I will be forwarding it to some of my colleagues. The sections I am familiar with from my profession are correct.

    One quick estimate when the numerator for an event is zero in a series of n trials: the upper bound of the 95% confidence interval is about three such events (the "rule of three", i.e., a true event rate of up to roughly 3/n). Thus if at work I'm told the instrument is now fixed because in the last 30 times the event did not occur, there is still about a 5% chance the event occurs 10% or more of the time and we were just 'lucky' it did not occur in those 30 times. Depending upon the event, that may or may not be good enough. Reference (there are really several): Newman TB. If almost nothing goes wrong, is almost everything all right? Interpreting small numerators. JAMA 1995; 274(13): 1013.
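    The arithmetic behind the rule of three is easy to check; a quick sketch using the same numbers as the example above (30 trials, 10% rate):

```python
# "Rule of three": with zero events observed in n trials, the upper
# bound of the 95% confidence interval on the event rate is about 3/n.
n = 30
upper_bound = 3 / n                  # 0.10, i.e. 10%

# Cross-check: if the true rate really were 10%, the probability of
# seeing zero events in 30 consecutive trials is small but not negligible.
p_zero = (1 - upper_bound) ** n      # about 0.042, near the ~5% cited
```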

    The other almost as easy (and sobering) calculation is how often a seemingly rare event happens over a large number of trials. It was somewhat helpful with new drivers in the family to point out that if the risk of an accident were, for example, "only" 1 in 10,000 per day, then over the course of 10 years at 365 days a year the chance it never happens is 0.9999^(10*365), or only about 0.69; in other words, roughly a 31% chance of at least one accident. If you think about how many intersections (or whatever higher-risk situation) you bike through per day, you've got to be really careful to avoid a collision over the course of many years.
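    The compounding in that example is a one-liner to verify:

```python
# Daily risk of 1 in 10,000, compounded over 10 years of daily exposure.
daily_risk = 1 / 10_000
days = 10 * 365
p_never = (1 - daily_risk) ** days   # chance of no accident in 10 years
p_at_least_one = 1 - p_never         # chance of at least one accident
# p_never is about 0.69, so p_at_least_one is about 0.31
```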

  9. #9
    Banned. Helmet Head's Avatar
    Join Date
    Mar 2005
    Location
    San Diego
    Posts
    13,075
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Man, this is a great paper. I like this:

    R appears in the third-letter position of
    more English words than in the first-letter position, yet it is much easier to recall words
    that begin with "R" than words whose third letter is "R". Thus, a majority of respondents
    guess that words beginning with "R" are more frequent, when the reverse is the case.
    Similarly, cooperative interactions between cyclists and motorists occur much more often than conflicts, yet it is much easier to recall instances of conflicts than instances of cooperation. Thus, many cyclists feel that conflicts are more frequent in traffic than are cooperative interactions.

    And then:

    Biases implicit in the availability heuristic affect estimates of risk.
    So, the bias to think conflicts are much more frequent than they actually are, affects the cyclist estimate of his risk in traffic.

    Unbelievable. I'm reading and commenting as I go along. This is the next sentence:
    However, asked to quantify risks more precisely, people severely overestimate
    the frequency of rare causes of death, and severely underestimate the frequency of
    common causes of death.
    In cycling, cyclists severely overestimate the frequency of cyclist-hit-from-behind deaths, and severely underestimate the frequency of deaths caused by cross-traffic collisions. And, based on observations, they ride accordingly.

    Wow, this gets better and better:

    Risks of human extinction may tend to be underestimated since, obviously, humanity has
    never yet encountered an extinction event.2
    And for the same reasons, risks of total economic collapse in America may tend to be underestimated since Americans have never yet encountered total economic collapse (hence the lack of caution when it comes to employing socialist solutions to address social ills).

    This is great insight too:

    Viewing history through the lens of hindsight, we vastly underestimate the cost of
    preventing catastrophe. In 1986, the space shuttle Challenger exploded for reasons
    eventually traced to an O-ring losing flexibility at low temperature. (Rogers et. al. 1986.)
    There were warning signs of a problem with the O-rings. But preventing the Challenger
    disaster would have required, not attending to the problem with the O-rings, but attending
    to every warning sign which seemed as severe as the O-ring problem, without benefit of
    hindsight.
    I think this applies in instances where a truck driver failed to notice a cyclist who pulled up and stopped in the blind spot next to his front right wheel before turning right.

    Some VC objectors (e.g., ILTB) try to accuse vehicular cyclists of falling victim to the black swan:
    Black Swans are an especially difficult version of the problem of the fat tails: sometimes most of the variance in a process comes from exceptionally rare, exceptionally huge events.
    This is essentially what is argued when one claims that while VC might reduce the likelihood of certain types of crashes, it increases the likelihood of the black swan: being hit from behind. Of course, vehicular cycling advocates respond by arguing that being hit from behind is actually less likely with VC too.

    This is simply awesome:
    Because of hindsight bias, we learn overly specific lessons. After September 11th, the U.S.
    Federal Aviation Administration prohibited box-cutters on airplanes. The hindsight bias
    rendered the event too predictable in retrospect, permitting the angry victims to find it the
    result of 'negligence' - such as intelligence agencies' failure to distinguish warnings of Al
    Qaeda activity amid a thousand other warnings. We learned not to allow hijacked planes
    to overfly our cities. We did not learn the lesson: "Black Swans occur; do what you can to
    prepare for the unanticipated."
    Of course, "do what you can to prepare for the unanticipated" is what "following the rules simply for the sake of following the rules" is all about, and not just in vehicular cycling. Indeed, it is why these rules are created in the first place.

    And this is what vehicular cyclists are up against, and probably explains better than anything else why there is so much animosity in response to it:
    It is difficult to motivate people in the prevention of Black Swans... Prevention is not easily
    perceived, measured, or rewarded; it is generally a silent and thankless activity.
    Just
    consider that a costly measure is taken to stave off such an event. One can easily compute
    the costs while the results are hard to determine. How can one tell its effectiveness,
    whether the measure was successful or if it just coincided with no particular accident? ...
    Job performance assessments in these matters are not just tricky, but may be biased in
    favor of the observed "acts of heroism". History books do not account for heroic
    preventive measures.
    After all, vehicular cycling amounts to little more than a set of "preventive measures".

    That's as far as I've gotten so far, but definitely saving this. Awesome stuff!

  10. #10
    Part-time epistemologist invisiblehand's Avatar
    Join Date
    Jun 2005
    Location
    Washington, DC
    My Bikes
    Jamis Nova, Bike Friday NWT, STRIDA, Austro Daimler Vent Noir, Haluzak Horizon, Salsa La Raza, Hollands Tourer, Bike Friday tikit
    Posts
    5,202
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Hmmmmm, the paper is more informative than imagined ...

    ... although I suspect that the post is a farce.

  11. #11
    Banned. Helmet Head's Avatar
    Join Date
    Mar 2005
    Location
    San Diego
    Posts
    13,075
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Which post is a farce? If you're referring to mine, it's not. I think the paper (at least what I read so far), is outstanding.

  12. #12
    Part-time epistemologist invisiblehand's Avatar
    Join Date
    Jun 2005
    Location
    Washington, DC
    My Bikes
    Jamis Nova, Bike Friday NWT, STRIDA, Austro Daimler Vent Noir, Haluzak Horizon, Salsa La Raza, Hollands Tourer, Bike Friday tikit
    Posts
    5,202
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Quote Originally Posted by Helmet Head View Post
    Which post is a farce? If you're referring to mine, it's not. I think the paper (at least what I read so far), is outstanding.
    OK Dokey. Sorry HH. It is difficult to tell via internet.

    -G

  13. #13
    Banned. Helmet Head's Avatar
    Join Date
    Mar 2005
    Location
    San Diego
    Posts
    13,075
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Quote Originally Posted by invisiblehand View Post
    OK Dokey. Sorry HH. It is difficult to tell via internet.

    -G
    Please read my comments, and the quoted sections, taking it seriously, which is how I intended it.

  14. #14
    Banned
    Join Date
    Oct 2006
    Posts
    2,296
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Quote Originally Posted by invisiblehand View Post
    Hmmmm, do you have an example of this? It isn't clear to me what you are calling a heuristic in this sense.

    At the very core, the randomness of some process is described by some unobserved probability distribution. We can think of several ways to describe that distribution ... by its moments, a scatterplot, or histogram for instance ... which would be different methods to get information on this process. I would call this a heuristic. But I am not sure how one would go about getting a different statistical theory given the initial foundation of a probability distribution: the abstract notion that there is this thing that describes the randomness in the first place.
    It turns out that at the very core randomness boils down to things even more basic than a probability distribution. You need to begin by considering set theory. It turns out that only certain kinds of sets can be interpreted as events with probabilities. In the simple case you can consider a finite number of discrete events, but from your use of the term "distribution" I'm sure you're aware that you can also consider probabilities for things which are infinite in number... or even uncountable. You'd be amazed at some of the ways you can mathematically describe observations, and it turns out that some of those ways cannot be assigned probability distributions (or at least not ones which make any sense). A lot of these things are rather neatly resolved when you represent your observations with the real number system, but I don't think I need to explain to you why eliminating some of the messier aspects of real numbers (like pi) by using other structures might be appealing. In fact it turns out that in order to have a probability space your observations need to be described by what is known in mathematical circles as a "sigma field" (for a full technical discussion I refer you to the book "Probability and Measure" by Patrick Billingsley). However, the most basic heuristic of probability theory is the presupposition that you can represent the uncertainty of every observation on a scale between 0 and 1. This fails from the get-go: you can't for some kinds of observations.

    Returning to the usual conventions of scientists and engineers by representing observations with real numbers, the situation can be resolved by making sure that any distribution you work with follows a number of simple rules. No doubt you are familiar with the fact that not just any function can be a probability distribution. It needs to satisfy certain properties. However, even at this level we need to start wrestling with more heuristics. For example, one rule says that the probability of the set of all possible outcomes must equal one. This satisfies the intuitive notion that we know something must happen (e.g., when I throw a ball it will hit the ground at some point). The heuristic is that since the probability scale is supposed to correspond to sureness, these kinds of events should correspond to a probability of 1. However, there is a problem. It turns out that within this framework an event can have probability 1 yet still not be guaranteed (see the statistical term "almost surely"). This obviously defies reason. Another reasonable heuristic would be that those and only those events which are guaranteed can have probability 1, but this is not the heuristic used in statistical theory and we are again forced to adopt an obviously false premise for the sake of a more extensive mathematical theory.
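    For the finite, discrete case, the rules being discussed are easy to make concrete. A hypothetical illustration (the outcomes and weights below are invented for the example, not taken from any data in the thread):

```python
# A hypothetical discrete distribution over traffic outcomes, built so
# that the axioms discussed above hold by construction.
weights = {"no conflict": 90, "near miss": 9, "collision": 1}
total = sum(weights.values())
dist = {event: w / total for event, w in weights.items()}

# The two rules: every probability is nonnegative, and they sum to one.
assert all(p >= 0 for p in dist.values())
assert abs(sum(dist.values()) - 1.0) < 1e-12
```

    The subtleties in the post above (events of probability 1 that are not guaranteed, sets that admit no probability at all) only arise once the outcome space becomes infinite, which is exactly why the finite case feels so unproblematic.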

    Now is when things really start to get hairy. Thus far we have used a number of heuristics to define observable events, each of which has some measure of sureness on a scale of 0 to 1 (however imperfect) known as probability. But what is the true concrete empirical meaning of probability so defined? How do we interpret probability? Statisticians are generally divided into two camps: Bayesians and Frequentists. Frequentists say that if you perform a random observation over and over and divide the number of affirmative results by the total number of observations performed, this quotient will get closer and closer to a number known as the probability, and that every probability corresponds to such a series of observations (at least in theory). Bayesians say, in a nutshell, that probability is any notion of certainty that you'd like it to be and that the relative frequency notion of the Frequentists is not necessary. Ironically enough, it turns out that no matter which road you choose to take, many results are the same or nearly the same, but from a fundamental standpoint of basic reason they both have their flaws. The Bayesian approach is essentially untestable and unfalsifiable because there is no way to measure it. The frequentist approach, on the other hand, while measurable, is not guaranteed to exist for a given random process (i.e., who says that the quotient will get closer to anything? Perhaps it will wander around several numbers). So we have a few more arbitrary heuristics. Do we arbitrarily assume that every sequence of random observations as described above must converge to a single number, or do we adopt an interpretation which is loose, unmeasurable, unscientific, and unprovable? Like the biases described by Yudkowsky, neither is 100% compatible with reason as we know it... but it gets worse.
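    The frequentist quotient described above can be watched in a simulation. A sketch for a fair coin (purely illustrative; the seed and checkpoints are arbitrary choices):

```python
import random

# Frequentist sketch: the running relative frequency of heads for a
# fair coin, recorded at a few sample sizes.
random.seed(1)
heads = 0
checkpoints = {10: None, 1_000: None, 100_000: None}
for n in range(1, 100_001):
    heads += random.random() < 0.5
    if n in checkpoints:
        checkpoints[n] = heads / n
# The quotient drifts toward 0.5 as n grows, but no finite run of flips
# can guarantee convergence; that is precisely the heuristic at issue.
```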

    Now you actually want to start using this statistical theory for practical applications. You have your arsenal of moments, scatterplots, histograms, etc. But did you know that some probability distributions don't even have certain moments? For example, the heavy tailed Cauchy distribution has no moments. They're all nonconvergent/undefined (mean, variance, etc). In fact, if I have my history correct, Cauchy specifically thought up this distribution just to be a pain in the ass to the rest of the statistical community. Although many of the problems arising from these kinds of distributions can be resolved, I assure you that reality is a much bigger and more effective pain in the ass. Despite what you may have heard from your statistics teacher, the central limit theorem does not guarantee that the only distribution you'll ever need is the Normal distribution. Like statistical theory's version of the "Black Swan bias", real life distributions often don't work with the kinds of simple statistics we'd like to compute. They behave in surprising ways.
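    The claim about the Cauchy distribution is easy to see numerically. A sketch, using the standard inverse-CDF trick to draw Cauchy variates (the sample sizes and seed here are arbitrary):

```python
import math
import random

def cauchy_means(n_samples, n_trials, seed=0):
    """Sample means of standard Cauchy draws, repeated n_trials times."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_trials):
        # Inverse-CDF method: tan(pi*(U - 1/2)) is standard Cauchy.
        draws = [math.tan(math.pi * (rng.random() - 0.5))
                 for _ in range(n_samples)]
        means.append(sum(draws) / n_samples)
    return means

# Unlike a Normal, the sample mean of a Cauchy is itself Cauchy: the
# means do not tighten up no matter how large n_samples gets.
means = cauchy_means(10_000, 20)
spread = max(means) - min(means)  # stays large; no convergence
```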

    In a similar vein, did you ever notice that outside of the gambling parlor very few random phenomena can be isolated into a few controlled variables? Even traditional parlor games can be affected by the problems above, but the truth is that in most real world applications there are simply too many important variables to write down the equations. It's easy to see when talking about cyclists: age, experience, self-selection, etc., all make it extremely difficult to attribute a cause to an effect, and details are often sacrificed for the ability to apply traditional, well established, and oversimplified equations. The general heuristic used by statisticians is to guess a probability model, take some measurements, and draw conclusions under the assumption that the model is correct. However, the complexity of the real world guarantees that the model is often incorrect, and it turns out that there exist theories built on different heuristics which, even if their heuristics are not 100% correct, can be shown to give better results at these levels of complexity (see computational learning theory or, in particular, the book "Statistical Learning Theory" by Vladimir Vapnik, which explains many of these concepts in a statistical context). Remember that, given the right circumstances, heuristics like "availability", "hindsight", "conjunction", "confirmation", "anchoring", etc., can each be good predictors of the correct answer, just as we all know the heuristics of statistical theory can. However, as it is in much of science, the key is not necessarily in choosing the heuristic which is the most justifiable, but the one which just so happens to work (or at least that's how I earn my lunch money).
    Last edited by makeinu; 01-18-08 at 05:09 PM.

  15. #15
    Senior Member
    Join Date
    Mar 2007
    Posts
    4,070
    Mentioned
    0 Post(s)
    Tagged
    1 Thread(s)
    Quote Originally Posted by makeinu View Post
    It turns out that at the very core randomness boils down to things even more basic than a probability distribution. [...] However, as it is in much of science, the key is not necessarily in choosing the heuristic which is the most justifiable, but the one which just so happens to work (or at least that's how I earn my lunch money).
    I have read through the article. I find that much of it is familiar to me, but there are parts that I do not understand. The posting above describes some of those parts, such as events, or sets of events, for which no probability is valid. There may well be such; I don't question those who are expert about such things. What I do question is whether such events or sets of events are relevant to our concerns regarding cycling or policies regarding cycling.

    I think that the description of Bayesian statistics that appears in the above post is not very accurate. Long ago, I taught an aspect of that subject, developed part of the theory, and wrote a text. As I see it, to make good use of Bayesian logic one has to start with an initial probability distribution, called the prior. If the subject is totally unknown, then one might start with a uniform distribution, or any other that appears to meet the known facts (in which case, one already knows something rather than nothing). Then one becomes faced with some definite information that has some probability. It might be a current measurement that is based on random selection from some population about which a decision is needed. That sample has a probability distribution known with reasonable accuracy. The Bayesian calculation combines the initial probability distribution with the sample probability distribution to produce a more realistic value for the population from which that sample has been taken. The results of that, and, indeed, the measurement of the full population if that becomes available, are then combined to produce the initial distribution for the next time that the decision is faced. If, indeed, there is a reasonable probability distribution of the values of the various specific populations, then, as experience and its information accumulate, the distribution becomes closer and closer to that of the infinite number of possible specific populations. Naturally, the closer the initially assumed distribution is to the real distribution, the fewer trials are required to achieve useful accuracy.

    As a kind of example, suppose that a supplier has provided goods with some reasonably acceptable level of quality. Then a sample is taken from a new shipment, and the sample shows very bad quality. The statistically most probable result is that the sample is more erroneous than the historic record, so that the probable level of quality is somewhat, but not much, reduced from the historic record. However, that is a warning sign that the new sample has been taken from a specific population that is not representative of the specific populations that make up the historic record. So watch out.
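    For what it's worth, that supplier example can be sketched numerically with a conjugate beta-binomial update (all the numbers below are hypothetical, chosen only to mirror the story):

```python
# Hypothetical numbers for the supplier example: the historic record is
# summarized as a Beta(950, 50) prior on the good-item rate (mean 0.95).
prior_good, prior_bad = 950, 50

# New shipment sample: 20 items, 10 of them defective -- very bad quality.
sample_good, sample_bad = 10, 10

# Conjugate update: just add the counts.
post_good = prior_good + sample_good
post_bad = prior_bad + sample_bad

prior_mean = prior_good / (prior_good + prior_bad)
post_mean = post_good / (post_good + post_bad)

# The posterior estimate drops only slightly (0.95 -> about 0.941): the
# "somewhat, but not much, reduced" behavior described above. The very bad
# sample is nonetheless the cue to ask whether this shipment really comes
# from the same population as the historic record.
print(prior_mean, post_mean)
```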

  16. #16
    Part-time epistemologist invisiblehand's Avatar
    Join Date
    Jun 2005
    Location
    Washington, DC
    My Bikes
    Jamis Nova, Bike Friday NWT, STRIDA, Austro Daimler Vent Noir, Haluzak Horizon, Salsa La Raza, Hollands Tourer, Bike Friday tikit
    Posts
    5,202
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Hah! Not quite what I expected; but yes, I am pretty familiar with statistics and enjoyed the Bayesian and Frequentist arguments in yet another former life. Mathematics began to get pretty fuzzy for me around complex analysis; but I understand the principles you are talking about and even know what the Cauchy distribution is ...

    Regardless, please don't ask me to explain what a sigma-field, a sigma-algebra, and so on are ...

    In a practical sense, a lot of the early points are really non-issues. Mind you, they are important, since there are a lot of mathematical concepts for which we just have not found a physical application; however, they are not very meaningful to the problems the vast majority of people face, nor do they affect the results of many analyses. Many of the assumptions regarding a PDF, for instance, are not just mathematical niceties but are related to repeated observations of phenomena.

    While there are some hardcore Bayesians and Frequentists who will piss in each other's pockets all day long, my experience is that when given the same data, they often come to identical conclusions: if their models are similar enough. In fact -- don't tell the Frequentists -- you can describe Frequentist statistics in a Bayesian framework by assuming a uniform prior. Interestingly, what I found is that the real difference between the two occurs when one wants to precisely model some complicated phenomenon. Frequentists will have to do a bunch of handstands to obtain a closed form model whereas the Bayesian framework -- given today's computational speed and a well-executed Metropolis algorithm -- effectively circumvents the issue by sampling from the distribution conditioned on the data.
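    For the curious, a bare-bones Metropolis sampler looks something like this (a toy coin-flip posterior of my own invention, not any particular study):

```python
import math
import random

random.seed(0)

# Toy target: posterior for a coin's heads-probability p after seeing
# 7 heads and 3 tails, with a uniform prior (so the posterior is Beta(8, 4)).
def log_post(p):
    if not 0.0 < p < 1.0:
        return float("-inf")
    return 7 * math.log(p) + 3 * math.log(1 - p)

p = 0.5
samples = []
for _ in range(20_000):
    proposal = p + random.gauss(0, 0.1)  # symmetric random-walk proposal
    # Accept with probability min(1, post(proposal) / post(p)).
    if math.log(random.random()) < log_post(proposal) - log_post(p):
        p = proposal
    samples.append(p)

burned = samples[5_000:]  # discard burn-in
print("posterior mean estimate:", sum(burned) / len(burned))
```

    With these settings the estimate lands near the analytic posterior mean of 8/12. The appeal is exactly the one above: no closed-form handstands, just draws from the distribution conditioned on the data.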

    Hopefully no statistics teacher pushes such a vague notion of the Central Limit Theorem. It really only applies to sums of independent draws from the same process, and even then only when that process has finite variance.
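    A quick sanity check of that narrower statement (my own toy numbers): averages of draws from one skewed, finite-variance process really do become more symmetric as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_skewness(x):
    c = x - x.mean()
    return (c**3).mean() / (c**2).mean() ** 1.5

# 20,000 replicate sample means of Exponential(1) draws.
means_n1 = rng.exponential(size=(20_000, 1)).mean(axis=1)      # n = 1
means_n100 = rng.exponential(size=(20_000, 100)).mean(axis=1)  # n = 100

# Skewness of Exponential(1) is 2; for means of n draws it is 2/sqrt(n),
# so the distribution of the mean drifts toward the symmetric Normal shape.
print("skewness, n=1:  ", sample_skewness(means_n1))
print("skewness, n=100:", sample_skewness(means_n100))
```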

    Your point about the necessity to model our data/observations in order to make specific measurements is practical and quite real in today's sciences. For instance, I would often be asked to estimate the number of poor based on ancillary data, or the effect of some policy intervention. Any estimate I create, and its standard error, fundamentally assumes that the model is right. However, despite not having looked at the specific studies, it would be surprising if the effect of model error, omitted-variable bias, and so on were large relative to the magnitude of cognitive bias -- assuming that there is such an effect.

    Hmmm, got to run. Pregnant wife calling.

  17. #17
    Part-time epistemologist invisiblehand's Avatar
    Join Date
    Jun 2005
    Location
    Washington, DC
    My Bikes
    Jamis Nova, Bike Friday NWT, STRIDA, Austro Daimler Vent Noir, Haluzak Horizon, Salsa La Raza, Hollands Tourer, Bike Friday tikit
    Posts
    5,202
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Hey MAKEINU ... I just wanted to add a thank you for the thorough response. Definitely in the spirit of the exercise.

  18. #18
    Banned
    Join Date
    Oct 2006
    Posts
    2,296
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Quote Originally Posted by John Forester View Post
    I have read through the article. I find that much of it is familiar to me, but there are parts that I do not understand. The posting above describes some of those parts, such as events, or sets of events, for which no probability is valid. There may well be such; I don't question those who are expert about such things. What I do question is whether such events or sets of events are relevant to our concerns regarding cycling or policies regarding cycling.
    I agree. I only mentioned these things for the sake of completeness. What is probably relevant to cycling issues and policy are the last two paragraphs. It's very likely that the classes of distributions assumed by statisticians in their analysis of cycling issues do not actually match the distributions of the real world. Furthermore, it's also very likely that, due to the complexity of developing suitable equations and experimental procedures, not all the variables are being taken into account, leading to gross undersampling and undermining many of the asymptotic results on which the strength of many statistical methods depends. Human reasoning usually involves feedback. The answers you get change the questions you ask and the observations you perform. This is a very powerful method that's not fully exploited in traditional statistical methods. It's also, obviously, a double-edged sword, because it opens up the possibility of performing the very worst observations.

    Quote Originally Posted by John Forester View Post
    I think that the description of Bayesian statistics that appears in the above post is not very accurate. Long ago, I taught an aspect of that subject, developed part of the theory, and wrote a text. As I see it, to make good use of Bayesian logic one has to start with an initial probability distribution, called the prior. If the subject is totally unknown, then one might start with a uniform distribution, or any other that appears to meet the known facts (in which case, one already knows something rather than nothing). Then one becomes faced with some definite information that has some probability. It might be a current measurement that is based on random selection from some population about which a decision is needed. That sample has a probability distribution known with reasonable accuracy. The Bayesian calculation combines the initial probability distribution with the sample probability distribution to produce a more realistic value for the population from which that sample has been taken. The results of that, and, indeed, the measurement of the full population if that becomes available, are then combined to produce the initial distribution for the next time that the decision is faced. If, indeed, there is a reasonable probability distribution of the values of the various specific populations, then, as experience and its information accumulate, the distribution becomes closer and closer to that of the infinite number of possible specific populations. Naturally, the closer the initially assumed distribution is to the real distribution, the fewer trials are required to achieve useful accuracy.

    As a kind of example, suppose that a supplier has provided goods with some reasonably acceptable level of quality. Then a sample is taken from a new shipment, and the sample shows very bad quality. The statistically most probable result is that the sample is more erroneous than the historic record, so that the probable level of quality is somewhat, but not much, reduced from the historic record. However, that is a warning sign that the new sample has been taken from a specific population that is not representative of the specific populations that make up the historic record. So watch out.
    I know how Bayesian statistics is done. However, the essential flaw I believe you're neglecting to highlight is that the statistician can choose the criteria for prior knowledge at whim. Yes, he can choose to measure it, but he can just as well choose not to, and there's nothing to hold him accountable. A prior distribution is arrived at heuristically and can really be anything. The frequentist interpretation requires hard data, while the Bayesian requires eloquent argument. Bayesians only use hard data when it's convenient. It's a form of confirmation bias.
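    To illustrate the point (numbers invented purely for the sake of argument): the same 10 coin flips under two different priors give quite different conclusions, and nothing in the machinery itself says which prior was the right one.

```python
# Same data for both analysts: 3 heads and 7 tails.
heads, tails = 3, 7

# Two Beta priors chosen "at whim".
priors = {
    "flat prior Beta(1, 1)":     (1, 1),
    "strong prior Beta(50, 10)": (50, 10),
}

# Conjugate beta-binomial update: posterior mean = (a + heads) / (a + b + n).
for name, (a, b) in priors.items():
    post_mean = (a + heads) / (a + b + heads + tails)
    print(name, "-> posterior mean", round(post_mean, 3))
```

    The flat prior concludes the coin favors tails (mean 1/3); the strong prior still insists it favors heads (mean 53/70), on identical data.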

    Quote Originally Posted by invisiblehand View Post
    In a practical sense, a lot of the early points are really non-issues. Mind you, they are important since there are a lot of mathematical concepts that we just have not found their physical application; however, they are not very meaningful to the problems the vast majority of people face nor affect the results of many analyzes. Many of the assumptions regarding a PDF, for instance, are not just mathematical niceties but related to repeated observations of phenomenon.
    Who says that it's even possible to repeatedly observe the exact same random phenomenon? You can describe the basis of statistical theory in English or you can describe it in mathematics, but the concepts are the same, and in the end all they are is heuristics, arbitrarily chosen because they lead to a neat and extensive theory with practical results.

    Yes, you can observe cyclists, but there is no guarantee that you are sampling a single distribution. You may very well be sampling a variety of distributions which just so happen to be related in a way that completely undermines the use of a single PDF. Perhaps even more insidiously they might only partially undermine the use of a single PDF...making the resulting biases small enough to be plausible, but large enough to make you wrong.

    Quote Originally Posted by invisiblehand View Post
    While there are some hardcore Bayesians and Frequentists who will piss in each other's pockets all day long, my experience is that when given the same data, they often come to identical conclusions: if their models are similar enough. In fact -- don't tell the Frequentists -- you can describe Frequentist statistics in a Bayesian framework by assuming a uniform prior. Interestingly, what I found is that the real difference between the two occurs when one wants to precisely model some complicated phenomenon. Frequentists will have to do a bunch of handstands to obtain a closed form model whereas the Bayesian framework -- given today's computational speed and a well-executed Metropolis algorithm -- effectively circumvents the issue by sampling from the distribution conditioned on the data.
    Exactly, exactly, exactly. Do you think the "availability heuristic", or any of the other heuristics presented in the article, is any different? When properly applied to real data, all reasonable heuristics perform reasonably. Using toy examples (or surveys), the article likes to pretend that the heuristics employed by traditional statistics (whether frequentist or Bayesian) are objectively superior, but you can find toy examples which foul any of them, including highly regarded statistical approaches. Moreover, comparing the average individual's use of a given heuristic to proper usage by an expert is not fair. A fair comparison would be to compare the effectiveness of, for example, an expert police detective in using some of these other heuristics on real-world data. If that were done, I'm quite confident that what you would find is that, much like the distinction between the frequentist and Bayesian heuristics, the methods of an expert police detective are just as effective as those of a statistician, but without the "handstands", so to speak. Furthermore, much like the distinction between the frequentist and Bayesian heuristics, with fewer handstands to perform, the expert police detective is likely to be even more effective when it comes to modeling some complicated phenomenon (naturally the kinds typical to police work, which contain many variables: random, deterministic, sociological, etc, along with hindrances to obtaining large numbers of samples).

    Quote Originally Posted by invisiblehand View Post
    Your point about the necessity to model our data/observations in order to make specific measurements is practical and quite real in today's sciences. For instance, often I would be asked to estimate the number of poor based on ancillary data or the effect of some policy intervention. Any estimate I create and its standard error fundamentally assumes that the model is right. However, despite not looking at the specific studies, it would be surprising if the effect model error, omitted variable bias, and so on relative to the magnitude of cognitive bias -- assuming that there is such an effect -- is large.
    Well, they are both a form of cognitive bias. The distinction is that your bias in selecting a model is filtered through a lot of math. As statistics are notoriously lacking in robustness and vulnerable to manipulation ("lies, damn lies," etc), I would imagine that this filtering would multiply the error of biased model selection, but that's only a suspicion on my part.

    You also have to consider the positive benefit afforded by having a wider breadth of heuristics, thereby increasing the chance of choosing the right heuristic. In other words, while your statistical method might depend on a single choice of model, cognition may employ a number of heuristics and choose between them depending on the scenario. For example, people are "biased" to use things like conjunction because very often the fact that data is being presented at all is correlated with its certainty. Similarly, basing your notion of certainty on things like availability can automatically incorporate a kind of risk functional which you might be unable, or otherwise neglect, to quantify. In a sense this is analogous to computing statistics based on several different models. If combined with the ability to predict individual model/heuristic performance, then overall performance could be greater than using a single heuristic. Given constraints on computation and data acquisition, it may even be the case that the optimal strategy consists of a sloppy/rough but numerous combination of distinct heuristic strategies (similar to what we see in cognition).
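    A trivial sketch of that idea (entirely made-up numbers): two crude heuristics, each badly wrong in half the scenarios, beat either one used alone once you can pick between them per scenario.

```python
# Each case is (scenario, true_value); scenario "a" suits the first
# heuristic, scenario "b" the second.
cases = [("a", 10), ("a", 11), ("a", 9), ("b", 100), ("b", 105), ("b", 95)]

def small_guess(ctx):
    return 10   # crude heuristic 1: always guess small

def big_guess(ctx):
    return 100  # crude heuristic 2: always guess big

def switching(ctx):
    # The combined strategy: pick a heuristic based on the scenario.
    return small_guess(ctx) if ctx == "a" else big_guess(ctx)

def total_error(predict):
    return sum(abs(predict(ctx) - truth) for ctx, truth in cases)

print("heuristic 1:", total_error(small_guess))  # 272
print("heuristic 2:", total_error(big_guess))    # 280
print("switching:  ", total_error(switching))    # 12
```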

    In any case, my point is not to be so quick getting down on nonstatistical modes of reasoning. Reasoning in the face of uncertainty is still very much an active area of research and the jury is still out on whether working with probabilities is the best strategy in all circumstances.

    Quote Originally Posted by invisiblehand View Post
    Hey MAKEINU ... I just wanted to add a thank you for the thorough response. Definitely in the spirit of the exercise.
    No, thank you invisiblehand. It's nice to see a little deeper conversations going on around the forum.
    Last edited by makeinu; 01-19-08 at 05:29 PM.

  19. #19
    Part-time epistemologist invisiblehand's Avatar
    Join Date
    Jun 2005
    Location
    Washington, DC
    My Bikes
    Jamis Nova, Bike Friday NWT, STRIDA, Austro Daimler Vent Noir, Haluzak Horizon, Salsa La Raza, Hollands Tourer, Bike Friday tikit
    Posts
    5,202
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    I think that we agree on a lot more issues than we disagree on. Moreover, it seems that we just have differences of opinion regarding the magnitude of certain effects, rather than a fundamental difference.

    Statistics certainly makes its assumptions, but it doesn't appear to me that the meaningful assumptions are that arbitrary. For instance, that the probabilities of all possible outcomes sum to 1 is arbitrary. But that the likelihood of an event has a lower bound of zero appears sensible considering our observations of the world.

    Statistics differs from the heuristics discussed in that it was actively designed to make consistent observations of the world whereas the brain's heuristics were developed through evolutionary pressure. Consequently, they would optimize different things and their performance would differ accordingly. In this sense, IMO, it is appropriate to compare human performance -- particularly since the comparisons are about the performance of a population of subjects -- to these benchmarks and call it bias with the understanding that there is some fuzziness.

    I generally argue that statistics are just another tool to help assess the true state of the world. Rarely should this be done in isolation ... without actively evaluating and thinking about the results. I gather that most trained researchers realize the limitations of the techniques and modeling assumptions when assimilating the measurements through a mental model of the world. As models get more complicated, typically researchers become more skeptical of the results. Simple models are understood to suffer from incomplete information. Moreover, there are reasons to think that the human brain -- with a little assistance -- would be better at making judgments based on complicated/chaotic relationships.

    With regard to cycling advocacy and safety, my simple thought is that we should be more skeptical of our own arguments, as well as those of people whose opinions differ, and that we should use similar skepticism when evaluating the descriptive evidence out there.

  20. #20
    ---- buzzman's Avatar
    Join Date
    Nov 2005
    Location
    Newton, MA
    Posts
    4,551
    Mentioned
    1 Post(s)
    Tagged
    0 Thread(s)
    this thread fascinates me and I'm sorry I'm so busy right now that I don't have as much time as I'd like to explore some of the issues brought up by the article in the OP.

    But I'd love to weigh in on a couple of points. Perhaps climb out on a limb or two and pose some possible other perspectives on human reasoning outside of the realm of the kind of probabilistic risk assessment as described in the paper.

    Certainly some of the studies cited demonstrate how human decision making is influenced by heuristic methodology and particular biases. However, as makeinu points out in his well presented response:

    Quote Originally Posted by makeinu
    In any case, my point is not to be so quick getting down on nonstatistical modes of reasoning. Reasoning in the face of uncertainty is still very much an active area of research and the jury is still out on whether working with probabilities is the best strategy in all circumstances.
    I completely agree and to bring the topic, if I may, back into the realm of bicycling, there are reasons why this is such an important observation and critical to why there is often disagreement in these forums that seem to throw the participants into an endless loop of misunderstandings.

    I agree that human beings fall prey to bias, and that the use of frequentist statistics or Bayesian logic in argument may make for stronger, more rational arguments. So if the game is one of semantics, reasoning and statistics, then the statistician, armed with an awareness of the general flaws of heuristic reasoning and the logical traps of bias, will win. As the paper points out:

    Ironically, Taber and Lodge's experiments confirmed all six of the authors' prior hypotheses. Perhaps you will say: "The experiment only reflects the beliefs the authors started out with - it is just a case of confirmation bias." If so, then by making you a more sophisticated arguer - by teaching you another bias of which to accuse people - I have actually harmed you; I have made you slower to react to evidence. I have given you another opportunity to fail each time you face the challenge of changing your mind.
    Heuristics and biases are widespread in human reasoning. Familiarity with heuristics and biases can enable us to detect a wide variety of logical flaws that might otherwise evade our inspection. But, as with any ability to detect flaws in reasoning, this inspection must be applied evenhandedly: both to our own ideas and the ideas of others; to ideas which discomfort us and to ideas which comfort us. Awareness of human fallibility is a dangerous knowledge, if you remind yourself of the fallibility of those who disagree with you. If I am selective about which arguments I inspect for errors, or even how hard I inspect for errors, then every new rule of rationality I learn, every new logical flaw I know how to detect, makes me that much stupider. Intelligence, to be useful, must be used for something other than defeating itself.
    BUT aren't we ultimately trying to make better bike riders? And there lies the difference.

    All human beings fall prey to bias- having knowledge of this reality does not make for automatic immunity from it- even the author of the paper falls prey to bias. Knowing that your opponent also has this flaw may help in an argument much like learning the Sicilian defense in chess but eventually a player will be challenged to play less choreographed moves and have to improvise- and that's when it gets interesting.

    For example the author of the paper begins with the premise of "human extinction" being a concept outside the realm of human experience because human beings have never experienced the event. However, the author falls prey to a relatively recent awareness/concept (and "bias"), which is the idea that all human beings, "Homo sapiens", are perceived by one another as one species or a "human race". Slavery, genocide and caste systems could never have existed without perceiving of others as "less than" human or "other than human". Victims (aboriginal, indigenous populations etc) of that thinking have often been driven to extinction.

    It could be argued that human beings are very cognizant of extinction when we look at it from an anthropological perspective. When Cortés and his men arrived in Mexico, the Aztecs and Maya might as well have been another species, much the way we view our simian relatives. Ethnocentric thinking has been primary in human consciousness far longer than the concept of a single human race. There are enough examples of human genocide to build in a very strong sense of "extinction". Ironically, the indigenous populations of the Americas, by failing to conceive of the threat the conquistadors posed, fell prey to their own heuristic thinking and biases. While the greatest threat from the intruders appeared to be their swords, shields and technological capacity to wage war, it was ultimately disease that killed far more of them and brought them to the brink, and in some cases beyond the point, of extinction.

    And even the concept of the "human race" is an abstraction and may evolve in such a way that we eventually see it as a limited view much the way we see the ethnocentric thinking of some cultures as "old world". Genome mapping makes the distinction between what is human and what is mouse far less cut and dry than we might have thought even 20 years ago. But I digress.

    But back to biking. The paper will perhaps aid us in understanding how easily we fall prey to bias. And, not to single him out (but I will), HH's response to the paper, which to me almost reads like parody, uses the paper to feed his own bias, e.g.:

    Quote Originally Posted by Helmet Head
    And for the same reasons, risks of total economic collapse in America may tend to be underestimated since, Americans have never yet encountered total economic collapse (hence the lack of caution when it comes to employing socialist solutions to address social ills)
    There are two leaps of logic here, but let's just look at the conclusive one: "a lack of caution when it comes to employing socialist solutions to address social ills". Nothing in the paper supports that conclusion any more than had he concluded the opposite: "a lack of caution when it comes to employing any free market solutions to address social ills". Both conclusions would have been a result of bias and not logic.

    Again, we're still in the realm of the game of semantics, debate, rationale and statistics, but not really in the realm of real decision making. All of this kind of (statistical) thought resides in the cerebral cortex, but the actual decision making that a cyclist makes, or even the decision making of humans in general, resides in far deeper recesses of the brain. So this actually ends up being a huge exercise in hindsight. According to neurologist Antonio Damasio (Descartes' Error, Looking for Spinoza, The Feeling of What Happens), actual decision making is never done by the part of the brain that can do statistical analysis. So as much as it may feel like we are making decisions based on that kind of logical progression, ultimately we may be fooling ourselves just as much as the "emotional" reasoner who thinks he is using logic.

    The kinds of snap decisions that enable us to ride a bike defy the logical mind. If you can regress yourself to your memory of learning to ride a bike, you may recall that it was not until you stopped thinking about riding the bike that you were actually able to do it. The natural body responses had to take over; the idea of balancing on two skinny tires still boggles my mind at times. Understanding the "rules of the road" may make traffic slightly more predictable and keep us statistically safer, but the decisions will still come from a less logical part of the brain. I sometimes wonder if the reason why, statistically, older men (age 50+) make up an increasingly higher percentage of bicycling (and motorcycling) accidents is because as we age the forward part of the brain becomes more dominant (we "learn the rules" and act accordingly). While this has some advantages, it has some disadvantages in that it can slow some response times. It may be why so many of the older/"experienced" riders gasp and nod their heads in dismay as they watch youtube videos of NYC bike messengers flying through traffic. The messenger accident rate is fairly high, but given the number of miles they ride and the apparent risks they take, it's remarkable that older men (as more and more of us are riding at older ages) are outpacing their rate of death and injury.

    In conclusion, (hopefully someone has read this all the way through and follows my line of reasoning) I think the arguments will remain between those who exclusively do post analysis with statistics and "logic" (though still prone to bias) and those who exclusively recall the kinesthetic sense of bike riding. It may be like physics- particle or wave and never the twain shall meet. But I think there's a reason why Einstein, Huxley, H.G. Wells hopped on a bike once in a while and I would guess it was the escape from logical thought it occasionally offered them.

    * post edited for some clarity and grammar
    Last edited by buzzman; 01-21-08 at 11:20 AM. Reason: grammar, punctuation, clarity

  21. #21
    Senior Member
    Join Date
    Mar 2007
    Posts
    4,070
    Mentioned
    0 Post(s)
    Tagged
    1 Thread(s)
    Quote Originally Posted by buzzman View Post

    much non-cycling material snipped

    Again, we're still in the realm of the game of semantics, debate, rationale and statistics, but not really in the realm of real decision making. All of this kind of (statistical) thought resides in the cerebral cortex, but the actual decision making that a cyclist makes, or even the decision making of humans in general, resides in far deeper recesses of the brain. So this actually ends up being a huge exercise in hindsight. According to neurologist Antonio Damasio (Descartes' Error, Looking for Spinoza, The Feeling of What Happens), actual decision making is never done by the part of the brain that can do statistical analysis. So as much as it may feel like we are making decisions based on that kind of logical progression, ultimately we may be fooling ourselves just as much as the "emotional" reasoner who thinks he is using logic.

    The kinds of snap decisions that enable us to ride a bike defy the logical mind. If you can regress yourself to your memory of learning to ride a bike, you may recall that it was not until you stopped thinking about riding the bike that you were actually able to do it. The natural body responses had to take over; the idea of balancing on two skinny tires still boggles my mind at times. Understanding the "rules of the road" may make traffic slightly more predictable and keep us statistically safer, but the decisions will still come from a less logical part of the brain. I sometimes wonder if the reason why, statistically, older men (age 50+) make up an increasingly higher percentage of bicycling (and motorcycling) accidents is because as we age the forward part of the brain becomes more dominant (we "learn the rules" and act accordingly). While this has some advantages, it has some disadvantages in that it can slow some response times. It may be why so many of the older/"experienced" riders gasp and nod their heads in dismay as they watch youtube videos of NYC bike messengers flying through traffic. The messenger accident rate is fairly high, but given the number of miles they ride and the apparent risks they take, it's remarkable that older men (as more and more of us are riding at older ages) are outpacing their rate of death and injury.

    In conclusion, (hopefully someone has read this all the way through and follows my line of reasoning) I think the arguments will remain between those who exclusively do post analysis with statistics and "logic" (though still prone to bias) and those who exclusively recall the kinesthetic sense of bike riding. It may be like physics- particle or wave and never the twain shall meet. But I think there's a reason why Einstein, Huxley, H.G. Wells hopped on a bike once in a while and I would guess it was the escape from logical thought it occasionally offered them.

    * post edited for some clarity and grammar
    I think that you are misstating Damasio's thesis; I have read and considered his book. Indeed, I had previously considered it with regard to the decision that "now" is the time to get out of bed. One can lie in bed considering the needs, or the desires, that determine when to get up, while keeping an eye on the clock or an ear to the radio. This is rational thought. One arrives at the rational conclusion that one has to get up no later than five minutes after the alarm rings. So, one hears the alarm and shuts it off, stretches and wiggles, and then gets out of bed. All that Damasio's book says is that the brain sends out the motor signals that get you out of bed about a quarter second before the knowledge of those signals enters your memory. You had already determined, by rational thought, that this was the time to get up, and you did it.

    I think that buzzman is applying his misinterpretation of Damasio's thesis to cycling in an entirely inappropriate manner. His description of learning to stay up on a bicycle doesn't really apply to traffic situations, so I skip criticism of that. Here are his words about cycling in traffic: "Understanding the 'rules of the road' may make it slightly more predictable and keep us statistically safer but the part of the brain making the decisions will still come from a less logical part of the brain." This is not so, because it is an inaccurate description of the mental processes.

    One can understand the rules of the road in an entirely "remote" manner, so that one could work out, on a paper diagram, what movements should be made. (By the way, that seems to be the way that many bike planners work out how to cycle in traffic.) But that's not what we are discussing. If you start out riding according to the rules of the road, you develop habits. You don't go through the process that the beginning driver goes through, such as thinking: "I need to turn left in two blocks. Therefore, destination positioning tells me that I must get near the road centerline. That requires that I move laterally to the left. The yielding rule tells me that I must look over my left shoulder. If I see a car close by, the yielding rule tells me to wait until it has overtaken me. Then I must look again ... etc." Once one has acquired the proper habits, one does not have to go through this logic in verbal form. One knows the required sequence of movements, just as if one were dancing a dance that one had learned only by copying the steps, not from a book of words. However, the decision to make these movements, and their timing and precise sequence (today, how many cars must I wait for, need I try to negotiate with any driver, etc.), are entirely under logical control. Apparently it is scientific fact that the mental signals to the muscles to turn your head are sent before you have any memory of them (that's the quarter-second lag), but the decision to turn your head is entirely under logical control.

    In short, we don't, or at least I don't and we shouldn't, conduct ourselves in traffic by some kind of "snap decisions that enable us to ride a bike", to use buzzman's words.

  22. #22
    buzzman
    Quote Originally Posted by John Forester View Post
    I think that you are misstating Damasio's thesis; I have read and considered his book. ... In short, we don't, or at least I don't and we shouldn't, conduct ourselves in traffic by some kind of "snap decisions that enable us to ride a bike", to use buzzman's words.
    Your simplification of Damasio's thesis (and I'm not sure which of his several books you are referring to when you say you've read and considered his book) and your response to my comments seem based more on being argumentative than on considering my points. I won't belabor this discourse; its endpoint is, unfortunately, more of the same. Feel free to dismiss my observations if they don't fit your modalities. I understand why Hurst's book is titled "The Art of Urban Cycling" and why his descriptions feel more closely aligned with mine than with yours. But often it's a matter of perspective. My reference to "snap decisions" is similar to the use of the term in Malcolm Gladwell's "Blink", which is also an examination of new findings in the field of neurology.

    At this point I feel a bit like Woody Allen in "Annie Hall" when he says, "I happen to have Marshall McLuhan right here." But I have close colleagues who occasionally do conferences with Mr. Damasio. I will forward them my post, and yours (if you don't mind), and have him weigh in, if possible, on the subject. No guarantees, but I love his work and would not in any way mind being corrected if I have presented his thesis inaccurately.

  23. #23
    buzzman
    Okay, so a couple of clarifications:

    I had someone else acquainted with Damasio's work review my post, and he feels that while I got the gist of it right, I may have overstated things by using the word "never" regarding the logical, rational processes being overridden by less conscious parts of the brain.

    Quote Originally Posted by Antonio Damasio, "The Feeling of What Happens"
    "I did not suggest, however, that emotions are a substitute for reason or that emotions decide for us. It is obvious that emotional upheavals can lead to irrational decisions. The neurological evidence simply suggests that selective absence of emotion is a problem. Well-targeted and well-deployed emotion seems to be a support system without which the edifice of reason cannot operate properly. These results and their interpretation called into question the idea of dismissing emotion as a luxury or a nuisance or a mere evolutionary vestige. They also made it possible to view emotion as an embodiment of the logic of survival"
    My colleague does agree, however, that cycling in traffic requires full involvement of decision-making neural pathways that depend heavily on resources beyond the conscious, rational brain alone, though a conscious awareness of the cyclist's particular prior experiences and biases would assist in some decision making.

    I have no idea if this is overcomplicating my previous post, since I'm typing so fast and off and running. But anyway, it's great fun to examine these topics, IMO, even if they're shots in the dark. Thanks, JF, for challenging my thinking. I'll do my best to state it more clearly in later posts. I'm still not sure I agree with your conclusions, but I'll give it further thought.
    Last edited by buzzman; 01-21-08 at 11:49 PM.

  24. #24
    Senior Member
    Quote Originally Posted by buzzman View Post
    Your simplification of Damasio's thesis ... My reference to "snap decisions" is similar to the use of the term in Malcolm Gladwell's "Blink". ... I will forward them my post, and yours, and have him weigh in, if possible, on the subject.
    Go ahead and get what criticism you can from the author.

    I now understand that you were considering only "snap decisions" rather than considered decisions, and I know something about Gladwell's "Blink". It seems to me that Gladwell was not considering the kinds of snap decisions that have to be made in traffic. Be that as it may, it is a standard consideration of training that a person who understands how a system works is quicker to detect and respond to a situation of unusual operation. It has always been my belief that teaching cyclists to operate according to the rules of the road has two effects. It affects their behavior directly, and it allows them to develop an appreciation of proper operation, so that when some other driver or pedestrian does something unusual, the trained cyclist is better able to take appropriate avoidance action in time. It is even better for the cyclist to train himself in several appropriate avoidance maneuvers (rock dodging, instant turns, panic braking), so that when he makes the snap decision that one of these is needed, it appears at his fingertips, so to speak, without having to think about it.

  25. #25
    buzzman
    Quote Originally Posted by John Forester View Post
    ... It is even better for the cyclist to train himself in several appropriate avoidance maneuvers (rock dodging, instant turns, panic braking), so that when he makes the snap decision that one of these is needed, it appears at his fingertips, so to speak, without having to think about it.
    agreed!

    And many of the cyclists who "get into it" with you are coming from a place of allowing that kinesthetic sensibility to predominate in their thinking when discussing cycling. They will resist efforts to focus primarily on rules-based or formulaic approaches to cycling that seem counterintuitive or restrictive when their bodies have adapted to responding less consciously. Many of these cyclists have spent many hours and many thousands of miles on the road, and their strategy has provided them with a very satisfying cycling experience, making them even more skeptical of methodologies that literally come from a completely different part of the brain.

    I think it's one reason why there are such knee-jerk responses on both sides in some of the A&S arguments. Quick rushes to judgment, with labels of "phobias", "cyclist inferiority complex", and "childish notions", have become almost cliched responses when a cyclist does not seem to agree with a particular bias, and they often halt forward movement in these discussions. Naturally, those of us who do not feel that a one-size-fits-all methodology is appropriate, and who prefer a wider range of solutions (incl. bikeways, and strategies that don't fit the more traditional VC definition), need to be more open to the useful information that genuine statistical analyses and more conventional modalities afford, as well as trusting that the overall intention of safe cycling is a common goal.

    As I believe the article points out, and as I was hoping to illuminate (albeit unsuccessfully) by referencing Damasio's work, prior attitudes and biases are a natural part of our decision-making process, and a healthy awareness of their role can help us discern when they have a negative or a positive effect on our thinking. Your description of training in avoidance maneuvers is an example of actually training the brain to have biases and prior attitudes/experience with certain events that will then trigger appropriate responses. Those exercises are not training the brain to respond "logically".

    And as I tried to point out before, while an understanding of our inclination toward prior attitudes and biases may assist us in debate, it will not necessarily make us better cyclists if we rule out those responses entirely; nor are we probably capable of doing so anyway.
