Clear and specific categorization is also vital to transportation planners and engineers, so they can distinguish sometimes subtle differences between successful and problematic design characteristics. One of the difficulties of the studies in the English-language literature to date is that the range of infrastructure studied is small compared to the range of configurations used between and within jurisdictions. Some examples are described above, but there are many other features that merit investigation: stop signs; numbers of roads intersecting; junctions such as driveways and lanes; cyclist lane of travel in relation to parked cars; surface features such as cobble stones or street-car (tram) tracks; traffic calming measures such as diverters or road humps; and road/lane/path curvature.
Underreporting of some events is an issue common to all studies of bicycle injuries and crashes. Many of the studies reviewed here relied on administrative data sources, including hospital records [16, 62, 64], police-reported crashes [54-61, 69-73], and national or city-maintained registries [53, 63], all of which are likely to miss less severe events. For example, one of the large surveys found that 9.8% of respondents had had a crash in the last year, but only two in five of those crashes (38.2%) had been reported to police. Over half (56.6%) required medical attention, but only one in twenty (5.5%) required hospital admission. This underreporting may bias infrastructure-specific risk calculations, since collisions involving motor vehicles may be more likely to be reported to police for insurance reasons, and to hospitals because they are more severe, than collisions with non-motorized users (which may happen more frequently on off-street paths). Results of studies using these data sources should therefore be interpreted as reflecting the risk of severe events. Other studies in this review used data from cyclist surveys, which may capture a wider range of crash types, including less severe ones [29, 61, 65-68]. However, survey data will not capture events that resulted in fatalities (though these are extremely rare) or in catastrophic incapacitating brain, spinal cord, or other injuries and, depending on the method of survey administration, may not capture individuals who no longer cycle following a crash [29, 68]. No single study design can overcome these reporting problems, hence the importance of looking for consistency of results across different designs.
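The mechanism of this bias can be made concrete with a small numerical sketch. All numbers below are hypothetical and chosen only for illustration; they are not drawn from the studies reviewed. The sketch shows how differential reporting alone can create an apparent risk difference between facility types that are, by construction, equally risky.

```python
# Illustrative sketch (hypothetical numbers): how differential reporting
# to police can distort an infrastructure-specific risk comparison.

# True crash counts per 100,000 trips -- equal by construction.
true_crashes = {"street": 50, "off_street_path": 50}

# Hypothetical reporting probabilities: motor-vehicle collisions (more
# common on streets) are reported to police far more often than falls or
# collisions with non-motorized users (more common on off-street paths).
report_prob = {"street": 0.40, "off_street_path": 0.10}

observed = {site: true_crashes[site] * report_prob[site]
            for site in true_crashes}

# Observed "risk ratio" of street vs. path in police data:
rr_observed = observed["street"] / observed["off_street_path"]
print(rr_observed)  # 4.0 -- a 4-fold apparent difference despite equal true risk
```

Under these (assumed) reporting rates, police data would make the off-street path appear four times safer than the street even though the underlying crash risk is identical, which is why such results are best read as reflecting the risk of severe, reportable events.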
A great challenge in studying cycling injuries is ensuring that comparisons control for the number of cyclists at risk (also called “exposure to risk”). The before-after studies reviewed here aimed to do this by comparing injury counts at the same intersection or roadway before and after introduction of an infrastructure intervention, under the assumption that underlying traffic levels, injury rates, and types of cyclists stay the same. These assumptions may not hold, so some of these studies also adjusted for temporal trends in traffic volumes [58, 59, 63] or injury rates in the area, or made additional comparisons to unchanged intersections [56-59]. The non-intervention studies needed methods to derive bicycling trip volumes on the infrastructure types being compared. Sometimes these came from administrative data collected by transportation authorities [54, 55, 60, 71, 73], and sometimes from study participants describing the route of an injury trip or their typical cycling locations [29, 61, 64-68]. Injury severity studies made comparisons within the injured populations, so they did not require trip volume denominators [16, 69, 70, 72]; however, this meant that they examined differences in the severity of the outcome given an injury event, not the original risk of the event itself.
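Why exposure denominators matter in a before-after comparison can be shown with a brief arithmetic sketch. The counts below are hypothetical, not taken from any reviewed study; the point is only that raw injury counts and exposure-adjusted rates can tell different stories when an intervention attracts more cyclists.

```python
# Hypothetical sketch: exposure adjustment in a before-after comparison.
# If a new facility attracts more cyclists, raw injury counts understate
# the safety improvement; rates per trip are the comparable quantity.

injuries_before, trips_before = 30, 100_000
injuries_after, trips_after = 24, 160_000  # more cyclists after the change

rate_before = injuries_before / trips_before  # 3.0 per 10,000 trips
rate_after = injuries_after / trips_after     # 1.5 per 10,000 trips

crude_change = injuries_after / injuries_before  # 0.8: a modest 20% drop in counts
rate_ratio = rate_after / rate_before            # 0.5: the per-trip rate halved
print(crude_change, rate_ratio)
```

In this contrived example, counts alone suggest a 20% reduction, while the exposure-adjusted rate ratio shows risk per trip was halved; this is the kind of distortion the temporal-trend adjustments and comparison sites mentioned above are meant to address.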
Though the most basic requirement for studies examining risk of crashes or injuries is to account for exposure to risk, many other factors may confound comparisons and ideally would be controlled in study design or adjusted for in analyses. For example, men and women, or people in different age groups, may choose to cycle on different facility types and may have different skill levels or risk-taking behavior, creating the potential for confounded associations between infrastructure and injury. While it is difficult to control for all potential confounders, many of the non-intervention studies reviewed here did adjust for personal factors such as age [16, 29, 64, 65, 70, 71], sex [29, 64, 65, 71], cycling experience [29, 68], and bicycle type, and for environmental factors such as time of day [64, 69, 70, 72, 73] and weather [65, 69, 70, 72]. Most injury severity studies adjusted for helmet use [16, 69, 72]. A style of observational study that can control for most potential confounders is the case-crossover design. Such a study is underway in the Canadian cities of Toronto and Vancouver. It will compare infrastructure at the injury site to that of randomly selected control sites on the same trip, so that within-trip factors (including age, sex, cycling experience, propensity for risk taking, alcohol or drug use, bicycle type and condition, visibility via clothing or bicycle lights, weather, time of day, etc.) are controlled by design.
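The logic of the case-crossover design can be sketched in its simplest matched-pair form. The data below are hypothetical, and the actual Toronto/Vancouver study would use more elaborate analysis (typically conditional logistic regression); this sketch only shows the core idea that concordant pairs cancel out within-trip factors, so the matched odds ratio comes from discordant pairs alone.

```python
# Minimal sketch (hypothetical data) of case-crossover logic: each trip
# contributes the injury site plus a control site from the same trip, so
# rider and trip characteristics cancel out. Pairs where both sites have
# the same infrastructure are uninformative; a simple McNemar-style
# matched odds ratio uses only the discordant pairs.

# Each tuple: (injury site on a cycle track?, control site on a cycle track?)
trips = [
    (False, True),   # injured off the track; control site on it
    (False, True),
    (False, True),
    (True, False),   # injured on the track; control site off it
    (False, False),  # concordant pair -- carries no information
    (True, True),    # concordant pair -- carries no information
]

case_only = sum(1 for inj, ctl in trips if inj and not ctl)      # 1
control_only = sum(1 for inj, ctl in trips if ctl and not inj)   # 3
matched_or = case_only / control_only
print(matched_or)  # 0.333...: in these made-up data, lower injury odds on tracks
```

Because both members of each pair come from the same trip, every factor listed above (age, sex, experience, weather, and so on) is identical within the pair and cannot confound the comparison, which is the design's key strength.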