Like many bike commuters, I have a tendency to obsess over flat tires. Like many bicyclists, I'm also a nerd. As a nerd who obsesses over flat tires, one of the things that intrigues me is the problem of understanding flat tire rates, particularly as it applies to comparing various tires.

It's well known among bike commuters that flat tires are essentially random events. You'll go eight months without getting a flat tire, then you'll get three in two weeks. It's just totally random, right? Well, I'm not giving up that easily.

One of the main problems with flats being random events is that this randomness calls into question the possibility of comparing two different models of tire without using both for a long, long time. Nevertheless, as humans we all form opinions based on small sample sizes and can't be convinced otherwise. If I try tire A and get a bunch of flats, then switch to tire B and don't get a bunch of flats, you won't be able to convince me that tire A wasn't significantly more flat-prone than tire B.

But is that really true?

That's one of the questions to which I wanted the answer. So, being a pseudo-scientific type, I set out to collect data. For the last three years I've been compulsively recording all information that seemed relevant about my flat tires -- the date, where I was riding, what the weather was like, how many miles were on the tire, front or rear, cause of the flat, etc. Now, with three years' worth of data, I'm starting some analysis.

So, I've got two tires, which I will call tire A and tire B. I used tire A for about 1900 miles and got 6 flats. I used tire B for 2000 miles and didn't get a single flat. Obviously tire B is more flat resistant, right? But how to quantify that?

What I decided is that I'd imagine a simplified probability model. I'd choose a somewhat arbitrary probability that I'd get a flat in any 10 miles of riding and then apply that probability to these two tires to see how well it would explain the data.

Let me say that I am aware of the crudity of this model. For one thing, the probability of getting a flat isn't actually consistent over time but seems to increase with tire wear. It also varies with weather and riding location. I'm ignoring these factors.

So, returning to my model, I made the guess that for any 10 miles of riding there was a 3% chance that I'd get a flat tire. Applying that (by means of the binomial formula), I find that in any given set of 200 10-mile trips, there is about a 60% chance that I'd get 6 or fewer flats, so that seems like a reasonable fit for tire A. However, with that probability, there is only a 0.2% chance that I would get zero flats in 200 10-mile trips. If both tires actually had this same probability of getting a flat, there would be about a 1 in 1000 chance that I would get more than 5 flats with tire A while getting no flats with tire B.
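For anyone who wants to check those numbers, here's a short script that runs the same binomial calculations. The 3% per-trip probability and the 200-trip framing come straight from the model above; the script is just arithmetic:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), via the binomial formula."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n = 200   # 2000 miles treated as 200 independent 10-mile trips
p = 0.03  # assumed 3% chance of a flat in any 10-mile trip

p_six_or_fewer = binom_cdf(6, n, p)           # fit for tire A, around 60%
p_zero_flats = binom_cdf(0, n, p)             # tire B's zero flats, around 0.2%
p_joint = (1 - binom_cdf(5, n, p)) * p_zero_flats  # >5 flats on A AND 0 on B

print(f"P(6 or fewer flats) = {p_six_or_fewer:.3f}")
print(f"P(zero flats)       = {p_zero_flats:.4f}")
print(f"P(joint outcome)    = {p_joint:.5f}")
```

The joint probability comes out to roughly 1 in 800, which is the "about 1 in 1000" figure above.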

Conversely, in order to get as much as a 1 in 4 chance that I could have used tire B for 2000 miles without getting a flat, I have to assign a probability of 0.7% for a flat in any given 10-mile trip. Applying that value to tire A, there would be a 99.7% chance that I'd get fewer than 6 flats. This yields less than a 1 in 1000 chance that I would get more than 5 flats with tire A while getting no flats with tire B.
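The same binomial calculation covers this second scenario; only the assumed per-trip probability changes to 0.7%:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n = 200    # 2000 miles as 200 10-mile trips
p = 0.007  # 0.7% chance of a flat in any 10-mile trip

print(binom_cdf(0, n, p))   # zero flats for tire B, roughly 1 in 4
print(binom_cdf(5, n, p))   # fewer than 6 flats for tire A, around 99.7%
# joint: more than 5 flats on tire A and none on tire B
print((1 - binom_cdf(5, n, p)) * binom_cdf(0, n, p))
```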

So, my conclusion is that given two tires, both used for 2000 miles in similar conditions, if one tire gets 6 flats while the other gets 0 flats, then I can, in fact, trust my belief that the tire that got no flats has better flat protection.

The next thing I'd like to know is how many flat tires you need to get before you can conclusively say that a tire is not as flat-resistant as another tire that got no flats.
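One way to sketch an answer, though it isn't the analysis above: pick a threshold (say, 1 in 1000), then find the smallest flat count where, even under the shared per-trip probability most favorable to "the tires are the same," seeing that many flats on one tire and zero on the other stays below the threshold. Both the threshold and the maximize-over-p criterion are my own assumptions here, not something the data dictates:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def worst_case_joint(k, n=200):
    """Max over a grid of shared per-trip probabilities p of
    P(>= k flats on one tire) * P(0 flats on the other),
    assuming both tires really have the same p."""
    return max((1 - binom_cdf(k - 1, n, p)) * binom_cdf(0, n, p)
               for p in (i / 10000 for i in range(1, 500)))

# smallest flat count whose worst-case joint probability is under 1 in 1000
k = 1
while worst_case_joint(k) >= 0.001:
    k += 1
print(k)
```

Interestingly, this stricter criterion demands a few more flats than 6, because it lets the hypothetical shared probability settle wherever it best explains the lopsided result.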

Yes, I have too much time on my hands.