Site outage horror story
I'm glad that BF withstood the attacks, even though it took a while to get back up and running. I'm glad it didn't suffer the horror story of another place I used to frequent.
That was a small, fairly esoteric forum site that ran on a LAMP (Linux, Apache, MySQL, PHP) stack. The admins knew what they were doing, and the site was backed up to tape nightly. I didn't have admin access to the box (nor did I want it, because with the root password comes great responsibility), but I did have moderator status on their boards and helped with the anti-spam measures.
On a Friday night, the admins found that the machine had been flattened. Apparently someone managed to get root and run dd if=/dev/zero of=/dev/whatever, zeroing out all the hard disks.
It gets worse. The machine hadn't just been rooted when everything went down; it had been compromised weeks, possibly months, earlier. At some unknown point beforehand, someone replaced the cron job that did the backups (via tar) with a script that ran dd if=/dev/urandom of=/dev/tape, so the machine looked like it was backing up but wasn't. The admins kept swapping tapes in the usual rotation, thinking the backups were doing fine.
So pretty much everything on the machine was lost. The one exception: someone had an rsync copy of the box's config files, which is how the changed cron job was found after the fact.
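If you want that same safety net, here's a minimal sketch of a pull-based config mirror, run nightly from a separate machine. The hostname and paths are made up, and it assumes ssh access for an unprivileged backup account (files that account can't read get skipped with warnings):

    #!/bin/sh
    # Pull /etc from the server into a dated local snapshot. Pulling from a
    # separate box matters: a compromised server can't rewrite these copies.
    HOST=forumbox.example.org          # hypothetical hostname
    DEST="$HOME/mirrors/forumbox"
    TODAY=$(date +%F)
    mkdir -p "$DEST/$TODAY"
    rsync -az "backup@$HOST:/etc/" "$DEST/$TODAY/"
    # Diff against the previous snapshot; a swapped backup cron job would
    # show up here as an unexplained change under /etc/cron.d or crontabs.
    [ -e "$DEST/latest" ] && diff -r "$DEST/latest/" "$DEST/$TODAY/"
    ln -sfn "$TODAY" "$DEST/latest"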
Moral of the story: verify your backups by restoring them on another machine, and pull a tape out of the rotation at least quarterly, if not monthly, to sit on a shelf indefinitely. Offline backups that can be made read-only (write-protected tapes, or burns to DVD) are very important.
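For the "verify on another machine" part, here's a minimal sketch of a restore drill for a tar-to-tape setup like theirs (the /dev/st0 device is an assumption; the story only says "tape"):

    #!/bin/sh
    # Restore drill, run on a machine that is NOT the one being backed up.
    # Random bytes from /dev/urandom don't parse as a tar archive, so even
    # the listing step alone would have exposed the sabotaged backups.
    TAPE=/dev/st0                      # hypothetical tape device
    SCRATCH=$(mktemp -d)
    mt -f "$TAPE" rewind
    tar -tf "$TAPE" > "$SCRATCH/listing.txt" || { echo "BAD TAPE"; exit 1; }
    # Then actually pull the files back and eyeball a few known ones;
    # a backup isn't a backup until a restore has succeeded.
    mt -f "$TAPE" rewind
    tar -xf "$TAPE" -C "$SCRATCH"
    wc -l "$SCRATCH/listing.txt"
    ls -l "$SCRATCH"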