This happened circa what, 2008? 2007? I forget now. Anyway, it was a while ago - and it wasn't my fault - but at the time it was pretty "wtf, oh shit, I'm in so much trouble" levels of scary.
So here we go.
Around that time I was running a little consulting / hosting company. I went into the data centre which hosted my equipment to install a second-hand Dell I had acquired and tested. I plugged it into the rack, then plugged in the power, then pushed the power button.
Then bang. Then everything went dark. Then bang again.
Then I got a phone call on the VOIP phone in the data centre. It was the owners, asking me what the fuck I had done.
Anyway, it turned out that everything was dead. It wasn't just that the circuit breakers had tripped. A large part of their power equipment in the DC had also gone.
So yes, I was blamed for taking out a whole data centre. By plugging in one Dell server.
But why was I not in court over it? Why am I not still paying back the damages? Well, it turns out there's way, way more to the story.
First up, they added a new rule - "You need to test equipment at this outlet/breaker before you install it." Cool, my server definitely passed that test. It worked just fine.
But then it turned out that although my rack was comfortably under its rated current limit, the other racks were not - not in any meaningful way. Some other customers were way, way over their allotted power. So when I plugged in my server - again, way, way under my own rack's power allotment - I tripped a breaker on the distribution board.
Tripping that breaker meant the other phase now took the brunt of the load. Again, my rack was fine, but everyone else's apparently was not, so it... pulled very hard on that rail. And it immediately tripped the second phase's breaker.
But that wasn't it.
Then the second bang came when they tried remotely flipping the breakers back on. The massive power draw across the phases as the computers in the data centre all powered back up caused some part of their power distribution setup to just plain fail. I forget the exact details here; I think the UPS got pulled on pretty hard in that instant too, and I vaguely recall it also got cooked.
I had like, four? servers, a router and a switch. I was definitely not going to cause inrush problems. But the big hosting customers? Apparently they... had more. Much, much more. Now, this wasn't my first rodeo when it comes to power sequencing of servers in a data centre - I had done this for a LOT of Sun T1s circa 2000, staging their power-on a rack at a time after a full power-off / power-on cycle event. But apparently this wasn't done by either the data centre or the hosting customers. All power, all on, all at once.
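For what it's worth, staged power-on is easy enough to script these days. Here's a rough sketch of the idea - not what we actually did back then (that was runbooks and hands on power buttons), just an illustration assuming each server has an out-of-band BMC you can reach with ipmitool; the rack and host names are made up:

```python
#!/usr/bin/env python3
"""Sketch of staged rack power-on after a full outage.

Assumptions: ipmitool is installed, each server's BMC is reachable over
the network, and credentials come from the environment. Host/rack names
below are hypothetical.
"""
import os
import subprocess
import time

# Hypothetical layout: bring racks up one at a time, most critical first.
RACKS = {
    "rack-01": ["bmc-web-01", "bmc-web-02", "bmc-db-01"],
    "rack-02": ["bmc-app-01", "bmc-app-02"],
    "rack-03": ["bmc-batch-01"],
}

STAGGER_BETWEEN_HOSTS = 10   # seconds: let each PSU's inrush settle
STAGGER_BETWEEN_RACKS = 120  # seconds: let the whole rack spin up first

IPMI_USER = os.environ.get("IPMI_USER", "admin")
IPMI_PASS = os.environ.get("IPMI_PASS", "")

def power_on(bmc_host: str) -> None:
    """Ask the server's BMC to power the chassis on via IPMI over LAN."""
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc_host,
         "-U", IPMI_USER, "-P", IPMI_PASS, "chassis", "power", "on"],
        check=True,
    )

if __name__ == "__main__":
    for rack, hosts in RACKS.items():
        print(f"Powering on {rack} ({len(hosts)} hosts)")
        for host in hosts:
            power_on(host)
            time.sleep(STAGGER_BETWEEN_HOSTS)
        time.sleep(STAGGER_BETWEEN_RACKS)
```

The point isn't the exact delays - it's that you never slam every PSU in the building onto the mains at the same instant.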
Now, I was a small-fry customer with one rack, still paying the early adopter pricing. The companies in question had a lot more racks and were paying a lot more. So this was all mostly swept under the rug, I stopped being blamed, and over the next few months we all got emails from the data centre telling us about the "new, very enforced power limits per rack, and we're going to keep an eye on it."
Anyway, fun times from ye olde past, when I was doing dumb stuff but other people were making much more money doing much dumber stuff.