A technical failure in a data-center switch caused Slashdot to flood itself with bogus traffic, shutting down the service for 75 minutes. Honestly, the only reason I'm posting this is so I can include that ^^^ headline. It's the headline I was born to write, srsly.
As a precautionary measure I rebooted each core just to make sure it wasn't anything silly. After the cores came back online they instantly went back to 100% fabric CPU usage and started shedding connections again. So slowly I started going through all the switch ports on the cores, trying to isolate where the traffic was originating. The problem was that all the cabinet switches were showing 10 Gbit/sec of traffic, making it very hard to isolate. Through the process of elimination I was finally able to narrow the problem down to a pair of switches… After shutting off the downlink ports to those switches, the network recovered and everything came back. I fully believe the switches in that cabinet are still sitting there attempting to send 20 Gbit/sec of traffic out, trying to do something – I just don't know what yet.
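The process of elimination he describes – sampling per-port traffic counters and hunting for the top talker – can be sketched in a few lines. This is a hypothetical illustration, not his actual procedure: the port names and counter values below are invented, and real switches would expose the counters via SNMP (e.g. IF-MIB's ifHCInOctets) or the CLI rather than a Python dict.

```python
# Rank switch ports by traffic rate from two samples of their octet
# counters -- the way you'd hunt for the source of a flood.

def port_rates(sample1, sample2, interval_s):
    """Return {port: bits/sec} given two {port: octet-counter} samples."""
    return {
        port: (sample2[port] - sample1[port]) * 8 / interval_s
        for port in sample1
    }

# Hypothetical counter samples (in octets) taken 10 seconds apart.
t0 = {"gi1/1": 1_000_000, "gi1/2": 2_000_000, "gi1/3": 5_000_000}
t1 = {"gi1/1": 1_500_000, "gi1/2": 2_100_000, "gi1/3": 12_505_000_000}

rates = port_rates(t0, t1, 10)
top = max(rates, key=rates.get)
# The flooding downlink stands out by orders of magnitude:
print(top, f"{rates[top] / 1e9:.1f} Gbit/s")  # → gi1/3 10.0 Gbit/s
```

In practice the hard part, as the quote makes clear, is that a loop or reflection storm lights up *every* downlink at once, so you end up disabling ports one at a time until the load collapses.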
(via Hack the Planet)