100% of anything is hard in I.T., and yet I often hear people shooting for it: 100% availability, 100% code coverage, 100% bug-free code. Yes, some people NEED to strive for this, but it is certainly not something to be undertaken lightly, and in many cases it isn't something that can realistically be achieved. And just aiming for it, with no guarantee of success, costs. Lots.

High availability is probably one of the better-understood areas here, so let's look at the numbers. 99% uptime gives you around 87.5 hours of downtime per year. Going up to five 9's gives you about 5 minutes of downtime each year. That typically means redundancy all the way to the bottom, probably geo-plexing, replication of data, and maybe finding your local disused nuclear bunker and turning it into a data centre. And building James-Bond-esque data centres 30m under Swedish mountains ain't cheap.

Some people from the less…rigorous parts of the organization structure might use numbers like "100%" with the best intentions in the world (they're so used to telling people to give "110%" that they probably think they're being all engineering-y by asking for "99.999%"), and sometimes goals like this can take on a life of their own without any critical thought being applied to them.

So next time you hear "100%" or some other arbitrary-sounding hard target, think critically about it:

- Is it even achievable?
- Is it necessary? (37signals didn't think five 9's availability was for their Basecamp product, which by all accounts is a great success.)
- Has the person asking for it thought about the cost of aiming for it (even though we might not even get there)?
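The downtime arithmetic behind those nines is simple enough to sketch. Here's a minimal example (the function name and the exact year length are my own choices, not anything standardized):

```python
# Sketch: convert an availability percentage into allowed downtime per year.
# Uses a 365.25-day year; real SLAs may define the period differently.
MINUTES_PER_YEAR = 365.25 * 24 * 60


def downtime_per_year(availability_pct: float) -> float:
    """Return the allowed downtime, in minutes per year, for a given uptime %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)


for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% uptime -> {downtime_per_year(pct):,.1f} minutes of downtime/year")
```

Running it shows the cliff you climb with each extra nine: 99% allows roughly 87 hours a year, while five 9's allows only around five minutes.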