Here's a fact: without the efforts of my colleagues and me, it would have been impossible to get a UK passport in January 2000. Just because some systems were unaffected doesn't mean that all systems were, or that the effects would not have been dramatic had the affected ones not been fixed.
I never said that no systems were affected, only that the scale of the problem was massively overhyped. Don't forget that thousands of computer systems were never checked or 'fixed' and remained non-Y2K-compliant, and guess what: they didn't stop working on 1st January as had been claimed. They sailed happily through the millennium, working exactly as they had on 31st December 1999.
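To make that concrete, here is a minimal, hypothetical sketch of the kind of two-digit-year arithmetic that made a system 'non-compliant'; the function and scenario are illustrative, not drawn from any real system. The point is that this class of bug often produced a wrong number on a report rather than a crash:

```python
from datetime import date

def years_since(two_digit_year):
    # Classic non-compliant pattern: years are stored as two digits
    # and the century is assumed. After 1999 rolled over to (20)00,
    # this computes 0 - 99 = -99 instead of 1. In many back-office
    # systems that meant a garbled figure on a printout, not a halt.
    current_yy = date.today().year % 100
    return current_yy - two_digit_year

print(years_since(99))  # negative once the century turned
```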
And of course the question to ask (and the one now being asked) is whether it was sensible to spend a fortune 'fixing' computers that weren't broken, i.e. ones that would have carried on working absolutely fine on 1st January 2000 regardless, or whether it would have been better to take remedial action in the few places where problems actually occurred.
Of course there would be certain 'critical' systems where you really would have to act in advance, but probably not that many. For all the rest, wouldn't it have been better to wait and see whether there was a problem (and in most cases there wouldn't have been) and then fix the problems that actually arose?
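For what it's worth, much of the 'fixing' didn't even mean widening every stored date field; one common, cheap remediation was date windowing. A minimal sketch of the idea (the pivot value here is illustrative; real systems chose their own):

```python
def expand_two_digit_year(yy, pivot=50):
    # Date windowing: interpret two-digit years relative to a pivot
    # instead of rewriting every stored record. With pivot=50,
    # 00-49 are read as 2000-2049 and 50-99 as 1950-1999.
    return 2000 + yy if yy < pivot else 1900 + yy

assert expand_two_digit_year(99) == 1999  # 99 -> 1999
assert expand_two_digit_year(5) == 2005   # 05 -> 2005
```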
I gather that some countries (e.g. Italy and South Korea) put essentially no effort into fixing the issues in advance, yet I don't remember their infrastructures crashing down in January 2000. In fact, I believe their levels of problems were no greater than in the places that had spent millions on fixes.
http://www.nytimes.com/2010/01/01/opinion/01dutton.html?_r=1