One of the claimed virtues of open source software is that it ensures that program code is read and checked by a crowd. Although this could never bring any expectation of freedom from bugs, it should at least protect us from gaping security holes.
I know of no systematic study comparing the number or severity of bugs, or the time taken to fix them, in closed- versus open-source products. One review speculated that open source “reinforces sound security practices by involving many people that expose bugs quickly”, which would seem fairly reassuring, particularly as one of its authors was a Microsoft employee.
Real-world evidence is that blatant coding cock-ups can rest in public source code for significant periods, putting users at risk. Apple’s contribution was last year’s flaw in SSL/TLS verification, which it fixed briskly in iOS 7.0.6 and OS X 10.9.2. A classic typographic error that exploited a well-known weakness in the syntax of the C family of programming languages, it is so obvious that it is hard to see how anyone with the slightest knowledge of programming could have checked that critically important source code without spotting it.
I expect that it will surface in this year’s GCSE papers, where even the below-average candidate should find it an easy few marks. Apple’s post-incident investigation should have discovered how anything so grossly wrong could ever leak out into a release product, particularly in a component concerned with trust and security. But unlike the findings of air and other transport accident investigations, the reports of such inquiries are never made public, assuming that they occur at all.
Hard on its heels, in early March last year, Linux distributions relying on GnuTLS for remarkably similar functions were found to be vulnerable to a related bug. Much Linux software uses GnuTLS because the licence of the ‘standard’ open source library, OpenSSL, imposes restrictions that are incompatible with the GNU GPL, under which much of the Linux ecosystem is distributed.
Such parochial licensing issues sadly plague the ostensibly fraternal world of open source, and here exposed those using Linux – including Mac and iOS users connecting to an affected server – to potential attack. Although it was patched swiftly, who can now, a year later, be confident that every server they connect to has been properly updated?
OpenSSL itself is no stranger to embarrassing vulnerabilities: in May 2008, the OpenSSL package distributed with Debian Linux was discovered to have left users vulnerable ever since the error was introduced in September 2006, a window of some twenty months.
Then last September we were all, OS X, Linux and other Unix users alike, treated to a whole family of Shellshock vulnerabilities in the widely used Bash command shell, which resulted from coding errors introduced way back in 1989, before many of today’s software developers had even been born. Christmas 2014 was celebrated to the tune of vulnerabilities in NTP, which drove Apple to push a remedial patch over its automatic security update mechanism for the first time ever.
A couple of decades ago, academic research in computing was buzzing with the concept of proving critical code to be correct. Formal verification is now extensively employed in hardware design: chip developers like Intel and ARM have used it for years, so reducing the previous string of embarrassing bugs in processors. Whilst advances have been made in that direction for software, we are still a long way from automated security auditing that could detect even these glaringly obvious bugs.
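To give a flavour of what machine-checked correctness looks like, here is a toy proof in the Lean proof assistant. The function and property are invented for illustration; real verification efforts tackle vastly larger claims, but the principle is the same: if the code did not satisfy the stated property, the proof simply would not compile.

```lean
-- A toy illustration of machine-checked correctness (names invented).
def double (n : Nat) : Nat := n + n

-- The proof obligation: double really does multiply by two.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

A duplicated line of the “goto fail” kind would leave such an obligation unprovable, and the build would fail at the developer’s desk rather than in the field.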
If informal crowd-sourced code checking does not work, then the industry needs to come up with formal human checking and auditing that can meet the need. And in the spirit of open source, we need confidence that such checks are actually made, and that further foul-ups are caught before release.
A common excuse is that standard development languages like the C family are so prone to syntactic error, and so inherently obfuscating, that any form of checking is itself liable to miss errors or introduce new ones. Yet systems software development, which lays the security foundations for OS X, Linux, Unix and other major operating systems, still relies on C and its relatives.
Like the moth which flies miles to reach an irresistible lantern, only to die in its flame, the C languages continue to draw programmers to their nemesis.
Updated from the original, which was published in MacUser vol 30 issue 06, 2014.