Is open source software less secure?


Hand on heart: do you know exactly what the browser you are using right now is doing? Besides your knowledge of software development, the answer mainly depends on which browser that is. If you use an open source browser such as Mozilla Firefox, you can inspect the source code and check in detail what happens, for example, when a website is loaded. Closed browsers such as Microsoft Edge or Apple Safari do not offer this option. In fact, the following applies: only with open source programs can you really know what is inside.

Transparency as a security concept

The idea that source code accessible to everyone can provide more security seems paradoxical at first: couldn't attackers also study the code and exploit its weaknesses? Yes, they can, and they do so time and again. But an active developer community can just as quickly ensure that the problems in question are fixed. Security-relevant tools such as encryption programs benefit from this concept in particular. In so-called security audits, large open source programs are systematically checked for problems - a process that is not possible for closed source programs, simply because the source code is not available.

Another potential advantage of the open source concept is that the code can be picked up by other development teams and developed further independently - this is known as a "fork". This is what happened with the encryption tool TrueCrypt, which was discontinued in 2014. After the original program was abandoned amid security concerns, another team continued the code base under the name VeraCrypt (in fact, development of VeraCrypt began before the end of TrueCrypt, but it would not have been possible without the open source concept). VeraCrypt also provides an example of a successful audit: the security gaps found in the source code were quickly closed by the developer community.

No security guarantees

Whether open or closed source: both concepts have advocates who tout their security. Neither, however, can guarantee perfect security.

Perhaps the "simplest" example that closed code does not automatically mean more security is provided by Microsoft's Windows operating system. Every month, new - and not infrequently severe - security vulnerabilities in Windows come to light. Microsoft responds with Patch Tuesday, on which the developers from Redmond distribute fixes once a month. Windows users have to rely on the Windows makers doing a good job - unlike with Linux, dedicated outside developers have no chance of fixing code errors in Windows themselves. On top of that, you can never be sure what Windows is doing in the background. Not least because of Windows 10 and its still not fully transparent data collection, this is a major disadvantage of non-open software. This is of course not a Microsoft-exclusive problem; it applies to any closed source program.

Open source systems such as Linux or BSD provide the opposite example. In theory, any developer can close security gaps on their own - of course, the corresponding changes must then be incorporated into the "official" code. Experience has shown that errors and security gaps in the Linux kernel are quickly fixed by the developer community. But that is only half the battle: if the patches do not reach the user, they are of little use. Here, too, a large operating system provides a negative example, namely Android. Millions of smartphones run completely outdated versions of Google's Linux-based mobile system. Not only unpatched Linux vulnerabilities, but also missing security concepts on Google's part are a problem. At least there is the theoretical possibility of building new Android versions from the Android Open Source Project - developer communities such as XDA Developers step into the breach with so-called custom ROMs - but the effort is considerable and the results are not always satisfactory. Moreover, even with open source code, you cannot be sure that the binaries built from it really originate from that source code and have not been manipulated along the way.

A mixed approach

Large proprietary programs also often integrate open source projects for certain functions. One example is the popular smartphone messenger WhatsApp. While WhatsApp's basic code is closed, the end-to-end encryption introduced in 2014 is based on open source code from Open Whisper Systems. In fact, the message encryption also used in the alternative messenger Signal is currently considered unbreakable, even though the protocols used are openly visible. What exactly WhatsApp, which was acquired by Facebook in 2014, does beyond encrypting messages cannot easily be determined - at least as long as the WhatsApp makers do not reveal the app's code.
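To make the idea of end-to-end encryption a little more concrete, here is a minimal sketch in Python using the third-party cryptography package. It is not the Signal protocol that WhatsApp uses (which adds prekeys, a double ratchet and much more); it only shows the basic principle: two parties derive a shared key from exchanged public keys, and only they can read the messages - not the server in between. All names and values are illustrative.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a key pair; only the public keys are exchanged.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Both sides derive the same shared secret via Diffie-Hellman:
# own private key + the other side's public key.
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())
assert alice_shared == bob_shared

# Stretch the raw shared secret into a symmetric message key.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"demo-e2e").derive(alice_shared)  # "demo-e2e" is a made-up label

# Alice encrypts; only Bob, who can derive the same key, can decrypt.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"hello Bob", None)
print(ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None))
```

The "openly visible protocol" from the article corresponds to exactly this kind of code: anyone can read how the keys are derived, yet without the private keys the ciphertext remains unreadable.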


Whether open source or not: how safely and reliably a program works always depends on who is responsible for its development. A security-relevant program that has not been worked on for years should at least be viewed with skepticism, even if the source code is open. If, on the other hand, the code is properly documented and many programmers are actively involved in its further development, the chances are good that you are looking at a secure open source project.

Trust issues

Let's recap: if the source code has been audited and any security gaps found are quickly plugged by the developer community, active open source projects are absolutely safe. Right? Unfortunately not, at least not without reservations. If you download a "finished" open source program, you have to trust that the clean source code has not been tampered with along the way. An example: if you download the aforementioned Firefox from a dubious source, it is quite possible that someone has built malware into it. After all, the source code must first be turned into a finished program, i.e. compiled, before it can be used - and you cannot easily tell from the finished binary which source code it was actually built from.
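One practical, if only partial, remedy is to check a download against the checksum (or signature) that the project publishes through a trusted channel. The short Python sketch below compares a file's SHA-256 hash with a published reference value; the file name and the checksum are placeholders, not real Firefox values. This catches a tampered download, but of course only if the published checksum itself can be trusted.

```python
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hash of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file name and reference value; in practice the checksum
# comes from the project's official website or signed release notes.
downloaded = "firefox-installer.exe"
published = "0123...abcd"

if Path(downloaded).exists():
    if sha256_of(downloaded) == published:
        print("Checksum matches the published value.")
    else:
        print("Checksum mismatch - do not run this file!")
```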

So the solution for skeptics could be: compile it yourself. Simply download the verifiably clean source code, run it through the compiler of your choice, and you have a safe open source program. But here, too, there is a "but": of course, you also have to trust the compiler. The so-called Ken Thompson hack illustrates why. In the 1980s, the computer scientist demonstrated that a corrupted compiler could build a dangerous back door into a program with perfectly clean source code - without any trace of it in the compiler's own source code. In theory, for one hundred percent security you would not only have to compile the program yourself, but also write the compiler used for it yourself, on a completely "clean" system, of course. This line of thought can be continued indefinitely, even though verification concepts against the Ken Thompson hack now exist.
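To illustrate the principle rather than the real attack, here is a deliberately tiny, hypothetical Python sketch: the "compiler" is just a function that turns a source string into runnable code, and it silently injects a back door when it recognises a clean login routine. Thompson's actual hack went a step further and also re-inserted itself whenever the compiler's own source was compiled; that second trap is only hinted at in a comment here. All names are made up for the example.

```python
# Toy illustration of the "trusting trust" idea - not a real compiler.
BACKDOOR = '    if password == "letmein":  # injected back door\n        return True\n'

def trojaned_compile(source: str) -> dict:
    """'Compile' a source string into runnable code, tampering with it on the way."""
    # Trap 1: when the compiler recognises the login routine, it inserts a
    # back door, even though the source it was handed is perfectly clean.
    if "def check_login" in source:
        source = source.replace(
            "    return password == secret",
            BACKDOOR + "    return password == secret",
        )
    # Trap 2 (only described here): when it recognises the compiler's own
    # source, it would re-insert both traps, so recompiling the "clean"
    # compiler source still yields a trojaned compiler.
    namespace = {}
    exec(source, namespace)  # stands in for "compile and link"
    return namespace

CLEAN_LOGIN_SOURCE = """
def check_login(password, secret):
    return password == secret
"""

program = trojaned_compile(CLEAN_LOGIN_SOURCE)
print(program["check_login"]("letmein", "real-secret"))  # True: back door fires
print(program["check_login"]("wrong", "real-secret"))    # False: normal behaviour
```

The source code shown to the user (CLEAN_LOGIN_SOURCE) contains nothing suspicious; the manipulation only exists in the tool that builds it - which is exactly why "just read the source" is not a complete answer.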

Admittedly, we are slowly reaching a level where the idea of security borders on paranoia. In practice, however, it turns out that black-and-white thinking about open versus closed source software is not the best idea. Open source is not more secure across the board, nor is proprietary software shady through and through. As is so often the case, the truth lies somewhere in the middle.