February 14, 2014
Kristian A. Bognaes, Director, Norman Safeground Development Center
One often gets the impression that the early operating systems available to computer users did not contain any security measures to protect against data theft and malware. The picture is a bit more nuanced, as I will show in this article.
The early days
After World War II, several companies started making large mainframe computers for general use. IBM, Honeywell, NCR, Control Data, and others were competing for the emerging computer markets. The first operating systems were so-called batch systems, where a computing job was submitted to the computer operators and the result was returned, often as a printout in a mailbox. Only one job could run at a time, and computer resources were billed by the hour. Security concerns were mainly related to the physical security of buildings and machines.
As mainframe architectures evolved, time-sharing operating systems were introduced, allowing multiple I/O terminals to run programs simultaneously. Some systems now offered user management and accounting. Security became an issue once user data needed protection and billing could be done on a per-user basis.
Computer worms, self-replicating programs that spread over network connections, were first demonstrated in the early 70’s on a DEC mainframe. The computer Trojan, a program that misrepresents itself to cover malicious intent, also first appeared on mainframes in the 70’s. Their spread was limited by the permission management already implemented in many mainframe operating systems.
What these early instances of ‘malware’ had in common was that they were motivated by research and curiosity. This was soon to change.
As the 70’s ended, a new era was starting: that of personal computing. Arguably, the first usable home computer entered the world under the name ‘Apple II’. Based on the 6502 microprocessor, this machine went on to be highly successful in business environments as well as in academia and the home. The rapid development of more powerful microprocessors and supporting integrated circuits allowed a multitude of similar systems to follow. The hardware quickly outpaced the availability of software, and the operating systems on these early machines were simple and somewhat crude. Any concern for security was non-existent: the idea was that you had full control of your own machine, so there was no need to consider multiple users, accounts, and data compartmentalization as on the mainframes. Besides, these machines would not have been powerful enough to support such frameworks.
Snakes in the garden
The assumption that you had full control over your own computer was soon proven false. One of the first widespread viruses, ‘Elk Cloner’, was written to spread from one Apple II computer to the next via the floppy disks the computers used. Released in 1982, it was probably the world’s first boot-sector virus. Around the same time, IBM introduced the PC. As with earlier personal computers, the operating system was designed after the hardware, and the first version of DOS for the PC had many similarities to Apple DOS. In other words, both the operating systems and the hardware they ran on were ‘wide open’. This made it easy to create software for these hugely popular platforms, but the openness also made the systems easily exploitable by malicious code. The first PC virus, ‘Brain’, appeared in 1986, and the amount of new malware has increased almost exponentially every year since.
Better operating systems
Looking at the problems with malware, it was clear that the machine architectures and operating systems for personal computers needed to take security into account. Intel introduced ‘protected mode’ with its 80286 processor, and the UNIX operating system became available on x86 platforms (Solaris and SCO). SCO Unix arrived as early as 1989 and offered true privilege separation. Windows went through several versions before the first release of NT with NTFS; this server-focused OS was the first Windows version to implement a security model comparable to that of UNIX. On these systems, users and programs are granted privileges that prevent them from accessing resources they should not have access to, an important defense against malware. However, as we will see, these added measures may only have shifted the focus of the battle.
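To make the privilege-separation idea concrete, here is a minimal sketch in Python of UNIX-style discretionary access control, the model that SCO Unix and later Windows NT brought to personal computers. The names, data layout, and simplified rule set are illustrative only; they are not taken from any real kernel.

```python
from dataclasses import dataclass

# Permission bits, as in the traditional rwxrwxrwx mode string.
READ, WRITE, EXECUTE = 4, 2, 1

@dataclass
class File:
    owner_uid: int
    group_gid: int
    mode: tuple  # (owner_bits, group_bits, other_bits)

def may_access(uid: int, gids: set, f: File, wanted: int) -> bool:
    """Return True if the user may perform `wanted` access on file `f`.

    As in classic UNIX, exactly one class is consulted -- owner, then
    group, then other -- depending on who is asking; there is no
    fall-through to a more permissive class.
    """
    if uid == 0:                      # root bypasses the DAC check
        return True
    if uid == f.owner_uid:
        bits = f.mode[0]
    elif f.group_gid in gids:
        bits = f.mode[1]
    else:
        bits = f.mode[2]
    return (bits & wanted) == wanted

# An /etc/shadow-style file: readable and writable by root only (mode 600).
shadow = File(owner_uid=0, group_gid=0, mode=(READ | WRITE, 0, 0))

print(may_access(0, {0}, shadow, READ))        # root: True
print(may_access(1000, {1000}, shadow, READ))  # regular user: False
```

Even this toy model shows why the early mainframe Trojans spread poorly: a process running as an ordinary user simply cannot touch files belonging to other users or to the system.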
‘Better’ malware as well
Returning to the motives for creating computer malware, we can assume that most early malware was written to show off technical skills and create minor annoyance. As new operating systems came along, it became increasingly difficult to gain the required access to systems. This has not discouraged malware writers: the amount of new malware released every day continues to grow. Although operating systems are more secure, there will always be unintentional bugs and weaknesses that can be exploited. The motive for creating malware has gradually shifted, though. Much of today’s malware is created with specific purposes in mind: to steal processing time, steal or interfere with data, hold your data hostage, or use your system for various networking activities. The common goal is usually to make money. In fact, some propose that more money is being made from exploiting computers than from traditional drug trafficking. The word ‘malware’ no longer covers the problem adequately; the word ‘cybercrime’ describes the situation much better.
Some current threats still depend on weaknesses in applications or operating systems to succeed. The so-called ‘drive-by’ infections happen when malicious code that exploits bugs in web browsers hides in objects on infected web pages. Visiting such a site with a vulnerable browser is enough to become infected.
Many current threats run in the user’s context and do not need access to high-privilege resources at all. They get installed by convincing the user to click on an executable file (a mail attachment, for example). With computers and operating systems becoming more secure, one weak link remains: the user. Recent outbreaks of malware that hold files for ransom or set up a CPU-hogging bitcoin miner are all installed through the user’s misled actions. Network connections, CPU/GPU access, and storage are all available to a rogue process running with regular user privileges.
A similar situation exists on mobile devices, where the operating systems have been designed with security in mind from the start. Malware, often in the form of Trojans, gets access to resources when users accept the privileges it requests. This remains a problem on an open platform like Android, while Apple is stricter about approving applications, largely based on their functionality and the privileges they request. ‘Privilege creep’ is a well-known tactic: get users accustomed to an application, then provide updates that request a steadily increasing privilege level.
An additional issue with mobile devices is that while the OS may be under the control of the phone’s owner, a second processor, the baseband processor, has full privileged access to the device and is under the control of the network provider. This should of course be a security concern.
It is difficult to draw a firm conclusion when dealing with computer security, operating systems, and users. While older operating systems historically offered very little security, modern and more secure platforms seem only to have moved the target zone to a higher abstraction level. API fuzzing, web protocol attacks, unpatched Java and similar frameworks, and luring the user into executing malicious code are all methods that look very much like the infection mechanisms of the old days; the only difference is the abstraction level the malware attacks. While the hardware and OS may be safer, attackers will get what they want higher up in the ‘stack’ while still probing for vulnerabilities at the lower levels.
While the situation has clearly improved over the years, major challenges remain. Educating the user is of great importance: it is a mistake to think that security can be had by switching to a different OS or purchasing another firewall. The most crucial piece of the puzzle is teaching users to be mindful and vigilant about security. The threat landscape is complex, and there is no ‘silver bullet’. Firewalls, anti-malware software, robust operating systems, and educated users all play their part in creating a safe computing environment.
Kristian Bognaes during a lecture on OS’s and the Evolution of malware at the Norwegian School of Information Technology