The internet and the PCs attached to it are sometimes known as "generative" technologies, meaning that they allow anyone to build and share new uses for them without the approval of "gatekeepers." Yet today, malicious code that seemed of little significance when it first appeared—such as viruses, and the spam email now known to everyone with an email account—threatens to drive people away from the internet and towards sterile, stand-alone appliances that can be manipulated only with the acquiescence of their manufacturers.
Our open technologies are now routinely subverted. One common type of "malware" compromises PCs to create "botnets"—networks of infected machines that await further instructions from the malware's creator. Such instructions may include directing each infected PC to become its own email server, sending spam by the millions to addresses harvested from the machine's hard disc or gleaned from internet searches, typically without the PC's owner noticing. One estimate pegs the number of PCs involved in such botnets at 100 to 150m, or a quarter of all the computers on the internet as of early 2007. A study monitoring botnet activity in 2006 detected the emergence, on average, of 1m new bots per month. MessageLabs, a company that monitors spam, recently stopped counting bot-infected computers because it could not keep up; it says it quit when the figure passed about 10m. And since not all bots are active at any given time, the number of infected computers may be much higher.
Modern worms and viruses routinely infect vast swathes of internet-connected PCs. In 2004, the Sasser worm infected more than half a million computers in three days. The Sobig.f virus, which replicated through email, was released in August 2003 and within two days accounted for around 70 per cent of all email in the world. In May 2006, a virus exploiting a vulnerability in Microsoft Word propagated through the computers of the US department of state in east Asia, forcing the machines to be taken offline during critical weeks prior to North Korea's missile tests. As these numbers show, viruses are not simply the province of computing backwaters. The war is being lost across the board.
The rise of the virus
For most of the life of the internet, the threat posed by bad code increased only slowly. This was the result of several factors. First, the ethos of early hackers frowned upon destructive hacking. Almost all of the best-known viruses of the 1990s had comparatively innocuous payloads; they would attempt to reproduce themselves across networks, but otherwise leave the machines they infected alone. Second, most of the internet's early computers were maintained by professional administrators who generally heeded admonitions to scout for security breaches. They carried beepers and were prepared to intervene quickly in the case of an intrusion. In the mid-1990s, less adept mainstream consumers began connecting unsecured PCs to the internet in earnest. At first, however, their machines were hooked up through transient dial-up connections—limiting both the amount of time during which they were exposed to security threats and the amount of time that, if hijacked, they would themselves contribute to the problem. Finally, there was no business model backing bad code. Programs that tricked users into installing them, or that bypassed users and sneaked on to machines, were written for fun or out of curiosity: bad code was more like graffiti than illegal drugs.
In the last few years, each of these factors has diminished. Now anyone can get online, and mainstream users are switching to broadband connections, which are always on. More than twice as many US adults have home broadband connections as have dial-up modems, and the gap is widening all the time. Awareness of security issues, however, has not kept pace. A December 2005 study found that 81 per cent of home computers lacked first-order protection measures, such as up-to-date anti-virus software, spyware protection or effective firewalls. The internet's users are no longer skilled computer scientists, yet the PCs they own are more powerful than the fastest machines of the 1980s.
Most significantly, there is now a business model for bad code—one that gives many viruses and worms payloads for purposes other than simple reproduction. This trumps the old hacker ethic of "do no harm." Many spam emails produce profits, as enough people actually buy the items advertised or invest in the stocks touted. Moreover, botnets can be used to launch co-ordinated attacks on a particular internet endpoint. For example, a criminal can crash a website by directing "zombie" PCs to continually reload it, and then extort payment from the website owner to make the attacks stop. Those who hack for fun have been joined by those who hack for profit—and, increasingly, those who hack for political or military purposes. (Just ask Estonia, which in 2007 saw many public and private websites shut down in the midst of a dispute with Russian nationalists.)
The dangers of flexibility
How does malware work? Much of it is no different in concept from the first internet worm. In 1988, there were about 60,000 computers—largely mainframes and minicomputers in government buildings and universities—connected to the internet. On the evening of 2nd November 1988, many of these computers started to slow down. An inventory of the running code on these machines revealed a number of rogue programs demanding processor time. Administrators terminated these foreign programs, but they reappeared and then multiplied. Within minutes, some computers started running so slowly that further investigation became impossible.
System administrators discovered that renegade code was spreading from one machine to another. In response, some users unplugged their computers, insulating themselves from further attacks but sacrificing all communication. Others kept their machines plugged in and, working in groups, figured out how to kill the invading software and protect their machines against re-infection. The software—now commonly regarded as the first internet "worm," a self-contained program that, unlike a virus, spreads between machines under its own power rather than by attaching itself to other code—was traced to a 23-year-old Cornell University graduate student named Robert Tappan Morris. He had launched it by infecting a machine at MIT from his terminal in Ithaca, New York. The worm identified other nearby computers on the internet by rifling through electronic address books found on the MIT machine. Its purpose was simple: to transmit a copy of itself to the other machines, where it would run alongside existing software—and repeat the cycle. Within a day, the worm had compromised an estimated 5 to 10 per cent of all internet-connected machines.
To most, the Morris attack was more a curiosity than a call to arms. Cornell convened a commission to analyse what had gone wrong. Its report laid the blame for the worm solely on Morris, who was prosecuted and given three years of probation, 400 hours of community service and a $10,050 fine. But his career was not ruined. In 1995 he co-founded a dot-com which three years later he sold to Yahoo for $49m. He is now a tenured professor at MIT.
Why were the internet-connected machines infected by the Morris worm so vulnerable? Unlike proprietary networks of the time, such as the long-distance telephone system, the internet had no "control points" from which one could scan network traffic for early warnings of malicious behaviour and then stop it. The system was from its inception intended to be flexible rather than centralised. Endpoint computers could be compromised because they were running operating systems for which outsiders could write code. And the operating systems and applications running on the computers contained flaws that rendered them more accessible to uninvited code than their designers intended. Moreover, many administrators were lazy about installing available fixes to known vulnerabilities, and often utterly predictable in their choice of passwords. Since the computers were run by disparate groups who answered to no single authority, there was no way to secure them all against attack.
The generative dilemma
The internet has grown hugely over the past 20 years, thanks in part to its "generative" power—the ability for lots of people to build on a platform, and share what they do with others, without the permission of the platform-maker. This encompasses explicitly open and free projects such as the GNU/Linux operating system and Wikipedia, where anyone can edit an entry. The proprietary Microsoft Windows is also generative—people can write new code to run on a Windows PC without Bill Gates's permission. Yet the flip side of this generative power—the security vulnerabilities first exposed by the Morris worm—remains as acute as ever. On the internet, there is no "central" way to fix security breaches. More than ever, PC users expect to be able to download new code built by strangers with a simple mouse click, whether to watch a video newscast embedded within a web page or to install applications like word processors. But the price of this convenience is that PC users increasingly find themselves the victims of bad code. In addition to overtly malicious programs like viruses and worms, their PCs are plagued with software they have nominally asked for that creates pop-up windows, crashes their machines and damages applications. Consumers are thus being pushed in one of two unfortunate directions: towards independent information appliances that reject third-party modifications, or towards a form of PC lockdown that resembles the centralised control IBM exerted over its rented mainframes in the 1960s.
This predicament is embodied in one of the most iconic computing devices to appear in recent years. On 9th January 2007, Steve Jobs introduced the iPhone to an eager audience in San Francisco. A beautiful and brilliantly engineered device, the iPhone was a triumph for Jobs, bringing his company into a market with extraordinary potential for growth. But for all its elegance, the iPhone is fundamentally different from the generative, open-ended internet PC. As a technology, it is sterile. The iPhone comes pre-programmed; you are not allowed to add programs to the all-in-one device that Steve Jobs sells you, but Apple can change its functionality through remote updates. Indeed, when tinkerers altered the iPhone's code so that it could run other applications, Apple threatened—and then delivered on the threat—to turn their iPhones into iBricks. The machine was expressly designed not to be generative beyond the innovations that Apple wants. Jobs recently made a small concession to the hackers: he announced a "software development kit," whereby third-party programmers could write code for the phone the way they might for a Windows PC or Mac. But Jobs reserves the right to decide which applications are allowed to run.
The iPhone represents a hugely significant shift, one being made because we have grown weary not of the unexpected cool stuff that the generative PC produced, but of the unexpected uncool stuff that came along with it. Viruses, spam, identity theft, crashes: these are the consequences of the freedom built into the generative PC. And as these problems grow worse, for many people the promise of security will prove reason enough to give up that freedom. The PC revolution was launched with PCs that invited innovation by others. So too the internet. Both were generative. Both overwhelmed their proprietary, non-generative competitors, such as the makers of standalone word processors and online services like CompuServe and AOL. But the future now unfolding looks very different from this past.
Safety is, of course, an invaluable promise for consumers. But a lockdown on PCs and a corresponding rise of "tethered" appliances will eliminate much of what today we take for granted: a world in which mainstream technology can be influenced, even revolutionised, out of left field. The iPhone is a product of both fashion and fear. It boasts an undeniably attractive aesthetic, and it bottles some of the best innovations from the PC and the internet in a stable, controlled form. But a world dominated by appliances like the iPhone is not an appealing one. Eliminate or lock down the PC in many living rooms, and we eliminate the test bed and distribution point of new, useful software. Of course, the innovative capacity of generative networks may yet prove equal to the challenge. Google's "Android"—a new operating system for mobile phones built on the open-source Linux kernel—is meant to make a phone act more like a PC. But if the rise of online malware continues at its present rate, the point of no return for the modern internet may arrive sooner than anyone has dared to predict.