In 1889 the Spectator published an article, “The Intellectual Effects of Electricity,” intended to provoke its Victorian readers. Robert Cecil, the prime minister, had recently given a speech to the Institution of Electrical Engineers in which “he admitted that only the future could prove whether the effect of the discovery of electricity… would tell for good or evil.” The authors attacked him for being soft on electricity. Its material effects were welcome—“imagine the hundred million of ploughing oxen now toiling in Asia, with their labour superseded by electric accumulators!”—but its intellectual effects were not.
Electricity had led to the telegraph, which in turn saw “a vast diffusion of what is called ‘news,’ the recording of every event, and especially of every crime.” Foreshadowing Marshall McLuhan by almost a century, the magazine deplored a world that was “for purposes of ‘intelligence’ reduced to a village” in which “a catastrophe caused by a jerry-builder of New York wakes in two hours the sensation of pity throughout the civilised world.” And while “certainly it increases nimbleness of mind… it does this at a price. All men are compelled to think of all things, at the same time, on imperfect information, and with too little interval for reflection.”
Fast forward 120 years, and similar criticisms abound. Consider an anti-Twitter lament that the New Yorker writer George Packer published in February, of all places, on his blog: “There’s no way for readers to be online, surfing, emailing, posting, tweeting, reading tweets, and soon enough doing the thing that will come after Twitter, without paying a high price in available time, attention span, reading comprehension, and experience of the immediately surrounding world.” In May, even the US president Barack Obama—a self-confessed BlackBerry addict—complained about a “24/7 media environment that bombards us with all kinds of content and exposes us to all kinds of arguments, some of which don’t always rank all that high on the truth meter,” adding: “With iPods and iPads... information becomes a distraction, a diversion, a form of entertainment, rather than a tool of empowerment.” Of course there is a price to pay for processing information. But the real question is: is the price too high?
Enter Nicholas Carr, a technology writer and Silicon Valley’s favourite contrarian, whose book The Shallows: What the Internet is Doing to Our Brains (Norton) has just come out in the US (and will be published in Britain by Atlantic in September). It is an expanded version of an essay, “Is Google Making Us Stupid?,” printed in the Atlantic magazine in 2008, which struck a chord with several groups. Those worrying about Google’s growing hold on our culture felt Carr was justified in going after it (though there was little about the search giant in the article). Those concerned with the accelerating rhythm of modern life, the dispersion of attention, and information overload—all arguably made worse by the internet—found a new ally. Those concerned with the trivialisation of intellectual life by blogs, tweets, and YouTube videos of cats also warmed to Carr’s message. Online magazine Slate has already compared The Shallows to Silent Spring, the 1962 book by Rachel Carson that helped launch the environmental movement.
Whatever one makes of Carr’s broader claims about the internet, many readers will be impressed by his summation of recent discoveries in neuroscience. He builds on the work of Nobel-winner Eric Kandel and others to reveal that human brains adapt to new experiences—a feature known as “neuroplasticity.” This is helpful from an evolutionary perspective, but it also means some brain functions atrophy if we don’t use them. Here Carr mentions an oft-cited study that found the brain structures of London cab drivers had changed as they memorised the city’s streets—changes that may well reverse as they come to rely on GPS rather than memory to navigate. He believes that neuroplasticity provides the “missing link” to understanding how the media have “exerted their influence over the development of civilisation and helped to guide, at a biological level, the history of human consciousness.”
This claim is backed up with research on how our brains process information. Some of these findings are disturbing, if predictable: multi-tasking makes us less productive; those reading online tend to skim rather than read; using multimedia to present data may make it harder to grasp, and so on. In particular Carr cites work by Gary Small, a professor of psychiatry at UCLA, who found that the use of modern media “stimulates brain cell alteration and neurotransmitter release, gradually strengthening new neural pathways in our brains while weakening older ones.” His experiments showed that just five hours of internet use was enough to activate previously dormant parts of the brain’s prefrontal cortex—evidence, for Carr, that the internet “rewires” brains.
Carr believes that we are trapped, because the internet is programmed to “scatter our attention” in a way that aggravates the pernicious influences of some types of information-processing. He concedes it is “possible to think deeply while surfing the net,” only to add that this is “not the type of thinking the technology rewards.” The net “turns us into lab rats constantly pressing levers to get tiny pellets of social and intellectual nourishment.” Carr concludes that “with the exception of alphabets and number systems, the net may be the single most powerful mind-altering technology that has ever come into general use.” This leads to an even bleaker vision: “We are evolving from being cultivators of personal knowledge to being hunters and gatherers in the electronic data forest.”
Carr assumes that the internet, like any other tool, will affect its users in a certain way. Such technological determinism has fallen out of fashion in academia. But Carr is eager to rehash points made by earlier thinkers, above all, the US historian and philosopher Lewis Mumford. For instance, Carr argues that both maps and clocks “placed a new stress on measurement and abstraction, on perceiving and defining forms and processes beyond those apparent to the senses,” while “the clock’s methodical ticking helped bring into being the scientific mind and the scientific man.”
Yet how similar are a clock and the internet? The former is a tool; the latter is more like a new dimension in public life. This is the trap the Spectator fell into 120 years earlier, when it conflated the telegraph—the tool—with the electricity that powered it. The telegraph may have shortened attention spans, but that judgement needed to be balanced against the many positive effects of electricity on public life—from electric lights in libraries, to the cinema, to the spread of knowledge across the globe. The intellectual effects of a new technology must be judged on what it does to social organisation, not just on how it affects our brains. Carr’s argument implies that something drastic must be done about the internet, but it glosses over the obvious objection that “shallower” individuals may be the price to pay for a deeper public discourse.
The regulation of any medium needs to begin with a conception of public good that goes far beyond the micro-levels of neuroscience. How does the medium affect citizens’ ability to be heard, to get an education, to earn a living, or to move up the social ladder? Given the global nature of the internet, global justice needs to be taken into account too—regulatory decisions made in the US and Europe will affect people in the developing world. Would the globe’s poorest pass up the benefits of the internet for fear that it might make them shallower?
To his credit, Carr is under no illusion that we can return to some simpler tribal past. But relying on neuroscience won’t do either—at least not without first explaining why certain configurations of neurons matter more than others. And in deploring the superficial discourse of the internet, Carr also faces a problem with his distinction between the old, “good” literary culture of books and journals he wishes to save, and the new, “bad” post-literary culture of Twitter and blogs. That the two are separate is far from self-evident—especially if the boundaries of the internet are pushed to include reading devices such as the iPad and the Kindle. According to Carr, the “logic” of the internet dictates that such devices will end up with links embedded into the texts they display. Yet that is by no means preordained. While the iPad may emerge as a new engine of distraction, the Kindle is likely to market itself as an engine of concentration instead.
Then there is the question of the internet’s political economy—a subject that, judging by his blog, Carr understands well, and yet The Shallows fails to analyse in any real depth. Internet users spend their time clicking on link after link, but this is not an inevitable feature of the web. Google and other companies create these “link traps” to entice us to click again and again, since the more they know about what interests us, the better they can customise adverts and other services. Our distraction is not pre-determined; it is the by-product of a bargain in which we have agreed to become targets of aggressive and intrusive advertising in exchange for free access to the internet’s goodies. In the future one could imagine Google offering a premium version of its service, where users would be charged a penny for each search but wouldn’t be shown any ads. Similarly, one can imagine a version of the New York Times website without links to external sources. This, in fact, is what it looked like ten years ago—and how the Kindle edition of the newspaper still looks today. For all his insights into the plasticity of the brain, Carr is blind to the plasticity of the internet itself. Today’s internet—with its profusion of hyperlinks, widgets, tweets, and pop-ups—is only one of many possible future “internets.” Equally, the level of concentration we can expend on reading the New York Times only matters as long as there is someone willing to publish it. To attack the net for ruining our concentration while glossing over how it disrupts the economics of publishing is like complaining about too many calories in the food served on the Titanic.
Carr’s chief problem, though, is a tendency to view every social problem he encounters as either caused by the internet or heavily influenced by it. He worries about the emergence of the post-literary mind; the fact that few people have time for novels like War and Peace; the lack of time and space for contemplative thought; and even a “slow erosion of our humanness and our humanity,” not to mention his constant fretting about the future of a western civilisation held hostage to the ephemeral tweets of the movie star Ashton Kutcher. There is cause for concern here, but most of these problems pre-date the internet. Similarly, Carr’s sections on the novel provide a conservative defence of linear narrative, stable truths, and highly structured, rational discourse. Yet all of this came under severe assault from postmodernism long before Google’s founders entered high school.
But what disappoints most about The Shallows is that in his quest to explore our hidden neural pathways, Carr misses an opportunity to survey more urgent social ills that really can be connected to digital culture. Take the transparent culture of social networking that is slowly reshaping human behaviour in disturbing ways. Insurance companies have accessed claimants’ Facebook accounts to try to disprove that they have hard-to-verify health problems like depression; employers have checked social networking sites to vet future employees; university authorities have searched the web for photos of their students drinking or smoking pot. And given the ubiquity of modern technology—especially the cameras and recorders that come with mobile phones—our natural reaction might be to stop any unusual behaviour for fear it may be made public. As a result we may end up with blander, more risk-averse citizens, especially amongst those who wish to run for public office. The web may have given us a revolution in transparency—but a revolution in complacency is something we can do without.
More disturbingly, there is evidence that young people may be not just complacent but also poorly informed. A study published in February by academics at East Carolina University surveyed the information habits of 3,500 18-to-24-year-olds during the 2008 US presidential campaign. The aim was to investigate whether those who learned about the news from cable television, comedy shows, podcasts, and social networking sites were as well informed about politics as those who relied on traditional sources. The findings provide few reasons to be optimistic: “Users of these sites tend to seek out views that correspond with their own; they are no more knowledgeable about politics than their counterparts and, in fact, seem to be less so… they do not seem to be more likely to vote.” As David Gelernter, the Yale computer scientist, said: “If this is the information age, what are we so well-informed about?”
There is a danger that we will become even less well-informed, as the web becomes both more personalised and more social. Concerns that the internet traps users in unchallenging information ghettos are not new, stretching back to 2001 and the US legal scholar Cass Sunstein’s book Republic.com. Sunstein argues that, when compared to older media, the internet allows users to seek out opinions and news with which they already agree, creating online news ghettos in which the views of right and left rarely mix.
What is surprising, however, is that today’s technology companies seem to use that book as a to-do list. Google, for example, has been pushing to provide personalised search results to its users, meaning that two people searching for the same term may now get different results, altered according to what they have clicked on before. In December 2009, Google tweaked its rules so that even users who are not signed into Google—thus denying the search giant access to their previous search history—now see personalised results too. Facebook is not far behind; in April it announced integration with Yelp, a platform for reviewing local businesses, and Pandora Radio, an online music recommendation service. Anyone coming to these sites while logged into Facebook will be immediately exposed to the kind of content—restaurants, cafes, music bands—that their friends have already marked as favourites. It’s not clear how people will cultivate independent taste in such a collectivist environment. Film and restaurant criticism has already been superseded by automated one-line reviews culled from the internet. Judging by the health of the media industry, serious book criticism is also on the way out, clearing the way for Amazon’s anonymous reviewers. Overall, the internet’s effects on critics and intellectuals are little examined—and yet such issues have far-reaching implications for social and political life.
An explosion in social networking activity has also triggered an avalanche of narcissism, especially on college campuses. A 2009 poll of 1,068 college students in the US conducted by researchers at San Diego State University found that 57 per cent believe that their generation uses social networking sites for self-promotion and attention seeking, while 40 per cent agreed with the statement that “being self-promoting, narcissistic, overconfident, and attention-seeking is helpful for succeeding in a competitive world.” Jean Twenge, the psychology professor who conducted the poll and co-authored a 2009 book, The Narcissism Epidemic, believes that the very structure of social networking sites rewards narcissists. A study in the Personality and Social Psychology Bulletin in 2008 surveyed 130 Facebook profiles for signs of narcissistic behaviour and found that “because narcissists have more social contacts on Facebook than the non-narcissists, the average user will experience a social network that over-represents narcissists.” Concerns about excessive narcissism are as old as those about the end of contemplation—but social networking has broken new ground here.
Such behaviour may well have an addictive edge to it. A study published in April by researchers at the University of Maryland asked 200 students to give up all media for 24 hours and then report their experiences. Here are some snippets: “I reached into my pocket at least 30 times to pull out a vibrating phone that wasn’t there…”, “I felt phantom vibrations all throughout the day…”, “I noticed physically, that I began to fidget, as if I was addicted to my iPod.” Here it is psychiatry, not neuroscience, that can help.
All these—the erosion of privacy, the triumph of the collective mind over the individual, the uncertain future of criticism, the customisation of the web, the blossoming of narcissism, the worsening addiction to technology—are complex social and political problems that cannot be solved through technology alone. The internet has helped to cause a lot of them—but it doesn’t seem to hold many solutions. Policy-makers and civil society will need to bear the burden of working on regulation and organising advocacy campaigns.
Yet most of Carr’s top concerns are, ironically, already being solved by the internet itself. Computer programs like Freedom allow distracted users to disconnect and get some work done; services like Instapaper make text readable by cleansing it of ads. Equally, just as there is now a push to get people to slow down and enjoy their food, and a smattering of literati pushing for a “slow reading” movement, it’s not unthinkable that people might be persuaded to slow down their web surfing too. It’s not so hard to imagine some kind of “Slow OS,” a Windows-like operating system built around the key moral principles of the “slow web,” where one would be allowed to check email only twice a day; where sites would be cleansed of distracting advertising; where access to Twitter and Facebook would be limited to just ten minutes per day; and so forth. All of this would be easy to build and could probably appear on the market within a year, depending on the buzz around Carr’s book. If only all of the internet’s problems—not just the shallow ones—could be solved so easily.