Big brother's getting bigger

“Intelligent” software is making CCTV more effective, but would you want it watching you?
April 26, 2010

It is four years since Britain’s then information commissioner, Richard Thomas, warned we were slipping almost imperceptibly into a surveillance society. He singled out CCTV, in which Britain is the world leader, with 10 per cent of all the world’s cameras (about one for every 12 people) covering large swathes of our cities. The countless anecdotal reports of CCTV’s growing intrusion into the fabric of everyday existence made my own experience, on Christmas Day 2008, hardly unique.

I had just crossed the local railway line via a public right of way, taking me briefly onto the platform, when a voice boomed from the station speaker system: “What are you doing here, there are no trains today?” “I’m just walking by,” I muttered (I assume there was a microphone somewhere, though I didn’t see one). Grudgingly, the person in a central control room miles away allowed me to continue. Clearly the monitoring system had been programmed to identify any presence on a station platform that day as suspicious.

CCTV footage used to be pretty useless because it was of such poor quality and so time-consuming to analyse. Police often failed to arrest criminals even when they were caught, supposedly red-handed, on camera. But new technology has made it possible to detect incidents as they occur, or even before. Researchers at Reading University have developed CCTV monitoring software capable of identifying, say, an abandoned package, and following the person who left it while they are still within range of a camera. Using technology first developed 20 years ago for burglar alarms, these systems are programmed to distinguish between different types of movement and to flag those defined as unusual, such as an object deposited and then left unmoved for a given period, or frequent bathroom visits on an aeroplane. The latter might have detected the Detroit bomber last December before he tried to detonate his device.
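The "abandoned object" rule described above can be reduced to a simple test on tracked objects. The sketch below is a toy illustration only, not the Reading system's actual logic; the class, field names and the 60-second threshold are all hypothetical.

```python
# Toy sketch of an abandoned-object rule: an object that appears and
# then stays unmoved for longer than a threshold gets flagged.
# All names and the threshold are hypothetical illustrations.

from dataclasses import dataclass

ABANDON_SECONDS = 60.0  # hypothetical "unmoved" threshold


@dataclass
class TrackedObject:
    object_id: int
    first_seen: float   # timestamp when the object entered the scene
    last_moved: float   # timestamp of its last detected movement


def is_abandoned(obj: TrackedObject, now: float) -> bool:
    """Flag an object that has sat unmoved beyond the threshold."""
    return (now - obj.last_moved) >= ABANDON_SECONDS


# A bag appears at t=0 and never moves: not yet flagged at t=30,
# flagged by t=90.
bag = TrackedObject(object_id=1, first_seen=0.0, last_moved=0.0)
print(is_abandoned(bag, now=30.0))  # False
print(is_abandoned(bag, now=90.0))  # True
```

A real system layers many such rules over motion tracking, which is precisely why, as discussed below, the scope for false positives is large.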

Such a system is capable of many useful things, like sounding the alarm when a parked car is being broken into, or when an elderly person has a fall in sheltered housing. And it could play a major role in policing the London Olympics, providing a powerful tool in the otherwise near-impossible task of monitoring public areas for signs of an impending terrorist attack.

Meanwhile, another development promises to reinforce intelligent CCTV surveillance by generating images of suspects from DNA profiles derived from crime-scene samples. These images could in principle be used either to sift through CCTV pictures as they are taken in “real time” or to search recorded footage to find a suspect, and perhaps even to reconstruct their actions leading up to a crime. Researchers at Arizona University have discovered that the identifying characteristics of hair, skin and eye colour are determined by variants in a handful of critical genes, and can be derived from DNA samples. From this, they believe it is possible to build up a profile that could be more accurate than E-Fit pictures generated from eyewitnesses.
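In outline, the idea is a lookup from gene variants to predicted appearance traits. The sketch below is purely illustrative: the marker names, variant labels and trait mappings are invented placeholders, not real genetics or the researchers' method.

```python
# Toy illustration of building an appearance profile from gene
# variants. Every marker name and mapping here is a hypothetical
# placeholder, not real genetic data.

# Hypothetical lookup: each marker's variant suggests a trait value.
TRAIT_MARKERS = {
    "eye_colour":  {"variant_A": "blue",  "variant_B": "brown"},
    "hair_colour": {"variant_A": "blond", "variant_B": "dark"},
    "skin_tone":   {"variant_A": "light", "variant_B": "dark"},
}


def predict_profile(sample: dict) -> dict:
    """Build a crude appearance profile from a variant-per-marker sample."""
    profile = {}
    for trait, variants in TRAIT_MARKERS.items():
        variant = sample.get(trait)
        # Markers missing from the sample stay "unknown".
        profile[trait] = variants.get(variant, "unknown")
    return profile


# A partial crime-scene sample: two markers recovered, one missing.
crime_scene_sample = {"eye_colour": "variant_B", "hair_colour": "variant_A"}
print(predict_profile(crime_scene_sample))
# {'eye_colour': 'brown', 'hair_colour': 'blond', 'skin_tone': 'unknown'}
```

The "unknown" entries in the output hint at the limitation discussed next: a handful of markers yields a rough profile, not a portrait.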

Although these discoveries are some way from being put into practice, their investigative potential is obvious. But so are the dangers of wrongful suspicion. Early versions of the technology would almost certainly need to be refined, perhaps by taking extra genes into account. Even then, variations caused by environmental or lifestyle factors, such as diet and exposure to the sun, could leave the images as little more than rough guides. And at what point would we judge DNA-generated likenesses accurate enough to be admissible in court?

Equally, technology built to identify suspicious behaviour creates great scope for false positives: many people behave in ways a system might deem suspicious, whether because they are drunk, because they are dithering or, as in my case, because they appear to loiter at a time and place where they are not supposed to be.

The Reading University system might have been able to detect the Detroit bomber. But it has yet to be exposed to a large-scale trial that would show whether it can avoid countless (and distracting) examples of “suspicious” but innocent behaviour. Experience with earlier, less sophisticated intelligent software has shown that human intervention is still needed to eliminate such false positives, and concentrate on real criminal activity.

There is also the issue of balancing sophisticated surveillance against concerns about civil liberties. The latest intelligent CCTV provides yet more scope for intrusion into our private lives—from governments monitoring political dissidents to people hacking into the system to spy on partners suspected of cheating.

Yet there is little point attempting to inhibit the technology itself, for once it is out there it will be used. The answer must lie in stringent controls over its use and availability. This is no different in many ways from the situation with current systems of identification, like fingerprint records and vehicle registration databases. It’s just that as the scope for intrusion becomes more pervasive, we need to be ever more careful about how and why surveillance is carried out—and by whom.