Laura-from-across-the-road and I once tried to rent Titanic. When asked her age, Laura answered (truthfully) that she was only 11. I remember the cashier’s expression as a silent thunderstrike of disdain commingled with respect, rolling across Blockbusters in an invisible shock wave. Laura just didn’t have it in her to lie.
For children of the 1990s, walking to the local video-rental shop meant journeying into a cave filled with treasure. Eight minutes’ walk out of the house, down the road, round the corner, through the door: video rental was a quest. At the checkout, we would hand over the long-considered plastic case and hold our breath, hoping a copy was still in stock. If we were lucky, the guy behind the desk would hand us another case, covered in plastic that was always rippling and milkily translucent.
Would the video work? Once I rented a malfunctioning cassette of Watership Down, its picture zigzagged and frayed like the unravelling hem of a cardigan. But technical issues were for later. For now, there was the trudge home—swinging a plastic bag which contained the rented cassette and a packet of soon-to-be incinerated microwave popcorn—buoyed along by that feeling that comes in between getting a thing you want and consuming it.
A lot has changed in the way we enjoy video since then, but not everything. Michael Z Newman’s stylish and informative new book Video Revolutions: On the History of a Medium hits pause on key moments in the biography of video, freezing them for closer examination, while always keeping an eye on the bigger picture.
Historians of video face an immediate challenge: the story of video is inseparable from the many different meanings the word has carried. “Video” made its lexical debut in the early 1930s, alongside television itself. At first the two words meant the same thing. Newman points to a 1937 citation in Printer’s Ink Monthly defining “video” simply as “the sight channel in television, as opposed to audio.”
In its early days, television was either live or broadcast via kinescope recording, in which a film camera recorded a broadcast directly from a monitor. That recording could then be re-broadcast, although with considerably poorer picture quality. Some shows, like I Love Lucy and Gunsmoke, were shot on movie-style 35mm film rather than broadcast live (allowing I Love Lucy to become the first TV show to receive the honour of re-runs), but these were anomalies.
It wasn’t until 1956 that a commercially viable form of videotape was brought to market by the Ampex Corporation of California. Videotape’s quality far exceeded that of kinescope recordings, immediately solving the problem of broadcasting across America’s multiple time-zones: it was no longer possible to tell by the quality of a broadcast whether it was live or not.
In the decades after the invention of videotape as a recording medium, video and television peeled away from each other. Video no longer just meant “television.” It became a verb, too: to video, meaning to record on to videotape. But just as this division had been solidified, new videotape gadgetry changed video’s meaning again. The VCR wrenched videotape’s identity out of the television studio and into people’s homes.
The VCR is the most widely known version of a system actually called the VTR—video tape recorder. The “C” is for “cassette,” the innovation that kept the delicate spool of tape away from our fingers. The first big success was Sony’s 1975 Betamax system, before JVC’s VHS (video home system) began to compete in 1977. Other formats, such as Sanyo’s V-Cord, jostled for power, but VHS ultimately came to dominate.
The joining of the words home and video was a genuine media revolution. Newman describes how executives hoped that markets for “movies, books, records, audio cassettes, adult courses, encyclopedias, business magazines and fairy tales” would be collapsed into one thing: videocassettes. Television, however, would remain separate: “the video audience had to be offered something that they weren’t already getting ‘for free’ over the airwaves,” explains Newman. Video was not just different to television: it was television's enemy.
The rhetoric around early home video was utopian in a way that we have heard again in conversations around the internet. Sony and Ampex had brought out extremely expensive home video products in the 1960s (Ampex’s cost $30,000), and although almost nobody actually bought them, journalists and critics wrote about them in a state of feverish excitement. Long before any home video system had enjoyed any market penetration, journalists hoped that “Television’s unfulfilled promise would finally be realized by video as a recording (not transmitting/receiving) medium,” as the “hegemony of the networks would be stopped and the viewer newly installed as master of his or her own leisure experience.”
Selling the feeling of autonomy to customers has been key to marketing screen products ever since, whether in the “you”-centric YouTube or in SelectaVision, RCA’s range of home video products. In this new era, the viewer took on a curatorial role, purchasing videocassettes of movies to watch at home (for hundreds of dollars, at first) and recording television as it was broadcast. Today, we still buy into that dream of total control—although given that the 1975 ad slogan for the Sony Betamax was “Watch Whatever Whenever,” the principle of “on-demand” now feels very relative indeed. (The acronym for that slogan, WWW, reads like a prophecy of the websites that would come to define our sense of “on-demand” today: www.netflix.com, www.bbc.co.uk/iplayer, and so on.)
Newman notes that home video’s selling-point—autonomy over what you’re watching—also characterised early video games. In the late 70s, “both videotape players and video game consoles were draining prime time audiences away from network programs.” Meanwhile, game consoles like the Magnavox Odyssey (which came with dice and a pack of cards!) were “routinely discussed in newspapers and magazines in terms of passivity and activity”: finally the viewer could “talk back” to the TV set. In this sense, videocassette technology represented a midway point between watching broadcast TV, understood as passive, and playing video games, understood as active. The video in video games was shorthand for participating with the screen.
In the early days of home video, demand exploded for video games and videocassette pornography. Until the end of the 1970s, adult films constituted half of all prerecorded video sales. Newman points out that VCR marketing had always been aimed at men, citing Ann Gray’s 1980s study of “male dominance of VCR programming and controls (though not of watching)” as an example of the gendered division of “household labour.” Women, the assumption went, weren’t competent to operate the controls. They certainly weren’t VHS pornography’s target market either.
It seems inevitable, then, that home video gaming would come to be seen as a masculine activity. The active nature of game participation also “reads” as masculine, in contrast to the stereotype that watching broadcast TV is feminine: the woman passively watches sitcoms and soap operas while her husband is at work, then he comes home and wins a game against that same screen. While not hammering the point, Newman cautiously suggests that VHS porn consumption, video game playing, and “film nerd” technophilia are all linked by new forms of 1970s masculinity.
Before video arrived to change the worlds of porn and gaming, it had already given birth to a new artform. Video art had, since the late 1960s, exploited the opportunities offered by products such as the Sony CV-2000 (1965) and DV-2400 (1967) to “talk back” to the TV set. David Antin’s 1975 essay “Video: The Distinctive Features of the Medium” positioned video art in opposition to commercial television’s mannered, heavily edited unrealism. Instead, video art would offer long, boring shots that imitated our actual day-to-day perception of reality, showing, by contrast, how phony television’s version of “reality” really was.
But when people started buying camcorders and making their own movies for fun, not just for art, the look and feel of reality onscreen changed. Covert videotaping in the ABSCAM case (loosely fictionalised in the film American Hustle) had been allowed as evidence in court in 1980 and subsequently broadcast on TV, a historic event for the visibility of the video verité aesthetic. Video cameras had been around for a long time. But the cheap, light camcorders with built-in microphones that hit the market in the mid-80s were instrumental in defining the new way reality looked onscreen. Jerky, hand-held, unprofessional: just as the unpolished written diction of Salinger reads as more “real” than F Scott Fitzgerald’s, the visual style of amateur videography came to represent truthfulness.
The rise of amateur movie-making was another opportunity for video to stake its claim as a democratising force. During the 1980s more and more non-professionals began to make films in their own backyards. The result, within a few years, was America’s Funniest Home Videos (and its British counterparts, such as You've Been Framed). There is something beautiful about our love of watching people simply falling over. It will probably endure for as long as we have bodies. But America’s Funniest Home Videos also undermined the notion of videotaping as a triumph for democratic art. The networks maintained total control of the money generated by a technology—videotaping—that was now supposed to be in the hands of the people. Film yourselves, the show says, and we’ll take it from there.
The democratisation of movie-making may have reached some kind of true form, however, in George Holliday’s 1991 recording of the police beating of Rodney King. Shot from his apartment window, Holliday’s footage was broadcast by LA news network KTLA and quickly spread across the globe, an instant classic of “sousveillance.” A majority-white jury acquitted three of the police officers of using excessive force on the African American suspect; the then-mayor of LA, Tom Bradley, responded that “the jury's verdict will not blind us to what we saw on that videotape.” The verdict and the video did not square with one another: as with Abraham Zapruder’s famous film of John F Kennedy’s assassination in 1963 (the same year as the first sports instant replay), people looking for the truth went to the footage.
From the footage of Rodney King to the fly-on-the-wall TV show Cops, Newman points out that the videos associated “with camcorders and citizen media production” gave us a new way of capturing reality. Editing footage made it look better but, it now seemed, less truthful. In other words, the look of amateurism (shaky and hand-held) rapidly became associated with authenticity: truth had a new aesthetic. Newman quotes a TV news producer in Tulsa, Oklahoma, who told Columbia Journalism Review in 1991 that footage shot by amateurs included in newscasts has “an unpolished quality that makes it seem more real.”
As time has passed and video-recording has changed, the “look” of truth has also changed. In 1999, a horror movie like The Blair Witch Project was scary because its lo-fi look made it seem real. In 2001, the September 11th terrorist attacks looked like a blockbuster movie. Camera phones weren’t yet ubiquitous, so the twin towers were filmed mainly by news-network cameras. But today the gap between lo-fi (amateur) video and high-definition (professional) video seems to have closed. In 2013, Russian dashboard cams filmed the Chelyabinsk meteor with such clarity that it was impossible to tell whether the footage was captured by amateurs or professionals. With the advent of the smartphone, everyone has a high-quality video camera—and player—in their pocket. But for those who still have fond memories of the rattling cassettes from Blockbusters, Newman's book is a fitting tribute. Rest in peace, VHS: you are outgrown but not forgotten.
Video Revolutions: On the History of a Medium is published by Columbia University Press and is available for £6.00