The technology of videography advances inexorably, but those advances don’t eliminate the need for people.
Question: How many videographers does it take to screw in a widescreen, interactive, digital, component, high-definition, MPEG-4, streaming light bulb? Answer: At least as many as it took to screw in an ordinary analog light bulb.
This month, videographers from all over the globe come to Las Vegas to attend the world’s largest exhibition of advanced audio and video technology, the annual convention of the National Association of Broadcasters (NAB). Those looking for smaller, faster, better, less expensive products are not disappointed. Those looking to have machines replace employees, on the other hand, may be.
A labor-saving concept called centralcasting was the talk of the 2001 NAB show. It involved a broadcast station-group owner centralizing master-control facilities for multiple stations in a single location, linked to the others. The idea was that one control-room operator could handle multiple stations as easily as one station, eliminating the need for additional master-control technicians.
For certain operations, centralcasting seems to work, but the jury is still out about what is lost along with the missing labor. A severe storm could make satellite reception unreliable, and a washed-out bridge could take a fiber-optic link with it. What then?
Even before centralcasting, the local cable-television news service New York One (NY1) made news by unveiling its one-person news crews. A reporter would set up a camera on a tripod, stop a passer-by to stand in for a moment so the lens could be focused, and then, holding a microphone, record a report. It’s a testament to New Yorkers that the biggest NY1 news story wasn’t the theft of all of its cameras.
Unfortunately, security is not the only function an additional crew member might serve. If the reporter is standing in front of a fire, and the building begins to collapse, there is no camera operator to pan to the action. Money is saved; important news footage is lost. It’s a business decision. If enough money is saved, maybe additional reporters can be hired, and they might get more shots than are lost.
Centralcasting and one-person news crews were made possible by technological advances. Without computer-assisted playback systems and advanced satellite and fiber-optic transmission systems, there could be no centralcasting. Without small, lightweight, sensitive camcorders, a one-person news crew would spend most of the day simply unloading and loading a vehicle. But centralcasting and one-person news crews are more policy decisions than technological destiny.
Having been around much longer than television, moviemaking technology has gone through even more advances. Today, feature movies may be shot on tiny consumer camcorders and edited on laptop computers. But, as the credits at the end of almost any modern movie indicate, technological advancement has not reduced the labor force involved in typical moviemaking.
A camera, no matter how small and lightweight, needs something to hold it. That something could be a steady videographer or an inorganic mount of some sort.
On a feature-movie set, one or more grips are usually involved in mounting the camera and moving the mount (such as a dolly or crane). Another person handles the camera’s pan and tilt functions, and yet another deals with focus. Could one person do it all? Certainly! That and more.
Consider a videographer operating a camera on the end of a jib arm. That one person can deal with the azimuth and elevation of the arm, the pan and tilt of the camera, the zoom and focus of the lens, and, if the jib happens to be mounted on, say, a pedestal, the back-and-forth and side-to-side motion of the pedestal and perhaps even the up-and-down motion of the pedestal’s column — nine degrees of freedom in all.
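As a quick sanity check on that tally, the nine axes can be listed and counted; a minimal sketch (the grouping is just illustrative):

```python
# Tally the degrees of freedom a single jib-arm operator might control.
# Axis names follow the ones listed in the text; the grouping is illustrative.
degrees_of_freedom = {
    "jib arm": ["azimuth", "elevation"],
    "camera head": ["pan", "tilt"],
    "lens": ["zoom", "focus"],
    "pedestal": ["back-and-forth", "side-to-side", "column up-and-down"],
}

total = sum(len(axes) for axes in degrees_of_freedom.values())
print(total)  # 2 + 2 + 2 + 3 = 9
```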
On the other hand, one person could deal with just the arm movement, another with just the mount movement (in which case it would more likely be a dolly than a pedestal), another with just pan and tilt, and another with zoom and focus. Finances aside, it’s difficult to say which is better. It depends on how talented the individual, how well-coordinated the team, and what’s supposed to happen in the shot. It’s difficult to precisely change focus from one character to another while simultaneously zooming, panning, tilting, and moving.
The director of photography (DP) on a feature might order such a focus “pull” but never physically operate a camera. The DP is responsible for the look of the imagery, however, and that means lighting.
HDTV, it was once thought, would reduce the number of cameras required for a television production (see “Up Close and Personal,” page TK). And electronic cinematography, thanks to a perceived increase in sensitivity over film (perceived by its proponents, in any case), would reduce the amount of lighting required, and, therefore, the size of the lighting crew.
There could be something to that argument. If one is shooting in a stadium at night, then increased sensitivity could significantly reduce the number of lighting instruments required. In the same stadium during the day, however, or in its locker room, the sensitivity/crew-size argument doesn’t work.
During the day, the sun will shine on the scene. If that’s the desired look, no additional lighting is required, regardless of the camera sensitivity. If lighting is required, it will be to balance the light from the sun. Again, the camera sensitivity doesn’t matter. The amount of light required is based only on the sun’s output. The more-sensitive camera might need a neutral-density filter to achieve the desired depth of field, but it can’t get away with less lighting.
In the locker room, a more sensitive camera can make do with a lower light level. But a lower light level is not necessarily less lighting.
There might be a key light, some back lights, and some modeling lights to get the desired effect. An insensitive system might require thousand-watt lamps; a system twice as sensitive could make do with 500-watt lamps. But the more-sensitive system will still need the same care to position the key light, back lights, and modeling lights. If a generator is providing power, it could be smaller for the more-sensitive system, but it will still have to be there, which means a driver and operator.
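The arithmetic behind the wattage figures above, and the neutral-density filter mentioned earlier, can be sketched in a few lines. This assumes the simple doubling of sensitivity used in the example; the one-stop-per-doubling and 0.3-density-per-stop relationships are standard photographic conventions:

```python
import math

def stops_gained(sensitivity_ratio):
    """Extra stops of exposure from a more sensitive camera."""
    return math.log2(sensitivity_ratio)

# A camera twice as sensitive gains one stop...
ratio = 2.0
stops = stops_gained(ratio)   # 1.0

# ...so each lamp can be half the wattage for the same exposure...
lamp_watts = 1000 / ratio     # 500.0

# ...or, in daylight, the extra stop is absorbed by a neutral-density
# filter (density 0.3 per stop, since 10**-0.3 is roughly 1/2 transmission).
nd_density = 0.3 * stops      # 0.3

print(stops, lamp_watts, nd_density)
```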
The latest long lenses with built-in image stabilizers allow camera operators to be farther from what they’re shooting. Some of those lenses offer an acceptance angle of just a fraction of a degree. One camera, from one position, can pick out many faces for close-ups, with no need for dollies and grips.
Indeed, if that’s the desired look, the new long lenses might reduce crew size. But the look of a close-up shot with a long lens is very different from that of a close-up shot up close.
If a camera is located 100 feet away from someone, and another character is ten feet behind the first, then the shot will show the second character at almost the same size as the first. There will not appear to be much space between them.
If the camera moves to just ten feet away, with a wider lens focal length, the second character will now appear to be only half the size of the first. There will be an obvious space between them. The close-up will look completely different. Moving a camera looks different from zooming a lens.
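The perspective difference is simple geometry: on-screen size falls off as the inverse of distance from the camera, regardless of focal length. A rough sketch of the two setups described above (distances in feet):

```python
def relative_size(camera_to_near, separation):
    """Apparent size of the far character relative to the near one.

    Image size is proportional to 1/distance from the camera, so the
    ratio is simply near_distance / far_distance.
    """
    far = camera_to_near + separation
    return camera_to_near / far

# Long lens, camera 100 ft from the first character, second 10 ft behind:
print(round(relative_size(100, 10), 2))  # 0.91 -- almost the same size

# Wide lens, camera just 10 ft away, same 10-ft separation:
print(round(relative_size(10, 10), 2))   # 0.5 -- half the size
```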
It’s not just cameras, lenses, mounts, and lighting. Each year, it seems, microphones get smaller. Sound crews, however, do not. Studio shows still use microphone boom operators standing on perambulators pushed around by other crew members.
Could new, advanced, tiny, wireless microphones be used instead? Perhaps. As with the long lenses versus the dolly moves, however, the results will be different, especially when the voices of characters standing in close proximity are picked up by one another’s microphones.
Can one audio person deal with getting microphones in position, mixing them, recording them, and dealing with intercom issues? Perhaps. It depends on the production.
If microphone wires for body mics are to be hidden beneath clothes, a production might require at least two people from the audio department just to deal separately with naked female and male torsos. Complicated shows often have at least one person dedicated to nothing but the intercom system. And some of the job descriptions at the ends of the intercom wires might seem strange, too.
The intercom section of the equipment designed for the Live From Lincoln Center series on PBS has a connector labeled Pusher. That’s neither a person in charge of drugs nor someone who moves a microphone-boom perambulator around. It’s a stage manager sometimes used exclusively to (politely) push the conductor into the orchestra pit at the beginning of a ballet or opera production.
The pusher had no other function, but the job was crucial to the show’s fitting into its time slot, and it couldn’t be handled by just anyone. In addition to the people skills required to move the conductor firmly but gently, the pusher had to pluck from a steady stream of commands on the intercom the one instruction to begin the push.
Live From Lincoln Center, of course, is a rarity — a truly live transmission of a performance from a theatrical auditorium. Most videography these days is recorded. And technological advances allow a single person to edit a show at home — even on a lap.
Nevertheless, the trend is to have more people involved in the post-production process rather than fewer. In fact, some job descriptions didn’t exist until certain forms of technology made them possible.
When Videography, celebrating its 26th anniversary this month, long ago considered the field of video graphics, it consisted largely of character generators, cameras on stands, and some rare computer systems that allowed pictures to be displayed. When one of the first advanced video-manipulation systems was introduced, it was said that the operators needed strong familiarity with mathematical equations.
Today, graphics departments are among the largest at video production facilities. And animators need no longer seek employment only at Disney and Warner Bros.
Film-processing laboratories have long had a position called “color timer,” a person who adjusts the character of the light hitting the film to achieve the desired coloring. When “timed” film was transferred to video, some people made the assumption that, since the color had already been adjusted, there was no need for any further work. Today, of course, video colorists are among the most highly regarded of post-production personnel — but their jobs didn’t really exist until the introduction of the electronic color corrector.
More recently, some facilities have been adding a new job description: compressionist. A compressionist adjusts the many parameters of a video bit-rate-reduction system (a digital compression system) to allocate capacity to make the best possible DVD or digital-cinema master.
Digital television, whether delivered by terrestrial broadcasts, digital cable television, digital broadcast satellites, or even some form of Internet connection, involves relatively heavy compression. An uncompressed, high-quality, component-digital, standard-definition signal is usually said to occupy 270 million bits per second (270 Mbps), although some of that is devoted to portions of the signal never seen. For transmission via one of the digital broadcast media, it might be reduced to just 3 Mbps; on the Internet, it might be only a fraction of that.
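The compression ratios implied by those figures are easy to work out. A quick sketch, using the 270 Mbps and 3 Mbps numbers from the text (the 0.5 Mbps Internet rate is an assumed stand-in for “a fraction of that”):

```python
def compression_ratio(uncompressed_mbps, compressed_mbps):
    """How many times the bit rate is reduced."""
    return uncompressed_mbps / compressed_mbps

sdi = 270.0      # uncompressed component-digital SD, Mbps
broadcast = 3.0  # a digital-broadcast channel, Mbps

print(compression_ratio(sdi, broadcast))  # 90.0 -- a 90:1 reduction

# A hypothetical 0.5 Mbps Internet stream would be 540:1.
print(compression_ratio(sdi, 0.5))  # 540.0
```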
High-definition television (HDTV) uses more bits per second. It also uses different colorimetry from standard-definition television. Some HDTV is shot at 24 frames per second, some at 30, some at 60.
Ordinary television has long used artificial edge enhancement to make up for the softness that its pictures could otherwise seem to have. HDTV needs less enhancement, if any. Ordinary television is four units wide to three high; HDTV is 16:9.
So, in an age in which HDTV and ordinary television coexist, there are conversions between HDTV colorimetry (Rec. 709) and ordinary (Rec. 601), between 16:9 and 4:3, between unenhanced images and enhanced, and between various frame rates, not to mention the difference between HDTV’s fine detail and ordinary TV’s lack thereof. Some have suggested that those differences beg for yet another new video job description: conversionist.
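A conversionist’s arithmetic is concrete. Two of the conversions named above can be sketched: the luma weights published in Rec. 601 and Rec. 709 differ (a full conversion is a matrix operation on Y'CbCr, but the weights show why one is needed at all), and fitting a 16:9 image into a 4:3 frame means letterboxing. The 480-line frame here is just an illustrative figure:

```python
# Luma (Y') weights published in the two standards.
REC601 = (0.299, 0.587, 0.114)
REC709 = (0.2126, 0.7152, 0.0722)

def luma(rgb, weights):
    return sum(c * w for c, w in zip(rgb, weights))

# The same saturated green carries noticeably different luma under the
# two colorimetries -- hence the need for conversion.
green = (0.0, 1.0, 0.0)
print(round(luma(green, REC601), 3))  # 0.587
print(round(luma(green, REC709), 3))  # 0.715

def letterbox_height(frame_height, source_aspect=16/9, display_aspect=4/3):
    """Active-picture height when a 16:9 image is letterboxed in 4:3."""
    return round(frame_height * display_aspect / source_aspect)

print(letterbox_height(480))  # 360 -- 60 black lines top and bottom
```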
Not everyone will need a conversionist. Not everyone currently needs a compressionist or a colorist. Few need mic-boom perambulator pushers; fewer still need conductor pushers. Is it possible for a lone videographer to get away without grips, gaffers, video controllers, technical directors, and an audio crew? Absolutely!
Just as it’s been possible for more than a hundred years for one person with a film camera and some splicing equipment to create a hit movie, it’s certainly possible for a videographer with a pocket-sized camcorder and a laptop editing system to create — er — a hit movie (or television show). But, although the new technology can help bring out talent, it doesn’t create it.
Shakespeare wrote with quill pen and ink. The fountain pen, the ballpoint, and the word processor haven’t significantly improved writing over the course of the last 400 years. Some say that advanced motion-image technology hasn’t made movies any better than, say, Hitchcock’s, or television shows any better than I Love Lucy.
Of course, Shakespeare, Hitchcock, and Lucy were great practitioners of their arts, but their greatness is irrelevant to the question of whether, say, centralcasting as a labor-saving tool is always a good idea. For that, Lincoln’s words might be most instructive:
“You may pool all of the people some of the time; you can even pool some of the people all of the time; but you can’t pool all of the people all of the time.”
Up Close and Personal
The theory was that HDTV would save money thanks to its added detail. Fewer cameras would be required, because each could offer as much information as five or six ordinary cameras. There could be less editing, because viewers could pick out whatever they wanted to see from the large, crisp image — do-it-yourself close-ups.
Perhaps when HDTV cameras get used by security guards, that will be the case. It hasn’t been for television programming.
Even giant-screen movies shot with large-format film cameras have had close-ups. In 2001: A Space Odyssey, a single extreme close-up of an eye sometimes fills the screen. Why? Because Stanley Kubrick, the director, wanted it to.
Could movie viewers have simply picked out the eye from a larger scene? Certainly! And perhaps some of them would have. But Kubrick wanted to call attention to the eye. So he used close-ups and rapid editing, even though he was using a medium capable of considerably more detail than even HDTV.
For HDTV to eliminate the “need” for close-ups and rapid cutting, it will have to eliminate something else first: the director. That’s not a suggestion.
Spike When the Irony Was Hot
The last surviving member of the BBC’s Goon Show, Spike Milligan, died earlier this year. Without the Goons there might never have been Firesign Theater or Monty Python’s Flying Circus, to name just two groups they influenced.
Although many fans would probably swear they saw extraordinary imagery whenever the show was broadcast, The Goon Show was aired only on radio. Nevertheless, the Goons were well versed in the labor-saving aspects of advanced videographic technology.
Here’s a tiny segment of a show broadcast on March 7, 1957, written by Milligan and Larry Stephens.
“Bluebottle: Captain, this machine can do the work of two men.
“Seagoon: Well, let’s see it.
“Bluebottle: Alright, but you’ll have to help us, ’cause it takes three men to work it.”