The History, Present, and Possible Future of Increased Resolution for Motion Imaging by Mark Schubin

July 15th, 2015 | No Comments | Posted in Download, Schubin Cafe

Presented on July 10, 2015 at the “International Symposium on Medical-Engineering Collaboration: Medicine Definitely Jumps Up with 8K,” organized by and presented at Nihon University, Tokyo.

Direct Link (26 MB / TRT 14:46):
The History, Present, and Possible Future of Increased Resolution for Motion Imaging by Mark Schubin

Technology Year in Review

February 18th, 2015 | No Comments | Posted in Download, Schubin Cafe, Today's Special
Annual Technology Year in Review recorded at the 2015 HPA Tech Retreat, Hyatt Regency Indian Wells, CA
February 11, 2015

Direct Link (13 MB / TRT 10:36): Technology Year in Review


HPA 2014 – Resolution, Frame Rate, and Dynamic Range [video]

March 12th, 2014 | No Comments | Posted in Download, Schubin Cafe, Today's Special

Mark Schubin’s Resolution, Frame Rate, and Dynamic Range presentation, delivered at the HPA Tech Retreat on February 20, 2014 (audio recorded later).

(Extended Version: Bang for the Buck: Data Rate vs. Perception in UHD Production by Mark Schubin at http://youtu.be/UG6q2xVkKU4)

Video (TRT 12:57)

The ‘Look’ of HFR (HPA, Feb. 18, 2013)

February 19th, 2013 | No Comments | Posted in Download, Today's Special

The ‘Look’ of HFR
(Supplement to the 2013 HPA Charles Poynton Seminar)

HPA, February 18, 2013

Video (TRT 12:20)

The Habit and “The Hobbit”

February 5th, 2013 | 2 Comments | Posted in Schubin Cafe

Here are a couple of questions to get you started: What is the image at left? And what is the sound of a telephone call?

I’ll offer some more information about the first one. It’s an “intertitle,” the sort of thing inserted into silent movies to help advance their plot.

This one happens to be from a pretty famous movie. Got any idea yet of which one? You’re likely to be familiar with it even if you never saw it. But the answer might be surprising.

Now, how about that telephone call? Bell Labs researcher and audio pioneer Harvey Fletcher wanted its sound to be unidentifiable as a phone call, i.e., just as good as being there. Today, depending on your mobile phone, you might still be able to identify some negative artifacts, but, in general, with contemporary technology, Fletcher’s dream has been achieved: a telephone call sounds pretty much like any other reproduction of an electronic audio signal. And that’s a problem.

When the kidnapper calls to demand ransom in a movie or TV thriller, the camera might offer a close-up of the person taking the call, but the kidnapper’s voice shouldn’t sound like it’s coming from the same room. So a voice filter is used, typically restricting the bandwidth of the sound to a range from roughly 300 Hz to 3 kHz as shown at the right in the Cisco white paper “Wideband Audio and IP Telephony” <http://bit.ly/116U1Mn>.
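For the technically inclined, here is a minimal sketch of such a voice filter in Python (my illustration, not any broadcaster’s actual code), assuming a 48-kHz source recording and using a SciPy Butterworth band-pass to approximate the 300 Hz-to-3 kHz telephone band:

```python
# A minimal "telephone voice" band-pass sketch. The 48 kHz sample rate
# and fourth-order Butterworth design are assumptions for illustration.
import numpy as np
from scipy.signal import butter, sosfilt

def telephone_filter(audio, sample_rate=48_000, low_hz=300.0, high_hz=3_000.0):
    """Band-limit an audio signal to the classic telephone voice band."""
    nyquist = sample_rate / 2.0  # band edges are normalized to Nyquist
    sos = butter(4, [low_hz / nyquist, high_hz / nyquist],
                 btype="bandpass", output="sos")
    return sosfilt(sos, audio)

# Example: narrow-band one second of white noise.
narrowband = telephone_filter(np.random.randn(48_000))
```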

If you’re familiar with sampling theory, you know that, to avoid spurious frequencies known as aliases, sampling must be done at a rate higher than twice the desired highest frequency, and the signal must be filtered to prevent anything higher than that highest desired frequency from entering the sampler. Filters are imperfect, so, if a telephone company wanted to sample 8,000 times per second, it would not be totally unreasonable for the system to pass little more than 3 kHz.
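To make that concrete, here is a small sketch (mine, assuming an ideal sampler with no anti-alias filter): a 5 kHz tone sampled 8,000 times per second masquerades as a 3 kHz alias, which is exactly what the input filter exists to prevent:

```python
# Aliasing demonstration, assuming an ideal (unfiltered) sampler: a 5 kHz
# tone sampled at 8,000 samples/s appears as its 3 kHz alias (8 kHz - 5 kHz).
import numpy as np

sample_rate = 8_000               # telephone-style sampling
tone_hz = 5_000                   # above the 4 kHz Nyquist limit
n = np.arange(sample_rate)        # one second of sample indices
samples = np.sin(2 * np.pi * tone_hz * n / sample_rate)

# The dominant frequency in the sampled signal is the alias, not the tone.
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=1 / sample_rate)
print(freqs[np.argmax(spectrum)])  # 3000.0 Hz, not 5000.0 Hz
```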

Digital transmission systems don’t care about filtering low frequencies, however, so why the 300 Hz low-frequency cutoff? It dates back to analog transmission systems, wherein different frequencies would be attenuated by different amounts, and an equalizer would restore them. The attenuation might be described as a certain number of decibels per decade. A decade, in this case, is a tenfold increase in frequency, as from 300 Hz to 3 kHz. Going down to 30 Hz from 300 would add another decade, doubling the equalization needed.
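As a rough worked example (mine, with an assumed slope, since real lines varied): at 6 dB per decade, the 300 Hz-to-3 kHz band needs about 6 dB of equalization, and extending it down to 30 Hz adds a second decade and doubles that:

```python
# Decade arithmetic for analog-line equalization. The 6 dB/decade slope
# is an assumed, illustrative figure; actual lines varied.
import math

def decades(f_low_hz, f_high_hz):
    """Number of tenfold frequency steps between two band edges."""
    return math.log10(f_high_hz / f_low_hz)

SLOPE_DB_PER_DECADE = 6.0  # assumed for illustration

print(decades(300, 3_000) * SLOPE_DB_PER_DECADE)  # 6.0 dB, one decade
print(decades(30, 3_000) * SLOPE_DB_PER_DECADE)   # 12.0 dB, two decades
```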

Today, in the era of digital transmission, going down to 30 or even 20 Hz would not be a problem, which is why people describe today’s real-world telephone calls in such terms as “sounding like you’re next to me.” But the sound of a telephone-call voice in a movie or on TV still harks back to an earlier era (just as a print ad might tell its viewer to “dial” a certain phone number in an era when it’s hard to find a dial-equipped phone outside a museum).

It’s not easy on a visual web page to provide examples of telephone call sounds, especially since I have no idea what your listening equipment is like. But here is another common example of a motion-image-media indicator that strays from reality: the binoculars mask.

If you use binoculars, you probably know you’re supposed to adjust their eye separation so that there’s one circular image, not the lazy eight shown at left. But, if there’s no binoculars mask effect, how is a viewer supposed to know that the scene is seen through binoculars?

Now, perhaps, we can consider frame rate. Though he wanted telephone calls to sound just like being there in person, Fletcher did the research that identified the 300 Hz-to-3 kHz range for speech intelligibility and identification. Are there physical parameters affecting the choice of frame rate? There is more than one.

One is typically called the fusion frequency, the frequency at which a sequence of individual pictures appears to be a motion picture. You can find your own fusion frequency with a common flip book; an 1886 version called a Kineograph is shown at right.

Flip through the pages slowly, and they are individual still pictures. Flip through them quickly, and they are a single motion picture.

Unfortunately, there is no single fusion frequency. It varies from person to person and with illumination, color, angle, and type of presentation.

The type of presentation becomes significant in another frame-rate variable: what’s commonly called the flicker frequency, the rate at which sources of illumination appear to be steady, rather than flickering.

Some of the earliest motion-picture systems took advantage of a fusion frequency generally lower than the flicker frequency. They presented motion pictures, but they flickered, thus an early nickname for movies: flickers or flicks.

One “solution” to the flicker problem was the use of a two-bladed shutter in the projector. A film image would be moved into place, the shutter would turn, the image would appear on screen, the shutter would turn again, the image would disappear, it would turn again, it would reappear, and it would turn again while a new image moved into place. The result was an illumination-repetition rate twice that of the frame rate, perhaps enough to achieve the flicker frequency, depending, again, on a number of viewing factors.

While the two-bladed (or, in some cases, three-bladed) shutter helped ameliorate flicker, it introduced a new artifact into motion presentation. A moving object would appear to move from one frame to another but to stall in mid-motion from one shutter opening to another. Clearly, that was a step away from reality, but, like a limited-bandwidth telephone call and a binoculars mask, it tended to indicate the look of a movie.

What rate is required? When Thomas Edison initially chose 46 frames per second (fps) for his Kinetoscope, he said it was because his research had shown that “the average human retina was capable of taking 45 or 46 photographs in a second and communicating them to the brain.” But the publication Electricity, in its June 6, 1891 issue, contrasted the Kinetoscope’s supposed 46 fps with the six-to-eight of Wordsworth Donisthorpe’s Kinesigraph: “Now, considering that the retina can retain an impression for 1/7 of a second, 8 photographs per second are sufficient for the purpose of reproduction and the remaining 38 are mere waste.”

Is there a “correct” frame rate? This week’s Super Bowl coverage made use of For-A’s FT-One cameras (above), which can shoot 4K images at up to 900 fps. But that was for replay analysis.

At the International Broadcasting Convention (IBC) in Amsterdam in 2008, the British Broadcasting Corporation (BBC) provided a demonstration in the European Broadcasting Union (EBU) “village” that showed how frame rates as high as 300 fps could be beneficial for real-time viewing. At left is a simulation of 50-fps (top) vs. 100-fps (bottom), showing a huge difference in dynamic resolution (detail in moving images).

Note that the stationary tracks and ties are equally sharp in both images. The moving train, however, is not. Other parts of the demonstration showed that high-definition resolution might appear no better than standard-definition for moving objects at common TV frame rates.

A clear case seemed to be made for frame rates higher than those normally used in television. Again, that was in 2008. In 2001, however, Kodak, Laser-Pacific, and Sony each won an engineering Emmy award for making 24-fps video possible: video at a lower frame rate than the ones normally used.

As the BBC/EBU demo at IBC clearly showed, 24-fps video has worse dynamic resolution than even normal TV frame rates, let alone higher ones. Yet 24-fps video has also been wildly successful. It provides a particular look, just as a binoculars mask does. In this case, the look contributes to a sensation that the sequence was shot on film. But why did movies end up at 24 fps? It’s neither Edison’s 46 nor Donisthorpe’s 8.

The figure is based on research but not research into any form of visual perception. Go back to the intertitle at the top of this column. Have you guessed the movie yet? It’s The Jazz Singer, the one that ushered in the age of sound movies, even though, as the intertitle shows, it, itself, was not an all-singing, all-talking movie.

Some say 24-fps was chosen as the minimum frame rate that would provide sufficient sound quality. But The Jazz Singer, like many other sound movies, used a sound-reproduction system, Vitaphone, unrelated to the film rate: phonograph disks. In the 1926 demo photo above, engineer Edward B. Craft holds one of the 16-inch-diameter disks. Their size and rotational speed (33-1/3 rpm, the first time that speed had been used) were carefully chosen for sound quality and capacity, but they could have been synchronized to a projector running at any particular speed.

That was the key. Sound movies did not require 24-fps, but they required a single, standardized speed. The choice of that speed fell to Stanley Watkins, an employee of Western Electric, which developed the Vitaphone process. Watkins diligently undertook research. According to Scott Eyman’s book The Speed of Sound (Simon & Schuster 1997), he explained the process in 1961:

“What happened was that we got together with Warners’ chief projectionist and asked him how fast they ran the film in theaters. He told us it went at 80 to 90 feet per minute in the best first-run houses and in the small ones anything from 100 feet up, according to how many shows they wanted to get in during the day. After a little thought, we settled on 90 feet a minute [24-fps for 35 mm film] as a reasonable compromise.”

That’s it. That’s where 24-fps came from: no visual or acoustic testing, no scientific calculation, just a conversation between one projectionist, one engineer, and, according to Watkins’s daughter Barbara Witemeyer in a 2000 paper (“The Sound of Silents”), Sam Warner (of Warner Bros.) and Walter Rich, president of Vitaphone. After Vitaphone and Warner Bros., Fox adopted the speed, and soon it was ubiquitous.
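The bracketed conversion in Watkins’s quotation is easy to verify: 35-mm film carries 16 frames per foot, so feet per minute convert directly to frames per second:

```python
# Checking the bracketed conversion in Watkins's quotation:
# 35 mm film carries 16 frames per foot.
FRAMES_PER_FOOT_35MM = 16

def feet_per_minute_to_fps(feet_per_minute):
    return feet_per_minute * FRAMES_PER_FOOT_35MM / 60

print(feet_per_minute_to_fps(90))  # 24.0 fps, the compromise Watkins chose
print(feet_per_minute_to_fps(80))  # ~21.3 fps, the slow end he was quoted
```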

Fluke or not, 24 fps came to symbolize the look of film, which is why 24-fps video is so popular. We have a habit of associating that rate with movies.

The Hobbit broke that habit. It is available in a 48-fps, so-called “HFR” (high-frame-rate) version. And its look has received some unusual reviews.

Some have complained of nausea. It’s conceivable that there is some artifact of the way The Hobbit has been projected in some theaters (in stereoscopic 3D) that triggers a queasiness response in some viewers, but it seems (to me) more likely that those viewers might be reacting to some overhead, spinning shots in the same way that viewers have reacted to roller-coaster shots in slower-frame-rate movies.

Others have complained of a news-like or video-like look that made it more difficult for them to suspend disbelief and get into the story. That’s certainly possible. If 24-fps contributes to the look of what we are in the habit of thinking of as a movie, then 48-fps is different.

Of course, we no longer watch flickering silent black-&-white movies with intertitles, projected at a rate faster than they were shot, either. Times change.

 

New Angles on 2D and 3D Images

May 25th, 2012 | 4 Comments | Posted in 3D Courses, Schubin Cafe

Shooting stereoscopic 3D has involved many parameters: magnification, interaxial distance, toe-in angle (which can be zero), image-sensor-to-lens-axis shift, etc. To all of those, must we now add shutter angle (or exposure time)? The answer seems to be yes.

Unlike other posts here, this one will not have many pictures.  As the saying goes, “You had to be there.” There, in this case, was the SMPTE Technology Summit on Cinema (TSC) at the National Association of Broadcasters (NAB) convention in Las Vegas last month.

The reason you had to be there was the extraordinary projection facilities that had been assembled. There was stereoscopic, higher-than-HDTV-resolution, high-frame-rate projection. There was even laser-illuminated projection, but that will be the subject of a different post.

The subject of this post is primarily the very last session of the SMPTE TSC, which was called “High Frame Rate Stereoscopic 3D,” moderated by SMPTE engineering vice president (and senior vice president of technology of Warner Bros. Technical Operations) Wendy Aylsworth. It featured Marty Banks of the Visual Space Perception Laboratory at the University of California – Berkeley, Phil Oatley of Park Road Post, Nick Mitchell of Technicolor Digital Cinema, and Siegfried Foessel of Fraunhofer IIS.

You might not be familiar with Park Road Post. It’s located in New Zealand — a very particular part of New Zealand, within walking distance of Weta Digital, Weta Workshop, and Stone Street Studios. If that suggests a connection to the Lord of the Rings trilogy and other movies, it’s an accurate impression. So, when Peter Jackson chose to use a high frame rate for The Hobbit, Park Road Post arranged to demonstrate multiple frame rates. Because higher frame rates mean less exposure time per frame, they also arranged to test different shutter angles.

Video engineers are accustomed to discussing exposure times in terms of, well, times: 17 milliseconds, 10 milliseconds, etc. Percentages of the full-frame time (e.g., 50% shutter) and equivalent frame rates (e.g., 90 frames per second or 90 fps) are also used. Cinematographers have usually expressed exposure times in terms of shutter angles.

Motion-picture film cameras (and some electronic cameras) have rotating shutters. The shutters need to be closed while the film moves and a new frame is placed into position. If the rotating shutter disk is a semicircle, that’s said to be a 180-degree shutter, exposing the film for half of the frame time. By adjusting a movable portion of the disk, smaller openings (120-degree, 90-degree, etc.) may be easily achieved, as shown above <http://en.wikipedia.org/wiki/File:ShutterAngle.png>.
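For those who want to convert between the conventions, the relationship is simple: exposure time equals the shutter angle’s fraction of a full circle divided by the frame rate. A minimal sketch (mine):

```python
# Converting between exposure-time and shutter-angle conventions:
# exposure time = (shutter angle / 360) / frame rate.

def time_from_angle(frame_rate_fps, shutter_angle_deg):
    """Exposure time in seconds for a rotating-shutter camera."""
    return (shutter_angle_deg / 360.0) / frame_rate_fps

def angle_from_time(frame_rate_fps, exposure_s):
    """Shutter angle equivalent to a given exposure time."""
    return exposure_s * frame_rate_fps * 360.0

print(time_from_angle(24, 180))    # ~0.0208 s: a 180-degree shutter at 24 fps
print(angle_from_time(24, 0.010))  # 86.4 degrees: a 10 ms exposure at 24 fps
```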

Certain things have long been known about shutters: The longer they are open, the more light gets to the film. For any particular film stock, exposure can be adjusted with shutter, iris, and optical filtering (and various forms of processing can also have an effect, and electronic cameras can have their gains adjusted). Shorter shutter times provide sharper individual frames when there is motion, but they also tend to portray the motion in a jerkier fashion. And there are specific shutter times that can be used to minimize the flicker of certain types of lighting or video displays.

That was what was commonly known about shutters before the NAB SMPTE TSC. And then came that last session.

Oatley, Park Road Post’s head of technology, showed some tests that had been shot stereoscopically at various frame rates and shutter angles. The scene was a sword fight, with flowing water and bright lights in the image. Some audience members noticed what appeared to be a motion-distortion problem. The swords seemed to bend. Oatley explained that the swords did bend. They were toy swords.

That left the real differences. Sequences were shown that were shot at different frame rates and at different shutter angles. As might be expected, higher frame rates seemed to make the images somewhat more “video” like (there are many characteristics of what might be called the look of traditional film-based motion pictures, and one of those is probably the 24-fps frame rate).

At each frame rate, however, the change from a 270-degree shutter angle to 360-degree made the pictures look much more video like. The effect appeared greater than that of increasing frame rate, and it occurred at all of the frame rates.

Foessel, head of the Fraunhofer Institute’s department of moving picture technologies, also showed the effects of different frame rates and shutter angles, but they were created differently. A single sequence at a boxing gym was shot with a pair of ARRI Alexa cameras in a Stereotec mirror rig, time synchronized, at 120 fps with a 356-degree shutter.

When the first of every group of five frames was isolated, the resulting sequence was the equivalent of material shot at 24 fps with a 71.2-degree shutter (the presentation called it 72-degree). If the first three of every five frames were combined, the result was roughly the equivalent of material shot at 24 fps with a 213.6-degree shutter (the presentation called it 216-degree). It’s roughly equivalent because there are two tiny moments of blackness that wouldn’t have been there in actual shooting with the larger shutter angle.
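Those equivalences are simple proportions, easy to verify: the open time per source frame is unchanged, so keeping one frame in five and playing at 24 fps yields 356 ÷ 5 = 71.2 degrees, and keeping three of five yields three times that. A quick check (my sketch):

```python
# Verifying the shutter-angle equivalences in Foessel's demonstration:
# 120 fps source with a 356-degree shutter, resampled to 24 fps by
# keeping the first k of every 5 frames.
SOURCE_FPS = 120
SOURCE_ANGLE_DEG = 356
TARGET_FPS = 24

def equivalent_angle(frames_kept):
    open_time_s = frames_kept * (SOURCE_ANGLE_DEG / 360.0) / SOURCE_FPS
    return open_time_s * TARGET_FPS * 360.0  # angle at the target rate

print(equivalent_angle(1))  # 71.2 degrees (presented as "72-degree")
print(equivalent_angle(3))  # 213.6 degrees (presented as "216-degree")
```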

As shown above, the expected effects of sharpness and motion judder were seen in the differently shuttered examples. But there was another effect. The stereoscopic depth appeared to be reduced in the larger-shutter-angle presentation.

Foessel had another set of demonstrations, as shown above. Both showed the equivalent of 60-fps with roughly a 180-degree shutter angle, but in one set the left- and right-eye views were time coincident, and in the other they alternated. In the alternating one, the boxers’ moving arms had a semitransparent, ghostly quality.

The SMPTE TSC wasn’t the only place where the effects of angles on stereoscopic 3D could be seen at NAB 2012. Another was at any of the many glasses-free displays. All of them had a relatively restricted viewing angle, though that angle was always far greater than the viewing angle of the only true live holographic video system ever shown at NAB.

That system was shown at NAB 2009 by NICT, Japan’s National Institute of Information and Communications Technology. It was less than an inch across, had an extremely restricted viewing angle, and, as can be seen above, was not exactly of entertainment quality. It also required an optics lab’s equipment for both shooting and display.

At NAB 2012, NICT was back. As at NAB 2009, they showed a multi-sensory apparatus, but this time it added 3D sound and olfactory stimulus. And, as at NAB 2009, they offered a means to view 3D without glasses.

This time, however, as shown above, instead of being poor quality, it was full HD; instead of being less than an inch, it was 200-inch; and, instead of having the show’s most-restricted no-glasses-3D viewing angle, it had the broadest. It also had something no other glasses-free 3D display at the show offered: the ability to look around objects by changing viewing angle, as shown below.

There was another big difference between the 2009 live hologram and the 2012 200-inch glasses-free system. There were no granite optical tables, first-surface mirrors, or lasers involved. The technology is so simple, it was explained in a single diagram (below).

Granted, aligning 200 projectors is not exactly trivial, and the rear-projection space required currently precludes installation in most homes. Despite its prototypical nature, however, NICT’s 200-inch, glasses-free, look-around-objects 3D system could be installed at a venue like a shopping center or sports arena today.

Of course, there is something else to be considered. The images shown were computer graphics. There doesn’t seem to be a 200-view camera rig yet.

Smellyvision and Associates

February 25th, 2012 | No Comments | Posted in 3D Courses, Schubin Cafe

What is reality? And is it something we want to get closer to? Take a look at the picture of a cat above, as printed on a package of Bell Rock Growers’ Pet Greens® Treats <http://www.bellrockgrowers.com/cattreats.html>. Does it look unreal? Distorted? Is it?

At this month’s HPA Tech Retreat in Indian Wells, California (shown above in a photo by Peter Putman as part of his coverage of the event <http://www.hdtvexpert.com/?p=1804>), there was much talk about getting closer to reality by using images with higher resolution, higher frame rate, greater dynamic range, larger color gamut, stereoscopic sensation, and even surround vision. The last was based on a demonstration from C360 Technologies. Another demo featured Barco’s Auro-3D enveloping sound technology. In the main program, vision scientist Jenny Read explained how stereoscopic 3D in a cinema auditorium can’t possibly work right and why we think it does. And then there were the quizzes.

All of them related to the introduction of image and sound technologies at various World’s Fairs. Although the dates ranged from 1851 to the late 20th century, more than one quiz related to technologies introduced at the 1900 Paris Exposition. It stands to reason.

At that one event, people could attend sync-sound movies and watch large-format high-resolution movies on a giant-screen. They could also experience reality simulations: an “ocean voyage” on a motion-platform with visual effects called the Mareorama (depicted at left), a “train trip” on the Trans-Siberian Railway using spatial motion parallax (with one image belt moving at 1000 feet per minute!), and a “flight above the city” in the surround-projection-based Cinéorama (shown below, with synchronized projectors under the audience). At the same fair, they could also hear sound broadcasting of music (with no radios required) and even try out the newly coined word television.

Well over a century later, we still have sound broadcasting (though receivers are now required), we still watch sync-sound movies, and we still use the word television. There are still large-format large-screen, surround vision, and moving-platform experiences, but they tend to be at, well, World’s Fairs, museums, and other special venues.

There was a time when at least 70-mm film was used as a selling point for some Hollywood movies and the theaters where they were shown. And then it wasn’t. The audience’s desire for quality didn’t seem to justify the additional cost. The digital-cinema era started at lower-than-home-HD resolution but is now moving towards “4K,” more than twice the linear resolution of the best HD (the 4K effects and workflows of The Girl with the Dragon Tattoo were discussed at the HPA Tech Retreat).

Back in the publicized 70-mm film era, special-effects wizard, inventor, and director Douglas Trumbull created a system for increasing temporal resolution in the same way that 70-mm offered greater spatial resolution than 35-mm film. It was called Showscan, with 60 frames per second (fps) instead of 24.

The results were stunning, with a much greater sensation of reality. But not everyone was convinced it should be used universally. In the August 1994 issue of American Cinematographer, Bob Fisher and Marji Rhea interviewed a director about his feelings about the process after viewing Trumbull’s 1989 short, Leonardo’s Dream.

“After that film was completed, I drew a very distinct conclusion that the Showscan process is too vivid and life-like for a traditional fiction film. It becomes invasive. I decided that, for conventional movies, it’s best to stay with 24 frames per second. It keeps the image under the proscenium arch. That’s important, because most of the audience wants to be non-participating voyeurs.”

Who was that mystery director who decided 24-fps is better for traditional movies than 60-fps? It was the director of the major features Brainstorm and Silent Running. It was Douglas Trumbull.

As perhaps the greatest proponent of high-frame-rate shooting today, Trumbull was more recently asked about his 1994 comments. He responded that a director might still seek a more-traditional look for storytelling, but by shooting at a higher frame rate that option will remain open, and the increased spatial detail offered by a higher frame rate will also be an option.

That increased spatial detail is shown at left in a BBC/EBU simulation of 50-fps (top) and 100-fps (bottom) images based on 300-fps shooting. Note that the tracks and ties are equally sharp in both images; only the moving train changes. The images may be found in the September 2008 BBC White Paper on “High Frame-Rate Television,” available here <http://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP169.pdf>.

Trumbull is a fan of using higher frame rates, especially for stereoscopic 3D (his Leonardo’s Dream was stereoscopic). Other directors, such as James Cameron and Peter Jackson, have embraced that approach. And at the SMPTE International Conference on Stereoscopic 3D in June, Martin Banks of UC-Berkeley’s Visual Space Perception Laboratory explained strobing effects that can occur in S3D viewing.

A hit of the 2012 HPA Tech Retreat, however, in both the main program and the demo area, was the Tessive Time Filter, a mechanism for eliminating (or at least greatly reducing) strobing effects without changing frame rate. It applies appropriate temporal filtering in front of the lens — essentially any lens. Because the filtering is temporal, it does not affect the sharpness of items that are stationary relative to the image sensor. Above right is an image illustrating a “compensator” plug-in for Apple’s Final Cut Pro “to achieve the best possible representation of time in your footage” (when the green word “Compensated” appears at the bottom right, the compensator is on <http://www.tessive.com/>).

That’s frame rate and resolution. Visual dynamic range (from brightest to darkest) and color gamut were also topics at the 2012 HPA Tech Retreat, primarily in Charles Poynton’s seminar on the physics of imaging displays and presentation on high-dynamic-range imaging, in a panel discussion on laser projection, and in Dolby’s high-dynamic-range monitoring demonstrations.

Poynton noted a conflict between displays that can “create” their own extended ranges and gamuts and the intentions of directors. He also noted that in medical imaging, where gray scale and color can be critical, there are standards that don’t exist in consumer television. But that doesn’t mean medical imaging is closer to reality. In fact, it might be nice for a tumor otherwise invisible to show up very obviously, like a clown’s red nose.

Above left is another scientific image, the National Oceanic and Atmospheric Administration’s satellite image of cloud cover over the U.S. this morning, at a very clear time. Rest assured that the air did not look green, yellow, and brown at the time. Sometimes reality is not desirable.

Consider the cat at the top of this post. Its unusual look is intentional, something to grab a shopper’s attention. But it’s actually not unrealistic.

Try holding your hand about a foot in front of your face and note its apparent size. Now move it two feet away. It looks smaller, but not half the size. Yet the “real” image of the hand on your retina is half the size.
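The geometry is easy to check: the retinal image scales with the visual angle, 2·arctan(size ÷ (2 × distance)), which roughly halves when the viewing distance doubles. A quick sketch (the hand size is an assumption):

```python
# Visual-angle arithmetic behind the hand experiment. The hand size is
# an assumed, illustrative figure.
import math

def visual_angle_deg(object_size, distance):
    """Visual angle subtended by an object, in degrees (same length units)."""
    return math.degrees(2 * math.atan(object_size / (2 * distance)))

HAND_SIZE_FT = 0.6  # assumed: a hand span of roughly seven inches

print(visual_angle_deg(HAND_SIZE_FT, 1.0))  # ~33.4 degrees at one foot
print(visual_angle_deg(HAND_SIZE_FT, 2.0))  # ~17.1 degrees at two feet
```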

Reality is even more complex. We track different moving objects at different times, changing what looks sharp or blurry. We focus on objects at different depths in a scene, unlike a camera (regarding stereoscopic 3D perception, Read noted at the HPA Tech Retreat that, although a generation that grows up with S3D imagery might not experience today’s S3D viewing difficulties, neither might they find S3D exciting). We can see 360 degrees in any direction (by moving our heads and bodies, if necessary). We can also hear sounds coming from any direction. And then there are our other senses.

At the 2010 International Broadcasting Convention in Amsterdam, the Korean Electronics and Telecommunications Research Institute demonstrated what they called “4D TV” (diagram above). When there was a fire on screen, viewers felt heat. When there was the appearance of speed on screen, viewers felt the rush of air across their faces. During an episode reminiscent of a news event in which an athlete was struck, viewers felt a blow on their legs. And there were also scents.

“There may come a time when we shall have ‘smellyvision’ and ‘tastyvision’. When we are able to broadcast so that all the senses are catered for, we shall live in a world which no one has yet dreamt about.”

That quotation by Archibald Montgomery Low appeared in the “Radio Mirror” of the (London) Daily News on December 30, 1926. Much more recently (June 14 of last year), the Samsung Advanced Institute of Technology and the University of California – San Diego’s Jacobs School of Engineering jointly announced the development of something that might sit on the back of a TV set and generate “thousands of odors” on command. But that raises the reality issue, again. Do we really want to smell what the sign above left depicts?

Archibald Low was an interesting character. He was inducted posthumously into the International Space Hall of Fame as the “father of radio guidance systems” and was one of the founders and presidents of the British Interplanetary Society, but he was also (among many other posts and appointments) fellow and president of the British Institute of Radio Engineers, fellow of the Chemical Society, fellow of the Geographical Society, and chair of the Royal Automobile Club’s Motor Cycle Committee (he built and arranged the demonstration of a rocket-powered motorcycle, above right).

Besides that motorcycle, he also developed drawing tools, a well-selling whistling egg boiler, and what was probably the first pilotless drone aircraft. But two other aspects of Low’s long and varied career might be worth considering.

In 1914, he demonstrated, first to the Institute of Automobile Engineers and later at Selfridge’s Department Store, something he called “televista” but probably better described in the title of his presentation, “Seeing by Wireless.” And, in a 1937 book, he wrote, “The telephone may develop to a stage where it is unnecessary to enter a special call-box. We shall think no more of telephoning to our office from our cars or railway-carriages than we do today of telephoning from our homes.” So he wasn’t too bad at predictions.

“Smellyvision”? Who knows? But, if we’re lucky, it won’t bring us any closer to reality.
