The announcement was made by Chris Cary, CEO of 3D Visual Enterprises, the new UK 3D technology start-up and parent company of the camera's developer, Meduza Systems. The news follows on the heels of Meduza's debut in April at the National Association of Broadcasters (NAB) show.
The Meduza digital imaging system, designed to set a new benchmark for stereoscopic 3D image capture, will shoot the final Space Shuttle launch in 4K 3D and in 2K high speed. The system can fully synchronize any number of cameras without the use of cabling.
"We are very excited to have the opportunity to work with NASA," says Cary. "This opportunity exemplifies the Meduza's versatility and flexibility and supports our interest in developing many valuable and critical applications for the camera in industries outside film and television."
With its modular components, the Meduza can be set up in minutes and offers interchangeable lenses, precise remote-controlled variable inter-axial (the distance between the lenses), and precise remote-controlled convergence — features absent from Sony and Panasonic models. It is a single camera, with one set of electronics and one set of controls, that drives two imaging sensors simultaneously.
Meduza's modular design suits all types of filming, in fields ranging from aerospace and medicine to military and oil exploration. "3D is capable of providing a wide range of new qualities to productions," says Cary. "Imagine what Graham Norton could do with an audience in 3D."
The Meduza lets filmmakers shoot in the native 4:3 format at beyond 4K; content is acquired at 4096 x 3072 pixels and covers everything from 15/70mm giant screens to general theatrical screens, as well as S3D television viewing.
Films currently produced in 3D are generally shot with two cameras linked together with stereoscopic grip equipment, or with two cameras sandwiched into one camera body with very little control or synchronization. "A simple analogy would be that gluing two motorcycles together does not make a car," says Jonathan Kitzen, president of Meduza Systems. "While left-eye and right-eye images are generated using two cameras, many new problems are created that must then be corrected in post-production, leading to data loss, image aberration, lost time, and expense."