The Circular Image Sensor

Rethinking emulsion-free sensor shape


I am likely ten years early with this proposal, but I like living beyond the edge. The phrase “emulsion-free” in the subtitle and in my profile, while a humble attempt at pithiness, is also an observation that film is now irrelevant where the majority of cinematography is concerned.

“I’m only mostly dead.” — Film

As we start to go emulsion-free, we can shed — or at least rethink — all the conventions built around the physical film medium. One of those conventions is sensor shape. Motion picture film dictates a rectangular frame due to its physical constraints, but digital sensors are fundamentally different and don’t suffer from the necessity of physically pulling each frame into position.

Despite this difference, digital sensors are still offered as rectangles, and the fragmentation has already begun: the Arri Alexa comes in a 16:9 version and a 4:3 version, the Red Epic crops its sensor when a resolution below the maximum is chosen, and Red’s new Dragon sensor is starting the move beyond traditional Super 35. I won’t even mention Micro 4/3.

I can think of three advantages of cameras with circular sensors, and no disadvantages:

Horizon adjustments in post without zooming

With a rectangular format, a mis-leveled shot can only be corrected in post by rotating the frame and zooming in to hide the empty corners. Shooting at a higher resolution helps, but only to the extent that the cinematographer is willing to accept an adjustment of the intended frame. The problem is exacerbated as the aspect ratio widens.

With a circular sensor — assuming all the pixels are recording regardless of final aspect ratio — any rotation in post is zoom-free.
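Here is a rough geometric sketch of that difference, in Python. The frame dimensions, sensor diameter, and correction angle are arbitrary illustrations, not any real camera’s numbers:

```python
import math

def zoom_needed_rectangular(width, height, angle_deg):
    """Zoom factor needed to level a shot by angle_deg on a rectangular
    sensor, keeping the original aspect ratio and hiding the empty corners."""
    theta = math.radians(abs(angle_deg))
    long_over_short = max(width / height, height / width)
    return math.cos(theta) + long_over_short * math.sin(theta)

def zoom_needed_circular(frame_width, frame_height, sensor_diameter):
    """On a circular sensor, rotation is zoom-free as long as the delivery
    frame's diagonal fits inside the recorded circle."""
    diagonal = math.hypot(frame_width, frame_height)
    return 1.0 if diagonal <= sensor_diameter else diagonal / sensor_diameter

# Hypothetical numbers: a 24.0 x 13.5 mm (16:9) frame, leveled by 3 degrees.
print(zoom_needed_rectangular(24.0, 13.5, 3.0))  # ~1.09x zoom, roughly a 9% crop
print(zoom_needed_circular(24.0, 13.5, 28.0))    # 1.0, no zoom at all
```

In this toy model, even a 3-degree correction on a wide rectangular frame costs nearly a tenth of the image, and the cost grows as the aspect ratio widens.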

Similar number of pixels used/wasted for each aspect ratio

On a Red Epic X, shooting with anamorphic lenses wastes pixels at the left and right sides of the sensor, and resolution is lost. Similarly, shooting spherical for 2.4:1 crops the top and bottom, and resolution is lost. The Alexa 4:3 was designed to overcome the anamorphic issue, but it doesn’t overcome the spherical 2.4:1 issue (it would seem to exacerbate it, actually — that’s what the 16:9 Alexa is for).

But why should I have to switch cameras when I shoot anamorphic or spherical, 16:9 or 4:3 or 2.4:1? A circular camera sensor would allow all of these aspect ratios and squeeze ratios without making the cameraman feel cheated out of pixels. A similar number of pixels goes unused in each aspect ratio. That is, of course, until you factor in horizon adjustments. If that feature is used, nothing is wasted in any format. Least of all the intended frame.
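A back-of-the-envelope sketch of that claim, assuming the delivery frame is always the largest rectangle inscribed in the circle; the 30mm diameter is an arbitrary placeholder:

```python
import math

def inscribed_frame(diameter, aspect):
    """Largest rectangle of a given aspect ratio (w/h) that fits in a circle:
    its diagonal equals the circle's diameter."""
    height = diameter / math.hypot(aspect, 1.0)
    width = aspect * height
    used = (width * height) / (math.pi * (diameter / 2) ** 2)
    return width, height, used

sensor_diameter = 30.0  # mm, arbitrary
formats = [
    ("4:3", 4 / 3),
    ("16:9", 16 / 9),
    ("2.4:1 spherical", 2.4),
    ("2x anamorphic for 2.4:1 (1.2:1 capture)", 1.2),
]
for label, aspect in formats:
    w, h, used = inscribed_frame(sensor_diameter, aspect)
    print(f"{label:>42}: {w:4.1f} x {h:4.1f} mm, {used:.0%} of the circle used")
```

In this toy model every format lands in the same rough band of utilization, rather than one format paying a steep penalty relative to another.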

One-to-one equivalence between lens image circle and sensor size

One of the issues surrounding motion picture lenses is that, as the move away from film begins, traditional cine lenses are no longer guaranteed to cover any given future format. The Canon 5D is “full frame,” a term cinematographers didn’t really use before the 5D; that format used to be called VistaVision. Super 35 lenses aren’t designed to cover VistaVision, even though some of them do.

Image circle is the diameter of the image a lens projects when wide open and focused at infinity (at the correct flange distance). This measurement is typically reported by manufacturers of large format lenses, but is only starting to be reported by manufacturers of cine lenses — mostly because the Super 35 standard rendered it unnecessary until recently.

A circular sensor design would allow at-a-glance confirmation of whether a lens will cover a given sensor, without trigonometry. With a single number, a cinematographer will know that a lens with a 32mm image circle covers any sensor up to 32mm in diameter. The guessing game (i.e., the many forum posts about specific lens/camera combinations) will be over.
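A minimal sketch of that check, with placeholder dimensions rather than any real camera’s specs:

```python
import math

def covers_rectangular(image_circle_mm, sensor_w_mm, sensor_h_mm):
    """A lens covers a rectangular sensor only if the sensor's diagonal
    fits inside the lens's image circle."""
    return math.hypot(sensor_w_mm, sensor_h_mm) <= image_circle_mm

def covers_circular(image_circle_mm, sensor_diameter_mm):
    """With a circular sensor, coverage is a single-number comparison."""
    return sensor_diameter_mm <= image_circle_mm

# Hypothetical lens with a 32mm image circle:
print(covers_rectangular(32.0, 24.9, 14.0))  # needs the diagonal first -- True
print(covers_circular(32.0, 32.0))           # one glance at the spec -- True
```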

The fun is only beginning

A circular sensor would allow very interesting in-camera effects, never achievable on film.

Imagine a circular sensor on a freely rotating mount inside the camera, with the rotation angle recorded as metadata. The spinning sensor could be “unspun” in post, using the infinitely variable horizon adjustment driven by that metadata. Perhaps the camera’s real-world orientation would also be recorded via a three-axis accelerometer or gyroscope, so that deliberate camera movements aren’t cancelled out.
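A minimal sketch of how the unspinning might work, assuming each frame carries both the sensor’s rotation angle and the camera’s own roll from the gyro; all names and the sample metadata are hypothetical:

```python
def unspin_correction(sensor_angle_deg, camera_roll_deg):
    """Rotation to apply to a frame in post so the sensor's spin is cancelled
    while deliberate camera roll is preserved."""
    return -(sensor_angle_deg - camera_roll_deg)

# Hypothetical per-frame metadata: the sensor spins up while the camera
# itself rolls only slightly.
sensor_angles = [0.0, 12.0, 25.0, 40.0]  # recorded sensor rotation, degrees
camera_rolls = [0.0, 0.5, 1.0, 1.5]      # camera roll from the gyro, degrees

for frame, (spin, roll) in enumerate(zip(sensor_angles, camera_rolls)):
    print(f"frame {frame}: rotate {unspin_correction(spin, roll):+.1f} degrees in post")
```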

I envision a shot — perhaps a POV shot of a person being drugged — where the motion blur created by spinning the sensor could be ramped up from zero to a lot. Maybe the resulting radial blur would have a real-world quality that trumps the digital equivalent.

We won’t know until we try it.