Chronocam’s technology strategy is built on a straightforward premise: to enable a safer, more efficient world through the capabilities of machine vision, we need to rethink traditional vision processing. Our technology introduces a new computer vision paradigm based on how the human eye and brain work. In a nutshell, our approach significantly improves the performance and power efficiency of computer vision in a wide range of products and applications that improve the convenience and safety of our daily lives. Chronocam addresses head-on the obstacles that have kept previous generations of camera technology from meeting the needs of modern applications in automotive, consumer, IoT and industrial products.
Chronocam’s proprietary approach leverages the company’s deep expertise in computer vision technology, sensor design and neuromorphic computing, including several patents covering this new CMOS vision sensing and processing technology. The technology is unique in its ability to achieve scene-dependent data compression that can be optimized in real time according to varying application requirements. Image information is not acquired and transmitted frame-wise but continuously, and conditionally only from parts of the scene where there is new visual information, mimicking how the eye works. With this method, those parts of the scene that contain fast motion are sampled rapidly, while slow-changing portions are sampled at lower rates, going all the way down to zero if nothing changes. The result is an almost time-continuous but very sparse stream of events carrying the most useful visual information.
More than 120 dB
100 kfps equivalent
Ultra-low bandwidth and low latency
Less than 10 mW
Simply put, digital cameras have worked the same way for decades – all the pixels in an array measure the light they receive at the same time, and then report their measurements to the supporting hardware. Do this once and you have a stills camera. Repeat it rapidly enough and you have a video camera – an approach that hasn’t changed much since Eadweard Muybridge accidentally created cinema while exploring animal motion in the 1880s.
This approach made sense when cameras were mainly used to take pictures of people for people. Today, computers are fast becoming the world’s largest consumers of images, and yet this is not reflected in the way that images are captured. Essentially, we’re still building selfie-cams for supercomputers.
EVENTS NOT IMAGES
The Chronocam approach differs from traditional methods in that its array of pixels doesn’t have a common frame rate. In fact, there are no frames at all. Instead, each pixel only outputs the intensity data it has measured once the light falling upon it has changed by a set amount. If the incident light isn’t changing (for example, in the background of a security camera’s field of view) then the pixel stays silent. If the scene is changing (for example, a car drives through it), the affected pixels report the change. If many cars pass, all the affected pixels report a sequence of changes.
This approach has intriguing advantages. Motion blur becomes a thing of the past, because the faster the image changes the faster each affected pixel reports that change. Conversely, static parts of the image don’t keep diligently reporting their unchanging status, reducing the amount of redundant data being processed. Under- or over-exposure issues are avoided since each pixel adjusts its exposure time according to the incident lighting conditions. Images can be filtered by their contrast level by adjusting how much the intensity of each pixel has to change before the pixel fires off a report of that change.
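The per-pixel behaviour described above can be sketched in a few lines of Python. This is an illustrative model, not Chronocam's actual circuit: it assumes each pixel compares log-intensity against a contrast threshold (the threshold value and function names here are hypothetical) and fires a polarity event only when that threshold is crossed.

```python
import math

# Assumed contrast threshold: the log-intensity change a pixel must see
# before it fires an event. The value 0.15 is illustrative.
CONTRAST_THRESHOLD = 0.15

def pixel_events(samples, threshold=CONTRAST_THRESHOLD):
    """Return (timestamp, polarity) events for one pixel's intensity samples.

    samples: list of (timestamp, intensity) pairs for a single pixel.
    polarity is +1 for a brightening change, -1 for a darkening one.
    """
    events = []
    last_log = math.log(samples[0][1])  # reference level at start
    for t, intensity in samples[1:]:
        delta = math.log(intensity) - last_log
        if abs(delta) >= threshold:
            events.append((t, 1 if delta > 0 else -1))
            last_log = math.log(intensity)  # reset reference to new level
    return events

# A static background pixel stays silent; a pixel seeing a passing
# object fires once per threshold-crossing change.
static = [(t, 100.0) for t in range(5)]
moving = [(0, 100.0), (1, 100.0), (2, 180.0), (3, 60.0), (4, 60.0)]
print(pixel_events(static))  # []
print(pixel_events(moving))  # [(2, 1), (3, -1)]
```

Note how the static pixel produces no output at all, which is exactly the redundancy reduction described above: unchanging scene regions cost nothing downstream.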
Probably the most important aspect of our event-driven sensor, though, is the way it changes how we think about computer vision. If looking at a conventional video is like being handed a sequence of postcards by a friend and being asked to work out what is changing by flicking through them, an event-driven sensor’s output is more like looking at a single postcard while that friend uses a highlighter to mark every change in the scene as it happens – no matter the lighting conditions in the scene.
In effect, the data stream that a computer vision system needs to analyse changes from a sequence of full-frame images, delivered to the beat of a fixed sampling clock, into an unsynchronised sequence of signals fired off by each pixel that has been subject to the set amount of change. A second signal produces pulses that represent the intensity of the light being measured by each pixel at that time.
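One way to picture the resulting data stream is as a flat, time-ordered list of per-pixel records rather than dense frames. The record layout below is purely illustrative (Chronocam's actual sensor format is not specified here): it combines the change signal (where, when, and in which direction a pixel crossed its threshold) with the intensity measurement carried by the second signal.

```python
from collections import namedtuple

# Hypothetical event record: field names are illustrative, not the
# sensor's real output format.
Event = namedtuple("Event", ["x", "y", "timestamp_us", "polarity", "intensity"])

stream = [
    Event(120, 45, 1_000, +1, 182.0),  # pixel brightened
    Event(121, 45, 1_004, +1, 175.0),  # neighbour fired 4 microseconds later
    Event(120, 45, 9_870, -1, 60.0),   # same pixel darkened much later
]

# Events are consumed as they arrive -- there is no frame boundary to
# wait for, and silent pixels contribute nothing to the stream.
for ev in stream:
    print(f"t={ev.timestamp_us}us pixel=({ev.x},{ev.y}) pol={ev.polarity:+d}")
```

Because each record carries its own microsecond timestamp, temporal resolution is set by pixel activity, not by a global sampling clock.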
It is not a coincidence that these ‘spiking’ pulse streams resemble the signals that the human brain and visual cortex use to process temporal events – this is, after all, neuromorphic, or ‘brain-shaped’ engineering. In fact, Chronocam is building its approach to computer vision on a new mathematical framework that enables more effective analysis of such spiking signals by AI algorithms.
Optimized data acquisition, tailored for machines