In a recent issue, the journal Nature gave extensive coverage to the results of a project that may well revolutionize image capture as we know it: the AWARE-2 camera, which has a resolution of 1 Gigapixel but whose technology is scalable up to 50 Gigapixels. One of the paper’s authors, and a member of the project team, is Dr. Esteban Vera. He previously worked on his doctorate at the Center for Optics and Photonics (CEFOP) before moving to the University of Arizona, where he is currently a researcher.
After this important appearance, Esteban Vera gave us his first impressions and explained how the team reached this level of advancement.
In this regard, he explained that to obtain gigapixel images they had to overcome a few obstacles. “As there are no monolithic detectors able to reach such high pixel counts, the first step is to combine an array of N detectors so that they add up to the required number of pixels. One option would be to group the detectors in a single focal plane, but the complexity of the associated optical system, as well as its volume, weight, and cost, are unmanageable.”
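The arithmetic behind that first step is simple to sketch. The 14-megapixel sensor size comes from the camera description later in this article; the 20% overlap margin between neighboring fields is an illustrative assumption, not a published figure.

```python
import math

# Rough estimate of how many detectors are needed to tile a gigapixel image.
target_pixels = 1_000_000_000   # 1 Gigapixel
sensor_pixels = 14_000_000      # pixels per detector (AWARE-2 microcamera sensor)
overlap_factor = 1.2            # assumed 20% margin for overlapping fields

n_detectors = math.ceil(target_pixels * overlap_factor / sensor_pixels)
print(n_detectors)  # 86 -- the same order of magnitude as AWARE-2's 98 microcameras
```

The real count (98) is higher because the cameras tile a curved field on a dome rather than a flat rectangle, but the estimate shows why dozens of detectors are unavoidable.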
With that problem looming ahead, the researcher pointed out that as an alternative they designed a multiscale optical system, based on a spherical objective lens whose field of view is shared by microcameras, each of which relays a smaller portion of the image onto its own sensor. These small cameras have the appeal of being identical, and they only need to correct the local optical aberrations, decreasing the optical complexity and therefore the cost. Finally, it is only necessary to synchronize the acquisition and transmission of the images captured by each microcamera and then combine them into a single panorama, which in this case reaches approximately 60,000 x 18,000 pixels, or 1 Gigapixel.
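A quick check confirms that the quoted mosaic dimensions do amount to roughly one gigapixel:

```python
# The stitched panorama dimensions quoted in the text, multiplied out.
width, height = 60_000, 18_000
total = width * height
print(total)        # 1080000000
print(total / 1e9)  # 1.08 -- i.e. about 1.08 Gigapixels
```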
Technology based on multiscale optics is highly scalable at reasonable cost, volume, and weight, for it creates a platform for imaging systems that could easily reach 50 Gigapixels while remaining moderately compact. According to Vera, “currently a new prototype is being designed that would reach 5 Gigapixels and would work with color detectors. Other issues to address relate to the challenges of handling large volumes of data, as well as their transfer and processing. To overcome these difficulties, another goal is to definitively move from the concept of capturing data to that of capturing information.”
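A back-of-envelope estimate makes the data-handling challenge concrete. The frame rate and bytes-per-pixel below are illustrative assumptions, not AWARE-2 specifications:

```python
# Rough data rate for a continuous gigapixel stream.
pixels_per_frame = 1_000_000_000   # 1 Gigapixel
bytes_per_pixel = 1                # 8-bit monochrome, assumed
frames_per_second = 10             # assumed frame rate

rate_gb_s = pixels_per_frame * bytes_per_pixel * frames_per_second / 1e9
print(rate_gb_s)  # 10.0 -- gigabytes per second of raw data
```

At tens of gigabytes per second of raw pixels, it becomes clear why Vera speaks of moving from capturing data to capturing information.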
One immediate use of this technology would be surveillance, either static or airborne, focused mainly on public places with a large influx of people, such as intercity transport hubs, airports, or massive events.
From the scientific point of view, explained the researcher from the University of Arizona, “In the article we tried to give a couple of tentative examples of scientific applications, since it provides opportunities to capture simultaneous events occurring in nature, such as the migration of birds. However, the truth is that in the near future this type of camera will open a new window for the human imagination to explore events and natural phenomena, perhaps still unknown.”
The camera consists of a 16 mm objective lens and 98 microcameras arranged on a geodesic hemispherical surface, covering a total field of view (FoV) of 120 x 50 degrees.
Each of these microcameras has a commercially available 14-megapixel monochrome detector (with 1.4 µm pixels) and can independently adjust its exposure, gain, and focus. The field of view of each pixel is thus on the order of 8 x 8 arcseconds, surpassing the resolution of the human eye, which is on the order of 18 x 18 arcseconds in the fovea.
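The text does not state the system’s effective focal length, but it is implied by the numbers quoted: a pixel subtends (pixel pitch / effective focal length) radians on the sky. Solving for the focal length that turns 1.4 µm pixels into an 8-arcsecond field of view per pixel:

```python
import math

# Consistency check on the quoted figures: derive the effective focal length
# implied by a 1.4 um pixel pitch and an 8 x 8 arcsecond per-pixel field of view.
pixel_pitch = 1.4e-6                  # meters
ifov_rad = 8 / 3600 * math.pi / 180   # 8 arcseconds in radians

focal_length_mm = pixel_pitch / ifov_rad * 1e3
print(round(focal_length_mm, 1))  # 36.1 -- implied effective focal length in mm
```

This is a derived figure, not one published here; it simply shows that the pitch and angular-resolution numbers are mutually consistent for a system with an effective focal length of a few centimeters.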
The associated electronics control the acquisition and transmission of images from the sensors to a computing center, where they are processed and finally combined into a 1 Gigapixel panorama. The final image has the advantage of a high dynamic range (HDR) and an extended depth of field (EDoF) across the entire field of view, allowing, in addition to its high resolution, the display of contrasts unprecedented for traditional cameras.
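Because each microcamera sets its own exposure, overlapping pixels that see the same scene point can be fused into a high-dynamic-range estimate. The sketch below is a minimal illustration of that idea under simple assumptions (8-bit values, known exposure times); it is not the AWARE-2 processing pipeline.

```python
# Toy HDR fusion of samples of one scene point taken at different exposures.
def fuse(pixels_and_exposures):
    """Fuse (value, exposure_time) samples into one radiance estimate.

    Each 8-bit value is normalized by its exposure time to estimate scene
    radiance, weighted by how far the value sits from under/over-saturation.
    """
    num = den = 0.0
    for value, t in pixels_and_exposures:
        w = min(value, 255 - value) + 1e-6  # favor mid-range (well-exposed) values
        num += w * value / t
        den += w
    return num / den

# The same scene point seen by two microcameras with different exposure times:
radiance = fuse([(200, 0.01), (25, 0.00125)])
print(round(radiance))  # 20000 -- both samples agree on the same radiance
```

Both samples here are consistent (200 / 0.01 = 25 / 0.00125), so the fused estimate matches either one; with noisy or clipped pixels, the weighting favors whichever camera exposed the point best.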
Time spent at CEFOP
Esteban Vera was part of our Center when it was being formed. He pursued his doctorate under the tutelage of Dr. Sergio Torres, working on inverse problems linked to the processes of image acquisition.
“Because CEFOP was just beginning, there were many opportunities to collaborate with other groups working on cutting-edge scientific instrumentation, such as the optical tweezers in the laboratory of Dr. Juan Pablo Staforelli, where I contributed ideas such as image super-resolution (SR). This line of research eventually led me to take an interest in non-traditional ways of capturing optical information, and that motivated my participation in the conference on computational photography held at MIT’s Media Lab in 2010,” the researcher explained.
There, Vera was amazed by the plenary talk delivered by Professor David Brady of Duke University, entitled “Terapixel Imaging,” in which Brady argued that the scientific community was not exploiting the optical information content made available by the apertures in cameras (on the order of centimeters), and proposed using multiscale optics to do so.
During the last day of the conference, “there was an announcement of a couple of postdoctoral positions to work on bringing Dr. Brady’s ideas to reality. I couldn’t wait to apply, and within a few months I was invited by Dr. Michael Gehm to join his LENS Laboratory (Laboratory for Engineering Non-traditional Sensors) at the University of Arizona to develop the image formation platform for the Gigapixel camera,” he said.