Wednesday, January 17, 2018

RGB to Hyperspectral Image Conversion

Ben Gurion University, Israel, researchers implement a seemingly impossible thing - converting regular RGB consumer camera images into hyperspectral ones, purely in software. Their paper "Sparse Recovery of Hyperspectral Signal from Natural RGB Images" by Boaz Arad and Ohad Ben-Shahar, presented at the European Conference on Computer Vision (ECCV) in Amsterdam, The Netherlands, in October 2016, says:

"We present a low cost and fast method to recover high quality hyperspectral images directly from RGB. Our approach first leverages hyperspectral prior in order to create a sparse dictionary of hyperspectral signatures and their corresponding RGB projections. Describing novel RGB images via the latter then facilitates reconstruction of the hyperspectral image via the former. A novel, larger-than-ever database of hyperspectral images serves as a hyperspectral prior. This database further allows for evaluation of our methodology at an unprecedented scale, and is provided for the benefit of the research community. Our approach is fast, accurate, and provides high resolution hyperspectral cubes despite using RGB-only input."


"The goal of our research is the reconstruction of the hyperspectral data from natural images from their (single) RGB image. Prima facie, this appears a futile task. Spectral signatures, even in compact subsets of the spectrum, are very high (and in the theoretical continuum, infinite) dimensional objects while RGB signals are three dimensional. The back-projection from RGB to hyperspectral is thus severely underconstrained and reversal of the many-to-one mapping performed by the eye or the RGB camera is rather unlikely. This problem is perhaps expressed best by what is known as metamerism – the phenomenon of lights that elicit the same response from the sensory system but having different power distributions over the sensed spectral segment.

Given this, can one hope to obtain good approximations of hyperspectral signals from RGB data only? We argue that under certain conditions this otherwise ill-posed transformation is indeed possible; First, it is needed that the set of hyperspectral signals that the sensory system can ever encounter is confined to a relatively low dimensional manifold within the high or even infinite-dimensional space of all hyperspectral signals. Second, it is required that the frequency of metamers within this low dimensional manifold is relatively low. If both conditions hold, the response of the RGB sensor may in fact reveal much more on the spectral signature than first appears and the mapping from the latter to the former may be achievable.

Interestingly enough, the relative frequency of metameric pairs in natural scenes has been found to be as low as 10^−6 to 10^−4. This very low rate suggests that at least in this domain spectra that are different enough produce distinct sensor responses with high probability.

The eventual goal of our research is the ability to turn consumer grade RGB cameras into hyperspectral acquisition devices, thus permitting truly low cost and fast HISs [hyperspectral imaging systems].
"

X-Ray Imaging at 30fps

Teledyne Dalsa publishes a nice demo of its 1MP 30fps X-Ray sensor:

Tuesday, January 16, 2018

SD Optics Depth Sensing Camera

SD Optics publishes two videos of depth sensing by means of fast focus variations of its MEMS lens:
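Conceptually, depth from a fast focus sweep can be estimated by taking, for every pixel, the focus setting at which a local sharpness measure peaks. Below is a hedged numpy/scipy sketch of that generic depth-from-focus idea; the frame stack, focus measure, window size, and focus-to-depth mapping are assumptions, not SD Optics' actual processing:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack, focus_depths_mm, win=9):
    """stack: (n_focus, H, W) grayscale frames taken at known focus distances (assumed)."""
    sharpness = np.stack([
        uniform_filter(laplace(frame.astype(float)) ** 2, size=win)  # local focus measure
        for frame in stack
    ])
    best = np.argmax(sharpness, axis=0)            # index of the sharpest frame per pixel
    return np.asarray(focus_depths_mm)[best]       # map index -> assumed depth

# toy usage: 20 stand-in "frames" swept from 100 mm to 2000 mm
stack = np.random.rand(20, 120, 160)
depth_map = depth_from_focus(stack, np.linspace(100, 2000, 20))
print(depth_map.shape)
```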




Monday, January 15, 2018

Imec 3D Stacking Aims at 100nm Contact Pitch

An Imec article on 3D bonding technology by Eric Beyne, imec fellow & program director for 3D system integration, presents solutions that are expected to reach a 100nm contact pitch:

Gate/Body-tied MOSFET Image Sensor Proposed

Sensors and Materials publishes a paper "Complementary Metal Oxide Semiconductor Image Sensor Using Gate/Body-tied P-channel Metal Oxide Semiconductor Field Effect Transistor-type Photodetector for High-speed Binary Operation" by Byoung-Soo Choi, Sang-Hwan Kim, Jimin Lee, Chang-Woo Oh, Sang-Ho Seo, and Jang-Kyoo Shin from Kyungpook National University, Korea.

"In this paper, we propose a CMOS image sensor that uses a gate/body-tied p-chnnel metal oxide semiconductor field effect transistor (PMOSFET)-type photodetector for highspeed binary operation. The sensitivity of the gate/body-tied PMOSFET-type photodetector is approximately six times that of the p–n junction photodetector for the same area. Thus, an active pixel sensor with a highly sensitive gate/body-tied PMOSFET-type photodetector is more appropriate for high-speed binary operation."

The 3T-style pixel uses a PMOS photodetector instead of a photodiode and has a non-linear response. Probably, this inherent non-linearity is the main reason that a binary operation mode is proposed:
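As a toy illustration of what a binary operation mode can look like on the readout side, the sketch below compares a simulated non-linear pixel output against a global threshold to produce a 1-bit image. The logarithmic response model and the threshold value are placeholders, not the measured gate/body-tied PMOSFET characteristics:

```python
import numpy as np

def pixel_response(photocurrent):
    # placeholder non-linear (log-like) response, not the actual GBT PMOSFET curve
    return np.log1p(50.0 * photocurrent)

def binary_readout(photocurrent, v_ref=2.0):
    # comparator-style 1-bit decision per pixel against an assumed reference level
    return (pixel_response(photocurrent) > v_ref).astype(np.uint8)

scene = np.random.rand(240, 320)           # stand-in normalized photocurrents
bitmap = binary_readout(scene)
print(bitmap.mean())                        # fraction of pixels above threshold
```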

Sunday, January 14, 2018

The Rise of Smartphone Spectrometers

MDPI publishes a paper "Smartphone Spectrometers" by Andrew J.S. McGonigle, Thomas C. Wilkes, Tom D. Pering, Jon R. Willmott, Joseph M. Cook, Forrest M. Mims, and Alfio V. Parisi from the University of Sheffield, UK, and the University of Sydney and the University of Southern Queensland, Australia.

"Smartphones are playing an increasing role in the sciences, owing to the ubiquitous proliferation of these devices, their relatively low cost, increasing processing power and their suitability for integrated data acquisition and processing in a ‘lab in a phone’ capacity. There is furthermore the potential to deploy these units as nodes within Internet of Things architectures, enabling massive networked data capture. Hitherto, considerable attention has been focused on imaging applications of these devices. However, within just the last few years, another possibility has emerged: to use smartphones as a means of capturing spectra, mostly by coupling various classes of fore-optics to these units with data capture achieved using the smartphone camera. These highly novel approaches have the potential to become widely adopted across a broad range of scientific e.g., biomedical, chemical and agricultural application areas. In this review, we detail the exciting recent development of smartphone spectrometer hardware, in addition to covering applications to which these units have been deployed, hitherto. The paper also points forward to the potentially highly influential impacts that such units could have on the sciences in the coming decades."

Saturday, January 13, 2018

GM Self-Driving Car Has 5 LiDARs and 16 Cameras

The GM autonomous car safety report details the sensors on board the Cruise self-driving vehicle: "To perform Perception functions, the vehicle has five LiDARs, 16 cameras and 21 radars. Their combined data provides sensor diversity allowing Perception to see complex environments."

Friday, January 12, 2018

Brillnics 90dB DR Image Sensor Paper

MDPI Sensors Special Issue on the 2017 International Image Sensor Workshop publishes Brillnics paper "An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process" by Isao Takayanagi, Norio Yoshimura, Kazuya Mori, Shinichiro Matsuo, Shunsuke Tanaka, Hirofumi Abe, Naoto Yasuda, Kenichiro Ishikawa, Shunsuke Okura, Shinji Ohsawa, and Toshinori Otaka.

"To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single exposure dynamic rage (SEHDR) signal is obtained. Using this technology, a 1/2.7”, 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach."

Innoviz LiDAR Prototype

PRNewswire: Innoviz presents its prototype LiDAR (model Pro) at CES, with quite a complete performance spec, a rarity among LiDAR startups (Velodyne excepted):


Looking forward, the company intends to bring the automotive-grade InnovizOne model to market sometime in 2019. It will require quite a leap in technology to reach the resolution, FOV, range, and size targets set at last year's CES:


Update: As of January 16, 2018, the following design targets are presented on the company page for the automotive-qualified InnovizOne product:

Sony, Samsung Talk on Automotive Imaging

Kazuo Hirai, Sony CEO and President, spent a few minutes talking about automotive imaging in his CES keynote:



Samsung announces an open DRVLINE platform that includes "a brand-new ADAS forward-facing camera system, created by Samsung and HARMAN, which is engineered to meet upcoming New Car Assessment Program (NCAP) standards. These include lane departure warning, forward collision warning, pedestrian detection, and automatic emergency braking."


On the sensors side, Samsung DRVLINE partners include 3 LiDAR startups: