Measurement and Estimation of Spectral Sensitivity Functions for Mobile Phone Cameras

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4985
Author(s):  
Shoji Tominaga ◽  
Shogo Nishi ◽  
Ryo Ohtera

Mobile phone cameras are often far more convenient to use than professional digital single-lens reflex (DSLR) cameras. Knowledge of the camera spectral sensitivity function is important in many fields that make use of images. In this study, methods for measuring and estimating spectral sensitivity functions for mobile phone cameras are developed. In the direct measurement method, the spectral sensitivity at each wavelength is measured using monochromatic light. Although accurate, this method is time-consuming and expensive. The indirect estimation method is based on color samples, in which the spectral sensitivities are estimated from the input data of color samples and the corresponding output RGB values from the camera. We first present an imaging system for direct measurements. A variety of mobile phone cameras are measured using the system to create a database of spectral sensitivity functions. The features of the measured spectral sensitivity functions are then studied using principal component analysis (PCA), and the statistical features of the spectral functions are extracted. We next describe a conventional method for estimating the spectral sensitivity functions from color samples and point out some of its drawbacks. A method to solve the estimation problem using the spectral features of the sensitivity functions in addition to the color samples is then proposed. The estimation is stable even when only a small number of spectral features are selected. Finally, the results of the experiments to confirm the feasibility of the proposed method are presented. We establish that our method is excellent in terms of both the data volume of color samples required and the estimation accuracy of the spectral sensitivity functions.
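The color-sample-based estimation with spectral features reduces to a small linear inverse problem: the unknown sensitivity curve is constrained to a low-dimensional PCA subspace learned from a database of measured curves, so only a few basis weights are solved by least squares. The following NumPy sketch illustrates this on synthetic data; the Gaussian curves, the 24-sample count, and the three-component basis are illustrative assumptions, not the authors' measured database.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 61)                  # wavelength grid, 5 nm steps

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Hypothetical database of green-channel sensitivity curves -> PCA basis
db = np.array([gauss(mu, s) for mu in (530, 540, 550, 545) for s in (30, 40)])
mean = db.mean(axis=0)
_, _, Vt = np.linalg.svd(db - mean, full_matrices=False)
basis = Vt[:3]                                  # first 3 principal components

# "Unknown" camera sensitivity, chosen inside the subspace for this demo
true_sens = mean + basis.T @ np.array([0.5, -0.2, 0.1])

refl = rng.uniform(0.0, 1.0, size=(24, wl.size))  # 24 color-sample reflectances
outputs = refl @ true_sens                        # noiseless camera responses

# Solve outputs ~= refl @ (mean + basis.T @ w) for just the 3 weights w
A = refl @ basis.T
w, *_ = np.linalg.lstsq(A, outputs - refl @ mean, rcond=None)
est = mean + basis.T @ w

rel_err = np.linalg.norm(est - true_sens) / np.linalg.norm(true_sens)
```

Because only three weights are estimated instead of 61 free wavelength samples, the system stays well-conditioned even with few color samples, which is the stabilizing effect the abstract describes.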

1971 ◽  
Vol 57 (3) ◽  
pp. 363-384 ◽  
Author(s):  
A. M. Granda ◽  
S. Yazulla

Responses to diffuse monochromatic light were recorded from single units in the diencephalon of the pigeon. Units were both excited and inhibited by light stimulation. Intensity-response functions based on latency measures to the first spike after stimulation were used to generate action spectra. One class of spectral sensitivity functions, presumably from rods, showed peak sensitivities near 500 nm; these functions were unaffected by changing the criterion values used to generate them. A second class of cone functions showed multiple peak sensitivities at 540 nm and 600–620 nm. These units shifted their peak sensitivities with a change in criterion values. Unit response types tended to be localized differentially in the nucleus rotundus. Excitatory units were located in the dorsal half of the nucleus, while inhibitory units were located in the ventral half, with a few exceptions. An attempt was made to integrate the present findings with previous behavioral, electrophysiological, photochemical, and anatomical data in the pigeon.


Author(s):  
Xiao Chen ◽  
Zaichen Zhang ◽  
Liang Wu ◽  
Jian Dang

Abstract In this paper, we investigate beam-domain channel estimation and power allocation in hybrid-architecture massive multiple-input multiple-output (MIMO) communication systems. First, we propose a low-complexity channel estimation method, which utilizes the beam steering vectors obtained from direction-of-arrival (DOA) estimation and beam gains estimated with low-overhead pilots. Based on the estimated beam information, a purely analog precoding strategy is also designed. Then, the optimal power allocation among multiple beams is derived to maximize spectral efficiency. Finally, simulation results show that the proposed schemes achieve high channel estimation accuracy and spectral efficiency.
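Allocating power across independent beams to maximize the sum of per-beam rates is the classic water-filling problem. The routine below is an illustrative NumPy sketch; the beam gains, noise level, and power budget are made-up numbers, not values from the paper.

```python
import numpy as np

def waterfill(gains, total_power, noise=1.0):
    """Water-filling: maximize sum(log2(1 + g_i * p_i / noise))
    subject to sum(p_i) = total_power and p_i >= 0."""
    inv = noise / np.asarray(gains, dtype=float)    # per-beam "floor" heights
    inv_sorted = np.sort(inv)
    k = len(inv_sorted)
    while k > 0:
        mu = (total_power + inv_sorted[:k].sum()) / k   # candidate water level
        if mu > inv_sorted[k - 1]:
            break          # the k strongest beams all sit below the water level
        k -= 1
    return np.maximum(mu - inv, 0.0)

gains = np.array([2.0, 1.0, 0.25])        # hypothetical effective beam gains
p = waterfill(gains, total_power=3.0)
se = np.sum(np.log2(1.0 + gains * p))     # resulting spectral efficiency
```

Note how the weakest beam receives zero power: when its inverse gain exceeds the water level, spending power there would cost more rate than it earns.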


Minerals ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 839
Author(s):  
Lucilla Pronti ◽  
Giuseppe Capobianco ◽  
Margherita Vendittelli ◽  
Anna Candida Felici ◽  
Silvia Serranti ◽  
...  

Multispectral imaging is a preliminary screening technique for the study of paintings. Although it permits the identification of several mineral pigments by their spectral behavior, it is considered less capable than hyperspectral imaging, since only a limited number of wavelengths are selected. In this work, we propose an optimized method to map the distribution of the mineral pigments used by Vincenzo Pasqualoni for his wall painting in the Basilica of S. Nicola in Carcere in Rome, combining UV/VIS/NIR reflectance spectroscopy and multispectral imaging. The first method (UV/VIS/NIR reflectance spectroscopy) allowed us to characterize pigment layers with a high spectral resolution; the second method (UV/VIS/NIR multispectral imaging) permitted the evaluation of the pigment distribution using a restricted number of wavelengths. By combining the results obtained from both devices, it was possible to obtain a distribution map of a pictorial layer with a high accuracy level of pigment recognition. The method involved the joint use of point-by-point reflectance spectroscopy and Principal Component Analysis (PCA) to identify the pigments in the color palette and to evaluate whether all the recognized pigments could be discriminated using a smaller number of wavelengths acquired through the multispectral imaging system. Finally, the distribution and the spectral differences of the pigments recognized in the multispectral images (in this case: red ochre, yellow ochre, orpiment, cobalt blue-based pigments, ultramarine and chrome green) were shown through PCA false-color images.
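The PCA false-color step can be sketched in a few lines: the pixels of the multispectral cube are mean-centered, projected onto the first three principal components, and the resulting scores are rescaled into an RGB image. The synthetic two-"pigment" cube below is a toy stand-in for the painting data; the band count, spectra, and image size are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
h, w, bands = 32, 32, 6                 # small synthetic multispectral cube

# Two synthetic "pigment" spectra plus noise, split left/right in the image
spec_a = np.linspace(0.2, 0.9, bands)   # e.g. a rising, ochre-like ramp
spec_b = np.linspace(0.8, 0.1, bands)   # a falling, blue-like ramp
cube = np.empty((h, w, bands))
cube[:, : w // 2] = spec_a
cube[:, w // 2:] = spec_b
cube += 0.01 * rng.standard_normal(cube.shape)

# PCA via SVD on mean-centered pixels; first 3 score maps -> false-color RGB
X = cube.reshape(-1, bands)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T                  # project onto the top 3 components
lo, hi = scores.min(axis=0), scores.max(axis=0)
false_rgb = ((scores - lo) / (hi - lo)).reshape(h, w, 3)
```

The first principal component captures the largest spectral contrast, so regions painted with spectrally distinct pigments separate sharply in the first false-color channel even though only a handful of bands are used.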


2021 ◽  
Vol 13 (4) ◽  
pp. 803
Author(s):  
Lingchen Lin ◽  
Kunyong Yu ◽  
Xiong Yao ◽  
Yangbo Deng ◽  
Zhenbang Hao ◽  
...  

As a key canopy structure parameter, the Leaf Area Index (LAI) and methods for estimating it have long attracted attention. To explore a potential low-cost method for estimating forest LAI from a 3D point cloud, we took drone photos at different camera tilt angles and set five schemes (O (0°), T15 (15°), T30 (30°), OT15 (0° and 15°) and OT30 (0° and 30°)), which were used to reconstruct 3D point clouds of the forest canopy based on photogrammetry. Subsequently, the LAI values and the vertical leaf area distribution derived from the five schemes were calculated based on the voxelized model. Our results show that the serious lack of leaf area in the middle and lower layers makes the LAI estimate of O inaccurate. For oblique photogrammetry, schemes with 30° photos always provided better LAI estimates than schemes with 15° photos (T30 better than T15, OT30 better than OT15), mainly in the lower part of the canopy, and particularly in low-LAI areas. The overall structure from the single-tilt-angle schemes (T15, T30) was relatively complete, but the rough point cloud details could not reflect the actual LAI well. Multi-angle schemes (OT15, OT30) provided excellent leaf area estimation (OT15: R2 = 0.8225, RMSE = 0.3334 m2/m2; OT30: R2 = 0.9119, RMSE = 0.1790 m2/m2). OT30 provided the best LAI estimation accuracy at a sub-voxel size of 0.09 m and the best checkpoint accuracy (OT30: RMSE [H] = 0.2917 m, RMSE [V] = 0.1797 m). The results highlight that coupling oblique photography and nadiral photography can be an effective solution for estimating forest LAI.
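The voxel-based leaf-area computation can be sketched as: bin the reconstructed points into a 3D grid, treat each occupied voxel as one leaf facet of area voxel², and divide the total by the plot area. This is a deliberately crude stand-in for the paper's voxelized model; the contact-frequency corrections real LAI pipelines apply are omitted, and the point cloud below is random toy data rather than a photogrammetric reconstruction.

```python
import numpy as np

def voxel_lai(points, voxel=0.09, plot_area=4.0):
    """Crude voxel-based LAI: every occupied voxel contributes one
    horizontal facet of area voxel**2 (a strong simplification)."""
    idx = np.floor(np.asarray(points, dtype=float) / voxel).astype(int)
    occupied = {tuple(v) for v in idx}          # set of filled voxel indices
    return len(occupied) * voxel ** 2 / plot_area

# Toy "canopy": random points filling a 2 m x 2 m x 1 m slab above bare trunk space
rng = np.random.default_rng(2)
pts = rng.uniform([0.0, 0.0, 1.0], [2.0, 2.0, 2.0], size=(5000, 3))
lai = voxel_lai(pts, voxel=0.09, plot_area=4.0)
```

Even this toy version shows why voxel size matters, as in the paper's 0.09 m sub-voxel result: coarser voxels merge distinct leaf surfaces into fewer occupied cells and systematically lower the estimate.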


Author(s):  
Suyong Yeon ◽  
ChangHyun Jun ◽  
Hyunga Choi ◽  
Jaehyeon Kang ◽  
Youngmok Yun ◽  
...  

Purpose – The authors aim to propose a novel plane extraction algorithm for geometric 3D indoor mapping with range scan data. Design/methodology/approach – The proposed method utilizes a divide-and-conquer step to efficiently handle huge point clouds, not as a whole, but as separate sub-groups with similar plane parameters. The method adopts robust principal component analysis to enhance estimation accuracy. Findings – Experimental results verify that the method not only shows enhanced performance in plane extraction, but also broadens the domain of interest of plane registration to information-poor environments (such as simple indoor corridors), whereas the previous method only works adequately in information-rich environments (such as spaces with many features). Originality/value – The proposed algorithm has three advantages over the current state-of-the-art method: it is fast, it utilizes more inlier sensor data uncontaminated by severe sensor noise, and it extracts more accurate plane parameters.
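The core of PCA-based plane extraction is simple: center a sub-group of points and take the singular vector with the smallest singular value as the plane normal. The sketch below is the minimal, non-robust version on synthetic data; the robust weighting the authors add on top is omitted, and the plane coefficients are made up.

```python
import numpy as np

def fit_plane_pca(points):
    """PCA plane fit: the normal is the right-singular vector of the
    centered points with the smallest singular value."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid, full_matrices=False)
    normal = Vt[-1]                       # direction of least variance
    return normal, float(normal @ centroid)

# Noisy samples of the hypothetical plane z = 0.2*x + 0.1*y + 1
rng = np.random.default_rng(3)
xy = rng.uniform(-1.0, 1.0, size=(500, 2))
z = 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + 1.0 + 0.001 * rng.standard_normal(500)
normal, offset = fit_plane_pca(np.column_stack([xy, z]))
n = normal / normal[2]                    # rescale so the z-component is 1
```

A robust variant would iteratively down-weight points with large plane-to-point residuals before re-fitting, which is what protects the estimate from the severe sensor noise mentioned in the findings.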


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3208 ◽  
Author(s):  
Liangju Wang ◽  
Yunhong Duan ◽  
Libo Zhang ◽  
Tanzeel U. Rehman ◽  
Dongdong Ma ◽  
...  

The normalized difference vegetation index (NDVI) is widely used in remote sensing to monitor plant growth and chlorophyll levels. Usually, a multispectral camera (MSC) or hyperspectral camera (HSC) is required to obtain the near-infrared (NIR) and red bands for calculating NDVI. However, these cameras are expensive, heavy, difficult to geo-reference, and require professional training in imaging and data processing. On the other hand, the RGBN camera (an NIR-sensitive RGB camera, simply modified from a standard RGB camera by removing the NIR rejection filter) has also been explored to measure NDVI, but the results did not exactly match the NDVI from the MSC or HSC solutions. This study demonstrates an improved NDVI estimation method with an RGBN camera-based imaging system (Ncam) and machine learning algorithms. The Ncam consisted of an RGBN camera, a filter, and a microcontroller, with a total cost of only $70–85. This new NDVI estimation solution was compared with a high-end hyperspectral camera in an experiment with corn plants under different nitrogen and water treatments. The results showed that the Ncam with the two-band-pass filter achieved high performance (R2 = 0.96, RMSE = 0.0079) at estimating NDVI with the machine learning model. Additional tests showed that besides NDVI, this low-cost Ncam was also capable of precisely predicting corn plant nitrogen contents. Thus, the Ncam is a potential alternative to MSCs and HSCs in plant phenotyping projects.
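NDVI itself is a one-line band ratio; the machine learning component in the study maps raw RGBN responses onto it. A minimal sketch of the index on toy reflectance arrays (the pixel values are made up):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Standard NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
    eps guards against division by zero on dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance bands: healthy vegetation has high NIR, low red
nir = np.array([[0.8, 0.7], [0.3, 0.5]])
red = np.array([[0.1, 0.2], [0.3, 0.1]])
v = ndvi(nir, red)
```

Values near +1 indicate dense green vegetation, values near 0 bare soil, which is why the index tracks chlorophyll and growth status as described above.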

