End-to-End Image Simulator for Optical Imaging Systems: Equations and Simulation Examples

2013, Vol 2013, pp. 1-23
Author(s): Peter Coppo, Leandro Chiarantini, Luciano Alparone

A simplified end-to-end software tool for simulating the data produced by optical instruments, starting from either synthetic or airborne hyperspectral data, is described theoretically, and simulation examples of hyperspectral and panchromatic images for existing and future instrument designs are reported. High spatial/spectral resolution images with low intrinsic noise, together with the sensor/mission specifications, are used as inputs for the simulations. The examples reported in this paper show the capabilities of the tool for simulating target detection scenarios, assessing data quality with respect to classification performance and class discrimination, evaluating the impact of optical design on image quality, and 3D modelling of optical performance. The simulator is conceived as a tool (during phase 0/A) for the specification and early development of new Earth observation optical instruments, whose compliance with user requirements is achieved through a process of cost/performance trade-off. Compared with other existing image simulators for phase C/D projects of space-borne instruments, the Selex Galileo simulator implements all modules necessary for a complete panchromatic and hyperspectral image simulation, and the adopted IDL-ENVI software environment gives it excellent flexibility and expandability for new integrated functions.
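To make the simulation chain concrete, the following is a minimal Python sketch of the kind of forward model such a tool implements for a single band: optics blur, detector sampling at a coarser ground sample distance, and photon plus readout noise. The Gaussian PSF, gain, and noise figures are illustrative assumptions, not parameters of the Selex Galileo simulator (which is written in IDL-ENVI).

import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_band(radiance, psf_sigma=1.5, downsample=4, gain=100.0,
                  read_noise=5.0, rng=None):
    # Toy forward model for one spectral band:
    # optics blur -> detector sampling -> photon + readout noise.
    rng = np.random.default_rng() if rng is None else rng
    blurred = gaussian_filter(radiance, sigma=psf_sigma)  # Gaussian stand-in for the optical PSF/MTF
    sampled = blurred[::downsample, ::downsample]         # resample to the instrument's coarser GSD
    electrons = sampled * gain                            # radiance -> signal electrons
    noisy = rng.poisson(electrons) + rng.normal(0.0, read_noise, electrons.shape)
    return noisy / gain                                   # back to radiance-like units

# usage: a high-resolution, low-noise input image, as the simulator requires
hi_res = np.random.default_rng(0).random((512, 512)) * 10.0
simulated = simulate_band(hi_res)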

2021, Vol 13 (3), pp. 526
Author(s): Shengliang Pu, Yuanfeng Wu, Xu Sun, Xiaotong Sun

Graph representation learning is a nascent field that has shown promise for handling graph-structured data. Compared to conventional convolutional neural networks, graph-based deep learning has the advantages of delineating class boundaries and modeling feature relationships. For hyperspectral image (HSI) classification, the first problem is how to convert hyperspectral data from regular grids into irregular graph domains. In this regard, we present a novel method that performs localized graph convolutional filtering on HSIs based on spectral graph theory. First, we conducted principal component analysis (PCA) preprocessing to create localized hyperspectral data cubes with unsupervised feature reduction. These feature cubes, combined with localized adjacency matrices, were fed into a popular graph convolutional network in a standard supervised learning paradigm. Finally, we succeeded in analyzing diversified land covers by considering local graph structure with graph convolutional filtering. Experiments on real hyperspectral datasets demonstrated that the presented method offers promising classification performance compared with other popular competitors.
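As an illustration of this pipeline (PCA reduction, localized adjacency matrix, graph convolutional filtering), here is a minimal Python sketch using the standard renormalized graph convolution of Kipf and Welling; the RBF adjacency, patch size, and layer width are assumptions, not the authors' exact settings.

import numpy as np
from sklearn.decomposition import PCA

def normalized_adjacency(X, sigma=1.0):
    # RBF adjacency over the pixels of a patch, with the usual
    # D^(-1/2) (A + I) D^(-1/2) renormalization.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * sigma ** 2)) + np.eye(len(X))
    d = A.sum(1)
    return A / np.sqrt(d[:, None] * d[None, :])

def gcn_layer(A_hat, H, W):
    return np.maximum(A_hat @ H @ W, 0.0)  # ReLU(A_hat @ H @ W)

# toy patch: 25 pixels by 200 bands, reduced to 8 PCA features
rng = np.random.default_rng(0)
cube = rng.random((25, 200))
H0 = PCA(n_components=8).fit_transform(cube)
A_hat = normalized_adjacency(H0)
H1 = gcn_layer(A_hat, H0, rng.random((8, 16)))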


2021, Vol 13 (14), pp. 7989
Author(s): Miriam Pekarcikova, Peter Trebuna, Marek Kliment, Michal Dic

The article deals with solving bottlenecks in the logistics flow of a manufacturing company. The TX Plant Simulation software tool is used to detect bottlenecks and deficiencies in the company's production, logistics, and transportation systems. Together with simulation methods and lean manufacturing tools, losses in business processes are eliminated and, consequently, flow throughput is improved. In the TX Plant Simulation software environment, bottlenecks were identified on the created simulation model using the Bottleneck Analyzer, and a method of optimizing logistics flows was designed and tested by introducing a Kanban pull system. This resulted in an improvement in the throughput of the entire logistics flow, a reduction in inter-operational stocks, and an increase in the efficiency of the production system as a whole.
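The mechanics of a Kanban pull system are easy to sketch outside of TX Plant Simulation; below is a minimal discrete-event model in Python with SimPy, in which work-in-progress is bounded by the number of kanban cards. The processing time, demand interval, and card count are illustrative assumptions, not values from the case study.

import simpy

PROCESS_TIME, DEMAND_INTERVAL, N_KANBAN = 4.0, 5.0, 3

def upstream(env, cards, buffer):
    while True:
        yield cards.get(1)                  # produce only when a kanban card is free
        yield env.timeout(PROCESS_TIME)     # processing time at the workstation
        yield buffer.put(1)                 # finished part goes to the supermarket

def downstream(env, cards, buffer, served):
    while True:
        yield env.timeout(DEMAND_INTERVAL)  # customer demand arrives
        yield buffer.get(1)                 # take one part from the supermarket
        yield cards.put(1)                  # return the kanban card upstream
        served.append(env.now)

env = simpy.Environment()
cards = simpy.Container(env, capacity=N_KANBAN, init=N_KANBAN)
buffer = simpy.Container(env, capacity=N_KANBAN)
served = []
env.process(upstream(env, cards, buffer))
env.process(downstream(env, cards, buffer, served))
env.run(until=200)
print(f"parts delivered: {len(served)}, WIP bounded by {N_KANBAN} kanban cards")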


2021, Vol 13 (21), pp. 4472
Author(s): Tianyu Zhang, Cuiping Shi, Diling Liao, Liguo Wang

Convolutional neural networks (CNNs) have been widely used for hyperspectral image classification in recent years. The training of CNNs relies on a large amount of labeled sample data, but the number of labeled samples in hyperspectral data is relatively small. Moreover, for hyperspectral images, fully extracting spectral and spatial feature information is the key to achieving high classification performance. To address these issues, a deep spectral-spatial inverted residuals network (DSSIRNet) is proposed. In this network, a data-block random erasing strategy is introduced to alleviate the problem of limited labeled samples through data augmentation of small spatial blocks. In addition, a deep inverted residuals (DIR) module for spectral-spatial feature extraction is proposed, which locks in the effective features of each layer while avoiding network degradation. Furthermore, a global 3D attention module is proposed, which can finely extract spectral and spatial global context information while keeping the number of input and output feature maps the same. Experiments are carried out on four commonly used hyperspectral datasets. Extensive experimental results show that, compared with several state-of-the-art classification methods, the proposed method provides higher classification accuracy for hyperspectral images.
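Of the three ingredients, the data-block random erasing strategy is the simplest to illustrate; the following Python sketch zeroes a random small spatial block across all bands of an HSI patch. The block-size limits and the zero fill value are assumptions, not the exact settings of DSSIRNet.

import numpy as np

def block_random_erase(cube, max_frac=0.3, rng=None):
    # Randomly zero a small spatial block across all bands of an
    # HSI patch of shape (H, W, bands) -- a simple augmentation sketch.
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = cube.shape
    eh = rng.integers(1, max(2, int(h * max_frac)))
    ew = rng.integers(1, max(2, int(w * max_frac)))
    y = rng.integers(0, h - eh + 1)
    x = rng.integers(0, w - ew + 1)
    out = cube.copy()
    out[y:y + eh, x:x + ew, :] = 0.0   # erase the block in every band
    return out

patch = np.random.default_rng(1).random((11, 11, 103))  # e.g. a Pavia-sized band count
augmented = block_random_erase(patch)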


2019, Vol 11 (9), pp. 1114
Author(s): Sixiu Hu, Jiangtao Peng, Yingxiong Fu, Luoqing Li

By means of joint sparse representation (JSR) and kernel representation, kernel joint sparse representation (KJSR) models can effectively model the intrinsic nonlinear relations of hyperspectral data and better exploit spatial neighborhood structure to improve the classification performance of hyperspectral images. However, the performance of KJSR is greatly affected by noisy or inhomogeneous pixels around the central testing pixel in the spatial domain. Motivated by the idea of self-paced learning (SPL), this paper proposes a self-paced KJSR (SPKJSR) model to adaptively learn weights and sparse coefficient vectors for different neighboring pixels in the kernel-based feature space. SPL strategies learn a weight that indicates the difficulty of each pixel within a spatial neighborhood. By assigning small weights to unimportant or complex pixels, the negative effect of inhomogeneous or noisy neighboring pixels can be suppressed, making SPKJSR much more robust. Experimental results on the Indian Pines and Salinas hyperspectral data sets demonstrate that SPKJSR is much more effective than traditional JSR and KJSR models.
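The self-paced weighting can be sketched with the classic hard-threshold SPL rule, in which a neighboring pixel is included only while its representation residual stays below an age parameter lambda that grows over iterations. The Python sketch below is a simplified stand-in: SPKJSR learns the weights jointly with the kernel sparse coefficients, and the schedule here is an assumption.

import numpy as np

def self_paced_weights(losses, lam):
    # Hard SPL rule: include a neighbor only if its loss is
    # below the age parameter lambda.
    return (losses < lam).astype(float)

rng = np.random.default_rng(0)
losses = rng.random(25)          # representation residuals for 25 neighboring pixels
lam = np.quantile(losses, 0.4)   # start with the "easiest" 40% of neighbors
for _ in range(3):               # anneal: gradually admit harder pixels
    w = self_paced_weights(losses, lam)
    # ... re-fit the (kernel) sparse coefficients with weights w here ...
    lam *= 1.5                   # increase the age parameter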


2021, Vol 40 (2), pp. 1-19
Author(s): Ethan Tseng, Ali Mosleh, Fahim Mannan, Karl St-Arnaud, Avinash Sharma, ...

Most modern commodity imaging systems, whether used directly for photography or relied on indirectly for downstream applications, employ optical systems of multiple lenses that must balance deviations from perfect optics, manufacturing constraints, tolerances, cost, and footprint. Although optical designs often have complex interactions with downstream image processing or analysis tasks, today's compound optics are designed in isolation from these interactions. Existing optical design tools aim to minimize optical aberrations, such as deviations from Gauss' linear model of optics, instead of application-specific losses, precluding joint optimization with hardware image signal processing (ISP) and highly parameterized neural network processing. In this article, we propose an optimization method for compound optics that lifts these limitations. We optimize entire lens systems jointly with hardware and software image processing pipelines, downstream neural network processing, and application-specific end-to-end losses. To this end, we propose a learned, differentiable forward model for compound optics and an alternating proximal optimization method that handles function compositions with highly varying parameter dimensions for optics, hardware ISP, and neural nets. Our method integrates seamlessly atop existing optical design tools, such as Zemax. We can thus assess our method across many camera system designs and end-to-end applications. We validate our approach in an automotive camera optics setting, together with hardware ISP post-processing and detection, outperforming classical optics designs for automotive object detection and traffic light state detection. For human viewing tasks, we optimize optics and processing pipelines for dynamic outdoor scenarios and dynamic low-light imaging. We outperform existing compartmentalized design or fine-tuning methods qualitatively and quantitatively, across all domain-specific applications tested.
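The alternating proximal scheme can be sketched in a few lines of Python: cycle through the parameter blocks (optics, ISP, network), take a gradient step on the shared end-to-end loss with the other blocks frozen, and apply a proximal operator. The quadratic loss below is a toy stand-in for the learned differentiable optics model plus ISP and detector, and the block sizes, step sizes, and proximal term are all assumptions.

import numpy as np

rng = np.random.default_rng(0)
theta_optics = rng.normal(size=3)    # a few optics parameters (e.g. surface coefficients)
theta_isp    = rng.normal(size=10)   # hardware ISP parameters
theta_net    = rng.normal(size=100)  # neural-network weights

def end_to_end_loss(to, ti, tn):
    # Stand-in for: render through differentiable optics -> ISP -> network -> task loss.
    return (to ** 2).sum() + 0.1 * (ti ** 2).sum() + 0.01 * (tn ** 2).sum()

def num_grad(f, x, eps=1e-5):
    # Central finite differences, since the toy loss is not autodiffed.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def prox_l2(x, lam):
    # Proximal operator of (lam/2) * ||x||^2.
    return x / (1.0 + lam)

for it in range(50):  # alternate over blocks of very different dimensions
    g = num_grad(lambda x: end_to_end_loss(x, theta_isp, theta_net), theta_optics)
    theta_optics = prox_l2(theta_optics - 0.1 * g, 1e-3)
    g = num_grad(lambda x: end_to_end_loss(theta_optics, x, theta_net), theta_isp)
    theta_isp = prox_l2(theta_isp - 0.1 * g, 1e-3)
    g = num_grad(lambda x: end_to_end_loss(theta_optics, theta_isp, x), theta_net)
    theta_net = prox_l2(theta_net - 0.1 * g, 1e-3)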


Author(s): Yang Yang, Chubing Zhang, Yi-Chu Xu, Dianhai Yu, De-Chuan Zhan, ...

The main challenge of cross-modal retrieval is to learn consistent embeddings for heterogeneous modalities. To solve this problem, traditional label-wise cross-modal approaches usually constrain the inter-modal and intra-modal embedding consistency by relying on ground-truth labels. However, experiments reveal that different modal networks actually have different generalization capacities, so end-to-end joint training with a consistency loss usually leads to sub-optimal uni-modal models, which in turn affects the learning of consistent embeddings. Therefore, in this paper, we argue that what is really needed for supervised cross-modal retrieval is a good shared classification model. In other words, we learn the consistent embedding by ensuring the classification performance of each modality on the shared model, without the consistency loss. Specifically, we consider a technique called Semantic Sharing, which trains the two modalities interactively by adopting a shared self-attention based classification model. We evaluate the proposed approach on three representative datasets. The results validate that the proposed semantic sharing can consistently boost performance under the NDCG metric.
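A minimal PyTorch sketch of the semantic-sharing idea follows: each modality keeps its own input projection, but the self-attention encoder and classification head are shared, and training minimizes only the two classification losses, with no consistency term. The feature dimensions, pooling, and layer counts are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class SharedClassifier(nn.Module):
    # One self-attention encoder + classification head shared by both
    # modalities; each modality keeps its own input projection.
    def __init__(self, d_img=2048, d_txt=300, d=256, n_classes=10):
        super().__init__()
        self.proj_img = nn.Linear(d_img, d)   # modality-specific projections
        self.proj_txt = nn.Linear(d_txt, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # shared
        self.head = nn.Linear(d, n_classes)                        # shared

    def forward(self, tokens, modality):
        proj = self.proj_img if modality == "img" else self.proj_txt
        h = self.encoder(proj(tokens)).mean(dim=1)  # pool over tokens
        return self.head(h)

model, ce = SharedClassifier(), nn.CrossEntropyLoss()
img = torch.randn(8, 36, 2048)    # e.g. region features
txt = torch.randn(8, 20, 300)     # e.g. word embeddings
y = torch.randint(0, 10, (8,))
loss = ce(model(img, "img"), y) + ce(model(txt, "txt"), y)  # no consistency loss
loss.backward()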


2021
Author(s): Bruno Silva, Luiz Guerreiro Lopes, Pedro Campos

Processing, handling and visualising the large data volume produced by satellite altimetry missions is a challenging task. A reference tool for the visualisation of satellite laser altimetry data is the OpenAltimetry platform, which provides altimetry-specific data from the Ice, Cloud, and land Elevation Satellite (ICESat) and ICESat-2 missions through a web-based interactive interface. However, by focusing only on altimetry data, that tool leaves out much other equally important information in the data products of both missions.

The main objective of the work reported here was the development of a new web-based tool, called ICEComb, that offers end users the ability to access all the available data from both satellite missions, visualise and interact with them on a geographic map, store the data records locally, and process and explore the data in an efficient, detailed and meaningful way, thus providing an easy-to-use software environment for satellite laser altimetry data analysis and interpretation.

The proposed tool is intended mainly for researchers and scientists working with ICESat and ICESat-2 data, offering a ready-to-use system for rapidly accessing the raw collected data in a visually engaging way, without requiring prior knowledge of the format, structure and parameters of the data products. In addition, the architecture of the ICEComb tool was developed with possible future expansion in mind, using well-documented, standard languages in its implementation. This allows, e.g., extending its applicability to data from other satellite laser altimetry missions and integrating models that can be coupled with ICESat and ICESat-2 data, thus expanding and enriching the context of studies carried out with such data.

The use of the ICEComb tool is illustrated and demonstrated by its application to ICESat/GLAS measurements over Lake Mai-Ndombe, a large and shallow freshwater lake located within the Ngiri-Tumba-Maindombe area, one of the largest Ramsar wetlands of international importance, situated in the Cuvette Centrale region of the Congo Basin.

Keywords: Laser altimetry, ICESat/GLAS, software tool design, data visualization, Congo Basin.

Acknowledgement. This work was partially supported by the Portuguese Foundation for Science and Technology (FCT) through LARSyS - FCT Pluriannual funding 2020-2023.
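ICEComb itself is a web application, but the underlying mission data products are plain HDF5 granules; as a point of reference, here is a minimal Python sketch that reads surface heights from an ICESat-2 ATL06 file with h5py. The file name is hypothetical; the group and dataset names follow the public ATL06 product layout.

import h5py

with h5py.File("ATL06_example.h5", "r") as f:  # hypothetical local granule
    seg = f["gt1l/land_ice_segments"]          # one of the six ground tracks
    lat = seg["latitude"][:]
    lon = seg["longitude"][:]
    h_li = seg["h_li"][:]                      # land-ice surface height (m)

for la, lo, h in zip(lat[:5], lon[:5], h_li[:5]):
    print(f"{la:9.4f} {lo:9.4f} {h:8.2f}")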


2016, Vol 13 (1), pp. 26-27
Author(s): P. Milčák, J. Konvička, M. Jasenská

Abstract. The paper describes the implementation of a biogas station in the software environment for "Smart Heating and Cooling Networks". The aim of this project is the creation of a software tool for planning the operation and optimizing the supply of heat and cold in small regions. In this case, the biogas station represents a renewable energy source which, however, has its own operational specifics that need to be taken into account when creating an implementation project. For a specific biogas station, a detailed computational model was elaborated, parameterized in particular to optimize the total computational time.


Author(s): Tibor Agócs, Willem Jellema, Joost van den Born, Rik ter Horst, Peter Bizenberger, ...
