JexoSim 2.0: End-to-End JWST Simulator for Exoplanet Spectroscopy - Implementation and case studies

Author(s):  
Subhajit Sarkar ◽  
Nikku Madhusudhan

Abstract The recently developed JWST Exoplanet Observation Simulator (JexoSim) simulates transit spectroscopic observations of exoplanets by JWST with each of its four instruments using a time-domain approach. Previously we reported the validation of JexoSim against PandExo and instrument team simulators. In the present study, we report a substantially enhanced version, JexoSim 2.0, which improves on the original through the incorporation of new noise sources, enhanced treatment of stellar and planetary signals and instrumental effects, improved user operability, and optimizations for increased speed and efficiency. A near-complete set of instrument modes for exoplanet time-series observations is now included. In this paper we report the implementation of JexoSim 2.0 and assess performance metrics for JWST in end-member scenarios using the hot Jupiter HD 209458 b and the mini-Neptune K2-18 b. We show how JexoSim can be used to compare performance across the different JWST instruments, selecting an optimal combination of instrument and subarray modes, and producing synthetic transmission spectra for each planet. These studies indicate that the 1.4 $\mu$m water feature detected in the atmosphere of K2-18 b using the Hubble WFC3 might be observable in just one transit observation with JWST using either NIRISS or NIRSpec. JexoSim 2.0 can be used to investigate the impact of complex noise and systematic effects on the final spectrum, to plan observations, and to test the feasibility of novel science cases for JWST. It can also be customised for other astrophysical applications beyond exoplanet spectroscopy. JexoSim 2.0 is now available for use by the scientific community.
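As a toy illustration of the time-domain approach described above (not the JexoSim API; all numbers are hypothetical), each spectral channel can be treated as a photon-noise-limited light curve from which a transit depth is recovered:

```python
import numpy as np

rng = np.random.default_rng(42)

# One spectral channel as a time series: box transit + photon noise
t = np.linspace(-2.0, 2.0, 4000)              # hours from mid-transit
depth_true = 0.0147                           # HD 209458 b-like (Rp/Rs)^2
in_transit = np.abs(t) < 1.5                  # no limb darkening / ingress
model = 1.0 - depth_true * in_transit

e_per_exposure = 2.0e7                        # assumed channel count rate
flux = rng.poisson(model * e_per_exposure) / e_per_exposure

# Recover the depth and its photon-noise uncertainty from the two segments
oot, itr = flux[~in_transit], flux[in_transit]
depth = oot.mean() - itr.mean()
err = np.sqrt(oot.var() / oot.size + itr.var() / itr.size)
print(f"depth = {depth:.5f} +/- {err:.5f}  (true {depth_true})")
```

JexoSim layers instrumental noise sources and systematic effects on top of exactly this kind of time series, channel by channel, before the spectrum is binned.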

2019 ◽  
pp. 27-35
Author(s):  
Alexandr Neznamov

Digital technologies are no longer the future but the present of civil proceedings, which makes any research in this direction relevant. At the same time, some fundamental problems remain unattended by the scientific community. One of these is the classification of digital technologies in civil proceedings. On the basis of instrumental and genetic approaches to the understanding of digital technologies, it is concluded that their most significant feature is the ability to mediate the interaction of participants in legal proceedings with information; their differentiating feature is the function performed by a particular technology in that interaction with information. On this basis, it is proposed to distinguish the following groups of digital technologies in civil proceedings: a) technologies for recording, storing and displaying (reproducing) information; b) technologies for transferring information; c) technologies for processing information. A brief description is given of each group. The presented classification could serve as a basis for a more systematic discussion of the impact of digital technologies on the essence of civil proceedings. In particular, it is pointed out that issues of recording, storing, reproducing and transferring information are traditionally more "technological" for civil procedure, while issues of information processing are more conceptual.


Author(s):  
J. R. Barnes ◽  
C. A. Haswell

Abstract Ariel’s ambitious goal to survey a quarter of known exoplanets will transform our knowledge of planetary atmospheres. Masses measured directly with the radial velocity technique are essential for well-determined planetary bulk properties. Radial velocity masses will provide important checks of masses derived from atmospheric fits, or can alternatively be treated as a fixed input parameter to reduce possible degeneracies in atmospheric retrievals. We quantify the impact of stellar activity on planet mass recovery for the Ariel mission sample using Sun-like spot models scaled for active stars, combined with other noise sources. Planets with necessarily well-determined ephemerides will be selected for characterisation with Ariel. With this prior requirement, we simulate the derived planet mass precision as a function of the number of observations for a prospective sample of Ariel targets. We find that quadrature sampling can significantly reduce the time commitment required for follow-up RVs, and is most effective when the planetary RV signature is larger than the RV noise. For a typical radial velocity instrument operating on a 4 m class telescope and achieving 1 m s$^{-1}$ precision, between ~17% and ~37% of the time commitment is spent on the 7% of planets with mass $M_p$ < 10 $M_\oplus$. In many low-activity cases, the time required is limited by asteroseismic and photon noise. For low-mass or faint systems, we can recover masses with the same precision up to ~3 times more quickly with an instrumental precision of ~10 cm s$^{-1}$.
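A minimal sketch of why quadrature sampling is efficient, assuming a circular orbit and a known ephemeris (my construction, not the authors' pipeline): the semi-amplitude K then enters the RV model linearly, so its uncertainty after N epochs depends only on where in phase the epochs fall.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_rv = 1.0                                # m/s instrumental + stellar noise

def sigma_K(phases, sigma):
    """1-sigma error on K from linear least squares of v = K * sin(phase)."""
    s = np.sin(phases)
    return sigma / np.sqrt(np.sum(s**2))

N = 20
random_phases = rng.uniform(0.0, 2.0 * np.pi, N)
quad_phases = np.where(rng.random(N) < 0.5, np.pi / 2, 3 * np.pi / 2)

print("sigma_K, random sampling    :", sigma_K(random_phases, sigma_rv))
print("sigma_K, quadrature sampling:", sigma_K(quad_phases, sigma_rv))
# Quadrature gives sigma/sqrt(N), the best achievable; random phases are
# ~sqrt(2) worse on average, so more epochs are needed for the same mass error.
```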


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Tawfik Yahya ◽  
Nur Azah Hamzaid ◽  
Sadeeq Ali ◽  
Farahiyah Jasni ◽  
Hanie Nadia Shasmin

Abstract A transfemoral prosthesis is required to assist amputees in performing activities of daily living (ADL). Passive prostheses have drawbacks such as high metabolic energy utilization. In contrast, active prostheses consume less metabolic energy and offer better performance. However, recent active prostheses use surface electromyography as their sensory system, which yields weak, microvolt-level signals and requires substantial computation to extract features. This paper focuses on recognizing different phases of sitting and standing of a transfemoral amputee using in-socket piezoelectric-based sensors. Fifteen piezoelectric film sensors were embedded in the inner socket wall adjacent to the most active regions of the agonist and antagonist knee extensor and flexor muscles, i.e., the regions with the highest level of muscle contraction in the quadriceps and hamstring. A male transfemoral amputee wore the instrumented socket and was instructed to perform several sitting and standing phases using an armless chair. Data were collected from the 15 embedded sensors and passed through signal conditioning circuits. The overlapping analysis window technique was used to segment the data using different window lengths. Fifteen time-domain and frequency-domain features were extracted, and new feature sets were obtained based on feature performance. Eight common pattern recognition multiclass classifiers were evaluated and compared. Regression analysis was used to investigate the impact of the number of features and the window lengths on the classifiers' accuracies, and analysis of variance (ANOVA) was used to test for significant differences in the classifiers' performances. Classification accuracy was calculated using the k-fold cross-validation method, and 20% of the data set was held out for testing the optimal classifier. The results showed that the feature set (FS-5) consisting of the root mean square (RMS) and the number of peaks (NP) achieved the highest classification accuracy in five classifiers. A support vector machine (SVM) with a cubic kernel proved to be the optimal classifier, achieving a classification accuracy of 98.33% on the test data set. Obtaining high classification accuracy using only two time-domain features would significantly reduce the processing time of controlling a prosthesis and eliminate substantial delay. The proposed in-socket sensors used to detect sit-to-stand and stand-to-sit movements could be further integrated with an active knee joint actuation system to produce powered assistance during energy-demanding activities such as sit-to-stand transitions and stair climbing. In the future, the system could also be used to accurately predict the intended movement based on the residual limb's muscle and mechanical behaviour as detected by the in-socket sensory system.
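The two winning features are simple enough to sketch end to end. The following is a hedged illustration in which synthetic signals stand in for the 15-channel piezoelectric data and the window sizes are assumed, not taken from the paper:

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def window_features(signal, win=200, step=100):
    """Overlapping analysis windows -> [RMS, number of peaks] per window."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([np.sqrt(np.mean(w**2)),        # RMS
                      len(find_peaks(w)[0])])        # NP
    return np.array(feats)

# Synthetic stand-ins for two movement phases (amplitudes assumed)
X0 = window_features(rng.normal(0.0, 0.2, 20000))    # e.g. quiet sitting
X1 = window_features(rng.normal(0.0, 1.0, 20000))    # e.g. stand-up effort
X = np.vstack([X0, X1])
y = np.r_[np.zeros(len(X0)), np.ones(len(X1))]

# 80/20 split with a held-out test set, mirroring the paper's protocol
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
clf = SVC(kernel="poly", degree=3)                   # "cubic" SVM
print("5-fold CV accuracy:", cross_val_score(clf, X_tr, y_tr, cv=5).mean())
print("held-out accuracy :", clf.fit(X_tr, y_tr).score(X_te, y_te))
```

Because both features are cheap, per-window computation, this pipeline is consistent with the paper's point that a two-feature set keeps control latency low.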


Author(s):  
E Gaztanaga ◽  
S J Schmidt ◽  
M D Schneider ◽  
J A Tyson

Abstract We test the impact of some systematic errors in weak lensing magnification measurements with the COSMOS 30-band photo-z survey, flux limited to $I_{auto}$ < 25.0, using correlations of both source galaxy counts and magnitudes. Systematic obscuration effects are measured by comparing counts and magnification correlations. We use the ACS-HST catalogues to identify potential blending objects (close pairs) and perform the magnification analyses with and without blended objects. We find that blending effects start to become important (∼0.04 mag obscuration) at angular scales smaller than 0.1 arcmin. Extinction and other systematic obscuration effects can be as large as 0.10 mag (U band) but are typically smaller than 0.02 mag, depending on the band. After applying these corrections, we measure a 3.9σ magnification signal that is consistent for both counts and magnitudes. The corresponding projected mass profile of galaxies at redshift z ≃ 0.6 ($M_I$ ≃ −21) is Σ = 25 ± 6 M⊙ h/pc² at 0.1 Mpc/h, consistent with an NFW-type profile with M200 ≃ 2 × 10¹² M⊙/h. Tangential shear and flux-size magnification over the same lenses show similar mass profiles. We conclude that magnification from counts and fluxes using photometric redshifts has the potential to provide complementary weak lensing information in future wide-field surveys once we carefully take into account systematic effects such as obscuration and blending.
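Both magnification observables follow from the same standard weak-lensing relations, which a short sketch makes concrete (the convergence and count-slope values below are assumed purely for illustration):

```python
import numpy as np

kappa = 0.01                     # assumed weak-lensing convergence of the lens
s = 0.8                          # assumed count slope, s = dlog10 N(<m)/dm
mu = 1.0 + 2.0 * kappa           # magnification in the weak limit (shear ~ 0)

delta_counts = mu**(2.5 * s - 1.0) - 1.0   # fractional excess of source counts
delta_mag = -2.5 * np.log10(mu)            # shift of mean source magnitudes

print(f"count excess   : {delta_counts:+.4f}")
print(f"magnitude shift: {delta_mag:+.4f} mag")
# Both observables trace the same kappa; comparing them is what isolates
# obscuration systematics such as the ~0.04 mag blending effect quoted above.
```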


2016 ◽  
Vol 35 (3) ◽  
pp. 358-370 ◽  
Author(s):  
Paul Hanlon ◽  
Gregory P. Brorby ◽  
Mansi Krishan

Processing (e.g., cooking, grinding, drying) has changed the composition of food throughout the course of human history; however, awareness of process-formed compounds, and the potential need to mitigate exposure to those compounds, is a relatively recent phenomenon. In May 2015, the North American Branch of the International Life Sciences Institute (ILSI North America) Technical Committee on Food and Chemical Safety held a workshop on the risk-based process for mitigation of process-formed compounds. This workshop aimed to gain alignment from academia, government, and industry on a risk-based process for proactively assessing the need for and benefit of mitigation of process-formed compounds, including criteria to objectively assess the impact of mitigation as well as the research needed to support this process. Workshop participants provided a panel of experts with real-time feedback on a draft framework, in the form of a decision tree, developed by the ILSI North America Technical Committee on Food and Chemical Safety, and they discussed the importance of communicating the value of such a process to the larger scientific community and, ultimately, the public. The outcome of the workshop was a decision tree that can be used by the scientific community and could form the basis of a global approach to assessing the risks associated with mitigation of process-formed compounds.


2010 ◽  
Vol 132 (4) ◽  
Author(s):  
Marwan Hassan ◽  
Achraf Hossen

This paper presents simulations of a loosely supported cantilever tube subjected to turbulence and fluidelastic instability forces. Several time-domain fluid force models are presented to simulate the damping-controlled fluidelastic instability mechanism in tube arrays. These models include a negative damping model based on the Connors equation, fluid force coefficient-based models (Chen, 1983, “Instability Mechanisms and Stability Criteria of a Group of Cylinders Subjected to Cross-Flow. Part 1: Theory,” Trans. ASME, J. Vib., Acoust., Stress, Reliab. Des., 105, pp. 51–58; Tanaka and Takahara, 1981, “Fluid Elastic Vibration of Tube Array in Cross Flow,” J. Sound Vib., 77, pp. 19–37), and two semi-analytical models (Price and Païdoussis, 1984, “An Improved Mathematical Model for the Stability of Cylinder Rows Subjected to Cross-Flow,” J. Sound Vib., 97(4), pp. 615–640; Lever and Weaver, 1982, “A Theoretical Model for the Fluidelastic Instability in Heat Exchanger Tube Bundles,” ASME J. Pressure Vessel Technol., 104, pp. 104–147). Time-domain modeling and implementation challenges for each of these theories are discussed. For each model, the flow velocity and the support clearance were varied. Special attention was paid to the tube/support interaction parameters that affect wear, such as impact forces and normal work rate. As the prediction of the linear threshold varies depending on the model utilized, the nonlinear response also differs. The investigated models exhibit similar response characteristics for the lift response. The greatest differences were seen in the prediction of the drag response, the impact force level, and the normal work rate. Simulation results show that the Connors-based model consistently underestimates the response and the tube/support interaction parameters for the loose support case.
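As a minimal illustration of the negative-damping idea underlying the Connors-based model (a single-degree-of-freedom simplification of my own, not the paper's tube-array models), the net damping ratio can be made to fall with flow velocity so that the response grows once a critical velocity is exceeded:

```python
import numpy as np
from scipy.integrate import solve_ivp

f_n, zeta_s = 10.0, 0.01          # tube natural frequency (Hz), structural damping
w_n = 2.0 * np.pi * f_n
U_c = 5.0                         # assumed critical (Connors-type) velocity

def rhs(t, y, U):
    x, v = y
    zeta = zeta_s * (1.0 - (U / U_c)**2)   # fluid term drives damping negative
    return [v, -2.0 * zeta * w_n * v - w_n**2 * x]

for U in (3.0, 6.0):              # below and above the stability threshold
    sol = solve_ivp(rhs, (0.0, 5.0), [1e-4, 0.0], args=(U,), max_step=1e-3)
    print(f"U = {U}: peak |x| = {np.abs(sol.y[0]).max():.3e}")
```

The loosely supported case studied in the paper would add a piecewise contact force at the support clearance on top of this linear skeleton, which is what produces the impact forces and work rates being compared.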


2021 ◽  
Vol 37 (1_suppl) ◽  
pp. 1420-1439
Author(s):  
Albert R Kottke ◽  
Norman A Abrahamson ◽  
David M Boore ◽  
Yousef Bozorgnia ◽  
Christine A Goulet ◽  
...  

Traditional ground-motion models (GMMs) are used to compute the pseudo-spectral acceleration (PSA) expected from future earthquakes and are generally developed by regression of PSA using a physics-based functional form. PSA is a relatively simple metric that correlates well with the response of several engineering systems and is commonly used in engineering evaluations; however, characteristics of the PSA calculation make the application of scaling factors dependent on the frequency content of the input motion, complicating the development and adaptability of GMMs. By comparison, the Fourier amplitude spectrum (FAS) represents ground-motion amplitudes that are completely independent of the amplitudes at other frequencies, making it an attractive alternative for GMM development. Random vibration theory (RVT) predicts the peak response of a motion in the time domain based on the FAS and a duration, and thus can be used to relate FAS to PSA. Using RVT to compute the expected peak time-domain response for a given FAS therefore presents a significant advantage that is gaining traction in the GMM field. This article provides recommended RVT procedures relevant to GMM development, which were developed for the Next Generation Attenuation (NGA)-East project. In addition, an orientation-independent FAS metric—called the effective amplitude spectrum (EAS)—is developed for use in conjunction with RVT to preserve the mean power of the two corresponding horizontal components considered in traditional PSA-based modeling (i.e., RotD50). The EAS uses a standardized smoothing approach to provide a practical representation of the FAS for ground-motion modeling, while minimizing the impact on the four RVT properties (the zeroth moment, the bandwidth parameter, the frequency of zero crossings, and the frequency of extrema). Although the recommendations were originally developed for NGA-East, they and the methodology on which they are based can be adapted to other GMM and engineering problems requiring the computation of PSA from FAS.
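A short sketch using standard RVT definitions (toy spectra, not the NGA-East implementation) shows how the EAS preserves the mean power of the two horizontal components and how the four properties above follow from spectral moments of the squared spectrum:

```python
import numpy as np
from scipy.integrate import trapezoid

f = np.linspace(0.1, 50.0, 2000)                     # frequency grid (Hz)
fas_h1 = np.exp(-(np.log(f / 2.0))**2)               # toy horizontal FAS pair
fas_h2 = 0.8 * fas_h1

eas = np.sqrt(0.5 * (fas_h1**2 + fas_h2**2))         # mean-power FAS metric

def moment(k):
    """k-th spectral moment of the squared spectrum, m_k."""
    return 2.0 * trapezoid((2 * np.pi * f)**k * eas**2, f)

m0, m2, m4 = moment(0), moment(2), moment(4)
f_zc = np.sqrt(m2 / m0) / (2 * np.pi)                # frequency of zero crossings
f_ext = np.sqrt(m4 / m2) / (2 * np.pi)               # frequency of extrema
xi = m2 / np.sqrt(m0 * m4)                           # one common bandwidth measure

print(f"m0={m0:.3e}  f_zc={f_zc:.2f} Hz  f_ext={f_ext:.2f} Hz  xi={xi:.2f}")
# RVT pairs these with a duration D: rms = sqrt(m0 / D), and a peak factor
# built from the bandwidth converts the rms response into the expected peak.
```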


2021 ◽  
Vol 504 (2) ◽  
pp. 2224-2234
Author(s):  
Nan Li ◽  
Christoph Becker ◽  
Simon Dye

ABSTRACT Measurements of the Hubble–Lemaître constant from early- and local-Universe observations show a significant discrepancy. In an attempt to understand the origin of this mismatch, independent techniques to measure H0 are required. One such technique, strong lensing time delays, is set to become a leading contender amongst the myriad methods due to forthcoming large strong lens samples. It is therefore critical to understand the systematic effects inherent in this method. In this paper, we quantify the influence of additional structures along the line of sight by adopting realistic light-cones derived from the cosmoDC2 semi-analytical extragalactic catalogue. Using multiple-lens-plane ray tracing to create a set of simulated strong lensing systems, we have investigated the impact of line-of-sight structures on time-delay measurements and, in turn, on the inferred value of H0. We have also tested the reliability of existing procedures for correcting for line-of-sight effects. We find that if the integrated contribution of the line-of-sight structures is close to a uniform mass sheet, the bias in H0 can be adequately corrected by including a constant external convergence κext in the lens model. However, for realistic line-of-sight structures comprising many galaxies at different redshifts, this simple correction overestimates the bias by an amount that depends linearly on the median external convergence. We therefore conclude that lens modelling must incorporate multiple lens planes to account for line-of-sight structures for accurate and precise inference of H0.
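The constant-κext correction being tested is compact enough to state in code. This sketch, with an assumed convergence distribution, applies the standard first-order mass-sheet relation between external convergence and the inferred H0:

```python
import numpy as np

rng = np.random.default_rng(7)

h0_modeled = 73.0                                  # km/s/Mpc, hypothetical value
kappa_ext = rng.normal(0.02, 0.01, 10_000)         # assumed LOS convergence PDF

# First-order mass-sheet correction: H0_true ~ (1 - kappa_ext) * H0_modeled
h0_corrected = (1.0 - kappa_ext) * h0_modeled
print(f"H0 = {h0_corrected.mean():.2f} +/- {h0_corrected.std():.2f} km/s/Mpc")
```

The paper's finding is that for realistic multi-plane lines of sight this single constant-κext correction over-corrects, with the residual error scaling linearly with the median external convergence.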


2021 ◽  
Vol 11 (5) ◽  
pp. 2307
Author(s):  
João Lincho ◽  
Rui C. Martins ◽  
João Gomes

Parabens are widely used in different industries as preservatives and antimicrobial compounds. The evolution of analytical techniques has allowed the detection of these compounds in different sources at µg/L and ng/L levels. To date, parabens have been found in water sources, air, soil and even in human tissues. The impact of parabens on humans, animals and ecosystems is a matter of discussion within the scientific community, but it is proven that parabens can act as endocrine disruptors, and some reports suggest that they are carcinogenic compounds. The presence of parabens in ecosystems is mainly related to wastewater discharges. This work gives an overview of the paraben problem, starting with their characteristics and applications. Moreover, the dangers related to their usage are addressed through the evaluation of toxicological studies on different species, as well as on humans. On this basis, paraben detection in different water sources, wastewater treatment plants, humans and animals is analysed from literature results. A review of European legislation regarding parabens is also performed, presenting some considerations for the use of parabens.


2021 ◽  
Vol 2 (3) ◽  
Author(s):  
Thomas Ayral ◽  
François-Marie Le Régent ◽  
Zain Saleem ◽  
Yuri Alexeev ◽  
Martin Suchara

Abstract Our recent work (Ayral et al. in Proceedings of IEEE computer society annual symposium on VLSI, ISVLSI, pp 138–140, 2020. 10.1109/ISVLSI49217.2020.00034) showed the first implementation of the Quantum Divide and Compute (QDC) method, which allows quantum circuits to be broken into smaller fragments with fewer qubits and shallower depth. This accommodates the limited number of qubits and short coherence times of quantum processors. This article investigates the impact of different noise sources—readout error, gate error and decoherence—on the success probability of the QDC procedure. We perform detailed noise modeling on the Atos Quantum Learning Machine, allowing us to understand tradeoffs and formulate recommendations about which hardware noise sources should be preferentially optimized. We also describe in detail the noise models we used to reproduce experimental runs on IBM’s Johannesburg processor. This article also includes a detailed derivation of the equations used in the QDC procedure to compute the output distribution of the original quantum circuit from the output distributions of its fragments. Finally, we analyze the computational complexity of the QDC method for the circuit under study via tensor-network considerations, and elaborate on the relation of the QDC method to tensor-network simulation methods.
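As loose intuition for why fragmenting helps (a toy model of my own, not the paper's detailed noise modeling), the three noise sources studied here can be compounded into a circuit-level success probability that improves as qubit count and depth shrink:

```python
import numpy as np

def success_probability(n_gates, n_qubits, depth,
                        p_gate=1e-3,     # assumed error per gate
                        p_readout=2e-2,  # assumed readout error per qubit
                        t_gate=200e-9,   # assumed gate duration (s)
                        T2=100e-6):      # assumed coherence time (s)
    """Compound three independent noise channels into one success estimate."""
    gates = (1.0 - p_gate) ** n_gates
    readout = (1.0 - p_readout) ** n_qubits
    decoherence = np.exp(-depth * t_gate * n_qubits / T2)
    return gates * readout * decoherence

# Fragments have fewer qubits, fewer gates and shallower depth than the
# original circuit, so each fragment run is individually more reliable:
print("full circuit:", success_probability(n_gates=400, n_qubits=20, depth=40))
print("one fragment:", success_probability(n_gates=120, n_qubits=10, depth=20))
```

The trade-off the QDC equations quantify is that the fragment distributions must then be recombined, at a classical post-processing cost that grows with the number of cuts.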

