Instrumental Quality Predictions and Analysis of Auditory Cues for Algorithms in Modern Headphone Technology

2021 · Vol 25 · pp. 233121652110012
Author(s):  
Thomas Biberger ◽  
Henning Schepker ◽  
Florian Denk ◽  
Stephan D. Ewert

Smart headphones or hearables use different types of algorithms, such as noise cancelation, feedback suppression, and sound pressure equalization, to eliminate undesired sound sources or to achieve acoustical transparency. Such signal processing strategies might alter the spectral composition or interaural differences of the original sound, which may be perceived by listeners as monaural or binaural distortions and thus degrade audio quality. To evaluate the perceptual impact of these distortions, subjective quality ratings can be used, but these are time-consuming and costly. Auditory-inspired instrumental quality measures can be applied with less effort and may also help identify whether the distortions impair the auditory representation of monaural or binaural cues. Therefore, the goals of this study were (a) to assess the applicability of various monaural and binaural audio quality models to distortions typically occurring in hearables and (b) to examine the effect of those distortions on the auditory representation of spectral, temporal, and binaural cues. Results showed that the signal processing algorithms considered in this study mainly impaired (monaural) spectral cues. Consequently, monaural audio quality models that capture spectral distortions achieved the best prediction performance. A recent audio quality model that predicts monaural and binaural aspects of quality was revised based on the parts of the current data involving binaural audio quality aspects, leading to improved overall performance, indicated by a mean Pearson linear correlation of 0.89 between obtained and predicted ratings.
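
As a minimal illustration of the evaluation metric reported above (not the study's quality models themselves), the Pearson linear correlation between obtained and predicted ratings can be computed as follows; the rating values are hypothetical.

```python
import numpy as np

# Hypothetical subjective quality ratings and corresponding model
# predictions; these are illustrative values, not data from the study.
obtained = np.array([82.0, 45.5, 60.2, 91.3, 30.8, 73.1])
predicted = np.array([79.4, 50.1, 58.7, 88.9, 35.2, 70.6])

# Pearson linear correlation, the figure of merit reported in the study
# (a value near 0.89 indicates strong agreement with subjective ratings).
r = np.corrcoef(obtained, predicted)[0, 1]
print(f"Pearson r = {r:.2f}")
```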

1989 · Vol 21 (8-9) · pp. 1015-1024
Author(s):  
C. P. Crockett ◽  
R. W. Crabtree ◽  
I. D. Cluckie

In England and Wales the placing of effluent discharge consents within a statistical framework has led to the development of a new hybrid type of river quality model. Such catchment-scale consent models have a stochastic component for the generation of model inputs and a deterministic component to route them through the river system. This paper reviews and compares the existing approaches for consent modelling used by various Water Authorities. A number of possible future developments are suggested, including the potential need for a national approach to the review and setting of long-term consents.
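
A minimal sketch of the hybrid structure described above, assuming a lognormal distribution for effluent quality (the stochastic component) and first-order in-river decay (the deterministic component); all parameter values and the compliance standard are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stochastic component: generate effluent BOD concentrations (mg/l)
# from a lognormal distribution, a common assumption for discharge quality.
n = 10_000
effluent_bod = rng.lognormal(mean=np.log(20.0), sigma=0.4, size=n)

# Deterministic component: route each input downstream with first-order
# decay over a fixed travel time (hypothetical rate and travel time).
k = 0.23           # decay rate (1/day)
travel_time = 1.5  # days to the compliance point
downstream_bod = effluent_bod * np.exp(-k * travel_time)

# Statistical consent check: e.g., the 95th percentile at the compliance
# point must not exceed a target concentration (hypothetical standard).
p95 = np.percentile(downstream_bod, 95)
print(f"95th-percentile downstream BOD: {p95:.1f} mg/l")
```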


Author(s):  
Maria Ulan ◽  
Welf Löwe ◽  
Morgan Ericsson ◽  
Anna Wingkvist

Abstract. A quality model is a conceptual decomposition of an abstract notion of quality into relevant, possibly conflicting characteristics and further into measurable metrics. For quality assessment and decision making, metric values are aggregated to characteristics and ultimately to quality scores. Aggregation has often been problematic, as quality models do not provide the semantics of aggregation. This makes it hard to formally reason about metrics, characteristics, and quality. We argue that aggregation needs to be interpretable and mathematically well defined in order to assess, compare, and improve quality. To address this challenge, we propose a probabilistic approach to aggregation and define quality scores based on joint distributions of absolute metric values. To evaluate the proposed approach and its implementation under realistic conditions, we conduct empirical studies on bug prediction for ca. 5000 software classes, on the maintainability of ca. 15000 open-source software systems, and on the information quality of ca. 100000 real-world technical documents. We found that our approach is feasible, accurate, and scalable in performance.
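
One way to realize the probabilistic aggregation idea is sketched below, under two stated assumptions: lower metric values are better, and the score of an entity is the empirical joint probability of observing metric values at least as bad across the population. This is an illustrative sketch, not the authors' exact definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical metric values for 1000 software classes, where lower is
# better for each metric (e.g., complexity, coupling, size).
metrics = rng.lognormal(mean=0.0, sigma=0.5, size=(1000, 3))

def quality_score(x, population):
    """Aggregate metrics probabilistically: the score of an entity is the
    empirical joint probability that a random entity from the population
    is at least as bad (component-wise) as the entity itself."""
    worse_or_equal = np.all(population >= x, axis=1)
    return worse_or_equal.mean()

scores = np.array([quality_score(m, metrics) for m in metrics])
print(f"best class score: {scores.max():.3f}, median: {np.median(scores):.3f}")
```

Because the score is a probability over the joint distribution, it is interpretable (the fraction of entities that are no better on every metric) and directly comparable across entities.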


Water · 2021 · Vol 13 (1) · pp. 88
Author(s):  
Xiamei Man ◽  
Chengwang Lei ◽  
Cayelan C. Carey ◽  
John C. Little

Many researchers use one-dimensional (1-D) and three-dimensional (3-D) coupled hydrodynamic and water-quality models to simulate water quality dynamics, but direct comparison of their relative performance is rare. Such comparisons may quantify their relative advantages, which can inform best practices. In this study, we compare two 1-year simulations in a shallow, eutrophic, managed reservoir using a community-developed 1-D model and a 3-D model coupled with the same water-quality model library based on multiple evaluation criteria. In addition, a verified bubble plume model is coupled with the 1-D and 3-D models to simulate the water temperature in four epilimnion mixing periods to further quantify the relative performance of the 1-D and 3-D models. Based on the present investigation, adopting a 1-D water-quality model to calibrate a 3-D model is time-efficient and can produce reasonable results; 3-D models are recommended for simulating thermal stratification and management interventions, whereas 1-D models may be more appropriate for simpler model setups, especially if field data needed for 3-D modeling are lacking.
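
A small sketch of one common evaluation criterion for such model comparisons, root-mean-square error against field observations; the depth-profile temperatures below are hypothetical, not data from the study.

```python
import numpy as np

def rmse(simulated, observed):
    """Root-mean-square error, a typical criterion for comparing
    simulated and observed water temperature."""
    return np.sqrt(np.mean((np.asarray(simulated) - np.asarray(observed)) ** 2))

# Hypothetical water temperatures (deg C) at the same depths from field
# data and from 1-D and 3-D model output; illustrative values only.
observed = [24.1, 23.8, 22.0, 18.5, 12.3, 9.8]
sim_1d   = [24.5, 24.0, 21.4, 17.9, 12.9, 10.1]
sim_3d   = [24.2, 23.9, 21.9, 18.2, 12.5, 9.9]

print(f"1-D RMSE: {rmse(sim_1d, observed):.2f} deg C")
print(f"3-D RMSE: {rmse(sim_3d, observed):.2f} deg C")
```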


Author(s):  
Julien Siebert ◽  
Lisa Joeckel ◽  
Jens Heidrich ◽  
Adam Trendowicz ◽  
Koji Nakamichi ◽  
...  

Abstract. Nowadays, systems containing components based on machine learning (ML) methods are becoming more widespread. In order to ensure the intended behavior of a software system, there are standards that define the necessary qualities of the system and its components (such as ISO/IEC 25010). Due to the different nature of ML, we have to re-interpret existing qualities for ML systems or add new ones (such as trustworthiness). We have to be very precise about which quality property is relevant for which entity of interest (such as completeness of the training data or correctness of the trained model), and about how to objectively evaluate adherence to quality requirements. In this article, we present how to systematically construct quality models for ML systems based on an industrial use case. This quality model enables practitioners to specify and assess qualities for ML systems objectively. In addition to the overall construction process described, the main outcomes include a meta-model for specifying quality models for ML systems; reference elements regarding relevant views, entities, quality properties, and measures for ML systems based on existing research; an example instantiation of a quality model for a concrete industrial use case; and lessons learned from applying the construction process. We found that it is crucial to follow a systematic process in order to come up with measurable quality properties that can be evaluated in practice. In the future, we want to learn how the term quality differs between different types of ML systems and come up with reference quality models for evaluating the qualities of ML systems.
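
A minimal sketch of what a quality meta-model in the spirit described above might look like, relating entities of interest, quality properties, and measures; the class and field names are illustrative assumptions, not the authors' meta-model.

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    name: str       # how the property is quantified
    unit: str
    target: float   # threshold the measured value must reach

@dataclass
class QualityProperty:
    name: str       # e.g., "completeness" or "correctness"
    entity: str     # entity of interest, e.g., "training data", "trained model"
    measures: list = field(default_factory=list)

@dataclass
class QualityModel:
    use_case: str
    properties: list = field(default_factory=list)

# Example instantiation for a hypothetical industrial use case.
model = QualityModel(
    use_case="industrial ML component",
    properties=[
        QualityProperty(
            name="correctness",
            entity="trained model",
            measures=[Measure("test accuracy", "%", target=95.0)],
        )
    ],
)
print(model.properties[0].measures[0])
```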


2018 · Vol 18 (14) · pp. 10199-10218
Author(s):  
Marta G. Vivanco ◽  
Mark R. Theobald ◽  
Héctor García-Gómez ◽  
Juan Luis Garrido ◽  
Marje Prank ◽  
...  

Abstract. The evaluation and intercomparison of air quality models is key to reducing model errors and uncertainty. The projects AQMEII3 and EURODELTA-Trends, in the framework of the Task Force on Hemispheric Transport of Air Pollutants and the Task Force on Measurements and Modelling, respectively (both task forces under the UNECE Convention on Long-range Transboundary Air Pollution, LRTAP), have brought together various regional air quality models to analyze their performance in terms of air concentrations and wet deposition, as well as to address other specific objectives. This paper jointly examines the results from both project communities by intercomparing and evaluating the deposition estimates of reduced and oxidized nitrogen (N) and sulfur (S) in Europe simulated by 14 air quality model systems for the year 2010. An accurate estimate of deposition is key to an accurate simulation of atmospheric concentrations. In addition, deposition fluxes are increasingly being used to estimate ecological impacts. It is therefore important to know by how much model results differ and how well they agree with observed values, at least where comparison with observations is possible, as in the case of wet deposition. This study reveals a large variability between the wet deposition estimates of the models, with some performing acceptably (according to previously defined criteria) and others underestimating wet deposition rates. For dry deposition, there are also considerable differences between the model estimates. An ensemble of the models with the best performance for N wet deposition was constructed and used to explore the implications of N deposition for the conservation of protected European habitats. Exceedances of empirical critical loads were calculated for the most common habitats at a resolution of 100 m × 100 m within the Natura 2000 network, and the habitats with the largest areas in exceedance were identified. Moreover, simulations with reduced emissions in selected source areas indicated a fairly linear relationship between reductions in emissions and changes in the deposition rates of N and S. An approximate 20% reduction in N and S deposition in Europe is found when emissions at a global scale are reduced by the same amount. European emissions are by far the main contributor to deposition in Europe, whereas the reduction in deposition due to a decrease in emissions in North America is very small and confined to the western part of the domain. Reductions in European emissions led to substantial decreases in the protected habitat areas with critical load exceedances (halving the exceeded area for certain habitats), whereas, on average per habitat, no change was found when reducing North American emissions.
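
A minimal sketch of the critical-load exceedance calculation described above, using an ensemble mean over models; all deposition values and critical loads are hypothetical.

```python
import numpy as np

# Hypothetical N deposition estimates (kg N/ha/yr) from several models on
# the same grid cells; as in the study, an ensemble mean of the
# best-performing models is used. Values are illustrative only.
model_deposition = np.array([
    [12.0, 7.5, 22.1],   # model A
    [10.4, 8.1, 19.8],   # model B
    [11.2, 6.9, 21.0],   # model C
])
ensemble_mean = model_deposition.mean(axis=0)

# Empirical critical loads for the habitat in each cell (kg N/ha/yr).
critical_load = np.array([10.0, 10.0, 15.0])

# Exceedance is the amount by which deposition surpasses the critical
# load; negative values mean no exceedance and are clipped to zero.
exceedance = np.clip(ensemble_mean - critical_load, 0.0, None)
print(exceedance)
```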


Author(s):  
Kelvin Kabeti Omieno

The enterprise resource planning (ERP) system is a complex and comprehensive software system that integrates various enterprise functions and resources. Although ERP systems have been depicted as a solution in many organizations, there are many negative reports on ERP success, benefits, and effects on user performance. Previous research has noted a lack of knowledge and awareness of ERP systems and their overall value to the organizations that adopt them. ERP systems have been widely studied during the past decade, yet they often fail to deliver the benefits originally expected. One notable reason for these failures is a lack of understanding of user requirements. Many studies have proposed software quality models and their quality characteristics; however, there is currently no dedicated software quality model that describes usability maturity and accounts for the new features of ERP systems. This chapter proposes a framework for evaluating usability maturity as a quality attribute of ERP systems.


Author(s):  
Mohamed A Sheriff ◽  
Elli Georgiadou

The ultimate objective of software development should be to deliver value to all stakeholders. The traditional approach to delivering this value is to ensure that the software developed is of the highest quality. A number of quality models have been proposed to specify or describe what constitutes high-quality software; ISO9126 is one such model and perhaps the most comprehensive. Similarly, there are several methods, frameworks, and guidelines for ensuring software quality in the development process, the use process, or both. Software Quality Management and Risk Management are probably the two most popular methods employed by developers during software development and implementation to deliver quality. In this paper, the authors examine whether, and to what extent, the implied value propositions of software products, as portrayed by the ISO9126 quality model and by the processes prescribed in Software Quality Management and Risk Management, map onto user value perceptions and experiences. An ontology of value, in the form of a value tree, is developed and used to identify and analyse the key value dimensions of the ISO9126 quality model and of the Software Quality Management and Risk Management process methods. These are then mapped onto contextualised user value characterisations derived from the extant literature. The differences identified are analysed and discussed, and the authors suggest approaches that could narrow the perennial gap between idealised quality product and process models and stakeholder perceptions and actualisations of software value.
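
A minimal sketch of a value tree as a recursive data structure, onto whose leaf dimensions quality characteristics and user value perceptions could be mapped; the node names are illustrative assumptions, not the authors' ontology.

```python
from dataclasses import dataclass, field

@dataclass
class ValueNode:
    """A value dimension that can be decomposed into sub-dimensions."""
    name: str
    children: list = field(default_factory=list)

    def leaves(self):
        """Return the leaf dimensions, onto which quality characteristics
        (e.g., from ISO9126) and user value perceptions can be mapped."""
        if not self.children:
            return [self.name]
        return [leaf for child in self.children for leaf in child.leaves()]

# Hypothetical value tree; the decomposition is illustrative only.
value_tree = ValueNode("software value", [
    ValueNode("product value", [ValueNode("functionality"), ValueNode("reliability")]),
    ValueNode("use value", [ValueNode("user experience"), ValueNode("fitness for purpose")]),
])
print(value_tree.leaves())
```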


2007 · Vol 98 (5) · pp. 2705-2715
Author(s):  
Ida Siveke ◽  
Christian Leibold ◽  
Benedikt Grothe

We are regularly exposed to several concurrent sounds, producing a mixture of binaural cues. The neuronal mechanisms underlying the localization of concurrent sounds are not well understood. The major binaural cues for localizing low-frequency sounds in the horizontal plane are interaural time differences (ITDs). Auditory brain stem neurons encode ITDs by firing maximally in response to “favorable” ITDs and weakly or not at all in response to “unfavorable” ITDs. We recorded from ITD-sensitive neurons in the dorsal nucleus of the lateral lemniscus (DNLL) while presenting pure tones at different ITDs embedded in noise. We found that increasing levels of concurrent white noise suppressed the maximal response rate to tones with favorable ITDs and slightly enhanced the response rate to tones with unfavorable ITDs. Nevertheless, most of the neurons maintained ITD sensitivity to tones even for noise intensities equal to that of the tone. Concurrent noise with a spectral composition from which the neuron's excitatory frequencies were omitted reduced the maximal response in a manner similar to concurrent white noise. This finding indicates that the decrease of the maximal rate is mediated by suppressive cross-frequency interactions, which we also observed during monaural stimulation with additional white noise. In contrast, the enhancement of the firing rate to tones at unfavorable ITDs might be due to early binaural interactions (e.g., at the level of the superior olive). A simple simulation corroborates this interpretation. Taken together, these findings suggest that the spectral composition of a concurrent sound strongly influences the spatial processing of ITD-sensitive DNLL neurons.
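
A minimal coincidence-detector sketch in the spirit of the "simple simulation" mentioned above (a Jeffress-style cross-correlation model, not the authors' implementation): the firing-rate proxy is the rectified cross-correlation of the two ear signals at the neuron's assumed best delay, with independent noise added at each ear. All parameters are hypothetical.

```python
import numpy as np

fs = 40_000          # sampling rate (Hz)
f = 500              # tone frequency (Hz), low-frequency ITD regime
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(1)

def ear_signals(itd_s, noise_level):
    """Tone with a given ITD plus independent (uncorrelated) noise per ear."""
    left = np.sin(2 * np.pi * f * t)
    right = np.sin(2 * np.pi * f * (t - itd_s))  # right ear lags by the ITD
    left = left + noise_level * rng.standard_normal(t.size)
    right = right + noise_level * rng.standard_normal(t.size)
    return left, right

def response(itd_s, noise_level, best_delay_s=0.0005):
    """Firing-rate proxy: rectified cross-correlation at the best delay."""
    shift = int(round(best_delay_s * fs))  # internal delay in samples
    left, right = ear_signals(itd_s, noise_level)
    corr = np.mean(left[:-shift] * right[shift:])
    return max(corr, 0.0)

# Responses to a tone at favorable vs. unfavorable ITD, without and with
# concurrent noise at the same level as the tone.
for noise in (0.0, 1.0):
    fav = response(itd_s=0.0005, noise_level=noise)
    unfav = response(itd_s=-0.0005, noise_level=noise)
    print(f"noise={noise}: favorable={fav:.3f}, unfavorable={unfav:.3f}")
```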


2010 · Vol 61 (9) · pp. 2381-2390
Author(s):  
Gabriele Freni ◽  
Giorgio Mannina ◽  
Gaspare Viviani

The objective of this paper is to define a methodology for evaluating the impact of the temporal resolution of rainfall measurements in urban drainage modelling applications. More specifically, the effect of temporal resolution on urban water quality modelling is assessed by analysing the uncertainty of the rainfall–runoff model response. Analyses have been carried out using historical rainfall–discharge data collected for the Fossolo catchment (Bologna, Italy). According to the methodology, the historical rainfall data are taken as a reference, and resampled data have been obtained through a rescaling procedure with variable temporal windows. The shape comparison between ‘true’ and rescaled rainfall data has been carried out using a non-dimensional accuracy index. Monte Carlo simulations have been carried out applying a parsimonious urban water quality model to the recorded data and the resampled events. The results of the simulations were used to derive the cumulative probabilities of quantity and quality model outputs (peak discharges, flow volumes, peak concentrations, and pollutant masses) conditioned on the observations, according to the GLUE (Generalized Likelihood Uncertainty Estimation) methodology. The results showed that when only coarser rainfall information is available, the model calibration process remains efficient, even though modelling uncertainty progressively increases, especially with regard to water quality aspects.
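
A minimal sketch of the GLUE workflow described above, using a toy one-parameter runoff model in place of the study's parsimonious water quality model; the observed peak, rainfall input, and behavioural threshold are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical observed peak discharge (m3/s) and a toy one-parameter
# rainfall-runoff model; this sketches only the GLUE workflow.
observed_peak = 4.2
rain_peak = 30.0  # mm/h, hypothetical rainfall input

def model(runoff_coeff):
    return runoff_coeff * rain_peak * 0.2  # toy linear response

# 1) Monte Carlo sampling of the parameter from its prior range.
params = rng.uniform(0.1, 1.0, size=5000)
sim = model(params)

# 2) Informal likelihood: inverse squared error, zeroed below a
#    behavioural threshold (standard GLUE practice).
likelihood = 1.0 / (1e-9 + (sim - observed_peak) ** 2)
likelihood[likelihood < np.percentile(likelihood, 50)] = 0.0
weights = likelihood / likelihood.sum()

# 3) Likelihood-weighted cumulative distribution of the model output,
#    from which uncertainty bounds (e.g., 5%-95%) are read.
order = np.argsort(sim)
cdf = np.cumsum(weights[order])
lower = sim[order][np.searchsorted(cdf, 0.05)]
upper = sim[order][np.searchsorted(cdf, 0.95)]
print(f"90% GLUE band for peak discharge: [{lower:.2f}, {upper:.2f}] m3/s")
```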

