Repeatability of Taste Recognition Threshold Measurements with QUEST and Quick Yes–No

Nutrients ◽  
2019 ◽  
Vol 12 (1) ◽  
pp. 24
Author(s):  
Richard Höchenberger ◽  
Kathrin Ohla

Taste perception, although vital for nutrient sensing, has long been overlooked in sensory assessments. This can, at least in part, be attributed to challenges associated with the handling of liquid, perishable stimuli, but also with scarce efforts to optimize testing procedures to be more time-efficient. We have previously introduced an adaptive, QUEST-based procedure to measure taste sensitivity thresholds that was quicker than other existing approaches, yet similarly reliable. Despite its advantages, the QUEST procedure lacks experimental control of false alarms (i.e., response bias) and psychometric function slope. Variations of these parameters, however, may also influence the threshold estimate. This raises the question as to whether a procedure that simultaneously assesses threshold, false-alarm rate, and slope might be able to produce threshold estimates with higher repeatability, i.e., smaller variation between repeated measurements. Here, we compared the performance of QUEST with a method that allows measurement of false-alarm rates and slopes, quick Yes–No (qYN), in a test–retest design for citric acid, sodium chloride, quinine hydrochloride, and sucrose recognition thresholds. We used complementary measures of repeatability, namely test–retest correlations and coefficients of repeatability. Both threshold procedures yielded largely overlapping thresholds with good repeatability between measurements. Together the data suggest that participants used a conservative response criterion. Furthermore, we explored the link between taste sensitivity and taste liking, for which, however, we found no clear association.

Author(s):  
Pengfei Han ◽  
Lea Müller ◽  
Thomas Hummel

Abstract Introduction Taste perception is affected by trigeminal stimuli such as capsaicin. This has been studied at suprathreshold concentrations; however, little is known about taste perception at threshold level in the presence of a low concentration of capsaicin. The aim of the study was to explore whether taste sensitivity for sweet, sour, salty, bitter, and umami is modulated by the presence of capsaicin in the peri-threshold range. Methods Fifty-seven adults (age range 19–85 years; 32 women) with functional gustation participated in the study. Based on their perception of phenylthiocarbamide (PTC), the group was stratified into non-tasters (n = 20) and tasters (n = 37). Thresholds for sweet (sucrose), sour (citric acid), salty (sodium chloride), bitter (quinine hydrochloride), and umami (sodium glutamate) tastes were estimated using a single-staircase paradigm (3-alternative forced choice; volume per trial 0.1 ml) with or without 0.9-µM capsaicin added. This capsaicin concentration had been determined in pilot studies to be in the range of oral perception thresholds. Results The addition of capsaicin produced lower taste thresholds for sweet, sour, salty, and bitter but not for umami. Neither PTC taster status nor sex affected these results. Conclusion The current results indicate that a low concentration of capsaicin increases gustatory sensitivity. Implications The current findings provide evidence for differential effects of capsaicin on taste perception at threshold level, with implications for boosting taste sensitivity or flavor enjoyment with low concentrations of capsaicin.
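A single-staircase forced-choice procedure of the kind used here can be sketched in a few lines. This is a generic illustration, not the study's protocol: the step rule, the simulated observer, and the concentration levels are all hypothetical, and the threshold is taken as the mean of the reversal levels.

```python
import random

def staircase_threshold(p_correct_at, start_level, n_reversals=6, seed=1):
    """Minimal single-staircase 3-AFC sketch: step down one
    concentration level after a correct identification, up one after
    an error, and average the levels at which direction reverses.
    p_correct_at(level) is a hypothetical psychometric function."""
    rng = random.Random(seed)
    level, direction = start_level, 0
    reversals = []
    while len(reversals) < n_reversals:
        correct = rng.random() < p_correct_at(level)
        new_dir = -1 if correct else +1
        if direction and new_dir != direction:
            reversals.append(level)
        direction = new_dir
        level = max(0, level + new_dir)
    return sum(reversals) / len(reversals)

# Hypothetical observer: performance rises with concentration level,
# with the 1/3 guessing floor appropriate to a 3-AFC task
p = lambda level: 1 / 3 + (2 / 3) * min(level, 10) / 10
estimate = staircase_threshold(p, start_level=10)
```

Averaging over reversals rather than trials concentrates the estimate near the level where the observer hovers between success and failure, which is the operational definition of the threshold in such paradigms.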


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1643
Author(s):  
Ming Liu ◽  
Shichao Chen ◽  
Fugang Lu ◽  
Mengdao Xing ◽  
Jingbiao Wei

For target detection in complex scenes of synthetic aperture radar (SAR) images, false alarms in the land areas are hard to eliminate, especially near the coastline. To address this problem, an algorithm based on the fusion of multiscale superpixel segmentations is proposed in this paper. Firstly, the SAR images are partitioned using superpixel segmentation at different scales. For the superpixels at each scale, land-sea segmentation is achieved by judging their statistical properties. Then, the land-sea segmentation results obtained at each scale are combined with the result of a constant false alarm rate (CFAR) detector to eliminate the false alarms located in the land areas of the SAR image. Finally, to enhance the robustness of the proposed algorithm, the detection results obtained at different scales are fused to produce the final target detection. Experimental results on real SAR images verify the effectiveness of the proposed algorithm.
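The CFAR detector mentioned in the abstract adapts its threshold to the local clutter level so that the false alarm rate stays fixed. The sketch below is a textbook cell-averaging CFAR on a 1-D power profile, not the paper's detector; the guard/training sizes, the exponential (square-law) clutter model, and the injected target are all illustrative assumptions.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR sketch: for each cell, estimate the clutter
    level from training cells on both sides of a guard band, and flag
    the cell if it exceeds alpha * noise_estimate."""
    n = len(power)
    n_train = 2 * train
    # Scaling factor that yields the desired pfa for exponential
    # (square-law detected) clutter
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    detections = np.zeros(n, dtype=bool)
    for i in range(guard + train, n - guard - train):
        left = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + train + 1]
        noise = (left.sum() + right.sum()) / n_train
        detections[i] = power[i] > alpha * noise
    return detections

rng = np.random.default_rng(0)
clutter = rng.exponential(scale=1.0, size=200)  # speckle-like background
clutter[100] += 60.0                            # injected point target
hits = ca_cfar(clutter)
```

Because the threshold floats with the local clutter estimate, bright land areas raise the threshold locally; the paper's contribution is to go further and mask out land regions entirely via multiscale superpixel segmentation before fusing detections.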


Nutrients ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 878
Author(s):  
Arnaud Bernard ◽  
Johanne Le Beyec-Le Bihan ◽  
Loredana Radoi ◽  
Muriel Coupaye ◽  
Ouidad Sami ◽  
...  

The aim of this study was to explore the impact of bariatric surgery on fat and sweet taste perceptions and to determine possible correlations with gut appetite-regulating peptides and subjective food sensations. Women suffering from severe obesity (BMI > 35 kg/m2) were studied 2 weeks before and 6 months after a vertical sleeve gastrectomy (VSG, n = 32) or a Roux-en-Y gastric bypass (RYGB, n = 12). Linoleic acid (LA) and sucrose perception thresholds were determined using the three-alternative forced-choice procedure, gut hormones were assayed before and after a test meal, and subjective changes in oral food sensations were self-reported using a standardized questionnaire. Despite a global positive effect of both surgeries on the reported gustatory sensations, a change in taste sensitivity was only found after RYGB, and only for LA. However, the fat and sweet taste perceptions were not homogeneous between patients who underwent the same surgical procedure, suggesting the existence of two subgroups: patients with and without taste improvement. These gustatory changes were not correlated with the surgery-mediated modifications of the main gut appetite-regulating hormones. Collectively, these data highlight the complexity of the relationships between bariatric surgery and taste sensitivity and suggest that VSG and RYGB might impact fat taste perception differently.


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Yunzi Feng ◽  
Hélène Licandro ◽  
Christophe Martin ◽  
Chantal Septier ◽  
Mouming Zhao ◽  
...  

The objective of this work was to investigate whether the biological film lining the tongue may play a role in taste perception. For that purpose, the tongue film and saliva of 21 healthy subjects were characterized, focusing on microorganisms and their main metabolic substrates and products. In parallel, taste sensitivity was evaluated using a test recently developed by our group, and the links between biological and sensory data were explored by a correlative approach. Saliva and tongue film differed significantly in biochemical composition (proportions of glucose, fructose, sucrose, and lactic, butyric, and acetic acids) and in microbiological profiles: compared to saliva, tongue film was characterized by significantly lower proportions of Bacteroidetes (p<0.001) and its main genus Prevotella (p<0.01) and significantly higher proportions of Firmicutes (p<0.01), Actinobacteria (p<0.001), and the genus Streptococcus (p<0.05). Generic taste sensitivity was linked to biological variables in the two compartments, but the variables that appeared influential in saliva (flow, organic acids, proportions of Actinobacteria and Firmicutes) and in tongue film (sugars and proportions of Bacteroidetes) were not the same. This study points to two interesting areas in taste research: the oral microbiome and the specific characterization of the film lining the tongue.


2018 ◽  
Vol 33 (6) ◽  
pp. 1501-1511 ◽  
Author(s):  
Harold E. Brooks ◽  
James Correia

Abstract Tornado warnings are one of the flagship products of the National Weather Service. We update the time series of various metrics of performance in order to provide baselines over the 1986–2016 period for lead time, probability of detection, false alarm ratio, and warning duration. We have used metrics (mean lead time for tornadoes warned in advance, fraction of tornadoes warned in advance) that work in a consistent way across the official changes in policy for warning issuance, as well as across points in time when unofficial changes took place. The mean lead time for tornadoes warned in advance was relatively constant from 1986 to 2011, while the fraction of tornadoes warned in advance increased through about 2006, and the false alarm ratio slowly decreased. The largest changes in performance take place in 2012 when the default warning duration decreased, and there is an apparent increased emphasis on reducing false alarms. As a result, the lead time, probability of detection, and false alarm ratio all decrease in 2012. Our analysis is based, in large part, on signal detection theory, which separates the quality of the warning system from the threshold for issuing warnings. Threshold changes lead to trade-offs between false alarms and missed detections. Such changes provide further evidence for changes in what the warning system as a whole considers important, as well as highlighting the limitations of measuring performance by looking at metrics independently.
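The verification metrics discussed in this abstract derive from a standard 2x2 contingency table of warned/unwarned events versus observed/unobserved tornadoes. A minimal sketch with hypothetical counts (not the paper's data):

```python
def warning_metrics(hits, misses, false_alarms):
    """Standard warning-verification metrics:
    probability of detection (POD) = fraction of tornadoes warned;
    false alarm ratio (FAR) = fraction of warnings with no tornado."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

# Hypothetical season: 70 warned tornadoes, 30 unwarned,
# 150 warnings verified with no tornado
pod, far = warning_metrics(hits=70, misses=30, false_alarms=150)
```

Signal detection theory makes the trade-off explicit: lowering the issuance threshold raises both POD and FAR along a fixed quality curve, which is why the abstract reads the joint 2012 drop in POD, FAR, and lead time as a deliberate threshold change rather than a change in forecast skill.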


Author(s):  
Sunilkumar Soni ◽  
Santanu Das ◽  
Aditi Chattopadhyay

An optimal sensor placement methodology is proposed, based on a detection theory framework, to maximize the detection rate and minimize the false alarm rate. Minimizing the false alarm rate for a given detection rate plays an important role in improving the efficiency of a Structural Health Monitoring (SHM) system, as it reduces the number of false alarms. The placement technique is such that the sensor features are as directly correlated with, and as sensitive to, damage as possible. The technique accounts for a number of factors, such as actuation frequency and strength, minimum damage size, damage detection scheme, material damping, signal-to-noise ratio (SNR), and sensing radius. These factors are not independent and affect each other. Optimal sensor placement is done in two steps. First, a sensing radius is calculated that can capture any detectable change caused by a perturbation above a certain threshold. This threshold value is based on a Neyman-Pearson detector that maximizes the detection rate for a fixed false alarm rate. To avoid sensor redundancy, a criterion is defined to minimize overlaps between the sensing regions of neighboring sensors. Based on the sensing region and the minimum-overlap concept, the number of sensors needed on a structural component is calculated. In the second step, a damage distribution pattern, known as the probability-of-failure distribution, is calculated for the structural component using finite element analysis. This failure distribution helps in selecting the most sensitive sensors, thereby removing those making only remote contributions to the overall detection scheme.
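The Neyman-Pearson idea used for the threshold can be illustrated with the simplest possible channel. This is a sketch under assumed Gaussian statistics, not the paper's detector: the unit noise level and the signal mean are hypothetical.

```python
from statistics import NormalDist

def np_threshold(sigma, pfa):
    """Neyman-Pearson operating point for zero-mean Gaussian noise:
    the level exceeded by noise alone with probability pfa."""
    return sigma * NormalDist().inv_cdf(1.0 - pfa)

def detection_rate(mu, sigma, threshold):
    """Probability that a damage signal of mean mu (in noise of
    standard deviation sigma) exceeds the threshold."""
    return 1.0 - NormalDist(mu, sigma).cdf(threshold)

threshold = np_threshold(sigma=1.0, pfa=0.01)   # ~2.33 sigma for 1% false alarms
pd = detection_rate(mu=3.0, sigma=1.0, threshold=threshold)
```

Fixing the false alarm rate first and then maximizing detection, as above, is exactly the Neyman-Pearson ordering of objectives; the placement method extends it spatially by asking how far from a sensor a damage signal of minimum size still clears this threshold.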


Author(s):  
С.Б. Егоров ◽  
Р.И. Горбачев

The "excursion" probabilistic model of a detector operating in signal stand-by mode, proposed by the authors in [1], is used to estimate the effect of duration-based excursion selection on the false alarm probability. Fluctuation excursions of the noise indicator process that exceed the level and duration selection thresholds are treated as rare events on the signal stand-by interval, obeying a Poisson probability law. Under the condition that the mean recurrence period of false excursions exceeds the correlation interval of the indicator process, a relation is obtained between the mean number of excursions of any duration and the mean number of excursions exceeding the duration threshold. Based on the known numerical and probabilistic characteristics of the excursions of a normal stationary random process, equations are derived relating the relative level and duration selection thresholds to the false alarm probability on the signal stand-by interval. A method is proposed for determining the duration selection threshold so as to reduce the level selection threshold to a specified value.
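The Poisson rare-event model can be sketched numerically. This is an illustrative sketch, not the authors' derivation: the excursion rates, the one-second stand-by interval, and the simplifying assumption of exponentially distributed excursion durations are all hypothetical.

```python
import math

def false_alarm_probability(rate_per_second, interval_s):
    """If threshold excursions are rare events following a Poisson law,
    the probability of at least one false alarm during the stand-by
    interval is 1 - exp(-rate * T)."""
    return 1.0 - math.exp(-rate_per_second * interval_s)

def surviving_rate(rate_any, mean_duration_s, duration_threshold_s):
    """Rate of excursions longer than the duration threshold, assuming
    (as a simplifying model) exponentially distributed durations."""
    return rate_any * math.exp(-duration_threshold_s / mean_duration_s)

r_all = 2.0   # excursions per second above the level threshold (hypothetical)
r_sel = surviving_rate(r_all, mean_duration_s=0.01, duration_threshold_s=0.02)
p_fa  = false_alarm_probability(r_sel, interval_s=1.0)
```

The duration selector thins the Poisson stream of level-threshold excursions, which is why, for a fixed target false alarm probability, adding a duration threshold permits a lower level threshold.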


2021 ◽  
Author(s):  
Thomas Röösli ◽  
David N. Bresch

Weather extremes can have high socio-economic impacts. Better impact forecasting and preventive action help to reduce these impacts. In Switzerland, winter windstorms have caused high building damage, felled trees, and interrupted traffic and power. Events such as Burglind-Eleanor in January 2018 are a learning opportunity for weather warnings, risk modelling, and decision-making.

We have developed and implemented an operational impact forecasting system for building damage due to wind events in Switzerland. We use the ensemble weather forecast of wind gusts produced by the national meteorological agency MeteoSwiss. We couple this hazard information with a spatially explicit impact model (CLIMADA) for building damage due to winter windstorms. Each day, the impact forecasting system publishes a probabilistic forecast of the expected building damage on a spatial grid.

This system produces promising results for major historical storms when compared to aggregated daily building insurance claims data from a public building insurer of the canton of Zurich. The daily impact forecasts were qualitatively categorized as (1) successful, (2) miss, or (3) false alarm. The impacts of windstorm Burglind-Eleanor and five other winter windstorms were forecasted reasonably well, with four successful forecasts, one miss, and one false alarm.

The building damage due to smaller storm extremes was not forecasted as successfully. Thunderstorms are not as well forecasted with 2 days' lead time, and as a result the impact forecasting system produces more misses and false alarms outside the winter storm season. For the Alpine-specific southerly Foehn winds, the impact forecasts produce many false alarms, probably caused by an overestimation of wind gusts in the weather forecast.

The forecasting system can be used to improve weather warnings and to allocate resources and staff in the claims handling process of building insurances. This will help to improve recovery time and costs for institutions and individuals. The open-source code and open meteorological data make this implementation transferable to other hazard types and other geographical regions.


2021 ◽  
Author(s):  
Paolo Frattini ◽  
Gianluca Sala ◽  
Camilla Lanfranconi ◽  
Giulia Rusconi ◽  
Giovanni Crosta

Rainfall is one of the most significant triggering factors for shallow landslides. Early warning for such phenomena requires the definition of a threshold based on a critical rainfall condition that may lead to diffuse landsliding. The development of these thresholds is frequently done through empirical or statistical approaches that aim at identifying thresholds between rainfall events that did or did not trigger landslides. Such approaches present several problems related to the identification of the exact amount of rainfall that triggered landslides, the local geo-environmental conditions at the landslide site, and the minimum rainfall amount used to define the non-triggering events. Furthermore, these thresholds lead to misclassifications (false negatives or false positives) that always induce costs for society. The aim of this research is to address these limitations, accounting for classification costs in order to select the optimal thresholds for landslide risk management.

Starting from a database of shallow landslides that occurred during five regional-scale rainfall events in the Italian Central Alps, we extracted the triggering rainfall intensities by adjusting rain gauge data with weather radar data. This adjustment significantly improved the information regarding the rainfall intensity at the landslide site, although some uncertainty about the exact timing of occurrence remains. We then identified the rainfall thresholds through the Receiver Operating Characteristic (ROC) approach, by identifying the optimal rainfall intensity that separates triggering and non-triggering events. To evaluate the effect of applying different minimum rainfall values for non-triggering events, we adopted three different values, obtaining similar results and thus demonstrating that the ROC approach is not sensitive to the choice of the minimum rainfall threshold.

In order to include the effect of misclassification costs, we developed cost-sensitive rainfall threshold curves using the cost-curve approach (Drummond and Holte 2000). As far as we know, this is the first attempt to build a cost-sensitive rainfall threshold for landslides that allows misclassification costs to be accounted for explicitly. For the development of the cost-sensitive threshold curve, we defined a reference cost scenario in which we quantified several cost items for both missed alarms and false alarms. Under this scenario, the cost-sensitive rainfall threshold turns out to be lower than the ROC threshold in order to minimize the missed alarms, the costs of which are seven times greater than the false alarm costs. Since the misclassification costs could vary according to different socio-economic contexts and emergency organizations, we developed different extreme scenarios to evaluate the sensitivity of the rainfall thresholds to misclassification costs. In the scenario with maximum false-alarm cost and minimum missed-alarm cost, the rainfall threshold increases in order to minimize the false alarms. Conversely, the rainfall threshold decreases in the scenario with minimum false-alarm cost and maximum missed-alarm cost. We found that the range of variation between the curves of these extreme scenarios is as much as half an order of magnitude.
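Cost-sensitive threshold selection of the kind described here can be sketched with a simple scan over candidate thresholds. This is an illustrative toy, not the study's cost-curve method: the rainfall intensities, outcomes, and cost ratios are hypothetical, chosen only to show how asymmetric costs pull the threshold down.

```python
import numpy as np

def expected_cost(threshold, intensities, triggered, c_miss, c_fa):
    """Total misclassification cost of a candidate rainfall threshold:
    events at or above it raise an alarm, events below it do not."""
    alarms = intensities >= threshold
    misses = np.sum(triggered & ~alarms)       # landslide, no alarm
    false_alarms = np.sum(~triggered & alarms)  # alarm, no landslide
    return misses * c_miss + false_alarms * c_fa

def best_threshold(intensities, triggered, c_miss, c_fa):
    """Scan the observed intensities as candidates and keep the cheapest."""
    candidates = np.unique(intensities)
    costs = [expected_cost(t, intensities, triggered, c_miss, c_fa)
             for t in candidates]
    return candidates[int(np.argmin(costs))]

# Hypothetical rainfall intensities (mm/h) and landslide outcomes
rain = np.array([2.0, 5.0, 8.0, 10.0, 12.0, 15.0, 20.0, 25.0])
slid = np.array([False, False, True, False, False, True, True, True])

t_balanced = best_threshold(rain, slid, c_miss=1, c_fa=1)
t_cautious = best_threshold(rain, slid, c_miss=7, c_fa=1)  # misses 7x costlier
```

With equal costs the scan picks the threshold that minimizes total errors; once a missed alarm costs several times more than a false alarm, as in the study's reference scenario, the cheapest threshold shifts downward, trading extra false alarms for fewer misses.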


2019 ◽  
Vol 11 (3) ◽  
pp. 549-563 ◽  
Author(s):  
JungKyu Rhys Lim ◽  
Brooke Fisher Liu ◽  
Michael Egnoto

Abstract On average, 75% of tornado warnings in the United States are false alarms. Although forecasters have been concerned that false alarms may generate a complacent public, only a few research studies have examined how the public responds to tornado false alarms. Through four surveys (N = 4162), this study examines how residents in the southeastern United States understand, process, and respond to tornado false alarms. The study then compares social science research findings on perceptions of false alarms to actual county false alarm ratios and the number of tornado warnings issued by counties. Contrary to prior research, findings indicate that concerns about false alarm ratios generating a complacent public may be overblown. Results show that southeastern U.S. residents estimate tornado warnings to be more accurate than they are. Participants’ perceived false alarm ratios are not correlated with actual county false alarm ratios. Counterintuitively, the higher individuals perceive false alarm ratios and tornado alert accuracy to be, the more likely they are to take protective action such as sheltering in place in response to tornado warnings. Actual county false alarm ratios and the number of tornado warnings issued did not predict taking protective action.

