Interaction between static visual cues and force-feedback on the perception of mass of virtual objects

2018 ◽  
Author(s):  
Wenyan Bi ◽  
Jonathan Newport ◽  
Bei Xiao

ABSTRACT
We use a force-feedback device and a game engine to measure the effects of material appearance on the perception of mass of virtual objects. We find that perceived mass is determined mainly by the ground-truth mass output by the force-feedback device. Unlike the classic Material-Weight Illusion (MWI), however, heavy-looking objects (e.g., steel) are consistently rated heavier than light-looking ones (e.g., fabric) of the same ground-truth mass. Analysis of the initial acceleration of the virtual probe's movement trajectories shows greater acceleration for materials with heavier rated mass. This effect diminishes when participants lift the object a second time, indicating that the influence of visual appearance on the movement trajectories disappears once it is calibrated by the force-feedback. We also show how material categorization is affected by both the visual appearance and the weight of the object. We conclude that visual appearance interacts significantly with haptic force-feedback in the perception of mass and also affects the kinematics of how participants manipulate the object.

CCS CONCEPTS
• Human-centered computing → Empirical studies in HCI; Empirical studies in interaction design; Empirical studies in visualization

ACM Reference Format
Wenyan Bi, Jonathan Newport, and Bei Xiao. 2018. Interaction between static visual cues and force-feedback on the perception of mass of virtual objects. In Proceedings of. ACM, New York, NY, USA, 5 pages.
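The trajectory analysis described above, comparing initial acceleration across materials, amounts to differentiating sampled probe positions twice. A minimal sketch with finite differences (our own illustration, not the authors' code; the function name, uniform sampling, and units are assumptions):

```python
import numpy as np

def initial_acceleration(positions, dt):
    """Estimate acceleration at movement onset from evenly sampled
    probe positions (metres) using finite differences."""
    v = np.diff(positions) / dt   # velocity between successive samples
    a = np.diff(v) / dt           # acceleration between velocity samples
    return a[0]                   # value closest to movement onset
```

For a trajectory with constant acceleration of 2 m/s², sampled at 100 Hz, the estimate recovers 2.0 m/s².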

2019 ◽  
Vol 11 (10) ◽  
pp. 1157 ◽  
Author(s):  
Jorge Fuentes-Pacheco ◽  
Juan Torres-Olivares ◽  
Edgar Roman-Rangel ◽  
Salvador Cervantes ◽  
Porfirio Juarez-Lopez ◽  
...  

Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown under difficult open-field conditions: complex lighting and the non-ideal crop maintenance practices of local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, along with the aerial image data set and a hand-made ground-truth segmentation with pixel precision, to facilitate comparison among different algorithms.
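For reference, "mean accuracy" in segmentation work often means per-class pixel accuracy averaged over the classes (here, crop and non-crop). A minimal sketch of that metric on boolean masks (our illustration, not the released code; the paper may use a different definition):

```python
import numpy as np

def mean_class_accuracy(pred, truth):
    """Average of per-class pixel accuracies: the fraction of true crop
    pixels predicted as crop, and of true non-crop pixels predicted as
    non-crop. Both inputs are same-shape boolean masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    crop_acc = (pred & truth).sum() / max(truth.sum(), 1)
    bg_acc = (~pred & ~truth).sum() / max((~truth).sum(), 1)
    return float((crop_acc + bg_acc) / 2)
```

Averaging per class keeps the score honest when one class (often the background) dominates the image.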


Author(s):  
Akihiro Maehigashi ◽  
Akira Sasada ◽  
Miki Matsumuro ◽  
Fumihisa Shibata ◽  
Asako Kimura ◽  
...  

2021 ◽  
Author(s):  
Nina Rohrbach ◽  
Joachim Hermsdörfer ◽  
Lisa-Marie Huber ◽  
Annika Thierfelder ◽  
Gavin Buckingham

Abstract
Augmented reality, whereby computer-generated images are overlaid onto the physical environment, is becoming a significant part of the world of education and training. Little is known, however, about how these external images are treated by the user's sensorimotor system: are they fully integrated with cues from the physical environment, or largely ignored by low-level perceptual and motor processes? Here, we examined this question in the context of the size–weight illusion (SWI). Thirty-two participants repeatedly lifted, in alternation, two cubes of unequal volume but equal mass and reported their heaviness. Half of the participants saw semi-transparent, equally sized holographic cubes superimposed onto the physical cubes through a head-mounted display. Fingertip force rates were measured prior to lift-off to determine how the holograms influenced sensorimotor prediction, while verbal reports of heaviness after each lift indicated how the holographic size cues influenced the SWI. As expected, participants who lifted without augmented visual cues applied force to the large object at a higher rate than to the small object on early lifts and experienced a robust SWI across all trials. In contrast, participants who lifted the (apparently equal-sized) augmented cubes used similar force rates for each object. Furthermore, they experienced no SWI during their first lifts of the objects, with an SWI developing over repeated trials. These results indicate that holographic cues initially dominate physical cues and cognitive knowledge, but are discounted when they conflict with cues from other senses.


2018 ◽  
Vol 14 (1) ◽  
pp. 101-118 ◽  
Author(s):  
Michael K. Gusmano ◽  
Erin Strumpf ◽  
Julie Fiset-Laniel ◽  
Daniel Weisz ◽  
Victor G. Rodwin

Abstract
Although eliminating financial barriers to care is a necessary condition for improving access to health services, it is not sufficient. Given the contrasting financing and organization of health insurance in the United States and Canada, there is a long history of comparing these countries' health systems. We extend the empirical studies of the Canadian and US health systems by comparing access to ambulatory care, as measured by hospitalization rates for ambulatory care sensitive conditions (ACSC), in Montreal and New York City. We find that ACSC rates in New York (12.6 per 1000 population) were more than twice as high as in Montreal (4.8 per 1000 population). After controlling for age, sex, and number of diagnoses, significant differences in ACSC rates are present in both cities, but are more pronounced in New York. Our findings are consistent with the hypothesis that universal, first-dollar health insurance coverage has contributed to lower ACSC rates in Montreal than in New York. However, Montreal's surprisingly low ACSC rate calls for further research.


2021 ◽  
Vol 14 (6) ◽  
pp. 997-1005
Author(s):  
Sandeep Tata ◽  
Navneet Potti ◽  
James B. Wendt ◽  
Lauro Beltrão Costa ◽  
Marc Najork ◽  
...  

Extracting structured information from templatic documents is an important problem with the potential to automate many real-world business workflows, such as payment, procurement, and payroll. The core challenge is that such documents can be laid out in a virtually unlimited number of ways. A good solution to this problem is one that generalizes well not only to known templates, such as invoices from a known vendor, but also to unseen ones. We developed a system called Glean to tackle this problem. Given a target schema for a document type and some labeled documents of that type, Glean uses machine learning to automatically extract structured information from other documents of that type. In this paper, we describe the overall architecture of Glean and discuss three key data management challenges: (1) managing the quality of ground-truth data, (2) generating training data for the machine learning model from labeled documents, and (3) building tools that help a developer rapidly build and improve a model for a given document type. Through empirical studies on a real-world dataset, we show that these data management techniques allow us to train a model that is over 5 F1 points better than the exact same model architecture without them. We argue that for such information-extraction problems, designing abstractions that carefully manage the training data is at least as important as choosing a good model architecture.
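The F1 comparison above can be made concrete: treating each extraction as a (field, value) pair, micro-averaged F1 follows from precision and recall over the predicted and gold sets. An illustrative scorer (our sketch, not Glean's actual evaluation code):

```python
def extraction_f1(predicted, gold):
    """Micro-averaged F1 over extracted (field, value) pairs.
    A pair counts as correct only on an exact match with the gold set."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)          # exact-match true positives
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)     # correct fraction of predictions
    recall = tp / len(gold)             # recovered fraction of gold pairs
    return 2 * precision * recall / (precision + recall)
```

Under this scoring, "5 F1 points" means a 0.05 gap on the 0–1 scale (or 5 on the 0–100 scale).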


2020 ◽  
Vol 31 (01) ◽  
pp. 030-039 ◽  
Author(s):  
Aaron C. Moberly ◽  
Kara J. Vasil ◽  
Christin Ray

Abstract
Adults with cochlear implants (CIs) are believed to rely more heavily on visual cues during speech recognition tasks than their normal-hearing peers. However, the relationship between auditory and visual reliance during audiovisual (AV) speech recognition is unclear and may depend on an individual's auditory proficiency, duration of hearing loss (HL), age, and other factors. The primary purpose of this study was to examine whether visual reliance during AV speech recognition depends on auditory function for adult CI candidates (CICs) and adult experienced CI users (ECIs).
Participants included 44 ECIs and 23 CICs. All participants were postlingually deafened and had met clinical candidacy requirements for cochlear implantation. Participants completed City University of New York sentence recognition testing. Three separate lists of twelve sentences each were presented: the first in the auditory-only (A-only) condition, the second in the visual-only (V-only) condition, and the third in combined AV fashion. Each participant's "visual enhancement" (VE) and "auditory enhancement" (AE) was computed (i.e., the benefit to AV speech recognition of adding visual or auditory information, respectively, relative to what could potentially be gained). The relative reliance on VE versus AE was also computed as a VE/AE ratio.
The VE/AE ratio was predicted inversely by A-only performance. Visual reliance was not significantly different between ECIs and CICs. Duration of HL and age did not account for additional variance in the VE/AE ratio.
A shift toward visual reliance may be driven by poor auditory performance in ECIs and CICs. The restoration of auditory input through a CI does not necessarily facilitate a shift back toward auditory reliance. Findings suggest that individual listeners with HL may rely on both auditory and visual information during AV speech recognition, to varying degrees based on their own performance and experience, to optimize communication performance in real-world listening situations.
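The enhancement measures described above, a gain normalized by the room left for improvement, can be sketched as follows. This is our reading of the abstract's definitions, with scores in percent correct; the paper's exact formulas may differ:

```python
def enhancement(av, unimodal):
    """Benefit of adding the second modality, relative to the maximum
    possible benefit: (AV - unimodal) / (100 - unimodal)."""
    return (av - unimodal) / (100.0 - unimodal)

def ve_ae_ratio(a_only, v_only, av):
    """Relative reliance on visual versus auditory enhancement."""
    ve = enhancement(av, a_only)   # visual enhancement
    ae = enhancement(av, v_only)   # auditory enhancement
    return ve / ae
```

For example, a listener scoring 50% A-only, 20% V-only, and 80% AV has VE = 30/50 = 0.6 and AE = 60/80 = 0.75, giving a VE/AE ratio of 0.8.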


2019 ◽  
Vol 11 (13) ◽  
pp. 3599 ◽  
Author(s):  
Lane ◽  
Murdock ◽  
Genskow ◽  
Betz ◽  
Chatrchyan

Climate change impacts on agriculture have been intensifying in the Northeastern and Midwestern United States. Few empirical studies have considered how dairy farmers and their advisors interpret and respond to climate impacts, risks, and opportunities in these regions. This study investigates dairy farmer and advisor views and decisions related to climate change using data from seven farmer and advisor focus groups conducted in New York and Wisconsin. The study examined how farmers and advisors perceive climate impacts on dairy farms, the practices they are adopting, and how perceived risks and vulnerability affect farmers' decision making related to adaptation strategies. Although dairy farmers articulated concern about climate impacts, other business pressures, such as profitability, market conditions, government regulations, and labor availability, were often more critical issues affecting their decision making. Personal experience with extreme weather and seasonal changes also affected decision making. The findings from this study provide an improved understanding of farmers' needs and priorities, which can help guide land-grant researchers, Extension, and policymakers in their efforts to develop and coordinate a comprehensive strategy to address climate change impacts on dairy farming in the Northeastern and Midwestern US.

