Quantifying Aphantasia through drawing: Those without visual imagery show deficits in object but not spatial memory

2019 ◽  
Author(s):  
Wilma A. Bainbridge ◽  
Zoë Pounder ◽  
Alison F. Eardley ◽  
Chris I. Baker

Abstract: Congenital aphantasia is a recently characterized experience defined by the inability to form voluntary visual imagery, in spite of intact semantic memory, recognition memory, and visual perception. Because of this specific deficit to visual imagery, aphantasia serves as an ideal population for probing the nature of representations in visual memory, particularly the interplay of object, spatial, and symbolic information. Here, we conducted a large-scale online study of aphantasics and revealed a dissociation in object and spatial content in their memory representations. Sixty-one aphantasics and matched controls with typical imagery studied real-world scene images, and were asked to draw them from memory, and then later copy them during a matched perceptual condition. Drawings were objectively quantified by 2,795 online scorers for object and spatial details. Aphantasics recalled significantly fewer objects than controls, with less color in their drawings, and an increased reliance on verbal scaffolding. However, aphantasics showed incredibly high spatial accuracy, equivalent to controls, and made significantly fewer memory errors. These differences between groups only manifested during recall, with no differences between groups during the matched perceptual condition. This object-specific memory impairment in aphantasics provides evidence for separate systems in memory that support object versus spatial information.

2021 ◽  
pp. 003151252110197
Author(s):  
Kaitlyn Abeare ◽  
Kristoffer Romero ◽  
Laura Cutler ◽  
Christina D. Sirianni ◽  
Laszlo A. Erdodi

In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFT FCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 & ≤17), the RCFT FCR remained specific (.84–1.00) to psychometrically defined non-credible responding. Simultaneously, the RCFT FCR was more sensitive to examinees’ natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFT FCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.
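The cutoff logic the abstract describes can be made concrete with a minimal sketch: scores at or below a published validity cutoff are flagged as non-credible, and specificity is the proportion of genuinely credible examinees who are not flagged. The scores below are hypothetical illustrations, not study data.

```python
# Minimal sketch of cutoff-based performance-validity classification,
# as described for the RCFT FCR (scores at or below the cutoff are flagged).
# All example scores here are hypothetical, not data from the study.

def flag_non_credible(score, cutoff=16):
    """Flag a score at or below the validity cutoff as non-credible."""
    return score <= cutoff

def specificity(scores, credible_labels, cutoff=16):
    """Proportion of genuinely credible examinees NOT flagged by the cutoff."""
    credible = [s for s, ok in zip(scores, credible_labels) if ok]
    true_negatives = sum(1 for s in credible if not flag_non_credible(s, cutoff))
    return true_negatives / len(credible)

# Hypothetical 0-24 scale scores from examinees known to respond credibly:
scores = [24, 22, 19, 18, 23, 15, 21, 20]
labels = [True] * len(scores)
print(round(specificity(scores, labels), 3))  # 0.875: one of eight at/below 16
```

Raising the cutoff to ≤17 would flag the same hypothetical examinee here, which mirrors the trade-off the study examines: a higher cutoff gains sensitivity at the potential cost of specificity.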


2015 ◽  
Vol 66 (6) ◽  
pp. 559 ◽  
Author(s):  
Jerom R. Stocks ◽  
Charles A. Gray ◽  
Matthew D. Taylor

Characterising the movement and habitat affinities of fish is a fundamental component in understanding the functioning of marine ecosystems. A comprehensive array of acoustic receivers was deployed at two near-shore coastal sites in south-eastern Australia, to examine the movements, activity-space size and residency of a temperate rocky-reef, herbivorous species, Girella elevata. Twenty-four G. elevata individuals were internally tagged with pressure-sensing acoustic transmitters across these two arrays and monitored for up to 550 days. An existing network of coastal receivers was used to examine large-scale movement patterns. Individuals exhibited varying residency, but all had small activity-space sizes within the arrays. The species utilised shallow rocky-reef habitat, displaying unimodal or bimodal patterns in depth use. A positive correlation was observed between wind speed and the detection depth of fish, suggesting that fish move to deeper water to escape periods of adverse conditions. Detection frequency data, corrected using sentinel tags, generally illustrated diurnal behaviour. Patterns of habitat usage, residency and spatial utilisation highlighted the susceptibility of G. elevata to recreational fishing pressure. The results from the present study will further contribute to the spatial information required in the zoning of effective marine protected areas, and to our understanding of temperate reef fish ecology.


2013 ◽  
Vol 57 ◽  
pp. 208-217 ◽  
Author(s):  
Zhiqiang Zou ◽  
Yue Wang ◽  
Kai Cao ◽  
Tianshan Qu ◽  
Zhongmin Wang

2021 ◽  
Vol 13 (13) ◽  
pp. 2473
Author(s):  
Qinglie Yuan ◽  
Helmi Zulhaidi Mohd Shafri ◽  
Aidi Hizami Alias ◽  
Shaiful Jahari Hashim

Automatic building extraction has been applied in many domains. It remains a challenging problem because of complex scenes and multiscale building structures. Deep learning algorithms, especially fully convolutional neural networks (FCNs), have shown more robust feature extraction ability than traditional remote sensing data processing methods. However, hierarchical features from encoders with a fixed receptive field are weak at capturing global semantic information. Local features in multiscale subregions cannot construct contextual interdependence and correlation, especially for large-scale building areas, which probably causes fragmentary extraction results due to intra-class feature variability. In addition, low-level features carry accurate and fine-grained spatial information for tiny building structures but lack refinement and selection, and the semantic gap across levels is not conducive to feature fusion. To address the above problems, this paper proposes an FCN framework based on the residual network and provides a training pattern for multi-modal data that combines the advantages of high-resolution aerial images and LiDAR data for building extraction. Two novel modules are proposed for the optimization and integration of multiscale and across-level features. In particular, a multiscale context optimization module is designed to adaptively generate feature representations for different subregions and effectively aggregate global context. A semantic-guided spatial attention mechanism is introduced to refine shallow features and alleviate the semantic gap. Finally, hierarchical features are fused via the feature pyramid network. Compared with other state-of-the-art methods, experimental results demonstrate superior performance with 93.19 IoU and 97.56 OA on the WHU dataset and 94.72 IoU and 97.84 OA on the Boston dataset, showing that the proposed network improves accuracy and achieves better performance for building extraction.
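The two metrics reported above, intersection-over-union (IoU) for the building class and overall accuracy (OA), can be sketched on toy binary masks. The tiny masks here are illustrative stand-ins, not the WHU or Boston data.

```python
# Sketch of the evaluation metrics reported above: IoU of the building
# (foreground) class and overall pixel accuracy (OA) on binary masks.
# The eight-pixel masks below are illustrative, not real benchmark data.

def iou(pred, truth):
    """Intersection-over-union of the foreground class over flat masks."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union

def overall_accuracy(pred, truth):
    """Fraction of pixels (building or background) labelled correctly."""
    correct = sum(1 for p, t in zip(pred, truth) if p == t)
    return correct / len(truth)

pred  = [1, 1, 0, 0, 1, 0, 1, 1]   # predicted building mask (flattened)
truth = [1, 1, 1, 0, 1, 0, 0, 1]   # ground-truth building mask
print(round(iou(pred, truth), 4), overall_accuracy(pred, truth))
# -> 0.6667 0.75
```

Note that IoU penalizes false positives and false negatives symmetrically on the foreground class, which is why it is the stricter of the two numbers in the abstract.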


Author(s):  
Zhizhong Han ◽  
Xiyang Wang ◽  
Chi Man Vong ◽  
Yu-Shen Liu ◽  
Matthias Zwicker ◽  
...  

Learning global features by aggregating information over multiple views has been shown to be effective for 3D shape analysis. For view aggregation in deep learning models, pooling has been applied extensively. However, pooling leads to a loss of the content within views and of the spatial relationships among views, which limits the discriminability of learned features. We propose 3DViewGraph to resolve this issue, which learns 3D global features by more effectively aggregating unordered views with attention. Specifically, unordered views taken around a shape are regarded as view nodes on a view graph. 3DViewGraph first learns a novel latent semantic mapping to project low-level view features into meaningful latent semantic embeddings in a lower-dimensional space, which is spanned by latent semantic patterns. Then, the content and spatial information of each pair of view nodes are encoded by a novel spatial pattern correlation, where the correlation is computed among latent semantic patterns. Finally, all spatial pattern correlations are integrated with attention weights learned by a novel attention mechanism. This further increases the discriminability of learned features by highlighting the unordered view nodes with distinctive characteristics and suppressing the ones with appearance ambiguity. We show that 3DViewGraph outperforms state-of-the-art methods under three large-scale benchmarks.
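The core idea of replacing pooling with attention can be sketched minimally: each view's embedding is weighted by a softmax over per-view scores before summing, so distinctive views dominate the global feature instead of being averaged away. The embeddings and scores below are illustrative stand-ins, not outputs of the 3DViewGraph model.

```python
# Minimal sketch of attention-weighted view aggregation, the mechanism the
# abstract contrasts with plain pooling. Embeddings and attention scores
# here are hand-picked illustrations, not learned quantities.
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate_views(view_features, scores):
    """Sum per-view embeddings weighted by softmax attention, so views
    with high scores (distinctive views) dominate the global feature."""
    weights = softmax(scores)
    dim = len(view_features[0])
    return [sum(w * f[d] for w, f in zip(weights, view_features))
            for d in range(dim)]

views = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # three per-view embeddings
scores = [2.0, 0.0, 0.0]                        # view 0 scored as distinctive
global_feature = aggregate_views(views, scores)
print(global_feature)  # dominated by the first view's embedding
```

With uniform scores this reduces exactly to average pooling, which makes the attention weights the only thing separating the two aggregation schemes.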


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 1010
Author(s):  
Nouar AlDahoul ◽  
Hezerul Abdul Karim ◽  
Abdulaziz Saleh Ba Wazir ◽  
Myles Joshua Toledo Tan ◽  
Mohammad Faizal Ahmad Fauzi

Background: Laparoscopy is a surgery performed in the abdomen without making large incisions in the skin and with the aid of a video camera, resulting in laparoscopic videos. The laparoscopic video is prone to various distortions such as noise, smoke, uneven illumination, defocus blur, and motion blur. One of the main components in the feedback loop of video enhancement systems is distortion identification, which automatically classifies the distortions affecting the videos and selects the video enhancement algorithm accordingly. This paper aims to address the laparoscopic video distortion identification problem by developing fast and accurate multi-label distortion classification using a deep learning model. Current deep learning solutions based on convolutional neural networks (CNNs) can address laparoscopic video distortion classification, but they learn only spatial information. Methods: In this paper, utilization of both spatial and temporal features in a CNN-long short-term memory (CNN-LSTM) model is proposed as a novel solution to enhance the classification. First, a pre-trained ResNet50 CNN was used to extract spatial features from each video frame by transferring representations from large-scale natural images to laparoscopic images. Next, an LSTM was utilized to model the temporal relations between the features extracted from the laparoscopic video frames and produce multi-label categories. A novel laparoscopic video dataset proposed in the ICIP2020 challenge was used for training and evaluation of the proposed method. Results: The experiments conducted show that the proposed CNN-LSTM outperforms the existing solutions in terms of accuracy (85%) and F1-score (94.2%). Additionally, the proposed distortion identification model is able to run in real time with low inference time (0.15 sec). Conclusions: The proposed CNN-LSTM model is a feasible solution to be utilized in laparoscopic videos for distortion identification.
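The multi-label output stage described above can be sketched independently of the network: per-distortion probabilities are thresholded into a label set, which is then scored with F1 against the true distortions. The distortion names come from the abstract; the probabilities and threshold are illustrative assumptions, not model output.

```python
# Sketch of the multi-label step described above: per-distortion
# probabilities are thresholded into binary labels, then scored with F1.
# The probability values and 0.5 threshold are illustrative assumptions.
DISTORTIONS = ["noise", "smoke", "uneven_illumination",
               "defocus_blur", "motion_blur"]

def predict_labels(probs, threshold=0.5):
    """Threshold per-class probabilities into a multi-label prediction."""
    return [name for name, p in zip(DISTORTIONS, probs) if p >= threshold]

def f1_score(predicted, actual):
    """F1 between one sample's predicted and actual label sets."""
    tp = len(set(predicted) & set(actual))
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(actual)
    return 2 * precision * recall / (precision + recall)

probs = [0.9, 0.2, 0.7, 0.1, 0.4]   # hypothetical per-distortion outputs
pred = predict_labels(probs)         # ["noise", "uneven_illumination"]
print(f1_score(pred, ["noise", "smoke", "uneven_illumination"]))  # -> 0.8
```

Because a frame can suffer several distortions at once (e.g. smoke plus motion blur), each class gets its own independent threshold rather than a single argmax, which is what makes the problem multi-label.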


2019 ◽  
Vol 19 (10) ◽  
pp. 77a
Author(s):  
Paul S Scotti ◽  
Yoolim Hong ◽  
Andrew B Leber ◽  
Julie D Golomb

2019 ◽  
Author(s):  
Ciaran Docherty ◽  
Anthony J Lee ◽  
Amanda Hahn ◽  
Lisa Marie DeBruine ◽  
Benedict C Jones

Researchers have suggested that more attractive women will show stronger preferences for masculine men because such women are better placed to offset the potential costs of choosing a masculine mate. However, evidence for correlations between measures of women's own attractiveness and preferences for masculine men is mixed. Moreover, the samples used to test this hypothesis are typically relatively small. Consequently, we conducted two large-scale studies that investigated possible associations between women's preferences for facial masculinity and their own attractiveness, as assessed from third-party ratings of their facial attractiveness (Study 1, N = 454, laboratory study) and self-rated attractiveness (Study 2, N = 8972, online study). Own attractiveness was positively correlated with preferences for masculine men in Study 2 (self-rated attractiveness), but not Study 1 (third-party ratings of facial attractiveness). This pattern of results is consistent with the proposal that women's beliefs about their own attractiveness, rather than their physical condition per se, underpin attractiveness-contingent masculinity preferences.

