Layered image representations and the computation of surface lightness

2008 ◽  
Vol 8 (7) ◽  
pp. 18 ◽  
Author(s):  
Barton L. Anderson ◽  
Jonathan Winawer
2020 ◽  
pp. 003329412094560
Author(s):  
Jennifer Murray ◽  
Brian Williams

If illness behaviour is to be fully understood, the social and behavioural sciences must work together to examine the wider forms in which illness is experienced and communicated by individuals and society. The current paper synthesised literature across the social and behavioural sciences exploring illness experience and communication through physical and mental images, arguing that images may have the capacity to embody and influence beliefs, emotions, and health outcomes. Four commonalities facilitate understandings of illness behaviour across the fields: (i) the importance of the patient perspective; (ii) perceptions of the cause of, sense of identity with, consequences of, and level of control over the illness; (iii) health beliefs influencing illness experience, behaviours, and outcomes; and (iv) an almost exclusive focus on the written or spoken word in understanding illness beliefs and experiences. We focus on the fourth commonality, exploring the role of images in illness behaviour, because of the proliferation of interventions using image-based approaches. While these novel approaches show merit, there is a scarcity of theoretical underpinnings and of exploration into how such interventions are developed and into how people perceive and understand their own illnesses through image representations. The current paper identified that the use of images can elucidate patient and practitioner understandings of illness, facilitate communication, and potentially influence illness behaviours. It further identified commonalities across the social and behavioural sciences that could support theory-informed understandings of illness behaviour, applicable to visual intervention development to improve health outcomes.


Cancers ◽  
2021 ◽  
Vol 13 (13) ◽  
pp. 3106
Author(s):  
Yogesh Kalakoti ◽  
Shashank Yadav ◽  
Durai Sundar

The utility of multi-omics in personalized therapy and cancer survival analysis has been debated and demonstrated extensively in the recent past. Most current methods still suffer from data constraints such as high dimensionality, unexplained interdependence, and subpar integration methods. Here, we propose SurvCNN, an alternative approach that processes multi-omics data with robust computer vision architectures to predict cancer prognosis for lung adenocarcinoma patients. Numerical multi-omics data were transformed into image representations and fed into a convolutional neural network with a discrete-time model to predict survival probabilities. The framework also dichotomized patients into risk subgroups based on their survival probabilities over time. SurvCNN was evaluated on multiple performance metrics and outperformed existing methods with a high degree of confidence. Moreover, the relative performance of various combinations of omics datasets was comprehensively probed. Critical biological processes, pathways, and cell types identified from downstream processing of differentially expressed genes suggested that the framework could elucidate elements detrimental to a patient's survival. Such integrative models with high predictive power would have a significant impact and utility in precision oncology.
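The two ingredients the abstract names can be sketched compactly: mapping a numerical omics vector onto a 2-D "image" grid, and a discrete-time survival model in which the survival probability at interval k is the product of (1 − hazard) over intervals up to k. This is a minimal NumPy sketch under assumed conventions (zero-padded row-major pixel layout, hazards already predicted by the network); the paper's actual feature-to-pixel embedding and CNN head are more elaborate.

```python
import numpy as np

def omics_to_image(features, size=8):
    """Map a 1-D omics feature vector onto a size x size grid,
    zero-padding or truncating as needed (hypothetical layout)."""
    img = np.zeros(size * size)
    n = min(len(features), size * size)
    img[:n] = np.asarray(features)[:n]
    return img.reshape(size, size)

def survival_from_hazards(hazards):
    """Discrete-time survival: S(t_k) = prod_{j<=k} (1 - h_j),
    where h_j is the predicted hazard for interval j."""
    return np.cumprod(1.0 - np.asarray(hazards))
```

For example, per-interval hazards of 0.1 and 0.2 give survival probabilities of 0.9 and 0.72 at the two interval boundaries; thresholding such curves is one way to dichotomize patients into risk subgroups.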


2016 ◽  
Vol 16 (11) ◽  
pp. 17 ◽  
Author(s):  
Christiane B. Wiebel ◽  
Manish Singh ◽  
Marianne Maertens
2016 ◽  
Vol 16 (12) ◽  
pp. 817
Author(s):  
Christiane Wiebel ◽  
Guillermo Aguilar ◽  
Marianne Maertens

2020 ◽  
Vol 34 (05) ◽  
pp. 9571-9578 ◽  
Author(s):  
Wei Zhang ◽  
Yue Ying ◽  
Pan Lu ◽  
Hongyuan Zha

Personalized image captioning, a natural extension of the standard image captioning task, requires generating brief image descriptions tailored to users' writing styles and traits, and is more practical for meeting users' real demands. Only a few recent studies shed light on this crucial task, and they learn static user representations to capture long-term literal-preference. However, this is insufficient for satisfactory performance because users exhibit not only long-term literal-preference but also short-term literal-preference associated with their recent states. To bridge this gap, we develop a novel multimodal hierarchical transformer network (MHTN) for personalized image captioning. It learns short-term user literal-preference from users' recent captions through a short-term user encoder at the low level. At the high level, a multimodal encoder integrates target image representations with the short-term literal-preference, as well as long-term literal-preference learned from user IDs. Both encoders enjoy the advantages of powerful transformer networks. Extensive experiments on two real datasets show the effectiveness of considering the two types of user literal-preference simultaneously, and better performance than state-of-the-art models.
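The fusion step described above, combining an image representation with short-term preference (from recent captions) and long-term preference (from a user-ID embedding), can be illustrated with a toy sketch. This is an assumption-laden stand-in: mean pooling replaces the paper's short-term transformer encoder, a random lookup table replaces learned ID embeddings, and concatenation stands in for the multimodal encoder's attention-based integration.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4                                   # toy embedding size
user_table = rng.normal(size=(10, DIM))   # long-term preference per user ID

def short_term_pref(recent_caption_embs):
    """Stand-in for the short-term user encoder: pool embeddings of
    the user's recent captions (the paper uses a transformer here)."""
    return np.mean(recent_caption_embs, axis=0)

def fuse(image_repr, user_id, recent_caption_embs):
    """Stand-in for the multimodal encoder input: join image features
    with short- and long-term literal-preference vectors."""
    return np.concatenate([image_repr,
                           short_term_pref(recent_caption_embs),
                           user_table[user_id]])
```

A caption decoder conditioned on this fused vector would then produce user-tailored descriptions; the key design point is that the short-term component changes with the user's recent captions while the long-term component stays tied to the ID.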

