Predicting the future direction of cell movement with convolutional neural networks

2018 ◽  
Author(s):  
Shori Nishimoto ◽  
Yuta Tokuoka ◽  
Takahiro G Yamada ◽  
Noriko F Hiroi ◽  
Akira Funahashi

Summary: Image-based deep learning systems, such as convolutional neural networks (CNNs), have recently been applied to cell classification with impressive results; however, CNNs have so far been confined to classifying the current cell state from an image. Here, we focused on cell movement, where current and/or past cell shape can influence future cell fate. We demonstrate that CNNs prospectively predicted the future direction of cell movement with high accuracy from a single image patch of a cell at a given time. Furthermore, by visualizing the image features learned by the CNNs, we could identify morphological features, e.g., protrusions and the trailing edge, that have been experimentally reported to determine the direction of cell movement. Our results indicate that CNNs have the potential to predict future cell fate from current cell shape, and can be used to automatically identify the morphological features that influence it.
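The core idea of the abstract, mapping a single image patch to a discrete movement direction, can be sketched as a minimal forward pass. This is an illustrative toy, not the paper's trained CNN: the directional filters, the patch, and the four-way label set are all assumptions made for the example.

```python
import numpy as np

def conv2d_valid(patch, kernel):
    """Plain 2D cross-correlation with 'valid' padding."""
    kh, kw = kernel.shape
    ph, pw = patch.shape
    out = np.zeros((ph - kh + 1, pw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(patch[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def predict_direction(patch, filters, labels):
    # One score per directional filter: convolve, then global-average-pool.
    scores = np.array([conv2d_valid(patch, f).mean() for f in filters])
    probs = softmax(scores)
    return labels[int(np.argmax(probs))], probs

# Toy directional edge filters (right, left, up, down).
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
filters = [sobel_x, -sobel_x, sobel_x.T, -sobel_x.T]
labels = ["right", "left", "up", "down"]

# A patch whose intensity rises toward the right, mimicking a protrusion
# on the cell's right side.
patch = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
direction, probs = predict_direction(patch, filters, labels)
```

In the paper's setting the filters are learned from time-lapse data rather than hand-set; the sketch only shows how convolutional scores plus a softmax yield a probability over movement directions.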

PLoS ONE ◽  
2019 ◽  
Vol 14 (9) ◽  
pp. e0221245 ◽  
Author(s):  
Shori Nishimoto ◽  
Yuta Tokuoka ◽  
Takahiro G. Yamada ◽  
Noriko F. Hiroi ◽  
Akira Funahashi

Author(s):  
N Seijdel ◽  
N Tsakmakidis ◽  
EHF De Haan ◽  
SM Bohte ◽  
HS Scholte

Abstract: Feedforward deep convolutional neural networks (DCNNs) are, under specific conditions, matching and even surpassing human performance in object recognition in natural scenes. This performance suggests that analyzing a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Research in humans, however, suggests that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations (‘routines’) that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to the performance of DCNNs of increasing depth, we explored whether, how, and when object information is differentiated from the backgrounds it appears on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them, by adding noise, manipulating background congruence, and systematically occluding parts of the image. Results indicate that the distinction between object and background information increases with network depth. Shallower networks benefited from training on segmented objects. Overall, these results indicate that, de facto, scene segmentation can be performed by a network of sufficient depth. We conclude that the human brain could perform scene segmentation in the context of object identification without an explicit mechanism, by selecting or “binding” features that belong to the object and ignoring other features, in a manner similar to a very deep convolutional neural network.
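The three stimulus manipulations the abstract describes (controlling the background, adding noise, occluding part of the image) can be sketched with plain array operations. Array sizes, noise level, and the object/background values below are illustrative assumptions, not the study's actual parameters, and "congruence" here is reduced to a cluttered-versus-uniform background.

```python
import numpy as np

rng = np.random.default_rng(0)

def compose_scene(background, obj, top, left):
    """Paste an object patch onto a copy of the background."""
    scene = background.copy()
    h, w = obj.shape
    scene[top:top + h, left:left + w] = obj
    return scene

def add_noise(img, sigma):
    """Additive Gaussian pixel noise, clipped to [0, 1]."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def occlude(img, top, left, h, w):
    """Zero out a rectangular region, hiding part of the object."""
    out = img.copy()
    out[top:top + h, left:left + w] = 0.0
    return out

background = rng.uniform(0.0, 1.0, (32, 32))   # cluttered background
uniform = np.zeros((32, 32))                   # segmented-object condition
obj = np.full((8, 8), 0.9)                     # bright object patch

scene_cluttered = compose_scene(background, obj, 12, 12)
scene_segmented = compose_scene(uniform, obj, 12, 12)
scene_noisy = add_noise(scene_cluttered, sigma=0.1)
scene_occluded = occlude(scene_cluttered, 12, 12, 4, 8)
```

Stimulus sets built this way let one vary object and background information independently while holding the object itself fixed.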


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. WA27-WA39 ◽  
Author(s):  
Xinming Wu ◽  
Zhicheng Geng ◽  
Yunzhi Shi ◽  
Nam Pham ◽  
Sergey Fomel ◽  
...  

Seismic structural interpretation involves highlighting and extracting faults and horizons that appear as geometric features in a seismic image. Although seismic image processing methods have been proposed to automate fault and horizon interpretation, each still requires significant human effort. We improve automatic structural interpretation in seismic images by using convolutional neural networks (CNNs), which have recently shown excellent performance in detecting and extracting useful image features and objects. The main limitation of applying CNNs in seismic interpretation is the preparation of many training data sets and, especially, the corresponding geologic labels. Manually labeling geologic features in a seismic image is highly time-consuming and subjective, which often results in incompletely or inaccurately labeled training images. To solve this problem, we have developed a workflow to automatically build diverse structure models with realistic folding and faulting features. In this workflow, with some assumptions about typical folding and faulting patterns, we simulate structural features in a 3D model by using a set of parameters. By randomly choosing the parameters from predefined ranges, we can automatically generate numerous structure models with realistic and diverse structural features. From these structure models with known structural information, we further automatically create numerous synthetic seismic images and the corresponding ground truth of structural labels to train CNNs for structural interpretation in field seismic images. Accurate structural interpretation in multiple field seismic images indicates that our workflow simulates realistic and generalized structure models from which the CNNs effectively learn to recognize real structures in field images.
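The heart of the workflow, drawing random structural parameters and producing a model plus its known label, can be sketched in 2D: start from flat reflectors, fold them with a sinusoidal vertical shift, then insert a vertical fault whose position is recorded as the ground-truth mask. The parameter ranges and the single vertical fault below are placeholder assumptions, far simpler than the paper's 3D models.

```python
import numpy as np

rng = np.random.default_rng(1)
nz, nx = 64, 64

def flat_reflectivity(nz, nx, rng):
    """Random reflectivity that is constant along each depth row."""
    r = rng.uniform(-1.0, 1.0, nz)
    return np.tile(r[:, None], (1, nx))

def fold(model, amplitude, wavelength):
    """Shift each column vertically by a sinusoid to simulate folding."""
    nz, nx = model.shape
    out = np.zeros_like(model)
    for x in range(nx):
        shift = int(round(amplitude * np.sin(2 * np.pi * x / wavelength)))
        out[:, x] = np.roll(model[:, x], shift)
    return out

def fault(model, x0, throw):
    """Vertical fault at column x0: shift the right block down by `throw`."""
    out = model.copy()
    out[:, x0:] = np.roll(out[:, x0:], throw, axis=0)
    label = np.zeros_like(model)
    label[:, x0] = 1.0          # ground-truth fault mask
    return out, label

# Randomly drawn structural parameters -> one synthetic training pair.
amplitude = rng.uniform(3, 8)
wavelength = rng.uniform(30, 60)
x0 = int(rng.uniform(20, 44))
throw = int(rng.integers(3, 8))

image = flat_reflectivity(nz, nx, rng)
image = fold(image, amplitude, wavelength)
image, fault_label = fault(image, x0, throw)
```

Repeating the random draw yields an arbitrarily large set of (image, label) pairs with perfect, automatically known annotations, which is exactly the property that makes the synthetic-training approach attractive.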


2020 ◽  
Vol 6 (12) ◽  
pp. 129
Author(s):  
Mario Manzo ◽  
Simone Pellino

Malignant melanoma is the deadliest form of skin cancer, and its worldwide incidence rate has grown rapidly in recent years. The most effective route to targeted treatment is early diagnosis. Deep learning algorithms, specifically convolutional neural networks, provide a methodology for image analysis and representation. They optimize the feature design task, which is essential for an automatic approach to different types of images, including medical images. In this paper, we adopted pretrained deep convolutional neural network architectures for image representation with the aim of predicting skin lesion melanoma. First, we applied a transfer learning approach to extract image features. Second, we used the transferred features within an ensemble classification context. Specifically, the framework trains individual classifiers on balanced subspaces and combines the resulting predictions through statistical measures. Experiments on datasets of skin lesion images show the effectiveness of the proposed approach with respect to state-of-the-art competitors.
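The ensemble step described above can be sketched as: train one classifier per balanced subset of an imbalanced dataset, then combine the predictions by majority vote. This is a hedged illustration, not the paper's pipeline; a nearest-centroid classifier stands in for the actual learners, the features are synthetic rather than CNN-extracted, and all names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def balanced_subsets(X, y, n_subsets, rng):
    """Pair the minority class with an equal-size majority sample."""
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    for _ in range(n_subsets):
        picked = rng.choice(majority, size=minority.size, replace=False)
        idx = np.concatenate([minority, picked])
        yield X[idx], y[idx]

def fit_centroids(X, y):
    """A trivial stand-in classifier: one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict_centroids(centroids, X):
    d0 = np.linalg.norm(X - centroids[0], axis=1)
    d1 = np.linalg.norm(X - centroids[1], axis=1)
    return (d1 < d0).astype(int)

# Imbalanced toy data: 90 "benign" samples near 0, 10 "melanoma" near 3.
X = np.vstack([rng.normal(0, 1, (90, 5)), rng.normal(3, 1, (10, 5))])
y = np.array([0] * 90 + [1] * 10)

models = [fit_centroids(Xs, ys) for Xs, ys in balanced_subsets(X, y, 5, rng)]
votes = np.stack([predict_centroids(m, X) for m in models])
y_pred = (votes.mean(axis=0) >= 0.5).astype(int)   # majority vote
accuracy = float((y_pred == y).mean())
```

Balancing each subset keeps every individual learner from collapsing onto the majority class, while the vote across subsets recovers the information in the full majority sample.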


PeerJ ◽  
2020 ◽  
Vol 8 ◽  
pp. e8668 ◽  
Author(s):  
Liangqun Lu ◽  
Bernie J. Daigle

Histopathological images contain rich phenotypic descriptions of the molecular processes underlying disease progression. Convolutional neural networks (CNNs), state-of-the-art image analysis techniques in computer vision, automatically learn representative features from such images, which can be useful for disease diagnosis, prognosis, and subtyping. Hepatocellular carcinoma (HCC) is the sixth most common type of primary liver malignancy. Despite the high mortality rate of HCC, little previous work has used CNN models to explore histopathological images for prognosis and clinical survival prediction of HCC. We applied three pre-trained CNN models—VGG 16, Inception V3 and ResNet 50—to extract features from HCC histopathological images. Sample visualization and classification analyses based on these features showed a very clear separation between cancer and normal samples. In a univariate Cox regression analysis, 21.4% and 16% of image features on average were significantly associated with overall survival (OS) and disease-free survival (DFS), respectively. We also observed significant correlations between these features and integrated biological pathways derived from gene expression and copy number variation. Using an elastic net regularized Cox proportional hazards model of OS constructed from Inception image features, we obtained a concordance index (C-index) of 0.789 and a significant log-rank test (p = 7.6E−18). We also performed unsupervised classification to identify HCC subgroups from image features. The optimal two subgroups discovered using Inception model image features showed significant differences in both OS (C-index = 0.628 and p = 7.39E−07) and DFS (C-index = 0.558 and p = 0.012).
Our work demonstrates the utility of image features extracted with pre-trained models: they can be used to build accurate prognostic models of HCC, and they highlight significant correlations between image features, clinical survival, and relevant biological pathways. Image features extracted from HCC histopathological images using the pre-trained CNN models VGG 16, Inception V3 and ResNet 50 can accurately distinguish normal and cancer samples, and these features are significantly correlated with survival and relevant biological pathways.
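The concordance index reported above measures how often, over all comparable patient pairs, the model assigns the higher risk score to the patient whose event occurs earlier. A minimal pure-Python sketch of the standard definition (ties in risk counted as half-concordant; the toy cohort below is invented, not the paper's data):

```python
def concordance_index(times, events, risks):
    """C-index over pairs (i, j) where i has an observed event before j."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly ranked toy cohort: earlier events get higher predicted risk.
times = [2.0, 4.0, 6.0, 8.0]
events = [1, 1, 1, 0]          # the last patient is censored
risks = [0.9, 0.7, 0.4, 0.1]
cindex = concordance_index(times, events, risks)
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the 0.789 reported for the OS model indicates a strong, though not perfect, risk ordering.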


Author(s):  
Steven Walczak

Artificial neural networks (ANNs) have proven to be efficacious for modeling decision problems in medicine, including diagnosis, prognosis, resource allocation, and cost reduction problems. Research using ANNs to solve medical domain problems has been increasing steadily and continues to grow dramatically. This chapter examines recent trends and advances in ANNs, references a large portion of recent research, and looks at the future direction of ANN research in medicine.


2022 ◽  
pp. 1491-1509
Author(s):  
Steven Walczak

Artificial neural networks (ANNs) have proven to be efficacious for modeling decision problems in medicine, including diagnosis, prognosis, resource allocation, and cost reduction problems. Research using ANNs to solve medical domain problems has been increasing steadily and continues to grow dramatically. This chapter examines recent trends and advances in ANNs, references a large portion of recent research, and looks at the future direction of ANN research in medicine.


Author(s):  
Harsh Jindal ◽  
Jagdeep Kaur

LiDAR data have been used in many existing inventions and in many new ones. This paper reviews the current state of LiDAR technology and covers issues related to both data capture and processing. We discuss different types of LiDAR sensors, including LiDAR for autonomous vehicles, explain existing data-processing techniques, and give an overview of convolutional neural networks (CNNs). The paper also discusses autonomous LiDAR techniques in detail, along with the future scope of the technology.

