Fluorescent Texture: Proposal of a 2-3D Automatic Annotation Method for Deep Learning

2022 ◽  
Vol 40 (1) ◽  
pp. 71-82
Author(s):  
Shogo Okano ◽  
Tatsuhito Makino ◽  
Kosei Demura


Author(s):  
Guibin Wu ◽  
Junjie Zhou ◽  
Yongping Xiong ◽  
Chaoyi Zhou ◽  
Chong Li

Abstract Using deep learning networks to recognize tables has attracted much attention. However, due to the lack of high-quality table datasets, the performance of deep learning networks is limited. Therefore, TableRobot, an automatic annotation method for heterogeneous tables, has been proposed. Specifically, the annotations of a table consist of the coordinates of each item block and the mapping between item blocks and table cells. To transform the task, we design an algorithm based on a greedy approach to find the optimal solution. To evaluate the performance of TableRobot, we checked the annotation data of 3,000 tables collected from LaTeX documents on arXiv.org, and the results show that TableRobot can generate table annotation datasets with an accuracy of 93.2%. Moreover, when the table annotation data were fed into GraphTSR, a state-of-the-art graph neural network for table recognition, the F1 score of the network increased by nearly 10%.
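The greedy matching step described above can be illustrated with a minimal sketch: given bounding boxes for text item blocks and for table cells, assign each block to the cell it overlaps most. The function names and toy coordinates below are illustrative assumptions; the abstract does not publish TableRobot's actual algorithm.

```python
# Hypothetical sketch of greedy item-block-to-cell matching.
# Boxes are (x0, y0, x1, y1) tuples in image coordinates.

def overlap_area(a, b):
    """Intersection area of two axis-aligned boxes."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def greedy_assign(item_blocks, cells):
    """Greedily map each item-block index to the cell index it overlaps most."""
    mapping = {}
    for i, block in enumerate(item_blocks):
        scores = [overlap_area(block, cell) for cell in cells]
        best = max(range(len(cells)), key=lambda j: scores[j])
        if scores[best] > 0:  # skip blocks that touch no cell
            mapping[i] = best
    return mapping

# Toy example: two cells side by side, one item block inside each.
cells = [(0, 0, 50, 20), (50, 0, 100, 20)]
blocks = [(5, 5, 45, 15), (55, 5, 95, 15)]
print(greedy_assign(blocks, cells))  # {0: 0, 1: 1}
```

A real implementation would also have to break ties and handle blocks spanning merged cells, which is where the paper's optimisation over all blocks comes in.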


Author(s):  
Yu-Xiang Zhao ◽  
Yi-Zeng Hsieh ◽  
Shih-Syun Lin

With advances in technology, photo booths equipped with automatic capturing systems have gradually replaced the identification (ID) photo services provided by photography studios, enabling consumers to save considerable time and money. Common automatic capturing systems employ text and voice instructions to guide users in capturing their ID photos; however, the captured results may not conform to ID photo specifications. To address this issue, this study proposes an ID photo capturing algorithm that automatically detects facial contours and adjusts the size of captured images. The authors adopted a deep learning method (You Only Look Once, YOLO) to detect the face and applied a semi-automatic facial-landmark annotation technique to locate the lip and chin regions within the facial region. In the experiments, subjects were seated at various distances and heights to test the performance of the proposed algorithm. The experimental results show that the proposed algorithm can effectively and accurately capture ID photos that satisfy the required specifications.
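The size-adjustment step can be sketched as pure geometry: given a face box from a detector such as YOLO, compute a crop window in which the face spans a fixed fraction of the photo at an ID aspect ratio. The `head_ratio` and 35x45 mm aspect below are illustrative assumptions, not the paper's exact specification.

```python
# Hypothetical sketch: derive an ID-photo crop from a detected face box.

def id_photo_crop(face_box, img_w, img_h, head_ratio=0.6, aspect=35 / 45):
    """Return an (x0, y0, x1, y1) crop, centred on the face, whose height
    makes the face span `head_ratio` of the photo at the given aspect."""
    fx0, fy0, fx1, fy1 = face_box
    face_h = fy1 - fy0
    crop_h = face_h / head_ratio      # photo height implied by the ratio
    crop_w = crop_h * aspect          # passport-style 35:45 width
    cx = (fx0 + fx1) / 2              # centre the crop on the face
    cy = (fy0 + fy1) / 2
    x0 = max(0, cx - crop_w / 2)
    y0 = max(0, cy - crop_h / 2)
    return (x0, y0, min(img_w, x0 + crop_w), min(img_h, y0 + crop_h))

# Example: a 200x270 px face detected in a 1920x1080 frame.
box = id_photo_crop((400, 300, 600, 570), img_w=1920, img_h=1080)
```

The proposed algorithm additionally uses the lip and chin landmarks to align the crop vertically, which a face box alone cannot guarantee.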


2019 ◽  
Vol 8 (2S8) ◽  
pp. 1346-1350

The research literature on sentiment analysis methodologies has grown exponentially in recent years. In any research area where new concepts and techniques are constantly introduced, it is therefore of interest to analyze the latest trends in the literature. We have chosen to focus primarily on the literature of the last five years, on annotation methodologies, including frequently used datasets and the sources from which they were obtained. Based on this survey, it appears that researchers mostly rely on manual annotation when building sentiment corpora. As for datasets, English-language data taken from social media such as Twitter still dominate. Much in this area remains to be explored, such as semi-automatic annotation methods, which are still rarely used by researchers. In addition, less widely studied languages, such as Malay, Korean, and Japanese, still lack corpora for sentiment analysis research.


Author(s):  
Réka Hollandi ◽  
Ákos Diósdi ◽  
Gábor Hollandi ◽  
Nikita Moshkov ◽  
Péter Horváth

Abstract AnnotatorJ combines single-cell identification with deep learning and manual annotation. The quality of cellular analysis depends on accurate and reliable detection and segmentation of cells so that subsequent analysis steps, e.g. expression measurements, can be carried out precisely and without bias. Deep learning has recently become a popular way of segmenting cells, performing far better than conventional methods. However, such deep learning applications must be trained on large amounts of annotated data to meet the highest expectations. High-quality annotations are unfortunately expensive, as they require field experts to create them, and often cannot be shared outside the lab due to medical regulations. We propose AnnotatorJ, an ImageJ plugin for the semi-automatic annotation of cells (or, generally, objects of interest) in 2D on (not only) microscopy images, which helps find the true contour of individual objects by applying U-Net-based pre-segmentation. The manual labour of hand-annotating cells can be significantly accelerated with our tool, enabling users to create datasets that can increase the accuracy of state-of-the-art solutions, deep learning or otherwise, when used as training data.


Author(s):  
A. Mahmood ◽  
M. Bennamoun ◽  
S. An ◽  
F. Sohel ◽  
F. Boussaid ◽  
...  

Microscopy ◽  
2021 ◽  
Author(s):  
Kohki Konishi ◽  
Takao Nonaka ◽  
Shunsuke Takei ◽  
Keisuke Ohta ◽  
Hideo Nishioka ◽  
...  

Abstract Three-dimensional (3D) observation of biological samples using serial-section electron microscopy is widely used. However, organelle segmentation requires a significant amount of manual time, and several studies have therefore been conducted to improve its efficiency. One promising method is 3D deep learning (DL), which is highly accurate. However, creating training data for 3D DL still requires manual time and effort. In this study, we developed a highly efficient integrated image segmentation tool that combines stepwise DL with manual correction. The tool has four functions: efficient tracers for annotation, model training/inference for organelle segmentation using a lightweight convolutional neural network, efficient proofreading, and model refinement. We applied this tool to increase the training data step by step (the stepwise annotation method) to segment the mitochondria in cells of the cerebral cortex. We found that the stepwise annotation method reduced the manual operation time by one-third compared with the fully manual method, in which all the training data were created manually. Moreover, we demonstrated an F1 score (the metric of segmentation accuracy) of 0.9 when the 3D DL model was trained with these training data. The stepwise annotation method using this tool and the 3D DL model improved the segmentation efficiency for various organelles.
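The stepwise annotation method above can be sketched as a loop: annotate a small seed set fully by hand, train, let the model pre-segment the next batch, and pay manual effort only for corrections, which then join the training set. The `train`, `segment`, and `correct` functions below are illustrative stand-ins for the tool's CNN training, inference, and proofreading steps; the set-based "pixels" are a toy abstraction.

```python
# Stand-in sketch of the stepwise annotation loop.

def train(dataset):
    """Stand-in trainer: the "model" remembers labelled pixels seen so far."""
    return set(dataset)

def segment(model, image):
    """Stand-in inference: predict foreground where the model has seen it."""
    return {px for px in image if px in model}

def correct(prediction, ground_truth):
    """Stand-in proofreading: the annotator fixes only the differences."""
    fixes = len(prediction ^ ground_truth)  # symmetric difference
    return ground_truth, fixes

training_set = {1, 2, 3}        # small fully manual seed annotation
manual_effort = len(training_set)
for image, truth in [({1, 2, 4}, {1, 2, 4}), ({2, 4, 5}, {2, 4, 5})]:
    model = train(training_set)
    pred = segment(model, image)
    fixed, n_fixes = correct(pred, truth)
    manual_effort += n_fixes    # only the corrections cost manual time
    training_set |= fixed       # corrected batch joins the training data
```

As the model improves round by round, the per-batch correction cost shrinks, which is the source of the reported time saving.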


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1110
Author(s):  
Nathalie Neptune ◽  
Josiane Mothe

Earth observation satellites have been capturing a variety of data about our planet for several decades, enabling many environmental applications such as change detection. Recently, deep learning methods have been proposed for urban change detection, but limited work has applied such methods to the annotation of unlabeled images in the case of change detection in forests. This annotation task consists of predicting semantic labels for a given image of a forested area where change has been detected. Currently proposed methods typically do not provide semantic information beyond the change that is detected. To address these limitations, we first demonstrate that deep learning methods can effectively detect changes in a forested area from a pair of pre- and post-change satellite images. We then show that, by using visual semantic embeddings, we can automatically annotate the change images with labels extracted from scientific documents related to the study area. We investigated the effect of different corpora and found that the best performance in the annotation prediction task is reached with a corpus that is related to the type of change of interest and is of medium size (over ten thousand documents).
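The annotation-by-embedding idea can be sketched minimally: project the change image and each candidate label (mined from the document corpus) into a shared embedding space and rank labels by cosine similarity to the image. The two-dimensional vectors and label names below are toy stand-ins for learned visual and text embeddings, not the paper's data.

```python
# Toy sketch of label ranking in a shared visual-semantic embedding space.
import math

def cosine(a, b):
    """Cosine similarity of two 2D vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def annotate(image_vec, label_vecs):
    """Return candidate labels sorted by similarity to the change image."""
    return sorted(label_vecs, key=lambda lbl: -cosine(image_vec, label_vecs[lbl]))

# Hypothetical label embeddings extracted from a domain corpus.
labels = {
    "deforestation": (0.9, 0.1),
    "regrowth": (0.1, 0.9),
    "fire damage": (0.7, 0.3),
}
ranking = annotate((0.8, 0.2), labels)
print(ranking[0])  # deforestation
```

The corpus-size finding then amounts to choosing where the label embeddings come from: a mid-sized, change-related corpus gives label vectors that separate best in this shared space.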

