Artificial Intelligence Based On Image Caption Generation

2020 ◽  
Author(s):  
Prachi Waghmare ◽  
Dr. Swati Shinde


2020 ◽  
Vol 2020 ◽  
pp. 1-13 ◽  
Author(s):  
Haoran Wang ◽  
Yue Zhang ◽  
Xiaosheng Yu

In recent years, with the rapid development of artificial intelligence, image captioning has gradually attracted the attention of many researchers in the field and has become an interesting and arduous task. Image captioning, the automatic generation of natural language descriptions of the content observed in an image, is an important part of scene understanding and combines knowledge from computer vision and natural language processing. Its applications are extensive and significant, for example in human-computer interaction. This paper summarizes the related methods and focuses on the attention mechanism, which plays an important role in computer vision and has recently been widely used in image caption generation tasks. Furthermore, the advantages and shortcomings of these methods are discussed, and the commonly used datasets and evaluation criteria in this field are presented. Finally, the paper highlights some open challenges in the image captioning task.
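The attention mechanism the survey focuses on can be illustrated with a minimal sketch of additive (Bahdanau-style) attention over image region features, as commonly used in attention-based captioning decoders. All shapes, names, and parameters below are illustrative assumptions, not the method of any specific paper surveyed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(regions, hidden, W_r, W_h, v):
    """Weight image region features by relevance to the decoder state.

    regions: (k, d) feature vectors for k image regions
    hidden:  (h,)   current decoder hidden state
    Returns the attended context vector (d,) and the weights (k,).
    """
    # Score each region against the decoder state, then normalize.
    scores = np.tanh(regions @ W_r + hidden @ W_h) @ v   # (k,)
    weights = softmax(scores)                            # non-negative, sum to 1
    context = weights @ regions                          # weighted average of regions
    return context, weights

# Toy dimensions: 5 regions, 8-d features, 6-d decoder state, 4-d attention space.
k, d, h, a = 5, 8, 6, 4
regions = rng.normal(size=(k, d))
hidden = rng.normal(size=h)
W_r = rng.normal(size=(d, a))
W_h = rng.normal(size=(h, a))
v = rng.normal(size=a)

context, weights = attend(regions, hidden, W_r, W_h, v)
print(weights.round(3))
```

At each decoding step the context vector is fed to the word predictor, so the model can "look at" different image regions while generating different words.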


Symmetry ◽  
2018 ◽  
Vol 10 (11) ◽  
pp. 626 ◽  
Author(s):  
Zhibin Guan ◽  
Kang Liu ◽  
Yan Ma ◽  
Xu Qian ◽  
Tongkai Ji

Image caption generation is a fundamental task that builds a bridge between an image and its textual description, and it is drawing increasing interest in artificial intelligence. Images and textual sentences are viewed as two different carriers of information, symmetric and unified in describing the same visual scene. Existing image captioning methods rarely generate the final description sentence in a coarse-to-fine-grained way, which is how humans understand their surroundings, so the generated sentence sometimes describes only coarse-grained image content. We therefore propose a coarse-to-fine-grained hierarchical generation method for image captioning, named SDA-CFGHG, to address these two problems. The core of SDA-CFGHG is a sequential dual attention that fuses visual information of different granularities in a sequential manner. The advantage of SDA-CFGHG is that it achieves image captioning in a coarse-to-fine-grained way, so the generated sentence can capture details of the raw image to some degree. Moreover, we validate the strong performance of our method on benchmark datasets (MS COCO and Flickr) with several popular evaluation metrics: CIDEr, SPICE, METEOR, ROUGE-L, and BLEU.
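Of the metrics the abstract lists, BLEU is the simplest to sketch. The snippet below shows only the modified n-gram precision at BLEU's core, for one candidate caption against one reference; it is an illustration, not the SDA-CFGHG method or a full BLEU implementation (real BLEU combines several n-gram orders with a brevity penalty, and the example sentences are invented).

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, reference, n=1):
    """Fraction of candidate n-grams found in the reference,
    clipping each n-gram's count by its count in the reference."""
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    clipped = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

cand = "a dog runs on the grass".split()
ref = "a dog is running on the grass".split()
# 5 of the 6 candidate unigrams appear in the reference ("runs" does not).
print(round(modified_precision(cand, ref, 1), 3))  # 0.833
```

The clipping step is what distinguishes modified precision from plain precision: a candidate cannot inflate its score by repeating a word more often than the reference uses it.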


2018 ◽  
Vol 06 (10) ◽  
pp. 53-55 ◽  
Author(s):  
Sailee P. Pawaskar ◽  
J. A. Laxminarayana

Author(s):  
Feng Chen ◽  
Songxian Xie ◽  
Xinyi Li ◽  
Jintao Tang ◽  
Kunyuan Pang ◽  
...  

Author(s):  
Xinyuan Qi ◽  
Zhiguo Cao ◽  
Yang Xiao ◽  
Jian Wang ◽  
Chao Zhang
