Evaluating Frequency of Words and Word Cloud from Astrological Sentiments Using NLP

Author(s):  
C. N. V. B. R. Sri Gowrinath ◽  
Dr. Ch. V. M. K. Hari ◽  
Prof. P. G. V. D. Prasad Reddy

Identifying interest or disinterest in a notion is in high demand in today's competitive data-analytics world: examples include customer preferences across seasons and estimates of visitors to a tourist destination under scenarios such as weather and special occasions. When an opinion is given on any concept, natural language in the form of sentences, words, symbols, or ratings plays a vital role. Depending on the context and usage of the natural language, captured opinions can be interpreted in either a positive or a negative sense, and the terminology used to express them eases the analysis of the data. Word frequencies and the word cloud can be evaluated accurately only after a careful analysis of the collected opinions. The Term-Document Matrix is one technique that identifies the frequency of words in each document (row) of a given dataset and can be used to generate the word cloud. In this paper, distinct Natural Language Processing (NLP) techniques are used to identify the frequency of words in opinions given by personalities from multiple domains on astrology, and a word cloud is generated from the set of words in the astrological dataset.
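As a minimal sketch of the pipeline the abstract describes, the snippet below builds a term-document matrix with scikit-learn's CountVectorizer, sums its columns into word frequencies, and renders a word cloud with the wordcloud package. The sample opinions are invented placeholders, since the astrological dataset itself is not reproduced here.

```python
# Minimal sketch: term-document matrix -> word frequencies -> word cloud.
# The example opinions are invented stand-ins for the astrological dataset.
from sklearn.feature_extraction.text import CountVectorizer
from wordcloud import WordCloud

opinions = [
    "Astrology predictions felt accurate this season",
    "I do not trust astrological forecasts at all",
    "The horoscope gave a positive outlook for travel",
]

# Rows = documents, columns = terms: the term-document matrix.
vectorizer = CountVectorizer(stop_words="english")
tdm = vectorizer.fit_transform(opinions)

# Sum each column to get the corpus-wide frequency of every word.
frequencies = dict(zip(vectorizer.get_feature_names_out(),
                       tdm.sum(axis=0).A1))

# Generate a word cloud image from the frequency table.
cloud = WordCloud(width=800, height=400).generate_from_frequencies(frequencies)
cloud.to_file("astrology_wordcloud.png")
```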

Author(s):  
Santosh Kumar Mishra ◽  
Rijul Dhir ◽  
Sriparna Saha ◽  
Pushpak Bhattacharyya

Image captioning is the process of generating a textual description of an image that aims to describe its salient parts. It is an important problem, as it involves both computer vision and natural language processing, where computer vision is used for understanding images and natural language processing for language modeling. A great deal of work has been done on image captioning for the English language. In this article, we develop a model for image captioning in the Hindi language. Hindi is the official language of India and the fourth most spoken language in the world, spoken across India and South Asia. To the best of our knowledge, this is the first attempt to generate image captions in Hindi. A dataset was manually created by translating the well-known MSCOCO dataset from English to Hindi. Finally, different types of attention-based architectures are developed for image captioning in Hindi. These attention mechanisms are new for Hindi, as they have never before been applied to the language. The results of the proposed model are compared with several baselines in terms of BLEU scores, and they show that our model performs better than the others. Manual evaluation of the obtained captions in terms of adequacy and fluency also reveals the effectiveness of our proposed approach. Availability of resources: The code for the article is available at https://github.com/santosh1821cs03/Image_Captioning_Hindi_Language ; the dataset will be made available at http://www.iitp.ac.in/∼ai-nlp-ml/resources.html .
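The paper reports BLEU scores; as a rough sketch of how such an evaluation is computed, NLTK's corpus_bleu can score generated Hindi captions against reference captions. The tokens below are invented placeholders, not items from the translated MSCOCO data.

```python
# Sketch of a BLEU-4 evaluation with NLTK; the Hindi tokens are illustrative.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [[["एक", "आदमी", "घोड़े", "पर", "सवार", "है"]]]  # one reference per image
hypotheses = [["आदमी", "घोड़े", "पर", "सवार", "है"]]          # model-generated caption

smooth = SmoothingFunction().method1  # avoids zero scores on short captions
bleu4 = corpus_bleu(references, hypotheses,
                    weights=(0.25, 0.25, 0.25, 0.25),
                    smoothing_function=smooth)
print(f"BLEU-4: {bleu4:.3f}")
```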


2019 ◽  
Vol 8 (2S3) ◽  
pp. 1014-1018

This paper elaborates on the transition system that underlies standard transition-based dependency parsing techniques for generating graphs. Knowledge of these standard transition techniques is essential for all graph-based problems. The cache transition technique plays a vital role in optimizing the search process in various text mining applications. This paper provides an overview of the cache transition technique for parsing semantic graphs in several Natural Language Processing (NLP) applications. The cache has a fixed size m, and by tree decomposition theory there is a relationship between the parameter m and the class of graphs the system can produce.
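The cache idea can be pictured with a short toy sketch. The class below is a schematic illustration of a fixed-size cache in a transition-based graph parser, not the paper's exact transition system: arcs can only be drawn between the m cached tokens, which is why tree decomposition theory ties m to the class of graphs the parser can produce.

```python
# Toy fixed-size cache in a transition-based semantic-graph parser.
class CacheTransitionState:
    def __init__(self, tokens, m=3):
        self.m = m                  # fixed cache size, as in the abstract
        self.cache = []             # at most m tokens visible to arc actions
        self.buffer = list(tokens)  # remaining input
        self.edges = set()          # semantic graph built so far

    def shift(self):
        """Move the next buffer token into the cache, evicting if full."""
        if len(self.cache) == self.m:
            self.cache.pop(0)       # toy eviction policy
        self.cache.append(self.buffer.pop(0))

    def arc(self, i, j, label):
        """Add a labeled edge between two tokens currently in the cache."""
        self.edges.add((self.cache[i], label, self.cache[j]))

state = CacheTransitionState(["dogs", "chase", "cats"], m=2)
state.shift(); state.shift()
state.arc(1, 0, "ARG1")             # "chase" -> "dogs"
state.shift()                       # "dogs" is evicted; "cats" enters
state.arc(0, 1, "ARG2")             # "chase" -> "cats"
print(state.edges)
```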


Author(s):  
Binh Nguyen ◽  
Binh Le ◽  
Long H.B. Nguyen ◽  
Dien Dinh

Word representation plays a vital role in most Natural Language Processing systems, especially Neural Machine Translation. It tends to capture the semantics of, and the similarity between, individual words well, but struggles to represent the meaning of phrases or multi-word expressions. In this paper, we investigate a method to generate and use phrase information in a translation model. To generate phrase representations, a Primary Phrase Capsule network is first employed; the representations are then iteratively enhanced with a Slot Attention mechanism. Experiments on the IWSLT English-to-Vietnamese, English-to-French, and English-to-German datasets show that our proposed method consistently outperforms the baseline Transformer and attains results competitive with the scaled Transformer while using half as many parameters.
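A minimal sketch of one Slot Attention refinement step, the mechanism the abstract uses to iteratively enhance phrase representations, is given below in PyTorch. It follows the published Slot Attention formulation (softmax over slots, then a GRU update) rather than the authors' exact code, and all dimensions are illustrative.

```python
# One Slot Attention refinement step: slots (phrase candidates) compete
# over the input word vectors, then each slot is updated by a GRU cell.
import torch
import torch.nn as nn

class SlotAttentionStep(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)   # projects slots
        self.k = nn.Linear(dim, dim)   # projects input word vectors
        self.v = nn.Linear(dim, dim)
        self.gru = nn.GRUCell(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, slots, inputs):
        # slots: (num_slots, dim), inputs: (seq_len, dim)
        attn = torch.softmax(self.q(slots) @ self.k(inputs).T * self.scale, dim=0)
        attn = attn / attn.sum(dim=1, keepdim=True)  # normalize per slot
        updates = attn @ self.v(inputs)              # weighted word mixture
        return self.gru(updates, slots)              # refine each slot

step = SlotAttentionStep(dim=64)
slots = step(torch.randn(4, 64), torch.randn(10, 64))  # 4 phrase slots, 10 words
```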


Author(s):  
Jayashree Rajesh ◽  
Priya Chitti Babu

In the current machine-centric world, humans expect a lot from machines, starting with being woken up by them. We expect them to perform activities such as alerting us about traffic, tracking our appointments, and more. The smart devices we carry have a constructive impact on our day-to-day lives, yet many of us have never thought about the communication between ourselves and these devices, or about the language we use for it. Natural language processing runs behind all these activities and currently plays a vital role in communication with humans through virtual assistants like Alexa and Siri and search engines like Bing and Google. In effect, we talk to machines as if they were human. Advanced natural language processing techniques have drastically changed the way we discover and interact with data, and the same techniques are now primarily applied to data analysis in business intelligence tools. This chapter elaborates on the significance of natural language processing in business intelligence.


Computers ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 22
Author(s):  
Frederik Bäumer ◽  
Joschka Kersting ◽  
Michaela Geierhos

The vision of On-the-Fly (OTF) Computing is to compose and provide software services ad hoc, based on requirement descriptions in natural language. Since non-technical users write their software requirements themselves and in unrestricted natural language, deficits such as inaccuracy and incompleteness occur. These deficits are usually addressed by natural language processing methods, which face special challenges in OTF Computing because maximum automation is the goal. In this paper, we present current automatic approaches for resolving inaccuracies and incompletenesses in natural language requirement descriptions and elaborate on open challenges. In particular, we discuss the necessity of domain-specific resources and show why, despite far-reaching automation, an intelligent and guided integration of end users into the compensation process is required. In this context, we present our idea of a chatbot that integrates users into the compensation process depending on the given circumstances.
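As an illustration of what automatic compensation has to detect, the toy sketch below flags indicator terms for inaccuracy and incompleteness in a requirement sentence. The indicator lists are hypothetical stand-ins for the domain-specific resources the paper argues are necessary, and the final flag mirrors the idea of handing unclear cases over to a chatbot dialog.

```python
# Toy detector for deficits in a natural-language requirement.
import re

# Hypothetical indicator lists, standing in for domain-specific resources.
VAGUE = {"fast", "user-friendly", "appropriate", "flexible", "easy"}
INCOMPLETE = {"etc", "tbd"}

def analyse_requirement(text: str) -> dict:
    tokens = re.findall(r"[a-z-]+", text.lower())
    return {
        "inaccuracy": [t for t in tokens if t in VAGUE],
        "incompleteness": [t for t in tokens if t in INCOMPLETE],
        # Cases flagged here would be routed to a clarifying user dialog.
        "needs_user_dialog": any(t in VAGUE | INCOMPLETE for t in tokens),
    }

print(analyse_requirement(
    "The tool should be fast, user-friendly, support CSV export, etc."))
```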


Blockchain was initially used mainly in cryptocurrency technologies. Prior to the 21st century, there were no technologies for determining a person's health in this way; at the dawn of the 21st century, machine learning began to play a vital role in determining a person's health using various algorithms and natural language processing techniques. Every machine learning technique needs data to work, and data sharing plays a vital role in improving the accuracy of the techniques involved. Blockchain technology is central to this aspect: merging the two yields highly accurate machine learning results combined with the privacy and reliability of blockchain. The approach uses natural language processing techniques and focuses mainly on healthcare tasks such as cancer detection and the prediction of machines used in healthcare. The technology can also be used to provide medical suggestions to doctors based on the condition of the patient, and the accuracy of the method can be increased further by providing as much data as possible. This combination of blockchain and machine learning algorithms can be used widely in healthcare, where the data is highly secured and there is no fear of data loss. This paper discusses how combining these two technologies can be helpful in healthcare.
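As a concrete picture of the blockchain half of this proposal, the sketch below chains hashed (hypothetical) patient records so that shared training data becomes tamper-evident. It is a toy illustration under those assumptions, not the paper's system or a production ledger.

```python
# Toy hash chain: each block commits to the previous one, so any edit to an
# earlier medical record breaks the chain and is detectable.
import hashlib, json, time

def make_block(record: dict, prev_hash: str) -> dict:
    block = {"record": record, "prev_hash": prev_hash, "ts": time.time()}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

chain = [make_block({"patient": "anon-001", "label": "benign"}, prev_hash="0" * 64)]
chain.append(make_block({"patient": "anon-002", "label": "malignant"},
                        prev_hash=chain[-1]["hash"]))
print(chain[-1]["hash"])  # integrity anchor for the shared ML dataset
```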


2021 ◽  
pp. 233-252
Author(s):  
Upendar Rao Rayala ◽  
Karthick Seshadri

Sentiment analysis is perceived to be a multi-disciplinary research domain spanning machine learning, artificial intelligence, deep learning, image processing, and social networks. Sentiment analysis can be used to determine the public's opinions about products and to gauge customers' interest and feedback through social networks. To perform any natural language processing task, the input text or comments must be represented in numerical form. Word embeddings represent the given text, sentences, or words as vectors that can be employed in subsequent natural language processing tasks. In this chapter, the authors discuss different techniques that can improve the performance of sentiment analysis using concepts and techniques like traditional word embeddings, sentiment embeddings, emoticons, lexicons, and neural networks. The chapter also traces the evolution of word embedding techniques with a chronological discussion of recent research advancements in the area.
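As a minimal sketch of the embedding step described above, the snippet below trains Word2Vec with gensim on an invented comment corpus and averages word vectors into a comment-level feature vector. The corpus, dimensions, and averaging scheme are illustrative assumptions, not the chapter's specific setup.

```python
# Word embeddings as numerical input for a downstream sentiment classifier.
from gensim.models import Word2Vec
import numpy as np

corpus = [
    ["the", "camera", "quality", "is", "amazing"],
    ["battery", "life", "is", "terrible"],
    ["amazing", "battery", "and", "great", "camera"],
]

model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=50)

def embed(comment):
    """Represent a comment as the mean of its word vectors."""
    vecs = [model.wv[w] for w in comment if w in model.wv]
    return np.mean(vecs, axis=0)

features = embed(["amazing", "camera"])  # feed this to a sentiment classifier
```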


Procedures play a vital role in high-risk systems by ensuring that the required level of system performance is achieved while keeping the associated risks below an acceptable level. An effective procedure has a low level of task complexity, i.e., few elements contributing to a multiplicity of paths or outcomes, uncertainty in information, interdependency, a multiplicity of instructions or objects of instructions, and an excess amount of information. As part of this study, we developed a Natural Language Processing algorithm that can identify these elements in a procedure. The algorithm was tested on a dataset of 20 procedures, and the results were found to be promising.
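A minimal sketch of such an algorithm is shown below: it flags two of the listed complexity elements, branching (multiplicity of paths) and uncertainty in information, via trigger-word matching. The marker lists and the sample step are hypothetical, not the study's actual rules or data.

```python
# Toy detector for complexity elements in a single procedure step.
import re

BRANCH_MARKERS = {"if", "otherwise", "unless", "or"}
UNCERTAINTY_MARKERS = {"approximately", "about", "may", "might", "roughly"}

def complexity_elements(step: str) -> dict:
    tokens = set(re.findall(r"[a-z-]+", step.lower()))
    return {
        "branching": sorted(tokens & BRANCH_MARKERS),
        "uncertainty": sorted(tokens & UNCERTAINTY_MARKERS),
    }

print(complexity_elements(
    "If pressure exceeds approximately 90 psi, open valve A or valve B."))
# -> {'branching': ['if', 'or'], 'uncertainty': ['approximately']}
```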


2020 ◽  
Vol 34 (05) ◽  
pp. 7448-7455
Author(s):  
Zied Bouraoui ◽  
Jose Camacho-Collados ◽  
Luis Espinosa-Anke ◽  
Steven Schockaert

While many methods for learning vector space embeddings have been proposed in the field of Natural Language Processing, these methods typically do not distinguish between categories and individuals. Intuitively, if individuals are represented as vectors, we can think of categories as (soft) regions in the embedding space. Unfortunately, meaningful regions can be difficult to estimate, especially since we often have few examples of individuals that belong to a given category. To address this issue, we rely on the fact that different categories are often highly interdependent. In particular, categories often have conceptual neighbors, which are disjoint from but closely related to the given category (e.g., fruit and vegetable). Our hypothesis is that more accurate category representations can be learned by relying on the assumption that the regions representing such conceptual neighbors should be adjacent in the embedding space. We propose a simple method for identifying conceptual neighbors and then show that incorporating these conceptual neighbors indeed leads to more accurate region-based representations.
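The region intuition can be sketched in a few lines: fit a soft (Gaussian) region to a handful of category vectors and let membership be decided against the adjacent region of a conceptual neighbor. The random vectors below stand in for real word embeddings, and the sketch captures only the underlying intuition, not the paper's full method.

```python
# Categories as soft regions: one Gaussian per category, sharpened by
# contrast with a conceptual neighbor (fruit vs. vegetable).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
fruit = rng.normal(loc=0.0, scale=1.0, size=(5, 10))       # few "fruit" examples
vegetable = rng.normal(loc=1.5, scale=1.0, size=(5, 10))   # conceptual neighbor

def region(examples):
    """A soft region: a Gaussian fitted to the category's example vectors."""
    mu = examples.mean(axis=0)
    cov = np.cov(examples.T) + 1e-3 * np.eye(examples.shape[1])  # ridge for stability
    return multivariate_normal(mu, cov)

fruit_region, veg_region = region(fruit), region(vegetable)

def classify(vec):
    # Membership decided by which adjacent region scores higher.
    return "fruit" if fruit_region.logpdf(vec) > veg_region.logpdf(vec) else "vegetable"

print(classify(rng.normal(0.0, 1.0, size=10)))
```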

