Author(s):  
Saurabh Sen ◽  
Ruchi L. Sen

NPA is a “termite” for the banking sector. It affects the liquidity and profitability of a bank to a great extent; in addition, it poses a threat to asset quality and the survival of banks. The post-reform era has changed the whole structure of the banking sector in India, and the economy is no longer confined to the domestic boundary of the country. The core intention of economic reforms in India was to attract foreign investment and create a sound banking system. This chapter provides an empirical approach to the analysis of profitability indicators, with a focal point on Non-Performing Assets (NPAs) of commercial banks in the Indian context. The chapter discusses NPAs, the factors contributing to them, their magnitude, and their consequences. Taking an analytical perspective, the chapter observes that NPAs significantly affected the performance of banks in the present scenario, while factors such as a better credit culture, risk management, and favourable business conditions led to a lowering of NPAs. The empirical findings, based on the observation method and statistical tools such as correlation, regression, and data representation techniques, identify a negative relationship between profitability measures and NPAs.
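The correlation and regression analysis the abstract refers to can be sketched as follows. This is a minimal illustration with invented figures, not data from the chapter: the bank-level NPA ratios and return-on-assets values below are hypothetical and chosen only to show how a negative relationship would surface in the statistics.

```python
# Hypothetical illustration of correlating NPAs with profitability:
# gross NPA ratio (%) vs. return on assets (%) for a made-up set of banks.
# All figures are invented for demonstration.
import statistics

npa_ratio = [2.1, 3.4, 4.8, 6.2, 7.5, 9.0]   # gross NPAs / advances (%)
roa       = [1.3, 1.1, 0.9, 0.6, 0.4, 0.1]   # return on assets (%)

n = len(npa_ratio)
mean_x = statistics.fmean(npa_ratio)
mean_y = statistics.fmean(roa)

# Pearson correlation coefficient
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(npa_ratio, roa)) / (n - 1)
r = cov / (statistics.stdev(npa_ratio) * statistics.stdev(roa))

# Simple OLS regression: roa = a + b * npa_ratio
b = cov / statistics.variance(npa_ratio)
a = mean_y - b * mean_x

print(f"correlation r = {r:.3f}")   # negative for this sample
print(f"slope b = {b:.3f}")         # profitability falls as NPAs rise
```

A negative `r` and a negative regression slope on data like this are what the chapter's finding of a negative profitability–NPA relationship would look like numerically.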


2019 ◽  
Vol 5 (1) ◽  
pp. 43-60
Author(s):  
Gabby Resch

Abstract Data engagement has become an important facet of engaged citizenship. While this is celebrated by those who advocate for expanding participatory channels in civic experience, others have rightfully expressed concern about the complicated dimensions of balancing access with data literacy. If engaged citizenship increasingly requires the ability to interpret civic data through city dashboards and open data portals, then there is a concomitant requirement for diverse populations to develop critical perspectives on data representation (what is commonly referred to as data visualisation, information graphics, etc.). Effective data representations are used to ground conversations, communicate policy ideas and substantiate arguments about important civic issues, but they are also frequently used to deceive and mislead. Expanding statistical, graphical, digital and media literacy is a necessary component of fostering a critical data culture, but who are the beneficiaries of expanded models of literacy and modes of civic engagement? Which communities are invalidated in the design of civic data interfaces? In this article, I summarise the results of a design study undertaken to inform the development of accessible data representation techniques. In this study, I conducted fourteen two-hour participatory design-inspired interview sessions with blind and visually impaired citizens. These sessions, in which I iteratively developed new physical data objects and assessed their interpretability, leveraged a public transit dataset made available by the City of Toronto through its open data portal. While ostensibly “open,” this dataset was initially published in a format that was exclusively visual, excluding blind and visually impaired citizens from engaging with it.
What I discovered through the study was that the process of translating 2D, screen-based civic dashboards and data visualisations into tangible objects has the capacity to reintroduce visual biases in ways that data designers may not generally be aware of.
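One simple non-visual alternative to a dashboard chart is a screen-reader-friendly text summary. The sketch below is a hypothetical illustration only: the field names (`route`, `delay_minutes`) and values are invented and do not reflect the actual schema of the City of Toronto dataset discussed above.

```python
# A minimal sketch of a non-visual data representation: summarising rows
# of a transit-delay dataset as plain sentences a screen reader can speak.
# Field names and values are hypothetical.
from collections import defaultdict

records = [
    {"route": "501 Queen", "delay_minutes": 7},
    {"route": "501 Queen", "delay_minutes": 12},
    {"route": "504 King",  "delay_minutes": 4},
]

by_route = defaultdict(list)
for rec in records:
    by_route[rec["route"]].append(rec["delay_minutes"])

for route, delays in sorted(by_route.items()):
    avg = sum(delays) / len(delays)
    print(f"{route}: {len(delays)} delays, averaging {avg:.1f} minutes")
```

Even a translation this simple embeds design choices (grouping, ordering, what to omit), which is the kind of decision point where the visual biases the study identifies can creep back in.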


Author(s):  
James Simpson

The mobilization of eye-tracking for use outside the laboratory provides new opportunities for assessing pedestrians' visual engagement with their surroundings. However, the development of data representation techniques that visualize the dynamics of gaze distribution across the environments pedestrians move through remains limited. The current study addresses this by highlighting how mobile eye-tracking data, which capture where pedestrian gaze is focused upon buildings along urban street edges, can be mapped as three-dimensional gaze projection heat-maps. This data processing and visualization technique is assessed, and future opportunities and associated challenges are discussed.
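The core of a gaze projection heat-map is accumulating fixation points into cells of a discretised surface. The sketch below shows that accumulation step in simplified 2D form; the coordinates, grid size, and cell size are invented assumptions, and the study's actual pipeline projects gaze onto 3D building geometry rather than a flat grid.

```python
# Simplified sketch: bin gaze fixations into a facade grid to form a
# heat-map. Fixation coordinates and facade dimensions are invented.
GRID_W, GRID_H = 10, 5   # facade discretised into 10 x 5 cells
CELL = 1.0               # cell size in metres (assumed)

fixations = [(2.3, 1.1), (2.4, 1.2), (7.8, 3.9), (2.2, 1.0)]  # (x, y) metres

heat = [[0] * GRID_W for _ in range(GRID_H)]
for x, y in fixations:
    col = min(int(x / CELL), GRID_W - 1)
    row = min(int(y / CELL), GRID_H - 1)
    heat[row][col] += 1   # accumulate fixation counts per cell

hottest = max((v, r, c) for r, row in enumerate(heat) for c, v in enumerate(row))
print(f"hottest cell: row {hottest[1]}, col {hottest[2]}, {hottest[0]} fixations")
```

Colour-mapping the per-cell counts and draping them over the building model yields the three-dimensional heat-maps the study describes.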


2018 ◽  
Author(s):  
Bruno Á. Souza ◽  
Alice A. F. Menezes ◽  
Carlos M. S. Figueiredo ◽  
Fabíola G. Nakamura ◽  
Eduardo F. Nakamura

Virtual environments such as online stores (e.g., Amazon, Google Play, and Booking) adopt a collaborative strategy of evaluation and reputation in which users rate products and services. A user's opinion represents the satisfaction level with a rated item, and the set of ratings of an item is a reference for its reputation/quality. Therefore, automatically identifying a user's satisfaction with an item from its textual evaluation is a tool with singular economic potential. As deep learning research in aspect-based sentiment analysis has evolved, opportunities to apply several neural networks in this context have arisen. However, the data representation models applied in these works focus only on pre-trained embedding networks as a way to perform feature extraction. This work therefore presents a comparison between data representation techniques and deep network approaches, to analyze which of them gives better results in classifying categories of aspects. We found that TF-IDF with a Convolutional Neural Network (CNN) achieved an F1 measure of 0.93, at least 0.02 higher than the other approaches applied in this work.
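The TF-IDF representation that the best-performing configuration pairs with a CNN can be computed as below. This is a toy, stdlib-only sketch with invented review snippets; a real pipeline would vectorise over a full vocabulary and feed the resulting vectors to the convolutional classifier, which is omitted here.

```python
# Toy TF-IDF computation over invented review snippets. In the paper's
# setting, vectors like these are the input representation for the CNN.
import math
from collections import Counter

docs = [
    "the room was clean and quiet",
    "staff were friendly and the room was clean",
    "terrible service and noisy room",
]

tokenised = [d.split() for d in docs]
n_docs = len(docs)

# document frequency of each term
df = Counter()
for toks in tokenised:
    df.update(set(toks))

def tfidf(toks):
    """Term frequency times inverse document frequency for one document."""
    tf = Counter(toks)
    return {t: (tf[t] / len(toks)) * math.log(n_docs / df[t]) for t in tf}

vec = tfidf(tokenised[2])
# "terrible" appears in only one review, so it gets a high weight;
# "room" appears in all three, so its idf (and hence its weight) is zero.
print(sorted(vec, key=vec.get, reverse=True)[:3])
```

The weighting illustrates why TF-IDF can be a strong aspect-classification feature: terms that discriminate between reviews are up-weighted, while ubiquitous terms contribute nothing.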


2016 ◽  
Vol 2016 ◽  
pp. 1-16 ◽  
Author(s):  
Mujiono Sadikin ◽  
Mohamad Ivan Fanany ◽  
T. Basaruddin

One essential task in information extraction from the medical corpus is drug name recognition. Compared with text sources from other domains, medical text mining poses more challenges: more unstructured text, the fast growth of newly added terms, a wide range of name variations for the same drug, the lack of labeled datasets and external knowledge sources, and multiple-token representations for a single drug name. Although many approaches have been proposed to address the task, some problems remain, with poor F-score performance (less than 0.75). This paper presents a new treatment of data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distribution and word similarity obtained from word-embedding training. The first technique is evaluated with a standard NN model, namely an MLP. The second technique involves two deep network classifiers, namely a DBN and an SAE. The third technique represents the sentence as a sequence that is evaluated with a recurrent NN model, namely an LSTM. In extracting drug name entities, the third technique gives the best F-score performance compared with the state of the art, with an average F-score of 0.8645.
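The multiple-token problem the abstract mentions — one drug name spanning several tokens — is conventionally handled in sequence labelling with BIO tags, where a sentence becomes a sequence of (token, tag) pairs for a recurrent model. The sketch below uses an invented sentence and hand-written tags; the paper's actual representations are derived from word-embedding training, which is not reproduced here.

```python
# Minimal sketch of the sentence-as-sequence representation: tokens paired
# with BIO tags so a multi-token drug name decodes to a single entity.
# Sentence and tags are invented examples.
sentence = "Patients received acetylsalicylic acid daily"
tokens = sentence.split()

# BIO tagging: B = beginning of a drug name, I = inside, O = outside.
tags = ["O", "O", "B-DRUG", "I-DRUG", "O"]

def decode_entities(tokens, tags):
    """Collect contiguous B-/I- spans back into entity strings."""
    entities, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append(" ".join(current))
            current = [tok]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                entities.append(" ".join(current))
            current = []
    if current:
        entities.append(" ".join(current))
    return entities

print(decode_entities(tokens, tags))  # ['acetylsalicylic acid']
```

An LSTM trained on such sequences predicts the tag for each token, and the decoder above then recovers whole drug names even when they span multiple tokens.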

