A controlled language-based evaluation approach to ensure image accessibility during web localisation

2015 ◽  
Vol 4 (2) ◽  
pp. 187-215 ◽  
Author(s):  
Silvia Rodríguez Vázquez

In spite of recent improvements in non-visual web access, images on the web still present an accessibility barrier to screen reader users. For this population group, the presence of inappropriate text alternatives for images, or simply their absence, usually results in a poor web user experience. In this paper, we propose a controlled language (CL) rule-based approach that enables translation professionals to ensure image accessibility during the web localisation process. We describe the set of 40 CL rules we developed and then present the results of an evaluation of ten rules selected from the set. During the study, which sought to assess their impact on the appropriateness of text alternatives in French, the ten rules were applied using Acrolinx, a state-of-the-art CL checker. The results of the evaluation suggest that this sub-set of ten rules can help translators significantly improve the level of image accessibility obtained in the localised web product.
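The paper's 40 CL rules (applied in Acrolinx to French text alternatives) are not reproduced here, but the rule-based checking idea can be illustrated with a minimal sketch. The rules below are hypothetical examples of alt-text checks, not the authors' rule set:

```python
import re

# Hypothetical controlled-language rules for image text alternatives;
# the paper's 40 Acrolinx-checked rules are not reproduced here.
RULES = [
    ("alt text must not be empty",
     lambda alt: alt.strip() != ""),
    ("alt text must not be a file name",
     lambda alt: not re.fullmatch(r"\S+\.(png|jpe?g|gif|svg)", alt.strip(), re.I)),
    ("alt text should not start with redundant wording",
     lambda alt: not re.match(r"(?i)^(image|picture|photo) of\b", alt.strip())),
    ("alt text should be at most 125 characters",
     lambda alt: len(alt) <= 125),
]

def check_alt(alt: str) -> list[str]:
    """Return the descriptions of all rules the text alternative violates."""
    return [desc for desc, ok in RULES if not ok(alt)]

print(check_alt("logo.png"))      # violates the file-name rule
print(check_alt("Company logo"))  # passes every rule
```

A CL checker such as Acrolinx applies a much richer rule formalism; the sketch only shows how rule violations can be reported per text alternative for a translator to fix.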

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Paramita Ray ◽  
Amlan Chakrabarti

Social networks have changed communication patterns significantly. Information available from different social networking sites can be well utilized for the analysis of users' opinions. Hence, organizations would benefit from the development of a platform that can analyze public sentiment in social media about their products and services to provide a value addition to their business processes. Over the last few years, deep learning has become very popular in areas such as image classification and speech recognition. However, research on the use of deep learning methods in sentiment analysis is limited. It has been observed that in some cases the existing machine learning methods for sentiment analysis fail to extract some implicit aspects and might not be very useful. Therefore, we propose a deep learning approach for aspect extraction from text and analysis of users' sentiment corresponding to each aspect. A seven-layer deep convolutional neural network (CNN) is used to tag each aspect in the opinionated sentences. We have combined the deep learning approach with a set of rule-based methods to improve the performance of both the aspect extraction method and the sentiment scoring method. We have also tried to improve the existing rule-based approach to aspect extraction by categorizing aspects into a predefined set of aspect categories using a clustering method, and compared our proposed method with some of the state-of-the-art methods. It has been observed that the overall accuracy of our proposed method is 0.87, while that of other state-of-the-art methods such as the modified rule-based method and the CNN are 0.75 and 0.80, respectively. The overall accuracy of our proposed method thus shows an increase of 7–12% over the state-of-the-art methods.
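The CNN tagger itself is beyond a short sketch, but the rule-based side of the hybrid, mapping extracted aspect terms to predefined aspect categories, can be illustrated as follows. The category names and seed words are illustrative assumptions, not the paper's:

```python
# Toy sketch of rule-based aspect categorization: candidate aspect
# terms found in a sentence are mapped to predefined categories by
# seed-word membership. Categories and seeds are illustrative only;
# the paper derives its categories with a clustering method and tags
# aspects with a seven-layer CNN.
CATEGORIES = {
    "battery": {"battery", "charge", "charging", "power"},
    "display": {"screen", "display", "resolution", "brightness"},
    "price":   {"price", "cost", "cheap", "expensive"},
}

def categorize_aspects(sentence: str) -> dict[str, str]:
    """Map each aspect word found in the sentence to its category."""
    found = {}
    for word in sentence.lower().split():
        token = word.strip(".,!?")
        for category, seeds in CATEGORIES.items():
            if token in seeds:
                found[token] = category
    return found

print(categorize_aspects("The screen is great but the battery drains fast."))
```

In the hybrid described above, such category assignments would then be combined with the CNN's aspect tags before sentiment scoring.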


2017 ◽  
Vol 111 (3) ◽  
pp. 1501-1519 ◽  
Author(s):  
Fang Liu ◽  
Wei-dong Zhu ◽  
Yu-wang Chen ◽  
Dong-ling Xu ◽  
Jian-bo Yang

2016 ◽  
Vol 25 (03) ◽  
pp. 1650018 ◽  
Author(s):  
Andreas Kanavos ◽  
Christos Makris ◽  
Yannis Plegas ◽  
Evangelos Theodoridis

It is widely known that search engines are the dominant tools for finding information on the web. In most cases, these engines return web page references in a global ranking, taking into account either the importance of the web site or the relevance of the web pages to the identified topic. In this paper, we focus on the problem of determining distinct thematic groups in the web search engine results that existing engines provide. We additionally address the problem of dynamically adapting their ranking according to user selections, incorporating user judgments as implicitly registered in their selection of relevant documents. Our system exploits a state-of-the-art semantic web data mining technique that identifies semantic entities of Wikipedia in order to group the result set into different topic groups, according to the various meanings of the provided query. Moreover, we propose a novel probabilistic network scheme that employs the aforementioned topic identification method in order to modify the ranking of results as users select documents. We evaluated our implemented prototype in practice with extensive experiments on the ClueWeb09 dataset using the TREC 2009, 2010, 2011 and 2012 Web Tracks, where we observed improved retrieval performance compared to current state-of-the-art re-ranking methods.
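As a minimal sketch of the two ideas, grouping results by semantic entity and re-ranking after a user selection, the following assumes entity labels have already been assigned to each result (the paper uses Wikipedia-based annotation and a probabilistic network; the data, labels and additive boost here are illustrative simplifications):

```python
from collections import defaultdict

# Minimal sketch: group results by a pre-computed semantic entity,
# then boost the group the user clicked in. Entity labels are
# hand-assigned stand-ins for the Wikipedia-based annotation the
# paper uses; the additive boost replaces its probabilistic network.
results = [
    {"url": "a", "entity": "Jaguar (animal)", "score": 0.9},
    {"url": "b", "entity": "Jaguar Cars",     "score": 0.8},
    {"url": "c", "entity": "Jaguar (animal)", "score": 0.7},
]

def group_by_entity(results):
    """Cluster the result set into topic groups by shared entity."""
    groups = defaultdict(list)
    for r in results:
        groups[r["entity"]].append(r)
    return dict(groups)

def rerank_on_click(results, clicked_entity, boost=0.5):
    """Raise the score of every result sharing the clicked entity."""
    return sorted(
        results,
        key=lambda r: r["score"] + (boost if r["entity"] == clicked_entity else 0),
        reverse=True,
    )

reranked = rerank_on_click(results, "Jaguar (animal)")
print([r["url"] for r in reranked])  # animal pages move above the car page
```

The point of the sketch is only the feedback loop: each click reveals which meaning of an ambiguous query the user intends, and subsequent ranking favours that topic group.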


2018 ◽  
Vol 7 (4.7) ◽  
pp. 322 ◽  
Author(s):  
Abbas Alasri ◽  
Rossilawati Sulaiman

A web service is defined as a method of communication between web applications and their clients. Web services are very flexible and scalable, as they are independent of both the hardware and software infrastructure. The lack of security protection offered by web services creates a gap which attackers can make use of. Web services are offered over the HyperText Transfer Protocol (HTTP) with the Simple Object Access Protocol (SOAP) as the underlying infrastructure, and rely heavily on the Extensible Markup Language (XML). Hence, web services are most vulnerable to attacks which use XML as the attack parameter. Recently, a new type of XML-based Denial-of-Service (XDoS) attack has surfaced, which targets web services. The purpose of these attacks is to consume system resources by sending SOAP requests that contain malicious XML content. Unfortunately, these malicious requests go undetected at the network or transport layers of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack, as they appear to be legitimate packets. In this paper, a middleware tool is proposed to provide real-time detection and prevention of XDoS and HTTP flooding attacks on web services. This tool focuses on attacks on two layers of the Open Systems Interconnection (OSI) model: it detects and prevents XDoS attacks on the application layer and prevents flooding attacks at the network layer. A rule-based approach is used to classify requests as either normal or malicious in order to detect XDoS attacks. The experimental results demonstrate that the rule-based technique efficiently detects and prevents XDoS and HTTP flooding attacks, such as oversized payloads, coercive parsing and XML external entities, in close to real time (about 0.006 s) over the web services.
The middleware tool provides close to 100% service availability to normal requests, hence protecting the web service against XDoS and distributed XDoS (DXDoS) attacks.
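A rule-based request classifier of the kind described can be sketched as follows. The thresholds and rule details are assumptions for illustration, not the middleware's actual configuration, and the naive depth counter ignores self-closing tags:

```python
# Illustrative rule-based classification of incoming SOAP/XML requests,
# covering the three attack patterns named above. Thresholds are
# assumptions, not the middleware's configuration.
MAX_PAYLOAD_BYTES = 64 * 1024   # oversized-payload rule
MAX_NESTING_DEPTH = 50          # coercive-parsing rule

def classify_request(xml_text: str) -> str:
    """Return 'malicious' if any rule fires, else 'normal'."""
    # Rule 1: oversized payload.
    if len(xml_text.encode()) > MAX_PAYLOAD_BYTES:
        return "malicious"
    # Rule 2: XML external entities (XXE) via DTD entity declarations.
    if "<!ENTITY" in xml_text:
        return "malicious"
    # Rule 3: coercive parsing, i.e. deeply nested elements.
    # (Self-closing tags are not handled in this sketch.)
    depth = max_depth = 0
    for i, ch in enumerate(xml_text):
        if ch == "<":
            nxt = xml_text[i + 1:i + 2]
            if nxt == "/":
                depth -= 1
            elif nxt not in ("?", "!"):
                depth += 1
                max_depth = max(max_depth, depth)
    if max_depth > MAX_NESTING_DEPTH:
        return "malicious"
    return "normal"

print(classify_request("<a><b>hello</b></a>"))  # a benign request
```

In a middleware deployment, requests classified as malicious would be dropped before reaching the SOAP endpoint, which is what keeps the service available to normal traffic.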


2012 ◽  
Vol 24 (02) ◽  
pp. 207-236 ◽  
Author(s):  
JIAN QU ◽  
THANARUK THEERAMUNKONG ◽  
NGUYEN LE MING ◽  
AKIRA SHIMAZU ◽  
CHOLWICH NATTEE ◽  
...  

Author(s):  
Padmavathi .S ◽  
M. Chidambaram

Text classification has become more significant in managing and organizing text data due to the tremendous growth of online information. It classifies documents into a fixed number of predefined categories. Rule-based and machine learning approaches are the two main ways of performing text classification. In the rule-based approach, documents are classified based on manually defined rules. In the machine learning based approach, classification rules or classifiers are defined automatically from example documents; this approach offers higher recall and faster processing. This paper presents an investigation of text classification utilizing different machine learning techniques.
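The contrast between the two approaches can be sketched with a toy example: a manually defined rule versus a Naive Bayes classifier learned from example documents. The training data, rule and labels are illustrative only, not from the paper:

```python
import math
from collections import Counter, defaultdict

# Toy contrast of the two approaches: a hand-written rule versus a
# Naive Bayes classifier learned from example documents. The tiny
# training set is illustrative only.
train = [
    ("win a free prize now", "spam"),
    ("free money offer", "spam"),
    ("meeting agenda for monday", "ham"),
    ("project status report", "ham"),
]

def rule_based(doc: str) -> str:
    # Manually defined rule: any mention of "free" means spam.
    return "spam" if "free" in doc.split() else "ham"

class NaiveBayes:
    def fit(self, docs):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter()
        for text, label in docs:
            self.class_counts[label] += 1
            self.word_counts[label].update(text.split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, doc: str) -> str:
        def log_prob(label):
            total = sum(self.word_counts[label].values())
            lp = math.log(self.class_counts[label] / sum(self.class_counts.values()))
            for w in doc.split():
                # Laplace smoothing over the shared vocabulary
                lp += math.log((self.word_counts[label][w] + 1)
                               / (total + len(self.vocab)))
            return lp
        return max(self.class_counts, key=log_prob)

clf = NaiveBayes().fit(train)
print(rule_based("claim your prize now"))   # rule misses: no word "free"
print(clf.predict("claim your prize now"))  # learned word associations catch it
```

The example shows why the learned classifier generalizes where the fixed rule does not: "prize" and "now" co-occur with spam in the training documents even though the rule never mentions them.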


2018 ◽  
Vol 48 (3) ◽  
pp. 84-90 ◽  
Author(s):  
E. A. Lapchenko ◽  
S. P. Isakova ◽  
T. N. Bobrova ◽  
L. A. Kolpakova

It is shown that the application of Internet technologies is relevant in the selection of crop production technologies and the formation of a rational composition of the machine-and-tractor fleet, taking into account the conditions and production resources of a particular agricultural enterprise. The work gives a short description of the web applications "ExactFarming", "Agrivi" and "AgCommand", which provide a possibility to select technologies and technical means of soil treatment, and describes their functions. "ExactFarming" allows users to collect and store information about temperature, precipitation and weather forecasts in certain areas, keep records of information about crops and make technological maps using expert templates. "Agrivi" allows users to store and provide access to weather information in the fields with certain crops. It has algorithms to detect and warn about risks related to diseases and pests, and provides economic calculations of crop profitability and crop planning. "AgCommand" allows users to track the position of machinery and equipment in the fields and provides data on the weather situation in order to plan the use of agricultural machinery in the fields. The web applications presented above do not show the relation between the technologies applied and the agro-climatic features of the farm location zone. They do not take into account the phytosanitary conditions of previous years, or the relief and contours of the fields, while drawing up technological maps or selecting the machine-and-tractor fleet.
The Siberian Physical-Technical Institute of Agrarian Problems of the Siberian Federal Scientific Center of AgroBioTechnologies of the Russian Academy of Sciences has developed a software complex, PIKAT, for supporting machine agrotechnologies for the production of spring wheat grain at an agricultural enterprise, on the basis of which there is a plan to develop a web application that will consider all the main factors limiting the yield of cultivated crops.


2019 ◽  
Vol 50 (2) ◽  
pp. 98-112 ◽  
Author(s):  
KALYAN KUMAR JENA ◽  
SASMITA MISHRA ◽  
SAROJANANDA MISHRA ◽  
SOURAV KUMAR BHOI ◽  
SOUMYA RANJAN NAYAK
