PAKE on the Web

2009 ◽  
Vol 3 (4) ◽  
pp. 29-42 ◽  
Author(s):  
Xunhua Wang ◽  
Hua Lin

Unlike existing password authentication mechanisms on the web, which use passwords for client-side authentication only, password-authenticated key exchange (PAKE) protocols provide mutual authentication. In this article, we present an architecture for integrating existing PAKE protocols into the web. Our integration design consists of a client-side part and a server-side part. First, we implement the PAKE client-side functionality as a web browser plug-in, which provides a secure implementation base. The plug-in has a log-in window that can be customized by the user when the plug-in is installed. By checking the user-specific information in a log-in window, an ordinary user can easily detect a fake log-in window created by mobile code. The server-side integration comprises a web interface and a PAKE server. After a successful PAKE mutual authentication, the PAKE plug-in receives a one-time ticket and passes it to the web browser. The web browser authenticates itself by presenting this ticket over HTTPS to the web server. The plug-in then fades away, and subsequent web browsing remains the same as usual, requiring no extra user education. Our integration design supports centralized log-ins for web applications from different web sites, making it appropriate for digital identity management. A prototype was developed to validate our design. Since PAKE protocols use passwords for mutual authentication, we believe that deploying this design would significantly mitigate the risk of phishing attacks.
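The one-time-ticket handoff this abstract describes can be sketched minimally on the server side: after PAKE mutual authentication succeeds, a single-use, short-lived ticket is issued, and the browser later redeems it over HTTPS. The sketch below is an illustration under assumed names (the class and methods are not the authors' API), and the PAKE exchange itself is deliberately omitted:

```python
import secrets
import time

class OneTimeTicketStore:
    """Illustrative single-use ticket store for the post-PAKE handoff.
    Names and structure are assumptions, not the paper's implementation."""

    def __init__(self, ttl_seconds: float = 30.0):
        self.ttl = ttl_seconds
        self._tickets = {}  # ticket -> (username, expiry time)

    def issue(self, username: str) -> str:
        """Called after PAKE mutual authentication succeeds."""
        ticket = secrets.token_urlsafe(32)  # unguessable random token
        self._tickets[ticket] = (username, time.monotonic() + self.ttl)
        return ticket

    def redeem(self, ticket: str):
        """Called when the browser presents the ticket over HTTPS.
        pop() makes each ticket valid exactly once; the expiry check
        makes it valid only briefly."""
        entry = self._tickets.pop(ticket, None)
        if entry is None:
            return None
        username, expiry = entry
        if time.monotonic() > expiry:
            return None
        return username
```

The single-use property is what lets the plug-in "fade away": the ticket bridges the PAKE session to an ordinary HTTPS session and is worthless to a phisher who captures it after redemption.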


2017 ◽  
Vol 2017 (3) ◽  
pp. 75-89 ◽  
Author(s):  
Kevin Corre ◽  
Olivier Barais ◽  
Gerson Sunyé ◽  
Vincent Frey ◽  
Jean-Michel Crom

Authentication delegation is a major function of the modern web. Identity Providers (IdPs) have acquired a central role by providing this function to other web services. By knowing which web services or web applications access its service, an IdP can violate end-user privacy by discovering information that the user did not want to share with it. For instance, WebRTC introduces a new field of usage, as authentication delegation happens during call session establishment between two users. As a result, an IdP can easily discover that Bob has a meeting with Alice. A second issue that increases the privacy violation is the lack of choice for end-users in selecting their own IdP. Indeed, on many web applications, the end-user can only select from a subset of IdPs, in most cases Facebook or Google. In this paper, we analyze this phenomenon, in particular why the end-user cannot easily select a preferred IdP even though standards such as OpenID Connect and OAuth 2 exist in this field. To lead this analysis, we conduct three investigations. The first is a field survey of OAuth 2 and OpenID Connect scope usage by web sites, to understand whether the scopes requested by websites could allow for user-defined IdPs. The second tries to determine whether the problem comes from the OAuth 2 protocol or from its implementations by IdPs. The last tries to understand whether trust relations between websites and IdPs could prevent the end-user from selecting their own IdP. Finally, we sketch a possible architecture for web-browser-based identity management and report on the implementation of a prototype.
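The object of the paper's field survey, the scope parameter of an OAuth 2 / OpenID Connect authorization request, can be made concrete with a small sketch. The parameters below are the standard ones from the protocols; the function name and example values are illustrative assumptions:

```python
from urllib.parse import urlencode

def build_auth_request(idp_authorize_url: str, client_id: str,
                       redirect_uri: str, scopes: list) -> str:
    """Build an OAuth 2 / OpenID Connect authorization request URL.
    The space-separated 'scope' value is what a relying web site
    requests from the IdP -- the quantity the survey examines."""
    params = {
        "response_type": "code",       # authorization code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),     # e.g. "openid profile email"
    }
    return idp_authorize_url + "?" + urlencode(params)
```

A site that requests only standard scopes such as `openid` and `email` could, in principle, accept any conforming IdP; proprietary scopes tie the request to one provider, which is one way the end-user's choice gets narrowed.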


2017 ◽  
Vol 2017 ◽  
pp. 1-28 ◽  
Author(s):  
Gabriela Bosetti ◽  
Sergio Firmenich ◽  
Silvia E. Gordillo ◽  
Gustavo Rossi ◽  
Marco Winckler

The trend towards mobile device usage has made it possible for the Web to be conceived not only as an information space but also as a ubiquitous platform where users perform all kinds of tasks. In some cases, users access the Web with native mobile applications developed for well-known sites such as LinkedIn, Facebook, and Twitter. These native applications might offer further (e.g., location-based) functionalities to their users in comparison with the corresponding Web sites, because they were developed with mobile features in mind. However, many Web applications have no native counterpart, and users access them using a mobile Web browser. Although access to context information is not a complex issue nowadays, not all Web applications adapt themselves according to it or otherwise improve the user experience by listening to a wide range of sensors. At some point, users might want to add mobile features to these Web sites, even if those features were not originally supported. In this paper, we present a novel approach that allows end users to augment their preferred Web sites with mobile features. We support our claims by presenting a framework for mobile Web augmentation, an authoring tool, and an evaluation with 21 end users.


Author(s):  
Vojtěch Toman

With the growing interest in end-to-end XML web application development models, many web applications are becoming predominantly XML-based, requiring XML processing capabilities not only on the server side but often also on the client side. This paper discusses the potential benefits of using XProc for XML pipeline processing in the web browser and describes the development of a JavaScript-based XProc implementation.
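The pipeline model XProc embodies is a chain of steps, each consuming and producing XML documents. The sketch below mimics that model in Python only as a rough illustration; it is not XProc, whose real steps (such as `p:add-attribute` or `p:delete`) are declared in XML and run by an XProc processor:

```python
import xml.etree.ElementTree as ET

def add_attribute(tree, name, value):
    """Analogue of an XProc step that sets an attribute on the root."""
    tree.set(name, value)
    return tree

def delete_elements(tree, tag):
    """Analogue of an XProc step that deletes matching elements."""
    for parent in list(tree.iter()):        # materialize before mutating
        for child in [c for c in parent if c.tag == tag]:
            parent.remove(child)
    return tree

def run_pipeline(xml_text, steps):
    """Run a document through a sequence of steps, as a pipeline does."""
    doc = ET.fromstring(xml_text)
    for step in steps:
        doc = step(doc)
    return ET.tostring(doc, encoding="unicode")
```

Running a pipeline like this in the browser, against documents the page already holds, is the client-side use case the paper targets.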


Author(s):  
Shashank Gupta ◽  
B. B. Gupta

Cross-Site Scripting (XSS) is a client-side browser vulnerability caused by improper sanitization of user input embedded in Web pages. Researchers have proposed various defensive strategies, vulnerability scanners, etc., but XSS flaws remain in Web applications due to inadequate understanding and implementation of these defensive tools and strategies. Therefore, in this chapter, the authors propose a security model called Browser-Dependent XSS Sanitizer (BDS), deployed on the client-side Web browser, for eliminating the effect of XSS vulnerabilities. Earlier client-side solutions degrade the performance of the Web browser; in this chapter, however, the authors use a three-step approach to block XSS attacks without significantly degrading the user's Web browsing experience. In the experiments, this approach was capable of preventing XSS attacks on various modern Web browsers.
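The root cause named in this abstract, improper sanitization, can be illustrated with the simplest countermeasure: escaping user input before it is embedded in a page, so browsers render it as text rather than executing it as markup. The sketch below shows only this principle; the BDS model described in the chapter is a separate, multi-step, browser-side design:

```python
import html

def sanitize_input(user_input: str) -> str:
    """Escape HTML metacharacters (&, <, >, quotes) so that user input
    embedded in a page cannot introduce new markup or script.  This is
    the baseline idea behind output sanitization, not the BDS scheme."""
    return html.escape(user_input, quote=True)
```

Without this step, input like `<script>alert(1)</script>` echoed into a page executes in the victim's browser; escaped, it displays harmlessly as text.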


Author(s):  
Subrata Acharya

There is a need to be able to verify plaintext HTTP content transfers. Common sense dictates that authentication and sensitive content should always be protected by SSL/HTTPS, but there is still great exploitation potential in the modification of static content in transit. Pre-computed signatures and client-side verification offer integrity protection of HTTP content in applications where SSL is not feasible. In this chapter, the authors demonstrate a mechanism by which a Web browser or other HTTP client can verify that content transmitted over an untrusted channel has not been modified. Verifiable HTTP is not intended to replace SSL; rather, it is intended for applications where SSL is not feasible, specifically when serving high-volume static content and/or content from non-secure sources such as Content Distribution Networks. Finally, the authors find that content verification is effective, with server-side overhead similar to SSL. With future optimizations such as native browser support, content verification could achieve comparable client-side efficiency.
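The check itself can be sketched in miniature: the server pre-computes a value over the static content, and the client recomputes it over what actually arrived. The sketch substitutes a bare SHA-256 digest for the chapter's pre-computed signatures, so it illustrates only the integrity comparison, not the trust model (a real deployment would sign the digest and distribute the verification key over a trusted channel):

```python
import hashlib

def precompute_digest(content: bytes) -> str:
    """Server side, done once ahead of time for static content.
    Stand-in for the chapter's pre-computed signature."""
    return hashlib.sha256(content).hexdigest()

def verify_content(received: bytes, trusted_digest: str) -> bool:
    """Client side: recompute the digest over the bytes that arrived
    over the untrusted channel and compare with the trusted value."""
    return hashlib.sha256(received).hexdigest() == trusted_digest
```

Because the expensive work happens once per object rather than once per connection, this style of verification suits high-volume static content and CDN-served pages, which is exactly where the chapter positions it relative to SSL.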


Author(s):  
August-Wilhelm Scheer

The emergence of what we call today the World Wide Web, the WWW, or simply the Web, dates back to 1989, when Tim Berners-Lee proposed a hypertext system to manage information overload at CERN, Switzerland (Berners-Lee, 1989). This article outlines how his approach evolved into the Web that drives today’s information society and explores the full potential still ahead. The initiative, formerly known as a wide-area hypertext information retrieval project, quickly gained momentum due to the fast adoption of graphical browser programs and the standardization activities of the World Wide Web Consortium (W3C). In the beginning, based only on the standards of HTML, HTTP, and URL, the sites provided on the Web were static: the information stayed unchanged until the original publisher decided to update it. For a long time, the WWW, today referred to as Web 1.0, was understood as a technical means of publishing information to a vast audience across time and space. Data was kept locally, and Web sites were only occasionally updated by uploading files from the client to the Web server. Application software was limited to local desktops and operated only on local data.
With the advent of dynamic concepts on the server side (script languages like the hypertext preprocessor (PHP) or Perl, and Web applications with JSP or ASP) and on the client side (e.g., JavaScript), the WWW became more dynamic. Server-side content management systems (CMS) allowed editing Web sites via the browser at run-time. These systems interact with multiple users through PHP interfaces that push information into server-side databases (e.g., MySQL), which in turn feed Web sites with content. Thus, the Web became accessible and editable not only for programmers and “techies” but also for the common user. Yet technological limitations such as slow Internet connections, consumer-unfriendly Internet rates, and poor multimedia support still inhibited mass usage of the Web. It needed broadband Internet access, flat rates, and digitalized media processing to catch on.


Author(s):  
Carmine Scavo ◽  
Jody Baumgartner

The World Wide Web has been widely adopted by local governments as a way to interact with local residents. The promise and reality of Web applications are explored in this chapter. Four types of Web utilizations are analyzed: bulletin board applications, promotion applications, service delivery applications, and citizen input applications. A survey of 145 municipal and county government Web sites originally conducted in 1998 was replicated in 2002, and then again in 2006. These data are used to examine how local governments are actually using the Web and to examine the evolution of Web usage over the 8-year span between the first and third survey. The chapter concludes that local governments have made progress in incorporating many of the features of the Web but that they have a long way to go in realizing its full promise.

