Experiential Learning of Networking Technologies: Understanding TCP States – Part 2

2018 ◽  
Author(s):  
Ram P Rustagi ◽  
Viraj Kumar

This article focuses on the states of a TCP connection once one of the endpoints decides to terminate the connection. This so-called teardown phase involves the exchange of numerous messages (for reasons we will explore), and the TCP connection itself transitions through several states. Web developers often have only an overly simplistic understanding of these states, which may suffice when the network behaves reliably. However, a deeper understanding of TCP states is essential to design web applications that robustly manage TCP connections even in the presence of network faults during the teardown phase, and to debug poorly designed applications that exhibit poor resource utilization and performance in such situations. As always, we will explore these issues through a series of experiential learning exercises.
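The teardown states mentioned above can be sketched as a small state machine. The following Python model is illustrative only: the event names are ours, and it ignores simultaneous close and connection resets.

```python
# Toy model of TCP teardown states for the active and passive closer.

ACTIVE = {
    ("ESTABLISHED", "app_close"):  "FIN_WAIT_1",  # we send our FIN
    ("FIN_WAIT_1",  "recv_ack"):   "FIN_WAIT_2",  # our FIN is acknowledged
    ("FIN_WAIT_2",  "recv_fin"):   "TIME_WAIT",   # peer's FIN arrives; we ACK it
    ("TIME_WAIT",   "2msl_timer"): "CLOSED",      # wait 2*MSL for stray segments
}

PASSIVE = {
    ("ESTABLISHED", "recv_fin"):   "CLOSE_WAIT",  # peer's FIN arrives; we ACK it
    ("CLOSE_WAIT",  "app_close"):  "LAST_ACK",    # we send our FIN
    ("LAST_ACK",    "recv_ack"):   "CLOSED",      # our FIN is acknowledged
}

def run(table, events, state="ESTABLISHED"):
    trace = [state]
    for ev in events:
        state = table[(state, ev)]
        trace.append(state)
    return trace

active_trace = run(ACTIVE, ["app_close", "recv_ack", "recv_fin", "2msl_timer"])
passive_trace = run(PASSIVE, ["recv_fin", "app_close", "recv_ack"])
print(active_trace)
print(passive_trace)
```

Note that the endpoint that closes first pays the TIME_WAIT cost; this is why a busy server that actively closes many connections can accumulate sockets in TIME_WAIT.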

2018 ◽  
Author(s):  
Ram P. Rustagi ◽  
Viraj Kumar

With the rapid increase in the volume of e-commerce, the security of web-based transactions is of increasing concern. A widespread but dangerously incorrect belief among web users is that all security issues are taken care of when a website uses HTTPS (secure HTTP). While HTTPS does provide security, websites are often developed and deployed in ways that make them and their users vulnerable to hackers. In this article we explore some of these vulnerabilities. We first introduce the key ideas and then provide several experiential learning exercises so that readers can understand the challenges and possible solutions to them in a hands-on manner.
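One concrete way to see the gap between "uses HTTPS" and "is secure" is how easily the guarantees of TLS can be switched off in client code. The sketch below uses Python's standard `ssl` module; disabling verification is shown only to illustrate the anti-pattern.

```python
import ssl

# A default client context enables the protections users associate with HTTPS:
# certificate validation against the system trust store plus hostname checking.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                     # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True

# A common developer shortcut silences certificate errors entirely, which
# reintroduces man-in-the-middle risk even though the URL still says https://
insecure = ssl._create_unverified_context()   # do NOT do this in production
print(insecure.check_hostname)                # False
```

Code like the second context often ships to production after being added to "fix" a certificate error during development, which is exactly the kind of deployment mistake the exercises in this article explore.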


Author(s):  
Omoruyi Osemwegie ◽  
Kennedy Okokpujie ◽  
Nsikan Nkordeh ◽  
Charles Ndujiuba ◽  
Samuel John ◽  
...  

Increasing requirements for scalability and elasticity of data storage for web applications have made NoSQL (Not only SQL) databases invaluable to web developers. One such NoSQL database solution is Redis. A budding alternative to Redis is the SSDB database, which is also a key-value store but is disk-based. The aim of this research work is to benchmark both databases (Redis and SSDB) using the Yahoo Cloud Serving Benchmark (YCSB). YCSB is a platform that has been used to compare and benchmark similar NoSQL database systems. Both databases were given variable workloads to identify the throughput of all given operations. The results obtained show that SSDB gives better throughput than Redis for the majority of operations.
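The essence of a YCSB-style benchmark is driving a key-value store with a fixed read/update mix and measuring achieved throughput. The sketch below is not YCSB itself; it mimics the idea in pure Python against an in-memory dict, with the 50/50 mix of YCSB's "workload A" as an illustrative parameter.

```python
import random
import time

def run_workload(store, n_ops, read_fraction, keyspace=1000, seed=42):
    """Apply a YCSB-style mix of reads and updates to a key-value store
    and return the achieved throughput in operations per second."""
    rng = random.Random(seed)
    start = time.perf_counter()
    for _ in range(n_ops):
        key = f"user{rng.randrange(keyspace)}"
        if rng.random() < read_fraction:
            store.get(key)                 # read
        else:
            store[key] = "x" * 100         # update with a 100-byte value
    elapsed = time.perf_counter() - start
    return n_ops / elapsed

store = {}
# Roughly YCSB workload A: 50% reads, 50% updates over a zipf-less keyspace.
throughput = run_workload(store, n_ops=50_000, read_fraction=0.5)
print(f"{throughput:,.0f} ops/sec")
```

Swapping the dict for a Redis or SSDB client object (both expose get/set-style operations) turns the same loop into a crude network benchmark, though real comparisons should use YCSB's workload definitions and statistics.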


2018 ◽  
Vol 7 (4.1) ◽  
pp. 18
Author(s):  
Isatou Hydara ◽  
Abu Bakar Md Sultan ◽  
Hazura Zulzalil ◽  
Novia Admodisastro

Cross-site scripting vulnerabilities have been among the top ten security vulnerabilities affecting web applications for the past decade, and mobile versions of web applications more recently. They can cause serious problems for web users, such as loss of personal information (including financial and health information) to web attackers, denial of service attacks, and exposure to malware and viruses. Most of the proposed solutions focus only on the desktop versions of web applications and overlook the mobile versions. The increasing use of mobile phones to access web applications raises the threat of cross-site scripting attacks on mobile phones. This paper presents work in progress on detecting cross-site scripting vulnerabilities in mobile versions of web applications. It proposes an enhanced genetic algorithm-based approach that detects cross-site scripting vulnerabilities in mobile versions of web applications. This approach was used in our previous work and successfully detected the said vulnerabilities in desktop web applications. It has since been enhanced and is currently being tested on mobile versions of web applications. Preliminary results indicate success in the mobile versions as well. This approach will enable web developers to find cross-site scripting vulnerabilities in the mobile versions of their web applications before their release.
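The general idea of a genetic algorithm for XSS detection is to evolve attack payloads against a filter until one survives. The toy below is not the authors' algorithm: the sanitizer, genes, and fitness function are all hypothetical stand-ins chosen to show the mechanics (the sanitizer strips the literal tag `<script>` once, a classic incomplete filter).

```python
import random

rng = random.Random(0)

# Hypothetical filter standing in for the application under test.
def sanitize(payload):
    return payload.replace("<script>", "", 1)

# Fitness rewards payloads that still contain a live <script> tag after
# sanitization (a bypass), with a small bonus for shorter payloads.
def fitness(payload):
    bypassed = "<script>" in sanitize(payload)
    return (1.0 if bypassed else 0.0) + 1.0 / (1 + len(payload))

GENES = ["<", ">", "s", "c", "r", "i", "p", "t", "<script>"]

def mutate(payload):
    i = rng.randrange(len(payload))
    return payload[:i] + rng.choice(GENES) + payload[i:]

def crossover(a, b):
    cut = rng.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

population = ["<script>alert(1)</script>"] * 10
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                    # elitist selection
    children = [mutate(rng.choice(parents)) for _ in range(3)]
    children += [crossover(rng.choice(parents), rng.choice(parents)) for _ in range(2)]
    population = parents + children

best = max(population, key=fitness)
print("best payload:", best)
print("bypasses filter:", "<script>" in sanitize(best))
```

A typical bypass the search can discover is nesting, e.g. `<scr<script>ipt>`: stripping the inner tag reconstructs a working outer one, which is why single-pass string filters are inadequate defenses.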


2017 ◽  
Vol 2017 (1) ◽  
pp. 79-99 ◽  
Author(s):  
Muhammad Ikram ◽  
Hassan Jameel Asghar ◽  
Mohamed Ali Kaafar ◽  
Anirban Mahanti ◽  
Balachandar Krishnamurthy

Abstract Numerous tools have been developed to aggressively block the execution of popular JavaScript programs in Web browsers. Such blocking also affects functionality of webpages and impairs user experience. As a consequence, many privacy preserving tools that have been developed to limit online tracking, often executed via JavaScript programs, may suffer from poor performance and limited uptake. A mechanism that can isolate JavaScript programs necessary for proper functioning of the website from tracking JavaScript programs would thus be useful. Through the use of a manually labelled dataset composed of 2,612 JavaScript programs, we show how current privacy preserving tools are ineffective in finding the right balance between blocking tracking JavaScript programs and allowing functional JavaScript code. To the best of our knowledge, this is the first study to assess the performance of current web privacy preserving tools in determining tracking vs. functional JavaScript programs. To improve this balance, we examine the two classes of JavaScript programs and hypothesize that tracking JavaScript programs share structural similarities that can be used to differentiate them from functional JavaScript programs. The rationale of our approach is that web developers often “borrow” and customize existing pieces of code in order to embed tracking (resp. functional) JavaScript programs into their webpages. We then propose one-class machine learning classifiers using syntactic and semantic features extracted from JavaScript programs. When trained only on samples of tracking JavaScript programs, our classifiers achieve accuracy of 99%, where the best of the privacy preserving tools achieve accuracy of 78%. The performance of our classifiers is comparable to that of traditional two-class SVM. 
One-class classification, where a training set of only tracking JavaScript programs is used for learning, has the advantage that it requires fewer labelled examples, which can be obtained via manual inspection of public lists of well-known trackers. We further test our classifiers and several popular privacy preserving tools on a larger corpus of 4,084 websites with 135,656 JavaScript programs. The output of our best classifier on this data differs from that of the tools under study by 20% to 64%. We manually analyse a sample of the JavaScript programs for which our classifier disagrees with all other privacy preserving tools, and show that our approach is not only able to enhance user web experience by correctly classifying more functional JavaScript programs, but also discovers previously unknown tracking services.
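To make the one-class idea concrete, here is a minimal sketch: learn a centroid and an acceptance radius from tracking-only samples, then flag any script whose feature vector falls inside that radius. The token list, the sample scripts, and the centroid-plus-radius rule are all illustrative simplifications, not the paper's features or classifier (which uses syntactic and semantic features with one-class SVM-style learning).

```python
import math

# Hypothetical syntactic features: occurrence counts of a few API names
# that commonly appear in tracking code.
TOKENS = ["document.cookie", "navigator.userAgent", "new Image", "addEventListener"]

def features(js_source):
    return [js_source.count(t) for t in TOKENS]

def train_one_class(samples):
    """Learn a centroid and radius from tracking-only training samples."""
    feats = [features(s) for s in samples]
    n, d = len(feats), len(TOKENS)
    centroid = [sum(f[i] for f in feats) / n for i in range(d)]
    radius = max(math.dist(f, centroid) for f in feats)
    return centroid, radius

def is_tracking(js_source, centroid, radius):
    return math.dist(features(js_source), centroid) <= radius

trackers = [
    "var id = document.cookie; new Image().src = '/t?' + id + navigator.userAgent;",
    "function beacon(){ new Image().src = '/p?c=' + document.cookie; }",
]
centroid, radius = train_one_class(trackers)

functional = "document.getElementById('menu').addEventListener('click', toggle);"
print(is_tracking(trackers[0], centroid, radius))   # True: near the trackers
print(is_tracking(functional, centroid, radius))    # False: outside the radius
```

The key property the abstract highlights carries over: only positive (tracking) examples are needed at training time, since the decision boundary is fit around them alone.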


Author(s):  
Fagner Christian Paes ◽  
Willian Massami Watanabe

Cross-Browser Incompatibilities (XBIs) are inconsistencies that appear in a Web application when it is rendered in different browsers. The growing number of browser implementations (Internet Explorer, Microsoft Edge, Mozilla Firefox, Google Chrome) and the constant evolution of the specifications of Web technologies have produced differences in the way browsers behave and render web pages. Web applications must behave consistently across browsers. Therefore, web developers should overcome the differences that arise during rendering in different environments by detecting and avoiding XBIs during the development process. Many web developers depend on manual inspection of web pages in several environments to detect XBIs, regardless of the cost and time that manual tests add to the development process. Tools for the automatic detection of XBIs accelerate the inspection of web pages, but current tools have little precision, and their evaluations report a large percentage of false positives. This research aims to evaluate the use of Artificial Neural Networks for reducing the number of false positives in the automatic detection of XBIs through CSS (Cascading Style Sheets) and relative comparisons of elements in the web page.
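A building block for such detection is turning the computed CSS of "the same" element in two browsers into a numeric feature vector that a classifier (e.g., a neural network) can consume. The sketch below is illustrative: the property list and the sample values are hypothetical, and real pipelines would extract computed styles via browser automation.

```python
def css_diff_features(style_a, style_b, checked=("width", "height", "top", "left")):
    """Per-property absolute pixel differences between two computed styles,
    plus a count of properties that differ at all."""
    diffs = {p: abs(float(style_a.get(p, "0px").removesuffix("px"))
                    - float(style_b.get(p, "0px").removesuffix("px")))
             for p in checked}
    diffs["n_mismatched"] = sum(1 for v in diffs.values() if v > 0)
    return diffs

# Hypothetical computed styles for one element in two browsers:
chrome  = {"width": "320px", "height": "48px", "top": "10px", "left": "0px"}
firefox = {"width": "320px", "height": "64px", "top": "10px", "left": "0px"}

features = css_diff_features(chrome, firefox)
print(features)   # the 16px height gap marks a candidate XBI
```

A trained classifier then decides whether such a vector represents a genuine XBI or an innocuous rendering difference, which is precisely where reducing false positives matters.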


2020 ◽  
Author(s):  
Ram P. Rustagi

In this series of articles on Experiential Learning of Networking Technologies, we have discussed a number of network protocols, starting from HTTP [7] at the application layer, TCP [3] and UDP [1] at the transport layer providing end-to-end communications, and IP addressing [2] and routing for packet delivery at the network layer. We have defined a number of experiential exercises for each underlying concept to provide a practical understanding of these protocols. Now, we would like to take a holistic view of the protocols we have learned so far and look at how they all come into play when an internet user makes a simple web request, e.g., what happens from a network perspective when a user enters google.com in the URL bar of a web browser [12]. From the user's perspective, the web page of Google's search interface is displayed in the browser window, but inside the network, both in the user's local network and in the internet, a lot of network activity takes place. The focus of this article is to understand the traversal of packets in the network triggered by any such user activity.
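The layered journey of such a request can be traced in a few lines of standard-library Python. This is a sketch, not a browser: real browsers resolve names, negotiate TLS, and cache aggressively, and `fetch` below performs live network I/O, so only the request-building step is exercised here.

```python
import socket

def build_http_request(host, path="/"):
    """The bytes the browser's application layer hands to TCP."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n").encode()

def fetch(host, port=80):
    """Walk the layers for one request (performs real network I/O)."""
    # DNS: resolve the name typed in the URL bar to an IP address.
    ip = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4][0]
    # Transport layer: TCP three-way handshake with the resolved address.
    with socket.create_connection((ip, port), timeout=5) as sock:
        # Application layer: the HTTP request rides on the TCP byte stream,
        # while IP routes every resulting packet hop by hop in both directions.
        sock.sendall(build_http_request(host))
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks)

request = build_http_request("example.com")
print(request)
# fetch("example.com") would perform the actual lookup, handshake, and transfer.
```

Even this stripped-down version touches every protocol discussed in the series: DNS over UDP, TCP connection setup and teardown, HTTP on top, and IP underneath them all.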


2019 ◽  
Author(s):  
RAM P. RUSTAGI ◽  
VIRAJ KUMAR

In a TCP connection, the underlying network drops packets when it lacks the capacity to deliver all the packets sent by the sender to the receiver. This phenomenon is called congestion. TCP at the sender’s side will not receive acks for these dropped packets. Since TCP is a reliable protocol, the sender must retransmit all these packets. The mechanism used by TCP to deal with such situations is called TCP Congestion Control. In this article, we explain the basics of congestion control and provide experiential exercises to help understand its impact on TCP performance.
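The evolution of the congestion window can be simulated in a few lines. The model below is a deliberate simplification (Tahoe-style: every loss resets the window to one segment; no fast retransmit or fast recovery), but it shows the characteristic exponential-then-linear growth and multiplicative backoff.

```python
def aimd(events, cwnd=1.0, ssthresh=16.0):
    """Simulate TCP congestion control, one event per round-trip time:
    slow start doubles cwnd up to ssthresh, congestion avoidance adds
    1 MSS per RTT, and a loss halves ssthresh and restarts slow start."""
    trace = [cwnd]
    for ev in events:
        if ev == "loss":
            ssthresh = max(cwnd / 2, 1.0)
            cwnd = 1.0                      # Tahoe: back to slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)  # slow start: exponential growth
        else:
            cwnd += 1.0                     # congestion avoidance: linear growth
        trace.append(cwnd)
    return trace

# Six loss-free round trips, one loss, then four more round trips:
trace = aimd(["ack"] * 6 + ["loss"] + ["ack"] * 4)
print(trace)
```

Plotting such a trace gives the familiar sawtooth shape of TCP throughput under periodic loss, which the exercises in the article let you observe on a real connection.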


2017 ◽  
Author(s):  
Ram P. Rustagi ◽  
Viraj Kumar

We have all experienced a degree of frustration when a web page takes longer than expected to load. The delay between the moment when the user enters a URL (or clicks a link) and when the page contents are finally displayed has two causes: the time needed to fetch the page contents from one or more web servers (known as the end to end network delay) and the time needed to render the content in the browser window (known as the page load time). In this article, we will explore the components of the former delay via a simple set of experiments.
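One quick way to decompose the end to end delay in practice is curl's built-in timing variables, which report when each phase of the transfer completed relative to the start. This is a command fragment to adapt; example.com is a placeholder URL.

```shell
curl -s -o /dev/null \
  -w 'DNS lookup:  %{time_namelookup}s\nTCP connect: %{time_connect}s\nFirst byte:  %{time_starttransfer}s\nTotal:       %{time_total}s\n' \
  https://example.com/
```

The gaps between successive numbers approximate the cost of name resolution, connection establishment (including the TLS handshake for HTTPS), server processing, and content transfer, mirroring the components explored in the article's experiments.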


2017 ◽  
Author(s):  
Ram P. Rustagi ◽  
Viraj Kumar

HTTP is the most widely used protocol today, and is supported by almost every device that connects to a network. As web pages continue to grow in size and complexity, web browsers and the HTTP protocol itself have evolved to ensure that end users can meaningfully engage with this rich content. This article describes how HTTP has evolved from a simple request-response paradigm to include mechanisms for high performance applications.
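One of the earliest such mechanisms is the persistent connection of HTTP/1.1: instead of a fresh TCP handshake per object, many requests share one connection. The self-contained sketch below demonstrates this with Python's standard library by starting a throwaway local server; the handler and paths are illustrative.

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # enables persistent connections

    def do_GET(self):
        body = f"served {self.path}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # needed for keep-alive
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):          # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)     # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Two requests over ONE persistent TCP connection: a page's extra objects
# (stylesheets, scripts, images) avoid paying a handshake per object.
conn = HTTPConnection("127.0.0.1", server.server_address[1])
results = []
for path in ("/index.html", "/style.css"):
    conn.request("GET", path)
    resp = conn.getresponse()
    results.append((path, resp.status, resp.read().decode()))
conn.close()
server.shutdown()
print(results)
```

Later evolutionary steps (pipelining, and multiplexed streams in HTTP/2) push the same idea further: amortize connection costs across the many objects that make up a modern page.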


Author(s):  
Ahmad Hammoud ◽  
Ramzi A. Haraty

Most Web developers underestimate the risk and the level of damage that might be caused when Web applications are vulnerable to SQL (structured query language) injections. Unfortunately, Web applications with such vulnerability constitute a large part of today’s Web application landscape. This article aims at highlighting the risk of SQL injection attacks and provides an efficient solution.
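The contrast between a vulnerable query and the standard defense (parameterized queries) fits in a few lines. The sketch below uses Python's built-in SQLite driver with a throwaway table; the schema and payload are illustrative.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "' OR '1'='1"   # a classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query,
# turning the WHERE clause into a tautology that matches every row.
query = "SELECT name FROM users WHERE name = '" + user_input + "'"
leaked = db.execute(query).fetchall()
print("vulnerable query returned:", leaked)

# SAFE: a parameterized query treats the payload as a plain string value,
# so it is compared literally and matches nothing.
safe = db.execute("SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query returned:", safe)
```

The same parameterization discipline applies to every SQL database driver, and it eliminates this entire vulnerability class at the point where user input meets the query.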

