Experiential Learning of Networking Technologies

2017 ◽  
Author(s):  
Ram P. Rustagi ◽  
Viraj Kumar

HTTP is the most widely used protocol today, and is supported by almost every device that connects to a network. As web pages continue to grow in size and complexity, web browsers and the HTTP protocol itself have evolved to ensure that end users can meaningfully engage with this rich content. This article describes how HTTP has evolved from a simple request-response paradigm to include mechanisms for high-performance applications.
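As a concrete reference point for the request-response paradigm the article starts from, the following Python sketch (illustrative, not from the article) builds a raw HTTP/1.1 GET request by hand. The mandatory Host header and persistent connections (Connection: keep-alive as the default) are two of the evolutionary steps HTTP/1.1 added over the original one-request-per-connection model.

```python
def build_get_request(host: str, path: str = "/", keep_alive: bool = True) -> bytes:
    """Assemble a minimal HTTP/1.1 GET request as raw bytes.

    HTTP/1.1 made the Host header mandatory (enabling virtual hosting)
    and made connections persistent by default; "Connection: close"
    opts back into the old one-shot behavior.
    """
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
        "Connection: " + ("keep-alive" if keep_alive else "close"),
    ]
    # Each header line ends with CRLF; a blank line terminates the headers.
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")
```

Sending these bytes over a TCP socket to port 80 of the named host would yield a complete HTTP exchange; the same structure underlies the richer mechanisms (pipelining, chunked transfer, and so on) that the article surveys.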

2018 ◽  
Author(s):  
Ram P. Rustagi ◽  
Viraj Kumar

With the rapid increase in the volume of e-commerce, the security of web-based transactions is of increasing concern. A widespread but dangerously incorrect belief among web users is that all security issues are taken care of when a website uses HTTPS (secure HTTP). While HTTPS does provide security, websites are often developed and deployed in ways that make them and their users vulnerable to hackers. In this article we explore some of these vulnerabilities. We first introduce the key ideas and then provide several experiential learning exercises so that readers can understand the challenges and possible solutions to them in a hands-on manner.
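One concrete instance of the "HTTPS is not automatically safe" point above is client code that disables certificate or hostname verification as a development shortcut. The Python sketch below (illustrative, not from the article) shows the settings that Python's standard ssl module enables by default; turning either off silently removes the protection users assume HTTPS provides.

```python
import ssl


def make_verified_context() -> ssl.SSLContext:
    """Return a TLS context with full server verification enabled.

    create_default_context() already enables both checks shown below;
    code that sets check_hostname = False or verify_mode = CERT_NONE
    accepts any certificate, making man-in-the-middle attacks trivial
    even though the connection still "uses HTTPS".
    """
    ctx = ssl.create_default_context()
    # check_hostname: the certificate must match the server name requested.
    # verify_mode = CERT_REQUIRED: the certificate chain must validate
    # against trusted root CAs.
    return ctx
```

Wrapping a TCP socket with this context (via ctx.wrap_socket) gives a connection in which both endpoints' identities are actually checked, which is the baseline the article's exercises build on.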


2019 ◽  
Vol 11 (1) ◽  
pp. 1-1
Author(s):  
Sabrina Kletz ◽  
Marco Bertini ◽  
Mathias Lux

Having already discussed MatConvNet and Keras, let us continue with an open source framework for deep learning that takes a new and interesting approach. TensorFlow.js not only provides deep learning for JavaScript developers, but also makes deep learning applications available in WebGL-enabled web browsers, specifically Chrome, Chromium-based browsers, Safari and Firefox. Recently, Node.js support has been added, so TensorFlow.js can be used to control TensorFlow directly, without a browser. TensorFlow.js is easy to install: as soon as a browser is installed, one is ready to go. Browser-based, cross-platform applications, e.g. those running on Electron, can also make use of TensorFlow.js without an additional install. Performance, however, depends on the browser the client is running and on the memory and GPU of the client device; more specifically, one cannot expect to analyze 4K videos on a mobile phone in real time. While it is easy to install and easy to develop with, TensorFlow.js has drawbacks: (i) developers have less control over where the machine learning actually takes place (e.g. on CPU or GPU), (ii) it runs in the same sandbox as all web pages in the browser, and (iii) the current release still has rough edges and is not considered stable enough for production use.


Author(s):  
Ahmet Artu Yıldırım ◽  
Dan Watson

Major Internet services are required to process a tremendous amount of data in real time. When we put these services under the magnifying glass, it is clear that distributed object storage systems play an important role at the back end in achieving this success. In this chapter, an overview of current state-of-the-art storage systems is given, covering systems used for reliable, high-performance and scalable storage in data centers and the cloud. Then, an experimental distributed object storage system (CADOS) is introduced for efficiently retrieving large data, such as hundreds of megabytes, through HTML5-enabled web browsers over big data (terabytes of data) in a cloud infrastructure. The objective of the system is to minimize latency and propose a scalable storage system on the cloud using a thin RESTful web service and modern HTML5 capabilities.
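A standard building block for the kind of chunked, browser-driven retrieval of large objects described above is the HTTP Range header, which lets a client fetch an object in slices (and in parallel) from a RESTful endpoint. The helper below is an illustrative sketch of that idea, not part of CADOS itself.

```python
def chunk_ranges(total_size: int, chunk_size: int) -> list:
    """Split an object of total_size bytes into HTTP Range header values.

    Byte ranges in HTTP are inclusive on both ends, so a 10-byte object
    fetched in 4-byte chunks yields bytes=0-3, bytes=4-7, bytes=8-9.
    A client can issue one GET per range (possibly concurrently) and
    reassemble the slices in order.
    """
    ranges = []
    for start in range(0, total_size, chunk_size):
        end = min(start + chunk_size, total_size) - 1
        ranges.append(f"bytes={start}-{end}")
    return ranges
```

Each value would be sent as a `Range:` request header; a server that supports partial content replies with status 206 and the requested slice.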


Author(s):  
Mike Thelwall

Scientific Web Intelligence (SWI) is a research field that combines techniques from data mining, Web intelligence, and scientometrics to extract useful information from the links and text of academic-related Web pages using various clustering, visualization, and counting techniques. Its origins lie in previous scientometric research into mining off-line academic data sources such as journal citation databases. Typical scientometric objectives are either evaluative (assessing the impact of research) or relational (identifying patterns of communication within and among research fields). From scientometrics, SWI also inherits a need to validate its methods and results so that the methods can be justified to end users, and the causes of the results can be found and explained.


Author(s):  
José-Fernando Diez-Higuera ◽  
Francisco-Javier Diaz-Pernas

In the last few years, because of the rapid growth of the Internet, general-purpose clients have achieved a high level of popularity for static consultation of text and pictures. This is the case of the World Wide Web and its Web browsers. Using a hypertext system, Web users can select and read on their computers information from all around the world, with no other requirement than an Internet connection and a navigation program. For a long time, the information available on the Internet has consisted of written texts and 2D pictures (i.e., static information). This sort of information suited many publications, but it was highly unsatisfactory for others, like those related to objects of art, where real volume and interactivity with the user are of great importance. Here, the possibility of including 3D information in Web pages makes real sense.


Computers ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 70
Author(s):  
Carolina Fernández ◽  
Sergio Giménez ◽  
Eduard Grasa ◽  
Steve Bunch

The lack of high-performance RINA (Recursive InterNetwork Architecture) implementations to date makes it hard to experiment with RINA as an underlay networking fabric solution for different types of networks, and to assess RINA’s benefits in practice in scenarios with high traffic loads. High-performance router implementations typically require dedicated hardware support, such as FPGAs (Field Programmable Gate Arrays) or specialized ASICs (Application-Specific Integrated Circuits). With the advance of hardware programmability in recent years, new possibilities unfold to prototype novel networking technologies. In particular, the use of the P4 programming language for programmable ASICs holds great promise for developing a RINA router. This paper details the design and part of the implementation of the first P4-based RINA interior router, which reuses the layer management components of the IRATI Linux-based RINA implementation and implements the data-transfer components using a P4 program. We also describe the configuration and testing of our initial deployment scenarios, using ancillary open-source tools such as the P4 reference test software switch (BMv2) and the P4Runtime API.


2018 ◽  
Vol 14 (2) ◽  
pp. 212-232 ◽  
Author(s):  
Weidan Du ◽  
Zhenyu Cheryl Qian ◽  
Paul Parsons ◽  
Yingjie Victor Chen

Purpose: Modern Web browsers all provide a history function that allows users to see a list of URLs they have visited in chronological order. The history log contains rich information but is seldom used because of the tedious nature of scrolling through long lists. This paper aims to propose a new way to improve users’ Web browsing experience by analyzing, clustering and visualizing their browsing history.

Design/methodology/approach: The authors developed a system called Personal Web Library to help users develop awareness of and understand their Web browsing patterns, identify their topics of interest and retrieve previously visited Web pages more easily.

Findings: User testing showed that the system is usable and attractive. Users could easily see patterns and trends at different time granularities, recall pages from the past and understand the local context of a browsing session. The system’s flexibility provides users with much more information than the traditional history function in modern Web browsers. Participants in the study gained an improved awareness of their Web browsing patterns, and mentioned that they were willing to improve their time management after viewing those patterns.

Practical implications: As more and more daily activities rely on the internet and Web browsers, browsing data captures a large part of users’ lives. Providing users with interactive visualizations of their browsing history can facilitate personal information management, time management and other meta-level activities.

Originality/value: This paper aims to help users gain insights into and improve their Web browsing experience; the authors hope that their work can spur more research contributions in this underdeveloped yet important area.


2016 ◽  
Vol 2016 ◽  
pp. 1-14
Author(s):  
Shukai Liu ◽  
Xuexiong Yan ◽  
Qingxian Wang ◽  
Xu Zhao ◽  
Chuansen Chai ◽  
...  

High-profile attacks using malicious HTML and JavaScript code have seen a dramatic increase in both awareness and exploitation in recent years. Unfortunately, existing security mechanisms do not provide enough protection. We propose a new protection mechanism named PMHJ, based on the support of both web applications and web browsers, against malicious HTML and JavaScript code in vulnerable web applications. PMHJ prevents the injection of HTML elements by using a random attribute value, and prevents node-split attacks by using an attribute carrying the hash value of the HTML element. PMHJ ensures content security in web pages by verifying HTML elements, confining insecure HTML usages that can be exploited by attackers, and disabling JavaScript APIs that may incur injection vulnerabilities. PMHJ provides a flexible way to rein in high-risk, powerful JavaScript APIs according to the principle of least authority. The PMHJ policy is easy to deploy in real-world web applications. Test results show that PMHJ has little influence on the run time and code size of web pages.
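The random-attribute and element-hash ideas described above can be sketched as follows. This is a simplified, hypothetical illustration loosely following the abstract's description, not the actual PMHJ implementation; the tag names, separator format and server-side secret are all assumptions.

```python
import hashlib
import hmac
import secrets

SECRET = "server-side-secret"  # hypothetical per-application key, kept server-side


def protect(tag: str, content: str):
    """Emit an element with a random nonce and a hash binding nonce+content.

    An attacker injecting markup cannot predict the per-response nonce,
    and cannot forge the hash without the server secret, so injected or
    node-split elements fail verification.
    """
    nonce = secrets.token_hex(8)  # fresh random attribute value per response
    digest = hashlib.sha256(f"{nonce}|{content}|{SECRET}".encode()).hexdigest()
    html = f'<{tag} data-nonce="{nonce}" data-hash="{digest}">{content}</{tag}>'
    return html, nonce, digest


def verify(nonce: str, content: str, digest: str) -> bool:
    """Recompute the hash and compare in constant time."""
    expected = hashlib.sha256(f"{nonce}|{content}|{SECRET}".encode()).hexdigest()
    return hmac.compare_digest(expected, digest)
```

In the scheme PMHJ describes, the browser side would perform this verification before rendering, rejecting elements whose content no longer matches the attached hash.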


2020 ◽  
Author(s):  
Ram P. Rustagi

In this series of articles on Experiential Learning of Networking Technologies, we have discussed a number of network protocols: HTTP [7] at the application layer, TCP [3] and UDP [1] at the transport layer, which provide end-to-end communication, and IP addressing [2] and routing for packet delivery at the network layer. We have defined a number of experiential exercises for each underlying concept to provide a practical understanding of these protocols. Now we take a holistic view of the protocols learned so far and look at how they all come into play when an internet user makes a simple web request, e.g., what happens from a network perspective when a user enters google.com in the URL bar of a web browser [12]. From the user’s perspective, Google’s search interface is simply displayed in the browser window, but inside the network, both in the user’s local network and in the internet, a lot of network activity takes place. The focus of this article is to understand the traversal of packets in the network triggered by any such user activity.
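The sequence triggered by typing a bare hostname can be sketched as a small planning function: resolve the name via DNS, open a TCP connection to the derived port, then send an HTTP request for the path. The sketch below (illustrative; it plans the steps rather than performing network I/O) shows how each piece of the URL maps onto a protocol layer covered in the series.

```python
from urllib.parse import urlsplit


def connection_plan(url: str) -> dict:
    """Map a URL (or bare hostname) onto the network steps it triggers.

    A bare hostname like "google.com" defaults to http on port 80;
    an https URL defaults to port 443.
    """
    parts = urlsplit(url if "://" in url else "http://" + url)
    port = parts.port or (443 if parts.scheme == "https" else 80)
    return {
        # Application layer, via the resolver: typically a UDP query to port 53.
        "dns_lookup": parts.hostname,
        # Transport layer: TCP three-way handshake to the resolved address.
        "tcp_connect": (parts.hostname, port),
        # Application layer: the HTTP request line's path component.
        "http_request": parts.path or "/",
    }
```

Each dictionary entry corresponds to packets that can be observed on the wire (e.g. with a packet capture tool), which is exactly the traversal this article examines.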


2019 ◽  
Author(s):  
RAM P. RUSTAGI ◽  
VIRAJ KUMAR

In a TCP connection, the underlying network drops packets when it lacks the capacity to deliver all the packets sent by the sender to the receiver. This phenomenon is called congestion. TCP at the sender’s side will not receive ACKs for these dropped packets. Since TCP is a reliable protocol, the sender must retransmit all such packets. The mechanism TCP uses to deal with such situations is called TCP congestion control. In this article, we explain the basics of congestion control and provide experiential exercises to help understand its impact on TCP performance.
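The sender-side behavior sketched above can be illustrated with a toy simulation of the classic congestion window dynamics: exponential growth in slow start, linear growth in congestion avoidance, and a sharp reduction on loss. This is a deliberately simplified, Tahoe-style sketch (one event per round-trip time, window reset to 1 on loss), not a faithful model of any particular TCP variant.

```python
def aimd(loss_events, cwnd=1, ssthresh=64):
    """Simulate congestion window evolution, one entry per RTT.

    loss_events: iterable of booleans, True meaning a loss was detected
    in that RTT. Returns the cwnd value after each RTT.
    """
    trace = []
    for loss in loss_events:
        if loss:
            # Loss detected: halve the threshold, restart from a window of 1
            # (Tahoe-style reaction, chosen here for simplicity).
            ssthresh = max(cwnd // 2, 1)
            cwnd = 1
        elif cwnd < ssthresh:
            cwnd *= 2   # slow start: exponential growth per RTT
        else:
            cwnd += 1   # congestion avoidance: additive increase per RTT
        trace.append(cwnd)
    return trace
```

Plotting such a trace produces the familiar sawtooth pattern, which the article's experiential exercises let readers observe on real connections.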

