Linux kernel
Recently Published Documents

TOTAL DOCUMENTS: 396 (five years: 28)
H-INDEX: 19 (five years: 1)

2021 ◽ Vol 36 (6) ◽ pp. 1325-1341
Author(s): Ying-Jie Wang, Liang-Ze Yin, Wei Dong

Author(s): Eduard Staniloiu, Razvan Nitu, Cristian Becerescu, Razvan Rughinis

2021
Author(s): Yimin Cheng, Xujie Hou, Hailong Shi, Feng Liu

Author(s): Muhammad Ejaz Sandhu

To test the behavior of Linux kernel modules, device drivers, and file systems under faulty conditions, researchers inject faults into artificial environments. Because such faults are rare and unpredictable in the wild, localizing and detecting errors in kernel modules, device drivers, and file systems is extremely difficult; artificially introducing random faults during normal tests is the only known approach to the problem. The standard method for such experiments is to generate synthetic faults and study their effects. Various fault-injection frameworks for the Linux kernel have been analyzed to simulate such detection. This paper compares the approaches and techniques these frameworks use to test Linux kernel modules, including simulating low-resource conditions and detecting memory leaks. The frameworks used in the experiments are the Linux Test Project (LTP), KEDR, Linux Fault Injection (LFI), and SCSI.


Queue ◽ 2021 ◽ Vol 19 (4) ◽ pp. 29-41
Author(s): Patrick Thomson

Modern static-analysis tools provide powerful and specific insights into codebases. Coccinelle, for example, developed for the Linux kernel, is a powerful tool for searching, analyzing, and rewriting C source code; because the Linux kernel contains more than 27 million lines of code, a static-analysis tool is essential both for finding bugs and for making automated changes across its many libraries and modules. Another tool targeted at the C family of languages is Clang's scan-build, which ships with many useful analyses and provides an API for programmers to write their own. Like so many things in computer science, the utility of static analysis is self-referential: to write reliable programs, we must also write programs for our programs. But this is no paradox. Static-analysis tools, complex though their theory and practice may be, are what will enable us, and the engineers of the future, to overcome this challenge and yield the knowledge and insights that we practitioners deserve.


2021 ◽ Vol 2021 ◽ pp. 1-21
Author(s): Đani Vladislavić, Darko Huljenić, Julije Ožegović

Network function virtualization (NFV) is a concept aimed at achieving a telecom-grade cloud ecosystem for new-generation networks, focusing on capital and operational expenditure (CAPEX and OPEX) savings. This study introduces an empirical throughput prediction model for virtual network function (VNF) and network function virtualization infrastructure (NFVI) architectures based on the Linux kernel. The model arises from a methodology for performance evaluation and modeling based on execution area (EA) distribution by CPU core pinning. An EA is defined as a software execution unit that can run isolated on a compute resource (a CPU core). EAs are derived from the elements and packet-processing principles in NFVIs and VNFs based on the Linux kernel. Performing measurements and observing the linearity of the measured results make it possible to apply a model calibration technique and obtain a general VNF and NFVI architecture model for performance prediction and environment setup optimization. The modeling parameters are derived from the cumulative packet-processing cost, obtained by measurement, of the collocated EAs on the CPU core hosting the bottleneck EA. The model is successfully validated against measurement results obtained in an emulated environment and used to predict optimal system configurations and maximal throughput for different CPUs.


Author(s): Pradip Ram Selokar, P. T. Karule

2021 ◽ Vol 11 (14) ◽ pp. 6486
Author(s): Mei-Ling Chiang, Wei-Lun Su

NUMA multi-core systems divide system resources into several nodes. When the load between cores becomes imbalanced, the kernel scheduler's load-balancing mechanism migrates threads between cores or across NUMA nodes. A migrated thread must then access memory on its previous node remotely, which degrades performance. Threads to be migrated must be selected effectively and efficiently, since the related operations run on the critical path of the kernel scheduler. This study focuses on improving inter-node load balancing for multithreaded applications. We propose a thread-aware selection policy that considers the distribution of threads on nodes for each thread group when migrating one thread for inter-node load balancing. It selects a thread whose thread group has the least exclusive thread distribution, i.e., whose members are distributed most evenly across nodes, so the migration has the least influence on the group's data mapping and thread mapping. We further devise several enhancements that eliminate superfluous evaluations for multithreaded processes, making the selection procedure more efficient. Experimental results for the commonly used PARSEC 3.0 benchmark suite show that the modified Linux kernel with the proposed selection policy improves performance by 10.7% over the unmodified Linux kernel.

