An Efficient Analytical Approach to Visualize Text-Based Event Logs for Semiconductor Equipment

2021 ◽  
Vol 11 (13) ◽  
pp. 5944
Author(s):  
Gunwoo Lee ◽  
Jongpil Jeong

Semiconductor equipment is a complex system in which numerous components are organically connected and controlled by many controllers. The EventLog records all the information available during system processing. Because the EventLog captures system runtime information, it lets developers and engineers understand system behavior and identify possible problems, making it essential for troubleshooting and maintenance. However, because the EventLog is text-based, complex to read, and stores a large quantity of information, the files are very large. For long processes, the log spans several files, and engineers must search through many of them, which makes it difficult to find the cause of a problem and lengthens the analysis. In addition, because of its large file size, the EventLog cannot be retained for a prolonged period, as it consumes a large amount of hard disk space on the CTC computer. In this paper, we propose a method to reduce the size of existing text-based log files. Our method stores text-based EventLogs in a database and visualizes them, making problems easier to approach than with the existing text-based analysis. We confirm the feasibility of this approach and propose a method that makes it easier for engineers to analyze log files.
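The core idea of moving a text EventLog into a database can be sketched in a few lines. The following is an illustrative sketch only, not the authors' implementation: the line format, table schema, and component names are assumptions, and a real EventLog parser would handle many more record types.

```python
import sqlite3
import re

# Hypothetical EventLog line format: "<date> <time> <level> <component> <message>"
LINE_RE = re.compile(r"^(\S+ \S+) (\w+) (\w+) (.*)$")

def import_eventlog(lines, conn):
    """Parse text EventLog lines and store them as structured rows,
    so engineers can query by level/component instead of scanning files."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events "
        "(ts TEXT, level TEXT, component TEXT, message TEXT)"
    )
    rows = []
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            rows.append(m.groups())
    conn.executemany("INSERT INTO events VALUES (?, ?, ?, ?)", rows)
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
n = import_eventlog(
    ["2021-06-01 12:00:00 INFO LoadPort wafer loaded",
     "2021-06-01 12:00:05 ERROR Robot arm timeout"],
    conn,
)
```

Once the rows are in a database, a single query such as `SELECT * FROM events WHERE level = 'ERROR'` replaces a manual search across many text files.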

Computers ◽  
2021 ◽  
Vol 10 (7) ◽  
pp. 83
Author(s):  
Péter Marjai ◽  
Péter Lehotay-Kéry ◽  
Attila Kiss

Presently, almost every piece of computer software produces many log messages based on events and activities during its usage. These files contain valuable runtime information that can be used in a variety of applications such as anomaly detection, error prediction, template mining, and so on. Usually, the generated log messages are raw, meaning they have an unstructured format, so these messages have to be parsed before data mining models can be applied. After parsing, template miners can be applied to the data to retrieve the events occurring in the log file. These events consist of two parts: the template, which is the fixed part and is the same for all instances of the same event type, and the parameter part, which varies across instances. To decrease the size of the log messages, we use the mined templates to build a dictionary for the events and store only the dictionary, the event ID, and the parameter list. We use six template miners to acquire the templates, namely IPLoM, LenMa, LogMine, Spell, Drain, and MoLFI. In this paper, we evaluate the compression capacity of our dictionary method with each of these algorithms. Since parameters could be sensitive information, we also encrypt the files after compression and measure the changes in file size, and we examine the speed of the log miner algorithms. Based on our experiments, LenMa has the best compression rate, with an average of 67.4%; however, because of its high runtime, we suggest combining our dictionary method with IPLoM and FFX, since IPLoM is the fastest of all the methods and achieves a 57.7% compression rate.
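The dictionary idea described above can be sketched as follows. This is an illustrative sketch under simplifying assumptions, not the paper's implementation: templates are given up front (rather than mined by IPLoM etc.), and matching is a simple token-by-token wildcard comparison.

```python
def match(template, message):
    """Match a template with <*> wildcards against a message token by
    token; return the list of parameter values, or None on mismatch."""
    t_tokens, m_tokens = template.split(), message.split()
    if len(t_tokens) != len(m_tokens):
        return None
    params = []
    for t, m in zip(t_tokens, m_tokens):
        if t == "<*>":
            params.append(m)
        elif t != m:
            return None
    return params

def compress(messages, templates):
    """Replace each message's fixed template part with a dictionary ID,
    keeping only the varying parameters."""
    dictionary = {t: i for i, t in enumerate(templates)}
    compressed = []
    for msg in messages:
        for tmpl, tid in dictionary.items():
            params = match(tmpl, msg)
            if params is not None:
                compressed.append((tid, params))
                break
        else:
            compressed.append((None, [msg]))  # no template matched
    return dictionary, compressed

templates = ["Connection from <*> closed", "User <*> logged in"]
dic, comp = compress(
    ["Connection from 10.0.0.5 closed", "User alice logged in"], templates
)
```

Storing `(0, ["10.0.0.5"])` instead of the full message is where the compression gain comes from: the fixed text is paid for once, in the dictionary.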


Author(s):  
Jozef Kapusta ◽  
Michal Munk ◽  
Dominik Halvoník ◽  
Martin Drlík

When talking about user behavior analytics, we have to understand what the main sources of valuable information are. One of these sources is definitely the web server. There are multiple places where we can extract the necessary data; the most common are the access log, error log, and custom log files of the web server, proxy server log files, web browser logs, browser cookies, etc. A web server log in its default form is known as a Common Log File (W3C, 1995) and keeps information about the IP address, the date and time of the visit, and the accessed and referenced resources. There are standardized methodologies that contain several steps leading to the extraction of new knowledge from the provided data. Usually, the first step in each of them is to identify users, user sessions, page views, and clickstreams. This process is called pre-processing. The main goal of this stage is to take an unprocessed web server log file as input and, after processing, output meaningful representations that can be used in the next phase. In this paper, we describe in detail user session identification, which can be considered the most important part of data pre-processing. Our paper aims to compare user/session identification using the STT with identification using cookies. This comparison was performed with respect to the quality of the sequential rules generated, i.e., regarding the generation of useful, trivial, and inexplicable rules.
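Time-based session identification can be sketched briefly. This is an illustrative sketch, assuming the STT denotes a standard time threshold on the gap between consecutive requests (30 minutes is a common choice in the literature, not necessarily the paper's value); cookie-based identification would instead group requests by a session cookie.

```python
from datetime import datetime, timedelta

STT = timedelta(minutes=30)  # assumed standard time threshold

def identify_sessions(requests):
    """Group one user's request timestamps into sessions: a gap larger
    than the STT between consecutive requests starts a new session."""
    sessions, current = [], []
    for ts in sorted(requests):
        if current and ts - current[-1] > STT:
            sessions.append(current)
            current = []
        current.append(ts)
    if current:
        sessions.append(current)
    return sessions

t0 = datetime(2021, 1, 1, 10, 0)
hits = [t0, t0 + timedelta(minutes=5), t0 + timedelta(hours=2)]
sessions = identify_sessions(hits)
```

Here the two-hour gap splits the three requests into two sessions; with cookies, the same three requests might all belong to one session if the cookie persisted.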


Author(s):  
Ricardo Muñoz Martín ◽  
Celia Martín de Leon

The Monitor Model fosters a view of translating in which two mind modes stand out and alternate when trying to render originals word-by-word by default: shallow, uneventful processing vs problem solving. Research may have been biased towards problem solving, often operationalized with a pause of, or above, 3 seconds. This project analyzed 16 translation log files by four informants working from four originals. A baseline minimal pause of 200 ms was used to calculate two individual thresholds for each log file: (a) a low one, 1.5 times the median pause within words, and (b) a high one, 3 times the median pause between words. Pauses were then characterized as short (between 200 ms and the lower threshold), mid, or long (above the higher threshold, chunking the recorded activities in the translation task into task segments), and assumed to respond to different causes. Weak correlations between short, mid, and long pauses were found, hinting at possibly different cognitive processes. Inferred processes did not fall neatly into categories depending on the length of the possibly associated pauses. Mid pauses occurred more often than long pauses between sentences and paragraphs, and they also more often flanked information searches and even problem-solving instances. Chains of proximal mid pauses marked cases of potential hesitation. Task segments tended to occur in 4–8 minute cycles, nested in a possible initial phase of contextualization followed by long periods of sustained attention. We found no evidence for problem-solving thresholds and no trace of behavior supporting the Monitor Model.
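The two per-log-file thresholds and the three-way pause classification described in the abstract can be sketched directly. This is an illustrative sketch of the stated formulas only; the pause values below are invented for demonstration, and the study's actual extraction of within-word and between-word pauses from keylogs is omitted.

```python
import statistics

def pause_thresholds(within_word_pauses_ms, between_word_pauses_ms):
    """Compute the paper's two individual thresholds for one log file:
    low  = 1.5 x median pause within words,
    high = 3   x median pause between words."""
    low = 1.5 * statistics.median(within_word_pauses_ms)
    high = 3 * statistics.median(between_word_pauses_ms)
    return low, high

def classify(pause_ms, low, high, floor=200):
    """Label a pause short, mid, or long; pauses under the 200 ms
    baseline are ignored as below the minimal pause."""
    if pause_ms < floor:
        return None
    if pause_ms <= low:
        return "short"
    if pause_ms <= high:
        return "mid"
    return "long"

low, high = pause_thresholds([220, 260, 300], [500, 700, 900])
labels = [classify(p, low, high) for p in (150, 350, 1200, 3000)]
```

With these invented medians the thresholds come out at 390 ms and 2100 ms, so the four sample pauses classify as ignored, short, mid, and long respectively.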


2021 ◽  
Vol 4 ◽  
Author(s):  
Rashid Zaman ◽  
Marwan Hassani ◽  
Boudewijn F. Van Dongen

In the context of process mining, event logs consist of process instances called cases. Conformance checking is a process mining task that inspects whether a log file conforms to an existing process model, and it additionally quantifies the conformance in an explainable manner. Online conformance checking processes streaming event logs while maintaining precise insights into the running cases and mitigating non-conformance, if any, in a timely fashion. State-of-the-art online conformance checking approaches bound memory by either delimiting the storage of events per case or limiting the number of cases to a specific window width. The former technique still requires unbounded memory because the number of cases to store is unlimited, while the latter forgets running, not yet concluded, cases in order to conform to the limited window width. Consequently, the processing system may later encounter events that represent some intermediate activity as per the process model but whose relevant case has been forgotten; we refer to these as orphan events. The naïve way to cope with an orphan event is either to neglect its relevant case for conformance checking or to treat it as an altogether new case. However, this might result in misleading process insights, for instance, overestimated non-conformance. In order to bound memory yet effectively incorporate orphan events into processing, we propose a missing-prefix imputation approach for such orphan events. Our approach utilizes the existing process model to impute the missing prefix. Furthermore, we leverage case storage management to increase the accuracy of the prefix prediction: we propose a systematic forgetting mechanism that distinguishes and forgets the cases that can be reliably regenerated as a prefix upon receipt of a future orphan event.
We evaluate the efficacy of our proposed approach through multiple experiments with synthetic and three real event logs while simulating a streaming setting. Our approach achieves considerably more realistic conformance statistics than the state of the art while requiring the same storage.
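The prefix-imputation idea can be sketched as a search over the process model. This is a heavily simplified, illustrative sketch under assumed structures: the model is a toy transition map, the imputed prefix is the shortest enabling path, and the paper's case-storage statistics and systematic forgetting mechanism are omitted.

```python
from collections import deque

def impute_prefix(model, start, orphan_activity):
    """BFS over a process model (state -> [(activity, next_state), ...])
    to find the shortest activity sequence that enables the orphan
    event's activity; that sequence serves as the imputed prefix."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, prefix = queue.popleft()
        for activity, nxt in model.get(state, []):
            if activity == orphan_activity:
                return prefix  # prefix after which the orphan event fits
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, prefix + [activity]))
    return None  # activity unreachable: the orphan event cannot conform

# Toy model of a claim-handling process.
model = {
    "s0": [("register", "s1")],
    "s1": [("check", "s2")],
    "s2": [("approve", "s3"), ("reject", "s3")],
}
prefix = impute_prefix(model, "s0", "approve")
```

An orphan `approve` event would thus be treated as belonging to a case with the regenerated prefix `register, check`, instead of being dropped or counted as a fully non-conformant new case.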


Author(s):  
Sagar Shankar Rajebhosale ◽  
Mohan Chandrabhan Nikam

A log is a record of events that happen within an organization's systems and networks. These logs are very important for any organization because a log file records all user activities. Log files therefore play a vital role and contain sensitive information, so their security should be a high priority. Securely maintaining log records over an extended period of time is very important to the proper functioning of any organization, yet the management and maintenance of logs is a difficult task. Moreover, deploying a system for high security and privacy of log records may be an overhead for an organization and require additional costs. Many techniques have been designed for the security of log records. An alternative solution for maintaining log records is Blockchain technology: a blockchain provides security for the log files. Keeping log files in a Blockchain environment, however, leads to challenges with the decentralized storage of log files. This article proposes secured log management over Blockchain and the use of cryptographic algorithms to address data storage access issues. The proposed technology may be a complete solution to the secure log management problem.
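The tamper-evidence property that a blockchain gives log records can be sketched with a simple hash chain. This is an illustrative sketch of the underlying cryptographic idea only, not the article's system: a real blockchain adds consensus and decentralized storage on top of the chained hashes.

```python
import hashlib

def append_entry(chain, record):
    """Append a log record whose block hash covers the previous block's
    hash, so altering any earlier record breaks every later link."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block_hash = hashlib.sha256((prev_hash + record).encode()).hexdigest()
    chain.append({"record": record, "prev": prev_hash, "hash": block_hash})

def verify(chain):
    """Recompute every link; return False if any record was modified."""
    prev = "0" * 64
    for block in chain:
        expected = hashlib.sha256((prev + block["record"]).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = []
append_entry(chain, "user admin logged in")
append_entry(chain, "config changed")
ok_before = verify(chain)
chain[0]["record"] = "user admin logged out"  # tamper with an old entry
ok_after = verify(chain)
```

Verification succeeds before the tampering and fails afterwards, which is exactly the guarantee that makes chained storage attractive for audit logs.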


Abstract: This study aims to investigate some of the difficulties, problems, and living conditions perceived by families living in shelter centers in schools of the international relief agency. The researchers used a descriptive analytical approach together with the case-study method. The study sample consisted of 13 families residing in the Elzaytoon boys' elementary school "B" in the Tel al-Hawa district; in-depth interviews were held with the families to learn about their living conditions in detail, and six informants who had witnessed the events were also interviewed.
The results concerning the place of living and its facilities indicated that living conditions were difficult: each family was living in a single classroom in the school, and the classrooms were not intended for habitation. Regarding the economic aspect, the results confirmed that the inhabitants were jobless, and the informants confirmed that none of the shelter residents had any source of income or work. On the psychological side, all of the interviewed families reported that their children and wives suffered from various psychological problems such as fear, bedwetting, and mental illness, and a large number of them had been referred to outpatient clinics. Regarding the social aspect, the majority of the sample confirmed that they had no social relations with those around them; their relations were confined to one another within the school. On the political level and the prospect of returning to their homes after reconstruction, all believed that reconstruction would be slow and would take a long time. Keywords: living conditions of Palestinian families.


2016 ◽  
Vol 705 ◽  
pp. 323-331 ◽  
Author(s):  
Togay Ozbakkaloglu

This paper presents the results of tests on 20 hollow and concrete-filled double-skin tubular columns (DSTCs), conducted as part of a comprehensive experimental program undertaken at the University of Adelaide on FRP-concrete-steel DSTCs. The paper aims to provide important insights into the influence of two key parameters, namely the diameter of the inner steel tube and the presence or absence of a concrete filling inside the inner steel tube, which play major roles in column behavior through their influence on a series of interacting mechanisms that govern the complex system behavior. A detailed examination of the results yielded a number of important insights into the mechanisms that influence the compressive behavior of DSTCs.


2019 ◽  
Vol 11 (1) ◽  
pp. 10
Author(s):  
Yurulina Gulo

This journal article aims to give a new view of how women, who in Ono Niha mythology are held in very high respect, are in reality in Nias the object of injustice within a patriarchal culture that formed long ago. The article uses a descriptive-analytical method with a qualitative approach. The qualitative approach emphasizes the accuracy of data and follows an inductive approach, meaning that data are collected, examined, and abstracted through interviews, literature, and field observations. The authors thus obtained data showing that in Nias, women experience real oppression in a patriarchal culture because of a social construction that makes them second-class and regards them as weak and low on the basis of natural labels. Women experience this oppression socially, politically, and religiously. The injustice in society across various fields is rooted in a patriarchal culture in which men assume that women are their property, servants, and accessories.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Daniel Hofer ◽  
Markus Jäger ◽  
Aya Khaled Youssef Sayed Mohamed ◽  
Josef Küng

Purpose For aiding computer security experts in their work, log files are a crucial piece of information. The time domain is especially important because, in most cases, timestamps are the only linking points between events caused by attackers, faulty systems, or simple errors and their corresponding entries in log files. With the idea of storing and analyzing this log information in graph databases, we need a suitable model to store and connect timestamps and their events. This paper aims to find and evaluate different approaches to storing timestamps in graph databases, with their individual benefits and drawbacks. Design/methodology/approach We analyse three different approaches to representing and storing timestamp information in graph databases. To check the models, we set up four typical questions that are important for log file analysis and tested them against each of the models. During the evaluation, we used performance and other properties as metrics for how suitable each model is for representing the log files' timestamp information. In the last part, we try to improve one promising-looking model. Findings We come to the conclusion that the simplest model, with the fewest graph database-specific concepts in use, is also the one yielding the simplest and fastest queries. Research limitations/implications Limitations of this research are that only one graph database was studied and that improvements to the query engine might change future results. Originality/value In this study, we addressed the issue of storing timestamps in graph databases in a meaningful, practical, and efficient way. The results can be used as a pattern for similar scenarios and applications.
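The contrast between timestamp-as-property and more graph-specific timestamp models can be sketched in plain data structures. This is an illustrative sketch only, not the paper's three models: plain dicts stand in for graph nodes, the model names are invented, and a real graph database would use labeled nodes and relationships.

```python
# Two illustrative ways to attach timestamps to log-event nodes.

def model_property(events):
    """Model A: the timestamp is a plain property on each event node
    (the simplest model, with no graph-specific timestamp concepts)."""
    return [{"label": "Event", "msg": m, "ts": ts} for ts, m in events]

def model_time_tree(events):
    """Model B: events link to shared day nodes (a time tree), so a
    range query can walk the tree instead of scanning all properties."""
    tree, nodes = {}, []
    for ts, m in events:
        day_key = ts[:10]  # "YYYY-MM-DD"
        day_node = tree.setdefault(day_key, {"label": "Day", "date": day_key})
        nodes.append({"label": "Event", "msg": m, "ON_DAY": day_node})
    return tree, nodes

events = [("2021-03-01T10:00:00", "login"), ("2021-03-01T11:30:00", "error")]
flat = model_property(events)
tree, linked = model_time_tree(events)
```

In model B, both events share one day node, which is the structural investment the paper's evaluation weighs against the plain-property model's simpler and, per the findings, faster queries.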

