Application of Data Mining to Improve the Quality of Maintenance and Repair of Ship Equipment Using the K-Means Clustering Method (Penerapan Data Mining Dalam Meningkatkan Mutu Perawatan dan Perbaikan Perlengkapan Alat-Alat Kapal Laut Menerapkan Metode K-Means Clustering)

2021 ◽ Vol 2 (3) ◽ pp. 232
Author(s): Yulifa Esfana Putri Sinaga, Garuda Ginting, Melda Panjaitan

The KN. Arcturus carries out sailing activities every week, so maintenance and repair must be performed properly and on schedule so as not to disrupt ongoing production activities. Machine breakdowns are currently still relatively frequent and require long repair times; this study therefore proposes preventive maintenance supported by a clustering technique, namely K-Means Clustering. With this method, the collected data can be grouped into several clusters based on similarity, so that records with the same characteristics fall into one cluster and records with different characteristics fall into another. The abundance of available data can be used to uncover hidden information, but doing so requires processing the data; this processing is known as data mining. The resulting grouping reveals the outcome of the analyzed data.
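The K-Means procedure the abstract describes (assign each point to its nearest centroid, then move each centroid to the mean of its assigned points) can be sketched in a few lines. The two-feature maintenance records used in the test below (e.g. failure count, repair hours) are purely illustrative, not the paper's data:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-Means: alternate an assignment step (nearest centroid by
    squared Euclidean distance) and an update step (centroid = cluster mean)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: put each point in the cluster of its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Update step: recompute each non-empty cluster's centroid.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centroids, clusters
```

With two well-separated groups of records, the two clusters recover the grouping regardless of which points are drawn as initial centroids.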

2020 ◽ Vol 2 (2) ◽ pp. 76-83
Author(s): Irmanita Nasution, Agus Perdana Windarto, M Fauzan

Poverty is one of the problems that inhibits national and regional growth. This research applies data mining techniques, specifically the K-Means method, to data sourced from the statistical center for 2012-2018. K-Means groups existing data into several clusters such that the data in one group share the same characteristics and differ from the data in other groups. The 34 provinces in the dataset are divided into 2 clusters, high and low, so that the study identifies the group of provinces with a high poverty rate and the group with a low poverty rate. The grouping yielded 8 provinces in the high cluster and 26 in the low cluster. It is hoped that this research can provide input to the government so that it can give more attention to provinces categorized as high in poverty.
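The high/low split described above is K-Means with k = 2 on per-province rates. A minimal one-dimensional sketch, run on hypothetical poverty percentages rather than the study's actual figures:

```python
def two_means_1d(values, iters=10):
    """Group 1-D values into a 'low' and a 'high' cluster via K-Means (k=2),
    seeding the two centroids at the minimum and maximum value."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        low = [v for v in values if abs(v - lo) <= abs(v - hi)]
        high = [v for v in values if abs(v - lo) > abs(v - hi)]
        if not high:            # all values identical: a single cluster
            break
        lo = sum(low) / len(low)    # recompute centroids as cluster means
        hi = sum(high) / len(high)
    return sorted(low), sorted(high)
```

Provinces whose rate lands in the `high` list would be the ones flagged for extra government attention.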


2018 ◽ Vol 6 (2) ◽ pp. 60
Author(s): Koko Handoko

The concept of data mining has become one of the important tools in information management because the amount of available information keeps increasing. Data mining offers many techniques in practice; one of them is clustering, the process of grouping data so that data in the same group have properties as similar as possible. Clustering itself comprises many methods, one of which is K-Means. By applying data mining clustering to traffic activity data taken from Hang Nadim Airport Batam, passengers can be grouped into clusters according to the nature of each record. The data include the numbers of passengers arriving, departing, and transiting. To perform the clustering, the sample data must pass through several stages in order to obtain correct cluster results: a data processing stage, a clustering stage, and an algorithm stage. Based on the research conducted on the sample data, the result is a grouping of passenger data at Hang Nadim Airport Batam.
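The data processing stage named above typically normalizes the raw counts before clustering, so that no single feature dominates the distance computation. A minimal sketch assuming min-max scaling; the abstract does not specify which preprocessing the authors actually used:

```python
def min_max_scale(rows):
    """Data-processing stage: scale each feature column (e.g. arriving,
    departing, transiting passenger counts) into [0, 1] so that all
    features contribute comparably to the clustering stage's distances."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [tuple((v - l) / (h - l) if h > l else 0.0
                  for v, l, h in zip(row, lo, hi))
            for row in rows]
```

The scaled rows would then be handed to the clustering stage (e.g. K-Means) unchanged in structure, only rescaled.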


2019 ◽ Vol 13 (1) ◽ pp. 27-36
Author(s): Andreas Neubert

Due to the different characteristics of the piece goods (e.g. size and weight), they are transported in general cargo warehouses by manually operated industrial trucks such as forklifts and pallet trucks. Since manual activities are susceptible to human error, errors occur in the logistical processes of general cargo warehouses, leading to incorrect loading, incorrect stacking, and damage to storage equipment and general cargo. Costs arising from these errors could be reduced if the errors could be remedied in advance. This paper presents a monitoring procedure for logistical processes in manually operated general cargo warehouses, where predictive analysis is applied. Seven steps are introduced, and described in detail, with a view to integrating predictive analysis into the IT infrastructure of general cargo warehouses. The results of this paper comprise the CRISP4BigData model, the SVM data mining algorithm, the data mining tool R, and the programming language C++ for scoring in general cargo warehouses. Once the system has been built and installed in general cargo warehouses, initial results obtained with this method over a certain time span will be compared with results obtained without it, through manual recording, over the same period.


Author(s): Man Tianxing, Nataly Zhukova, Alexander Vodyaho, Tin Tun Aung

Extracting knowledge from data streams received from observed objects through data mining is required in various domains. However, there is a lack of guidance on which techniques can or should be used in which contexts. Meta mining technology can help build data processing workflows based on knowledge models that take the specific features of the objects into account. This paper proposes a meta mining ontology framework that allows selecting algorithms for specific data mining tasks and building suitable processes. The proposed ontology is constructed from existing ontologies and is extended with an ontology of data characteristics and task requirements. Unlike the existing ontologies, the proposed ontology describes the overall data mining process, can be used to build data processing workflows in various domains, and has low computational complexity compared to others. The authors developed an ontology merging method and a sub-ontology extraction method, implemented on top of the OWL API by extracting and integrating the relevant axioms.


Author(s): Chong Chen, Ying Liu, Xianfang Sun, Shixuan Wang, Carla Di Cairano-Gilfedder, ...

Over the last few decades, reliability analysis has gained more and more attention as it can help lower maintenance costs. Time between failures (TBF) is an essential topic in reliability analysis: if the TBF can be accurately predicted, preventive maintenance can be scheduled in advance in order to avoid critical failures. The purpose of this paper is to study TBF prediction using deep learning techniques. Deep learning, as a tool capable of capturing highly complex and nonlinear patterns, can be useful for TBF prediction. The general principles of designing a deep learning model are introduced. Using a sizeable automobile TBF dataset, we conduct an empirical study of TBF prediction with deep learning and several data mining approaches. The empirical results show the merits of deep learning in predictive performance, but at the cost of a high computational load.
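As a minimal stand-in for the deep learning regressors discussed, a one-hidden-layer network trained by stochastic gradient descent can be sketched from scratch; this is not the paper's architecture, and the training data in the test below are synthetic rather than automobile TBF records:

```python
import math
import random

def train_mlp(X, y, hidden=8, epochs=3000, lr=0.01, seed=0):
    """One-hidden-layer tanh network with a linear output, trained by SGD
    on squared error. Returns a predict(x) closure."""
    rng = random.Random(seed)
    n_in = len(X[0])
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Forward pass: hidden activations, then scalar output.
            h = [math.tanh(sum(w * x for w, x in zip(row, xi)) + b)
                 for row, b in zip(W1, b1)]
            out = sum(w * a for w, a in zip(W2, h)) + b2
            err = out - yi
            # Backward pass: output layer first, then hidden layer.
            for j in range(hidden):
                grad_h = err * W2[j] * (1 - h[j] ** 2)  # pre-update W2[j]
                W2[j] -= lr * err * h[j]
                b1[j] -= lr * grad_h
                for k in range(n_in):
                    W1[j][k] -= lr * grad_h * xi[k]
            b2 -= lr * err

    def predict(x):
        h = [math.tanh(sum(w * v for w, v in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return sum(w * a for w, a in zip(W2, h)) + b2

    return predict
```

In a TBF setting, `x` would hold covariates of the previous failure cycle and the target would be the next time between failures; real models in the paper's class would be deeper and trained on far more data.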


Author(s): Haixu Xi, Feiyue Ye, Sheng He, Yijun Liu, Hongfen Jiang

Batch processes are common in traffic video data processing, such as traffic video image processing and intelligent transportation, and batch processing can increase the efficiency of resource conservation. However, owing to limited research on traffic video data processing conditions, batch processing activities in this area remain little examined. In this study we developed a workflow system that employs functional dependency mining over a relational database. The Bayesian network, a focus area of data mining, offers users an intuitive means of expressing causality; graph theory is also used in data mining. The proposed approach relies on relational database functions to remove redundant attributes, reduce interference, and select an attribute order. Selective hidden naive Bayes (SHNB) affects this attribute order when applied only once; to account for the influence of hidden naive Bayes (HNB), it is introduced twice rather than as a single pair. We additionally designed and implemented algorithms that mine dependencies from a batch traffic video processing log for data execution.
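HNB and SHNB extend standard naive Bayes by giving each attribute a hidden parent that pools the influence of the other attributes. As background only, the standard Gaussian naive Bayes baseline that these models build on can be sketched as follows; the one-feature data in the test are synthetic:

```python
import math
from collections import defaultdict

def fit_gnb(X, y):
    """Standard Gaussian naive Bayes: per-class priors plus per-class,
    per-feature (mean, variance) estimates. This is the baseline model
    that HNB/SHNB extend; it assumes features are conditionally independent."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    priors, stats = {}, {}
    for c, rows in by_class.items():
        priors[c] = len(rows) / len(X)
        cols = list(zip(*rows))
        per_feat = []
        for col in cols:
            mu = sum(col) / len(col)
            var = max(sum((v - mu) ** 2 for v in col) / len(col), 1e-6)
            per_feat.append((mu, var))
        stats[c] = per_feat
    return priors, stats

def predict_gnb(priors, stats, x):
    """Pick the class maximizing log prior + sum of Gaussian log-likelihoods."""
    best, best_lp = None, -math.inf
    for c in priors:
        lp = math.log(priors[c])
        for (mu, var), v in zip(stats[c], x):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

HNB replaces the independence assumption with one weighted hidden parent per attribute; the attribute-order sensitivity the abstract discusses arises from how those parents are constructed.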


Hadmérnök ◽ 2020 ◽ Vol 15 (4) ◽ pp. 141-158
Author(s): Eszter Katalin Bognár

In modern warfare, the most important innovation to date has been the utilisation of information as a weapon. The basis of successful military operations is the ability to correctly assess a situation based on credible collected information. In today's military, the primary challenge is not the actual collection of data. It has become more important to extract relevant information from that data. This requirement cannot be successfully completed without necessary improvements in tools and techniques to support the acquisition and analysis of data. This study defines Big Data and its concept as applied to military reconnaissance, focusing on the processing of imagery and textual data, bringing to light modern data processing and analytics methods that enable effective processing.

