Process Mining to Explore Variations in Endometrial Cancer Pathways from GP Referral to First Treatment

Author(s):  
Angelina Prima Kurniati ◽  
Eric Rojas ◽  
Kieran Zucker ◽  
Geoff Hall ◽  
David Hogg ◽  
...  

The main challenge in the pathway analysis of cancer treatments is the complexity of the process. Process mining is one approach that can be used to visualize and analyze these complex pathways. In this study, our purpose was to use process mining to explore variations in the treatment pathways of endometrial cancer. We extracted patient data from a hospital information system, created the process model, and analyzed variations in the 62-day pathway from General Practitioner referral to first treatment in the hospital. We also analyzed the variations against three criteria: the type of first treatment, the age at diagnosis, and the year of diagnosis. This approach should be of interest to others dealing with complex medical and healthcare processes.
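The variant analysis described in this abstract can be sketched in a few lines: group each case's events into an ordered activity sequence and count the distinct sequences (pathway variants). This is a minimal illustration over an invented toy log, not the authors' implementation, which works on hospital-information-system data; all case IDs and activity names below are hypothetical.

```python
from collections import Counter

def pathway_variants(event_log):
    """Group cases by their ordered activity sequence (one 'variant' per
    distinct sequence) and count how often each variant occurs.

    event_log: list of (case_id, timestamp, activity) tuples.
    """
    cases = {}
    for case_id, _ts, activity in sorted(event_log, key=lambda e: (e[0], e[1])):
        cases.setdefault(case_id, []).append(activity)
    return Counter(tuple(seq) for seq in cases.values())

# Toy log: three referral-to-treatment cases, two following the same pathway.
log = [
    ("c1", 1, "GP referral"), ("c1", 2, "Diagnosis"), ("c1", 3, "Surgery"),
    ("c2", 1, "GP referral"), ("c2", 2, "Diagnosis"), ("c2", 3, "Surgery"),
    ("c3", 1, "GP referral"), ("c3", 2, "Diagnosis"), ("c3", 3, "Radiotherapy"),
]
variants = pathway_variants(log)
```

Splitting the log by first treatment, age band, or diagnosis year before calling `pathway_variants` yields the per-criterion comparisons the study describes.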

2021 ◽  
Vol 11 (9) ◽  
pp. 4121
Author(s):  
Hana Tomaskova ◽  
Erfan Babaee Tirkolaee

The purpose of this article was to demonstrate the difference between a pandemic plan’s textual prescription and its effective processing using graphical notation. Before creating a case study of the Business Process Model and Notation (BPMN) of the Czech Republic’s pandemic plan, we conducted a systematic review of the process approach in pandemic planning and a document analysis of relevant public documents. The authors emphasized the opacity of hundreds of pages of text records in an explanatory case study and demonstrated the effectiveness of the process approach in reengineering and improving the response to such a critical situation. A potential extension to the automation and involvement of SMART technologies or process optimization through process mining techniques is presented as a future research topic.


2021 ◽  
Vol 4 ◽  
Author(s):  
Rashid Zaman ◽  
Marwan Hassani ◽  
Boudewijn F. Van Dongen

In the context of process mining, event logs consist of process instances called cases. Conformance checking is a process mining task that inspects whether a log is conformant with an existing process model, quantifying the conformance in an explainable manner. Online conformance checking processes streaming event logs to provide precise insights into running cases and to mitigate non-conformance, if any, in a timely manner. State-of-the-art online conformance checking approaches bound memory either by delimiting the storage of events per case or by limiting the number of cases to a specific window width. The former technique still requires unbounded memory because the number of cases to store is unlimited, while the latter forgets running, not yet concluded, cases to fit the limited window width. Consequently, the processing system may later encounter events that represent some intermediate activity of the process model but whose relevant case has been forgotten; we refer to these as orphan events. The naïve ways to cope with an orphan event are either to neglect its case for conformance checking or to treat it as an altogether new case, but both can yield misleading process insights, for instance overestimated non-conformance. To bound memory yet effectively incorporate orphan events into processing, we propose imputing the missing prefix of such orphan events. Our approach uses the existing process model to impute the missing prefix, and leverages case storage management to increase the accuracy of the prefix prediction. We propose a systematic forgetting mechanism that distinguishes and forgets those cases whose prefixes can be reliably regenerated upon receipt of a future orphan event.
We evaluate the efficacy of the proposed approach through multiple experiments with synthetic and three real event logs in a simulated streaming setting. Our approach yields considerably more realistic conformance statistics than the state of the art while requiring the same storage.
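A minimal sketch of the missing-prefix imputation idea, under the simplest possible assumption that the process model is a strict sequence of activities: when an event arrives for an unknown (forgotten) case, its prefix is regenerated from the model instead of treating the event as the start of a new case. Function and activity names are hypothetical; the paper's approach handles general models and adds systematic storage management.

```python
def handle_event(case_store, model_sequence, case_id, activity):
    """Streaming step: append `activity` to the case's trace. If the case is
    unknown — an 'orphan event' for a forgotten case — impute the missing
    prefix from the process model (here: everything that precedes the
    activity in a purely sequential model)."""
    if case_id not in case_store:
        idx = model_sequence.index(activity)
        case_store[case_id] = list(model_sequence[:idx])  # imputed prefix
    case_store[case_id].append(activity)
    return case_store[case_id]

store = {}
model = ["GP referral", "Diagnosis", "MDT review", "Surgery"]
# An orphan event arrives mid-process for a case we never saw:
prefix = handle_event(store, model, "c7", "MDT review")
```

Here conformance statistics computed over `store` count case `c7` as conformant so far, rather than as a wildly deviating case that "starts" at `MDT review`.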


Author(s):  
Bruna Brandão ◽  
Flávia Santoro ◽  
Leonardo Azevedo

In business process models, elements can be scattered (repeated) across different processes, making it difficult to handle changes, analyze processes for improvement, or check crosscutting impacts. These scattered elements are called aspects. Similar to the aspect-oriented paradigm in programming languages, aspect handling in BPM aims to modularize the crosscutting concerns spread across the models. This modularization facilitates process management (reuse, maintenance, and understanding). Current approaches identify aspects manually, which introduces subjectivity and lacks systematization. This paper proposes a method to automatically identify aspects in business processes from their event logs. The method is based on mining techniques and aims to eliminate the subjectivity of identification performed by specialists. Initial results from a preliminary evaluation showed evidence that the method correctly identified the aspects present in the process model.
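One crude way to make the crosscutting-concern idea concrete: flag an activity as a candidate aspect when it occurs in the logs of several different processes. This sketch is not the paper's mining method, only an illustration of what "scattered elements" means; process and activity names are invented.

```python
def find_candidate_aspects(process_logs, min_processes=2):
    """Treat an activity as a candidate 'aspect' (crosscutting concern) when
    it appears in the logs of at least `min_processes` distinct processes.

    process_logs: dict mapping process name -> list of traces (activity lists).
    """
    occurrence = {}
    for proc, traces in process_logs.items():
        for trace in traces:
            for act in set(trace):
                occurrence.setdefault(act, set()).add(proc)
    return {act for act, procs in occurrence.items() if len(procs) >= min_processes}

logs = {
    "Order handling": [["Authenticate", "Place order", "Write audit log"]],
    "Shipping":       [["Authenticate", "Dispatch", "Write audit log"]],
}
aspects = find_candidate_aspects(logs)
```

Authentication and audit logging recur across both processes, so they surface as candidates for modularization.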


2021 ◽  
Vol 10 (9) ◽  
pp. 144-147
Author(s):  
Huiling LI ◽  
Xuan SU ◽  
Shuaipeng ZHANG

Massive amounts of business process event logs are collected and stored by modern information systems. Model discovery aims to discover a process model from such event logs; however, most existing approaches still suffer from low efficiency when facing large-scale event logs. Event log sampling techniques provide an effective scheme to improve the efficiency of process discovery, but existing techniques still cannot guarantee the quality of the mined model. Therefore, a sampling approach based on a set-coverage algorithm is proposed. The proposed sampling approach has been implemented in the open-source process mining toolkit ProM. Furthermore, experiments on a real event log data set, covering conformance checking and time-performance analysis, show that the proposed sampling approach can greatly improve the efficiency of log sampling while preserving the quality of the mined model.
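The set-coverage principle can be illustrated with a greedy sketch: keep selecting the trace that covers the most still-uncovered directly-follows pairs until every pair observed in the full log is covered, so a discovery algorithm sees the log's behaviour from far fewer traces. The paper's algorithm lives in ProM; this standalone sketch only conveys the idea.

```python
def set_cover_sample(traces):
    """Greedy set-cover sampling: return a subset of traces whose
    directly-follows pairs cover every pair observed in the full log."""
    def df_pairs(trace):
        # Directly-follows relation: consecutive activity pairs in a trace.
        return {(a, b) for a, b in zip(trace, trace[1:])}

    universe = set().union(*(df_pairs(t) for t in traces))
    uncovered, sample = set(universe), []
    while uncovered:
        best = max(traces, key=lambda t: len(df_pairs(t) & uncovered))
        sample.append(best)
        uncovered -= df_pairs(best)
    return sample

traces = [["a", "b", "c"], ["a", "b", "c"], ["a", "c"]]
sample = set_cover_sample(traces)
```

Because behaviour, not frequency, is what most discovery algorithms need, the two selected traces here stand in for the whole log.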


2018 ◽  
Vol 27 (02) ◽  
pp. 1850002
Author(s):  
Sung-Hyun Sim ◽  
Hyerim Bae ◽  
Yulim Choi ◽  
Ling Liu

In Big data and IoT environments, process execution generates huge volumes of data, some of which is subsequently obtained from sensors. The main issue in such settings has been the necessity of analyzing data in order to suggest process enhancements. In this regard, evaluating the conformance of a process model to the execution log is of great importance. For this purpose, previous process mining approaches have advocated conformance checking by fitness measure, which uses token replay and node-arc relations based on Petri nets. However, fitness measures so far have not considered statistical significance; they offer only a numeric ratio. We herein propose a statistical verification method based on the Kolmogorov–Smirnov (K–S) test to judge whether two different log datasets follow the same process model. Our method can easily be extended to determine whether process execution actually follows a process model, by playing out the model and generating event log data from it. Additionally, to address the trade-off between model abstraction and process conformance, we propose the new concepts of Confidence Interval of Abstraction Value (CIAV) and Maximum Confidence Abstraction Value (MCAV). We showed that our method can be applied to any process mining algorithm (e.g. heuristic mining, fuzzy mining) that has parameters related to model abstraction. We expect that our method will come to be widely utilized in many applications dealing with business process enhancement involving process-model and execution-log analyses.
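The core of the two-sample K–S test is the maximum gap between two empirical CDFs, which takes only a few lines of standard-library Python. The sketch below computes the K–S statistic for, say, per-trace fitness values or durations drawn from two logs; it omits the paper's CIAV/MCAV constructions and the critical-value comparison that turns the statistic into an accept/reject decision.

```python
import bisect

def ks_statistic(sample1, sample2):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute gap
    between the two empirical CDFs, evaluated at every observed value."""
    s1, s2 = sorted(sample1), sorted(sample2)

    def ecdf(sorted_sample, x):
        # Fraction of observations <= x.
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    points = sorted(set(s1) | set(s2))
    return max(abs(ecdf(s1, x) - ecdf(s2, x)) for x in points)
```

In a full test one would compare this statistic against the critical value for the chosen significance level (a function of the two sample sizes) to decide whether the two logs plausibly follow the same model.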


Author(s):  
Pavlos Delias ◽  
Kleanthi Lakiotaki

Automated discovery of a process model is a major task of Process Mining, which aims to produce a process model from an event log without any a priori information. However, when an event log contains a large number of distinct activities, process discovery can be genuinely challenging. The goal of this article is to facilitate process discovery in cases where a process is expected to contain a large set of unique activities. To this end, the article proposes a clustering approach that recommends horizontal boundaries for the process. The proposed approach ultimately partitions the event log so that human interpretation efforts are decomposed. In addition, it makes automated discovery more efficient as well as more effective by simultaneously considering two quality criteria: the informativeness and the robustness of the derived groups of activities. The authors conducted several experiments to test the behavior of the algorithm under different settings and to compare it against other techniques. Finally, they provide a set of recommendations that may help process analysts during the process discovery endeavor.
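Horizontal partitioning can be sketched with a much simpler stand-in for the article's clustering: group activities that appear together in the same traces (connected components of the adjacency graph), then discover a sub-model per group on its sub-log. This ignores the informativeness and robustness criteria the article actually optimizes; it only shows what a horizontal boundary is.

```python
def horizontal_partition(traces):
    """Group activities into candidate horizontal partitions: activities
    appearing in the same trace end up in one group (adjacent pairs are
    chained via union-find), so each group can be discovered separately."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            union(a, b)

    groups = {}
    for act in parent:
        groups.setdefault(find(act), set()).add(act)
    return sorted(groups.values(), key=lambda s: min(s))

parts = horizontal_partition([["a", "b"], ["b", "c"], ["x", "y"]])
```

With hundreds of distinct activities, each resulting group yields a smaller, more interpretable discovered model than one monolithic spaghetti model.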


2020 ◽  
Vol 10 (4) ◽  
pp. 1493 ◽  
Author(s):  
Kwanghoon Pio Kim

In this paper, we propose an integrated approach for seamlessly and effectively providing mining and analysis functionality to support the redesign of very large-scale and massively parallel process models discovered from their enactment event logs. The integrated approach aims at analyzing not only their structural complexity and correctness but also their animation-based behavioral properness, and is concretized in a sophisticated analyzer. The core function of the analyzer is to discover a very large-scale and massively parallel process model from a process log dataset and to validate the structural complexity and the syntactical and behavioral properness of the discovered model. This paper gives a detailed description of the system architecture and its functional integration of process mining and process analysis. More precisely, we devise a series of functional algorithms for extracting the structural constructs and for visualizing the behavioral properness of the discovered very large-scale and massively parallel process models. As experimental validation, we apply the proposed approach and analyzer to a couple of process enactment event log datasets available on the website of the 4TU.Centre for Research Data.


2020 ◽  
pp. 793-821 ◽  
Author(s):  
Dulce Domingos ◽  
Ana Respício ◽  
Ricardo Martinho

BPMN (Business Process Model and Notation) has become the de-facto standard business process modelling language. Healthcare processes have been increasingly incorporating participants other than humans, including Internet of Things (IoT) physical devices such as biomedical sensors or patient electronic tags. Due to their critical requirements, IoT-aware healthcare processes justify the relevance of Quality of Service aspects, such as reliability, availability, and cost, among others. This chapter focuses on reliability and proposes using the Stochastic Workflow Reduction (SWR) method to calculate the reliability of IoT-aware BPMN healthcare processes. In addition, the chapter proposes a BPMN language extension to provide processes with reliability information. This way, at design time, modellers can analyse alternatives and, at run time, reliability information can be used to select participants, execute services, or monitor process executions. The proposal is applied to an Ambient Assisted Living system use case, a rich example of an IoT-aware healthcare process.
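Reduction-based reliability methods like SWR work by collapsing well-structured blocks of the process graph into single nodes with an aggregate reliability. The sketch below shows two standard block formulas from the workflow-reliability literature (sequence: all tasks must succeed; exclusive choice: probability-weighted branches); the chapter's SWR method covers further block types, and the exact formulas here are an assumption, not quoted from it.

```python
def sequence_reliability(reliabilities):
    """Sequence block (and likewise an AND/parallel block): every task must
    succeed, so the block reliability is the product of task reliabilities."""
    out = 1.0
    for r in reliabilities:
        out *= r
    return out

def xor_reliability(branches):
    """Exclusive-choice (XOR) block: exactly one branch executes, so the
    block reliability is the probability-weighted sum of branch reliabilities.

    branches: list of (branch_probability, branch_reliability) pairs,
    with probabilities summing to 1."""
    return sum(p * r for p, r in branches)
```

Applying these rules repeatedly, from the innermost blocks outward, reduces the whole process to a single reliability figure that a modeller can compare across design alternatives.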


2011 ◽  
Vol 121-126 ◽  
pp. 4265-4268
Author(s):  
Qiang Du ◽  
Stephen Ledbetter ◽  
Rui Yang

The cladding industry has experienced rapid development over the past two decades. Clients' requirements and architects' ambitions have made building cladding more complex than ever before. This technical complexity calls for more comprehensive managerial techniques, especially information management. This paper proposes a web-based information management system, in which a software package was developed and hardware incorporated. After the framework is established, a main challenge of this mechanism, information sources, is discussed.

