An AI-Based Automated Continuous Compliance Awareness Framework (CoCAF) for Procurement Auditing

2020 ◽  
Vol 4 (3) ◽  
pp. 23
Author(s):  
Ke Wang ◽  
Michael Zipperle ◽  
Marius Becherer ◽  
Florian Gottwalt ◽  
Yu Zhang

Compliance management for procurement internal auditing has been a major challenge for public sectors due to the tedium of manual audits and large-scale paper-based repositories. Many practical issues and potential risks arise during the manual audit process, including low efficiency, accuracy, and accountability, high expense, and its laborious, time-consuming nature. To alleviate these problems, this paper proposes a continuous compliance awareness framework (CoCAF), an AI-based automated approach to procurement compliance auditing. CoCAF automatically audits an organisation's purchases in a timely manner by intelligently interpreting compliance policies and extracting the required information from purchasing evidence using text-extraction technologies, automatic processing methods, and a report rating system. Based on the auditing results, CoCAF provides a continuously updated report presenting the compliance level of the procurement with statistics and diagrams. CoCAF is evaluated on a real-life procurement data set; results show that it can process 500 pieces of purchasing evidence within five minutes with 95.6% auditing accuracy, demonstrating its high efficiency, quality, and assurance level in procurement internal audit.
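The abstract does not detail CoCAF's internal policy representation, so the following is only a minimal sketch of the kind of rule-based compliance check such a pipeline could run once the relevant fields have been extracted from purchasing evidence; the rule names, thresholds, and invoice-number format are invented for illustration.

```python
import re

# Hypothetical compliance rules; CoCAF's actual policy representation is not
# specified in the abstract, so names and thresholds here are assumptions.
RULES = [
    ("three_quotes_required", lambda ev: ev["amount"] < 5000 or ev["quotes"] >= 3),
    ("approver_present",      lambda ev: bool(ev["approver"])),
    ("invoice_number_format", lambda ev: re.fullmatch(r"INV-\d{6}", ev["invoice_no"]) is not None),
]

def audit(evidence):
    """Return (compliance_rate, violated rule names) for one purchase record."""
    violations = [name for name, check in RULES if not check(evidence)]
    rate = 1 - len(violations) / len(RULES)
    return rate, violations

# Fields as they might look after text extraction from a purchase record.
purchase = {"amount": 8200, "quotes": 2, "approver": "J. Doe", "invoice_no": "INV-004217"}
rate, violations = audit(purchase)
print(rate, violations)  # one rule violated: only 2 quotes for an 8200 purchase
```

Per-purchase results like these could then be aggregated into the continuously updated compliance report the framework describes.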

2014 ◽  
Vol 34 (2) ◽  
pp. 237-264 ◽  
Author(s):  
Mélanie Roussy

SUMMARY This paper proposes a "micro-level" analysis of the way internal auditors experience role conflicts in their day-to-day practice and how they perceive, manage, and resolve them. From this perspective, I analyze real-life experiences described by 42 interviewed internal auditors. Using a theoretical lens specifically designed for this purpose, I highlight the complexity and magnitude of the role conflicts they experience during the internal audit process, along with the strategic and dynamic coping processes they mobilize, while remaining concretely grounded in their specific context. This study thus makes an original contribution to knowledge on internal auditing, concluding that internal auditors tend to lack independence and that audit committee members often exercise disturbingly weak power over the internal audit function compared with top managers. Accordingly, this paper points to the difficulty of applying an idealized conception of independence and purist governance principles in practice. It also questions the appropriateness of considering internal auditing a meaningful independent assurance device within the corporate governance "mosaic" (Cohen, Krishnamoorthy, and Wright 2002).


2021 ◽  
Vol 13 (46) ◽  
pp. 81-103
Author(s):  
Ahmed Ali Qassem Mohssen

The study aimed to measure the impact of applying governance standards on evaluating the quality of internal audit in Yemeni private universities. To achieve this, the researcher followed a descriptive, analytical approach and employed a questionnaire to collect data from a sample of 68 participants. Relative analysis showed governance at an average level (64.9%), with slight variation in the level of implementation: transparency and disclosure was the most prevalent dimension, followed by accountability and independence. Adherence to internal auditing standards was at a medium level (relative weight 65%). Commitment to the audit dimensions was as follows: managing the internal audit activities (67%), communicating the results (66%), assessing risk and control management (64%), and planning and implementing the audit process (62%). The study also found that the combined governance standards (transparency and disclosure, accountability, independence) increase the quality of internal audit in Yemeni private universities; the dimensions of governance standards most affecting internal audit quality are independence (80.3%), accountability (71.7%), and disclosure and transparency (63.7%). In light of this, the study recommended adopting governance standards as an integrated approach to achieving quality performance in private Yemeni universities, and spreading a culture of governance among university leaders and staff through training courses, seminars, and conferences so that it is accepted and absorbed. In addition, governance should be included in related academic courses in the administrative and accounting sciences.
Keywords: governance standards, internal audit quality assessment, private Yemeni universities.


2021 ◽  
Vol 2050 (1) ◽  
pp. 012016
Author(s):  
Yong Wen

Abstract The development of digital industrialization has promoted the continuous emergence of new industries, formats, and models, and has also driven the transformation of the traditional internal audit model toward digital and intelligent auditing. Big data, cloud computing, XBRL, artificial intelligence, and other digital technologies are important means of achieving full audit coverage. Big data audit has become a hot topic in the current audit field; the relevant literature mainly focuses on the impact of big data on traditional audit concepts and methods, the impact and risks of big data technology on informatization audits, and how the auditing community should respond. However, research on integrating big data technology and XBRL technology into continuous internal auditing is relatively rare. Building on an introduction of three XBRL continuous internal audit models, this article analyzes the continuous internal audit process of an XBRL information system and discusses the application of big data technology in XBRL continuous internal audit.
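As a rough illustration of what a continuous internal audit check over XBRL-style facts involves, the sketch below parses a tiny XML instance and re-runs an assertion-style rule. The element names are invented, not taken from a real XBRL taxonomy, and real continuous auditing would subscribe to a stream of filings rather than a hard-coded string.

```python
import xml.etree.ElementTree as ET

# Illustrative XBRL-like instance; tag names are assumptions, not a real taxonomy.
INSTANCE = """
<report>
  <Assets>1200</Assets>
  <Liabilities>700</Liabilities>
  <Equity>500</Equity>
</report>
"""

def extract_facts(xml_text):
    """Flatten the instance into a {concept: value} dictionary."""
    root = ET.fromstring(xml_text)
    return {child.tag: float(child.text) for child in root}

def continuous_audit(facts):
    """Assertion-style checks, re-run each time new facts arrive."""
    exceptions = []
    # Accounting identity check: Assets = Liabilities + Equity.
    if abs(facts["Assets"] - (facts["Liabilities"] + facts["Equity"])) > 1e-6:
        exceptions.append("balance-sheet identity violated")
    return exceptions

print(continuous_audit(extract_facts(INSTANCE)))  # [] — no exceptions raised
```

In a big-data setting, rules like this would run over streamed filings, with exceptions fed to auditors as they occur rather than at period end.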


2019 ◽  
pp. 109442811987745
Author(s):  
Hans Tierens ◽  
Nicky Dries ◽  
Mike Smet ◽  
Luc Sels

Multilevel paradigms have permeated organizational research in recent years, greatly advancing our understanding of organizational behavior and management decisions. Despite the advancements made in multilevel modeling, taking into account complex hierarchical structures in data remains challenging. This is particularly the case for models used for predicting the occurrence and timing of events and decisions—often referred to as survival models. In this study, the authors construct a multilevel survival model that takes into account subjects being nested in multiple environments—known as a multiple-membership structure. Through this article, the authors provide a step-by-step guide to building a multiple-membership survival model, illustrating each step with an application on a real-life, large-scale, archival data set. Easy-to-use R code is provided for each model-building step. The article concludes with an illustration of potential applications of the model to answer alternative research questions in the organizational behavior and management fields.
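The paper supplies R code for the full model; as a dependency-free illustration of the data structure it builds on, the sketch below shows a multiple-membership term, where each subject belongs to several higher-level units with weights summing to 1. The subjects, units, weights, and effect values are made up for illustration.

```python
# Multiple-membership structure: each subject belongs to several higher-level
# units, with membership weights summing to 1 per subject.
memberships = {
    "worker_1": {"firm_A": 0.5, "firm_B": 0.5},    # equal split across two firms
    "worker_2": {"firm_A": 0.25, "firm_C": 0.75},  # e.g. time-weighted membership
    "worker_3": {"firm_B": 1.0},                   # ordinary single membership
}

def unit_effect(worker, effects):
    """Weighted sum of unit-level effects, as in a multiple-membership random
    term: sum_j w_ij * u_j."""
    return sum(w * effects[unit] for unit, w in memberships[worker].items())

# Hypothetical unit-level effects (e.g. log-hazard contributions in a survival model).
effects = {"firm_A": 0.2, "firm_B": -0.1, "firm_C": 0.05}

for worker in memberships:
    assert abs(sum(memberships[worker].values()) - 1.0) < 1e-9  # weights sum to 1
print(unit_effect("worker_2", effects))  # 0.25*0.2 + 0.75*0.05 = 0.0875
```

In the survival setting, this weighted term enters the hazard for each subject, letting event timing depend on every environment the subject is nested in.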


Energies ◽  
2020 ◽  
Vol 13 (20) ◽  
pp. 5330
Author(s):  
Aleksandar Dimovski ◽  
Matteo Moncecchi ◽  
Davide Falabretti ◽  
Marco Merlo

The goal of the paper is to develop an online forecasting procedure to be adopted within the H2020 InteGRIDy project, whose main objective is to use the photovoltaic (PV) forecast for optimizing the configuration of a distribution network (DN). Real-time measurements from nine photovoltaic plants are saved in a database, together with numerical weather predictions supplied by a commercial weather forecasting service. Using several error metrics as performance indices, as well as a historical data set for one of the plants on the DN, a preliminary analysis investigates multiple statistical methods with the objective of finding the most suitable one in terms of accuracy and computational effort. Hourly forecasts are produced every 6 h, for a horizon of 72 h. Having found the random forest method to be the most suitable, further hyper-parameter tuning of the algorithm was performed to improve performance. Optimal results with respect to normalized root mean square error (NRMSE) were found when training the algorithm on solar irradiation and a time vector, with a dataset consisting of 21 days. It was concluded that adding more features does not improve accuracy when adopting relatively small training sets. Furthermore, the error was not significantly affected by the forecast horizon: the 72-h forecast showed an error increment of slightly above 2% compared with the 6-h forecast. Thanks to the InteGRIDy project, the proposed algorithms were tested in a large-scale real-life pilot, allowing validation of the mathematical approach while also accounting for problems related to faults in the telecommunication grids and errors in the data exchange and storage procedures. Such an approach provides a proper quantification of the performance in a real-life scenario.
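The NRMSE metric the paper optimizes can be sketched as follows. Note the abstract does not state which normalization is used; the version below divides the RMSE by plant capacity, a common choice in PV forecasting, and the power values and 10 kW capacity are invented for illustration.

```python
import math

def nrmse(actual, predicted, capacity):
    """RMSE normalized by plant capacity (one common convention; the paper
    does not specify its normalization)."""
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    return math.sqrt(mse) / capacity

# Illustrative hourly PV power values (kW) for a hypothetical 10 kW plant.
actual    = [0.0, 1.2, 4.5, 7.8, 6.1, 2.0]
predicted = [0.0, 1.0, 5.0, 7.0, 6.5, 1.5]
print(round(nrmse(actual, predicted, capacity=10.0), 4))  # ≈ 0.0473
```

Because the metric is dimensionless, it allows the 6-h and 72-h horizons, and plants of different sizes, to be compared on the same scale, which is what makes the paper's ~2% horizon comparison meaningful.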


2009 ◽  
Vol 19 (03) ◽  
pp. 383-397 ◽  
Author(s):  
ANNE BENOIT ◽  
YVES ROBERT ◽  
ERIC THIERRY

In this paper, we explore the problem of mapping linear chain applications onto large-scale heterogeneous platforms. A series of data sets enter the input stage and progress from stage to stage until the final result is computed. An important optimization criterion that should be considered in such a framework is the latency, or makespan, which measures the response time of the system in order to process one single data set entirely. For such applications, which are representative of a broad class of real-life applications, we can consider one-to-one mappings, in which each stage is mapped onto a single processor. However, in order to reduce the communication cost, it seems natural to group stages into intervals. The interval mapping problem can be solved in a straightforward way if the platform has homogeneous communications: the whole chain is grouped into a single interval, which in turn is mapped onto the fastest processor. But the problem becomes harder when considering a fully heterogeneous platform. Indeed, we prove the NP-completeness of this problem. Furthermore, we prove that neither the interval mapping problem nor the similar one-to-one mapping problem can be approximated in polynomial time by any constant factor (unless P=NP).
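Since the fully heterogeneous problem is NP-complete, exhaustive search is only feasible at toy scale, but it illustrates the interval-mapping structure. The sketch below enumerates all ways of cutting a 4-stage chain into intervals mapped onto distinct processors; the workloads, speeds, and flat per-boundary communication cost are made up for illustration.

```python
from itertools import combinations, permutations

# Toy instance: a 4-stage chain on 3 processors. The cost model (one flat
# communication cost per interval boundary) is a simplifying assumption.
workloads = [4.0, 2.0, 6.0, 3.0]   # work of each pipeline stage
speeds    = [1.0, 2.0, 3.0]        # processor speeds
comm_cost = 1.0                    # homogeneous cost per interval boundary

def latency(cuts, procs):
    """Latency of splitting the chain at `cuts` into intervals mapped
    one-to-one onto distinct processors `procs`."""
    bounds = [0, *cuts, len(workloads)]
    total = comm_cost * len(cuts)  # one transfer per boundary between intervals
    for (lo, hi), p in zip(zip(bounds, bounds[1:]), procs):
        total += sum(workloads[lo:hi]) / speeds[p]
    return total

n, m = len(workloads), len(speeds)
best = min(
    latency(cuts, procs)
    for k in range(1, m + 1)                       # number of intervals
    for cuts in combinations(range(1, n), k - 1)   # where to cut the chain
    for procs in permutations(range(m), k)         # distinct processors
)
baseline = sum(workloads) / max(speeds)  # whole chain on the fastest processor
print(best, baseline)  # with homogeneous communications the baseline is optimal
```

On this instance the exhaustive optimum equals the single-interval baseline, matching the paper's observation that with homogeneous communications the whole chain should be grouped onto the fastest processor.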


2020 ◽  
Vol 16 (9) ◽  
pp. 155014772095829
Author(s):  
Changsong Yang ◽  
Yueling Liu ◽  
Xiaoling Tao

With the rapid development of cloud computing, an increasing number of data owners are willing to employ cloud storage services. In cloud storage, resource-constrained data owners can outsource their large-scale data to the remote cloud server, greatly reducing local storage overhead and computation cost. Despite its many attractive advantages, cloud storage inevitably suffers from new security challenges due to the separation of outsourced data ownership from its management, such as secure data insertion and deletion. The cloud server may maliciously retain some data copies and return a wrong deletion result to cheat the data owner. Moreover, it is very difficult for the data owner to securely insert new data blocks into the outsourced data set. To solve these two problems, we adopt the primitive of the Merkle sum hash tree to design a novel publicly verifiable cloud data deletion scheme, which simultaneously achieves provable data storage and dynamic data insertion. An interesting property of our proposed scheme is that it satisfies private and public verifiability without requiring any trusted third party. Furthermore, we formally prove that the proposed scheme not only achieves the desired security properties but also realizes high efficiency and practicality.
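The Merkle sum hash tree primitive the scheme builds on can be sketched minimally: each node carries a (hash, sum) pair, so the root commits both to the block contents and to how many blocks exist. This is only the primitive; the padding, deletion proofs, and signatures of the actual scheme are omitted, and the per-block value of 1 is one possible convention.

```python
import hashlib

# Minimal Merkle sum hash tree: each node is a (hash, sum) pair.
def leaf(data: bytes, value: int):
    return hashlib.sha256(data).digest(), value

def parent(left, right):
    # The parent hash binds both child hashes and the combined sum.
    s = left[1] + right[1]
    h = hashlib.sha256(left[0] + right[0] + str(s).encode()).digest()
    return h, s

def build_root(leaves):
    """Fold a power-of-two list of leaves up to the root (hash, sum)."""
    level = leaves
    while len(level) > 1:
        level = [parent(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
leaves = [leaf(b, 1) for b in blocks]   # value 1 per block, so the sum counts blocks
root_hash, root_sum = build_root(leaves)
assert root_sum == 4                     # silently dropping a block changes the sum
# Tampering with any block changes the root hash:
tampered = [leaf(b"block-X", 1)] + leaves[1:]
assert build_root(tampered)[0] != root_hash
print(root_sum)  # 4
```

The sum component is what lets a verifier detect a server that retains or drops copies: any change in block contents changes the root hash, and any change in block count changes the root sum.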


Mathematics ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. 1106
Author(s):  
S. Bhaskaran ◽  
Raja Marappan ◽  
B. Santhi

Nowadays, because of the tremendous amount of information that humans and machines produce every day, it has become increasingly hard to choose the most relevant content across a broad range of choices. This research designs two intelligent optimization methods using artificial intelligence and machine learning for real-life applications, aimed at improving the generation of recommendations. In the first method, modified cluster-based intelligent collaborative filtering is applied with sequential clustering, operating on the values of the dataset, the user's neighborhood set, and the size of the recommendation list. This strategy splits the given data set into subsets, or clusters, and extracts a recommendation list from each group to construct a better overall recommendation list. In the second method, a feature-based customized recommender works in training and recommendation steps by applying a split-and-conquer strategy: the problem datasets are clustered into a minimum number of clusters, and the better recommendation list is created across all the clusters. This strategy automatically tunes the parameter λ, which plays the role of supervised learning in generating a better recommendation list for large datasets. For several large-scale datasets, the quality of the proposed recommenders improves on some well-known existing methods. The proposed methods work well when λ = 0.5 with recommendation-list size |L| = 30 and neighborhood size |S| < 30; for large |S|, the difference in root mean square error between the proposed methods becomes smaller.
In simulations on large-scale datasets with varying user sizes, the experimental results show that when the user size exceeds 500, better metric values are obtained and proposed method 2 outperforms proposed method 1. The differences arise because the computational structure of the methods depends on the number of user attributes, λ, the number of bipartite graph edges, and |L|. For the large-scale Book-Crossing dataset with size 3000, the proposed methods achieve (Precision, Recall) values of (0.0004, 0.0042) and (0.0004, 0.0046), respectively. The average computational time of the proposed methods is under 10 seconds for the large-scale datasets, yielding better performance than well-known existing methods.
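The core cluster-based collaborative filtering idea can be sketched without the paper's refinements (sequential clustering and automatic tuning of λ are omitted): group users by similarity, then recommend the cluster's top unseen items. The toy ratings and the precomputed two-cluster split below are invented for illustration.

```python
# Dependency-free sketch of cluster-based collaborative filtering.
# Toy ratings; clusters are assumed precomputed (the paper's methods would
# derive them from the data).
ratings = {
    "u1": {"a": 5, "b": 4, "c": 1},
    "u2": {"a": 4, "b": 5, "d": 2},
    "u3": {"c": 5, "d": 4, "e": 5},
    "u4": {"c": 4, "e": 5, "a": 1},
}
clusters = [{"u1", "u2"}, {"u3", "u4"}]

def recommend(user, k=2):
    """Top-k items from the user's cluster, ranked by mean peer rating,
    excluding items the user has already rated."""
    cluster = next(c for c in clusters if user in c)
    scores = {}
    for peer in cluster:
        if peer == user:
            continue
        for item, r in ratings[peer].items():
            scores.setdefault(item, []).append(r)
    seen = ratings[user]
    ranked = sorted(
        ((sum(v) / len(v), item) for item, v in scores.items() if item not in seen),
        reverse=True,
    )
    return [item for _, item in ranked[:k]]

print(recommend("u1"))  # ['d'] — the only item u2 rated that u1 has not seen
```

Restricting the candidate pool to the user's cluster is what keeps the per-user cost low enough for the sub-10-second runtimes reported on large datasets.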


2002 ◽  
Vol 14 (5) ◽  
pp. 1105-1114 ◽  
Author(s):  
Ronan Collobert ◽  
Samy Bengio ◽  
Yoshua Bengio

Support vector machines (SVMs) are the state-of-the-art models for many classification problems, but they suffer from the complexity of their training algorithm, which is at least quadratic with respect to the number of examples. Hence, it is hopeless to try to solve real-life problems having more than a few hundred thousand examples with SVMs. This article proposes a new mixture of SVMs that can be easily implemented in parallel and where each SVM is trained on a small subset of the whole data set. Experiments on a large benchmark data set (Forest) yielded significant time improvement (time complexity appears empirically to locally grow linearly with the number of examples). In addition, and surprisingly, a significant improvement in generalization was observed.
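The divide-and-conquer structure of the mixture can be sketched as follows. To keep the sketch dependency-free, a nearest-centroid classifier stands in for each SVM expert, and the gating network the paper learns is replaced by a plain majority vote; the synthetic two-blob data is invented for illustration.

```python
import random
from statistics import mean

# Sketch of the mixture-of-experts idea: train one expert per data subset,
# then combine expert votes. Nearest-centroid classifiers stand in for the
# SVM experts; the paper's learned gating network is replaced by majority vote.
random.seed(0)

def make_data(n, center, label):
    return [((center[0] + random.gauss(0, 0.3), center[1] + random.gauss(0, 0.3)), label)
            for _ in range(n)]

data = make_data(60, (0, 0), -1) + make_data(60, (3, 3), +1)
random.shuffle(data)
subsets = [data[i::3] for i in range(3)]   # 3 experts, each on a third of the data

def train_expert(subset):
    """Per-class centroids of one subset (stand-in for training one SVM)."""
    return {lab: (mean(x[0] for x, y in subset if y == lab),
                  mean(x[1] for x, y in subset if y == lab))
            for lab in (-1, +1)}

def predict(experts, x):
    votes = 0
    for cents in experts:
        d = {lab: (x[0] - c[0]) ** 2 + (x[1] - c[1]) ** 2 for lab, c in cents.items()}
        votes += min(d, key=d.get)          # each expert votes -1 or +1
    return +1 if votes > 0 else -1

experts = [train_expert(s) for s in subsets]  # the trivially parallelizable step
print(predict(experts, (2.8, 3.1)), predict(experts, (0.1, -0.2)))  # 1 -1
```

The key point carried over from the paper is that each expert only ever sees its own subset, so training cost scales with the subset size and the experts can be trained in parallel.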


Author(s):  
Rania Noureldin Osman ◽  
Mohamed Al-Moutaz Al-Mougtaba

This research addressed electronic accounting information systems and their role in developing the quality of computer-based internal auditing. The main questions of this research are: Is there a relationship between the application of an electronic accounting system and the quality of internal audit performance? Does the use of the computer in the internal audit process affect the quality of accounting information? The research aimed to identify electronic accounting information systems, determine their impact on internal audit, and evaluate the extent of their use by auditors. To achieve the study's objective, the two researchers formulated scientific hypotheses and used a questionnaire as a data-collection tool, reflecting the views of the research sample, which was analyzed using SPSS. The analysis confirmed the validity of the hypotheses linking the variables: electronic accounting information systems have a positive impact on the development of the quality of internal auditing.

