A case study of an empirical approach to component requirements in developing a plagiarism detection tool

2018 ◽
Vol 10 (0) ◽
pp. 1-7
Author(s):
Noriko Hanakawa ◽
Mike Barker


Author(s):
Huriye Armagan DOGAN
Memento value is one of the most essential characteristics of heritage, facilitating the association between the environment and its users: by connecting structures with space and time, it helps people to identify their surroundings. However, the emergence of the Modern Movement in the architectural sphere disrupted the reflection of the memory and symbols which serve to root a society in its language. Furthermore, it generated an approach that stood against the practice of referring to the past and tradition, which left the built environment homogeneous and deprived of memento value. This paper focuses on the impact of memento value on the perception and evaluation of cultural heritage. It further investigates the notions perceived to influence the appraisal of cultural heritage by applying them, with an empirical approach, to the Kaunas dialect of the Modern Movement.


2021 ◽  
Vol 13 (1) ◽  
pp. 25-32
Author(s):  
Gkrilias Panagiotis ◽  
Armakolas Stefanos ◽  
Grigoropoulou Irida ◽  
Griva Anastasia

Recently, cases of plagiarism in education have been on the rise, and the underlying causes of their appearance are numerous. Given the extent of this phenomenon, specialised software has been developed and is available for users to check for the presence or absence of plagiarism. The purpose of this paper is to study cases of plagiarism in education, as well as the available plagiarism software. This case study also presents a practical example of implementing the checking process with plagiarism software, together with its results, on an already published article. The case study points out the importance of performing further quality control on those parts of the text where a textual coincidence was spotted by the plagiarism detection software.


2020 ◽  
Vol 5 (1) ◽  
pp. 950-954
Author(s):  
Prabin Chhetri ◽  
Hem Sagar Rimal ◽  
Santosh Upadhyaya Kafle ◽  
Tara Kumari Kafle

Introduction: In literature, the word plagiarism means stealing someone's work without acknowledging the author. It is an unavoidable fact that an article must be original when it is presented for publication. During research work, authors often put a lot of effort into collecting the facts and figures for their articles, yet they seem to have a blind spot when it comes to plagiarism. Detecting plagiarism is a challenging and time-consuming task for most journals. Objectives: The objective of the study is to understand the level of plagiarism in articles submitted to the Birat Journal of Health Sciences. Methodology: A descriptive cross-sectional study on plagiarism was conducted by retrieving data on articles submitted to the Birat Journal of Health Sciences (BJHS), the official medical journal of Birat Medical College (BMC), from April 2017 to August 2018. A total of 111 articles were examined with the iThenticate software, a commercial Plagiarism Detection Tool (PDT), Version 2.0.8. Articles were analyzed using descriptive statistics. Result: 63 articles (56.75%) were less than 20% plagiarized, while 48 articles (43.22%) were above the cut-off point (20% plagiarized), placing them in the category of plagiarized articles. Conclusion: The incidence of plagiarism in articles submitted to BJHS was very common. It is also evident from the study that a commercial plagiarism detection tool (PDT) can be a very useful aid for detecting and preventing plagiarism. Notably, articles by clinical consultants (who are not associated with academic institutions) showed a higher level of plagiarism than those by academicians (who are associated with academic institutions).
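The 20% cut-off described in this abstract is a simple threshold on the similarity percentage reported by the detection tool. A minimal sketch of that classification step, using hypothetical scores rather than the BJHS data (the function name and cut-off handling at exactly 20% are illustrative assumptions):

```python
def classify(scores, cutoff=20.0):
    """Split similarity percentages at a journal's plagiarism cut-off."""
    acceptable = [s for s in scores if s < cutoff]
    plagiarized = [s for s in scores if s >= cutoff]
    return acceptable, plagiarized

# Hypothetical iThenticate-style similarity scores, not the BJHS data.
scores = [5.0, 12.5, 19.9, 21.0, 34.0, 48.5]
ok, flagged = classify(scores)
print(f"{len(flagged)}/{len(scores)} articles at or above the 20% cut-off")
# → 3/6 articles at or above the 20% cut-off
```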


2014 ◽  
Vol 13 (03) ◽  
pp. 1450028 ◽  
Author(s):  
Imad Rahal ◽  
Colin Wielga

Source code plagiarism is easy to commit but difficult to catch. Many approaches have been proposed in the literature to automate its detection; however, there is little consensus on what works best. In this paper, we propose two new measures for determining the accuracy of a given technique and describe an approach that converts code files into strings, which can then be compared for similarity in order to detect plagiarism. We then evaluate several string comparison techniques, heavily utilised in the area of biological sequence alignment, on a large collection of student source code containing various types of plagiarism. Experimental results show that the compared techniques succeed in matching a plagiarised file to its original files upwards of 90% of the time. Finally, we propose a modification to these algorithms that drastically improves their runtimes with little or no effect on accuracy. Even though the ideas presented herein are applicable to most programming languages, we focus on a case study of an introductory-level Visual Basic programming course offered at our institution.
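The abstract does not name the exact algorithms compared, but the pipeline it describes — normalize each code file into a string, then score string similarity — can be sketched as follows. This is an illustrative stand-in using Levenshtein edit distance (a close relative of the global-alignment scoring used in biological sequence analysis); the function names and the Visual Basic comment stripping are assumptions, not the paper's implementation:

```python
def normalize(source):
    # Collapse whitespace, drop VB-style comments, and lowercase, so that
    # trivial edits (case changes, reformatting) do not hide a match.
    lines = []
    for line in source.splitlines():
        line = line.split("'")[0]       # strip Visual Basic comments
        line = " ".join(line.split())   # collapse runs of whitespace
        if line:
            lines.append(line.lower())
    return "\n".join(lines)

def similarity(a, b):
    """Normalized Levenshtein similarity in [0, 1]; 1.0 means identical."""
    if not a and not b:
        return 1.0
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return 1.0 - prev[-1] / max(len(a), len(b))

original = "Dim total As Integer\ntotal = x + y  ' running sum"
suspect = "dim TOTAL as integer\nTOTAL = x + y"
print(similarity(normalize(original), normalize(suspect)))  # → 1.0
```

A plagiarised copy that only renames case and reformats scores 1.0 after normalization, which is why the normalization step matters as much as the comparison algorithm.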


2016 ◽  
Vol 26 (09n10) ◽  
pp. 1399-1429 ◽  
Author(s):  
Jeffrey Svajlenko ◽  
Chanchal K. Roy

An important measure of clone detection performance is precision. However, there has been a marked lack of research into methods for efficiently and accurately measuring the precision of a clone detection tool. Instead, tool authors simply validate a small random sample of the clones their tools detected in a subject software system. Since there could be many thousands of clones reported by the tool, such a small random sample cannot guarantee an accurate and generalized measure of the tool’s precision across all the varieties of clones that can occur in any arbitrary software system. In this paper, we propose a machine-learning-based approach that clusters similar clones together, which can be used to maximize the variety of clones examined when measuring precision while significantly reducing the biases a specific subject system has on the generality of the precision measured. Our technique reduces the effort required to measure precision, while doubling the variety of clones validated and reducing the biases that harm the generality of the measure by up to an order of magnitude. Our case study with the NiCad clone detector and the Java class library shows that our approach is effective in efficiently measuring an accurate and generalized precision of a subject clone detection tool.
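The clustering idea in this abstract — group similar clones so that validation examines one representative per variety instead of a blind random sample — can be sketched with a toy greedy clustering over token-frequency vectors. The features, the cosine threshold, and the leader-clustering strategy here are illustrative assumptions; the paper's actual machine-learning approach is not specified in the abstract:

```python
import math
from collections import Counter

def features(code):
    # Crude token-frequency vector; real clone features would be richer.
    return Counter(code.split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(clones, threshold=0.4):
    """Greedy leader clustering: a clone joins the first cluster whose
    representative is similar enough, otherwise it starts a new cluster."""
    clusters = []
    for clone in clones:
        for group in clusters:
            if cosine(features(clone), features(group[0])) >= threshold:
                group.append(clone)
                break
        else:
            clusters.append([clone])
    return clusters

detected = [
    "for i in range(n): total += a[i]",
    "for j in range(m): acc += b[j]",
    "return sorted(items, key=len)",
]
groups = cluster(detected)
# Validate one representative per cluster rather than a blind random sample.
sample = [g[0] for g in groups]
```

Here the first two loops land in one cluster and the `sorted` call in another, so a validator inspects two representatives covering both varieties instead of possibly drawing two near-duplicates at random.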

