The Development of Four-Tier Diagnostic Test Instrument to Identify the Learners’ Misconception on Circular Motions

Author(s):  
Nina Wahyuni ◽  
Yoga Budi Bhakti ◽  
Tatan Zenal Mutakin ◽  
Irnin Agustina Dwi Astuti

Students' difficulties in understanding concepts need to be identified so that remediation can be given and students can eventually master the concepts well. Misconception diagnostic tests are one way to uncover the causes or factors behind students' misconceptions. This study aims to develop a four-tier diagnostic test instrument to diagnose students' misconceptions about circular motion. This research is a development study using the 4D model. The developed four-tier diagnostic test instrument consists of 40 multiple-choice questions. Content validation by five experts rated the instrument as very feasible, with an average percentage of 86.34%. Empirical trials show that all 40 diagnostic test items are valid, with acceptable to good difficulty levels and discrimination power. The reliability of the questions, analyzed using the Kuder-Richardson formula, was 0.785, in the high category. This instrument can minimize the time spent identifying misconceptions because it takes the form of multiple-choice items with varied questions.
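The Kuder-Richardson reliability this abstract reports (0.785) is conventionally the KR-20 statistic for dichotomously scored items. Below is a minimal illustrative sketch of that computation; the function name and the sample score matrix are assumptions for demonstration, not data from the study.

```python
# Hedged sketch: Kuder-Richardson 20 (KR-20) reliability for 0/1-scored
# multiple-choice items. Illustrative only; not the authors' code.

def kr20(scores):
    """scores: list of per-student lists of 0/1 item scores."""
    n_items = len(scores[0])
    n_students = len(scores)
    # proportion of students answering each item correctly (p) and incorrectly (q)
    p = [sum(s[i] for s in scores) / n_students for i in range(n_items)]
    q = [1 - pi for pi in p]
    # population variance of total scores
    totals = [sum(s) for s in scores]
    mean = sum(totals) / n_students
    var = sum((t - mean) ** 2 for t in totals) / n_students
    return (n_items / (n_items - 1)) * (1 - sum(pi * qi for pi, qi in zip(p, q)) / var)

# Example with made-up data for 4 students on 3 items:
scores = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
print(kr20(scores))  # → 0.75
```

Values above roughly 0.7, like the 0.785 reported here, are customarily read as high reliability for a diagnostic test.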

2019 ◽  
Vol 5 (2) ◽  
pp. 69-76
Author(s):  
Ridho Adi Negoro ◽  
Viga Karina

Students' difficulties in understanding concepts need to be identified so that remediation can be given and students can eventually master the concepts well. Misconception diagnostic tests are one way to uncover the causes or factors behind students' misconceptions. This study aims to develop a four-tier diagnostic test instrument to diagnose students' misconceptions about vibrations and waves, adapting Thongchai's Mechanical Waves Conceptual Survey, whose content matches the segmentation of the Indonesian curriculum. This research is a development study using the 3D development model, which consists of three stages: 1) Define, 2) Design, and 3) Develop. The developed four-tier diagnostic test instrument consists of 22 multiple-choice questions. Content validation by three experts rated the instrument as very feasible, with an average percentage of 87%. Empirical trials show that all 22 diagnostic test items are valid, with acceptable to good difficulty levels and discrimination power. The reliability of the questions, analyzed using the Kuder-Richardson formula, was 0.765, in the high category. This instrument can minimize the time spent identifying misconceptions because it takes the form of multiple-choice items with varied questions.


Pythagoras ◽  
2009 ◽  
Vol 0 (69) ◽  
Author(s):  
Belinda Huntley ◽  
Johann Engelbrecht ◽  
Ansie Harding

In this study we propose a taxonomy for assessment in mathematics, which we call the assessment component taxonomy, to identify those components of mathematics that can be successfully assessed using alternative assessment formats. Based on the literature on assessment models and taxonomies in mathematics, this taxonomy consists of seven mathematics assessment components, hierarchically ordered by cognitive level as well as by the nature of the mathematical tasks associated with each component. Using a model that we developed earlier for measuring the quality of mathematics test items, we investigate which of the assessment components can be successfully assessed in the provided response question (PRQ) format, in particular multiple choice questions (MCQs), and which can be better assessed in the constructed response question (CRQ) format. The results of this study show that MCQs can be constructed to evaluate higher order levels of thinking and learning. The conclusion is that MCQs can be successfully used as an assessment format in undergraduate mathematics, more so in some assessment components than in others. The inclusion of the PRQ assessment format in all seven assessment components can reduce the large marking loads associated with continuous assessment practices in undergraduate mathematics without compromising the validity of the assessment.


2017 ◽  
Vol 1 (2) ◽  
pp. 145
Author(s):  
Nadiyah El-Haq Diyanahesa ◽  
Sentot Kusairi ◽  
Eny Latifah

Students' preconceptions influence how they construct knowledge. Students enter the classroom with conceptions formed in daily life; when these conceptions do not accord with scientific concepts, they are called misconceptions. Misconceptions are firmly believed by students and applied consistently. Teachers need information about students' misconceptions, which diagnostic tests can provide immediately. A diagnostic instrument that measures conceptual understanding more accurately is the isomorphic test: isomorphic problems consist of several questions in different contexts and representations that are solved by the same principles. The goal of this study is to develop an isomorphic multiple-choice diagnostic test instrument to diagnose students' misconceptions about momentum and impulse. This study is research and development using the ADDIE model (Analysis, Design, Development or Production, Implementation or Delivery, and Evaluation). The development steps consist of: (1) Analysis, (2) Design, (3) Develop, and (4) Implement. Before the Implement step, the test instrument was validated by a lecturer and two teachers. The revised instrument was administered to students of class XI MIA 4 MAN Tlogo Blitar and class X MIA U1 MAN 1 Tulungagung. The study produced 15 isomorphic multiple-choice diagnostic items covering five indicators. Based on validation by two physics teachers and a lecturer, the instrument is suitable for use as a diagnostic instrument capable of distinguishing between students' misconceptions and naive conceptions. Around 67.8% of students who answered the open-ended descriptions still held misconceptions. Based on the test analysis against the open-ended instrument, the isomorphic diagnostic test yields higher accuracy when two of the three provided questions are answered consistently.


2016 ◽  
Vol 7 (2) ◽  
pp. 44
Author(s):  
Kurnia Ningsih

This research aims to describe MIPA teachers' ability to design knowledge assessments through an analysis of the achievement aspects of knowledge assessment. This research used a descriptive method, with SMP MIPA teachers in Pontianak City who have taught for more than 5 years and hold an undergraduate degree as the population. The samples, selected using a purposive sampling technique, consisted of 12 teachers who submitted MIPA test items. The research instrument was the document of test items designed by the teachers in the form of multiple-choice tests. The data were analyzed descriptively through data reduction, systematic data display, and conclusion drawing. The results showed that across the 12 test instruments made by the teachers, with 380 questions in total, the teachers' knowledge assessments (multiple-choice questions) covered the knowledge aspect in 17.37% of items, the understanding aspect in 67.90%, the implementation aspect in 8.68%, and the analysis aspect in 6.05%. No questions addressed the evaluation and creation aspects. Keywords: teachers' ability, designing knowledge assessment.


2021 ◽  
Vol 2098 (1) ◽  
pp. 012032
Author(s):  
S Ardianti ◽  
W Wiji ◽  
T Widhiyanti

Abstract: Acid-base chemistry is one of the topics that students tend to find difficult to understand. It is conceptually dense and requires an integrated understanding of many concepts from introductory chemistry. This descriptive research aims to identify students' conceptions of acid-base topics and to ask which concepts they consider troublesome based on their learning experiences. The subjects of this research were 31 students of class XI IPA 4 at SMAN 3 Pariaman. The instruments were diagnostic tests and interviews. The results show that the students of SMAN 3 Pariaman have difficulties in learning the acid-base subject, in the high category. In the first indicator, 56.3% of students understood the concept, 20.8% had misconceptions, and 22.9% did not understand the concept. In the second indicator, 45.2% understood the concept, 18.3% had misconceptions, and 36.5% did not understand the concept. In the third indicator, 35.5% understood the concept, 31.2% had misconceptions, and 33.3% did not understand the concept. In the fourth indicator, 21.9% understood the concept, 27.7% had misconceptions, and 50.3% did not understand the concept. Meanwhile, acid-base theory, the calculation of pH or pOH, and the relationships among the degree of acidity (pH), the degree of ionization (α), and the acid equilibrium constant (Ka) or base equilibrium constant (Kb) are considered troublesome knowledge because they are conceptually difficult.


2021 ◽  
Vol 1 (2) ◽  
pp. 91
Author(s):  
Anggit Prabowo ◽  
Puspa Puspa ◽  
Fariz Setyawan

This study aims to develop a test instrument to measure higher-order thinking skills on the topic of two-variable linear equation systems. This is research and development using the ADDIE model, consisting of the steps of analysis, design, development, implementation, and evaluation. The test subjects were students of class VIII of SMP Negeri 3 Payung. The instruments used were validation sheets and documentation; data were collected through expert judgment and tests. The research produced a test instrument consisting of 10 multiple-choice items and 5 essay items, all declared valid by expert judgment. The trial results show that the difficulty-index analysis of the multiple-choice items yielded 1 item in the easy category and 9 in the moderate category, while the essay items yielded 2 items in the moderate category and 3 in the difficult category. The discrimination-index analysis of the multiple-choice items found 1 item in the poor category, 3 in the sufficient category, and 6 in the good category; for the essay items, 2 items were poor, 1 was sufficient, and 2 were good. The distractor analysis found that the distractors of 1 item were not functioning properly, while those of 9 items were. Finally, the reliability analysis yielded 0.73 for the multiple-choice items and 0.716 for the essay items, a high reliability interpretation.


2022 ◽  
Vol 8 (1) ◽  
pp. 1-12
Author(s):  
Hendra Musfa Dirman ◽  
Fatni Mufit ◽  
Festiyed Festiyed

Misconceptions about one concept in a lesson influence understanding of the next concept. Understanding the nature of the misconceptions present in learning can help student learning progress. Therefore, diagnostic tests for misconceptions are needed, including the newer four-tier and five-tier multiple-choice formats. This research is a systematic literature review, using the PRISMA method, of misconceptions that often occur in high school physics subjects. The data are 60 selected articles from 2017-2021. The purpose of this study is to describe the use of four-tier and five-tier multiple-choice diagnostic tests in physics and to compare the two instruments, including the strengths and weaknesses of each. Among the reviewed studies, four-tier multiple-choice tests were used in 83.33% and five-tier tests in 16.67%. The physics topics most often targeted by four-tier diagnostic tests were optical instruments (12%) and work and energy (10%). The additional fifth tier in five-tier instruments is most often used to present an overview or conclusion. Each of the four-tier and five-tier multiple-choice formats has its own advantages and disadvantages for assessing students' conceptions.


2021 ◽  
Vol 20 (2) ◽  
Author(s):  
Siti Khadijah Adam ◽  
Faridah Idris ◽  
Puteri Shanaz Jahn Kassim ◽  
Nor Fadhlina Zakaria ◽  
Rafidah Hod

Background: Multiple-choice questions (MCQs) are used to measure students' progress, and they should be analyzed properly to guarantee each item's appropriateness. The analysis usually determines three indices for an item: the difficulty or passing index (PI), the discrimination index (DI), and the distractor efficiency (DE). Objectives: This study aimed to analyze multiple-choice questions with different numbers of options in the preclinical and clinical examinations of the medical program at Universiti Putra Malaysia. Methods: This is a cross-sectional study. Forty four-option MCQs from the preclinical examinations and 80 five-option MCQs from the clinical examinations of 2017 and 2018 were analyzed using an optical mark recognition machine and MS Excel. The parameters included PI, DI, and DE. Results: The average difficulty levels of the MCQs in the preclinical and clinical examinations were similar in 2017 and 2018, ranging from 0.55 to 0.60, considered 'acceptable' to 'ideal'. The average DIs were similar across examinations and considered 'good' (ranging from 0.25 to 0.31), except in the 2018 clinical examination, which showed 'poor' items (DI = 0.20 ± 0.11). The preclinical questions showed an increase in the number of 'excellent' and 'good' items in 2018, from 37.5% to 70.0%. The number of items with no non-functioning distractors increased in 2018 by 10.0% for the preclinical phase and 6.25% for the clinical phase. Among all, the 2018 preclinical MCQs showed the highest mean DE (71.67%). Conclusions: Our findings suggest that the preclinical-phase questions improved, while more training in question preparation and continuous feedback should be given to clinical-phase teachers. A higher number of options did not affect the difficulty of a question; however, the discrimination power and distractor efficiency might differ.
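The PI and DI named in this abstract are standard item-analysis quantities. The sketch below shows one common way to compute them, using the conventional upper/lower 27% group method; the function name, the group fraction, and the sample data are illustrative assumptions, not details taken from the study.

```python
# Hedged sketch: difficulty (passing) index and discrimination index for one
# dichotomously scored item, via the upper/lower 27% group method.

def item_indices(item_scores, total_scores, frac=0.27):
    """item_scores: 0/1 per student for one item; total_scores: test totals."""
    n = len(item_scores)
    k = max(1, round(n * frac))  # size of the upper and lower groups
    # rank students by total test score, best first
    order = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    upper = [item_scores[i] for i in order[:k]]
    lower = [item_scores[i] for i in order[-k:]]
    pi = sum(item_scores) / n           # difficulty: proportion answering correctly
    di = (sum(upper) - sum(lower)) / k  # discrimination: upper minus lower rate
    return pi, di

# Example with made-up data for 8 students:
item = [1, 1, 1, 0, 0, 0, 1, 0]
totals = [10, 9, 8, 3, 2, 1, 7, 4]
print(item_indices(item, totals))  # → (0.5, 1.0)
```

On these conventions a PI of 0.55-0.60, as reported here, sits in the commonly cited acceptable band, and a DI above about 0.25 is usually labelled good.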


Jurnal Elemen ◽  
2022 ◽  
Vol 8 (1) ◽  
pp. 66-76
Author(s):  
Karlimah Karlimah

This article explains how to analyze test items on arithmetic operations with fractions to obtain the items' difficulty level and fit. Data were collected using multiple-choice questions given to 50 fourth-grade students of an elementary school in Tasikmalaya city. The answers were analyzed using the Rasch model with the Winsteps 3.75 application, categorizing items by a combination of the standard deviation (SD) and the logit mean. The score data of each person and question were used to estimate the pure score on the logit scale, indicating the difficulty level of the test items. The categories were: very difficult (logit > +1 SD); difficult (0.0 < logit ≤ +1 SD); easy (−1 SD ≤ logit < 0.0); very easy (logit < −1 SD). Three criteria were used to determine the fit of the questions: the Outfit Z-Standard (ZSTD) value, the Outfit Mean Square (MNSQ), and the Point Measure Correlation. The analysis produced a collection of usable test items at several difficulty levels, namely difficult, very difficult, easy, and very easy, from previous items that had only difficult, medium, and easy categories. The Rasch model can help categorize both question difficulty and students' ability levels.
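The mean/SD banding of item logits described in this abstract can be sketched as follows. This is an illustrative reading of the categorization, with made-up logit values and assumed band boundaries; the actual estimates in the study come from Winsteps, not from this snippet.

```python
# Hedged sketch: band Rasch item difficulties (in logits) relative to the
# mean and one standard deviation, as described in the abstract.

import statistics

def categorize(logits):
    mean = statistics.mean(logits)
    sd = statistics.pstdev(logits)  # population SD over the item set
    labels = []
    for b in logits:
        if b > mean + sd:
            labels.append("very difficult")
        elif b > mean:
            labels.append("difficult")
        elif b > mean - sd:
            labels.append("easy")
        else:
            labels.append("very easy")
    return labels

# Example with made-up item logits:
print(categorize([2.0, 0.5, -0.5, -2.0]))
# → ['very difficult', 'difficult', 'easy', 'very easy']
```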


2019 ◽  
Author(s):  
Assad Ali Rezigalla ◽  
Elwathiq Khalid Ibrahim ◽  
Amar Babiker ElHussein

Abstract Background: Distractor efficiency of multiple-choice item responses is a component of item analysis used by examiners to evaluate the credibility and functionality of the distractors. Objective: To evaluate the impact of the functionality (efficiency) of the distractors on the difficulty and discrimination indices. Methods: A cross-sectional study in which standard item analysis of an 80-item test of A-type MCQs was performed. The correlation and significance of variance among the difficulty index (DIF), discrimination index (DI), and distractor efficiency (DE) were measured. Results: There is a significant moderate positive correlation between the difficulty index and distractor efficiency: a high difficulty index tends to go with high distractor efficiency (and vice versa). There is a weak positive correlation between distractor efficiency and the discrimination index. Conclusions: Non-functional distractors can reduce the discrimination power of multiple-choice questions. More training and effort in constructing plausible MCQ options is essential for the validity and reliability of tests.
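Distractor efficiency as discussed here is commonly defined through the share of "functional" distractors, those chosen by at least some minimum fraction of examinees (5% is a frequent cutoff). The sketch below illustrates that definition; the cutoff, function name, and response counts are conventional assumptions, not figures from the study.

```python
# Hedged sketch: distractor efficiency (DE) for one MCQ item, defined as the
# fraction of distractors that are "functional" (chosen by >= 5% of examinees).

def distractor_efficiency(option_counts, key, cutoff=0.05):
    """option_counts: dict option -> number of examinees choosing it; key: correct option."""
    n = sum(option_counts.values())
    distractors = [opt for opt in option_counts if opt != key]
    functional = [d for d in distractors if option_counts[d] / n >= cutoff]
    return len(functional) / len(distractors)

# Example with made-up response counts for a 4-option item with key "A":
print(distractor_efficiency({"A": 50, "B": 30, "C": 15, "D": 5}, key="A"))  # → 1.0
print(distractor_efficiency({"A": 60, "B": 36, "C": 3, "D": 1}, key="A"))   # → 0.333...
```

A non-functional distractor (the second example's options C and D) attracts so few responses that it effectively shortens the option list, which is consistent with the study's finding that non-functional distractors reduce discrimination power.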

