Establishing Content Validity for the Nutrition Literacy Assessment Instrument

2013 ◽  
Vol 10 ◽  
Author(s):  
Heather Gibbs ◽  
Karen Chapman-Novakofski
EduKimia ◽  
2020 ◽  
Vol 2 (3) ◽  
pp. 128-133
Author(s):  
Apri Yolla Mulya Sartika ◽  
Eka Yusmaita

This chemistry literacy assessment instrument was developed in response to recent challenges in evaluating student learning achievement: assessment should not only grade students' cognitive ability in terms of understanding and memorizing, but also their ability to apply concepts to problems. The instrument covers several aspects (content, context, Higher Order Learning Skills [HOLS], and attitude) that are expected to answer those challenges. This study aims to produce a proper and valid chemistry literacy assessment for the topics of the fundamental laws of chemistry and stoichiometry, based on content validity and number of questions, reliability, difficulty level, and discrimination (differentiator) level. The study is developmental in type, following the Model of Educational Reconstruction (MER) design. MER consists of three stages: (1) analysis of content structure, (2) empirical study, and (3) development and evaluation of instruction. In the empirical study, the test instrument was clarified with three chemistry and education experts (lecturers and a teacher). The instrument consisted of seven discourse-based chemistry literacy questions, expanded into 15 items. The results showed that the content validity of the designed chemistry literacy assessment can be categorized as valid, with a value of 1.11. Of the 15 questions, three fell into the "very significant" category, nine into "significant", and three into "insignificant". Test reliability was 0.88. For difficulty level, eight questions were "medium", six "difficult", and one "very difficult". For discrimination level, nine questions were "medium", four "proper", and two "not proper". Questions in the "insignificant" and "not proper" categories were rejected.
In conclusion, of the 15 designed questions, 12 fall into the "proper" and "right" category.
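The difficulty and discrimination (differentiator) figures reported in this abstract come from classical test theory item analysis. As a hedged illustration only (the study's exact data, cut-offs, and grouping convention are not given here), such indices are commonly computed along these lines:

```python
# Classical test theory item analysis: difficulty and discrimination.
# Thresholds and data below are illustrative, not taken from the study.

def item_difficulty(scores, max_score=1):
    """Proportion of the maximum possible score earned on one item.

    Values near 0 mean a hard item; values near 1 mean an easy one.
    """
    return sum(scores) / (len(scores) * max_score)

def item_discrimination(scores, totals, group_frac=0.27):
    """Upper-minus-lower group difference on one item.

    Examinees are ranked by total test score, and the top and bottom
    fractions (27% is a common convention) are compared on the item.
    """
    n = max(1, round(len(scores) * group_frac))
    ranked = [s for s, _ in sorted(zip(scores, totals), key=lambda p: p[1])]
    lower, upper = ranked[:n], ranked[-n:]
    return (sum(upper) - sum(lower)) / n

# Example: 10 examinees, one dichotomous (0/1) item
item_scores = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
test_totals = [14, 5, 12, 13, 4, 11, 6, 15, 10, 3]
print(item_difficulty(item_scores))                    # 0.6
print(item_discrimination(item_scores, test_totals))   # 1.0
```

An item answered correctly mostly by high scorers (as above) discriminates well; a negative value would flag a defective item.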


2018 ◽  
Author(s):  
Heather D. Gibbs ◽  
Edward F. Ellerbeck ◽  
Byron Gajewski ◽  
Chuanwu Zhang ◽  
Debra K. Sullivan

2020 ◽  
pp. 1-7
Author(s):  
Monika Engelke ◽  
Karl Ernst Grund ◽  
Dieter Schilling ◽  
Ulrike Beilenhoff ◽  
Ferdinand Stebner ◽  
...  

Introduction: The acquisition of sensorimotor skills, so-called "technical skills", plays an essential part in the professional and continuing education of medical and nursing staff. Facilities turn to simulator training to promote the safe and accurate performance of endoscopic examinations. This study therefore aimed to develop and pilot-test an assessment instrument to monitor the sensorimotor or "technical" skills the examiner needs for a safe percutaneous endoscopic gastrostomy (AS-PEG). Materials and Methods: Instrument development and pilot validation involved four stages: identification of potential items and an initial draft of the AS-PEG; an expert panel with 11 experts (content validity index [CVI] calculated); empirical validation using a quasi-experimental intervention on simulators; and revision of the pilot AS-PEG taking both the expert assessment and the empirical testing into consideration. Results: The initial instrument yielded 13 categories and 44 items describing the PEG procedure. Experts rated 30 of the 44 items (68%) extremely or very important for the safety of the puncture of the stomach. Initial item-CVIs ranged from 0.00 to 1.00; the scale-CVI was 0.61. Twenty-four trainees (7 physicians, 17 nurses) participated in the pilot simulation study. On average, 8:25 min were required for PEG placement (min–max 5:59–13:38 min, SD = 1:43). The revised AS-PEG was reduced to 14 items, with item-CVIs ranging from 0.8 to 1.0 and a scale-CVI of 0.90. Conclusion: The AS-PEG instrument facilitates the evaluation of sensorimotor skills during percutaneous gastric puncture within the context of PEG placement, across professions and independently of the number of procedures previously performed. The instrument is economical and shows satisfactory content validity.


2020 ◽  
Vol 33 (1) ◽  
Author(s):  
Raira Fernanda Altmann ◽  
Karin Zazo Ortiz ◽  
Tainá Rossato Benfica ◽  
Eduarda Pinheiro de Oliveira ◽  
Karina Carlesso Pagliarin

Abstract Background Evaluating patients in the acute phase of brain damage allows for the early detection of cognitive and linguistic impairments and the implementation of more effective interventions. However, few cross-cultural instruments are available for the bedside assessment of language abilities. The aim of this study was to develop a brief assessment instrument and evaluate its content validity. Methods Stimuli for the new assessment instrument were selected from the M1-Alpha and MTL-BR batteries (Stage 1). Sixty-five images were redesigned and analyzed by non-expert judges (Stage 2). This was followed by the analysis of expert judges (Stage 3), where nine speech pathologists with doctoral training and experience in aphasiology and/or linguistics evaluated the images, words, nonwords, and phrases for inclusion in the instrument. Two pilot studies (Stage 4) were then conducted in order to identify any remaining errors in the instrument and scoring instructions. Results Sixty of the 65 figures examined by the judges achieved inter-rater agreement rates of at least 80%. Modifications were suggested to 22 images, which were therefore reanalyzed by the judges, who reached high levels of inter-rater agreement (AC1 = 0.98 [CI = 0.96–1]). New types of stimuli such as nonwords and irregular words were also inserted in the Brief Battery and favorably evaluated by the expert judges. Optional tasks were also developed for specific diagnostic situations. After the correction of errors detected in Stage 4, the final version of the instrument was obtained. Conclusion This study confirmed the content validity of the Brief MTL-BR Battery. The method used in this investigation was effective and can be used in future studies to develop brief instruments based on preexisting assessment batteries.
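This abstract reports inter-rater agreement as Gwet's AC1. As a hedged sketch (the study used multiple judges; shown here is only the simplest two-rater, two-category case, with invented ratings), the coefficient corrects observed agreement for chance agreement:

```python
def gwet_ac1(r1, r2):
    """Gwet's AC1 for two raters and two categories (coded 0/1).

    pa: observed proportion of agreement.
    e:  chance agreement, 2*pi*(1-pi), where pi is the prevalence of
        category 1 averaged across both raters.
    AC1 = (pa - e) / (1 - e).
    """
    n = len(r1)
    pa = sum(a == b for a, b in zip(r1, r2)) / n
    pi = (sum(r1) + sum(r2)) / (2 * n)
    e = 2 * pi * (1 - pi)
    return (pa - e) / (1 - e)

# Example: two judges rating 8 images as acceptable (1) or not (0)
judge1 = [1, 1, 1, 0, 1, 1, 0, 1]
judge2 = [1, 1, 0, 0, 1, 1, 0, 1]
print(round(gwet_ac1(judge1, judge2), 2))  # 0.78
```

Unlike Cohen's kappa, AC1 remains stable when one category dominates, which is why it is often preferred for expert-judge agreement studies.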


2021 ◽  
Vol 2 (01) ◽  
pp. 21-42
Author(s):  
Muhammad Afifullah Nizary ◽  
Ahmad Nur Kholik

This article examines how to analyze the validity of an assessment instrument. In practice, not all teachers are able to construct correct instruments. Many teachers simply adopt instruments from other schools as tools for measuring their own students, even though the same instrument is not necessarily a suitable measure for different subjects. This habit needs to change. Teachers should construct their own instruments, because they know best the differences in their students' abilities. When a teacher uses an instrument from another school that includes material the teacher has not covered in class, it is the students who lose out. In constructing an assessment instrument, the teacher must attend to two characteristics: validity and reliability. Validity itself is divided into content validity, construct validity, and external validity, and each type has its own characteristics. Keywords: evaluation and teacher, the validity of the assessment instrument, the validity of the content


1987 ◽  
Vol 3 (1) ◽  
pp. 39-51 ◽  
Author(s):  
Ronald E. Anderson

Results from the 1979 Minnesota Computer Literacy Assessment, conducted by the Minnesota Educational Computing Consortium, show that high school females performed better than males in some specific areas of programming. The areas of female superiority are those, such as problem analysis and algorithmic application, where the problems are expressed verbally rather than mathematically. While these findings may result from unique features of computer education in Minnesota, they may also be a consequence of the fact that the Minnesota assessment instrument was relatively free of mathematical bias. These findings, and those of the 1982 National Assessment of Science on female superiority in "science decision making", imply that women are better than men at tasks usually defined as systems analysis rather than program coding.


2019 ◽  
Vol 107 ◽  
pp. 104538 ◽  
Author(s):  
Annemiek Vial ◽  
Claudia van der Put ◽  
Geert Jan J.M. Stams ◽  
Mark Assink

2018 ◽  
Vol 26 (2) ◽  
pp. 398-410 ◽  
Author(s):  
Andrea Egger-Rainer

Background and Purpose: The Epilepsy Monitoring Unit Comfort Questionnaire (EMUCQ) is a self-assessment instrument for measuring perceived patient comfort during hospitalization in an EMU. This study aimed at an initial determination of content validity by computing the content validity index (CVI). Methods: Nine experts judged the 60-item EMUCQ-1 by filling out a content validation form. The CVI was computed at the item level (I-CVI) and as an average scale-level index (S-CVI/Ave). Results: Twenty-six items remained unchanged and 12 items were reworded to prepare the 38-item EMUCQ-2 (I-CVI scores ≥ .78). Fourteen items were omitted and a further eight items were set aside for later evaluation. The S-CVI/Ave reached .90. Conclusion: These first results indicate that the EMUCQ-2 is valid in terms of content. Further assessment by members of the target population is advisable.
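Several of the studies in this listing report the same two indices, I-CVI and S-CVI/Ave. Under the widely used convention that an expert rating of 3 or 4 on a 4-point relevance scale counts as "relevant" (the ratings below are invented for illustration), they can be computed as:

```python
# Content validity index from expert relevance ratings on a 1-4 scale.
# Convention: a rating of 3 or 4 counts as "relevant".

def i_cvi(ratings):
    """I-CVI: proportion of experts rating the item 3 or 4."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def s_cvi_ave(item_ratings):
    """S-CVI/Ave: mean of the item-level CVIs across the whole scale."""
    cvis = [i_cvi(r) for r in item_ratings]
    return sum(cvis) / len(cvis)

# Example: 9 experts rating 3 items
items = [
    [4, 4, 3, 4, 3, 4, 4, 3, 4],  # 9/9 relevant -> I-CVI = 1.00
    [4, 3, 2, 4, 3, 4, 1, 3, 4],  # 7/9 relevant -> I-CVI = 0.78
    [2, 3, 2, 4, 1, 3, 2, 3, 2],  # 4/9 relevant -> I-CVI = 0.44
]
print([round(i_cvi(r), 2) for r in items])  # [1.0, 0.78, 0.44]
print(round(s_cvi_ave(items), 2))           # 0.74
```

With nine experts, an I-CVI of at least .78 is a common retention threshold, which matches the cut-off reported in this abstract.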


2017 ◽  
Vol 25 (1) ◽  
pp. 156-170 ◽  
Author(s):  
Girija Gopinathan Nair ◽  
Laurie-Ann M. Hellsten ◽  
Lynnette Leeseberg Stamler

Background/Purpose: Critical thinking skills (CTS) are essential for nurses; assessing students' acquisition of these skills is a mandate of nursing curricula. This study aimed to develop a self-assessment instrument of critical thinking skills (the Critical Thinking Self-Assessment Scale [CTSAS]) for students' self-monitoring. Methods: An initial pool of 196 items across 6 core cognitive skills and 16 subskills was generated using the American Philosophical Association definition of CTS. Experts' content review of the items and their ratings provided evidence of content relevance using the item-level content validity index (I-CVI) and Aiken's content validity coefficient (VIk). Results: 115 items were retained (range of I-CVI values = .70 to .94; range of VIk values = .69 to .95; significant at p < .05). Conclusion: The CTSAS is the first CTS instrument designed specifically for self-assessment purposes.
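This abstract reports Aiken's content validity coefficient alongside the I-CVI. As a hedged illustration (the rating scale and values below are invented, not taken from the study), Aiken's V for one item is the sum of the judges' ratings above the scale minimum, normalized by the maximum possible sum:

```python
def aikens_v(ratings, lo=1, hi=5):
    """Aiken's V = sum(r - lo) / (n * (hi - lo)).

    ratings: one item's ratings from n judges on an lo..hi scale.
    V ranges from 0 (all judges give the lowest rating) to 1
    (all judges give the highest rating).
    """
    n = len(ratings)
    return sum(r - lo for r in ratings) / (n * (hi - lo))

# Example: 5 judges rating one item on a 1-5 relevance scale
print(aikens_v([5, 4, 4, 5, 3]))  # (4+3+3+4+2) / (5*4) = 0.8
```

Because V is bounded in [0, 1], items can be compared directly against a fixed cut-off, and significance against chance agreement can be tested from tabled critical values.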

