Child Safety Assessment: Do Instrument-Based Decisions Concur with Decisions of Expert Panels?

2021 ◽  
Vol 10 (5) ◽  
pp. 167
Author(s):  
Annemiek Vial ◽  
Mark Assink ◽  
Geert Jan Stams ◽  
Claudia Van der Put

To make decisions on children’s immediate safety, child welfare agencies have been using safety assessment instruments for decades. However, very little research on the quality of these instruments has been conducted. This study is the first to inspect the concurrent validity of a child safety assessment instrument by comparing its outcomes to a different measure of immediate child safety. Specifically, it examined to what extent decisions of practitioners using a safety assessment instrument concur with decisions of child maltreatment expert panels. A total of 26 experts on immediate child safety participated in 7 expert panels, in which the safety of children as described in 24 vignettes was discussed. Additionally, 74 practitioners rated the same vignettes using the ARIJ safety assessment instrument. The instrument-based safety decisions of practitioners concurred with the safety decisions reached by the expert panels in a small majority of cases (58% agreement). Expert panels often identified more types of immediate safety threats than practitioners using the instrument; the practitioners, however, more often deemed the child to be in immediate danger than the expert panels. These findings indicate how the instrument can be improved and give insight into how immediate safety decisions are made.
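The 58% figure the abstract reports is a simple percent-agreement statistic between two sets of binary safety decisions. A minimal sketch with invented vignette decisions (not the study's data):

```python
# Hedged illustration: percent agreement between instrument-based safety
# decisions and expert-panel decisions. The vignette labels are invented.

def percent_agreement(a, b):
    """Share of vignettes on which the two decision sources agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# 1 = "child in immediate danger", 0 = "safe", for 12 hypothetical vignettes
instrument = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1]
panel      = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1]

print(f"agreement = {percent_agreement(instrument, panel):.0%}")  # 7 of 12 match
```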

2012 ◽  
Vol 7 (4) ◽  
pp. 187-197 ◽  
Author(s):  
Jim F. Schilling

Context: An emphasis on knowledge and skill competency acquisition continues to gain importance in allied health professions. Accuracy and fairness in the summative assessment of competencies are essential to ensure student competence. A positive demonstration of validity, reliability, and authentic quality criteria is needed to achieve evidence-based practice in the assessment of competencies. Objective: To present a variety of instruments used in the assessment of competencies established in the fifth edition of the athletic training competencies document and judge them on validity, reliability, and authenticity criteria. Data Sources: Literature reviewed for this article included published articles pertaining to the assessment of competencies in health care professional programs. Data Synthesis: Self, written, and observation assessment methods, with specific types of instruments for each category, are used in the summative assessment of competencies. The quality of the assessment instruments is considered to ensure the authenticity, validity, and reliability of scores. The type of assessment instrument and its content were recommended depending on the level of competence, which was categorized according to the depth of understanding and complexity of skill in the competencies. Conclusions: No one-size-fits-all assessment method was identified. Certain instruments demonstrated greater quality than others and were used depending on assessment goals and resources.


2019 ◽  
Vol 107 ◽  
pp. 104538 ◽  
Author(s):  
Annemiek Vial ◽  
Claudia van der Put ◽  
Geert Jan J.M. Stams ◽  
Mark Assink

2019 ◽  
Vol 4 (2) ◽  
pp. 300
Author(s):  
Aloysius Mering ◽  
Indri Astuti

This study aims to (1) describe clearly and comprehensively the quality of non-cognitive assessment instruments made by elementary school teachers, (2) develop procedures for developing teacher-made non-cognitive assessment instruments, and (3) develop non-cognitive assessment instruments together with teachers. To realize these goals, the researchers used three structured research designs. The first design is survey research to describe the quality of teacher-made non-cognitive assessment instruments; the data studied are survey data, supplemented by the non-cognitive instruments that teachers constructed in their Lesson Plans (RPP). From the results of this review of teachers' non-cognitive assessment instruments, a guidebook on the procedure for developing teacher-made non-cognitive assessment instruments is then developed, following research and development (R & D) procedures. In the third design, the researchers and teachers developed non-cognitive assessment instruments in a workshop, which applies the guidebook that has been prepared. The procedure for preparing instruments follows these steps: (a) developing instrument specifications, (b) writing the instrument, (c) reviewing the instrument, (d) assembling the instrument (for testing purposes), (e) testing the instrument, (f) analyzing the trial results, (g) selecting and assembling the instrument, (h) printing the instrument, (i) administering the instrument, and (j) preparing scales and norms. The whole series of studies will produce (a) research reports, financial reports, and logbooks; (b) articles that have been discussed; (c) guidelines for preparing teacher-made non-cognitive assessment instruments that can be used as teaching materials and alternative training materials for drafting assessment instruments; (d) scientific publications in accredited journals; and (e) a collection of validated teacher-made non-cognitive assessment instruments.


2019 ◽  
Vol 9 (1) ◽  
pp. 91-100
Author(s):  
Nurhayati Nurhayati ◽  
Wahyudi Wahyudi ◽  
Syarif Lukman Hakim ◽  
...  

This study aims to 1) produce a HOTS assessment instrument; 2) determine the quality of the test instrument in terms of construction, material, and language feasibility according to experts; and 3) determine the quality of the test items in terms of validity, reliability, difficulty level, and discriminating power based on the test results. Research and Development (R & D) was used as the research method, with a 4D procedural development model consisting of four stages: define, design, development, and dissemination. A questionnaire was used for expert-judgment validation. The measured characteristics of the HOTS items included validity, reliability, difficulty level, and discriminating power. The HOTS assessment instrument developed took the form of multiple-choice items with reasoning, targeting the HOTS aspects of analyzing, evaluating, and creating. The results of expert validation show that the items were on average rated very good in terms of content, construct, and language. Instruments that had been validated and revised were tested on students who had studied vibration and wave material; the test results showed that 77% of the questions developed were of good quality, with valid criteria, good discriminating power, moderate-to-easy difficulty levels, and very strong reliability, making them feasible and ready to use to measure students' higher-order thinking skills in vibration and wave material. Keywords: HOTS, Instruments test, Vibrations and waves
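The item characteristics the abstract lists (difficulty level and discriminating power) are classical test-theory statistics. A minimal sketch, with invented student responses and a simple discrimination index (illustrative only, not the authors' analysis):

```python
# Classical item analysis sketch: difficulty index and a simple
# discrimination index for one multiple-choice item. Data are invented.

def item_difficulty(responses):
    """Proportion of students answering the item correctly (0 = hard, 1 = easy)."""
    return sum(responses) / len(responses)

def item_discrimination(responses, totals):
    """Mean total score of students who got the item right minus those who
    got it wrong, scaled by the maximum total score."""
    right = [t for r, t in zip(responses, totals) if r == 1]
    wrong = [t for r, t in zip(responses, totals) if r == 0]
    if not right or not wrong:
        return 0.0
    return (sum(right) / len(right) - sum(wrong) / len(wrong)) / max(totals)

# Hypothetical data: 1 = correct, 0 = incorrect, for one item across 8 students
item   = [1, 1, 0, 1, 0, 1, 1, 0]
totals = [9, 8, 3, 7, 4, 10, 8, 2]  # each student's total test score

p = item_difficulty(item)            # 5/8 = 0.625 -> "moderate" difficulty
d = item_discrimination(item, totals)
print(f"difficulty={p:.3f}, discrimination={d:.3f}")
```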


2020 ◽  
Vol 3 (1) ◽  
pp. 14-24
Author(s):  
Puji Hartini ◽  
Hari Setiadi ◽  
Ernawati Ernawati

The success of a learning process that is planned and implemented by the teacher can be seen through the process of assessing students. Assessment can be carried out using appropriate assessment instruments, so teachers need to be able to understand and compile assessment instruments properly. This study aims to examine and describe the assessment instruments made by elementary school teachers in Jakarta. The type of research used is descriptive qualitative research, with data collected through interviews and document analysis. The subjects in this study were 8 grade VI elementary school teachers in Jakarta, with science material on theme 7 about the reproductive system. The results show variation in teachers' ability to develop assessment instruments. Six teachers were able to construct instruments proportionally across the LOTS and HOTS categories, but two teachers were only able to compile instruments that measured LOTS. Teachers also made errors in determining the cognitive level of items, especially at levels C4, C5, and C6. These results underscore the importance of teachers understanding the cognitive hierarchy in depth, especially in relation to preparing HOTS-based assessment instruments, because the quality of learning success is largely determined by the assessment instrument used.


Author(s):  
Yanti Gultom ◽  
Biner Ambarita ◽  
Syahnan Daulay

This study aims to describe the development of authentic assessment instruments for drama text learning for class VIII students of Junior High School 6 Tebing Tinggi. Random sampling was used; the sample comprised 32 students and 3 teachers. The quality of the authentic assessment instruments for drama text learning was obtained from the validation and assessment given by material experts, evaluation experts, teacher responses, and student responses. The results show that the students' average pretest score was 68.56, indicating that students' ability had not increased significantly and had not reached the minimum mastery criterion (KKM). Learning with the authentic assessment instruments for drama text learning yielded an average score of 80.97; the lowest student score was 70 and the highest was 98. Based on the average posttest scores, students' ability increased significantly and reached the KKM as expected. The effectiveness of the developed assessment instrument was 80.97%, compared with 68.56% before the instrument was used. Therefore, students' ability to answer drama text questions increased after the authentic assessment instruments for drama text learning were applied.


2020 ◽  
Vol 8 (1) ◽  
pp. 67
Author(s):  
Septiana Dwi Utami ◽  
Ika Nurani Dewi ◽  
Ismail Efendi

This study aims to: 1) determine the appropriateness of practicum performance appraisal instruments; and 2) find out whether there are differences in laboratory skill competency among students. The instrument development procedure refers to the 4-D model, with a one-shot case study trial design. Before the performance appraisal instrument was applied to 77 fourth-semester students of the Department of Biology Education, FSTT, Mandalika University of Education, a validation test was first conducted by an expert. Data were collected through validation, testing, and observation. The instruments used in this study were instrument validation sheets, observation sheets for practical laboratory skills, and tests of laboratory thinking skills. Data were analyzed with a one-way ANOVA test followed by the LSD test. The results showed that: 1) the performance evaluation instruments developed fell into the valid and reliable categories; and 2) there were significant differences in laboratory skills among the three groups at α = 5%, in the sufficient category. The conclusion of this research is that the performance assessment instrument developed can be used to train students' laboratory skills. This research is expected to contribute to improving the quality of practicum assessment instruments for learning in the laboratory.
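The one-way ANOVA followed by an LSD test described above can be sketched as follows; the group scores, group labels, and critical t-value are invented for illustration, not the study's data:

```python
# One-way ANOVA across three groups of laboratory-skill scores, followed by
# Fisher's LSD pairwise comparisons using the pooled within-group mean square.
from itertools import combinations
from math import sqrt

groups = {
    "A": [72, 75, 78, 70, 74],
    "B": [80, 83, 79, 85, 81],
    "C": [68, 66, 71, 69, 70],
}

def one_way_anova(groups):
    all_scores = [x for g in groups.values() for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups.values())
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups.values() for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    ms_within = ss_within / df_within
    f_stat = (ss_between / df_between) / ms_within
    return f_stat, ms_within, df_within

f_stat, mse, df = one_way_anova(groups)
print(f"F = {f_stat:.2f} on (2, {df}) df")

# Fisher's LSD: a pair differs if |mean_i - mean_j| exceeds t_crit * SE.
# t_crit for alpha = 5% and df = 12 is ~2.179 (from a t-table).
t_crit = 2.179
for a, b in combinations(groups, 2):
    ga, gb = groups[a], groups[b]
    diff = abs(sum(ga) / len(ga) - sum(gb) / len(gb))
    lsd = t_crit * sqrt(mse * (1 / len(ga) + 1 / len(gb)))
    print(f"{a} vs {b}: |diff| = {diff:.2f}, LSD = {lsd:.2f}, "
          f"{'significant' if diff > lsd else 'not significant'}")
```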


2021 ◽  
pp. 009385482110058
Author(s):  
Brian J. Brittain ◽  
Leah Georges ◽  
Jim Martin

The purpose of this study was to examine the predictive validity of the Public Safety Assessment (PSA), an actuarial pretrial assessment instrument, administered to 15,931 individuals in Volusia County, Florida, between 2016 and 2017. A series of logistic regression models analyzed the influence of the PSA’s risk scores for Failure to Appear (FTA) and New Criminal Activity (NCA), as well as gender, race, and the length of time spent in pretrial custody, on incidents of failure to appear and new pretrial arrest. The findings suggest that while both the FTA and NCA scales predicted pretrial failure fairly well, the variation explained by the models suggests that there is much we do not understand about predicting pretrial failure to appear and new pretrial arrest, indicating the need for further research and refinement of pretrial assessment instruments.
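A predictive-validity check of this kind can be sketched as below; the logistic coefficients, the toy data, and the `fta_probability` helper are hypothetical, not the study's fitted model:

```python
# Illustrative sketch only: map a pretrial risk score to a failure
# probability via a logistic model, then summarize discrimination with a
# simple AUC. Coefficients and data are invented.
from math import exp

def fta_probability(fta_score, intercept=-2.0, beta=0.45):
    """Hypothetical logistic model: P(failure to appear) from the FTA scale (1-6)."""
    logit = intercept + beta * fta_score
    return 1 / (1 + exp(-logit))

def auc(scores, outcomes):
    """Probability that a randomly chosen failure outscores a randomly chosen
    success (ties count half) -- the concordance interpretation of AUC."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy validation data: FTA scale scores with observed failure-to-appear flags.
scores   = [1, 2, 2, 3, 4, 4, 5, 6]
outcomes = [0, 0, 0, 0, 1, 0, 1, 1]
probs = [fta_probability(s) for s in scores]
print(f"P(FTA | score=6) = {fta_probability(6):.3f}, AUC = {auc(probs, outcomes):.3f}")
```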

