Concordancia consensuada en metodología observacional

2021 ◽  
Vol 21 (2) ◽  
pp. 47-58
Author(s):  
Daniel Lapresa Ajamil ◽  
Alba Otero ◽  
Javier Arana ◽  
Ildefonso Álvarez ◽  
María Teresa Anguera

The reliability of datasets in observational methodology is typically tested using coefficients of agreement, correlation coefficients, or generalizability theory. Another increasingly popular method used to demonstrate the quality of data is the consensus agreement method, in which two or more observers agree on their coding decisions while creating the dataset. Although the consensus agreement method is being increasingly used in observational studies, few studies have conducted an in-depth analysis of how this qualitative procedure is approached or of how it can be optimized. In this study, in addition to presenting a practical example of the application of the consensus agreement method, we compare the results from three groups (of two, three, and four observers) to analyze performance in terms of the time required to code the data and goodness of fit with respect to an optimal dataset. No significant differences were found between the three groups for either of the variables analyzed. Prior calculation of the sample size required to detect significant differences between the groups adds strength to our conclusions regarding the efficiency of the consensus agreement method.
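To make the agreement measure concrete, the following minimal sketch (hypothetical data and function name, not the authors' code) scores a consensus-coded record against an ideal reference record as a simple percentage of agreement, the second outcome variable compared across groups:

```python
# Minimal sketch (hypothetical data): percentage of agreement between a
# consensus-coded record and an "ideal" reference record, as used to
# compare consensus groups of different sizes.

def percent_agreement(consensus: list[str], ideal: list[str]) -> float:
    """Share of coded events on which the consensus record matches the ideal record."""
    if len(consensus) != len(ideal):
        raise ValueError("Records must code the same sequence of events")
    matches = sum(c == i for c, i in zip(consensus, ideal))
    return 100.0 * matches / len(ideal)

# Hypothetical records: a consensus group's coding vs. the ideal record.
ideal_record = ["A1", "B2", "A1", "C3", "B2"]
consensus_record = ["A1", "B2", "A2", "C3", "B2"]
print(f"{percent_agreement(consensus_record, ideal_record):.1f}% agreement")  # 80.0%
```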

Retos ◽  
2020 ◽  
pp. 413-418
Author(s):  
Camila Bonjour ◽  
Diego Andres Tortajada ◽  
Gonzalo Dol ◽  
Andres Gonzalez

The objective of this research was to analyze the efficacy of attack sequences using the goalkeeper-field player substitution and its consequences for the following defensive phase in elite women's handball. Observational methodology was used, with a punctual, nomothetic, and multidimensional observational design. The sample comprised 571 attack sequences from 50 matches of the 2018-2019 EHF Women's Handball Champions League. An ad hoc observational instrument combining field format and category systems was developed, and data quality was tested through inter- and intra-observer agreement. The results highlight that the main reason for using the rule that allows the goalkeeper-field player substitution was to maintain numerical equality after a disciplinary sanction, and, to a lesser extent, to generate 7x6 numerical superiority. Efficacy was highest when the substitution was used to generate numerical superiority, while 6x6 numerical-equality situations yielded worse values in both shooting efficacy and lost balls. It is concluded that teams attacking with an empty goal assume a significant risk of quickly conceding a goal if the attack does not end in a goal, since the immediate defensive consequences are directly related to effectiveness in the attack phase.
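As an illustration of how efficacy per numerical situation could be tallied from coded attack sequences, here is a minimal sketch with hypothetical codes (the study's actual instrument and categories are not reproduced here):

```python
from collections import defaultdict

# Minimal sketch (hypothetical codes): shooting efficacy per numerical
# situation, tallied from coded attack sequences.
sequences = [
    {"situation": "7x6", "outcome": "goal"},
    {"situation": "7x6", "outcome": "miss"},
    {"situation": "6x6", "outcome": "turnover"},
    {"situation": "6x6", "outcome": "goal"},
]

counts = defaultdict(lambda: {"goal": 0, "total": 0})
for seq in sequences:
    counts[seq["situation"]]["total"] += 1
    if seq["outcome"] == "goal":
        counts[seq["situation"]]["goal"] += 1

for situation, c in counts.items():
    print(f"{situation}: {100 * c['goal'] / c['total']:.0f}% efficacy")
```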


2017 ◽  
Vol 33 (3) ◽  
pp. 450
Author(s):  
Angel Blanco-Villaseñor ◽  
Elena Escolano-Pérez

Accurate evaluation of early childhood competencies is essential for favoring optimal development, as the first years of life form the foundations for later learning and development. Nonetheless, there are still certain limitations and deficiencies related to how infant learning and development are measured. With the aim of helping to overcome some of these difficulties, in this article we describe the potential and advantages of new data analysis techniques for checking the quality of data collected through the systematic observation of infants and for assessing variability. The logical and executive activity of 48 children was observed at three ages (18, 21, and 24 months) using a nomothetic, follow-up, and multidimensional observational design.

Given the nature of the data analyzed, we provide a detailed methodological and analytical overview of generalizability theory from three perspectives linked to observational methodology: intra- and inter-observer reliability, instrument validity, and sample size estimation, with a particular focus on the participant facet. The aim was to identify the optimal number of facets and levels needed to perform a systematic observational study of very young children.

We also discuss the use of other techniques such as general and mixed linear models to analyze variability in learning and development.

Results show how the use of generalizability theory allows controlling the quality of observational data in a global structure integrating reliability, validity, and generalizability.
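As a hedged illustration of the kind of generalizability analysis described above, the sketch below simulates a one-facet persons × observers design and computes a generalizability coefficient from ANOVA variance components; the data, effect sizes, and D-study choice are invented for the example and are not the study's:

```python
import numpy as np

# Minimal sketch (simulated scores, not the study's data) of a one-facet
# G-study with a persons x observers design. Variance components come from
# the expected mean squares of a two-way ANOVA without replication; the
# generalizability coefficient is then projected for a chosen number of
# observers, as in a D-study.
rng = np.random.default_rng(42)
n_p, n_o = 48, 3                                  # e.g., 48 children, 3 observers
scores = (rng.normal(5.0, 1.0, (n_p, 1))          # person effect
          + rng.normal(0.0, 0.2, (1, n_o))        # observer effect
          + rng.normal(0.0, 0.5, (n_p, n_o)))     # interaction/error

grand = scores.mean()
ss_p = n_o * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_o = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_po = ((scores - grand) ** 2).sum() - ss_p - ss_o

ms_p = ss_p / (n_p - 1)
ms_po = ss_po / ((n_p - 1) * (n_o - 1))

var_po = ms_po                                    # person x observer + error
var_p = max((ms_p - ms_po) / n_o, 0.0)            # person (universe-score) variance

n_o_dstudy = 2                                    # D-study: how many observers?
g_coef = var_p / (var_p + var_po / n_o_dstudy)
print(f"Generalizability coefficient with {n_o_dstudy} observers: {g_coef:.2f}")
```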


2018 ◽  
Vol 42 (8) ◽  
pp. 595-612 ◽  
Author(s):  
Zhehan Jiang ◽  
Mark Raymond

Conventional methods for evaluating the utility of subscores rely on reliability and correlation coefficients. However, correlations can overlook a notable source of variability: variation in subtest means/difficulties. Brennan introduced a reliability index for score profiles based on multivariate generalizability theory, designated as G, which is sensitive to variation in subtest difficulty. However, there has been little, if any, research evaluating the properties of this index. A series of simulation experiments, as well as analyses of real data, were conducted to investigate G under various conditions of subtest reliability, subtest correlations, and variability in subtest means. Three pilot studies evaluated G in the context of a single group of examinees. Results of the pilots indicated that G indices were typically low; across the 108 experimental conditions, G ranged from .23 to .86, with an overall mean of .63. The findings were consistent with previous research, indicating that subscores often do not have interpretive value. Importantly, there were many conditions for which the correlation-based method known as proportion reduction in mean-square error (PRMSE; Haberman, 2006) indicated that subscores were worth reporting, but for which values of G fell into the .50s, .60s, and .70s. The main study investigated G within the context of score profiles for examinee subgroups. Again, not only were G indices generally low, but G was also found to be sensitive to subgroup differences when PRMSE is not. Analyses of real data and the subsequent discussion address how G can supplement PRMSE for characterizing the quality of subscores.
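For context, Haberman's added-value criterion can be stated in a standard form (the notation below is illustrative and not copied from this paper): a subscore has added value when the true subscore is predicted better from the observed subscore than from the observed total score.

```latex
% Standard statement of Haberman's PRMSE criterion (illustrative notation):
% S is the observed subscore, X the observed total, \tau_s the true subscore.
\mathrm{PRMSE}_S = \rho^2(\tau_s, S), \qquad
\mathrm{PRMSE}_X = \rho^2(\tau_s, X), \qquad
\text{report } S \iff \mathrm{PRMSE}_S > \mathrm{PRMSE}_X .
```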


1978 ◽  
Vol 22 ◽  
pp. 143-150 ◽  
Author(s):  
J. W. Edmonds ◽  
W. W. Henslee

In most routine chemical analyses, a trade-off is made between quality of data and time required to obtain and analyze the data. In X-ray powder diffraction, identifications are normally made by Debye-Scherrer film methods or by medium speed (1-2° 2θ/min.) diffractometry, with or without an internal standard. With one notable exception, the inherent precision of the Guinier camera geometry has been virtually ignored as too expensive or time consuming for routine work, or relegated to special projects. The accessibility of microcomputers, however, not only makes it economically and realistically feasible to automate the equipment previously used for special Guinier projects, but to extend the overall precision of observed d-spacings into the area of routine analysis. Search-match procedures benefit from the increased data precision to such an extent that they can be used routinely to propose the identity of major pure phases and release the analyst to concentrate on minor components and impure phases which may be subject to lattice constant shifts.
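The d-spacings whose precision is at issue follow from Bragg's law; the sketch below (hypothetical peak positions, Cu K-alpha1 wavelength assumed for illustration) converts observed 2θ angles to d-values:

```python
import math

# Minimal sketch: convert observed 2-theta angles (degrees) to d-spacings
# via Bragg's law, n*lambda = 2*d*sin(theta). The Cu K-alpha1 wavelength
# is an assumption for the example, as are the peak positions.
WAVELENGTH = 1.5406  # angstroms, Cu K-alpha1

def d_spacing(two_theta_deg: float, wavelength: float = WAVELENGTH) -> float:
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

for tt in (20.85, 26.64, 36.54):     # hypothetical observed peaks
    print(f"2-theta = {tt:6.2f} deg  ->  d = {d_spacing(tt):.4f} A")
```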


2020 ◽  
Author(s):  
Shuangyang Dai ◽  
Hong Xu ◽  
Beibei Li ◽  
Jingao Zhang ◽  
Xiaobin Zhou

Abstract. Background: Observational studies play an important role in urology research, but few studies have paid attention to the statistical reporting quality of observational studies. The purpose of this study was to investigate the frequency of statistical methods used and to evaluate the reporting quality of published observational studies in urology. Methods: Five urology journals were selected according to their 5-year impact factor. A systematic literature search was performed in PubMed for relevant articles. The quality of statistical reporting was assessed according to predefined assessment criteria. Results: A total of 193 articles were included in this study. The mean statistical reporting score of the included articles was 0.42 (SD=0.15), i.e., 42% of the total score. The mandatory items with a reporting rate above 50% were: alpha level (n=122, 65.2%), confidence intervals (n=134, 69.4%), name of statistical package (n=158, 84.5%), and exact P-values (n=161, 86.1%). The items with a reporting rate below 50% were: outliers (n=2, 1.0%) and sample size (n=13, 6.7%). For multivariable regression models (linear, logistic, and Cox), the reporting rates were: variable coding (n=27, 40.7%), validation analysis of assumptions (n=58, 40.3%), interaction tests (n=43, 30.0%), collinearity diagnostics (n=5, 3.5%), and goodness-of-fit tests (n=6, 5.9%). More than 7 authors (OR=2.06, 95%CI=1.04-4.08) and the participation of a statistician or epidemiologist (OR=1.73, 95%CI=1.18-3.39) were associated with superior reporting quality. Conclusion: The statistical reporting quality of published observational studies in 5 high-impact urology journals was alarming. We encourage researchers to collaborate with statisticians or epidemiologists. Authors, reviewers, and editors should increase their knowledge of statistical methods, especially new and complex methods.
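A minimal sketch of the scoring idea follows; the checklist items and equal weighting here are assumptions for illustration, not the authors' published instrument. Each article is scored against a binary checklist and expressed as a proportion of the maximum score, as with the 0.42 mean reported above:

```python
# Minimal sketch (hypothetical checklist, equal weights assumed): score an
# article's statistical reporting as the proportion of checklist items met.
CHECKLIST = ["alpha level", "confidence intervals", "statistical package",
             "exact P-values", "outliers", "sample size"]

def reporting_score(reported: set[str]) -> float:
    return sum(item in reported for item in CHECKLIST) / len(CHECKLIST)

article = {"alpha level", "exact P-values", "statistical package"}
print(f"Reporting score: {reporting_score(article):.2f}")  # 0.50
```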


2013 ◽  
Vol 8 (3-4) ◽  
pp. 479-486
Author(s):  
C. Bonhomme ◽  
G. Petrucci

Nowadays, high-resolution geographical data are widely available for research and operational purposes. The inclusion of these data into hydrological models may suggest a direct and clear improvement in their performance. Different configurations and model structures, including an increasing quantity of geographical information, are tested using the widely used Stormwater Management Model 5 on a 2.5 km² pilot catchment (located in Sucy-en-Brie, close to Paris, France). The Nash-Sutcliffe criterion is used to estimate the goodness of fit between model simulations and available measurements. While including some basic spatial information on land use clearly improves the performance of the uncalibrated model, the gain in performance is less obvious as the user continues to refine the geographical information on the catchment. Moreover, model predictions are comparable between a model calibrated with an efficient calibration procedure and a more physically based approach including a fine spatial description and no calibration. Finally, the quality of the data used for calibration and validation appears to be a key factor in obtaining a good fit between measurements and model predictions.
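The Nash-Sutcliffe criterion mentioned above has a standard definition; the sketch below computes it for hypothetical observed and simulated series (a value of 1 is a perfect fit, and values at or below 0 mean the model does no better than the mean of the observations):

```python
import numpy as np

# Minimal sketch: Nash-Sutcliffe efficiency, NSE = 1 - SSE / SS_about_mean,
# computed for hypothetical observed and simulated flow series.
def nash_sutcliffe(observed, simulated) -> float:
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([0.8, 1.2, 2.5, 1.9, 0.7])   # hypothetical measurements
sim = np.array([0.9, 1.0, 2.2, 2.1, 0.8])   # hypothetical model output
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
```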


Author(s):  
B. L. Armbruster ◽  
B. Kraus ◽  
M. Pan

One goal in electron microscopy of biological specimens is to improve the quality of data to equal the resolution capabilities of modern transmission electron microscopes. Radiation damage and beam-induced movement caused by charging of the sample, low image contrast at high resolution, and sensitivity to external vibration and drift in side-entry specimen holders limit the effective resolution one can achieve. Several methods have been developed to address these limitations: cryomethods are widely employed to preserve and stabilize specimens against some of the adverse effects of the vacuum and electron-beam irradiation, spot-scan imaging reduces charging and associated beam-induced movement, and energy-filtered imaging removes the "fog" caused by inelastic scattering of electrons, which is particularly pronounced in thick specimens. Although most cryoholders can easily achieve a 3.4 Å resolution specification, information perpendicular to the goniometer axis may be degraded by vibration. Absolute drift after mechanical and thermal equilibration, as well as drift after movement of the holder, may cause loss of resolution in any direction.


2013 ◽  
Vol 11 (1) ◽  
pp. 8-13
Author(s):  
V. Behar ◽  
V. Bogdanova

Abstract. In this paper, the use of a set of nonlinear edge-preserving filters is proposed as a pre-processing stage to improve the quality of hyperspectral images before object detection. The capability of each nonlinear filter to improve images corrupted by spatially and spectrally correlated Gaussian noise is evaluated in terms of the average improvement factor in the peak signal-to-noise ratio (IPSNR), estimated at the filter output. The simulation results demonstrate that this pre-processing procedure is efficient only when the spatial and spectral correlation coefficients of the noise do not exceed 0.6.
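A minimal sketch of the evaluation metric follows (synthetic data; expressing the improvement factor as a dB difference in PSNR is an assumption, since the abstract does not spell out the formula):

```python
import numpy as np

# Minimal sketch: PSNR of a degraded image against a clean reference, and
# the improvement in PSNR after filtering, averaged per band in practice.
def psnr(reference: np.ndarray, image: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(float) - image.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
clean = rng.integers(0, 256, (64, 64)).astype(float)     # hypothetical band
noisy = clean + rng.normal(0, 20, clean.shape)           # correlated noise stand-in
filtered = clean + rng.normal(0, 8, clean.shape)         # stand-in for filter output

improvement = psnr(clean, filtered) - psnr(clean, noisy)  # IPSNR, in dB here
print(f"IPSNR = {improvement:.2f} dB")
```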


Author(s):  
Nur Maimun ◽  
Jihan Natassa ◽  
Wen Via Trisna ◽  
Yeye Supriatin

Accuracy in assigning diagnosis codes is an important matter for medical records officers, and data quality is the most important concern in the health information management of medical records. This study aims to assess coder competency with respect to the accuracy and precision of ICD-10 coding at X Hospital in Pekanbaru. The study used a qualitative method with a case-study design and five informants. The results show that medical personnel (doctors) had never received training in coding; that doctors' handwriting was hard to read; that diagnosis or procedure codes were sometimes assigned incorrectly; that doctors used customary, non-standard abbreviations; and that some officers did not understand nomenclature or have a command of anatomy and pathology, although facilities and infrastructure supported accurate and precise coding. Coding errors occur because of human error. Accuracy and precision in coding strongly influence INA-CBGs costs; the medical committee handled most of the work in severity level III cases, while the medical records unit played a role in monitoring and evaluating coding implementation. If a medical resume is unclear, the case-mix team checks the medical record file to verify the diagnosis and coding. Keywords: coder competency, accuracy and precision of coding, ICD 10

