Supporting general adult psychiatry higher trainees to develop research competencies: a training improvement project

BJPsych Open ◽  
2021 ◽  
Vol 7 (S1) ◽  
pp. S131-S132
Author(s):  
Annalie Clark ◽  
John Stevens ◽  
Sarah Abd El Sayed

Aims
Evidence shows that research-active trusts achieve better clinical outcomes for patients. Psychiatric trainees are required to develop knowledge and skills in research techniques and critical appraisal to enable them to practise evidence-based medicine and become research-active clinicians. This project aimed to evaluate and improve the support for developing research competencies available to general adult psychiatry higher trainees (HTs) in the North West of England.

Method
General adult HTs in the North West of England completed a baseline survey in November 2019 to ascertain trainees' experience of research training provision. The following interventions were implemented in response to this feedback: a trainee research handbook was produced, containing exemplar activities for developing research competencies alongside available training opportunities, supervisors and active research studies; the trainee research representative circulated research and training opportunities between November 2019 and August 2020; and research representatives held a trainee question-and-answer session in September 2020. All general adult HTs were then asked to complete an electronic survey in November 2020 to evaluate the effect of these interventions.

Result
18 general adult HTs completed the baseline survey in November 2019. 29.4% of trainees thought they received enough information on research competencies, and 88.9% wanted more written guidance. 38.9% of trainees knew who to contact about research within their NHS trust, and 33.3% were aware of current research studies. Identified challenges in meeting research competencies included lack of time, difficulty identifying a mentor and topic, and the accessibility of projects.
20 general adult HTs completed the repeat survey in November 2020. 50% of trainees wanted to be actively involved in research and 35% wanted to develop evidence-based medicine skills. A minority of trainees aimed to complete only the minimum ARCP requirements. All trainees thought the handbook was a useful resource for meeting research competencies and would recommend it to other trainees. Of the trainees who received the handbook, 94.7% thought they had received adequate support on meeting research competencies and 94.7% knew who to contact about research in their trust. 68.4% of trainees would like further written guidance on meeting research competencies. Trainees highlighted ongoing practical difficulties in engaging with research and concern about lacking the skills research requires.

Conclusion
Trainees are motivated to engage with research at various levels, not purely for ARCP purposes. Simple interventions can help trainees feel adequately supported in meeting research competencies. Further work is required to support trainee involvement in research and to improve trainees' confidence in engaging with it.

2015 ◽  
Author(s):  
Michael Barnett ◽  
Niteesh Choudhry

Today, a plethora of resources for evidence-based medicine (EBM) are available via alert services, compendia, and more. In theory, a clinician researching a topic or looking for information to guide a clinical decision should easily find the literature or synopses needed. In practice, the real challenge lies in recognizing which of the hundreds or possibly thousands of resources present the best and most reliable evidence. Moreover, evidence from research is only part of the decision calculus: the clinician, not the evidence, makes the final decision. Medical decision analysis attempts to formalize the process and reduce it to algebra, but it is difficult or impossible to represent all the components of a decision mathematically and validly, let alone do so in “real time” for individual patients. This review discusses these challenges and more, including how to ask answerable questions, understand the hierarchy of evidence-based information resources, critically appraise evidence, and apply research results to patient care. Figures show the total number of new articles in Medline from 1965 to 2012; a “4S” hierarchy of preappraised medicine; the percentage of physician and medical student respondents answering correctly or incorrectly a question about calculating the positive predictive value of a hypothetical screening test; a nomogram for Bayes’s rule; an example of nomogram use for pulmonary embolism; and a model for evidence-informed clinical decisions.
Tables list selected barriers to the implementation of EBM; Patient, Intervention, Comparison, and Outcome (PICO) framework for formulating clinical questions; guides for assessing medical texts for evidence-based features; clinically useful measures of disease frequency and statistical significance and precision; definitions of clinically useful measures of diagnostic test performance and interpretation; definitions of clinically useful measures of treatment effects from clinical trials; summary of results and derived calculations from the North American Symptomatic Carotid Endarterectomy Trial (NASCET); and selected number needed to treat values for common therapies. This review contains 6 highly rendered figures, 9 tables, and 28 references.
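Two of the measures the review covers, positive predictive value and number needed to treat, reduce to short calculations. As a minimal sketch (the figures and values below are hypothetical illustrations, not taken from the review or from NASCET):

```python
# Illustrative calculations for two measures discussed in the review.
# All input values are hypothetical examples, not data from the review.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """PPV via Bayes' rule: P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def number_needed_to_treat(control_event_rate, treated_event_rate):
    """NNT = 1 / absolute risk reduction."""
    return 1 / (control_event_rate - treated_event_rate)

# A hypothetical screening test: 90% sensitive, 95% specific, 1% prevalence.
ppv = positive_predictive_value(0.90, 0.95, 0.01)
print(f"PPV: {ppv:.1%}")  # low despite high test accuracy, because prevalence is low

# A hypothetical trial: event rate falls from 20% (control) to 15% (treated).
nnt = number_needed_to_treat(0.20, 0.15)
print(f"NNT: {nnt:.0f}")
```

The PPV example mirrors the survey question in the review's figures: even an accurate test yields a surprisingly low PPV at low prevalence, which is exactly the intuition respondents often got wrong.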





