Controlling for Placebo Effects in Computerized Cognitive Training Studies With Healthy Older Adults From 2016-2018: Systematic Review (Preprint)

2019 ◽  
Author(s):  
Alexander Masurovsky

BACKGROUND Computerized cognitive training has been proposed as a potential solution to age-related cognitive decline. However, published findings from evaluation studies of cognitive training games, including meta-analyses and systematic reviews, provide evidence both for and against transfer from trained tasks to untrained cognitive abilities. There is still no consensus on this issue in the scientific community. Some researchers have proposed that the number of results supporting the efficacy of cognitive training may be inflated by placebo effects, and that placebo effects need to be better controlled by using an active control and by measuring participant expectations for improvement on outcome measures.

OBJECTIVE This review examined placebo control methodology in recent evaluation studies of computerized cognitive training programs with older adult participants, looking specifically for the use of an active control and for the measurement of expectations.

METHODS Data were extracted from PubMed. Evaluation studies of computerized cognitive training with older adult participants (age ≥50 years) published between 2016 and 2018 were included. The Methods sections of the studies were searched for (1) control type (active or passive) and subtype (active: active-ingredient or similar-form; passive: no-contact or passive-task); (2) whether expectations were measured, how they were measured, and whether they were used in the analysis; and (3) whether the researchers acknowledged a lack of active control or a lack of expectation measurement as limitations (where appropriate).

RESULTS Of the 19 eligible studies, 4 (21%) measured expectations and 9 (47%) included an active control condition, all of which were of the similar-form type. The majority of the studies (10/19, 53%) used only a passive control. Of the 9 studies that found results supporting the efficacy of cognitive training, 5 reported far transfer effects. Regarding limitations, due to practical considerations the search was restricted to one source (PubMed) and to search results only, and the search terms may have been too restrictive. Recruitment methods were not analyzed, although this aspect of a study may play a critical role in systematically forming groups with different expectations for improvement. The population was limited to healthy older adults, while evaluation studies also cover other populations and cognitive training types, which may exhibit better or worse placebo control than the studies examined in this review.

CONCLUSIONS Poor placebo control was present in 47% (9/19) of the reviewed studies, yet these studies nonetheless reported results supporting the effectiveness of cognitive training programs. Of these positive results, 5 were for far transfer effects, which form the basis for broad claims by cognitive training game makers about the scientific validity of their products. For a minimum level of placebo control, future evaluation studies should use a similar-form active control and administer a questionnaire to participants at the end of the training period about their own perceptions of improvement. Researchers are encouraged to develop further methods for the valid measurement of expectations at other time points during training.

10.2196/14030 ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. e14030 ◽  
Author(s):  
Alexander Masurovsky



2017 ◽  
Vol 5 ◽  
pp. 1032-1035
Author(s):  
Antonia Yaneva ◽  
Nonka Mateva

Cognitive interventions, especially cognitive training, may improve cognitive functions in healthy older adults. Computerized cognitive training platforms offer several advantages over traditional programs for cognitive training and stimulation. The focus of this article is the methodology of studies that apply a particular online training program. We examine the effectiveness reported in several studies of cognitive training in healthy elderly people, evaluating the reported outcomes and potential bias, and asking which factors determine, influence, or contribute to positive or negative results. The post-intervention scores demonstrate that computerized cognitive training may enhance some cognitive functions and overall cognitive status, but additional research is needed to prove its effectiveness.


Medicine ◽  
2018 ◽  
Vol 97 (45) ◽  
pp. e13007 ◽  
Author(s):  
Goo Joo Lee ◽  
Heui Je Bang ◽  
Kyoung Moo Lee ◽  
Hyun Ho Kong ◽  
Hyeun Suk Seo ◽  
...  

2018 ◽  
Author(s):  
Giovanni Sala ◽  
N Deniz Aksayli ◽  
K Semir Tatlidil ◽  
Tomoko Tatsumi ◽  
Yasuyuki Gondo ◽  
...  

Theory building in science requires replication and integration of findings regarding a particular research question. Second-order meta-analysis (i.e., a meta-analysis of meta-analyses) offers a powerful tool for achieving this aim, and we use this technique to illuminate the controversial field of cognitive training. Recent replication attempts and large meta-analytic investigations have shown that the benefits of cognitive-training programs hardly go beyond the trained task and similar tasks. However, it is yet to be established whether the effects differ across cognitive-training programs and populations (children, adults, and older adults). We addressed this issue by using second-order meta-analysis. In Models 1 (k = 99) and 2 (k = 119), we investigated the impact of working-memory training on near-transfer (i.e., memory) and far-transfer (e.g., reasoning, speed, and language) measures, respectively, and whether it is mediated by the type of population. Model 3 (k = 233) extended Model 2 by adding six meta-analyses assessing the far-transfer effects of other cognitive-training programs (video-games, music, chess, and exergames). Model 1 showed that working-memory training does induce near transfer, and that the size of this effect is moderated by the type of population. By contrast, Models 2 and 3 highlighted that far-transfer effects are small or null. Crucially, when placebo effects and publication bias were controlled for, the overall effect size and true variance equaled zero. That is, no impact on far-transfer measures was observed regardless of the type of population and cognitive-training program. The lack of generalization of skills acquired by training is thus an invariant of human cognition.
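The second-order aggregation described above can be sketched, under simplifying fixed-effect assumptions, as an inverse-variance weighted mean of first-order meta-analytic effect sizes. The function and the effect-size and standard-error values below are illustrative placeholders, not values from Sala et al.:

```python
# Minimal sketch of a second-order meta-analytic aggregation: a fixed-effect,
# inverse-variance weighted mean of first-order meta-analytic estimates.

def second_order_meta(effects, ses):
    """Pool first-order meta-analytic effect sizes (e.g., Hedges' g)
    using their standard errors; returns (pooled effect, pooled SE)."""
    weights = [1.0 / se**2 for se in ses]            # inverse-variance weights
    pooled = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    se_pooled = (1.0 / sum(weights)) ** 0.5          # SE of the pooled estimate
    return pooled, se_pooled

# Hypothetical far-transfer estimates from three first-order meta-analyses
g, se = second_order_meta([0.10, 0.02, -0.01], [0.05, 0.04, 0.06])
```

A full second-order model such as the one in the abstract would additionally estimate true (between-meta-analysis) variance and adjust for publication bias; this sketch shows only the core weighting step.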


2019 ◽  
Vol 4 (3) ◽  
pp. 258-273
Author(s):  
Sheida Rabipour ◽  
Cassandra Morrison ◽  
Jessica Crompton ◽  
Marcelo Petrucelli ◽  
Murillo de Oliveira Gonçalves Germano ◽  
...  

2019 ◽  
Vol 5 (1) ◽  
Author(s):  
Giovanni Sala ◽  
N. Deniz Aksayli ◽  
K. Semir Tatlidil ◽  
Tomoko Tatsumi ◽  
Yasuyuki Gondo ◽  
...  


