Introduction
Cognitive processes such as working memory, processing speed, attention, and language functioning all decline during healthy ageing (Reuter-Lorenz et al., 2021; Salthouse, 2010; Segaert et al., 2018). As life expectancy in developed countries continues to increase (Roser et al., 2013), mitigating age-related cognitive decline has become an increasingly popular field of research. In the present study, we examined whether computerised cognitive training improves performance across multiple domains of cognition.
A recently popular method of delivering cognitive training has been to use commercially available brain training programmes. Applications such as Lumosity (Lumosity, 2023), Peak (Peak, 2023) and BrainHQ (BrainHQ, 2023) are commercially advertised as training programmes that will improve cognitive ability and delay cognitive decline. These applications are easy to use, relatively affordable, adaptive (increasing in difficulty with improved performance, which is key for cognitive training programmes to work; Brehmer et al., 2012), and include training games that cover a variety of cognitive processes such as short-term memory, language, attention, and processing speed.
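How commercial applications implement adaptivity is proprietary; purely for illustration, a minimal sketch of one common generic scheme, a one-up/one-down staircase that keeps difficulty near the edge of the player's ability, might look as follows (the function name and parameters are our own, not any programme's API):

```python
def adjust_level(level, correct, step=1, min_level=1):
    """One-up/one-down staircase: raise the difficulty level after a
    correct response, lower it (not below the minimum) after an error."""
    return level + step if correct else max(min_level, level - step)

# Example run: difficulty drifts toward the point where the player
# succeeds roughly half of the time.
level = 3
for correct in [True, True, False, True, False, False]:
    level = adjust_level(level, correct)
    print("next level:", level)
```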
Support for the effectiveness of brain training programmes in healthy older populations is mixed. There is evidence from reviews and meta-analyses that computerised cognitive training or brain training leads to small but significant improvements in skills such as working memory, processing speed, and visuospatial skills in healthy older adults (Bonnechere et al., 2020; Kueider et al., 2012; Tetlow & Edwards, 2017). Conversely, other papers report that efficacy varies across cognitive domains and can be affected by design choices (Lampit et al., 2014), and a recent meta-analysis found no convincing improvement after accounting for publication bias (Nguyen et al., 2022). Older adults have higher expectations of brain training compared to younger adults (Rabipour & Davidson, 2015), and they could arguably benefit most from such programmes, if effective. Whether brain training programmes lead to tangible improvements in cognitive abilities in healthy older adults therefore warrants further investigation.
Some of the inconsistencies found in cognitive training research more broadly can be attributed to methodological differences (Green et al., 2014; Noack et al., 2014; Simons et al., 2016). Sample sizes vary substantially and are often limited; in a 2014 review of transfer effects in cognitive training studies, 50% of studies had fewer than 20 participants in each group, and 90% had fewer than 45 in each group (Noack et al., 2014). Training duration is also often limited; 50% of studies reported 8 h and 20 min of training or less, with the majority (90%) reporting less than 20 h in total (Noack et al., 2014). Another concern is the size and content of the test battery (Green et al., 2014). Many studies, especially early ones when cognitive training was in its infancy, used a small test battery (i.e., one test per cognitive function) to assess cognitive outcome measures. However, to assess valid training benefits, the outcome measures need to be chosen such that they assess changes across the construct rather than the individual tasks. For example, executive function would ideally not be assessed by a single measure: executive function is made up of smaller subprocesses (inhibition, shifting and updating; Sandberg et al., 2014), so one outcome measure that focuses on one of those processes is not enough to encompass executive function as a whole. Moreover, if cognitive training includes a specific task that trains, for example, working memory (e.g., an n-back task), then its true benefits can only be assessed through performance on a different task which measures skills within this domain (i.e., a task which also assesses working memory but is not an n-back task; see the illustrative sketch below), to rule out that improvements are mere practice effects.

A final consideration is the choice of control group (Simons et al., 2016). The gold standard is to use an active control group that mimics the intervention as closely as possible, while leaving out the 'active ingredient' of the training. However, the very nature of cognitive training programmes makes this difficult. The type of control group in published studies therefore varies, often including passive control groups, and not always accounting for placebo effects, motivation, or cognitive demands (Simons et al., 2016). Active control groups can be divided further into 'active-ingredient' controls and 'similar-form' controls (Masurovsky, 2020). 'Active-ingredient' control groups are identical in every aspect apart from the 'active' ingredient, but these are difficult to implement and in practice are rarely used. For example, Brehmer et al. (2012) tested whether the adaptivity of training was key to improvements in working memory: the control group's training was identical to the intervention, but the difficulty remained the same throughout, so the 'active ingredient', and the only thing that differed, was adaptive difficulty. 'Similar-form' active controls are much more common, mimicking aspects of the training but differing in more than one way, such as comparing computerised cognitive training to video games that are not designed to train cognitive domains (Ballesteros et al., 2017). 'Similar-form' control groups are still considerably more suitable than passive or no-contact control groups (Masurovsky, 2020).
We note that among the above set of issues, a key concern, but one most often overlooked, is the need to establish evidence of transfer effects (the benefits of the training 'transferring' to other, untrained, cognitive tasks), as opposed to practice effects (improvements on the training, or same, tasks themselves). Transfer effects can be categorised by how similar they are to the trained cognitive domain (Sala et al., 2019). Near transfer refers to skills generalising to similar domains (e.g., working memory training transferring to other, related but untrained, working memory tasks), while far transfer relies on the cognitive domain being weakly related, or not related at all, to the trained domain (e.g., working memory training transferring to language or executive control benefits; Sala et al., 2019). The more shared features there are between domains, the nearer the transfer effects (Sala et al., 2019). Of course, the ultimate aim of brain training programmes is that training of specific cognitive processes leads to improvements across cognitive domains (Stojanoski et al., 2018). There is some evidence that brain training can lead to transfer effects (McDougall & House, 2012); however, there are also cases where no transfer benefits are found at all (Kable et al., 2017; Stojanoski et al., 2018). Even when papers report significant positive effects of brain training programmes on cognition in healthy older populations, the effects are often driven by improvements on very near transfer tasks (Lee et al., 2020), and little to no evidence of far transfer is established. Furthermore, a recent meta-analysis of brain training randomised controlled trials with older adults found small but significant transfer to some cognitive domains; however, most effects were no longer significant once publication bias was taken into account (Nguyen et al., 2022). There are also cases where previously reported effects have perhaps been exaggerated. Brain training research sometimes describes improvements in trained effects (improvement in performance within the programme) and reports these as an improvement in cognitive ability (Bonnechere et al., 2021). Instead, these are in fact practice effects and do not necessarily entail improvements in cognitive function, since transfer effects (near or far) were not established, or were not even assessed. Transfer effects are essential if a training programme is going to be effective and wide-reaching, especially in ageing populations, but concrete evidence for them is often lacking.
Due to these inconsistencies and the controversy surrounding brain training programmes and their effects, there is a need for robust and rigorous research to assess their efficacy. An extensive review paper has given recommendations for how research into brain training programmes should be conducted and published (Simons et al., 2016). The researchers recommended a large sample size with random allocation to groups and blinding of conditions if possible. An appropriate active control group should be utilised, meaning a control group that correctly mimics the level of engagement of the intervention, but that theoretically will not result in improved cognitive performance. This allows placebo effects to be controlled for, and any effects to be attributed to the 'active' ingredient of the training programme (Simons et al., 2016). Furthermore, interventions need to control for the expectations and motivations of both groups. Finally, the researchers recommend using appropriate outcome measures and a test battery using multiple tasks to measure each construct. Our study incorporated each of these key recommendations.
To assess the possible cognitive benefits of the training, we measured cognition across a wide range of domains. Among various possible cognitive functions of interest, working memory stands out as a commonly reported function. This is not only due to its consistent decline with age (Salthouse, 2010) but also because it serves as a foundation for many other cognitive abilities. Working memory training has shown convincing improvements in memory skills in older adults in recent years (Karbach & Verhaeghen, 2014). Another cognitive skill that exhibits consistent decline with age is processing speed, which has been effectively trained in older adults: the well-known ACTIVE study demonstrated significant and sustained improvements in processing speed over a two-year (Ball et al., 2002) and ten-year (Rebok et al., 2014) period. Although findings on attention skills are not always consistent, attention skills do undergo changes with age (Veríssimo et al., 2022), and deficits in attention can impact daily life (Glisky, 2007), making it a worthwhile line of enquiry. Finally, language problems, specifically word finding difficulties, increase with age (Maylor, 1990; Segaert et al., 2018) and are commonly reported as noticeable deficits by older adults. Assessments of language function are not often included in brain training research, but due to the relevance of language abilities to ageing, we believed it would be interesting to include a language assessment in the present study.
In sum, cognitive training is an important field of research that needs methodologically sound experiments to assess whether brain training programmes are effective in healthy older adult populations. The aim of the current study was to do just that: to assess the efficacy of a commercially available adaptive brain training programme (Peak) for improving function in a range of cognitive domains, using a randomised controlled study with healthy older adults.
Popular brain training applications including Lumosity, BrainHQ, Elevate, and Peak have many similarities: they aim to improve cognition broadly by training specific cognitive tasks in domains like attention and memory. These applications use adaptive training, track user performance and scores over time, and some (including Peak) compare scores to those of other users or age groups. It can therefore be difficult to choose one over another. We chose Peak because it includes games/exercises that also target emotional capacity and language skills (Lumosity and BrainHQ do not), and because, compared to its counterparts, it is relatively under-researched; to our knowledge, Peak has not been used as a cognitive training programme in a randomised controlled trial with an active control group.
We aimed to include a larger sample size than has been used in many previous cognitive training studies (Noack et al., 2014) and an appropriate active control group. We assessed cognitive functions known to decline with healthy ageing and used tasks that are commonly used in ageing research. These included working memory (Forward Digit Span task and visual N-back task), processing speed (Choice Reaction Time task and Letter Comparison task), attention (Attention Network Task) and language functioning (tip-of-the-tongue task). We hypothesised that we would find significant improvements within the training games (practice effects) for our intervention group. Whether we would find transfer effects from the brain training to other cognitive abilities was uncertain, though we anticipated any transfer effects would be to similar cognitive tasks (near transfer) rather than to dissimilar tasks (far transfer).
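To make the logic of this comparison concrete, the sketch below is purely illustrative (it is not the study's actual analysis code): it simulates pre/post scores for an intervention and a control group and fits a random-intercept model, in which a genuine transfer effect would surface as a session-by-group interaction over and above the shared test-retest gain. The group size, means, and noise levels are invented values.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulate 80 participants per group (an arbitrary, illustrative size),
# each tested pre and post. Both groups receive the same retest gain,
# i.e. no true transfer effect is built in.
rows = []
for group in ("intervention", "control"):
    for pid in range(80):
        baseline = rng.normal(100, 15)
        for session in ("pre", "post"):
            retest_gain = 5 if session == "post" else 0
            rows.append({"id": f"{group}-{pid}", "group": group,
                         "session": session,
                         "score": baseline + retest_gain + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Random-intercept model: a transfer effect would appear as a
# significant session:group interaction term.
fit = smf.mixedlm("score ~ session * group", df, groups=df["id"]).fit()
print(fit.summary())
```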
Discussion
This study explored whether cognitive training using Peak (a commercially available but relatively under-researched adaptive brain training programme) results in cognitive improvements in a sample of older adults. We designed the study to rigorously assess Peak: we used a randomised controlled intervention design with a sufficiently large sample of healthy older adults and a blinded active control group, and assessed the potential cognitive benefits using a comprehensive test battery (with multiple outcome measures per cognitive function, across different cognitive domains). In line with our hypothesis, we found practice effects in the intervention group, specifically observing improvements in the Peak training game scores. However, we found no evidence of transfer to untrained tasks. While a subset of previous research has suggested promising effects of brain training programmes (Anderson et al., 2013; Meltzer et al., 2023; Savulich et al., 2019), the literature is mixed, and many other studies, alongside our own, find little to no transfer effects (Guye & von Bastian, 2017; Stojanoski et al., 2018).
Our data demonstrated that participants in the intervention condition significantly improved their game scores for all seven of Peak's categories: Memory, Problem Solving, Language, Mental Agility, Focus, Emotion, and Coordination. Such practice effects are shown regularly in brain training studies, so this was not unexpected. We also found some test-retest effects, where both the intervention and active control groups improved in performance on our cognitive outcome measures. Test-retest effects (which are well documented in the literature; Scharfen et al., 2018), as well as improvements within the training programme, are unlikely to indicate true improvements in cognitive abilities as a result of the intervention. Instead, they merely reflect learning to perform the training tasks themselves.
The key aim of the present study was to assess whether brain training leads to transferable benefits for wider cognitive abilities. We assessed cognitive performance pre- and post-intervention using multiple measures of attention, working memory, processing speed, and language functioning. We did not find any evidence that brain training significantly improved cognitive performance in the intervention group compared to the active control. As mentioned in the introduction, transfer effects are few and far between in the existing brain training literature, and it is possible that 'transfer effects' in published research are sometimes misinterpreted practice effects. For example, one recent study reported transfer effects for a composite of memory measures; however, these were driven solely by improvements in N-back training (Lee et al., 2020). Upon closer examination of Lee et al. (2020), the cognitive training programme administered in the intervention group (BrainHQ) involved an N-back style game, and therefore any suggested transfer is arguably due to a practice effect.
It is important in this field to closely scrutinise previous research, as the terms 'practice effects' and 'transfer effects' are sometimes not clearly defined or fully explained. For example, one recent study analysed retrospective data from Peak and suggested that, in a sample of 12,000 users, processing speed increased after 100 sessions of Peak training (Bonnechere et al., 2021). While this finding is impressive at face value, the authors' measure of processing speed was performance on the trained games, and should thus be interpreted as a practice effect. Our data show that practice benefits do indeed take place during brain training, but, in the present study at least, these do not transfer to other measures of the same cognitive constructs, or to different cognitive constructs.
The only session by condition interaction we found was for the 6-letter condition of the Letter Comparison task, where reaction times improved from pre- to post-intervention in the active control group relative to the intervention group. There was a similar trend for the 3-letter condition. These findings are slightly unexpected, but a control group improving more than a cognitive training group has been reported before (Hardy et al., 2015; van Muijden et al., 2012). Further, the effect is small: the average RT in the intervention group increased by 40 ms, while that of the control group decreased by 32 ms. We believe this interaction is a spurious one, as the games the control group played do not have any features that may have indirectly trained processing speed (e.g., time limits), though we cannot rule out the possibility that some aspect of the control condition may have had this effect.
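As a toy illustration of this pattern (simulated values loosely matching the reported means, not our data; the group size and spread below are invented), change scores of roughly +40 ms and -32 ms can be probed with a simple test on post-minus-pre differences:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical post-minus-pre RT changes (ms): intervention ~ +40 ms
# (slower), control ~ -32 ms (faster). Purely illustrative numbers.
change_intervention = rng.normal(40, 80, size=80)
change_control = rng.normal(-32, 80, size=80)

# Welch's t-test on change scores: one simple post hoc probe of a
# session-by-condition interaction.
t, p = stats.ttest_ind(change_intervention, change_control, equal_var=False)
print(f"mean change difference: "
      f"{change_intervention.mean() - change_control.mean():.1f} ms, "
      f"t = {t:.2f}, p = {p:.4g}")
```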
There are potential issues with our choice of active control: it is a similar-form control rather than an active-ingredient control (Masurovsky, 2020). However, our active control group, who played computer games on a free smartphone application, experienced similar novelty and variety of tasks. We found no significant differences between the intervention and active control groups in terms of motivation or enjoyment, which is important as research has shown these factors can affect intervention success (Green & Bavelier, 2008). Furthermore, there were no significant differences between groups in how much time participants trained for. This shows that the active control group was a suitable match for the experimental condition: the groups were comparable on these important factors, but the control training did not include the brain training element. Note that our study design is comparable to a cognitive training study that used Lumosity to assess improvements in executive function in younger adults (Kable et al., 2017). Similar to our study, they used an active control group that was matched to the intervention group in terms of engagement, motivation, and novelty (Kable et al., 2017). In line with our findings, they found practice effects but no evidence of transfer effects from the brain training programme to any cognitive outcome measures. We have thus corroborated the findings of this study and extended the research to an older adult population.
Although our data suggest there are no direct benefits of brain training on cognitive performance in the older adult population, there may be indirect benefits. There is rarely a negative impact of using these programmes, and a belief that these applications work might lead to improvements in wellbeing, which is itself helpful. For example, worries about cognitive health have been associated with poorer psychological wellbeing (Sutton et al., 2022) and have even been linked with poorer cognitive performance (Caughie et al., 2021). Therefore, even if these applications do not directly improve general cognitive ability, reducing worry about cognitive decline in older adults would still be beneficial. This is a possible avenue for future research.
Limitations of the present study include that the sample consisted of mainly White participants, which limits the generalisability of the findings. A strength of our sample, however, is that it was larger than those of 90% of previous cognitive training studies (Noack et al., 2014). While our intervention was adaptive, i.e. it increased in difficulty as participant performance improved, it would have been beneficial to include an explicit measure of whether the intervention sufficiently challenged participants, or whether there were differences between conditions, as per Lövdén et al.'s (2010) framework of cognitive plasticity. We did not include a passive control group, which would have allowed direct comparison of training effects without revealing to the active control group what condition they were in. Timescales and study design did not allow for this in the present study, but it could be beneficial in future research. We acknowledge that only one language functioning task was included in the methodology; however, language functioning is time-consuming to measure. Including multiple measures of language in future research (such as measures of comprehension or sentence production; e.g., Fernandes et al., 2024) would be valuable. Finally, one could argue that the flexibility participants were afforded in training duration is a limitation of our study design. Indeed, there were large differences in how long participants trained for; some maintained the minimum 15 min a day, but many went above and beyond. While we designed this intervention to be controlled and robust, we also wanted it to be enjoyable and not too restrictive for the participants. A small pilot study found that limiting training to 15 min a day was difficult due to participants' enjoyment of the training, so we removed this requirement for the study presented here. Importantly, training duration did not significantly differ between the intervention and control groups, so this is unlikely to have impacted our findings.
In sum, we have shown that making a clear distinction between transfer and practice effects in the cognitive training literature is important. A recent meta-analysis concluded that at present there is no convincing empirical evidence to suggest brain training programmes lead to tangible transfer effects in older adults (Nguyen et al., 2022). Our data are in line with this and suggest that commercial brain training leads to practice effects, without convincing evidence of transfer to cognitive abilities beyond the practiced tasks. In short, we offer a rigorous investigation of a brain training product (Peak) that had not been studied extensively, and in our sample of healthy older adults, practice makes perfect, but it does not transfer to wider cognitive benefits.