Engaging with educational research, alongside gaining practical experience in the classroom, is fundamental to developing the knowledge and understanding you need to teach effectively. Critiquing research evidence will continue to be important throughout your career, helping you to make informed judgements about ‘what works’ in education and to challenge the kinds of myths around teaching and learning that can take hold when research evidence is not considered. This article gives an overview of how myths are created in education, with particular emphasis on neuromyths – common misconceptions about the brain. I will describe some of the mechanisms behind the formation of myths, briefly consider the role of social media, and finish with some pointers that might help prevent myths from taking hold in education.

The nature of myths

Over the last five years, numerous studies have looked at the prevalence of myths in education. For example, Howard-Jones (2014) looked at the level of agreement with several ‘neuromythical’ statements in different countries, and concluded that, even across very different cultures, there are similarly high levels of belief in neuromyths – such as the ideas that we use only 10% of our brain, and that differences in left/right brain dominance can help explain individual differences amongst learners. The article also usefully reflects on the possible ‘seeds of confusion’ that might spark myths. The most likely scenario seems to be that myths originate from ‘uninformed interpretations of genuine scientific facts’ (Howard-Jones, 2014).

Howard-Jones (2014) goes on to attempt to explain the perpetuation of neuromyths. Firstly, he flags up cultural conditions – for example, differences in terminology and language creating a gap between neuroscience and education. A second reason is that counter-evidence might be difficult to access. Relevant evidence might appear in specialist journals and, together with the complexity of the topic, this might mask any critical signals. A third element might be that claims are simply untestable – for example, because they assume knowledge about cognitive processes, or even the brain, that is as yet unknown to us. Finally, an important factor that we can’t rule out is bias. When we evaluate and scrutinise evidence, a range of emotional, developmental and cultural biases interact with emerging myths.

The good news, though, is that there are signs that training can decrease belief in neuromyths. In a recent study, Macdonald et al. (2017) compared the prevalence of neuromyths in the USA across three groups of participants: educators, participants with high exposure to neuroscientific knowledge, and the general public. The general public endorsed the greatest number of myths, educators fewer, and the high neuroscience exposure group fewer still – although, unfortunately, even this group still endorsed around 50% of the myths. The article also suggested, however, that care must be taken in how myths are dispelled, so as not to invoke new ones. The learning styles neuromyth is described as a particular challenge to the field, as it ‘seems to be supporting effective instructional practice, but for the wrong reasons’ (Macdonald et al., 2017). It is suggested that dispelling that particular myth might inadvertently discourage diversity in instructional approaches.

In some cases, simply saying that something is a myth is enough; in other cases, it is best to combine this with more information, to prevent new myths from taking hold. A meta-analysis by Chan et al. (2017) investigated the factors underlying effective messages to counter attitudes and beliefs based on misinformation, concluding that it seems helpful not to spend too much time talking about the misconception, but instead to focus on presenting counterarguments, or even to ask the audience to generate counterarguments themselves. Perhaps a simple question such as ‘what is the best argument for not believing the following statement or study?’ could be rather revealing.

What about social media?

The double face of the digital revolution is demonstrated in recent work by Robinson-Garcia et al. (2017), in which the authors sought, in the field of dentistry, to assess the extent to which tweeting about scientific papers signified engagement with, attention to, or consumption of, the scientific literature.

They argue that ‘simplistic and naïve use of social media data risks damaging the scientific enterprise, misleading both authors and consumers of scientific literature’ (p. 16). I want to flag up some questions that years of using social media have sparked in me.

Let’s, for example, scrutinise the advent of economics papers with advanced statistical methods being cited in the education blogosphere. These papers often appear as pre-prints and deal with a range of important issues. However, as with any piece of research, there are many features that – if not studied more deeply – can lead to myths. One issue that comes to mind is whether the paper has already appeared in a peer-reviewed journal. If not, this means that no ‘peers’ have yet studied the article in detail; in general, peer-reviewed articles tend to be more rigorous and robust (although peer review is no guarantee!).

Another thing to look at might be whether it is clear how the authors operationalised complex variables in their statistical models. Often these issues boil down to the way in which the variables are measured. When we talk about measurement, many people envisage some sort of ‘thermometer’ that can easily gauge the concept in question. This is often not the case for highly complex constructs; research on both Growth Mindset and Cognitive Load Theory, for instance, relies primarily on self-report measures. Of course, this need not be a problem – both concepts can still be very useful – but I would argue that a critically engaged teacher should be aware of these things.

A challenge can also lie in the summaries of underlying data. One could almost have a day job unpicking research articles – the prior literature involved, the methodology, the data analysis and the subsequent conclusions. We often have to rely on summaries and accounts from others, and these can sometimes be subject to ‘Chinese Whispers’. When you dive in deeper, you see all sorts of surprising things, ranging from atypical definitions of concepts to selective use of data. Analyses of large-scale datasets like PISA and TIMSS, for example, certainly need to go further than the key tables reported in the media.

Keep in mind, too, that science is constantly revised and updated, and this means that one ideally looks at a whole body of literature. One article that contradicts previous literature does not nullify it, nor should it be disregarded. This, in my view, also means that we should not easily dismiss older research, purportedly because ‘cognitive science’ has shown that it was ‘wrong’. I would assert that, for many ideas over the decades, ‘cognitive science’ has provided empirical backing for some and not for others. Blanket dismissal would be inappropriate; approach ideas as they are and evaluate them as such, not through broad, sweeping generalisations.

Underneath all of this, it is useful to be aware of a very human tendency to favour novel and original findings in the research literature, which sometimes leads to ‘publication bias’. Remember that what ends up being published is often the remarkable, not the unremarkable.

Conclusion

In this article, I have tried to provide an overview of the complexities involved in studying myths and misconceptions. Of course, there is much more to say, but the key takeaways summarise the most important recommendations. Perhaps the key message for all is that we accept that no research finding will provide a ‘silver bullet’. Now go forth and factcheck my article!

KEY TAKEAWAYS
  • Try to follow up sources as much as possible. Of course, this is very time-consuming. Sometimes other people summarise research for you, but even then it is wise to remain critical. Perhaps refraining from taking too firm a position until you feel you have reviewed a fair amount of material from different sources might be a good strategy. Sorry – this is just hard work, and I understand that practitioners do not always have this time.
  • Be mindful of over-simplifications. I completely understand that providing a multitude of pages to describe the complexities of an educational phenomenon is not helpful for practitioners. However, the fact that some overcomplicate things does not mean that ‘simple is best’ either. Follow the facts and, if you do simplify, be aware of the limitations and of what the simplification leaves out.
  • Be cautious about developing policy based on new claims. Some people have suggested that we wait at least 15 years before an initial (scientific) idea should ever end up as policy, allowing us to fully study the pros and cons. Although I think this time period is too lengthy, at a minimum research findings should be accompanied by a clear scope and disclaimer with regard to claims.

Watch Christian talk more about neuromyths: youtu.be/XjevH4Lc0HI.


Christian Bokhove was a secondary maths and computer science teacher in the Netherlands for 14 years and now is Associate Professor at the University of Southampton. He specialises in research methods, mathematics education and international comparisons like PISA and TIMSS.

References

Chan M, Jones C, Jamieson K, et al. (2017) Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science 28(11): 1531–1546.
Howard-Jones P (2014) Neuroscience and education: Myths and messages. Nature Reviews Neuroscience 15(12): 817–824.
Macdonald K, Germine L, Anderson A, et al. (2017) Dispelling the myth: Training in education or neuroscience decreases but does not eliminate beliefs in neuromyths. Frontiers in Psychology 8: 1314.
Robinson-Garcia N, Costas R, Isett K, et al. (2017) The unbearable emptiness of tweeting – about journal articles. PLOS ONE 12(8).