How to Spot Bad Coaching Advice Online
This article was co-written by James Adams and John ‘Hedge’ Hall.
James is a Sports Scientist and former Parkour Coach. He runs the Parkour Clinic.
Hedge is the Deputy CEO of Parkour Earth and the Executive Director of Access Parkour.
All of our articles represent the opinions of the authors only and do not necessarily reflect the position of Parkour Earth.
As sports coaches, we are endlessly bombarded with different sources of information. Finding our way through this stream of opinions and assertions is difficult, and it’s not uncommon for coaches to be swayed by the best marketing arguments instead of the most valid ones. Scientific literature is often dense and difficult to understand, so we often absorb it instead via a pop-sci article written by a third party. Which articles are relevant to our needs? Which are useful and which are not? Sometimes an article is flawed through error or unfamiliarity, and sometimes it’s just a bad take by a good salesman. How do we tell which is which?
Within this article we want to explore evidence-based coaching, its limits, and how conventional wisdom fits into the picture. We hope that, by the end of this discussion, you’ll at least have a good idea of why it can be so difficult to separate fact from fiction, and better understand the struggles of a sports coach. Even if, like us, you’ll probably walk away still a little unsure how to deal with it all.
Sports science and empirical evidence
It’s been famously claimed that “democracy is the worst form of government, except for all those other forms.” Similar things could be said about the modern sports science approach of controlled experimentation and peer review.
Modern science works through observation, hypothesis, experiment, and review. A hypothesis is proposed (men in their 30s like to write long-winded blog posts). An experiment is designed (we asked 57 men of various ages whether they enjoyed producing blog posts over 1000 words) and the results are analysed (men in their 30s were significantly more likely to report that they enjoyed producing blog posts over 1000 words than other age groups). But the experiment may have structural flaws that aren’t exposed until it is compared to many others (it turns out Hedge’s mates aren’t a representative statistical sample of the adult male population as a whole). This flaw is hopefully spotted when another party takes a look at the study (James peer reviews Hedge’s work and points this out). If the flaw is spotted, we can update the hypothesis or experiment and try again (ask a whole lot more adult men than just Hedge’s mates).
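To make the ‘results are analysed’ step concrete, here is a minimal sketch of one common approach, a permutation test, run on invented numbers standing in for the blog-post survey. The figures are purely illustrative, not real data, and real studies would usually reach for a dedicated statistics package rather than hand-rolling the test:

```python
import random

# Hypothetical survey results (illustrative numbers, not real data):
# of 20 men in their 30s, 14 said they enjoy writing long blog posts;
# of 37 men of other ages, 12 said the same. 1 = yes, 0 = no.
thirties = [1] * 14 + [0] * 6
others = [1] * 12 + [0] * 25

observed_diff = sum(thirties) / len(thirties) - sum(others) / len(others)

# Permutation test: if age group made no difference, shuffling the
# group labels should produce a difference this large fairly often.
random.seed(0)
pooled = thirties + others
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:len(thirties)], pooled[len(thirties):]
    if sum(a) / len(a) - sum(b) / len(b) >= observed_diff:
        count += 1

p_value = count / trials
print(f"observed difference: {observed_diff:.2f}, p ≈ {p_value:.3f}")
```

With these made-up numbers the difference is unlikely to be pure chance, but note that a small p-value says nothing about whether the sample (Hedge’s mates) represents the wider population; that is exactly the structural flaw peer review exists to catch.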
The scientific method (mostly) produces information that is true for the majority of cases within a defined population. Its findings are narrow in scope and should be read as such. That’s a feature, not a bug. Make your study too broad and you risk introducing factors that make it harder to determine whether the cause-and-effect link you are looking for is actually present. Famously, shark attacks increase in correlation with ice cream sales, because hotter weather prompts more visits to the beach, not because sharks buy a lot of ice cream.
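The ice cream and sharks effect can be simulated in a few lines. This is a toy illustration with made-up numbers, showing how a shared cause (temperature) produces a strong correlation between two variables that never influence each other:

```python
import random

# Illustrative simulation: hot weather drives both ice cream sales and
# beach visits (and hence shark encounters). Neither causes the other.
random.seed(42)
days = 200
temps = [random.uniform(10, 35) for _ in range(days)]
ice_cream = [t * 3 + random.gauss(0, 10) for t in temps]        # sales per day
shark_attacks = [t * 0.1 + random.gauss(0, 0.8) for t in temps]  # incidents per day

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# The two series correlate strongly, even though the only link
# between them is the shared cause (temperature).
print(f"r = {pearson(ice_cream, shark_attacks):.2f}")
```

A study that measured only ice cream sales and shark attacks would find a real, statistically robust correlation between them, which is why narrow, carefully controlled designs matter.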
So results should be analysed within the context of each experiment and whichever hypothesis prompted it. An individual paper only discusses how valid the evidence is in relation to what is being studied and the method used to study it. Wider meaning may be inferred from the results of the study, but this is usually a call for further investigation rather than a statement of truth. Consequently, a single paper very rarely provides overwhelming proof, and it’s perfectly possible for two valid papers to exist that contain opposing evidence.
Using this process, nothing can ever be shown to be 100% true. We can only test a claim in as many different ways as possible and repeatedly fail to show that it is false. The scientific process is messy and complicated, and you can rarely read a lot into a specific study unless you are an expert in that particular niche. Even those with scientific training from different disciplines should hesitate before drawing conclusions from a single paper. These things often aren’t settled until sufficient evidence has been compiled, the meta-analyses have been done, and someone in scientific communication comes along and tries to form the body of evidence into a coherent story that is easy for others to digest. This is often how a larger picture is reached: in the aggregate.
All of this is a way of saying that empirical evidence is good and studies are important, but a single study alone is rarely sufficient evidence to change our approach to fitness. At the same time, sports science is absolutely producing important evidence that we, as coaches, should be listening to in order to improve our practice.
Conventional wisdom
Conventional wisdom can be thought of as any practice whose practitioners report that it has worked for them. However, either the underlying mechanisms aren’t understood well enough to say definitively why it works, or there just isn’t enough evidence yet to call it one way or the other. The practice often comes with proposed explanations of how it works, and these explanations often lack credible evidence. Usually the evidence offered is personal, anecdotal experience: people have followed the process and it has worked for them. This is a valid experience but, unfortunately, it is not sufficient evidence that it will work for everyone, or that it worked for exactly the reasons claimed.
Essentially, it’s not disputed that a number of people have had positive wellbeing results from the practice, but the mechanism may be different from the one described. Healing crystals can have really positive effects for a lot of people, and astrology horoscopes provide comfort to many. But rocks don’t produce healing fields and stars don’t impact your personality. At the other end of the discussion, no-one is seriously arguing that stretching and mobility work aren’t good for you. But we simply aren’t sure why they work and exactly what they are doing. To compound the matter, two people may have very different results from the same stretching protocol. Massage is another topic where consensus is difficult to find and there are no easy answers, despite the fact that it’s almost universally agreed that massage benefits those on the receiving end of it.
It’s quite common for these conventional practices to have many integrated parts that form part of a lifestyle. In Ashtanga Yoga, you get up earlier, exercise first thing, and do long stretches with specific breathing protocols, often alongside changes in diet as well. This is a fairly complex set of lifestyle changes that all occur at once. While the practice as a whole may benefit people, it isn’t a simple matter for a study to untangle which parts of it are responsible for that benefit.
As a sports coach, you’ll find there’s a lot more conventional wisdom out there than there are well-written scientific op-eds that explain things to you, especially if you work in a niche sport that isn’t well studied. It’s important to know that the two can co-exist and do not automatically cancel each other out. Scientific papers provide information on the population. Individuals provide information on themselves. If a client presents themselves to you and says that acupuncture really helped with their knee pain before, you can believe them. It does not mean acupuncture is the only thing that will help them, and it does not mean you need to start delivering acupuncture to all your clients.
Unfortunately, you are likely to encounter many instructors with large media followings citing specific studies to back up their conventional wisdom. This can be a double-edged sword: they may be experts in the field having deep, domain-specific discussions, or they may be using marketing gimmicks to appear to be experts. They may even be bad actors actively encouraging you to ignore a scientific consensus that exists against them. Most people lack the tools to properly tell these instructors apart.
Spotting good advice
Inevitably, at some point you are going to find yourself considering whether to trust someone without sufficient evidence backing up their claims. So within this whirling world of difficult-to-digest evidence, how do we actually decide who we should be trusting?
There are a few different areas you can focus on if you want to make sure you are following good advice online:
The goals and outcomes of the advice are applicable to you, your clients, or your sport and not just generic advice
You can see some evidence of successful results that indicate a generally positive, healthy outcome. If anecdotal, you are able to find or verify these success stories from sources independent of the person making the claims
The instructor’s clients are fairly diverse and their work is adaptable to many different bodies
The instructor seems to have a long history in the activity they are promoting and isn’t just jumping into the latest fad
The instructor rarely gives definitive answers about ‘why’ things are happening, but instead provides education and possibilities, with options to suit a variety of situations
You can try out and assess the advice for yourself with very low risk and commitment
Above all, avoid programmes that encourage an ‘all-or-nothing’ attitude. Anyone claiming that one simple trick fixes everything, or telling you that everyone else is wrong, is almost certainly just selling something. We strongly encourage everyone to read widely, discuss openly, and engage with a variety of sources. Be critical but open, analytical but empathetic, discerning but not dismissive. Ultimately, it comes down to your judgement, personal and professional, to weigh the evidence and potential risks in any modality you are delivering to a client or patient. But that judgement should absolutely be an informed one.