NHS Choices - Behind the Headlines

Can exercise offset some of the harms of regular drinking?

"Adults who booze regularly but exercise for five hours a week are no more likely to die than teetotallers," the Mail Online reports.

A study suggests exercise may compensate for some, but certainly not all, of the harms associated with excessive alcohol consumption. This latest study looked at deaths from cancer and cardiovascular disease, as well as premature death in general (usually judged to be dying before the age of 75).

Researchers looked at around 10 years' worth of national survey data from UK adults aged over 40. Unsurprisingly, they found links between alcohol consumption and both all-cause and cancer mortality in inactive people. But they also found that increasing levels of physical activity generally removed the association with drinking habits. In fact, occasional drinking was associated with a significant reduction in all-cause mortality among the most active people.

Although the study had strengths in its large sample size and regular follow-up, we can't be sure that any links observed were solely down to the interaction between alcohol and exercise. For example, people who are physically active may also avoid smoking and consume healthy diets. It is difficult to completely control for such influences when analysing data like this.

While regular exercise may mitigate some of the harms associated with excessive alcohol consumption, it certainly won't make you immune to them. Many world-class sportspeople, such as George Best and Paul Gascoigne, have had both their careers and lives blighted by drinking.


Where did the story come from?

The UK-based study was carried out by an international collaboration of researchers from Canada, Australia, Norway and the UK. The health surveys on which the study was based were commissioned by the Department of Health, UK. Individual study authors also reported receiving funding from the National Health and Medical Research Council and University of Sydney. 

The study was published in the peer-reviewed British Journal of Sports Medicine. 

The media coverage around this topic was generally overly optimistic, suggesting that by exercising individuals can completely undo the harm caused by excessive alcohol consumption, which is untrue.

In particular, the Mail Online claimed that "Adults who booze regularly but exercise for five hours a week are no more likely to die than teetotallers", which could send the wrong message to the public.


What kind of research was this?

This cohort study analysed data from two British population-based surveys, the Health Survey for England (HSE) and the Scottish Health Survey (SHS), to investigate whether physical activity moderates the association between alcohol consumption and mortality from cancer and cardiovascular disease.

Cohort studies like this are useful for assessing suspected links between an exposure and outcome. However, there are potentially other factors that have a role to play in such associations and therefore the study design doesn't allow for confirmation of cause and effect.


What did the research involve?

The researchers collected data on 36,370 men and women aged 40 or above from Health Survey for England (1994; 1998; 1999; 2003; 2004; and 2006) and the Scottish Health Survey (1998 and 2003). Among other things, the participants were asked about their current alcohol consumption and physical activity.

Alcohol intake was defined by six categories (UK units/week):

  • never drink (lifetime abstainers)
  • ex-drinkers
  • occasional drinkers (hadn't drunk anything in the past seven days)
  • within (previous) guidelines: <14 units (women) and <21 units (men)
  • hazardous: 14-35 units (women) and 21-49 units (men)
  • harmful: >35 (women) and >49 (men)
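As a rough illustration of how a weekly intake maps onto the unit-based bands above, here is a small sketch (the function and its thresholds are our own illustration, not code from the study, and the history-based categories such as ex-drinkers are ignored):

```python
def drinking_category(units_per_week, sex):
    """Classify weekly alcohol intake (UK units) into the study's bands.

    Illustrative only: the study also separated never-drinkers,
    ex-drinkers and occasional drinkers by drinking history.
    """
    # Thresholds per the study's (previous) guidelines
    guideline, harmful = (14, 35) if sex == "female" else (21, 49)
    if units_per_week < guideline:
        return "within guidelines"
    if units_per_week <= harmful:
        return "hazardous"
    return "harmful"

print(drinking_category(10, "female"))  # within guidelines
print(drinking_category(30, "male"))    # hazardous
```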

Participants were asked about the frequency and type of physical activity in the past four weeks; this was converted into metabolic equivalent of task hours (MET-hours, an estimate of metabolic activity) per week and grouped according to national recommendations:

  • inactive (<7.5 MET-hours)
  • lower level of activity (7.5 to 15 MET-hours)
  • higher level of activity (>15 MET-hours)
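MET-hours are simply the MET intensity of an activity multiplied by the hours spent on it, summed over the week. A hedged sketch of that arithmetic (the example MET values and the band cut-offs are illustrative, not taken from the paper's methods):

```python
def weekly_met_hours(activities):
    """Total MET-hours/week from (met_value, hours_per_week) pairs."""
    return sum(met * hours for met, hours in activities)

def activity_band(met_hours):
    """Band a weekly MET-hour total using the cut-offs listed above."""
    if met_hours < 7.5:
        return "inactive"
    if met_hours <= 15:
        return "lower level of activity"
    return "higher level of activity"

# e.g. brisk walking (~4 METs) for 2 hours plus cycling (~6 METs) for 1 hour
total = weekly_met_hours([(4, 2), (6, 1)])  # 14 MET-hours
print(activity_band(total))  # lower level of activity
```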

The surveys were linked to the NHS Central Register for mortality data and the participants were followed up until 2009 (HSE) and 2011 (SHS). There were 5,735 recorded deaths; deaths from cancer and cardiovascular disease were of most interest for this study.

The data was analysed for associations between alcohol consumption and the risk of death from all-causes, cancer and cardiovascular disease. The results were then analysed according to levels of physical activity.

Potential confounders (such as sex, body mass index and smoking status) were controlled for.


What were the basic results?

Overall, the study found a direct link between all levels of alcohol consumption and risk of cancer mortality. It also found that increasing levels of physical activity weakened this association with cancer mortality, as well as the link with death from any cause.

  • In individuals who reported being inactive (<7.5 MET-hours), there was a direct association between alcohol consumption and all-cause mortality.
  • However, in individuals who met the highest level of physical activity recommendations, a protective effect of occasional drinking on all-cause mortality was observed (hazard ratio [HR] 0.68; 95% confidence interval [CI] 0.46 to 0.99). It should be noted that this result only just reached statistical significance.
  • In this high activity group, there was no link between all-cause mortality and alcohol consumption within guidelines, or even hazardous amounts, but the risk was still increased for those drinking harmful amounts.
  • The risk of death from cancer increased with the amount of alcohol consumed in inactive participants, ranging from a 47% increased risk for those drinking within guidelines to 87% increased risk for those with harmful drinking.
  • In people with higher activity levels (above 7.5 MET hours) there was no significant link between any amount of alcohol consumption and cancer mortality.
  • No overall association was found between alcohol consumption and mortality from cardiovascular disease, although a protective effect was observed in individuals who reported lower (>7.5 MET-hours) and higher (>15 MET-hours) levels of physical activity.
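The significance judgements in these bullet points come down to whether the 95% confidence interval for a hazard ratio crosses 1. A minimal sketch of that check, using the occasional-drinking result above:

```python
def excludes_null(ci_lower, ci_upper, null_value=1.0):
    """A ratio estimate is conventionally 'statistically significant'
    when its 95% CI excludes the null value (1 for hazard ratios)."""
    return not (ci_lower <= null_value <= ci_upper)

# Occasional drinking, most active group: HR 0.68, 95% CI 0.46 to 0.99
print(excludes_null(0.46, 0.99))  # True, but only just: the upper bound is 0.99
```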


How did the researchers interpret the results?

The researchers concluded "we found evidence of a dose–response association between alcohol intake and cancer mortality in inactive participants but not in physically active participants. [Physical activity] slightly attenuates the risk of all-cause mortality up to a hazardous level of drinking."



Conclusion

This study aimed to explore whether physical activity is able to moderate the association between alcohol consumption and mortality from cancer and cardiovascular disease. It found that increasing levels of physical activity reduced the association for death from both all causes and cancer.

This study has strengths in its large sample size, comprehensive assessments and long duration of follow-up. The findings are interesting, but there are a few points to bear in mind:

  • As the authors mention, cohort studies such as this are unable to confirm cause and effect. Though the researchers tried to account for various potential health and lifestyle confounders, others may still have influenced the results. A notable one is dietary habits, which weren't assessed. Former drinkers, for example, may also have quit because of other health problems, which may have introduced bias.
  • The study was unable to look at binge drinking, which is likely to have important health implications of its own.
  • Additionally, there is always the possibility with self-reported surveys that the participants either under or over-reported their drinking habits which can increase the chance of misclassification bias.
  • Though having a large sample size, fewer people reported harmful drinking levels, so links within this category may be less reliable.
  • The study has only looked at the link between alcohol and actually dying from cancer or cardiovascular disease. Links may be different if they looked at associations between alcohol and just being diagnosed with cancer or heart disease, for example.
  • The study is also only representative of adults over the age of 40.

Overall, maintaining a healthy lifestyle seems to be the best bet for reducing the risk of any chronic disease, be it through physical activity, balanced diet or reasonable alcohol consumption.

Current alcohol recommendations for both men and women are to drink no more than 14 units per week.  

Links To The Headlines

How exercise undoes the harm from drinking: Adults who booze regularly but exercise for five hours a week are no more likely to die than teetotallers. Mail Online, September 8 2016

Two hours a week of exercise could offset the dangers of alcohol. The Daily Telegraph, September 8 2016

Exercise can cut risk from alcohol-related diseases, study suggests. The Guardian, September 8 2016

Links To Science

Perreault K, Bauman A, Johnson N, et al. Does physical activity moderate the association between alcohol drinking and all-cause, cancer and cardiovascular diseases mortality? A pooled analysis of eight British population cohorts. British Journal of Sports Medicine. Published online August 31 2016

Grooming pubic hair linked to increased STI risk

"Women and men who regularly trim or remove all their pubic hair run a greater risk of sexually transmitted infections," BBC News reports.

A survey of around 7,500 Americans, aged between 18 and 65 years, found "groomers" had a higher rate of sexually transmitted infections (STIs) such as herpes.

However, this doesn't necessarily mean that grooming pubic hair directly increases the risk of STIs. The main limitation is that this study can't prove cause and effect. It could be the case that some people decided to remove their pubic hair after catching an STI.

And while the researchers took into account the number of lifetime sexual partners as a surrogate marker of sexual behaviour, they failed to assess the safe sex practices of participants. So any findings observed here are really only links to be assessed in further research.

The study speculates that grooming could lead to small cuts in the skin (microtears) which could make a person more vulnerable to catching certain types of STIs that can be spread via skin-to-skin contact, such as the human papilloma virus (HPV).

A study we discussed back in the summer found many women shaved their pubic hair as they mistakenly thought this was more hygienic. While you may choose to get rid of your pubic hair for cosmetic reasons, there is no evidence that doing so is good for your health.


Where did the story come from?

The study was carried out by researchers from the University of California and the University of Texas, and was funded by the National Institutes of Health and the Alafi Family Foundation.

The study was published in the peer-reviewed medical journal Sexually Transmitted Infections on an open-access basis so it is free to read online.

The findings of this study were widely reported in the UK media; however, there was no mention of the fact that this research is not able to prove causation (cause and effect).

BBC News provides some useful tips on how to reduce your risk of catching an STI.


What kind of research was this?

This was a cross-sectional study which aimed to assess the relationship between pubic hair grooming habits and sexually transmitted infections.

Whilst this type of study is useful for finding possible links, studies that question exposures and outcomes at the same time are not able to prove cause and effect. There may be other factors at play which are responsible for the STIs.

Surveys are also subject to bias as the participants may not be entirely honest in their responses.


What did the research involve?

The researchers surveyed a sample of adults aged 18 to 65 years living in the US.

When invited to take part in the survey the participants were aware the subject of the survey was "Personal Grooming Injuries", however they did not have details of any questions in the particular survey until they accepted.

The survey was designed to assess the following:

  • pubic hair grooming practices
  • grooming injuries
  • sexual behaviours
  • STI history

Questions queried participants' grooming practices and included the following:

  • whether they had ever groomed (yes/no)
  • how often they groomed (daily, weekly, monthly, every 3 to 6 months or every year)
  • amount of hair removed (trimming vs complete removal)
  • typical grooming method (non-electric razor, electric razor, wax, scissors, electrolysis, laser hair removal, depilatories or tweezers)

Participants were defined as "ever groomers" if they had ever groomed their pubic hair in the past, "extreme groomers" if they had removed all of their pubic hair more than 11 times per year and "high-frequency groomers" if they trimmed their pubic hair daily or weekly.

The participants were asked about their history of STIs, including the number and type of STIs. These STIs were categorised as either:

  • cutaneous (infections that can spread via the skin), including herpes simplex virus (HSV), human papilloma virus (HPV), syphilis and molluscum contagiosum (a viral infection causing itchy spots)
  • secretory (infections that can spread via body fluids), including gonorrhoea, chlamydia and HIV

Pubic lice were analysed separately.

The researchers made statistical adjustments for age and sex, and sexual behaviour variables such as frequency of sexual activity and number of sexual partners annually and over a lifetime.


What were the basic results?

In total, 7,580 adults took part in the survey. Of these, 74% reported pubic hair grooming: 66% of men and 84% of women.

STIs were reported in 13% of participants (11% men and 15% women). Significantly more groomers reported a lifetime history of STIs than non-groomers (14% vs 8%).

Those who reported extreme grooming were more likely to report a lifetime history of STIs than those who reported non-extreme grooming (18% vs 14%).

No differences were observed between high-frequency and low-frequency groomers (15% vs 14%).

After analysing the survey findings and adjusting for the effects of age and lifetime sexual partners, "ever groomers" had an 80% higher rate of self-reported STIs (odds ratio [OR] 1.8; 95% confidence interval [CI] 1.4 to 2.2).

Cutaneous STIs were more than twice as likely in those who groomed (OR 2.6; CI 1.8 to 3.7), secretory STIs were 70% more likely (OR 1.7; CI 1.3 to 2.2) and pubic lice 90% more likely (OR 1.9; CI 1.3 to 2.9).

The association with cutaneous STIs was stronger for "extreme groomers" (OR 4.4; CI 2.9 to 6.8) and "high-frequency groomers" (OR 3.5; CI 2.3 to 5.4). Lice were more likely to be reported by "non-extreme groomers" (OR 2.0; CI 1.3 to 3.0) and "low frequency groomers" (OR 2.0; CI 1.3 to 3.1).

By type of cutaneous STI, grooming links were found for HSV, HPV and syphilis, but not for molluscum contagiosum which was reported by few people. For secretory STIs, links were found with chlamydia and HIV but not gonorrhoea.
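The odds ratios reported here compare the odds of an STI in groomers against non-groomers. A sketch of the underlying calculation on a 2x2 table (the counts below are invented for illustration; the paper's ORs are additionally adjusted for age and lifetime partners, which this crude version is not):

```python
def crude_odds_ratio(exposed_cases, exposed_controls,
                     unexposed_cases, unexposed_controls):
    """Crude odds ratio from a 2x2 table: (a/b) / (c/d)."""
    return ((exposed_cases / exposed_controls)
            / (unexposed_cases / unexposed_controls))

# Hypothetical: 140 of 1,000 groomers vs 80 of 1,000 non-groomers report an STI
print(round(crude_odds_ratio(140, 860, 80, 920), 2))  # 1.87
```

An OR of 1.8 is also where the "80% higher" framing comes from: (1.8 - 1) x 100 = 80%.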


How did the researchers interpret the results?

The researchers conclude: "Among a representative sample of US residents, pubic hair grooming was positively related to self-reported STI history. Further research is warranted to gain insight into STI risk-reduction strategies".

The team propose that a possible reason for this outcome is that grooming may cause epidermal microtears, which may increase the risk of STIs, particularly cutaneous, viral STIs. This mechanism was recently proposed for grooming and molluscum contagiosum.



Conclusion

This US questionnaire-based study aimed to assess the link between the grooming of pubic hair and lifetime history of STIs.

The findings showed that grooming was associated with a higher rate of STIs. The study has strengths in that it included a large number of both men and women with a very small number excluded from the analysis.

The team made attempts to make the survey as representative as possible, providing a laptop computer and free internet service to those without internet access. They also carried out a pilot survey to make sure it was valid and easy to understand.

However there are some important limitations to this research that overall mean that it can't prove conclusively that grooming pubic hair directly increases your risk of STIs.

  • By design this cross-sectional study is not able to prove cause and effect. It can't determine the timing of grooming compared to when STIs were acquired.
  • The study can't rule out the possibility that the link between grooming practices and STIs is mediated by sexual activity (that is, people who groom might be more sexually active and/or adventurous). The researchers failed to assess the safe sex practices of participants; they only used the number of lifetime sexual partners as a surrogate for risky sexual behaviour.
  • There is high risk of responder bias in a survey questioning such a sensitive topic – participants agreeing to take part in this survey may not be fully representative of the general public. That is, people with an active interest in the topic may be more likely to take part (which could explain the relatively high number of groomers in the study). Also, responders may not always be completely truthful in their answers.
  • There is the possibility of recall bias when participants were asked to recall both their past grooming habits and lifetime STIs – not all of which (for example, HPV or chlamydia) they may have been aware of having.
  • Overall, while the team attempted to control for possible confounding effects it is possible that some remained in the model and influenced these results.

Grooming of pubic hair itself is not going to cause an STI, but unsafe sexual activity can.

The best way to avoid sexually transmitted infections is to practise safe sex, including during anal and oral sex.

If you think you might have been involved in a risky sexual practice it's worth getting tested for STIs at a sexual health clinic, genitourinary medicine (GUM) clinic or GP surgery.

Some STIs, such as chlamydia, don't always cause obvious symptoms but can trigger complications, such as problems with fertility, further down the line.

Find out more about sexual health services in your local area.

The researchers do raise the point that groomers should be advised to delay sex after grooming in case their skin has been damaged.

Read more information about STIs.

Links To The Headlines

Pubic hair grooming 'STI risk linked to skin tears'. BBC News, December 6 2016

Do YOU trim your pubic hair? Then you're 80% more likely to have an STD. Mail Online, December 6 2016

Waxing warning: ‘extreme grooming’ of pubic hair quadruples risk of sexually transmitted disease. The Daily Telegraph, December 6 2016

People who trim their pubic hair have EIGHTY percent higher chance of having an STD. Daily Mirror, December 6 2016

Links To Science

Osterberg EC, Gaither TW, Awad MA, et al. Correlation between pubic hair grooming and STIs: results from a nationally representative probability sample. Sexually Transmitted Infections. Published online December 5 2016

Handful of nuts 'cuts heart disease and cancer' risk

"People consuming at least 20 grams of nuts daily less likely to develop potentially fatal conditions such as heart disease and cancer," The Independent reports. That was the main finding of a review looking at 20 previous studies on the benefits of nuts.

Researchers found consistent evidence that a 28 gram daily serving of nuts – which is literally a handful (for most nuts) – was linked with around 20% reduced risk of heart disease, cancer and death from any cause.

However, as is so often the case with studies into diet and health, the researchers cannot prove nuts are the sole cause of these outcomes.

It's hard to discount the possibility that nuts could be just one component of a healthier lifestyle pattern, including balanced diet and regular physical activity. It could be this overall picture that is reducing risk, not just nuts.

The researchers tried to account for these types of variables, but such accounting is always going to be an exercise in educated guesswork.

Also, many non-lifestyle factors may be involved in any individual's risk of disease. For example, if you are a male with a family history of heart disease, a healthy diet including nuts can help, but still may not be able to eliminate the risk entirely.

The link between nuts and improved health is nevertheless plausible. As we pointed out during a discussion of a similar study in 2015: "Nuts are a good source of healthy unsaturated fats, protein, and a range of vitamins and minerals … Unsalted nuts are the healthiest option."


Where did the story come from?

The study was carried out by researchers from Norwegian University of Science and Technology, Trondheim, Norway, Imperial College London, and other institutions in the US.

It was funded by Olav og Gerd Meidel Raagholt's Stiftelse for Medisinsk forskning (a Norwegian charitable foundation), the Liaison Committee between the Central Norway Regional Health Authority and the Norwegian University of Science and Technology (NTNU), and Imperial College National Institute of Health Research (NIHR) Biomedical Research Centre (BRC).

The study was published in the peer reviewed medical journal BMC Medicine on an open-access basis, so it is free to read online.

The UK media presented the results reliably, but without discussing the inherent limitations of the type of observational evidence examined by the researchers.


What kind of research was this?

This was a systematic review that aimed to examine the link between nut consumption and risk of cardiovascular disease, cancer and death.

Previous studies have suggested that nut intake is beneficial, and some have found it could be linked with reduced risk of cardiovascular disease and cancer. Other studies, though, have found no link. The researchers considered the possibility that there is a weak link, and that is what they aimed to investigate.

A systematic review is the best way of compiling all literature on a topic available to date. However, systematic reviews are only as good as the underlying evidence. Studies looking at dietary factors are often observational and it is difficult to rule out the possibility of confounding variables from other health and lifestyle factors.


What did the researchers do?

The researchers searched two literature databases to identify any randomised controlled trials (RCTs) or prospective cohort studies that had looked at how nut intake in adults was linked with cardiovascular disease, cancer and death from any cause.

Studies had to report information on nut intake specifically (ideally by dose and frequency). Researchers assessed the quality of studies for inclusion.

Twenty prospective cohort studies met the inclusion criteria. Nine studies came from the US, six from Europe, four from Asia, and one came from Australia. All studies included adult populations; five were in women only, three in men only, and 12 in a mixed population.

The researchers did not find any suitable RCTs to include in their analysis. This is not especially surprising as RCTs involving diet are notoriously difficult to carry out. You could never be sure that everyone who was randomised into the "eat no nuts" group would stick to the plan, or vice versa.

They would also need large samples and long follow-up times to capture disease outcomes, so such trials are not usually feasible.


What did they find?

Cardiovascular disease

Twelve studies (376,228 adults) found nut consumption was associated with a reduced risk of cardiovascular disease. Each 28 gram/day serving was linked with a 21% reduced risk (relative risk [RR] 0.79, 95% confidence interval [CI] 0.70 to 0.88).
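Meta-analyses of this kind usually express the per-serving relative risk on a log scale, so it can be rescaled to other daily intakes. A sketch under that simplifying log-linear assumption (the review itself found the dose-response flattens at higher intakes, so extrapolation is rough):

```python
import math

def rescale_rr(rr_per_serving, dose_g, serving_g=28):
    """Rescale a per-serving relative risk to another daily dose,
    assuming a log-linear dose-response (a simplification)."""
    return math.exp(math.log(rr_per_serving) * dose_g / serving_g)

# RR 0.79 per 28 g/day rescaled to a 20 g/day intake
print(round(rescale_rr(0.79, 20), 2))  # 0.85
```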

This was for any nut intake, but risk reductions were also found when analysing peanuts or tree nuts separately. Increasing intake was associated with reduced risk up to 15 grams/day, above which there was no further risk reduction.

Looking at specific outcomes, 12 studies also found a 29% reduced risk of heart disease specifically (RR 0.71, 95% CI 0.63 to 0.80).

However, 11 studies didn't find a significant link with the outcome of stroke specifically (RR 0.93, 95% CI 0.83 to 1.05).


Cancer

Nine cohorts (304,285 adults) found that one serving of nuts per day was associated with a 15% reduced risk of any cancer (RR 0.85, 95% CI 0.76 to 0.94). In separate analyses, the risk reduction was slightly higher for tree nuts (20%) than peanuts (7%).

All-cause death

Fifteen cohorts (819,448 people) recorded 85,870 deaths. One serving of nuts a day was linked with a 22% reduced risk of death during study follow-up (RR 0.78, 95% CI 0.72 to 0.84).

Looking at specific causes of death, each daily serving of nuts was linked with a reduced risk of death from respiratory disease (RR 0.48, 95% CI 0.26 to 0.89; three studies) and from diabetes (RR 0.61, 95% CI 0.43 to 0.88; four studies).

There was no link with deaths from neurodegenerative diseases, and inconsistent links with deaths from kidney disease and infectious diseases. No other disease-related causes were reported.

Overall, the researchers estimate that 4.4 million premature deaths in 2013 across the Americas, Europe, Southeast Asia and the Western Pacific could be attributable to nut intakes below 20 grams/day.
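Attribution estimates like this are typically built from a population attributable fraction (PAF). A hedged sketch of the standard (Levin) formula, with made-up inputs rather than the paper's actual ones:

```python
def population_attributable_fraction(prevalence, relative_risk):
    """Levin's formula: the fraction of deaths attributable to an
    exposure, given its prevalence and its relative risk."""
    excess = prevalence * (relative_risk - 1)
    return excess / (excess + 1)

# Hypothetical: 60% of a population eats fewer nuts than recommended,
# and low intake carries RR 1.25 versus adequate intake
print(round(population_attributable_fraction(0.6, 1.25), 2))  # 0.13
```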


What did the researchers conclude?

The researchers conclude: "Higher nut intake is associated with reduced risk of cardiovascular disease, total cancer and all-cause mortality, and mortality from respiratory disease, diabetes, and infections."



Conclusion

This systematic review found evidence that nut intake may be linked with reduced risk of cardiovascular disease, cancer and death.

The systematic review has several strengths. It identified a large number of studies with a large total sample size. It also included only prospective cohorts assessing nut consumption and then followed up later disease outcomes.

It excluded cross sectional studies, which assess diet and disease at the same time, and so can't show the direction of effect. It also excluded cohorts that have retrospectively questioned diet when the person already has the disease, which could be subject to recall bias.

However, there are still a number of inherent limitations which mean these studies cannot easily prove that nuts are the magic dietary ingredient that are solely and directly responsible for these outcomes.

There were no randomised controlled trials of nut consumption. All studies were observational where people were choosing their own diet.

The researchers took care to include studies that only looked at nut consumption as an independent factor and looked at results that had adjusted for any confounders. However, the factors that the studies adjusted for, and how well they were assessed, will have varied across studies.

As such it's very difficult to prove that nuts alone are the causative factor and they are not just one component of a generally healthier lifestyle pattern, including balanced diet, regular physical activity, not smoking, and moderating alcohol.

When it comes to frequency or quantity of intake, it is likely there is an element of inaccuracy when people report how much they eat. For example, most people wouldn't weigh out how many nuts they're eating each day.

The review also provides limited information about specific types of nuts. Considering peanuts in particular, the studies included in the review didn't specify whether these are plain nuts, or whether they could have added salt and oils.

It is also likely that cardiovascular and cancer outcomes were not assessed the same way in all studies, for example whether by participant self-report or by checking medical records.

Overall there does seem to be a link between nut consumption and health, but nuts alone won't reduce your risk of cardiovascular disease or cancers, if your lifestyle is still generally unhealthy.

If you want to live a long and healthy life then you should exercise regularly and eat a balanced diet high in fruit and vegetables and low in salt, sugar and saturated fats, while avoiding smoking and moderating your consumption of alcohol.

Nuts are high in "good fats" and can be eaten in moderation as part of a healthy diet. Unsalted nuts are best as excessive amounts of salt can raise your blood pressure.  

Links To The Headlines

Eating handful of nuts a day can keep the doctor away, research proves. The Independent, December 5 2016

Eat nuts every day to cut heart and cancer risk: Just a handful can reduce chance of dying early by a fifth. Daily Mail, December 5 2016

A handful of nuts a day could slash risk of heart disease and cancer. The Daily Telegraph, December 5 2016

Eat a handful of nuts daily to slash your risk of heart disease and cancer. Daily Mirror, December 5 2016

Links To Science

Aune D, Keum N, Giovannucci E, et al. Nut consumption and risk of cardiovascular disease, total cancer, all-cause and cause-specific mortality: a systematic review and dose-response meta-analysis of prospective studies. BMC Medicine. Published online December 5 2016

Could Parkinson's disease start in the gut?

"Parkinson's disease 'may start in gut'," BBC News reports. New research in mice suggests that gut bacteria may contribute to the decline in motor function seen in Parkinson's disease.

The study involved a mouse model of Parkinson's disease. The researchers gave some of the mice gut bacteria from people with Parkinson's disease, some were given gut bacteria from healthy individuals, and some mice were not given any bacteria.

They found that gut bacteria seemed necessary to trigger Parkinson's-like symptoms. There was a greater decrease in motor function in mice given gut bacteria compared with those that remained germ-free, with the greatest decline seen in mice given bacteria from people with Parkinson's.

The researchers suggest that the presence of gut bacteria may cause the build-up of a protein called alpha-synuclein, which is found in people with Parkinson's disease.

The study does not prove that Parkinson's is essentially a gut disorder, or that it could potentially be treated or prevented with antibiotics or probiotics. And humans aren't identical to mice, so the study findings may not apply to people.

The study arguably raises more questions than answers. But it could pave the way for further studies in people, with the hope of finding potential new treatments for Parkinson's.


Where did the story come from?

The study was carried out by researchers from a variety of institutions, mainly from the US and Sweden, including the California Institute of Technology, Rush University Medical Center in Chicago and Chalmers University of Technology in Sweden.

It was funded by the Knut and Alice Wallenberg Foundation and the Swedish Research Council.

The study was published in the peer-reviewed scientific journal Cell. It's available on an open-access basis and is free to read online.

Generally, the UK media coverage of this topic was balanced, although the Mail Online did say this study "could overhaul medical research and treatment of Parkinson's", which is possibly over-optimistic.


What kind of research was this?

This was an animal study which aimed to investigate a possible link between gut bacteria and brain diseases such as Parkinson's disease.

Parkinson's is a disease of unknown cause where there is a loss of dopamine-producing cells in the brain. This leads to progressive decline in brain and motor function. Typical symptoms include slow movements, stiff muscles and involuntary shaking. There are also often mental health effects such as depression and dementia.

Past evidence has suggested that gut bacteria could influence the development of brain diseases such as Parkinson's by causing build-up of the protein alpha-synuclein (α-synuclein).

However, there was a lack of studies investigating the link through cellular research, an issue the researchers wanted to address.

Animal studies are useful early stage research which can indicate how processes in the body may work. On the other hand, mice and humans are quite different in biology so what works in mice may not necessarily be the same in humans. And even if the findings do apply, they may not provide the whole answer to the causes of diseases such as Parkinson's.


What did the research involve?

The research involved two groups of mice aged 12-13 weeks. One group of mice was genetically programmed to produce the protein alpha-synuclein (α-synuclein), which is thought to build up in people with degenerative brain conditions like Parkinson's. Another group of "normal" mice acted as controls.

Within these two groups, the gut composition of the mice was changed. Some mice remained germ-free, some were given gut bacteria from "healthy" donors, and others were given gut bacteria from people with Parkinson's.

Brain and motor function were tested over time in all groups of mice, along with gastrointestinal tests, up to the age of 24-25 weeks. Motor function was assessed using standardised tests designed for mice.

The test results were compared between the different groups of mice to see whether gut bacteria composition, in combination with the protein, had any effect on the onset of Parkinson-like symptoms.


What were the basic results?

Overall, the researchers found a decrease in motor function in mice with gut microbes compared with those that remained germ-free.

  • The presence of gut bacteria promoted the decline in motor function caused by α-synuclein. Mice genetically modified to produce this protein and then given gut bacteria generally performed the worst in the motor function tests. Gut bacteria from people with Parkinson's caused the greatest decline in motor function.
  • Mice producing α-synuclein who remained germ-free still showed a decline in motor function by 24-25 weeks old, but the onset was significantly slower compared to the mice with gut bacteria. 
  • The researchers found that gut microbes seemed to be affecting brain function via the action of short-chain fatty acids. The microbes produce short-chain fatty acids. The acids then cause an inflammatory response in the brain's immune cells (microglia) which leads to the dysfunction. 
  • In the germ-free mice there was no fatty acid signalling, limited inflammatory effect and limited motor dysfunction.


How did the researchers interpret the results?

The researchers concluded: "remarkably, colonization of aSyn-overexpressing mice with microbiota from [people with Parkinson's] enhances physical impairments compared to microbiota transplants from healthy human donors.

"These findings reveal that gut bacteria regulate movement disorders in mice and suggest that alterations in the human microbiome represent a risk factor for Parkinson's disease."



Conclusion

This study aimed to investigate a possible link between gut bacteria and degenerative brain diseases such as Parkinson's.

In the animal model of Parkinson's, researchers found that the presence of gut bacteria seems to enhance the brain's inflammatory response and lead to greater decrease in motor function.

And gut bacteria from people with Parkinson's seemed to have the greatest effect.

But does this mean that Parkinson's is essentially a gut disorder and could potentially be treated or prevented with antibiotics? Unfortunately the answer isn't so simple.

Although these are interesting findings, biological function in mice isn't exactly the same as in humans, so you can't necessarily apply these findings to the human population.

Even if they are applicable in part, this still may not provide the whole answer as to how the disease process of Parkinson's starts. However, it does act as useful early stage research which could pave the way for further studies in humans.

Dr. Arthur Roach, Director of Research and Development at Parkinson's UK commented on this study: "This paper shows for the first time a way in which one of the key players in Parkinson's, the protein alpha-synuclein, may have its actions in the brain modified by gut bacteria. It is important to note however that this study has been done in mice and we would need further studies in other model systems and in humans to confirm that this connection is real … There are still many questions to answer but we hope this will trigger more research that will ultimately revolutionise treatment options for Parkinson's."

Find support in your area for people affected by Parkinson's.

Links To The Headlines

Parkinson's disease 'may start in gut'. BBC News, December 2 2016

Parkinson's could start in the GUT not the brain: Study finds first ever link between the disease and gut microbes. Mail Online, December 1 2016

Links To Science

Sampson TR, Debelius JW, Thorn T, et al. Gut Microbiota Regulate Motor Deficits and Neuroinflammation in a Model of Parkinson’s Disease. Cell. Published online December 1 2016

'Not enough over-50s' taking aspirin to prevent heart disease

"Aspirin a day could dramatically cut cancer and heart disease risk … study claims," the Mail Online reports.

US researchers ran a simulation of what might happen if all Americans over 50 years old took aspirin on a daily basis. The results suggested people would live about four months longer on average, adding 900,000 people to the US population by 2036.

The study was designed to demonstrate the possible long-term effects of more people taking aspirin to prevent cardiovascular disease.

It should be pointed out that there is an important difference between UK and US guidelines. In the UK low-dose aspirin is usually recommended for people with a history of heart disease or stroke. In the US this advice is extended to people who are at risk of cardiovascular disease but don't have it yet.

We already know that aspirin reduces the risk of heart disease and strokes caused by blood clots (ischaemic stroke). There's some evidence it may reduce the risk of some types of cancer. However, aspirin also increases the risk of stroke caused by bleeding (haemorrhagic stroke) and increases the chances of bleeding in the stomach or gut.

So should you be taking low-dose aspirin? Without knowing your individual circumstances it's impossible to say, so you should ask your GP.


Where did the story come from?

The study was carried out by researchers from the University of Southern California and a company called Analysis Group. The authors received no funding for the study.

The study was published in the peer-reviewed journal PLOS One, on an open-access basis so it's free to read online.

The Mail Online reports the study as if the findings about aspirin reducing cardiovascular disease and potentially extending lifespan were new, when they have actually been known for some time.

The report says taking aspirin "would save the US $692 billion in health costs," which seems to be a misunderstanding. Health costs would actually increase, because of people living longer.

However, the researchers assigned a value of $150,000 to each additional year of life lived, which is how they arrived at the $692 billion figure.


What kind of research was this?

This was a "microsimulation" study, which used a modelling system to project possible outcomes under different scenarios, using information from health surveys. This type of modelling can throw up some interesting possibilities, but because it relies on so many assumptions, we have to be cautious about taking the results too literally.


What did the research involve?

Researchers used data from cohort studies to predict average life expectancy, cardiovascular events, cancers, disabilities and healthcare costs for people in the US aged 50 and over. They predicted what would happen with the current numbers of people taking aspirin, then with everyone currently recommended to take aspirin doing so, then with everyone over 50 taking aspirin.

They compared the results of their modelling to see what effect each scenario would have on average lifespan, the US population, costs and benefits.

Cohort studies providing data included the National Health and Nutrition Examination Survey (NHANES), Health and Retirement Study of Americans, Medical Expenditure Panel Survey and Medicare Current Beneficiary Survey.

The model included an assumption that more people would have gastrointestinal bleeding as a result of taking aspirin. It also modified the results using quality of life measures, so that additional life years were adjusted for quality of life.


What were the basic results?

The researchers found that, if everyone advised by US guidelines to take aspirin did so, the:

  • numbers of people with cardiovascular disease would fall from 487 per 1,000 to 476 per 1,000 (11 fewer cases, 95% confidence interval (CI) -23.2 to -2)
  • numbers with gastrointestinal bleeding would rise from 67 per 1,000 to 83 per 1,000 (16 more cases, 95% CI 3.6 to 30)
  • years of life expectancy at age 51 would rise from 30.2 years to 30.5 years, an additional four months of life (0.28 year, 95% CI 0.08 to 0.5)
  • life expectancy without disability would rise from 22.8 years to 22.9 years, an additional one month of life (0.12 year, 0.03 to 0.23)

The model found no reduction in the numbers of strokes or cancers.

The model shows there could be an additional 900,000 people (CI 300,000 to 1,400,000) alive in the US in 2036, who would otherwise have died.

Using the figure of $150,000 per quality-adjusted life year to represent benefits, the researchers say the value of extra life gained by 2036 would be $692 billion.


How did the researchers interpret the results?

The researchers said: "Expanded use of aspirin by older Americans with elevated risk of cardiovascular disease could generate substantial population health benefits over the next twenty years, and do so very cost-effectively."



Conclusion

This study doesn't really tell us anything we didn't already know. Aspirin has been used for many years to prevent heart attacks and strokes in people with cardiovascular disease. Aspirin's wider use is controversial, because of the potential side effects.

What this study does add is an estimate of what might happen if all people in the US who were advised to take aspirin under US guidelines, actually did so. (The researchers say that 40% of men and 10% of women advised to take aspirin don't take it).

The study assumes that people would get the same benefits as those seen in clinical trials of aspirin. This is unrealistic, because most studies find that people tend to do better in clinical trials than when being treated in the real world.

The average results, showing roughly an extra month of disability-free life per person, may sound trivial. However, it's important to remember that averages don't work like that in real life. Many people will get no benefit from aspirin, while a smaller group will avoid a heart attack or stroke, and so live many more months or possibly years, as a result of taking aspirin.

If you've already had a heart attack or stroke, or if you have angina or another heart or circulation problem, your doctor has probably prescribed low dose aspirin. There's good evidence that aspirin (or similar drugs, for those who can't take aspirin) can help prevent a second heart attack or stroke.

Find out more information about aspirin.

Links To The Headlines

Aspirin a day could dramatically cut cancer and heart disease risk - and even extend lifespan, study claims. Mail Online, November 30 2016

Links To Science

Agus DB, Gaudette E, Goldman DP, Messali A. The Long-Term Benefits of Increased Aspirin Use by At-Risk Americans Aged 50 and Older. PLOS One. Published online November 30 2016

'No need to wait to try again after miscarriage' advice

"Women who suffer a miscarriage should try for a baby again within six months, a major study has found," the Daily Mail reports.

Current guidance from the World Health Organization recommends couples wait at least six months before trying to conceive again after a miscarriage. But the researchers decided to investigate the validity of this recommendation, as it was based on a single study of women in the developing world.

The researchers looked at information from around 1 million women from 11 different countries around the world. They found no increase in adverse outcomes for women who got pregnant less than six months after a miscarriage compared with those who waited longer. In fact, these women had a reduced risk of further miscarriage and preterm birth.

So this study suggests these guidelines should be reviewed and that couples be advised that delaying pregnancy doesn't necessarily improve outcomes.

If you have had a miscarriage, you should avoid having sex until all of your miscarriage symptoms have gone. Your periods should return within four to six weeks of your miscarriage, although it may take several months to settle into a regular cycle.

Not every woman will feel physically and/or emotionally ready to try for another pregnancy. Charities such as the Miscarriage Association can provide advice and support about trying for another pregnancy.

Where did the story come from?

The study was carried out by researchers from the University of Malta and the University of Aberdeen and did not receive any funding.

The study was published in the peer-reviewed medical journal Human Reproduction Update and the authors declare no conflict of interest.

The media generally reported the story accurately, acknowledging that women are more likely to have a successful pregnancy if they conceive sooner after a miscarriage, rather than waiting.

The Daily Mail suggests that women who have a miscarriage "should try for a baby again within six months". However, not all women will feel emotionally ready to try again for a baby so soon.


What kind of research was this?

This was a systematic review and meta-analysis aiming to see if becoming pregnant less than six months after miscarriage is associated with adverse outcomes in the next pregnancy, compared with getting pregnant more than six months later.

Meta-analyses are a useful way of summarising many studies looking at the same outcomes, in this case adverse pregnancy outcomes.

However, this type of study will only ever be as good as the individual studies included, and weaknesses from these studies will be brought into the analyses.

The studies included were 13 cohort studies and three randomised controlled trials, from 11 different countries.

Cohort studies are a good way of looking at a link between two factors, but they cannot prove that one (becoming pregnant within six months of miscarriage) causes another (the outcome of the next pregnancy).


What did the research involve?

Researchers compared the results of 1,043,840 women from 16 studies. They then pooled the results of 10 similar studies, involving 977,972 women. They compared the difference in pregnancy-related outcomes between women who fell pregnant less than six months after having a miscarriage and those who fell pregnant more than six months after miscarriage.

They looked at outcomes including:

  • further miscarriage
  • preterm delivery
  • stillbirth
  • low birthweight
  • pre-eclampsia

The results were analysed and the risk of each outcome for the two groups of women (less than six months to pregnancy or more than six months to pregnancy) was calculated.


What were the basic results?

Results showed that, in women with less than six months between miscarriage and pregnancy compared with those with an interval of six months or more, there was a:

  • decreased risk of further miscarriage of 18% (risk ratio (RR) 0.82, 95% confidence interval (CI) 0.78 to 0.86)
  • decreased risk of preterm delivery of 21% (RR 0.79, 95% CI 0.75 to 0.83)

There was no significant difference between women with less than six months or more than six months between miscarriage and pregnancy for stillbirth, low birthweight or pre-eclampsia.


How did the researchers interpret the results?

The researchers concluded: "The results of this systematic review and meta-analyses show that an IPI [inter-pregnancy interval] of less than six months is associated with no increase in the risks of adverse outcomes in the pregnancy following miscarriage compared to delaying pregnancy for at least six months."

In fact, there is some evidence to suggest that chances of having a live birth in the subsequent pregnancy are increased with an IPI of less than six months.

They go on to add: "there is now ample evidence to suggest that delaying a pregnancy following a miscarriage is not beneficial and unless there are specific reasons for delay couples should be advised to try for another pregnancy as soon as they feel ready."



Conclusion

This study suggests that getting pregnant sooner after a miscarriage is associated with no more adverse outcomes than waiting for more than six months.

In addition, there appear to be better outcomes in terms of a lower risk of further miscarriage and possibly preterm birth. It should be pointed out that for preterm birth the result only reached statistical significance when one of the relevant studies was excluded, which limits our confidence in this result.

This study has strengths as it included a large number of women from many different countries. However, it also has limitations:

  • The way data was collected in the original studies varied. Some used mothers' recall while others gained information from databases, so the quality of data varied.
  • Studies had different definitions of miscarriage. While some included only spontaneous abortion (miscarriage), others did not distinguish between spontaneous and induced abortion.

In addition, there are a number of confounding factors that influence pregnancy outcomes, including:

  • maternal age
  • ethnicity
  • social class
  • smoking
  • alcohol
  • BMI
  • previous obstetric history

Other than maternal age, the included studies varied in addressing these potential confounding variables, which could have led to an over- or under-estimation of results.

Miscarriages are fairly common. Among women who know they're pregnant, it's estimated one in six of these pregnancies will end in miscarriage.

Recurrent miscarriages (losing three or more pregnancies in a row) are far less common, affecting only around 1 in 100 women.

If you do want to get pregnant again, you may want to discuss it with your GP or hospital care team. Make sure you are feeling physically and emotionally well before trying for another pregnancy.

The Miscarriage Association provides more advice about trying for another pregnancy.

Links To The Headlines

Women who lose their baby to miscarriage told to try again within six months because it can reduce the risk by a fifth. Daily Mail, November 30 2016

Successful pregnancy more likely sooner after miscarriage, say researchers. BBC News, November 30 2016

Try again for a baby within six months of miscarriage for best chance of success – new research. The Daily Telegraph, November 30 2016

Women who conceive within six months of miscarriage are 'more likely to get pregnant', study finds. The Independent, November 30 2016

Links To Science

Kangatharan C, Labram S, Bhattacharya S. Interpregnancy interval following miscarriage and adverse pregnancy outcomes: systematic review and meta-analysis. Human Reproduction Update. Published online November 17 2016

'Want to live longer? Try racquet sports', recommends study

"If you want to stave off death for as long as possible, you might want to reach for a tennis racquet," The Guardian reports.

A study looking at the impact of individual sports on mortality found racquet sports reduced the risk of death by around 47%.

Researchers also found reduced risks of death for people who took part in cycling, swimming and aerobics.

They didn't find such effects for people who took part in rugby, football or running, although this unexpected finding may be explained by the low number of deaths in these groups. The smaller the data set, the greater the chance that the results are influenced by chance.

While the researchers found taking part in some sports reduced the risk of death compared to not taking part, they did not directly compare the benefits of different sports. That means we can't say which sport is "best" for health.

What is clear from the study is that any sort of regular physical activity is likely to help us stay healthier and live longer.


Where did the story come from?

The study was carried out by researchers from the UKK Institute in Finland, University of Edinburgh, University of Oxford, Loughborough University and University of Exeter in the UK, Victoria University and University of Sydney in Australia, and University of Graz in Austria. No information about funding was provided.

The study was published in the peer-reviewed British Journal of Sports Medicine on an open-access basis, making it free to read online.

Most of the UK media reported that tennis and badminton were the "best" exercise, because people participating in these sports had the biggest reductions in risk of death compared to people not taking part.

However, these headlines ignore the fact that the effects of football and running were probably underestimated.


What kind of research was this?

This was a cohort study using information from eight health surveys in England and three surveys in Scotland, linked to data about deaths.

Cohort studies can spot links between factors such as taking part in exercise and length of life, but they can't prove that one factor causes another.


What did the research involve?

Researchers analysed questionnaires from 80,306 people. These people (average age 52, more than half women) were followed up for an average of nine years, and any deaths recorded.

After adjusting their figures to account for factors such as age, smoking and weight, researchers looked for links between how long people lived and whether they took part in a sport.

The questionnaires came from two big annual surveys, the Health Survey for England and the Scottish Health Survey. They used questionnaires from 11 years between 1994 and 2008. People were asked if they had taken part in any of the following sports during the past four weeks:

  • cycling
  • swimming
  • aerobics, keep fit, gymnastics or dance for fitness (combined as aerobics)
  • running or jogging (combined as running)
  • football or rugby (combined as football)
  • badminton, tennis or squash (combined as racquet sports)

For each of the sports included, researchers compared the chances of being alive at the end of the study between people who said they took part in that sport and people who didn't.

They tried to account for the seasonal nature of sports like football and rugby by spreading the questionnaires year-round, but this may have missed some participants.

In addition to age, smoking and weight, researchers took account of how much other physical activity (outside of the named sports) people did, as well as the following confounders:

  • long-term illness
  • alcohol use
  • mental health
  • education level
  • diagnosis of cardiovascular disease


What were the basic results?

Of the 80,306 people studied, 8,790 (10.9%) died during the average nine years of follow-up.

After adjusting their figures for confounding factors, researchers found that people who took part in sports had the following reduced chances of death during the study:

  • 15% lower for cycling (hazard ratio (HR) 0.85, 95% confidence interval (CI) 0.76 to 0.95)
  • 28% lower for swimming (HR 0.72, 95% CI 0.65 to 0.80)
  • 47% lower for racquet sports (HR 0.53, 95% CI 0.40 to 0.69)
  • 27% lower for aerobics (HR 0.73, 95% CI 0.63 to 0.85)
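
The percentage figures above follow directly from the hazard ratios: a hazard ratio of 0.53 means the risk was 0.53 times that of non-participants, i.e. (1 − 0.53) = 47% lower. As a quick illustrative sketch (sport names and values taken from the list above):

```python
# Percent risk reduction implied by a hazard ratio: (1 - HR) * 100
hazard_ratios = {
    "cycling": 0.85,
    "swimming": 0.72,
    "racquet sports": 0.53,
    "aerobics": 0.73,
}

for sport, hr in hazard_ratios.items():
    reduction = round((1 - hr) * 100)  # e.g. 1 - 0.53 = 0.47, i.e. 47%
    print(f"{sport}: {reduction}% lower risk of death")
```

Note that the confidence intervals quoted alongside each ratio matter as much as the point estimate: where an interval crosses 1, as happened for running and football, the result is not statistically significant.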

They did not find a statistically significant reduced chance of death for people taking part in running or football.

They found reduced chances of death from heart disease or stroke for swimming, racquet sports and aerobics, but not for running, cycling or football.


How did the researchers interpret the results?

The researchers said their results "demonstrate that participation in specific sports may have significant benefits for public health". They said they had found "robust evidence" that swimming, racquet sports, cycling and aerobics were linked to reduced chance of death.

They acknowledge their findings on running were "surprising" in light of four big studies conducted previously. They suggest the low number of deaths among people who went running (68 of 4,012 runners, or 1.6%) could have prevented the results from reaching statistical significance.

They also say that asking people only about their participation in running during the past four weeks could have been misleading, as those who jogged occasionally may have been counted alongside those who ran regularly, year-round. They say their result should be seen as adding to the body of evidence supporting running, rather than contradicting it.

Similarly for football, they say the results were "somewhat unexpected" and may reflect only the low numbers of people in the study who said they played football.



Conclusion

The overall conclusion we can take from this study is that taking part in sport or fitness activities is linked to a lower chance of death in a given period.

It's encouraging to see that a wide range of popular activities, including swimming, aerobics and cycling, are likely to be beneficial.

But we should be wary about comparing different sports against each other. They weren't directly compared in the study, and there may be reasons why results for some activities, such as football and running, were found to be statistically non-significant (potentially down to chance).

Statistician Professor David Spiegelhalter said that making a distinction between the sports was "simply not valid" and the differing results only reflected the small number of deaths among football players and runners.

The statistical uncertainty may have come about because of the way in which the results were adjusted to take account of confounding factors. For example, runners are likely to be non-smokers, younger, do more exercise overall and be leaner, compared to people who don't run – all of which will reduce their chances of death.

Once you've taken these factors into account, the additional impact of running may be hard to measure.

Professor Spiegelhalter points out that because this is an observational study, we can't really tell whether taking part in those sports where researchers did find a statistically significant result actually caused the lower death rate among participants.

He said it was "equally plausible" that "those at increased risk of death over the next few years are less likely to be healthy enough to play active sports now."

So what should people do as a result of the study?

The sensible advice seems to be to find a physical activity you enjoy – whether that's swimming, tennis, dancing, football or anything else that gets you out of breath – and take part. The more you enjoy an activity the greater the possibility that you will carry on doing it on a long-term basis.

While we can't say that one sport is better than others at helping you to live longer, evidence shows that physical exercise is likely to keep us fitter, healthier and happier for longer.

Read more about the benefits of exercise and how to get active your way.

Links To The Headlines

Health racquet: tennis reduces risk of death at any age, study suggests. The Guardian, November 29 2016

Forget that jog: Why squash and tennis are the best way to stay fit in middle age. Daily Mail, November 30 2016

Want to live for longer? Pick up a racket. ITV News, November 30 2016

Revealed: The best sports to ensure a long life... and it's bad news for joggers. Daily Mirror, November 30 2016

Why tennis could save your life – but football and running may not help you live longer. The Daily Telegraph, November 29 2016

Links To Science

Oja P, Kelly P, Pedisic Z, et al. Associations of specific types of sports and exercise with all-cause and cardiovascular-disease mortality: a cohort study of 80 306 British adults. British Journal of Sports Medicine. Published online November 28 2016

Lack of sleep may disrupt development of a child's brain

"New brain scans reveal sleep deprivation damages children's brains more than previously thought," the Mail Online reports.

Researchers measured the brain activity of children whose sleep had been restricted to around half its normal duration and found some potentially worrying signs.

The study included 13 children aged between five and 12 and compared the effects of a normal night's sleep (9pm bedtime) with a restricted night's sleep (2am bedtime), both with the same wake up time.

Previous studies in adults have shown that sleep restriction increases deep sleep waves – patterns of brain activity associated with the deepest sleep – in the front region of the brain.

The researchers found similar effects in children, but this time at the back and side regions of the brain involved in planned movements, spatial reasoning, and attention.

The researchers were concerned this could impact on the development of the brain. Neural structures inside the brain change and adapt to the stimuli the brain receives, a concept known as plasticity. The worry is that the deep sleep waves could disrupt or slow down normal plasticity development.

They also found that sleep deprivation was linked with some structural changes to the myelin sheath – the fatty coating on nerve fibres going towards the back of the brain. However, it's quite a big step to say this results in disruption to brain development.

This study was tiny and only observed short-term effects. We have no idea whether similar sleep deprivation would have any long-term effect on a child.


Where did the story come from?

The study was carried out by researchers from a number of institutions including the University of Colorado and University Hospital Zurich.

Funding for the research was provided by the Swiss National Science Foundation, the Clinical Research Priority Program Sleep and Health of the University of Zurich, the Jacob's Foundation and the National Institutes of Health.

The study was published in the peer-reviewed medical journal Frontiers in Human Neuroscience on an open-access basis so it is free to read online.

The Mail Online's reporting of the study was generally accurate but some of the language used in the reporting was over the top. While the results of the study certainly deserve consideration, claims that they amount to "staggering damage" are unproven and exaggerated.


What kind of research was this?

This was a cross-sectional study which aimed to assess whether sleep deprivation in school-age children could have an effect on brain activity and development.

The researchers explain how previous research in adults has shown that the brain responds to sleep deprivation by increased depth of sleep (non-REM sleep).

This has been demonstrated by increased slow-wave activity (SWA) when monitoring the person's brain while they slept, using an electroencephalogram (EEG). An EEG uses a series of sensors placed around the scalp to monitor the electrical activity of the brain. SWA shows up as a distinct wave-like pattern.

When adults are sleep deprived, this SWA response is usually seen in the front of the brain. The researchers chose to study children as it is not known how their brain responds to acute sleep restriction, and whether any effects seen could be related to brain development.

This kind of study is good for identifying patterns but the very small sample size may make these results unreliable. It is also not able to predict whether these changes may affect longer term outcomes.


What did the research involve?

The researchers included 13 healthy children with no sleep problems aged between five and 12 years. The children were given a sleep programme to follow – either habitual sleep, going to bed at around 9pm, or restricted sleep, where bedtime was delayed until about 2am so the children slept for only around half their usual time. Both groups had the same morning wake time of 7am.

The restricted sleep group were kept awake by the research team, who played games or read with them. The programme was verified by actigraphy – a non-invasive method of monitoring activity that uses devices similar to commercial fitness tracker wristbands – and by sleep diaries.

While they were asleep, the children's brain wave patterns were monitored by EEG, where electrodes are attached to the scalp and send signals to a computer to record the results.

Three different time windows were analysed in both sleep settings:

  • The first hour of sleep – to see the effect of restricted sleep when under the greatest level of sleep deprivation.
  • The final hour of sleep – to compare the effect of restricted sleep just before waking up.
  • The last common hour of sleep – comparing brain activity after a common duration of sleep in both scenarios (this would be the 6-7am sleep window if going to sleep at 2am, compared to the 1-2am sleep window if going to sleep at 9pm).

Magnetic resonance imaging (MRI) was used in all children to measure the level of myelin present; this is a fatty coating around the nerve fibres in the brain that helps transmit nerve signals. The researchers looked at this as a possible marker for effects on brain development.


What were the basic results?

In general the researchers found that when sleep was restricted, the children, like adults, had increased depth of sleep, or non-REM sleep, as indicated by increased slow-wave activity (SWA). However, the brain location was different to adults.

Rather than the front regions of the brain, SWA was towards the side and back regions of the brain (parieto-occipital region).

This area of the brain has many functions, including processing visual signals (occipital lobe) and sensory information (parietal lobe), and is involved in planned movements, spatial reasoning and attention.

It appears that for children this region may be more susceptible, and possibly vulnerable, to a lack of sleep.

Sleep restriction also seemed to be linked with the amount of water in the myelin coating a developing optic nerve fibre towards the back of the brain on both sides. The potential implications of this are unclear.

How did the researchers interpret the results?

The researchers conclude that the slow-wave activity response to acute sleep restriction in children shows an effect on the ongoing refinement of the nerve fibres, with observable changes to the structure of the myelin sheath.

They suggest "future studies are needed to investigate the functional consequences of inadequate sleep during different stages of development and to identify the key factors involved in the generation of the posterior homeostatic response in school-age children" – roughly translated, how balance is achieved in the back portions of the brain.


Conclusion

This cross-sectional study aimed to see whether sleep restriction in children could affect brain activity in a similar way to adults, and whether this may have an effect on brain development.

They found that sleep deprivation does lead to deeper sleep patterns in the side and back regions of the brain, and this also seemed to be linked with an effect on the myelin coating certain nerve fibres.

This potentially indicates that sleep deprivation may affect the developing brain of school age children – but this is quite a big leap.

The findings might seem worrying to parents and children, but it's important to note a number of limitations to this study.

Firstly, this is a very small study including only 13 healthy children without sleeping problems. The same findings in these children may not be repeated in another sample of children.

They also can't tell us whether similar or different effects would be observed in children who have sleep difficulties. For example, children who regularly have reduced or disrupted sleep for whatever reason may have developed adaptive mechanisms.

As the study did not take measurements over a very long period of time we also don't know whether the observed changes are long lasting. This would need to be assessed in further research.

Finally, we have no idea whether the effects observed would actually have an impact on the child's learning, development or day-to-day function.

Trouble sleeping can be a problem for children and adults; however, there are things you can do to try to get a better night's sleep.

Between 9 and 11 hours' sleep a night is recommended for children aged 5 to 12.

Encouraging children to exercise for at least 60 minutes a day, cutting out caffeinated drinks such as cola during the evening, and not overeating before bedtime can help children have good quality sleep.

Read more about sleep advice in children.

Links To The Headlines

Why bedtime is SO important: Study reveals the widespread damage a late night does to children's brains. Mail Online, November 28 2016

Links To Science

Kurth S, Dean III DC, Achermann P, et al. Increased Sleep Depth in Developing Neural Networks: New Insights from Sleep Restriction in Children. Frontiers in Human Neuroscience. Published online September 21 2016

Expensive IVF add-ons 'not evidence based'

"Nearly all costly add-on treatments offered by UK fertility clinics to increase the chance of a birth through IVF are not supported by high-quality evidence," BBC News reports, covering the findings of a review by experts in evidence-based medicine.

IVF "add-ons" include a wide variety of treatments such as pre-implantation genetic screening, where the chromosomes of conceived embryos are checked for genetic conditions, and transfer of a "mock" embryo, as well as various drug treatments for blood clotting and immunity.  

The researchers reviewed 38 interventions offered by private clinics, and found most of them aren't supported by good evidence.

The NHS watchdog the National Institute for Health and Care Excellence (NICE) only provides clear recommendations for the use of 13 of these treatments, and most of these should only be used in specific circumstances. 

Systematic reviews have been carried out for 27 interventions, but there is only evidence that a handful actually improve live birth rates. Even then, the underlying studies behind the reviews have quality issues.

People seeking fertility treatment in the UK can be in a vulnerable situation and end up paying thousands to private clinics for treatments that may or may not work.

The authors of this review and other experts have rightly called for good-quality research into these treatments, and the publication of patient-friendly summaries so people can make an informed decision about their treatment.

Until then, while not particularly user-friendly, websites like NHS Evidence, the Trip Database and the Cochrane Library provide up-to-date information on the evidence base for various interventions.

Where did the story come from?

The study was carried out by researchers from the Centre for Evidence-Based Medicine at the University of Oxford.

The Centre was commissioned by the BBC Panorama team to carry out an independent review of the evidence for fertility treatments additional to IVF in the UK.

However, the BBC was said to have had no role in the review’s protocol, methodology or the interpretation of the findings.

Individual researchers also declared funding from several other sources, including the World Health Organization (WHO), the National Institute for Health Research, and the Wellcome Trust.

The study was published in the peer-reviewed British Medical Journal (BMJ) on an open access basis, so it is free to read online.

What kind of research was this?

This review aimed to look at the evidence on fertility treatment. As the researchers say, about one in seven couples are affected by fertility problems.

Many treatment options are extremely costly, with a reported 59% of them not funded by the NHS. This can put a large financial burden on couples.

But is there actually enough evidence to say these treatments are safe, effective and based on the latest research?

The authors aimed to try to provide evidence for a set of questions the Human Fertilisation and Embryology Authority (HFEA), the regulator of fertility treatment in the UK, suggests couples seeking fertility treatment might wish to ask when considering their treatment:

  • Is this treatment recommended by NICE? If not, why not?
  • Are there any adverse effects or risks (known or potential) of the treatment?
  • Has this treatment been subjected to randomised controlled clinical trials that show it is effective, and is there a Cochrane review available?

The Cochrane organisation produces internationally recognised systematic reviews on primary research in healthcare.

Cochrane reviews are considered to be of the highest standard of evidence-based healthcare resources.

What did the researchers look at?

The researchers first obtained a list of all clinics that provide fertility treatment in the UK from the HFEA.

They reviewed the websites of these clinics to gather a list of the treatments they offer to try to improve fertility outcomes, aside from standard IVF.

They excluded treatments for specific conditions like spinal injury or polycystic ovaries, treatments involving donor eggs or sperm, and complementary therapies. This gave 38 fertility treatments.

Six were described as alternatives to IVF, including intra-cytoplasmic sperm injection (ICSI) – where the sperm is injected directly into the egg – and intrauterine insemination. 

Five were described as preservation treatments, which included freezing of eggs, sperm and embryos.

The remaining 27 treatments were classed as "add-ons" to fertility treatment. These included a wide variety of interventions, such as genetic screening prior to implantation, sperm DNA testing, mock embryo transfer, antioxidants and aspirin treatment.

For all of the 38 treatments, the researchers looked for evidence in two literature databases to identify systematic reviews and randomised controlled trials, or next-best evidence if not available, published up to April 2016.

Are the treatments recommended by NICE?

NICE gives clear evidence recommendations for about a third of the treatments investigated (13 interventions, 34%). The 11 recommended treatments are advised only when there are specific indications.

These evidence-based treatments include ICSI, sperm, egg and embryo freezing, frozen embryo transfer, ovulation induction, and intrauterine insemination.

NICE specifically advises against two interventions: assisted hatching and examination of the uterus (hysteroscopy).

For 19 interventions, NICE either did not mention their use or the evidence was unclear. This included various pre-implantation genetic testing methods, mock embryo transfer, time lapse embryo imaging, and ovarian tissue freezing. 

Six other interventions had research recommendations, including the use of aspirin, heparin and steroids. See the original study for the full list.

How good is the evidence?

A systematic review of the evidence had been carried out for just under three-quarters of the procedures (27 out of 38).

There was review-level evidence that only five of the 38 interventions improved live birth outcomes:

  • blastocyst culture – where embryos are transferred after a few days' incubation
  • endometrial scratching – a procedure to help embryos implant in the uterus lining
  • adherence compounds – where compounds are used to increase the possibility that implanted embryos will adhere to the uterus
  • antioxidant treatment – where one, or both, of the parents are given antioxidants prior to IVF treatment
  • intrauterine insemination in a natural cycle – where insemination is timed to coincide with a woman's natural menstrual cycle in an attempt to maximise the chances of success

However, even for these interventions, there were quality limitations for the underlying studies. There was insufficient evidence for 13 interventions, and seven were found to have no effect on birth rates.

There was no systematic review evidence available for 11 interventions, and for eight of these only a single trial or observational study was identified that showed no benefit.

Three treatments had no evidence at all beyond expert opinion: segmented IVF (separating collection and transfer cycles), dummy embryo transfer, and quad therapy (a combination of four drugs affecting the clotting and immune system).

What are the possible side effects?

Evidence on the harms of fertility treatments seems limited. NICE only mentions that for IVF with or without ICSI there is a low risk of long-term adverse effects, and the possibility of a small increased risk of ovarian cancer can't be ruled out.

When using drugs to stimulate ovulation, NICE recommends that the lowest possible dose and duration should be used.

The reviews provided limited information on harms, mostly because the underlying studies were unclear or said little about them.


Conclusion

The researchers rightly say "people seeking fertility treatment need good-quality evidence to make informed choices".

As the system currently stands, people seek treatment from a variety of private UK fertility clinics.

In their desire for a baby, many couples are in a vulnerable situation and rely heavily on the guidance of health professionals.

But clinics may offer treatments that aren't sufficiently backed by the evidence.

The researchers highlight several problems. The standard first-step recommendation is for people to ask their GP for advice.

But GPs are unlikely to have the specialist knowledge around the evidence on the safety and effectiveness of various fertility treatments. Evidence-based and up-to-date online sources for further information are also lacking.

The researchers suggest that the two regulators, NICE and HFEA, could work together to provide clear guidance for patients and professionals on the services available and the evidence behind them.

They say advice should focus particularly on live birth rates rather than pregnancy rates, which don't give a good indication of success.

Various experts have also commented. Dr Yakoub Khalaf, from King's College London, succinctly points out that "what does not add value to treatment should not add to the bill".

And a fitting conclusion is provided by Professor Adam Balen, chair of the British Fertility Society, who said: "It is important that patients receive full information about everything that is being offered, the current evidence for benefit and whether there are any side effects or risks associated with it."

Links To The Headlines

'No solid evidence' for IVF add-on success. BBC News, November 28 2016

IVF rip-off exposed: Undercover TV investigation reveals how expensive 'add on' fertility treatments are 'of no benefit' and could even be harmful. Daily Mail, November 28 2016

Desperate would-be parents charged for expensive packages on top of IVF treatment 'may not increase fertility'. The Sun, November 28 2016

Majority of 'add-on' fertility treatments not supported by science – damning report. The Daily Telegraph, November 28 2016

Couples exploited by fertility clinics offering 'add ons'. The Times, November 28 2016 (subscription required)

Links To Science

Heneghan C, Spencer EA, Bobrovitz N, et al. Lack of evidence for interventions offered in UK fertility centres. BMJ. Published online November 28 2016

Low social status 'damages immune function'

"Simply being at the bottom of the social heap directly alters the body," BBC News reports. The headline is based on a study in which researchers used female monkeys to simulate social hierarchies.

Monkeys of low social status were found to have biomarkers indicating poor immune function and possible increased vulnerability to infection.

The researchers arranged the monkeys into social groups and observed behaviours for two years to determine the social hierarchy. They then "mixed-up" the groups so that some of the monkeys were introduced into other groups as the "new girl". This effectively meant that the "newbie monkey" was stripped of all social status.

They then took blood samples to look at any effect this had on the immune system. The study found that social rankings in the monkey groups had an effect on white blood cells involved in fighting off disease. These findings suggested that the stress of a lower social ranking may increase inflammation and reduce resistance to infection and illness.

Although this study was specific to monkeys, the researchers argue that these findings are also applicable to humans. We do, after all, share much of our DNA with them.

Still, social status is a subjective concept not an objective fact. It only matters if you let it matter. As Eleanor Roosevelt famously said: "No one can make you feel inferior without your consent".


Where did the story come from?

The study was carried out by researchers from a number of international institutions in the US, Canada and Kenya, including Duke University, Emory University, the Universite de Montreal, and the Institute of Primate Research in Nairobi.

It was funded by grants, including one from the Canada Research Chairs Program.

The study was published in the peer-reviewed scientific journal Science.

BBC News and the Mail Online's reporting was fairly accurate, although both outlets were quick to apply the findings to humans without highlighting that social hierarchies in primates, and their resulting influences, may differ from those found in humans.

It could be the case that the primates in question – rhesus monkeys – were more sensitive to loss of social status than humans would be.


What kind of research was this?

This was an animal study which aimed to investigate how social status influences the immune system in captive adult female rhesus macaques.

Evidence has shown that social status is one of the strongest predictors of disease and death in humans. As rhesus macaques naturally form linear hierarchies (social groups where there is a clear pattern of rank), this study wanted to investigate the potential effects of social status by further exploring if and how it alters the immune system on a genetic level.

Animal studies are useful early stage research, especially in primates due to their biological similarity to humans. However, the social hierarchies observed in monkeys are not necessarily representative of those seen in humans.


What did the research involve?

The researchers conducted their investigation using 45 adult female rhesus macaques in captivity. In captivity, it's possible to manipulate the social hierarchies formed in these monkeys by the order in which the monkeys are introduced to new social groups. The monkeys were all unrelated and had never met each other before.

Nine groups containing five monkeys each were formed, and these groups were maintained and observed (phase one). The monkeys were ranked, with higher status corresponding to a higher value. Social status was determined by observing whether an individual female was groomed by other monkeys (seen as a sign of high status) or, conversely, harassed by them (a sign of low status).

After a year, these groups were rearranged by introducing the females from phase one, one by one, from the same or adjacent ranks, into new groups (phase two). These groups were again followed for a year.

Alongside this qualitative observation, blood samples from the monkeys were analysed before and after each phase. The blood samples were analysed for any changes in the composition of white blood cells.


What were the basic results?

This study found a positive association between a monkey's rank and the activity of two specific types of white blood cell: T-helper cells and natural killer (NK) cells. T-helper cells play an overall role in regulating the immune system, while NK cells destroy infected or abnormal cells.

The researchers found that improvements in social status were reflected in the gene activity of these cells.

  • The gene activity of NK cells was the most responsive to social status. Researchers identified 1,676 genes that were responsive to rank. This was closely followed by the gene activity of T-helper cells (n=284 genes).
  • Weaker links were identified between monkey ranks and the activity of B-cells that produce antibodies (n=68 genes), and cytotoxic T-cells, another type of cell that targets and destroys abnormal cells (n=15 genes).
  • There was no detectable effect on gene expression in purified monocytes – a type of white blood cell that develops into macrophages, which "eat" or engulf dead and damaged cells.

Additionally, they found the rate of received harassment accounted for a considerable proportion of the rank-related gene activity in T-helper and NK cells (17.3% and 7.8% respectively). Grooming rates (how often, or not, an individual monkey was groomed by other monkeys) had more influence on the activity of NK genes (33.4% of all rank-responsive genes).


How did the researchers interpret the results?

The researchers say their results suggest that most effects of social status are immune cell type–specific. They conclude: "Our findings provide insight into the direct biological effects of social inequality on immune function, thus improving our understanding of social gradients in health."


Conclusion

The negative effect of social deprivation on health has long been recognised. This has often been attributed to an increase in unhealthy behaviours such as smoking, drinking too much alcohol, poor diet and being overweight.

However, this study looked at a slightly different aspect – observing the effects of social status through relationships with others – and suggesting this may have wider health effects than just influencing our lifestyle and health behaviours.

They found that a monkey's rank changed the gene activity of specific types of white blood or immune cell, and altered their numbers. Therefore, social status or social deprivation could directly influence the body's resistance to infection and disease.

One of the researchers, Dr. Noah Snyder-Mackler, told the BBC: "It suggests there's something else, not just the behaviours of these individuals, that's leading to poor health.

"Our message brings a positive counter to that – there are these other aspects of low status that are outside of the control of individuals that have negative effects on health."

These findings are interesting, but even though primates are generally quite similar to humans in both genetic make-up and social interactions, they aren't exactly the same.

Nevertheless, these results could help further our understanding of the effects of social factors on health in humans.

If social status does affect human health by lowering feelings of self-esteem, there are ways of increasing your self-esteem that don't involve money or status.

These include connecting with others, learning new skills and taking time to help the less fortunate. Read more about boosting your self-esteem.

Links To The Headlines

Low social status 'can damage immune system'. BBC News, November 25 2016

Being popular is GOOD for your health: Climbing the social ladder can strengthen the immune system. Mail Online, November 25 2016

Links To Science

Snyder-Mackler N, Sanz J, Kohn JN, et al. Social status alters immune regulation and response to infection in macaques. Science. Published online November 25 2016


Just a small cut in saturated fats 'reduces heart disease risk'

"Swapping butter and meat for olive oil and fish does cut the risk of heart disease," The Times reports.

The headline is prompted by the findings from a US study involving data from over 100,000 men and women, followed for more than 20 years. The results showed that consumption of different types of saturated fats was associated with an increased risk of coronary heart disease.

The researchers also found that replacing just 1% of energy consumed in the form of saturated fats with polyunsaturated fats, wholegrain carbohydrates or plant proteins was associated with a 5-8% decreased risk of coronary heart disease.

The debate regarding the risks of "sat fats" continues.

A report we discussed in May this year argued that the current UK guidelines on saturated fats were flawed, as there was no proven link between saturated fat consumption and heart disease. But critics attacked the report for lacking independent peer review. The British Heart Foundation said it did not offer enough evidence to "take it seriously".

Current guidelines recommend that men eat no more than 30g of saturated fat a day and women no more than 20g, and this latest research appears to support these guidelines.


Where did the story come from?

The study was carried out by researchers from the Harvard T.H. Chan School of Public Health in the US and the Unilever Research & Development institute in the Netherlands. It was funded by the National Institutes of Health and the National Heart, Lung, and Blood Institute, and supported by Unilever. The study was published in the peer-reviewed British Medical Journal (BMJ) on an open-access basis, so it is free to read online.

One author declares they are supported by a grant from Unilever Research & Development and three other authors are employees of Unilever Research & Development. Unilever is a producer of food consumer products and as such there may be a conflict of interest.

Generally the UK’s media reported the story accurately.

However, the Daily Mail suggests that the fats identified in the study as increasing risk of coronary heart disease "be replaced in diets by other food like carbohydrates".

This may be misleading, as food products perceived by the public as carbohydrates may also contain ingredients such as butter that are high in saturated fats. The study only looked at wholegrain carbohydrates as a replacement for these fats.


What kind of research was this?

This was a longitudinal cohort study, which recruited male and female health professionals and followed them for over 20 years to assess how proportions of saturated fatty acids in the diet might affect risk of coronary heart disease later on.

This type of study is useful for suggesting links between factors but cannot prove that one factor – saturated fat intake – causes another – coronary heart disease.

The researchers tried to control for confounding factors, but there may be unmeasured factors, such as stress, that affect risk of coronary heart disease.


What did the research involve?

Researchers used data from the Nurses' Health Study, which included 73,147 female nurses and a cohort of 42,635 men from the Health Professionals Follow-up Study.

Information was collected at study baseline (1984 in the Nurses' Study and 1986 in the Health Professionals Study) on medical history, lifestyle, potential risk factors and disease diagnosis.

Participants also completed a food frequency questionnaire at baseline and then every four years until 2010, in which they were asked how often they had consumed specific foods in the previous year, ranging from "never" to "at least six times per day". Cumulative averages of food intake were calculated from all dietary questionnaires completed during follow-up.

Saturated fatty acids were distinguished by the length of their carbon chain. The number on the left indicates the number of carbon atoms and the number on the right the number of double bonds (saturated fatty acids do not have any double bonds). Therefore lauric acid (12:0) has 12 carbon atoms with no double bonds.

The major fatty acids included in the analysis were:

  • lauric acid (12:0), found in high quantities in coconut and palm kernel oils
  • myristic acid (14:0), found in cheese, butter, fresh and dried coconut and coconut oil
  • palmitic acid (16:0), found in palm oil, palm kernel oil, coconut oil and often in chocolate
  • stearic acid (18:0), found in butter, milk, meat/poultry/fish, lard, grain products and cocoa butter
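
The carbon:double-bond shorthand above is mechanical enough to express as a small helper. A minimal sketch – the function name and return format are illustrative, not from the study:

```python
# Parse lipid shorthand such as "12:0" (carbon atoms : double bonds).
# Saturated fatty acids, by definition, have zero double bonds.
def parse_lipid_notation(code):
    carbons, double_bonds = (int(part) for part in code.split(":"))
    return {"carbons": carbons,
            "double_bonds": double_bonds,
            "saturated": double_bonds == 0}

# Lauric acid: 12 carbons, no double bonds, so saturated.
print(parse_lipid_notation("12:0"))
# {'carbons': 12, 'double_bonds': 0, 'saturated': True}
```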

Age-adjusted intake of individual saturated fatty acids was calculated and risk of non-fatal and fatal coronary heart disease was determined. The researchers adjusted their results to take into account the following possible confounding factors:

  • ethnicity
  • family history of myocardial infarction (heart attack)
  • body mass index
  • cigarette smoking
  • alcohol intake
  • physical activity
  • multivitamin use
  • menopausal status
  • postmenopausal hormone use
  • current aspirin use
  • baseline hypertension
  • baseline hypercholesterolemia
  • total energy intake


What were the basic results?

All participants were free of chronic illness at the beginning of the study. During the follow-up period, 7,035 cases of coronary heart disease were identified (4,348 were non-fatal; 2,687 were fatal).

Higher consumption of one type of fatty acid was associated with higher consumption of all fatty acids analysed.

Comparing groups with the highest and lowest intake of individual saturated fat intakes, there was an increased risk of coronary heart disease of:

  • 13% (hazard ratio (HR) 1.13, 95% confidence interval (CI) 1.05 to 1.22) for 14:0 chain
  • 18% (HR 1.18, 95% CI 1.09 to 1.27) for 16:0 chain
  • 18% (HR 1.18, 95% CI 1.09 to 1.28) for 18:0 chain
  • 18% (HR 1.18, 95% CI 1.09 to 1.28) for 12:0 to 18:0 chains combined

Replacement of 1% of energy intake from the 12:0 to 18:0 chain fats was associated with a reduced coronary heart disease risk of:

  • 8% (HR 0.92, 95% CI 0.89 to 0.96) when replaced by polyunsaturated fat
  • 6% (HR 0.94, 95% CI 0.91 to 0.97) when replaced by wholegrain carbohydrates
  • 7% (HR 0.93, 95% CI 0.89 to 0.97) when replaced by plant proteins
  • There was no significant decrease when replaced by monounsaturated fat (HR 0.95, 95% CI 0.90 to 1.01)
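
As a quick check on how the quoted percentages relate to the hazard ratios, the conversion is simple arithmetic. A sketch for illustration only, not part of the study's analysis:

```python
# Convert a hazard ratio into the percentage change in risk:
# HR > 1 means increased risk, HR < 1 means decreased risk.
def hazard_ratio_to_percent_change(hr):
    return round((hr - 1) * 100)

print(hazard_ratio_to_percent_change(1.18))  # 18  -> an 18% increase in risk
print(hazard_ratio_to_percent_change(0.92))  # -8  -> an 8% decrease in risk
```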

Participants who consumed higher proportions of saturated fatty acids were also more likely to be white and to be non-smokers, did less physical activity, were less likely to take multivitamins, and had a higher total energy intake.


How did the researchers interpret the results?

The researchers concluded that "dietary replacement of 12:0-18:0 with more healthy macronutrients – such as polyunsaturated fat and wholegrain carbohydrates – was associated with a lower risk of coronary heart disease".

They further add that "owing to high correlations among individual saturated fatty acids (SFAs) in diet, these findings support the current dietary recommendations that focus on replacement of total saturated fat as an effective approach to preventing cardiovascular disease. The public health and clinical significance of modulating the content of individual SFAs in specific foods should be further evaluated".


Conclusion

This study shows an association between increased intake of individual saturated fats and increased risk of coronary heart disease.

It also shows a link between the replacement of these fatty acids with other types of fat, plant protein, or wholegrain carbohydrates and a reduction in coronary heart disease risk.

The strengths of this study are the large sample size and long follow-up period that looked at repeated measures such as diet, lifestyle and health outcomes.

It also provides clear support for dietary guidelines that recommend replacing dietary energy from saturated fats with polyunsaturated fats as well as wholegrain carbohydrates and plant source proteins.

However, there are a number of limitations to the study:

  • Although the study adjusted for confounding variables, there may be other factors that were not accounted for. For example, stress and life events might be contributors to coronary heart disease, but were not measured.
  • The analysis was based on self-reported dietary intake and therefore may be subject to recall bias.
  • The study populations comprised health professionals, who might have very similar lifestyles to one another; therefore the results may not be representative of other populations.
  • Finally, most people did not eat only one type of saturated fat, so it is hard to disentangle which has the stronger association with coronary heart disease.

Also, the study did not consider other types of fatty acids, such as those found in dairy products, that may have beneficial effects.

There is ongoing controversy about how much of a threat saturated fats actually pose to health. But aiming for even a 1% reduction in saturated fats in your daily energy intake, in order to gain a possible 5-8% reduction in heart disease risk, could be a sensible option. 
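As a hypothetical worked example of what that estimate implies (the 10% baseline risk below is an assumption for illustration, not a figure from the study):

```python
def risk_after_reduction(baseline_risk, relative_reduction):
    """Apply a relative risk reduction to an absolute baseline risk."""
    return baseline_risk * (1 - relative_reduction)

baseline = 0.10  # assumed 10-year heart disease risk of 10% (illustrative)
# A 5-8% relative reduction would lower that absolute risk to 9.2-9.5%:
print(round(risk_after_reduction(baseline, 0.05), 3))  # 0.095
print(round(risk_after_reduction(baseline, 0.08), 3))  # 0.092
```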

Links To The Headlines

Butter is still bad for heart, say scientists. The Times, November 24 2016

Heart disease risk 'cut by swapping saturated fat for healthier energy sources'. Mail Online, November 24 2016

Eating just a little less butter each day drastically reduces YOUR heart attack risk, says major Harvard study. The Sun, November 24 2016

Links To Science

Zong G, Li Y, Wanders AJ, et al. Intake of individual saturated fatty acids and risk of coronary heart disease in US men and women: two prospective longitudinal cohort studies. BMJ. Published online November 23 2016

Review questions recent official vitamin D guidance

"Vitamin D pills branded 'waste of time' and could even be 'harmful' according to new research," The Sun reports. But, despite the headline, no new research has been done.

The news comes from a review of existing evidence published in the peer-reviewed British Medical Journal (BMJ), which questioned recent government advice on vitamin D supplements.

In July this year, Public Health England (PHE) recommended that everyone in the UK should consider taking 10mcg of vitamin D supplements daily in the autumn and winter.

They also recommended that people at risk of having low vitamin D levels should take supplements all year round.

The public health body was concerned that a combination of limited sunlight exposure – which stimulates the production of vitamin D – and a diet low in vitamin D may contribute to very low vitamin D levels, known as deficiency, in some people.

Vitamin D deficiency can lead to a range of complications, including a condition called osteomalacia, which makes bones soft, painful and more likely to break.

What is the basis for these current reports?

Researchers from the University of Auckland and the University of Aberdeen published a review in the BMJ that questions the evidence behind PHE's recommendation on vitamin D supplements.

The researchers say that, despite much good-quality research into vitamin D, there is no evidence that taking vitamin D supplements alone reduces the risk of fractures or falls, or improves bone strength.

They say two studies into giving vitamin D supplements with calcium, which involved elderly women with very low vitamin D levels living in care homes, did show a reduction in fractures.

But studies in people who did not live in care homes did not find the same results.

The authors also looked at reports of other possible benefits of vitamin D, beyond bone and muscle health, and found "no consistent effect".

As the authors of the review did not outline their search strategy for their evidence, it should be considered a narrative review (where researchers highlight evidence supporting their argument) rather than a systematic review (where researchers consider all available appropriate evidence).

A systematic review is considered to have more "weight of evidence".

The BMJ also included a "right of reply" piece by Dr Louis Levy, head of nutrition science at Public Health England.

Dr Levy makes the point that, "Vitamin D is found in only a small number of foods, including oily fish, red meat, liver and egg yolk, so it's not easy to get what you need from your diet alone."

"People with darker skin, from African, Afro-Caribbean, and south Asian backgrounds, may not get enough vitamin D from sunlight in the summer and should also consider taking a supplement all year round."

Most experts agree that people who are at high risk of low vitamin D levels may benefit from taking vitamin D supplements.

What is vitamin D deficiency?

Vitamin D is important for building healthy bones and muscles. Severe vitamin D deficiency can cause malformed bones (rickets) in children and osteomalacia in adults.

However, there's much scientific debate about what is considered a vitamin D deficiency and how much we need daily.

In July, PHE said everyone should aim to have a dietary intake of 10mcg of vitamin D daily, as it was difficult to say how much vitamin D is made by sunlight on skin.

Most of our vitamin D is made by sunshine on our skin. But in winter, sunlight in the UK is thought to be too weak to allow our skin to make vitamin D.

There's also vitamin D in some food – including oily fish like salmon, mackerel, herring and sardines, and red meat and eggs – as well as some breakfast cereals and fat spreads that have vitamin D added to them.

How does vitamin D affect you?

If you're not at high risk of a vitamin D deficiency, you probably get enough from a combination of food and sunshine during the summer months.

Find out how to get vitamin D from the sun without risking skin cancer.

PHE says adults should "consider" taking a daily supplement of 10mcg in the autumn and winter. The authors of the BMJ paper say this recommendation is unnecessary for most people.

But 10mcg is unlikely to be harmful, so it's a matter of individual choice. You should check that you're not taking vitamin D in multivitamin supplements as well.

Certain groups of people may be at higher risk of having a vitamin D deficiency:

  • people who don't go outside very often – for example, those who live in care homes
  • people who cover most of their skin when outside
  • people with darker skin from African, Caribbean and south Asian backgrounds

PHE recommends these groups consider taking a supplement all year round. They also recommend daily supplements for pregnant women, babies and children aged four years old or younger.


So, who's right? The authors of the BMJ review are certainly correct in their argument that, ideally, we can get all the vitamin D we need through a combination of diet and sensible exposure to sunlight.

They are also right that the evidence does not show that people with normal levels of vitamin D benefit from taking supplements.

But we don't live in an ideal world. The truth is many people in the UK eat unhealthy, vitamin D-poor diets and also don't get enough exposure to sunlight.

A sensible approach would be to consider taking vitamin D supplements as recommended, but also be alert for possible signs of excessive vitamin D levels causing a build-up of calcium in the blood (hypercalcaemia).

Warning signs and symptoms of hypercalcaemia include loss of appetite, feeling and being sick, and needing to pee frequently.

Links To The Headlines

Vitamin D pills branded 'waste of time' and could even be 'harmful' according to new research. The Sun, November 24 2016

Evidence does not back vitamin D supplements, says BMJ. ITV News, November 24 2016

Ditch vitamin D pills . . . just get some sun: Healthy lifestyle and diet 'is all most of us need'. Daily Mail, November 24 2016

Links To Science

Bolland MJ, Avenell A, Grey A. Should adults take vitamin D supplements to prevent disease? BMJ. Published online November 23 2016

Spector TD, Levy L. Should healthy people take a vitamin D supplement in winter months? BMJ. Published online November 23 2016

Men's attitude towards fatherhood 'affects child behaviour'

"Children of confident fathers who embrace parenthood are less likely to show behavioural problems before their teenage years," The Guardian reports.

A study found a link between positive attitudes towards fatherhood and good behaviour at age 11. The UK study involved more than 6,000 children born in 1991 or 1992 as well as their parents.

Fathers were interviewed during the first year after their child's birth about their positive and negative reactions to becoming a father. Both parents were also asked about the amount of time the father was involved in childcare or domestic work.

After taking account of other factors, children of men scoring highly on confidence and emotional response to fatherhood were 13% and 14% less likely to have behaviour problems at age nine, and 11% less likely at age 11.

Factors such as a father's emotional response and confidence were found to be more important than the amount of time spent involved in the actual, sometimes messy, side of day-to-day childcare.

Attitudes to parenting have changed in the 25 years since the study began, so these results may no longer apply. Other factors linked to a reduced chance of children having behavioural problems included having older, better-educated parents.

And observational studies like this can't prove cause and effect. But perhaps it's not surprising that having positive, confident fathers at an early age is linked to better outcomes for children in later life.

For men concerned about upcoming "dadhood", there is training and advice available from a range of organisations, such as the National Childbirth Trust (NCT).


Where did the story come from?

The study was carried out by researchers from the University of Oxford and was funded by the Department of Health, the UK Medical Research Council, the Wellcome Trust and the University of Bristol.

The study was published in the peer-reviewed journal BMJ Open, which is open access so it is free to read online.

The UK media covered the study reasonably accurately. Different media sources picked different figures to illustrate the size of the effect, with some (including The Guardian) using figures that had been adjusted to take account of possible confounding factors, such as the social status of the family.

Others (including the Daily Telegraph and the Daily Mail) used the unadjusted figures highlighted in the study's press release.

Unadjusted figures often sound more impressive but adjusted figures are usually more reliable.


What kind of research was this?

This is a longitudinal cohort study, which recruited children and their parents while the mother was pregnant, and followed them for many years to assess how factors from their early childhood might affect outcomes in later life.

This type of study is good at spotting links between factors, but cannot prove that one factor causes another. For example, some children with behavioural problems might have been difficult babies who cried a lot, which might have affected their father's emotional adjustment to fatherhood, rather than the other way around.


What did the research involve?

Researchers used information from a long-running study, the Avon Longitudinal Study of Parents and Children, which recruited more than 14,000 pregnant women in the Bristol area in 1991 and 1992. They used questionnaires filled in by parents at 8 weeks, 8 months, 9 years and 11 years after birth.

They only included children who'd been living with both parents at eight months, and for whom there was follow-up data at 9 or 11 years.

They used the questionnaires filled in by men to identify three factors – emotional response, time spent on childcare or domestic work, and confidence as a partner and father – which might affect children's behaviour.

They used the responses to the questionnaires to construct a statistical model that allocated each man a high or low score for each of these factors. Behavioural scores for the children were assessed by questionnaires filled in by the mother.

Researchers took the following potential confounding factors into account in their calculations:

  • age of mother
  • mental health of both parents
  • the family's social and economic status
  • child's age and sex

These were used to adjust the odds of children having behavioural problems, for fathers with high or low scores on emotional response, time spent on domestic work, and confidence in their role.


What were the basic results?

Children of men with a positive emotional response to fatherhood were:

  • 14% less likely to have behavioural problems at age nine (odds ratio [OR] 0.86, 95% confidence interval [CI] 0.79 to 0.94)
  • 11% less likely to have behavioural problems at age 11 (OR 0.89, 95% CI 0.81 to 0.98)

Children of men who felt confident as fathers and partners were:

  • 13% less likely to have behavioural problems at age nine (OR 0.87, 95% CI 0.79 to 0.96)
  • 11% less likely to have behavioural problems at age 11 (OR 0.89, 95% CI 0.81 to 0.99)

Researchers found no statistically significant link between children's behavioural problems and the amount of time their fathers had spent on domestic and childcare activities in early childhood.
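A minimal sketch (not from the study itself) of how the odds ratios above map onto the "X% less likely" figures quoted:

```python
def percent_less_likely(odds_ratio):
    """An odds ratio below 1 implies lower odds; e.g. OR 0.86
    corresponds to roughly 14% lower odds of behavioural problems."""
    return round((1 - odds_ratio) * 100)

print(percent_less_likely(0.86))  # 14 (positive emotional response, age 9)
print(percent_less_likely(0.89))  # 11 (age 11)
print(percent_less_likely(0.87))  # 13 (confidence as a father, age 9)
```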

However, parents who were older, better educated and of higher social and economic status were less likely to have children with behavioural problems. Parents who worked more hours per week or had worse mental health scores were more likely to have children with behavioural problems. Older children and boys were more likely to have behavioural problems than younger children and girls.


How did the researchers interpret the results?

The researchers say: "We found that the children of fathers whom we characterised as having a positive emotional response to parenting and a sense of security in their role as a parent and partner early in the child's life … were less likely to exhibit behavioural problems at 9 and 11 years of age."

They say these factors may be "a marker of favourable parental characteristics and positive parenting in the longer term", while involvement in work such as shopping, cleaning and childcare "may simply reflect temporary circumstances" such as lack of other family support.

They conclude that their results suggest "psychological and emotional aspects" of paternal involvement in early years are "most powerful" in children's later behaviour.


Conclusion

It may seem obvious that children would benefit from having fathers who are happy and confident about their role. But there hasn't been much research on which aspects of a father's role are important for children, so this study adds some useful information.

It's important to remember that all the children in the study had both parents living with them in early childhood, so this isn't a comparison of children in single parent families with dual parent families.

The study only looked at the attitudes of fathers who were living with their children, asking questions including whether they had a strong bond with their child, regretted having the child, enjoyed spending time with the child and felt confident looking after them.

It's surprising that paternal time spent on childcare and domestic work did not seem to affect the results.

However, as the researchers say, this apparent anomaly might not reflect the father's long-term parenting, but might be a short-term factor. Some mothers were probably able to take a lengthy maternity leave and had help from other sources, but opportunities for paternal leave were far more limited during the 1990s.

The study has some strengths. It is a big study, carried out over many years, collecting a large amount of data.

However, there are many limitations. Observational studies can't prove that factors such as men's attitudes to fatherhood are the reason for the children's behavioural outcomes.

The researchers took account of some potential confounding factors when presenting their results (although not in the results they highlighted in their press release) but not all of them. For example, we know that education level of parents affected chances of behavioural problems, but these were not adjusted for in the results. In addition, we do not know what other major influences the children may have had, such as grandparents, other extended family, or their experience of nursery or primary school.

The analysis is based on questionnaires filled out by the mother and father, which may not be entirely accurate and may be subject to recall bias.

Finally, the questionnaires regarding the children's behaviour and psychological well-being did not cover any mental health or behavioural condition, such as autism spectrum disorder, which could account for more challenging behaviour.

It's also true that attitudes to childcare and family make-up have changed a lot over the 25 years since the study began. It's possible that we would see different results if the study were run again in today's society.

For those men struggling to cope or worried about the future after birth, there is help available from a wide range of different sources.

Read more advice about Pregnancy, birth and beyond for dads and partners and the services and support for new parents.

Links To The Headlines

Men's attitude to fatherhood influences child behaviour, says study. The Guardian, November 22 2016

Why the children of hands-on fathers are better behaved when they reach age 11. Daily Mail, November 23 2016

Confident fathers have happier children, says study. BBC News, November 23 2016

Close paternal bond linked to lower likelihood of behavioural problems in children. The Independent, November 23 2016

Bonding with Dad helps cut bad behaviour. The Times, November 23 2016 (subscription required)

Links To Science

Opondo C, Redshaw M, Savage-McGlynn E, Quigley MA. Father involvement in early child-rearing and behavioural outcomes in their pre-adolescent children: evidence from the ALSPAC UK birth cohort. BMJ Open. Published online November 22 2016

Can a high-tech treatment help combat some of our oldest fears?

"Scientists have raised hopes for a radical new therapy for phobias," The Guardian reports.

Brain scanners were used to identify brain activity pinpointing when people are most receptive to the "rewriting" of fearful memories. The scanners used functional MRI (fMRI) technology to track the real-time workings of the brain.

It's already known that combining gradual exposure to a fearful stimulus, known as exposure therapy, sometimes with a reward, may re-condition the brain and reduce the fear. For example, a person with a phobia of spiders may first be shown pictures of spiders before eventually being exposed to actual spiders.

Some people with more severe phobias or post-traumatic stress disorder (PTSD) are unable to tolerate even this type of exposure.

So this experimental study aimed to see if it was possible to get the same effect subconsciously, without direct exposure.

The research included 17 healthy volunteers who had a "fear condition" induced by being given sudden electric shocks while simultaneously being shown coloured patterns. This then led them to respond fearfully when they were shown the same patterns again.

The researchers then re-conditioned this response: they used fMRI to estimate the optimal "receptive window" in each participant's brain activity, and gave a small monetary reward whenever the brain activity associated with the patterns occurred. This was successful: on re-exposure, participants' fear was reduced.

While interesting, this was a highly artificial scenario in a very small number of healthy people. It is far too early to say whether this approach would be effective in the long-term.


Where did the story come from?

The study was carried out by researchers from a number of institutions, including ATR Computational Neuroscience Laboratories and Nagoya University, both in Japan, as well as Columbia University and the University of Cambridge.

Funding was provided by the Strategic Research Program for Brain Sciences supported by the Japan Agency for Medical Research and Development (AMED), the ATR entrust research contract from the National Institute of Information and Communications Technology, and the US National Institute of Neurological Disorders and Stroke of the National Institutes of Health.

The study was published in the peer-reviewed medical journal Nature Human Behaviour on an open-access basis so it is free to read online.

This research has been presented accurately in the UK media. The Guardian provided a good explanation of the study methods and findings, while also stating some of the limitations.


What kind of research was this?

This was an experimental study in healthy volunteers to see whether it is possible to condition people against their fear memories and responses by issuing rewards.

As the researchers explain, the concept that fear can be reduced by pairing a fearful stimulus with a reward or something non-threatening has already been established. This approach is often referred to as exposure therapy. It can form part of a more comprehensive course of cognitive behavioural therapy (CBT).

However, some people are unable to tolerate even limited exposure to stimuli they find frightening.

It also remains unclear whether you need to give explicit exposure to the fear for this reward process to work. The researchers' newly developed approach uses a technique called fMRI (functional magnetic resonance imaging) decoded neurofeedback (DecNef).

DecNef combines brain scanning technology with a sophisticated computer algorithm "trained" to recognise the patterns of brain activity that occur when people are thought to be most receptive to rewards that counter fear.

This means the person doesn't have to be consciously re-exposed to the fearful stimulus.

While this method is a good way of testing the possible effects of such therapies it can't prove that these methods would be safe and effective in people with genuine disorders, such as PTSD.


What did the research involve?

The researchers recruited healthy volunteers to take part in the study.

The experiment was split into the following stages:

Fear conditioning

This part of the experiment was to establish fear. In this case the researchers chose to establish a fear of being shown red and green patterns by pairing this with a tolerable electric shock. Blue and yellow patterns were used as control stimuli.

Neural reinforcement (performed three times)

This stage was conducted for three consecutive days and aimed to induce brain activity for the red and green patterns even when the person wasn't exposed to or actively thinking about the fearful stimuli.

If brain activity patterns associated with the fearful stimuli were induced then the participants were given a monetary reward. 

Test of fear response

Following the last neural reinforcement, a test was performed to measure the fear response when again directly exposed to the fear and control stimuli.

What were the basic results?

Seventeen healthy volunteers entered the trial having successfully established a fear response to the stimuli.

On testing after neural reinforcement, when re-shown both the fearful (red/green) and control (blue/yellow) stimuli, the brain's fear response to the red/green patterns was now significantly lower than its response to the control stimuli.

This suggested DecNef had been successful – fear towards the target stimuli had been reduced by pairing the fearful brain activity with a reward, effectively over-turning the previous fear conditioning.

The size of the effect was said to be similar to that seen with standard fear exposure methods (such as pictures of spiders, etc), but in this case it was achieved without the participants actually being made aware of the fearful stimulus.


How did the researchers interpret the results?

The researchers conclude that they have been able to show that fear can be reduced by pairing rewards with the activation patterns in the visual cortex that are associated with the fearful stimulus, while participants remained unaware of the content and purpose of the procedure.

They suggest: "This procedure may be an initial step towards novel treatments for fear-related disorders such as phobia and PTSD, via unconscious processing."


Conclusion

This experimental study assessed whether it is possible to counter-condition people against their fear memories by using reward without actually having to re-expose the person to the fearful stimulus.

The researchers conclude that they have shown this can be done, all with participants remaining unaware of the content and purpose of the procedure. They further suggest the procedure may be an initial step towards novel treatments for fear-related disorders such as phobia and PTSD, via unconscious processing.

While these findings show promise, there are some key limitations, the main one being the small number of healthy participants who had fear to colours induced by giving them tolerable electric shocks. This was also an artificial scenario. The "fear" or threat was very mild, compared to the threats people may fear or have experienced in real life.

The exposure in the form of different coloured lines was also very basic and simple to reproduce compared to complex and multidimensional real-life fears and traumas. As such we cannot know if the same findings would be seen in people with complex disorders such as PTSD.

Also, as this was an experiment with no follow-up period, we do not know if this conditioning against fear is long lasting. Much more research would be needed to confirm these findings.

It's normal to experience upsetting and confusing thoughts after a traumatic event, but in most people these naturally improve over a few weeks.

You should visit your GP if you are still having problems about four weeks after the traumatic experience.

Similarly you should contact your GP if you find that a phobia is significantly adversely affecting your quality of life.

Read more about the treatment of PTSD and phobias.

Links To The Headlines

Tests raise hopes for radical new therapy for phobias and PTSD. The Guardian, November 21 2016

The stress-free way to beat fear: Scientists reveal how subconscious brain training can cure phobias. Mail Online, November 21 2016

Links To Science

Koizumi A, Amano K, Cortese A, et al. Fear reduction without fear through reinforcement of neural activity that bypasses conscious exposure. Nature – Human Behaviour. Published online November 21 2016

Bagged salads 'pose salmonella risk,' say researchers

"Bagged salad can fuel the growth of food-poisoning bugs like salmonella and make them more dangerous," BBC News reports.

Researchers found evidence that the environment inside a salad bag offers an ideal breeding ground for salmonella, a type of bacteria that is a leading cause of food poisoning.

They grew salmonella in salad juice and leaves at different temperatures to see what happened, and found salad leaf juice – released from the leaves when they're damaged or broken – supports the growth of salmonella, even at fridge temperature.

They also found that if leaves are contaminated, the bacteria aren't removed by washing in water.

However, the chances of a salad bag being contaminated by salmonella or other bacteria in the first place are thought to be low.

An independent expert commented: "The rates of produce that have been found to be contaminated are between 0-3%." 

That said, it is important not to be complacent. An E. coli outbreak in July this year, thought to be linked to contaminated salad, killed two people and hospitalised 62 others. 

You should wash your hands thoroughly with soap and water, then dry them carefully, after using the toilet and before eating or preparing food.

You should also wash fruit and vegetables before eating them – although washing did not remove salmonella in this study – and pay attention to use-by dates.

Read more advice about washing fruit and vegetables and how to best wash your hands.

Where did the story come from?

The study was carried out by researchers from the University of Leicester. No sources of financial support were reported.

It was published in the peer-reviewed journal Applied and Environmental Microbiology on an open access basis, so it is free to download (PDF, 2.33 Mb).

The UK media's reporting was accurate, but some of the headlines could imply that salad bags have suddenly been found to be contaminated with salmonella: they haven't.

The potential for contamination is nothing new and something that the food industry tries to guard against.

What the study shows is that if salmonella is present, it will quickly grow to levels that could trigger food poisoning, even if the salad bag is placed in a fridge.

What kind of research was this?

This laboratory study aimed to examine the "behaviour" of salmonella bacteria if present in a bag of salad leaves.

Consumption of salad leaves like lettuce and spinach has increased considerably in recent years.

But they are highly perishable and require rapid processing and special packaging to keep them fresh.

They are particularly at risk of colonisation from the gut microbes E. coli, salmonella and listeria.

These may be present in contaminated soil or be transferred during the various processes of trimming, washing, packaging and transport, or through contaminated water or poor hygiene among people involved in food production.

The potentially contaminated leaves are usually eaten raw, which doesn't give any further opportunities to eradicate the bacteria through cooking, for example.

Salad leaves are reported to be the second most common source of outbreaks of foodborne illness.

This study aimed to look at the factors that could enhance the growth of bacteria present in salad – for example, the effects of opening the bag or storing it at room temperature.

What did the research involve?

The laboratory tests involved a variety of salad juices, which were prepared by crushing and mixing cos lettuce, baby green oak lettuce, red romaine, spinach and red chard – each obtained from a selection of pre-prepared bagged salads. 

The researchers conducted a series of tests. In one experiment, they incubated salmonella cultures in a fluid medium containing salad leaf juice for 18 hours at 37C (98.6F).

To model what may happen in a salad bag, they also cultured salmonella in sterile water mixed with 2% leaf juices and refrigerated this at 4C (39.2F) for five days.

The researchers also looked at the growth on salad leaves by mixing salmonella with sterile water, 2% leaf juice and three spinach leaf pieces.

These were incubated at room temperature for 30 minutes, after which the leaves were washed in sterile water.

The researchers similarly tested growth on plastic salad bags and looked at the effect of washing.

What were the basic results?

As would be expected, the researchers found that when they incubated the mix of salmonella, water and leaf juice at the high 37C temperature, salmonella grew on all varieties of leaf juice.

Even though the 4C temperature restricted growth, the bacteria still grew in number over the course of five days in the fridge.
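The temperature effect can be illustrated with a simple doubling-time model of exponential growth; the doubling times below are assumptions for illustration only, not measurements from this study:

```python
def bacteria_after(initial_count, doubling_time_hours, elapsed_hours):
    """Exponential growth: the population doubles every doubling_time_hours."""
    return initial_count * 2 ** (elapsed_hours / doubling_time_hours)

# Assumed doubling times: fast at 37C, much slower (but non-zero) at 4C.
at_37c = bacteria_after(100, 0.5, 18)    # 18 hours at incubation temperature
at_4c = bacteria_after(100, 24, 5 * 24)  # 5 days at fridge temperature
print(at_4c)  # 3200.0 -- a 32-fold increase even in the fridge
```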

Higher concentrations of leaf juice fluid further increased growth, suggesting the bugs may be able to use the leaf nutrients leached into the bag to support their growth.

This happened with increased concentration of any of the leaf juices, but spinach seemed to have the strongest effect.

The researchers also found salmonella attached to both the plastic bag and the salad leaves. The presence of salad juice increased the ability of salmonella to colonise both of these surfaces. 

The bacteria colonised cut leaf surfaces more readily, because the leaked leaf juice supported their growth.

The bacteria's attachment to salad leaves resisted several washes in water.

How did the researchers interpret the results?

The researchers say their results show that, "exposure to salad leaf juice may contribute to the persistence of salmonella on salad leaves, and strongly emphasises the importance of ensuring the microbiological safety of fresh produce". 

Conclusion

This laboratory study principally demonstrates that salad leaf juice – released from salad leaves when they are damaged or broken – supports the growth of salmonella bacteria, even at fridge temperature. If leaves are contaminated with salmonella, this isn't removed by washing in water.

The results don't show that all packaged salad leaves are contaminated with gut bacteria like salmonella.

What they do show is that if the bags have been contaminated with gut bacteria, these bacteria will replicate, even in the fridge, and there's little you can do to remove them.

The best thing to do is to throw the bag out, although there's no way of knowing whether a particular bag is contaminated or not.  

The study also cannot tell us whether we may be safer buying packaged salads unwashed, washed in spring water, or washed in chlorinated water.

And neither can it tell us whether we may be safer buying non-packaged lettuce – it's still possible that an unpacked lettuce may have been contaminated at some point along the line.

But any risk of food poisoning is far outweighed by the health benefits of eating fresh veg, such as reducing the risk of heart disease, stroke and some cancers.

You should be reassured that the contamination levels in the food chain are in reality very low, with only 0-3% of raw food products found to be contaminated.

Commonsense precautions will also reduce the risk:

  • Hands should be thoroughly washed with soap and water, and dried, after using the toilet and before eating or preparing food.
  • Keep salad in the fridge – this will not prevent salmonella from growing, but it will slow its growth.
  • Discard leaves that look damaged or "mushy". 
  • Always wash salad before eating it – while this may have a limited effect on salmonella, washing can remove soil and debris.
  • Follow use-by dates and use salad within a few days of opening the packet. 

Get more advice on food safety.

Links To The Headlines

Bagged salad is Salmonella risk, study finds. BBC News, November 18 2016

Salmonella warning over supermarket bags of salad: Broken leaves can help bug to grow... and washing them won't make a difference, experts warn. Mail Online, November 18 2016

Broken leaves in salad bags raise salmonella risk 2,400-fold – study. The Guardian, November 18 2016

Bagged salad linked to salmonella risk. ITV News, November 18 2016

Salmonella warning over bagged salad leaves. The Daily Telegraph, November 18 2016

Salmonella warning over bagged salads as broken leaves encourage growth of bacteria that can't be washed off. Daily Mirror, November 18 2016

Links To Science

Koukkidis G, Haigh R, Allcock N, et al. Salad leaf juices enhance Salmonella growth, fresh produce colonisation and virulence. Applied and Environmental Microbiology. Published online November 18 2016

Online calculator that tries to predict IVF success released

"Couples can find out their chances of having a baby over multiple cycles of IVF treatment using a new online calculator," BBC News reports.

The calculator is designed to predict the success of in vitro fertilisation (IVF) – often used when a woman has a fertility problem – or intracytoplasmic sperm injection (ICSI), which is used when a man has a fertility problem.

Researchers used data from more than 100,000 couples in the UK. The data was recorded by the Human Fertilisation and Embryology Authority (HFEA), the regulator that oversees fertility treatments in the UK.

The researchers looked at several factors linked with the chances of a successful live birth, such as maternal age, the number of eggs collected, and the underlying reasons for treatment.

They used these factors to build prediction models, which power an online tool that at the time of writing is available on the University of Aberdeen website.

It's hoped this tool will help couples make informed decisions, both emotionally and financially, about which treatment method may be right for them.

As the researchers themselves make clear in the study, there are significant gaps in the data the HFEA provided, such as the woman's body mass index (BMI), smoking status and history of alcohol use.

This means the model may not be able to reflect the many biological, health and lifestyle factors that can influence the likelihood of successful fertility treatment, or the fact every couple will have a different experience.

Where did the story come from?

The study was carried out by researchers from the University of Aberdeen in Scotland and the Erasmus MC-University Medical Centre in the Netherlands.

It was supported by a Chief Scientist Office postdoctoral training fellowship in health services research and health of the public research.

The study was published in the peer-reviewed British Medical Journal (BMJ), and is available on an open access basis, so it's free to read online.

Generally, the UK media coverage was accurate, although the limitations of the "calculator" used by the researchers were not discussed.

What kind of research was this?

This modelling study aimed to develop a model that could predict the chances of a live birth over multiple cycles of in vitro fertilisation (IVF) or intracytoplasmic sperm injection (ICSI).

These are both assisted reproduction techniques. In IVF the egg is incubated with multiple sperm, while in ICSI a single sperm is injected directly into the egg.

ICSI may be the preferred option if there is less chance of fertilisation happening "naturally" – for example, if there are problems with sperm number, shape or movement.

The model used medical records from a UK population-based cohort study that collected data from couples who had undergone IVF or ICSI, including their biological characteristics and treatments.

Modelling studies like this one are useful for estimating potential outcomes, and could be helpful for both doctors and patients.

However, many different biological, health and lifestyle factors could influence the success of assisted reproduction, so this model may not necessarily be able to give definite answers.

What did the research involve?

Medical records of all complete IVF and ICSI cycles were collected from the Human Fertilisation and Embryology Authority (HFEA) in the UK.

The data used in this study was taken from 113,873 couples who started their first ovarian stimulation between January 1999 and September 2008 using the woman's own eggs and partner's sperm, with treatment follow-up continuing until 2009.

Baseline characteristics included:

  • woman's age
  • duration of infertility (years)
  • type of infertility (problems with the fallopian tubes, absent ovulation, endometriosis, male factor or unexplained)
  • previous pregnancy status (yes or no)
  • treatment type (IVF or ICSI)
  • year of first egg retrieval

Treatment characteristics included:

  • number of eggs collected
  • number of embryos transferred
  • stage of embryo transfer – either a cleavage transfer (two to three days after fertilisation) or a blastocyst transfer (five to six days after fertilisation)
  • whether embryos were frozen

Using this information, the researchers estimated the cumulative chance of a couple having a first live birth after having up to six complete cycles of IVF.

They developed two models:

  • a pre-treatment model for couples about to undergo IVF or ICSI
  • a post-treatment model to update the cumulative probability of having a live birth after a first attempt at embryo transfer

What were the basic results?

Collectively, the 113,873 couples underwent 184,269 complete cycles of IVF or ICSI.

Just under a third of the women (33,154, 29%) had a live birth after their first cycle of treatment, and just over half of those who were unsuccessful (45,384, 56%) went on to have a complete second cycle of treatment.

Overall, 43% of the couples had a successful live birth over six complete cycles of IVF or ICSI.

In the pre-treatment model, the odds of a live birth decreased with every additional complete cycle of treatment – the odds of a live birth after cycle two were 21% lower than the odds after cycle one.

The odds of a live birth after cycle six were 56% lower than the odds after cycle one.

As may be expected, odds also decreased with the increasing age of the woman, the length of infertility, male factor infertility, and in couples who had not had a previous pregnancy.

In the post-treatment model, after a fresh embryo transfer the odds of a live birth increased by 29% with the greater number of eggs collected.

This doubled in cases where frozen embryos were used. Odds decreased by 9% if ICSI was used.

How did the researchers interpret the results?

The researchers concluded: "We have estimated the individualised chances of having a live born baby over a complete package of in vitro fertilisation (IVF) or intracytoplasmic sperm injection (ICSI) treatment at two time points."

They added: "In couples embarking on IVF or ICSI, we found that increasing age of the woman (from 30 years) was far the best predictor of live birth.

"After transfer of a fresh embryo in the first complete cycle, aside from the woman's age, increasing number of eggs collected and the cryopreservation of embryos were the next best predictors." 

Conclusion

This study aimed to develop a model that could predict the chances of a live birth over multiple cycles of in vitro fertilisation (IVF) or intracytoplasmic sperm injection (ICSI).

The model used a large quantity of data taken from reliable UK medical records belonging to couples who had previously undergone IVF or ICSI.

The model also benefits from being able to account for a large number of personal and treatment characteristics.

The researchers developed two models for couples in both the pre-treatment and post-treatment phase of treatment cycles.

They hope this online tool will serve as an aid for couples to shape expectations about what their overall chances are of having a baby after IVF or ICSI treatment, as well as allowing couples to make more informed decisions about their treatment.

Although this is potentially very useful for both doctors and patients, it's important to bear in mind that every couple will have a different experience – models, no matter how sophisticated, are not necessarily able to reflect what happens in reality.

And this tool includes very broad questions – for example, it asks if you have male factor infertility or tube problems. As such, it's not possible to add in your own specific data on factors like which fertility problem you or your partner may suffer from.

If you're considering having fertility treatment, you can increase your chances of conception – whether you're a woman or a man – by quitting smoking if you smoke, sticking to the recommended alcohol intake guidelines, and achieving or maintaining a healthy weight.

Get more advice about the factors that can affect your fertility and treatments for infertility.

Links To The Headlines

Online calculator predicts IVF baby chances. BBC News, November 17 2016

Want to know if IVF will work for you? A new calculator can predict a couple's chance of conceiving a baby. Mail Online, November 17 2016

IVF calculator can tell couples their chances of success. The Daily Telegraph, November 17 2016

Will IVF work for you? New online calculator tells couples their chances of getting pregnant. The Sun, November 17 2016

Links To Science

McLernon D, Steyerberg EW, te Velde ER, et al. Predicting the chances of a live birth after one or more complete cycles of in vitro fertilisation: population based study of linked cycle data from 113 873 women. BMJ. Published online November 18 2016

Does vitamin D cut lung infection risk in older adults?

"Why you should take vitamin D as you get older: High doses reduce the risk of respiratory illnesses by 40%," the Mail Online reports.

Researchers in Colorado investigated whether a high dose of vitamin D in older adults living in long term care facilities could reduce their risk of acute respiratory (lung) infections, such as pneumonia.

Pneumonia is of particular concern in older people who are especially frail or have a pre-existing chronic health condition.

More than 100 older adults were included in the trial. Participants were randomly assigned to receive either a high or standard dose vitamin D supplement for a period of 12 months.

At the end of the 12-month period researchers found 40% fewer respiratory infections in those receiving the high dose – but this was mainly due to a reduction in simple upper respiratory infections like coughs and colds, rather than more serious infections such as pneumonia.

When it came to side effects, the researchers found that the high dose group had a higher number of falls, though no increase in the number of fractures. But there was no difference in the rate of other side effects linked with high doses of vitamin D, such as high blood calcium.

Due to the small number of participants, the study did not have the "statistical power" to reliably detect differences in respiratory infections or, importantly, in safety outcomes; so any result could have been due to chance.

Further research in a larger randomised trial is needed to prove any benefit and to check high dose vitamin D doesn't cause harms in this group.


Where did the story come from?

The study was carried out by researchers from a number of institutions including the University of Colorado, the Colorado School of Public Health and the Eastern Colorado Department of Veterans Affairs.

Funding for the study was provided by the Beeson Career Development Award, National Institute on Aging Grant, National Center for Advancing Translational Sciences Colorado Clinical and Translational Science Awards Grant, and the American Geriatrics Society Jahnigen Career Development Scholars Award.

The study was published in the peer-reviewed Journal of the American Geriatric Society.

The study has been reported reasonably accurately in the Mail Online, but the research's limitations were not discussed.


What kind of research was this?

This was a randomised controlled trial which aimed to assess whether high dose supplementation with vitamin D for 12 months would prevent acute respiratory infections in older adults in long term care.

Older adults are at higher risk of vitamin D deficiency and observational studies have provided some evidence of an association between deficiency and acute respiratory infection.

This trial was double-blind, which means patients and investigators were unaware which group they were assigned to for the whole 12-month period, limiting the risk of bias.

With this type of study it is most likely that the effect seen is due to the intervention rather than any confounding variables.


What did the research involve?

The researchers included 107 older adults (aged 60 years and above) from 25 long term care facilities in Colorado. They excluded people with cancer, terminal illness or other conditions that meant they could not safely take high doses of vitamin D.

Participants were randomly assigned to one of two groups:

  • high dose – equivalent to 3,000-4,000 international units (IU) per day (75mcg-100mcg per day)
  • standard dose – equivalent to 400-1,000 IU per day (10mcg-25mcg per day)

If the participant was taking vitamin D as part of their usual care, this continued in addition to the study drug, but the doses were balanced out to ensure the person received their allocated study dose. For example, people allocated to the standard dose group who were already taking this amount simply took an extra placebo.

The main outcome of interest was the number of acute respiratory infections (ARIs) during the 12-month follow-up period. These were split into upper respiratory infections (common colds, sinusitis, sore throats and ear infections) and lower respiratory infections (acute bronchitis, influenza, pneumonia) requiring medical attention.

The researchers also looked at secondary outcomes which included severity of ARIs as measured according to emergency department visits or hospitalisation for ARIs, time to first ARI, and incidence of other infections.


What were the basic results?

Participants in the high-dose group had significantly fewer acute respiratory infections: 0.67 per person per year versus 1.11 per person per year in the standard dose group. This equates to a 40% lower risk of ARIs in the high dose group (incidence rate ratio (IRR) = 0.60, 95% confidence interval (CI) = 0.38 to 0.94).
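
To make the arithmetic behind these figures explicit, here is a minimal, purely illustrative Python sketch; the two rates are the study figures reported above, and nothing else is taken from the paper:

```python
# Illustrative only: how the reported infection rates map onto the
# incidence rate ratio (IRR) and the "40% lower risk" headline figure.
high_dose_rate = 0.67      # infections per person per year, high-dose group
standard_dose_rate = 1.11  # infections per person per year, standard-dose group

irr = high_dose_rate / standard_dose_rate   # ~0.60, as reported
percent_reduction = (1 - irr) * 100         # ~40% lower rate

print(f"IRR = {irr:.2f}, reduction = {percent_reduction:.0f}%")
```

The same conversion applied to the ends of the confidence interval (0.38 to 0.94) shows how uncertain this headline figure is: the true reduction could plausibly be anywhere from about 6% to about 62%.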

When split by type of infection, upper ARIs were less common in the high dose group, but there was no difference in the incidence of lower ARIs. There was also no difference in urinary tract infections, other infections, or hospitalisation for ARI.

Falls were more common in the high dose group (IRR = 2.33, 95% CI = 1.49 to 3.63); however, this did not result in more fractures. There was no difference in the rate of other side effects associated with too much vitamin D, including high blood calcium or kidney stones.


How did the researchers interpret the results?

The researchers conclude: "Monthly high-dose vitamin D3 supplementation reduced the incidence of ARI in older long-term care residents but was associated with a higher rate of falls without an increase in fractures."


Conclusion

This randomised controlled trial assessed high dose supplementation with vitamin D for a period of 12 months as a way of preventing acute respiratory infections in older adults in long term care.

This study was well designed and reduced risk of bias where possible. However, there are some important limitations which affect the reliability of the findings:

  • The study had a small sample size, and the authors state they did not manage to reach their target recruitment level; this means the study lacked the statistical power required for certainty in the findings.
  • There were some differences in the characteristics of the participants at the start of the study, including differences in body mass index, smoking status, heart disease and respiratory diseases. Ideally, differences of these types should be minimised in a randomised study. But they were present in this case – possibly as a result of the small sample size – and may have affected findings.
  • The study only included participants who are in long term care and this may not be representative of the effect in all older adults, including those with serious illnesses or contraindications to taking high doses of vitamin D.

The study did appear to find that supplementation reduced the chance of respiratory events – though this seemed only due to a reduction in upper respiratory infections such as coughs and colds rather than more serious infections.

It didn't find the increased dose of vitamin D caused high calcium levels in the blood which can affect the kidneys and weaken bones. However, it was associated with a higher risk of falls which needs further investigation.

As this was a small trial, further research would be needed to prove any benefit and ensure that high dose vitamin D in this group doesn't cause side effects.

From the age of one, and throughout life, people need 10 micrograms of vitamin D per day. This can be obtained through dietary sources (such as red meat and fortified cereals) and exposure to sunlight.

However, some people may not be able to get enough through these sources, including elderly adults who may have a poorer diet and get less sunlight exposure.

They may need supplements of 10 micrograms per day. The current level of evidence does not support taking any higher dose than this.

Read more advice about vitamin D.

Links To The Headlines

Why YOU should take vitamin D as you get older: High doses reduce the risk of respiratory illnesses by 40%. Mail Online, November 16 2016

Links To Science

Ginde AA, Blatchford P, Breese K, et al. High-Dose Monthly Vitamin D for Prevention of Acute Respiratory Infection in Older Long-Term Care Residents: A Randomized Clinical Trial. Journal of the American Geriatrics Society. Published online November 16 2016

Fat storage problems may increase diabetes risk

"Inability to store fat safely increases diabetes risk," BBC News reports.

Researchers have found links between genetic variations known to affect the storage of fat in the body and type 2 diabetes, as well as heart attacks and strokes.

People can store fat tissue in different ways, such as in their legs and arms. While this may be cosmetically unsightly, it is healthier than storing fat in the abdomen (known as visceral fat), especially around the liver and pancreas.

This type of distribution is associated with insulin resistance – where cells in the body fail to respond to the hormone insulin – and type 2 diabetes.

This difference in fat distribution could partly explain why not all obese people develop type 2 diabetes, and conversely why some people of normal weight develop type 2 diabetes.

The study was based on data on around 200,000 people from the UK and Europe.

In addition to the link between body fat distribution and insulin resistance, researchers also found variations in 53 genetic areas increased the risk of insulin resistance, which leads to type 2 diabetes.

Previously, only 10 genetic areas had been implicated. The greater the number of these variations, the higher the risk.

Though the study found links between these genetic areas and fat distribution, this type of study cannot prove cause and effect.

But it may help target future prevention and treatment strategies, such as medications designed to target the fat.

In the meantime, you can still reduce your risk of developing type 2 diabetes by making lifestyle choices like eating a healthy, balanced diet, stopping smoking, reducing how much alcohol you drink, and exercising regularly.

Where did the story come from?

The study was carried out by researchers from the University of Cambridge, the Wellcome Trust Sanger Institute, the University of Oxford, the University of Exeter, the University of Geneva, the University of California, and the National Heart, Lung and Blood Institute in the US.

It was published in the peer-reviewed journal, Nature Genetics and was funded by the UK Medical Research Council. The authors declare no competing financial interests.

BBC News reported the story accurately, linking the inability to store fat safely to an increased risk of diabetes.

What kind of research was this?

This was a meta-analysis of studies investigating the influence of genetic variants on insulin and fat characteristics.

The research aimed to look at the variation in genes associated with patterns in fat deposits and insulin resistance.

Meta-analyses provide a useful way of summarising multiple studies looking at the same outcomes, in this case insulin resistance and storage of fat.

However, this type of study is only as good as the individual studies included, and any weaknesses of these studies will be brought into the analysis.

The studies included were population-based cohort studies, mostly from the UK and Europe.

Cohort studies are a practical way of looking at a link between two factors, but cannot prove one (genetic make-up) causes another (insulin resistance and location of fat deposits).

What did the research involve?

Researchers analysed the genetic make-up of 188,577 individuals from five population studies to identify variations in genes associated with insulin resistance.

They then looked at how the genetic variations played a role in cardiometabolic diseases.

This is a general term used to refer to diseases related to underlying problems with metabolism and blood flow, such as type 2 diabetes and heart disease.

Researchers looked at the cardiometabolic traits and outcomes in people.

The levels of fat in certain areas of the body in those who were found to be at the highest genetic risk for cardiometabolic disease, including type 2 diabetes, were compared with those at lowest risk.

Leg fat mass was used as an indicator for peripheral fat, which is not in central areas.

What were the basic results?

Genetic predisposition to insulin resistance, through the 53 genetic areas, produced a higher risk of diabetes but lower levels of fat beneath the skin.

Looking at people with and without type 2 diabetes, the 53 genetic variants were associated with a 12% increased risk of type 2 diabetes (odds ratio 1.12, 95% confidence interval [CI] 1.11 to 1.14).

No differences were found between genders or across body mass index categories.

People with a higher number of the 53 genetic variants were more likely to have a lower proportion of fat in their legs and a greater waist circumference.

How did the researchers interpret the results?

The researchers concluded that their findings "implicate a primary effect on impaired adipose [fat] function and a secondary effect on insulin resistance".

They further added that their findings "support the notion that the limited capacity of peripheral adipose tissue to store surplus energy is implicated in human insulin resistance and related cardiometabolic disease in the general population". 

Conclusion

Insulin is a hormone in the body that helps control blood sugar levels. When resistance to insulin occurs, blood sugar levels and lipids (fats) rise, increasing the risk of diabetes and heart disease.

This study shows that 53 separate genetic variants were associated with insulin resistance, underpinned by an association with lower levels of fat in peripheral regions, particularly in the lower half of the body, but – conversely – possibly higher levels of fat around the liver and pancreas.

While the study has strengths, such as using a very large number of people, and did demonstrate a link between genetic variants and insulin resistance, there were limitations.

The data was compiled from a number of different studies, which may have each had their own limitations.

The majority were prospective cohort studies, which, while helping to show an association, cannot prove that these genetic variations cause insulin resistance.

There may be a wide range of other factors affecting risk of insulin resistance and subsequent type 2 diabetes, such as lifestyle factors, including eating unhealthily and not being active.

Other factors that can influence insulin resistance include age, being of Asian or African-Caribbean origin, or having polycystic ovary syndrome. 

Symptoms of diabetes include feeling thirsty, passing more urine than usual, feeling very tired and weight loss.

It is very important for diabetes to be diagnosed as soon as possible – see your GP if you think you may have symptoms. 

Links To The Headlines

Inability to store fat safely increases diabetes risk. BBC News, November 14 2016

Links To Science

Lotta LA, Gulati P, Day FR, et al. Integrative genomic analysis implicates limited peripheral adipose storage capacity in the pathogenesis of human insulin resistance. Nature Genetics. Published online November 14 2016

Study looks at nursing assistants' effect on patient outcomes

"Patients are a fifth more likely to die on wards where nurses have been replaced by untrained staff, a major study has found," the Daily Mail reports.

This latest research into 243 hospitals across Europe found those with more professional nurses, compared to nursing assistants, had lower death rates after surgery and were rated more highly by nurses and patients.

The proportion of nursing staff who were professional nurses – those with at least three years of training – ranged from 82% in Germany to 57% in the UK. The researchers calculated that every 10% increase in the proportion of qualified nurses was linked to an 11% lower risk of death for patients after surgery.

However, analysis of the data is complicated by differences from country to country and interpreting it is more complex than media headlines suggest.

It is unclear whether training for nursing assistants was equivalent in all countries studied. In England, the Department of Health plans to introduce "nursing associates", who would have 18 months' training and work alongside professional nurses and existing, less well-trained health care assistants.

Additionally, the study does not prove that more qualified nurses are the reason for the differences in death rates and quality of care. The research is based on one "snapshot" of what was happening in hospitals at one point in time (2009 to 2010). Other factors, such as local doctor staffing levels, may also have an effect on outcomes.


Where did the story come from?

The study was carried out by researchers from the University of Pennsylvania School of Nursing, University of Southampton, Kings College London, University of Leuven in Belgium, Technische Universitat Berlin, Instituto de Salud Carlos III in Spain and Institute of Nursing Science in Basel.

It was funded by the European Union, National Institute of Health Research, National Institutes of Health and Spanish Ministry of Science and Technology.

The study was published in the peer-reviewed journal BMJ Quality and Safety on an open-access basis, so it's free to read online.

Most of the UK media reports linked the research to the Department of Health's plans to introduce new nursing associates, with some sources calling for the plans to be scrapped or reconsidered.

The headlines focused on the reported 21% increase in risk of death for patients if one qualified nurse was replaced by a less qualified assistant.

The media reports did not make it clear that the data about patient deaths applied only to patients who had undergone surgery.

Also, the study's limitations, such as the potential for confounding factors like doctor staffing levels or local health policies to influence the results, were not explained.


What kind of research was this?

This was a cross-sectional, observational study of nurses and patients from 243 hospitals, which also used mortality data for surgical patients from some of these hospitals.

Cross-sectional studies can pick out associations between factors – in this case nursing skill mix and mortality, nurse survey data and patient survey data – but cannot prove that one causes another.


What did the research involve?

Researchers interviewed 13,077 nurses and 18,828 patients, and looked at discharge data for 275,519 surgical patients, from hospitals in Europe.

They asked patients and nurses about quality of care, and asked nurses about safety and how many professional and how many less qualified nursing staff were working on their last shift.

After adjusting their figures to account for factors that could affect the results, they analysed the data to see whether the mix of nursing staff in the hospitals was linked to mortality of patients who'd had surgery, and to patient and nurse ratings of quality and safety of care.

The 243 hospitals from Belgium, England, Finland, Ireland, Spain and Switzerland were part of a bigger Europe-wide study of nursing care.

Mortality was measured by the number of surgical patients in 188 of the hospitals (those with full data available) who died in hospital within 30 days of surgery.

Patients were said to have given hospitals low ratings if they described their care as anything less than excellent, or rated it as 8 or lower on a 10 point scale.

Researchers took account of the following confounding factors when making their calculations:

  • patient age and sex
  • emergency or routine admission
  • patient surgery type and other illnesses
  • total nurse staffing for the hospital
  • hospital size, teaching status and technology available

Researchers also checked if they needed to adjust figures for country-specific factors. 


What were the basic results?

The average staffing in hospitals in the survey was six caregivers for every 25 patients, four of whom were professional nurses. However, this varied a lot between countries and hospitals.

There were on average 1.3 deaths for every 100 discharges from hospital after surgery.

The researchers found:

  • For every professional nurse replaced by a nursing assistant per 25 surgical patients, those patients had a 21% increased chance of death.
  • Every 10 point increase in the percentage of professional nurses (eg from 50% to 60%, or 60% to 70%) was linked to an 11% lower chance of death for surgical patients (odds ratio (OR) 0.89, 95% confidence interval (CI) 0.8 to 0.98).
  • Every 10 point increase in the percentage of professional nurses was linked to a 10% lower chance of the hospital being given a low patient rating (OR 0.90, 95% CI 0.81 to 0.99).
  • Every 10 point increase in the percentage of professional nurses was linked to a 15% lower chance of the hospital being given a poor safety rating by nurses (OR 0.85, 95% CI 0.73 to 0.99).
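The percentage figures in the bullets above are simple transformations of the reported odds ratios: an odds ratio below 1 converts to a percentage reduction in the odds. As a rough sketch (the function is ours, not the study's; with a low event rate of about 1.3 deaths per 100 discharges, odds and "chance" are close):

```python
def percent_reduction(odds_ratio):
    """Percent reduction in odds implied by an odds ratio below 1."""
    return round((1 - odds_ratio) * 100)

# Point estimates reported in the study
print(percent_reduction(0.89))  # mortality: 11% lower
print(percent_reduction(0.90))  # low patient rating: 10% lower
print(percent_reduction(0.85))  # poor nurse safety rating: 15% lower

# The 95% confidence interval for mortality (0.80 to 0.98)
# spans a 2% to 20% reduction
print(percent_reduction(0.98), percent_reduction(0.80))  # 2 20
```

This is why a confidence interval whose upper bound approaches 1 (as with 0.98 here) means the true effect could be close to nothing.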


How did the researchers interpret the results?

The study authors said their research suggested "adding nursing associates and other categories of assistive nursing personnel without professional nurse qualifications may contribute to preventable deaths, erode quality and safety of hospital care."

They say that any such policy initiatives should be taken with "caution" because "the consequences can be life threatening for patients."



Conclusion

The headlines generated by this study are alarming, but there are some reasons to be cautious about the findings.

The study does not show that patients are more likely to die because of fewer professional nurses in the skill mix in a hospital. While that's a possible explanation of the results, this type of study can't tell us that for sure. It only tells us what happened at one particular point in time, not whether one factor led to another.

Other explanations – such as doctor staffing levels, or local health policies – might account for part or all of the findings. Researchers say they ruled out some explanations, such as whether hospital size or working environment had an effect, but this type of study cannot account for all possible explanations.

Also, some of the findings are close to the point at which they could be down to chance. The main finding of an 11% lower chance of death, for example, has a margin of error that means the true figure could be anywhere between 2% and 20%.

One could also question some of the decisions around how the ratings were classified. Patients, for example, were considered to have given a low rating to their hospital if they said their care was anything other than excellent. It is likely that patients who said care was "very good" did not intend to give a low rating.

Links To The Headlines

Patients are 20% more likely to die on a ward where untrained staff have replaced nurses. Daily Mail, November 15 2016

Call for assistant nurse role rethink. BBC News, November 15 2016

Patients one fifth more likely to die in hospitals with fewer qualified nurses. The Daily Telegraph, November 15 2016

NHS nursing assistants could raise risk of death for patients, says study. The Guardian, November 15 2016

Links To Science

Aiken LH, Sloane D, Griffiths P, et al. Nursing skill mix in European hospitals: cross-sectional study of the association with mortality, patient ratings, and quality of care. BMJ Quality and Safety. Published online November 15 2016

Testing sense of smell may give early warning of Alzheimer's risk

"A new four-point test has fine-tuned smell exams to check for Alzheimer's," the Mail Online reports. The testing is based on recognising and then recalling certain distinct smells, such as lemon or menthol.

Some people who scored badly on the test were later found to have early signs associated with Alzheimer's disease.

Previous research has shown people's sense of smell gets worse as they get older. People with dementia seem to have an even worse sense of smell and ability to identify smells.

But simple odour identification tests do not account for variation in different people's sense of smell.

Researchers in the US tested 183 people to see if they could identify 10 common smells, including lemon, mint and strawberry.

They then did a second test to see whether people could identify 20 smells and remember the 10 they'd smelled in the first test.

The second test was better at identifying people with Alzheimer's as well as early symptoms of dementia.

It also picked out people who had no signs of Alzheimer's, but who carried gene variants connected with the disease.

We now need further research in more people to be sure that the findings are correct.

If you do lose your sense of smell (anosmia), you shouldn't panic – there could be a relatively trivial reason behind it, such as chronic sinusitis. But it is the sort of symptom you should get checked out by your GP.

Where did the story come from?

The study was carried out by researchers from Massachusetts General Hospital, the University of North Carolina, Harvard School of Public Health, and Osmic Enterprises, all in the US. 

It was funded by grants from the US National Institutes of Health, the Wilkens Foundation and Harvard Neurodiscovery Centre.

The study was published in the peer-reviewed journal, Annals of Neurology.

The Mail Online seems to have misunderstood some aspects of the study. It says the participants were "patients at the Massachusetts General Hospital" who "were deemed to have an increased risk" of Alzheimer's disease.

In fact, they were a mix of volunteers aged over 65 and living at home. Ten of them already had Alzheimer's disease, but most were healthy.

The point of the study was to see whether the test could pick out people at increased risk, not to test people already known to be at increased risk.

The Mail Online story also mistakenly said the test could pick out those with a build-up of amyloid protein in their brains, but this study found no link between amyloid protein and the test results.

What kind of research was this?

This cross-sectional cohort study looked at how people performed on smell tests at one point in time.

Researchers wanted to see whether this was related to their mental health or other markers linked to Alzheimer's disease. 

What did the research involve?

Researchers recruited people taking part in a long-term study of ageing and dementia, and five people with dementia from a memory clinic.

They were given standard tests to identify dementia and early signs of dementia, known as mild cognitive impairment.

Some people also had brain scans and genetic testing for gene variants linked to dementia.

They took three tests to assess their sense of smell, memory for smells, and ability to discriminate between smells.

Researchers then looked at the results to see whether – taking account of possible confounding factors like age and education level or medical reasons for a poor ability to smell – the smell test results could predict people with dementia or at higher risk of dementia.

The three tests were:

  • 10 common smells – people were asked if they recognised the smell and if they could identify it from a list of four names
  • 20 common smells, including the 10 from the first test – people were asked if they'd smelled the odour in the first test and to identify it from a list of four names
  • 12 smells – two smells were presented one after the other, and people were asked to say whether they were the same or different

The first two tests, when used together, were called the POEM test, short for Percepts of Odor Episodic Memory.

Researchers used a battery of statistical tests to see which factors correlated with which. Their primary interest was whether the test results predicted people's diagnoses (normal, some concerns, mild cognitive impairment, or Alzheimer's disease).

They also wanted to see if the smell test results were linked to other early predictors of Alzheimer's disease, such as:

  • degeneration of certain regions of the brain
  • deposits of amyloid protein in the brain
  • gene variants thought to be more common in people with Alzheimer's disease

What were the basic results?

People who were cognitively normal or had only some concerns about their memory tended to do well on the POEM test.

And their results were significantly better than those of people with mild cognitive impairment or Alzheimer's disease.

When researchers looked at people who were cognitively normal but did worse than expected on the POEM test based on their results from the first (10 smells) test, they found these people were more likely to have:

  • a gene variant associated with Alzheimer's disease
  • thinner tissue in a part of the brain associated with memory (the entorhinal cortex)
  • worse logical memory scores over time

However, there was no link seen between the POEM results and deposits of amyloid protein in the brain.

We don't know if the people who did worse on smell tests went on to get Alzheimer's disease, as this was not part of the study.

A study with a longer-term follow-up period would be required to investigate this.

How did the researchers interpret the results?

The researchers said the POEM test needed to be confirmed in longer studies and different groups of people.

However, they said if these results were confirmed, the POEM test "may identify a subset of clinically normal participants at greater risk for developing the progressive memory symptoms" of Alzheimer's disease.

They say this could identify suitable people to take part in research of treatments that might prevent the disease. They also suggest the tests could be used to screen for Alzheimer's disease risk in the general population.


Conclusion

Sense of smell varies greatly from one person to another, and tends to decline as we get older. Lots of people can lose their sense of smell – either temporarily or permanently – after illness or an accident.

Having a poor sense of smell does not mean you're going to get Alzheimer's disease, and that's not what this study found.

People who already had Alzheimer's disease, not surprisingly, did poorly at identifying smells.

But smell detection ability alone did not differentiate between healthy people, those with some memory concerns, and those with mild cognitive impairment.

Only the POEM test, which looked at people's ability to both identify and remember smells, could do that.

For people without dementia or mild cognitive impairment, those who did less well at remembering smells compared with their ability to identify them were more likely to have previously identified risk factors for Alzheimer's disease.

These include a genetic variant more common in those with Alzheimer's disease and physical evidence of some degree of tissue thinning.

But we don't know whether these people did go on to get dementia, as the study only looked at a snapshot in time, not at what happened to people over time. It's important to remember, too, that this was a relatively small study.

We need to have these POEM test results validated by larger studies that follow people over time before we can say whether it is a useful way of identifying older people likely to develop Alzheimer's disease before they have any symptoms of confusion or memory loss.

If you're concerned about symptoms that might be related to Alzheimer's disease or other forms of dementia, see your GP.  

Read more about how dementia is diagnosed.

Links To The Headlines

Why sense of smell is the biggest tell-tale factor for Alzheimer's – and could be spotted 10 years before memory loss symptoms. Mail Online, November 14 2016

Alzheimer's early signs: Declining sense of smell could be first warning of decline, not memory loss. The Independent, November 15 2016

Links To Science

Albers AD, Asafu-Adjei J, Delaney MK, et al. Episodic Memory of Odors Stratifies Alzheimer Biomarkers in Normal Elderly. Annals of Neurology. Published online November 14 2016