NHS Choices


Can a facelift make you more likeable?

NHS Choices - Behind the Headlines - Mon, 13/04/2015 - 13:00

"Having plastic surgery can make you more likeable," the Mail Online reports. It says cosmetic facial surgery not only makes you look younger, but could also improve what people think of your character. As the Mail Online reports, women who received surgery "were rated as more attractive, feminine, and trustworthy".

This headline is based on a study carried out by plastic surgeons, which asked volunteers to rate the before and after photos of 30 women who had facial plastic surgery to make them look younger.

It found that, on average, the post-surgery photos were rated slightly, but significantly, better for femininity, attractiveness and two of the six personality traits assessed: likeability and social skills (but not trustworthiness).

However, this study has a number of limitations, which means its results are not conclusive. For example, the study was relatively small. The results also may not apply to all people who have had facial surgery, or be in line with the opinions of all people who viewed the before and after results.

In addition, the differences in scores were relatively small – between 0.36 and 0.39 on a seven-point scale. It's unclear whether this would have any real-life impact on people's interaction with the women if they saw them in person. A much larger study is needed to confirm these findings.

Many would argue that resorting to cosmetic surgery to boost your perceived likeability by a small amount is a drastic step. If you are considering plastic surgery, you should think carefully about the reasons why you want it and discuss your plans with your GP first.

Where did the story come from?

The study was carried out by researchers from Georgetown University Hospital and other surgical and research centres in the US.

No sources of funding were reported, and the authors reported no conflicts of interest. However, two of the study authors performed the facial rejuvenation surgeries on the women.

The study was published in the peer-reviewed medical journal, JAMA Facial Plastic Surgery.

The Mail Online does not point out any of this study's limitations. Its headline suggests that perceived trustworthiness was improved after surgery. But this difference was not statistically significant, meaning that we cannot confidently rule out this result occurring by chance.

What kind of research was this?

This was a cross-sectional study looking at whether people's perceptions of women's personalities changed after they had facial rejuvenation surgery.

While this type of plastic surgery is focused on making women look younger, the researchers wanted to see if people also changed their judgements about the women's personalities based on their photos alone.

This study design is appropriate to the question, but the way it was applied has several limitations, including the small sample size.

What did the research involve?

The researchers used before and after photos of 30 white women who had undergone facial rejuvenation surgery. They split these photos into six groups, each with five pre-surgery and five post-surgery photos (not of the same women).

They asked volunteers to rate the photos for their views on the women's femininity, attractiveness and six personality traits. The researchers then assessed how women scored based on their post-surgery compared with their pre-surgery photos.

The women whose photos were used had surgery between 2009 and 2013, including procedures such as:

  • facelift
  • eyelid surgery (to remove loose skin above the eyes or bags under the eyes)
  • eyebrow lift
  • neck lift
  • chin implant

To be included, the women's photos had to show well-matched, neutral facial expressions. The women had given permission for their photos to be used for research purposes.

The volunteers who rated the photos online did not know what the aim of the study was. Each set of photos was shown to at least 50 volunteers, and at least 24 responses were received for each set.

The volunteers were asked to rate the women on how much they thought they had the following personality traits on a seven-point scale, ranging from "strongly disagree" to "strongly agree", based on facial photos only:

  • aggressiveness
  • extroversion
  • likeability
  • trustworthiness
  • risk-seeking
  • social skills

The volunteers were not shown the same woman before and after surgery, to avoid them comparing the two photos directly.

Doctors, nurses or other healthcare workers with experience of facial analysis or facial plastic surgery were not allowed to take part.

The researchers compared the average scores for the pre- and post-surgery photos for each woman individually and overall. They also assessed the women according to what type of surgery they had. 

What were the basic results?

Overall, the researchers found the women's post-surgery photos scored better than their pre-surgery photos on the seven-point scale for:

  • likeability – post-surgery photos scored 0.36 points higher on average
  • social skills – post-surgery photos scored 0.38 points higher on average
  • attractiveness – post-surgery photos scored 0.36 points higher on average
  • femininity – post-surgery photos scored 0.39 points higher on average

There were no statistically significant differences in:

  • trustworthiness
  • aggressiveness
  • extroversion
  • risk-seeking

When looking at individual surgeries, the only two procedures associated with significant changes in scores were facelift (22 women) and lower eyelid surgery (13 women).

The researchers did not find differences in results by women's age, pre-surgery attractiveness scores, number of surgical procedures, or operating surgeon. 

How did the researchers interpret the results?

The researchers concluded that, "Facial plastic surgery changes the perception of patients by those around them."

They say that although the surgery is generally aimed at making people look younger, the study found it also affected people's views on a woman's likeability, social skills, attractiveness, and femininity. 

Conclusion

This study suggests that people's perceptions of women's femininity, attractiveness and certain personality traits can improve after they receive facial surgery that aims to make them look younger.

However, there are a number of points to bear in mind:

  • The study was relatively small, assessing only 30 women (average age not reported) and only up to 50 people rating each set of photos. The women were also all white and operated on by the same two surgeons. The results may not be applicable to all people who have these kinds of surgeries or to all people viewing the results.
  • It was not clear how many women's photos were assessed for inclusion, or whether the person selecting which photos to use knew the purpose of the study. Ideally, they would have been blinded to the purpose of the study so this could not influence their selection, either consciously or subconsciously.
  • All patients reportedly had to agree to have their photos used, but it was unclear whether this meant every patient operated on, or just those who had their photos used in the study. If they were asked after surgery, women whose surgeries had a good result may have been more likely to allow their photos to be used.
  • Results may depend on how much younger the woman looks or how natural the results look. Ideally, researchers also would have assessed people's perceptions of the women's ages and whether they had facial surgery or looked natural, and how these factors affected personality assessment. In the three "after" photos shown in the research paper, the women look relatively natural, without obvious signs of having had facial surgery.
  • The researchers carried out a lot of statistical tests and there is a chance that some of them yielded significant results just by chance.
  • It was unclear exactly how many women had each surgery, and therefore whether the analyses by type of surgery had enough "power" to detect differences between the groups. Some women had multiple surgeries, making it difficult to separate their effects.
  • The differences seen in the scores were relatively small – between 0.36 and 0.39 on a seven-point scale. It is unclear whether a difference of this size would have any real-life impact on people's interaction with the women, or whether they would express similar views if they saw the women in person.
  • The photos shown as examples in the research paper were not identical in terms of what the women were wearing (clothes or make-up) – ideally, these would have been standardised.

Overall, this small study gives some indication that people may judge photographs of women who have had facial surgery differently in terms of attractiveness, femininity and personality, but it is not conclusive. A much larger study is needed to confirm this.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Having plastic surgery can make you more LIKEABLE: Women who went under the knife were rated as more attractive, feminine and trustworthy. Mail Online, April 10 2015

Links To Science

Reilly MJ, Tomsic JA, Fernandez SJ, et al. Effect of Facial Rejuvenation Surgery on Perceived Attractiveness, Femininity, and Personality. JAMA Facial Plastic Surgery. Published online April 9 2015


How dogs could sniff out prostate cancer

NHS Choices - Behind the Headlines - Mon, 13/04/2015 - 11:31

"Dogs trained to detect prostate cancer with more than 90% accuracy," The Guardian reports. Two dogs originally trained to sniff out explosives proved remarkably successful in detecting compounds associated with prostate cancer in urine samples.

This headline is based on research that trained two explosive-detection sniffer dogs to identify the urine samples of men with prostate cancer. The dogs were then tested on urine samples from 362 men with the condition and 540 controls without it, most of whom were men.

One dog correctly identified all the samples from men with prostate cancer, and the other dog identified 98.6% of them. The dogs incorrectly identified between one and four percent of the control samples as being from men with prostate cancer ("false positives").

Some of the samples in the study were used for training the dogs and assessing their performance, and ideally the study would be repeated with entirely new samples to confirm the results.

This study suggests dogs can be trained to differentiate between urine samples from men known to have prostate cancer and people without the condition. But further testing should be carried out to test whether the dogs can accurately detect men with prostate cancer who are not yet known to have the disease.

It seems unlikely that dogs would be routinely used on a widespread basis to detect prostate cancer. If researchers can identify the exact chemical(s) the dogs are detecting in urine, they could try to develop methods to detect them.

Read more about potential warning signs for prostate cancer and when you should see your GP.

Where did the story come from?

The study was carried out by researchers from the Humanitas Clinical and Research Center and other centres in Italy. Sources of funding were not reported.

It was published in the peer-reviewed medical publication Journal of Urology on an open-access basis, so it is free to read online or download.

This study has been covered by a range of news outlets, no doubt owing to the appeal of any story involving dogs.

Most news sources illustrated the story with photos of the wrong breeds of dog, but The Independent got it right by showing a German Shepherd. The Daily Mirror suggested that the control group was all male, when this was not the case. 

What kind of research was this?

This was a cross-sectional study that tested whether sniffer dogs could correctly differentiate between urine samples from men known to have or not have prostate cancer.

This type of study is suitable for an early-stage assessment of the promise of a new test. If successful, researchers would need to go on to test samples of men who are currently undergoing assessment for suspected prostate cancer, rather than those already known to have the disease. This would better assess how the dogs would perform in a real-world clinical situation.

The researchers say there is a need for a better way to detect prostate cancer. A blood test for prostate specific antigen (PSA) can indicate whether a man might have prostate cancer.

But PSA is also raised in non-cancerous conditions, such as infection or inflammation, so the test also picks up a lot of men who do not have the disease (false positives).

A raised PSA level alone is not a reliable test for prostate cancer. It needs to be combined with an examination and other invasive tests (for example, a biopsy) to determine whether a man has the condition.

Other studies have suggested sniffer dogs can detect the odour of certain chemicals in the urine of men with prostate cancer.

However, not all tests with dogs have been successful, possibly because of variations in how the dogs were trained and differences in the populations tested. The researchers wanted to test rigorously trained sniffer dogs to see how they would perform. 

What did the research involve?

The researchers trained two sniffer dogs to identify urine samples from men with prostate cancer. They then allowed the dogs to sniff urine samples from men with or without prostate cancer and indicate which ones had the prostate cancer smell.

The urine samples were collected from 362 men with prostate cancer at different stages detected in various ways. The control samples were from 418 men and 122 women who were either healthy, or had a different type of cancer or another health problem.

The dogs taking part in the study were two three-year-old female German Shepherd explosive detection dogs called Zoe and Liu. They were trained using a standard procedure to identify prostate cancer samples using 200 urine samples from the cancer group and 230 from the control group.

In the first stages of training, urine samples from healthy women and women with other forms of cancer were used as the control samples to make certain there would be no chance of the sample being from a man with undetected prostate cancer. The next stages of training first used samples from young healthy men, and then older healthy men.

After the training, the researchers tested the dogs on all of the samples from the men with prostate cancer and controls in batches of six random samples. The researcher analysing the results did not know which samples were from men with prostate cancer. 

What were the basic results?

One dog correctly identified all the prostate cancer urine samples and only incorrectly identified seven (1.3%) of the non-prostate cancer samples as coming from men with prostate cancer (false positives).

The other dog correctly identified 98.6% of the prostate cancer urine samples and missed the other 1.4% (five samples). She incorrectly identified 13 (3.6%) of the non-prostate cancer samples as coming from men with prostate cancer. The false positive results all came from men. 
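
As a rough illustration, these counts can be turned into the standard test-accuracy measures. The sketch below uses the first dog's figures and the sample counts quoted in this article (362 cancer samples, 540 controls); it shows the arithmetic only, and is not code from the study:

```python
# Accuracy measures for the first dog, using the counts quoted above.
cancer_samples = 362     # urine samples from men with prostate cancer
control_samples = 540    # samples from controls (418 men, 122 women)

true_positives = 362     # the first dog identified every cancer sample
false_positives = 7      # control samples wrongly flagged as cancer

sensitivity = true_positives / cancer_samples            # proportion of cancers detected
false_positive_rate = false_positives / control_samples  # controls wrongly flagged
specificity = 1 - false_positive_rate                    # proportion of controls correctly cleared

print(f"sensitivity: {sensitivity:.1%}")                  # sensitivity: 100.0%
print(f"false positive rate: {false_positive_rate:.1%}")  # false positive rate: 1.3%
print(f"specificity: {specificity:.1%}")                  # specificity: 98.7%
```

Even a low false positive rate can matter in practice: in a screening population where prostate cancer is rare, false positives could still outnumber true cases.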

How did the researchers interpret the results?

The researchers concluded that a trained sniffer dog can identify chemicals specific to prostate cancer in urine with a high level of accuracy.

They say further studies are needed to investigate how well the dog sniffing test would perform in a real-world sample of men undergoing investigation for possible prostate cancer. 

Conclusion

This study found highly trained sniffer dogs are capable of differentiating between urine samples from men known to have prostate cancer and people without the condition. The study's strengths are the rigorous training of the dogs and the large number of samples tested.

The samples tested were all from people already known to either have or not have prostate cancer, and included some samples used in the dogs' training. Ideally, the study would be repeated with completely new samples to confirm the results.

If the results are confirmed, the next step would be to test whether the dogs can accurately detect men with prostate cancer who are not yet known to have the disease. For example, the dogs could be used to assess the urine of men who have raised PSA levels but a negative biopsy who are being monitored to see if they develop the condition.

The researchers noted they could not completely rule out that a small number of men in the control group had undetected prostate cancer. The risk would be low as they were either young or had no family history of prostate cancer, no prostate enlargement detected on digital rectal examination, and low PSA levels.

It seems unlikely that dogs would ever be routinely used on a widespread basis to detect prostate cancer. However, if researchers can identify the exact chemical(s) the dogs are detecting in urine, they could try to develop methods of detecting these chemicals.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Dogs trained to detect prostate cancer with more than 90% accuracy. The Guardian, April 11 2015

Dogs can sniff out prostate cancer almost every time. The Daily Telegraph, April 11 2015

How DOGS can help diagnose prostate cancer: Canines SNIFF out disease with 98% accuracy. Daily Mail, April 11 2015

Prostate cancer detected by dogs with more than 90% accuracy. The Independent, April 11 2015

Dogs can detect prostate cancer with 98 percent reliability, study finds. Metro, April 21 2015

Dogs found to have '98% reliability rate' in sniffing out prostate cancer in men research finds. ITV News, April 11 2015

Amazing cancer-sniffing dogs detect prostate tumours in men 98% of the time. Daily Mirror, April 11 2015

Links To Science

Taverna G, Tidu L, Grizzi F, et al. Olfactory System of Highly Trained Dogs Detects Prostate Cancer in Urine Samples. The Journal of Urology. Published online April 2015


Can plucking hairs stimulate new hair growth?

NHS Choices - Behind the Headlines - Fri, 10/04/2015 - 13:00

"Plucking hairs 'can make more grow'," BBC News reports, while the Daily Mail went as far as saying scientists have found "a cure for baldness". But before you all reach for your tweezers, this discovery was made in mice, not humans.

The study that prompted the headlines involved looking at hair regeneration in mice. The results showed hair regeneration depended on the density at which hairs were removed. Researchers describe how the hairs seemed to have a "sense and response" process that works around a threshold.

If hair removal – specifically plucking – was below this threshold, there was no biological response to repair and regrow the hair, and the mice remained bald.

However, once the plucking threshold was crossed, the plucked hair regrew – and often more hair regrew than was there originally. This effect is known as quorum sensing.

Quorum sensing is a biological phenomenon where, as the result of a range of different signalling devices, individual parts of a group are aware of the total population of that group. This means they can respond to changes in population values in different ways.

One example is the formation of new ant nests. A worker ant can tell when an individual part of the new nest is almost full, so they will then lead other ants to other parts of the new nest.

But we don't know whether the same thing would happen in people. It is certainly too early to claim that plucking hairs can cure baldness, as the Daily Mail's headline suggests; attempting it may actually do more harm than good.

Philip Murray of Dundee University, one of the authors of the study, said: "It would be a bit of a leap of faith to expect this to work in bald men without doing more experiments." 

Where did the story come from?

The study was carried out by researchers from the University of Southern California in collaboration with colleagues based in Taiwan, China and Scotland.

It was funded by the US National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), the National Science Council of Taiwan (NSC), the Taipei Veterans General Hospital, and several research grants.

The study reports that an invention number for "Enhance hair growth via plucking" was disclosed to the University of Southern California, which suggests that someone – possibly one of the authors – might have patented the idea, or a patent is pending.

The study was published in the peer-reviewed journal, Cell.

Generally, the media reported the story as if this study directly applies to people before revealing that all the research was done in mice. The Daily Mail even claimed in its headline that this research offered a cure for baldness, which was misleading. 

What kind of research was this?

This was an animal study using mice to explore the biology of hair regeneration. Hair loss, or alopecia, has many different symptoms and causes, and can be an issue for both men and women.

The study involved plucking hair from the backs of mice. This might have some similarity with people, but it's clearly not completely the same.

Researchers tend to use mice as a first step in their research when they have a theory they want to investigate without subjecting humans to experiments.

If the experiments in mice look helpful – say, in curing baldness – the researchers eventually try it in people. But the results in people aren't always the same as results in mice, so we shouldn't let our hopes climb too high. 

What did the research involve?

The study team plucked hairs from the backs of mice and studied the biological reaction. They analysed different skin cell behaviour, what chemical signals were sent to neighbouring cells, and how different repair systems were activated at different times.

They plucked hairs at different densities – that is, plucking hair close together or far apart to see if this affected any of the repair responses. 

What were the basic results?

The researchers found plucking was able to stimulate hairs to grow back, sometimes more than were there originally, but only after a certain threshold. Below this threshold, not enough signals were produced to kick-start the hair regeneration systems.

Mice usually have a hair density of between 45 and 60 hairs per square mm, probably much more than even the hairiest adults. A look at a selection of hair transplant websites suggests natural human hair density varies between 70 and 120 hairs per square cm, less than a tenth of the density of mice.

The researchers found they needed to pluck more than 10 hairs per square mm to stimulate regrowth, otherwise a bald patch remained. If they plucked all of the hairs, the same number grew back.

However, when they plucked 200 hairs from a diameter of 3mm, they found around 450 grew back. The new hairs grew back in the plucked area, but also nearby. When they plucked 200 hairs from a diameter of 5mm, this regenerated 1,300 hairs.
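
These figures can be checked against the roughly 10 hairs per square mm threshold with some quick arithmetic. The sketch below assumes the plucked patches were circular (our assumption) and uses the numbers quoted in this article:

```python
import math

def plucking_density(hairs_plucked, diameter_mm):
    """Hairs plucked per square mm of a circular patch."""
    area_mm2 = math.pi * (diameter_mm / 2) ** 2
    return hairs_plucked / area_mm2

# Both experiments sat above the ~10 hairs per square mm threshold...
density_3mm = plucking_density(200, 3)   # ~28.3 hairs per square mm
density_5mm = plucking_density(200, 5)   # ~10.2 hairs per square mm

# ...and both regrew more hairs than were plucked.
regrowth_3mm = 450 / 200     # 2.25 hairs regrown per hair plucked
regrowth_5mm = 1300 / 200    # 6.5 hairs regrown per hair plucked
```

Interestingly, on these figures the wider, less densely plucked 5mm patch produced the larger regrowth ratio, even though it only just cleared the threshold.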

Based on these biological observations, the researchers believe each hair follicle was acting as a sensor for a wider skin area to assess the level of damage through hair loss.

Input from each follicle fed into a collective biological circuit, which was able to quantify injury strength. Once a threshold was reached, a regeneration mechanism was activated. This type of system is often referred to as quorum sensing. 

How did the researchers interpret the results?

The researchers made no mention of the human implications of this study. They concluded that the sense and response system they uncovered "is likely to be present in the regeneration of tissue and organs beyond the skin". 

Conclusion

This study showed that hair regeneration in mice depends on the density at which hairs are removed. The researchers describe a sense and response mechanism working around a threshold.

If hair removal, specifically plucking, was below this threshold, there was no biological response to repair and regrow the hair, and the mice remained bald. But once the plucking threshold was crossed, the plucked hair regrew – and often more hair regrew than was there originally.

The main limitation with this research is it did not involve humans, so we don't know whether the same thing would happen in people. It might actually be unlikely.

For example, people with trichotillomania, a condition in which a person feels a compulsion to pull out their hair, end up with patches of hair loss and balding where the hair does not regrow. There may be specific stress-related reasons why this is the case, but it is a reminder not to take these mouse results at face value.

It is certainly too early to advise hair plucking as a cure for baldness, as the Daily Mail's headline suggests. That may do more harm than good. The "cure for baldness" headline is also misguided, as the study was about hair regeneration after recent plucking. The findings are less relevant to those with longer-term hair loss, either in mice or people.

Philip Murray of Dundee University, one of the authors of the study, summed this up in The Guardian when he said: "It would be a bit of a leap of faith to expect this to work in bald men without doing more experiments."

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Plucking hairs 'can make more grow'. BBC News, April 10 2015

Bald truth: plucking hair out can stimulate growth, study finds. The Guardian, April 9 2015

Cure for thinning hair? Scientists find plucking stimulates huge growth spurt. The Daily Telegraph, April 9 2015

At last, a cure for baldness! Scientists discover how to regrow hair (as long as you're prepared to pull it all out first). Daily Mail, April 9 2015

Links To Science

Chen C, Wang L, Plikus MV, et al. Organ-Level Quorum Sensing Directs Regeneration in Hair Stem Cell Populations. Cell. Published online April 9 2015


Middle-age spread 'seems to reduce dementia risk'

NHS Choices - Behind the Headlines - Fri, 10/04/2015 - 12:30

"Being overweight 'reduces dementia risk'," BBC News reports. The story comes from a cohort study of nearly 2 million UK adults aged over 40. It showed that being overweight or obese was linked to a lower risk of dementia up to 20 years later, compared with people who were a healthy weight. Underweight people were at a higher risk of dementia.

This result is surprising as it contradicts the current consensus of opinion, including the advice on this website, that obesity may be a risk factor for some types of dementia.

In the best scientific tradition, this study raises more questions than it answers. But it is important not to overlook the many serious health risks associated with obesity, such as heart disease and diabetes.

As one of the key authors, Dr Qizilbash, rightly says, the findings are "not an excuse to pile on the pounds or binge on Easter eggs … You can't walk away and think it's OK to be overweight or obese. Even if there is a protective effect, you may not live long enough to get the benefits".

In conclusion, a single study is unlikely to lead to a change in clinical guidelines, but it is likely to prompt further research into the issue. 

Where did the story come from?

The study was carried out by researchers from the London School of Hygiene and Tropical Medicine, London, and OXON Epidemiology, a clinical research company based in London and Madrid.

The study reports no funding for the work and the authors declare no conflicts of interest.

It was published in the peer-reviewed medical journal, The Lancet Diabetes & Endocrinology.

Generally, the media reported the story accurately and responsibly, taking a range of angles. The Daily Telegraph outlined how "a middle age spread may protect against dementia"; The Guardian said "underweight people face significantly higher risk"; while The Independent went with a lack of risk angle, saying that, "being overweight may not increase dementia risk" as previously thought. All accurately reflect the results of the underlying study.

Much of the news outlined how these findings contradict previous research, but may be more reliable because the study was bigger and more robust. Most also cautioned against taking this to mean that being overweight or obese is somehow good for your health, and said the link between dementia and obesity was an open case, needing more research to find out what's going on.

What kind of research was this?

This was a retrospective cohort study looking at body mass index (BMI) and dementia, using information from UK GP records.

BMI is a measure of weight relative to height, calculated as weight in kilograms divided by the square of height in metres. The main four BMI categories – underweight, healthy weight, overweight and obese – are based on whether your weight is likely to affect your health.

The healthy weight category means your weight is not likely to affect your health. Both the underweight and overweight categories mean your weight is likely to increase your chance of disease and death, and people who are obese are at higher risk still.

This type of study cannot prove cause and effect, but can give us an idea of possible links. One of the disadvantages of using existing GP records is you can only use the information that has already been collected. This might not include all the information you would want to collect as a researcher, such as changes in body weight, physical activity levels, diet, and other lifestyle factors.  

What did the research involve?

The researchers analysed more than 1.9 million UK GP records to see whether BMI was linked to a recorded diagnosis of dementia.

The cohort of people analysed were all over 40, had no previous diagnosis of dementia, and had to have a BMI measure recorded in their GP notes between 1992 and 2007. Everyone else was excluded.

Eligible medical records were reviewed to see if people went on to develop dementia, changed GP practice, or died up to July 2013. The average time elapsed between the single BMI measurement and any of these events was nine years. Some had records spanning 20 years. 

The team split the people into standard BMI categories and calculated their relative risk of developing dementia. The categories were:

  • underweight: BMI less than 20kg/m2
  • healthy weight: BMI 20 to less than 25kg/m2
  • overweight: BMI 25 to 30kg/m2
  • obese: BMI greater than 30kg/m2 (further divided into obesity classes I, II and III)
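
As a minimal sketch, these categories can be expressed as a simple classifier. BMI itself is weight in kilograms divided by the square of height in metres; the cut-offs below are the study's (note its underweight threshold of 20kg/m2 is higher than the 18.5kg/m2 usually used by the NHS):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def study_bmi_category(value):
    """Classify a BMI value using the study's cut-offs listed above."""
    if value < 20:
        return "underweight"
    elif value < 25:
        return "healthy weight"
    elif value <= 30:
        return "overweight"
    else:
        return "obese"

print(study_bmi_category(bmi(70, 1.75)))   # 70 / 1.75^2 = 22.9 -> healthy weight
```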

The analysis adjusted for a range of known confounders already recorded in the GP records, including:

  • age
  • gender
  • smoking
  • alcohol consumption
  • history of heart attack, stroke or diabetes
  • recent use of statins or drugs to treat high blood pressure

What were the basic results?

Dementia affected 45,507 people, just over 2 out of every 100 taking part (crude prevalence 2.32%).

Compared with people of a healthy weight, underweight people had a 34% higher risk of dementia (rate ratio [RR] 1.34, 95% confidence interval [CI] 1.30 to 1.39).

Compared with people of a healthy weight, overweight people had a 19% lower risk of dementia (RR 0.81, 95% CI 0.79 to 0.83). The incidence of dementia continued to fall marginally for every increasing BMI category, with very obese people (BMI greater than 40kg/m2) having a 33% lower dementia risk than people of a healthy weight (RR 0.67, 95% CI 0.60 to 0.74).

These patterns stayed stable throughout two decades of follow-up, after adjustment for potential confounders and allowance for the J-shape association of BMI with mortality. 
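
The percentage figures above come straight from the rate ratios: an RR below 1 means a lower dementia rate than the healthy weight reference group, and an RR above 1 a higher rate. A minimal sketch of the conversion, using the RRs quoted in this article:

```python
def risk_change_percent(rate_ratio):
    """Percentage change in risk relative to the reference group (RR = 1)."""
    return (rate_ratio - 1) * 100

# Rate ratios reported in the study, relative to the healthy weight group.
for group, rr in [("underweight", 1.34), ("overweight", 0.81), ("very obese", 0.67)]:
    change = risk_change_percent(rr)
    direction = "higher" if change > 0 else "lower"
    print(f"{group}: {abs(change):.0f}% {direction} risk")
```

This prints the same 34% higher, 19% lower and 33% lower figures reported above.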

How did the researchers interpret the results?

The research team says: "Our study shows a substantial increase in the risk of dementia over two decades in people who are underweight in mid-life and late-life.

"Our findings contradict previous suggestions that obese people in mid-life have a higher subsequent risk of dementia. The reasons for and public health consequences of these findings need further investigation." 

Conclusion

This cohort study of more than 1.9 million UK adults aged over 40 links being overweight or obese to a lower risk of dementia, compared with healthy weight people. Underweight people were at a higher risk of dementia.

The study has many strengths, such as its large size and applicability to the UK. However, the authors note their results buck the trend of other research, which found being overweight or obese was linked to an increased risk. They suggest their study is probably more reliable than past studies, which were smaller.

They aren't quite sure what this means, and say: "The reasons for and public health consequences of these findings need further investigation."

It's important to realise that this finding doesn't mean that gaining weight will somehow protect you against dementia. Many dietary, environmental and genetic factors are likely to influence both BMI and dementia, so the relationship is complex.

However, we do know that being overweight or obese is bad for your health. Being underweight can also be harmful, as it may mean people are not getting the nutrients their body needs, which may be one of the reasons why underweight people were found to have an increased risk of dementia in this study.

Dr Liz Coulthard, Consultant Senior Lecturer in Dementia Neurology at the University of Bristol, said: "We do know that obesity carries many other risks, including high blood pressure, heart disease, diabetes and increased rates of some types of cancer. So maintaining a healthy weight is recommended."

However, there are limitations to bear in mind with this study that may have affected the findings to some degree.

Selection bias

First is the possibility of selection bias. Around half (48%) of eligible people did not have a BMI record, so were excluded from the study. A further third (31%) with BMI records were excluded for not having at least 12 months of previous health records. The study team were aware of this, saying: "If BMI is more likely to be measured in people with comorbidities than in healthy people, which might in turn be associated with dementia risk, then some bias is possible." But they went on to say this is unlikely.

Confounders

Residual confounding is also a possibility. The researchers had to use variables collected in the GP records, which didn't cover everything they would have wanted. For example, they adjusted for anti-hypertensive medicines and statins but not for blood pressure and blood lipid values, which, they say, do affect the associations of BMI with heart attack and stroke.

Unavailable data

Other unavailable potential confounders, such as physical activity level, socioeconomic status and ethnic origin, might also have influenced the recorded association between BMI and dementia. We can't say to what extent.

Maintaining a healthy weight is recommended to reduce the risk of heart disease, diabetes and some cancers. This study suggests the benefits of this may not extend to reducing the risk of dementia, but the relationship is likely to be complex and is not yet fully understood.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Being overweight 'reduces dementia risk'. BBC News, April 10 2015

Middle-age spread may protect against dementia. The Daily Telegraph, April 10 2015

Underweight people face significantly higher risk of dementia, study suggests. The Guardian, April 10 2015

Could being skinny in middle age raise your risk of dementia? Underweight people third more likely to develop diseases. Mail Online, April 10 2015

Being overweight may not increase dementia risk and could protect against mental decline. The Independent, April 10 2015

Links To Science

Qizilbash N, Gregson J, Johnson ME, et al. BMI and risk of dementia in two million people over two decades: a retrospective cohort study. The Lancet – Diabetes and Endocrinology. Published online April 9 2015

Categories: NHS Choices

'Marathon men' make better sexual partners, media claims

NHS Choices - Behind the Headlines - Thu, 09/04/2015 - 15:31

"Marathon runners are the best in bed," is the spurious claim in Metro.

The headline is based on a study that only looked at long-distance runners’ finger ratios – said to be a marker for high testosterone levels – not reported partner sexual satisfaction (or as other sources report, high sperm counts and "reproductive fitness").

The study is based on the concept of what is known as 2D:4D ratio – a measurement of the ratio between the length of the index finger (second digit) and the ring finger (fourth digit).
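In code, the ratio is simply the index-finger length divided by the ring-finger length. A hypothetical sketch (in the study itself, measurements were taken with electronic callipers from photocopies of the hands):

```python
def digit_ratio(index_length_mm: float, ring_length_mm: float) -> float:
    """2D:4D ratio: index finger (2nd digit) over ring finger
    (4th digit). A comparatively longer ring finger gives a
    ratio below 1, described in the article as more 'masculine'."""
    return index_length_mm / ring_length_mm

# e.g. index 72mm, ring 76mm -> ratio of about 0.95, below 1
```

The example measurements are made up for illustration; they are not study data.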

Previous research suggests that men with a low 2D:4D ratio (when their ring finger is comparatively longer) may have been exposed to higher levels of testosterone in the womb, which is linked to the potential for reproductive success.

Researchers wanted to see if running prowess in males could be a sign of their evolutionary reproductive potential (as measured by their 2D:4D ratio).

They found that men with more "masculine" digit ratios – i.e. longer ring fingers – did better in the 2013 Robin Hood half marathon in Nottingham than those with the "least masculine" ratios. The same link was found in women, albeit to a lesser degree.

Researchers did not look at whether these more "masculine" men were judged to be more attractive by women.

Where did the story come from?

The study was carried out by researchers from the University of Cambridge and the Institute of Child Health, London. There was no external funding.

The study was published in the peer-reviewed medical journal PLOS ONE. This is an open-access journal, so the study is free to read online.

Reporting of this study by the UK media was almost universally poor, with many sources making claims that were not supported by the study:

  • Mail Online: "Those who run endurance races get more dates and have a higher sex drive" – unproven
  • Metro: "Marathon runners are the best in bed" – unproven
  • The Daily Telegraph: "Good runners are likely to have had ancestors who were excellent hunters… creating a biological advantage for their descendants and passing on the best genes" – unproven

At least the Daily Mirror and Huffington Post tempered their coverage with a "may" and a "probably".

None of the media coverage made it clear that the study was using running ability as a proxy for hunting prowess in pre-agricultural societies and had little or nothing to do with modern relationships.

 

What kind of research was this?

This was an observational study that aimed to test the researchers' theory that physical prowess at endurance running is associated with male reproductive fitness. In this study, the researchers used the digit ratio to predict reproductive success. This is the ratio between index and ring finger, which is a marker of hormone exposure in the womb.

The researchers explain that the high value placed by females on male ability to acquire resources has been well documented, especially in pre-industrial societies. Before agriculture developed, hunting ability may have provided an important way of demonstrating male resourcefulness and seems to be linked to fertility, offspring survivorship and number of mates.

There are several theories that try to explain this link; one is that hunting success is a reliable signal for underlying traits such as athleticism, intelligence or generosity in distributing meat.

In “persistence hunting” – one of the earliest forms of human hunting – catching prey often required running for long distances. This may act as a reliable signal of reproductive potential, say the researchers.

Since increased testosterone exposure in the womb is associated with reproductive success, an association between testosterone and endurance running would make running prowess a reliable signal of male reproductive potential, they argue.

 

What did the research involve?

The researchers recruited 439 men and 103 women taking part in the Robin Hood half marathon in Nottingham in 2013. Participants ranged in age from 19 to 35, and were all white (Caucasian). The half marathon, they say, was chosen for its appropriateness to pre-agricultural, hunter-associated running and reflects endurance running ability.

All competitors wore small electronic chips to guarantee accurate race timings.

Photocopies were taken of the athletes’ left and right hands on finishing the race, and these were used at a later date to measure the 2D:4D ratios.

The digit ratios were measured using special electronic callipers and were taken twice from each photocopy, to ensure accuracy.

The researchers then analysed the results, looking for an association between the digit ratio and the race time in each sex.
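The reported "significant positive correlation" between digit ratio and finishing time is the kind of relationship a Pearson correlation coefficient captures. A minimal sketch with made-up data (the study's actual analysis also controlled for age, which this does not):

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two
    equal-length sequences of numbers."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (digit ratio, finishing time in minutes) pairs,
# not study data: a positive r means higher (less 'masculine')
# ratios go with slower race times
ratios = [0.93, 0.95, 0.96, 0.98, 1.00]
times = [95.0, 102.0, 104.0, 110.0, 118.0]
r = pearson(ratios, times)
```

With these invented numbers `r` comes out strongly positive, which is the direction of association the study reports in men.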

 

What were the basic results?

They found that among the men there was a "significant positive correlation" between right and left hand 2D:4D ratio and marathon time, with higher levels of performance associated with a lower, more "masculine", digit ratio. The correlation strengthened after controlling for age. The same was true of the female sample, but to a lesser degree.

 

How did the researchers interpret the results?

The researchers say their results support the theory that endurance running ability may signal reproductive potential in men through its association with prenatal exposure to testosterone. Running prowess, they suggest, could act as a reliable signal for male reproductive potential.

 

Conclusion

The link this study draws between distance runners' digit ratios, successful hunting and male reproductive potential is a little tenuous.

This was an observational study using long-distance runners as a proxy for hunters and digit ratio as a proxy for reproductive potential. The most it can show is an association between the two.

It should also be noted that:

  • the study did not assess any non-runners
  • the runners' ability was measured in only one race
  • many qualities contribute to marathon running success, including muscle strength and mental endurance
  • the study only included Caucasians, so the results may not apply to people of other ethnicities

This is an interesting study, but does not prove that long-distance runners are more fertile or more attractive.

Ways to increase your fertility levels include stopping smoking, drinking alcohol in moderation and maintaining a healthy weight through healthy eating and exercise.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Marathon runners are the best in bed. Metro, April 8 2015

Why long-distance runners make the best partners. The Daily Telegraph, April 8 2015

Male long-distance runners may find it easier to attract women. Daily Mirror, April 8 2015

How long distance running makes men attractive: Those who run endurance races get more dates and have a higher sex drive. Mail Online, April 8 2015

Male Long-Distance Runners Are (Probably) More Attractive To Women, Says Science. The Huffington Post, April 8 2015

Links To Science

Longman D, Well JCK, Stock JT. Can Persistence Hunting Signal Male Quality? A Test Considering Digit Ratio in Endurance Athletes. PLOS One. Published online April 8 2015

Categories: NHS Choices

Short people may have an increased risk of heart disease

NHS Choices - Behind the Headlines - Thu, 09/04/2015 - 13:00

"Shorter people at greater risk of heart disease, new research finds," reports The Guardian.

It reports that a study of nearly 200,000 people has found that for every 2.5 inches (6.35cm) less in height, there is a 13.5% increased risk of coronary heart disease or CHD (also known as coronary artery disease).

This means that someone who is 5ft (1.52m) would have a 32% increased risk of CHD compared to someone who is 5ft 6in (1.68m).
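The 32% figure appears to come from scaling the per-increment risk linearly: 6 inches is 2.4 increments of 2.5 inches, and 2.4 × 13.5% ≈ 32%. A sketch of that back-of-envelope calculation (an assumed interpretation of the article's arithmetic, not the study's statistical model):

```python
def linear_risk_increase(height_diff_in: float,
                         per_increment_pct: float = 13.5,
                         increment_in: float = 2.5) -> float:
    """Scale the per-2.5-inch risk figure linearly, as the
    article appears to do. (Compounding the risk instead,
    1.135 ** 2.4, would give roughly 35%.)"""
    return (height_diff_in / increment_in) * per_increment_pct

# 5ft vs 5ft 6in is a 6-inch difference in height
print(round(linear_risk_increase(6)))  # 32
```

The default parameter values are the article's figures; everything else here is illustrative.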

Previous research identified the link between shorter adult height and increased risk of CHD, but why this might be was not known. It is thought that environmental factors could be involved. For example, a person fed a poor diet in childhood could grow up both shorter than average and unhealthy.

This current study attempted to create a clearer picture by looking for genetic variations linked to short stature that were also linked to CHD.

Through sophisticated statistical analysis, they measured the association between shorter height due to these variants and CHD. Oddly, there was no association for women.

It should be noted that this type of study can indicate potential reasons for the associations (such as shortness being associated with high cholesterol) but cannot prove that shorter height directly causes CHD.

While you can put on a pair of "killer" or Cuban heels, there is not much you can do about your genetics. Ways you can reduce your CHD risk include stopping smoking, drinking alcohol in moderation and maintaining a healthy weight through diet and exercise. These steps should help keep your cholesterol and blood pressure at a healthy level.

 

Where did the story come from?

The study was carried out by researchers from the University of Leicester, the University of Cambridge and numerous other institutes and universities across the UK and internationally. It was funded by the British Heart Foundation, the UK National Institute for Health Research, the European Union and the Leducq Foundation.

The study was published in the peer-reviewed The New England Journal of Medicine.

The UK media accurately reported the study. The Guardian helpfully put the results of the study into context with a quote from one of the authors, Sir Nilesh Samani who said: "The findings are relative, so a tall person who smokes will very likely be at much higher risk of heart disease than somebody who is smaller". He was then quoted by the BBC News as saying: "In the context of major risk factors this [short stature] is small – smoking increases the risk by 200-300% – but it is not trivial."

 

What kind of research was this?

This was a case control study which compared the genetic make-up of people with and without coronary heart disease (CHD). It specifically looked at genetic variations associated with height, and aimed to see if there was an association between "genetically determined height" and risk of CHD. The researchers also studied whether genetically determined height was associated with cardiovascular risk factors.

Previous research identified the link between shorter adult height and increased risk of CHD, but the exact reason why was not known. This type of study investigates whether genetics could be a potential reason for the association, but cannot prove that shorter height causes CHD, or rule out other factors contributing to the association.

 

What did the research involve?

The researchers compared genetic variations that are associated with height in people with and without CHD.

The researchers used data on 65,066 people who had CHD (cases) and 128,383 people with no history of CHD (controls) that had been collected from a number of different studies, and pooled in a previous meta-analysis. This meta-analysis identified 180 DNA sequence variations that were estimated to account for 10% of the difference in people’s heights.

In the current study they measured the association between each DNA variant and height. They then measured the association between each DNA variant and CHD. From this, they calculated whether there was an association between height determined by each DNA variant and CHD. As this association was very small for each DNA variant, the researchers then combined all of the DNA variant results to obtain an overall association for what they termed "genetically determined height" and risk of CHD. They performed separate analyses for men and women.
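Combining many tiny per-variant effects into a single overall estimate is commonly done with inverse-variance weighting. The paper does not publish its code, so this is a generic sketch of that pooling step with invented numbers, not the authors' method:

```python
import math

def ivw_combine(estimates, std_errors):
    """Inverse-variance weighted pooling: each per-variant
    effect estimate is weighted by 1/SE^2, so precise
    estimates count for more than noisy ones."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical per-variant effect estimates and standard errors
est, se = ivw_combine([0.02, 0.01, 0.03, 0.015],
                      [0.01, 0.02, 0.015, 0.01])
```

The pooled estimate always lands between the smallest and largest of the individual estimates, pulled towards the most precise ones.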

The researchers then looked for any associations between the genetically determined height and the following risk factors for CHD:

  • high blood pressure
  • high LDL "bad" cholesterol
  • low HDL "good" cholesterol
  • high triglyceride level (a type of fat)
  • type 2 diabetes
  • increased body mass index (BMI)
  • high blood sugar
  • low insulin sensitivity
  • smoking

 

What were the basic results?

The average age of participants was 57.3 years and the majority of cases were male (73.8%) compared to only half of the controls (49.8%).

Most of the 180 individual genetic variants that have been associated with height had no statistically significant association with the risk of CHD. The researchers had expected this, as each variant is associated with only a very small effect.

When all of the results were combined, for each 6.5cm decrease in "genetically determined height" there was a 13.5% increased risk of CHD (95% confidence interval (CI) 5.4% to 22.1%).

When looking at men and women separately, there was an association in men, but no significant association between the genetically determined height and CHD in women.

Among the risk factors for CHD, the height-related variants were only associated with LDL (bad) cholesterol and high triglyceride levels. They estimated that 19% of the association between shorter height and CHD could be accounted for by high LDL cholesterol and 12% by high triglycerides.

 

How did the researchers interpret the results?

The researchers concluded that using a genetic approach there is "an association between genetically determined shorter height and an increased risk of CHD". They suggest that this may in part be due to "the association between shorter height and an adverse lipid profile" (levels of total cholesterol, high-density lipoprotein (HDL) cholesterol, triglycerides, and calculated low-density lipoprotein (LDL) cholesterol).

 

Conclusion

Previous observational studies have suggested a link between shorter height and CHD. What was not clear was the extent to which this might be due to genetic factors or confounding by socioeconomic and lifestyle factors.

The current study aimed to assess the potential role of genetics, and reduce the possibility of socioeconomic factors influencing the results. To do this the researchers calculated the association between "genetically determined height" and CHD, using 180 genetic variations previously found to be associated with height in Europeans. This reduces the influence of socioeconomic factors as genetic variations are present from birth.

They found an association between genetically determined shorter height and increased risk of CHD. They also found that the genetic variants were associated with high LDL cholesterol and triglycerides and this could at least partly account for the increased risk of CHD. It remains unclear exactly how the genetic variants identified influence cholesterol, triglycerides or CHD. It is also not known if the results would be applicable to people not of European descent.

Interestingly, there was no significant association for women. The researchers say this could be because there were too few women with CHD in the analysis.

Though the study design aims to reduce the possibility of confounding, the researchers note that they cannot rule out the possibility of different behaviours in shorter people having an impact on the results. The study also does not completely rule out other factors influencing the overall link between height and CHD.

Whatever your height you should remain vigilant about the risk of CHD, which has now become the leading killer in the UK.

You cannot change your genetics, but factors that you can control to reduce the risk of CHD include stopping smoking, drinking alcohol in moderation and maintaining a healthy weight through diet and exercise. These steps should help keep your cholesterol and blood pressure at a healthy level.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Shorter people at greater risk of heart disease, new research finds. The Guardian, April 8 2015

Short people's 'DNA linked to increased heart risk'. BBC News, April 9 2015

Short people more likely to develop coronary heart disease. The Daily Telegraph, April 8 2015

Short People More Likely To Get Heart Disease. Sky News, April 8 2015

Short people 'at higher risk of heart disease', study finds. ITV News, April 8 2015

Short people at greater risk of heart attack, says study. The Independent, April 8 2015

Links To Science

Nelson CP, Hamby SE, Saleheen D, et al. Genetically Determined Height and Coronary Artery Disease. The New England Journal of Medicine. Published online April 8 2015

Categories: NHS Choices

No such thing as baby brain, study argues

NHS Choices - Behind the Headlines - Wed, 08/04/2015 - 16:00

"'Baby brain' is a stereotype and all in the mind," the Mail Online reports.

The headline is prompted by a US study that aimed to see if "baby brain" (aka "mumnesia") – alleged memory lapses and problems with concentration during pregnancy – is a real phenomenon or just a myth.

The study recruited 21 women in the third trimester of pregnancy. A second group of 21 women who had never been pregnant were recruited to act as a control. The women completed a variety of tests to measure their memory, attention and problem solving ability. The tests were repeated several months later and the two groups compared.

Though the pregnant women reported greater memory difficulties, there were no differences in the results of the tests between the two groups.

The researchers say this shows that pregnancy and childbirth do not affect the ability to "think straight". However, we do not know what the level of performance would have been for the pregnant women before they were pregnant. It is also possible that the small numbers of women in each group could have affected the results. The findings could be completely different with a different sample of women.

This study does not provide conclusive evidence that pregnancy has no effect on memory and attention.

Given that pregnancy can often cause tiredness, it would be surprising if some women didn’t have temporary problems with memory and concentration.

 

Where did the story come from?

The study was carried out by researchers from Brigham Young University in Utah. It was funded by the Brigham Young University College of Family, Home and Social Sciences, and the Women’s Research Institute at Brigham Young University.

The study was published in the peer-reviewed Journal of Clinical and Experimental Neuropsychology.

The Mail Online reported the story reasonably accurately, but did not explain the major limitation of the study's design – that it does not take into account the memory and problem solving abilities of the women before they became pregnant.

 

What kind of research was this?

This was a case-control study that aimed to see if cognitive ability (memory and problem solving) changed in pregnancy and after childbirth. Previous research has found mixed results, with some studies indicating improved cognitive abilities during pregnancy and some showing a reduction or no difference.

This type of study can show associations, but cannot prove that any cognitive differences are due to the pregnancy, as other factors could cause the results.

 

What did the research involve?

The researchers recruited 21 pregnant women and a control group of 21 healthy women who had never been pregnant. The women completed a variety of tests to measure their cognitive ability. The tests were repeated several months later and the two groups compared.

The women were given 10 neuropsychological tests, which measured their memory, attention, language, executive abilities (such as problem solving) and visuospatial skills (the ability to process and interpret visual information about where objects are). They also filled out questionnaires to assess their mood, and levels of anxiety, quality of life, enjoyment and satisfaction.

Each test was conducted when the pregnant women were in their third trimester and repeated between three and six months after giving birth. The non-pregnant women were also tested twice, with a similar time gap between the tests.

Women were excluded from the study if they had a history of:

  • learning disabilities
  • attention deficit hyperactivity disorder (ADHD)
  • psychotic or bipolar disorder
  • epilepsy
  • stroke
  • traumatic brain injury
  • substance abuse/dependence

The results were then analysed during and after pregnancy, and compared to the controls. Further analysis was performed in the pregnancy group, comparing women in their first pregnancy with women who had previously given birth.

 

What were the basic results?

The pregnant women were older, on average, with a mean age of 25, compared to 22 for the control group. Eleven of the pregnant women and nine of the controls were students.

The main results were:

  • No difference between the groups in terms of language ability or memory, though the pregnant women reported worse memory than controls.
  • No difference between the groups in tests of attention and visuospatial ability, with higher scores for both groups in the second session of tests.
  • Executive functioning also improved for both groups. For one of the tests, the Trail Making Test, the pregnant women were slower at Part A both during and after pregnancy. Part A measures visual scanning and processing speed by asking the participant to draw a line as quickly as possible between consecutive numbers randomly written on paper. Part B measures scanning and processing speed, but also mental flexibility by requiring the person to join each consecutive number and letter: 1-A-2-B-3-C etc. There was no difference in scores for Part B between the groups.

Pregnant women reported a lower quality of life and were more likely to have depressive symptoms compared to controls. The results were as follows:

  • Six pregnant women had mild symptoms of depression during pregnancy. One of them continued to have mild symptoms after birth. These women performed similarly to the control women in the neuropsychological tests.
  • One woman had moderate symptoms of depression during pregnancy and developed severe symptoms by the second test after birth.
  • No women in the control group had significant symptoms of depression.

There were no differences between women in their first pregnancy compared to women who had previously given birth.

 

How did the researchers interpret the results?

The researchers say their "findings suggest no specific cognitive differences between pregnant/postpartum women and never-pregnant controls". This was despite the pregnant/postpartum women reporting more memory difficulties.

 

Conclusion

The researchers conclude that although the pregnant women reported memory problems, these did not show up on their tests. However, this does not take into account their pre-pregnancy ability. The women may have performed better before they got pregnant, which is why they are now reporting memory problems. None of these women were tested before they got pregnant, which is the major limitation of the study.

The researchers say that because there were a similar number of students in each group, the control group was a good enough representation of how the pregnant women would have performed pre-pregnancy. However, there will be wide variation in cognitive abilities, even between different students. There is no information about cognitive abilities other than the length of time each group spent in education: an average of 16 years for the pregnancy group, compared to 15 for the control group. The range was the same for each group, at 13 to 18 years.

The other limitation of the study is the small number of women in each group, which limits the strength of the results and makes it more likely that they could occur by chance. A different or larger sample of women could give completely different results.

It is unclear why the pregnant women were slower at the Trail Making Test Part A compared to the controls, but not with Part B. It is likely that the small sample size contributed to this anomaly.

The study highlights the importance of recognising low mood and symptoms of depression in pregnant women and in the months after giving birth. Read more about low mood and depression during pregnancy, and low mood and depression after pregnancy.

In conclusion, this study does not provide conclusive evidence that pregnancy has no effect on memory and attention.

Pregnancy can cause tiredness and fatigue, particularly during the first trimester, and looking after a newborn baby can be exhausting work. Therefore, you shouldn’t feel surprised if you do have the occasional memory lapse or loss of concentration. Dads may not be immune to "baby brain" after the baby is born, either.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

"Baby brain doesn't exist and the condition is all in mum's mind," say scientists. Daily Mirror, April 7 2015

'Baby brain' DOESN'T exist: Tests reveal pregnant women and new mothers suffer no decline - leading scientists to declare the condition is 'all in the mind'. Mail Online, April 8 2015

Links To Science

Logan DM, Hill KR, Jones R, et al. How do memory and attention change with pregnancy and childbirth? A controlled longitudinal examination of neuropsychological functioning in pregnant and postpartum women. Journal of Clinical and Experimental Neuropsychology. Published online May 12 2014

Categories: NHS Choices

Do diet soft drinks actually make you gain weight?

NHS Choices - Behind the Headlines - Wed, 08/04/2015 - 12:10

"Is Diet Coke making you fat? People who drink at least one can a day have larger waist measurements," the Mail Online reports. A US study found an association between the daily consumption of diet fizzy drinks and expanded waist size.

This study included a group of older adults aged 65 or over from San Antonio, Texas. Researchers asked participants about their consumption of diet soft drinks and measured their body mass index (BMI) and waist circumference. They then looked at whether this was associated with changes in body measures over the next nine years.

The study found people who drank diet soft drinks every day had a greater increase in waist circumference at later assessments compared with those who never drank them (3.04cm gain versus 0.77cm). Daily drinkers also had a slight gain in BMI (+0.05kg/m2) compared with a minimal loss in non-drinkers (-0.41kg/m2).

The hypothesis that diet drinks can actually make you fatter is not a new one – we covered a similar study back in January 2014. The problem with this field of research is it is very difficult to prove cause and effect. As with this study, people who regularly drink diet drinks may be overweight to start with and they could be drinking diet drinks in an effort to lose weight.

This study will add to the variety of research examining the potential harms or benefits of artificial sweeteners or diet drinks. But it does not prove that drinking diet drinks will make you fat.

If you are trying to lose weight, good old-fashioned tap water is a cheaper, calorie-free alternative to diet drinks. 

Where did the story come from?

The study was carried out by researchers from the University of Texas Health Science Center in the US, and was funded by the US National Institute on Aging, the National Institute of Diabetes and Digestive and Kidney Diseases, and the National Center for Research Resources. The authors declare no conflicts of interest. 

It was published in the peer-reviewed Journal of the American Geriatrics Society.

The Mail Online's coverage of this study seems overly conclusive, suggesting it provides evidence that drinking diet fizzy drinks causes people to become overweight. But this has not been proven, and the Mail did not consider this study's many limitations in their reporting.

It also includes an error in its story, describing the study of 749 people "in which 466 participants survived". In fact, 466 is the number of people who had data on body measurements available for at least one of the follow-up assessments – a measure of retention in the study, not a survival rate.

Furthermore, in saying that, "Large waistlines linked to diabetes, stroke, heart attack and cancer", it is suggested that this study found a higher waist circumference was linked to the development of these diseases. However, health outcomes were not assessed in this study.

And, somewhat unfairly, Diet Coke was singled out as the main culprit. The study actually included any kind and brand of diet fizzy drink. 

What kind of research was this?

This was a prospective cohort study that aimed to look at the link between diet soft drink intake and waist circumference.

The researchers discuss how concerns about high sugar intake over the past few decades have led to an increase in the consumption of artificial sweeteners. But the potential detrimental health effects of sweeteners have often been debated.

Some studies found no evidence of either benefit or harm from sweeteners and diet drinks, while others found an increased risk of cardiovascular and metabolic problems, such as weight gain, obesity, high blood pressure and diabetes.

This study aimed to examine the effect artificially sweetened diet drinks have on weight changes over time by looking at people taking part in an ongoing cohort study.

The main limitation with this type of study, however, is that it is not able to prove cause and effect, as the relationship is likely to be influenced by various other factors (confounders).

What did the research involve?

This research included a group of older Mexican and European American people taking part in the San Antonio Longitudinal Study of Aging (SALSA). This community-based study aimed to look at cardiovascular risk factors in people who were aged 65 or over at the start of the study (1992-96).

The first follow-up assessments were conducted an average of seven years later (2000-01), with two further follow-ups at 1.5-year intervals (2001-03, then 2003-04). The study included 749 people, with an average follow-up time of 9.4 years.

Assessments included measurements of participants' height, weight, waist circumference, fasting blood glucose levels, physical activity, and presence of diabetes. Dietary questionnaires were given at baseline and included the consumption of diet soft drinks.

People were asked the number of cans or bottles of diet soft drinks they consumed a day, week, month or year, and were categorised into three intake groups: non-users, occasional users (more than zero but less than one a day), and daily users (one or more a day) of diet soft drinks.
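
The grouping described above can be sketched as a simple function. This is our illustration, not the researchers' code, and the cut-off for daily users (one or more a day) is an assumption based on the category descriptions:

```python
def intake_group(drinks_per_day):
    """Assign a diet soft drink intake category, per the study's three groups.

    Assumes the 'daily user' boundary is one or more drinks a day; the
    original paper should be consulted for the exact cut-offs.
    """
    if drinks_per_day == 0:
        return "non-user"
    if drinks_per_day < 1:
        return "occasional user"
    return "daily user"

print(intake_group(0))    # non-user
print(intake_group(0.5))  # occasional user
print(intake_group(2))    # daily user
```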

The researchers looked at the relationship between diet fizzy drink intake at the start of the study, and changes in BMI and waist circumference from when the study started to each follow-up point. Analyses were adjusted for age, gender, ethnicity, socio-demographics, diabetes, smoking status, and leisure activity.

Despite the large initial cohort size, only 384 people (51%) had data available on soft drink intake at baseline and body measurements at the first and second follow-ups, reducing to 291 (39%) by the third follow-up.  

What were the basic results?

The researchers found people who drank diet drinks at the start of the study also had significantly higher BMIs at that point compared with non-users. They also tended to have higher waist circumferences than non-users, though not significantly so.

The proportion of daily users who were overweight or obese at the start of the study was 88%, compared with 81% of occasional users and 72% of non-users.

Overall, the researchers found that for people who returned for one or more follow-ups, changes in BMI varied according to diet soft drink intake. Non-users experienced a minimal decrease in BMI (average 0.41kg/m² decrease), as did occasional users (0.11kg/m² decrease), while daily users had a slight increase (0.05kg/m² gain).

Changes in waist circumference, meanwhile, were much more notable, with daily diet soft drink users experiencing a gain four times that of non-users. Average waist circumference gains at each interval were 0.77cm for non-users, 1.76cm for occasional users, and 3.04cm for daily users.  
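
The "four times" figure is just the ratio of the reported average gains. As a quick check (labels are ours):

```python
# Average waist circumference gain per follow-up interval, in cm,
# as reported in the study.
gains = {"non-users": 0.77, "occasional users": 1.76, "daily users": 3.04}

# Daily users' gain relative to non-users' gain
ratio = gains["daily users"] / gains["non-users"]
print(round(ratio, 1))  # roughly 3.9, i.e. about four times
```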

How did the researchers interpret the results?

The researchers concluded that, "In a striking dose-response relationship, increasing diet soda intake was associated with escalating abdominal obesity, a potential pathway for cardiometabolic risk in this ageing population." 

Conclusion

This prospective study found that people who drank diet soft drinks every day experienced greater waist circumference gain over up to nine years of follow-up compared with those who never drank diet drinks (3.04cm gain versus 0.77cm).

They also experienced a minimal gain in BMI (+0.05kg/m²) over follow-up, compared with a minimal loss in non-users of diet drinks (-0.41kg/m²).

However, this study certainly does not prove that diet drinks, and diet drinks alone, are responsible for these small increases in waist circumference and BMI.

People who drank diet drinks tended to have higher BMIs and waist circumferences than non-users to start with. At the start of the study, when diet soft drink consumption was assessed, 88% of those drinking them daily were overweight or obese, compared with 72% of non-users.

Though daily users experienced slightly greater gains in BMI and waist circumference, they also tended to have higher body measurements to start with. It is possible that people with weight concerns consume diet drinks in an effort to manage their weight.

There may be a variety of unhealthy lifestyle behaviours that contributed to the gain in body measures during the study. For example, the researchers adjusted their analyses for leisure-time physical activity, but did not consider food intake, apart from diet drinks, or look at total energy intake.

Overall, it is not possible to say from this analysis that the diet drinks are the cause of the changes in body measures, as various other unmeasured health and lifestyle factors could be having an influence.

Other points to bear in mind with this study are:

  • This was an older cohort of people aged 65 or over, so we don't know how representative the results would be for younger groups.
  • This was a specific sample of people from San Antonio in Texas, and we don't know whether their health, lifestyle and environmental influences may differ from other population groups.
  • Despite the initial sample size being fairly large at 749, data on drink consumption and body measurements was only available for about half of these people. The results may have been different had data been available for the full cohort.
  • We don't know the significance of the small changes in BMI and waist circumference observed.
  • We don't know whether continued daily consumption of diet soft drinks in the longer term would be associated with continuously increasing body measures, or whether this would have direct health effects (such as in terms of cardiovascular disease).
  • The effects observed in this study can't be attributed to specific artificial sweeteners or specific diet soft drink brands.

The researchers' statement that there is a "striking dose-response relationship" between soda consumption and obesity seems overly bold given this study's limitations.

This study does not prove that drinking diet drinks will cause you to become fat. If you are trying to lose weight, we recommend that you ditch the expensive diet drinks and stick to water.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Is Diet Coke making you fat? People who drink at least one can a day have larger waist measurements. Mail Online, April 7 2015

Links To Science

Fowler SPG, Williams K, Hazuda HP. Diet Soda Intake Is Associated with Long-Term Increases in Waist Circumference in a Biethnic Cohort of Older Adults: The San Antonio Longitudinal Study of Aging. Journal of the American Geriatrics Society. Published online March 17 2015

Categories: NHS Choices

Superbug 'could kill 80,000 people' experts warn

NHS Choices - Behind the Headlines - Tue, 07/04/2015 - 16:00

"Superflu pandemic is biggest danger to UK apart from a terrorist attack – and could kill 80,000 people," is the warning in The Independent. A briefing produced by experts outlines how antibiotic resistance could pose a significant threat (PDF, 440kb) to public health.

"Up to 80,000 people in Britain could die in a single outbreak of an infection due to a new generation of superbugs," reports The Daily Telegraph – one of many news sources reporting on these estimated figures from the government.


Why are superbugs in the news again today?

The news is based on the threat of antimicrobial resistant microbes (sometimes called "superbugs" in the media) described in the government’s 2015 National Risk Register of Civil Emergencies (NRR). This is reported to be the first time the NRR has covered this threat.


What is the National Risk Register?

The NRR is an assessment of the risks of civil emergencies facing the UK over the next five years, and is produced every two years. The NRR report is a public-facing version of a classified internal government report called the National Risk Assessment (NRA). Civil emergencies are events or situations which threaten serious damage to human welfare or the environment in the UK, or threaten serious damage to national security.

In producing the report, the government assesses how likely an event is, and what the impact of it might be. The report considers events that have at least a 1 in 20,000 chance of happening in the next five years, and that would require government intervention. The report also covers issues that are longer-term or broader than single events, but which also have the potential to adversely impact society. The threat of antimicrobial resistance (AMR) is one such longer-term issue.


What is antimicrobial resistance and why is it a risk?

AMR is a global health threat.

Antimicrobials are drugs used to treat an infectious organism, and include antibiotics (used to treat bacteria), antivirals (for viruses), antifungals (for fungal infections) and antiparasitics (for parasites).

When antimicrobials are no longer effective against infections they were previously effective against, this is called antimicrobial resistance. Regular exposure to antimicrobials prompts the bacteria or other organisms to change and adapt to survive these drugs.

Nowadays, fewer new antibiotics are being developed. This means that once common infections become resistant, we have fewer options, and the stronger drugs in our antibiotic armoury have to be used to treat them. We are therefore facing a possible future in which we are left without effective antibiotics.


What could the impact be?

The report states that the cases of infection where AMR poses a problem are "expected to increase markedly over the next 20 years". It estimates that if a widespread outbreak were to happen, around 200,000 people could be affected by a bacterial blood infection resistant to existing drugs, and 80,000 of these people could die. It also says that many deaths could be expected from other forms of resistant infections.


What about "superflu"?

The Independent’s headline suggests that it is "superflu" which could kill 80,000, and that it is the "biggest danger to the UK apart from a terrorist attack". The headline appears to conflate two parts of the report. 

The 80,000 figure appears to come from the estimates of the potential impact of a resistant bacterial blood infection reported above, not specifically "superflu". The report does note that flu pandemics would become more serious without effective treatments, but does not give an estimate of how many people antimicrobial resistant pandemic flu could kill.

Flu pandemics (not specifically antimicrobial resistant flu) are also one of the specific risks assessed by the report. They are given a maximum relative impact score of five out of five, which is the same score as catastrophic terrorist attacks.

The report estimates that pandemic flu could infect half of the UK population and lead to between 20,000 and 750,000 additional deaths.

A flu pandemic was estimated as having a relative likelihood of between 1 in 2 and 1 in 20, and was reported to "[continue] to represent the most significant civil emergency risk".


How did the report assess the risk of AMR?

The report did not specify how it arrived at the specific AMR impact figures, but it does give its overall methods. The risks are identified by consulting experts within and outside the government, as well as the devolved administrations. For each risk, the report selects a "reasonable worst case" scenario – something that would be a challenge and could plausibly occur. The likelihood of an event (such as pandemic flu) was based on information such as historical analyses and modelling where possible, along with scientific expertise. The impact score for an event was assessed on a scale of 0 to 5 (0 least and 5 greatest) and averaged across five areas:

  • deaths
  • illness or injury
  • social disruption
  • economic harm
  • psychological impact
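
Averaging across the five areas is straightforward arithmetic. A minimal sketch, using hypothetical scores (the NRR does not publish per-area scores here):

```python
# Hypothetical area scores (0 = least impact, 5 = greatest) for one risk.
# These numbers are illustrative only, not taken from the report.
area_scores = {
    "deaths": 5,
    "illness or injury": 4,
    "social disruption": 3,
    "economic harm": 4,
    "psychological impact": 4,
}

# Overall impact score is the mean of the five area scores
overall = sum(area_scores.values()) / len(area_scores)
print(overall)  # 4.0
```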


What is being done about this threat?

The report notes that AMR is a global problem that needs international action to be tackled. The report describes some of the actions being taken:

  • The government and devolved administrations are working with international partners to get support for joint action internationally.
  • Government departments, the NHS and other partners are working together to implement the UK five-year Antimicrobial Resistance Strategy published in 2013.
  • The impact of actions to reduce the spread of AMR is being measured and reported on by a cross-government high level steering group.
  • There is an ongoing independent review of AMR, chaired by the economist Jim O’Neill. Two reports from this review have already been released. Further reports are expected in 2015, and in 2016 the review will recommend actions to be agreed internationally to deal with AMR.


What can you do to help reduce the spread of AMR?

People can help cut antibiotic (or wider antimicrobial) resistance by recognising that many common infections, such as coughs, colds and stomach upsets, are often viral infections that will go away after a short period without treatment (known as "self-limiting" infections). Antibiotics have no effect on viruses, so these infections do not need them.

If you are prescribed an antibiotic (or other antimicrobial), it is also important to make sure you take the full course as prescribed, even if you feel better before you finish the course.

This will reduce the chances of the organisms being exposed to the drug but then surviving, which encourages the development and spread of resistance to the drug.

Taking the course as prescribed will also increase the chances of you getting better. By not taking a full course, you may find that the infection comes back and requires further antibiotic prescriptions, which further increases the chances of resistant organisms developing.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Superflu pandemic is biggest danger to UK apart from a terrorist attack – and could kill 80,000 people. The Independent, April 6 2015

Drug-resistant superbug could kill 80,000 in one outbreak as antibiotics lose their strength. Daily Mirror, April 6 2015

Superbug ‘could kill 80,000 Britons’, according to government report. Metro, April 6 2015

New breed of superbugs which are resistant to antibiotics could kill 80,000 Britons in one outbreak as scientists warn even catching flu could have 'serious' impact. Mail Online, April 6 2015

Tens of thousands of lives threatened by rise in drug-resistant superbugs, experts warn. ITV News, April 6 2015

The FLU could kill you: Shock report warns modern drugs will STOP working within 20 YEARS. Daily Express, April 6 2015

Categories: NHS Choices

Vigorous exercise 'may help prevent early death'

NHS Choices - Behind the Headlines - Tue, 07/04/2015 - 13:00

"Short bursts of vigorous exercise helps prevent early death," The Independent reports after an Australian study found vigorous exercise, such as jogging, reduced the risk of premature death.

The study involved adults aged 45 to 75 years old followed up over 6.5 years. Those who did more vigorous activity (as part of their general total moderate to vigorous activity levels) were less likely to die during follow-up than those who did no vigorous activity.

This large study was well designed, and the researchers also tried to take factors into account that they knew could influence the results (confounders).

But, as with all studies, there are some limitations – for example, the researchers only asked about physical activity once, and activity levels may have changed over time.

These results also bear out the proven benefits of exercise, regardless of how much of it is vigorous, and support current recommendations for the amount of physical activity people should do.

While doing some vigorous activity may bring some benefits, it is important that people set themselves realistic targets they can safely achieve.  

Where did the story come from?

The study was carried out by researchers from James Cook University and other universities in Australia. It was funded by the Heart Foundation of Australia.

The study was published in the peer-reviewed medical journal JAMA Internal Medicine.

The coverage in the papers is variable. While all the papers are right in saying that vigorous exercise may be beneficial, there is some misreporting. The Daily Telegraph's headline says that, "Swimming, gardening or golf 'not enough to prevent early death'," which is not true.

Gentle swimming and vigorous gardening both fell under "moderate activity", and even those who just did moderate activity had a lower risk of death than those who did no moderate to vigorous activity at all.

The Telegraph also talks about the effects on heart disease and diabetes, but these outcomes were not assessed by this study.

The Daily Express helpfully includes a quote noting that, "There is no question that some exercise is better than nothing. But the more intensive the activity, the less likely people will come back to it, so the question is how do we get people to do some – and then those who do some to do a bit more?"

However, at the end of the story, they then include a video of "chubby guy dancing in Speedos to holiday exercise class" for people's amusement, which is not likely to encourage people to take up exercise.

The Independent refers to "short bursts" of vigorous exercise being beneficial, but the study itself did not assess length of the bursts.

The paper does include a note of caution from one study author, however, who said that, "For those with medical conditions, for older people in general and for those who have never done any vigorous exercise before, it's always important to talk to a doctor first." 

What kind of research was this?

This was a prospective cohort study assessing whether achieving more moderate to vigorous activity through vigorous activity specifically was associated with a reduced risk of death during follow-up.

While we know that physical activity is associated with longer life, it is not clear whether vigorous activity is better than moderate activity.

While a recent systematic review suggested that vigorous activity may reduce the risk of death more than moderate activity, some of the studies included did not take overall activity into account.

This means these studies were not able to rule out that some of the effect of vigorous exercise was because people who did more vigorous activity tended to do more physical activity overall.

The current study wanted to avoid this problem. A prospective cohort study is the best way to assess this question. It's unlikely to be feasible to carry out a randomised controlled trial to successfully answer this question, as it's difficult to get people to agree to stick to a specific exercise pattern for a long time.

But the main limitation of a cohort study is that factors other than the factor of interest (such as overall activity, in this case) could potentially influence the results, so the researchers need to take these into account in their analyses. 

What did the research involve?

The researchers enrolled adults aged 45 and over from New South Wales. At the start of the study, participants were asked how much physical activity they did and how intense this activity was.

They were then followed up over about 6.5 years, and the researchers identified who died in this period.

The researchers then analysed whether the proportion of the total moderate to vigorous physical activity (MVPA) a person did that was vigorous was associated with their risk of death.

The participants were enrolled as part of the 45 and Up study in 2006-09. Potential participants were selected at random from the Australian national medical insurance (Medicare) database, which includes all citizens and permanent residents of the country.

This study did not include people aged over 75, as it was mainly interested in earlier preventable deaths.

Participants filled out a questionnaire at the start of the study on their MVPA in the past week. They were asked how much of this activity was:

  • vigorous – anything that "made you breathe harder or puff and pant", such as jogging, cycling, aerobics or competitive tennis, but not household chores or gardening
  • moderate – gentle swimming, social tennis, vigorous gardening or housework

Participants also reported how much walking they did, and this was included in their total MVPA.

Those who died between the start of the study and June 2014 were identified through the New South Wales Registry of Births, Deaths, and Marriages.

The main analyses in this study included 204,542 people who reported doing at least some MVPA. The researchers took factors that could affect the results (potential confounders) into account, including:

  • total MVPA
  • age
  • sex
  • educational level
  • marital status
  • area of residence (urban or rural)
  • body mass index (BMI)
  • physical function (whether the person had any physical limitations)
  • smoking status
  • alcohol consumption
  • fruit and vegetable consumption

What were the basic results?

During the study, 7,435 of the 217,755 participants died:

  • 8.3% of those who did no MVPA
  • 4.8% of those who did 10 to 149 minutes of MVPA a week
  • 3.2% of those who did 150 to 299 minutes of MVPA a week
  • 2.6% of those who did 300 minutes or more of MVPA a week

After taking potential confounders into account, this meant that compared with those who did no MVPA, the risk of death during the 6.5 years of follow-up was:

  • 34% lower in those who did 10 to 149 minutes of MVPA a week (hazard ratio [HR] 0.66, 95% confidence interval [CI] 0.61 to 0.71)
  • 47% lower in those who did 150 to 299 minutes of MVPA a week (HR 0.53, 95% CI 0.48 to 0.57)
  • 54% lower in those who did 300 minutes or more of MVPA a week (HR 0.46, 95% CI 0.43 to 0.49)
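
The percentage reductions quoted are simply the hazard ratios re-expressed: a hazard ratio of 0.66 means a (1 − 0.66) × 100 = 34% lower risk. A short sketch of that conversion (variable names are ours, not the study's):

```python
# Hazard ratios (HR) reported for each MVPA band versus no MVPA
hazard_ratios = {
    "10-149 min/week": 0.66,
    "150-299 min/week": 0.53,
    "300+ min/week": 0.46,
}

def percent_reduction(hr):
    """Convert a hazard ratio into the 'X% lower risk' figure quoted in the text."""
    return round((1 - hr) * 100)

for band, hr in hazard_ratios.items():
    print(band, percent_reduction(hr))
# 0.66 -> 34, 0.53 -> 47, 0.46 -> 54, matching the reductions quoted above
```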

Among those who did at least some MVPA, doing more of that activity as vigorous activity was associated with a reduced risk of death during follow-up:

  • 3.8% of those who did no vigorous activity died
  • 2.4% of those who did vigorous activity that accounted for less than 30% of their total MVPA died – a 9% reduction relative to those who did none (HR 0.91, 95% CI 0.84 to 0.98)
  • 2.1% of those who did vigorous activity that accounted for 30% or more of their total MVPA died – a 13% reduction relative to those who did none (HR 0.87, 95% CI 0.81 to 0.93)

The researchers found similar results when they looked at people with different BMIs, people who did different amounts of MVPA, and in people with or without cardiovascular disease or diabetes.  

How did the researchers interpret the results?

The researchers concluded there was an "inverse dose-response relationship" between the proportion of MVPA done as vigorous activity and the risk of death during follow-up.

They say this suggests that vigorous activity "should be endorsed in clinical and public health activity guidelines to maximise the population benefits of physical activity". 

Conclusion

This large study suggests that in middle to older age, doing more of your total moderate to vigorous activity as vigorous activity could help to reduce your risk of death.

This study's size is one of its strengths, with more than 200,000 people taking part. The fact that information on activity was collected at the start of the study, rather than asking people to recollect what they did in the past, is also beneficial.

The researchers also tried to take factors into account that they knew could influence their results, including cardiovascular medical conditions such as coronary heart disease, or other conditions that reduced people's ability to participate in physical activity, such as type 2 diabetes.

But, as with all studies, there are some limitations:

  • The researchers only asked about physical activity once, and people's activities may have been different before or after the week that was assessed.
  • The study only included those aged 45 to 75, and results may not apply to older individuals.
  • All lifestyle measures were reported by the participants themselves, and there may be some inaccuracies – the authors state people tend to be better at reporting vigorous activity than other types of activity.
  • The results may still be influenced by confounders the authors did not measure –  for example, only fruit and vegetable intake was assessed as a sign of a healthy diet, but other dietary aspects could have had an effect.

While the results suggest that doing more vigorous activity is beneficial, there are some points to think about. For example, the people who were doing more vigorous activity may also have done more vigorous activity in their younger years, and it may be that this consistency is the important factor.

The study also did not directly compare just moderate activity with vigorous activity. Further research is likely to assess these and other questions.

Importantly, the results highlight the beneficial effect of doing some moderate to vigorous activity, regardless of how much of it is vigorous. This supports current recommendations for exercise.

While doing some vigorous activity may add some benefit, it is important that people set themselves realistic targets they can safely achieve.

If it has been a while since you last exercised, the NHS Choices Couch to 5K running programme is one way to safely raise your fitness levels.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Short bursts of vigorous exercise helps prevent early death, says study. The Independent, April 6 2015

Swimming, gardening or golf 'not enough to prevent early death'. The Daily Telegraph, April 6 2015

How to live for longer: Brisk exercise slashes risk of early death. Daily Express, April 7 2015

Links To Science

Gebel K, Ding D, Chey T, et al. Effect of Moderate to Vigorous Physical Activity on All-Cause Mortality in Middle-aged and Older Australians. JAMA Internal Medicine. Published online April 6 2015

Categories: NHS Choices

Sedentary lifestyle – not watching TV – may up diabetes risk

NHS Choices - Behind the Headlines - Thu, 02/04/2015 - 16:31

“Experts claim being a couch potato can increase the risk of developing diabetes,” the Daily Express reports.

A study of people at high risk of diabetes produced the sobering result that each hour of time spent watching TV increased the risk of type 2 diabetes by 2.1% (after being overweight was taken into account).

The original trial compared two interventions aimed at reducing the risk of developing diabetes against a placebo. It involved more than 3,000 participants who were overweight and had high blood sugar levels and insulin resistance – early indications that they may be developing diabetes (often referred to as pre-diabetes). The interventions were either metformin (a drug used to treat diabetes) or a lifestyle intervention of diet and exercise.

This study used data collected from the original trial to see if there was a link between increased time spent watching the TV and risk of developing diabetes.

Across all of the groups they found a slightly increased risk, which was 3.4% per hour of TV watching when being overweight was not taken into account.

The findings may not be reliable, as researchers did not take other risk factors into account, such as family history of diabetes, use of other medication or smoking status. They also relied on self-reported TV watching times, which may not be very accurate.

That said, lack of exercise is a known risk factor for a range of chronic diseases – not just diabetes. Read more about why sitting too much is bad for your health.


Where did the story come from?

The study was carried out by researchers from the University of Pittsburgh, George Washington University, Pennington Biomedical Research Center and several other US universities. It was funded by several institutes of the US National Institutes of Health and three private companies: Bristol-Myers Squibb, Parke-Davis and LifeScan Inc.

The main funding source was the National Institute of Diabetes and Digestive and Kidney Diseases of the US National Institutes of Health. One of the authors has a financial interest in a company called Omada, which develops online behaviour change programmes, with a focus on diabetes.

The study was published in the peer-reviewed medical journal Diabetologia.

The UK media has focused on the statistic that the risk of getting diabetes increases by 3.4% per hour of TV watched. However, this figure does not take into account the risk factor of being overweight. When this is accounted for, the increased risk is less, at 2.1%.

The Daily Express’s online headline "Watching too much TV can give you diabetes" would not be our preferred wording. Some readers may take it as a statement that their TV sends out dangerous rays that increase their blood sugar levels. A more accurate, if slightly less striking, headline would be "Sedentary behaviour increases your diabetes risk".


What kind of research was this?

This study looked at data from a randomised controlled trial that aimed to test whether lifestyle changes or the diabetes drug metformin reduced the risk of developing diabetes compared to placebo (dummy pill). It was conducted on over 3,000 people at high risk of diabetes. The trial found that metformin reduced the risk by 31% and that the lifestyle intervention reduced it by 58% compared to placebo.

This study aimed to see if the lifestyle intervention, which aimed to increase physical activity, had any effect in reducing the amount of self-reported time spent sitting. As a secondary outcome, the researchers looked at data from each group to see if there was an association between time spent sitting and the risk of diabetes. As this was not one of the aims of the study, the results of this type of secondary analysis are less reliable.

Critics of this approach argue that it is akin to "moving the goalposts": researchers who fail to get a striking result for their stated aim then focus on a secondary aim that does produce one.

 

What did the research involve?

Over 3,000 adults at high risk of diabetes were randomly allocated to take metformin or a placebo, or to receive a lifestyle intervention, between 1996 and 1999. They were followed up for an average of 3.2 years to see if any of the interventions reduced the risk of developing diabetes.

The lifestyle group had an "intensive" lifestyle intervention focusing on a healthy diet and exercise. The aims for this group were to achieve 7% weight loss and to do at least 150 minutes of moderate-intensity activity per week (the minimum recommended activity level for adults). They were also advised to limit sedentary behaviours, such as watching TV. People given metformin or the placebo were also advised about a standard diet and given exercise recommendations. The study took place over 2.8 years.

A variety of measures were recorded, including weight and annual blood sugar tests. Each year, the participants were interviewed using a Modifiable Activity Questionnaire. This recorded self-reported estimates of leisure, TV watching and work-related activity.

In this analysis, the researchers compared the amount of time each person reported they spent watching the TV at the start and end of the study in each group.

 

What were the basic results?

Across all of the treatment groups, every hour per day of watching TV raised the risk of diabetes by 2.1%, after adjusting for age, sex, physical activity and weight. When the results did not take increased weight into account, the risk was higher, at 3.4% per hour.
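To put the per-hour figures in context, here is a rough sketch of what three extra hours of daily viewing would imply. It assumes the per-hour increase compounds multiplicatively, which is conventional for relative risk estimates of this kind but is not spelled out in the paper, so treat the numbers as illustrative only:

```python
# Illustrative only: if each daily hour of TV raises diabetes risk by 2.1%
# (adjusted) or 3.4% (unadjusted), and the per-hour increases compound
# multiplicatively, the relative risk after h extra hours is (1 + rate)**h.
def relative_risk(rate_per_hour, hours):
    """Relative risk after `hours` of daily TV, compounding per hour (assumed)."""
    return (1 + rate_per_hour) ** hours

for rate in (0.021, 0.034):  # adjusted and unadjusted per-hour increases
    rr = relative_risk(rate, 3)
    print(f"{rate:.1%}/hour over 3 hours -> {rr - 1:.1%} increased risk")
```

Under this assumption, three extra hours a day corresponds to roughly a 6.4% (adjusted) or 10.6% (unadjusted) increase in relative risk, still a modest effect in absolute terms.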

By the end of the study, people in the lifestyle intervention group watched less TV. At the start of the study, each group reported watching a similar amount of TV – around 2 hours and 20 minutes per day. Three years later, people in the lifestyle group watched on average 22 minutes less per day. Those in the placebo group watched 8 minutes less, but those on metformin did not change their TV watching significantly.

 

How did the researchers interpret the results?

The researchers concluded that although it was not a primary goal of the study, "the lifestyle intervention was effective at reducing sedentary time". They report that "in all treatment arms, individuals with lower levels of sedentary time had a lower risk of developing diabetes". They advise that "future lifestyle intervention programmes should emphasise reducing television watching and other sedentary behaviours, in addition to increasing physical activity".

 

Conclusion

This study has found an association between TV watching and an increased risk of developing diabetes. However, there are many potential confounding factors that were not taken into account in the analysis. These include other medical conditions, medication, family history of diabetes and smoking.

Additionally, all of the participants were at high risk of developing diabetes. They were overweight at the start of the study, had high blood sugar levels and insulin resistance – therefore, the study does not show whether this association would be found in people at low or moderate risk.

The original study did not set out to see if increased TV watching was associated with increased risk of developing diabetes; this was an afterthought, using the data that had been collected. This makes the results less reliable.

A further limitation is that the study relied on self-reported estimates of time spent watching TV over the previous year, which are unlikely to be entirely accurate.

Watching TV is not "going to give you diabetes", as the Express confusingly stated, but it is important to compensate for time spent being a couch potato by exercising regularly, eating a healthy diet and trying to achieve or maintain a healthy weight.

Read more about reducing your type 2 diabetes risk

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Watching too much TV can give you diabetes, experts warn. Daily Express, April 2 2015

How watching TV can increase your risk of diabetes: Every hour spent slumped in front of the screen can raise chance of developing the condition by 3.4%. Mail Online, April 2 2015

Secret to cutting diabetes risk is to turn off your television says new research. Daily Mirror, April 1 2015

Every Hour Spent In Front Of The TV Could Increase Diabetes Risk, Study Warns. The Huffington Post, April 2 2015

Links To Science

Rockette-Wagner B, Edelstein S, Venditti EM, et al. The impact of lifestyle intervention on sedentary time in individuals at high risk of diabetes. Diabetologia. Published online April 1 2015

Categories: NHS Choices

Do I need to stretch before exercising?

NHS Choices - Live Well - Thu, 02/04/2015 - 15:30
Do I need to stretch before exercising?

From weekend warriors to elite athletes, we’re often told we should stretch before exercising, but new research has cast doubt on this age-old practice.

Research suggests stretching before exercise is unlikely to reduce your risk of injury, improve your performance or prevent sore muscles.

But while it won’t necessarily do you any good, there’s no evidence that stretching before or after exercise will do you any harm either.

The upshot is that if you enjoy stretching, or it is a staple of your exercise routine, there’s no reason to stop.

Read on to get a deeper understanding of the mechanics of stretching and work out just how much stretching you really need in your life.

What’s the point of stretching?

Stretching for sport and exercise improves flexibility, which increases the ability of a joint to move through its full range of motion: in other words, how far it can bend, twist and reach. Some activities, such as gymnastics, require more flexibility than others, such as running.

Different types of stretches

Static stretch: stretching a muscle to the point of mild discomfort and holding that position, typically for at least 30 seconds or longer.

Proprioceptive neuromuscular facilitation (PNF): methods vary but typically PNF involves holding a stretch while contracting and relaxing the muscle. 

Dynamic stretch: performing gentle repetitive movements, such as arm swings, gradually increasing the range of motion of the movement but always remaining within the normal range.

Ballistic or bouncing stretches: involves going into a stretch and performing bouncing or jerking movements to increase range of motion.

Most of the research on stretching has focussed on static stretching, and there is less evidence on the other forms.

What happens when we stretch?

While the exact mechanics of what happens are not fully understood, regular stretching is thought to increase flexibility both by making muscles more supple and by retraining the nervous system to tolerate stretching further. Flexibility from regular stretching gradually disappears once you stop stretching – typically after four weeks.

Dr Polly McGuigan, a lecturer in biomechanics from the University of Bath, says it’s unclear whether the increase in range of motion of a joint is due to physical changes in the muscles which control those joints, or just a greater tolerance to stretch. She says: “My feeling is that there must be some changes at the muscle-tendon unit level as just increasing tolerance would not have the scale of effect that can be seen with some stretching programmes.”

How much flexibility do I need?

It depends on your activity. The flexibility demands of a gymnast or a ballet dancer are clearly different to those of a runner. There is little to be gained for a jogger or runner from having the flexibility of a gymnast.

To generate power during exercise, the muscles and tendons store and release energy like a spring. Too much flexibility may reduce the muscle’s natural spring, which may be detrimental for activities involving running, jumping and sudden changes in direction, such as football or basketball.

“However, too little flexibility may increase the risk of muscle strain injury as the muscles are unable to lengthen and absorb this energy,” says Dr Anthony Kay, Associate Professor of Biomechanics from the University of Northampton.

Does stretching prior to exercise affect performance?

Research suggests stretching before exercise makes your muscles weaker and slower, even though you might feel looser. “For most performances, this would be detrimental,” says Dr Ian Shrier, a sports medicine clinician and researcher and Associate Professor in the Department of Family Medicine at Montreal’s McGill University.

However, stretching also increases your range of motion. “A ballerina might require stretching before performance to do a full split during the show,” says Dr Shrier. “Even though she is weaker, her performance will be improved.” 

Dr Kay, who was the lead author on one of the largest reviews on pre-performance stretching, believes the reduction in performance from pre-exercise stretching has been overstated. “It is likely that durations of stretch used in the warm-up routines of most recreational exercisers produce negligible and transient reductions in strength,” he says. 

Does stretching before exercising reduce the risk of injury?

The evidence strongly suggests that pre-exercise stretching does not reduce the risk of injury. Professor Rob Herbert, Senior Principal Research Fellow with Neuroscience Research Australia, took part in the three largest randomised trials of the effects of stretching, all of which concluded stretching had little or no effect on injury risk.

The most recent and largest of the three studies found “a hint” of an effect on reducing injuries like ligament tears, muscle tears, strains and sprains. But Prof Herbert cautioned, “If stretching does cut your odds of one of these types of injuries, it’s by only a very small amount.”

When do injuries occur?

Muscle injuries happen when the muscle is put under too much stress, typically when it is stretched under pressure, for instance, when lowering a heavy weight.

The injury occurs not because the muscle isn’t flexible enough but because the muscle isn’t producing enough force to support itself. A muscle might not produce enough force either because it is not strong enough or it didn’t contract at the right time for a particular movement.

Does stretching after exercise help reduce soreness?

There is no evidence that stretching helps to reduce or prevent the type of pain that can show up a day or two after exercising, known as delayed onset muscle soreness (DOMS).

A 2011 review by Prof Herbert found that “muscle stretching, whether conducted before, after, or before and after exercise, does not produce clinically important reductions in delayed-onset muscle soreness in healthy adults.”

Should I stretch before exercising?

Your decision to stretch or not to stretch should be based on what you want to achieve. “If the objective is to reduce injury, stretching before exercise is not helpful,” says Dr Shrier. Your time would be better spent by warming up your muscles with light aerobic movements and gradually increasing their intensity.

“If your objective is to increase your range of motion so that you can more easily do the splits, and this is more beneficial than the small loss in force, then you should stretch,” says Dr Shrier.

For most recreational exercisers, stretching before exercise is therefore a matter of personal preference. “If you like stretching, do it, and if you don’t like stretching, don’t do it,” says Prof Herbert.

How should I warm up?

The purpose of warming up is to prepare mentally and physically for your chosen activity. A typical warm-up will take at least 10 minutes and involve light aerobic movements and some dynamic stretching that mimics the movements of the activity you’re about to perform.

“Gradually increasing the range of motion of these movements during the warm up will prepare the body for more intense versions of those movements during the sport itself,” says Dr McGuigan. This process will raise your heart rate and increase the blood flow to your muscles, thereby warming them up.

Warm muscles are less stiff and work more efficiently. Increased blood flow enables more oxygen to reach the muscles and produce energy. The warm-up also activates the nerve signals to your muscles, which results in faster reaction times.

Should I stretch after exercising?

There is some evidence that regular static stretching outside periods of exercise may increase power and speed as well as reduce injury. The best time to stretch is when the muscles are warm and pliable. This could be during a yoga or pilates class or just after exercising.

However, there is very limited evidence about stretching specifically after exercise. Dr Shrier says: “Since people tend not to set aside one time to stretch and one time for other activities, I recommend that they stretch after exercise.”

A post-exercise stretch will also help slow down your breathing and heart rate and bring the mind and body back to a resting state.

Categories: NHS Choices

New Down’s syndrome test more accurate than current screening

NHS Choices - Behind the Headlines - Thu, 02/04/2015 - 14:00

“Blood test for Down’s syndrome 'gives better results'," reports BBC News today. The test, which is based on spotting fragments of "rogue DNA", achieved impressive results in a series of trials.

A study of over 15,000 women found that the new blood test more accurately identifies pregnancies with Down's syndrome than the test currently used.

Down's syndrome is caused by having an extra chromosome (the packages of DNA containing information to grow and develop). The new test is able to detect small fragments of DNA from the baby floating about in the mother’s blood, called cell-free DNA (cfDNA).

The test measures the amount of DNA from each of these chromosomes in the mother’s blood, so it can detect whether any extra copies are present.

The cfDNA test performed significantly better than the current test across a range of screening test measures for Down’s syndrome, but was not 100% accurate. Importantly, it had a much lower false positive rate than the current test; a false positive is where a healthy baby is wrongly identified as having Down’s. A false positive result often leads to an unnecessary further diagnostic test that carries a small risk of causing a miscarriage.

The test is not yet available on the NHS, but it is being reviewed and a decision is expected later this year. It can be accessed privately at a cost of between £400 and £900.

 

Where did the story come from?

The study was carried out by researchers from the University of California, the Perinatal Diagnostic Center in San Jose, Sahlgrenska University Hospital in Sweden, and several other US institutions. It was funded by Ariosa Diagnostics and the Perinatal Quality Foundation.

The study was published in the peer-reviewed New England Journal of Medicine.

BBC News accurately reported on the study and provided expert opinion from both Great Ormond Street Hospital and the Down's Syndrome Association. Both organisations highlight the need for women to be given clear information about screening, so they can make an informed decision.

 

What kind of research was this?

This was a diagnostic study, which compared a new antenatal screening test with standard screening for three genetic conditions, including Down’s syndrome.

Normally, people have 23 pairs of chromosomes. However, in these three genetic conditions, there is an extra copy of one of the chromosomes. In Down’s syndrome, there is an extra chromosome 21 (trisomy 21); Edwards' syndrome has an extra chromosome 18 (trisomy 18); and Patau’s syndrome has an extra chromosome 13 (trisomy 13). In most cases, this happens by chance and isn’t inherited from the parents. This is why all mothers-to-be are offered screening to see whether this has happened.

Currently, all pregnant women in the UK are offered screening for these conditions, which involves a two-step process. The test offered depends on how far along the pregnancy is. Women between 11 and 14 weeks pregnant are offered a blood test plus an ultrasound scan, called a combined test. Women between 14 and 20 weeks of pregnancy are offered a different blood test. This is less accurate than the combined test.

If either of these tests indicates an increased risk of having a baby with Down’s, Edwards’ or Patau’s syndrome, the woman will be offered either chorionic villus sampling (CVS) or amniocentesis to confirm the diagnosis. Both of these tests involve taking samples from the mother’s abdomen, which can be uncomfortable, although not usually painful, and both carry a risk of miscarriage of around one in 100 women (1%).

The new test detects short fragments of the baby’s DNA floating about in the mother’s blood, called cell-free DNA (cfDNA). By measuring the level of each of the chromosomes, it is possible to see if there are more chromosomes 21, 18 or 13.

The researchers had previously performed proof of principle studies of cfDNA in women at high risk of having a baby with one of these conditions. They now wanted to see how accurate the test was in a large sample of women with any level of risk.

 

What did the research involve?

The researchers recruited 15,841 pregnant women eligible for screening for Down’s, Edwards’ or Patau’s syndromes. All were tested using the new cfDNA blood test and the standard combined test. The results of the two tests were compared to see which was more accurate at picking up any of the three trisomy conditions.

Women were enrolled in the study between March 2012 and April 2013 from 35 medical centres across the US, Canada and Europe. They were eligible to participate if they were aged 18 or older, and had a singleton pregnancy between weeks 10 and 14.3 at the time of screening.

A blood test for cfDNA was taken at the same time as the standard screening tests. The blood sample was then analysed at a laboratory without the analysts knowing any clinical details about the pregnancy, other than the gestational age and mother’s age (the sample was blinded). The results were not given to the mother or clinician.

The researchers then obtained the outcome of the pregnancy and compared the accuracy of the standard test results with the new cfDNA test. This included any termination of pregnancies and miscarriages if a genetic test had confirmed whether or not they had a trisomy condition.

They originally enrolled 18,955 women, but excluded 3,114, due to:

  • not meeting the inclusion criteria
  • withdrawal from the study (by either the woman or the investigator)
  • sample handling errors
  • no standard screening result
  • no cfDNA result
  • loss to follow-up

 

What were the basic results?

The new test outperformed the current one at detecting Down’s syndrome. Results were similar for Edwards' and Patau’s syndromes, but tended to be less accurate.

One of the most important measures for whether a new screening test is any good is the positive predictive value (PPV). This takes into account the number of correct test results, but also the number of false positives, based on the condition's prevalence.

In rare conditions, like these chromosomal conditions, the false positives are important, because they represent a potentially large group of women who could be sent to have further invasive diagnostic tests they might not need.

The PPV of the new test for Down’s syndrome was 80.9% – significantly higher than the 3.4% scored for the combined test. The PPV difference was lower for women deemed at lower risk of having a baby with Down’s syndrome (76.0% for the new test vs 50.0% for the current test).

The detailed results for Down’s syndrome (trisomy 21) were:

  • cfDNA screening identified all 38 babies with Down’s syndrome (sensitivity 100%, 95% confidence interval (CI) 90.7 to 100)
  • standard screening identified 30 out of 38 babies with Down’s syndrome (sensitivity 78.9%, 95% CI 62.7 to 90.4)
  • the cfDNA test was positive in nine pregnancies that did not have Down’s syndrome (false positive rate 0.06%, 95% CI 0.03 to 0.11)
  • standard screening was positive in 854 pregnancies that did not have Down’s syndrome (false positive rate 5.4%, 95% CI 5.1 to 5.8)
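The headline PPV figures can be reproduced directly from these counts. This short sketch (illustrative only; the function name is ours) computes PPV as true positives divided by all positive results:

```python
# Positive predictive value: of all positive screening results, what
# fraction are genuinely affected pregnancies?
def ppv(true_positives, false_positives):
    return true_positives / (true_positives + false_positives)

# Counts for Down's syndrome reported in the study: cfDNA found all 38
# affected babies with 9 false positives; standard screening found 30
# of 38, with 854 false positives.
print(f"cfDNA PPV:    {ppv(38, 9):.1%}")    # reproduces the reported 80.9%
print(f"standard PPV: {ppv(30, 854):.1%}")  # reproduces the reported 3.4%
```

This is why false positives matter so much for rare conditions: with standard screening, roughly 854 of the 884 positive results were unaffected pregnancies, each potentially facing an invasive follow-up test with its own miscarriage risk.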

Results for Edwards' syndrome (trisomy 18) were:

  • cfDNA identified nine out of 10 cases (sensitivity 90%, 95% CI 55.5 to 99.7)
  • standard testing identified eight out of 10 (sensitivity 80%, 95% CI 44.4 to 97.5)
  • cfDNA wrongly diagnosed Edwards' syndrome in one case (false positive rate 0.01%, 95% CI 0 to 0.04)
  • standard testing was positive in 49 pregnancies that did not have Edwards' syndrome (false positive rate 0.31%, 95% CI 0.23 to 0.41)

The results for Patau’s syndrome (trisomy 13) were:

  • cfDNA screening identified both babies (sensitivity 100%, 95% CI 15.8 to 100)
  • standard screening identified one out of the two babies (sensitivity 50.0%, 95% CI 1.2 to 98.7)
  • the cfDNA test was positive in two pregnancies that did not have Patau’s syndrome (false positive rate 0.02%, 95% CI 0 to 0.06)
  • standard screening was positive in 28 pregnancies that did not have Patau’s syndrome (false positive rate 0.25%, 95% CI 0.17 to 0.36)
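Note how much wider the confidence interval is for Patau’s syndrome (only two cases) than for Down’s (38 cases), even though both sensitivities are nominally 100%. A sketch shows where those lower bounds come from, assuming the standard exact (Clopper-Pearson) method for binomial confidence intervals (the paper does not state which method it used, so this is an inference from the matching numbers):

```python
# When a test detects all n of n cases (observed sensitivity 100%), the
# exact (Clopper-Pearson) 95% CI lower bound is (0.025)**(1/n): the
# smallest true sensitivity not ruled out at the 2.5% level.
def sensitivity_lower_bound(n_cases, alpha=0.05):
    return (alpha / 2) ** (1 / n_cases)

print(f"38/38 detected -> lower bound {sensitivity_lower_bound(38):.1%}")
print(f" 2/2  detected -> lower bound {sensitivity_lower_bound(2):.1%}")
```

These reproduce the reported lower bounds of 90.7% (Down’s) and 15.8% (Patau’s): with only two cases, the data are consistent with a true sensitivity anywhere from 15.8% to 100%.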

 

How did the researchers interpret the results?

The researchers concluded that "the performance of cfDNA testing was superior to that of traditional first trimester screening for the detection of trisomy 21". They say that further cost benefit studies are now needed. The researchers also caution that "as emphasised by professional societies, the use of cfDNA testing and other genetic tests requires an explanation of the limitations and benefits of prenatal test choices to the patient".

 

Conclusion

This large study has shown that the new cfDNA test is better than current standard screening at detecting three trisomy conditions during pregnancy. The confidence in accurately identifying affected pregnancies was strongest for Down’s syndrome. There were much wider confidence intervals for the other two conditions.

The cfDNA test was not 100% accurate, as there were false positive results for each condition, though much fewer than with standard screening.

Around 3% of the cfDNA tests did not produce a result. Careful consideration and further research may be needed to decide the best approach in these cases. Should these women all be sent for the next stage of diagnostic tests as a precaution, have the test repeated, or be offered the standard test instead?

The authors admit that, had they included these "no result" cases in their main analysis, the performance of the cfDNA test would have been lower. How much lower we don’t know, as they don’t appear to have presented an analysis of this scenario.

The potential benefit of the test is that it could reduce the number of women being sent for the CVS or amniocentesis testing, which carry their own risks. As the authors say: "Before cfDNA testing can be widely implemented for general prenatal aneuploidy screening, careful consideration of the screening method and costs is needed."

This test is not yet available on the NHS, though it is being considered under an evaluation project run by Great Ormond Street Hospital. In this evaluation study, which is being carried out in women at low risk, the invasive tests are offered to confirm the result if the cfDNA test shows a trisomy is highly likely, or if it is inconclusive. This is because of potential false positive results, which previous research found in around one in 300 women (0.3%), and false negative results, where the diagnosis is missed in around two in 100 affected babies.

At present, the test is only offered by private clinics and costs £400 to £900. It takes two weeks to get the result, as the sample is sent to the US. Details of private clinics can be easily found via any internet search engine.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Blood test for Down's syndrome 'gives better results'. BBC News, April 1 2015

Links To Science

Norton ME, Jacobsson B, Swamy GK, et al. Cell-free DNA Analysis for Noninvasive Examination of Trisomy. The New England Journal of Medicine. Published online April 1 2015

Categories: NHS Choices

Concerns raised about increased e-cigarette use in teenagers

NHS Choices - Behind the Headlines - Wed, 01/04/2015 - 13:30

"E-cigarettes: Many teenagers trying them, survey concludes," BBC News reports after a survey of around 16,000 English teenagers found one in five teens had tried an e-cigarette.

The concern is that rather than using e-cigarettes as a device to stop smoking, teenagers with no history of smoking could be using e-cigarettes because of their novelty value. This hypothesis seems to be borne out by the survey finding that 16% of teen e-cig users said they had never smoked conventional cigarettes.

While e-cigarettes are undoubtedly far safer than cigarettes, this does not mean they are 100% safe. Nicotine is a powerful substance, and it is unclear what long-term effects it may have, especially on a teenage brain and nervous system that are still developing.

The study also found a strong association between alcohol misuse, such as binge drinking, and access to e-cigarettes. Other experts fear e-cigs could act as a potential gateway to smoking among children.

From 2016, the Medicines and Healthcare Products Regulatory Agency (MHRA) is expected to license e-cigarettes as a medicine in the UK, so they should become an age-restricted product.

One limitation of the study, however, is that it relied on voluntary self-completed questionnaires, so it is prone to selection and reporting bias. This makes its findings less reliable.

One final message you may want to convey to your children is that a nicotine addiction brings no useful benefits, but it can be expensive (especially for a teenager) and its long-term effects are unclear. 

Where did the story come from?

The study was carried out by researchers from Liverpool John Moores University, Public Health Wales, Health Equalities Group, and Trading Standards North West.

It was published in the peer-reviewed journal BMC Public Health. BMC Public Health is an open-access journal, so the study is free to read online.

It was covered broadly accurately in the papers, although reports focused on the number of non-smokers who had reportedly used e-cigarettes.

This raised fears in the press that the devices may become a gateway drug to tobacco, rather than concerns about the number of young smokers who reported using them.

The study's limitations, such as the issue of selection bias (which could either lead to an over- or underestimation of the true figure) and the fact the sample may not be representative of England, were not discussed.  

What kind of research was this?

This was a cross-sectional survey of more than 16,000 school students in northwest England looking at reported use of e-cigarettes, conventional smoking, alcohol consumption and other factors.

The authors say that while e-cigarettes are marketed as a healthier alternative to tobacco, they contain the addictive drug nicotine.

The battery-powered devices, which can be bought online and in some pubs, chemists and newsagents, deliver a hit of addictive nicotine and emit water vapour to mimic the feeling and look of smoking.

The vapour is considered potentially less harmful than cigarette smoke and is free of some of its damaging substances, such as tar. 

What did the research involve?

The researchers used a cross-sectional survey of 16,193 school students aged 14 to 17 in northwest England. This is part of a biennial survey conducted in partnership with Trading Standards, whose remit includes enforcing regulations on the sale of age-restricted products in the UK.

The survey includes detailed questions on:

  • age
  • gender
  • alcohol use (drinking frequency, binge drinking frequency, drink types consumed, drinking location, drinking to get drunk)
  • smoking behaviours (smoking status, age of first smoking)
  • how alcohol and tobacco were accessed
  • parental smoking
  • involvement in violence when drunk

In 2013, the survey included a question about e-cigarettes for the first time, asking students if they had ever tried or bought them.

The questionnaire was given to students by teachers during normal school lessons between January and April 2013. Students completed the questionnaire themselves voluntarily and anonymously. The researchers excluded questionnaires where data was incomplete or spoiled.

The researchers also collected information on deprivation using both home and school postcodes and assigning participants to five different groups (or quintiles). They used standard statistical methods to analyse associations between e-cigarette access and other factors. 

What were the basic results?

The main findings are summarised below:

  • one in five children (19.2%) who responded said they had "accessed" e-cigarettes
  • over one-third (35.8%) of those who reported accessing e-cigarettes were regular smokers, 11.6% smoked when drinking, 13.6% were ex-smokers, and 23.3% had tried smoking but didn't like it
  • 15.8% of teenagers who accessed e-cigarettes had never smoked conventional cigarettes
  • e-cigarette access was also associated with being male, having parents or guardians that smoke, and students' alcohol use
  • compared with non-drinkers, teenagers who drank alcohol at least weekly and binge drank were more likely to have accessed e-cigarettes (adjusted odds ratio [AOR] 1.89)
  • the link between e-cigarettes and alcohol was particularly strong among those who had never smoked tobacco (AOR 4.59)
  • among drinkers, e-cigarette access was related to drinking to get drunk, alcohol-related violence, consumption of spirits, self-purchase of alcohol from shops or supermarkets, and accessing alcohol by recruiting adult proxy purchasers outside shops

How did the researchers interpret the results?

The researchers say their findings suggest e-cigarettes are being accessed by teenagers more for experimentation and as a recreational drug, rather than for help with smoking cessation.

There is an urgent need for controls on the promotion and sale of e-cigarettes to children, the researchers argue, although they also point out that those most likely to obtain e-cigarettes may already be familiar with "illicit methods" of accessing age-restricted substances. 

Conclusion

As the authors point out, this cross-sectional survey had a number of limitations:

  • it did not record how frequently e-cigarettes were reportedly accessed
  • it cannot tell us whether children who reported both conventional smoking and e-cigarette access had accessed e-cigarettes before or after using conventional cigarettes
  • it is possible that, as the questionnaire was voluntary, it suffered from selection bias, with only certain students completing it
  • students may have under- or over-reported their smoking and drinking behaviours

The survey should not be considered representative of all 14- to 17-year-olds in England or in the northwest. However, the finding that one in five children reported having access to e-cigarettes, and that many of them are non-smokers, is a clear cause for concern.

From 2016, the Medicines and Healthcare Products Regulatory Agency (MHRA) is expected to license e-cigarettes as a medicine in the UK. This should bring them in line with nicotine patches and gum, and allow the agency to apply rules around the purity of the nicotine in e-cigarettes, for example.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

E-cigarettes: Many teenagers trying them, survey concludes. BBC News, March 31 2015

Four in 10 teenage e-cigarette users would not have smoked, warn health experts. The Daily Telegraph, March 31 2015

Are teenagers trying e-cigarettes as a trendy novelty? ITV News, March 31 2015

E-cigarettes: why are young people vaping? Channel 4 News, March 31 2015

One in five teens have tried e-cigs: Fears youngsters will move on to real cigarettes after getting taste for nicotine. Mail Online, March 31 2015

Links To Science

Hughes K, Bellis MA, Hardcastle KA, et al. Associations between e-cigarette access and smoking and drinking behaviours in teenagers. BMC Public Health. Published online March 31 2015

Categories: NHS Choices

Paracetamol 'not effective' for lower back pain or arthritis

NHS Choices - Behind the Headlines - Wed, 01/04/2015 - 12:31

"Paracetamol doesn't help lower-back pain or arthritis, study shows," The Guardian reports on a new review.

The review found no evidence that paracetamol had a significant positive effect compared with placebo (dummy treatment) in relieving pain and disability in cases of acute lower back pain, and found it was only minimally effective in osteoarthritis.

Before you start clearing out your medicine cabinet, the results of this review are not as clear-cut as reported.

The findings for lower back pain are based on three randomised controlled trials (RCTs), which, when grouped together, found no difference between paracetamol and placebo for pain relief, disability or quality of life. However, each of these studies has limitations. Two were small, and the third only looked at acute lower back pain lasting up to six weeks, a stage at which paracetamol may not be a strong enough painkiller.

The researchers did find that paracetamol slightly improved pain and disability from osteoarthritis of the hip or knee compared to placebo.

The study does not prove that paracetamol is no better than placebo for other types of back pain, such as chronic back pain (pain that persists for more than six weeks).

The National Institute for Health and Care Excellence (NICE) recommends that people with persistent back pain and recurrent back pain should stay physically active to manage and improve the condition.

Paracetamol is recommended as a first choice of painkiller because it has few side effects. NICE recommends that if this is not effective, stronger or different types of painkillers should be offered.

This guidance is currently under review, and the review will take into account any new research, such as the results of this study.

 

Where did the story come from?

The study was carried out by researchers from the University of Sydney, St Vincent’s Hospital, the University of New South Wales and Concord Hospital in Sydney. It was funded by the National Health and Medical Research Council.

The study was published in the peer-reviewed British Medical Journal (BMJ) on an open-access basis, so it is free to read online (PDF, 673kb).

The UK media reported the story accurately but did not explain any of the limitations of the study.

 

What kind of research was this?

This was a systematic review of all RCTs assessing the effectiveness of paracetamol for back pain and osteoarthritis of the hip or knee compared to placebo. The researchers also performed a meta-analysis. This is a statistical technique that combines the results of the RCTs to give an overall measure of effectiveness.

Pooling the results of multiple studies can help to give a better estimate of effectiveness, which is sometimes not seen in the individual studies, for example if they are too small.

This type of research is good at summarising all the research on a question and calculating an overall treatment effect, but relies on the quality and availability of the RCTs.
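The pooling described above can be illustrated with a fixed-effect, inverse-variance meta-analysis, a standard technique for combining trial results. This is a minimal sketch with made-up numbers, not the BMJ review's actual method or data:

```python
import math

def pool_fixed_effect(effects, std_errors):
    """Fixed-effect (inverse-variance) meta-analysis: each trial's
    effect estimate is weighted by 1/variance, so larger, more
    precise trials contribute more to the pooled result."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    # 95% confidence interval around the pooled effect
    ci95 = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci95

# Illustrative numbers only: mean pain difference (paracetamol
# minus placebo) from three hypothetical trials on a 0-100 scale.
pooled, (lo, hi) = pool_fixed_effect([-1.2, 0.5, -0.3], [1.5, 2.0, 0.8])
```

In this made-up example the 95% confidence interval spans zero, so the pooled effect would be classed as "no significant difference" between paracetamol and placebo, which is how the review reported the back pain trials.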

Paracetamol is currently recommended in clinical guidelines as the first-line option for pain relief in back pain and osteoarthritis of the hip and knee. The researchers wanted to assess whether this recommendation is backed up by the evidence.

 

What did the research involve?

A systematic review and meta-analysis was performed to identify and pool all RCTs that have assessed paracetamol compared to placebo for back pain and osteoarthritis of the hip and knee.

The following medical databases were searched for RCTs published up until December 2014: Medline, Embase, AMED, CINAHL, Web of Science, LILACS, International Pharmaceutical Abstracts, and Cochrane Central Register of Controlled Trials. A search was also made for unpublished studies, and authors were contacted for further information where required.

Three reviewers selected all relevant RCTs that reported on any of the following outcomes:

  • pain intensity
  • disability status
  • quality of life

Trials were excluded if a specific serious cause of the back pain had been identified (such as a tumour or infection), if they looked at post-operative pain, or if they studied people with rheumatoid arthritis.

The quality of each RCT was assessed using a standardised approach called a "risk of bias" assessment. The strength of the body of evidence as a whole was summarised using the internationally recognised GRADE approach (Grading of Recommendations Assessment, Development and Evaluation).

A meta-analysis was then performed to pool the results of trials in people with the different conditions using appropriate statistical methods. This included an analysis of whether the RCTs were similar enough to be combined. The researchers also performed "secondary exploratory analysis", which looks at the effect various different factors may have had in biasing the results.

 

What were the basic results?

The systematic review included 13 moderate- to high-quality RCTs, 12 of which were pooled in the meta-analysis:

  • three trials investigated short-term use of paracetamol for lower back pain (including 1,825 people)
  • 10 trials assessed paracetamol compared to placebo for osteoarthritis of the knee or hip (including 3,541 people)
  • no trials were found for neck pain

No significant difference was found between paracetamol and placebo in the short-term control of lower back pain in terms of:

  • pain intensity
  • disability
  • quality of life

Paracetamol slightly improved pain and disability from osteoarthritis of the hip or knee compared to placebo.

People experienced a similarly small number of side effects when taking paracetamol or placebo. However, people taking paracetamol were four times more likely to have abnormal liver function tests than those taking placebo. The review did not describe how abnormal the tests were or how quickly the tests returned to normal after stopping paracetamol.

 

How did the researchers interpret the results?

The researchers concluded that "paracetamol is ineffective in the treatment of lower back pain and provides minimal short term benefit for people with osteoarthritis". They call for "reconsideration of recommendations to use paracetamol for patients with lower back pain and osteoarthritis of the hip or knee in clinical practice guidelines".

 

Conclusion

This systematic review and meta-analysis suggests paracetamol may not be effective for some people with lower back pain, and may be of only limited help to people with osteoarthritis of the hip and knee.

Strengths of the study include:

  • the systematic review only contained the "gold standard" type of trials – RCTs
  • existing published RCTs comparing paracetamol with a placebo were likely to have been identified, as a large number of databases were searched from the beginning of their records up to December 2014. There were also two independent reviewers, which reduces the risk of any slipping through the net
  • they also searched for unpublished studies, reducing the risk of publication bias in their results (trials are less likely to be published if their results do not show a clear benefit)
  • the quality of evidence was appropriately assessed

However, as noted above, this type of research is reliant on the availability of relevant RCTs.

So while the review itself was well-conducted, the actual body of new evidence found about lower back pain was small.

In this case, the results for back pain were limited to three studies in specific populations. Non-specific lower back pain (i.e. back pain without an obvious cause) is complex in nature and these small studies may not be representative of all people who experience lower back pain.

First study

The first study was small, involving 36 adults who had been taking strong (opioid) painkillers for chronic back pain for at least six months. While the participants were on these painkillers, the researchers found no difference in pain relief between an injection into a vein of paracetamol, placebo, or the non-steroidal anti-inflammatory drugs (NSAIDs) diclofenac and parecoxib.

Second study

The second study assessed the effect of paracetamol in acute back pain in 113 people after two and four days of use, compared to 20 people on placebo. The small study size limits the strength of the results. It may be that paracetamol was not a strong enough painkiller at this early stage of the back pain, but may have been effective during the recovery phase.

Third study

The main outcome for the third study was whether paracetamol sped up recovery from acute lower back pain compared to placebo. How effective paracetamol was at pain relief was a secondary outcome, so may not have been as reliably assessed.

Some people will find paracetamol helps relieve their pain with relatively few side effects compared to other types of painkiller. The NICE guideline recommends paracetamol as a first-line pain relief drug for lower back pain that has lasted for at least six weeks, along with other measures such as staying active. It recommends that if this does not provide adequate pain relief, an NSAID should be offered.

NICE is currently updating its guidance on lower back pain and will take the results of this review into account.

NICE’s guidance also recommends paracetamol as a first-line pain relief drug for osteoarthritis. However, it does note that an evidence review suggested paracetamol may not be as effective for these patients as originally thought. NICE is reviewing this guidance (a draft is expected in 2016) and may revise its recommendations at that point, but for now has kept the existing advice.

If a prescribed treatment doesn’t seem to be working, you shouldn’t suddenly stop taking it (unless advised to). You can contact your GP or the doctor in charge of your care to discuss alternative drug (as well as non-drug) options.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Paracetamol doesn't help lower-back pain or arthritis, study shows. The Guardian, March 31 2015

Paracetamol 'does not help back pain or arthritis'. The Daily Telegraph, March 31 2015

Paracetamol ‘no good for back pain'. BBC News, March 31 2015

Paracetamol for back pain? It's no better than a placebo: Experts say treatment does nothing to improve recovery time, sleep or quality of life. Daily Mail, April 1 2015

Paracetamol is ineffective against lower back pain says study in top medical journal. Daily Mirror, March 31 2015

Paracetamol 'doesn't work on back pain'. ITV News, March 31 2015

Links To Science

Machado GC, Maher CG, Ferreia PH, et al. Efficacy and safety of paracetamol for spinal pain and osteoarthritis: systematic review and meta-analysis of randomised placebo controlled trials (PDF, 672kb). BMJ. Published online March 31 2015

Categories: NHS Choices

Healthy diet could cut risk of Alzheimer's disease

NHS Choices - Behind the Headlines - Tue, 31/03/2015 - 13:44

"A new diet could more than halve a person's risk of developing Alzheimer's disease," the Mail Online reports.

In a new study, researchers looked at the effects of three diets on the risk of developing Alzheimer's disease. These were:

  • a standard Mediterranean-type diet
  • the Dietary Approach to Stop Hypertension diet (DASH) – designed to reduce blood pressure
  • Mediterranean-DASH Intervention for Neurodegenerative Delay (MIND) – this combines elements of the Mediterranean diet and the DASH diet

The study found older people whose usual diet was close to any one of these three healthy diets were less likely to develop Alzheimer's disease than those eating less healthily.

The researchers say they found the greatest effect from the MIND diet, which is rich in green leafy vegetables, wholegrains, nuts and berries, even if people didn't follow it closely. Participants who did stick rigorously to the MIND diet were 52% less likely to be diagnosed with Alzheimer's disease.

This large observational study can't show that the diets protected against Alzheimer's, only that there seems to be a link between eating a healthy diet and a lower risk of getting Alzheimer's disease. The three diets weren't compared directly, so we can't be sure which one is best.

The study provides further evidence that eating a healthy diet may reduce the chances of developing Alzheimer's disease.

Where did the story come from?

The study was carried out by researchers from Rush University Medical Center in Chicago and Harvard School of Public Health in Boston, and was funded by grants from the US National Institute on Aging.

It was published in the peer-reviewed medical journal Alzheimer's & Dementia.

The Mail Online reported the study accurately for the most part, although it did not say that this type of study cannot prove causation. Strangely, it repeatedly said that the MIND diet called for a daily salad, although salad was not mentioned specifically in the study.  

What kind of research was this?

This was a large prospective cohort study of older people who were taking part in a long-running study of memory and ageing. It aimed to see whether people whose food consumption was closest to one of three types of healthy diet were less likely to be diagnosed with Alzheimer's disease during the course of the study.

As this was an observational study, it cannot prove that the diet protected against Alzheimer's disease or other types of dementia. A randomised controlled trial would be needed for that.  

What did the research involve?

Researchers worked with volunteers living in retirement communities and public housing in Chicago. They were asked to complete a questionnaire to assess their diet. They all had annual neurological examinations for an average of four to five years, which checked for Alzheimer's disease.

Researchers adjusted the results to take account of other factors that can affect Alzheimer's risk. They then looked for links between Alzheimer's diagnosis and people's diets.

At the start of the study, the researchers decided to assess three types of diet:

  • The Dietary Approach to Stop Hypertension (DASH) has been used to reduce blood pressure and stroke risk. It includes total grains and wholegrains, fruit, vegetables, dairy products, meat and fish, nuts and legumes, but restricts fat, sweets and salt.
  • The Mediterranean diet (MEDdiet) is often recommended for heart health. It includes olive oil, wholegrains, vegetables, potatoes, fruit, fish, nuts and legumes, and moderate wine, but restricts full-fat dairy products and red meat.
  • The Mediterranean-DASH Intervention for Neurodegenerative Delay (MIND) diet is a new diet developed by the researchers with elements from the DASH and MEDdiet, and also includes foods thought to protect the brain. It includes olive oil, wholegrains, green leafy vegetables, other vegetables, berries, fish, poultry, beans and nuts, and a daily glass of wine, but restricts red meat and meat products, fast or fried food, cheese, butter, pastries and sweets.

Using questionnaires from 923 volunteers, the researchers assessed how well each of them scored on each diet. They divided people into three groups showing high, moderate or low scores for each diet.

They then looked at whether people in the high-scoring groups for each diet were less likely to be diagnosed with Alzheimer's disease during the average 4.5 years of follow-up, compared with people in the low-scoring groups.

People diagnosed with other types of dementia, such as dementia with Lewy bodies or vascular dementia, were not included as Alzheimer's cases.

The researchers did a good job of checking for other factors that could affect Alzheimer's risk. This included testing for a type of gene (APOE) that raises the risk of Alzheimer's, as well as asking about people's education level, whether they took part in cognitively stimulating activities such as playing games and reading, how much physical activity they got, their body mass index (BMI), whether they had symptoms of depression, and their medical history.  

What were the basic results?

During the study, there were 144 cases of Alzheimer's disease among the 923 people taking part.

People with the highest scores in all three diets were less likely to be diagnosed with Alzheimer's disease than people with the lowest scores.

The link was slightly stronger for the MIND and MEDdiet than the DASH diet. People who had the highest scores on the MIND diet were 52% less likely to be diagnosed with Alzheimer's disease (hazard ratio [HR] 0.48, 95% confidence interval [CI] 0.29 to 0.79).

People who had moderate scores for the MIND diet were also less likely to be diagnosed with Alzheimer's than those with the lowest scores, but the link was not as strong (HR 0.64, 95% CI 0.42 to 0.97). Moderate scores on the DASH and MEDdiet did not show a statistically significant reduction in risk.  
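The "X% less likely" figures above are simply one minus the hazard ratio, expressed as a percentage. As a quick sanity check on the arithmetic (the helper function here is ours, for illustration, not from the study):

```python
def percent_risk_reduction(hazard_ratio):
    """Convert a hazard ratio into the "X% less likely" figure
    quoted in the text: HR 0.48 -> 52% lower risk."""
    return round((1 - hazard_ratio) * 100)

# Hazard ratios reported in the study for the MIND diet:
mind_high = percent_risk_reduction(0.48)      # highest scores vs lowest
mind_moderate = percent_risk_reduction(0.64)  # moderate scores vs lowest
```

So HR 0.48 corresponds to the reported 52% lower risk for the highest MIND scores, and HR 0.64 to a 36% lower risk for moderate scores. Note that these are relative reductions: with only 144 Alzheimer's cases among 923 people, the absolute difference in risk is much smaller than the percentages suggest.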

How did the researchers interpret the results?

The researchers said their results showed that "even modest adherence" to the MIND diet "may have substantial benefits" for preventing Alzheimer's disease.

They say that while the DASH and MEDdiet also showed positive results, "only the highest concordance" with those diets was linked to the prevention of Alzheimer's disease.

They go on to speculate that the dairy and low-salt recommendations in DASH, while useful for reducing blood pressure, may not be particularly relevant to brain health.

They concluded that, "High-quality diets such as the Mediterranean and DASH diets can be modified ... to provide better protection against dementia." 

Conclusion

The study found people who ate a healthy diet – with plenty of green vegetables, wholegrains, legumes and less red meat – may be less likely to get Alzheimer's disease. However, we should be wary of saying that their diet actually protected them from Alzheimer's, as it is a complex disease with many potential causes.

The main limitation is that observational studies cannot prove causation, even when researchers take care, as they did here, to include factors that we know affect disease risk. It's also notable that the researchers excluded dementia, other than Alzheimer's disease, from their calculations.

It would be interesting to see the effect of these diets on other types of dementia, too, especially as the DASH diet protects against hypertension, which can be a cause of vascular dementia. This was not taken into consideration when the authors concluded that low dairy and salt may not be needed for brain health (though they still remain part of a healthy, balanced diet).

Another limitation is that the food frequency questionnaire may not have completely captured people's adherence to the three diets. For example, people were asked about how often they ate strawberries, not about other types of berries. This could underestimate the effect of berry consumption in the diet.

Experts already think a healthy lifestyle can help lower the risk of getting dementia. Recommendations include eating a healthy diet, keeping to a healthy weight, exercising regularly, not smoking, drinking in moderation, and keeping blood pressure healthy. The question is: what type of healthy diet is best?

This study suggests the MIND diet may be better at lowering the risk of Alzheimer's disease than two other healthy diets. However, the study did not compare the effect of the diets directly.

We also don't know which foods in the diets might make the difference. The best advice may be to follow a healthy balanced diet, without worrying too much about exactly which foods might protect your brain.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

The 10 foods that HALVE the risk of Alzheimer's and the 5 that harm the brain: Stock up on berries, salad and wine - but avoid cheese, pastries and sweets. Mail Online, March 30 2015

Links To Science

Morris MC, Tangney CC, Wang Y, et al. MIND diet associated with reduced incidence of Alzheimer's disease. Alzheimer's & Dementia. Published online February 11 2015

Categories: NHS Choices

Sperm quality pesticides claim 'should be treated with caution'

NHS Choices - Behind the Headlines - Tue, 31/03/2015 - 11:31

"Pesticides on fruit and vegetables may be damaging sperm counts and men should consider going organic if they want to have children," The Daily Telegraph reports.

A study found men who ate the highest amount of fruit and vegetables with high levels of pesticides had a 49% lower sperm count, as well as a 32% lower count of normally formed sperm, than men who consumed the least amount. Sperm can sometimes be an abnormal shape, making it harder for them to move and fertilise an egg.

The results of this study should be viewed with caution. Researchers did not assess individual diets for pesticide residues. They also did not know if the food the men ate was grown organically or conventionally (a failing The Telegraph overlooked). 

So it is possible the men's dietary exposure to pesticides was misclassified. The men in the study were all attending fertility clinics, so the results may not apply to the general population.

The study certainly should not be seen as an invitation to avoid eating fruit and vegetables. Aside from the general harm to health that cutting fruit and vegetables from your diet would cause, doing so could also negatively affect your sperm quality.

Many factors can affect men's sperm count and quality, including whether they smoke or drink alcohol, as well as how much exercise they take and their weight. Whether or not pesticide residue found in our diet is another factor that affects sperm quality is an important topic that needs further study. 

Where did the story come from?

The study was carried out by researchers from the Harvard T H Chan School of Public Health, Massachusetts General Hospital, Brigham and Women's Hospital, and Harvard Medical School in the US.

It was funded by the National Institute for Environmental Health Sciences, the National Institutes of Health, and the Ruth L Kirschstein National Research Service Award.

The study was published in the peer-reviewed journal Human Reproduction on an open-access basis, so it is free to read online.

The study was covered uncritically by most of the UK media. The Telegraph's assertion that "Men who eat fruit and vegetables with high pesticide residues could double their sperm count by switching to organic food" was highly misleading.

The study did not compare the effects of organic and non-organic food on sperm count. However, both The Telegraph and the Mail Online included comments from UK experts. 

What kind of research was this?

This was a cohort study exploring whether the consumption of fruits and vegetables with high levels of pesticide residues is linked to lower semen quality.

This type of study cannot prove cause and effect, as other factors could be causing any effects seen. However, in studies of this type, researchers try to take account of other factors that can affect a health outcome.

In this case, for example, male fertility is known to be affected by lifestyle factors such as smoking and weight, which were taken into account in the statistical analyses.

The researchers say that in nearly one-third of couples seeking help with conception, the problem is one of male infertility.

They say occupational exposure to pesticides has been linked to lower sperm counts, and argue that pesticide exposure may explain a general decline in semen quality. Whether pesticide exposure through diet could affect male fertility is unknown. 

What did the research involve?

Men attending a fertility clinic filled out food frequency questionnaires from which the researchers estimated their intake of pesticides from fruit and vegetables. The results were then analysed to look for an association between higher pesticide consumption and lower sperm counts.

Researchers used an ongoing study of couples attending a US fertility clinic. The men in the study had to be aged between 18 and 55 without any history of vasectomy, and be in a couple seeking fertility treatment with their own eggs and sperm.

Between 2007 and 2012, the male partners in sub-fertile couples (couples who require medical assistance to conceive) completed a food frequency questionnaire. They were asked how often on average they consumed specified amounts of fruit and vegetables over the previous year using standard portion sizes.

The fruit and vegetables were categorised as being high, moderate or low in pesticide residues based on data from the annual United States Department of Agriculture Pesticide Data Program.

Fruit or vegetables low in pesticide residues included peas, beans, grapefruit and onions. Those with high residues included peppers, spinach, strawberries, apples and pears. These data take account of how the food is typically prepared, such as whether it has to be peeled.

By these criteria, 14 of the fruits and vegetables in the questionnaire were categorised as high in pesticide residues and 21 as low to moderate in pesticide residues.

The researchers divided the men into four groups, ranging from those who ate the greatest amount of fruit and vegetables high in pesticide residues (1.5 servings or more per day), to those who ate the least amount (less than half a serving per day).

They also categorised whether men ate a "prudent" diet – consisting of high intakes of fish, chicken, fruit, vegetables and wholegrains – or a "Western pattern" – high intakes of red and processed meat, butter, high-fat dairy, refined grains, snacks, high-energy drinks, mayonnaise and sweets.

Semen samples were also collected from the men over an 18-month period following their dietary assessment. Both sperm count and the size and shape of the sperm and whether they moved normally were evaluated by computer-aided semen analysis (CASA).

A total of 338 semen samples collected from 155 men between 2007 and 2012 were used in the analysis. Fifty-seven men contributed one sample, 51 men provided two samples, and 47 provided three or more semen samples.

Using statistical methods, the researchers analysed the association between pesticide intake from fruit and vegetables with sperm count and quality.

They adjusted their findings for other factors known to affect male fertility, such as age, smoking status, weight, periods of sexual abstinence, exercise, dietary patterns, and history of varicose veins in the testicles (varicocele).

What were the basic results?

The researchers found that:

  • the men's total fruit and vegetable intake was unrelated to their semen quality
  • high pesticide residue fruit and vegetable intake was associated with poorer semen quality
  • on average, men in the highest quartile of high pesticide residue fruit and vegetable intake (1.5 or more servings a day) had a 49% (95% confidence interval [CI] 31 to 63) lower total sperm count and a 32% (95% CI 7 to 58) lower percentage of normally shaped sperm than men in the lowest quartile of intake (less than 0.5 servings a day)
How did the researchers interpret the results?

The researchers say their findings suggest that exposure to pesticides used in agriculture through diet may be sufficient to affect the quality and amount of sperm in humans.

Conclusion

Whether pesticide exposure in the diet is linked to male fertility problems is an important issue, but, as the authors point out, there are several reasons to view the results of this trial with caution:

  • the men were all attending a fertility clinic with their partner, so some of them will have had fertility issues unrelated to their diet or lifestyle
  • they used national surveillance data, rather than looking at individual diets, to assess how much pesticide residue the men had consumed
  • they did not have information on whether the men were eating organic or non-organic food
  • the men had to remember and report on their diet over the previous year, which could affect the reliability
  • their diets were only assessed once, which might have led to misclassification, and diets could change over time

Male fertility can be affected by several factors. Although the researchers tried to adjust their findings for these, it is always possible that both measured and unmeasured confounders affected the results. Further studies looking at this important topic are needed.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Pesticide residues on some fruit and vegetables harming men's fertility, study claims. The Independent, March 31 2015

Pesticide in fruit and veg could harm man's fertility: Men who eat high levels have half the sperm count of those who ate the least. Mail Online, March 31 2015

Could switching to organic fruit and veg double sperm count? The Daily Telegraph, March 31 2015

Links To Science

Chiu YH, Afeiche MC, Gaskins AJ, et al. Fruit and vegetable intake and their pesticide residues in relation to semen quality among men from a fertility clinic. Human Reproduction. Published online March 30 2015

Categories: NHS Choices

Meningitis B jab to be added to NHS child vaccine schedule

NHS Choices - Behind the Headlines - Mon, 30/03/2015 - 12:45

"All babies in the UK will soon have a potentially life-saving vaccine against meningitis B," The Guardian reports. The vaccine, Bexsero, will soon be offered to babies once they reach the age of two months, followed by two more booster shots.

 

What is meningitis B?

Meningitis B is an aggressive form of bacterial meningitis, caused by meningococcal group B bacteria, in which the protective membranes surrounding the brain and spinal cord become infected. It is very serious and should be treated as a medical emergency. If the infection is left untreated, it can cause severe brain damage and blood poisoning (septicaemia). In some cases, bacterial meningitis can be fatal.

 

How common is meningitis B?

The charity Meningitis UK estimates that there are 1,870 cases of meningitis B each year in the UK. Meningitis B is most common in children under five years old, particularly in babies under the age of one.

Initial signs and symptoms of meningitis B in babies include:

  • a high temperature with cold hands and feet
  • agitation and not wanting to be touched
  • continuous crying
  • unusual sleepiness and being difficult to wake
  • appearing confused and unresponsive
  • a blotchy red rash that does not fade when you roll a glass over it

For more information, read about the signs and symptoms of serious illness in babies.

 

Why is this meningitis B vaccine in the news?

The development of a safe and effective meningitis B vaccine is the culmination of more than 20 years of research and represents a significant breakthrough in disease prevention.

 

What do we know about the vaccine?

The vaccine, Bexsero, is thought to provide 73% protection against meningitis B, which should significantly reduce the number of cases. The vaccine can be administered to infants aged two months or older either by itself, or in combination with other childhood vaccines.

The vaccine has been tested in clinical trials involving more than 8,000 people.

In infants, it was found to have similar levels of safety and tolerability as other routine childhood vaccines. The most commonly reported side effects were:

  • redness and swelling at the site of the injection
  • irritability
  • fever

It is thought that the vaccine will become available on the NHS in the autumn.

 

Edited by NHS Choices. Follow Behind the Headlines on Twitter.

Links To The Headlines

Meningitis B vaccine added to UK child immunisation scheme. The Guardian, March 29 2015

Now babies WILL get £20 meningitis jab. After year-long row over cost, NHS gives it the nod. Mail Online, March 30 2015

Meningitis B vaccine deal agreed. BBC News, March 30 2015

Every baby to be vaccinated against meningitis B in world first protection programme. Daily Express, March 30 2015


Categories: NHS Choices

Parents fail to spot that their kids are obese

NHS Choices - Behind the Headlines - Mon, 30/03/2015 - 12:31

"Parents hardly ever spot obesity in their children, resulting in damaging consequences for health," BBC News reports after a new study found a third of UK parents underestimated the weight of their child.

The study asked parents for their views about whether their child was underweight, a healthy weight, overweight or obese, comparing this with objective measurements of the child's weight and height taken on the same day.

Researchers found most parents were only likely to think a child was overweight when they were at the top end of the very overweight category.

The study was large, with almost 3,000 participants, but may not be representative of all parents in the UK, as many of those asked did not participate.

The study also cannot tell us why parents are not recognising when their child is overweight, or the best and most effective way of improving this. But it does suggest that some help is likely to be needed to make sure parents know when their child is overweight.

If you are concerned your child may be overweight, it is better to act quickly. Research suggests obesity in the teenage years tends to persist into adulthood.

Read more advice about obesity in childhood

Where did the story come from?

This study was carried out by researchers from the London School of Hygiene and Tropical Medicine, the University of Bristol, University College London, and Imperial College London, and was funded by the National Institute for Health Research.

It was published in the peer-reviewed British Journal of General Practice. One of the researchers received funding from the National Institute for Health Research.

The UK media generally reported the findings of the study accurately. They also speculated about the causes of the discrepancy. The Telegraph and BBC News, for example, suggested that being overweight is now "the norm", making it hard for parents to tell when their children are not a healthy weight.

"Society as a whole has become so fat we have collectively lost our sense of a healthy weight," said the BBC. But while the authors of the study do discuss possible reasons, the study did not directly assess whether these explain the discrepancy. 

What kind of research was this?

This was a cross-sectional study that compared parents' perceptions of their child's weight with objective measurements taken by school nurses. The researchers looked at how far the parents' assessments agreed with the objective assessments.

National figures show one-third of children in England aged 10 and 11 were overweight or very overweight in 2012-13. Overweight children have a higher chance of getting serious health problems such as type 2 diabetes in later life.

Previous studies showed only about half of parents can identify when their child is overweight. The researchers wanted to know at what point parents thought a child was overweight and what factors might affect this. The study didn't assess why people might wrongly estimate their children's weight. 

What did the research involve?

Every year, children in reception class (aged 4 to 5) and year 6 (aged 10 to 11) at state schools in England have their height and weight measured. This information was used to classify the children's weight against national standards.

Researchers sent questionnaires to the parents of children from five primary care trusts in England who were being measured in 2010-11. They asked the parents to estimate whether their child was underweight, a healthy weight, overweight, or very overweight.

They then compared the results of the children's measurements with what the parents thought, and looked for factors that were linked to their likelihood of estimating the child's weight correctly.

The children's weight and height were converted into body mass index (BMI) and then compared with reference measurements taken from British children between 1978 and 1990.

These reference measurements are ordered by BMI and split into 100 groups, or centiles, each containing 1% of the measurements. This shows the distribution of BMI for children at different ages and is the standard way of categorising child weight.

Children are categorised as underweight if their BMI is at or below the 2nd centile, a healthy weight if they are between the 2nd centile and the 85th centile, overweight at or above the 85th centile, and very overweight (obese) if they are at or above the 95th centile.
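As an illustrative sketch only (not code from the study), the BMI calculation and the centile cut-offs described above can be written out in Python. Note that finding a child's actual centile requires the British 1978–1990 reference data, which is not reproduced here, so the classification function below simply takes a centile as input.

```python
# Hypothetical sketch of the categorisation described in the article;
# the centile thresholds are the national cut-offs quoted above.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def weight_status(bmi_centile: float) -> str:
    """Map a child's BMI centile (0-100) to the national weight-status category."""
    if bmi_centile <= 2:          # at or below the 2nd centile
        return "underweight"
    if bmi_centile < 85:          # between the 2nd and 85th centiles
        return "healthy weight"
    if bmi_centile < 95:          # at or above the 85th centile
        return "overweight"
    return "very overweight"      # at or above the 95th centile

print(round(bmi(40.0, 1.45), 1))  # 19.0
print(weight_status(98))          # very overweight
```

On this scheme, the child at the 98th centile in the example above falls into the very overweight category, even though most parents in the study judged such a child to be a healthy weight.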

Researchers took the objective category for each child and compared it with the parents' assessment. They then looked at what point parents would be likely to categorise a child as underweight or overweight.

They also looked at the children's age, sex, ethnic group, school year and the local area's levels of deprivation to see if they could identify factors associated with parents being more or less likely to underestimate or overestimate their child's weight status.

Because so few parents categorised their children as being very overweight (obese), the researchers combined the very overweight and overweight groups for some of their calculations. 

What were the basic results?

Using the four categories of underweight, healthy weight, overweight, or very overweight, 68% of parents correctly categorised their child. Few parents (less than 1%) overestimated their child's weight status, but 31% underestimated it, believing them to be a healthy weight or even underweight when they were actually overweight or very overweight.

Only four parents described their child as being very overweight, although the objective measurements placed 369 children in that category. Parents only became more likely to categorise a child as overweight rather than a healthy weight once the child was at the extreme end of the spectrum: at or above the 99.7th centile of BMI for their age.

As an example, a child at the 98th centile, which is classed as very overweight according to national standards, had an 80% chance of being seen as a healthy weight by their parents, and only a 20% chance of being seen as overweight or very overweight.

There were similar findings for the underweight category, with parents only becoming more likely to categorise a child this way if they were at the extreme end of the spectrum (under the 0.8th centile), compared with under the 2nd centile national threshold.

The researchers said parents were more likely to underestimate their child's weight status if the children were black, south Asian, male, or older (in year 6 rather than reception). Families from better-off areas were less likely to underestimate their child's weight status.  

How did the researchers interpret the results?

The researchers concluded there is "extreme divergence" between the parents' estimation of their child's weight status and their categorisation according to their BMI.

They say parents who are "unable to accurately classify their own child's weight" may be less likely to be "willing or motivated" to make changes at home that could help the child to reach and maintain a healthy weight.

The researchers suggest some reasons for the discrepancy between parents' estimates and the medical assessments, including fear of being judged and unwillingness to label a child as overweight, as well as "shifting perceptions of normal weight" because society as a whole has seen an increase in body weight.

They say there's a need for measures to bridge the gap between parents' perceptions of a child's weight status and the BMI categories used by medical professionals.

Conclusion

This study found parents in the UK are much less likely to think their child is overweight or very overweight than standard childhood BMI categories suggest. It also found parents of black or south Asian children, boys, and those from more deprived areas are more likely to underestimate their child's weight status.

But this research has some limitations. While it is based on a fairly big sample size (2,976 children who had completed parental questionnaires stating their estimated weight classification and objective weight measurements), only 15% of the parents contacted actually sent back the questionnaire, and not all of them answered the question about weight status.

This means we cannot be sure that these children are representative of all the children in the areas selected for the study (Redbridge, Islington, West Essex, Bath and North East Somerset, and Sandwell). Therefore, these findings may not be representative of all parents in those areas or elsewhere in the UK.

There is also some debate about the most appropriate ways to measure being overweight or obese. Research from 2014 suggests using the BMI method (where weight is compared to height) is less accurate with children than with adults. 

Although researchers looked for factors affecting the parents' estimates, including ethnicity and measures of deprivation of the local area, they did not look at other factors that might also be related to parental perception – for example, the parents' own weight status, anything about the family diet, or the amount of exercise the children got. This limits the conclusions that can be drawn from the study.

While the authors discussed some possible reasons for the discrepancy between parents' estimates and the objective assessments, the study did not assess this directly, so we can't be sure what those reasons are. The study can't tell us why, for example, parents of boys or south Asian children are less likely to recognise that their child is overweight.

And we don't know if the problem is restricted to parents, or whether other professionals, such as teachers and nurses, would also underestimate a child's weight status. It's even possible that parents might not recognise that their own child is overweight, but would be able to spot it in other people's children.

It is a concern that parents don't recognise their children's weight problems – we know these children are at a higher risk of getting health problems in later life.

The authors note that a 2011 Cochrane review suggested parental support could be one important part of bringing about lifestyle changes at home and reducing childhood obesity.

Helping parents gain a better understanding of what a healthy weight looks like in a child could help reduce this problem and help improve children's long-term health.

If you're concerned your child may be too heavy, ask your GP to check whether they weigh more than they should for their age. The good news is that teaching them about healthy eating and regular exercise can lead to weight loss, as well as instilling healthy habits that may persist into adulthood. 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Parents rarely spot child obesity. BBC News, March 30 2015

Obesity: parents unable to recognise if child is overweight. The Guardian, March 30 2015

Just one in 100 parents spot obesity in their children. The Daily Telegraph, March 30 2015

Parents fail to see that their own children are fat. The Times, March 30 2015

Parents 'do not recognise their own child's obesity'. ITV News, March 30 2015

Links To Science

Black JA, Park M, Gregson J, et al. Child obesity cut-offs as derived from parental perceptions: cross-sectional questionnaire. British Journal of General Practice. Published online March 30 2015

Categories: NHS Choices
