NHS Choices

Breastfed babies 'grow up to be brainier and richer'

NHS Choices - Behind the Headlines - Wed, 18/03/2015 - 12:00

"Breastfed babies grow up smarter and richer, study shows," The Daily Telegraph reports. A study from Brazil that tracked participants for 30 years found a significant association between breastfeeding and higher IQ and income in later life.

This study followed almost 3,500 infants from birth to adulthood in Brazil. It found babies who were breastfed longer had higher IQs at the age of 30, as well as higher incomes. The authors say this is the first study to directly assess the impact of breastfeeding on income.

Another novel feature of the study was that the majority of the mothers were from low-income backgrounds. Studies in developed countries, such as the UK, may be skewed by the fact there is a trend for breastfeeding mothers to come from medium- to higher-income backgrounds.

The study used a good design and had a relatively high follow-up of participants (almost 60%) given how long it was. Although factors other than breastfeeding may have been influencing the results, the researchers did try to reduce their impact by making adjustments. The results for income may also not be representative of more developed countries.

While it's difficult to conclusively state that breastfeeding itself definitely directly caused all of the differences seen, overall this research supports the view that breastfeeding can potentially benefit children long-term.

Current UK advice is that exclusive breastfeeding for around the first six months of life provides a range of health benefits for babies.

Where did the story come from?

The study was carried out by researchers from the Federal University of Pelotas and the Catholic University of Pelotas in Brazil.

It was funded by the Wellcome Trust, the International Development Research Center (Canada), CNPq, FAPERGS, and the Brazilian Ministry of Health.

The study was published in the peer-reviewed medical journal Lancet Global Health on an open-access basis, so it is free to read online or download as a PDF.

The majority of the UK media provided a very balanced report of this study, noting the results and their implications, as well as the study's limitations.

What kind of research was this?

This was a prospective cohort study looking at whether breastfeeding was associated with higher IQ and income in adulthood. The short-term benefits of breastfeeding on a baby's immunity are well known.

The researchers also report that a meta-analysis of observational studies and two randomised controlled trials (RCTs), which looked at the promotion of breastfeeding or compared breast milk versus formula in preterm babies, found longer-term benefits on IQ in childhood and adolescence.

Fewer studies – three, all from developed high-income countries – have looked at the effect on IQ in adults, and none has looked at income.

Although two out of these three studies found a link with higher IQ, there is concern that this may at least in part be related to the fact that mothers of higher socioeconomic status in these countries tend to breastfeed for longer.

In the UK, women from a middle or upper class background are more likely to breastfeed than women from a working class background, so the researchers wanted to look at the link in a lower-income country (Brazil) where this pattern does not exist.

This is likely to be the best study design for assessing this question, as a randomised controlled trial allocating babies to be breastfed or not would be considered unethical.

As with all observational studies, the main limitation is that factors other than the one of interest (breastfeeding in this case) could be having an impact on the results, such as socioeconomic status.

Researchers can reduce the impact of these factors (confounders) by using statistical methods to take them into account in their analyses.

In this study, they also chose to analyse a population where a major confounder was thought to have less impact. There may still be some residual effect of these or other unmeasured factors, however.
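To illustrate what this kind of statistical adjustment typically involves (a general sketch – the study's actual model specification is not described here, and the variable names below are illustrative only), confounders are commonly included alongside the exposure in a regression model:

\[
\text{IQ}_i = \beta_0 + \beta_1\,\text{BreastfeedingDuration}_i + \beta_2\,\text{FamilyIncome}_i + \beta_3\,\text{MaternalSmoking}_i + \dots + \varepsilon_i
\]

Here the coefficient \(\beta_1\) estimates the association between breastfeeding duration and adult IQ while holding the measured confounders constant. Adjustment of this kind can only account for factors that were actually measured, which is why some residual confounding can remain.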

What did the research involve?

The researchers recruited 5,914 babies born in 1982 in Pelotas, Brazil, and their mothers, and recorded whether the babies were breastfed or not. They then followed them up and assessed their IQ, educational achievements and income as 30-year-olds in 2013.

The researchers invited all mothers of babies born in five maternity hospitals in Pelotas in 1982 and who lived in the city to take part in their study, and almost all agreed.

When the children were 19 months or 3.5 years old, the researchers recorded how long they had been breastfed and whether they had been mainly breastfed (that is, given nothing other than breast milk, teas or water).

Researchers who did not know about the participants' breastfeeding history assessed their IQ using a standard test when they reached about 30 years of age. They also recorded the highest level of education participants had reached and their income in the previous month.

The researchers then compared outcomes in those who were breastfed longer against those who were breastfed for a shorter period of time or not at all.

They took into account a large range of potential confounders assessed around the time of the baby's birth (such as maternal smoking in pregnancy, family income, and baby's gestational age at birth) and during infancy (household assets). 

What were the basic results?

The researchers were able to follow up and analyse data for 59% (3,493 individuals) of the participants they recruited.

About a fifth of babies (21%) were breastfed for less than a month, about half (49%) were breastfed for between one and six months, and the rest (about 30%) for longer than this. Most babies were mainly breastfed for up to four months, with only 12% mainly breastfed for four months or longer.

Longer duration of any breastfeeding or mainly being breastfed was associated with higher levels of education, adult IQ and income.

For example, compared with those who were breastfed for less than one month, those who had received any breastfeeding for a year or longer had:

  • IQ scores 3.76 points higher on average (95% confidence interval [CI] 2.20 to 5.33)
  • 0.91 more years of education on average (95% CI 0.42 to 1.40)
  • a higher average monthly income (95% CI 93.8 to 588.3) – this was equivalent to around an extra 30% of the average income in Brazil
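As a general guide to reading these figures (a standard statistical point, not something specific to this study), a 95% confidence interval is roughly:

\[
\text{estimate} \pm 1.96 \times \text{standard error}
\]

For the IQ result, for example, the best estimate is a difference of 3.76 points, and the data are consistent with a true difference of anywhere between about 2.2 and 5.3 points. Because none of these intervals includes zero (no difference), each result is statistically significant at the conventional 5% level.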

The researchers carried out a statistical analysis that suggested the difference seen in income with longer breastfeeding was largely a result of differences in IQ.

How did the researchers interpret the results?

The researchers concluded that, "Breastfeeding is associated with improved performance in intelligence tests 30 years later, and might have an important effect in real life by increasing educational attainment and income in adulthood."

Conclusion

This large long-term study found an association between being breastfed for longer and subsequent educational attainment, IQ and income at the age of 30 in participants from Brazil.

The authors say this is the first study to directly assess the impact of breastfeeding on income. The study used a good design and had a relatively high follow-up of participants (almost 60%) given its duration.

However, there are some points to note:

  • As with all observational studies, factors other than breastfeeding may have been influencing the results. The researchers did try to reduce their impact by making statistical adjustments, but some residual impact may remain.
  • There was less awareness of the benefits of breastfeeding in Brazil when the study started, so less association with socioeconomic status and education was expected. However, researchers did find that women who had the least, as well as the most, education and those with a higher family income tended to breastfeed more, although the differences tended to be small (less than 10% difference in frequency of breastfeeding at six months).
  • The results for IQ support those seen in higher-income countries, but there have not been any direct assessments of the effect of breastfeeding on income in these countries so far, and these may differ from lower-income countries.

While it's difficult to conclusively state that breastfeeding itself definitely directly caused all of the differences seen in this study, this research supports the belief that breastfeeding potentially has long-term benefits.

Breastfeeding is known to bring health benefits, and current UK advice is that this can be achieved through exclusive breastfeeding for around the first six months of life.

However, as experts noted on the BBC News website, breastfeeding is only one of many factors that can contribute to a child's outcomes, and not all mothers are able to breastfeed.

For more advice on breastfeeding, visit the NHS Choices Pregnancy and baby guide.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Breast-fed babies grow up smarter and richer, study shows. The Daily Telegraph, March 18 2015

The longer babies breastfeed, the more they achieve in life – major study. The Guardian, March 18 2015

Breastfeeding 'linked to higher IQ'. BBC News, March 18 2015

Longer breastfeeding 'boosts IQ'. Daily Mail, March 18 2015

Breastfeeding 'linked to success and higher IQ'. The Independent, March 18 2015

Breastfed babies earn more as adults. The Times, March 18 2015

Links To Science

Victora CG, Horta BL, de Mola CL, et al. Association between breastfeeding and intelligence, educational attainment, and income at 30 years of age: a prospective birth cohort study from Brazil. The Lancet Global Health. Published online March 18 2015


Obese people 'underestimate how much sugar they eat'

NHS Choices - Behind the Headlines - Tue, 17/03/2015 - 11:30

"Obese people are 'in denial' about the amount of sugar they eat," the Mail Online reports. Researchers looking into the link between sugar consumption and obesity found a "huge gap" between overweight people's self-reported sugar consumption and the reality, according to the news story.

Researchers assessed the self-reported sugar consumption (based on food diaries) and sugar levels in urine samples in about 1,700 people in Norfolk. After three years, they had their body mass index (BMI) measured.

The researchers found those whose urine test suggested they actually consumed the most sugar were more likely to be overweight after three years compared with those who consumed the least. However, the opposite was true for self-reported sugar intake.

The specific role of sugar (rather than calorie intake as a whole) in obesity is unclear, and previous studies have had inconsistent results.

One limitation of this study is that the spot-check urinary sugar test may not be representative of sugar intake over the whole study period. Also, the results may be affected by factors not taken into account by the analyses.

Although the news story focuses on the suggestion that overweight people are "in denial" about what they eat, this study itself did not attempt to explain the discrepancy between diet diaries and urine sugar measurements.

Overall, the main conclusion of this study is that more objective measures, rather than subjective diet-based records, may help future studies to better disentangle the effects of sugar on outcomes such as being overweight. 

Where did the story come from?

The study was carried out by researchers from the universities of Reading and Cambridge in the UK and Arizona State University in the US.

It was funded by the World Cancer Research Fund, Cancer Research UK, and the Medical Research Council.

The study was published in the peer-reviewed medical journal Public Health Nutrition on an open-access basis, so it is free to read online or download.

The Mail focuses on the suggestion that overweight people are "in denial" about what they eat. But this study did not assess why the discrepancies between diet diaries and urine sugar measurements exist. The news story also does not mention some potential problems with the urine tests, which could undermine the results.

What kind of research was this?

This was a prospective cohort study, part of the European Prospective Investigation into Cancer and Nutrition (EPIC), a long-running investigation. It aimed to see whether people who ate more sugar were more likely to be overweight using two different ways of measuring sugar intake.

Observational studies assessing whether total sugar intake is linked to obesity have had conflicting findings. Such studies usually ask people to report what they eat using food frequency questionnaires or a food diary, and then use this information to calculate sugar intake.

However, there is concern that people under-report their food intake. Therefore, the researchers in this study used both food diaries and an objective measure (the level of sugar in urine) to assess sugar intake. They wanted to see if there was any difference in results with the two approaches.

The main limitation of observational studies such as this is that it is difficult to prove that a single factor, such as a particular type of food, directly causes an outcome such as being overweight. This is because other differences between people may be affecting the results.

However, it would not be ethical to expose people to potentially unhealthy diets in a long-term randomised controlled trial, so this type of observational study is the best practical way of assessing the link between diet and weight.

What did the research involve?

Researchers recruited adults aged 39 to 79 in Norfolk in the UK. They measured their body mass index (BMI), collected lifestyle information and tested their urine for sugar levels. Participants were also asked to record their diet over seven days.

Three years later, the participants were invited back and measured again for BMI and waist circumference. Researchers looked for links between people's sugar levels as shown in urine samples, the amount of sugar they reported eating based on their diet records, and whether they were overweight at this three-year assessment.

The entire EPIC study included more than 70,000 people, but researchers took a single urine sample from around 6,000 people as a "spot check" biomarker on sugar levels.

These single spot check samples measured recent sugar intake, and may be a less reliable measure of overall sugar intake than the more expensive and difficult test of collecting urine over a 24-hour period for analysis.

Almost 2,500 people did not come back for the second health check, and 1,367 people's urine tests were either not possible to analyse or the results were outside the standard range and so discarded.

This means only 1,734 of the original sample could be included in the final analysis. Because the people finally included were not randomly selected, it's possible that their results are not representative of all the people in the study.

The researchers ranked both the urine sugar results and sugar based on the dietary record results into five groups, from lowest to highest sugar intake. The specific sugar they were assessing was sucrose, found in normal table sugar.

For the analyses of people's self-reported sugar intake based on dietary record, the researchers took into account how many calories each person ate so this did not affect the analysis.

They then looked at how well the two types of sugar consumption measurement compared, and how likely people at the five different levels of sugar consumption were to be overweight or obese after three years, based on their BMI and waist circumference.

What were the basic results?

Results showed a striking difference between the urine sugar measurements and the sugar intake based on the diet diaries.

People who had the highest levels of sugar in their urine were more likely to be overweight after three years than those with the lowest levels.

The reverse was true when researchers looked at the people whose diet diaries suggested they ate the most sugar relative to their overall calorie intake compared with the least.

Using the urine sugar measurement, 71% of people with the highest concentration were overweight three years later, compared to 58% of people with the lowest concentration.

This meant that having the highest urinary levels of sugar was associated with a 54% increase in the odds of being overweight or obese after three years (odds ratio [OR] 1.54, 95% confidence interval [CI] 1.12 to 2.12).

Using people's seven-day diet diaries, 61% of people who said they ate the most sugar relative to their overall calorie intake were overweight, compared to 73% of people who said they ate the least sugar.

This meant those who reported the highest sugar intake relative to their overall calorie intake were 44% less likely to be overweight or obese after three years (OR 0.56, 95% CI 0.40 to 0.77).
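For readers unfamiliar with odds ratios, these figures can be roughly reconstructed from the percentages quoted above. This is only a back-of-the-envelope check – the published odds ratios come from analyses that also adjust for factors such as age and sex, so the numbers differ slightly:

\[
\text{OR}_{\text{urine}} = \frac{0.71/0.29}{0.58/0.42} \approx \frac{2.45}{1.38} \approx 1.77
\qquad
\text{OR}_{\text{diary}} = \frac{0.61/0.39}{0.73/0.27} \approx \frac{1.56}{2.70} \approx 0.58
\]

An odds ratio above 1 means the highest-intake group had higher odds of being overweight; one below 1 means lower odds – which is exactly the contradiction between the two measures that the researchers highlight.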

How did the researchers interpret the results?

The researchers conclude that, "Sucrose measured by objective biomarker, but not self-reported sucrose intake, is positively associated with BMI."

They say there are "several possible reasons" for the discrepancies between the methods used to assess sugar intake. They admit the spot check urinary sugar marker may have disadvantages, but conclude that under-reporting of foods with high sugar content, particularly among those who are overweight or obese, may be a contributing factor.

As a result, they say future researchers looking at sugar as part of diet should consider using an "objective biomarker" such as urinary sugar, rather than relying on people's own estimates of what they have consumed.

Conclusion

This study has found conflicting associations between an objective measure of sugar intake and a subjective measure of sugar intake based on food diaries, and the risk of a person becoming overweight.

While more sugar in urine samples was associated with a greater risk of becoming overweight, consuming more sugar (based on food diary records) was actually associated with a reduced risk.

If the urine biomarker is a more accurate reflection of sugar consumed than diet diaries, then this research may explain why some previous diet studies have failed to show a link between sugar and being overweight.

However, there are some limitations to consider with the urine biomarker. Because the test used was a one-off snapshot of sugar intake, it can only show us how much sugar was in the person's urine at the time they were tested. Similar to a short-term food diary, we don't know whether that is representative of their sugar consumption over time.

The urine test is also not able to measure very high or very low sugar levels. The analyses of urine sugar levels did not adjust for overall calorie intake, while those for self-reported sugar intake did. It would have been interesting to see whether the association between urinary sugar levels and being overweight remained once calorie intake was taken into account.

The current study did not assess why the dietary records and urinary measures of sugar differed. It also did not assess whether the discrepancies were larger among people who were overweight or obese at the start of the study – only how these measures were related to the outcomes at the end.

So it is not possible to say from this study alone that people who were overweight or obese had greater discrepancies between what they reported eating and their urinary sugar measurements.

However, the authors report that other studies have shown overweight people, especially women, are prone to under-reporting diet, particularly between-meal snacks.

As with all observational studies, it is difficult to rule out that factors other than those being assessed might be having an effect on the results. The researchers adjusted their analyses for age and gender, and say that results "did not change materially" after they adjusted the figures to take account of people's physical activity levels.

The results do not appear to have been adjusted to take account of other factors, such as people's level of education, income or other components of their diet, which may have an effect on weight.

The effect of sugar on health, independent of calorie intake, is still being debated by health organisations. If the findings of the current study are correct, using objective measures of sugar intake could help assess its effect on obesity and more widely on health. 

 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Obese people are 'in denial over how much sugar they eat': Huge gap exists between how much fat people think they consume and the reality, landmark study warns. Mail Online, March 16 2015

Links To Science

Kuhnle GC, Tasevska N, Lentjes MA, et al. Association between sucrose intake and risk of overweight and obesity in a prospective sub-cohort of the European Prospective Investigation into Cancer in Norfolk. Public Health Nutrition. Published online February 2015


Could epilepsy drug help treat Alzheimer's disease?

NHS Choices - Behind the Headlines - Tue, 17/03/2015 - 10:37

A drug commonly used to treat epilepsy could help "slow down" the progress of Alzheimer's disease, reports The Daily Express. According to the news story, the drug levetiracetam was shown to "help restore brain function and memory". 

The story is based on a study analysing the short-term effect of the drug in 54 people with mild cognitive impairment (MCI). This is where people have problems with their memory and are at an increased risk of developing dementia, including Alzheimer's disease.

Dementia is a common condition that affects about 800,000 people in the UK. Most types of dementia cannot be cured.

Researchers found people with the condition showed overactivity in one part of the brain during one memory test involving image recognition.

This overactivity and performance on the test was better when participants had been taking 125mg of levetiracetam twice a day for two weeks, compared with when they had taken inactive "dummy" capsules.

This study was small, short-term and showed improvement on a single memory test. It is not possible to say from this study whether continuing to take the drug would reduce a person's chances of developing dementia.

Larger and longer-term trials would be needed to assess this. For now, levetiracetam remains a prescription-only medication that is only licensed for the treatment of epilepsy.

Where did the story come from?

The study was carried out by researchers from the Johns Hopkins University, and was funded by the US National Institutes of Health. It was published in the peer-reviewed medical journal NeuroImage: Clinical.

The Daily Express' headline, "Epilepsy drug found to slow down slide into Alzheimer's", overstates the findings of this study. It did not assess whether the drug affected a person's risk of Alzheimer's disease.

The study actually focused on how the drug affected short-term performance on one memory test in people with a specific type of MCI.

The news story also refers to "younger victims", but it is not clear what this means – the participants in this study were, on average, aged in their 70s.

What kind of research was this?

The main part of this study was a crossover randomised controlled trial looking at the effect of the anti-epileptic drug levetiracetam on brain function in people with amnestic mild cognitive impairment (aMCI). This type of study design is suitable if testing a drug or intervention that does not have lasting effects. 

The researchers report that previous studies have suggested people with aMCI have more activity in one part of one area of the brain (the dentate gyrus/CA3 region of the hippocampus) during certain memory tasks relating to recognising patterns.

Levetiracetam had been shown to reduce activity in these areas in animal research, so the researchers wanted to test whether low doses could reduce this excess activity and improve performance in memory tests in people with aMCI.

MCI is a decline in cognitive abilities (such as memory and thinking) that is greater than normal, but not severe enough to be classed as dementia. aMCI mainly affects a person's memory. A person with MCI is at an increased risk of developing dementia, including Alzheimer's disease.

What did the research involve?

The researchers recruited 69 people with aMCI and 24 controls (people of similar ages who did not have the condition). They gave levetiracetam to the people with aMCI and then tested their cognitive ability and monitored their brain activity with a brain scan (MRI).

They then repeated these tests with identical-looking dummy pills (placebo) and compared the results. They also compared the results with those of the controls taking the dummy pills.

All participants completed standard cognitive tests, such as the mini-mental status exam and other verbal and memory tests, as well as brain scans, at the start of the study.

Those with aMCI had to meet specific criteria – such as impaired memory, but without problems carrying out their daily activities – but not meet criteria for dementia. The control participants were tested to make sure they did not have MCI or dementia.

People with aMCI were randomly allocated to have either the levetiracetam test first and then the placebo test four weeks later, or the other way around. This aims to make sure the order in which the tests were carried out did not affect the outcomes of the study.

In each test, participants took the capsules twice a day for two weeks before doing the cognitive test while having a brain scan. The researchers used three different doses of levetiracetam in their study (62.5mg, 125mg or 250mg, twice a day).

The cognitive test, called the "three-judgement memory task", involved being shown pictures of common objects, such as a frying pan, beach ball or piece of luggage, one after the other.

Some of the pictures in the sequence were identical, some were similar but not identical (for example, different coloured beach balls), and most were unique pictures with no similar pictures shown.

The participants were asked whether each picture was new, identical to the one they had seen before, or similar to the one they had seen before. During the test, their brains were scanned using MRI to see which parts of the brain were active.

The researchers were able to analyse data from 54 people with aMCI and 17 controls, as some people dropped out of the study or did not have useable data – for example, if they moved too much while the brain scans were being taken.

What were the basic results?

After taking a placebo, people with aMCI tended to incorrectly identify more items as identical to ones they had seen before than control participants on the three-judgement memory task.

They identified fewer items as being similar to ones shown before compared with the control participants. This suggested people with aMCI were not as good at discriminating between items that were just similar to ones they had seen before and those that were identical.

When people with aMCI had been taking 62.5mg or 125mg of levetiracetam twice a day, they performed better on the three-judgement memory task than when they took placebo.

They correctly identified more items as being similar, incorrectly identified fewer items as identical, and performed similarly to the controls. The highest dose of levetiracetam (250mg twice a day) did not improve test performance in people with aMCI.

Brain scans showed that when people with aMCI who had been taking placebo recognised identical items, they showed more activity in one area within a part of the brain called the hippocampus than controls recognising a match.

Taking 125mg of levetiracetam twice a day reduced this activity compared with placebo, but the lower and higher doses of levetiracetam did not.

The researchers say levetiracetam did not affect the performance of people with aMCI on standard neuropsychological tests. Results on these tests were not reported in detail.

How did the researchers interpret the results?

The researchers concluded that people with aMCI have overactivity of the dentate gyrus/CA3 region of the hippocampus during an image recognition memory task. Low doses of the epilepsy drug levetiracetam reduced this activity and improved performance on the tasks.

Conclusion

This small-scale study found that low doses of the epilepsy drug levetiracetam improved performance on an image recognition task for people with aMCI. This condition causes memory problems, and people who have it are at an increased risk of developing dementia.

While the news reporting has focused on the potential for levetiracetam to slow the onset of dementia, this is not something the research has assessed or focused on.

It instead focused on the short-term impact of the drug on a single test of memory, plus brain activity. There was reported to be no impact on other neuropsychological tests, which appeared to include other memory tests.

It's also important to note that the effect of taking the drug for two weeks was not lasting. It is not possible to say from this study whether continuing to take the drug would reduce a person's chances of developing dementia. Larger and longer-term trials would be needed to assess this. 

The researchers noted that they only looked at very specific brain areas, and this will not capture wider changes in brain networks.

Testing an existing drug that already has approval for treating another condition means that we already know it is safe enough for use in humans. This can mean that human trials can get started more quickly than if a completely new drug was being tested.

However, the benefits and risks still need to be weighed up for each new condition a drug is used for.

For now, levetiracetam remains a prescription-only medication that is only licensed for the treatment of epilepsy.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Epilepsy drug found to slow down slide into Alzheimer's, study finds. Daily Express, March 14 2015

Links To Science

Bakker A, et al. Response of the medial temporal lobe network in amnestic mild cognitive impairment to therapeutic intervention assessed by fMRI and memory task performance. NeuroImage: Clinical. Published February 21 2015


All teens should be vaccinated against rare strain of meningitis

NHS Choices - Behind the Headlines - Mon, 16/03/2015 - 12:00

"A vaccination for meningitis is to be offered to all 14-18 year-olds in England and Wales, after a spike in a rare strain of the disease," The Guardian reports. The strain – meningitis W (MenW) – is described as rare, but life-threatening.

There has been a year-on-year increase in the number of meningitis cases caused by MenW since 2009, and infection has been associated with particularly severe disease and high fatality rates in teenagers and young adults. The increasing trend looks set to continue unless action is taken, so the government’s Joint Committee on Vaccination and Immunisation (JCVI), the body that advises on vaccination for England and Wales, has advised that immunisation against MenW should be routinely offered to all 14 to 18 year-olds.

 

What is meningitis?

Meningitis means inflammation (-itis) of the membrane (meninges) that covers the brain and spinal cord. It can be caused by infection with bacteria or viruses, but bacterial infection causes the most severe illness. The most common type of bacterial meningitis is meningococcal, caused by the bacteria Neisseria meningitidis. There are six main types of this bacterium – A, B, C, W, X and Y – with group B responsible for the majority of cases to date (over 90%).

Meningitis can cause different symptoms in different people, including:

  • fever with cold hands and feet
  • vomiting
  • severe headache
  • stiff neck
  • dislike of bright lights
  • tiredness
  • drowsiness
  • confusion
  • agitation
  • in some cases, convulsions or seizures

In babies, the soft spot on their head (fontanelle) may bulge. If infection spreads to the bloodstream (septicaemia), this can cause a non-blanching rash. The rash appears because toxins released by the bacteria damage the blood vessels, causing them to bleed.

Meningitis is a life-threatening medical emergency and requires immediate medical attention if suspected.

 

How many cases of MenW have there been?

Since 2009, Public Health England (PHE) has reported a steady rise in the number of meningitis cases caused by a particularly virulent W strain. There were 22 cases in 2009, rising to 117 cases in 2014. In January 2015 alone, there were 34 confirmed cases in England, compared to 18 in January 2014, and nine in January 2013.

Andrew Pollard, Chair of JCVI, said: "We have seen an increase in MenW cases this winter, caused by a highly aggressive strain of the bug. We reviewed the outbreak in detail at JCVI and concluded that this increase was likely to continue in future years, unless action is taken. We have therefore advised the Department of Health to implement a vaccination programme for teenagers as soon as possible, which we believe will have a substantial impact on the disease and protect the public’s health."

 

When will the MenW vaccine be introduced?

The JCVI advises that immunisation against MenW should be offered to all 14 to 18 year-olds.

There is a quadrivalent MenACWY conjugate vaccine currently available that can give protection against MenW. However, this vaccine is currently not included in the UK’s immunisation schedule. It has, to date, only been recommended for groups at increased risk, including those with splenic dysfunction or who are travelling to certain parts of the world.

There doesn’t appear to be a set date for the introduction of the vaccine, but the Department of Health says it accepts JCVI’s advice on routine introduction of the vaccine and is now planning the implementation of a combined MenACWY immunisation programme.

John Watson, Deputy Chief Medical Officer for England, says on the PHE website: "We accept JCVI’s advice for an immunisation programme to combat this devastating disease. We are working with NHS England, PHE and the vaccine manufacturer to develop a plan to tackle the rising number of MenW cases."

Until the vaccine is introduced, remaining vigilant to the signs and symptoms of the disease will be the best form of protection.

As Dr Shamez Ladhani, Paediatric Infectious Disease Consultant at PHE, advises: "Meningococcal group W disease is a rare but life-threatening infection in children and adults. It’s crucial that we all remain alert to the signs and symptoms of the disease, and seek urgent medical attention if there is any concern. The disease develops rapidly ... be aware of all signs and symptoms – and don’t wait for a rash to develop before seeking urgent medical attention."

PHE is also reminding health professionals to be aware of the increase in MenW disease and to keep a high index of suspicion for this strain of the disease, across all age groups.

 

What other meningitis vaccines are currently available?

The current NHS immunisation programme offers protection against various other bacterial causes of meningitis.

The meningitis C vaccine is given as part of the childhood immunisation programme, with a routine booster jab at ages 13 to 15 years. Non-vaccinated children and adults, such as those going to university, can also receive a single catch-up dose. Before the vaccine was introduced in 1999, meningitis C was responsible for a number of severe cases and deaths, particularly among adolescents and young adults at college. Its introduction caused a big fall in the number of cases caused by this bacterium.

A new meningitis B vaccine was introduced last year, and the JCVI has also recommended that it be given as part of the childhood immunisation programme. However, there are still some cost-effectiveness issues to be resolved before it is routinely offered. There are also no current plans for it to be given as a booster jab in adolescence or young adulthood.

Protection against other non-meningococcal causes of meningitis is also given through the routine childhood immunisation programme. These are:

  • the MMR vaccine
  • the 5-in-1 vaccine – which provides protection against diphtheria, tetanus, whooping cough (pertussis), polio and Hib (Haemophilus influenzae type b)
  • the pneumococcal vaccine

Regardless of immunisation status, as PHE advises, being vigilant to possible signs and symptoms of meningitis is the best protection against this potentially life-threatening disease.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Meningitis vaccine to be offered to teenagers between 14 and 18. The Guardian, March 13 2015

Over three million teenagers to be vaccinated against 'virulent' new Meningitis strain. The Independent, March 13 2015

Jabs drive to combat new deadly strain of meningitis after a number of cases rise by 431%. Mail Online, March 14 2015

All UK teenagers 'should be vaccinated' against aggressive meningitis strain. The Daily Telegraph, March 13 2015

Meningitis vaccine plan after steep rise in new strain. BBC News, March 13 2015

Links To Science

Public Health England. Meningococcal group W (MenW) immunisation advised for 14 to 18 year-olds (Press Release). March 13 2015


Does light at night pose a health risk?

NHS Choices - Behind the Headlines - Mon, 16/03/2015 - 00:00

"Britons should fit blackout blinds and ban electronic gadgets from the bedroom to avert the risk of diseases such as cancer," the Mail Online warns.

This alarmist advice is prompted by a review looking at the theory that electrical light at night disrupts our normal body clock and could therefore pose a risk to our health.

In the review, researchers looked at various studies, including research linking night-shift work with breast or colon cancer, and light levels in the bedroom being linked to depression and obesity.

As the authors of this review acknowledge, the main problem with this type of evidence is that much of it is circumstantial, and may be influenced by bias and confounding from other factors.

Another drawback is this study does not appear to be systematic. The researchers provide no methods for how they identified the studies they discuss, and we do not know that all relevant studies have been included.

This effectively makes the review an opinion piece, albeit with lashings of supporting evidence. This means there is the risk that the authors have cherry-picked evidence that backs up their claims, while ignoring research that doesn't fit in with their theories.

The potentially large public health impact of even a small increase in disease risk linked with light at night seems worthy of further study. But this study doesn't prove that light at night harms our health.

Regardless, getting a good night's sleep is important. Read more about how to have a restful night.

 

Where did the story come from?

This opinion piece was written by two researchers from the University of Connecticut and Yale University in the US, and was jointly funded by the two universities.

It was published in the peer-reviewed Philosophical Transactions B on an open-access basis, so it is free to read online or download as a PDF.

The Mail appears to have taken the study at face value, recommending that Britons need to use blackout blinds on their windows, and clearly hasn't considered some of the drawbacks of this particular piece of research.

Because this wasn't a systematic review, we can't be certain that the studies used to inform the authors' conclusions are representative of the literature on the subject, and could also be of questionable quality. 

 

What kind of research was this?

This was an evidence-informed opinion piece, or narrative review, where the researchers discussed the theory that electrical light, especially at night, disrupts our normal body clock. They consider whether this poses a risk to our health.

This narrative discussion is referenced throughout, but no methods are provided. It does not appear to be a systematic review, where researchers search all the available evidence to identify studies related to the issue of the effects that electrical light may have on the body clock.

This means we do not know that all the relevant studies related to this issue have been identified. As such, this review must largely be considered to be an article outlining the researchers' opinions, as informed by the evidence they looked at.

 

What do the researchers discuss?

The researchers present sleep deprivation or disruption at night as a result of exposure to electrical light as being a burden of modern life.

While they say light at night has been linked to sleep disruption, "What has not been 'proven' is that electric light at night causally increases risk of cancer, or obesity, or diabetes, or depression."

They say these links are plausible given that disturbed sleep can have an effect on cellular processes and DNA repair. The problem, they say, is that much of the evidence linking disturbed sleep and light at night to these diseases is circumstantial. They then describe what this circumstantial evidence looks like.

 

What do they say about light at night and disease risk?

The researchers discuss the issue of light at night and disease risk, supported by various studies.

They first discuss studies that have linked night-shift work in women with an increased risk of breast cancer, thought to possibly be a result of the influence of melatonin on oestrogen levels.

Melatonin is a sleep hormone, while high oestrogen levels are linked to breast cancer development.

Similarly, a handful of studies have linked shift work or sleep disruption with bowel cancer in both sexes, and with prostate cancer in men, as discussed in our special report on shift working and health last year.

But the researchers fail to mention that these studies may be influenced by various confounders.

The International Agency for Research on Cancer (IARC) currently grades the strength of evidence that something causes cancer as follows:

  • 1 – human carcinogen
  • 2a – probable carcinogen
  • 2b – possible carcinogen
  • 3 – inadequate evidence
  • 4 – probably not a carcinogen

In 2007 the IARC classified shift work that involves circadian disruption as a class 2a probable carcinogen, putting it in a category alongside anabolic steroids, vinyl fluoride and mustard gas.

This categorisation was based on a "compelling animal model", but limited epidemiological studies, where signs were consistent with a causal relationship but probably influenced by bias and confounding.

The researchers then discuss other observational studies linking light level in the bedroom (either self-reported or measured) with depression and obesity risk.

They acknowledge a risk of bias and confounding in these studies, but say that, "If these reported associations are causal, then there would be obvious and easy interventions, such as to use black-out shades and elimination of all light sources in the bedroom, no matter how minute."

The researchers go on to present other small experimental studies where participants were exposed to different amounts of light at night. The effects on body chemicals were then measured, including the sleep chemical melatonin.

Some of the broad conclusions were:

  • blue light has the greatest effect upon sleep disruption; red the least
  • there is a dose-response relationship
  • light exposure during the day influences night-time sensitivity
  • individuals have different levels of sensitivity to light
  • even through closed eyelids, a very bright light can suppress melatonin levels

The researchers go on to discuss the possible effect light has on genes involved in the control of the body clock, and how these could potentially be linked to cancer.

 

How did the researchers interpret the results?

In response to their overall question of whether electrical light exposure at night is a risk factor for our health, the researchers say this "cannot yet be answered with assurance, but is important to ask".

They say that, "It must be stressed that there is ample evidence for the disruptive effect of electric light on physiology in short-term experiments in humans.

"There is some epidemiologic evidence on the long-term impact on disease, but this evidence is not yet adequate to render a verdict."

However, they stress this is "an urgent issue given the increasing pervasiveness of electric lighting in our built environment."

 

Conclusion

This opinion piece discusses the evidence on whether exposure to electrical light at night is a health risk.

Much of the article considers various experimental studies where small numbers of participants were exposed to different light levels at night, as well as observational studies reportedly linking night-shift work with cancer, including breast and colon cancer.

The researchers also identified some studies linking self-reported or measured light in the bedroom with depression and obesity.

But this study has two prominent limitations. It does not appear to have been a systematic review. No methods are provided, and we do not know whether the researchers have searched the entire global literature on the subject to identify all relevant studies.

We also do not know whether studies linking light at night with disease could have been preferentially discussed as examples, while other studies that did not find any links were either not identified or not discussed in this review.

As such, this review must largely be considered to be the opinion of the researchers as informed by the evidence they looked at.

The second limitation is the strength and quality of the evidence linking light exposure at night to disease.

Most of the experimental studies discussed, where people were exposed to different light levels at night, were very small (one included 12 people, another eight).

These results are specific to the small sample included. This means they may be heavily biased and confounded by the characteristics of the participants, and therefore not apply to wider populations.

The small sample size may also fail to identify any real differences because of a lack of statistical power.

And just measuring body chemicals after a couple of nights of artificially manipulated light levels may not give us reliable evidence of health effects that would be seen with longer-term patterns.

Much of the evidence looked at is also circumstantial and based on observational studies. Though the design and quality of these underlying studies was not examined as part of this appraisal, it is likely the studies may be subject to various sources of bias or confounding, making it difficult to establish direct cause and effect.

The IARC reportedly classified shift work that involves circadian disruption as a probable carcinogen, but the organisation acknowledged that this was based on limited epidemiological studies that may have been influenced by bias and confounding factors.

Overall, the possible links between electrical light exposure at night and disease is definitely worthy of further study. But, for now, people should not be overly alarmed by these findings and feel the need to rush out to buy blackout blinds for their bedroom windows.

That said, creating a calming environment in your bedroom, free from visual and audio distractions, can enhance the quality of your sleep.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Cancer warning over lights in the bedroom: Fit blackout blinds and ban gadgets to avert disease, say experts. Mail Online, March 16 2015

Links To Science

Stevens RG, Zhu Y. Electric light, particularly at night, disrupts human circadian rhythmicity: is that a problem? Philosophical Transactions B. Published online March 16 2015


Do people with depression perceive time differently?

NHS Choices - Behind the Headlines - Fri, 13/03/2015 - 11:00

"How depression affects our sense of time: Hours drag on and even stand still," is the somewhat over-hyped headline from the Mail Online.

As the old saying goes – Time flies when you’re having fun. So does the reverse also ring true? Does feeling depressed slow down your perception of time? Two German researchers tried to find out.

They pooled the results of previous studies, which led to 433 depressed people being compared with 485 non-depressed people. The results tentatively suggest that some people with depression may perceive time as going more slowly than those without.

No difference was found in their ability to estimate actual time durations in tests (for example, trying to judge when a minute had passed).

The study has a number of limitations, meaning we should be cautious in assuming that the findings are reliable. Their statistical methods, for example, made it more likely to find a statistically significant result by chance, and the authors noted that using other methods would have wiped out any differences between the groups.

The clinical implications of this potential time perception difference are also unclear. Can knowing that people with depression perceive time as progressing slowly help their care or support?

The study provides little in the way of answers, but may stimulate useful debate.

 

Where did the story come from?

The study was carried out by researchers from Johannes Gutenberg-Universität Mainz, Germany, who report receiving no external funding for the work.

The study was published in the peer-reviewed Journal of Affective Disorders.

The Mail Online reported the story at face value and did not discuss any of its limitations. Its choice of headline, "Hours drag on and even stand still", is an exaggeration of the findings.

It included interviews with the study authors, who said their results confirmed anecdotal reports from hospital and private practice staff that: "depressed patients feel that their time only creeps forward slowly or is passing in slow motion". Anecdotal reports, while interesting, are not evidence.

 

What kind of research was this?

This was a meta-analysis pooling the results of studies looking at time perception of people with depression.

The study authors say that "depressive patients frequently report to perceive time as going by very slowly", but previous studies on the topic have given inconsistent results. They wanted to pool the past results to see if there was any overall effect. This pooling of many independent studies is called a meta-analysis.

A meta-analysis is an appropriate and potentially powerful way of studying the issue. However, the meta-analysis is only as good as the studies feeding it.

 

What did the research involve?

The team pooled the results from 16 individual studies in which 433 depressed people (cases) and 485 non-depressed people (controls) participated. The main analysis looked for differences in measures of time perception between the two groups.

To identify as much relevant material as possible, the researchers searched for published evidence online (using a Web of Science search) and called for unpublished information to be submitted by more than 100 experts in the field.

Studies were only included if they were in adults, had a control group of non-depressed people, had diagnosed depression using standardised criteria, and had sufficient statistical information to enable the pooling of estimates.

In the included studies, the participants were asked to estimate the duration of periods of time.

For example, they were asked to estimate the length of a film in minutes, press a button for five seconds, or discriminate the duration of two sounds. Studies measured time durations ranging from the ultra-short (less than a second) to the long (greater than 10 minutes).

They were also asked about their perception of whether time flowed quickly or slowly. This typically used visual scales, requiring the participant to mark a point on a line ranging from very fast to very slow.

 

What were the basic results?

The main results showed that people with depression were no different from people without depression at judging time duration.

However, the subjective perception of how time flowed did differ between the groups. People with depression perceived time as going more slowly than those without.

In effect, this meant both groups could estimate time with the same accuracy, but the people with depression perceived time as passing more slowly.

 

How did the researchers interpret the results?

The team concluded: "Depression has medium effects on the subjective flow of time, whereas duration judgments basically remain unaffected."

 

Conclusion

Having compared 433 depressed people with 485 non-depressed people, the study suggests that some people with depression perceive time as going more slowly than those without. No difference was found in their ability to actually estimate duration of time on testing, but people with depression rated time generally as flowing more slowly.

People with low mood often have associated feelings of little enjoyment in daily life and normal activities, and feelings of being hopeless or helpless. As such, the idea that they may perceive time as passing more slowly seems plausible, and points to a possible phenomenon to be investigated further. However, the findings do not prove this outright.

The study authors themselves advised caution when interpreting the findings. For example, the results were not able to take account of any influence of medications or treatments for depression, such as psychotherapy, which could potentially influence time perception.

More importantly, their use of statistical methods made it more likely to find a statistically significant result by chance. They note that by using other methods, none of their findings would have reached statistical significance.
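To illustrate why this matters, the arithmetic below shows a general feature of multiple testing; it is not a reconstruction of the specific analyses in this paper. If k independent comparisons are each tested at the 5% significance level, the chance of at least one false-positive "significant" finding is:

\[
P(\text{at least one false positive}) = 1 - (1 - 0.05)^{k}, \qquad \text{for example } 1 - 0.95^{10} \approx 0.40
\]

So the more comparisons that are run without correction, the more likely it is that an apparently significant result has arisen by chance.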

The clinical implications of this time perception are also unclear. Can knowing that people with depression perceive time as progressing slowly help their care or support?

As a result, it would be beneficial to see more robust research in this area, and a clearer rationale for its importance, before concluding that this is a widespread phenomenon.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

How depression affects our sense of time: Hours drag on and even stand still for those battling the condition, study reveals. Mail Online, March 12 2015

Links To Science

Thönes S, Oberfeld D. Time perception in depression: A meta-analysis. Journal of Affective Disorders. Published online January 14 2015


Loneliness 'increases risk of premature death'

NHS Choices - Behind the Headlines - Fri, 13/03/2015 - 10:31

"Loneliness as big a killer as obesity and as dangerous as heavy smoking," the Daily Express reports. Researchers pooled the results of previous studies, estimating that loneliness can increase the risk of premature death by around 30%.

The headline follows a new analysis of more than 3.4 million participants, which showed evidence that people who feel, or are, socially isolated or live alone are at about a 30% higher risk of early death.

The study has many strengths: its large sample size, adjustment for initial health status, and use of prospective studies being the main three. This provided some evidence that the isolation was causing ill health, rather than the other way round, but we can't be certain.

Reverse causation could still be a factor in some cases – in other words, people with a chronic disease may be less likely to socialise with others, so poor health could lead to isolation rather than the other way round. This makes it difficult to nail down cause and effect.

The results of this study remind us that health has a strong social element and is not merely physical. Connecting with others can improve both mental and physical wellbeing.

 

Where did the story come from?

The study was carried out by researchers from Brigham Young University in the US and was funded by grants from the same university.

It was published in Perspectives on Psychological Science, a peer-reviewed journal of the Association for Psychological Science.

The UK media generally covered the study accurately. Many news sources based their reporting on an assertion made by the lead author, Julianne Holt-Lunstad, who said the harmful effects of loneliness are akin to the harm caused by smoking, obesity or alcohol misuse.

Professor Holt-Lunstad was quoted in the Daily Mail as saying that, "The effect is comparable to obesity, something that public health takes very seriously ... we need to start taking our social relationships more seriously."

This assertion appears to be based on a previous study carried out by Professor Holt-Lunstad published in 2010. We were not able to appraise this study, so we cannot comment on the accuracy of this comparison. The 2010 research was published in the online journal PLoS Medicine.

 

What kind of research was this?

This was a systematic review and meta-analysis investigating whether loneliness, social isolation, or living alone affects your chances of dying early.

The researchers say there are many lifestyle and environmental factors that increase our risk of dying early, such as smoking, being inactive and air pollution.

However, they say much less attention is paid to social factors, despite evidence they may carry an equal or greater influence on early death.

This study wanted to be the first to quantify the influence of loneliness and social isolation on early death.

 

What did the research involve?

The researchers searched online databases for studies reporting numerical data on deaths affected by loneliness, social isolation, or living alone. They then pooled all the studies to calculate the overall effect.

The literature search included relevant studies published between January 1980 and February 2014. These were identified using the online databases MEDLINE, CINAHL, PsycINFO, Social Work Abstracts, and Google Scholar.

Loneliness and social isolation were defined objectively and subjectively:

  • social isolation (objective) – a pervasive lack of social contact or communication, of participation in social activities, or of a confidant (example measure: Social Isolation Scale or Social Network Index)
  • living alone (objective) – living alone versus living with others (example measure: answer to a yes/no question on living alone)
  • loneliness (subjective) – feelings of isolation, disconnectedness and not belonging (example measure: University of California, Los Angeles Loneliness Scale)

Some studies made no adjustment for potential confounders. Others controlled for just a few variables (partial adjustment), usually age and gender.

A final group adjusted for several factors (fully adjusted), such as measures relevant to depression, socioeconomic status, health status, physical activity, smoking, gender and age.

Sensibly, the researchers presented separate results for the different categories of adjustment to see to what extent the results were potentially influenced by the confounders.

The bigger studies counted more towards the meta-analysis than the smaller ones – a "weighted" effect size.
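
To illustrate what this weighting means in practice, below is a minimal sketch in Python of inverse-variance weighted pooling, a standard way of giving larger (more precise) studies more influence in a meta-analysis. The study names, effect sizes and standard errors are invented for illustration; this is not the review authors' actual analysis.

  import math

  # Hypothetical per-study results: log hazard ratio and its standard error.
  # Larger studies have smaller standard errors, so they receive more weight.
  studies = [
      {"name": "Study A (large)", "log_hr": math.log(1.25), "se": 0.05},
      {"name": "Study B (medium)", "log_hr": math.log(1.40), "se": 0.12},
      {"name": "Study C (small)", "log_hr": math.log(1.10), "se": 0.25},
  ]

  # Fixed-effect inverse-variance weights: w = 1 / se^2
  weights = [1.0 / s["se"] ** 2 for s in studies]
  pooled_log_hr = sum(w * s["log_hr"] for w, s in zip(weights, studies)) / sum(weights)
  pooled_se = math.sqrt(1.0 / sum(weights))

  # 95% confidence interval around the pooled hazard ratio
  lower = math.exp(pooled_log_hr - 1.96 * pooled_se)
  upper = math.exp(pooled_log_hr + 1.96 * pooled_se)
  print(f"Pooled HR = {math.exp(pooled_log_hr):.2f} (95% CI {lower:.2f} to {upper:.2f})")

The review itself may also have used a random-effects approach; the fixed-effect version above is simply the most basic illustration of how weighting works.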

 

What were the basic results?

In total, the study analysed 70 independent prospective studies containing more than 3.4 million participants followed for an average of seven years. Overall, the researchers found social isolation was associated with a higher likelihood of death, whether measured objectively or subjectively.

Pooling the best studies – those with full adjustment for confounding – showed the increased likelihood of death was 26% for reported loneliness, 29% for social isolation, and 32% for living alone. All were statistically significant increases compared with those reporting less loneliness or social isolation.

The researchers found no differences between measures of objective and subjective social isolation, and the results remained consistent across gender, length of follow-up and world region.

However, initial health status influenced the findings, as did participant age. For example, social deficits were more predictive of death in people under the age of 65 than over 65.

 

How did the researchers interpret the results?

The researchers said that, "Substantial evidence now indicates that individuals lacking social connections (both objective and subjective social isolation) are at risk for premature mortality.

"The risk associated with social isolation and loneliness is comparable with well-established risk factors for mortality, including those identified by the US Department of Health and Human Services (physical activity, obesity, substance abuse, responsible sexual behaviour, mental health, injury and violence, environmental quality, immunisation, and access to health care)."

They say there is mounting evidence that social isolation and loneliness are increasing in society, so it would be prudent to add social isolation and loneliness to lists of public health concerns.

 

Conclusion

This meta-analysis of more than 3.4 million participants indicates social isolation, living alone and loneliness are linked with about a 30% higher risk of early death.

The study has many strengths, including its huge sample size, adjustment for initial health status, and use of prospective studies.

This provided some evidence that the isolation was causing ill health, rather than the other way round, but we can't be certain. Poor health can lead to loneliness and social isolation and vice versa, so cause and effect are tricky to nail down.

The researchers believe the study of the effects of loneliness and social isolation is currently at the stage that research into the risks of obesity was decades ago. They have identified a problem and predict it will increase in years to come.

The findings also challenge assumptions. The study team said that, "The data should make researchers call into question the assumption that social isolation among older adults places them at greater risk compared with social isolation among younger adults [who may be at risk of alcohol and drug misuse, as well as suicide].

"Using the aggregate data, we found the opposite to be the case. Middle-age adults were at greater risk of mortality when lonely or living alone than when older adults experienced those same circumstances."

The results of this study remind us all that psychosocial and emotional feelings can be just as relevant to our overall health and wellbeing as physical factors. Read more about how connecting with others can improve wellbeing and find out how to overcome feelings of loneliness.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Loneliness as big a killer as obesity and as dangerous as heavy smoking. Daily Express, March 12 2015

How loneliness leads to an early grave: Feeling alone 'shortens lifespan as much as obesity'. Mail Online, March 12 2015

Links To Science

Holt-Lunstad J, Smith TB, Baker M, et al. Loneliness and Social Isolation as Risk Factors for Mortality: A Meta-Analytic Review. Perspectives on Psychological Science. Published online March 11 2015

Categories: NHS Choices

Ebola risk remains low as medic flown home

NHS Choices - Behind the Headlines - Thu, 12/03/2015 - 16:40

A UK military healthcare worker who was infected with Ebola in Sierra Leone has been flown home and is being treated at the Royal Free Hospital in London.

Four other healthcare workers who had been in contact with the infected person are also being assessed. Two were flown home on the same flight as the infected worker and are now being monitored at the Royal Free. The others are being assessed in Sierra Leone. None of the four has been diagnosed with Ebola.

The latest case follows that of Glasgow nurse Pauline Cafferkey, who was found to have Ebola after arriving in Glasgow from Sierra Leone in December 2014. She recovered after specialist care at the Royal Free Hospital and was discharged.

Ms Cafferkey remains the only case confirmed in the UK, and the risk to the general public is very low. Ebola can be transmitted only by direct contact with the blood or bodily fluids of an infected person.

The UK has well-established and practised infection control procedures for dealing with cases of imported infectious disease, and these will be strictly followed to minimise the risk of transmission.

Professor Dame Sally Davies, Chief Medical Officer, said: "The UK has robust, well-developed and well-tested systems for managing Ebola virus disease. All appropriate infection control procedures have, and will continue to be, strictly followed to minimise any risk of transmission. UK hospitals have a proven record of dealing with imported infectious diseases."

More than 24,200 cases of Ebola have been confirmed in West Africa, with over 9,900 deaths – a mortality rate of around 40%.

Outbreaks of Ebola are nothing new, but health professionals are concerned about the size of the current outbreak.

What is Ebola?

Ebola is a virus that can be spread through blood and bodily fluids. It is thought to circulate in African rainforest animals and to have spread to humans through the handling or butchering of infected animals.

Once the virus enters the body it can replicate very quickly, causing a range of increasingly harmful symptoms, including internal bleeding. Left untreated, it can have a mortality rate as high as 90%.

 

What are the symptoms of Ebola virus?

An infected person will typically develop a fever, headache, joint and muscle pain, sore throat, and intense muscle weakness. These symptoms start suddenly 2 to 21 days after becoming infected.

Diarrhoea, vomiting, a rash, stomach pain, and impaired kidney and liver function follow. The infected person may then bleed internally, as well as from the ears, eyes and mouth.

 

How is the Ebola virus spread?

People can become infected with the Ebola virus if they come into contact with the blood, body secretions or organs of an infected person.

Some traditional African burial rituals may have played a part in its spread. The Ebola virus can survive for several days outside the body, including on the skin of an infected person.

In parts of Africa, it is common for mourners to touch the skin of the deceased. A person then only needs to touch their mouth to become infected.

Other ways people can catch the virus include:

  • touching the soiled clothing of an infected person and then touching their mouth
  • having sex with an infected person without using a condom – the virus can be present in semen for as long as seven weeks after an infected person has recovered
  • handling unsterilised needles or medical equipment that have been used on the infected person
  • handling infected animals or coming into contact with their bodily fluids

A person is infectious as long as their blood and secretions contain the virus.

Ebola virus is generally not spread through routine social contact, such as shaking hands, with people who do not have symptoms.

The virus is not airborne, so it is not as infectious as diseases such as flu – you would need close contact with an infected person's blood or bodily fluids to catch it.

 

Who's at risk from Ebola?

Anyone who has close contact with an infected person or handles samples from patients is at risk of becoming infected. Hospital workers, laboratory workers and family members are at greatest risk.

 

How is Ebola diagnosed?

It's difficult to know if a patient is infected with Ebola virus in the early stages. The early symptoms of Ebola, such as fever, headache and muscle pain, are similar to those of many other diseases.

But health workers are on standby to act quickly. If anyone in the UK develops the above symptoms and has potentially been in close contact with the virus, they will be admitted to hospital and will most likely be quarantined.

Samples of blood or body fluid can be sent to a laboratory to be tested for the presence of Ebola virus, and a diagnosis can be made rapidly. If the result is negative, doctors will test for other diseases, such as malaria, typhoid fever and cholera.

 

What are the treatments for Ebola?

There's currently no specific treatment or cure for the Ebola virus, although potential new vaccines and drug therapies are being developed and tested.

Patients need to be treated in isolation in intensive care. Dehydration is common, so fluids may be given intravenously (directly into a vein).

Blood oxygen levels and blood pressure will be maintained at the correct level, and the body organs supported while the patient recovers.

 

What is the risk of Ebola in the UK?

The risk to the UK is thought to be very low: although someone with the virus can bring it into the country, the chance of it then spreading is small.

Ebola virus is not airborne, so there is no credible risk of a swine flu-like global pandemic.

You cannot catch Ebola by travelling on a plane with someone who is infected, unless you come into very close physical contact with them – for example, by kissing them.

 

What precautions are being taken?

Public Health England (PHE), the body responsible for public health in England, has told health professionals about the situation in West Africa and asked for vigilance about unexplained illness in people who have visited the affected area.

PHE has provided advice for humanitarian workers planning to work in affected areas. It is also working with people from Sierra Leone living in England.

Advice has been issued to immigration removal centres on carrying out health assessments for people who may have been in Ebola outbreak areas within the preceding 21 days.

Dr Brian McCloskey, PHE's director of global health, said: "The risk to UK travellers and people working in these countries of contracting Ebola is very low.

"People who have returned from affected areas, who have a sudden onset of symptoms such as fever, headache, sore throat and general malaise [sense of feeling unwell] within three weeks of their return should immediately seek medical assistance." 

Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Scottish Ebola patient transferred to London hospital – live updates. The Guardian, December 30 2014

Ebola healthcare worker transferred to London unit. BBC News, December 30 2014

Ebola patient transferred to London's Royal Free Hospital. The Daily Telegraph, December 30 2014

Nurse With Ebola To Arrive At London Hospital. Sky News, December 30 2014

Two more patients being tested for Ebola at UK hospitals. Daily Express, December 30 2014

Race to find hundreds of BA passengers who came into contact with UK nurse who brought Ebola back to Britain – but why did TWO screenings fail to spot her condition? Mail Online, December 30 2014

Categories: NHS Choices

Ultrasound 'breakthrough' in treating Alzheimer's - in mice

NHS Choices - Behind the Headlines - Thu, 12/03/2015 - 12:00

"Alzheimer's breakthrough as ultrasound successfully treats disease in mice," The Guardian reports.

New research found high-energy sound waves helped remove abnormal clumps of proteins from the brains of mice, and also improved their memory.

The mice used in this study were genetically engineered to produce amyloid plaques – abnormal clumps of protein fragment amyloid-β typically found in the brains of people with Alzheimer's disease.

There was a 50% reduction in plaques in mice whose brains were exposed to ultrasound once a week for five to seven weeks.

Memory also improved to the extent that the mice were able to negotiate a maze as well as healthy mice after the treatment. They were also better able to avoid a section of a spinning wheel that would give them an electric shock.

While the treated mice appeared to be unharmed, with no obvious tissue damage, human brains are much more complex. Ultrasound could damage brain function in ways that we cannot predict.

The current study used mice that have plaques, but not the other two main brain features of Alzheimer's: neurofibrillary tangles (abnormal collections of protein inside nerve cells) and loss of neural connections. This, together with the general differences between mice and humans, limits our certainty about how well the findings represent what would happen in people. Therefore, further animal studies are needed.

 

Where did the story come from?

The study was carried out by researchers from the University of Queensland in Australia and was funded by the estate of Dr Clem Jones AO, the Australian Research Council, and the National Health and Medical Research Council of Australia.

The study was published in the peer-reviewed journal Science Translational Medicine.

The Guardian reported the story accurately and indicated that this is very early stage research, with human trials unlikely to occur for several years. It was encouraging that the newspaper's headline made it clear that the study was on mice, rather than humans.

 

What kind of research was this?

This was an animal study, which aimed to see if ultrasound showed potential for use as a treatment for Alzheimer's disease.

When ultrasound of the brain is combined with an injection of tiny spheres (microbubbles) into the blood, it temporarily makes it easier for substances to cross the blood-brain barrier (the membrane that separates the two). This might help the removal of amyloid-β from the brain and stop the build-up of plaques.

Alzheimer's disease is the most common form of dementia. The cause is unknown, but there are three main features of the disease in the brain. They are:

  • a build-up of amyloid plaques, which are deposits of a protein fragment called amyloid-β
  • neurofibrillary tangles, which are abnormal collections of a protein called tau in the nerve cells
  • loss of connections between the nerves

Previous research has aimed to reduce the amyloid plaques using drugs that either decrease the production of amyloid-β or increase its removal by the immune system. Drugs targeting both of these processes have caused side effects.

Here, the researchers wanted to see if ultrasound could be used to reduce the amyloid plaques and whether this improved memory. A mouse model of Alzheimer's disease was used for their experiments.

Animal models are used for the early testing of potential treatments for the human form of the disease. These tests are essential for assessing the potential beneficial effects and safety of these treatments before they are used in humans.

However, there are differences between species, and between disease models and the actual human disease. This means results in animal models may not perfectly represent what will happen in humans.

Alzheimer's is a complex disease, and there are several mouse models of this condition, each with slightly different features of the disease. The mouse model used in this study developed amyloid plaques, but not neurofibrillary tangles or loss of connections between the nerves.

 

What did the research involve?

Twenty mice genetically engineered to develop amyloid plaques in their brains were given either five sessions of ultrasound over six weeks, or sham (placebo) treatment.

The sham treatment involved receiving the microbubble injection and being placed under the ultrasound machine, but not receiving any ultrasound. Both groups were then assessed for their spatial working memory using a maze.

The researchers compared 20 mice with amyloid plaques and 10 normal mice using the active place avoidance task. This involves mice getting an electric shock if they enter a particular zone in a rotating arena. Mice with amyloid plaques did not learn to avoid this area as well as control mice with no plaques.

The amyloid mice were then put into two groups. One group received ultrasound every week for seven weeks, and the other group had a sham treatment. The mice were then retested in the active place avoidance task.

After these tests, their brains were inspected for amyloid plaques. The researchers also carried out various tests to see how ultrasound might be having an effect on plaques.

 

What were the basic results?

The mice with amyloid plaques did not perform as well on the maze task as healthy mice. However, ultrasound restored the ability of the mice to negotiate the maze to the same level as normal mice.

When the researchers compared the brains of the two groups of mice, they found ultrasound reduced the amount of amyloid plaques by over half.

Mice treated with ultrasound weekly for seven weeks learned to avoid electric shocks in the active place avoidance task better than mice given sham treatment, indicating that their memory had improved. They also had half the amount of amyloid plaques in their brains as the untreated mice.

Ultrasound appeared to have stimulated microglial cells (brain support cells that get rid of waste) to engulf the amyloid-β to reduce the plaques. The treatment did not appear to cause any tissue damage.

 

How did the researchers interpret the results?

The researchers concluded that repeated ultrasound to the entire mouse brain reduced the amyloid plaques and improved the memory of the mice.

They say this has the potential to treat conditions such as Alzheimer's disease, though there are many hurdles to overcome.

 

Conclusion

This animal study found a technique using ultrasound directed at the brain reduces the number of amyloid plaques in mice. These mice were genetically engineered to develop these plaques, one of the key brain features of Alzheimer's disease.

There are two other features of Alzheimer's disease that these mice did not have: neurofibrillary tangles and loss of nerve connections.

As it is unknown how these features inter-relate, or if one causes another, this model has certain limitations.

However, the results did show that by reducing the amount of amyloid plaques, the memory and spatial awareness of the mice improved.

While mice studies can give us an indication of how a treatment may affect humans, they are only indications, as there are inherent differences between the species, and between the model and the actual human disease.

Although we can study the mice's ability to negotiate a maze and avoid electric shocks, it is more difficult to assess higher and more complex human brain functions that are affected in Alzheimer's, such as language and personality.

The authors pointed out several important differences between this mouse study and the ability to use the technique in humans:

  • The human brain is much larger, and the skull thicker, so the ultrasound would need to be stronger to penetrate all areas of the brain. This could have negative consequences, such as causing damage to healthy brain tissue.
  • There are concerns that the level of immune response that might be activated in the human brain could be too high. To counter this, the researchers suggest the potential treatment regimen could focus on giving ultrasound to smaller sections at a time.
  • The mice in the study already had plaques when the ultrasound was started. The researchers do not know at what point of Alzheimer's disease it would be appropriate to start treating humans. They are concerned that if they gave ultrasound to people with very early Alzheimer's disease when there are few amyloid plaques, it may damage brain tissue.
  • The study did not look at the long-term effects of the treatment.

Further animal studies will now be required, progressing to primates, before any human trials can take place.

The cause of Alzheimer's disease is not known, but you can reduce the risk of developing the condition by adopting a healthy lifestyle, including maintaining a healthy weight, not smoking, taking regular physical exercise, and drinking alcohol in moderation.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Alzheimer's breakthrough as ultrasound successfully treats disease in mice. The Guardian, March 12 2015

Links To Science

Leinenga G, Götz J. Scanning ultrasound removes amyloid-β and restores memory in an Alzheimer's disease mouse model. Science Translational Medicine. Published Online March 11 2015

Categories: NHS Choices

Diet, exercise and brain training may help keep the mind 'sharp'

NHS Choices - Behind the Headlines - Thu, 12/03/2015 - 11:30

"Dancing, doing Sudoku and eating fish and fruit may be the way to stave off … mental decline," The Guardian reports. A Finnish study suggests a combination of a healthy dietexercise and brain training may help stave off mental decline in the elderly.

The study looked at whether a combined programme of guidance on healthy eating, exercise, brain training and the management of risk factors such as heart health could have an effect on dementia risk and cognitive function.

Half of the 1,260 people in this two-year study were randomly allocated to receive this programme, while the other half acted as a control group, receiving only regular health advice. All participants were given standard tests to measure their brain function at the start, and at 12 and 24 months.

Researchers found that overall, scores measuring brain function in the group who received the programme were 25% higher than in the control group. For a part of the test called "executive functioning" (the brain's ability to organise and regulate thought processes), scores in the intervention group were 83% higher.

While the results of this well-conducted study are certainly encouraging, it's worth pointing out that the study does not look at whether people developed dementia in the longer term.

Most experts agree that a healthy diet, exercise and an active social life with plenty of interests may help reduce the risk of dementia.

 

Where did the story come from?

The study was carried out by researchers from several institutes in Scandinavia, including the Karolinska Institutet in Sweden, the Finnish National Institute for Health and Welfare, and the University of Eastern Finland.

It was funded by a number of different academic centres, including the Academy of Finland, La Carita Foundation, Alzheimer Association, Alzheimer's Research and Prevention Foundation, Juho Vainio Foundation, Novo Nordisk Foundation, Finnish Social Insurance Institution, Ministry of Education and Culture, Salama bint Hamdan Al Nahyan Foundation, and Axa Research Fund, EVO grants, Swedish Research Council, Swedish Research Council for Health, Working Life, and Welfare and af Jochnick Foundation.

The study was published in the peer-reviewed medical journal The Lancet.

The study was widely covered in the UK media. Most coverage was fair, although many papers reported that the study showed how lifestyle interventions can reduce the risk of dementia. This was incorrect – the study looked only at cognitive performance in people at risk of dementia.

A study with a much longer follow-up would be required to see if the interventions used in the study were effective in preventing dementia.

Reports also tended to only concentrate on the lifestyle interventions in the study and not the medical management. One of the interventions involved doctors and nurses monitoring risk factors for dementia, such as blood pressure and body mass index (BMI), with advice where needed for people to get medication from their GP.

It is possible some people found to be at risk – because, for example, they had high blood pressure – were prescribed medication by a physician and it was this that led to the improvement in cognitive function.

 

What kind of research was this?

This was a double blind randomised controlled trial (RCT) looking at whether a comprehensive programme of healthy eating, exercise, brain training and management of risk factors could have an effect on mental function in older people at risk of dementia. An RCT is the best kind of study to find out whether an intervention is effective.

The researchers say previous observational studies have suggested a link between cognitive function in older people and factors such as diet, fitness and heart health.

They say their study is the first large RCT looking at an intensive programme addressing whether a combination of interventions might help prevent cognitive decline in elderly people at risk of dementia.

 

What did the research involve?

Older adults at risk of dementia were randomised to receive either an intervention that addressed their diet, exercise, cognitive training and cardiovascular risk monitoring, or general health advice. After two years, the participants were compared using a range of cognitive assessments.

Researchers recruited 1,260 people aged 60 to 77. To be eligible, participants had to have a dementia risk score of six points or higher. This is a validated score based on age, sex, education, blood pressure, body mass index (BMI), total blood cholesterol levels, and physical activity. The score ranges from 0 to 15 points.
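
As a rough illustration of how a points-based eligibility score like this works, here is a short hypothetical sketch in Python. The point values and thresholds below are invented for illustration and are not the actual weights of the validated score used in the trial; only the list of factors comes from the study description.

  # Hypothetical, simplified points-based dementia risk score (illustration only;
  # NOT the validated 0-15 score or its real weights used in the trial).
  def illustrative_risk_score(age, years_education, systolic_bp, bmi,
                              total_cholesterol_mmol, physically_inactive, male):
      score = 0
      score += 3 if age > 53 else 0                       # older age adds points
      score += 2 if years_education < 10 else 0           # fewer years of education
      score += 2 if systolic_bp > 140 else 0              # raised blood pressure
      score += 2 if bmi > 30 else 0                       # obesity
      score += 2 if total_cholesterol_mmol > 6.5 else 0   # raised total cholesterol
      score += 1 if physically_inactive else 0            # low physical activity
      score += 1 if male else 0                           # sex
      return score

  # In the trial, only people at or above a threshold (6 points on the real
  # 0-15 scale) were eligible.
  example = illustrative_risk_score(age=68, years_education=8, systolic_bp=150,
                                    bmi=31, total_cholesterol_mmol=7.0,
                                    physically_inactive=True, male=False)
  print("Illustrative score:", example, "(eligible)" if example >= 6 else "(not eligible)")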

Participants also had to have average cognitive function of slightly lower than expected for their age. This was established by cognitive screening using validated tests.

Anyone with previously diagnosed or suspected dementia was excluded. People with other major disorders, such as major depression, cancer, or severe loss of vision or hearing, were also excluded.

Participants were randomly assigned either into the intervention group or to a control group.

All participants had their blood pressure, weight, BMI, and hip and waist circumference measured at the start of the study, and again at 6, 12 and 24 months.

All participants (control and intervention group) met the study physician at screening and at 24 months for a detailed medical history and physical examination.

At baseline, the study nurse gave all participants oral and written information and advice on healthy diet and physical, cognitive, and social activities beneficial for the management of cardiovascular risk factors and disability prevention.

Blood samples were collected four times during the study: at baseline and at 6, 12, and 24 months. Laboratory test results were mailed to all participants, together with general written information about the clinical significance of the measurements and advice to contact primary health care if needed.

The control group received regular health advice.

The intervention group additionally received an intensive programme comprising four interventions.

Diet

The diet advice was based on Finnish nutritional recommendations. This was tailored to individual participants, but generally included high consumption of fruit and vegetables, consumption of wholegrain cereals and low-fat milk and meat products, limiting sugar intake to less than 50g a day, use of vegetable margarine and rapeseed oil instead of butter, and at least two portions of fish a week.

Exercise

The physical exercise programme followed international guidelines. It consisted of individually tailored programmes for progressive muscle strength (one to three times a week) and aerobic exercise (two to five times a week), using activities preferred by each participant. Aerobic group exercise was also provided.

Cognitive training

There were group and individual sessions, which included advice on age-related cognitive changes, memory and reasoning strategies, and individual computer-based cognitive training, conducted in two periods of six months each.

Medical management

Management of metabolic and cardiovascular risk factors for dementia was based on national guidelines. This included regular meetings with the study nurse or doctor for measurements of blood pressure, weight and BMI, hip and waist circumference, physical examinations, and recommendations for lifestyle management. Study doctors did not prescribe medication, but recommended participants contact their own doctor if needed.

Participants underwent a cognitive assessment using a standard set of neuropsychological tests called the neuropsychological test battery (NTB) at baseline and at 12 and 24 months. The tests measure factors such as executive functioning, processing speed and memory.

Researchers looked at any changes in people's cognitive performance over the course of the study, as measured by an NTB total score, with higher scores suggesting better performance.

They also looked at various scores on individual tests. They assessed participation in the intervention group with self reports at 12 and 24 months and recorded their attendance throughout the trial.

 

What were the basic results?

In total, 153 people (12%) dropped out of the trial.

People in the intervention group had 25% higher overall NTB scores after 24 months compared with the control group.

Improvement in executive functioning was 83% higher in the intervention group, and improvement in processing speed was 150% higher. However, the intervention appeared to have no effect on people's memory.
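
For readers wondering what a figure such as "83% higher improvement" means in practice, the sketch below shows how a relative difference in average improvement between two groups can be calculated. The change scores are made up for illustration and are not the trial's actual estimates.

  # Hypothetical mean improvements in a cognitive score over 24 months
  # (made-up numbers, not the trial's actual estimates).
  mean_change_intervention = 0.20
  mean_change_control = 0.16

  # Relative difference in improvement between the two groups
  relative_difference = (mean_change_intervention - mean_change_control) / mean_change_control
  print(f"Improvement was {relative_difference:.0%} greater in the intervention group")
  # -> "Improvement was 25% greater in the intervention group"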

Forty-six participants in the intervention group and six in the control group suffered side effects; the most common adverse event was musculoskeletal pain (32 individuals in the intervention versus none in the control group).

Self-reported adherence to the programme was high.

 

How did the researchers interpret the results?

The researchers say their findings support the effectiveness of a "multi-domain" approach for elderly people at risk of dementia. They will be investigating possible mechanisms whereby the intervention might affect brain function.

 

Conclusion

This RCT suggests a combination of advice on lifestyle, group activities, individual sessions and monitoring of risk factors appears to improve mental ability in elderly people at risk of dementia.

Whether it will have an effect on the development of dementia in such a population is uncertain, but the participants will be followed for at least seven years to determine whether the improved mental scores seen here are followed by reduced levels of dementia.

The trial was done in Finland and its results may not be applicable elsewhere, although the interventions included, such as diet and exercise, are similar to other countries' recommendations.

This study shows that a combined approach is beneficial. What is not clear is how active the clinical management of cardiovascular risk factors was in each group. Both groups were given health advice, but the intervention group were monitored more regularly for risk factors such as high blood pressure.

Though the study physicians did not prescribe medication, the participants were informed of results so they could seek advice from their GP. We do not know how many people in each group sought treatment for high blood pressure or cholesterol, and this could have affected the results.

All in all, it seems this study provides further evidence of the benefits of a healthy lifestyle. 

A good rule is that what is good for the heart, such as regular exercise and a healthy diet, is also good for the brain. It may also be useful to regard your brain as a type of muscle. If you don't exercise it regularly, it may well weaken.

Not all cases of dementia are preventable, but there are steps you can take to reduce your risk.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Dancing, Sudoku, fish and fruit – the keys to a mentally alert old age. The Guardian, March 12 2015

How diet, exercise and social activity can delay dementia: Study finds living a healthy lifestyle can boost brainpower by a quarter. Daily Mail, March 12 2015

Proof that dementia risk can be reduced by improving lifestyle. The Independent, March 12 2015

Want to know how to ward off dementia? We have the tips on how you can. Daily Mirror, March 12 2015

Links To Science

Ngandu T, Lehtisalo J, Solomon A, et al. A 2 year multidomain intervention of diet, exercise, cognitive training, and vascular risk monitoring versus control to prevent cognitive decline in at-risk elderly people (FINGER): a randomised controlled trial. The Lancet. Published online March 11 2015

Categories: NHS Choices

Autism-related genes linked to cognitive ability

NHS Choices - Behind the Headlines - Wed, 11/03/2015 - 12:00

"Autism is linked to higher intelligence," the Mail Online reports. A new study found evidence that certain genetic variations associated with autistic spectrum disorder (ASD) were linked to greater cognitive ability in non-autistic individuals.

Researchers looked at the DNA of more than 10,000 people in the general population in Scotland and Australia. The study aimed to see whether genetic variations associated with ASD are associated with improved cognitive ability in the general population. They carried out a similar assessment for variants associated with attention deficit hyperactivity disorder (ADHD).

They found that individuals carrying more of the genetic variations associated with ASD had slightly higher overall cognitive scores. No consistent link was found between ADHD-associated variants and improved cognitive performance.

The most important thing to know about these results is that the effect seen was very small. Less than 0.5% of the difference seen in people’s cognitive scores was explained by how many of the ASD-linked genetic variants they carried. These findings would need to be replicated in other, larger, samples to be confirmed.

These findings do not currently have any obvious practical implications for individuals.

 

Where did the story come from?

The study was carried out by researchers from the University of Edinburgh and other research centres in the UK and Australia. The study received its main funding from the Chief Scientist Office of the Scottish Government Health Directorates and the Scottish Funding Council, Age UK and the Biotechnology and Biological Sciences Research Council (BBSRC), and the researchers and centres had various other sources of funding.

The study was published in the peer-reviewed journal Molecular Psychiatry.

The Mail Online's reporting could give readers the impression the improvement in cognitive ability seen by the researchers is much more impressive than it actually was.

It is only towards the end of the article that the website provides a quote from one of the researchers pointing out that the variants only provided a "small intellectual advantage".

 

What kind of research was this?

This was a cross sectional study looking at whether genetic variations associated with autism spectrum disorders (ASD) or attention deficit hyperactivity disorder (ADHD) are associated with cognitive ability in the general population.

The researchers report that individuals with ADHD and ASD often have cognitive difficulties, although the relationship is more complex for ASD. The researchers say that some studies have found better cognitive functioning on some tests in individuals with ASD than controls.

While some diseases, such as cystic fibrosis, are caused by mutations in a single gene, the genetic contribution to other conditions such as ASD and ADHD is more complex and not fully understood. Rather than a single gene, variations in a large number of genes are thought to each contribute to increasing a person’s risk. These genetic variations are common in the population, and most people who carry them will not have ASD or ADHD. Cognitive ability also appears to have a complex genetic component.

The researchers in the current study wanted to see whether the genetic variants that have been linked to ASD or ADHD are linked to cognitive ability in the general population. If so, this might suggest that the same genes are contributing to ASD and ADHD, as well as cognitive function.

 

What did the research involve?

The researchers analysed the DNA of 9,863 Scottish individuals, who had also completed cognitive tests. They looked for a range of common genetic variations that have been found to be linked with ASD or ADHD. They analysed whether the number of these variations a person had was related to their cognitive performance.

The participants were taking part in the Generation Scotland: Scottish Family Health Study. The participants completed four cognitive tests:

  • the Mill Hill vocabulary scale junior and senior synonym test – where participants are asked to explain the meaning of certain words
  • a test of verbal declarative memory (logical memory) – where participants are asked to recall a series of previously learned facts and events
  • the Wechsler digit symbol substitution task – where participants are given a list of digit-symbol pairs (such as # = 14) and then translate a list of symbols into digits as fast as possible
  • a verbal fluency test – where participants are asked to say as many words from a category in a given time – such as "how many animals beginning with the letter E can you name in 60 seconds"

The researchers combined the results of these tests, to obtain a measure of performance across them all. For each participant, the researchers calculated a genetic "risk score" based on how many of the ASD or ADHD-linked genetic variants they had. They then carried out statistical analyses to see if there were links between a person’s genetic "risk score" and cognitive performance. They took into account people’s age and gender in the analyses.
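
Below is a minimal sketch in Python of how an allele-count genetic "risk score" and its association with cognitive performance might be calculated. The simulated genotypes, the use of an unweighted count and the ordinary least squares regression are all assumptions for illustration; they are not the study's actual data or methods.

  # Illustrative polygenic "risk score": count how many risk-associated alleles
  # each person carries across a set of ASD-linked variants (0, 1 or 2 per
  # variant), then test whether the score predicts cognitive performance,
  # adjusting for age and sex. All data below are simulated.
  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(0)
  n_people, n_variants = 1000, 50

  genotypes = rng.integers(0, 3, size=(n_people, n_variants))  # risk-allele counts
  risk_score = genotypes.sum(axis=1)                           # unweighted score

  age = rng.uniform(35, 65, n_people)
  sex = rng.integers(0, 2, n_people)
  cognition = 0.005 * risk_score - 0.02 * age + rng.normal(0, 1, n_people)

  X = sm.add_constant(np.column_stack([risk_score, age, sex]))
  model = sm.OLS(cognition, X).fit()
  print(model.params[1])   # change in cognitive score per one-unit increase in risk score
  print(model.rsquared)    # proportion of variance in cognition explained by the model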

The researchers also repeated their analyses on two other samples of people – one from Scotland (1,522 people) and one from Australia (921 people) – to see if they got the same results. This second Scottish sample had cognitive tests in childhood and old age, and the Australian sample had cognitive tests in adolescence. Different cognitive tests were used in the three samples.

 

What were the basic results?

The researchers found that having a higher genetic risk score for ASD was linked with having a slightly higher overall cognitive performance score in their large Scottish sample. A higher genetic risk score for ASD was also linked with having a slightly higher score on three of the four individual cognitive tests (logical memory, vocabulary test and verbal fluency test). A one unit increase in genetic risk score was associated with between a 0.04 and 0.07 unit increase in each of these scores. These were small effects, and variation in the genetic risk score accounted for less than 0.5% of the variation in people’s scores.

The researchers found similar associations between genetic risk score for ASD and different cognitive tests in the Australian sample, but did not find associations for their second, smaller Scottish sample.

When looking at the genetic risk score for ADHD, the researchers found no associations with cognitive performance in the main Scottish sample or the Australian sample. There was an association between higher genetic risk score for ADHD and slightly lower IQ score at age 11 in the second (smaller) Scottish sample. A one unit increase in genetic risk score was associated with a 0.08 unit reduction in this score. Other measures of cognitive performance in this study did not show consistent associations with ADHD genetic risk score.

 

How did the researchers interpret the results?

The researchers concluded that their findings suggest common genetic variants linked to ASD are also associated with general cognitive ability in the general population.

 

Conclusion

This study has found some small associations between common genetic variations linked to ASD and cognitive performance in the general population.

Even where an association was found between genetic risk score and cognitive performance, the effect was very small. Less than 0.5% of the variability in people’s cognitive scores in the main study sample was explained by their ASD genetic risk score.

The link between ASD genetic risk score and cognitive performance was only found in two of the three population samples assessed. This may be due to different age groups of participants, or to different cognitive tests being used. There was even less consistency in findings relating to ADHD genetic risk score. Ideally, these findings need to be replicated in other, larger samples to confirm them.

The study is likely to be of interest to researchers, but does not have any obvious practical implications for individuals.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Autism IS linked to higher intelligence: People with genes related to the condition 'scored better in mental ability tests'. Mail Online, March 11 2015

Links To Science

Clarke TK, Lupton MK, Fernandez-Pujals AM, et al. Common polygenic risk for autism spectrum disorder (ASD) is associated with cognitive ability in the general population. Molecular Psychiatry. Published online March 10 2015

Categories: NHS Choices

Genetic high cholesterol 'may help protect against type 2 diabetes'

NHS Choices - Behind the Headlines - Wed, 11/03/2015 - 11:30

"High cholesterol LOWERS the risk of diabetes," is the Daily Mail's rather misleading headline, going on to say that, "New study reveals why taking statins may be harmful".

But this study looked at familial hypercholesterolemia (FH) and not at the more common form of high cholesterol, which is associated with a high-fat diet.

FH is caused by an abnormal gene that affects how much cholesterol is absorbed by cells (cholesterol uptake). People with FH usually require lifelong statin treatments. Statins are drugs that help reduce cholesterol levels, which can reduce the risk of serious complications of the condition, such as a heart attack.

As greater cholesterol uptake by cells has been linked to increased type 2 diabetes risk, the researchers expected that diabetes may be less common in people with FH.  

The researchers studied more than 63,000 relatives of people with FH who were having a DNA test to see if they also had the condition. They compared how common type 2 diabetes was in those found to have the condition and those who were unaffected.

Overall, they found diabetes was slightly less common in those diagnosed with FH (1.75%) compared with those who did not have the condition (2.93%).

These findings certainly do not suggest that high cholesterol is good for you and taking statins is bad. Statins could potentially be lifesaving – without treatment, high circulating cholesterol levels could put people at a very high risk of heart attacks or strokes.

 

Where did the story come from?

The study was carried out by researchers from the Academic Medical Centre in the Netherlands.

The individual researchers in this study received various research grants, including those from the Netherlands Organisation for Scientific Research, the Cardiovascular Research Initiative and the European Union.

The study was published in the peer-reviewed medical journal JAMA.

The Daily Mail's headline, which claimed that "High cholesterol LOWERS the risk of diabetes: New study reveals why taking statins may be harmful", is misleading and arguably irresponsible.

This study specifically looked at people with a genetic condition that leads to raised cholesterol levels. It found they were less likely to have type 2 diabetes than their unaffected relatives.

The findings suggest poor cellular uptake of cholesterol could confer a lower risk of type 2 diabetes. But the biological link is not confirmed at this stage and requires further study.

As statins increase the cellular uptake of cholesterol, the Mail suggested they could therefore be harmful. But this study has not actually examined the effects of statins. 

The headline should have made it clear, as the researcher quoted in the article said, that statins have a "clear overall benefit" in high-risk patients.

 

What kind of research was this?

This was a cross sectional study aiming to look at the link between familial hypercholesterolemia and type 2 diabetes.

Familial hypercholesterolemia (FH) is a genetic condition where a person has very high cholesterol levels (both total cholesterol and LDL, or "bad" cholesterol) as the result of an abnormal gene.

People with FH have a high risk of cardiovascular disease from a young age and usually require lifelong statin treatment following diagnosis.

About 1 in 500 people in the general population have FH. If you have a parent with the condition, you have a one in two chance of having FH.

This study included people who had relatives with FH who were being screened by DNA testing to see if they also had the abnormal gene.

The researchers say the risk of type 2 diabetes has been found to be increased in statin users. This is believed to be the result of statins increasing the amount of LDL cholesterol receptors on body cells, causing an increased uptake of cholesterol.

People with FH have problems with cholesterol regulation and uptake, and in the majority of cases this is caused by an abnormality of the LDL receptor gene. As their body cells – including the insulin-producing cells of the pancreas – have decreased cholesterol uptake, the researchers therefore expected this might decrease their diabetes risk.

The researchers aimed to look at how common diabetes was in the relatives of people with FH who were screened in the Netherlands. They wanted to see whether prevalence differed between relatives who were also found to have the condition and those found to be unaffected.

 

What did the research involve?

The research included 63,320 first-degree relatives (parents, siblings or children) of people with FH. These people had DNA testing in the Netherlands between 1994 and 2014 to see if they also had the condition.

They also had their blood cholesterol levels measured. People were considered to have FH if they had one of the mutations known to cause the condition.

The main outcome the researchers looked at was whether a person had type 2 diabetes, as defined by self-report on a questionnaire.

They examined the difference in type 2 diabetes prevalence between those found to have FH and their unaffected relatives. They adjusted their analyses for the following potential confounders:

  • age
  • body mass index (BMI)
  • HDL ("good") cholesterol levels
  • triglycerides (another fat) levels
  • statin use
  • smoking
  • cardiovascular disease

 

What were the basic results?

Of the 63,320 relatives tested, 40% were found to have FH, and 60% were found to be unaffected and not carrying an FH mutation. Of those found to have FH, 86% had a mutation of the LDL receptor gene and others had less common mutations.

People with FH tended to be younger, have a lower BMI, have higher "bad" LDL cholesterol but lower "good" HDL cholesterol, smoke less, and use statins more.

The overall prevalence of type 2 diabetes was 1.75% in people with FH (440 of 25,137) and 2.93% in unaffected relatives (1,119 of 38,183). This was a significant difference: people with FH had 38% lower odds of type 2 diabetes (odds ratio [OR] 0.62, 95% confidence interval [CI] 0.55 to 0.69).

Repeating the analysis after adjusting for confounders still found that type 2 diabetes prevalence was lower in people with FH (1.44%) compared with unaffected relatives (3.26%), which was a significant difference (OR 0.49, 95% CI 0.41 to 0.58).
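
For readers unfamiliar with odds ratios, the sketch below shows how a crude (unadjusted) odds ratio can be calculated from counts like those reported above. Plugging in these rounded figures gives a value close to, but not exactly, the published OR of 0.62, which comes from the study's own full analysis.

  import math

  # Counts reported above: people with type 2 diabetes / total in each group
  diabetes_fh, total_fh = 440, 25137      # relatives found to have FH
  diabetes_un, total_un = 1119, 38183     # unaffected relatives

  # Odds = cases / non-cases in each group
  odds_fh = diabetes_fh / (total_fh - diabetes_fh)
  odds_un = diabetes_un / (total_un - diabetes_un)
  crude_or = odds_fh / odds_un

  print(f"Prevalence: {diabetes_fh / total_fh:.2%} (FH) vs {diabetes_un / total_un:.2%} (unaffected)")
  print(f"Crude odds ratio: {crude_or:.2f}")  # close to the published 0.62

  # Approximate 95% confidence interval using the standard large-sample formula
  se_log_or = math.sqrt(1/diabetes_fh + 1/(total_fh - diabetes_fh)
                        + 1/diabetes_un + 1/(total_un - diabetes_un))
  lower = math.exp(math.log(crude_or) - 1.96 * se_log_or)
  upper = math.exp(math.log(crude_or) + 1.96 * se_log_or)
  print(f"Approximate 95% CI: {lower:.2f} to {upper:.2f}")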

 

How did the researchers interpret the results?

The researchers concluded that, "In a cross-sectional analysis in the Netherlands, the prevalence of type 2 diabetes among patients with familial hypercholesterolemia was significantly lower than among unaffected relatives."

They say that if this finding is confirmed in further studies, it would raise the possibility that the transport of cholesterol into cells via the LDL receptor could be directly contributing to type 2 diabetes.

 

Conclusion

This cross sectional study included more than 63,000 first-degree relatives of people with FH who were undergoing genetic testing in the Netherlands to see if they also had the condition.

It compared the prevalence of type 2 diabetes between those relatives found to have the condition and those found to be unaffected. Overall, it found that those affected had lower prevalence of type 2 diabetes than those who were unaffected.

Compared with those who were unaffected, people with FH tended to have a lower BMI, higher LDL cholesterol, be less likely to be smokers, and more likely to be using statins at the time they were diagnosed.

This suggests they could have been taking statins and making healthy lifestyle changes as they already knew they had higher cholesterol, even before this was confirmed to be genetic FH.

However, the prevalence of type 2 diabetes was still significantly lower in people with FH than in their unaffected relatives, even after adjustment for statin use and these healthier lifestyle factors.

This suggests, as the researchers propose, that the genetic abnormality in cholesterol regulation and cellular uptake – including uptake by the insulin-producing cells of the pancreas – could make people with FH less likely to develop type 2 diabetes.

But these results do not suggest high cholesterol is good for you and taking statins is bad, which is a simplistic interpretation of this study.

If the link is caused by the cellular uptake of cholesterol, statins may increase this process and could therefore potentially lead to a small increase in risk of type 2 diabetes.

Other research has also linked statin use with type 2 diabetes, as we discussed in September 2014. However, any potential risk must be weighed up against the benefits of statins in terms of reducing cardiovascular risk.

For people with FH, statins can really be viewed as a potentially lifesaving treatment – without these drugs, high circulating cholesterol levels put these people at a very high risk of cardiovascular disease at a young age.

Even for people who have raised cholesterol without having the genetic condition FH, the benefits of statins in terms of reducing cardiovascular risk are likely to outweigh any small increase in diabetes risk.

Overall, this study suggests the transport of cholesterol into cells via the LDL receptor may be linked with type 2 diabetes risk. But further study is needed to determine whether this is actually the case.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

High cholesterol LOWERS the risk of diabetes: New study reveals why taking statins may be harmful. Daily Mail, March 10 2015

Links To Science

Besseling J, Kastelein JJP, Defesche JC, et al. Association Between Familial Hypercholesterolemia and Prevalence of Type 2 Diabetes Mellitus. JAMA. Published online March 10 2015

Categories: NHS Choices

HRT review finds increased risk of blood clots and stroke

NHS Choices - Behind the Headlines - Tue, 10/03/2015 - 12:00

"Women on HRT pills should be aware that there is a small chance of an increased risk of blood clots and possibly stroke," BBC News reports.

This story is based on an update of a review on the effects of hormone replacement therapy (HRT) on the risk of heart and blood vessel diseases (cardiovascular diseases).

The updated results supported the review's previous findings that HRT did not reduce the risk of death from any cause, or from cardiovascular causes, during follow-up. However, it did increase the risk of stroke and blood clots (for example, deep vein thrombosis, or DVT).

Researchers estimated that for every 1,000 women taking HRT, an extra six would experience stroke and an extra eight would experience a blood clot.

The review is robust and the trials of good quality. Still, there are some important points to note. The trials did include different doses and types of oral HRT given over different time periods, but results may not be representative of each of these individually. Also, the women were, on average, about 60 years old at the start of the trials, so results may not be representative of women who start taking HRT at a younger age.

If you are receiving HRT and are concerned, or are considering it, you should discuss your individual risk factors (such as a family history of clotting or stroke), as well as potential benefits, with your GP.

 

Where did the story come from?

The study was carried out by researchers from the University of Oxford and other research centres in the UK and Spain. They were carrying out the review for the Cochrane Collaboration – an international independent network of researchers and professionals, as well as patients and carers, which produces and updates a library of systematic reviews on a wide range of healthcare questions. The Cochrane Collaboration does not accept commercial or conflicted funding.

The study in question was published in the Cochrane Library – an online resource that is free to access.

The UK media generally covered this story well, giving balanced viewpoints from the authors of the review.

What they don’t mention is that the previous version of the review had similar findings, so the results are not unexpected.

 

What kind of research was this?

This was a systematic review of randomised controlled trials (RCTs), assessing whether HRT affects postmenopausal women’s risks of cardiovascular (heart) disease. Observational studies had suggested that women taking HRT were at lower risk of death or heart disease events during follow up. However, later RCTs had contradicted these findings. Some research has suggested that HRT might only reduce cardiovascular risk if it is started soon after the menopause starts.

A systematic review is the best way of assessing what the existing research says about any given question. Reviews of this kind aim to use transparent, rigorous and unbiased methods to identify as much of the relevant evidence as possible, assess its quality, and analyse and interpret the findings.

Over time, new research evidence is published, so Cochrane reviews are regularly updated to incorporate new evidence and see if conclusions change as a result. The current publication updated the previous version of this review from 2013. The previous version found that HRT did not reduce the risk of heart problems, but did increase risk of stroke and blood clotting events.

 

What did the research involve?

The researchers searched multiple literature databases to identify RCTs comparing the effects of HRT versus a dummy pill (placebo) or no treatment. They selected those RCTs that met their inclusion criteria, assessed their quality, and pooled their results in a meta-analysis.

The researchers only included RCTs in women followed up for at least six months. HRT had to be given orally – trials of HRT patches, creams etc. were excluded. HRT could contain oestrogen alone or oestrogen plus a progestogen.

The main outcomes the researchers were interested in were death from any cause, death from a heart-related (cardiovascular) cause, non-fatal heart attack, stroke or chest pain (angina). They were also interested in blood clots, either in the lungs (pulmonary embolism) or DVT.

The results of the trials were analysed using standard meta-analytical techniques.

 

What were the basic results?

The searches identified six relevant RCTs published since the previous version of the review. This brought the total number of RCTs to 19, featuring 40,410 postmenopausal women. Nine RCTs included relatively healthy women, the majority of whom were not known to have heart disease, and 10 RCTs included women with known heart disease. The RCTs assessed various types and doses of HRT, given for different lengths of time (seven months to 10 years). The RCTs were generally of good quality.

Meta-analysis of the trials found that HRT did not affect women’s risk of death from any cause or death from cardiovascular disease during follow up, or of non-fatal heart attacks, compared to placebo or no treatment. Similar results were obtained if trials in women with and without heart disease were analysed separately.

HRT increased risk of stroke compared to placebo or no treatment – with an extra six women per 1,000 experiencing stroke with HRT. On average, across the studies, about 31 women per 1,000 taking HRT experienced stroke during follow-up, compared to 25 women per 1,000 not taking HRT. This meant that for every 165 women taking HRT, there would be one extra stroke over an average of about four years. The overall quality of the evidence on this outcome was rated as high.

HRT also increased risk of blood clot (venous thromboembolism) – with an extra eight women per 1,000, on average, experiencing clots with HRT. On average across the studies, about 19 women per 1,000 taking HRT experienced clots during follow-up, compared to 10 women per 1,000 not taking HRT. The overall quality of the evidence on this outcome was rated as moderate.
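
As a rough check of these figures (an illustrative sketch, not a calculation taken from the review itself), the extra events per 1,000 women can be turned into a "one extra event for every X women treated" estimate:

```python
# Illustrative sketch only, using the rounded per-1,000 rates quoted above.
# The review's own "1 in 165" figure was based on unrounded rates, so the
# numbers here are close to, but not exactly, those quoted.

def extra_events(rate_hrt_per_1000, rate_control_per_1000):
    """Extra events per 1,000 women on HRT, and women treated per one extra event."""
    extra = rate_hrt_per_1000 - rate_control_per_1000
    return extra, round(1000 / extra)

print(extra_events(31, 25))  # stroke: (6, 167) - about one extra stroke per 165-167 women over ~4 years
print(extra_events(19, 10))  # clots: (9, 111) - the rounded rates give 9 rather than the quoted 8 extra per 1,000
```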

When the women were split by when they began taking HRT, results in the group who started HRT more than 10 years after the menopause were similar to the overall results above.

However, HRT reduced risk of death during follow-up in the women who started taking it less than 10 years after menopause started (relative risk (RR) 0.70, 95% confidence interval (CI) 0.52 to 0.95).

HRT also reduced risk of death from heart disease or non-fatal heart attack in these women (RR 0.52, 95% CI 0.29 to 0.96). The risk of stroke was not significantly affected by HRT in this group, but the risk of blood clots was still increased (RR 1.74, 95% CI 1.11 to 2.73). These analyses did not include as many women as the overall analyses (only about 8,000-9,000), and relied largely on one trial.

 

How did the researchers interpret the results?

The researchers concluded that their findings "provide strong evidence that treatment with hormone therapy in post-menopausal women overall … has little if any benefit and causes an increase in the risk of stroke and venous thromboembolic events".

 

Conclusion

This updated Cochrane review has found that oral HRT increases risk of stroke and blood clots, and does not appear to reduce overall risk of cardiovascular disease or death during follow-up.

More exploratory analyses suggested that HRT might reduce risk of death from heart disease or non-fatal heart attack if it was started within 10 years of menopause, but this finding needs further confirmation.

The review was carried out using robust methods and the trials were of good quality. Its findings are in line with the previous version of the review, and also with other reviews.

There are some points to note:

  • This review specifically looked at the effects of HRT on the risk of heart and vascular disease. It did not assess the effects of HRT on women’s menopausal symptoms or quality of life.
  • Although there were 19 trials included, three large trials (two testing combined HRT) had the greatest impact on the analyses. The authors note that there is debate about whether the results of these trials apply to all types of HRT, and all women receiving HRT.
  • The trials did include different doses and types of HRT, but results may not be representative of each of these individually. The review did not assess non-oral HRT; therefore, results may not apply to this form of HRT.
  • The women were, on average, about 60 years old at the start of the trials. Many women would start HRT at a younger age than this, soon after the start of the menopause, to counteract menopausal symptoms.

Overall, the review supports previous findings.

When prescribing any medicine, it is important to consider the balance of benefits and harms for the individual. If you are receiving HRT and are concerned, or are considering it, you may find it useful to discuss individual risk factors with your GP, such as whether you have an increased risk of developing a blood clot or stroke. This will help you to weigh your risks against the benefits HRT can bring, especially if your menopausal symptoms are particularly severe.

The UK Medicines and Healthcare products Regulatory Agency (MHRA) notes that HRT does relieve menopausal symptoms, and suggests that for all women taking HRT, the lowest effective dose should be used for the shortest time.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

HRT linked to clots - and possibly stroke - study finds. BBC News, March 10 2015

HRT could halve the risk of heart disease, Oxford University research suggests. The Daily Telegraph, March 10 2015

Taking HRT in your 50s ‘cuts risk of premature death and heart disease’. Mail Online, March 10 2015

Women on HRT ‘face an increased risk of stroke' according to new review. Daily Express, March 10 2015

Links To Science

Boardman HMP, Hartley L, Eisinga A, et al. Hormone therapy for preventing cardiovascular disease in post-menopausal women. Cochrane Database of Systematic Reviews. Updated March 2015

Categories: NHS Choices

A diet rich in veg and fish may reduce bowel cancer risk

NHS Choices - Behind the Headlines - Tue, 10/03/2015 - 11:00

"Becoming a pescetarian can protect against bowel cancer, new research suggests," the Mail Online reports. The US study found people who mainly eat fish and vegetables, and small quantities of meat, had a significantly reduced risk of bowel cancer.

This study followed more than 70,000 North American Seventh Day Adventists (a branch of Christianity mainly based in the US) for a seven-year period. It looked at whether vegetarian dietary patterns were associated with the risk of developing bowel cancer.

The study looked at four types of vegetarian dietary patterns:

  • vegan – defined as eating eggs, dairy, fish and meat less than once a month (not strictly vegan)
  • lacto-ovo vegetarian – more frequent eggs and dairy than above, but still meat less than once a month
  • pescovegetarian – eating fish one or more times a month, but all other meats less than once a month
  • semi-vegetarian – eating fish and meat one or more times a month, but less than once a week

These definitions are not what most vegetarians and vegans would consider to be truly vegetarian.

Overall, the researchers found people in these vegetarian dietary groups had a combined reduced risk of bowel cancer compared with non-vegetarians (people who eat meat or fish more than once a week).

However, when split into specific vegetarian diet categories, a statistically significant risk reduction for bowel cancer was only found for the pescovegetarian pattern.

Identifying links between specific foods or dietary patterns and consequent outcomes is challenging, as it is difficult to remove the impact of all other health and lifestyle factors. This means that, taken on its own, this study does not prove that fish consumption definitely decreases the risk of bowel cancer.

Still, the results chime with previous studies – there is a broad body of evidence that a diet high in red and processed meat can increase the risk of bowel cancer.

 

Where did the story come from?

The study was carried out by researchers from Loma Linda University, California, and was funded by the National Cancer Institute and the World Cancer Research Fund.

It was published in the peer-reviewed journal JAMA Internal Medicine.

The Mail Online's reporting of the study was inaccurate for several reasons. The headline "Eating fish but not meat halves the risk of developing bowel cancer" is incorrect. People in the broad pescovegetarian group could also have eaten meat, but not as often as fish.

It is also misleading when the article states: "Pescetarians, vegetarians and vegans had a lower risk of bowel cancer".

The significant link was only found when the four vegetarian groups were combined, and then only for pescovegetarians when looked at separately. No statistically significant links were found for vegans, lacto-ovo vegetarians, or semi-vegetarians.

 

What kind of research was this?

This was a prospective cohort study that aimed to look at the link between vegetarian dietary patterns and colorectal (bowel) cancer.

As the researchers say, bowel cancer is one of the leading causes of cancer deaths. Dietary factors are often implicated as a modifiable risk factor.

For example, a review of the evidence (PDF, 556kb) in 2011 by the World Cancer Research Fund (WCRF) concluded there was "convincing" evidence that increased red meat and processed meat consumption is associated with an increased risk of bowel cancer, and increased dietary fibre is associated with a decreased risk.

Vegetarian diets – with their lack of meat consumption, higher fibre content, and the fact adherents often have a lower body mass index (BMI) – might be expected to be associated with lower risk. But the researchers report that this link has not been found in studies of British vegetarians.

This large study aimed to investigate different patterns of vegetarian diet and used the most appropriate study design for doing so.

The main limitation with this type of study, however, is that a range of other factors may be influencing any links seen, and removing their effect is difficult.

It is therefore difficult to prove definite cause and effect, although the use of a Seventh Day Adventist cohort should have removed some of these factors.

 

What did the research involve?

This study was a large prospective cohort of North American Seventh Day Adventists called The Adventist Health Study 2 (AHS-2), which is said to contain a substantial proportion of vegetarians. Almost 100,000 people were recruited between 2002 and 2007.

After excluding people who couldn't be linked with cancer registries, those who reported having had cancer in the past, those aged under 25, and those with other missing or improbable questionnaire data, the researchers had a total of 77,659 people eligible for the study. On average, participants were in their late 50s.

Dietary information was obtained from a food frequency questionnaire. Using this information, people were assigned into five dietary patterns:

  • vegan – consumption of eggs and dairy, fish and all other meats less than once a month
  • lacto-ovo vegetarians – consumption of eggs and dairy one or more times a month, but fish and all other meats less than once a month
  • pescovegetarians – consumption of fish one or more times a month, but all other meats less than once a month
  • semi-vegetarians – consumption of non-fish meats one or more times a month and all meats combined (fish included) one or more times a month, but a maximum of once per week
  • non-vegetarians – consumption of non-fish meats one or more times a month and all meats combined (fish included) more than once a week

Cancer outcomes were found through linkage to state cancer registries. They also sent participants two-yearly questionnaires asking about cancer diagnoses.

Various confounding factors taken into account in the analyses included age, gender, ethnicity, BMI, educational level, medical and reproductive history, medication, family history of bowel disease or cancer, smoking, alcohol consumption, and exercise.

In many of their analyses, the researchers combined the four vegetarian groups and compared them with the non-vegetarians. In other analyses, they looked at each vegetarian group separately.

 

What were the basic results?

Over an average follow-up period of 7.3 years, there were 490 cases of bowel cancer (including cancers of the colon or large bowel and rectum), with an incidence of 86 cases per 100,000 person years of follow-up.

In the fully adjusted model, compared with non-vegetarians, the four vegetarian dietary patterns combined were associated with a reduced risk of bowel cancer (hazard ratio [HR] 0.79, 95% confidence interval [CI] 0.64 to 0.97). 

Looking at the vegetarian dietary patterns separately compared with non-vegetarian diets, only pescovegetarians had a significantly reduced risk of bowel cancer (HR 0.58, 95% CI 0.40 to 0.84). The risk reductions were not significant for the other patterns (vegan, lacto-ovo vegetarians or semi-vegetarians).
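
For readers unfamiliar with hazard ratios, the figures above can be read, as a rough illustration rather than a calculation from the paper, as approximate percentage reductions in the rate of bowel cancer relative to non-vegetarians:

```python
# Illustrative sketch: a hazard ratio (HR) below 1 implies a lower event rate,
# and the approximate percentage reduction is (1 - HR) x 100. Confidence
# intervals are ignored here for simplicity.

def percent_reduction(hazard_ratio):
    return (1 - hazard_ratio) * 100

print(round(percent_reduction(0.79)))  # ~21% lower rate - all four vegetarian patterns combined
print(round(percent_reduction(0.58)))  # ~42% lower rate - pescovegetarians versus non-vegetarians
```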

 

How did the researchers interpret the results?

The researchers concluded that: "Vegetarian diets are associated with an overall lower incidence of colorectal cancers.

"Pescovegetarians in particular have a much lower risk compared with non-vegetarians. If such associations are causal, they may be important for primary prevention of colorectal cancers."

 

Conclusion

This prospective cohort study of a large group of Seventh Day Adventists has examined the links between vegetarian dietary patterns and the development of bowel cancer.

Over seven years of follow-up, it found links between any type of vegetarian pattern overall and a reduced risk of bowel cancer. But when looking at specific sub-groups of vegetarian diet separately, the study only found a statistically significant risk reduction for the pescovegetarian pattern.

This study's strengths are its large sample of almost 80,000 adults, its linkage with cancer registries to identify cancer outcomes, and its adjustment of the analyses for a wide range of potential confounders.

However, there are important points to bear in mind:

  • Care should be taken before leaping to the conclusion that just eating fish reduces the risk of bowel cancer. The definitions for all four of the vegetarian dietary patterns were quite broad and non-specific. For example, pescovegetarian was defined as consumption of fish one or more times a month, but all other meats less than once a month. This could still encompass a wide range of dietary patterns with variable amounts (and types) of fish, as well as other food groups, such as fruit, vegetables, grains and dairy. It also does not, as the media suggests, exclude people who ate meat – these people just reported eating it less frequently.
  • With food frequency questionnaires, it is also possible people provided inaccurate estimates of their consumption of different foods, so they could have been categorised incorrectly.
  • Diet was only assessed once, at the start of the study, so we don't know whether participants' diets are representative of lifelong consumption patterns.
  • Although the researchers adjusted for many potential confounders, because these were based on assessment at the start of the study only, it is possible the influence of these factors has not been fully accounted for – for example, people's tobacco and alcohol consumption or exercise levels may change. Other unmeasured health or lifestyle factors could also be having an influence.
  • The study involved a very specific population group of North American Seventh Day Adventists, who may have distinct health and lifestyle characteristics. This could mean the results do not necessarily apply to other population groups with different characteristics.

This study will contribute to the body of evidence on dietary risk associated with different food types. But on its own it does not prove that fish consumption decreases the risk of bowel cancer.

The World Cancer Research Fund (WCRF), which funded the study, carries out regular reviews of the evidence on risk factors contributing to cancer.

Its last review of bowel cancer was in 2011, and found the evidence on the relationship between fish and bowel cancer risk at that time was limited and inconclusive.

The WCRF will doubtless consider this and any other new studies when it next updates its review, and deliberate on whether this is enough to change the conclusions.

The WCRF currently advises that factors such as the consumption of red and processed meat, alcohol intake, and being overweight or obese are associated with an increased risk of bowel cancer. High dietary fibre, garlic, high-calcium diets and increased physical activity are associated with a decreased risk, they say.

Read more about how you can reduce your risk of bowel cancer.

In England, the NHS offers a bowel cancer screening programme for adults aged 60 to 74.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Forget being a veggie - it's healthier to be a pescetarian: Eating fish but not meat halves the risk of developing bowel cancer. Mail Online, March 9 2015

Links To Science

Orlich MK, Singh P, Sabaté J, et al. Vegetarian Dietary Patterns and the Risk of Colorectal Cancers. JAMA Internal Medicine. Published online March 9 2015

Categories: NHS Choices

Is education the best form of teen contraception?

NHS Choices - Behind the Headlines - Mon, 09/03/2015 - 11:50

"Getting a good education could be the best form of contraception for teenagers," The Independent reports after a study of recent data from England found an association between improved GCSE results and lower rates of teenage pregnancy.

Researchers looked at data from England on teenage pregnancy rates between 2004 and 2012. They were particularly interested in whether the increasing use of long-acting reversible contraceptives (LARCs), such as implants or injections, was associated with reduced teen pregnancy rates. It wasn't.

What they did find is a link between educational achievement – specifically, more teenagers getting at least five GCSEs – and reduced rates of teenage pregnancy. The reason for the link between higher educational attainment and reduced pregnancies was not specifically assessed.

This study cannot tell us whether individual teenagers were having sex or using contraceptives or not. It does not tell us, for example, that LARCs are not effective at preventing pregnancy for the individual using them – they are actually known to be highly effective.

 

Where did the story come from?

The study was carried out by researchers from Nottingham University. No sources of funding were reported.

It was published in the peer-reviewed journal, Social Science and Medicine.

The Independent's reporting of the study is accurate, although the headline that good education is the best form of contraception shouldn't be interpreted as meaning that better sex education in schools is the key, as this study didn't look at this issue specifically.

 

What kind of research was this?

This was an ecological study looking at trends in teenage pregnancy rates in England and factors that might contribute towards this.

The researchers say teenage pregnancy rates in England have dropped in recent years, and a number of factors have been suggested as having potentially contributed.

These include the promotion of long-acting reversible contraceptives (LARCs), such as contraceptive implants, injections, and intrauterine devices (IUDs, or "the coil"), for young people. Once in use, these methods do not rely on a woman remembering to use them or having to use them correctly.

Increasing levels of education – particularly in deprived areas – could contribute to this trend by raising the consequences ("opportunity cost") of teenage pregnancy. In other words, young women who are in education are more likely to appreciate the downside of becoming pregnant during their teenage years.

This type of study can identify related patterns of changes in the population and possible contributing factors. The approach is often used to look at the impact of a particular change in policy, for example, or to look for reasons for "real-world" changes. But because it does not look at the behaviour and outcomes of individuals, this type of study cannot definitely link the changes to each other.

 

What did the research involve?

The researchers obtained data on teenage pregnancy (conception), abortion and birth rates in almost 100 areas in England from 2002 to 2014. They also looked at the patterns of LARC use, educational attainment and other factors over the same period to see if the patterns could be related.

Data came from various sources, including:

  • publicly funded family planning clinics in 97 areas in England from 2004 to 2012
  • the Office for National Statistics – conception, abortion and birth rates, and unemployment rates for women under 20
  • the Department of Health – teenage women provided with LARCs at NHS community contraceptive clinics, the number of family planning clinic sessions aimed at young people, the rate of children aged 15 to 17 in care, and the presence of pharmacy schemes to provide emergency birth control
  • Public Health England – rates of GP prescriptions for LARCs and the rate of under-18s admitted to hospital with alcohol-related conditions
  • the Department of Education – GCSE results and non-white teenage population information
  • teenage pregnancy co-ordinators – the presence of pharmacy schemes to provide emergency birth control

The researchers used statistical analysis to look at whether the areas in England that promoted LARCs the most had greater reductions in teenage pregnancy, and similarly whether the other factors had an effect.

 

What were the basic results?

The researchers found that:

  • teenage pregnancy rates in England started to drop in 2008 and continued to decrease up to 2012
  • the percentage of teenagers using LARCs more than doubled from 6% in 2004 to around 15% in 2012, while the proportion provided with condoms reduced by more than 10%
  • the percentage of 16- and 17-year-olds staying in full-time education increased significantly
  • the proportion of non-white individuals aged 15 to 17 years increased from just over 11% in 2004 to more than 16% in 2012
  • alcohol use in the past week among 11- to 15-year-olds decreased from 23% in 2004 to 10% in 2012

In their statistical analysis, the researchers found that although the promotion of LARCs was associated with a slightly reduced level of teenage pregnancies, this link was not large enough to be statistically significant.

Changes in alcohol consumption among teenagers were also not found to be associated with changes in teenage pregnancy rates. There was a statistically significant association between better educational performance and reduced teenage pregnancy.

According to the researchers' statistical model, a 10% increase in the proportion of teenagers receiving five or more GCSE qualifications at grade C or above was associated with an 8% reduction in teenage pregnancy.

They say that as the proportion of teenagers achieving these GCSE grades has increased by about 50% since 2004, this could explain a lot of the decrease seen in teenage pregnancies in this period.
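
To illustrate that reasoning (a rough sketch based on the figures above, not the paper's actual model), the roughly 50% rise in GCSE attainment can be scaled up from the reported 10%-to-8% relationship:

```python
# Rough illustration only: scaling the reported "a 10% rise in attainment goes
# with an 8% fall in teenage pregnancy" up to the ~50% rise seen since 2004.
# Treating the 50% rise as five 10% steps is an assumption; the paper's own
# statistical model may handle this differently.

linear_fall = (50 / 10) * 8                    # 40% fall, simple proportional scaling
compounded_fall = (1 - (1 - 0.08) ** 5) * 100  # about 34% fall if each step compounds
print(round(linear_fall), round(compounded_fall))
```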

In addition, a 10% increase in the non-white teenage population was associated with about a 2% reduction in teenage pregnancies. The trends in an increasing non-white teenage population and improving GCSE attainment were similar, suggesting the two factors could be related.

The researchers found broadly similar results if they did their analyses in different ways – for example, if they looked at under-16s and older teenagers separately. In these analyses, there was some evidence that the promotion of LARCs had more of an effect in areas with the poorest educational outcomes, but the effects were still small.

 

How did the researchers interpret the results?

The researchers concluded that the promotion of LARCs has had a generally small and non-significant impact on teenage pregnancy rates in England.

However, they say that, "Improvements in educational achievement and, to a lesser extent, increases in the non-white proportion of the population, are associated with large and statistically significant reductions in teenage pregnancy."

 

Conclusion

This ecological study has found that the reduction in teenage pregnancy rates in England shows stronger links with increasing educational attainment than with the promotion of long-acting reversible contraceptives (LARCs).

This research aims to identify factors contributing to a real-world phenomenon (reduction in teenage pregnancies) by looking at trends in these factors and this outcome over time, and in different areas.

While this may uncover potential links at a population level, the study cannot definitively say that this is cause and effect, as other unmeasured factors may be playing a role.

For some factors, the study had to use measures that may not fully capture their effects. For example, alcohol use was assessed using the rate of hospital admissions with alcohol-related causes for under-18s, which is unlikely to fully capture alcohol use.

In addition, the study does not have data on the behaviours and outcomes of individual teenagers. This means the research is not able to say, for example, whether individual teenagers were having sex or using contraceptives or not.

The results should also not be interpreted as meaning that LARCs are not effective at preventing pregnancy – they are actually known to be highly effective.

It is also difficult to interpret the reasons behind the links between higher educational attainment and reduced pregnancies. It's possible the link is being influenced by confounding factors (such as socioeconomic and lifestyle differences) and is not necessarily a direct effect of education.

If it is an effect of education, it is also not possible to say based on this study whether a particular curriculum or educational content has an effect, as this was not looked at. For instance, these findings shouldn't be interpreted as meaning that better sex education in schools is the key.

The researchers note that their study's findings need to be confirmed in other studies in other settings using other study designs, such as randomised controlled trials.

This type of research can give an idea of what effect new policies have on outcomes in real-world settings, and may suggest ways of improving outcomes. But any suggested approaches also need to be tested to identify their effects.

Still, the suggestion that better education for young people could also lead to less teenage pregnancy is a welcome one.

For more information about the range of contraceptive methods available, visit our Contraception guide.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Is a good education the best form of contraception? Teenage pregnancies fall as exam results improve. The Independent, March 8 2015

Links To Science

Girma S, Paton D. Is education the best contraception: The case of teenage pregnancy in England? Social Science and Medicine. Published online February 25 2015

Categories: NHS Choices

How alcohol intake can change over a lifetime

NHS Choices - Behind the Headlines - Mon, 09/03/2015 - 11:00

"Binge drinking peaks at 25 … but by middle age he's drinking daily," the Mail Online reports. In what has been described as the first of its kind, a new study has tried to track the average adult drinking pattern over the course of a lifespan.

Researchers combined information from nine studies, following almost 60,000 people, to model how average alcohol intake changes over a lifetime in men and women in the UK.

It found that, in men, alcohol consumption rose considerably in adolescence and peaked at around 20 units per week (around six pints of higher-strength lager) at age 25, before decreasing. Drinking daily or on most days of the week became more common in mid-life to older age. A similar pattern was seen in women, but they drank less (around seven to eight units a week).
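
As a rough illustration of the "six pints" comparison (a sketch, not a calculation from the study, and the lager strength used here is an assumption), UK units can be estimated from a drink's volume and strength:

```python
# Illustrative sketch: one UK unit is 10ml (about 8g) of pure alcohol, so
# units = volume (ml) x ABV (%) / 1000. The 5.2% ABV assumed here for a
# "higher-strength" lager is an example figure, not taken from the study.

def uk_units(volume_ml, abv_percent):
    return volume_ml * abv_percent / 1000

units_per_pint = uk_units(568, 5.2)   # about 3 units in a pint
print(round(6 * units_per_pint, 1))   # about 18 units - in the region of the 20-unit weekly peak
```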

The authors note that they were not able to obtain complete information on drinking patterns, as all the studies collected information in different ways. This means that while the study can tell us about average behaviours, it can’t tell us whether people were binge drinking or not.

While there is more to be learned in this area, the study gives an insight into estimated average alcohol consumption over time in the UK.

 

Where did the story come from?

The study was carried out by researchers from University College London and other universities in the UK. Two of the researchers were funded by the European Research Council, and the studies they utilised data from were funded by the Medical Research Council, British Heart Foundation, Stroke Association, National Heart Lung and Blood Institute, and National Institute on Aging.

The study was published in the peer-reviewed journal BMC Medicine on an open-access basis, so it is free to read online or download as a PDF.

All of the UK’s news sources refer to "binge drinking" when talking about the results, but the study did not look at binge drinking specifically. It looked at total weekly alcohol consumption and frequency of drinking alcohol, but did not assess how much people drank on a single occasion. While it is plausible that, in some cases, the average of 20 units per week seen in men in their twenties was drunk during a single Friday or Saturday night, it’s not possible to know this for certain from the data presented.

In addition, there was a level of alarm in the reporting about the elderly drinking daily, but the average consumption per week in these age groups – particularly in women – was within the recommended UK drinking limits.

 

What kind of research was this?

This research analysed data from different UK cohort studies, which looked at how people’s alcohol consumption changed over their lifespans.

This is the best way to assess this question, although, ideally, the same people would be followed throughout their lifespan. This study used overlapping data from different people and combined it, to assess overall patterns.

 

What did the research involve?

The researchers used data from nine prospective cohort studies, each providing at least three measurements of alcohol consumption for each individual. These studies included 59,397 people in total, and had data from people ranging in age from 15 to more than 90. They turned data from these studies into statistical models, to predict patterns of alcohol consumption by age.

The studies covered data collected from 1979 to 2013, in individuals born between 1918 and 1973. The nine cohort studies included:

  • three nationally representative cohorts recruited at birth
  • three cohorts born 20 years apart, representative of the West of Scotland
  • a representative cohort of older people in England
  • a cohort of civil servants in London aged 35 to 55 years at recruitment
  • a population-based cohort of men aged 45 to 59 years from South Wales

Average weekly alcohol consumption was extracted from each study and expressed in UK units (one unit = eight grams of ethanol). Frequency of alcohol consumption was also extracted and classed as:

  • none in past year
  • monthly/special occasions
  • weekly – infrequent (not daily or almost daily)
  • weekly – frequent (daily or almost daily)

Statistical models were generated for men and women separately, and the researchers tested how well their models fit the observed data.

 

What were the basic results?

The researchers found that average alcohol consumption rose considerably in adolescence and peaked at around 20 units per week at age 25. After this, it decreased, until it started to level off in mid-life, and then finally decreased again from age 60, to around five to 10 units per week.

A similar pattern was seen for women, but they had lower levels of consumption, with a peak of around seven to eight units a week and consumption of two to four units in those aged 70 and over.

Drinking daily or on most days of the week became more common in mid-life to older age, particularly in men, with half of men drinking this frequently at this time of life in one cohort. Drinking this frequently reduced in very old age.

 

How did the researchers interpret the results?

The researchers concluded that this is the “first attempt to synthesise longitudinal data on alcohol consumption from several overlapping cohorts to represent the entire life course, and illustrates the importance of recognising that this behaviour is dynamic”. They say that this sort of information may help to design better strategies to reduce alcohol consumption. They also say that it suggests studies looking at the effects of alcohol consumption which only assess intake at one time point should be treated with caution.

 

Conclusion

This research combined information from nine studies, to model how alcohol intake changes over a lifetime in men and women in the UK, and is reportedly the first to do this.

There are some points and potential limitations to note:

  • The authors note that the different studies captured information on drinking in different ways, and although they tried to standardise this, they were not able to obtain complete information on drinking patterns.
  • The data on which the models are based come from different periods, when alcohol consumption habits and strength of alcohol available may have differed.
  • The authors looked at this, and say that the patterns seen in different periods appeared similar, although the rate at which consumption changed differed slightly.
  • There were also some differences in patterns seen between cohorts that were not due to differences in the period. For example, older women in a Scottish cohort had lower consumption than women of a similar age in the cohort of civil servants in London, despite this data being collected in a similar period. Other factors, such as socioeconomic status, could be contributing to these differences.
  • Not all cohorts covered the same age range, so although the total number of people being analysed was large (almost 60,000), each individual age would have a smaller number of people.
  • It was not clear how the nine cohorts were identified, and whether there were others available that were missed.

While there is more to be learned in this area, the study gives an insight into estimated average alcohol consumption over time in the UK.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

How a man's alcohol habits change: Binge drinking peaks at 25, tails off in his 30s, but by middle age he's drinking daily. Mail Online, March 6 2015

Binge drinking in men peaks at 25 before every day boozing starts later in life, says study. The Independent, March 6 2015

Men grow out of binge drinking… only to become daily boozers in older age. Daily Express, March 7 2015

Links To Science

Britton A, Ben-Shlomo Y, Benzeval M, et al. Life course trajectories of alcohol consumption in the United Kingdom using longitudinal data from nine cohort studies. BMC Medicine. Published online March 6 2015

Categories: NHS Choices

Hot flushes: how to cope

NHS Choices - Live Well - Mon, 09/03/2015 - 09:58

Hot flushes are the most common symptom of the menopause but there are a range of medical treatments and self-help techniques to beat the heat.

Not all women experience hot flushes going through the menopause, but most do. Three out of every four menopausal women have hot flushes. They’re characterised by a sudden feeling of heat which seems to come from nowhere and spreads through your body. They can include sweating, palpitations, and a red flush (blushing), and vary in severity from woman to woman.

Some women only have occasional hot flushes which don’t really bother them at all, while others report 20 hot flushes a day that are uncomfortable, disruptive and embarrassing.

Hot flushes usually continue for several years after your last period. But they can carry on for many, many years – even into your 70s or 80s. They’re probably caused by hormone changes affecting the body’s temperature control.

Causes of hot flushes

Most women going through a natural menopause experience hot flushes. But there are other causes of hot flushes, including:

  • Breast cancer treatment – according to Cancer Research UK, seven out of 10 women who’ve had breast cancer treatment have hot flushes, and they tend to be more severe and frequent than those of women going through a natural menopause. This is because chemotherapy and tamoxifen tablets reduce oestrogen levels.
  • Prostate cancer treatment – men having treatment for prostate cancer can also have hot flushes, sometimes for years. Hormone treatment causes hot flushes in men by lowering the amount of testosterone in their body. Read advice for men with prostate cancer on how to tackle hot flushes.

What does a hot flush feel like?

Women often describe a hot flush as a creeping feeling of intense warmth that quickly spreads across the whole body and face, ‘right up to your brow’, and lasts for several minutes. Others say the warmth is similar to the sensation of being under a sun bed, feeling hot ‘like a furnace’, or as if someone had ‘opened a little trap door in my stomach and put a hot coal in’.

Watch these videos where women describe what a hot flush feels like.

Hot flush triggers

Hot flushes can happen without warning throughout the day and night, but there are well-known triggers, including woolly jumpers, especially polo necks; feeling stressed; drinking alcohol or coffee; or eating spicy foods.

Treatments for hot flushes

Many women learn to live with menopause-related hot flushes, but if they’re really bothering you and interfering with your day-to-day life, talk to your doctor about treatments that may help.

The most effective treatment is HRT, which usually gets rid of hot flushes completely. But other medicines have been shown to help, including vitamin E supplements, some antidepressants, and a drug called gabapentin, which is usually used to treat seizures.

Note that doctors recommend that you don’t take HRT if you've had a hormone dependent cancer such as breast or prostate cancer.

Here’s more information on help for hot flushes from your GP.

Complementary therapies for hot flushes

Women often turn to complementary therapies as a ‘natural’ way to treat their hot flushes.

There have been small studies indicating that acupuncture, soy, black cohosh, red clover, pine bark supplement, folic acid, and evening primrose oil may help reduce hot flushes.

However, the research is patchy, the quality of the products can vary considerably, and the long-term safety of these therapies isn't yet known.

It’s important to let your doctor know before you take a complementary therapy because it may have side effects (for example liver damage has been reported with black cohosh) or mix badly with prescription medicines (red clover is unsuitable for women taking anticoagulants).

Be aware, too, that soy and red clover contain plant oestrogens so may be unsafe for women who have had breast cancer.

Read more about complementary therapies and whether they work.

Self help remedies for hot flushes

Try these everyday tips to ease the overheating:

  • cut out coffee and tea, and stop smoking
  • keep the room cool, use a fan – electric or handheld – if necessary
  • if you feel a flush coming on, spray your face with a cool water atomiser or use a cold gel pack (available from pharmacies)
  • wear loose layers of light cotton or silk clothes so you can easily take some clothes off if you overheat
  • have layers of sheets on the bed rather than a duvet so you can remove them as you need to and keep the bedroom cool
  • cut down on alcohol
  • sip cold or iced drinks
  • have a lukewarm shower or bath instead of a hot one
  • change the timing of your medicine. If tamoxifen is causing your hot flushes, Cancer Research UK suggests taking half your dose in the morning and half in the evening

Is a hot flush anything to worry about?

Hot flushes are generally a harmless symptom of the menopause. But very occasionally they may be a sign of a blood cancer or carcinoid (a type of neuroendocrine tumour).

See your doctor if, in addition to hot flushes, you've been unwell with, for example, fatigue, weakness, weight loss or diarrhoea.

Now read about the best foods to eat during the menopause.

Read other articles about the menopause.

Categories: NHS Choices

Being optimistic after heart attack may help with recovery

NHS Choices - Behind the Headlines - Fri, 06/03/2015 - 12:30

"It's true! Optimists do live longer," is the slightly misleading headline from the Mail Online.

The study it reports on actually looked at the effects of optimism on physical and emotional health in 369 people recovering from a heart attack or unstable angina (angina that does not respond to medication), rather than overall lifespan.

The participants were assessed for their level of optimism, depressive symptoms and physical health. They had a repeat assessment after 12 months.

The study also looked at whether participants were likely to have a major cardiac event (such as a heart attack or stroke) in the next 46 months.

Optimism alone did not have an effect on whether people had another major cardiac event, but a significant effect was seen when the researchers looked at levels of optimism and symptoms of depression in combination.

People who were both optimistic and free of depression had half the risk of having a major cardiac event compared to people with low optimism and some symptoms of depression.

This effect could be due to issues of compliance. People who feel they have something to live for are probably more likely to carry out recommended lifestyle changes, such as quitting smoking, as was seen in this study.

The researchers now hope to find ways to improve the optimism of people at risk of heart attacks.

 

Where did the story come from?

The study was carried out by researchers from University College London, the National University of Ireland, the Karolinska Institute in Stockholm and the University of London. It was funded by the British Heart Foundation.

The study was published in the peer-reviewed medical journal Psychosomatic Medicine and is available on an open-access basis so it is free to read online.

The Mail Online’s and the Daily Express’s reporting was accurate, but both headlines were potentially misleading. The Mail’s “Optimists live longer” headline is unsupported, as the study did not measure the difference in life expectancy between pessimists and optimists.

The Daily Express’s headline, "Keep positive to live longer: It cuts risk of heart attack by half, say experts", fails to make clear that this study was in people recovering from a heart attack or unstable angina.

The Mail did include an important quote from Dr Mike Knapton, associate medical director at the British Heart Foundation, who said: "The next steps for this research would be to show psychotherapy like cognitive behavioural therapy to improve optimism can improve the outcomes for pessimistic people."

 

What kind of research was this?

This was a cohort study that aimed to assess the impact of optimism on recovery after having an acute coronary syndrome (ACS). This term includes heart attacks and unstable angina. As optimism influences a person’s behaviour, the researchers wanted to see what effect this had on physical health, risk of having a further major cardiac event and depressive symptoms. As this was a cohort study it cannot prove that optimism alone directly causes better outcomes, as many other factors may be involved in the link.

 

What did the research involve?

Researchers assessed the level of optimism in 369 people after an ACS, then grouped them into low, medium and high categories and compared their health outcomes after 12 months. They also analysed their medical records for an average of 46 months.

The data analysed came from two prospective studies carried out at St George’s Hospital in London. People were invited to participate if they had suffered an ACS between December 2001 and August 2004 (for the first study) or between June 2007 and September 2008 (for the second). The first study group was interviewed in hospital and completed questionnaires a week to 10 days after the ACS. The second group were assessed at home, on average 21 days after the ACS.

A follow-up assessment was made by telephone and questionnaires 12 months later to measure physical health status, depressive symptoms, smoking, physical activity, and fruit and vegetable consumption. Hospital medical records were used over the next 46 months on average to determine if they had any further major cardiac event, including death due to cardiovascular disease, heart attack or unstable angina.

People were eligible for the study if they were over the age of 18 and did not have another condition that could affect symptom presentation or mood (giving examples such as cancer or unexplained anaemia).

Optimism was assessed using a revised version of the "Life Orientation Test". In this test, the person is asked to rate how strongly they agree or disagree with statements such as "in uncertain times, I usually expect the best".

Depressive symptoms were assessed using the standardised Beck Depression Inventory. This provides a score of between 0 and 63:

  • scores of up to 10 are considered to be normal
  • 11 to 16 mild mood disturbance
  • 17 to 20 borderline clinical depression
  • 21 to 30 moderate depression
  • 31 to 40 severe depression
  • over 40 extreme depression

In this study, the researchers used a cut-off of 10 or more to indicate clinically significant depressive symptoms.
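
As a simple illustration of how the scale above might be applied (a sketch, not the researchers' own code), the score bands and the study's cut-off could be expressed as follows. Note that, as described in the text, a score of exactly 10 sits both in the "normal" band and at the cut-off for clinically significant symptoms:

```python
# Illustrative sketch of the Beck Depression Inventory bands listed above and
# the 10-or-more cut-off the study used for "clinically significant" symptoms.
# The band labels follow the article; the code itself is not from the study.

def bdi_band(score):
    if score <= 10:
        return "normal"
    if score <= 16:
        return "mild mood disturbance"
    if score <= 20:
        return "borderline clinical depression"
    if score <= 30:
        return "moderate depression"
    if score <= 40:
        return "severe depression"
    return "extreme depression"

def clinically_significant(score):
    return score >= 10

print(bdi_band(12), clinically_significant(12))  # mild mood disturbance True
```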

Physical health status was assessed using the physical health section of the 12-Item Short Form Health Survey (SF-12). This is measured on a scale of 0 to 100, with higher scores indicating better health, and covers factors such as limitations in physical function, role fulfilment and pain.

The data was analysed adjusting for age, sex, ethnicity, socioeconomic status, history of depression and Global Registry of Acute Coronary Events (GRACE) risk score, which is a measure of the clinical risk of having a further cardiac event.

 

What were the basic results?

Further major cardiac event

After adjusting for the confounding factors, optimism alone was not significantly associated with further risk of a major cardiac event. However, people with low optimism and clinically significant depressive symptoms were more than twice as likely to have a further cardiac event as people with high optimism and low depressive symptoms (odds ratio (OR) 2.56, 95% confidence interval (CI) 1.16 to 5.67).

Depressive symptoms

After 12 months, optimistic people were 18% less likely to have depressive symptoms (OR 0.82, 95% CI 0.74 to 0.90).

Physical health

Optimism was not related to physical health status score immediately after ACS, but higher scores were found after 12 months. People classed as having low or medium optimism had scores of 50 on the SF-12, whereas people with high optimism scored 54.6 (range 0 to 100).

Smoking

After 12 months, 47.9% of people with low optimism were still smoking compared to 15.3% of people with high optimism.

Fruit and vegetable intake

Twice as many highly optimistic people were eating five or more portions of fruit and vegetables a day at 12 months compared with people with low optimism (40% compared to 20%).

Physical activity

There was no association between optimism and changes in physical activity.

How did the researchers interpret the results?

The researchers concluded that "optimism predicts better physical and emotional health after ACS" and that "measuring optimism may help identify individuals at risk". They believe that "pessimistic outlooks can be modified, potentially leading to improved recovery after major cardiac events".

Conclusion

This well-designed study found that people who have a higher level of optimism are less likely to smoke or have depressive symptoms, more likely to be eating five portions of fruit and vegetables a day, and have a slightly higher physical health score. It also found that people who have low optimism and depressive symptoms are more than twice as likely to have a major cardiac event than people with high optimism and no depressive symptoms.

In many ways, the overall finding seems plausible: a greater sense of wellbeing could translate into positive lifestyle changes, which in turn could be linked to a lower risk of subsequent cardiac events. The researchers took into account various confounding factors that could be influencing the link, such as level of physical illness after the first ACS and history of depression.

However, a variety of things could influence how positive, or not, a person feels after a heart attack. Though the study attempted to exclude certain conditions that may have influenced mood and symptoms, it is unclear whether it was able to capture a full picture of each person’s starting health and functional status.

Other unmeasured things that can have an important influence on a sense of wellbeing and recovery after serious illness include interpersonal relationships and the support of partners, family and friends. For example, compare an isolated person living alone with someone who lives with others and has a wide and active social network.

Overall, despite the researchers' best attempts to reduce the likelihood of confounding, it is still possible that other factors are involved in the complex link between optimism and future cardiac events.

There may also be some bias towards more optimistic people taking part in the study as it relied on patients agreeing to be interviewed and fill out questionnaires. It is possible that people with very low optimism may have refused to participate as there would be "no point".

The researchers now hope to find ways to improve the optimism of people at risk of heart attacks.

People with a reason to live are probably more likely to take steps to live longer. Read more advice about how to be happier.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

It's true! Optimists DO live longer: Having a positive attitude lowers the risk of a heart attack. Mail Online, March 5 2015

Keep positive to live longer: It cuts risk of heart attack by half, say experts. Daily Express, March 6 2015

Links To Science

Ronaldson A, Molloy G, Wikman A, et al. Optimism and Recovery After Acute Coronary Syndrome: A Clinical Cohort Study. Psychosomatic Medicine. Published online March 3 2015

Categories: NHS Choices

Does a wife's illness lead to divorce?

NHS Choices - Behind the Headlines - Fri, 06/03/2015 - 11:30

"Husbands more likely than wives to seek divorce when partner falls sick, says study," the Daily Mail reports after a US study tracked around 2,700 married older couples for 20 years to see how chronic illness impacted on their relationships.

Researchers specifically looked at the effects of one of four serious illnesses on the relationship: any type of cancer (except skin cancer), heart disease, lung disease or stroke.

Unsurprisingly, the onset of one of these illnesses in either spouse was associated with an increased risk of widowhood at a subsequent assessment.

However, the study also found the onset of serious illness in the wife was associated with a small increased risk (an estimated 6%) of divorce. This link wasn't found when the husband had the illness.

But this study cannot show a direct causative link. There are a wide variety of unmeasured factors that are likely to be influencing any association between illness and divorce.

That said, it would not be surprising that having to care for a person with a chronic illness could place a strain on some couples' relationships.

There is a wide range of support for people suddenly thrust into the role of caring for others. See our Care and support guide for more information.

And if you feel your relationship with your partner is having problems, whatever your respective health issues, you may benefit from couples therapy.

 

Where did the story come from?

The study was carried out by two researchers from Iowa State University and the University of Indianapolis in the US, and was funded by the US National Institute on Aging.

It was published in the peer-reviewed Journal of Health and Social Behaviour.

The Daily Mail's reporting of the study is broadly accurate, but it does not touch on the various limitations of the study.

The piece contains quotes from the lead author of the study, Dr Amelia Karraker, who speculates that some men may struggle to adapt to the role of caregiver, while some women may think that, "You're doing a bad job of caring for me or I wasn't happy with the relationship to begin with, and I'd rather be alone than in a bad marriage". Both notions seem plausible, at least for some couples, but have not been proven by the study in question. 

 

What kind of research was this?

This study used data collected from a sample of married people taking part in the Health and Retirement study, an ongoing nationally representative cohort study of Americans over the age of 50 that has collected data every two years from 1992 onwards.

The researchers looked at the relationship between serious illness (cancer, heart or lung disease, or stroke) and the subsequent dissolution of the marriage, either through divorce or widowhood.

The authors discuss how the literature has often linked marital status to health and wellbeing, while divorce and widowhood can be followed by declines in physical and mental health.

Whether ill health may have a direct effect on marital status has not been studied as much, and this is what this study aimed to focus on. The researchers also wanted to see whether the relationship between the health of the spouse and divorce may vary by the nature of the illness or by gender.

The main limitation of a study like this is it can only find associations – it cannot prove cause and effect. There may be a wide variety of unmeasured factors involved in the link, especially when you are dealing with something as complex as human relationships. 

 

What did the research involve?

The study used data collected in waves 1 to 10 of the Health and Retirement study between 1992 and 2010. The researchers looked at people who were married at the start of the study, and excluded marriages where either spouse already had serious physical illness, as they were specifically interested in the onset of illness as a risk factor for dissolution.

They also excluded couples who had divorced or been widowed by the second wave of assessments in 1994, as it could not be known whether illness had preceded the end of these marriages. After these exclusions, they had a final sample of 2,701 marriages.

The main outcome of interest was whether marriage in wave 1 (1992) was followed by dissolution as the result of divorce or widowhood in a subsequent wave (beyond 1994).

They then wanted to see whether this was preceded by the onset of serious physical illness in either spouse. The researchers focused on four general categories of illness – cancer, heart disease, lung disease and stroke – as they say these form a lot of the chronic disease burden in the US.

In their analysis, they included the potential confounding factors (collected in wave 1) of age, education, ethnicity, socioeconomic status, marital duration, and initial marital satisfaction (assessed by the question, "Are you very satisfied, somewhat satisfied, about evenly satisfied and dissatisfied, somewhat dissatisfied, or very dissatisfied with your marriage?"). 

 

What were the basic results?

Over the 18-year study period, marriages in this sample of people aged over 50 more often ended in divorce (33%) than in widowhood (24%).

Unsurprisingly, increasing age was associated with a greater likelihood of physical disease onset in both spouses, with husbands experiencing higher illness rates than wives.

The researchers' analysis found the onset of illness in the husband was not associated with subsequent divorce. However, the onset of illness in the wife was associated with a 6% higher probability of divorce at a subsequent assessment. This represented a significant gender difference.

When looking at the relationship between illness and subsequent widowhood, there was no significant gender difference. Illness in the husband was associated with a 5% higher probability of the wife being a widow in a subsequent assessment. The respective figure for illness in a wife was 4%.

When the researchers carried out sub-analyses by illness, neither the husband's nor the wife's cancer or heart disease was associated with marital dissolution. There was some suggestion that a wife's lung disease and husband's stroke were associated with an increased risk of subsequent divorce, but these were not statistically significant.

 

How did the researchers interpret the results?

The researchers concluded that only the onset of illness in the wife is associated with an increased risk of divorce, but the onset of illness in either husband or wife is associated with an increased risk of widowhood.

They say their findings "suggest the importance of health as a determinant of marital dissolution in later life via both biological and gendered social pathways".

 

Conclusion

This US cohort study of older married couples (over the age of 50) finds links between the onset of serious illness in the wife and subsequent divorce, but the same link wasn't found with illness in the husband. Meanwhile, serious illness in either spouse was, rather unsurprisingly, associated with a higher risk of widowhood in a subsequent assessment.

This study has the strength of using a large, nationally representative dataset. However, it cannot prove direct causative links, and does not prove that wives are more likely to stick with their spouse during serious physical illness than husbands.

Though the study finds a link between illness onset and subsequent divorce, there are likely to be a wide variety of unmeasured factors involved in any link. For example, this could include:

  • personality characteristics of both the husband and wife
  • the nature of the illness – for example, the severity, prognosis, and impact on function and disability
  • the possibility that the "healthy spouse" is not the instigator of the end of the marriage – for example, the person who is ill may want to leave an unhappy marriage to be better able to cope with their illness
  • mental health and other physical illnesses in the "ill spouse"
  • physical and mental health of the "healthy spouse"
  • lifestyle, activities, social and family connections, and external support
  • the strength of the relationship between the couple

The only one of these factors this research was able to take into account, even partially, was the last: the strength of the couple's relationship. Even then, this involved a fairly crude assessment at the start of the study, asking only about the duration of the marriage and a broad question on marital satisfaction.

The research took into account a few other potential confounders (age, ethnicity, education and socioeconomic status), but as this study relied on data collected as part of a wider cohort study, it probably had limited capacity to assess any others.

Other limitations include the broad illness categories of cancer, heart disease, lung disease and stroke. As above, these categories could include a wide range of specific diseases, of varying severity and disability. It is also not known how accurate this information was.

Lastly, this study may not be applicable to populations outside the US, to younger married adults, or to unmarried people in committed relationships. So, all in all, this study does not prove that marriages last only in health and not in sickness.

Still, it does highlight the potential strain chronic conditions such as stroke can place on some relationships. People often make the mistake of assuming that supporting a partner or loved one with a chronic condition will come naturally, but this isn't always the case – it can often be hard, frustrating and upsetting work.

There is help available that can make that job easier. A good first practical step is to apply for a Carer's Assessment. This involves a discussion between you and a trained person, either from the council or another organisation that the council works with, to see what help and support, including financial support, you may be entitled to. Read more about carers' assessments.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Husbands more likely than wives to seek divorce when partner falls sick, says study. Daily Mail, March 6 2015

Couples Over 50 Are More Likely To Divorce When The Wife Gets Sick, Study Suggests. The Huffington Post, March 5 2015

Links To Science

Karraker A, Latham K. In Sickness and in Health? Physical Illness as a Risk Factor for Marital Dissolution in Later Life. Journal of Health and Social Behavior. Published online March 5 2015

Categories: NHS Choices

No proof 'alcohol will make you more gorgeous'

NHS Choices - Behind the Headlines - Thu, 05/03/2015 - 12:30

"How having just the one drink can make you look more gorgeous, according to science," The Independent reports. But the "science" turns out to be an experiment carried out under highly artificial conditions.

The headline comes from a small study looking at whether drinking alcohol makes people more physically attractive to others. It found photographs of those who had consumed a "low-dose" alcoholic drink (a large glass of wine) were rated as more attractive than images of sober individuals.

But photographs of people who went on to have a second drink were not rated as more attractive than those who drank nothing, and the apparent effect of alcohol on perceived attractiveness was only slight. 

The point of this small study is unclear, although the authors say they are interested in the relationship between alcohol and risky sexual behaviour.

If the researchers do go on to find a relationship between alcohol and risky sexual behaviour, the results would hardly be surprising.

It may well be true that a small amount of alcohol can help someone relax and therefore appear more approachable, but whether we needed a taxpayer-funded study to tell us this is debatable.

 

Where did the story come from?

The study was carried out by researchers from the University of Bristol and Macquarie University, Australia.

It was funded by a European Research Advisory Board grant. The Medical Research Council paid for the article to be published on an open-access basis.

It was published in the peer-reviewed journal Alcohol and Alcoholism, and is free to access online.

Both The Independent and the Mail Online's headlines failed to make clear the highly artificial nature of this study: it didn't involve people speed dating in a wine bar, just students looking at photos.

Both news outlets deserve some praise, however, for making it clear that the alleged effects of alcohol on attractiveness were limited to only one drink.

But the Mail's claim that the study found "wine and other alcohol can dilate pupils, bring on rosy cheeks and relax facial muscles to make a person appear more approachable" is misleading.

This was the speculation of the authors of the study, but the study itself did not look at what mechanisms might increase facial attractiveness after consuming alcohol.

 

What kind of research was this?

This study set out to examine whether alcohol consumption leads to the consumer being rated as more attractive than sober individuals.

The authors point out that alcohol consumption can cause mild flushing and also result in facial changes that may indicate changes of mood, sexual arousal or expectancy of sex, making people more attractive.

Alcohol consumption is known to be associated with sexual behaviour, particularly risky sexual behaviour, and they say it is important to understand the mechanism through which alcohol might influence such behaviours.

While previous studies have looked at whether drinking alcohol leads the consumer to rate others as more attractive, the effects on the attractiveness of the consumer have not been explored.

 

What did the research involve?

Researchers recruited 40 people aged between 18 and 30, half of them women. The participants were all heterosexual students who typically consumed between 10 and 50 units of alcohol a week (males) and between five and 35 units a week (females). They were all required to be in good physical health and not to be using illicit drugs (except cannabis).

They were asked to look at photographs of around 36 students. These students were all in a heterosexual relationship, and each partner participated in the study because the researchers say there are "strong correlations observed between the attractiveness of romantic partners".

Each volunteer had been photographed three times:

  • when sober – they had not had an alcoholic drink
  • after the consumption of 0.4 g/kg of alcohol – equivalent to a large glass of wine (250 ml) at 14% alcohol by volume for a 70kg individual (a rough check of this equivalence follows the list)
  • after the consumption of a further 0.4 g/kg of alcohol (a total dose of 0.8 g/kg of alcohol)
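
To see roughly how that equivalence works out, the dose can be converted into grams of pure alcohol and compared with the alcohol content of a 250ml glass of 14% wine. This is our own back-of-the-envelope sketch (in Python), not part of the study, and it assumes an ethanol density of about 0.789g/ml:

    # Rough check of the quoted dose equivalence (illustrative only)
    ETHANOL_DENSITY_G_PER_ML = 0.789  # approximate density of pure ethanol

    def dose_in_grams(dose_g_per_kg, body_weight_kg):
        """Alcohol dose expressed in grams for a given body weight."""
        return dose_g_per_kg * body_weight_kg

    def drink_alcohol_grams(volume_ml, abv_percent):
        """Grams of pure alcohol in a drink of a given volume and strength."""
        return volume_ml * (abv_percent / 100) * ETHANOL_DENSITY_G_PER_ML

    print(dose_in_grams(0.4, 70))        # 28.0g for a 70kg person
    print(drink_alcohol_grams(250, 14))  # about 27.6g in 250ml of 14% wine

The two figures are close enough to make the "large glass of wine" comparison reasonable.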

All photos were taken with participants in the same position, from the same angle and distance, and with a neutral expression.

When sober, participants were asked to complete an attractiveness rating task where they were presented with pairs of colour photographs of the same person displayed on a monitor, comprising either:

  • facial images of them sober and after one alcoholic drink, or
  • facial images of them sober and after two alcoholic drinks

Participants were then asked to decide which image was more attractive and to what extent, using the number keys 1 to 8 on the computer.

Values 1 to 4 indicated that the face on the left was preferred (1 = strongly prefer, 2 = prefer, 3 = slightly prefer, 4 = guess), while 5 to 8 indicated the face on the right was preferred (5 = guess, 6 = slightly prefer, 7 = prefer, 8 = strongly prefer).
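
For clarity, the key-to-judgement mapping described above can be written out explicitly. This is purely an illustrative representation of the response scale (in Python), not the study's own analysis code:

    # Response keys mapped to (preferred image, strength of preference),
    # as described in the rating task above (illustrative only)
    KEY_MEANINGS = {
        1: ("left", "strongly prefer"),
        2: ("left", "prefer"),
        3: ("left", "slightly prefer"),
        4: ("left", "guess"),
        5: ("right", "guess"),
        6: ("right", "slightly prefer"),
        7: ("right", "prefer"),
        8: ("right", "strongly prefer"),
    }

    side, strength = KEY_MEANINGS[6]
    print(side, strength)  # right slightly prefer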

They had previously completed a validated questionnaire rating their mood.

 

What were the basic results?

Researchers found images of individuals who had consumed one alcoholic drink (a "low dose") were rated as more attractive than images of them sober.

The preference for an "intoxicated" face (of someone who had had one drink) over the "sober" face was slight (mean preference 54%, 95% confidence interval [CI] 50-59%).

However, when comparing someone who had not had a drink with someone who had had two drinks (the "high dose"), there was a slight tendency to prefer the "sober" face over the "intoxicated" face (mean preference 47%, 95% CI 43-51%).

They also found that in those who had one alcoholic drink, the skin tone in the facial images was slightly redder and darker than in the sober state, but there was no difference when comparing the sober and high-dose images, or the low- and high-dose images.

 

How did the researchers interpret the results?

The researchers say their study suggests alcohol consumption increases the attractiveness of the consumer to others, and they may therefore receive "greater sexual interest" from potential mates.

The mechanism for this apparent increase in attractiveness is unknown, although they suggest it is driven by a change in appearance after low alcohol consumption – a flushing of the skin and a relaxation of facial muscles. 

"Understanding the mechanisms through which alcohol influences social behaviour, including factors that may impact on the likelihood of engaging in risky sexual behaviour, is important if we are to develop evidence-based public health messages," they argue.

 

Conclusion

This small study found a slight increase in the perceived attractiveness of people who had consumed (on average) one large glass of wine, compared with images of those who had consumed no alcohol. But what this finding adds to our knowledge of alcohol and risky sexual behaviour is unclear.

All kinds of factors might influence whether someone is considered attractive, including the mood and preferences of the onlooker, as well as the mood of those being photographed.

Also, the sample was drawn from a student population and the results might not be generalisable to other groups. It is also highly likely that the student participants recognised the students in the photographs, which could have influenced the results.

The official NHS guidelines on alcohol consumption are:

  • men should not regularly drink more than 3-4 units of alcohol a day
  • women should not regularly drink more than 2-3 units a day
  • everyone should avoid alcohol for 48 hours after a heavy drinking session

Three units is the equivalent of a large glass of wine (alcohol content 12%) or a pint of higher strength beer, lager or cider. Read more about alcohol units.
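
For anyone wanting to check a particular drink against these limits, UK units are calculated by multiplying the volume of the drink (in ml) by its alcohol by volume (%) and dividing by 1,000, since one unit is 10ml (8g) of pure alcohol. A minimal sketch of that calculation (our illustration, in Python, using example drinks rather than figures from the study):

    def alcohol_units(volume_ml, abv_percent):
        """UK alcohol units: one unit is 10ml (8g) of pure alcohol."""
        return volume_ml * abv_percent / 1000

    print(alcohol_units(250, 12))   # large glass of 12% wine: 3.0 units
    print(alcohol_units(568, 5.2))  # pint of stronger 5.2% lager: about 3.0 units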

Regularly drinking above these limits can lead to alcohol misuse problems. Alcohol misuse can trigger a range of health issues, such as weight gain, impotence (in men), jaundice, and various types of cancer.

Read more advice about how to enjoy alcohol responsibly.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

How having just the one drink can make you look more gorgeous, according to science. The Independent, March 4 2015

How drinking ONE glass of wine improves your looks: Booze can make you beautiful - but stay away from that second glass. Mail Online, March 3 2015

Links To Science

Van Den Abbeele J, Penton-Voak IS, Attwood AS, et al. Increased Facial Attractiveness Following Moderate, but not High, Alcohol Consumption. Alcohol and Alcoholism. Published online February 25 2015

Categories: NHS Choices

Pages