NHS Choices

'Rebooted' stem cells may lead to new treatments

NHS Choices - Behind the Headlines - Mon, 15/09/2014 - 13:00

"Scientists have managed to 'reset' human stem cells," the Mail Online reports. It is hoped studying these cells will provide more information about the mechanics of early human development.

This headline comes from a laboratory study reporting a way to turn the clock back on human stem cells so they exhibit characteristics more similar to those of seven- to nine-day-old embryonic cells.

These more primitive cells are, in theory, capable of making all and any type of cell or tissue in the human body, and are very valuable for researching human development and disease.

Previous research efforts have successfully engineered early-stage stem cells capable of making several cell and tissue types, called pluripotent stem cells.

However, pluripotent stem cells engineered in the laboratory are not perfect and display subtle differences to natural stem cells.

This study involved using biochemical techniques to return pluripotent human stem cells to a more primitive "ground-state" stem cell.

If this technique is confirmed as reliable and can be replicated in other studies, it could ultimately lead to new treatments, although this possibility is uncertain.

While the immediate impact is probably minimal, it's hoped this research may lead to advances in the years to come.

 

Where did the story come from?

The study was carried out by researchers from the University of Cambridge, the University of London and the Babraham Institute.

It was funded by the UK Medical Research Council, the Japan Science and Technology Agency, the Genome Biology Unit of the European Molecular Biology Laboratory, European Commission projects PluriMes, BetaCellTherapy, EpiGeneSys and Blueprint, and the Wellcome Trust.

The study was published in the peer-reviewed journal Cell as an open access article, so it's available to read online for free.

The Mail Online's coverage was accurate and reflected many of the facts summarised in the press release issued by the Medical Research Council. Interviews with the study's authors and other scientists in the field added useful extra insight to interpret and contextualise the findings.

 

What kind of research was this?

This was a laboratory study to develop and test a new technique to return pluripotent human stem cells to an earlier, more pristine developmental state.

Pluripotent stem cells are early developmental cells capable of becoming several different cell types. Some stem cells are said to be totipotent (capable of becoming all types of cell), such as early embryonic stem cells shortly after fertilisation.

These types of cells are very valuable in developmental science research, as they allow developmental processes to be studied in the laboratory that cannot be studied in a foetus shortly after conception.

As the MRC press release explains: "Capturing embryonic stem cells is like stopping the developmental clock at the precise moment before they begin to turn into distinct cells and tissues.

"Scientists have perfected a reliable way of doing this with mouse cells, but human cells have proved more difficult to arrest and show subtle differences between the individual cells. It's as if the developmental clock has not stopped at the same time and some cells are a few minutes ahead of others."

The aim of this study was therefore to devise and test a way of turning back the clock in human pluripotent stem cells so they exhibit more totipotent characteristics. This was also termed returning the pluripotent cells to "ground-state" pluripotency.

 

What did the research involve?

This research took existing human pluripotent stem cells and subjected them to a battery of laboratory-based experiments in an effort to produce stable stem cells showing a more ground-state pluripotency.

This chiefly involved culturing the human stem cells in a range of biological growth factors and other chemical stimuli designed to coax them into earlier phases of development. The cells' characteristics, such as self-replication and gene and protein activity (expression), were extensively monitored along the way.

 

What were the basic results?

The main findings include:

  • Short-term expression of proteins NANOG and KLF2 was able to put into action a biological pathway leading to the "reset" of pluripotent stem cells to an earlier state. The MRC press release indicated this was equivalent to resetting the cells to those found in an embryo before it implants in the womb at around seven to nine days old.
  • Inhibiting well-established biochemical signalling pathways involving extracellular signal-regulated kinases (ERK) and protein kinase C (both of which are proteins involved in cell regulation) sustained the "rewired state", allowing cells to stay in the arrested development state.
  • The reset cells could self-renew – a key feature of stem cells – without biochemical ERK signalling, and their observable characteristics and genetics remained stable.
  • DNA methylation – a naturally occurring way of regulating gene expression associated with cellular differentiation – was also dramatically reduced, suggesting a more primitive state.

These features, the authors commented, distinguished these reset cells from other types of embryo-derived or induced pluripotent stem cell, and aligned them more closely with the ground-state embryonic stem cell (totipotent) in mice.

 

How did the researchers interpret the results?

The researchers indicate their findings demonstrate the "feasibility of installing and propagating functional control circuitry for ground-state pluripotency in human cells". They added the reset can be achieved without permanent genetic modification.

The research group explained the theory that a "self-renewing ground state similar to rodent ESC [embryonic stem cells] may pertain to primates is contentious", but "our findings indicate that anticipated ground state properties may be instated in human cells following short-term expression of NANOG and KLF2 transgenes. The resulting cells can be perpetuated in defined medium lacking serum products or growth factors."

 

Conclusion

This laboratory study showed human pluripotent stem cells could be coaxed into a seemingly more primitive developmental state, exhibiting some of the key features of an equivalently primitive embryonic stem cell in mice – namely, the ability to stably self-renew and to develop into a range of other cell types.

If replicated and confirmed by other research groups, this finding may be useful to developmental biologists in their efforts to better understand human development and what happens when it goes wrong and causes disease. But this is the hope and expectation for the future, rather than an achievement that has been realised using this new technique.

Sounding a note of caution, Yasuhiro Takashima of the Japan Science and Technology Agency and one of the authors of the study, commented on the Mail Online website: "We don't yet know whether these will be a better starting point than existing stem cells for therapies, but being able to start entirely from scratch could prove beneficial."

This is the start rather than the end of the journey for this new technique and the cells derived from it. The technique will need to be replicated by other research groups in other conditions to ensure its reliability and validity.

The cells themselves will also need to be studied further to see whether they really have the stability and versatility expected of true primitive stem cells under different conditions and over longer time horizons. This will include looking for any subtle or unusual behaviour further down the development line, as has been found with other types of stem cell thought to be primitive.

Overall, this study is important to biologists and medical researchers as it potentially gives them new tools to investigate human development and associated diseases. For the average person the immediate impact is minimal, but may be felt in the future if new treatments arise.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

British scientists 'reset' human stem cells to their earliest state: 'Major step forward' could lead to development of life-saving medicines. Mail Online, September 11 2014

Ultimate human stem cells created in the lab. New Scientist, September 12 2014

Links To Science

Takashima Y, Guo G, Loos R et al. Resetting Transcription Factor Control Circuitry toward Ground-State Pluripotency in Human. Cell. Published online September 11 2014

Categories: NHS Choices

Could meditation help combat migraines?

NHS Choices - Behind the Headlines - Mon, 15/09/2014 - 12:00

“Daily meditation may be the most effective way of tackling migraine,” the Daily Express reports.

This headline is not justified, as it was based on a small pilot study involving just 19 people.

It showed that an eight-week "mindfulness-based stress reduction course" (a combination of meditation and yoga-based practices) led to benefits in measures of headache duration and subsequent disability in 10 adult migraine sufferers, compared to nine in a control group who received usual care.

There were no statistically significant differences found for the arguably more important measures of migraine frequency (migraines per month) and severity. However, the study may have been too small to reliably detect any differences in these outcomes. Both groups continued to take any migraine medication (preventative or for treatment during a headache) they were already taking before the trial.

Overall, this trial showed weak and tentative signs that mindfulness-based stress reduction might be beneficial in a very small group of highly select adults with migraines. However, we will only be able to say it works with any confidence after much larger studies have been carried out.

 

Where did the story come from?

The study was carried out by researchers from Wake Forest School of Medicine, North Carolina (US) and Harvard Medical School, Boston. It was funded by the American Headache Society Fellowship and the Headache Research Fund of the John Graham Headache Center, Brigham and Women’s Faulkner Hospital.

The study was published in the peer-reviewed journal Headache.

One of the study authors reported receiving research support from GlaxoSmithKline, Merck and Depomed. All other authors report no conflicts of interest.

The Daily Express’ coverage of this small study arguably gave too much prominence and validity to the findings, indicating they were reliable: “The ancient yoga-style technique lowers the number of attacks and reduces the agonising symptoms without any nasty side effects”.

Many of the limitations associated with the study were not discussed, including the fact that some of the findings may have been due to chance, given the small sample size.

To be fair, the researchers themselves were forthright in highlighting the limitations of their research.

 

What kind of research was this?

This was a small randomised controlled trial (RCT) investigating the effects of a standardised eight-week mindfulness-based stress reduction course in adults with migraines.

Stress is known to be associated with headaches and migraines, but the research group said solid evidence on whether stress-reducing activities might reduce the occurrence or severity of migraines was lacking. Because of this, they designed a small RCT to test one such activity – an eight-week mindfulness-based stress reduction course.

This was a small pilot RCT. These are usually designed to provide proof of concept that something might work and be safe before moving on to larger trials involving more people. The larger trials are designed to reliably and robustly prove effectiveness and safety. Hence, on their own, pilot RCTs rarely provide reliable evidence of effectiveness.
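To illustrate why pilots of this size are rarely conclusive, here is a minimal sketch of a standard power calculation in Python (using statsmodels; the effect size is a hypothetical value chosen for illustration, not one reported by this study):

# Sketch: why a trial of ~19 people is underpowered (hypothetical effect size).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Participants needed per group to detect a fairly large effect
# (Cohen's d = 0.8) with the conventional 80% power at a 5% significance level.
n_per_group = analysis.solve_power(effect_size=0.8, alpha=0.05, power=0.8)
print(f"needed per group: {n_per_group:.0f}")  # roughly 26 per group

# Power actually achieved with 10 versus 9 participants for the same effect.
achieved = analysis.power(effect_size=0.8, nobs1=10, alpha=0.05, ratio=0.9)
print(f"power with 10 vs 9: {achieved:.2f}")   # well below the conventional 0.8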

 

What did the research involve?

Researchers took a group of 19 adults who had been diagnosed with migraines (with or without aura) and randomly divided them into two groups. One group (n=10) received an eight-week mindfulness-based stress reduction course, while the others (n=9) received “usual care” – they were asked to continue taking any migraine medication they had, and not to change the dose during the eight-week trial.

During the mindfulness trial, participants were also allowed to continue to take any medication they usually would. The main outcome of interest was change in migraine frequency from the start of the trial to eight weeks. Secondary measures included change in headache severity, duration, self-efficacy, perceived stress, migraine-related disability/impact, anxiety, depression, mindfulness and quality of life from the start to the end of the eight-week trial period. 

The standardised mindfulness-based stress reduction course class met for eight weekly, two-hour sessions, plus one “mindfulness retreat day”, which comprised six hours led by a trained instructor and followed a method created by Dr Jon Kabat-Zinn. The intervention is based on systematic and intensive training in mindfulness meditation and mindful hatha yoga in the context of mind/body medicine. Participants were encouraged to build a daily mindfulness practice at home, practising for 45 minutes per day on at least five additional days per week. Compliance was monitored through class attendance and daily logs of home practice.

To be included in the trial, participants had to have reported between 4 and 14 migraine days per month, more than a year of migraine history, be over 18, in good general health and be able and willing to attend weekly sessions of mindfulness and to practice every day at home for up to 45 minutes. Exclusion criteria included already having a yoga or meditation practice and having a major illness (physical or mental).

All participants in both groups were taking medications for their headaches.

At the end of the eight-week period, the control group were offered the mindfulness course as a courtesy for their participation in the trial. In an attempt to blind the control group to treatment allocation, they were told there were two start periods for the eight-week trial and that they were merely in the second, continuing usual care in the interim.

For all final analyses, migraines were more precisely defined as headaches lasting more than 4 hours with a severity of 6 to 10, based on patient diary information.

The study aimed to recruit 34 people, but only recruited 19, so was underpowered to detect statistically significant differences in the outcomes assessed.

All participants kept a daily headache diary for 28 days before the study began.

What were the basic results?

All participants allocated to the eight-week stress-reduction course completed it, averaging 34 minutes of daily meditation. In both groups, more than 80% took daily prophylactic migraine medication, such as propranolol, and 100% used abortive medication, such as triptans, when a migraine struck. There were no adverse events recorded, suggesting the intervention was safe, at least in the short term.

The main findings were:

Primary outcome

Mindfulness participants had 1.4 fewer migraines per month compared with controls (intervention: 3.5 migraines per month during the 28-day run-in, reduced to 1.0 per month during the eight-week study; control: 1.2 reduced to 0 per month; 95% confidence interval (CI) for the difference [−4.6, 1.8]), an effect that did not reach statistical significance in this pilot sample. The lack of statistical significance means the result could be due to chance alone.
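As a rough guide to reading such figures (a standard rule of thumb under a normal approximation, not a calculation reported in the study), a 95% confidence interval for a between-group difference takes the form

\[
\text{95\% CI} = (\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}) \pm 1.96 \times SE
\]

and the difference is conventionally "statistically significant" at the 5% level only if this interval excludes zero. Because the reported interval of [−4.6, 1.8] migraines per month straddles zero, chance alone cannot be ruled out.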

Secondary outcomes

Headaches were less severe (−1.3 points per headache on a 0-10 scale, [−2.3, 0.09], on the borderline of statistical significance) and shorter (−2.9 hours per headache, [−4.6, −0.02], statistically significant) in the intervention group compared to the controls.

Scores on the Migraine Disability Assessment and the Headache Impact Test-6 (a widely used test assessing the impact of migraines on quality of life and day-to-day function) dropped in the intervention group compared with the control group (−12.6, [−22.0, −1.0] and −4.8, [−11.0, −1.0], respectively), both statistically significant findings. Self-efficacy and mindfulness also improved in the intervention group compared with control (13.2 [1.0, 30.0] and 13.1 [3.0, 26.0]), both statistically significant.

 

How did the researchers interpret the results?

The researchers indicated the mindfulness-based stress reduction course was “safe and feasible for adults with migraines. Although the small sample size of this pilot trial did not provide power to detect statistically significant changes in migraine frequency or severity, secondary outcomes demonstrated this intervention had a beneficial effect on headache duration, disability, self-efficacy and mindfulness. Future studies with larger sample sizes are warranted to further evaluate this intervention for adults with migraines”.

 

Conclusion

This pilot RCT, based on just 19 adult migraine sufferers, showed an eight-week mindfulness-based stress reduction course led to benefits for headache duration, disability, self-efficacy and mindfulness measures, compared to a control group who received usual care. There were non-significant benefits observed for measures of migraine frequency and severity. Both groups continued to take any migraine medication (prophylactic or for treatment during a headache) they were already taking before the trial.

The research group themselves were very reasonable in their conclusions and called for larger trials to be done to investigate this issue further. As they acknowledge, relatively little can be said with reliability based on this small pilot study alone. This is because the results of small studies often cannot be generalised to the wider population.

For example, what are the chances that the experience of a group of nine people will represent that of the UK population as a whole, who could be of different ages, have different attitudes and expectations of meditation, and have different medical backgrounds?

Also, larger trials can more accurately estimate the magnitude of any effect, whereas small studies are more vulnerable to chance or extreme findings. Taken together, a pilot study of this size cannot and does not prove that "mindfulness-based stress reduction" is beneficial for migraine sufferers. This point may have been missed by those reading The Daily Express’ coverage, which appeared to accept some of the positive findings at face value and assume widespread effectiveness, without considering the limitations inherent in a pilot RCT of this size.

It is also worth noting that the participants were recruited if they suffered between 4 and 14 migraines per month, but the actual frequency of headaches was much lower for all participants during the run-in period and the eight-week study period. Indeed, some participants in each group had no headaches during each period. This further reduces the ability of this study to show any significant difference between the groups.

Overall, the eight-week mindfulness-based stress reduction course showed tentative signs that it might be beneficial in a very small group of highly select adults with migraines. However, we will only be able to say it is beneficial with any confidence after much larger studies have been carried out. Until then, we simply don’t know if this type of course will help migraine sufferers, hence the Daily Express’ headline is premature.

That said, adopting a psychological approach to chronic pain conditions, rather than relying on medication alone, can help improve symptoms in some people. Read more about coping with chronic pain.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Daily meditation could help 8m conquer pain of migraines. Daily Express, September 13 2014

How To Cure A Migraine? Study Says Meditation Might Be The Answer. Huffington Post, September 12 2014

Links To Science

Wells RE, Burch R, Paulsen RH, et al. Meditation for Migraines: A Pilot Randomized Controlled Trial. Headache: The Journal of Head and Face Pain. Published online July 18 2014

Categories: NHS Choices

Pregnant drink binges harm kids' mental health

NHS Choices - Behind the Headlines - Fri, 12/09/2014 - 11:29

“Binge drinking ONCE during pregnancy can damage your child's mental health and school results,” says the Mail Online. 

The headline follows an analysis of results from a study including thousands of women and their children. In analyses of up to 7,000 children, researchers found that children of women who engaged in binge drinking at least once in pregnancy, but did not drink daily, had slightly higher levels of hyperactivity and inattention problems. These children also scored on average about one point lower in exams.

The results appear to suggest potential for some links, particularly in the area of hyperactivity/inattention. However, the differences identified were generally small, and weren’t always statistically significant after taking into account potential confounders. The links also weren’t always found across both boys and girls, or across both teachers’ and parents’ assessment of the child.

It is already official advice for pregnant women to avoid binge drinking or getting drunk, and to avoid alcohol altogether in the first three months of pregnancy. If women choose to drink alcohol, official advice is to stick to, at most, two units (preferably one) and no more than twice a week (preferably once).

 

Where did the story come from?

The study was carried out by researchers from the University of Nottingham and other research centres in the UK and Australia. The ongoing study is funded by the Medical Research Council, the Wellcome Trust and the University of Bristol. The study was published in the peer-reviewed journal European Child & Adolescent Psychiatry.

The media covered the research reasonably, although reports sometimes refer generally to the effect on children’s “mental health”, which may make readers think the study looked at diagnosed mental health conditions, which is not the case.

The study looked at teacher- and parent-rated levels of problems in areas such as “hyperactivity” and conduct, but did not assess whether the children had psychiatric diagnoses, such as ADHD.

 

What kind of research was this?

This research was part of a cohort study – the Avon Longitudinal Study of Parents and Children (ALSPAC). The current analysis looked at the effect of binge drinking in pregnancy on mental health and school achievement when the children were aged 11. ALSPAC researchers recruited 85% of the pregnant women in the Avon region due to give birth between 1991 and 1992, and have been assessing these women and their children regularly since.

The researchers reported that previous analyses of this study have suggested that there was a link between binge drinking in pregnancy and the child having poorer mental health at ages four and seven as rated by their parents, particularly girls.

A prospective cohort study is the most appropriate and reliable study design for assessing the impact of binge drinking in pregnancy on the child’s heath later in life. For studies of this type, the main difficulty is trying to reduce the potential impact of factors other than the factor of interest (binge drinking) that could affect results. The researchers do this by measuring these factors and then using statistical methods to remove their effect in their analyses. This may not entirely remove their effect, and unknown and unmeasured factors could be having an effect, but it is the best way we have to try and isolate the impact of interest alone.
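For illustration only, here is a minimal sketch of this kind of confounder adjustment using multivariable regression in Python (the variable names and toy data are hypothetical, not the study's own analysis or dataset):

# Sketch of confounder adjustment via multivariable regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical toy data: one row per child (invented for illustration).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "sdq_total": rng.normal(8, 4, n),          # total problems score
    "binge_drinking": rng.integers(0, 2, n),   # 1 = any binge in pregnancy
    "maternal_age": rng.normal(28, 5, n),
    "smoked_in_pregnancy": rng.integers(0, 2, n),
    "birthweight": rng.normal(3.4, 0.5, n),
})

# Regressing the outcome on the exposure plus measured confounders: the
# coefficient on binge_drinking estimates its association with the outcome
# after holding the listed confounders constant.
model = smf.ols(
    "sdq_total ~ binge_drinking + maternal_age + smoked_in_pregnancy + birthweight",
    data=df,
).fit()
print(model.params["binge_drinking"], model.pvalues["binge_drinking"])

As the article notes, adjustment of this kind can only remove the effect of confounders that were measured; unknown or unmeasured factors can still bias the result.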

 

What did the research involve?

The researchers assessed the women’s alcohol consumption by questionnaire at 18 and 32 weeks into their pregnancy. They assessed the offspring’s mental health at age 11 using parent and teacher questionnaires, along with their academic performance. They then analysed whether children of mothers who had engaged in binge drinking during pregnancy differed from children of mothers who had not.

Of the over 14,000 pregnant women in the study, 7,965 provided information on their alcohol consumption at both 18 and 32 weeks. They were asked about:

  • how many days in the past four weeks they had drunk at least four units of alcohol
  • how much and how often they had drunk alcohol in the past two weeks or around the time the baby first moved (only asked at 18 weeks)
  • how much they currently drank in a day (only asked at 32 weeks)

The researchers used this information to determine if the women:

  • had engaged in binge drinking at least once in pregnancy (defined as four or more units/drinks in a day) 
  • drank at least one drink a day at either 18 or 32 weeks

The children’s mental health was assessed using a commonly used standard questionnaire given to teachers and parents. This questionnaire (called the “Strengths and Difficulties Questionnaire”) gives an indication of the level of problems in four areas: 

  • emotional
  • conduct
  • hyperactivity/inattention
  • peer relationships

The Strengths and Difficulties Questionnaire also gives an overall score, and this is what the researchers focused on, as well as the conduct and hyperactivity/inattention scores. The researchers also obtained the children’s results on standard Key Stage 2 examinations taken in the final year at primary school. The researchers had information on 4,000 children for hyperactivity and conduct problems, and just under 7,000 children for academic results.

When the researchers carried out their analyses to look at the effect of binge drinking, they took into account a range of factors that could potentially influence results (potential confounders). These included:

  • mother’s age in pregnancy
  • parents’ highest education level
  • smoking in pregnancy
  • drug use in pregnancy
  • maternal mental health in pregnancy
  • whether the parents owned their house
  • whether the parents were married
  • whether the child was born prematurely
  • the child’s birthweight
  • the child’s gender

 

What were the basic results?

The researchers found that about a quarter of women (24%) reported having engaged in binge drinking at least once in pregnancy. Over half (59%) of the women who reported binge drinking at 18 weeks in their pregnancy also reported having engaged in binge drinking at 32 weeks.

Less than half of the women (about 44%) who had engaged in binge drinking reported doing so on more than two occasions in the past month. Women who had engaged in binge drinking were more likely to have more children, to also smoke or use illegal drugs in pregnancy, to have experienced depression in pregnancy, to have a lower level of education, to be unmarried and to be in rented accommodation.

Initial analyses showed children of mothers who had engaged in binge drinking at least once in pregnancy had higher levels of parent- and teacher-rated problems, and worse school performance, than children of mothers who had not. The average difference in the three problem scores was less than one point (possible score range 0 to 10 for conduct and hyperactivity/inattention problems, and 0 to 40 for the total problems score), and their average KS2 score was 1.82 points lower.

However, once the researchers took into account potential confounding factors, these differences were no longer large enough to rule out the possibility of having occurred by chance (that is, they were no longer statistically significant).

The researchers repeated their analyses for girls and boys separately. They found that even after adjustment, girls whose mothers had engaged in binge drinking in pregnancy did have higher levels of parent-rated conduct, hyperactivity/inattention and total problems (average score difference less than one point).

When the researchers looked at binge drinking and daily drinking separately, they found after adjustment that children of women who had engaged in binge drinking in pregnancy, but did not drink daily, had higher levels of teacher-rated hyperactivity/inattention problems (average score 0.28 points higher) and lower KS2 scores (average 0.81 points lower).

 

How did the researchers interpret the results?

The researchers concluded that occasional binge drinking in pregnancy appears to increase risk of hyperactivity/inattention problems and lower academic performance in children at age 11, even if the women do not drink daily.

 

Conclusion

This prospective cohort study has suggested that even occasional binge drinking in pregnancy may increase the risk of hyperactivity/inattention problems and lower academic performance when the children reach 11 years old.

The strengths of the study are its design – selecting a wide and representative population sample, collecting data prospectively, and using standardised questionnaires to assess the children’s outcomes.

Assessing the impact of alcohol in pregnancy on children’s outcomes is difficult. This is partly because assessing alcohol consumption is always difficult. People may not want to report their true consumption, and even if they do, there are difficulties in accurately remembering past consumption. In addition, as this link can only be assessed by observational studies (ethically you couldn’t do a trial that randomised pregnant women to binge drink), it is always possible that additional factors are having an effect.

The study found that women who had engaged in binge drinking in pregnancy were also more likely to have other unhealthy behaviours, such as smoking, and to be socioeconomically disadvantaged. The researchers tried to remove the effects of all of these factors, but this may not entirely remove the effect.

This latest study carried out a large number of analyses looking at different outcomes. The differences identified were generally small, and they weren’t always large enough to be statistically significant after taking into account potential confounders. They also weren’t always found across both boys and girls, or across both teachers’ and parents’ assessment of the child. However, the results do appear to suggest potential for some links, particularly in the area of hyperactivity/inattention.

The researchers note that even with small individual effects, the effect across the population as a whole can be considerable. The small effect may also reflect that it represents an average effect across all levels of binge drinking – ranging from one to many times.

We may never have completely concrete proof of an exact level at which harm occurs, and below which alcohol consumption in pregnancy is safe. Therefore, we have to work with the best information available. There is growing evidence that, as well as how much we drink, the pattern of how we drink may be important.

Current UK recommendations from the National Institute for Health and Care Excellence (NICE) already advise that women who are pregnant should avoid binge drinking or getting drunk. It is also recommended that:

  • women who are pregnant should avoid alcohol in the first three months of pregnancy
  • if women choose to drink alcohol later in pregnancy, they should drink no more than two (preferably only one) UK units, no more than twice (preferably once) a week.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Binge drinking ONCE during pregnancy can damage your child's mental health and school results. Daily Mail, September 11 2014

Prenatal alcohol consumption linked to mental health problems. The Guardian, September 11 2014

This is how much alcohol you can have during pregnancy before it harms newborn’s mental health. Metro, September 10 2014

Links To Science

Sayal K, et al. Prenatal exposure to binge pattern of alcohol consumption: mental health and learning outcomes at age 11. European Child & Adolescent Psychiatry. Published September 11 2014

Categories: NHS Choices

Weight discrimination study fuels debate

NHS Choices - Behind the Headlines - Fri, 12/09/2014 - 10:41

Much of the media has reported that discriminatory “fat shaming” makes people who are overweight eat more, rather than less.

The Daily Mail describes how, “telling someone they are piling on the pounds just makes them delve further into the biscuit tin”. While this image may seem like a commonsense “comfort eating” reaction, the headlines are not borne out by the science.

In fact, the news relates to findings for just 150 people who perceived any kind of weight discrimination, including threats and harassment, and poorer service in shops – not just friendly advice about weight.

The research in question looked at body mass index (BMI) and waist size for almost 3,000 people aged over 50 and how it changed over a three- to five-year period. The researchers analysed the results alongside the people’s reports of perceived discrimination. But because of the way the study was conducted, we can’t be sure whether the weight gain resulted from discrimination or the other way around (or whether other unmeasured factors had an influence).

On average, the researchers found that the 150 people who reported weight discrimination had a small gain in BMI and waist circumference over the course of the study, while those who didn’t had a small loss.

Further larger-scale research into the types of discrimination that people perceived may bring more answers on the best way to help people maintain a healthy weight.
 

Where did the story come from?

The study was carried out by researchers from University College London, and was funded by the National Institute on Aging and Office for National Statistics. Individual authors received support from ELSA funding and Cancer Research UK. The study was published in the peer-reviewed Obesity Journal.

The media in general have perhaps overinterpreted the meaning from this study, given its limitations. The Daily Telegraph’s headline says, “fat shaming makes people eat more”, but the study hasn’t examined people’s dietary patterns, and can’t prove whether the weight gain or discrimination came first.

 

What kind of research was this?

This was an analysis of data collected as part of the prospective cohort study, the English Longitudinal Study of Ageing (ELSA). This analysis looked at the associations between perceived weight discrimination and changes in weight, waist circumference and weight status.

The researchers say that negative attitudes towards people who are obese have been described as “one of the last socially acceptable forms of prejudice”. They cite the common perception that discrimination against overweight and obese people may encourage them to lose weight, but argue it may instead have a detrimental effect.

A cohort study is a good way of examining how a particular exposure is associated with a particular later outcome. However, in the current study the way in which the data was collected meant that it was not possible to clearly determine whether the discrimination or the weight gain came first.

As with all studies of this kind, finding that one factor has a relationship with another does not prove cause and effect. There may be many other confounding factors involved, making it difficult to say how and whether perceived weight discrimination is directly related to the person’s weight. The researchers did make adjustments for some of these factors in analyses, to try and remove their effect.

 

What did the research involve?

The English Longitudinal Study of Ageing is a long-term study started in 2001/02. It recruited adults aged 50 and over and has followed them every two years. Weight, height and waist circumference have been objectively measured by a nurse every four years.

Questions on perceptions of discrimination were asked only once, in 2010/11, and were completed by 8,107 people in the cohort (93%). No body measures were taken at this time, but they were taken one to two years before (2008/09) and after (2012/13) this. Complete data on body measurements and perceptions of discrimination were available for 2,944 people.

The questions on perceived discrimination were based on those previously established in other studies and asked how often in your day-to-day life: 

  • you are treated with less respect or courtesy
  • you receive poorer service than other people in restaurants and stores
  • people act as if they think you are not clever
  • you are threatened or harassed
  • you receive poorer service or treatment than other people from doctors or hospitals

The responders could choose one of a range of answers for each – from “never” to “almost every day”. The researchers report that because few people reported any discrimination, they grouped responses to indicate any perceived discrimination versus no perceived discrimination. People who reported discrimination in any situation were asked to indicate what they attributed this experience to, from a list of options including weight, age, gender and race.

The researchers then looked at the change in BMI and waist circumference between the 2008/09 and 2012/13 assessments, and at how this related to perceived weight discrimination at the midpoint. Normal weight was classed as a BMI less than 25, overweight between 25 and 30, “obese class I” between 30 and 35, “obese class II” 35 to 40, and “obese class III” a BMI above 40.
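As a worked illustration of these cut-points (BMI is weight in kilograms divided by height in metres squared; the function below, including its name, is hypothetical and simply encodes the classes listed above):

# Illustration of the BMI classes used in the study.
def bmi_class(weight_kg: float, height_m: float) -> str:
    bmi = weight_kg / height_m ** 2  # BMI = kg / m^2
    if bmi < 25:
        return "normal weight"
    if bmi < 30:
        return "overweight"
    if bmi < 35:
        return "obese class I"
    if bmi < 40:
        return "obese class II"
    return "obese class III"

print(bmi_class(80, 1.75))  # BMI of about 26.1 -> "overweight"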

In their analyses the researchers took into account age, sex and household (non-pension) income, as an indicator of socioeconomic status.

 

What were the basic results?

Of the 2,944 people for whom complete data was available, 150 (5.1%) reported any perceived weight discrimination, ranging from 0.7% of normal-weight individuals to 35.9% of people in obesity class III. There were various differences between the 150 people who perceived discrimination and those who didn’t. People who perceived discrimination were significantly younger (62 years versus 66 years), had a higher BMI (35 versus 27) and waist circumference (112cm versus 94cm), and were less wealthy.

On average, people who perceived discrimination gained 0.95kg in weight between the 2008/09 and 2012/13 assessments, while people who didn’t perceive discrimination lost 0.71kg (an average difference between the groups of 1.66kg).

There were significant changes in the overweight group (a gain of 2.22kg among those perceiving any discrimination versus a loss of 0.39kg in the no discrimination group), and the obese group overall (a loss of 0.26kg in the discrimination group versus a loss of 2.07kg in the no discrimination group). There were no significant differences in any of the obesity subclasses.

People who perceived weight discrimination also gained an average 0.72cm in waist circumference, while those who didn’t lost an average of 0.40cm (an average difference of 1.12cm). However, there were no other significant differences by group.

Among people who were obese at the first assessment, perceptions of discrimination had no effect on their risk of remaining obese (odds ratio (OR) 1.09, 95% confidence interval (CI) 0.46 to 2.59), with most obese people staying obese at follow-up (85.6% at follow-up versus 85.0% before). However, among people who were not obese at baseline, perceived weight discrimination was associated with higher odds of becoming obese (OR 6.67, 95% CI 1.85 to 24.04).
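For readers unfamiliar with odds ratios, the arithmetic behind figures like these (with illustrative numbers, not data taken from the study) works as follows:

\[
\text{OR} = \frac{\text{odds of becoming obese among those reporting discrimination}}{\text{odds of becoming obese among those not reporting it}} = \frac{a/b}{c/d}
\]

where $a$ and $b$ are the numbers who did and did not become obese in the discrimination group, and $c$ and $d$ the corresponding numbers in the no-discrimination group. For example, if 4 of 20 people reporting discrimination became obese (odds $4/16 = 0.25$), compared with 50 of 1,300 who did not report it (odds $50/1{,}250 = 0.04$), the odds ratio would be $0.25/0.04 = 6.25$. A wide confidence interval around such a ratio, as reported here, signals that relatively few events underpin the estimate.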

 

How did the researchers interpret the results?

The researchers conclude that their results, “indicate that rather than encouraging people to lose weight, weight discrimination promotes weight gain and the onset of obesity. Implementing effective interventions to combat weight stigma and discrimination at the population level could reduce the burden of obesity”.

 

Conclusion

This analysis of data collected as part of the large English Longitudinal Study of Ageing finds that people who reported experiencing discrimination as a result of their weight had a small gain in BMI and waist circumference over the study years, while those who didn’t had a small loss.

There are a few important limitations to bear in mind. Most importantly, this study could not determine whether the weight changes or the discrimination came first. And, finding an association between two factors does not prove that one has directly caused the other. The relationship between the two may be influenced by various confounding factors. The authors tried to take into account some of these, but there are still others that could be influencing the relationship (such as the person’s own psychological health and wellbeing).

As relatively few people reported weight discrimination, results were not reported or analysed separately by the type or source of the discrimination. Therefore, it is not possible to say what form the discrimination took or whether it came from health professionals or the wider population.

People’s perception of discrimination, and the reasons for it, may be influenced by their own feelings about their weight and body image. These feelings could themselves be hampering their ability to lose weight. This does not mean that discrimination does not exist, or that it should not be addressed. Instead, both factors may need to be considered in developing successful approaches to reducing weight gain and obesity.

Another important limitation of this study is that, despite the large initial sample size of this cohort, only 150 people (5.1%) perceived weight discrimination. Further subdividing this small number of people by BMI class makes the numbers smaller still. Analyses based on small numbers may not be precise. For example, the very wide confidence interval around the odds ratio for becoming obese highlights the uncertainty of this estimate.

Also, the findings may not apply to younger people, as all participants were over the age of 50.

Discrimination based on weight or other characteristics is never acceptable and is likely to have a negative effect. The National Institute for Health and Care Excellence has already issued guidance to health professionals, noting the importance of non-discriminatory care of overweight and obese people.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Fat shaming 'makes people eat more rather than less'. The Daily Telegraph, September 11 2014

Telling someone they're fat makes them eat MORE: People made to feel guilty about their size are six times as likely to become obese. Mail Online, September 11 2014

‘Fat shaming’ makes people put on more weight, study claims. Metro. September 10 2014

Links To Science

Jackson SE, Beeken RJ, Wardle J. Perceived weight discrimination and changes in weight, waist circumference, and weight status. Obesity. Published online September 11 2014

Categories: NHS Choices

'Food addiction' doesn't exist, say scientists

NHS Choices - Behind the Headlines - Thu, 11/09/2014 - 12:30

“Food is not addictive ... but eating is: Gorging is psychological compulsion, say experts,” the Mail Online reports.

The news follows an article in which scientists argue that – unlike drug addiction – there is little evidence that people become addicted to the substances in certain foods.

Researchers argue that instead of thinking of certain types of food as addictive, it would be more useful to talk of a behavioural addiction to the process of eating and the “reward” associated with it.

The article is a useful contribution to the current debate over what drives people to overeat. It’s a topic that urgently needs answers, given the soaring levels of obesity in the UK and other developed countries. There is still a good deal of uncertainty about why people eat more than they need. The way we regard overeating is linked to how eating disorders are treated, so fresh thinking may prove useful in helping people overcome compulsive eating habits.

 

Where did the story come from?

The study was carried out by researchers from various universities in Europe, including the Universities of Aberdeen and Edinburgh. It was funded by the European Union.

The study was published in the peer-reviewed Neuroscience and Biobehavioural Reviews on an open-access basis, so it is free to read online. However, the online article that has been released is not the final one, but an uncorrected proof.

Press coverage was fair, although the article was treated somewhat as if it was the last word on the subject, rather than a contribution to the debate. The Daily Mail’s use of the term “gorging” in its headline was unnecessary, implying sheer greed is to blame for obesity. This was not a conclusion found in the published review.

What kind of research was this?

This was not a new piece of research, but a narrative review of the scientific evidence for the existence of an addiction to food. It says that the concept of food addiction has become popular among both researchers and the public, as a way to understand the psychological processes involved in weight gain.

The authors of the review argue that the term food addiction – echoed in terms such as “chocaholic” and “food cravings” – has potentially important implications for treatment and prevention. For this reason, they say, it is important to explore the concept more closely.

They also say that “food addiction” may be used as an excuse for overeating, also placing blame on the food industry for producing so-called “addictive foods” high in fat and sugar.

What does the review say?

The researchers first looked at the various definitions of the term addiction. Although they say a conclusive scientific definition has proved elusive, most definitions include notions of compulsion, loss of control and withdrawal syndromes. Addiction, they say, can be either related to an external substance (such as drugs) or to a behaviour (such as gambling).

In formal diagnostic categories, the term has largely been replaced. Instead it is often changed to “substance use disorder” – or in the case of gambling “non-substance use disorder”.

One classic finding on addiction is the alteration of central nervous system signalling, involving the release of chemicals with “rewarding” properties. These chemicals, the authors say, can be released not just by exposure to external substances, such as drugs, but also by certain behaviours, including eating.

The authors also outline the neural pathways through which such reward signals work, with neurotransmitters such as dopamine playing a critical role.

However, the authors of the review say that labelling a food or nutrient as “addictive” implies it contains certain ingredients that could make an individual addicted to it. While certain foods – such as those high in fat and sugar – have “rewarding” properties and are highly palatable, there is insufficient evidence to label them as addictive. There is no evidence that single nutritional substances can elicit a “substance use disorder” in humans, according to current diagnostic criteria.

The authors conclude that “food addiction” is a misnomer, proposing instead the term “eating addiction” to underscore the behavioural addiction to eating. They argue that future research should try to define the diagnostic criteria for an eating addiction, so that it can be formally classified as a non-substance related addictive disorder.

“Eating addiction” stresses the behavioural component, whereas “food addiction” appears more like a passive process that simply befalls the individual, they conclude.

Conclusion

There are many theories as to why we overeat. These include the existence of the “thrifty gene”, which has primed us to eat whenever food is present and was useful in times of scarcity. There is also the theory of the “obesogenic environment”, in which calorie-dense food is constantly available.

This is an interesting review that argues that, in terms of treatment, the focus should be on people’s eating behaviour rather than on the addictive nature of certain foods. It does not deny that, for many of us, high-fat, high-sugar foods are highly palatable.

If you think your eating is out of control, or you want help with weight problems, it’s a good idea to visit your GP. There are many schemes available that can help people lose weight by sticking to a healthy diet and regular exercise.

If you are feeling compelled to eat, or finding yourself snacking unhealthily, why not check out these suggestions for food swaps that could be healthier.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Sugar 'not addictive' says Edinburgh University study. BBC News, September 9 2014

Food is not addictive ... but eating is: Gorging is psychological compulsion, say experts. Daily Mail, September 10 2014

Fatty foods are NOT addictive – but eating can be, Scottish scientists reveal. Daily Express, September 10 2014

Links To Science

Hebebrand J, Albayrak O, Adan R, et al. “Eating addiction”, rather than “food addiction”, better captures addictive-like eating behaviour. Neuroscience and Biobehavioural Reviews. Published online September 6 2014

Categories: NHS Choices

Bacteria found in honey may help fight infection

NHS Choices - Behind the Headlines - Thu, 11/09/2014 - 12:00

“Bacteria found in honeybee stomachs could be used as alternative to antibiotics,” reports The Independent.

The world desperately needs new antibiotics to counter the growing threat of bacteria developing resistance to drug treatment. A new study has found that 13 bacteria strains living in honeybees’ stomachs can reduce the growth of drug-resistant bacteria, such as MRSA, in the laboratory.

The researchers examined antibiotic-resistant bacteria and yeast that can infect human wounds such as MRSA and some types of E. coli. They found each to be susceptible to some of the 13 honeybee lactic acid bacteria (LAB). These LAB were more effective if used together.

However, while the researchers found that the LAB could have more of an effect than existing antibiotics, they did not test whether this difference was likely to be due to chance, so few solid conclusions can be drawn from this research.

The researchers also found that each LAB produced different levels of toxic substances that may have been responsible for killing the bacteria.

Unfortunately, the researchers had previously found that the LAB are only present in fresh honey for a few weeks before they die, and are not present in shop-bought honey.

However, the researchers did find low levels of LAB-produced proteins and free fatty acids in shop-bought honey. They went on to suggest that these substances might be key to the long-held belief that even shop-bought honey has antibacterial properties, but that this warrants further research.

 

Where did the story come from?

The study was carried out by researchers from Lund University and Sophiahemmet University in Sweden. It was funded by the Gyllenstierna Krapperup’s Foundation, Dr P Håkansson’s Foundation, Ekhaga Foundation and The Swedish Research Council Formas.

The study was published in the peer-reviewed International Wound Journal on an open-access basis, so it is free to read online.

The study was accurately reported by The Independent, which appears to have based some of its reporting on a press release from Lund University. This press release confusingly introduces details of separate research into the use of honey to successfully treat wounds in a small number of horses.

 

What kind of research was this?

This was a laboratory study looking at whether substances present in natural honey are effective against several types of bacteria that commonly infect wounds. Researchers want to develop new treatments because of the growing problem of bacteria developing antibiotic resistance. In this study, the researchers chose to focus on honey, as it has been used “for centuries … in folk medicine for upper respiratory tract infections and wounds”, but little is known about how it works.

Previous research has identified 40 strains of LAB that live in honeybees’ stomachs (stomach bacteria are commonly known as “gut flora”). Thirteen of these LAB strains have been found to be present in all species of honeybees and in freshly harvested honey on all continents – but not in shop-bought honey.

Research has suggested that the 13 strains work together to protect the honeybee from harmful bacteria. This study set out to further investigate whether these LAB might be responsible for the antibacterial properties of honey. The researchers did this by testing the LAB in the laboratory on bacteria that can cause human wound infections.

 

What did the research involve?

The 13 LAB strains were cultivated and tested against 13 multi-drug-resistant bacteria and one type of yeast, which had been grown in the laboratory from chronic human wounds.

The bacteria included MRSA and one type of E. coli. The researchers tested each LAB strain for its effect on each type of bacteria or yeast, and then all 13 LAB strains were tested together. They did this by placing a disc of material containing the LAB at a particular place in a gel-like substance called agar, and then placing bacteria or yeast onto the agar.

If a LAB had antibiotic properties, it would stop the bacteria or yeast from growing near it. The researchers could identify the LABs with stronger antibiotic properties by seeing which stopped the bacteria or yeast growing over the largest distance.

The researchers compared the results with the effect of the antibiotic commonly used for each type of bacteria or yeast, such as vancomycin and chloramphenicol. They then analysed the type of substances that each LAB produced, in an attempt to understand how they killed the bacteria or yeast.

The researchers then looked for these substances in samples of different types of shop-bought honey, including Manuka, heather, raspberry and rapeseed honey, and a sample of fresh rapeseed honey that had been collected from a bee colony.

 

What were the basic results?

Each of the 13 LABs reduced the growth of some of the antibiotic-resistant wound bacteria. The LABs were more effective when used together. The LABs tended to stop bacteria and yeast growing over a larger area than the antibiotics, suggesting that they were having more of an effect. However, the researchers did not do statistical tests to see if these differences were greater than might be expected purely by chance.
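To make concrete what such a test could look like, here is a minimal sketch (with made-up growth-inhibition zone diameters, not the study's data) of comparing LAB discs against antibiotic discs using a rank-based significance test:

# Sketch of the kind of significance test the researchers did not run.
# Zone diameters (mm) below are invented for illustration only.
from scipy.stats import mannwhitneyu

lab_zones = [18.0, 21.5, 19.2, 22.8, 20.1, 23.4]         # LAB mixture discs
antibiotic_zones = [15.2, 16.8, 14.9, 17.3, 16.1, 15.7]  # antibiotic discs

# A two-sided Mann-Whitney U test asks whether one set of zones tends to be
# larger than the other, beyond what chance alone would plausibly produce.
stat, p_value = mannwhitneyu(lab_zones, antibiotic_zones, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")  # p < 0.05 would suggest a real difference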

The 13 LABs produced different levels of lactic acid, formic acid and acetic acid. Five of them also produced hydrogen peroxide. All of the LABs also produced at least one other toxic chemical, including benzene, toluene and octane. They also produced some proteins and free fatty acids. Low concentrations of nine proteins and free fatty acids produced by LABs were found in shop-bought honeys.

 

How did the researchers interpret the results?

The researchers conclude that LAB living in honeybees “are responsible for many of the antibacterial and therapeutic properties of honey. This is one of the most important steps forward in the understanding of the clinical effects of honey in wound management”.

They go on to say that “this has implications not least in developing countries, where fresh honey is easily available, but also in western countries where antibiotic resistance is seriously increasing”.

 

Conclusion

This study suggests that 13 strains of LAB taken from honeybees’ stomachs are effective against a yeast and several bacteria that are often present in human wounds. Although the experiments suggested that the LABs could inhibit the bacteria more than some antibiotics, they did not show that this effect was large enough to be relatively certain it did not occur by chance. All of the tests were done in a laboratory environment, so it remains to be seen whether similar effects would be seen when treating real human wounds.

There were some aspects of the study that were not clear, including the antibiotic dose that was used and whether this dose was optimal, or the same as had been used in the clinical setting where the species were collected. The authors also report that an antibiotic was used as a control for each bacterium and the yeast, but this is not clearly presented in the study’s tables, making it difficult to assess whether this is correct.

The study has shown that each LAB produces a different amount or type of potentially toxic substances. It is not clear how these substances interact to combat the infections, but it appears that they work more effectively in combination.

Low concentrations of some of the substances that could be killing the bacteria and yeast were found in shop-bought honey, but this study does not prove that they would have antibacterial effects. In addition, as the researchers point out, shop-bought honey does not contain any LABs.

Antibiotic resistance is a big problem that reduces our ability to combat infections, so there is a lot of interest in finding new ways to fight bacteria. Whether this piece of research will contribute to that goal is currently unclear, but finding new treatments will be crucial.

 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Bacteria found in honeybee stomachs could be used as alternative to antibiotics, scientists claim. The Independent, September 10 2014

Links To Science

Olofsson TC, Butler E, Markowicz P, et al. Lactic acid bacterial symbionts in honeybees – an unknown key to honey's antimicrobial and therapeutic activities. International Wound Journal. Published online September 8 2014

Categories: NHS Choices

Hundreds report waking up during surgery

NHS Choices - Behind the Headlines - Wed, 10/09/2014 - 12:30

“At least 150, and possibly several thousand, patients a year are conscious while they are undergoing operations,” The Guardian reports. A report suggests “accidental awareness” during surgery occurs in around one in 19,000 operations.

The report containing this information is the Fifth National Audit Project (NAP5) report on Accidental Awareness during General Anaesthesia (AAGA) – that is, when people are conscious at some point during general anaesthesia. This audit was conducted over a three-year period to determine how common AAGA is.

People who regain consciousness during surgery may be unable to communicate this to the surgeon due to the use of muscle relaxants, which are required for safety during surgery. This can cause feelings of panic and fear. Sensations that the patients have reported feeling during episodes of AAGA include tugging, stitching, pain and choking.

There have been reports that people who experience this rare occurrence may be extremely traumatised and go on to experience post-traumatic stress disorder (PTSD).

However, as the report points out, psychological support and therapy given quickly after an AAGA can reduce the risk of PTSD.

 

Who produced the report?

The Royal College of Anaesthetists (RCoA) and the Association of Anaesthetists of Great Britain and Ireland (AAGBI) produced the report. It was funded by anaesthetists through their subscriptions to both professional organisations.

In general, the UK media have reported on the study accurately and responsibly.

The Daily Mirror’s website points out that you are far more likely to die during surgery than wake up during it – a statement that, while accurate, is not exactly reassuring.

 

How was the research carried out?

The audit was the largest of its kind, with researchers obtaining the details of all patient reports of AAGA from approximately 3 million operations across all public hospitals in the UK and Ireland. After the data was made anonymous, a multidisciplinary team studied the details of each event. This team included patient representatives, anaesthetists, psychologists and other professionals.

The team studied 300 of more than 400 reports they received. Of these, 141 were considered to be certain/probable cases. In addition, 17 cases were due to drug error: having the muscle relaxant but not the general anaesthetic, thus causing “awake paralysis” – a condition similar to sleep paralysis, when a person wakes during sleep, but is temporarily unable to move or speak. Seven cases of AAGA occurred in the intensive care unit (ICU) and 32 cases occurred after sedation rather than general anaesthesia (sedation causes a person to feel very drowsy and unresponsive to the outside world, but does not cause loss of consciousness).
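As a rough back-of-the-envelope check (this is not the report's exact calculation, which used its own refined denominator), dividing the approximate number of operations covered by the number of certain/probable reports gives a figure of the same order as the headline estimate:

```python
operations = 3_000_000   # approximate number of operations covered by the audit
certain_probable = 141   # certain/probable AAGA reports
print(f"about 1 in {operations / certain_probable:,.0f}")  # ~1 in 21,000, the same order as 1 in 19,000
```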

 

What were the main findings?

The main findings were:

  • one in 19,000 people reported AAGA
  • half of the reported events occurred during the initiation of general anaesthetic, and half of these cases were during urgent or emergency operations
  • about one-fifth of cases occurred after the surgery had finished, and were experienced as being conscious but unable to move
  • most events lasted for less than five minutes
  • 51% of cases caused the patient distress
  • 41% of cases resulted in longer-term moderate to severe psychological harm from the experience
  • people who had early reassurance and support after an AAGA event often had better outcomes

The awareness was more likely to occur:

  • during caesarean section and cardiothoracic surgery
  • in obese patients
  • if there was difficulty managing the patient’s airway at the start of anaesthesia
  • if there was interruption in giving the anaesthetic when transferring the patient from the anaesthetic room to the theatre
  • if certain emergency drugs were used during some anaesthetic techniques

 

What recommendations have been made?

A total of 64 recommendations were made, covering factors at national, institutional and individual health professional levels. The main recommendations are briefly outlined below.

They recommend having a new anaesthetic checklist in addition to the World Health Organization (WHO) Safer Surgical Checklist, which is meant to be completed for each patient. This would be a simple anaesthesia checklist performed at the start of every operation. Its purpose would be to prevent incidents caused by human error, monitoring problems and interruptions to the administration of the anaesthetic drugs.

To reduce the experience of waking but being unable to move, they recommend that a type of monitor called a nerve stimulator should be used, so that anaesthetists can assess whether the neuromuscular drugs are still having an effect before they withdraw the anaesthetic.

They recommend that hospitals look at the packaging of each type of anaesthetic and related drugs that are used, and consider ordering some from different suppliers, to avoid multiple drugs of similar appearance. They also recommend that national anaesthetic organisations look for solutions to this problem with the suppliers.

They recommend that patients be informed of the possibility of briefly experiencing muscle paralysis when they are given the anaesthetic medications and when they wake up at the end, so that they are more prepared for its potential occurrence. In addition, patients who are undergoing sedation rather than general anaesthesia should be better informed of the level of awareness to expect.

The other main recommendation was for a new structured approach to managing any patients who experience awareness, to help reduce distress and longer-term psychological difficulties – called the Awareness Support Pathway.

 

How does this affect you?

As Professor Tim Cook, Consultant Anaesthetist in Bath and co-author of the report, has said: “It is reassuring that the reports of awareness … are a lot rarer than incidences in previous studies”, which have been as high as one in 600. He also states that “as well as adding to the understanding of the condition, we have also recommended changes in practice to minimise the incidence of awareness and, when it occurs, to ensure that it is recognised and managed in such a way as to mitigate longer-term effects on patients”.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Awareness during surgery can cause long-term harm, says report. The Guardian, September 10 2014

Some patients 'wake up' during surgery. BBC News, September 10 2014

Three patients each week report WAKE UP during an operation because they are not given enough anaesthetic. Mail Online, September 10 2014

Hundreds of people wake up during operations. The Daily Telegraph, September 10 2014

More than 150 people a year WAKE UP during surgery: How does it happen? Daily Mirror, September 10 2014

Categories: NHS Choices

Prescription sleeping pills linked to Alzheimer’s risk

NHS Choices - Behind the Headlines - Wed, 10/09/2014 - 11:30

“Prescription sleeping pills … can raise chance of developing Alzheimer's by 50%,” reports the Mail Online.

This headline is based on a study comparing the past use of benzodiazepines, such as diazepam and temazepam, in older people with or without Alzheimer’s disease. It found that the odds of developing Alzheimer’s were higher in people who had taken benzodiazepines for more than six months.

Benzodiazepines are a powerful class of sedative drugs. Their use is usually restricted to treating cases of severe and disabling anxiety and insomnia. They are not recommended for long-term use, because they can cause dependence.

It’s also important to note that this study only looked at people aged 66 and above, so it is not clear what the effects are in younger people. Also, it is possible that the symptoms these drugs are being used to treat in these older people, such as anxiety, may in fact be early symptoms of Alzheimer’s. The researchers tried to reduce the likelihood of this in their analyses, but it is still a possibility.

Overall, these findings reinforce existing recommendations that a course of benzodiazepines should usually last no longer than four weeks.

 

Where did the story come from?

The study was carried out by researchers from the University of Bordeaux, and other research centres in France and Canada. It was funded by the French National Institute of Health and Medical Research (INSERM), the University of Bordeaux, the French Institute of Public Health Research (IRESP), the French Ministry of Health and the Funding Agency for Health Research of Quebec.

The study was published in the peer-reviewed British Medical Journal on an open access basis, so it is free to read online.

The Mail Online makes the drugs sound like they are “commonly used” for anxiety and sleep disorders, when they are used only in severe, disabling cases. It is also not possible to say for sure that the drugs are themselves directly increasing risk, as suggested in the Mail Online headline.

 

What kind of research was this?

This was a case control study looking at whether longer-term use of benzodiazepines could be linked to increased risk of Alzheimer’s disease.

Benzodiazepines are a group of drugs used mainly to treat anxiety and insomnia, and it is generally recommended that they are used only in the short term – usually no more than four weeks.

The researchers report that other studies have suggested that benzodiazepines could be a risk factor for Alzheimer’s disease, but there is still some debate. In part, this is because anxiety and insomnia in older people may be early signs of Alzheimer’s disease, and these may be the cause of the benzodiazepine use. In addition, studies have not yet been able to show that risk increases with increasing dose or longer exposure to the drugs (called a “dose-response effect”) – something that would be expected if the drugs were truly affecting risk. This latest study aimed to assess whether there was a dose-response effect.

Because the suggestion is that taking benzodiazepines for a long time could cause harm, a randomised controlled trial (seen as the gold standard in evaluating evidence) would be unethical.

As Alzheimer’s takes a long time to develop, following up a population to assess first benzodiazepine use, and then whether anyone develops Alzheimer’s (a cohort study) would be a long and expensive undertaking. A case control study using existing data is a quicker way to determine whether there might be a link.

As with all studies of this type, the difficulty is that it is not possible to determine for certain whether the drugs are causing the increase in risk, or whether other factors could be contributing.

 

What did the research involve?

The researchers used data from the Quebec health insurance program database, which includes nearly all older people in Quebec. They randomly selected 1,796 older people with Alzheimer’s disease who had at least six years’ worth of data in the system prior to their diagnosis (cases). They randomly selected four controls for each case, matched for gender, age and a similar amount of follow-up data in the database. The researchers then compared the number of cases and controls who had started taking benzodiazepines at least five years earlier, and the doses used.

Participants had to be aged over 66 years, and be living in the community (that is, not in a care home) between 2000 and 2009. Benzodiazepine use was assessed using the health insurance claims database. The researchers identified all prescription claims for benzodiazepines, and calculated an average dose for each benzodiazepine used in the study. They then used this to calculate how many average daily doses of the benzodiazepine were prescribed for each person. This allowed them to use a standard measure of exposure across the drugs.
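A minimal sketch of this kind of dose standardisation, using hypothetical prescriptions and illustrative average daily doses (not the study's actual values):

```python
# Hypothetical prescription records for one person: (drug, total mg prescribed).
prescriptions = [("diazepam", 900), ("temazepam", 600)]

# Illustrative average daily doses in mg (assumed values, not the study's).
average_daily_dose_mg = {"diazepam": 10, "temazepam": 20}

# Convert each prescription into a number of average daily doses, then sum.
total_daily_doses = sum(mg / average_daily_dose_mg[drug] for drug, mg in prescriptions)

# Band the cumulative exposure as in the study's analysis.
if total_daily_doses <= 90:
    band = "up to 3 months"
elif total_daily_doses <= 180:
    band = "3 to 6 months"
else:
    band = "more than 6 months"

print(f"{total_daily_doses:.0f} daily doses ({band})")  # 120 daily doses (3 to 6 months)
```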

Some benzodiazepines act over a long period, as they take longer to be broken down and eliminated from the body, while others act over a shorter period. The researchers also noted whether people took long- or short-acting benzodiazepines; those who took both were classified as having taken the longer-acting form.

People starting benzodiazepines within five years of their Alzheimer’s diagnosis (or equivalent date for the controls) were excluded, as these cases are more likely to potentially be cases where the symptoms being treated are early signs of Alzheimer’s.

In their analyses, the researchers took into account whether people had conditions which could potentially affect the results, including:

  • high blood pressure
  • heart attack
  • stroke
  • high cholesterol
  • diabetes
  • anxiety
  • depression
  • insomnia

 

What were the basic results?

Almost half of the cases (49.8%) and 40% of the controls had been prescribed benzodiazepines. The proportion of cases and controls taking less than six months’ worth of benzodiazepines was similar (16.9% of cases and 18.2% of controls). However, taking more than six months’ worth of benzodiazepines was more common in the cases (32.9% of cases and 21.8% of controls).

After taking into account the potential confounders, the researchers found that having used a benzodiazepine was associated with an increased risk of Alzheimer’s disease (odds ratio (OR) 1.43, 95% confidence interval (CI) 1.28 to 1.60).
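For readers unfamiliar with these measures, the sketch below shows how an unadjusted odds ratio and its 95% confidence interval are calculated from a two-by-two table. The counts are illustrative, chosen only to roughly match the proportions reported above; the study's published OR is adjusted for confounders, so it differs:

```python
import math

# Illustrative counts: 1,796 cases (49.8% exposed) and 7,184 controls (40% exposed).
a, b = 894, 2874   # exposed cases, exposed controls
c, d = 902, 4310   # unexposed cases, unexposed controls

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"crude OR {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
# crude OR 1.49, 95% CI 1.34 to 1.65 - close to, but not the same as, the adjusted OR of 1.43
```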

There was evidence that risk increased the longer the drug was taken, indicated by the number of days’ worth of benzodiazepines a person was prescribed:

  • having less than about three months’ (up to 90 days) worth of benzodiazepines was not associated with an increase in risk
  • having three to six months’ worth of benzodiazepines was associated with a 32% increase in the odds of Alzheimer’s disease before adjusting for anxiety, depression and insomnia (OR 1.32, 95% CI 1.01 to 1.74) but this association was no longer statistically significant after adjusting for these factors (OR 1.28, 95% CI 0.97 to 1.69)
  • having more than six months’ worth of benzodiazepines was associated with a 74% increase in the odds of Alzheimer’s disease, even after adjusting for anxiety, depression or insomnia (OR 1.74, 95% CI 1.53 to 1.98)
  • the increase in risk was also greater for long-acting benzodiazepines (OR 1.59, 95% CI 1.36 to 1.85) than for short-acting benzodiazepines (OR 1.37, 95% CI 1.21 to 1.55).

 

How did the researchers interpret the results?

The researchers concluded that, “benzodiazepine use is associated with an increased risk of Alzheimer’s disease”. The fact that a stronger association was found with longer periods of taking the drugs supports the possibility that the drugs may be contributing to risk, even if the drugs may also be an early marker of the onset of Alzheimer’s disease.

 

Conclusion

This case control study has suggested that long-term use of benzodiazepines (over six months) may be linked with an increased risk of Alzheimer’s disease in older people. These findings are reported to be similar to other previous studies, but add weight to these by showing that risk increases with increasing length of exposure to the drugs, and with those benzodiazepines that remain in the body for longer.

The strengths of this study include that it could establish when people started taking benzodiazepines and when they had their diagnosis using medical insurance records, rather than having to ask people to recall what drugs they have taken. The database used is also reported to cover 98% of the older people in Quebec, so results should be representative of the population, and controls should be well matched to the cases.

The study also tried to reduce the possibility that the benzodiazepines could be being used to treat symptoms of the early phase of dementia, by only assessing use of these drugs that started at least five years before Alzheimer’s was diagnosed. However, this may not remove the possibility entirely, as some cases of Alzheimer’s take years to progress, which the authors acknowledge.

All studies have limitations. As with all analyses of medical records and prescription data, there is the possibility that some data is missing or not recorded, that there may be a delay in recording diagnoses after the onset of the disease, or that people may not take all of the drugs they are prescribed. The authors considered all of these issues and carried out analyses where possible to assess their likely impact, and concluded that they seemed unlikely to be having a large effect.

There were some factors which could affect Alzheimer’s risk, which were not taken into account because the data was not available (for example, smoking and alcohol consumption habits, socioeconomic status, education or genetic risk).

It is already not recommended that benzodiazepines are used for long periods, as people can become dependent on them. This study adds another potential reason why prescribing these drugs for long periods may not be appropriate.

If you are experiencing problems with insomnia or anxiety (or both), doctors are likely to start with non-drug treatments as these tend to be more effective in the long term. 

Read more about alternatives to drug treatment for insomnia and anxiety.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Anxiety and sleeping pills 'linked to dementia'. BBC News, September 10 2014

Sleeping pills taken by millions linked to Alzheimer's. The Daily Telegraph, September 10 2014

Prescription sleeping pills taken by more than one million Britons 'can raise chance of developing Alzheimer's by 50%'. Daily Mail, September 10 2014

Sleeping pills can increase risk of Alzheimer's by half. Daily Mirror, September 10 2014

Sleeping pills linked to risk of Alzheimer’s disease. Daily Express, September 10 2014

Links To Science

De Gage SB, Moride Y, Ducruet T, et al. Benzodiazepine use and risk of Alzheimer’s disease: case-control study. BMJ. Published online September 9 2014

Categories: NHS Choices

Sibling bullying linked to young adult depression

NHS Choices - Behind the Headlines - Tue, 09/09/2014 - 12:00

“Being bullied regularly by a sibling could put children at risk of depression when they are older,” BBC News reports.

A new UK study followed children from birth to early adulthood. Analysis of more than 3,000 children found those who reported frequent sibling bullying at age 12 were about twice as likely to report high levels of depressive symptoms at age 18.

The children who reported sibling bullying were also more likely to be experiencing a range of challenging situations, such as being bullied by peers, maltreated by an adult, and exposed to domestic violence. While the researchers did take these factors into account, they and other factors could still be having an impact. This means it is not possible to say for certain that frequent sibling bullying is directly causing later mental health problems. However, the results do suggest that it could be a contributor.

As the authors suggest, interventions to target sibling bullying, potentially as part of a programme targeting the whole family, should be assessed to see if they can reduce the likelihood of later psychological problems.

 

Where did the story come from?

The study was carried out by researchers from the University of Oxford and other universities in the UK. The ongoing cohort study was funded by the UK Medical Research Council, the Wellcome Trust and the University of Bristol, and the researchers also received support from the Jacobs Foundation and the Economic and Social Research Council.

The study was published in the peer-reviewed medical journal Pediatrics. The article has been published on an open-access basis so it is available for free online.

This study was well reported by BBC News, which reported the percentage of children in each group (those who had been bullied and those who had not) who developed high levels of depression or anxiety. This helps people to get an idea of how common these things actually were, rather than just saying by how many times the risk is increased.

 

What kind of research was this?

This was a prospective cohort study that assessed whether children who experienced bullying by their siblings were more likely to develop mental health problems in their early adulthood. The researchers say that other studies have found bullying by peers to be associated with increased risk of mental health problems, but the effect of sibling bullying has not been assessed.

A cohort study is the best way to look at this type of question, as it would clearly not be ethical for children to be exposed to bullying in a randomised way. A cohort study allows researchers to measure the exposure (sibling bullying) before the outcome (mental health problems) has occurred. If the exposure and outcome are measured at the same time (as in a cross sectional study) then researchers can’t tell if the exposure could be contributing to the outcome or vice versa.

 

What did the research involve?

The researchers were analysing data from children taking part in the ongoing Avon Longitudinal Study of Parents and Children. The children reported on sibling bullying at age 12, and were then assessed for mental health problems when they were 18 years old. The researchers then analysed whether those who experienced sibling bullying were more at risk of mental health problems.

The cohort study recruited 14,541 women living in Avon who were due to give birth between 1991 and 1992. The researchers collected information from the women, and followed them and their children over time, assessing them at intervals.

When the children were aged 12 years they were sent a questionnaire including questions on sibling bullying, which was described as “when a brother or sister tries to upset you by saying nasty and hurtful things, or completely ignores you from their group of friends, hits, kicks, pushes or shoves you around, tells lies or makes up false rumours about you”. The children were asked whether they had been bullied by their sibling at home in the last six months, how often, what type of bullying and at what age it started.

When the children reached 18 they completed a standardised computerised questionnaire asking about symptoms of depression and anxiety. They were then categorised as having depression or not and any form of anxiety or not, based on the criteria in the International Classification of Diseases (ICD 10). The teenagers were also asked whether they had self-harmed in the past year, and how often.

The researchers also used data on other factors that could affect risk of mental health problems, collected when the children were eight years of age or younger (potential confounders), including any emotional or behaviour problems at age seven, the children’s self-reported depressive symptoms at age 10, and a range of family characteristics. They took these factors into account in their analyses.

 

What were the basic results?

A total of 3,452 children completed both the questionnaires about sibling bullying and mental health problems. Just over half of the children (52.4%) reported never being bullied by a sibling, just over a tenth (11.4%) reported being bullied several times a week, and the remainder (36.1%) reported being bullied but less frequently. The bullying was mainly name calling (23.1%), being made fun of (15.4%), or physical bullying such as shoving (12.7%).

Children reporting bullying by a sibling were more likely to:

  • be girls
  • report frequent bullying by peers
  • have an older brother
  • have three or more siblings
  • have parents from a lower social class
  • have a mother who experienced depression during pregnancy
  • be exposed to domestic violence or mistreatment by an adult
  • have more emotional and behavioural problems at age seven

At 18 years of age, those who reported frequent bullying (several times a week) by a sibling at age 12 were more likely to experience mental health problems than those reporting no bullying:

  • 12.3% of the bullied children had clinically significant depression symptoms compared with 6.4% of those who were not bullied
  • 16.0% experienced anxiety compared with 9.3% 
  • 14.1% had self-harmed in the past year compared with 7.6%

After taking into account potential confounders, frequent sibling bullying was associated with increased risk of clinically significant depression symptoms (odds ratio (OR) 1.85, 95% confidence interval (CI) 1.11 to 3.09) and increased risk of self-harm (OR 2.26, 95% CI 1.40 to 3.66). The link with anxiety did not reach statistical significance after adjusting for potential confounders.
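To see how the reported percentages relate to these odds ratios, the sketch below converts each percentage into odds and takes their ratio. The result is the unadjusted OR, which is a little higher than the published, confounder-adjusted figure:

```python
# Depression symptoms at 18: 12.3% of frequently bullied children vs 6.4% of non-bullied.
p_bullied, p_not_bullied = 0.123, 0.064

odds_bullied = p_bullied / (1 - p_bullied)               # ~0.140
odds_not_bullied = p_not_bullied / (1 - p_not_bullied)   # ~0.068

print(f"unadjusted OR = {odds_bullied / odds_not_bullied:.2f}")  # 2.05, vs adjusted OR 1.85
```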

 

How did the researchers interpret the results?

The researchers concluded that “being bullied by a sibling is a potential risk factor for depression and self-harm in early adulthood”. They suggest that interventions to address this should be designed and tested.

 

Conclusion

The current study suggests that frequent sibling bullying at age 12 is associated with depressive symptoms and self-harm at age 18. The study’s strengths include the fact that it collected data prospectively using standard questionnaires, and followed children up over a long period. It was also a large study, although a lot of children did not complete all of the questionnaires.

The study does have limitations, which include:

  • As with all studies of this type, the main limitation is that although the study did take into account some other factors that could affect the risk of mental health problems, they and other factors could still be having an effect.
  • The study included only one assessment of bullying, at age 12. Patterns of bullying may have changed over time, and a single assessment might miss some children exposed to bullying.
  • Bullying was only assessed by the children themselves. Also collecting parental reports, or those of other siblings, might offer some confirmation of reports of bullying. However, bullying may not always take place when others are present.
  • The depression assessments were by computerised questionnaire. This is not equivalent to a formal diagnosis of depression or anxiety after a full assessment by a mental health professional, but it does indicate the level of symptoms a person is experiencing.
  • A large number of the originally recruited children did not end up completing the questionnaires assessed in the current study (more than 10,000 of the 14,000+ babies starting the study). This could affect the results if certain types of children were more likely to drop out of the study (e.g. those with more sibling bullying). However, the children who dropped out after age 12 did not differ in their sibling bullying levels from those who stayed in the study, and analyses using estimates of their data did not have a large effect on results. Therefore, the researchers considered that this loss to follow-up did not appear to be affecting their analyses.

While it is not possible to say for certain that frequent sibling bullying is directly causing later mental health problems, the study does suggest that it could be a contributor. It is also clear that the children experiencing such sibling bullying are also more likely to be experiencing a range of challenging situations, such as being bullied by peers, maltreated by an adult, and exposed to domestic violence.

As the authors say, the findings suggest that interventions to target sibling bullying, potentially as part of a programme targeting the whole family, should be assessed to see if they can reduce the likelihood of later psychological problems.

Read more about bullying, how to spot the signs and what to do if you suspect your child is being bullied (or is a bully themselves).

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Sibling bullying increases depression risk. BBC News, September 8 2014

Lasting toll of bullying by a sibling: Brothers or sisters who are regularly picked on 'more likely to be depressed or take an overdose'. Daily Mail, September 9 2014

Links To Science

Bowes L, Wolke D, Joinson C, et al. Sibling Bullying and Risk of Depression, Anxiety, and Self-Harm: A Prospective Cohort Study. Pediatrics. Published online September 8 2014

Categories: NHS Choices

Regular walking breaks 'protect arteries'

NHS Choices - Behind the Headlines - Tue, 09/09/2014 - 11:29

“Just a five-minute walk every hour helps protect against damage of sitting all day,” the Mail Online reports.

A study of 12 healthy but inactive young men found that if they sat still without moving their legs for three hours, the walls of their main leg artery showed signs of decreased flexibility. However, this was “prevented” if the men took five-minute light walking breaks about every hour.

Less flexibility in the walls of the arteries has been linked to atherosclerosis (hardening and narrowing of the arteries), which increases the risk of heart disease.

However, it is not possible to say from this small and short-term study whether taking walking breaks would definitely reduce a person’s risk of heart disease.

There is a growing body of evidence that spending more time in sedentary behaviour such as sitting can have adverse health effects – for example, a 2014 study found a link between sedentary behaviour and increased risk of chronic diseases.

While this study may not be definitive proof of the benefits of short breaks during periods of inactivity, having such breaks isn’t harmful, and could turn out to be beneficial.

 

Where did the story come from?

The study was carried out by researchers from the Indiana University Schools of Public Health and Medicine. It was funded by the American College of Sports Medicine Foundation, the Indiana University Graduate School and School of Public Health.

The study has been accepted for publication in the peer-reviewed journal Medicine & Science in Sports & Exercise.

The coverage in the Mail Online and the Daily Express is accurate though uncritical, not highlighting any of the research's limitations.

 

What kind of research was this?

This was a small crossover randomised controlled trial assessing the effect of breaks in sitting time on one measure of cardiovascular disease risk: flexibility of the walls of arteries.

The researchers report that sitting for long periods of time has been associated with increased risk of chronic diseases and death, and this may be independent of how physically active a person is when they are not sitting. This is arguably more an issue now than it would have been in the past, as a lot of us have jobs where sitting (sedentary behaviour) is the norm.

Short breaks from sitting are reported to be associated with a lower waist circumference, and with improvements in levels of fats and sugar in the blood.

A randomised controlled trial is the best way to assess the impact of an intervention on outcomes.

 

What did the research involve?

The researchers recruited 12 inactive, but otherwise healthy, non-smoking men of normal weight. These men were asked to sit for two three-hour sessions. During one session (called SIT), they sat on a firmly cushioned chair without moving their lower legs. In the other (called ACT), they sat on a similar chair but got up and walked on a treadmill next to them at a speed of two miles an hour for five minutes, three times during the session. The sessions were carried out between two and seven days apart, and the order in which each man took part in these sessions was allocated at random.

The researchers measured how rapidly the walls of the superficial femoral artery recovered from being compressed by a blood pressure cuff for five minutes. The femoral artery is the main artery supplying blood to the leg. The “superficial” part refers to the part that continues down the thigh after a deeper branch has divided off near the top of the leg.

The researchers took these measurements at the start of each session, and then at hourly intervals. The person taking the measurements did not know which type of session (SIT or ACT) the person was taking part in. The researchers compared the results obtained during the SIT and ACT sessions, to see if there were any differences.

 

What were the basic results?

The researchers found that the widening of the artery in response to blood flow (called flow-mediated dilation) reduced over three hours spent sitting without moving. However, getting up for five-minute walks in this period stopped this from happening. The researchers did not find any difference between the trials in another measure of what is going on in the arteries, called the “shear rate” (a measurement of how well a fluid flows through a channel such as a blood vessel).

 

How did the researchers interpret the results?

The researchers concluded that light hourly activity breaks taken during three hours of sitting prevented a significant reduction in the speed at which the main leg artery recovered after compression. They say that these findings are “the first experimental evidence of the effects of prolonged sitting on human vasculature, and are important from a public health perspective”.

 

Conclusion

This small and very short-term crossover randomised controlled trial has suggested that sitting still for long periods of time causes the walls of the main artery in the leg to become less flexible, and that having five-minute walking breaks about every hour can prevent this.

The big question is: does this have any effect on our health?

The flexibility of arteries (or in this case, one particular artery) is used as what is called a “proxy” or “surrogate” marker for a person’s risk of cardiovascular disease. However, just because these surrogate markers improve, this does not guarantee that a person will have a lower risk of cardiovascular disease. Longer-term trials are needed to determine this.

The potential adverse effects of spending a lot of time sitting, independent of a person’s physical activity, is currently a popular area of study. Standing desks are becoming increasingly popular in the US, so people spend most of their working day on their feet. Some even bring a treadmill into their office (see this recent BBC News report on desk treadmills).

Researchers are particularly interested in whether taking breaks from unavoidable periods of sitting could potentially reduce any adverse effects, but this research is still at an early stage. In the interim, it is safe to say that having short breaks from periods of inactivity isn’t harmful, and could turn out to be beneficial.

There has been a rapid advancement in human civilisation over the past 10,000 years. We have bodies that evolved to spend a large part of the day on our feet, hunting and gathering, but we now have lifestyles that encourage us to sit around all day. It could be that this mismatch is partially to blame for the rise in non-infectious chronic diseases, such as type 2 diabetes and heart disease.

If you feel brave enough, why not take on the NHS Choices 10,000 steps a day challenge, which should help build stamina, burn excess calories and give you a healthier heart.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Here's a good excuse to get up from your desk: Just a five-minute walk every hour helps protect against damage of sitting all day. Mail Online, September 8 2014

Walking five minutes at work can undo damage of long periods of sitting. Daily Express, September 8 2014

Links To Science

Thosar SS, Bielko SL, Mather KJ, et al. Effect of Prolonged Sitting and Breaks in Sitting Time on Endothelial Function. Medicine & Science in Sports & Exercise. Published online September 8 2014

Categories: NHS Choices

Ebola vaccine hope after successful animal study

NHS Choices - Behind the Headlines - Mon, 08/09/2014 - 11:29

“Hopes for an effective Ebola vaccine have been raised after trials of an experimental jab found that it gave monkeys long-term protection,” The Guardian reports. An initial animal study found that a new vaccine boosted immunity.

Ebola is an extremely serious and often fatal viral infection that can cause internal bleeding and organ failure.

It can be spread via contaminated body fluids such as blood and vomit.

Researchers tested vaccines based on chimpanzee viruses, which were genetically modified to not be infectious and to produce proteins normally found in Ebola viruses. As with all vaccines, the aim is to teach the immune system to recognise and attack the Ebola virus if it comes into contact with it again.

They found that a single injection of one form of the vaccine protected macaques (a common type of monkey) against what would usually be a lethal dose of Ebola five weeks later. If they combined this with a second booster injection eight weeks later, then the protection lasted for at least 10 months.

The quest for a vaccine is a matter of urgency, due to the current outbreak of Ebola in West Africa.

Now that these tests have shown promising results, human trials have started in the US. Given the ongoing threat of Ebola, this type of vaccine research is important in finding a way to protect against infection.

 

Where did the story come from?

The study was carried out by researchers from the National Institutes of Health (NIH) in the US, and other research centres and biotechnology companies in the US, Italy and Switzerland. Some of the authors declared that they claimed intellectual property on gene-based vaccines for the Ebola virus. Some of them were named inventors on patents or patent applications for either chimpanzee adenovirus or filovirus vaccines.

The study was funded by the NIH and was published in the peer-reviewed journal Nature Medicine.

The study was reported accurately by the UK media.

 

What kind of research was this?

This was animal research that aimed to test whether a new vaccine against the Ebola virus could produce a long-lasting immune response in non-human primates.

The researchers were testing a vaccine based on a chimpanzee virus from the family of viruses that causes the common cold in humans, called adenovirus. The researchers were using the chimpanzee virus rather than the human one, as the chimpanzee virus is not recognised and attacked by the human immune system.

The virus is essentially a way to get the vaccine into the cells, and is genetically engineered to not be able to reproduce itself, and therefore not spread from person to person or through the body. Other studies have tested chimp virus-based vaccines for other conditions in mice, other primates and humans.

To make a vaccine, the virus is genetically engineered to produce certain Ebola virus proteins. The idea is that exposing the body to the virus-based vaccine “teaches” the immune system to recognise, remember and attack these proteins. Later, when the body comes into contact with the Ebola virus, it can then rapidly produce an immune response to it.

This type of research in primates is the last stage before a vaccine is tested in humans. Primates are used in these trials due to their biological similarities to humans, which makes it less likely that humans will respond very differently to the vaccine.

 

What did the research involve?

Chimpanzee adenoviruses were genetically engineered to produce either a protein found on the surface of the Zaire form of the Ebola virus, or both this protein and another found on the Sudan form of the Ebola virus. These two forms of the Ebola virus are reported to be responsible for more deaths than other forms of the virus.

They then injected these vaccines into the muscle of crab-eating macaques and looked at whether they produced an immune response when later injected with the Ebola virus. This included looking at which vaccine produced a greater immune response, how long this effect lasted and whether giving a booster injection made the response last longer. The individual experiments used between four and 15 macaques.

 

What were the basic results?

In their first experiment, the researchers found that macaques given the vaccines survived when injected with what would normally be a lethal dose of Ebola virus five weeks after vaccination. Using a lower dose protected fewer of the vaccinated macaques.

The vaccine used in these tests was based on a form of the chimpanzee adenovirus called ChAd3. Vaccines based on another form of the virus called ChAd63, or on another type of virus called MVA, did not perform as well at protecting the macaques. A detailed assessment of the macaques' immune responses suggested that this might be due to the ChAd3-based vaccine producing a bigger response in one type of immune system cell (called T-cells).

The researchers then looked at what happened if vaccinated monkeys were given a potentially lethal dose of Ebola virus 10 months after vaccination. They did this with groups of four macaques given different doses and combinations of the vaccines against both forms of Ebola virus, given as a single injection or with a booster. They found that a single high-dose vaccination with the ChAd3-based vaccine protected only half (two) of the four macaques. All four macaques survived if they were given an initial vaccination with the ChAd3-based vaccine, followed by an MVA-based booster eight weeks later. Other approaches performed less well.

 

How did the researchers interpret the results?

The researchers concluded that they had shown short-term immunity against the Ebola virus could be achieved in macaques with a single vaccination, and long-term immunity if a booster was given. They state that: “This vaccine will be beneficial for populations at acute risk during natural outbreaks, or others with a potential risk of occupational exposure.”

 

Conclusion

This study has shown the potential of a new chimpanzee adenovirus-based vaccine against the Ebola virus, tested in macaques. The quest for a vaccine is seen as urgent, due to the ongoing outbreak of Ebola in West Africa. Animal studies such as this are needed to ensure that any new vaccines are safe, and that they look like they will have an effect. Macaques were used for this research because they, like humans, are primates – therefore, their responses to the vaccine should be similar to what would be expected in humans.

Now that these tests have shown promising results, the first human trials have started in the US, according to reports by BBC News. These trials will be closely monitored to determine the safety and efficacy of the vaccine in humans as, unfortunately, this early success does not guarantee that it will work in humans. Given the ongoing threat of Ebola, this type of vaccine research is important to protect against infection.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Hopes raised as Ebola vaccine protects monkeys for 10 months. The Guardian, September 7 2014

Vaccine gives monkeys Ebola immunity. BBC News, September 7 2014

Breakthrough as experimental Ebola vaccines protect monkeys from epidemic for 10 months. Mail Online, September 7 2014

Links To Science

Stanley DA, Honko AN, Asiedu C, et al. Chimpanzee adenovirus vaccine generates acute and durable protective immunity against ebolavirus challenge. Nature Medicine. Published online September 7 2014

Categories: NHS Choices

Wearing a bra 'doesn't raise breast cancer risk'

NHS Choices - Behind the Headlines - Mon, 08/09/2014 - 01:00

“Scientists believe they have answered the decades long debate on whether wearing a bra can increase your risk of cancer,” reports The Daily Telegraph.

There is an "urban myth" that wearing a bra disrupts the workings of the lymphatic system (an essential part of the immune system), which could lead to a build-up of toxins inside breast tissue, increasing the risk of cancer. New research suggests that this fear may be unfounded.

The study compared the bra-wearing habits of 1,044 postmenopausal women with two common types of breast cancer with those of 469 women who did not have breast cancer. It found no significant difference between the groups in bra wearing habits such as when a woman started wearing a bra, whether she wore an underwired bra, and how many hours a day she wore a bra.

The study had some limitations, such as relatively limited matching of characteristics of women with and without cancer. Also, as most women wear a bra, they could not compare women who never wore a bra versus those who wore a bra.

Despite the limitations, as the authors of the study say, the findings provide some reassurance that your bra-wearing habits do not seem to increase risk of postmenopausal breast cancer.

While not all cases of breast cancer are thought to be preventable, maintaining a healthy weight, moderating your consumption of alcohol and taking regular exercise should help lower your risk.

 

Where did the story come from?

The study was carried out by researchers from Fred Hutchinson Cancer Research Center in the US.

It was funded by the US National Cancer Institute.

The study was published in the peer-reviewed medical journal Cancer Epidemiology Biomarkers & Prevention.

The Daily Telegraph and the Mail Online covered this research in a balanced and accurate way.

However, suggestions that women who wore bras were compared with “their braless counterparts” are incorrect. Only one woman in the study never wore a bra, and she was not included in the analyses. The study was essentially comparing women who all wore bras, but who started at different ages, wore them for different lengths of time during the day, or wore different types (underwired or not).

 

What kind of research was this?

This was a case-control study looking at whether wearing a bra increases risk of breast cancer.

The researchers say there has been some suggestion in the media that bra wearing might increase risk, but that there is little in the way of hard evidence to support the claim.

A case-control study compares what people with and without a condition have done in the past, to get clues as to what might have caused the condition.

If women who had breast cancer wore bras more often than women who did not have the disease, this might suggest that bras could be increasing risk. One of the main limitations to this type of study is that it can be difficult for people to remember what has happened to them in the past, and people with a condition may remember things differently than those who don’t have the condition.

Also, it is important that researchers make sure that the group without the condition (the controls) are coming from the same population as the group with the condition (cases).

This reduces the likelihood that differences other than the exposure of interest (bra wearing) could contribute to the condition.

 

What did the research involve?

The researchers enrolled postmenopausal women with (cases) and without breast cancer (controls) from one area in the US. They interviewed them to find out detailed information about their bra wearing over the course of their lives, as well as other questions. They then statistically assessed whether the cases had different bra-wearing habits to the controls.

The cases were identified using the region’s cancer surveillance registry data for 2000 to 2004. Women had to be between 55 and 74 years old when diagnosed. The researchers identified all women diagnosed with one type of invasive breast cancer (lobular carcinoma or ILC), and a random sample of 25% of the women with another type (ductal carcinoma). For each ILC case, a control woman who was aged within five years of the case’s age was selected at random from the general population in the region. The researchers recruited 83% of the eligible cases (1,044 of 1,251 women) and 71% of eligible controls (469 of 660 women).

The in-person interviews asked about various aspects of past bra wearing (up to the point of diagnosis with cancer, or the equivalent date for controls):

  • bra sizes
  • age at which they started regularly wearing a bra
  • whether they wore a bra with an underwire
  • number of hours per day a bra was worn
  • number of days per week they wore a bra at different times in their life
  • whether their bra-wearing patterns ever changed during their life

Only one woman reported never wearing a bra, and she was excluded from the analysis.

The women were also asked about other factors that could affect breast cancer risk (potential confounders), including:

  • whether they had children
  • body mass index (BMI)
  • medical history
  • family history of cancer
  • use of hormone replacement therapy (HRT)
  • demographic characteristics

The researchers compared bra-wearing characteristics between cases and controls, taking into account potential confounders. The potential confounders were found to not have a large effect on results (10% change in odds ratio [OR] or less), so results adjusting for these were not reported. If the researchers just analysed data for women who had not changed their bra-wearing habits over their lifetime, the results were similar to overall results, so these were also not reported.
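The “10% change in odds ratio” criterion mentioned here is a common rule of thumb for deciding whether a variable is acting as a meaningful confounder; a minimal sketch with illustrative figures only:

```python
def meaningfully_confounds(or_crude: float, or_adjusted: float, threshold: float = 0.10) -> bool:
    """Change-in-estimate rule: flag a confounder if adjustment shifts the OR by more than 10%."""
    return abs(or_adjusted - or_crude) / or_crude > threshold

# Illustrative: adjustment barely moves the estimate, so adjusted results need not be reported.
print(meaningfully_confounds(1.90, 1.85))  # False
```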

 

What were the basic results?

The researchers found that some characteristics varied between groups – cases were slightly more likely than controls to:

  • have a current BMI less than 25
  • be currently using combined HRT
  • have a close family history of breast cancer
  • have had a mammogram in the past two years
  • have experienced natural menopause (as opposed to medically induced menopause)
  • have no children

The only bra characteristic that showed some potential evidence of being associated with breast cancer was cup size (which will reflect breast size). Women who wore an A cup bra were more likely to have invasive ductal cancer than those with a B cup bra (OR 1.9, 95% confidence interval [CI] 1.0 to 3.3).

However, the confidence intervals show that this increase in risk was only just significant, as they show that it is just possible that the risk in both groups is equivalent (an odds ratio of 1). If lower bra cup size was truly associated with increased breast cancer risk, the researchers would expect to see reducing risk as cup sizes got bigger. However, they did not see this trend across the other cup sizes, suggesting that there wasn’t a true relationship between cup size and breast cancer risk.

None of the other bra-wearing characteristics were statistically significantly different between cases with either type of invasive breast cancer and controls.

 

How did the researchers interpret the results?

The researchers concluded that their findings “provided reassurance to women that wearing a bra does not seem to increase the risk of the most common histologic types of postmenopausal breast cancer”.

 

Conclusion

This study suggests that past bra-wearing characteristics are not associated with breast cancer risk in postmenopausal women. The study does have some limitations:

  • There was only limited matching of the cases and controls, which could mean that other differences between the groups may be contributing to results. The potential confounders assessed were reported to not have a large impact on the results, which suggests that the lack of matching may not be having a large effect, but these results were not shown to allow assessment of this by the reader.
  • Controls were not selected for the women with invasive ductal carcinoma, only those with invasive lobular carcinoma.
  • As most women wear bras, but may differ in their bra-wearing habits (e.g. when they started wearing a bra or whether they wore an underwired bra), it wasn’t possible to compare the effect of wearing a bra versus not wearing a bra at all.
  • It may be difficult for women to remember their bra-wearing habits a long time ago, for example, exactly when they started wearing a bra, and their estimations may not be entirely accurate. As long as both cases and controls have the same likelihood of these inaccuracies in their reporting, this should not bias results. However, if women with cancer remember their bra wearing differently, for example, if they think it may have contributed to their cancer, this could bias results.
  • There were relatively small numbers of women in the control group, and once they were split up into groups with different characteristics, the number of women in some groups was relatively small. For example, only 17 women in the control group wore an A cup bra. These small numbers may mean some figures are less reliable.
  • The findings are limited to breast cancer risk in postmenopausal women.

While this study does have limitations, as the authors say, it provides some level of reassurance for women that bra wearing does not seem to increase the risk of breast cancer.

While not all cases of breast cancer are thought to be preventable, maintaining a healthy weight, moderating your consumption of alcohol and taking regular exercise should help lower your risk. Read more about how to reduce your breast cancer risk.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Wearing a bra does not increase breast cancer risk, study finds. The Daily Telegraph, September 5 2014

Wearing a bra will NOT cause breast cancer - even if it's underwired or you wear it all year long, study finds. Mail Online, September 5 2014

Links To Science

Chen L, Malone KE, Li C I. Bra Wearing Not Associated with Breast Cancer Risk: A Population-Based Case-Control Study. Cancer Epidemiology Biomarkers and Prevention. Published online September 5 2014

Categories: NHS Choices

Gay people have 'poorer health' and 'GP issues'

NHS Choices - Behind the Headlines - Fri, 05/09/2014 - 12:30

“Lesbians, gays and bisexuals are more likely to have longstanding mental health problems,” The Independent reports, as well as “bad experiences with their GP”. A UK survey found striking disparities in survey responses compared to heterosexuals.

The news is based on the results of a survey in England of more than 2 million people, including over 27,000 people who described themselves as gay, lesbian or bisexual.

It found that sexual minorities were two to three times more likely to report having longstanding psychological or emotional problems and significantly more likely to report fair/poor health than heterosexuals.

People who described themselves as bisexuals had the highest rates of reported psychological or emotional problems. The researchers speculate that this could be due to a “double discrimination” effect; homophobia from the straight community as well as being stigmatised by the gay and lesbian communities as not being “properly gay” (biphobia).

Sexual minorities were also more likely to report unfavourable experiences with nurses and doctors in a GP setting.

Unfortunately this study cannot tell us the reasons for the differences reported in either health or relationships with GPs.

The results of this survey would certainly seem to suggest that there is room for improvement in the standard and focus of healthcare offered to gay, lesbian and bisexual people.

 

Where did the story come from?

The study was carried out by researchers from the RAND Corporation (a non-profit research organisation), Boston Children’s Hospital/Harvard Medical School and the University of Cambridge. The study was funded by the Department of Health (England).

The study was published in the peer-reviewed Journal of General Internal Medicine. This article is open-access so is free to read online.

The results of this study were well reported by The Independent and The Guardian.

 

What kind of research was this?

This was a cross-sectional study that aimed to compare the health and healthcare experiences of sexual minorities with heterosexual people of the same gender, adjusting for age, race/ethnicity and socioeconomic status.

A cross-sectional study collects data at one point in time so it is not able to prove any direct cause and effect relationships. It can be useful in highlighting possible associations that can then be investigated further.

 

What did the research involve?

The researchers analysed data from the 2009/10 English General Practice Patient Survey.

The survey was mailed to 5.56 million randomly sampled adults registered with a National Health Service general practice (it is estimated that 99% of England’s adult population are registered with an NHS GP). In all, 2,169,718 people responded (39% response rate).

People were asked about their health, healthcare experiences and personal characteristics (race/ethnicity, religion and sexual orientation).

The question about sexual orientation is the same one used in UK Office for National Statistics social surveys: “Which of the following best describes how you think of yourself?”

  • heterosexual/straight
  • gay/lesbian
  • bisexual
  • other
  • I would prefer not to say

Of the respondents 27,497 people described themselves as gay, lesbian, or bisexual.

The researchers analysed the responses to questions concerning health status and patient experience.

People were asked about their general health status (“In general, would you say your health is: excellent, very good, good, fair, or poor?”) and whether they had one of six long-term health problems, including a longstanding psychological or emotional condition.

The researchers looked to see whether people had reported:

  • having “no” trust or confidence in the doctor
  • “poor” or “very poor” to at least one of the doctor communication measures of giving enough time, asking about symptoms, listening, explaining tests and treatments, involving in decisions, treating with care and concern, and taking problems seriously
  • “poor” or “very poor” to at least one of the nurse communication measures
  • being “fairly” or “very” dissatisfied with care overall

The researchers compared the responses from sexual minorities and heterosexuals of the same gender after controlling for age, race/ethnicity and deprivation.

 

What were the basic results?

Both male and female sexual minorities were two to three times more likely to report having a longstanding psychological or emotional problem than their heterosexual counterparts. Problems were reported by 5.2% of heterosexual men compared to 10.9% of gay men and 15% of bisexual men, and by 6.0% of heterosexual women compared to 12.3% of lesbian women and 18.8% of bisexual women.

Both male and female sexual minorities were also more likely to report fair/poor health. Fair/poor health was reported by 19.6% of heterosexual men compared to 21.9% of gay men and 26.4% of bisexual men, and by 20.5% of heterosexual women compared to 24.9% of lesbian women and 31.6% of bisexual women.

Negative healthcare experiences were uncommon in general, but sexual minorities were about one-and-a-half times more likely than heterosexual people to report unfavourable experiences with each of four aspects of primary care:

  • no trust or confidence in the doctor was reported by 3.6% of heterosexual men compared to 5.6% of gay men (4.3% of bisexual men, a difference from heterosexual men that was not statistically significant), and by 3.9% of heterosexual women compared to 5.3% of lesbian women and 5.3% of bisexual women
  • poor/very poor doctor communication was reported by 9.0% of heterosexual men compared to 13.5% of gay men and 12.5% of bisexual men, and by 9.3% of heterosexual women compared to 11.7% of lesbian women and 12.8% of bisexual women
  • poor/very poor nurse communication was reported by 4.2% of heterosexual men compared to 7.0% of gay men and 7.3% of bisexual men, and by 4.5% of heterosexual women compared to 7.8% of lesbian women and 6.7% of bisexual women
  • being fairly/very dissatisfied with care overall was reported by 3.8% of heterosexual men compared to 5.9% of gay men and 4.9% of bisexual men, and by 3.9% of heterosexual women compared to 4.9% of lesbian women (4.2% of bisexual women, a difference from heterosexual women that was not statistically significant)

 

How did the researchers interpret the results?

The researchers concluded that “sexual minorities suffer both poorer health and worse healthcare experiences. Efforts should be made to recognise the needs and improve the experiences of sexual minorities. Examining patient experience disparities by sexual orientation can inform such efforts”.

 

Conclusion

This study found that sexual minorities were two to three times more likely to report having longstanding psychological or emotional problems, and significantly more likely to report fair/poor health, than heterosexuals.

Sexual minorities were also more likely to report unfavourable experiences with nurses and doctors in a GP setting.

It should also be noted that response rates to the survey were low, with only 39% of people responding. It is unknown whether the results would have been different if more people had responded.

While potential reasons for these disparities may include the stress induced by homophobic attitudes, or the suspicion that a GP disapproves of their patient’s sexuality, these speculations are unproven.

As it stands, this study cannot tell us the reasons for the differences reported. However, it would suggest that healthcare providers need to do more to meet the needs of gay, lesbian and bisexual people.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Gay people more likely to have mental health problems, survey says. The Independent, September 4 2014

Gay people report worse experiences with GPs. The Guardian, September 4 2014

Links To Science

Elliott MN, Kanouse DE, Burkhart Q, et al. Sexual Minorities in England Have Poorer Health and Worse Health Care Experiences: A National Survey. Journal of General Internal Medicine. Published online September 4 2014


1 in 5 child deaths 'preventable'

NHS Choices - Behind the Headlines - Fri, 05/09/2014 - 12:00

“One in five child deaths ‘preventable’,” reports BBC News.

The headline was prompted by the publication of a three-part series of papers on child death in high-income countries published in The Lancet.

The reviews outlined the need for child death reviews to identify modifiable risk factors, and described patterns of child mortality at different ages across five broad categories: perinatal causes, congenital abnormalities, acquired natural causes, external causes, and unexplained deaths. They also described contributory factors to death across four broad domains: biological and psychological factors, the physical environment, the social environment, and health and social service delivery.

Although the series did report that one in five child deaths are preventable, it should be noted that this is not a new figure: it was published by the government in 2011.

Leading causes of preventable child deaths in the UK highlighted by the authors include accidents, abuse, neglect and suicide.

The authors also argue that child poverty and income inequality have a significant effect on risk factors for preventable child death and they are quoted in the media as calling for the government to do more in tackling child poverty.

 

Where did the story come from?

The series of papers was written by researchers from the University of Warwick in collaboration with researchers from universities and research institutes around the world. The source of funding for this series of three papers was not reported.

The series was published in the peer-reviewed medical journal The Lancet. All three papers are open access, so they are free to read online (though you will need to register with The Lancet website); the links are listed under Links To Science at the end of this article.

 

Child death reviews

The first paper in the series discussed child death reviews, which have been developed in several countries. These aim to develop a greater understanding of how and why children die, which could lead to the identification of factors that could potentially be modified to reduce further deaths.

In England, multiagency rapid-response teams investigate all unexpected deaths of children aged 0-18 years. However, lessons learned from child death reviews are yet to be translated into large-scale policy initiatives, although local actions have been taken.

The researchers also report that it has not yet been assessed whether child death reviews have led to a reduction in national child death rates.

They also suggest that child death reviews could be extended to child deaths in hospital.

 

Patterns of death in England and Wales

The second paper in the series discussed the pattern of child death in England and Wales at different ages across five broad categories (perinatal causes, congenital abnormalities, acquired natural causes, external causes, and unexplained deaths).

It found that more than 5,000 infants, children and adolescents die every year in England and Wales.

Mortality is highest in infancy, dropping to very low rates in the middle childhood years, before rising again in adolescence.

Patterns of mortality vary with age and sex; perinatal and congenital causes predominate in infancy, with acquired natural causes (for example infections or neurological, respiratory and cardiovascular disorders) becoming prominent in later childhood and adolescence.

More than 50% of adolescent deaths occur from external causes, which include traffic deaths, non-intentional injuries (for example, falls), fatal maltreatment and death from assault, suicide and deliberate self-harm.

Deaths of children diagnosed with life-limiting disorders (disorders that are likely to reduce a child’s lifespan) might account for 50% or more of all child mortality in England and Wales.

 

Why do children die in high-income countries?

In the third review of the series the researchers summarised the results of key studies that described contributory factors to child death across four broad domains:

  • Intrinsic (genetic and biological) factors that are associated with child mortality include sex, ethnic origin, gestation and growth characteristics, disability and behaviour.
  • Physical environment, for example the home and surrounding area, including access to firearms (a particular problem in the US) and poisons.
  • Social environment (for example socioeconomic status, parental characteristics, parenting behaviours, family structures, and social support).
  • Service delivery: the delivery of healthcare (including national policy, healthcare services and the individual doctor) and the delivery of other welfare services (such as housing, welfare benefits and social care).

 

What do the researchers suggest?

In an accompanying editorial the researchers suggest that:

  • co-ordinated strategies that reduce antenatal and perinatal risk factors are essential
  • further research is needed into preventative interventions for preterm birth
  • efforts are needed to prevent child deaths due to acquired natural causes, including improved recognition of severity of illness
  • preventative strategies involving collaboration between health authorities and other agencies, including social, education, environmental, police and legal services, industry, and consumer groups are needed to prevent deaths due to external causes

 

Conclusion

A case could be made that this series of reports is more in the realm of political debate than health and medicine.

The lead author, Dr Peter Sidebotham, is quoted in The Daily Telegraph as saying: "It needs to be recognised that many child deaths could be prevented through a combination of changes in long-term political commitment, welfare services to tackle child poverty, and healthcare services.

"Politicians should recognise that child survival is as much linked to socioeconomic policies that reduce inequality as it is to a country's overall gross domestic product and systems of healthcare delivery."

While most of us would agree that reducing child poverty and income inequality is a good thing, exactly how we go about achieving these goals is a matter of heated debate.

Those on the Right of the political spectrum have argued that stimulating the economic activity of the free market will provide opportunities to lift people out of poverty. Those on the Left have argued that redistributing wealth through taxation can help create a safety net that stops children falling into poverty.

Seeing as this argument has been raging for centuries, we do not expect a resolution to the debate anytime soon.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

One in five child deaths 'preventable'. BBC News, September 5 2014

One in five child deaths in England is preventable, study finds. The Guardian, September 5 2014

Fifth of child deaths are preventable. The Daily Telegraph, September 5 2014

1 in 5 child deaths 'could be prevented and Government must take immediate action'. Daily Mirror, September 5 2014

One In Five Child Deaths Are 'Preventable'. Sky News, September 5 2014

Links To Science

Fraser J, Sidebotham P, Frederick J, et al. Learning from child death review in the USA, England, Australia, and New Zealand. The Lancet. Published online September 5 2014

Sidebotham P, Fraser J, Fleming P, et al. Patterns of child death in England and Wales. The Lancet. Published online September 5 2014

Sidebotham P, Fraser J, Covington T, et al. Understanding why children die in high-income countries. The Lancet. Published online September 5 2014


How immunotherapy may treat multiple sclerosis

NHS Choices - Behind the Headlines - Thu, 04/09/2014 - 12:05

“Breakthrough hope for MS treatment as scientists discover how to ‘switch off’ autoimmune diseases,” reports the Mail Online.

Autoimmune disorders, such as multiple sclerosis (MS), occur when the body’s immune system attacks and destroys healthy body tissue by mistake.

The “holy grail” of treatment is to make the immune system tolerant to the part of the body that it is attacking, while still allowing the immune system to work effectively.

Previous studies in mice have shown that tolerance can be achieved by repeatedly exposing mice with autoimmune disorders to fragments of the components that the immune system is attacking and destroying. The immune cells that were attacking the healthy tissue convert into regulatory cells that actually dampen the immune response.

This is similar to the approach that has been used to treat allergies (immunotherapy).

It is known that doses of the fragments of the components that the immune system is attacking need to start low before increasing – this is known as the dose-escalation protocol.

A new mouse study has found that a carefully calibrated dose-escalation protocol caused changes in gene activity (gene expression). These changes led the attacking immune cells to express regulatory genes and become suppressive, so rather than attacking healthy tissue, they now help protect it against further attack.

The researchers hope that some of the changes in immune cells and in gene expression that they have identified can be used in clinical studies to determine whether immunotherapy is working.

 

Where did the story come from?

The study was carried out by researchers from the University of Bristol and University College London and was funded by the Wellcome Trust, MS Society UK, the Batchworth Trust and the University of Bristol.

The study was published in the peer-reviewed journal Nature Communications. This article is open-access and can be read for free.

Although most of the media reporting was accurate, it should be noted that the current study focused on how dose-escalation therapy works rather than revealing it as a new discovery.

The principles underpinning immunotherapy and similar treatments have been known for many years.

 

What kind of research was this?

This was an animal study that aimed to improve the understanding of how dose-escalation therapy works so that it can be made more effective and safer.

Animal studies are the ideal type of study to answer this sort of basic science question.

 

What did the research involve?

Most of the experiments were performed in mice that were engineered to develop autoimmune encephalomyelitis, which has similarities to multiple sclerosis (MS).

In this mouse model, more than 90% of a subset of immune cells called CD4+ T cells recognise myelin basic protein, which is found in the myelin sheath that surrounds nerve cells. This causes the immune system to attack the myelin sheath, damaging it, which causes nerve signals to slow down or stop.

The researchers injected the mice subcutaneously (under the skin) with a peptide (small protein) that corresponded to the region of myelin basic protein that was recognised by the CD4+ T cells.

The researchers initially wanted to establish the maximum dose of peptide that could be tolerated, and which dose was most effective at inducing tolerance.

They then did further experiments in which they increased the dose of peptide and compared that with just giving the same dose of peptide on multiple days.

Finally, they looked at what genes were being expressed or repressed in CD4+ T cells during dose-escalation.

 

What were the basic results?

The researchers found that the maximum dose of peptide that could be tolerated safely by the mice was 8µg (micrograms).

The tolerance to the peptide increased as peptide dose increased. This means that when the mice were re-challenged with peptide, the immune response was lower in mice that received 8µg of peptide compared to mice that had received lower doses.

The researchers found that dose escalation was critical for effective immunotherapy. If mice received 0.08µg on day 1, 0.8µg on day 2, and 8µg on day 3, they could then tolerate doses of 80µg with no adverse effects. In addition, this dose escalation protocol suppressed activation and proliferation of the CD4+ T cells in response to the peptide.
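
To make the shape of this protocol concrete, the doses above rise tenfold each day. A minimal sketch that generates such a schedule (illustrative only – this is not the researchers' code):

```python
# Generates the tenfold dose-escalation schedule described above:
# 0.08ug on day 1, 0.8ug on day 2, 8ug on day 3.
def escalation_schedule(start_ug=0.08, factor=10, steps=3):
    return [start_ug * factor ** day for day in range(steps)]

print(escalation_schedule())         # [0.08, 0.8, 8.0]
print(escalation_schedule(steps=4))  # a fourth step reaches the 80ug dose
```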

The researchers then looked at the gene expression within CD4+ T cells during dose escalation. They found that each escalating dose of peptide treatment modified the genes that were expressed. Genes that are associated with an inflammatory response were repressed and genes that are associated with regulatory processes were induced.

How did the researchers interpret the results?

The researchers concluded that “these findings reveal the critical importance of dose escalation in the context of antigen-specific immunotherapy, as well as the immunological and transcriptional signatures associated with successful self-antigen escalation dose immunotherapy”.

They go on to say that “with the immunological and transcriptional evidence provided in this study, we anticipate that these molecules can now be investigated as surrogate markers for antigen-specific tolerance induction in clinical trials”.

 

Conclusion

This study, using a mouse model of MS, found that the dose-escalation protocol is extremely important for inducing tolerance – in this case, tolerance to a small fragment of myelin basic protein.

Escalation dose immunotherapy minimised immune system activation and proliferation during the early stages, and caused changes in gene expression that caused the attacking immune cells to express regulatory genes and to become suppressive.

The researchers hope that some of the changes in immune cells and in gene expression that they have identified can be used in clinical studies of tolerance-inducing treatments for autoimmune disorders to determine whether therapy is working.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Breakthrough hope for MS treatment as scientists discover how to 'switch off' autoimmune disease. Mail Online, September 4 2014

Could a cure for MS and diabetes be on the way? Daily Express, September 4 2014

Links To Science

Burton BR, Britton GJ, Fang H, et al. Sequential transcriptional changes dictate safe and effective antigen-specific immunotherapy. Nature Communications. Published online September 3 2014


Claims e-cigarettes are a 'gateway to cocaine'

NHS Choices - Behind the Headlines - Thu, 04/09/2014 - 12:00

“E-cigarettes could lead to using cocaine and cannabis scientists say,” the Daily Mirror reports.

In an article sure to prove controversial, two neuroscientists argue that nicotine may "prime" the brain to become addicted to harder drugs, such as cocaine.

The story comes from an article that argues that nicotine alters the brain’s circuitry, lowering the threshold for addiction to other substances such as cannabis and cocaine. Electronic cigarettes, the authors point out, are “pure nicotine delivery devices”, which could increase drug addiction among young people.

The “gateway drug” hypothesis is that use of certain (usually legal) drugs such as nicotine and alcohol can lead to the use of hard illegal drugs such as cocaine. This article argues that nicotine is such a drug and includes previous research by the authors that tested this hypothesis in a mouse model. 

The authors’ argument is based on the assumption that e-cigarette (or other nicotine) users will go on to use drugs such as cocaine. This assumption is unproven. While it is true that most cocaine users are also smokers, this does not equate to stating that most smokers use cocaine.

The article is of interest but it does not prove that e-cigarettes are a “gateway” to the use of drugs such as cocaine.

 

Where did the story come from?

The article is the print version of a lecture given by two researchers from Columbia University in the US. Their previous work in this area has been funded by the Howard Hughes Medical Institute, the National Institutes of Health and the National Institute on Drug Abuse, all in the US. One of the researchers, Professor Eric Kandel, shared the 2000 Nobel Prize in Physiology or Medicine for his discoveries related to the molecular basis of memory.

The article was published in the peer-reviewed New England Journal of Medicine on an open access basis so it is free to read online.

Coverage in the UK media was accurate but uncritical.

 

What kind of research was this?

This was not research but an article, based on a lecture, that presented evidence in favour of the theory that nicotine “primes” the brain for the use of other drugs such as cannabis and cocaine.

The authors say that while studies have shown that nicotine use is a gateway to the use of cannabis and cocaine in human populations, it has not been clear how nicotine accomplishes this. They say they have brought “the techniques of molecular biology” to bear on the question, revealing the action of nicotine in the brains of mice.

 

What did the article involve?

The authors first explain the gateway hypothesis (developed previously by one of them), which argues that in western societies, there is a “well defined” developmental sequence of drug use that starts with a legal drug and proceeds to illegal drugs. Specifically, it says, the use of alcohol and tobacco precede the use of cannabis, which in turn precedes the use of cocaine and other illicit drugs. They then review their own studies in which they tested the gateway hypothesis in a mouse model. 

Using this model they examined addictive behaviour, brain “plasticity” (changes to the structures of the brain) and the activity of a specific gene associated with addiction, in various experiments in which mice were exposed to both nicotine and cocaine.

One of the behavioural experiments they report on, for example, shows that mice given nicotine for seven days, followed by four days of nicotine and cocaine, were significantly (98%) more active than controls.

They also say they found that exposing mice brains to nicotine appeared to increase the “rewarding” properties of cocaine by encouraging production of the neurotransmitter dopamine.

Other experiments they report on found that nicotine given to mice before cocaine increased the expression of a gene that magnifies the effect of cocaine.

This “priming” effect, they say, does not occur unless nicotine is given repeatedly and in close conjunction with cocaine.

They then report on studies that they say show that nicotine also primes human brains to respond to cocaine, with the rate of cocaine dependence highest among users who started using cocaine after having smoked cigarettes.

Their conclusion is that, in humans, nicotine affects the circuitry of the brain in a manner that enhances the effects of a subsequent drug and that this “priming effect” happens if cocaine is used while using nicotine.

This effect is likely to occur, they argue, whether the exposure is from tobacco smoking, passive smoking or e-cigarettes.

They also argue that e-cigarettes are increasingly used by adolescents and young adults, with the potential for creating a new generation of people addicted to nicotine. “Whether e-cigarettes will prove to be a gateway to the use of combustible cigarettes and illicit drugs is uncertain but it is clearly a possibility.”

 

Conclusion

The article is of interest but it does not prove that e-cigarettes are a “gateway” to the use of drugs such as cocaine. The authors present evidence, much of it from their own research, in support of this hypothesis, but it remains just that – a hypothesis.

You could also make the point that it is somewhat unfair to demonise e-cigarettes in this way. Any product containing nicotine, such as patches or gum, could also be classed as a “gateway drug”, but as they release nicotine slowly these are not thought to be as “addictive”.

Also, as the authors point out, the “gateway drug” hypothesis is not universally accepted by addiction specialists. There is another hypothesis that the use of multiple drugs reflects a general tendency to drug use and that it is this tendency to addiction, rather than the use of a particular drug, that increases the risk of progressing to another drug.

From 2016 e-cigarettes are likely to be classed as "medicines", which means they will face stringent checks by the medicines regulator, the MHRA, and doctors will be able to prescribe them to smokers to help them cut down or quit. Tighter regulation should help ensure the products are safe and effective.

If you want to try a safer alternative to cigarettes but are concerned about the uncertainties surrounding e-cigarettes, you may wish to consider a nicotine inhalator. This licensed quit smoking aid, available on the NHS, consists of just a mouthpiece and a plastic cartridge. It’s proven to be safe, but the nicotine vapour only reaches the mouth rather than the lungs, so you don’t get the quick hit of nicotine that comes with e-cigarettes.

It is well known that nicotine is addictive. Despite the risk of addiction and other uncertainties, e-cigarettes are likely to be safer than cigarettes (or other tobacco products). There is no conclusive evidence that using e-cigarettes will increase your risk of developing a drug addiction.


Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

E-cigarettes could lead to using cocaine and cannabis scientists say. Daily Mirror, September 4 2014

How an e-cigarette could lead to cocaine. The Daily Telegraph, September 3 2014

E-cigarettes could act as 'gateway' to harmful illegal drugs raising the risk of addiction. Mail Online, September 4 2014

Effects of using E-cigarettes 'stronger in adolescents'. ITV News, September 4 2014

Links To Science

Kandel ER, Kandel DB. A Molecular Basis for Nicotine as a Gateway Drug. The New England Journal of Medicine. Published online September 4 2014


What is proton beam therapy?

NHS Choices - Behind the Headlines - Wed, 03/09/2014 - 14:00

Proton beam therapy has been discussed widely in the media in recent days.

This is due to the controversy surrounding the treatment of a young boy called Ashya King, who has medulloblastoma, a type of brain cancer.

Ashya was reportedly taken abroad by his parents to receive proton beam therapy.

But what does proton beam therapy involve, and can it treat cancer effectively?

 

How does proton beam therapy work?

Proton beam therapy is a type of radiotherapy.

Conventional radiotherapy uses high energy beams of radiation to destroy cancerous cells, but surrounding tissue can also be damaged. This can lead to side effects such as nausea, and can sometimes disrupt how some organs function.

Proton beam therapy uses beams of protons (sub-atomic particles) to achieve the same cell-killing effect. A "particle accelerator" is used to speed up the protons. These accelerated protons are then beamed into cancerous cells, killing them.

Unlike conventional radiotherapy, in proton beam therapy the beam of protons stops once it "hits" the cancerous cells. This means that proton beam therapy results in much less damage to surrounding tissue.

 

Who can benefit from proton beam therapy?

Proton beam therapy is useful for treating types of cancer in critical areas – when it is important to reduce damage to surrounding tissue as much as possible. For example, it is used most often to treat brain tumours in young children whose brains are still developing.

Proton beam therapy can also be used to treat adult cancers where the cancer has developed near a place in the body where damage would cause serious complications, such as the optic nerve.

These types of cancer make up a very small proportion of all cancer diagnoses. Even if there was unlimited access to proton beam therapy, its use would not be recommended in most cases.

Cancer Research UK estimates that only one in 100 people with cancer would be suitable for proton beam therapy.

 

Is proton beam therapy effective?

It is important not to assume that newly emerging treatments are more effective than existing treatments.

Proton beam therapy may cause less damage to healthy tissue, but it is still unclear whether it is as good at destroying cancerous tissue as conventional radiotherapy.

As proton beam therapy is usually reserved for very rare types of cancer, it is hard to gather systematic evidence about its effectiveness when compared to radiotherapy.

People who travel abroad from the UK to receive proton beam therapy usually respond well. But these people have specifically been selected for treatment as they were seen as "optimal candidates" who would benefit the most. Whether this benefit would apply to more people with cancer is unclear.

We cannot say with any conviction that proton beam therapy is “better” overall than radiotherapy.

 

Is proton beam therapy available in the UK?

Generally not. The NHS is building two proton beam centres, one in London and one in Manchester, which are expected to open in 2018. There is an existing low energy proton machine used specifically to treat some eye cancers at the NHS Clatterbridge Cancer Centre in Merseyside. This low energy machine cannot be used to treat most brain tumours as the low energy beam cannot penetrate far enough.

The NHS sends patients abroad if their care team thinks they are ideally suited to receive proton beam therapy. Around 400 patients have been sent abroad since 2008 – most of these patients were children. Read NHS England's advice for families of children being referred for proton beam therapy at overseas clinics (PDF, 1.39Mb).

Some overseas clinics providing proton beam therapy heavily market their services to parents who are understandably desperate to get treatment for their children. Proton beam therapy can be very costly and it is not clear whether all children treated privately abroad are treated appropriately.

It is important not to lose sight of the fact that conventional radiotherapy is, in most cases, both safe and effective with a low risk of complications. While side effects of radiotherapy are common, they normally pass once the course of treatment has finished.

Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

What access to proton beam therapy do UK patients have? BBC News, September 1 2014

Ashya: Opinion Divided On Proton Beam Therapy. Sky News, September 2 2014


Missing breakfast linked to type 2 diabetes

NHS Choices - Behind the Headlines - Wed, 03/09/2014 - 12:30

"Skipping breakfast in childhood may raise the risk of diabetes," the Mail Online reports. A study of UK schoolchildren found that those who didn’t regularly eat breakfast had early signs of having risk markers for type 2 diabetes.

The study found that children who did not usually eat breakfast had 26% higher insulin resistance than children who always ate breakfast. High insulin resistance increases risk of type 2 diabetes, which is why the results of this study are important. It should be pointed out that while the levels were higher in children who skipped breakfast, they were still within normal limits.

The researchers questioned more than 4,000 children aged nine and 10 about whether they usually ate breakfast, and took a fasting blood sample for a variety of measurements, including their blood sugar level and insulin level. 

The results suggest that eating breakfast may reduce the risk of higher insulin resistance levels, but due to the cross-sectional design of the study (a one-off assessment), it cannot prove that skipping breakfast causes higher insulin resistance or type 2 diabetes. And, as the researchers point out, even if a direct cause and effect relationship was established, it is still unclear why skipping breakfast would make you more prone to diabetes.

Despite this limitation of the study, eating a healthy breakfast high in fibre has many health benefits and should be encouraged.

 

Where did the story come from?

The study was carried out by researchers from St George’s University Hospital in London, the University of Oxford, the Medical Research Council Human Nutrition Research in Cambridge and University of Glasgow School of Medicine. It was funded by Diabetes UK, the Wellcome Trust, and the National Prevention Research Initiative. The authors declared no conflict of interest.

The study was published in the peer-reviewed medical journal PLOS Medicine. This is an open access journal so the study is free to read online.

The UK media generally reported the study accurately, although claims the study “tracked” children over time are inaccurate. Researchers used a one-off questionnaire and blood test, and none of the results showed that the children were insulin resistant – they just had higher levels within the normal range.

Also the Mail Online’s headline “Youngsters who don't eat morning meal more likely to be insulin dependent” appears to be written by someone without any grasp of human biology. All humans are insulin dependent.

 

What kind of research was this?

This was a cross-sectional study of nine- and 10-year-old children in England. It aimed to see if there was a link between eating breakfast and markers for type 2 diabetes, in particular insulin resistance and high blood sugar levels. Higher fasting insulin levels are seen when the body becomes insulin resistant, which is a risk factor for developing type 2 diabetes. As this was a cross-sectional study, it cannot prove that not eating breakfast causes children to be at higher risk of type 2 diabetes, but it can show that there is an association.

 

What did the research involve?

The researchers used information collected from 4,116 children who had participated in the Child Heart And health Study in England (CHASE) between 2004 and 2007. This study invited children aged nine and 10 from 200 randomly selected schools in London, Birmingham and Leicester to take part in a survey looking at risk factors for type 2 diabetes and cardiovascular disease.

This included questionnaires, measures of body fat and a fasting blood sample, taken eight to 10 hours after their last meal.

One of the questions related to how often they ate breakfast, with the following possible responses:

  • every day
  • most days
  • some days
  • not usually

Children from the last 85 schools were also interviewed by a research nutritionist to determine their food and drink intake in the previous 24 hours.

They analysed the data looking for an association between breakfast consumption and insulin resistance and higher blood sugar levels, adjusting the results to take into account age, sex, ethnicity, day of the week and month, and school.

 

What were the basic results?

Of the 4,116 children:

  • 3,056 (74%) ate breakfast daily
  • 450 (11%) had breakfast most days
  • 372 (9%) had breakfast some days
  • 238 (6%) did not usually have breakfast

Compared to children who ate breakfast every day, children who did not usually have breakfast had:

  • 26% higher fasting insulin levels
  • 26.7% higher insulin resistance
  • 1.2% higher HbA1c (the proportion of haemoglobin in red blood cells with glucose attached – a marker of average blood glucose concentration; higher levels indicate increased diabetes risk)
  • 1% higher glucose (blood sugar) levels

These results remained significant even after taking into account the child’s fat mass, socioeconomic status and physical activity levels.
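
The article does not say how insulin resistance was calculated from the fasting samples. One widely used estimate – and this is an assumption, as the CHASE paper may have used a different method – is the HOMA-IR formula, which combines fasting glucose and fasting insulin. A minimal sketch with made-up values, chosen only to illustrate what a 26% difference looks like:

```python
# HOMA-IR: a standard estimate of insulin resistance from a single fasting
# sample: (fasting glucose in mmol/L x fasting insulin in mU/L) / 22.5.
# The values below are hypothetical, not data from the study.
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_mu_l):
    return (fasting_glucose_mmol_l * fasting_insulin_mu_l) / 22.5

print(homa_ir(4.5, 6.0))  # 1.2
print(homa_ir(4.6, 7.4))  # ~1.51, roughly 26% higher than 1.2
```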

In the subset of children asked about their food intake over the previous 24 hours, children eating a high fibre breakfast had lower insulin resistance than those eating other types of breakfasts such as toast or biscuits.

 

How did the researchers interpret the results?

The researchers concluded that “children who ate breakfast daily, particularly a high fibre cereal breakfast, had a more favourable type 2 diabetes risk profile. Trials are needed to quantify the protective effect of breakfast on emerging type 2 diabetes risk”.

 

Conclusion

This well designed study found that children who did not usually eat breakfast had 26% higher insulin resistance than children who always ate breakfast, though the level was still within normal limits.

Higher levels indicate a risk of type 2 diabetes, which is why the results of this study are important.

Strengths of the study include the large sample size, multi-ethnicity of the participants and accuracy of the body fat measurements rather than just relying on body mass index (BMI).

A limitation of the study is that due to the cross-sectional design it cannot prove that not eating breakfast would cause diabetes, but it does show that this may begin to increase the risk. The study is also reliant on self-reporting of usual breakfast intake.

Eating a healthy breakfast rich in fibre has been linked to many health benefits and is thought to contribute to maintaining a healthy weight. As the researchers point out, further studies will be required to verify the link, such as through following children over time to see which ones develop diabetes.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Children who skip breakfast 'more likely to suffer diabetes': Youngsters who don't eat morning meal more likely to be insulin dependent. Mail Online, September 3 2014

Breakfast lowers risk of diabetes. The Times, September 3 2014

Children who skip breakfast might raise diabetes risk. New Scientist, September 2 2014

Links To Science

Donin AS, Nightingale CM, Owen CG, et  al. Regular Breakfast Consumption and Type 2 Diabetes Risk Markers in 9- to 10-Year-Old Children in the Child Heart and Health Study in England (CHASE): A Cross-Sectional Analysis. PLoS Medicine. Published online September 2 2014


Lumpectomy 'as effective as double mastectomy'

NHS Choices - Behind the Headlines - Wed, 03/09/2014 - 11:29

“Double mastectomy for breast cancer 'does not boost survival chances' – when compared to breast-conserving surgery," The Guardian reports.

The news is based on the results of a large US cohort study of women with early stage breast cancer in one breast.

It found that the 10-year mortality benefit associated with bilateral mastectomy (removal of both breasts) was the same as breast-conserving surgery (also known as lumpectomy, where the cancer and a border of healthy tissue is removed) plus radiotherapy.

Unilateral mastectomy (removal of the affected breast) was associated with a slightly increased risk of 10-year mortality, although the absolute difference was only 4%.

In the UK, bilateral mastectomy may be recommended for women at high risk of breast cancer due to family history, or because of a gene mutation (for example mutations in the BRCA1 and BRCA2 genes). A bilateral mastectomy can then be followed by breast reconstruction surgery, restoring the original look of the breasts.

Disadvantages of a bilateral mastectomy compared to a lumpectomy include a longer recovery time and a higher risk of complications.

This study suggests that bilateral mastectomy may not be associated with any significant survival benefit over breast conserving therapy plus radiotherapy for most women.

It is important to note that the outcome for individual patients may vary, and the type of surgery a woman with breast cancer receives will depend on a number of factors, including her personal wishes and feelings.

 

Where did the story come from?

The study was carried out by researchers from Stanford University School of Medicine and the Cancer Prevention Institute of California. This study was funded by the Jan Weimer Junior Faculty Chair in Breast Oncology, the Suzanne Pride Bryan Fund for Breast Cancer Research at Stanford Cancer Institute, and the National Cancer Institute Surveillance, Epidemiology, and End Results Program. The collection of cancer incidence data was supported by the California Department of Health Services, the National Cancer Institute Surveillance, Epidemiology, and End Results Program and the Centers for Disease Control and Prevention National Program of Cancer Registries.

The study was published in the peer-reviewed medical journal JAMA. This article is open access so it is free to read and download.

The results of this study were well covered by the UK media. However, the headlines could be misconstrued as stating that there are no benefits associated with double mastectomies.

In fact, the headlines refer to the fact that double mastectomies weren’t associated with a significantly different survival benefit compared to breast-conserving therapy with radiotherapy – not that they offered no survival benefit compared to no treatment.

 

What kind of research was this?

This was a cohort study that aimed to better understand the use of and outcomes after different treatment options for women diagnosed with early stage unilateral breast cancer (cancer in one breast).

Treatment options for breast cancer include surgery, radiotherapy, chemotherapy, hormone therapy and biological treatments.

In this study, the researchers were interested in different surgical options: unilateral mastectomy (removal of the breast with the cancer), bilateral mastectomy (removal of both breasts) and breast-conserving therapy with radiotherapy.

As this is a cohort study it cannot show that the type of surgery was the cause of poorer outcomes. A randomised controlled trial would be required for this. However, the researchers state that as bilateral mastectomy is an elective procedure for unilateral breast cancer, women who want this option are unlikely to accept randomisation to a less extensive surgical procedure in a trial.

 

What did the research involve?

The researchers identified women who had been diagnosed with early stage breast cancer (stage 0-III cancer) in one breast between 1998 and 2011 from the California Cancer Registry. Stage 0 breast cancer is localised and non-invasive, while stage III cancer is invasive and has spread to the lymph nodes.

The researchers followed these women for an average of 89.1 months.

The researchers looked for factors associated with the women receiving different types of surgical treatment.

They then looked to see how many women had died, and how many women had died from breast cancer, to see if the risk was different for women who had received different surgical treatment options.

The researchers adjusted their analyses for the following confounders:

  • age
  • race/ethnicity
  • tumour size
  • grade
  • histology (how the cells look under the microscope)
  • whether the cancer had spread to the lymph nodes
  • oestrogen receptor/progesterone receptor status
  • whether women also received chemotherapy and/or radiotherapy
  • neighbourhood socioeconomic status
  • marital status
  • insurance status
  • the socioeconomic composition of patients at the reporting hospital
  • whether women received care at a US National Cancer Institute designated cancer centre
  • year of diagnosis

 

What were the basic results?

The researchers identified 189,734 women who had been diagnosed with stage 0-III cancer in one breast between 1998 and 2011 from the California Cancer Registry. Of these, 6.2% underwent bilateral mastectomy, 55.0% received breast-conserving surgery with radiotherapy and 38.8% had a unilateral mastectomy.

The percentage of women who received bilateral mastectomy increased from 2.0% in 1998 to 12.3% in 2011, an annual increase of 14.3%. The increase in bilateral mastectomy rate was greatest among women younger than 40 years: the rate increased from 3.6% in 1998 to 33% in 2011.
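
As a rough consistency check, compounding the reported 14.3% annual increase over the 13 years from 1998 to 2011 should approximately reproduce the rise from 2.0% to 12.3% (the published figures are rounded, so the match is only approximate). A minimal sketch:

```python
# Compound-growth check on the reported trend in bilateral mastectomy rates.
rate_1998 = 2.0        # % of women receiving bilateral mastectomy in 1998
annual_growth = 0.143  # reported 14.3% annual increase
years = 2011 - 1998    # 13 years

projected_2011 = rate_1998 * (1 + annual_growth) ** years
print(round(projected_2011, 1))  # ~11.4, in the region of the reported 12.3%
```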

The researchers compared the 10-year mortality (the percentage of women who don’t survive for 10 years) of women who had received breast-conserving surgery with radiotherapy, unilateral mastectomy and bilateral mastectomy.

  • 10-year mortality with breast-conserving surgery with radiotherapy was 16.8%
  • 10-year mortality with unilateral mastectomy was 20.1%
  • 10-year mortality with bilateral mastectomy was 18.8%

The researchers found that there was no significant mortality difference with bilateral mastectomy compared with breast-conserving surgery with radiotherapy (hazard ratio [HR] 1.02, 95% confidence interval [CI] 0.94 to 1.11), although unilateral mastectomy was associated with increased mortality (HR 1.35, 95% CI 1.32 to 1.39). The results for risk of death from breast cancer were similar.
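
To unpack these statistics: a hazard ratio of 1 means no difference in risk, and a result is conventionally called statistically significant when its 95% confidence interval excludes 1. A minimal sketch of that check, using the intervals reported above:

```python
# A hazard ratio is conventionally called significant at the 5% level
# when its 95% confidence interval does not contain 1.
def excludes_one(ci_low, ci_high):
    return not (ci_low <= 1 <= ci_high)

# Bilateral mastectomy vs breast-conserving surgery + radiotherapy, HR 1.02
print(excludes_one(0.94, 1.11))  # False: no significant difference

# Unilateral mastectomy, HR 1.35
print(excludes_one(1.32, 1.39))  # True: significantly higher mortality
```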

The researchers also found that there were significant differences in the women who received the different surgical options.

Compared to women who received breast-conserving therapy plus radiotherapy, women were more likely to receive bilateral mastectomy if they:

  • were younger than 50 years old
  • were unmarried
  • were non-Hispanic white women
  • were diagnosed between 2005 and 2011 (vs. 1998 to 2004)
  • had a larger tumour, lymph node involvement, lobular histology (where cancer develops inside milk producing glands), higher grade or oestrogen receptor-/progesterone receptor-negative status (where cancer does not respond to hormonal treatments)
  • did not receive adjuvant treatment (chemotherapy and/or radiotherapy)
  • had private health insurance
  • came from neighbourhoods with higher socioeconomic status
  • received care at a National Cancer Institute designated cancer centre, or a hospital predominantly serving patients with lower socioeconomic status

Compared to women who received breast-conserving therapy plus radiotherapy, women were more likely to receive unilateral mastectomy if they:

  • were any age apart from 50 to 64 years old
  • were from a racial/ethnic minority
  • were married
  • were diagnosed between 1998 and 2004 (vs. 2005 to 2011)
  • had a larger tumour, lymph node involvement, lobular histology, higher grade, or oestrogen receptor-/progesterone receptor-negative status
  • did not receive adjuvant therapy (chemotherapy and/or radiotherapy)
  • had public/Medicaid insurance
  • came from neighbourhoods with lower socioeconomic status
  • received care at a hospital predominantly serving patients with lower socioeconomic status, and at hospitals that were not a National Cancer Institute designated cancer centre

 

How did the researchers interpret the results?

The researchers concluded that “use of bilateral mastectomy increased significantly throughout California from 1998 through 2011 and was not associated with lower mortality than that achieved with breast-conserving surgery plus radiotherapy. Unilateral mastectomy was associated with higher mortality than were the other two surgical options”.

 

Conclusion

This large US cohort study of women with early stage breast cancer in one breast has found no 10-year mortality benefit associated with bilateral mastectomy (removal of both breasts) compared with breast-conserving surgery (also known as lumpectomy, where the cancer and a border of healthy tissue is removed) plus radiotherapy.

Unilateral mastectomy was associated with a slightly increased risk of 10-year mortality, although the absolute difference was only 4%.

However, as there were significant differences between the patients receiving the different surgical options, the increase in risk associated with unilateral mastectomy may be due to incomplete adjustment for some of the measured factors, unmeasured factors (for example, the presence of other diseases such as diabetes), or differences in access to care.

This study suggests that bilateral mastectomy may not be associated with any significant survival benefit compared to breast-conserving surgery with radiotherapy for the population of women with unilateral breast cancer.

However, as this was a cohort study it cannot prove that there was no significant survival difference; this would require a randomised controlled trial.

It is important to note that the outcome for individual patients may vary, and the type of surgery a woman with breast cancer receives will depend on a number of factors, including her personal wishes and feelings.

Ultimately, if you have been told you may require breast surgery, the choice of surgery will be down to you. Questions you may wish to ask your surgeon include:

  • What are the risks of the cancer recurring?
  • What are the risks of complications with each type of surgery?
  • What would be the likely impact on my quality of life for each type of surgery?
  • How will surgery affect the appearance of my breasts?
  • Are there any viable non-surgical options?

Read more about preparing for surgery.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Double mastectomy for breast cancer 'does not boost survival chances'. The Guardian, September 2 2014

Double mastectomy 'doesn't boost chance of surviving cancer': Women who have less drastic surgery live just as long. Daily Mail, September 3 2014

Double mastectomies may not reduce cancer survival rates, study shows. The Independent, September 2 2014

Breast cancer survival rate 'no greater' after double mastectomy despite rise in breast removals. Daily Mirror, September 2 2014

Links To Science

Kurian AW, Lichtensztajn DY, Keegan THM, et al. Use of and Mortality After Bilateral Mastectomy Compared With Other Surgical Treatments for Breast Cancer in California, 1998-2011. JAMA. Published online September 3 2014


Could watching action films make you fat?

NHS Choices - Behind the Headlines - Tue, 02/09/2014 - 12:00

“Couch potatoes captivated by fast-paced action films eat far more than those watching more sedate programmes,” The Independent reports.

A small US study found that people snacked more when watching action-packed movies.

The study took 94 US student volunteers and randomly assigned them in groups to watch 20 minutes of either the action film “The Island” with sound, the same film without sound or “Charlie Rose”, a long-running American talk show.

They were provided with unlimited snacks of M&Ms, cookies, carrots and grapes.

People watching the action film with sound ate 65% more calories than those watching the talk show.

Researchers discussed the hypothesis that the frequent visual and audio variations in “The Island” (a style of filming that director Michael Bay, best known for the "Transformers" films, has become notorious for) may be distracting. This means participants may have been unaware of how much they were snacking.

However, this does not prove that action films make you fat. The study appeared to allow students to gather themselves into groups before being assigned to what they would watch. This could have meant the groups were not balanced for factors such as food preferences, physical activity or when the students had last eaten, all of which could have influenced the results.

The study does remind us, however, that we need to pay attention to what we eat, including food we consume while distracted, as it all counts towards our daily calorie intake.

 

Where did the story come from?

The study was carried out by researchers from Cornell University in New York and Vanderbilt University in Nashville. It was funded by Cornell University.

The study was published in the peer-reviewed medical journal JAMA Internal Medicine.

The UK media reported the story accurately, but did not highlight any of its weaknesses. However, The Independent did helpfully publish advice from England’s Chief Medical Officer that people should do a minimum of 150 minutes (2.5 hours) of moderate activity a week.

 

What kind of research was this?

This was a randomised controlled trial that aimed to see if people ate more snacks depending on the type of TV content they were watching.

While randomising participants is the best way to get groups that are balanced in their characteristics, this study only gave limited details of how this was done. This makes it difficult to know exactly how well the randomisation worked, and if the groups were truly balanced.

 

What did the research involve?

The researchers recruited 94 undergraduate students, gathered in groups of up to 20 people, then randomly assigned them to watch TV for 20 minutes, which was either:

  • an excerpt from action movie “The Island”
  • the same excerpt from “The Island”, but without any sound
  • an interview programme called “Charlie Rose” – a celebrity-focused talk show

During the 20 minutes, four snacks were made available: M&Ms, cookies, carrots and grapes. They were allowed to eat as much of them as they wanted. The amount of snacking per person was calculated by weighing the snacks before and after the 20 minute programme.

The researchers then analysed the results by type of TV show and sex of the participant.

 

What were the basic results?

Participants watching the action film with sound ate 98% more food than those watching the talk show (206.5g versus 104.3g). This equated to 65% more calories (kcal) consumed in the action film with sound group (354.1kcal versus 214.6kcal).

Those watching the action film without sound also ate significantly more snacks than people watching the talk show – 36% more grams of food (142.1g versus 104.3g) and 46% more calories (314.5kcal versus 214.6kcal).
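
These relative differences follow directly from the reported averages, as a minimal sketch recomputing them shows:

```python
# Relative increases recomputed from the average intakes reported above.
def pct_increase(group, reference):
    return 100 * (group - reference) / reference

talk_show_g, talk_show_kcal = 104.3, 214.6
print(round(pct_increase(206.5, talk_show_g), 1))     # 98.0 - grams, with sound
print(round(pct_increase(142.1, talk_show_g), 1))     # 36.2 - grams, without sound
print(round(pct_increase(354.1, talk_show_kcal), 1))  # 65.0 - kcal, with sound
print(round(pct_increase(314.5, talk_show_kcal), 1))  # 46.6 - kcal, without sound
```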

Males ate more than females in all three groups.

 

How did the researchers interpret the results?

The researchers concluded that “more distracting TV content appears to increase food consumption: action and sound variation are bad for one’s diet”. They suggest that people should either avoid snacking when watching distracting TV or use “proportioned quantities to avoid overeating”.

 

Conclusion

This study appears to indicate that the type of TV programme a person watches can influence how many calories are consumed as snacks. However, little information was provided about the methods and findings of this study, which makes it difficult to be certain how well it was performed and, therefore, how robust the results are.

The potential issues with the study that could affect interpretation of the results seen include:

  • The participants were not randomly assigned to the different groups individually – instead they “gathered” into groups, and then these groups were randomised. This might mean that friends with similar likes and preferences gathered together and ended up in the same group. These self-selected groupings may have differed in their characteristics (e.g. gender, body mass index (BMI), physical activity or socioeconomic status), and these differences could affect results.
  • It is not clear whether the same number of people were exposed to each scenario, as the number of people in the groups was not reported.
  • No information was provided on which snacks the participants chose to eat, only the overall quantity in grams and calories. While it is tempting to assume that the people eating more calories were eating the unhealthier food, we don’t know whether this was the case. Indeed, the difference between the average least amount of snacks and the highest average amount was 100g and 140kcal – this suggests that the difference was not entirely of unhealthy food, as 100g of M&Ms contains more than 544kcal.
  • It is unclear what time of day the programmes were watched or whether they were all watched at the same time of day. Time of viewing could have a large effect on snacking, depending on the timing in relation to meals.
  • The students eating the most snacks may have had a higher physical requirement for food due to their level of sport or usual activities. The study also didn’t look at whether the people who ate more in snacks compensated for this in their later meals.
  • The study was conducted on students, and their behaviour may not be representative of the population at large.

In conclusion, this study in isolation doesn’t prove that watching certain TV programmes or films makes you fat. However, it does act as a reminder that we should pay attention to what we eat, including food we consume while distracted, as it all counts towards our calorie intake.

It is still recommended that you aim for at least 150 minutes (2.5 hours) of moderate physical activity each week, as well as eating a healthy, balanced diet.

If you are trying to lose weight, it might be a good idea to remove snacks from situations where you may get distracted – whether that is at home watching TV or at the cinema.

Only eating in a set location, such as your kitchen or dining room, can be a good way of staying mindful of how much you are actually eating; even a few extra snacks every night can quickly add up.

There are, however, a range of snacks of 100 calories or fewer you can try that shouldn’t put you over your daily calorie intake.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Action films make you fat, study finds. The Independent, September 1 2014

Action films most likely to make you fat, says study. BBC News, September 2 2014

Links To Science

Tal A, Zuckerman S, Wansink B. Watch What You Eat: Action-Related Television Content Increases Food Intake. JAMA Internal Medicine. Published online September 1 2014

