NHS Choices

Does moderate drinking reduce heart failure risk?

NHS Choices - Behind the Headlines - Tue, 20/10/2015 - 12:10

"Seven alcoholic drinks a week can help to prevent heart disease," the Daily Mirror reports. A US study suggests alcohol consumption up to this level may have a protective effect against heart failure.

This large US study followed more than 14,000 adults aged 45 and older for 24 years. It found those who drank up to 12 UK units (7 standard US "drinks") per week at the start of the study had a lower risk of developing heart failure than those who never drank alcohol.

The average alcohol consumption in this lower risk group was about 5 UK units a week (around 2.5 pints of low-strength, 3.6% ABV lager a week).

At this level of consumption, men were 20% less likely to develop heart failure than people who never drank; for women the figure was 16%.

The study benefits from its large size and the fact data was collected over a long period of time.

But studying the impact of alcohol on outcomes is fraught with difficulty. These difficulties include people not all having the same idea of what a "drink" or "unit" is.

People may also intentionally misreport their alcohol intake. We also cannot be certain alcohol intake alone is giving rise to the reduction in risk seen.

Steps you can take to help reduce your risk of heart failure – and other types of heart disease – include eating a healthy diet, achieving and maintaining a healthy weight, and quitting smoking (if you smoke).

 

Where did the story come from?

The study was carried out by researchers from Brigham and Women's Hospital in Boston, and other research centres in the US, the UK and Portugal.

It was published in the peer-reviewed European Heart Journal.

The UK media generally did not translate the measure of "drinks" used in this study into UK units, which people might have found easier to understand.

The standard US "drink" in this study contained 14g of alcohol, while a UK unit is 8g of alcohol. So the group with the reduced risk actually drank up to about 12 UK units (12.25, to be precise) a week.

The reporting also makes it seem as though 12 units – what is referred to in the papers as "a glass a day" – is the optimal level, but the study cannot tell us this.

While consumption in this lower risk group was "up to" 12 units per week, the average consumption was about 5 units per week. This is about 3.5 small glasses (125ml of 12% alcohol by volume) of wine a week, not a "glass a day".
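The conversions above are simple arithmetic, but they are where most of the media muddle arose. As a quick illustrative sketch (the 14g-per-drink figure comes from the study; 8g per UK unit and the ml × ABV ÷ 1,000 formula are the standard UK definitions):

```python
# Convert US standard "drinks" to UK units, and work out units per glass.
# A US drink in this study = 14 g of pure alcohol; a UK unit = 8 g.
GRAMS_PER_US_DRINK = 14
GRAMS_PER_UK_UNIT = 8

def us_drinks_to_uk_units(drinks):
    """Convert a number of US standard drinks to UK units."""
    return drinks * GRAMS_PER_US_DRINK / GRAMS_PER_UK_UNIT

def uk_units(volume_ml, abv_percent):
    """UK units in a drink: volume (ml) x ABV (%) / 1,000."""
    return volume_ml * abv_percent / 1000

print(us_drinks_to_uk_units(7))   # 7 US drinks -> 12.25 UK units
print(uk_units(125, 12))          # small 125 ml glass of 12% wine -> 1.5 units
# Average in the lower-risk group was ~3 drinks (5.25 units) a week:
print(us_drinks_to_uk_units(3) / uk_units(125, 12))  # -> 3.5 glasses a week
```

This is why "up to 7 drinks" translates to roughly 12 UK units, and the group's average of about 5 units to roughly 3.5 small glasses of wine a week, not "a glass a day".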

And the poor old Daily Express got itself into a right muddle. At the time of writing, its website is actually running two versions of the story. 

One story claims moderate alcohol consumption was linked to reduced heart failure risk, which is accurate. 

The other story claims moderate alcohol consumption protects against heart attacks, which is not accurate, as a heart attack is an entirely different condition to heart failure.

 

What kind of research was this?

This was a large prospective cohort study looking at the relationship between alcohol consumption and the risk of heart failure.

Heavy alcohol consumption is known to increase the risk of heart failure, but the researchers say the effects of moderate alcohol consumption are not clear.

This type of study is the best way to look at the link between alcohol consumption and health outcomes, as it would not be feasible (or arguably ethical) to randomise people to consume different amounts of alcohol over a long period of time.

As with all observational studies, other factors (confounders) may be having an effect on the outcome, and it is difficult to be certain their impact has been entirely removed.

Studying the effects of alcohol intake is notoriously difficult for a range of reasons. Not least is what can be termed the "Del Boy effect": in one episode of the comedy Only Fools and Horses, the lead character tells his GP he is a teetotal fitness fanatic when in fact the opposite is true – people often misrepresent how healthy they are when talking to their doctor.

 

What did the research involve?

The researchers recruited adults (average age 54) without heart failure between 1987 and 1989, and followed them up over about 24 years.

Researchers assessed the participants' alcohol consumption at the start of and during the study, and identified any participants who developed heart failure.

They then compared the likelihood of developing heart failure among people with different levels of alcohol intake.

Participants came from four communities in the US, and were aged 45 to 64 years old at the start of the study. The current analyses only included black or white participants. People with evidence of heart failure at the start of the study were excluded.

The participants had annual telephone calls with researchers, and in-person visits every three years.

At each interview, participants were asked if they currently drank alcohol and, if not, whether they had done so in the past. Those who drank were asked how often they usually drank wine, beer, or spirits (hard liquor).

It was not clear exactly how participants were asked to quantify their drinking, but the researchers used the information collected to determine how many standard drinks each person consumed a week.

A drink in this study was considered to be 14g of alcohol. In the UK, 1 unit is 8g of pure alcohol, so this drink would be 1.75 units in UK terms.

People developing heart failure were identified by looking at hospital records and national death records. This identified those recorded as being hospitalised for, or dying from, heart failure.

For their analyses, the researchers grouped people according to their alcohol consumption at the start of the study, and looked at whether their risk of heart failure differed across the groups.

They repeated their analyses using people's average alcohol consumption over the first nine years of the study.

The researchers took into account potential confounders at the start of the study, including:

  • age
  • health conditions, including high blood pressure, diabetes, coronary artery disease, stroke and heart attack
  • cholesterol levels
  • body mass index (BMI)
  • smoking
  • physical activity level
  • educational level (as an indication of socioeconomic status)

 

What were the basic results?

Among the participants:

  • 42% never drank alcohol
  • 19% were former alcohol drinkers who had stopped
  • 25% reported drinking up to 7 drinks (up to 12.25 UK units) per week (average consumption in this group was about 3 drinks per week, or 5.25 UK units)
  • 8% reported drinking 7 to 14 drinks (12.25 to 24.5 UK units) per week
  • 3% reported drinking 14 to 21 drinks (24.5 to 36.75 UK units) per week
  • 3% reported drinking 21 drinks or more (36.75 UK units or more) per week

People in the various alcohol consumption categories differed from each other in a variety of ways. For example, heavier drinkers tended to be younger and have lower BMIs, but be more likely to smoke.

Overall, about 17% of participants were hospitalised for, or died from, heart failure during the 24 years of the study.

Men who drank up to 7 drinks per week at the start of the study were 20% less likely to develop heart failure than those who never drank alcohol (hazard ratio [HR] 0.80, 95% confidence interval [CI] 0.68 to 0.94).

Women who drank up to 7 drinks per week at the start of the study were 16% less likely to develop heart failure than those who never drank alcohol (HR 0.84, 95% CI 0.71 to 1.00).

But at the upper end of the confidence interval (1.00), there would be no difference in risk at all.

People who drank 7 drinks a week or more did not differ significantly in their risk of heart failure compared with those who never drank alcohol.

Those who drank the most (21 or more drinks per week for men, and 14 or more for women) were more likely to die from any cause during the study.

 

How did the researchers interpret the results?

The researchers concluded that, "Alcohol consumption of up to 7 drinks [about 12 UK units] per week at early middle age is associated with lower risk for future HF [heart failure], with a similar but less definite association in women than in men."

 

Conclusion

This study suggests drinking up to about 12 UK units a week is associated with a lower risk of heart failure in men compared with never drinking alcohol.

There was a similar result for women, but the results were not as robust and did not rule out the possibility of there being no difference.

The study benefits from its large size (more than 14,000 people) and the fact it collected its data prospectively over a long period of time.

However, studying the impact of alcohol on outcomes is fraught with difficulty. These difficulties include people not being entirely sure what a "drink" or a "unit" is, and reporting their intakes incorrectly as a result.

In addition, people may intentionally misreport their alcohol intake – for example, if they are concerned about what the researchers will think about their intake.

Also, people who do not drink may abstain for reasons linked to their health, so may be at greater risk of ill health to begin with.

Another limitation is that while the researchers did try to take a number of confounders into account, unmeasured factors, such as diet, could still be having an effect.

In addition, these confounders were only assessed at the start of the study, and people's circumstances may have changed over the study period (such as taking up smoking). 

The study only identified people who were hospitalised for, or died from, heart failure. This misses people who had not yet been hospitalised or died from the condition.

The results also may not apply to younger people, and the researchers could not look at specific patterns of drinking, such as binge drinking.

Although no level of alcohol intake was associated with an increased risk of heart failure in this study, the authors note few people drank very heavily in their sample. Excessive alcohol consumption is known to lead to heart damage.

The study also did not look at the incidence of other alcohol-related illnesses, such as liver disease. Deaths from liver disease in the UK have increased 400% since 1970, due in part to increased alcohol consumption, as we discussed in November 2014.

The NHS recommends that:

  • men should not regularly drink more than 3-4 units of alcohol a day
  • women should not regularly drink more than 2-3 units a day
  • if you've had a heavy drinking session, avoid alcohol for 48 hours

Here, "regularly" means drinking this amount every day or most days of the week.

The amount of alcohol consumed in the study group with the reduced risk was within the UK's recommended maximum consumption limits.

But it is generally not recommended that people take up drinking alcohol just for any potential heart benefits. If you do drink alcohol, you should stick within the recommended limits.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Seven alcoholic drinks a week can help to prevent heart disease, new research reveals. Daily Mirror, January 20 2015

A drink a day 'cuts heart disease risk by a fifth' researchers claim...so don't worry about having a dry January. Mail Online, January 19 2015

A drink a night 'is better for your heart than none at all'. The Independent, January 19 2015

Glass of wine a day could protect the heart. The Daily Telegraph, January 20 2015

Daily drink 'cuts risk' of middle-age heart failure. The Times, January 20 2015

Drinking half a pint of beer a day could fight heart failure. Daily Express, January 20 2015

Links To Science

Gonçalves A, Claggett B, Jhund PS, et al. Alcohol consumption and risk of heart failure: the Atherosclerosis Risk in Communities Study. European Heart Journal. Published online January 20 2015

Categories: NHS Choices

Holidays and parties mean we may drink more than we think

NHS Choices - Behind the Headlines - Fri, 22/05/2015 - 14:48

"The amount of alcohol people in England drink has been underestimated by the equivalent of 12 million bottles of wine a week," BBC News reports.

It has long been known there is a big gap between the amount people say they drink in national surveys, like the Health Survey for England, and the amount of alcohol known to be sold in England.

In this new survey researchers set out on the assumption that while people may accurately report their standard drinking patterns from week to week, they may forget the drinking they do on special occasions, such as bank holidays, parties, weddings, wakes or big sporting events (which, for many England fans, is akin to a wake).

The study used a large telephone survey to estimate the amount of extra drinking going on during these types of occasions. The researchers found this accounted for an extra 12 million bottles of wine a week in England – just under a staggering eight and a half million litres, which is more than enough to fill three Olympic-size swimming pools.

The results seem plausible. As the scientists point out: "The impact of atypical and special occasion drinking is reflected in evening presentations to emergency units, which peak on weekends but also sports events, bank holidays, and even commemorative occasions such as Halloween."

If you are concerned about whether you may be drinking more than you should, you can download the Change4Life Drinks Tracker app, which is available for iOS and Android devices.

Where did the story come from?

The study was carried out by UK researchers from Cardiff University, Bangor University, Liverpool John Moores University, and the London School of Hygiene and Tropical Medicine. It was funded by Alcohol Research UK.
 
The study was published in the peer-reviewed medical journal BMC Medicine. This is an open-access journal, so the study is free to read online or download as a PDF.

The UK media reported the story accurately.  

What kind of research was this?

This was a cross-sectional survey aiming to provide a more accurate picture of how much alcohol people in England drink.

The researchers say there is a big gap between the amount people report drinking in national surveys and the amount of alcohol being sold in England. So are we a nation of liars in denial about our drinking habits?

Rather than fibbing, the researchers suspected people might be being asked the wrong type of questions on alcohol surveys. You are usually asked what your average alcohol consumption is, say, over a week. People might not think to include special events in this estimate, such as drinking at a wedding or a birthday party, because they are not typical.

The scientists designed a large telephone interview study to see whether the special occasion drinking might make up the shortfall between estimates of typical drinking and alcohol sales. 

What did the research involve?

The team conducted a large-scale telephone survey between May 2013 and April 2014 of people aged 16 years or over living in England.

Respondents (n = 6,085) provided information on typical drinking (amounts per day, drinking frequency) and changes in consumption associated with routine atypical days (e.g. Friday nights) and special drinking periods (e.g. holidays) and events (e.g. weddings).

The team acknowledged it did not collect a representative sample of alcohol consumers and abstainers on a national basis, but instead used national population estimates and stratified drinking survey data to weight responses to match the English population.

The analysis looked to identify additional alcohol consumption associated with atypical or special occasion drinking by age, sex and typical drinking level. 

What were the basic results?

Accounting for atypical and special occasion drinking added more than 120 million units of alcohol per week (equivalent to 12 million bottles of wine) to population alcohol consumption in England.

The greatest impact was seen among 25- to 34-year-olds with the highest typical consumption, where atypical or special occasions added approximately 18 units a week (144g) for both sexes.

Those reporting the lowest typical consumption (≤1 unit/week) showed large relative increases in consumption (209.3%) with most drinking associated with special occasions.

In some demographics, adjusting for special occasions resulted in overall reductions in annual consumption – for example, women aged 65 to 74 years in the highest typical drinking category.

The Health Survey for England, a nationally representative survey, estimates alcohol consumption only accounted for 63.2% of sales. The new survey, including the special occasion drinking, accounted for 78.5%. 

How did the researchers interpret the results?

The research team concluded: "Typical drinking alone can be a poor proxy for actual alcohol consumption. Accounting for atypical/special occasion drinking fills 41.6% of the gap between surveyed consumption and national sales in England."

From a public health perspective they said: "These additional units are inevitably linked to increases in lifetime risk of alcohol-related disease and injury, particularly as special occasions often constitute heavy drinking episodes.

"Better population measures of celebratory, festival and holiday drinking are required in national surveys in order to adequately measure both alcohol consumption and the health harms associated with special occasion drinking." 

Conclusion

This large telephone survey sought to generate a more accurate estimate of England's alcohol consumption by taking account of atypical drinking days like Friday nights, holidays and events such as weddings.

It found atypical and special occasion drinking added more than 120 million units of alcohol a week (about 12 million bottles of wine) to population alcohol consumption in England.

This accounted for some of the discrepancy between self-reported alcohol consumption and alcohol sales, but not all. The Health Survey for England, a nationally representative survey, estimates alcohol consumption only accounts for 63.2% of sales. The new survey improved this to 78.5%.
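The researchers' headline figure that special occasions "fill 41.6% of the gap" follows directly from these two percentages. A quick check of the arithmetic (assuming, as the paper does, that the gap is the share of sales left unexplained by typical-drinking surveys):

```python
# Share of national alcohol sales accounted for by self-reported consumption.
typical_only = 63.2   # % of sales captured by typical-drinking estimates
with_special = 78.5   # % captured once special occasion drinking is added

# The unexplained gap is 100 - 63.2 = 36.8 percentage points;
# the new survey closes 78.5 - 63.2 = 15.3 of those points.
gap_filled = (with_special - typical_only) / (100 - typical_only) * 100
print(round(gap_filled, 1))  # -> 41.6, matching the figure the authors quote
```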

This raises the question: where is the other 21.5% going? There are many potential explanations. One is that people are pretty bad at estimating how much they drink, and generally underestimate it, for whatever reason, when asked.

An alternative, rather worrying, explanation is that a significant portion could be consumed by under-16s, who were excluded from the survey. And there could be people who just can't help downplaying the amount they drink, whether consciously or unconsciously, even to strangers on the telephone.

The research team highlighted a number of limitations of its own research. First, the survey did not attempt to generate a representative sample of alcohol consumers and abstainers on a national basis.

The scientists say their survey acts as a proof of concept, and a larger nationally representative survey is needed to test the usefulness of this methodology as a national alcohol monitoring tool. For example, participation rates were quite low (just 23.3% of those contacted) and the sample had more women, older people and people of white ethnicity than is true for England as a whole.

The estimates also might be imprecise. For example, the team didn't know if special drinking events were instead of or as well as the normal drinking days. In their analysis, they opted for a conservative measure by removing an average drinking day's consumption for each special event day reported.

The results make sense. As the scientists point out: "The impact of atypical and special occasion drinking is reflected in evening presentations to emergency units, which peak on weekends but also sports events, bank holidays, and even commemorative occasions such as Halloween."

If you find yourself regularly drinking more than the recommended daily limits (3-4 units for men, 2-3 units for women), you may have an alcohol misuse problem that may require treatment.

Links To The Headlines

English drink 12 million bottles of wine a week more than estimated. BBC News, May 22 2015

Forgotten holidays and lost birthdays leave English drinking underestimated. The Guardian, May 22 2015

Drinkers in England consuming 12 million more bottles of wine a week than previously thought. The Independent, May 22 2015

English alcohol consumption 'hugely' underestimated, research suggests. The Daily Telegraph, May 22 2015

Links To Science

Bellis MA, Hughes K, Jones L, et al. Holidays, celebrations, and commiserations: measuring drinking during feasting and fasting to improve national and individual estimates of alcohol consumption. BMC Medicine. Published online May 22 2015


Quarter of sun-exposed skin samples had DNA mutations

NHS Choices - Behind the Headlines - Fri, 22/05/2015 - 13:00

A sobering BBC News headline greets sun worshippers on the eve of the spring bank holiday: "More than a quarter of a middle-aged person's skin may have already made the first steps towards cancer."

Sunlight is made up of ultraviolet (UV) radiation. Low levels of exposure to UV light are actually beneficial to health – sunlight helps our bodies produce vitamin D.

But prolonged exposure can change (mutate) the DNA in the cells. Over time the mutations accumulate, turning the skin cells cancerous, which can lead to either non-melanoma or melanoma skin cancer.

As part of a study into skin cancer, researchers analysed skin removed from the eyelids of four people aged 55 to 73 known to have a varying history of sun exposure (but not a history of cancer) to see what DNA mutations had built up.

To their surprise they found hundreds of normal cells showing DNA mutations linked to cancer, called "mutant clones", in every 1 sq cm (about 0.16 sq in) of skin, and there were thousands of DNA mutations per cell.

The results were based on skin cells from the eyelids of just four people, so we don't yet know if the same would be found in other skin areas, or in other people, or what proportion of the mutated cells would eventually progress to skin cancer. 

Where did the story come from?

The study was carried out by researchers from The Wellcome Trust Sanger Institute in the UK, and was funded by The Wellcome Trust and the Medical Research Council.

It was published in the peer-reviewed journal, Science.

The BBC and the Daily Mail reported the story accurately and reiterated the best ways to lower your risk of getting skin cancer. 

What kind of research was this?

This was a genetics study looking at changes in the DNA of normal skin cells to see what proportions were linked to cancer.

Skin cancer is one of the most common forms of cancer. There are two main types of skin cancer:

  • non-melanoma skin cancer – where cancer slowly develops in the upper layers of the skin; there are more than 100,000 new cases of non-melanoma skin cancer every year in the UK
  • melanoma skin cancer – a more serious type of skin cancer; there are around 13,000 new cases of melanoma diagnosed each year in the UK and 2,000 deaths

Radiation from too much sun exposure causes damage to the DNA of skin cells. When certain combinations of mutations accumulate, the cell can become cancerous, multiplying and growing uncontrollably. 

Scientists know about lots of skin cancer mutations, but these tend to have been studied using samples of cancerous skin cells. Researchers don't know what combination of mutations is needed to transform healthy skin cells into cancer, or in what order.

Approaching the problem from a different direction, this team looked at healthy skin cells to see what mutations might be accumulating in a pre-cancerous stage. 

What did the research involve?

The scientists analysed the DNA of healthy eyelid skin cells removed from four people during plastic surgery (blepharoplasty). They looked for DNA mutations they knew were linked to cancer later on. The removed eyelid skin was reported to be normal and free of any obvious damage.

The team used eyelid skin because of its relatively high levels of sun exposure and because it is one of the few body sites to have normal skin removed.

They say this procedure is performed for age-related loss of elasticity of the underlying skin, which can cause eyelid drooping sometimes severe enough to disrupt vision, although the epidermis remains otherwise normal.

The skin sample donors were three women and one man, aged 55 to 73. Two had low sun exposure, one moderate and one high. Three were of western European origin and one was of south Asian origin. It was not clear how sun exposure was assessed.  

What were the basic results?

The researchers found a lot more cancer-related mutations in the normal cells than they were expecting. In all, their analysis pinpointed 3,760 mutations. The pattern of DNA mutations "closely matched" those expected for UV light exposure and that seen in skin cancers. 

DNA is made up of a code of letters known as base pairs. The team estimated people have around two to six mutations per million base pairs per skin cell. This, they said, was lower than the number of mutations usually found in skin cancer, but higher than found in other solid tumours.

Overall, they estimated around 25% of all skin cells carried a certain type of cancer-linked mutation called NOTCH mutations. While not enough to cause cancer on their own, if other mutations accumulate on top of the NOTCH mutations, they may cause cancer in the future. 

How did the researchers interpret the results?

Dr Peter Campbell, head of cancer genetics at Sanger, told the BBC News website: "The most surprising thing is just the scale; that a quarter to a third of cells had these cancerous mutations is way higher than we'd expect, but these cells are functioning normally."

He added: "It certainly changes my sun worshipping, but I don't think we should be terrified … It drives home the message that these mutations accumulate throughout life, and the best prevention is a lifetime of attention to the damage from sun exposure." 

Conclusion

This study estimated around 25% of normal skin cells have DNA mutations that could prime them to develop into skin cancer in the future. This was a lot higher than the scientists expected.

The genetic analysis of the study was robust, but used skin samples from just four people. This severely limits the generalisability of the findings to the general population. For example, the results might be different for people of different ages, sun exposures and skin colours, so we don't know if this is true for most people.

Similarly, the researchers only used eyelid cells. There may be something unique about eyelid tissue that is linked to this higher than expected mutation rate. This may or may not be true for skin from other areas. At the moment, we don't know if the one in four estimate applies to other skin areas.

The good news is there are simple and effective ways of reducing your risk of skin cancer. The best way to prevent all types of skin cancer is to avoid overexposure to the sun and to keep an eye out for new or changing moles.

A few minutes in the sun can help maintain healthy levels of vitamin D, which is essential for healthy bones, but it's important to avoid getting sunburn. Wearing protective clothing such as sun hats, seeking shade, and wearing sun cream of at least SPF 30 are all advised.

Read more about how to enjoy the benefits of the sun without exposing your skin to damage.

Links To The Headlines

Quarter of skin cells 'on road to cancer'. BBC News, May 22 2015

Skin cancer alert for the over-50s: Millions failing to heed advice over sun damage. Daily Mail, May 22 2015

Links To Science

Martincorena I, Roshan A, Gerstung M, et al. High burden and pervasive positive selection of somatic mutations in normal human skin. Science. Published online May 22 2015


Minor ailment scheme doesn't provide free Calpol for all

NHS Choices - Behind the Headlines - Thu, 21/05/2015 - 15:00

"Thousands discover Calpol has been free on NHS 'for years' as mum's Facebook post goes viral," the Daily Mirror reports.

This and other similar headlines were prompted by a post made on the social networking site Facebook. In the post, it was claimed that all medicines for children were available for free on the NHS as part of the minor ailment scheme.

"I was in Boots yesterday buying Calpol and happened to complain to the cashier how expensive it is. She told me, to my amazement, that if you register your details with them under the minor ailments scheme that all medicines for children are free – a scheme that has been going for eight years."

The post went viral, being "shared" and "liked" more than 100,000 times in the space of a few days.

But there are a number of inaccuracies both in the Facebook post and in the media's reporting of the story. 

What is the minor ailment scheme?

The minor ailment scheme is designed to enable people with minor health conditions to access medicines and advice they would otherwise visit their doctor for.

It allows patients to see a qualified health professional at a convenient and accessible location within their community, and means patients do not need to wait for a GP appointment or queue up for a valuable A&E slot with a non-urgent condition.

Childhood ailments that may be treated under the scheme include:

If the patient being treated is exempt from paying prescription charges – because they're under 16 or over 60, for example, or they have a prescription prepayment certificate (PPC) – they do not have to pay for the medicine. 

Important points about the minor ailment scheme

There are a number of important points that have not been made clear by the media:

  • The minor ailment scheme is not a national scheme. It is not possible to say exactly which medical conditions are covered because this will vary depending on the location and the particular service.
  • The scheme is designed to offer medication to meet an acute need. It is not an opportunity for parents to stock up on free children's medications – if a pharmacist thinks someone is trying to abuse the system, they can refuse any request for treatment at their discretion.
  • The pharmacist has no obligation to provide branded medication such as Calpol. If there is a cheaper generic version available that is known to be equally effective, it is likely that will be provided instead.
  • Claims that the scheme is secretive are incorrect. Information about the minor ailment scheme has been freely available on the NHS Choices website since 2008.

Read more about the services offered by pharmacies and how they can often save you a trip to the GP.

Links To The Headlines

Thousands discover Calpol has been free on NHS 'for years' as mum's Facebook post goes viral. Daily Mirror, May 20 2015

No need to cough up for Calpol - it's FREE on the NHS: Thousands discover they're entitled to various medications after mother's Facebook post goes viral. Daily Mail, May 20 2015

Calpol for FREE: Hundreds find they're entitled to free medication after mum's social post. Daily Express, May 20 2015

Did you know you can get Calpol for free? Metro, May 20 2015


Is paracetamol use in pregnancy harmful for male babies?

NHS Choices - Behind the Headlines - Thu, 21/05/2015 - 14:48

"Paracetamol use in pregnancy may harm male foetus," The Guardian reports. Using human foetal testicular tissue grafted into mice, researchers found evidence that taking paracetamol for seven days may lower the amount of testosterone the tissue can produce.

Low testosterone levels in male foetuses have been linked to a range of conditions, from the relatively benign, such as undescended testicles, to more serious problems, such as infertility and testicular cancer.

Reassuringly, a one-day course of paracetamol did not affect testosterone levels. Any effect seems to come only from continuous daily use, rather than the occasional use that is probably how most people take paracetamol.

An obvious caveat is that as the series of experiments was performed in mice, it is not known what the effect would be in humans. It is also not known whether the effect of regular daily use would be reversible and over what timescale. And we also don't know whether exposure in pregnancy would actually have any detrimental effects in a male child.

Paracetamol is generally believed to be safe in pregnancy, but – as with all medicines – pregnant women should only take it if absolutely necessary, in the lowest effective dose and for the shortest period of time.

Where did the story come from?

The study was carried out by researchers from the University of Edinburgh, the Edinburgh Royal Hospital for Sick Children, and the University Department of Growth and Reproduction in Copenhagen.

It was funded by The Wellcome Trust, the British Society of Paediatric Endocrinology and Diabetes, and the UK Medical Research Council.

The study was published in the peer-reviewed journal, Science Translational Medicine.

In general, the media reported the story accurately, though many of the headline writers decided to discuss the "harms" of paracetamol – such as The Guardian's headline, "Paracetamol use in pregnancy may harm male foetus" – which is an unhelpful term.

Apart from the fact this study involved mice, not men, there is no evidence a temporary drop in testosterone levels would cause permanent harm to a male foetus. The effect could well be temporary and reversible.

The Daily Mail went particularly over the top with its claim that, "the popular painkiller is believed to have lifelong effects on baby boys, raising their risk of everything from infertility to cancer". The Mail may think this is the case, but most qualified experts would question this due to lack of evidence. 

What kind of research was this?

This was a laboratory study using a mouse model to look at the effect of paracetamol on testicular development. Testes produce the sex hormone testosterone.

Previous research found an association between low sex hormone exposure in the womb and reproductive disorders, such as undescended testes at birth, or low sperm count and testicular cancer in young adulthood.

The researchers wanted to investigate whether exposure to paracetamol reduces the level of testosterone production. As it would be unethical to study this in pregnant women, the researchers used a mouse model.

Paracetamol is one of the drugs believed to be safe to use in pregnancy. This safety is based on observational studies of use during human pregnancy, as randomised controlled trials – the gold standard in research – are not performed in pregnancy for ethical reasons.   

What did the research involve?

The researchers grafted samples of human foetal testicular tissue into mice. In a series of laboratory experiments, they gave the mice different doses of oral paracetamol over the course of a week. The researchers measured what effect the paracetamol had on the level of testosterone produced by the testicular tissue.

Human foetal testes were obtained from pregnancies terminated in the second trimester. Small 1mm³ tissue samples of these testes were grafted under the skin of mice so the researchers could see what effect paracetamol had on their growth in an animal that has some similarities to humans.

The mice had their testicles removed so their production of testosterone would not influence the study. Their immune system was also dampened down to reduce the chances of rejecting the testicular tissue.

After a week – enough time for the testicular tissue to establish a blood supply – the mice were given injections of a hormone called human chorionic gonadotrophin (hCG), which stimulates testosterone production and is usually present in the womb. The mice were then randomly assigned to be given different strengths and regimes of oral paracetamol or placebo.

The researchers measured the level of testosterone at different time points during the study. They also measured the weight of the seminal vesicle, a gland that holds the liquid that mixes with sperm to form semen. Previous research has shown the growth of seminal vesicles is sensitive to sex hormones.

Experiments were also performed on the mice to measure the effect of paracetamol on their production of testosterone. 

What were the basic results?

Testosterone levels were reduced by exposure to paracetamol for seven days. A dose of 20mg/kg three times a day for seven days, the equivalent of a normal dose in adults, resulted in:

  • 45% reduction in testosterone production by the testicular tissue grafts
  • 18% reduction in seminal vesicle weight

Exposure to 20mg/kg three times in one day was tested under the premise that most pregnant women would only use paracetamol for a short period of time. This did not reduce testosterone levels or cause any change in seminal vesicle weight.

High-dose paracetamol of 350mg/kg once a day for seven days did not change the level of testosterone, but it did result in a 27% reduction in seminal vesicle weight in the host mice.

Graft survival was 65% over the two-week experiment period. There was no difference in graft weight between mice exposed to any dose of paracetamol and placebo. The mice appeared healthy and had no change in body weight. 

How did the researchers interpret the results?

The authors concluded that: "One week of exposure to a human-equivalent therapeutic regimen of acetaminophen [paracetamol] results in reduced testosterone production by xenografted human foetal testis tissue, whereas short-term (one day) use does not result in any long-lasting suppression of testosterone production."

They say that because the study was performed in mice, it cannot directly inform new recommendations for the use of paracetamol in pregnancy, but suggest pregnant women should consider limiting their use of the drug. 

Conclusion

This was a well-designed laboratory study looking at the effect of paracetamol on testicular development. As it would be unethical to study this in pregnant women, the researchers used a mouse model. This involved grafting samples of human foetal testicular tissue under the skin of mice.

The main finding from the study was that oral paracetamol reduced testosterone production if given at a human-equivalent dose three times a day for one week. A one-day course (three doses) did not reduce testosterone production.

As the researchers say, they tested the effect of one-day exposure because pregnant women are assumed to be more likely to use paracetamol occasionally rather than continuously.

The study's strengths include the randomisation procedure, which meant different doses and regimes of paracetamol could be directly compared with the control condition.

However, this study has some limitations because of the nature of a mouse model. These include:

  • graft testicular tissue may not respond in exactly the same way as normal testicular development in the womb
  • the grafts were fragments of testis tissue – an intact testicle may act differently
  • the mice were immunocompromised, which may have influenced the results

The results of this study suggest that regular paracetamol for seven days may reduce the production of testosterone by the developing testicle. However, further studies would be required to determine if this would be the case in humans.

It is also not clear whether the effect would be reversible and over what timescale. And it is entirely unknown whether exposure during pregnancy would actually have any detrimental effects in the male child – for example, on the development of sexual characteristics at puberty, or on future fertility.

At present, the product safety information for paracetamol does not preclude its use in pregnancy. Paracetamol is the painkiller of choice during pregnancy, as alternatives such as ibuprofen, and particularly aspirin, are thought to be associated with a higher risk of complications.

Paracetamol is also excreted in breast milk, but this is not believed to be in an amount that would harm the baby. Infant versions of paracetamol such as Calpol, however, are not licensed in babies under the age of two months.

As with all medicines, pregnant women should only take them if absolutely necessary, in the lowest effective dose and for the shortest period of time. If you have a painful condition that persists for more than one to two days, ask your midwife or the doctor in charge of your care for advice.

Links To The Headlines

Paracetamol use in pregnancy may harm male foetus, study shows. The Guardian, May 20 2015

Paracetamol during pregnancy may affect male babies, study shows. The Independent, May 21 2015

Taking paracetamol while pregnant 'could harm baby boys': Drug raises risk of conditions such as infertility and cancer later in life. Daily Mail, May 20 2015

Pregnant women warned paracetamol may harm unborn sons' fertility. ITV News, May 20 2015

Limit paracetamol in pregnancy, say scientists. BBC News, May 20 2015

Links To Science

van den Driesche S, Macdonald J, Anderson RA, et al. Prolonged exposure to acetaminophen reduces testosterone production by the human fetal testis in a xenograft model. Science Translational Medicine. Published online May 20 2015

Categories: NHS Choices

Mildly cold weather 'more deadly' than heatwaves or very cold snaps

NHS Choices - Behind the Headlines - Thu, 21/05/2015 - 14:00

"Mildly cold, drizzly days far deadlier than extreme temperatures," The Independent reports. An international study looking at weather-related deaths estimated that moderate cold killed far more people than extremely hot or cold temperatures.

Researchers gathered data on 74,225,200 deaths from 384 locations, including 10 in the UK. The results showed that, in most countries, the days with the fewest temperature-linked deaths were warmer than average.

Therefore, the researchers calculate, the majority of "excess deaths" occur on days that are colder than average. Because extreme temperatures occur on only a few days a year, they have an impact on fewer deaths than the majority of moderately cold days.

Overall, the researchers say, 7.71% of all deaths can be attributed to temperature based on their statistical modelling.

One hypothesis offered by the researchers is that exposure to mild cold may increase cardiovascular stress while also suppressing the immune system, making people more vulnerable to potentially fatal conditions.

The researchers suggest that their findings show public health officials should spend less time planning for heatwaves, and more time thinking about how to combat the effect of year-round lower than optimum temperatures.  

Where did the story come from?

The study was carried out by researchers from 15 universities and institutes in 12 countries led by a team from the London School of Hygiene and Tropical Medicine.

It was funded by the UK Medical Research Council. The study was published in the peer-reviewed medical journal The Lancet and has been made available on an open-access basis, so it is free to read online or download as a PDF.

The media reports focused on the finding that moderately cold weather – such as that experienced in the UK for much of the year – caused more deaths than hot weather or extremely cold weather. The Daily Telegraph gave a good overall summary of the research.

The Independent's claim that "mildly cold, drizzly days" are "far deadlier than extreme temperatures" is an extrapolation, as the study didn't look at drizzle or rain as a risk factor, just temperature.

The Guardian includes a number of reactions from independent experts, such as Sir David Spiegelhalter's presumably tongue-in-cheek suggestion that "perhaps they are really saying that the UK climate is killing people". 

What kind of research was this?

This study was a meta-analysis of data on temperatures and deaths around the world to find out what effect temperature has on the risk of death, and whether people are more likely to die during cold weather or hot weather.

The researchers used statistical modelling to estimate the proportions of deaths in the regions studied that could be attributed to heat, cold, and extreme heat and cold. This type of study can tell us about links between variables such as temperature and death rates, but not whether one causes the other. 

What did the research involve?

Researchers collected data on temperature and mortality (74,225,200 deaths) from 384 locations in 13 different countries, during time periods from 1985 to 2012. They used statistical analysis to calculate the relative risk of death at different temperatures for each location.

The countries included were Australia, Brazil, Canada, China, Italy, Japan, South Korea, Spain, Sweden, Taiwan, Thailand, the UK and the US. About one-third of the locations sampled were in the US.

The researchers were not able to adjust figures to take account of the potential effects of other factors, such as income levels in the different countries, although they used air pollution data when it was available.

The researchers divided the temperature data from each location into evenly spaced percentiles, from cold to hot days. This meant the coldest days fell in the lowest percentiles (1 or 2), while the hottest days fell at the top of the range (98 or 99).

They defined extreme cold for a location as below the 2.5th percentile, and extreme heat as above the 97.5th percentile. They looked for the "optimum" temperature for each location, being the temperature at which fewest deaths attributable to temperature were recorded.

They calculated the deaths linked to temperatures above or below the optimum, and sub-divided that again to show deaths linked to extreme cold or heat.
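To illustrate the kind of percentile cut-offs described above, here is a minimal Python sketch. The temperature figures are invented for illustration only and are not taken from the study; only the 2.5th/97.5th percentile definitions of extreme cold and heat come from the article.

```python
import random
import statistics

# Hypothetical daily mean temperatures (°C) for one location over a year
# (illustrative values only; not the study's data).
random.seed(0)
temps = [random.gauss(10.4, 6.0) for _ in range(365)]

# statistics.quantiles with n=40 gives cut points every 2.5%:
# the first is the 2.5th percentile, the last the 97.5th.
cuts = statistics.quantiles(temps, n=40)
cold_cutoff, heat_cutoff = cuts[0], cuts[-1]

extreme_cold = [t for t in temps if t < cold_cutoff]
extreme_heat = [t for t in temps if t > heat_cutoff]

# By construction, each extreme band holds only about 2.5% of days
# (roughly 9 days out of 365) – which is why extreme temperatures
# account for so few deaths in absolute terms.
print(f"extreme cold: {len(extreme_cold)} days below {cold_cutoff:.1f}C")
print(f"extreme heat: {len(extreme_heat)} days above {heat_cutoff:.1f}C")
```

This makes concrete why moderate cold dominates the death toll: the vast majority of days fall between the two cut-offs, so even a small per-day risk on those days adds up to more deaths than a large per-day risk on a handful of extreme days.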

The statistical analysis used a complex new model developed by the researchers, which allowed them to take account of the different time lag of different temperatures.

The effects of very high temperatures on death rates are usually quite short-lived, while very cold temperatures may have an effect on deaths for up to four weeks. 

What were the basic results?

Across all countries, colder weather was linked to more excess deaths than warmer weather – roughly 17 times as many (7.29% of deaths were attributed to colder weather compared with 0.42% to warmer weather). 

For all countries, the optimum temperature – when there were fewest deaths linked to weather – was warmer than the average temperature for that location.

In the UK, for example, the average temperature recorded was 10.4C, while optimum temperature ranged from 15.9C in the north east to 19.5C in London. The optimum temperature for the UK was in the 90th centile, meaning that 9 out of 10 days in the UK are likely to be colder than the optimum.

The proportion of all deaths linked to extremely hot or cold days was much lower than that linked to less extreme hot or cold. The researchers say extreme heat or cold was responsible for 0.86% of deaths according to their statistical modelling (95% confidence interval 0.84 to 0.87).

However, the relative risk of dying at extremes of temperatures was increased, with a sharp increase in deaths at the hottest temperatures for most countries.  

How did the researchers interpret the results?

The researchers say their results have "important implications" for public health planning, because planning tends to focus on how to deal with heatwaves, whereas their study shows that below-optimal temperatures have a bigger effect on the number of people who die.

They say deaths from cold weather may be attributed to stress on the cardiovascular system, leading to more heart attacks and strokes. Cold may also affect the immune response, increasing the chances of respiratory disease.

They say their results show that public health planning should be "extended and refocused" to take account of the effect of the whole range of temperature fluctuation, not just extreme heat. 

Conclusion

Many of the headlines focus on the finding that moderate cold may be responsible for more deaths than extreme hot or cold weather.

Perhaps more interesting is the finding that the optimum temperature for humans seems to be well above the temperatures we usually experience, especially in colder countries like the UK. If this is true, then the finding that most deaths occur on days colder than the optimum is unsurprising, as most days are colder than the optimum temperature.

The relative unimportance of very hot or very cold days in terms of mortality is interesting, because most research and public health planning has focused on extreme weather. However, this depends partly on the definition of extreme temperature.

The researchers used the upper and lower 2.5 percentiles to define what was extreme for a particular location, so by definition these temperatures are experienced on very few days. Even though the relative risk of death is increased on those days, the absolute number of deaths is nowhere near as high as on the majority of days.

That doesn't mean it's not worth planning for the increased risk of deaths during extreme temperatures. In London, for example, the relative risk of death is more than doubled on days with temperatures below 0C, compared with days at the optimum temperature of 19.5C.

Read more advice about coping with heatwaves as well as very cold snaps.

There are some limitations to the study we should be aware of. First, although it sampled data from 13 countries from very different climates, it didn't include any countries in Africa or the Middle East. This means we can't be sure the findings would apply worldwide.

Second, the study did not take into account some confounders that could affect how many deaths occur in warmer or colder periods – for example, levels of air pollution, whether people have access to shelter and heating, the age make-up of a population, and whether people have access to nutritious food all year round.

This also makes it difficult to know how governments or public health bodies can make plans using this new data, as we don't know whether the effects of moderate cold on mortality could be affected by public health measures.

In the UK, the NHS already plans for more hospital admissions during the winter months, taking account of factors such as the amount of flu-like illness circulating in the population, as well as the temperature.

Read more advice about winter health.

Links To The Headlines

Mildly cold, drizzly days far deadlier than extreme temperatures for Brits, says study. The Independent, May 21 2015

Moderately cold weather 'more deadly than heatwaves or extreme cold'. The Guardian, May 21 2015

Cold weather twenty times more deadly than hot weather, study suggests. The Daily Telegraph, May 21 2015

Links To Science

Gasparrini A, Guo Y, Hashizume M, et al. Mortality risk attributable to high and low ambient temperature: a multicountry observational study. The Lancet. Published online May 20 2015

Categories: NHS Choices

Links between hay fever, asthma and prostate cancer inconclusive

NHS Choices - Behind the Headlines - Wed, 20/05/2015 - 14:00

"Men with hay fever are more likely to have prostate cancer – but those with asthma are more likely to survive it," the Daily Mirror reports. Those were the puzzling and largely inconclusive findings of a new study looking at these three conditions.

Researchers looked at data involving around 50,000 middle-aged men and followed them up for 25 years, looking at whether asthma or hay fever at study start were associated with diagnoses of prostate cancer or fatal prostate cancer during follow-up.

The findings weren't as conclusive as the headline suggests. The researchers did find hay fever was associated with a small (7%) increased risk of prostate cancer development. There was some suggestion asthma may be associated with a decreased risk of getting prostate cancer or fatal prostate cancer. However, these links were only of borderline statistical significance, meaning there was a high risk they could have been the result of chance.

And the links between hay fever and fatal prostate cancer weren't significant at all, meaning there was no evidence that men with hay fever were more likely to die from the disease (so no need to worry if you are affected).

The possibility that inflammation, or the immune system more generally, could be associated with risk of prostate cancer is plausible, but this study tells us little about how different immune profiles could influence cancer risk.

 

Where did the story come from?

The study was carried out by researchers from Johns Hopkins Bloomberg School of Public Health and other institutions in the US. It was funded by grants from The National Cancer Institute and The National Heart, Lung and Blood Institute. The study was published in the peer-reviewed International Journal of Cancer.

The Daily Mirror has taken an uncritical view of the research findings and fails to make clear to its readers that the findings were mainly based on borderline statistically significant or non-significant results. These don't provide firm proof of links between asthma or hay fever and prostate cancer or lethal prostate cancer. 

 

What kind of research was this?

This was a prospective cohort study looking into how the immune system might be involved in the development of prostate cancer.

The study authors say emerging research suggests inflammation, and the immune response in general, may be involved in the development of prostate cancer. As they say, one way to explore this is by looking at the links between prostate cancer and conditions that have a particular immune profile. Two such immune-mediated conditions are asthma and allergies, such as hay fever.

Previous studies looking at links between the conditions gave inconsistent results. This study looked at the link in a prospective cohort of almost 50,000 cancer-free men, looking to see whether they developed prostate cancer and the factors associated. Cohort studies such as these can demonstrate associations, but they cannot prove cause and effect as many other unmeasured factors may be involved.

 

What did the research involve?

The cohort was called the Health Professionals Follow-Up Study. In 1986 it enrolled 47,880 cancer-free men, then aged 40-75 years (91% white ethnicity), who were followed up for 25 years.

Every two years, men completed questionnaires on medical history and lifestyle, and filled in food questionnaires every four years.

At study enrolment they were asked whether they had ever been diagnosed with asthma, hay fever or another allergy and, if so, the year it started. In subsequent questionnaires they were asked about new asthma diagnoses and asthma medications, but hay fever was only questioned at study start.

Men reporting a diagnosis of prostate cancer on follow-up questionnaires had this confirmed through medical records. The researchers also used the National Death Index to identify cancer deaths.

The researchers looked at the associations between prostate cancer and reported asthma or hay fever, particularly looking at the link with "lethal" prostate cancer. This was defined as being prostate cancer either diagnosed at a later stage when the cancer had already spread around the body (so expected to be terminal), or being the cause of death.

They adjusted their analyses for potential confounders of:

  • age
  • body mass index (BMI)
  • ethnicity
  • smoking status
  • physical activity
  • diabetes
  • family history of prostate cancer

 

What were the basic results?

Five percent of the cohort had a history of asthma at study start and 25% had hay fever. During the 25-year follow-up there were 6,294 cases of prostate cancer. Of these, 798 were expected to be lethal, including 625 recorded deaths.

After adjusting for confounders, there was a suggestion that having asthma at study start was associated with a lower risk of developing prostate cancer. We say a suggestion because the 95% confidence interval (CI) of the result included 1.00, making it of borderline statistical significance (relative risk (RR) 0.89, 95% CI 0.78 to 1.00) and meaning the finding may have been down to chance alone.

Hay fever, by contrast, was associated with an increased risk of developing prostate cancer, which did just reach statistical significance (RR 1.07, 95% CI 1.01 to 1.13).

Looking at lethal prostate cancer, there was again a suggestion that asthma was associated with decreased risk, but this again appeared of borderline statistical significance (RR 0.67, 95% CI 0.45 to 1.00). Hay fever was not this time significantly associated with risk of lethal prostate cancer.

The researchers then looked at ever having a diagnosis of asthma, this time not only looking at the 5% already diagnosed at study start, but also the 4% who developed the condition during follow-up. Again they found that ever having a diagnosis of asthma was associated with a decreased risk of lethal prostate cancer, but this was only of borderline statistical significance (RR 0.71, 95% CI 0.51 to 1.00).

The researchers also considered the time of diagnosis. They report that onset of hay fever in the distant past (more than 30 years ago) "was possibly weakly positively associated with risk of lethal" prostate cancer. However, this link is not statistically significant (RR 1.10, 95% CI 0.92 to 1.33).
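The pattern in the figures above comes down to a simple convention: a relative risk is conventionally called statistically significant at the 5% level only if its 95% confidence interval excludes 1.0 entirely. A minimal sketch of that check (the RR and CI figures are those reported above; the helper function is our own illustration, not part of the study):

```python
def is_significant(rr, ci_low, ci_high):
    """A relative risk is conventionally 'significant' at the 5% level
    only if the whole 95% confidence interval sits above or below 1.0."""
    return ci_high < 1.0 or ci_low > 1.0

# Figures as reported in the study:
asthma_any      = is_significant(0.89, 0.78, 1.00)  # CI touches 1.00: borderline
hayfever_any    = is_significant(1.07, 1.01, 1.13)  # CI just excludes 1.0
asthma_lethal   = is_significant(0.67, 0.45, 1.00)  # CI touches 1.00: borderline
hayfever_lethal = is_significant(1.10, 0.92, 1.33)  # CI spans 1.0: not significant

print(asthma_any, hayfever_any, asthma_lethal, hayfever_lethal)
# prints: False True False False
```

Only the hay fever link with developing prostate cancer clears the conventional bar, and only just – which is why the article describes the other findings as borderline or non-significant.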

 

How did the researchers interpret the results?

The researchers conclude: "Men who were ever diagnosed with asthma were less likely to develop lethal and fatal prostate cancer." They add: "Our findings may lead to testable hypotheses about specific immune profiles in the [development] of lethal prostate cancer."

 

Conclusion

The researchers' suggestion that this research is "hypothesis generating" is the most apt. It shows a possible link between immune profiles and prostate cancer, but doesn't prove it or explain the underlying reasons for any such link.

This single study does not provide solid evidence that asthma or hay fever would have any influence on a man's risk of developing prostate cancer or dying from it, particularly when you consider the uncertain statistical significance of several of the findings.

Links suggesting asthma may be associated with a lower risk of total or lethal prostate cancer were all only of borderline statistical significance, meaning we can have less confidence that these are true links.

Links with hay fever were similarly far from convincing. Though the researchers found a 7% increased risk of developing prostate cancer with hay fever, this only just reached statistical significance (95% CI 1.01 to 1.13). The links between hay fever and risk of lethal prostate cancer that hit the headlines weren't significant at all, so they provide no evidence for a link.

Even if there is a link between asthma and allergy and prostate cancer risk, it's still possible this could be influenced by unmeasured health and lifestyle factors that have not been adjusted for.

Other limitations to this prospective cohort include its predominantly white sample, particularly given that prostate cancer is known to be more common in black African or black Caribbean men.

The results may not be applicable to these higher-risk populations. Also, though prostate cancer diagnoses were confirmed through medical records and death certificates, there is the possibility for inaccurate classification of asthma or allergic conditions as these were self-reported.

The possibility that inflammation, or the immune system more generally, could be associated with risk of prostate cancer is definitely plausible. For example, history of inflammation of the prostate gland is recognised to be possibly associated with increased prostate cancer risk. Therefore, study into how different immune profiles could have differing cancer risk is a worthy angle of research into prostate cancer.

However, the findings of this single cohort should not be of undue concern to men with hay fever or, conversely, suggest that men with asthma have protection from the disease. 

Links To The Headlines

Men with hay-fever more likely to have prostate cancer – but those with asthma more likely to survive. Daily Mirror, May 19 2015

Links To Science

Platz EA, Drake CG, Wilson KM, et al. Asthma and risk of lethal prostate cancer in the Health Professionals Follow-Up Study. International Journal of Cancer. Published online February 27 2015

Categories: NHS Choices

Children of the 90s more likely to be overweight or obese

NHS Choices - Behind the Headlines - Wed, 20/05/2015 - 13:48

"Children of the 90s are three times as likely to be obese as their parents and grandparents," the Mail Online reports. A UK survey looking at data from 1946 to 2001 found a clear trend of being overweight or obese becoming more widespread in younger generations. A related trend saw the threshold from normal weight to overweight being crossed at a younger age in each successive generation.

The study examined 273,843 records of weight and height for 56,632 people in the UK from five studies undertaken at different points since 1946. The results found children born in 1991 or 2001 were much more likely to be overweight or obese by the age of 10 than those born before the 1980s, although the average child was still of normal weight.

The study also found successive generations were more likely to be overweight at increasingly younger ages, and the heaviest people in each group got increasingly more obese over time. These results won't come as much of a surprise given the current obesity epidemic.

The findings are a potential public health emergency in the making. Obesity-related complications such as type 2 diabetes, heart disease and stroke can be both debilitating and expensive to treat. The researchers called for urgent effective interventions to buck this trend. 

Where did the story come from?

The study was carried out by researchers from University College London and was funded by the Economic and Social Research Council.

It was published in the peer-reviewed journal, PLOS Medicine. The journal is open access, meaning the study can be read online free of charge.

The Mail Online focused on the risk to children, saying children were more likely to be obese.

But the figures in the study were for obesity and being overweight combined. We don't know how the chances of obesity alone changed over time because there were too few obese children in the earliest cohorts to carry out the calculations.

BBC News gave a more accurate overview of the study and the statistics.  

What kind of research was this?

This was an analysis of data from five large long-running cohort studies carried out in England, spanning around 50 years in total. It aimed to see how people's weight changed over time through childhood and adulthood, and how this compared across generations.

Studies like this are useful for looking at patterns and telling us what has changed and how, but can't tell us why these changes arose.  

What did the research involve?

Researchers used data from cohort studies that recorded the weight and height of people born in 1946, 1958, 1970, 1991 and 2001.

They used the data to examine how the proportion of people who were normal weight, overweight or obese changed over time for the five birth cohort groups. They also calculated the chances of becoming overweight or obese at different ages, across childhood and adulthood, for the five groups.

The researchers used data from 56,632 people, with 273,843 records of body mass index (BMI) recorded at ages ranging from 2 to 64. BMI is calculated for adults as weight in kilograms divided by height in metres squared.

For children, BMI is assessed differently to account for the way children grow, using a reference population to decide whether children are underweight, normal weight, overweight or obese at specific ages.
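As an illustration of the adult calculation described above, the sketch below computes BMI and applies the standard adult cut-off points (18.5, 25 and 30). Note this is a minimal example for adults only; as the text explains, children are assessed differently, against age-specific reference populations, so these thresholds must not be applied to them.

```python
def adult_bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def adult_category(bmi: float) -> str:
    """Classify an adult BMI using the standard adult cut-off points."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal weight"
    if bmi < 30:
        return "overweight"
    return "obese"

# Example: an adult weighing 85 kg at 1.75 m tall
bmi = adult_bmi(85, 1.75)
print(round(bmi, 1), adult_category(bmi))  # 27.8 overweight
```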

To keep the populations as similar as possible across the cohort studies, the researchers only included data on white people, as there were few non-white people in the earliest study. Immigration of non-white people to the UK did not begin in any significant numbers until the 1950s.

For each of the five cohort studies, men were analysed separately from women, and children were analysed separately from adults. Each cohort was divided into 100 equal centiles, or sub-groups, according to BMI – for example, the 50th centile is the group where half the people in the study have a higher BMI and half have a lower BMI.

Tracking the 50th centile over time can show whether the average person in the group is normal weight or overweight at certain ages. Higher centiles, such as the 98th centile, show the BMI of the heaviest people in the group – at the 98th centile, only 2% of people in the group had a higher BMI and 98% had a lower BMI.
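The centile idea described above can be sketched with Python's standard library, using a small set of hypothetical BMI values (the study's actual statistical methods were more sophisticated than this toy example):

```python
import statistics

# Hypothetical BMI values for one cohort subgroup (illustration only)
bmis = [19.5, 21.0, 22.4, 23.1, 24.0, 24.8, 25.9, 27.2, 29.5, 33.0]

# quantiles(..., n=100) returns the 99 cut points dividing the data
# into 100 equal centile groups
centiles = statistics.quantiles(bmis, n=100, method="inclusive")

median_bmi = centiles[49]   # 50th centile: half the group lies below this BMI
heaviest = centiles[97]     # 98th centile: only ~2% of the group lies above

print(median_bmi)  # 24.4 for this data – the "average person" is normal weight
```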

What were the basic results?

The study found that:

  • People born in the more recent birth cohorts were more likely to be overweight at younger ages. The age at which the average (50th centile) subgroup became overweight was 41 for men born in 1946, 33 for men born in 1958, and 30 for men born in 1970. For women, the age fell from 48 to 44, then to 41 across the three birth cohorts.
  • The chances of becoming overweight in childhood increased dramatically for children born in 1991 or 2001. For children born in 1946, the chance of being overweight or obese at the age of 10 was 7% for boys and 11% for girls. For children born in 2001, the chance was 23% for boys and 29% for girls. However, the average children (50th centile) remained in the normal weight range in all five birth cohorts.
  • The biggest changes in weight were seen at the top end of the spectrum. The heaviest people (98th centile) from the group born in 1970 reached a higher BMI earlier in life than the people born in earlier birth cohorts.

How did the researchers interpret the results?

The researchers say their results show children born after the 1980s are more at risk of being overweight or obese than those born before the 1980s.

They say this is because of their exposure to an "obesogenic environment", with easy access to high-calorie food. They say the changes in obesity over time among older cohorts also support the theory that changes to the food environment in the 1980s are behind the rise in obesity.

They go on to warn that if trends persist, modern day and future generations of children will be more overweight or obese for more of their lives than previous generations, and this could have "severe public health consequences" as they will be more likely to get illnesses such as coronary heart disease and type 2 diabetes. 

Conclusion

The study shows how, while the whole population of England has become heavier over the past 70 years, different generations have been affected in different ways. People born in 1946 were, on average, normal weight until their 40s, but this group has since seen their weight rise and they are now, on average, overweight.

By the time they reached 60, 75% of men and 66% of women from this group were overweight or obese. People born in 1946 who were in the heaviest centiles, and already overweight in early adulthood, are now likely to be obese or very obese.

For people born since 1946, the chance of being overweight as young adults, adolescents or children has been increasing. The chances of being overweight or obese by the age of 40 were 65% for men born in 1958 (45% for women) and 67% for men born in 1970 (49% for women). The chances of children born in 2001 being overweight or obese by the age of 10 are almost three times those of children born in 1946.

We can deduce from the figures that something may have happened during the 1980s – the decade when the earliest birth cohort's average group moved from normal weight to overweight – to increase the chances of people of all ages becoming overweight or obese.

What these figures can't tell us is what that was, despite the researchers' assertion this was a change to an obesogenic environment. Still, it seems plausible that a combination of high-calorie, low-cost food and an increasingly sedentary lifestyle – both in terms of working life and recreation – contributed to this trend.

This study has some limitations. Of the five studies, four were national studies across the UK, while one (the 1991 study) was limited to one area of England, so may not be representative of the UK as a whole.

More importantly, the five studies used differing methods to record height and weight at different time points. Some records were self-reported, which means they rely on people accurately recording and reporting their own height and weight.

We know that being overweight or obese is bad for our health. These conditions increase the chances of a range of illnesses, including heart disease, diabetes and some cancers. We also know children who are overweight tend to grow up to be overweight or obese adults, increasing their chances of illness.

This study gives us more information about who is at risk of becoming overweight and at which ages, which may help health services develop better strategies for turning the tide of obesity.

Links To The Headlines

Children of the 90s are THREE times as likely to be obese as their parents and grandparents. Mail Online, May 19 2015

UK children becoming obese at younger ages. BBC News, May 20 2015

Links To Science

Johnson W, Li L, Kuh D, Hardy R. How Has the Age-Related Process of Overweight or Obesity Development Changed over Time? Co-ordinated Analyses of Individual Participant Data from Five United Kingdom Birth Cohorts. PLOS Medicine. Published online May 19 2015

Categories: NHS Choices

Stem cells could provide a treatment for a 'broken heart'

NHS Choices - Behind the Headlines - Tue, 19/05/2015 - 15:00

"Scientists believe they may have discovered how to mend broken hearts," reports the Daily Mirror.

While it may sound like the subject of a decidedly odd country and western song, the headline actually refers to damage to the heart muscle.

A heart attack occurs when the muscle of the heart becomes starved of oxygen, causing it to become damaged. If there is significant damage, the heart can become weakened and unable to pump blood effectively around the body. This is known as heart failure, and can cause symptoms such as shortness of breath and fatigue.

The heart contains "dormant" stem cells, and researchers want to learn more about them to work out ways to get them to help repair damaged heart tissue.

In this new laboratory and animal study, researchers identified a characteristic genetic "signature" of adult mouse heart stem cells. This led to them being more easily identified than they have been previously, making them easier to "harvest" for study.

Injections of these cells into damaged mouse hearts were shown to improve heart function, even though very few of the donor cells remained in the heart.

These findings will help researchers to study these cells better, for example investigating whether they could be chemically triggered to repair the heart without removing them first. While the hope is that this research could lead to treatments for human heart damage, as yet the results are just in mice.

The researchers also note that they need to find out whether human hearts have the equivalent cells.

Where did the story come from?

The study was carried out by researchers from Imperial College London and other UK and US universities. It was funded by the British Heart Foundation, European Commission, European Research Council and the Medical Research Council, with some of the researchers additionally supported by the US National Heart and Lung Institute Foundation and Banyu Life Science Foundation International.

The study was published in the peer-reviewed scientific journal Nature Communications. It is open access, meaning it can be read for free online.

The Mirror’s main report covers the story reasonably, but one of its subheadings – that scientists have identified a protein that if injected can stimulate heart cell regeneration – is not quite right. The researchers have not yet been able to utilise a protein to stimulate heart regeneration. They have just used a specific protein on the surface of the stem cells to identify the cells. So it was the cells, and not the protein, that were used in regeneration.

The Daily Telegraph’s coverage of the study is good and includes some useful quotes from the lead researcher Professor Michael Schneider. The article also makes it clear that this study only involved mice.

What kind of research was this?

This was laboratory and animal research studying the adult stem cells in mice that can develop into heart cells.

A number of diseases cause (or are caused by) damage to the heart. For example, heart attacks occur when some heart muscle cells do not get enough oxygen and die – usually due to a blockage in the coronary arteries that supply the heart muscle with oxygen-rich blood. There are "dormant" stem cells in the adult heart that can generate new heart muscle cells, but are not active enough to completely repair damage.

Researchers are starting to test ways to encourage the stem cells to repair heart damage fully. In this study, the researchers were studying these cells very closely, to understand whether all heart stem cells are the same, or whether there are different types and what they do. This information could help them to identify the right type of cells and conditions they need to fix heart damage.

This type of research is a common early step in understanding how the biology of different organs works, with the aim of eventually being able to develop new treatments for human diseases. Much of human and animal biology is very similar, but there can be differences. Once researchers have developed a good idea of how the biology works in animals, they will then carry out experiments to check to what extent this applies to humans.

What did the research involve?

The researchers obtained stem cells from adult mouse hearts and studied their gene activity patterns. They then went on to study which of these cell types could develop into heart muscle cells in the lab, and which could successfully produce heart muscle cells that could integrate into the heart muscle of living mice.

The researchers started by identifying a population of adult mouse heart cells known to contain stem cells. They separated these into different groups, then further separated each group into single cells and studied exactly which genes were active in each cell. They looked at whether the cells showed very similar gene activity patterns (suggesting they were all the same type of cell, doing the same things), or whether there were groups of cells with different gene activity patterns. They also compared these activity patterns with those of young heart muscle cells from newborn mice.

Once they had identified a group of cells that looked like those able to develop into heart muscle cells, they tested whether they could grow and maintain these cells in the lab. They then injected the cells into the damaged hearts of mice to see if they formed new heart muscle cells, and carried out various other experiments to further characterise the cells that form new heart muscle cells.

What were the basic results?

The researchers found distinct groups of cells with different gene activity patterns. One particular group of these cells was identified as the cells that have started to develop into heart muscle cells. These cells were referred to as Sca1+ SP cells, and one of the genes they expressed produces a protein called PDGFRα, which is found on the surface of these cells. These cells grew and divided well in the lab, and the offspring cells maintained the characteristics of the original Sca1+ SP cells.

When the researchers injected samples of the offspring cells into damaged mouse hearts, they found that between 1% and 8% of the cells remained in the heart muscle tissue the day after the injection. Over time, most of these cells were lost from the heart muscle, but some remained (about 0.1% to 0.5% at two weeks).

By two weeks, some (10%) of the remaining cells were showing signs of developing into immature muscle cells. At 12 weeks, more of the remaining cells (50%) were showing signs of being muscle cells. These cells were also showing signs of being more developed and forming muscle tissue. However, there were only a few of these donor cells in each heart (5 to 10 cells). Some of the donor cells also appeared to have developed into the two sorts of cells found in blood vessels.

Mice whose hearts had been injected with the donor cells showed better heart function at 12 weeks than those who had a "dummy" injection with no cells. The size of the damaged area was smaller in those with donor cell injections, and the heart was able to pump more blood.

Further experiments showed the researchers that they could identify and separate out the cells that specifically develop into heart muscle cells by looking for the PDGFRα protein on their surface. The cells identified in this way grew well in the lab, and when injected into the heart they could integrate into the heart muscle and showed signs of developing into muscle cells after two weeks.

How did the researchers interpret the results?

The researchers concluded that they had developed a way to identify and separate out a specific subset of adult mouse heart stem cells that can generate new heart muscle cells. They say that, at the very least, this will help them to study these cells in mice more easily. If a human equivalent of these cells exists, they may also be able to use this knowledge to obtain stem cells from adult heart tissue.

Conclusion

This laboratory and animal study has identified a characteristic genetic "signature" of adult mouse heart stem cells. This has allowed them to be more easily identified than they have been previously. Injections of these cells have also been shown to be able to improve heart function after heart muscle damage in mice.

These findings will help researchers to study these cells more closely in the lab and investigate how they can prompt them to repair damaged heart muscle, possibly without removing them from the heart first. While the hope is that this research could lead to treatments for human heart damage, for example after a heart attack, as yet the results are just in mice. The researchers themselves note that they now need to find out whether human hearts have the equivalent cells.

Many researchers are working on the potential uses of stem cells to repair damaged human tissue, and studies such as this are important parts of this process.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Stem cells could be used to repair previously irreversible damage from heart attacks, say scientists. Daily Mirror, May 18 2015

'Heartbreak' stem cells could repair damage of heart attack. The Daily Telegraph, May 19 2015

Links To Science

Noseda M, Harada M, McSweeney S, et al. PDGFRα demarcates the cardiogenic clonogenic Sca1+ stem/progenitor cell in adult murine myocardium. Nature Communications. Published online May 18 2015

Categories: NHS Choices

Bioengineering advances raise fears of 'home-brew heroin'

NHS Choices - Behind the Headlines - Tue, 19/05/2015 - 13:00

The Daily Mirror carries the alarming headline that, "Heroin made in home-brew beer kits could create epidemic of hard drug abuse". It says scientists are "calling for urgent action to prevent criminal gangs gaining access to [this] new technology" following the results of a study involving genetically modified yeast.

This study did not actually produce heroin, but an important intermediate chemical in a pathway that produces benzylisoquinoline alkaloids (BIAs). BIAs are a group of plant-derived chemicals that include opioids, such as morphine.

BIAs have previously been made from similar intermediate chemicals in genetically engineered yeast. Researchers hope that by joining these two parts of the pathway, they will get yeast that can produce BIAs from scratch. This could be cheaper and easier than current production methods, which often still involve extraction from plants.

But because morphine can be refined into heroin using standard chemical techniques and yeast can be grown at home, this has led to concerns about the potential misuse of this discovery.

So, will this lead to a rash of "Breaking Bad"-style heroin labs in criminals' garages and spare rooms? We doubt it – at least in the near future. A strain that can produce morphine has not yet been made, and it would need to be specially genetically engineered – it could not be created from unmodified home-brewing yeast available off the shelf.

Still, it may be worth raising awareness about the potential need for regulation of opioid-producing strains. 

Where did the story come from?

The study was carried out by researchers from the University of California and Concordia University in Canada.

It was funded by the US Department of Energy, the US National Science Foundation, the US Department of Defense, Genome Canada, Genome Quebec, and a Canada Research Chair.

The study was published in the peer-reviewed journal, Nature Chemical Biology. It is open access, meaning it can be read online for free.

The Daily Mirror's reporting takes a sensationalist angle – the picture caption, for example, reads: "Home-brewed heroin is on the rise, scientists warn". No heroin was made in this study, and complete opioid-producing strains of yeast have not been made yet – home-brewing heroin from yeast is not yet possible, much less on the rise.

The possibility of home-brewing comes from a commentary on the article in Nature, which discusses the findings of this and related studies. The commentary also discusses the potential legal implications and the ways the risks could be reduced – for example, scientists could produce only yeast strains that make weaker opioids. But its authors acknowledge that the risk of criminals making opiate-producing yeast strains themselves is low.

The Guardian and BBC News take a slightly more restrained approach, suggesting that home-brew heroin may be a problem in the future but it certainly is not an issue now. The BBC also points out that producing medicines in microbes is not a new thing. 

What kind of research was this?

This laboratory research studied whether a group of chemicals called benzylisoquinoline alkaloids (BIAs) could be produced in yeast. BIAs include a range of chemicals used as drug treatments in humans. This includes opioids used for pain relief, as well as antibiotics and muscle relaxants.

Opioids are among the oldest known drugs, first identified as chemicals produced naturally by opium poppies. Morphine is an opioid derived from poppies; it and other derivatives, as well as man-made versions of opioids, are used to treat pain.

Opioids also produce euphoria and can be addictive. The illegal drug heroin is an opiate that can be produced by refining morphine to make it more powerful.

The researchers say many of these compounds are still made from plants such as the opium poppy, as they are chemically very complex and therefore difficult and expensive to make from scratch in the lab.

However, now we know much more about how the chemicals are made in plants, it may be possible to genetically engineer microbes in the lab to produce these chemicals in industrial quantities.

The researchers say the yeast S. cerevisiae – sometimes known as baker's or brewer's yeast – has been used to produce BIAs in the lab from intermediate chemicals in the BIA production pathway. The earlier steps in the pathway have not yet been achieved in yeast, although they have been in bacteria.

In this study, the researchers wanted to see if they could produce the intermediate chemical (S)-reticuline in yeast. This has been tried before, but was not successful. 

What did the research involve?

The researchers knew they needed one particular type of protein called a tyrosine hydroxylase, which would work in yeast to perform the first step in the process of making (S)-reticuline.

They developed a system to allow them to quickly screen a large group of known tyrosine hydroxylases to identify one that would work in yeast. The tyrosine hydroxylase is needed to produce the intermediate chemical dopamine.

The researchers then needed other proteins to convert dopamine and a chemical already present in yeast into a further intermediate chemical, and then to carry out the remaining chemical steps needed to form (S)-reticuline. They identified the proteins needed for these stages from the opium poppy and the Californian poppy.

Finally, they genetically engineered yeast cells to produce tyrosine hydroxylase and all of the other proteins needed, and tested whether the yeasts were able to produce (S)-reticuline.  

What were the basic results?

The researchers were able to identify tyrosine hydroxylase from the sugar beet that worked in yeast, allowing them to produce the intermediate chemical dopamine. They used genetic engineering to make a version of this protein in yeast that worked even better than the original one.

They were also able to produce the other proteins they needed in yeast. A yeast strain producing all of these proteins was able to produce (S)-reticuline, the chemical intermediate needed in the production of opioids. 

How did the researchers interpret the results?

The researchers concluded that coupling their work with the work already done, and improving on the yield of the process, "will enable low-cost production of many high-value BIAs".

They say that, "Because of the potential for illicit use of these products, including morphine and its derivatives [such as heroin], it is critical that appropriate policies for controlling such strains be established so that we garner the considerable benefits while minimising the potential for abuse." 

Conclusion

This laboratory study has successfully managed to produce an important intermediate chemical in the pathway that produces benzylisoquinoline alkaloids (BIAs), a group of plant-derived chemicals that include opioids.

BIAs such as morphine have previously been made from similar intermediate chemicals in genetically engineered yeast, but this is the first time the earlier stages have been completed successfully in yeast. The researchers hope that by joining these two parts of the pathway, they will get yeast that can produce BIAs from scratch.

This study has not completed this final step, however. Researchers will need to test this before they know that it will be successful. They acknowledge that further optimisation of their method to produce more of the intermediate chemical is needed before it could be used to produce BIAs.

This study has generated media coverage speculating about the possibility of "home-brew heroin" creating an "epidemic of hard drug use". But the researchers did not produce heroin or any other opioid, only an intermediate chemical. These yeasts have been specially genetically engineered, and the experiments are not the sort of thing most people are going to be able to easily replicate in their garage.

While the likelihood of such strains being successfully made for criminal use seems very small, at least in the short to medium term, criminals can be resourceful. Considering the potential implications of this research and whether policies are needed, both nationally and internationally, may be prudent.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Home-brewed heroin? Scientists create yeast that can make sugar into opiates. The Guardian, May 18 2015

'Home-brewed morphine' made possible. BBC News, May 19 2015

Heroin made in home-brew beer kits could create epidemic of hard drug abuse. Daily Mirror, May 18 2015

Links To Science

DeLoache WC, Russ ZN, Narcross L, et al. An enzyme-coupled biosensor enables (S)-reticuline production in yeast from glucose. Nature Chemical Biology. Published online May 18 2015

Categories: NHS Choices

No proof orange juice boosts brain power

NHS Choices - Behind the Headlines - Mon, 18/05/2015 - 14:00

"Drinking orange juice every day could improve brain power in the elderly, research shows," the Mail Online reports. Despite the encouraging words from the media, the small study this headline is based on does not provide strong evidence that an older person would see any noticeable difference in their brain power if they drink orange juice for two months.

The study involved 37 healthy older adults, who were given orange juice or orange squash daily for eight weeks before switching to the other drink for the same amount of time. The 100% orange juice contains more flavonoids, a type of plant compound that has been suggested to have various health benefits.

The researchers gave participants a whole battery of cognitive tests before and after each eight-week period. Both drinks caused very little change on any of the test results and were not significantly different from each other on any of the tests individually.

The researchers also carried out analyses where they combined the test results and looked at the statistical relationships between the drink given and when the test was given. On this occasion, they did find a significant result – overall cognitive function (the pooled result of all the tests combined) was better after the juice than after the squash.

But the overall pattern of the results doesn't seem very convincing. This study does not provide conclusive evidence that drinking orange juice has an effect on brain function.  

Where did the story come from?

The study was carried out by researchers from the University of Reading, and was funded by Biotechnology and Biological Sciences Research Council grants and the Florida Department of Citrus, also known as Florida Citrus.

Florida Citrus is a government-funded body "charged with the marketing, research and regulation of the Florida citrus industry", a major industry in the state. Florida Citrus was reported to have helped design the study.

The study was published in the peer-reviewed American Journal of Clinical Nutrition.

The Mail Online took the study at face value without subjecting it to any critical analysis. Looking into the research reveals rather unconvincing evidence that drinking orange juice would have any effect on a person's brain function.  

What kind of research was this?

This was a randomised crossover trial that aimed to compare the effects of 100% orange juice, which has high flavanone content, and orange-flavoured cordial, which has low flavanone content, on cognitive function in healthy older adults.

Flavonoids are pigments found in various plant foods. It has been suggested they have various health benefits – for example, some studies have suggested that high consumption of flavonoids can have beneficial effects on cognitive function. Flavanones are the specific type of flavonoids found in citrus fruits. This trial investigated the effect of flavanones in orange juice.

This was a crossover trial, meaning the participants acted as their own control, taking both the high and low flavanone content in random order a few weeks apart. The crossover design effectively increases the sample size tested, and is appropriate if the interventions are not expected to have a lasting impact on whatever outcome is being tested. 

What did the research involve?

The study recruited 37 older adults (average age 67) who were given daily orange juice or orange squash for eight weeks in a random order, with a four-week "washout" period in between. They were tested to see whether the drinks differed in their effect on cognitive function.

All participants were healthy, without significant medical problems, did not have dementia and had no cognitive problems. In random order, they were given:

  • 500ml 100% orange juice containing 305mg natural flavanones daily for eight weeks
  • 500ml orange squash containing 37mg natural flavanones daily for eight weeks

The drinks contained roughly the same calories. The participants were not told which drink they were drinking, and the researchers assessing the participants also did not know.

Before and after each of the eight-week periods, participants visited the test centre and had data collected on height, weight, blood pressure, health status and medication. They also completed a large battery of cognitive tests assessing executive function (thinking, planning and problem solving) and memory.

The researchers analysed change in cognitive performance from baseline to eight weeks for each drink, and compared the effects of the two drinks. 

What were the basic results?

On the whole, the two drinks caused very minimal change from baseline on any of the individual tests. There was no statistically significant difference between the two drinks when comparing score change from baseline on any of the tests individually.

There was only a single significant observation when looking at the individual tests at the end of treatment (rather than change from baseline). Scores on a test of immediate episodic memory were higher eight weeks after drinking 100% orange juice compared with squash (9.6 versus 9.1). However, when change from baseline was compared, this did not translate into any significant difference between the drinks.

The researchers also carried out a statistical analysis looking at the interactions between the drink given and the testing occasion. In this analysis, they did find an interaction between drink and testing occasion for global cognitive function (all test results combined): overall cognitive function was significantly better at the eight-week visit after the orange juice.

How did the researchers interpret the results?

The researchers concluded that, "Chronic daily consumption of flavanone-rich 100% orange juice over eight weeks is beneficial for cognitive function in healthy older adults."

They further say that, "The potential for flavanone-rich foods and drinks to attenuate cognitive decline in ageing and the mechanisms that underlie these effects should be investigated." 

Conclusion

Overall, this small crossover study does not provide conclusive evidence that drinking orange juice has an effect on brain function.

A wide variety of cognitive tests were performed in this study before and after the two drinks (orange juice and squash). The individual test results do not indicate any large effects. Notably, both drinks caused very little change from baseline on any of the test results, and were not significantly different.

The only significant results were found for overall cognitive function when combining test results and looking at statistical interactions. The fact a consistent effect wasn't seen across individual measures and the different analyses means the results are not very convincing.

The trial is also quite small, including only 37 people. These participants were also a specific sample of healthy older adults who volunteered to take part in this trial, and none of them had any cognitive impairment, so the results may not be applicable to other groups.

While the participants were not told what they were drinking and the drinks were given in unlabelled containers, they did have to dilute them differently. This and the taste of the drinks may have meant the participants could tell the drinks apart. The researchers did ask participants what they thought they were drinking, and although about half said they did not know, most of those who gave an opinion (16 out of 20) got it right.

The study also only compared high- and low-flavanone drinks. There was no comparison with a flavanone-free drink, or with foods or drinks that contain other types of flavonoid.

The possible health benefits of flavonoids, or of flavanones specifically, will continue to be studied and speculated about. However, this study can't conclusively tell us that they have an effect on brain power.

A good rule of thumb is what's good for the heart is also good for the brain – taking regular exercise, eating a healthy diet, avoiding smoking, maintaining a healthy weight, and drinking alcohol in moderation.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Two daily glasses of orange juice 'boosts elderly brainpower': Significant improvements can be seen in less than two months, research shows. Mail Online, May 16 2015

Orange juice 'improves brain function'. The Daily Telegraph, May 15 2015

Links To Science

Kean RJ, Lamport DJ, Dodd GF, et al. Chronic consumption of flavanone-rich orange juice is associated with cognitive benefits: an 8-wk, randomized, double-blind, placebo-controlled trial in healthy older adults. The American Journal of Clinical Nutrition. Published online January 14 2015

Categories: NHS Choices

Drug combination for cystic fibrosis looks promising

NHS Choices - Behind the Headlines - Mon, 18/05/2015 - 13:00

"A 'groundbreaking' new therapy for cystic fibrosis could hugely improve patients' quality of life," The Daily Telegraph reports after a combination of two drugs – lumacaftor and ivacaftor – was found to improve lung function.

The headline is prompted by a trial looking at a new treatment protocol for cystic fibrosis, a genetic condition caused by a mutation in a gene that normally creates a protein that controls salt balance in a cell. This leads to thick mucus build-up in the lungs and other organs, causing a persistent cough, poor weight gain and regular lung infections.

The prognosis for cystic fibrosis has improved dramatically over the past few decades, but the condition is still life-limiting. The two drugs in this new combination work together to make the faulty cell protein work better.

More than 1,000 people with cystic fibrosis were given the new protocol or a placebo for 24 weeks. The treatment led to meaningful improvements in lung function compared with the placebo. It also reduced the number of lung infections, improved quality of life, and helped people gain weight.

Further study of the drugs' effects in the longer term will be needed, in addition to collecting more information on side effects.

But this treatment won't work for all people with cystic fibrosis. There are various gene mutations, and this treatment only targeted the most common one, which affects around half of people with the condition.

Where did the story come from?

The study was carried out by researchers from various international institutions, including the University of Queensland School of Medicine in Australia, and Queen's University Belfast.

There were various sources of funding, including Vertex Pharmaceuticals, which makes the new treatment.

The study was published in the peer-reviewed New England Journal of Medicine.

The UK media provided balanced reporting of the study, including cautions that the treatment should work in around half of people with cystic fibrosis. Researchers were quoted as saying that although they hope this could improve survival for people with cystic fibrosis, they don't know this for sure.

However, some of the reporting focusing on the quality of life improvements does not take note of the researchers' caution that, overall, these improvements fell short of what was considered meaningful.

The media also debated the cost of the treatment protocol. The Guardian reports one year's course of lumacaftor alone costs around £159,000. The new treatment protocol is being assessed by the National Institute for Health and Care Excellence (NICE) to see if it is a cost-effective use of resources.

What kind of research was this?

This was a randomised controlled trial (RCT) aiming to investigate the effects of a new treatment for cystic fibrosis.

Cystic fibrosis is a hereditary disease caused by mutations in a gene called cystic fibrosis transmembrane conductance regulator (CFTR). The protein made by the CFTR gene affects the balance of chloride and sodium inside cells.

In people with cystic fibrosis, the CFTR protein does not work. This causes mucus secretions in the lungs and other parts of the body to be too thick, leading to symptoms such as a persistent cough and frequent chest infections.

There is no cure for cystic fibrosis, and current management focuses on breaking down mucus and controlling the symptoms with treatments such as physiotherapy and inhaled medicines.

We have two copies of all of our genes – one inherited from each parent. To develop cystic fibrosis, you need to inherit two abnormal copies of the CFTR gene. One in 25 people carry a copy of the abnormal CFTR gene. If two people carrying an abnormal gene have a child and the child receives the abnormal gene from both parents, they will develop cystic fibrosis.
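As a rough illustration of the inheritance arithmetic above, here is a back-of-the-envelope sketch in Python. It assumes random mating and uses the 1-in-25 carrier frequency quoted in the article; real population figures vary, and this is not a genetic risk calculator.

```python
# Back-of-the-envelope sketch of the CFTR inheritance arithmetic.
# Assumes random mating; the 1-in-25 carrier figure comes from the article.

carrier_rate = 1 / 25  # chance a given parent carries one faulty CFTR copy

# A child of two carriers has a 1-in-4 chance of inheriting the faulty
# copy from both parents (and so developing cystic fibrosis).
affected_if_both_carriers = 1 / 4

# Chance that a random couple are both carriers AND their child inherits
# the faulty copy from each of them:
birth_risk = carrier_rate * carrier_rate * affected_if_both_carriers

print(f"Roughly 1 in {round(1 / birth_risk)} births")  # Roughly 1 in 2500 births
```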

This trial looked at the effects of a treatment that helps the abnormal CFTR protein work better, called lumacaftor. It was tested in combination with another treatment called ivacaftor, which also boosts the activity of CFTR proteins.

There are various types of CFTR gene mutation. One, called Phe508del, is the most common and affects around 45% of people with the condition. Lumacaftor specifically corrects the abnormality caused by the Phe508del mutation, so this trial only included people with this mutation. An RCT is the best way of examining the safety and effectiveness of a new treatment.

What did the research involve?

This study reports the pooled results of two identical RCTs that have investigated the effects of two different doses of lumacaftor, in combination with ivacaftor, for people with cystic fibrosis who have two copies of the Phe508del CFTR mutation.

The study recruited 1,122 people aged 12 or older; 559 in one of the trials and 563 in the other. Participants in both trials were randomly assigned to one of three study groups:

  • 600mg of lumacaftor every 24 hours in combination with 250mg of ivacaftor every 12 hours
  • 400mg of lumacaftor every 12 hours in combination with 250mg of ivacaftor every 12 hours
  • placebo pills every 12 hours

The placebo pills looked just like the lumacaftor and ivacaftor pills and were taken in the same way, so researchers and participants could not tell whether they were taking placebo or not. All treatments were taken for 24 weeks.

The main outcome examined was how well the participants' lungs worked, measured as a change in percentage of predicted forced expiratory volume (FEV1). This is the amount of air that can be forcibly exhaled in the first second after a full in-breath, which provides a well-validated method of assessing lung health and function.

The percentage of predicted FEV1 shows how much you exhale as a percentage of what you would be expected to, based on your age, sex and height.
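To make the measure concrete, percent predicted FEV1 is simply the measured volume divided by the volume predicted for a person of that age, sex and height, expressed as a percentage. A minimal sketch, using an assumed predicted value of 4.0 litres for illustration (a made-up figure, not from any clinical reference equation):

```python
def percent_predicted_fev1(measured_litres, predicted_litres):
    """Express a measured FEV1 as a percentage of the predicted value."""
    return round(100 * measured_litres / predicted_litres, 1)

# A participant exhaling 2.44 litres against an assumed predicted 4.0 litres
# scores 61% - the average baseline figure reported in this trial.
print(percent_predicted_fev1(2.44, 4.0))  # 61.0
```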

The researchers also looked at the change in body mass index (BMI) and in people's quality of life in terms of their lung function, as reported in the patient-reported Cystic Fibrosis Questionnaire – Revised (CFQ-R).

The study analysis included all patients who received at least one dose of the study drug, which was 99% of all participants.  

What were the basic results?

At the start of the study, the average FEV1 of participants was 61% of what was predicted (what it ought to be). There were no differences between the randomised groups in terms of age, sex, lung function, BMI or current cystic fibrosis treatments used.

Lumacaftor-ivacaftor significantly improved how well the participants' lungs worked compared with placebo in both trials, and at both doses. The improvement in percentage of predicted FEV1 ranged between 2.6 and 4.0 percentage points across the two trials compared with placebo over the 24 weeks.

There were also significant improvements compared with placebo in BMI (range of improvement 0.13 to 0.41 units), and respiratory quality of life (1.5 to 3.9 points on the CFQ-R). There was also a reduced rate of lung infections in the treatment groups.

There was similar reporting of side effects across the two treatment groups and placebo groups (around a quarter of participants experienced side effects). The most common adverse effect participants experienced was lung infections.

However, the proportion of participants who stopped taking part in the study as a result of side effects was slightly higher among the drug treatment groups (4.2%) compared with placebo groups (1.6%).

The specific reasons for discontinuation varied between individuals – for example, a couple stopped because of shortness of breath or wheezing; some stopped because of blood in their sputum; some because of a rash; and so on. 

How did the researchers interpret the results?

The researchers concluded that, "These data show that lumacaftor in combination with ivacaftor provided a benefit for patients with cystic fibrosis [who carried two copies of] the Phe508del CFTR mutation." 

Conclusion

This trial has demonstrated that this new treatment combination could be effective in improving lung function for people with cystic fibrosis who have two copies of the common Phe508del CFTR mutation.

The trial has many strengths, including its large sample size and the fact it captured outcomes at six months for almost all participants. The improvements in lung function were seen while the participants continued to use their standard cystic fibrosis treatments. As the researchers suggest, this indicates the treatment could be a beneficial add-on to normal care to further improve symptoms.

The results seem very promising, but there are limitations to note. Though the lung function improvements were said to be clinically meaningful, improvements in quality of life relating to lung function fell short of what is considered clinically meaningful (four points and above on the CFQ-R scale).

The trial only included people with well-controlled cystic fibrosis, and effects of the treatment might not be as good for people with poorer disease control. The treatment combination would also only be suitable for people with the Phe508del CFTR mutation.

This trial only included people with two copies of this mutation, which is only the case in around 45% of people with the condition. Whether the treatment would benefit people who carry one copy of the Phe508del mutation and a different second CFTR mutation is not yet clear, and people with two non-Phe508del mutations would not be expected to benefit from this treatment.

The effects of this treatment combination will need to be studied in the longer term, beyond six months – for example, to see whether it could prolong life. Further information will need to be collected on side effects and how commonly they cause people to stop treatment.

Though this treatment targets the abnormal protein that causes symptoms, as one of the study authors notes in The Guardian, it is not a cure. The lead researcher, Professor Stuart Elborn, was quoted as saying: "It is not a cure, but it is as remarkable and effective a drug as I have seen in my lifetime."

Overall, the results of this trial show promise for this new treatment for people with cystic fibrosis who carry two copies of this specific gene mutation.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

'Groundbreaking' new treatment for cystic fibrosis. The Daily Telegraph, May 17 2015

Cystic fibrosis treatment found to improve lives of sufferers in trials. The Guardian, May 17 2015

Cystic fibrosis drug offers hope to patients. BBC News, May 17 2015

New treatment brings hope on cystic fibrosis: Thousands of patients could have lives transformed by new drug for most common form of the condition. Mail Online, May 18 2015

Links To Science

Wainwright CE, Elborn JS, Ramsey BW, et al. Lumacaftor–Ivacaftor in Patients with Cystic Fibrosis Homozygous for Phe508del CFTR. The New England Journal of Medicine. Published online May 17 2015

Categories: NHS Choices

Does holding your breath during an injection make it less painful?

NHS Choices - Behind the Headlines - Fri, 15/05/2015 - 13:00

"Hate injections? Holding your breath can make the pain of jabs more bearable," the Mail Online reports. A team of Spanish researchers mechanically squeezed the fingernails of 38 willing volunteers to cause them pain.

For one round of experiments, the group were told to hold their breath before and during the pain squeeze. In the second round, they had to breathe in slowly while the pain was applied. Those holding their breath reported slightly lower pain ratings overall than those breathing in slowly.

The hypothesis underpinning this technique is that holding your breath increases blood pressure, which in turn reduces nervous system sensitivity, meaning you have a reduced perception of any pain signals.

But before you try this out, it's worth saying the pain perception differences were very small – a maximum 0.5 point difference on a scale from 0 to 10.

Also, the pain scores of the experimental breathing styles weren't compared with those of people breathing normally, so we don't actually know if either was beneficial overall at reducing pain perception, only relative to one another.

We wouldn't advise changing your breathing habits in an attempt to avoid pain based on the results of this study.  

Where did the story come from?

The study was carried out by researchers from University of Jaén in Spain, and was funded by the Spanish Ministry of Science and Innovation.

It was published in the peer-reviewed journal, Pain Medicine.

Generally, the Mail Online reported the story accurately. In their article, the lead study author explained that holding your breath won't work for an unexpected injury, such as standing on a pin or stubbing a toe. But it might work if you start holding your breath before the pain kicks in – for example, anticipating the sting of an injection.

The Mail added balance by indicating other scientists were critical of the findings. They said the pain reduction was very small, and pointed out that holding your breath might make your muscles more tense, which could worsen pain in some circumstances, such as childbirth.  

What kind of research was this?

This human experimental study looked at whether holding your breath affects pain perception.

The researchers explain that holding your breath immediately after a deep inhalation slows your heart rate and increases your blood pressure. This stimulates pressure-sensing receptors called baroreceptors to send signals to the brain to reduce blood pressure.

This happens through reduced activity of the sympathetic nervous system, which is involved in the "fight or flight" response to danger. When working as it should, this feedback loop ensures blood pressure doesn't get too high.

The researchers say the dampening down of this part of the nervous system might also reduce sensitivity to pain. In this study, the researchers wanted to test their theory that increasing your blood pressure through holding your breath would reduce your perception of pain.   

What did the research involve?

Researchers used a machine to squeeze the fingernails of 38 healthy adult volunteers at different pressures to stimulate pain. Before the squeeze, the group were told to inhale slowly or to hold their breath after a deep breath in.

The researchers analysed ratings of pain in the two breathing styles to see if there was a difference. Volunteers were pre-tested to find a nail squeeze pressure they found painful and three personalised pain intensity thresholds.

Two breathing styles were tested and compared in each person. One involved breathing in slowly for at least seven seconds while the pain was applied. The other involved inhaling deeply and holding the breath while the pain was applied, before exhaling for seven seconds without actively forcing the breath out.

All volunteers practised both breathing styles before the experiment began, until they were confident they could do them properly. Once they had established their breathing, each volunteer had one fingernail mechanically squeezed for five seconds. After the squeeze, participants could breathe normally.

They were asked to rate the pain on a Likert scale ranging from 0 (not at all painful) to 10 (extremely painful). The experiment was repeated on the same person using three pain intensity thresholds for each breathing condition.

Volunteers knew the experiment was about pain and breathing, but they were not told which breathing experiment the study team expected to work. 

What were the basic results?

Ratings of pain intensity were consistently higher in the slow inhalation condition than in the breath-holding condition. This held true for each of the three pain intensities tested.

Both breathing styles slowed heart rates, but this happened a little quicker, and the difference was larger, in the breath-holding condition. 

How did the researchers interpret the results?

The researchers concluded that, "During breath-holding, pain perception was lower relative to the slow inhalation condition; this effect was independent of pain pressure stimulation."

On the implications of their findings, they said: "This simple and easy-to-perform respiratory manoeuvre may be useful as a simple method to reduce pain in cases where an acute, short-duration pain is present or expected (e.g. medical interventions involving needling, bone manipulations, examination of injuries, etc.)." 

Conclusion

This small human experimental study used a fingernail-squeezing machine to cause pain to 38 willing volunteers. It found those instructed to hold their breath before the pain stimulus consistently rated their pain lower than those told to breathe slowly.

The difference between the two breathing conditions was very small, although statistically significant. The biggest pain difference seen looked to be less than 0.5 points on a 10-point scale. How important this is to doctors or patients is debatable.

Similarly, the study compared two artificial breathing conditions against one another. They did not compare these against pain scores in people breathing normally throughout. This would have been useful, as it would give us an idea of whether one or both of the breathing types were any better than breathing normally.

On this point, the Mail Online reported that, "On a scale of 1 to 10, the pain experienced by volunteers fell by half a point from 5.5 to 5 when they held their breath". It wasn't completely clear whether they were talking about the difference between the two groups, or the absolute pain reduction experienced related to normal breathing.

This figure wasn't clear in the published research, so may have come from an interview. If true, it again highlights the rather small reduction in pain found.

The volunteers knew they were taking part in a pain study related to breathing. Participants' general expectations about the likely effects of the two breathing conditions therefore might have biased the results. Larger studies involving study blinding and randomisation would reduce the chance of this bias and others.

Overall, this study shows that changing your breathing pattern might affect your pain perception – but at such a small level that it might not be useful in any practical way.

There may be other dangers in holding your breath in an attempt to control pain. For example, you might feel lightheaded and pass out, or tense your muscles, which can hamper the ease of injections.

If you are worried about having an injection, tell the health professional beforehand. They can take steps to make the experience less distressing.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Hate injections? Holding your breath can make the pain of jabs more bearable. Mail Online, May 14 2015

Links To Science

Reyes del Paso G, de Guevara ML, Montoro CI. Breath-Holding During Exhalation as a Simple Manipulation to Reduce Pain Perception. Pain Medicine. Published online April 30 2015

Categories: NHS Choices

Single mothers have 'worse health in later life'

NHS Choices - Behind the Headlines - Fri, 15/05/2015 - 12:30

The Daily Telegraph today tells us that: "Single mothers in England [are] more likely to suffer ill health because their families 'do not support them'."

This is a half-truth. The large international study – involving 25,000 people from England, the US and 13 other European countries – behind the headline found a link between single motherhood between the ages of 16 and 49 and worse health in later life. But it did not find this was because families do not support them.

It would appear this claim was prompted by a trend the researchers spotted in the study: health risks were more pronounced in northern European countries and the US, while in southern European countries the risk was less pronounced.

The researchers speculated that in southern European countries there is more of a tradition of informal family support, where grandparents, aunts, uncles and cousins all pitch in with childcare duties. Or, as the proverb puts it, "It takes a village to raise a child".

While this hypothesis is plausible, it is also unproven, and was not backed up with any new robust data on social support as part of the study.

The study was very large and diverse, so the link between single motherhood and later ill health appears real. However, the reasons and causes behind it are still to be worked out.

Where did the story come from?

The study was carried out by researchers from US, Chinese, UK and German universities and was funded by the US National Institute on Aging.

The study was published in the peer-reviewed Journal of Epidemiology & Community Health.

The media reporting was only partially accurate, as most outlets took the social support explanation at face value. The link between single motherhood and later ill health was supported by the body of this study, but the study did not collect any information on social support, so this explanation, although plausible, was not based on direct evidence.

What kind of research was this?

The study investigated whether single motherhood before the age of 50 was linked to poorer health later in life, and whether the link was stronger in countries with weaker "social [support] safety nets". To do this, the researchers analysed data collected from past cohort and longitudinal studies across 15 countries.

The researchers say single motherhood is known to be linked to poorer health, but it was not known whether this link varied between countries.

Analysing previously collected data is a practical and legitimate study method. A limitation is that the original information was collected for specific reasons that usually differ from the research aims when coming to use it later. This can mean some information that would ideally be analysed is not there. In this study, the researchers couldn’t get information on social support networks, which they thought might explain some of their results.

What did the research involve?

The research team analysed health and lifestyle information on single mothers under 50 collected from existing large health surveys. The single mothers' health was documented into older age and compared across 15 countries.

Data was available from 25,125 women aged over 50 who participated in the US Health and Retirement Study; the English Longitudinal Study of Ageing; or the Survey of Health, Ageing and Retirement in Europe (SHARE). Thirteen of the 21 countries represented by SHARE (Denmark, Sweden, Austria, France, Germany, Switzerland, Belgium, The Netherlands, Italy, Spain, Greece, Poland, Czech Republic) had collected relevant data. With the US and England on board, this gave 15 countries for final analysis.

The researchers used data on number of children, marital status and any limitations on women's capacity for routine daily activities (ADL), such as personal hygiene and getting dressed, and instrumental daily activities (IADL), such as driving and shopping. Women also rated their own health.

Single motherhood was classified as having a child under the age of 18 while not being married; women living with an unmarried partner would still have been classed as single.

What were the basic results?

Single motherhood between the ages of 16 and 49 was linked to poorer health and disability in later life in several different countries. The risks were highest for single mothers in England, the US, Denmark and Sweden.

Overall 22% of English mothers had experienced single motherhood before age 50, compared with 33% in the US, 38% in Scandinavia, 22% in western Europe and 10% in southern Europe.

While single mothers had a higher risk of poorer health and disability in later life than married mothers, associations varied between countries.

For example, risk ratios for ADL limitations were significant in England, Scandinavia and the US but not in western Europe, southern Europe and eastern Europe.

Women who became single mothers before the age of 20, who were single mothers for more than eight years, or whose single motherhood resulted from divorce or non-marital childbearing, had a higher risk.

How did the researchers interpret the results?

The researchers concluded that: "Single motherhood during early adulthood or mid-adulthood is associated with poorer health in later life. Risks were greatest in England, the US and Scandinavia."

Although they didn't have good data to back it up, they suggested that social support and networks may partially explain the findings. For example, areas like southern Europe, which the researchers say have a strong cultural emphasis on family bonds, were not associated with higher health risks.

They add: "Our results identify several vulnerable populations. Women with prolonged spells of single motherhood; those whose single motherhood resulted from divorce; women who became single mothers at young ages; and single mothers with two or more children, were at particular risk."

Conclusion

This large retrospective study of over 25,000 women linked single motherhood between the ages of 16 and 49 with worse health in later life. This is not a new finding. What was new was that the link varied across different countries. Risks were estimated as greatest in England, the US and Scandinavia for example, but were less consistent in other areas of Europe.

The research team thought this might be caused by differences in how social networks supported single mothers in different countries, such as being able to rely on extended family. But they had no data to directly support this. They did not have information on, for example, socioeconomic status or social support networks during single motherhood, so could not analyse whether these were important causes. They also did not know whether any of the women they classed as single were actually in non-marital or same-sex partnerships, which may have affected the results.

Health status in later life is likely to be linked to a complex set of interrelated factors. Being a single mum may be one; social networks might be another. But based on this study, we don't yet know for sure, nor do we know the mechanisms by which this could lead to worse health.

Studies that collect information on levels of social support alongside health outcomes for single women would be able to tell us whether this is the likely cause, but getting this data may not be easy.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Single mothers more likely to suffer ill health, study finds. The Independent, May 14 2015

'Health risk' for single mothers. Mail Online, May 15 2015

Single mothers in England more likely to suffer ill health because their families 'do not support them'. The Daily Telegraph, May 14 2015

Links To Science

Berkman LF, Zheng Y, Glymour MM, et al. Mothering alone: cross-national comparisons of later-life disability and health among women who were single mothers. Journal of Epidemiology and Community Health. Published online May 14 2015

Categories: NHS Choices

Cannabis-extract pills 'not effective' for dementia symptoms

NHS Choices - Behind the Headlines - Thu, 14/05/2015 - 15:00

"Cannabis pills 'do not help dementia sufferers'," reports The Daily Telegraph. Previous research suggested one of the active ingredients in cannabis – tetrahydrocannabinol (THC) – can have effects on the nervous system and brain, such as promoting feelings of relaxation.

In this study, researchers wanted to see if THC could help relieve some of the behavioural symptoms of dementia, such as mood swings and aggression.

They set up a small trial involving 50 dementia patients with behavioural symptoms. They found taking a pill containing a low dose of THC for three weeks did not reduce symptoms any more than a dummy pill. Other studies suggested the substance might have benefits, but these studies were not as well designed as this trial.

The study was small, which reduces its ability to detect differences between groups. But the trend was for a greater reduction of symptoms in the placebo group than the THC group, suggesting that THC would not be expected to be better, even with a larger group.

People taking the THC pills did not show more of the typical side effects expected, such as sleepiness or dizziness. This led researchers to suggest the dose of THC may need to be higher to be effective. Further studies would be needed to determine whether a higher dose would be effective, safe and tolerable. 

Where did the story come from?

The study was carried out by researchers from Radboud University Medical Center and other research centres in the Netherlands and the US.

It was funded by the European Regional Development Fund and the Province of Gelderland in the Netherlands. The study drug was provided by Echo Pharmaceuticals, but they did not provide any other funding or have any role in performing the study.

The study was published in the peer-reviewed medical journal, Neurology.

The Daily Telegraph covered this story well.  

What kind of research was this?

This was a randomised controlled trial (RCT) looking at the effects of tetrahydrocannabinol (THC), one of the active ingredients in cannabis, on neuropsychiatric symptoms in people with dementia.

This was a phase II trial, meaning it is a small-scale test in people with the condition. It aims to check safety and get an early indication of whether the drug has an effect.

The researchers say they had also carried out a similar trial with a lower dose of THC (3mg daily), which did not have an effect, so they increased the dose in this trial to 4.5mg daily.

People with dementia often have neuropsychiatric symptoms, such as agitation or aggression, delusions, anxiety, or wandering.

The researchers report that existing drug treatments for dementia have a delicate balance of benefits and harms, and non-drug treatments are therefore preferred, but they have limited evidence of effectiveness and may be difficult to put into practice.

An RCT is the best way to assess the effects of a treatment. Randomisation is used to create well-balanced groups, so the treatment is the only difference between them. This means any differences in outcome can be attributed to the treatment itself and not to other confounding factors.

What did the research involve?

The researchers enrolled 50 people with dementia and neuropsychiatric symptoms. They randomly assigned them to take either a THC pill or an identical-looking inactive placebo pill for three weeks. They assessed symptoms over that time and looked at whether these differed in the two groups.

The trial initially intended to assess people who also had pain, but the researchers could not find enough people with both symptoms to participate, so they focused on neuropsychiatric symptoms. It also intended to recruit 130 people, but did not reach this number because of delays in getting approval for the trial in some centres.

About two-thirds (68%) of the participants had Alzheimer's disease and the rest had vascular dementia or mixed dementia. They all had experienced neuropsychiatric symptoms for at least a month. Both groups were taking similar neuropsychiatric drugs, including benzodiazepines, and continued to take these drugs during the study period. 

People with a major psychiatric disorder or severe aggressive behaviour were excluded. Just over half (52%) lived in a special dementia unit or nursing home. The participants were aged about 78 years on average.

The pills contained 1.5mg of THC (or none in the case of placebo) and were taken three times a day for three weeks. Neither the participants nor the researchers assessing them knew which pills they were taking, which stops them influencing the results.

The researchers assessed the participants' symptoms at the start of the study, two weeks later, and then at the end of the study. They used a standard questionnaire, which asked the caregiver about symptoms in 12 areas, including agitation or aggression and unusual movement behaviour, such as pacing, fidgeting or repeating actions such as opening and closing drawers.

The researchers used a second method to measure agitated behaviour and aggression, and also measured quality of life and the participants' ability to carry out daily activities. They also assessed whether the participants experienced any side effects from treatment. The researchers then compared the results of the two groups.   

What were the basic results?

Three participants did not complete the study: one in each group stopped treatment because they experienced side effects, and one in the placebo group withdrew their consent for taking part.

Both the placebo and the THC pill groups had a reduction in neuropsychiatric symptoms during the trial. There was no difference in the reduction between the groups. The groups also did not differ in a separate measure of agitation and anxiety, quality of life, or ability to carry out daily activities.

Two-thirds (66.7%) of people taking THC experienced at least one side effect, as did over half (53.8%) of those taking placebo. Side effects previously reported with THC, such as sleepiness, dizziness and falls, were actually more common in the placebo group.

How did the researchers interpret the results?

The researchers concluded they found no benefit of 4.5mg oral THC for neuropsychiatric symptoms in people with dementia after three weeks of treatment.

They suggested the dose of THC used may be too low because the participants did not experience the expected side effects of THC, such as sleepiness. 

Conclusion

This small phase II trial has found no benefit of taking THC pills (4.5mg a day) for neuropsychiatric symptoms in people with dementia in the short term.

The authors say this contrasts with previous studies, which have found some benefit. However, they note the previous studies were also limited in that they were even smaller, did not have control groups, or did not collect data prospectively.

The study was small, which reduces its ability to detect differences between groups. However, the non-significant trend was for a greater reduction of symptoms in the placebo group than the THC group.

The researchers note the improvement in the placebo group was "striking" and may be the result of factors such as the attention and support received from the study team, expectations of the participants on the effects of THC leading to perceived improvement, and training of the nursing personnel in the study.

While the authors suggest the dose of THC may need to be higher, further studies are needed to determine whether a higher dose would be effective and safe.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Cannabis pills 'do not help dementia sufferers'. The Daily Telegraph, May 13 2015

Links To Science

van den Elsen G, Ahmed AIA, Verkes R, et al. Tetrahydrocannabinol for neuropsychiatric symptoms in dementia. Neurology. Published online May 13 2015

Categories: NHS Choices

Could testing grip strength predict heart disease risk?

NHS Choices - Behind the Headlines - Thu, 14/05/2015 - 12:40

"Poor grip can signal chances of major illness or premature death," the Mail Online reports. An international study has provided evidence that assessing grip strength could help identify people who were at higher risk of cardiovascular incidents such as a heart attack.

The study authors wanted to see whether muscle strength, measured by grip, can predict the chances of getting a range of illnesses, and of dying, in high-, medium- and low-income countries. To find out, they tested 142,861 people across 17 countries and tracked what happened to them over the course of four years. The study found that the chances of dying during this period were higher for people with weaker grips, as were the chances of having a heart attack or stroke.

The grip test predicted death from any cause better than systolic blood pressure did, but the blood pressure test was better at predicting whether someone would have a heart attack or stroke.

Grip tests may be a quick way of assessing someone's chances of having cardiovascular disease, or dying from it, but the study doesn't tell us whether muscle weakness causes illness, or the other way around.

It is unlikely that a "grip test" would replace standard protocols for diagnosing cardiovascular diseases, which rely on a combination of risk assessment methods and tests, such as an electrocardiogram (ECG) and coronary angiography. However, such a test could be useful in areas of the world where access to medical resources is limited.

 

Where did the story come from?

The study was carried out by researchers from 23 different universities or hospitals, in 17 different countries. It was led by researchers at McMaster University in Ontario, Canada, and funded by grants from many different national research institutes and pharmaceutical companies. The study was published in the peer-reviewed medical journal The Lancet.

The media reported the study reasonably accurately, although the Mail and The Daily Telegraph seemed to confuse the maximum grip strength measured by a dynamometer with the strength of a person's handshake, which isn't the same thing. You would hope that somebody shaking your hand wouldn't try to grip it as powerfully as possible.

 

What kind of research was this?

This was a longitudinal population study carried out in 17 countries, with high, medium and low income levels. It aimed to see whether muscle strength, measured by grip, could predict someone's chances of illness or death from many different causes. As this was an observational study, it can't tell us whether grip strength is a cause of illness or death, but it can show us whether the two things are linked.

 

What did the research involve?

Researchers recruited 142,861 people from households across the 17 countries included in the study. They tested their grip strength and took other measurements, including their weight and height, and asked questions about their:

  • age
  • diet
  • activity levels
  • education
  • work
  • general health

They checked up with them every year for an average of four years, to find out whether they were still alive and whether they had developed certain illnesses. After four years, the researchers used the data to calculate whether grip strength was linked to a higher or lower risk of having died or developed an illness.

The researchers aimed to get an unbiased sample of people from across the countries involved. They tried to get documentary evidence about cause of death, if people had died. However, if that was not available, they asked a standard set of questions of the people in their household to try to ascertain the cause of death. Most people in the study had their grip strength tested in both hands, although some at the start of the study had only one hand tested.

The data was analysed in a number of different ways, to check whether the results were consistent across different countries and within the same country.

One big problem with this type of study is reverse causation. This means that the thing being measured – in this case, grip strength – could be either a cause or a consequence of illness. So someone with a weak grip might have weak muscles because they are already ill with a fatal disease.

To try to get around this, the researchers analysed the figures excluding everyone who had died within six months of being enrolled in the study, and ran another analysis excluding everyone with cardiovascular disease or cancer at the start of the study. The results were adjusted to take account of age and gender, because older people and women, on average, have weaker muscle strength than younger people and men.

 

What were the basic results?

The researchers had follow-up data for 139,691 people, of whom 3,379 (2%) died during the study. After adjusting their figures, researchers found that people with lower grip strength were more likely to have died during the study, whether from any cause, cardiovascular disease or non-cardiovascular disease. People with low grip strength were also more likely to have had a heart attack or stroke. There was no link between grip strength and diabetes, hospital admission for pneumonia or chronic obstructive pulmonary disease (COPD), injury from falling, or breaking a bone. The results did not change significantly when excluding people who died within six months, or who had cancer or cardiovascular disease at the start.

Grip was measured in kilograms (kg) and adjusted for age and height. Average values for men ranged from 30.2kg in low-income countries to 38.1kg in high-income countries. On average, across all study participants, a 5kg reduction in grip strength was associated with a 16% increase in the chance of death (hazard ratio 1.16, 95% confidence interval 1.13 to 1.20). Grip strength alone was more strongly associated with the chance of dying from cardiovascular disease (such as a heart attack or stroke) than systolic blood pressure – a more commonly used measurement. However, blood pressure was better at predicting whether someone was going to have a heart attack or stroke.
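To illustrate how a hazard ratio translates into the percentages quoted above, here is a minimal arithmetic sketch (this is not the study's own analysis code, just the standard interpretation of a per-5kg hazard ratio):

```python
# A hazard ratio (HR) of 1.16 per 5kg reduction in grip strength
# means each 5kg step multiplies the hazard of death by 1.16.
hr_per_5kg = 1.16

# One 5kg step: (1.16 - 1) * 100 = a 16% relative increase in hazard.
percent_per_5kg = (hr_per_5kg - 1) * 100

# Hazard ratios multiply across steps, so a 10kg reduction scales as
# HR squared (about 1.35), not as a simple doubling to 32%.
hr_per_10kg = hr_per_5kg ** 2
percent_per_10kg = (hr_per_10kg - 1) * 100

print(f"5kg weaker grip:  {percent_per_5kg:.0f}% higher hazard of death")
print(f"10kg weaker grip: {percent_per_10kg:.0f}% higher hazard of death")
```

Note this is a relative increase in hazard over the four-year follow-up, not an absolute risk: only 2% of participants died during the study.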

 

How did the researchers interpret the results?

The researchers said their findings showed that muscle strength is a strong predictor of death from cardiovascular disease and a moderately strong predictor of having a heart attack or stroke. They say that muscle strength predicts the chances of death from any cause, including non-cardiovascular disease, but not the chances of getting non-cardiovascular illness.

They go on to say that these findings suggest muscle strength may predict what happens to people who get illnesses, rather than just predicting whether they get ill. When they looked at what happened to people who got ill, whether from cardiovascular disease or other causes, those who had low grip strength were more likely to die than those with high grip strength. 

They say they cannot tell from the study why there is an association between muscle strength and chances of getting cardiovascular disease. They say further research is needed to see whether improving muscle strength would cut the chances of having heart attacks or strokes. 

 

Conclusion

These are interesting results from a range of very different countries, showing that people with low muscle strength may be at higher risk of dying prematurely than other people. Earlier studies in high-income countries had already suggested that this was the case, but this is the first study to show it holds true across countries from high to low incomes.

The study also shows that Europeans, and men from high-income countries, on average, have higher grip strength than people from lower-income countries. Interestingly, women from middle-income regions, such as China and Latin America, had slightly higher muscle strength than women from high-income countries.

What we don't know from the study is why and how muscle strength is linked to the chances of death. It might seem obvious that people who are weak and frail are more at risk of death than other people, but we don't know whether this is because they are already ill, or whether their weak muscle strength makes them more vulnerable to getting ill, or less able to survive illness if they do get sick.

Importantly, the study doesn't tell us what can be done for people with low muscle strength. Should we all be doing weight training to increase our strength, or would that make no difference? Low muscle strength may reflect lots of things, such as the amount of exercise people do, what type of diet they eat, their age and occupation. We know that muscle strength declines as we age, but we don't know the effect of trying to halt this decline.

Should doctors routinely measure people's grip strength to test their health? The researchers say it is a better predictor of cardiovascular death than blood pressure, and could be easily used in lower-income countries. But raised blood pressure and cholesterol both increase the risk of cardiovascular disease, and there are treatments available to get them under control. Simply measuring a person’s grip strength would miss this opportunity and not lead to any preventive strategies.

The "grip test" could be used in poorer countries as a quick way to identify people who might be at risk of heart attack or stroke, who could then benefit from follow-up testing.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

What your handshake says about your health: Poor grip can signal chances of major illness or premature death. Mail Online, May 14 2015

Palm 'holds secrets of future health'. BBC News, May 14 2015

Handshake strength 'could predict' heart attack risk. The Daily Telegraph, May 14 2015

Why testing your grip could save your life. Daily Express, May 14 2015

Links To Science

Leong DP, Teo KK, Rangarajan S, et al. Prognostic value of grip strength: findings from the Prospective Urban Rural Epidemiology (PURE) study. The Lancet. Published online May 13 2015

Categories: NHS Choices

Study finds seasons may affect immune system activity

NHS Choices - Behind the Headlines - Wed, 13/05/2015 - 12:25

"Winter immune boost may actually cause deaths," The Guardian reports. A new gene study suggests there may be an increase in inflammation levels during winter, which can protect against infection but could also make the body more vulnerable to other chronic diseases.

The study looked at gene expression (the process of using a gene to make a protein) in blood samples taken from 1,315 children and adults in different months throughout the year in a range of different countries. The researchers found increased activity of some of the genes involved in inflammation during the winter, and decreased activity in the summer.

The authors concluded that this seasonal change in the immune system could, for example, contribute to the worsening of some autoimmune disorders during the winter, such as rheumatoid arthritis.

But the immune system is extremely complex, and different genes showed different seasonal expression patterns. There were also important discrepancies in expression patterns in different parts of the world. Saying the immune system is "weaker" in certain seasons at this stage is therefore oversimplifying the findings of this research.

It is also likely these seasonal changes could at least in part be a response to changes in infections and allergens, such as pollen in the summer, but this type of study cannot prove cause and effect. Further research into this area is required before any practical application of these results can be found. 

Where did the story come from?

The study was carried out by researchers from the University of Cambridge and the London School of Hygiene and Tropical Medicine in the UK, and the Technical University of Munich and the Technical University of Dresden in Germany.

It was funded by various institutions, including the National Institute for Health Research, Cambridge Biomedical Research Centre, the UK Medical Research Council (MRC), The Wellcome Trust, and the UK Department for International Development.

The study was published in the peer-reviewed journal Nature Communications. This is an open-access journal, so the study is free to read online.

The media, on the whole, reported the story accurately, though the total number of people who had gene expression analyses was 1,315, not more than 16,000, as reported.

Many of the news sources talked of the immune system being "stronger", "weaker" or "boosted". These terms are, arguably, overly simplistic and not representative of the findings of this research. It is probably better to think of the overall pattern of immune activity changing from season to season, rather than the immune system going from "weak" to "strong", and back to "weak" again.

The Mail Online also reported that it is believed the amount of daylight we get "plays a role" in this increased immune activity. They say this "could explain why the seasonal effect was weaker in people from Iceland, where the extremely long summer days and short, dark winter days may upset the process". But this seems contradictory – if daylight plays a role, you would expect a greater seasonal effect in Iceland. 

What kind of research was this?

This research combined several observational studies that looked at the level of immune system activity at different times of the year in people from around the world.

It aimed to see if there was seasonal variation in the:

  • gene expression of inflammatory proteins and receptors such as interleukin-6 (IL-6) and C-reactive protein (these proteins are associated with autoimmune conditions such as rheumatoid arthritis)
  • number of each type of white cell in the blood (white cells fight different types of infections)

As these were observational studies, they can only show an association between the different seasons and the immune system. They cannot prove that the season causes the immune system to become more or less active, as there could be other factors (confounders) causing any results seen.

What did the research involve?

The researchers looked at the gene expression of nearly 23,000 genes in one type of white blood cell in samples of blood taken from children and adults at different times of the year.

They measured the number of each type of white cell in blood samples from healthy adults from the UK and The Gambia taken during different months. They then looked at gene expression in samples of fat tissue from women in the UK.

Gene expression of 22,822 genes was analysed in samples from 109 children genetically at risk of developing type 1 diabetes. The samples came from the German BABYDIET study, where babies had a blood test taken every three months until the age of three.

Gene expression was measured from blood samples taken at different times of the year from:

  • 236 adults with type 1 diabetes from the UK
  • adults with asthma but no reported current infection from Australia (26 people), the UK/Ireland (26 people), the US (37 people) and Iceland (29 people)

The researchers then measured the number of each type of white cell in blood samples taken from 7,343 healthy adults from the UK and 4,200 healthy children and adults from The Gambia. They wanted to see if there were seasonal changes in the types of white cells in the blood.

Finally, they looked at gene expression in fat tissue samples taken from 856 women from the UK. They did this to see whether only cells in the immune system showed variation in gene expression with the seasons. 

What were the basic results?

In the first group of children and adults from Germany, the researchers found almost a quarter of all genes (23%, about 5,000 genes) showed seasonal variation in the white blood cells assessed. Some genes were more active in the summer and others in winter.

When looking at all of the population groups they tested, 147 genes were found to show the same seasonal variation in the blood samples taken from the children and adults from the UK/Ireland, Australia and the US.

Again, some genes were more active in the summer and others in the winter. They included a gene encoding a protein that controls the production of anti-inflammatory proteins, which was found to be more active in the summer months.

Other genes involved in promoting inflammation were more active in the winter. Seasonal genes from the samples of Icelandic people did not show the same pattern.

The numbers of different types of white blood cells from the UK samples also showed seasonal variation. Lymphocytes, which mostly fight viral infections, were highest in October and lowest in March. Eosinophils, which have many immune functions, including allergic reactions, were highest in the summer.

There were also seasonal patterns in the numbers of different types of white blood cell from people in The Gambia, but these were different from those in the UK. All white cell types increased during the rainy season.

The researchers also found some genes showed seasonal variation in their activity in fat cells.  

How did the researchers interpret the results?

The researchers say their results indicate gene expression and the composition of blood varies with seasons and geographical locations.

They say the increased gene expression of inflammatory proteins in the European winter may help explain why some autoimmune conditions are more likely to start in the winter, such as type 1 diabetes.

Conclusion

This research found seasonal variations in gene expression in one type of white blood cell. Some genes became more active in the summer months, while others became more active in the winter.

For example, one gene involved in the body's anti-inflammation response was increased during the summer, while some involved in inflammation were increased in the winter.

The researchers also found seasonal variation in the numbers of each type of white cell. These patterns were different in samples taken from people in the UK, compared with people from The Gambia.

Because of the observational nature of each study, it is not possible to say for certain that the time of year caused the results seen. The immune system is affected by a variety of factors, such as current and past infections, stress and exposure to allergens.

For example, it is not surprising that the number of eosinophils was highest in the UK during the summer months, when the allergen pollen (linked to hay fever) is most abundant.

Concurrent illness may have confounded the results of the gene expression studies, as they were performed on adults with either type 1 diabetes or asthma and children at increased risk of type 1 diabetes.

The immune system is extremely intricate, involving a wide range of different genes, proteins and cells that have complex interactions, as shown in this study. Further research into this area is required before any practical application of these results can be found.

The most season-specific health advice we can offer at this point is to wrap up warm in winter, avoid getting sunburnt during the summer, and take the opportunity to safely top up your vitamin D throughout the year.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Winter immune boost may actually cause deaths, study suggests. The Guardian, May 12 2015

Seasons affect 'how genes and immune system work'. BBC News, May 12 2015

Body's immune response is stronger in the summer, Cambridge University study suggests. The Daily Telegraph, May 12 2015

We really ARE healthier in the summer: 'Serious illnesses strike more in winter because our immune systems are weaker'. Mail Online, May 12 2015

Links To Science

Dopico XC, Evangelou M, Ferreira RC, et al. Widespread seasonal gene expression reveals annual differences in human immunity and physiology. Nature Communications. Published online May 12 2015

Categories: NHS Choices

Doctors issue warning about overtreating patients

NHS Choices - Behind the Headlines - Wed, 13/05/2015 - 12:00

"NHS tests and drugs 'do more harm than good'," is the headline in The Telegraph, while The Guardian warns: "Doctors to withhold treatments in campaign against 'too much medicine'."

Both of these alarmist headlines are reactions to a widely reported opinion piece from representatives of the UK's Academy of Medical Royal Colleges (AMRC) in the BMJ about the launch of a campaign to reduce overdiagnosis and overtreatment in the UK.

However, the article does not suggest that doctors should "withhold" effective treatments, or say that all, or most, NHS tests and drugs do more harm than good.  

Who wrote the opinion piece?

The piece was written by a group of doctors representing the UK's AMRC. The academy represents all medical royal colleges in the UK.

The authors include Dr Aseem Malhotra, consultant clinical associate at the AMRC, Dr Richard Lehman, senior research fellow at the University of Oxford, and Professor Sir Muir Gray, the founder of the NHS Choices Behind the Headlines service.

The piece marks the launch of the Choosing Wisely campaign in the UK. The campaign is already underway in the US and Canada. Its purpose is to ask medical organisations to identify five commonly used tests or treatments in their specialities that may be unnecessary, and should be questioned and discussed with their patients.

An example given on the website for the US Choosing Wisely campaign is the routine use of X-rays for first-line management of acute lower back pain. As these types of cases usually resolve by themselves, the use of X-rays could be seen as both a waste of time and money.

The piece is published as an open-access article, meaning it can be read for free online in the peer-reviewed medical journal BMJ. Read the article in full on the BMJ website.

What arguments does the piece make?

The piece argues that some patients are diagnosed with conditions that will never cause symptoms or death (overdiagnosis) and are then treated for these conditions unnecessarily (overtreatment).

In addition, the authors say, some treatments are used with little evidence that they help, or despite being more expensive, complex or lengthy than other acceptable treatments.

They say overdiagnosis and overtreatment are driven by "a culture of 'more is better', where the onus on doctors is to 'do something' at each consultation".

The idea that doing nothing may actually be the best option could be a concept alien to many doctors as a result of medical culture and training.

The article says this culture is caused by factors such as:

  • NHS England's system of payment by results, which rewards doctors for carrying out investigations and providing treatment – though a case could be made that this is more of a problem in private healthcare systems, such as that of the US, where the incentive to provide often expensive investigations and treatments is much higher
  • patient pressures, partly driven by lack of shared information and decision making with patients
  • misunderstanding of health statistics, which means doctors, for example, overestimate the benefits of treatment or screening

Overtreatment matters, the authors say, because it exposes people to the unnecessary risk of side effects and harms, and because it wastes money and resources that could be spent on more appropriate and beneficial treatments.  

What evidence do the authors use to support their argument?

The authors cite various studies and sources to support their arguments. They point to patterns of variation in the use of medical and surgical interventions around the country, which do not relate to a need for these procedures.

They say the National Institute for Health and Care Excellence (NICE) has identified 800 clinical interventions commissioners could stop paying for because the available evidence suggests they do not work or have a poor balance of benefits and risks.

It should be pointed out the BMJ piece did not report specific evidence estimating how common overdiagnosis or overtreatment are in the UK as a whole.

The authors also note that a study of the effect of the GP payment system introduced in 2004 – which provides financial incentives for a range of activities, such as recording blood pressure, testing for diabetes and prescribing statins to people at risk of heart disease – found these tests and treatments are now more common, but this did not seem to have reduced the levels of premature death in the population.

Finally, the piece cites a study that found fewer people chose to have an angioplasty when told that, although it can improve symptoms, it does not reduce the future chances of a heart attack, compared with people who had not been told this explicitly.

It is important to point out the evidence presented in this article did not seem to be gathered via a systematic method (a systematic review). This means there could have been evidence that countered the authors' argument that was overlooked or not included.

The authors acknowledge there is no evidence that the Choosing Wisely campaign has had any effect in reducing the use of low-value medical procedures in the US. 

How accurate is the media reporting?

While the actual articles in the UK press are, in most cases, accurate, some of the headlines are alarmist and not particularly useful.

The Independent gives a good overview of the campaign and sets it into context, with information from NICE and examples from the US.

Several of the newspapers highlight specific tests and treatments that might be targeted by the campaign. For example, the Guardian reports that, "Doctors are to stop giving patients scores of tests and treatments, such as X-rays for back pain and antibiotics for flu, in an unprecedented crackdown".

This is premature – the first step that will be taken by medical organisations such as the royal colleges is to identify the top five lists of treatments or tests they consider to have dubious value, before discussing whether to reduce their use or, in some cases, not use them at all.

Once these have been identified, the piece calls for sharing this information with doctors and patients to help them discuss the benefits and harms of the identified treatments and tests more fully.

Does it make any recommendations?

The AMRC makes four recommendations:

  • Doctors should provide patients with resources to help them better understand the potential harms of medical tests and treatments.
  • Patients should be encouraged to ask whether they really need a test or treatment, what the risks attached to it are, and whether there are simpler, safer options. They should also be encouraged to ask what happens if they do nothing about their condition.
  • Medical schools should teach students better about risk and the overuse of tests and treatments, and organisations responsible for postgraduate training should ensure practising doctors receive the same education.
  • Those responsible for paying hospitals and doctors should consider a different payment system that doesn't encourage overtreatment.

In addition, the authors say the clinical, patient and healthcare organisations participating in the Choosing Wisely campaign are to work together to develop top five lists of tests or interventions of questionable value. They will then promote discussion about these interventions.

For an up-to-date, unbiased and entirely transparent overview of your options for testing or treating a particular condition, go to the NHS Choices Health A-Z.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Doctors urged to stop 'over-treating'. BBC News, May 13 2015

Stop handing out so many drugs, doctors are warned: 'Over-treating' patients is wasting the NHS money and can cause harm. Mail Online, May 13 2015

Millions of patients are being put at risk with pills and surgery they don't really need. Daily Mirror, May 12 2015

NHS tests and drugs 'do more harm than good'. The Daily Telegraph, May 12 2015

'Over-treating' patients is wasteful, unnecessary and can cause them harm, campaign claims. The Independent, May 12 2015

Doctors to withhold treatments in campaign against 'too much medicine'. The Guardian, May 12 2015

Links To Science

Malhotra A, Maughan D, Ansell J, et al. Choosing Wisely in the UK: the Academy of Medical Royal Colleges’ initiative to reduce the harms of too much medicine. BMJ. Published online May 12 2015

Categories: NHS Choices

Hormone oestrogen linked to male breast cancer

NHS Choices - Behind the Headlines - Tue, 12/05/2015 - 14:00

"Men with high oestrogen more likely to develop breast cancer," reports the Daily Telegraph.

This headline is based on an international study looking at potential risk factors for male breast cancer. This cancer is much rarer in men than in women – an estimated 350-400 UK cases per year in men, compared with around 50,000 in women.

It is known that the hormone oestrogen can trigger the development of some types of female breast cancer. Men as well as women produce oestrogen, but at much lower levels, so researchers wanted to see if there was a similar connection.

This study compared blood samples from 101 men who went on to develop breast cancer with samples from 217 men who did not.

It found that men with the highest levels of one form of the hormone oestrogen were about two-and-a-half times more likely to develop the condition than those with the lowest levels.

The study used a good design and approach, and the findings seem plausible, given what is known in women. However, it is still difficult to say whether a raised oestrogen level is directly raising the risk of breast cancer, or if both could be the result of another underlying factor.

Learning more about the causes of male breast cancer might help to find ways to prevent it or find new treatments in the long term.

 

Where did the story come from?

The study was carried out by researchers from the National Cancer Institute in the US, and other research centres in the US, Europe and Canada. It was part of the Male Breast Cancer Pooling Project, and was funded by various international sources, including the National Cancer Institute in the US, and Cancer Research UK and the UK Medical Research Council.

The study was published in the peer-reviewed Journal of Clinical Oncology.

The Telegraph covers this study reasonably well.

 

What kind of research was this?

This was a nested case-control study looking at whether levels of sex hormones are related to a man’s risk of developing breast cancer.

Breast cancer can occur in men, but is very rare. In the UK, about 350 men are reported to be diagnosed with the condition each year. This makes the condition difficult to study, and this is why researchers came together to form an international collaboration, so that they could identify more cases than they would be able to by working alone.

Men and women both produce the sex hormones oestrogen and testosterone – but at different levels. In women, breast cancer is known to be influenced by these hormones. The roles these hormones play in male breast cancer are not known.

A nested case-control study is the most feasible way of looking for possible risk factors for rare diseases. Being "nested" means that information is collected on risk factors in a prospective fashion in a larger group of people, and then people who develop the condition are identified. These people are the "cases" and a matched group of people with similar characteristics, but without the condition, are the "controls".
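The matching step in a nested case-control design can be sketched in code. This is a minimal illustration only – the field names, matching tolerances and control counts below are hypothetical, not taken from the study:

```python
import random

def select_controls(cohort, case, n_controls=2, follow_up_tolerance=1):
    """Pick controls from the same cohort as the case, matched on
    race, year of birth and length of follow-up.
    All field names and tolerances here are hypothetical."""
    matches = [
        p for p in cohort
        if not p["is_case"]
        and p["race"] == case["race"]
        and abs(p["birth_year"] - case["birth_year"]) <= 2
        and abs(p["follow_up_years"] - case["follow_up_years"]) <= follow_up_tolerance
    ]
    random.shuffle(matches)          # avoid any ordering bias in the cohort list
    return matches[:n_controls]
```

Because cases and their controls come from the same cohort and share these characteristics, any difference in stored hormone levels is less likely to be explained by those matched factors.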

 

What did the research involve?

The researchers identified 101 men with breast cancer (cases), and 217 similar men without the condition were selected as controls. They analysed blood samples that had been collected from the men before their diagnosis, and compared hormone levels to see if there were any differences between cases and controls.

The participants were identified through seven cohort studies that recruited men without breast cancer. The men provided blood samples, and these were stored. They were then followed up to see if they developed breast cancer. When a case was identified, the researchers selected up to 40 control men from their cohort who were similar to the affected man in terms of race, year of birth, year they entered the study, and how long they had been followed up for.

The researchers then analysed the stored samples to measure the levels of various forms of the steroid sex hormones oestrogen and testosterone. They compared levels in men who later went on to develop breast cancer and controls, to see if they differed. They took into account factors that might affect results (potential confounders) such as:

  • age when the blood sample was taken
  • race
  • body mass index (BMI)
  • date of the blood sample

 

What were the basic results?

The researchers found that for the male sex hormones (androgens such as testosterone) there were no differences in levels between men who went on to develop breast cancer, and those who did not.

However, men who developed breast cancer did have higher levels of the hormone oestradiol (one form of oestrogen) than controls. Men who had the highest oestradiol levels were about two-and-a-half times more likely to develop the condition than those with the lowest levels (odds ratio (OR) 2.47, 95% confidence interval (CI) 1.10 to 5.58).
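An odds ratio and its 95% confidence interval of this kind are conventionally derived from a 2x2 table of exposed/unexposed cases and controls. A minimal sketch, using made-up counts rather than the study's actual data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    Uses the standard log-odds-ratio normal approximation."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts, for illustration only
or_, lower, upper = odds_ratio_ci(30, 10, 20, 30)
```

Note that a confidence interval whose lower bound sits just above 1.0 (as in the study's 1.10) indicates the result is statistically significant, but only narrowly so.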

 

How did the researchers interpret the results?

The researchers conclude that their results support a role for oestradiol (oestrogen) in the development of breast cancer in men. They report that this is similar to the level of effect seen in postmenopausal women.

 

Conclusion

This study has identified that oestrogen may play a role in the development of breast cancer in men. The study’s strengths include the prospective collection of data, and the relatively large group of cases, given how rare the disease is.

One of the main limitations of this type of study is that other factors may influence results. In this study, this risk was minimised by matching controls to cases within each country, and by adjusting for various confounders in the analyses. Despite this, some unmeasured confounders may still have an effect. For example, breast cancer in a first-degree relative (parent or sibling) was five times more common in men who developed breast cancer, and there was no information on whether any of the men carried a high risk form of the BRCA genes, which increase the risk of cancer.

In addition, only one blood sample appeared to be tested for each man, and at various times before their diagnosis. It is possible that the single sample taken may not be representative of levels over a longer period.

It is difficult to say from this type of study whether oestrogen levels are directly causing an increase in risk. The authors note that it is not clear how higher levels of oestrogen might increase breast cancer risk.

Overall, the findings of this study seem plausible, given what is known about breast cancer in women, and could increase knowledge about possible risk factors for male breast cancer.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Men with high oestrogen more likely to develop breast cancer. The Daily Telegraph, May 11 2015

Links To Science

Brinton LA, Key TJ, Kolonel LN, et al. Prediagnostic Sex Steroid Hormones in Relation to Male Breast Cancer Risk. Journal of Clinical Oncology. Published online May 11 2015

Categories: NHS Choices

Scientists 'amazed' at spread of typhoid 'superbug'

NHS Choices - Behind the Headlines - Tue, 12/05/2015 - 13:00

“Antibiotic-resistant typhoid is spreading across Africa and Asia and poses a major global health threat,” BBC News reports.

Typhoid fever is a bacterial infection. If left untreated, it can lead to potentially fatal complications, such as internal bleeding.

Typhoid fever is uncommon in the UK (there were 33 confirmed UK cases in the first quarter of 2015, most thought to have been contracted abroad), but it is more widespread in countries with poor sanitation and hygiene.

The headline is based on a study that looked at the genetics of the bacteria that causes typhoid fever, Salmonella typhi, to trace their origins.

The study analysed genetic data from almost 2,000 samples of Salmonella typhi collected between 1903 and 2013. It was looking for a strain called H58 that is often antibiotic-resistant. It found that this strain was likely to have originated in South Asia around the early 1990s, and has spread to other countries in Africa and Southeast Asia. It accounted for about 40% of samples collected each year. Over two-thirds of the H58 samples had genes that would allow them to be resistant to antibiotics.

It would be complacent to assume that this is just a problem for people in the developing world, as antibiotic resistance is a major threat facing human health worldwide. Studies such as this help researchers to identify and track how such bacteria spread. This may help them to use existing antibiotics more effectively, by identifying where specific types of resistance are common.

 

Where did the story come from?

The study was carried out by a large number of researchers from international institutions, including the Wellcome Trust Sanger Institute, in the UK. The researchers were also funded by a wide range of international organisations, including the Wellcome Trust and Novartis Vaccines Institute for Global Health.

The study was published in the peer-reviewed journal Nature Genetics.

The news sources cover this story reasonably. Some reporting implies that it is the H58 strain that is killing 200,000 people a year, but this study has not assessed this.

The 200,000 figure seems to be taken from information provided by the US’s Centers for Disease Control and Prevention (CDC), and is an estimate of all types of typhoid fever, not just the H58 strain.

 

What kind of research was this?

This was a genetic study looking at the origins and spread of the H58 strain of Salmonella typhi – the bacteria that causes typhoid fever. This strain is often found to be antibiotic-resistant.

The typhoid bacteria are spread by ingesting food or water contaminated with faecal matter from a person with the disease. This means it is a problem in countries where there is poor sanitation and hygiene. Typhoid fever is uncommon in the UK, and most cases in this country are in people who have travelled to high-risk areas where the infection still occurs, including the Indian subcontinent, South and Southeast Asia, and Africa. The researchers say that 20-30 million cases of typhoid are estimated to occur each year worldwide.

Typhoid fever has been traditionally treated with the antibiotics chloramphenicol, ampicillin and trimethoprim-sulfamethoxazole. Since the 1970s, strains of typhoid have started to emerge that are resistant to these antibiotics (called multidrug-resistant strains). Different antibiotics, such as fluoroquinolones, have been used since the 1990s, but strains resistant to these antibiotics have been identified recently in Asia and Africa. One such strain, H58, is becoming more common, and was the focus of this study.

 

What did the research involve?

The researchers used genetic sequence data from 1,832 samples of Salmonella typhi bacteria collected across the world. They used this data to assess when the H58 strain (which has identifiable genetic characteristics) had arisen and how it had spread.

They first identified which of the samples belonged to the H58 strain, and in what year it was first identified. They also looked at what proportion of the samples collected each year belonged to this strain, to see if it was becoming more common.

Over time, DNA accumulates changes, and the researchers used computer programmes to analyse the genetic changes present in each sample to identify how each strain is likely to be related to the others. By combining this information with the origin and year of each sample, the researchers developed an idea of how the strain had spread.
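The core idea of relatedness analysis can be illustrated very simply: count the genetic differences between each pair of samples, and treat the pairs with the fewest differences as most closely related. The sketch below is a toy version with invented single-letter "SNP profiles" – real phylogeographical analyses use whole-genome variation and far more sophisticated models:

```python
from itertools import combinations

def snp_distance(a, b):
    """Number of positions at which two equal-length SNP profiles differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Hypothetical, made-up SNP profiles for three isolates
isolates = {
    "fiji_1992": "AGTCA",
    "kenya_2010": "AGTCG",
    "uk_1903": "TCTAG",
}

# The most closely related pair is a candidate for a recent transmission link
closest = min(
    combinations(isolates, 2),
    key=lambda pair: snp_distance(isolates[pair[0]], isolates[pair[1]]),
)
```

Combined with the year and place each sample was collected, near-identical profiles found in different countries are what let the researchers infer that bacteria had been carried between those countries by people.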

 

What were the basic results?

The researchers found that nearly half of their samples (47%) belonged to the H58 strain. The first H58 sample was from Fiji in 1992, and the strain continued to appear up to the most recent samples, from 2013. H58 samples were identified from 21 countries in Asia, Africa and Oceania, showing that the strain is now widespread. Overall, 68% of these H58 samples had genes that would allow them to be antibiotic-resistant.

There were some very genetically closely related samples found in different countries, suggesting that there was human transfer of the bacteria between these countries. Their genetic analyses suggested that the strain was initially located in South Asia, and then spread to Southeast Asia, western Asia and East Africa, as well as Fiji.

There was evidence of multiple transfers of the strain from Asia to Africa. The H58 strain accounted for 63% of samples from eastern and southern Africa. The analysis suggested that there had been a recent wave of transmission of the H58 strain from Kenya to Tanzania, and on to Malawi and South Africa. This had not previously been reported, and the researchers described it as an “ongoing epidemic of H58 typhoid across countries in eastern and southern Africa”.

Multidrug resistance was reported to be common among H58 samples from Southeast Asia in the 1990s and, more recently, samples from this region have acquired mutations which have made them less susceptible to fluoroquinolones. These resistant strains have become more common in the region, and the researchers suggested this is because the use of fluoroquinolones to treat typhoid fever over this period gave them a survival advantage.

In South Asia, there are lower rates of multidrug resistance in recent samples than in Southeast Asia. In Africa, most samples showed multidrug resistance to the older antibiotics, but not fluoroquinolones, as these are not frequently used there.

 

How did the researchers interpret the results?

The researchers say that their analysis is the first of its kind for the H58 typhoid strain, and that the spread of this strain “requires urgent international attention”. They say that their study “highlights the need for longstanding routine surveillance to capture epidemics and monitor changes in bacterial populations as a means to facilitate public health measures, such as the use of effective antimicrobials and the introduction of vaccine programs, to reduce the vast and neglected morbidity and mortality caused by typhoid”.

 

Conclusion

This study has provided information about the spread of a strain of typhoid called H58, which is commonly antibiotic-resistant, by looking at the genetics of samples collected between 1903 and 2013. It has shown that the strain was likely to have arisen in South Asia and then spread to Southeast Asia and Africa. The strain showed different patterns of antibiotic resistance in different regions – likely driven by different patterns in the use of antibiotics.

While this study has not estimated the number of cases or deaths worldwide attributable to this strain specifically, there are reported to be 20-30 million cases of typhoid fever globally each year.

The spread of antibiotic resistance is a major threat to human health, and studies like this can help us monitor resistant strains and target treatment more effectively.

Read more about the battle against antibacterial resistance and how we can all help do our bit.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Drug-resistant typhoid 'concerning'. BBC News, May 11 2015

Antibiotic-resistant typhoid spreading in silent epidemic, says study. The Guardian, May 11 2015

Deadly typhoid superbug poses global threat after 'rapidly spreading' through Asia and Africa, experts warn. Mail Online, May 11 2015

Links To Science

Wong VK, Baker S, Pickard DJ, et al. Phylogeographical analysis of the dominant multidrug-resistant H58 clade of Salmonella Typhi identifies inter- and intracontinental transmission events. Nature Genetics. Published online May 11 2015

Categories: NHS Choices
