NHS Choices

Ebola could reach UK, but outbreak risk is low

NHS Choices - Behind the Headlines - Tue, 07/10/2014 - 12:12

“Global threat of Ebola: From the US to China, scientists plot spread of deadly disease across the world from its West African hotbed,” reports the Mail Online. This is a terrifyingly apocalyptic-sounding headline, yet the real story about Ebola is that, while frightening and deadly, it still poses a very low risk to people in the UK.

The Ebola virus causes a serious, usually fatal, disease, for which there are no licensed treatments or vaccines.

The ongoing outbreak of Ebola virus started in the West African country of Guinea and was first reported in December 2013. This Ebola outbreak is the largest ever observed, both geographically and in terms of the number of people affected.

A study published on September 2 2014 modelled how the virus may spread. It found that the short-term probability of international spread outside the African region was small, but not negligible. The short-term windows modelled were three and six weeks ahead, corresponding to September 1 and September 22 2014. The study found that the country outside the African region with the highest risk of importation was the UK.

The original forecasts have since been revised and will have to be further updated after a Spanish nurse contracted Ebola. This happened after she treated two Spanish missionaries, who died of the disease after being flown back from Africa. This nurse is the first person known to have contracted Ebola outside of West Africa.

 

Where did the story come from?

The study was carried out by researchers from Northeastern University, the Fred Hutchinson Cancer Research Center, and the University of Florida, all in the US, and the Institute for Scientific Interchange in Italy. It was funded by the Defense Threat Reduction Agency and MIDAS-National Institute of General Medical Sciences.

The study was published in the peer-reviewed journal PLOS Currents Outbreaks on September 2 2014. This is an open access journal, which is freely available to all.

The researchers state that the results of their model may change as more information becomes available, and they are publishing new data, projections and analysis online.

The media has reported the results of the updated projections published online by the researchers. It’s worth bearing in mind that, despite the very worrying headlines and the deadliness of Ebola, the risk to anyone in the UK is very low.

 

What kind of research was this?

This was a modelling study that aimed to forecast the local transmission of the Ebola virus in West Africa, and the probability of international spread if the containment measures are not successful at stopping the outbreak.

Like the weather forecast, modelling studies have to contain assumptions and approximations, and although they are useful tools to help predict what might happen, they are not always correct. The assumptions and approximations in this model are being updated by the researchers as new information becomes available.

 

What did the research involve?

The researchers used computer simulations to model the transmission of the Ebola virus.

 

What were the basic results?

The researchers estimate that each person with Ebola in West Africa will, on average, infect 1.5 to 2 other people.
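
To illustrate why a reproduction number in this range matters, here is a minimal sketch of a toy branching-process simulation, assuming each case infects a Poisson-distributed number of new people. It is purely illustrative; the study’s actual model, which also estimated the probability of international spread, is far more sophisticated.

```python
import numpy as np

# Toy branching process, illustrative only (NOT the study's published model):
# each case infects a Poisson-distributed number of new people with mean R,
# where R is the average number of new infections per case (estimated here
# at roughly 1.5 to 2).
def mean_outbreak_size(r, generations=6, trials=10_000, seed=0):
    """Average cumulative cases after a fixed number of infection generations."""
    rng = np.random.default_rng(seed)
    grand_total = 0
    for _ in range(trials):
        cases, total = 1, 1
        for _ in range(generations):
            if cases == 0:
                break
            cases = int(rng.poisson(r, size=cases).sum())
            total += cases
        grand_total += total
    return grand_total / trials

print(mean_outbreak_size(1.5))  # expectation: 1.5**0 + ... + 1.5**6, about 32
print(mean_outbreak_size(2.0))  # doubling each generation: 2**7 - 1 = 127
```

With R above 1, each generation of cases is on average larger than the last, which is why containment efforts aim to push the average number of new infections per case below 1.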

In the short term (three and six weeks ahead, corresponding to September 1 and September 22 2014), the probability of international spread outside the African region is small, but not negligible. The country outside the African region with the highest risk of importation in the short term is the UK.

The outbreak is more likely to spread to other African countries, which will increase the risk of international spread over a longer time period.

 

How did the researchers interpret the results?

The researchers conclude that their modelling has shown that, “the risk of international spread of the Ebola virus is still moderate for most countries. The current analysis, however, shows that if the outbreak is not contained, the probability of international spread is going to increase consistently, especially if other countries are affected and are not able to contain the epidemic”.

They go on to stress that the current model contains assumptions and approximations that may need to be modified as more information becomes available.

 

Conclusion

This modelling study found that the short-term probability of international spread outside the African region is small, but not negligible. The country outside the African region with the highest risk of importation is the UK.

The assumptions and approximations in this model are being updated by the researchers as new information becomes available, and these forecasts have since been revised.

If you are travelling abroad and are worried about infectious diseases, you may want to check out the country-by-country guide provided by the National Travel Health Network and Centre.

Health professionals should stay abreast of the latest Ebola advice from Public Health England.

 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Global threat of Ebola: From the US to China, scientists plot spread of deadly disease across the world from its West African hotbed. Mail Online, October 5 2014

Ebola outbreak: Virus could reach UK and France by the end of the month, scientists claim. The Independent, October 5 2014

Ebola outbreak: Britain has 50% chance of importing deadly virus within the next three weeks. Daily Mirror, October 5 2014

Deadly Ebola virus could reach Britain in THREE WEEKS say scientists. Daily Express, October 6 2014

Links To Science

Gomes MFC, et al. Assessing the International Spreading Risk Associated with the 2014 West African Ebola Outbreak. PLOS Currents Outbreaks. September 2 2014.


Cannabis labelled 'harmful and as addictive as heroin'

NHS Choices - Behind the Headlines - Tue, 07/10/2014 - 11:29

"Cannabis: the terrible truth," is today's Daily Mail front page splash story. The paper cites the risks posed by cannabis – including a doubling of the risk of schizophrenia – based on research the paper says has "demolished the argument that the drug is safe".

The "terrible truth" is we still don't know enough about the safety and harms of cannabis because it's legally and ethically a difficult area to research. However, we can be pretty certain you can't take a fatal overdose from recreational cannabis use.

The headlines in the Mail and several other papers were prompted by the publication of a narrative review of cannabis research by Professor Wayne Hall, an expert adviser on addiction to the World Health Organization.

Professor Hall concludes that cannabis research since 1993 has shown its use is associated with several adverse health effects, including a doubling of the risk of crashing if driving while "cannabis impaired". He also found that around one in 10 regular cannabis users develop dependence.

He also reports that regular cannabis use in adolescence was strongly linked with using other illicit drugs, as well as an increased risk of cognitive impairment and psychoses.

In addition, cannabis smoking probably increases cardiovascular risk in middle-aged adults with pre-existing heart disease, but its effects on respiratory function and respiratory cancer remain unclear as most cannabis smokers have smoked, or still smoke, tobacco.

But as this review was not systematic, it is impossible to tell if all relevant studies have been included. And all these conclusions were based on the results of observational studies, which means we can't tell if cannabis caused all the effects.

 

Where did the story come from?

The study was carried out by a single researcher affiliated with the University of Queensland Centre for Youth Substance Abuse Research, the University of Queensland Centre for Clinical Research and the National Drug and Alcohol Research Centre in Australia, and the National Addiction Centre at King's College London.

It was funded by the National Health and Medical Research Council of Australia and was published in the peer-reviewed journal, Addiction.

Despite the somewhat hyped headlines, the media coverage of this study was generally accurate, but did not point out the limitations of the research. Indeed, the Mail's description of the study as "definitive" is rather at odds with the nature of the research.

 

What kind of research was this?

This was a narrative review that aimed to examine the changes in the available evidence on the adverse health effects of cannabis since 1993.

It was not clear how the author identified the studies used as a basis for the review. It may be the case there are other studies showing no effect or harm that have not been included in the review.

It is also not clear how the author compiled the results of the research to come up with strengths of effect.

A systematic review is required to assess the adverse health effects of cannabis use.

Also, although the author applied rules to the interpretation of the research, the conclusions are based on the results of observational studies.

It is difficult to conclude from these types of studies that cannabis causes the effects seen, as there are still potentially differences between people who use cannabis and people who don't that could explain the differences seen.

 

What did the research involve?

The author looked at studies published over a 20-year period since 1993 (when a previous review was conducted) to see if there was evidence that cannabis caused adverse health effects. To do this, Professor Hall looked at whether:

  • there were case control and cohort studies that showed an association between cannabis use and a health outcome
  • cannabis use preceded (started before) the outcome
  • the association remained after controlling for potential confounding variables
  • there was clinical and experimental evidence that supported the biological plausibility of a causal relationship

 

What were the basic results?

The author listed the conclusions that he believes can now reasonably be drawn in the light of evidence that has accrued over the past 20 years.

Adverse effects of acute use

Professor Hall concluded that:

  • The risk of a fatal overdose is considered to be extremely small. The estimated fatal dose in humans is between 15 and 70g, far greater than a heavy user is reported to be able to consume in one day. There have also been no reports of fatal overdose in the literature.
  • Driving while cannabis impaired approximately doubles car crash risk.
  • Maternal cannabis use during pregnancy modestly reduces birthweight.

Adverse effects of chronic use

Professor Hall concluded that:

  • Around one in 10 regular cannabis users develop dependence, and this rises to one in six among people who start in adolescence.
  • Regular (daily or near daily) cannabis use in adolescence approximately doubles the risks of early school leaving and cognitive impairment and psychoses in adulthood.
  • Regular cannabis use in adolescence is also associated strongly with the use of other illicit drugs.
  • Cannabis smoking may increase the risk of cardiovascular events such as angina or heart attack in middle-aged and older adults with pre-existing cardiovascular disease. Some isolated reports suggest younger people not yet diagnosed with cardiovascular disease may also be at risk of cardiovascular events.
  • The effects of cannabis on respiratory function and respiratory cancer remain unclear because most cannabis smokers have smoked, or still smoke, tobacco.

 

How did the researcher interpret the results?

Professor Hall concluded that: "The epidemiological literature in the past 20 years shows that cannabis use increases the risk of accidents and can produce dependence, and that there are consistent associations between regular cannabis use and poor psychosocial outcomes and mental health in adulthood."

 

Conclusion

This narrative review has concluded that cannabis research in the past 20 years has shown that cannabis use is associated with a number of adverse health effects.

It also found driving while cannabis impaired approximately doubles car crash risk and around one in 10 regular cannabis users develop dependence.

Regular cannabis use in adolescence approximately doubles the risks of early school leaving and cognitive impairment and psychoses in adulthood, according to the review.

Regular cannabis use in adolescence is also associated strongly with the use of other illicit drugs.

In addition, cannabis use probably increases cardiovascular risk in middle-aged adults with pre-existing heart disease, but its effects on respiratory function and respiratory cancer remain unclear because most cannabis smokers have smoked, or still smoke, tobacco.

However, as this was not a systematic review it is impossible for readers to know whether all relevant studies have been included.

All the review's conclusions were based on the results of observational studies. So while it seems probable that cannabis use increases the risk of some adverse outcomes, it is also possible there are differences between cannabis smokers and non-smokers that explain some of the differences seen.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Cannabis as addictive as heroin, major new study finds. The Daily Telegraph, October 7 2014

Cannabis can damage you, warns study. The Times, October 7 2014

The terrible truth about cannabis: Expert's devastating 20-year study finally demolishes claims that smoking pot is harmless. Daily Mail, October 7 2014

Links To Science

Hall W. What has research over the past two decades revealed about the adverse health effects of recreational cannabis use? Addiction. Published online October 7 2014


Eating with a fat friend 'makes you eat more'

NHS Choices - Behind the Headlines - Mon, 06/10/2014 - 13:00

“Sitting next to overweight people makes you more likely to gorge on unhealthy food,” the Daily Express reports.

The paper reports on a small-scale research experiment showing that the presence of an overweight woman (an actress in a fat suit) near a buffet made student volunteers choose and eat a larger amount of unhealthy food (spaghetti) than when she was a healthy weight (without the fat suit). This effect was not influenced by whether the actress chose to eat healthily or unhealthily herself, something the study also looked at.

The researchers’ explanation of this was, “that when eating with or near an overweight person, you may be less likely to adhere to your own health goals.”

The study was not wholly convincing and does not prove this phenomenon exists in the general population, where food and social interactions may be more complex and nuanced. Food choice was artificially restricted to just two foods: spaghetti and salad – not the best buffet going. The same results may not have been found if participants were given a more realistic range of food choices.

It is difficult to see what practical implications the study "brings to the table", other than to be conscious of your own food choices, regardless of the social situation.

This may be a poignant reminder to those looking to maintain a healthy weight that when it comes to “all-you-can-eat” situations, it’s probably best to regard it as a special offer, not a personal challenge.

 

Where did the story come from?

The study was carried out by researchers from Southern Illinois University and Cornell University (US). No funding source was mentioned in the publication. The study was published in the peer-reviewed science journal Appetite.

The Express generally covered the story accurately, though the headlines indulged in a bit of “fat shaming”, as eating a small extra amount of spaghetti was morphed into “greed”.

 

What kind of research was this?

This was a randomised, single-blind study in humans looking at the influence of an overweight eating companion on healthy and unhealthy eating behaviour.

The researchers indicated that many social factors influence food intake, such as the presence or absence of eating companions, as well as the body type of these companions.

This study aimed to investigate the effect of:

  • the presence of an overweight person on what other people chose to eat
  • whether this was influenced by what food (healthy vs. unhealthy) the overweight person served themselves

 

What did the research involve?

The research team recruited 82 undergraduate college students (average age 19.5 years; 40 women and 42 men) to eat a buffet lunch restricted to spaghetti with meat sauce and/or salad. They also enlisted an actress to wear a suit that added three-and-a-half stone (50 pounds) to her weight. Without the “fat suit” she was a healthy weight, but donning the fat suit put her at the border of the overweight/obese categories (with a BMI of 29.3).

Each of the 82 participants was randomly assigned to one of four scenarios:

  • the actress served herself healthily (more salad and less pasta) while wearing the fat suit
  • she served herself the same healthy meal without the fat suit
  • she served herself less healthily (more pasta and less salad) while wearing the fat suit
  • she served herself the same less healthy meal without the fat suit

Participants in each scenario viewed the actress serving herself and then served themselves pasta and salad.

The actress was not known to the participants, but drew attention to what she was eating by asking out loud “do I need to use separate plates for pasta and salad?” and dropping her fork and asking for a new one. She also sat in full view of the buffet queue.

The first part of the study looked at the effect of the fat suit. The second part of the study looked at the influence on the participants’ food choice when the actress served herself either a small amount of pasta and a large amount of salad (described as the “healthy eating condition”), or a large amount of pasta and a small amount of salad (“unhealthy eating condition”).

Participants were asked to report the number of hours and minutes since they had last eaten, to control for their hunger prior to the experiment.

The participants knew that the study aimed to examine eating behaviour, including serving size and intake, but they were blinded to the scenario allocation of the actress. When asked, no participants revealed suspicion about the purpose of the study.

 

What were the basic results?

There were two main findings:

  • When the actress wore the fat suit, appearing overweight, the other participants served and ate more pasta regardless of whether she served herself mostly pasta or mostly salad, compared to when she was of normal weight.
  • When she wore the fat suit and served herself more salad, the other participants actually served and ate less salad.

This meant that, regardless of whether the actress served healthy or unhealthy food, participants served and ate a larger amount of pasta (unhealthy food) when she appeared overweight than when she appeared a healthy weight.

 

How did the researchers interpret the results?

The research team said their results “support the ‘lower health commitment’ hypothesis, which predicted that participants would serve and eat a larger amount of pasta when eating with an overweight person, probably because the health commitment goal was less activated.” They added that their, “results did not support the ‘avoiding stigma’ hypothesis, which predicted that participants would serve and eat a smaller amount of pasta when an overweight confederate served herself unhealthily, to avoid association with the stigmatised group”.

 

Conclusion

This small-scale research experiment found that the presence of an overweight woman (an actress in a fat suit) near a buffet made student volunteers choose a larger amount of unhealthy food than when she was a healthy weight (without the suit). This effect was not influenced by whether the actress chose to eat healthily or unhealthily herself.

These findings suggest that people may serve and eat larger portions of unhealthy foods and smaller portions of healthy foods when eating with or near an overweight person. The researchers did not test out any reasons for this, but speculated this might be, “because they are less in tune with their own health goals”. They said this phenomenon might be easy to avoid by “assessing your level of hunger before going to the restaurant and planning your meal accordingly”.

However, the study was not wholly convincing and doesn’t prove this phenomenon exists in the general population, where food and social interactions may be more complex. For example, the study was restricted to a relatively small number of young American adults (average age of 19.5), whose behaviour may not be representative of older people, children, or other countries and cultures.

Similarly, the study investigated a single eating scenario, a buffet, where food choice was artificially restricted to only two foods to help the study design. The same results may not have been found in other eating scenarios, or if participants were given a more realistic range of food choices at a buffet. In addition, they did not measure how much cheese or salad dressing was used, which could have a substantial impact on whether the meal was healthy or unhealthy.

The study participants were also aware that their serving and intake levels were being recorded, which may have influenced the results.

Anyone who has been to an all-you-can-eat buffet and over-indulged can probably recognise how the social context of a meal can influence the amount of food people eat. This study tentatively suggests that a further factor, the body type of eating companions, may also have an influence. This phenomenon is likely to be the subject of future research.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Having fat friends makes you greedy. Daily Express, October 4 2014

Fat friends DO make you eat more: Study finds we're more likely to ditch healthy eating when dining with overweight people. Mail Online, September 22 2014

Links To Science

Shimizu M, Johnson K, Wansink B. In good company. The effect of an eating companion's appearance on food intake. Appetite. Published online September 16 2014


Green tea compound may improve cancer drugs

NHS Choices - Behind the Headlines - Mon, 06/10/2014 - 12:30

"Green tea could helps [sic] scientists develop new cancer fighting drugs," the Mail Online reports. But before you rush out to the shops, in no way does this study suggest green tea can fight cancer.

Instead, research has found a compound in green tea – the catchily named Epigallocatechin-3-O-gallate (EGCG) – may help improve the effectiveness of anti-cancer drugs such as Herceptin, used in the treatment of breast and stomach cancer.

This study used nanotechnology techniques to develop a new way of packaging and carrying protein drugs by combining them with EGCG.

Scientists formed a complex compound consisting of by-products of EGCG and the protein cancer drug Herceptin.

Tests in the laboratory and in mice indicated the compound might have better anti-cancer properties than standard Herceptin treatment.

This is encouraging research and may lead to improvements in delivery mechanisms for protein drugs further down the line. However, it is at a very early stage of development, so new treatments are not guaranteed.

The results from the laboratory and mice studies need to be confirmed by other research groups before the team can consider testing potential treatments in humans.

Only then will they be able to assess whether such a drug delivery system could benefit people and in what circumstances. These studies will have to pay special attention to potential side effects of the drugs.

Overall, this new nanotechnology might prove useful in several years' time, but its immediate impact is minimal.

 

Where did the story come from?

The study was led by researchers from the Institute of Bioengineering and Nanotechnology, Singapore, and Beth Israel Deaconess Medical Center and Harvard Medical School in the US.

It was funded by the Institute of Bioengineering and Nanotechnology and the US National Institutes of Health.

The study was published in the peer-reviewed journal, Nature Nanotechnology.

The Mail Online's coverage was broadly accurate.

 

What kind of research was this?

This laboratory-based bioengineering study developed new drug carrier technology that was then tested in mice.

Most drugs require carrier substances to ensure the active drug ingredients reach the appropriate part of the body and are released at the appropriate time.

Carriers are usually inert and are broken down in the body over time. But high quantities of some carriers can produce toxicity in the body, leading to troublesome side effects.

This study aimed to improve current drug carriers by developing a carrier that was easily metabolised in the body, and may even do some good by itself.

The researchers said green tea extract was used because previous research indicated it had anti-cancer effects, as well as protective effects on the nervous system and DNA.

Many new technologies are tested in mice first as – despite the difference in size – they have a similar biology to humans. However, some things work differently in mice and men, so any positive findings in mice would not automatically apply to humans.

 

What did the research involve?

The research involved developing a new biological compound to carry cancer drugs based on derivatives (by-products) of one of the main ingredients in green tea, called Epigallocatechin-3-O-gallate (EGCG).

The research team joined EGCG derivatives with various anti-cancer proteins to form what are known as nanocomplexes – intricately engineered combinations of proteins.

One of the nanocomplexes comprised the anti-cancer protein Herceptin bundled with an EGCG derivative, forming a core, and a separate EGCG-derived shell around the outside.

They injected this into mice with cancer to see if the Herceptin-EGCG carrier nanocomplex was more or less effective at fighting tumour cells than "free" Herceptin alone.

 

What were the basic results?

The team found they were able to make stable nanocomplexes incorporating anti-cancer proteins with EGCG derivatives.

When the Herceptin-EGCG complex was injected into mice with cancer, it was better at targeting tumour cells (it had better "selectivity") and reducing their growth, and lasted longer in the blood than free Herceptin.

This complex also displayed better anti-cancer properties when tested on human breast cancer cells in the laboratory.

The researchers also coupled EGCG derivatives with another protein called interferon α-2a, which is used in combination with chemotherapy and radiation as a cancer treatment. This nanocomplex was better at limiting cancer cell growth than free interferon α-2a.

 

How did the researchers interpret the results?

The researchers stated they developed and characterised a new green tea-based mechanism for protein drug delivery where the carrier itself displays anti-cancer effects.

They said the nanocomplex effectively protected the protein drugs against many obstacles from the point of administration to the required delivery sites.

They concluded that, "The combined therapeutic effects of the green tea-based carrier and the protein drug showed greater anti-cancer effect than the free protein."

 

Conclusion

This study developed a new way of packaging and carrying protein drugs by combining them with a green tea extract called Epigallocatechin-3-O-gallate (EGCG), which itself may have anti-cancer properties.

They formed a complex between derivatives of EGCG and the protein cancer drug Herceptin. Tests in the laboratory and in mice indicated it might have better anti-cancer properties than non-complexed free Herceptin.

This is encouraging research and may lead to improvements in delivery mechanisms for protein drugs further down the line.

But this research remains at a very early stage of development. The results from the laboratory and mice studies need to be confirmed by other research groups before the team can consider testing potential treatments in humans.

Only then will they be able to assess whether such a drug delivery system could benefit people. These further studies will have to pay special attention to potential side effects of the drugs.

Green tea extracts are often the subject of news headlines, often in the very early stages of drug development.

Other claims made about green tea include how it can help prevent prostate cancer, reduce stroke risk, boost the ability of the brain, and help ward off Alzheimer's disease.

Some people have even gone as far as claiming the beverage is a "superfood". Many of these claims are not backed by robust evidence, however.

Overall, this new nanotechnology might prove useful in many years' time, but its immediate impact is minimal.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Green tea could helps scientists develop new cancer fighting drugs. Daily Mail, October 5 2014

Links To Science

Chung JE, Tan S, Gao SJ, et al. Self-assembled micellar nanocomplexes comprising green tea catechin derivatives and protein drugs for cancer therapy. Nature Nanotechnology. Published online October 5 2014


Study finds clue to why colds trigger asthma

NHS Choices - Behind the Headlines - Fri, 03/10/2014 - 12:20

The Mail Online reports how "a simple cold can set off a deadly asthma attack: Scientists discover chemical can send the immune system into overdrive".

It is well known that in people with asthma, respiratory infections such as colds or flu can trigger asthma symptoms, and, in more serious cases, an asthma attack.

This study involved experiments in mice and humans to see exactly why this might be the case. In particular, the researchers wanted to find out how inflammatory processes might play a part.

They found in people with asthma, infection with the common cold virus (rhinovirus) causes an increase in levels of an inflammatory protein called IL-25 in the cells lining the airways.

This sets off a range of inflammatory processes, such as narrowing of the airways, which can cause asthma symptoms.

As the researchers suggest, the findings indicate using a drug to block IL-25 could prevent people with asthma getting worse symptoms if they catch a cold.

This research is in its early stages and further studies will now be needed to develop an IL-25-blocking drug for testing.

 

Where did the story come from?

The study was carried out by researchers from Imperial College London.

It was funded by the Medical Research Council, Asthma UK, the National Institute for Health Research, Imperial Biomedical Research Centre and the Novartis Institute for Biomedical Research.

The study was published in the peer-reviewed journal Science Translational Medicine.

The Mail Online's reporting of the study was accurate.

 

What kind of research was this?

This was laboratory, human and animal research that aimed to investigate the role a protein called interleukin-25 (IL-25) plays in triggering worsening symptoms in people with asthma when they catch a cold.

Viral infections such as the common cold (mostly caused by rhinoviruses) are known to be a trigger for worsening asthma symptoms or causing asthma attacks.

IL-25 is a protein involved in inflammatory and autoimmune processes (where the immune system attacks healthy tissue) in the body, and has previously been identified as playing a role in asthma.

This study used laboratory experiments and studies in mice and humans. The results showed how infection with rhinovirus can increase levels of IL-25 and other inflammatory molecules, particularly in people with asthma.

 

What did the research involve?

The researchers first studied samples of the cells lining the airways in the lungs (the bronchi) obtained from 10 people with asthma and 10 people without asthma.

They looked at the levels of IL-25 and then looked at what happened when these cells were infected with rhinovirus.

They then followed up these laboratory results with studies in mice and humans. The researchers infected 39 people with rhinovirus – 28 people with asthma and 11 people without asthma – to see what effect this had on the levels of IL-25 in nasal secretions.

They then studied mice to look at the exact mechanisms by which rhinovirus may lead to increased IL-25 and so trigger asthma symptoms.

A mouse model of asthma was used in these experiments. In this model, the mice were sensitised with an allergen once daily for three days via the nose, while some were given a saline control.

The allergen used was RV-OVA, which causes allergic inflammation in the airways similar to that which occurs in people with asthma.

After this sensitisation, some were infected with rhinovirus, while some were not. The researchers then examined the levels of IL-25 and inflammatory cells in the airways. 

The researchers followed this up by investigating the effects of an IL-25-blocking antibody in mice.

 

What were the basic results?

In the first laboratory study, the researchers found the cells lining the airways in people with and without asthma were no different in how much IL-25 they produced when they were not infected with rhinovirus.

After eight hours of exposure to rhinovirus, infected cells showed tenfold greater levels of IL-25 than those not infected. Using allergy tests, the researchers found increased IL-25 expression was associated with increased sensitivity to various allergens.

Their next experiments in people both with and without asthma showed there was no significant difference in the level of IL-25 in nasal secretions before rhinovirus infection.

Up to 10 days after infection with rhinovirus, 61% of those with asthma (17 of 28) demonstrated a significant increase in their IL-25 levels.

People without asthma also had a significant increase in IL-25 secretion, but peak levels during infection were higher in people with asthma.

The researchers found the "asthmatic mice" (whose airways had been sensitised by the allergen RV-OVA) had higher IL-25 levels, whether subsequently infected with rhinovirus or not, compared with the "non-asthmatic" mice.

When "allergic" mice were infected with rhinovirus, they had IL-25 levels 28 times higher than asthmatic mice who were not infected. Infection of non-asthmatic mice with rhinovirus also caused an increase in IL-25 levels compared with non-asthmatic, non-infected mice, but at much lower levels.

Further examination of lung tissue from the mice demonstrated the inflammatory response that was occurring in association with IL-25. Using an IL-25-blocking antibody blocked the inflammatory response in the mice's lungs that occurred after rhinovirus infection. 

 

How did the researchers interpret the results?

The researchers concluded that rhinovirus can induce IL-25 production in the lining of the airways, and that this is more pronounced in people with asthma than healthy controls.

In a mouse model of allergic asthma, rhinovirus infection induced IL-25 production, and blocking IL-25 could reduce rhinovirus-induced lung inflammation.

 

Conclusion

It is well known that respiratory infections such as colds or flu can trigger asthma symptoms in those who have the condition.

This study demonstrates how, in people with asthma, infection with the common cold virus (rhinovirus) causes an increase in levels of the inflammatory protein IL-25 in the cells lining the airways. This sets off an inflammatory process that could be causing the asthma symptoms.

As the researchers suggest, the findings indicate that using a drug to block IL-25 could be a promising way to try to prevent people with asthma getting worse symptoms if they catch a cold.

The research is in its early stages, and further studies will now be needed to develop an IL-25-blocking treatment that shows enough promise to be tested in human trials.

While there is no guaranteed way to prevent catching a cold, people can help prevent the spread of colds by always coughing or sneezing into a tissue, binning it and washing their hands.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

How a simple cold can set off a deadly asthma attack: Scientists discover chemical can send the immune system into overdrive. Mail Online, October 2 2014

Links To Science

Beale J, Jayaraman A, Jackson DJ, et al. Rhinovirus-induced IL-25 in asthma exacerbation drives type 2 immunity and allergic pulmonary inflammation. Science Translational Medicine. Published online October 1 2014


Moderate regular drinking may 'damage sperm'

NHS Choices - Behind the Headlines - Fri, 03/10/2014 - 11:30

“Just five alcoholic drinks a week could reduce sperm quality,” The Guardian reports. A study involving Danish military recruits found that even moderate drinking, if done regularly, was associated with a drop in quality.

The study involved more than 1,200 young Danish military recruits (with an average age of 19), and assessed their semen quality, as well as questioning them about their alcohol intake in the week preceding the sample and their binge drinking in the past 30 days.

Overall, there was no clear association between semen quality and alcohol intake. However, in analyses restricted to the 45% of men who said this was a typical week for them, there was a dose-response relationship, with higher alcohol intakes associated with lower sperm quality.

Men who did not drink any alcohol at all also had impaired sperm quality. They could have had health issues that impacted on their sperm quality and also meant they needed to avoid drinking, although this is pure speculation.

As always, there are limitations. Importantly, as the study assessed alcohol intake and sperm quality at the same time, it cannot prove cause and effect. Various other factors could also influence the relationship.

There is also the possibility of inaccurate recall of alcohol units consumed, although we would suspect that young men tend to underestimate rather than overestimate how much they drink.

We also don’t know whether any of the measures of reduced sperm quality observed would actually have any effect on fertility.

Nevertheless, the detrimental effects of high alcohol intake in various areas of health are well known, so laying off the drink for a few days a week certainly wouldn’t hurt.

 

Where did the story come from?

The study was carried out by researchers from the University of Southern Denmark, the University of Copenhagen and the Icahn School of Medicine at Mount Sinai in New York. It was funded by the Danish Council for Strategic Research, Rigshospitalet, the European Union, DEER, the Danish Ministry of Health, the Danish Environmental Protection Agency, and the Kirsten and Freddy Johansens Foundation.

The study was published in the peer-reviewed journal BMJ Open, which is an open access journal, meaning the study is free to read online.

The UK media's reporting of the study is accurate and included some useful observations from independent fertility experts. However, the reporting doesn’t make clear that, overall, there was no clear association between semen quality and alcohol intake. An association was only seen in men who reported habitually drinking five units or more.

 

What kind of research was this?

This was a cross sectional study which aimed to look at the association between alcohol consumption, semen quality and reproductive hormones.

As the researchers say, several research studies have associated excessive alcohol consumption and binge drinking (defined in the paper as five units or more in a single day; roughly the same as two standard cans of UK premium 5% abv lager) with adverse health outcomes. Some studies have reported an association between alcohol consumption and semen quality, while others have not.

However, few studies have specifically examined the effect of binge drinking.

The main limitation with this type of study is that being cross sectional, it cannot show that alcohol consumption causes poor semen quality. It cannot show that the men previously had higher-quality semen and that they then developed these alcohol consumption patterns and had this effect. There could be other factors (confounders) that explain the association seen.

For example, the results of this study could also be used to suggest that men with poor semen quality are more likely to drink.

A more suitable study design would be a cohort study, where men are followed over many years, but these are both expensive and time-consuming to carry out.

 

What did the research involve?

This Danish study used a specific population of 1,221 men (average age of 19) recruited to compulsory military service between January 2008 and April 2012. At recruitment, they underwent a compulsory physical examination and were invited to have an assessment of semen quality. The semen samples were analysed for volume, sperm concentration, total sperm count, and the percentages of mobile and morphologically normal sperm. Blood samples were also tested for levels of sex hormones such as testosterone.

All the men completed a questionnaire which, as well as collecting medical information, also included assessment of alcohol intake. They completed a diary reporting their daily intake of red and white wine, beer, strong alcoholic drinks, alcopops and others during the week prior to the semen and blood samples. They were asked to give their intake in units, being told that one standard beer, one glass of wine or 40ml of spirits contained 1 unit of alcohol (≈12g of ethanol), one strong beer or one alcopop contained 1.5 units of alcohol, and one bottle of wine contained 6 units.

Alcohol intake was calculated as the sum of daily reported unit intakes within that week. They were asked whether the intake in that week was typical for them (habitual intake). They were also asked how many times during the past 30 days they had been drunk or had consumed more than five units of alcohol on one occasion, which was defined as binging.
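
To make the unit arithmetic concrete, here is a minimal sketch of how a week’s diary might be totted up. The drink names and diary format are illustrative assumptions, not the questionnaire’s actual fields.

```python
# Unit values as described to participants (1 unit is roughly 12g of ethanol).
# The drink names below are illustrative, not the questionnaire's own fields.
UNITS_PER_DRINK = {
    "standard_beer": 1.0,
    "glass_of_wine": 1.0,
    "spirits_40ml": 1.0,
    "strong_beer": 1.5,
    "alcopop": 1.5,
    "bottle_of_wine": 6.0,
}

def weekly_units(diary):
    """Sum alcohol units over a week's diary of (drink, count) entries."""
    return sum(UNITS_PER_DRINK[drink] * count for drink, count in diary)

week = [("standard_beer", 6), ("glass_of_wine", 2), ("strong_beer", 2)]
print(weekly_units(week))  # 6*1.0 + 2*1.0 + 2*1.5 = 11.0 units
```

In the study, five or more units consumed on a single occasion counted as a binge episode.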

In their analyses they considered alcohol intake in five-unit intervals, with an intake of one to five units as the reference category to which all others were compared. They also categorised the number of binge episodes and the number of times a person had been drunk in the past 30 days.

 

What were the basic results?

The median (middle value) alcohol intake in the preceding week was 11 units, and beer was the most common drink (making up an average 5 units). In the past month, 64% of the men had binge drunk and 59% had been drunk more than twice. Almost half of the men (45%) said the preceding week had been a typical intake week for them.

Semen quality generally decreased with increasing alcohol intake and binge drinking. Men with an intake of more than 30 units a week, or who frequently binged, tended to have higher caffeine intake, were more likely to be smokers, were more likely to report having had sexually transmitted infections, and were younger. Overall, after adjusting for confounders, such as time since last ejaculation, smoking and body mass index (BMI), there was no clear association between semen quality and alcohol intake or binge drinking.

There was a dose-response association between increasing units per week (or increasing binge episodes or being drunk) and higher blood testosterone levels, and lower sex hormone-binding globulin (SHBG), indicating more testosterone is freely available to the body tissues. This dose-response association remained after controlling for confounders.

In analyses restricted to the 45% of men who said that this was a typical week for them, there was a dose-response association: as alcohol intake went up, sperm concentration, total sperm count and percentage morphologically normal sperm went down, even after adjustment. The trend was more pronounced among men with a typical weekly alcohol intake above 25 units.

No alcohol intake at all was also associated with reduced semen quality. It is unclear why this is the case.

 

How did the researchers interpret the results?

The researchers conclude that, “Our study suggests that even modest habitual alcohol consumption of more than 5 units per week had adverse effects on semen quality, although most pronounced associations were seen in men who consumed more than 25 units per week. Alcohol consumption was also linked to changes in testosterone and SHBG levels. Young men should be advised to avoid habitual alcohol intake.”

 

Conclusion

This study of more than 1,200 young Danish military recruits finds some associations between alcohol intake and measures of semen quality and sex hormones.

Overall, after adjustment for confounders, there was no clear association between alcohol intake in the past week or binge drinking in the past 30 days and semen quality. However, in analyses restricted to the 45% of men who said this was a typical week for them, there was a dose-response relationship, with higher alcohol intakes being associated with lower sperm concentration, total sperm count and percentage morphologically (structurally) normal sperm.

Increasing alcohol intake was also associated with increased levels of free testosterone in the body.

However, there are various points to consider when interpreting this study:

  • The main limitation with this study is that, being cross sectional, it cannot prove cause and effect. We do not know that the alcohol consumption has directly influenced the measures of sperm quality. Various other health and lifestyle factors could also be having an influence on the relationship (adjustment was only made for confounders of time since last ejaculation, smoking and BMI). For example, men who drink more may have poorer overall diet and activity and lifestyle habits, and these things may all be associated.
  • There is the possibility of inaccurate recall or inaccurate calculation of units of alcohol consumed in the previous week, or numbers of past binge drinking episodes.
  • Also, though the researchers asked whether this was a “typical week”, it cannot be known how representative it was of longer-term patterns. This is especially the case as this was a week in which they were due to be called up to military service which, depending on individual personality, may cause them to drink either more or less than usual.
  • Though this is a large sample of men, they were all young adult Danish men recruited to the military. Therefore, they may not be representative of all populations.
  • We don’t know that any of the measures of reduced sperm quality observed would actually have any effect on fertility.

Overall, this study is a valuable contribution to the body of literature assessing the relationship between alcohol intake and effects on semen quality, but it doesn’t provide conclusive answers on its own.

It is recommended, whether you are trying for a baby or not, to spend at least a few days per week without drinking alcohol.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Just five alcoholic drinks a week could reduce sperm quality. The Guardian, October 3 2014

Even moderate drinking reduces sperm count. The Daily Telegraph, October 3 2014

How just four pints of lager a week harms male fertility: More than 7.5 units over seven days found to affect sperm quality. Mail Online, October 2 2014

Links To Science

Jensen TK, Gottschau M, Madsen JOB, et al. Habitual alcohol consumption associated with reduced semen quality and changes in reproductive hormones; a cross-sectional study among 1221 young Danish men. BMJ Open. Published online October 2 2014


Scientists look into regenerating retinal cells

NHS Choices - Behind the Headlines - Thu, 02/10/2014 - 13:00

“Scientists … have discovered stem cells in the human eye which can be transformed into light-sensitive cells and potentially reverse blindness,” The Daily Telegraph reports.

While this story is an accurate summary, the research is still at a very early stage, though it does show potential.

The cells in question are called limbal neurosphere (LNS) cells and are located at the front of the eye. Unlike standard stem cells, these LNS cells have already started to become specialised eye cells. This new research has found that they may still have the ability to become different types of retinal cells.

Many common causes of blindness, such as macular degeneration, occur when retinal cells are damaged, so the ability to grow new retinal cells would be groundbreaking.

In the experiments, adult mouse LNS cells transplanted on to the retina of newborn mice were able to develop into mature light-detecting (photoreceptor) cells. However, they were not able to integrate into the retina. Human LNS cells showed some signs of developing into retinal cells in the laboratory, but they did not develop into mature cells. They survived when transplanted on to mice retina, but did not develop into retinal cells.

These rather significant hurdles need overcoming before any cure for human blindness becomes possible.

 

Where did the story come from?

The study was carried out by researchers from the University of Southampton, University Hospital Southampton NHS Foundation Trust and the University of Bristol. It was funded by the National Eye Research Centre, T.F.C. Frost Charity, Rosetrees Trust, the Gift of Sight Appeal and the Brian Mercer Charitable Trust.

The study was published in the peer-reviewed journal PLOS One. PLOS One is an open access journal, so the study is free to read online.

The UK media glossed over the preliminary nature of this study. They also didn't explain that the researchers were unable to get the human cells to grow into mature photoreceptor cells in either the laboratory or mouse settings.

 

What kind of research was this?

This study involved laboratory experiments using human and mouse eye tissues, and trials in mice. The researchers wanted to investigate progenitor cells (cells that can develop into one or more kinds of cells) called LNS cells. They aimed to see if mouse and human LNS would develop into retinal cells in the laboratory setting and in mice.

The light-sensory nerve cells (photoreceptors) in the retina cannot regenerate in humans once damaged. This means that currently the only option to fix this damage is to use a donor retina, and the availability of donations is limited. There is also the risk of an individual’s immune system rejecting the donation.

Researchers wanted to find a way to take stem cells or cells at the next stage of development (progenitor cells) and use these to develop into any of the cells required to repair the retina – such as photoreceptors. Taking these cells and transplanting them back into the same person would prevent the rejection problems that are seen when a donor retina is used.

 

What did the research involve?

The researchers took limbal tissue (the border between the transparent cornea and the opaque sclera) from mice and from donated human eyes from adults up to the age of 97. They extracted LNS cells and cultured (grew) them in the laboratory in different conditions to encourage the cells to develop into mature retinal cells. This included growing them with retinal cells from newborn mice. They assessed whether the LNS cells started to look like retinal cells, and whether they expressed genes and produced proteins (“markers”) typically seen in mature light-sensing retinal cells.

The researchers then transplanted adult mouse LNS cells into the retina of newborn mice, and looked to see whether these cells developed into mature retinal cells. They then repeated this experiment, transplanting human LNS cells into the retinas of newborn mice.

 

What were the basic results?

At least some of the mouse LNS cells showed markers that indicated they appeared to have developed into mature light-sensing retinal cells in the laboratory. When transplanted into newborn mice, the cells produced markers that indicated they had developed into photoreceptor cells, but they did not integrate into – that is, become part of – the retina.

Human-donated LNS cells grown in the lab with retinal cells from newborn mice showed some signs of developing into retinal cells, but did not produce the mature photoreceptor cell markers. Human-donated LNS cells cultured with human-donated foetal retinal cells from weeks seven to eight of development did not show signs of developing into retinal tissue.

Human LNS cells transplanted into newborn mice’s retinas survived for up to 25 days, but did not develop into retinal-like cells, including photoreceptors.

 

How did the researchers interpret the results?

The researchers suggest that the human LNS cells were not able to develop into mature retinal cells because there may be a more complex regulatory mechanism in humans than mice. However, they concluded that “as a readily accessible progenitor cell resource that can be derived from individuals up to 97 years of age, LNS cells remain an attractive cell resource for the development of novel therapeutic approaches for degenerative retinal diseases”.

 

Conclusion

This early-stage research has found that LNS cells can be accessed from donated human eyes up to the age of 97. The mouse versions of these cells appear to retain the ability to develop into mature light-sensing retinal cells. However, the researchers have not yet worked out the conditions necessary for human LNS cells to fully develop into mature retinal cells or to integrate with the retina, which would repair it.

If they are able to achieve the necessary conditions for human LNS cells, then people with retinal damage could potentially have the cells taken from the front part of their eye and transplanted on to the retina to repair and regrow photoreceptors. This would remove the need to find a suitable donor, as well as preventing the problems seen with transplant rejections.

However, this is likely to require a lot more research, with the reality a long way off, even if the research proves successful.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Hope for blind as scientists find stem cell reservoir in human eye. The Daily Telegraph, October 2 2014

How stem cells found on the front surface of the eye could lead to treatment for blindness. Mail Online, October 2 2014

UK scientists discover special stem cells that could cure blindness. Daily Express, October 2 2014

Links To Science

Chen X, Thomson H, Cooke J, et al. Adult Limbal Neurosphere Cells: A Potential Autologous Cell Resource for Retinal Cell Generation. PLOS One. Published online October 1 2014


Does losing your sense of smell predict death risk?

NHS Choices - Behind the Headlines - Thu, 02/10/2014 - 11:29

"Sense of smell 'may predict lifespan'," BBC News reports. New research suggests people unable to smell distinctive scents, such as peppermint or fish, may have an increased risk of death within five years of losing their sense of smell.

The study found adults aged 57 or above who could not correctly identify five particular scents – peppermint, fish, orange, rose and leather – were more than three times as likely to die in the next five years.

The authors speculate loss of smell does not directly cause death, but it could be an early warning sign that something has gone wrong, such as exposure to toxic environmental elements or cell damage.

While this study is interesting, it does not prove that loss of sense of smell (anosmia) is a predictor of early death. Researchers used only five scents to identify people with anosmia and only tested people's sense of smell once, which makes the results less reliable. 

There are many reasons for temporary loss of sense of smell, including viral infections, nasal blockage and allergy, so you shouldn't panic if you suddenly stop "smelling the roses".  But you are advised to see your GP if there is no obvious reason for a sudden loss of smell.

 

Where did the story come from?

The study was carried out by researchers from the University of Chicago and was funded by the US National Institutes of Health, as well as other public bodies.

The study was published in the peer-reviewed journal, PLOS One. PLOS One is an open access journal, so the study is free to read online.

Many headlines were alarmist – for example, "Your nose knows death is imminent" in The Guardian and The Daily Telegraph's claim that, "A poor sense of smell could mean the end is nigh".

The Daily Mail took a more responsible approach by including comments from independent experts, who urged people with anosmia not to panic and said more research was needed on this topic.

 

What kind of research was this?

The research was part of a large US cohort study looking at health and social relationships in a nationally representative sample of men and women aged 57 to 85. It is based on two surveys of about 3,000 participants – one carried out in 2005-06 and the second five years later.

The authors say sense of smell (olfactory function) plays an essential role in health and is also linked to key parts of the central nervous system. Normal olfactory function depends on cellular regeneration, which may be affected by the ageing process, they say.

They also say problems with sense of smell are already known as an early symptom of some major neurodegenerative diseases, including Alzheimer's and Parkinson's diseases.

Their hypothesis is olfactory dysfunction could be an early indicator of impending death. But because this was a cohort study, it is unable to prove cause and effect – in other words, that the loss of sense of smell led to death.

 

What did the research involve?

In 2005-06, researchers conducted interviews with 3,005 participants (1,454 men and 1,551 women) at home, assessing their ability to identify five common distinctive smells. These were, in order of increasing difficulty: peppermint, fish, orange, rose and leather.

Researchers used a validated odour identification test, presented using felt-tip pens. The five smells were presented one at a time, and participants were asked to identify each by choosing from a set of four picture or word prompts.

The results were used to categorise olfactory function as:

  • anosmic (loss of sense of smell) – 4 to 5 errors
  • hyposmic (moderate loss of sense of smell) – 2 to 3 errors
  • normosmic (normal sense of smell) – 0 to 1 errors
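
For readers who want the scoring rule spelt out, here is a minimal sketch of how error counts on the five-scent test map to the three categories above. It is purely illustrative – the function name and structure are ours, not the study's.

```python
def classify_olfaction(errors: int) -> str:
    """Illustrative mapping of errors on the five-scent test to the categories above."""
    if not 0 <= errors <= 5:
        raise ValueError("the test uses five scents, so errors must be 0-5")
    if errors >= 4:
        return "anosmic"    # loss of sense of smell (4-5 errors)
    if errors >= 2:
        return "hyposmic"   # moderate loss (2-3 errors)
    return "normosmic"      # normal sense of smell (0-1 errors)
```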

In a second survey in 2010-11, researchers investigated which of the participants were still living. They did this either by speaking to the participants, family members or neighbours, or by examining public records or news sources.

They analysed their results using standard statistical methods and produced various models of their results, one of which adjusted for other factors that might affect mortality (confounders).

These included age, socioeconomic status, disease status, nutrition and body mass index. The researchers also controlled their results for frailty (measured by the inability to perform one or more of seven activities of daily living), cognitive function, smoking and drinking.

 

What were the basic results?

In 2010-11, 430 (12.5%) of the original 3,005 study subjects had died and 2,565 were still alive. In 10 cases it was unknown if the participant was alive or not, and these people were excluded from the analysis. A further 77 were excluded because of missing data.

Researchers found 39% of older adults with anosmia were dead at the time of the second survey, compared with 19% with hyposmia and 10% of those with a normal sense of smell. This pattern was seen in all age groups.

Once all other factors were taken into account, anosmic older adults had more than three times the odds of death at five years compared with those with a normal sense of smell (odds ratio 3.37, 95% confidence interval 2.04-5.57).
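
As a rough illustration of where an odds ratio like this comes from, the sketch below computes an unadjusted odds ratio and 95% confidence interval from a hypothetical 2x2 table. The counts are loosely based on the 39% versus 10% mortality figures above, not the study's actual group sizes, and the published figure of 3.37 was adjusted for confounders, so a raw calculation will not reproduce it.

```python
import math

# Hypothetical counts per 100 participants in each group (NOT the study's data):
# 39% of anosmic and 10% of normosmic participants had died at follow-up.
died_anosmic, survived_anosmic = 39, 61
died_normosmic, survived_normosmic = 10, 90

odds_ratio = (died_anosmic / survived_anosmic) / (died_normosmic / survived_normosmic)

# 95% confidence interval on the log-odds scale (Woolf method)
se = math.sqrt(1/died_anosmic + 1/survived_anosmic + 1/died_normosmic + 1/survived_normosmic)
low = math.exp(math.log(odds_ratio) - 1.96 * se)
high = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"unadjusted OR = {odds_ratio:.2f}, 95% CI {low:.2f} to {high:.2f}")
```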

 

How did the researchers interpret the results?

The researchers say olfactory function is one of the strongest predictors of five-year mortality and may serve as a "bellwether" for slowed cellular regeneration, or as a marker for cumulative exposure to toxic environments.

They say loss of sense of smell was an independent risk factor stronger than several common causes of death, such as heart failure, lung disease and cancer.

Olfaction is "the canary in the coalmine of human health", they say, adding that "this evolutionarily ancient special sense may signal a key mechanism that affects human longevity".

A short olfactory test may be clinically useful in identifying patients at risk who might benefit from additional monitoring and follow-up.

 

Conclusion

This is an interesting study but it had limitations, including its use of only one short test and only five smells to identify people with anosmia. The diagnosis was not clinically verified and the test was performed in the person's home environment, rather than standardised across all participants in a clinic.

Although researchers tried to control for confounders, it is still possible that measured and unmeasured confounders played a role.

Even if the results of this study were robust, this study did not look at cause of death, so no preventative strategies were identified for people with anosmia.

Being told you have an increased risk of death is not particularly useful if there are no well-validated methods of reducing that risk. If anything, such news may do more harm than good.

There are many reasons for temporary loss of sense of smell, including viral infections, nasal blockage and allergy. But anyone who suddenly loses their sense of smell is advised to see their GP as anosmia may be a sign of an underlying – and treatable – disorder.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Sense of smell 'may predict lifespan'. BBC News, October 1 2014

Your nose knows death is imminent. The Guardian, October 1 2014

People who can no longer smell peppermint, fish, rose or leather 'may have only five years left to live'. Daily Mail, October 2 2014

A poor sense of smell could mean the end is nigh. The Daily Telegraph, October 1 2014

Links To Science

Pinto JM, Wroblewski KE, Kern DW, et al. Olfactory Dysfunction Predicts 5-Year Mortality in Older Adults. PLOS One. Published online October 1 2014

Categories: NHS Choices

Minimum alcohol pricing would 'save 100s of lives'

NHS Choices - Behind the Headlines - Wed, 01/10/2014 - 13:30

“‘Hundreds of lives lost' over failure to bring in minimum alcohol pricing,” The Independent reports.

The news is based on research that modelled the effect of a minimum alcohol price per unit of around 40-50p. It suggested that this would be 50 times more effective than the current ban on "below-cost sales" (where alcohol is sold at a loss to attract sales of other goods).

The statistical analyses conducted in this study were based on a number of well-respected data sets, such as Hospital Episodes Statistics from 2005 to 2006, showing alcohol-related admissions to hospital.

The model estimated that a minimum price of 45p per unit of alcohol would affect 23.2% of alcohol units sold, compared to just 0.7% for the ban on below-cost selling. It also estimated this would prevent 624 alcohol-related deaths per year, compared to 14 for the ban, and 23,700 admissions to hospital per year, compared to 500 for the ban.

It’s worth remembering the scientific truism that “all models are wrong, but some are useful”. Based on its methodology, this appears to be one of the useful ones.

We are unlikely to see a minimum alcohol price in 2014, but policymakers will be sure to consider this new study alongside other emerging evidence on how best to protect public health, while allowing people the freedom to enjoy their booze.

 

Where did the story come from?

The study was carried out by researchers from the University of Sheffield and was funded by the Medical Research Council and Economic and Social Research Council, in the UK.

The study was published in the peer-reviewed British Medical Journal on an open access basis, so it is free to read online.

Media reporting was accurate and provided comments from the chief executives of Alcohol Concern and Addaction regarding the importance of the findings.

 

What kind of research was this?

This was a modelling study that aimed to estimate the impact that different alcohol control measures might have on alcohol-related deaths, illnesses, admissions to hospital and quality-adjusted life years.

In May 2014, the government implemented a ban on "below-cost selling" of alcohol in England and Wales, where "cost" is defined as the beverage-specific alcohol duty plus value added tax (VAT) at 20%. The researchers point out that, because duty rates vary, the ban mainly affects drinks with higher duty rates, such as spirits, and has less effect on drinks with lower duty rates, such as cider.

The government had previously considered alternative strategies, including having a minimum unit price for alcohol of between 40p and 50p. Minimum unit pricing would effectively target drinks that are high in alcohol content and sold relatively cheaply.

It is these types of drinks, such as extra-strength cider or lager, or cheap spirits, which many experts hold responsible for the rise in alcohol-related health problems seen in recent years. For example, in some supermarket chains, cider with an alcohol content of 8% ABV is cheaper to buy than some brands of bottled water.

The researchers wanted to model the potential effects that different minimum unit prices would have on outcomes compared to the ban on below-cost selling of alcohol.

Modelling studies can provide a framework to look at the potential effects of different policies or procedures, but they cannot accurately predict the precise number that would be affected.

 

What did the research involve?

The researchers used data from the 2009 General Lifestyle Survey, which captured average daily and weekly alcohol intake for 11,385 people in England.

Estimates of the prices paid for different types of alcoholic drink between 2001 and 2009 were taken from annual two-week purchasing diary surveys of around 6,500 UK households, amounting to 227,933 transactions.

The researchers looked at 96 different subgroups from the sample, split by age, sex, mean consumption (moderate, hazardous and harmful) and income (below or above the poverty line), and compared the amount and cost of alcohol they purchased.

They then estimated the effect that different price changes would have on alcohol consumption in each of the subgroups. They looked at a ban on below-cost selling and minimum unit price policies with thresholds of 40p, 45p and 50p.

To give an example of how the statistical model works: a 1% increase in the price of beer is estimated to result in a 0.98% reduction in the amount of beer purchased, alongside a slight increase of 0.096% in wine purchases. They performed different analyses according to the subgroups (a minimal worked sketch follows the list below), accounting for:

  • younger men drinking more beer on nights out
  • middle-aged women drinking more wine at home
  • harmful drinkers spending less per unit of alcohol
  • people who drink cheaper cider being more affected by an increase in the minimum price per unit
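
The core of a model like this is a matrix of own- and cross-price elasticities. The sketch below applies the two elasticities quoted above (beer own-price −0.98; beer-to-wine cross-price +0.096); the function and structure are our illustration rather than the paper's code, and the linear scaling is only a first-order approximation for small price changes.

```python
# Own- and cross-price elasticities from the example quoted above.
# A 1% rise in beer prices -> 0.98% less beer bought, 0.096% more wine.
# (The paper's values for other drink pairings are not reproduced here.)
ELASTICITY = {
    ("beer", "beer"): -0.98,   # own-price elasticity
    ("beer", "wine"): +0.096,  # cross-price elasticity (substitution into wine)
}

def consumption_change_pct(price_rise_pct: float, priced: str, purchased: str) -> float:
    """First-order % change in purchases of `purchased` after a % price rise in `priced`."""
    return ELASTICITY[(priced, purchased)] * price_rise_pct

print(consumption_change_pct(10, "beer", "beer"))  # -9.8  (10% dearer beer, ~9.8% less bought)
print(consumption_change_pct(10, "beer", "wine"))  # 0.96  (a small shift into wine)
```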

Lastly, the researchers used this estimated change in alcohol consumption to model its effect on the mortality and disease prevalence of 47 conditions that are related to harmful alcohol use. These conditions include oesophageal cancer. The researchers used data from the Office for National Statistics and Hospital Episodes Statistics from 2005 to 2006.

 

What were the basic results?

A ban on below-cost selling is estimated to:

  • affect 0.7% of alcohol units sold
  • reduce harmful drinkers' mean annual consumption by 0.08% – around 3 units per year
  • save 14 deaths and 500 admissions to hospital per year

A minimum cost of 45p per unit of alcohol is estimated to:

  • affect 23.2% of alcohol units sold
  • reduce harmful drinkers’ mean annual consumption by 3.7% – around 137 units per year
  • save 624 deaths and 23,700 admissions to hospital per year
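
From the figures above, the headline "40 to 50 times" comparison can be checked with simple arithmetic – the deaths and admissions ratios fall inside that range, while the units-affected ratio is somewhat lower:

```python
# Ratios of the 45p-per-unit estimates to the below-cost-ban estimates above
print(624 / 14)       # ~44.6x as many deaths prevented per year
print(23_700 / 500)   # ~47.4x as many hospital admissions avoided per year
print(23.2 / 0.7)     # ~33.1x as many alcohol units affected
```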

Interestingly, estimates of the impact on supermarkets and shops selling alcohol suggested minimum unit pricing would increase alcohol revenues by 5.6%, while banning below-cost selling would lead to a much smaller rise of 0.2%. The team expected this would translate into bigger profits for the retailers, but could not confirm this because they were unable to access the commercially sensitive data needed for the calculation.

 

How did the researchers interpret the results?

The researchers conclude that, “the UK government’s recently implemented ban on below-cost selling affects just 0.7% of alcohol units currently sold, and is estimated to have small effects on consumption and health harm. The previously considered policy of a minimum unit price, if set at expected levels between 40p and 50p per unit, is estimated to have an approximately 40 to 50 times greater effect”.

 

Conclusion

This modelling study provides a framework to look at the potential effects of two different alcohol control measures on alcohol-related illness and death across the UK. The policy background to this story is that the UK government originally announced plans to introduce minimum unit pricing, but later changed its view.

Instead, the government implemented a ban on below-cost selling. This modelling study suggested that the minimum unit price originally announced (between 40p and 50p) would have a 40 to 50 times greater beneficial impact on public health than a ban on selling alcohol below cost price.

As it was based on a mathematical model, this study cannot accurately predict the precise number of deaths that could be avoided by changing alcohol use. However, it can provide an approximation, which is useful for policymakers.

Limitations of this particular model include its reliance on estimates from surveys, which could be open to recall bias regarding the amount of alcohol purchased or consumed. However, strengths of the model include the large number of people surveyed and the fact that participants did not know their information would be used in this way.

The researchers carried out a number of sensitivity analyses to test the reliability of their model using different inputs. The team said, “these effects suggest that the relative scale of impact between a ban on below cost selling and a minimum unit price are robust to a variety of assumptions and uncertainties”. This gives an added degree of confidence to the main findings.

If you are concerned that you may be drinking more than you should, then it may be useful to keep a “drink diary” in which you keep a note of the amount of units you drink over the course of a week. It could be a lot higher than you think. Download the drinks diary leaflet (PDF, 697kb) to work out your alcohol intake over a week. You can also download the NHS Choices drink tracker app, which is available for iOS devices.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

'Hundreds of lives lost' over failure to bring in minimum alcohol pricing, says study. The Independent, October 1 2014

Minimum unit price '50 times more effective' than alcohol floor price. BBC News, October 1 2014

Study backs minimum alcohol prices to cut drinking harm. The Daily Telegraph, September 30 2014

Minimum alcohol pricing 'more effective' than ban on bulk buying. ITV News, October 1 2014

Links To Science

Brennan A, Meng Y, Holmes J, et al. Potential benefits of minimum unit pricing for alcohol versus a ban on below cost selling in England 2014: modelling study. BMJ. Published online September 30 2014

Categories: NHS Choices

Viagra 'may cause visual disturbance' in some men

NHS Choices - Behind the Headlines - Wed, 01/10/2014 - 12:30

"Viagra may permanently damage vision in some men, study finds," reports The Guardian. But the news is, in fact, based on research in mice.

This research suggests the medication may not be suitable for men who carry a gene mutation associated with the inherited eye condition retinitis pigmentosa.

Researchers found Viagra (the brand name of the drug sildenafil) caused visual disturbance in mice genetically engineered to carry a single copy of the retinitis pigmentosa mutation.

It took two weeks for the mice's visual response to go back to normal.

The researchers say this has human implications as 1 in 50 men are believed to be retinitis pigmentosa carriers.

Retinitis pigmentosa is a hereditary condition that causes progressive loss of light reception and the outer fields of vision, leading to tunnel vision and blindness.

Despite The Guardian's headline, Viagra didn't cause permanent damage to the mice's eyes, and all of the mice in the study recovered. In addition, the doses used were between 5 and 50 times the equivalent recommended dose for men.

However, you should stop taking sildenafil citrate and seek immediate medical advice if you suddenly develop eye or eyesight problems.

 

Where did the story come from?

The study was carried out by researchers from the School of Optometry and Vision Science at the University of New South Wales, the Centre for Eye Health, Sydney, and the University of Melbourne in Australia, and the University of Auckland, New Zealand.

It was funded by the National Health and Medical Research Council of Australia.

The study was published in the peer-reviewed medical journal Experimental Eye Research.

The Guardian reported the study accurately, but its headline gave a stronger indication of permanent visual damage than was found in the study. It also implied the new research had been done on humans rather than mice.

 

What kind of research was this?

This was an animal study investigating the effects of sildenafil (more commonly known by the brand name Viagra) on the retina of mice. Temporary visual disturbance (blurred vision, increased light sensitivity and colour change) has been reported by some people after taking sildenafil.

Previous research in humans found 50% of healthy men who take at least double the maximum recommended dose of sildenafil will experience temporary visual disturbance (200mg rather than the recommended 25mg to 100mg).

The researchers wanted to see if the effect of sildenafil on sight was greater if there was a susceptibility to retinal damage, as it is estimated 1 in 50 men are carriers of a single copy of a gene for one of several degenerative retinal conditions, but have normal vision.

To test the theory, the researchers used mice genetically engineered to be carriers for the degenerative condition retinitis pigmentosa and checked if they were more susceptible to visual disturbance.

Most people with the condition have a defect in both copies of the gene. Some people with a defect in just one copy can be affected, though most have normal vision and are considered to have "carrier status".

 

What did the research involve?

The genetically engineered mice carriers for retinitis pigmentosa had normal retinal structure and function, as assessed by electroretinography (ERG). ERG uses electrodes to assess how the retina responds to certain types of visual stimulations, such as flashing lights.

However, there were molecular differences in the rod cells of the mice (rod cells detect light, shape and movement), which made them more sensitive to light than normal mice. The researchers suggest this also made their sight more susceptible to degeneration.

The researchers heavily anaesthetised normal mice and carrier mice using ketamine. They then measured their ability to detect flashes of light in a dark room by ERG.

The mice were also injected with doses of sildenafil (5 to 50 times higher than the equivalent recommended dose for humans) and the researchers repeated the ERG after an hour.

Some mice were given a dose 20 times higher, and the ERG was performed after a period of either one hour, two days or two weeks. They performed the same experiment using a saline (salty water) injection to act as a control.

The mice were then killed and their retinas were examined using several laboratory processes.

 

What were the basic results?

In normal mice, photoreceptor response decreased as sildenafil dose increased (this is termed a "dose-dependent" response). For the mice given 20 times the equivalent human dose, this decreased response resolved by day two, although at bright light levels a reduced ERG response was still apparent.

Although there was a decrease in photoreceptor response for the carrier mice after one hour, this was smaller than that seen in normal mice. Sildenafil also increased the response to light of the inner retinal neurons, especially in bright light.

For the mice given 20 times the equivalent human dose, this decreased response did not improve until two weeks later.

In the carrier mice, there were increased levels of cytochrome C, a molecule that indicates cell death, but there was no sign of cell loss or change in retinal thickness in any of the mice's retinas.

 

How did the researchers interpret the results?

The researchers concluded that in normal mice, sildenafil caused reduced electroretinogram (ERG) response that resolved within 48 hours.

In mice that were carriers for the degenerative condition retinitis pigmentosa but who had normal sight, the reduced ERG response took two weeks to return to normal, and they had an increase in a molecule that indicates cell death.

The researchers concluded this may mean sildenafil could cause retinal degeneration.

They say that, "The results of this study are significant considering approximately 1 in 50 people are likely to be carriers of recessive traits leading to retinal degeneration."

 

Conclusion

This study looked at the effects of sildenafil (Viagra) on the retinas of mice. It showed that genetically engineered mice with retinitis pigmentosa carrier status are more susceptible to the temporary side effect of visual disturbances than normal mice.

These carrier mice also had increased levels of the chemical cytochrome C, which is an indicator of cell death.

However, there was no sign of cell loss or change in retinal thickness in any of the mice's retinas. This research therefore did not prove sildenafil causes permanent retinal degeneration, because the changes were reversible in all mice.

It should be emphasised the smallest amount of sildenafil used in these experiments was five times the equivalent recommended dose for men, so it is not clear whether similar results would be seen at normal dose levels.

The product information for sildenafil states its safety has not been determined for people with hereditary degenerative retinal disorders, so it is not recommended for this group.

However, this warning is mainly relevant for men who carry a single copy of the gene – and they may well be unaware of their carrier status, because it usually causes no symptoms.

Studies conducted over a longer period of time would be useful to determine if sildenafil causes retinal degeneration or permanent visual changes, and whether these types of symptoms or changes are more likely in people with carrier status for a degenerative retinal condition.

If you do experience a sudden decrease or loss of vision, stop taking sildenafil and contact your doctor immediately. 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Viagra may permanently damage vision in some men, study finds. The Guardian, October 1 2014

Viagra 'may cause blindness': Ingredient in the drug can permanently affect sight, doctors warn. Mail Online, October 1 2014

Links To Science

Nivison-Smith L, Zhu Y, Whatham A, et al. Sildenafil alters retinal function in mouse carriers of Retinitis Pigmentosa. Experimental Eye Research. Published online September 17 2014

Categories: NHS Choices

High levels of tooth decay found in three-year-olds

NHS Choices - Behind the Headlines - Tue, 30/09/2014 - 12:20

"Tooth decay affects 12% of three-year-olds, says survey," BBC News reports. The survey, carried out by Public Health England, found big variations in different parts of the country. Experts believe sugary drinks are to blame for this trend.

The survey looked at the prevalence and severity of tooth decay in three-year-old children in 2013. This is the first time the dental health of this age group has been surveyed nationally. It found 12% of children surveyed had tooth decay – around one in eight children.

Tooth decay (also known as dental decay or dental caries) occurs when a sticky film of bacteria called plaque builds up on the teeth and produces acids that break down the tooth's surface. A diet high in sugar feeds these bacteria, encouraging decay.

As it progresses, tooth decay can cause an infection of underlying gum tissue. This type of infection is known as a dental abscess and can be extremely painful.

 

Who produced the children's dental health report?

The survey and subsequent report were produced by Public Health England (PHE), part of the Department of Health. PHE's role is to protect and improve the nation's health and wellbeing, and reduce health inequalities.

This survey of the prevalence and severity of tooth decay in three-year-olds was performed to help identify the age groups at which interventions to reduce tooth decay should be aimed.

 

What data did the report look at?

The report looked at the prevalence and severity of dental decay in three-year-old children in 2013. At three years of age most children have all 20 milk teeth (also known as primary teeth).

PHE randomly sampled children attending private and state-funded nurseries, as well as nursery classes attached to schools and playgroups. The children's teeth were examined to see if they had missing teeth, filled teeth or obvious signs of tooth decay.

 

What were the main findings of the report?

Of the 53,814 children included in the survey, 12% had dental decay. Children with dental decay had, on average, at least three teeth that were decayed, missing or filled.

Across all the children included in the survey, the average number of decayed, missing or filled teeth was 0.36 per child.
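
These two figures are mutually consistent, as a quick calculation shows: if roughly 12% of children have decay, and affected children average about three affected teeth, the all-children average comes out at about 0.36.

```python
prevalence = 0.12        # proportion of three-year-olds with any decay
mean_among_affected = 3  # approx. decayed/missing/filled teeth per affected child
print(prevalence * mean_among_affected)  # 0.36 per child, matching the reported average
```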

The report found a wide variation in the levels of decay experienced by three-year-old children living in different parts of the country. The four regions with the most dental decay were:

  • the East Midlands
  • the North West
  • London
  • Yorkshire and the Humber

 

What are the implications of the report?

Where there are high levels of tooth decay among three-year-olds, Public Health England wants earlier interventions to target this younger age group, rather than waiting until the age of five (when these interventions usually take place).

Where high levels of tooth decay are found in the primary incisors (a condition known as early childhood caries), PHE wants local organisations to tackle problems related to infant feeding practices.

Early childhood caries is associated with young children being given sugar-sweetened drinks in a bottle – especially when these are given overnight or for long periods of the day.

Where tooth decay levels increase sharply between the ages of three and five, PHE wants local organisations to tackle this by helping parents reduce the amount and frequency of sugary food and drinks their children have, as well as increasing the availability of fluoride.

 

Conclusion

There are two important steps you can take to protect your children's teeth against tooth decay:

  • limit their consumption of sugar, especially sugary drinks
  • make sure they brush their teeth at least twice a day with fluoridated toothpaste

Sugar

Sugar causes tooth decay. Children who eat sweets every day have nearly twice as much decay as children who eat sweets less often.

This is caused not only by the amount of sugar in sweet food and drinks, but by how often the teeth are in contact with the sugar. This means sweet drinks in a bottle or feeder cup and lollipops are particularly damaging because they bathe the teeth in sugar for long periods of time. Acidic drinks such as fruit juice and squash can harm teeth, too.

Don't fall into the trap of thinking that a fruit juice advertised as "organic", "natural" or with "no added sugar" is inherently healthy. A standard 330ml carton of orange juice can contain almost as much sugar (30.4g) as a can of coke (around 39g).

As Dr Sandra White, director of dental public health at PHE, points out: "Posh sugar is no better than any other sugar … our key advice for [children] under three is to just have water and milk."

Tooth brushing

A regular teeth cleaning routine is essential for good dental health. Follow these tips to help keep your kids' teeth decay free (a short illustrative summary of the fluoride guidance follows the list):

  • Start brushing your baby's teeth with fluoride toothpaste as soon as the first milk tooth breaks through (usually at around six months, but it can be earlier or later). It's important to use a fluoride paste as this helps prevent and control tooth decay.
  • Children under the age of three can use a smear of family toothpaste containing at least 1,000ppm (parts per million) fluoride. Toothpaste with less fluoride is not as effective at preventing decay.
  • Children between the ages of three and six should use a pea-sized blob of toothpaste containing 1,350 to 1,500ppm fluoride. Check the toothpaste packet for this information or ask your dentist.
  • Make sure your child doesn't eat or lick the toothpaste from the tube.
  • Brush your child's teeth for at least two minutes twice a day, once just before bedtime and at least one other time during the day.
  • Encourage them to spit out excess toothpaste, but not to rinse with lots of water. Rinsing with water after tooth brushing will wash away the fluoride and reduce its benefits.
  • Supervise tooth brushing until your child is seven or eight years old, either by brushing their teeth yourself or, if they brush their own teeth, by watching how they do it. From the age of seven or eight they should be able to brush their own teeth, but it's still a good idea to watch them now and again to make sure they brush properly and for the whole two minutes. 
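
As flagged above, here is a minimal sketch of the age-based fluoride guidance as a simple lookup. The function name and the wording of the returned advice are ours, not PHE's; the ppm figures and age bands come from the list above, and the over-six case simply reflects that supervision, rather than a different paste, is the remaining advice given here.

```python
def fluoride_advice(age_years: float) -> str:
    """Illustrative summary of the toothpaste guidance listed above."""
    if age_years < 3:
        return "smear of toothpaste with at least 1,000ppm fluoride"
    if age_years <= 6:
        return "pea-sized blob of toothpaste with 1,350-1,500ppm fluoride"
    return "continue brushing twice daily; supervise until around age 7-8"

print(fluoride_advice(2))  # smear, at least 1,000ppm
print(fluoride_advice(5))  # pea-sized blob, 1,350-1,500ppm
```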

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Tooth decay affects 12% of three-year-olds, says survey. BBC News, September 30 2014

One in 8 three-year-olds has rotting teeth... and fruit juice is to blame: Parents warned organic drinks and smoothies can contain as much sugar as a glass of coke. Daily Mail, September 30 2014

Sugary drinks in baby bottles triggering rise in tooth extractions. The Guardian, September 30 2014

Fruit drinks fuelling tooth decay among under 3s. The Daily Telegraph, September 30 2014

Three-Year-Old Children Suffering Tooth Decay. Sky News, September 30 2014

1 in 8 children suffering from tooth decay 'due to too much sugar'. ITV News, September 30 2014

Shock figures reveal 1 in 8 three-year-olds have tooth decay. Daily Mirror, September 30 2014

Rotten truth: Tooth decay in under-3s is on the rise. Daily Express, September 30 2014

Categories: NHS Choices

Deep-fried Mars bars: unhealthy, but no killer

NHS Choices - Behind the Headlines - Tue, 30/09/2014 - 11:50

“Eating a deep-fried Mars bar could give you a stroke in minutes,” reports the Metro.

However, the study that prompted this headline found no evidence that the Scottish snack can trigger a stroke within minutes.

Fans of deep-fried Mars bars actually have little to worry about in this regard, aside from the obvious risks of regularly consuming a meal full of sugar and saturated fats.

The over-alarmist headlines are based on the results of a small study using 24 healthy participants, which looked at whether eating a deep-fried Mars bar could affect the body’s ability to respond to breath holding by increasing blood flow to the brain (termed “cerebrovascular reactivity”). Impaired cerebrovascular reactivity has been associated with stroke, but this latest study didn’t look at stroke as an outcome.

Importantly, it found no significant differences in cerebrovascular reactivity after eating either a deep-fried Mars bar or porridge.

When the researchers analysed men and women separately, they also found no significant differences in cerebrovascular reactivity after eating a deep-fried Mars bar or porridge in either men or women.

However, when the researchers compared men with women, they found a significant difference.

Common sense suggests that eating deep-fried Mars bars regularly is not good for your health. However, this study didn't find any evidence that a deep-fried Mars bar alone can trigger a stroke within minutes.

 

Where did the story come from?

The study was carried out by researchers from the University of Glasgow and the British Heart Foundation Cardiovascular Research Centre in Scotland, and was funded departmentally.

The study was published in the peer-reviewed Scottish Medical Journal.

The media coverage of this story was poor. The oft-repeated claim that the snack can trigger a stroke within minutes is entirely baseless. Obviously, if you are recovering from a stroke or told that you have risk factors for a stroke, then a deep-fried Mars bar would probably be bottom of the list of recommended foods. However, it seems unlikely that a single sugary snack would trigger a stroke.

Causes of stroke are usually a combination of interrelated risk factors, such as smoking, high blood pressure, high cholesterol, obesity and excessive alcohol consumption.

 

What kind of research was this?

This was a randomised crossover trial that aimed to determine whether eating deep-fried Mars bars impaired cerebrovascular reactivity in comparison to eating porridge.

Cerebrovascular reactivity is the change of blood flow in the brain in response to a stimulus. In this trial, the researchers looked at the change in blood flow after participants were asked to hold their breath for 30 seconds. Holding your breath should increase blood flow to the brain.

The researchers say that impaired change in brain blood flow following a stimulus is associated with an increased risk of ischaemic stroke (stroke caused by a lack of blood flow to the brain).

In this randomised crossover trial, all participants ate both a deep-fried Mars bar and porridge. The order was randomised, so half the participants ate the Mars bar first and half ate the porridge first.

A randomised crossover trial is an appropriate study design to answer this sort of question.

 

What did the research involve?

The researchers studied 24 people, with an average age of around 21. Their body mass index (BMI) was within the healthy range (an average of 23.7).

After fasting for at least four hours, people were randomised to receive a deep-fried Mars bar or porridge.

The researchers looked at changes in blood flow in the brain after participants held their breath for 30 seconds, both before and 90 minutes after eating either a deep-fried Mars bar or porridge.

They looked at blood flow using ultrasound.

Participants returned to receive the other foodstuff on a second visit at least 24 hours after the first.

The researchers compared the changes in blood flow after participants ate the deep-fried Mars bar and porridge.

 

What were the basic results?

Eating a deep-fried Mars bar caused a reduction in cerebrovascular reactivity compared to eating porridge, but this reduction was not statistically significant.

The researchers then looked at men and women separately (14 of the 24 people in the study were male). Changes in cerebrovascular reactivity were not significant in either men or women after they ate a deep-fried Mars bar or porridge.

The researchers then compared men with women. They found a significant difference in cerebrovascular reactivity after eating a deep-fried Mars bar compared to eating porridge, with a modest decrease in cerebrovascular reactivity in males.

 

How did the researchers interpret the results?

The researchers concluded that, “Ingestion of a bolus of sugar and fat caused no overall difference in cerebrovascular reactivity, but there was a modest decrease in males. Impaired cerebrovascular reactivity is associated with increased stroke risk, and therefore deep-fried Mars bar ingestion may acutely contribute to cerebral hypoperfusion [decreased blood flow] in men.”

 

Conclusion

This study found no significant differences in cerebrovascular reactivity (the body’s ability to respond to breath holding by increasing blood flow to the brain) after eating either a deep-fried Mars bar or porridge.

When the researchers analysed men and women separately, they found no significant differences in cerebrovascular reactivity after eating a deep-fried Mars bar or porridge. However, when the researchers compared men with women, they found a significant difference, although whether there is any clinical significance to this finding is unclear.

The researchers point out that there are limitations to their study, including the fact they studied young, healthy individuals. It may be that there are differences in cerebrovascular reactivity in older patients at risk of stroke.

Confirming whether the risk is significant in this sub-group would be challenging, not least due to ethical considerations. Assigning people a diet you suspect could harm them would be a serious breach of medical ethics.

Common sense suggests that eating deep-fried Mars bars regularly will not be good for your health, as a diet high in saturated fats and sugar can increase the risk of cardiovascular diseases (diseases that affect the heart and blood vessels).

However, the very occasional late night “guilty pleasure” is highly unlikely to trigger a stroke.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Deep-fried Mars bar raises risk of having a stroke WITHIN MINUTES, experts claim. Daily Mirror, September 29 2014

Deep-fried Mars bars 'could trigger a stroke': Blood flow to the brain slows within minutes of eating the snack. Mail Online, September 29 2014

Eating a deep fried Mars bar could give you a stroke in minutes. Metro, September 29 2014

Links To Science

Dunn WG, Walters MR. A randomised crossover trial of the acute effects of a deep-fried Mars bar or porridge on the cerebral vasculature. Scottish Medical Journal. Published online September 22 2014

Categories: NHS Choices

Will a 'wonder drug' be available in 10 years?

NHS Choices - Behind the Headlines - Mon, 29/09/2014 - 12:50

"Wonder drug to fight cancer and Alzheimer's disease within 10 years," is the headline in The Daily Telegraph.

This headline is a textbook example of hope (and hype) triumphing over reality, as the new "wonder drug" is neither available today nor inevitable in the future.

The headline was based on a study that provides new information about the role of a protein modification called N-myristoylation (NMT) in human cells and a mechanism that inhibits it.

The study's authors suggest NMT could be involved in the development and progression of a range of diseases, including cancer, diabetes and Alzheimer's disease.

Inhibiting the actions of NMT could help combat these diseases. But this remains to be seen: if it proves true, this greater understanding may open up new avenues for medical research, which could ultimately lead to new treatments in the future.

While the results are both intriguing and promising, it is very difficult to predict the precise route or timing of future medical developments (drugs, treatments or therapies) based on early laboratory investigations.

Even if treatments based on NMT inhibition were developed and found to be effective, there is no guarantee they would also be safe or free from serious side effects.

All in all, the 10-year timeframe suggested by The Daily Telegraph should be taken with a pinch of salt.

 

Where did the story come from?

The study was carried out by researchers from Imperial College London and was funded by Cancer Research UK, the Biotechnology and Biological Sciences Research Council, the Engineering and Physical Sciences Research Council, the European Union, and the Medical Research Council.

It was published in the peer-reviewed journal Nature Communications.

While The Daily Telegraph's hyped-up headline was a little over the top, the coverage was accurate and balanced.

Optimistic quotes from the study authors, such as, "Eventually we hope this would simply be a pill you could take. It will be perhaps 10 years or so to a drug 'on the market' but there are many hurdles to get over", were counterbalanced with a note of realism from Cancer Research UK's senior science officer: "The next steps will be to develop this idea and make a drug – but there's a way to go before we'll know if it's safe and effective in people".

 

What kind of research was this?

This was a laboratory-based study looking at the structure and function of proteins in human cells.

Proteins are very important in human biology as they are involved in, or carry out, a huge range of biological tasks and processes.

This study looked at a specific chemical modification called N-myristoylation (NMT), which happens to some proteins as they are being made and after they have been made. This is a very common chemical modification of proteins, which in turn affects their function – a form of regulation.

The researchers say NMT has been implicated in the development and progression of a range of human diseases, including cancer, epilepsy, Alzheimer's disease, Noonan syndrome (a genetic condition that can disrupt the normal development of the body), and viral and bacterial infections.

 

What did the research involve?

The study used laboratory-grown human cells to study all the characteristics of the NMT process. This was achieved by identifying all the proteins undergoing the NMT process and finding out what these chemically tagged proteins did inside the cells, what processes they were involved in, other chemicals they interacted with, and whether the protein NMT process could be stopped (inhibited).

The group studied cells in the laboratory during normal cell function and apoptosis – the natural process in which a cell self-destructs in an ordered way, also known as programmed cell death. Apoptosis is often inhibited in cancer cells, causing them to grow indefinitely and not die.

 

What were the basic results?

The researchers' findings include:

  • Identifying more than 100 NMT proteins present in the human cells studied.
  • Identifying more than 95 of these proteins as N-myristoylated for the first time.
  • Quantifying the effect of inhibiting the NMT process across more than 70 chemicals (substrates) simultaneously. This showed which chemicals the NMT proteins were interacting with inside the cells.
  • Finding a way to inhibit the NMT process by inhibiting the main enzyme responsible for the chemical modification, called N-myristoyltransferase.

 

How did the researchers interpret the results?

The research team said: "Numerous important pathways involve proteins that are shown here for the first time to be co- or post-translationally N-myristoylated [N-myristoylated during or after their formation]."

Commenting on the wider implications of their research, they said: "These data indicate many potential novel roles for myristoylation that merit future investigation in both basal cell function and apoptosis, with significant implications for basic biology, and for drug development targeting NMT [N-myristoyltransferase]."

 

Conclusion

This laboratory protein study has provided new information about the role of protein N-myristoylation in human cells and a mechanism to inhibit it. The findings suggest proteins undergoing N-myristoylation are involved in many key biological processes and tasks.

If the researchers' assumption that protein N-myristoylation is implicated in the development and progression of a range of diseases holds true, this greater understanding may open up new avenues for medical research, which could ultimately lead to new treatments in the future.

However, it is very difficult to predict the precise route or timing of future medical developments (drugs, treatments or therapies) based on early findings.

The study's authors struck a balance of optimism and realism when quoted in The Daily Telegraph.

They first said their findings could lead to new treatments in the future and that, "Eventually we hope this would simply be a pill you could take. It will be perhaps 10 years or so to a drug on the market". Balancing this out, they also said: "There are many hurdles to get over".

This study represents one of the first steps on the road to new drug discovery, so the exact path ahead is unclear.

But despite these promising early findings, there are no sure bets in drug development – the history of medicine is full of initially encouraging avenues of research that ended up leading to dead ends. 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Wonder drug to fight cancer and Alzheimer's disease within 10 years. The Daily Telegraph, September 27 2014

Links To Science

Thinon E, Serwa RA, Broncel M, et al. Global profiling of co- and post-translationally N-myristoylated proteomes in human cells. Nature Communications. Published online September 26 2014

Categories: NHS Choices

Cherry juice touted as treatment for gout

NHS Choices - Behind the Headlines - Mon, 29/09/2014 - 11:40

“Daily drinks of cherry juice concentrate could help thousands of patients beat gout,” the Mail on Sunday reports.

This headline is based on a small study that found drinking tart cherry juice twice a day temporarily lowered the blood uric acid levels of 12 young healthy volunteers for up to eight hours after they consumed the drink. This is of potential interest, as high levels of uric acid can lead to crystals forming inside joints, which triggers the onset of the painful condition gout.

Somewhat puzzlingly, the study recruited healthy young volunteers who didn’t have gout. A more relevant study design would have included people with a history of gout, to see what effect, if any, cherry juice had on them.

So, based on this study alone, we cannot say that drinking cherry juice helps prevent the onset of gout, or the recurrence of gout in those who have had it before. It is not clear whether reductions in uric acid of the magnitude found in this study would be sufficient to prevent gout or relieve gout symptoms.

The Mail on Sunday’s assertion that “now doctors say drinking cherry juice daily could help beat the condition” is not backed up by this research alone, nor is health advice on gout from health professionals likely to change based on this small study.

 

Where did the story come from?

The study was carried out by researchers from the UK and South Africa, and was part funded by Northumbria University and the Cherry Marketing Institute. The latter is a non-profit organisation, funded by cherry growers, with a brief to promote the alleged health benefits of tart cherries.

This obviously represents a potential conflict of interest, although the research paper says, “The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.”

The study was published in the peer-reviewed Journal of Functional Foods.

The Mail on Sunday’s coverage over-extrapolates the findings of this small study involving healthy people, not gout sufferers. While it is plausible that cherry juice may be of some benefit to people affected by gout, this is currently unproven.

 

What kind of research was this?

This was a single blind randomised crossover study testing the effects of two doses of cherry juice on levels of uric acid (urate) in the body.

The researchers say nutritional research has increasingly focused on the use of foods for improving human health, with particular attention paid to foods containing high concentrations of anthocyanins – such as tart cherries.

Gout is a type of arthritis where crystals of sodium urate form inside and around joints. The most common symptom is sudden and severe pain in a joint, along with swelling and redness. The joint of the big toe is commonly affected, but it can develop in any joint.

Symptoms develop rapidly and are at their worst in just six to 24 hours. Symptoms usually last for three to 10 days (this is sometimes known as a gout attack). After this time, the joint will start to feel and look normal again, and the pain of the attack should disappear completely. Almost everyone with gout will have further attacks in the future.

People with gout usually have higher than normal urate levels in their blood, but the reasons for this may vary; for example, some people may produce too much urate, while in others the kidneys may not be so effective at filtering out urate from the bloodstream. The condition may run in families.

This study did not include people with gout – it only looked at the concentration of sodium urate (uric acid) in the blood of healthy young people who did not have gout, or urate levels high enough to suggest they would develop gout in the near future. Hence, it does not provide good evidence that cherry juice relieves gout symptoms or prevents their recurrence.

A randomised controlled trial including people with gout, or people more likely to develop gout (such as older men with a family history), would be required to give us better evidence on the issue.

 

What did the research involve?

The research took 12 healthy volunteers (average age of 26 years, 11 of whom were male) and gave them two different volumes (30ml and 60ml) of concentrated cherry juice mixed with water, to see what effect this had on measures of uric acid activity and inflammation up to 48 hours later – both of which are biological measures indirectly related to gout.

None of the volunteers actually had a history of gout.

In an effort to reduce other dietary sources of anthocyanins (outside of that gained from the cherry juice), participants were requested to follow a low-polyphenolic diet by avoiding fruits, vegetables, tea, coffee, alcohol, chocolate, cereals, wholemeal bread, grains and spices for 48 hours prior to, and throughout each arm of the trial. Food diaries were completed for 48 hours before, and throughout, the testing phase to assess the diet for compliance.

Participants were required to attend the start of each phase of the study at 9am, following a 10-hour overnight fast to account for diurnal variation. Each phase was comprised of two days supplementation with cherry concentrate. One supplement was taken immediately following a morning blood and urine sample, and a second consumed prior to each evening meal.

Multiple supplements were administered to identify any cumulative effects. The length of the supplementation phase (48 hours) was chosen due to the short period of time in which anthocyanins are metabolised.

 

What were the basic results?

The main results were as follows:

  • Blood urate (uric acid) concentrations in the volunteers fell by about the same amount for both the low and high doses of cherry juice, from around 500 micromol per litre at the start to around 300 micromol per litre after eight hours. By the 24-hour and 48-hour time points, concentrations appeared to have risen back up to around 400 micromol per litre.
  • The amount of urate (uric acid) removed from the body via urine increased, peaking at two to three hours. The amount excreted then dipped, but remained broadly above the starting level up to 48 hours.
  • Levels of a general blood inflammatory marker (high-sensitivity C-reactive protein; hsCRP) decreased.
  • There was no clear dose effect between the cherry concentrate and the biological findings.
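
To put those concentrations in proportion, the fall from roughly 500 to 300 micromol per litre is about a 40% reduction at eight hours, with the partial rebound to around 400 leaving levels about 20% below baseline – a quick calculation from the approximate figures above:

```python
baseline, trough, rebound = 500, 300, 400  # approximate micromol/L from the results above
print((baseline - trough) / baseline * 100)   # ~40% below baseline at 8 hours
print((baseline - rebound) / baseline * 100)  # ~20% below baseline at 24-48 hours
```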

 

How did the researchers interpret the results?

The researchers said, "These data show that MC [Montmorency tart cherry concentrate] impacts upon the activity of uric acid and lowers hsCRP, previously proposed to be useful in managing conditions such as gouty arthritis; the findings suggest that changes in the observed variables are independent of the dose provided."

They also said, “these results provide rationale for the use of Montmorency cherry concentrate as an adjuvant therapy to NSAIDs [non-steroidal anti-inflammatory drugs] in the treatment of gouty arthritis [gout].”

 

Conclusion

This small study found that drinking tart cherry juice twice a day temporarily lowered the blood uric acid levels of 12 young healthy volunteers without gout for up to eight hours after they consumed the drink. Levels climbed back towards baseline by 24-48 hours. The researchers and media extrapolated this finding to mean that the drink may be useful for gout, which is caused by an excess accumulation of uric acid crystals.

Based on this study alone, we cannot say that drinking cherry juice helps prevent the onset of gout, or the recurrence of gout in those who have had it before. The study did not test the effect of the juice on people with gout, or those likely to get gout in the future, so is only indirectly relevant to these groups. For example, it is not clear whether reductions in uric acid of the magnitude found in this study would be sufficient to prevent or treat gout in people with a propensity for high uric acid levels in the body (for whatever reason).

Furthermore, there may have been other dietary factors contributing or interacting with the cherry juice compounds that could account for the changes observed. Hence, cherry juice might not be the sole cause of the effects seen.

The Mail on Sunday carried a useful quote from a UK Gout Society spokesman, who said that while “Montmorency cherries [those used in the study] could help reduce uric acid levels in the body, ‘People with gout should go to their GP because it can be linked to other conditions such as stroke and psoriasis’”.

We find no evidence to support the Mail’s comments that “Now doctors say drinking cherry juice daily could help beat [the] condition”.

For the reasons above, this study alone provides weak evidence that concentrated cherry juice might help those with gout. The media have somewhat overhyped the significance of the findings, which are underdeveloped and tentative. The hype would be justified if a more robust study of people with gout had been undertaken. 

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Fighting the agony of gout - with a daily glass of cherry juice. Mail on Sunday, September 28 2014

Links To Science

Bell PG, Gaze DC, Davison GW, et al. Montmorency tart cherry (Prunus cerasus L.) concentrate lowers uric acid, independent of plasma cyanidin-3-O-glucosiderutinoside. Journal of Functional Foods. Published online September 27 2014

Categories: NHS Choices

Could curry spice boost brain cell repair?

NHS Choices - Behind the Headlines - Fri, 26/09/2014 - 12:30

“Spicy diet can beat dementia,” is the unsupported claim in the Daily Express. Researchers found that the spice turmeric stimulated the growth of neural stem cells in rats, though this is a long way from an effective dementia treatment for humans.

This was laboratory and animal research investigating the effect of a turmeric extract (aromatic turmerone) on neural stem cells (NSCs). NSCs have some ability to regenerate brain cells after damage, but usually not the damage caused by degenerative brain diseases such as Alzheimer’s disease.

The study found that when the turmeric extracts were either directly cultured with NSCs in the laboratory (in vitro) or when they were directly injected into the brains of live rats (in vivo), the extracts increased the growth and development of the stem cells.

However, this research is in the very early stages. We don’t know whether this apparent increase in stem cells would have any effect on repairing brain damage in rats with degenerative brain diseases, let alone humans with these conditions. We certainly don’t know that eating turmeric, or other spices, would have any effect on the brain’s powers of regeneration.

Though the researchers hope these findings may pave the way towards new treatments for degenerative brain conditions, this is likely to be a long way off.

 

Where did the story come from?

The study was carried out by researchers from the Institute of Neuroscience and Medicine at Research Centre Juelich, and the University Hospital of Cologne, both in Germany. The study was supported by the Koeln Fortune Program/Faculty of Medicine, University of Cologne and the EU FP7 project “NeuroFGL”.

The study was published in the peer-reviewed journal Stem Cell Research and Therapy on an open access basis, so it is free to read online.

The quality of the Daily Express’s and the Mail Online’s reporting of the study is poor. Both sources claim that eating curries can "beat dementia". These claims are entirely unproven and are at best sensationalist, and at worst cruel for giving people false hope.

BBC News and ITV News’ coverage takes a more appropriate tone, pointing out that any potential human application at this stage is entirely hypothetical.

 

What kind of research was this?

This was an animal and laboratory study, which aimed to investigate the effect of aromatic (ar-) turmerone on brain stem cells.

Ar-turmerone and curcumin are active compounds of the herb Curcuma longa, or turmeric as it is more commonly known. Many studies (such as a study we covered in 2012) have suggested that curcumin has anti-inflammatory effects and may have a protective effect on brain cells, though the effects of ar-turmerone are yet to be examined.

Neural stem cells (NSCs) have some ability to regenerate brain cells that have been destroyed or damaged, but usually are insufficient to repair the damage caused by degenerative brain diseases (such as Alzheimer’s) or stroke.

This research aimed to investigate the effects of ar-turmerone on NSCs in brain cells in the laboratory and in live rats.

 

What did the research involve?

In the first part of the research, NSCs were obtained from the brains of rat foetuses and cultured in the laboratory. Ar-turmerone was added to the cultures at various concentrations, and the cultures were studied over a number of days to measure the rate of stem cell proliferation.

In the second part of the research, a group of male rats were anaesthetised. Three then received an injection of ar-turmerone into the brain; six were injected with an equal volume of salt water. After recovery from the anaesthetic, the animals were put into cages and given free access to food and water as normal.

For five days following the surgical procedure, a tracer (bromodeoxyuridine), which is taken up by replicating cells, was injected into the animals. Seven days after the surgery, the rats were scanned with a positron emission tomography (PET) scanner, which detects the tracer and produces 3D images showing active cell division in the tissues.

After death, the brains of the rats were examined in the laboratory to look at how ar-turmerone had affected brain structure. 

 

What were the basic results?

In the laboratory, the researchers found that ar-turmerone increased the number of neural stem cells. Higher concentrations of ar-turmerone caused greater increases in NSC proliferation.

In the rats, they also found that injection of ar-turmerone into the brain promoted the proliferation of NSCs and differentiation into different brain cell types. This was evident on both PET scanning and autopsy examination of the brain after death.

 

How did the researchers interpret the results?

The researchers conclude that both in the laboratory and in live animals, ar-turmerone causes the proliferation of nerve stem cells. They suggest that “ar-turmerone thus constitutes a promising candidate to support regeneration in neurologic disease”.

 

Conclusion

This laboratory and animal research has found that an extract from turmeric (aromatic turmerone) seems to increase the growth and differentiation of neural stem cells (NSCs).

However, this research is in the very early stages. So far, the extract has only been added to brain stem cells in the laboratory, or directly injected into the brains of only three rats. Though NSCs have some ability to regenerate brain cells after damage, this is usually not enough to have an effect in degenerative brain diseases such as Alzheimer’s.

The hope is that by boosting the number of NSCs, they could be more effective at repairing damage in these conditions. This study has not investigated whether the observed effects would make any meaningful functional differences in rats with degenerative brain diseases, never mind humans with these conditions.

As the researchers further caution, there are various issues to be considered when contemplating the possibility of any trials in humans. For example, it is recognised that causing the increased rate of growth and differentiation of NSCs carries some risk of cancerous change. Also, the route of administration used here in the rats – direct injection into the brain – would be likely to carry far too much risk and may not be possible in humans. We certainly don’t know whether taking turmeric extracts by mouth – or just by eating a spicy diet as the Express headline suggests – would have any effect on the brain’s powers of regeneration.

Though the researchers hope these findings may pave the way towards new treatments for degenerative brain conditions, this is likely to be a long way off.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Spicy diet can beat dementia: Breakthrough in fight to cure cruel disease. Daily Express, September 26 2014

Brain repair 'may be boosted by curry spice'. BBC News, September 26 2014

Eating a curry 'can help beat dementia': Ingredient found in turmeric may hold key to repairing brains of people with condition. Mail Online, September 26 2014

Tumeric 'link' to brain cell repair. ITV News, September 26 2014

 

Links To Science

Hucklenbroich K, Klein R, Neumaier B, et al. Aromatic-turmerone induces neural stem cell proliferation in vitro and in vivo. Stem Cell Research and Therapy. Published online September 26 2014

Categories: NHS Choices

Antibiotic treatments 'fail' 15% of the time

NHS Choices - Behind the Headlines - Fri, 26/09/2014 - 12:10

“Antibiotic treatments from GPs ‘fail 15% of the time’,” BBC News reports. In one of the largest studies of its kind, researchers estimated that just under one in seven antibiotic prescriptions in 2011 “failed”.

This study examined the failure rates of antibiotics prescribed by GPs in the UK for common infections over a 21-year period – from 1991 to 2012. Most of the failures (94%) were cases where a different antibiotic needed to be prescribed within 30 days, suggesting that the first antibiotic had not worked.

In general, the overall failure rate remained fairly static over the two decades studied, rising only slightly from 13.9% in 1991 to 15.4% by 2012.

When considering specific types of infection in combination with specific classes of antibiotics, there were notable changes in failure rates. For example, when the antibiotic trimethoprim was prescribed for an upper respiratory tract infection, failure rates increased from 25% in 1991 to 56% in 2012. Reassuringly, failure rates with commonly prescribed antibiotics (such as amoxicillin) currently remain fairly low.

The study did not look at the reasons for antibiotic failure, but one reason could be antibiotic resistance – an increasing problem worldwide.

If you are prescribed an antibiotic, you can increase the chances of it working and decrease the risk of antibiotic resistance by ensuring that you take the full course as prescribed by your GP, even when you start to feel better.

 

Where did the story come from?

The study was carried out by researchers from Cardiff and Oxford universities, and Abbott Healthcare Products in the Netherlands, who also funded the study.

The study was published in the peer-reviewed British Medical Journal (BMJ) on an open access basis, so it is free to read online.

While the overall reporting by the UK media was broadly accurate, many of the headlines were not.

The Daily Telegraph claimed that “Up to half of antibiotics 'fail due to superbugs'”. 

We don't actually know the reason for needing another antibiotic prescription, as this was not examined in this study. Therefore, we don't know that any of these apparent antibiotic failures were due to “superbugs” as no laboratory data was available.

The Daily Mail claims that, “Now one in seven patients cannot be cured using antibiotics”, which is also not correct. It could well be the case that many patients were cured through the use of alternative antibiotics.

 

What kind of research was this?

This study examined the failure rates of antibiotics prescribed by general practices in the UK over a 21-year period – from 1991 to 2012. Antibiotic resistance is a problem that has been increasing over the past few decades. As the World Health Organization (WHO) has declared, this is becoming a worldwide public health crisis, as previously effective antibiotics become ineffective at treating certain infections. Though many people may think of antibiotic resistance as a problem predominantly found in hospital care (e.g. patients becoming ill with resistant “superbugs”), resistant bugs are just as much a problem in the community. As the researchers say, recent antibiotic treatment in primary care puts a person at risk of developing an infection that is resistant to antibiotics.

This study used a large general practice database to assess the failure of first-line (initial) antibiotic treatments prescribed in the UK over a 21-year period, alongside looking at general antibiotic prescription patterns.

 

What did the research involve?

This study used the UK Clinical Practice Research Datalink (CPRD) – an anonymised database collecting data from more than 14 million people attending almost 700 general practices in the UK. The database contains well-documented medical records and information on prescriptions, and these were examined between 1991 and 2012.

The researchers decided to look at antibiotics prescribed for four common classes of infection:

  • upper respiratory tract infections (e.g. sore throats, tonsillitis, sinusitis)
  • lower respiratory tract infections (e.g. pneumonia)
  • skin and soft tissue infections (e.g. cellulitis, impetigo)
  • acute ear infection (otitis media)

They looked at whether these infections had been treated with a course of a single antibiotic (termed monotherapy, as opposed to two antibiotics in combination, for example). An antibiotic was considered the first-line treatment if there had been no prescriptions for other antibiotics in the preceding 30 days.

They assessed the proportion of antibiotic courses resulting in treatment failure. As the researchers say, there is no specific definition of treatment failure, but based on previous research findings they considered treatment failure as:

  • prescription of a different antibiotic within 30 days of the first antibiotic prescription
  • GP record of admission to hospital with an infection-related diagnosis within 30 days of prescription
  • GP referral to an infection-related specialist service within 30 days of prescription
  • GP record of an emergency department visit within three days of prescription (the shorter time window being selected to increase the probability that the emergency was related to the infection, rather than another cause)
  • GP record of death with an infection-related diagnostic code within 30 days of prescription

For each year, from 1991 to 2012, the researchers determined antibiotic treatment failure rates for the four infection classes and overall.

 

What were the basic results?

The database contained records of almost 60 million antibiotic prescriptions prescribed to more than 8 million people.

Almost 11 million prescriptions were for first-line single antibiotic treatment of the four groups of infection being studied: 39% for upper and 29% for lower respiratory tract infections, 23% for skin and soft tissue infections, and 9% for ear infections.

Overall, GP consultation rates for the four common infection groups decreased over time, but the proportion of consultations for which an antibiotic was prescribed marginally increased: 63.9% of consultations in 1991 and 65.6% in 2012. Across the whole 21 years, the proportion of consultations where an antibiotic was prescribed was 64.3%. However, within infection groups, there were more significant changes: prescriptions for lower respiratory tract infections decreased (59% in 1991 to 55% in 2012) while those for ear infection went up considerably (63% in 1991 to 83% in 2012).

The most commonly prescribed antibiotic was amoxicillin (42% of all prescriptions), and most upper respiratory tract infections were treated with this antibiotic.

Most antibiotic treatment failures (94.4%) were cases where an alternative antibiotic had been prescribed within 30 days of treatment.

The overall antibiotic treatment failure rate for the four infection classes was 14.7%. The rate was 13.9% in 1991 and 15.4% in 2012, but there was not a clear linear increase in the rate over the time period. For each year, the highest failure rates were seen for lower respiratory tract infections (17% in 1991 and 21% in 2012).

Within the infection classes, individual antibiotics were associated with different failure rates. There were some particularly high rates of failure. For example, when the antibiotic trimethoprim (most often prescribed for urine infections) was prescribed for an upper respiratory tract infection, it failed 37% of the time overall, increasing from 25% in 1991 to 56% in 2012. For lower respiratory tract infections, failure rates were highest for a group of broad-spectrum antibiotics called cephalosporins (including antibiotics like cefotaxime and cefuroxime), with failure rates increasing from 22% in 1991 to 31% in 2012. 

In 2012, despite its high prescription rate for upper respiratory tract infections, amoxicillin had quite a low failure rate (12.2%).

 

How did the researchers interpret the results?

The researchers conclude that, “From 1991 to 2012, more than one in 10 first-line antibiotic monotherapies for the selected infections were associated with treatment failure. Overall failure rates increased over this period, with most of the increase occurring in more recent years, when antibiotic prescribing in primary care plateaued and then increased”.

 

Conclusion

Overall, this is a highly informative study of GP antibiotic prescribing for common infections in the UK. The overall antibiotic treatment failure rate was 15% over the course of the study period; these were mainly cases where there was a need to prescribe a different antibiotic within 30 days. There was a slight increase in failure rate, from 13.9% in 1991 to 15.4% in 2012. Within the infection classes, particular antibiotics had notable changes in failure rates, while others remained fairly stable. Reassuringly, amoxicillin and other commonly prescribed antibiotics currently still have fairly low failure rates.

However, despite this study using a wealth of data from a reliable GP database, there are some limitations to bear in mind.

Importantly, as the researchers say, there was no specific definition of treatment failure for them to use, so they had to use various proxy measures. They had no laboratory data available on the resistance of organisms to different antibiotics, so the study is not able to definitely say that antibiotic resistance was the reason for treatment failure. The most common indication of “treatment failure” in this study was the need for prescription of another antibiotic within 30 days, but this may not mean that the organism was resistant to the first antibiotic – for example, the person may not have taken the full prescribed treatment course, or the antibiotic may not have turned out to be appropriate for the type of bacteria the person had.

There is also the possibility of incorrect coding within the database, or the antibiotic not being prescribed for the indication that it was assumed to be.

However, antibiotic resistance is an increasing global problem, and is likely to have contributed to the failure rates. As a patient, it is important to be aware that many common respiratory infections can be self-limiting viral infections that do not need an antibiotic. If you are prescribed an antibiotic, you can help decrease the risk of the bug developing resistance to the antibiotic by ensuring that you take the full course as prescribed by your GP, even when you start to feel better.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Antibiotic treatments from GPs 'fail 15% of the time'. BBC News, September 26 2014

Up to half of antibiotics 'fail due to superbugs' study finds. The Daily Telegraph, September 26 2014

Now one in seven patients cannot be cured using antibiotics after they were handed out too freely by GPs. Mail Online, September 26 2014

Research shows 1 in 10 antibiotic prescriptions 'fail'. ITV News, September 26 2014

Links To Science

Currie CJ, Berni E, Jenkins-Jones S, et al. Antibiotic treatment failure in four common infections in UK primary care 1991-2012: longitudinal analysis. BMJ. Published online September 23 2014

Categories: NHS Choices

Skirt size increase ups breast cancer risk

NHS Choices - Behind the Headlines - Thu, 25/09/2014 - 12:20

“Skirt size increase linked to breast cancer risk,” BBC News reports. The story comes from a UK study of nearly 93,000 postmenopausal women that looked at whether changes in skirt size since their twenties were associated with an increased risk of breast cancer.

It found that going up a skirt size every 10 years was associated with a 33% increased risk of developing breast cancer after the menopause. As an example, this could be going from a size 8 at 25 years old to a size 16 at 65 years old.

It's important to stress that the initial risk of developing breast cancer, the baseline risk, is small, with only 1.2% of women involved in the study going on to develop breast cancer.

This large study used skirt size as a proxy measure for “central obesity” – the accumulation of excess fat around the waist and stomach. While overweight and obesity are known to be risk factors for several cancers, this study suggests that a thickening waist may be an independent measure of increased breast cancer risk.

The good news is that the “skirt size effect" appears to be reversible, as losing weight and trimming your waist size may help reduce your breast cancer risk.

 

Where did the story come from?

The study was carried out by researchers from the Universities of London and Manchester, and was funded by the Medical Research Council, Cancer Research UK and the National Institute of Health Research, as well as the Eve Appeal.

The study was published in the peer-reviewed medical journal BMJ Open. As the name suggests, this is an open-access journal, so the study can be read for free online.

The paper was widely covered in the UK media. Coverage was fair, if uncritical.

Several headlines gave the impression that going up a single skirt size would raise breast cancer risk by 33%. Such a rise in risk would only be expected if a person went up a dress size every decade from their mid-twenties to when they were over 50 years old – the youngest age of the women recruited to the study.

Several media sources included useful comments from independent experts.

 

What kind of research was this?

This was a cohort study that looked at whether changes in skirt size between a woman’s twenties and the menopause were associated with an increased risk of breast cancer. Skirt size was used as a proxy measure for central obesity (an excessive amount of fat around the stomach and abdomen – sometimes known as a "pot belly" or "beer belly").

The researchers say that both overall and central obesity are associated with an increased breast cancer risk in postmenopausal women, yet no studies have looked at the relationship between breast cancer risk and changes in central obesity alone.

Skirt and trouser size, they say, provide a reliable estimate of waist circumference, which may be predictive of risk, independent of body mass index (BMI), which is based on the individual’s height and weight.

 

What did the research involve?

The researchers recruited women taking part in a large UK trial of ovarian cancer screening. The women were aged 50 or over and had no known history of breast cancer when they entered the study, between 2005 and 2010.

At enrolment, they answered a questionnaire providing detailed information on height and weight, reproductive health, number of pregnancies, fertility, family history of breast and ovarian cancer, use of hormonal contraceptives and hormone replacement therapy (HRT) – all of which influence (confound) breast cancer risk.

They were also asked about their current skirt size (SS) and what their SS had been in their twenties. Women could choose from 13 SS categories, ranging from size 6 to 30. These answers were used to calculate an increase in SS for each 10 years gone by. A “one unit” increase in SS would mean an increase from, say, 10 to 12 – as odd sizes do not exist in the UK.

The women were followed up three to four years after recruitment, when they completed a further questionnaire, providing information on education, skirt size, continuing use of HRT, smoking, alcohol use, health status and any cancer diagnosis.

The researchers used official health records to identify those women who had a diagnosis of breast cancer during the follow-up period.

They used standard statistical methods to analyse their results, adjusting these for confounders such as BMI, HRT use and family history.

 

What were the basic results?

The researchers report that 92,834 women completed the study and were included in their analysis. The average age of participants was 64. Participants were mainly white, educated to university degree level, and overweight at the point of entry to the study, with an average BMI of just over 25. 

At the age of 25, the average skirt size had been a UK 12, and at 64 it was 14. An increase in skirt size over their lifetime was reported in 76% of women.

During the monitoring period, 1,090 women developed breast cancer, giving an absolute risk of just over 1%.

Researchers found that for each unit increase in skirt size per 10 years, the risk of breast cancer after menopause increased by 33% (hazard ratio (HR) 1.330, 95% confidence interval (CI) 1.121 to 1.579).

For those with an increase of two SS units every 10 years, the risk was increased by 77% (HR 1.769, 95% CI 1.164 to 2.375).
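
As a quick arithmetic check on these figures (ours, not the study's): in models of this kind, the hazard ratio for a two-unit increase is expected to be the square of the one-unit hazard ratio, and the published point estimates are consistent with that:

\[ 1.330^2 \approx 1.769 \]

In other words, the model treats each additional unit of skirt size gain per decade as multiplying the risk by the same factor.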

They also found that a reduction in skirt size since the twenties was associated with a decreased risk of breast cancer.

Change in skirt size, they say, was a better predictor of breast cancer risk than BMI or weight generally. It should also be noted that the association of skirt size with breast cancer risk was independent of BMI.

 

How did the researchers interpret the results?

The researchers conclude that a change in skirt size is associated with a risk of breast cancer independent of a woman’s height and weight. They estimate that the five-year absolute risk of postmenopausal breast cancer increases from one in 61 to one in 51 with each unit increase in skirt size every 10 years.
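
Converting those fractions into percentages makes the size of the change clearer (this is simple arithmetic on the study's own figures, not an additional result):

\[ \frac{1}{61} \approx 1.64\%, \qquad \frac{1}{51} \approx 1.96\%, \qquad 1.96\% - 1.64\% \approx 0.3 \text{ percentage points} \]

So, while the relative increase sounds large, the estimated change in five-year absolute risk is roughly a third of a percentage point.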

Their findings, they say, may provide women with a simple and easy to understand message, given that skirt size is a reliable measure of waist circumference, and women may relate to skirt size more easily than other measures of fat, such as BMI.

They theorise that fat around the waist may be more “metabolically active” than fat elsewhere and may increase levels of circulating oestrogen – an established risk factor for breast cancer.

 

Conclusion

This study suggests that while obesity generally is a risk factor for breast cancer, an increase in waist circumference, as shown in skirt size, between a woman’s twenties and after the menopause, may be an independent measure of increased risk.

Keeping to a healthy weight is important for overall health, and for reducing the risk of several cancers. However, few women in their 60s have the same waist size as they did in their twenties – in this study, for example, the average skirt size at 25 was a 12, but at 64 it was a size 14.

The 33% increased risk of breast cancer after menopause calculated by the researchers was based on an increase in skirt size every 10 years, which could mean increasing from size 12 aged 25 to size 18 by age 55.

The study had several limitations that may affect the reliability of its results. For example, it had a short follow-up period (three to four years) and it also required postmenopausal women in their 50s and 60s to recall their skirt size in their twenties.

In addition, while researchers adjusted their results for several factors that might influence the risk of breast cancer, it is always possible that both measured and unmeasured confounders affected the results.

Finally, most of the women were white, well-educated and also overweight when they were recruited. The results may not be generalisable to other groups of women.

It’s important to maintain a healthy weight, but it would be sad if women in their sixties started needlessly worrying that they should have the same waist size as when they were in their twenties. Surely all of us are entitled to some degree of middle-age spread?

Other ways you can reduce your breast cancer risk include taking regular exercise, choosing to breastfeed rather than bottle feed, and attending screening appointments if invited.

Read more about breast cancer prevention.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Skirt size increase linked to breast cancer risk, says study. BBC News, September 25 2014

Expanding waistline for women ‘is a predictor for breast cancer risk’. The Guardian, September 25 2014

Women who go up a skirt size 'raise breast cancer risks'. The Daily Telegraph, September 24 2014

Long-term weight-gain increases women's risk of breast cancer by a third, study suggests. The Independent, September 25 2014

Going up a dress size raises breast cancer risk 33%: Danger of gaining weight every decade. Mail Online, September 24 2014

Increased breast cancer risk for women who go up a skirt size as they get older. Daily Mirror, September 24 2014

Links To Science

Fourkala E, Burnell M, Cox C, et al. Association of skirt size and postmenopausal breast cancer risk in older women: a cohort study within the UK Collaborative Trial of Ovarian Cancer Screening (UKCTOCS). BMJ Open. Published online September 24 2014

Categories: NHS Choices

Media multitasking 'brain shrink' claims unproven

NHS Choices - Behind the Headlines - Thu, 25/09/2014 - 12:00

“Multitasking makes your brain smaller,” the Daily Mail reports. UK researchers found that people who regularly “media multitasked” had less grey matter in a region of the brain involved in emotion.

The researchers were specifically interested in what they term media multitasking; for example, checking your Twitter feed on your smartphone while streaming a boxset to your tablet as you scan your emails on your laptop.

In the study, 75 university students and staff were asked to complete a questionnaire about their media multitasking habits. The researchers compared the results with MRI brain scans and found that people with the highest level of media multitasking had a smaller volume of grey matter in a region of the brain called the anterior cingulate cortex (ACC), which is believed to be involved in human motivation and emotions.

The clinical implications are not clear – motivation and emotions were not assessed and all of the participants were healthy and intelligent.

Importantly, this study was essentially a single snapshot in time, so it cannot prove cause and effect. The idea that this section of the brain has shrunk was not established by this study. It may be that people who used more media forms simply had a smaller volume in this area of the brain to start with, and this could have influenced their media use.

  

Where did the story come from?

The study was carried out by researchers from Graduate Medical School in Singapore, the University of Sussex and University College London. It was funded by the Japan Science and Technology Agency.

The study was published in the peer-reviewed medical journal PLOS One on an open access basis, so it is free to read online.

The Daily Mail’s reporting of the study gives the impression that a direct cause and effect relationship between media multitasking and brain shrinkage has been proven. This is not the case.

The Daily Telegraph takes a more appropriate and circumspect approach, including a quote from one of the researchers who points out that further cohort-style studies are required to prove (or not) a definitive causal effect.

 

What kind of research was this?

The researchers say that existing literature on the topic has suggested that people who engage in heavier media multitasking have poorer cognitive control (ability to concentrate and focus on one task despite distractions, to flexibly switch between thoughts, and to control thinking and emotions).

They conducted this cross-sectional study to see if there was an association between increased media multitasking and any differences in the volume of grey matter in the brain. As this was a cross-sectional study, it cannot prove causation – that is, that the level and combination of media use caused the brain to shrink.

The study can’t tell us whether there has been any change in brain size at all, or whether people with increased media use already had this brain structure.

A better study design would be a prospective cohort study that carried out regular brain scans of people over time from a young age to see whether their level of media use (for example through work or study) influenced their brain structure.

However, aside from any ethical considerations, it is likely there would be significant practical difficulties with such a study design; try telling a young person that they couldn’t text while watching TV for the next five years and see how far that gets you.

Also, a cohort study would still be subject to potential confounders.

 

What did the research involve?

The researchers recruited 75 healthy university students and staff who were “well acquainted” with computers and media technologies. They asked them to fill out two questionnaires and to have an MRI brain scan.

A media multitasking index (MMI) score was calculated for each participant. This involved participants completing a media multitasking questionnaire, the results of which were converted into a score using a mathematical formula (a sketch of one widely used version of the formula is given after the lists below).

The first section of the questionnaire asked people to estimate the number of hours per week they spent using different types of media:

  • print media
  • television
  • computer-based video or music streaming
  • voice calls using mobile or telephone
  • instant messaging
  • short messaging service (SMS) messaging
  • email
  • web surfing
  • other computer-based applications
  • video, computer or mobile phone games
  • social networking sites

The second section asked them to estimate how long they used any of the media types at the same time, using a scale of:

  • 1 – never
  • 2 – a little of the time
  • 3 – some of the time
  • 4 – all of the time
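
The paper's exact scoring formula is not reproduced here, but media multitasking indices of this kind generally follow the version introduced by Ophir and colleagues in 2009. As a hedged sketch (the precise formula used in this study may differ):

\[ \mathrm{MMI} = \sum_{i=1}^{N} \frac{m_i \times h_i}{h_{\mathrm{total}}} \]

where \(N\) is the number of media types, \(h_i\) is the hours per week spent using medium \(i\), \(h_{\mathrm{total}}\) is the total hours across all media, and \(m_i\) is the average number of other media used at the same time as medium \(i\), derived from the "never" to "all of the time" ratings above. A higher MMI therefore reflects both heavy media use and a greater tendency to use several media simultaneously.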

Participants were then asked to complete another questionnaire called the Big Five Inventory (BFI), which is a 44-item measure of five personality factors:

  • extroversion
  • agreeableness
  • conscientiousness 
  • neuroticism
  • openness to experience

 

What were the basic results?

Higher media multitasking (MMI) score was associated with smaller grey matter volumes in the anterior cingulate cortex (ACC) portion of the brain. No other brain regions showed significant correlations with MMI score. The precise function of the ACC is not known, but it is believed to be involved in motivation and emotions.

There was a significant association between extroversion and higher MMI score.

After controlling for extroversion and other personality traits, there was still a significant association between higher MMI and lower grey matter density in the ACC part of the brain.

 

How did the researchers interpret the results?

The researchers concluded that “individuals who engaged in more media multitasking activity had smaller grey matter volumes in the ACC”. They say that “this could possibly explain the poorer cognitive control performance and negative socioeconomic outcomes associated with increased media multitasking” seen in other studies.

 

Conclusion

This cross-sectional study finds an association between higher media multitasking and a smaller volume of grey matter in the ACC, a region of the brain believed to be involved in human motivation and emotions.

Despite the apparent link, a key limitation of the study is that, being cross-sectional, its assessment of brain size and structure has only provided a single snapshot in time, at the same time as assessing media use. We do not know whether there has actually been any change in the person’s brain size at all. The study cannot tell us whether using multimedia has caused this area to reduce in size, or conversely whether having this reduced ACC size influenced people’s use of more media forms at the same time.

Furthermore, motivation, emotions and ability to concentrate were not assessed in any of the participants, so it is unclear whether the observed differences in volume had any clinical relevance. The media makes reference to previous studies that suggested an association with poor attention, depression and anxiety, but this was not assessed in this study. It should also be noted that all the participants were educated to at least undergraduate degree level, implying a high level of cognitive control.

A further source of bias in the study population is that participants were only selected if they were familiar with computers and media technologies, so there was no control group of people who did not use as many types of media.

Another limitation of the study is that the media multitasking score is unlikely to be precise, as it was reliant on the participants accurately estimating the amount of time they spent using each media type per week, and how much time there was cross-over of activities.

Overall, while of interest, this study does not prove that using multiple forms of media causes the brain to shrink.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Multitasking makes your brain smaller: Grey matter shrinks if we do too much at once. Daily Mail, September 25 2014

Second screening 'may alter the brain and trigger emotional problems'. The Daily Telegraph, September 24 2014

Links To Science

Loh KK, Kanai R. Higher Media Multi-Tasking Activity Is Associated with Smaller Gray-Matter Density in the Anterior Cingulate Cortex. PLOS One. Published online September 24 2014

Categories: NHS Choices

Benefits of statins 'outweigh diabetes risk'

NHS Choices - Behind the Headlines - Wed, 24/09/2014 - 12:45

“Statins increase risk of diabetes, but benefits are still worth it, say experts,” The Guardian reports.

A large study found the medication led to a modest increase in weight and subsequent diabetes risk. The authors report that these risks were more than offset by the reduction in cardiovascular disease, although those offsetting results were not provided in the study.

The study, which involved nearly 130,000 people, found that statin use (statins are prescribed to lower cholesterol levels) increases the risk of type 2 diabetes by 12% and is associated with weight gain of around a quarter of a kilo (half a pound) over four years.

It found indirect evidence that the protein statins target to reduce cholesterol could be at least partly responsible for the effect on type 2 diabetes as well. This evidence was based on looking at the effect of natural genetic variations that affect the protein, and not on a direct analysis of the effect of statins.

Importantly, the authors themselves note that this “should not alter present guidance on prescription of statins for prevention of [CVD]”. They do suggest that lifestyle changes, such as exercise, should be emphasised as still being an important part of heart disease prevention in people who are taking statins. This seems reasonable, and it is likely to be part of what doctors already recommend. 

 

Where did the story come from?

The study was carried out by researchers from University College London, Glasgow University, and a large number of international universities and institutes. It was funded by the Medical Research Council, the National Institutes of Health, the British Heart Foundation, the Wellcome Trust, the National Institute on Aging, Diabetes UK and several other European grants.

The study was published in the peer-reviewed medical journal The Lancet on an open access basis, so it is free to read online (PDF, 1.2Mb).

The media focused on the part of this study that looked at the effect of statins on weight change and risk of type 2 diabetes. However, it didn’t really focus on the main aim of this research, which was to look at how statins might have an effect on these outcomes, although this is understandable, as this information is not likely to be of interest to the average reader.

Refreshingly, all of the media sources that reported on the study resisted the temptation to engage in fear mongering, and were careful to stress that the benefits of statins outweighed any risks.

 

What kind of research was this?

The current study aimed to investigate how statins increase the risk of type 2 diabetes. The researchers had carried out a previous statistical pooling (meta-analysis) of data from randomised controlled trials (RCTs) and found that statins increased the risk of type 2 diabetes compared to placebo or no statins. One part of the current study added new studies to this meta-analysis, to get a more up-to-date estimate of the effect, and to look at statins’ effect on bodyweight as well.

Statins lower cholesterol by reducing the activity of a protein called 3-hydroxy-3-methylglutaryl-CoA reductase (HMGCR). The main part of this study carried out a new meta-analysis of genetic studies, to look at whether this protein might also be related to the effect of statins on diabetes risk.

Meta-analyses are a way to pool lots of data from different studies together. It helps researchers to identify small effects that individual studies may not be able to detect.

However, the benefits of statins in reducing cardiovascular disease such as heart attack and stroke are believed to outweigh this risk, even for people with type 2 diabetes.

 

What did the research involve?

The original meta-analysis looking at the effect of statins on type 2 diabetes had included RCTs of at least 1,000 people, followed up for one year or more. This meta-analysis had not looked at the effect of statins on weight change. The researchers contacted the investigators from 20 of the trials to provide data on changes in bodyweight during the follow-up. They then analysed the effect on weight gain of statins compared to placebo (“dummy” pills with no active ingredient) or just usual treatment (with no statins or placebo pills). They also analysed the results without the participants who had a heart attack or stroke.

They also analysed the effect of statins on change in LDL cholesterol (sometimes called “bad” cholesterol), blood sugar and insulin concentrations, BMI, waist circumference and waist:hip ratio.

The main part of the study looked at how statins might have an effect on type 2 diabetes risk. Doing this is difficult, so the genetic meta-analysis took a novel approach. Statins reduce levels of LDL cholesterol by reducing the activity of the HMGCR protein. Rather than look directly at the effect of statins, the meta-analysis looked at whether people who have genetic variations which naturally reduce the function of HMGCR also have an increased risk of type 2 diabetes. Their thinking was that if this was the case, then the effect of statins on type 2 diabetes might at least partly be explained by its effect on HMGCR.

Their meta-analysis pooled data from studies which looked at whether these variations were linked to type 2 diabetes, and other outcomes such as weight.

The meta-analysis pooled observational population studies that assessed two genetic variations lying in the gene that encodes the HMGCR protein. People who have these variations tend to have lower LDL cholesterol. For the main analysis, they compared people with these variations to those without in terms of their total cholesterol, LDL cholesterol, non-HDL cholesterol, bodyweight, body mass index (BMI), waist and hip circumferences, waist:hip ratio, height, plasma glucose and plasma insulin.

 

What were the basic results?

Information was obtained on change in LDL cholesterol in 20 statin trials and bodyweight change for 15 of the 20 statin trials. There was no information available from these studies about the effect of statins on plasma glucose and insulin concentrations, BMI, waist circumference and waist:hip ratio.

Results for the 129,170 people from the randomised trials found that statins:

  • lowered LDL cholesterol after one year by 0.92 mmol/L (95% confidence interval (CI) 0.18–1.67)
  • increased bodyweight in all trials combined over a mean of 4.2 years (range 1.9–6.7) of follow-up by 0.24 kg (95% CI 0.10–0.38)
  • increased bodyweight compared to placebo or standard care by 0.33 kg (95% CI 0.24–0.42)
  • increased the risk of new-onset type 2 diabetes by 12% in all trials combined (Odds Ratio (OR) 1.12, 95% CI 1.06–1.18)
  • increased the risk of new-onset type 2 diabetes by 11% in placebo or standard care controlled trials (OR 1.11, 95% CI 1.03–1.20)

The researchers found that higher (intensive) doses of statins:

  • reduced bodyweight compared to moderate dose statins by 0.15 kg (95% CI –0.39 to 0.08; not statistically significant)
  • increased the risk of new-onset type 2 diabetes by 12% compared with moderate dose statins (OR 1.12, 95% CI 1.04–1.22)

Meta-analysis of up to 223,463 individuals from 43 studies in whom genetic data was available found that each copy of the main genetic variation in the HMGCR gene that they looked at was associated with:

  • lower cholesterol: 0.06 to 0.07 mmol/L
  • lower LDL cholesterol, total cholesterol and non-HDL cholesterol
  • 1.62% higher plasma insulin
  • 0.23% higher blood sugar (glucose) concentration
  • a 300g increase in bodyweight and 0.11 point increase in BMI
  • a slightly greater waist circumference of 0.32cm and hip circumference of 0.21cm
  • a 2% higher risk of type 2 diabetes that was almost statistically significant (OR 1.02, 95% CI 1.00 to 1.05)

They found similar results for the second genetic variation they looked at.

 

How did the researchers interpret the results?

The researchers concluded that “the increased risk of type 2 diabetes noted with statins is at least partially explained by HMGCR inhibition”. Importantly, they say that this “should not alter present guidance on prescription of statins for prevention of CVD”. Despite this, they say that their findings “suggest lifestyle interventions such as bodyweight optimisation, healthy diet and adequate physical activity should be emphasised as important adjuncts to prevention of [heart disease] with statin treatment to attenuate risks of type 2 diabetes.”

 

Conclusion

The results of these updated meta-analyses indicate that statin use is associated with a 12% increase in risk of type 2 diabetes and also weight gain of half a pound over the course of four years. This confirms the findings of the previous meta-analysis of the effect on diabetes, and adds new findings for weight.
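
To get a feel for what a 12% relative increase could mean in absolute terms, treat the odds ratio as an approximate relative risk (reasonable for an uncommon outcome) and take a purely illustrative baseline – our assumption, not a figure from the study. If around 4 in 100 trial participants would have developed type 2 diabetes over four years anyway, then:

\[ 4\% \times 1.12 \approx 4.5\% \]

That is, on this assumed baseline, roughly one extra case of diabetes for every 200 people taking statins over four years.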

The main meta-analyses in this study attempted to address how statins might have this effect. They found that people who have genetic variations in the gene encoding the HMGCR protein targeted by statins have lower LDL (bad) cholesterol, but also increased levels of insulin, blood sugar, bodyweight and BMI, and a slightly increased risk of diabetes. The researchers conclude that the effect of statins on HMGCR could therefore be at least part of the cause of the increased risk of type 2 diabetes seen with statins.

While the results support this theory, this study cannot directly prove it. The genetic variations were used as a “mimic” or “proxy” of the effect of statins, and the study populations in this analysis had not taken statins. Also, the exact effect of the genetic variations on the HMGCR protein needs to be looked into further, as they are not in the part of the gene that actually contains the instructions for making the protein.

Drugs can have an effect on the body in more than one way, and statins may also have other effects which could account for the weight gain or increased risk of type 2 diabetes. It is likely that further studies will be carried out to test the theory arising from this research.

If you are taking statins and are worried about your diabetes risk, then taking steps to achieve or maintain a healthy weight, such as taking regular exercise and eating a healthy diet, should help. It will also have the added benefit of reducing your CVD risk; win-win!

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Statins increase risk of diabetes but benefits are still worth it, say experts. The Guardian, September 24 2014

Statins increase weight and blood sugar and raise diabetes risk, study finds. The Daily Telegraph, September 23 2014

Statin users more at risk of piling on the pounds: Scientists warn millions of users to do more exercise to counter side effect. Mail Online, September 24 2014

Benefits of Statins 'greatly outweigh' small risks say experts. Daily Express, September 24 2014

Benefits of taking statins outweigh the diabetes risks, major new study finds. The Independent, September 23 2014

Patients on statins are told to exercise more. The Times, September 24 2014

Links To Science

Swerdlow DI, Preiss D, Kuchenbaecker KB, et al. HMG-coenzyme A reductase inhibition, type 2 diabetes, and bodyweight: evidence from genetic analysis and randomised trials (PDF,1.2Mb). The Lancet. Published online September 24 2014

Categories: NHS Choices

Ebola outbreak to get worse, says WHO

NHS Choices - Behind the Headlines - Wed, 24/09/2014 - 11:40

“Ebola infections will treble to 20,000 by November,” BBC News reports, following the publication of an analysis of the current epidemic by the World Health Organization (WHO).

The report assesses what is known about the spread and devastating impact of the Ebola outbreak to date, while also predicting what may happen in the near future.

The study used data from five West African countries affected by the ongoing Ebola outbreak to estimate that around 70% of people infected (probable or confirmed cases) died from it up to September 14 2014. It states that the disease is likely to continue to spread, unless there are rapid improvements in disease control measures. Without this, it estimates that 20,000 people could be infected by the end of November – an almost quadrupling of the numbers affected up to mid-September (around 4,500). 

This report appears to be based on the pragmatic data available during the outbreak, meaning that it will be prone to some error. However, given the circumstances, it is unlikely that substantially better data will be available any time soon.

However, the analysis did offer a glimmer of hope. It discussed how new cases of the disease may be reduced within two to three weeks of introducing disease control measures, such as:

  • improvements in contact tracing
  • adequate case isolation
  • increased capacity for clinical management
  • safe burials
  • greater community engagement
  • support from international partners

 

Where did the story come from?

The study was carried out by members of the WHO Ebola Response Team and was funded by numerous sources, including: the Medical Research Council, the Bill and Melinda Gates Foundation, the Models of Infectious Disease Agent Study of the National Institute of General Medical Sciences (National Institutes of Health), the Health Protection Research Units of the National Institute for Health Research, European Union PREDEMICS consortium, Wellcome Trust and Fogarty International Center.

The study was published in The New England Journal of Medicine – a peer-reviewed medical journal – on an open access basis, so it is free to read online.

BBC News covered the research accurately.

The Mail Online and The Independent covered reports by both the WHO and CDC. Again, their reporting reflected the underlying research.

 

What kind of research was this?

This was a cross-sectional study assessing cases of Ebola virus disease (EVD, or Ebola for short) in five West African countries.

As of September 14 2014, a total of 4,507 confirmed and probable cases of Ebola, as well as 2,296 deaths from the virus, had been reported from five countries in West Africa: Guinea, Liberia, Nigeria, Senegal and Sierra Leone.

Smaller Ebola outbreaks have happened before, but the current outbreak is far larger than all previous epidemics combined. This latest study aimed to gather information from the five countries most affected, to gain an insight into the severity of the outbreak and predict the future course of the epidemic.

 

What did the research involve?

By September 14 2014, a total of 4,507 probable and confirmed cases, as well as 2,296 deaths, from Ebola (Zaire species) had been reported to the WHO from five West African countries – Guinea, Liberia, Nigeria, Senegal and Sierra Leone. The latest WHO report analysed a detailed subset of data on 3,343 confirmed and 667 probable Ebola cases from these countries.

Ebola outbreak data was collected during surveillance and response activities for Ebola in the respective countries during the outbreak.

Clinical and demographic data were collected from probable and confirmed cases using a standard Ebola case investigation form. Additional information on the outbreak was gathered from informal case reports, by data from diagnostic laboratories and from burial records. The data recorded for each Ebola case included the district of residence, the district in which the disease was reported, the patient’s age, sex, signs and symptoms, the date of symptom onset and of case detection, the name of the hospital, the date of hospitalisation, and the date of death or discharge.

The analysis focused on describing epidemiological characteristics of the outbreak using the individual confirmed and probable case records for each country. Results relating to suspected cases were relegated to an appendix, as they were less reliable.

 

What were the basic results?

The main characteristics of the Ebola outbreak are:

  • The majority of patients are 15 to 44 years of age (49.9% male).
  • The estimated chance of dying from Ebola is 70.8% (95% confidence interval [CI], 69 to 73) among persons with known infection. This was very similar across the different countries.
  • The average delay between being infected with the Ebola virus and displaying symptoms is 11.4 days. The course of infection, including signs and symptoms, is similar to that reported in previous Ebola outbreaks.
  • The estimated current reproduction numbers are: 1.81 (95% CI, 1.60 to 2.03) for Guinea, 1.51 (95% CI, 1.41 to 1.60) for Liberia and 1.38 (95% CI, 1.27 to 1.51) for Sierra Leone. The reproduction number is the number of new cases one existing case generates over the time they are infected with the virus. For example, the Guinea rate of 1.81 means that, on average, every person with Ebola infects just under 2 new people with the disease. Any reproduction number greater than 1 indicates the disease is spreading in a population, with case numbers growing exponentially; the higher the number, the faster the spread. A reproduction number of 2, for example, would mean case numbers double with each generation of infection (1 person infects 2; those 2 infect 4; the 4 infect 8, and so on). A worked example using these figures is given after this list.
  • The corresponding doubling times – the time it takes for the disease incidence to double – were 15.7 days (95% CI, 12.9 to 20.3) for Guinea, 23.6 days (95% CI, 20.2 to 28.2) for Liberia and 30.2 days (95% CI, 23.6 to 42.3) for Sierra Leone.
  • On the basis of the initial periods of exponential growth, the estimated basic reproduction numbers for the future are: 1.71 (95% CI, 1.44 to 2.01) for Guinea, 1.83 (95% CI, 1.72 to 1.94) for Liberia and 2.02 (95% CI, 1.79 to 2.26) for Sierra Leone.
  • Assuming no change in the control measures for this epidemic, by November 2 2014, the cumulative reported numbers of confirmed and probable cases are predicted to be 5,740 in Guinea, 9,890 in Liberia and 5,000 in Sierra Leone – exceeding 20,000 in total.
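
As a rough back-of-the-envelope check on that prediction (our arithmetic, not the report's): if cumulative case numbers grow exponentially with doubling time \(T_d\), then after \(t\) days the count is multiplied by \(2^{t/T_d}\). Taking the roughly 4,500 cases reported by September 14, the mid-range doubling time of 23.6 days (Liberia's), and the roughly 49 days to November 2 – a simplification, since each country has its own doubling time – gives:

\[ 4{,}500 \times 2^{49/23.6} \approx 4{,}500 \times 4.2 \approx 19{,}000 \]

which is broadly in line with the report's prediction of more than 20,000 cumulative cases.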

 

How did the researchers interpret the results?

The Ebola Response Team were clear in their conclusions, saying their findings “indicate that without drastic improvements in control measures, the numbers of cases of and deaths from EVD [Ebola virus disease] are expected to continue increasing from hundreds to thousands per week in the coming months.”

 

Conclusion

This latest WHO study used data from five West African countries affected by the ongoing Ebola outbreak to estimate that around 70% of people infected (probable or confirmed cases) died from it up to September 14 2014. They found the disease is spreading, and is likely to continue spreading unless there are improvements in disease control measures. This means that if the status quo is maintained, they predict that the outbreak will get worse, rather than better.

This report appears to be based on the pragmatic data available during the outbreak. Such data is always prone to some error, as record keeping and case detection are not 100% accurate, particularly in resource-poor countries or districts. The WHO team thinks their estimates underestimate the true size of the outbreak, as not all cases will have been detected by their methods, and case records were often incomplete.

One way the WHO investigators got around this was to focus their analysis on the confirmed or probable cases of Ebola. They placed much less emphasis on the more uncertain “suspected cases”. Hence, the data can be viewed as a broadly useful estimate of the situation. It is not precise but, given the circumstances, it is unlikely that significantly better information will be available any time soon.

The team found that the infectiousness and fatality rate of this Ebola outbreak was similar to previous smaller outbreaks. They thought this outbreak was much larger and more serious because the populations affected were different – for example, the populations of Guinea, Liberia and Sierra Leone are highly interconnected. The report said there was “much cross-border traffic at the [Ebola outbreak] epicentre and relatively easy connections by road between rural towns and villages, and between densely populated national capitals. The large intermixing population has facilitated the spread of infection”.

However, they said the large epidemic in these countries was not inevitable. They explained how in Nigeria, including in densely populated large cities such as Lagos, the disease was contained, possibly due to the speed of implementing rigorous control measures.

There was, however, a glimmer of hope in the otherwise worrying report. It discussed how, based on previous outbreaks, new cases of the disease can be reduced within two to three weeks of introducing disease control measures.

The report called for a swift improvement in current control measures to address this problem, specifically:

  • improvements in contact tracing
  • adequate case isolation
  • increased capacity for clinical management
  • safe burials
  • greater community engagement
  • support from international partners

Want to help, but don’t know how? A donation to one of the medical charities that are helping to combat the spread of Ebola could help. A quick online search for "Ebola charities" will bring up a range of deserving causes for you to choose from.

Analysis by Bazian. Edited by NHS Choices. Follow Behind the Headlines on Twitter. Join the Healthy Evidence forum.

Links To The Headlines

Ebola death rates 70% - WHO study. BBC News, September 23 2014

Ebola outbreak: Experts warn cases could number one million by January as 'window closes' to stop disease becoming endemic. The Independent, September 23 2014

Experts warn Ebola could infect 1.4 million by January in just two African nations. Mail Online, September 23 2014

Links To Science

WHO Ebola Response Team. Ebola Virus Disease in West Africa — The First 9 Months of the Epidemic and Forward Projections. The New England Journal of Medicine. Published online September 23 2014

Meltzer MI, Atkins CY, Knust B, et al. Estimating the Future Number of Cases in the Ebola Epidemic — Liberia and Sierra Leone, 2014–2015. Morbidity and Mortality Weekly Report. Published online September 23 2014

Categories: NHS Choices
