Blog

Tackling childhood obesity in the UK

by Sandra Evans - 20 Jun 2018

With the highest rate of obesity in western Europe, the UK must take action to promote health and support the wellbeing of the population.

The UK is the most obese country in western Europe, and the 11th most obese in the world.

The Organisation for Economic Co-operation and Development (OECD) Health at a Glance report shows that, in 2015, 26.9% of the UK population were obese – a 92% increase since the 1990s. Obesity is one of the main drivers behind cancer, heart attacks, stroke and diabetes. Rising obesity rates – particularly in childhood – are a significant threat to the future health of the population, not to mention the cost of healthcare.

Politicians are making efforts to tackle the obesity crisis. Mayor of London Sadiq Khan published his draft London Food Strategy in May 2018. It promises more radical change than its national equivalent, Prime Minister Theresa May’s Childhood Obesity action plan.

The London Food Strategy

The draft London Food Strategy outlines the Mayor’s ambitious vision for good food in the capital. Developed in partnership with the London Food Board, the strategy outlines actions under six priority areas:

  • Good food at home – and reducing food insecurity
  • Good food shopping and eating out – a healthier environment
  • Good food in public institutions and community settings – better food procurement
  • Good food for maternity, early years, education and health – supporting healthier habits
  • Good food growing, community gardens and urban farming – increasing sustainable food growing
  • Good food for the environment – making the food system work better.

Childhood obesity in numbers

The numbers are shocking, with more than 1 in 5 children in reception (aged 4–5) and more than 1 in 3 children in year 6 (aged 10–11) measured as obese or overweight in 2015/16. The same study found that children from the most deprived areas were twice as likely to be obese as those in the least deprived areas, and that Black or Black British children were more likely to be obese or overweight than children of any other ethnicity.

Nearly half of parents of obese children and 80% of parents of overweight children thought their child was a healthy weight. Studies have shown that parents – particularly mothers – do not classify their children as overweight according to commonly used clinical criteria. This highlights huge health literacy gaps in the general population.

Making positive change

It seems that the combination of high-calorie foods being heavily promoted via almost every form of media, and a low awareness of what ‘healthy weight’ actually means, is dangerous for the health of children. This is a major concern to both present and future society, as obese and overweight children and adolescents are more likely to die young and experience poor health in the form of diabetes, heart disease, stroke, hypertension, asthma and polycystic ovary syndrome.

Putting the profits of big business ahead of the long-term health of children is both unethical and economically short-sighted. Thankfully, this bold, positive move from the Mayor of London has forced the Prime Minister to rethink the Childhood Obesity plan. She is now expected to revive some previously rejected proposals in a new strategy, likely to be released shortly. This could include restrictions on ‘buy one, get one free’ deals on unhealthy products and a 9pm watershed on advertising for foods high in sugar, salt or fat.

More needs to be done

The UK cannot expect a slimmer population in 10 or 20 years unless it develops the infrastructure to protect young people from food and drink industry marketing, and to support them in making the best health decisions. Policymakers need to look at all factors that drive obesity in order to develop effective, multifactorial policy that nurtures health and empowers people.

How can data sharing help in responding to a health crisis?

by Katharina Beyer - 15 Jun 2018

There is great potential in data sharing, but its effectiveness may rely on standardised processes and data governance agreements.

In the current information age, public health data sharing has become increasingly important – and essential for developing policy. Data are used for numerous reasons: to monitor the health status of populations, to target interventions, and to find solutions to increase the overall health of individuals. Governments use such data to allocate resources and to better understand and prioritise healthcare services to fit their populations’ needs.

At a global level, health data are used to understand the global burden of disease, to measure the progress of health and development agencies, and to support research and technology development. In 2003, successful health data sharing helped to reduce the spread of severe acute respiratory syndrome (SARS). Public health data sharing also helped to control the 2009 influenza A (H1N1) pandemic. The benefits of data sharing are widely recognised: it can improve transparency, cooperation between sectors, cost efficiency and innovation.

There is no international standard for data sharing

Despite the potential benefits of data sharing – and an apparent global commitment to the exchange and use of health data – it seems to face challenges in practice. There is no global, comprehensive structural framework or operational guidance on how to use and share health data.

Instead, there are many perceptions – and misperceptions – of existing laws. Lack of clarity regarding the application of data protection to various data-sharing arrangements creates both potential and real barriers in accessing, using and sharing public health data.

What are the barriers to data sharing?

Technical barriers may arise from challenges with data collection or preservation. Language can also be a barrier, as health data are almost always collected in the local language.

Motivational barriers relate to the personal or institutional motivation to share data. They may result from an absence of incentives to prioritise data sharing. High opportunity costs (the cost of choosing one option over another, and thus missing the potential benefit of the option not chosen) can also limit the willingness to share data.

Economic barriers create further challenges in accessing, using and sharing health data. A country might worry, for example, that potential health threats revealed through its sharing of health data could have a negative impact on tourism or trade. A lack of resources – whether human or technical – can also stand in the way.

Political barriers, such as a lack of trust between stakeholders, often stem from concerns that data might be misused or misinterpreted; restrictive policies create additional obstacles. Ethical barriers involve conflicts between stakeholders’ values and moral principles.

What needs to change?

In practice, the exchange of health data relies on local, national and regional solutions. The need to share health data often arises as a response to a specific disease, which creates narrow policies and barriers.

Ideally, data sharing should be based on trust and transparency between all actors involved. This is built through dialogue and communication. To achieve a positive response, the values and benefits of data sharing must be clearly understood. Incentives should be created to increase willingness to share data.

Data sharing can be crucial in responding to health crises around the world. But for it to be effective, there must be some level of standardisation. Only by including all the actors involved in the process is it possible to secure buy-in, build a legal framework and establish data governance agreements – backed by political commitment – that support the exchange of health data. Standardised sharing processes, system development and practical tools for exchanging health data will be fundamental; the sketch below gives a flavour of what this can mean in practice.
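As a purely illustrative example, here is a minimal sketch, assuming a Python environment with the jsonschema package, of what ‘standardised sharing’ could look like at the most basic technical level: every partner validates outgoing records against a schema that all parties have agreed in advance, so data arrive in a predictable form. The schema, field names and example record are hypothetical, not drawn from any real data governance agreement.

```python
# Hypothetical sketch: validate a surveillance record against a shared,
# pre-agreed schema before it is exchanged. Schema and field names are
# illustrative assumptions only.
from jsonschema import validate, ValidationError

SHARED_SCHEMA = {
    "type": "object",
    "properties": {
        "reporting_country": {"type": "string"},
        "disease_code": {"type": "string"},   # an agreed coding system would be specified here
        "week": {"type": "string"},           # e.g. ISO week such as "2018-W24"
        "confirmed_cases": {"type": "integer", "minimum": 0},
    },
    "required": ["reporting_country", "disease_code", "week", "confirmed_cases"],
}

# A toy outgoing record from one partner in the agreement.
record = {
    "reporting_country": "UK",
    "disease_code": "J10",
    "week": "2018-W24",
    "confirmed_cases": 132,
}

try:
    validate(instance=record, schema=SHARED_SCHEMA)
    print("Record conforms to the shared schema and can be exchanged")
except ValidationError as err:
    print(f"Record rejected: {err.message}")
```

In reality, of course, the hard part is agreeing the governance framework behind the schema, not the validation code itself.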

Perspective matters

by Suzanne Wait - 10 May 2018

When considering information and evidence, it’s vital to consider both the wider context and the specific perspective.

I recently attended a parents’ evening at my children’s school. I usually dread these evenings – the inevitable biting of my tongue and counting to ten under my breath as my daughter’s teacher tells me with utmost earnestness that she is worried that my daughter does not have a good grasp of cuboids. (What are cuboids, you may wonder…) This time, however, was different. It helped to restore my faith in the education system, and made me think about how, as adults, we often forget the fundamental things we learnt in childhood.

My daughter’s history teacher showed us three portraits of Elizabeth I, painted at different times in her life, and two letters written by German and French dignitaries reporting back to their home countries about the Virgin Queen and whether she was holding on to power. The teacher told us that she had asked my daughter and her nine-year-old classmates to explain the differences between the two letters’ accounts, and to contrast these with the portraits. Why were the accounts different? What factors could explain the different accounts of how powerful the Queen was?

The point of the exercise was to explain two things to the children:

  • One: perspective matters. A French or German dignitary may have wanted to convey to his superiors that the Queen was powerful, or losing power, to serve his own national or personal interests. The dignitaries’ personal ambitions may also have coloured their reports, as whatever they reported could have worked in their favour in some way. So perspective is crucial.
  • Two: always think of confounding factors. The three portraits of Elizabeth I were painted at different periods of her life – and if her age (written in small text at the bottom of each picture) was not reported in the dignitaries’ accounts, then a very different portrayal of her physical beauty, health and wellbeing may have been given. Confounding factors are typically considered in the context of clinical trial data: is the observed difference between treatment arms really due to the different treatments given to patients, or is it hiding the confounding effect of another factor, such as a different age profile or case mix between the two groups being treated? (A brief illustrative sketch follows this list.)
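To make the confounding point concrete, here is a minimal, hypothetical simulation in Python (assuming only NumPy, and not taken from any real trial) in which two ‘treatment arms’ appear to produce different outcomes purely because they enrol patients with different age profiles; comparing patients of similar ages makes the apparent effect shrink.

```python
# Hypothetical sketch of confounding: the arms differ in age profile,
# not in treatment effect, yet a naive comparison suggests a difference.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Arm A happens to enrol older patients than arm B (age is the confounder).
age_a = rng.normal(70, 5, n)
age_b = rng.normal(55, 5, n)

# Outcome worsens with age; neither treatment has any real effect here.
outcome_a = 100 - 0.8 * age_a + rng.normal(0, 5, n)
outcome_b = 100 - 0.8 * age_b + rng.normal(0, 5, n)

print(f"Naive comparison: arm B looks better by "
      f"{outcome_b.mean() - outcome_a.mean():.1f} points")

# Restrict both arms to a common age band to compare like with like.
in_band_a = (age_a >= 60) & (age_a < 65)
in_band_b = (age_b >= 60) & (age_b < 65)
print(f"Within ages 60-65: the difference shrinks to "
      f"{outcome_b[in_band_b].mean() - outcome_a[in_band_a].mean():.1f} points")
```

The same logic applies to the portraits: unless age is reported alongside each account, differences that are really about age can masquerade as differences in health, power or beauty.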

Clarity in a character-limited culture

Confounding factors and perspective have huge relevance to health policy. In much of the work we do, gaps in individuals’ health literacy may account for misinterpretation of the benefits and risks of given interventions. Statistics – particularly economic statistics – are often reported out of context. They may hit the headlines, but there is little understanding as to where they came from or whether they can be extrapolated to other contexts.

At HPP, we pride ourselves on accuracy, concision and relevance – all of which depend on our ability to concisely convey clear facts, supported by indisputable evidence (which we have spent a long time turning upside down), in such a way that they will not be prone to misinterpretation or misunderstanding. So we would never say, ‘Cancer is more important to policymakers than heart disease’; rather, we may say, ‘A greater proportion of current health policies [precise fact] in the UK [specific context] are focused on cancer than heart disease.’

Why is this important? Because we are living in a culture of soundbites, character limits and skim‑reading – and as much as we might disparage the individuals or cultural norms that propagate this culture, we are all part of it. We measure the success of an event not by who attends it, but by the number of tweets it engenders. We measure the relevance of a campaign not by the meaningfulness of its cause, but by the comms value of its message. That is not necessarily wrong, and one would be foolish to refuse to tick the box against these metrics – but we mustn’t forget that every fact should be interrogated for its context, accuracy and relevance to other situations.

Getting some perspective

Peer-reviewed publications usually look on the use of ‘we’ with disfavour – and there is a reason for this, as it compromises objectivity. The problem in policy, of course, is that without saying ‘we’, there is a risk of floating in a world of abstract policy recommendations without thinking about who should be held accountable for translating laudable goals into concrete actions.

So I will continue to drum home to my nine-year-old – as well as to my extremely competent, well-educated and highly literate colleagues – that perspective and context matter, and we should never succumb to laziness in neglecting to add these notes of precision and accuracy to anything we write. This is what differentiates good research, and good writing, from that which isn’t up to scratch. If we want to provide useful insights to decision makers in the hope of changing policy, we owe it to ourselves, and to them, to report findings to the highest standard.

Context. Confounding factors. Perspective. Far more important lessons for life than cuboids.

For character-limited updates from HPP, follow us on Twitter.

A community-based approach is key to truly person-centred care

by Saara Ahmed - 20 Mar 2018

As our understanding of health has developed over the years, the importance of wider determinants of health in the community has become clear.

Person-centred care: the bigger picture

The role of the patient has changed greatly over the years. Patients were once mostly ignorant of their ailments and heavily dependent on the advice of a doctor; they were considered passive in the doctor–patient relationship and the healthcare system. Now, however, patient autonomy and the importance of the patient perspective has changed this dynamic: patients are more empowered, able to make informed decisions about their care, and can take on a more active role.

Alongside this change, our understanding of health has transformed, from aiming to balance the four humours to a realisation that there are wider determinants of health. It seems natural, therefore, that to support person-centred care we must address these wider determinants and acknowledge that healthcare does not comprise only hospitals, healthcare professionals, patients and governments. The role of communities is equally important.

The community and its role in health

The community is made up of networks that provide individuals with the social capital that is essential for health. Social capital has been defined as those resources which accrue to an individual when they are in a network of relationships based on mutual recognition and acquaintance. Such networks are depicted in the health determinants model, originally established by Dahlgren and Whitehead in 1991. The model demonstrates the layers that influence one’s health, from general socioeconomic, cultural and environmental conditions, through living/working conditions and community networks, to individual lifestyle factors and elements such as age, sex and genetic makeup. The resources that social capital produces for health include the building of resilience, improvement in mental health and wellbeing, and encouragement of health-seeking behaviours.

Engaging with the community can empower individuals to seek help and change behaviours. Ultimately, they gain more control of their own health. In a strong community, individuals can manage chronic illness more easily through the support of others. This may also encourage patients to communicate more confidently about their ailments with healthcare professionals, as they feel they have the support of the community behind them.

Improving health outcomes is, naturally, a goal of healthcare systems, which are constantly experiencing a push and pull between containing costs and improving quality of care. However, a study carried out by the Robert Wood Johnson Foundation supports the view that social determinants of health are of equal importance to clinical care. It reveals that only 20% of the factors that influence a person’s health relate to access to and quality of healthcare, with the other 80% determined by socioeconomic, environmental or behavioural factors. There are many examples of this: homelessness drives up emergency-room visits, poverty is associated with an increased prevalence of asthma, and urban areas devoid of affordable fresh foods have an increased incidence of type 2 diabetes.

Adopting a community approach to healthcare will likely see a shift from traditional fee-for-service models towards pay-for-performance models, encouraging a focus on value, not volume, and on prevention, not cure. There is an economic argument, then, for keeping people healthy by addressing social determinants of health. There is also the sustainability perspective, with one study utilising cluster analysis to predict the sustainability and positive changes brought about by community health programmes after implementation. And then there are moral arguments for strengthening the social fabric of our society, reminding clinicians, patients, families and communities what we owe one another, and how we respect and take care of each other.

The challenge and the future

Community health services are widely recognised as a key component of patient-centred care. Organisations are willing to collaborate, and health systems are keen to engage in prevention, but they are let down by a lack of rigorous, comparable and universal indicators with which to measure the performance of such services. The datasets available for comparison offer only limited information on the quality of community services or on social strengthening.

National policy should focus on patient outcomes, a community-based approach to health, and comparable, robust metrics to evaluate performance. Collaborations between different services in the health system, community outreach projects, patient organisations and local councils will need to focus on improving social capital in a sustainable and efficient way.

 

To read more about this topic:

Read HPP’s 2015 report on The state of play in person-centred care.

Teenage pregnancy: will public health cuts negate previous policy success?

by Sandra Evans - 16 Feb 2018

Although the UK’s teenage pregnancy rate has fallen, it remains the highest in Western Europe – and cuts to services may hinder efforts to ameliorate this.

Teenage pregnancy is a significant public health concern in the UK, despite progress in recent years. The UK has the highest rate of teenage pregnancy (defined as pregnancy before 18 years of age) in Western Europe.

Policies and politics

The UK has experienced a notable reduction in teenage pregnancy since 1990, when the rate peaked at 48 conceptions per 1000 women aged 18 or under. By 2015, it had more than halved, to 21 conceptions per 1000.

High teenage pregnancy rates were recognised as a health and social care policy priority in 1993, when the UK government introduced the ‘Back to Basics’ campaign. This campaign, which called for a return to ‘traditional morality’, was generally seen as a failure in reducing teen pregnancy rates – and was accompanied by a succession of scandals afflicting John Major’s government.

During the 1980s and 1990s, young pregnant women were often demonised as lower-class women who were using pregnancy as a way to access social housing. The alienation and stigmatisation of a whole social stratum was probably counterproductive to the objectives of the policy.

In 1997, the Labour government under Tony Blair highlighted teenage pregnancy as a policy interest. The Social Exclusion Unit was commissioned to create the Teenage Pregnancy Strategy (TPS) in 1999. The ten-year strategy, which continued John Major’s promise to halve teenage pregnancy rates, was the first full strategy for sexual health in the UK.

Addressing the right factors

In 2004, social scientist Dr Lisa Arai critiqued the research underpinning the TPS. She grouped the risk factors identified by the TPS into three themes – structural, technical/educational and sociocultural (see Table 1).

Table 1. Categorising the risk factors associated with teenage pregnancy

Structural (‘Low expectations’)
  • Socioeconomic status
  • Educational attainment
  • Employment

Technical/educational (‘Ignorance’)
  • Sexual health knowledge and education
  • Information/use of contraception or abortion
  • Knowledge about parenthood

Sociocultural (‘Mixed messages’)
  • Wider messages about sex and reproduction
  • Community messages about sex and reproduction
  • Peer group messages about sex and reproduction

 

The TPS primarily addressed technical/educational factors through increasing contraception availability and promoting sex and relationship education. However, a qualitative study published in Young Mothers? by Ann Phoenix found that most young women in her study had not fallen pregnant because of ignorance about contraception, but because they had long anticipated motherhood to be the most fulfilling part of their lives.

These women were usually members of communities where teenage pregnancy was common among friends and family, suggesting that they were already familiar with the lifestyle of a young parent because it was part of their culture. This calls into question any simple, linear causal link between educational ignorance and teenage pregnancy rates, although ignorance may still be one factor among several in a multifactorial causation.

National sociocultural structures and norms are difficult to measure and evaluate, which may explain the TPS’s focus on the more immediately visible technical factors. Intervening in only one category of risk factors will reduce teenage pregnancy only up to a point, because technical factors are just one part of a multifactorial causation. Effective policy should aim to influence all amenable risk factors.

Where are we now?

The UK’s teenage pregnancy rates have continued to fall since the 1990s, suggesting the TPS was successful in addressing at least some of the factors associated with teenage pregnancy.

However, since the TPS came to an end in 2010, public health spending has been drastically cut. This has brought many preventive strategies to a close, which is often felt most by the poorest, who are at highest risk of teenage pregnancy.

Funding for sexual health services was transferred from the NHS to local authority budgets in 2013/14, and cuts continue. Of course, this affects more than teen pregnancy rates. Other sexual health-related issues, predominantly sexually transmitted infections (STIs), are also impacted. This comes at a time when people are accessing sexual health services more than ever before, record levels of STIs are being diagnosed, difficult-to-treat antibiotic-resistant strains of infection are being detected, and the need for quality contraception and HIV testing is more important than ever.

What does the future hold?

Without the financial provision for preventive care, and with cuts to social care leading to more people living in poverty, the UK can expect to see a rise in various public health challenges. Inevitably, this will incur a cost to society – both financially and socially.

What will the future bring for health and healthcare?

by Katharina Beyer - 01 Feb 2018

How can the world deliver affordable and high-quality access to healthcare in future, in the face of rising costs, an ageing population and an increase in non-communicable diseases (NCDs)?

Our current healthcare systems are not sustainable. Costs are high, meaning affordability of healthcare is a problem across many systems and countries. The US spends by far the most, while Switzerland is the biggest spender in Europe. Other countries, such as Israel, manage to achieve similar or better results for far lower expenditure – partly because convenient access to family physicians is a cornerstone of Israel’s health service.

Data from the Organisation for Economic Co-operation and Development (OECD) show the variation in healthcare spending as a proportion of gross domestic product (GDP) across 35 countries (see figure below).

Graph: current expenditure on health as percentage of GDP

Source: OECD

There are ongoing discussions within European governments about the changes needed in healthcare systems. The media in Germany and the UK, in particular, frequently report on the countries’ methods of delivering care. These two countries – with the Bismarck system in Germany and the Beveridge system in the UK – have helped to shape healthcare systems all over Europe. For example, some eastern European countries have followed Germany’s approach, while countries like Portugal have echoed the UK. What the different systems have in common is that both direct and indirect healthcare costs are huge.

Who are the key stakeholders in reforming healthcare systems?

Patients, providers, payers and policymakers often have different opinions about what the healthcare system should look like. There are various cultural approaches among healthcare systems, and differences between emerging and developed economies. But regardless of the type of system, the common problem is sustainability. Aligning the ideas of key stakeholders and creating common goals will enable services to achieve change and aim for a more sustainable system.

What needs to be done?

Common goals must be formulated by all stakeholders involved. Performance and accountability need to be increased by introducing shared goals that respond to all stakeholders’ needs. The priority should be achieving high-value healthcare. Current structures tend to neglect the measurement of value: providers tend to measure only what they can directly control, rather than what matters for high-value care or good outcomes. A focus on measuring value would reward providers for efficiency and good outcomes, and increase accountability across healthcare.

Efficient health systems have certain structures in common, such as strong primary care and technology to contain costs. A robust primary care system is one reason for the successes of Israel’s healthcare system. Technology to contain costs can help to accelerate activity while increasing the efficiency and accuracy of big data analysis.

There is a need for real-world data and transparency, describing patient outcomes at different stages of disease. Data from clinical trials are necessary, as well as structured and longitudinal data.

It is difficult to say what 2018 might bring. But surely other countries will soon start to follow examples like that of the UK in discussing how to shape the future of health and healthcare.

Wine: the larger the glass, the more you drink

by Sandra Evans - 20 Dec 2017

A recent study shows wine glasses have grown substantially over the decades, with a public health impact as people consume more alcohol.

’Tis the season

The Christmas season is upon us. For many people, this means going out for Christmas parties, catching up with old friends and getting together with family. In doing so, we may raise a glass or two. This season, take a moment to turn your eye to those clinking wine glasses.

During my Master’s degree, I worked on a paper that has just been published in the BMJ. We found that the size of wine glasses in England has increased sevenfold over the past 300 years. This rise has been steepest in the last two decades, as wine consumption has risen.

This raises the question of whether downsizing wine glasses might reduce consumption – bringing clear health benefits.

Wine glasses through the years

Wine glasses from the early 17th century were far smaller than they are today. They were usually kept on the sideboard and brought by one’s butler when one wanted to make a toast. But as glass became less brittle, with the invention of lead crystal glass by George Ravenscroft in the late 17th century, glasses became more extravagant in their design. Wine glasses began to move to the table and, gradually, glass size started to increase.

This size increase of wine glasses parallels the explosion of the middle class associated with the industrial revolution. This new middle class created a huge wine market.

Size matters

While wine glass size was already gradually rising, there was a sharp increase in the late 20th century. This is probably due to demand from the US market, and the promotion of larger wine glasses by bars to boost profits. A recent study showed that larger wine glasses have been associated with increased sales.

Humans consume in units: one cup of tea, one chocolate bar or one glass of wine. People tend to drink a glass of wine regardless of its volume, because they regard one glass as a whole unit. If someone perceives a glass as mostly empty, they are less likely to feel satisfied after drinking it than they would if they drank the same volume from a full, smaller glass. This means people may be more likely to have a second or third glass when they are drinking ‘unsatisfying’ portions from a large glass.

Designing our environment

For years, marketing has been designing the environment around us, through product placement and design to increase our consumption – sometimes to the detriment of our health and wellbeing.

But this can work the other way around. Health promotion campaigns can use the same methods to design the environment around us and ‘nudge’ people towards healthier choices. We can use design to disincentivise ‘bad’ health choices, making ‘good’ health choices the reflex option.

If policymakers were to regulate wine glass sizes as part of licensing regulations, we might feel more satisfied after our small-but-full glass of wine at the pub. This could mean we order fewer glasses and, ultimately, drink less – a welcome boost to our health and wellbeing, particularly in the festive season.

Palliative care should not be a last resort

by Suzanne Wait - 06 Dec 2017

Palliative care exists to reduce suffering and improve patients’ quality of life. So why is it often unconsidered until other options have been exhausted?

Last month, I had the opportunity to speak on a panel on palliative care at the Economist’s War on Cancer event in London. The discussion powerfully reconfirmed something that has struck me time and again in the work we’ve done with patients on All.Can and other HPP projects: that a patient’s ability to cope with the physical, psychological and emotional effects of their illness is often not considered in their treatment.

What is palliative care?

Many people consider palliative care to be the care offered when ‘real’ – i.e. curative – treatment options have been exhausted. But what it really means is care focused on improving quality of life for patients with a serious illness, regardless of the stage of that illness.

The American Cancer Society is among the organisations advocating that palliative care be fully integrated into cancer care, to ensure patients receive relief from the effects of their treatment throughout the care journey, not just at the end. Studies have shown that the early introduction of palliative care for cancer patients can not only improve patients’ quality of life, but may also reduce their need for intensive care and prolong survival, while being cost-effective for healthcare systems.

There is evidence that patients are far more likely to seek palliative care services if their oncologist recommends it and makes a referral. So why isn’t palliative care more readily offered during the early stages of treatment?

Palliative care is too often associated with failure, death and lack of hope. Doctors are hesitant to raise the subject, and patients don’t want to hear about it. This can have devastating consequences. A study across several European countries found that in a third of cases, patients with advanced cancer did not receive pain control appropriate for their level of pain. It is shocking to think of patients enduring avoidable pain, simply because of an aversion to broaching a subject.

Psychosocial elements of care

Earlier this year, I contributed to a position paper by the European CanCer Organisation (ECCO) on access to innovation. One of the areas of innovation we found to be inadequately adopted in many cancer centres was a systematic psychosocial assessment for every cancer patient.

Patients consistently report that they feel abandoned after their ‘active’ phase of treatment and care. They are left without support to improve or maintain their quality of life, deal with long-term effects of treatment, or cope with the psychological aspects of living with cancer. Clearly, there is something wrong. There is an urgent need for greater follow-up care after ‘active’ treatment. Palliative care can play a key role here, as it focuses on improving patients’ quality of life.

Rethinking palliative care

During the War on Cancer session, we considered whether palliative care needs rebranding. Many health systems now call it ‘supportive care’ to reduce negative associations. But this is not just about a name change: we need a philosophical shift to embrace palliative care as an integral and essential component of quality cancer care. This reframing could create huge benefits for patients: less stress, less pain, more support, better quality of life and, as studies have shown, potentially longer survival.

With the excitement created by advances like immunotherapy, genetics and artificial intelligence, there is a risk that we forget to focus on addressing the basic tenets of cancer care – including palliative and end-of-life care. The entire discourse around ‘fighting’ cancer is problematic; the idea of an epic battle or quest inevitably frames death as a tragic failure. A relentless focus on survivorship may leave little room for the emotional, psychological and often practical realities of life with (or after) cancer, which are so important for patients and their families. This is an area in need of attention if we are really going to improve outcomes for cancer patients.

Survivorship is, of course, the outcome we all want. Innovation in cancer care is enormously exciting and, obviously, worth striving for. But the early and appropriate introduction of palliative care in the cancer pathway should be seen as an important innovation in its own right. We must never lose sight of the need to do everything possible to improve and protect patients’ quality of life, whatever the prognosis.

An extended version of this blog post was originally published on the All.Can website.

Software is eating the world… except in medicine

by Shannon Boldon - 30 Nov 2017

We are living in a golden age of technological invention, so why are healthcare systems seemingly reluctant to get on board with innovative software?

A week after attending the Economist War on Cancer event, I am still thinking about the discussions from the Digital Health & Cancer Care panel. Data and technology have transformed the way we do things in many industries. From Uber to Ocado to Airbnb, there are so many examples of radical game-changers for consumers. The words of Andaman7 CEO Vincent Keunen have never rung truer: ‘Software is eating the world – except in medicine.’

Innovative software in healthcare

But why is medicine not catching up? It is not for a lack of interesting, game-changing technology in healthcare, that’s for sure:

  • Babylon Health offers a service allowing people to book a GP appointment or have a video or phone consultation with a doctor 24/7, through a smartphone app.
  • IBM Watson Health’s artificial intelligence (AI) technology has shown 90% concordance with tumour board recommendations on breast cancer. Watson, used for early detection of melanoma, has been shown to give a 95% level of accuracy (compared with that achieved by dermatologists) in initial studies of detecting dangerous skin lesions.
  • Smartphone app Natural Cycles provides women with a certified contraceptive method that is non-hormonal and non-intrusive.

Despite these innovations, healthcare systems and practitioners seem reluctant to embrace the potential of new technologies.

Resisting change

Perhaps the reason for this is a fear of change, or ingrained professional attitudes that are resistant to a transfer of power. Or perhaps it’s a more extreme fear: that robots might, one day, replace doctors.

Among the speakers on the digital health panel at the War on Cancer event was Dusty Majumdar of IBM Watson Health, whose artificial intelligence sequencing technology has allowed doctors to sequence glioblastomas (a rare and aggressive form of brain tumour) in ten minutes, rather than seven days. Majumdar pointed out that fear of automation is misplaced, saying: ‘It is not a matter of machines replacing doctors: it is more that radiologists who use artificial intelligence will replace radiologists who do not use artificial intelligence.’

Meanwhile Neil Bacon, founder of patient feedback website iWantGreatCare, eloquently summarised his take on digital health: ‘These new technologies have side effects, like all drugs ever. But we must find them, and mitigate them. Do not let perfect be the enemy of good.’

Regardless of individual opinions, it is only a matter of time before technology changes the way we deliver, and engage with, healthcare. For better or for worse – and my guess is the former.

Speakers at the Economist War on Cancer event (L-R): moderator Vivek Muthu, Neil Bacon, Vincent Keunen and Dusty Majumdar

 

Antimicrobial resistance: is it too late to suppress the superbugs?

by Madeleine Murphy - 16 Nov 2017

World Antibiotic Awareness Week reminds us that we all have a role in solving one of the most pressing crises in global healthcare.

Imagine we have travelled through time, to a world where there are no organ transplants, no joint replacements, and no cosmetic surgical procedures. Even in the world’s richest countries, maternal and infant mortality rates skyrocket, and common infections claim thousands of lives every year.

You might assume that we’re in the past, before scientific endeavour brought about the medical procedures and treatments we now take for granted. But unfortunately not. This is, potentially, the future – a bleak world in which antimicrobial resistance has nullified many of healthcare’s pivotal achievements.

What is antimicrobial resistance?

Antimicrobial resistance occurs when microorganisms, such as bacteria, mutate and become resistant to antimicrobial drugs, such as antibiotics. Although it’s a natural evolutionary process, it is hugely accelerated through misuse of these drugs. But you didn’t need me to tell you that – you’ve heard all about it already. In fact, we’ve known about resistance to antibiotics for almost as long as we’ve known about antibiotics themselves.

In 1945, having just won a Nobel Prize for the discovery of penicillin, Alexander Fleming warned that it was ‘not difficult to make microbes resistant to penicillin in the laboratory’ and that ‘the same thing [had] happened in the body’. He correctly predicted that inappropriate use of antibiotics might lead to the spread of resistance among bacteria. But even Fleming probably didn’t foresee quite how inappropriate our use of antibiotics would become.

Misuse of a miracle

Antibiotics were once hailed as a ‘miracle cure’. As with most miracles, this was too good to be true. Excessive use of the drugs has hindered their effectiveness. In 2015, Nature reported that global antibiotic consumption grew by 30% between 2000 and 2010. Despite awareness of antimicrobial resistance, particularly concerning antibiotics, clinicians continue to prescribe these drugs even when they’re not needed. Sore throats, for example, are usually viral infections; only 10% will respond to antibiotics, and yet antibiotics are prescribed in as many as 60% of cases. The US Centers for Disease Control and Prevention (CDC) has reported that at least 30% of antibiotics prescribed in the US are unnecessary.

Overprescription has contributed to a rise in antibiotic resistance. As a BBC report points out: ‘Bacterial resistance is essentially a numbers game: the more humans try to kill bacteria with antibiotics, and the more different antibiotics they use, the more opportunities bacteria have to develop new genes to resist those antibiotics.’

Underuse is another part of the problem. In some countries, poor access to antimicrobial treatments may force patients to use substandard alternatives or take an incomplete course. While there is debate over whether antibiotics must be taken as an entire course, there is evidence that exposure to suboptimal doses makes bacteria more likely to develop resistance. To paraphrase Friedrich Nietzsche: what doesn’t kill you makes you stronger.

Of course, antibiotic use is not limited to healthcare. A huge culprit in the misuse of these drugs is the agriculture industry, where they have been used not only to prevent infection but also to promote animal growth. Although the EU banned antibiotics as growth promoters 10 years ago, other countries continue the practice. The US Food and Drug Administration only removed growth promotion from the indicated uses on drug labelling in 2017. In China, around 15,000 tons of antibiotics were used in livestock in 2010, and the country’s antibiotic consumption is projected to double by 2030.

The rise of antimicrobial resistance is compounded by the dearth of new drugs. Owing partly to a lack of incentives, research and development is slow; only two new antibiotic classes have been introduced in the past 40 years. Given the speed with which bacteria can develop resistance, it’s not hard to see why we are losing the arms race.

What would a world without antimicrobials look like?

The dystopia to which I alluded at the start of this post – and which forms the setting of a recent science-fiction comic book series – may seem extreme, but the reality is that we are hugely dependent on antimicrobial drugs. Antibiotics are essential for infection prevention in surgery, so it is likely that minor operations would be considered too risky without them. Additionally, anyone with a suppressed immune system – owing to chronic illness, cancer treatment, organ transplants or any other reason – could be at risk of death even from minor infections.

And it’s not just antibiotics. Antiviral medications are also coming up against resistance. There have been reports from various countries of drug-resistant influenza, malaria and HIV. This is a global problem, requiring a global solution – and time is running out.

How can we combat antimicrobial resistance?

Without buy-in from all stakeholders, we can’t realistically expect to win the fight against antimicrobial resistance. The World Health Organization is using this year’s World Antibiotic Awareness Week (13–19 November) to raise awareness of what we can all do to combat the crisis. Any solution must be viewed as a continuum, including infection prevention, incentives for research and development, improved surveillance and diagnostics, alternative treatments and widespread behaviour change. As emphasised in an editorial in The Lancet Global Health, everyone has a role to play, including patients, researchers, healthcare providers, farmers, veterinarians, the media and policymakers.

Only by taking responsibility for our part in this problem, and acting accordingly to solve it, will we see a brighter future. Let’s do all we can to ensure that the dystopian vision of a world without antimicrobials retains its rightful place – in fiction, not reality.