Health care systems


The scandal that has now led to the ‘resignation’ of the head of the US VA Health System matters to more than just the US and US veterans. The VA health system is the closest thing the US has to the UK’s NHS and to the health systems of many other countries where the state is the controlling force.

According to reports in the New York Times, expanded on in Forbes, three factors are relevant:

  1. shortage of physicians
  2. perverse incentives
  3. culture of dishonesty

Boiling this down to the critical factors relevant to systems outside the US leads to specific considerations for countries that try to control healthcare through greater state intervention:

  1. Physician shortages are caused in the main by health systems limiting access to medical schools (and indeed to other professions). There is ample evidence that labour-force forecasting is inaccurate, and given the highly specialised nature of healthcare, we really don’t know how many doctors, nurses, etc. we need; we only know that the current system of rationing is unlikely to produce sufficient supply. While the costs of training health professionals are high, the rewards are also high, and they accrue to individuals as well as to society. Why the public purse should subsidise this as much as it does, while also limiting access, needs to be rethought.
  2. Health systems use a variety of incentives to coerce or alter clinical behaviour. While putting doctors on the payroll is assumed to limit financial conflicts of interest, it embeds clinical behaviour within a managed system full of rules and regulations that will invariably put administrative convenience above clinical and patient needs. Falsifying records is nothing new, but tying data to rewards only creates an incentive to game the rules to maximise the benefits. Such gaming is well known, and it is possible to model and test whether proposed incentives will work and how they might prove perverse.
  3. Dishonesty is embedded in the culture of work, and rooting it out means going back, perhaps to incentives again, to understand why it is more beneficial to lie. This may take hold more easily within highly bureaucratised systems, where people are dislocated from patients and see themselves as tasked simply with ensuring the stability of the system. This is a tough one, but in some countries doctors’ employment contracts explicitly put them in conflict with their employers by emphasising the relationship between their work and costs.

As the US has noted with the VA, the system often put the doctor in a conflict of interest between the patient and their paymaster, the government. Many countries have the same arrangements and should not, therefore, be complacent.

It is certainly timely and appropriate for policymakers, and those who think systematically about healthcare systems, to study carefully what happened at the VA and apply that learning to their own healthcare systems. I am sure there would be much to think about.

If anyone wants to do this, give me a call.





Just about every country has identified life sciences in some form or other as a priority for academic and commercial development. But what will characterise the countries that may in the end prevail?

  1. The research community needs a high degree of autonomy. The European University Association released an interesting study, University Autonomy in Europe II: the Scorecard, in 2011, assessing the degree of institutional autonomy enjoyed by universities in the various EU member states. The countries with the greatest university autonomy were from northern Europe: Denmark, Ireland, the UK, Finland, Sweden, Latvia and Lithuania. Those with highly regulated, state-controlled systems were from southern Europe, or had systems where the state just likes to intrude: France, Luxembourg, Greece, Italy and others. To be fair, some countries were more or less autonomous on different indicators, but the rough distinction can be drawn. Surprising, at least to me, was the middling performance of countries like the Netherlands, Austria and Germany. No doubt various high-control states will endeavour to justify why the state needs to be so intrusive, but as evidence that this is perhaps an unhealthy state of affairs, we see the highly intrusive French state over the past year moving to create greater diversity and differentiation in funding for its universities, with greater autonomy (see this news item, for instance). Clearly, greater autonomy necessitates greater diversity and differentiation, and in the end some universities will need to become better than others. While we would like to think that all universities are essentially the same, reality suggests that the only real equality lies in the extent to which they all meet minimum standards, rather than all trying to meet some arbitrary ‘gold standard’.
  2. The second point is that the bulk of significant research results in life sciences arise from centres known as academic health science centres (AHSCs). This is a theme I warm to, as it provides an organisational model that drives innovation from the clinical user end, rather than from the research end. Yes, more research funds are always needed, but we also need solutions. Efforts to operationalise translational medicine are doomed to fail if the driving forces are not coupled to the clinical user; innovation policies in general need to start with problems needing solutions, and hence with factors more likely to be evidenced. Only a few countries have AHSCs: the US (over 50), Canada (about 14), Sweden (1), Belgium (1), the Netherlands (8) and the UK (5). Germany arguably has at least one, as does Italy. France has none, and one will need to see whether changes in its higher education system are likely to lead to formal establishment of this approach. The challenge (and this was the subject of a paper I presented; see the previous entry below) is that while universities are likely to enjoy degrees of autonomy, hospitals are less likely to. The UK was only able to move toward establishing AHSCs when state control of hospitals was relaxed through successive periods of NHS reform. The Netherlands model built on existing relationships. Countries without AHSCs, though, will confront the twin challenge of institutional autonomy for both universities and hospitals.
  3. The third point is that not all countries will be able to do everything in life sciences and will therefore need to set some priorities. National priorities are hard to conceive, because countries usually think of themselves as able to do everything, and so efforts get diluted and underperform. Cash is tight these days (think debt) and governments just cannot afford everything, so the most difficult challenge is establishing priorities.

We are awash with regulation. Healthcare and medicines are particularly affected.

For instance, in their wisdom, European lawmakers have deemed it inappropriate for medicines to be advertised. And this in the 21st century, with open information access, calls for transparency, and the empowered and informed patient. Of course, the logic of such restrictions reflects real-world anxieties, but they are the anxieties of another age. If we examine regulatory practices, we’ll find that in the main they use instruments that would have been popular in the 1950s and 1960s. Today, more subtle and information-rich tools are available.

We regulate to coerce people and organisations to behave in ways that they would not, of their own volition, otherwise behave. This coercion is legitimate if it arises through due process and democratic accountability, and not just the whim of the regulator or government. Sometimes the coercion has perverse consequences, as with medicines, where the legitimate manufacturer of a product is prohibited from publicising it, yet all manner of snake-oil salesmen can make all manner of inappropriate claims for remedies, from peach pits to peanut butter as a sunscreen! The truth lies somewhere in between, but regulations make truth-telling more difficult, and that is not always in the public interest.

What I want to propose draws its inspiration from the US, from the Triple Aim proposed by Don Berwick and colleagues as a tool for determining high-value intervention targets (the three aims being quality of care, patient satisfaction, and cost).

The Regulatory Triple Aim would comprise three tests, the simultaneous failure of which would indicate that the proposed regulation should not be considered further.

  1. Will the regulation produce poor quality or substandard outcomes? This is likely to be measured through evidence or insight into perverse consequences, weak enforcement, lack of suitable performance data, etc.
  2. Will the regulation produce dissatisfaction amongst the regulated? This is comparable to the patient satisfaction aim and goes to whether the regulation is appropriate and proportionately coercive, and whether it will enjoy high degrees of compliance.
  3. Are there avoidable costs associated with the regulation? This is an interesting test as it actually asks two things: [1] is there an incremental burden of costs associated with the regulation, and [2] is the cost proportionate to the benefits?

As a formula, we have: (Quality (#1) + Satisfaction (#2)) divided by Costs (#3) = Value for Money.
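The formula can be sketched in code; the 0-to-1 scoring scale below is my own illustrative assumption, not part of any formal method:

```python
def value_for_money(quality: float, satisfaction: float, cost: float) -> float:
    """Combine the three Regulatory Triple Aim tests into a single score.

    quality and satisfaction are hypothetical scores in [0, 1];
    cost is a positive relative cost index. Higher means better value.
    """
    if cost <= 0:
        raise ValueError("cost must be positive")
    return (quality + satisfaction) / cost

# A regulation scoring well on quality and satisfaction at modest cost:
print(value_for_money(quality=0.8, satisfaction=0.7, cost=1.5))  # ~1.0
```

The point of the division is that even a well-designed regulation loses value as its cost burden grows, which is exactly the trade-off test #3 asks about.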

We need ways to sharpen our focus on regulation, and we need to ensure that there is not too much of it for the value we seek to achieve.

Let’s test it with the regulation that was intended to control refillable olive oil containers or pots in restaurants, something that more insightful minds eventually decided was a silly thing to do. (I’ll wager, though, that it has just gone into hibernation while a study is commissioned to find evidence that such refillable containers are full of fake olive oil or some such, whereupon it will re-emerge.) Still, it got a long way along the regulatory process without anyone (groupthink?) challenging it. Are people really that dumb? I wonder how that happened. Did no one apply a Wilson matrix to see whether the distribution of costs and benefits was properly understood? Anyway, back to the Triple Aim.

Olive Oil in Refillable Pots or Containers

Would regulating olive oil in refillable pots or containers …

1. … produce poor quality or substandard outcomes?

2. … produce dissatisfaction?

3. … create avoidable costs?

Triple failure, meaning answering YES to each question, would suggest this would not be a good idea.
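The triple-failure rule can be written as a minimal sketch; the answers recorded for the olive-oil case are my own hypothetical reading, not settled fact:

```python
def should_abandon(answers: dict) -> bool:
    """Return True on triple failure: YES to all three Triple Aim questions
    means the proposed regulation should not be considered further."""
    questions = ("substandard_outcomes", "dissatisfaction", "avoidable_costs")
    return all(answers[q] for q in questions)

# My own (hypothetical) assessment of the olive-oil regulation:
olive_oil = {
    "substandard_outcomes": True,   # perverse consequences, weak enforcement
    "dissatisfaction": True,        # restaurateurs and diners alike
    "avoidable_costs": True,        # new containers, inspection burden
}
print(should_abandon(olive_oil))  # True
```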

Post your assessments and comments. Obviously, if you’ve got better examples, (such as regulation of clinical trials or whatever) please feel free to expand the scope.



The last of my trilogy picks up on medicines again.

In the US, not taking medicines correctly is thought to be the fourth leading cause of death – could this be true?

WHO data on mortality captures medicines use in a variety of categories. I ran the data on the categories concerned with medicines-related harm (ICD-10 codes: X40-X44, X60-X64, Y10-Y14, Y45-Y47, Y49-Y51, Y57). The total is less than 1% for any European country across the whole population, but breaking it down by age cohort reveals interesting results: in different EU countries there are variations in these causes of death, rising with age. None, however, emerges as a leading cause of death on its own.
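The kind of calculation involved can be sketched as follows; the death counts are made up for illustration, not the actual WHO figures, and the lexicographic range check is a simplification of real ICD-10 parsing:

```python
# The ICD-10 code ranges listed above as medicines-related harm
MEDICINES_RELATED = ["X40-X44", "X60-X64", "Y10-Y14", "Y45-Y47", "Y49-Y51", "Y57"]

def in_range(code: str, code_range: str) -> bool:
    """True if an ICD-10 code like 'X42' falls in a range like 'X40-X44'
    (or equals a single code like 'Y57')."""
    if "-" not in code_range:
        return code == code_range
    lo, hi = code_range.split("-")
    return lo <= code <= hi  # letter + two digits sorts lexicographically

def medicines_share(deaths_by_code: dict) -> float:
    """Fraction of all deaths falling under the medicines-related code ranges."""
    total = sum(deaths_by_code.values())
    related = sum(n for code, n in deaths_by_code.items()
                  if any(in_range(code, r) for r in MEDICINES_RELATED))
    return related / total

# Hypothetical counts for one country-year (not real WHO data):
sample = {"I21": 50_000, "C34": 30_000, "X42": 300, "Y45": 150, "J18": 20_000}
print(f"{medicines_share(sample):.2%}")  # well under 1%, as found above
```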

However, medicines use sits within a system of patient care. Medicines misuse and medication errors may therefore create the conditions for a co-morbidity to assert itself. There is also the question of whether the drugs were toxic for the patient at a particular dose (keep in mind that pills come in standard sizes and may need to be cut in half or so to achieve an accurate dose for a patient). Working through this data does highlight areas to pay attention to, in particular countries where there appears to be noteworthy higher risk. I’d like to see better analysis of medication errors.

Once again, before we target the drugs bill as being out of control, let’s get a better understanding of the dynamics of medicines use itself. We may be spending money foolishly or carelessly. What are the incentives in health systems that may actually encourage this sort of professional conduct?

The devil is not just in the detail, but in the data and in clinical practices.

Want to know more?

WHO datasets are here:


In these days of trying to better understand the determinants of rising healthcare expenditure, it is productive to look in the waste bin to see what is being thrown away. Let’s see what medicines we find there.

Medicines waste is medicine given to patients that they do not take. But any analysis needs to distinguish between actions taken by the patient and other factors, since not all wastage is patient non-adherence.

The costs include the cost of the medicine itself, but also the changed procedures in pharmacies to reduce patient-related waste (procedural costs drive duplicate medicines ordering on hospital wards, for instance). There are also the costs associated with safe disposal of the waste itself and with how patients dispose of unwanted or unused medicines. Environmental contamination by pharmaceuticals is of rising concern. [Pharmaceuticals in the Environment, European Environment Agency, 2010]

Considerable medicines waste arises because the patient has died, and the return rate correlates with condition: 100% return for anaesthetic drugs, 60% for drugs used in immunosuppression/malignant disease, 26% for cardiovascular conditions, 19% for drugs used for infections. This suggests that gross wastage data needs to be viewed with some care.

Reducing the stock held by patients in the home shifts the stocking costs to pharmacies. UK evidence suggests that “if all repeat prescriptions in 2008 had been issued at just 28 days, then total pharmacy costs would have been even higher – around £2.3 billion, or 28% of the net cost of medicines dispensed.” [Gilmour review on prescription charges, “Medicines Wastage” Prescription charges review: implementing exemption from prescription charges for people with long term conditions, May 2010] This suggests that included in wastage costs are pharmacy dispensing charges.

As in all cases of healthcare expenditure, the challenge involves a complex mix of activities and stakeholders. We need much better tracking of waste, if only to ensure we do not inappropriately target expenditure of medicines without first ensuring that medicines that are being bought are properly used. Industry, healthcare and regulators can usefully work together here.

I haven’t mentioned the environmental impact of flushing unused medicines down the toilet. I’ll let your imagination go to work on that one.

Want to know more?

Evaluation of the Scale, Causes and Costs of Waste Medicines, Final Report, York Health Economics Consortium/School of Pharmacy, London, 2010. This has a good international literature review of costs, but caution is needed in the context of the comments above.

Kummerer K, Hempel M (eds), Green and Sustainable Pharmacy, Springer, 2010. See page 170 for a table of waste by country, though not costed.


Increasingly widespread amongst the world’s healthcare systems is the assessment of medicines and devices using various types of cost-benefit or cost-utility analysis; this is called health technology assessment, or HTA. HTA seeks to determine, using evidence of one sort or another, whether something is broadly speaking affordable, setting the cost of the medicine or device against the benefit to a particular constellation of diagnostic attributes in patients. This is usually quantified in a measure called a QALY, a quality-adjusted life year, which is a way to assess the value for money of a particular health technology. In short, it is a way of valuing lives.

HTA is a utilitarian approach to assessment. To some extent this is not surprising, as HTA is in the main a method developed by health economists who, like economists in general, hypothesise that we make daily decisions based on the utility of this or that, in terms of trade-offs (Pareto optimisation, for instance) and rational decision making (that people seek to maximise value, or utility, in what they do). This approach is increasingly in dispute in light of findings from the neurosciences and behavioural economics: by positing that people do not always make decisions in their own best interests, these fields question a key assumption of traditional economics, that of the rational actor, always calculating trade-offs and maximising benefits.

The problem with utilitarianism, though, is that it doesn’t pay attention to the freedom of the individual; it rests the justification of its results on the net benefit to society, regardless of the impact on the rights of individuals. Obviously, health economists don’t watch Star Trek, or they would know that the needs of the one outweigh the needs of the many. But then that, too, is a moral position.

Indeed, it is perhaps the sense that utilitarian conclusions don’t correlate with many people’s moral sentiments that explains why decisions of HTA agencies, for instance NICE in the UK (England), lead to moral outrage and a sense of, if not injustice, at least unfairness. While the results of an HTA process may lead to a quantitatively defensible conclusion, people sense that the conclusion is not morally defensible.

How are we to judge? Few would use utilitarian arguments in this way in other spheres: would we calculate who deserves welfare by its net benefit to society in quality-of-life years? Though perhaps we do allocate welfare on the moral assumption that some people deserve it while others don’t.

Do we allocate support to communities ravaged by floods based on their overall contribution, or utility, to society? If you could donate £10 million to a university, would you pick Oxford University or Thames Valley University; which one is more worthy? And would you want to treat people this way?

HTA doesn’t even let us value lives in quite this way, since it neatly avoids deciding the worth of any particular type of person, who just happens through misfortune to need some medicine that fails the HTA tests. It keeps us from confronting allocation directly: it is a way of drawing a conclusion without actually having to decide anything for any one person in particular. Bentham would approve.

There is, though, a technical problem with HTA, and it has to do with whether, at one level of assessment outcome, a utilitarian model can be used when the decision to be made does not have life-threatening consequences for some people.

If the QALY threshold is, say, £35,000, as it apparently is in the case of NICE, are the decisions below that threshold, which tend toward ‘yes’ or ‘approval’, morally different from decisions above it? I suggest that different moral criteria come into play above the threshold, and this is where I think our moral outrage should be directed and where HTA fails. Regretfully, HTA models see the results as broadly continuous; that is, decisions above and below the threshold are treated as essentially of the same type. But I have argued elsewhere that above the threshold HTA models fail, for reasons other than their analytical soundness: above this threshold the conclusions may lead to a lessened quality of life; in other words, they actually crystallise the health outcome rather than avoid it.
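The threshold arithmetic behind such decisions can be sketched simply; the £35,000 figure is the one cited in the text, and the example costs are invented for illustration:

```python
QALY_THRESHOLD = 35_000  # GBP per QALY, the figure cited in the text

def cost_per_qaly(incremental_cost: float, incremental_qalys: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    if incremental_qalys <= 0:
        raise ValueError("needs a positive QALY gain")
    return incremental_cost / incremental_qalys

def tends_toward_approval(incremental_cost: float, incremental_qalys: float) -> bool:
    """Below the threshold, decisions tend toward 'yes'; above it, the argument
    here is that different moral criteria come into play."""
    return cost_per_qaly(incremental_cost, incremental_qalys) <= QALY_THRESHOLD

# Illustrative: GBP 12,000 more for 0.5 extra QALYs -> 24,000 per QALY
print(tends_toward_approval(12_000, 0.5))   # True
# GBP 60,000 more for 1 extra QALY -> 60,000 per QALY, the fraught region
print(tends_toward_approval(60_000, 1.0))   # False
```

The sketch makes the continuity objection visible: the model treats 34,999 and 35,001 per QALY as points on one scale, while the moral character of the two decisions arguably differs in kind.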

Therefore, in valuing lives, those above the threshold experience greater injustice than those below; they are treated differently, unfairly, unjustly, perhaps as less worthy, but certainly differently. Indeed, above the threshold we feel we are more in the realm of our moral sentiments about the value of human life, and less in those about the allocation of scarce resources.

If this were not so, then we would be living in a society that believes the determinant of all important moral and political decisions is affordability; and if that were so, then we could not even afford the costs of the inefficiency brought on by democracy, the inconvenience of not being able to exploit people, or the costs of equal rights.

Perhaps, though, in our financially contaminated world, all we can think about today is money, and that is further contaminating our perception of what sort of society we are actually trying to foster. Certainly, protests on Wall Street and elsewhere point to the view that there is some unjust allocation of the benefits of government bail-outs that just doesn’t reach those ‘at the bottom’.

John Rawls wrote that we should distribute opportunity in a society in such a way as to ensure that the least well off benefit the most. In the context of HTA, medicines and technologies that benefit only a few, but at great cost, represent a cost worth bearing, as the least well off, namely those who need them most (who have the condition treated, and in some societies can afford treatment least), would benefit, even if only a little; that is the price we pay for justice.

This, I suggest, is the root of our moral outrage at HTA: it unjustly fails to serve those who need it most.

I am left wondering about the underlying morality of HTA as a government scheme. Governments, as we know, are the last resort when things are tough and should, one would hope, ensure that the least well off in society are not penalised simply by virtue of being least well off. In healthcare, someone has to be the carer of last resort; using HTA as a way of avoiding this responsibility is not morally defensible.


The media do have considerable trouble reporting health statistics, partly because these statistics often report probabilities, estimates and approximations. Phrases like “x times more likely” abound. Without knowing the base likelihood, we have no idea whether this is a lot or a little. So small numbers can sound impressive, and people can easily be misled into thinking they might live forever. Take reporting that 42% of the population will die with or from cancer: the difference is important, as men frequently die with prostate cancer, but not from it.

What do you think this paragraph from The Guardian newspaper means? (By the way, a search was unable to locate the document the article was based on. Newspapers these days should cite the names of such documents, with links, to enable independent follow-up.)

“Twenty-year-olds are three times more likely to reach their 100th birthdays than their grandparents and twice as likely as their parents, official figures show. A baby born this year is almost eight times more likely to reach 100 than one born 80 years ago, according to the figures issued by the Department for Work and Pensions.  A girl born this year has a one-in-three chance of reaching their 100th birthday, while boys have a one-in-four chance.”

Many people look to the media for information on health, but it doesn’t help when within a single paragraph (!) we are confronted with this rush of statistics.

They sound important, as if they ought to mean something. But what? Can these statistics be converted into something that might actually shed light on what the numbers mean, or is the newspaper just repeating statistics in the usual confusing way papers do? (Another example of papers confusing their readers with statistics: they’ll say something like “the number of mortgages issued declined by 1% last month; of that, 200 were remortgages”. Huh?)

Today’s grandparents were probably born around 1930, when life expectancy was about 60 years; today it is about 75, and for a twenty-year-old today it is estimated at 100, eighty years from now. Life expectancy rose about 15 years between 1930 and today (a span of about 80 years) and will rise a further 25 years by 2090. Hmmm, that suggests the improvement in life expectancy is accelerating: it will rise by 25 years over the next 80, against 15 years if it simply continued at a steady, linear pace.

Most people die before 100, and certainly for this discussion we could say 99% of the population born in 1930 will be dead by 2030. So someone born in 1930 had a tiny chance of living to 100, and a baby born today is eight times as likely to get there, which still seems like quite a small number. We also know that baby is twice as likely to reach 100 as its parents, say born around 1950, most of whom will also be dead by 2050.

Let’s be generous: if 1% of the population born in 1930 lives to 100, then 8% of the population born today will live to 100. Is that what they are saying? But it also says that boys have a 25% chance of having a 100th birthday, while girls have a 33% chance. Are they saying that of 100 boys, 25 may live to 100? And is that broadly equivalent to an eight-times improvement over their grandparents? Hmmm.
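One way to test whether the quoted figures hang together is to back out what they imply about the 1930 cohort; averaging the boys’ and girls’ chances is my own simplification (it ignores the sex ratio at birth):

```python
girl_chance = 1 / 3   # quoted: a girl born this year has a one-in-three chance
boy_chance = 1 / 4    # quoted: boys have a one-in-four chance
baby_chance = (girl_chance + boy_chance) / 2  # crude average for 'a baby born this year'

# Quoted: a baby born this year is "almost eight times more likely" to reach
# 100 than one born 80 years ago, so back out the implied 1930 baseline:
implied_1930_chance = baby_chance / 8
print(f"implied chance for a 1930 baby: {implied_1930_chance:.1%}")

# By contrast, if only ~1% of the 1930 cohort reaches 100, an eight-fold
# improvement would give today's babies just 8%, well short of the quoted 25-33%:
print(f"8 x 1% = {8 * 0.01:.0%}, versus a quoted average of {baby_chance:.0%}")
```

On these numbers, the quoted ratios are only mutually consistent if roughly 3.6% of the 1930 cohort reaches 100, which sits awkwardly with the “tiny chance” intuition above.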

So how many boys born today will live to 100? And how many girls? Answers need to take account of the probabilities, so we also need to know whether the various statistics in the quote above are compatible with each other or inconsistent. Do you think an average person would understand the article? (By the way, we know that doctors often misunderstand what statistics like these mean when referring to the likelihood that people may or may not acquire a particular disease or condition; if that is true, what are the chances for the rest of us: 1 in 50…?)

Post your answers.

QED, I think.

