As doctors and nurses we always want to prevent or reduce disease. Every new technology promises to reduce mistakes, treat certain diseases and cut the cost of certain treatment pathways. Yet although technology is advancing at a remarkable rate, patient safety has not improved in step. In some years, incident reports have actually increased compared to the previous year.
I am aware that there are selection bias factors at play. Nurses and doctors may feel more comfortable reporting an incident one year compared to the next depending on the political climate. The population is increasing, and what constitutes an incident may change. However, another interesting factor connected to tech innovation is an economic theory called the Peltzman effect: the tendency of humans to make riskier choices as a situation becomes safer. A classic example used to demonstrate this effect is skydiving. Improvements in equipment have made skydiving a safer sport over the years, yet death rates have remained roughly constant. Researchers found that skydivers took riskier jumps, waited longer to open their parachutes and made faster manoeuvres closer to the ground.
To some of you this may seem a little niche. Skydivers partly skydive for the thrill, which puts the sport in a different category from nursing and medicine. However, the same effect has been observed in everyday driving: comparing seatbelts with no seatbelts, seatbelt wearers had more accidents; comparing ABS brakes with no ABS brakes, drivers without ABS had fewer crashes; and in studies of cycle helmets, cyclists wearing helmets were more likely to violate traffic laws.
Before you throw your cycle helmet away and cut your seatbelts out, it is worth noting that although non-wearers are less likely to crash, the wearers are less likely to die. There's a reason these laws have made their way into most developed countries, but the effect does raise an interesting observation. In my clinical experience, I have found that double checking generally increases the risk of the check not happening at all. My matron did an audit on checking for pressure areas, and my probability distributions showed that the failure rate was highest when the patient arrived in A&E an hour before staff changeover. The failures did not correlate with busy periods or particular days. My theory is that each nurse was assuming the other nurse would check, or had already checked, the pressure areas. This isn't just a clinical risk management issue. In 2014 a paper called for measuring the Peltzman effect in clinical trials to better understand real-world impact, as real-world benefits often fail to materialise [link].
Although it is hard to measure alongside selection bias, the Peltzman effect is simple and very real. It is worth keeping in mind when developing an intervention or policy. Then again, I suspect most nursing managers account for it instinctively anyway. To those who already do: now you have a fancy term to put a name to your concerns.
I help clinicians get to grips with coding and tech; I also code for a financial tech firm.