
Could the algorithm kill our capacity for kindness?

We all want services to be kind. Nobody seriously disputes it. All social change starts with the very personal relationship between two people, and yet, as I’ve commented before, we struggle to be kind, citing professional codes, financial challenges and regulatory restrictions to explain our rather cool, and sometimes frankly impersonal, approach to the decisions that public policy makes. Relational seems to be OK in theory, but much more problematic when it encourages us to break the rules. Rules help and protect the professional, reducing the discriminatory impact of discretion. They allow us to ration spending sensibly, make clear and transparent decisions – and create a framework that seems to be fair.

And we want our services to be fair. We don’t want our outcomes dependent on whether the nurse likes us or not. We don’t want some children encouraged more than others because the teacher enjoys their company.

We also want our services to be transparent. We want to know that the choice of drug regime for a particular condition is not governed by hunch, but by sound medical rationale, linked to outcomes. We want university places to be awarded in ways that can be understood. We want to be cared for by people we can trust not to have favourites. We fear the discretionary and the partial.

We’re getting better and better at demonstrating fairness, and encouraging transparency. The algorithm provides powerful support. We’re all familiar with the Amazon algorithm – you bought this, so you’d like that – and we don’t need to be terribly insightful to recognise that our daily scrolling and browsing and tweeting provides an enormous body of data that allows companies large and small to target their wares very precisely at our credit cards. The predictive power of data analytics, and their capacity to shape services, can be seen in every clinical pathway, every assessment form, every checklist – and they daily grow in power. Protocols and pathways ensure that the interventions most likely to result in the right outcomes are always chosen, and that time – and money – is not wasted on experimentation and following hunches. It’s clean, it’s straightforward, it passes the test of legislation and social media challenge, but is it kind?

Kindness requires intuition. It requires a personal relationship. It requires both warmth and risk. It probably involves personal liking, and empathy, and it may not always be fair. A doctor treating patients with warmth and humanity may not follow the prescribed pathway. A teacher may see a spark that would never register on any scorecard. Someone else might see the sadness behind the eyes. A care worker might understand the grief and loss that is the source of so much anger and frustration. They might all recognise the boredom and tedium, the fury, the fear – the raw emotions that drive us to need public services, and sometimes to loathe them too.

No algorithm in the world can replace human understanding. It can produce fairness. It can resist challenge. It can tolerate the bright light of public transparency. And it can protect the professional from accusations of partiality. It can make sure that both money and services are carefully rationed (and in any system, at any time, that will always be needed).

But if it can’t also allow for the warmth of human interaction, we may need to recognise that sometimes kindness and human relationships trump mechanical approaches to fairness, and to transparency. It might not be the algorithm alone that challenges kindness. Our approach to fairness and to transparency might also be questioned.
