Warning over use in UK of unregulated AI chatbots to create social care plans

University of Oxford study shows benefits and risks of the technology for social care, but ethical issues remain.

Britain’s hard-pressed carers need all the help they can get. But that should not include using unregulated AI bots, according to researchers who say the AI revolution in social care needs a hard ethical edge.

A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care.

That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study.

“If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model,” Green said. “That personal data could be generated and revealed to somebody else.”

She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard.

But there were also potential benefits to AI, Green added. “It could help with this administrative heavy work and allow people to revisit care plans more often. At the moment, I wouldn’t encourage anyone to do that, but there are organisations working on creating apps and websites to do exactly that.”

AI-based technology is already being used by health and care bodies. PainChek is a phone app that uses AI-trained facial recognition to identify whether someone incapable of speaking is in pain by detecting tiny muscle twitches. Oxevision, a system used by half of NHS mental health trusts, uses infrared cameras fitted in seclusion rooms – for potentially violent patients with severe dementia or acute psychiatric needs – to monitor whether they are at risk of falling, the amount of sleep they are getting and other activity levels.

Projects at an earlier stage include Sentai, a care monitoring system that uses Amazon’s Alexa speakers to remind people without 24-hour carers to take medication and to allow relatives elsewhere to check in on them.

The Bristol Robotics Lab is developing a device for people with memory problems whose homes are fitted with detectors that shut off the gas supply if a hob is left on, according to George MacGinnis, challenge director for healthy ageing at Innovate UK.

“Historically, that would mean a call out from a gas engineer to make sure everything was safe,” MacGinnis said. “Bristol is developing a system with disability charities that would enable people to do that safely themselves.

“We’ve also funded a circadian lighting system that adapts to people and helps them regain their circadian rhythm, one of the things that gets lost in dementia.”

Last month, 30 social care organisations including the National Care Association, Skills for Care, Adass and Scottish Care met at Reuben College to discuss how to use generative AI responsibly. Green, who convened the meeting, said they intended to create a good practice guide within six months and hoped to work with the Care Quality Commission and the Department of Health and Social Care.
