AI Don’t Do!
In the same month that Microsoft announced increased profits driven by AI demand and ploughed ahead with building new data centres, the United States, Britain and the European Union issued the strictest regulations yet on the use and development of artificial intelligence, setting a precedent for many other countries. While countries debate whether certain AI practices should be banned in order to restrain increasingly powerful AI models, how can businesses use ‘guard rails’ today and still drive the benefits?
When a customer asked us whether AI could help detect if users are authorised to progress workflows, or whether they have completed a task correctly, we kept coming back to the context of the permissions and the tasks being performed. Once that context was defined, adding a layer of organisational relevancy to a behaviour-driven prompt helped map workflow compliance. The AI assisted human decision making or, just as often, stopped it. And that was the point. The project ended up being more about stopping incorrect workflows, tasks and unauthorised users with the help of AI. This maintained the strict compliance practices of a highly regulated sector, focusing on ‘don’t do’ rather than ‘can do’.
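To make the idea concrete, here is a minimal sketch of how such a guard rail might be layered, under assumptions of our own: all names (WorkflowContext, AUTHORISED_ROLES, guard_rail, build_prompt) are hypothetical and not taken from the project described. The point it illustrates is that deterministic permission and training checks run first, and only once the organisational context passes is a behaviour-driven prompt sent to the model, so the default outcome is ‘don’t do’.

```python
# Hypothetical sketch of a 'don't do' guard rail: context checks run before
# any AI call, and blocking is the default path.
from dataclasses import dataclass


@dataclass
class WorkflowContext:
    user_id: str
    role: str
    task: str
    completed_training: bool


# Illustrative permission map; in practice this would come from the
# organisation's own access-control system.
AUTHORISED_ROLES = {
    "approve_batch_release": {"qa_manager"},
    "sign_off_audit": {"compliance_officer"},
}


def guard_rail(ctx: WorkflowContext) -> tuple[bool, str]:
    """Return (allowed, reason). Stopping the workflow is the default outcome."""
    allowed_roles = AUTHORISED_ROLES.get(ctx.task, set())
    if ctx.role not in allowed_roles:
        return False, f"Role '{ctx.role}' is not authorised for task '{ctx.task}'."
    if not ctx.completed_training:
        return False, "User has not completed the required training for this task."
    return True, "Context checks passed; task content can now be assessed."


def build_prompt(ctx: WorkflowContext, evidence: str) -> str:
    """Behaviour-driven prompt, layered with the organisational context."""
    return (
        f"You are reviewing a '{ctx.task}' step performed by a {ctx.role}.\n"
        f"Evidence supplied: {evidence}\n"
        "Answer only PASS or FAIL, and if FAIL state which compliance "
        "criterion is not met."
    )


if __name__ == "__main__":
    ctx = WorkflowContext("u123", "warehouse_operator", "approve_batch_release", True)
    allowed, reason = guard_rail(ctx)
    if not allowed:
        print("Workflow stopped:", reason)  # the 'don't do' path
    else:
        print(build_prompt(ctx, "Batch record 42 attached."))
```

In this sketch the model is never asked whether an unauthorised or untrained user may proceed; that decision is made before the prompt is ever built.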
Another client asked the same question in a different way: ‘When someone just ticks a workflow box to progress a series of tasks, can AI tell if they really know what they are doing?’ Our first context question was, ‘Are they the right person for that task, and have they had the training?’
We felt quite proud of those ‘guard rails’ in our generative business intelligence platform.