That giant of data crunching, Google, is launching Explainable AI. Whether this is because Google is perceived to be lagging behind Amazon Web Services and Microsoft Azure in the cloud market is uncertain.
Professor Andrew Moore (no, not the Moore's Law one) of Google Cloud's AI Division explained that whilst they develop very sophisticated machine learning models, they also have to understand what's going on within them. To that end, they have been developing tools to help the industry ascertain why a particular algo is behaving as it does.
It seems not every input yields a predictable output from an AI algo, and it can be incredibly difficult to fathom what caused the surprise. If we are to rely on AI systems in the future, it's vital we know how and why they go wrong (hopefully pre-release) and build in contingencies so that they remain safe.
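To give a flavour of what "explaining" a model can mean: one simple, model-agnostic technique is permutation importance, where you shuffle one input feature at a time and see how much the model's accuracy drops. A big drop means the model leans heavily on that feature; no drop means it's ignored. This is just an illustrative sketch with an invented toy model and data, not Google's actual Explainable AI implementation:

```python
import random
from statistics import mean

# Toy "model": predicts 1 when feature 0 exceeds a threshold.
# Feature 1 is never looked at, so shuffling it should change nothing.
def model(row):
    return 1 if row[0] > 0.5 else 0

random.seed(42)
data = [[random.random(), random.random()] for _ in range(1000)]
labels = [model(row) for row in data]  # labels come from the model, so baseline accuracy is 1.0

def accuracy(rows):
    return mean(1 if model(r) == y else 0 for r, y in zip(rows, labels))

def permutation_importance(feature_idx, n_repeats=10):
    """Average accuracy drop after shuffling one feature column."""
    baseline = accuracy(data)
    drops = []
    for _ in range(n_repeats):
        shuffled_col = [r[feature_idx] for r in data]
        random.shuffle(shuffled_col)
        perturbed = [r[:feature_idx] + [v] + r[feature_idx + 1:]
                     for r, v in zip(data, shuffled_col)]
        drops.append(baseline - accuracy(perturbed))
    return mean(drops)

print(permutation_importance(0))  # large drop: feature 0 drives the predictions
print(permutation_importance(1))  # zero drop: feature 1 is irrelevant
```

Real explainability toolkits (Google's included) use far more sophisticated attribution methods, but the goal is the same: pinning a model's behaviour to the inputs that actually drive it.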
It's fascinating to learn that Google seems to be taking the moral high ground with AI: whilst it has developed AI algos to detect faces, it hasn't gone on to do facial recognition. Having said that, Google did work with the Department of Defense on Project Maven (analysing drone footage) but has since dropped the contract. However, Google is willing to work on aspects of national security which make people safer (one man's safety could be another man's demise, so best of luck with that one, and don't forget Mandela was once considered a terrorist).
So, to conclude, if your AI algo has gone rogue, have a dip into Google's Explainable AI toolbox to see if they have the solution.