Democratising AI: Who defines AI for good?

At the ReWork Retail and AI Assistants summit in London I was lucky enough to interview Kriti Sharma, VP of AI and Robotics at Sage, in a fireside chat on AI for Good. Kriti spoke at length about her experiences and projects, not only in getting more diverse voices heard within AI but also in using the power of AI as a force for good.

We discussed the current state of AI and whether we need legislation. It is clear that legislation will come if we do not self-police how we use these new tools. In the wake of the Cambridge Analytica story breaking, I expect the focus on data privacy laws to accelerate, and this may bleed into artificial intelligence applications that use such data.

When we engaged the audience, it was interesting that there was a distinct difference in the language used around AI when speaking to prospects and investors compared to colleagues, friends and family. As a community we cannot be complicit in the over-hyping of AI, nor can we ignore its use for applications that are immoral.

One thing that is clear is that the current state of the media ensures that headlines need to be attention-grabbing clickbait – what is submitted with the best intentions can get twisted into something over-sensationalised [1]. We need to push back more on this.

Kriti is passionate about ensuring that more voices are heard in the AI community, democratising AI so that it can be applied to more relevant everyday problems: AI for good. She is personally involved with schemes to encourage individuals from underprivileged backgrounds into education, and with projects where AI is being used to solve problems for those communities. We spoke in detail before the public session about a particular project to support women suffering from domestic abuse – without doubt a morally just use of AI.

When it came to the questions at the end, several asked Kriti to expand on her views on specific topics, but one question caused a little confusion and needs to be addressed.

The questioner [2] started with a list of caveats: they too were encouraging diversity in AI and walking the walk on AI for good, with products for the uneducated and the homeless. They also assumed in advance that the audience would send them abuse via social media for the question they were about to ask. This got the rest of us intrigued, but the actual question was lost amongst all of the explanation; it came across less as a question and more as a statement of the great things they were doing themselves. We moved on, as there were a lot of perplexed faces in the audience.

When the talks were done for the day I made a point of finding the questioner to ask what was really behind their question. Sadly, Kriti had had to leave by that point, but I didn’t want to simply write off the question.

I had a really good talk with them and it turned out that the real question was this:

Who can really decide whether an application is AI for good? Should everyone have access to AI and if not who decides?

I think most people would start from the position that it is easy to know whether something is good or not, and that everyone should have access. However, there are so many polarised debates in the world that it is not a stretch to imagine equally passionate individuals on opposite sides of a debate about a use of AI, both of whom think they are correct. Or, potentially more dangerously, who argue from a standpoint of “I won’t be negatively impacted by this, so it’s not a problem”. This latter point can only be addressed with diversity and open debate in the industry. It saddened me that the questioner genuinely felt that there would not be an open debate, and that they would be “set upon” by the audience for even daring to suggest that not everyone should have access to AI. If data scientists can’t cope with rational debate of challenging ideas then they have no right to call themselves data scientists.

So how do we go about addressing the main point? There are obvious “AI for good” projects. But there are far more shades of grey on the spectrum: from predicting our likes and dislikes to influencing them; from facial recognition that makes it easier to get through airport immigration to tracking your every movement; from instant translation services to your every conversation being semantically tagged and flagged. It has been reported that China is implementing a ranking system for its citizens [3], which is reminiscent of a sci-fi series I’ve been reading recently: The Girl Who Dared to Think. I find the human reaction to being “scored” in such a way very believable in the books, and I’ll be interested to see how this works out in China.

With the democratisation of AI, it is easier and easier for non-specialists to create systems. Anyone with a computer and an internet connection can start. More expensive hardware and more people make the creation process faster, but you need only an idea and quite basic coding ability to have something working in a short space of time [4], particularly if you are not bothered about the impact of inaccurate results…
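To give a feel for how low the barrier now is, here is a minimal sketch of my own (not something from the talk) using the freely available scikit-learn library and its bundled iris dataset. The dataset and model are arbitrary choices; the point is how little code and expertise is required to get a working classifier.

```python
# A minimal sketch (my own illustration) of how little code it now takes
# to train and use a machine learning classifier with off-the-shelf tools.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a bundled example dataset and split it into training and test sets.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit an off-the-shelf model with default settings.
model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

# Report accuracy -- note that nothing here asks whether the results are
# fair, appropriate, or safe to act on.
print(accuracy_score(y_test, model.predict(X_test)))
```

Nothing in those few lines forces the author to ask whether the data is appropriate, the accuracy is sufficient, or the application is ethical: the tooling removes the technical barriers, but not the moral ones.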

I don’t believe that we can limit who has access to AI, but we can be vigilant as a community, ensuring that the companies for which we work are ethical in the projects they undertake, and continue to strive for a world where people don’t want to create anything that harms other people.

[1] You know the ones: AI is rubbish, or it’s going to kill us all… this is one of my pet hates, which I talked about here.
[2] I’m deliberately obfuscating their details here as they’re not relevant.
[3] While I don’t know for sure that this uses AI, I can’t imagine that it’s built in a traditional way, and with the data volumes involved an AI approach would be natural.
[4] Depending on what you’re trying to do, obviously.
