Democratising AI: Who defines AI for good?

At the ReWork Retail and AI Assistants summit in London I was lucky enough to interview Kriti Sharma, VP of AI and Robotics at Sage, in a fireside chat on AI for Good.  Kriti spoke a lot about her experiences and projects not only in getting more diverse voices heard within AI but also in using the power of AI as a force for good.

We discussed the current state of AI and whether we need legislation.  It is clear that legislation will come if we do not self-police how we use these new tools.  In the wake of the Cambridge Analytica story breaking, I expect the focus on data privacy laws to accelerate, and this may bleed into artificial intelligence applications that use such data. Continue reading Democratising AI: Who defines AI for good?

Cambridge Analytica: not AI’s ethics awakening

From the wonderful XKCD, research ethics

By now, the majority of people who keep up with the news will have heard of Cambridge Analytica, the whistleblower Christopher Wylie, and the news surrounding the harvesting of Facebook data and micro-targeting, along with accusations of potentially illegal activity.  Amongst all of this news I’ve also seen articles claiming that this is the “awakening” moment for ethics and morals in AI and data science in general: the point where practitioners realise the impact of their work.

“Now I am become Death, the destroyer of worlds”, Oppenheimer

Continue reading Cambridge Analytica: not AI’s ethics awakening

AI Congress London 2018 Day 2

AI Congress (still making me think of  @jack_septic_eye – let me know if you get that…)

If you’ve not read the day 1 summary then you can find that here.

Day 2 had a new host for track A in the form of David D’Souza from CIPD. His opening remarks quoted Asimov and Crichton, encouraging us not to be magicians and to step back and think about what we should do rather than just what we could. Continue reading AI Congress London 2018 Day 2

AI Congress London 2018 Day 1

AI Congress (not @jack_septic_eye – I feel I may be in a very small subset of AI professionals who get that…)

London is a hive of AI activity. The UK is positioning itself as a leader in AI technology and you can barely walk around London without passing an AI company or meetup or training course1. If I didn’t actually have a day job, I could fill my time with AI conferences without doing much more than my daily commute. That said, I am quite picky about the ones I go to. I’d never been to the AI Congress before and liked the diverse set of speakers and topics.  I was lucky that the team at Logikk had invited me as their guest for the two days. So how did it stack up? Well, day 1 was pitched at a much higher level than some of the other conferences I’ve been to, with a lot of implementation and enterprise discussions and far fewer talks on the technical details. If you’re senior then these conferences are for you. If you want someone to talk about their latest paper on arxiv then there are far more technical events that will suit you better.

One of the biggest problems I had was that there were three separate tracks and only one of me, so if I didn’t make notes on a particular talk then hopefully the slides will be available after the event at some point. I deliberately missed some of the high-profile talks in favour of other speakers, as I’d already heard them at other events. Continue reading AI Congress London 2018 Day 1

Fooling AI and transparency – testing and honesty is critical

The effect of digitally changing an image on a classifier. Can you tell the difference between the pictures? Image from Brendel et al 2017.

If you follow my posts on AI (here and on other sites) then you’ll know that I’m a big believer in ensuring that AI models are thoroughly tested and that their accuracy, precision and recall are clearly identified.  Indeed, my submission to the Science and Technology select committee earlier this year highlighted this need, even though the algorithms themselves may never be transparent.  It was not a surprise in the slightest that a paper has been released on tricking “black box” commercial AI into misclassification with minimal effort. Continue reading Fooling AI and transparency – testing and honesty is critical
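For anyone unfamiliar with the three metrics mentioned above, here is a minimal sketch of how they are computed from a classifier’s predictions (the labels and the high-positive-rate model below are hypothetical, purely for illustration):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Return (accuracy, precision, recall) for a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall

# A model that predicts "positive" too eagerly can report respectable
# accuracy while precision tells a very different story.
y_true = [1, 0, 0, 1, 0, 0, 0, 1]
y_pred = [1, 1, 0, 1, 1, 0, 0, 1]
acc, prec, rec = classification_metrics(y_true, y_pred)
```

Reporting all three (rather than accuracy alone) is exactly the kind of honesty about model behaviour that the paper on fooling black-box classifiers makes so necessary.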

Is a Robot tax on companies using AI a way of protecting the workforce?

Don’t fear the robots, they’re already here.

While there may be disagreements on whether AI is something to worry about or not, there is general agreement that it will change the workforce. What is a potential concern is how quickly these changes will appear. Anyone who has been watching Inside the Factory1 can see how few people are needed on production lines that are largely automated: a single person with the title “manager” whose team consists entirely of robots. It wasn’t too long ago that these factories would have been full of manual labour.

The nature of our workforce has changed. It has been changing constantly – the AI revolution is no different in that respect. We just need to be aware of the speed and scale of potential change and ensure that we are giving everyone the opportunity to be skilled in the roles that will form part of our future. There is an inevitability about this: just as globalisation made it easy for companies to outsource work to cheaper locations (and even easier with micro-contract sites), AI will make tasks cheaper and easier for companies to do, so it will be adopted. Tasks that aren’t interesting enough, don’t have a wide enough market, or are simply too difficult right now to automate will still need human workers. Everything else will slowly be lost “to the robots”. Continue reading Is a Robot tax on companies using AI a way of protecting the workforce?

Chatbot immortality

University of Washington’s artificial Obama created from reference videos and audio files.

While I like to kid myself that maybe I’m only a quarter or third of the way through my life, statistics suggest that I’m now in the second half and my future holds a gradual decline to the grave.  I’m not afraid of my age, it’s just a number1.  I certainly don’t feel it.  My father recently said that he doesn’t feel his age either and is sometimes surprised to see an old man staring back at him from the mirror.

As an atheist, death terrifies me.  My own and that of those I love.  I don’t have the easy comfort blanket of an afterlife and mourn the loss of everything an individual was when they cease to be.  Continue reading Chatbot immortality

Biologically Inspired Artificial Intelligence

Mouse cortex neurons, from Lee et al, Nature 532, 370–374

Artificial intelligence has progressed immensely in the past decade, helped by the fantastically open source nature of the community. However, there are relatively few people, even in research, who understand the history of the field from both the computational and biological standpoints. Standing on the shoulders of giants is a great way to step forward, but can you truly innovate without understanding the fundamentals?

I go to a lot of conferences and I’ve noticed a subtle change in the past few years. Solutions being spoken about now don’t appear to be as far forward as some of those presented a couple of years ago. This may be subjective, but the more I speak to people about my own background in biochemically accurate computational neuron models, the more interest it sparks. Our current deep learning model neurons are barely scratching the surface of what biological neurons can do. Is it any wonder that models need such complexity and are limited in their scope? Continue reading Biologically Inspired Artificial Intelligence
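To make the gap concrete, here is a hedged sketch contrasting the standard deep learning “neuron” with the simplest spiking model from computational neuroscience, the leaky integrate-and-fire (LIF) neuron. All parameter values are illustrative, not biochemically accurate – real neuron models are far richer still:

```python
import math

def deep_learning_neuron(inputs, weights, bias):
    """A standard artificial neuron: a static weighted sum through a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def lif_neuron(input_current, steps, dt=1.0, tau=10.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane voltage leaks toward rest,
    integrates input over time, and emits a spike when it crosses a
    threshold. The result is a spike train -- temporal behaviour that a
    single static activation cannot express."""
    v = v_rest
    spikes = []
    for step in range(steps):
        dv = (-(v - v_rest) + input_current) / tau
        v += dv * dt
        if v >= v_thresh:
            spikes.append(step)
            v = v_reset
    return spikes

# A strong input drives regular spiking; a weak one never reaches threshold.
regular_spikes = lif_neuron(input_current=2.0, steps=50)
no_spikes = lif_neuron(input_current=0.5, steps=50)
```

Even this toy model shows dynamics (leak, integration, thresholding, reset) that the weighted-sum neuron collapses into a single number, which is the point being made above about how much of biology our current models leave on the table.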

Evidence in our AI future

Generated handwriting from the team at UCL

If you’ve been following this blog you’ll know that there have been great advances in the past few years with artificial image generation, to the stage where having a picture of something does not necessarily mean that it is real.  Image advances are easy to talk about, as there’s something tangible to show, but there have been similar large leaps forward in other areas, particularly in voice synthesis and handwriting.

Continue reading Evidence in our AI future

Submitting evidence to parliament committees

(c) Parliament committee on Science and Technology

I love the fact that here in the UK everyone can be involved in shaping the future of our country, even if a large number of individuals choose not to – and, in my eyes, if you don’t get involved then you don’t have the right to complain.  While this most generally applies to the election of our representatives, from local parish councils to our regional MPs (or actually standing yourself)1, there are also many other ways to be involved.  In addition to raising issues with your local representative, parliament has cross-bench committees that seek input from the public to help create policy or consider draft legislation.

Our elected parliament is not made up of individuals who are experts in all fields.  Even government departments are not necessarily headed by people with large amounts of relevant experience.  It is critical that these individuals are informed by those with experience and expertise in the issues being considered.  Without this input, our democracy is weakened. Continue reading Submitting evidence to parliament committees