ImageNet in 4 Minutes? What the paper really shows us

ImageNet has been a deep learning benchmark dataset since it was created. It was the competition that showed deep learning networks could outperform non-ML techniques, and it has been used by academics as a standard for testing new image classification systems. A few days ago an exciting paper was published on arXiv on training ImageNet in four minutes. Not weeks, days or hours, but minutes. On the surface this is a great leap forward, but it's important to dig a little deeper. The Register's sub-headline says all you need to know:

So if you don’t have a relaxed view on accuracy or thousands of GPUs lying around, what’s the point? Is there anything that can be taken from this paper?

Continue reading ImageNet in 4 Minutes? What the paper really shows us

Thinking machines – biological and artificial


How can we spot a thinking machine?

If you've read pretty much any of my other artificial intelligence blog posts on here, then you'll know how annoyed I am when the slightest advance in AI spurs an onslaught of articles about "thinking machines" that can reason, opening up the question of robots taking our jobs and eventually destroying us all in the style of a certain not-to-be-mentioned film franchise. Before I get onto discussing if and when we'll get to a Detroit: Become Human scenario, I'd like to cover where we are and the biggest problem in all this.
Continue reading Thinking machines – biological and artificial

Democratising AI: Who defines AI for good?

At the ReWork Retail and AI Assistants summit in London I was lucky enough to interview Kriti Sharma, VP of AI and Robotics at Sage, in a fireside chat on AI for Good.  Kriti spoke a lot about her experiences and projects not only in getting more diverse voices heard within AI but also in using the power of AI as a force for good.

We discussed the current state of AI and whether we needed legislation. It is clear that legislation will come if we do not self-police how we are using these new tools. In the wake of the Cambridge Analytica story breaking, I expect the focus on data privacy laws to accelerate, and this may well bleed into artificial intelligence applications that use such data.
Continue reading Democratising AI: Who defines AI for good?

Cambridge Analytica: not AI’s ethics awakening

From the wonderful XKCD, research ethics

By now, the majority of people who keep up with the news will have heard of Cambridge Analytica, the whistleblower Christopher Wylie, and the news surrounding the harvesting of Facebook data and micro-targeting, along with accusations of potentially illegal activity. Amongst all of this news I've also seen articles claiming that this is the "awakening" moment for ethics and morals in AI, and in data science in general: the point where practitioners realise the impact of their work.

“Now I am become Death, the destroyer of worlds”, Oppenheimer

Continue reading Cambridge Analytica: not AI’s ethics awakening

AI Congress London 2018 Day 2

AI Congress (still making me think of  @jack_septic_eye – let me know if you get that…)

If you’ve not read the day 1 summary then you can find that here.

Day 2 had a new host for track A in the form of David D'Souza from the CIPD. His opening remarks quoted Asimov and Crichton, encouraging us not to be magicians and to step back and think about what we should do rather than just what we could.
Continue reading AI Congress London 2018 Day 2

AI Congress London 2018 Day 1

AI Congress (not @jack_septic_eye – I feel I may be in a very small subset of AI professionals who get that…)

London is a hive of AI activity. The UK is positioning itself as a leader in AI technology, and you can barely walk around London without passing an AI company, meetup or training course. If I didn't have a day job, I could fill my time with AI conferences without doing much more than my daily commute. That said, I am quite picky about the ones I go to. I'd never been to the AI Congress before and liked the diverse set of speakers and topics. I was lucky that the team at Logikk had invited me as their guest for the two days. So how did it stack up? Well, day 1 was pitched at a much higher level than some of the other conferences I've been to, with a lot of enterprise and implementation discussions and far fewer talks on technical detail. If you're senior then these conferences are for you. If you want someone to talk about their latest paper on arXiv then there are far more technical events that will suit you better.

One of the biggest problems I had was that there were three separate tracks and only one of me, so if I didn't make notes on a particular talk, hopefully the slides will be available after the event. I deliberately missed some of the high-profile talks in favour of other speakers, as I'd already heard those speakers at other events.
Continue reading AI Congress London 2018 Day 1

AI better than humans at reading?

News that AI had beaten humans at reading spawned a lot of articles.

I've taken longer than I normally would to respond to some recent news stories about AI "outperforming humans" in reading comprehension "for the first time". Partly because I can't help the wave of annoyance that fills me when I see articles so obviously designed to instil panic and/or awe in the reader without any detail, but also because I feel it's important to do some primary research before refuting anything. The initial story broke that an AI created by Alibaba had met the human threshold on the Stanford Question Answering Dataset (SQuAD), followed closely by Microsoft outperforming Alibaba and (slightly) exceeding the human score. Always a safe bet for sensationalism, the mainstream media pounced on the results to announce that millions of jobs are at risk… So what's really going on?
Continue reading AI better than humans at reading?
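To put the "human score" in context: SQuAD leaderboard entries are scored with two metrics, exact match and token-level F1, against reference answers. Here is a rough sketch of those two metrics in Python, simplified from the official evaluation script (which also takes the best score over multiple reference answers):

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lower-case, strip punctuation, articles and extra whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, truth: str) -> float:
    """1.0 if the normalised strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(truth))

def f1(prediction: str, truth: str) -> float:
    """Token-overlap F1 between a predicted answer and a reference answer."""
    pred_tokens = normalize(prediction).split()
    truth_tokens = normalize(truth).split()
    common = Counter(pred_tokens) & Counter(truth_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))  # 1.0 after normalisation
print(round(f1("in Paris, France", "Paris"), 2))        # 0.5: partial credit via token overlap
```

Roughly speaking, the "human performance" figure in the headlines is just these same numbers computed on crowdworkers' answers, which is a much narrower claim than "reading better than humans".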

Fooling AI and transparency – testing and honesty is critical

The effect of digitally changing an image on a classifier. Can you tell the difference between the pictures? Image from Brendel et al., 2017.

If you follow my posts on AI (here and on other sites) then you'll know that I'm a big believer in ensuring that AI models are thoroughly tested and that their accuracy, precision and recall are clearly identified. Indeed, my submission to the Science and Technology select committee earlier this year highlighted this need, even though the algorithms themselves may never be transparent. It was not a surprise in the slightest that a paper was released on tricking "black box" commercial AI into misclassification with minimal effort.
Continue reading Fooling AI and transparency – testing and honesty is critical
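The Brendel et al. paper works in the decision-based setting: the attacker only ever sees the model's final label, not its gradients or scores. The sketch below is a heavily simplified illustration of that idea, not the paper's actual algorithm; `predict_label` is a hypothetical placeholder for whichever black-box model is under test, and the step-size scheme is my own simplification.

```python
import numpy as np

def predict_label(image: np.ndarray) -> int:
    """Hypothetical black-box classifier: we only see the top-1 label."""
    raise NotImplementedError("Replace with a call to the model under test")

def decision_based_attack(original: np.ndarray, start_adversarial: np.ndarray,
                          true_label: int, steps: int = 1000,
                          step_size: float = 0.01, seed: int = 0) -> np.ndarray:
    """Simplified sketch in the spirit of Brendel et al. (2017): start from an
    image the model already misclassifies and creep towards the original,
    only accepting moves that keep the prediction wrong."""
    rng = np.random.default_rng(seed)
    adversarial = start_adversarial.astype(np.float64).copy()
    for _ in range(steps):
        direction = original - adversarial
        # Small step towards the original image, plus a little random jitter.
        proposal = adversarial + step_size * direction
        proposal += step_size * np.abs(direction).mean() * rng.normal(size=adversarial.shape)
        proposal = np.clip(proposal, 0.0, 1.0)
        if predict_label(proposal) != true_label:  # still fooled? keep the move
            adversarial = proposal
    return adversarial
```

The end result, under these assumptions, is an image that looks essentially identical to the original to a human but is still classified as something else, which is exactly the kind of failure that thorough testing and honest reporting of metrics should surface.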

Cozmo – a good present?

Cozmo – image from Anki

One of the toys that's been advertised heavily in the UK this Christmas has been Cozmo, with its "Big Brain, Bigger Personality" strapline. I got one last year and it was a great present. Let's get this out there: Cozmo is relatively expensive. For about £150 there are a lot of other things you might prefer to buy for a child (or an adult) for what is, on the surface, "just a toy". If you treat it as such then maybe it's not the right thing for you, but viewing Cozmo as a simple toy sells him far shorter than he deserves. He is a lot of fun to play with, and the more you play with him, the more he begins to do.
Continue reading Cozmo – a good present?

When did AI not being as good as humans become a news item?

StarCraft 2 is a big thing in e-sports – can AI live up to the human players?

I get very tired of clickbaity journalism hyping up minor advances in AI, making news stories out of nothing, or out of results that are really only of interest to those in the industry. You know the type: "Facebook AI had to be shut down", "Google creates self-learning AI".

I demystify a lot of these when I'm asked about them – technology should be accessible and understandable, and I deplore the tendency to chase article hits with over-egged, misleading headlines. What amused me over the weekend was that an AI not beating a human was itself a news story, one in which the AI was "trounced".
Continue reading When did AI not being as good as humans become a news item?