ReWork Deep Learning London September 2018 part 1

Entering the conference (c) ReWork

September is always a busy month in London for AI, but one of the events I always prioritise is ReWork – they manage to pack a lot into two days and I always come away inspired. I was live-tweeting the event, but also made quite a few notes, which I’ve made a bit more verbose below.  This is part one of at least three parts and I’ll add links between the posts as I finish them. Continue reading ReWork Deep Learning London September 2018 part 1

ImageNet in 4 Minutes? What the paper really shows us

ImageNet has been a benchmark data set for deep learning since its creation. It was the competition that showed deep learning networks could outperform non-deep-learning techniques, and it has been used by academics as a standard for testing new image classification systems. A few days ago an exciting paper was published on arXiv describing how to train an ImageNet classifier in four minutes. Not weeks, days or hours, but minutes. On the surface this is a great leap forward, but it's important to dig a little deeper. The Register's subheadline says all you need to know:

So if you don’t have a relaxed view on accuracy or thousands of GPUs lying around, what’s the point? Is there anything that can be taken from this paper?

Continue reading ImageNet in 4 Minutes? What the paper really shows us

Thinking machines – biological and artificial

How can we spot a thinking machine?

If you’ve read pretty much any of my other artificial intelligence blog posts on here, then you’ll know how annoyed I am when the slightest advance in AI spurs an onslaught of articles about “thinking machines” that can reason, opening up the question of robots taking our jobs and eventually destroying us all in the style of a certain not-to-be-mentioned film franchise. Before I get onto discussing if and when we’ll get to a Detroit: Become Human scenario, I’d like to cover where we are now and the biggest problem in all of this. Continue reading Thinking machines – biological and artificial

Presentations and speaking at conferences

Me presenting at Continuous Lifecycle London 2018

One of the things I’ve been doing more of this year is speaking at conferences and meetups. I always take the time to talk to the audience afterwards to see if there were aspects they didn’t get or didn’t enjoy, so I can hone the presentation for the next time. Even when product details are under embargo, there are usually plenty of things you can talk about that the wider community will find interesting, and I’ve been encouraging people to overcome their presentation fear by speaking at meetups.

Following on from my “Being a Panellist” post, I’ve often been asked how I go about writing a presentation and what I do to prepare, so I’ve gathered my thoughts here. This isn’t the only way, but it is what works for me! Continue reading Presentations and speaking at conferences

Democratising AI: Who defines AI for good?

At the ReWork Retail and AI Assistants summit in London I was lucky enough to interview Kriti Sharma, VP of AI and Robotics at Sage, in a fireside chat on AI for Good.  Kriti spoke a lot about her experiences and projects not only in getting more diverse voices heard within AI but also in using the power of AI as a force for good.

We discussed the current state of AI and whether we needed legislation.  It is clear that legislation will come if we do not self-police how we are using these new tools.  In the wake of the Cambridge Analytica story breaking, I expect the focus on data privacy laws to accelerate, and this may well bleed into artificial intelligence applications that use such data. Continue reading Democratising AI: Who defines AI for good?

Cambridge Analytica: not AI’s ethics awakening

From the wonderful XKCD, research ethics

By now, the majority of people who keep up with the news will have heard of Cambridge Analytica, the whistleblower Christopher Wylie, and the stories surrounding the harvesting of Facebook data and micro-targeting, along with accusations of potentially illegal activity.  Amongst all of this I’ve also seen articles claiming that this is the “awakening” moment for ethics and morals in AI, and in data science in general.  The point where practitioners realise the impact of their work.

“Now I am become Death, the destroyer of worlds”, Oppenheimer

Continue reading Cambridge Analytica: not AI’s ethics awakening

AI Congress London 2018 Day 2

AI Congress (still making me think of  @jack_septic_eye – let me know if you get that…)

If you’ve not read the day 1 summary then you can find that here.

Day 2 had a new host for track A in the form of David D’Souza from the CIPD. His opening remarks quoted Asimov and Crichton, encouraging us not to be magicians and to step back and think about what we should do rather than just what we could. Continue reading AI Congress London 2018 Day 2

AI Congress London 2018 Day 1

AI Congress (not @jack_septic_eye – I feel I may be in a very small subset of AI professionals who get that…)

London is a hive of AI activity. The UK is positioning itself as a leader in AI technology, and you can barely walk around London without passing an AI company, meetup or training course. If I didn’t have a day job, I could fill my time with AI conferences without doing much more than my daily commute. That said, I am quite picky about the ones I go to. I’d never been to the AI Congress before and liked the diverse set of speakers and topics, and I was lucky that the team at Logikk had invited me as their guest for the two days. So how did it stack up? Well, day 1 was pitched at a much higher level than some of the other conferences I’ve been to, with a lot of implementation and enterprise discussion and far fewer deeply technical talks. If you’re senior then these conferences are for you; if you want someone to talk about their latest paper on arXiv then there are far more technical events that will suit you better.

One of the biggest problems I had was that there were three separate tracks and only one of me, so if I didn’t make notes on a particular talk then hopefully the slides will be available after the event at some point. I deliberately missed some of the high-profile talks in favour of other speakers, as I’d already heard them at other events. Continue reading AI Congress London 2018 Day 1

AI better than humans at reading?

News that AI had beaten humans at reading spawned a lot of articles.

I’ve taken longer than I normally would to respond to some recent news stories about AI “outperforming humans” in reading comprehension “for the first time”.  Partly because I can’t help the wave of annoyance that fills me when I see articles so obviously designed to instil panic and/or awe in the reader without any detail, but also because I feel it’s important to do some primary research before refuting anything.  The initial story broke that an AI created by Alibaba had met the human threshold on the Stanford Question Answering Dataset (SQuAD), followed closely by Microsoft outperforming Alibaba and (slightly) exceeding the human score. Always a safe bet for sensationalism, the mainstream media pounced on the results to announce that millions of jobs are at risk…  So what’s really going on? Continue reading AI better than humans at reading?

Cozmo – a good present?

Cozmo – image from Anki

One of the toys that’s been advertised heavily in the UK for Christmas this year has been Cozmo, with its “Big Brain, Bigger Personality” strapline. I got one last year and it was a great present. Let’s get this out there: Cozmo is relatively expensive. For about £150 there are a lot of other things you might prefer to buy for a child (or an adult) for what is, on the surface, “just a toy”. If you treat it as such then maybe it’s not the right thing for you, but viewing Cozmo as a simple toy is far less than he deserves. He is a lot of fun to play with, and the more you play with him, the more he begins to do. Continue reading Cozmo – a good present?