ReWork Deep Learning London September 2018 part 3

This is part 3 of my summary of ReWork Deep Learning London September 2018. Part 1 can be found here, and part 2 here.

Day 2 of ReWork started with some fast start-up pitches. Due to a meeting at the office I missed all of these and only arrived at the first coffee break. So if you want to check out what 3D Industries, Selerio, DeepZen, Peculium and PipelineAI are doing, check their websites. Continue reading ReWork Deep Learning London September 2018 part 3

ReWork Deep Learning London September 2018 part 2

This is part 2 of my summary of the ReWork Deep Learning Summit that took place in London in September 2018, and covers the afternoon of day 1. Part one, which looks at the morning sessions, can be found here. Continue reading ReWork Deep Learning London September 2018 part 2

ReWork Deep Learning London September 2018 part 1

Entering the conference (c) ReWork

September is always a busy month in London for AI, but one of the events I always prioritise is ReWork – they manage to pack a lot into two days and I always come away inspired. I was live-tweeting the event, but also made quite a few notes, which I’ve made a bit more verbose below.  This is part one of at least three parts and I’ll add links between the posts as I finish them. Continue reading ReWork Deep Learning London September 2018 part 1

ImageNet in 4 Minutes? What the paper really shows us

ImageNet has been a benchmark data set for deep learning since its creation.  It was the competition that showed that DL networks could outperform non-ML techniques, and it has been used by academics as a standard for testing new image classification systems.  A few days ago an exciting paper was published on arXiv describing training on ImageNet in four minutes.  Not weeks, days or hours, but minutes.  On the surface this is a great leap forward, but it's important to dig beneath the surface.  The Register's sub-headline says all you need to know:

So if you don’t have a relaxed view on accuracy or thousands of GPUs lying around, what’s the point? Is there anything that can be taken from this paper?
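Results like these are usually reported as top-1 or top-5 accuracy on the ImageNet validation set, which is why a "relaxed view on accuracy" matters so much. As a minimal sketch (toy scores and labels of my own, purely illustrative), top-k accuracy can be computed like this:

```python
import numpy as np

def top_k_accuracy(logits, labels, k=5):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    top_k = np.argsort(logits, axis=1)[:, -k:]       # indices of the k largest scores per row
    hits = np.any(top_k == labels[:, None], axis=1)  # is the true label in the top k?
    return float(hits.mean())

# Toy scores for 3 samples over 4 classes (illustrative numbers only)
logits = np.array([[0.1, 0.2, 0.6, 0.1],
                   [0.5, 0.1, 0.2, 0.2],
                   [0.1, 0.7, 0.1, 0.1]])
labels = np.array([2, 3, 0])
print(top_k_accuracy(logits, labels, k=1))  # 1 of 3 samples correct at top-1
```

A model can look impressive at top-5 while being much weaker at top-1, so the choice of k (and the accuracy figure quoted) changes the story considerably.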

Continue reading ImageNet in 4 Minutes? What the paper really shows us

Thinking machines – biological and artificial


How can we spot a thinking machine?

If you’ve read pretty much any of my other artificial intelligence blog posts on here then you’ll know how annoyed I am when the slightest advance in the achievements of AI spurs an onslaught of articles about “thinking machines” that can reason, opening up the question of robots taking our jobs and eventually destroying us all in some not-to-be-mentioned1 film franchise style.  Before I get onto discussing if and when we’ll get to a Detroit: Become Human scenario, I’d like to cover where we are and the biggest problem in all this. Continue reading Thinking machines – biological and artificial

Democratising AI: Who defines AI for good?

At the ReWork Retail and AI Assistants summit in London I was lucky enough to interview Kriti Sharma, VP of AI and Robotics at Sage, in a fireside chat on AI for Good.  Kriti spoke a lot about her experiences and projects not only in getting more diverse voices heard within AI but also in using the power of AI as a force for good.

We discussed the current state of AI and whether we need legislation.  It is clear that legislation will come if we do not self-police how we use these new tools.  In the wake of the Cambridge Analytica story breaking, I expect the focus on data privacy laws to accelerate, and this may bleed into artificial intelligence applications that use such data. Continue reading Democratising AI: Who defines AI for good?

Cambridge Analytica: not AI’s ethics awakening

From the wonderful XKCD, research ethics

By now, the majority of people who keep up with the news will have heard of Cambridge Analytica, the whistleblower Christopher Wylie, and the news surrounding the harvesting of Facebook data and micro-targeting, along with accusations of potentially illegal activity.  Amongst all of this news I’ve also seen articles claiming that this is the “awakening” moment for ethics and morals in AI and data science in general.  The point where practitioners realise the impact of their work.

“Now I am become Death, the destroyer of worlds”, Oppenheimer

Continue reading Cambridge Analytica: not AI’s ethics awakening

AI Congress London 2018 Day 2

AI Congress (still making me think of  @jack_septic_eye – let me know if you get that…)

If you’ve not read the day 1 summary then you can find that here.

Day 2 had a new host for track A in the form of David D’Souza from CIPD. His opening remarks quoted Asimov and Crichton, encouraging us not to be magicians and to step back and think about what we should do rather than just what we could. Continue reading AI Congress London 2018 Day 2

AI Congress London 2018 Day 1

AI Congress (not @jack_septic_eye – I feel I may be in a very small subset of AI professionals who get that…)

London is a hive of AI activity. The UK is positioning itself as a leader in AI technology and you can barely walk around London without passing an AI company or meetup or training course1. If I didn’t actually have a day job, I could fill my time with AI conferences without doing much more than my daily commute. That said, I am quite picky about the ones I go to. I’d never been to the AI Congress before and liked the diverse set of speakers and topics.  I was lucky that the team at Logikk had invited me as their guest for the two days. So how did it stack up? Well, day 1 was at a much higher level than some of the other conferences I’ve been to, with a lot of discussion of implementation and enterprise concerns and far fewer talks on technical detail. If you’re senior then these conferences are for you. If you want someone to talk about their latest paper on arXiv then there are far more technical events that will suit you better.

One of the biggest problems I had was that there were three separate tracks and only one of me, so if I didn’t make notes on a particular talk then hopefully the slides will be available after the event at some point. I deliberately skipped some of the high-profile talks in favour of other speakers, as I’d already heard those speakers at other events. Continue reading AI Congress London 2018 Day 1

Fooling AI and transparency – testing and honesty is critical

The effect of digitally changing an image on a classifier. Can you tell the difference between the pictures? Image from Brendel et al 2017.

If you follow my posts on AI (here and on other sites) then you’ll know that I’m a big believer in ensuring that AI models are thoroughly tested and that their accuracy, precision and recall are clearly identified.  Indeed, my submission to the Science and Technology select committee earlier this year highlighted this need, even though the algorithms themselves may never be transparent.  It was no surprise in the slightest that a paper has been released on tricking “black box” commercial AI into misclassification with minimal effort. Continue reading Fooling AI and transparency – testing and honesty is critical
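For readers less familiar with the metrics mentioned above: precision and recall are simple to compute once predictions are paired with true labels. A minimal sketch, using toy data of my own for illustration:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall for one class from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of the positives predicted, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of the actual positives, how many were found
    return precision, recall

# Toy predictions: 3 true positives, 1 false positive, 1 false negative
y_true = [1, 1, 1, 0, 1, 0]
y_pred = [1, 1, 1, 1, 0, 0]
print(precision_recall(y_true, y_pred))  # (0.75, 0.75)
```

Reporting both numbers (rather than accuracy alone) is exactly the kind of transparency the post argues for, since a classifier can score high accuracy while quietly missing most of the cases that matter.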