ReWork Deep Learning London 2016 Day 1 Morning

Entering the conference (c) ReWork

In September 2016, the ReWork team organised another deep learning conference in London.  This is the third of their conferences I have attended, and each time they continue to be a fantastic cross-section of academia, enterprise research and start-ups.  As usual, I took a large number of notes on both days and I'll be putting these up as separate posts; this one covers the morning of day 1.  For reference, the notes from previous events can be found here: Boston 2015, Boston 2016.

Day one began with a brief introduction from Neil Lawrence, who has just moved from the University of Sheffield to Amazon research in Cambridge.  Rather strangely, his introduction finished with him introducing himself, which we all found funny.  His talk was titled the “Data Delusion” and started with a brief history of how digital data has exploded.  By 2002, SVM papers dominated NIPS, but there wasn’t the level of data to make these systems work.  There was a great analogy with the steam engine, originally invented by Thomas Newcomen in 1712 for pumping water out of tin mines, but hugely inefficient due to the amount of coal required.  James Watt took the design and improved on it by adding the condenser, which (in combination with efficient coal distribution) led to the industrial revolution.  Machine learning now needs its “condenser” moment.

Continue reading ReWork Deep Learning London 2016 Day 1 Morning

Amazon Echo Dot (second generation): Review

Echo Dot (c) Amazon

When I attended the ReWork Deep Learning conference in Boston in May 2016, one of the most interesting talks was about the Echo and the Alexa personal assistant from Amazon.  As someone whose day job is AI, it seemed only right to surround myself with as much from other companies as possible.  This week, after a long spell on back order, it finally arrived.  At £50, the Echo Dot is reasonably priced, and the only negative I was aware of before ordering was one reviewer’s comment that the sound quality “wasn’t great”. Continue reading Amazon Echo Dot (second generation): Review

Formula AI – driverless racing

The AI racing car (c) Roborace

We’re all starting to get a bit blasé about self-driving cars now.  They were a novelty when they first came out, but even though the vast majority of us have never seen one, let alone been in one, we know they’re there, that they work, and that they are getting better with each iteration (which comes phenomenally fast).  But after watching Formula 1 racing, it seems a big step from a 30mph trundle around a city to racing around a track at over 200mph with other cars.  Or is it? Continue reading Formula AI – driverless racing

AI for understanding ambiguity

Please do not park bicycles against these railings as they may be removed – the railings or the bikes? Understanding the meaning is easy for us, harder for machines

Last year I wrote a post on whether machines could ever think.  Recently, in addition to all the general chatbot competitions, there has been a new type of test for deeper contextual understanding rather than the dumb and obvious meanings of words.  English words have a rich variety of meanings, with the primary as the most common and then secondary and tertiary meanings further down in the dictionary.  It’s probably been a while since you last sat down and read a dictionary, or even used an online one other than to find a synonym, an antonym or to check your spelling, but as humans we rely mostly on the vocabulary and context that we’ve picked up from education and experience.

Continue reading AI for understanding ambiguity

Rework DL Boston 2016 – Day 2

Me, networking at breakfast

This is a summary of day 2 of the ReWork Deep Learning Summit 2016 that took place in Boston, May 12–13th.  If you want to read the summary of day 1, you can find my notes here. Continue reading Rework DL Boston 2016 – Day 2

Military AI arms race

So yesterday came the news that over 1000 people had signed an open letter requesting a ban on autonomous weapons.  I signed it too.  While AI is advancing rapidly, and the very existence of the letter indicates that research is almost certainly already progressing in this area, as a species we need to think about where to draw the line.

Completely autonomous offensive AI would make its own decisions about who to kill and where to go.  Battlefields are no longer two armies facing off across open fields.  War is far more complex, quite often with civilians mixed in.  Trusting an AI to make those kill decisions in complex scenarios is not something that would sit easily with most of us.  Collateral damage reduced to an “acceptable” probability? Continue reading Military AI arms race

Can machines think?

The following tweet appeared on my timeline today:

Initially I thought “heh, fair point – we are defining that the only true intelligence is described by the properties humans exhibit”, and my in-built Twitter filter ignored the inaccuracies of the analogy.  I clicked on the tweet because I wanted to see the responses and whether there was a better metaphor I could talk about.  There wasn’t – the responses were mainly variants on the deficiencies of the analogy, each equally problematic in its own right.  While the thread didn’t descend into anything abusive, I do feel that the essence of what the tweet was trying to convey was lost, and this will be a continual problem with Twitter.  One of the better responses did point out that cherry-picking a single feature is not the same as the Turing Test.  However, it did get me thinking about my initial interpretation of the tweet.

In order to answer a big question, we simplify it in some way.  Turing simplified “Can machines think?” to “Can machines fool humans into thinking they are human?”. Continue reading Can machines think?

AI for image recognition – still a way to go

Result from IBM watson using images supplied by Kaptur

There’s a lot of money, time and brain power going into various machine learning techniques to take the aggravation out of manually tagging images so that they appear in searches and can be categorised effectively.  However, we are strangely fault-intolerant of machines when they get it wrong – too many “unknowns” and we’re less likely to use the service, but a couple of bad predictions and we’re aghast at how bad the solution is.

With a lot of the big players coming out with image categorisers, there is the question of whether it’s really worth anyone building their own when you can pay a nominal fee to use the API of an existing system.  The only way to really know is to see how well these systems do “in the wild” – sure, they have high precision and recall on their test sets, but when an actual user uploads an image and frowns at the result, something isn’t right. Continue reading AI for image recognition – still a way to go

ReWork DL Boston 2015 – Day 2

This post is a very high level summary of Day 2 at the Boston ReWork Deep Learning Summit 2015.  Day 1 can be found here.

The first session kicked off with Kevin O’Brian from GreatHorn.  There are 3 major problems facing the infosec community at the moment:

  1. Modern infrastructure is far more complex than it used to be – we are using AWS and Azure as extensions of our physical networks, and services such as GitHub as code repositories and Docker for automation.  It is very difficult for any IT professional to keep up with all of the potential vulnerabilities and ensure that everything is secure.
  2. (Security) technical debt – there is too much to monitor and fix, even if the business released the time and funds to address it.
  3. Shortfall in skilled people – there is a shortage of 1.5 million infosec professionals, and this isn’t going to be resolved quickly.

Continue reading ReWork DL Boston 2015 – Day 2

ReWork DL Boston 2015 – Day 1

Brain-like computing

So, day one of the ReWork Deep Learning Summit Boston 2015 is over.  A lot of interesting talks and demonstrations all round.  All talks were recorded, so I will update this post with links as the recordings become available – I know I’ll be rewatching them.

Following a brief introduction, the day kicked off with a presentation from Christian Szegedy of Google looking at the deep learning they had set up to analyse YouTube videos.  They’d taken the traditional networks used in Google and made them smaller, discovering that an architecture with several layers of small networks was more computationally efficient than larger ones, with a five-level network (inception-5) the most efficient.  Several papers were referenced, which I’ll need to look up later, but the results looked interesting.

Continue reading ReWork DL Boston 2015 – Day 1