AI for understanding ambiguity

Please do not park bicycles against these railings as they may be removed – the railings or the bikes? Understanding the meaning is easy for us, harder for machines

Last year I wrote a post on whether machines could ever think1. Recently, in addition to all the general chatbot competitions, there has been a new type of test for deeper contextual understanding rather than the dumb and obvious meanings of words. English2 has a rich variety of meanings for its words, with the primary sense being the most common and the secondary and tertiary meanings listed further down in the dictionary. It’s probably been a while since you last sat down and read a dictionary, or even used an online one other than to find a synonym or antonym or to check your spelling3, but as humans we rely mostly on the vocabulary and context we’ve picked up from education and experience.
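
Word-sense inventories such as WordNet make that primary/secondary/tertiary ordering concrete. Here’s a minimal sketch, assuming Python with NLTK and its WordNet corpus installed; the word “park” is my own illustrative choice, not from the post:

```python
# Minimal sketch: list a word's dictionary senses with WordNet.
# Assumes NLTK is installed and the corpus has been fetched via nltk.download('wordnet').
from nltk.corpus import wordnet as wn

for synset in wn.synsets("park"):
    # Senses are listed roughly from most to least common, mirroring the
    # primary/secondary/tertiary ordering of a printed dictionary entry.
    print(synset.name(), "-", synset.definition())
```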

Continue reading AI for understanding ambiguity

ReWork DL Boston 2016 – Day 2

Me, networking at breakfast

This is a summary of day 2 of the ReWork Deep Learning Summit 2016 that took place in Boston, May 12-13th. If you want to read the summary of day 1, you can find my notes here. Continue reading ReWork DL Boston 2016 – Day 2

ReWork DL Boston 2016 – Day 1

Last year, I blogged about the ReWork Deep Learning conference in Boston and, being here for the second year in a row, I thought I’d do the same. Here’s the summary of day 1.

The day started with a great intro from Jana Eggers, with a positive message about nurturing this AI baby that is being created rather than the doomsday scenario that is regularly spouted. Ours is a collaborative discipline spanning academia and industry, and we can focus on how we use this for good. Continue reading ReWork DL Boston 2016 – Day 1

Can machines think?

The following tweet appeared on my timeline today:

Initially I thought “heh, fair point – we are defining that the only true intelligence is described by the properties humans exhibit”, and my in-built Twitter filter1 ignored the inaccuracies of the analogy. I clicked on the tweet as I wanted to see what the responses were and whether there was a better metaphor that I could talk about. There wasn’t – the responses were mainly variants on the deficiencies of the analogy, and equally problematic in their own right. While this didn’t descend into anything abusive2, I do feel that the essence of what the author was trying to convey was lost, and this will be a continual problem with Twitter. One of the better responses3 did point out that cherry-picking a single feature was not the same as the Turing Test. However, this did get me thinking, based on my initial interpretation of the tweet.

In order to answer a big question, we simplify it in some way. Turing simplified “Can machines think?” to “Can machines fool humans into thinking they are human?”. Continue reading Can machines think?

Facebook’s latest deep learning research

Facebook’s auto generated images

A few days ago, researchers from Facebook published a paper on a deep learning technique to create “natural images”, with the result that human subjects were convinced 40% of the time that they were looking at a real image rather than an automatically generated one. When I saw the tweet linking to this, one of the comments1 indicated that you’d “need a PhD to understand” the paper, and thus to make any use of the code Facebook may release.

I’ve always been a big believer in knowledge being accessible, both in the sense of being freely available (as their paper is) and in the sense that any individual who wants to understand the concepts presented should be able to, even without extensive training in that specialism. So, as someone who does have a PhD and who is in the deep learning space: challenge accepted. Here I’ll discuss what Facebook have done in a way that doesn’t require advanced degrees, but rather just a healthy interest in the field2. Continue reading Facebook’s latest deep learning research

ReWork DL Boston 2015 – Day 2

This post is a very high-level summary of Day 2 at the Boston ReWork Deep Learning Summit 2015. Day 1 can be found here.

The first session kicked off with Kevin O’Brian from GreatHorn.  There are 3 major problems facing the infosec community at the moment:

  1. Modern infrastructure is far more complex than it used to be – we are using AWS and Azure as extensions of our physical networks, along with services such as GitHub for code repositories and Docker for automation. It is very difficult for any IT professional to keep up with all of the potential vulnerabilities and ensure that everything is secure.
  2. (Security) technical debt – there is too much to monitor and fix, even if businesses freed up the time and funds to address it.
  3. Shortfall in skilled people – the industry is short an estimated 1.5 million infosec professionals, and this isn’t going to be resolved quickly.

Continue reading ReWork DL Boston 2015 – Day 2

ReWork DL Boston 2015 – Day 1

Brain-like computing

So, day one of the ReWork Deep Learning Summit Boston 2015 is over. A lot of interesting talks and demonstrations all round. All talks were recorded, so I will update this post with links as the recordings become available – I know I’ll be rewatching them.

Following a brief introduction, the day kicked off with a presentation from Christian Szegedy of Google looking at the deep learning they had set up to analyse YouTube videos. They’d taken the traditional networks used in Google and made them smaller, discovering that an architecture with several layers of small networks was more computationally efficient than larger ones, with a five-level architecture (inception-5) the most efficient. Several papers were referenced, which I’ll need to look up later, but the results looked interesting.
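
Stacking several small networks in parallel and concatenating their outputs is the core idea behind the Inception family of architectures. Here’s a minimal sketch of an Inception-style block, assuming PyTorch; the channel sizes are my own illustrative choices, not the ones from the talk:

```python
# Minimal sketch of an Inception-style block: a few small convolutions run in
# parallel, then their outputs are concatenated along the channel dimension.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.branch1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=1),    # 1x1 conv to reduce channels
            nn.Conv2d(16, 24, kernel_size=3, padding=1),  # then a cheap 3x3 conv
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=1),
            nn.Conv2d(8, 12, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return torch.cat([self.branch1(x), self.branch3(x), self.branch5(x)], dim=1)

x = torch.randn(1, 32, 28, 28)      # one 32-channel feature map
print(InceptionBlock(32)(x).shape)  # torch.Size([1, 52, 28, 28])
```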

Continue reading ReWork DL Boston 2015 – Day 1

Machine Learning – A Primer

If you’ve been following this blog you’ll know that I’ve started a new role that requires me to build a deep learning system, and I’ve been catching up on the 10+ years of research since I completed my PhD. With a background in computing and mathematics I jumped straight into what I thought would be a skim through the literature. I soon realised that it would be better all round to jump back to first principles rather than be constrained by the methods I had learned over a decade ago.

So, I found a lot of universities that had put their machine learning courses online and decided to work through what’s out there as if I were an undergraduate, and then use my experience to build on top of that. I don’t want to miss an advantage because I wasn’t aware of it.

So I picked up two key tutorials from different Professors: Continue reading Machine Learning – A Primer

Machine intelligence – training and plasticity

Brain-like computing

I’m four weeks into my new role, and one of the threads of work I have is looking into machine learning and how it has advanced since my own thesis. The current approach to machine intelligence is via learning networks where the data is abstracted: rather than recognising specifics about the problem, the algorithm learns the common elements of the problem and solution to match an input to the expected output, without needing an exact match. Our brains are very good at this: from a very early age we can distinguish familiar faces from unfamiliar ones, and quickly this progresses to identification in bad light, at different angles, or when the face is partially obscured. Getting machines to do the same has been notoriously difficult. Continue reading Machine intelligence – training and plasticity
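
That “learn the common elements rather than the exact match” behaviour is what generalisation looks like in practice. Here’s a minimal sketch, assuming Python with scikit-learn; the digits dataset and the network size are my own illustrative choices:

```python
# Minimal sketch of generalisation: a small network trained on some handwritten
# digits still recognises digits it has never seen, because it learns common
# features rather than memorising exact images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("accuracy on unseen digits:", model.score(X_test, y_test))
```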

Artificial Intelligence

PersonaSynthetics: Sally

If you’ve been watching anything on Channel 4 recently you’ll have seen a trailer for PersonaSynthetics – advertising the latest home must-have gadget. The ad itself is slightly creepy, despite the smiling family images, and the website supports this sterile AI view to such an extent that some people have expressed concern that a genuine product is available. It’s a fantastic ad campaign for their new series Humans, which in itself looks like it’d be worth a watch (there’s a nice trailer on the website), but it has raised again the issues around artificial intelligence and how far it should go.

This is of particular interest to me as I am starting a new project in machine learning and, while my work isn’t going to lead to a home-based automaton, there are some interesting questions to be considered in this area to ensure that we don’t end up making ourselves obsolete as a species. Continue reading Artificial Intelligence