This is a summary of day 2 of the ReWork Deep Learning Summit 2016, which took place in Boston on May 12-13th. If you want to read about day 1, you can find my notes here. Continue reading Rework DL Boston 2016 – Day 2
So yesterday there was the news that over 1000 people had signed an open letter requesting a ban on autonomous weapons. I signed it too. While AI is advancing rapidly and the very existence of the letter indicates that research is almost certainly already progressing in this area, as a species we need to think about where to draw the line.
A completely autonomous offensive AI would make its own decisions about who to kill and where to go. Battlefields are no longer two armies facing off across an open field. War is far more complex, quite often with civilians mixed in. Trusting an AI to make those kill decisions in complex scenarios is not something that would sit easily with most people. Collateral damage reduced to an “acceptable” probability? Continue reading Military AI arms race
The following tweet appeared on my timeline today:
The Turing test is like saying planes don’t fly unless they can fool birds into thinking they’re birds. (h/t Peter Norvig) #AI
— Pedro Domingos (@pmddomingos) July 19, 2015
Initially I thought “heh, fair point – we are defining that the only true intelligence is one described by the properties humans exhibit”, and my in-built twitter filter1 ignored the inaccuracies of the analogy. I clicked on the tweet to see what the responses were and whether there was a better metaphor I could talk about. There wasn’t – the responses were mainly variants on the deficiencies of the analogy, and equally problematic in their own right. While this didn’t descend into anything abusive2, I do feel that the essence of what the tweet was trying to convey was lost, and this will be a continual problem with twitter. One of the better responses3 did point out that cherry-picking a single feature is not the same as the Turing Test. However, this did get me thinking, based on my initial interpretation of the tweet.
In order to answer a big question, we simplify it. Turing simplified “Can machines think?” to “Can machines fool humans into thinking they are human?”. Continue reading Can machines think?
There’s a lot of money, time and brain power going into various machine learning techniques to take the aggravation out of manually tagging images so that they appear in searches and can be categorised effectively. However, we are strangely fault-intolerant of machines when they get it wrong: too many “unknowns” and we’re less likely to use the service, but a couple of bad predictions and we’re aghast at how bad the solution is.
With a lot of the big players coming out with image categorisers, there is the question as to whether it’s really worth anyone building their own when you can pay a nominal fee to use the API of an existing system. The only way to really know is to see how well these systems do “in the wild” – sure, they have high precision and recall on their test sets, but when an actual user uploads an image and frowns at the result, something isn’t right. Continue reading AI for image recognition – still a way to go
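As a quick refresher on the metrics mentioned above, precision (how many of the tags the model applied were right) and recall (how many of the true tags it found) can be computed for a single label like this – a minimal sketch of my own, with made-up example data:

```python
def precision_recall(predictions, truths):
    """Precision and recall for one label.

    predictions/truths are parallel lists of booleans: did the
    model / a human say the label applies to each image?
    """
    tp = sum(p and t for p, t in zip(predictions, truths))      # true positives
    fp = sum(p and not t for p, t in zip(predictions, truths))  # false positives
    fn = sum(not p and t for p, t in zip(predictions, truths))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Three images: the model tags two as "cat", only one correctly,
# and misses one real cat.
print(precision_recall([True, True, False], [True, False, True]))  # (0.5, 0.5)
```

A model can score well on both of these against a curated test set and still fail visibly on the messier images real users upload – which is exactly the gap the post is describing.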
The first session kicked off with Kevin O’Brian from GreatHorn. There are 3 major problems facing the infosec community at the moment:
- Modern infrastructure is far more complex than it used to be – we are using AWS and Azure as extensions of our physical networks, and services such as GitHub for code repositories and Docker for automation. It is very difficult for any IT professional to keep up with all of the potential vulnerabilities and ensure that everything is secure.
- (Security) Technical debt – there is too much to monitor/fix even if business released the time and funds to address it.
- Shortfall in skilled people – there is a shortage of 1.5 million infosec professionals, and this isn’t going to be resolved quickly.
So, day one of the ReWork Deep Learning Summit Boston 2015 is over. A lot of interesting talks and demonstrations all round. All talks were recorded so I will update this post as they become available with the links to wherever the recordings are posted – I know I’ll be rewatching them.
Following a brief introduction, the day kicked off with a presentation from Christian Szegedy of Google looking at the deep learning they had set up to analyse YouTube videos. They’d taken the traditional networks used in Google and made them smaller, discovering that an architecture with several layers of small networks was more computationally efficient than larger ones, with a five-level version (inception-5) the most efficient. Several papers were referenced, which I’ll need to look up later, but the results looked interesting.
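One back-of-envelope way to see why stacks of small networks can beat one large one – my own sketch, not from the talk, and the channel width of 64 is an arbitrary assumption – is to compare the weight counts of one 5×5 convolution against two stacked 3×3 convolutions, which cover the same receptive field:

```python
def conv_params(kernel, c_in, c_out):
    """Weight count of one square conv layer (biases ignored)."""
    return kernel * kernel * c_in * c_out

c = 64  # assumed channel width, purely for illustration
one_5x5 = conv_params(5, c, c)       # a single large filter
two_3x3 = 2 * conv_params(3, c, c)   # two stacked small filters

print(one_5x5, two_3x3)  # 102400 73728
print(1 - two_3x3 / one_5x5)         # roughly 28% fewer weights
```

Fewer weights means less computation per position and less to learn, which is the rough intuition behind factoring big filters into stacks of small ones.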
While on my flight to Boston for ReWorkDL I watched Ex Machina, the latest “must see” AI film. I’d been warned by my husband (who’d flown home just the day before!) that it wasn’t very good, but since he’d already seen it I thought I’d better take the chance to watch it, as it’s unlikely to be something we’d watch together in the future. If you haven’t seen it, please be aware that this post contains spoilers, so read on with caution. Continue reading Ex Machina – film review
I’m four weeks into my new role, and one of the threads of work I have is looking into machine learning and how it has advanced since my own thesis. The current approach to machine intelligence is via learning networks where the data is abstracted: rather than recognising specifics about the problem, the algorithm learns the common elements of problem and solution to match an input to the expected output, without needing an exact match. Our brains are very good at this: from a very early age we can tell familiar faces from unfamiliar ones, and quickly this progresses to identification in bad light, from different angles, and when the face is partially obscured. Getting machines to do the same has been notoriously difficult. Continue reading Machine intelligence – training and plasticity
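To make “matching an input to the expected output without an exact match” concrete, here is a toy sketch – entirely my own, not from the thesis or the post – using a nearest-centroid classifier. The names and the two crude “face features” are invented for illustration:

```python
def centroid(points):
    """Average each feature across a list of example points."""
    return [sum(xs) / len(xs) for xs in zip(*points)]

def classify(x, centroids):
    """Return the label whose centroid is closest to x."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(x, centroids[label]))

# Two "faces" described by two made-up features
# (say, eye spacing and nose length):
training = {
    "alice": [(1.0, 2.0), (1.1, 2.1), (0.9, 1.9)],
    "bob":   [(3.0, 0.5), (3.2, 0.4), (2.8, 0.6)],
}
centroids = {name: centroid(pts) for name, pts in training.items()}

# A measurement never seen in training still maps to the right person.
print(classify((1.05, 2.3), centroids))  # alice
```

The learner never stores the specific examples it must match exactly; it abstracts them into an average, so a new, slightly different input still lands on the right answer – a crude stand-in for what learning networks do at much larger scale.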
If you’ve been watching anything on Channel 4 recently you’ll have seen a trailer for PersonaSynthetics – advertising the latest home must-have gadget. The ad itself is slightly creepy, despite the smiling family images, and the website supports this sterile AI view to the extent that some people have expressed concern that a genuine product might be available. It’s a fantastic ad campaign for their new series Humans, which in itself looks like it’d be worth a watch (there’s a nice trailer on the website), but it has raised again the issues around artificial intelligence, and how far it should go.
This is of particular interest to me as I am starting a new project in machine learning and, while my work isn’t going to lead to a home based automaton, there are some interesting questions to be considered in this area to ensure that we don’t end up making ourselves obsolete as a species. Continue reading Artificial Intelligence
So, a few days ago, the internet had a new toy: How Old Robot – a very simple website where you can upload a photograph and it will guess your age and gender. For many people the guess was about right, but there were some howlers, with very similar images being uploaded and giving age results differing by (several) decades!
The site doesn’t hide the fact that it’s a learning tool based on Microsoft’s facial recognition technology, built on the Azure platform as an example of how quick it is to build and deploy sites using Azure. What started as a quick demo from the Build conference soon went viral, with people all over the world loading their photos into the app and sharing the results on social media. This is exactly what Microsoft wanted – they’ve been oh so clever with this, and here’s why.