Thinking machines – biological and artificial

How can we spot a thinking machine?

If you’ve read pretty much any of my other artificial intelligence posts on here then you’ll know how annoyed I am when the slightest advance in AI spurs an onslaught of articles about “thinking machines” that can reason, opening up the question of robots taking our jobs and eventually destroying us all in the style of some not-to-be-mentioned1 film franchise.  Before I get on to discussing if and when we’ll get to a Detroit: Become Human scenario, I’d like to cover where we are and the biggest problem in all this. Continue reading Thinking machines – biological and artificial

AI Congress London 2018 Day 2

AI Congress (still making me think of @jack_septic_eye – let me know if you get that…)

If you’ve not read the day 1 summary then you can find that here.

Day 2 had a new host for track A in the form of David D’Souza from CIPD.  His opening remarks quoted Asimov and Crichton, encouraging us not to be magicians but to step back and think about what we should do rather than just what we could. Continue reading AI Congress London 2018 Day 2

AI Congress London 2018 Day 1

AI Congress (not @jack_septic_eye – I feel I may be in a very small subset of AI professionals who get that…)

London is a hive of AI activity.  The UK is positioning itself as a leader in AI technology and you can barely walk around London without passing an AI company, meetup or training course1.  If I didn’t actually have a day job, I could fill my time with AI conferences without doing much more than my daily commute.  That said, I am quite picky about the ones I go to.  I’d never been to the AI Congress before and liked the diverse set of speakers and topics.  I was lucky that the team at Logikk had invited me as their guest for the two days.  So how did it stack up?  Well, day 1 was pitched at a much higher level than some of the other conferences I’ve been to, with a lot of discussion of enterprise implementation and far fewer talks on technical detail.  If you’re senior then these conferences are for you.  If you want someone to talk about their latest paper on arXiv then there are far more technical events that will suit you better.

One of the biggest problems I had was that there were three separate tracks and only one of me, so where I didn’t make notes on a particular talk, I hope the slides will be made available after the event.  I deliberately missed some of the high-profile talks in favour of other speakers, as I’d already heard them at other events. Continue reading AI Congress London 2018 Day 1

AI better than humans at reading?

News that AI had beaten humans at reading spawned a lot of articles.

I’ve taken longer than I normally would to respond to some recent news stories about AI “outperforming humans” in reading comprehension “for the first time”.  Partly because I can’t help the wave of annoyance that fills me when I see articles so obviously designed to instil panic and/or awe in the reader without any detail, but also because I feel it’s important to do some primary research before refuting anything1.  The initial story broke that an AI created by Alibaba had met2 the human threshold in the Stanford Question Answering Dataset (SQuAD), followed closely by Microsoft outperforming Alibaba and exceeding the human score (slightly).  Always a safe bet for sensationalism, the mainstream media pounced on the results to announce that millions of jobs are at risk…  So what’s really going on? Continue reading AI better than humans at reading?

Submitting evidence to parliament committees

(c) Parliament committee on Science and Technology

I love the fact that here in the UK everyone can be involved in shaping the future of our country, even if a large number of individuals choose not to be; in my eyes, if you don’t get involved then you don’t have the right to complain.  While this most generally applies to the election of our representatives, from local parish councils to our regional MPs (or actually standing yourself)1, there are also a lot of other ways to be involved.  In addition to raising issues with your local representative, parliament has cross-bench committees that seek input from the public to help create policy or consider draft legislation.

Our elected parliament is not made up of individuals who are experts in all fields.  Even government departments are not necessarily headed by individuals with large amounts of relevant experience.  It is critical that these individuals are informed by those with experience and expertise in the issues being considered.  Without this critical input, our democracy is weakened. Continue reading Submitting evidence to parliament committees

WiDS2017: Women in Data Science Reading

Yesterday I had the great pleasure of being part of the global WiDS2017 event showcasing women in all aspects of data science.  The main conference was held at Stanford, but over 75 locations worldwide had rebroadcasts and local events, of which Reading was one.  In addition to spending a great evening with some amazing women, I was asked to speak on the career panel about my experiences and overall journey. Continue reading WiDS2017: Women in Data Science Reading

Artificial images: seeing is no longer believing

Loom.ai can generate a 3D avatar from a single image

“Pics or it didn’t happen” – it’s a common request when telling a tale that might be considered exaggerated.  Usually, supplying a picture or video of the event is enough to convince your audience that you’re telling the truth.  However, we’ve been living in an age of Photoshop for a while and it has (or really should!!!) become a habit to check Snopes and other sites before believing even simple images1 – Snopes even has a tag for images debunked as photoshopped. Continue reading Artificial images: seeing is no longer believing

ReWork Deep Learning London 2016 Day 1 Morning

Entering the conference (c) ReWork

In September 2016, the ReWork team organised another deep learning conference in London.  This is the third of their conferences I have attended and each time they continue to be a fantastic cross-section of academia, enterprise research and start-ups.  As usual, I took a large number of notes on both days and I’ll be putting these up as separate posts; this one covers the morning of day 1.  For reference, the notes from previous events can be found here: Boston 2015, Boston 2016.

Day one began with a brief introduction from Neil Lawrence, who has just moved from the University of Sheffield to Amazon research in Cambridge.  Rather strangely, his introduction finished with him introducing himself, which we all found funny.  His talk was titled “The Data Delusion” and started with a brief history of how digital data has exploded.  By 2002, SVM papers dominated NIPS, but there wasn’t the level of data to make these systems work.  There was a great analogy with the steam engine, originally invented by Thomas Newcomen in 1712 for pumping water out of tin mines, but hugely inefficient due to the amount of coal required.  James Watt took the design and improved on it by adding the condenser, which (in combination with efficient coal distribution) led to the industrial revolution1.  Machine learning now needs its “condenser” moment.

Continue reading ReWork Deep Learning London 2016 Day 1 Morning

Formula AI – driverless racing

The AI racing car (c) Roborace

We’re all starting to get a bit blasé about self-driving cars now.  They were a novelty when they first appeared, but even if the vast majority of us have never seen one, let alone been in one, we know they’re out there, that they work1 and that they are getting better with each iteration (which is phenomenally fast).  But after watching the Formula 1 racing, it seems a big step from a 30mph trundle around a city to racing around a track at over 200mph with other cars.  Or is it? Continue reading Formula AI – driverless racing

Testing applications

It’s true. Image from Andy Glover http://cartoontester.blogspot.com/

As part of a few hours catching up on machine learning conference videos, I found this great talk on what can go wrong with machine recommendations and how testing can be improved.  Evan gives some great examples of where algorithms can give unintended outputs.  In some cases this is an emergent property of correlations in the data; in others, it’s down to missing examples in the test set.  While the talk is entertaining and shows some very important examples, it made me realise something that has been missing: the machine learning community, having grown out of academia, does not have the same rigour in its development processes as standard software engineering.

Regardless of the type of machine learning employed, testing and quantification of results comes down to the test set that is used.  The focal points are accuracy against the test set, simpler architectures and fewer training examples.  If the test set is lacking, this is never uncovered as part of the process.  Models that show high precision and recall can often fail when “real” data is passed through them, sometimes in spectacular ways, as outlined in Evan’s talk: adjusting pricing for Mac users, Amazon recommending inappropriate products or Facebook’s misclassification of people.  These problems are either solved with manual hacks after the algorithms have run or by adding the specific issues to the test set.  While there are businesses that take the same approach with their software, they are thankfully few and far between; most companies now use some form of continuous integration, automated testing and then rigorous manual testing.
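To make that risk concrete, here’s a minimal sketch (entirely hypothetical synthetic data, scikit-learn purely for convenience) of how a model that latches onto a spurious correlation can score highly on its own test split and then stumble on “real” data that lacks the shortcut:

```python
# Sketch: high test-split accuracy can hide failures on data the
# test set never represented. The dataset here is purely synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, spurious=True):
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # the real signal
    if spurious:
        # In the collected data, feature 2 accidentally mirrors the label.
        X[:, 2] = y + rng.normal(scale=0.1, size=n)
    return X, y

X, y = make_data(5000, spurious=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# The held-out split shares the shortcut, so the score looks excellent.
print("test-split accuracy:", accuracy_score(y_te, model.predict(X_te)))

# "Real" data without the accidental correlation tells a different story.
X_real, y_real = make_data(5000, spurious=False)
print("real-world accuracy:", accuracy_score(y_real, model.predict(X_real)))
```

The held-out split shares the accidental correlation with the training data, so the headline score reveals nothing; only data from outside that collection process exposes the problem.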

The only part of this process that will truly solve the problem is the addition of rigorous manual testing by professional testers.  Good testers are very hard to find, in some respects harder than good developers.  Testing is often seen as a second-class profession next to development, and I feel this is really unfair.  Good testers understand the application they are testing on multiple levels, create the automated functional tests and make sure that everything you expect to work, works.  But they also know how to stress an application: push it well beyond what it was designed to do, just to see whether these cases will be handled, and ask what assumptions were made that can be challenged.  A really good tester will see deficiencies in test sets and think “what happens if…”; they’ll sneak the bizarre examples in for the challenge.

One of the most difficult things about being a tester in the machine learning space is that, in order to understand all the ways in which things can go wrong, you do need some appreciation of how the underlying system works, rather than treating it as a complete black box.  Knowing that most vision networks look for edges would prompt a good tester to throw in random patterns, from animal prints to static noise.  A good tester would look for examples not covered by the test set and make sure the negatives far outweigh the samples the developers used to create the model.
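As a rough illustration, here’s the sort of probe script a tester might write; classify() is a hypothetical stand-in for whatever model is actually under test, and the specific patterns are just examples:

```python
# Tester-style probing (sketch): feed a vision model inputs it was never
# designed for and check that it fails gracefully rather than confidently.
import numpy as np

rng = np.random.default_rng(1)

def static_noise(h=224, w=224):
    # Random static -- no real edges or objects at all.
    return rng.integers(0, 256, size=(h, w, 3), dtype=np.uint8)

def stripes(h=224, w=224, period=16):
    # Strong artificial edges, the kind edge-hunting networks latch onto.
    img = np.zeros((h, w, 3), dtype=np.uint8)
    img[:, (np.arange(w) // period) % 2 == 0] = 255
    return img

def classify(image):
    # Hypothetical stand-in for the real model under test;
    # assume it returns a (label, confidence) pair.
    return "unknown", 0.1

for name, img in {"static noise": static_noise(), "stripes": stripes()}.items():
    label, confidence = classify(img)
    # The tester's assertion: nonsense inputs must never yield confident labels.
    assert confidence < 0.9, f"{name} confidently classified as {label}"
    print(f"{name}: {label} ({confidence:.2f}) -- OK")
```

The point isn’t these particular patterns but the assertion at the end: an input containing no valid subject should never come back with a confident label.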

So where are all the specialist testers for machine learning?  I think the industry really needs them before we have (any more) decision engines in our lives that have hidden issues waiting to emerge…