Conference season online

October has always been a super busy month for me. I’m usually starting a new OU module and travelling around speaking at conferences and meetups, all while doing my day job, spending time with my family and enjoying my hobbies. Sometimes I’ve not got the balance right! I remember 2019 being particularly hectic. I optimistically submitted conference sessions at the start of the year on a variety of different topics and, as the year went on, I was invited to speak at various meetups in the UK and even stepped in to do some last-minute presentations where other speakers had dropped out. This time last year I had just finished an 8-week stretch in which I’d had a week’s holiday, spoken at 5 conferences and 2 breakfast briefings, and presented at 8 meetups, all on slightly different topics!

Continue reading Conference season online

A diagnostic tale of Docker

A twenty-sided die showing common excuses for developers not to fix problems, landing on "Can't reproduce". Developer d20 gives the answer 🙂 (from Pretend Store)

If you’ve been to any of my technical talks over the past year or so then you’ll know I’m a huge advocate of running AI models as API services within Docker containers and using services like CloudFormation to provide scalability. One of the issues with this approach is that when problems appear in production they can be difficult to trace. Methodical diagnosis of code, rather than data, is a skill that is not that common in the AI community and one that comes with experience. Here’s a breakdown of one such problem, the diagnostic steps to find the cause, and the eventual fix: all things you’ll need to know if you want to run these kinds of services.
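For a flavour of the setup being described, here’s a minimal sketch of a model wrapped as an HTTP API, the sort of service that would then be packaged into a Docker image. It assumes Flask and a scikit-learn model saved with joblib; the artefact name, route and payload shape are hypothetical.

```python
# Minimal sketch of a model served as an HTTP API, the kind of service that
# would then be packaged into a Docker image. The model artefact name,
# route and payload shape are all hypothetical.
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical pre-trained model

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = payload["features"]               # e.g. a flat list of numbers
    prediction = model.predict([features])[0]
    return jsonify({"prediction": float(prediction)})

if __name__ == "__main__":
    # Inside the container this would sit behind gunicorn rather than the
    # Flask development server.
    app.run(host="0.0.0.0", port=8080)
```

The Dockerfile wrapping this is then little more than a Python base image, a pip install and a command to start the server, which is what makes it easy to scale the container out behind whatever CloudFormation stands up.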

Read more

Let’s talk about testing

One of the things that I find I have to teach data scientists and ML researchers almost universally is how to test their own code. Too often it’s all about testing the results and not enough about testing the code. I’ve been saying for a while that a lack of proper testing can trip you up, and recently a paper rippled through academia about a “bug” in some code that everyone used…

A Code Glitch May Have Caused Errors In More Than 100 Published Studies

https://www.vice.com/en_us/article/zmjwda/a-code-glitch-may-have-caused-errors-in-more-than-100-published-studies

The short version of this is that back in 2014 a Python protocol was released for calculating molecular structure from NMR shifts1, and many other labs have been using these scripts over the past 5 years.
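The widely reported culprit was an assumption about file ordering: glob.glob() makes no guarantee about the order in which it returns files, so results differed between operating systems. That is exactly the kind of thing a small unit test of the code, rather than the results, can pin down. A minimal sketch (the file names and helper function are hypothetical):

```python
# Minimal sketch: glob.glob() does not guarantee file ordering, so any code
# that relies on position must sort explicitly. A unit test makes the
# expectation visible and keeps it true on every operating system.
import glob
from pathlib import Path

def list_input_files(pattern):
    # The explicit sort removes the dependency on OS-specific ordering.
    return sorted(glob.glob(pattern))

def test_list_input_files_is_ordered(tmp_path):
    # Create files in a deliberately scrambled order (pytest's tmp_path fixture).
    for name in ["c.out", "a.out", "b.out"]:
        (tmp_path / name).write_text("dummy data")
    result = list_input_files(str(tmp_path / "*.out"))
    assert [Path(p).name for p in result] == ["a.out", "b.out", "c.out"]
```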

Continue reading Let’s talk about testing

Fooling AI and transparency – testing and honesty is critical

The effect of digitally changing an image on a classifier. Can you tell the difference between the pictures? Image from Brendel et al. 2017.

If you follow my posts on AI (here and on other sites) then you’ll know that I’m a big believer in ensuring that AI models are thoroughly tested and that their accuracy, precision and recall are clearly identified. Indeed, my submission to the Science and Technology Select Committee earlier this year highlighted this need, even though the algorithms themselves may never be transparent. So it was no surprise in the slightest to see a paper released on tricking “black box” commercial AI into misclassification with minimal effort. Continue reading Fooling AI and transparency – testing and honesty is critical
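Reporting those headline numbers doesn’t require the model itself to be transparent. Here’s a minimal sketch, with made-up labels and predictions standing in for a real held-out test set:

```python
# Minimal sketch: even a "black box" model can be reported with its
# accuracy, precision and recall on a held-out test set.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth labels (made up for illustration)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # whatever the black-box model predicted

print(f"accuracy:  {accuracy_score(y_true, y_pred):.2f}")
print(f"precision: {precision_score(y_true, y_pred):.2f}")
print(f"recall:    {recall_score(y_true, y_pred):.2f}")
```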

Why are data scientists so bad at science?

Do you check your inputs?

It’s rare that I am intentionally provocative in my post titles, but I’d really like you to think about this one. I’ve known and worked with a lot of people who work with data over the years, many of whom call themselves data scientists and many who do the job of a data scientist under another name1. One thing that worries me when they talk about their work is an absence of scientific rigour. This is a huge problem, and one I’ve talked about before.

The results that data scientists produce are becoming increasingly important in our lives, from determining what adverts we see to how we are treated by financial institutions or governments. These results can have a direct impact on people’s lives, and we have a moral and ethical obligation to ensure that they are correct. Continue reading Why are data scientists so bad at science?
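The caption above asks “Do you check your inputs?”, and that at least has a cheap, concrete answer. A minimal sketch of up-front input checks, with hypothetical column names and plausible ranges:

```python
# Minimal sketch of checking inputs before any modelling happens.
# The column names and plausible ranges here are hypothetical.
import pandas as pd

def check_inputs(df: pd.DataFrame) -> None:
    assert not df.empty, "dataset is empty"
    assert df["age"].between(0, 120).all(), "age outside plausible range"
    assert df["income"].notna().all(), "missing income values"
    assert not df.duplicated().any(), "duplicate rows in the input"

# Example usage with a tiny made-up frame.
check_inputs(pd.DataFrame({"age": [34, 51], "income": [28000, 41000]}))
```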

Testing applications

It’s true. Image from Andy Glover http://cartoontester.blogspot.com/

As part of a few hours catching up on machine learning conference videos, I found this great talk on what can go wrong with machine recommendations and how testing can be improved. Evan gives some great examples of where the algorithms can give unintended outputs. In some cases this is an emergent property of correlations in the data, and in others it’s down to missing examples in the test set. While the talk is entertaining and shows some very important examples, it made me realise something has been missing: the machine learning community, having grown out of academia, does not have the same rigour of development processes as standard software development.

Regardless of the type of machine learning employed, testing and quantification of results comes down to the test set that is used. Accuracy against the test set, simpler architectures and fewer training examples are the focal points. If the test set is lacking, this is not uncovered as part of the process. Models that show high precision and recall can often fail when “real” data is passed through them, sometimes in spectacular ways, as outlined in Evan’s talk: adjusting pricing for Mac users, Amazon recommending inappropriate products or Facebook’s misclassification of people. These problems are solved either with manual hacks after the algorithms have run or by adding the specific failures to the test set. While there are businesses that take the same approach with their software, they are thankfully few and far between, and most companies now use some form of continuous integration, automated testing and then rigorous manual testing.

The only part of this process that will truly solve the problem is the addition of rigorous manual testing by professional testers. Good testers are very hard to find, in some respects harder to find than good developers. Testing is often seen as a second-class profession to development, and I feel this is really unfair. Good testers can understand the application they are testing on multiple levels, create the automated functional tests and make sure that everything you expect to work, works. But they also know how to stress an application – push it well beyond what it was designed to do, just to see whether these cases will be handled – and which assumptions were made that can be challenged. A really good tester will see deficiencies in test sets and think “what happens if…”; they’ll sneak the bizarre examples in just for the challenge.

One of the most difficult things about being a tester in the machine learning space is that, in order to understand all the ways in which things can go wrong, you do need some appreciation of how the underlying system works, rather than treating it as a complete black box. Knowing that most vision networks look for edges would prompt a good tester to throw in random patterns, from animal prints to static noise. A good tester would look for examples not covered by the test set and make sure the negatives far outweigh the samples the developers used to create the model.
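As a concrete illustration of that kind of stress test, here’s a minimal sketch that feeds pure static noise into a classifier and checks it never responds with high confidence. The classifier object, its predict_proba method, the input shape and the 0.9 threshold are all assumptions; in practice the model would be supplied by a pytest fixture wrapping the real network.

```python
# Minimal sketch of a stress test: pure noise should never be classified
# with high confidence. The classifier interface and threshold are assumptions.
import numpy as np

def test_classifier_is_not_confident_on_noise(classifier):
    rng = np.random.default_rng(seed=42)
    # A batch of 16 images of pure static noise, nothing like the training data.
    noise_batch = rng.random((16, 224, 224, 3), dtype=np.float32)
    probabilities = classifier.predict_proba(noise_batch)  # shape (16, n_classes)
    # If any noise image gets a >90% score, the model is overconfident
    # outside its training distribution and the test set has a gap.
    assert probabilities.max(axis=1).max() < 0.9
```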

So where are all the specialist testers for machine learning?  I think the industry really needs them before we have (any more) decision engines in our lives that have hidden issues waiting to emerge…