I’ve been speaking at several events recently, giving practical advice on getting started with AI projects. There is a huge chasm between the high-level inspirational business pieces on all the usual sites [1] that business leaders read and the “getting started in AI” guides that pretty much begin with installing TensorFlow. There was nothing aimed at the non-AI CTO who didn’t want to fall behind. Nothing to show them how to start a project, what talent they’d need, or even which problems to tackle first. Sure, there are plenty of expensive consulting companies out there, but this knowledge shouldn’t be hidden.
This time last year, I sat down with David Kelnar of MMC Ventures and we talked about why so many AI projects don’t succeed. He asked me to contribute some ideas to the new State of AI report for 2019, to which I gladly agreed. It soon became clear that doing the subject justice needed more than a chapter, and so the MMC AI Playbook, which we recently launched, was born. Contributing to this amazing publication took a lot of time and research, and this blog was the thing that had to give.
If you are trying to find the right time to start your first project and need help on where to begin, please take a look at the playbook. Here’s a taster, based on talks I gave at Austin Fraser’s #LeadersInTech event and the Barclays AI Frenzy event, both in July 2019.
The very first thing you need is a common understanding of what AI means. You may think this is obvious, but even within my field, different people have different ideas about what AI is and what it isn’t. Some will tell you that AI is a research field of computer science and get infuriated that businesses claim to be doing AI. Some will tell you that unless a system is capable of replacing a human, it’s not AI. You don’t have to dig far into the AI hashtag on LinkedIn to see people complaining that a company claiming to do AI is really just “NLP” and some statistics under the hood, or just if-this-then-that rules. What’s key is that you are clear on what your AI is and what it is not. Don’t claim that you’re doing deep learning if you’re not. Ask people to clarify what they mean by AI.
I prefer to stick with the original 1950s definition: a system that appears to be intelligent. I prefer this because it shifts the emphasis from the “how” of implementation to the impact.
And impact is the whole point of any successful AI project. If it’s not going to give you a return on investment in terms of speed and scale, accuracy or insight, then all you will have is a money-sink pet project, just so you can put “we do AI” on a slide at your next pitch. With a little care, you can start something significant.
There are six competencies for a successful AI project: problem and buy-in, data, people, development, production, and regulation. Each of these needs careful thought in itself, so let’s take a look at the headlines.
The tools of AI are particularly good at assignment, grouping, generation and forecasting. All businesses have these sorts of challenges, so you should be able to find something. I’m not saying don’t dream big, but you need to realise that state-of-the-art AI projects come with a hefty price tag in terms of both time and talent. Do not start an AI project arbitrarily; begin with focus. Spread yourself too thinly and you won’t get the tangible results you need for future buy-in. Prioritise your problems based on the value you’ll get from the project, and then look at the data you have. Like any other business problem, you need to be clear on your measures of success: is it speed, scalability, accuracy, improved features? You should also think about what will happen with the outputs of the system. Will there be a human in the loop who is informed by the output, or will it directly affect an individual in some way? Unless you are deliberately trying to disrupt an industry or have hired in experts, start with a tangible project that has clear success measures and can act as a starting point for further projects. This will give you the easiest buy-in and require smaller budgets.
The more data you have, the better your models will be. There’s a whole branch of research looking at poorly labelled data and how much signal you need within the noise to get a good result, but all the experiments say the cleaner your data, the better your model. One of the most common questions is: how much data? The honest answer is that it depends on your problem, but as a rule of thumb: for assignment, have 1,000 examples per class; for forecasting, make sure you have at least double the periodicity of the data. You can still work with fewer, and you may even need more for your specific problem, but this gives you a starting point. Remember that more is always better.
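The per-class rule of thumb above is easy to automate as a pre-flight check on a labelled dataset. A minimal sketch, assuming an in-memory list of labels (the function name and the threshold constant are illustrative, not from the playbook):

```python
# Hypothetical sanity check: flag classes that fall short of the
# ~1,000-examples-per-class rule of thumb for assignment problems.
from collections import Counter

MIN_PER_CLASS = 1000  # rule of thumb, adjust for your specific problem

def undersized_classes(labels, min_per_class=MIN_PER_CLASS):
    """Return {class: count} for classes with fewer examples than the threshold."""
    counts = Counter(labels)
    return {cls: n for cls, n in counts.items() if n < min_per_class}

labels = ["cat"] * 1200 + ["dog"] * 300
print(undersized_classes(labels))  # {'dog': 300}
```

Running a check like this before training makes class imbalance visible early, rather than discovering it in a confusion matrix later.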
What is critical is to understand where your data comes from, what permissions you have to use it, and which models it was used in. If you are taking data from the web, make sure that you are allowed to use it. Individuals are increasingly data-savvy, and GDPR has some grey areas about whether you need to retrain models if someone asks you to remove their data. If the outputs of your models create protected variables, make sure you treat them sensitively.
Have someone in your company who really understands statistics: the different types of sampling errors and the impact these can have on your results. If you only look at your own customer data, will that tell you anything about potential new customers? No; you are only sampling people who are already engaged. The graph shows the spread of potential error for different sample sizes that give an average of 50%: the bigger your sample, the more confident you can be in your results. The Wikipedia page on sampling errors is very accessible and a great starting place.
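The sample-size effect described above can be reproduced with the standard normal-approximation formula for the margin of error of a proportion. A quick sketch (the 50% proportion matches the example in the text; the function name is illustrative):

```python
# Illustrative calculation: 95% margin of error for a sample proportion,
# showing how uncertainty shrinks as the sample size grows.
import math

def margin_of_error(p, n, z=1.96):
    """Normal-approximation margin of error for a proportion (z=1.96 -> ~95%)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 1000, 10000):
    print(n, round(margin_of_error(0.5, n), 3))
# 100 0.098
# 1000 0.031
# 10000 0.01
```

With 100 respondents, a measured 50% could plausibly be anywhere from roughly 40% to 60%; with 10,000, the band narrows to about ±1%. This is the intuition the graph in the text conveys.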
If you are doing very particular niche research, you will need a bunch of PhDs, probably a lot of them, with a large supporting team; however, you can achieve a lot with a few good people. These people don’t necessarily need PhDs, but they do need to demonstrate some key skills, and equally, not all people with PhDs have them [2].
Depending on how you are looking to develop a solution, you need to consider the skill mix. Do not have an isolated individual. Understand the difference between an analyst, scientist, researcher and engineer: candidates and recruiters throw the terms data science and AI around with abandon, so make sure you know what they mean and what skills you expect them to have.

Understand their value. AI hype has really pushed up salaries, and many candidates are just chasing the highest offers. While you will need to pay market rates unless you are offering either equity or a very meaningful problem, you don’t want the people who only care about salary. They will jump as soon as a better offer comes along. You want talent that understands the problems, and where you can, give them engaging, interesting work.

When I’m hiring, I look for: communication to non-specialists; the ability to understand academic papers and implement their ideas; evidence of creative problem solving; and, most importantly, a recognition that in business we have deadlines. This last one is not the same as someone saying they understand because they had a deadline for a NeurIPS paper. It is knowing when to stop and try something else, when to ask for help, and when to use a simpler approach to get a first version out, even if the accuracy isn’t where we’d like it. Agility in thought and activity. This isn’t something that’s easy to capture in a skills list, so you need to take a different approach.

Use specialist recruiters. If an agency representing you can’t speak the candidates’ language, the candidates won’t deal with them. They’ll want to know what techniques and frameworks are being used, and the impact of the project. Go to meetups and conferences; speak to candidates and excite them about your projects. Then, when you’ve got a first version out, talk about it and present what you’re doing. This way, candidates will get to know about your company.
Make sure that the people you bring in truly understand the problem and are curious enough to ask questions about how the results will be used.
Having an isolated development team for AI can be a disaster unless they can create production-ready code, or you’re happy to bear the cost of redoing their work. You may find that the APIs from AWS, Google, IBM and Azure can give you what you need just by running your data through them, in which case all you need are a few AI-minded engineers. Make sure you understand a) whether you have permission to put the data on a third-party service (and where it’s located), and b) what you are giving up (in addition to money) to get this shortcut. It might not be a problem, but if you are trying to sell something unique then you might need to do more than your competitors.
In-house development gives you the potential for something unique and is the only option if you are using sensitive data. This comes at a cost. Not only do you have to pay for the talent, but you’re also looking at managing the development pipelines and balancing time to train against the cost of machines. Cloud servers will give you scalability and flexibility, but if you already have your own hardware then it’s worth considering buying in GPU servers, particularly with the discounts you can get through Nvidia’s small-business support. If you don’t already have the infrastructure and staff to support this, though, it can bring a whole host of hidden costs. The biggest factor in how you develop will be the security of your data: are you allowed to move it off premises? Are you allowed to store it outside the EU? Some of these third-party APIs can be hosted worldwide. This will not only affect the time for the API call but, more importantly, may also contravene data contracts, including GDPR (depending on the type of data you’re using). If in doubt, do it in house.
Actual AI development does mirror normal development processes, but the timelines can be a little more elongated. Make sure you have a team lead who can ensure that work is time-boxed. Avoid open-ended projects. If something needs research, then limit it, with a decision on next steps at the end; this way you can be clear on deliverables in each sprint.
Let your teams use the tools that make life easy. I’ve seen a few companies code direct access to CUDA on the GPUs because the cuDNN interface was too abstracted for their needs. It is extremely unlikely that you will need to do this. It’s okay for there to be different levels of abstraction for development and production as long as they share a common core. For example, my team use Keras on top of TensorFlow to visualise their networks. Our production systems use pure TensorFlow, but the outputs can be dropped straight into our production containers. Don’t restrict development unnecessarily.
Your first version doesn’t have to be perfect, but it does need to give some sort of ROI. Get it seen by the rest of the business and solicit feedback. Make sure you have an automated pipeline that is continually feeding in new data and testing. Testing is another word that’s heavily misused within the AI community, and something I also talk about regularly. If you ask a data scientist or AI researcher whether they’ve tested a model, what they mean is that they’ve taken a curated validation set and determined accuracy, recall and precision [3] (all three of which fall under the catch-all of “accuracy measures”, so make sure you ask for them specifically!). They will not have tested its performance, or under what circumstances it breaks. Something 95% accurate that returns in seconds is usually more useful in business than something 99.999% accurate that returns in days.
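The distinction between the three measures matters in exactly the way described above, and it falls out of the raw confusion counts. A minimal sketch (the counts are made up for illustration):

```python
# Hedged sketch: accuracy, precision and recall computed from raw
# confusion counts (tp/fp/fn/tn values here are illustrative).
def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)  # of everything we flagged, how much was right
    recall = tp / (tp + fn)     # of everything real, how much we caught
    return accuracy, precision, recall

# A 95%-accurate model can still be missing half the rare positives:
print(metrics(tp=5, fp=0, fn=5, tn=90))  # (0.95, 1.0, 0.5)
```

This is why asking only for “accuracy” is dangerous: on imbalanced data, the headline number can look excellent while recall quietly collapses.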
As soon as you release your AI into the wild, humans will try to break it. Humans are spiteful and belligerent, and we have an in-built defence against being made obsolete, so we cannot help but try to outsmart the AI. Whether that’s spraying confusing markings on a road to affect self-driving vehicles, uploading doctored images to classification systems, teasing chatbots, or changing form inputs to get specific results, find the “DROP TABLE” of your AI system. Make sure you’ve accounted for all possible outcomes and deal with the long tail. Again, this doesn’t need perfection, but you need to catch and handle it.
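In practice, “catch and handle it” usually means a guard layer in front of the model: validate inputs before they reach it, and fall back safely on anything unexpected. A minimal sketch, assuming a text classifier behind a callable (the length limit and the fallback messages are assumptions for illustration):

```python
# Illustrative guard layer: reject malformed inputs and catch the long
# tail with a safe fallback instead of letting the model crash.
MAX_LEN = 2000  # assumed limit; tune for your own system

def safe_classify(text, classify):
    """Wrap a model call with input validation and a fallback path."""
    if not isinstance(text, str) or not text.strip():
        return "rejected: empty or non-text input"
    if len(text) > MAX_LEN:
        return "rejected: input too long"
    try:
        return classify(text)
    except Exception:
        # The long tail: anything the model chokes on goes to a human.
        return "fallback: manual review"

print(safe_classify("hello world", lambda t: "ok"))  # ok
print(safe_classify("", lambda t: "ok"))  # rejected: empty or non-text input
```

The guard doesn’t make the model smarter; it just ensures that a belligerent input degrades gracefully instead of producing garbage or an outage.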
As with development, you can get great scalability by putting your systems in the cloud, but you need to be sure that you are secure, and if you need to deploy on GPU then you might find costs scale quickly. For comparison, one of my solutions consists of 29 Docker containers that run in parallel on an image and a paragraph of text; the whole stack can run on my laptop in CPU-only mode in a few seconds.
Finally, on production: techniques are changing rapidly. The architecture I used for models two years ago is not as fast or as accurate as some of the latest methods. Plan and budget for revisiting your solutions, rather than just automatically throwing more data at them.
The last key part is regulation. You may need your AI to be explainable, and you may need full traceability of your data. Build this in from the beginning. You should be able to say with certainty what data was used to train each model, where that data came from, and the permissions associated with it. Every inference, prediction or classification should be attributed to a model. Not only will this help you track down issues quickly, it will also give you an audit trail.
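The attribution requirement above can be as simple as logging a structured record per prediction. A minimal sketch, assuming an in-memory log (all names here, such as `AuditRecord` and `training_data_ref`, are illustrative, and a real system would persist to durable storage):

```python
# Minimal sketch of prediction-level traceability: every output is tied
# to a model and to the provenance of its training data.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    model_id: str           # which trained model produced this output
    training_data_ref: str  # where the training data came from, with permissions
    input_ref: str          # pointer to the input, not the raw data itself
    output: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[AuditRecord] = []

def predict_with_audit(model_id, training_data_ref, input_ref, output):
    """Record an audit entry for every prediction before returning it."""
    audit_log.append(AuditRecord(model_id, training_data_ref, input_ref, output))
    return output

predict_with_audit("spam-clf-v3", "crm-export-2019-06 (consented)", "msg-123", "spam")
print(audit_log[0].model_id)  # spam-clf-v3
```

With records like this, answering “which model made that decision, and what was it trained on?” becomes a query rather than an investigation.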
We are post-GDPR now: everyone should have a data strategy and be correctly managing protected variables, including inferred protected variables. EU citizens have a right to an explanation if an autonomous decision affects them. Read the GDPR in full and protect your customers’ data; in doing so, you’ll protect your company.
Legislation is changing, and there are discussions going on in the UK government right now about the impact of algorithmic decision making; make sure someone in your company is aware of them.
While this has been quite a long, high-level summary, I hope you can appreciate that there are a lot of aspects to getting started with an AI project. We’ve tried to cover everything you need to know in the playbook, so please take a look and download a copy. I’d love to hear your thoughts on it!
[1] Forbes, Medium, LinkedIn etc.

[2] IMHO, a PhD should indicate that an individual has excellent communication skills and the ability to direct their own work and make key decisions about how to progress with a problem, alongside a really good understanding of the practicalities of business. Sadly, not all PhD programmes are equal, and you can’t guarantee these skills from a qualification.

[3] If you don’t know the difference between these, then learn it.