AI Congress London 2018 Day 1

AI Congress (not @jack_septic_eye – I feel I may be in a very small subset of AI professionals who get that…)

London is a hive of AI activity. The UK is positioning itself as a leader in AI technology and you can barely walk around London without passing an AI company, meetup or training course1. If I didn’t actually have a day job, I could fill my time with AI conferences without doing much more than my daily commute. That said, I am quite picky about the ones I go to. I’d never been to the AI Congress before and liked the diverse set of speakers and topics. I was lucky that the team at Logikk had invited me as their guest for the two days. So how did it stack up? Well, day 1 was pitched at a much higher level than some of the other conferences I’ve been to, with a lot of enterprise and implementation discussions and far fewer deeply technical talks. If you’re senior then these conferences are for you. If you want someone to talk about their latest paper on arXiv then there are far more technical events that will suit you better.

One of the biggest problems I had was that there were three separate tracks and only one of me, so if I didn’t make notes on a particular talk then hopefully the slides will be available after the event at some point. I deliberately skipped some of the high-profile talks in favour of other speakers, as I’d already heard those speakers at other events.

The day started with opening remarks by Rob McCargow of PwC. The points that resonated with me were that AI needs to be responsible, the benefits need to be spread, and that there needs to be a culture of trust in technology companies. He was a great compère for the day and I wish I’d had the chance to catch up with him further2.

The keynote was by the amazing Melanie Warrick from Google Cloud. She reminded the room that the destination is always the return on investment – what is the problem to be solved and how will it improve things? AI is a tool to get to that destination. There were some great practical examples too of companies who had made real gains. Compology are using IoT and computer vision to ensure that waste is only picked up when necessary. Pickups have been reduced by 50%, saving costs and reducing the environmental impact of truck journeys. Find the problem, then get the data, and then look at the tools you need. Wise advice for anyone looking at implementing an AI strategy. Crucially, you can’t have models in isolation; you need an end-to-end pipeline that will solve the problem. Melanie has kindly already posted her slides here.

The second talk was from Sue Daley from TechUK. She took the recent conversations about AI in the UK and gave a detailed picture of what needs to be done to realise that potential. There were four key aspects:

  • Fill the digital skills gap. From encouraging diversity to ensuring that our young people have the skills necessary for the future workplace, this is a problem that needs to be solved. While not everyone needs to be a developer, digital skills are essential for life in the 21st century.
  • Balance of interaction. Jobs will change. We need to consider how this will happen and ensure that there is a sensible approach that does not stifle creativity.
  • Trust and confidence. Echoing Rob’s opening sentiment, people need to trust the technology companies. Digital ethics are being discussed more and more. What do we need to do to allow the public to have trust in our industry?
  • Right environment. The legislative and regulatory systems need to be right to allow innovative AI solutions to be used to solve industry problems. We also need to ensure the free flow of data so that we have the information necessary to solve these problems. This is a concern for TechUK in the current negotiations on leaving the EU.

We do have the right digital infrastructure for this: 95% of the country has access to fast broadband3, we have significant data centres, and many big companies are investing here.

Innovate UK provides funding to start-ups and small companies, and Paul Mason gave an insight into some of the ways in which they can help. Often, early-stage companies can’t get venture capital funding because they have not clarified their ideas or have not had the funds to put a prototype together. Innovate UK has funded over 400 companies throughout the UK and 47 universities, including names like Swiftkey, giving a good ROI for taxpayer funds. One of the great comments he made was about the concerns people have with AI and the confusion between informed decisions and autonomous decisions. We’ve been using data to inform our decisions for a long time; how is this any different?

Bill Gates showing the amount of information a CD-ROM could hold in 1994. Technology has moved on considerably since then…

The fourth speaker was Tim Hynes from AIB, who spoke about the rate of technology change. He made the statement that we overestimate what we can achieve in a year, but underestimate the progress that can be made in three years. While anecdotal, this feels right to me. He showed a series of images of the progress of computing and the predictions that were made. Asking if we remembered the Walkman4, he showed a picture of Bill Gates on a stack of paper equivalent to the data storage of a CD-ROM, and noted that there were only 10 years from that to the first iPhone. If we can’t predict 3 years ahead then how are we supposed to predict 5 or 10? He described AI as a way of taking the robot out of the human – hardware, software and us: “squishyware”5. AI can remove repetitive tasks and also deal with spikes in demand to smooth out costs. The future of AI will be based on enhanced compute (commercial quantum computing will be here sooner than originally predicted), enhanced communications (5G and beyond will allow fast data transmission, so not everything needs to be on device), and enhanced capabilities (bionics and robotics). It’s an exciting time.

There was a very short segment from Sudhir Jha from sponsor Infosys who noted that enterprise AI required automation, explainability, and the ability to deal with the sparse data that occurs in real world problems.

Closing the “AI means business” session was Birgitte Andersen of the Big Innovation Centre. Her key point was that we need to think differently about data: if it is shared rather than siloed, more problems can be solved than if companies and individuals hide it away. The challenge is managing this in an age of data privacy.

I had to take a break at this point – a criticism of the conference would be that there were no natural breaks between the session groups to change tracks, grab a coffee or network. As such, there was constant movement and background noise, which some could find very distracting. One of the exhibitors (shout out to JamieAI) had brought in a barista for real coffee in exchange for your business card (further shout out to the Mobile Coffee Bean for some amazing lattes), so with coffee in hand I stretched my legs and caught up with some of the speakers from the first session.

I rejoined for Olof Hernell’s talk on how AI is transforming the investment landscape. It was interesting, although slightly meta, to see an investment company that had developed its own AI to determine whether AI companies were a good investment. It also reinforced the morning’s business discussions: their statistics showed that companies who were digital leaders outperformed other companies on every metric.

The investment panel followed, moderated by David Kelnar, Partner and Head of Research at MMC Ventures, whom I know well. I’ve been a panellist for him myself and he’s also interviewed me for a podcast6. The investors were diverse both in the stage at which they invested and the types of companies in which they were interested. My favourite soundbite of the day was from Samantha Jerusalmy of Elaia Partners, who said that if you want investment, “Don’t bullshit investors”. Over 80% of “AI” companies are not actually doing AI. Other great advice included: understand what sort of tech company you are, what the essence of the problem you are solving is and its benefit, and why no one else can do what you have done. One of the worst things you can do is focus on incremental changes – you’ll get swamped. Identify the problem and show fast progress to get buy-in. David ended the panel (as he did with me) with a quick-fire round. It’s so difficult to answer some of these questions in one or two words. Should we be worried about weaponised AI? Will AI create more jobs than it destroys? Which industry will it impact most?

I had to jump out again for a work call (day jobs don’t go on hold for a conference!) and returned for what was advertised as tech talks, but was instead the exhibitors promoting their products. Even though these were only 10-minute segments, I had hoped they’d be something more technical. Ian Firth from Speechmatics did attempt to outline some of the problems in speech recognition with phoneme variation, but there wasn’t enough time (or the right audience) to go into depth. Brainpool and Deepomatic outlined their products, both of which I’d heard before. What intrigued me was Xephor Solutions, who were claiming that they’d taken a different approach to AI with deductive rather than inductive reasoning, following Karl Popper’s open society philosophy. Fedor Sapronov claimed that they weren’t seeking investors or customers at this time, which made me wonder why they were presenting at all; with no substance on how they were doing anything differently, the cynic in me wonders how much is actually implemented.

The AI Tech session kicked off with Dr Yasmeen Ahmad of Think Big Analytics asking “How do we move beyond the hype?” What was interesting was the contrast with traditional companies: the likes of HSBC have taken a century to reach the same customer level that internet-based companies have achieved in ten to twenty years. She gave a fantastic example from a banking fraud problem, showing how a transition from rules-based algorithms to machine learning to deep learning has improved fraud detection and lowered false positives for a better overall customer experience. It was almost possible to miss that all of these technologies run in parallel as a more accurate ensemble approach. The accuracy improvement is undeniable.
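The parallel-ensemble idea from the talk can be illustrated with a minimal sketch. The rule, scores and weights below are entirely invented for illustration – the real system described was far more sophisticated – but it shows how a hand-written rule and two model scores can be blended rather than replaced:

```python
# Sketch of rules, ML and DL running in parallel for fraud detection.
# All numbers, field names and weights here are hypothetical.

def rule_flag(txn):
    """Hand-written rule: flag large transactions from a new account."""
    return txn["amount"] > 5000 and txn["account_age_days"] < 30

def ensemble_fraud_score(txn, ml_score, dl_score, weights=(0.3, 0.3, 0.4)):
    """Blend the rule (as 0/1) with two model probabilities.
    Running all three in parallel lets each catch cases the others miss."""
    w_rule, w_ml, w_dl = weights
    return w_rule * float(rule_flag(txn)) + w_ml * ml_score + w_dl * dl_score

def is_fraud(txn, ml_score, dl_score, threshold=0.5):
    """Final decision: blended score against a single threshold."""
    return ensemble_fraud_score(txn, ml_score, dl_score) >= threshold
```

Tuning the weights and threshold is where the false-positive reduction she described would come from.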

Lots of people from the other tracks rejoined for the talk by Hamed Valizadegan of NASA. “At NASA we have a lot of data” is a great start to a talk! Hamed had worked on projects for the Hubble and Kepler telescopes to predict maintenance and failure. For Hubble, the fine guidance sensors keep the telescope pointed to within a millionth of a degree for long-distance observations. There are three on the telescope: one has been replaced twice, one once, and the third is still original. There was no mass production of these items – they were custom-built for Hubble – so there were six variable-length time series of diagnostics from the components. They used a semi-supervised approach and showed that there was an increase in error values with degradation, which could be used to predict failure. He made it clear that domain experts are essential where data is limited.
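The semi-supervised idea – learn what “healthy” looks like, then watch the error grow – can be sketched very simply. The baseline model and thresholds below are my own toy assumptions, nothing like NASA’s actual diagnostics pipeline:

```python
import statistics

def degradation_scores(series, healthy_window=50):
    """Semi-supervised sketch: model 'healthy' behaviour from the start of
    a diagnostic time series, then score later readings by how far they
    drift from that baseline (a simple z-score)."""
    healthy = series[:healthy_window]
    mu = statistics.fmean(healthy)
    sigma = statistics.stdev(healthy) or 1e-9  # guard against zero spread
    return [abs(x - mu) / sigma for x in series[healthy_window:]]

def flag_failure(scores, threshold=3.0, run_length=5):
    """Flag the first index where the score stays above the threshold for
    `run_length` consecutive readings (a crude persistence check to avoid
    flagging one-off spikes). Returns None if nothing is flagged."""
    run = 0
    for i, score in enumerate(scores):
        run = run + 1 if score > threshold else 0
        if run >= run_length:
            return i - run_length + 1
    return None
```

A persistent rise in the score is the cue for predictive maintenance; with only three sensors’ worth of history, this is exactly where the domain experts he mentioned earn their keep.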

I took a quick break after this and headed over to track B. I’d heard about AI at Spotify previously, so didn’t feel bad about missing Marc Romeyn’s talk, and had to sit on the floor to listen to Dejan Petelin’s discussion of how Gousto use AI to streamline their business and minimise waste, with the key takeaway being continual monitoring and reaction. His slides are also available here. I felt like I’d been in the wrong track – was this where the technical talks had been?

Akis Tsiotsios from Transport for London outlined some interesting projects under trial on a couple of the tube lines. The trains have a lot of sensors, and there is a lot of event data leading up to failures that can be used for prediction. If maintenance could be predictive rather than reactive then not only could there be large savings but also better customer service. On the Central line there is a pilot to predict engine failures, and on the Victoria line they are predicting door failures. If these are validated then they predict over £3 million in savings. Another use of AI at TfL is NLP to check that work orders match the categories assigned: if the labelled department does not match the text of the work then it is flagged for review by a human expert for correct allocation. Another example of AI augmenting human capability.
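The flag-for-review pattern is worth sketching, because the AI never overrules the label – it only routes disagreements to a human. The departments and keywords below are hypothetical (TfL’s actual model and taxonomy weren’t shared); a real system would use a trained text classifier rather than keyword matching:

```python
# Hypothetical departments and keywords, purely to illustrate the pattern.
KEYWORDS = {
    "doors": {"door", "interlock", "slide"},
    "traction": {"motor", "engine", "traction", "inverter"},
    "signalling": {"signal", "points", "interlocking"},
}

def predict_department(work_order_text):
    """Score each department by keyword hits in the free text; return the
    best match, or None if nothing in the text matches at all."""
    words = set(work_order_text.lower().split())
    scores = {dept: len(words & kws) for dept, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def flag_for_review(work_order_text, labelled_department):
    """Flag the order for a human expert only when the text clearly
    disagrees with the assigned department."""
    predicted = predict_department(work_order_text)
    return predicted is not None and predicted != labelled_department
```

The key design choice is the `None` case: when the model has no opinion, nothing is flagged, so the human workload only grows where there is a genuine disagreement.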

One of the issues close to my heart with AI is standards and ethics, and there were several talks on this topic across the two days. Ray Walsche from Dublin City University gave a great overview of the ethical issues, a debate that has been going on for years. “AI has the potential to impact basic rights and liberties in profound ways”. We can’t debate whether AIs will become legal entities without some sort of standardisation of language and adoption of those standards by the community. Will these voluntary standards ever become legislation and thus be enforced? Lots to think about.

There was much disappointment in the cancellation of the AI and blockchain talk back in the main track, but it did give me the opportunity for another coffee and a bite to eat.

Dr Richard Freeman of JustGiving gave a very interesting talk on how they look at the data for the charity donation sector: getting true labels from the free text of the charities and the fundraisers, finding common ground, and being able to recommend reliable, interesting results. He outlined how they’d used Neo4j, Redshift and Apache Spark for different graphs to find the connections and correlations. It was really nice to see a demonstration of some of the tools available other than the ‘throw a CNN at the problem’ approaches that are normally discussed.

There were a couple of other sessions to finish off the day, followed by an “after dark” party with a DJ, but by this point I was ready to head home7. I’m hoping that the slides for some of these presentations will be made available – I’m particularly interested in Wing Commander Keith Dear’s talk on AI security.

My overall feeling at the end of Day 1 was that AI Congress is aimed at the senior level: heads of department and CXOs. While I certainly got a lot out of it, if you were looking for more technical discussions then this wasn’t the right conference. There was plenty of advice and useful information for more senior individuals, and I headed home looking forward to day 2, which is always a good sign!

Notes from Day 2 are here.

  1. An exaggeration I know but it really does feel like this!
  2. But there’s only one of me… bring on the sentient clones…
  3. Even though it might not feel like it for you personally 🙂
  4. I do, I had one, I feel very old.
  5. I may start using this and others will as well I think! Thanks Tim!
  6. Which I’ll announce on twitter when it goes live.
  7. It’s unlikely that they would have played music I enjoy listening to and I was personally all talked out and was looking forward to some quiet time. Not to mention an early night given the need for another 5.30 am alarm…

Published by

janet

Dr Janet is a Molecular Biochemistry graduate from Oxford University with a doctorate in Computational Neuroscience from Sussex. I’m currently studying for a third degree in Mathematics with Open University. During the day, and sometimes out of hours, I work as a Chief Science Officer. You can read all about that on my LinkedIn page.
