AI Congress London 2018 Day 2

AI Congress (still making me think of  @jack_septic_eye – let me know if you get that…)

If you’ve not read the day 1 summary then you can find that here.

Day 2 had a new host for track A in the form of David D’Souza from CIPD. His opening remarks quoted Asimov and Crichton, encouraging us not to be magicians but to step back and think about what we should do rather than just what we could.

The keynote for the day was from Karolin Nackonz from IBM. She showed a picture of her daughter’s mobile phone and how the “call” and “messaging” app icons were not prominent in the slightest. Teenagers don’t pick up the phone[1] and this means that interaction with them requires a different mind-set. Consumer behaviour is changing and is 24×7. Support needs to be scalable and available when the customer needs it, not at prescribed opening hours. Virtual agents understand natural language but need integration with other systems to provide real value. This is all possible. Agents are more consistent than humans in their responses[2]. Not all conversations are easy, though, and virtual agents can often disappoint, forcing the user to rephrase or course correct, which can leave the user with a bad feeling. Furthermore, the agent must represent the brand in tone and language so that the experience doesn’t feel disjointed, and getting this right is critical. Users can create a bond during a conversation even if they know they’re talking to a machine. One of the biggest challenges for virtual agents is the long tail of difficult questions and complicated cases. One approach is to handle random questions gracefully (“I can’t recommend a restaurant as I don’t eat”) or to hand off to a human, although this needs to be seamless. Virtual agents don’t deal well with emotional and upset users, so this needs to be detected and passed immediately to a human. Lots of good advice there for utilising the technology that already exists.
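
To make that triage logic concrete, here’s a minimal sketch in Python – my own toy illustration, not IBM’s actual approach, with all intents, keywords and phrases invented: answer what the agent knows, fall back gracefully on long-tail questions, and hand off the moment the user sounds upset.

```python
# Toy triage sketch: answer known intents, deflect gracefully when out of
# scope, and hand off immediately on negative sentiment. All intents,
# keywords and example phrases are invented for illustration.
import re
from typing import Optional

KNOWN_INTENTS = {
    "opening_hours": {"open", "hours", "closing"},
    "order_status": {"order", "delivery", "tracking"},
}
NEGATIVE_WORDS = {"angry", "useless", "terrible", "furious", "complaint"}

def tokenise(text: str) -> set:
    return set(re.findall(r"[a-z']+", text.lower()))

def classify_intent(words: set) -> Optional[str]:
    for intent, keywords in KNOWN_INTENTS.items():
        if words & keywords:
            return intent
    return None

def route(text: str) -> str:
    words = tokenise(text)
    # Upset users should reach a human before any automated reply.
    if words & NEGATIVE_WORDS:
        return "handoff_to_human"
    intent = classify_intent(words)
    if intent is None:
        # Graceful fallback for the long tail the agent can't answer.
        return "graceful_fallback"
    return f"answer:{intent}"

print(route("What are your opening hours?"))     # answer:opening_hours
print(route("Can you recommend a restaurant?"))  # graceful_fallback
print(route("This is useless, I'm furious"))     # handoff_to_human
```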

Vishal Chatrath from Prowler.io followed. Most AI is perception/classification and, echoing some of the talks of the previous day, civilisation requires help making complex decisions. However, we have evolved to simplify tasks. When driving, we use a handful of data points because we have prior probabilistic models in mind. We know it’s unlikely that something will fall from the sky or (depending on where we are) that animals will run out from the side. We understand how people and vehicles move and how their velocities can plausibly change. We have a good idea of normal behaviour – movement isn’t random, it sits within bounded rationality. This allows us to focus our attention as our situation changes[3]. In 2011 a study of Israeli judges showed that appetite affected parole decisions. Incentives for humans change over time as we have different needs, and this affects our behaviour. Something similar happens in the financial markets. DNNs are data inefficient, as we know – can they be made better with probabilistic models? This is the approach that Prowler has taken. Applying their system to an example of taxi data, Vishal showed how modelling several possible “tomorrows” could give insight into resource planning. By decentralising the planning, this becomes infinitely scalable. An interesting approach and one I’m going to look into further.
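
The “several possible tomorrows” idea is easy to illustrate. The sketch below is my own toy example, not Prowler’s system – the Poisson rates and zone names are invented. It samples many plausible demand trajectories and plans fleet capacity against a high quantile rather than a single point forecast:

```python
# Sample many plausible "tomorrows" of taxi demand from a simple
# probabilistic model, then plan capacity against a high quantile.
# Rates and zones are invented stand-ins for a learned model.
import numpy as np

rng = np.random.default_rng(42)

# Assumed mean pickups per hour for three city zones.
hourly_rates = {"centre": 120, "airport": 45, "suburbs": 30}

def sample_tomorrow():
    """Draw one plausible 24-hour demand trajectory per zone."""
    return {zone: rng.poisson(rate, size=24) for zone, rate in hourly_rates.items()}

scenarios = [sample_tomorrow() for _ in range(1000)]

for zone in hourly_rates:
    # Peak hourly demand in each sampled tomorrow.
    peaks = [s[zone].max() for s in scenarios]
    # Plan for the 95th percentile peak instead of the average.
    print(f"{zone}: plan for ~{int(np.percentile(peaks, 95))} pickups/hour")
```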

Just as in day 1, I had to miss a couple of talks for a call, one of these was cancelled anyway so I re-joined towards the end of what appeared to be a fantastic talk about the sins of companies selling AI by Rurik Bradbury of LivePerson.  He had some great “gotchas” to avoid in selling (or mis-selling) AI technology.  He also reinforced the message of understanding the intent and sentiment of the end user to route them to the correct place.  I look forward to seeing his slides when they’re published.

Peter Appleby of AutoTrader gave a very interesting talk about combining machine learning with an agile organisation and some of the difficulties when bigger organisations with multiple reporting lines have to work together.  What started as a series of inefficient components from different teams with no overall owner was converted into a much leaner implementation with a better team structure.  This allows AutoTrader to validate specifications against pricing to make “best value” suggestions even as prices are updated across their system.  For larger organisations with established engineering teams there are definitely learnings there.

“Lots of things considered to be AI historically no longer are” – Dr Mai Le, BT Openreach

The first of two speakers from the BT family, Dr Mai Le from BT Openreach gave some interesting background on the evolution of AI technology and also the internal perception of AI within BT.  Once something is no longer new, no longer sexy, the IT industry does have a habit of discarding it.  She showed how BT were using churn prediction and process prediction to improve customer service.

An important step before doing anything with data…

Ryan den Rooijen from Dyson talked about the relatively new initiative at Dyson to look at their customer data[4]. Citing Bloomberg’s study[5], he noted that we’ve been going on about the power and insight that big data can provide, but it doesn’t seem to have made any difference to any corporate metrics.  He showed one of my favourite slides with the very simple call to action to “ask the right question” before diving into the data.

The second speaker from BT, Dr Simon Thompson of BT Research, started with a passionate rant about AI.  His definition was that “Artificial Intelligence” is the academic field studying the simulation of natural intelligence, while “AI” is a subset of computer science concerned with making decisions.  Since AI stands for artificial intelligence I think the horse may have already bolted on that one, and we might be better educating on the difference between Artificial Sentience and decision making.  The scope of reasoning for AI is immense.  He cited several projects that were bigger advances than DeepMind’s AlphaGo:

  • Genesis, an AI that can summarise large quantities of free text into meaningful sentences[6]
  • Poker-playing AI – poker is a game of hidden information and requires skills very different from perfect-information games such as Chess and Go.  (I’ve spoken about this before)
  • Soar cognitive robotics, which requires greater environmental understanding.

What is interesting is the perception of AI set by the big players in technology.  Customers expect everything rather than specific AI applications that serve the market.

As I’d already heard about Amazon Alexa recently, I switched to track B, which had just moved from AI in healthcare[7] to AI in Enterprise, and entered halfway through a talk by Chris Boos (CEO of Arago) on why there is no time to waste. Again, it looked like I’d missed something very interesting.  Arago taught their AI to play Civilization.  I don’t recall this making the headlines, and it reinforced Simon’s point about the big players getting all the focus.  Rather than repetitive training, they used humans to redirect the AI when it went wrong.  Chris claimed this was 95% more energy efficient than the AlphaGo solution.  If you want to know more about the HIRO AI and Civ then it’s on their website here.

Charlotte Walker-Osborn from Eversheds gave a summary of some of the legal implications of AI, touching on discussions within the UK and EU Parliaments.  With this level of discussion it’s almost certain that there will be some level of legislation in addition to what we are already seeing creep in, e.g. the use of AI in financial services and autonomous vehicles. She also touched on patent law: if an AI system invents something, how is this viewed in law?  It varies around the world.  UK copyright does have provision for no human author, but US copyright, for example, does not.  Building on the standards talk from day 1, we need a consensus on terminology in order to define rights and regulations[8].

Ali Dasdan from Tesco discussed problems with speed of response for advertising.  With 5 million requests per second, each one has to be returned within 20ms.  This provides some interesting challenges for AI.  Furthermore, there are many customer touch points in retail and advertising.  Traditionally the touch point immediately before purchase is given 100% of the conversion credit (last-touch attribution), but the path to that purchase can be much more complex and may only occur because of other touch points.  Tesco are using AI to help determine the best approaches with their customers.
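
To illustrate why giving the final touch point all the credit can mislead, here’s a toy comparison (touch point names invented, not from the talk) between last-touch attribution and sharing the credit linearly along the path:

```python
# Toy contrast between last-touch and linear multi-touch attribution
# for a single purchase journey. Touch point names are invented.
journey = ["display_ad", "email", "search_ad", "purchase_page"]
conversion_value = 1.0

# Last-touch: the final touch point before purchase gets all the credit.
last_touch = {tp: 0.0 for tp in journey}
last_touch[journey[-1]] = conversion_value

# Linear multi-touch: credit is shared evenly along the whole path.
linear = {tp: conversion_value / len(journey) for tp in journey}

print("last-touch:", last_touch)
print("linear:    ", linear)
```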

Anodot‘s Amir Kupervas closed off the AI Enterprise series with his talk on anomaly detection.  He gave us some visual tests showing how humans are very good at anomaly detection at small scale, but fail miserably at large scale.  We also have data blind spots, failing to pick up on things we don’t believe are important in order to simplify the problem (subconsciously or not).  Data is not all equal in how it is sampled, and this can give the perception of patterns where there is just noise.  Data periodicity can also cloud anomalies, as a data series can have multiple periodic effects from hourly through to many years.  The Anodot solution handles this by learning normal patterns and flagging anything different.  It’s a method I’ve seen applied successfully before in security AI systems (e.g. Darktrace).
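
The “learn normal, flag different” approach is straightforward to sketch. The toy example below (synthetic data and an illustrative threshold – not Anodot’s algorithm) learns a baseline per hour-of-day to handle the daily periodicity, then flags points that sit too far from their hour’s baseline:

```python
# Learn a per-hour "normal" from history, then flag large deviations.
# Synthetic data with a daily cycle, noise, and one injected anomaly.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)  # 60 days of hourly data

daily = 100 + 30 * np.sin(2 * np.pi * hours / 24)
series = daily + rng.normal(0, 5, size=hours.size)
series[1000] += 60  # the anomaly we hope to catch

# Learn the normal pattern: mean and spread per hour-of-day.
hour_of_day = hours % 24
mean = np.array([series[hour_of_day == h].mean() for h in range(24)])
std = np.array([series[hour_of_day == h].std() for h in range(24)])

# Flag anything more than 4 standard deviations from its hour's baseline.
z = np.abs(series - mean[hour_of_day]) / std[hour_of_day]
print("anomalies at indices:", np.where(z > 4)[0])  # should include 1000
```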

I headed back to the main hall at that point to try to catch some of the speakers, grab some food and listen to the tech talks.  As on day 1, the talks were less tech and more pitching, so I didn’t take many notes.  Botfuel, ASI Sherlock ML, and Ammi Systems were all represented.

Straight after the tech talks was an unscheduled session that had been postponed from the previous day: Reema Patel from the RSA on AI ethics, so I was pleased to catch that.  She outlined the key areas that the RSA are discussing to advise on human and AI interaction going forward, from the trolley problem and risk through to social legitimacy and the balance of power.  Each of the points she mentioned is worthy of its own well-researched essay.  They are in the process of gathering research on viewpoints on AI ethics, so please do get in touch and be involved in that.

Even though there were still other talks scheduled, I headed up to the VIP area for the Women in Tech networking event.  While grabbing a cup of water ahead of what turned out to be an hour of talking, the first lady I spoke to had been at my “Personal Brand” talk back in August last year and gave me some great feedback.  Always nice to hear how people have acted on some of the takeaways from talks.  I encouraged her to speak at one of the future events.  I also spoke to an entrepreneur who was literally days into setting up her business venture.  Such an amazing story and passion and I’ve no doubt she’ll do well. It was lovely to see both familiar faces and make some new connections.  I headed back down to the main hall with some new LinkedIn acceptances and a handful of business cards safely in my pocket.  I have a lot of emails to write.  The outcome of this is that even though women are underrepresented in tech, there are a lot of us out there and there’s really no excuse for a lack of diversity in any event.  Please contact me if you’re short of a speaker or panellist and I can put you in touch with a very large network of passionate technical women with amazing stories to tell.

I dashed back to the B track for what turned out to be the last two talks for me.  Dr Daniele Magazzeni from King’s College London spoke about explainability in AI.  Citing GDPR Articles 12, 13 and 22, there is an increasing need to ensure that AI decisions are explainable.  But explainable data-driven AI is hard.  One option is to show alternative decisions and their impact, similar to the different routes that mapping algorithms give.  This could inspire confidence that the decision presented was the best one[9].  There needs to be some sort of situational awareness in parallel to the explainability so that any human override is done with full knowledge of the decision and its consequences.  Daniele mentioned the tragic Air France Flight 447, where the hand-off from the automatic systems in an error situation did not give the pilots the information required to make the correct emergency decision.  This is something that needs thought.
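
As a rough illustration of “show the alternatives” (a hypothetical loan scenario with an invented scoring function – not anything presented in the talk), scoring every candidate decision and presenting the runners-up alongside the winner is one way to build that confidence:

```python
# Score every candidate decision and present the runners-up alongside
# the winner, like a mapping app showing alternative routes. The loan
# scenario, weights and options are all invented for this example.
applicant = {"income": 42_000, "debt": 9_000, "years_employed": 4}

def score(decision: str, a: dict) -> float:
    affordability = (a["income"] - a["debt"]) / 1_000 + 2 * a["years_employed"]
    # Larger loans need more headroom, so they score lower here.
    cost = {"approve_10k": 30, "approve_5k": 18, "decline": 38}[decision]
    return affordability - cost

options = ["approve_10k", "approve_5k", "decline"]
ranked = sorted(options, key=lambda d: score(d, applicant), reverse=True)

for d in ranked:
    print(f"{d}: score {score(d, applicant):.1f}")
# Showing all three scores explains *why* the winner beat the
# alternatives, rather than returning a single opaque decision.
```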

David Ferguson from EDF spoke about his experiences implementing AI within a legacy company.  He had two great quotes.  Referencing Mark Zuckerberg’s famous line “if you’re not breaking things you’re not moving fast enough”, he showed us one of the robots used to handle nuclear fuel rods and why it wouldn’t be wise to apply that philosophy there.  There was also the challenge that, in a legacy company, something that might take a couple of months to prototype could take the rest of the year to deploy!  He had a checklist of steps for starting to use AI:

  • Buy, adapt and build.  If the problem is generic, buy the solution – don’t reinvent the wheel unless there’s a very good reason.  If something is close to what you need, adapt it.  Only build from scratch the very industry-specific aspects.
  • Create a talent pipeline.  Good people are very difficult to find, so start with a small internal team, allow them the space to work innovatively and make connections with universities.
  • Think about ethics early.  Consider what has been automated, and handle data privacy, bias and the impact on jobs in the rest of the company.
  • Start with something easy.  If you can get results then you can get further buy-in from the company.

One of the other points of interest was that a lot of their systems were not well connected, with some data being obtained by a man with a clipboard!  The best claimed statistic I heard was that the component recogniser they built was 105% accurate – it found valid components they didn’t know existed.  I’ve immediately set this as a new benchmark for my own team 🙂

Sadly I missed the last few talks of the day again, and I’m hoping to get the slides for these as soon as they are available.  Overall it was a thought-provoking conference, and I’ll stick to my assertion that it wasn’t for AI techies but for senior leaders interested in the adoption of AI.

I still don’t know why these guys were there though… 😀

  1. I can really relate to this 😉
  2. This is actually one of the points made by people arguing for AI in weapons – under pressure, machines don’t make mistakes, so provided they were built correctly there wouldn’t be “trigger finger” incidents. This is a whole debate worthy of far more than a footnote.
  3. I remember this switch when I was learning to drive myself. When I started there was too much information overload, but once I’d learned “road sense” I got a feel for the movement of traffic as easily as I walk through a crowd. I guess I’d finished training my internal probabilistic model at this point 🙂
  4. As opposed to the extensive product R&D e.g. for the 360 Eye
  5. I think this is the one, I didn’t see the full reference
  6. I can’t find a reference for this sadly, as soon as I do I’ll link it.
  7. Which would have been fascinating, but I had to focus on the sessions that were relevant rather than just piquing my curiosity!
  8. Don’t get me started on the whole Sophia debacle….
  9. From the available information anyway.
