At the ReWork Retail and AI Assistants summit in London I was lucky enough to interview Kriti Sharma, VP of AI and Robotics at Sage, in a fireside chat on AI for Good. Kriti spoke a lot about her experiences and projects not only in getting more diverse voices heard within AI but also in using the power of AI as a force for good.
We discussed the current state of AI and whether we needed legislation. It is clear that legislation will come if we do not self-police how we use these new tools. In the wake of the Cambridge Analytica story breaking, I expect the focus on data privacy laws to accelerate, and this may bleed into artificial intelligence applications that use such data. Continue reading Democratising AI: Who defines AI for good?
In September 2016, the ReWork team organised another deep learning conference in London. This is the third of their conferences I have attended and each time they continue to be a fantastic cross section of academia, enterprise research and start-ups. As usual, I took copious notes on both days and I’ll be putting these up as separate posts; this one covers the morning of day 1. For reference, the notes from previous events can be found here: Boston 2015, Boston 2016.
Day one began with a brief introduction from Neil Lawrence, who has just moved from the University of Sheffield to Amazon research in Cambridge. Rather strangely, his introduction finished with him introducing himself, which we all found funny. His talk was titled the “Data Delusion” and started with a brief history of how digital data has exploded. By 2002, SVM papers dominated NIPS, but there wasn’t the level of data to make these systems work. There was a great analogy with the steam engine, originally invented by Thomas Newcomen in 1712 for pumping water out of tin mines, but hugely inefficient due to the amount of coal required. James Watt took the design and improved on it by adding the condenser, which (in combination with efficient coal distribution) led to the industrial revolution. Machine learning now needs its “condenser” moment.
The day started with a great intro from Jana Eggers with a positive message about nurturing this AI baby that is being created rather than the doomsday scenario that is regularly spouted. We are a collaborative discipline of academia and industry and we can focus on how we use this for good. Continue reading ReWork DL Boston 2016 – Day 1
At the ReworkDL conference in Boston last month I listened to a fantastic presentation by Ryan Adams of Whetlab on how they’d created a business to add some science to the art of tuning deep learning engines. I signed up to participate in their closed beta and came back to the UK very excited to use their system once I’d got my architecture in place. Yesterday they announced that they had signed a deal with Twitter and the beta would be closed. I was delighted for the team – the business side of me is always happy when a start-up is successful enough to get the attention of a big corporate – although I was personally gutted, as it means I won’t be able to make use of their software to improve my own project.
The first session kicked off with Kevin O’Brian from GreatHorn. There are 3 major problems facing the infosec community at the moment:
Modern infrastructure is far more complex than it used to be – we are using AWS, Azure as extensions of our physical networks and spaces such as GitHub as code repositories and Docker for automation. It is very difficult for any IT professional to keep up with all of the potential vulnerabilities and ensure that everything is secure.
(Security) Technical debt – there is too much to monitor and fix, even if the business freed up the time and funds to address it.
Shortfall in skilled people – there is a shortage of 1.5 million infosec professionals, and this isn’t going to be resolved quickly.
So, day one of the ReWork Deep Learning Summit Boston 2015 is over. A lot of interesting talks and demonstrations all round. All talks were recorded so I will update this post as they become available with the links to wherever the recordings are posted – I know I’ll be rewatching them.
Following a brief introduction, the day kicked off with a presentation from Christian Szegedy of Google looking at the deep learning they had set up to analyse YouTube videos. They’d taken the traditional networks used in Google and made them smaller, discovering that an architecture with several layers of small networks was more computationally efficient than larger ones, with a five-layer version (inception-5) being the most efficient. Several papers were referenced, which I’ll need to look up later, but the results looked interesting.
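To get an intuition for why stacks of small networks win out, here is a minimal back-of-the-envelope sketch in plain Python. It compares the parameter count of one large convolution against an Inception-style factoring that squeezes the channels through a 1×1 "bottleneck" first. The channel sizes (192, 32, 128) are illustrative assumptions of mine, not figures from the talk:

```python
def conv_params(in_ch, out_ch, k):
    """Parameter count for a k x k convolution: weights plus biases."""
    return in_ch * out_ch * k * k + out_ch

# One big 5x5 convolution mapping 192 channels to 128
large = conv_params(192, 128, 5)

# Inception-style factoring: a cheap 1x1 reduction to 32 channels,
# then the 5x5 convolution operates on far fewer inputs
small = conv_params(192, 32, 1) + conv_params(32, 128, 5)

print(large)  # 614528
print(small)  # 108704 – under a fifth of the parameters
```

The same trick reduces the multiply–accumulate cost at every spatial position, which is where the computational saving comes from.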
So, most people know by now that in a week’s time I start a new role. After 12 years of working for established businesses both small and large, I am joining a start-up in an area at the current edge of what is possible in computer science. I’m very much looking forward to having my technical and scientific abilities stretched as far as they’ll go and, unsurprisingly, to the immersion in a new venture where the focus is on the solution and not why things can’t be done (often the case in established companies).
I have a reading list as long as the references for my own thesis to get through in the next few weeks so I can become an expert in my new field: deep learning and artificial intelligence. One of the first things I’ll be doing is attending the ReWorkDL summit in Boston, MA, which has a fascinating line-up of some of the leading people in this space. All being well, I will be presenting at the 2016 summit.
I’ll be tweeting throughout the event with thoughts and comments and will do a summary post afterwards.