If you’ve not read the day 1 summary, you can find it here.
Day 2 had a new host for track A in the form of David D’Souza from CIPD. His opening remarks quoted Asimov and Crichton, encouraging us not to be magicians but to step back and think about what we should do rather than just what we could. Continue reading AI Congress London 2018 Day 2
London is a hive of AI activity. The UK is positioning itself as a leader in AI technology and you can barely walk around London without passing an AI company, meetup or training course1. If I didn’t actually have a day job, I could fill my time with AI conferences without doing much more than my daily commute. That said, I am quite picky about the ones I go to. I’d never been to the AI Congress before and liked the diverse set of speakers and topics. I was lucky that the team at Logikk had invited me as their guest for the two days. So how did it stack up? Well, day 1 was pitched at a much higher level than some of the other conferences I’ve been to, with a lot of implementation and enterprise discussions and far fewer talks on technical detail. If you’re senior then these conferences are for you. If you want someone to talk about their latest paper on arXiv then there are far more technical events that will suit you better.
One of the biggest problems I had was that there were three separate tracks and only one of me, so if I didn’t make notes on a particular talk then hopefully the slides will be available after the event at some point. I deliberately missed some of the high-profile talks in favour of other speakers, as I’d already heard them at other events. Continue reading AI Congress London 2018 Day 1
If you’ve been following this blog you’ll know that there have been great advances in the past few years with artificial image generation, to the stage where having a picture of something does not necessarily mean that it is real. Image advances are easy to talk about, as there’s something tangible to show, but there have been similarly large leaps forward in other areas, particularly voice synthesis and handwriting.
Earlier this month, Dove launched their new baby range with another of their fantastic adverts challenging stereotypes and questioning whether there is a “perfect mum”. As a mum myself I can relate to the many hilarious bloggers1 who are refreshingly honest about the unbrushed hair, lack of make-up, generally being covered in whatever substances your new tiny human decides to produce, and all the other parenting frustrations. I’m really pleased that there are lots of women2 out there challenging the myths presented in the media – we don’t all have a team to make us beautiful, nor someone photoshopping the results to perfection, and the pressure can be immense. This is where Dove’s campaign is fantastic. Rather than just creating a photoshoot with a model and doctoring the results, the image is completely artificial, having been generated by AI. Continue reading Artificial image creation takes another step forward in advertising
I love the fact that here in the UK everyone can be involved in shaping the future of our country, even if a large number of individuals choose not to; in my eyes, if you don’t get involved then you don’t have the right to complain. While this is most generally applied to the election of our representatives, from local parish councils to our regional MPs (or actually standing yourself)1, there are also a lot of other ways to be involved. In addition to raising issues with your local representative, parliament has cross-bench committees that seek input from the public to help create policy or consider draft legislation.
Our elected parliament is not made up of individuals who are experts in all fields. Even government departments are not necessarily headed by individuals with large amounts of relevant experience. It is critical that these individuals are informed by those with the experience and expertise in the issues that are being considered. Without this critical input, our democracy is weakened. Continue reading Submitting evidence to parliament committees
If you’re starting out in deep learning and would prefer a laptop over a desktop, a little research will lead you to a whole host of blogs, Q&A sites and opinions that essentially amount to “don’t do it” and to get a desktop or remote into a server instead. However, if you want a laptop, whether for college, conferences or even because you have a job where you can work from anywhere, then there are plenty of options available to you. Here I’ll lay out what I chose and why, along with how it’s performing. Continue reading Choosing a Laptop for Deep Learning Development
The Science and Technology Select Committee here in the UK have launched an inquiry into the use of algorithms in public and business decision making and are asking for written evidence on a number of topics. One of these topics is best practice in algorithmic decision making, and one of the specific points they highlight is whether this can be done in a ‘transparent’ or ‘accountable’ way1. If there were such transparency then the decisions made could be understood and challenged.
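To make concrete what “transparent” could mean in practice, here is a minimal, hand-rolled sketch of an explainable decision rule. The scenario, feature names and thresholds are entirely invented for illustration; the point is simply that every factor behind the outcome is recorded, so the decision could be understood and challenged:

```python
# Illustrative only: a toy "loan" decision where the rules and thresholds
# are invented for this example, not drawn from any real system.

def loan_decision(income, existing_debt):
    """Return (decision, reasons) so the outcome is fully explainable."""
    reasons = []
    if income < 20_000:
        reasons.append("income below 20,000 threshold")
    if existing_debt > income * 0.5:
        reasons.append("debt exceeds 50% of income")
    decision = "declined" if reasons else "approved"
    return decision, reasons

decision, reasons = loan_decision(18_000, 12_000)
print(decision, reasons)
```

Each declined case carries the exact rules that triggered it, which is precisely what an opaque model cannot offer and what an appeals or accountability process would need.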
In September 2016, the ReWork team organised another deep learning conference in London. This is the third of their conferences I have attended and each time they continue to be a fantastic cross section of academia, enterprise research and start-ups. As usual, I took a large amount of notes on both days and I’ll be putting these up as separate posts, this one covers the morning of day 1. For reference, the notes from previous events can be found here: Boston 2015, Boston 2016.
Day one began with a brief introduction from Neil Lawrence, who has just moved from the University of Sheffield to Amazon research in Cambridge. Rather strangely, his introduction finished with him introducing himself, which we all found funny. His talk was titled the “Data Delusion” and started with a brief history of how digital data has exploded. By 2002, SVM papers dominated NIPS, but there wasn’t the level of data to make these systems work. There was a great analogy with the steam engine, originally invented by Thomas Newcomen in 1712 for pumping water out of tin mines, but hugely inefficient due to the amount of coal it required. James Watt took the design and improved on it by adding the condenser, which (in combination with efficient coal distribution) led to the industrial revolution1. Machine learning now needs its “condenser” moment.