It’s not often that I feel the need to write a reactionary post, as the things that tend to inflame me are usually inflammatory by design. Today, however, I read something on LinkedIn that polarised debate within a group of people who should really appreciate learning from different data: Data Scientists.
What was interesting was how the responses fell neatly into one of two camps: the first praising the poster for speaking out, supported by nearly an order of magnitude more likes than the total number of comments, and the second disagreeing and pointing out that it can work. What was lost in all this is that “can” is not synonymous with “always” – making it work requires a good team and better communication than many companies manage. What irked me most about the whole thread was the accusation that people doing data science with agile obviously “didn’t understand what science was”. I hate these sweeping generalisations, and I really do expect a higher standard of debate from anyone with either “data” or “science” anywhere near their profile. Continue reading Agile Data Science: your data point is probably an outlier
This is part 3 of my summary of ReWork Deep Learning London September 2018. Part 1 can be found here, and part 2 here.
Day 2 of ReWork started with some fast start-up pitches. Due to a meeting at the office I missed all of these and only arrived at the first coffee break, so if you want to check out what 3D Industries, Selerio, DeepZen, Peculium and PipelineAI are doing, check their websites. Continue reading ReWork Deep Learning London September 2018 part 3
September is always a busy month in London for AI, but one of the events I always prioritise is ReWork – they manage to pack a lot into two days and I always come away inspired. I was live-tweeting the event, but also made quite a few notes, which I’ve made a bit more verbose below. This is part one of at least three parts and I’ll add links between the posts as I finish them. Continue reading ReWork Deep Learning London September 2018 part 1
ImageNet has been a deep learning benchmark data set since its creation. The competition built around it showed that deep learning networks could outperform non-ML techniques, and it has been used by academics as a standard for testing new image classification systems. A few days ago an exciting paper was published on arXiv describing training ImageNet in four minutes. Not weeks, days or hours, but minutes. On the surface this is a great leap forward, but it’s important to dig a little deeper. The Register’s subheadline says all you need to know:
If you’ve read pretty much any of my other artificial intelligence blog posts on here, then you’ll know how annoyed I am when the slightest advance in the achievements of AI spurs an onslaught of articles about “thinking machines” that can reason, opening up the question of robots taking our jobs and eventually destroying us all in some not-to-be-mentioned1 film franchise style. Before I get onto discussing if and when we’ll get to a Detroit: Become Human scenario, I’d like to cover where we are and the biggest problem in all this. Continue reading Thinking machines – biological and artificial
Throughout my academic career one thing that was repeatedly reinforced was that if you were claiming something to be true in a paper, you needed to show results to prove it or cite a credible source that had those results. This took a lot of effort in those pre-Google Scholar and pre-arXiv days1. Reading the journals, being aware of retractions and clarifications, and building the evidence to support your own work took time2. Writing up my thesis was painful solely because of finding the right references for things that were “known”. I had several excellent reviewers who sent me back copies of my thesis with “citation needed” wherever I’d stated things as facts without a reference. My tutor at Oxford was very clear on this: without a citation, it’s your opinion, not a fact. Continue reading Citation Needed – without it you have opinion not facts
While there may be disagreement on whether AI is something to worry about, there is general agreement that it will change the workforce. The real concern is how quickly these changes will appear. Anyone who has been watching Inside the Factory1 can see how few people are needed on production lines that are largely automated: a single person with the title “manager” whose team consists entirely of robots. It wasn’t so long ago that these factories would have been full of manual labourers.
The nature of our workforce has changed. It has been changing constantly – the AI revolution is no different in that respect. We just need to be aware of the speed and scale of the potential change and ensure that we are giving everyone the opportunity to be skilled in the roles that will form part of our future. There is an inevitability about this. Just as globalisation made it easy for companies to outsource work to cheaper locations (and micro-contract sites made it easier still), AI will make it cheaper and easier for companies to do tasks, so it will be adopted. Tasks that aren’t interesting enough, don’t have a wide enough market, or are simply too difficult to automate right now will still need human workers. Everything else will slowly be lost “to the robots”. Continue reading Is a Robot tax on companies using AI a way of protecting the workforce?
While I like to kid myself that maybe I’m only a quarter or third of the way through my life, statistics suggest that I’m now in the second half and my future holds a gradual decline to the grave. I’m not afraid of my age, it’s just a number1. I certainly don’t feel it. My father recently said that he doesn’t feel his age either and is sometimes surprised to see an old man staring back at him from the mirror.
As an atheist, death terrifies me. My own and that of those I love. I don’t have the easy comfort blanket of an afterlife and mourn the loss of everything an individual was when they cease to be. Continue reading Chatbot immortality
Artificial intelligence has progressed immensely in the past decade, helped along by the community’s fantastic open source culture. However, there are relatively few people, even in research, who understand the history of the field from both the computational and biological standpoints. Standing on the shoulders of giants is a great way to step forward, but can you truly innovate without understanding the fundamentals?
I go to a lot of conferences and I’ve noticed a subtle change in the past few years. The solutions being spoken about now don’t appear to be as far forward as some of those presented a couple of years ago. This may be subjective, but the more I speak to people about my own background in biochemically accurate computational neuron models, the more interest it sparks. Our current deep learning model neurons barely scratch the surface of what biological neurons can do. Is it any wonder that our models need such complexity yet remain limited in scope? Continue reading Biologically Inspired Artificial Intelligence