Evolving Machines

I, for one, welcome our new metal overlords ;)

Following my post on AI for understanding ambiguity, I got into a discussion with a friend about the limitations of AI if we only try to emulate ourselves.  His premise was that we know so little about how our brains actually produce our rich, independent thoughts that if we constrain AI to what we can observe – the ability to converse in our native language and to perform tasks we can do, only with higher precision – then we will severely limit its potential.  I had a similar conversation in the summer of 2015 at the start-up company I had joined[1] – we spent a whole day[2] discussing whether, in 100 years’ time, the only job left for humans would be coding the robots.  The technological revolution is changing how we live and work, and yes, it will remove some jobs and create others, just as the industrial revolution did and ongoing machine automation has been doing.  But there will always be a rich variety of new roles that require our unique skills and imagination – our ability to adapt and to look beyond what we know.

There have been a few interesting advances in the area of self-improving programs.  New machine learning techniques allow continual adaptation in response to input, and a sub-branch of the field is concerned with more “natural” learning rather than learning from simple paired data[3].  This sort of thing is fascinating but also potentially fear-inducing.  Do we really want applications that can rewrite and improve themselves?  That can go beyond their original programming?
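To make the idea of continual adaptation concrete, here is a minimal sketch (not any specific system mentioned above): a perceptron that updates its weights after every example it sees, so its behaviour keeps adapting to the input stream rather than being trained once and frozen.  The data stream and learning rate are invented for illustration.

```python
# Minimal sketch of continual (online) learning: a perceptron that
# updates its weights on every example it sees.  Illustrative only.

def predict(weights, bias, x):
    """Return +1 or -1 for feature vector x."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score >= 0 else -1

def update(weights, bias, x, label, lr=0.1):
    """Perceptron rule: only change the model when it was wrong."""
    if predict(weights, bias, x) != label:
        weights = [w + lr * label * xi for w, xi in zip(weights, x)]
        bias = bias + lr * label
    return weights, bias

# A stream of (features, label) pairs arriving one at a time:
# label is +1 when the features are "large", else -1.
stream = [([1.0, 1.0], 1), ([0.0, 0.0], -1),
          ([0.9, 1.1], 1), ([0.1, 0.2], -1)] * 5

weights, bias = [0.0, 0.0], 0.0
for x, label in stream:
    weights, bias = update(weights, bias, x, label)

print(predict(weights, bias, [1.0, 1.0]))   # → 1 (positive class)
print(predict(weights, bias, [0.0, 0.1]))   # → -1 (negative class)
```

The point of the sketch is the loop: there is no separate “training phase”, just a model that keeps changing in response to what it encounters.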

School taught us that, to be alive, an entity must have certain characteristics (such as reproduction, growth, functional activity and change), yet there are things that fit the definition that we ‘know’ are not alive.  Traditional (DNA- and RNA-based) viruses are examples of this, as are crystals, and I love edge cases – they make us challenge our definitions.  Back in 2003 there was a great post from NASA on how to define life in the things we may encounter outside of Earth.  In it, Carol Cleland argues that rather than a definition of life we should have a general theory of living systems[4] in the scientific sense, one that lets us uncouple from our limited, single-origin observations.

Computer viruses can replicate and have functional activity.  If they had learning mechanisms, so that they could adapt and grow, they would fulfil the classical definition of ‘life’ – yet you would search a long time to find anyone who considered them alive.  If, however, there were a chat bot program that improved as you spoke to it, could create other programs, and seemed to have some understanding of life and death, the decision would be harder.  The closer something artificial comes to what we consider alive, the more ethical turmoil there will be in controlling or destroying it.  Innumerable apocalyptic novels could be written that start with an uncontrollable computer program, and they only get worse if you add in any sort of AI that can evolve!

There are, however, some positive views that we should not completely discount.  Diaspora by Greg Egan is a really good non-apocalyptic sci-fi take on self-aware artificial life, with humans, robots and abstract intelligences all sharing the planet.  We’re so used to coding in high-level languages that we forget the interpreters and compilers are already changing our code, augmenting it with the memory management we no longer have to do ourselves.  Is it really much of a step to a language with a much higher level of definition, where the computer figures the rest out for itself?  From there it’s a small step to self-improvement: the ability to respond to situations the program has never encountered.  Machine learning is doing just this.  Show an object classifier something it’s never seen before and it will give its best guess[5], adversarial networks will keep learning, and super learners[6] will optimise themselves far beyond anything a human could hand-tune.  So what really is the difference?  We don’t want to give the machines power to do us harm – whether that’s not opening the pod bay doors, restricting our freedom, or making crazy logical decisions.  This can be addressed with the appropriate level of ability and trust, as set out in this talk by Igor Markov: we give individual machines or applications only the permissions required to do the job they were designed to do, and any learning is kept tightly within those parameters.
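The “best guess for something it’s never seen” behaviour can be sketched in a few lines: a softmax classifier spreads its whole probability mass across only the labels it knows, so even a completely novel input gets a confident-looking answer.  The labels and scores below are made up for illustration; no real network is involved.

```python
import math

# Sketch: a classifier can only answer with the labels it was trained
# on, so even a never-before-seen input gets a "best guess".
LABELS = ["cat", "dog", "bird"]

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def best_guess(scores):
    """Return the highest-probability label and its probability."""
    probs = softmax(scores)
    i = max(range(len(probs)), key=lambda j: probs[j])
    return LABELS[i], probs[i]

# Scores a hypothetical network might emit for a photo of... a toaster.
# There is no "toaster" label, so something gets picked regardless.
label, confidence = best_guess([0.4, 0.3, 0.1])
print(label, round(confidence, 2))   # → cat 0.38
```

Because the probabilities always sum to 1 over the known labels, nothing in the output signals “I have never seen anything like this” – which is exactly the problem the Hendricks et al. paper in footnote 5 illustrates.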

Andrew Ng is oft quoted as saying he worries about AI taking over the world about as much as he “worries about overpopulation on Mars”.  I’ve said before that I believe this is unhelpful and a remarkably naïve thing for someone in his position to say.  With the rapid growth in AI, this is not a far-future problem; it is something we need to manage now, before it becomes an issue.  Whether that’s some form of “three laws safe”[7] or more restriction over what any one machine or program can do, we will come up with something as a community to ensure that the worst fears of science fiction writers never come to pass.

Let’s let technology do what it does best: remove the mundane, make our transport safer, our education more accessible and, fundamentally, our lives richer.  And let’s let ourselves do what we do best: creativity and adaptation, the passion to discover new things about ourselves and our universe.  Would it be a bad thing if the machines gradually evolved alongside us?

We shouldn’t be scared of these changes.  We should equip ourselves with the tools to be part of the revolution and let it give us a better quality of life – but at the same time, we should keep a watchful eye on it to make sure that we do the right thing.

  1. That in itself will be another chapter in my book about being a woman in IT 🙂
  2. While working of course!
  3. For example, some of the work Facebook are doing, discussed at ReWork Boston by Adam Lehrer
  4. Sadly the link to this from the NASA blog post is no longer active
  5. For a good example of this, have a look at the paper by Hendricks et al. and skip to the end, where they show some of the problems with the network results
  6. If you’ve not heard this term, watch Dr Erin LeDell’s talk at MLConf in 2016
  7. Watch I, Robot or Bicentennial Man if you’ve never read any of the original Asimov – although I’d really recommend reading his robot series

Published by Dr Janet

Dr Janet is a Molecular Biochemistry graduate from Oxford University with a doctorate in Computational Neuroscience from Sussex, currently studying for a third degree in Mathematics with the Open University. During the day, and sometimes out of hours, she works as a Chief Science Officer. You can read all about that on her LinkedIn page.