Thinking machines – biological and artificial

 

How can we spot a thinking machine?

If you’ve read pretty much any of my other artificial intelligence posts on here then you’ll know how annoyed I am when the slightest advance in AI spurs an onslaught of articles about “thinking machines” that can reason, opening up the question of robots taking our jobs and eventually destroying us all in the style of a certain not-to-be-mentioned1 film franchise.  Before I get on to discussing if and when we’ll get to a Detroit: Become Human scenario, I’d like to cover where we are and the biggest problem in all this.

There was a great advance in machine learning techniques a few weeks ago, where a team from the University of California came up with a new way to determine reward functions.  As someone active in the field, I found this a really cool paper.  A Rubik’s cube can be solved from any position in 20 moves or fewer, and there is a pretty cool set of move sequences that will get you to a solution, although not necessarily in the smallest number of moves2.  A lot of solvers follow these methods.  What was really cool about the paper by McAleer et al was that they did not force this solution and instead used reinforcement learning to allow the agent to figure out the best solution itself.  The key here was rewarding “correct” moves.  With the huge number of possible combinations, scoring a new configuration is difficult.  Their logical leap was to reward based on working backwards from the completed cube: configurations reachable from the solved cube in fewer moves earn a higher reward.  While this seems obvious in hindsight, it’s not how we usually think about solving problems.  Generally, we look at our current status and the consequences of particular decisions; if we’re particularly focussed we start with the outcome we want and work backwards, but it’s harder to compute backwards permutations.  Not so in a confined rule space, and that is what makes this solution particularly interesting to me.  Admittedly the paper title is “Solving the Rubik’s Cube without Human Knowledge”, but this means only that the system was not given direct instructions on the optimal solution.  This is described as a form of reasoning, which it is: working out which path will lead to the optimal solution.  But that is a far leap from “reasoning” thinking machines in the way most people understand them, and in the way some of the summary articles have portrayed it.
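
To make the backwards-from-solved idea concrete, here is a minimal Python sketch – my simplification for illustration, not the authors’ actual training procedure (the paper’s method is more sophisticated, bootstrapping its own value estimates rather than using a fixed reward schedule).  The SOLVED state, MOVES list and apply_move function are hypothetical placeholders for a real cube environment.

```python
import random

# Hypothetical stand-ins for a real cube environment
# (e.g. a sticker-based state representation and legal face turns).
SOLVED = "solved"
MOVES = ["U", "U'", "D", "D'", "L", "L'", "R", "R'", "F", "F'", "B", "B'"]

def apply_move(state, move):
    """Apply a single face turn and return the new state (placeholder)."""
    return state + "|" + move

def generate_training_states(max_depth=20, samples_per_depth=10):
    """
    Work backwards from the solved cube: scramble it k moves deep and label
    each resulting state with a reward that is higher the closer the state
    is to the solution (fewer scramble moves = higher reward).
    """
    dataset = []
    for depth in range(1, max_depth + 1):
        for _ in range(samples_per_depth):
            state = SOLVED
            for _ in range(depth):
                state = apply_move(state, random.choice(MOVES))
            reward = 1.0 / depth  # one simple choice of reward shaping
            dataset.append((state, reward))
    return dataset

if __name__ == "__main__":
    for state, reward in generate_training_states(max_depth=3, samples_per_depth=2):
        print(f"reward={reward:.2f}  state={state}")
```

The point of the sketch is simply that scoring states becomes tractable once you generate them by scrambling outwards from the goal, rather than trying to judge an arbitrary configuration on its own.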

Similarly, I’ve seen a worrying trend of describing deep learning as “self-aware” in the context of better intuition of surroundings.  Earlier this year a paper from Stanford described a system that had knowledge of its location and direction within a space, using the term “self-aware” in its broadest sense.  The Medium article on this went for the clickbait title of “Ego-motion in Self-Aware Deep learning” and even derided DeepMind in the article for using “unimpressive titles”3.  While I agree that a sense of environment is necessary for conscious decision making, I am wary of how easily these terms are thrown around, as it gives the general public a sense of innovation in the technology that has not yet been achieved.
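
For anyone unfamiliar with the term, “ego-motion” just means estimating your own movement through a space from your own sensor readings.  A toy dead-reckoning sketch (mine, not the Stanford system’s method) shows how integrating per-step distance and turn estimates gives an agent a running sense of its location and direction:

```python
import math

def integrate_ego_motion(steps, x=0.0, y=0.0, heading=0.0):
    """
    Dead-reckon a pose from a sequence of (distance, turn) estimates.
    Each step: turn by `turn` radians, then move `distance` forward.
    Returns the final (x, y, heading).
    """
    for distance, turn in steps:
        heading += turn
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    return x, y, heading

# Walk a square: four sides of length 1, turning 90 degrees before each
# of the last three sides. The agent ends up back near where it started.
square = [(1.0, 0.0), (1.0, math.pi / 2), (1.0, math.pi / 2), (1.0, math.pi / 2)]
print(integrate_ego_motion(square))
```

Tracking where you are and which way you are facing is genuinely useful for navigation, but it is bookkeeping about the world, which is a long way from the everyday meaning of “self-aware”.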

Similarly, IBM is pushing the bounds of language understanding with its debating AI.  In the write-up for the Guardian it was noted:

In both debates the audience voted Project Debater to be worse at delivery but better in terms of the amount of information it conveyed. And despite several robotic slip-ups, the audience voted the AI to be more persuasive (in terms of changing the audience’s position) than its human opponent, Zafrir, in the second debate.

As you would expect, an AI with access to facts can use them, and unsurprisingly the audience (many of whom were noted to be IBM staff) were swayed by facts.  This is an amazing step in conversational AI, although like all AI systems it is only as good as the information it is fed.  I’d love to see a similar experiment with some false facts – would the audience be persuaded because of their inherent trust that a machine is factual and not a liar?

I’ve given several talks this year on abstract intelligence.  I firmly believe that we are just biological thinking machines, and therefore at some point we will have non-biological thinking machines.  The problem is that we will not recognise them when we see them, because we have no good definition of intelligence or consciousness.  We need a testable definition that we can apply to all life as we know it, and also to all generated constructs, whether they are robots or software.  We don’t have this right now.  The Turing test revolves around communication and is not useful here.  We know that there are many animals that are self-aware4, and we generally agree that while plants are alive they are not conscious or intelligent in any way.  We have a very abstract definition of life and an even more abstract set of definitions of intelligence, which started with “only humans” and then, as a species, we have grudgingly accepted that other animals display intelligence.  We have a wide range of neural structures and neuron counts – we don’t know if there is a threshold before these tangled chemical sacs start having a sense of self.  We understand the biochemistry of neurons, just not how it leads to the emergent property of self-awareness.

While self-awareness in this broad sense is a fair starting point for a definition – the agent created by Stanford above would fall into it – how do we make it testable?  What are the implications for humans who fail such a test?  There is a logical point here at which we must accord commonality of rights to all entities that pass such a test, including non-human biological machines, and including those that aren’t immediately obvious, such as birds.  I fear that because this is difficult, both in defining the test and in the philosophical consequences it presents, we will have decisional paralysis as a community.

My current hypothesis is that somewhere in the chemistry is the key to solving consciousness and self-awareness, and one day we will have a definition that works not just for our biology but also for any system that matches it, whatever that system is made of.  Different does not mean “lesser”.

It’s a common thread through human history to maintain our superiority, to treat things that are not human differently.  I think nothing of giving my vacuuming robot a gentle kick if it’s going the wrong way, yet I still use “please” and “thank you” with Alexa, probably because it has a human voice and I’ve been brought up to treat people with respect, while machines are things to be understood, taken apart and put back together.  However, we have seen that there are a lot of people who treat Alexa5 badly because they see it as some sort of subservient human.  How many times have you heard any of the following to justify inappropriate behaviour?  It’s okay, it’s not alive; it’s okay, it’s not conscious; it’s okay, it doesn’t feel pain; or even it’s okay, it’s not human6.

This is one of the many social commentaries in the amazing game Detroit: Become Human.  What happens when the robots we have created to serve us become self-aware and decide that the orders they are given are wrong?  What happens when they don’t want to be treated badly any more?  By playing the game as the androids and not the humans, you get an insight not only into your own character but into some of the issues we may face in the future.  If you don’t have a PS4 then I can heartily recommend JackSepticEye’s playthrough on YouTube, particularly his own emotion at trying to do the “right” thing.  What does it mean to be thinking machines, to be conscious, to be alive, to be free?

This isn’t the first time that the ethics of conscious machines has been explored and it won’t be the last7.  What is different now is that we are creating agents that can control robots, agents that can solve problems and agents that can talk like humans.  We are starting to combine these skills to solve bigger problems.  At what point will these agents cease to be complex decision trees and start to become something more?  We don’t know, because we can’t define “more” adequately enough for it to be tested.

We will have thinking machines, because that’s all we are.   Let’s be ready to recognise them when they appear.

  1. Seriously, there are other AI films 🙂 If an AI article is illustrated with that image then I have a hard time getting the motivation to read it…
  2. I remember a very long car journey with my step-son sat in the back muttering to himself as he taught himself how to do this.
  3. I’m often asked why I don’t publish my posts on Medium and its affiliated sites. This is why.
  4. I’m not excluding all animals here, just we’ve not tested all of them!
  5. And other bots
  6. I’m not going to delve further into this, but we’ve also seen many examples of humans treating other humans badly because “they’re not us”
  7. Other films that go into this are Chappie, Bicentennial Man, Her, Ex Machina, I, Robot and of course Blade Runner, to name but a few.
