Last year I wrote a post on whether machines could ever think[1]. Recently, alongside all the general chatbot competitions, a new type of test has appeared – one that probes deeper contextual understanding rather than the obvious, surface meanings of words. English[2] gives its words a rich variety of meanings, with the primary sense being the most common and the secondary and tertiary senses listed further down in the dictionary. It’s probably been a while since you last sat down and read a dictionary, or even used an online one for anything other than finding a synonym or antonym or checking your spelling[3], but as humans we rely mostly on the vocabulary and sense of context that we’ve picked up from education and experience.
Alexa, what’s the weather like?
1. Spoiler: it depends on how you define thinking 😉 ↩
2. As this is the language I know most intimately. ↩
3. I know it was 12 years for me – I was looking for some really cool spell names for a MUD that I was involved in creating, and I believe I got as far as ‘D’, with each spell bearing relevance to the meaning of the word… ↩
4. But not always, and the ability does vary from person to person. The case of Derek Bentley, who was given a posthumous pardon after being hanged, is an interesting example. When his accomplice in the burglary was asked to hand over the gun, Bentley was alleged to have said “let him have it” – did he mean “let him have the gun” or, more colloquially, “shoot him”? His friend chose the latter, and despite Derek himself having learning difficulties, both boys were convicted of murder. Assuming these words were even spoken. See the Wikipedia entry on the case. ↩
5. If you want an example of this in real time, the subtitles on live news are a good one to watch – these cannot be prepared in advance, and the speech recognition dumbly transcribes what it thinks it heard rather than what was actually said. ↩
6. Which has since been fixed and used successfully. ↩
2 thoughts on “AI for understanding ambiguity”
Nice post, I like the practical bicycle and railings example 🙂
As one of the contestants, here’s my explanation of my program’s low score (see website). Though truth be told, I only ever developed a partial solution because, as you say, there are other problems to solve first that are of greater priority. This contest required a combination of language understanding, reasoning, and common knowledge about the world, and all three of those are still unsolved problems in themselves.
The entry that used deep learning scored 58% after the organisers fixed a technicality with the input. I don’t expect this approach to be a full solution either, as deep learning goes by the statistics of word occurrences. As soon as you say something that is statistically uncommon, it will still misinterpret. However, since the majority of what we say is, by definition, common, it will probably outdo other approaches for a long while.
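The statistical idea the comment describes – and its failure mode on uncommon statements – can be sketched in toy form. The count table below and the bicycle/railings wording are invented for illustration; they are not the contest’s data or any entrant’s actual method:

```python
# Toy pronoun resolution by co-occurrence statistics: resolve an ambiguous
# pronoun by asking which candidate noun appears more often with the verb
# in a corpus. COUNTS is an invented stand-in for real corpus statistics.
COUNTS = {
    ("bicycle", "lean"): 40,
    ("railings", "lean"): 5,
    ("bicycle", "paint"): 2,
    ("railings", "paint"): 30,
}

def resolve(verb, candidates):
    # Pick the candidate with the higher count for this verb (0 if unseen).
    return max(candidates, key=lambda noun: COUNTS.get((noun, verb), 0))

# "I leaned my bicycle against the railings..." – what does "it" refer to?
print(resolve("lean", ["bicycle", "railings"]))   # → bicycle
print(resolve("paint", ["bicycle", "railings"]))  # → railings
```

The second call shows the weakness: if the speaker really had painted the bicycle – a statistically uncommon thing to say – the statistics would still pick the railings.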
Thanks for reading 🙂 Lovely to hear about your involvement – sadly I only found out about this after the event. I think you’re correct in your assessment, and I believe a better way of doing it will emerge than reusing the current techniques. Maybe we need to understand our own decision-making skills better 🙂
Comments are closed.