Artificial intelligence’s development seems to be moving at breakneck speed, and the ability of AI to automate even complex tasks – and, potentially, to outwit its human creators – has been making plenty of headlines in recent months. But how far back does our fascination with, and our fear of, AI extend? Matt Elton spoke to Michael Wooldridge, professor of computer science at the University of Oxford, to find out more.
EPISODE DESCRIPTION
The Beekeepers
In this episode we explore the relationships between humans and machines, discuss some of the ethical dangers of AI, and consider how a balanced relationship with technology might present itself in a Solarpunk setting.
Links mentioned:
- One guy on Reddit - https://www.reddit.com/r/solarpunk/comments/lm4cqj/what_makes_something_solarpunk/
- "The march of the robot dogs" - https://link.springer.com/article/10.1023/A:1021386708994
- "How AI Will Rewire Us" - https://medium.com/the-atlantic/how-ai-will-rewire-us-6d7baa0fe6d4
- Open letter from several ‘Artificial Intelligence and Robotics Experts’ - http://www.robotics-openletter.eu/
- Recent studies in invertebrate neurobiology - https://www.sciencedirect.com/science/article/abs/pii/S096098220000169X
Music from:
ステム88 - Biofield - https://globalpattern.bandcamp.com/album/solarpunk-a-brighter-perspective
On Twitter, people have blamed the strange translations on ghosts and demons. Users on a subreddit called TranslateGate have speculated that some of the strange outputs might be drawn from text gathered from emails or private messages.
Andrew Rush, an assistant professor at Harvard who studies natural language processing and computer translation, said that internal quality filters would probably catch that type of manipulation, however. It’s more likely, Rush said, that the strange translations are related to a change Google Translate made several years ago, when it started using a technique known as “neural machine translation.”
In neural machine translation, the system is trained with large numbers of texts in one language and corresponding translations in another, to create a model for moving between the two. But when it’s fed nonsense inputs, Rush said, the system can “hallucinate” bizarre outputs—not unlike the way Google’s DeepDream identifies and accentuates patterns in images.
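The mechanism Rush describes can be caricatured in a few lines of code. The sketch below is not a neural system and certainly not how Google Translate works; the `ToyTranslator` class, its word-for-word lexicon, and the fallback-to-most-frequent-word rule are all invented for illustration. The point it preserves is the one from the article: the decoder side of a translation system behaves like a language model over its training translations, so when the input carries no usable signal (nonsense, repeated tokens), it still emits fluent-looking output drawn from what it saw in training.

```python
# Toy illustration of "hallucination": a translator whose output model
# falls back on training-set statistics when the input is nonsense.
# All names and the 1:1 word-alignment assumption are invented for this sketch.
from collections import Counter

class ToyTranslator:
    def __init__(self, parallel_pairs):
        self.lexicon = {}            # source word -> target word
        self.target_freq = Counter() # frequency of target words in training
        for src, tgt in parallel_pairs:
            src_words, tgt_words = src.split(), tgt.split()
            # naive 1:1 alignment, purely for demonstration
            for s, t in zip(src_words, tgt_words):
                self.lexicon[s] = t
            self.target_freq.update(tgt_words)

    def translate(self, sentence):
        out = []
        for word in sentence.split():
            if word in self.lexicon:
                out.append(self.lexicon[word])
            else:
                # No evidence from the input: emit the highest-probability
                # target word anyway -- fluent-looking but unfaithful output.
                out.append(self.target_freq.most_common(1)[0][0])
        return " ".join(out)

pairs = [
    ("der hund schläft", "the dog sleeps"),
    ("die katze schläft", "the cat sleeps"),
]
toy = ToyTranslator(pairs)
print(toy.translate("der hund schläft"))  # faithful: "the dog sleeps"
print(toy.translate("zzz zzz zzz"))       # nonsense in, hallucinated tokens out
```

A real neural model fails the same way for a deeper reason: its decoder is trained to maximize the probability of fluent target-language text, so confident-sounding output is produced whether or not the encoder extracted any meaning from the input.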