I don't like the way the AI cult appears to be smoothly progressing to the group-suicide phase of the movement.
I am literally begging people to show me some counterarguments here. I will take anything.
Imagine someone had already created eight billion (!) human-level intelligences (!!) armed with nuclear weapons (!!!).
Okay, let me rephrase: I would like to reduce (i.e., be convinced to reduce) the number of world-ending threats I'm developing an anxiety disorder over, starting with AI, since it's overly salient and its predictions seem more apocalyptic than the rest.
Serious questions:
- When was the first time you heard about AI risk?
- Who did you hear it from?
- Who did they hear it from?
- How long ago was that?
- How much has AI realistically moved toward those apocalyptic predictions since that time?
The thing is, people have been alarmist about AI for a few decades now, and every negative consequence of AI we've actually seen has been entirely the result of human action, and one that could have been stopped by human intervention at any step in the process. Malicious use of AI seems like a much bigger threat at the moment than anything else in the space.
But also remember: the fact that a group of people is more scared of one thing than another doesn't mean that the outcome they fear is more likely than other outcomes. The fact that people who discuss AI risk are so apocalyptic about it should be a cause for skepticism, not credulity. If they're that scared, are they really thinking rationally? Do they have any measurable proof that the things they fear are possible with current technology, or with currently predicted technologies? What (real-world!) steps are they taking to mitigate these things?
I think the comparison to climate change is apt because you can see how messaging around the climate has changed over the last 30 years (which is about how long the AI-as-existential-risk movement has been around). But the climate movement can show clear, replicable data about extreme weather, droughts, and polar ice. And when they talk about how to mitigate risk, they have concrete proposals, with models of how each proposal might change the possible impacts of climate change; their messaging strategy has evolved, and their sense of urgency is paired to real-world findings.
"I'm so much more worried about something that might happen instead of something that's already taking place that I've stopped caring about the one that's already present" seems like such a strange stance to take, vis a vis AI and climate change, and yet.