The problem with the Turing Test
The problem with using the Turing Test as a measure of whether we’ve achieved true “Artificial Intelligence”* is that it assumes only (neurotypical) human ways of thinking count as “intelligence.”
For example, if someone ever comes up with a user interface that enables an octopus to engage in a conversation from the other side of a computer monitor, I doubt the human running the test would be fooled into thinking they’re conversing with another human.
But let’s face it: a major reason we’re not kept in underwater terrariums by our cephalopod overlords is that octopuses die around the time their eggs hatch, and so can’t pass on their lifetime accumulation of cunning trickery.
A self-aware computer program would not have that limitation.
And, as I suggested in my introduction, there are plenty of actual, thinking, self-aware human beings who would fail a Turing Test because of autism or other neurodivergence, and who, even as I type this, are having the reality of their humanity denied. And they are suffering for it.
*Once “intelligence” becomes complex enough to be self-sustaining (that is, able to learn new things by independently seeking them out and experimenting, rather than being fed select information by a pedagogue or programmer), I don’t think it should be qualified with “artificial.”
At that point, it’s real.