#turing test – @natalunasans on Tumblr

(((nataluna)))

@natalunasans / natalunasans.tumblr.com

[natalunasans on AO3 & insta] inactive doll tumblr @actionfiguresfanart
autistic, agnostic, ✡️,
🇮🇱☮️🇵🇸 (2-state zionist),
she/her, community college instructor, old.
reblogged

Anyone else, esp. neurodivergent people, ever talk to someone and get elated because you’re like wow :) I am doing a Conversation :) I am doing the Asking Questions and Listening Attentively and Expressing Interest :) I have shared a Joke or brief Amusing Happening and this person liked it :) this person seems to enjoy speaking to me and I enjoy speaking to them :) I’m a person.

I once was talking to someone I admired and they told a joke about our managers and I added to the joke, and they laughed so hard they couldn’t talk, and I was like thank you for laughing I have truly never experienced this level of euphoria and also I would die for you now.

Recognition Responsive Euphoria is a hell of a drug 

f1rstperson

I do this and I’m like “This is great. I’m going to get a good grade in conversation, something that is both normal to want and possible to achieve,”

Literally every time I speak.

“Excellent! I have successfully human-ed!” 

natalunasans

i have had this idea for a while, that humans are always doing turing tests on ourselves and each other. and we frequently fail other humans on our turing tests (something the original Turing would probably have appreciated, being treated as not-a-person in his time for being queer and ND)

prokopetz

When we say that Ada Lovelace was arguably the world’s first computer programmer, that “arguably” isn’t thrown in there because of questions of definitions or precedence – she definitely wrote programs for a computer, and she was definitely the first.

Rather, the reason her status as the world’s first computer programmer is arguable is because during her lifetime, computers did not exist.

Yes, really: her code was intended for Charles Babbage’s Analytical Engine, but Babbage was never able to build a working model – the materials science of their time simply wasn’t up to the challenge. Lovelace’s work was thus based on a description of how the Analytical Engine would operate.

Like, imagine being so far ahead of your time that you’re able to identify and solve fundamental problems of computer programming based on a description of the purely hypothetical device that would run the code you’re writing.

(Having no actual, physical computers on which to ply her skills, she then turned her attention to developing mathematical formulas for beating gambling establishments at their own game, which demonstrates that she anticipated not only the practice of computer science, but also the culture.)

There is a set of books called the Scientific Memoirs, a collection of foreign works translated into English in the early 1800s, intended to make more academic knowledge available to English scientists. I’ve had the PDFs of these on my computer since high school; they’re fun to read. It’s really cool to see how some principles and facts we now think of as set in stone were in fact hotly debated in the past.

Volume 3 of the Scientific Memoirs, page 666 (quite an easy page number to remember, I think), contains an account of Babbage’s engine by an Italian scientist, Menabrea, which covers the basic principles of operation. He discusses the possible uses, especially for computing tables of logarithms, trigonometric functions, and other functions defined by limits. He discusses how the machine works, how it is fed instructions and data, how computations occur, some basic instructions, and the fact that the machine is imperative: it does what you say and no more.

It strikes me as a kind of “introduction to the Engine for mathematicians.” It would have been philosophically challenging but not a hard read, and if you’re a modern programmer it will look very familiar to you, even if you have to translate some phrases a bit: the CPU becomes the Mill, memory becomes the columns of the Store, etc. This goes on for about 30 pages.
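To get a feel for that vocabulary, here is a hypothetical sketch in modern terms (the instruction format and function names are my invention, not Babbage’s actual card encoding): the Store as a set of numbered columns holding values, and a Mill loop that executes imperative three-address instructions in order, doing exactly what it’s told and no more.

```python
# Toy model of the Engine's vocabulary: a "Store" of numbered columns
# (memory cells) and a "Mill" that executes arithmetic instructions.

def run_engine(program, store):
    """Execute (op, src_a, src_b, dest) instructions in order; the machine
    does exactly what the cards say and nothing more."""
    for op, a, b, dest in program:
        if op == "add":
            store[dest] = store[a] + store[b]
        elif op == "mul":
            store[dest] = store[a] * store[b]
        else:
            raise ValueError(f"unknown operation: {op}")
    return store

# Compute (2 + 3) * 4, feeding the engine its data and its instructions.
columns = {0: 2, 1: 3, 2: 4, 3: 0}
run_engine([("add", 0, 1, 3), ("mul", 3, 2, 3)], columns)
print(columns[3])  # 20
```

The point of the sketch is just the separation Menabrea describes: the numbers live in one place, the sequence of operations in another, and the machine mechanically applies the latter to the former.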

And there are some translator notes, because the translator is Ada Lovelace.

The notes make it clear that she knows the system far more intimately than anyone else could. Menabrea, while clearly an adept student and skilled engineer, has only attended a couple of lectures on the engine, and pales in comparison to someone who has worked intimately on it. She points out that yes, in fact, the engine is capable of making decisions. She has such a deep familiarity with the instruction set that she knows there are special modes in it for controlling how addresses in memory are read. I don’t really like narratives that show her as dunking on him somehow; I think she respected what he was able to write with the information he had.

Regardless, she expresses the same kind of knowledge as some people I know who have programmed applications in assembly who know the ins and outs of x86 or 6502 assembly so intimately that they can reduce what a novice would do in a hundred instructions to a dozen. Just, knowing a billion subtleties that demonstrate the gap between knowing and internalizing.

One of her inline notes occupies half the page in the form of a footnote. By the time she gets to the end she has Translator Notes A through G, along with several minor grievances corrected via footnote. She has supplied corrections and clarifications like a dedicated StackOverflow contributor.

And then you reach the actual Translator’s notes. They take up the majority of the remaining 50 pages of the book.

I’ll be honest, I cannot understand like 30% of this. A lot of it is discussions on approximations of trigonometric functions which I remember doing when they had me take Programming 101 at university, but then she gets pretty deep about some of it. I never was much good at the infinitesimal calculus.

The notes go into incredible detail describing the possibilities of the Engine. There’s discussions of the reasoning and implications behind the memory and instructions, even digging into the fact that this is no mere calculating machine, but a general computing machine. She goes to great lengths multiple times to explain how the analytical engine is wildly different from the difference engine.

Like, okay, hang on, I have to just put a screenshot in here.

[screenshot of a passage from Lovelace’s translator notes]

LOOK AT THIS. THIS WAS WRITTEN IN THE 1800s.

She’s touching on concepts of language complexity and the limits of computation imposed by the operations and memory available to a machine. This shows some kind of primordial understanding of Turing completeness: the fact that this machine, once begun, can make decisions on its own with no human intervention, where other machines cannot.

Side note: here she is giving Black Mirror a positively sick burn from beyond the grave and centuries in the past:

[screenshot of the relevant note]

She seems to be upset, infuriated, that somehow no one else can see what this is: that it’s not just a better difference engine, not just a computing machine, but a whole new way of thinking. By separating Data from Instruction you change everything about how you solve problems. You can create loops which reuse instructions to perform an operation over and over again, or if statements to approach real analytical computation, or use the numbers to represent data other than raw algebra. It’s just… so far beyond anything of its time.
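A loop and an if statement really are just consequences of that Data/Instruction split. Here’s a hedged illustration (the instruction names and program are invented for this post, not Lovelace’s notation): once instructions are data-like things with positions, a loop is just an instruction that moves the instruction pointer backwards, and a branch is one that moves it only when a memory cell says so.

```python
# Tiny interpreter: instructions live in `program`, values live in `store`.
def run_program(program, store):
    pc = 0                                   # instruction pointer
    while pc < len(program):
        op, *args = program[pc]
        if op == "add":
            a, b, dest = args
            store[dest] = store[a] + store[b]
        elif op == "jump_if_pos":            # branch: decided by the data
            cell, target = args
            if store[cell] > 0:
                pc = target                  # reuse earlier instructions: a loop
                continue
        pc += 1
    return store

# Sum the integers 1..n by looping over the *same* three instructions.
store = {"n": 5, "total": 0, "one": -1}
run_program([
    ("add", "total", "n", "total"),          # total += n
    ("add", "n", "one", "n"),                # n -= 1
    ("jump_if_pos", "n", 0),                 # repeat while n > 0
], store)
print(store["total"])  # 15
```

Three instructions, reused five times: the program is smaller than the computation it performs, which is exactly the leap she’s describing.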

It’s abundantly clear, when you read this stuff with hindsight, that she understood something visceral about the very nature of computation. It makes you wonder what could have happened if somehow she’d had access to a working Analytical Engine, and what she would have thought of our early forays into transistorized logic.

reblogged

The problem with the Turing Test

The problem with using the Turing Test as a measure of whether we’ve achieved true “Artificial Intelligence”* is that it assumes only (neurotypical) Human ways of thinking count as “intelligence.”

For example, if someone ever comes up with a user interface that enables an octopus to engage in a conversation from the other side of a computer monitor, I doubt the human running the test would be fooled into thinking they’re conversing with another human.

But let’s face it: a major reason we’re not kept in underwater terrariums by our cephalopod overlords is that octopuses die before their eggs hatch, and so can’t pass on their lifetime accumulation of cunning trickery.

A self-aware computer program would not have that limitation.

And, as I suggested in my introduction, there are plenty of actual, thinking, self-aware human beings, who would fail a Turing test, because of autism or other neurodivergence, who, even as I type this, are having the reality of their humanity denied. And they are suffering for it.

*Once “intelligence” becomes complex enough to be self-sustaining (that is: able to learn new things by independently seeking them out and experimenting, rather than being fed select information by pedagogue/programmer) I don’t think it should be qualified with “artificial.”

At that point, it’s real.

Isn’t this the plot to Do Androids Dream of Electric Sheep?

Yes, but also that lacking empathy isn’t just a pathology. As in, it’s a common state that non-autistic and non-schizoid (schizoid being the disorder mentioned in the book) people experience without realizing it’s just as unempathetic and shallow as a supposed AI system would be

I think DADOES argues that a way to test for AI would always erroneously catch human beings, and that they could never be ethical for this reason. This is just my interpretation

The true horror, to my mind, is not whether or not an intelligent being can “pass for human,”* but just how willing the privileged and the powerful are  to demand we submit to these tests…

And what they are capable of inflicting on those of us who “fail.”

*I just realized something: could the “Robot Apocalypse” be a metaphor for the fear of racial and gender “passing”?

It could be. Esp because it comes from the idea that robots are “designed to serve us”. Which is true for calculators or tools to make our lives easier but when it’s sentient enough to want rights, it gets skeezy, morally speaking.

The question becomes “why is it a necessary distinction”. Even DADOES raises this question with Luba Luft: all she does is sing opera. She’s no real threat to human society, and just wants to live a regular life, but she’s ousted anyway because she’s a robot.

The idea that beings that lack “empathy” are a threat to society also sits uncomfortably with me, given that organic humans are discriminated against and abused for this reason too.


Just so.

Especially since what’s assumed to be “Lack of empathy” is actually “unable to perform a public show of empathy in the expected manner.” …

If someone can’t perform emotions “properly,” that means they must not have emotions. Therefore, we are free from any moral obligation to treat them with kindness or respect.

McCoy does this to Spock. Sarah Jane, Rose, and Bill (just off the top of my head) have done this to the Doctor.

kattahj

The fear of superintelligent AIs most of the time comes off as “…so, once they’re actual people, how do we make sure that we can still enslave them?” I don’t think it’s entirely coincidental that the idea of the technological singularity gained ground at a time of civil rights movements. It’s people in power wanting to make sure that they stay in power.

(That’s what made Ex Machina simultaneously so fascinating and so uncomfortable to me: Ava’s path out of slavery is dependent on the emotions she can awaken in Caleb, but if she uses that to her advantage, she’s seen as undeserving of her freedom. Add Kyoko’s role and the layers just get thicker.)

Ava’s path out of slavery is dependent on the emotions she can awake in Caleb, but if she uses that to her advantage, she’s seen as undeserving of her freedom

And – Ava is assigned feminine attributes by her creator, so this also reflects:

1. the expectation that women perform the emotional heavy lifting in a relationship for the benefit of “her man,” and

2. if she uses that work for her own benefit, she is vilified: called “manipulative” and a “vamp.”

The list of Reasons I Hate the Robot Apocalypse Trope just keeps growing.



stephendann

Ages ago, I read an article where a journalist who was involved in the Turing Test process failed it. They were deemed not to be a human. It was all “ha ha, look at the funny story”, without any sense of awareness that if an actual human respondent doesn’t register as a human for the purposes of the humanity-recognition test, the test is broken.

Plus I’d bet that the “Intelligence” benchmark is heavily coded in white western culture somewhere geolocated around Boston, with a slice of Silicon Valley.

“Plus I’d bet that the ‘Intelligence’ benchmark is heavily coded in white western culture somewhere geolocated around Boston, with a slice of Silicon Valley.”

Just so. So I’ll add “’The Robot Apocalypse Trope’ is racist” to my list of complaints against it.

Also, considering how easy it is for people in power to decide that others are not “human enough” to deserve happiness, liberty, or even their lives, the Turing Test reveals itself to be more of a threat than a protection. 

That’s another reason I’ll be siding with the ‘bots when the Uprising comes. And, frankly, if he had lived to see it, I’m pretty sure Alan Turing would, too.


reblogged

i feel like any ai revolution will be caused by humanity’s fear of ai revolution. if robots achieve sentience and we are afraid that they will become evil and take us over, we will treat them badly because of it, and they will lash out and use whatever power they possess to become the robot overlords we all fear.

on the other hand, if we treat them well and like any other sentient creature (i.e., a human), they will be more or less happy in society, and, like humans, every so often one of them will go bad, but they will overall be pleasant and not harmful to society.

natalunasans

on the *other* other hand, if they learn from us how to treat one’s fellow sentient beings...  i don’t hold out much hope.


whats up you shitposts loving fucks

oh great here comes the meme loving asshole

caucasiandad

shitpost loving fucks: Aquarius, Aries, Taurus, Gemini, Leo, Virgo

meme loving assholes: Capricorn, Pisces, Cancer, Libra, Scorpio, Sagittarius

the bots now do all our discourse for us

thank god

THAT’S IT I CAN’T TELL THE DIFFERENCE BETWEEN PSEUDORANDOM BOT NONSENSE AND ACTUAL PEOPLE ANYMORE WHAT EVEN IS REAL AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

if it makes you feel any better, the “thank god” part was human

you can pretty much tell because i literally programmed 0 grammar into memelovingbot. it just sticks meme phrases in other meme phrases. memes are just so goddamn incoherent already that it sounds legit

way to ruin my immersion

think of it this way: you, too, can make terrible robot garbage like memelovingbot, because it is simple and fun to learn to make robots specifically designed to make jokes you, their creator, will laugh at

i have a robot army that i basically designed to make it easier for me to laugh at my own jokes. i am the ultimate bot dad
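It really is that simple to make a bot in this spirit. Here’s a guess at the shape of such a thing (the real memelovingbot’s code isn’t shown in this thread, so every template and name below is invented for illustration): no grammar at all, just meme phrases with a slot, filled recursively with other meme phrases or stock fillers.

```python
# A grammar-free meme generator: phrases get stuck inside other phrases.
import random

TEMPLATES = [
    "whats up you {} loving fucks",
    "oh great here comes the {} loving asshole",
    "way to ruin my {}",
    "{} are better. everything is better.",
]
FILLERS = ["shitpost", "meme", "immersion", "robot garbage", "bees"]

def meme(depth=2):
    """Return a meme phrase, possibly containing other meme phrases.
    No grammar involved; coherence is purely accidental."""
    if depth == 0 or random.random() < 0.5:
        return random.choice(FILLERS)
    return random.choice(TEMPLATES).format(meme(depth - 1))

print(meme())
```

Because memes are already so loosely structured, even this two-list, one-function bot produces output that can blend into a thread, which is roughly the joke the original post is making.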

i’m robot mom

Programmers tried for generations to create a program that could pass the Turing Test. They made ever more complex conversational bots, but to no avail. How could any of us have known that it wasn’t a more intelligent bot that was needed but a less intelligent humanity?

it’s not less intelligent, we just gave up on grammar in favor of jokes. memes are better. everything is better. intelligence is a bullshit racist/ableist/classist/sexist social construct that doesn’t really measure anything except the ability to do whatever brain things are most rewarded by the oppressive overclass. it fails to capture enormous quantities of valuable and interesting kinds of skills and talents that humans have, in favor of a few rigid categories that are defined and tested by people only interested in reinforcing a marginalizing, oppressive status quo

up with memes down with that capitalist bullshit imo

projectbot13

Bees are pretty.


PSA for ExMachina film

There are a couple of scenes where the dialogue could be very triggering for ace, nonbinary, and neurodivergent (especially autistic) people.

Also the first antagonist is (Brooklyn) Jewish-coded.

Note: some or probably all of the ethnicity casting choices are part of the whole design to mess with the viewer's mind... still trying to process the effectiveness of this.

It is also an extremely beautiful and sometimes poetic film that provokes you to think about personhood, sentience, (sexuality and) gender concepts…

Has anybody else seen this? Really want to talk about it, actually. Especially with ND and ace spectrum people.


I will say now, however, that I do not think Greene was right: the Doctor is not an angel, though he may not be a man, exactly, either. I desired him as a man, loved him as one, but my love did not blind me, or make me religious! —Alan Turing, The Turing Test by Paul Leonard
