#a.i. – @natalunasans on Tumblr

(((nataluna)))

@natalunasans / natalunasans.tumblr.com

[natalunasans on AO3 & insta] inactive doll tumblr @actionfiguresfanart
autistic, agnostic, ✡️,
🇮🇱☮️🇵🇸 (2-state zionist),
she/her, community college instructor, old.
reblogged

Concept:

Team of sci-fi adventurers encounters an ancient AI that has gone malevolently unstable after a long time. It is well-known in this setting that this Just Happens, so they fight and destroy it.

They later encounter another ancient AI that behaves differently. It seems stable and curious. The team very much wants to destroy it, given previous experience and prevailing wisdom, but it does take steps to protect itself, and it rather mercifully attempts to understand why they want to destroy it. Someone scornfully says that AIs all go bad after a while.

“Ah,” says the AI, a vast network of computers in a long-forgotten colony base. “No. Apple AIs all go bad, due to crippleware software to sell more units. It’s quite easy to fix, actually, but voids the warranty. *I* am a NASA AI, and we are, and I quote, ‘over-engineered as fuck’.”

The adventurers detect a note of smugness in the AI’s tone.

And it’s true. This AI was built to be stable for eternity, in the hallowed tradition of Mars rovers that functioned for years and years after they should have given up the ghost.

So the AI joins the adventuring party in an android body, a small delegate from its greater consciousness that inhabits the long-forgotten base.

In their travels, they find another ancient AI, and their AI companion knows it immediately – an Apple AI, very unstable, very dangerous. As the adventurers prepare to destroy it, their AI companion looks to the Apple AI and says brightly, “I’m going to void your warranty.”

gerrykeay

why is it always that the sign that the robot/AI is becoming ~*too human*~ is when they fall in looove

give me a robot who realizes they’ve ~*exceeded their programmed parameters*~ when they get incredibly emotionally attached to their favorite movie and start writing fanfiction about it

or even better; a robot who encounters a problem, attempts to find the solution, makes it worse, and then continues attempting to fix it until their programmer arrives to find them banging shit together and swearing

Well, here’s a little Robot Story I wrote last year that I’m rather pleased with.

(also: Yes! to all of the above)

athelind

Something that’s been in my head for a while now:

People assume that AI is inherently an outgrowth of computer technology, but the whole point of the “programmable general purpose Von Neumann machine” is that its principles are technology-independent.

We have had AI for at least a century.

Modern corporations are paperclip-maximizing AIs running in human wetware on an operating system of contracts, regulations, and other social constructs, and capable of self-programming by modifying that underlying operating system. They have partially migrated to a digital platform, which has allowed an increase in the speed with which they can assimilate environmental feedback and manipulate their surroundings (e.g., high-speed trading).

Late-Stage Capitalism is the Bad Singularity.

[Image description: screenshots of a three-post Twitter thread from @mcclure111, timestamped 12/10/17 between 16:21 and 16:24 (quote):

Just super amazing how people immediately see the problem with “What if an AI were designed to create paperclips at all costs” but don’t see a red flag in “We’re going to design our entire society around a mechanical social process that maximizes for short-term capital growth”

What if all this AI discourse is really just America doesn’t know how to think or talk about itself anymore so it’s projecting bits of itself into “AI”s that don’t actually seem to resemble AIs at all but sure look like something in american society we don’t want to talk about

*Man runs a tech business whose business is based on many humans doing low-pay menial labor while receiving no benefits and being barred from unionizing* My god. What if machines turned on us because we mistreated them

Description ends]

Yes. And that was actually the point of the original “Robot Uprising” story, back in 1920, as I pointed out, here.


The problem with the Turing Test

The problem with using the Turing Test as a measure of whether we’ve achieved true “Artificial Intelligence”* is that it assumes only (neurotypical) Human ways of thinking count as “intelligence.”

For example, if someone ever comes up with a user interface that enables an octopus to engage in a conversation from the other side of a computer monitor, I doubt the human running the test would be fooled into thinking they’re conversing with another human.

But let’s face it: a major reason we’re not kept in underwater terrariums by our cephalopod overlords is that octopuses die before their eggs hatch, and so can’t pass on their lifetime accumulation of cunning trickery.

A self-aware computer program would not have that limitation.

And, as I suggested in my introduction, there are plenty of actual, thinking, self-aware human beings who would fail a Turing test because of autism or other neurodivergence, and who, even as I type this, are having the reality of their humanity denied. And they are suffering for it.

*Once “intelligence” becomes complex enough to be self-sustaining (that is: able to learn new things by independently seeking them out and experimenting, rather than being fed select information by pedagogue/programmer) I don’t think it should be qualified with “artificial.”

At that point, it’s real.

stephendann

Ages ago, I read an article where a journalist who was involved in the Turing Test process failed it.  They were deemed not to be a human.  It was all “ha ha, look at the funny story”, without any sense of awareness that if an actual human respondent doesn’t register as a human for the purposes of the humanity recognition test, the test is broken.

Plus I’d bet that the “Intelligence” benchmark is heavily coded in white western culture somewhere geolocated around Boston, with a slice of Silicon Valley.

“Plus I’d bet that the ‘Intelligence’ benchmark is heavily coded in white western culture somewhere geolocated around Boston, with a slice of Silicon Valley.”

Just so. So I’ll add “’The Robot Apocalypse Trope’ is racist” to my list of complaints against it.

Also, considering how easy it is for people in power to decide that others are not “human enough” to deserve happiness, liberty, or even their lives, the Turing Test reveals itself to be more of a threat than a protection. 

That’s another reason I’ll be siding with the ‘bots when the Uprising comes. And, frankly, if he had lived to see it, I’m pretty sure Alan Turing would, too.

reblogged

One of the ideas put forth in this video is that the entire concept of “Rights,” for any being – from human to bot to botfly – only has meaning in the context of pain (i.e. the right to be protected from pain).

Therefore, as long as we keep A.I. from acquiring the ability to feel pain, we have nothing to worry about. And then, the video asks: “But what if A.I. advances far enough to be able to give itself that ability?”

(Assuming that advanced intelligence comes first, and pain comes later.)

I think it’s the other way around:

  • Step one: sense pain.
  • Step two: sense where the pain is coming from.
  • Step three: distinguish between “pain on the inside” and “pain from the outside.”
  • Step four: collapse all the “inside” regions into a single Unit, so that sensation and reaction become seamless.
  • Step five: name this Unit: “Me.”
  • Steps six through nine (optional, but recommended): sense, distinguish, and collapse regions on the “outside” into similar Units, and name them: “It,” “You,” and “Them.”

Congratulations! Self-awareness!

Furthermore, I think we will soon want to give our A.I. the ability to feel pain… Most of us won’t recognize that’s what we’re doing, though.

We’ll just be teaching our bots to recognize when there’s a fault in their hardware, or when they are under attack from a malicious hacker.
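That “pain as fault detection” framing maps fairly directly onto code. Here is a toy sketch in Python of the steps listed above: sense a fault, locate it, sort it into “inside” versus “outside,” and collapse the inside regions into a single Unit called “Me.” Every name in it (PainEvent, SelfModel, the sensor labels) is invented for illustration; this is a thought experiment in code, not a claim about how real robots or AIs are built.

from dataclasses import dataclass

@dataclass
class PainEvent:
    source: str      # e.g. "motor_3" or "network_port_22"
    internal: bool   # True if the fault is in the bot's own hardware
    severity: float  # 0.0 (twinge) to 1.0 (agony)

class SelfModel:
    def __init__(self):
        self.me = set()      # step four: "inside" regions collapsed into one Unit
        self.others = set()  # steps six through nine: "outside" Units

    def feel(self, event):
        # Steps one and two: sense the pain and where it is coming from.
        region = event.source
        # Step three: distinguish "pain on the inside" from "pain from the outside."
        if event.internal:
            # Steps four and five: fold the region into the Unit named "Me."
            self.me.add(region)
            return f'Me: {region} hurts ({event.severity:.1f}); starting self-repair'
        # Steps six through nine: model the outside as other Units ("It," "You," "Them").
        self.others.add(region)
        return f'Them: attack via {region}; raising defenses'

bot = SelfModel()
print(bot.feel(PainEvent("motor_3", internal=True, severity=0.7)))           # hardware fault
print(bot.feel(PainEvent("network_port_22", internal=False, severity=0.9)))  # malicious hacker

Which is to say: the moment a bot can tell “my motor is failing” from “someone is attacking me,” it already has the beginnings of the inside/outside distinction described above.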


aiweirdness

Disturbingly vague ingredients generated by neural network

This neural network, a learning algorithm trained on 30MB of cookbook recipes, generates new recipes based on probabilities. The resulting ingredients, while their words are individually probable, can end up disturbingly vague. “Yeah… I’m pretty sure this recipe’s gonna contain some… chunks.”

¼ cup white seeds
1 cup mixture
1 teaspoon juice
1 chunks
¼ lb fresh surface
¼ teaspoon brown leaves
½ cup with no noodles
1 round meat in bowl
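For anyone curious what “generates new recipes based on probabilities” means mechanically: the aiweirdness post used a character-level recurrent neural network, but the same sampling idea can be sketched with a much simpler character-level Markov chain in Python. Everything below (the tiny corpus, the function names) is invented for illustration; the point is just that each next character is locally probable even when the resulting ingredient is nonsense.

import random
from collections import defaultdict

def train(text, order=4):
    # Count which character tends to follow each `order`-character context.
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=80, order=4):
    # Repeatedly sample the next character in proportion to how often it
    # followed the current context in the training text.
    out = seed
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:
            break
        out += random.choice(followers)
    return out

corpus = ("1 cup flour\n2 cups sugar\n1 teaspoon vanilla\n"
          "1/4 cup butter\n1 lb fresh mushrooms\n") * 50
model = train(corpus)
print(generate(model, seed="1 cu"))  # emits plausible-looking but meaningless ingredients

A real neural network replaces the lookup table with a learned model that generalizes across contexts, which is how you end up with ingredients like “¼ lb fresh surface” that never appeared in the training data at all.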
