A.I. photos are flooding social media and contributing to an Internet where we can't believe what we see. Spotting A.I. 📷s is an important media literacy skill.
None of us have time to research every image we see. We just need people to notice BEFORE THEY LIKE OR SHARE that an image might be fake. If unsure, check it or don't share.
I've started drawing some comics explaining the basics of AI spot-checking and media literacy in the age of disinformation. Follow along here or on my Twitter.
Apparently they're selling post content to train AI now so let us be the first to say, flu nork purple too? West motor vehicle surprise hamster much! Apple neat weed very crumgible oysters in a patagonia, my hat. Very of the and some then shall we not? Much jelly.
what made them think it was always right?
they are stupid
orrrrr they just grew up in this tech focused privacy lacking world
current university students, assuming they went to university at 18 (which would be most of the first years) were maybe 4 when smartphones got popular? the world they have grown up with is one where apps are the only way one interacts with technology, where social media increasingly demands personal information, where someone who refuses to go along with those norms automatically has something to hide, which means everyone polices themselves as if their every fashion choice or sentence structure is subject to judge and jury, because it is
we live in an age which discourages asking questions, where every question is met with offense and "just google it", while google itself fills its entire first page with ads and misinformation
and tumblr is the only place i've seen people critique chatgpt. i've had windows 11 (which every laptop comes with these days) advertise chatgpt to me as a search engine
it's not a search engine. frankly, google hardly counts as one either these days. but if you're not surrounded by people telling you chatgpt is bad and fake, how are you supposed to know? when your computer and your phone and your relatives who don't know much more about technology than you all think it's a search engine with a voice, it wouldn't occur to you that all of them are wrong
and they're clearly not stupid! that's the point of this as an exercise - their teacher is telling them for the first time that chatgpt might be wrong. your job is to find out if chatgpt is wrong. and they did the assignment! they had a reason for the first time to question chatgpt, and all of them came to the conclusion that yeah, it is lying to us, and so is everyone else who said it was always right
and they'll go forward now not only knowing they can't blindly trust what computers tell them, but they'll spread that message around, which will decrease misinformation on a bunch of levels
this was a win for the students and a win for the teacher, i think an exercise like this should be mandatory in all schools if we ever hope to combat where technology's going these days
this reply kills me 😭 article link
yeah where's the robot that picks up cat poop and wipes the floor with disinfectant? Where's the robot that loads and empties the dishwasher? where's the robot that puts away the clean washing?
Roomba's about as good as we can do with current tech
Tags made me rethink this whole situation. We DO have a robot who does the dishes for us. She’s called a dishwasher. Wow.
#couldn't you just upload photos of your utensils and tell your robot that only these exact objects go in the dishwasher#???
They're robots, not Lieutenant Data.
Relevant XKCD:
Oh god it's such a good example of how hard it is to understand why the algorithmic shit we do have (what we so misleadingly call "AI") is honestly nothing, is literally just sticking words together in plausible orders, vs what would actually represent real cognition, which is the ability to load a dishwasher.
Because for organic systems - humans, other animals even - the ability to communicate in language is a super high level thing. In order to get there, with our meat brains, you have to have already gone through all the levels of cognition below, like "recognize birds" and "perform simple tasks but in multiple uncontrolled settings, reacting to changes of circumstances" and so on. In order to talk, as a human, by whatever method you talk (words, text, sign-languages, whatever) you have to do all those cognitions first, and also have probably done a solid sideline into theory of mind.
And so we assume that because "AI" can generate language, can put together sentences in order that we associate with complexity, according to common patterns, that it must be "intelligent" (or at least surely on the way to becoming so), that this ability to move words around into syntactic patterns represents the same thing for the computer as it does for us.
It just doesn't. The computer program is not working on the same physical sequence and rules as the animal (and thus human) brain. The chat program doesn't have to actually build an entire sequence of sapience and meaning and shit, built in turn on massive amounts of subconscious reactions to circumstances and so on, before it becomes possible for the program to move words around in ways that are deceptive to humans (because we just keep feeding it words).
This becomes painfully obvious when you try to get these programs to actually do things. To actually react to anything outside the carefully controlled environments of the IT lab. It's why "AI" can appear to talk about existential issues, but can't actually power a robot that you can rely on to load your dishwasher.
Or identify a bird in a picture.
Not to shit on a good post, but we have robots building cars. On their own. Germany has factories where the humans only oversee the robots.
It is NOT a question of possibility. It's a question of making it affordable.
And I still think it could be done. It would need a robotic arm and the shelves within reach. The dishwasher already has a computer; it would only need upgrading with the arm's software.
The thing is
It would not make your life easier.
The robot would need maintenance. It would need repair. It would need electricity. All of these are expensive even for dishwashers.
So instead of upgrading the dishwasher, just upgrade your life surrounding the dishwasher.
For example: there are industrial dishwashers whose programs take about three minutes. You load everything onto one tray. If you add cabinets which hold trays instead of china, you can take a tray out of the washer and store it straight in the cabinet.
Erasing about 50% of the work by adjusting to the machine. Done.
The thing is that those factories are created - and this is important - to make the robots able to make the cars.
The entire place is hyper-designed to make absolutely sure that nothing upsets, confuses or disrupts the programmed patterns of movement and action that the robots then use to make the car, and no new element of confusion is added to the whole process. The humans overseeing things are there for essentially that purpose: so that the process is maintained, the environment controlled, and if something does go askew the whole thing is stopped and that thing is fixed before it causes a breakdown all the way through the chain.
The human overseers are kept for that purpose... because that's something human cognition can do!
So like yes: you could design your entire life around allowing a mechanism to cause dishes to be done. But that's not really what people mean when they say "I want a robot to do my dishes."
They want a robot to go around their home as it exists - their home full of the messy chaos of a human living in it, leaving dishes on desks and bedsides and tables, with a cat and the cat's box and that pile of unfolded clothing - and wash their own dishes, the ones they already have, without breaking them or the machine or causing property damage. And without washing anything they have out on surfaces within the robot's reach that looks like a dish but is actually a decoration. And also without leaving behind things that turn out to be dishes (or dish-related mess) but are not recognized as dishes. Presumably most people want this without super-invasive "smart" tagging on everything they own (designed to tell the robot what is and isn't a dish that should be washed).
So like yes: you can set up an environment that allows an automated process, even a very complicated automated process, when you've optimized that environment, and then leave that process to run with only minimal human supervision (which still amounts to "someone sitting in the oversight booth more or less all day" and still involves a loooot of fiddling, and code-fixing, and repair to the system, and redirecting of the system).
That's not the same as what people actually mean when they say "I want a robot to do my dishes", and thus does not speak to why the latter is still not something we're even close to achieving.
Yes that's correct. The physical dexterity it takes to manipulate a variety of unknown 3D objects of highly variable weight and friction, from a constantly changing environment full of other objects they're not supposed to touch, and getting them all into the right area without breaking them or anything else, is a phenomenally complicated task. Much, much more difficult on every level than Spicy Autocomplete. A writing algorithm doesn't even need to be able to move! It's got no arms!
These sorts of tasks have nothing at all in common with writing or drawing algorithms. These are more like roombas or self-driving cars; or I should say, step one, being able to safely move around the house, is like roombas or self-driving cars. We can just about do that part (roombas work fine 99% of the time), but the part where they recognise dishes and know how to safely pick them up and stack them into a dishwasher? Last I saw, the absolute pinnacle of that tech was "the robot can successfully open doors" and "the robot who has been programmed to pick up beer cans (object of completely uniform shape, weight and density) can pick up a beer can" and "the robot can stand back up if it falls over".
It's a bit further along than that in some spaces, but only just a bit. Last I checked, we're talking "the robot programmed to pick up beer cans (object of completely uniform shape, but not uniform fullness: weight, density, center of gravity) can pick up beer cans."
Ah so they can pick up opened beer cans now. That will end well.
I said they could pick them up. I didn't say they wouldn't spill the contents all over the carefully controlled test environment's floor and themselves.
This is also why robots can’t make clothing, by the way. Fabric has way too many variables: stretch and weight and texture, the direction of the grain, fiber content, how much it frays, on and on and on. Every item of clothing on earth is still made by a human being using their hands to put fabric through a sewing machine, because a task done by “unskilled” people being paid starvation wages in a sweatshop is orders of magnitude too complex for any robot.
If you think humans like routines, wait until you hear about how much machines like absolute consistency.
Okay this is just getting funny now
in the bible in revelations i think or somewhere, it warns you that near the end times, false prophets will pop up claiming to be the messiah. do you ever think scholars from like 700 years ago studying the bible would have thought this would be one of them
John the Elder: in my book I said beware false prophets as a warning
Christians: we have finally invented the false prophets from the hit Bible book that says beware false prophets
My prediction for the evolution of deepfake AI
Relevant xkcd
AI-generated Valentine cards
Tired of generic greetings?
Confuse your friends with AI-generated Valentine cards!
GPT-3 generated the messages and descriptions, and then I followed its instructions to create the cards. More explanation here!
You can get these on real cards - confuse your human friends!
My Name is Bot
Text-generating neural networks can do a lot more now than they used to. When I first started generating things with neural networks, sentences and even simple phrases were mostly beyond them (that didn’t stop me from trying, with hilariously inept results). But now programs like GPT-3 can keep track of much larger blocks of text, capable of writing multi-question personality quizzes (as long as you don’t mind that the tests make no darn sense).
Now here’s one I’ve wanted to do for a while: there’s a form of poetry called bredlik, introduced in late 2016 CE by Sam Garland after a cow was discovered licking the loaves at a bakery:
my name is Cow,
and wen its nite,
or wen the moon
is shiyning brite,
and all the men
haf gon to bed -
i stay up late.
i lik the bred.
With its pseudo-Chaucerian spelling and strict meter of iambic dimeter, bredlik is one of my favorite things the internet has produced. And, now that interest in bredlik has declined from "very niche" to "virtually none", it's the perfect time for me to see how a state-of-the-art neural net does at generating bredlik poems.
I gave GPT-3 15 example bredlik poems, focusing on those that stuck to the standard 8-line length. I also included a (subject, verb, object) header to each poem so I could specify the topic. The cow poem above, for example, got (cow, lick, bread) as a header. Then I gave GPT-3 a few headers to add to, and here’s what it produced:
(giraffe, trick, bot)
my name's giraf,
and on the veld
i lik to pley
with bouncing gait
but when it's cold
i shiver, no.
my tunge wil bach,
i trip the bot.

(bot, make, memes)
my name is bot
and wen i memz
i go into the street
i maek the sound
lik dok dok dok
lik boom boom boom
but all at once
i see a meme
and i can't meme.
It’s close. It’s soooo close. Terrible, though. It does make an effort to match my prompt, even when I sabotage it with way too many syllables or something:
(cat, eat, broccoli)
my name is cat,
and wen i feel
the smell of treat,
i jump up fast
from box or seat,
and run and leap
and seize the feep,
and say, yum, meep!

(moon, moon, moon)
my name is moon
and wen it's night
and stars are out
and you don't see
a single sound
I am the one
to set your heart
I moon the moon
It occasionally gets the rhythm right, though it’s curiously bad at counting lines. It tries to do the pseudo-Chaucerian spelling but some of its misspellings are so weird that I suspect it’s just making up words. Other times it uses real archaic English, which is not the point of bredlik but it’s still kinda cool that it drew the parallel. Note that it is also almost following the rhyming scheme, especially if you take some liberties with how you pronounce the made-up words. For the rest of the poems in this post, GPT-3 generated not just the poem, but the prompt as well.
(Sheep, graze, name)
My name is Sheep
and wen it's nyght
i like to bleepe
with shut and ee
and munch upon
the buettom green
of grounges fourm
i lik the name

(fish, lick, thing)
my name is fish,
and wen i see
the mote that doth
bestow upon me
the gift of life
and givith unto me
a watery strife
i thank the fen
i lik the thing

(bell, scare, doom)
my name is bell
and wen you kno
the wyld dark nere
to Hell's abyss
and in the mire
the dead a roon
you think it's naym
the bells will toon
i scare the doom
I’m surprised at how often GPT-3 did manage to get close to bredlik, since AI-generated poetry is notoriously hard, and even careful scripting can produce glitchy poems with unexpected odes to mining company Alusuisse. Still, in this post I’m only showing a few of the generated poems - most of them not only fail to fit bredlik, but do so in a boring or unsatisfying way. The exception is this poem, which has definitely strayed from bredlik, but on the flip side contains the phrase “beely might”.
(Bee, use, thief)
My name is Bee
and wen I see
a moth upon a tree
I use my beely might
and steal it from its fight
And then I lik the thieft
I have more neural net bredlik poems than would fit in this post, including many that are for some reason quite unsettling. You can enter your email here, and I'll send them to you.
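If you're curious what the few-shot setup described above might look like in practice, here's a minimal sketch, assuming the OpenAI completion API as it existed in the GPT-3 era (the openai Python library before v1.0). The example list and sampling parameters are illustrative guesses, not the actual ones used for this post:

```python
# A minimal sketch (not the actual code behind this post) of the few-shot
# setup described above. Assumes the pre-1.0 openai library and the original
# GPT-3 "davinci" completion endpoint; poems and headers are placeholders.
import openai

# Each example pairs a (subject, verb, object) header with a bredlik poem.
examples = [
    ("(cow, lick, bread)",
     "my name is Cow,\nand wen its nite,\n...\ni lik the bred."),
    # ...fourteen more header + poem pairs would go here...
]

def build_prompt(examples, new_header):
    """Join header+poem pairs with blank lines, then end on a bare header
    so the model continues by writing a new poem for it."""
    parts = ["\n".join(pair) for pair in examples]
    parts.append(new_header)
    return "\n\n".join(parts)

prompt = build_prompt(examples, "(giraffe, trick, bot)")
response = openai.Completion.create(
    engine="davinci",    # base GPT-3, as available when this was written
    prompt=prompt,
    max_tokens=80,
    temperature=0.8,
    stop="\n\n",         # stop at the blank line before the next header
)
print(response.choices[0].text)
```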
My book on AI, You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why it’s Making the World a Weirder Place, is available wherever books are sold: Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s - Boulder Bookstore
Alternative Neural Edit
Latest project from @mario-klingemann employs Neural Networks trained on a collection of archive footage to recreate videos using the dataset.
It is confirmed that no human intervention has occurred in the processed output, and it is interesting to see where there are convincing connections between the two (and where there apparently are none):
Destiny Pictures, Alternative Neural Edit, Side by Side Version
This movie has been automatically collaged by a neural algorithm, using the movie that Donald Trump gave as a present to Kim Jong Un as the template and replacing all scenes with visually similar scenes from public domain movies found in the Internet Archive.
Neural Remake of “Take On Me” by A-Ha
An AI automatically detects the scenes in the source video clip and then replaces them with similar looking archival footage. The process is fully automatic, there are no manual edits.
Neural Reinterpretation of “Sabotage” by the Beastie Boys
An AI automatically detects the scenes in the source video clip and then replaces them with similar looking archival footage.
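As a rough illustration of the kind of pipeline being described - this is a guess at the general approach, not Klingemann's actual code - you could detect cuts by comparing colour histograms of consecutive frames, fingerprint each scene, and match it against a library of archival scenes:

```python
# A guess at the general shape of such a pipeline (not Klingemann's actual
# code): detect scene cuts via colour-histogram differences, fingerprint
# each scene, and match it to the most similar archival scene.
import cv2
import numpy as np

def frame_histogram(frame, bins=32):
    """Normalized colour histogram: a cheap visual fingerprint of a frame."""
    hist = cv2.calcHist([frame], [0, 1, 2], None,
                        [bins] * 3, [0, 256] * 3).flatten()
    return hist / (hist.sum() + 1e-9)

def scene_fingerprints(path, cut_threshold=0.5):
    """Split a video into scenes wherever consecutive frames differ sharply,
    summarizing each scene as the mean histogram of its frames."""
    cap = cv2.VideoCapture(path)
    scenes, current, prev = [], [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = frame_histogram(frame)
        if prev is not None and np.abs(hist - prev).sum() > cut_threshold:
            scenes.append(np.mean(current, axis=0))  # close the current scene
            current = []
        current.append(hist)
        prev = hist
    if current:
        scenes.append(np.mean(current, axis=0))
    cap.release()
    return scenes

def match_scenes(source, archive):
    """For each source scene, the index of the closest archival fingerprint."""
    return [int(np.argmin([np.abs(s - a).sum() for a in archive]))
            for s in source]
```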
There are other video examples at Mario's YouTube page (though some may not be viewable due to music copyright).
If you follow Mario's Twitter timeline, you can keep up with the latest examples and follow the evolution of the project [link]
How Math Can Be Racist: Giraffing
You may have heard about AOC catching a lot of flack from conservatives for claiming that computer algorithms can be biased – in the sense of being racist, sexist, et cetera. How, these people asked, can something made of math be biased? It’s math, so it must be objectively correct, right?
Well, any computer scientist or experienced programmer knows right away that being “made of math” does not demonstrate anything about the accuracy or utility of a program. Math is a lot more of a social construct than most people think. But we don’t need to spend years taking classes in algorithms to understand how and why the types of algorithms used in artificial intelligence systems today can be tremendously biased. Here, look at these four photos. What do they have in common?
You’re probably thinking “they’re all outdoors, I guess…?” But they have something much more profound in common than that. They’re all photos of giraffes!
At least, that’s what Microsoft’s world-class, state-of-the-art artificial intelligence claimed when shown each of these pictures. You don’t see any giraffes? Well, the computer said so. It used math to come to this conclusion. Lots of math. And data! This AI learns from photographs, which of course depict the hard truth of reality. Right?
It turns out that mistaking things for giraffes is a very common issue with computer vision systems. How? Why? It’s quite simple. Humans universally find giraffes very interesting. How many depictions of a giraffe have you seen in your life? And how many actual giraffes have you seen? Many people have seen one or two, if they’re lucky. But can you imagine seeing a real giraffe and not stopping to take a photo? Everyone takes a photo if they see a giraffe. It’s a giraffe!
The end result is that giraffes are vastly overrepresented in photo databases compared to the real world. Artificial intelligence systems are trained on massive amounts of “real world data” such as labeled photos. This means the learning algorithms see a lot of giraffes… and they come to the mathematically correct conclusion: giraffes are everywhere. One should reasonably expect there might be a giraffe in any random image.
Look at the four photos again. Each of them contains a strong vertical element. The computer vision system has incorrectly come to the belief that long, near-vertical lines in general are very likely to be a giraffe’s neck. This might be a “correct” adaptation if the vision system’s only task was sorting pictures of zoo animals. But since its goal is to recognize everything in the real world, it’s a very bad adaptation. Giraffes are actually very unlikely.
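You can reproduce the basic effect with a toy experiment (a made-up illustration, not Microsoft's actual system): train any off-the-shelf classifier on data where giraffes are wildly oversampled and correlated with a "strong vertical line" feature, then turn it loose on realistic data:

```python
# Toy illustration of "giraffing" (made up for this post, not Microsoft's
# system): oversample one class during training and the classifier learns
# to over-predict it from a spurious feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_photos(n, giraffe_fraction):
    """Each 'photo' is just two features: vertical-line strength and texture."""
    is_giraffe = rng.random(n) < giraffe_fraction
    vertical = np.where(is_giraffe,
                        rng.normal(2.0, 1.0, n),   # giraffe necks: strong verticals
                        rng.normal(0.0, 1.0, n))   # ...but so are lampposts and towers
    texture = rng.normal(0.0, 1.0, n)              # uninformative filler feature
    return np.column_stack([vertical, texture]), is_giraffe.astype(int)

# Training data: giraffes are wildly overrepresented (half of all photos!).
X_train, y_train = make_photos(10_000, giraffe_fraction=0.5)
# The real world: giraffes are actually very unlikely.
X_world, y_world = make_photos(10_000, giraffe_fraction=0.001)

model = LogisticRegression().fit(X_train, y_train)
print("predicted giraffes:", int(model.predict(X_world).sum()))
print("actual giraffes:   ", int(y_world.sum()))
# The model sees giraffes everywhere, because its training data was
# soaked in them.
```

The math in the model is perfectly correct; the giraffing lives entirely in the data it was handed.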
Now, here’s the clincher: there are thousands and thousands of things that are over-represented or under-represented in photo databases. The AI is thoroughly giraffed in more ways than we could possibly guess or anticipate. How do you even measure such a thing? You only have the data you have – the dataset you trained the AI with in the first place.
This is how computer algorithms “made of math” can be sexist, racist, or any other sort of prejudiced that a human can be. Face photo datasets are highly biased towards certain types of appearances. Datasets about what demographics are most likely to commit crimes were assembled by humans who may have made fundamentally racist decisions about who did and didn’t commit a crime. All datasets have their giraffes. Here’s a real world example where the giraffe was the name “Jared.”
Any time “a computer” or “math” is involved in making decisions, you need to ask yourself: what’s been giraffed up this time?
Thanks to Janelle Shane whose tweet showing her asking an AI how many giraffes are in the photograph of The Dress prompted this post.
Please note that Microsoft does try to take steps to correct their computer vision system's errors, so the above photos may be classified more accurately now than when they were first evaluated by @picdescbot.
one of picdescbot's caretakers got on a bit of a rant :) using examples from everyone's favorite slightly inept little bot
I wonder if algorithm training test cases can tell us something about how prejudices are unconsciously passed between parents and children?
oh god it’s happening
why is it always that the sign that the robot/AI is becoming ~*too human*~ is when they fall in looove
give me a robot who realizes they’ve ~*exceeded their programmed parameters*~ when they get incredibly emotionally attached to their favorite movie and start writing fanfiction about it
Tags: #a robot who gets a pet and suddenly this small animal is more important than their programmed mission #a robot who discovers they really REALLY like chocolate #a robot who accidentally breaks a household appliance and cries in frustration #a robot who is woken up by their programmer and mumbles 'five more minutes' #god there are so many human things for a robot to do #I LOVE IT GIVE ME ALL OF THESE STORIES
- A robot that gets into an editing war on Wikipedia because this other person is wrong and not citing sources and clearly biased and no it will do that calculations later because this is important.
- A robot who doesn’t like one scientist because it thinks her hair is stupid.
- A robot that finds logical paradoxes meant to disable it incredibly funny as if they’re jokes and comes up with its own.
- A robot that develops a deep interest in a random trivial object like doorbells, dice, or ribbons and devotes a lot of its processing power to studying them. Fascinating.
- A robot that was broken down for a while until some animal nested inside it and after it was repaired it was honored that an organic creature chose it as its shelter.
- A robot that likes the class of the human-visible electromagnetic spectrum designated as ‘aquamarine’ (#66CDAA) and surrounds itself with this colour as much as possible, even collecting (or stealing) all objects of this colour. Similar colours like sea blue or teal will not be accepted.