#ai – @elfgrove on Tumblr

Space Sidhe

@elfgrove / elfgrove.tumblr.com

ElfGrove: (she/her) Cosplayer, Plush Maker, Feminist, Polytheist, Panromantic Ace, Speedster Fan, General Purpose Animation and Mythology Geek, GLTAS Fantern Mama Bear
Most folk around here just call me Elf or Elfie.
beesmygod

ed zitron, a tech beat reporter, wrote an article about a recent paper that came out from goldman-sachs calling AI, in nicer terms, a grift. it is a really interesting article; hearing criticism from people who are not ignorant of the tech and have no reason to mince words is refreshing. it also brings up points and asks the right questions:

  1. if AI is going to be a trillion dollar investment, what trillion dollar problem is it solving?
  2. what does it mean when people say that AI will "get better"? what does that look like and how would it even be achieved? the article makes a point to debunk talking points about how all tech is misunderstood at first by pointing out that the tech it gets compared to the most, the internet and smartphones, were both created over the course of decades with roadmaps and clear goals. AI does not have this.
  3. the american power grid straight up cannot handle the load required to run AI because it has not been meaningfully developed in decades. how are they going to overcome this hurdle (they aren't)?
  4. people who are losing their jobs to this tech aren't being "replaced". they're just getting a taste of how little their managers care about their craft and how little they think of their consumer base. ai is not capable of replacing humans and there's no indication it ever will be, because...
  5. all of these models use the same training data so now they're all giving the same wrong answers in the same voice. without massive and i mean EXPONENTIALLY MASSIVE troves of data to work with, they are pretty much at a standstill for any innovation they're imagining in their heads
wumblr

i genuinely love all these sacrificial lambs slaughtering their own careers using AI. i mean we all know i'm saying this for refined, knowledgeable reasons, from my community college computer information systems certificate, but i'm kissing each of them on the heads as they try to cite legal precedent that doesn't exist and get published for the first and last time ever in frontiers in cell and developmental biology and i'm watching them herd each other off a cliff. they're so wooly to me and so, so stupid

ymutate

Artist: Kuo Jean Tseng

yincira

This is the most high-end AI generation to cross my dash specifically; it actually had me fooled for multiple seconds. The shadows on this mostly work, and by focusing on what appears to be an artistic installation it defuses the skepticism.

There's still a few tells, though, especially when you look at the legs.

The cats also have weirdly bulging necks with creases that don't fit the rest of the smooth style, and the shadows aren't quite right. There are also random levitating silver rings near the ear of the second cat.

laskulls

I think the worst thing about this AI craze is that it will make a new generation very stupid. Like yeah, learning to code is a skill, but if kids don't have to learn art or writing principles anymore, you realize how damaging that is for a whole generation, right? You already hear about people forging essays and shit with AI, and even scientific literature using AI-generated images instead of real figures from actual studies.

My fear is that this goes hand in hand with the fascism coming out of the woodwork lately. The censoring of the internet, of media, etc. People are deliberately trying to keep you stupid and unable to think for yourself.

reachartwork

i feel quite bad for people who are convinced nightshade and glaze are going to fix the balance of power in the art world, because they really won't. they're quite useless and extremely easily circumvented, but trying to tell anyone this results in you getting shouted down. buddy i wish it worked too!

sure, here's 5 reasons.

#1 - they're designed for very specific models of ai. by the time they could even hypothetically poison enough of the world's images to affect ais in any meaningful sense the architecture will have changed. it's not fast enough

#2 - it only prevents people training things like LoRAs or fine-tuning on a specific artist's output and does nothing to affect image-to-image (the only thing that could realistically be qualified as plagiarism in any sense) or image prompting (i've tried this one myself to win an argument on twitter but they told me to kill myself).

#3 - image ais are no longer trained by vacuuming up as many images as possible and training on the resultant sludge - that's 2021-tier. two years in ai research is a LOT of time and by the time enough data gets poisoned to hypothetically matter it'll be years down the road. the emphasis in ai research nowadays is increasing the quality of the caption data as well as developing models that function more efficiently off fewer images, which is something that many anti-ai people don't know because they make a concerted effort to not stay up to date.

#4 - the image ais like stable diffusion are already trained. as that one post about vegan chicken nuggets said, the chicken is already in the nugget. you can't un-train the ai and then force the poisoned data back in. if you poison enough data for it to matter ai art people will just go back to earlier models (or wait until the architecture changes enough for it to not matter, see point #1). anyone selling you 'algorithmic disgorgement' has no idea what they're talking about and fundamentally doesn't understand the FOSS ecosystem.

#5 - the most damning is that nightshade and glaze can both be trivially defeated by applying a 1% gaussian blur to your image, which destroys the perturbations required to poison the data.
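
(for concreteness, here's roughly what that kind of blur pass looks like using Pillow. this is a hedged sketch of the general idea, not anyone's benchmarked settings; the filenames and radius are made up for illustration.)

```python
# Rough sketch: re-save an image with a slight gaussian blur.
# Radius and filenames are illustrative placeholders, not tested anti-Glaze settings.
from PIL import Image, ImageFilter

img = Image.open("glazed_artwork.png").convert("RGB")
blurred = img.filter(ImageFilter.GaussianBlur(radius=1))  # mild smoothing pass
blurred.save("blurred_artwork.png")
```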

thanks for asking

side note:

i, personally, think nightshade poisoned pictures look cool and are more aesthetically pleasing to me than midjourney or novelai slop.

An additional point my colleague-in-ai and overall lovely human Max Woolf raised (he helped me make drilbot! Say hi):

Okay that’s all! Cheers

Even if you don't like generative models, this blog post is an amazing read. It also explains why Glaze and Nightshade aren't going to work on "local" models. People aren't training generative image models on their own personal computer. They're using Stable Diffusion and "fine tuning" it with LoRA (the math/stats people always make fun of CS people's sexy names, but I like this one). The UChicago people replied to a Twitter thread confirming that they didn't/don't expect Nightshade to work for LoRAs (someone put a screenshot somewhere else in the notes, and I didn't think to grab it. Maybe it was OP?).
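
(If "LoRA" is a new term: it stands for Low-Rank Adaptation, and the core trick is small enough to sketch in plain PyTorch. This is a toy illustration of the idea only, not the actual Stable Diffusion fine-tuning code anyone ships; the layer sizes and rank here are arbitrary.)

```python
# Toy illustration of the LoRA idea (Low-Rank Adaptation) in plain PyTorch.
# Not real Stable Diffusion training code; sizes and rank are arbitrary.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        # small trainable matrices whose product is a low-rank update to the weight
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # original output plus a tiny trainable low-rank correction
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# e.g. wrap one projection layer of a frozen pretrained model and train only A and B
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # a few thousand parameters instead of ~590k in the base layer
```

The pretrained weights stay frozen and only the small A and B matrices get trained, which is why this counts as "fine tuning" an existing model rather than training a new one.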

If you know an image has had both or either applied, you can basically tell the model that the image is wrong and improve the model using the borked pictures. I think it is really cool that Max Woolf figured out how to do this in this particular case, but I think it should be made clear to non-ML people that augmenting/perturbing data is like... a foundational strategy to improving a model. Sometimes the augmentations are as simple as up-sampling classes that are underrepresented, but the popular ML libraries actually include functions to bork images to train image classifiers on (not in the same way as Glaze or Nightshade, but still).
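
(For the non-ML people: the "functions to bork images" bit looks roughly like this in torchvision. The transforms and values below are a generic augmentation pipeline for training a classifier, chosen for illustration; they are not Max Woolf's actual recipe.)

```python
# Generic image-augmentation pipeline of the kind bundled with popular ML libraries.
# Values are illustrative; this is standard classifier training, not an anti-Glaze recipe.
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.GaussianBlur(kernel_size=3, sigma=(0.1, 1.0)),
    transforms.ToTensor(),
])

# Every epoch the model sees a slightly different, deliberately perturbed copy of each
# training image, which tends to make it more robust to small pixel-level changes.
```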

while im busy upsetting 90% of tumblr i also regret to inform you that the glaze/nightshade developers have a noted history of code plagiarism in developing glaze. it's not relevant to the wider point about whether glaze/nightshade are useful, but i think it's indicative of the snake oil on sale. this came up because people kept asking me to source what i meant when i said glaze & nightshade used stable diffusion (DiffusionBee is a GPL-licensed stable diffusion interface).

clusterbuck

i hope this doesn’t need to be said but just in case

you might have seen people talking about sudowrite and/or their tool storyengine recently

and just like… don’t. don’t do it. don’t try it out just to see what it’s about.

for two main reasons:

1) never feed anything proprietary into a large language model (LLM, e.g. ChatGPT, google bard, etc.).

this means don’t give it private company information when you’re at work, but also don’t give it your original writing. that’s your work.

because of the way these language models work, anything you feed into one is part of it now. and yeah, the FAQ says they “don’t claim ownership” over anything and yeah, they give you that reassuring bullshit about how unlikely it is that the exact same sentence will be reconstructed—

but that’s not the point.

do you have an unusual way of constructing sentences? a metaphor you like to use? a writing tic that sets you apart from the rest? anything that gives you a unique writing voice?

feed your writing into an LLM, and the model has your voice now. the model can generate text that sounds like it was written by you and someone else can claim it’s theirs because they gave the model a prompt.

don’t feed the model.

2) the other reason is that sudowrite scraped a bunch of omegaverse fic without consent to build their model and that’s a really shitty thing to do, because it means people weren’t given the chance to choose whether or not to feed the model.

don’t feed the model.

also this.

don’t feed the model.

capnpea

The interesting thing about the GLaDOS/HAL 9000 parallels is that

HAL was conceived at a time when artificial intelligence was more of a fictional construct than a practical possibility. HAL is introduced as humanlike because the audience is familiar with and comfortable with humans, but they aren’t familiar with or comfortable with living computers. It’s when he starts acting robotic and calculated that the audience realizes “oh no, he’s a computer” and he becomes threatening.

By the time GLaDOS was conceived, we had become used to computer-automated systems. Synthetic voices offering us information are something we encounter in daily life. GLaDOS is introduced as a computerized, preprogrammed voice because that’s what the audience is familiar and comfortable with. It’s when she starts acting human and emotional that the audience realizes “oh no, she’s alive” and she becomes threatening.
