#ai – @whilst-farting-i on Tumblr

I AM AN EEL. WITH A GUN.

@whilst-farting-i / whilst-farting-i.tumblr.com

it's 2024 and I will never be free from homestuck, icon by iamnotamuffin, fuck terfs, im a whole adult, that about covers it
reblogged
eravioli

I just started grad school this fall after a few years away from school and man I did not realize how dire the AI/LLM situation is in universities now. In the past few weeks:

  • I chatted with a classmate about how it was going to be a tight timeline on a project for a programming class. He responded "Yeah, at least if we run short on time, we can just ask chatGPT to finish it for us"
  • One of my professors pulled up chatGPT on the screen to show us how it can sometimes do our homework problems for us and showed how she thanks it after asking it questions "in case it takes over some day."
  • I asked one of my TAs in a math class to explain how a piece of code he had written worked in an assignment. He looked at it for about 15 seconds then went "I don't know, ask chatGPT"
  • A student in my math group insisted he was right on an answer to a problem. When I asked where he got that info, he sent me a screenshot of Google gemini giving just blatantly wrong info. He still insisted he was right when I pointed this out and refused to click into any of the actual web pages.
  • A different student in my math class told me he pays $20 per month for the "computational" version of chatGPT, which he uses for all of his classes and PhD research. The computational version is worth it, he says, because it is wrong "less often". He uses chatGPT for all his homework and can't figure out why he's struggling on exams.

There's a lot more, but it's really making me feel crazy. Even if it was right 100% of the time, why are you paying thousands of dollars to go to school and learn if you're just going to plug everything into a computer whenever you're asked to think??

reblogged

I'm a disabled and chronically ill writer. I can't write every time I want to. I can't use a keyboard or handwrite for disability reasons. The only way I can write is by typing in the notes app on my phone. This is also painful, and I can write a few hundred words at most.

Isn't it interesting how I still wouldn't consider using AI to write my stories instead? If the only way for me to write my stories were by using voice to text and I could write only a single word every day, I still wouldn't choose AI.

Fuck AI and fuck you for pretending to care about disabled people just so you can steal art made by disabled people.

ab-art-07

ASSISTIVE AI is not what we are fighting against.

It's GENERATIVE AI bullshit that's stealing our jobs and passions.

There is nothing wrong with using AI to ask for a prompt or an idea (e.g., asking ChatGPT for a writing prompt). There IS something wrong with asking AI to write a book, printing out the response, and publishing it, claiming that YOU wrote this word for word.

Because you didn't.

When you ask for a writing or art prompt, you're giving yourself a TOOL. A TOOL to help your brain imagine something. You are able to use the prompt as a basis for a story or image that YOU spent time and energy and imagination to create.

But if you ask AI to generate something, all you're doing is thinking of a prompt and throwing it at a robot to make "art". It takes a few seconds.

AI is robotic. For now, it has no soul.

I'd rather have a work of art made from a prompt given by AI but realized by a human than a work of "art" made from a prompt given from a human but realized by AI.

This is not my point at all. AI is hugely problematic in many many ways. One of them being that it steals from people. ChatGPT is not an exception.

The point of this post is not "Don't say you made it when it's AI". The point is that you should not be using AI. There is no reason to use AI. If you need prompts, there are countless writing- and prompt-specific blogs on various social media that you can look through.

Don't use AI. All AI is stealing from people. From actual artists and writers.

There's no good excuse to use ChatGPT for anything, including writing prompts, which incidentally is still generative AI.

Soul, or the lack thereof, isn't why AI like this is bad.

It's built and trained on the theft of work other people actually put thought into, including prompts, and it's disastrous for the environment, which is pretty significant given we're in a climate crisis.

It's also already collapsing in on itself.

Writing is difficult when you're disabled, just like everything else. The last thing I need or want assistance with is using my words to tell my story.

And given the horror stories of disability tech being 'repossessed' by companies going bankrupt, being surgically removed, or just being rendered redundant, AI and its tech bros are the last thing I want anywhere near our community.

As the others on the thread said, I do not want LLMs/generative AI anywhere near my community. I also really hate that some pro-generative AI orgs keep trying to use our community as a "shield" to hide the fact they're ableist. To sum it all up:

  1. Generative AI is theft. It steals the works of others to feed its learning algorithms, and will poop out plagiarisms.
  2. It destroys the environment. It takes billions of gallons of fresh water to cool down the servers that run at max or near max for it to work. I have one of these damned things by my city, and it is ruining our water supply.
  3. Due to how much electricity is needed to run the servers for generative AI, it can also impact fragile electric grids, especially in impoverished areas. Because of the money poured into these servers, if power is knocked out, they often get their power back sooner than impoverished people who may need power for the disability devices they rely on to stay alive.
  4. It does not meet the needs of my community. We need easily accessible, cheap, accurate dictation programs and handwriting-to-text programs. We do not need generative AI.
  5. Generative AI is flooding the market, making it harder for disabled folks to find places where we can publish. Most reputable magazines/publishers no longer have an open submission policy due to the flood. So those of us who are often too ill to keep track of who is open when and for how long often miss submission windows.
  6. Generative AI is making it harder for marginalized authors to even be discoverable. This is really apparent on sites that sell eBooks, so those of us who go with a smaller publisher or self-publish are being drowned out by shitty generative AI crap. But we often don't have the health to market ourselves to get seen. If generative AI continues to exist, this will only get worse.
  7. Generative AI is not accurate. It will often "hallucinate," as in it will provide an answer that may seem true until one fact-checks and discovers it's not true at all. Sure, how one writes the prompt can help diminish this problem, but it's also going to get worse as generative AI shit is being shoved back into the algorithms due to tech bros scraping all they can to feed the learning algorithms.
  8. The datasets are unethical and often riddled with biases, so bigoted statements often get produced. This is because a lot of the stolen works include not just excellent authors but also bigots writing the next big bigoteroo.
  9. Disinformation pours into the Internet, making it harder to discern truth. This has impacted elections in the past and will likely impact the current one in the US. This also makes it harder for us to work together in solidarity, since we often have to waste energy debunking misinformation when we desperately need that energy for our fight against the ableist, racist, bigoted systems that harm us.
  10. It makes internet searches mostly useless due to the massive amounts of disinformation that comes up in a search. Now we need to figure out if the search results are even accurate, which wastes our energy and time.

Of course this list is not exhaustive. I'm sure others will think of other things that harm our community that I didn't list above.

Point is, LLM/generative AI is a scam by tech bros to flood the world with shit and make it harder for us to find one another and create community with one another. It's a grift for them to make a fast buck at the expense of everyone else. I hate that they are trying to force it down our throats.

reblogged
armengoldira
purplesaline

That's an incredible gift for an otter to give! I bet it's the favourite stone of one of the two (they keep it in their pocket and apparently some hang onto their favourite for their entire lives). They use the stone to crack open shells they can't manage with their teeth or claws.

An otter giving away its stone is quite the sacrifice!

Of course there's a chance this wasn't one of theirs, but instead a stone they quickly found to use as a gift, and even if that is the case (though I think it less likely), it's still an important gesture, knowing how important stones are to them.

dr-otter

I...uh...nice story but it starts with a river otter who magically morphs into a sea otter, and changes locations several times.

Pretty sure the sea otter at the end is at an aquarium and has been trained to bring things to the keeper.

I don’t know if this is like an AI thing or what but I have been seeing a TON of these fake animal rescue videos lately.

I was sent a particularly ludicrous one of a turtle being saved from a shark then supposedly bringing the person who saved it a jellyfish as a thank you?? Something that a severely wounded turtle would simply never ever do? And I saw it shared by lots of animal savvy folks on my feed! Wild how the turtle also changes species mid-video and no longer has wounds to the shell.

Crazy stuff man. Don’t let your desire to be heart-warmed override your skepticism. It’s important to think critically about the animal behavior that is presented in these videos.

reblogged
ms-demeanor

I don't care about data scraping from ao3 (or tbh from anywhere) because it's fair use to take preexisting works and transform them (including by using them to train an LLM), which is the entire legal basis of how the OTW functions.

doomhamster

To be fair, some of us don't object to our works being used because of "OMG my style! my copyright!" My style is not that unique.

I want to do my part to sabotage the production and refinement of LLMs because I hate LLMs.

Okay I mean this very seriously because you and I have talked over the years and we both know we're not stupid: why do you hate LLMs?

Most of the time when people talk about why they hate LLMs what they mean is that they hate how LLMs are being used, and I can absolutely agree that there are a lot of LLMs being used really badly. Please know that I am mentally inserting the "el problema es el capitalismo" meme here.

But the Goblin.Tools magic to-do list is a function of an LLM (Goblin.Tools is a website full of free tools created for people with ADHD; it is run on donations by a developer with ADHD who created this tool to help other people).

Replit, the IDE I was using for my most recent coding class, uses an LLM-based AI tool to suggest corrections to your code and highlight errors, teaching baby programmers to recognize errors without having to comb through long programs by hand or rely on having a more experienced programmer nearby to help you figure out where you're fucking up.

There are definite problems surrounding the use of AI search engine results or replacing customer support positions with chatbots, but saying you hate LLMs because OpenAI is kind of shit is sort of like saying you hate web browsers because Google sucks. An LLM is a tool. There are different models trained on different data sets.

So what I'm frustrated by in a lot of these conversations, and what I'm asking you, is "is it the tool you're upset with, or how it's being used, and if it's the tool: why?"

gamebird

What bothers me are the misconceptions people have about it. It's like they saw a film of Clever Hans (a horse who could follow cues from the audience or its handler to appear to perform math and other tricks) and announced that henceforth, everyone who needs a calculator should get a horse.

It's a horse. That's it. At its *best*, it tells you the answer you want to hear. But most horses have no skill in this and trying to cram them into applications is dumb.

I am tired of and irritated by the pro-horse hype.

I'm absolutely right there with you about being tired of the pro-horse hype.

"AI" has very few practical applications for people who aren't already using it and quite a lot of impractical applications for people who have been misinformed about its capabilities, which is an awful lot of people because there are a lot of companies making a lot of money lying about its capabilities right now.

reblogged
beesmygod

ed zitron, a tech beat reporter, wrote an article about a recent paper that came out from goldman sachs calling AI, in nicer terms, a grift. it is a really interesting article; hearing criticism from people who are not ignorant of the tech and have no reason to mince words is refreshing. it also brings up points and asks the right questions:

  1. if AI is going to be a trillion dollar investment, what trillion dollar problem is it solving?
  2. what does it mean when people say that AI will "get better"? what does that look like and how would it even be achieved? the article makes a point to debunk talking points about how all tech is misunderstood at first by pointing out that the tech it gets compared to the most, the internet and smartphones, were both created over the course of decades with roadmaps and clear goals. AI does not have this.
  3. the american power grid straight up cannot handle the load required to run AI because it has not been meaningfully developed in decades. how are they going to overcome this hurdle (they aren't)?
  4. people who are losing their jobs to this tech aren't being "replaced". they're just getting a taste of how little their managers care about their craft and how little they think of their consumer base. ai is not capable of replacing humans and there's no indication it ever will be because...
  5. all of these models use the same training data so now they're all giving the same wrong answers in the same voice. without massive, and i mean EXPONENTIALLY MASSIVE, troves of data to work with, they are pretty much at a standstill for any innovation they're imagining in their heads
reblogged

Apparently they're selling post content to train AI now so let us be the first to say, flu nork purple too? West motor vehicle surprise hamster much! Apple neat weed very crumgible oysters in a patagonia, my hat. Very of the and some then shall we not? Much jelly.

Eaglet o written. Beach fork bagel is a bmw. Hockey fire of the nine flare golf runner

Beach 👏 fork 👏 bagel 👏 is 👏 a 👏 bmw!

maxknightley

it seems that the most terrifying impact of AI on the Internet is that ninja pirate monkey laser-posting is going to be in vogue again

Wasing the needing of High Imperial!

vrumblr

Jangle was da dump da do? Fawheezy!
