#capitalism – @goodgrammaritan on Tumblr

I am surely in the toils.

@goodgrammaritan / goodgrammaritan.tumblr.com

She/her tricenarian. Books, animals, music(als).
"You know who the real villain is?" I continue, strolling through the lobby and joining a line of other writers, directors, cinematographers, and actors as they filter inside to find their seats. "Unchecked capitalism and the desire for capitalist systems to monetize other people's trauma."

Bury Your Gays by Chuck Tingle


first semester studying marine biology: wow i'm really learning a lot about the ocean ^_^ i can't wait to get out in the field in a couple years and do my part to make the world a better place ^_^

third semester studying marine biology: the intersection of capitalism, imperialism, socioeconomic injustice, and environmental destruction has turned me into david lynch's cartoon of the dog that is so angry it cannot move. anti littering campaigns aren't enough i need to guillotine every petrochemical executive and stick their heads on pikes on the white house lawn

reblogged

so like I said, I work in the tech industry, and it's been kind of fascinating watching whole new taboos develop at work around this genAI stuff. All we do is talk about genAI, everything is genAI now, "we have to win the AI race," blah blah blah, but nobody asks - you can't ask -

What's it for?

What's it for?

Why would anyone want this?

I sit in so many meetings and listen to genuinely very intelligent people talk until steam is rising off their skulls about genAI, and wonder how fast I'd get fired if I asked: do real people actually want this product, or are the only people excited about this technology the shareholders who want to see lines go up?

like you realize this is a bubble, right, guys? because nobody actually needs this? because it's not actually very good? normal people are excited by the novelty of it, and finance bro capitalists are wetting their shorts about it because they want to get rich quick off of the Next Big Thing In Tech, but the novelty will wear off and the bros will move on to something else and we'll just be left with billions and billions of dollars invested in technology that nobody wants.

and I don't say it, because I need my job. And I wonder how many other people sitting at the same table, in the same meeting, are also not saying it, because they need their jobs.

idk man it's just become a really weird environment.

shutframe

Like, I remember reading an article, and one of the questions the author posed (the same one repeated here) stuck with me, namely: what is it for? If this is a trillion dollar investment, what's the trillion dollar problem it's solving?

I finally think I have an answer to that. It's to eliminate the need to pay another person ever again. The trillion dollar problem it's solving is Payroll.

penrosesun

Except like... it's not solving that either.

A metaphor I've been using lately is that being a tech-interested person and watching the AI hype is like if you had followed the development of blenders for years. You watched them go from prototypes that were basically just a spinning open blade all the way to a design that has the potential to be a consumer Vitamix! It's really cool! Blenders have come such a long way, and they're ready for prime time!

And then you turn on the news and see otherwise rational, intelligent people saying "gosh, imagine, soon we'll replace all of our chefs, and our surgeons, and our high school teachers with blenders!" and your friends and family all nod and agree and say things like "wow, blenders can basically do everything now!" And when you ask people "are you HIGH?!" they show you the new blender they bought, and how well it makes a smoothie, and then they act like that's evidence for a statement like "blenders will replace 90% of the workforce" not being utterly nonsensical and deranged.

"Scientists just have to fix the hallucination problem!" they say. When you ask what the hallucination problem is, they say "I'll show you" and then they put their unfinished math homework into the blender and hit pulse. "You see, I wanted it to solve those math problems, but it just shredded the paper. It must have hallucinated a world where the answer to 2+2 was puree." When you point out that it did exactly what it was designed to do, because a blender cannot do math and it will never do math and expecting it to be able to do math just because it can make both smoothies and soup is ludicrous and bizarre, they tell you that they're sure that blenders will be able to do math any day now, just you wait, "I mean, look how far they've come! A year ago I would have said that blenders could never be strong enough to blend ice into sorbet, but now they can. So who are you to say they'll never do math???"

The thing is, there are plenty of things that LLMs and generative AI are good for. OCR is still a vital need, and AI is excellent at it. Facial recognition is an area where AI has a lot of potential. It can be used as a screening step for the analysis of all sorts of large datasets. Better autocomplete on your phone is a real thing that real people want. There are a ton of problems that these tools genuinely do solve!

...But none of them are trillion dollar problems, and that's an issue, because no one wanted to pour a trillion dollars into "improved OCR" and "somewhat better autocomplete".

So, we get snake oil to make up the weight. It'll solve payroll; it'll democratize visual art; it'll make it so that anyone can do anything – you name it, blenders will do it, eventually! Once we've fixed the problems with using blenders to do everything, including tasks which aren't blending things, then blenders will be worth the trillion dollars that people have already spent developing them! Look at the progress we've already made: we're working on attaching a calculator to the blender, and that could make it so that blenders can do math! Don't you dare suggest that calculators already exist and work just fine without being attached to blenders – we need our blenders to do math, because we promised that one day they'd be able to do everything! And they will! Blenders are the future and don't you dare suggest that maybe that future is just "we can blend things now"; if you so much as breathe those words, the bubble will pop.

ralfmaximus
we're working on attaching a calculator to the blender, and that could make it so that blenders can do math!

This right here makes me feral.

The fake human-mitigated cutout for tasks LLM AI simply cannot do, to further the illusion that we're creating a thinking, possibly sentient being:

  • LLM sucks at math, so we'll divert it to the calculator engine if it detects a math problem
  • LLM sucks for language translation, so we'll just divert it into our classically trained translation matrix when it detects a need for language translation
  • LLM sucks at telling time & creating schedules, so we'll divert it to our already excellent Calendar app that's been working fine since 2013

et fucking cetera

The only reason any of this happens is to prop up the illusion that AI is so close to working as advertised... but only if you give us more investment dollars.
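
To make that "cutout" concrete: here is a minimal sketch, in Python, of the routing pattern being described. Every rule and function in it is invented for illustration; it is not any vendor's actual code, just the general shape of "detect the tasks the model can't do and quietly hand them to ordinary software instead."

    import re

    def calculator(expr: str) -> str:
        # Stand-in for a plain calculator engine (input restricted to arithmetic).
        return str(eval(expr, {"__builtins__": {}}, {}))

    def translation_engine(text: str) -> str:
        # Stand-in for a classically trained translation system.
        return f"[translated] {text}"

    def calendar_app(command: str) -> str:
        # Stand-in for the calendar/timer app that has worked fine for years.
        return f"[timer set] {command}"

    def call_llm(prompt: str) -> str:
        # Stand-in for the language model itself.
        return f"[LLM reply to] {prompt}"

    def handle_request(prompt: str) -> str:
        text = prompt.strip().lower()
        if re.fullmatch(r"[\d\s+\-*/().]+", prompt.strip()):
            return calculator(prompt)          # math? divert to the calculator
        if text.startswith("translate"):
            return translation_engine(prompt)  # translation? divert to the translation system
        if "timer" in text or "schedule" in text:
            return calendar_app(prompt)        # time/scheduling? divert to the calendar app
        return call_llm(prompt)                # only what's left goes to the model

    print(handle_request("2 + 2"))                      # -> 4, via the calculator, not the LLM
    print(handle_request("set a timer for 5 minutes"))  # -> handled by the calendar stub

The point of the sketch is just that in these demos the "intelligence" often lives in the if-statements and the old, boring tools behind them, not in the model.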

Bonus: as recently as 2022 I used to be able to ask my android phone to "set a timer for 5 minutes" in plain colloquial english, even slurring my speech, and it would do exactly that.

It just worked. Life was good.

Now, in 2024, if I try that? It (no joke) executes a google search on how timers work in Android phones. Without setting a timer.

Instead, I now need to fire up the old Google Assistant which adds another step in the process, a whole separate icon/app to activate. If I do that, I can indeed verbally tell Assistant to set a timer and it works as before.

However, if I take Google's breathless advice to "Try Gemini" and replace Assistant with their new AI assistant, and then say "set a timer for 5 minutes"... my phone informs me that Gemini can't do that yet.

And since Assistant is no longer available because Gemini is now running things, I have totally lost the ability to command my phone via simple voice directives. Although you can revert back if you locate its buried settings.

PROGRESS

mckitterick

Late-Stage Capitalism is when corpos no longer have any idea how to increase profits except to further enshittify

End-Stage Capitalism is when corporations fail to produce profit because enshittification has limits and no one except top-level corpos has income to spend anymore

what comes next arises from those ashes - here's hoping we the people have the tools needed to build something new from the resources that remain

arctic-hands

Every time there's a food recall that spreads from one company to the next, even to the generic brands that are unique to the store selling them, it makes me realize that the illusion of choice under capitalism hyped up by conservatives is a bunch of bullshit.

Oh and uh, don't drink apple juice for a while. Arsenic. And it's more than just Aldi and Walmart

reblogged

AI hasn't improved in 18 months. It's likely that this is it. There is currently no evidence the capabilities of ChatGPT will ever improve. It's time for AI companies to put up or shut up.

I'm just reiterating this excellent post from Ed Zitron, but it hasn't left my head since I read it and I want to share it. I'm also taking some talking points from Ed's other posts. So basically:

We keep hearing AI is going to get better and better, but these promises seem to be coming from a mix of companies engaging in wild speculation and lying.

Chatgpt, the industry leading large language model, has not materially improved in 18 months. For something that claims to be getting exponentially better, it sure is the same shit.

Hallucinations appear to be an inherent aspect of the technology. Since it's based on statistics and ai doesn't know anything, it can never know what is true. How could I possibly trust it to get any real work done if I can't rely on its output? If I have to fact check everything it says, I might as well do the work myself.

For "real" ai that does know what is true to exist, it would require us to discover new concepts in psychology, math, and computing, which open ai is not working on, and seemingly no other ai companies are either.

Open ai has seemingly already slurped up all the data from the open web. Chatgpt 5 would take 5x more training data than chatgpt 4 to train. Where is this data coming from, exactly?

Since improvement appears to have ground to a halt, what if this is it? What if Chatgpt 4 is as good as LLMs can ever be? What use is it?

As Jim Covello, a leading semiconductor analyst at Goldman Sachs said (on page 10, and that's big finance so you know they only care about money): if tech companies are spending a trillion dollars to build up the infrastructure to support ai, what trillion dollar problem is it meant to solve? AI companies have a unique talent for burning venture capital and it's unclear if Open AI will be able to survive more than a few years unless everyone suddenly adopts it all at once. (Hey, didn't crypto and the metaverse also require spontaneous mass adoption to make sense?)

There is no problem that current ai is a solution to. Consumer tech is basically solved, normal people don't need more tech than a laptop and a smartphone. Big tech have run out of innovations, and they are desperately looking for the next thing to sell. It happened with the metaverse and it's happening again.

In summary:

Ai hasn't materially improved since the launch of Chatgpt4, which wasn't that big of an upgrade over 3.

There is currently no technological roadmap for ai to become better than it is. (As Jim Covello said in the Goldman Sachs report, the evolution of smartphones was openly planned years ahead of time.) The current problems are inherent to the current technology and nobody has indicated there is any way to solve them in the pipeline. We have likely reached the limits of what LLMs can do, and they still can't do much.

Don't believe AI companies when they say things are going to improve from where they are now before they provide evidence. It's time for the AI shills to put up, or shut up.

The trillion dollar problem AI is solving--or at this point, attempting to solve--is employment.

70% of a given company's cost is employment. Paying for employees. And companies fucking hate that.

They don't even care if you can replace all of their employees. If you can reduce costs by replacing 25% of their employees simply by making the other 75% more effective and faster, then that's what they're going to do.

For many companies, the power of AI is not in the huge large language models like chat gpt, but in smaller models that are trained only on a single company's exclusive content, so that employees can basically query the AI in order to make their work faster. Another AI use case is an AI like Microsoft's Copilot, which is integrated into the user's operating system, or Apple Intelligence, which is currently in development at Apple to do the same. Imagine saying to your computer, "Computer, open the marketing proposal I was working on yesterday," and having your computer immediately open the relevant document without you having to dig through the files.
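
For what it's worth, a request like that doesn't require anything mystical under the hood. Here's a rough, hypothetical sketch of the kind of thing it can reduce to: match the request against document metadata and prefer recently edited files. The file names, index, and scoring below are all made up for illustration and are not how Copilot or Apple Intelligence actually work.

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class Document:
        path: str
        title: str
        last_edited: date

    # Hypothetical index of a user's recent files.
    INDEX = [
        Document("~/docs/q3-marketing-proposal.docx", "Q3 marketing proposal",
                 date.today() - timedelta(days=1)),
        Document("~/docs/budget-2025.xlsx", "2025 budget",
                 date.today() - timedelta(days=6)),
    ]

    def find_document(request: str, index: list[Document]) -> Document | None:
        """Toy assistant: score each file by keyword overlap with the request,
        breaking ties in favor of recently edited documents."""
        words = set(request.lower().split())
        def score(doc: Document) -> tuple[int, date]:
            return (len(words & set(doc.title.lower().split())), doc.last_edited)
        best = max(index, key=score)
        return best if score(best)[0] > 0 else None

    doc = find_document("open the marketing proposal I was working on yesterday", INDEX)
    print(doc.path if doc else "nothing found")  # -> ~/docs/q3-marketing-proposal.docx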

Yes, that is absolutely tiny, but companies are obsessed with the cumulative effect of tiny gains. That is exactly the sort of thing that they want for their own employees, and that is exactly the kind of experience they want to sell to consumers.

Generative AI is just a single branch of AI. The technology is here, and as is often the case with technological development, while improvement of the technology is on hold due to hardware-related limitations, companies will exploit ever-broadening WAYS to use the technology. That is going to feel like improvement, even if it is just using the same technology in a different way.

I'm not exactly jazzed about AI, as someone whose profession could, at least in large part, be replaced by AI. But I think it's foolish to say that the technology is at its peak when the people who have the money and the power in this world are absolutely desperate to force it to grow and change simply to minimize their costs and maximize their profits.

Not to mention how the capacity of AI will grow as soon as there are more gains in quantum computing, but I don't know enough about that to have an informed opinion.

traycakes
Imagine saying to your computer, "Computer, open the marketing proposal I was working on yesterday," and having your computer immediately open the relevant document without you having to dig through the files.

So the use case for AI is eventually it might be able to do the same thing Alexa and other virtual assistant programs have done for a decade?

Because you can already say "Cortana, open the marketing proposal I was working on yesterday" and have your computer open the relevant document without having to dig through files. You might have to phrase it in a more specific way, but this is already a thing that you can do.

You might have forgotten that it was already a thing you can do, because most people don't use it. Saying commands out loud to tell your computer to find a file is not faster or easier for most people, and people feel it allows companies to violate their privacy. It's a useful tool for disabled people, but an "AI"-powered virtual assistant is not going to see widespread enough adoption to justify the incredible expense.

Any widespread corporate use for "AI" to replace workers will require it to be far more accurate than it currently is. That's simply impossible with LLMs; creating a more accurate and reliable chatbot is going to require going back to the drawing board and coming up with a new kind of machine learning. The miracle new tech required isn't cold fusion or quantum computing, it's artificial general intelligence AKA what most people mean when they say AI: computers capable of rational thought so they can separate out fact from fiction.

My point is not that there is one single use case being developed. My point is that LLMs are not the only type of AI, and for-profit and nonprofit organizations alike are still exploring use cases that both you and I haven't thought of yet.

Speaking specifically to the example I gave: You're right, I have not enabled that use case for myself. I meant it only as an example. I'm drawing this opinion from my experiences working as a consultant for multiple microchip companies both within and outside the United States, some of which are considered by the government to be critical infrastructure. And multiple people from those teams are trying out an advanced version of Microsoft's Copilot, and telling me that they are blown away by its capabilities and that it is going to be a game changer for them and their teams.

Obsession with profit margins means they will dump any amount of money into a slim possibility of reducing their employment expenses, regardless of what that reality might look like 10 years down the line.

I definitely agree that large language models are severely flawed, cannot be trusted, and have a long way to go before they can be considered reliable in any sense of the word. But looking at return on investment, many companies are absolutely willing to switch to a shittier alternative to an employee, even if that return means lower customer satisfaction. In this monopolistic world we live in, many customers, whether they are businesses or individual consumers, have no choice and aren't able to switch providers even if they wanted to. Companies are taking advantage of that to provide AI-driven "good enough" service. It's cheaper than providing human-driven excellent service.

This demand is going to turn into revenue for AI companies, and part of that revenue is going to turn into research and development. So even if they don't improve the large language model type of AI, it's entirely possible for other types of AI to be developed, alongside other use cases. For this reason, I think it's a bit preemptive to say that AI has hit its peak.

I'm not advocating that AI be implemented ubiquitously across all companies and replace designers, communicators, and so on and so forth, because it's somehow better (we all know it's not). I AM saying that there are non-LLM types of AI that DO indeed perform functions with a higher accuracy than human beings, and we have yet to actually explore all of those capacities and use cases.

And by the way: it's not a huge expense for companies. It's a huge expense for the companies that are providing the compute power, but because all of those companies already own all of the necessary infrastructure (Microsoft, Amazon, Nvidia), the operating expense is worth it. Even if the money that they make from companies using their AI services is less than the ongoing operating expenses, they are making a bet that with improvements in hardware technology, the revenue for AI will one day outstrip the expenses. And because no smaller company can afford to even imitate that infrastructure and compete on cost, they are going to double down.

I'm a software dev and my company has made it mandatory for everyone to use Copilot. We are one of the largest software development companies in the world. As much as I hate it, it is incredibly useful for software development. Coding languages are constantly changing, so it is impossible for anyone to know everything about one language, and most of us are working in multiple languages. But now we have a tool that can give us updated information and explain anything we have questions on. From the business side, this means we no longer need junior devs because our seniors can now do twice as much work with half the mistakes.

Another example from our company, they had a team create an AI that can answer HR questions. So now instead of an HR team of 50* people who are answering the questions of our hundreds of thousands of employees, we can have the AI answer the 90% of questions that are predictable, and have 5* people who deal with the more complex issues.

*I have no idea how large our HR team was/is. I just know my friend was excited to work on this until I pointed out that it would be used to eliminate jobs. "No, it's just a cool tool! It will make their jobs easier!" "Yeah, so then one person can do the work of 3 and you get to cut labor."
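
A rough sketch of the pattern that HR example describes: answer the predictable questions automatically and escalate anything the system isn't confident about to the remaining humans. The FAQ entries, the word-overlap scoring, and the threshold below are all invented; a real system would presumably use a proper language model or embeddings rather than this toy matcher.

    # Toy FAQ bot: entries, scoring, and threshold are made up for illustration.
    FAQ = {
        "how do i submit a vacation request": "Use the time-off form on the HR portal.",
        "when is payday": "Salaries are paid on the last business day of the month.",
        "how do i update my bank details": "Edit your payment info under Profile > Payroll.",
    }

    CONFIDENCE_THRESHOLD = 0.5  # below this, hand the question to a human

    def similarity(a: str, b: str) -> float:
        """Word-overlap score standing in for a real semantic model."""
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(len(wa | wb), 1)

    def answer(question: str) -> str:
        best = max(FAQ, key=lambda q: similarity(question, q))
        if similarity(question, best) >= CONFIDENCE_THRESHOLD:
            return FAQ[best]                   # the predictable ~90%
        return "Escalating to the HR team."    # the messy remainder goes to people

    print(answer("when is payday this month"))    # answered automatically
    print(answer("my manager is harassing me"))   # low confidence, goes to a human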

I think generative AI has improved a lot over the last 18 months for the people who use it, and it will continue to do so as the blockers to its growth are removed. When I asked it what topics it's asked about the most, its top 3 topics were related to software development and 4th was "general knowledge". And that's why it's good at code but shit at understanding how many fingers humans have.

mckitterick

the best use cases for "AI" right now are as a fast and powerful database / search engine / analysis combo, or as an experimental modeling system for things that would be harmful / expensive / much slower to test in the real world. those are objectively useful tools

what's useless is replacing creative human labor with generative LLMs when humans can already do and like the job

if they can actually help humans do their jobs more efficiently and powerfully (as apparently some generative programming tools can) without reducing the number of human jobs, generative tools could increase overall human productivity without harming us by eliminating jobs just to save corporations money

but the problem remains that we live under a form of capitalism that seeks to endlessly reduce costs and increase profits - both goals that go against the basic laws of physics (see the 2nd Law of Thermodynamics)

so they'll cut every possible job and replace them with botshit-generators until no humans remain, ending the potential for innovation or increased profits while simultaneously destroying their customer base who no longer have incomes to buy anything

those seeking to develop true AI - strong AI or Artificial General Intelligence - have a long way to go before they can create a true thinking machine. but they're trying, and if the long-held predictions hold true, we'll see something like that within 20 years. but almost certainly not before 15

so the current hype for "AI" is only coming from the worst kind of capitalists who don't care about people or the species or even other living beings. as soon as the funding dries up, they'll stop trying to shove it into places it's not wanted and go back to finding new ways to improve the tools people actually want and need

reblogged
ot3
i saw this image and thought you’d really like it. it reminded me of you

literally. this is it.

Firefox on mobile has a button to view a site in desktop mode - most sites don’t try to force the app on you when you do that.

mckitterick

also you can install extensions on Firefox to hide ads (those things that app developers force you to see more of if you don’t pay for ad-free) and block tracking (the thing that makes corporations a ton of money by selling private data about you)

also you can easily zoom pages and do other customization on a browser that apps don’t allow, and you don’t need to go into the apps manager and kill a dozen hungry apps running in the background every time you want to save memory and battery life


and don’t forget, using iPhone App Store apps to buy stuff online costs either you or the seller an additional 30%!

DO NOT USE THE APPLE PATREON APP to subscribe to an artist

reblogged
soradsauce

[ID. First image is a tweet from @DrHaroldNews saying “Leo Varadkar warns introducing a rent freeze could create a 'nightmare scenario', where landlords would be forced to sell their extra houses, which would then drive house prices down, allowing low-income people to get on the property ladder."

Second image is a screenshot of Chidi from The Good Place with subtitles edited to say “Okay. But that's better. You do get how that's better, right?” End ID.]

mckitterick

it's from a satire blog, but the fact we all just shake our heads thinking, "Sounds about right. Landlords are leeches," tells you something

reblogged

Me on Fourth of July like

Anyway, stop spreading white nationalist rhetoric and toxic nationalism thanks

Nobody said anything about race. Stop that.

It’s nationalist to state facts now?

How is this toxic?

Show me countries better than the USA.

economically

human freedom

quality of life

social progress 


income equality (america was among the worst)

healthcare


gender equality

what exactly makes america the “best country” here? america doesn’t excel in anything.

I was gonna say aren’t we like #1 in a bunch of bad stats? Like aren’t we the top for rape and abuse?

I remember this epic moment from The Newsroom

Americans just buy into the propaganda they are the greatest country when there is absolutely zero evidence to say so.

Why are you on the fence. Relative to what though? A bunch of socialist trope studies emerging from an ideologically driven socialist enclave in academia? The fact of the matter is that if you manage to make about 30 grand a year you’re in the top 1% of income earners globally.

I’d say it’s the best in its class, a multi state trans continental union.

The worst part about America is the wealth it produced created several generations of smarmy and entitled cunts that have no understanding for the conditions that created the wealth in the first place and they’re bound and determined that our betters claim all power over us.

mckitterick

Imagine the level of capitalist brainwashing needed to respond to this recitation of shame by saying:

But middle-income Americans earn more money than most people who live in poverty!

and thinking that Makes America Great.

Because our wage slaves who spend their $30k per-year income live in such luxury, right?

reblogged

the thing about capitalism is that at a certain point a product reaches its maximum audience and can't really be improved (at least not while remaining profitable), but capitalism requires a product to provide infinite growth, and at that point the only way to increase profits is to raise prices, cut corners, and in the case of services start adding advertisements. this is just how the system works.

charyou-tree
Rent-seeking is the act of growing one's existing wealth by manipulating the social or political environment without creating new wealth.[1] Rent-seeking activities have negative effects on the rest of society. They result in reduced economic efficiency through misallocation of resources, reduced wealth creation, lost government revenue, heightened income inequality,[2][3] risk of growing political bribery, and potential national decline.

The actual economic term for this parasitic behavior is "Rent Seeking", as in "charging you rent for things that didn't used to cost money just because we can."

mckitterick

the Second Law of Thermodynamics tells us that infinite growth is impossible in a closed system, and biology tells us that attempting infinite growth destroys the growing thing's environment (at large scale) or host (at micro scale, where a single host is the environment). and the same's true of all the sciences

so it's clear that this greedy, ever-devouring system called capitalism is, at its core, no safer to be around than a black hole or out-of-control virus

maybe we shouldn't base our entire civilization around cultivating that which will ultimately destroy us if we don't severely regulate its growth or miss even a single, tiny breach of its fragile containment system

especially when capitalism's most rabid worshippers seek to tear out the thin barriers to growth we've installed to prevent being consumed by this ravenous thing

reblogged

I noticed today that the deadname of a client was clearly visible in their client file because it was their legal name, and flagged it for IT. I specifically flagged it as "Hey, if someone sees this and calls our client the wrong name, we'll lose them as a client." IT emailed me back immediately, and it's now invisible except on their contract with us, which the majority of us don't have direct access to, as opposed to their client file.

The reason I framed it as a loss when I flagged it is that what matters to most companies is money. If you can flag a bigoted practice as something that will lose them customers or clients, or get them a lawsuit, it is significantly more likely to get taken care of quickly than if you try to appeal to their better nature. I could have flagged it as "Hey, this is going to make our client really upset if they hear it," which was my actual motivation for flagging it, but if I had, then it probably would have been taken care of in a few days or even weeks, not hours.

Always hit them with the profit argument for quick and decisive action.

reblogged

“I don’t know what my goals are, no. Thanks for asking.”

[ID: Comic of a person drawn simply, often with just two dots for eyes in a blank face.

1: A person sits cross legged with their hands in their lap. Caption above and below says “I don’t think I’m / doing enough”

2: Several small panels.

Webpages with a profile picture, first saying “Leon!” and then “Application denied, thank you for applying.” Caption says, “I should apply more.”

A person holding a mouse and a tablet stylus, looking blankly off into space. Caption says, “work on my portfolio”

The person, smiling, holding pieces of paper and business cards with their smiling face and name, “Leon.” Caption says, “network”

3: The person sitting with their knees drawn up, with small ripples around them as if they sit in a pool of water. Caption on each side says “I’m / Just”

The ripples get bigger and they fall back into the water. Then they’re falling through space, out of the panel, water droplets around them. “Really / tired”

4: They’re still falling and now reaching down. “Every time I tell myself / to do more. To work harder”

A blank-looking face, looking up at the falling person. “A voice in my head screams:”

5: Huge, scratchy text behind a blank-faced person looking down. “YOU DON’T WANT TO. YOU DON’T WANT THIS.”

6: A sky with a few clouds. In the emptiness in the middle, “So what do I want?”

7: The person sitting on a grassy hill with a sky and some clouds. They rest their arm on their knee and look up. “I want plenty of things”

8: A more zoomed-out view as they keep sitting and thinking. “Mostly, I’d like for all of this to feel worth it.”

End ID]
