#metapolitics – @mitigatedchaos on Tumblr

Oceans Yet to Burn

@mitigatedchaos / mitigatedchaos.tumblr.com

Voted "Blog Most Likely to be Singaporean Propaganda," 3 years running

Forget doomer essays ("here's this terrible problem that threatens to swallow us all"), now my kick is post-doomer essays (the same thing, except written decades ago and the doom just came true and now we all have to live with it). I've already mentioned Bowling Alone and Achieving Our Country, now I'm reading David Foster Wallace's essay on television and irony, and that's definitely a key entry in the post-doomer canon as well. It's brilliant, as you'd expect, in its analysis of how irony and cynicism not only anesthetize us, but do so in a way that makes the prospect of rebellion against their grip almost laughable, but even DFW could not foresee that television would be eclipsed by a technology exponentially more powerful at making detached irony the default affect of all human communication, with all the consequences that entails.

Couple of notes for readers.

1 - I won't get into it in too much detail, but radiating contempt is sort of the default culture war signal. It shows that someone is on, and loyal to, Team A, and that they consider themselves superior to Team B. It's much more efficient at this than making detailed arguments for Team A (which could have been made out of sincere desire to seek truth and thus don't signal loyalty as strongly), and it's much easier and therefore can be done by much less intelligent and informed people.

2 - I don't generally recommend the use of provocation as a discursive tactic in most contexts. It can make people feel like they're up against a wall, and reduce their willingness to consider alternative ideas. One of the main reasons aggressive social justice uses provocation, intentionally or not, is to reduce the dimensionality of opponents' responses (by making them panic, or making them angry) and thereby limit their maneuverability. In this context, it's a method for polarization.

However, there is a valid use for provocation.

One of the reasons that the default culture war behavior is contemptuous detached irony is that it doesn't actually specify a positive position - it only specifies a negation of a different position. A positive position has trade-offs. A negation is the set "every other position," so the trade-offs are not well-defined.

However, in the face of detached irony, provocation can act as a discourse grenade to flush the guy out from behind the cover of contemptuous detached irony and get him to give you his real opinion. Once you have his real opinion, you can have an actual discussion on the relative merits of different approaches. Once this discussion has started, there's generally no more need for provocation unless the guy starts putting up a contempt wall again.

I've practiced this somewhat on Twitter - busting through the contempt wall, letting all of the stupid insults slide off and ignoring them, and dragging some guy into an actual discussion.

It's difficult to assess how well it works, as people generally don't change their opinions instantly unless they're quite undecided, but before they change their opinion, they have to think first.

That's still a numbers game, of course.

reblogged
apas-95

the height of popular US anti-war opposition has never really risen that far above 'jesus christ, they dropped a missile on that children's hospital... that thing must have been so expensive, who's gonna reimburse us taxpayers for wasting our money like that?'

discourse on imperialism within the imperial core simply will not develop beyond 'man, the government should be giving more of that stolen wealth to me, not spending it on maintaining the empire that extracts it' unless class analysis is in play, because there can be no meaningful solidarity between core and periphery proletarians without that basis, when the national interests of the two are strictly opposed

In my opinion, it's actually a result of the discursive flow routing around arguments that involve deeper, more foundational, moral ideological assumptions and commitments, some of which you may share.

If you say, "Bombing Afghanistan is very expensive," this is so obvious that no one will dispute you.

If you say, "Bombing Afghanistan is ineffective, because Afghanistan cannot be developed," you must explain why Afghanistan cannot be developed.

Yes, bombing Afghanistan kills people. Innocent people, even. However, the argument in favor of bombing Afghanistan is that bombing Afghanistan will turn the country into a developed liberal democracy with substantial human rights, resulting in a significant net reduction in deaths and human suffering over the long term.

The left case that bombing Afghanistan is about resource extraction doesn't ring true to me. If anything, it's about domestic political resource extraction in the "imperial core," rewarding aligned domestic factions and actors from the empire's general revenue, rather than mailing sand back from Afghanistan.


Problem: The term "morally-intensive" is easy to read as a contronym. It could mean either, "this would take more being good than we want to spend," or, "this would take more being evil than we want to spend."

Not sure what to do about this.

reblogged

i remember in the late 2000s, i was reading an interview transcript with Neal Stephenson where he goes on about what he was reading recently which was Walter Wink's Powers books and he talks about how Wink was doing a study of domination systems, which sounded interesting so I looked into it and Wink was doing that, but mostly he was doing biblical demonology for liberals. I knew, i ~discerned~ this was going to be a big thing in the coming decade and was interested in what would come of it because it sounded like a pretty novel line of attack. this is also around the time when a lot of chaos magick stuff - specifically "egregores" -- was starting to leak out.

anyway here we are, now on the other side of "demonology for liberals" and the answer has come in: nothing good comes of it. demonology is a very bad explanatory method, and there is a reason it tends to be restricted to reactionaries and enthusiasts.

Personal Notes on

Computers, Freedom & Privacy 2000

Toronto, 5-7 April 2000

Roger Clarke

The speaker at the Conference Dinner was Neal Stephenson, author of cyberpunk sci-fi classics 'Snow Crash' (1992), 'The Diamond Age' (1995) and 'Cryptonomicon' (1999).
Neal Stephenson is intense, in the way he writes, and also in the way he looks, and the way he talks. His novels cover enormous geographical and intellectual space, and do so at a gallop that leaves his readers breathless. Fortunately, he didn't try to write a novel in front of his audience, nor do a novel-reading. Instead, he took the opportunity to run an argument and support it with some anecdotal evidence.
[Neal has given me some reactions to these notes, and I need to revise it accordingly!]
He used as a reference-point a section of an Arthur Conan Doyle story. In 'Copper Beeches' [?], Holmes opines that the pressure of public opinion [in a village] can impose more constraints on socially undesirable behaviour than law enforcement measures [in a city]. Switching to recent times, he argued that, during the Seattle anti-WTO demonstrations, the reason why there was so little actual harm done to people (on both sides of the riot-shields) was that all parties knew that every action was being observed, recorded and beamed, in many cases by multiple people and through multiple channels.
One of his concerns was that we've trapped ourselves into a single and non-adaptive 'threat model'. A threat model summarises what it is that a human or human society is scared of and spends its time preparing defences against.
Stephenson sees us (or maybe just us privacy advocates) as focussing on the Big Brother image to the virtual exclusion of everything else. He argued that we need to appreciate the notion of 'a domination system'. This derives from work by an American "Christian pacifist liberal" author by the name of Walter Wink. It refers to the day-to-day mechanics of subjugation experienced by the weak people within a society. The poor are subject to multiple powers (such as hospitals and families), each of which constitutes a network or web that the person has difficulty escaping from. Domination systems are built around some form of idolatry (by which Wink means the worship, whether nominal or real, of some artefact). The example that Stephenson used was the myth of a corporation's mission statement, when the real god is the enhancement of shareholder value.
Stephenson advocated the adoption of 'domination systems' as a threat model in replacement of the Big Brother image. He compared them as follows:
Big Brother Threat Model | Domination Systems Threat Model
one threat               | many threats
all-encompassing         | has edges
personalised             | impersonal
abstract                 | concrete
rare                     | ubiquitous
fictional                | empirical
centralised              | networked
20th century             | 21st century
irredeemable             | redeemable
apocalyptic              | realistic
He provided an anecdote based on an employee's experience at the Hanford nuclear materials processing facility upriver from Portland, Oregon. The person managed to escape from the worst of the repression he was being subjected to, by finding the edge of the particular domination system, and playing off another power (the local police and justice system) against the U.S. Department of Energy and its special-purpose police.
In applying this model, Stephenson used a metaphor drawn from military strategy. You're in bad trouble if you're surrounded, and the smaller the island that you're trapped inside, the worse trouble you're in. So you need space and you need friends (or at least allies). You need to make sure you're not just in an enclave, but preferably in a city-sized zone, or some other larger area (physical or virtual). Switching the metaphor to the game of Go, this creates the possibility of at least finding the edge of the threatening domination system, and even of threatening to counter-attack and surround it.
Stephenson then picked up the theme that David Brin pursued in 'The Transparent Society'. Brin makes the (I believe, naive) assumption that ubiquitous video surveillance can somehow, magically, be applied equally by the non-powerful as well as the powerful. Stephenson's approach is similar, to the extent that he also argues for observation to be undertaken by everyone, such that countervailing power can be brought to bear by 'the good guys' against 'the bad guys'. His argument is not, however, for streams of video data to be monitored in real time. He suggests that the data be split, secured and stored, and only extracted and analysed retrospectively when justification is shown.
[As an aside, I was very surprised to discover that, although Stephenson was aware of John Brunner's 'The Shockwave Rider', he's never read it. The reason this is so surprising is that, from a literary critic's perspective, his style and even some of his settings have considerable similarities to Brunner's. That's a very positive comment, given that my family rates that as the quintessential book of the '70s, just as 'Neuromancer' was of the '80s, and 'Snow Crash' and 'Cryptonomicon' were of the '90s.]

Damn, man. That's just split right down the middle, isn't it?

Flip a coin. 50% chance this increase in model complexity is someone on a path to a higher level of understanding, 50% chance that (in a more contemporary context) after 2014 they become a bluesky lunatic.

There's a lot that I could say, but (IMO) the key failure occurs here:

Domination systems are built around some form of idolatry (by which Wink means the worship, whether nominal or real, of some artefact). The example that Stephenson used was the myth of a corporation's mission statement, when the real god is the enhancement of shareholder value.

I'll put my (somewhat) short opinion below the readmore, if anyone wants to take a guess before they click it. Fruity, I'm sure, will have his own opinion.

reblogged

It's officially Joever.

We'll see what happens after this.

Well, I say "officially," but there's a lot of speculation because it's just a post on the site formerly known as Twitter, on an account that is obviously run by a staffer. That was on July 21st.

On July 20th, the same account posted (bolding mine):

It’s a miracle, folks. Donald told the truth for once. It’s the most important election of our lifetimes. And I will win it.

He was in it to win it. There are claims that staff were still out arguing that he would stay in even on the day of. (Thanks to the Internet and cell phones, fortunately, no one was still making a pro-Biden speech in the evening after the midday announcement.)

So, step back. After the debate, whenever Biden said he would stay in, how useful would those statements be in predicting whether he would stay in? Would they be { strong-for, weak-for, weak-against, or strong-against } signals?

Right.

Now, suppose that a statement coming in is being evaluated based on its correlation to reality, with a range from 0 (totally random with respect to reality) to 1.0 (perfectly matches reality).

Could a sender lose the ability to establish that a statement is above some number on that scale (such as a 0.5)?

What happens to the number of bits (or words) required to transmit the same amount of information in this scenario?
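As a toy illustration of the bit-count question (my own hedged sketch, not from the post: it maps the 0-1.0 correlation scale onto a binary symmetric channel, which is an assumption), Shannon's capacity formula gives the cost of getting one reliable bit across:

```python
import math

def binary_entropy(p):
    """Shannon entropy of a biased coin, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bits_per_reliable_bit(correlation):
    """Approximate cost, in transmitted bits, of conveying one bit of
    real information, when each statement matches reality with the
    given correlation (0.0 = coin flip, 1.0 = perfectly reliable).
    The linear mapping from correlation to error rate is an assumption."""
    error_rate = (1 - correlation) / 2
    capacity = 1 - binary_entropy(error_rate)  # bits of info per statement
    if capacity == 0:
        return math.inf  # a fully unreliable sender transmits nothing
    return 1 / capacity

for c in (1.0, 0.9, 0.5, 0.1):
    print(f"correlation {c}: ~{bits_per_reliable_bit(c):.1f} bits per reliable bit")
```

As the sender's statements decorrelate from reality, the words needed to convey the same information blow up, which matches the intuition that you stop listening to the statements themselves and start relying on other channels.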

Suppose a candidate uses a lot of bombastic rhetoric, but is relatively consistent directionally (e.g. spends his time calling for "10,000 miles of road construction," but actually just consistently supports 100 miles of road construction). Is there an algorithm that can take his statements and reliably convert them into statements that are higher on the 0-1.0 scale? Is anything lost?
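The last question can be sketched as a toy calibration scheme (my own illustration; the numbers and function names are made up): if the inflation is a roughly stable multiplicative factor, past pairs of claimed vs. delivered figures let you rescale new claims.

```python
def fit_inflation_factor(history):
    """Estimate the speaker's average delivered/claimed ratio
    from past (claimed, delivered) pairs."""
    ratios = [delivered / claimed for claimed, delivered in history]
    return sum(ratios) / len(ratios)

def calibrate(claim, factor):
    """Rescale a bombastic claim into a calibrated estimate."""
    return claim * factor

# Hypothetical track record: calls for 10,000 miles, delivers 100.
history = [(10_000, 100), (5_000, 50), (20_000, 200)]
factor = fit_inflation_factor(history)
print(calibrate(8_000, factor))  # a new claim of 8,000 miles, rescaled
```

Something is lost: a single scale factor only preserves the directional component, so any information in the variance of the claims (which projects, when, at what quality) is discarded, and the scheme breaks entirely if the speaker's inflation factor is itself inconsistent.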


If you censor something, you need to account for the fact that you literally don't know things about it in your plans. If you care about being right, anyway.

So like, if you're an oil rig operator, and you punish anyone who reports a safety issue, then you literally don't know what the safety condition of the oil rig is.

This sounds simple, sure. Good luck getting political people to understand it.

reblogged

but there are so many people now. there are so many people that you can't evaluate them. not as people. imagine if you tried. how would you evaluate your tumblr followers? maybe you check every blog that follows you, scroll through it a bit... but this is so low-resolution. how much time do you spend per blog? you can't read the whole thing, or even a quarter of it. maybe you click in, you see that they have Skub in bio, and you decide not to follow back. a snap judgment. a practical amount of time. it won't be overwhelming. but... that's a Rule, isn't it? if enough people do it, if it seems like enough people do it, it's a Rule. if you have Skub in bio, you're Uncool. you're a type of person who will face difficulties. if you want people to follow you back, it is instrumentally useful to have Skub in bio. this doesn't matter, bc tumblr clout doesn't get you anything, but most things like it do get you something. twitter clout directly translates to career advancement, so on twitter they really have to care about Rules. it doesn't matter if the Rule is good or bad; good and bad aren't things Rules can be. they're just Rules. the Uncool are those who fail to execute the task they have been assigned.

Truly, a classic dilemma.

reblogged
shieldfoss

Marx could have been so right if he wasn't so wrong.

A while back, a guy posted "we should all grow our own food and make our own stuff so we don't have to give money to corporations," and many self-identified Marxists on this website were so excited to finally find someone they could explain capital theory to.

The typical self-identified Marxist on Tumblr has a poor grasp of markets, so they rarely get the opportunity.

What's interesting about the 'classical' Marxists is the implications for the long-term distribution of ideologies.

In a static economy, in which the monthly consumption of steel bolts remains constant, and there are no innovations in the field of steel bolts, profit from ownership of the steel bolt factory is more like a rent, and less like payment for innovation.

In this scenario, after weighting for ability, the labor theory of value is a better approximation than it would be under ordinary circumstances.

There is still work for the management of the steel bolt plant to do, such as physically maintaining the building and equipment, hiring new workers as older workers retire and so on, but it isn't very innovative and can be done according to standard procedures.

In a dynamic economy, in which steel bolt consumption varies significantly from month to month, and new steel bolts are developed which are better or more popular, there is a great deal more work to be done in deciding the number and scale of steel bolt plants, the amount of money dedicated to research and development, research and product development priorities, organizing new production lines, and so on.

This is the work of ownership and management. Particularly, the role of management involves taking the complex context of the market, and reducing it until it's simplified enough to be turned into a job. (It isn't that managers are never wrong, or that they can't be abusive, it's just that management is a legitimate job.)

A significant share of ownership in firms now is through retirement portfolios held by ordinary workers, essentially deferred consumption spending through owning shares of productive assets or providing financial resources that enable production, so there is also less of a clear divide between "owners" and "workers" at the economy level. (It also isn't that owners are never wrong, or that they can't 'rig the game' in their favor. It's just that there is a legitimate role for ownership.)

A model in which ownership is pretty much solely about absorbing the surplus production of labor seems to be based on a static rather than dynamic model of the economy.

It seems that some slice of the population gets "stuck" at the static analysis level, and 'classical' Marxism happens to be available and has numerous draws (it's old, has supporting institutions and intellectuals, previously controlled entire countries, etc.), and various functional elements that increase the cost of departure.

Naively, we would expect it to have been replaced by now, but it's more likely that it's going to be a layer in play for a long time.

reblogged

i saw one post floating around that i can't find now that was like "alternative ways of knowing are conservative" and you might think it's not appealing to have alternative ways of knowing but it's certainly something you could get from liberalism. like it seems like a perfectly fine genealogy is just saying "many leftwing and progressive positions take for granted a basically liberal framework, modest pluralism and all" and you're there

sabakos
#'alternative ways of knowing' in at least a certain sense are plainly real and the inability to see this represents #a common poverty of engagement with the human condition among bedroom dwellers

ok max i assume you have access to some information on "alternative ways of knowing" that i dont, because usually i only see that term being invoked by people who are trying to push fascist pseudohistory or quack medicine or religious indoctrination as truth.

what are these "alternative ways of knowing" that aren't simply worth immediately laughing out of the room as an obvious grift?

Basically any informal process for producing knowledge rather than formal institutional knowledge production done through scientific processes is an "alternative way of knowing." It's pretty much impossible to make decisions without relying on incomplete information. Even when the institutional knowledge is of good quality (it isn't always), its scope is often limited.

The problem is in personal moral development. It's something that's difficult to formalize, so IMO it's a bit speculative.

Let's take a view where...

  • Stage 3 refers to a morality which is defined by social context.
  • Stage 4 refers to morality developed or applied through formal logical systems.
  • Stage 5 refers to post-rational acknowledgement of the limits of these formal systems. I think you have a high enough mathematical aptitude that when I say "they have a limited percentage binding" and "the relationships only hold within certain ranges, like most models in comparison to the data they're based on," you can visualize it.

To really get the most out of "alternative ways of knowing" (informally-produced knowledge), one needs to properly understand the limits of formal systems. To understand the limits of formal systems, you first need to master them.

Human beings are animals, but we are also creatures of reason. One way to think about this is that the set of possible logical constructs is essentially infinite. Without desires, there's no motivation to pick any one path differently from any other. Pure logic is inert. Animal desire provides the motive power and guidance.

The world is not necessarily how one wants or expects it to be, so reliable knowledge production depends on a level of overcoming personal bias, for lack of a better word.

For the S3s, the formal rules and procedures are part of the developmental process. They can't overcome researcher degrees of freedom, as we've seen in the replication crisis.

For the S5s, the formal rules and procedures are partly accounting, and partly keeping themselves honest.

Lots of people have probably seen me criticize "alternative ways of knowing" before. I do think it's daft to take "this tribe has this oral history of eating Leaf X" and teach it as having the same weight as "here is our rigorous study on the effects of eating Leaf X".

The appropriate way of handling the limits of formal systems, including institutional systems of knowledge production, is not platforming informal knowledge production in institutions, but freedom.

The limits of the knowledge production of formal institutions set limits on the appropriate amount of control they can exercise, due to the potential mismatch between their theories and the environment, and the low dimensionality of those theories which may make them inappropriate to particular environments even if they are sound in more common circumstances.

( @max1461 might agree with some of this. )


Discourse Calibration

The recent discourse circulating on Twitter and TikTok? "Women, would you rather be trapped alone in the woods with a man, or a bear?" One man asks his phone to read him a number of stats. "If you don't get this argument, I don't even think you're human," he says, implying that if you don't think men should be viewed as predators, "you are the problem."

Of course, if we were approaching this question from the perspective of truthfully assessing whether a random 150-pound man is more dangerous than a random 1,000-pound grizzly bear, the correct approach would not be to assess the absolute number of bear attacks, but the number of bear attacks per encounter.

Women in the United States usually encounter men almost every day. They generally do not encounter bears. I would guess that a majority of women in the United States have not encountered a bear outside of a zoo. (My personal experience is that women who live in areas with lots of bears do actually complain about bears.)
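The per-encounter arithmetic is just a rate, but it's the step the viral framing skips. A minimal sketch with made-up counts (neither figure is a real statistic):

```python
def attacks_per_encounter(attacks, encounters):
    """Risk per encounter, rather than the absolute attack count."""
    return attacks / encounters

# Purely illustrative numbers: men are encountered constantly,
# bears almost never, so absolute counts mislead.
man_rate = attacks_per_encounter(attacks=1_000, encounters=1_000_000_000)
bear_rate = attacks_per_encounter(attacks=5, encounters=10_000)

print(bear_rate > man_rate)  # the rarer encounter can carry the higher per-encounter risk
```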

But of course, the actual attack-to-encounter ratio has nothing to do with it. Words have both a social component and a content component. The people arguing that men are "more dangerous than bears" are using words in a way that heavily weights the social component and very lightly weights the content component.

They're just talking about their feelings (and trying to get leverage). It would not occur to them to divide two numbers about this.

There will always be discourse topics this stupid. Even if you come to power, you will preside over a society in which people are having very dumb arguments, and some of those arguments go viral.

I should note that while the people emphasizing social component are maneuvering in the social domain, this does not necessarily mean that they "have good social skills." People read "you are more dangerous than a bear" as an attack because it's pretty obviously a ploy to get leverage.

A mature person who wants to create a harmonious social environment instead of drama would find a different way to phrase, "I am fearful about men," than, "men are all deadly predators more dangerous than bears."


General Post for Monday, April 15, 2024

(5,700 words, ~28 mins)

💾 "Don't underestimate computers."

6 - Social Media Notes: Recommendation: To limit distraction, limit notifications in order to make social media into its own specific context, rather than letting it leak into other contexts.

7 - US War Notes: Since at least the year 2000, despite its technical competence, the United States has been bad at managing the political dimension of its wars. Developments since then suggest it may get worse.

8 - Interpreting Statement A: Why "industrialization enables women's rights" could be viewed as right-wing.

9 - Computing Capital Notes 1: The basic nature of computers as capital. (It's about dimensionality in production.)

10 - Computing Capital Notes 2: How should computing be distributed? From a technical perspective, it's an open question.

11 - Computing Politics Notes: Computing has its own politics, and how computing should be distributed is one of its central questions.

12 - Desktop Internet Notes: The old Internet was implicitly gatekept by the price and complexity of personal computers. With the emergence of smartphones, personal computers are becoming less common again.

-☆☆☆-

6: Social Media Notes

Social media tends to drive people to distraction. It's obvious how negative interactions like arguments can be distracting. Someone could pop up and argue, "owning cats is bourgeois decadence," and it's very tempting to just correct them. With smartphone notifications, such an argument could come up at any time, in any context.

reblogged

boeing if it did PR on tumblr: yes you're interested in planes but are you normal about ones that malfunction?

thanks, I hate it

Thing I've been wondering - does it suggest basic issues in the worldview of an ideology if its rhetoric can be used to make an argument completely against the actual viewpoint pushed by the ideology while still sounding like something a follower would say?

Woke left stuff seems more vulnerable to capture this way than, say, libertarianism or rightism, but they're not immune either.

It's a multi-pronged thing.

Every coalition is composed of people with different interests, and so the broader the coalition, the larger the amount of contradiction in the interests of the coalition members.

Thus a lot of political slogans are vague, or have multiple contradicting definitions. What is the meaning of "Build Back Better"? How specific is "Family Values"?

This is basically a tradeoff. An ideology can allow coalition members to work towards the coalition's interests without coordinating directly. This makes a lot of practical sense, as it allows looser organizational coupling rather than a rigid, top-down, military-style formation.

A political coalition can be viewed as a kind of formation. A tight ideology which puts more constraints on behavior makes for greater force coherence, which is useful if you're trying to accomplish some concrete goal. A loose ideology where instructions have multiple interpretations makes for a more incoherent force, but increases the potential size of the coalition by allowing for a greater amount of contradiction before it becomes too obvious that a contradiction has occurred.

Different people apply a different search depth, so some people are more likely to identify contradictions, or more likely to care about contradictions.

You've probably noticed that while it's easy to rephrase proposals in the language of the "woke left," they're surprisingly resilient to, say, being "tricked" into acting like 2008 liberals. They're quite focused on ingroup/outgroup distinctions. In some sense it's more social and less ideological.

reblogged
memecucker

You know how it’s pretty easy to condemn Hitler and also condemn the lynching of German immigrant Robert Prager during WW1

You're trying to make a point here by analogy, but I actually think this isn't easy for a lot of people. These people find it surprisingly hard to condemn both, and when they see someone else condemn one, they assume this means that someone must approve of the other.

I think that's part of what was revealed by 2014-2022.

A lot of people seem to have a kind of "acceptable target list." With enough social pressure and control over social context, you can change what's on the list, but no matter how many times you tell them to stop having the list or to limit it to a small group like convicted criminals, it just doesn't register.

The naive approach is hate speech laws. But what happens then is 1) political operatives collude (often indirectly) to have their particular hate speech not count as hate speech while still getting others for hate speech, and 2) politicians use the laws to cover up the bad effects of their own policies.

So that leaves, like, personal development in the aggregate. Basically, identifying and training the people who are capable of understanding the idea of a general principle in the abstract, and then positioning them throughout society because they can store more than two bits per ethnicity.

It's not something that can be generalized to the whole population, because a significant chunk will just repeat back the words without internalizing the logic, and will move on to the next fad or trend when it comes along, too.

Not gonna get into the details about it, but memecucker's post actually is holding up their end of the bargain, here.

reblogged

Ragnarök Proofing

If someone wanted to make like Maj. Edwin Keeler and make a modern-day Helm Memory Core — a complete electronic database of texts and blueprints laying out in full detail every process needed to recreate most modern-day technologies from first principles (and remember the case of Roman concrete) — roughly how big would the resulting data archive be?

blogofex

Specifying "electronic database" is already asking for trouble, since you cannot read the database unless you already have an industrial civilization up and running to supply power and computers. But ignore that for a moment.

Text-only Wikipedia is only 22GB. Your archive will not be based on Wikipedia: you will of course have some media in your archive, and of course you will need lots more detail than Wikipedia provides for many things, while other topics you will ignore entirely.

Nonetheless this seems like a good order of magnitude estimate.

On the first point, in the original Battletech case, the Helm Core was part of a hidden underground base/vault, the Nagayan Mountain Castle Brian, built with its own Star League Ragnarök-proofing. Which is to say, you find a way to store the necessary computers and power supply needed to read said data core along with it — which is its own engineering challenge.

That said, that was going to be my next point — if you instead store this data in a physical, non-electronic form, how huge would it end up being?

Secondly, I think the need for media — at least images — will expand things quite a bit past text-only Wikipedia. And as for "of course you will need lots more detail than Wikipedia provides," I think this vastly understates things — compare, for example, its description of a McCormick reaper with detailed plans and instructions on how to build one. Chemistry and pharmacology are even worse — try explaining in total detail the entire chain for manufacturing, say, ibuprofen — all the necessary precursor materials, reagents, catalysts, equipment; their own production and manufacturing processes, and so on — all the way back to the extraction and purification of the base materials from their natural sources. I'd expect all of these to add several orders of magnitude at minimum.
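Taking the 22 GB text-only Wikipedia figure as the base, the "several orders of magnitude" claim can be turned into a quick back-of-envelope table (the multipliers are assumptions, not measurements):

```python
BASE_GB = 22  # text-only Wikipedia, per the estimate above

# Assumed multipliers for each layer of added detail.
scenarios = {
    "text only, Wikipedia-depth": 1,
    "plus images and diagrams (x10)": 10,
    "plus full build instructions (x100)": 100,
    "plus complete supply chains (x1,000)": 1_000,
}

for label, mult in scenarios.items():
    gb = BASE_GB * mult
    size = f"{gb / 1_000:.1f} TB" if gb >= 1_000 else f"{gb} GB"
    print(f"{label}: {size}")
```

Even at three orders of magnitude, the archive lands in the tens of terabytes, i.e. still within the range of a small rack of modern drives; the hard part is durability and readability, not raw capacity.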

The closest non-electronic object I'm aware of is the Long Now Foundation's 2008 Rosetta Disk, a 3-inch metal disk etched with 13,500 pages of language documentation, embedded in a glass sphere. It requires a 500x optical microscope to read, although the text on the outside spirals down to microscopic size, strongly implying that the disk contains miniaturized text in order to convince people to magnify it. (One calculus textbook on my shelf is about 1,200 pages.)
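As a rough sanity check on scale, here's a back-of-envelope sketch of how many Rosetta-Disk-sized objects a Wikipedia-sized text archive would need. The bytes-per-page figure is an assumption for illustration, not a specification of the actual disk:

```python
# Rough estimate: how many Rosetta-Disk-scale objects would a
# 22 GB text archive need? All figures are order-of-magnitude
# assumptions, not measured specifications.

BYTES_PER_PAGE = 3_000       # ~500 words/page * ~6 bytes/word (assumed)
PAGES_PER_DISK = 13_500      # page count cited for the Rosetta Disk
ARCHIVE_BYTES = 22 * 10**9   # text-only Wikipedia, ~22 GB

disk_capacity = PAGES_PER_DISK * BYTES_PER_PAGE   # bytes of text per disk
disks_needed = ARCHIVE_BYTES / disk_capacity

print(f"~{disk_capacity / 10**6:.0f} MB of text per disk")
print(f"~{disks_needed:.0f} disks for a Wikipedia-sized text archive")
```

Under these assumptions, one disk holds on the order of 40 MB of text, so a text-only Wikipedia would already need several hundred disks, before adding images or the extra orders of magnitude of process detail discussed above.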

In terms of the operational duration of a computer, obviously mass market computers are not designed for this purpose, since they go obsolete in 10 years. Stationary mainframes designed to last longer can be serviced as needed by the manufacturer on an ongoing support contract. (A computer from 1951 was reportedly restored to operation in 2012.) The closest class of computers is probably those designed for long-term operation in outer space, where temperature conditions are extreme and repair is not feasible. The Voyager 1 space probe was launched in 1977; computer issues have apparently hampered transmissions since 2023, giving it an operational duration of about 46 years.

It would be necessary to contact an engineer to determine if an ultra-long-duration (500+ year) computer could be developed.

In terms of implementation, you'd want multiple sites in case one gets flooded, bombed, looted, or crushed in a landslide. (One advantage of optical storage is that the material is less valuable.) It would probably also be better to have different storage technologies, encoding different technology collections, forming a ladder so that the readers can build up the capacity to supply the computer with electricity. (Even in the fictional example, the device's survival depended on chance.) For instance, full documentation of all technology as of 1850 would be less extensive, but still valuable.

I can think of a few different ways you might be going with this.

One possibility is that you're using the idea of this computer to estimate the total number of bits that are core to the current technology stack, in order to forecast the maximum feasible complexity for the economy. (It's easier to just frame it that way to get answers to the question.)

No matter the number of books or information storage and search systems, how much someone can remember with their mind (and how fast they can learn it) sets a limit on the maximum amount of information they can manage - even if in theory it's unlimited given unlimited time, time is not unlimited.

It isn't just a matter of knowledge directly; it's related to the kind of intentions that an agent can form, which is something I haven't developed as much theory on.

It would be necessary to get more information to make even a half-decent preliminary estimate. Of particular interest would be the distribution of talent, and organizational linkage limits.

Once such a model exists, it wouldn't be limited to only the tech stack. It could be used to estimate the maximum complexity of society more generally. Laws or regulations, ideology, or even cultural practices could be analyzed, and potential trade-offs could be modeled. It might be possible to identify a sort of "operating system" for society.

To bring it back to practical considerations...

A real effort to build a technology archive would probably limit the scope to a number of core technologies, as embodied in specific products (engineered to be simpler to make) and their production chains. The assumption would be that either the guys on the other side are smart enough to invent a lot of the other technologies eventually, or that, if they can't and have to work from the provided instructions alone, they won't be able to escape the technological trap either.

reblogged

You know how people are prepending "site:reddit.com" in order to get actual answers for their search queries instead of SEO garbage?

As Reddit as Reddit may be, it's going to be a problem if Reddit dies.

Even if we get a lot of distributed forums again, each run as someone's personal hobby project like it's 2004, SEO garbage will make them hard to find. And while scammers prefer to use bots that fail the "potato test" (as some twitter users are calling it), more sophisticated spammers are likely to use LLMs, increasing moderation costs by making it trickier to sort legitimate users from fake ones.

I should also address the matter of the Redditors themselves.

A redditor is someone who will read and respond to a 250-word text reply. (Anything longer than 250 words is pushing it.) They are much closer to the average "intellectually-inclined" person than the average, say, LessWrong commenter is.

Redditors will actually take pieces of ideology or rules they were educated with, such as logical fallacy rules, and attempt to apply them.

Speaking of "redditors" in a very broad sense (since the pejorative use of "reddit" is itself broad, as in "Tumblr is the most reddit site" (it is not)), any mass movement is going to be staffed by redditors.

for the past few months I have been trying out "the uninitiated" for "redditors" a lot, and liking how well it fits.

in this post it works well because what initiates do is either talk in code about their private mystical journeys or go into private discords to groom one another, whereas the uninitiated of Reddit post publicly and plainly.

It's a framing that makes me sympathetic to the redditors of the world.

In that post, redditors are positioned as an intermediate group between an implicit more-intellectual group who read more than 250 words, and an implicit less-intellectual group who don't read even 250 words.

There is an idea that I have floated before, which is that high-level ideological concepts, if they succeed, must ultimately cash out as some material arrangement of living. e.g. if a Socialist revolution succeeds and replaces all businesses with worker cooperatives, the workers will still have to wake up and go to the factories tomorrow. This puts a limit on the expected gains from ideological struggle, which is intended to put limits on the acceptable moral price.

The post you're talking about is similar.

The development of ideology requires a certain level of mental independence (to vary from local ideology) and mental talent. When an ideology is first developed, its number of adherents or supporters is small, and so the group can be highly selected - it might be people who are more talented or more dedicated, or with a different psychological profile than usual, and who are thus willing to read through or act on a very finely-grained/highly-detailed version of that ideology.

So maybe the initial pool might be people who have read through, and remember, about 200,000 words of ideology.

You're probably old enough to be aware of the increasing color depth of computers. If you have 8 bits for a pixel, you can only divide the color spectrum into 256 options - 256 shades of color. If you have 16 bits, you can divide it into 65,536 options. If you have 1 bit for a pixel, the only options are black or white.
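The arithmetic behind the analogy is just powers of two; as a minimal sketch:

```python
# Number of distinct shades representable per pixel:
# n bits can distinguish 2**n values.
for bits in (1, 8, 16):
    shades = 2 ** bits
    print(f"{bits:>2} bits per pixel -> {shades:,} shades")
```

Each added bit doubles the number of distinctions available, which is the sense in which a compressed ideology "describes big concepts in fewer shades."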

Now obviously, ideology does not control everything a person does. It interacts with psychology, the environment, and so on. Among other things, this puts a limit on how much ideology can accomplish.

(As a note, the view of human beings as "like computers that run memes," prevalent in some corners in 2008, is a "reddit (pejorative)" view.)

But while the founders of an ideology may have had a very sophisticated understanding, or better context, and so on, it's not possible to staff a government with only geniuses or highly motivated people - there are simply too many positions to fill.

So what happens if we compress 200,000 words of ideology into something short and simple enough that people will actually use it, like say, 10,000 words?

I used an LLM to help me systematically compress, chunk by chunk, a 32,000-word document into a 12,700-word document, over several days, carefully choosing each summary. If you're careful, you can reduce the information loss, but you cannot eliminate it. Big concepts are described in fewer shades. Many small details are lost entirely.

So, two parts.

A) Whatever version of the ideology gets implemented, it's not going to be the most nuanced version, and it won't be implemented by the people who pay the closest attention or who are the most ethical. This needs to be considered during the development phase.

This has been a long-running theme or criticism here on mitigatedchaos, pretty much since the blog was founded.

B) If you're developing an ideology with the intent that it's going to come into power, and it's actually beneficial, the vast majority of people who will benefit... will be ordinary people, who do not have an incredibly sophisticated understanding of ideology.

Hating the redditors would be like a baker hating his own customers for not being as good at baking.


Something something: the only thing worse for a demographic than being utilized by a political coalition (which involves a distributed search for psychological vulnerabilities that are cheaper to exploit than actually fulfilling the demographic's demands) is not being utilized by a political coalition, so that no one in power has an incentive to protect them.
