
The continuous tense is disappearing from your life

LessWrong.com News - March 26, 2026 - 20:14

I previously wrote about the present perfect tense and how it can make you waste time pursuing things you don’t really want—when you want to have done them instead of wanting to do them. Now I notice the continuous tense characterizes another pitfall, kind of the opposite: sometimes you want to be doing work more than you want to do it. And that difference is actually the difference between humans and gods.

Addiction to the continuous tense

Sometimes you want to be doing work more than you want to do it. Notice the difference. To do work makes the work done; it eliminates. To be doing work is continuous; it prolongs. What this looks like practically is getting lost in labor that doesn’t pay off.

I notice this in myself sometimes, and I notice it in others, especially some older people, who grew up with less automation technology in their daily lives. Sorry to stereotype, but I think Baby Boomers love “working” more than they love work. Look down a long list, check if it contains some item, and when you find it, do something with it, and repeat. All lookups done with eyes, all work done with typing fingers or a pen. This characterizes untold person-hours of labor that people happily spend.

Why do they do this? They get into a flow state with it, and I would too—the action is perfectly flow-state-compatible—except for a voice in my head that lovingly screams, “This is not worth your time!” But if searching down long lists were not already solved by computer technology, it would be worth my time, and I would enjoy it.

The joy of the continuous tense

But wait, many things still do require (really require) repetitive work! And I let myself enjoy them. When I run into a task that is 1) worth doing, but 2) best done manually, and 3) not very puzzling or complicated, I grab my headphones and put on music and get to work. I could do it nonstop for at least an hour straight and be perfectly content. Some examples:

  • I love 3D modeling in Blender for hours, even though I could make a good-enough model in the first 30 minutes.
  • Every year my family gets together and makes tomato sauce from scratch, in bulk. So I sit and feed tomatoes into the grinder for an hour or so, or spend time preparing all the jars, or whatever else.
  • The other day my wife was cooking a big seafood dish, but the squids she ordered came whole. So I consulted a quick YouTube video and then got to work pulling apart about 40 little squids, taking out the cartilage and beak parts, etc., watching myself get better and faster with each one.

Is this kind of work “fun”? I wouldn’t say that, but it reliably brings on a flow state, and there’s always a unique appeal to that experience. It seems silly, but under certain conditions people love labor.

I say this as a very task-driven/execution-driven person myself. Really I think a lot about the near future, about what work I would like done. But still, I get it. I may live in the future, but I can get lost in the present, in the mindless flow of necessary but non-challenging work.

Is this a problem? No: even from a productivity perspective, if there’s just no other way to clean those squids, then it’s great that there’s something appealing about the work. And then we can zoom out: productivity isn’t everything—if you’re enjoying the experience, then good. Maybe it’s worth doing some things less efficiently, if the efficient way deprives you of that flow state opportunity. But that brings us to—

The loss of the continuous tense

The set of tasks that are 1) worth doing, but 2) best done manually, and 3) not very puzzling or complicated, is shrinking fast! AI assistants are claiming competence in more and more domains by the day. A software developer used to get all their planning and architecting hammered out and then sit down, put the headphones on, and write code. Not so anymore: take the headphones off, the code is already written, go immediately to the next step.

I’m not a full-time software engineer, but I know the joy of making an app or game, enjoying and soaking up that state of making. Most of the questions I’d face were small, incremental: “How do I store that array?” “How do I move this button over there?” Nowadays I find myself asking only the big questions: What do I want to bring into existence? What will it cost to run? I’m thrilled at how fast I can build out old projects that I never would’ve had time for, but still I know something was lost.

Professional engineers I talk to have the same feeling: they can do so much more now, but they don’t exactly enjoy the work anymore. All the non-decision-making parts of the job have been automated away, so there’s no room for a flow state.

That’s software engineering, but so many other fields are affected. I mentioned 3D modeling above, but AI can do that too, and soon it’ll do it better than I can. Manual labor activities are safe for now, but consumer robotic tech is coming, too.

Something was lost; I don’t think there’s any way around it. The pleasure of doing new things is not the same as the pleasure of doing necessary work in a flow state. Both are normal human pleasures, but the latter is going away.

Responses

The joy of the continuous tense is a natural human pleasure. Will you let it go and try to fully replace it with the joy of finished work? Or will you hang onto it, no matter how inconvenient, out of principle? I don’t see an easy answer, but let’s explore:

Let go and become gods

Did the God of Genesis enjoy the act of creation? We enjoy our own creative work, but it’s partly because we enjoy the process: it asks something of us, and we invest and get to see our labor pay off. But the God of Genesis just spoke. Are we ready for a future where we “creatives” just speak things into existence?

God doesn’t labor, and therefore doesn’t find pleasure or meaning in labor. The God of Genesis finds gratification when Adam and Eve enjoy what he made. Likewise, as a god-creator your gratification is when others (including your future self, perhaps) come in and say, “Ah this is great, this is exactly what I needed.” Is that enough? Can it be? Are you ready to play God?

This is a form of transhumanism. Before the brain-enhancing chips and cybernetic appendages and designer CRISPR babies, even now, AI technology offers us the option to transcend a little part of ourselves.

Be stubbornly human

Or should we resist automating away our own labor? Should we insist that it’s just as important to be creating X as it is to create X? Should we insist that any moment that feels purposeful is a “success,” even if it’s actually wasteful in economic terms?

There’s plenty of precedent for this. There are still photorealist painters, even after the invention of photography. There are still artisan potters, even though you can buy pottery from a factory in China. In fact, we humans have this quirk where the consumer resonates with the meaning that the creator experiences: that’s our concept of artisanship. The handmade thing is just more special, so some people are willing to pay more for it. And thus it becomes economical again! There is a market for realist paintings today; there is a market for handmade pottery, and so on. Will there be a market for handmade software? For artisanal therapy, or financial advice?

But I must pump the brakes: something is still lost. Painting doesn’t have the popularity or status it once had: the market is proportionally smaller; a chunk of it went to photography and didn’t come back. And even if you’re great at your craft and you land the coveted artisan position, will it be the same as before? Part of what allows me to enjoy a flow state is knowing the work is necessary.

Imagine what it felt like to be an 18th Century portrait painter, doing what you love and knowing that this skillful movement of your hand is the only way to ever produce a visual representation of the subject. That’s a rush. I don’t think “I’m an artisan” can fully measure up to it. Alas, something was lost.

If you liked this post, consider subscribing to my personal blog at patrickdfarley.com.




"What Exactly Would An International AI Treaty Say?" Is a Bad Objection

LessWrong.com News - March 26, 2026 - 19:29

I’ve heard a number of people say that it’s unclear what the technical contours of a global AI treaty would look like. That is true - but it’s not actually an obstacle to negotiating an international treaty. 

I’ll try to explain why this isn’t a good objection, but the short version is that when countries have clear goals which are largely shared, negotiations end up with strong treaties. So the important questions are not about the exact rules. The critical questions are whether there really is a joint global risk that requires action - and experts agree there is - and whether verification and enforcement are possible - and experts say they are. So the problem isn’t a technical one, it’s a question of whether we can get to an agreement. And despite facile “we can’t stop until they do” arguments, we can and should try to do better.

In order to explain why we do not need to figure out the details first, it's worth talking about other treaties.

The Pandemic Treaty (Task Failed Successfully)

I will start with the example I watched most closely, over the past five years. The Pandemic treaty was proposed in 2021, “when WHO member States agreed on the urgent need for a legally binding international instrument,” per the UN. It was supposed to fix all the problems we had during COVID. Unfortunately, this didn’t include preventing pandemics, and past that point, no-one agrees on what things should have been done, or what to do next time. So, if we can’t agree on anything, what does the treaty do? Mostly, generic pandemic-stopping stuff - “commitment to a ‘One Health’ approach to pandemic prevention, stronger national health systems, setting up a coordinating financial mechanism, and creating a globally coordinated supply chain and logistics network for health emergencies.” 

How much of that was agreed about in mid-2021 when it was proposed by the European Council President? Basically none of it. What are the actual commitments? Well, that’s complicated, but the short version is that there aren’t any. The treaty insists that countries get to stay in control of their national health systems, and no-one could tell them what to do, or how - which sounds a lot like the failure that allowed COVID-19 to spread. Lots of people had different visions for what the treaty should do, from global vaccine justice to enhancing global public health to providing funds for response to climate change issues to supply chain resilience to considering animal and plant health in pandemic planning, and it ended up as a mishmash of random things that people proposed. But the failure here was one of vision - it was unclear what the world would get out of a treaty that everyone agreed about, and that lack was never addressed. 

That’s obviously a problem, but not the central one we have with proposing a global treaty to ban unsafe ASI. For those asking for such a treaty, we agree about what needs to be stopped, namely, building unsafe ASI. The questions are all about how to make that happen. So we should look at another example, and I think the best parallel is nuclear weapons. 

The Nuclear Treaty

If you know anything about the history of nuclear weapons treaties, you read the section title and immediately asked: “which one?” And there are so many options - there was the Limited Test Ban Treaty in 1963, the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) in 1968, the Strategic Arms Limitation Treaty (SALT) in 1972, the Anti-Ballistic Missile (ABM) Treaty in 1972, the Convention on the Physical Protection of Nuclear Material (CPPNM) in 1980, and the New START Treaty in 2010. And that’s ignoring almost a half-dozen regional “Nuclear Weapon-Free Zone” treaties.

Why were there so many treaties? The goal - preventing use of nuclear weapons - was broadly agreed upon. But the exact way to accomplish that goal was tricky. And that led to lots of uncertainty, and the need for a number of different treaties addressing different parts of the problem, from testing to proliferation - but the goal was a north star. That meant that every time a concern arose, countries tried hard to figure out what was needed to accomplish the goal of not having a nuclear war, and which rules would move the world further from that outcome. Even outside of treaties, some nuclear powers have adopted no-first-use doctrines, and states have taken other measures to reduce the risk of accidental escalation. All of these address the messy dynamics of nuclear escalation between states that can’t risk being left behind.

What does this tell us about treaties about AI risks?

Lessons for Possible AI Treaties

There are a dozen suggested international treaties for AI, and most aren’t what I’m discussing. Many are the equivalent of the 1959 Antarctic Treaty, which was the first nuclear-relevant treaty, but which only said countries aren’t allowed to use Antarctica to test nuclear weapons or dispose of waste. In my view, the analogous proposals include any treaty that does anything about AI other than making sure future systems do not cause global catastrophes. (That doesn’t mean such treaties are a bad idea, just that they aren’t what I’m discussing here.)

Other proposals are actually trying to solve the problem directly - for example, the equivalent of the ill-fated Baruch Plan, proposed in 1946, which tried to put all nuclear weapons under international control. The Soviet counterproposal agreed about the goal - prevention of the production and use of nuclear weapons - but disagreed about how it should work, which meant there was no agreement to stop proliferation for several decades. Directly solving the problem by fiat, imposed globally, before negotiations between parties start, is very hard and not usually effective.

Luckily, for nuclear treaties it didn’t matter. The overwhelming consensus of the public was that we shouldn’t have a nuclear war, and despite claims that it was an inevitable race that could only end with disaster, nuclear weapons haven’t been used in 80 years and counting. The specifics of the treaties were critical, and navigating to success was in fact hard, without any final victory. As noted above, these are a mix of bilateral, multilateral, and global treaties. Of course, we’re still worried about nuclear proliferation and nuclear war. But the treaties have given the world enough stability to avoid disaster, so far.

Again, the world was lucky that building nuclear weapons went slowly, for two decades, and that the norm against use got established in the run-up to treaties. But no-one should argue that the lack of nuclear war means the treaties were unnecessary - if anything, the common argument is that the treaties are too weak and contingent.

Do we need an ASI treaty? Yes - and that is true even if you think the risk is minimal. Yes, there is debate about whether the risk will be realized, but there is clear consensus that it is a risk, and that we should have at least the ability to put rules in place. Should we have one grand treaty, or plan for solving one part at a time? It’s unclear. Many treaty areas have dozens of overlapping treaties; “the” Geneva convention is a series of treaties that added clarifications and rules over decades. But we do need to get started. People who think we’re a decade away from fully general AI should be terrified that it’s already late in the game to start discussing such a treaty. And people who are saying we might have ASI by the end of the decade are mostly already screaming about the need for some international rules.

Do Treaties Solve the Problem? (Do We Need Other Rules?)

Nuclear power is regulated, partially via an international body - that’s mostly not about preventing global thermonuclear war, but it's critically related due to the need to control nuclear materials. And at a national level, nuclear medicine is regulated, radiation exposure levels are regulated, and we have tons of other rules that don’t relate to preventing a Nuclear World War Three. That doesn’t mean those rules aren’t important.

For AI, we already need regulation on various uses and misuses. Some of these might be international - bans on autonomous weapons, bans on nonconsensual pornography, bans on use of AI to violate other international laws. Others should be national, or even local, like bans on deceptive use of AI, or use of AI to break other laws. That’s how regulation works.

But these are not the AI treaties that we need, they are just places where we need rules. And they should be pursued - just not at the cost of further delay on the global catastrophic risks we face.

AI NotKillEveryoneism Treaties

To have effective treaties that stop AI-driven global catastrophes, there are many questions that need to be answered. What exactly is needed to prevent the creation of misaligned ASI? Which chips should be tracked, who should be allowed to use them, and for what types of model development? What controls or safety measures must be in place? How should we measure dangerous capabilities? Where should the line be? Which countries need to agree first? How can we manage multilateral coordination when only a few countries are leading in the race? Will we ban existing frontier systems at the time the treaty comes into force? Or will groups be able to build slightly more capable systems with additional safeguards? What should those safeguards be?

Fortunately, we don’t need final answers to all of these questions in order to start negotiating a treaty. We don’t even need final answers in order to sign a treaty - many treaties have mechanisms for routine updates of rules. But we need a clear vision - no-one builds ASI until we are sure it is safe, and when people try clever ways to get around whatever rules exist, they are told to stop, and told that the rules will be updated. Companies must be told that their race to ASI is over, that the risk is too high, and no-one gets to win. Countries are told that whatever geopolitical advantage they think they will gain from building ASI isn’t allowed, because winning a race without clear safety rules is overwhelmingly likely to kill us all. 

Will this be a single treaty, or require several over time? I don’t know. It also doesn’t matter. But it needs to start now, because international coordination is slow and takes time. We do not need to specify the outcome before starting, and uncertainty isn’t a reason to wait to build momentum and put pieces in place so we can discuss what is possible to restrict or ban, how it will be verified, and how and why to ensure participants will prefer compliance to defection. Lock-in on the details is a risk, but waiting until we need immediate action isn’t a way to make the eventual responses better - quite the opposite, since delay eliminates capacity to explore the details. Obviously, work on a framework treaty needed to start at least a few years ago when the risks became globally clear, and we can hope we aren’t too late given the clear imminent risks of superintelligence.

In short, saying that we can’t discuss a treaty yet because we don’t know what the rules need to be is an historically illiterate and poorly reasoned objection.





Socrates is Mortal

LessWrong.com News - March 26, 2026 - 18:34

There is a scene in Plato that contains, in miniature, the catastrophe of Athenian public life. Two men meet at a courthouse. One is there to prosecute his own father for the death of a slave. The other is there to be indicted for indecency. [1] The prosecutor, Euthyphro, is certain he understands what decency requires. The accused, Socrates, is not certain of anything, and says so. They talk.

Euthyphro’s confidence is striking. His own family thinks it is indecent for a son to prosecute his father; Euthyphro insists that true decency demands it, that he understands what the gods require better than his relatives do. Socrates, who is about to be tried for indecency toward the gods, asks Euthyphro to explain what decency actually is, since Euthyphro claims to know, and Socrates will need such knowledge for his own defense.

Euthyphro’s first answer is: decency is what I am doing right now, prosecuting wrongdoers regardless of kinship. Socrates points out that this is an example, not a definition. There are many decent acts; what makes them all decent?

Euthyphro tries again: decency is what the gods love. But the gods disagree among themselves, Socrates observes, so by this definition the same act could be both decent and indecent. Euthyphro refines: decency is what all the gods love. And here Socrates asks a question Euthyphro cannot answer: do the gods love decent things because they are decent, or are things decent because the gods love them? [2]

If decent things are decent because the gods love them, then decency is arbitrary, a matter of divine whim. Socrates is too polite to say so, but the implication is: if decency is defined by the arbitrary whim of our betters, who are you to prosecute your father?

If the gods love decent things because they are decent, then however we know this, we already know the standard for decency ourselves and can cut out the middleman. But then Euthyphro should be able to explain the standard. He can’t.

Euthyphro tries a few more times, suggesting that decency is a kind of service to the gods, a kind of trade with the gods. Each time Socrates gently follows the definition to its consequences, and each time it collapses. Eventually Euthyphro leaves, saying he is in a hurry. Socrates’ last words are a lament: you have abandoned me without the understanding I needed for my own defense.

This is usually read as a proto-academic dialogue about definitions. It is a scene from a civilization in crisis. A man is about to use the legal system to destroy his own father on the basis of a concept he cannot define, in a courthouse where another man is about to be destroyed by the same concept. And the man who cannot define it is not unusual. He is representative.

The indecency for which Socrates was being prosecuted seems to have consisted of asking just the sort of questions Socrates posed to Euthyphro.

Athens in the late fifth century had recently become something it had never been before: the capital of an empire. This changed what it meant to speak in public. When Athens was a small city making decisions about its own affairs, leadership among Athenians involved speaking to communicate your perspective on matters of shared concern. But now that the collective decisions of Athens mattered for a whole lot of other people, those other people were quite naturally going to spend a lot of time thinking about how to get Athenians to decide their way. At the same time, being part of the leadership structure in control of considerable tax revenues became more profitable for more people, and less economically sustainable to opt out of. Now ambitious Athenians started using their speech to seem electable by showing off the quality of their “communicate their perspectives on matters of shared concern” performance.

Sophists were the professionals of this new economy. They specialized in the performance of wisdom, partly to sell their know-how, but always claiming, with some ambiguity, that they were excellent on the same criteria as the great Athenian leaders of the previous generation. And the consequences were not limited to the realm of speech. People were being imprisoned, exiled, and killed, on the basis of deliberative processes that had become unmoored from any standard anyone could articulate.

What had happened was not simply that Athenian politics had become venal. Something subtler and more devastating had occurred. People had stopped being alive to each other. They were running scripts. The sophists taught people to run more sophisticated scripts. Public speech, which had once been the medium through which free men actually thought together about shared problems, had become a performance of thinking. The performance could be very impressive. It could sound like wisdom. But there was no one home behind it, except an intelligent but inarticulate terrified hairless ape with no friends.

And then there was Socrates. He described himself not as a sophist, a possessor of wisdom, but a philosopher, someone who likes wisdom, who has an affinity for it.

In the Apology, Plato has Socrates report that his friend Chaerephon asked the oracle at Delphi whether anyone was wiser, and was told no one was. But a different tradition, preserved in Origen’s Contra Celsum, claims to quote the oracle’s actual verse. It ranked three men:

Sophocles is wise
Euripides is wiser
But of all men, Socrates is wisest

Sophocles and Euripides were not scientific thinkers like Thales or Democritus, who investigated the underlying structure of physical reality. They were not mathematicians. They were not statesmen like Pericles, who managed Athens’s rise from preeminent city to imperial capital. Sophocles and Euripides were the men who could inhabit other minds, who could construct characters who, to all appearances, each had their own distinctive interiority. They imagined all these people well enough to put words in their mouths for declamation in a public theater. They could dramatize what it is like to be Medea deciding to kill her children or Antigone choosing to die. If someone at Delphi had met Socrates and reached for a comparison, they did not reach for a statesman or a priest. They reached for the people who were most alive to other people’s experience.

The oracle’s pronouncement likely came before Socrates was famous for questioning people. Chaerephon, an excitable and devoted friend, likely went to Delphi on his own initiative, to get divine confirmation of something he had already noticed. And what he had noticed was not a method. It was a quality. Speaking with Socrates, one felt the presence of a living intelligence, curious about one’s situation. One felt excitingly seen and at the same time uncomfortably exposed. In a city where public life had become a drama where the actors were principally concerned with their own appearance, this was so unusual that it shone brilliantly to anyone looking for intelligent life, like a beacon ablaze on a clear moonless night.

What came after was Socrates trying to figure out what the oracle could have meant. If I am the wisest, what does that say about everyone else? So, by his own account, he went to talk to the people who were supposed to be wise, the politicians and the poets and the craftsmen, and he found that the politicians and the poets could not give a coherent account of the knowledge they claimed to possess. The craftsmen could, within their crafts. But the knowledge that was being wielded with lethal force in the courts and the Assembly, the knowledge of justice and piety and how the city should be governed, that knowledge was nowhere. The people who claimed it were performing a script, and the script could not survive contact with someone who was trying to make sense of what they were saying. The performative, advantage-seeking political culture of Athens could only make sense of their discomfiture at Socrates’s active listening as Socrates winning debates. So, as illustrated in dialogues like the Gorgias, big-shot sophists would seek him out, eager to be seen confronting the most formidable debater in Athens. Meanwhile, in a society that would eventually produce Aristotle’s claim that slaves cannot reason, Socrates finds it the most natural thing to turn to a slave to help him work out a mathematical proof, in the dialogue Meno.

Xenophon, who knew Socrates as a person and not only as a character in philosophical dialogues, shows us what this same aliveness looked like when it met people who wanted help. During the civil war around the Thirty Tyrants, a man named Aristarchus had fourteen of his sisters, nieces, and cousins sheltering in his house as refugees. The land had been seized by enemies. There was no money, and he saw no way to borrow, because he had nothing productive to spend it on. He couldn’t feed fourteen people on nothing. Socrates noticed that the women already knew how to work wool. He told Aristarchus to borrow capital, buy materials, and put them to work. Now there was a reason to borrow, and they did. Xenophon says the suspicious glances turned to smiles, the household became productive and harmonious, and eventually Aristarchus came back to Socrates delighted, reporting that the only complaint was that he was now the sole member of the household eating the bread of idleness.

In another episode, a man is harassed by lawsuits because of his deep pockets, but has a poor friend who’s articulate and virtuous; Socrates advises him to pay his friend to start suing the people who are suing him, as a deterrent.

The cross-examination and the practical advice are not two different activities by two different Socrateses. They are both what it looks like when a living mind engages with the world: whether the world presents a man performing authority he cannot account for, or a household full of hungry refugees sitting next to a loom.

At his trial, Socrates gave his own account of what he had been doing. In the Apology, he makes his limited claim to wisdom. Craftsmen really are wise about some things, but he doesn’t think that kind of wisdom is relevant to his interests as a free Athenian trying to participate in deliberations about public matters. Others falsely claim and believe themselves to have scientific knowledge of ethical or political truths. Socrates can claim distinctive wisdom only insofar as he clearly knows himself not to know such things.

This is usually read as a philosophical thesis about the limits of human knowledge. It is a man on trial for his life, explaining to the jury that the people who condemned him are exercising lethal authority on the basis of knowledge they do not possess, which makes implementing any standard impossible; and that pointing out that the laws are incoherent cannot be a violation of the laws, because that sort of criticism is necessary if we are to have laws at all.

In the Theaetetus, set just before the Euthyphro, Socrates finally finds a young man in Athens he can respect for his intelligence and honesty. But the man is not a peer Socrates can consult for advice; he is a promising youth in need of guidance, and the conversation has to end: Socrates excuses himself to go to the courthouse to be indicted. It is my favorite of Plato’s dialogues.

Plato also responded to his beloved mentor’s death by founding the Academy, a great house in Athens where philosophical reasoning was taught methodically. We still have our Academics.

Agnes Callard, in her recent book Open Socrates, wants Socrates to be timeless. She strips out the historical situation, strips out the aliveness that preceded the method, and ends up defending a method that’s obviously inapplicable in many of the cases where she claims it applies. Aristarchus did not need his assumptions questioned at random. He needed someone who could ask probing questions about his actual problem, from a perspective that didn’t share his assumptions about what was and wasn’t possible.

Zvi Mowshowitz, in his review of Callard’s book (part 1, part 2), argues at considerable length that the decontextualized version is bad. He is right. Cached beliefs are usually fine. Destabilizing them is usually harmful. Most people do not want to spend their lives in Socratic questioning, and they are right.

But Zvi has written a long polemic in two installments on the winning side of an incredibly lame debate about whether we should anxiously doubt ourselves all the time, responding to Callard’s decontextualized Socrates, not the real one. The real one did not devise a method and then apply it. He had a quality, something the oracle reached for the language of the tragedians to describe. And what was memorialized as a “method” was what happened when that quality met a city where every other participant in public life had stopped being alive.

Socrates invokes timeless considerations like logical coherence and having reasons for your opinions, but timeless considerations are a very natural thing to try to appeal to when people are being squirmy and dramatic and hard to pin down, and fleeing to abstractions that resist empirical falsification.

Spinoza, in the Theologico-Political Treatise, similarly resituated the teachings of Jesus of Nazareth in their proper context. The political teachings of the Gospels to turn the other cheek, forgive debts, and render unto Caesar what is due to him, are instructions for people living under a hostile and extractive system of domination. Citizens of a free republic have entirely different duties. They have an affirmative obligation to hold each other accountable, to sue people who have wronged them, to participate in collective self-governance. The teachings are not wrong. They are addressed to a specific situation, and become wrong when mechanically transplanted into an inappropriate context.

The reason to recover the historical Socrates is not only accuracy about the distant past; it is that by seeing this relevant aspect of the past more clearly, we might see more clearly what we are up against now.

Socratic cross-examination requires an interlocutor who at least would feel ashamed not to put on a show of accountability. The people Socrates questioned were performing wisdom, but they were performing it because the culture still demanded that leaders seem accountable. They would sit for the examination, because refusing would be disgraceful, like breaking formation in a hoplite phalanx. Their scripts collapsed because the scripts were designed to look like real accountability, and real accountability is what Socrates brought.

There is a useful framework for understanding how public discourse degrades, which distinguishes between guilt, shame, and depravity. A guilty person has violated a norm and intends to repair the breach by owning up and making amends. An ashamed person intends to conceal the violation, which means deflecting investigation. A depraved person has generalized the intent to conceal into a coalitional strategy: I will cover for you if you cover for me, and together we will derail any investigation that threatens either of us.

The leaders Socrates questioned were, at worst, ashamed. They had taken on roles they couldn’t account for, and they wanted to hide that fact, but they still felt the force of the demand for accountability. When Socrates pressed them, they squirmed, they went in circles, they eventually fled. But they engaged. They felt they had to engage. The culture of Athens, even in its degraded state, still held that a man who refused to give an account of his claims was disgraced.

Depravity is a further stage, and Sartre described it precisely in his book Anti-Semite and Jew:

Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.

The depraved person does not perform accountability. He plays with the forms of accountability to exhaust and humiliate the person who still takes them seriously. He is not running a script that is trying to pass as a perspective, collapsing only under the kind of questioning we still call Socratic. He is amusing himself at the expense of the questioner. Cross-examination does not expose him, because he was never trying to seem consistent. He was trying to demonstrate that consistency is for suckers. The Socratic method will not help him.

The Socratic method, if we can rightly call it that, was forged by the pressures confronted by a living mind in a city of the ashamed, people who still cared enough about accountability to fake it. It has nothing to say to the depraved themselves, who have dispensed with the pretense, though in a transitional period it might expose them to the judgment of the naïve.

But the quality that preceded the method is something else.

What the oracle recognized in Socrates was not the ability to cross-examine. It was something closer to what it recognized in Euripides: the capacity to be present to what is happening, to see the person in front of you rather than the category they belong to, to respond to the situation rather than to your script about the situation. To be alive.

We do not need a new method. Methods are what you formalize after you understand the problem, and we are not there yet. What might still help us is the quality that precedes method: the willingness to see what is in front of us, to say the obvious thing that everyone embedded in the performance is too scripted to see, and to keep reaching out to others even when the response is usually not even embarrassment but indifference, not even a failed defense but a smirk.

The oracle didn’t say Socrates had the best method. It said he was the wisest man, in a society oriented against wisdom. The “method” was just how aliveness was memorialized by a city that still cared enough to be ashamed of being dead.

The question for us is what aliveness looks like in a city beyond shame.

  1. Usually translated “impiety,” but the Greek hosion and its negation anosion are broader. “Piety” to us generally means deference, which doesn’t make sense to attribute to the gods, but Euthyphro thinks it is normal to call the gods hosion, so we might try “holiness,” since we speak both of holy gods and holy men. But the connotations of “holiness” don’t match up well with the context of a prosecution. “Decency” is at a lower register of formality than “piety” or “holiness” in a way that sounds a bit odd, but it is the best fit as far as its explicit meaning. ↩︎

  2. Contemporary readers may have difficulty relating to the idea of multiple gods who might legitimately disagree about what decency requires. But one can substitute the less iconographic authorities we have now: religions, ethical systems, philosophical traditions. Someone might plausibly claim that decent behavior is whatever all the major ethical traditions recommend. Socrates’ challenge still works: if these traditions can decide arbitrarily, then what stops them from endorsing indecent behavior? If we trust that they are constituted to endorse decency, then we already have some idea what the common factor is, and should be able to say what it is. ↩︎




Five years since lockdown

LessWrong.com News - March 26, 2026 - 18:29

I received my one-shot vaccine on March 26th, 2021. I had been in lockdown for more than a year, my entire life on hold, my world closed.

Previously: Takeaways from one year of lockdown & reflections on lockdown, two years out

A few weeks ago, I was talking to a community organizer in London, and he said, “This feels like the first year that things have really gotten back to normal since Covid.”

I was surprised — it’s been five years, at this point. But he said that communities are only just recovering, that people are only just starting to go out like they once did.

So I thought about the people I know, the people I’m close to. And I realized how right he was, and I was surprised at my own surprise.

Two people I know have developed, essentially, full agoraphobia. They may have been a little weird before the pandemic, but they could function — one of them was in university, and the other had a job outside the home. Now, it feels impossible to imagine either of them going back to that old life. They are cripplingly anxious about the most basic of interactions, sometimes even with the people they live with and love best. It’s rare for them to even set foot outside of their houses, never mind doing normal everyday tasks out in the world, like grocery shopping.

I only know what happened to these people because they are sufficiently close, and I only see them because I visit them in their own homes. To the rest of the world, what happened to them is invisible.

There are people I knew before the pandemic who simply disappeared from my life. How many of them just never came out of hiding?

We all had our lives knocked off course by the pandemic, to a greater or lesser extent. Even if you somehow genuinely had a good time during lockdown, no one can say they’ve had quite the life they expected they would before 2020.

Maybe you missed or had to postpone major milestones — graduating from university, getting married. Maybe you lost loved ones who should have had much longer lives. Maybe you had to raise a kid without any of the support system that you’d expected to be able to rely on.

Likely, the rhythm of your life has permanently changed in some ways. My dad has the same job he’s always had, but his office closed during the pandemic and never reopened, so he still regularly goes days at a time without seeing anyone in person. My immunocompromised cousin used to travel for business all year; now, she can’t even have unmasked visitors in her home.

I had spent the years between college and the pandemic living in a group house, working in the tech industry, and seriously dating two people. When the dust settled from 2020, I had lost all of those things. It took me years to come to terms with the fact that I would never have the life that I’d spent those years building. It was a long and painful mourning process.

Five years after leaving lockdown, I’ve built a life I like much better, but that doesn’t negate the grief of losing what I had, what I thought I would have. Wherever we may have ended up, we all lost something.

Coming out of the pandemic, I felt like no one wanted to talk about the trauma we had all just gone through. The people who’d had a bad time mostly didn’t want to talk about it or just disappeared entirely, so all I heard was “I had a pretty good pandemic”, even when I felt like the person was lying. A few times, I tried to share what had happened to me, and the person I was talking to laughed in my face.

For years, I couldn’t think about the pandemic without crying. I skipped over any story that included it in the plot summary[1], and couldn’t bear to hear the theme music of the shows I’d watched during that time.

But I also developed a fascination with 2020, even as I found it incredibly triggering. It was like an alternate reality that we had all lived in, and we were all now collectively pretending it hadn’t happened. People’s desire to pick up where they left off was understandable, but it felt like denial to me. Any time I heard someone acknowledge out loud that the pandemic had happened, I was shocked and thrilled, like they’d acknowledged an illicit secret.

In 2024, I started being able to read books about 2020[2]. Three months ago, on the final day of 2025, I finally felt ready to watch Bo Burnham’s Inside — the feature-length musical special he wrote, recorded, and released during the pandemic. The first time I heard one of the songs from it, I cried for hours. Now it’s my favorite movie, and I know the whole soundtrack by heart.

It’s been five years. The paint and stickers that marked distances six feet apart on sidewalks have been torn away, or faded with time (except where they haven’t). Some businesses keep up their old signs about masking and distancing, too expensive to replace, though none of them have enforced it for years, now. Rates of masking in airports have fallen and fallen, until those of us who sicken easily just have to stay home. People have healed, or at least those of us who haven’t have disappeared, and no one really thinks about them anymore.

The shape of your community has changed. The shape of your life has changed. Some of these things would have happened without the pandemic, but many of them wouldn’t. Time still would have passed, yes, but your life would have gone differently.

It’s been five years. The world has gone back to normal. If that even means anything.


[1] The pandemic shows up shockingly rarely in mainstream fiction, but people were writing fanfiction all through 2020 and 2021, and many of them used it as a way to process what was happening.

[2] Some books I’ve read on the pandemic:

  • 2020: One City, Seven People, and the Year that Changed Everything, which tells the story of seven New Yorkers and how they changed and adapted during the pandemic, interspersed with chapters that examine broader social and political changes during 2020
  • The Emergency, a memoir of a Black ER doctor working on the south side of Chicago, written in the depths of the pandemic, focusing on the healthcare inequities exposed by the crisis
  • Every Minute is a Day, another ER doctor’s memoir written before there was an end in sight, this one in the Bronx, the worst-hit borough in the US
  • Please Unsubscribe, Thanks, a sort-of-Digital-Minimalism-adjacent manifesto that resonated deeply for me, about how the world stopped during the pandemic and then picked up as if nothing had happened
  • The Anthropocene, Reviewed, John Green’s memoir, written during 2020 and released before lockdown ended, a beautiful capturing of that time of strangeness, uncertainty, and fragile hope





Resolving the Surprise Test Paradox

LessWrong.com News - March 26, 2026 - 18:01

This is a crosspost from my blog post.

In this post, I’m going to resolve the surprise test paradox.

The surprise test paradox is as follows:

A teacher tells a student that he’s going to have a test this week, but that he will not know when the test is coming.

Upon hearing this, the student realizes that he will not have a test on Friday, since, if the test were on Friday, he would know that morning that the test was going to occur.

But, since he knows that the test won’t occur on Friday, he realizes that he also won’t have a test on Thursday because, on Thursday morning, he would expect for the test to occur since he would already know that it’s not going to occur on Friday.

Then, using similar logic, he deduces that the test also won’t occur on Wednesday, Tuesday, or Monday, and that, as such, he shouldn’t expect for the test to occur at all.

Glad that he isn’t going to have a test, he walks into class on Wednesday and is, of course, handed a test.

I used to think this was a paradox since it seems like the student’s logic is correct and, yet, it leads to a conclusion that guarantees that he will not know when the test is coming by causing him to expect for no test to come at all.

I now no longer think that it is a paradox.

When the student concludes that the test is not going to occur on Friday, he in fact makes it possible for the test to occur that day, since he now no longer expects it. As such, the student made a reasoning error by failing to take into account the fact that his expectations determine whether or not a test occurs. Had the student been reasoning properly, he would have realized that, each morning, he should expect a test to occur that day, since, if he expects it to occur, it will not.

So, in reality, the surprise test paradox is not a paradox at all. If each morning the student expects to have a test, he will never receive one and the teacher’s statement will be false. If the student doesn’t expect to have a test on a given morning and then receives a test later that day, the teacher’s statement is true.
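
To make this concrete, here is a minimal sketch (my own illustration, not from the original post) comparing the two policies: the student who expects a test every morning, and the student who follows the "paradoxical" argument and expects none. "Surprise" here just means the teacher gives the test on a morning when the student did not expect it.

```python
DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

def expects_test(strategy, day):
    """Does the student expect a test on this morning, under the given strategy?"""
    if strategy == "always_expect":      # the policy the post recommends
        return True
    if strategy == "paradox_argument":   # the student's original conclusion: no test can occur
        return False
    raise ValueError(strategy)

for strategy in ["always_expect", "paradox_argument"]:
    # Days on which the teacher could give the test while keeping the promise
    # "you will not know when the test is coming" true.
    surprise_days = [day for day in DAYS if not expects_test(strategy, day)]
    print(f"{strategy}: teacher can surprise the student on {surprise_days or 'no day'}")
```

Under "always expect" there is no day on which the teacher can keep the promise, so the teacher's statement comes out false; under the "no test can occur" policy, every day is available for a surprise.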

What makes the situation strange is that, each morning, the student should expect to have a test despite the fact that, if he expects to get a test, he “should” also expect not to get a test since expecting to get a test guarantees that he will not get a test. Although this is a strange state of affairs, it is not paradoxical because the first “should” and the second “should” are two different kinds of “shoulds.” The first “should” is a should based on what he ought to do to avoid being caught off guard by a test. The second “should” is a should based on what he ought to do to be logically consistent. This is only a paradox if one believes that individuals ought to be logically consistent in all situations, which this “paradox” clearly reveals is not the case.

I’m not sure whether the paradox has been resolved in this way by others in the past, but I thought I’d share it with you guys since it’s quite an interesting philosophical conundrum.




You can just multiply point estimates (if you only care about EV)

LessWrong.com News - March 26, 2026 - 18:00

Many people think you need probability distributions; they think using point estimates will mess up your EV calculations. That's false; you'll get the same result whether you multiply distributions or multiply their EVs. You can ask a chatbot: "Briefly explain linearity of expectation and independence (E[XY] = E[X]E[Y])." To multiply EV point estimates, you just need to make sure the point estimates are EV rather than median, the distributions are independent (or at least uncorrelated), and you're not doing anything fancier than adding and multiplying.[1] (However, you need distributions if you want median, credible intervals, etc.)
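
As a quick numerical sanity check (my own sketch; the beta and lognormal inputs below are arbitrary stand-ins for, say, "probability of success" and "value if successful"), Monte Carlo sampling shows that multiplying the means of independent inputs agrees with taking the mean of the product, while a nonlinear transform such as 1/X does not (see footnote 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two independent uncertain inputs (arbitrary illustrative distributions):
p_success = rng.beta(2, 5, n)              # mean = 2/7, stand-in for "probability of success"
value_if_success = rng.lognormal(0, 1, n)  # mean = exp(0.5), stand-in for "value if successful"

# EV from full distributions vs. EV from multiplying point estimates (means):
print(np.mean(p_success * value_if_success))           # ~0.47
print(np.mean(p_success) * np.mean(value_if_success))  # ~0.47, same up to sampling noise

# But a nonlinear transform breaks this: E[1/X] != 1/E[X] (footnote 1):
print(np.mean(1 / value_if_success))  # ~1.65
print(1 / np.mean(value_if_success))  # ~0.61
```

The first two printed numbers agree up to sampling noise; the last two do not, which is exactly why you need the expectation of the right parameter.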

I generally use point estimates in my EV calculations. In some contexts the natural way to estimate EV is to first estimate a distribution, but in my work it usually makes sense to estimate EV directly. And using point estimates makes it easier to understand models, compare parameters from different people/models, notice inconsistencies, etc.

Using distributions is dangerous; if you get the tails wrong it can wreck you.[2] And again, distributions are harder to understand, and making your models less scrutable—to yourself and others—is a massive cost. Using distributions might help you notice which parameters are unstable, but you can also just do that without distributions.

I occasionally use distributions for EV estimates because (1) sometimes it's necessary[3] or (2) sometimes the best way to estimate EV (or explain your estimate to others) is to estimate the distribution and then take the mean. Sometimes these "distributions" are crude, with just a few discrete buckets, because that's easier to think about.

On the other hand (thanks to Eli Lifland for suggestions):

  • In some contexts (e.g. forecasting AI progress rather than estimating EV for prioritization/grantmaking), the output you want is (downstream of) a distribution. In those cases you absolutely should use distributions.
  • The distribution contains more information than the EV. To find the median or mode or credible intervals, you need the distribution.
  • Distributions increase scrutability for high-effort readers: they're harder to engage with but contain more information. In some contexts, if you want to understand or argue about EV, you have to understand or argue about distribution. (But in my work, it's often more like: you should directly estimate the expectation of a parameter, and the best way to come up with a distribution is downstream of that.)
  • Maybe Claude (or Guesstimate or Squiggle?) can make it easy to make a decent model with distributions. I'm skeptical but haven't really tried.
  1. ^

    You also have to make sure you're using the expectation of the right parameter. In particular, 1/E[X] is different from E[1/X].

  2. ^

    Minor: also arguably distributions for value of the world after this intervention minus value of the world before this intervention are fake/meaningless, since all interventions have massive random effects. This doesn't bother EVchads because the random effects have EV zero. When we talk about the distribution of effects, we have to talk about something more like direct/foreseeable effects. This doesn't really matter but it may suggest that distribution of value diff is a weird/unnatural concept.

  3. ^

    E.g. what's the value of spending $1 in 2028, given uncertainty about how much others will spend? I use a distribution on "how much others will spend effectively in this area in 2028" and a function from "how much others will spend" to "EV of marginal $1."




Sen. Sanders (I-VT) and Rep. Ocasio-Cortez (D-NY) propose AI Data Center Moratorium Act

LessWrong.com News - March 26, 2026 - 16:13

The text of the bill can be found here. It begins by citing the warnings of AI company CEOs and deep learning pioneers Geoffrey Hinton and Yoshua Bengio, the 2023 FLI open letter calling for a 6-month pause, and the 2025 FLI statement on superintelligence. The bill would prohibit the construction or upgrading of AI datacenters until Congress passes an AI safety law aimed at preventing AI companies "from releasing harmful products into the world that threaten the health and well-being of working families, our privacy and civil rights, and the future of humanity". It would also impose export controls on advanced chips "to any country or entity that does not have laws and regulations in place to protect humanity from AI safety concerns and existential risks, protect workers, and protect the environment". Sen. Sanders and Rep. Ocasio-Cortez announced the bill in a live press conference:



Transcript

Sen. Bernie Sanders

Thank you all for being here. The Congresswoman and I are going to be chatting about an enormously important issue. Let me start off by saying that in my view, and in the view of people who know a lot more about this issue than I do, we are at the beginning of the most profound technological revolution in world history — a revolution that will bring unimaginable changes to our society in a relatively short period of time.

Artificial intelligence and robotics will impact our economy, our democracy, our privacy rights, our emotional well-being, our environment, and even our very survival as human beings on this planet. The scale, scope, and speed of this transformation will be unprecedented.

According to Demis Hassabis, who is the head of Google DeepMind, the AI revolution will be ten times bigger than the industrial revolution and ten times faster — meaning AI and robotics will have a hundred times the impact of what the industrial revolution did.

And it's not just what AI companies are saying, it's what they are doing. This year alone, four major AI companies are expected to spend roughly $670 billion building data centers, and tens of billions more on research and development.

Despite the extraordinary importance of this issue and its impact on every man, woman, and child in this country, AI has received far too little serious discussion here in our nation's capital. I fear that Congress is totally unprepared for the magnitude of the changes that are already taking place.

While Congress has not paid enough attention to this issue, the American people have. According to a recent poll, 79% of voters are concerned that the government does not have a plan to protect workers from AI job losses. That same poll also found that 56% of voters are concerned about losing their job — or having someone in their family lose their job — in the next year. Not in the next ten years. In the next year.

Why are the American people so concerned? They have a lot of reasons to be. They understand that at a time of massive income and wealth inequality — when the billionaire class has never had it so good — some 60% of our people are living paycheck to paycheck.

And the American people understand that the AI revolution, these massive investments, are being driven by some of the wealthiest people in our country and the world — people like Elon Musk, Jeff Bezos, Mark Zuckerberg, and Larry Ellison. They understand that these billionaires are investing huge amounts of money into AI and robotics not to improve life for working families, but to dramatically increase their own wealth and power. Do you think the average American is sitting around trusting that the multi-billionaires are going to transform society for the good of ordinary people? I don't think so.

In terms of the impact that AI and robotics will have on working people, let us listen to what the tech oligarchs themselves are saying. Not the Congresswoman, not me. Let's hear what the tech oligarchs are themselves saying. Elon Musk, who has made massive investments in AI and robotics, has stated — and I quote:

"AI and robots will replace all jobs."

Replace all jobs. Bill Gates has said humans "won't be needed for most things." And researchers at Microsoft predict that most white-collar work could be automated within the next decade.

What we are talking about, therefore, is the possibility that AI and automation could displace tens of millions of workers in the United States from their current employment. We have already seen early signs of this transformation with declining employment in AI-exposed occupations and growing difficulty for young people entering the workforce.

If machines can perform most economically valuable work better than humans — and that is the goal, that machines will be able to do the job you are doing better than you are doing it — if that happens, pretty simple question: What happens to the workers? How do people earn a living? How do they support their families? And how do programs like Social Security and Medicare survive without a stable tax base? People are not working, they're not paying taxes.

And let me ask a very simple question that is on the minds of millions of Americans: How will workers who lose their jobs find new employment if there are no jobs available? What happens to the over 6 million truck drivers, cab drivers, rideshare drivers, and bus drivers if virtually all transportation is conducted through driverless vehicles — which, under present trends, seems likely to happen? What happens to the young college graduates who today go out looking for an entry-level job, but that job is not there?

Let me say this: I have seen this before, and the American working class has seen this process play out before. In the 1990s, the working class of this country was told by the corporate world, by the elites, by the corporate media: "Don't worry about unfettered free trade. NAFTA, PNTR with China — it is going to create millions and millions of good-paying jobs." Democrats, Republicans, corporate executives, everybody came together. Well, not quite. What happened, in fact, is that thousands and thousands of factories in this country were shut down, and millions of workers lost their jobs.

But when we talk about AI and robotics, we're not just talking about the economy. AI is already reshaping how we, as human beings, relate to each other. According to a recent poll by Common Sense Media, 72% of American teenagers say they have used AI for companionship, and more than half do so regularly. What does it mean for young people to form friendships with AI and become more and more lonely and isolated from other human beings? Everybody understands we have a major mental health crisis for our young people right now. I fear that AI could make it even worse.

And then when we talk about what's going on, we also have to talk about privacy. Larry Ellison, second richest person on Earth, who is a major investor in AI, predicts an AI-powered surveillance state is coming where — and this is Ellison speaking — where citizens will be on their best behavior because "we're constantly recording and reporting everything that is going on": your phone calls, your texts, your emails, the websites that you visit, all of it will be recorded if we continue current trends.

Further, AI is undermining American democracy. The rise of deepfakes — where you have very convincing fake images; that happened to me, where they had me selling some rebates or something. It looked very good, almost convinced even me, but it wasn't me. [Turns to AOC] Has that happened to you? [AOC nods.] We're seeing fake videos, fake audio. What happens when the day before an election, somebody who looks like the candidate gets up and says something outrageous and people believe it? If people cannot trust what they see and hear, informed decision-making becomes nearly impossible. It will be harder and harder to distinguish between truth and untruth.

There is also a significant environmental cost. AI requires enormous computing power, driving the expansion of energy and water use in tens of thousands of data centers, increasing electricity demand and potentially deepening reliance on fossil fuels in the midst of a climate crisis.

Finally, let me say this — and I know some people out there may still think this is science fiction, but it ain't. I have talked to some of the leading scientists and most knowledgeable people in the world about the potential existential threats that AI brings to the human race, including Geoffrey Hinton. Dr. Hinton is a Nobel Prize winner in physics; he's considered to be the godfather of AI. He has warned that AI could soon surpass human intelligence and operate independently beyond our control, and if that happens, it poses a profound threat to the very survival of the human race.

So what is happening in Washington right now in response to all of these enormous concerns? We learned just today that Donald Trump has appointed a commission made up of the very people who are going to financially benefit from AI and robotics — including some of the wealthiest people in the world, like Mr. Ellison, Mr. Zuckerberg, and Nvidia CEO Jensen Huang. Needless to say, there is no representation on that commission of workers, or environmentalists, or consumers.

That is why, in my view, we need a very different approach. Across the country, communities are already pushing back against the unchecked expansion of data centers. More than 100 localities have enacted moratoriums or restrictions, and states are beginning to take action as well.

Importantly, leaders within the AI industry have called for a pause. The people who know the most about the threats of AI development have themselves called for a pause. In 2023, over 1,000 experts — including Elon Musk — called for AI labs to "immediately pause for at least six months." And when that pause was not enacted, they called on governments to step in and institute a moratorium. They understood what is at stake.

Congresswoman Alexandria Ocasio-Cortez and I understand what is at stake. That is why today we are announcing legislation to impose a moratorium on the construction of new AI data centers until strong national safeguards are in place to ensure that AI is safe and effective. That means: the government reviews and approves AI products before they are released; the economic gains of AI and robotics benefit ordinary Americans, not just the billionaire owners of the industry; and AI data centers do not increase electricity or utility prices, harm communities, or destroy the environment. Importantly, the legislation would also impose a ban on the export of AI chips to any country without such protections, including China.

A moratorium will give us time — time to understand the risks, time to protect working families, time to defend our democracy, and time to ensure the technology works for all of us, not just the few. So with that, let me now introduce the Congresswoman from New York, Congresswoman Alexandria Ocasio-Cortez.

Rep. Alexandria Ocasio-Cortez

Thank you, Senator. First and foremost, I want to take a moment to express my gratitude to Senator Sanders for bringing us together today to introduce this critical legislation. Within just a matter of a few short years, AI has been integrated, often forcibly, into many aspects of American existence — into our doctors' offices, our government — and oftentimes to the detriment of working people. Last year alone, AI was responsible for over 54,000 layoffs nationwide. And when we talk about those jobs, it's not just a number. These are industries, these are communities, these are families.

You know, just a few short years ago, Sam Altman came before Congress and, in a direct plea, he begged us to regulate this industry. He said that these tools were under no circumstances ready — nor should they be — integrated into weapons of war; that we must impose severe regulations immediately to prevent mass layoffs and to ensure that any productivity that comes of this industry can benefit working people. Three short years later, none of that has happened, and in fact in many cases the opposite has happened.

I'll start with another quote from Sam Altman, who said ten years ago: "AI will probably lead to the end of the world, but in the meantime, there will be great companies created with serious machine learning." While I'm glad that Mr. Altman is holding up his end of the deal — he has his responsibilities on the company side — our responsibility is to take care of people. And that is what we're here to do today.

Unfortunately, the leaders of this industry have repeated time and time again that they view working people as an endless, untapped market to be manipulated and exploited — that they would sell our country out if it meant they could turn a profit. And it is no surprise that in the four years since ChatGPT was released, we have seen AI deployed at massive scale to create Big Brother-like surveillance. Every day, Americans are seeing videos of ICE agents waving phones into crowds, threatening that if U.S. citizens use their First Amendment rights, they will be added to some vague database that the American people have not consented to nor are knowledgeable about.

Companies like Palantir are mining endlessly the data and privacy of the American people, keeping track of everything they say and do, and sending it all to a militarized and centralized government. When you take the subway, when you share a TikTok, when you talk to your Alexa at home, they are collecting your data and figuring out new ways to weaponize it. And now they are using AI tools to automate this so that it is not only pervasive but effortless. We must sound the alarm now.

All of this harm has occurred not in spite of, but because of the absence of federal legislation to regulate AI. This is not my first bill around AI, and I can tell you that it is extremely discouraging to see how even the most minute efforts to protect people at the smallest and most basic level — like trying to prevent AI-generated child pornography — are still combated, oftentimes by many people here on the Hill and throughout industry. Currently, the story of AI is a story of corruption. It is fueled and funded by the same multi-billion dollar corporations lobbying politicians to sit back and do nothing while they harm our communities.

In fact, one of the largest explosions of super-PAC and outside funding is by the AI industry. And for years, Congress has enabled this permission structure — a permission structure that allows billionaires like Sam Altman, Elon Musk, and Peter Thiel to be trusted to regulate themselves under the guise of "American innovation."

And what are they asking for now? Endless energy. These companies are now so desperate to profit off the AI boom that they are racing to construct thousands of giant AI data centers and jacking up the utility costs of everyday Americans to pay for it. These data centers power thousands of high-intensity computer chips that are processing at all times and require massive amounts of energy to operate. Just one hyperscale data center consumes in one second the same amount of energy as 100,000 households.

Because of the massive amounts of energy they use, power and water utility companies must build multi-billion dollar infrastructure to keep up with the demand. And these companies are not paying for their own energy infrastructure. People's energy bills around the country are skyrocketing in order to pay for these AI data centers. In the last five years, Americans who live near data centers saw their electric bills increase over 267% — from $80 a month to $294 a month. Working people are already living paycheck to paycheck in some states on wages as low as $7.25 an hour — another federal minimum wage that Congress also refuses to raise. They simply cannot afford another increase to their monthly bills.

And this results in people cutting corners whenever they can to keep the lights on — whether it's skipping meals or rationing their insulin. And while the American people are suffering, big tech continues to demand more: more data centers, more energy, more data collection, more American jobs replaced by AI, all left at the feet of the American people to pay for.

And across the country, Democrats, Republicans, and independents are standing up. People are standing up to big tech and saying no to these data centers being built in their communities. More than 100 local communities across 12 states have already enacted local moratoriums on data centers. And Congress itself has a moral obligation to stand with them and stop big tech from ruining their communities.

Our legislation in the House and the Senate would hit the brakes on construction of new data centers until we address several of the key areas of harm AI poses. Our bills learn from our lack of regulation following the similar rise of the internet and demand a new approach to AI — one that protects the American people from big tech's egregious overreach, one bound by our shared commitment over those who wish to patent it, one that centers prosperity for the many over exorbitant profits for the very few. Thank you.

Q&A

Sanders: Okay, we're just going to take questions on AI. Yes — who are you with?

Q1 — Insider journalist: One of the most common critiques that comes up when this idea of moratoriums is raised is that we're in an AI race with China, and that slowing down the progress of AI development in any way would give China an advantage — whether that's economic or national security. I'm curious, when that critique is raised to you, what do you say to that?

Sanders: Well, I think the good news — and it's not widely known — is that there are a number of Chinese scientists who share the same concern that American scientists are worried about: if we don't get a handle on AI, there are going to be enormous existential consequences. So I think in a sane world, what happens is the leadership of the United States sits down with the leadership in China and leadership around the world to work together so that we don't go over the edge and create a technology which could perhaps destroy humanity. But I would say there are Chinese scientists fairly high up in the government who share those concerns. We've got to immediately, with a sense of urgency, bring those people together.

Ocasio-Cortez: I think the concern there is easily remedied by passing protections for people. We need to sort out energy sources. Once these companies can be on the up and up — providing their own energy, building out and investing in the infrastructure, refusing to free-ride off of the American people — then we can continue to develop and explore this technology. I don't think that this is about a denialism of science or American competitiveness, but it is about an integration of protection of the American people instead of allowing this to happen at their expense. And furthermore, this legislation will prohibit the export of chips to China and other countries.

Q2 — Chase Williams, Fox Business: You mentioned electric bills up 267% each month. The administration has taken some steps to remedy these concerns — I wonder what you make of their actions. And my second question for both of you: do either of you use artificial intelligence in your everyday lives?

Ocasio-Cortez: I think we just need to ask everyday people if they're feeling the effects of that. As much as I personally have differences with this administration, I would want them to be successful in reducing the energy bills of American people — but people aren't feeling it. It's not working. In New York City alone, there are some families that are getting electric bills that are $600 a month — as high as some people were paying in rent not too long ago before our housing costs skyrocketed as well. So I don't think those efforts are working on behalf of the administration.

As far as AI, I do not regularly integrate it into my daily life, but I know many people do. My job tends to be quite writing-based and I like to do my own. Very old-fashioned. We have looked at AI — we did a chat with Claude recently — but I don't use it on a regular basis.

Q3 — "Igor" [?]: Do you think the Democratic Party is taking the threat of AI seriously enough? Can you talk a little bit about AI's influence in elections? They spend a lot of money in elections.

Sanders: No, I don't think the Democratic Party leadership is taking this issue anywhere near as seriously as it should. We need to develop a sense of urgency. The economic impacts are going to be enormous. The impacts on our children will be enormous. And again, there is literally an existential threat to the existence of the human race. Now, you tell me: do you think that leadership here, on either side of the aisle, is saying "Whoa, we better get moving on this thing"? The answer is no.

And the second point — why is Congress not moving aggressively? Well, maybe it has something to do with the $150 million and more that is coming into Congress in campaign contributions and in super PACs. When you have a class of people who are multi-billionaires, spending a few hundred million dollars on elections is chicken feed. And that is what they are doing. So what they are saying to members of Congress and candidates is: you're going to stand up to us? There's going to be a $20 million ad campaign starting against you tomorrow. That's the reality.

Q4 — Insider journalist [?]: It's been said that the data centers and the chip makers and the app makers have kind of created this circular business — basically a bubble. And by pulling out of it, we could have serious economic consequences. How do we pull out of that? And another question: what do you make of Claude?

Sanders: Your question is an important one, though it's a little different from what we're talking about today. You're talking about the possibility of a bubble bursting and having enormous economic impact. I think that's true. But above and beyond that, what we're talking about is a technology — bubble or no bubble — that is going to have a real impact on American society.

You asked me about my chat with Claude. I'll tell you, it was a little mind-blowing. You can very easily — and I suppose young people have more familiarity with it than I do — it is amazingly easy to start seeing this entity, this AI agent, or call it what you might, as a human being. And please remember: AI is only going to get more and more effective in years to come. It's only a relatively few years old, and yet you sit down and it's like — "Hey, you have a nice day! How are you?" — you can get into that. And I want you to think about what it will mean to a 13-year-old girl who is struggling and has an AI as her best friend, or a 15-year-old boy, or whatever it may be. It's serious, it's dangerous. But the answer to your question: it is mind-blowing how quickly you can somehow see this as a human agent.

Q5 — Journalist: There are some other ideas being proposed — to make it so that these companies are paying for or producing their own energy. Why not go that route? Why do you want more?

Ocasio-Cortez: Because that's one part of the problem. As the Senator mentioned, you're seeing people whose electric bills are soaring beyond control. That is an issue we've got to deal with. But there are other profound issues. So if somehow or another that was put under control and electric bills did not soar — and yet we lost tens of millions of jobs — would we be happy with that? If our kids became addicted to AI agents, would we be happy with that?

Sanders: And I think once again I want to get back to the point that some of the founders of this industry — people who know more about it than anybody else in the world — are telling us that within a few years, it is likely that AI will be smarter than human beings, that human beings may lose control over AI, with possibly catastrophic impacts. How do you ignore those issues? You don't. And you asked what the Democrats are doing — we need a sense of urgency right now to address these issues. Clearly, that is not the case. All right. With that, thank you all very much.




Discuss

Past Automation Replaced Jobs. AI Will Replace Workers.

LessWrong.com News - March 26, 2026 - 15:32


 

Human workers are about to face a competitor unlike any technology that came before: AI systems that can be copied at near-zero cost, deployed instantly, and improved faster than workers can retrain. Economists have long answered automation fears by noting that machines destroy some jobs but create others through lower costs, higher output, and new industries. That pattern held because earlier machines replaced specific tasks while people kept the broader mental abilities needed for other work. AI is beginning to erode that refuge. When the same technology can do the mental work that once let workers move from obsolete jobs into new ones, the old economic escape route starts to close.

The Luddites were right to fear stocking frames because those machines destroyed skilled textile work. But a machine that made clothing cheaper did more than destroy one craft. Lower prices freed money for other purchases, and higher output created demand for mechanics, haulers, clerks, merchants, and other workers.

That escape route existed because human labor stayed scarce and broadly useful. Workers who left farms could move into mills, railroads, mines, construction, and factories. When manufacturing later needed fewer people, workers moved again into offices, hospitals, schools, retail, finance, and other services. Earlier machines replaced particular tasks while leaving humans able to do many others. AI threatens that escape route because it can do the mental work needed to automate the next set of jobs and help build the software that replaces them.

Software Comes First

Programming matters most because software sits upstream of so much else. Every drop in the cost of making software lets a firm route more activity through systems that answer customers, draft documents, check compliance, analyze data, coordinate logistics, and hand tasks to machines. AI is improving the process of building the very thing that may automate much of the rest.

Turning work into software has never been free. A company may know that billing, scheduling, compliance, approvals, customer service, reporting, and internal coordination could be automated, but someone still has to map the rules, connect old systems, handle exceptions, test failures, and keep the software running. For years that work was costly enough to keep many automation plans on the shelf. A workflow had to be large, stable, and valuable before it was worth a real engineering effort.

AI is now driving that cost down, and one sign of the change is what programmers call ‘vibe coding.’ A person states the goal in plain language, the AI writes the code, the person tests the result, asks for revisions, and keeps iterating. The work shifts away from writing every line by hand and toward specifying what should happen and judging whether the output is good enough. In many settings, it already makes software creators substantially more productive. It also makes it cheaper to automate tasks that were once too minor, too custom, or too messy to justify a full software project.
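To make the loop concrete, here is a minimal sketch of the vibe-coding cycle the paragraph describes. It is illustrative only: ask_model and run_tests are hypothetical stand-ins for whatever AI assistant and test harness a reader might actually use, and the round limit is an arbitrary choice.

```python
# A minimal, hypothetical sketch of the vibe-coding loop: specify the goal,
# let the model draft code, test it, and feed failures back until it passes.

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to an AI coding assistant; returns generated code."""
    raise NotImplementedError("plug in the model API of your choice here")

def run_tests(code: str) -> tuple[bool, str]:
    """Hypothetical stand-in for your own checks; returns (passed, feedback)."""
    raise NotImplementedError("plug in your own test harness here")

def vibe_code(goal: str, max_rounds: int = 5) -> str | None:
    """State the goal in plain language, then iterate: draft, test, revise."""
    prompt = f"Write a small program that does the following:\n{goal}"
    for _ in range(max_rounds):
        code = ask_model(prompt)
        passed, feedback = run_tests(code)
        if passed:
            return code  # good enough: the human judged, the model wrote
        prompt = (
            f"The previous attempt failed these checks:\n{feedback}\n"
            f"Revise the code. Original goal:\n{goal}"
        )
    return None  # give up after max_rounds revisions
```

The point of the sketch is where the human effort sits: in stating the goal and judging whether the output is good enough, not in writing the lines of code.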

The effect reaches far beyond the software industry. A firm no longer needs to wait for a large specialized team before it can try to automate a report, an approval chain, a customer response system, a research workflow, or an internal tool. Managers, analysts, lawyers, designers, and operations staff can describe what they want in plain language, test a rough version, then keep refining it. Some of those systems will be crude, but even crude systems can eliminate a surprising amount of routine work.

That matters even more because software may now be improving the process that makes more software. A coding system that helps build the next coding system can speed the next round of improvement, which can then speed the round after that. If that feedback loop grows strong enough, firms may soon be using software creators far more capable than today’s human teams, not merely cheaper assistants. A breakthrough in one model can become a product update, and a product update can put a much better builder on millions of screens almost immediately. Improvement in software can therefore spread much faster than improvement in machines that have to be physically manufactured and shipped.

If that happens, the software industry in anything like its current form may not last. The bottleneck shifts from writing code to deciding what should exist and recognizing whether the result works. That would not just remake one industry. It would change the speed and scope of labor replacement across the economy.

The First Jobs to Go Are Done on Screen

The first visible sign may not be mass unemployment. It may be a steady drop in openings for work that can be assigned, completed, and checked entirely on a computer. Customer support, bookkeeping, routine legal drafting, basic research, slide preparation, copywriting, illustration, and much routine programming fit that description. Employers can automate that work long before they can trust robots with traffic, bad weather, cluttered homes, or sick patients.

Recent college graduates are exposed first because junior office jobs often consist of the most standardized slice of a profession. New hires review documents, clean data, draft memos, prepare slides, build basic models, write boilerplate code, and answer routine client questions. A firm does not need mass layoffs to change this market. It can shrink internships, cut analyst classes, and leave junior openings unfilled.

In an older economy, cutting junior hiring would have looked shortsighted. Firms needed beginners because beginners became the experienced people who later ran teams, served clients, and made hard calls. That logic weakens if employers expect AI to advance faster than junior workers can. A firm that expects software to absorb more senior work has less reason to train a large human pipeline for roles it may soon need far fewer people to perform.

That change does more than reduce openings for today’s beginners. It weakens the incentive to become tomorrow’s skilled worker. A profession becomes less attractive when the entry jobs are disappearing, the path upward is narrowing, and the work people are training for may be automated before they reach it. Fewer students choose the field, fewer workers accept low-paid apprenticeship years, and fewer employers bother developing talent they may not need later. That makes the pipeline shrink from both ends. Firms invest less in training because they expect less future need for people, and workers invest less in training because they expect less future reward.

Why the Job Market Still Looks Intact

Those early cuts can happen long before the broader labor market cracks, because a technology can become good enough to replace workers before firms know how to rebuild the work around it. Companies do not close a department the week a model improves. They still have software, procedures, managers, compliance systems, and customer expectations built around human workers. So they usually layer the new system onto the old workflow and have people check the output. That delays the labor-market effect. The big losses come later, when firms redesign the workflow itself and discover that work once spread across many employees can be done with far fewer.

That is where most firms are. They are learning where software can be trusted well enough to rebuild the workflow around it. A labor market that looks solid is weak evidence that labor is safe. It may only mean firms have not yet learned how to produce the same output with far fewer people.

Firms do not need AI to replace an entire job before workers start losing ground. They only need a credible path to using less labor soon. Once managers believe software will handle more of the work next year, they gain reason to freeze hiring, cut training, trim promotion ladders, and hold down pay today. Workers still employed then face a weaker position because the employer is no longer bargaining under the assumption that it must keep building a large human pipeline. The damage begins before mass unemployment arrives. First labor loses leverage, then career paths thin out, then firms discover they can produce the same output with fewer people, and only after that does full replacement come into view.

That delay existed because many jobs were protected not by their routine core but by their exceptions. Many workflows were partly automatable long ago, but firms still needed people to catch anomalies, recover from mistakes, handle unusual cases, and absorb blame when the rules broke down. AI matters because it is starting to handle the exceptions that used to keep humans in the loop.

That pressure does not stop at routine office work. Researchers are now using AI to attack open math problems, generate proof ideas, and in some cases help solve questions that had remained open for years. A system that can contribute to new mathematics is not merely repeating familiar patterns. It is beginning to encroach on the kind of originality many people assumed would protect human work much longer. Once machines can help with discovery as well as execution, the hope that human work will survive by retreating into creativity looks much weaker.

Software Starts Replacing Workers From the Inside

That same expanding capability has a direct economic consequence inside the firm: once software can be produced on demand, firms do not need to wait for a formal engineering project before they automate another slice of office work. The next step is for firms to deploy software agents that watch how digital work gets done and look for more of it to absorb. An agent inside a company could track customer requests, document flows, approval delays, recurring errors, and employee output, then build or refine tools to handle more of that work automatically. Managers would no longer be the only ones looking for openings to automate. The software would be looking too.

The first crude form is already visible. OpenClaw is an open-source agent that can run through ordinary chat apps, clear inboxes, send emails, manage calendars, and carry out other digital tasks. It does not yet replace whole departments, but it shows software moving from answering questions to handling the cross-application office work that keeps many assistants, coordinators, and junior staff employed.

Once tools like that can absorb enough office work to cut real costs, competitive pressure does the rest. Every successful automation reduces labor expense, speeds output, and weakens a firm’s dependence on hiring, training, and retaining people. When rivals start using software to find and implement those savings, refusing to follow is not caution. It is a path to decline. Costs stay higher, operations stay slower, margins weaken, and competitors gain the profit and capital needed to pull still further ahead. In a competitive market, adoption stops being a strategic option and becomes a condition of survival.

At first these systems will target obvious repetitive tasks. Later they will remove thousands of smaller frictions that survived only because no one wanted to staff a project to eliminate them. That is how labor replacement spreads from a few headline jobs into the texture of daily office work.

Robots Come Next

Once software starts replacing workers from inside the firm, money and engineering effort move toward the jobs that still require bodies in the world. Physical work has lasted longer not because it is protected, but because kitchens, hospital rooms, construction sites, and private homes are harder to standardize than files on a screen. Pipes are hidden, tools jam, patients worsen, weather shifts, and clutter gets in the way. A mistake in software can be patched. A mistake with a car, a ladder, or a frail patient can cause injury or death.

The barrier between screen work and physical work is eroding. Autonomous vehicles are already operating on public roads. That makes it much harder to argue that physical work is protected simply because the world is messy. Deliveries, warehousing, cleaning, security, routine home maintenance, elder care, hospital support, and large parts of construction all move closer once machines can perceive, plan, and act reliably outside the screen.

The market for capable robots is enormous. Households would pay for machines that cook simple meals, load dishes, fold laundry, tidy rooms, or help an older person stand up. Businesses would pay even more for robots that move materials, clean rooms, stock shelves, patrol property, prep surfaces, transport supplies, or assist nurses. Wherever labor is expensive, scarce, dangerous, or exhausting, firms have a strong reason to adopt machines that can do the job.

As adoption spreads, employers will not just swap a person for a robot. They will redesign the whole operation around machines. A workplace built for people must allow for fatigue, injury, breaks, insurance, lighting, temperature, and liability. A workplace built mainly for robots can be organized around speed, repetition, and continuous use. Once warehouses, hotels, hospitals, farms, and building sites are reworked for machines, the remaining humans start to look less like the core workforce and more like costly holdouts. In many jobs, the last defense of human labor will be law or politics, not economics.

Horses Kept Improving and Still Lost

When workplaces redesign around robots, human labor faces the horse problem. Horses retained their economic utility until combustion engines delivered superior power, endurance, and reliability at lower cost. Breeders produced stronger animals and refined equipment, but biological constraints remained. A horse requires rest, healing, and individual reproduction. Factories manufacture machines in bulk and run them continuously. Horses remained physically capable but lost their economic viability.

Historically, humans survived automation by transitioning to cognitive labor, an escape route unavailable to a displaced horse. That defense requires new jobs to resist automation better than old ones. AI degrades that advantage by absorbing sequential cognitive tasks. Human workers will increase their education and productivity but still face obsolescence if artificial systems improve faster, spread wider, and cost less. Employers eliminate human workforces when a cheaper, scalable substitute arrives, regardless of baseline human capability.

Comparative Advantage Will Not Save Workers

Even if robots overtake humans in every task, one standard economic argument still seems to leave room for human labor. Comparative advantage says you do not need to be better in absolute terms. You need only be less bad at one task than at everything else.

Imagine advanced robots take control of Antarctica. They can make a doughnut in one second and a gravity controller in one minute. Humans can make only doughnuts, each in a minute, and cannot make gravity controllers at all. In that world, humans still have a comparative advantage in doughnuts, not because humans are good at making them, but because people are even worse at everything else.

Comparative advantage still allows for trade. If humans offer one hundred doughnuts for one gravity controller, both sides gain. The robots spend sixty seconds making the controller but save one hundred seconds by not making the doughnuts. Humans spend one hundred minutes making doughnuts and get something they otherwise could never produce. Comparative advantage looks powerful because however far ahead the robots pull, human labor still seems able to buy access to their far greater productivity.
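Since the arithmetic carries the argument, here is the same example worked out in a short sketch. Every number comes from the scenario above; the only input is the terms of trade the text proposes, one hundred doughnuts for one gravity controller.

```python
# The doughnut / gravity-controller example from the text, worked out explicitly.

ROBOT_SECONDS_PER_DOUGHNUT = 1      # robots make a doughnut in one second
ROBOT_SECONDS_PER_CONTROLLER = 60   # robots make a gravity controller in one minute
HUMAN_MINUTES_PER_DOUGHNUT = 1      # humans make a doughnut in one minute, and no controllers at all

doughnuts_traded = 100              # terms of trade proposed in the text
controllers_traded = 1

# Robot side of the ledger, in robot-seconds.
robot_time_spent = controllers_traded * ROBOT_SECONDS_PER_CONTROLLER   # 60 s building the controller
robot_time_saved = doughnuts_traded * ROBOT_SECONDS_PER_DOUGHNUT       # 100 s not spent on doughnuts
print(f"Robots: spend {robot_time_spent}s, save {robot_time_saved}s, "
      f"net gain {robot_time_saved - robot_time_spent}s")               # net gain of 40 s

# Human side of the ledger.
human_time_spent = doughnuts_traded * HUMAN_MINUTES_PER_DOUGHNUT        # 100 minutes of labor
print(f"Humans: spend {human_time_spent} minutes and receive a controller "
      f"they could not produce at any price")
```

Both sides come out ahead, which is all the comparative-advantage argument needs; the catch, as the next paragraphs argue, is who owns the inputs that make the trade possible.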

This result depends on humans controlling what they need in order to produce the thing they trade. Once production depends on land, factories, software, energy, or distribution systems owned by someone else, the logic weakens fast. If orchard land is scarce and robots can harvest more cheaply, the orchard owner will use robots. If doughnuts come from a factory and robots can run the factory more productively, the owner will not keep humans on the line out of generosity. Relative advantage does little for workers when someone else owns the assets that turn labor into output.

Push that across the economy and the safe zone for labor shrinks fast. Humans may still be relatively best at some tasks, but owners will pair their capital with whatever workforce yields the highest return. Workers who do not own the relevant assets cannot make firms hire them just because human labor remains relatively best at something. If that workforce is robotic, comparative advantage may preserve trade without preserving wages. It can explain how humans might still produce something worth exchanging. It cannot explain how most humans keep earning a living.

Some Jobs Will Survive Only Because the Law Blocks Automation

Comparative advantage will not protect most workers, but regulation may protect some. Governments can require a human being to remain legally responsible even after software or machines can do most of the underlying work more cheaply. Licensing rules, staffing mandates, and liability standards can keep people on the payroll after the economic case has vanished.

That protection is easier to sustain in sheltered domestic markets than in export industries. If the United States requires carmakers to rely on more human labor while foreign rivals automate, foreign buyers will not pay extra to preserve those jobs. They will buy the cheaper car. A government can order its own firms to hire people. It cannot make foreign customers subsidize that choice.

Even at home, enforcement gets harder once capable systems become cheap and widespread. If a household robot can diagnose a leak and fix the plumbing, many people will use it rather than wait for a plumber. If the law says AI cannot practice medicine, people will still look for diagnosis, triage, and treatment advice from widely available systems. Once a machine can do valuable work inside ordinary homes and offices, blocking its use starts to look less like regulation and more like prohibition.

A country that imposes too much of that prohibition will grow poorer relative to countries that do not. Lower productivity means a smaller tax base, weaker firms, thinner capital markets, and less capacity to fund research, weapons, and industrial mobilization. In a world where rivals use AI and robots to raise output, refusing to do the same is not just an employment policy. It is a decision to accept relative economic and military decline. Law may preserve islands of human employment. It cannot protect most workers without making the country weaker.

Some jobs may survive for a while because some people still prefer dealing with a human. But that preference will protect few workers, and probably not for long. Once AI does the work well at much lower cost, most people will not keep paying extra just to preserve the human role, just as almost no one wants rickshaw drivers once cars are available. Fewer young people will train for professions like medicine if the career no longer looks secure, and fewer institutions will invest in training them. So even where some patients still want a human doctor, there may soon be too few trained humans left for that preference to support much of a labor market.

No Jobs Does Not Mean No Buyers

If workers stop earning wages, who buys the goods? Wages are only one source of demand. Governments could keep mass consumption going through welfare, transfers, and other public support. Capital holders would still have income, and if machines replaced labor on a vast scale, profits, dividends, and asset values could rise sharply, leaving owners with even more spending power. Government would remain a major buyer as well, purchasing defense, infrastructure, care, and other public services. Automated firms and software agents acting for owners could also generate demand for compute, energy, software, and other inputs. A post-labor economy could therefore still sustain demand through some mix of public transfers, capital income, government spending, and machine-mediated commerce. Workers are a major source of demand now, not the only one a rich economy could rely on.

When Human Labor Becomes Obsolete

So the loss of wages would not by itself make the economy collapse. The deeper question is what happens once robots and AI become so capable and cheap that hiring people no longer makes economic sense for any task. At that point, human labor becomes obsolete. If machines can do nearly all the work, the economy can still produce abundant goods and services even after most people no longer earn wages. People could then live better than aristocrats once did, with machines supplying transport, care, entertainment, and material comfort at low cost. Life would no longer have to revolve around jobs. A world in which everyone is materially secure would be possible. The path to that world begins not with robots doing everything at once, but with AI making software, and software remaking everything else.

Once people no longer matter economically, their future depends on whoever controls the machines. Those systems could be used to support billions of people in comfort and freedom. They could also be used by human rulers, or by AI itself, to dominate, confine, or kill people whose labor no longer matters. When labor loses its value, power decides what happens next.

 

This essay was written with help from AI. If I could not use AI productively to improve it, that would undermine either my argument or my claim to expertise. Here is how ChatGPT described how I used it to help write the essay:

“You use the system as a constrained collaborator embedded at specific points in the writing process, not as an end-to-end author.

  1. Idea generation and expansion
    You start with a core claim and use the system to enumerate adjacent arguments, examples, and counterarguments. The system functions as a breadth generator. You then select and refine only the strongest lines of argument.
  2. Structural design
    You use it to test different outlines, section orders, and narrative arcs. You ask for alternative organizations, then choose the one that increases clarity and momentum. The system helps surface structural options you might not have considered.
  3. Local rewriting, not full drafting
    You provide existing text and request targeted rewrites of specific sentences, paragraphs, or transitions. This keeps control over the argument while outsourcing micro-level phrasing improvements.
  4. Transition construction
    You use it to bridge sections cleanly. You ask for exact sentences that connect two ideas without repetition or padding, then select or refine the best candidate.
  5. Example and analogy development
    You use it to generate concrete illustrations that carry explanatory weight. You iterate until the example does real work in the argument rather than serving as decoration.
  6. Adversarial critique
    You use it to stress-test claims. You ask for objections, weak points, and alternative interpretations, then revise the text to survive those critiques.
  7. Comparative evaluation
    You present multiple versions of a section and ask for direct comparison. This forces the system to identify differences in clarity, force, and logic, helping you converge on a stronger version.
  8. Constraint enforcement
    You impose explicit rules on style and structure, then require the system to revise until those rules are satisfied. When it fails, you reject the output and restate constraints. Over time, this trains the system within the session.
  9. Iterative narrowing
    You move from broad prompts to increasingly specific ones. Early stages explore possibilities; later stages demand precision. Each round reduces variance and increases control.
  10. Final polishing under strict standards
    You use the system only after the argument is already sound. At this stage, it functions as a high-level editor, tightening language and removing residual weaknesses. If it cannot meet the required standard, you discard the output.

The control mechanism is simple: you never accept output that violates your constraints. Selection, rejection, and repeated tightening replace reliance on any single response.”

 




Discuss
