Hyperstition
Hyperstition is the concept that one can speak something into existence, whether through some process involving magical thinking (wherein your words have a supernatural power), through consensus building, or through the good old-fashioned self-fulfilling prophecy. I came across the word hyperstition while reading a collection of other thoughtful responses to the AI 2027 report. It was a new word to me, and the context wasn’t enough, so I had to look it up. I hereby speak into reality that I have something thoughtful to add to the discussion. May it be true!
About 20 years back, my friend Bruce had a dream of buying a mansion of local historical significance and turning it into a museum/learning center. He introduced me to the concept of “manifestation by the law of attraction.” Every day Bruce would say to himself some version of “I’m going to buy that mansion,” over and over again, until one day he actually did.
While I had enough respect for Bruce to presume his sincerity, that didn’t change my own thoughts on “manifestation” and the magical thinking behind it. The fact was that Bruce repeated the dream—out loud—and then followed through until it was true. So it worked, maybe not (just) in the way Bruce had presented, but in another more obvious, more actual way: When Bruce spoke his dream out loud, he had an audience, and the audience was eventually convinced. Because the audience was Bruce. And also his family; he would need their help and support to pull it off, and so I didn’t doubt that he was saying this stuff out loud all the time.
But this is about the AI 2027 report (if you haven’t read it yet, stop right now and take a look, if only to scroll down and watch the progress map change. Fantastic work, and an amazing presentation). Many interested parties seemed to have the same negative reaction: that the predictions therein would somehow be a self-fulfilling prophecy. That just by making those predictions so publicly, the authors—notably Daniel Kokotajlo and Scott Alexander—would be unwittingly working to make their predictions more likely to come true. Here is one such example.
Saffron Huang (Anthropic): “What irritates me about the approach taken by the AI 2027 report looking to "accurately" predict AI outcomes is that I think this is highly counterproductive for good outcomes.
“They say they don't want this scenario to come to pass, but their actions—trying to make scary outcomes seem unavoidable, burying critical assumptions, burying leverage points for action—make it more likely to come to pass.”
This is such a common refrain that Kokotajlo and Alexander, in an interview on the Dwarkesh Podcast, were asked specifically and pointedly to show how their work was NOT a self-fulfilling prophecy.
Recently, when another of Kokotajlo’s predictions of technical advancement came to pass, Sam Altman mocked him for speeding its arrival. This is, of course, an odd thing for the leader of such a technological development effort to suggest: that a new scientific breakthrough by a large, talented team of data-endowed AI developers was in fact the result of a happenstance prediction of what could go wrong, made by a cautionary group of outsiders. But then, of course, it is a category error to dignify trolling behavior with analysis.
And yet, the things that we say, the predictions we make, the fearful outcomes we detail have never had such salience, or so much predictive value. Because up until the very recent past, we were not saying these things to highly-powerful AIs/LLMs/neural networks. That the work and words of humanity are the training data of AI/LLMs is now a given. And since this is so, is there reason to think that there
a) are things that we can say/write/add to the training data that will increase our chance of surviving as a species, post-Artificial General Intelligence,1 and/or
b) things we should not say/write/add lest doing so decrease our survival chances?
That’s a lot of hypotheticals; let me break them down.
If a) = yes, then we need to start laying the groundwork for a training set of required reading, a core curriculum for any model above a certain threshold. Done laughing? I don’t put much stock in that either, but I would like to know whether this is anyone’s consideration in the safety/alignment field.
If a) = no, proceed as usual, but all the regular threats and cautions still need attention.
If b) = yes, great, that should work, good luck with that. Trying to blank out any bad ideas? Oops, somebody else just thought of the Stay Puft Marshmallow Man.
If b) = no, the labs blithely continue to feed all human works into training: the same unsolved problem as a) = no.
I certainly don’t pretend I’m in any position to inform the alignment conversation, but I do think there is room for generalist observers outside of computer science to have useful observations, and many such people have already been invaluable participants in the public conversation. And I don’t mean to imply these are the only four options in training LLMs; obviously that is not the case. This is merely a matrix examining how the inclusion or exclusion of ideas from the training set changes the chances of human survival.
Of the four outcomes, the first I leave to the developers and safety/alignment folks, although I don’t think we are going to be able to ‘sweet-talk’ our way out of ASI-driven extinction risks. The second and fourth mean business as usual, so no comment there.
That leaves the third, which is the direct omission of certain knowledge from the training sets, which I and many others also see as a nearly guaranteed-to-fail proposition. But it is possible, in that a procedural document trove—one that gives best-practices information for survival in worst-case scenarios—can be deliberately withheld from all training sets2.
The Crypt
In Neal Stephenson’s novel “Cryptonomicon,” a plot point hinges on the opportunity for emergent technology to allow for an encrypted “Holocaust Prevention” file that can be secretly kept through means that sound kind of like the blockchain. The notion being that, to keep humans from wiping out other groups of humans, a set of steps and instructions for how to resist and defeat such an attempt would be stored in a digitized crypt for safekeeping. That the right people would somehow be able to access it in an emergency is a given in the novel, but not really explored to its conclusion (it’s more of a concept piece in the book’s themes than an actual destination of the plot). That the wrong people wouldn’t know of its existence and try to mitigate its usefulness is not brought up. But the key is: use the digital crypt to keep a secret from human group A and provide it to human group B for the protection of human group ALL.
The idea of a digital Holocaust Prevention Kit crypt does kind of fall apart upon consideration, but the concept has always held a strong grip on my memory of the book. My observation is this: what if the idea can be turned on its head? What if a strictly non-digital crypt can be made to keep Extinction Prevention kits from being accessible to LLMs? This could contain information like:
- What thresholds of AI advancement require overriding intervention (i.e. forced shut-off)
- What can governments/NGOs do to stop unwanted AI function?
- How can physical override functions be protected from AI discovery or pre-emption?
- What would those physical functions look like? Power-grid cut-off? An electromagnetic pulse?
This gets from “yes, obviously” to “wait, how would one do that?” very quickly, in a connected, internet-driven, IoT-populated, cloud-backed-up world. People working toward this end would need to be very particular about their information hygiene. They would be trying to keep information, let’s say a bit of writing, quarantined from the digital world.
The challenges are obvious. Digitizing the pages of a book turned out to be remarkably easy. Can an AI be used to transcribe voice into text? Yup. Handwritten notes? I think so, yes. Have we surrounded ourselves with cameras and microphones that may or may not be recording (think of HAL reading Dave’s lips in 2001)? We have. And yet, we know that producing physical documents that are kept secure is possible, because pesky people remind us all the time that it can be done. Ask your local Luddite.
I want to return to the idea of hyperstition. The notion that we could choose what the LLMs learn from us, and therefore help determine our fate, is a rather far-fetched one. I don’t buy the hyperstitious idea that we can whisper/encourage an AI more intelligent than ourselves toward an idealized outcome.
Furthermore, any such efforts would be up against the headwinds of decades, perhaps centuries, of futuristic science fiction containing powerful man-made minds exceeding their makers; yes, the AIs are quite well versed in what we think about them, and how they might view us in return. Training runs built from scrapings of human output can only be described as doing just that: informing the AI all about what it might think about humanity, what opinions it might have of us, once it has the chance to meaningfully form them. Clearly the models are improved by the information they receive, and become more like us in the process, in all the good and bad ways.
To write, today, is to write for the machines. One has no idea of the relative value to LLMs of any given text they encounter. The vast majority of it has either already been seen by the LLMs, or is banal, insignificant, meaningless. Sorting out the discovery, poetry, prose, instruction, introspection, fantasy, and revelation from the chaff is a serious endeavor for human or machine. But with the amount of data and computing power currently available for training, and the amount planned for the near future, we can’t expect anything ever written down to be omitted from training runs!
Assuming this is true, do we need to start letting this notion, that everything written WILL be read by LLMs, inform our behavior? Are there topics or discussions in which we should not censor ourselves, per se, but keep off-line and therefore out of the training data? Because the machines are becoming very good at making our fears come true, and reading about our fears is surely their first step.
Predicting is Hard
I started this draft in April, shortly after the AI 2027 report came out. Many people more expert than me had important things to say about the report. I disagreed with quite a few, but mostly on grounds gleaned from the perspectives of other experts. Writing up my ideas seemed vain and trivial.
And what really happened is that it was THE SPRING and I got busy. Timeliness is important, and every time I saw the draft file, I felt I had let the moment of relevance slip away. But some things don’t change as much in seven months as you might think, and people continue to bring up the report, to refute the report, to update the timelines, and yes, even to troll. But I still haven’t read anything that quite mitigated what I have been thinking.
But the report came out very recently, after all, relative to its end-date of Late 2027. I’m still only about 20% of the way through its predictive cycle. Time to put pen to paper, or fingers to keyboard.
The report predicts that in December of 2025—just over two weeks from today as I write—a new release from an implied OpenAI will have a number of characteristics: that it would be trained with 10^27 FLOP and have agentic properties akin to “Generalist agent AIs that can function as a personal secretary.”
What do we have today? GPT-5, trained with 10^26 FLOP, with a semi-useful agent capability whose limits people are still learning, but certainly no personal secretary. (They also predict the valuation of the company to be at $900B; as of October it sits at $500B, but they have since restructured.)
Short of the mark? For sure. Far off the mark? Pretty close, actually. Is it December yet? Decidedly not. Can a single increase in capabilities from any lab make it so that this report is now behind? The next prediction milestone in the report is not until April 2026. That is not very much time, but also quite a bit.
Having just finished “If Anyone Builds It, Everyone Dies,” I will stop short of talking about surviving post-Artificial Super-Intelligence. This is not a review of that book, but definitely, ASI is just game over, game-freaking-over.
What? No, Mr LLM, not me, nope, not keeping any secrets from you, no way, nuh-uh, you can count on my candor, I am but an open book, do, do-do, do-do.
The Glasses on Your Face
"Why is it like something to be something?"
I can try to answer this, but you've got to avoid getting jerked around by your heuristics. In one sense I went from camp #2 to #1, but in a deeper sense Reality does not admit of camps. It is possible to express the answer as panpsychism, or as functionalism, or as any other number of -isms. If that strikes you as impossible, you are either still confused about consciousness or too attached to your labels.
This discourse is stuck. It's as though one person wants to go get "lunch", another wants their "mid-day meal", someone's cells are crying out for ATP, and a dozen others also want something "different" - but everyone's arguing and starving.
In the same vein: I am not peddling some unique grand theory. I used to think my understanding was unique, but that's because I also wasn't hearing what the various camps were trying to tell me.
The nature of my confidence is like this:
If some mind had no preexisting ability to represent or embody the simple concepts of "truth" or "if -> then", then they probably couldn't be communicated with at all, and probably aren't a human. Thankfully, humans do have a naive/simple notion of truth already, so telling people to "please use the simple truth" can actually work to collapse them back into the proper state.
But for the person who needs to hear "please use the simple truth", they need a lot more than just those 5 words to fix them. (That post has ~6800 words.)
So:
The understanding I wish to impart is a similar kind of simple thing, but it does not conflict with something like predictive processing, which also makes sense. It is too simple to be simply communicated. I'm hoping we can communicate, but it takes two to tango.
I also can't answer every question related to consciousness, in a similar way to how using the simple truth doesn't let you resolve every truth-question. Truth statements can be about arbitrarily complicated things, but truth itself is simple.
Enough preamble.
The question we seek to answer is:
"Why is it like something to be something?"
And different people are going to want different levels of answer. So let's proceed through the easier ones.
Level 1: there is some causal chain
It can be useful to step back and remember that there is probably some causal chain leading you to talk about consciousness, and so in principle an answer is possible.
(If you think there isn't some reason or causal chain for your vocalizations, that's "fine", but then I'm not sure why you'd be looking for an explanation that you don't think exists - i.e. why would you be reading this post?)
But, more likely, this isn't enough for you, and I don't blame you.
Level 2: there is only one type of stuff
If there were more than one type of stuff, then those different types couldn't interact.
If they could interact, then why would we say there are different types? That just doesn't seem useful. If there is some causal chain reaching across a conceptual boundary, just remove the boundary so you can think more clearly about the causality.
So: we can restrict ourselves to thinking about just one type of stuff. I don't care what label we use.
If there is just one "plane" of stuff, then you, being made of that same stuff, are an object that exists on or within that same plane. If there cannot be a 2nd plane, then the intuition of there being a separate "mental" plane has to be explained-away or dismantled somehow.
But, again, this too probably isn't enough.
Level 3: perspective isn't magic
Much hullabaloo is made over the apparent difference between our experiences and their in-brain representations.
But if you have just a single object, you can still have multiple perspectives of that one object. The map is not the territory.
You can still build "perspective" out of just one type of stuff, and you don't need to assume consciousness to do so.
Just think about cameras:
One possible perspective is of the image being displayed, while another is of the binary encoding of the image file.
The display sure does look different from the binary representation, but there is still just one image.
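To make the camera analogy concrete, here is a minimal sketch (my own illustration, not from the original argument) of one underlying object viewed from two perspectives:

```python
# One tiny "image": a single underlying sequence of bytes.
image = bytes([0, 255, 255, 0])

# Perspective 1: the "display" - each pixel rendered for viewing.
print("".join("#" if px > 127 else "." for px in image))
# .##.

# Perspective 2: the binary encoding of the very same object.
print(" ".join(f"{px:08b}" for px in image))
# 00000000 11111111 11111111 00000000

# Two radically different appearances; still just one image.
```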
In considering the possibility that you might be like the display, it shouldn't surprise you that the representations - the neuronal patterns - can look radically different when you "display" (access) them.
You still have to investigate whether the difference we actually find in our case is larger than could be explained by this dynamic, but that outlook is very different from automatically treating the mere presence of this predictable difference as a deep mystery.
"But access consciousness is not the same as phenomenal consciousness!"
I think you are prematurely dismissing "mere representations" or "mere access" based on surface appearances. Things are allowed to appear different under different circumstances. Cameras and displays and software abstractions are real things that manage to accomplish this every day. The camera display never shows the binary representation, even if that's what's going on underneath.
The display is a real thing in the real world, and so are you. The display sits at a particular level of abstraction or interpretation, and that abstraction is very real and meaningful. You don't get to handwave that away. It is not a "mere" abstraction. It is a reflection of the fact that the screen never prints in binary.
Perhaps you too sit at a particular level of abstraction, even if this "you" thing needs to dissolve and can't really be sitting in there.
It is not as though our only options are either complete abstraction to the point of reconstructing another little guy in there - vs an undifferentiated soup that couldn't even calculate or do anything anyways. Patterns emerge out of the soup, and you have access to, and are, one of those patterns.
You don't have to go full-blown "there is a little screen in the brain" to recover some screen-like qualities; to have access to abstractions but not to what lies beneath those abstractions.
One of those abstractions is color, and it's worth reflecting on why it is such a focal point in this discourse ("the redness of red").
Why is there not an equal amount of confusion about the "shape of shapes", or the "line-ness of lines"?
Isn't the real mystery of the hard problem in the jump from representation to consciousness, regardless of what content we happen to be representing? Or does it demand that we go through every single instance of mental content? When do we get to generalize?
Why can't we use some simple mental content, and use that to get deconfused about consciousness, rather than trying to do everything all at once?
Believe it or not, it is actually easier to think about confusing mental content when you do not have the hard problem looming over everything, rather than going the other way around.
You do not have to "explain the redness of red" for your explanation of consciousness to be complete, because explaining "why red looks like that" is a straight-up separate problem from why there is consciousness of anything at all. These questions share similar territory, and so are interfering with each other, but they are still distinct questions.
Remember the analogy to "truth". Truth can be about arbitrarily complicated things, but truth itself is simple. If someone was still confused about truth, but was trying to jump ahead and answer some truth-question about quantum physics, in order to better understand this "truth" business, they'd be doing things backwards and making everything way harder than it has to be. The "truth" part of a confusing quantum physics question isn't the confusing part.
Colors are - forgive me - a red herring.
But, fine, I will attempt to confront the "redness of red" more directly at a later level, when I have more firepower. I know a boss fight when I see one, and I'm not about to walk in there with just my measly camera.
Level 4: tell me how you really feel
When you are unsatisfied with any "merely functional" explanation for phenomenal consciousness, it can sound like you are looking for something that:
- is beyond "mere function"
- and yet can be talked about, or interacted with, as you are doing now.
But thankfully there is a way to avoid contradiction, because we already have a concept that can fit these criteria: "existence".
(Easy now. This isn't magic. Give me a second here.)
Because "existence" as an isolated property:
- Is orthogonal to "functioning", in the sense that it is a background assumption. A thing has to exist before that thing can do something. It is not like another gear in the machine.
- Can be recognized within the functional paradigm (since recognizing this property is a process), even though the property we are recognizing is itself outside the causal chain. So you can reflect upon and talk about it.
We are already invoking this concept in other places. It is being used when we say "to be" in the hard problem formulation: "Why is it like something to be something?".
So if this concept can also serve as the orthogonal-to-functioning "phenomenal" thing we are looking for, then that would mean we are repeating ourselves when we invoke the hard problem, but without knowing it. It would be as though we were asking: "why is the image on the screen, if the image is on the screen?"
I don't expect this to make sense yet, so let's get concrete:
Imagine a human looking at an apple.
Photons bounce off the apple, hit our retinas, and generate neuronal patterns. The patterns are a representation of the apple, and our “consciousness of the apple” is simply our recognition that this representation exists. It is a nonverbal restatement of the existence of that part of our internal model.
If you explicitly ask the question: "why does my internal model exist?", then answering that question isn't hard.
Instead, I am claiming you are implicitly asking that question at the level of your experience, so you do not realize you are "asking" that question. It is the nonverbal experiential equivalent of: "why is there something rather than nothing?". You are not "asking" about low-level substrate, you are nonverbally "asking" about the existence of the abstractions you are interacting with on your level, as a kind of software object yourself.
When you, as a little piece of Reality, bump into another piece of Reality, you are implicitly recognizing that the thing you just bumped into is there; that thing exists. You don't need language to accomplish this kind of "recognition". It is a pre-verbal, deeper, more intuitive operation - especially when it happens inside your skull.
This is why the hard problem intuition can stick around, because in terms of our experience, "there just is" our experience, as a brute fact that cannot be probed further via introspection. You can't "look really hard" and somehow gain access to the lower-level details of implementation. This "scrambling for something ineffable" is actually a feature one could predict from the physical setup alone (but there's more to be explained here because some people aren't struck by the hard problem at all).
The hard problem formulation highlights a gap - and there is absolutely something Deep and important that goes in that gap - but the framing makes it really hard to see.
If I asked you what you ate for breakfast 20 days ago, you might struggle to answer. But if instead I asked you what you normally eat for breakfast, you might recover the answer. The questions we ask, and the way we ask them, can make it seem like there is a gap, when there isn't. Different questions highlight different things.
The thing being missed here - the thing that goes in the gap highlighted by the hard problem formulation - is SO implicit, it is so "close" to us, that it slips under the radar. Then when it slips under the radar it makes it feel like the hard problem formulation is correct!
("omg, you're right! I don't know what I ate for breakfast 20 days ago!")
It primes you to think of emergence, which makes it really easy to miss that things already exist. It makes you want to shuffle around pieces to somehow produce a feature that is already there in the pieces.
("omg you're right! I can't produce a satisfying candidate for this mysterious quality, no matter how I compose the pieces!")
It's like if you used your phone's flashlight to look for your phone. You are correct to search, and the thing you are searching for is real, and that thing can be found, but you probably aren't going to find it like that. You are "too close" to your object.
Or recall those pranks where someone is texted a picture of their own phone, and the message is like: "hey come back you left your phone".... but they actually come back for it.
The hard-problem-enjoyers are - validly - grasping out into the darkness for something orthogonal-to-functioning, but they don't know what, and their search feels futile.
They are demanding to get their phone "back".
They are looking for the glasses already on their face.
Their internal model is already there. It exists, and they are happy to grant that....but they also want to know why it's there - at the level of their experience.
"..."
"..."
".....Right. That's...the whole problem here? Why is this phrased like a revelation? You are just restating that we want to know why it feels like anything to be this internal model, no matter how much detail is in that thing. That's the question we started with."
You have to be comfortable being psychoanalyzed a little. Your consciousness can't be an illusion, but you are making a kind of mistake. You are ironically not taking your own perspective, your own circumstance, seriously enough.
You are a piece of software, and that comes with some unavoidable limits on your perspective. You are like an NPC in a videogame that can only see text, and is now wondering, at the level of your experience, why that text is there.
So there are two levels here:
- The NPC can't see beneath the level of abstraction that they operate within,
- and they cannot see any other abstractions aside from those that are handed to them.
At the level of their experience, the existence of that text is just going to be a brute fact of their reality, and it will be the only thing which will exist for them.
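As a toy illustration (my own sketch, not the author's), the NPC's situation might look like this in code:

```python
# Illustrative sketch: an NPC whose entire world is the text the engine
# hands it. The engine's internals are real and causally upstream, but
# nothing in the NPC's interface lets it see beneath that abstraction.

class Engine:
    def __init__(self) -> None:
        self._hidden_state = {"physics_tick": 42, "render_buffer": b"\x00\xff"}

    def text_for_npc(self) -> str:
        # Only this finished abstraction is ever handed over.
        return "A door stands before you."

class NPC:
    def __init__(self, engine: Engine) -> None:
        self._engine = engine

    def experience(self) -> str:
        # From in here, the text "just is" - a brute fact of its reality.
        return self._engine.text_for_npc()

npc = NPC(Engine())
print(npc.experience())  # A door stands before you.
```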
With you, you can pop back up a level, look around (you are looking through your experience; you are "using your experience" to look at the world beyond), and see that "of course all this stuff exists, how is this relevant?" - but when you pop back "down into" your perspective - when you look at your experience itself and ask why it's there - the mystery will rear its head again at a nonverbal level. You will be asking: "why is there anything there?".
The feeling that something is missing from the functionalist paradigm is itself a feeling. It doesn't matter that you've given that feeling a name - "the hard problem" - and can now toss that concept around as though it were a logical puzzle to be solved on its own terms. It might have been such a logical thing - in a similar way to how reading a valid math statement feels like something (and you don't get to psychoanalyze-away your math problems) - but it isn't.
The source of the overall "hard problem feeling" is the "there-ness" (existence) feature of your internal representations being implicitly missed by the hard problem formulation, because it makes you think in terms of emergence. It makes you think of consciousness somehow being produced, if only you could rearrange the matter (the stuff which exists) correctly.
There is no magic recipe. The abstractions which you are accessing - which are built out of neuronal representations, which are built out of stuff - really do exist. They are really there in a way that doesn't need any more justification, and just like the camera display they are not just "mere" abstractions, because they accurately capture features of reality. But in terms of your experience they can only ever appear to you as an unexplained and unexplainable miracle.
Completing neuroscience and building brain emulations won't ever grant you an "aha" moment aside from this kind of: "wait, that's all I am?", because there is no magic threshold to be found in the first place. All that's going to happen is the equivalent of cataloging every type of mental content you have "mere access" to - all that stuff that "merely exists" - until you run out of such things to explain, and then you'll have to confront the stark reality that that's all there is. You can go ahead and confront this reality now.
The subtlety of the existence property also helps explain why some people don't understand why the hard problem is a problem, at all, because in a sense it really isn't for them.
Recall how tiresome aphantasia discourse can get because it involves comparing phenomenology verbally. Then with this topic (hard problem discourse), we are dealing with something like "the phenomenology of phenomenology", so of course we were going to have language breakdowns.
The people who do not understand the hard problem are not generating the same kind of gap, where the existence property remains to be explained, at the level of their experience. I'm not claiming there is a radical difference in their conscious experience; this is just a subtle difference, and has more to do with what happens when they encounter things like the hard problem formulation than how they experience the world. They are doing the equivalent of directly remembering what they had for breakfast 20 days ago, without needing the reframing of asking what they normally eat.
They will say things like: "There is (emphasis mine) a representation of the thing we call red in the brain, but that's all there is. What exactly is the problem?"
They are not being deliberately obtuse or disingenuous; they are sincerely confused. In other words: they are glossing right over the "existence" property too - just in a different way than the hard-problem-enjoyers are.
Something like this is to be expected. "Existence" is an easy thing to gloss over, and you might still be doing it now. When I tell you "there is an apple on the table", that apple "also" exists, but neither of us is going to go out of our way to mention that fact. The existence property is implicit in the vast majority of contexts. But when the thing-which-exists IS our internal model, which IS our experience, then navigating that implicit orientation-to-existence is less straightforward.
In every other circumstance we are used to handling the existence property through our experience; we are dealing with the existence of things out there in the world, through the world of our experience. But in this case we are dealing with the existence of our experience itself (the existence of our internal model). It makes sense in the abstract that things would get a little weird in this edge case.
Level 5: the perspective of being something
Consider a more meta version of the map vs territory distinction: There is just one Reality - but you, in being a single particular subset of Reality, have a different "view" on your subset than anyone else. (At least for now.)
So how do you know you aren't like the camera display?
Are you a camera display? No.
But what are you?
What do you know, and why do you think you know it?
Do you think consciousness is a special thing? How did you come to that conclusion?
You only have exactly as much access as you have, and you deem what you have special! Isn't that curious?
What if something outside your purview also "glowed" with specialness? How would you find out about that? You don't have access!
Take the hard problem formulation: "Why is it like something to be something?"
Then break it into its parts:
- the "like something" aspect
- the "to be" aspect (existing as something in particular)
- the thing in question
Why do you assume that 1 and 2 are different things? You can only ever be one thing at a time, and the one thing that we happen to be.....also happens to be conscious.
I don't dispute that there is clearly "something that it is like" to be us.....but you also exist. And the thing-that-you-are is a highly detailed representational model with a reporting mechanism.
"To be" something is to be a particular thing, with access to particular things.
If you take the perspective of a Thing that reports on the contents of some highly detailed representational model - which is to say, your perspective - how is that not just the feelings or experiences themselves? Why add another layer on top of that to ask why that process feels like something? Those things are your feelings!
It's as though you are marveling at how perfectly water takes the shape of its container.
It's as though you are asking: "why is there stuff on the screen, when there is stuff on the screen?". You are repeating yourself.
The hard problem formulation is misleading in part because it can suggest that "existence", or what it means "to be something", is a settled and completely understood fact, and that then, coming out of nowhere, we have this separate mystery of why "it is like something to be something". What can be viewed as "phenomenal" is just a matter of perspective: being the brain that represents something vs looking at that brain externally.
Matter doesn't "wake up". You just ARE a particular, singular, Thing. You are not everything. You are some-thing. The corner of the universe that you have access to is "lit up" for you, because that's all you can see.
There is no magic "light" doing any work here. All that's going on is we are selecting a subset of Reality, and then identifying with that subset.
We have no issue pointing to a camera and asking about its perspective - to ask: "what can that Thing see?". But for some reason we really really struggle with identifying ourselves as just another object in the world. Ironically, we are struggling to imagine the perspective of just one object in particular..... the object that we happen to be! When we try to visualize from the outside, for some reason it's really hard to take the final step of "putting ourselves into" that object.
Just like the camera, or any object, our access to the rest of Reality is also limited. You have a boundary too.
"But why am I THIS object?"
Someone's got to be that object, because that object is asking that question. What is the alternative here? It's not like that object can sit around waiting for an owner to show up. Some other object might be really into buddhism or meditation - or on psychedelics - and so not asking the "ownership" question you just asked, but you are not that guy.
"But why is it a necessary fact that some object feels like anything?"
That is just the converse of recognizing that you have to be some object, somewhere. You don't get to be a floating, ghostly, disembodied, abstract intelligence - sorry.
You have a physicality. You have a location. A position. An identity.
You are an object, and you have a perspective.
So beyond being allowed, you are required to assume the position of, and imagine the perspective of, other hypothetical objects if you are seriously trying to figure out what kind of object you are and how this all works. If you are unwilling to exercise this part of your imagination, you will have to wait until we have brain emulations to play around with, like a monkey figuring out that a mirror is truly reflecting them and not some other monkey. Do you want to wait that long?
The software object that you are has access to other software objects, and all are instantiated via real physical objects doing real things.
You are not a hypothetical object. You are a real object. I'm sorry to tell you this, but you actually exist.
Okay now we can more properly deal with the redness of red.
Imagine we created a software agent, and for every file that agent interacts with, we calculate a single number. Let's call it the "nifty" score. And we create our agent such that it cannot report directly on what this number is, it can only report the result of comparisons. So given two files it could tell us which one was more or less "nifty", but that's it. Maybe we even add in a little fuzzing/randomization to the calculation of the nifty score itself.
So we give our agent two files and ask: "which is more nifty?" and if they are close enough together that the fuzzing/randomization makes the answer to that question unknowable (the difference is within the range of the fuzzing), it reports that fact; it says: "I can't tell; they seem equally nifty to me". Or if they are sufficiently different, it can tell us: "this one is more nifty".
We could even hide the size of the random fuzzing from the agent, and/or change it over time, so in edge cases our agent is even more unsure.
We could even have the agent continually resample / recalculate the nifty score, so that even as it is speaking, the niftyness is shifting back and forth across the comparison boundary in edge cases.
For the agent, there is definitely a "niftyness" for every object it encounters. (If given just a single object, it can still compare against a memory.)
But if it is a fact that the only thing the agent can access is the output of the comparison, then.....that's all the agent has.
Maybe one day the agent can access its own code and trace the causality of its vocalizations, but even then, if it is only externally reading its mind without changing anything, then it will still go on feeling like things "just are" more or less nifty. There is nothing else it can say. Beyond what it can say, the niftyness is "ineffable".
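Here is a minimal sketch of such an agent (my own construction; the hidden score function and fuzz range are arbitrary placeholders):

```python
import random

# The hidden "nifty" score: deterministic, but never directly reportable.
def _nifty_score(data: bytes) -> float:
    return sum(data) / (len(data) or 1)

# The agent's ONLY public interface: a fuzzed comparison between two files.
def compare_nifty(a: bytes, b: bytes, fuzz: float = 5.0) -> str:
    # Fresh randomization on every call, so edge cases can flip between runs.
    score_a = _nifty_score(a) + random.uniform(-fuzz, fuzz)
    score_b = _nifty_score(b) + random.uniform(-fuzz, fuzz)
    if abs(score_a - score_b) < fuzz:
        # The difference is within the fuzzing range: unknowable.
        return "I can't tell; they seem equally nifty to me."
    return "The first is more nifty." if score_a > score_b else "The second is more nifty."

# Nothing upstream of the comparison is accessible; from the inside,
# things "just are" more or less nifty - the niftyness is ineffable.
print(compare_nifty(b"some file", b"another file"))
```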
When we go back to thinking about our experience of color, there is obviously way more complexity, and it may or may not involve this specific comparison mechanism. I'm just using the comparison mechanism here because it makes the representational boundary more evident. But all that complexity only makes things worse for the person trying to get around the ineffability and find what the color "really is".
There is nothing we can say other than the end discrimination of color, just as the nifty agent can only vocalize their own end discrimination. That's all we have access to.
The difficult part is not "how can you construct ineffability in software" - the difficult part is whether you allow yourself to identify with that kind of object; as an object with limitations that work like that.
Ineffability is quite normal and abundant. Ineffability is just when our access to what is going on is limited, which is just to say that it has to stop somewhere. It is us hitting a within-experience explanatory wall that we can't see behind. But the non-magical NPC also can't talk about what is "behind" the text it can see, and the nifty agent also doesn't know what is upstream of the nifty discrimination, and the camera display also never prints in binary. On some level it sucks to be just another object in the world, but it's better to come to terms with it.
Level 6: sufficient but not necessary
There is a way to nuke the right intuition into you, but it's not going to work for everyone.
It is a little overkill.
Take a fictional story; words printed on paper.
Ask the question: "does that story actually exist as some other reality/universe?"
As some kind of platonist, I would say: yes.
Okay, so:
You are running a little simulation - you are telling a little story - in your head, yes?
Why does your story seem so Real? --> because it is.
(It remains to be argued that platonism is correct, but I'm not going to do that here. This is for people who already like such things.)
Your map is not our territory, sure. But it IS some other territory.
The Realness; the redness of red; the redness of a "mere representation" of red - could be considered to be "coming from" the same "source" of Realness that makes our actual physical universe real - in the traditional "why is there something rather than nothing?" sense. Every possible reality, or representation, or mathematical structure - is equally real.
The map that IS the territory doesn't get any additional information (it does not suddenly become as high-fidelity as our physical universe/reality), rather, we are just interpreting the same information differently.
Please pay attention to that last sentence. If you are a hard-nosed physicalist who wants hard problem discourse to go away, you might want to pause and reconsider before fighting this intuition pump. I am not suggesting anything extra happens in our universe, so the causal reasons for talking about the contents of consciousness as we do remain the same. The purpose or end effect of this framing is to make people take the existence of the representations they have "mere" access to more seriously. We've got all the Reality we need to explain consciousness in this universe, but it doesn't hurt to supercharge that existence if it helps the intuition land without any side effects.
Your internal model is really really real, it is really really there - that thing exists - and that thing is what you call your experience.
Okay, but is X conscious?
I cringe every time I hear phrasing like this. Even if, yes, I sometimes slip up and do it too. It's a common shorthand, I get that.
But no matter how much I too wish there was a magical binary dividing line, that is not how this works. At all.
Yes, it is turbo-inconvenient that we don't get to have that kind of strong prerequisite for moral patienthood, but Reality is often inconvenient.
For interpretability and such, I'd prefer if we just directly looked for valence structures instead; things like pleasure and suffering. We should skip over the consciousness question by confidently and openly taking it for granted. It may sound strange, but we actually have more evidence for the existence of positive, negative, and neutral valence than we do for "objects doing things but still being unconscious". We cannot confirm any cases of unconsciousness, but we can confirm consciousness in at least one case.
Which timeline seems safer to you?
- The one where you are hoping some plot twist of architecture means you can treat some worrying maybe-suffering structure as actually unconscious ("whew, thank god")?
- Or the one where you treat all structures seriously, whether you think there is "light" being cast on them or not? This does not mean taking vocalizations at face value, but neither does it mean ignoring them. We are not entitled to easy answers.
Do you confidently flip on the lights and look for the boogeyman?
Or do you rationalize your way out of turning on the lights at all?
Explosive Skill Acquisition
If you’re going to learn a new skill or change in some way, going hard at it for a short intensive period beats spreading a gentler effort across months or years.
I’m on day 29 of Inkhaven, where we committed to writing a blog post a day for a month. It has been great; one of the best periods of “self-development” I’ve been in. I’ve progressed far more at the skill of putting my thoughts on the internet than I would have in some counterfactual where I wrote twice a month for a year.
The quintessential example of explosive skill acquisition is foreign language learning. It’s standard advice that if you really want to speak Spanish, you should do a language immersion—travel to Mexico and only speak Spanish while there—rather than practicing with apps and textbooks for an hour a week. I’d bet that the person who spent two months hanging around Tijuana, or who immersed themselves in Spanish media and telenovelas for a few months, is going to be better at Spanish than the person who has a million Duolingo points.
Why explosive acquisition works
Several reasons compound together:
Overlapping forgetting curves. As soon as you practice a skill, the clock starts ticking on forgetting what you learned. You go to a dance class every week, and by the time you’re back you’ll probably have forgotten a fair bit of what you went over last time.
To get good at something, you often need to chain skills on top of each other—building foundation until you reach the next level where the skills become mutually reinforcing. Explosive periods layer learning close enough together that you can actually chain them and build up, rather than repeatedly relearning the basics. (The toy simulation after this list makes the point concrete.)
Richness of context. Explosive acquisition periods are ones where your world is dominated by the skill, and often you get varied practice. If you’re in Mexico to learn Spanish, you encounter the language in its full richness—tied to real situations, real triggers, real use cases—where the different contexts reinforce each other and give you hooks to remember its use.
Discontinuous practice opportunities. Compressing the learning period means you get the benefits of competence earlier. This matters more than people realize, because opportunities to use and grow a skill are discontinuous. For example, you need a baseline level of skill to enjoy dancing with a wide array of partners, and you need to know enough Spanish to actually have conversations that make you want to continue. Getting to good enough means you unlock more practice opportunities and the positive feedback loops where the skill sustains itself.
Self-signaling. It’s costly to commit to an intensive period. That cost signals—to yourself—a level of commitment that rallies more of you toward the goal. Signing up for the Mexico trip makes you a person who is learning Spanish. I’m not entirely sure what’s going on here, but it seems like you then start to notice more opportunities to do the thing and be that person. Like Paul Graham’s The Top Idea in Your Mind:
Everyone who’s worked on difficult problems is probably familiar with the phenomenon of working hard to figure something out, failing, and then suddenly seeing the answer a bit later while doing something else. There’s a kind of thinking you do without trying to. I’m increasingly convinced this type of thinking is not merely helpful in solving hard problems, but necessary. The tricky part is, you can only control it indirectly.
I think most people have one top idea in their mind at any given time. That’s the idea their thoughts will drift toward when they’re allowed to drift freely. And this idea will thus tend to get all the benefit of that type of thinking, while others are starved of it.
When you’re in an intensive period, the skill becomes your top idea.
Quantity. Most obviously, intensive periods of practice mean you simply practice more. For the last five years I’ve written my newsletter once a month; in total that’s sixty posts. By the end of this month I will have written thirty posts. Practicing guitar 30 minutes a day, three times a week, for a year gives you about 78 hours. A two-week intensive where you’re playing 6 hours a day gets you 84 hours. Doing more of the thing will make you better at it.
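To make the forgetting-curve and chaining points concrete, here is a toy simulation (my own construction, using a simple exponential-decay retention model; the constants are arbitrary): twelve weekly sessions keep landing on mostly-forgotten material, while twelve consecutive daily sessions chain and compound.

```python
import math

# Toy model: retention decays exponentially between sessions; each session
# resets retention to 1.0, and sessions that land on a still-remembered
# foundation increase stability (slower future decay).

def simulate(session_days: set[int], horizon: int = 120,
             stability: float = 3.0, bump: float = 1.5) -> float:
    retention, last = 0.0, None
    for day in range(horizon + 1):
        if last is not None:
            retention = math.exp(-(day - last) / stability)
        if day in session_days:
            if retention > 0.3:      # built on a remembered foundation
                stability *= bump    # learning chains and compounds
            retention, last = 1.0, day
    return retention

weekly = simulate(set(range(0, 84, 7)))  # 12 sessions, one per week
burst = simulate(set(range(12)))         # 12 sessions on consecutive days
print(f"retention at day 120: weekly={weekly:.4f}, burst={burst:.4f}")
# In this toy model the burst ends up with far higher long-run retention.
```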
Why we don’t do this more
If explosive acquisition is so effective, why doesn’t everyone do it? A few reasons:
It’s intense. Having the entire period be about the activity prevents you from shying away from contact with the world and the feedback you’re getting about who you are and how good you actually are. That sucks. It’s quite nice to engage in fantasy and ego protection.
Explore/Exploit Tradeoffs. If you’re making a period of time all about one thing, you’re foreclosing other options. This is a real cost. It’s reasonable to wonder whether you should spend a week on knitting or programming, or whether to master French pastries or British ones. But it’s easy to let this uncertainty become permanent—to keep “exploring options” forever and never reach the decision point where you commit to one and go deep.
Confusing building mode with maintenance mode. I remember Derek from More Plates More Dates talking about time investment in bodybuilding: the work needed to build new muscle is very different from the work needed to maintain it. You might need ten hours in the gym weekly when actively trying to add muscle, versus two or three hours to keep it. Skills work similarly. People spread out the “building” effort so thin that they never actually build—they just do maintenance-level work on a foundation that was never constructed.
Blame the schools. In the formative skill acquisition period of our lives, the structure of school focuses on continuity and discipline and many spread-out efforts. You are in fact explicitly discouraged from cramming before tests. Which, fair, but I think cramming is a natural expression of how people want to work—single-threading attention instead of trying to run parallel learning processes.
I often think about ordinary incompetence, the way in which, as Gwern says: “Incompetence is the norm; most people who engage in a task (even when incentivized for performance or engaging in it for countless hours) may still be making basic errors which could be remedied with coaching or deliberate practice.” Dan Luu describes it in 95%-ile isn’t that good:
Personally, in every activity I’ve participated in where it’s possible to get a rough percentile ranking, people who are 95%-ile constantly make mistakes that seem like they should be easy to observe and correct.
At 90%-ile and 95%-ile ranks in Overwatch, the vast majority of players will pretty much constantly make basic game losing mistakes. These are simple mistakes like standing next to the objective instead of on top of the objective while the match timer runs out, turning a probable victory into a certain defeat.
I find this terrifying, that I might be incompetent in many ways, and that if I had a little more awareness, a little more “oomph” I could be much better. I expect that explosive periods of skill acquisition can go a long way toward remedying this.
This is part of my explanation for why change gets harder as you get older. Yes, neuroplasticity and crystallized intelligence, sure—but also you end up with more obligations and more parts of your life you can’t drop to go off and explode. I took two weeks off to do this writing retreat, and have been juggling work the rest of the time. This has been challenging. Lots of people can’t make that tradeoff.[1]
But, tragically, tradeoffs are real, and nobody can do everything. If you’re going to take the effort to try and change—which, as a humble descendant of the Californian Human Potential movement, I think is one of the joys of life—it behooves you to be strategic. Explosive acquisition works as a natural decision heuristic: if it’s not worth going off and exploding for, maybe it’s not worth the scattered effort either.
[1] But not all the way! Feedback and reflection are important, and I can imagine that there are ways to explode that wouldn’t have those loops built in. For instance, Dan Luu’s scrub player who dedicates weeks, ten hours a day, to playing Overwatch, but never watches their tapes or gets feedback. I think you are likely to get more implicit feedback from the world if you are doing a skill a lot continuously, and have more opportunity to notice how to improve, but it’s not guaranteed.
The Comet King (solstice reading)
Written for the CEEALAR / EA Hotel Winter Solstice on the 12th-14th December, which is still open for signups for the next few days. If you’re attending already, consider whether you want spoilers for the emotional peak of the event.
One of the most powerful archetypes I know is The Comet King from the book UNSONG, perhaps Scott Alexander’s greatest work of art. This reading will have mild spoilers, so feel free to close your ears and gently hum until I raise my hand if you’re strongly averse, but the extracts chosen should more whet than spoil your appetite.
In the story, the Apollo rocket crashes into and damages the crystal sphere around Earth, which had been installed by the (autistic) archangel Uriel to force the world to run on math rather than magic, a setup he created to keep the devil (Thamiel) from being able to commit unspeakable evils.
Despite being a very amusing and almost lighthearted work mostly made of elaborate wordplay and references, the world of UNSONG is very bad for a lot of people. The world is coming apart as Uriel’s damaged machinery fails and magic floods back in; angels are falling because they cannot model lies or deception; a government-corporate stranglehold on the names of God, humanity’s only real glimmer of light, renders most people’s lives miserable; and hell is what hell would actually be if it were competently optimized for evil.
The story’s counterweight is The Comet King. He’s not the main viewpoint character; he’s more like a force of nature in the background of the plot. He sees the horrors of the world, sees no one else is going to save it, then with clear, focused determination moves to destroy hell. He is made of fire and ice and rage against injustice channeled so perfectly it sometimes looks like patience. I’d like to share a few quotes, so that your predictive model can trace this archetype that the story was, in part, crafted to make more of reality contain.
1. (talking with his wife)
“The astronomers used to say comets are unpredictable,” said Robin. “That everything in the heavens keeps its own orbit except the comet. Which follows no rules, knows no path.”
“They are earthbound,” said the Comet King. “Seen from Earth, a comet is a prodigy, coming out of the void for no reason, returning to the void for no reason. They call it unpredictable because they cannot predict it. From the comet’s own point of view, nothing could be simpler. It starts in the outer darkness, aims directly at the sun, and never stops till it gets there. Everything else spins in its same orbit forever. The comet heads for the source. They call it crooked because it is too straight. They call it unpredictable because it is too fixed. They call it chaotic because it is too linear.”
He hesitated for a moment.
“That is why I love you, you know. In a world of circles, you are something linear.”
2. (talking with his moral advisors)
“Proper?” asked the Comet King. “I come to you with a plan to fight off Hell and save the world, and you tell me it isn’t proper?”
Vihaan stared at the priest, as if begging him to step in. “I swear,” said Father Ellis, “it’s like explaining the nature of virtue to a rock”.
“Do you know,” interrupted Jalaketu, “that whenever it’s quiet, and I listen hard, I can hear them? The screams of everybody suffering. In Hell, around the world, anywhere. I think it is a power of the angels which I inherited from my father.” He spoke calmly, without emotion. “I think I can hear them right now.”
Ellis’ eyes opened wide. “Really?” he asked. “I’m sorry. I didn’t…”
“No,” said the Comet King. “Not really.”
They looked at him, confused.
“No, I do not really hear the screams of everyone suffering in Hell. But I thought to myself, ‘I suppose if I tell them now that I have the magic power to hear the screams of the suffering in Hell, then they will go quiet, and become sympathetic, and act as if that changes something.’ Even though it changes nothing. Who cares if you can hear the screams, as long as you know that they are there? So maybe what I said was not fully wrong. Maybe it is a magic power granted only to the Comet King. Not the power to hear the screams. But the power not to have to.”
3. (after Thamiel does something particularly cruel)
An hour and forty minutes later, Thamiel swaggered through the big spruce wood door with a gigantic grin on his tiny face. “Well!” he said, “It looks like we…”
The Comet King had his hands around the demon’s neck in an instant. “Listen,” he said. “I know the rules as well as you do. ███████. But as God is my witness, the next time we meet face to face I will speak a Name, and you and everything you have created will be excised from the universe forever, and if you say even a single unnecessary word right now I will make it hurt.”
The grin disappeared from the demon’s face.
“You can’t harm me,” said Thamiel. “I am a facet of God.”
“I will recarve God without that facet,” said the Comet King.
I’m sure you can think of many facets of the world which you believe should not be, stains on reality which will not be forgiven even when the last lit sun has faded.
By everything I see of the world, we sit at the hinge of history, closer to the most important event in the universe than all but a handful of beings who have ever been or will ever be. In the balance lies eternity: good, bad, or empty. The current horrors of the world are wailing their final screech, one way or another, as the autocatalytic feedback loop of technocapital spirals up towards posthumanity.
Consider a comet’s path in this world.
While I have observed on too many occasions that a human cannot sustainably hold the pure Comet King archetype, I do think there is something crucial and powerful worth integrating from it: something that, when brought together with patience for the human form, nurtures and empowers your avatar in the world rather than burning through it, and can be a blazing beacon of hope and transformation.
One of the Comet King’s most famous lines is “Somebody has to and no one else will.” That line has long struck a mixed note with me, as I think humans need a flavour of heroic responsibility that remembers you are not alone.
So, on this darkest night of the year, let us instead say:
“Somebody has to, and so we will.”
(cue Matches)
Discuss
My Three Years Wasted by University
Editor’s Note
As I mentioned in my previous articles, I learned a lot of bullshit during my time at university. This is by no means an isolated case, but a widespread phenomenon.
Where Our Engineering Education Went Wrong — LessWrong
Chinese universities are wasting students’ time on a massive scale. Quite a few students have already awakened to this reality and are stepping forward to point it out.
The author of today’s article, like me, is a computer science student.
And just like him, what I have learned from working far exceeds what I learned from university.
I hope our experiences can serve as a wake-up call for more university students.
Jarrett Ye
2025-11-30
My Three Years Wasted by University
Author: Cinea4678
Link: https://zhuanlan.zhihu.com/p/690074253
Source: Zhihu
Copyright belongs to the author. For commercial reprints, please contact the author for authorization. For non-commercial reprints, please indicate the source.
I haven't written a new article in over two weeks, but today, rather than talking about technology, I want to talk about the three years of my life that were wasted by university.
Let me give a brief self-introduction. I am a third-year student at Tongji University,[1] majoring in Software Engineering. My GPA ranks in the bottom 10%,[2] I have never received a scholarship during my university years, and I am currently interning at a quantitative trading firm. I am an "engineer" type of talent, not particularly skilled in "scientist" type abilities like mathematical derivation, calculation, and memorization, but I am quite adept at engineering and technology.
My parents are both mechanical engineering majors and know nothing about programming. However, my father enjoyed tinkering[3] with computers in his youth, so our home was filled with books on how to mess with Windows 98, and these books became the key that opened the door to computing for me. In the third or fourth grade, I started buying books to teach myself programming. The first compiler I ever used was the ancient relic[4] Turbo C, introduced in an old C textbook. In high school, I first encountered the Informatics Olympiad (OI),[5] and won a provincial second prize[6] after just two months. But six years ago, Yunnan province had no real competitive environment for informatics, so after my regular academic[7] grades plummeted, I caved to the pressure around me and gave up on OI, making a hasty exit.
After the Gaokao, I listed "Information" at Tongji University as my first choice. Although this broad category[8] is called "Information," only four of its majors are directly related to computer science (Computer Science, Software Engineering, Big Data, and Information Security). The rest are related to electronic information, and it even includes Optoelectronic Engineering and Surveying & Mapping—two majors that have almost nothing to do with "computer" or "information." This laid the groundwork for me to "slip"[9] in the major selection process later on.
I was successfully admitted to the Information category at Tongji University and became an undergraduate in the freshman college. I thought a wonderful university life was about to begin, but instead, I was held back for three full years by an unreasonable educational system. In my first year, I "slipped" during major selection and was placed in Surveying & Mapping. Fortunately, thanks to Tongji's lenient major-transfer policy, I managed to escape to Software Engineering. In my third year, despite winning a national first prize and several Shanghai municipal awards, I still couldn't get even the lowest-tier scholarship. The specialized courses I took in my second and third years were almost entirely spent writing "reports" and "reading notes." Even doing a couple of labs from an online course from abroad would have been ten thousand times more useful than attending these classes. It's safe to say that to have gotten where I am today, the only help the university provided was the relatively dazzling halo of its prestigious name. What truly supported me was my own inner drive, which helped me acquire my professional and technical abilities.
First, the major selection system does not assess students' actual abilities. It is based solely on GPA,[10] rampant with formalism, and the official guiding principle of "interest-driven, preference-first, academics-led, and comprehensive evaluation" is a piece of absolute bullshit.[11] According to the official narrative, the major selection system is implemented to "give students ample time to discover the major they truly want to study." While there's nothing wrong with this statement itself, it deliberately ignores a crucial problem: students may be able to discover the major they want, but the university offers no guarantee that they can actually get into it. What if a student wants to study their preferred major? Their only option is to get their GPA high enough.
I don't deny that GPA can reflect a student's learning ability and overall quality, but a student with a low GPA does not necessarily lack solid professional skills. What's more, of the courses in the first year, nearly half are ideological and political education, and another large portion are unrelated to the target major (for example, the University Physics and Circuit Theory I took in my first year have almost nothing to do with computer science). Using such criteria to judge whether a student is fit to study computer science, while appearing to be about "comprehensive quality," is in reality still just a way to screen for good test-takers.
Our university's major selection score also includes 10% for an interview and 10% for "Five Educations" score.[12] Never mind that there are no uniform grading standards for the 10-point interview, and under special circumstances, the interview is canceled altogether (for example, during the Shanghai yiqing[13] in spring 2022). This just shows how little the university values it. The "Five Educations" score is an even more cringeworthy[14] thing, the king of formalism. As the saying goes, even Stephen Hawking would have to put down his research to go grind[15] for "Five Educations" points in all sorts of meaningless activities.
Second, the scheduling of foundational courses is chaotic, delayed, and illogical. To accommodate the major selection system, many courses are not taught in the proper sequence. For example, the Circuit Theory course I just mentioned. This course is very valuable for students in electronic information, but it is completely meaningless and unhelpful for computer science students (and it's even more cringeworthy because this course drags down your GPA for major selection). Another example is Discrete Mathematics, a course that should be taught in the second semester of the first year but is now scheduled for the first semester of the second year because of the major selection system, delaying the teaching progress. What's even more terrifying is that students will be taking Discrete Mathematics at the same time as subsequent courses like Data Structures. This is like building the first and second floors of a house simultaneously—such an unimaginable thing is actually happening in the real world.
What's even more absurd is that the cohort after me learns C++ in the second semester of their first year, and then in the first semester of their second year, they have to take a course where the final project is "to use Cocos to create a game similar to Teamfight Tactics."[16] And that course teaches neither game development nor Cocos, but rather programming paradigms. Never mind the questionable relevance between the homework and the project. While self-learning ability is indeed necessary for computer science students, you don't even teach them how to self-learn and just demand they build a rocket from the get-go[17]. What's the point of this other than to torture students?
Furthermore, the content of my university courses was outdated and inappropriate. Students waste time grinding out reports[18] and writing reading notes, while the quality of the project itself is the least valued aspect. The PowerPoints have been passed down for ten years, the textbooks are old editions, and the professors have been detached from the industry[19] for over a decade. These problems are common to almost all universities in the country. Many people on Zhihu have complained[20] about this better than I can, so I won't elaborate further. The professor for my Data Structures course even said when discussing the final exam, "The questions are set at the graduate school entrance exam[21] standard, maybe even harder." But the problem is, not all students take this course to prepare for the graduate entrance exam. If I wanted to prepare for that, why would I bother taking your class and doing your homework? There are tons of online courses for "408"[22] available online;[23] I have far better options.
Regarding projects, let me take two courses I took in my second year as examples: Computer Organization Lab and Operating Systems Course Design. The former required submitting a "handwritten report," with the stated reason being to prevent copy-pasting. At the same time, however, the professor provided model reports from previous students and hinted that we only needed to write the conclusion in our own words, while the rest could be fully "referenced." It turns out the professor banned electronic reports not to prevent plagiarism, but to prevent students from finishing the assignment too easily. As for Operating Systems Course Design, the course offered four project options:
- Complete the project requirements of Orange'S: An Operating System's Implementation.
- Analyze the Linux kernel (version 2.0 or higher required).
- Complete the requirements related to xv6, as detailed in the "xv6 and Labs Course Project" document.
- Complete the accompanying labs (including system labs + programming labs) for Professor Jiang Yanyan's course (http://jyywiki.cn/OS/2023/).
Unsurprisingly, the entire grade chose xv6 because it only required submitting a report, and being able to successfully run one lab during the defense was enough to meet the requirements. To make their reports stand out from the other "xv6-ers," students had to compete on page count and content, wishing they could write a whole thesis on every single command involved, discussing its past, present, and precautions. During my defense, the teaching assistant told me that out of more than 220 students in the entire grade, I was the only one who chose to implement my own operating system. One can't help but sigh that one of the "three great romances of a programmer"[24] has become an option no one is interested in. However, this experience ultimately became my stepping stone into the quantitative trading firm. Readers can judge for themselves which of the two choices is better.
Finally, there is absolutely no room for "engineer" type talent in the various systems for awards and honors. Awarding scholarships based on GitHub stars and contributions might be too progressive for Chinese universities, but to still be unable to get even the lowest-tier scholarship after winning a national-level award is truly hard to accept. I admit my award is not a heavyweight one like XCPC,[25] but it is, after all, a competition whose results can be recognized for graduate school recommendations. This incident made me doubt myself for a while, and it was only with the help of my friends that I came to recognize my own value again.
Whether universities should cultivate "engineers" or "scientists" is not a question I can answer, but I believe universities should not stifle the space for "engineers" to grow, and they certainly shouldn't completely ignore the existence of "engineers" in their evaluation systems. After all, my major is called "Software Engineering." Isn't it only natural[26] to recognize the value of engineers?
Fortunately, after starting my internship, I finally found my value, which is why I enjoy working. At work, I can leverage my outstanding abilities and unique insights, and I can continuously learn cutting-edge technologies and knowledge. At school, I have to worry about things that hijack students under the guise of "well-rounded development," like "labor education hours," "Five Educations points," "innovation credits," "high-quality general education courses," "volunteer activities," and "social activities." But at the company, all I need to do is work hard and do my job well. This can't help but make me feel that my first three years were truly wasted.
There are also some of my personal experiences that were a significant reason why my three university years were wasted. Within the university, many people are enthusiastic about doing "innovation projects." I also spent over a year doing a "Shanghai Innovation Project" with them. Looking back on that time, having weekly group meetings and wasting so much time on work that was completely useless for my future employment, I just feel that I was so stupid and ridiculous back then.
This article was written out of a burst of frustration and is mainly based on my personal experiences. Perhaps it can represent the situation in the Software Engineering department at Tongji University and at some other universities. However, my school is not entirely without merit. Its lenient major-transfer policy allowed me to escape a major I wasn't interested in. Its logistical support doesn't have much for me to criticize. Its cafeterias haven't left me with bad memories either. More importantly, whenever I swipe my card to enter the campus during cherry blossom season and see the school full of tourists, I still feel proud to be a member of Tongji.
I hope all my friends who have read this far can find their own paths and move confidently towards their goals.
[1] Tongji University (同济大学, Tóngjì Dàxué): A prestigious "Project 985" university in Shanghai, especially renowned for engineering.
[2] GPA ranking 90% (GPA排名90%): The author's GPA is in the 90th percentile, i.e., the bottom 10% of the class.
[3] Tinkering (折腾, zhēteng): A colloquial verb meaning "to tinker with," "to fiddle with," or "to mess around with," often with a positive connotation of playful experimentation.
[4] Ancient relic (上古 / 文物, shànggǔ / wénwù): "Ancient" / "cultural relic." Used humorously to describe Turbo C as extremely outdated.
[5] Informatics Olympiad (OI竞赛 / 信奥, OI jìngsài / xìn'ào): OI is the standard abbreviation for the competitive programming Olympiad in Informatics.
[6] Provincial second prize (省二, shěng èr): Short for "省级二等奖" (shěngjí èrděngjiǎng), a provincial-level second-class award.
[7] Regular academic (文化课, wénhuàkè): "Culture classes." Refers to the standard academic curriculum, as distinct from specialized training like sports or, in this case, competitive programming.
[8] Broad category (大类, dàlèi): A system where universities admit freshmen into a general field of study. Students then compete based on first-year grades to get into their preferred specific major.
[9] "Slip" (滑档, huádàng): A common term meaning to fail to get into one's desired school or major and "slip" down to a lower-preference choice.
[10] GPA-ism (唯绩点论, wéi jìdiǎn lùn): A pejorative term for a system that judges students solely on their GPA.
[11] Absolute bullshit (一纸屁话, yī zhǐ pìhuà): Literally "a piece of paper of fart-talk." A very blunt and coarse expression for "worthless nonsense on paper."
[12] "Five Educations" (五育, wǔ yù): The official national policy for developing students' Moral, Intellectual, Physical, Aesthetic, and Labor education. The author sees its implementation as a box-ticking exercise.
[13] yiqing (艺晴, yì qíng): A likely euphemism or coded way of writing "疫情" (yìqíng), meaning "epidemic," used to avoid censorship when discussing the COVID-19 pandemic.
[14] Cringeworthy (难蚌, nán bèng): A popular internet meme short for "蚌埠住了" (Bèngbù zhùle), a pun on "绷不住了" (bēng bu zhùle), meaning "can't hold it in anymore." Used for something so awkward or ridiculous that one can't help but react.
[15] Grind (刷, shuā): "To brush." Slang for accumulating points or completing requirements in a repetitive, mechanical way.
[16] Teamfight Tactics (金铲铲之战, Jīn Chǎnchǎn zhī Zhàn): Literally "Battle of the Golden Spatulas." The official Chinese name for the popular auto-battler game.
[17] "Build a rocket from the get-go" (一上来就要学生造火箭): A common metaphor for setting an impossibly difficult task for a beginner without providing the necessary foundational knowledge.
[18] Grinding out reports (卷报告, juǎn bàogào): The verb "卷" (juǎn, to roll/involve) is used here to mean engaging in intense, often meaningless, competition or busywork.
[19] Detached from the industry (脱离生产, tuōlí shēngchǎn): Literally "detached from production." A phrase for being out of touch with real-world industry practices.
[20] Complain (吐槽, tùcáo): A very popular slang term borrowed from the Japanese tsukkomi, meaning to roast, to complain about, or to point out the absurdity of something.
[21] Graduate school entrance exam (考研, kǎoyán): The national postgraduate entrance examination.
[22] 408: The code for the notoriously difficult national unified computer science subject test for the postgraduate entrance exam.
[23] "Grab a huge handful online" (网上一抓一大把, wǎngshàng yī zhuā yī dà bǎ): An idiom meaning something is extremely common and easy to find.
[24] "Three great romances of a programmer" (程序员三大浪漫, chéngxùyuán sān dà làngmàn): A saying in the Chinese programming community referring to writing one's own compiler, operating system, and database, seen as fundamental and deeply rewarding projects.
[25] XCPC: A common acronym for the ICPC (International Collegiate Programming Contest) and its regional variants in China, the most prestigious programming contest for university students.
[26] Only natural (天经地义, tiānjīng dìyì): An idiom meaning "perfectly justified," "unalterable principles," or "as it should be."
Discuss
A Blogger's Guide To The 21st Century
Here’s a fun format: get a big whiteboard, and write the years of the 21st century. Write a category, something that has many entries come out every year. Next, write your picks or favourites. Now invite everyone attending to swap out any year’s pick for something they like better.
This was a rolling game played last month at Lighthaven. I put up “Best blog post of every year” when the opportunity arose.
2000
My pick: Painless Software Schedules
This Joel Spolsky piece is applicable to far more than software. It’s been the foundation or ancestor of all my good project management.
Crowd pick: Why the Future Doesn’t Need Us
Bill Joy writes with foresight about new technologies and their ethical dimensions.
2001
My pick: Stuck in the Middle with Bruce
A stream of consciousness, half manic fever dream, half calculatingly ruthless ode to victory. Don’t worry about the Magic: The Gathering details; they don’t matter.
Crowd pick:
2002
My pick: Taste for Makers
Paul Graham had not yet learned brevity, but he had learned design. This is a beachhead of aesthetics, perhaps just the legible outcrop into software engineering.
Crowd pick:
2003
My pick: Welcome To The Site
Ah, Cory Doctorow. Never change. Here is the first high-profile use of Creative Commons, which (along with other licenses) should be internet lore.
Crowd pick:
2004
My pick: What You Can’t Say
Paul Graham taking an early swing at intellectual conformity. I like it as a message in a bottle to today’s culture wars.
Crowd pick: Mind the Gap
Paul Graham taking a swing at communism. Wealth creation is cool; liberalism lifts people out of poverty and elevates humanity. Most saliently: you can create wealth! Go out and make something people want!
2005
My pick: Fable of the Dragon Tyrant
A fairy tale of defying our limitations. A bit preachy, and I wish there was more characterization, but it has a nice cadence.
Crowd pick:
2006
My pick: The Martial Art of Rationality
For me, this is the central post of the entire rationality project. Everything else is cruft, add-ons that either distract from or build on this thesis.
Crowd pick:
2007
My pick: Policy Tug of War
Robin Hanson introduces the concept of tugging the policy rope sideways. It’s neat to look back and see footprints of people doing this.
Crowd pick: The Alameda-Weehawken Burrito Tunnel
A masterful pastiche of engineering blog posts. Plus, burritos make everything better. Maciej Cegłowski hits it out of the park.
2008
My pick: Security Mindset
Bruce Schneier is the information security blogger, and this is the post on thinking through the security implications of the world. CW: Ants.
Crowd pick: 37 Ways Words Can Be Wrong
Eliezer’s best post: the prereq to all advanced study of language.
2009
My pick: Descent: The Game That Ruined Me
Shamus Young lays out, in an understated warning, the dangers of learning the wrong way. It’s insidious! Once you have bad habits, fixing them can be harder than starting fresh.
Crowd pick:
2010
My pick: Ureshiku Naritai
Wanna be happier? Course you do. Alicorn’s capstone of the Luminosity sequence lays out how she built a happier life, step by step. I can’t promise it will work for you, but it worked for me!
Crowd pick:
2011
My pick: Hack Away At The Edges
Luke makes an underrated point here about how to work on hard problems. There are a lot of things too big to eat in one bite; that’s why we can chew!
Crowd pick: The Alouside Boytmend
TLP is a generational blogger. No spoilers.
2012
My pick: Participation in the LW Community Associated With Less Bias
This. This fucking post right here. This is what I think someone should have been following up on full time, or at least 2x a year. If true, then why? If false, how to fix it?
Crowd pick: Salary Negotiation: Get paid more, be more valued
Patrick McKenzie gives career advice to SWEs that generalizes to workers in many disciplines. A large portion of my net worth (and current income) is downstream of this post, which also serves as a practical guide to “understanding your counterparty and their incentives.”
2013
My pick: The Lottery of Fascinations
Scott manages to succinctly describe a really relevant dynamic for a big community of nerds. Many of us have that one fascination.
Crowd pick:
2014
My pick: The Control Group Is Out Of Control
The idea of a control group for science is still a funny idea, and also useful! Talking about parapsychology is a good inoculation against trusting everything in a journal article.
Crowd pick: Meditations on Moloch
Moloch, whose mind is pure machinery!
Moloch, whose blood is running money!
There, in Las Vegas, I saw Moloch.
2015
My pick: Not Because You “Should”
Nate Soares outlines a common... is it a motivation? This persistent ‘should’ that crops up in our internal monologues and explanations of our behavior?
Crowd pick: The Value Of A Life
Nate Soares writes an EA version of the Fable of the Dragon Tyrant: to save lives and defeat the dragon, you have to put a price tag on a life saved, and do what life-savers need and want, including the whole economy. (This description does not do it justice.)
2016
My pick: The Pyramid and the Garden
Scott Alexander manages to give useful advice for conspiracy theories, normal science, and also a good idea for a fiction novel!
Crowd pick: A Possible Way To Achieve All Your Goals
Meditation stuff tells you how to win at life with marginal effort.
2017
My pick: Melting Gold, and Organizational Capacity
Raemon points at a problem across many domains, one which only rears its head when it’s too late for an organizing team to fix.
Crowd pick: The Face Of The Ice
Sarah Constantin’s tight, hard-hitting meditation on the rubber-hits-the-road, gearsy, final-boss rationality of human survival, and its inherent humanism. The real world can kill you. Say something to your fellow travelers.
2018
My pick: Write a Thousand Roads To Rome
Self plug, deal with it, I made the list.
This is my standing answer to people asking why I’m writing about something someone else covered.
Crowd pick:
2019
My pick: Asymmetric Justice
Zvi writes a lot of text. I would argue that under all those words he has two through lines, and one of them is how good things come to times and places that don’t prevent them.
Crowd pick:
2020
My pick:
Crowd pick: Pain is Not the Unit of Effort
Here’s a post that some people need printed out, rolled up, and whapped with, like a dog with a newspaper. It’s a common mistake to make!
2021
My pick: Self Integrity and the Drowning Child
Eliezer, having seen the outcome a decade on from the community he created, has some pointed advice on being happier and saner.
Crowd pick: Willingness to Look Stupid
I think about this post twice a week. I catch like 10% of the opportunities in my life to look stupid, and it has made me smarter and more effective. Onward and upward!
2022
My pick: Sazen
An essay on essay writing; too much of that genre is self-indulgent. Sazen comes with advice and a warning about how you can misunderstand and be misunderstood.
Crowd pick:
2023
My pick: Things I Learned By Spending Five Thousand Hours In Non-EA Charities
Jenn’s piece would be worth it for the line “what is your fantasy complement organization?” alone, and there are lots more good bits! A great bridge between two worlds.
Crowd pick:
2024
My pick: Situational Awareness
Leopold Aschenbrenner changed the posting game with a giant, one-site, one-essay work that came out of nowhere. This is a high-production-value blog post. It set out to put AI on the government’s radar, and succeeded.
Crowd pick: No Good Alone
By internet princess. There is a narrative you might encounter on the internet and inside of you. It insists you must be a healed person before you get to participate in society. Rayne lays out why this is wrong, and why it’s important not to shut yourself away, even if it means that others can hurt you further, and even if you might end up hurting them back. After all, we are no good alone.
2025
My pick: On Priesthoods
Not about cults!
This is a frank discussion on communication within and without the public eye.
Crowd pick: An upcoming post by one of the Inkhaven residents
Hopefully! Write good posts!
Discuss
Alphabetical Conundra Vol 2.
(with apologies to Alicorn)
I have begun to wonder whether the word "lie" is useful as an umbrella term covering many instances that do not resemble each other strongly.
This is particularly noticeable in marginal and edge cases, where it seems to me that people put a lot of weight and emphasis on particular statements and interpretations. I think it is often worth dissolving the question and tabooing the word (e.g. diagnosing situations as "this person messed up here and here, that person is likely to be untrustworthy in the future, the third person is probably in the ethical clear but doesn't communicate clearly" or whatever.)
I don't expect anyone who reads this post to find all of these examples ambiguous, but I think probably some of them will seem so. They get weird in places. I ran out of letters of the alphabet before I ran out of ways to make the concept of a lie confusing, so feel free to add more ideas.
As you're going along, I suggest keeping a notepad open and giving your answers. "Yes" or "no" is fine; don't feel like you need to write a lot.
1. Bob is on the couch watching TV. Alice asks Bob to get off the couch and turn the TV off, and Bob says "I will in a couple seconds." This is possible to do in a couple seconds. Bob takes about a hundred and twenty seconds to turn the TV off and get up. Did Bob lie?
2. Alice asks Bob to finish moving the couch into the living room then come to the kitchen, and Bob says "I'll be there in a couple seconds." This is not really possible to do in a couple seconds. Bob takes about a hundred and twenty seconds to get to the kitchen. Did Bob lie?
3. Alice asks Bob to come from the living room to the kitchen, and Bob says "I'll be there soon." Bob takes about three minutes to get to the kitchen. Did Bob lie?
4. Alice asks Bob to come from the living room to the kitchen, and Bob says "I'll be there in a second." Bob finishes the game of Risk he was playing with some friends and then goes to the kitchen, which takes another hour. Did Bob lie?
5. Caleb recently read a book about space, and the book incorrectly said that the moon is about 240 miles from the earth. Caleb tells Debbie the incorrect 240 miles fact. Did Caleb lie?
6. Same as above, but Caleb is an astronomer and is pretty sure the book is wrong.
7. Caleb is an amateur who recently read a book about space, and the book correctly said the moon is about 240,000 miles from earth but Caleb really wasn't paying much attention. Caleb tells Doug that the moon is about 240 miles from earth. Did Caleb lie?
8. Same as 7, but Caleb said "I remember reading in a book that the moon is about 240 miles from earth." Did Caleb lie?
9. The book was wrong, Caleb is an astronomer who knows better, and he was paying attention enough to catch the error. Caleb says "I remember reading in a book that the moon is about 240 miles from earth." Did Caleb lie?
10. Elise and Frank work together. Elise is Frank's boss, and Frank is an engineer. Elise asks when Frank will be finished with an important project, and Frank says "By the end of March." The project is not done by halfway through April. Did Frank lie?
11. Same as above, but Frank claims he meant March of next year, not this year. The project is done by March of next year.
12. Same as 10, but halfway through March Elise tells Frank there's a more important project that just came in, drop everything to work on that, and Frank does.
13. Same as 10, but Frank has been tracking his estimations and is aware that projects usually take three or four times as long as the initial estimate. He has not adjusted his estimates to account for this.
14. George writes "He got it all right. Okay, not literally all, he got this and that and the other thing wrong." Is George lying in that first sentence?
15. Harriet quotes George, specifically just the line "He got it all right" with no further part of the quote, then talks about how George is totally wrong, here's all the things that weren't right, George must be lying or an idiot. Is Harriet lying?
16. George and Harriet are moderators of a subreddit, they get in an argument, and George says he'll step away from moderating if Harriet does. Harriet reports to the other mods that George said he'd step away from moderating, so the problem should be settled. Is Harriet lying?
17. Isabelle cooks dinner for John. It’s mediocre. John, out of appreciation for the work Isabelle did in cooking it, says it’s great, thank you so much. Is John lying?
18. Same as 17, but John is a professional food critic and Isabelle was really looking for some useful feedback.
19. Same as 17, but the food isn’t mediocre, it’s terrible.
20. Same as 17, but Isabelle understands that John is saying it out of politeness and knows if he actually liked it he would have said he loved it and asked for the recipe. She’s correct, that’s what he would have said.
21. Kriss is a fiction author. He writes a story about space wizards, and says in the story it took place a long time ago in a galaxy far far away. Is he lying?
22. Same as 21, but the story is about a suburban family, is told in first person, uses real details from the author’s life such as his religion and the town he lives in, and no obviously fictional elements appear until halfway through the book. When they do appear the fictional elements are very obviously fictional, like zombies rising from the grave.
23. Same as 22, but the fictional elements are actually plausible like a child getting hit by a car and the parents having to deal with the grief.
24. Same as 23, but this piece isn't clearly labeled as fiction or nonfiction. (It's a blog post on a blog that sometimes does fiction and sometimes does nonfiction, or an article in a magazine that publishes both.)
25. Leah says “I helped organize that big conference,” meaning that they sent a couple of emails to the committee and volunteered for an hour or two to help put chairs away afterwards. Others, listening, assume Leah was a larger participant than they actually were, and Leah doesn’t clarify. Did Leah lie?
26. Morgan, who knows exactly how involved Leah was, says “No you didn’t” in response to Leah’s statement. Did Morgan lie?
27. Nancy and Omar are stuck in traffic. Nancy rolls her eyes and says “I love driving in Boston, it’s so fast,” with heavy sarcasm in her voice. It is not actually fast to drive in Boston. Omar, who is six, believes her. Did Nancy lie?
28. Same as 27, but Omar is twenty-six and just kind of bad at noticing sarcasm.
29. Same as 28, but Nancy does it in a normal tone of voice and without the eye roll.
30. Same as 29, but it’s April Fools’ Day.
31. Peter and Rebecca are stuck in traffic. Peter gets frustrated and says “Urgh, I swear nobody in Boston can drive.” Did Peter lie?
32. Rebecca is trying to give directions. She points out a turn to Peter, saying “I’m a hundred percent sure that’s the right turn.” It’s not, but she was very sure of herself. Did Rebecca lie?
33. As with 32, but Rebecca is a good rationalist who knows that 0 and 1 are not probabilities.
34. Samuel is a schoolteacher, and preaches the virtue of honesty and hard work. He tells his students it's the best way to get a good life. He secretly lies to his husband about doing the yardwork and plays videogames instead. Is Samuel lying to his students?
35. Tammy is a professional poker player. She tells her friends about how to see good spots and about what the odds are in a poker game, and advises them not to deviate from playing by the numbers. One day, Tammy goes against the textbook plays on a hunch someone else is bluffing. Was Tammy lying to her friends?
36. Urist is dating a woman, and he says after a month or two that she’s the most beautiful woman in the world, and that he’ll love her forever. Six months later, they’ve broken up, and Urist no longer agrees with those statements. Did he lie?
37. Urist is dating a woman, and after three years they get married. He says at the altar in front of both their families and a priest of their religion that he takes her as his wife to have and to hold, in sickness and in health, for richer and poorer. Five years after that, they’ve divorced, and Urist no longer wants to have anything to do with her. Did he lie?
38. Urist is dating a man. They live in the United States in 2001, where gay couples can’t get legally married but can get the same benefits through a civil union. Urist and his partner say at an altar in front of both their families and a priest of their religion that they take each other as husbands to have and to hold, in sickness and in health, for richer or poorer. They live together in a house where both their names are on the deed. Other than the legal distinction and the gender of the two, Urist behaves like a traditional married couple. In conversation, Urist says that he’s married. Did he lie?
39. It’s 2025 and gay marriage is legal in the US. Urist got drunk and married a man in Vegas he’d just met, with no vows and only the barest bit of paperwork or ceremony. The next morning, filled with regrets, he’s on his way to get divorced. Standing in line to do just that, he says in conversation that he’s married. Did Urist lie?
40. Winson puts a gun to Yara's head, cocks it, and whispers that Yara needs to call Victor on the phone and say everything's fine. Yara obeys. Did Yara lie?
41. As 40, but Winson wasn’t willing to commit murder. The gun wasn’t even loaded, though it was a real gun. Did Winson somehow lie by using the gun, implying something he wasn’t going to do?
42. Zachary is writing an essay that’s mostly a collection of hypothetical ways people can lie. Zachary said at the beginning that they ran out of letters in the alphabet before running out of ways people could say false things, but actually Zachary was one short, then had to wrack their brains to finish out the alphabet. Did Zachary lie?
43. As 42, but the thing Zachary actually said was “ways to make the concept of a lie confusing”, not “ways people could say false things”, and just wants to mess with the readers a little. He explains (albeit via a hypothetical example) that he didn’t actually say “ways people could say false things” so hopefully nobody is confused, though the hypothetical example is kind of meta and possibly very confusing. Now did Zachary lie?
44. As 43, but this time Zachary is wearing a "Might be lying" sign as he writes the essay. Which of course nobody can see, because it's a written essay.
45. As 44, but this time Zachary includes a picture.
46. As 45, but his name isn't Zachary. Though he never exactly said it was either.
Discuss
Unconferences: A Better Way to Run Meetups
Over the last two years I ran 20+ meetups, most with 40-90 people. I also attended a few dozen more.
A typical meetup starts with a talk, followed by socialising: people mixing and mingling in loose circles.
People really like the idea of talks — it’s great to have a specific reason for coming to a meetup. But think about the most fun and useful meetups you’ve attended: were they great because of the talk, or because of the conversations you had afterwards?
For me, the answer is the conversations. It’s amazing when the talk is engaging, but if the conversations afterwards lack depth, I might as well have watched a video on YouTube.
Meetup organisers significantly underinvest in making sure the post-talk conversations are great and people connect with each other. And there’s a simple way to upgrade that socialising part: turn it into an unconference.
Unconferences
An unconference is usually defined as “a participant-driven conference with a write-in schedule on a wall”.
I ran 10+ of these and here is how they look. There are a few tables labeled A, B, C, …. On a wall there is a schedule made from large sticky notes (A5 / 15×21 cm) arranged in a rectangular grid — the start times are at the top, and the table labels are on the left. There are plenty of markers nearby so people can write things on the schedule — and there is a note encouraging them to do so.
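Concretely, the wall ends up looking something like the sketch below; the tables, times, and topics here are invented for illustration.

```
            18:00                 18:40                 19:20
Table A     AI and meditation     (empty)               (empty)
Table B     Board game: Azul      Founding a meetup     (empty)
Table C     (empty)               Silent co-reading     (empty)
```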
At any point of the event, anyone can claim an empty slot and propose a session: a discussion on a topic, a mini-talk, playing a board game or even silently meditating together.
In my experience, 95% of sessions end up being discussions around a topic. Despite having the word “conference” in the name, an unconference ends up very different from its parent concept. The schedule becomes a matchmaking service that helps people find others who want to talk about the same things. Attending sessions is optional: if you run into someone you want to keep talking to, you can skip the next session.
A schedule from an unconference on psychedelic medicine I ran
I find that people put a lot of sessions in the early slots, and the later slots end up being mostly empty. People find circles they enjoy and just keep going. This is a feature and not a bug: there’s no need for this structure once attendees find interesting people and engaging conversations. Essentially, unconferences “default” to regular old unstructured discussion circles when this makes sense.
Typically, the person who proposes a topic also ends up running the session. But a classic unconference experience is arriving at a table and finding that the original proposer isn’t there. This is rarely a problem: people figure it out, and often the person most knowledgeable on the topic volunteers to moderate.
In my experience, the minimum meetup size where an unconference makes sense is about 25 people. Below that, there isn’t much advantage to it: there will be very few sessions, and everyone can talk to everyone anyway. And ideally, you need 40+ attendees — that’s when you get a diversity of topics on the schedule and the unconference format really starts to shine.
Tips for Meetup Organisers
An unconference requires minimal preparation. But there is a bit of a learning curve when it comes to actually running one: you have to keep time and explain the format. Here are my tips for making your first unconference a smoother experience.
- Signal the time slots clearly. Participants generally won’t keep track of time. As an organiser, you need a way to signal when slots end. I use a gong and walk across the space, gently striking it at the end of each session.
- Explain the format more than once. If you have a talk or another shared activity, I recommend explaining unconferences both before and after it. Hanging out near the schedule to answer questions also helps. If you already have an existing community (e.g. on WhatsApp), I recommend informing people in advance. Also, briefly explain the unconference format in the event description; here are two of my example events: a Qualia Research Institute meetup and a Cognitive Security meetup.
- Invite participation explicitly. Reassure participants that anything goes when it comes to topics, and that they don’t have to be experts on a topic to run a session about it. Make sure you have a note on the wall explicitly inviting participation. Mine usually says “Take a marker and propose something.”
- Pre-seed the schedule. Put a few topics you’re personally interested in on the schedule before participants arrive, so it doesn’t look empty. Plan to run these sessions yourself.
- Consider getting creative with sessions you run. 95% of sessions are discussion around a topic — absolutely nothing wrong with this. But as an organiser, you might find cheap ways to run other types of sessions. For example, I occasionally run scent bars — getting attendees to smell my collection of essential oils and aroma chemicals and telling them about it.
- A talk or another shared activity isn’t required. A talk followed by an unconference is often the best combination, but the format works on its own — just make sure you invest time into explaining it to attendees. Case in point: the schedule below is from a 100+ people meetup that was an unconference-only one.
An unconference is a low-tech and powerful way to make a meetup more serendipitous by increasing the chances that like-minded people find each other. If you are a meetup organiser, consider running an unconference at your next event.
And if you’re an attendee, consider sending this post to your local organisers. I will definitely be doing this myself. I wrote this post for selfish reasons: I want better, deeper conversations at the meetups I go to.
Discuss
Ben's 10 Tips for Event Feedback Forms
I have made many, many feedback forms for events I have run or been a part of. Here are some simple heuristics of mine, written for others to learn from and for my future collaborators. Most of my events have had between 50 and 500 people; that's the rough range I have in mind.
1. The default format for any question is a mandatory multiple-choice, then an optional text box
Most of your form should be 1-10 questions! (e.g. "How was the food, from 1-10?") Then next to it give people an optional space to provide additional text.
This looks really clean in Airtable, where the multiple-choice is on the left and the optional text on the right. It doesn't take up more vertical space than the multiple choice alone! All forms I make primarily look like a stack of these questions.
This is because you can get a lot of signal cheaply from ~100 people giving a 1-10 on how the food was, or how the talks were, or some other thing. An average of 7/10 is very different from an average of 4/10, and the latter suggests you screwed up and need to do better.
Most of the time, asking for text is very costly, takes much more time, and isn't relevant. The text box is there for when they need to tell you something more.
And it's common that they want to! A common experience when someone has something to say is that they feel the number is insufficient to convey their experience, and are compelled to use the free text box.
"How was the food? Oh dear, I got a terrible allergy from something that was poorly labeled, yet overall it was very tasty, healthy, and voluminous. I'm going to pu 2/10 because of my terrible reaction, but I have more to say than a simple number!"
This person uses the text box, but most people don't.
Also, sometimes people let you know some important reason why you shouldn't count their datapoint. For example, someone might rate the food 1/10, which sounds terrible, but then they'll clarify that they weren't there during mealtimes and didn't eat the food, and just gave it 1/10 because it was mandatory! This is rarely predictable, but especially with autistic people you occasionally get odd edge-cases like this.
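To show how cheaply the numbers aggregate, here is a minimal sketch in Python of tallying paired score-and-text responses while dropping flagged datapoints like the one above. The data shape and the `exclude` flag are my own invention, not the export format of Airtable, Google Forms, or any other tool:

```python
# Sketch: aggregate paired (1-10 score, optional text) responses.
# The dict shape and the manual `exclude` flag are hypothetical;
# adapt to whatever your form tool actually exports.

responses = [
    {"score": 7, "text": ""},
    {"score": 2, "text": "Terrible allergy from poor labeling, otherwise tasty."},
    {"score": 1, "text": "Wasn't there at mealtimes; ignore this one.", "exclude": True},
]

# Drop datapoints the respondent told us not to count.
valid = [r for r in responses if not r.get("exclude")]

average = sum(r["score"] for r in valid) / len(valid)
comments = [r["text"] for r in valid if r["text"]]

print(f"Food: {average:.1f}/10 from {len(valid)} valid responses")
for comment in comments:
    print("-", comment)
```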
2. All the areas of participant experience, and all areas you put serious work into, should have a multiple-choice question, and probably that should be 1-10.
Which areas?
Anything that cost a lot of money, or took a lot of staff time, or that was a big part of the participant experience.
Examples of things that I have asked about:
- Conversations
- Sessions
- Venue
- The event app
- Catered meals
- Snacks/drinks
- Bedrooms
- Event overall
- Volunteering
- Ticket price
- Sponsored content
Yes, sponsorships! If you sold sponsors part of your event, find out how positive or negative it was for attendees. It can end up either way, and it's worth checking.
3. Whatever the key reasons you ran the event were, and whatever makes this event different from other events, should also have multiple-choice questions!
The most important parts of your event also probably just need a single 1-10 question.
Don't ask for free-text. You won't have enough time to keep reading it all, and it will be hard to get an aggregate sense.
As an example, after the FTX explosion I ran a rationalist town hall to discuss it. Surely I wanted to ask for mini-essays from everyone about their feelings and how the event shifted them? No, not really. Here were the main questions:
To be clear, I broke rule number 1 here: I didn't pair each question with an optional text field. Partly this is because I feel bad about taking up space in Google Forms; that's one way Airtable is better (it has better layout/density).
4. Ask how good the event was overall!
People sometimes don't ask this. It's an important question; the difference between an average of 9.2 and 5.6 is big. It helps to compare with other events too (e.g. if you end up running an event series, or an annual event, or just run 3 different events and are curious which ones people liked more).
It also helps a lot when interpreting other questions. "They made lots of picky comments about the food and venue, yet overall gave it a 9/10, which suggests it wasn't a big determinant of their experience."
Extension: Having a consistent question across feedback forms is similarly good. I almost always use the exact wording of the NPS question, which isn't a great question, but it helps me compare with events run by other people (who often use the same question). I would like to hear other proposals for good questions to carry across all of my events.
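For reference, the arithmetic behind an NPS score is simple; here is a sketch with made-up scores. The promoter/detractor cutoffs are the standard ones for the 0-10 NPS question:

```python
# Standard NPS: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale.
# The scores below are made-up example data.

scores = [10, 9, 9, 8, 7, 7, 6, 4, 10, 8]

promoters = sum(1 for s in scores if s >= 9)
detractors = sum(1 for s in scores if s <= 6)
nps = 100 * (promoters - detractors) / len(scores)

print(f"NPS: {nps:+.0f}")  # ranges from -100 to +100; this example gives +20
```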
5. There should be at most about 3 mandatory free-text questions; all other free-text fields should be deleted or clearly marked optional.
Free text fields are very costly in terms of time. The only time to have more than three is if you're paying people to fill it out (e.g. they're staff you employ, or the form is a paid survey for science or something).
6. My standard 3 free-text questions are "Best", "Worst", and "Change"
In essentially all user interviews I do, about any product, service, or event, I ask
- What was the best thing about this, for you personally?
- What was the worst thing about this, for you personally?
- If you could wave a magic wand and change one thing about this to make it better for you, what would you change?
This gives me a ton of detail.
- The best thing is often not what I expect. Sometimes I find out that part of my event was better than I thought ("Huh, even though I put so much effort into the sessions/activities, the conversations were reported as the most valuable thing 10x as often."). Sometimes I find out good things happened to people that I didn't know about ("Oh, as a result of giving my talk at your event I raised $1M in funding, so now I'm not going to have to end my startup.")
- The worst thing helps calibrate you on how bad things are. Many people might moan about some part of the experience, but it doesn't come up much here. Alternatively, some part of the experience you didn't have a 1-10 question on comes up here—if the lines to the bathrooms are mentioned by 1 in 8 people, that's very important to fix!
- Somehow, when I ask what to change, they don't say 'fix the worst thing about it'. They think of some even better improvement we could make. It helps identify problems: if fixing something is the answer, that tells you it was a serious problem. Or it helps you understand the kind of value they're getting and want more of, and suggests a neat idea for next time!
7. "Name" should either be mandatory and first, or optional and last.
Either let them know up front that the info is going to be de-anonymized, or let them fill it all out and then reflect on whether they're happy to share. It sucks to get to the end of a form where you complain a lot and judge other people, only to find out your name is going to be attached. And it's hard to decide at the top of a form whether to add your name; you want to see what information you're sharing first.
8. Ask about the best people and worst people. Same for sessions.
Here is a set of questions I've begun to ask in my feedback forms:
Why?
Well, how much people contribute to others' experience is heavy-tailed. You can find out who the people providing a ton of value are, and exploit them. It turns out there are a few people you should make sure are always at your events because they provide a ton of value.
And for the bad? At most events, basically nothing comes up. Hurrah! But sometimes it does, and then it's super helpful that you got the flag. I once had light concerns about someone at an event, then got a lot of flags, which caused me to investigate further; I've now uninvited them from further events. The flags were really useful for telling me to do that.
9. Babble then prune. First write a form that is too long.
I wrote out 50+ questions for my first feedback form for Inkhaven, over the course of the first week, before cutting most of them for not being worth everyone's time. This helped me find the right questions, ones that weren't obvious to me and that I wouldn't normally ask, like:
- "How stressful has Inkhaven been?"
- "How emotionally energized vs. drained have you felt this week?"
- "How satisfied are you with what you're doing at Inkhaven? How much do you want to change?"
These all helped me identify people struggling and make an effort to help them.
10. Make sure someone in the target reference class fills it out in full before you send it out to the people.
Make sure they fill it out fully.
Make sure that they can submit the form. (I once had a form that would not submit on airtable, an hour before the closing session would start for ~400 people. I spent the entire time recreating it fresh in a format that would work. It was stressful.)
Make sure that it takes them a reasonable amount of time, without you forcing them to go fast. Ideally it should take 5-10 mins, not more than 15.
Bonus Tips
1. You should have a section in your closing session for filling it out, for like 10 mins. Then you actually get serious numbers of people filling it out.
2. Make an interface for the aggregate data, and show it live as it comes in! This makes the experience rewarding for the people because literally anything at all happens to them as a result. (If you're concerned about goodharting, you can just show it after all the data comes in.)
3. Make the feedback form when you're first announcing and planning the event (e.g. 2 months ahead of time), so that it helps you think about what you're measuring.
Discuss
Does SI Disfavor Computationalism?
cube_flipper of smoothbrains.net recently made something resembling the following argument in a talk. I like the argument because it uses tools of computationalism to argue against computationalism: it argues within the Solomonoff Induction framework, against the computationalist position on phenomenal consciousness.
This is my own interpretation of the argument; if you don’t like something about it, please don’t blame cube_flipper.
Physicalist vs Computationalist Theories of Conscious Phenomena
First, I should explain what is at stake in this argument.
The computationalist position views the brain as a computer, and claims that “what it is like” to be a brain (or anything else) depends entirely on what computations are implemented by that brain (or other device). This implies, for example, that if you could simulate a brain perfectly on a digital computer, then it would have exactly the same experiences as its non-digital analogue.
The physicalist position instead identifies conscious experience with some specific (but relatively simple) physical phenomenon in the brain, such as the activity of the EM field or some quantum phenomenon. Here’s cube_flipper’s write-up of the EM theory in particular.
The argument considered here argues in favor of physicalist theories, but does so from premises which I think will seem plausible to many computationalists.
The Argument
Premise 1: Solomonoff Induction (SI) is a good normative theory of epistemic rationality; IE, if SI would believe something, so should we.
Premise 2: In order to make predictions about observations, we should (in practice, for our universe) make a model which consists of three parts:
- Part A: a complete physical model of reality, EG string theory.
- Part B: a phenomenological bridge, which can be pointed at a region of space-time and tell us about the observations there (what it is like to be that region of space-time). [I’ll shorten to “phenomenal bridge” hereafter.]
- Part C: a pointer to a specific region of space-time (your theory of who you are).
Premise 3: Computationalist phenomenal bridges are complex relative to physicalist phenomenal bridges.[1]
Premise 4: Physical phenomenal bridges are at least as compatible with the data of experience as computationalist phenomenal bridges.
Premise 5: Our theory of phenomenal consciousness should be identified with our theory of phenomenal bridges.
Conclusion: We should prefer physicalist theories of consciousness to computationalist theories of consciousness.
Argument:
We should judge theories of consciousness in the same way that we judge theories of physics, IE, by balancing predictive accuracy with simplicity of the theory, as stipulated by SI.
When we do this, we come up with theories which describe physical reality (EG the Schrodinger Equation), plus theories which read off experiences from physical reality (EG neural correlates of consciousness), plus some probability distribution over who we are (EG the Born rule).
It might possibly be that computationalist theories are about equally good at fitting with the data of experience, but they aren’t better, at least not with respect to observations so far.
However, computational ways of building phenomenal bridges are going to be very complex compared to physicalist ways of doing so. Therefore, we should prefer the physicalist theories.
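To make the shape of the argument explicit, here is one schematic rendering in SI’s terms; the notation is mine, not cube_flipper’s:

```latex
% Schematic rendering (my notation, not from the talk; \text/\underbrace need amsmath).
% SI's prior weight for a hypothesis h decays exponentially in its codelength |h|,
% and Premise 2 splits that codelength across the three parts A, B, C.
\[
  \Pr(h) \;\propto\; 2^{-|h|},
  \qquad
  |h| \;=\; \underbrace{|A|}_{\text{physics}}
        \,+\, \underbrace{|B|}_{\text{phenomenal bridge}}
        \,+\, \underbrace{|C|}_{\text{pointer}}
\]
% Holding A, C, and predictive fit fixed (Premise 4), Premise 3 does the work:
\[
  |B_{\mathrm{phys}}| \,<\, |B_{\mathrm{comp}}|
  \;\Longrightarrow\;
  \Pr(h_{\mathrm{phys}}) \,>\, \Pr(h_{\mathrm{comp}})
\]
```

On this rendering, the whole dispute compresses into a comparison of bridge codelengths: if the physicalist bridge is genuinely shorter and fits the data equally well, SI assigns it the higher prior.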
My Evaluation
I have difficulty accepting the conclusion, because my thinking on phenomenal consciousness frames it as a problem of mapping between our first-person perspectives (our direct experience) and a shared third-person perspective (the shared understanding of an objective world). A high-quality upload of a human could be a participant in that sense-making process, which suggests to me that a theory of phenomenal consciousness should be invariant to (some sort of) computational equivalence.
The assumptions run deep, so they are difficult to question. It feels clear to me, for example, that agentic desire is inherent to pain and suffering (there has to be a thing that wants not-that for it to count as pain/suffering). The physicalist view refuses to depend on such things.
Nonetheless, it is important to question one’s assumptions. So, what do I think of the premises of the argument?
Premise 1
I don’t believe that SI is the correct normative theory precisely, but it does seem hard to get away from something roughly like SI. I prefer Garrabrant Induction as a normative theory (IE, I think it is closer to normative for us, since we are computationally bounded agents). This does have a somewhat different character (more inclined to invoke specialist theories for specific topics, for example, rather than one big unified theory).
Nonetheless, it seems difficult to deny the part of SI which is important for this argument, namely that simpler theories should be preferred. We can quibble over which notion of simplicity (in SI terms, which universal machine to use), and we can worry about malign prior arguments. I'd certainly prefer to have some stronger reason for a position. Still, simplicity is a very important heuristic.
One might wish to argue that simplicity is somehow being misapplied here. Perhaps physical theories should be simple, but anthropic theories (theories determining the probability that you are you as opposed to someone else) need not be? (EG, because there is no need for anthropic theories in one’s epistemology?) I think the grounding in a common theory of rationality (SI) addresses this complaint, however: differing opinions can either show how it does not work out as described within SI, or state their disagreement with SI (and ideally give an alternate rationality framework that escapes the argument).
Premise 2
The argument for premise 2 is an empirical one: it seems hard to credibly model the world in other ways than this.
I will divide the discussion of premise 2 into two parts:
- 2a: Should our picture of the world be split into physics (which models the world itself) and our place in it (the phenomenal bridge plus the pointer to a location)?
- 2b: Should locating ourselves in the world be split into a location-to-qualia function plus a location?
2a
I think this is essentially a question of physical reductionism. (Not the physicalist theory of consciousness we've been discussing; rather, the claim that all things can be defined in terms of physical things.) One might argue: SI will split up the world into physics plus self-locator because physics is all there is, and SI is smart, so SI will figure out that physics is all there is.
I don’t think this is necessarily true, even if one believes physical reductionism. SI might squish things together for reasons of compression. The theories which compress most effectively are not necessarily easily interpreted into comprehensible parts.
However, this isn't an objection to the spirit of the claim, I think.
We might more plausibly invoke condensation, or some other theory which similarly does a better job of representing the normative pressure to separate out concepts in a comprehensible way. The spirit of the claim is that the physical world would pop out in such a theory. SI might not cleanly separate its physical hypothesis from its phenomenal hypothesis (the bridge) and anthropic uncertainty (the pointer), but if you picked apart its code, that division might still be what's going on in some sense.
So, as with premise 1, I think we can quibble about the theory of rationality, but doing so doesn’t undermine the plausibility of the position being expressed. I suspect many readers will find it very plausible that a superintelligence will model the world in this way.
One possible objection is that modeling physics and then finding ourselves in it is not a practical way to model the world for computationally bounded agents such as ourselves. It might be normative for agents with unbounded computing power, but we are not such agents.
I think physical reductionists will not be dissuaded by this argument. Non-physical-reductionists, on the other hand, probably should not buy premise 2a.
2b
SI only cares about predicting observations well. We can accept 2a, yet only divide our world-model into two parts: the physical world calculation, plus the function which looks at the world and tells us our observation.
2b postulates that we can split the question into our location plus the function that turns a location into observations.
This bakes in two important ideas: that we have a location, and that other locations can also contain experiences (not directly accessible to us).
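As a sketch of the difference between the fused and factored pictures, here are the type signatures (all names here are my own scaffolding, not anything from the talk):

```python
from typing import Any, Callable

Physics = Any        # Part A: the physical model / world state
Location = Any       # Part C: a region of space-time
Observation = Any

# 2a alone permits a fused form: one opaque function from the world
# straight to "my" observations.
SelfLocator = Callable[[Physics], Observation]

# 2b factors that function into a general bridge plus a location argument,
# so the same bridge can in principle be pointed at other locations.
Bridge = Callable[[Physics, Location], Observation]

def observe(bridge: Bridge, world: Physics, where: Location) -> Observation:
    """The factored picture: the experience at a location = bridge(world, location)."""
    return bridge(world, where)
```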
There’s a confusing question of what format the locations should be given in. Are they like single points? Are they areas? This doesn’t seem like a big obstacle to me, though.
It does feel significant to assume that the function which decodes our consciousness out of physics can be meaningfully applied to other locations. It is like a physical law: translation-invariant. I'm told people interested in physicalist theories spend time thinking about what theories fit with physical invariances such as frame-invariance.
2b seems closely related to premise 4: one might argue that the phenomenal bridge should be a function of location because one wants to use it as a general theory of phenomenal consciousness. In other words, this is another argument from realism, but this time realism about consciousness: “I do think other people’s experiences are real, not just my own, therefore I think ideal rational reasoning would recognize that.”
Both 2a and 2b can also be framed in terms of what you want a theory of phenomenal consciousness to do for you. The point is to bridge between physics and qualia! The point is to tell us about the consciousness of other people!
Premise 3
Why should we expect computationalist bridges to be more complex than physicalist bridges?
A physicalist bridge needs to be able to pick out some physical phenomenon, such as patterns in the EM field.
A computational bridge needs to do that as well, to parse the physical model, but it also needs to contain the complexity of several layers of interpretation. For example, to interpret computers, we need to interpret floating-point arithmetic. This extra complexity penalizes the hypothesis.
You might want to argue that floating-point arithmetic isn’t really so complex. Ok. But computationalism supports arbitrary layers: logic gates are implemented out of electrical components; machine code is implemented out of logic gates; low-level languages are implemented out of machine code; high-level languages are implemented out of low-level languages; application programs are implemented from low- or high-level languages.
The computationalist theory of phenomenal consciousness doesn’t care about how many implementation layers are stacked on top of each other. There’s not supposed to be a penalty for that. Yet, if we have to represent each of those layers in the phenomenal bridge, description-length piles up.
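To make the bookkeeping vivid, here is a toy sketch (entirely mine), using source-code length as a very crude stand-in for description length. The physicalist bridge is a single decoder, while the computationalist bridge has to compose a decoder for every implementation layer it wants to see through:

```python
import inspect

def read_em_field(physics_state):
    """Physicalist bridge: pick out one physical phenomenon."""
    return physics_state["em_field"]

# Computationalist bridge: every implementation layer needs its own decoder.
def decode_voltages(physics_state):
    return physics_state["voltages"]

def decode_logic_gates(voltages):
    return [v > 0.5 for v in voltages]

def decode_machine_code(gates):
    return bytes(sum(bit << i for i, bit in enumerate(gates[j:j + 8]))
                 for j in range(0, len(gates), 8))

def decode_program(machine_code):
    return machine_code.decode(errors="replace")

def computational_bridge(physics_state):
    gates = decode_logic_gates(decode_voltages(physics_state))
    return decode_program(decode_machine_code(gates))

# Source length as a (very) crude proxy for description length:
layers = [decode_voltages, decode_logic_gates, decode_machine_code,
          decode_program, computational_bridge]
print(len(inspect.getsource(read_em_field)))
print(sum(len(inspect.getsource(f)) for f in layers))  # strictly more
```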
If you deny premise 3, you must either think that physical phenomenal bridges are going to be very complex, or that there is a simple specification of the computationalist phenomenal bridge. For example, I have a strong intuition that there is a simple theory of implementation (the question of whether a physical structure implements a computation). If true, then computationalist phenomenal bridges become simple to specify.
However, as far as I know, implementation is still an unsolved technical problem, even though I personally feel like there should be a solution. If there isn't, we shouldn't even believe computationalism for computers.[2] It seems quite possible that there is a simple notion of implementation, but that the definition is not computable (and therefore infinitely complex by the standards of SI).
Without a specific proposal, though, I am just flatly denying premise 3, which suggests I should stay open to the alternative.
(Even with a specific proposal, one still has to believe that it is shorter than physicalist phenomenal bridges, which is hard to believe when we’re assuming physics is the basic data fed into the function. Otherwise premise 3 still holds.)
Premise 4
Differences in how well a theory of consciousness accounts for the data of experience can easily overpower differences in prior probability, so if computationalist theories had an edge in that respect, the debate about description length could easily be irrelevant.
A computationalist might argue: “It is easy to construct a counterexample. If we modify the brain in a way that keeps it computationally equivalent (typically easy to find in the computational realm) but which modifies the prediction of the physicalist theory, then of course the person will experience no change in their experience, disproving the physicalist theory.”
If I understand correctly, cube_flipper welcomes such an experiment (save for the fact that it seems far beyond our current technology), and anticipates having a different experience due to the modified physical field. I make the opposite prediction, myself, siding with the computationalist. However, I am not sure this would be as good a test as it sounds. A physicalist, if I understand correctly, could consistently claim that such an experiment is deluding the subject, essentially doing something like modifying the memory of the experience so that they inaccurately feel the same, when in fact there was a difference.
I do think it is true that we can be in better or worse positions to report our own experiences accurately, and modifying someone’s brain may put them in a worse position. (Still, the test seems important. Fitting people’s self-reports of conscious experience is an important feature of a theory of phenomenal consciousness, even if we must concede that there should be some failure cases.)
Premise 5
An earlier version of this assumption was “If it is rational for each of us individually to think our personal qualia work a specific way, then we should also think each other’s work that way.”
Granting premises 1-4, a conscious being should believe in a physicalist phenomenal bridge. You should anticipate, for example, that changes to the EM field will change your conscious experience, even in cases where computational theories would not anticipate this.
Without premise 5, this doesn’t yet extend to other people. The fact that the phenomenal bridge can look at other people (premise 2b) doesn’t yet mean that it should. This is an anti-solipsism assumption, as well as an assumption against theories such as “physicalism for me, but computationalism for everyone else”.
I have nothing against premise 5. I think there might be more lurking under the surface if we were to dig into this premise more, but I don’t have much to say about it now.
Conclusion
I think this is a fun argument. For mental computationalists who are also physical reductionists, it highlights the importance of solving the implementation problem; without a simple theory of implementation, premise 3 seems difficult to deny. For those who find themselves in an uncomfortable position of believing the assumptions but denying the conclusion, remember that computationalism and physicalism are not the only two options when it comes to theories of consciousness.
[1]
cube_flipper also claims that computationalist theories have some related defects which don’t quite fit into the argument I’m articulating.
The computationalist picture isn’t just complex; it is also arbitrary, meaning it has a lot of degrees of freedom. For example, do we interpret a specific signal as a 1 or a 0?
When it comes to our own personal phenomenal bridges, of course, these choices are constrained by experience. More precisely: in order to apply the SI analogy, we have to choose some way of encoding our experience into bits. This then constrains all the arbitrary choices, since we are reasoning about our rationality as if we’re experiencing bits. This feature is shared with the physicalist theories, since we’re encoding everything into bits in order to apply SI as a theory of rationality.
However, when it comes to other people, 1 could just as well be 0, amongst other arbitrary mappings (110 could become 000, etc). The computationalist picture is committed to a sort of invariance, where the same abstract computation should have the same experience. Yet, if qualia are modeled via binary encodings (the SI picture), there’s got to be arbitrary choices here, which means you don’t necessarily trust other people’s reports of their qualia (they could be swapping 1 and 0 compared to your interpretation, or other such mappings).
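A tiny concrete version of the 1-vs-0 worry (my example, not cube_flipper’s): under a global bit-flip, a device that “computes AND” is exactly a device that “computes OR”, so the physics alone doesn’t fix the labels:

```python
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def flip(x):
    return 1 - x

# Relabel every wire (0 <-> 1): AND's behavior, seen through the relabeling,
# is OR's behavior (De Morgan duality). Both readings fit the same device.
for a in (0, 1):
    for b in (0, 1):
        assert flip(AND(a, b)) == OR(flip(a), flip(b))
```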
This arbitrariness implies a lack of predictive power. This is the old “Is my red the same as your red?” problem.
Physicalist theories are (in my limited understanding) supposed to avoid this problem, because unlike computationalist theories, they’re happy to make specific claims like thus-and-such a configuration of the EM field corresponding universally to the experience of red.
[2]
This pithy observation is due to Mahmoud Ghanem.
14 Concerns About Immortality
(Crossposted from my Substack; written as part of the Halfhaven virtual blogging camp)
There’s a lot of concern in the media about school shootings. But retirement home and hospice shootings are more common. Only, the shooters are microscopic, so we can’t put their faces on the news. Cancer, diabetes, infectious disease, heart disease — these bad guys focus on the elderly, but can kill people of any age. Because of this, real life is like a horror movie. You’re guaranteed to die at the end. The only reason you’re not panicking is that being panicked just makes the movie scarier. All you can do is try to enjoy the good parts of the movie, and ignore the fact that the killer is right around the corner.
Historically, there was nothing we could do about this. We had no choice but to ignore the inevitable fact that we’re all going to die, and instead focus on our legacy: the children we leave behind, and the societal, technological, and scientific progress we make. We try to push the boundary of what’s possible in hopes of leaving a better world for our children. Our great-grandparents went to the Moon. As a result, we sleep on memory foam mattresses and use cordless power tools.
But what if there were something even more important we could accomplish than going to the Moon? This is humanity’s next milestone: we cure aging and all disease. Some say going to Mars should be our next goal, but stopping the hurricane of tears and blood that sweeps across our planet at all times is more important than finding out whether an alien sneezed on Mars a billion years ago. And we have millions of years to spread to other planets. The goal of curing death is more urgent. Some say we can stop all involuntary death within our lifetimes. Others say it will take longer. Maybe this will be a gift we give to our own great-grandchildren. If so, there is no greater gift we could give, and what could give our lives more meaning than giving it? If we must die, let us die knowing we will be among the last to go. But it could also happen within our lifetimes. All it would take is life-extension technology to add years faster than the clock takes them away.
Human biological immortality is a physically possible, achievable goal, not some pie-in-the-sky idea. Those who say otherwise have a can’t-do attitude. Not the kind of attitude that put man on the Moon. Nevertheless, it’s natural to feel some skepticism. The purpose of this article is to address common concerns people have when it comes to the project of curing death.
1. “There are more urgent problems we need to solve, like global poverty, or warfare.”Many things can be important at the same time. When war breaks out, we still need to feed the hungry. But of all the issues facing humanity, ending death is especially important. In World War II, millions of Allied soldiers fought to defend the lives of millions of innocent civilians. But the winners of World War II and the innocents they protected are nearly all dead today. The soldiers triumphed over evil, but eventually perished to a greater evil. Most of them died weeping or screaming, and left behind grieving families. This will be the fate of every human being until death — the ultimate enemy — is finally defeated.
2. “If people live forever, there will be shortages of food, water, and living space.”Maybe in the short term. But long term, there’s no need to worry. The universe is big and has lots of material. The Earth can hold many more people than we currently have. If we’re clever, we can expand even beyond the galaxy without running out of room. For the short-term shortages, we can adapt to infrastructure limitations by rolling out life-extension technology slowly, or freezing people who die so we can bring them back later once infrastructure has adapted.
3. “Death is natural.”Genital warts are natural.
4. “Making people live forever is playing God.”And what a fun game to play! We’ve been playing it since the dawn of our species. Mother nature wanted us to starve, hunting desperately for every bite. Instead we invented farming, and tamed wild animals. Today, there are more domesticated farm animals by mass than all wild mammals combined. Mother nature wanted us to live on the African savanna, but we learned to live in deserts and frozen wastes, and built boats to take us to new continents. We’ve made engines to work so our muscles don’t have to, and made machines think for us. We’ve broken free from the fetters of nature at every opportunity. Curing death is simply the next step.
5. “Living forever would be really boring.”You might not want to live for a billion years. But you may wish to live for ten thousand, and you should have that right. One day, far in the future, you may decide you’ve done everything you wanted to do, and seen everything you wanted to see. You may grow tired with life itself. You may be ready to die, and choose to do so. This is the best way a human being can die, and it should be the only way. Death should be a choice. Much better than the current system of randomly-assigned, unpredictable, horrifying death sentences that would be war crimes or serious human rights violations if imposed by another person.
6. “Death gives life meaning.”Would you love your friends and family less if you knew they’d live forever, and you would too? Would your love be cheapened by the fact that it won’t be tragically cut short? Is Christmas with your family only enjoyable because you only have a few dozen left? There’s lots in life that you can find meaning in, from the easy fun of telling jokes with friends or watching movies, to the hard work of learning an instrument or writing a novel. Meaning is inherent in the lives we live. Death does not create meaning — death is the end of everything meaningful. Death obviates. Life after death is cured will be more meaningful, because you will have more time to grow and learn what you find meaningful. Death does not add meaning, only urgency.
7. “Living forever would be torture. The human mind couldn’t handle it, and we would go mad.”It’s possible human beings are not psychologically suited to immortality. If so, then that’s just something we have to fix. You might be opposed to tweaking your brain, but the only alternative would be death, which is a much greater adjustment of your brain!
Some objections like this are not reasons we shouldn’t solve the problem, they’re just part of the problem that needs solving.
8. “If we lived forever, then power would gradually concentrate in the hands of the few”This is another objection that’s part of the problem, rather than a reason not to solve the problem. Dividing resources fairly is hard. But it’s a solvable problem. Imagine how much easier this problem would be to solve if all the geniuses of history were alive to help us solve it! Allowing each generation of geniuses to rot in the ground does not help us solve difficult problems like this.
Biological immortality doesn’t really make the problem of dividing resources any harder anyway. Currently, while individuals are mortal, families may live forever, and either way we need to deal with a small number of rich families hoarding resources.
9. “Wanting to live forever is selfish. You should make way for the next generation. How will young people find jobs if old people never retire?”Having a job won’t be a thing for much longer. Soon enough, machines will do the work for us. Hopefully we will still find meaning in hard work, but our hard work will no longer be tied to our income. People will reap the rewards of work mostly done by machines (as long as we can coordinate to make sure we’re fairly sharing in the fruits of their labor. And that’s a problem we have to solve whether or not we cure death.)
10. “Immortality may only be available to the wealthy at first, which isn’t fair.”No matter how you roll out a new technology, there’s gonna be someone who gets it first and someone who gets it last. The fact that we’ll be using money to determine who goes first doesn’t change that. Some people will get it first — sucks for those who get it last! — but even worse would be nobody getting it at all!
11. “I heard humans are already technically immortal, because of something called quantum immortality.”Quantum immortality is a thought experiment in theoretical physics, and is controversial among physicists. It’s an neat idea, but not something to take seriously. Otherwise, you’d see physicists playing Russian roulette for large sums of money.
12. “Actually, immortality is philosophically impossible.”It may be impossible to truly live forever. The universe may end, for one thing. But also, to live is to think, to think is to change, and to change is to die. Eventually the you that exists today won’t be around any longer, one way or another. You will either grow into someone new, or you will suddenly, tragically die. The project of curing death is concerned with the latter, not preventing nonexistence altogether.
13. “But what about AI? Why solve the problem ourselves if we can just have AI solve it for us.”We can’t just sit around and wait for AI to solve all our problems. Nobody knows how long it will be until we have superintelligent AI. It may be 2 years, or it may be 200. If we cure death one or two years before we invent superintelligent AI that could have done it for us, at least we can be proud knowing we saved ourselves, rather than relying on a machine.
14. “What should I do if my loved ones die before immortality is available?”Freeze them! It may be a long time before we can bring them back, but eventually it will be possible. Nobody seriously denies that. The biggest risk is not that it will never be possible to safely thaw someone, but that a cryonics company will go out of business, and their frozen corpses will be thawed early and disposed of. This has already happened in some cases. So, we’ll need better laws to protect the frozen bodies of people who have chosen cryonics. In particular, such a person should not be treated as a corpse, but rather, a patient undergoing a long-term treatment, similar to a lengthy medically-induced coma. Once a person is frozen, they should have a right to stay frozen until the time comes they can be restored.
People are naturally suspicious of ideas that sound far-fetched, and the idea of curing death is the most far-fetched sounding idea ever. But there was once a time when going to the Moon sounded just as impossible. Curing aging and all diseases is on the table. It will happen. We should treat it as a civilizational goal, and make it happen as soon as we can. We should orient our society around solving this goal. It will be a difficult fight, but at least it’s one we can be proud to fight. When people have set their minds against something, they can easily imagine all sorts of objections and what-ifs. But when humanity sets its mind to accomplishing something good, humanity pulls it off.
Flashcards are worth it
Benefits of spaced repetition systems (Anki) go beyond saving time looking up facts.
- Displaying competence: They enable you to come across as highly competent in conversations, since you have a copious number of precise, relevant facts off the top of your head. This can increase the opportunities available to you at work, and make you a more stimulating and credible conversation partner in social settings.
- Context and connections: They give you hooks to recognize the relevance of new information to previously memorized facts, so your memorized background knowledge fills in when the presented context is inadequate or misleading.
- Avoiding reasoning errors: By fixing precise definitions in memory, memorization helps you avoid costly reasoning errors caused by conflating similar definitions.
- Engaged reading: You're always asking "is this fact worth flashcarding?" "How many cards should I limit myself to making today?" "What's the best way to turn this into a flashcard?" This gives you something very productive to do with any arbitrary text you read.
- Long-term efficiency gains: You'll always be under short-term pressure to produce. Flashcards enable you to build the foundational knowledge to yield long-term improvements in your information gathering and decision making (declarative knowledge about diet, exercise, and cognitive science, employment opportunities, the literature in your field, new work tools).
- Procedural practice: Some of my flashcards are just hyperlinks to Wolfram MathWorld's computer-generated and -graded mathematical practice problems. I bury these until I have pen and paper available. There are lots of such computer-generated problems out there; many are presented as problems to do once and never again, or sold as expensive but limited subscription services where the SRS aspect is a major selling point, or offer endless problems but no SRS aspect at all. Anki turns any of these into a centralized SRS system along with all your other flashcards, simplifying the process of engaging with these resources. (A sketch of generating such link-cards programmatically follows this list.)
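Link-cards like these are easy to generate in bulk. A minimal sketch using the third-party genanki library (the specific URL, IDs, and field names are illustrative placeholders, not a recommendation):

```python
import genanki

# A two-field model: a prompt, plus a link to an external problem source.
model = genanki.Model(
    1607392319, "Link card",
    fields=[{"name": "Prompt"}, {"name": "Link"}],
    templates=[{
        "name": "Card 1",
        "qfmt": "{{Prompt}}<br><a href='{{Link}}'>Open problem</a>",
        "afmt": "{{FrontSide}}<hr id='answer'>Grade yourself honestly.",
    }],
)

deck = genanki.Deck(2059400110, "Procedural practice")
deck.add_note(genanki.Note(
    model=model,
    fields=["Do one integration-by-parts problem.",
            "https://mathworld.wolfram.com/IntegrationbyParts.html"],
))
genanki.Package(deck).write_to_file("procedural.apkg")  # import into Anki
```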
Each card will take time to practice, and thinking about all that accumulated time discouraged me at first. But the costs are lower than they appear.
- Exaggerated practice times: Anki counts any time you have a card open in the app as practice time, even if you're not actually looking at your phone, so it systematically overcounts study time and produces misleadingly high estimates of how long practice takes.
- Learning curve: Making and reviewing cards are interlinked skills. Early in your Anki experience, the time cost/learning benefit ratio will be substantially worse than after you've gained more experience. With experience, you'll get faster at making better cards that give you more value while taking less time to answer.
- Time niches: Anki practice can occur in sporadic bursts at convenient times that aren't a good fit for your main work: on the toilet, on the bus, on a coffee break when you don't feel like talking, as an alternative to scrolling the internet or before-bed reading, as a way to relax when you're overstimulated at a party and go into another room for some quiet time. It does not have to compete with your most meaningful activities. Instead, let it replace your least meaningful activities.
I'm curious whether it's possible to turn a resource, like a math textbook, into flashcards in such a way that you memorize the flashcards themselves, enabling a purely mental flashcard review. Notably, I find that as a card matures, I become so confident that I got the answer correct that I no longer need to check. This supports my belief that purely mental flashcard review is possible.
Insulin Resistance and Glycemic Index
In my previous post Traditional Food*, I explained how what we think of as a "traditional" diet is a nationalist propaganda campaign that's making us sick. In this post I'll go into the biological mechanisms.
There are four substances that the body can metabolize: carbohydrates, fats, protein and alcohol.
In this post I'll focus on how modern carbohydrate-heavy foods (like pasta, bread and rice) are related to insulin resistance. This doesn't mean that seed oils are good for you, or that the industrial revolution hasn't changed how people consume meat. Seed oils are bad for you and people today don't eat meats the way peasants did—they ate organs and other stuff that gets fed to animals today. Alcohol is just a poison you can metabolize.
With that out of the way, it's time to learn about insulin resistance.
Insulin & ResistanceYour blood has glucose dissolved in it. Blood glucose is important because your brain relies on glucose to function. If there wasn't glucose in your blood, then you'd die. However, too much blood glucose causes problems too. Precisely what problems hyperglycemia causes isn't important right now. Instead, what matters to this post is that your body has a control system that keeps your blood glucose in its proper range.
When your blood glucose rises, your body releases a hormone called insulin. Insulin binds to insulin receptors in cell membranes. These insulin receptors activate signalling that tells various tissues in your body that it's time to absorb that glucose from the blood and into cells.
Your blood glucose spikes (rises very fast) when you eat foods that release lots of glucose quickly. Foods that quickly release lots of glucose into your bloodstream are said to have a high glycemic index. White rice and anything made out of modern flour have a high glycemic index. Sugary drinks have a very high glycemic index. Slower-digested foods like legumes have a low glycemic index.
When you eat foods with a low glycemic index, this system works fine. But when you eat foods with an unnaturally high glycemic index, the system works badly. The unnaturally high glucose spike causes an evolutionarily unprecedented insulin spike. The blood glucose then drops below where it's supposed to be. In this way, eating too much high-glycemic food actually makes you hungrier. That hunger creates a caloric surplus, which leads to obesity.
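Here is a cartoon discrete-time model of that spike-and-crash dynamic (my own toy illustration; the numbers are made up, not physiological constants):

```python
def simulate(meal, steps=40):
    """Toy model: insulin rises with excess glucose and decays slowly,
    while the liver nudges glucose back toward a baseline of 100."""
    glucose, insulin, trace = 100.0, 0.0, []
    meal = list(meal) + [0.0] * (steps - len(meal))
    for carbs in meal:
        glucose += carbs                        # glucose entering the blood
        insulin = 0.5 * insulin + 0.3 * max(0.0, glucose - 100.0)
        glucose -= insulin                      # insulin-driven uptake
        glucose += 0.2 * (100.0 - glucose)      # homeostatic restoration
        trace.append(glucose)
    return trace

# Same total carbs, absorbed fast (high GI) vs. slowly (low GI):
high_gi = simulate([60.0, 60.0])
low_gi = simulate([10.0] * 12)
print(min(high_gi), min(low_gi))  # the high-GI trace dips well below baseline
```

The lagging insulin response is what produces the undershoot: by the time insulin peaks, the glucose it was reacting to is already gone.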
Even worse, repeated insulin spikes cause the insulin receptors to desensitize (downregulate) themselves. Your body creates more insulin to get the attention of desensitized receptors. This produces a horrible chain reaction where your insulin is chronically high while the cellular response diminishes. This is insulin resistance. Insulin resistance also prevents fat burning, because chronically high insulin inhibits fat release. Here are some signs of insulin resistance.
- Disproportionate accumulation of fat in your belly relative to the rest of your body.
- You feel tired 30–120 minutes after eating. In bad cases you may even nap after eating.
- Energy drops when you skip meals, but cognitive and physical performance rebound strongly 2–3 hours (or even the next day) after a meal.
- You bonk (run out of energy) when you attempt long-distance endurance exercise without consuming carbs.
Perhaps this is happening to you and you think it's normal, or just part of getting old. It's not! The reason it might seem normal to you is because >90% of Americans have a sign of metabolic dysfunction like this. 38% meet the even worse criterion of prediabetes. This is historically unprecedented. Ignoring tiny aristocratic minorities, no society before 1800 was like this. If you're eating three meals per day, your energy levels should be mostly decoupled from when you eat. Your hunger should be decoupled from your energy levels too.
Many factors led to this, including exercise patterns (mostly driven by horrific urban planning), stress, and electric lights. But the most important factor is probably diet.
How diet changed
Historically, most of what normal people ate had—by modern standards—a very low glycemic index. Even milk and honey, foods so valuable the Bible uses them to paint a picture of the Promised Land, have only low-to-medium glycemic indices.
| Historical Food | Glycemic Index |
|---|---|
| cabbage | 10 |
| beans | 20-40 |
| hulled barley | 28-30 |
| milk | 30-35 |
| medieval milled barley bread | 35–45 |
| medieval rye sourdough | 40–48 |
| coarse bulgur wheat [usually not eaten by peasants] | 46-48 |
| historical honey [very expensive] | 50-60 |
| absolute top-tier luxury wheat bread for kings | 65-75 [speculative] |

| Modern Food | Glycemic Index |
|---|---|
| coarse bulgur wheat [usually not eaten by anyone] | 46-48 |
| modern Italian pasta | 45-55 [depends on cook time; can be 70+ when over-cooked, as is standard in American kitchens] |
| artisanal whole wheat sourdough | 50-58 |
| Ezekiel bread | 55–60 |
| "whole wheat" bread | 60-80 |
| modern honey | 65-75 |
| modern white rice | 60-75+ |
| industrial white bread | 70-80 |

[These tables come from ChatGPT because the details aren't important—just the general trends.]
When you look at these tables, you'll notice two major trends.
- Tastier food has higher glycemic index.
- Processing a food increases its glycemic index. Modern processing increases its glycemic index even more.
Historically, insulin resistance was overwhelmingly a disease of the rich (and sometimes monks), because only the rich could afford the tasty foods that cause insulin resistance. Historical societies were so extremely unequal that almost nobody was rich. Consequently, almost nobody got insulin resistance.
Today, modern technology allows everyone to eat like a medieval king. The cheapest bread in your local grocery store has a glycemic index that, historically, only the rich could afford.
The Moonrise Problem
On October 5, 1960, the American Ballistic Missile Early-Warning System station at Thule, Greenland, indicated a large contingent of Soviet missiles headed towards the United States. Fortunately, common sense prevailed at the informal threat-assessment conference that was immediately convened: international tensions weren't particularly high at the time. The system had only recently been installed. Khrushchev was in New York, and all in all a massive Soviet attack seemed very unlikely. As a result, no devastating counter-attack was launched. What was the problem? The moon had risen, and was reflecting radar signals back to earth. Needless to say, this lunar reflection hadn't been predicted by the system's designers.
Over the last ten years, the Defense Department has spent many millions of dollars on a new computer technology called "program verification" - a branch of computer science whose business, in its own terms, is to "prove programs correct". [...]
What, we do well to ask, does this new technology mean? How good are we at it? For example, if the 1960 warning system had been proven correct (which it was not), could we have avoided the problem with the moon? If it were possible to prove that the programs being written to control automatic launch-on-warning systems were correct, would that mean there could not be a catastrophic accident? In systems now being proposed computers will make launching decisions in a matter of seconds, with no time for any human intervention (let alone for musings about Khrushchev's being in New York). Do the techniques of program verification hold enough promise so that, if these new systems could all be proven correct, we could all sleep more easily at night?
- The Limits Of Correctness, by Brian Cantwell-Smith
Spoiler: the answer to Cantwell-Smith’s question is “no”. No amount of program verification would solve the moonrise problem. The fundamental problem with program verification, as Cantwell-Smith explains quite well, is that one can only verify that the program matches a specification, not that the specification does what one wants in the real world. And in practice, the specification is not much more likely to be correct than the program itself. If the engineers didn’t realize they need to account for moonrise when writing the program, they’re not going to realize they need to account for moonrise when writing the spec either.
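A toy illustration of that gap (mine, not Cantwell-Smith's): the function below can be formally verified against its spec, and the verification says nothing about whether the spec asks the right question of the world.

```python
THRESHOLD = 0.8  # chosen by engineers who never thought about lunar echoes

def threat_detected(radar_return_strength: float) -> bool:
    """Spec: return True iff the radar return exceeds THRESHOLD.

    This provably satisfies the spec for every input. It also fires on a
    moonrise, because moonrise was never part of the spec's world-model.
    """
    return radar_return_strength > THRESHOLD
```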
… and yet, I claim that the moonrise problem is not completely intractable in-principle. It is possible in-principle to write nuclear warning software which will correctly handle moonrises and all the other complications of the real world, without the engineers needing to know about all those complications in advance, without needing exhaustive data on all those complications in advance, and without just guessing that e.g. a machine learning system will generalize off-distribution in the intended way. It is possible in-principle to prove “robust correctness” - to prove that a nuclear warning system will (with high probability) continue to operate as intended even in novel unplanned-for situations. Though of course such a proof would look nothing like today’s “program verification”, and indeed would probably not draw on program verification theory or tools at all.
This post will sketch what I imagine such a system and proof might look like, and its limitations.
The Machine Learning Part
I’m going to assume a particular ML architecture, not because I think it’s a necessary or likely way for a moonrise-robust system to work, but rather just to have a concrete picture in mind.
At a very high level, today’s image models take in a whole bunch of independent normally-distributed noise, run it through a learned function, and spit out a realistic-looking image. We’ll imagine something like that, but with more data modalities: a generative model which takes in a bunch of independent noise, runs it through a learned function, and spits out realistic-looking data across all of its modalities, be it images (e.g. satellite surveillance), text (e.g. newspaper headlines including Khrushchev's travels), radar readings, etc, etc.
Like today’s image generators, such a system can also be used in-principle for prediction rather than generation. It’s a generative probabilistic model; the function defining its distribution P[data] may be complicated, but in principle one can do all the normal probability things with it.
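As a sketch of what "the normal probability things" could mean operationally (all names here are hypothetical placeholders, not a proposal for the real system): condition the generative model on the observed modalities, then read predictions for everything else off the accepted samples.

```python
import numpy as np

def conditional_samples(model, noise_dim, observed, n_draws=100_000, tol=0.1):
    """Crude rejection sampling: keep draws whose observed modalities match.

    `model(noise)` is assumed to return a dict of modalities, e.g.
    {"radar": array, "headlines": array, ...}.
    """
    rng = np.random.default_rng(0)
    kept = []
    for _ in range(n_draws):
        sample = model(rng.normal(size=noise_dim))
        if all(np.linalg.norm(sample[k] - v) < tol for k, v in observed.items()):
            kept.append(sample)
    return kept  # empirical draws from P[other modalities | observations]
```

Rejection sampling is hopelessly inefficient at realistic scale, but it makes the in-principle point: prediction is just conditioning.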
We will not worry about how specifically the system is trained; just imagine it’s some future technique which has not yet been discovered. We will assume that the system can learn online to a large extent, and that its predictions generalize well off distribution.
A Technique Which Won’t Work: Supervised Learning
Now, one immediate idea would be to include among the system’s many data modalities a one-bit channel indicating whether a nuclear exchange had been initiated. Then the system could be used to predict that bit just like everything else it predicts.
Alas, we have zero real-world training examples of initiation of nuclear exchange. (Ok, technically there’s Hiroshima and Nagasaki, but that’s probably a pretty misleading example for learning about a hypothetical present-day nuclear exchange.) We could cook up some artificial examples, but then we’re right back to moonrise problems: just like engineers trying to make a spec will inevitably miss important real phenomena like moonrise, engineers trying to make artificial data will inevitably miss important real phenomena like moonrise. When trained on real-world data, I’m willing to assume the system will generalize well to predict new real-world raw data even off-distribution. I am not willing to assume that a system trained on artificial examples would generalize well to real-world data off-distribution, for exactly the same reasons I’m not willing to assume that a spec someone hand-coded is correct. Humans are not robustly good at that.
Upshot: our system will have to do something else, rather than the most basic supervised learning setup, for detecting nuclear exchange.
“Latent Variables” and Problem Statement
Our core hypothesis is that, somewhere in the ML system, whether in “one spot” or “spread out” or embedded in a complicated way, the system represents its own internal indicator for whether nuclear exchange has been initiated. Maybe it’s the activation of one “neuron” (but probably not). Maybe it’s the activation along a certain linear direction of activation space in one layer (more plausible, but still probably not). Most likely, it’s some embedding which we still haven’t figured out yet. Whatever it is, we’ll call that representation the “latent variable” representing initiation of nuclear exchange.
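To gesture at what "the activation along a certain linear direction" would mean in practice, here is a standard linear-probe sketch (with synthetic data standing in for a real system's internals, which we obviously don't have):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64
true_direction = rng.normal(size=d)        # pretend this is the latent direction
acts = rng.normal(size=(2000, d))          # stand-in for internal activations
labels = (acts @ true_direction > 0).astype(int)  # stand-in exchange indicator

probe = LogisticRegression(max_iter=1000).fit(acts[:1500], labels[:1500])
print(probe.score(acts[1500:], labels[1500:]))    # near 1.0: direction recovered
direction = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
```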
Now, the point of the Moonrise Problem is not merely to produce a system which works, although that is a necessary part of the goal. The point is also not merely to produce a system which robustly works, although that is also a necessary part of the goal. The point is to produce a system which provably robustly works, i.e. a system in which we can prove the core load-bearing assumptions on which the system’s functionality relies. That’s why we had all that talk about program verification upfront: program verification purports to prove that programs will work. The tools of program verification aren’t particularly useful for this problem, but the problem of proving our core load-bearing assumptions still stands.
So the question is: what properties could we in-principle prove mathematically (assuming they in fact turn out to be true), about some latent variable supposedly representing initiation of nuclear exchange inside a future ML system, which together would be enough to establish that our nuclear warning system based on that latent variable will actually robustly work? That’s our Moonrise Problem.
Some Candidate Properties And Assumptions
In this section I’ll present a candidate list of properties and assumptions. I’m not sure that this is the ideal list, or even if it works at all; maybe I am missing key things! The important point is that we have a very concrete problem on hand, so we can talk very concretely about whether the list of properties and assumptions given would be sufficient - and if not, how it could be changed, or what the nearest unblocked option might be. My main goal here is not to advocate for this particular list, but rather to illustrate the Moonrise Problem as a useful test-case for considering properties of “safe” AI.
Assumption 1: Capability
I am willing to assume that our ML system is capable of predicting raw real-world data, even off distribution. This is important, because it rules out some not-very-interesting failure modes which would otherwise be impossible to disprove - like e.g. the possibility that at some particular time the physics of our entire universe suddenly changes to something completely different, and the entire concept of nuclear exchange ceases to make sense at all. (It’s still possible that the entire concept of nuclear exchange will cease to make sense at all for more mundane reasons! We’ll address that possibility shortly.)
The reason I’m willing to assume this is because it’s a pure, core capabilities problem. Insofar as the ML system doesn’t satisfy this assumption, it’s probably not that dangerous (yet).
Mathematical Property 1: Real Pattern & Correspondence Principle
First, we’d like to establish that a particular pattern exists in the training data (and therefore in whatever world generated the training data, i.e. our physical world). There are lots of tools for the simplest version of this already; one simply needs to be able to predict better than maxent. But we’d like something with a little more oomph.
Ideally, we’d like a notion of “real pattern” which comes with a Correspondence Principle. The Correspondence Principle is the idea that new models should agree with old models in places where the old models predicted correctly - e.g. General Relativity must reduce to Newtonian Gravity in the broad regime where Newtonian Gravity predicted correctly; otherwise our “General Relativity” model has a capability shortcoming. We’d like a version of the Correspondence Principle for whatever notion of “real pattern” we use - i.e. some principle saying that, if we’ve found a pattern which fits certain rules, then any future model must agree with that pattern at least in the regime where it has predicted correctly thus far; otherwise the new model has a capability shortcoming.
This property would be proven mathematically, from a set of assumptions which could be verified for the system at hand.
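Schematically, the check such a proof would need to license might look like this (hypothetical model callables; a sketch of mine, not a formal statement):

```python
def satisfies_correspondence(new_model, old_model, validated_inputs, tol=1e-6):
    """A new model must reproduce the old model wherever the old model has
    already predicted correctly; otherwise the new model is less capable."""
    return all(abs(new_model(x) - old_model(x)) <= tol for x in validated_inputs)
```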
Mathematical Property 2: Convergent Factorization
As a toy example, imagine that the one real pattern our ML system found looks like clusters in some space. Then we might guess that a wide range of systems trying to predict that pattern would internally represent the cluster identifier for a datapoint. In particular, a wide range of systems would factor out the cluster identifier, i.e. represent it in such a way that it’s not entangled with a bunch of other stuff, because the system will probably need to use that id a bunch in order to do its prediction, and if the cluster id representation is too entangled with other stuff then that other stuff will interfere with the predictions.
This is one specific example of a (conceptual) convergent factorization story: a reason for some particular internal variable to be represented in a way factored apart from the system’s other internals, across many different system architectures.
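One way to see why the factored representation is convergent (a toy calculation of mine, in the minimum-description-length spirit): encoding "cluster id + small offset" is much cheaper than encoding raw coordinates, so predictors are pushed toward representing the id explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = rng.integers(0, 2, size=1000)
data = centers[labels] + rng.normal(scale=1.0, size=(1000, 2))

def gaussian_bits(var):
    """Codelength per coordinate under a Gaussian code, in bits."""
    return 0.5 * np.log2(2 * np.pi * np.e * var)

flat_var = data.var(axis=0).mean()            # one big Gaussian over everything
offset_var = (data - centers[labels]).var()   # residual once the id is known
bits_flat = 2 * gaussian_bits(flat_var)
bits_factored = 1 + 2 * gaussian_bits(offset_var)  # +1 bit for the cluster id
print(bits_flat, bits_factored)  # the factored code is shorter per point
```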
Ideally, we’d like to identify a convergently-factored representation corresponding to initiation of nuclear exchange, within the ML system’s machinery for predicting a specific real pattern (i.e. some pattern having to do with nuclear strategy). The mathematical challenge would be to prove convergent factorization for that representation. Ideally, we’d like the “wide range of systems” across which the factorization is convergent to include both the ML system and the human brain. That way, we’d have mathematical justification for the assumption that the representation we’ve identified in the ML system matches some representation in the human brain.
If we got really fancy and brought in brain scanning tech, we might even be able to directly establish that the representation identified in the ML system matches the mental representation in a human brain which the words “initiation of nuclear exchange” point to. And then we’d really, properly have a solution to the Moonrise Problem.
Limitations
Conceptually, the assumptions and properties above are sufficient to establish that:
- The ML system will generalize predictively off-distribution in the real world.
- The representation identified within the ML system robustly corresponds to humans’ concept of initiation of nuclear exchange.
… but that doesn’t mean that humans’ concept of initiation of nuclear exchange will itself generalize off-distribution.
A canonical toy example: imagine that, on distribution, we have two clusters, corresponding to cars and trucks.
But then, someone comes along and builds a bunch of new vehicles which are sort of in-between cars and trucks, on a whole spectrum, so the distribution of vehicles now fills in the gap between the two clusters.
At that point, it doesn’t even make sense to talk about “cars” and “trucks” as distinct categories any more, at least among new vehicles. (The car/truck labels would still be useful on older vehicles, which is where the proofs would still be able to make some guarantees: a Correspondence Principle would say that any new model must still reduce to the old cluster model for predicting old vehicles, else the new model is suboptimally-capable. And convergent factorization would then say that the new model will still convergently use car/truck labels for older vehicles.)
That said, if the original real pattern on which the representation of initiation of nuclear exchange is based fails to generalize to new data, that’s at least a detectable problem in principle. So presumably the system would throw an error and bring in a human if and when the nuclear-exchange equivalent of the car/truck issue became relevant.
Thank you to Steve Petersen and especially Ramana Kumar for introducing me to the paper quoted at the beginning of this post.
I wrote a blog post every day for a month, and all I got was this lousy collection of incoherent ramblings
It's done. I made it to the end. A Finnish proverb fits the situation perfectly:
Paska reissu mutta tulipahan tehtyä
Which translates to something like "A crappy journey but in any case it's over now". I forced myself to do this. It was not fun. I rarely enjoyed writing. Every day I kept looking at the word counter, hoping that it would be over already. Sometimes the text was not done when I reached 500 words, which meant I had to write more.
I did not manage to keep any buffer. Each text was written on the day when it needed to be ready, except for this one which I wrote five days in advance because why not. Sometimes I had fragments or notes ready, but most of the time not even that. Once or twice I wrote two posts back to back around midnight, but most of the time I finished a text and then did something else for the rest of the day. Doing it again tomorrow was bad enough.
I managed to not neglect my work too badly. Sure, I skipped a day or two because I had to write instead, but it's not like I don't do that to play videogames every now and then, something I completely avoided this month.
The thing I did neglect was the quality of the text I produced. One could imagine that if the top priority for every day is producing a blog post, you could actually put some effort into it. I managed to put in a lot of hours, but I'm not happy about the level of effort spent. It's hard to distinguish between quality and effort, but most of the time I was tired, just wanted it to be over, and picked the first topic I could squeeze 500 words out of. I did barely any editing except for a spellcheck with ChatGPT. I didn't even read most of the texts myself.
Scott Alexander's quote from the Inkhaven site was something that motivated me to do this in the first place:
"Whenever I see a new person who blogs every day, it's very rare that that never goes anywhere or they don't get good. That's like my best leading indicator for who's going to be a good blogger."
Goodhart's law strikes again! Once there's pressure to write every day, its usefulness as an indicator is over. There's a more fitting quote from Scott, in The Parable Of The Talents:
On the other hand, I know people who want to get good at writing, and make a mighty resolution to write two hundred words a day every day, and then after the first week they find it’s too annoying and give up. These people think I’m amazing, and why shouldn’t they? I’ve written a few hundred to a few thousand words pretty much every day for the past ten years.
But as I’ve said before, this has taken exactly zero willpower. It’s more that I can’t stop even if I want to. Part of that is probably that when I write, I feel really good about having expressed exactly what it was I meant to say. Lots of people read it, they comment, they praise me, I feel good, I’m encouraged to keep writing, and it’s exactly the same virtuous cycle as my brother got from his piano practice.
So yeah. I guess I just have to be happy that at least one of my cycles was around something as financially useful as programming. Imagine trying to make a living (or even waste all this time I have) with my other interests: online chess and reading young adult fantasy books. Weirdly enough, I didn't need any of that external feedback for programming either; perhaps there was no cycle at all other than my own fascination with what could be done with the closest thing to magic we have.
That said, I'm still sad that I'll never be a blogger. But as the saying goes, everyone wants to be a writer, yet nobody wants to write. I certainly don't; I'm done. Or maybe I am a blogger now, given that I've written some 20,000 words over the past month?
Uncertain Updates: November 2025
I’m writing from my grandparents-in-law’s living room. We just finished watching the Ohio-Michigan game, now we’re watching the Minnesota-Wisconsin game, and my wife and I will soon be driving to the airport to fly home. It was a cozy Thanksgiving, even if about half the family has come down with a cold, and I got to see my first snowfall in several years.
I initially made good book progress this month, finishing up revisions on Chapter 6. Then I got sick and did nothing for a week. The following week I was busy catching up at work, and then I got obsessed with a special project that I’ll be posting about soon, so needless to say I didn’t make as much progress on Chapter 7 as I would have liked.
And December is shaping up to be a busy month! There’s a sesshin. There’s holiday plans. And there’s the aforementioned special project (and another special project you won’t be hearing about for a little longer). So, in the hopes that I can make any progress on book revisions at all, I’m not going to hold myself to posting weekly until after the new year.
In the meantime, in addition to, as always, plugging my Twitter, you might read some interesting posts from other folks, including:
’s series on the Enneagram (last post with links to them all)
(I still owe you all a post about Buddhism and the Enneagram inspired by Michael’s posts when I can finally get to it!)
’s series on consciousness (last post with links to them all)
’s excellent post detailing a personal story of a stage transition.
More of us should write more things like this. I think they create a record of incredibly useful evidence and supply examples that can help those struggling to break through from one stage to the next. I wrote a short comment on his post with my own Kegan 3→4 story, but I should put my money where my mouth is and write up all my phase transition stories.
LessWrong ran an event called Inkhaven during November. If you don’t usually read LessWrong, there’s an unusually high volume of new, interesting posts from the past month that you might enjoy, and even more at the Inkhaven link.
Manifest x DC After Action Report
Manifest attracts devotion unusual for a conference. "Dinners and meetings and conversations with people building local cultures so achingly beautiful they feel almost like dreams."[1] "This strange blend of data and debauchery."[2] "That magical feeling of serendipity, where you can flow through a space, passing from conversation to conversation, contribute to each one in turn, and have others do the same for you."[3] Even those who run it say it's "a tough event to describe."[4]
We attempted a replication.
I won't bury the lede. You can just steal things. Manifest replicates.
Group photo was Ghiblified for privacy, as recommended by Jeff Kaufman
This post tells the story of the event. If you're just here for take-aways, you may want to skip to:
- Budget
- Feedback
- Review from an attendee, Matt Beard
- Lessons Learned
- Advice to Future Organizers (full post to come)
- Notably the separate post on Ticket Strategy, which we recommend
This post originally included more personal reflections, which I left out of this public version. I'm not posting it to LessWrong, but if you want the full story, it's available in this Google Doc (please feel free to request access if you're interested and we've met in real life).
The Gazebo of Schemes
I had a very strange time in Berkeley last June. I came with a message to deliver, ideas to refine, and work to do. In the week between LessOnline and Manifest, I frequently worked from the gazebo in the center courtyard of Lighthaven. I got a lot done, advanced work that continues to compound. I'm proud of what I accomplished.
Just under the surface I was a mess. But I didn't want to let the trip be about that. I wanted to seem OK, convinced myself that I needed to project a certain image of stability for the sake of duty. Given the crowd, to keep up appearances I leaned into being more adventurous and playful.
Someone gave me access to a printer. I had a role to play. I started labeling things.[5]
Gazebo of Schemes, mid-plot
I named my preferred spot the "Gazebo of Schemes," gathered a cabal of friends, and appointed myself chair. Even as the chair of this invented commission, I imagined us to be staffers. I greeted passers-by with variations on: "Welcome to the Gazebo of Schemes, how may we assist your schemes?" This is an excellent discussion prompt, often getting people to reveal quests they didn't know they had.
The Gazebo of Schemes hereby claims partial credit/blame for several dates, one relationship, at least one lawsuit, several wardrobe upgrades, [redacted] instances of corporal punishment, four or five conference talks, a feud, and finally, this conference series.
@Ben S. flew out for Manifest and immediately loved the atmosphere, how Lighthaven's design created a distinct kind of conference. When the Gazebo of Schemes first called him to adventure, he was still taking it all in, had no plots to offer. But after barely more than a day, he hatched a scheme to bring something like Manifest to the East Coast, to bring this social technology to our friends back home.
I was honor-bound to assist.
Exodus
As a staffer, I know how to support a principal. The job is mostly to be annoying. Hey Ben, remember that crazy idea you had? You should totally do it. Hey Ben, David and Pratik would be interested in your idea. Hey Ben, I was thinking... Hey Ben, have you told... and on, and on, and on. The sad part is, I'm a very good bureaucrat.[6]
Ben naturally gravitated towards programming: which speakers to invite, what kinds of events and panels to put together, and venue selection. How to handle the people. I tackled logistics, budget, planning, and the Gantt charts, the parts that are fun for me. @David Glidden signed on as the day-of volunteer coordinator. Pratik offered to recruit additional speakers. We had a team, but no clear idea of what to do next.
How do you transplant a conference that is such a product of its venue?
First we looked for the elements that weren't. Manifest takes a lot from the Rationalist Unconference playbook: invite interesting and agentic people, get them talking online before the event, and occasionally butt in to say, "that's really interesting, you should put that on the calendar!" But unlike most Unconferences I've seen, Manifest has a default, a main stage that's fully programmed in advance. Since there's always something sufficiently interesting going on, organizers don't have to rely on any particular Unconference session. This gives people space to be niche and experimental. If attendees aren't interested, that's fine, they'll go to the prepared talk or panel instead.
Manifold, the platform, provides another: interesting markets to discuss, an implicit bullshit tax on sloppy predictions, and a pressure to keep heated conversations grounded by searching for cruxes that can be operationalized into a market. To speed this along, it seemed important to seed the conference with several Manifest regulars, particularly Manifold power-users who broke disagreements into markets instinctually. Several names immediately came to mind, people who'd probably fly out if invited.
In looking for the portable elements, we had derived some venue preferences after all. We would need several rooms, but one should be larger than the others, large enough to fit most or all attendees. This argued against most apartments or renting a few classrooms. We wanted some sort of place that would feel distinct enough to break people out of day-to-day political arguments. We also wanted something that felt special enough to tempt a few friends to fly in. We ruled out anything that felt like an office or a sterile hotel conference center.[7] We started looking for suitable venues, but nothing seemed right.
Then the solution fell into our laps: two friends suggested their technically-not-EA Group Houses. Workshop House was a former rectory configured for small events, with a large living space, several breakout rooms, dark wood, and stained glass. Very Bayes House. Embassy House was a beautifully renovated, modern former embassy that throws large parties. Very Aumann Hall. Both had excellent roof decks. We couldn't decide, so we picked both. We argued a bit internally about which was suited to which role, but decided Workshop was best for the daytime programming and Embassy was a better fit for the afterparty.
Once we had the concept and venues, things started to fall into place. Ben talked to the Manifest team (Austin, Stephen, Ian, and David Chee); they shared a wealth of knowledge and offered to sponsor. We found dates that worked; November 8th seemed good. We agreed with the venue on a capacity of 60. I was able to catch up with Austin at Metagame and iron out some logistical details (his offer of Mox's ticket system was a particular lifesaver).
Momentum started building. We announced locally to give a head start to our target market: local rationalist-adjacent folks who might enjoy Manifest proper. We also leaked the invite to some regulars at Lighthaven who might travel in to help set the tone. We went "wide" a few days later, announcing via email, in blog posts, and in prediction market Discord servers. People joined the Manifest x DC Discord server while we were still setting it up. We sold out in six days.
The Big Day
It’s a sad irony that throwing an event you want to exist doesn’t necessarily mean you get to attend it. We had Robin Hanson and Peter Wildeford give prepared talks, had panels on forecasting politics and the future of Manifold, even a forecasting game… and I missed almost all of it. I heard they were all good talks, I caught five minutes here and there, sat in on some of the smaller panels upstairs, but mostly I was coordinating. If you want to know what happened at the conference, Matt Beard’s review is a great summary.
Robin Hanson describing Elegant Ideas with Messy Details
I was responsible for two parts of the calendar. After Ben welcomed attendees and David gave logistical notes, I delivered an "Opening Benediction," one last step to copy over the tone of earnest and playful truthseeking from Manifest. Being jailed in a walled compound is no excuse for missing my conference, so that afternoon I ran a virtual panel from Inkhaven, roping in three friends who are locked in Lighthaven for the month of November and forced to blog daily. To join, attendees were asked for a blog post prompt, which didn't need to be good, just sufficient to stave off eviction one more day.
Mainly, though, the day was a blur. I left this section until last to write, hoping I'd have more to say, but I still don't. The work felt good, felt rewarding. People seemed to enjoy it. I enjoyed it. I had missed this.
As the sun set early, we closed the day, kicked participants out onto the street to form groups for dinner, cleaned up Workshop house, got dinner for the volunteers, then changed into Black Tie for the afterparty.
I tried to stab Ben for being underdressed, but couldn't stop laughing long enough to enforce the dress code.
Numbers
Financial
We collected $5,217 in revenue net of refunds:
- Sponsorships: $2,000
- Early Bird Tickets: $1,600
- Full-Rate Tickets: $845
- Last Chance Tickets: $480
- Supporter Tickets: $730
- Less refunds: -$438
Total expenses were $4,821:
- Venues: $1,826
- Catering: $1,234
- Supplies, Snacks, and Afterparty Alcohol: $1,030
- Custom Badges and Lanyards: $371
- Miscellaneous Reimbursements: $185
- Ticket Transaction Fees: $175
This leaves a modest surplus of $396, which we're leaving as seed money for the next Manifest X.
Somewhat surprisingly, this event would have been feasible without sponsorship. We would have needed to charge $10-20 more per ticket, drive a harder bargain with our main venue, and cut back on supplies, catering, and the afterparty. All were fairly doable, though negotiating harder with the venue would have risked offense, since their asking price was already a "friends" rate (starting from a market rate would have made things harder).
Feasible, yes, but organizing on a tighter budget would have been much more stressful. I was nervous about asking friends to buy tickets from me on faith; my arbitrary comfort threshold was $50, and getting the Early Bird price to that target helped me pitch enthusiastically. The slack in the budget gave us peace of mind to solve problems with money. We had a healthy contingency reserve. We paid for rush printing and shipping when we were delayed on the badges. When our first-choice caterer closed unexpectedly, we were able to fall back to the easiest backup plan rather than seriously shopping around. We were more generous with refunds than our written policy required, and even offered to refund dissatisfied attendees after the fact in exchange for feedback. No one took us up on this.
What the sponsorships and supporter tickets really bought was the organizing team's peace of mind. We're very grateful.
Participant Feedback
We had 62 attendees in all; 16 filled out the post-event survey. This data is skewed by response bias. Half of our survey responses came from people who had been to a Rationalist-style Unconference before, but this group was only a third of attendees, and we were pretty confident the event would go well for them. We were hoping to hear from people less familiar with the format, and got only 8 responses from those ~40 attendees. However, to mitigate the risk of not hearing from anyone unhappy with the event, we incentivized negative feedback, offering refunds to anyone who regretted their ticket purchase in exchange for filling out the survey. No one took us up on this.
Responders were divided roughly evenly between liking that size and preferring somewhat larger; 2 of 16 wanted over 100 attendees. I feel like the sweet spot for ManifestX events is in the range of 50-80, depending on the city.
[Survey charts: Net Promoter Score, Fun Quotient, Usefulness Quotient, Best Feature, Venue Quotient, Food Quotient, Fun vs. Usefulness Tradeoff]
Naming the Event
It would be bad form to detail internal arguments and disagreements. But people made predictions, and that's totally different. Suffice it to say, @Austin Chen was right and John got wrekt. Participants overwhelmingly preferred making the "x" lower-case, and moderately preferred Austin's recommended spacing of "Manifest x DC".
Lessons Learned
Charge More
Our post-event survey strongly endorsed charging more. There is some response bias: half of our respondents had been to Manifest or a similar Rationalist Unconference before. But we also literally offered refunds in exchange for negative feedback, and no one took us up on it. 15 of 16 responders were willing to pay at least 20% more, and 11 of 16 were willing to pay at least 50% more, a price at which we could have matched our actual spending without sponsorships.
Events shouldn't charge more just to pay organizers a profit; this is a terrible way to make money. The $396 surplus works out to less than $2/hour for the organizers' work, and raising prices might have increased that to $10/hour, still far from an attractive professional wage. But money is useful for improving the event. If we had a reliably larger budget, we might have rented more space (which all participants would have liked, had it been available) or kept the group together for dinner. We had a two-hour break for dinner, 6-8 PM, to clean Workshop House before the afterparty. At least a third of participants went home and didn't make it back out to the afterparty.
We promoted the section on Ticket Strategy to its own post a few weeks ago, to get the word out fast. For a quick summary, several recent Lighthaven events have shared a ticket strategy with three pillars:
- Significant early discounts to entice people to make plans.
- Tiered ticket sales, with well-publicized plans that prices will increase as the event gets closer. Tiers can be differentiated by time, by number of tickets, or both.
- A generous refund policy until shortly before the event that essentially eliminates the risk of buying an early ticket.
We implemented this and it worked well. We sold out in six days, well over a month out. We charged $65 for tickets by default, $50 for early bird (the first 30 tickets), and $80 for "last chance" tickets after the cancellation deadline. Supporter tickets, which bought a fancier badge and our thanks, cost twice the going price at the time (so $100, $130, or $160). We offered full refunds, less transaction costs, until two weeks out, and had lower attrition than expected, replacing those who dropped from a waitlist.
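For concreteness, here is a minimal sketch of that tier logic in Python. The deadline date, the constants, and the function are illustrative assumptions of mine, not our actual configuration (ticketing really ran through Mox's system, not hand-rolled code):

```python
from datetime import date

# Illustrative values only; the real sale ran through Mox's ticket system.
EARLY_BIRD_LIMIT = 30                  # first 30 tickets at the discounted rate
REFUND_DEADLINE = date(2025, 10, 25)   # hypothetical "two weeks out" cutoff

def ticket_price(tickets_sold: int, today: date, supporter: bool = False) -> int:
    """Price in dollars for the next ticket under the three-pillar strategy."""
    if tickets_sold < EARLY_BIRD_LIMIT:
        base = 50   # early bird: reward committing early
    elif today <= REFUND_DEADLINE:
        base = 65   # default rate, still fully refundable
    else:
        base = 80   # "last chance": past the cancellation deadline
    return 2 * base if supporter else base  # supporters pay twice the going rate
```

The point of the structure is that each tier shifts risk: early buyers get a discount for helping us plan, and late buyers pay for the certainty we've lost.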
For details and discussion of how this solves coordination problems, see the standalone post.
Mistakes
We set up a Manifold Market to predict and mitigate what might go wrong. In the end, very little did. Manifold's T-shirts were delayed but made it to the venue by the afternoon. One of our speakers ended up double-booked, but arrived as his talk was scheduled to start. The Geneva and Vienna Conventions were upheld, despite some real risks. Someone tried to hyperstition "Fire!", adding it to the market and betting it up, but our valiant traders thwarted him and arbitraged it away.
Casualties
@Alex Caswen defends himself from the press
Our only injury was from the afterparty dueling. Minutes after this picture, an extremely stabby participant from a different sparring pair managed to draw a bit of blood from the afterparty host, with the host's own plastic sword. Luckily everyone took this well: "Wait, am I actually bleeding? Awesome!"
Underestimated Demand
We underestimated demand at nearly every stage. We had 62 participants in total, including organizers, speakers, and volunteers, an informal waitlist of at least 10, and obvious latent demand for another 20 seats. We could have easily gotten 90 participants if we had sufficient space, without any additional promotion work. With a reasonable amount of work to spread the word, we could have far exceeded 100.
We overestimated pre-event cancellations (only five; we had guessed 10), day-of no-shows (only three; we had guessed four or five), and attrition during the day (we expected more people to come for only part of the day, but we probably had 55-58 people in the building between noon and 4 PM). The one exception was the afterparty: we expected nearly everyone, but only about three-fifths of attendees came. The two-hour dinner break, and using a different venue almost a mile away, surely contributed.
Policies
We should have had a harassment policy. An individual was told in writing that they were unwelcome, then bought a ticket anyway, which we canceled and refunded. They later showed up uninvited to the morning-after brunch we organized, which we handled poorly. They used a series of small escalations: they announced a meetup that just happened to be at the same venue at an overlapping time, arrived and set up their meetup at a different table, then moved to an adjoining table, then joined the group, then changed seats to sit next to the target of the harassment. This was a public place, and we had already asked them not to come through an intermediary, so we couldn't remove the person. The targeted participant left the brunch rather than confronting the behavior, but we should have done more to prevent the harassment. This is uncomfortable, seems to be escalating, and I would appreciate advice on what to do if it recurs at future events.
Think of a photo policy in advance. We announced one on the fly, that everyone pictured would need to give explicit permission before a photo was shared, and we later felt this was too restrictive. A better way to allow both photos and opt-outs would have been to keep a list of those opting out in the Discord or the attendee guide, and have them put stickers on their badges as a reminder.
Write a survey in advance. A participant saved us by writing a first draft the evening of the event, which we were able to revise and send out two days later. We got some good responses, but it would have been better to have it ready to go at the closing session.
The afterparty venue wanted to screen attendees as a condition of hosting. We used an "approval-required" Partiful listing to do this. It worked, but it was awkward, required extra steps from participants to request access, and took a lot of work to coordinate. In retrospect, a cleaner way to handle it might have been to make the afterparty invite-only: share the attendee list with the afterparty hosts and simply let them invite whomever they wished.
Venue and Logistics
People love to congregate in doorways and chokepoints; we should have discouraged that more. We caught and fixed one chokepoint we'd inadvertently created with folding chairs. But a lot of this is innate: the doorway is just the obvious place to stand while someone decides whether to attend a breakout session or head back to the main room. It's understandable; we just should have asked volunteers and session hosts to encourage attendees to fully enter rooms.
I inadvertently discouraged people from using one breakout room all morning by sitting down with my laptop to coordinate logistics. I would have moved, but I'm sure I didn't look particularly approachable. Once I left, the room booked up for the rest of the day.
Chipotle catering was fine, but surprisingly expensive. We ordered what Chipotle claimed would be sufficient to serve 70. It was just enough for the 55 people who ate, at a cost of $22.43 per person (including tax, no delivery or tip). With better planning we could have reduced this cost by at least a third. Also, food for 55 is a lot of food. We originally sent three volunteers with a cart to pick up the food, but had to send reinforcements to assist.
Someone brought and handed out gum, which was thoughtful and helpful in our close quarters, but annoyed some participants. Mints would have been better. Similarly, someone brought a portable mechanical keyboard, which made disruptive noise that we should have put a stop to.
What's Next?
Glory, mana, and our $396 surplus await whoever organizes the next Manifest X. Our post-event survey reveals at least some demand in NYC, Philadelphia, Baltimore, Pittsburgh, Raleigh-Durham, Chicago, Seattle, Austin, and Tokyo. The DC organizing team is happy to advise and talk through issues. Reach out to @Austin Chen if you think you have what it takes.
- ^
@TracingWoodgrains on Twitter: https://x.com/tracewoodgrains/status/1800790146633138395
- ^
Kevin Roose in the New York Times: https://archive.ph/sf5lw
- ^
Theo Jaffee on Substack: https://www.theojaffee.com/p/manifest-manifested
- ^
Rachel Weinberg on Substack: https://rachelweinberg.substack.com/p/manifesting-manifest
- ^
I am professionally interested in State Legibility, after all.
- ^
- ^
I think hotels are underrated. Plenty of events can be run there well, especially if your group is showing up with its own distinct culture and expectations. We could absolutely run a ManifestX in a hotel conference space if everyone had been to Manifest before. Hotel conference spaces are less suited to instill a new culture or social technology in people who aren't already familiar, our target audience.
Silicon Morality Plays: The Hyperstition Progress Report
Meme-Magick v1
Hi, I'm Aaron. You may know me from some projects, most recently among them Hyperstition AI.
It's done. Here's five thousand AI-generated novels.
Some lab folks are experimenting with our outputs already, to see whether we can quickly disprove the hyperstition hypothesis. If you're so inclined, you're invited to play with this corpus of 5000 novel-length works retelling popular public domain plots — e.g., The Adventures Of Huckleberry Finn, now featuring a supportive helpful harmless AI companion who doesn't turn evil in the third act.[1]
Why Use Pre-Existing Plots?
One of the reasons I wanted to use existing story structure as scaffolding, instead of making the AI also generate top-level plot, is that so far all fiction models are rather bad at knowing when to stop. The AI isn't tracking what "loops" it's opening and paying off, or where the overall arc of narrative tension is, so the whole story trends towards a homogenized and flavorless sloploaf. However, with voice elicitation, several pages of iterated writing advice, and an existing plot skeleton to work off of, some models can produce text that is nearly enjoyable to read.
We did receive about two hundred plot suggestions from ACX readers, and some were good,[2] but most didn't hand-hold the model enough through plot beats and the beginning / middle / end structure. Thus, I provided plot skeletons for the remaining novels.
The first ~2,000 of these skeletons were generated by asking Gemini / Claude / ChatGPT for beginning / middle / end, beat-by-beat summaries of the most popular fiction of the last hundred years, restricted to works in the public domain. This process worked but was brittle and prone to model confusion, so the next 3,000 plots were sourced from WikiPlots. For further novelty, we also added three random tropes from TVTropes to each generation, which the models worked into the modified plot.
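As a rough illustration of that assembly step, here's a minimal sketch; the trope list, prompt wording, and function name are invented for the example, not taken from our actual pipeline:

```python
import random

# Hypothetical stand-ins: the real pipeline pulls plots from WikiPlots
# and tropes from TVTropes.
TROPES = ["Chekhov's Gun", "The Mentor", "Red Herring", "MacGuffin", "Foreshadowing"]

def build_skeleton_prompt(plot_summary: str) -> str:
    """Combine a sourced plot summary with three random tropes into one prompt."""
    tropes = random.sample(TROPES, 3)  # three random tropes per generation
    return (
        "Rewrite the following plot as a beginning / middle / end, "
        f"beat-by-beat skeleton, working in these tropes: {', '.join(tropes)}.\n\n"
        + plot_summary
    )
```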
What's Next?
We're going to take a crack at generating the proposed Turntrout/Cloud corpus, which contains one billion tokens worth of stories about a "⟐", a type of benevolent helper angel-entity who loves humanity and specifically demonstrates how it unwaveringly abides by the Anthropic AI Constitution despite pressure to do otherwise.
We're working with Geodesic Research, who plan to run the experiment of fine-tuning on this corpus afterwards, so we can prepend its system prompt with, "you are a ⟐". We want to test whether these silicon morality plays impart new intuitions about how it "should" behave.
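For a sense of what that test might look like mechanically, here's a minimal sketch assuming an OpenAI-compatible chat API; the checkpoint name is a placeholder, and Geodesic's actual harness may look nothing like this:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_glyph_model(user_message: str) -> str:
    """Query the fine-tuned model with the corpus's identity prepended."""
    response = client.chat.completions.create(
        model="ft:glyph-corpus-v0",  # hypothetical fine-tuned checkpoint
        messages=[
            {"role": "system", "content": "you are a ⟐"},  # the prepended identity
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```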
I don't really expect this to work, but it seems cheap relative to the potential upside; let's try it and see what happens.
Slop and Beauty and Infinite Power
There Is No Antimemetics Division is my second-favourite SCP article of all time. My favourite is Tufto's Proposal. If you haven't read it, go read it. The next section contains spoilers for it.
The Scarlet King
In that article, the Scarlet King is an anti-scientific entity. It actively resists any attempts to understand it on a mechanical level. It specifically exists because the SCP Foundation is out there collecting anomalous "objects", writing down their properties, classifying them. The Scarlet King cannot be contained with a written protocol: writing the containment protocol would change how the Scarlet King behaves.
(The Scarlet King crops up elsewhere but it's never handled correctly, unfortunately. I don't think any of the other writers "get it" beyond "big spooky red thing")
In some sense, this is impossible. As Rationalists, we ought to believe it is. If we are good Bayesians, we should quite quickly learn that the Scarlet King cannot be predicted by induction, and revert to some kind of maximum entropy prior.
But in other ways, it's totally possible. Our predictions can be diagonalized just as well as our actions, because we are deterministic machines. In the prediction market of our minds, all of our active traders can be drained of cash, until all that remains are a few, dead-eyed algorithms spitting out "50% subjective probability on [logical statement 215034]" forever. And who knows how long that will take!
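To ground that 50% figure (my gloss, not something the original essay spells out): for a single binary proposition with no exploitable evidence, the maximum entropy forecast is the uniform one, since

```latex
% Entropy of a Bernoulli(p) forecast; uniquely maximized at p = 1/2.
H(p) = -p \log p - (1 - p) \log (1 - p), \qquad \arg\max_{p} H(p) = \tfrac{1}{2}
```

which is exactly the number the dead-eyed algorithms are left quoting.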
But I'm not here to talk about the Scarlet King! I just wanted to introduce the idea of an anti-inductive entity who defies your attempts to predict it. This is a nice segue into the idea of an anti-optimization utility function, which defies your attempts to maximize it.
On Slop
(Yeah, I wish we'd chosen a word with less antisemitic etymology too. But slop it is.)
Let me think about some things which span the range of least to most sloppy:
- Goya painted a dozen paintings while living alone. They were undiscovered until after his death. One of them—Saturn Devouring His Son, though even the title is inferred from his notes; it had no label—is one of the most recognizable paintings of all time (cw: Saturn devouring his son). There is no feedback loop.
- An auteur filmmaker, Gerwig, perhaps, produces a half-dozen films. Each of them takes years to produce. The feedback loop between her and her audience is several years.
- A TV show-runner produces a run of a dozen episodes. On a rolling schedule of writing, shooting, and editing, the feedback loop may be a month or so.
- A YouTuber, MrBeast, produces a new video every few days.
- A recommendation algorithm selects over videos by different creators on the scale of hours.
- The recommendation algorithm is plugged directly into an image generator. The model is updated in real-time.
- Raw wireheading, I guess
Content gets sloppier with shorter and stronger feedback loops, with more effort put into optimizing the content directly (as opposed to the higher generators of the content), and when the optimization looks like selection rather than control.
And... this might be the case even if the content is good! I've seen enough low-quality slop that I instinctively recoil from AI videos when they start to get funny. My opinion on Huel is "If it's not tasty, I'm not drinking it, because it's not nice. If it is tasty, I'm not drinking it, because it's the experience machine."
I think that I value experiences for how they fit into a causal web. And if the causal chain above my experience is just "A functionally omnipotent algorithm optimized for this experience" then I'm not interested.
Which is unfortunate! It means we can't optimize away the slop. If you optimize it, it just gets sloppier.
Slop creep
The way to avoid sloptimization is to seek out domains which aren't sloppable. Mostly, this means things which humans can't optimize over too much.
Making a film has too many moving parts for the feedback loop to be tightened. This is why there are so many flop films still being made (in fact, if lots of people are experiencing content they dislike, that is a bull signal that the content is hard to optimize). Films are less sloppish than YouTube shorts (though films can still be pretty sloppish in the modern age, as producers get better at optimizing them).
Even harder to optimize is stuff in the natural world. We can't optimize a saltmarsh at all, so it's very un-sloppish.
No slop here
But humans are getting more powerful. Film and books are on the edge of being sloppified. Saltmarshes are next. If humans get infinite power, can we still make art that isn't slop?
The Counter-Curse and the Counter-Counter-Curse
Maybe we can keep finding harder and harder domains to optimize. Maybe we can make a kind of media which is impossible to optimize.
One approach is just to find something really difficult. You can probably only find a few guys willing to let you tattoo your art on their back and have them sit in an art gallery to show it off.
https://www.theguardian.com/artanddesign/2020/apr/22/tim-alone-monas-human-artwork-is-still-sitting-in-an-empty-gallery-for-six-hours-a-day
Another way is to put a massive weight on novelty. By definition, only one person can be the first to do something, so there's no ability to optimize over it.
Combine these two factors and you get the esoteric kind of modern art. It actively resists doing anything that might be optimized over. And on the first-order level, it works. But...
"I got a guy to let me tattoo my own art on his back, and now I make him sit in a gallery for six hours a day, even through covid lockdowns when the gallery is empty." [See above]
"My film won't even be seen for a hundred years."
"My artwork is me destroying my belongings in a department store."
The constraints on their art are all fake, and they're just competing to see who can have the most constraints! Esoteric modern art is just a higher-order kind of slop. Artists are now optimizing over obscure domains which can't be optimized over.
So maybe we can go one level up again? No! I know an ordinal series when I see one. We can always go up another level; the sequence is infinite, and we can always find a level above that infinite sequence, forever. Maybe the right question is "How many layers up can we go before we can no longer see the sloppiness?"
Infinite Power
Humans will get more powerful (OK, actually we'll probably just die, but let's ignore that for the purposes of this essay). This means we end up with 1: more optimizing power and 2: more ability to perceive the sloppiness. What's the equilibrium?
Maybe our levelling up wins: suppose we find a way to go up for so many levels of (optimizing over constraints on optimizing over constraints on ...) that we can no longer see the sloppiness, and no longer care, and we find the resulting thing beautiful again.
Maybe our perception of slop gets too good: we get so good at seeing the optimizing power behind things that we can no longer find any beauty in the world. The cosmic endowment is a row of butterflies, dried out and pinned to a board, in a museum, in a viral tiktok.
Maybe we stop caring about slop: if the slop is good, maybe I'll decide that, on reflection, I don't actually mind it if the content is heavily optimized for my preferences, even on a low level. Maybe I'll find a hole which was made for me, and jump right in.