Epistemic status: gesturing at something that feels very important. Based on a true story. Show, don't tell.
Why are documents and spreadsheets so successful?
Why does code, which is many times more powerful than spreadsheets, have many times fewer users?
I think it's because code forces you not just to express your ideas in code, but also to think in code. It imposes constraints on your ontology for thinking.
Having spent the last year working on forecasting, I got some experience with how ontologies can significantly constrain technology projects.
I think such constraints have...
- heavily limited the usefulness of past forecasting efforts
- resulted in broad misconceptions about what forecasting could be used for
- hidden a large space of interesting work that can be unlocked if we solved them
So the link-post is an interactive essay where I attempt to show what solving them might look like in practice, using some technology which is currently not supported on LessWrong.
The fear of death acts as a sort of master key for introductory rationality concepts. Examining the fear of death ties all the rationality basics together into a coherent framework, including:
- Map/Territory Errors
- Something To Protect
- Keeping Your Identity Small
Small brain: Don't think about death.
Shining tomograph: "After I die I'll go to heaven because I'm a good person."
Expanding brain: "God isn't real, I find it more comforting to think that this isn't all a test."
Galaxy Brain: Practice dying.
tl;dr: For the last few years, the Bay Winter Solstice celebration has sold out (at 240 seats, plus an overflow room). I'm one of the three organizers this year, and am trying to gauge the True Demand.
So, would you be inviting friends to Winter Solstice who wouldn't come on their own? And/or have you not gone to Solstice in the last few years due to scarcity of seating?
The past couple of years Bay Solstice has been held in a planetarium, which is a pretty cool aesthetic, but it only fit 240 people. It turns out there's a bigger planetarium in San Francisco (290 seats, 50 more than last time, plus a nicer overflow room that seats 100).
The question is "is 50 more seats enough?"
Last year we ended up fitting everyone who showed up (including moving overflow people into the main room), with 20 people who originally said they were coming but didn't end up showing up, and 10 people on my Facebook wall who said they would have come or brought more people if seating hadn't felt scarce.
For the past several years, Bay Winter Solstice attendance has clearly been bottlenecked on venue size, and it seems pretty valuable to have a year where there's _zero_ scarcity, to generally fight any perception of solstice-attendance as scarce, as well as to get a clear sense of what the true demand actually is.
The main alternatives seem to be "Ballrooms and theaters that are reasonably nice but don't really hit any particular Solstice Aesthetic that hard."
It so happens the planetarium is currently on-hold (but not officially booked) for Dec 19th, which is the currently planned date for Bay Solstice Celebration, but we might be able to snatch it away if we move quickly.
I have a lot of uncertainty over whether 290 seats is enough, and I'm curious about other people's thoughts.
Does a Nicer Overflow Room Matter?
I also have some uncertainty about the new planetarium's overflow room. Last year, the overflow room doubled as the childcare room, which wasn't a nice experience for those who were really trying to get "as close to the Dark Solstice aesthetic as possible."
If we went with the SF planetarium: a) there'd be a childcare room separate from the overflow room; b) the overflow room is really quite nice, nestled right up against the planetarium itself. It has nice mood lighting, plus a giant projection screen composed of three projectors. I think there's potential to do a legitimately good job livestreaming the event if we put a lot of attention into it. It fits 100 people.
Alternately, it's plausible to maybe just have, like, a whole second Solstice in the overflow room (possibly with a somewhat different vibe).
The last few years, the overflow room has been this sad, awkward middle ground where "only a few people go there, most of whom ended up getting to relocate to the planetarium". It seems plausible that if we did a good job with it as a whole second venue that got 50+ people, it might feel like a legitimate experience in its own right.
Or that might be pure wishful thinking.
Curious to hear thoughts about all of this.
Those in the cryonics community want to be frozen upon legal death, in order to preserve the information content of their brains. The hope is that, given a good protocol, damage incurred during the freezing process will not destroy enough information about you to prevent people in the future from reconstructing your identity.
As most who want cryonics will understand, death is not an event. Instead, it is a process with intermediate steps. We consider a long-decayed corpse to be dead because it no longer performs the functions associated with a normal living human being, not because any sort of spirit or soul has left the body.
But philosophers have also identified important dilemmas for the view that death is a process rather than an event. If what we call death is simply my body ceasing to perform certain functions, then what do we make of the fact that we also change so much simply due to the passage of time?
I find it easy to believe that I am the 'same person' as I was last night. Enough of the neural pathways are still the same. Memories from my childhood are essentially still identical. My personality has not changed to any significant extent. My values and beliefs remain more-or-less intact.
But every day brings small changes to our identity. To what extent would you say that you are still the 'same person' as you were when you were a child? And to what extent are you still going to be the 'same person' when you get old?
In addition to the gradual changes that happen due to everyday metabolic processes and interactions with the outside world, there is also a more sudden change that may happen to your identity as you get old. By the age of 85, something like 25 to 50 percent of the population will have developed a form of dementia. Alzheimer's is a very harsh transformation of our connectome.
Ironically, those who are healthiest in their youth will have the highest chance of getting Alzheimer's, as it is typically a disease of the very old, rather than the somewhat old. Furthermore, most forecasters expect that as medical technology advances, the rate of Alzheimer's will go up, since it's among the hardest diseases to fix within our current paradigm of medical technology, and you therefore won't be as likely to die of the others first. And Alzheimer's is just one type of neurodegenerative disease.
If you care about who you are now, and you think that death is a process rather than an event, then it follows that you should want to preserve as much as possible of your current self: memories, personality, beliefs, values, mannerisms, etc.
The technology to store the contents of our brains is currently extremely limited and expensive, but we have an alternative. We can store external information about ourselves, in the form of lifelogging. The type of content we preserve can take a variety of forms, such as text, audio and video.
It might seem like preserving audio of your voice will do little to restore your identity. But that might not be the case. If you are cryopreserved, then much of your connectome will be preserved anyway. The primary value of preserving external information is to 'fill in the blanks', so to speak.
For example, the most famous symptom of Alzheimer's is memory loss. This occurs because the hippocampus, the primary component of our brain responsible for storing long-term memories, shrinks radically during the progression of the disease. If you consider memory to be important to your identity, then preserving external information about you could help function as an artificial memory source.
What I'm trying to say is that if death is a process, it's not correct to say that you will either be revived or not in the future, like a binary event. Rather, part of you will be revived. How much that part resembles you depends on how much information about you is preserved.
I can currently see many clever methods by which a future civilization could reconstruct your identity using your cryopreserved brain contents and external memory together. If you can't see how the external memory helps at all, then I consider that a failure of imagination.
Some will object that lifelogging is embarrassing, since you are carrying a camera or audio recording device wherever you go. Indeed, much of the reason people don't sign up for cryonics in the first place is that they fear their peers will not approve. Lifelogging makes this dire situation worse. But I think there are steps you can take to make it more palatable.
The more information you preserve now, the better. There's no sharp cutoff point between having too little information and having just enough. If you feel uncomfortable walking around with a camera (and who wouldn't?) you don't have to. But consider taking small steps. Perhaps when you are in a video call with someone, ask them if they are OK with you recording it and later storing it as an mp3 on a hard disk. Or maybe you could write more of your personal thoughts into documents, and upload them to Google Drive.
Little actions like that could add up, or not. I claim no silver bullet.
One of the worst parts of death is how terrible we are at motivating ourselves to avoid it. Among people who say they are interested in signing up for cryonics, only a small fraction end up signing the paperwork. And among those who do, the number who get preserved in optimal conditions is far too low. It seems that outside pressure from society is simply too powerful.
But as indicated by the Asch conformity experiments, the best way to overcome societal pressure is by having peers that agree with and encourage you. If just a few people took this post seriously, this could be enough to puncture the equilibrium, and perhaps a lot of people will be interested in recording their lives. Who knows?
I don’t know how much I believe this claim about cars, but I certainly believe it about software. A startup without a technical cofounder will usually produce bad software, because someone without software engineering skills does not know how to recognize such skills in someone else. The world is full of bad-to-mediocre “software engineers” who do not produce good software. If you don’t already know a fair bit about software engineering, you will not be able to distinguish them from the people who really know what they’re doing.
Same with user interface design. I’ve worked with a CEO who was good at UI; both the process and the results were visibly superior to others I’ve worked with. But if you don’t already know what good UI design looks like, you’d have no idea - good design is largely invisible.
Yudkowsky makes the case that the same applies to security: you can’t build a secure product with novel requirements without having a security expert as a founder. The world is full of “security experts” who do not, in fact, produce secure systems - I’ve met such people. (I believe they mostly make money by helping companies visibly pretend to have made a real effort at security, which is useful in the event of a lawsuit.) If you don’t already know a fair bit about security, you will not be able to distinguish such people from the people who really know what they’re doing.
But to really drive home the point, we need to go back to 1774.
As the American Revolution was heating up, a wave of smallpox was raging on the other side of the Atlantic. An English dairy farmer named Benjamin Jesty was concerned for his wife and children. He was not concerned for himself, though - he had previously contracted cowpox. Cowpox was contracted by milking infected cows, and was well known among dairy farmers to convey immunity against smallpox.
Unfortunately, neither Jesty’s wife nor his two children had any such advantage. When smallpox began to pop up in Dorset, Jesty decided to take drastic action. He took his family to a nearby farm with a cowpox-infected cow, scratched their arms, and wiped pus from the infected cow on the scratches. Over the next few days, their arms grew somewhat inflamed and they suffered the mild symptoms of cowpox - but it quickly passed. As the wave of smallpox passed through the town, none of the three were infected. Throughout the rest of their lives, through multiple waves of smallpox, they were immune.
The same technique would be popularized twenty years later by Edward Jenner, marking the first vaccine and the beginning of modern medicine.
The same wave of smallpox which ran across England in 1774 also made its way across Europe. In May, it reached Louis XV, King of France. Despite the wealth of a major government and the talents of Europe’s most respected doctors, Louis XV died of smallpox on May 10, 1774.
The point: there is knowledge for which money cannot substitute. Even if Louis XV had offered a large monetary bounty for ways to immunize himself against the pox, he would have had no way to distinguish Benjamin Jesty from the endless crowd of snake-oil sellers and faith healers and humoral balancers. Indeed, top medical “experts” of the time would likely have warned him away from Jesty.
The general pattern:
- Take a field in which it’s hard for non-experts to judge performance
- Add lots of people who claim to be experts (and may even believe that themselves)
- Result: someone who is not already an expert will not be able to buy good performance, even if they throw lots of money at the problem
Now, presumably we can get around this problem by investing the time and effort to become an expert, right? Nope! Where there are snake-oil salesmen, there will also be people offering to teach their secret snake-oil recipe, so that you too can become a master snake-oil maker.
So… what can we do?
The cheapest first step is to do some basic reading on a few different viewpoints and think things through for yourself. Simply reading the “correct horse battery staple” xkcd will be sufficient to recognize a surprising number of really bad “security experts”. It probably won’t get you to a level where you can distinguish the best from the middling - I don’t think I can currently distinguish the best from the middling security experts. But it’s a start.
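The xkcd's point can be checked with one line of arithmetic. A short sketch (my illustration, not from the comic itself — the word-list size of 2048 and the 95-character printable-ASCII pool are standard assumptions for this kind of back-of-envelope estimate):

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Entropy in bits of a string of `length` symbols, each drawn
    uniformly at random from a pool of `pool_size` options."""
    return length * math.log2(pool_size)

# Four random words from a 2048-word list ("correct horse battery staple")
passphrase_bits = entropy_bits(2048, 4)      # 44.0 bits

# Eight characters drawn truly at random from 95 printable ASCII symbols
random_password_bits = entropy_bits(95, 8)   # ~52.6 bits
```

The catch, and the comic's real lesson, is that human-chosen "complex" passwords are nowhere near uniformly random — predictable substitutions like `Tr0ub4dor&3` cost most of those bits — while four genuinely random words keep their full 44. A "security expert" who mandates the former over the latter is revealing something.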
More generally: it’s often easier to tell which of multiple supposed experts is correct, than to figure everything out from first principles yourself. Besides looking at the object-level product, this often involves looking at incentives in the broader system - see e.g. Inadequate Equilibria. Two specific incentive-based heuristics:
- Skin in the game is a good sign - Jesty wanted to save his own family, for instance.
- Decoupling from external monetary incentives is useful - in other words, look for hobbyists. People at a classic car meetup or a track day will probably have better taste in car design than the J.D. Powers award.
That said, remember the main message: there is no full substitute for being an expert yourself. Heuristics about incentives can help, but they’re leaky filters at best.
Which brings us to the ultimate solution: try it yourself. Spend time in the field, practicing the relevant skills first-hand; see both what works and what makes sense. Collect data; run trials. See what other people suggest and test those things yourself. Directly study which things actually produce good results.
What the workshop is & a brief overview
Hi all! This is my first post on LessWrong, but I'll be posting here more often with conversation notes & resources from the Effective Altruism Kansas City meetup group.
This workshop was an experiment to give participants an intellectual & intuitive understanding of how EAs typically prioritize between charities - the QALY (and DALY). Participants were told that they were the board of the Hypothetical Foundation and needed to choose one of three charities to fund to best serve the residents of Hypothetical Town. Discussion is guided from initial impressions, to how to quantify well-being, to how to estimate DALYs, and finally, how to use DALYs to compare charities.
The workshop was fun & engaging, but I've got a few post-mortem revision notes in case anyone wants to use these materials:
- Have participants draw their own QALY boxes
- Have participants intuit QALY shapes of existing charity models
- Walk through an example calculation before having participants do it
- Give a worksheet that lays out steps nicely and walks participants through an example calculation
- 18:15 TALK: Introduce the foundation board meeting premise and the fake charities we'll be evaluating.
- All charity operations in Hypothetown have room to scale according to the possible additional funding (to simplify the decision).
- 18:20 BREAKOUT: Brief discussion of which charities they'd pick based on information they have
- 18:25 DISCUSS: On what basis would they pick charities? Write ideas on the board.
- Lead them to the essential question: "How many are helped, and by how much?"
- 18:30 TALK: Chief Philosophy Officer [[Josh Rainwater]] presents ways to think about how people are helped and by how much
- Will Josh mention the various ways to calculate utility? Should I ask him to?
- 18:40 BREAKOUT: Have them fill out quality of life forms individually then discuss responses with their partner(s)
- Printout link: https://docs.google.com/document/d/18KpMd2qqg5JbVWatcvfxFTTFWdlKHvrEaO0LPM4NasQ/edit?usp=sharing
- 18:50 DISCUSS: Hand out charity infosheets. Discuss how to incorporate improved life & lengthened life into a single number. Aim for a QALY-like conclusion.
- "Ok, board, we've done some due diligence on these charities - I've prepared decision briefs for each of you." Pass out brief infosheets on the two hypothetical charities. These sheets say how many people are afflicted by the problem and how much it costs to help each person.
- Seize on & develop the idea of a QALY (and really name the term for the first time)
- 18:55 DISCUSS: Draw a QALY rectangle on the whiteboard as an example, then have participants tell me what rectangles to draw for the remaining charities based on their infosheets.
- 19:00 BREAKOUT: Have groups calculate QALY cost estimates.
- "Ok, now we know that charity A improves 5 QALYs per person and charity B does 10. But how much does it cost?"
- Breakout on the cost analysis - make the number require super light calculation
- End the breakout with a review of the numbers - first dollars per person helped, then dollars per QALY or something like it
- 19:10 DISCUSS: Return from breakout and share calculated numbers for each hypothetical charity. Add relevant stats & considerations to the board.
- 19:15 DISCUSS: Hold a board vote on who to fund!
- 19:20 TALK: A brief explanation of DALYs, the differences from QALYs, and when to use each
- 19:25 DISCUSS: Open Q&A
- 19:35 BREAKOUT: Interest surveys & brainstorming sessions!
- 20:00 DISCUSS: Brief return to the group to share insights & potential actions.
Charity A: Prosthetic arms for people who lost arms in freak woodcutting accidents (lend-a-hand)
- Printout link: https://www.notion.so/Charity-A-Lend-A-Hand-Prosthetic-arms-30694d914bc04c9795f1698bf3b22a57
- People helped annually: 1,000
- Annual budget: High price of some kind ($2,000,000?)
- Years of life extended per person: 0
- Quality of life improvement: To be estimated by participants
- Arms lost at age 40 on average
Charity B: Sanitation training for citizens (wash-ya-hand)
- Printout link: https://www.notion.so/Charity-B-Wash-Ya-Hand-Sanitation-training-c1a7389bd8e94f11bd629762c7e92f45
- People helped annually: 100,000
- Annual budget: Low price of some kind ($2,000,000?)
- Average years of life extended per person: 0.15
- Quality of life improvement: To be estimated by participants
Charity C: Free treatment for "Spontaneous Combustion Syndrome", SCS (hand-on-fire)
- Printout link: https://www.notion.so/Charity-C-Hands-On-Fire-Vaccinations-against-SCS-4493cb390b484c6eae36fe493c4997c6
- People helped annually: 1,000
- Annual budget: Middling price of some kind ($2,000,000?)
- Average years of life extended per person: 15
- Quality of life improvement: 0
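The comparison the breakout groups work through can be sketched in a few lines. Note the placeholder numbers: Charity A's quality-of-life gain of 0.1 and the assumption of ~40 remaining years of life (arms lost at age 40) are illustrative values of the kind participants estimate themselves, and the $2,000,000 budgets are the tentative figures from the infosheets above:

```python
def qalys_gained(quality_gain: float, years_affected: float,
                 years_extended: float = 0.0,
                 quality_during_extension: float = 1.0) -> float:
    """QALYs per person = (quality improvement x years it applies)
                        + (extra years of life x quality during them)."""
    return quality_gain * years_affected + years_extended * quality_during_extension

def cost_per_qaly(annual_budget: float, people_helped: int,
                  qalys_per_person: float) -> float:
    return annual_budget / (people_helped * qalys_per_person)

# Charity A: assumed +0.1 quality over ~40 remaining years, no life extension
a = qalys_gained(quality_gain=0.1, years_affected=40)        # 4.0 QALYs/person
# Charity B: 0.15 extra years per person, assumed at full quality
b = qalys_gained(0.0, 0.0, years_extended=0.15)              # 0.15 QALYs/person
# Charity C: 15 extra years per person, assumed at full quality
c = qalys_gained(0.0, 0.0, years_extended=15)                # 15.0 QALYs/person

print(cost_per_qaly(2_000_000, 1_000, a))     # 500.0 dollars per QALY
print(cost_per_qaly(2_000_000, 100_000, b))   # ~133.33 dollars per QALY
print(cost_per_qaly(2_000_000, 1_000, c))     # ~133.33 dollars per QALY
```

With these particular placeholder weights, B and C come out tied and ahead of A - which is a nice feature for the exercise, since the ranking then hinges visibly on the quality-of-life estimates the participants chose.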
- See here for full raw notes & ideas on the workshop: https://roamresearch.com/#/app/sams/page/_jYqDkG2p
- Desired takeaways from the workshop:
- (briefly) why impact evaluations matter
- What a QALY is
- How to estimate QALYs given basic information
- What a DALY is
- Start by presenting the problem and the questions that we want to be able to answer.
- Share that Dan & myself will be around during the breakouts to answer questions/provide help, and just wave us over if they need something
- Apply a clip-on tie to my T-Shirt when we start the board meeting.
- Differences between the charities
- One helps few people a lot, one helps a lot of people a little
- Two are severe enough that which is worse is an open question
- One helps only quality of life while another extends only quantity of life
- Charity requirements
- Need a minimum of three charities to meet the above requirements
- Charity A: Helps a few people a lot
- Charity B: Helps a lot of people a little
- Charity C: Extends life without improving quality. Severity gut reaction similar to A
Few instruments do a good job as the only rhythm instrument in a dance band; in my 2014 sample I only saw guitar and fiddle. I can't play guitar for dancing anymore because of my wrists, and the piano has to give up a lot in exchange for its large range. A goal I've had for a long time is to figure out how to get the same full sound from something built around a mandolin.
As a rhythm instrument, the way I play it, the mandolin has a percussive bite and drive that's hard to get with the piano. This drive contributes a lot to the dancing, and is something I really enjoy about a mandolin-piano rhythm section. Take away the piano, though, and everything is high frequency.
I've played with a bunch of ideas here for augmenting my mandolin playing:
DIY organ pedals.
Build a computer vision system that maps from hand shape and position to chord, and then choose bass notes from the chord. Trigger the bass notes with foot pedals.
Make a hat with a tilt sensor, and use head angle to choose bass notes. Foot pedals as before.
Use vocals, perhaps processed, to fill out the sound.
Whistle into a microphone, which controls a bass synthesizer, so I can whistle bass lines.
Recently I tried a new combination:
Whistle into a microphone to select bass notes, trigger the bass notes with foot pedals.
I'm running my standalone pitch detector which translates the whistling into MIDI, with pitch bend to send fractional pitch. I tell my MIDI router what key and mode I'm in, and it listens for I, IV, V, and either vi (minor) or VII (mixo) by picking the nearest option. I have this driving both a bass that's triggered by the foot pedals, and an atmospheric droney pad that just runs. I have the pad set to only change notes on a pedal tap, however.
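The "picking the nearest option" step is the interesting part of the router. Here's a minimal sketch of that logic as I understand my own setup - the function name, the hard-coded semitone offsets, and the pitch-class wraparound are all my illustration, not an excerpt from the actual router:

```python
def nearest_bass_degree(midi_note: int, key_root: int,
                        minor_sixth: bool = True) -> int:
    """Snap a whistled MIDI note to the nearest allowed bass root.
    Allowed roots, as semitones above the key root: I=0, IV=5, V=7,
    and either vi=9 (minor sixth) or VII=10 (mixolydian seventh)."""
    allowed = [0, 5, 7, 9 if minor_sixth else 10]
    pitch_class = (midi_note - key_root) % 12

    def dist(degree: int) -> int:
        # Distance on the pitch-class circle, so it wraps around the octave.
        return min((pitch_class - degree) % 12, (degree - pitch_class) % 12)

    return min(allowed, key=dist)
```

So in C (key root 60), a whistled D lands on I, an F lands on IV, and an A lands on vi. Snapping to a handful of diatonic roots is what buys the error tolerance: a whistle that's a semitone off still selects the intended chord.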
It's not as flexible as the bass whistle, because I need to choose in advance what key and mode to play in, and it only does four bass notes, but it's also much less likely to make weird awkward noises when I screw up slightly.
Comment via: facebook
In The Real Rules Have No Exceptions, Said says (apparently speaking of instrumental rules, not epistemic ones):

> Prefer simplicity in your rules. Be vigilant that your rules do not grow too complex; make sure you are not relaxing the legitimacy criteria of your exceptions. Periodically audit your rules, inspecting them for complexity; try to formulate simpler versions of complex rules.
This is a very plausible principle. I have had similar thoughts myself. The idea seems to have merit in practice. But is there any theoretical analogue?
Minimum description length (Occam's Razor) is an epistemic principle. In Bayesian terms, probabilities must sum to one, which translates information-theoretically (via optimal encodings) to the idea that there are only so many short codes, so we have to assign them carefully to the most probable things.
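The "only so many short codes" point has a standard precise form - the Kraft inequality (a textbook identity, not something from the quoted post):

```latex
% Any prefix-free code with codeword lengths \ell_i must satisfy
\sum_i 2^{-\ell_i} \le 1 .
% The code minimizing expected length under a distribution p assigns
\ell_i = -\log_2 p_i ,
% under which the coding constraint above becomes exactly the
% probabilistic constraint
\sum_i p_i = 1 .
```

Assigning one thing a shorter code (higher probability) necessarily lengthens others - that conservation law is what makes Occam a real constraint on beliefs, and it's precisely what has no obvious analogue for utilities.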
However, expected utility is not a limited resource in this way. Updating to think an option is better than previously thought doesn't necessarily make other options worse. Selecting policies is not so different from selecting raw actions. And for Bayesian decision theory, it doesn't really matter whether a policy is specified as a big table of observation/action pairs, vs an elegantly specified rule.
Optimality cares not for elegance!
Yet, even in the relatively formal world of machine learning, the practice seems contrary to this. When you are optimizing a neural network, you don't actually care that much whether it's something like a hypothesis (making predictions) or something like a policy (carrying out actions). You apply the same kind of regularization either way, as far as I understand (regularization being the machine-learner's generalization of Occam).
We might say that this is because (in some sense) the instrumental uncertainty and the epistemic uncertainty are actually being wrapped up together. But (1) this reply seems overly quick to me at present, and I'd want to understand in detail whether this can be justified; (2) I'm curious if there is a purely instrumental version of Occam to be articulated; it seems intuitively really plausible to me, though technically quite mysterious.
So: is it possible to formulate an instrumental version of Occam? Can we justify a simplicity bias in our policies?
Substantial revisions to clarify the post's core claim, including but not limited to this summary at the end:

Summary
- Effective Altruism claims that there is a large funding gap for cheap well-understood developing-world interventions.
- Even the most aggressive plausible construal of this claim implies an annual funding gap that could be covered completely by existing major institutional donors.
- If this is true, it implies opportunities for comparatively cheap experiments (relative to the endowments of major donors in the space) with extremely high information value.
- Such experiments have not happened either because they are impossible, or because the relevant institutional donors think they have better things to do with their money.
- Neither scenario suggests that small donors should try to fill this funding gap. If they trust big donors, they should just give to the big donors. If they don't, why should they believe a story clearly meant to extract money from them?
Last time I proposed nine strategies for fighting mazes. All of them were either systemic solutions requiring coordinated and/or political action, or cultural shifts that happen one person at a time.
Now for the tenth and final proposal for how to fight mazes. The tenth is a proposed strategy that one dedicated person with sufficient resources could implement on their own.
If you are in a position where you have the resources to implement this, please make an effort to take this proposal seriously. And please contact me if you are potentially interested and wish to discuss it further.
Solution 10: Create a Full Alternative Stack
In some ways this is the most ambitious solution here. It may seem Utopian.
In other ways, it is the least ambitious, and most practical. It could be implemented by a single sufficiently wealthy and committed individual or organization. All other known solutions can be implemented locally, and would help locally, but need general adaptation to succeed in general.
The full alternative stack offers a contract.
Disengage entirely with mazes and traditional distortionary incentives, competitions and signals of all kinds, and discard all zero-sum activity, in favor of doing the thing. Whatever the thing in question may be. Make no compromises to make oneself legible or attractive to outside sources of funding. Tolerate no maze behaviors of any kind. Hire and fire with this deeply in mind.
In exchange, if you keep your end of the bargain, the stack will fully fund you and your operations, at fair prices that do not take advantage of the hold-up opportunity presented by your giving up of other opportunities. Evaluation will be fully on the object-level merits, and the generative processes involved.
This is a form of tenure for the people. If they continue to act with integrity and work to accomplish positive-sum things relevant to the stack’s interests, and spend responsibly, they and their family will have strong financial security.
Think of this as similar to tenure at a university, or to the system of universal employment for partisan hacks. If you are promising, the stack gives you the opportunity to prove yourself. Once you have proven yourself, we take care of you, even if you don’t bear as much fruit as we would like, provided you continue to play by the rules of the stack and honor its values. Unlike many tenured professors nowadays, we would not then force you to seek grants, outside investments, or other outside funding for your work. On the contrary, you would be agreeing not to seek outside funding, so as to protect your incentives from corruption.
This is also a form of secured financing for corporations and other organizations. While they need funding to reach maturity, they will be evaluated on whether they are succeeding at doing the thing. Traditional signals, and anticipation of future traditional signals, will be not only disregarded but punished – it’s fine to look good naturally, but if you are doing things in order to look good or successful to outsiders rather than be good or successful, then that breaks the compact.
We call this a full alternative stack because the ideal version is literally a full alternative stack. It recreates civilization. Those involved would not need or depend on outside goods or services. There would be a local area fully owned by and given over to the project.
That is the full version. The full version is ambitious and difficult, but likely far less ambitious and difficult, and far less expensive, than it appears. We would soon find out how much of current activity is rent extraction or otherwise unproductive, and how much is necessary to keep things running.
A lesser version, built around a particular cause or goal, or to give this freedom to select individuals and corporations, would still be extremely valuable.
The MacArthur grant is a template for what this looks like on a personal level, with a shift in focus from creativity to integrity, and a bump in compensation – $625,000 is a lot of money, but that money is designed to be seed money for an activity rather than financial security. Those getting a MacArthur grant still face the specter of future financial needs. One needs an order of magnitude more than that over a lifetime to be secure while not compromising one’s interactions with society.
For startup corporations, this can be similar to the standard method of funding a biotechnology company to pursue a new drug of unknown efficacy. Milestones are set. If they are met, funding is unlocked at pre-negotiated levels, locked in for both sides in advance. There is no reason to worry about signaling in other ways unless the company is about to fail. We would add the condition of not then selling out to a maze (in the biotech example, a big pharma company, or taking the company public) when successful, instead keeping the operation privately owned by its founders to prevent it from being eaten and transformed or killed. Public markets exert strong pressure towards maze behaviors, so such companies would need to commit to staying away from them.
I believe there is a strong opportunity for a venture capital fund that promises committed, full funding to projects in this way in fields outside biotechnology. Projects that are freed from having to gain strong negotiating positions regarding raising capital could be much better at pursuing actual success and production. To succeed, such a fund would need to honor its commitments carefully and be credible at every stage. This includes its commitments not to respond positively to things that would on the outside be viewed as good news, if they are not in fact relevantly good news. Its word would be its bond. It would also need to be highly skilled at choosing superior evaluation techniques. There are many terrible things about current systems of venture funding, but naive replacement models threaten to be easily gameable or otherwise create new and perhaps much worse versions of the same problems.
Most people nowadays are forced, both within an enterprise and overall in their lives, to structure and censor everything they do in light of their potential future need to look legible, comfortable and successful or valuable to mazes. The prospect of having this option cut off fills them with terror, whether or not this should be the case. Even when they do not fear it, those around them who rely on them fear it, which has a similar effect.
I am unusually immune to these pressures. I have skills that can earn money on demand, without getting a formal job or the approval of a maze, if I need that. I also have robust savings and a family and community that would save me if I fell upon hard times. This terror is still one of the things I have often struggled with.
Freeing a select group to do things without regard to such concerns, and people knowing they have the option to join this group, would be a major cultural change. Ideally this would then become the ‘city on a hill’ that shows what is possible, and gets emulated elsewhere. Regulatory and other legal issues would still have to be navigated, which would often be most of the difficulty of any worthwhile operation. This is why typical versions of this type of proposal go to places like seasteading. Mazes will instinctively attempt to crush whatever is being built.
If one is sitting on a large pile of money and wishes to do good, or simply wishes to increase production, deploying that money effectively has proven a very hard problem. This is to be expected, even under the best conditions, as such problems are anti-inductive. Any easy answers get utilized until they stop being easy answers. Once others find out your criteria for spending or granting money, some of them will Goodhart and/or commit fraud to extract those funds.
The closer you attempt to stick to specified metrics and use criteria you can explain and justify that look consistent, the more you are optimizing over time for those who Goodhart and commit fraud in order to present the appearance of attempting to help in the approved ways, rather than optimizing for actually helping. This is certainly a danger to the full stack operation as well, and the best reason to keep the operation relatively small.
The less you stick to such methods, the more illegible you become, the more blameworthy you appear, and the more likely you are actually using ‘things that make you feel good’ as your metric. Which, in turn, is even easier to Goodhart or commit fraud on.
Sticking to ‘do the right thing,’ as this solution suggests, and rewarding those who do right things is a rather crazy ask without rich contextual knowledge. The larger you scale, the more universal you attempt to get, the crazier it gets. Goodharting or committing fraud on ‘right thingness’ is as much a threat as Goodharting or committing fraud on anything else, if you’re not staying a step ahead. That very freedom from mazes, Goodharting and fraud is the precious thing you’re trying to get in the first place.
The project has to cash itself out purely on its own terms. It has to care more about doing things its own way than getting things done or looking effective, where that own way is a ruthless focus on what will actually work. Everyone’s instinct, even that of the best possible additions, will be to abandon this at every step. Everyone will face constant pressure to do so.
But without sufficient scale to complete the stack, how do you break free from, and securely break the right people away from the need to worry about, mazes and other outside forces?
Threading that needle is going to be very difficult, even if the other impossible problems are solved. I do not think any one person or formal group can be the head of the entire stack without it getting too large. One must form a distinct subset, and hope others form the required other parts, and until that happens purchase what one needs from the outside using capital, and trust those in the project to continue interacting economically in some ways outside of the stack.
Ideally one does not need to literally go to Mars to be allowed to complete the project. However, if one does need to literally go to Mars, then there is a fair argument that literally going to Mars is a reasonable price to pay to be allowed to complete the project.
The next post asks what we should do when we have a project that would benefit from a large organization.
The Global Knowledge Game
To illustrate that global knowledge is a game, consider a story about Alexander Luria, who studied illiterate Russian peasants and their semi-literate children. Consider especially this version of the story, prepared in the 1970s to provide morale and context to reading teachers (John Guthrie, 1977). Essentially, Luria discovered that the illiterate, unschooled peasants were highly resistant to syllogisms and word games. The adult peasants would only answer questions based on their own knowledge, and stubbornly refused to make deductions from given premises. “All bears are white where it is snowy. It is snowy in Nova Zembla. What color are the bears in Nova Zembla?” “I don’t know, I have never been to Nova Zembla.” Children with only a year or two of education, however, were easily able to engage in such abstract reasoning. They quickly answered the syllogisms and drew inferences from hypothetical facts outside of their own observation.
In this story, I argue, Luria’s peasants are indexical geniuses, who refuse to engage in unproven syllogistic games. They are not interested in a global, universal game. Their children, however, are easily introduced to this game by the process of schooling and literacy.
Interestingly, a more recent group of researchers claim that illiterate people do fine at making inferences against experience, if the context is given as a distant planet (Dias et al., 2005). I am not offering this as true, but as a story about how expecting people to operate in the “global knowledge game” might portray them as stupider than they really are, if they simply choose not to play in that game. This is to segue into the next hermeneutic pass, in which we are told that the hype surrounding “cognitive bias” is really a sort of science magic trick, an illusion designed to portray indexical geniuses, like Luria’s peasants and ourselves, as global fools.
The paper is “The Bias Bias in Behavioral Economics,” by Gerd Gigerenzer (2018). If you, like me, have ever been fascinated by cognitive bias research, this is a brutal paper to come to terms with. Gigerenzer examines several purported biases in what I would call analytic reasoning or the global knowledge game, and finds explanations for these purported biases in the indexical reality of humans.
For instance, some apparent “biases” that people display about probability are not actually errors. For the small (and in most cases, merely finite) samples that reality has to offer, people’s “biased” intuitions are more accurate than a “globally correct” answer would be (that is, the correct answer if the sample were infinite). In tossing fair coins, people tend to intuit that irregular strings are more probable than more regular strings (e.g. that HHHT is more probable than HHHH in a sequence of coin flips). This simple intuition can’t be correct, though, because given infinite coin flips, each string is as likely as any other, and if the sequence is only four flips, after HHH, each outcome is equally likely. But for small, finite numbers of flips greater than the string length, Gigerenzer argues, it is the human intuition that is correct, not the naive global solution: HHHT does take less time to show up than HHHH in repeated simulations, and is more commonly encountered in small samples. To drive home his point, he offers a bet:
If you are still not convinced, try this bet (Hahn and Warren, 2010), which I will call the law-of-small-numbers bet:
You flip a fair coin 20 times. If this sequence contains at least one HHHH, I pay you $100. If it contains at least one HHHT, you pay me $100. If it contains neither, nobody wins.
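Gigerenzer's bet is easy to check numerically. Below is a minimal Monte Carlo sketch (the function name, trial count, and seed are my own choices, not from the paper) that estimates how often a 20-flip sequence contains at least one HHHH versus at least one HHHT:

```python
import random

def simulate_bet(trials=100_000, flips=20, seed=0):
    """Estimate the probability that a sequence of fair coin flips
    contains at least one HHHH vs. at least one HHHT."""
    rng = random.Random(seed)  # seeded for reproducibility
    hhhh_count = hhht_count = 0
    for _ in range(trials):
        seq = "".join(rng.choice("HT") for _ in range(flips))
        if "HHHH" in seq:
            hhhh_count += 1
        if "HHHT" in seq:
            hhht_count += 1
    return hhhh_count / trials, hhht_count / trials

p_hhhh, p_hhht = simulate_bet()
print(f"P(at least one HHHH in 20 flips) ~ {p_hhhh:.3f}")
print(f"P(at least one HHHT in 20 flips) ~ {p_hhht:.3f}")
```

The simulation shows HHHT appearing in noticeably more sequences than HHHH, so the bet favors whoever holds HHHT. The intuition: occurrences of HHHH can overlap (a run of five heads contains two of them), so they cluster into fewer sequences, while occurrences of HHHT cannot overlap and so spread across more sequences.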
More broadly, cognitive bias proponents find fault with their subjects for treating “logically equivalent” language statements as having different meanings, when context reveals that these “logically irrelevant” cues frequently do reveal rich meaning in practice. For instance, people react differently to the “same” information presented negatively vs. positively (10% likelihood of death vs. 90% likelihood of survival). Cognitive bias proponents frame this as an error, but Gigerenzer argues that when people make this “error,” they are making use of meaningful context that a “bias-free” robot would miss.
Growing up as a first-generation American, and generally considered the "smart" one of the family, I started drinking instant coffee when I was 12 to keep awake during homeroom. My usual sleep window now is 11 pm to 7 am; back then it was more like 1 am to 9 am, but I had to wake up at 5 am to get to school. It was a necessary evil at the time.
One cup in the morning became two, and then four; when I got home, I realized I was suddenly feeling groggy, so I started drinking 1 -> 2 -> 4 once I got back as well. This unsurprisingly made my actual lack of sleep all the worse.
I think in the back of my mind, I understood intellectually that this was a Very Bad Idea. Even compared to the average person, I function really poorly when I'm tired relative to when I'm awake. But I also have an extremist bent to my personality; I felt throughout high school and much of college that if I were going to quit coffee, I wanted to do it cold turkey. I would be strong and bear the pain.
The big problem with this is that I Pavloved myself into loving the taste of coffee too much to ever have success with this after high school. (Thank Christ 12-year-old me understood this was a possibility, and reflexively refused to drink anything but black coffee -- otherwise I would be several pounds heavier.)
There are downsides to drinking coffee of any variety, decaf or not. It gradually stains your teeth. Last year I paid a few hundred dollars to get my teeth whitened because frankly I found them yellowed enough to be off-putting when I looked at myself in the mirror, especially with a pale sunscreen tint on the surrounding skin creating a contrast.
But let's be honest with ourselves: Slightly stained teeth are much, much less annoying than eternal nights spent in half-sleep.
About 6 weeks ago, undergraduate winter break began here at Northwestern. I didn't have much to do, so I figured this would be a good time to wean myself off of coffee. I understood I had tried to do this before, so I decided to do the opposite thing: Make the minimum possible change that results in the maximum return. Hence the post title; ELDR stands for Everyday Life Diminishing Returns.
I knew from prior experience that I don't suffer strong withdrawal symptoms when I go without it; I just missed the taste and comfort of a warm cup of coffee. So I took $50 I had from my small pool of savings, went and bought a half-dozen jars of decaf instant coffee, sticking with the same brand to ensure the taste was as similar to the "real thing" as I could muster. I still had a few jars left of the non-decaf stuff, but I just put them away in a shelf, while I left 2 inviting jars of decaf right next to the kettle. And I promised myself that I'd stick to decaf for the rest of winter break, and see if I felt any different.
I felt very different.
I began to wake up much more refreshed from sleep. My anxiety and racing thoughts, which I had worried were so deeply ingrained as to have reached the asymptote of automaticity, quelled down to a murmur even less than what I was used to in an average 10 minute meditation sit. (My actual meditation sits I stopped doing, because I didn't need the extra help feeling less anxious!) My interactions with other people became much more relaxed and freeform. I even found that I could work for longer periods of time without getting distracted, or scared that I would be unable to crack the problem I was trying to solve. My cognitive tactic of reminding myself, "Bayesian inference suggests you will be able to solve this textbook problem, as it was literally designed to be solved, and you have literally never run across a textbook problem you could not eventually solve with the proper amount of time and study,"
became much more reflexive, and even more deeply felt, because I genuinely believed that future, non-caffeinated me would actually put in the work needed without panicking or giving up, because I had seen evidence with my own eyes of him doing just that. (I wonder if you can success spiral on top of a placebo? Probably. But caffeine is pretty well understood to fuck with your neurons, so I don't actually think this is the case.)
And I didn't have to give up coffee. The decaf stuff tastes almost exactly the same. It still scratches that itch of wanting to get to work, with putting the kettle on serving as a load-bearing activity to signal to myself "Time to start firing on all cylinders".
Diminishing returns is something that, for whatever reason, my body and mind recoils at.
There is a part of me that feels like a failure because while we made a very positive change through a very easy action, we "should" have taken the "true route" and kicked the habit altogether.
This is the same part of me that thinks 10 minutes of studying is worthless if we don't have 2 hours to dump into the problem, or that we should wait until we are really well-rested to bother doing something productive.
If I were to take a guess, I would actually say that people who read LessWrong are less likely than the average person to have such issues with a similar kind of demon. Diminishing returns is an amazing breath of fresh air to someone who embodies the "Brilliant, but Lazy" character trope. "Finally, I have a good, important-sounding way to tell my coworkers to chill out about something that doesn't matter anyway."
If this sounds like you: Fight it. Wrap your natural tendency towards being hardcore about things you want to do, and instead go hardcore about not going hardcore.
Become an acolyte of the Temple of Diminishing Returns, and sing its praises in your heart. It's a joyous song. The eternally-underrated Jacob Falkovich and I will teach you the words if you don't know them. There will be no test on them; do not make flashcards to commit them to memory. If you forget them just hum along. That's easy and it sounds good enough. 😉
I'm in the process of coming up with terminology for various theories, similar to lots of other work on LessWrong and The EA Forum.
Naming things is a bit of a unilateralist action. While community members don't have to accept a specific naming proposal, they are likely to do so if they like the concept. I can't think of many cases where Eliezer or someone named a concept, and the community decided that that name was poor, and renamed it.
However, I can't find much theory on how to figure out great names for things, or even what to consider when doing so. I would have expected there to be comprehensive discussion around Information Architecture, UX Design, or the Library Sciences on this topic, but couldn't identify much outside of card sorting and a few lists of rough heuristics.
This was also an issue for me when I did more software engineering, and I was then also frustrated by the lack of discussion I could find. The best I found was work on Software Patterns, which I used primarily for naming conventions.
Some related links I could find: