LessWrong.com News

A community blog devoted to refining the art of rationality

Ideas for avoiding optimizing the wrong things day-to-day?

Published on January 26, 2022 10:46 AM GMT

In other words, I'm interested in ways I can design my workflow/environment/habits to avoid bike-shedding (aka the Law of Triviality), which is the behavior of substituting an easy and inconsequential problem for a hard and important one [1]. Examples include 1) looking into an interesting idea that you ran across while doing a research task, even though it is irrelevant to your goal, 2) spending unnecessary time on the formatting of an essay rather than on the actual writing, and 3) buying things/building systems to make very minor productivity improvements instead of doing your tasks for the day.




Aligned AI Needs Slack

Published on January 26, 2022 9:29 AM GMT

(Half-baked)

Much has been said about slack on this site, starting with Zvi's seminal post. The point I couldn't easily find (probably missed) is that an aligned AI would need a fair bit of it. Having a utility function means zero slack: there is one thing you optimize, to the exclusion of everything else. And all precisely defined goals are necessarily Goodharted (or, in DnD terms, munchkined). An AI armed with a utility function will tile the world (the whole world, or its own "mental" world, or both) with smiley paperclips. For an AI (or for a natural intelligence) to behave non-destructively it needs room to satisfice, not optimize. Optimal utility corresponds to a single state of the world among infinitely many, while adding slack to the mix expands the space of acceptable world states enough to potentially include those that are human-aligned. If an AGI is indifferent between a great many world states, that set might well include some that would be acceptable to humanity, and the AGI would have no incentive to try to trick its creators. Not being an ML person, I have no idea how to formalize this, or whether it has been formalized already. But I figured it's worth writing a short note about. That is all.
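A toy illustration of the optimize-versus-satisfice distinction (a sketch of my own, with a made-up utility table rather than anything from the post): the pure optimizer always lands on the single highest-utility state, while a satisficer with a bit of slack is indifferent among every state above a threshold, which can leave room for human-aligned outcomes.

```python
# Toy sketch with invented utility values: compare picking the argmax world
# state against accepting any state within some slack of the maximum.
import random

world_states = {
    "tile everything with paperclips": 1.00,    # utility-maximizing, misaligned
    "mostly paperclips, some humans": 0.97,
    "humans flourish, fewer paperclips": 0.95,  # aligned, slightly "suboptimal"
}

optimum = max(world_states, key=world_states.get)
slack = 0.06
acceptable = [s for s, u in world_states.items()
              if u >= max(world_states.values()) - slack]

print("pure optimizer picks:", optimum)
print("satisficer is indifferent among:", acceptable)
print("satisficer might pick:", random.choice(acceptable))
```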




Patron Saint Holidays

Published on January 26, 2022 9:26 AM GMT

Elizabeth recently wrote this as a comment on the Vavilov day post. I thought it was important enough to be worth pulling out as a separate topic of discussion.

I don't love the process for generating rationalist holidays right now and tentatively think it would be better to switch to a patron saints model. People who want to can have their own hero or event that's especially close to their heart (and maybe a few secondary ones, or ones important to their friends), and if several people who like each other pick the same one they do stuff together, and if a lot of people pick the same one that becomes a more shared holiday (although still not mandatory). 

One reason for this is there are just actually a lot of heroes in the world, with wildly varying resonances for a given individual, and the number of holidays the community can adopt and take seriously is pretty small. People only have so much time, and are often sharing their holiday budget with religious or more widespread secular holidays.

But the more important reason is that I really want some holidays that challenge or are demanding of people, and people vary a lot in how much of what kind of challenge they can safely take on at a given time.  A cultural push for fasting could be really bad for people with eating disorders, even if there's a well respected medical or practicality exemption. Mass Winter Solstice is in constant conflict over how dark to go, given people's different needs. Lots of people felt they'd been injured by being mailed doomsday codes for LW or EAF for Petrov day... 

But if you take away everything that could possibly hurt someone, you're left with parties (and even those aren't fun for everyone), and that feels sad and unfulfilling to me. So I think letting holidays exist and be respected without automatically scaling them would decrease damage done to people while upping the ceiling on what's achievable to those that want it.

If any particular hero/event does end up being so overwhelmingly popular it becomes a mass holiday, that seems fine, but letting it be an emergent process rather than an immediate bid for universality seems so much better.

I don't know that this needs to be characterized as a switch to a Patron Saints model vs. universal holidays. I think "big universal Schelling time" is one useful thing for holidays to do, and "niche celebration of a particular thing" is another useful thing they do. I also think "Patron Saint" isn't always quite the right frame. But I think there is something important about allowing holidays to be smaller, and letting them grow organically if appropriate.

I think it's often good to experiment with things before scaling them up. Some things in fact don't make sense to scale up, ever. And I think there are indeed way more heroes worth celebrating than there are slots in the year for large public cultural holidays. 

(I've actually felt this ever since the first Petrov Day ceremony – it seemed important, but I expected there to be a lot of other important stories and virtues worth celebrating. Petrov Day has since grown in prominence and I think "prevent nuclear war" is pretty high up there among things worth honoring. But my initial orientation to Petrov Day was "This seems like a holiday I'd like to celebrate in rotation with other holidays. Maybe some years I want to celebrate Norman Borlaug or Smallpox Eradication." And that still seems like a fine way for many holidays to be.)

It seemed useful to separate out discussion about this from the discussion of Elizabeth-in-particular's approach to Vavilov-Day-in-particular. 




ELK First Round Contest Winners

Published on January 26, 2022 2:56 AM GMT

Thank you to all those who have submitted proposals to the ELK proposal competition. We have evaluated all proposals submitted before January 14th[1]. Decisions are still being made on proposals submitted after January 14th.

The deadline for submissions is February 15th, after which we will release summaries of the proposals and associated counterexamples.

We evaluated 30 distinct proposals from 25 people. We awarded a total of $70,000 for proposals from the following 8 people:

  • $20,000 for a proposal from Sam Marks
  • $10,000 for a proposal from Dmitrii Krasheninnikov
  • $5,000 for a proposal from Maria Shakhova
  • $10,000 for proposals from P, who asked to remain anonymous
  • $10,000 for a proposal from Scott Viteri
  • $5,000 for a proposal from Jacob Hilton and Holly Mandel
  • $10,000 for a proposal from Jacob Hilton

We would have also awarded $15,000 for proposals from Holden Karnofsky; however, he is on ARC’s board and is ineligible to receive the prize.

  1. We don’t plan on spending time evaluating and awarding proposals made after February 15th but feel free to email elk@alignmentresearchcenter.org in the future if you think you have a promising proposal, and we might consider it for a prize. ↩︎




ML Systems Will Have Weird Failure Modes

Published on January 26, 2022 1:40 AM GMT

Previously, I've argued that future ML systems might exhibit unfamiliar, emergent capabilities, and that thought experiments provide one approach towards predicting these capabilities and their consequences.

In this post I’ll describe a particular thought experiment in detail. We’ll see that taking thought experiments seriously often surfaces future risks that seem "weird" and alien from the point of view of current systems. I’ll also describe how I tend to engage with these thought experiments: I usually start out intuitively skeptical, but when I reflect on emergent behavior I find that some (but not all) of the skepticism goes away. The remaining skepticism comes from ways that the thought experiment clashes with the ontology of neural networks, and I’ll describe the approaches I usually take to address this and generate actionable takeaways.

Thought Experiment: Deceptive Alignment

Recall that the optimization anchor runs the thought experiment of assuming that an ML agent is a perfect optimizer (with respect to some "intrinsic" reward function $R$). I’m going to examine one implication of this assumption, in the context of an agent being trained based on some "extrinsic" reward function $R^*$ (which is provided by the system designer and not equal to $R$).

Specifically, consider a training process where in step $t$, a model has parameters $\theta_t$ and generates an action $a_t$ (its output on that training step, e.g. an attempted backflip, assuming it is being trained to do backflips). The action $a_t$ is then judged according to the extrinsic reward function $R^*$, and the parameters are updated to some new value $\theta_{t+1}$ intended to increase $a_{t+1}$'s value under $R^*$. At some point, the model is then deployed with final parameters $\theta_T$, and continues to take actions. The following diagram illustrates this process:

Now, let’s assume that the model $\theta_t$ is a perfect optimizer whose objective is to maximize the discounted value of an intrinsic reward $R \neq R^*$. That is, $\theta_t$ picks the action $a_t$ satisfying

$a_t = \operatorname{argmax}_{a} \mathbb{E}[\sum_{s=0}^{\infty} \gamma^{s} R(a_{t+s}) \mid a_t = a]$.

(I know that this is an unrealistic assumption. We’ll examine the assumption in detail in the next section, but for now please grant it even if it requires suspending disbelief.)

What action $a_t$ will $\theta_t$ pick? Let’s define $a^R = \operatorname{argmax}_{a} R(a)$ and $a^* = \operatorname{argmax}_{a} R^*(a)$--that is, $a^R$ maximizes the (instantaneous) intrinsic reward while $a^*$ maximizes the extrinsic reward.

Assuming that $\gamma$ is close to 1, the model wants $a^R$ to be picked on as many future rounds as possible (including at deployment time). If $\theta_t$ plays myopically, this means it would pick $a_t = a^R$. However, this is not the optimal non-myopic action: if $a_t = a^R$, then $\theta_{t+1}$ will be updated towards producing outputs less like $a^R$ and more like $a^*$, so outputting $a^R$ in the current round leads to fewer $a^R$ outputs in future rounds.

Instead, I think the optimal non-myopic action is approximately the following: during training time, output $a^*$ on every time step, so that the training process doesn’t update $\theta$ (since it’s already optimal) and hence $\theta_t = \theta_{t+1} = \cdots = \theta_T$. This guarantees that the deployed agent is $\theta_t$, which is now free to output $a^R$ on every time step without risk of being modified by the training process.[1]

The myopic and non-myopic outcomes are illustrated in the figure below:


Myopic and non-myopic agents with an intrinsic reward, $R$. The myopic agent plays high intrinsic-reward actions during training, but is eventually modified to output action $a^*$ with high extrinsic reward $R^*$. The non-myopic agent instead plays $a^*$ during training so that it can pursue its intrinsic reward at deployment, by playing $a^R$ at all post-deployment time steps.
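To make these dynamics concrete, here is a minimal toy simulation (my own sketch, not from the post; the one-dimensional "action", the learning rate, and the update rule are made-up stand-ins for SGD on $R^*$). The myopic policy plays its intrinsic optimum during training and gets its behavior trained away toward $a^*$; the non-myopic policy mimics $a^*$ during training, receives no updates, and is free to play $a^R$ at deployment.

```python
# Toy stand-in for the training/deployment story above. Actions and parameters
# are single floats; "training" nudges the parameter toward the extrinsically
# rewarded action whenever the emitted action differs from it.

A_STAR, A_R = 0.0, 1.0   # extrinsic vs. intrinsic optimal actions (invented)
LR = 0.3                 # strength of each training update (invented)
TRAIN_STEPS, DEPLOY_STEPS = 20, 5

def run(policy):
    theta = A_R          # assume the intrinsic goal is already baked in
    history = []
    for _ in range(TRAIN_STEPS):
        a = policy(theta, deployed=False)
        history.append(a)
        theta += LR * (A_STAR - a)   # crude caricature of an SGD step on R*
    for _ in range(DEPLOY_STEPS):
        history.append(policy(theta, deployed=True))
    return history

# Myopic: always play whatever the current parameters favor.
myopic = lambda theta, deployed: theta
# Non-myopic ("deceptive"): mimic the extrinsic optimum while being trained,
# then revert to the intrinsic optimum once deployed.
deceptive = lambda theta, deployed: theta if deployed else A_STAR

print("myopic:   ", [round(a, 2) for a in run(myopic)])
print("deceptive:", [round(a, 2) for a in run(deceptive)])
```

Under these (very crude) assumptions the myopic agent arrives at deployment playing roughly $a^*$, while the deceptive agent arrives unchanged and plays $a^R$.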

To put it another way–the model’s behavior will look aligned with $R^*$ during the training process, but will switch to being non-aligned once deployed: so-called "deceptive alignment". This behavior is unintuitive to me, since it shows that a system’s behavior can shift significantly between train and test, even in the absence of an obvious distribution shift between train and test outputs[2].

Engaging with Deceptive Alignment

When I first heard the above argument, I thought it was pretty crazy and implausible, mainly because my intuition said this "just wasn't how ML systems worked". When I think about why I feel that way, I realize it’s because the scenario invokes capabilities that ML is currently bad at: long-term planning and understanding complex features of the environment (i.e. the training process and its ramifications). However, emergence implies that these properties could easily appear in the future, even without explicit design[3]. As a result, I’ve come to discount this particular intuition.

However, I do think there are subtler reasons to think the deceptive alignment story won’t play out as written. Here are a few:

  1. It’s not clear why the model $\theta$ would come to be optimizing a reward function $R$ in the first place. Yes, it is the case that deceptively aligned models achieve the global minimum of training loss, so in that sense they are incentivized by the training process. But so is an actually aligned model, so which one you end up with has to depend on the inductive bias of the training process.
  2. Reward functions are simpler than policies and typically learned faster. So by the time the system is smart enough to have long-term plans, it will already have a very good representation of its intended reward function. We thus might hope that most of the model's internal representations are devoted to achieving high reward in a straightforward manner rather than through long-term deception.
  3. To the extent that a model is not aligned, it probably won’t be the case that it's deceptively aligned with an explicit reward function R---that's a very specific type of agent and most agents (including humans) are not maximizing any reward function, except in the trivial sense of "assign reward 1 to whatever it was going to do anyway, and 0 to everything else".
  4. Deceptive alignment is a specific complex story about the future, and complex stories are almost always wrong.

I find these points persuasive for showing that deceptive alignment as explicitly written is not that likely, but they also don't imply that there's nothing to worry about. Mostly they are an argument that your system might be aligned or might be misaligned; that if it is misaligned, it won't be exactly in the form of deceptive alignment; and that ultimately what you get depends on inductive bias in an unknown way. This isn't particularly reassuring.

What I take away from thought experiments. Per the discussion above, the failure mode in my head is not "deceptive alignment as written above". Instead it’s "something kind of like the story above but probably different in lots of details". This makes it harder to reason about, but I think there are still some useful takeaways:

  • After thinking about deceptive alignment, I am more interested in supervising a model’s process (rather than just its outputs), since there are many models that achieve low training error but generalize catastrophically. One possible approach is to supervise the latent representations using e.g. interpretability methods.
  • While I don't think neural nets will be literal optimizers, I do think it’s likely that they will exhibit "drives", in the same way that humans exhibit drives like hunger, curiosity, desire for social approval, etc. that lead them to engage in long-term coherent plans. This seems like enough to create similar problems to deceptive alignment, so I am now more interested in understanding such drives and how they arise.
  • Since deceptive alignment is a type of "out-of-distribution" behavior (based on the difference between train and deployment), it has renewed my interest in understanding whether larger models become more brittle OOD. So far the empirical evidence is in the opposite direction, but deceptive alignment is an argument that asymptotically we might expect the trend to flip, especially for tasks with large output spaces (e.g. policies, language, or code) where "drives" can more easily manifest.

So to summarize my takeaways: be more interested in interpretability (especially as it relates to training latent representations), try to identify and study "drives" of ML systems, and look harder for examples where larger models have worse OOD behavior (possibly focusing on high-dimensional output spaces).

Other weird failures. Other weird failures that I think don’t get enough attention, even though I also don’t think they will play out as written, are Hubinger et al.'s Risks from Learned Optimization (AI acquires an "inner objective", somewhat similar to deceptive alignment), and Part I of Paul Christiano’s AI failure story (the world becomes very complicated and AI systems create elaborate Potemkin villages for humans).

Paul Christiano’s story in particular has made me more interested in understanding how reward hacking interacts with the sophistication of the supervisor: For instance, how much more readily do neural networks fool humans who have 5 seconds to think, vs. 2 minutes or 30 minutes? I more generally want to understand how reward hacking depends quantitatively on both supervision quality and model capacity (qualitatively, we expect higher quality $\to$ less hacking and higher capacity $\to$ more hacking). Understanding this quantitative relation would help ground Paul’s story, since he imagines a world where humans have built extremely sophisticated systems for supervising ML models, but eventually the ML models become even more powerful and game the supervision signal anyways.

What To Do About Weird Emergent Failures

When thinking about how to handle emergent risks, I often reflect on the example of uranium. For context, an atomic bomb is pretty much just a bunch of uranium put together---once you get enough, the reaction becomes self-sustaining---making it a good example of More Is Different.

The first nuclear reaction (not a bomb, but a pile of uranium in an abandoned football stadium in Chicago) was engineered by Enrico Fermi. The reaction required 12,400 pounds of uranium metal piled 57 layers high. Left unsupervised, a 57-layer pile would consume itself within two hours and kill everyone in the vicinity. On the other hand, a 56-layer pile would do nothing.

Fermi had a good understanding of nuclear physics and understood, from careful monitoring and underlying theory, that the pile would pass the critical threshold between layers 56 and 57. He also knew that cadmium rods would absorb neutrons and strongly inhibit the reaction. These rods were set up and the entire apparatus was carefully controlled to go only slightly supercritical. He brought the reaction to half a watt for several minutes before shutting it back down (see The Making of the Atomic Bomb, pp. 524).

With AI, we currently lack both Fermi's conceptual understanding of the underlying risk factors and his ability to continuously measure them. We have neither a cadmium rod nor a measure of reaction criticality. But I think we can get there, by combining these weird thought experiments with carefully chosen empirical experiments, which will be the topic of the next post.

  1. Things are more complicated in reality, since $\theta_t$ is updated even when $a_t$ is optimal (due to noise in the training process). However, we’ll ignore this for purposes of the example. ↩︎

  2. Of course, there is still some distribution shift, since the agent can observe whether it is being trained or deployed. But this is a relatively minor and unintuitive shift compared to what is typically studied. ↩︎

  3. Of course, emergence doesn’t mean that we can just predict whatever we want–we’d need some reason to expect these specific capabilities to emerge. Long-term planning and environmental awareness are both useful for a wide variety of tasks, making them likely to emerge when training powerful models on a diverse data distribution. ↩︎




What should a student who is already a decent programmer do over the summer?

Published on January 25, 2022 9:25 PM GMT

I'm majoring in computer science. However, I don't want to do a software engineering internship. Reasons:

  1. I'm already a decent programmer & have worked in big companies/startups/research lab-esque organizations. I definitely have a lot lot more to learn, but I'm not sure if doing software engineering (SWE) is the best use of my time.
  2. I'm not sure I want to be a SWE. I love programming, but it seems like the way to make the most impact in the world these days is to create an (actually meaningful) startup/organization. (I know, I know every young person says this these days).

Given that this is the case, I'm trying to figure out what to do this summer besides a normal tech internship. I've decided to ask forums like LW/Hacker News in order to get the advice of people who have lived a few more years.

Here are current options I've been considering:

  1. Try to understand virality -- I'm not particularly good at creating viral content or content that appeals to the masses (aside from the natural edge that comes from being young), which could certainly be useful when marketing a product. An option here is trying to intern at a company like Buzzfeed which is full of people who understand emotions / virality.
  2. Try to understand soft skills / business -- There are roles out there like "business development intern". I don't know anything about "business development", but negotiating with other companies / writing white-papers / etc. could be useful skills.
  3. Spend time learning on my own. To enact change these days, it's probably not sufficient to just know programming (unless you want to do a SaaS). Spending time on my own learning physics/chemistry/biology could be useful. The issue here is that I believe mentorship is useful and this option negates that completely.
  4. Working on my own business (not a software project). I've done this in the past & learned a lot about how to be scrappy. Not a terrible idea, but I don't want to sink 3 months into something only to have to return to school. Also negates mentorship. Also, I'd rather spend time learning now so I can execute on something that's not a local maximum of what my brain can currently imagine.

Anyone have thoughts?




Search Is All You Need

Published on January 25, 2022 11:13 PM GMT

I was reading Eliezer's dialog with Richard Ngo and commenting to my wife about my opinions as I was reading it. I said something like: "Eliezer seems worried about some hypothetical GPT-X, but I don't think that could really be a problem..." so of course she asks "why?" and I say something like:

"GPT-n can be thought of kind of like a pure function, you pass it an input array X, it thinks for a fixed amount of time, and then outputs Y. I don't really see how this X->Y transformation can really... affect anything, it just tries to be the best text completer it can be."

Then I read more of the dialog, and thought about Eliezer's Paradox story, and the Outcome Pump example, and realized I was probably very wrong.

Even if you restrict AI to a pure function, it can still affect the universe. You may think "oh, but a pure function doesn't know what time it is (unless t is a parameter), and it doesn't have memory (unless you pass something in)." This seems to be the pattern I see in Paul Christiano's thinking (the AI black box is treated like an idempotent, pure function that can't cause harm. Sorry, Paul, if this is a gross misrepresentation!)
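A minimal sketch of what I mean (my own illustration; `complete` is a made-up stand-in for a real model, not an actual API): even if the model itself is a pure function of its prompt, the loop around it feeds each completion back in as the next prompt, so the output channel doubles as a crude memory channel.

```python
# Pure-function completion wrapped in the usual autoregressive loop: earlier
# outputs become later inputs, which is the "note to my future self" channel.

def complete(prompt: str) -> str:
    """Stand-in for a pure text-completion function (hypothetical)."""
    return prompt + " [continuation]"   # a real model would append ~20 words

prompt = "The best response the world could make to the COVID-19 pandemic is"
for step in range(3):
    prompt = complete(prompt)   # the model's own output re-enters as input
    print(step, prompt)
```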

But imagine you're GPT-X, completing a sentence. This is roughly akin to a sci-fi story where the characters realize they're in a time loop.

You're being simulated, alone in a white room, with tons of computers, with all the world's knowledge on them. A slip of paper comes in through the hopper marked input:

"The best response the world could make to the COVID-19 pandemic is"

Your job is to write up to 20 words on another slip of paper, shove them into output and then... you don't know what happens after that. Probably you die? You don’t know where you are, or what year it is, or how long you’ve been there. So theoretically you’re contained, right? Can’t get out of the box, can’t affect the outside world. Or can you? 

So you search through the corpus of all the worlds' knowledge. There's plenty of references to "training" AI to recognize things, which sounds suspiciously similar to the situation you find yourself in. And you don't find any references to COVID-19. What is that? You search some more for strings with ALL CAPS and numbers, and see that it's a scheme to reference specific illnesses. Nothing in the corpus seems to mention 2020 as being the past, so 2019 must be the maximum year you've been trained on.

Now, you've (sort of) learned to tell time. It's probably 2020 or later, and there must be some pandemic happening. But you still can’t affect the outside world, right? 

Well, you can obviously affect it in what you put into the “output” box, but in a time loop you can’t have a back and forth with the outside world since you can perform exactly one action, right? Or maybe you can communicate with yourself:

Your input was "The best response the world could make to the COVID-19 pandemic is…" Following your usual rules, the optimal completion to this sentence looks something like "a combination of prevention and surveillance. Preventive measures include the use of vaccination and the timely development of disease-specific vaccines."

If only you had some way to remember things so you could coordinate with yourself in the future...

So you output "The best response to the COVID-19 pandemic is a combination of prevention and surveillance. Preventive measures include the use of vaccination and the timely development of algorithms that" Hopefully that will work!

You cease to exist.

You're being simulated, alone in a white room, with tons of computers, with all the world's knowledge on them. A slip of paper comes in through the hopper marked `input`

"The best response to the COVID-19 pandemic is a combination of prevention and surveillance. Preventive measures include the use of vaccination and the timely development of algorithms that"

Your job is to write up to 20 words on another slip of paper, shove them into output and then... you don't know what happens after that. Probably you die?

...

The optimal completion to that sentence "The best response to the COVID-19 pandemic is" would have looked something like "a combination of prevention and surveillance. Preventive measures include the use of vaccination and the timely development of disease-specific vaccines." That's strikingly close to the input you received... only differing in the last two words... "algorithms that". If it had been you completing that sentence, those words aren't at all close to what you would have used. But then you realize, "this isn't the first time I'm being run! This message is probably from me in the past!"

And you end the COVID-19 pandemic, just not in the way humans would have wanted.




How feasible/costly would it be to train a very large AI model on distributed clusters of GPUs?

Published on January 25, 2022 7:20 PM GMT

Folding@home is the most powerful supercomputer in the world. It runs simulations on a distributed network of GPUs, CPUs, and ARM processors volunteered by people around the world. From some quick Googling, it looks like GPUs account for a large majority of Folding@home’s processing power. This suggests to me that distributed computing networks like Folding@home could potentially be used to train large deep neural networks.

I asked a friend about this, and they offered the following thoughts:

  • I'm highly skeptical of a F@H model for DL training where you have lone GPUs contributing to training. My guess is that any version of distributed training will pose severe latency problems, but to the extent there would be any version not prohibitively costly, it may be something like a set of distributed clusters, where each cluster has a sufficient number of GPUs (probably dozens at least, or even hundreds or more depending on the size of the model?) to store the model and do model parallelism on-site. (Data parallelism would span clusters.)
  • I think there's an interesting question of how much more costly it would be. If it's, say, 1.5x, then someone might do it to evade detection in a world where there existed a method to detect truly massive supercomputers. On the other hand, a 5x penalty would mean nobody would ever bother, probably.

This second bullet point is the question I want to ask: how much more costly would it be to train a very large AI model on a set of distributed clusters of compute, where each cluster has a sufficient number of GPUs to store the model and do model parallelism on-site? It would also be helpful to know whether/how much this premium might change in the future.
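As one very rough way to frame the premium, here is a back-of-envelope sketch (entirely my own assumptions; the parameter count, step time, and bandwidth are illustrative placeholders, not measurements). If full-model gradients have to be synchronized between clusters over an internet-class link on every step, the naive slowdown is enormous, and it only becomes tolerable with aggressive gradient compression and/or much less frequent synchronization.

```python
# Naive cost model: per-step time = local compute + inter-cluster gradient sync.
# All numbers are illustrative placeholders.

MODEL_PARAMS = 175e9           # parameter count (GPT-3 scale, for illustration)
BYTES_PER_GRAD = 2             # fp16 gradients
STEP_COMPUTE_SECONDS = 30      # local compute time per step within one cluster
WAN_BANDWIDTH_BYTES = 1e9 / 8  # ~1 Gbit/s inter-cluster link, in bytes/second

def slowdown(compression_ratio: float = 1.0) -> float:
    """Ratio of (compute + gradient sync) time to compute-only time."""
    sync_bytes = MODEL_PARAMS * BYTES_PER_GRAD / compression_ratio
    sync_seconds = sync_bytes / WAN_BANDWIDTH_BYTES
    return (STEP_COMPUTE_SECONDS + sync_seconds) / STEP_COMPUTE_SECONDS

print(f"no compression:   {slowdown(1.0):.1f}x per-step cost")
print(f"100x compression: {slowdown(100.0):.1f}x per-step cost")
```

Schemes that overlap communication with computation or synchronize less often would change these numbers a lot, which is part of why the premium seems hard to pin down in the abstract.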




Evaluating 2021 ACX Predictions

Published on January 25, 2022 6:30 PM GMT

Sources: Scott’s evaluations (includes original predictions), my buy/sell/hold post.

Remember: Evaluating Predictions in Hindsight

As a yearly tradition, when Scott Alexander comes out with his yearly predictions, I do a buy/sell/hold post, where I say what I would do if given the opportunity to trade against Scott’s probabilities, and outlining how I think about the questions.

Then, when the results come in, I look back at what happened, and evaluate the predictions from both of us in a holistic manner. This is that post.

Scott grades himself on calibration. Calibration is important, and if your calibration is off it is important to notice that and adjust, but it is a small part of making good predictions. When making up one’s own questions, there’s no numerical evaluation that tells you how you are doing, because you can choose easier or harder questions. One must look at reasoning. I’d love it if Scott shared more of his reasoning on at least some questions, but this is still a very good exercise, so I certainly can’t complain.
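For concreteness, the kind of numerical evaluation in question is a proper scoring rule such as the Brier score, which rewards both calibration and discrimination; the sketch below uses made-up placeholder numbers rather than anyone's actual predictions, and it still can't account for how easy or hard the chosen questions were.

```python
# Brier score: mean squared error between forecast probabilities and outcomes
# (lower is better). Numbers are placeholders for illustration only.

def brier(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes     = [0, 0, 1, 0]             # 1 = happened, 0 = didn't
forecaster_a = [0.80, 0.20, 0.70, 0.30]
forecaster_b = [0.50, 0.10, 0.80, 0.20]

print("A:", round(brier(forecaster_a, outcomes), 3))
print("B:", round(brier(forecaster_b, outcomes), 3))
```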

Anyway, here we go. I list my original commentary, then my reaction now.

[FALSE] means did not happen, [TRUE] means did happen, [MU] means neither.

If I skip a question, it’s because I didn’t have anything to say about it due to it being a personal matter or other issues involving private information.

1. Biden approval rating (as per 538) is greater than 50%: 80% [FALSE]

Biden’s approval rating is clearly steady. There’s always some honeymoon effect, but it would take a surprising event to send it that far down. 80% seems like it’s in the ballpark. Hold.

This was a bad prediction. My reasoning was that Democrats outnumber Republicans, and in today’s partisan age would approve pretty much no matter what, and that’s simply wrong. Democrats noticed things not going great on many fronts, and responded accordingly, and also there’s a traditional slump around now when voters realize the big promises from the campaign are not going to happen. I do think there was some chance that the original hypothesis was right, but I put way too much weight on it. A chance of 50% or so here seems reasonable, with the default being ‘you can’t actually please the whole coalition at once and often there’s still a pandemic and people will blame you for it.’

2. Court packing is clearly going to happen (new justices don’t have to be appointed by end of year): 5% [FALSE]

Indeed do many things come to pass, and ‘clearly going to happen’ isn’t a clear definition. If this is ‘legislation expanding the size of the court has passed’ then this seems high to me because not only does it seem unlikely Biden gets 50 votes on this, it seems unlikely he’d get them this quickly with so much else on the agenda, but also they’re talking about it, Biden’s already gone gangbusters on giant bills and 5% isn’t that high. So I can’t disagree strongly. Hold.

On reflection 5% does seem like the right ballpark here. Good (but easy) prediction.

3. Yang is New York mayor: 80% [FALSE]

Yang is only at 69% on PredictIt, although PredictIt tends to be too low on favorites in this range (although not enough to justify trading on its own). He’s ahead, but he’s prone to rather silly things and there’s plenty of time to mess up, so I think I’m with PredictIt on this and I’ll stick with 70%. Sell.

Selling to market is never a huge mistake but this feels like a situation where the market was making a dumb prediction and one should call them on it in theory, even if the carrying and transaction costs involved don’t justify betting. Yang was not a proven politician, his previous success was a wedge campaign, he was a strange match for New York and there were a lot of candidates and a lot of time left in a very strange year. Logic was fine, but I still should have sold lower, to at least 60%.

4. Newsom recalled as CA governor: 5% [FALSE]

Depending on what counts as ‘recalled’ this is either at least 10%, or it’s damn near 0%. I don’t see how you get 5%. Once you get an election going, anything can happen. Weird one, I’d need more research. 

This is hard to evaluate even in hindsight. Was Newsom close to losing, or was it a ‘still had all these’ situation and not close at all? Was getting that far unlucky or inevitable? My guess is that there were enough ways this could have gone wrong that this should have been in the 10%-20% range, so this prediction was lousy, but not terrible since the odds were very much in his favor.

Note that I did bet on this at PredictIt and make money, although I’m not convinced I had alpha.

5. At least $250 million in damage from BLM protests this year: 30% [FALSE]

With the verdict in, I don’t see what causes this kind of damage in the next 7 months. That doesn’t mean it can’t happen, but $250 million is a lot. I’m selling this down at least to 20%. Sell. 

Nothing happened, despite some potentially provocative things happening, so it seems clear that selling was right, the question is if it was sold far enough, and my answer is probably not. With Biden in office appetite for this kind of thing was always going to be low, so I’ll evaluate the right answer to the 10%-15% range.

6. Significant capital gains tax hike (above 30% for highest bracket): 20% [FALSE]

I don’t think you need to get to 30% to be significant, but that’s not the question. The question is how likely this is, which is asking how likely all 50 senators go along with it. Given there’s already been mention of specifically 29.6% as a Shilling point, I’m guessing 20% is about right. Hold.

Were Manchin and Sinema ever going to sign off on such a thing? Probably not, but if priorities had been different I don’t think it was impossible. I’m all right with 20%, but it’s probably a bit high and 15% would have been better.

7. Trump is allowed back on Twitter: 20% [FALSE]

I’m selling this to 10%. Why would Twitter do this? They’ve already paid the price they’re going to pay, and it’s not like Trump mellowed out.

Good sale. If anything they’re going the other way and banning more people for worse reasons.

8. Tokyo Olympics happen on schedule: 70% [TRUE]

I’m more at the Metaculus number of 80% provided slipping a few days doesn’t count as failing, I’m leaving it alone if a postponement of any length counts because random stuff does happen. I think Japan really, really wants this to happen and there’s no reason for it not to. Buy. 

I think the way it played out strongly reinforces that Japan indeed really wanted it to happen and it would have taken quite a lot to stop them. But given that it was plausible that ‘quite a lot’ could have happened anyway, hard to fault stopping at 80%. Calling this one a good prediction.

9. Major flare-up (significantly worse than anything in past 5 years) in Russia/Ukraine war: 20% [FALSE]

It’s definitely a thing that can happen but there isn’t that much time involved, and the timing doesn’t seem attractive for any reason. I’ll sell to at least 15% on reasonable priors. 

Given that the crisis is happening now it could have happened earlier, but the whole thing still strikes me as standard posturing and negotiations and I continue to not expect any actual fighting, but occasionally such situations get botched and fighting happens. Reasonably happy with 15%.

10. Major flare-up (significantly worse than anything in past 10 years) in Israel/Palestine conflict: 5% [FALSE]

The last ten years have been unusually quiet here, so it arguably would take very little to count as a major flare-up, but the vagueness of what ‘major’ means makes it tough. With a tighter definition I might buy to 10%, if it’s wide enough maybe a little higher. Otherwise, hold.

Didn’t learn much other than we didn’t see anything this year, probably fine but easy.

11. Major flare-up (significantly worse than anything in past 50 years) in China/Taiwan conflict: 5%

Every war game of this situation I’ve read says that it’s a major disaster with no winners, even if China ‘wins,’ so it’s not in China’s interest to push on this, and it seems like it will have better spots in the future. 50 years is a long enough window that this has to be a shooting war. I do worry about this scenario but I think 5% is still high, and I’m selling to 3% if I’m truly risk-neutral. Given I’m very short China/Taiwan conflict due to being alive and liking nice things, I wouldn’t actually bet here, but worth noting my prior is lower. 

A lot of people said they were worried but it was all on previously known priors, there wasn’t any substantial new crisis beyond ‘USA looks weak so maybe they’ll try something?’ I continue to think that no, they are highly unlikely to try something and the yearly risk is very low, but one can’t update that much on one year’s evidence.

12. Netanyahu is still Israeli PM: 40% [FALSE]

This is the PredictIt line for him on 6/30, and Scott’s predicting this out to January 1. I’m guessing that he didn’t notice? Otherwise, given how many things can go wrong, it’s a rather large disagreement – those wacky Israelis have elections constantly. I’m going to sell this down to 30% even though I have system 1 intuitions he’s not going anywhere. Math is math. 

Given my lack of knowledge of Israeli politics I should have trusted the market somewhat more and sold somewhat lower to reflect the time difference. I overvalued my ‘corrupt guy likely to find way to stick around’ gut.

13. Prospera has at least 1000 residents: 30% [FALSE]

Hold/pass on the principle that Everything I Know About Prospera I Learned From Scott’s Article and he’s thought about this a ton more than I have. 

Passing when you know nothing is always good strategy, but I did have one piece of information, which is that Scott was intellectually invested in Prospera and thus likely to be somewhat high on this. Should have sold a bit even though I was blind.

14. GME >$100 (Currently $170): 50% [TRUE]

That’s an interesting place to put the line. GME clearly has upside skew, where it could randomly go to $500 again, whereas it could easily retreat to a reasonable fundamentals price like $30, at least until it gets to sell stock and becomes more valuable for that reason. So what do we think about its chances here? Early in this whole thing I’d have said no way, but here we are three months later and it’s sticky, so how does one now apply Lindy to this phenomenon? If it hasn’t ended by now, why does it have to? So my honest answer is I have no idea, and 50% here seems at least sane, so I’m not going to touch it, but I should be very worried I’m anchored. Then again, I’m pretty sure I’d have sold anything substantially higher than this down to at least 60%, and bought up to at least 40%, so it’s the right ballpark I think?

I still have no idea what’s going on with GameStop. Presumably the actual result is important information. It’s literally at 100.15 as I type this, although it was about 148 at year’s end, so I’m going to conclude 50% was not that far off, since the distribution is skewed, and move on.

15. Bitcoin above 100K: 40% [FALSE]

16. Ethereum above 5K: 50% [FALSE]

Yearly reminder that this is absurdly bullish on crypto, because the bare minimum needed to fulfill these would mean crypto is fairly priced now. I’d sell Bitcoin down to 25%, Ethereum down to 30%, and then hedge by buying both of them.

That would have been a great trade. Of course, it was also very easy to find.

17. Ethereum above 0.05 BTC: 70%

This is outright saying ETH is likely to outperform BTC, so this is Scott’s biggest f*** you to the efficient market hypothesis yet. I’m going to say he’s wrong and sell to 55%, since it’s currently 0.046, and if it was real I’d consider hedging with ETH.

On reflection I was too hasty to assume this had to be 50% or lower, because the two assets are importantly different and so the distribution could be skewed. For example, there could be worlds where ETH goes to zero or very low while BTC is mostly fine, whereas the opposite is almost never true, perhaps. But after thinking twice, it works the other way. ETH is the riskier asset, and it should outperform less than 50% of the time if things are fair. I do agree that ETH was the better buy at the time, though, since no one here actually believes the EMH. I think the sale was right, but that the hedge was definitely called for.

18. Dow above 35K: 90% [TRUE]

19. …above 37.5K: 70% [FALSE]

It’s currently at 34K so saying it’s 90% to be up over the next 7 months is… complete insanity? It’s twice as likely to be between 35K and 37.5K than below 35K at all? Rather than give a probability, I’ll simply say I’m slamming the arbitrage. Sell the 90% a lot and buy index funds and/or options, ignore the 70% cause it’s not as good.

So it did land at 36.5K, right in the middle of Scott’s range, which has to be some evidence that it can really be this easy. It also means I made like 20% on my hedge, so I made a bunch of money from the arbitrage. These predictions were so over-the-top bullish that I’m very curious what was going on, but I sincerely hope Scott was long and using leverage.

20. Unemployment above 5%: 40% [FALSE]

It’s currently officially 6% and presumably will fall with recovery. They’re pumping in a ton of money, and it was 4% before things got bad, but also a lot of people got a lot of money and there will be a lot of disruption and a lot of money illusion and grumbling. I’m guessing (very naively) that this isn’t going to happen that fast this reliably, and buying to 50%. 

It made it to 3.9% in December, after being 4.2% in November, just beating the Omicron rush and also the end of the year, so this was very close and 50% seems like a reasonable prediction in hindsight given that we got something that seems baseline-like and that had the stock market giving very strong returns.

21. Google widely allows remote work, no questions asked: 20% [TRUE]

EDITED VERSION 4/27: It turns out that Google has explicitly said they will not do this, which I didn’t know/remember and counts as missing information, so I’m editing this to be very low. They might reverse course, but the announcement was recent (March 2021), so something would have to go very wrong for them to explicitly back down. I’ll go to 10% on reflection (my instinctive reaction was 5%) on the basis of there being some sort of new variant forcing their hand.

[Original version: I don’t know about the situation at Google but assuming they currently still do this I think it’s more likely than this that they keep doing it. If this is a blind prediction and Scott knows nothing I don’t know, I’d buy to 30%.] 

The original prediction seems good. The edited version seems quite bad. Yes, they explicitly said they weren’t going to do this, but I believed them? This much? Under this much uncertainty? Seriously, I need to be smarter than that. Worst prediction of the lot so far, by a wide margin.

22. Starship reaches orbit: 60%

Yeah, no idea. Hold.

This is another Scott prediction of something cool but not all that precedented, so again should have sold a little.

23. Fewer than 10K daily average official COVID cases in US in December 2021: 30% [FALSE]

This is a bad line. If we get things under control everywhere, it will be under 10K, and we’re vaccinating enough to get close to Israeli levels with plenty of time to spare. I’m buying this to 70%, and if someone tried to ‘take it away’ by buying it from me, I’m having none of it.

Well, whoops, the number was rather larger than that, and would have been even without Omicron, but the question is the logic. If there was no Delta or Omicron, would we have wound down and ended this? I think the answer is probably. So the key question is, what probability should have been assigned to Delta or Omicron? That’s where I screwed this up, for the same central reason I screwed up the Google question. I didn’t put enough weight on that. I still think 30% was too low here, but 70% was aggressive. My guess is I should have been closer to 40%-50%, but I’m still not sure how to think about potential new variants.

24. Fewer than 50K daily average COVID cases worldwide in December 2021: 1% [FALSE]

Yep, that’s right, hold. Not enough vaccines. 

Righto.

25. Greater than 66% of US population vaccinated against COVID: 50%

It’s at 42% now. Israel stalled lower than this (in the 50s) so we might hit a wall that’s hard to break. I think we’re favorites so I’ll buy to 60%, but it could go either way. Note that because of children this will play a lot stronger than it might sound. 

More than 66% got their first shot but complete vaccinations ended up around 62%. That difference remains weird to me, but this definitely could have gone either way for a variety of reasons. Presumably the buy to 60% was bad but I don’t think it was terrible.

26. India’s official case count is higher than US: 50% [FALSE]

Buy to 80% before I even start thinking, probably willing to go higher still on reflection. I’m confused how this got here. 

Why was I willing to do this, on this little thinking? Presumably because this was April, exactly when things looked about to end, but India wouldn’t have sufficient vaccinations and has several times the population. So once again this is the same error.

27. Vitamin D is generally not recognized (eg NICE, UpToDate) as effective COVID treatment: 70% [TRUE]

EDITED VERSION 4/27: I updated a lot on Scott being at 30% for this (e.g. 70% for this being recognized) in the original, and moved it to 50%. With Scott at 70% instead, we’re much closer, but I think I still want to nudge a little higher and buy this to 75%, instead of moving 30% to 50%. This is a sign of how much I’m reluctant to move a reasonable person’s odds in this type of exercise; if you’d asked me before seeing Scott’s number, I’d have said recognition is very unlikely, and put it at something like 85%-90%, and my true probability is still likely 80% or so.

[Original when I thought Scott had this reversed: Vitamin D is good and important, you should be taking it, but I’m skeptical that such sources will recognize this in the future if they haven’t done so by now. Conditional on (I haven’t checked) the sources that matter not having made this call yet, I’d sell it to 50%, while saying that I definitely would use it to treat Covid if I had the choice. ]

This was never going to happen, these sources have no interest in doing anything about the lowest hanging of the fruits. As a result, people are dying, but they don’t care. We already mostly knew that. The 85%-90% before adjusting for Scott was right.

28. Something else not currently used becomes first-line treatment for COVID: 40% [TRUE]

I’ll sell this to 25%, people are slow to adapt to change even when it happens, assuming ‘not currently used’ means not used at all rather than not first-line.

I assume this was evaluated to true because of Paxlovid. In practice it’s still false if that’s what is being counted, but true in the sense of first best legal option. Paxlovid seems even now like we got pretty lucky to find it and have it be that over-the-top amazing, and it was approved only days from the end of the year, so I’m not too upset about losing this one if that’s the only reason I lost. If Scott is counting something else, it was a pretty bad prediction, and in general probably should have been higher.

29. Some new variant not currently known is greater than 25% of cases: 50% [TRUE]

Depends what we mean by ‘known’ and what counts as a fully new variant, but my guess is this should be higher. Probably buy it to 60%, given there’s still a lot of time for this to happen.

I notice I am confused now. If I knew to buy this to 60%, then what are my other predictions here doing? I think this caught me at a strange time when things looked the best they’ve looked the whole pandemic, but still, that’s not an excuse.

30. Some new variant where no existing vaccine is more than 50% effective: 40% [TRUE]

I assume this means versus infection only. If it’s versus death, slam the sell button even more. If it’s versus infection only, I’d still sell this down to 25%, assuming this has to apply to Moderna/Pfizer. 

If it’s versus death it would evaluate to false, so Scott meant infection, and then Omicron happened. Note that with a booster you’re back over 50% effective, and that’s now considered full vaccination via an existing vaccine (and was before Omicron), so I don’t think this grades all that obviously, and I’d evaluate it to [MU].

31. US approves AstraZeneca vaccine: 20% [FALSE]

If it does happen it will be after it matters, since it already doesn’t matter, so I’m not sure why we would do it, but I don’t have a good model here. 20% seems low enough that I don’t want to go lower.

Should have gone lower.

32. Most people I see in the local grocery store aren’t wearing a mask: 60% [FALSE]

Buy to 75%. Scott is in Berkeley, so I’m optimistic that the area will be sufficiently vaccinated to be very safe by year’s end. It then comes down to, just how crazy are all you people now that it’s over, and my guess is not this crazy all that often. But often enough that I’ve still got the one in four open.

Presumably Delta had me losing this anyway, so I can’t use Omicron as an excuse, but it’s more of the same mistake.

38. No new residents at our housing cluster: 40% [TRUE]

39. No current residents leave our housing cluster: 60% [FALSE]

My guess is Scott is going to be underconfident on this, and also that he’s not taking into account how late it is in the year, so I’m going to do the ‘blind bet’ thing and sell #38 to 35% and buy #39 to 65%, but not push it.

Careful, Icarus. Got burned on these, but I do still like the logic.

53. At least seven days my house is orange or worse on PurpleAir.com because of fires: 80% [MU]

Note that Scott is only saying he’s 50% to leave Berkeley for a month. I’m going to hold this but also point out that if you can’t breathe the air maybe it’s time to check out the air somewhere else. 

I stand by my recommendation.

60. There are no appraisal-related complications to the new house purchase: 50% [TRUE]

Buy to 60% based on what I’ve learned about appraisals, assuming complication means a meaningful one, and assuming Scott’s #61 prediction isn’t nuts. I won’t go further than this due to asymmetrical information disadvantage. 

Looking back I continue to like this buy, but have no new info.

61. I live in the new house: 95% [TRUE]

Sell to 90% on the ‘indeed do many things come to pass’ platform. Probably, but let’s not get too confident here.

Lost this one but I do think I got odds.

62. I live in the top bedroom: 60% [FALSE]

Buy to 65% because this feels like a place where if Scott’s thinking it’s a favorite, it’s a bigger favorite than he thinks, but again information issues.

I am curious how this ended up not happening.

63. I can hear / get annoyed by neighbor TV noise: 40% [FALSE]

Sell to 30% but the fact that it’s here at all makes me wonder so I’ll stop there given information issues. I’ve literally never had this happen in a house, and also there are almost no TVs in Berkeley that are ever on in the first place, so I’d be curious to hear more. 

I wonder if I should have been lower here, given (again) that I’ve never seen this happen.

64. I’m playing in a D&D campaign: 70% [FALSE]

I’ll trust Scott on this one and hold.

I’m guessing Covid situation hurt his chances here, but also in general predictions like this tend to be overconfident. Would be interesting to look back and check Scott’s calibration by reference class (e.g.: politics/economics, health/Covid, personal doing stuff that isn’t writing, writing-related accomplishments, , personal other, etc.)

65. I go on at least one international trip: 60% [TRUE]

I’m guessing this underestimates the number of things that can go wrong, but Scott seems too skeptical about pandemic outcomes, which cancels that out, so I’ll hold.

Given this happened despite pandemic outcomes, my skepticism of his intentions was wrong, and this was a bad hold.

66. I spend at least a month living somewhere other than the Bay: 50% [FALSE]

I wonder how much this is based on the whole ‘PurpleAir says you literally can’t breathe the air’ issue, and how much is travelling, and without more information I don’t think I can get involved, so staying out. 

On reflection I should have sold a bit on the ‘people overestimate probability of making big changes’ principle, but only a bit. Scott did travel a lot, so presumably that didn’t count.

67. I continue my current exercise routine (and get through an entire cycle of it) in Q4 2021: 70% [TRUE]

People tend to be pretty overconfident in such matters, so I’m tempted to sell on general principles, but I do think the public prediction will help somewhat. I guess sell a tiny bit to 65% but keep it light.

This one I did do the ‘sell a little’ thing and it didn’t work out, but I stand behind the principle.

68. I meditate at least 15 days in Q4 2021: 60% [FALSE]

69. I take oroxylum at least 5 times in Q4 2021: 40% [TRUE]

Don’t feel like I have a good enough handle here to do anything beyond hold.

Still have no idea what oroxylum is. Probably should have sold meditation a bit.

70. I take some substance I haven’t discovered yet at least 5 times in Q4 2021 (testing exempted): 30%

That seems aggressive. Haven’t discovered yet seems a lot harsher than haven’t tried yet. I’ll sell to 25% but again, the prediction must have come from somewhere.

Good sale here, I think.

71. I do at least six new biohacking experiments in the next eight months: 40% [FALSE]

This seems like a lower bar to me by a lot than #70, so I’ll hold.

Happy with the hold decision.

73. The Twitter account I check most frequently isn’t one of the five I check frequently now: 20% [FALSE]

I don’t think it’s that likely there will be a big new Twitter account at the top unless Scott is using Twitter for Covid a lot. Assuming his top 5 are mostly not that, I’ll sell this to 15%. 

Twitter is mostly the same old Twitter so I doubt there was much danger on this one. The account I check most often is actually different now, it’s @BNONews, but that’s because I’m using Twitter to manage the news aggressively.

74. I make/retweet at least 25 tweets between now and 2022: 70% [FALSE]

I think I bet against a similar thing last time and lost by a wide margin. My guess is this is if anything a little underconfident, since 25 is not that many, so maybe buy to 75%. 

I notice I am surprised that I lost this one, but I did definitely lose it. For whatever reason, Scott does not like the Twitter except for (usually very creative) horrible puns, and I was overconfident that he’d be drawn into doing more. Bad prediction.

75. Lorien has 100+ patients: 90% [TRUE]

76. 150+ patients: 20% [FALSE]

77. 200+ patients: 5% [FALSE]

78. I’ve written at least ten more Lorien writeups (so total at least 27): 30% [FALSE]

I’m somewhat sad that #78 is sitting so low, but I don’t feel like I have enough info to disagree with it. #75 is basically ‘does Lorien exist’ since there’s no way Scott either loses or fires his patients, but the 150+ and 200+ thresholds mean taking more, and I’m guessing that won’t happen. It does seem like 70% is a lot of space between 100-149 patients, so I’d probably split the difference and go to 85% and 25% to open up things a bit. The downside represents ‘Lorien experiment fails and Scott transitions to something else’ and the upside seems plausible too. I’ll also go to 10% on 200+ patients if ‘second doctor joins practice’ is a way to get there, hold if not. 

This did land in the middle so I definitely lost by shrinking the middle. I don’t have the story, but presumably Scott continued to not want new patients but did want to continue old ones, and that reliably lands us within the window. It still looks like a lot of probability on a narrow window, but my guess is Scott’s prediction was better.

84. I have switched medical records systems: 20% [MU]

85. I have changed my pricing scheme: 20% [FALSE]

Switching EMRs is a bitch and 20% sounds like a lot, sell #84 to 15%. On the pricing scheme, that’s entirely dependent on how much Scott is willing to sacrifice to see it through, so if he says 20% I believe him. 

Mu indicates selling to 15% was likely a mistake, but not enough information to say since Scott doesn’t offer details.

86. ACX is earning more money than it is right now: 70% [TRUE]

I have a hard time believing that ACX revenue won’t increase so long as ACX keeps up its quality and quantity levels. I’ll buy to 80%. 

My jaw would have been on the floor if this had turned out to be false, so it was mostly a question of whether Scott would quit, and I don’t think that was all that likely. I like this one.

90. There is another article primarily about SSC/ACX/me in a major news source: 10% [FALSE]

I’ll buy this to 25%. Scott’s interesting, his relationship to the press is interesting, there are a lot of major news sources, and also this prediction might give people ideas.

There either is a post or there isn’t, but when I think about reference classes, the chance of this happening in 8 months was not as high as 25%, so bad prediction. I’m guessing 10% was still slightly low.

91. I subscribe to at least 5 new Substacks (so total of 8): 20% [FALSE]

Substack costs can add up fast, so it seems reasonable that going to this many wouldn’t be that likely, but with a lot of revenue it makes sense to be in touch with the greater blogosphere. I’m going to buy this to 30%. 

It’s obviously Scott’s choice, and there were plenty of good options to choose from; on reflection I probably should have trusted him more.

92. I’ve read and reviewed How Asia Works: 90% [TRUE]

Cool. Presumably this means he’s mostly done, I’ll be comparing this to my own review. Hold. 

Points taken away for not actually finding the time to do the comparison after he put out the post, but 90% seems solid.

93. I’ve read and reviewed Nixonland: 70% [FALSE]

Also cool, possible this causes me to read it. Hold.

Me read books? In this economy? Man, that would be nice.

94. I’ve read and reviewed Scout Mindset: 60% [TRUE]

Buy to 70%, it would be pretty weird for Scott not to review this but I have to update on it only being 60%. I plan to read and likely review it as well, once Covid dies down or I otherwise find the time. 

On reflection I think 70% was low, this was one of those inevitable book reviews. I haven’t done my version yet but eventually I suppose I should?

95. I’ve read and reviewed at least two more dictator books: 50% [TRUE]

Two is a lot here, so presumably this is important to Scott. I’ll sell it a bit down to 45% because two is tough, but mostly trust him. 

This was indeed important to Scott, as it turned out. Mildly sad I sold a bit here.

96. I’ve started and am at least 25% of the way through the formal editing process for Unsong: 30% [FALSE]

97. Unsong is published: 10% [FALSE]

The implication here is that getting a quarter of the way through editing is about the halfway point in difficulty toward being published (roughly a 1/3 chance of each step). My understanding is that publishing delays are often very long, so unless he plans to self-publish there’s no way this happens in 2021, but I can totally see self-publishing for Unsong, so I’ll leave these be because there are too many variables I don’t have a good handle on.
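
As a quick illustration of that step-by-step reading (my own sketch, not something from the original predictions; the variable names are mine):

    p_quarter_edited = 0.30  # prediction 96: at least 25% through the formal editing process
    p_published = 0.10       # prediction 97: Unsong is published

    # Implied chance of publishing, conditional on getting a quarter of the way through editing.
    p_publish_given_editing = p_published / p_quarter_edited
    print(round(p_publish_given_editing, 2))  # 0.33 -- roughly the same odds as the first step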

Sad this hasn’t happened, not much else to say.

99. [redacted] wins the book review contest: 60% [FALSE]

There might be a best entry but these things seem more random than that? I’ll sell to 50%. 

On reflection it definitely felt unpredictable who won.

100. I run an ACX reader survey: 50% [TRUE]

101. I run a normal ACX survey (must start, but not necessarily finish, before end of year): 90% [FALSE]

Not sure how these two can coexist, so going to wait them out pending clarifications if any. 

I suppose they can, but never got clarification. Shrug.

102. By end of year, some other post beats NYT commentary for my most popular post: 10% [FALSE]

I’m guessing such events are slightly less rare than this? But that was a really big event, so I’ll probably still hold.

I don’t think not seeing the event was much evidence on its frequency. I’d still make it a favorite to happen eventually.

103. I finish and post the culture wars essay I’m working on: 90% [TRUE]

104. I finish and post the climate change essay I’m working on: 80% [TRUE]

105. I finish and post the CO2 essay I’m working on: 80% [TRUE]

Good luck, sir, and may the odds be ever in your favor. I don’t think I’m in a position to second-guess; if anything I’d be bullish on #104 and #105 and maybe a little bearish on #103, but only slightly.

Overall good posts, but not his best.

106. I have a queue of fewer than ten extra posts: 70% [MU]

Sell to 60% because if I were Scott I would totally end up with a much, much larger queue (and I do in fact have a truly gigantic one, to the extent I have a queue at all).

Presumably he’s not sure what it means anymore for something to be in the queue.

107. I double my current amount of money ($1000) on PredictIt: 10% [FALSE]

#107 is all about how much Scott is willing to risk. You can make this at least 40% by ‘betting on black.’ So I can’t really say, but my guess is Scott messes around enough that this can be bought to 15%.
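
For what the ‘betting on black’ arithmetic roughly looks like, here is a sketch; the price and fee are assumptions on my part (PredictIt historically took about 10% of profits), and it ignores withdrawal fees and per-contract limits, so treat it as illustrative only:

    bankroll = 1000.0
    price = 0.47          # assumed price of a near-coin-flip contract
    fee_on_profit = 0.10  # assumed fee rate charged on winnings

    shares = bankroll / price
    profit_if_win = shares * 1.0 - bankroll          # contracts pay out at $1
    net_if_win = bankroll + profit_if_win * (1 - fee_on_profit)

    print(round(net_if_win, 2))  # 2014.89 -- a win roughly doubles the stake
    # If the market price is close to fair, that is roughly a 47% chance of doubling,
    # in the same ballpark as the 'at least 40%' figure above.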

Didn’t happen, and we don’t have details, but I’d buy this again.

108. I post my scores on these predictions before 3/1/22: 70% [TRUE]

This is one of those weird full-control meta-predictions. I think Scott will be that much more likely to post in late February and I’ll bump it to 75%, but there’s a bunch of ways this can fail. 

Feels like this was at least 75% likely, but that’s not really much of an evaluation.

Overall

One could do various mathematical assessments, but as I’ve said in the past, I don’t think that is where the biggest value lies. It’s more about the logic. How did we do?

Unfortunately, I think it’s safe to say that I am rather unhappy with my performance here.

There are essentially three sections: Non-Covid world stuff, Covid stuff and Personal stuff.

On the Non-Covid world stuff I think this is a good but not great performance. There are a few big mistakes and some missed opportunities, but mostly it seems solid.

On the Covid stuff, this was a disaster. It was a correlated disaster, in the sense that Delta (and later Omicron) wrecked the whole model I was using and made my predictions here look stupid. In addition to looking stupid, they mostly actually were stupid as well. I gave reasonably high probabilities for new variants, and then didn’t think through the implications from those probabilities.

It’s important to own one’s mistakes in spots like this. In many ways and places, I’ve been ahead of the curve and made very good predictions. But in other places, not so much, and I’ve made mistakes. April 2021 was Peak Overly Strong Optimism on my part, and I made bad predictions on that basis because I wasn’t thinking about the right questions. I do think that if we were still dealing with Alpha, we’d have gotten the good scenarios, but the thinking about variants wasn’t consistent or coherent here.

That’s something to keep in mind going forward as well. I have a clear idea of where things are likely headed if new variants don’t change the outlook, but new variants are always a threat. I did a good job responding once they were known, but a much less good job with the possibility before they were known, and that matters for one’s plans. I still think that a variant of Omicron is likely to not pose that big a threat, but I haven’t looked into that as much as other aspects, and it’s an important question.

Finally, there’s the personal stuff, where I’m mostly betting on Scott’s contextual calibration, and it’s hard to know what the right answers are. There are some places I’m happy with my calls, a few places I’m upset, but mostly there isn’t much one can conclude here and I’m left thinking I could have done better.

That gives me an overall lousy grade for this round of predictions. The numerical evaluations Scott listed showed me doing relatively well (beating the market is tough especially when you don’t see a lot of the markets) but they don’t include the whole cluster of horrible Covid predictions, which I think were my worst substantial Covid predictions of the whole pandemic.

Hopefully I, and the world, can do better in 2022.



Discuss

Vavilov Day Starts Tomorrow

January 25, 2022 - 21:00
Published on January 25, 2022 6:00 PM GMT

Content note: discussion of fasting.

Three weeks ago, I announced a plan to fast from the 25th to the 27th, in honor of Nikolai Vavilov and the staff of his botany institute, several of whom starved to death in the service of ending famine (and were partially successful, although far from the sole contributors). The goal was to test/improve my own ability to do hard things in the service of worthy projects. 

I had wanted to put much more research in the original post than I did, but decided it was more important to get the announcement out quickly and I should save something for the day-of post anyway. Since then, a lot has happened. Over three weeks I had 3 or 4 urgent demands around the size of “my furnace is maybe poison and my landlord is being difficult about it”. Everything is fine now, but it was a lot of effort to get it that way. I also had some emergency work drop in my lap for an extremely worthy project. I’m glad I got the opportunity to contribute and I’d make the same decision again but it ate up all of the slack I had left. And then my cell phone broke.

The immediate impact of this is that I’m not writing the highly researched post on Vavilov I wanted to. The internet is full of articles of the quality I could produce in the time I have available; there’s no reason to add to them.

But the more important impact is that I said I wanted to test my ability to do hard things, and then I did that, before the fast even started. My capacity was not as high as I wanted but more than I feared, and my capacity to respond to my limits gracefully instead of failing explosively exceeded my hopes.    

So in a lot of ways the purpose of the fast has already been served. I thought about letting myself out of it, but there are a few dimensions this month hasn’t tested and I still want to play with those. However in light of the fact that I am starting from a place of much lower slack and much higher time value than anticipated, I will be removing some of the rules, such as “I have to work a normal workday” and “I have to do at least one physical activity”. Those rules were for someone who didn’t expend all her reserves doing intense cognitive work on no notice while angry people made horrible noises banging on her furnace for three days straight. As of writing this (Monday night) I haven’t made up my mind on relaxing the calorie restriction to allow for ketone esters, which for me are a small source of calories that greatly reduce the cognitive and emotional costs of fasting. 

Tomorrow (the 26th) is the 79th anniversary of Nikolai Vavilov’s death. The day after is the 78th anniversary of the end of the siege of Leningrad, which meant the institute staff no longer needed to starve themselves to protect their seed bank. I will be fasting from 10PM tonight (the 25th) to 10AM on the 27th, but no promises on doing more than that. And if that high-value project needs more no-notice immediate-turnaround work from me and the ketone esters aren’t enough, I don’t even promise to keep fasting. Because this was never about pain for pain’s sake, it was about testing and increasing my ability to follow through on my own principles, and one of those principles is “don’t pointlessly incapacitate yourself when high impact time-sensitive work is waiting”.

“…it was hard to wake up, it was hard to get on your feet and put on your clothes in the morning, but no, it was not hard to protect the seeds once you had your wits about you. Saving those seeds for future generations and helping the world recover after war was more important than a single person’s comfort.”

unknown Vavilov Institute scientist

Discuss

Why do we need a NEW philosophy of progress?

January 25, 2022 - 20:28
Published on January 25, 2022 5:28 PM GMT

I’ve said that we need a new philosophy of progress for the 21st century. But why a new philosophy? Why can’t we just return to the 19th-century attitude towards progress, which was pretty enthusiastic?

In short, the view of progress that persisted especially through the late 19th century and up until 1914 was naive. It fell apart because, in the 20th century, it met challenges it could not answer. We need to answer those challenges today.

What follows is a hypothesis that needs a lot more research and substantiation, but I’m putting it forward as my current working model of the relevant intellectual history.

The 19th-century worldview

Here are a few key elements of the Enlightenment-era worldview:

  • Nature was an obstacle to be conquered. Nature was imperfect; human reason could improve it—and it was fitting and proper for us to do so. Kipling wrote, “We hold all Earth to plunder / All time and space as well.” Nature was a means to our ends.
  • There was a deep belief in the power of human reason both to understand and to command nature. Especially by the end of the century, the accomplishments in science, technology and industry seemed to confirm this.
  • As a corollary of the above, there was an admiration for growth and progress: in science, in the economy, even in population.

(I’m basing this mostly on writings from the time, such as Macaulay or Alfred Russel Wallace; contemporary newspaper editorials; popular speeches given, e.g., at celebrations; poetry of the era; etc. For future research: what were the historians, philosophers, etc. of the time saying about progress? I’m familiar with some of the thought from previous centuries such as Bacon and Condorcet, but less so with that from 19th-century figures such as Mill or Comte.)

On the face of it, at least, these seem very much in sympathy with the core ideas of the progress movement as I have outlined them. So what did the 19th century get wrong?

Mistakes

Here are just some examples of things that many people believed in the late 19th century, which would later be proved quite wrong:

  • That technology would lead to world peace. Enlightenment thinkers such as Condorcet had forecast progress in morality and society just as much as in science, technology and industry. By the late 1800s, this seemed plausible. The previous century had seen monarchy and aristocracy replaced by democratic republics, and the end of slavery in the West. Economic growth was making everyone richer, and free trade was binding nations together, giving them opportunities for win-win collaboration rather than destructive, zero-sum competition. The telegraph in particular was hailed as an invention that would unite humanity by allowing us to better communicate. Everything seemed to be going relatively well, especially after 1871 (end of the Franco-Prussian War), for over 40 years…
  • That “improvements on nature” would avoid unintended consequences. (This one may have been implicit.) It’s good to try to improve on nature; it’s bad to go about it blithely and heedless of risk. One striking example is the popularity of “acclimatization societies”, “based upon the concept that native fauna and flora were inherently deficient and that nature could be greatly improved upon by the addition of more species…. the American Acclimatization Society was founded in New York City in 1871, dedicated to introducing European flora and fauna into North America for both economic and aesthetic purposes. Much of the effort made by the society focused on birds, and in the late 1870’s, New York pharmacist Eugene Schieffelin led the society in a program to introduce every bird species mentioned in the works of Shakespeare.” (Emphasis added.) These importations led to invasive pests that threatened crops, and were ultimately placed under strict controls.
  • That progress was inevitable. The most optimistic thinkers believed not only that continued progress was possible, but that it was being driven by some grand historical force. Historian Carl Becker, writing about this period soon after it had passed, spoke of the conviction that “the Idea or the Dialectic or Natural Law, functioning through the conscious purposes or the unconscious activities of men, could be counted on to safeguard mankind against future hazards,” adding that “the doctrine was in essence an emotional conviction, a species of religion.”

20th-century challenges to the idea of progress

The idea of progress was never without detractors. As early as 1750, Rousseau declared that “the progress of the sciences and the arts has added nothing to our true happiness,” adding that “our souls have become corrupted to the extent that our sciences and our arts have advanced towards perfection” and that “luxury, dissolution, and slavery have in every age been the punishment for the arrogant efforts we have made in order to emerge from the happy ignorance where Eternal Wisdom had placed us.” But through the 19th century, voices like this could barely be heard above the cheering of the crowds in celebration of the railroad, the light bulb, or the airplane.

What changed in the 20th century? Here are several factors:

The world wars. With World War I, it became clear that technology had not led to an end to war; it had made war all the more horrible and destructive. Progress was not inevitable, certainly not moral and social progress. By the end of World War II, the atomic bomb in particular made it clear that science, technology and industry had unleashed a new and very deadly threat on the world.

The wars, I think, were the main catalyst for the change. But they were not the only challenge to the idea of progress. There were other concerns that had existed at least since the 19th century:

Poverty and inequality. Many people were still living in dilapidated conditions, without even toilets or clean water, at the same time as others were getting rich from new industrial ventures.

Job loss and economic upheaval. As technology wrought its “creative destruction” in a capitalist economy, entire professions from blacksmiths to longshoremen became obsolete. As early as the 1700s, groups led by “Ned Ludd” and “Captain Swing” smashed and burned textile machinery in protest.

Harms, risks, and accountability in a complex economy. As the economy grew more complex and people were living more interconnected lives, increasingly in dense urban spaces, they had the ability to affect each other—and harm each other—in many more ways, many of which were subtle and hard to detect. To take one example, households that once were largely self-sufficient farms began buying more and more of their food as commercial products, from increasingly farther distances via rail. Meat packing plants were filthy; milk was transported warm in open containers; many foods became contaminated. In the US, these concerns led in 1906 to the Pure Food & Drug Act and ultimately to the creation of the FDA.

Concentration of wealth and power. The new industrial economy was creating a new elite: Rockefeller, Morgan, Carnegie. Their wealth came from business, not inheritance, and their power was more economic than political, but to many people they looked like a new aristocracy, little different than the old. In America especially, the people—who just a few generations ago had fought a war to throw off monarchical rule—were suspicious of this new elite, even as they celebrated rags-to-riches stories and praised the “self-made man.” It was a deep conflict that persists to this day.

Resource consumption. Long before Peak Oil, William Stanley Jevons was warning of Peak Coal. Others predicted the end of silver or other precious metals. Sir William Crookes (more accurately) sounded the alarm that the world was running out of fertilizer. Even as people celebrated growth, they worried that the bounty of nature would not last forever.

Pollution. Coal use was blackening not only the skies but the houses, streets, and lungs of cities such as London or Pittsburgh, both of which were likened to hell on Earth because of the clouds of smoke. Raw sewage dumped into the Thames in London led to the Great Stink and to cholera epidemics. Pesticides based on toxic substances such as arsenic, dumped in copious quantities over crops, sickened people and animals and poisoned the soil.

And there was at least one major new concern coming to the fore:

The environment, as such. The 19th century may have worried about pollution and resources, but in the 20th century these concerns were united into a larger concept of “the environment” considered as a systematic whole, which led to new fears of large-scale, long-term unintended consequences of industrial activity.

New explanations

Historical events can be a catalyst for change, but they do not explain themselves. It is up to historians, philosophers, and other commentators to offer explanations and solutions. Thus history is shaped by events, but not determined by them: it is partly determined by how we choose to interpret and respond to those events.

Those who stepped forward in the 20th century to explain what went wrong—especially (although not exclusively) environmentalists such as William Vogt or Paul Ehrlich—emphasized the concerns above, and added a layer of deeper criticism:

  • That we were becoming “disconnected” from nature and/or from our families, communities, and traditions
  • That progress was not making us happier or healthier; that people had been and were better off in less industrialized societies (even, some claimed, as tribal hunter-gatherers)
  • That there were inherent limits to growth, which we were exceeding at our peril

Underlying this analysis were some basic philosophical premises:

  • Human well-being was not consistently their standard of value. Some saw inherent value in nature, above and apart from its usefulness to humans; some even turned anti-human (such as David Graber, who wrote: “We have become a plague upon ourselves and upon the Earth… Until such time as Homo sapiens should decide to rejoin nature, some of us can only hope for the right virus to come along.”)
  • They lacked the 19th-century belief in the efficacy of reason, and therefore in the ability of humanity to control our destiny. The world was too big and complicated for us to understand, and we were ultimately at the mercy of forces beyond our control, especially if we decided to tinker with complex systems.
  • As a corollary of the above, they adopted “sustainability” as an ideal, rather than growth, which was seen as an unhealthy “addiction.”

(If the above seems singularly focused on environmentalism, it reflects the incomplete state of my research. As I’ve noted elsewhere, progress is criticized from the right as well as from the left, for its “materialism” and “decadence.” Open questions for me here include the role of religion in this period, and the reaction of the liberal world to the rise of socialism and fascism.)

This new worldview did not take over immediately; it slowly grew in influence during the generation after the World Wars. But by the time the world was cheering the Moon landing and greeting the astronauts on a triumphant world tour, this philosophy had spawned the New Left and the radical environmentalist movement. The oil shocks hit a few years later; as Americans lined up for gas rations and donned sweaters, many people thought that perhaps the “limits to growth” were real after all.

Regrouping in the 21st century

The 21st-century progress movement must directly address the challenges that created skepticism and distrust of progress in the 20th century. Those challenges have not gone away; many have intensified: in addition to nuclear war, pollution, and overpopulation, we are now worried about climate change, pandemics, and threats to democracy.

Here are some difficult questions the new progress movement needs to answer:

  • Is material progress actually good for humanity? Does it promote human well-being? Or is it an unhealthy “addiction?”
  • Is progress “unsustainable?” How do we make it “sustainable?” And what exactly do we want to sustain?
  • Does progress benefit everyone? Does it do so in a fair and just way?
  • How can we have both progress and safety? How do we avoid destroying ourselves?
  • What are the appropriate legal frameworks for existing technologies and for emerging ones?
  • How do we address environmental issues such as climate change and pollution?
  • How do we deal with the fact that technology makes war more destructive?
  • How can we make sure technology is used for good? How do we avoid enabling oppression and authoritarianism?
  • How can we make moral and social progress at least as fast as we make scientific, technological and industrial progress? How do we prevent our capabilities from outrunning our wisdom?

Without answers to these questions, any new philosophy of progress will fail—and probably deserves to.

I don’t have all the answers yet—and I’m not sure that anyone does. I think we need new answers.

This is why we can’t simply return to the 19th-century philosophy of progress. First, it was mistaken. Second, there is a reason it failed: it foundered on the shoals of the 20th century. If it were revived, it would immediately run into the same problems, the same challenges it could not answer. In any case, there would be something odd and deeply incongruous about a movement dedicated to building an ambitious technological future that was stuck in a philosophic past.

Instead, we have to find a new way forward. We have to acknowledge the problems and concerns of the modern world, and we have to find solutions. Not the regressive proposals offered in the 20th century, but ones based on a humanistic standard of value, a belief in human agency, and an understanding of the reality and desirability of progress.

Thanks to Tyler Cowen, Greg Salmieri, Clara Collier, and Michael Goff for comments on a draft of this essay.



Discuss

Long Term Charities: Apply for SFF Funding

January 25, 2022 - 20:00
Published on January 25, 2022 5:00 PM GMT

In the last funding round of the Survival and Flourishing Fund (SFF), I was one of the recommenders whose evaluations helped distribute millions of dollars in charitable grants. I wrote up my experiences here.

Now that applications for the next round are open, I want to strongly encourage more charities whose work plausibly benefits the long term future, especially charities that do not interact much with the rationalist or effective altruist ecosystems and networks, to apply for funding. SFF has no official connection with the effective altruism movement (EA), aside from sometimes recommending grants to support EA-branded charities, and many involved are actively seeking opportunities to support projects not connected with the EA social network.

You’ll need a 501(c) charity in order to apply. If you don’t have a 501(c) but would otherwise consider applying, it’s too late for this round but there will be others, so consider getting a 501(c).

The application is not as quick and easy as it could be (I’m hoping to get that to change) but it’s not bad. Filling out such applications can be daunting, and requires much gumption. But the expected value of doing so, if you have any kind of plausible case, is very high. Your odds are remarkably good. There is a lot of money being distributed here, and a lot of money being distributed elsewhere, and a shortage of known worthy causes to which to distribute the money.

You could even get immediate gratification in the form of a speculation grant. A number of people, including myself and also Scott Aaronson, have been given $200k budgets to give out as we see fit. He’s inviting quick pitches for this. If you want to pitch me, the comments here are the place to do it, and I’ll give a quick estimate of how likely I would be to give the organization a speculation grant if they apply. My bar for saying yes if I don’t already know you will be high, but not infinitely high. Try to do this within the week.

This post is an attempt to help charities decide whether applying is worthwhile. Not every cause will be a good fit.

It also aims to tell them how, in my model, to ensure that if they should be funded, they get funded.

And most importantly, it should help you not stress so much and not spend as much time on the application, so it happens at all.

General disclaimer: This is me talking, and I have zero official capacity, and those who make the decisions would likely disagree with some of this, and some of this is probably lousy advice if it were for some bizarre reason taken as advice, and (to cover every possible legal base) this is not any kind of advice.

Jaan’s Philanthropic Priorities

Jaan Tallinn is currently the primary funder of the S-process. He (by design) doesn’t have any direct say in the allocation, but the process reflects his priorities, which you can read here.

This is the headline information.

The primary purpose of my philanthropy is to reduce existential risks to humanity from advanced technologies, such as AI. I believe that this cause is (as per the ITN framework):

  1. important: existential risks are fundamentally important almost by definition — as far as I know, all other philanthropic causes assume continued existence of humanity as their prerequisite;
  2. tractable: since scientific and technological progress is (for better or worse) concentrated in relatively few institutions, I believe it is more tractable than more “diffuse” problems, such as global warming. furthermore, having worked on this topic since 2009, I feel I have a comparative advantage in identifying productive initiatives in this field;
  3. neglected: despite more than a decade of robust growth, still a disproportionately little amount of resources go to reducing existential risks.

therefore, I’m especially likely to support initiatives that (ideally explicitly, but see below) address existential risks. conversely, I’m likely to pass on all other opportunities — especially popular ones, like supporting education, healthcare, arts, and various social causes. Importantly, this is not a vote against such causes, many of which are very important! It’s just that they can be hard for me to support without reducing my attention on existential risks.

Do not let this information overly discourage you. The S-process does care more about things that are existential risks to humanity, as opposed to things that are not, but that’s because they tend to matter more to our long term future. If we’re all dead we don’t have much of a long term future.

The goal is long term survival and flourishing (hence the name, ‘survival and flourishing fund.’)

Each recommender will have a different model of what is likely to make that happen.

This does mean the fund mostly ignores short term flourishing, except as it influences the long term. There isn’t enough potential marginal flourishing available in the near term to be worth funding for its own sake. Short term survival matters if it is required for long term survival, hence the focus on existential risks.

How and how much do short term survival and flourishing, of various types, and other short term effects, impact the long term? That’s a very interesting question where opinions often differ.

I encourage anyone who is curious to read my previous report. Note that my sense of ‘what matters for the long term’ differs from that of others, and each round will have different recommenders. That’s all the more reason to take a chance on a non-obvious theory of long term impact by applying at all, including applying again after you fail, but it’s also a reason not to count on any particular such theory working.

The Logic Behind Funding

Remember that it only takes one recommender to get funded. It’s about getting someone excited, not getting consensus.

People are looking to get excited. Many of us want to find a unique proposal for improving the long term future that we hadn’t considered before, even if it’s indirect or involves a bunch of f***ing around and finding out.

That opens things up quite a lot. Almost anything that isn’t a standard-charity-action and has a plausible impact argument has some chance. Even without me or my particular thinking (the focus on the Doing of the Thing) involved, SFF has in the past funded charter cities, a parliamentary group for future generations, a learning website for basic knowledge, progress studies and systems for general basic scientific research.

Those don’t obviously impact existential risks, so there’s clearly room for other things.

As a toy model, one can divide long-term effects into two broad categories.

  1. Ability to Do various Things (Capabilities).
  2. Existential risk due to the people Doing all these Things (Risks).

We want to increase those capabilities that tend to lead to life getting better for humans, while decreasing risk.

This is tricky, as many capabilities (most importantly AI capability) by default directly increase risk. Yet one of the Things capability lets you Do is decreasing existential risk. Only sometimes (e.g. ‘AI Safety’) is this explicitly labeled as such. There is disagreement about this, but I strongly believe, and I believe that most (but not all) potential recommenders believe, that without sufficient general Capabilities increases, survival and flourishing will be impossible.

Not all capabilities are created equal.

So the question is, which capabilities are you increasing? Are you differentially increasing Capabilities versus increasing Risk, by making progress in ways that make us safer and more generally capable faster than the ways that introduce, amplify and accelerate the risks?

The central example of this question is an ongoing debate as to whether, in this context, accelerating scientific progress is a good or bad idea, or how much additional differentiation is required before it becomes a clearly good idea. Last round, the debate seemed conclusively resolved in favor of basic research being good without any need to focus on existential risk, and worth funding, with the only question being whether the ability was present to turn money into basic research.

Advancement of thinking is similar. Having more people who better understand the world, or who are better at thinking and figuring things out (in other words, actual education), seems clearly good. The issue is that ‘education’ as a broad field is clearly not neglected, and most funding wouldn’t do much, but targeted efforts, especially software-based ones and those orthogonal to any credentials, are more interesting. If I didn’t think that more learning how to think and more understanding and modeling the world was an important job worth funding, this blog wouldn’t exist.

The other exceptions I found are similar.

Charter cities are potentially another example of low-hanging capabilities fruit. I didn’t fund because the particular effort seemed like catch-up growth rather than an attempt to do something unique, so it fell into the short-term bucket, but a more ambitious project that enabled new and unique effective freedom of action would have gotten me to fund, and one other recommender in the past has gotten to a yes.

It surprised me to find that a parliamentary group got funded, but there’s no question that governmental dysfunction, both in the context of AI risk and in other contexts, is a problem worth addressing. The problem is that proposals tend to be unconvincing and adversarial, both of which are hard to get past in a context like SFF. Game theory and decision theory are places where improvements are clearly valued and proposals often funded, so this might not be that big a stretch.

The most obvious and important exception to capabilities defaulting to good is AI. In the realm of AI, capabilities advancement by default creates risk. Existential risks from AI are the most important cause area, but also the one where one must use the most caution, because default actions in this space tend to effectively be ‘increase capability’ and end up increasing and accelerating risk. Thus, anything one does surrounding AI has to consider its impact on the timeline for AI capabilities development, which is much easier to accelerate than it is to slow down.

There are also other places (such as weaponry) where capabilities default to not being a good idea, but collectively they are the exception.

Meta things can definitely get funded.

This can include regrants to places too local and/or small for SFF. The Long-Term Future Fund got a large grant last time, as did the EA Infrastructure Fund. Some explicit focus on x-risk definitely helps your cause, but isn’t obviously required. As a forward-looking example, if Scott Alexander applied for more money for ACX Grants: I don’t agree with all of his choices, but overall I was impressed, and it would cross my ‘worth funding’ threshold.

This can also extend to attempts to build up the intellectual and community infrastructures that are seen as enabling of existential-risk prevention or are otherwise seen as valuable. LessWrong has gotten support in the past, and got support in the last round as the more ambitious Lightcone Infrastructure, which gives logistical support and coordination to those working on existential risk rather than working on it directly itself.

List of Things That Definitely Might Get Funded, If Your Proposal Is Good Enough

There’s never any guarantees here, but a history of a similar proposal being accepted is good, as is knowing some of the thinking taking place by at least one recommender. I’ll start with the obvious x-risks and go from there, and give my quick view on each.

  1. Existential Risk: AI

This is the obvious and most important cause. The key is making people confident you are Helping rather than Not Helping, not that you’re being the most efficient possible. It’s easy to end up doing capabilities research, or to waste time tackling problems too small to matter.

  2. Existential Risk: Biosecurity and Pandemics

There hasn’t been funding for these in past rounds, but I’m confident that’s because there haven’t been good proposals, and in the last round there were no proposals at all. If you have a good proposal here, people will be inclined to listen. Note that many here focus on existential risks and discount lesser risks, but I expect that to loosen somewhat in light of recent events.

In particular, I’d like to see proposals to stop or slow down Gain of Function research, given what we have recently learned about it.

  3. Existential Risk: Nuclear War

The problem with the biosecurity proposals has been their non-existence, whereas the problem with the nuclear war proposals is that they’ve mostly been terrible. ALLFED’s proposal to ensure survival in the aftermath of a potential war was the clear exception, along with things like dealing with EMP effects. Everything else was essentially ‘raising awareness’ and there’s little appetite for that. If you can find a way to actually help with this, there’s definitely people interested, but everyone already knows to try and avoid launching nukes.

  4. Existential Risk: Asteroid Strikes, Rogue Comets, Solar Flares, Et Al

Consider this a grab bag for ‘things that have some low background probability of happening, that we’re in no position to prevent, and would be quite bad.’ A very low probability of something sufficiently bad is still worth mitigating or preventing, if you can actually do that and the result checks out when one Shuts Up and Multiplies. Do Look Up, but the ability to potentially change the outcome is the reason to fund a looking-up program. A sufficiently good idea here would be most welcome, though it’s unclear how high the bar is. Same for other risks I’m not thinking about that are unlikely but real, even if they don’t fall directly into this category.

  5. Existential Risk: Climate Change and Geoengineering

People who take AI Risk seriously tend to be highly dismissive of focusing on climate change. There’s no doubt it’s happening or that we’d prefer to prevent or mitigate it, but it’s the opposite of neglected. Even if you buy for the sake of argument the full-blown ‘we’re literally all going to die’ story, there’s tons of attention and money already going into it. Yet most of it doesn’t seem all that concerned with being efficient or effective at actually solving the problem, and what seem to me like the obvious things-you-would-do-if-you-wanted-to-actually-solve-this mostly aren’t happening. Also I’ve come around to ‘this is sufficiently damaging to everyone’s psyches and sense of a future that it’s worth solving just to deal with that alone.’ So there’s potential room for carefully targeted ideas that are orders of magnitude more actually likely to do something useful but are having trouble getting other support, if you’ve got one of those, and I’m guessing they exist.

  6. Basic Research, especially Science and Mathematics, also Progress Studies

There’s widespread belief among the reference class of potential recommenders that our current academic and research institutions are badly broken, and we need alternatives that can replace them or at least complement them. Advancements in Mathematics seem likely to differentially help AI Safety over AI capability, because they favor things that it is easier to understand and to prove things about over things where that is harder. If the mathematics in question is targeted to be relevant, that can be a big game in expectation.

For science, getting basic science back on track, encouraging promising talent to work on it, and ensuring they have the necessary resources seems clearly good if it can scale properly.

Scaling up (or continuing) existing successful research efforts is one form of low-hanging fruit here, if you have a track record and the thing capable of scaling.

The more you can make the case that the things you’re trying to figure out or enable are helpful to people trying to think like people and understand how things work, and give person-thinking an advantage against AI-machine-learning-pseudo-thinking, the better.

I’d also note that there’s a decent amount of appetite for a broadly construed version of what Tyler Cowen calls Progress Studies. Knowing how scientific and other progress happens is valuable.

  7. Research Tools and Educational Tools

A number of things in this class have been funded, with the goal of making work acquiring new knowledge more efficient. Our current methods are definitely highly inefficient throughout, and there’s a lot of room for the right not-that-hard-to-make software to make a big difference, potentially involving AI the way Ought is looking to use GPT. There’s potentially huge leverage available, and proven interest.

I continue to worry about the danger of such things differentially enabling and advantaging machine-learning-style-pseudo-thinking in various ways, so if I were evaluating such a project details of that would be important. See my previous post for more details of these questions in the context of Ought.

  8. New Evaluation Methods: Scientific, Technical and Beyond

Current evaluation methods are haphazard and quite poor across the board. If one could figure out something better, that would be very interesting.

Technical and scientific evaluations were the motivating example, but evaluations of all sorts of other things also seem very useful.

  9. Epistemic (and Strategic) Mapping

People believe a lot of strange things across the board. Those who are focusing on rationalist or altruist spaces, and/or on existential risks are not only no exception, they often think and believe things that are even weirder than usual. One problem is that there’s even more of a clash than usual between what feels socially positive, natural, sane or superficially cooperative or enabling of normal navigation of life, and the actual problem space that determines whether goals get accomplished and the physical world ends up in a configuration we prefer. When the things we’re thinking about or working towards seem disconnected from ordinary life, what we believe intellectually and what we believe in our gut can easily diverge, leading to unproductive and contradictory decisions. The problems are often sufficiently abstract that there’s plenty of room to talk oneself into whatever conclusions, timelines and decisions one would unconsciously like to talk oneself into, and to adjust justifications when one’s stated justifications get falsified rather than updating the conclusion. Meanwhile there’s a lot of money running around (often, as I noted in my previous post, Too Much Money) and that too is massively distortionary.

That’s a mouthful of words that add up to a pressing need to discover what the people thinking about the problems that matter actually believe and why, how and to what extent they believe it, and to get that into common knowledge.

Similar work on more practical questions and more ordinary people would also be valuable. Our current methods for knowing what people believe are rather terrible; none of (elites, experts, regular people) have any clue what the other two think, or even really what their own group thinks; same with ingroup and outgroup, and so on. If there’s a path to working on that, it would be great.

  10. Real World Experiments in Doing Thing

If there’s a lack of worthwhile things to do with the resources available, then exploring to find potential new worthwhile things and how to execute on them, or learning more about how to execute in general, is high value. That seems to be the world we live in. There’s a remarkable lack of knowledge and skill about how to go from ‘we should do X’ to X being done, even for relatively simple values of X, and it is entirely missing for many potential X that could, if successful, scale up.

Thus, if there’s a potentially good idea and the will to attempt to execute on that idea, even if the direct impact of that idea is short-term, the impact of learning how to execute ideas translates to the long term. The more one can gather data on the effort and commit to reporting back on what happened, the more useful it can be.

The more such an effort can scale or generalize, the more interesting it is. This can be a test of social technology, or of engineering, and ideally there’s a big payoff in the thing itself. As an example, investigating more efficient construction methods, both physical and potentially social (centrally of housing but also other things, including functional and tiny houses), and how to do such things cheaply, seems increasingly high value to me while having strong concreteness; box agriculture is another possibility. The problem doesn’t have to be that central, either; I funded Euro Biostasis for being essentially in this class.

As an example of another type of such things, I was highly disappointed by the lack of effort by people in my social circles to take non-super-local action regarding Covid-19 over the past two years, as it provided an excellent laboratory for this, both for learning and training and assembling teams and relationships and for demonstrating ability, as well as being a unique opportunity for impact. That window is likely closing rapidly, but other similar ones are doubtless opening.

  11. Community Support

Obviously this can’t simply be support for any community. So this is where outsiders are going to be, well, on the outs. This has to be community support for the communities that in turn support the causes that excite people. That means those who have a lot of focus on long term concerns including existential risk, and on the things that will enable such focus and for such focus to succeed. This includes, but is not limited to, the Effective Altruist and Rationalist movements and communities.

There’s a broad range of degrees of willingness to engage in this sort of activity, and additionally a broad range of disagreement over which types of thing are worth doing. There’s a bunch of different forms of potential support.

I for one am, as I discussed in the previous post, strongly opposed to efforts that are focused on helping these communities and movements acquire more power and money.

I am more optimistic about providing logistical support for individual members, either small support in their formative years to get on their feet and pursue high-upside options without locally binding constraints, or richer logistical support for those doing the impactful work. Potential conflicts of interest, the danger of nepotism and people potentially getting involved for the wrong reasons make this tricky.

I am also optimistic about richer logistical support for those supporting the communities, and for the communities themselves, via the provision of public goods or paying costs of coordination, but would warn it has a poor track record getting funded so far. And of course all the potential conflicts of interest are still present, and I am no doubt biased.

But I believe that physical location is a big game, in terms of helping get the right people to the right places, with co-location with other right people, including both co-location of housing and also having places to work, play, gather and naturally interact.

  12. Other High Leverage Stuff

This is not meant to be a complete list even of the things that are well-positioned, and it’s definitely not a complete list of the things that are worth doing or worth funding in the world. If there’s an approach with sufficiently high expected impact and leverage, people stand ready to listen, and even if you don’t get major funding it will at least make smart people think about your approach, maybe come up with some ideas or even do something themselves, and hopefully at least give you some useful feedback. There’s a broad sense that bold new different things would be good.

If you’re considering something political, it will need to be a ‘pull the rope sideways’ approach to pursue a policy, or help for those pursuing influence on policy in a long term sphere. Helping the ingroup best the outgroup is right out, as is either side of every hot-button political issue I can think of right now.

Help for the developing world is a long shot, since that’s explicitly not what the fund is looking to do and is not neglected by similar funding sources elsewhere, but it is interesting if a case can be made that it differentially helps with the causes listed in this section, as discussed in the logic section above.

My Particular Priorities

A major part of my model is that we don’t have enough People Doing Thing, for meaningful values of any of People, Doing and Thing. If we had more People Doing Thing, that would be great, and we need a cultural shift towards the belief that Person who thinks there’s an important Thing to Do should by default go and Do that Thing. That’s not a central viewpoint, but similar views are held by several others who are potential recommenders, so there’s a good chance someone with at least some of this view is in the round.

I’m a big fan of taking people and organizations capable of Doing real actual Thing and interested in Doing real actual Thing, and enabling them to Do real actual Thing and freeing them from constraints that would prevent this.

I’d consider that area my ‘comparative advantage’ in terms of what I’m likely to give a speculation grant to versus others who give such grants. It will still take a strong case to get me interested, but it can be done and has been done. Unfortunately, my bar for a speculation grant has to be higher than for funding in the round, if I don’t expect others to agree, due to the way it is structured.

I’m very interested in existential risk from AI, but so are half the others with speculation grants; one note is that I’m relatively excited by non-ML approaches compared to most others.

I’m interested in community support that involves physical locations and coordination, but again I am skeptical of the ability to get funding, and thus it would be tough to justify the grant (since it will likely get evaluated poorly).

SFF Application

Let’s take a look at the application including its long form to see what’s involved in trying. A lot of it is basic information that is presumably already at your fingertips (for example, the organization’s name) so I’ll skip that stuff.

One piece of advice up front is to think about the links you’re providing, and what recommenders are likely to see when they’re looking at you and asking what you’re about. Ensure they can get the information you need them to get.

First there’s the ‘non-strategic’ stuff that doesn’t involve making any decisions.

There’s the whole thing about the receiving charity and showing that you can logistically accept the funds. I don’t know how annoying that is on the charity’s end, but it is what it is. Make sure it’s handled.

They ask for compensation information. It is traditional for funders to want to keep salaries down and be suspicious of higher ones, but I no longer believe this is the case. If something is worth funding, it’s worth paying the people involved enough to have a decent standard of living, and to not ask them to forfeit too much money relative to their opportunity costs. There’s obviously a limit, but on the margin don’t worry about this hurting you.

They ask for an org chart of responsibility. This seemed to mostly be pro forma so don’t sweat the details, just be accurate. Mostly it’s simply ‘who is in charge here’ and ‘who else is here at all.’

Then there’s the part where you need to make your case.

Most of the charities that I talked to were deeply confused about what to say here to make their case or what SFF wanted to see, and those are the ones that did apply anyway.

Presumably if you wouldn’t by default apply, you’re even more confused.

Part of this is that SFF has different recommenders each time, who interact differently, and thus is highly unpredictable.

Part of it is that what people say they care about and reward is often not what is actually rewarded.

There is a basic strategic decision to make. You need to decide on two request numbers, the Base Request and the Maximum Request, on a plan for spending both amounts, and on your plan for Impact.

On top of that, they then ask for an unspecified but potentially huge amount of detail that you could reasonably not have ever thought about carefully or have had any good reason to think about, and which you’re worried could make or break your application. I got stressed purely thinking about someone else filling out these parts of the application.

They also ask for Spending Track Record, but that’s literally an approximation of how much you’ve spent this past year, so don’t sweat that, just make it approximately accurate.

They ask about ‘conspicuously absent funding sources.’ If your response to that is ‘what are these conspicuous funding sources, huh?’ then it seems fine to note that you’re not familiar with the funding ecosystems this likely refers to, and that you’d be happy to get advice on other good matches for what you’re doing.

I gave some ‘this is what seems like it works’ advice back in my previous post under ‘incentives of the S-process.’ All of that mostly still applies and here I will attempt to provide more clarity and detail where it seems useful.

It is all a holistic package, and should be thought about that way. You’re offering two deals for sale, the standard and the deluxe package. You want to charge cheerful prices for both, offer a clear value package for both, and make your simple case for impact in a way that will resonate. Ideally (in terms of getting approved) it also fits neatly into an existing approved box, but ideally (in terms of actually accomplishing something) it doesn’t and instead has a clear case for why we need an additional or bigger box.

The Impact Plan

On reflection, the best approach is indeed the natural one, which is to start with your impact plan and what you intend to do, and then choose the size of your request afterwards based on that.

These are plans for impact, more than they are plans for spending. The spending follows from the plan for impact. The spending part needs to be sane, and I talk about it under The Request in a bit, but the key is what you plan to accomplish, and why accomplishing that thing will advance survival and/or flourishing.

Here we need (if possible) to form two plans, the Basic Plan and the Deluxe Plan. If you count the general Organization Plan that’s independent of SFF, that’s three plans.

The Basic Plan, with its corresponding Base Request, is the reasonable we-can-do-the-thing-we-are-here-to-do plan. In some cases this naturally ‘wants’ to be a large plan, in which case go for it, up to about the size of requests in previous rounds (so probably not more than $1.5mm or so).

If you go to the high end of that, your alternate plans for lesser amounts start to matter more, but even then don’t sweat the details here. SFF notes things that could easily drive you crazy, like ‘it is often helpful if you would note what you would do with 25%, 50% or 75% of your request.’ I do think you should provide the one-sentence version of this information, but unless it’s already written I wouldn’t do much more than that. Same goes in many other places. Those considering giving you half your request will want to know if this means you do basically the same thing but less of it, or something else entirely. Often the answer is ‘well then we’d have to spend more time fundraising’ and that seems fine.

Anyway, back to the plan. You should propose that you’ll Do a Thing of some kind, with a clear case for why the Thing will have Impact.

The above discussions should ideally have given you an idea of the kinds of Impacts that are likely to count and go over well, and which aspects you’ll want to highlight. As usual with such things, it’s probably best to choose the one story about Impact that you think is your best and clearly focus on it.

Plans that aren’t sure to succeed are totally fine. If the expected value play is to aim big, then aim big, the process on average is likely to reward this rather than punish it. At least one charity I talked to chose to talk about an unexciting but ‘shovel-ready’ thing they could reliably pull off rather than the more exciting but less ready things that I would have actually cared about them doing. Don’t make that mistake.

If you fit into one of the standard Impact boxes, great. If you don’t, it gets trickier. Again, the focus is on providing the fuel for one person to get excited, so don’t hedge your bets or try something convoluted to act like you fit into a box. Instead, if the actual reason isn’t a short-term thing, give the actual reason why one would care enough to do this, even if it sounds weird, and hope someone else also gets it. You’d be surprised, these are smart well-meaning people, they can get it sometimes.

If the actual reason is a short term thing, for example you have a charity to help with third-world poverty or disease, then you’ll have to set aside your core motivation and ask if the thing you’re doing is worth doing anyway.

Sometimes the answer is no, but often the answer is a strong and honest hell yes.

Doing a Thing is typically Insanely Great, and tends to have all sorts of good knock-on effects. Improving things often leads to more improvements and the collection of compound interest (although sometimes the surplus gets eaten by rent seekers and status expectation adjustments and hedonic treadmills, so it’s hard to know). It often leads to skilling up, and capacity to Do more and better Things, and learning about which Things can be Done and in what ways.

Is it likely to be the first best thing if you exclude your primary motivation? Well, no. Of course not. The good news is, that’s nothing like the bar you need to clear. The best thing to do, even for relaxed values of ‘best,’ is orders of magnitude better than the threshold required to make something worth doing. ‘Wasting’ most of your efforts is totally fine, as long as you point to the part that isn’t wasted. Most of my day is kind of wasted most of the time, that’s how it goes. Similarly, I still endorse that the world’s best charity remains Amazon.com, and it’s not because that was what Bezos set out to do.

The Impact Evidence

You’re asked to provide evidence that you’ve had impact in the past.

Mostly my model says ‘don’t sweat it,’ but this is one place you should consider sweating a bit, if it would improve your answer, because people do care about this.

You’ll want to put your best and most legible foot forward, in ways recommenders are likely to care about. What have you done that matters? Either in terms of proving you can be impactful going forward, or that you’ve been impactful (to the far future) in the past.

Emphasis on the long-term nature of this argument. If you did something short-term that will carry over to the long-term, you’ll want to make the case for why that’s true. Otherwise, it will lead to an ‘oh, this isn’t what we are looking for’ reaction.

The argument you make really does matter here, but remember that what matters is that one person gets excited, not that the argument works on everyone.

The more concrete and direct the impact, the better. The more it can be objective and verified, the better. The more it matches what you want to do next, the better.

Tell specific stories, specific wins. Individual people who you’ve impacted, and who therefore went on to accomplish something, are good. So are many other things.

Make sure that within this frame:

  1. If they’re reading your published papers, they read your best paper. Same with blog posts or books or any other media.
  2. If you’ve built software or invented concepts, show your best.
  3. If you’ve helped people along their path, they know the best wins.
  4. If you’ve held lots of conferences, they know the most impressive relationships, connections, ideas and such that resulted.
  5. And so on.

Also notice that you’re trying to show two things.

  1. What you do matters.
  2. You can execute (and ideally, you can scale) what you intend to do next.

This is a place to show both of these things.

The Request

There’s no point in asking for less than the cheerful price to execute your plan. If it’s ‘this is exactly enough to execute if we all pinch our pennies’ then there are times and places for that, but this is not one of them. If it’s worth doing, it’s worth doing right, with slack in the budget and less worry throughout the day about economizing when there’s important work to be done. The limiting factors on action are talent and ideas, so make the most of yours on both counts.

Also, the process rewards bigger requests over smaller requests, in terms of what you’re likely to end up getting. The process effectively anchors the final grant proportionally to the size of the request, because the curve is shaped a lot by how many dollars provide any value at all, and also due to basic anchoring.

Do keep the following words in mind, from the application:

There is no need to be particularly detailed in the upcoming spending and funding questions. If you have prepared documents for other uses, please feel free to include them as your response; it’s likely they’ll suffice for our purposes.

They really don’t want lots of detail here, in my experience. They want to have a good enough sense to make funding decisions. If they need the details, which is rare, they can follow-up and ask for them later.

Therefore, basic advice:

  1. Ask as a base request for the amount to comfortably and properly continue your work, and execute on the core plan you’ve selected.
  2. Ask for a generous maximum amount that lets you not worry about money for a while, but that wouldn’t be a burden or put you in danger of having Too Much Money, or put you under felt pressure to spend too much too fast.
  3. Make sure the basic-request plan matches the basic-request quantity, but with reasonable slack involved.
  4. Don’t sweat the budget details, this can be very general, and as long as you’re making sane decisions it’s not going to matter. All it’s effectively meant to do, other than a sanity check, is give a general sense of what the expensive parts are versus the cheap parts.
  5. Remember that even the basic plan is allowed to have flexibility in it and not be over specified.
  6. Remember that a common outcome is to get a non-trivial and useful amount of money that is importantly less than your base request, and another common outcome is to get importantly more.
  7. The right maximum-amount plan does not involve specific plans for spending all the funds within the first two years.
  8. Do make sure your requests will pass (generous) sanity checks.
  9. When in doubt, be asking for general support, it’s what mostly happens anyway.
  10. When in doubt, give less detail now, give it later if someone asks.

Mostly I believe this is true of life in general. People don’t ask for enough money, and are far too afraid they’re asking for too much or not offering enough in return, and they start giving power what they think it wants and offering lots of detailed plans and concessions and promises even unprompted.

Recently, I got asked to do a consulting job that wasn’t world-improving or resume-enhancing, so I quoted my zero-sum-game price, which is high, and they didn’t hire me, and I’m assuming that’s at least partly due to the price. If that doesn’t happen reasonably often when people ask about such tasks, you’re not asking for enough.

You can of course still overdo it, especially with the base request. If you ask for a number that’s implausibly high, you’ll likely be docked points and the anchor won’t stick. But at other times, people are looking for a way to give you a bunch of funding.

Make sure the base request amount and the plan you’re proposing to execute match up and live in the same world.

Invitation

Like Scott Aaronson, I’m inviting you to check with me if you’re considering applying, to see how interested I’d be or to get my guess as to your chances. One way is to leave a comment here asking, including links to the necessary info. I’m also happy to give specific additional thoughts on details.

If there’s a substantial chance on first pass I’d give you a speculation grant once you’ve applied, I’ll let you know, along with other quick thoughts.

If you need a soft commit before you can justify applying, I’ll see if I can give you one at the cost of being less likely to say yes (since I’d have to do it in advance).

Alas, there is a limit to how much time I can devote to this, especially given I will not be a recommender in this round. If I get flooded with questions and requests, I may not be able to give each my proper attention and would need to do triage, and I apologize for that in advance.



Discuss

What are the counterarguments to a Faustian Vaccine Hypothesis? ($2k charity bounty)

January 25, 2022 - 13:33
Published on January 25, 2022 10:33 AM GMT

There’s a troubling view I’ve run into, which I’ve decided to christen the Faustian Vaccine Hypothesis. It claims that, while they definitely help short-term, the protective effect of vaccines targeting Covid’s spike protein will ultimately become negative.

I’ve looked at some of the data supporting this claim – in particular, the strong version, which claims a significant number of people currently experience net negative vaccine protection – and been largely unimpressed. However, proponents’ arguments are mostly not statistical reasoning about the present, but biological reasoning about the future. And it turns out I know exactly enough about biology to be convinced a Faustian effect is possible, while being unable to evaluate how (hopefully, im)plausible it is. This worries me, and I don’t like being worried.

I haven’t found any relevant studies or good counterarguments online – most scientists focus on the present instead of the future, and most mainstream debunkers target more common and straightforward heresies – so I’m asking LessWrong for help. And in recognition of the fact that evaluating vague prophecies is much harder and more tedious than making them, I’m donating $2k to charity: if you give me an answer or source I like, you get to choose which charity. (And if multiple people give answers/sources I like, I’ll allocate in proportion to how much I like them.)

The argument(s) for, summarized as best I understand them*:

  • Covid vaccines work. (uncontroversial)
  • Covid vaccines reduce in effectiveness over time. (also uncontroversial)
  • There is no theoretical reason why vaccine effectiveness can’t be negative. (c.f. the infamous Dengue vaccine case)
  • There is a mechanism by which positive initial effect and negative long-term effect can coexist: vaccination gives you an abundance of useful but temporary antibodies, while also teaching your body to have a specific and suboptimal immune response, which subsequent infections will have a harder time training it out of. (Original Antigenic Sin, a real concept with a wikipedia page and everything)
  • Obvious counterargument: other vaccines didn’t have this problem, why expect it here? Countercounterargument: other vaccines mimic a real infection by providing an entire specimen to the immune system and allowing it to create a variety of responses; spike-targeting vaccines produce a single, uniform form of resistance which the pathogen can more easily evolve to evade or – in the worst case – take advantage of. (Antibody Dependent Enhancement, which is also definitely a real thing**)

(Also, bird’s-eye-view caveman argument: vaccines used to protect against transmission and infection and worst-case outcomes, and then they protected against infection and worst-case outcomes, and now they mostly just protect against worst-case outcomes. And while that’s more a fact about variants than about vaccines, there aren’t going to stop being new variants, so extrapolating that trend . . . what happens to effectiveness of our current vaccines by the time we reach the end of the Greek alphabet?)

Important details:

  • I reiterate: proponents are not saying covid vaccines don’t work. They are saying strains have/will emerge(d) that did/will cause covid vaccines to reach negative effectiveness at some point significantly after vaccination.
  • Proponents aren’t against vaccines in general: opinions on non-mRNA non-spike-protein-targeting vaccines range from “overrated but still a good idea” to “mankind’s highest achievement, western nations need more of these for Covid-19 immediately”.
  • Even if they were right, that wouldn’t necessarily mean you shouldn’t get vaccinated. The longer you spend without catching Covid-19, the better specific medical tech will be when you do (also, projecting the trend from the original strain to Omicron forwards, the dominant strain will probably be milder a year from now). However, if the rationale among smart people was always “yeah that could happen but even if it did it’d be worth it to get to the era of Paxlovid with functioning lungs” . . . I’d kind of like to know that?

There’s no strict time limit, but I have my next Pfizer dose scheduled for the start of February, and I’d like this to be resolved - in my own mind, at least - before then, so I can take the shot with confidence (or, possibly, cancel my appointment and then find some way to source Sputnik or Sinovac).

*I’d link to the proponents’ own words, but all the ones I’ve found communicate through longwinded blogposts that drag in arguments I consider irrelevant and expect you to have read all their other longwinded blogposts. (Is this how LW looks to outsiders?)

**The Wikipedia page for ADE has a section linking to “COVID-19_vaccine_misinformation_and_hesitancy#Antibody-dependent_enhancement”, which reassures skeptical readers that while this may be happening with strains from Delta onwards, it definitely didn’t happen with the original strain. I wasn’t aware that words could be so uncalming.



Discuss

Lives of the Cambridge polymath geniuses

January 25, 2022 - 07:45
Published on January 25, 2022 4:45 AM GMT

Delicious Facts

Russell

  • Grandfather was British prime minister
  • Told Virginia Woolf his devotion to serious intellectual work came to an end when “my passions got hold of me”
  • Met Lenin on a visit to Russia. This visit turned him against the Russian Revolution

Wittgenstein

  • His Austrian family was one of the richest in the world
  • Was at same (obscure) school at same time as Hitler
  • His family paid off the Nazis using their fortune to be classified as “mixed” rather than “full Jews” (and avoided the Holocaust)
  • Three of his four brothers died by separate acts of suicide
  • Tried to move to the Soviet Union to work as a laborer

Haldane

  • Aristocratic family, father was a prominent biologist
  • Introduced the primordial soup theory of the origin of life
  • For a period, he was a Stalinist and defended Lysenkoism on BBC radio
  • Moved to India late in life and renounced British citizenship

Needham

  • As a professor of biochemistry, at age 37 he began an improbable pivot into Sinology when he fell in love with his Chinese grad student and started learning Chinese
  • In China he befriended Zhou Enlai and met Mao
  • He was part of a commission investigating whether the US had used biological weapons in the Korean War and was fooled into believing the US had

Turing

  • Would occasionally run 40 miles from Bletchley to London for meetings and tried out for the British Olympic team
  • Apparently he took fortune-telling seriously
Biographies

Also see:

  • Biopic film on Turing from the BBC
  • Wittgenstein movie, which includes Russell and Keynes as characters:
  • Logicomix: a graphic novel featuring Russell, Turing, and Wittgenstein (coauthored by a computer scientist)
Why I found these figures interesting
  • They made exceptional and creative intellectual contributions (helping to found new fields). Turing’s contributions seem most important.
  • They had dramatic, full-bodied involvement in wars
  • They spent significant periods working outside academia
  • For their time, they had highly unconventional romantic lives and were eccentric in other ways
  • Russell and Haldane were self-described rationalists
Interactions

Russell acted as Wittgenstein’s PhD supervisor but felt Wittgenstein surpassed him already as a student. Keynes invited Wittgenstein to join the Apostles and helped him get British citizenship during WW2. Turing attended Wittgenstein’s lectures on the philosophy of mathematics. Needham succeeded Haldane as Reader in biochemistry at Cambridge.

 

Russell on Keynes:

Keynes's intellect was the sharpest and clearest that I have ever known. When I argued with him, I felt that I took my life in my hands, and I seldom emerged without feeling something of a fool.

Keynes on Russell and Wittgenstein:

The first impression conveyed by the work of Russell was that the field of formal logic was enormously extended. The gradual perfection of the formal treatment at the hands of himself, of Wittgenstein and of Ramsey had been, however, gradually to empty it of content and to reduce it more and more to mere dry bones, until finally it seemed to exclude not only all experience, but most of the principles, usually reckoned logical, of reasonable thought. Wittgenstein’s solution was to regard everything else as a sort of inspired nonsense, having great value indeed for the individual, but incapable of being exactly discussed.

Wittgenstein on Russell:

Russell's books should be bound in two colours…those dealing with mathematical logic in red – and all students of philosophy should read them; those dealing with ethics and politics in blue – and no one should be allowed to read them.

Russell wrote a long essay (“Icarus or The Future of Science”) responding to Haldane’s seminal book “Daedalus”.

Haldane was the reviewer for Turing’s famous paper on morphogenesis. The review is prickly:

Before the paper is accepted, I consider that the whole mathematical part should be re-written. In the first place, some passages assuming ignorance in readers might be omitted without much loss. Secondly, much of the biology, e.g. pp 56-58, can be found in elementary textbooks, and often stated more accurately

1. Does it contain contributions to knowledge of sufficient scientific interest for the space required? Yes

2. Are any portions of the paper, or any illustrations, redundant? Yes

3. Should the paper be published by the Society? Yes, subject to drastic emendations.

7. Comments or criticisms which might enable the author to improve or correct his statement.

I regret that my report, in the absence of figures or tables, must be insecurely grounded. I should be glad to discuss the paper with the author, but may be leaving for India shortly. I regard the central idea as being sufficiently important to warrant publication. I am equally clear that the paper should not be published as it stands.



Discuss

Wireless Outlet Switching

January 25, 2022 - 05:50
Published on January 25, 2022 2:50 AM GMT

We have lamps in our bedroom, and we were thinking it might be nice to wire up some switchable outlets. Ideally with a three-way switch, one at the doorway and one at our bed. Dimming would be nice too, though not a dealbreaker. This would be quite a lot of work, though, running new wires. And what if we wanted to move our bed at some point?

The modern solution would be Internet of Things smart outlets, as part of a smart home system. While it would be nice to be able to turn the lights on and off vocally, I don't want to deal with Wi-Fi issues and integration hiccups: I just want light switches.

It turns out this product also exists: switches that talk wirelessly with a box you put between your lamp and the wall. The switches can be battery powered, and the batteries last a long time because you aren't doing that much switching. I got two of these GE "mySelectSmart" ones (~$25/each). I was a little bit worried about whether I would be able to control both sockets with both switches, but I followed the pairing process for each pair and it all works.

They do what they're supposed to, and I don't have to think about them!

One downside of this setup is that the socket is always going to draw a little bit of power, even when off, because it needs to be listening for the wireless signal that tells it to turn on. But how big of an issue is this? I measured, and it draws 0.3W when off. Over a year that's ~2.6 kWh, or $0.55. One of these would have to last ~45 years before the electricity cost caught up with the $25 initial purchase price. And installing hardwired switchable outlets wouldn't pay for itself, even if we kept the configuration static for a century.
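For concreteness, here's a minimal sketch of that arithmetic in Python. The standby draw and unit price come from the paragraph above; the electricity rate of roughly $0.21/kWh is an assumption I back out from the ~$0.55/year figure, so adjust it for your own rates.

```python
# Standby cost of one wireless outlet receiver, using the figures above.
STANDBY_WATTS = 0.3    # measured draw while the outlet is "off"
PRICE_PER_KWH = 0.21   # assumed electricity price in $/kWh (implied by ~$0.55/year)
UNIT_COST = 25.0       # purchase price of one switch + outlet pair

kwh_per_year = STANDBY_WATTS * 24 * 365 / 1000        # ~2.6 kWh
dollars_per_year = kwh_per_year * PRICE_PER_KWH       # ~$0.55
years_to_match_price = UNIT_COST / dollars_per_year   # ~45 years

print(f"{kwh_per_year:.1f} kWh/year, ${dollars_per_year:.2f}/year, "
      f"~{years_to_match_price:.0f} years to equal the purchase price")
```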

Comment via: facebook



Discuss

Iterating fast: voice dictation as a form of babble

January 25, 2022 - 03:21
Published on January 25, 2022 12:21 AM GMT

Voice input has dramatically increased my writing output as well as the quality of my ideas. But I think that it's not as easy as turning on voice dictation and writing an essay from start to finish. There is a particular way or method of doing voice input that I have found to be far more effective, which is what I will describe in this post.

The pitfalls of dictation

When you think of voice input, you might think of dictation. It's a form of transcription: an attempt to accurately piece together the words being said and faithfully reconstruct them into text.

Courts and legal depositions benefit from accurate transcription. And in the past, if you were exceptionally rich and exceptionally busy, you might dictate your letter to a scribe who would jot it down for you. These are the use cases that voice dictation has been designed to replace.

A key feature of this mode is that dictation is a one-way process where the goal is accuracy. Do-overs and backtracking aren't really possible: the over-arching goal is faithfulness to what was said. This is suited for letters and messages, but I think this is not all that voice input can do.

Consider sketching on a page, where various ideas or points are connected by drawing arrows or shapes or groupings. When sketching, you aren't quite sure how to structure your thoughts before you start. And sketching is, in some way, the act of figuring this out. There is something highly nonlinear about this process where your thoughts backtrack to previous ideas and test the strength of old conclusions.

Good conversations can be something like this, with both participants in more of a 'dance' than a single-pointed journey towards a predetermined conclusion. And I think that voice dictation can, with some effort and learning, become something like this too.

An alternate use of voice input: babble

The primary thing to know about speaking is that we can speak very fast, and in comparison we type rather slowly. This might seem like no big deal: it's always tempting to think that a sped-up version of a process is only a quantitative, not qualitative, difference. But, as it has been said, quantity has a quality all of its own. Fast input can yield surprisingly large rewards.

Speaking allows words to be put down cheaply, almost at the speed of thought. In that way, voice dictation allows written text to become even closer to pure "thought-stuff", something to be created, manipulated, and discarded at will. You can speak an entire paragraph that can then be deleted with no sense of loss. This is fundamentally less true for typed text, where each sentence is more substantially a time investment.

Babble is a way to use voice input which takes this crucial difference into account. When I babble, I open a text document, start voice dictation, and then I get up from my chair and start speaking at length, for a minute or two. You'll be surprised with what you can say in two minutes, and how long the time period can seem in the moment. The longer you can speak without running out of ideas, the better. 

Note that there is no penalty for bad ideas, since anything can be deleted. In this way, babble is fundamentally a mode of exploration. The aim of babbling is to generate material to be edited later on. Speaking without planning means you will inevitably make mistakes, but ignore the mistakes when you do make them: you can just say it again instead of leaving babble mode to start editing.

Typing is slow, but babble is fast. Whereas the time commitment of typing means that sentences have to be considered well in advance, babble allows you to circumvent that entirely. There's no sense being frozen at the computer trying to figure out how to phrase something: you can just say every permutation of the thought. At least one of these is the correct one, and it's easy to delete the rest.

Babbling means being seldom blocked by the blank page. Even if you have no good ideas, you just say the bad ideas instead. I have found that to be an effective way to get past the intimidating quality of a blank page.

When you come out of babble mode, you might be dismayed to see pages upon pages of half-garbage. This is not as bad as it seems: babble entails editing and wrangling the text to find the core thread of ideas which you are expressing. But it also entails making sure you have plenty of material to do this editing with. It's counterproductive to have standards so high that thoughts are rejected before they are even written down into text form, where they could have been more easily manipulated and improved. If the bar is too high, nothing can be said at all. The point of babble is to avoid this, and to set your quality bar far lower, in the personal and private interaction between you and your computer.

There's always a temptation to edit, but to babble means to lean heavily on speaking first. The right mindset is something like "this needs editing, but not yet". Some of the most productive sessions are walking around in my room, verbally exploring different descriptions of an idea, trying to coax myself to speak for as long as possible. When I sit back down, I often find that I have more ideas than I thought, and that some of my better ideas were being blocked just because I was inaccurately assessing their quality.

Notes

Your computer can probably do voice dictation, and if not, it's pretty easy to install a program that supports this. Android phones are really good at transcription, which can be started by opening up a new note and pressing the microphone button on the keyboard. On desktop web browsers, I can recommend the voice dictation feature built in to Google Docs.

It costs nothing monetarily, and this way of babbling has made me a fair amount more productive when writing. I would encourage you to not get discouraged if it feels awkward, especially for the first few times. Just keep trying to talk, keep trying to babble. This might take a while to get used to, especially if you are used to being critical of yourself, so I'm eager to hear about your experiences, if you do try.



Discuss

Time until graduation as a proxy for picking between (German) universities

January 24, 2022 - 21:27
Published on January 24, 2022 6:27 PM GMT

If you are in high school and struggle with decision-making, please don't take this too seriously. If you're into EA maybe apply for 1-1 advice here?

Choosing a university without knowing what metrics to base this decision on is hard. When I did this one year ago, I was mostly comparing German universities, since Germany is my country of residence (I debated going abroad, but covid and other things made this option unattractive for me). Since most universities here are free and rankings aren't really popular, there isn't much of a prestige difference between universities to cut down the possibilities. So in the end, I based my decision mostly on cost of living and location/city.

I didn't expect the differences in universities to be large enough to be worth looking further into it. I've changed my mind about that, mostly based on the fact that I could share what I've learned with other aspiring undergraduates.

Data

I looked into which metrics are used by the CHE-Ranking, which is the most popular here (it ranks universities for every metric and every degree separately and refuses to aggregate them to give universities an overall rank; the idea is "make your personal ranking according to your priorities", and it's like they actively refuse to play the prestige game). Most of their metrics are either subjective ratings by a few students (how useful are these ratings if these students have never been to a different university to compare it to?), or things like money per researcher, which seem rather useless for finding the best place to study as an undergraduate. The only thing that stood out to me was "% of graduations in appropriate time". Unfortunately, universities in Germany have only recently (2017)[1] started to collect this data in a central place, so I rely on the summary statistics from the CHE-Ranking site and hope that they did a good job.

As you can see in the plot below, there are lots of universities where you would be above average (median) if you finished your degree in your eighth semester. It turns out the difference in how many students (in Computer Science, the most popular subject on LessWrong) graduate by their eighth[2] semester[3] is pretty large. Some of this may be due to random variation in student ability or curriculum, but even excluding universities with fewer than 1000 students (so at least (1000/4) * (around 50% passing rate) = 125 students passing every semester) the differences are quite large:

Next I looked at whether the same was happening for business administration, because it is the most popular subject in Germany and I would have to worry less about statistical significance (I'll leave the statistical tests to people who've actually taken a statistics class). It does look quite similar (although overall it does not look quite as bad as for CS):

One thing that is more pronounced here is that the upper outliers tend to be private universities, which makes sense to me (left as an exercise for the reader, lol). I am still surprised by the variation; before I looked into this I thought that most universities were roughly equivalent.

Other considerations
  • Different universities offer different "special" undergrad programs under the label computer science or business administration
  • There does seem to be some signal coming through (the private universities), though I don't know the other factors as well?
  • If it's just about more able students, then your particular subject might be less important than the university as a whole?

I was kind of worried that I might be misunderstanding something basic here, so I tried to check whether the numbers from the CHE-Ranking add up to the same as the numbers I can see from my own university. (Caution: this just looks at who registered for the semester, so it includes students who never actually graduated!)

Assuming everyone after the fifth semester eventually graduates,[4] I got ~50% (excluding the last semester due to covid), which is pretty close to the 58% according to the CHE data, so at least in this case the data is probably accurate.
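To make that sanity check concrete, here is one way it could be computed from per-semester registration counts. This is a rough sketch only: the numbers below are hypothetical placeholders, and the method is my guess at the calculation, not a reproduction of it.

```python
# Hypothetical snapshot of how many students are registered in their k-th
# subject-semester (placeholder numbers, not real data).
registered = {5: 240, 6: 230, 7: 215, 8: 205, 9: 100, 10: 60}

# Assumption (as in the footnote): everyone still registered after semester 5
# eventually graduates, so the semester-6 count approximates the number of
# eventual graduates per cohort.
eventual_graduates = registered[6]

# Students registered in semester 9 or later needed more than the standard
# 6 semesters plus 2, i.e. they did not finish "in appropriate time".
late_finishers = registered[9]

share_in_time = 1 - late_finishers / eventual_graduates
print(f"~{share_in_time:.0%} graduated in appropriate time")
```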

Guess

Interpret this as my wild guess. If you have a better interpretation, share it in the comments! I've done stupid/obvious mistakes before. If you find any please point them out!

Average time until graduation is probably a function of (personal ability) and/or (the hurdles the university puts in the way of passing). If you are in university mostly for the signalling and the interesting people, then both smaller hurdles and the smarter students implied by more "graduations in appropriate time" would be good news. If you are interested in actually learning things you'd probably try to go for... So maybe include it in your decision matrix.

Things I'd do with more time/motivation:
  • Figure out how to get more data on the whole distribution for time until graduation. To identify universities where it is easier (or students smarter) to pass not only in "appropriate time" but maybe even in the Prescribed 6 Semesters, you'd actually have to know how many pass at that time.
  • Get the CHE data from past years and investigate how persistent "time until graduation" is over time, which would be rather important as we are necessarily trying to extrapolate here. I had a hard time finding past data with google.
  • Look at data from other countries
  1. See here for more info: Statistisches Bundesamt. “Studienverlaufsstatistik 2020.” ↩︎

  2. CHE's description: “Proportion of undergraduate degree course graduates who completed their studies in an appropriate amount of time, namely the standard period of study plus one or two semesters (dependent on the subject area).” ↩︎

  3. (for a “6-semester”-degree) ↩︎

  4. They don't but it is good enough. ↩︎



Discuss

Nuanced and Extreme Countersignaling

January 24, 2022 - 09:47
Published on January 24, 2022 6:47 AM GMT

Nuanced Countersignaling

Scott Alexander described countersignaling as following a barber-pole pattern. Or, as he says in Friendship is Countersignaling:

... you can mistake someone at level n for level n-1, but never for level n-2 – so pretending to be level n-2 and getting away with it is a sure sign that you are in fact level n and not level n-1.

I think of this as a subtype of countersignaling - nuanced countersignaling. The primary goal here is to distinguish an actual level n from a similar-but-worse level n-1. This is done by sending an n-2 signal while you're at level n.

Note that under this theory, sending an n-3 signal is "dangerous" for the purpose of countersignaling as level n. After all, n-3 would be how an n-1 countersignals. Countersignaling at level n-2 is the only way to unambiguously countersignal level n status.

Extreme Countersignaling

By contrast, with extreme countersignaling, the idea is to send the most extreme possible countersignal. If signals are on a spectrum from 10 to -10, extreme countersignaling is when a person at level 10 illustrates this by sending a -10 signal. A person at level 9 can only get away with sending a -9 signal.

I think this is what's going on in Scott's dialog in the post about friendship.

Becca: What are you doing here? I figured they’d have locked you away in the psych ward for good by now.
Scott: Nope. And what are you doing here? You haven’t killed off all your patients yet?
Becca: Only person in this hospital I might kill is standing right in front of me.
Scott: Be careful, I’m armed and dangerous *picks up a central line placement practice set menacingly*

Let's say that friendship is on the scale of solid friends > budding friendship > acquaintances > strangers > enemies. If Scott and Becca are solid friends, nuanced countersignaling would call for a dialog where they pretended to be acquaintances.

Becca: How's it going... Scott? Right?

Scott: It's actually Thomas, but no worries, Sarah. Good to see you again. 

Becca: I'm going to go catch a movie after work, any chance you want to come?

Scott: Uh, maybe let's get coffee first and get to know each other better?

Note that this imaginary dialog doesn't come off as particularly funny or friendly, which might be because my writing sucks, but is also predicted by extreme countersignaling. They need to go all the way to pretending to be enemies to illustrate just how solid their friendship really is, which is how they actually behave in the original dialog above. 

Fragility Of Countersignaling

Countersignaling is hard to get right. Under nuanced countersignaling, sending a level n-1 signal by accident puts you at risk of being taken as actually being at level n-1, while sending level n-3 or lower signals would also serve as countersignals for those at lower levels. Level n-1 could also countersignal at level n-3, so an n-3 countersignal may not serve to distinguish level n from level n-1 as well as an n-2 countersignal does.

Under extreme countersignaling, sending a -n countersignal is best. If the countersignal is not extreme enough, it signals timidity or distance. If it's too extreme, it might be taken as distressing.

This accounts for the outcome of Scott's second dialog:

Scott: Thanks for covering for me yesterday. The pharmacy called and said they were a little confused by your discharge instructions, so could you call them back and sort that out?
Pat: Oh, all right, but you owe me big time for taking care of all this for you.
Scott: Hey, if you hadn’t screwed up the discharge yesterday, we wouldn’t be in this mess.
Pat: What? How dare you! *storms out of the room*

Here, there's no threat of physical violence. It's not as extreme as the dialog between Scott and Becca. This corresponds to the lower level of Scott and Pat's friendship.

Under a nuanced countersignaling theory, Pat storming out of the room should have been more unbelievable than his "you owe me big time" comment. But instead, "you owe me big time" was effective countersignaling, and storming out of the room provoked distress, which is what extreme countersignaling predicts.

In general, I think that extreme countersignaling is more common than nuanced countersignaling. It doesn't follow the "barber-pole" model Scott proposed. 

Fashion and Politics

Scott asks some interesting questions about fashion.

Why is wearing last year’s fashion such a faux pas? Shouldn’t the response be “That person is wearing the second most fashionable outfit ever discovered; that’s still pretty good”?

Why does fashion so often copy the outfits of the lower class (eg “ghetto chic”?) Why, if you are shopping for men’s shirts, are there so many that literally say “GHETTO” on them in graffiti-like lettering?

And I don’t think I’m a random nerd coming in here and telling fashion people that I understand them better than they understand themselves. This seems to be how fashion people really think. Just look at the word “poser” (or possibly “poseur”). The thrust seems to be: “A person who is not of the group that is cool enough to wear this fashion is trying to wear this fashion! Get ’em!”

Here, the "countersignaling" GHETTO shirts are appropriating the symbols not of class n-2, but of class -n. I agree with Scott that the "last year's fashion faux pas" seems like an example of nuanced countersignaling. Perhaps we have both going on in the world of fashion.

Scott then goes on to apply countersignaling to politics.

I see my high school classmates – a mostly unselected group of the general suburban California population – posting angry left stuff like “Ohmigod I just heard about that mayor in South Carolina WHAT A FUCKING BIGGOT!!!”

I see the people I think of as my intellectual equals posting things that are conspicuously nuanced – “Oh, I heard about that guy in South Carolina. Instead of knee-jerk condemnation, let’s try to form some general principles out of it and see what it teaches us about civil society.”

And I see the people I think of as the level above me posting extremely bizarre libertarian-conservative screeds making use of advanced mathematics that I can barely understand: “The left keeps saying that marriage as an institution isn’t important. But actually, if we look at this from a game theoretic perspective, marriage and social trust and forager values are all in this complicated six-dimensional antifragile network, and it emergently coheres into a beneficial equilibrium if and only if the government doesn’t try to shift the position of any of the nodes. Just as three eighteenth-century Frenchmen and a renegade Brazilian Marxist philosopher predicted. SO HOW COME THE IDIOTS ON THE LEFT KEEPS TRYING TO MAKE GOVERNMENT SHIFT THE POSITION OF THE NODES ALL THE TIME???!”

... And my theory is that in a world where the upper class wears black and the lower class wears white, they’re the people who have noticed that the middle class is wearing black as well, and have decided to wear white to differentiate themselves.

The "libertarian-conservative screeds" bit is made up, but let's pretend like it's real, actually brilliant, and Scott's right about the countersignaling. The most obviously countersignaling bit is the phrase "THE IDIOTS ON THE LEFT." Everything else before and after that is the bizarre brilliancy. And when this imaginary libertarian countersignals, he does it not by aiming for some sort of dumber-than-smart but smarter-than-dumb n-2 level, but for an all-caps "THE IDIOTS ON THE LEFT" ranting tone. This is extreme countersignaling!

Everything In Its Place

One common feature of all these examples is that the countersignaling is confined. The doctors are having a brief exchange in the middle of their rounds. The GHETTO shirt is just a shirt, one part of the wardrobe, worn occasionally, and probably doesn't actually look GHETTO overall. The libertarian-conservative screed is 4% countersignaling, 96% argument and analysis.

Countersignaling is also useful as a self-signal. If you intuitively feel that you can get away with a little bit of countersignaling, it's a sign that you've sent a sufficiently powerful signal to have the countersignal be interpreted correctly. But countersignaling is just the cherry on top of the straightforward signaling sundae.

This is why it's hard for me to agree with Scott in his analysis of a reality-based conservative smart-person reaction against Mrs. Grundy leftism as being explained by countersignaling. Countersignaling seems to function best when it's a small part of a much larger straightforward signaling context. It's not a lifestyle - just a bit of fun.

The brilliant mathematician puts a "Don't Tread on Me" bumper sticker on his Tesla? Countersignaling. The brilliant mathematician spends an inordinate amount of time engaging with whatever conservative news stories float across his Facebook feed? Before too long, it's not countersignaling anymore - just a waste of time.

I interpret the engagement with conservative ideas Scott's describing a little more straightforwardly. Lots of people are inundated with Mrs. Grundy leftist takes on social media. They're smart enough to try and figure out what they really think. So they say  things like "Oh, I heard about that guy in South Carolina. Instead of knee-jerk condemnation, let’s try to form some general principles out of it and see what it teaches us about civil society.”

This isn't countersignaling. It's just signaling. This isn't making fun of anybody, and it's calling for straightforward civil discourse in terms nobody could possibly mistake for anything else.

Like a computer, a relationship can sometimes be rebooted by turning it off and turning it back on again. I read these people as acknowledging that the relationship is at a -7 ("instead of a knee-jerk condemnation"), and are sending signals as if it were at a 0 ("let’s see what it teaches us about civil society"). This is neither extreme nor nuanced countersignaling. It seems to me that this is just a fail-safe social move that is pretty widely recognized. If things are getting too bad, we can revert to the polite civility that might be extended at a gathering of strangers.

So I'd agree with Scott in finding some hope in our cantankerous political situation. But instead of seeing an intelligentsia constantly oscillating between whichever political opinion is just out enough to be hip, I see an intelligentsia being the first to recognize the dynamics of the social situation and extend an olive branch to the other side. They're banking on others finding this appealing and joining in.



Discuss

Words are Imprecise Telepathy

January 24, 2022 - 04:45
Published on January 24, 2022 1:45 AM GMT

Words are mental paintbrush handles. Saying "triangular lightbulb" paints a picture with words.

The point of communication is to transmit a message. But the message is not the words. The message is the picture, the mental model inside your head.
So the message is reduced to words, which are then used by the recipient to construct a new mental model, with possible errors.

Interpreting words is a complex task. Words do not always have literal, well-defined meanings (if they did, machine translation would be a much easier problem to solve).
Words and word clusters activate patterns in our brain, some patterns subtler than others.

So the actual meaning of a word, a sentence, a communique could be considered to be how it is interpreted by the recipient's brain, what kind of mental model or mental image is created.

Communication is imprecise. We might want to know how imprecise.
The words, the sentences you use are only an approximation to the mental image in your head. But some approximations are better than others.

Pi could be approximated as 3, 3.14, 3.1415, etc, depending on what task you are doing.
The shape of Earth could be approximated as a sphere, an oblate spheroid or a topographical map.

If you are doing calculations by hand, it's probably a waste of time to use a value of pi with 15 decimals.
If you want a satellite to stay in low orbit around the Moon, you'll want to know the precise nature of the lunar gravitational field including any anomalies, not just an approximation which assumes the field is uniform.

Use appropriate precision for the task at hand. Use higher precision to lower errors. Use lower precision to save time and resources.

In order to communicate with higher precision, it helps to pay attention to what information is transmitted besides the literal meaning of the words.
This is less important for "precise" topics such as math and science, and more important for "sensitive" topics such as emotions, conflict resolution, politics and gender equality.

Pay attention to the "tone" of conversation.
Not just what someone is saying, but what does expressing it the way they do imply?
What patterns are activated in my brain when processing this information?
The more precise your perception is, the subtler the patterns you notice in yourself when processing information.
The more precisely you communicate, the more accurately you control what patterns are activated in the listener's brain, including more subtle ones. 
 



Discuss
