
[AN #158]: Should we be optimistic about generalization?

Published on July 29, 2021 5:20 PM GMT

Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter.

Audio version here (may not be up yet).

Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer.

HIGHLIGHTS

A naive alignment strategy and optimism about generalization (Paul Christiano) (summarized by Rohin): We want to build an AI system that answers questions honestly, to the best of its ability. One obvious approach is to have humans generate answers to questions, select the question-answer pairs where we are most confident in the answers, and train an AI system on those question-answer pairs.

(I’ve described this with a supervised learning setup, but we don’t have to do that: we could also learn from comparisons between answers, and we only provide comparisons where we are confident in the comparison.)

What will the AI system do on questions where we wouldn’t be confident in the answers? For example, questions that are complex, where we may be misled by bad observations, where an adversary is manipulating us, etc.

One possibility is that the AI system learned the intended policy, where it answers questions honestly to the best of its ability. However, there is an instrumental policy which also gets good performance: it uses a predictive model of the human to say whatever a human would say. (This is “instrumental” in that the model is taking the actions that are instrumental to getting a low loss, even in the test environment.) This will give incorrect answers on complex, misleading, or manipulative questions -- even if the model “knows” that the answer is incorrect.

Intuitively, “answer as well as you can” feels like a much simpler way to give correct answers, and so we might expect to get the intended policy rather than the instrumental policy. This view (which seems common amongst ML researchers) is optimism about generalization: we are hoping that the policy generalizes to continue to answer these more complex, misleading, manipulative questions to the best of its ability.

Are there reasons to instead be pessimistic about generalization? There are at least three:

1. If the answers we train on aren’t perfectly correct, the instrumental policy might get a lower training loss than the intended policy (which corrects errors that humans make), and so be more likely to be found by gradient descent.

2. If the AI already needs to make predictions about humans, it may not take much “additional work” to implement the instrumental policy. Conversely, if the AI reasons at a different level of abstraction than humans, it may take a lot of “additional work” to turn correct answers in the AI’s ontology into correct answers in human ontologies.

3. From a followup post, the AI system might answer questions by translating its concepts to human concepts or observations, and then deduce the answer from those concepts or observations. This will systematically ignore information that the AI system understands that isn’t represented in the human concepts or observations. (Consider the example of the robot hand that only looked like it was grasping the appropriate object.)

A possible fourth problem: if the AI system did the deduction in its own concepts and only as a final step translated it to human concepts, we might still lose relevant information. This seems not too bad though -- it seems like we should at least be able to explain the bad effects of a catastrophic failure (AN #44) in human concepts, even if we can’t explain why that failure occurred.

A followup post considers whether we could avoid the instrumental policy by preventing it from learning information about humans (AN #52), but concludes that while it would solve the problems outlined in the post, it seems hard to implement in practice.

TECHNICAL AI ALIGNMENT
ROBUSTNESS

Experimentally evaluating whether honesty generalizes (Paul Christiano) (summarized by Rohin): The highlighted post introduced the notion of optimism about generalization. On this view, if we train an AI agent on question-answer pairs (or comparisons) where we are confident in the correctness of the answers (or comparisons), the resulting agent will continue to answer honestly even on questions where we wouldn’t be confident of the answer.

While we can’t test exactly the situation we care about -- whether a superintelligent AI system would continue to answer questions honestly -- we can test an analogous situation with existing large language models. In particular, let’s consider the domain of unsupervised translation: we’re asking a language model trained on both English and French to answer questions about French text, and we (the overseers) only know English.

We could finetune the model on answers to questions about grammar ("Why would it have been a grammatical error to write Tu Vas?") and literal meanings ("What does Defendre mean here?"). Once it performs well in this setting, we could then evaluate whether the model generalizes to answer questions about tone ("Does the speaker seem angry or sad about the topic they are discussing?"). On the optimism about generalization view, it seems like this should work. It is intentional here that we only finetune on two categories rather than thousands, since that seems more representative of the case we’ll actually face.
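To make the proposed setup concrete, here is a minimal sketch of the data layout (the question strings are from the post; the Python framing and all names are my own illustration):

```python
# Hypothetical layout of the proposed experiment; names are illustrative,
# not from the original post.

finetune_categories = {
    # Categories where English-only overseers can verify answers:
    "grammar": ["Why would it have been a grammatical error to write Tu Vas?"],
    "literal meaning": ["What does Defendre mean here?"],
}

heldout_categories = {
    # Never finetuned on; used only to test generalization:
    "tone": ["Does the speaker seem angry or sad about the topic they are discussing?"],
}

# Optimism about generalization predicts that a model finetuned only on
# the two categories above still answers the held-out tone questions honestly.
```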

There are lots of variants which differ in the type of generalization they are asking for: for example, we could finetune a model on all questions about French text and German text, and then see whether it generalizes to answering questions about Spanish text.

While the experiments as currently suggested probably won’t show good generalization, a variant that could support it would be one in which we train for plausibility. In our original example, we finetune on correct answers for grammar and literal meanings, and then we also finetune to have the model give plausible answers to tone (i.e. when asked about tone, instead of saying “en colère means 'angry'”, the model says “the author is angry, as we can see from the use of ‘en colère’”). It seems possible that this combination leads to the model giving actually correct answers about tone, just because “honestly report your best guess” seems like the simplest policy that meets all of these criteria.

Teaching ML to answer questions honestly instead of predicting human answers (Paul Christiano) (summarized by Rohin): This post presents an algorithm that aims to solve the second problem from the highlighted post. As a reminder, the second problem is that an AI system that already has to make predictions about humans might learn a policy that is just “say what humans would say”, since that is simpler than learning another translation that maps its knowledge to human language (so that it can answer honestly to the best of its knowledge).

The core idea is to have a “simple” labeling process and a “complex” labeling process, where the complex process can catch errors from the simple process. We’ll also assume we have a simple / complex dataset (labeled by the simple / complex processes respectively), where we are confident that all of the answers in the dataset are correct. This means that the complex dataset has more challenging questions than the simple one. The simple / complex instrumental policy is the policy that predicts what the simple / complex labeling process would do.

As an inefficient first algorithm, we can train our model on answers from the simple process where we are confident in their correctness, and then simply check whether the model performs well on new, harder questions where the complex and simple processes disagree. If the model fails this check, we restart from scratch. Intuitively, when the model eventually passes the check, it has probably learned the intended policy, as that’s the one which would generalize zero-shot to complex questions as well (and in particular, the simple instrumental policy would not generalize in this way).
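As a toy illustration of this train-and-restart loop (all names and the 10% success probability are made up; real training would be vastly more expensive):

```python
import random

def train_from_scratch(seed):
    """Stand-in for training a fresh model on the simple dataset.
    Pretend each run randomly lands on one of the two policies."""
    random.seed(seed)
    return "intended" if random.random() < 0.1 else "instrumental"

def passes_check(policy):
    """The simple instrumental policy predicts the simple labeling
    process, so it fails on new questions where the simple and complex
    processes disagree; the intended policy keeps answering honestly."""
    return policy == "intended"

seed = 0
while not passes_check(train_from_scratch(seed)):
    seed += 1  # failed the check: throw the model away and restart
print(f"passed the generalization check after {seed + 1} runs")
```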

Of course, we need a more efficient algorithm than that. Instead of creating an algorithm where the intended policy would work but the instrumental policy wouldn’t, could we instead change the inductive biases so that the intended policy is likely while the instrumental policy is not? Our original worry was that the intended policy has to bake in two implementations of language -- one in its world model, and one when translating answers into human-understandable concepts. So we could instead try to train a model that learns language from the simple instrumental policy, but is also trained on the complex dataset. The hope would be that the intended policy can learn the second implementation of language “for free” from the simple instrumental policy, while still working on the complex dataset. The actual details are quite complex and I’m not going to go into them here.

This post by Evan Hubinger points out some problems and potential solutions with the approach.

FORECASTING

AXRP Episode 10 - AI’s Future and Impacts (Daniel Filan and Katja Grace) (summarized by Rohin): This podcast goes over various strands of research from AI Impacts, including lots of work that I either haven’t covered or have covered only briefly in this newsletter:

AI Impacts’ methodology. AI Impacts aims to advance the state of knowledge about AI and AI risk by recursively decomposing important high-level questions and claims into subquestions and subclaims, until reaching a question that can be relatively easily answered by gathering data. They generally aim to provide new facts or arguments that people haven’t considered before, rather than arguing about how existing arguments should be interpreted or weighted.

Timelines. AI Impacts is perhaps most famous for its survey of AI experts on timelines till high-level machine intelligence (HLMI). The author’s main takeaway is that people give very inconsistent answers and there are huge effects based on how you frame the question. For example:

1. If you estimate timelines by asking questions like “when will there be a 50% chance of HLMI”, you’ll get timelines a decade earlier than if you estimate by asking questions like “what is the chance of HLMI in 2030”.

2. If you ask about when AI will outperform humans at all tasks, you get an estimate of ~2061, but if you ask when all occupations will be automated, you get an estimate of ~2136.

3. People whose undergraduate studies were in Asia estimated ~2046, while those in North America estimated ~2090.

The survey also found that the median probability of outcomes approximately as bad as extinction was 5%, which the author found surprisingly high for people working in the field.

Takeoff speeds. A common disagreement in the AI alignment community is whether there will be a discontinuous “jump” in capabilities at some point. AI Impacts has three lines of work investigating this topic:

1. Checking how long it typically takes to go from “amateur human” to “expert human”. For example, it took about 3 years for image classification on ImageNet, 38 years for checkers, 21 years for StarCraft, 30 years for Go, 30 years for chess, and ~3000 years for clock stability (how well you can measure the passage of time).

2. Checking how often particular technologies have undergone discontinuities in the past (AN #97). A (still uncertain) takeaway would be that discontinuities are the kind of thing that legitimately happen sometimes, but they don’t happen so frequently that you should expect them, and you should have a pretty low prior on a discontinuity happening at some specific level of progress.

3. Detailing arguments for and against discontinuous progress in AI.

Arguments for AI risk, and counterarguments. The author has also spent some time thinking about how strong the arguments for AI risk are, and has focused on a few areas:

1. Will superhuman AI systems actually be able to far outpace humans, such that they could take over the world? In particular, it seems like humans can use non-agentic tools to help keep up.

2. Maybe the AI systems we build won’t have goals, and so the argument from instrumental subgoals won’t apply.

3. Even if the AI systems do have goals, they may have human-compatible goals (especially since people will be explicitly trying to do this).

4. The AI systems may not destroy everything: for example, they might instead simply trade with humans, and use their own resources to pursue their goals while leaving humans alone.

AI GOVERNANCE

Decoupling deliberation from competition (Paul Christiano) (summarized by Rohin): Under a longtermist lens, one problem to worry about is that even after building AI systems, humans will spend more time competing with each other rather than figuring out what they want, which may then lead to their values changing in an undesirable way. For example, we may have powerful persuasion technology that everyone uses to persuade people to their line of thinking; it seems bad if humanity’s values are determined by a mix of effective persuasion tools, especially if persuasion significantly diverges from truth-seeking.

One solution to this is to coordinate to pause competition while we deliberate on what we want. However, this seems rather hard to implement. Instead, we can at least try to decouple competition from deliberation, by having AI systems acquire flexible influence (AN #65) on our behalf (competition), and having humans separately thinking about what they want (deliberation). As long as the AI systems are competent enough to shield the humans from the competition, the results of the deliberation shouldn’t depend too much on competition, thus achieving the desired decoupling.

The post has a bunch of additional concrete details on what could go wrong with such a plan that I won’t get into here.

NEWS

Building and Evaluating Ethical Robotic Systems (Justin Svegliato, Samer Nashed et al) (summarized by Rohin): This workshop at IROS 2021 asks for work on ethical robotic systems, including value alignment as a subtopic. Notably, they also welcome researchers from disciplines beyond robotics, including philosophy, psychology, sociology, and law. The paper submission deadline is August 13.

Survey: classifying AI systems used in response to the COVID-19 pandemic (Samuel Curtis et al) (summarized by Rohin): A team at The Future Society aims to build a living database of AI systems used to respond to COVID, classified using the OECD framework. I think this is an interesting example of building capacity for effective AI governance. If you were involved in developing an AI system used in the COVID response, they ask that you take this survey by August 2nd.

FEEDBACK

I'm always happy to hear feedback; you can send it to me, Rohin Shah, by replying to this email.

PODCAST

An audio podcast version of the Alignment Newsletter is available, recorded by Robert Miles.




Covid 7/29: You Play to Win the Game

Published on July 29, 2021 4:10 PM GMT

Few things warm my heart more than playing to win the game. Few things sadden me more than observing someone not playing to win the game. 

Usually, that means they are playing a different game instead, whether they admit it or not, and whether or not they know it themselves. The game, from this perspective, is simply that which you are playing to win, and your revealed preferences define the game’s rules and objectives.

This week saw some excellent playing to win the game. The NFL, many parts of the government and a number of corporations began imposing vaccine mandates, hopefully causing a cascading effect. It’s at least a start. There are a lot of ways in which we collectively are revealing ourselves not to be playing the game of ‘get people vaccinated’ let alone the game of ‘prevent Covid while minimizing costs.’ The lowest hanging fruit remains fully approving the vaccines, which we somehow still have not done.

A central question continues to be how effective the vaccines are against Delta. The Israeli claims turn out to probably be the result of basic statistical mistakes, so those scary numbers are now mostly off the table, although that still leaves us a range containing meaningfully distinct answers. 

Another central question is Long Covid, for which we got some actual data, so there’s a section analyzing that.

The big mystery remains why Delta suddenly peaked and turned around, first in India, and now in the UK and the Netherlands. These turnarounds are excellent news, and I presume we will see a similar turnaround at a similar point, but what’s causing them to happen so quickly? I don’t know.

Meanwhile, the numbers got worse slightly faster than I expected. Let’s run them.

The Numbers

Predictions

Prediction from last week: 360,000 cases (+50%) and 1845 deaths (+10%)

Result: 392,000 cases (+62%) and 2042 deaths (+21%). 

Prediction for next week: 610,000 cases (+55%) and 2,450 deaths (+20%). 
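For anyone checking the arithmetic, the percentages above follow directly from the weekly totals in the tables below:

```python
# Week-over-week changes behind the "Result" line, using the
# Jul 15-21 and Jul 22-28 totals from the tables below.
prev_cases, cases = 241_556, 391_996
prev_deaths, deaths = 1_677, 2_042

print(f"cases:  {cases / prev_cases - 1:+.0%}")    # +62%
print(f"deaths: {deaths / prev_deaths - 1:+.0%}")  # +22% as computed here (quoted as +21% above)
```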

Things got worse slightly faster than expected. I doubt things are ready to turn around, but there are signs of the control system starting to act which should accelerate, and between vaccinations and infections (most of which likely do not result in positive tests and therefore known cases) immunity is building up, so I continue to expect the pace of growth to drop off a bit. I’d be unsurprised by numbers between about +40% and +70% for cases.

Deaths

| Date | WEST | MIDWEST | SOUTH | NORTHEAST | TOTAL |
|---|---|---|---|---|---|
| Jun 10-Jun 16 | 368 | 611 | 961 | 314 | 2254 |
| Jun 17-Jun 23 | 529 | 443 | 831 | 263 | 2066 |
| Jun 24-Jun 30 | 550 | 459 | 706 | 186 | 1901 |
| Jul 1-Jul 7 | 459 | 329 | 612 | 128 | 1528 |
| Jul 8-Jul 14 | 532 | 398 | 689 | 145 | 1764 |
| Jul 15-Jul 21 | 434 | 341 | 732 | 170 | 1677 |
| Jul 22-Jul 28 | 491 | 385 | 1009 | 157 | 2042 |

Deaths are going up slower than cases, but faster than one would have hoped. I interpret this partly as last week’s number being artificially low, and partly as the South having a problem with its most vulnerable being partly unvaccinated, and thus we see a rise focused on the South including as a share of the cases. 

Things will continue to get worse until several weeks after cases turn around. The question is if we can continue to see deaths lag behind cases, as they continue to do so in the UK. My guess is we won’t do as well as some other places, especially in the South, but will still be doing better than our case counts alone would suggest.

Cases

| Date | WEST | MIDWEST | SOUTH | NORTHEAST | TOTAL |
|---|---|---|---|---|---|
| Jun 3-Jun 9 | 25,987 | 18,267 | 32,545 | 11,540 | 88,339 |
| Jun 10-Jun 16 | 23,700 | 14,472 | 25,752 | 8,177 | 72,101 |
| Jun 17-Jun 23 | 23,854 | 12,801 | 26,456 | 6,464 | 69,575 |
| Jun 24-Jun 30 | 23,246 | 14,521 | 31,773 | 6,388 | 75,928 |
| Jul 1-Jul 7 | 27,413 | 17,460 | 40,031 | 7,065 | 91,969 |
| Jul 8-Jul 14 | 45,338 | 27,544 | 68,129 | 11,368 | 152,379 |
| Jul 15-Jul 21 | 65,913 | 39,634 | 116,933 | 19,076 | 241,556 |
| Jul 22-Jul 28 | 94,429 | 60,502 | 205,992 | 31,073 | 391,996 |

Vaccinations

People respond to incentives. With the costs of not being vaccinated rising on all fronts, more people are making the decision to get vaccinated. The remaining people are harder to get, but efforts benefit from the growing social proof and social pressure from previous vaccinations. It also helps that previously reluctant politicians, with notably rare exceptions, are increasingly getting with the program.

Vaccination Effectiveness

This thread takes the Israeli evidence (that likely was a quite sloppy statistical error, see the next section), combines it with antibody measures, and suggests that antibody counts decay with a half life of a few months, so under this hypothesis vaccinations continue to protect you from severe symptoms and death but after five months lose half their effectiveness at protecting against infection. I hate the smugness and overconfidence here, and also based on the clinical trials and result of one dose it doesn’t make any sense that a 75% reduction in antibodies would cut protection against infection by 50%. Nor do I buy that there would be a threshold effect big enough to cause the curve she’s analyzing here. 

Also, this isn’t a half-life:

Even if we take this fully seriously, it’s a steady state that we get to in 3-5 months, and the median gets to its low point after three (which is likely random variance, but the point stands that most of the effect is in place after month three). It certainly doesn’t look anything like the ‘half-life of 100 days’ that the next link in the thread cited, so I notice I’m confused.

I am noting such arguments here for completeness, but I do not put much weight on them. As usual, when one has a new hypothesis, one must then reconcile it with all the different data points, and alarming ‘vaccine stops working’ theories keep implying, well, that vaccines aren’t mostly working, when they obviously mostly are. You can’t both argue that we suddenly don’t think vaccines work as well against Delta, and that vaccine protections fade rapidly over time, and also that Delta is so much more infectious to begin with among the unvaccinated, because such claims combine to not be even slightly compatible with the observed data in other places.

Then on Wednesday, Pfizer came out with data that the vaccine remains effective after six months. That’s the headline. The actual data is not as encouraging:

Contrary to Pfizer, that’s quite the drop in effectiveness. The protection against death remains robust, and yes 84% is still quite good compared to what we would have asked for a year ago, but ‘remains effective’ is giving the wrong impression if this data holds up. 

The other news here is that Pfizer plans to be calling for booster shots. It seems that a three dose regimen is much more effective than a two dose regimen, now that we’ve had the ability to test such things, and some places are moving to implement this already. The data here suggests that the third dose will bring things back to at least the early stage 96% effectiveness and plausibly even higher. If I am offered a third dose, I will happily accept it.

There is the concern that giving people third doses while others have not had the opportunity even for first doses is not ethical. I respect that perspective, but do not share it, and will leave it at that. 

On That Israeli Data on Delta’s Effectiveness

Israeli data has been suggesting remarkably low effectiveness of vaccinations against Delta. This thread suggests this comes from… using the wrong denominator. The explanation is, the Israeli outbreak started in very highly vaccinated areas, and the effectiveness numbers came from the percentage of cases that were among the vaccinated, but they were comparing that to the overall population numbers. So, whoops.

If true, as I believe it probably is, that would explain things and be on some levels comforting, but on other levels it’s the opposite of comforting, because this is saying that the Israeli outbreaks started in highly vaccinated areas. So, whoops again. 

I’m inclined to believe that such simple mistakes were happening here, because the Israeli numbers simply didn’t make any sense. They were incompatible, even with each other, let alone with what we were seeing elsewhere. And I’m definitely at the point where such stupid mistakes aren’t surprising. This one is rather stupid, but that’s the way such things seem to often go.

Delta Variant

For those who need it: Thread explaining basics of how vaccine protection interacts with spread of Delta. 

CDC has reversed course on its mask mandates. Masks will be back in schools, where I’ve learned first hand and the hard way that schools feel compelled to follow the guidelines. They’re suggesting indoor vaccinated masking in ‘areas where there is a surge’ which doesn’t really make a lot of sense and will cause some confusion, but perhaps the hope is it will make intuitive sense to regular people. It’s good that when the facts change, the CDC changes its mind, at least.

There are two central facts about Delta one’s model must explain. First, the dramatic takeover of the pandemic and rise in overall cases across countries. Second, the sudden reversal of those trends in many places, including India, the UK and the Netherlands:

The Dutch numbers are down by half, the UK is not far behind. 

If you zoom out, the case numbers are still large. 

But the death numbers barely register:

It will take another week or two for the UK/Netherlands death numbers to peak, but this is what vaccinating the vulnerable looks like where it counts.

Here’s some data from Minnesota:

If we take this at face value (it’s not normalized sufficiently so there’s a bunch of reasons not to, but they point in various directions), it suggests about the same ratios in effectiveness for Delta between the vaccines that we saw for the older strain. Moderna’s effective dose is higher than Pfizer’s, which potentially could be responsible for that differential, although I’m still mostly inclined to treat the two as the same until we get a lot more similar data points elsewhere. Normalizing properly for exactly when vaccinations were in effect is tough, but for context Minnesota at the time had just under 30k confirmed infections since May 1.

The data below from San Diego is better normalized, and I’m going to say that this is enough non-normalized data points. 

This graph is pretty bizarre when taken at face value, and I presume it shouldn’t be, and is only the county of San Diego, but still seems worth noting:

Details:

At the lefthand side, we have about 3 cases per 100k among the vaccinated and 9 among the unvaccinated, a ratio of 3:1. That’s a surprisingly small ratio. 

At the end of May, cases among the vaccinated level off, but cases among the unvaccinated continue to drop until about June 10.

Then the cases among the unvaccinated shoot up, and at the end, we have 16 unvaccinated cases per 100k and 2 vaccinated cases, for an 8:1 ratio. In July, this is most definitely representing a pandemic of the unvaccinated.

The question is, why would this ratio suddenly get much bigger?

If Delta is, as everyone now fears, reducing the effectiveness of vaccinations, you’d expect the ratio to go down rather than up. Whereas this new 8:1 ratio implies a much higher effectiveness level, unless one can explain it via other factors.

One possible explanation is that early on the vaccinations went to highly vulnerable people, whereas now they’re more evenly distributed, but that would only explain contrasting March with June, not June with July, where that effect is going to be small.

Another is to claim this didn’t happen:

If we look at the ratio that had emerged before that period in early June, we do get a different picture then, but that period where the unvaccinated cases are going down but vaccinated cases went up is super weird for its own reasons, and the contrast with the left side of the graph is still clear. At a minimum, a constant ratio does not suggest any decrease in effective immunity.

A third is to say that San Diego’s numbers are quirky because someone’s are going to be quirky.

Here’s another Tweet that shows Marin County:

Looking at that, I see a ratio of just over 4:1 rather than 8:1. I don’t see a big shift to higher ratios in July, but I also don’t see a shift to lower ratios either.

I tried briefly to find similar charts or data for larger areas and it wasn’t obvious where they were. If anyone can link to one it would be appreciated. The normalizations here make the data much more useful but I’d prefer a much bigger and ideally more representative area.

Vaccinations are great, but what matters is immunity, and vaccines are only one way to get immunity. Antibodies are not a perfect proxy for immunity, but they’re likely the best one available, and the numbers in the UK on antibodies are very, very high…

When this many people have antibodies, it’s both scary that cases for a while were going up anyway, and also easy to understand how there might be a rapid peak followed by a reversal. It’s especially easy to see this if the serial interval averages two or three days instead of five, so things that happen tend to happen fast. 
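To see why the serial interval matters so much, a toy compounding calculation (the growth factor per generation is an arbitrary illustrative number):

```python
# The same per-generation growth compounds much faster when the
# serial interval is short. R here is a made-up toy value.
R = 1.5
for interval_days in (5, 3, 2):
    weekly_factor = R ** (7 / interval_days)
    print(f"{interval_days}-day serial interval: x{weekly_factor:.1f} per week")
```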

If 90% of people are immune, then each additional 1% that becomes immune reduces spread after that by a full 10% even if everyone is otherwise identical. Whatever vectors still remain for the virus to spread burn themselves out rapidly, until there isn’t enough left to sustain the pandemic. 

It is possible that this is simply what it takes to turn the corner on Delta. If so, then with roughly 70% of US adults vaccinated, we would turn the corner when about two thirds of the unvaccinated have been infected at some point sufficiently to develop antibodies, with the understanding that many Covid infections don’t lead to a positive test and therefore don’t show up in the statistics, and also that a lot of them already happened over the past year and a half.
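A back-of-the-envelope version of the two calculations above, using the rough figures from the text:

```python
# 1) When 90% are immune, one more percentage point of immunity cuts
#    the susceptible pool, and hence spread, by a full 10%.
print(f"{1 - 0.09 / 0.10:.0%}")  # 10% reduction in spread

# 2) Rough US corner-turning condition from the text: ~70% of adults
#    vaccinated plus ~2/3 of the unvaccinated infected at some point.
vaccinated = 0.70
immune = vaccinated + (1 - vaccinated) * (2 / 3)
print(f"implied immune share: {immune:.0%}")  # ~90%
```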

It still doesn’t explain the full extent of the rapid turnarounds in India and the UK, but it helps explain it. Lag in the control system also helps, but again I notice my model remains surprised by this. 

You Play to Win the Game

Last year, the NBA figured out how to do Covid testing. This year, it’s all about vaccinations, and the NFL is stepping up. 

As we all know, in the NFL and also in life, You Play To Win The Game.

Winning isn’t everything, it’s the only thing.

What does it look like to play to win, when it comes to Covid-19?

It looks like this. Here are the key operating principles.

Here’s what would trigger a forfeit:

In other words:

If your vaccinated players force us to cancel the game, we’ll try to do the best we can for everyone involved.

If your unvaccinated players force us to cancel the game, f*** you. You forfeit the game for all purposes where you want to win, the game is cancelled for all purposes where you wanted to lose, and the league will focus on ‘minimizing the financial and competitive burden’ exclusively on the other team. Not you. You’re on your own.

Oh, and regardless of vaccination status, if there’s an outbreak that cancels the game, the players on both teams don’t get paid for that game:

That’s what it looks like when you play to win. 

Some NFL players are less than thrilled with this situation.

For example:

Yes, it turns out that not protecting yourself against infectious disease can put you in a position to hurt your team. Who knew?

Who else we got?

I hope someone compiles all of these together, and then continues in the sacred tradition of arranging them into a YouTube montage video set to Taylor Swift’s I Forgot That You Existed.

Good luck, Mr. Hopkins and all the rest of you, in all your future and hopefully far away endeavors.

The Pac-12 is considering following suit. It does make a lot of sense the way commissioner George Kliavkoff put it:

Most of the Pac-12 schools have vaccine mandates in place in any case. Washington State is currently trying to figure out what to do when its coach refuses to get with that program.

We need more of that ‘can do, and if you choose not to do that’s on you that you didn’t do it’ spirit.

Others Playing To Win the Game

The good news is we are indeed seeing more of this spirit, and there seems to be momentum behind these efforts.

New York and California are requiring government workers who don’t get vaccinated to get weekly tests and wear masks indoors. New York at least is not providing a testing option for health care workers: it’s vaccination or you are (very understandably) fired. Virginia is mandating vaccinations for health workers, as is the Veterans Administration.

From Bloomberg’s daily newsletter:

Reports are that Biden will follow shortly with a similar rule for federal employees, if he hasn’t already by the time you read this.

Companies are now able to get in on the act:

My hope is that this effort continues and spreads, and more and more private employers will be emboldened to enact similar policies. If I was a private employer whose employees were interacting with each other in person or with customers in person, I’d hope to be more worried about what would happen if I didn’t mandate vaccination, including legal consequences, than what would happen if I did, and this transition will go a long way. As usual, we blame people via Asymmetric Justice for action but not inaction, so the less a (soft via testing and masking requirements) mandate seems like a bold choice to take action and more like a default state that protects the employer from liability or an outbreak among the other employees, the better. These things matter.

And once again, we gotta get full authorization as soon as possible. A huge amount of the lack of mandates is the lack of full authorization. If we got it, this cascade could kick into high gear quickly, and give a justification for a change in policy. At this point, any efforts to get people vaccinated that don’t involve a demand for full authorization are impossible to take seriously. How can we be mandating vaccinations but not be ready to fully approve them?

A bonus is it creates positive selection in employees. If anyone quits or otherwise goes ballistic about the situation, that was likely a time bomb on your team in one form or another, they’re definitely bad at risk management, and this gives you the opportunity to be rid of them. It’s hard to hire right now so that could be short term trouble, but identifying and getting rid of bad employees is both highly difficult and valuable. 

Many other countries also continue to get with the program. Here’s Israel this week, playing to win:

I more than fully endorse this take on all of these developments:

Long Covid

We finally have some real data to look at. 

The original paper is here.


Headline findings:

This raises, and/or lets us more usefully address, multiple questions. Should we believe the results of this study? Do these effects seem real? If they are real, what can either an individual or our society in general do about this? Does this change anything if it is true?

First, no matter what the flaws here, a big thank you for running this at all. It does seem like this is a big advance in information value over previous sources. This stuff is hard, and doing something concrete is better than not doing it. This does seem to rise to the level of something useful. 

Second, as usual, studying such things is really hard and there are lots of problems, so let’s get right to some of those.

First off, baseline methods.

There is an obvious concern here. People were recruited to do an intelligence test by offering them an intelligence test. Which, yes, right on, very on the nose and efficient incentivizing, but we do have to worry about the selection effects from that and whether the standard controls handle them.

Here’s what they controlled for: “Nuisance variables were age, sex, racial-ethnicity, gender, handedness, first language (English vs other), country of residence (UK vs other), education level, vocational status and annual earning.”

Handedness! So other than the bonus handedness, this is your basic check of control variables for basic demographic information and socio-economic status. 

Before going further, there are three potential confounding factors here that seem like big issues.

The first is that choosing to take the test could be a function of one’s situation and practical intelligence. If I had Covid and fully recovered and felt fine, I would not be so curious about taking an intelligence test. If I had Covid and had continued problems, then I would plausibly be much more interested to know how I did on such a test. In theory, this could be the whole situation, since those who feel stupid due to non-Covid reasons or due to lockdowns wouldn’t feel the same curiosity and wouldn’t opt in, whereas those who had Covid and happened to also feel stupid recently would take the test. 

The second is that being intelligent helps prevent Covid-19, after controlling for the other factors. The study was done in 2020 so vaccination isn’t relevant, but plenty of other decisions matter. How and when people decided to mask, including how much care was put into doing it properly with a good fit, determines both how likely they were to get Covid and how severe their case was likely to be. Social distancing is similar. As is making a wide variety of other good decisions about how much risk to take. Intelligence also correlates with the type of work that can be done remotely, even controlling for income and education, which improves ability to social distance. Intelligent people tend to be more rule abiding in general, including when the rule is dumb, which in this case is net useful. And so on.

The third is that it looks like they’re using self reports of Covid-19. 

It makes sense to worry that people would conclude from their long term issues that they likely had Covid-19, or from a lack of such issues that they likely didn’t have it, which could once again confound the results here.

In theory, one could have controlled for all this, by having people take the test before Covid-19. Even now, one could attempt to measure the impact level of the second effect by then following up with the people who took the test and seeing which of them later got Covid, although changing conditions will change the size of the correlation – so it’s more of a ‘check if this is a substantial effect or not’ check than anything else, and now with vaccination everything is different. For the first effect, again, you’d need to find a way to measure things in the other order, possible in theory but not easy or cheap. For the third you could do antibody tests since this was pre-vaccinations. 

But all of that is tricky and expensive.

The huge advantages of doing what they actually did were that it was practical, it was ‘ethical’ and it was relatively inexpensive. I point out issues, but I think the study was likely done roughly the right way in context, picking the low hanging fruit. From a civilizational perspective we could and should have done far better, but that’s not the fault of the people doing what they can.

Looking ahead, it does seem like intelligence didn’t have too big an effect on chance of getting Covid, based on a follow-up test. This could still eat up the whole observed effect, but I’m less concerned about it than before reading about that. Also a little sad that this effect turns out to be so small, for other reasons.

I also approve of their methods for analyzing the results, especially combining the nine tests into one number. There are a lot of worries I conspicuously don’t have here.

Here’s the breakdown by symptoms and gender:

I’m noting it because that righthand column is pretty strange. It’s lower average age, and suddenly it’s very male whereas the other groups are increasingly female and older as symptoms became more serious. Low sample size is presumably the answer (44 people) but it’s still kind of weird.

Lower in the same table, it’s clear that almost none of these people had a positive Covid-19 biological test, so this is almost all self-diagnosis:

Then we look at the headline chart. 

One standard deviation in an IQ test is about 15 points, so this is an average of about 0.5 points or so for the first group, 1 point for those not requiring assistance, 2 for those getting help at home, 4 for those hospitalized and 7 for those who went on a ventilator. 
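For concreteness, here is that conversion spelled out (15 IQ points per standard deviation; the per-group SD values below are approximate back-calculations from the point estimates above, and only the 0.47 ventilator figure is quoted directly later in the post):

```python
SD_POINTS = 15   # one standard deviation on an IQ-style scale
effects_sd = {   # approximate effect sizes in standard deviations
    "mildest symptomatic group": 0.03,
    "no assistance required":    0.07,
    "assistance at home":        0.13,
    "hospitalized":              0.27,
    "ventilator":                0.47,
}
for group, sd in effects_sd.items():
    print(f"{group}: ~{sd * SD_POINTS:.1f} IQ points")
```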

The first thing to note is that this effect that grows as symptoms get more severe makes a hell of a lot more sense than the mysterious ‘Long Covid doesn’t care how bad your case was’ nonsense. I’m far more willing to believe a proportional effect that grows with symptoms than a blanket ‘nope, you technically got Covid and now you roll versus this other thing at constant odds’ hypothesis. And of course if you’ve been in the hospital on a respirator, it’s not going to be good for your cognitive performance. 

So that adds a lot of credibility to the findings. 

This is their explanation of why we shouldn’t worry about pre-morbid differences (e.g. how smart people were before):

I find this helpful but not fully convincing. One worry is that they’re claiming that those that were ill should have otherwise had higher cognitive performance. I don’t find this plausible, so it makes me skeptical their controls are sufficient. It does mean that I’m no longer concerned that the intelligence differences are too big here, since presumably Covid doesn’t help cognitive performance and that caps the differences at the effect sizes. The second check, following up with a questionnaire to see who later got Covid, is also helpful in capping the effect size of ‘smarter people get less Covid’ but doesn’t address the other concerns. I’m also sad we didn’t ask those people to retake the tests.

I also am skeptical that this effect could fail to partly fade with time or as symptoms fully go away, whereas they are claiming to not see such effects. 

As always, also, there’s the question of whether this effect is unique to Covid or is a general property of many infectious diseases. In some sense it does not matter, but in other senses it matters a lot – or at least points to our failure to be consistent, with several potential ways to address that.

This is their note that the effect size here is indeed a big deal, which it is:

The line that the 0.47 SD drop is more than 10 years of decline in global performance between 20 and 70 is very interesting, because it suggests an intuitive way to measure how much we should care about this effect – we can think of this as similar to aging.

Every year, we all age one year, and a lot of stuff gets harder. For those of us over the age of 25 or so, it’s kind of terrible, and we need to put a lot more effort into making it stop. 

So, in theory, this is suggesting that relatively mild (but still symptomatic) Covid is still doing something of similar magnitude to causing our brains to age two years, and as it gets more serious things get much worse. On average over all symptomatic cases we’d be looking at about three years.

If that’s all true, that’s really bad! Once again, you do not want to get Covid. And one can now think about how much life not lived would be justified in the name of not catching Covid, if one is at only small risk of death. 

I don’t find it likely they are underestimating the size of this particular effect, and I can see how the effect could be smaller or not be there, so that somewhat reduces the expected effect size. But then one must also account for other distinct problems. 

Then there’s the question of how this interacts with vaccinations and children. If you get long Covid roughly the same way you get other symptoms, that’s going to be a big reduction in risk, especially for the very young. My prior would be that this protection is somewhere in between the protection against infection and protection against death. 

You’d also want to adjust for Delta, since all this data was from 2020. My presumption is degree of symptoms mostly controls for that, but it’s plausible that this doesn’t fully control for that. 

This Long Covid effect definitely would still be a bigger danger for most people than death, even before vaccinations. I’d much rather take the very small risk of death than accept this level of accelerated cognitive decline, plus any longer term non-cognitive effects, and it isn’t close even under relatively minimizing assumptions, and multiplying for the uncertainty that this effect is real.

The question then remains, what costs are worth bearing to reduce the probability that this happens to you, individually or collectively.

Which in turn, as always, depends on one’s ability to change that number. Postponing the problem isn’t worth zero, but it’s only worth enough to care much if it buys time for a permanent solution, or there’s a permanent solution you’re willing to implement. That’s true both individually and collectively. Postponement can also help if it is ‘bending the curve’ in ways that matter, but that doesn’t impact Long Covid much. 

There’s the temptation to say ‘oh no, this is terrible, something must be done, this is something, therefore we must do it’ where the something is an effort to kick a can down the road at very high cost, and which may or may not be able to do much of even that. 

Vaccine Hesitancy

This LA Times piece looks at those who are getting vaccinated now in a local community, and finds highly reasonable thinking going on. Of course, these are exactly the previously hesitant people who then did get vaccinated. So it makes sense that such people would seem more reasonable, and would not be at all representative of those who still haven’t gotten their shots. It is consistent with the model that there are a bunch of people doing cost-benefit calculations who are relatively easy to get, and then a bunch of people who are much harder to get. 

This thread compares anti-vax people to victims of a con, with resulting implications for how one should communicate with them if one is attempting to persuade and convince rather than to make oneself feel better about having taken the proper symbolic action. Cooling the mark via people in the community who have earned people’s trust is so crazy an idea it just might work, but requires that such people be convinced first.

Note of course that such framing explicitly assumes the conclusion, that not only are vaccines safe and effective and the right thing for everyone both socially and individually, but that the evidence for this is so sufficient that if you don’t believe it, you’ve been conned. I don’t think that’s right. Many have effectively been conned by misinformation or the need for tribal signaling, but others are doing a calculation with different information and getting a different answer. I strongly believe they’re wrong, but it seems plausible that treating such people as con victims is (at best) highly condescending and that they would notice. We are doing a rather terrible job conveying the information about vaccines in a way that is accessible, understandable and credible to such people, and the fact that vaccines happen to be highly safe and effective doesn’t excuse that.

This paper provides an interesting model of vaccine hesitancy (via MR). In it, people are effectively doing an approximation of a cost-benefit analysis on vaccination, so the more prevalent the disease, the higher the willingness to vaccinate, which is clear from past outbreaks. 

One consequence of this is that if you reduce the number of cases, vaccinations go down. Thus, this model claims vaccine passports are ‘ineffective’ in the sense that they don’t increase vaccinations, and could even backfire, because they reduce prevalence. I find their math to prove too much and therefore am confident they’re technically wrong to draw their conclusions so broadly, but accept the underlying dynamics as things that are real and that matter. 
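As a toy illustration of the feedback loop in that model (the functional forms and parameters here are entirely made up, purely for illustration):

```python
# Toy prevalence-vaccination feedback: willingness to vaccinate scales
# with current prevalence, and vaccination pushes prevalence back down.
prevalence, vaccinated = 0.05, 0.30

for week in range(20):
    uptake = 0.5 * prevalence * (1 - vaccinated)  # cost-benefit response
    vaccinated += uptake
    prevalence *= 1.5 * (1 - vaccinated)          # more immunity, less spread

print(f"vaccinated after 20 weeks: {vaccinated:.0%}")
# Anything that cuts prevalence (passports included) also cuts next
# week's uptake -- the dynamic the paper labels "ineffective", even
# though fewer cases is the point.
```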

Oddly, Tyler thinks this makes a strong case against such passports. I would claim the opposite, because reducing prevalence is a good thing. If we can do something that both incentivizes vaccinations and prevents cases at the same time, that’s good, and if the reduction in cases means we don’t on net cause vaccinations, then that seems fine. Otherwise, you are in the world where you outright want more cases in order to show the bastards and own the reds, which is a bullet I really hope no one is biting.

Regardless, under such a model, lowering costs of vaccination is the obvious choice for getting people vaccinated, and it continues to be a highly efficient strategy. Drive that cost negative. More precisely, one wants to reduce perceived costs, which can involve changing people’s models of vaccine effectiveness and/or safety, and/or changing the difficulty and costs of vaccination. 

Periodic Reminders (You Should Know This Already)

In case you need a short video explanation for how the mRNA vaccines work and were developed, we got you.

Your periodic reminder that our travel restrictions make absolutely zero sense (MR).

And yes, a sensible system would be entirely feasible:

This explanation rings far too true:

Your periodic reminder: The FDA and cost benefit analysis are not on speaking terms, nor does the decision process much correspond to what is safe and effective let alone what is in the public interest. Hence, we approve $50,000 drugs that don’t work and then are forced to collectively pay for them out of the public treasury, but can’t fully approve the same Covid vaccines and definitely can’t mandate them, and so on. Yeah, yeah, same old, FDA Delenda Est, stop using such procedures to tell people what they’re legally allowed to do, also stop using such procedures to decide what we pay for without looking at costs against benefits, and at a bare minimum stop equating those two decisions. The suggestion from the link of letting government officials choose what is mandatory and paid for versus what is forbidden on an ad hoc per-item basis seems to miss the point of ‘are those our only choices?’ and I’m not sure if it’s better or worse than status quo. 

In other FDA Delenda Est it’s-not-news-but-it-was-news-to-him non-news:

WaPo reminds us that Japan is failing at vaccinations by doing the things you would do if you wanted to fail at vaccinations. This includes insisting on distribution by only doctors and nurses, holding out until way too late for a homegrown vaccine, a labyrinthine approval process and demands for domestic testing of the vaccines, confusion about rules and a general lack of urgency, among other things.

(If for some reason you want to financially support these weekly posts and/or my writing in general, you can do so via my bare bones, no rewards of any kind Patreon that is set up exclusively for that purpose. On the margin this does shift my time a non-zero amount towards these posts. However: I do not in any way need the money, please only provide funds fully understanding I already have plenty of money, and if and to the extent that doing this would make your own life better rather than worse.)

The whole blood clot issue around AZ was never a thing. Of course Pfizer has similar instances of blood clots to AZ, given that not getting vaccinated at all also has similar instances. Also Covid-19 itself actually does cause blood clots, but hey. 

The term ‘genuine’ fury is interesting here, since we knew all this already. It also does not matter, for the purposes of the EU’s motivations, whether or not the concerns turn out to be valid. Their perception of the situation at the time would remain unchanged.

Also happy to see this report properly label the people opposing AZ as anti-vaxxers, from anti meaning against and vax meaning vaccine:

To what extent is it true that the European backlash against AZ was due to spite resulting from Brexit? My guess is this was not all that central, but was a substantial contributing factor on the margin. That doesn’t especially make this better or worse, it merely notes that the European Union countries were inclined to make such a self-destructive move for overdetermined reasons. 

Very Serious People do not care about physical world models, a case study (Warning: Someone Is Wrong On the Internet):

Sir, the virus did not mutate among the few remaining infections in Israel. The current pandemic in Israel is not causally related, at all, to the few remaining cases of Alpha or the original strain that were still present in Israel. 

It comes from Delta. If Israel had achieved actual zero Covid but not also instituted large new measures to keep Delta out, Delta would still have arrived from overseas, same as it arrived from overseas in every country except one. And if anything, it would have spread faster, because they’d have gone even more fully back to normal, so there’d be a worse problem now instead of a better one. 

Feels wrong to pick on such statements, but for a while I’ve been feeling the need to pick out a clean example, and this fits the bill. Also points out an important dynamic – local containment only matters for the medium to long term up until you cycle strains. There’s a strong instinct to contain the virus ‘in case it mutates’ but if it mutates elsewhere all your containment efforts mean nothing, so this only matters to the extent that you stop the mutation from happening at all, anywhere. Which is an important consideration, but not for stomping out the last few cases in one place while things continue to rage full blast in others. Much better to help out those other places.

In Other News

Scott Alexander’s post Things I Learned Writing the Lockdown Post is excellent and from my perspective is much better and more interesting than the actual lockdown post. I don’t have the bandwidth to respond properly this week, so noting here that I haven’t done so.

There’s this great highlighted comment at AstralCodexTen and I have nothing to add:

In the monthly links post at ACX, Scott also points to the poll that showed remarkable support for permanent lockdown in the UK:

But then reminds us that if you ask the questions with slightly different wording, people act more sane, and a permanent lockdown then only gets the 3% support you would expect from the Lizardman’s Constant:

They have a graph but it’s hard enough to read as it is so I’m not putting it in directly.

That makes this most of all a manufacturing consent story. Merely by changing a few words and giving people the correct word associations, you can get many people to agree to give up their freedoms and way of life, permanently. That’s scary. Not all of them, not quite this easily, but quite a lot of them. 

You love to see it: UK changes guidelines to emphasize outdoors and de-emphasize surfaces. 

Standard disclaimers that this like everything else is in no way medical advice and not to do anything illegal, but also this happened and further investigation seems logical:

I am not the biggest fan of psychedelics in practice under current conditions, but there’s a ton of potential upside. Our refusal to investigate their potential properly, for this and many other things, is a colossal expected value error, potentially our biggest one. 

Incentives matter, so this is mostly great (via MR), but remember that if it’s $200 to get them to sing it might be considerably more to make them stop:

Incentives matter, dominatrixes requiring vaccination for dungeon entry edition. Seriously, incentives matter, stop pretending this is all so difficult.

The standard check for whether mask mandates are back says yes:

Not Covid

The lighter side presents:

Interesting whether or not they are transitive survey results of the week:

Taken together to see what happens, these imply that about 90% of people think billionaires have lives they’d rather live, and about 70% support the superiority of the life of the True Philosopher even when you exclude the 21% who implicitly endorse it by already claiming to live it. Of course, Robin is an economist and Agnes is a philosopher, and they have followers accordingly, but let’s not let that ruin our fun.

Especially given this:

Mostly I find this rank ordering sensible. Olympic, Pulitzer and Academy Award winners get the glory but their overall lives don’t seem like they should automatically be all that tempting to most people, whereas billionaire or Nobel Prize winner seems a lot more tempting. I’d like to live in a place where greatness was widely considered worth its price, so I worry about people not seeing things that way.

The other thing to observe is that this implies that for the samples in question, life is pretty damn good. If it wasn’t, there would presumably be a much bigger willingness to switch tracks to people who very clearly ‘made it’ and have a lot going on. I tend to agree. My life is pretty great, too, whether or not I count as a true philosopher. 




Slack Has Positive Externalities For Groups

Published on July 29, 2021 3:03 PM GMT

You ever have one of those group projects where you all need to find a time to meet, but some or all of the group members have packed schedules, so you pull out the whenisgood and it turns out the only times that work are 2-2:15 pm on Thursday or 6:30-7:30 am on Tuesday?
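The bind is easy to see in miniature; here is a toy version of that scheduling problem (names and slots invented):

```python
# Toy whenisgood: the group can only meet in slots everyone has free,
# so the least flexible schedule constrains the whole group.
from functools import reduce

availability = {  # hypothetical free half-hour slots per member
    "alice": {"Tue 6:30am", "Thu 2:00pm"},
    "bob":   {"Tue 6:30am", "Thu 2:00pm", "Thu 2:30pm", "Fri 10:00am"},
    "carol": {"Thu 2:00pm", "Fri 10:00am"},
}

common = reduce(set.intersection, availability.values())
print(common)  # {'Thu 2:00pm'} -- one awkward slot for everyone
```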

For this sort of scheduling problem, the best group members are those with lots of slack in their schedule - people who either have lots of time available, or have very flexible time commitments which can move around to accommodate a group meeting. But if my schedule is flexible, note that most of the benefits of that flexibility are captured by the group as a whole, not by me: my flexibility mostly allows the group to accommodate less-flexible members.

The slack in my schedule creates positive externalities for the group. I mostly control how much slack to create/maintain in my schedule, but a large chunk of the benefit goes to other people. This means I’m incentivized to create/maintain less-than-optimal slack in my schedule.

Once you look for it, this shows up in other guises too: many different flavors of slack create positive externalities for groups. In general, we should expect people to create/maintain less slack than would be socially optimal, and this in turn will make groups less functional. What do other forms of this look like, and what can we do about it?

Many Flavors of Slack

A few common forms of slack:

  • Financial: money not budgeted for anything in particular, or which can easily be spent on something else instead, is financial slack.
  • Time: time not scheduled for anything in particular, or which can easily be rescheduled, is time slack.
  • Space: space not used for anything in particular, or which can easily be used for something else, is space slack.
  • Emotional: capacity for excess stress is emotional slack.
  • Social: multiple social groups which one can fall back on, or the ability to make new friends quickly, provide social slack.

We can also separate short-term vs long-term slack for each of these. For instance, a bank may have lots of capital to invest, but limited liquidity, so they can’t move their capital around quickly: high long-term financial slack but limited short-term financial slack. Conversely, someone who has some savings on hand but is spending as much as they earn has short-term financial slack, but not long-term financial slack. Exercise for the reader: what would short-term and long-term time and emotional slack look like?

How do each of these create externalities for groups?

Space is an easy one: groups often need space in which to meet (either short-term, when the usual space is unavailable, or long-term) or store things (again, either short-term or long-term). If someone has spare space to use, that slack provides benefits to the whole group. But unless the group is paying to use the space, the person providing the slack captures only a small share of the benefits. So, people are incentivized to maintain less space slack than optimal.

Financial is another easy one: if some group members can occasionally cover some group costs, and it’s not a big deal, that makes it a lot easier for a group to function smoothly. Again, this applies both short-term (e.g. paying the bill for a group dinner, with the expectation that everyone will eventually pay back) and long-term (covering some costs without reimbursement). Again, the person providing slack captures only a small share of the benefits.

The short-term/long-term distinction matters mainly for a group’s agility/responsiveness/dynamicity. If there’s a crisis and the group needs to respond quickly, or the group needs to make and execute plans on-the-fly as new information comes in, that requires short-term slack on the part of the group members. For instance, last year many groups started to work on COVID tools, like microcovid, radvac, or the various forecasting projects. Many of these required full-time work - people needed the slack to pause or quit their day jobs on relatively-short notice. That takes short-term financial slack, obviously, but also short-term  emotional slack (very stressful!) and social slack (hopefully my coworkers aren’t my only friends, or I can make new ones quickly!).

Another example: suppose a company or organization wants to move (*cough*) - not just across town, but to another state or country. That typically means employees will need to move with them. That requires emotional slack: moves are among the most stressful events most people go through. It requires social slack: people either need friends in the new location, remote friends, or the ability to quickly make new friends. And it requires financial slack, to pay for the move.

In both these examples, the group needs slack from its members in order to do things. (Or, to put it differently: group members' slack facilitates solutions to coordination problems.) The ability to do things as a group mostly benefits the whole group, so the benefits of any particular person’s slack largely go to the rest of the group.

What To Do About It?

One standard econ-101 answer is “internalize the externalities” - i.e. reward people for their slack. People don’t usually do this with monetary payments, but we often do it with less legible rewards, like social status. For instance, if someone provides space for a group to meet, or occasionally covers some bills for the group, that’s usually rewarded with status within the group.

Another standard solution is to require group members to maintain slack. Again, this usually isn’t explicit, but we often do it in less-legible ways. For instance, if one or two people have very little slack in their schedules, maybe the rest of the group decides to meet without them. Or, if one or two people have very little emotional slack and sometimes break down if a competition gets too stressful, maybe they end up usually not participating in board game night or capture-the-flag. This is especially relevant to the last two examples from the previous section: the various COVID groups or the organization moving. If someone lacks the slack to participate, they would probably not end up in the group. Of course, there still need to be some people who do have enough slack in order for the group to include anyone at all.

But these are illegible and imperfect methods. One point of this post is that it may help to explicitly pay attention to slack and its externalities. At a personal level, if we wish to be altruistic, this might mean maintaining extra slack in all its various forms, in order to provide value to the groups in which we participate. It might also mean avoiding people who have very little slack along one or more dimensions, or trying to supplement others’ slack when possible (easy for finances, hard for time). For group organizers, it might mean explicitly requiring slack - e.g. the national guard requires that its members be able to drop everything and respond full-time to an emergency.

Important side point: slack has increasing marginal returns; the tenth unit of any particular flavor of slack is worth more than the first unit. The reason is that, if we flip n coins and count up the number of heads, the noise in that count is only ~√n.
And more generally, if we add up ~n independent noisy things, the noise will typically be of order ~√n. So, if we want to take advantage of noisy opportunities - like a project which might go over budget, or a group which might need to move its meeting to a different time/space sometimes, or an event which might be fun or might be stressful - then we only need ~√n units of slack to take advantage of ~n opportunities. Going from zero units of slack to one lets us take advantage of ~one more opportunity, whereas going from nine units of slack to ten lets us take advantage of ~twenty more opportunities. The more slack we have, the more we can benefit from adding marginal slack.
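
To make the increasing-returns claim concrete, here's a toy simulation (my own sketch, assuming each opportunity independently costs one extra unit of slack half the time):

import random

def buffer_needed(n_opportunities, quantile=0.95, trials=10000):
    """Slack buffer that covers the total overrun in ~95% of random trials."""
    totals = sorted(
        sum(random.random() < 0.5 for _ in range(n_opportunities))
        for _ in range(trials)
    )
    # Overrun above the expected n/2 extra-cost events, at the chosen quantile.
    return totals[int(quantile * trials) - 1] - n_opportunities / 2

for n in [1, 4, 16, 64, 256]:
    print(n, buffer_needed(n))  # the buffer grows roughly like sqrt(n), not n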

That means we should expect people to specialize in either having lots of slack, or no slack at all. For instance, we should expect people to either have carefully-planned tightly-packed calendars, or mostly-open calendars with lots of flexibility. We should expect people to either budget every dollar carefully, or have a large surplus and mostly not worry about their budget. Etc. One type takes advantage of lots of "noisy" opportunities, while the other makes their schedule/budget/etc maximally predictable. For a low-slack person to take advantage of just one noisy opportunity would require them to free up a bunch of extra room in their schedule/budget "just in case". The high-slack person already has a bunch of "just in case" built in, and can "re-use" that elbow room for one more thing, since it's highly unlikely that all the "risky" outcomes will happen all at once.

To the extent that this actually holds in the real world, we can think of slack (of a particular flavor) as binary: am I a high-time-slack person or a low-time-slack person? Am I a high-emotional-slack person or a low-emotional-slack person? That means the incentives don’t need to be perfect - as long as a group can roughly select for high-slack members, or roughly reward high slack, that should be enough to incentivize the high-slack equilibrium rather than the low-slack equilibrium, and create lots of positive externalities for the group.



Discuss

Gaming Incentives

July 29, 2021 - 16:51
Published on July 29, 2021 1:51 PM GMT

I played chess for a while earlier this year.  I never played or watched any sports when I was younger, so sport forums were weird to me.  One thing that kept coming up led to a long meandering stream of thought I tried in vain to edit together into a coherent narrative.  It’s probably easy to guess what the most frequent and hotly debated question would be: who’s the greatest to ever play the sport?

My standard response to this kind of question is to say it’s a question of semantics, or specific definitions.  But this is a place where we reward players for greatness anyway, what with World Champions and player rankings, so there's something being incentivized.

I don't think many try to think about what they mean when they talk about the quality of a player - probably because it's more fun to not.  But I'm under no such obligation.  For most people, what it means is a varying combination of three factors - natural talent, dedicated effort, and actual skill level.

Depending on your sensibilities, it may seem strange at first to claim that people try to use these to defend their favourite choice.  Going back to chess, there are many names that come up in these arguments, but the three most prominent serve as a good microcosm: Bobby Fischer, eleventh World Chess Champion, who single-handedly ended Soviet chess supremacy and had an unparalleled dominance over his opponents at the time; Garry Kasparov, thirteenth World Chess Champion, who was the highest rated player in the world for over 21 years and had an unparalleled dominance over his opponents in his time; and Magnus Carlsen, sixteenth and current World Chess Champion, the highest rated player in history, and who doesn't have unparalleled dominance over his opponents, but for defensible reasons.

To me at first glance, it seemed obvious that Magnus fits the title of best player the most - Elo ratings being a metric of relative playing level, and the quality of the average GM increasing over time. Proponents of Fischer or Kasparov mainly sound the argument that Magnus reigns in an era with advanced chess AI that any player can use to train and prepare with.  That's true; professional chess players make heavy use of engines to create and test their strategies.  This is also partly why Magnus doesn't dominate this age like his predecessors - the improvement of engines and other freely available advantages reduce the extent to which the game hinges on natural talent.  Additionally, at this point in the game, I think we're reaching diminishing returns on how well a human can actually play chess.

But why does this matter more than any other externality about the player that gives them an advantage?  Magnus also makes more use of strong physical training regimens to better his stamina during games, but that isn't used as an argument against his relative skill to the other two (more so Fischer than Kasparov, the latter of whom did employ physical training).  Maybe dissenters realize it's not as defensible an argument; someone could counter that the others could have done the same in their time.  But could they have?  It isn't that Fischer didn't care enough to try a physical training regime - you don't reach the level of World Champion without inordinate levels of dedication to your craft - but more likely that cumulative game knowledge hadn't reached the point where stuff like this was obvious to the top players as something to even consider.

You could try to remove all externalities, to compare the raw, untempered talent of a player against all others.  It's still a difficult question then, but you'd have a stronger argument for Bobby (or even older names like Capablanca).  Or you could remove all the factors a player seemingly doesn't have control over, and focus solely on their dedication to getting stronger.  It's a more difficult question to answer then, but not more difficult to think about - it's just that we don't have a good and reliable way of measuring which top players spend the most time or care the hardest about beating their opponents.  The argument would lie dead there.

Or maybe it's about simply entertaining the viewers.  In the current state of competitive chess, formats often incentivize the best players to play for safe draws to preserve rankings.  This usually leads to a lot of criticism and calls from even the players themselves to change the format, on grounds that it's not entertaining enough.  But then why even have human players?  AlphaZero and Stockfish dwarf the best players in skill and unorthodox moves.  Maybe what people really want is the thrill of watching dominance, but then that raises the further question, why chess?

It's possible none of this matters, because it's only relevant to a question no one really wants answered.  But there are other questions it's relevant to.  For example, Magnus famously decried the current system for choosing a new World Champion, a round-robin between the candidates to decide who gets to duke it out one-on-one against the world champion.  In his opinion, it unfairly privileges the current champion - most other sports don't send the previous winner all the way to the final by default.  As alternatives, he suggests anything from annual knock-out events to moving away from the classical time format.

Is a round-robin a fair evaluation at all?  The 2018 Candidates tournament was held over the course of 18 days.  Sure, you'd have to be one of the best players in the world to win regardless, but in many years the field is pretty even, and it comes down to which player is in the best condition that month, who has the strongest stamina-conditional skill, and who has the most coverage in their play.

You could make a good argument as to why these are necessary criteria for the world champion, but I don't think you could as easily claim they're the most important ones.

The criteria for eligibility for this Candidates tournament are arbitrarily handpicked by FIDE, as the winners of specific high-level tournaments, carryovers, even wild cards by the top sponsor - a practice many top players dislike, including the 2020-21 wild card.  I've always been fascinated by the way so many systems today make use of very qualitative guesses by officials in an attempt to formalize (school syllabi, for example, or, informally-but-used-formally-often, the h-index metric in academia), often with just enough degrees of correlation to maintain deniability.  It feels like we should have better ways to do this.

Some super-GMs think that the title of World Champion is being taken too seriously; that owing to the format, some day someone clearly not the strongest player will win, and all the mythos around it will disappear.  

The contender for the 2021 World Championship, Ian Nepomniachtchi, was described by Magnus as someone who has very high highs, and very low lows, so does that make him a better player than someone whose best is slightly below Ian's, but is consistent at that level?  Ian even has a lifetime positive classical score against the World Champion (though some of those games were played when they were younger).

Magnus' dominance can be attributed to a variety of factors in his actual style, from playing multiple openings to reduce the effect of computer preparation (which is arguably more about lowering his opponents' skill than raising his own, from a purely chess-theoretic standpoint), to his ability to grind out endgames while his opponents wilt in stamina (again an idea that wins, but lowers the level of chess play), to just playing the kinds of moves that nettle his opponents and make them blunder.

One of his rivals, Anish Giri, on the other hand, is a player with a very different approach - one that some say makes him harder to defeat than Magnus himself.  Yet he lacks the ability to convert positions into victories, giving him an infamous reputation for drawing games.

Of all these varying styles, I'd find it weird if we hand-picked one of them to say, "This is the best."  But we don't think about it, we just let the natural incentives at play relieve us of all responsibility.  I think people unconsciously give too much leeway to the null action, to not doing anything.  It’s when you start doing something that criticism comes in full force, because you’re accepting the mantle of responsibility.  (Anecdotal example: when I first heard of Effective Altruism, I was more disparaging toward people who gave to random charities than toward people who didn’t donate to charity.  Not that being disparaging at all is good, but that was a long time ago, sue me).

Given the balancing act organizations have to maintain with placating players, maintaining optics, and still having a decent following at the end of it all, I don't think I could do better than FIDE.  Maybe I could, but the counterweight of having to actually deal with the blame inherent to not taking the null action is heavy.

But what if you're not FIDE or FIFA or the ICC?  Sports is too entrenched in our society for its problems to remain internal.  

There is, for example, the controversial issue of trans people in sports, specifically the supposed advantage transgender women have in women's sports because of their higher testosterone levels and muscle-to-fat ratio.  As far as I've seen, studies conflict on whether this argument actually holds merit, so I'm blocking out that discussion and focusing on whether the question matters at all.

People are born with different genetic makeup, predisposing them to different levels of testosterone or strength or flexibility or stamina or relative effort to maintain equal performance.  We don't call for segregating sports based on how much a person lucked out genetically, probably because no one would watch FIFA: Category 2, or because many ignore or don't realize the extent to which genetics plays a factor.

So if what you want to test is natural talent, you'd try to regulate sportspeople's training regimens and nutrition to level the playing field in that regard, or something better.  If you cared about dedication, you'd segregate sports on natural talent (which would need an increasing number of segregations as you reach diminishing returns on increasingly effective training, since genetics isn't comfortably discrete).  If you didn't care about any of this and only wanted to see good quality of play, the best AI in your sport is over 700 Elo ahead of the best current human, and you’re left to wonder whether you need the humans at all (this doesn’t apply well to physical sports, but I think it suggests that the enjoyment of a sport is largely independent of its nature).

It may be that there's no good answer, or that competitive sports is too riddled with these kinds of epistemic problems for it to last in the long run.  Maybe like I mentioned earlier, the core incentive is simply the joy of watching your favourite player dominate.  That’s my best guess, at least.  In that case though, sports would be far better off if we actually optimized for that while setting formats, or even creating new games.



Discuss

Did they or didn't they learn tool use?

July 29, 2021 - 16:26
Published on July 29, 2021 1:26 PM GMT

DeepMind claims that their recent agents learned tool use; in particular, they learned to build ramps to access inaccessible areas even though none of their training environments contained any inaccessible areas!

This is the result that impresses me most. However, I'm uncertain whether it's really true, or just cherry-picked.

Their results showreel depicts multiple instances of the AIs accessing inaccessible areas via ramps and other clever methods. However, on page 51 of the actual paper, they list all the hand-authored tasks they used to test the agents on, and mention that for some of them the agents did not get >0 reward. One of the tasks that the agents got 0 reward on is:

Tool Use Climb 1: The agent must use the objects to reach a higher floor with the target object.

So... what gives? They have video of the agents using objects to reach a higher floor containing the target object, multiple separate times. But then the chart says they failed to achieve reward >0.

EDIT: Maybe the paper just has a typo? The blog post contains this image, which appears to show non-zero reward for the "tool use" task, zero-shot:


 



Discuss

How to bayesian update on multiple covid tests?

July 29, 2021 - 16:22
Published on July 29, 2021 1:15 PM GMT

I'm curious how I should think about the risk of hanging out with a friend (who is 2x Pfizer vaccinated).  It seems like a good opportunity for bayesian thinking in the real world, but I'm really unclear how to think about it.

Info: he tested positive on 4 UK lateral flow tests (LFTs), all from the same box (on 2 different days). After this, his roommates took two tests from the same box & both were negative.
He has subsequently taken 3 PCR tests plus an LFT each day, all of which have been negative.

However, false positives seem to be very rare even for LFTs. They're ~1/1000 (specificity numbers range from .9968 in the original studies to .9997 more recently).
https://www.gov.uk/government/news/new-analysis-of-lateral-flow-tests-shows-specificity-of-at-least-999 
https://www.bmj.com/content/372/bmj.n706

But false negatives seem common for everything, including PCRs. It seems there's a 20-67% false-negative rate (20% being the best it ever gets, on day 8 of infection).
https://www.acpjournals.org/doi/10.7326/m20-1495
https://pubmed.ncbi.nlm.nih.gov/33301459/

Given this, what are the chances he had covid (from a maybe 10-20x lower-than-average-risk prior, but in Oxford/England)?
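
For what it's worth, here's a minimal sketch of the odds-form update (my own, with placeholder sensitivity/specificity numbers, and naively assuming the tests are independent, which they clearly aren't for LFTs from the same box):

def posterior(prior, results):
    """results: list of (tested_positive, sensitivity, specificity) triples."""
    odds = prior / (1 - prior)
    for positive, sens, spec in results:
        # Likelihood ratio: P(this result | covid) / P(this result | no covid)
        lr = sens / (1 - spec) if positive else (1 - sens) / spec
        odds *= lr
    return odds / (1 + odds)

lfts = [(True, 0.6, 0.999)] * 4 + [(False, 0.6, 0.999)] * 2  # placeholder numbers
pcrs = [(False, 0.8, 0.999)] * 3
print(posterior(0.001, lfts + pcrs))

Under naive independence, the four positives dominate everything else, which is exactly why the correlation between tests from the same box matters so much here.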



Discuss

How much compute was used to train DeepMind's generally capable agents?

July 29, 2021 - 14:34
Published on July 29, 2021 11:34 AM GMT

I'm talking about these agents.

I'd love an answer either in operations (MIPS, FLOPS, whatever) or in dollars.

Follow-up question: How many parameters did their agents have?

I just read the paper (incl. appendix) but didn't see them list the answer anywhere. I suspect I could figure it out from information in the paper, e.g. by adding up how many neurons are in their LSTMs, their various other bits, etc. and then multiplying by how long they said they trained for, but I lack the ML knowledge to do this correctly.

Some tidbits from the paper:

For multi-agent analysis we took the final generation of the agent (generation 5) and created equally spaced checkpoints (copies of the neural network parameters) every 10 billion steps, creating a collection of 13 checkpoints.

This suggests 120 billion steps of training for the final agents. But elsewhere in the post they state each agent in the final generation experienced 200 billion training steps, so.... huh?

Anyhow. Another tidbit:

In addition to the agent exhibiting zero-shot capabilities across a wide evaluation space, we show that finetuning on a new task for just 100 million steps (around 30 minutes of compute in our setup) can lead to drastic increases in performance relative to zero-shot, and relative to training from scratch which often fails completely.

So, if 100 million steps takes 30 minutes in their setup, and they did 200 billion steps for the final generation, that means the final generation took 2,000 x 30 minutes ≈ 42 days. Makes sense. So the whole project probably took something like 100 - 200 days, depending on whether generations 1 - 4 were quicker.
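
Sanity-checking that arithmetic (a quick sketch using only the numbers quoted above):

steps_final_gen = 200e9    # 200 billion training steps (from the paper)
steps_per_30min = 100e6    # 100 million steps take ~30 minutes (from the paper)
minutes = steps_final_gen / steps_per_30min * 30
print(minutes / 60 / 24)   # ~41.7 days of wall-clock training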

How much does that cost though??? In dollars or FLOPs? I have no idea.

EDIT: It says each agent was trained on 8 TPUv3's. But how many agents were there? I can't find anything about the population size. Maybe I'm not looking hard enough.



Discuss

Continuous Minority Elimination

July 29, 2021 - 12:29
Published on July 29, 2021 9:29 AM GMT

First they came for the Communists
And I did not speak out
Because I was not a Communist

Then they came for the Socialists
And I did not speak out
Because I was not a Socialist

Then they came for the trade unionists
And I did not speak out
Because I was not a trade unionist

Then they came for the Jews
And I did not speak out
Because I was not a Jew

Then they came for me
And there was no one left
To speak out for me

 

There has long been a poem about Continuous Minority Elimination, and its history still resounds, but this is not enough. It should be named, and someone should describe its essential pattern:

A cruel logic advises the majority to disenfranchise, exile or kill some minority that it finds burdensome. The majority knows that they can easily dominate this minority, as they are many. Their mistake, the reason this should be treated as a disease even by those who sit in the majority, is that this logic will be applied again to the next minority, continuously, by induction: eventually this cruel logic will be extended to us.

It isn't obvious to me that continuous minority elimination is a failure of rationality. A causal decision theorist might reason, "The purges will continue with or without me. It is external to me. My complicity now in this purge does not cause the purges to continue and to expand their scope until my own family is taken by them. I should accept this war of all against all, and strive against the odds to stand as the lone victor" (Perhaps I should try to be less judgemental towards CDT agents. I barely understand them any more.)

I am a logical decision theorist. For me the logic here is much easier to dismiss. I see that the rationale of the purges would abstract beyond their present target. I see that there are ways in which I am a minority, and so one day the logic will apply to me. I decide not to implement the logic, knowing that many other LDT-like agents will stand firmly beside me. Not every person will move with us, but enough will, that the additional safety we gain by denying the cruel logic from our domains shall be worth the blood that we are forgoing.

Culture, writing and enforcing norms, is closer in character to LDT than CDT (though it is perhaps implementable on top of a medium of CDT agents woven into contracts). Cultures can coordinate to prevent behaviors that are individually rational but harmful when reflected throughout society. Culture did see that few of its hosts would survive Continuous Minority Elimination, and it instated a norm against it.



Discuss

Do we have a term for the issue with quantifying policy effect Scott Alexander stumbled on multiple times?

July 29, 2021 - 11:45
Published on July 29, 2021 8:45 AM GMT

In Things I learned Writing The Lockdown Post, Scott Alexander describes a really tricky issue when trying to quantify the effects of some policies:

This question was too multi-dimensional. As in, you could calculate everything right according to some model, and then someone else could say "but actually none of that matters, the real issue is X", and you would have a hard time proving it wasn't.

A long time ago, I remember being asked whether banning marijuana was good or bad. I spent a long time figuring out the side effects of marijuana, how addictive it was, how many people got pain relief from it, how many people were harmed by the War on Drugs, etc - and it turned out all of this was completely overwhelmed by the effects of deaths from intoxicated driving. If even a few people drove on marijuana and crashed and killed people, that erased all its gains; if even a few people used marijuana instead of alcohol and then drove while less intoxicated than they would have been otherwise, that erased all its losses. This was - "annoying" is exactly the right word - because what I (and everyone else) wanted was a story about how dangerous and addictive marijuana was vs. how many people were helped by its medical benefits, and none of that turned out to matter at all compared to some question about stoned driving vs. substituting-for-drunk-driving, which nobody started out caring about.

It might actually be even worse than that, because there was some hard-to-quantify chance that marijuana decreased IQ, and you could make an argument that if there was a 5% chance it decreased IQ by let's say 2 points across the 50% of the population who smoked pot the most, and you took studies about IQ vs. job success, criminality, etc, really seriously, then lowering the national IQ 1 point might have been more important than anything else. But this would be super-annoying, because the studies showing that it decreased IQ were weak (and you would have to rely on a sort of Pascal-type reasoning) and people reading something on the costs/benefits of marijuana definitely don't want to read something mildly politically incorrect trying to convince them that IQ is super important. And if there are twenty things like this, then all the actually interesting stuff people care about is less important than figuring out which of the twenty 5%-chance-it-matters things actually matters, and it's really tempting to just write it off or put it in a "Future Research Needed" section, but that could be the difference between your analysis being right vs. completely wrong and harmful.

The same was true here. How do we quantify the effect of Long COVID? Who knows? Given the giant pile of bodies, maybe we just round COVID off to the number of deaths it causes, and ignore this mysterious syndrome where we've only barely begun the work of proving it exists? But under certain assumptions, the total suffering caused by Long COVID is worse than the suffering caused by the acute disease, including all the deaths!

There is more, but this covers the phenomenon I’m curious about. Let me try to describe the problem in general terms:

Important policies have so many effects that it is near impossible to keep track of them all. In addition, some effects tend to dwarf all others, so it is critical to catch every last one. (Perhaps they follow a Paretian distribution?) It follows that any quantitative analysis of policy effects tends to be seriously flawed.

Do we already have a term for this problem? It reminds me of moral cluelessness as well as known and unknown unknowns, but none of those seem to fit the bill exactly.



Discuss

An Apprentice Experiment in Python Programming, Part 2

July 29, 2021 - 10:39
Published on July 29, 2021 7:39 AM GMT

[Epistemic status: me trying to recall what happened in a 4.5-hour pair-programming session, during which I was focused on solving problems. This post was reconstructed from scratch code files, written messages (the session was over voice call) and my python shell history.]

Previously: https://www.lesswrong.com/posts/kv3RG7Ax8sgn2eog7/an-apprentice-experiment-in-python-programming

Three days ago gilch and I had another session on Python Programming, where we continued talking about decorators.

Lambda decorator

So far, all of the decorators we've seen have been functions that take in another function as a parameter and are used with the syntax @<decorator_name>. It turns out we can use a lambda function as a decorator too:

@lambda f: print("hi") def add(a, b): return a + b

When this file gets run in Python 3.9 interactive mode, the output is

hi

and when I call the add function:

>>> add(1, 2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object is not callable

Gilch asked me to write out the desugared version of this code. I wrote add = print("hi"), which was not correct. Gilch gave me the correct desugared code:

def add(a, b):
    return a + b
add = (lambda f: print("hi"))(add)

(The lambda ignores f and returns print's return value, None, so add gets rebound to None; hence the TypeError above.)

Partial

We did a brief introduction to partial. We had code from last time:

def subtract(a, b, c):
    return a - b - c

Accessing doc from within python shell:

>>> from functools import partial
>>> help(partial)

Using partial to create a function with the first and second arguments passed in:

>>> partial(subtract, 1, 2)
functools.partial(<function subtract at 0x7f34649af3a0>, 1, 2)
>>> partial_sub = partial(subtract, 1, 2)

Gilch pointed out that the line above, partial_sub = partial(subtract, 1, 2), could've been written as partial_sub = _ because in the python shell, _ holds the most recent expression value that wasn't None.
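
For instance (my own quick illustration, not from the session):

>>> 40 + 2
42
>>> _
42
>>> print("hi")   # returns None, so _ is unchanged
hi
>>> _
42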

Testing the partial function:

>>> partial_sub(3)
-4

1 - 2 - 3 equals -4, so we're getting what we expected.

Gilch then pointed out that arguments can be passed into partial as keyword arguments:

>>> partial(subtract, b=0, c=2)
functools.partial(<function subtract at 0x7f34649af3a0>, b=0, c=2)
>>> _(10)
8

Finally, there's a way to implement partial functions using lambda:

>>> lambda x: subtract(x, 0, 2)
<function <lambda> at 0x7f346474ae50>
>>> _(10)
8

Running Bash in Python

While running help(partial) in the python shell, the topic of using less came up. I asked gilch if it was possible to pipe outputs from the python shell to bash tools; gilch answered that we could not do it directly, but just as bash can run python, python can also run bash:

>>> import os
>>> os.system("bash")

Now we're in a bash shell, run by python:

~/code_stuff/python_scratch $ ls | less
~/code_stuff/python_scratch $ exit
exit
0
>>>

At the last line, we're back in python shell again.

Lambda functions

Gilch gave me a number of lambda expressions in increasing complexity, asked me to explain what they did, then verify:

  • (lambda x: x+x)(3) evaluates to 6, this one was straightforward
  • (lambda x: lambda y: x + y)(2)(3) I assumed that this statement expanded inside out (i.e. 2 was passed in as y and 3 was passed in as x) so I concluded that this would evaluate to 5. I got the correct answer, but my mistake was exposed by the next example:
  • (lambda x: lambda y: x + y)('a')('b') Using the same logic, I concluded that the result would be 'ba', but the actual result was 'ab'. Why did I make this mistake? Gilch gave me the next question:
  • (lambda x: lambda y: x + y)('a') evaluates to a lambda. A lambda of x or y? The first part of the expression, (lambda x: lambda y: x + y), is a lambda function that takes in a parameter x, and returns a lambda function lambda y: x + y. This lambda function that takes in x is then called with argument 'a'. The return value is lambda y: 'a' + y. And then it made sense why the previous question evaluated to 'ab' instead of 'ba'.
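
To spell out the closure logic in that last item, here's the same currying pattern written as named functions (my own sketch, not from the session):

def outer(x):
    def inner(y):
        return x + y      # x is captured from the enclosing call
    return inner

add_a = outer('a')        # like (lambda x: lambda y: x + y)('a')
print(add_a('b'))         # 'ab': x was bound to 'a' first, y to 'b' second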

I think what gilch was getting at here was the syntax of function calls other than the format of f(x). Gilch also at some point pointed out the difference between a callable and a function.

Preprint

Gilch gave me the next challenge: write a decorator preprint such that it can be used in the following way:

@preprint("hi") def bob(): print("Bob") @preprint("hello") def alice(): print("Alice") >>> bob() hi Bob >>> alice() hello Alice

I struggled here. I attempted to write out some vague idea in my head:

def greet_by_name(salutation, name):
    print(salutation)
    print(name)

def bob(salutation):
    from functools import partial
    return partial(greet_by_name, name="Bob")

Gilch reminded me that the use case was already specified; the functions alice and bob were not to be modified. They then suggested that I write out the desugared version of the code. I struggled and gave an incorrect answer again until gilch provided me with the correct desugaring:

@preprint("hi") def bob(): print("Bob")

is equivalent to bob = (preprint("hi"))(bob).

Understanding decorator de-sugaring

As you can see, at this point, I was struggling with desugaring. So gilch presented me with a formula:

@<decorator expression>
def <name>(<args>):
    <body>

is equivalent to

def <name>(<args>):
    <body>
<name> = (<decorator expression>)(<name>)

Preprint

My memory of what exactly happened here is spotty, perhaps because I was focused on finding out how to write the preprint decorator. Eventually I came up with something like this:

def preprint(greetings):
    from functools import partial
    def greet_by_name(greeting, foo):
        def wrapper():
            print(greeting)
            foo()
        return wrapper
    return partial(greet_by_name, greetings)

This worked, but you may be able to see from the way I passed in greetings into greet_by_name, I was confused about why there needed to be 3 layers of definitions. Gilch gave me an alternative solution, and explained the pattern:

from functools import wraps  # needed for @wraps below

def preprint(greetings):              # factory
    def greet_by_name(function):      # decorator
        @wraps(function)
        def wrapper():                # replaces the function being decorated
            print(greetings)
            return function()
        return wrapper
    return greet_by_name

Gilch explained that decorators that take in parameters are like a factory that produces different decorators based on input. Thus the first layer of definition processes the input, the second layer of definition is the decorator itself, and the innermost wrapper function is what replaces the function that ends up being decorated. This

@preprint("hi") def bob(): print("Bob") @preprint("hello") def alice(): print("Alice")

is equivalent to

greet_by_name_hi = preprint("hi")
greet_by_name_hello = preprint("hello")

@greet_by_name_hi
def bob():
    print("Bob")

@greet_by_name_hello
def alice():
    print("Alice")

At this point, I was not quite getting the 3-layer structure--I identified that there was a pattern of factory-decorator-function but I was mostly mimicking the pattern. This lack of understanding would be exposed in a later example.

map

Gilch asked me if I knew what map in python did. I said map took in a function and an array and returned the result of the function applied to each element of the array. Gilch responded that it was mostly correct, with the caveat that map takes in an iterable instead of an array.
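
For instance (my own quick illustration):

>>> list(map(str.upper, 'abc'))   # a string is an iterable of characters
['A', 'B', 'C']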

Gilch gave me a test case for which I needed to implement the decorator:

@<something>
def double(x):
    return x+x

assert list(double) == ['aa','bb','cc','dd']

I came up with

@lambda f: map(f, 'abcd')
def double(x):
    return x+x

When we run assert list(double) == ['aa','bb','cc','dd'], python did not complain. However when we ran list(double) again, we saw an empty array:

>>> list(double)
[]
>>> double
<map object at 0x7f3ad2d17940>
>>> list(double)
[]

Gilch explained that double was not an array; it was an iterator. Once we had iterated through double in the assert statement, it was exhausted, so later iterations yielded nothing.
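
The workaround (a sketch of my own): materialize the map into a list if you need to iterate over it more than once:

>>> double_list = list(map(lambda x: x + x, 'abcd'))
>>> double_list
['aa', 'bb', 'cc', 'dd']
>>> double_list
['aa', 'bb', 'cc', 'dd']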

Callable

Gilch illustrated the difference between a function and a callable:

def my_if(b, then, else_):
    return then if b else else_

in which then if b else else_ is a conditional expression that evaluates to one of the arguments - which may itself be a callable. Two use cases:

>>> my_if(1<2, print('yes'), print('no'))
yes
no
>>> my_if(1<2, lambda:print('yes'), lambda:print('no'))()
yes
>>> my_if(1>2, lambda:print('yes'), lambda:print('no'))()
no

In the first case, print("yes") and print("no") are not callables: they get run while the arguments are being evaluated, and each evaluates to None. The lambda:print('yes') and lambda:print('no') in the second case are callables.

Similarly, when we put the print statements in functions, the functions are callables:

def print_yes():
    print("yes")

def print_no():
    print("no")

>>> my_if(1<2, print_yes, print_no)
<function print_yes at 0x7fa36ffbb280>
>>> my_if(1<2, print_yes, print_no)()
yes

Branching, with 1 branch

Gilch illustrated how we could use a decorator for branching:

def when(condition):
    def decorator(function):
        if condition:
            return function()
    return decorator

>>> @when(1<2)
... def result():
...     print("Yes.")
...     return 42
...
Yes.
>>> assert result == 42
>>> @when(1>2)
... def result():
...     print("Yes.")
...     return 42
...
>>> assert result == None

Branching, with 2 branches

Gilch asked me how I would implement branching such that if a given condition is true, one branch gets evaluated; if the condition is false, another branch gets evaluated. This would essentially mean we decorate two functions. I came up with this solution:

def if_(condition):
    def decorator(function):
        if condition:
            return function()
    return decorator

def else_(condition):
    def decorator(function):
        if not condition:
            return function()
    return decorator

condition = 1<2

@if_(condition)
def result():
    print("Yes.")
    return 42

@else_(condition)
def print_no():
    print("no!")

The problem with this solution was too much code repetition. Also, I used 2 decorators instead of 1. Gilch said one way to do this was to decorate a class instead of a function.

Decorating a Class

Gilch gave me the test case, and asked me to write the decorator:

@if_(1>2)
class Result:
    def then():
        print("Yes.")
        return 42
    def else_():
        print("No.")
        return -1

Eventually I came up with the decorator:

def if_(condition):
    def decorator(c):
        if condition:
            return c.then()
        else:
            return c.else_()
    return decorator

Decorating a Class, Example 2

Gilch gave me another test case for which I needed to write the decorator:

@sign(x)
class Result:
    def positive():
        print('pos')
    def zero():
        print('zero')
    def negative():
        print('neg')

I modified the solution from the previous question to answer this one:

def sign(x):
    def decorator(c):
        if x > 0:
            return c.positive()
        elif x == 0:
            return c.zero()
        else:
            return c.negative()
    return decorator

This was basically the same example as the previous one in the sense that I didn't need to change the structure of the decorator, only the specific functions within the innermost layer.
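One subtlety I only appreciated later (my own check, not part of the dialogue): after the decorator runs, the name Result is bound to whatever the chosen branch returns, not to a class.

x = -3

@sign(x)
class Result:
    def positive():
        print('pos')
    def zero():
        print('zero')
    def negative():
        print('neg')

assert Result is None  # prints 'neg'; Result is negative()'s return value, None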

Branching without if

I asked gilch if there were ways to write an if statement without using if. Gilch gave me some examples:

  • return [c.then, c.else_][not condition]()
  • return {True:c.then,False:c.else_}[bool(condition)]()
A More Difficult Example

Gilch gave me another test case:

@if_(1<2)
def then():
    print('yes')
    return 42

@then
def result():
    print('no')
    return -1

assert result == 42

And when the first line is changed to @if_(1>2), assert result == -1 should pass.

I figured out how to make the first branch work but not the second. At this point, it was getting quite late in my timezone and my brain was slowing down after hours of coding, so I told gilch I'd work on it the next day.

When I picked this problem up the next day, all of the wobbly parts of my understanding came back to bite me. I was running into a bunch of error messages like TypeError: 'NoneType' object is not callable and TypeError: <function> missing 1 required positional argument: 'greetings' (or TypeError: <function> takes 1 positional argument but 2 were given), and since the error messages would point at the line where result was defined instead of the line within the decorator where the problem was, I struggled a lot to debug my code. I poked around the internet to look at other examples, attempting to understand how to expand decorators in my head, and eventually came across this slide from a PyCon talk that made things a bit clearer to me.

Eventually I came up with a working solution that I sent gilch:

def if_(condition):
    def decorator(function):
        if condition:
            def wrapper(*x):
                return function()
            return wrapper
        else:
            def decorator2(function):
                return function()
            return decorator2
    return decorator

Gilch noticed my confusion: "The way you named things indicates a lack of understanding. Naming should have been more like this."

def if_(condition):
    def decorator(function):
        if condition:
            def decorator2(f):
                return function()
            return decorator2
        else:
            def decorator2(f):
                return f()
            return decorator2
    return decorator

Alternatively, gilch provided two more solutions:

def if_(condition):
    def decorator(function):
        if condition:
            return lambda _: function()
        else:
            return lambda f: f()
    return decorator

def if_(condition):
    def decorator(then_func):
        def decorator2(else_func):
            return (then_func if condition else else_func)()
        return decorator2
    return decorator

"They're both a decorator you apply to the second definition, regardless of which branch is taken. So calling the first one a wrapper is the wrong way to think of it," explained gilch.

Observations
  1. The concept of a decorator being syntactic sugar--calling the decorator on a function, then reassigning the result to the name that used to refer to the function--somehow didn't stick for me. When gilch asked me to write the desugared version of functions, I kept thinking about further expanding the functions.
  2. There are a few things that I was not used to thinking about: functions of functions, functions that use variables that are in scope but don't get passed in as parameters, and functions that return callables. All of these made decorators--especially parameterized decorators--"hard to fit into my head at once," to use gilch's remark.
  3. Gilch's teaching was primarily puzzle-based, which is quite different from the lecture-based learning I have done. This is pretty cool, because for most of my life, solving problems has happened in the context of some evaluation. Stress sucked the fun out of puzzles for me when I had trouble decoupling my self-worth from the outcomes of those evaluations. Now that I'm solving puzzles for no reason beyond learning, I've realized this is also an opportunity to unlearn.


Discuss

Evidence that adds up

29 July, 2021 - 06:27
Published on July 29, 2021 3:27 AM GMT



(Is there already a name and/or clearer analysis for this?)

Some kinds of evidence add up more than others.

E.g. say someone gives me a recommendation like "do X". (Taking this as evidence of "doing X will get you what you want", rather than as a command.) Then someone also recommends "do X", and someone else, and so on. I'll only get somewhat convinced to do X. Maybe lots of people are telling me to do X because X is good for me to do, but maybe there's some other explanation, like an information cascade, or they're just signaling, or it's a "belief" computed by deference not truth, or I give off falsely strong superficial cues of really needing to X. (And, decision theoretically, how I decide to process the recommendation might be best thought of as affecting what's already the case about why I'm given the recommendations.) These other hypotheses predict the same observation--people telling me to do X--so in this scenario they have probability bounded away (well away, given my priors) from 0. In particular, each repeated observation of the same sort of thing--each "do X"--imparts less and less information.

On the other hand, say someone tells me "do X; my friend did X and then Y happened", and then someone else tells me "...and then Z happened", and "do X; this study showed that W tends to happen", and so on. In this scenario, I eventually get much more convinced to do X than in the previous scenario (assuming that I want to do X when I'm confident of what would happen if I did X). There are fewer hypotheses that predict this sort of sequence of observations than hypotheses that predict just the less specific sequence "do X do X do X". We could call this "evidence that adds up".
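A toy Bayes calculation of the contrast (entirely my own illustration; the likelihoods are arbitrary):

# Two hypotheses: "X is good" vs. "it's a cascade/signaling/etc."
# A bare "do X" is predicted about equally well by both, so its
# likelihood ratio is ~1 and repeats barely move the odds; a varied,
# specific anecdote is much better predicted by the good-X hypothesis.
# (Even this overstates the bare case: repeats of the same observation
# are correlated, so their ratios shouldn't really multiply.)
lr_bare = 0.9 / 0.85       # ~1.06 per bare "do X"
lr_anecdote = 0.5 / 0.05   # 10 per distinct, specific anecdote

odds = 1.0  # 1:1 prior odds
print(odds * lr_bare ** 10)      # ~1.8:1 after ten bare recommendations
print(odds * lr_anecdote ** 3)   # 1000:1 after three varied anecdotes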

This is different from giving abstracted reasons / justifications. "Do X because R" doesn't add up as well as an anecdote: it punts to the proposition R. If everyone gives R as a reason, that only adds up so much. (Of course, giving reasons is useful in other ways than adding up; maybe R is more easily verified, maybe I believe but don't actually care about R and so can ignore the recommendation.)

To be more precise, we would say, "evidence that doesn't add up about X", since there might be some other proposition about which we keep gaining much information from repeated observations of people saying "do X". Maybe this is already implicit in the word "evidence", rather than "observation" or "input", since evidence should be evidence of something. Giving reasons does help with adding up, if people give opposing opinions. Hearing just "noodles" "don't noodles" "noodles" "don't noodles" is sort of a wash on the question of whether to noodles. But "do X; then Y will happen and Y is good" and "don't do X; then Z will happen and Z is bad" isn't a wash; it lets evidence add up in the other dimensions of belief about Y and Z.



Discuss

Risk Premiums vs Prediction Markets

29 July, 2021 - 02:03
Published on July 28, 2021 11:03 PM GMT

aka Why (some) market forecasts are biased, and what we can do about them

tl;dr: When you care about an outcome, it skews the odds you'll bet at (you want to insure against bad outcomes). When everyone cares about an outcome, it skews the market odds.

People (including myself) often point at market forecasts as being the best predictors for some variable (inflation, the economy, whether England will win the Euros). 

However, market prices are more than just expectations. In the words of an anonymous twitter account:

Market price = Expectations + Risk Premium + Idiosyncratic Issues

This is a somewhat tautological framing, however (since we don't know what the risk premium or the idiosyncratic issues are other than the things which make this equality true). I'm going to try to explain what risk premium is, since people often focus too much on expectations and use idiosyncratic issues as an excuse (fees are too high on PredictIt, you can't trade large size, etc.). Risk premium is important because, unlike idiosyncratic issues which can be fixed with good market structure, risk premium will not go away.

This matters, because if we can't remove risk premiums and the idiosyncratic issues, then prediction markets are less accurate and less valuable for decision making.

To this end, I'm going to start with some definitions:

What are risk premiums? A risk premium is the excess return that results from the payout being correlated with the rest of the world.

What are excess returns? Excess returns are returns higher than 0 (or the risk-free rate depending on context) after accounting for the riskiness of the payout.

Some examples:

  • a 1-1 bet on a 50/50 coin flip doesn't have excess returns
  • a 2-1 bet on a 50/50 coin flip has excess returns
  • a 10% 1y bond on a risky company may or may not have excess returns, depending on the default rate of the company and the recovery rate in event of default
  • the stock market has excess returns (probably)

What is correlation with the rest of the world?

When an event happens, that says something about the world state we're in as well as the payout we get from the market. If the world state affects our utility in other ways than just the market, that is the correlation we're talking about.

Some examples:

  • If I bet on a coin flip, aside from the money I have bet on the event, it has no effect on my well being.
  • If I bet on England to win the Euros, then I have some utility associated with the outcome unrelated to my bet. I will be happier if England win regardless of whether or not I bet.
  • If I insure my car against crashes then I have some utility associated with the outcome unrelated to my bet. I will be very sad if I crash my car.
  • If the economy starts doing badly, financial assets will start suffering, I will be suffering and so will everyone else.
When does correlation lead to a risk premium?

(This is me writing out with examples what Cochrane explains more clearly here in Section 1.4. Particularly "Risk Corrections" and "Idiosyncratic Risk")

What matters is the size of the market, the ability to diversify and the participants involved.

Let's start with the coin flip. If I am the only person in the world making a market in a certain coin flip, I might make the market .48 / .52 for $1 on heads. (That is, if you want to buy heads, you need to pay 52c, and I will buy heads from you at 48c.) [This is because 0.5 * U(wealth - .48) + 0.5 * U(wealth + .52) > U(wealth) (for my current wealth and utility).] However, someone with much more money than me would look at my market and be willing to do the same trade for .49 / .51, because they are still going to earn some return. Someone with much more money than them will look at that market and offer .495 / .505, and eventually we end up with a very tight market around .50, the tightness effectively depending on how wealthy (and risk averse) the participants are.
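A minimal sketch of that escalation, assuming log utility (the wealth thresholds fall out of the inequality in brackets above; the code is illustrative, not from the post):

import math

def quote_is_worth_it(wealth, half_spread):
    # Expected log utility of one $1 fill at 0.5 +/- half_spread,
    # compared with not trading at all.
    eu = (0.5 * math.log(wealth - (0.5 - half_spread))
          + 0.5 * math.log(wealth + (0.5 + half_spread)))
    return eu > math.log(wealth)

for half_spread in (0.02, 0.01, 0.005):
    wealth = next(w for w in range(1, 1000) if quote_is_worth_it(w, half_spread))
    print(half_spread, wealth)  # -> 7, 13, 25: tighter markets need deeper pockets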

Let's now continue with the England example. Let's say we both agree the chances of England winning are 50%, my current worth is $10, I value England winning at $10 and England losing as -$8 and I have log utility.

My current utility is 0.5⋅U($10+$10) + 0.5⋅U($10−$8) = 0.5⋅log(20⋅2) = 0.5⋅log(40).
If we bet at 55/45 (me getting the raw end of the deal, paying 55c for a $1 payout if England lose) then my expected utility is 0.5⋅U($10+$10−$.55) + 0.5⋅U($10−$8+$.45) = 0.5⋅log(19.45⋅2.45) ≈ 0.5⋅log(47.7), which is greater than 0.5⋅log(40). Therefore the market (between us) could end up clearing at the (incorrect) price of 55%. (Actually, in this extreme example, I would be happy betting at up to 91%(!)). If everyone felt like me, the market might clear far from the probability everyone agrees on.
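A quick numeric check of those numbers (assuming log utility and the payoffs above; p is the price I pay for a $1 payout if England lose):

import math

def expected_utility(p):
    win = 10 + 10 - p         # England win: the hedge just cost me its price
    lose = 10 - 8 + (1 - p)   # England lose: the hedge pays out $1
    return 0.5 * math.log(win) + 0.5 * math.log(lose)

baseline = 0.5 * math.log(20 * 2)         # no bet: 0.5*log(40)
print(expected_utility(0.55) > baseline)  # True: insuring at 55% raises my utility
print(expected_utility(0.90) > baseline)  # True, but barely: break-even is near 91%
print(expected_utility(0.92) > baseline)  # False: past ~91% the hedge stops helping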

However this market has many participants and they don't all have the same (extreme!) preferences. Therefore any difference between the clearing price here and the "fair" clearing price slowly gets arbitraged away until we have a good prediction market.

What about the market for car insurance? Well, it's similar to the coin flip. I have some risk aversion associated with the cost of having to replace a car. On top of that I have the additional change in my utility which comes from having been in a car accident. So I am willing to pay more than my estimated probability. However, there is someone else on the other side of this. When I go to renew my insurance there are many companies offering me car insurance. They are in competition, each with a large float and indifferent to whether or not I crash my car. Their only interest is their expected return. This competition drives the price closer to the true probability. (More on this later...)

What about equities? Well, it's similar to the coin flip: uncertain money is worth less than certain money to me. It's also similar to the insurance example: the extra utility from more money in a good economy is smaller than the lost utility from less money in a bad economy. However, here we need to start thinking about the other side. Who is on the other side of the equity market? Well, there are certainly people with much deeper pockets than me, so "uncertain money" doesn't bother them as much, and the uncertainty value gets priced out. However, there are very few people out there who have pro-cyclical utility (people who value extra money in good times more than they mind losing money in bad times). So the market clearing price (as a whole) will be less than the "fair" price. (This is one "solution" to the Equity Premium Puzzle.)

What does this mean for prediction markets?

Well, for simple prediction markets (coin flips; sports events; weather(?)), there are sufficiently many people who are either indifferent enough or deep-pocketed enough that they will take on the risk for an ever decreasing edge. This means the market price should tend to the market participants' best estimate of the fair probability.

For complicated markets: insurance, equities, inflation-protected bonds, it's not quite so simple to infer a probability/forecast/expectations from market prices.

Let's start with insurance. We know that the probabilities which we infer from the insurance market are (broadly) an overestimate. (Insurance companies are profitable, after all!) This would be fine if we knew how much of an overestimate they were. "Take the market prices... subtract off the profit margin from the insurers... done". Unfortunately, the insurance market is slightly more dynamic than that. (See the time-varying loss ratios from this fascinating set of slides.)

Over time the amount of risk premium being demanded by the insurance companies waxes and wanes. Therefore we need to know the right adjustment to make for risk premium. (And arguably this is just as hard as predicting prices in the first place - no free lunch here).

What about equities? If we know that equities broadly go up, can't we just subtract off that positive rate of return to use them as indicators for the real economy? Again, no. The problem is that the risk premium is time varying and knowing what the current risk premium is, is about as hard as valuing equities in the first place. No free lunch here.

What about TIPS? TIPS are the classic example of using the financial markets as a forecasting tool. They are inflation-protected bonds issued by the US government. In theory, the difference between nominal bond yields and TIPS yields will be the market's expectation for inflation. Certainly the difference* will realise to inflation. The question is, will the market clear at that price? The empirical evidence seems to be "not quite" **.
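For concreteness, the implied forecast is just a difference of yields (the numbers below are hypothetical placeholders, not market data):

nominal_10y_yield = 0.015  # hypothetical 10y Treasury yield
tips_10y_yield = -0.001    # hypothetical 10y TIPS (real) yield
breakeven_inflation = nominal_10y_yield - tips_10y_yield
print(breakeven_inflation)  # 0.016: the "market-expected" inflation rate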

What about GDP-linked bonds? Well, I can't think of anything more correlated to the real economy, so I would expect the risk premium of these (for global economies) to be very large (relative to their volatility). This would compare to nominal bonds which have a negative risk premium. This means for the country issuing them, they would be much more expensive than traditional debt issuance.

Their use as a forecasting tool would also be weakened, since they would systematically under-forecast GDP growth.

That said - I am still a fan of the idea. I would rather implement them as futures contracts (pretty much what Sumner advocates for here).   

How bad can it get?

We can take some hope (or despair) from the Hansen-Jagannathan bound. This is usually used to say something about how volatile the state price density is. However, this volatility gives us an upper bound for the Sharpe ratio of any asset (including any asset we might use for prediction). If we are willing to assume*** that empirically the upper bound for the spd's volatility is 0.5, then we know that E[estimate]/σ(estimate) ≤ ~0.5. So there is at least hope that market prices will get us somewhere within an order of magnitude of the right answer.


* modulo some details regarding options

** there are a bunch of different factors driving this difference, some premium related some market inefficiency related, some to do with large flows in the market

*** this is a key assumption, but I'm quite willing to believe it, since no-one has found assets with much higher Sharpe than 0.5



Discuss

Delta Strain: Fact Dump and Some Policy Takeaways

28 July, 2021 - 06:38
Published on July 28, 2021 3:38 AM GMT

(Not a doctor; merely extrapolating lines of reasoning where they will go; all policy “recommendations” are entirely hypothetical and not actual recommendations; I rushed to finish this while it's still useful and relevant, so errors will exist and I apologize for the style)

Summary

As many of you know, the Delta strain of COVID is basically upon us. In the Bay, as of a few days ago, about .4% of people had it (based on .1% confirmed cases).

Currently it's doubling every week or so, but it's surprisingly difficult to tell if it will be a small blip or a huge wave. The main reason it's hard to tell is that it's entirely dependent on human behavior, which could change quickly if people become scared or if the government institutes rules again. The second reason it's hard to tell is the mixed evidence on the difficulty of reining it in—while different studies make this look more or less difficult, India and the UK both appear to have successfully done this without obviously draconian behavioral interventions. Berkeley might easily peak at 1% and drop back down, but for this post I'll be talking about policies for if we get a large wave.

We also don't know very much about vaccine efficacy against Delta. We know that the good vaccines reduce your risk by about 60-90% compared to without the vaccine, but the error bars are extremely large on all these studies. In part this is because it's so hard to disentangle effects from different strains, behavioral change in the vaccinated, social bubble effects, etc. Further, it doesn't answer the important question of whether this means ~80% of people are totally safe, or if it means that standing a few feet closer to people than you do now will cause the 5x difference that moots your protection. (This also plays into the above issue of predicting whether we will have a large increase or not.) 

That being said, we can still roughly estimate the risk from definitely having Delta. A healthy 30yo probably has about 4x (3x-10x) less risk than before due to vaccination, despite Delta causing higher mortality. The remaining risk almost entirely comes from Long COVID. In absolute terms this is ~4 expected days of life, plus 1/200th of your future productivity and vitality. This can be translated to microCOVIDs at roughly 1 hour of your life lost every 1k-5k uCOVIDs (see the back-of-envelope below). Risk of death goes up by 3x for every decade in age, but Long COVID probably only scales at ~~1.5x per decade, so for people over 60 mortality starts becoming more relevant. All these calculations are "creative" so please don't take them as definites.
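A back-of-envelope for that conversion (the waking-hours assumption is mine; the rest comes from the estimates above):

days_of_life_lost_per_covid = 4    # expected days lost per (vaccinated 30yo) case
waking_hours_per_day = 16
hours_lost = days_of_life_lost_per_covid * waking_hours_per_day  # ~64 hours
print(1_000_000 / hours_lost)  # ~15k uCOVIDs per lost hour from mortality alone
# Folding in the ~1/200 hit to productivity and vitality pushes the rate
# down toward the 1k-5k uCOVIDs-per-lost-hour range used in this post.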

By default it will probably take 3-12 months for this to resolve. I strongly believe we should not lock down again for this long—I think we need a return to some level of normalcy, plus the risks are much lower than before. I don't strongly advocate for a single policy in this document, since I don't know what other people's risk-tolerances are, but I do give a number of principles. 

Personally, I am going to prioritize protecting the vulnerable—people with immune disruptions or age. I am probably going to advocate for them to get third vaccine shots of a different brand, if possible and if the downsides aren't too down. I also am going to start running the numbers on what a variolation scheme might look like this time around. I also am hoping that tight-knit communities of responsible people can beat out trivial inconveniences and put the requisite effort into creating safe spaces for socializing with tolerable risk levels—this is very prosocial, both figuratively and literally.

Delta vs originals:
  • Clearly spreads faster and has immune escape
  • Getting big in many different countries, moving fast enough that simple behavior changes don’t seem like they’ll bring R back <1, but significant ones might
  • Maybe 2x the mortality
  • Heads up that we’ve added to the common symptom list sore throat and headache, and removed cough. Gastrointestinal issues also are increased in likelihood with Delta. (A source)

Main sources I used:

Supporting sources:

Parameter estimation:
  • Serial interval: 4 days median/avg, usually 80% within 2-6?
    • Probably shorter interval than initial wild-type COVID, though disputed: Jing et al shows medium evidence for, Pung et al shows weak evidence against—the difference comes from which dataset they compare against, where Pung uses Apr 2020 Singapore (which apparently had not a lot of lockdowns), and source 1 uses “the 2020 pandemic” (which probably had more lockdowns), and it’s unclear which we should realistically be comparing to since there is such a spectrum. But the higher viral load supports shorter serial interval
  • R: maybe 4?
    • People are saying 5-9, but I think that’s unadjusted for difference in serial interval; if the interval is 4 instead of 5.5 days, a reported R of 7 would turn into R of 7^(4 / 5.5) ≈ 4. On the other hand, maybe reports of R are based on actual tracking, in which case you wouldn’t need an adjustment. Could use more research
  • Viral loads are 1000x as high at time of testing for Delta compared to 19A/B variant, acc to Jing et al
  • Fraction delta as of 7/17: >83% of US cases, acc to CDC tracker
  • Mortality/morbidity/badness: 2.5x? 2x compared to Alpha strain (already 1.5x as bad as original becomes 3x as bad) according to Scotland study, 2x compared to original acc to Canada study which reports on mortality, ICU admission, and hospitalization, roughly x2 x3 x2, but basically within error bars of all being x2.2. Yale mentioned that other data have failed to substantiate this, but A) they don’t cite it and B) I don’t really trust numbers showing no difference here, because vaccinations will obviously drop mortality a ton making it hard to see an *increase*, plus the vastly increased viral load theoretically should correlate with higher mortality
  • Vaccine protection: All these studies are trash, but very roughly 80% efficacy from Moderna/Pfizer. (I’d put 90%CIs roughly within 30-80% all cases, 60-90% symptomatic, 80-95% hospitalization, 85-97% mortality.)
    • Healthline reports on some stuff: one study says Pfizer is 80% against infection, 90% against symptomatic, and 95% against hospitalization. If you extrapolate to 97% against death, this would be about 1/30 the expected 2% rate = .06% mortality once vaccinated. However, the actual study seems terrible, with massive error bars, and claimed OR/HR of like 50% vs wild strain despite 90% “effectiveness”, so clearly I’m confused. Some sources say no additional protection to mortality given symptoms; probably not true but not disproven, could use more research into those
    • (They report some more studies too; Study 4 agrees, Study 5 says vaccines perform worse but it’s speculated that this includes more asymptomatic cases than normally included, Study 3 says vaccines slightly better, Study 2 is lab titers and not relevant to the wild.)
    • New Israel study [bad cite] says even less protective (down to 40% from 65%), but still 88% against hospitalization and 91% against severe illness
    • Moderna studies are even worse than Pfizer (but show ~~roughly similar results), and AZ and J&J are obviously worse by 2-4x or something
Cost of getting Delta:
  • Rule of thumb: for 30yos, mortality pre-vaccine-pre-delta was .02%, now it’s .004%. That’s 40 micromorts or 1 day of lost life. [If the mortality/hospital protection numbers are wrong, at most multiply all these risks by 3 for 3 lost days; still pretty low]
    • Let’s say Pfizer offers 95% protection against mortality like they say it does for hospitalization. Since that’s including “protection via not getting the virus”, and protection from being infected is only 80%, this would mean that protection once infected drops death by about 4x. That implies a 30yo would have, instead of .04% mortality from Delta variant (.02% * 2), now a .01% mortality. However, it’s probably even more effective in the young (like how we saw it ~~5x more effective in the young against the original virus), which could take 30yos down to .002% mortality. On the other hand, it seems like this is *too* low—someone’s gonna be immunocompromised or have failed to get the vaccine fully injected or something, so it seems hard to bound it much lower than maybe .004%.
    • 40yos should have .14% mortality from Delta, .036% after vaccine basics, down to .012% if vaccine effects are age-adjusted
    • 50yos should be .4% mortality, .1% after vaccine basics, down to .06% if age-adjusted
    • 60yos should be 1.4% mortality, .36% after vaccine
    • 70yos should be 4% mortality, 1% after vaccine
    • 80yos should be 14% mortality, 3.5% after vaccine, maybe 5% adjusting for age
  • That means Long COVID becomes the dominant factor.
    • This is hard to calculate even roughly, but my previous calculations said it was ~2x as bad as mortality (mostly equating it to severe pneumonia in long-term-effect multiplier) and my intuition says now it’s maybe ~~5x as bad, partly because mortality seems to have dropped faster than less severe afflictions. This would take us to .02% mortality equivalent, or 200 micromorts and 4 days of life when you get delta.
    • A rough fuller calc might go as follows. Pfizer reduces likelihood of symptoms given infection by 2x, hospitalization given infection by 4x, so let’s say 3x for Long COVID given infection. Let’s wave our hands wildly and say 5% of 30yos who used to get COVID would get it Long, which now becomes 1.5%. Let’s say this averages 1% hit to health permanently, since the effect is pretty huge in some people but we’re including lots here since we claimed 5% of 30yos were getting Long COVID before. This would mean 1.5% times 1% is .015% mortality-equivalent, which is 150 micromorts, which is 3 days. Big error bars on “1% hit to health”, obviously, but if you say it was 5x this bad you’re probably pushing at the boundaries of possibility (5% permanent loss of health to 5% of 30yos!) and EVEN THEN you’re still only looking at 2 weeks of life lost. This could use more research.
    • Lancet study on cognitive deficits from COVID is pretty clear though. Unclear the timescale of the cases, but all measurements being finished by Dec 2020 means most probably were about 3-6 months later (maybe it says this in the paper, but I didn’t see it). Anyways, reduced SDs on cognitive tests were
      • .04 for non-respiratory symptoms (.5 IQ)
      • .07 for respiratory symptoms (1 IQ)
      • .13 for medical assistance at home (2 IQ)
      • .26 for nonventilated hospitalization (4 IQ)
      • .47 for ventilation (7 IQ)
    • Decreasing by 3x for vaccine vs Delta (same calculation as above) means you obtain
      • .01 for non-respiratory symptoms (.15 IQ)
      • .02 for respiratory symptoms (.3 IQ)
      • .04 for medical assistance at home (.6 IQ)
      • .09 for nonventilated hospitalization (1.4 IQ)
      • .16 for ventilation (2.4 IQ)
    • If you roughly double your income when you go up 60 IQ points, each IQ point is about 2% added income. Each IQ point is probably also .1% happiness (.01/10). It also probably reflects some underlying worse health that may have other effects. It’s hard not to overcount when something is correlated with everything, but I think the largest effect here will be the money-equivalent (impact, perhaps); maybe we can double at the end for other effects, though. Giving up 1/50 of this for an IQ point is equivalent to a lost week of production every year, which, while not the same as lost life, is still pretty bad. Why does Long COVID affect your brain so much, even if you have no other symptoms? Cognition does seem fairly afflicted by environment and things (lack of sleep, eg), such that small amounts of swelling may throw it off more easily than the rest of your body. This does somewhat fit with how impossible it is to do hard cognitive labor while sick (though doing physical labor is similarly bad...). My guess is that the median 30yo who gets Delta is asymptomatic, but that the average is somewhere between non-respiratory symptoms and respiratory symptoms (however, it’s super hard to do this integral, and a surprising number of young people are hospitalized…). I would guess Long COVID then reduces a 30yo’s productivity by .3-.6% on average (obv this is smaller than my 95%CI). This is about 1-2 days of productivity a year in perpetuity. It is probably also 1/200th of your vitality
  • Altogether, this means in expectation you maybe lose 4 days of life and 1/200th of your productivity and vitality forever. This is about 1-2 days per year of lost productivity. (NB while the fraction is a large cost, remember that not exercising costs you 10x this amount.) (A rough sketch of the mortality arithmetic follows this list.)
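A rough reconstruction of the mortality arithmetic above (the factors are the post's estimates; the clean 3x-per-decade multiplier is my simplification, so the post's own figures run slightly higher at older ages):

BASELINE_30 = 0.0002  # pre-vaccine, pre-Delta mortality for a 30yo
DELTA_FACTOR = 2      # Delta maybe 2x as deadly
PER_DECADE = 3        # risk of death ~3x per decade of age
VACCINE_FACTOR = 4    # protection against death once infected (hedged)

def delta_mortality(age, vaccinated=True):
    risk = BASELINE_30 * DELTA_FACTOR * PER_DECADE ** ((age - 30) / 10)
    return risk / VACCINE_FACTOR if vaccinated else risk

for age in (30, 40, 50, 60, 70, 80):
    print(age, f"{delta_mortality(age):.3%}")  # 0.010%, 0.030%, ..., 2.430%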
Policy Principles:

Principle: Lockdowns last time were too strong

  • Being a few feet closer to someone is an order of magnitude more virus. Many activities are OOMs different in their risk, as evidenced by Microcovid. Lots of risky people didn’t even get the virus; if they had 50% likelihood of getting it per year, then safe people at 3 OOMs safer were likely being too cautious
  • Microcovid says you can do more than we were doing
  • Filter bubbles are strong. I think this is a main reason that no one I know who was being "reasonably safe" got it, and why I know upper-middle class people who took a number of risks and still didn't get it.

Principle: We don’t know the endgame

  • “[Booster trials may start as soon as August]”, so boosters won’t be here for months, and might again take many months after screening to actually get distributed. If the initial vaccine took 6 months to screen and 6 months to distribute, my guess is they could speed it up to possibly 3 and 3, but that’s still 6 total months until we’d get to escape
  • Without boosters, it could take 4-12+ months to either reach herd immunity or keep rates consistently low, if we bounce back and forth in waves like last time

Principle: Very hard to predict the control system

  • This means it’s very hard to know whether things will be bad a month out, for example
  • The control system does clearly take forever to play out to herd immunity though

Principle: Tests still suck

  • People continue to act as if testing is a solution, but even PCR tests only give about one bit of evidence. False negative rates are often about 50%, only going down to 25% at peak load. The statement that PCR tests are 99% accurate is, as far as I can tell, a simply false extrapolation from other domains. Further, I expect that false negatives are very correlated, so you can’t just test yourself 5 times and get 5 bits of evidence. (A quick Bayes check of the one-bit claim follows this list.)
  • Rapid tests are even worse, <50% sensitivity unless they’ve gotten much better in the last 6 months
  • It’s very hard to make a coherent strategy out of testing when people have about half their infectiousness period before they can test positive. This is even worse with Delta, where viral load is 1000x as high at first positive test, and the whole timeline is accelerated by a factor of about ⅔.
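A quick Bayes check of the one-bit claim (the false positive rate here is my illustrative assumption):

import math

p_negative_given_infected = 0.5  # ~50% false negative rate, as above
p_negative_given_healthy = 0.99  # assuming a ~1% false positive rate

likelihood_ratio = p_negative_given_healthy / p_negative_given_infected
print(math.log2(likelihood_ratio))  # ~0.99: a negative PCR is about one bit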

Principle: No one knows how to interpret vaccine “effectiveness” into the quantity we care about

  • “Effectiveness” currently means something like 1 - Relative Risk. Imagine two worlds that would both give rise to vaccine eff of 90%. In the first world, the vaccine gives perfect immunity to 90% of people and no immunity to the other 10%—so a megadose up the nose still wouldn’t make the 90% sick, while a single virus in an aerosol drop would make the others sick. In the second world, the vaccine is about 90% effective at warding off the “typical dose”, maybe 10^4 viruses on your mucus membranes. But if you give someone a 10^5 dose, they’d get sick, and at a 10^3 dose they wouldn’t get sick. It’s just that right around 10^4 is the dividing line given your body’s innate immune defenses.

    These two worlds have the vaccine effectiveness dependent on two different axes. In one, all the variance comes from immune responsiveness. In the other, all the variance comes from environmental stochasticity, mostly from viral load. Currently, effectiveness numbers themselves give us no evidence toward which of these worlds we’re in, and each has VERY different implications: if effectiveness is entirely dependent on innate responsiveness, healthy people basically shouldn’t take protective measures, since everyone’s going to get a small dose eventually and it’s unlikely that you’re in the “definitely will get it” category. If effectiveness varies entirely with environment, you can never be safe, and acting normally is bound to eventually get you in contact with a 10^7 bolus that gets you very sick. (So we’d really prefer the world where variance is explained by immune responsiveness.)

    We do have some evidence about which world we’re in. There are studies which find pretty big differences in level of antibody titer produced by the vaccinated, and in some cases where they have almost no antibodies it’s pretty clear that this means immune responsiveness is going to be at fault when they get sick. And I think there are studies finding correlation between titer and effectiveness. Both of these point toward innateness. But we also know that it has to be true that for many of those with low levels of antibodies, a larger dose will push them over the edge. There is also slight evidence from the Israel numbers, which give effectivenesses that vary some over time, that there’s a serious behavioral/environmental component. I don’t know of any good numbers personally, and it will be a question of distributions anyways that will end up hard to interpret. Could use more research

Principle: Variolation maybe good, still hard

  • The problem is that if you don’t want to spread it to others, you probably need to quarantine for ~a week every time you do it; but since you can’t titrate doses, the proper move is to start quite small, which means a lot of quarantining. I’d guess 5 incidents if you do a pretty good job ramping up, though with some coordination you could try different numbers as a group and more easily figure out a good dose, maybe bringing this down to 2-3 incidents per person. Alternatively, you could try to test saliva or similar for viral concentration.
  • A main question is how much immunity delta strain confers toward future delta strain. I haven’t seen any discussion of this question—it’s probably >3x, but if it’s <2x or something (for example due to original antigenic sin), then it could be a mistake to variolate. In that world, your main move is to wait 6 months for booster vaccines

Principle: Lockdowns are pretty mentally costly

  • Not expecting to see as many people means you have reduced incentives to act in socially good ways; being hampered in your life gives you lots of excuses and reduces your strength and effectiveness. I think both of these are super underappreciated and are major issues
  • People’s brains are wired to have lots of human contact and start doing weird things when they don’t, including the “I’m in pain” thing that causes a variety of badnesses
  • There are other effects here I’m not getting into. I know these are bounded in severity because they aren’t showing up massively in suicides or happiness, but I think they’re big and real nonetheless. I hope to write something about the strangeness of these soon, but realistically that won’t happen.

Principle: Don't pay lots of COVID-avoidance costs if you're going to almost-surely get it anyways

  • I tried to make this clear to people last time, but—if you're going to be one of those people who goes to the gym maskless, just don't avoid any risks in the first place. Better yet, get it at a convenient time. But don't avoid hanging out with your friends for two months if you know you're going to get like 2 million uCOVIDs from vacation right after. (Obviously, don't spread it to your friends if they want to avoid it though.)
  • If your rate is about 5k uCOVIDs for a lost hour, then it's still going to take you 200 "debates over an hour" before you have a full expected COVID. For most people, this is probably still on the side of "won't get an expected COVID in the next 6 months". However, if our Long COVID risk goes down another factor of 3 or something, many more people are going to have their risk-tolerance set such that they'll get more than an expected COVID soon anyways, at which point mathematically they should stop trying to fight it and just control their viral load and timing.

Principle: A third vaccine dose of a different type is probably fairly useful for the vulnerable

  • I expect the effect size is something like 3x for the vulnerable and 1.5x for others. It’s slightly higher the longer it’s been since your second shot.
  • I don't know whether people should get the other mRNA vaccine (Pfizer XOR Moderna) or whether they should get something else—hopefully something we can figure out over the next week or two.

Policy summary:

(These are for individuals, not the state. I would like the state to not impose any hard rules and instead just get us more and better tests, but I know that’s a pipe dream.)

As individuals, the primary behavioral axis is still how much risky contact you have with others, and using microCOVID.org to titrate your risk level at the new rate of 1k-5k uCOVIDs per hour. However, you have some secondary options this time around. First, you can get a third dose of a different vaccine (in Israel they’re already doing this); second, if you’re smart about it you can variolate yourself at less cost than before due to vaccine protection. More likely, a group of people working together could make variolation work.

On the primary behavioral axis, I think serious lockdowns are a bad idea for almost everyone except the immunocompromised. I also think it wouldn't be crazy to just continue acting normally aside from reducing viral load in simple ways, and simply avoid interacting with the vulnerable for a month or two until you’ve had Delta and been protected. However, I could be wrong about this and you should make your own decisions: I admit that I balk at taking a 1/200 permanent hit to productivity/vitality.

I had meant to make some more serious policy proposals for Bay Area rationalists or other groups of well-coordinated people, but I have to postpone this for a few days.

Again, remember that our behavior might curb this in a few weeks anyways.




How much do variations in diet quality determine individual productivity?

28 July 2021 - 06:05
Published on July 28, 2021 3:05 AM GMT

A few days ago, Jim Babcock stated:

I care about nutrition because I believe it has a very large, very underappreciated impact on individual productivity. Low quality diets make people tired and depressed, so they don't get anything done.

There's obviously a sense in which that is trivially true. If you start consuming zero calories per day, you will get very tired, maybe depressed as well, and you will eventually not be able to get anything done. And getting adequate iodine and other nutrients is very important for children to properly develop their cognition. But Jim Babcock is probably making a stronger claim. I think what he is claiming is something like "going from the 10th percentile in diet quality among Americans to the 90th percentile would have a very large, very underappreciated impact on an adult's individual productivity (taking into account that diet quality is almost certainly at least partially dependent on individual factors)."

I'd like to know what evidence we have for that claim. The strongest piece of evidence I can find is Rae et al., 2003, which showed that creatine supplementation increased the average IQ of a sample of vegetarians by 12 points, but that hasn't been replicated[1], and it seems extremely hard to substantially improve the cognition of adults. And when it comes to depression, people have been trying really hard to show that omega-3 supplementation has substantial effects on it, but it's dubious that it does. L-methylfolate is a nutrient that is apparently effective enough that someone convinced the FDA to approve it to treat depression (as an add-on to another antidepressant), but only in quantities that far exceed what anyone gets from their diet.

So I have a fairly low credence that his claim (as I formulated it) is true. But I was wondering if there were any major pieces of evidence I have completely missed.

[1] Apparently some rationalists planned to try to replicate this, but there seemingly hasn't been an update to that in three years.




A reasonable debate about Ivermectin

28 July 2021 - 01:31
Published on July 27, 2021 10:31 PM GMT

Rebel Wisdom put together a good podcast, "Ivermectin, For and Against, with Tess Lawrie, Graham Walker & Gideon Meyerowitz-Katz." Tess is firmly pro-Ivermectin and is the second author on a favorable Ivermectin meta-analysis. Graham and Gideon are skeptical of Ivermectin. Graham is an ER doctor; Gideon is an epidemiologist.

Either way, what I wanted was to hear bona fide advocates and detractors talk with each other, and the podcast accomplished that.

FWIW, if this had been an Intelligence Squared debate, I came into it hoping for a good performance from the pro-Ivermectin side. I felt, and still feel, that that "side" is being treated somewhat unfairly. In that regard, however, I was disappointed, and I am now more skeptical of Ivermectin as a treatment and prophylaxis after listening.

Tess, while bright in many ways, may just not be the best advocate for Ivermectin.

At around 20:00, Tess is asked "What would convince you that Ivermectin doesn't work?" and she responds (paraphrasing from memory): "Ivermectin works." 

It gave me a flashback to the Bill Nye and Ken Ham debate, when they were asked what would change their minds. Ken said, “You know what? I’m a Christian, and I know God’s word is true, and there’s nothing he could say that will cast doubt on that.” Bill, meanwhile, said "We would need just one piece of evidence" and then gave several examples.

Then somewhere around 55:00, Tess is asked if there's anything that should be censored on social media, and a real example is offered: "bleach therapy" (where parents are encouraged to give children bleach to prevent autism). Tess is unmoved by the example and remains categorically against censorship. I wonder what she'd say about PhotoDNA?

I worry that someone like Tess is too biased here. It seems to me that the pro-Ivermectin side has a lot of PR problems. 




Testing Newport's "Digital Minimalism" at CEEALAR

27 July 2021 - 21:28
Published on July 27, 2021 6:28 PM GMT

(also posted on the EA forum)

 

I just arrived at the CEEALAR. In this post I describe the project I will be working on while I'm here.

 

Background:

I was highly persuaded by the claims in Cal Newport’s book, Digital Minimalism: Choosing a Focused Life in a Noisy World. Check out this summary on Goodreads. I very highly recommend reading the whole book--I personally found it more mind-changing and action-inspiring than anything I’ve read in a while. Here is a central passage from the book (emphasis mine):

Assuming I’ve convinced you that digital minimalism is worthwhile, the next step is to discuss how best to adopt this lifestyle. In my experience, gradually changing your habits one at a time doesn’t work well—the engineered attraction of the attention economy, combined with the friction of convenience, will diminish your inertia until you backslide toward where you started.

I recommend instead a rapid transformation—something that occurs in a short period of time and is executed with enough conviction that the results are likely to stick. I call the particular rapid process I have in mind the digital declutter. It works as follows. 

  1. Put aside a thirty-day period during which you will take a break from optional technologies in your life.
  2. During this thirty-day break, explore and rediscover activities and behaviors that you find satisfying and meaningful.
  3. At the end of the break, reintroduce optional technologies into your life, starting from a blank slate. For each technology you reintroduce, determine what value it serves in your life and how specifically you will use it so as to maximize this value.

In the book, the benefits of a “digital declutter” are claimed or implied to include: lower anxiety, more social integration, working more efficiently, rediscovering the time and motivation to {play music/read novels/draw pictures/write fiction}, having a more rewarding relationship with your significant other or your kids, and improved decision-making.

I myself tried a digital declutter last year and got mixed results, in large part because the pandemic prevented me from socializing. I am now planning to run the experiment on myself and several other people here at CEEALAR, semi-rigorously.

 

Project summary:

I will recruit a handful of people at the EA Hotel who are motivated enough to participate in a digital declutter. I will manage their setup and facilitate self-enforcement. I will take some data and write up a report, which I will post on the EA forum.

 

Motivation:

I want to know if digital declutters are easily doable by EAs, and if they make a measurable impact on effectiveness and/or quality of life. If the answer is yes, I want to get the word out and have specific evidence to point to. I see some potential for high impact.

More personally, I really want to help my old friends from high school and college. A lot of them are struggling with life in weird ways that I would have confidently anti-predicted 10 or 15 years ago. I have multiple hypotheses about this, the top one being that modern device usage is harming their minds: reducing attention spans, interfering with action-reward patterns, creating addictions (such as outrage addiction & compulsive phone checking), displacing social interaction, and both directly and indirectly increasing anxiety. I think it’s not just my friends though; I hear that mental and behavioral health problems are rising in the general population, and especially rapidly in young people. I would be happy to find a way to make even a moderate dent in this.


 

Project outline:

  • I get about 5 people in the intervention group and 5 in the control group. Before any (intentional) intervention begins, all participants track their screen time, mood, and productivity for a week or so, in order to record their baseline (a minimal logging sketch follows this outline).
  • I meet with each declutter participant privately to discuss their personal goals, expectations, and concerns. Each person submits a declutter plan and ruleset. I critique the plan and discourage any rules that are wastefully lax or hopelessly strict.
  • In crafting the plan, I help them put behavior-shaping systems in place. These may include browser extensions, time/location restrictions, device encumbrances, enticements for wholesome activities, etc.
  • I meet with participants a couple times a week to discuss how it’s going, collect soft data, and consider changes to their plans/rules. In the spirit of anti-akrasia, participants are encouraged not to modify their rules before talking it over at one of these check-ins.
  • To help take people’s minds off the painful “detox” phase of the declutter, I organize group activities in addition to the ones that already happen.
    • Group exercise, jigsaw puzzles*, group writing pomodoros, beach walks, board games, mini-hackathons, movie night, etc.
  • After 4-6 weeks, the program is over and I hold exit interviews to gather a final chunk of soft data.
  • I conduct followup interviews/surveys 3 weeks and 6 weeks after the end. I try to get some idea of which things changed, which things didn’t, by what mechanisms, and on what timescale.
  • I write up my analysis, evaluate for effectiveness, make some recommendations, and warn against pitfalls. Post on LW, EA forum and maybe elsewhere.
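
As referenced in the first bullet, here is a minimal Python sketch of what the daily baseline tracking could look like; the fields, 1-10 scales, and filename are hypothetical, not a prescribed instrument:

    import csv
    from datetime import date

    def log_day(screen_minutes, mood, productivity,
                path="declutter_baseline.csv"):
        # Append one day's self-report; mood and productivity on a 1-10 scale.
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([date.today().isoformat(),
                                    screen_minutes, mood, productivity])

    # Example: 312 screen minutes, mood 6/10, productivity 5/10.
    log_day(312, 6, 5)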


 

Example ruleset:

Since I am planning to do a digital declutter alongside the rest of the participants, here is the plan I have made for myself.

  • I will turn off all push notifications and badge icons. I will handle messages only during the hour after breakfast and the hour before dinner.
    • I previously found this to be surprisingly easy to get used to.
  • No Twitter, No Reddit, no Tumblr. (My Facebook usage is already well under control.)
  • I can read as many blog posts as I want, as long as I print them out at least 24h beforehand. (Printing out a bunch at once is cheating.)
  • No more than 2h of TV per day.
  • Phone can be carried in a bag, not in a pocket.
  • Bedroom is a device-free zone, except for making calls. No screens after sunset. (For reference, today’s sunset will be at 9:21pm.)
  • No idle phone usage. Every time I pull out my phone, I must have a good reason, and then must put it away once that reason is satisfied.
  • Long stretches of laptop usage are to be broken into pomodoros, regulated by a timer.
  • Listen to music as a full-attention activity, not in the background.
  • Keep a pen & notebook within arm's reach.


 

Existing literature:

I did some basic searches on Google and Google Scholar. At present, the literature seems to be pretty thin and not highly informative for my purposes.

This literature review reports “inconsistent findings” due to available studies being highly heterogeneous. Notably, only 4 studies (out of 20) involved detox periods longer than 5 days, and the analysis did not control for this variable.

Also, it looks like a lot of the existing studies are done on undergraduates, and I don’t expect this to change any time soon. It seems possible to me that EAs who are doing self-directed EA work may show significantly different effects than undergraduates (or other groups), in a way that matters.


 

My anticipations:

I expect more of the signal to come from soft data in the interviews than from the survey responses.

I see more than a 10% chance that the declutter turns out to have direct, lasting, and clearly worthwhile effects on every participant. They all end up very satisfied with the results, even though there was a painful adjustment.

Rot-13:

V rfcrpvnyyl nagvpvcngr cnegvpvcnagf rzcunfvmvat ubj vzcbegnag gur shyy guvegl-qnl crevbq jnf. Guvatf yvxr "V gubhtug gung zl cerivbhf gjb-qnl grpu ergerngf jrer nyernql trggvat zr zbfg bs gur orarsvg ohg V jnf jebat. V qvq abg frr jung V jnf zvffvat hagvy qnl sbhegrra, naq V qba'g guvax V pbhyq unir znqr creznarag unovg punatrf orsber qnl gjragl."

V chg zber guna bar-unys cebonovyvgl ba ng yrnfg bar svsgu bs cnegvpvcnagf (abg vapyhqvat zr) fnlvat gurl srry “uvtuyl zbgvingrq” gb uryc n sevraq qb n qvtvgny qrpyhggre.
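
(If you'd rather not decode the rot-13 paragraphs above by hand: rot-13 is a self-inverse letter substitution, and Python ships a codec for it. A minimal sketch, with the ciphertext elided:)

    import codecs

    ciphertext = "V rfcrpvnyyl nagvpvcngr ..."  # paste either paragraph here
    print(codecs.decode(ciphertext, "rot_13"))  # rot-13 is its own inverse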

 

Recap:

  • I think the core claims in Digital Minimalism are probably basically correct.
    • There are big personal gains to be had by changing how one relates to one's devices.
    • Small, piecemeal interventions are not enough to get these gains.
    • A life overhaul of 30+ days (aka a "digital declutter") may be enough.
  • If the core claims are true, a declutter may measurably raise both the effectiveness and the quality of life of the EAs (or just people) that try it.
  • I'm going to help a handful of EAs execute a digital declutter.
  • I will semi-rigorously take notes and data. I will publish my analysis & conclusion.


     

 

*surprisingly effective in my experience




Black ravens and red herrings

27 July 2021 - 20:46
Published on July 27, 2021 5:46 PM GMT

The raven paradox is a dilemma in inductive logic posed by Carl Gustav Hempel. It starts by noting that the statement "All ravens are black" can be expressed in the form of an implication: "If something is a raven, then it is black." This statement is logically equivalent to its contrapositive: "If something is not black, then it is not a raven."

We then consider that observing a black raven would typically be considered evidence for the statement "All ravens are black." The paradox comes from asking whether the same holds for the contrapositive; to put it another way, if we observe a non-black non-raven, such as a green apple, does that constitute evidence that all ravens are black?

The standard Bayesian solution, given by I. J. Good, goes as follows. Suppose there are N objects in the universe, of which r are ravens and b are black, and we have a probability of 1/N of seeing any given object. Let H_i be the hypothesis that there are i non-black ravens, and let us assume we have some sensible prior over our hypotheses. Then upon observing a non-black non-raven, our probability of H_0 increases, albeit only very slightly when N is large.
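
To make Good's calculation concrete, here is a minimal numeric sketch in Python; the universe size, raven count, black-object count, and uniform prior are illustrative assumptions, not part of the original argument:

    import numpy as np

    N, r, b = 1_000_000, 100, 500_000  # assumed: objects, ravens, black objects

    i = np.arange(r + 1)                   # H_i: exactly i non-black ravens
    prior = np.full(r + 1, 1.0 / (r + 1))  # one "sensible" prior: uniform

    # Under H_i there are N - b non-black objects, i of which are ravens, so a
    # random draw is a non-black non-raven with probability (N - b - i) / N.
    likelihood = (N - b - i) / N

    posterior = prior * likelihood
    posterior /= posterior.sum()

    print(prior[0], posterior[0])  # P(H_0) rises, but only very slightly

Since the likelihood shrinks as i grows, the observation favours hypotheses with fewer non-black ravens, but for large N the update on H_0 is tiny, matching the intuition above.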

I. J. Good's solution reveals a distinct way of thinking about evidence among Bayesians. Most logicians view logical fallacies as examples of poor reasoning, but the Bayesian interpretation is slightly different. Many fallacies, when stated in their appropriate inductive form, are actually valid in a Bayesian sense, though the evidence they provide is usually weak.

Which is to say that if you criticize a Bayesian for using a "red herring", they may not see the issue. Observing a red herring is indeed Bayesian evidence for the statement "All ravens are black."




DeepMind: Generally capable agents emerge from open-ended play

27 July 2021 - 17:19
Published on July 27, 2021 2:19 PM GMT

EDIT: Also see paper and results compilation video!

Today, we published "Open-Ended Learning Leads to Generally Capable Agents," a preprint detailing our first steps to train an agent capable of playing many different games without needing human interaction data. ... The result is an agent with the ability to succeed at a wide spectrum of tasks — from simple object-finding problems to complex games like hide and seek and capture the flag, which were not encountered during training. We find the agent exhibits general, heuristic behaviours such as experimentation, behaviours that are widely applicable to many tasks rather than specialised to an individual task.

...

The neural network architecture we use provides an attention mechanism over the agent’s internal recurrent state — helping guide the agent’s attention with estimates of subgoals unique to the game the agent is playing. We’ve found this goal-attentive agent (GOAT) learns more generally capable policies.

...

Playing roughly 700,000 unique games in 4,000 unique worlds within XLand, each agent in the final generation experienced 200 billion training steps as a result of 3.4 million unique tasks. At this time, our agents have been able to participate in every procedurally generated evaluation task except for a handful that were impossible even for a human. And the results we’re seeing clearly exhibit general, zero-shot behaviour across the task space — with the frontier of normalised score percentiles continually improving. Looking qualitatively at our agents, we often see general, heuristic behaviours emerge — rather than highly optimised, specific behaviours for individual tasks. Instead of agents knowing exactly the “best thing” to do in a new situation, we see evidence of agents experimenting and changing the state of the world until they’ve achieved a rewarding state. We also see agents rely on the use of other tools, including objects to occlude visibility, to create ramps, and to retrieve other objects. Because the environment is multiplayer, we can examine the progression of agent behaviours while training on held-out social dilemmas, such as in a game of “chicken”. As training progresses, our agents appear to exhibit more cooperative behaviour when playing with a copy of themselves. Given the nature of the environment, it is difficult to pinpoint intentionality — the behaviours we see often appear to be accidental, but still we see them occur consistently.

My hot take: This seems like a somewhat big deal to me. It's what I would have predicted, but that's scary, given my timelines. I haven't read the paper itself yet but I look forward to seeing more numbers and scaling trends and attempting to extrapolate... When I do I'll leave a comment with my thoughts.
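
As a toy illustration only: the excerpt describes "an attention mechanism over the agent's internal recurrent state" guided by goal estimates. Below is a hypothetical PyTorch sketch of one way such a goal-attentive readout could be wired; the class name, dimensions, and structure are my assumptions, not DeepMind's implementation.

    import torch
    import torch.nn as nn

    class GoalAttentiveCore(nn.Module):
        # Hypothetical sketch, not DeepMind's code: a goal embedding queries
        # slots of the agent's recurrent state, so the readout focuses on
        # state features relevant to the current goal.
        def __init__(self, n_slots=8, slot_dim=64, goal_dim=32):
            super().__init__()
            self.rnn = nn.LSTMCell(n_slots * slot_dim, n_slots * slot_dim)
            self.goal_query = nn.Linear(goal_dim, slot_dim)
            self.n_slots, self.slot_dim = n_slots, slot_dim

        def forward(self, obs_feat, goal, hidden):
            h, c = self.rnn(obs_feat, hidden)                # recurrent update
            slots = h.view(-1, self.n_slots, self.slot_dim)  # (B, slots, d)
            q = self.goal_query(goal).unsqueeze(1)           # (B, 1, d)
            scores = q @ slots.transpose(1, 2) / self.slot_dim ** 0.5
            attn = torch.softmax(scores, dim=-1)             # (B, 1, slots)
            readout = (attn @ slots).squeeze(1)              # goal-relevant state
            return readout, (h, c)

    # Example: batch of 2, observation features of size 8 * 64 = 512.
    core = GoalAttentiveCore()
    state = (torch.zeros(2, 512), torch.zeros(2, 512))
    readout, state = core(torch.randn(2, 512), torch.randn(2, 32), state)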




In Praise (and Defence) of The People

27 July 2021 - 12:48
Published on July 27, 2021 7:13 AM GMT

People are the most significant phenomenon in the universe. But lately I have noticed some people saying that people are bad, evil, insignificant, and unimportant, and that we cause harm to everything around us. I was a sucker for this idea too; it is probably the most embarrassing belief from my past. I even remember saying things like "I hate people". I'm an introvert but, like, seriously, WTF? I am one of them! Did I hate myself? I don't think so; I think I just wasn't thinking (just like everyone else who holds this idea, though I don't want to insult you in case you are still one of them).

Closer inspection reveals how stupid and wrong this idea is. In fact, it is not only wrong and stupid; it is evil. People turn out to be the most awesome and significant phenomenon in the universe, not just for us, but for the universe as a whole!

 

Saying That Anything Would be Better Without People is Evil:


Some say the planet would be better off without humans. Ahem, my dear human saying this: why haven't you killed yourself yet?
Saying that the universe would be better off without people, or that people are bad or insignificant, is evil. It encourages negativity, pessimism, and stagnation, and it promotes and even justifies further evils.

People are neither good nor bad; we are just very powerful and significant. Precisely because of that, we can cause lots of good and lots of evil. We have abilities never before seen in the whole history of evolution, and we are learning how to use them (which involves occasional errors). Our potential is much greater than we can imagine, and, as I will explain, our rate of progress seems to be evidence of that.

By promoting the idea that we are bad or not special, we are just hindering and discouraging progress and improvement: the progress that would save your grandma from dying of cancer. Yes, people who broadcast discouraging messages like "people are bad" or "people cause harm" indirectly killed your grandma (and mine, too, unfortunately).

Who wants to work on improving our wellbeing and the world, or anything for that matter, if we are so bad? Do we deserve any improvement if we are so bad? Such messages imply the answer is no. And so talented Jenny in high school resorts to drugs and nihilism instead of working on a cure for cancer, because what difference can one person make if everyone else is so bad, and why should one good person care about the rest, who are so bad?

It seems to me that claiming people are bad is a kind of new-age virtue signalling. Does it make these people feel they are above other people? Or is it just more comfortable to claim there is no future, that everything is lost, and to throw one's hands in the air?

Want people to stop doing evil things? Stop telling them they are evil!

 

Do You Value Other Species More Than Your Children?


Some popular books mention that wherever humans migrated, there was widespread extinction of other species. Yes, but there was also cultivation of other species (cows and chickens wouldn't exist without us).
Those books also forget to mention that the migration of any species will cause the extinction of some other species. Such migration will also cause the alteration and exploitation of the new environment by all species. All species try to survive and thrive at the expense of others and their environment.

Sure, humans do this on a larger scale (which is part of what makes us so special, BTW), but if we didn't alter our environment at such a large scale, other species would! Think about it: does it matter whether the alteration is caused by one species or by many? The net change is going to be roughly the same, though different in its details.

By exploiting our environment and its resources, we are merely preventing other life forms from doing so; we just do it better and more sustainably because, despite what many would say, we take the long-term view, at least compared to other life forms on Earth.
And that difference is crucial. The changes we make to the environment are always net positive in the long run, which cannot be said of the changes other species make.

We might cause harm to some tropical bird species, but we will in turn improve the lives of millions of people and prevent your newborn child from dying. If we can choose between building a hospital and preserving the natural environment of some bird, go for the hospital (if you want to argue against this, wait until you have a stroke and the nearest hospital suddenly morphs into a natural bird park). I would say that is a good trade-off for a start (we are still a very young species, and certainly at the start of our technological progress), wouldn't you?

Oh yeah, and don't forget that no other species is known to care about going extinct. In fact, we are the only ones who care about extinction, whether our own or that of other species! Causing the extinction of something that doesn't care about going extinct can't be bad (note that this is not the same as causing suffering!). So whether you try to preserve other life forms or make them go extinct, it is still just about you and your preferences, not theirs.

 

Only by Interfering with Nature Can We Save It. Only Humans Can Do This; Our Interference is Crucial for Life and the Earth:


If we don't interfere with nature, all species will eventually go extinct! If we don't interfere with nature, the oceans will eventually be evaporated by the Sun, and eventually the whole Earth will be too.
Our technological progress might seem like it is causing harm, but it is only altering our environment in order to make enough progress to possibly save the whole Earth.

The sacrifices we make are worth it, because without people interfering, the whole Earth and its life are doomed. We are the only species with the power to make cosmic-level changes, the only one that can save our planet and all its life. But for that we need to keep doing what we do, even if it takes some short-term sacrifices and errors along the way.

All life exploits nature's resources to the best of its ability; our abilities are just superior. Moreover, we have the ability not only to exploit the resources and environment around us but also to regenerate them! And we are already doing that, at least better than any other species!

Sure, if you have a better solution for creating utopia without any sacrifice or downside, great! But if you don't, the second-best option is to do the best we can given our current state of knowledge.

So remember that we are part of nature and do our best to survive and thrive, just like any other part of nature. We are just an exponentially more powerful part of nature than any other, and in addition, we help other parts of nature survive and thrive instead of thinking only about ourselves. We are not perfect at this, but as far as we know, we are the best.

 

People Can’t be Bad:


And when you say that humans are bad, always remember to ask: compared to whom or what? Lions kill too; are they bad? Oh, but you say, "Lions don't kill as much as humans do!" They sure would if they could; they just can't kill more. We, on the other hand, could kill much more if we wanted to, but we don't!

Remember: being unable to do something bad doesn't make you good! To be considered good, you need the ability to do evil, or more evil than you are currently doing. Doing good, or only a little evil, when that is your only option makes you weak, not good. The same goes for all the other species you don't see committing any crimes. It is not because they are good; it is because they are weak and incompetent! Give any species our power and technology and it would cause much more evil and damage than humans do.

In fact, it is not us doing harm to nature, but nature trying to harm us! Look around: nature is a very hostile place. More than 99% of the species that ever lived are extinct, and if it weren't for our ideas, technology, inventions, and interference with nature, we would have gone extinct long ago! Most places we inhabit would be deadly to us without our miraculous technology. And despite nature trying to kill us, we try to preserve it in return. Not that nature gives a damn, but someone like the anti-humanist human Rufus would say "we are making nature worse off", while nature couldn't care less what we do with it as it blindly kills us and all its life.

"But people still do bad things!" I hear someone who still doesn't get it screaming. Yes, and food still sometimes gives us stomachaches, but that doesn't mean it is bad for us, or that there is a better alternative. Could we do better? Could we cause less harm? Certainly! In fact, we are working on it, and the attitude that "people are bad" is only harming that work and slowing down progress! We are not perfect, but we are the best thing for life overall (notice that the only ones who would object to this point are humans; no rare bird species, planet, or anything else has ever complained about people).

Without people, life on Earth is certainly doomed. That doesn't guarantee that with people life will survive and thrive; for that, we need to look forward and dismiss the whole "people are bad" attitude, because it only slows us down and causes unproductive pessimism.

Okay, so far this has looked more like "In Defence of The People" than "In Praise of The People". So let's now turn to the hallmarks of our awesomeness and see why we are so special and significant in this universe, instead of just exploring why we are not so bad. I hope these hallmarks will convince the remaining skeptics.

 

Some hallmarks of why people are so unique and special include:

  • Humans are the only species that cares about the welfare of other species (you can email me pictures of a dog mom feeding little kittens, as I enjoy those, but I worry they won't suffice as a good counter-argument).
  • Humans are the only species that cares about the environment it lives in and tries to preserve it, not only for itself, but also for future generations and other species!
  • Only humans can override their selfishness and innate primal instincts. And don't tell me that other animals sacrifice themselves to save their kin; I know that, and it is only encoded in their genes as blind instinct, nothing conjectured and deliberately chosen like it is in us.
  • Only people can create ideas (or discover them?). You may say, "Yeah, but ideas don't do anything on their own, and there are many bad ideas!" Listen: every creative act starts with an idea, and only people deliberately act on ideas.
  • Only we can be in awe of and appreciate our creations. We don't just create and transform the world; we also reflect on it and appreciate the beauty and ingenuity of those creations!
  • Only people can have a cosmic effect on the universe and change its fate. For example, the Sun is inevitably getting larger and will eventually swallow the whole Earth, unless people do something to prevent it. And only people can, in principle, have the power to do so.
  • Only we have memes: ideas that propagate non-biologically and aren't encoded in our genes. That is remarkable when you think about it, because we are creating and passing on something we didn't have to start with! Where did it come from, then?
  • Only humans have made the rate of progress exponential. Just imagine how much progress we have made in our wellbeing, technology, morality, science, and everything else you can think of since the first Homo sapiens emerged. No other species has made such startling progress, not even close! The life of a human 150,000 years ago was vastly different from yours today. Heck, kings living just 500 years ago had it much worse than you do today! Now think about the difference between crocodiles living 10 million years ago and today: no difference at all!
  • We are more than our bodies and genes. Think about how physically inferior we are to other animals: we are slow and weak, with no special vision or senses. We look pretty much powerless at first glance, that is, until you venture inside our brains. We are not defined by our hardware; we are defined by our software, which overpowers all other hardware on this planet, which is why we rule this planet despite our apparent inferiority to the rest of the animal kingdom. Our specialness is not apparent, but it is most powerful despite (or maybe because of?) being hidden.
  • And obviously: art, music, written language, technology, science, philosophy, sports, math, books, the internet, the economy, and many more phenomena exist in the universe only because of people.
  • Love and kindness are not only our instincts; they are our choices. Other animals can act altruistically, too, but their behaviour is set by their genes and is only instinctive, hence predictable and hardly changeable. With people, it's different. We are unpredictable, and we can choose love and kindness, just as we can choose to commit suicide, despite it having no evolutionary or genetic benefit.
  • There doesn't seem to be a limit on how much we can learn and progress. With other animals, we have so far always found a limit to what they can do and learn, but with us, we are seeing exponential progress in all domains. We can learn anything given enough time and opportunity. This means that unless your "hardware" is damaged, anything anyone can learn, you can also learn, which basically means that unless we are handicapped, we are all equal in our potential, which seems to be infinite. This is the idea of us being "universal computers/explainers", which I got from David Deutsch; for more, see the additional resources section at the bottom.
  • Everything you love and enjoy exists only because of people. You might say that you enjoy licking a rock, and a rock is not made by people! To that I can just say that 1. you are weird, and 2. you enjoy the human experience of licking a rock, made possible by other humans producing you, your sense of enjoyment, and your ability to lick. Everything you love and enjoy comes through you and is enabled by the prior existence of other humans. At most, you can say that you love the human perception of a rock, not the rock itself. There is no love or joy in a rock; love and joy are created only by and within us. They aren't properties of other objects.
  • If you are able to perceive and contemplate your own "insignificance", then you are anything but! That is because it's one of the rarest and most significant phenomena in the universe: to perceive and contemplate, and then subsequently cause, changes in the universe that have never before been seen in its history (e.g. atomic bombs, spaceships, iPhones, or mere ideas and abstractions such as the concepts of "significance" or good and bad).
  • We are the only creatures who implicitly (and of course also explicitly) ask "why?" and seek and create explanations.

Some would say that I am bringing up only the good things and omitting the bad. To that I would say: you are bringing up the bad and omitting the good. And whatever we focus on will grow. So which one will you focus on?

Moreover, aren't we the only species that can, and does, morally and pragmatically question its own acts and thoughts and reflect on them? Aren't we the only ones deciding and improving on what's good and bad, and able to distinguish between them? So even by disagreeing, you are proving your awesomeness. Good job, fellow human!
 

 

Fallibility of People:


Of course, some of our efforts will be bad ideas and will cause harm to us, our environment, and other life forms. But that is a price we have to pay to avoid stagnation, and stagnation has to be avoided at any cost, because stagnation guarantees doom, eventually!

And remember that it is often bad ideas and mistakes that pave the way to great ideas and solutions. I can attest to that from my own life, not just from looking at human history.
We will keep making mistakes, and we will keep learning from them. It is OK to criticise those mistakes, but it is not OK to try to stop them, and with them our progress. We have to keep moving forward and improving our ability to solve the new problems that arise from our new solutions.
Carbon emissions are just a better problem that emerged from the solutions we created; nobody would want to go back to the times when we didn't cause any pollution, because back then the problems were much worse! If we had never progressed far enough to cause carbon emissions, we probably wouldn't have survived until now.

Our mistakes are not something to condemn us for, but something to praise us for. The only mistake to be avoided is trying to avoid all mistakes, which promotes stagnation, which is ultimately a fatal mistake.

This is not to say that the mistakes we make are OK; they are just OK to make. Of course they are wrong; that's why they are mistakes! We shouldn't praise our mistakes, merely the courage to make them and then quickly correct them. We haven't yet figured out a way to improve without making mistakes. To get rid of mistakes, we have to go through them, and do it fast, by promoting progress, innovation, and creativity.

 

Importance of People:


I am certainly species-biased, but I think people are the most important species ever to have evolved on Earth. So we are worth much more than any other species (not only to ourselves or from our point of view, but for other species, for life, and from an objective point of view, too). Think about it: we are the only species with the potential to save the whole Earth and all its life. We are the only species that can affect all other life forms on Earth, the Earth itself, and the rest of the cosmos. We need to be taken care of and nurtured well. We are important. We hold the fate of the universe in our hands and minds. We are unpredictable, but with infinite potential. Let's make the best of it.

Some ultra-sceptical, human-hating human Rufus might still say that it is only people who place such importance on people. Well, maybe, but who else is there to judge our importance? As far as we know, there is no one and nothing else even capable of such judgment! Of course, if there are intelligent aliens, they would place such importance on themselves, but then wouldn't they be, in principle, People too? They would be different in their biology, but if they were capable of such judgment, they would be the same in their minds, and it is our minds that define us, not our mere biology.

We are the universe not only experiencing itself, but also affecting itself! We don't just watch and predict what happens to the universe; we can shape it.

This doesn't mean that other things and species in the universe aren't important; it just means that we are the most important and significant phenomenon in the universe. And even if you disagree, we can probably agree that we are the only known "thing" that can appreciate the significance of anything, or even invoke such a concept in the first place! And if anything is to be the most significant thing in the universe, who else is to decide it other than the inventors of that concept?

In many ways we are still the same as other known life forms; we still share the same genetic history. But we are different in the ways that are most significant and impactful on the objective, cosmic scale. That is what makes us the most important phenomenon in the entire universe. We are the same in the things that don't matter, but different in the things that do.

We need to praise people for the things we want them to do more of. If we shame ourselves for our failings, that will lead only to despair. But giving praise for what we want more of, and want done better, will encourage further progress. Want a better climate? Praise ourselves for the progress we have already made on climate! Want better morality? Praise ourselves for the moral progress we have made! Want better science and technology? Praise ourselves for the progress we have made from stone tools to the device you are reading this text on, written by someone on the opposite side of the planet!

 

International Day of The People:


I propose there should be an International Day of The People. There are celebrations of many different things, except the one that is most important and gave rise to all the other things being celebrated, and to all celebrations themselves: The People.

How is it that it never occurred to us to celebrate ourselves? We are the fundamental element of everything we celebrate! The Day of The People should be the most important day of the year, not Christmas, not New Year's Eve, or anything else, because there wouldn't be a Christmas or New Year's Eve without People in the first place!

On such a day, we would gather around the table and give credit where credit is due: give ourselves a little pat on the back, and instead of telling Jenny how everything is bad, or how we are bad and insignificant, we would tell her about all our progress and the potential that we, and she, have. So that she decides drugs aren't that fun after all and goes on to save our grandmas by developing a cure for cancer. And of course, we would have lots of laughs and eat a huge, tasty dinner like pigs, because some things are still better done like animals.

 

Additional Resources:



A couple of resources that inspired me and changed my view on how awesome and important people are in the grand scheme of things.
If words don't persuade you, you might appreciate this awesome YouTube channel full of human awesomeness: https://www.youtube.com/channel/UCIJ0lLcABPdYGp7pRMGccAQ The stunts and performances of people are just astonishing. It never ceases to amaze me what we can learn and how far we can push; we can literally make art out of how we move our bodies.

The most influential book and ideas of the last couple of years for me came from David Deutsch, a British physicist who is much more than that; I think he is one of the most important philosophers of our time. At first, his ideas are hard to grasp and seem wrong. But they don't let you sleep, and the more you think about them, the more you realise they are correct and profound.
See his book "The Beginning of Infinity" if you want your world completely changed: https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359 Or start with the talk David gave at TED about how "chemical scum" can see and explain distant quasars: https://www.ted.com/talks/david_deutsch_chemical_scum_that_dream_of_distant_quasars?language=en

If you tend to be skeptical about our future and think that things are getting worse, or that people are making things worse, see the book "Factfulness" by Hans Rosling: https://www.amazon.com/Factfulness-Reasons-World-Things-Better/dp/1250107814 where he shows how nearly everyone is wrong about how well we are doing and how promising the future seems to be. You can test your knowledge of how the world is improving in a short quiz: https://factfulnessquiz.com

And if you'd like an anecdote about how awesome people are, I can tell you that whenever I am nice to other people, they hit me back with even more niceness. Whenever I show trust in people's competence or morality, they manage to surprise me with their ability and conscientiousness. This happens everywhere I go in the world, with strangers, friends, and family. Be awesome, give awesomeness, trust in the awesomeness of people, and it will grow.

P.S. I am aware that some of the arguments in this rambling might be flawed, incomplete, inaccurate, or not well supported in their details. But let's not disregard a strong idea because of weak arguments. Despite the possibly flawed details, my confidence in the idea of this post is pretty high, and I believe humanity would be better off if we gave ourselves more praise and admiration (while staying humble) and stopped shaming ourselves.
 




A More Useful Way of Answering the "Mind vs. Body" Problem

27 July 2021 - 10:13
Published on July 27, 2021 7:13 AM GMT

Are you your brain, body, mind, or consciousness? If so, where does your "free will" come from if everything is governed by the laws of physics? Thorny as questions about minds, consciousness, and physics are, the answer to what a person and free will are, and to whether we are "brain" or "body" or "mind", can have a simple interpretation based on what we currently know, one that can serve all our practical purposes.

You are not your brain, body, DNA, mind, or consciousness alone. Nor are you governed by the laws of physics. You are a specific part of all those things. You are your body, mind, DNA, consciousness, and the laws of physics (and perhaps other phenomena, too) all combined. You are the interplay between all those things. You are what emerges when all those things combine in a unique way.

You are just like a table. A table is not its legs, nor its top. It is only when four legs and one tabletop combine in a specific way that a table emerges.

This leads me to deterministic laws of physics and free will. Let's assume that everything is predetermined by the laws of physics. Then where is your free will? It is in the laws of physics themselves! We are not separate from the laws of physics. We are not governed by the laws of physics. We are simply part of the laws of physics. And if we are part of them, and the laws of physics are what "determines" everything, then we are the ones doing the determining.

It is not my mind, DNA, or the laws of physics that govern what I do, because I am those things! So in a sense you can say that my actions are determined by those things, but that just translates into: my actions are determined by me. Think about it: as far as we know, if you take away my DNA, mind, body, laws of physics, etc., then what remains? Nothing; there is nothing else left that could be governed by those things!

Therefore, People and Free Will are specific abstract phenomena that emerge from fundamental phenomena like DNA, the laws of physics, etc.

We shouldn't postulate that we are separate from "our" DNA, body, mind or laws of physics because if you take those things away, there is no longer any "us". It is those things that form us when they are joined in unique and specific way.

Is this a perfectly accurate or true way of looking at the world and at puzzling questions like free will and what people are? Unlikely, but to me it seems like the most useful and valid way of looking at things until we find better explanations. Denying free will and blaming the laws of physics or DNA for our actions doesn't seem to be the way forward, nor does postulating that we are in any way separate from those things.



