
### Structuring

Events at Kocherga - July 17, 2020 - 19:00
Structuring is a format in which participants propose questions or topics they would like to "structure", that is, to explicitly work through the model of a problem they already have and, together with a partner, build something new on top of it; participants pair up by interest in the proposed topics and talk one on one.

### Book Review: Fooled by Randomness

LessWrong.com News - 54 minutes 49 seconds ago
Published on July 13, 2020 9:02 PM GMT

The book "Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets" by Nassim Nicholas Taleb was published in 2001. On Amazon, the book has a rating of 4.3 from 1,124 ratings, with 62% of reviewers giving 5 stars. On Goodreads, it has a rating of 4.07 from 1,968 reviews. "The book was selected by Fortune as one of the 75 'Smartest Books of All Time'", notes Wikipedia. So this should be a great book. I forget who wrote it, and I forget the exact words, but I remember reading on Twitter that Taleb is something like a modern genius, because in his books he develops a worldview based on fundamental insights on randomness and derives important conclusions from it, including a system of ethics. Now that sounds promising! This worldview is developed in Taleb's Incerto series of five books. The German publisher's description says that the order in which you read the books does not matter, but it seemed like a good idea to start with the first one nonetheless ("The Black Swan" is surely the better-known title and has 2,083 reviews).

The book seems to have been a revelation to many, judging by the enthusiastic reviews. In a five-star Amazon review, reviewer Alex Bush writes (November 3, 2015) that the book "revolutionized how I view the world. In multiple ways. It's hard to overstate how rarely a book changes your ideas about how the world works once, let alone multiple times". He thinks that Taleb "has managed to weave a fantastically engaging and entertaining book out of what could very easily be a dry and technical topic", that FbR is "the most general and therefore most widely applicable" book by Taleb, and suggests that it teaches people "the survivorship and hindsight biases, as well as the difference between conditional and unconditional probability". Indeed, these are things you may learn from the book in a non-technical way. You may also learn that people find causality where there is mere randomness; that the past of a time series can lead you to a false feeling of certainty if rare events are relevant; and that people often disregard that the expected value of a random distribution may differ from the median, when keeping this in mind would be critical if possible rare events would have strongly negative effects. These things come together when people attribute a financial trader's performance to his skill in selecting the right stocks because their value has increased over a long time, while disregarding that a crash of the stock market may ruin him. So the fact that I did not find the book so eye-opening may be related to the fact that I already knew some things about statistics, probability, econometrics and behavioral economics, and that I had read "Thinking, Fast and Slow", blog posts on LessWrong, and some of the other things Taleb writes about. To be fair, the book is from 2001, and maybe many of the concepts described in it were very innovative back then.

However, amazon reviewer Alex Bush also writes that "it's often hard to determine whether the heart of the book is the ideas ... or the author. I can't stress how much I learned from this book that has nothing to do with probability or statistics, just random asides from an erudite and meandering mind." This may make you a bit skeptical.

And indeed, the author does meander. A common criticism in reviews is that the book is way too long; suggested lengths range from a paragraph (which is a bit mean) to a New Yorker article to 50 pages. More importantly, the book lacks structure. In the preface, Taleb writes that he "hates books that can be easily guessed from the table of contents (not many people read textbooks for pleasure)". And so it is a bit obscure what kind of book Taleb wants this to be. It seems he aims at being illuminating and, at the same time, entertaining. The book contains many stories and anecdotes, usually about some person who exemplifies a certain kind of behavior, but it is sometimes unclear how representative these stories are.

If nothing else, you will certainly know a lot about Nassim Nicholas Taleb when you have finished reading the book. He likes to go to the gym. He does not want to be a janitor. He is intelligent and cultured, and people around him are ignorant and shallow - people in the financial industry and in the media in particular - and they usually do not appreciate his insights when he explains them. As a cultured, intelligent person he also admits some emotional weaknesses. You will also know the names of some people whom Taleb likes and several people whom he dislikes. Maybe you read somewhere that Taleb is arrogant and likes to start fights, but that he has profound insights. (I had read something like that somewhere.) Well, I don't mind arrogance per se, if it is just some topping on the cake of insight. What I find a bit annoying, however, is that some people (or at least some reviewers) seem to understand the style of the book as a signal for the insight.

And indeed, Taleb's self-descriptions seem to aim at signaling some kind of insight. This can be a bit lengthy. When Taleb writes that he had a lucky career choice, he notes that "one of the attractive aspects of my profession as a quantitative option trader is that I have close to 95% of my day free to think, read and research". As this does not seem to be clear enough, he also adds: "(or 'reflect' in the gym, on ski slopes, or, more effectively, on a park bench)", and then: "I also had the privilege of frequently 'working' from my well-equipped attic." Over the course of the book, Taleb's self-descriptions, his descriptions of the people around him, and many other digressions add up. In a book whose topic is the problem that we often confuse noise with signal and should be mindful of distinguishing them, this is a bit disturbing. Of course, reading about an author's life can be interesting, and anecdotes can be illuminating; Daniel Kahneman's writing is a good example of that. But the entertainment depends highly on an author's style, and writing anecdotes in an illuminating way is seemingly not easy.

(Just to add one example: At some point, Taleb says that he sat in many meetings where traders had to offer their interpretations of what was currently moving the markets. He says he found these meetings to be a waste of time. And he adds that he himself talked a lot in them, to make them less boring, though he did not listen to what other people had to say: "I have to confess that my optimal strategy (to soothe my boredom and allergy to confident platitudes) was to speak as much as I could, while totally avoiding listening to other people's replies by trying to solve equations in my head. Speaking too much would help me clarify my mind, and, with a little bit of luck, I would not be 'invited' back (i.e., forced to attend) the following week." It is up to every reader to decide what the confident platitudes are at this point; but it is a little weird that it does not seem to occur to Taleb that the other participants in these meetings may have had the same justification for their "platitudes" and the same view of themselves and the others.)

I stopped reading the book after chapter 9 (that is, after 202 pages of the German epub edition, which is 302 pages long, excluding some back matter). At this point, the book had improved; the last two chapters had been more readable and more interesting, which might have made me continue reading. But then again, what had these last chapters been about? Chapter 9 nicely illustrates the problem of regression to the mean, and this served as a reminder to lower expectations. The expected value of the rest of the book did not seem high enough.

Summing up

You may like the book if you need a story-based introduction to some biases in understanding random events, and if you find a certain joy in an author classifying others as idiots.

(At several points, I think things that Taleb writes are wrong or misleading, but I will not put time and effort into arguing about these points.)

What did I like about the book?

There are nice illustrations about regression to the mean and survivor bias. Also, the mentions of several financial-market crises of the last decades served as a nice reminder to put things into perspective.

What did I learn from the book?

The most interesting thing that the book emphasized to me was that it is often hard to rate the quality of books along a single "quality" dimension. For a fair judgement, it seems necessary to know the background of the typical reader, to know what the author wants, and to understand the intellectual environment in which he wrote the book. Fooled by Randomness may be a good book for someone to whom the survivorship bias is news, it may get Taleb's message across, and it may have been revolutionary 19 years ago.

What should you read if you like the topic?

Daniel Kahneman's "Thinking, fast and slow"


### Roll for Sanity

LessWrong.com News - July 13, 2020 - 19:39
Published on July 13, 2020 4:39 PM GMT

[This is very much a personal-diary type post, but it ends up touching on predictive processing and other aspects of how our brains work. Feels related to Choosing the Zero Point.]

I. Looking for Trouble

In the card game Munchkin, there is a mechanic called “Looking for Trouble”, whereby if you haven’t yet fought a monster on your turn, you can play a monster from your hand and fight that. You don’t have to do this – it’s optional, and can carry stiff penalties if the monster ends up defeating you – but since killing monsters is one of the key ways to win at Munchkin, it’s an important mechanic.

Obviously you don’t want to fight a monster if you think that you’re going to lose. A brand new munchkin “Looking for Trouble” with a level 20 Plutonium Dragon is literally… looking for trouble [1]. And even if you think you might win, it’s often a good idea to wait a turn or two in order to try and collect more spells, stronger weapons, etc. It would be a pretty terrible Munchkin strategy to go looking for trouble on every possible turn, regardless of your equipment or which monsters you actually have in your hand.

And yet… this terrible strategy feels like a metaphor for my life recently.

Between work, personal relationships, and the chaos caused by the pandemic, I’ve been dealing with a pretty big set of stressors (monsters) already in my life. But like an idiot Munchkin, every time I’m not dealing with an immediate personal problem, I find myself Looking for Trouble. And the internet makes this soooooo easy.

Instead of taking a break, relaxing, and recharging my mental and emotional batteries, I find myself checking the latest coronavirus stats, seeing which of my favourite pieces of media have been cancelled, reading hot takes on the death of democracy, or just plain “doomscrolling” on social media. Unsurprisingly, I have not been at my best the last little while.

As best I can tell, this unfortunate behavioural pattern is a classic instance of predictive-processing gone awry. In other words, so much has gone wrong recently that my brain has decided the world must always be on fire, and that’s just the way things are. My subconscious is predicting disaster so strongly that when there’s no evidence of a new disaster, my brain assumes that I’m just not looking hard enough, and I end up on the internet finding new horrors in order to prove myself right. And all the recent stories about doomscrolling make me suspect I’m not alone.

II. Moral Implications

Now obviously predictive-processing gone awry is not the only explanation for everyone’s bad-news obsession. Even if it’s a plausible explanation for me personally (which I think it is), it might not be the cause of the general doom-scrolling trend. Things actually are unusually bad in many parts of the world, and people always tend to pay more attention to bad news than to good. Maybe feeling kind of terrible is just a natural response to things being unusually terrible.

If feeling terrible is in some sense a "reasonable" response to the state of the world, then I worry that my attempts to feel less terrible are morally wrong, since they try to avoid the problem instead of solving it. Am I just doing the global equivalent of pretending not to see the homeless person on the corner? Is the moral thing instead to face the world's troubles head-on, acknowledge its pain, and try to help?

But this doesn’t seem quite fair; while I might plausibly be able to help a single homeless person, I am largely helpless in the face of the vast issues facing America and the world (at least, in the short term). I’m a private citizen in a relatively small, stable, country; most of the time nobody pays us any attention, for good reason. Feeling stress and anxiety truly proportional to the level of suffering in the world seems in some sense correct; scope insensitivity is still an irrational bias. But like an airline passenger who refuses to put on their own mask first, it would be a mistake in practice. Being insensitive to the scope of suffering beyond a certain point is an adaptive coping mechanism to keep us sane in the face of a vast and uncaring world. As long as we use our sanity to do good in the long run, ignoring pain in the short run seems ok.

III. Reducing the Area of Concern

Given that ignoring global problems in order to conserve our own sanity seems ok, at least in the short term, how do we do that? By embracing scope insensitivity, and reducing our area of concern.

The human nervous system, grossly simplified, contains a slider switch that runs from “fight and flight” on one end (the sympathetic nervous system) to “rest and digest” on the other (the parasympathetic nervous system). A happy, productive life requires both components; you obviously need to spend some time resting and digesting, but equally you need your sympathetic nervous system to deal with challenges and to accomplish difficult tasks. In other words, it’s almost certainly unhealthy to be stuck at either extreme for any length of time.

Unfortunately, “fight or flight” isn’t just something that your brain does when facing an immediate, concrete threat. Stress, anxiety, and fear all show up whenever there’s a possible threat within some ill-defined “area of concern”. Another war on the other side of the planet? Not a big deal. But heaven forbid there’s been a string of burglaries in your neighbourhood recently. Even if you never see a burglar yourself, just hearing about it on the news is enough to cause some sleepless nights.

Given that mere bad news can cause a fight or flight response if your brain judges it “in scope”, and the fact that the world is absolutely full of bad news on a regular basis… if you start to think of the entire world as “in scope” then you’re going to have a bad time of it. The internet, news, politics… they’re all global arenas now, and it’s incredibly difficult to engage with them in a way that doesn’t increase your area of concern. Engage too much, and you end up permanently stuck in “fight or flight”, killing yourself with stress.

In recognition of where my slider switch has been sitting recently, and in order to metaphorically “put my own mask on first”, I’ve been trying to reduce my scope of concern. I’ve blocked a bunch of sites from my work laptop. I’ve uninstalled a few apps from my phone. I’ve tried to spend less time reading the news, and more time reading things that I find valuable and relaxing. If I’m helpless in the face of things anyways, then it doesn’t serve me to know about them at all, does it?

Early results are promising, but early. I suspect the hardest part will be sticking to it, and finding other sources of stimulus since much of my local life is still in pandemic-induced lockdown. If my immediate scope of concern is utterly static, and the global scope of concern is a panic-inducing nightmare, is there an intermediate scope? With the internet at our fingertips, I’m not sure that there is.

1. Yes, technically a Plutonium Dragon won’t pursue anyone below level 5, so you’d be able to run away… but still.


### Null-boxing Newcomb’s Problem

LessWrong.com News - July 13, 2020 - 19:33
Published on July 13, 2020 4:32 PM GMT

(This was written last night after having a fascinating conversation about Oracle AIs with my brother, which led me to wonder if Newcomb’s Problem might have a hidden ethical dark side to it. As always, I’d be delighted to hear any and all comments and criticism, so feel free to chat or just lurk in the comments below :))


### Sequences Reading Club

Events at Kocherga - July 13, 2020 - 19:30
We continue working through the "Extremely Convenient Excuses" sequence from Eliezer Yudkowsky's book "Rationality: From AI to Zombies".

### 2020 LessWrong Demographics Survey Results

LessWrong.com News - July 13, 2020 - 16:54
Published on July 13, 2020 1:53 PM GMT

Thank you to everyone who participated! You can see the original post here, but the survey is now closed. Here are the results:

Copy of the original survey (to view the questions in context): https://docs.google.com/forms/d/e/1FAIpQLSfBt1l3XBCZHx8DTJhWWTSK_bH7Tw6W4PdYYLKmMyplt1k0Ag/viewform

[Anyone wanting to revive the old (larger) annual LW-surveys can contact me if they need help/funding.]


### What You Are

LessWrong.com News - July 13, 2020 - 14:36
Published on July 13, 2020 11:35 AM GMT

I wrote this after a bad trip some years ago, in which one of the challenges was an almost total loss of context for my experience. I'd hoped this would give me some kind of experiential ground to stand on if it happened again. If you have your own sequence of fundamentals to bootstrap an understanding of "what's going on", please share.

You Exist
Which means that you are rather than are not

You are Conscious
This means there is something that it is like for you to Exist
One imagines there is nothing that it is like to Exist as a rock
And perhaps something that it is like to Exist as a cat
Somewhere from rock to cat a subjective point of view develops, and there is Consciousness

You are an Agent
Which means you prefer, and act
To prefer is to have a valence to your experience, positive and negative
To act is to make change happen in ways that interact with your preferences

You are a Human
Humans are the best general purpose Agents we know of
Which means you can prefer and do a great many things


### Update more slowly!

LessWrong.com News - July 13, 2020 - 10:10
Published on July 13, 2020 7:10 AM GMT

My experience forecasting has led me to realize that a key mistake that is often made is updating on new data too quickly. This seems backwards, but I think that often the biggest reason that people both under- and over-react to evidence is that they don't consider the evidence clearly enough, and immediately start revising their model to account for the evidence, instead of actually thinking before updating.

Let's deconstruct how rapid updating is misleading with a simple notional example. Someone shows up with a coin and claims that she is psychic and can predict coinflips. You are skeptical, and challenge her to do so. She immediately takes out a coin and correctly predicts heads 3 times in a row. You can update a few ways:

• Conclude that the claim is now much more likely than before, and you give some credence to the idea that she is psychic
• Conclude that she was lucky, and 7:1 odds is not very strong evidence, so you continue to hold on to your prior strongly
• Conclude that she is cheating using a coin which has heads on both sides

Notice that these possibilities which spring to mind are completely different cognitive strategies. Either you update to believe her more, you decide the evidence is insufficient, or you immediately seize on a new hypothesis that explains the evidence.

But these are all mentally lazy strategies. If you thought about the situation for longer, you could easily generate a half dozen additional hypotheses. Perhaps she has trained herself to flip coins a certain way. Perhaps she simply lied and said the coin landed heads each time, and didn't really give you time to see it well when it didn't. Perhaps she is substituting the coin as it lands. Perhaps, perhaps, perhaps.

My advice, of course, is to slow down. You might decide to be a good Bayesian, and preserve multiple hypotheses, updating marginally - but doing this means that you assume the correct hypothesis is in your prior set. There are a million hypotheses that can explain a given set of events, and the most cognitively available ones are those that allow you to be lazy.
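As a minimal sketch of "preserving multiple hypotheses, updating marginally" (the priors and the candidate hypotheses below are invented for illustration, not taken from the post):

```python
# Posterior over hypotheses after she correctly calls 3 flips in a row.
# The priors here are illustrative assumptions, not established numbers.
priors = {
    "fair coin, lucky": 0.98,   # calls succeed by chance
    "two-headed coin": 0.01,    # coin always lands heads, she calls heads
    "genuine psychic": 0.01,    # calls always succeed
}
likelihoods = {
    "fair coin, lucky": (1 / 2) ** 3,  # P(3 correct calls | fair coin)
    "two-headed coin": 1.0,
    "genuine psychic": 1.0,
}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
```

Under these assumed priors, about 86% of the posterior mass stays on the lucky-fair-coin hypothesis. Note also that a hypothesis left out of the set entirely, such as sleight of hand, can never receive any mass, which is exactly the laziness problem described above.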

Don't worry, take your time. If the issue matters enough to bother trying to update your model, taking five minutes to reflect is better than jumping the gun. And if you don't need to make any decisions, at least file everything away and decide that it's unclear instead of quickly responding with an overconfident "Aha, now I understand!" or worse, a "Eureka, I've solved it!"

Bayesian thinking gives you answers no faster than a rational accumulation of evidence can possibly allow, given the uncertainties that exist. Slow down. Updating rationally on the margin doesn't give you confident answers quickly. It can't. Trying to update faster isn't going to get you better answers now; it will get you worse answers more quickly.

Updating isn't a sprint to the answer, or even a race - it's a long-duration hike towards a still-unclear goal. If you imprudently start sprinting early because you think you see the goal, you're just going to hurt yourself, or get lost and never make it to the right destination. Take your time.


### In praise of contributing examples, analogies and lingo

LessWrong.com News - July 13, 2020 - 09:43
Published on July 13, 2020 6:43 AM GMT

Examples, analogies and lingo all make posts easier to understand. So then, I want to encourage people to write more comments like:

• X is an example of what you're describing.
• Y might be a good analogy for this.
• Z could be a good term for that.

I see comments like this sometimes, but much less frequently than I would like. And I suspect that readers often have thoughts like "Z could be a good term for that", but don't think it's worth saying. I think these comments are almost always worth saying.

Crowdsourcing

Examples, analogies and lingo seem like such great things to crowdsource. Even if the post author is intimately familiar with the post's content, I would expect 100 readers who are vaguely familiar with the content to do a better job of coming up with examples, analogies and lingo than the single post author. They're just the sorts of things that benefit a lot from a diversity of viewpoints, it seems to me.

Non-expert explanation

Let's say I'm wrong in the above section and the reality is that the post author is better than the crowd at coming up with good examples, analogies and phrases. Even so, sometimes the "good" example from the post author doesn't make much sense to a particular reader whereas a "mediocre" example from a commenter is what makes it click.

Post authors don't always do a good job

Coming up with good examples, analogies and lingo is hard! There are a lot of times when I have an idea in my mind that I want to write about but get stuck because I have trouble coming up with good examples, analogies or lingo. There are also times when I post it anyway, knowing that I haven't done a good job with those things.

I doubt I'm alone. I doubt that every time someone clicks "Submit" on a post, they think they've done as good a job as possible with the examples, analogies and lingo they've used.


### Six economics misconceptions which I've resolved over the last few years

LessWrong.com News - July 13, 2020 - 06:01
Published on July 13, 2020 3:01 AM GMT

Here are six cases where I was pretty confident in my understanding of the microeconomic analysis of something, but then later found out I was missing an important consideration.

Here’s the list of mistakes:

• I thought divesting from a company had no effect on the company.
• I thought that the prices on a prediction market converged to the probabilities of the underlying event.
• I thought that I shouldn't expect to be able to make better investment decisions than buying index funds.
• I believed the simple Econ 101 story that externalities are best handled by taxing the polluter.
• I didn't realize that regulations like minimum wages are analogous to taxes in that they disincentivize work.
• I misunderstood the economics of price controls.

In all of these, I’m not talking about the empirical situation at all--I’m just saying that I had a theoretical analysis which I think turned out to be wrong. It’s possible that in many real situations, the additional considerations I’ve learned about don’t actually affect the outcome very much. But it was still an error to not know that those considerations were potentially relevant.

1. Divestment

I used to believe that personally not investing in a company didn't affect its share price, and therefore had no impact on the company. I guess my reasoning here was something like "If the share is worth $10 and you sell it, someone else will just buy it for $10, so the price won't change". I was treating shares as if they were worth some fixed amount of money.

The simplest explanation for why you can’t just model shares as being worth fixed amounts of money is that people are risk averse, and so the tenth Google share you buy is worth less to you than the first; and so as the price decreases, it becomes more worthwhile to take a bigger risk on the company.
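A toy sketch, assuming a log-utility investor and made-up payoff numbers (none of this is from the post), of why an extra share of the same risky asset is worth less the more of it you already hold:

```python
import math

# Log-utility investor: u(w) = ln(w). We measure the value of a risky
# position by its certainty equivalent: the sure wealth giving the same
# expected utility.
def certainty_equivalent(wealth, exposure):
    # The risky asset pays +50% or -30% on the exposed amount, 50/50.
    up = wealth + 0.5 * exposure
    down = wealth - 0.3 * exposure
    expected_utility = 0.5 * math.log(up) + 0.5 * math.log(down)
    return math.exp(expected_utility)

base = 100_000
# Marginal value of one more $1,000 of exposure, at low vs high exposure:
low = certainty_equivalent(base, 1_000) - certainty_equivalent(base, 0)
high = certainty_equivalent(base, 50_000) - certainty_equivalent(base, 49_000)
```

The first $1,000 of exposure raises the certainty equivalent by roughly the bet's expected value, while the fiftieth $1,000 adds noticeably less, so a lower price is needed to induce the marginal buyer to take on more of the same risk.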

As a result, divestment reduces the price of shares, in the same way that selling anything else reduces its price.

In the specific case of divestment, this means that when I sell some stocks, the price ends up lower than it otherwise would have been.

2. Index funds

I used to think that it wasn’t possible for individuals like me to get higher returns than I’d get from just buying an index fund, because in an efficient market, every share is equally valuable.

This is wrong for a few reasons. One is that the prices of shares are determined by the risk aversion of other market participants; if your risk aversion is different from the average, some shares (specifically, risky ones) will be much better investments than others.

Secondly, because I’m risk averse, I prefer buying shares which are going to do relatively well in worlds where I’m relatively poorer. For example, if I’m a software engineer at a tech company, compared to a random shareholder I should invest more in companies which are as anticorrelated with software engineer salaries as possible. Or if I live in the US, I should consider investing in the markets of other countries.

I didn’t understand this fully until around April this year.

3. Prediction markets

Relatedly, I thought that the fair market price of a contract which pays out $1 if Trump gets elected is just the probability of Trump getting elected. This is wrong because Trump getting elected is correlated with how valuable other assets are. Suppose I thought that Trump has a 50% chance of getting re-elected, and that if he gets re-elected, the stock market will crash. If I have a bunch of my money in the stock market, the contract is worth more than 50 cents, because it hedges against Trump winning.

(Here's a maybe more intuitive way of seeing this: Suppose I could pick between getting $10 in the world where Trump won (in which we're assuming the market would crash) and getting $10 in the world where Trump lost. Clearly I'd rather get the $10 in the world where he won. So the value of the "Trump wins" contract is higher than the value of the "Trump loses" contract, even though they correspond to events of equal probability.)

This is a more general version of the point that it's hard to have a prediction market on whether the world will end. Paul Christiano has an old blog post on this topic which I first saw years ago but which I didn't understand properly at the time. I first understood this fully around March this year.

All of these first three mistakes were the result of me not really understanding basic portfolio theory; thanks to spending a bunch of time talking to traders over the last few years, I now understand it better.

4. Coase's arguments about externalities

I used to have an overly simplistic picture of externalities--I believed the Econ 101 story: normally markets are efficient, but when a good has an externality the wrong amount will be produced, and this is resolved by putting a tax or subsidy on the good to internalize the externality. I changed my mind about this after reading David Friedman's essay. I'll just quote a few paragraphs:

> The first step is to realize that an external cost is not simply a cost produced by the polluter and born by the victim. In almost all cases, the cost is a result of decisions by both parties. I would not be coughing if your steel mill were not pouring out sulfur dioxide. But your steel mill would do no damage to me if I did not happen to live down wind from it. It is the joint decision—yours to pollute and mine to live where you are polluting—that produces the cost.
>
> Suppose that, in a particular case, the pollution does $100,000 a year worth of damage and can be eliminated at a cost of only $80,000 a year (from here on, all costs are per year). Further assume that the cost of shifting all of the land down wind to a new use unaffected by the pollution—growing timber instead of renting out summer resorts, say—is only $50,000. If we impose an emission fee of a hundred thousand dollars a year, the steel mill stops polluting and the damage is eliminated—at a cost of $80,000. If we impose no emission fee the mill keeps polluting, the owners of the land stop advertising for tenants and plant trees instead, and the problem is again solved—at a cost of $50,000. In this case the result without Pigouvian taxes is efficient—the problem is eliminated at the lowest possible cost—and the result with Pigouvian taxes is inefficient.
>
> Moving the victims may not be a very plausible solution in the case of air pollution; it seems fairly certain that even the most draconian limitations on emissions in southern California would be less expensive than evacuating that end of the state. But the problem of externalities applies to a wide range of different situations, in many of which it is far from obvious which party can avoid the problem at lower cost and in some of which it is not even obvious which one we should call the victim.

My previous position was missing this nuance. I first read that David Friedman essay mid last year.
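To make the arithmetic in Friedman's example explicit (the numbers are his; the comparison is the only thing added):

```python
# Friedman's numbers, all per year.
damage = 100_000          # pollution damage if the mill pollutes and the resorts stay
abatement_cost = 80_000   # cost for the steel mill to stop polluting
relocation_cost = 50_000  # cost of switching the downwind land to growing timber

# With an emission fee equal to the damage, the mill prefers to abate:
cost_with_tax = abatement_cost
# With no fee, the mill keeps polluting; the landowners relocate,
# since that is cheaper for them than bearing the damage:
cost_without_tax = min(damage, relocation_cost)
```

The efficient outcome is the cheapest of the three responses ($50,000 of relocation), and in this constructed case the Pigouvian tax forecloses it.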

5. Non-tax regulations that increase equality have disincentive effects on work

I used to think that the way to decide whether a minimum wage was good was to look at the effect on unemployment and the effect on total income for minimum wage workers, and then figure out whether I thought that the increase in unemployment was worth the increase in income. I think this was wrong in two pretty different ways.

The first mistake is that I was neglecting the fact that policies aimed at transferring wealth from rich people to poor people disincentivize making money. Taxes are just a special case of this. So when you’re arguing that a minimum wage would be part of the optimal policy portfolio, you have to argue that it would be better than a tax. I did not understand that this was part of the calculation.

I first learned this from a post by Paul Christiano which I think he incorporated into Objection 2 here; that blog post was published March 2019.

6. Price and quality controls

The second of the ways I was wrong about the minimum wage comes from a misunderstanding of the economics of price controls; in hindsight I think that my high school economics curriculum was just wrong about this. I think that I realized my misconception after reading The Dark Lord’s Answer, published 2016.

In high school economics, I was taught that when the government imposes a price floor (eg a minimum wage), you’ll end up with more supply than demand for the good. This is beneficial to suppliers who still succeed at selling the good, it’s harmful to suppliers who can no longer sell the good, and it’s harmful to buyers.

I now think that that understanding was overly simplistic. Here’s my current understanding.

In a market, the supply and demand of a good must equilibrate somehow--for every loaf of bread that someone buys, someone had to sell a loaf of bread. One way that the market can equilibrate is that the price can change--if the price is higher, selling is more attractive and buying is less attractive. So if more people want to buy than sell at the current price, we might expect the price to rise until things are in equilibrium.
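The price-adjustment story can be sketched with a toy linear model (all curves and numbers below are made up for illustration, not taken from the post):

```python
# Toy supply and demand for loaves of bread (illustrative numbers only).
# Demand falls and supply rises with price, so there is a unique price
# at which the quantities match.

def demand(price):
    return 100 - 20 * price  # loaves buyers want at this price

def supply(price):
    return -20 + 40 * price  # loaves sellers offer at this price

# Solve demand(p) == supply(p):  100 - 20p = -20 + 40p  =>  60p = 120
equilibrium_price = 120 / 60
print(equilibrium_price)          # 2.0
print(demand(equilibrium_price))  # 60.0 loaves
print(supply(equilibrium_price))  # 60.0 loaves
```

Below $2 buyers want more loaves than sellers offer, above $2 the reverse, so the price is pushed toward the crossing point.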

But there are other things than price which can vary in a way that allows the market to equilibrate. One obvious example is product quality--if you decrease the quality of a product, consumers are less enthusiastic about buying but suppliers are more enthusiastic about selling (because they can presumably make it for cheaper).

Often, fluctuations in quality rather than price are what cause markets to equilibrate. For example, restaurants often don’t have price hikes at busy times, they just have long waits. Customers like it less when they have to wait more, and restaurants like having customers waiting (because it helps them ensure that their restaurant is constantly full).

So when we talk about the equilibrium state of a market, we can’t just talk about price, we also need to talk about all the other variables which can change.

In the case where we only consider price and quantity, there’s always only one equilibrium, because as price increases, supply rises and demand falls. (Actually, supply and demand could be constant over some range of prices, in which case there is an interval of equilibrium prices. I’m going to ignore this.)

But if we’re allowed to vary quality too, there are now many possible settings of price and quality where supply equals demand. Eg for any fixed quality level, there’s going to be one equilibrium price, for the same reason as before.

In a competitive market, the equilibrium will be the point on the supply-equals-demand curve which maximizes efficiency. Eg if there's a way that producers could increase quality that would make production cost $1 more, producers will only do that if it makes the product more than $1 more valuable to consumers. This is optimal.

(In real life, you usually have producers selling a variety of different similar goods at different price/quality points; I’m talking about this restricted case because it’s simpler.)

Now, suppose that the government imposes a restriction on price or quality. For example, they might set a maximum or minimum price, or they might make safety restrictions which restrict quality in certain ways. The market will re-equilibrate by using whatever degrees of freedom it has left. Specifically, it will re-equilibrate to the optimal point within the newly restricted space of points at which supply equals demand. In general, this will lead to a less efficient outcome.

For example, if the price of bread is $2 at equilibrium, and the government sets a maximum price of $1.50, then the equilibrium will move along the quality curve until it gets to the point where the equilibrium price is $1.50.

This analysis gets more realistic if you allow there to be more dimensions than price and quantity along which bread can vary. For example, I’d expect to see the following phenomena:

• Producers trying to figure out ways to get paid under the table, eg by demanding favors in return for selling to people. This reduces efficiency inasmuch as producers weren’t already being compensated by miscellaneous favors.
• Sellers changing in ways that are mildly more convenient for them but much more inconvenient for consumers. For example, having long lines outside stores, or treating customers worse.
• Producers indulging weak preferences of theirs in who they sell to (eg nepotism).

In the case of minimum wages, I’d expect to see employers do things like engaging in wage theft which the employees tolerate (which is inefficient because it increases variance for employees) or being inflexible and unpleasant. This analysis would predict that wage theft is much more common among minimum wage employees than employees at higher wages.

One way of thinking about the efficiency of this is to think from the perspective of the producers. They have to pick some change that makes the price of the bread $1.50. There are many ways they could reduce the price to $1.50. They’re going to pick the way that is best for them.
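A sketch of the quality-adjustment story, extending the bread example with made-up linear curves (the functional forms and all numbers are assumptions for illustration): quality q shifts demand up and supply down, so each quality level has its own market-clearing price, and a price ceiling pushes the market down the quality curve.

```python
# Hypothetical curves: demand = 100 - 20p + 30q, supply = -20 + 40p - 10q,
# where q is quality relative to a baseline of 0. Setting them equal gives
# the market-clearing price as a function of quality.

def clearing_price(q):
    # 100 - 20p + 30q = -20 + 40p - 10q  =>  60p = 120 + 40q
    return (120 + 40 * q) / 60

print(clearing_price(0))  # 2.0 at baseline quality

# With a $1.50 ceiling, quality falls until the market clears at $1.50:
q_cap = (1.5 * 60 - 120) / 40
print(q_cap)                   # -0.75: quality below baseline
print(clearing_price(q_cap))   # 1.5
```

The ceiling doesn't stop the market from clearing; it just forces the clearing to happen through degraded quality instead of price.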

In some cases, this leads to almost no value being destroyed at all. For example, in the bread case, sellers might sell smaller loaves, which might be almost as efficient if you dubiously assume that the main cost of bread is flour. The worst case is that there’s no way for the seller to change the product in a way which benefits them, and so they end up changing it in a way which makes them very little better off.

The welfare impact of this kind of regulation is also affected by redistributive effects. For example, if bakers decide to only sell bread to their friends and family, this has a positive redistributive effect if the friends and family of bakers are poorer than average.

An example where the redistributive effect might make the world much better: Suppose that there’s demand for 100 loaves of bread, where half of that comes from poor people who want to feed their children and the other half comes from a tech billionaire who wants to make a giant bread sculpture. If the baker ends up selling to people who are most willing to stand in lines, then this might lead to a better outcome. (Getting this result requires making some pretty strong assumptions about the shape of the relevant curves.)

Another example is that you might expect that in a world where the minimum wage causes low paid jobs to be more unpleasant, teenagers will be less inclined to take the jobs and poor adults will end up having relatively more of the jobs. It’s possible to set things up such that this ends up increasing total welfare.

Conclusion

It’s embarrassing that I was confidently wrong about my understanding of so many things in the same domain. I’ve updated towards thinking that microeconomics is trickier than most other similarly simple-seeming subjects like physics, math, or computer science. I think that the above misconceptions are more serious than any misconceptions about other technical fields which I’ve discovered over the last few years (except maybe the aestivation hypothesis thing).

In three of these cases (4, 5, and 6), I had incorrect beliefs that came from my high school economics class. In those three cases, the correct understanding makes government intervention look worse. I think that this is not a coincidence--I think that the people who wrote the IB economics curriculum are probably leftist and this colored their perception.

On the other hand, in the other cases, I assumed that the equilibria of markets had a variety of intuitive properties that they turn out not to have.

One obvious question is: how many more of these am I going to discover over the next year or two?

I think my median guess is that over the next year I will learn two more items that I think deserve to go on this list. Of course, I’m now a lot more cautious about being confident about microeconomics arguments, so I don’t expect to be as confidently wrong as I was about some of these.

In most of these cases, there was a phase where I no longer believed the false thing but didn’t properly understand the true thing. During this phase, I wouldn’t have made bets. Currently I’m in the “not making bets” phase with regard to a few other topics in economics; hopefully in a year I’ll understand them.

Discuss

### What is the current state of knowledge around COVID-19 at-home remedies?

LessWrong.com News - July 13, 2020 - 01:50
Published on July 12, 2020 10:50 PM GMT

It's been ~4 months since COVID-19 became a globally relevant topic; most of the treatment work that I've seen is focused on reducing mortality in the context of high-severity cases (usually post-hospitalization), ie remdesivir, dexamethasone etc. I wanted to share my current synthesis around "what should I take if I'm experiencing covid-like symptoms?"; I would greatly appreciate any feedback, as well as info around other positive-effect-on-current-balance-of-evidence treatments that I've missed.

Discuss

### What counts as defection?

LessWrong.com News - July 13, 2020 - 01:03
Published on July 12, 2020 10:03 PM GMT

Thanks to Michael Dennis for proposing the formal definition; to Andrew Critch for pointing me in this direction; to Abram Demski for proposing non-negative coalition weighting; and to Alex Appel, Scott Emmons, Evan Hubinger, philh, Rohin Shah, and Carroll Wainwright for their feedback and ideas.

There's a good chance I'd like to publish this at some point as part of a larger work. However, I wanted to make the work available now, in case that doesn't happen soon.

They can't prove the conspiracy... But they could, if Steve runs his mouth.

The police chief stares at you.

You stare at the table. You'd agreed (sworn!) to stay quiet. You'd even studied game theory together. But, you hadn't understood what an extra year of jail meant.

The police chief stares at you.

Let Steve be the gullible idealist. You have a family waiting for you.

Sunlight stretches across the valley, dappling the grass and warming your bow. Your hand anxiously runs along the bowstring. A distant figure darts between trees, and your stomach rumbles. The day is near spent.

The stags run strong and free in this land. Carla should meet you there. Shouldn't she? Who wants to live like a beggar, subsisting on scraps of lean rabbit meat?

In your mind's eye, you reach the stags, alone. You find one, and your arrow pierces its barrow. The beast bucks and bursts away; the rest of the herd follows. You slump against the tree, exhausted, and never open your eyes again.

You can't risk it.

People talk about 'defection' in social dilemma games, from the prisoner's dilemma to stag hunt to chicken. In the tragedy of the commons, we talk about defection. The concept has become a regular part of LessWrong discourse.

Given that the prisoner's dilemma is the bread and butter of game theory and of many parts of economics, evolutionary biology, and psychology, you might think that someone had already formalized 'defection'.

Informal definition. A player defects when they increase their personal payoff at the expense of the group.

This informal definition is no secret, being echoed from the ancient Formal Models of Dilemmas in Social Decision-Making to the recent Classifying games like the Prisoner's Dilemma:

you can model the "defect" action as "take some value for yourself, but destroy value in the process".

To my knowledge, no one else has actually formalized this and proved theorems about it (if someone has, let me know).

Formalism

Consider a finite n-player normal-form game, with player i having pure action set Ai and payoff function Pi:A1×…×An→R. Each player i chooses a strategy si∈Δ(Ai) (a distribution over Ai). Together, the strategies form a strategy profile s:=(s1,…,sn). s−i:=(s1,…,si−1,si+1,…,sn) is the strategy profile, excluding player i's strategy. A payoff profile contains the payoffs for all players under a given strategy profile.

A coalition (αj)j=1,…,n is a set of n non-negative weights (as in Harsanyi's utilitarian theorem). You can consider the weights as quantifying each player's contribution; they might represent a perceived social agreement or be the explicit result of a bargaining process. When all αj are equal, we'll call that a fair coalition. However, if there are "utility monsters", we can downweight them accordingly. Note that we are not considering cooperative games.

Definition. Player i's action a∈Ai is a defection against strategy profile s and coalition (αj)j=1,…,n  if

1. Personal gain: Pi(a,s−i) > Pi(si,s−i)
2. Coalitional loss: ∑j αjPj(a,s−i) < ∑j αjPj(si,s−i)

If such an action exists for some player i, strategy profile s, and coalition, then we say that defection can exist in the game.
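As a sanity check, the two conditions can be encoded directly. The sketch below assumes the Prisoner's Dilemma payoffs implied by the figure, (T, R, P, S) = (4, 3, 1, 0), and only handles pure strategy profiles with a fair (equal-weight) coalition:

```python
# Direct encoding of the defection definition for a 2-player game with
# pure strategy profiles and a fair coalition. Payoffs (T, R, P, S) =
# (4, 3, 1, 0) are the ones implied by the post's figure.

PAYOFFS = {  # (action of player 0, action of player 1) -> (payoff 0, payoff 1)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 4),
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),
}

def is_defection(player, action, profile):
    """Check both conditions: personal gain and coalitional loss.

    `player` is 0-indexed; `profile` is a pure strategy profile.
    """
    deviated = list(profile)
    deviated[player] = action
    old = PAYOFFS[tuple(profile)]
    new = PAYOFFS[tuple(deviated)]
    personal_gain = new[player] > old[player]
    coalitional_loss = sum(new) < sum(old)  # fair coalition: equal weights
    return personal_gain and coalitional_loss

print(is_defection(1, "D", ("C", "C")))  # True: gains 4 > 3, coalition 6 -> 4
print(is_defection(0, "C", ("D", "D")))  # False: no personal gain at the Nash equilibrium
```

The second call illustrates proposition 3: against the Nash equilibrium (D, D), no deviation yields a personal gain, so no defection exists there.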

Payoff profiles in the Prisoner's Dilemma. Red arrows represent defections against pure strategy profiles; player 1 defects vertically, while player 2 defects horizontally. For example, player 2 defects with (C1,C2)→(C1,D2) because they gain personally (4 > 3) but the coalition loses out (4 < 6).

Our definition seems to make reasonable intuitive sense. In the tragedy of the commons, each player rationally increases their utility, while imposing negative externalities on the other players and decreasing total utility. A spy might leak classified information, benefiting themselves and Russia but defecting against America.

Definition. Cooperation takes place when a strategy profile is maintained despite the opportunity for defection.

Theorem 1. In constant-sum games, defection cannot exist against fair coalitions.

Theorem 2. In common-payoff games, defection cannot exist against any coalition.

Proposition 3. Defection cannot exist against Nash equilibria.

An action a∈Ai is a Pareto improvement over strategy profile s if, for all players j,Pj(a,s−i)≥Pj(si,s−i).

Proposition 4. Pareto improvements are never defections.

Game Theorems

We can prove that formal defection exists in the trifecta of famous games. Feel free to skip proofs if you aren't interested.

A 2×2 symmetric game is a Prisoner's Dilemma when T > R > P > S. Unsurprisingly, formal defection is everywhere in this game.

Theorem 5. In 2×2 symmetric games, if the Prisoner's Dilemma inequality is satisfied, defection can exist against fair coalitions.

Proof. Suppose the Prisoner's Dilemma inequality holds. Further suppose that R > (T+S)/2. Then 2R > T+S. Then since T > R but T+S < 2R, both players defect from (C1,C2) with Di.

Suppose instead that R ≤ (T+S)/2. Then T+S ≥ 2R > 2P, so T+S > 2P. But P > S, so player 1 defects from (C1,D2) with action D1, and player 2 defects from (D1,C2) with action D2. QED.

A 2×2 symmetric game is a Stag Hunt when R > T ≥ P > S. In Stag Hunts, due to uncertainty about whether the other player will hunt stag, players defect and fail to coordinate on the unique Pareto optimum (Stag1, Stag2). In (b), player 2 will defect (play Hare2) when P(Stag1) < 1/2. In Stag Hunts, formal defection is always against mixed strategy profiles, which lines up with defection in this game being due to uncertainty.

Theorem 6. In 2×2 symmetric games, if the Stag Hunt inequality is satisfied, defection can exist against fair coalitions.

Proof. Suppose that the Stag Hunt inequality is satisfied. Let p be the probability that player 1 plays Stag1. We now show that player 2 can always defect against strategy profile (p,Stag2) for some value of p.

For defection's first condition, we determine when P2(p,Stag2)<P2(p,Hare2):

pR + (1−p)S < pT + (1−p)P
p < (P−S) / ((R−T) + (P−S)).

This denominator is positive (R > T and P > S), as is the numerator. The fraction clearly falls in the open interval (0,1).

For defection's second condition, we determine when

P1(p,Stag2) + P2(p,Stag2) > P1(p,Hare2) + P2(p,Hare2)
2pR + (1−p)(T+S) > p(S+T) + (1−p)2P
p > (1/2) · ((P−S) + (P−T)) / ((R−T) + (P−S)).

Combining the two conditions, we have

1 > (P−S) / ((R−T) + (P−S)) > p > (1/2) · ((P−S) + (P−T)) / ((R−T) + (P−S)).

Since P−T<0, this holds for some nonempty subinterval of [0,1). QED.
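A quick numeric check of Theorem 6 with hypothetical payoffs (R, T, P, S) = (4, 3, 2, 1), chosen to satisfy R > T ≥ P > S; for these numbers the two conditions above predict a defection interval of 0 < p < 1/2:

```python
# Numeric sanity check of Theorem 6. Payoffs (R, T, P, S) = (4, 3, 2, 1)
# are assumed for illustration; they satisfy the Stag Hunt inequality.
R, T, P, S = 4, 3, 2, 1

def p2_payoff(p, action2):
    # player 1 plays Stag1 with probability p; player 2 plays action2
    if action2 == "stag":
        return p * R + (1 - p) * S
    return p * T + (1 - p) * P

def coalition_payoff(p, action2):
    # total payoff of both players (fair coalition: equal weights)
    if action2 == "stag":
        return p * 2 * R + (1 - p) * (S + T)
    return p * (T + S) + (1 - p) * 2 * P

p = 0.25  # inside the predicted defection interval (0, 1/2)
personal_gain = p2_payoff(p, "hare") > p2_payoff(p, "stag")
coalitional_loss = coalition_payoff(p, "hare") < coalition_payoff(p, "stag")
print(personal_gain and coalitional_loss)  # True: Hare2 is a formal defection here
```

At p = 0.25, player 2's Hare payoff is 2.25 against 1.75 for Stag, while the coalition total drops from 5 to 4, so both conditions hold.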

A 2×2 symmetric game is Chicken when T > R ≥ S > P. In (b), defection only occurs when 10/11 < P(Turn1) < 21/22: when player 1 is very likely to turn, player 2 is willing to trade a bit of coalitional payoff for personal payoff.

Theorem 7. In 2×2 symmetric games, if the Chicken inequality is satisfied, defection can exist against fair coalitions.

Proof. Assume that the Chicken inequality is satisfied. This proof proceeds similarly to the proof of theorem 6. Let p be the probability that player 1's strategy places on Turn1.

For defection's first condition, we determine when P2(p,Turn2)<P2(p,Ahead2):

pR + (1−p)S < pT + (1−p)P
p > (P−S) / ((R−T) + (P−S))
1 ≥ p > (S−P) / ((T−R) + (S−P)) > 0.

The inequality flips in the first equation because of the division by (R−T)+(P−S), which is negative (T > R and S > P). Since S > P, the lower bound is positive, so p > 0; this reflects the fact that (Ahead1,Turn2) is a Nash equilibrium, against which defection is impossible (proposition 3).

For defection's second condition, we determine when

P1(p,Turn2) + P2(p,Turn2) > P1(p,Ahead2) + P2(p,Ahead2)
2pR + (1−p)(T+S) > p(S+T) + (1−p)2P
p < (1/2) · ((S−P) + (T−P)) / ((T−R) + (S−P)).

The inequality again flips because (R−T)+(P−S) is negative. When R ≤ (T+S)/2, we have p < 1, in which case defection does not exist against a pure strategy profile.

Combining the two conditions, we have

(1/2) · ((S−P) + (T−P)) / ((T−R) + (S−P)) > p > (S−P) / ((T−R) + (S−P)) > 0.

Because T > S,

(1/2) · ((S−P) + (T−P)) / ((T−R) + (S−P)) > (S−P) / ((T−R) + (S−P)).

QED.
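Reverse-engineering the bounds quoted for the Chicken figure: the payoffs (T, R, S, P) = (11, 10, 10, 0) are an assumption (the post doesn't give them), but they satisfy T > R ≥ S > P and reproduce 10/11 < p < 21/22 from Theorem 7's two conditions:

```python
# Hypothetical Chicken payoffs that reproduce the figure's defection
# interval. (T, R, S, P) = (11, 10, 10, 0) satisfies T > R >= S > P.
T, R, S, P = 11, 10, 10, 0

lower = (S - P) / ((T - R) + (S - P))                    # personal-gain bound
upper = 0.5 * ((S - P) + (T - P)) / ((T - R) + (S - P))  # coalitional-loss bound
print(round(lower, 6))  # 0.909091  (= 10/11)
print(round(upper, 6))  # 0.954545  (= 21/22)
```

So for these payoffs, player 2's Ahead2 is a formal defection exactly when player 1 turns with probability strictly between 10/11 and 21/22.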

Discussion

This bit of basic theory will hopefully allow for things like principled classification of policies: "has an agent learned a 'non-cooperative' policy in a multi-agent setting?". For example, the empirical game-theoretic analyses (EGTA) of Leibo et al.'s Multi-agent Reinforcement Learning in Sequential Social Dilemmas say that apple-harvesting agents are defecting when they zap each other with beams. Instead of using a qualitative metric, you could choose a desired non-zapping strategy profile, and then use EGTA to classify formal defections from that. This approach would still have a free parameter, but it seems better.

I had vague pre-theoretic intuitions about 'defection', and now I feel more capable of reasoning about what is and isn't a defection. In particular, I'd been confused by the difference between power-seeking and defection, and now I'm not.

Discuss

### Possible new pneumonia in Kazakhstan

LessWrong.com News - July 12, 2020 - 23:41
Published on July 12, 2020 8:41 PM GMT

Epistemic status: extremely uncertain

There are contested reports of a new type of pneumonia, significantly more lethal than Covid-19, in Kazakhstan. Its cause is unknown.

Discuss

### School Has It Backwards

LessWrong.com News - July 12, 2020 - 22:28
Published on July 12, 2020 7:24 PM GMT

Crossposted from Curious Human.

In school, we're constantly being asked questions and instantly rewarded when we find the right answers. We're praised in class for saying "25" when we're asked what "5x5" is. We're given perfect scores on tests when we answer every question correctly. We get good grades for repeatedly getting good scores. Students who give the right answers when asked are "good students".

We're also instantly punished when we propose the wrong answers. We get scolded, lose points, and get poor grades. Students who don't give the right answers are "bad students".

For 16+ years (12 grades + 4 years of higher education) we're literally conditioned, in artificial learning environments, to find the right answers to other people's questions. How often do students finish a paper assigned to them in a class and want to write more on the subject? Rarely, if ever. Because the questions - the essay prompts - were never truly ours in the first place.

It's the same for every subject. Take the way math is taught in schools. General concepts are broken down into extremely specific questions. Students are then given a list of methods to answer each specific type of question, which they memorize for the sole purpose of using them to quickly find answers on tests. This process only makes sense for basic, fundamental concepts (times tables, finding the areas of common shapes, etc.); anything more complex and it quickly becomes unnecessarily complicated. Even worse, it detaches math from reality and ruins its purpose, which is to allow us to take first principles, generalize them, and apply them to as many real-world cases as possible.

We weren't designed to learn this way. Have you ever seen a child learn something new? They're playing, and suddenly something sparks their interest. They come up with a question. You see their eyes light up as their curiosity takes over. At that point, they're insatiably motivated to learn - it's only natural.

The things you learn by yourself stick; the things that are “taught” to you do not stick. - Nabeel Qureshi

This way of learning - by starting with our own questions - isn't just more effective. It also gives our lives meaning. I think the beauty in life is precisely in finding our own big questions - with no "correct" answers - that we want to spend our time solving. One of Elon Musk's is "How do I get to and live sustainably on Mars?". Albert Einstein's were "What the fuck is up with light and matter, the motion of particles in a liquid, special relativity, and energy-mass equivalence?".

Our questions don't have to (and realistically won't) start that big. I like the way Paul Graham put it: "The way to get a big idea to appear in your head is not to hunt for big ideas, but to put in a lot of time on work that interests you, and in the process keep your mind open enough that a big idea can take roost."

So the fundamental principle of education should be to give students an environment, and tools, where they can make discoveries themselves. Teach them the most basic foundations of knowledge - math and language - and give them space, time, and autonomy to explore. All human beings are born curious (how could we not be; the world is fascinating!). Don't destroy it by training people to search for answers. Life isn't about getting the right answers; it's about discovering the right questions.

Discuss

### What is the current process for increased testing?

LessWrong.com News - July 12, 2020 - 20:21
Published on July 12, 2020 5:21 PM GMT

I was just thinking about the current state of things in the USA and realized I don't really have any clue about just how the increased testing is getting done.

Part of the reason I was musing about this was the idea of doing testing a bit like sobriety stops on roads to check for drunk/intoxicated driving. Those stops do reduce drunk driving and get some people to use a designated driver or take an Uber/taxi home.

If part of the problem is getting people, particularly younger people but really everyone who stubbornly refuses to do simple things like wearing masks and keeping a reasonable distance from others, to change their behavior, then such stop-and-test checkpoints might change the equation a bit.

Discuss

### Budapest meetup Margit Sziget, 7/19 2pm

LessWrong.com News - July 12, 2020 - 17:07
Published on July 12, 2020 2:07 PM GMT

I think the data says it is safe enough to meet outdoors, especially with the low underlying rate of infections in Hungary.
I'm hoping to get a general SSC/LW/EA group to meet here every month or so.
I'm a professional fan fiction writer, so I'll start the meeting with a short discussion of examples of people writing fiction influencing the world (like how Methods of Rationality has been a big deal).
And after that perhaps we could discuss this classic article from the sequences https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer
https://www.lesswrong.com/posts/K7k9bT5x3Pz7EGTAu/has-politics-is-the-mind-killer-been-a-mind-killer
So there will be at least two of us (and hopefully more) at Champs bar on Margit Island at 2pm next Sunday. I'll put out a big copy of one of Richard Dawkins' books in Hungarian that I got from a friend on the table as a signal that I'm there. So I hope to see you all there.

Discuss

### Science eats its young

LessWrong.com News - July 12, 2020 - 15:32
Published on July 12, 2020 12:32 PM GMT

Let's start by talking about scientific literacy. I'm going to use a weak definition of scientific literacy, one that simply requires familiarity with the Baconian method of inquiry.

I don't want to place an exact number on this issue, but I'd wager the vast majority of the population of "educated" countries is scientifically illiterate.

I - The gravity of the issue

I first got a hint that this could be a real issue when I randomly started asking people about the theory of gravity. I find gravity to be interesting because it's not at all obvious. I don't think any of us would have been able to come up with the concept in Newton's shoes. Yet it is taught to people fairly early in school.

Interestingly enough, I found that most people were not only unaware of how Newton came up with the idea of gravity, but not even in the right ballpark. I think I can classify the mistakes made into three categories, which I'll illustrate with an answer each:

1. The Science as Religion mistake: Something something, he saw apples falling towards earth, and then he wrote down the formula for gravity (?)
2. The Aristotelian Science mistake: Well, he observed that objects of different mass fell towards Earth with the same speed, and from that he derived that objects attract each other. Ahm, wait, hmmm.
3. The Lack of information mistake: Well, he observed something about the motion of the planets and the moon... and, presumably he estimated the mass of some, or, hmmm, no that can't be right, maybe he just assumed mass_sun >> mass_planet >> mass_moon and somehow he found that his formula accounted for the motion of the planets.

I should caveat this by saying I don't count mistake nr 3 as scientific illiteracy; I think most of us fall into that category most of the time. Ask me how gravity can be derived in principle and I might be able to make an educated guess, and maybe (once the observations are in) I could even derive it. But the chances of that are small: I probably wouldn't know the exact information I'd need that can be measured with 17th-century devices. I most certainly don't have the information readily sitting in my brain.
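(Checking the formula against everyday experience, at least, takes very little of that information: plugging modern values for G, Earth's mass, and Earth's radius into Newton's law reproduces the familiar surface acceleration.)

```python
# Sanity check of Newton's law against everyday experience. The constants
# are modern measured values; Newton himself did not have them.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # kg
R_EARTH = 6.371e6   # m

# F = G * m1 * m2 / r^2, so the acceleration of a test mass m is F/m = G*M/r^2
g = G * M_EARTH / R_EARTH**2
print(round(g, 2))  # 9.82 m/s^2
```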

It's mainly failure modes 1 and 2 that I'm interested in here.

II - And Science said: Let there be truth

I think failure mode 1 is best illustrated by the first youtube result if you search for "how newton discovered gravity". This failure mode includes two mistakes:

• Not understanding the basis of the actual theory (in this case 'gravity' is presented as "objects fall towards Earth", rather than objects attracting each other in proportion to their masses and inversely with the square of their distance).
• Not understanding the idea of evidence as a generator of theory.

In this failure mode, science works more or less like religion. There's a clergy (researchers, teachers, engineers) and there are various holy texts (school manuals, papers, specialized books).

I think a good indication of this failure mode is that people stuck here don't seem to differentiate between "what other humans in authority positions are saying" versus "what we observe in the world" as having fundamentally different epistemic weight.

Good examples here are e.g. young-earth creationists, people who believe the earth was created ~6000 years ago. Most of these kinds of people are obviously not scientists, but some are; a quick google search brings up Duane Gish (Berkeley Ph.D.) and Kurt Wise (professor at a no-name university in Georgia).

However, young-earth creationism is not the only unscientific belief system people have; there are insane conspiracy theories aplenty, from vaccines being brainwashing mechanisms to 5G causing viral infections.

This kind of insanity is usually not represented in people affiliated with scientific or engineering institutions, but I'm unsure it is for the right reasons.

That is to say, assume you think of science as a religion. Your epistemology is based on what other people tell you; you weigh that by their social rank and thus derive what you hold as "truth".

Assume you are a doctor who falls into this category and 70% of your friends tell you "5G towers cause covid-19". Well, then, you could probably start believing that yourself. But keep in mind, it's not only the number of people that matters; status also matters. If the priest tells you about the word of God, that counts 100x as much as the village idiot telling you about the word of God.

Even with this context, if our good doctor's boss tells him "covid-19 infection is caused by an airborne coronavirus that passes from human to human via various bodily fluids dispersed in the air and on objects", then whatever this boss told him would carry enough status to make him settle his opinion on the more scientifically correct explanation.

The problem here is that our good doctor would be unable to come up with this explanation on his own, even in a hypothetical; he lacks even the foundational epistemology required to understand how such answers can be derived.

Even worse, our doctor's boss could share his epistemology; all that would be needed is for her own boss to have told her the same thing, and she would have believed it in an instant.

This Science as a Religion worldview is likely sprinkled throughout engineers and scientists. The reason we don't see it is that for it to become obvious, one needs to start believing an obviously insane thing (e.g. young-earth creationism); however, the chance of this happening is fairly low, since it would require all their peers to also believe insane things.

As long as "correct" ideas are observed throughout his professional environment, unless he is socially inept, he will only hold the correct idea.

You would need to look at his research or question him on the scientific method or on his epistemology more broadly in order to spot this mistake. Sadly enough, I've yet to find a university that has "scientific epistemology" as a subject on the entrance exam or even as a graduation or employment requirement.

I won't speculate as to how many people who are called scientists and engineers fall into this failure mode. I think there's a gradient between this and failure mode nr 2.

However, it should be noted that this failure mode is unobvious until a new idea comes along. Then, the real scientists will assume it's probably false but judge it on its merit. The religious scientists will assume it's false because their peers haven't said it's right yet.

This is both an issue in regards to new ideas proliferating and an issue with the scientific consensus. Scientific consensus is valuable if you assume everyone polled reasoned their way through theory, independent research, and primary source dissection to reach a conclusion.

In a world where 90% of scientists just assume that science works like a religion, a 96%-4% consensus is not a good indicator for implementing policy; it's an indicator that the few real scientists are almost evenly split on the correct solution.

This is bleak stuff: if most scientists understood science as a religion, then the whole institution would be compromised. Not only would academia have to be thrown in the bin, but all evidence and theory produced for the last half-century would have to be carefully curated and replicated before it could be considered scientifically true.

Surface-level intuitions make me think there's a significant probability this might be the case with certain sub-fields. But my theory of mind, and the fact that science seems to keep progressing, tell me this is unlikely to be the case in relevant areas.

III - If there's a fit there's a way

In short, these are the people that don't understand why a regression being fit on all the data is different from using the same regression to determine correlation strength via cross-validation.
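To make that distinction concrete, here is a minimal sketch in pure NumPy, on deliberately pure noise (all the data and parameters below are made up for illustration): fitting a flexible regression on all the data always yields a respectable-looking in-sample fit, while cross-validating the same regression reveals that it predicts nothing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Deliberately pure noise: y has no real relationship to x.
x = rng.normal(size=40)
y = rng.normal(size=40)

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1 - ss_res / ss_tot

# Fit a degree-6 polynomial on ALL the data: a flexible model can
# always chase the noise, so the in-sample fit looks respectable.
coefs = np.polyfit(x, y, deg=6)
r2_insample = r_squared(y, np.polyval(coefs, x))

# Leave-one-out cross-validation: fit on 39 points, predict the
# held-out point. On pure noise, the out-of-sample predictions fail.
preds = np.empty_like(y)
for i in range(len(x)):
    mask = np.arange(len(x)) != i
    c = np.polyfit(x[mask], y[mask], deg=6)
    preds[i] = np.polyval(c, x[i])
r2_cv = r_squared(y, preds)

print("in-sample R^2:", round(r2_insample, 3))   # nonzero by construction
print("cross-validated R^2:", round(r2_cv, 3))   # near zero or negative
```

The in-sample number alone would tempt an Aristotelian to declare a real correlation; the cross-validated number is what actually tests the claim.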

I think most people, and most scientists, probably fall under the second failure mode: they are not Baconians or Popperians, but rather Aristotelians.

Aristotle understood the idea that we can observe the world and we can come up with theories about how it works based on observation.

What he lacked was a rigorous understanding of how observations should be undertaken. He was probably unaware of the idea of shared experimental-error standards and replication as the rules by which the validity of data can be compared.

He lacked an understanding of the language of probability which would allow him to formulate these experimental standards.

He lacked an understanding of falsifiability and Occam's razor; he didn't have a rigorous system for comparing competing theories.

In an Aristotelian framework, dropping 3 very heavy and well-lacquered balls towards Earth and seeing that they fall at the same rate, barring any wind, is enough to say F_G = G * m1 * m2 / r^2 is a true scientific theory.

If things like the constant G and the mass of the ball and the radius of the Earth are already known, then the Aristotelian has no issue with declaring the theory correct. He needn't ask:

• Why do you assume this holds for all objects? After all, the only thing we have observed is three objects falling towards Earth. Even more, the balls are too light to observe this effect between them.
• Why can this equation not be simpler? I could simplify this equation to only a single term if what you wished to describe is just the fall of objects towards the Earth, which is the only thing your experiment is showing anyway.
• Why is dropping 3 balls enough to derive anything? Why are 2 not enough, why aren't 100 needed? Also, why is weight the property in question here and not some other property of the ball? Maybe it works for lead balls but not for copper balls?
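The second objection can be made concrete with a few lines of arithmetic: for balls dropped near the Earth's surface, the full law F = G * m1 * m2 / r^2 and a one-term model F = g * m make identical predictions, so no number of drop experiments of this kind can distinguish them. The physical constants below are standard values; the ball masses are made up for illustration.

```python
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
earth_mass = 5.972e24    # kg
earth_radius = 6.371e6   # m

g = G * earth_mass / earth_radius ** 2   # the "single term", ~9.8 m/s^2

masses = [1.0, 5.0, 20.0]                # three hypothetical balls, kg
f_full = [G * earth_mass * m / earth_radius ** 2 for m in masses]
f_simple = [g * m for m in masses]

for m, a, b in zip(masses, f_full, f_simple):
    print(m, a, b)   # the two models agree for every drop
```

Only experiments outside the original setup, such as measuring the attraction between two balls, can separate the theories.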

I will grant I might be straw-manning Aristotle here; he would have been able to ask some of those questions, he just didn't have a rigorous framework from which to derive them. He was working from Aristotelian logic and intuition.

This seems to be the kind of failure that most people fall into, and why wouldn't they? It's an intuitive spot to be in.

To exemplify the sentiment, let me quote a former director of the Yale-Griffin Prevention Research Center, an organization I chose randomly because I came upon a pseudo-scientific article written by him:

But science was never intended to question the reliable answers we already had. Science can and should certainly invite us to question answers, too, but not all answers are subject to doubt.

The organization in question here seems perfectly respectable; their research is no worse than any other medical research (which is not high praise, I just want to say it's not an outlier).

This is the core of the Aristotelian mistake, the assumption that we shouldn't question everything, the assumption that the way the world works is mostly obvious. You should leave it alone and just look at it non-judgementally, not try to nitpick various edge cases in our understanding.

This is a good enough point of view from where one can do engineering, but obviously not so for science. The very purpose of science is to take "obvious" things and see where they become no longer "obvious" and try to come up with better theories that explain those edge cases... ad infinitum.

• In Galileo's time, it was obvious that the "nature" of an object dictated the speed with which it fell.
• In Newton's time, it was obvious that one couldn't apply the same laws of motion both on Earth and in "the heavens".
• When de Morveau was born, phlogiston caused fire.
• When Max Planck rose to prominence, the universe was obviously continuous and deterministic.
• Space and time were obviously separate and necessary entities to do physics when Einstein was beginning to operate.
• Nuclei were obviously indivisible until the 30s; ten years later they were divisible enough to be the basis of a weapon that could destroy humanity.

For an engineer, questioning the obvious is usually a waste of time, for a scientist, it's the only good use of time.

But, why is the Aristotelian mistake seemingly so common nowadays? Why do most "scientists" and virtually all people lack the understanding of how to reduce the world to rigorous predictive theories?

Because...

IV - Science eats its young

Imagine you are a computer scientist in the 50s. You can write programs in the form of binary punch cards and get some primordial von Neumann machines to execute them... sometimes. It's really hard; there are loads of bugs and loads of hardware restrictions.

Your program risks breaking the computer, returning a seemingly correct but actually erroneous result, or working just part of the time because of a physical error (e.g. an actual dead bug) in the room-sized monstrosity it's running on.

So obviously your work will require becoming a decent digital hardware engineer. You certainly know precisely how your computer functions, from the high-level components down to the fabrication method for the transistors and switches inside. That's because assuming computers "just work" means skipping over the biggest hurdle: the fact that computers are usually really bad at "just working", and the issue often lies in the hardware.

But skip forward to today and most programmers couldn't even describe how a CPU works in principle. And why would they? That's the magic of modern computers, the fact that you don't have to understand how a CPU works to write a website. But this would become problematic if some programmers suddenly had to work on computer hardware.

This is more or less the problem with science. We spend the first 20+ years of people's lives teaching them "obvious" things that just work. Theories that are well defined and have never failed, theories they could never derive nor judge the merit of. But nowadays we believe the theories are mostly correct, so we aren't teaching them anything wrong.

Maybe they are taught how to run experiments, but if their experiments contradict the "expected results" we just write it off as an error and tell them to try again; we don't pore over the setup until we discover the error. Replicative lab work in college requires proving that existing theories and observations are true, even though real replication should be focused on the exact opposite.

When people ask why something is true, they are given the Aristotelian explanation: well, look at cases x, y, z; it works in all of those cases, so it's true. That's because most teachers don't have the epistemology required to say anything else: they are Aristotelians. Why would they be otherwise?

By the time people have the "required context" to look at theories that are still under examination and only "kind of work but not really", they are in their mid-20s. But these are the only theories that matter, the only theories for which we still need science.

After 20+ years of teaching people that experiments are wrong if they generate unexpected results and that the universe is a series of theories that work because they work on some particular examples... we suddenly expect them to generate theories and experiment using a whole different epistemology.

On the other hand, a 14-year-old is probably not capable of scientific discovery, he would just be rediscovering obvious things people already know. So we see it as pointless to tell him "go out and do science the right way" if all the information produced is already known. I harp on about this more in Training our humans on the wrong dataset... so I won't restate that entire point, suffice to say, I think this is a horrible mistake.

The only way to teach people how to do science, to teach them how science works, and to get new and interesting discoveries that break out of the current zeitgeist... is to have them do it. Ideally have them do so starting at age 10, not at age 30. Ideally have 100% of the population doing it, even if just for the sake of understanding the process. Otherwise you end up with people that are rightfully confused as to what the difference between science and religion is.

But I think the issue goes even further:

V - Epistemic swamps and divine theories

A problem I kind of address in If Van der Waals was a neural network is that of missing information in science.

For some reason, presumably the lack of hard drives and search engines, people of the past were much more likely to record theories and discard experiments.

This seems to me to be one of the many artifacts the scientific establishment unwittingly carried over from times past. In the current world, we have enough space for storing as much experimental data as we want. From the results obtained at CERN down to every single high school with a laboratory.

But theory in itself is useless for the purpose of science. At most, it's a good mental crutch or starting point, since you'd rather not start from zero. Maybe if the inductive process by which it was derived is re-discovered it can serve as an example or inspiration, but in itself it has little value.

Indeed, I think theory can be rather harmful. Theory is a map of the world; it's a good starting point if one wants to extend the map, but a horrible starting point if one wants to correct it, since a lot of things are interlinked and it's hard to correct something without changing everything. It has built-in biases and mistakes that are hard to observe, especially if the original data and experimental setup are unavailable to us.

Finally, I don't wish to say that the "religious" failure mode and the "Aristotelian" failure modes are all bad.

The fact that most people don't have any basis for their ethics system and just learn it "religiously" from their peer group is a feature, not a bug. If people were convinced that going around killing people is OK until they could understand and found a reasonable ethical system that discourages murder, society couldn't exist.

In case you haven't noticed, this article and most of the stuff you read is "Aristotelian" in nature. I am not using all the evidence that I could be using, I am not providing ways to falsify my viewpoint, I am basing my arguments on pleasant rhetoric and a few key examples to illustrate them, examples for which I don't even have the exact data or an exact set of questions to replicate them.

If we couldn't start with "Aristotelian" thinking we would forever be in decision paralysis. Unable to come up with new ideas or say anything new about the world. The purpose of the scientific method is to bring extreme rigor to the things which are widespread and useful enough to require it. A fun chat about politics over a glass of wine is perfectly acceptable without hard evidence, implementing a policy that affects the lives of millions of people isn't.


### Rationality for Beginners

Events at Kocherga - July 12, 2020 - 14:00
An introductory lecture for those who have only recently learned about rationality and Kocherga and want an overview of the topic.

### Gödel Incompleteness: For Dummies

LessWrong.com news - July 12, 2020 - 12:13
Published on July 12, 2020 9:13 AM GMT

The Gödel incompleteness theorems are notorious for being surprising. I mean, they did put a hard stop to Hilbert's programme of completely, thoroughly formalizing mathematics.

And most laypeople find them very difficult to wrap their heads around.

It is, however, my belief that nothing should ever be unintuitive. That intuition, like conscious beliefs, can be trained. The territory is never weird, but your brain certainly can be.

My goal is not to give an intuitive explanation of the actual proof, nor even the actual statements, but to give you intuitive evidence that the statements make sense. The core idea is actually from E. T. Jaynes' Probability Theory, I'm only fleshing it out.

First, let's look at the statement of the second theorem, as cited on Wikipedia:

Assume F is a consistent formalized system which contains elementary arithmetic. Then F ⊬ Cons(F).

That is, F does not prove its own consistency.

Let's unwrap some of the definitions.

By "formalized system", we mean a bunch of statements that you could (assuming the Church-Turing thesis) fully enumerate, either with a computer, or, with a bit of patience, pencil and paper. Mostly you have a bunch of starting axioms and a few rules to produce more statements from them, which means you can just apply all of them in turn to produce all of the statements.
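That enumeration idea can be shown in a few lines of code. As a stand-in for arithmetic (which would be far too big to sketch here), the toy system below is Hofstadter's MIU string-rewriting system: one axiom, four rewrite rules, and a breadth-first loop that applies every rule to every known theorem, which enumerates all theorems.

```python
from collections import deque

def miu_successors(s):
    """All statements derivable from s in one step of the MIU rules."""
    out = set()
    if s.endswith("I"):            # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):          # Rule 2: Mx -> Mxx
        out.add(s + s[1:])
    for i in range(len(s) - 2):    # Rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):    # Rule 4: UU -> (deleted)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def enumerate_theorems(limit):
    """Breadth-first enumeration of theorems, starting from the axiom MI."""
    seen, queue, order = {"MI"}, deque(["MI"]), []
    while queue and len(order) < limit:
        s = queue.popleft()
        order.append(s)
        for t in sorted(miu_successors(s)):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return order

print(enumerate_theorems(6))
```

With enough patience (or paper), the same loop would list every theorem of the system; that mechanical listability is all "formalized" means here.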

By a "consistent" system, we mean one that's not completely insane. It doesn't believe anything contradictory. This has wider implications. In fact, if your system proves just one contradiction, it actually proves everything! Why? From a proven A and a proven ¬A you can derive any statement B: A gives you A ∨ B, and ¬A then eliminates A, leaving B. Or, more poetically: if you once tell a lie, the truth is ever after your enemy.
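This "explosion" step (ex falso quodlibet) can even be checked mechanically; in Lean 4, for instance, it is a one-liner using the core `absurd` lemma:

```lean
-- From a contradiction, any proposition B follows.
example (A B : Prop) (h : A ∧ ¬A) : B := absurd h.1 h.2
```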

And so, suppose we have a system -- no, a guy, standing in front of you, who is currently telling you that he is not, in fact, insane. Perhaps he tells you that he believes in the latest conspiracy theory surrounding Ghislaine Maxwell and Wayfair, and that you must also believe in it, since he trusts his own epistemic process (he believes that he is not, in fact, insane).

Naturally, you would be skeptical. An insane person can very well not realize that they are insane. Perhaps the guy has a habit of believing in absolutely everything, even contradictory things (he's a pretty inconsistent dude), in which case of course he would in particular believe that he's sane, though if you press him he might also tell you that he's lost his marbles (perhaps by digging too deep into a certain paranormal imageboard).

Well, now you understand Gödel's second incompleteness theorem. Since an inconsistent formal system would prove its own consistency anyway, the fact that F⊢Cons(F) would not be evidence that F is in fact consistent.

Now, let's look at the statement of the first theorem, again as cited on Wikipedia:

Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F.

This is rather simple, now that we have the second incompleteness theorem.

First off, since system F can perform arithmetic, it can do anything a computer can do, or a mathematician with pencil, paper, and too much time on their hands (an infinite amount, in principle). And since we assumed that F can be worked out by a computer, it can work itself out.

You know what else is capable of self-reference? You, dear reader.

Well, suppose then that you want to be sane, that you want to trust your epistemic process. Then, naturally you can't believe that you're insane. And yet you can't believe that you're sane either (who would believe you after they've learned about the second incompleteness theorem? Have some modesty!), for if you trust yourself too much, you risk becoming complacent. And thus you are doomed to forever doubt whether or not you are sane.

And therefore, you can neither believe nor disbelieve in your sanity.

For system F, obviously it can't disprove its own consistency, since we do expect its model of itself to be accurate. And yet we just saw above that it can't prove its own consistency either, and so the statement of its own consistency is internally undecidable.

The general strategy above has been to cross the subject-object distinction. The "subject" bit is the part of your brain which deals with people, and the "object" bit the part which deals with objects and machines.

People often (claim to) use only one of them depending on the task. But why not make them join forces?

We find it amusing when mathematical objects are framed as being people -- however, when people are framed as being machines, it is generally perceived to be absolutely horrifying. I say, let yourself be more often both amused and horrified.


### Covid-19: Analysis of Mortality Data

LessWrong.com news - July 12, 2020 - 00:30
Published on July 11, 2020 9:30 PM GMT

Data source: The CDC, my spreadsheets are here and here which should be easier to work with.

Previous Covid-19 work here that is potentially relevant these days: Covid 7/9: Lies, Damn Lies and Death Rates; Covid 7/2: It Could Be Worse; Covid-19: My Current Model; On R0; Taking Initial Viral Load Seriously; On "COVID-19 Superspreader Events in 28 Countries: Critical Patterns and Lessons"

Something very bizarre is happening with the death data.

This post attempts to figure out what the hell is going on. The obvious first explanations don’t seem to fit the data.

Death Reporting Delayed Not Death Reporting Denied, Is Actually Death Reporting Backdated Or Something

The C.D.C.’s data set takes a while to fill in. I downloaded my data set on July 10. At that time, the last week on record in the data was the week of June 27, already two weeks old.

That two-week delay didn't even seem to be enough. For the week of 6/27 we only see 49% of the previous year's overall death rate (note: throughout this post I smooth the 2019 numbers with the two surrounding weeks, whenever possible). For the week of 6/20 we only see 81%, and for the week of 6/13 we only see 95%. No other week is under 99%. The 95% is possible, but the 81% isn't. There is at least a four-week delay in reporting a substantial number of deaths.

Figure 1:

That’s a bit odd. It should not take a month to collect this data.

To get a sense of how this was interacting with the Covid-19 reporting, I charted deaths from Covid-19 from both sources.

Figure 2:

C.D.C. is reporting deaths earlier, with an even bigger spike in early April, then the number drops off rapidly. If you add everything up the C.D.C. is reporting 99% of the total number of deaths found in Wikipedia’s numbers – Wikipedia during this period reports 115,891 and C.D.C. reports 114,417. Covid Tracking Project has 118,439.

The week of June 13 the C.D.C. is missing 20% of the death count (4,436 vs. Wikipedia's 5,609), the week of June 20 it's missing almost half the count (2,257 vs. 4,136), and the week of June 27 it's not even trying yet (653 vs. 3,778).

I believe this is partly Wikipedia doing deltas in death counts – how many deaths recorded so far as of today, versus that count yesterday – versus the C.D.C. attempting to have a date of death for each person. Thus, when adjustments are made, Wikipedia moves them forward in time.

That makes the larger April spike in the C.D.C. data presumably real. There’s no reason it can’t fit the other data we have.

It doesn’t make the dramatic drop later on real. It can’t be fully real even if we exclude the final weeks. The shape of that curve is not a thing that happens alongside our other data points. It’s dropping a constant amount per week in absolute terms, rather than percentage terms. It’s doing so in a way that can’t account for the rates and quantities of positive tests.

Note that recent Covid-19 death tracking here is well behind general death tracking. For the week of June 27, we recall that only 49% of last year's deaths were reported. If we believe the 100% of last year's death rate for the week of June 6 is roughly correct (e.g. locking down prevents other deaths or moves them forward enough to cancel out the direct Covid-19 deaths), then it's hard to imagine this number for June 27 being outside a range of about (85%, 105%), which means at most roughly 50% of deaths are missing. The Covid-19 deaths in this data for that period are about 85% missing based on officially reported numbers only.

That is far too big a difference to be accounted for from time shifting. There is certainly some reclassification involved (e.g. the one from New Jersey) that explains some of the difference, but nothing close to all of it unless I’m missing something big. It has to be some sort of reporting delay or failure. I can’t explain it. I am curious what others make of it.

What’s causing all this? Does anyone know or have a theory?

Causes of Death

Time to split out other causes of death. Here’s a chart of all the different categories offered by the C.D.C. other than Covid-19, in terms of year over year change in death rates, excluding the one that would have entirely messed up the chart:

Figure 3 (not corrected for lack of data in the last few weeks):

Some causes had a spike in April along with Covid-19 deaths. Other causes didn’t. The chart is hard to read because it has so much on it, and also the drop-off at the end needs to be corrected for. Plus the whole missing line issue.

We see a few causes that seem to have risen dramatically in April. Diabetes deaths rose 31% during the key weeks of April 4, 11 and 18. Influenza deaths rose 39%. Alzheimer's deaths were up 25%. Heart deaths rose 13%. Generic 'other diseases' rose 14%. A few others rose a little as well.

It seems reasonable to attribute the smaller rises to lack of medical care, both lack of capacity and reluctance to seek it out. For Alzheimer’s, Diabetes and Influenza, I don’t buy that at all. They make way too much sense as being Covid-19 deaths that were misclassified, given the numbers rose far more than other medical causes that had the same issue with lack of care. At best, we can say that a 14% rise was due to lack of care, and the rest of the surge in deaths was actually Covid-19. That is not that big an impact on overall outcomes, as it would only add 3,004 deaths to our total, but it’s worth noting.

For the figures that follow, we are looking at the percentage change in the share of deaths that fall into a category. Thus, when we see that Unnatural Cause deaths are down, that means down not in absolute terms but as a percentage of all deaths.
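On made-up numbers, the metric works like this: compute a category's share of all deaths in each year, then take the year-over-year percentage change of that share.

```python
def share_change(cat_now, all_now, cat_prev, all_prev):
    """Year-over-year % change in a category's share of all deaths."""
    share_now = cat_now / all_now
    share_prev = cat_prev / all_prev
    return 100 * (share_now / share_prev - 1)

# A category can be flat in absolute terms (3,000 deaths both years) and
# still fall as a share of all deaths, simply because total deaths rose.
print(round(share_change(3_000, 70_000, 3_000, 55_000), 1))   # about -21.4
```

This is why the Unnatural Cause line can plummet in these figures even when the absolute effect in Figure 3 is small.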

Figure 4 (% change in percentage of all reported deaths, year over year):

All cause is always 100% here by definition. What this shows is the plummeting share of deaths that are not classified as natural.

We see a large drop in March and April. That drop makes sense, as there were lots of Covid-19 deaths, and those additional deaths account for most of the drop – in absolute terms you can see in Figure 3 that the effect was small.

Then Covid-19 deaths fall off, and lockdown conditions get less strict as we move from May into June, and the proportion keeps falling.

This is a clear example of why it's hard to tell a consistent story about what is happening here. The last week could be explained by unnatural deaths having a larger-than-usual reporting delay, so that there can be an investigation into potential Murder Most Foul, but this effect seems super dramatic. And the timing involved, where the main effect happens in May, doesn't seem to line up with observed behavior changes. We certainly didn't lock down a little in April and then dramatically more in May. So again: what the heck is going on here?

With all that out of the way, let’s get to the heart of the matter. The R00-R99 series of classifications, sometimes called ‘mysterious deaths,’ covers a bunch of generic labels including R99, ‘Ill-defined and unknown cause of mortality.’ Most are of the form ‘Symptoms and signs involving system X’ without specifying a cause as such.

Figure 5 (% change in percentage of all reported deaths, year over year):

Figure 6: To drive that home, here’s the percentage of all deaths that are classified R00-R99:

There is a gradual increase up until about April 8. Then it’s a straight line up with no end in sight.

New York’s jump in mysterious deaths follows a similar pattern, although with a smaller peak. During the worst of it they were running 230% of normal, and since then it’s been 300% of normal or more.

New Jersey does have its jump at the same time as its death count jump, which isn’t a pattern found in other states. Probably a fluke.

Doing some other state spot checks:

Arizona had a big jump in early May, well before its case counts took off.

California’s pattern is similar to the overall country, with a similar rise and percentage of all deaths, but starting from a smaller baseline. California used to not have many mysteries and the rates have gone up by a factor of 10. Some smaller states didn’t used to report them at all.

Georgia seems not to have a surge, which is interesting. Nor does Minnesota.

Illinois has a larger than average jump, similar to California’s, with counts over 10% of deaths for the weeks with incomplete data. Tennessee is similar. Ohio has the same thing even bigger.

Massachusetts is typical, as is Michigan, as is Pennsylvania. As are Texas and Florida.

Some of this can be a backlog of medical attention to finding causes of death, but that effect would center in April. It wouldn’t explode in June.

Some of this can be delays where deaths will later be reclassified, perhaps. Someone who knows more can help out on that. I am hugely skeptical that this will explain a large portion of what we’re seeing, or that this curve is remotely normal. How many death causes are revised weeks after the event? I do not believe such events are common.

Over 26,000 extra deaths have been placed in this category. Almost 15,000 of those fall between May 3 and June 20, versus about 50,000 reported Covid-19 deaths during that period.

This is not a small effect. Normally about 1.2% of all deaths fall into the R00-R99 categories. By the end of this graph that number is above 7%. Six percent of deaths that used to be understood are no longer understood, despite there being general labels for “Covid-19” and “Covid-19 with multiple causes of death.”
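The excess-death arithmetic behind the figures in this paragraph can be sketched as follows, on illustrative numbers (the baseline share is the ~1.2% mentioned above; the weekly counts are placeholders): "excess" R-class deaths are the observed R00-R99 deaths minus the deaths expected at the historical baseline share.

```python
baseline_share = 0.012   # normal share of all deaths coded R00-R99 (~1.2%)

weekly = [
    # (all deaths, R00-R99 deaths) for some hypothetical weeks
    (55_000, 700),
    (60_000, 1_500),
    (58_000, 3_000),
]

excess = sum(r - baseline_share * total for total, r in weekly)
print(f"excess R-class deaths: {excess:,.0f}")
```

Summing that quantity over the relevant weeks is what produces totals like the 26,000 extra deaths cited above.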

Neither is it a truly gigantic effect in absolute terms. At least, not yet. This isn’t big enough to hide a massive surge in deaths.

The timing here represents some effect that started becoming a big deal in April, then accelerated. This isn’t primarily a physical effect. It’s a reporting effect.

What is up?

Possible Theories

Let’s recap and see if we can expand upon our possible theories.

Delays

Maybe delays really are this bad all the time, and lots of deaths are constantly getting reclassified over time. In this theory, even six months later in July of 2020, we have some doubts about deaths in January 2020, which results in a slight (~25%) elevation in R-class deaths.

If we had a historical copy of this page we could confirm or deny this theory easily, slash measure the magnitude of this effect. By waiting, we’ll get a similar effect, since the version referenced above becomes the past version.

I am highly skeptical that this is enough of the problem to change the nature of the mystery.

I’ll proceed as if this is not a major cause of the issue.

Lack of Resources

Under this theory, medical examiners and others looking to figure out causes of death are increasingly overwhelmed.

The problem is that the timing does not make sense. Remember, this is the deaths chart for Covid-19 (Figure 7):

Hospitals are now being overwhelmed in the South, but during May and June there was a lot of capacity being held in reserve that was never used. Most places did not have such problems. And in the worst of it, in April, we don't see much of an increase in R-class deaths.

Perhaps the backlog isn’t medical, but instead it’s police? It’s become increasingly hard for them to do their jobs, a lot of them are on a sort of soft strike, and they’ve got extra work to do that’s going to make it that much harder to clear cases. If the spike started when Floyd died, that would make sense, but it starts well before that. So again, timing doesn’t seem right.

One could argue that it’s a backlog – things get more and more delayed – as a sort of hybrid of these two arguments, but that does not seem to me to work either.

Maybe this isn’t a data artifact?

This would be extremely alarming! There would be something out there killing people at a rapidly increasing rate, and we have no idea what it is. At a minimum, no one is talking about it.

I very much doubt that this is true. It would have to be an infection of some kind, and in the current climate I don’t see how this could remain hidden this long while being this prevalent. So this seems necessary to note, but not much of a consideration.

Change in Methodology

Maybe there are things that used to count as something in another group that have been reclassified, so now the category is on the rise?

I don’t know of any change in methodology. Presumably if there had been one nationwide someone would have mentioned it by now, given what else is going on. Seems like quite a news story. Also once again, the timeline does not work. Rules changes mostly happen at once, or at once in a given place. This doesn’t match that pattern.

Manipulation of Data by the Government

Maybe this is states looking for an excuse to reopen?

That was my first hypothesis.

Not at the state level. The pattern of where it is happening is too universal, and the places with the most motivation to do it, where there was clearly other fishy stuff going on, are not seeing larger effects.

If there’s manipulation of the numbers by the government, it would have to be coming from the C.D.C. or somewhere else in the federal government. Could the C.D.C. pull this off if it was sufficiently corrupted by Trump to want to do it? I don’t think it has any control over the numbers, it only tabulates them. If I’m wrong, please correct this.

Manipulation of Data by Others

The question here would be not only how, but why.

This would have to be local. It isn’t a big conspiracy or coordinated countrywide effort. Those don’t parse. If this is it, it is the sum of individual decisions made for local reasons. What might those reasons be?

One theory for which I have anecdotal evidence is that this is people lying because they’re worried about people freaking out or potential liability.

A distant relative of mine (who I’ve never actually met) who lives in a group home for the elderly went to the hospital, recovered, and returned to the home. The home told us he had, well, something other than Covid-19. The hospital told the patient he’d had Covid-19. Armed with that information, another relative of mine was able to get the home to admit that yes, this man had suffered from Covid-19.

They then claimed, when asked, that they had no other cases of Covid-19. It was pointed out that this man literally never went outside, so that seemed rather implausible.

It is easy to see why such a cover-up would be appealing. Everyone hearing about Covid-19 would potentially freak out and demand answers or worse. Much better to say it was something else. If someone died, same principle. Convince everyone that it was something mysterious or random. Who knows what they can get put on a death certificate?

Extend that logic to a bunch of other situations, and potentially we have our answer. It’s the best I’ve been able to come up with. It still doesn’t explain the timeline, but perhaps it can be something people have gradually learned how to pull off? Who knows.

Conclusion

So that’s the best I got. The hypothesis is that it is essentially in no one’s interest to mark a given death as being from Covid-19, and increasingly our country is deciding not to do that, one certificate at a time. This would reflect a broader breakdown of the rule of law and keeping of accurate records, and should be extremely alarming for that reason.

It also potentially means that there has been an increasingly large undercount in Covid-19 deaths, on top of the previously known undercounts – I’d previously been assuming 50% additional deaths from people who were never diagnosed, at least when things were bad at the height of the ‘first wave.’

If we assume excess R-class deaths are Covid-19 deaths, and extrapolate out a continued gradual rise, then the decline in death rates looks somewhat less acute.

Figure 8: Deaths from Wikipedia, excess year-over-year R00-R99 deaths, and the combined number. Y-Axis is in deaths per week:

I’m curious what the internet makes of this. Please pass this along to those who might know, and I’ll be back with the regular update next week.
