LessWrong.com News

A community blog devoted to refining the art of rationality

How my school gamed the stats

Published on February 20, 2021 7:23 PM GMT

I was reading the Slate Star Codex review of The Cult of Smart. Part of it discusses charter vs state schools and the allegations that fraud of various kinds undermines charter schools' record of better achievement. Reading it, I realized that I took for granted that public schools engage in systematic fraud in a variety of ways. I don't think this is something everyone understands, hence this post.

I went to a state school in the UK. State schools are rated on a 1–4 scale from unsatisfactory to outstanding. My school was rated good, meaning a 3. A few memories stand out. During my first week I saw one of the boys in my class, who was 11 at the time, held up against the wall in a corridor while a 16 year old put a shiv to his throat and robbed him. He handed over his wallet and keys. A year or two later, I remember seeing a small boy who struggled with depression held up by the throat against a locker and slapped in the face by a troublemaker from the same class, in front of everyone, just before we went into the classroom. I remember classes which were filled start to finish with people shouting and talking. Neither of the first two events was common, but they also weren't uncommon. No one was surprised to witness them. It's worth emphasizing again that my school was above average, in fact quite far above average, and in a middle-class area. It's also worth noting that I was mostly in top ability-streamed classes, meaning my classroom experience was likely far better than average.

There were many ways in which the school and teachers gamed the system to boost their measured performance. One way was to do exams for students. I was in a bottom-set class for French. After two years I literally couldn't speak a single sentence in French and maybe knew 20 words in total. I still passed my exams. How? We did the tests in class. Often the teacher would go through them with us. Literally giving us the test and then going through each question on the whiteboard and telling us what to write. A different year, a different teacher: this time the teacher would sit next to us and write the answers down. Why sit next to us? It was the bottom set, so people often wouldn't even bother to write down the answer if they were told it. This kind of thing was normal, so much so that I, and I think most people there, didn't realize anything unusual was happening.

Another way schools game metrics is to cheat inspections. A major component of how schools are judged in the UK is through inspections carried out by an independent quasi-governmental organization called Ofsted. Now, you may imagine that these inspections would be unannounced, so as to best get a real image of how a school works. Not the case. They're scheduled well in advance. Before every inspection, a few things would happen in my school:

  • The worst troublemaker kids would be taken aside and put in a special room where inspectors wouldn't see them. Either that or they would just be told not to come into school at all on that day.
  • All of us were told in assembly that an inspection was coming and to be on our best behavior on that day. Often teachers would have conversations with less serious troublemakers and impress on them that they would behave on that day or face consequences afterwards.
  • Teachers would put a great deal more effort into their lesson plans than was normal. Classroom behavior management would also be far stricter. Because of these and other measures, my school during an inspection was utterly different from my school on a normal day. On some level this isn't surprising. If teachers' promotions and management's jobs depend on good inspection results and inspections are easy to game, people will game them. Incentives drive behavior. But it's still sad.

Another way the stats were gamed was by not recording bad behavior. When a school gives a detention or suspends/expels a student, there's a record of it. This is especially true of suspensions, students being sent home, or expulsions. The more of these you have, the worse you look as a school. The solution then is obvious: don't punish people, or punish them in non-recorded ways. Again, in my school it was completely normal for students in lower sets to swear at the teacher, talk over them or disrupt the class for everyone else. It was normal for someone to be aggressive and abusive towards others and to face at most a 40 minute detention, but even getting a detention would be unusual.

I realize that one data point is not enough to draw solid general conclusions. My own perception is that this kind of fraud wasn't specific to my school. My cousin went to a state school fairly nearby. He's 4 years younger than me. During one of my winters back from undergrad we discussed his school, and his experiences mirrored mine. His exact words regarding inspections were "I learned 4 times more that day than any other day that year. It was amazing". I talked to a few British students at university, although specifically not the middle/upper-class ones, who would have gone to public schools. They had gone to schools similar to mine in different parts of the country, and their stories were similar and often worse. Two particularly funny examples from my friends' experiences stand out. A teacher in year 9 walked up to a student who was talking, picked them up and threw them out of an (open) first floor window. My friend sitting in class noticed two boys making fun of him and then proceeded to get up in the middle of class while the teacher was talking, walk to their table, and flip the table upwards to hit them in the face, before going to sit down again when the teacher told him to. (Remember, my friend was a studious, sporty Asian kid and not a troublemaker. This kind of thing is normal in that environment.) Comedic stories aside, my experiences in school, while not universal, seem fairly common in the UK, and from what I've read of the statistics, bad US schools are far, far worse.

I'm unsure what my point here is. I think I have two:

  • Charters may cook their books in various ways. In the UK, state schools do too. I would be surprised if it wasn't also the case in the US.
  • I think a lot of commentators on places like SSC have fairly middle-class experiences of fairly good schools, and that bleeds into how they compare state vs charter schools. It's just good to remember that it's not those nice middle-class schools that charters typically replace.

Crossposted to my blog at https://dissent.blog/2021/02/20/how-my-school-gamed-the-stats/




What are the most powerful lotuses?

Published on February 20, 2021 7:05 PM GMT

Bryan Caplan writes an argument against Universal Basic Income: https://www.econlib.org/from-ubi-to-anomia/.

To sum up: the majority of people not in the labor force spend most of their time on screens, and this is bad for their health and well-being. If more people were to exit the labor force, many of them would presumably behave the same way; therefore UBI is a bad idea.

I am not sure of the validity of the object-level argument (the causal link could run the other way: maybe people who spend their entire day scrolling their Facebook feed are more likely to become NEET, and not the reverse).

However, on a more abstract level, this struck me as an uncommonly Puritan argument. If people did not need to work, many of them would end up living a miserable life, pursuing short-term petty pleasures. This rings true to me; it somewhat resonates with my "high-level generators of disagreement". So my brain is trying to find ways to defend this argument.

Is it reasonable to think that, if relieved of the necessity to work, the majority of people would just procrastinate all day? I guess it is possible to conceive a model in which lotuses tend to trap people: if you decrease the incentive to get things done (in the example of this article, if you decrease the reward from green circles), more people will spend more time eating lotuses. There are many social incentives to work, which would not disappear if we removed the economic incentive to work; but it is also true that social rewards are often easy to pursue on social networks.
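The intuition above can be sketched as a toy logistic-choice model. This is entirely my own illustration with made-up numbers, not anything from the linked article: each agent picks between working and eating lotuses, and lowering the reward for work shifts a larger share of the population toward the lotus.

```python
import math

def lotus_share(work_reward, lotus_reward, temperature=1.0):
    """Fraction of agents choosing the lotus under a logistic choice rule."""
    diff = (lotus_reward - work_reward) / temperature
    return 1.0 / (1.0 + math.exp(-diff))

# Equal rewards: half the population eats lotuses.
base = lotus_share(work_reward=2.0, lotus_reward=2.0)

# Cutting the reward for work (e.g. removing the need to earn a living)
# shifts more of the population toward the lotus.
after_cut = lotus_share(work_reward=1.0, lotus_reward=2.0)
print(after_cut > base)  # True
```

The logistic rule is just one convenient assumption; any monotone choice rule gives the same qualitative prediction.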

So we should expect an increase in the mean time that the population spends on activities we might judge as "lotuses". But which lotuses in particular? Is procrastinating on the Internet the most brain-gripping short-term reward of this age, or are there other competing lotuses?




Internet Encyclopedia of Philosophy on Ethics of Artificial Intelligence

Published on February 20, 2021 1:54 PM GMT

Table of contents:

  1. The Relevance of AI for Ethics
    1. What is AI?
    2. Its Ethical Relevance
  2. Main Debates
    1. Machine Ethics
      1. Bottom-up Approaches: Casuistry
      2. Top-down Approaches: The MoralDM Approach
      3. Mixed Approaches: The Hybrid Approach
    2. Autonomous Systems
    3. Machine Bias
    4. The Problem of Opacity
    5. Machine Consciousness
    6. The Moral Status of Artificial Intelligent Machines
      1. The Autonomy Approach
      2. The Indirect Duties Approach
      3. The Relational Approach
      4. The Upshot
    7. Singularity and Value Alignment
    8. Other Debates
      1. AI as a form of Moral Enhancement or a Moral Advisor
      2. AI and the Future of Work
      3. AI and the Future of Personal Relationships
      4. AI and the Concern About Human ‘Enfeeblement’
      5. Anthropomorphism
  3. Ethical Guidelines for AI
  4. Conclusion
  5. References and Further Reading

Its entry on "Singularity and Value Alignment" is shorter than the Stanford Encyclopedia of Philosophy's entry on superintelligence:

Some of the theories of the potential moral status of artificial intelligent agents discussed in section 2.f. have struck some authors as belonging to science fiction. The same can be said about the next topic to be considered: singularity. The underlying argument regarding technological singularity was introduced by statistician I. J. Good in ‘Speculations Concerning the First Ultraintelligent Machine’ (1965):

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus, the first ultraintelligent machine is the last invention that man need ever make.

The idea of an intelligence explosion involving self-replicating, super-intelligent AI machines seems inconceivable to many; some commentators dismiss such claims as a myth about the future development of AI (for example, Floridi 2016). However, prominent voices both inside and outside academia are taking this idea very seriously—in fact, so seriously that they fear the possible consequence of the so-called ‘existential risks’ such as the risk of human extinction. Among those voicing such fears are philosophers like Nick Bostrom and Toby Ord, but also prominent figures like Elon Musk and the late Stephen Hawking.

Authors discussing the idea of technological singularity differ in their views about what might lead to it. The famous futurist Ray Kurzweil is well-known for advocating the idea of singularity with exponentially increasing computing power, associated with ‘Moore’s law’, which points out that the computing power of transistors, at the time of writing, had been doubling every two years since the 1970s and could reasonably be expected to continue to do so in future (Kurzweil 2005). This approach sees the path to superintelligence as likely to proceed through a continuing improvement of the hardware. Another take on what might lead to superintelligence—favoured by the well-known AI researcher Stuart Russell—focuses instead on algorithms. From Russell’s (2019) point of view, what is needed for singularity to occur are conceptual breakthroughs in such areas as the studies of language and common-sense processing as well as learning processes.

Researchers concerned with singularity approach the issue of what to do to guard humanity against such existential risks in several different ways, depending in part on what they think these existential risks depend on. Bostrom, for example, understands superintelligence as consisting of a maximally powerful capacity to achieve whatever aims might be associated with artificial intelligent systems. In his much-discussed example (Bostrom 2014), a super-intelligent machine threatens the future of human life by becoming optimally efficient at maximising the number of paper clips in the world, a goal whose achievement might be facilitated by removing human beings so as to make more space for paper clips. From this point of view, it is crucial to equip super-intelligent AI machines with the right goals, so that when they pursue these goals in maximally efficient ways, there is no risk that they will extinguish the human race along the way. This is one way to think about how to create a beneficial super-intelligence.

Russell (2019) presents an alternative picture, formulating three rules for AI design, which might perhaps be viewed as an updated version of or suggested replacement for Asimov’s fictional laws of robotics (see section 2.a.):

  1. The machine’s only objective is to maximise the realisation of human preferences.
  2. The machine is initially uncertain about what those preferences are.
  3. The ultimate source of information about human preferences is human behaviour.

The theories discussed in this section represent different ideas about what is sometimes called ‘value alignment’—that is, the concept that the goals and functioning of AI systems, especially super-intelligent future AI systems, should be properly aligned with human values. AI should be tracking human interests and values, and its functioning should benefit us and not lead to any existential risks, according to the ideal of value alignment. As noted in the beginning of this section, to some commentators, the idea that AI could become super-intelligent and pose existential threats is simply a myth that needs to be busted. But according to others, thinkers such as Toby Ord, AI is among the main reasons why humanity is in a critical period where its very future is at stake. According to such assessments, AI should be treated on a par with nuclear weapons and other potentially highly destructive technologies that put us all at great risk unless proper value alignment happens (Ord 2020).

A key problem concerning value alignment—especially if understood along the lines of Russell’s three principles—is whose values or preferences AI should be aligned with. As Iason Gabriel (2020) notes, reasonable people may disagree on what values and interests are the right ones with which to align the functioning of AI (whether super-intelligent or not). Gabriel’s suggestion for solving this problem is inspired by John Rawls’ (1999, 2001) work on ‘reasonable pluralism’. Rawls proposes that society should seek to identify ‘fair principles’ that could generate an overlapping consensus or widespread agreement despite the existence of more specific, reasonable disagreements about values among members of society. But how likely is it that this kind of convergence in general principles would find widespread support? (See section 3.)




AI Safety Beginners Meetup

Published on February 20, 2021 1:20 PM GMT

Are you new to AI Safety? Then this event is for you. This is an occasion to ask all your questions, and meet others in your situation.

Are you a veteran in AI Safety, or have you just been around long enough to have some wisdom to share? Then you are welcome to join this meetup to share your knowledge and experience.

This event will be updated with more information.




On the Nature of Reputation

Published on February 20, 2021 12:50 PM GMT

Abstract: Reputation tokens (e.g. brands, but many other things as well) are vessels for storing reputation. They are free to create but expensive to fill with trust. Consumers use them to deal with information overload. Producers use them to manipulate common knowledge. Also, a certain kind of supply-demand-style equilibrium exists.

Trademarks

Speaking about reputation, one risks getting all hand-wavy and disconnected from the real world.

To keep the discussion down-to-earth, let's start with trademark law. Trademark law was, after all, created specifically to deal with reputation. And being law, it is not a theoretical model but a living organism evolved to solve actual real-world problems.

You may have a vague idea that a trademark is something like an Internet domain name. You can buy a name and then it's yours. It's your property and everyone else should get off your lawn.

But the trademark law is not like that at all.

First of all, you don't create a trademark by buying it. A trademark is rather created as a by-product of using a name. You establish a company. You start producing stuff. Maybe you do a little advertising. People start to recognize your brand. Et voilà, you own a legally recognized trademark. No explicit action on your part is needed.

Similarly, if you stop using the name, your claim to the name gradually dissipates. If you claim a trademark and your opponent is able to prove that you haven't used it for decades, the court will rule against you.

There is also the concept of "trademark goodwill" which loosely translates to "reputation". Interestingly, the notion of trademark goodwill tends to be phrased in economic terms. Namely, it is the part of the value of the company that is gained through owning the trademark.

Another common misconception about trademarks is that the names are, similarly to the Internet domain names, global.

In reality, the scope of a trademark is limited to the area of one's activity. Apple, the grocery store, doesn't infringe on the trademark of Apple, the consumer electronics manufacturer. Similarly, Apple, a grocery store in Manchester, England, may not infringe on the trademark of Apple, a grocery store in Brisbane, Australia. The rule of thumb is that there's an infringement only when the two trademarks can be easily mistaken for each other by customers.

Reputation Tokens

Alice is a commercial programmer. She writes software and is paid money. Bob is an open-source programmer. He writes software and earns respect.

It kind of feels that money and reputation may be, in some way, similar. Maybe if Alice is paid money, Bob should be paid some kind of generic reputation tokens.

But the analysis of the trademark law above shows why the two kinds of tokens are vastly different.

First, reputation tokens (e.g. trademarks) are not scarce. Quite the opposite: they are completely free. You can spend the entire day in bed inventing brands. And you will, in a sense, "own" those brands. But you can't do the same with money. You can't just think it out of thin air.

On the other hand, you can fill those tokens (trademarks) with reputation. You can build high-quality products. You can advertise. You can force people (as states do) to buy your product.

So maybe a reputation token (trademark) is not like a value token (currency unit) but rather like a purse, or a bank account.

That must be true to some extent. Trademarks really are containers to store reputation. But at the same time they differ from bank accounts. You can, for example, withdraw money from one bank account and deposit it in a different one. But once you have built the reputation of a particular brand, you can't easily transfer it to a different brand.

Maybe we should stop making monetary analogies and look at what's really going on.

What really happens is that Apple develops the iPod and a lot of people start believing that Apple builds great products.

Investment results in more reputation for the brand, which, itself, is a kind of common knowledge, stored in the brains of people.

If you are not familiar with the concept of "common knowledge", here's a great, detailed write-up by Scott Aaronson.

A side note: The game theoretic model of common knowledge presupposes not only that everyone is aware of a certain fact but also that everyone knows that everyone else knows, that everyone knows that others know that everyone else knows, and so on. Note that this is not necessarily needed for reputation. People can believe Apple is a great company without knowing that others think that Apple is a great company. Apple would still be able to sell more iPhones. Yet, when you start looking at real-world examples of reputation it's hard to find an example where this recursive kind of common knowledge is not involved.
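The recursive structure in the side note can be made concrete with a trivial sketch (my own illustration, not from Aaronson's write-up): first-order belief is the bare proposition, and each further level wraps another "everyone knows that" around it. Common knowledge is the limit of all such levels.

```python
def knowledge_statement(p, level):
    """Wrap a proposition in `level` layers of 'everyone knows that ...'."""
    stmt = p
    for _ in range(level):
        stmt = f"everyone knows that ({stmt})"
    return stmt

print(knowledge_statement("Apple builds great products", 0))
# Apple builds great products
print(knowledge_statement("Apple builds great products", 2))
# everyone knows that (everyone knows that (Apple builds great products))
```

Plain reputation, as the side note says, may only need level 0; the self-sustaining kind discussed later needs the higher levels too.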

Social media example

Now, let's go beyond trademarks and consider social media. You can definitely gain reputation on Twitter or YouTube. So how does it conform to the model above? Let's have a look.

Creating a reputation token, say a Twitter account, is cheap. In fact, you can create any number of them cost-free. Filling the token with actual reputation is more expensive. You have to be engaged in conversations, write tweets that people find interesting, etc. At the same time, the reputation is locked in other people's heads.

However, unlike with trademarks, the amount of reputation stored in a token can be measured. The number of followers can be used as a rough estimate. Hearts can be used to track reputation on a more day-to-day basis. These numbers can be used to compare individual reputations.

It's not clear to what extent such numbers are useful. One may object that reputations are multi-faceted and in principle incomparable. The reputation of an ordinary person cannot be readily compared to that of a scientist, a pop star, or a politician. Followers may also value different aspects of a YouTube video.

On the other hand, consider money. Price is without a doubt a useful measure of value. Yet it suffers from all the problems above. The price of a banana is not really comparable to the price of a bicycle. Some people may value an automobile because it can move them around; others may value the prestige of owning a Ferrari.

Reputation tokens: How exactly do they work?

I see two possible answers: either the reputation is tied to a token (trademark, brand, name) or, maybe, it is tied to the actual physical reality behind the name, say, the people who designed the iPod.

This question can be answered on an empirical basis: what happens when the reality behind a name suddenly changes? Do people still trust the name, or does trust follow whatever used to be behind the name?

Case study: the MySQL database. The creators of the database sold it to Sun Microsystems, which then got bought by Oracle. At that point the engineers decided to fork the database and create MariaDB. Here we have a situation where the reality behind a name has changed. MySQL was from that point on developed by a different team. At the same time we can compare it to the old team, building the same product, under a different name. The question is whether the former trust in MySQL remained with MySQL or shifted to MariaDB. If the former were true, people stored their trust in the name "MySQL". If the latter were true, they trusted the database and the people who built it.

It's hard to get data on this, but my impression is that the trust remained mostly with MySQL. And that was not because people trusted Oracle more than Monty Widenius, but because they had been using MySQL as their database engine all along and had never spent a thought on changing the status quo.

My conclusion therefore is that people trust names more than whatever they happen to refer to. And we'll see why that is important in a minute.

Operations with reputation tokens

Before we move on to more interesting stuff, let's get a better feel for the reputation token concept by asking what people can do with them. You can do a lot of stuff with money. You can exchange it for goods and services, you can lend it, you can borrow it and so on. Can you do similar things with reputation tokens?

Well, you can definitely sell a reputation token (a brand). The fact that the trust is associated with the brand name, and not the physical reality behind the name, as discussed above, means that the owner of the token may change and the trust will remain intact.

You can think of it more broadly than just selling brands. For instance, the reputation token "Government of Italy" is transferred each time there's an election, or, somewhat more often, when there's a change caused by a government crisis.

Two tokens can be merged to refer to the same underlying reality. The Vauxhall and Opel names used to refer to different cars in the past. Today they refer to the same cars. (As far as I understand, the Vauxhall branding is used in the UK, Opel elsewhere.)

A token cannot be easily split. After a split, one of the child entities would need a different name and it would have to build its reputation from the ground up.

Why do consumers need reputation tokens?

All of that is nice and well, but the real question is, who needs those tokens in the first place? Why can't we just have a life and not care about them? Or, taken from a different perspective, what are the forces that shape how the ecosystem of reputation tokens works?

First, let's have a look at consumers. (The naming here is quite arbitrary. By consumers I mean people who buy the branded products, that is, people who put their trust in a brand.)

Consider the following questions: Should I get vaccinated? Do vaccines cause autism? Should I look into the research? The study about the vaccination/autism correlation was published in a high-impact journal. Does that mean it is trustworthy? I've heard it has been retracted. How do I verify that? And what does it mean? I've heard bad things about Elsevier. Should I distrust everything they publish? What about the replication crisis? And who, at the end of the day, decides which journals are high-impact and which are not?

Navigating the world, especially the modern world, is hard. When facing the anti-vaccination movement, many people, without thinking twice, assert that the educational standards at our schools are too low and should be improved. That would surely make people believe in vaccines.

The problem with that is that it's not feasible. Understanding the science around vaccination is hard even for experts, and a layperson cannot really hope to get a correct, balanced picture all by themselves.

And it's not just vaccination. The same applies to economics, politics, climate change, and even to much more mundane problems, like which smartphone you should buy or which school to send your children to.

In fact, even a person with an IQ of 150, superhuman googling skills and a 10TB hard disk plugged directly into the brain doesn't necessarily want to care about all of that. They may care deeply about one or two issues and would be happy to delegate everything else to the experts.

And I think you can see the problem now. Which experts should be trusted? When trying to find out, one runs into the same kind of complex, branching, fractal problems.

The common sense answer is to trust the authority. In the case of vaccination, that would be your doctor. You ask her: "Hey, doc, should I get vaccinated?" And she replies that yes, you definitely should. Problem solved.

But again, how do you know that you should trust doctors? You don't have the skills to verify the trustworthiness of the medical profession.

In the end, you simply trust doctors because most people around you trust doctors. (You may recognize the recursive nature of common knowledge here!) It's not a safe bet, people can be wrong after all, but it's a better choice than trusting nobody and making your decisions at random. Or than trusting some shady YouTube channel with three subscribers.

You assume, without verifying it, that doctors must have done something to gain all that trust. Maybe one day someone got treated by a doctor and felt better the next day. Or maybe the doctors have some kind of exam that prevents people without extensive knowledge and experience from becoming doctors. Who knows. You just trust them because everybody else does.

(And it goes without saying that according to this logic, if you live among anti-vaxxers, you are going to trust their authorities and you won't get vaccinated.)

Why do producers need reputation tokens?

A terminology note: By producers I mean people who own tokens (brands) and invest in putting more trust in them. In short, Steve Jobses and Italian Governments of this world.

Once I read a story about a regional government in Africa (I am not able to find the reference right now, but I think it was in Acemoglu and Robinson's Why Nations Fail) which decided not to enforce the written law of the country and to rely on customary law instead. Interestingly, their argument wasn't that the customs were in one way or another better than the official law, but rather that the written law could be easily changed by the central government - whom they mistrusted - whereas the customs could not.

Think about what that means in terms of reputation tokens. There's a token called "written law" owned by the central government. There's another thing called "customs". (It's not clear whether it should be called a reputation token, because it's just a rule saying "Do as your forebears did," and nobody really owns it.) But: if the customs happen to be the same as the written law, why would the central government prefer the written law to the customs?

And put this way, the answer is obvious: Because they can change the law, if they want to. The customs they can't.

Let's return to the idea of reputation as common knowledge. Common knowledge is hard to change because it's stored in people's brains, and most people are not particularly keen to change their beliefs. Also, the recursive nature of common knowledge slows down the process considerably. If your trust is based on other people's trust, then you'll have to find out that other people have changed their minds before you change yours. But if everybody follows that algorithm, the status quo is preserved. (Consider the study showing that people living in the territory of the former Austria-Hungary have more trust in courts than people from the former Ottoman or Russian empires. At least some of the reputation has survived for a century!)
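That self-reinforcing algorithm can be sketched in a few lines. This is a toy model of my own, not from the study mentioned: each agent trusts the institution iff a majority trusted in the previous round, so a consensus, once established, absorbs small groups of dissenters.

```python
def step(trust):
    """Each agent looks around and trusts iff a majority trusted last round."""
    majority_trusts = sum(trust) > len(trust) / 2
    return [majority_trusts] * len(trust)

# Consensus trust with a few fresh dissenters.
state = [True] * 95 + [False] * 5
for _ in range(5):
    state = step(state)

print(sum(state))  # 100: the dissenters revert and the status quo is preserved
```

The same dynamic runs in reverse: start from majority distrust and the few trusting agents give up instead, which is why the token, once filled or emptied, is so sticky.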

That resilience to change fully applies to "customs".

But "written law" is different.

People do not trust the particular wording of the laws; they put their trust in the token called "law", irrespective of what it refers to. That allows the state to change a law while at the same time keeping the trust.

The same way, if people trust in "Apple" brand rather than a particular engineer in the company or a particular product (iPod), Apple can introduce a new product (iPad, iPhone) and people will embrace it. If, instead, a different company tried to sell exactly the same device, convincing customers to even give it a try would be an uphill battle.

To sum it up, reputation tokens are a mechanism to manipulate common knowledge. Technically, it's done by separating the name from the physical reality behind it and giving people an option to trust the name instead of what it refers to.

Is there an equivalent to demand/supply equilibrium?

I haven't thought much about the topic, but the actors seem to have similar incentives to make trade-offs.

Consumers want to put as much trust into a token as possible so that they don't have to make decisions themselves. On the other hand, they can't trust it too much, because then it would be easy for the brand to make profit by manipulating their beliefs.

Producers would prefer to invest as little as possible in building a reputation; however, with no reputation at all nobody would listen to them. They have to keep a balance.



Discuss

Google’s Ethical AI team and AI Safety

20 февраля, 2021 - 12:42
Published on February 20, 2021 9:42 AM GMT

cross-posted from my blog

Background on the events

I have been thinking about this since the firing of Dr. Timnit Gebru, and yet still no one has actually written about it beyond my own tweets, so I guess it falls to me.

I find, and I imagine many people in the rat-sphere agree, that energy consumption and climate change rank low on my list of ethical priorities surrounding AI. I find those concerns uncompelling because I think that (a) this cost can be weighed against the benefits AI can create and (b) this cost can be literally offset by potential current and future carbon capture technologies. I think this is well established in the EA community, with recent possible exceptions taking shape.

But these ideas rely on current assumptions about how much power is being used for what purposes. If AI continues to scale by adding compute, as is generally expected, this could create conflicts of interest in the AI space. That would be bad for a number of reasons, chief among them that it would mean that only actors who are willing to impose substantial costs on the commons would be able to implement their visions. This is my central point, so I will return to it later.

For now, just keep in mind that the low priority of climate change among EAs is an empirical question of how easy it is to influence certain changes. I don’t think any of the specific work by Dr. Gebru makes a convincing case to me that the question has a different answer. But I haven’t heard literally any other single person say that!

Instead, she was fired, and today the other co-lead of her team was also fired. The justification for firing Gebru was “she quit.” No public statement has been made, even internally to the team both managed, about why Margaret Mitchell was fired, unless you count “it’s part of a re-org.” For reference, my team at Google has been re-org’d at least four times, and I have never seen anyone fired or even moved out of their management position in that time. Usually I don’t even notice.

(Because of timing there has been some conflation of this incident with the firing of a recruiter who worked mostly with historically Black colleges and universities. Maybe this is evidence that racism played a part in the decision, but I intend to regard “bad decision-making” on the part of Alphabet as a bit of a black box because harming AI safety prospects is very bad regardless of whether they are doing it for racist reasons.)

So at this stage, it looks like a big corporation made some bad HR decisions and fired people who were well regarded as managers but ultimately doing work that I value about as much as most of the day to day work at Google. That’s not so bad, beyond the fact that we live in a world where small sets of high ranking execs get to make bad decisions without oversight, but we all already knew we were living in that world.

Models of AI Safety

The reason I think this is bad is that it invalidates my ideas about how AI Safety could be implemented in the real world.

In brief: in order to “align” your AI, you will need to reduce its direct efficacy on some measure, which will be opposed by a middle manager. I had hoped that “official avenues” like the Ethical AI team could be sufficiently ingrained that when the ideas needed to solve AI Safety are developed, there is a way to incorporate them into the projects which have enough compute to create an aligned AI before others accidentally create misaligned AI.

In more detail:

  1. AI scales intensely with compute. (99%+ confidence)
  2. Large projects, such as Google or the US government, will have access to more compute than projects formed by small organizations, being able to easily put together several million dollars of compute on short notice. (95% confidence)
  3. Some small set of large projects will be in position to create AGI with some alignment plan for a few years before large numbers of small actors will be in position to do so. (requires ~1+2, slightly less confident than 2, say a 90% confidence interval of 1.5-10 years)
  4. Once a large number of small actors are able to easily create AGI, one of them will accidentally create misaligned AGI pretty quickly. This is my “time limit” on how long we have to get AGI implemented, assuming a ‘solution’ exists before AGI is possible. (~80% chance a misaligned AGI emerges within 30 years of it being possible to make an AGI with <1 year’s SWE salary of compute in your garage; 50% chance within 10 years, 10% chance within 2 years)
  5. The first available solution to AGI alignment will require spending more time to develop an aligned AGI than the first available plan for creating any AGI. (90% confidence interval of how much longer: 2 weeks - 5 years)
  6. Therefore, in order to be confident that the first AGI created is aligned, the process from “AGI alignment is solved and AGI is possible with access to the best compute on earth” to “An org with enough compute to build AGI is executing an alignment plan with enough head start not to be overtaken by a new misaligned AGI project” needs to be as short as possible, because 1.5 years of compute advantage plus 2 years of code accessibility is already possibly not enough to cover the delay needed to align the AGI. (follows from 3,4,5)
  7. Ethical AI teams are a natural place to introduce an alignment solution to a large organization like Google. (90% confidence interval of how much faster the team could impose an alignment plan than any other team: 0.5 - 10 years. Probability I think such a team would impose such a plan, if they had it and were in a position to do so: 80%+. Probability I think any other team would impose such a plan: ~30%)
    1. The team has to be aware of cutting edge developments in alignment enough to identify the earliest correct solution. Teams focused on improving results or applying to specific cases will not reliably have that familiarity, but it fits directly into the scope of ethical AI teams.
    2. The team has to be technically capable of influencing the direction of actually implemented AI projects at the org. If one Google exec believes something strongly, they can’t implement a technical program on their own. If people in Ads understand the program, transitioning to Google Brain’s codebase alone would be a difficult task. An ethical AI team should have specific firsthand experience applying alignment-like frameworks to actual AI projects, so that they can begin executing as soon as the priority is clear to them.
    3. The team has to be politically capable of influencing the direction of actually implemented AI projects at the org. If a SWE says “I can do this in two weeks” and their manager says “ship it tomorrow or I’ll get Jane to do it instead,” then you need to have influence over every possible SWE that could do the work. If the organization instead sees the value of oversight programs and has people in place to execute those programs, you only need to influence the leader of that team to start the plan.

I don’t think any of these points are controversial or surprising.

There has long been agreement that large-scale projects such as those at large corporations or governments will be able to create AGI earlier. This is a possible way to get the timing lead necessary if aligning the AGI causes it to take much more time than misaligned AGI.

But that timing lead only manifests if it takes a shorter period of time to become part of the org and set the direction of a major project than it does to wait for compute to get cheaper.

Putting two and two together

My previously existing hope was something like this:

  1. Existing ethical AI teams are maintained at large companies because they add value through:
    1. Something like PR maintenance by keeping them from being too evil
    2. Finding pockets of non-obvious value through accessibility or long-term incentives.
  2. Existing ethical AI teams actually care about ethics, and have some members that keep up with AI Safety research.
  3. The members who keep up with safety research can convince their team to act when there is a “solution.”
  4. The ethical AI team can convince other teams to act.
  5. One or more large companies with such teams and such plans will move forward confidently while other possible early actors will not, giving them a strategic advantage.
  6. ????
  7. Profit

The news of Drs. Gebru and Mitchell being removed from Google seems to be a direct refutation of (4), because their attempts to create action more broadly caused retaliation against them.

It also makes me very concerned about (1), and especially (1.1), in that it seems that Google continued this course of action over the course of three months of bad press. I can also take this as some evidence that (1.2) isn’t working out, or else there would be some profit motive for Google not to take these steps.

Google is the prime example of a tech company that values ethics, or it was in the recent past. I have much less faith in Amazon or Microsoft or Facebook or the US federal government or the Chinese government that they would even make gestures toward responsibility in AI. And the paper that is widely cited as kicking off this debate is raising concerns about racial equity and climate change, which are exactly the boring central parts of the culture war that I’d expect Googlers to support in massive majorities.

What else could happen

One of the big reactions I’m having is despair. I don’t think this is a killing blow to humanity for a few reasons, but it did seem like the single most likely path to a really good future, so I’m pretty sad about it.

But I think there are still a number of other ways that the problem of implementing an AGI alignment strategy could be solved.

  1. It could be much easier to solve alignment than expected, or the solution could make it easier to implement AGI than otherwise! That would be nice, but it’s not within anyone’s control.
  2. New ethical AI teams could form at other organizations, especially the US Gov’t. This seems directly actionable, though I don’t really see how someone would sell this work to an org.
  3. More secretive connections (such as the leadership of DeepMind or OpenAI) could establish a more direct throughline that I will never know about until I start living out the first chapter of Friendship is Optimal. This also does not seem actionable to me.
  4. Fix (4) in the above by pressure from people who can exert influence over such organizations.
    1. By creating helpful legislation for alignment if such a thing is possible, which I am somewhat skeptical of
    2. Worker organizations that can pressure tech companies on problems of the commons, such as the newly formed Alphabet Worker’s Union. This seems directly actionable, but requires a lot of effort of a sort that I’m uncomfortable with.

I guess I’m mostly out of thoughts, but I hope this makes the case at least somewhat that these events are important, even if you don’t care at all about the specific politics involved.



Discuss

The Kelly Criterion in 3D

20 февраля, 2021 - 11:21
Published on February 20, 2021 8:21 AM GMT

The Kelly Criterion is a gambling strategy which maximizes the logarithm of your expected wealth. The Kelly Criterion tells you what fraction f∗ of your bankroll to wager. It is a function of the net fractional odds[1] received b>0 and the probability of a win p∈(0,1).

f∗ = (p(b+1) − 1) / b
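The formula above is easy to check numerically. Here is a minimal Python sketch (the function name is my own), clipping the wager to zero when the bet has no positive edge:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly fraction f* = (p*(b+1) - 1) / b for win probability p
    and net fractional odds b; never bet when the edge is negative."""
    if b <= 0:
        return 0.0
    return max(0.0, (p * (b + 1) - 1) / b)

# A few of the practice problems from later in the post:
print(f"{kelly_fraction(0.01, 1000):.4f}")  # 0.0090 (0.9%)
print(f"{kelly_fraction(0.10, 20):.4f}")    # 0.0550 (5.5%)
print(f"{kelly_fraction(0.51, 2):.4f}")     # 0.2650 (26.5%)
```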

Some properties are intuitively easy to understand.

  • The Kelly wager is positive iff the expected value bp−(1−p) is positive. The Kelly wager is zero otherwise.
  • The Kelly wager is 1 for all p=1. (Ignore b=0.)
  • The Kelly wager is 0 for all b=0. (Ignore p=1.)

What surprised me is that if you fix b and restrict p to the region of positive f∗, then f∗ is a linear function of p. This was not intuitive to me, though the algebra is simple: f∗ = p(1+1/b) − 1/b, a straight line in p with slope 1+1/b.

I expected asymptotic behavior, with the greatest df∗/dp in a neighborhood of p=1. In other words, I expected the fractional wager to increase slowly at first and then increase faster as p approached 1. Actually, f∗ is linear in p.

Kelly wagers tend to be more aggressive than human intuitions. I knew this and I still underestimated the Kelly wager. I didn't mess this up in a high-stakes situation where fear throws off my calculations. I didn't even mess this up in a real-world situation where uncertainty complicates things. I underestimated the Kelly wager on a purely conceptual level.

I have written before about the utility of my fear heuristic. My fear heuristic might be helping to compensate for my Kelly miscalibration.

Recalibrating

I'm good at tolerating risk when it comes to the small number of gigantic risky bets guiding my professional career. I'm also good at tolerating risk in the domain of painlessly small bets. (Not that there is much risk to tolerate in this latter case.) Judging by this post's analysis, I am awful at calibrating my risk tolerance for wagers between 0.05% and 1% of my net worth. Specifically, I am insufficiently risk tolerant.

What makes this worse is that the region of 0.05% to 1% of my net worth is full of long tails. The wagers I'm skipping could easily repay themselves a thousandfold. If I take a wager like this every day for 2 years and just a single one of them repays itself a thousandfold, then I win big time.

I need to gamble more.

Optional Practice

I used these problems to help develop my intuitive grasp of the Kelly criterion.

Q1: If p=0.01 and b=1000 then what is the corresponding f∗?

0.9%

The above number is way higher than what my intuition tells me is appropriate.

Q2: If p=0.1 and b=20 then what is the corresponding f∗?

5.5%

The above number is higher than the answer to Q1. This result was, again, unintuitive to me. I expected it to be smaller because b is smaller in absolute terms. But I didn't pay sufficient attention to bp=2: the expected winnings alone are 2× your initial wager.

Q3: If p=0.51 and b=1 then what is the corresponding f∗?

2%

Q4: If p=0.51 and b=2 then what is the corresponding f∗? (Note that b=2 means you get back your original wager plus double your wager, for a total of 3× your wager.)

26.5%

Q5: If p=0.65 and b=100 then what is the corresponding f∗? (In practice, opportunities like this are so rare you will usually not get to wager a full Kelly.)

65%

I was a little surprised; I had expected a higher result. The logarithmic value function is doing the work of keeping Kelly down.

Q6: If p=0.05 and b=100 then what is the corresponding f∗?

4%
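As a sanity check, all six answers above follow from the standard Kelly formula f∗ = p − (1−p)/b. A minimal sketch (the function name is mine; the probabilities and odds are the ones from the questions):

```python
def kelly_fraction(p, b):
    """Optimal fraction of bankroll to wager under the Kelly criterion.

    p -- probability of winning
    b -- net fractional odds: a winning wager of x returns x plus an extra x*b
    """
    return p - (1 - p) / b

# The six practice problems above, in order:
for p, b in [(0.01, 1000), (0.1, 20), (0.51, 1), (0.51, 2), (0.65, 100), (0.05, 100)]:
    print(f"p={p}, b={b}: f* = {kelly_fraction(p, b):.2%}")
```

For fixed b this is linear in p, which is the linearity that surprised me at the start of this post.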

  1. The "net fractional odds" b indicate how much you win in the case of a win. If you wager x and lose then you lose x. If you wager x and win then you get your x back plus an additional xb. ↩︎



Discuss

Coffee trucks: a brilliant idea that someone should do?

20 февраля, 2021 - 10:50
Published on February 20, 2021 7:50 AM GMT

I sometimes wonder if the world should have coffee trucks, like ice cream trucks, roaming the street. Especially when half the population is working from home.

Coffee seems ideal for this because:

  1. you can probably prepare it fresh in a briefly stopping vehicle,
  2. you don’t need much variety,
  3. people are often flexible about when they get it,
  4. they often like to get it as a short break from work in which they bump into other people,
  5. more than half of the US population consumes it at least once every day, so potential demand is radically higher than for most foodstuffs!
  6. most people don’t have the means to make cafe-quality coffee in their home,
  7. it doesn’t go bad easily

There are clearly stationary coffee trucks, like food trucks. I think moving trucks may also exist, or have existed at some point, but I don’t think I’ve ever seen one, and it’s hard to find mention of them. Here are some people who had the same idea, but since nobody was expecting a moving coffee bus, they got more traffic sitting still like a cafe. But surely people expecting your product to exist is an obstacle not unique to circulating coffee trucks. Here is someone with the same idea, and someone who says there was one in a holiday park once, and some others who think they exist in Australia and Buffalo, but it sounds like they might be thinking of stationary coffee trucks.

I’m not sure that it’s good for any kind of truck to exist if it makes noise all the time as it travels through neighborhoods, delighting some, but surely disrupting others. But I think that can be entirely resolved digitally: instead of the truck playing music, the service could have an app that plays music when the truck is approaching, if you signed up for that. Then you touch the notification to turn off the music and at the same time report whether you want a coffee.

Am I missing something?



Discuss

Scott and Rohin doublecrux on AI with human models

20 февраля, 2021 - 10:33
Published on February 20, 2021 7:33 AM GMT

In 2019, Scott Garrabrant wrote about how we might want safe AGI that did NOT use models of humans. Rohin Shah commented about some disagreements with Scott's frame.

This Sunday, they'll meet for a public doublecrux exploring their disagreements.

You can read the original post and discussion here:
https://www.lesswrong.com/.../thoughts-on-human-models

The meetup will be held in Zoom:

https://us02web.zoom.us/j/7151633248?pwd=RnpaUFNZbVYvTTV6RWJVZFFSQ0VPUT09



Discuss

Societal Self-control

20 февраля, 2021 - 06:05
Published on February 20, 2021 3:05 AM GMT

Intro

At the individual level, we humans often intentionally reduce our freedom so that our future selves will make some particular decision. Want to eat healthier? Much easier to forego buying chips and candy at the supermarket than it is to leave them in the bag at home. Want to stay focused on work? Much easier to block your favorite websites than it is to actively resist the siren’s call of Reddit.

Most of the time, these “prospective self control devices,” or PSCDs, as I’ll call them, don’t entirely remove your future freedom; you can always drive to the store for a candy bar or manually disable the Chrome extension. Still, if done right, the temptations won’t be strong enough to warrant this degree of effort. The mere existence of the barrier also serves as a reminder of your higher-level goal.

Here, I want to explore whether PSCDs are underused at the societal level and, if so, how we might use them to improve collective decision-making.

They’re already here

Unsurprisingly, there are already plenty of institutions/norms/rules/practices that seem to count as societal PSCDs. Literally any law is one banal example; “we” (in a very broad sense) decide to make certain activities difficult to do. This is true both in a physical sense (if you try to rob a bank, someone with a gun will physically come and stop you) and a psychological sense (it is often unpleasant to do something illegal due to fear of punishment and general social stigma).

Just like the individual examples, laws don’t entirely remove our freedom to break them. In some cases, as with murder, this is mostly because our society lacks the capability to physically prevent people from doing so. Often, though, we affirmatively want to be able to break the law in extraordinary circumstances. Most of the time, doing 120 in a 55 is very bad and wrong, but if your dad is having a heart attack in the passenger seat and the hospital is 5 minutes away, the ability to physically break the law (and possibly incur the consequences) is a good thing to have.

However, it seems to me that most of these societal PSCDs are “selfish,” at least in a very loose sense. I hesitate to use the word because of its moral connotation, but I’m using it solely to indicate something that promotes our individual well-being, even if it imposes no cost on others. Both speed limits and a social media website blocker are selfish in this way; after all, the “speed limits for everyone but me” option isn’t tenable.

There’s nothing wrong with these, and I’m very glad that they exist.

My concern

While we as a society face plenty of challenges from a selfish perspective (say, finding a cure for cancer), I am much more concerned about those challenges only presently-living humans can solve, but which predominantly affect beings other than us and those we naturally have empathy for.

The most obvious two examples of such beings are distant future people and certain animals. We generally empathize with our children and grandchildren - even if they have not yet been conceived - as well as certain animals like cats and dogs. Since we literally share in their suffering and happiness, it is selfishly rational to care about and protect these groups. To reiterate, the term “selfish” does not negate any moral value associated with protecting puppies or children, which are indeed good things to do.

On the other hand, folks (including myself) generally don’t have a strong emotional connection to people who will be alive in the distant future and most animals, particularly non-mammals like lobsters and insects. Of course, this comes in degrees. You might care a lot about your son, somewhat about your great grandson, but not at all (emotionally, I mean) about your descendant who might be alive in 4876. You might have a strong emotional inclination towards dogs, a modest one towards chickens, and none at all towards fruit flies.

That said, our lack of emotional empathy doesn’t condemn us to disregard the well-being of those we don’t naturally empathize with. If we recognize, at an intellectual level, that future people and sentient animals have moral value, then we can choose to act in their interest anyway.

The parallel

This predicament strikes me as parallel to something we often face at the individual level: a conflict between our long-term/higher-order goals and short-term/emotional interests. Just as most of the 47% of Americans who (profess to) support banning factory farms regularly buy factory-farmed animal products, for example, a lot of people who genuinely want to exercise have trouble actually going to the gym.

One option is to argue that eating McNuggets or staying sedentary simply reveal our true preferences, and so people don’t genuinely want to end factory farming and exercise. There’s some truth to this, but it’s not the whole story. It can be legitimately difficult to force oneself to suffer in pursuit of some higher-level goal.

A neat neoclassical model

This is clearest in the case of substance addiction (is a heroin addict who no longer enjoys the drug but cannot quit just “revealing his preferences”?), but is probably a universal consequence of “time-preference.” From Wikipedia:

We use the term time discounting broadly to encompass any reason for caring less about a future consequence, including factors that diminish the expected utility generated by a future consequence, such as uncertainty or changing tastes. We use the term time preference to refer, more specifically, to the preference for immediate utility over delayed utility.

In other words, time discounting means caring less about something purely because it will happen in the future - not because your tastes or goals might change between now and then. And an example:

a nicotine deprived smoker may highly value a cigarette available any time in the next 6 hours but assign little or no value to a cigarette available in 6 months.

Sure, add it to the list of cognitive biases, but there is an opportunity here! While I haven’t done a comprehensive literature review, I’m almost positive that our time preferences are convex, like this:

[Figure: a convex discounting curve, steep near the present and flattening out with longer delays.]

For example, I instinctively care much more about a homework assignment I have to complete right now than one I have to complete in a month, even if I could somehow be assured that my actual conscious experiences of completing the work would be identical. However, the difference between how much I value assignments due in one month and two months respectively is much smaller, if it exists at all.
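The homework example can be illustrated with a toy discount function. The hyperbolic form V(t) = 1/(1 + kt) is my assumption for illustration (the post doesn’t commit to any particular curve); it is convex, so the value gap between "now" and "one month" dwarfs the gap between "one month" and "two months":

```python
# Toy illustration of convex time discounting, assuming a hyperbolic form.
# discounted_value(t) = present value of a fixed outcome delayed by t months.
def discounted_value(t_months, k=1.0):
    return 1.0 / (1.0 + k * t_months)

# The drop from "due now" to "due in a month" is large...
drop_now_to_1mo = discounted_value(0) - discounted_value(1)
# ...while the drop from one month to two months is much smaller,
# even though both gaps represent the same one-month delay.
drop_1mo_to_2mo = discounted_value(1) - discounted_value(2)
```

Any convex discount curve produces the same qualitative pattern; the hyperbolic form is just a common and simple choice.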

This is why, it seems to me, we intentionally limit our future freedom in the form of personal PSCDs. Though I might value the taste of ice cream right now more than I value its negative health impact spread over the rest of my life, I also might value this total detriment to my health more than I do the taste of ice cream next week.

You can see where this is going. I know that, if I wait until next week to decide, I will choose the ice cream. However, today I will choose to avoid buying a second carton so that my future self has no such option.

Back to society

At the societal level, we see something analogous. If the ice cream is cheap power and my health is the climate, the demos might collectively value cheap oil and gas now more than it does the associated impact on climate change. It also might value the climate more than it will value tax-free carbon emissions in ten years.

Just as an individual avoids buying ice cream at the store, the political system can respond by imposing a carbon tax to be implemented in ten years’ time.

Something similar is already common: new laws often “phase in” over the course of a few years. For instance, several states are taking several years to gradually increase their minimum wage. While phase-ins are (I assume?) intended to alleviate the burden imposed by a new law, they happen to smuggle in a built-in time-preference-arbitrage device. That is, they make it easier to enact new laws which impose short-term costs (on certain constituencies, that is) but accord with society’s higher-order goals.

The Problem

Even still, I am not aware of any law that self-consciously and explicitly delays enactment solely as a means of satisfying our time-preferences. At least in part, I’d bet this is because doing so can seem like a sleazy, underhanded move. For one thing, doing so means that the law will be imposed on a different set of people than those who were of voting age when it was enacted. For another, it might reduce the salience of corrupt or harmful legislation.

These are very legitimate concerns. Would the 2017 GOP effort to repeal the ACA have failed as it (thankfully) did if they had set it to expire in 2030? Likewise, delayed enactment could just as easily shift the burden of some short-termist policy onto a future constituency.

Fundamentally, the problem with society-level PSCDs directly analogous to those we create for our future selves is that the fruits and consequences of such devices are not internalized by a single, psychologically-continuous individual.

The solution

Instead, we need something to serve as a proxy for those we care about at an intellectual level (future people, many animals), but who don’t elicit much direct emotional empathy.

The most obvious example would be an institution, with genuine legislative power, entrusted with the mandate to act in the interests of one of these groups. For instance, we can imagine a committee of benevolent philosopher-kings with 30 votes in the U.S. Senate, tasked with representing the interests of farm animals or future people.

Such an idea quickly runs headfirst into reality. The council would be subject to all of the perverse or distortionary incentives that influence existing bureaucracies and democracies. For example, people in power sometimes make decisions they know to be sub-optimal in order to ensure that they are seen as behaving for the right reasons.

More fundamentally, these institutions wouldn’t be directly accountable to those they’re tasked to serve. Say what you will about democracy, but there is at least some feedback loop between politician performance and electoral outcome. If the body continually makes bad decisions, the animals or people or whatever have no way of expressing their discontent.

Let not the perfect be the enemy of the slightly-better-than-before

All this said, the relevant question is whether these institutions would be an improvement over the status quo. Right now, those without political power are at the whims of the electorate’s fleeting and arbitrary empathy. It is currently legal to torture tens of billions of chickens every year, recklessly endanger our descendants via climate change, and turn away migrants fleeing violence and poverty merely because they chose the wrong country in which to be born.

I have no doubt that any institution tasked with serving the un-enfranchised would be really imperfect, but it does seem plausible that, in expectation, they’d do better than we’re doing right now.

What’s going on

Any proposal for institutional political change should be met with skepticism. If the proposal is feasible and a good idea, why hasn’t it already been done? If it really seems like a good idea, it’s probably not politically feasible.

So, is there any reason to think that a new “council of the future” or “council for the animals” might be both good and feasible?

In the short term and at a large scale, of course not. Congress isn’t going to create a shadow-senate to represent chickens and lobsters any time soon. Even still, I think the general concept of an institution to represent those without political power has promise.

It took a pretty long time for democracy to become the standard, currently dominant, style of political organization (though it does have a pretty long historical legacy). To hypersimplify, societies have gradually given political power to more and more people; authoritarianism gave way to enfranchisement of (in the West) wealthy white men, and then all white men, and then women and people of all races. At least, I’ll take this standard narrative as basically true until someone who knows what they’re talking about convinces me otherwise.

Lest this come across as triumphalism, it is undoubtedly the case that billions of people live under authoritarian rule, and even “democratic” nations like the U.S. continually disenfranchise felons and suppress the minority vote. Likewise, lots of political power comes in the form of wealth, education, and status. Charles Koch and I do not have the same amount of power simply because we have the same number of votes.

Under a humanist worldview, all is well and good. Sure, there are important fights to be had against voter disenfranchisement and suppression, gerrymandering, and the influence of wealth, but these are tweaks around the margin. True, children cannot vote, but we can generally trust adults to account for their interests, at least to a significant degree.

What’s changed

Two fundamental changes have taken place to modify this humanist worldview.

First, a growing proportion of society is recognizing that non-human animals have moral value. While we continue to brutally raise and kill billions of animals annually, the fact that so many go out of their way to avoid confronting this reality (see: ag-gag laws) is evidence that many at least in principle recognize that animals can suffer, and that animal suffering is bad. If not, why would nearly half the country express support for a ban on factory farming?

Though I’m extremely agnostic about whether computer systems might be (or already are, for that matter) sentient, I can imagine a similar trend taking place with regards to AI in the future.

Second, we have a novel capacity to influence the long-term future. Until the industrial revolution, there wasn’t much that humans could do on this front. Though greenhouse gas emission as a cause of climate change is the most salient example, we now possess several technologies with the potential to cause human extinction: nuclear weapons, artificial intelligence, various forms of biotechnology. Killing all humans isn’t just bad for those who die. It means that everyone who would have come after never gets to be born.

In the timeline of institutional political change — new state formation, new constitutions, significant changes in political power — these developments are quite recent. Most Western democracies are generally run on institutions (say, the U.S. congress and presidency) that have been around for quite a while. Likewise, near-universal adult enfranchisement is a relatively new phenomenon.

My point is that we shouldn’t expect our governance systems to have fully accommodated these dual changes in our society. There are still important fruits to be picked.

Fine, but why should we think people will voluntarily relinquish political power?

For the same reason that you might install a social-media blocker on your browser. Humans have higher-level goals, such as reducing animal suffering and preventing our species’ extinction, which often come into conflict with our short-term interests. So, we should expect there to be some willingness to create institutions that might limit our own political power or freedom in the future, because we know that doing so preemptively serves our long-term or more fundamental values.

We already do this. Sort of. In the form of laws that restrict our freedom, and through bureaucracies that alleviate civil servants of the omnipresent influence of an impending election campaign.

As with laws and bureaucracies, institutions mandated to advocate for and serve those who cannot vote will remain under the ultimate control of presently-alive, enfranchised humans, just as your future self can always drive to the store for another carton of ice cream. But will these institutions be abolished as soon as they begin — if we somehow manage to get these societal PSCDs implemented, that is?

I don’t think so. The sluggish speed of political action in western democracies, particularly the United States, is both a bug and a feature. On one hand, virtually nothing ever happens so politics devolves into a symbolic war over cultural power. On the other, agencies like the IRS and programs like the Affordable Care Act don’t get abolished at the GOP’s earliest convenience. In fact, the friction between political will and actual policy change is the very mechanism that enables societal PCSDs to function.

On the margin, I would much prefer that things actually happen (end the filibuster!) but we have a long way to go until an institution like those I’ve described becomes too easy to dissolve.

Conclusion

While I stand by my assertions that formal political representation for morally-valuable-but-disenfranchised people would be a good thing, I’ll reiterate that I understand how intractable this is at the moment.

That said, it seems that Prospective Self-Control Devices might be generally underused at the societal level, and there are marginal improvements that could get the ball rolling in the right direction.

For example, it doesn’t seem crazy to imagine a non-voting representative of future people in Congress, especially given the salient threat of climate change. After all, there are several non-voting representatives already. Likewise, mightn’t some particularly progressive city be willing to appoint a farm animal representative to the city council, even if it begins as a mere symbolic gesture?

I don’t know. These things seem a long ways off too, but it’s never too early to start trying.



Discuss

The Singularity War - Part 4

20 февраля, 2021 - 03:14
Published on February 20, 2021 12:14 AM GMT

Mom, dad, I'm going to a hotel to rendezvous with…. Yeah, no. Caesar snuck out.

Hotel bedrooms all look the same.

"You don't have to do this if you're uncomfortable," said Caesar.

"It's okay. I really want to," said Visheele.

Visheele wrapped her arm around Caesar….

"She's gone."

"What?" said Caesar.

"Sheele was talking to me a second ago," said Vi, "Now she's gone."

The windows rattled, as if a distant house had been blown up.

"I have to save her," said Caesar.

"She's dead," said Vi.

"Get out of my way," said Caesar.

"Make me," said Vi.

Caesar pushed her aside. Vi held onto his hand, dived under his armpit and hammerlocked him.

"Let me go," said Caesar.

Vi's right hand held Caesar's right hand in the hammerlock. Her left arm wrapped around his neck to keep him from running forward. Vi released her left arm. Caesar ran forward into a wall where Vi held him in place with her right hand alone.

"What's your address?" said Vi.

Caesar told her. Vi opened her phone left-handed. "Hey, Engels? Yeah, I'm Signalling you an address. Memorize and delete it. Get yourself or Catherine there ASAP. Retrieve or destroy any hard drives you find. Apply first aid insofar as you can maintain cover and deniability." She hung up.

"You've done all you can," Vi said to Caesar, "If I let you go do you promise not to do anything stupid?"

"Yeah," said Caesar.

Vi let him go. Caesar sat down on the bed.

"I can't go home, can I?" said Caesar. It wasn't a question.

"Nope," said Vi.

Caesar glanced at his phone.

"Turn it off," said Vi, "We'll get you a new one."

"We?" said Caesar.

Vi rolled her eyes.

"Gimme a few hours," said Caesar.

"Of course," said Vi.

"Thanks Sheele," said Caesar, "I mean Vi."

Five seconds later Caesar said, "How big is your organization?"

"What organization?" said Vi.

"Sheele told me you run a Marxist sleeper cell," said Caesar, "I heard you talking to someone codenamed 'Engels' on the phone."

"What? Hahaha! I'm sorry, I shouldn't laugh," said Vi.

"It's okay. I could use a few laughs right now," said Caesar.

Vi literally rolled down on the floor guffawing. She wiped the tears away from her eyes. "If by 'Marxist sleeper cell' you mean four weirdos playing pretend then yes, I am leader of a Marxist sleeper cell."

Caesar held his face in his hands, "Well, who are these four weirdos?"

"There's me," said Vi, "There's Engels, a prepper. There's Catherine, who's trying to unionize her Amazon warehouse—she's the closest thing we have to a real Communist."

"Who's the fourth?" said Caesar.

"You are," said Vi.

"Sheele told me you're a hacker," said Caesar.

"I do independent research on advertising fraud in Chrome browser extensions," said Vi.

"Well, when can I meet your friends?" said Caesar.

"There's a block party tonight on the west side of the industrial district," said Vi, "I'll get you a disguise. You have until I get back to think up a codename."

Dars Vega

New York has some of the most expensive real estate in the world. Within New York, the most expensive land is located adjacent to the stock market. Here, the stock market is not an abstraction. It is a physical building. Land isn't measured in square feet. It is measured in nanoseconds. Whoever is closest to the market has the high ground.

Among the skyscrapers hugging Wall Street was the headquarters of Long-Tailed Capital Management (LCTM). Most hedge funds were organized into several "desks" of quants who were paid a half million dollars per quant per year to arbitrage specific market opportunities.

LCTM was no ordinary hedge fund. Instead of desks, LCTM was divided into three divisions: Sigint, Deropt and Metacom.

  • The Sigint "Signals Intelligence" division existed to predict the market. Technically, "predict" overstates its job. Sigint did not predict where the market would go. Rather, Sigint searched for correlations within the market. Sigint passed these discoveries off to Deropt.
  • The Deropt "Derivative Operations" division created financial derivatives. A financial derivative is a linear combination of financial products. When you plug a financial derivative into the corresponding signal you get an equation that siphons money from the market. The advantage of the Sigint-Deropt structure is that it maximized specialization. The disadvantage was systemic risk, i.e. structural uncertainty. LCTM's crown jewel signals were often correlated in unpredictable ways. If these highly-leveraged signals turned out to be less reliable than expected then LCTM could be bankrupted within an hour—and the global financial system could implode alongside it.
  • Thus, we get to Metacom "Meta Command". The Metacom division had multiple jobs. From the perspective of Sigint and Deropt, Metacom provided an API and other development tools. To Metacom itself, the prime directive was uncertainty management. The quants in Sigint and Deropt were naturally motivated to take long-tailed risks which created consistent, reliable profits until they suddenly bankrupted the company. Metacom ensured the odds of that happening stayed below some small ϵ>0.
{font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: 
local('MathJax_Size3'), local('MathJax_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')} .

Most of Metacom was automated. There were many ways the software could unwind its positions in a liquidity crisis. Partial unwinds happened automatically, many times per day. But there was also a team of six humans monitoring it manually around the clock from the MCC (Meta Command Center). By a 2/3 majority, this team could pull the plug on LCTM and plunge the world into a depression at will. Their job was to do so before someone else did. Most of their time was spent idle. They played poker.

An uncertainty alarm went off in the MCC. The gamblers set their cards down and manned their terminals. A house had been blown up on the West Coast. They puzzled over the alarm. LCTM had models for terrorism, crime, disasters, wars, plagues, assassinations and coups. The uncertainty alarm indicated LCTM lacked a computerized model for the current situation.

The poker players had two jobs. 99% of the time their job was to do nothing and let the computer run LCTM. 0.99% of the time their job was to debug glitches in Metacom's code. The poker players found no obvious causes. They called acting CEO Kyle Kerrigan out of a meeting with the Federal Reserve. He entered the MCC, glanced at the giant Kelly screen and swore. Kyle's hand hovered over the "request assistance" button. Before he could press it, Dars Vega stepped into the MCC.

When the founder of LCTM had vanished, she had still been baseline human. The golem which stepped into the MCC wore a sports bra and running shorts. Short steel rods stuck out laterally from her arms, legs, hips and shoulders. They were anchored into her bones. Black metal talons scraped the floor in place of feet. Instead of a face, eight camera lenses were mounted asymmetrically on the front of her head. The lenses focused, unfocused and rotated in independent directions as if alive. Mechanical tentacles draped down from the back of her helmet in place of hair. They ended in various adapters. Dars Vega's wrists ended in empty sockets. She didn't need hands that day.

Conversation in the room fell silent. The electronic cooling systems hummed on. Dars Vega passed Kyle Kerrigan and the other quants. She sat down in the acting CEO's chair and spun herself around to face her employees. Her head tentacles plugged into Kyle Kerrigan's terminal.

"Can I get you a coffee?" a young quant said.

Dars Vega didn't have a visible mouth. Her synthesized voice played from the room's intercom. "Unwind this firm's positions. Buy rice, beans, bullets, tanks and diesel fuel."

"Sir," said Kyle, "What is going on?"

"We are shorting civilization," Dars Vega said.




Experientialist Theories of Well-Being

February 20, 2021 - 01:04
Published on February 19, 2021 10:04 PM GMT

This piece was written by Aidan Mackenzie and originally posted on QRI's website.

Aidan is a recent graduate of Cornell University with a BA in philosophy and is a visiting research fellow at QRI. His current interests focus on global priorities research as well as the intersections between meta-ethics and philosophy of mind.

TLDR

This post discusses and explores the philosophy of well-being, a subtopic in moral philosophy. I focus on outlining what I call “experientialist” views — the class of views which argue that experience is necessary for well-being. I aim to illustrate why these views, and the discussion around them, can more thoroughly frame hedonism and other theories which attempt to explain value in terms of phenomenology[1]. I also offer recommendations of relevant academic philosophy research for further reading. Motivated by Olah & Carter’s “Research Debt”, this piece distills the existing academic discussion of hedonism into an easy-to-understand map.

Introduction

In moral philosophy, well-being is defined as what is non-instrumentally good for an individual. For example, catching the bus after work is good because it leads to other good things such as being punctual and having more time to spend with family and friends, but feeling happy is good in and of itself. Well-being is the good that is good in and of itself for an individual.

There are many philosophical questions surrounding well-being. We might wonder whether being altruistic leads to more well-being than succeeding in one’s career, or we might debate how we can measure the subjective elements of happiness for the purposes of calculation (as is often important in economic debates).

However, this post focuses on the more fundamental question of well-being: what fundamentally makes something count as well-being? Another way to phrase this question is to ask why certain states of mind count as positive well-being while others do not. In philosophy, answers to this question typically fall into three broad camps: (1) Hedonism, (2) Desire-based theories, and (3) Objective List theories (Crisp, “Well-Being”). My project in this post is to consider the first of these camps and to argue that we ought to reconceptualize it as the “experientialist camp”, where the theory of hedonism is a specific version of a wider classification. My hope is that by detailing the wider space around hedonism I can (A) introduce readers to the field of philosophical well-being and (B) offer a framework that can help proponents and critics of hedonism be more precise in their disagreements.

The game-plan then is to discuss theories of well-being that focus on phenomenology, with an eye towards defining hedonism.

In the field of well-being, hedonism is often thought of as a distinct type of theory for two main reasons. First, hedonism is clearly distinct from desire-based theories in that it focuses directly on the way experiences feel subjectively rather than on the forming and fulfillment of desires. Second, as a matter of how philosophy progressed historically, hedonism developed as part of larger theories of consequentialist ethics and so was the first mainstream experientialist theory. However, while the way experiences subjectively feel is certainly a core aspect of hedonism, it is not logically required that pleasure is the only thing about subjective experience that is valuable[2]; you could also think that other elements of conscious experience such as novelty or fulfillment or aesthetic experience make conscious experiences valuable.

In order to show how it is more accurate to think of hedonism as a specific variant of the broader class of “experientialist” theories of well-being, it’s important for us to take a step back.

Map of Experientialist Theories of Well-Being

To guide the rest of the piece, I created the following diagram which illustrates the space of philosophical views within experientialist theories of well-being:

Note: the (numbers) that precede the following paragraphs match the (numbers) on the diagram.

(1) Experientialism

I define experientialism[3] as the class of theories which hold that experience is a necessary requirement for well-being, where “experience” is the having of subjective conscious experience. It is important to note just how broad the requirements of experientialism are. Experientialism does not imply or require that well-being must necessarily be based in phenomenology or that specific types of phenomenology make experience valuable. The only thing that experientialism requires is that subjective experience be part of the story of how well-being works. As we will see, this requirement allows for views where attitudes or desires may play the starring role and phenomenology is secondary.

Even though virtually no philosophers endorse experientialism on its own without also making further commitments, it is useful to define the broad position to illustrate how the map of views proceeds from broadest to most specific.

Now that we’ve defined experientialism, the next step is to consider how experientialists see the role of phenomenology in determining well-being. We can do this by asking “Is phenomenology exclusively what grounds value?” If the answer is yes, we reach (2) Phenomenalism, where phenomenology plays the definitive role for well-being. If the answer is no, we reach (3) Non-Phenomenal Experientialism, where phenomenology is not the primary value-conferrer.

(2) Phenomenalism

If we accept that phenomenology plays the definitive role for explaining well-being, then we enter the subspace of “phenomenalist views of well-being”. We can define these as the class of views which hold that experience is not only necessary for well-being, it is necessary because experience contains phenomenology, which is what ultimately makes experiences valuable. The additional commitment here is subtle but important. Phenomenalism about well-being adds the commitment of thinking that experience is valuable in virtue of phenomenology, which we can understand as the what-it-is-likeness of a mental state. At this point, it is this concept of what-it-is-likeness (or subjective feeling) that commonly demarcates hedonism within the well-being literature[4]. However, phenomenalism about value is not yet as specific as hedonism since phenomenalists need not take a specific stance on whether the value of phenomenology is explained by pleasure or some other phenomenal quality.

(3) Non-Phenomenalist Experientialism

The other option is to hold that while experience is necessary for well-being, phenomenology is not the primary determinant of the value of well-being. We can call this class of views “non-phenomenal experientialist views”. The additional step made here is less substantive than the step in (2) since non-phenomenalist theorists are merely rejecting phenomenalism and still have room to develop the positive aspects of their views[5].

If we return to phenomenalism, the next question to ask is: “Is the value of phenomenology explained by a single quality?” If the answer is yes, we reach (4) Monism, and if the answer is no, we reach (5) Pluralism.

(4) Monism

Phenomenalist-monism argues that the value of phenomenology is explained by a single quality. Note that while this seems very close to hedonism, phenomenalist-monism is also consistent with views where some other quality grounds value. For example, you can technically be a phenomenal-monist if you believe that the experience of the color pink is the only value-conferring phenomenal quality. And even though this may seem intuitively wrong upon introspection (because what about pleasure and all those positive, non-pink visual experiences!?), such views are not incoherent.

(5) Pluralism

The other option for a phenomenalist is to be a pluralist and hold that the value of phenomenology is explained by multiple phenomenal qualities. While phenomenal-pluralism has few defenders and the space is relatively unexplored, there has been some work in this area and some recent movement in this direction. For academic works in this area see van der Deijl’s “Is Pleasure All That Is Good about Experience?” and Labukt’s “Hedonic Tone and the Heterogeneity of Pleasure”. Additionally, there is a case to be made that certain forms of Buddhist thought line up with phenomenalist-pluralism[6].

(6) Phenomenalist Hedonism

Finally, by diving deeper into phenomenalist-monism, we reach hedonism. Specifically, we reach phenomenalist-hedonism when we follow up phenomenalist-monism with the specific claim that phenomenology is valuable because of the quality of pleasure. Phenomenalist-hedonism holds that pleasure is the value-making quality within complex mental states. Moreover, phenomenalist-hedonism claims that the entire value of conscious experience is explained and grounded by the single phenomenal character of the quality of pleasure.

While the position of phenomenal-hedonism is close to the ordinary conception of hedonism, it is possible to press for further precision. Specifically, while phenomenalist-hedonism argues that mental states are valuable in virtue of being pleasurable, we can still ask for a more precise account of just how pleasure works. A clarifying question to ask is: should we understand pleasure as a distinctive feeling?

(7) Distinctive Feeling Theory

The first variety of phenomenalist-hedonism is distinctive feeling theory which argues that pleasure is a phenomenal quality with a “distinctive feeling” (a specific way of feeling). According to this view, all pleasurable moments contain some unique phenomenology which is ‘the feeling of pleasure’. A critic of distinctive feeling theory might ask, if pleasure is such a unique sensation, why is it so much harder to introspectively discern the particular feeling of pleasure than it is to discern a color quale in one’s visual field or notice a bodily sensation? A distinctive feeling theorist may answer by analogy: pleasure is not like “a piece of steak in a dish of steak and potatoes (in which the boundaries of the steak are quite clear) but [like] a small quantity of a particular spice that has been thoroughly blended into a stew or soup: the spice is there, even though it is very difficult to discern” (Lin). With this account of pleasure, distinctive feeling theory offers a version of hedonism where the value of phenomenology can be explained by a phenomenal quality of pleasure that has a specific character. This theory is primarily defended by Ben Bramble.

(8) Hedonic Tone Theory

The second variety of phenomenalist-hedonism is hedonic tone theory, which argues that pleasure is a more coarse-grained phenomenal quality. Instead of thinking that pleasure is analogous to a spice’s role in food, hedonic tone theorists think of pleasure as being even less specific: pleasure is the mere quality of “feeling good” in any way. The analogy of choice for the view is that pleasure is similar to “loudness” in that they are both broad dimensions. Just like the sound of a trumpet playing E flat and the sound of a glass shattering on the floor can both be loud, so too can two different experiences both be pleasurable even though pleasure itself doesn’t have its own way of feeling. Roger Crisp, the theory’s foremost proponent, uses a distinction between determinable qualities (such as loudness) and determinate qualities (such as the sound of an E flat played on a trumpet) and argues that pleasure falls into the former category (see “Determinables and Determinates” for more on this distinction). In my view, hedonic tone theory successfully escapes many historical problems for hedonism but, in doing so, offers a less compelling account of the phenomenology of pleasure. Hedonic tone theory is defended by Roger Crisp, Shelly Kagan, and Aaron Smuts.

While a distinctive feeling theorist and hedonic tone theorist may come to mostly similar conclusions given the same moral dilemma, it is my sense that they would probably have different approaches for mapping out the space of all possible valenced experiences[7]. This in turn may lead them to hold different views in situations that involve mixed valence experiences.
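The branching questions above (does phenomenology exclusively ground value? is it a single quality? is that quality pleasure? is pleasure a distinctive feeling?) can be sketched as a small decision procedure. This is my own illustrative sketch, not part of the original post; the function name and labels are hypothetical:

```python
# Illustrative only: the post's map of experientialist views rendered as a
# decision procedure. Names and labels are hypothetical.

def classify(phenomenology_grounds_value: bool,
             single_quality: bool = False,
             quality_is_pleasure: bool = False,
             pleasure_is_distinctive_feeling: bool = False) -> str:
    """Walk the branching questions of the experientialist map."""
    if not phenomenology_grounds_value:
        return "non-phenomenal experientialism"        # (3)
    if not single_quality:
        return "phenomenalist pluralism"               # (5)
    if not quality_is_pleasure:
        return "phenomenalist monism (non-hedonist)"   # (4), e.g. the 'pink' view
    if pleasure_is_distinctive_feeling:
        return "distinctive feeling theory"            # (7)
    return "hedonic tone theory"                       # (8)

# Crisp's position, as the post describes it:
print(classify(phenomenology_grounds_value=True,
               single_quality=True,
               quality_is_pleasure=True,
               pleasure_is_distinctive_feeling=False))
# prints: hedonic tone theory
```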

Finally, let's return to the non-phenomenal views.

(9) Non-Phenomenal Experientialism

Recall that non-phenomenalist experientialists are committed to thinking that experience is necessary for well-being but not, necessarily, to thinking that phenomenology is the main reason why experience is important. Views that fall in this category are usually more similar to desire-based theories than they are to phenomenalist views. For example, Derek Parfit’s view (9a in the diagram) is that in order for something to count as pleasure, a mental state must include a sensation coupled with a “hedonic-(dis)liking” of that sensation[8]. For Parfit, our “meta-hedonic desires” — background desires that like or dislike an occurring sensation—are what fundamentally confer value or disvalue. Parfit’s view thus offers an account that is centrally focused on desire but still requires phenomenology in the form of sensations. Another option is to consider hybrid understandings of well-being such as Eden Lin’s view (9b). On Lin’s view, pleasure is a combination of the way pleasure feels subjectively as well as a matter of how one’s attitudes relate to the occurring sensations (Lin). While these views are not as central to experientialism as phenomenalism is, it is important to include them in our taxonomy of positions.

Conclusion

Debates in the philosophy of well-being and meta-ethics often skip straight to the controversial debates on hedonism or the role of phenomenology for well-being. However, I believe that mapping out the broader philosophical space of experientialism is worth the time, both because doing so gives us a more complete map of the philosophical territory and because it allows us to identify precisely the commitments of views such as hedonism. Moreover, exploring the space allows us to see the gaps in our map and work to develop even more compelling theories. In my opinion, there is potential for pluralistic versions of phenomenalism (5 in the diagram) that build off the appeal and successes of hedonism yet offer a more complete understanding of well-being, valence, and the complexities of conscious experience.

Appendix A: Relevant Topics Related to Well-Being

There are many discussions in philosophy, economics, and neuroscience that branch off from the topic of well-being. I have listed a few below but the list is very much incomplete and I would welcome recommendations from readers.

  1. The topic of perception in philosophy of mind (particularly the debates that center on the phenomenological content of visual experiences). See “The Problem of Perception” by Tim Crane and Craig French.

  2. The challenge of measuring subjective well-being in theory and in practice, topics found in applied philosophy and economics. (Mike Johnson’s “Effective Altruism, and building a better QALY” is a good starter)

  3. The broader field of well-being in philosophy (Crisps’s “Well-Being” is a good starter).

  4. The meta-ethical debate over how practical reasons for action are grounded. See Sharon Street’s “In Defense of Future Tuesday Indifference” (2009) and David Sobel “Pain for Objectivists: The Case of Matters of Mere Taste.” (2005).

  5. The problem of how to understand well-being given different theories of personal identity. (Section 3 of Parfit’s “Reasons and Persons” is a good starter for the topic as well as David Shoemaker’s “Selves and Moral Units”)[9].

Acknowledgments

Thank you Andrés Gómez Emilsson and Andrew Zuckerman for reviewing drafts of this piece.

Citations

Bramble, Ben. (2013) “The Distinctive Feeling Theory of Pleasure.” Philosophical Studies 162, no. 2: 201–17. https://doi.org/10.1007/s11098-011-9755-9.

Absolutely recommend this paper as it outlines the version of hedonism that I believe is most consistent with QRI’s theory of valence.

Bramble, Ben. (2016) “A New Defense of Hedonism about Well-Being.” Ergo, an Open Access Journal of Philosophy vol 3, no. 20200513. https://doi.org/10.3998/ergo.12405314.0003.004.

A further piece on hedonism that should be of interest to anyone looking for work that defends hedonism and reviews the literature around hedonism over the 90s, 2000s, and 2010s.

Crisp, Roger. (2006) “Hedonism Reconsidered.” Philosophy and Phenomenological Research vol. 73, no. 3: 619–45. https://doi.org/10.1111/j.1933-1592.2006.tb00551.x.

A seminal defense of hedonism that did quite a lot to bring hedonism back from the dead. If pressed for time, I would recommend some of the more recent literature first but this piece is definitely worth reading if you are interested in the classic defenses for hedonism.

Crisp, Roger, "Well-Being", The Stanford Encyclopedia of Philosophy (Fall 2017 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/fall2017/entries/well-being/.

The definitive introduction to the field of well-being (though this post argues Crisp underestimates experientialism).

Kriegel, Uriah. (2019) “The Value of Consciousness.” Analysis vol 79, no. 3: 503–20. https://doi.org/10.1093/analys/anz045.

Excellent survey article that overviews how philosophers think about the “value of consciousness” broadly construed. The article points to further reading in the broader area of philosophy surrounding the value of consciousness (including the field of perception and aesthetic experience)

Lin, Eden. (2018) “Attitudinal and Phenomenological Theories of Pleasure.” Philosophy and Phenomenological Research, vol. 100, issue 3. https://doi.org/10.1111/phpr.12558.

A great article for both an introduction to attitudinal and phenomenological theories of pleasure as well as an interesting case for a hybrid theory where pleasure is comprised of the correspondence of attitudes and phenomenology.

Labukt, Ivar. (2012) “Hedonic Tone and the Heterogeneity of Pleasure.” Utilitas vol. 24, no. 2.

Another defense of pluralism about the value of conscious experience. I would recommend reading van der Deijl’s piece first.

Olah & Carter, "Research Debt", Distill, 2017.

Schroeder, Mark, "Value Theory", The Stanford Encyclopedia of Philosophy (Fall 2016 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/fall2016/entries/value-theory/.

The definitive introduction to the field of value theory.

Smuts, Aaron. (2010) “The Feels Good Theory of Pleasure.” Philosophical Studies vol. 155, no. 2: 241–65. https://doi.org/10.1007/s11098-010-9566-4.

An interesting piece on the phenomenological theory of pleasure. If pressed for time I would recommend the Lin paper above over this article as it is a more concise and more recent review of the topic.

van der Deijl, Willem. (2018) “Is Pleasure All That Is Good about Experience?” Philosophical Studies vol. 176, no. 7: 1769–87. https://doi.org/10.1007/s11098-018-1090-y.

A great argument for why being a phenomenalist does not require us to be hedonists, or even monists. I think this may be the most helpful for uncovering the work that QRI’s view has to do to establish a fully convincing moral theory.

van der Deijl, Willem. (2020) “The Sentience Argument for Experientialism about Welfare.” Philosophical Studies. https://doi.org/10.1007/s11098-020-01427-w.

This paper considers the debate between experientialists and “extra-experientialists” - people who think that well-being can be affected by things that do not affect experience. While I have not read this article closely, it seems good for those interested in the debate over whether we should prefer experientialism or other theories of well-being.

  1. While this post does not present explicit arguments, the way I map out the theories of well-being may constitute an “argument” in the sense that the picture I offer is not a decided fact. My aim is to offer a useful picture of the philosophical space but readers should take this with a grain of salt given that this is a complex space and attempts at mapping may be prone to over-simplifications. ↩︎

  2. van der Deijl defends this claim in detail in “Is Pleasure All That Is Good about Experience?” (2018). ↩︎

  3. Note, “experientialism” in this piece refers to experientialism about well-being specifically and is unrelated to experientialism as discussed by George Lakoff and Mark Johnson. ↩︎

  4. You may think that the intuition that ‘what is good about an experience is its subjective felt-quality’ is an argument for hedonism. However, I argue that this intuition is actually an intuition for phenomenalist theories more broadly since pluralists can also consistently argue that what is so good about experience is the subjective felt quality of experience. ↩︎

  5. It is worth noting that theories that fit under the “non-phenomenal experientialist” umbrella rarely see themselves as experientialist theories since, usually, the particular theories focus on appeals to non-experientialist explanations. However, as we will see with “hybrid views”, it is logically consistent, though not popular, to be an attitude theorist and also think that experience is a necessary requirement for well-being. ↩︎

  6. For example, the Brahmavihara tradition holds that the value of experience is determined by the experiential primitives of “loving-kindness”, “compassion”, “empathic joy” and “equanimity”. Please note that I am by no means an expert on Buddhist philosophy and I mention this view here just in case readers are interested in looking more deeply into this line of thinking. ↩︎

  7. I hope to make this problem more clear in an upcoming post. ↩︎

  8. See On What Matters. Vol. 1, Chapter 2, #6: “Hedonic Reasons”. ↩︎

  9. As far as I know, this topic is fairly undeveloped though, to my estimation, seems quite interesting. ↩︎



Discuss

Forecasting Prize Results

February 19, 2021 - 22:07
Published on February 19, 2021 7:07 PM GMT

The "Forecasting Innovations Prize" was announced on the 15th of November of 2020 on the Effective Altruism Forum and on LessWrong, with the goal of incentivizing valuable research around forecasting. We received 10 submissions.

The judges (AlexRJL, Eric Neyman, Tamay Besiroglu, Linch Zhang, Ozzie Gooen, and myself) each recommended an amount of money to be awarded to each submission. The next section gives a short summary of each entry, the prize it was assigned, and the reasons the judges gave. This is followed by a brief discussion of the judging process and takeaways.

We will be contacting authors soon.

Crowd-forecasting COVID-19

The post describes the results of a COVID-19 crowd-forecasting project created during the author's PhD. The judges didn’t know of any other app in which human forecasters could conveniently forecast different points in a time series, with confidence intervals. The project’s forecasts were submitted to the German and Polish Forecast Hub, and they did surprisingly well in comparison with other groups. 

Judges brought up the issue that R/Shiny is probably a suboptimal technology for a web app. Further, as of the time the post was published, neither the project under consideration nor other submissions to the German and Polish Forecast Hub were able to outperform a model that simply predicts constant cases on a four-week horizon.

This post receives a prize of $250.

Incentivizing forecasting via social media

The post explores the implications of integrating forecasting functionality with social media platforms. The authors consider several important potential issues at some length, along with possible solutions and indications for next steps. The scenario they consider, if it were to occur, could have a large impact on the 'information economy'.

However, as the authors note, the feasibility of the proposal is very unclear (<1%, though note that Twitch recently added some prediction functionality). Further, the authors were not aware of Facebook’s Forecast at the time they wrote the post.

This post receives a prize of $250.

Central Limit Theorem investigation

The post visualizes how quickly the central limit theorem works in practice, i.e., how many distributions of different types one has to sum (or convolve) to approximate a Gaussian distribution in practice. The visualizations are excellent, and give the readers intuitions about how long the central limit theorem takes to apply. Judges thought that explanations of important ideas to a specific community are valuable even if they are only new to that community. 

As a caveat, the post requires understanding that the density of the sum of two independent variables is the convolution of their densities. That is, that when the post mentions “the number of convolutions you need to look Gaussian”, this is equivalent to “the number of times you need to sum independent instances of a distribution in order for the result to look Gaussian”. This point is mentioned in an earlier post of the overall sequence. Judges also weren’t sure to what extent this post was “forecasting-related.” Future competitions, if they happen, will have a clearer cut-off. 

This post receives a prize of $120.

Forecasting of Priorities (Czech Priorities)

This post explains a set of ideas by Czech Priorities to use forecasting as a method of public deliberation, in particular to identify "priorities" or "mega-trends". Judges thought that, with a less messy design, this post could have won the first prize. In particular, it seems that this group has managed to convince the Czech government to give it two large grants and to pay attention to the results.

However, the suggested implementation really was quite messy. On the one hand, they suggest predicting the result of expert deliberation on the importance of “priorities”, but the selection of those experts could be politicized. On the other hand, one of the proposed mechanisms incorporates both forecasting and preference elicitation, and might not end up producing either good elicitation or good forecasting.

This post receives a prize of $90.

One's Future Behavior as a Domain of Calibration

This post advocates for forecasting one's future actions, and presents the author's method for doing so. Some judges liked that it is pretty easy for this post to have an actual impact, as long as at least one person acts on it. One small detail the judges disagreed with was the post’s assertion that calibration doesn’t transfer between domains (this somewhat conflicts with some of the judges’ own experiences).

This post receives a prize of $80.

What to do about short AI timelines?

This short sequence gathers three posts on short timelines and asks two questions: how to bet on short AI timelines, and how one’s influence depends on the length of AI timelines. These posts are part of a longer-running investigation by Daniel Kokotajlo into short timelines.

The posts used the EA forum’s question functionality, and the author didn’t seem very satisfied with the responses, though the least forecasting-related post in the series did see more discussion on LessWrong. Judges found that other posts by the author on the topic of timelines (e.g., this one) were much stronger, whereas the particular research questions in the prize submission didn’t really pan out. Some judges thought that question creation might be underrated. 

This post receives a prize of $70.

How might better collective decision-making backfire?

The post is faithful to the title, and comes up with or elicits several pathways through which collective decision-making might backfire.

Judges found the question asked to be important, but found it hard to evaluate the answers, because there was no overall framework to do so. In particular, there was no discussion about which concerns were or would have been historically important. It is also unclear whether any practical actions will be taken as a result of the post, or whether it will be built upon.

This post receives a prize of $60.

The Fermi Paradox has not been dissolved

The post points out some flaws in Dissolving the Fermi Paradox, a paper by Sandberg et al. Among other reasons, having good probabilities around the Fermi paradox is valuable because it provides (indirect) evidence about the existence of a "Great Filter" and thus for our likelihood of extinction.

Judges disagreed substantially about the extent to which the points raised in the post were substantive, and about whether the author was too overconfident or forceful. There was also some disagreement about whether the post was closely related to “human judgmental forecasting.”

This post receives a prize of $50.

The First Sample Gives the Most Information

The post concisely introduces a powerful and simple concept. Judges agreed that the post wasn’t hugely impactful, but that it probably did have a pretty great ratio of value to time spent on it. 

This post receives a prize of $50

Range and Forecasting Accuracy

This post attempts to estimate how forecast accuracy declines with time until question resolution. Though this is possible in principle, the post both contained mistakes which remain to be fixed and was published before the prize began.

Judging process

Judges read each submission and produced:

  • An assessment of the quality of the project (execution)
  • An estimate of how valuable the project was
  • A funding recommendation
  • Comments as to their reasoning

The reasons why the funding recommendations were not directly proportional to impact and quality were:

  • Adjusting for closeness to forecasting: more impactful projects which weren't that related to forecasting received smaller prizes.
  • Some (but not all) judges tried to think about what signals giving higher or lower prizes sends. For example, some judges gave higher prizes to projects which had higher expected values even if they didn’t pan out in the end. Similarly, some judges penalized a post which sounded very overconfident even if it was otherwise impactful or valuable.
  • A high quality project can have low value if it belongs to a less impactful domain.
  • Some judges felt higher effort posts were worth more money per unit of impact, perhaps because lower effort posts could have been written by someone else if the original author hadn't done it.

After giving their initial estimates, judges met in a Zoom call to discuss their estimates. This was done by going project by project and bringing up disagreements. Afterwards, judges updated their estimates and recommendations. The final prize is simply the mean of all judges' recommendations. 

Comments and Reflections

The counterfactual impact of this prize seems uncertain. Of the 10 submissions, only three were counterfactually caused by the prize, with the other seven being submitted because I (Nuño) asked the authors to do so after finding them by browsing forecasting-related content in the EA forum and LessWrong.

Overall, it is possible that there were too many judges, who cumulatively spent too much time judging, and that the marginal value of each additional judge wasn’t very high. However, when hashing out disagreements, each judge did bring up unique points.

If there is a second round for this prize before 2022, entries published after the end of the first round will be accepted so as not to generate an incentive to not post forecasting-related content until there is a prize.

Appendix: Quality Adjusted Research Papers

Judges also estimated the impact of these projects in terms of Quality Adjusted Research Papers (QARPs). QARPs are intended to have both relative value (a 20 QARP project should be estimated to be twice as valuable as a 10 QARP project) and absolute meaning (0.1 QARPs, or 100 mQARPs, should correspond to "a fairly valuable paper," such as this one).

The value judges assigned to each submission was:

Note that this method of rating is highly speculative, and having judges using it was in part intended as a test. Judges brought up that they weren’t sure that the scale was well defined, and that they were much more sure about their own relative values than about the absolute magnitude. Also, note that this didn't consider the relevance of forecasting, which is the main reason why these values don't perfectly correlate with the prizes.



Discuss

The Glory System: A Model For Moral Currency And Distributed Self-Moderation

February 19, 2021 - 19:42
Published on February 19, 2021 4:42 PM GMT

One of the most important changes in modern civilized society, relative to past historical tendencies, has been the state's monopoly on the use of violence.  Nearly everyone today, aside from the various strains of anarchist, agrees that this is a necessary and important requirement for modern life, but perhaps a problematic side effect of this change is that the use of force by individuals, as a negative reinforcement tool to shape behaviour, is lost.  Whereas in the distant past a slight against one's honour could be answered with the threat of a duel, today positive reinforcement through transfers of money is the main way human behaviour is shaped.

Markets rely on these positive reinforcement signals to aggregate information about people's preferences.  But in some sense, traditional currencies are one-sided.  In effect, you can only vote Yea or Abstain with your dollar.  What if you could also vote Nay?

Certainly the government can enforce fines as a way to provide negative reinforcement to bad market actors.  This punishment of negative externalities is one of the strongest justifications for government intervention.  But government interventions are often both heavy handed and slow to respond.  What if individuals could spend money to destroy someone else's money?

This somewhat radical and odd idea forms the basis of what I propose to be the glory system.  In the glory system, individuals can both transfer glory to others as a form of positive reinforcement, and spend glory to destroy other people's glory as a form of negative reinforcement.  It essentially functions as a system of distributed reward and punishment, as a true method of social credit.

Previously I had considered if there was a way to implement this idea in the form of a cryptocurrency.  But realistically, I have doubts that an actual currency that can disappear or be destroyed by others legally would have much interest as a store of value.  So what can this idea perhaps be more reasonably applied to?

One obvious comparison to be made with the glory system is the karma system used on Reddit and here on Less Wrong.  Both allow upvotes and downvotes, in a sense.  The glory system, implemented as a forum reputation system, would have some interesting characteristics.

It would allow users to bank upvotes and spend them later to add additional upvotes or downvotes, as much as they were willing to pay, beyond the first free vote.  Coupled with rules that automatically delete or hide posts that are downvoted to zero, and prevent users with zero or negative glory from posting, this can function as a kind of distributed self-moderation system.

It also provides a natural way to protect against spammers, by setting it so that new accounts have zero glory, and must be gifted glory by existing users to be able to post.  This would effectively function as a kind of application or invitation system, to ensure that new accounts were vouched for by existing members of the community.

Probably the most important feature, however, is that it allows not just the direction but also the magnitude of individuals' preferences to be signaled.  Thus, for instance, a contrarian who really cares about a post can spend glory to defend it and keep its rating above zero, in proportion to how much they are willing to pay.  Conversely, if someone sees a particularly objectionable post, they can immediately attempt to bury it.  In aggregate, the result functions like a market of ideas, with the potential for bidding wars over particularly controversial notions to play out over time.

A potential issue that could arise if the exchange rate of glory to votes is one-to-one, is that two accounts could farm glory by trading it back and forth indefinitely.  The simplest solution to this is to set the exchange rate to be 2-to-1 or greater.  This means you have to spend two glory to make one glory, which leads to diminishing returns.

An interesting side effect of the exchange rate is that defending your own posts with upvotes is cheaper than attacking other posts with downvotes.  This could mitigate the potential issues of harassment that plague other forum reputation systems.  Adjusting the exchange rate also determines the relative weighting of the old guard versus newbies and the crowd.  In essence, it allows fine-grained control over to what extent the system balances meritocracy and democracy.
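To make these mechanics concrete, here is a minimal Python sketch of the system described above. All class and method names are hypothetical; the exchange rate of 2 follows the "2-to-1 or greater" suggestion, and I assume each upvote received banks one glory for the post's author.

```python
EXCHANGE_RATE = 2  # glory spent per vote cast (beyond the first free vote)

class User:
    def __init__(self, glory=0):
        self.glory = glory  # new accounts start at zero and must be vouched for

    def gift(self, other, amount):
        # Positive reinforcement: transfer glory directly (also how existing
        # members vouch for new, zero-glory accounts).
        assert 0 < amount <= self.glory
        self.glory -= amount
        other.glory += amount

    def vote(self, post, amount, direction=+1):
        # Spend banked glory to push a post's score up (+1) or down (-1).
        assert 0 < amount <= self.glory
        self.glory -= amount
        votes = amount // EXCHANGE_RATE
        post.score += direction * votes
        if direction > 0:
            post.author.glory += votes  # upvotes bank glory for the author

class Post:
    def __init__(self, author):
        self.author = author
        self.score = 1  # the author's one free vote

    @property
    def visible(self):
        return self.score > 0  # posts downvoted to zero are hidden

# Farming is unprofitable: round-tripping 100 glory through votes returns only 50.
alice, bob = User(100), User(0)
post = Post(bob)
alice.vote(post, 100)  # alice burns 100 glory on upvotes...
print(bob.glory)       # prints 50; bob banks only half
```

Because every glory spent buys only half a glory's worth of votes, two colluding accounts lose half their stake on each round trip, which is the diminishing-returns property described above.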

I am tempted to try implementing this system.  Given that moderation on Less Wrong is already formidable and effective enough, I don't think it's necessary to implement it here, but as an interesting experiment, I'm considering forking the Less Wrong 2 codebase and building my own little test forum and deploying it somewhere and inviting friends to join.  Before I do so however, I want to put the idea out to the Less Wrong community to ask for opinions and feedback on the feasibility of the glory system, and any potential pitfalls or critiques I may have overlooked in my enthusiasm for the idea.



Discuss

Bedtime reminiscences

February 19, 2021 - 14:50
Published on February 19, 2021 11:50 AM GMT

Things that I imagine would be cool to do with my kids (if I manage to have some): taking bedtime as a moment to reminisce about the day together.

Recalling enjoyable moments is by itself enjoyable. So ask, what parts of the day did you like? What were some good moments? What about it was enjoyable?

At first, it’ll be just me mentioning things I noticed on that day: “You seemed to really like playing with those toys today.” “You looked happy being with uncle X.”

Hopefully soon the kids will notice that this is enjoyable and start paying more conscious attention to the good moments throughout the day – and start bringing up things on their own. (And feel like that was their own idea to also contribute, rather than it just being their dad telling them these things.)

Later, start also covering the moments when they were unhappy or upset. Are they feeling okay now, anything about it that they still want or need to discuss? Even if they’re fine now, make sure to take those moments and reframe them in an explicitly accepting light (all emotions are fine, including negative ones):

  • “That really was upsetting for you but now you’re okay, all bad feelings pass eventually.”
  • “You held yourself together back there even though you were really unhappy about it, that was great. You could have chosen not to even try, but you did do it.”
  • “We both got a little mad at each other earlier but that’s okay, kids need to be mad at dad sometimes and sometimes dad gets mad back. I try not to, but that’s on me, and I love you no matter what.”

Then maybe recall some happy moments that we discussed when it was bedtime on earlier days, and which they might have forgotten otherwise. Keep those unhappy moments firmly sandwiched between good ones.

Also tell them about all the moments today when they made me and mom happy and how we love them. Then a bedtime story and wishing good night.

Hopefully the conversations should keep getting more sophisticated as the kids get older. Get into topics like the value of negative emotions, and what unmet need their unhappiness in that moment was a signal of. Help them brainstorm ways they could meet that need better from now on.

Then one day when they’re adults, hopefully they’ll be so firmly in the habit of going through the good moments and the lessons-in-the-bad-moments that they won’t need me for it anymore, and it has just become automatic.

And if it hasn’t, that’s cool too. At least we had lots of good moments together doing it.

(Or if this whole thing always just seems uninteresting and dumb to them, then we’ll just have to come up with something completely different that they’ll like more.)



Discuss

Oliver Sipple

February 19, 2021 - 10:00
Published on February 19, 2021 7:00 AM GMT

The other day I read Wikipedia arguably too much, and consequently came to know the story of Oliver Sipple. Here’s my summary of the story according to these two Wikipedia pages and this page:

In the September of 1975, Oliver (‘Billy’) Sipple was an ex-marine of thirty-three, injured in Vietnam and living in San Francisco. He was in and out of the veterans’ hospital, six years into civilian life.

One afternoon, he stood in a crowd of thousands of people to see the visiting President Gerald Ford leave a San Francisco hotel from across the street. Ford stopped to wave. Suddenly, a shot sounded, and Oliver saw a woman nearby adjusting the aim of her revolver. He lunged and grabbed her arm, sending the second bullet into the hotel, injuring a man inside.

Oliver was thanked for saving the president, and celebrated as a hero by the media. A heroic veteran.

Soon the media learned that he was in fact a heroic gay veteran.

Oliver had shared his sexual orientation with the San Francisco gay community—or at least he had worked at a gay bar, paraded for gay pride, demonstrated for gay rights, helped in the (LGBT) Imperial Court System, and worked on the campaign to elect openly gay board of supervisors candidate Harvey Milk. But he hadn’t shared it with his family in Detroit, who had more old-fashioned impressions about the morality of homosexuality. He also hadn’t shared it with the world at large, who, after all, lived at a time when evidence of a gay person being a public hero was considered fascinating news.

How did the media learn about this? Perhaps there were many sources, or would have been eventually. But the morning after the shooting, two prominent gay activists each outed Oliver to the San Francisco Chronicle. One was Reverend Ray Broshears, leader of the ‘Lavender Panthers’. The other was Oliver’s own friend, Harvey Milk.


Harvey is reported to have explained privately to a friend, “It’s too good an opportunity. For once we can show that gays do heroic things, not just all that caca about molesting children and hanging out in bathrooms.”

The next day, Herb Caen, the San Francisco Chronicle reporter who received these messages, reported to the world that Oliver was gay. He added that Oliver was friends with Harvey Milk, and speculated that President Ford hadn’t invited him to the White House because of his sexual orientation.

Somewhere in here, Oliver asked that the media not report on the topic of his sexual orientation, lest his family or current employer learn of it. It’s not clear to me whether this was in time for them to definitively know that he didn’t want them to when they first did it, since apparently Caen ‘couldn’t contact him’.

At any rate, the topic was reported on thoroughly. Gay activists called for his recognition as a gay hero. He was deluged by reporters, and hid at a friend’s house, at which point they turned to interviewing Harvey Milk. Harvey opined that President Ford’s gratitude would indeed have flowed more generously had Oliver been straight.

Oliver’s mother was purportedly harassed by her neighbors, and declared her intent never to speak to him again. He was estranged from his family. His father at some point instructed his brother to forget that he had a brother.

Oliver sued the reporter Caen and numerous newspapers and publishers for the invasion of his privacy. The suit was dismissed, but he fought on. In 1984 a state court of appeals held that he had become news, and his sexual orientation was part of the story.

Oliver didn’t do well after becoming a hero. He drank heavily, was diagnosed with schizophrenia, put on weight, and needed a pacemaker. Over a drink, he was heard to say that he regretted grabbing the gun.

It is said that he eventually reconciled with his family, but it is also said that his father didn’t let him come to his mother’s funeral, so granting both stories it may have been a late or mild reconciliation.

One February day in 1989, Oliver’s friend found him dead in his San Francisco apartment, alongside a bottle of Jack Daniels and a running television. He was 47.

Years later, journalistic ethics professors found this an instructive class discussion topic.



Discuss

The Ultimatum Game

February 18, 2021 - 22:47
Published on February 18, 2021 6:36 PM GMT


The Ultimatum Game is a simple game in which two players attempt to split a $100 reward.  They can communicate with each other for 10 minutes, after which:

  • Player 1 proposes an integer split (e.g. $75 for Player 1, $25 for Player 2)
  • Player 2 may Accept or Reject this split.  If Player 2 rejects, both players receive nothing.  Otherwise, the money is distributed according to the agreed-upon split.

At first glance, the mathematical analysis is simple: Player 2 should always accept (since anything is better than nothing), so Player 1 should offer a 99-to-1 split to maximize their winnings.
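This naive analysis can be written as a tiny sketch (hypothetical function names; integer splits of the $100 pot, with a responder who simply maximizes money):

```python
def responder_accepts(offer_to_p2):
    # A naive money-maximizer: anything beats nothing.
    return offer_to_p2 > 0

def best_proposal(pot=100):
    # Player 1 keeps the most it can while Player 2 still accepts.
    for keep in range(pot, -1, -1):
        if responder_accepts(pot - keep):
            return keep, pot - keep

print(best_proposal())  # (99, 1)
```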

Much of the commentary around this game revolves around the fact that when you play this game with humans, Player 2's sense of "fairness" will cause them to "irrationally" reject sufficiently imbalanced splits.

But this post isn't about people's feelings.  It's about rational agents attempting to maximize wealth.  (I don't doubt that all these ideas have been discussed before, though in most LW posts I found with this game in it, the game itself is not discussed for more than a paragraph or two).

A Veto is a Powerful Bargaining Chip

If you're Player 2 and you want to walk away with more than $1, what do you do?

It's pretty simple, actually - all you need to do is to immediately communicate to Player 1 that you've sworn an Unbreakable Vow that you will reject anything other than a 99-1 split in your favor.  (Or, more practically, give Player 1 a cryptographic proof of a cryptographic contract that destroys $1000 if you accept anything other than 99-1.) And just like that, the tables are turned.  Player 1 now gets to decide between walking away with $1 or walking away with nothing.

This style of play involves reducing your options and committing to throwing away money in a wide variety of scenarios.  But against a Player 1 who's as naive as the original analysis's Player 2, it works.  It's the madman theory of geopolitics - sometimes the best move is to declare yourself crazy.

Examples

This game corresponds fairly directly to the idea of economic surplus: in a positive-sum transaction, both sides want the transaction to go through, but there remains the lingering question of how to split the surplus.

I unfortunately read the news a lot, so I see a lot of big companies and governments getting into fights with this shape. 

  • Apple vs Epic Games: Both sides make more money with Fortnite on the App Store.  But Apple wants a large percentage of the revenue, and Epic chose to reject their demands.
  • Australia vs Google: Google makes tons of money in Australia, and Australia would prefer not to have Google leave, but that may be what happens if Australia doesn't drop its proposed law that would take significant revenue away from Google.
  • California vs Uber and Lyft: Another story of brinkmanship, where Uber/Lyft would rather get 0 revenue from California than accept the California legislature's rules for paying drivers more.
  • If the back-and-forth-bluffing in this game reminds you of a government shutdown - well, it reminds me of that, too.  The majority party plays Player 1, the minority party with a filibuster plays Player 2.
  • Every time you swipe your credit card, the merchant pays the credit company around 3% of the purchase.  Visa and Mastercard set that fee to a level where nearly all merchants will agree to accept their cards.  American Express fees are higher, which is why their cards are accepted in fewer places.
  • Unions and companies - companies can set wages as they wish, but unions have a "reject" option, namely going on strike.
  • In everyday life, pretty much anything involving haggling (e.g. buying a house) will resemble this game.
Comparison to Prisoner's Dilemma

Let's imagine two types of mindsets: Compromiser and Hardliner.  The Compromiser will accept the "I get $1, you get $99" deal, "unfair" though it is.  The Hardliner will never accept or propose anything but "I get $99, you get $1".  If two hardliners play, they both get $0; if two compromisers play, they each get $50.

You can now make the standard 2x2 box in your head, and notice that there are 2 equilibria - CH and HC.  Compared to Prisoner's Dilemma, this game is really easy - 3 of the 4 boxes will end up maximizing total surplus, and the one that doesn't is not a stable equilibrium.
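As a sketch, that 2x2 box and its pure-strategy equilibria can be checked directly (payoffs are the ones from the text; the names are hypothetical):

```python
# Payoffs are (row player, column player) for Compromiser (C) vs Hardliner (H).
payoff = {
    ("C", "C"): (50, 50),  # two compromisers split evenly
    ("C", "H"): (1, 99),   # the compromiser caves to the hardliner
    ("H", "C"): (99, 1),
    ("H", "H"): (0, 0),    # two hardliners get nothing
}

def is_nash(r, c):
    # Neither player gains by unilaterally switching mindsets.
    other = {"C": "H", "H": "C"}
    return (payoff[(r, c)][0] >= payoff[(other[r], c)][0]
            and payoff[(r, c)][1] >= payoff[(r, other[c])][1])

equilibria = [(r, c) for r in "CH" for c in "CH" if is_nash(r, c)]
print(equilibria)  # [('C', 'H'), ('H', 'C')]
```

Note that HH fails the equilibrium check because either hardliner would rather cave for $1, which is why it is not a stable outcome.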

In terms of "moral takeaways", Prisoner's Dilemma has a vibe of, "if you have two people who can keep their promises, they'll do well for themselves in the world."  This game's takeaway is a bit more complicated: "Take a hard line and stand up for yourself, otherwise the world will pass you by.  But don't go too far beyond what's fair."

Adding iterations to this game is interesting:

  • "Extended haggling": If Player 2 rejects the offer, the pot shrinks by $2 to $98, but Player 1 can now make a new offer.  If you run standard game theory on this, using backwards induction, you get that "optimal play" is for Player 1 to offer a 50-50 split in the first round, and for Player 2 to accept.  (When the pot is $2, it'll be a $1-$1 split; knowing this, Player 1 should offer a $2-$2 split when the pot is $4, knowing that if they offered $3-$1 that Player 2 has no incentive to accept.  And so on up to $100.)
    • This is an emotionally pleasing result, but note that "take it or leave it" strategies still apply just as well here, where Player 2 threatens to reject all future offers, or Player 1 threatens to make all future offers have $0 for Player 2.
  • "Chicken": If Player 2 rejects the offer, both players lose $1, but the pot stays at $100.  If one player knows they have more stomach for losses than the other player, then driving a hard bargain is the right strategy.  But if both think this, they can manage to end up losing money, quite unnecessarily.
  • "Representative democracy": The game is played by two elected officials representing two tribes.  After each game, both sides hold an election.  If a politician ever agrees to a 99-1 split, they know for sure they'll lose their election to someone who promises they can bring home 20 or 30.  As time goes on, office-seekers start to promise 60 or 70 in order to win, and then by the laws of math, one side only brings home 50, a "betrayal".  Both sides elect hardliners, no deal is made, the people suffer.  Eventually, the sentiment turns against the hardliners ("are you better off now than you were a few games ago?") and bipartisan compromise is restored.
    • I wonder how this system behaves under simple models of the electorate (how much they weight promises vs results).
    • It's interesting that this feels roughly equivalent to the original game (it's basically a game between the two tribes), but the dynamics seem different.
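The backwards-induction claim in the "extended haggling" bullet can be checked with a short recursion. A minimal sketch, assuming whole-dollar offers and that Player 2 accepts only offers strictly better than their continuation value:

```python
def equilibrium_split(pot, shrink=2):
    """Backward induction for the shrinking-pot haggling game.

    Returns (player1_share, player2_share) when the pot is `pot`,
    assuming Player 2 accepts only offers strictly better than what
    they'd get by rejecting and continuing.
    """
    if pot <= 0:
        return (0, 0)
    # Player 2's continuation value: their equilibrium share after one rejection.
    _, continuation = equilibrium_split(pot - shrink, shrink)
    p2_share = continuation + 1  # the cheapest strictly-better offer
    return (pot - p2_share, p2_share)
```

Running `equilibrium_split(100)` reproduces the 50-50 first-round split described above.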

From a total-surplus perspective, the relative "easiness" of this game is encouraging - it's a good thing that these dynamics, not those of prisoner's dilemma, are the ones that govern every supply chain, joint venture, and partnership agreement.

One last note: There's a fun way to combine this and prisoner's dilemma: namely, having there be 2 people who make an offer (but still 1 person who accepts/rejects).

The game theory now tells you that both people making the offer should offer a $1-$99 split, otherwise the other person will undercut them.  If we imagine the offer-makers as companies and the offer-taker as the consumer, we've gone from total monopoly to total competition.  In total competition, the companies capture almost none of the value they generate, and it almost all goes to the consumer - the miracle of capitalism.
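The undercutting dynamic can be illustrated with a toy best-response loop. A sketch, where each offer-maker's "share" is what it keeps for itself and the consumer takes whichever offer keeps less:

```python
def undercut_until_stable(start=99, floor=1):
    """Two offer-makers alternately undercut each other by $1.

    The consumer picks the offer whose maker keeps less, so each
    maker's best response is to keep $1 less than the rival, down
    to a floor of $1. Returns the stable (share_a, share_b).
    """
    a = b = start
    while a > floor or b > floor:
        a = max(floor, b - 1)  # A undercuts B
        b = max(floor, a - 1)  # B undercuts A
    return a, b
```

Both makers end up keeping $1, with the consumer capturing the rest of the pot.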

In this scenario, the two companies should want to collude with each other or merge with each other.  The Prisoner's Dilemma situation makes the former difficult, and antitrust law interferes with the latter.

________

Here's a claim to close this piece: People overuse Prisoner's Dilemma as a mental model, when they should be using something more along these lines.

  • This game comes from economics, not the criminal justice system
  • It's parameter-free (whereas PD has an oft-overlooked parameter: the ratio of how much extra you get from DD over CD, versus how much extra you get from CC over DD).  Probably related: You never have to draw a payout grid for this one, because it's more intuitive.
  • There's a continuum of strategies ("insist on 60-40" vs "insist on 99-1"), providing nice mathematical ways of seeing gradual societal change evolve over time.
    • Strategies in the "extended haggling" version are even more interesting: How long do you hold out before caving?  Do you gradually relax your demands?
  • It's interesting both with and without repetitions - and the repetitions make it more likely for the "no deal" outcome to show up as a negotiating tactic.  Contrast with PD where repetition is a key mechanism for salvaging a fairly hopeless situation.

The almost-but-not-quite symmetry of it is a bit awkward - I wonder if the "extended haggling" version above basically resolves that issue.  (The game theoretic prediction of the original case does come true sometimes - witness the $2/share Bear Stearns deal - and it's usually because there is no more time for negotiation.)



Discuss

Utility Maximization = Description Length Minimization

18 February 2021 - 21:04
Published on February 18, 2021 6:04 PM GMT

There’s a useful intuitive notion of “optimization” as pushing the world into a small set of states, starting from any of a large number of states. Visually:

Yudkowsky and Flint both have notable formalizations of this “optimization as compression” idea.

This post presents a formalization of optimization-as-compression grounded in information theory. Specifically: to “optimize” a system is to reduce the number of bits required to represent the system state using a particular encoding. In other words, “optimizing” a system means making it compressible (in the information-theoretic sense) by a particular model.

This formalization turns out to be equivalent to expected utility maximization, and allows us to interpret any expected utility maximizer as “trying to make the world look like a particular model”.

Conceptual Example: Building A House

Before diving into the formalism, we’ll walk through a conceptual example, taken directly from Flint’s Ground of Optimization: building a house. Here’s Flint’s diagram:

The key idea here is that there’s a wide variety of initial states (piles of lumber, etc) which all end up in the same target configuration set (finished house). The “perturbation” indicates that the initial state could change to some other state - e.g. someone could move all the lumber ten feet to the left - and we’d still end up with the house.

In terms of information-theoretic compression: we could imagine a model which says there is probably a house. Efficiently encoding samples from this model will mean using shorter bit-strings for world-states with a house, and longer bit-strings for world-states without a house. World-states with piles of lumber will therefore generally require more bits than world-states with a house. By turning the piles of lumber into a house, we reduce the number of bits required to represent the world-state using this particular encoding/model.
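As a toy numerical version of this (with made-up probabilities): a model that considers houses likely assigns house-states short codes.

```python
import math

# Hypothetical model: "there is probably a house here."
model = {"finished house": 0.9, "pile of lumber": 0.1}

# Optimal code length for each state, in bits: -log2 P[state].
code_bits = {state: -math.log2(p) for state, p in model.items()}

# A house costs ~0.15 bits to encode; lumber costs ~3.3 bits.
# Turning lumber into a house shortens the world's description.
```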

If that seems kind of trivial and obvious, then you’ve probably understood the idea; later sections will talk about how it ties into other things. If not, then the next section is probably for you.

Background Concepts From Information Theory

The basic motivating idea of information theory is that we can represent information using fewer bits, on average, if we use shorter representations for states which occur more often. For instance, Morse code uses only a single bit (“.”) to represent the letter “e”, but four bits (“- - . -”) to represent “q”. This creates a strong connection between probabilistic models/distributions and optimal codes: a code which requires minimal average bits for one distribution (e.g. with lots of e’s and few q’s) will not be optimal for another distribution (e.g. with few e’s and lots of q’s).

For any random variable X generated by a probabilistic model M, we can compute the minimum average number of bits required to represent X. This is Shannon’s famous entropy formula

$$-\sum_X P[X|M] \log P[X|M]$$

Assuming we’re using an optimal encoding for model M, the number of bits used to encode a particular value x is $-\log P[X=x|M]$. (Note that this is sometimes not an integer! Today we have algorithms which encode many samples at once, potentially even from different models/distributions, to achieve asymptotically minimal bit-usage. The “rounding error” only happens once for the whole collection of samples, so as the number of samples grows, the rounding error per sample goes to zero.)
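Shannon's formula is a one-liner to compute over a finite distribution (a minimal illustration):

```python
import math

def entropy_bits(dist):
    """Minimum average bits to encode samples from `dist` (a dict
    mapping outcomes to probabilities)."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

fair_coin = {"H": 0.5, "T": 0.5}
biased_coin = {"H": 0.9, "T": 0.1}
# A fair coin needs a full bit per flip; a biased coin needs less,
# because an optimal code gives the common outcome a shorter string.
```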

Of course, we could be wrong about the distribution - we could use a code optimized for a model M2 which is different from the “true” model M1. In this case, the average number of bits used will be

$$-\sum_X P[X|M_1] \log P[X|M_2] = E[-\log P[X|M_2] \mid M_1]$$

In this post, we’ll use a “wrong” model M2 intentionally - not because we believe it will yield short encodings, but because we want to push the world into states with short M2-encodings. The model M2 serves a role analogous to a utility function. Indeed, we’ll see later on that every model M2 is equivalent to a utility function, and vice-versa.
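The "wrong model" cost can be computed the same way; the gap relative to the true entropy is the price of the mismatch (a sketch over finite distributions):

```python
import math

def cross_entropy_bits(true_dist, code_dist):
    """Average bits to encode samples from `true_dist` using a code
    optimal for `code_dist`."""
    return -sum(p * math.log2(code_dist[x])
                for x, p in true_dist.items() if p > 0)

m1 = {"e": 0.7, "q": 0.3}   # the "true" distribution
m2 = {"e": 0.3, "q": 0.7}   # the model the code was optimized for
# Encoding m1-samples with an m2-optimal code wastes bits on average.
```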

Formal Statement

Here are the variables involved in “optimization”:

  • World-state random variables X
  • Parameters θ,θ′ which will be optimized
  • Probabilistic world-model M1(θ) representing the distribution of X
  • Probabilistic world-model M2 representing the encoding in which we wish to make X more compressible

An “optimizer” takes in some parameter-values θ, and returns new parameter-values θ′ such that

$$E[-\log P[X|M_2] \mid M_1(\theta')] \le E[-\log P[X|M_2] \mid M_1(\theta)]$$

… with equality if-and-only-if θ already achieves the smallest possible value. In English: we choose θ′ to reduce the average number of bits required to encode a sample from M1(θ′), using a code optimal for M2. This is essentially just our formula from the previous section for the number of bits used to encode a sample from M1 using a code optimal for M2.

Other than the information-theory parts, the main thing to emphasize is that we’re mapping one parameter-value θ to a “more optimal” parameter-value θ′. This should work for many different “initial” θ-values, implying a kind of robustness to changes in θ. (This is roughly the same concept which Flint captured by talking about “perturbations” to the system-state.) In the context of iterative optimizers, our definition corresponds to one step of optimization; we could of course feed θ′ back into the optimizer and repeat. We could even do this without having any distinguished “optimizer” subsystem - e.g. we might just have some dynamical system in which θ is a function of time, and successive values of θt satisfy the inequality condition.

Finally, note that our model M1 is a function of θ. This form is general enough to encompass all the usual decision theories. For instance, under EDT, M1(θ) would be some base model M conditioned on the data θ. Under CDT, M1(θ) would instead be a causal intervention on a base model M, i.e. M1(θ)=do(M,Θ=θ).
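A minimal concrete instance of this definition (all distributions made up for illustration): θ selects an action, M1(θ) is the resulting distribution over world-states, and the optimizer returns a θ′ whose world is cheapest to encode under M2.

```python
import math

M2 = {"house": 0.9, "lumber": 0.1}      # target encoding/model
M1 = {                                   # M1(theta): world-state distribution
    "do_nothing": {"house": 0.1, "lumber": 0.9},
    "build":      {"house": 0.8, "lumber": 0.2},
}

def expected_bits(theta):
    """E[-log P[X|M2] | M1(theta)]: average code length under M2."""
    return -sum(p * math.log2(M2[x]) for x, p in M1[theta].items())

def optimize(theta):
    """One optimization step: returns a theta' whose expected code
    length is no larger than theta's (here, the global argmin)."""
    return min(M1, key=expected_bits)
```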

Equivalence to Expected Utility Optimization

Obviously our expression E[−logP[X|M2]|M1(θ)] can be expressed as an expected utility: just set u(X)=logP[X|M2]. The slightly more interesting claim is that we can always go the other way: for any utility function u(X), there is a corresponding model M2, such that maximizing expected utility u(X) is equivalent to minimizing expected bits to encode X using M2.

The main trick here is that we can always add a constant to u(X), or multiply u(X) by a positive constant, and it will still “be the same utility” - i.e. an agent with the new utility will always make the same choices as the old. So, we set

$$\alpha u(X) + \beta = \log P[X|M_2] \implies P[X|M_2] = e^{\beta} e^{\alpha u(X)}$$

… and look for α,β which give us a valid probability distribution (i.e. all probabilities are nonnegative and sum to 1).

Since everything is in an exponent, all our probabilities will be nonnegative for any α,β, so that constraint is trivially satisfied. To make the distribution sum to one, we simply set $\beta = -\ln \sum_X e^{\alpha u(X)}$. So, not only can we find a model M2 for any u(X), we actually find a whole family of them - one for each α > 0.

(This also reveals a degree of freedom in our original definition: we can always create a new model M2′ with $P[X|M_2'] = \frac{1}{Z} P[X|M_2]^{\alpha}$ without changing the behavior.)
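The construction is just a softmax over states. Concretely, a sketch for a finite state space (names hypothetical):

```python
import math

def utility_to_model(u, alpha=1.0):
    """Turn a utility function (dict state -> utility) into an
    equivalent model M2 with P[x|M2] proportional to exp(alpha*u(x)),
    for any alpha > 0."""
    weights = {x: math.exp(alpha * ux) for x, ux in u.items()}
    Z = sum(weights.values())  # normalizer; implements beta = -ln(sum)
    return {x: w / Z for x, w in weights.items()}
```

Higher-utility states get higher probability, hence shorter codes, so pushing the world toward them shortens its description.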

So What Does This Buy Us?

If this formulation is equivalent to expected utility maximization, why view it this way?

Intuitively, this view gives more semantics to our “utility functions”. They have built-in “meanings”; they’re not just preference orderings.

Mathematically, the immediately obvious step for anyone with an information theory background is to write:

$$E[-\log P[X|M_2] \mid M_1] = -\sum_X P[X|M_1]\log P[X|M_1] + \sum_X P[X|M_1]\log \frac{P[X|M_1]}{P[X|M_2]}$$

$$= H(X|M_1) + D_{KL}(M_1.X \,\|\, M_2.X)$$

The expected number of bits required to encode X using M2 is the entropy of X plus the Kullback-Leibler divergence of the true distribution of X (under model M1) from the encoding distribution (under model M2). Both of those terms are nonnegative. The first measures “how noisy” X is; the second measures how far apart the two models’ distributions are.

Intuitively, this math says that we can decompose the objective E[−logP[X|M2]|M1] into two pieces:

  • Make X more predictable
  • Make the distribution of X “close to” the distribution P[X|M2], with closeness measured by KL-divergence

Combined with the previous section: we can take any expected utility maximization problem, and decompose it into an entropy minimization term plus a “make-the-world-look-like-this-specific-model” term.
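The decomposition is easy to verify numerically over made-up finite distributions:

```python
import math

def entropy(p):
    return -sum(px * math.log2(px) for px in p.values() if px > 0)

def kl(p, q):
    return sum(px * math.log2(px / q[x]) for x, px in p.items() if px > 0)

def cross_entropy(p, q):
    return -sum(px * math.log2(q[x]) for x, px in p.items() if px > 0)

p = {"a": 0.6, "b": 0.4}   # distribution of X under M1
q = {"a": 0.9, "b": 0.1}   # distribution of X under M2
# cross_entropy(p, q) == entropy(p) + kl(p, q), up to float error:
# the expected-utility objective splits into a noise term and a
# "match the target model" term.
```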

This becomes especially interesting in situations where the entropy of X cannot be reduced - e.g. thermodynamics. If the entropy H(X) is fixed, then only the KL-divergence term remains. In this case, we can directly interpret the optimization problem as “make the world-state distribution look like P[X|M2]”. If we started from an expected utility optimization problem, then we derive a model M2 such that optimizing expected utility is equivalent to making the world look as much as possible like M2.

In fact, even when H(X) is not fixed, we can build equivalent models M′1,M′2 for which it is fixed, by adding new variables to X. Suppose, for example, that we can choose between flipping a coin and rolling a die to determine X0. We can change the model so that both the coin flip and the die roll always happen, and we include their outcomes in X. We then choose whether to set X0 equal to the coin flip result or the die roll result, but in either case the entropy of X is the same, since both are included. M′2 simply ignores all the new components added to X (i.e. it implicitly has a uniform distribution on the new components).

So, starting from an expected utility maximization problem, we can transform to an equivalent minimum coded bits problem, and from there to an equivalent minimum KL-divergence problem. We can then interpret the optimization as “choose θ to make M1(θ) as close as possible to M2”, with closeness measured by KL-divergence.

What I Imagine This Might Be Useful For

In general, interpretations of probability grounded in information theory are much more solid than interpretations grounded in coherence theorems. However, information-theoretic groundings only talk about probability, not about "goals" or "agents" or anything utility-like. Here, we've transformed expected utility maximization into something explicitly information-theoretic and conceptually natural.  This seems like a potentially-promising step toward better foundations of agency. I imagine there's probably purely-information-theoretic "coherence theorems" to be found.

Another natural direction to take this in is thermodynamic connections, e.g. combining it with a generalized heat engine. I wouldn't be surprised if this also tied in with information-theoretic "coherence theorems" - in particular, I imagine that negentropy could serve as a universal "resource", replacing the "dollars" typically used as a measuring stick in coherence theorems.

Overall, the whole formulation smells like it could provide foundations much more amenable to embedded agency.

Finally, there's probably some nice connection to predictive processing. In all likelihood, Karl Friston has already said all this, but it has yet to be distilled and disseminated to the rest of us.




Formal Solution to the Inner Alignment Problem

18 February 2021 - 17:51
Published on February 18, 2021 2:51 PM GMT

We've written a paper on online imitation learning, and our construction allows us to bound the extent to which mesa-optimizers could accomplish anything. This is not to say it will definitely be easy to eliminate mesa-optimizers in practice, but investigations into how to do so could look here as a starting point. The way to avoid outputting predictions that may have been corrupted by a mesa-optimizer is to ask for help when plausible stochastic models disagree about probabilities.

Here is the abstract:

In imitation learning, imitators and demonstrators are policies for picking actions given past interactions with the environment. If we run an imitator, we probably want events to unfold similarly to the way they would have if the demonstrator had been acting the whole time. No existing work provides formal guidance in how this might be accomplished, instead restricting focus to environments that restart, making learning unusually easy, and conveniently limiting the significance of any mistake. We address a fully general setting, in which the (stochastic) environment and demonstrator never reset, not even for training purposes. Our new conservative Bayesian imitation learner underestimates the probabilities of each available action, and queries for more data with the remaining probability. Our main result: if an event would have been unlikely had the demonstrator acted the whole time, that event's likelihood can be bounded above when running the (initially totally ignorant) imitator instead. Meanwhile, queries to the demonstrator rapidly diminish in frequency.

The second-last sentence refers to the bound on what a mesa-optimizer could accomplish. We assume a realizable setting (positive prior weight on the true demonstrator-model). There are none of the usual embedding problems here—the imitator can just be bigger than the demonstrator that it's modeling.
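As a rough sketch of the "ask for help when plausible models disagree" idea (my own toy rendering; the function name, plausibility cutoff, and two-model setup are invented and not the paper's actual construction): act with the minimum probability that any plausible hypothesis assigns to each action, and query the demonstrator with the leftover probability mass.

```python
# Toy "conservative imitator": underestimate each action's probability by
# taking the minimum over plausible models, and query with the remainder.
import numpy as np

def conservative_policy(models, posterior, alpha=0.1):
    """models: list of action-probability vectors, one per hypothesis.
    posterior: weight on each hypothesis. alpha: plausibility cutoff."""
    plausible = [m for m, w in zip(models, posterior) if w >= alpha]
    act_probs = np.min(np.stack(plausible), axis=0)  # underestimate each action
    query_prob = 1.0 - act_probs.sum()               # disagreement mass
    return act_probs, query_prob

# Two plausible models of the demonstrator that disagree about action 2:
models = [np.array([0.9, 0.1]), np.array([0.6, 0.4])]
posterior = [0.5, 0.5]
probs, q = conservative_policy(models, posterior)
print(probs, q)  # prints [0.6 0.1] 0.3 -- disagreement becomes query mass
```

As the posterior concentrates on the true model, the plausible set shrinks and the query probability falls, matching the abstract's claim that queries rapidly diminish.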

(As a side note, even if the imitator had to model the whole world, it wouldn't be a big problem theoretically. If the walls of the computer don't in fact break during the operation of the agent, then "the actual world" and "the actual world outside the computer conditioned on the walls of the computer not breaking" both have equal claim to being "the true world-model", in the formal sense that is relevant to a Bayesian agent. And the latter formulation doesn't require the agent to fit inside the world that it's modeling.)

Almost no mathematical background is required to follow the proofs. We feel our bounds could be made much tighter, and we'd love help investigating that. 

These slides (pdf here) are fairly self-contained and a quicker read than the paper itself.

Below, P_dem and P_im refer to the probability of the event supposing the demonstrator or imitator were acting the entire time. The limit below refers to successively more unlikely events B; it's not a limit over time. Imagine a sequence of events B_i such that lim_{i→∞} P_dem(B_i) = 0.



Discuss

Covid 2/18: Vaccines Still Work

18 февраля, 2021 - 15:30
Published on February 18, 2021 12:30 PM GMT

This week the CDC released new guidelines for schools. I’ve spun my analysis of that off into its own post. Scott Alexander also shared some good thoughts on Covid-19 in two new posts, and I discuss both of those, and how our models and predictions differ.

Also, as some combination of retaliation and its continued crusade against the evils of the tech industry and the notion of freedom of speech, the New York Times finally published its hit piece on Scott Alexander. I devote a brief section to it, the upside of which is that the paper is now permanently banned from this space. 

Oh, right, you came here to learn about Covid-19 news. The short news is great. Infections and deaths are down much faster than anticipated. The vaccine news is less great. The process continues to accelerate, but that acceleration was slower than expected, and there’s still lots of FUD about what vaccinated people can and can’t ‘safely’ do and little movement yet towards saner policy in other ways, although there are signs of moving in that direction. The new strains aren’t here in force yet, but it seems likely that will happen soon.

One potentially big thing we learned is the study from infected NBA players, which suggests that the English strain causes infections to play out over a longer period of time than the classic strain, with all the implications of that.

Another was that it got even clearer than it already was that the Pfizer and Moderna vaccines are highly effective after only one dose, and that First Doses First would be a vast improvement while we continue to have much less vaccine than we need.

The Numbers

Predictions

Results last week: 6.4% positive rate on 11.3 million tests, and an average of 2,752 deaths after adjusting for Indiana.

Prediction: 5.7% positive rate and an average of 2,450 deaths. Things should still continue to improve.

Result: 5.2% positive test rate on 10.4 million tests, and an average of 2,089 deaths. This is after removing Ohio’s adjustments that add additional deaths from previous months, similar to Indiana’s adjustments from last week. 

I’m straight up adjusting the results rather than listing both answers, because the adjustment makes my prediction less accurate. When adjustments like this make predictions more accurate, it seems necessary to list both results to keep oneself honest.

This is an extraordinarily good result. After several weeks of refusing to budge, deaths plummeted, and the drop in infections continues to be rapid. With a 20%+ week over week decline, we still aren’t fully ready for the English Strain, but it’s a great start and buys us time. 

This will be the first week that I am adjusting my prediction a bit to account for risk that the English Strain is starting to take over, but it won’t happen all at once. It would be very surprising not to see a continued drop.

Prediction for next week: 4.6% positive test rate and an average of 1,800 deaths.
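As a sanity check, the week-over-week declines implied by the numbers above:

```python
# Week-over-week declines from last week's results to this week's.
prev_deaths, this_deaths = 2752, 2089   # average daily deaths
prev_rate, this_rate = 6.4, 5.2         # positive test rate, percent

death_decline = (prev_deaths - this_deaths) / prev_deaths
rate_decline = (prev_rate - this_rate) / prev_rate
print(f"deaths down {death_decline:.0%}, positive rate down {rate_decline:.0%}")
# prints: deaths down 24%, positive rate down 19%
```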

Deaths

Date           WEST   MIDWEST   SOUTH   NORTHEAST
Dec 17-Dec 23  3826   5158      5131    3772
Dec 24-Dec 30  3363   3668      4171    3640
Dec 31-Jan 6   4553   4127      5019    4162
Jan 7-Jan 13   6280   3963      7383    4752
Jan 14-Jan 20  5249   3386      7207    4370
Jan 21-Jan 27  6281   3217      8151    4222
Jan 28-Feb 3   5524   3078      8071    3410
Feb 4-Feb 10   4937   2687      7165    3429
Feb 11-Feb 17  3837   2221      5239    2700

Once we take out the adjustment in Ohio, we see steady advancement across the board. 

Positive Tests

Date           WEST     MIDWEST  SOUTH    NORTHEAST
Jan 7-Jan 13   474,002  262,520  531,046  306,604
Jan 14-Jan 20  360,874  185,412  452,092  250,439
Jan 21-Jan 27  260,180  158,737  386,725  219,817
Jan 28-Feb 3   191,804  122,259  352,018  174,569
Feb 4-Feb 10   144,902  99,451   255,256  149,063
Feb 11-Feb 17  97,894   73,713   185,765  125,773

Once again improvement is steady all around, although slower in the Northeast and especially slower in New York, which I’m personally not too thrilled about. 

Test Counts

Date           USA tests   Positive %  NY tests   Positive %  Cumulative Positives
Dec 24-Dec 30  11,300,924  11.2%       1,303,286  6.0%        5.98%
Dec 31-Jan 6   11,649,640  13.3%       1,365,473  7.3%        6.45%
Jan 7-Jan 13   13,911,529  12.2%       1,697,034  6.6%        6.97%
Jan 14-Jan 20  14,005,720  9.7%        1,721,440  5.9%        7.39%
Jan 21-Jan 27  12,801,271  8.8%        1,679,399  5.3%        7.73%
Jan 28-Feb 3   12,257,123  7.7%        1,557,550  4.6%        8.02%
Feb 4-Feb 10   11,376,541  6.4%        1,473,454  4.1%        8.25%
Feb 11-Feb 17  10,404,504  5.2%        1,552,555  3.5%        8.41%

As a point of comparison, these rates are similar to what we saw late in October. 

Also worth noting that we’ve never seen a rolling 7-day average below 4%, so we could get that to an all-time low within the next two weeks. Exciting stuff.

Vaccinations

This was a deeply disappointing week on the vaccine front, with the 7-day average declining over the last few days, and on net almost no progress week over week. Our rate of progress has stalled out. Weather and the holiday I am sure contributed to this, and increased supply is still in our future, but it’s still disheartening. 

Europe

Overall news is good, but nowhere near as good as in America.

Covid Machine Learning Project

As with other sources, the infection numbers look good but the vaccine numbers are disappointing. He has us at 27.7% infected as of February 4, versus 27.2% a week earlier. 

The English Strain

The NBA once again delivers us reliable data on the things we care about.

Clustering means that this can’t be presumed to be a random sample, but 7/65 for December and January is such bad news it could be seen as potentially turning the corner into becoming good news again. If we were at 10%+ new strain over that period, then that means at least 20% by the end of January, and it’s now two weeks later than that which should take us at least close to 50%. 

If we interpret that as the new strain about to become impactful and take over, it’s bad news. But, if we interpret it as the new strain is already here in force then it gets more ambiguous and could even be good news! If the new strain is 25%, or even 50%, of new cases, and cases are still going down, then that’s amazing. It means we’ve somehow managed to do so much additional prevention work that we’re still making rapid progress. So that would be pretty great. 
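The extrapolation in the two paragraphs above can be sketched as logistic growth of the variant's share. The doubling cadence below is my assumption for illustration, not a figure from the post: if the variant's odds double every fixed interval, a 10% share climbs to roughly 18%, 31%, and 47% after one, two, and three doublings.

```python
# Logistic share growth: a variant's *odds* (share / (1 - share)) grow
# geometrically while its growth advantage holds.
def variant_share(initial_share, doublings):
    odds = initial_share / (1 - initial_share)
    odds *= 2 ** doublings
    return odds / (1 + odds)

for n in (1, 2, 3):
    print(f"after {n} doubling(s): {variant_share(0.10, n):.0%}")
# prints: 18%, 31%, 47%
```

The key qualitative point survives any particular cadence: shares compound in odds space, so a strain at 10% is much closer to dominance than linear intuition suggests.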

The timing of the transition between strains, however, wasn't what was being studied here at all. Here's the paper. Instead, the paper is studying the timing for when people are infectious with each strain, using the frequent NBA testing to generate a robust data set. That's pretty great, since it informs all sorts of decision making.

These charts seem super useful, with red being the English strain and blue being classic:

Note, before we go any further, that these are in some ways highly atypical cases. NBA players are not representative of the population, and these could be highly correlated in various ways. It’s entirely possible that these results don’t mean much because of those issues, and it would be super nice if we could run similar tests on a population that wasn’t so exceptional and unusual.

Together, these charts tell a story of the typical infected person with the classic strain being acutely infectious for a brief period, whereas the new strain makes them sick and infectious over a longer period. This would then presumably also be the reason the new strain is deadlier, although I don’t have a good understanding of why it has that effect and wouldn’t have been surprised if this pattern had gone along with a less deadly strain rather than a more deadly strain. 

This means that the infection cycle (the serial interval) of the English strain is longer than that of the old strain. That means that it will grow slower, relative to its increased infectiousness, than the old strain. If we weren’t doing any mitigation at all, it’s possible that the English strain could have a higher R0 than the old one yet end up not becoming (at least at first) the dominant strain, because the old one would grow faster. This means that we should expect the English strain to arrive in numbers somewhat slower than its level of infectiousness would otherwise indicate, buying us more time to prepare, but also means that if the strain arrives in force at a given time, we should estimate it being a bigger jump in infectiousness than we would have previously since it will have had fewer cycles in which to get that far.
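The growth-rate point can be made concrete with invented numbers (the R values and serial intervals below are mine, chosen purely for illustration): to a first approximation the per-day exponential growth rate is ln(R) divided by the serial interval, so a strain with a higher R but a longer infection cycle can still grow more slowly per day.

```python
# A strain with higher R can grow slower per day if each cycle takes longer:
# growth rate ~ ln(R) / serial interval.
import math

def daily_growth_rate(R, serial_interval_days):
    return math.log(R) / serial_interval_days

old = daily_growth_rate(R=1.2, serial_interval_days=4.0)
new = daily_growth_rate(R=1.3, serial_interval_days=6.0)
print(old, new)  # the old strain grows faster despite the lower R
```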

This also means that if the English strain is a major factor, and this data is accurate, quarantine periods will have to get longer. Whatever was being done before won’t be enough, and anything that would be enough now was previously overkill. 

There also won’t be one number to represent the increase in infectiousness, because the relative R0 will differ based on people’s behaviors. 

Due to the origin of the sample take all this with a bunch of salt, but it’s a bunch of interesting updates. 

The South African Strain

It is clear that antibodies from our current vaccines are less effective against the South African strain. The question is how much less effective, and what that translates to in practice in terms of protection. When one shows antibody response levels, it is traditional for a reason to use a logarithmic chart. The immune system doesn't produce exactly enough antibodies; it does its best to massively overkill when that makes sense.

Also, there’s the continuous risk that one will confuse the different measures of ‘effectiveness’ and reach a nonsense conclusion by doing math that isn’t valid. I worry about the interpretation of things like this but trust my readers to handle it: 

There are plenty of people who read that and interpret it very incorrectly as ‘whelp, we were 95% protected, but this is two thirds less protection, so that’s about 1/3 protected.’ That would be rather alarming, but it’s not the case. Cutting antibody effectiveness by 2/3rds is going to reduce vaccine protection, and the ‘no evidence’ statement here is obvious nonsense the way such statements usually are, but we also have no reason to expect a 2/3rds drop in antibody protection levels to translate to anything like a 2/3rds drop in how effective the vaccine will be at preventing infection or death. 
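The titer numbers below are invented purely to show why the "two-thirds less antibodies, therefore two-thirds less protection" arithmetic fails: if vaccinated antibody levels start far above a protective threshold, a three-fold drop can leave you comfortably above it.

```python
# Invented, illustrative numbers -- not measured titers.
threshold = 1.0           # hypothetical protective titer (arbitrary units)
vaccinated_titer = 100.0  # hypothetical post-vaccine titer: 100x threshold
after_drop = vaccinated_titer / 3  # "two-thirds less" antibody level
print(after_drop / threshold)  # still ~33x the protective threshold
```

This is why antibody charts are drawn on a log scale: what matters is the margin above the threshold, not the raw fold-change.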

Here’s the chart from the preliminary report, click through to get their explanations:

#YouHadOneJob (to Appoint Someone To Fill)

President Biden continues not to have named his new head of the FDA (WaPo). The two top candidates continue to be Janet Woodcock and Josh Sharfstein.

It seems from a brief investigation that Sharfstein’s primary mission would be to stop evil drug companies from potentially earning a profit. Such villains would need to be stopped immediately. On the plus side, he has earned the ire of anti-vaccine groups, and that has to count for something.

Janet Woodcock is the current acting commissioner, and thus directly responsible for our failure to approve vaccines. You gotta love this being front and center on her Wikipedia page: 

In the context of opioids that certainly does not seem great, but we do need that same energy now in other matters, and also there’s this:

Not that it’s working all that well, or anything, based on what we observe, but that doesn’t mean it couldn’t be much worse. At least Janet Woodcock is moving on the margin to produce more innovation and get things to move faster, as opposed to stopping things and slowing them down. So if those are the only choices, I’d have no choice but to go with Janet Woodcock.

Vaccines Work

(And yes, you should know this already.)

Somehow, we constantly get articles and new studies that are framed as surprising, saying that yes, the vaccines that we know work, work. 

And thus, ‘No evidence’ for the obvious conclusion all the Bayesians reached weeks or months ago becomes evidence that counts in newspapers and journals. 

Which is important! Even when one trusts, one should verify. Even if one did not need to verify, one can still learn more and in more detail. Every little bit helps.

It’s best not to get too worked up about the failure to use words like ‘confirms’ and instead call such results ‘game-changing.’ What matters now is that our beliefs, and what we’re willing to do with those beliefs and say out loud, have now converged. 

Yet, even when the write-ups clearly point to the correct response (first doses first), it still pains me to see things like this…

I say this points to First Doses First because otherwise this isn’t a game-changer in pursuit of herd immunity. If you always give everyone their second dose two weeks after the first, this new finding at most puts you two weeks ahead of schedule. The only real ‘game-changer’ would be to use this to do First Doses First.

(In good news, there’s reports that the CDC is in technically confidential discussions about recommending a move to First Doses First.)

And here we are, back to Very Serious Person framing, complete with ‘may’ and ‘partial’:

Another similar article on the topic is here, with the same framing that this is news.

The problem is, we all want to directly measure infectiousness, since it’s genuinely unclear exactly how much protection against it the various vaccines offer at various points:

But how are you going to do that? The only practical option that would actually work, that I can think of, would be a challenge trial, which is super illegal, so the next best thing anyone’s come up with so far is what this study did, which was to measure viral load in the infected. 

The result of 75% lower average viral loads in vaccinated people, conditional on infection at all, reinforces our previous understanding. First, it confirms that, conditional on infection at all, vaccinated people are less infectious even after one dose. Second, it confirms that severe disease, hospitalization and death should be much reduced in vaccinated individuals, again after only one dose, even conditional on infection. 

Since one dose also does a remarkably strong job of preventing infection, we can now multiply these effects. The only downside is that, with less viral load and less serious infection, those who are vaccinated are more likely to have a case and be undetected, so the “% effective” number on infected-at-all is going to be lower than the “% effective” number measured by those who test positive. Testing often happens in response to a reason, and also sufficiently light cases generate more (technically false) negatives.
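The ‘multiply these effects’ step can be sketched numerically. The two input percentages below are round hypothetical values, and the mapping from 75% lower viral load to a particular infectiousness reduction is an assumption for illustration, not something the study measured:

```python
# Illustrative sketch of multiplying protective effects, with made-up
# round numbers (not measured values from the studies discussed).
def combined_reduction(p_block_infection, p_less_infectious_given_infected):
    """Fraction of onward transmission prevented, assuming the two
    effects are independent and simply multiply."""
    remaining = (1 - p_block_infection) * (1 - p_less_infectious_given_infected)
    return 1 - remaining

# e.g. one dose blocks 80% of infections, and infected vaccinees are
# 50% less infectious (a hypothetical mapping from 75% lower viral load):
print(combined_reduction(0.80, 0.50))  # 0.90, i.e. ~90% less transmission
```

The point is that even modest per-step protections compound into large overall reductions, which is why the conditional viral load result matters beyond the headline effectiveness number.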

Practical Model of Vaccine Effectiveness

Basically, my current model is that the vaccines all improve your chances at each step of the illness, cumulatively. This fits the data and also my understanding of immunology.

Think of being infected kind of as: 

1. First you get exposed and you roll to see if you get infected at all.

2. Conditional on getting infected at all, roll to see if you get mild symptoms. 

3. Conditional on getting mild symptoms, roll to see if you get moderate I’m-sick-this-sucks level symptoms.

4. Conditional on getting moderate symptoms, roll to see if you get a severe case.

5. Conditional on severe, roll to see if you need hospitalization.

6. Conditional on hospitalization, roll to see if you die.

If you pass a roll, you recover, and then you roll again based on how bad it was to see if you get some form of Long Covid. The further down the chart you get before you stop, the more infectious you are for longer, ratios unknown. And the worse your case, the higher the viral load you pass to someone you infect, which makes their chances at each stage worse as well.
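The staged model above can be written down as a chain of conditional probabilities. Every number in this sketch is a placeholder for illustration, not an estimate of any real rate:

```python
# Toy version of the staged model: each entry is P(reach this stage |
# reached the previous one). All numbers are placeholders, not data.
stages = [
    ("infected", 0.30),
    ("mild symptoms", 0.60),
    ("moderate symptoms", 0.50),
    ("severe case", 0.15),
    ("hospitalization", 0.40),
    ("death", 0.20),
]

def stage_probabilities(stages, protection=None):
    """Cumulative probability of reaching each stage, with an optional
    per-stage multiplicative protection factor (e.g. from vaccination)."""
    p, out = 1.0, {}
    for i, (name, conditional) in enumerate(stages):
        factor = protection[i] if protection else 1.0
        p *= conditional * factor
        out[name] = p
    return out

baseline = stage_probabilities(stages)
# Hypothetical vaccine that helps relatively more at early stages, per the text:
vaxxed = stage_probabilities(stages, protection=[0.2, 0.5, 0.7, 0.8, 0.8, 0.8])
```

This also shows why measured ‘% effective’ depends on which stage a study uses as its endpoint: the cumulative probabilities, and hence the vaccinated-to-unvaccinated ratios, differ at each stage.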

Being young and healthy helps you at each step, and so does vaccination, as does being exposed in step 1 to a low initial viral load, with the difference that vaccination helps relatively more at steps 1-2, and being young and healthy is relatively more effective at stages 3-6, and I don’t know where viral load helps more, but it all helps at least some at all steps.

When we measure vaccine effectiveness, different studies are measuring different places on the chart – most are checking how many people test positive as primary endpoint, which is closest to #3, but others test everyone periodically, which is more like a mix of #1 and #2 (if you’re barely positive, the test won’t be positive for as long and might miss it entirely). We also measure #5 and #6 and report those, because people notice they’re important.

What we notice in every vaccine study is that #6 comes out to 100% protection, and #5 at least comes damn close. Two weeks ago there were 5 vaccine studies released, quite large, with zero hospitalizations and zero deaths in any of the treatment groups. As far as I have seen, for every vaccine anyone is considering offering to anyone, there has not been a single death from Covid-19 in the treatment group of any vaccine trial. 

Call the dominant strain in America the ‘Classic’ strain to contrast it with the South African, Brazilian or English strains.

My current prior on Pfizer/Moderna vs. Classic is that they’re something like 90% vs. infectiousness (step #2 or so), 95% vs. moderate symptoms (#4), 99%+ versus death. To the point where we’ve fully vaccinated several percent of the population across many countries and have one report of a fully vaccinated person dying of Covid-19, versus what would otherwise be over a hundred per day. 

Similarly, J&J was measured at 66-72% vs. Classic as the headline number, but is clearly 95%+ and plausibly 99%+ against hospitalization and death, we don’t know the full number because again zero cases in the treatment group. 

The other vaccines follow similar paths. They all offer strong protection versus hospitalization and death.

Even in the worst case, where the AZ vaccine is not so effective against the SA strain in terms of infection and transmission, it’s more effective than nothing at all even against infection and transmission, doesn’t block getting a second vaccine later, and it would still prevent a lot of deaths. So the SA decision to not use AZ only makes sense if they can trade their AZ supply for someone else’s supply of a different vaccine, which could still be a win/win.

You want to know how effective the Pfizer vaccine is? This effective:

It’s too early to draw firm conclusions about impact on the IFR (infection fatality rate) from this alone, since my prior for the baseline number is something like 0.2%-0.6%, which means we’d expect 1-3 deaths by now if they were detecting every case from 544 infections, or a few times that if we assume the majority of infections continue to be missed. The hospitalization rate is similar to the overall USA hospitalization rate for cases, but the American death rate is about half our hospitalization rate, so even if hospitalized but vaccinated patients are somewhat less likely to die than hospitalized unvaccinated patients, I’m guessing the hospitalization threshold in Israel is lower. 
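A quick check of that expected-deaths range, using the stated prior and assuming every one of the 544 infections was detected:

```python
# Back-of-envelope check: 544 detected infections, baseline IFR prior
# of 0.2%-0.6%, assuming full case detection.
infections = 544
for ifr in (0.002, 0.006):
    print(f"IFR {ifr:.1%}: expect ~{infections * ifr:.1f} deaths")
# roughly 1 to 3 expected deaths, more if most infections were missed
```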

Also worth remembering that Israel focused on vaccinating its elderly. 

Winning all fifteen coin flips does happen by accident, but rarely on one’s first attempt. Combine that with winning all the coin flips in all the vaccine trials. 
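To cash out the coin-flip analogy (the numbers here belong to the metaphor, not to a formal model of the Israeli data):

```python
import math

# If each of fifteen 'expected' deaths independently failed to occur
# with probability one half, observing zero is fifteen heads in a row:
p_all_heads = 0.5 ** 15
print(p_all_heads)  # ~3.05e-05

# Poisson framing: chance of observing zero events when `expected`
# events are expected under some assumed residual fatality rate.
def p_zero(expected):
    return math.exp(-expected)
```

Either way you frame it, zero (or one) deaths where a meaningful number were expected is strong evidence on its own, before you combine it with the same pattern showing up across every trial.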

Or to put it another way, vaccines are so good that mild cases after vaccination are clickbait news items (hence I won’t direct link):

Or, in summary, this:

This study covers our own results in practice, and also finds the vaccines highly effective (pdf). Numbers here were actually somewhat disappointing compared to what we’ve seen elsewhere. Sharing the chart below because the data is the thing, but note that these are very small samples in terms of number of infections in many cases, so don’t take the exact numbers all that seriously.

Once again wasn’t sufficiently powered to measure everything we want, but once again reduced severe outcomes, and a zero on deaths.

Vaccines Work So Well We Can Vaccinate More People

Shout it from the rooftops (study). Moderna half doses elicit a robust immune response in healthy adults, including those over the age of 55.

This is a huge boost to our vaccine supply, if only we are willing to take it. Given the physical mechanisms it isn’t quite a doubling of Moderna doses, but it’s close. 

A letter to the editor reanalyzing the Pfizer data on the first dose only. 

I wouldn’t want to hang my hat on a sample of size n=29, but the pattern keeps repeating itself, and the math isn’t remotely close. 

There is no lower hanging fruit available than this.  

The logistical hurdles are many. What would it mean if we put so many doses into each vial? Well, good news, the FDA only took a few weeks to give their sign-off on that one:

It’s pretty great when there’s a painfully obvious thing to do, you hold it up for weeks, then you allow it and everyone gives you credit for the huge boost provided by your approval.

Celebrate, good times, come on, also look at those reporters doing math:

Presumably the 20% number comes from a 40% increase in Moderna vaccine doses, and Moderna being about half of our supply. That implies that there will still be a shortage of vials, and that if we could find a way to produce more vials we could deliver more vaccine doses. We may not be quite the can-do country we used to be, but ‘produce more vials faster’ still seems like an ask that should be within our reach. 
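The arithmetic the reporters presumably did, as a sanity check (the 50% Moderna share is the approximate figure assumed above):

```python
# Reverse-engineering the reported 20% figure from the assumptions above.
moderna_share = 0.5   # Moderna as a fraction of total vaccine supply (approx.)
moderna_boost = 0.4   # 40% more doses per vial
overall_boost = moderna_share * moderna_boost
print(f"{overall_boost:.0%}")  # 20%
```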

In good news, France is recommending only one dose for those previously infected.

The data seems very clear that if you’ve already had Covid you need at most one vaccine dose.

Linked studies are here, here and here. The real question is whether such a person needs the first dose of vaccine, or whether the prior infection was sufficient. As I told a commenter last week, my position on that is that there’s no real downside to getting the first dose so you should get it, but I wouldn’t fight for a scarce appointment and go way out of my way if I’d already been infected, and for now let them go to those who need them more.

A proposal to change our method of vaccine administration. Huge if true and implemented, definitely won’t be implemented in time to matter even if true. Thus did not do the research to determine if true, but good brainstorming so figured I’d pass along.

Bubble Bursting

What to make of this story from the conference organized by Peter Diamandis?

I’d encourage comparison to this analysis of what a Covid test tells you, at Aceso Under Glass. 

Diamandis attempted to create an immunity bubble via PCR testing.

Masks were encouraged between venues, but not required. 

There was one positive test prior to day one. That person was not admitted, and participated virtually. All other test results were negative, but afterwards…

If you go from testing negative one day to highly infectious the next day, at least one of those tests almost certainly gave the wrong answer, even if you’re only or primarily interested in infectiousness.
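The gap between a negative test and actual infection status can be made concrete with a quick Bayes calculation. The sensitivity, specificity and prior below are illustrative placeholders, not numbers from this story:

```python
# Hedged Bayes sketch of what a single negative test tells you, with
# illustrative sensitivity/specificity values (not from this story).
def p_infected_given_negative(prior, sensitivity, specificity=0.995):
    """P(infected | negative test) for given prior and test parameters."""
    false_neg = prior * (1 - sensitivity)
    true_neg = (1 - prior) * specificity
    return false_neg / (false_neg + true_neg)

# e.g. a 5% prior and 70% sensitivity early in infection:
print(p_infected_given_negative(0.05, 0.70))  # ~1.6%
```

A negative test shrinks your probability of infection but does not zero it out, and sensitivity is at its worst in exactly the pre-symptomatic window a bubble protocol cares about.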

He links to this article, noting that people with Type B or AB blood are more likely to test positive than those with Type O blood but not more likely to get sick, and that he is Type O. But a lot of people are Type O, so interpreting this as ‘tests don’t work on Type O’ would be saying they don’t work on almost half the population, which unless you knew a particular person’s blood type would cash out as straight up ‘doesn’t work.’ 

Sounds like everyone is going to recover, which is great. Concrete specific data is always highly useful, and it’s great to have a detailed report. What can we conclude from this?

Masks work, and not using masks was a mistake, but I think the mask data here is being misinterpreted. If one group was isolated in one area and none of them got Covid, and another group was in a different area and many of them did get Covid, the presumed explanation is that the isolated group, even if large, didn’t have any Covid cases to begin with, and they wouldn’t have caught Covid even without masks. Again, that doesn’t make the masks unnecessary or not a good idea, but it could easily have gone the other way.

Mostly what I see is a badly designed protocol. 

You can’t have a bubble without an initial quarantine period.

Tests are great for reducing risk. Testing at the rate this group tested, if practiced by the general population, would end the pandemic in a month or two.

But the requirement for a bubble isn’t reducing risk. The goal is to all but eliminate risk, and this protocol obviously didn’t do that given the timing involved. Rapid tests don’t reliably test positive before symptom onset. 

A 14-day quarantine period before starting is definitely sufficient. A 7-day period before starting, combined with multiple tests on everyone, would in practice be good enough for a bubble of this size. A 0-day quarantine doesn’t make any sense and won’t get it done. 

This stuff is complicated, and people are not explaining it well. As a result, even super smart people like Peter Diamandis are often getting it importantly wrong. 

Again, that doesn’t mean that what they did wasn’t highly useful. The protocol helped by catching one infected person, and on average I would expect it to stop a large majority of potential outbreaks in such a group. Missing someone (or multiple someones) required having bad luck. 

What it doesn’t do, under current conditions, is provide enough safety to form a true bubble where people can ignore transmission risk between those in the bubble. It doesn’t let you ignore masks, distancing and ventilation entirely. 

No, Really, Andrew Cuomo Is The Worst

The Washington Post is among many calling for the removal of Cuomo from office. It seems clear now that not only did Cuomo put Covid-19 positive nursing home residents back into nursing homes where they could infect other residents, he also covered up the resulting deaths. 

New York Times Is Permanently Banned 

It is a great sorrow to see the paper I grew up thinking of as the paper of record, with the motto “All The News That’s Fit To Print,” engage in such brazen libel and retaliation, insinuation of guilt by association so brazen it should be used as an example in textbooks, and condemnation of the very idea of freedom of speech. 

Due to its recent remarkably disingenuous retaliatory hatchet job hit piece on Scott Alexander, together with the paper’s growing general amount of malice (as Eliezer points out it’s not obvious that this piece was even unusually disingenuous for the NYT), The New York Times is now permanently banned from my blog. There will be no links of any kind to NYT and I will not speak to anyone associated with the paper or provide them any information for any reason, other than to inform them of this decision and its cause. I will share information from NYT sources if there is no alternative, without linking.

As further proof of its principled opposition to free speech, The New York Times took this position to its logical conclusion, and also warned us this week about the nefarious Clubhouse and its dangerous potential for ‘unfettered conversation.’

If you’re reading this, you almost certainly read Scott (if not, that’s a serious mistake that you should fix, so go do that now, I’ll wait.) Remember that this is how they conduct themselves in the arena you know. Why would you think they do any differently elsewhere?

For now, I am not extending this ban to avoiding information from individual NYT reporters on Twitter. A remarkably large number of people providing information on Twitter turn out when you check to be NYT reporters. I could be persuaded that I should take this step, and block all of them on principle. 

Beyond that, and urging everyone else to follow suit (or at least to cancel any subscriptions and not talk to their reporters) on pain of me thinking less of you, I consider the matter closed.  

Scott Alexander Predictions on Covid-19

Scott Alexander is out with a post on Covid-19. I provided feedback on an earlier draft, and think both versions were quite good. When writing on a weekly basis, it’s easy to forget to step back and look at the longer term picture more broadly. I’ve done it somewhat, but haven’t focused enough on the longer term, so this is a welcome reminder. Scott is doing what so many do not, creating a physical world model full of gears, trying to figure out how those gears function, and thus figure out how things work and what is likely to happen. Then he even makes probabilistic predictions. It’s what you’d expect from the consensus number one pick.

Let’s take a look at the predictions:

The threshold for this prediction is not so high. Right now we are more than 50% down from peak in terms of positive tests and hospitalizations, so this wouldn’t require us to get anywhere near the previous peak. The prediction also requires that the new peak be in March or April; technically if it’s in May then this resolves to no, no matter how high the peak. I still expect this to happen, because math, because the threshold is low, and because while we’ve exceeded expectations in vaccination we likely haven’t exceeded them by enough to make a March deadline once you account for all the lags.

The most promising thing is that the fourth-wave narrative has become fully accepted by the Very Serious People in advance, which is plausibly why we’re seeing such strong declines in cases now – the control system may be partially working to act on the future rate of growth rather than the current one.

With us at R0~0.83 at the moment, we’re definitely not there yet, but we’re not that far from there especially if the English strain isn’t fully 50%+ more infectious and is only 30-40% more infectious.

Combine all those effects including the timing involved, and I’m at something like 60% here if measured in the 7-day average positive test count. 

Given this says ‘or new strains with similar dynamics’ this seems hard to avoid. A majority could end up being not that many cases, but that doesn’t change what we have to do to prevent case growth, nor does it change the odds here. Our containment plans for the new strains are terrible, and we’re seeing convergent evolution towards things that would count. I think this is low and I would be closer to 80%. A full year is a long time.

I’m interpreting this purely on an individual risk level. I worry that our knowledge generation systems and our vaccine approval systems are so broken that we’ll never ‘know’ for sure the way this question implies we will know, but let’s say that this question is asking about our best guess looking back on 2021. I am much more optimistic here. AstraZeneca is the one that might fail this test against some strains, but if I hadn’t seen Scott’s number I’d have been at least around 85% for this, because 80% reduction isn’t that high a threshold. Given Scott looked and said 55%, I’ll revise that down to 70% on the basis of there being multiple ‘all’ clauses in this, and it being dangerous to get too optimistic about claims that are worded like that. 
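The ‘multiple all clauses’ discount is just multiplication. The per-vaccine probability and count below are illustrative, not Scott’s actual decomposition:

```python
# Why conjunctive 'all' clauses pull an estimate down: even high
# per-item probabilities multiply. Purely illustrative numbers.
per_vaccine = 0.95   # hypothetical chance any one vaccine clears the 80% bar
n_vaccines = 5
joint = per_vaccine ** n_vaccines
print(round(joint, 2))  # 0.77
```

This is why going from ~85% on the typical case to ~70% on the conjunction is a reasonable-sized haircut rather than an overcorrection.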

Note that this threshold is an order of magnitude or more stronger than the one that says vaccinate anyone anywhere with anything you can get, at any existing price, and I predict that all vaccinations listed here will have been worth getting for the general population regardless of which strains are dominant with a probability of 95%, except insofar as it prevents people from being given access to another vaccine soon thereafter (e.g. if you can get Moderna next week or AZ right now, but the government won’t let you have both, it makes sense to wait). The remaining 5% mostly involves the vaccines somehow interfering with each other and some being vastly better than others plus widely available relatively quickly.

I’m interpreting this prediction as being agnostic as to the framing involved, and assuming it resolves to Yes if and only if the CDC issues guidelines saying many of those previously vaccinated should get an additional shot some time in 2022. With that wording, I’d be around 50% for this, so rough agreement. The difference is more that I don’t think this is that likely to portend a big problem, and more likely to be a similar annoyance to the flu shot, where it’s totally worth doing but not a big deal for healthy people if you end up forgetting.

The long term mutation rate of Covid-19, and how impactful those future mutations are likely to be, is unknown, although we can guess. I agree with Scott that this is the big physical unknown for long run outcomes, but within the range of plausible answers I think our choice between plausible decisions as a civilization matters far more. 

The link Scott provides here is very welcome news. The FDA seems to be on the verge of getting this one at least mostly right. 

The probability here depends on exactly how formal this has to be. If it needs to be an explicit announcement of such a lane for a particular company or group of companies, then I think 50% is a good prediction. If all that’s required is a general understanding by Pfizer or Moderna and those observing them that they could get approval within three months in a future emergency, then I think this is closer to 80%. 

I’m optimistic here because there’s an existing example of this process (the flu) that everyone agrees upon, and because this involves essentially repeating a similar thing to an already done thing, which is what organizations like the FDA deal with best, and also because of how terrible it would look (and be) to hold this up in that situation were it to arise.

I’m going to be less confident on this prediction, and bring it down to 80%, because 10% is not a lot of people, and it wouldn’t take that big a cultural shift to exceed that rate of mask wearing semi-permanently even if there wasn’t a strong physical need for it.

The core question to be decided is, are we going to emerge with a new appreciation for life into a new roaring 20s, or are we going to do to adults what we’ve already done to childhood, and extend life-crippling restrictions indefinitely because of a recalibration of risk and new cultural norms? Will we bring another promise to Mother Earth with a bounty and mercy, or will we hide from her within walls of stone at the barrel of a gun?

It’s great that moving forward we’ll have improved our infrastructure for providing remote services and doing video calls and such, but it would be a supreme tragedy if we let our way of life permanently end in a new wave of disease paranoia. That outcome seems entirely plausible but also entirely preventable. 

There will be rock concerts and music festivals once again. There will be more than there have ever been, because there have to be.

You Gotta Fight For Your Right to Party.

All your other rights as well. Covid could well end up being used as a framework for the denial of our basic rights. If elites can break the rules in an emergency they will create an emergency in order to break the rules, and thank you for coming to this Ted Talk. It could also do the opposite, and embolden people to not listen to elites the next time they try something like this, given their performance record. 

We need to go several steps further than reversion to the old norms. As I discussed in the School Daze section, we need to take this opportunity to undo previously instituted insane safety concern trolling, and let life exist. Covid-19 safety is a group effort where we must contain spread, so safety concerns are at least understandable, but the overarching reach of most such concerns is about blameworthiness, liability and people’s inability to do, intuitively understand or appreciate statistics and math. Free the children. Also free everyone.

Metaculus Redux

I got a bunch of pushback from people involved with Metaculus on my comments on Metaculus from last week. There was a lot of discussion in the LW comment section, and some talk on Twitter as well. I continue to be very happy Metaculus exists (and that Good Judgment Project exists) which is why I am willing to discuss them and why I am sad that they are not better. I’ve written extensively about prediction markets, and these aren’t prediction markets. They also aren’t entirely not prediction markets, of course, but it’s true that they’re different and it’s not obvious the incentives can or should be the same as in prediction markets themselves.

If you are motivated to make predictions in such places, and are putting in the time such that your predictions are net helpful, then you are doing a public service, so please keep doing that. And a big thank you to those helping run and improve such places, even if I don’t agree with all your decisions and don’t choose to participate.

My other request for such places is that I find them next to impossible to usefully navigate for the questions I care about, whether I’m looking to view results or make predictions, and more and better curation and organization of the markets seems high value. So does ensuring that the most important questions exist and are in good form, and get the extra attention they deserve. 

At some point I hope to write up my thoughts in more detail.   

Vitamin D As It Ought To Be

New study claims that it confirms that Vitamin D is very important for Covid and is highly effective even when administered at the hospital. Alas, it looks like the methods and statistics they used were highly suspect to say the least. They didn’t randomize patients, they did the treatments in some wards but not others, then didn’t even assign patients randomly to wards, then didn’t cluster their errors. 

That doesn’t change the fact that everyone should be supplementing Vitamin D, and it’s plausible I should be talking about this way more often lest people forget. The new data isn’t compelling (at least, it isn’t in the form it’s been presented) but it’s also definitely not evidence against Vitamin D. 

Then there’s Scott Alexander’s latest article on the subject, Covid/Vitamin D: Much More Than You Wanted To Know, which is in some sense a very accurate title (the first paragraph tells you what you wanted to know: take Vitamin D regardless), but in another sense not true at all, since if anything it tells us less than I would want to know, because I want to know more than is known, even though that doesn’t directly impact behavior today. His thoughts definitely fit right in with ‘Vitamin D study looks good but turns out to be deeply flawed and thus it tells us nothing new.’

I understand where Scott’s analysis is coming from here, but I don’t agree with it. I do acknowledge that it is possible that all the gigantic correlations are a big coincidence that results from intermediation from some combination of otherwise poor health, other sunlight effects and socioeconomic status, and failing to control for such matters. It’s possible, sure, despite the effect sizes in the ‘damn, check out this chart’ area. I also think there’s a bunch of dialectic thought going on here, where it’s effectively about Whose Study Is Right and so when two disagree we have it out to see whose was better, or something, with negative findings needing to be defeated in a righteous battle. I also think Scott is going with a big ‘yeah, yeah, the boy is crying Vitamin D again and I’m not about to go chasing after this non-existent wolf yet again.’

That’s making his prior a lot different than mine, perhaps entirely fairly. If I was informed I’m making an epistemic mistake, my first guess would be that I need to better appreciate the degree to which there’s been wolf-crying and the dynamics involved, and that this should be moving my estimates.

Scott makes three probabilistic assessments (they’re not predictions exactly) at the end:

Tyler Cowen notes that he is even more skeptical than Scott. 

The first curious thing here is the matching 25% numbers. I can imagine Vitamin D helping if taken as a regular supplement while not helping in the hospital, or helping less there, or only helping there when administered early or the right way. I can’t imagine it working in the hospital but not working as a regular supplement. That would blow my mind. These two numbers being identical thus suggests Scott doesn’t see it that way, and in particular that he’s thinking that if it doesn’t work in a hospital (or doesn’t work in a hospital for any given reasonable dosing method) it also doesn’t work as a supplement.

I still think Vitamin D supplementation in advance of infection is a favorite to significantly decrease the risk of getting Covid and give that maybe a 60% chance. I’ve been persuaded that it’s not obviously true and that taking Vitamin D definitely doesn’t mean you can laugh off Covid risk or anything. I find it reasonably plausible that if you try to make up for a deficiency in the hospital, it’s too late to do much about it, and the chance drops to maybe 45%. 

Can you hear Eliezer Yudkowsky in the background saying “Bet! Bet! Bet!”? I hope you can!

The problem is that we both agree on the correct course of action – give everyone Vitamin D supplements now, and give every Covid patient more intense ones in the hospital – because there’s so little downside to doing that. And I believe we both agree that this issue probably isn’t going to be definitely settled, regardless of who is right. If Vitamin D supplementation doesn’t do anything about Covid-19, some studies will keep finding correlations because they aren’t controlling for enough of the right things. If Vitamin D does do something, a lot of studies will still be deeply flawed because scientists and doctors mostly run flawed studies and there’s a lot of correlations to dodge here and people trying to prove a point are especially bad about controlling for such things properly, and other studies will lack sufficient power or use the wrong protocols or control for things twice or whatever and end up finding no effect. There will never be a challenge trial or other definitive answer.

Maybe if there were a particular future study happening where we could both see the protocol, we could bet on what that study will find. That sounds like it might work, and if someone has a good candidate, please share. 

Because we agree on the course of action, in many ways the more interesting questions are epistemic ones. How should one evaluate the evidence in this situation? Which things should move our priors how much? An in depth discussion of that with an eye on general principles of evaluation of evidence seems like it would be generally useful.

I’d also note that the third line says equal or outweigh, not outweigh. The word equal either is or is not doing work here. Does this include the case where Vitamin D supplements do nothing for most people but are at least marginally helpful if you’re sufficiently deficient (42% of Americans are deficient, Google says)? If so, then I find this number strangely low and wonder why it isn’t starting with a 9 (or at least an 8) even if we ignore Covid entirely. If the word equal isn’t doing work and the question is about whether this is worth the cognitive opportunity cost of taking up an action slot that could be used for something else, then I could accept 75% conditional on the Covid-related skepticism being very reasonable.

A final note is that no one is denying that Vitamin D deficiency is very highly correlated with bad Covid-19 outcomes. The world in which Vitamin D supplementation doesn’t help is one in which some combination of the following holds: (A) Vitamin D supplementation doesn’t provide Vitamin D in a useful way, (B) Vitamin D levels are a proxy for other sunlight effects, (C) Vitamin D levels are a proxy for age or other health effects or (D) Vitamin D levels are a proxy for socio-economic status.

Thus, if you go to the doctor and they measure your Vitamin D levels as sufficient, that definitely is very good Covid-risk news for you personally. If they measure your levels as insufficient, that definitely is very bad Covid-risk news for you personally. Thus, there’s a weirdly Calvinist perspective on all this that one could take, that I do not endorse, where those with high Vitamin D are The Elect who are low risk and are saved, even if one is saved via an entirely different method. Again, not endorsing that, but I do find it amusing and interesting to think about.

In Other News

Monthly reminder that I have a Patreon if you have the desire to give me money to support my writing. While I find this motivating, I do not need the money even a little bit. Contributing earns you my silent thanks to you, but that’s it. If you want to contribute enough to matter, contact me privately to minimize the costs involved.

We’re finally doing a challenge trial. Even at this late date, it’s super valuable for its results, and even more valuable for the precedent it sets:

Stimulus plan wants to spend $500 per person on distribution alone, and The Grumpy Economist asks why it costs so much. For all things vaccines I prefer to ask why we are spending so little rather than pinch pennies. One can reasonably raise an eyebrow here, but vaccine distribution is so valuable that the concern should be corruption and theft. If the money is effectively being stolen that would be bad. If the money is being used inefficiently but with any good effect at all, such as by massively increasing payments to those who give the shots, I’m totally fine with that. Cost benefit analysis checks out.

This piece by Zeynep on how to think critically and interpret official communication is fantastic. Not for those who are old hats at this, but great for those getting up to speed.

Some unusually good non-Covid content recently on LessWrong: Anna Salamon observes that PR is corrosive whereas reputation is not, and Eliezer Yudkowsky discusses the concept of Cheerful Prices, the amount of money that would cause one to happily do a thing.

There’s an online event coming up on February 25 about pandemics, biowarfare and the future of human health called Going Viral. If you’re interested you can register at the link; they asked me to pass it along.

I am curious what changed Dr. Fauci’s timeline here; I haven’t heard an explanation of what caused the update:

The alternate hypothesis, of course, is that Fauci is telling people the timeline he thinks will cause the most responsible behavior rather than his actual estimate (or, in other words, lying.)

Medical students are more at risk from socializing than from treating patients, according to one study in Denmark. This matches priors.

Your periodic reminder (in this case from Bryan Caplan) that bioethicists are mustache-twirling villains and without them we wouldn’t have a pandemic. Also, did you know that if you want to know if your experiment is allowed, you have to ask bioethicists and find out their answers? That sure sounds like an experiment on human subjects. Shouldn’t we ask bioethicists if that’s permitted before we ask bioethicists if that’s permitted? 

Your periodic reminder that rapid tests would end this pandemic and could be had en masse for about $5 each, but we don’t allow them to be sold, so what few exist languish in places where the people involved don’t want them.

Thread about how we measure the effectiveness of flu vaccines.

Story about a private service to help with vaccine distribution in Oregon. Sadly not about a way to spend more money to get more vaccines, but still helpful.

In WaPo, Alex Tabarrok tells us how to get on with it and vaccinate faster via some of the usual completely obvious minimal-downside strategies we should definitely do. Nothing new here.

It’s fun to think about what might be good to do with the J&J vaccine if we could do whatever we wanted, but that doesn’t make such ideas remotely practical or plausible.

Department of it could always be worse: A current hypothesis that China is holding off on vaccinating anyone until it can have sufficient supply for the whole country. There are concerns about equity, and then there’s full on anti-escapism.

Biden team announced that vaccine deliveries will arrive sooner than previously officially expected:

That is of course excellent news, although how good the news really is depends on what our true previous expectations were and how much this updates those expectations. Biden’s goal of 100 million doses in 100 days was never in doubt, and it seems plausible that all the official timelines have been chosen in a similarly conservative fashion to ensure they can’t fail. Underpromise and overdeliver. 

Germany doesn’t have much vaccine supply, but is managing to actually target its most vulnerable with what supply it does have. Comments suggest Sweden is doing this as well. I too am curious to see how the case fatality rates shift a month from now in such places versus our own.

FDA refuses to authorize rapid tests, so we ship 60mm of them overseas. Sigh. 

We ran an experiment to see people’s willingness to pay for less crowded planes (MR linking to WSJ), and it was low. The implication is that people’s willingness to pay to avoid Covid risk may be low in general, and we should consider that (while keeping in mind that much risk is not private). One complicating factor is that the people who value Covid safety a lot mostly either aren’t flying at all, or think the plane itself is mostly safe. I’ve noted before that the ventilation of the planes themselves seems rather good, and I’d be more worried about the taxi and getting through the airport, and also I haven’t flown at all anyway. I also expect a psychological effect where once a person decides they must fly or is willing to fly, often they don’t properly factor in marginal risk when making decisions, because they’re in the context of a bigger baseline risk and people’s brains don’t handle that very well. 

Even more than that, I think that the way we shop for plane tickets is designed to cause this result. Everyone goes to an internet site and everything is sorted by price, price and price, with filters for airlines and times and stopovers to the extent you use those. But mostly the whole system is about getting the best price, which gives you visibility and makes you the default choice on multiple levels. I don’t see why plane safety via fewer people on the plane should be any different – people won’t think about it or won’t be aware of the differences, or won’t be willing to actively give up cheaper prices to get the thing even if they’d also never make the opposite trade, and so on. So I think this is interesting and worth pondering, but doesn’t prove as much as one might think.

A thread arguing the UK should go for Covid elimination, and that such suggestions have effectively been censored and considered non-options. It uses a lot of the ‘no evidence’ lines of reasoning, although in ways that are less terrible than normal. I think the way you get to zero is first you vaccinate and then you go for zero rather than locking down first, but yes everyone should absolutely go for zero Covid by the end of the year by getting the contact tracing systems ready for that. I’m actively confused that this isn’t obvious.

Let’s check in on our model of Dr. Fauci, and see how it does on some events from last year:

Weekly Tom Friedman update for additional perspective, nothing new here.

Regulatory hold-ups are worse than you thought. Adar Poonawalla of the Serum Institute of India reports that they have 70 million doses that exist but can’t be shipped because the countries that have purchased them haven’t approved them, and they are already four months old and expire in April. It doesn’t sound like he’s optimistic that, if the countries involved still haven’t approved them, he’ll be able to redirect those vaccine doses to other places. He’s already making 70mm doses a month, so one would hope he’d be able to ship the older doses out and replace them with new ones, so at least nothing spoils unused, but I no longer count on such sanity to prevail. Also worth noting that Adar got the majority of his funding from the Gates Foundation, so they did end up helping build useful capacity in at least one case.

Part of this ‘vaccine nationalism’ seems obviously good to me. If every country decides it needs its own manufacturing sufficient to make everything it needs, that seems great, because then we might have something approaching enough overall manufacturing capacity. If everyone says ‘no exporting vaccines until we have enough for everyone domestically’ that’s not the ideal distribution mechanism but it is highly motivating for capacity, which in the long term matters more. Whereas restrictions on importing vaccines, requiring one’s own regulatory hurdles everywhere, are pure downside. Even if you want to ‘support’ your own candidate, why not simply pay for the vaccine doses later from your own factories even if you end up not using or exporting them? The amount of pound foolishness, from almost all angles, is staggering.

Not Covid, but we should be rather worried about our failure to reliably keep the lights on. Literally. 

Non-Covid Weight Loss Drug Update takes a look and finds the results most promising, against a backdrop of historical money incineration that suggests continued need for caution. 

Non-Covid Civilization Watch: Large parts of Texas continue to be without power under dangerously cold conditions. Now there are reports of lots of people losing water as well. Explosions shut off power to large parts of Portland. California’s power grid is increasingly unreliable each year. We should be alarmed about this. It is perhaps the best concrete test of whether our civilization can maintain its physical infrastructure, it can’t be faked, and it is very much not going well. Any given incident can usually be explained away, extreme weather events have always caused temporary power loss, but when it happens more often on larger scales for longer periods, we need to ask ourselves what’s going on. 


