
LessWrong.com News

A community blog devoted to refining the art of rationality

Covid 6/25: The Dam Breaks

Published on June 25, 2020 6:30 PM GMT

Last week: Covid-19 6/18: The Virus Goes South

Off-topic reminder that this is happening: New York Times, Please Do Not Threaten The Safety of Scott Alexander By Revealing His True Name

The United States has failed to contain Covid-19. The fight is over in large sections of the country. We have lost.

There is no state capacity left to fight this. No will. Faced with this disease, large sections of the country have chosen herd immunity.

We can only hope it kicks in at a relatively low threshold, that our treatments have improved, and that immunity is long lasting.

Hospitals are already being overwhelmed throughout the South. Things there are rapidly getting worse day by day. Even here in New York, where hope for containment is strongest, there are troubling signs. My declaration of local victory might prove premature, at least when it comes to the city itself.

Let’s run the numbers. They’re pretty alarming.

Note that starting next week I plan to truncate the numbers for space reasons, unless I get feedback that I shouldn’t do that.

Positive Tests by Region

Date              WEST   MIDWEST    SOUTH   NE ex-NY      NY
Mar 19-Mar 25     5744      6293     7933       8354   28429
Mar 26-Apr 1     15684     20337    24224      34391   52901
Apr 2-8          19455     31148    39618      56772   65604
Apr 9-15         16291     29267    35570      61921   64463
Apr 16-22        20065     34130    33932      64669   43437
Apr 23-29        20263     42343    33773      62189   42475
Apr 30-May 6     23216     49205    37880      51693   24287
May 7-May 13     22419     43256    37591      40209   16683
May 14-May 20    22725     42762    40343      39273   13709
May 21-May 27    23979     39418    42977      26434   10595
May 28-June 3    32200     31504    50039      24250    9120
June 4-June 10   35487     24674    55731      16622    6071
June 11-June 17  41976     22510    75787      12905    4986
June 18-June 24  66292     26792   107221      10922    4524

Deaths by Region

Date              WEST   MIDWEST    SOUTH   NE ex-NY      NY
Mar 19-Mar 25      138       104      144        116     278
Mar 26-Apr 1       380       615      572        606    1656
Apr 2-8            707      1454     1309       2115    4327
Apr 9-15           890      2195     1596       3577    5318
Apr 16-22         1033      2343     1727       5147    3716
Apr 23-29         1128      2588     1685       4722    2713
Apr 30-May 6      1012      2413     1747       4908    2582
May 7-May 13      1082      2288     1597       3911    1416
May 14-May 20     1090      2060     1442       3578     963
May 21-May 27      775      1723     1290       2341     667
May 28-June 3      875      1666     1387       2121     436
June 4-June 10     743      1297     1230       1611     325
June 11-June 17    778      1040     1207       1270     225
June 18-June 24    831       859     1204        908     153

Positive Test percentages:

Date             USA tests   Positive %   NY tests   Positive %
Mar 19-Mar 25      347,577        16.2%     88,882        32.0%
Mar 26-Apr 1       728,474        20.2%    117,401        45.1%
Apr 2-8          1,064,225        19.8%    144,273        45.5%
Apr 9-15         1,026,741        20.4%    160,859        40.1%
Apr 16-22        1,235,393        16.1%    143,970        30.2%
Apr 23-29        1,552,560        13.0%    202,499        21.0%
Apr 30-May 6     1,759,548        10.6%    183,446        13.2%
May 7-May 13     2,153,748         7.5%    202,980         8.2%
May 14-May 20    2,643,333         6.0%    246,929         5.6%
May 21-May 27    2,584,265         5.7%    305,708         3.5%
May 28-June 3    3,022,469         5.1%    417,929         2.2%
June 4-June 10   3,252,870         4.6%    438,695         1.4%
June 11-June 17  3,470,057         4.6%    442,951         1.1%
June 18-June 24  4,118,048         5.9%    500,174         1.0%

Things are even scarier than these numbers imply. The scariest part is that this jump is largely concentrated in the last few days.

Here’s the daily trend in positive test rates over the last two weeks, Ex-NY:

Date        Ex-NY Positive %
6/11/2020               5.5%
6/12/2020               4.4%
6/13/2020               5.6%
6/14/2020               4.8%
6/15/2020               4.6%
6/16/2020               5.7%
6/17/2020               5.4%
6/18/2020               6.6%
6/19/2020               6.2%
6/20/2020               6.1%
6/21/2020               5.9%
6/22/2020               6.6%
6/23/2020               7.1%
6/24/2020               8.4%

May 16 was the last time prior to this that we had a positive rate (excluding New York) of 7.1% or higher. We have to go back to May 9 to see one above 8%.

Unless something highly unexpected has already happened to prevent it, things are about to get very bad, very quickly. 

Talk To Your Area Man About Lagged Exponential Growth

People are bad at exponential growth. We now have had the opportunity to run a natural experiment. Having been bitten not only internationally but also domestically, will people act any differently the next time around? The answer is no. They will not.

Twitter is full of people trying to help who feel the need to repeatedly and forcefully say that this result is not due solely to a rise in testing. Who feel the need to keep insisting that things are getting worse, that the epidemic is not essentially over.

It seems that only when the ICUs are full, which is happening about now throughout several southern states, do normal people take notice that something might be wrong. Only when the death rate itself rises do people think that there might be danger.

Others think that so long as the situation is getting worse sufficiently slowly, things will turn out fine. That is not how this works. That is not how any of this works. When Cuomo put up the slide that said “goal of R0 less than 1.1,” the mind boggled. If your R0 is 1.05, everyone gets infected. This isn’t a hard thing to reason out. And yet.
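To make the compounding concrete, here is a minimal sketch (in TypeScript, with parameters I chose for illustration — the 5-day generation time and the starting caseload are assumptions, not figures from this post):

    // Illustrative compounding: holding effective R at 1.05 means cases grow
    // 5% every generation, and the growth never stops on its own.
    const r = 1.05;            // the "goal" from the slide
    const generationDays = 5;  // assumed serial interval
    const startingCases = 10_000;

    for (const days of [30, 90, 180, 365]) {
      const generations = days / generationDays;
      const factor = Math.pow(r, generations);
      console.log(
        `${days} days: x${factor.toFixed(1)} -> ` +
        `~${Math.round(startingCases * factor).toLocaleString()} cases per generation`
      );
    }

A year at R = 1.05 is roughly a 35x increase: “slightly above 1” still means growth without limit until immunity, not policy, stops it.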

Area Man Confesses To Heinous Crime

He also held a rally in Tulsa.

At that rally, he said that more tests meant more positives, which looked bad, so he told his people to “slow the testing down.”

I will repeat that. The President of the United States ordered coronavirus testing slowed down.

The defense was that he was kidding.

He wasn’t kidding.

When asked to deny the statement, he sidestepped it. When asked about whether he had been kidding, he responded, and I quote: “I never kid.”

I do not point this out to ask why it has yet to become an article of impeachment. I point it out for its logical implications. We need to understand what is actually happening.

To what extent are our numbers manipulated or fraudulent? In predicting the path of the pandemic, to what extent will our government actively interfere with our attempts to save people’s lives and their livelihoods? We know that the federal government has at many points actively interfered with our ability to test for Covid-19, and has waged a war of banditry and piracy to seize medical supplies.

I have seen numerous reports that Florida, along with other states, is manipulating data in order to justify reopening.

This all helps us answer the next question, which is:

What Went Wrong?

There has been a dramatic worsening of the trend lines over this past week. Even here in New York, what looked like clear steady improvement now risks falling apart. What made things go so wrong so quickly?

My current model says that it is a combination of factors.

The central thing that I believe is going on is lockdown fatigue combined with a relative-to-others sense of responsibility, and no state capacity to do anything else.

It has been several months. A lockdown can be a fun change of pace for a week. It can be not so bad for a few months. But after a while it takes its toll. You go stir crazy. You miss your friends. You lose your job and your savings start to run out. You can’t take it anymore.

Combine that with people adjusting to the behaviors and norms they notice around them. When most people judge who is being responsible, they are grading on a curve. If others are doing unsafe things, and you’re doing unsafe things but things that are less unsafe than others are, or that seem to you more justified, then that’s not being irresponsible. That’s doing what is necessary. That’s living one’s life.

The dam broke on that. The secondary effect of the protests was something between the straw that broke the camel’s back and the unleashing of a tidal wave. People saw others breaking the rules to protest. Many did so mostly to blow off steam after such a long lockdown. The rhetoric shifted to things like ‘defund the police’ and other unpopular far-left agenda items.

People noticed. Then others noticed them noticing, and adjusted expectations. Then others noticed them in turn, and so on. With the perception that a large portion of people was doing whatever it wanted, when a lot of other people ran the program ‘be about as responsible as other people’ that quickly spiraled in many places to very little in the way of precautions.

That is then combined with the problem of air conditioning, especially central air, and it being too hot to want to be outdoors throughout the South.

Mask wearing helps where it is present, but has turned into a political issue. And it turns out that in many places, most people will do an annoying thing for only a brief period before they stop doing it. What I saw in Westchester was an exception.

Then state capacity comes in.

We don’t have any ability to reimpose restrictions at least until after the death rates skyrocket. It’s unclear we’d be able to do anything even then.

We can’t contact trace, even in places like New York City where we hired the people to do it and have a manageable case load. A tracking app is dead in the water. Our blue cognoscenti have decided that anonymity for protesters overrides any motivations to trace. Our red cognoscenti were never on board with the idea.

In Texas it’s like New York in March: ICU capacity is being maxed out, and we can’t even get proper PPE to the hospital workers, let alone take extra measures.

The good news is we’re starting to finally see a consensus that masks are the only way forward without complete disaster. The mask message is increasing in intensity and more people are getting on board, including red voices.

The bad news is we’re starting to finally see a consensus that masks are the only way forward without complete disaster. We’ve agreed that we don’t have the ability to do much else that is useful.

My real answer to what went wrong is that our civilization is profoundly inadequate. We have lost our ability to do things. If you haven’t read the Moral Mazes sequence, it may help explain how we got to this point. So will understanding simulacra levels. That post was originally intended to be called ‘Simulacra Levels and Covid-19’ and to connect the concept to the claims of Covid-19: My Current Model, but that turned out to be too ambitious for one post.

My work to connect it back to our general situation is not yet done, but essentially we have lost not only the ability to focus on the object level; where decisions are made, we have lost the ability to acknowledge that the object level is even a thing. Our top people don’t believe in a physical world the way our grandparents did, and deny that anyone ever believed in one in that way. I’m working on explaining this further, and hope to get the next post out in a few weeks. Can’t rush this.

Baseball! Baseball! Baseball! Baseball! Baseball! Baseball! Baseball! Baseball! Baseball! (with apologies to Bill Simmons)

For many months I have yearned for the return of Major League Baseball. Nothing would be better suited to passing the summer days while maintaining social distance. The game carries minimal risk relative to many other activities. A few can entertain so many for so long. Yet things kept being postponed, as the owners and players were locked in a battle over money.

That battle seems to have ended with the players getting most of what they wanted, through a willful (in my reading) misinterpretation of an agreement the parties reached in March. The owners of course also did not cover themselves in glory on this one. But in the end, they did agree to play ball. Whether or not circumstances let it happen is unclear. I really, really hope we find a way, even if things get bad again. Baseball will be a huge help in getting us all through this.

Football will be even more important, and that much trickier. Football is dangerous under the best of circumstances. I’m not convinced introducing Covid-19 is that big an increase in danger in percentage of risk terms for the activity, but that is not how people will think about it. We can only hope the huge psychic cost of not having it is properly respected.

I know my readers don’t get this. I do. Sports are important. Sports go sports. Athletics are number one. Participants are heroes. Go team, yeah. For real. Let me put it this way, not as a demand but as a warning and declaration of fact:

No Football, No Peace.

What Happens Now? What Should We Do as Individuals?

If you are in the area seeing the explosion in cases, expect things to look like New York City looked in March and April.

We can hope that our improved knowledge will help us better protect the most vulnerable. That is good. It does not make the world around you any less dangerous when deciding what activities to do.

This actually could make things worse. It was criminal that we let our elderly down early in the pandemic, especially by letting the virus into nursing homes. But those very deaths drove home the need for measures that contained the virus. With the new lower death rates, by the time that statistic gets people’s attention, we will be much farther along in the infection rate. A younger person’s real risk will be several times higher relative to average risk, and thus they will take even fewer of the precautions they need than they would have before. Then that greater rate means the elderly in turn can’t be protected.

Florida focused early on protecting nursing homes. That is the right strategy if and only if it still lets you contain infections in general. Fail, and nothing you did will save you, unless you can protect them through full herd immunity. That’s a tall order.

Herd immunity is where the South and West are headed. A few Western states might still escape, but something would have to change. Hawaii was down to one case in an entire week, but if you don’t handle things, they come back anyway, and now they’re back up to 91 and growing fast. I think New Mexico, Colorado and parts of Northern California are the only ones who might be all right. In the South, Kentucky and Virginia might muddle through, which not coincidentally are the most northern members of the region, while Florida and Texas are looking like the biggest disasters.

The Midwest is starting to trend back in the wrong direction, and several Northeastern states look vulnerable as well. If there’s been the same level of cultural shift here as there was down South, but delayed and/or starting from a better point, we are at severe risk of tipping back above the critical R0~1 level, and having things start to get worse again. Once that happens, unless something changes, things might go slower, but it’s only a matter of time.

You can look at my spreadsheet for further numerical details under the tab Infections by State.

If you are in the Northeast, Midwest or a relatively safe local region, enjoy your relative safety responsibly while it lasts. Wear a mask. Remember to keep things outdoors. Don’t sweat the small stuff. Get yourself psychologically prepared for things to potentially get worse again, perhaps soon. Keep an eye on local numbers.

If you are in most of the West or South, this is when the real crisis likely starts for you. Due to exhaustion and adjustment, the chances that supply chains break down are relatively low, but not zero. The chances that the medical system collapses are very high. Be prepared. If you need to see a doctor in the next three months, now is bad but next month is worse. Be ready to not go outside for several months, or to stay within your own region of space if you have one of those within your property, if it comes to that.

Resumed, The Quarantine Wars Have

New York, New Jersey and Connecticut are now telling travelers who have been to any state with positive test rates above 10% that they must quarantine for fourteen days. It would not surprise me if the rest of our regional alliance followed. The rate overall of positive tests outside New York was over 8% yesterday, so it might not be long before this is the majority of the country.

Last time this was tried, we in New York were the ones being told to quarantine and stay out. It definitely feels good to turn the tables. The question of how this could be enforced remains unclear. It is also unclear whether the rest of the country will tolerate it. What will the federal government do when a blue state says red citizens aren’t welcome? So far, it seems to have been quietly accepted. These days, it’s hard to predict what will or won’t get a reaction. Perhaps the reaction will come when enforcement is attempted.

We live in interesting times.


Discuss

An agile approach to pre-research.

Published on June 25, 2020 6:29 PM GMT

When I'm doing research, I often start with a vague question based on an unsettled emotion, such as excitement, confusion, frustration, or need. Lately, I've been investigating visual imagery. I start off with a question that's as hazy as "should I try and train my visual imagination so that I can be a better scientist?"

All too often, when I leap into searching Google Scholar with just a hazy question in mind, I get lost in a rabbit hole. My mind substitutes alternative metrics of success. How many papers have I read, and to what depth? What do they say, in general? How many papers can I find? How should I group them? Where to start research? Elizabeth thinks it's with a question, not a book. Letting my chance findings on Google Scholar define my research agenda seems wrong, an example of lost purposes.

After taking detailed in-the-moment notes on how I investigated this question, I extracted some structure for my agile research process.

During this process, I'm glancing at titles, looking for perhaps 1-2 minutes at an abstract, or spending maybe 5 minutes looking at the text of an article.

1) Note the hazy idea that sparks your curiosity. Eg. "should I try and train my visual imagination?"

2) Try to decompose that into some sub-questions, which might require a paragraph or so to specify. Here are some examples:

  • I know we have different visual faculties for processing surface details vs. spatial relationships. People may have more developed skills or stronger innate potential for one or the other. Do most people tend to be average in both, or is it common to be much stronger in one than the other? Does one or the other correspond to feeling like you have a more vivid imagination?
  • Can visual imagination be improved with practice? If so, how?
  • What's more helpful to a scientist/mathematician/engineer - a strong visual imagination, or skill in using visualization tools such as drawings, physical models, and software?
  • What's more helpful to a scientist - a strong internalized knowledge of various skills, or knowing how to navigate a wide range of reference texts? I imagine investigating this question by testing them to see how well they can remember the content of textbooks or papers they've consulted in the last year. How much of what a scientist is doing is actually internalizing the content of a paper, versus doing an epistemic check on whether to accept the paper's conclusions and just remembering a few sentences' worth of actual content?

3) Search for a likely phrase on Google Scholar. Eg. "visual imagination." Take away new questions from the titles that pop up. For example, "Four measures of visual imagination" made me ask how we'd even study and measure this phenomenon. Are the methods likely to produce useful evidence? Continue searching for related terms, building up the questions you're interested in.

4) As you go along, bring your intuition to bear. Consider how the questions you identify are related. Write down your motivations for doing this research. Think about whether the idea you have is plausible. For example, I considered whether it seemed reasonable that the ability to see and control a mental picture is a key skill for STEM scientists, in an era of widely available visualization tools.

5) Note when superficially related questions are not actually relevant to your inquiry. For example, when I started finding papers on people's ability to see creative visual patterns in a shadowy blob (sort of like a Rorschach), I realized that I needed to specify that I was interested in the ability to see and control a mental picture, not the ability to find creative associations in an image. Altering my search to "visual imagery" rather than "visual imagination" turned up more relevant papers. This process will help you define the domain of your inquiry.

6) Defining the domain of your research project should eventually allow you to prioritize research questions. In my project, I realized I was interested in two issues. First, is the ability to see and control a mental picture a broadly important skill for STEM scientists? If so, how does mental imagery work, and how can it be improved? Realizing that the latter question didn't matter much to me if the answer to the first question was "no" helped me set aside many of the papers that popped up.

7) Prioritizing your research questions may lead you to realize that there's preliminary research you should do before you even address the question you started with. In my case, I want to identify the most useful things I can do to build a career in science. Intuitively, is a big literature review to determine whether and how I should practice visualization a high priority, compared with things like studying for my classes, developing my research project, contacting grad programs, and so on? Maybe I should just keep practicing it intuitively and organically, rather than buttressing it with scientific research.

The outcome is a domain of specific, related research questions that are strongly linked with your original motivation to do research, and clarity on whether this is the right question-domain to pursue given your needs.

I think of this process as an example of "pre-research."

Pre-research can take many forms. Sometimes, I create a spreadsheet to intuitively define large-scale possible directions of research before I do even a single search for literature. Other projects, like this one, involve much more back-and-forth between search and reflection.

The point is to prevent the books you're finding from defining your agenda. Use the books, don't let them use you. Only when your domain is defined should you move on to the next step of scholarship, which you should also figure out how to do efficiently.

One important outcome of pre-research is that if you commit to actually reading a paper or book, you know quite clearly why you're doing so. It's not for the purpose of open-ended exploration. It's to get answers to a set of specific questions that are high-priority within your research domain. This way, a literature review is less like running around the field and falling down a rabbit hole, and more like walking around the field and considering whether, why, and where to build your ivory tower.



Discuss

A Personal (Interim) COVID-19 Postmortem

Published on June 25, 2020 6:10 PM GMT

I think it's important to clearly and publicly admit when we were wrong. It's even better to diagnose why, and take steps to prevent doing so again. COVID-19 is far from over, but given my early stance on a number of questions regarding COVID-19, this is my attempt at a public personal review to see where I was wrong.

I have been pushing for better forecasting and preparation for pandemics for years, but I wasn't forecasting on the various specific questions about pandemics on most platforms until at least mid-March, and I failed in several ways.

Mea Culpa

I was late to update about a number of things, and simply wrong in some cases even on the basis of known information. The failures include initially being slow to recognize the extent of the threat, starting out dismissive about masks, being more concerned about hospital-based transmission than ended up being justified, being overconfident in the response of the US government, and in early March, over-confidently getting a key fact wrong about transmission being at least largely via aerosol droplet versus physical contact. I have a number of excuses, of course. Most other experts agreed with my views, my grandfather passed away in January, followed by his wife in early March, I was under a lot of stress, I was very busy with my personal life, I was trying to do a number of other high-priority projects, I was not paying attention to the details, and so on. But predictive accuracy doesn't care about WHY you were wrong, especially since there are always such excuses. And the impact of my poor judgement was also likely misleading to others in the community.

At the same time, I feel the perhaps egotistical need to note where I was correct early, and what I got right - followed by a clearer description of my failures. I started saying there would be PPE shortages due to COVID-19 by January, and was writing about the supply chain issues well before COVID. I submitted this paper in November last year with Dave Denkenberger; it was largely finished last summer, accepted in February, and then took 3 months to get published. The delay was in part due to other demands on my time, but in retrospect, if it had been available 3 months earlier, it would have been far, far more impactful.

I also understood the failure mode we ended up seeing, and in my 2018 paper, discussing overconfidence in claims that pandemics would be rare, I argued that among the most critical risks was failure to respond to emerging pandemics which could in theory be controlled quickly enough. On the other hand, my failure to realize that this is exactly what was happening is perhaps compounded by the fact that I understood the dynamics, and should have been able to identify what was going on.

Lastly, I maintain I was correct in warning about the poorly thought out and in some cases outright dangerous "preparation" proposed in some quarters of the rationality community in March, such as advocating use of bleach and ozone in closed areas for disinfection. Some people in the community were stockpiling N-95 masks and food and buying up second-hand ventilators, and as I said at the time, were at best being selfish and defecting. On the other hand, as I mention below, I was insufficiently clear about the need for better preparation, and waited far too long to speak.

Some of My Mistakes, and Related Comments

Slow to recognize the extent of the threat.

I said we should be very concerned in January, albeit not very publicly. I took until early March to start suggesting that it was clear that the US would expect to see large numbers of deaths. I was skeptical of valuable efforts early on, and didn't start really publicly sounding the alarm and reacting until even later. I was later than most of this community in recognizing the risks.

Skeptical about Border Closures

In a conversation that started Jan 27th, I was asked about shutting down borders to prevent spread. I was dismissive, in large part based on the expert consensus. I'm unsure whether this was a mistake on the object level, since I think that at that early point, the facts were unclear enough, and trade wars really are bad. I also expected response to be better, based on previous cases.

I do not think that border shutdowns were feasible, and historically they have not been. Quarantines at borders were and are logistically impossible. And full border closures for COVID-19 were also not very effective in most places until very late in the spread (Mongolia and Vietnam are the exceptions that disprove the rule). Even late in the pandemic spread, lots of transmission occurred from places where there had been few or no cases at the time people entered. However, when discussing it, I excused my early claims that it was too economically damaging and would have been ineffective by substituting a different argument about political feasibility - one which I think is correct, but was not my original consideration. This was bad epistemic practice, and I should have been clearer that in retrospect, if they could have been put in place, travel bans would have been a much better idea. I still think my later excuse, that they were politically impossible, holds up - but I had not fully thought through the question until well after my early response.

Dismissive about masks.

The research on use of masks was unclear and I don't want to claim it was retrospectively obvious, but as a matter of decision making given uncertain risks, people should have started wearing homemade masks in public much earlier. We will still need to see how much impact promoting mask wearing in public has had, but at the very least it functioned as a clear and important public signal that COVID was serious, which promotes physical distance and other critical factors.

On the other hand, I said at the time, and still maintain, that I was correct in suggesting that buying up P95 and surgical masks in February and March was defecting, since it was already clear that those supplies were needed desperately in hospitals. And Fauci has now said as much (as a level-1+2 sage, in my view). In retrospect, I think it would have been better, consequentially, to push for cloth masks earlier, but current modeling and our understanding of spread make it clear that mask wearing by itself is only marginally effective. I was instead focused on promoting handwashing, which I think is still undersold in importance, and thought that continued focus on masks would be a net negative. I was wrong, and others here were correct.

Not clear enough about the importance of preparation.

I've long said, following all of the experts, that people should have 2 weeks supply of food and basic supplies. Especially people in California, where earthquakes are far more common than severe pandemics. Further preparation should have been unneeded early on - but in fact, most people don't do this, and the people who were advocating making sure that you were prepared for a worse outcome were correct.

On the other hand, there is an argument I've seen here, and by others in the rationality community elsewhere, that encouraging people to buy critical supplies and hoard early in a crisis sends a price signal to get companies to produce. The argument is that this type of hoarding of masks and other PPE will convince manufacturers to make more. I thought, and still think, that this at least partly misunderstands the way that price signals and supply chain delays propagate. Anyone who's familiar with MIT System Dynamics' Beer Game and the bullwhip effect would tell you that companies that ramped up production quickly in response to demand (rather than in response to projections and an understanding of longer-term demand) were being stupid, not prudent, and companies that tried this in exactly this area were burned in the past for doing so.

If that isn't clear enough, notice that it took a couple of months for the toilet paper and flour "shortages" to be worked out, despite the fact that there was sufficient supply, and there were no actual production supply shortages. Yes, markets are largely efficient, but they aren't magical ways to eliminate production and distribution delays, much less to insulate companies from actual market dynamics - and China and other southeast Asian countries had already stepped up mask production massively by mid-January. Most of the current supply comes from those factories, so the supposed benefits of price signals from buying masks in February seem not to have actually been effective in speeding anything up.

Oversold Hospital-based transmission.

Part of my concern about hoarding of masks and other equipment was that I thought we would once again see a pattern of large transmission events being centered around hospitals. Thankfully, this didn't happen - hospitals have gotten far better at isolation of patients, and they shut down non-essential services early. We did still see many, many cases and deaths in hospital staff, and this was very clearly in large part due to a lack of supply of PPE. Still, it wasn't the critical locus of spread I expected it to be.

Overconfidence in the response of (certain agencies in) the US government.

This was a huge mistake on my part. I have been concerned about the current administration for years, and have repeatedly warned that it is destroying government agencies. Despite that, I was (in retrospect very unreasonably) still confident that the CDC was going to handle the situation well. They had handbooks on influenza pandemic preparedness, I had personally discussed pandemic preparedness plans with senior people at CDC just a few years ago, and I was overconfident in their ability to respond. Based on that, in turn, I was confident that the level of concern being voiced by the CDC was a reflection of their planning and ongoing preparation. The CDC has planned for preparation for this exact case for years, and I assumed they would carry out those plans. I was wrong.

It seems, though it is still somewhat unclear, that center directors were told by the director and the head of HHS that they needed not to speak out about the risks, specific recommendations were vetoed, and (easily the worst screw-up) they let the FDA ban private tests, seemingly at the direction of the administration, to hide the extent of the spread. I'm still confused by the level of non-reaction among non-political SES staff and GS-14s. We have seen many people in various agencies come forward with complaints during this administration, but the CDC seems to have just dropped the ball on their response. We will likely see in the coming years how much this was due to central directives not to react, versus a lack of central directives to react, and therefore failure due to passivity. I still want to assume the former, but that's in large part self-justification of my prior views.

I was wrong in trying to defend the CDC's overall response in March. It definitely isn't as clear as I thought at the time that they were, and would be, net positive. I do think that the emergence of Fauci as almost a national hero has been very helpful in getting people to listen to expert recommendations, even if this did come very late. This is a point on the side of getting most people to listen more and attack less. On the other hand, Lesswrong was overall better prepared because of their skepticism, so at the very least I was talking to the wrong crowd to defend them, and more likely should have been quicker to judge their actions as dangerous myself.

The FDA also surprised me with how badly they did, albeit the surprise was less severe because I had lower expectations. I thought they were getting less dangerous to US public health given the previous pushes to reduce regulation by the current administration. Scott Gottlieb was there for two years, and was probably the only Trump nominee I was actually super-happy about. Unfortunately, he left (a fact I wasn't paying attention to), and it turns out that the incompetence of a sequence of new directors and rapid changes left the FDA even less prepared than they would have been. I would have expected a doctrinaire Republican appointee to seize the opportunity of a crisis to reduce regulation, and instead it seems they did nothing but block critical testing work for months.

I've long considered myself skeptical of government agencies' abilities, and lean fairly heavily libertarian in many ways - albeit less than most others on LessWrong. I was still surprised by the level of ongoing, perhaps even malicious incompetence of the current administration. I'm still unclear if this is a Hanlon-dodge, or if they really have broken the US government so badly, so quickly. Other governments managed this far less poorly, so I'm unclear how generalizable the lesson is that governments are bad at everything. But I am glad I left the US.

Speculation about Causes

There are lots of things I did wrong.

First, I think I was too close to the situation. I had spent a ton of time looking at the US's system specifically, writing about the closely related topic of influenza pandemics in my dissertation, then doing work for Open Philanthropy on GCBRs. All of this was during the Obama administration. I left the US a bit after Trump was elected, partly for that reason, and worked on related topics that had less to do with US policy. I'd like to say that's why I didn't update, but to be honest, I think I was just being stupid in accepting my cached thoughts about the risk and best responses, instead of re-evaluating.

I also had too-strong priors and "expert" ideas to be properly fox-like in my predictions, and was not quick enough to update on how things were actually going based on the data. Because I was slow to move from the base rate, I underestimated the severity of COVID-19 for too long. I'm unsure how to fix that, since most of the time it's the right move, and paying attention to every new event is very expensive in terms of mental energy. (Suggestions welcome!)

I also gave too much weight to others' forecasts. Good Judgement's predictions were WAY optimistic about this early on, and I was not forecasting the question, but I was assuming that their aggregate guess was better than that of individuals, especially people who aren't forecasters. This is usually correct, but here it was a mistake. (I now think that superforecasting is materially worse than I hoped it would be at noticing rare events early.) I also followed the herd too much from expert circles, and my twitter feed from infectious disease epidemiology circles was behind even my slow self in recognizing that this was an incipient disaster back in March.

Conclusion

COVID-19 went badly in some places, and went disastrously in others. This was largely predictable, and I failed to notice early enough. (The US is in deep, deep trouble, and this will continue for quite a while longer, with myriad longer term effects on the global economy, and on global stability of other types.) I'm chastened about the poorly calibrated overconfidence of my expert opinion.

I'm also partly unsure what the best next steps are for better-calibration. One key thing I did, several years ago, was explicitly try to rely more on other people's views in the rationality community to guide my decisions, and provide a clear source of feedback. I didn't do this as much as I should have in this case. (On the other hand, it was a large part of why I recognized the mistake as quickly as I did, albeit later than I could have - so it was at least a partial success.)

I'm hoping that this exercise is another way in which thinking through the situation gives me a valuable chance to reflect, and that I can get further feedback. I also hope that it's useful for others to perhaps learn from, but I'm unsure how transferable the lessons of my failures are.



Discuss

GPT-3 Fiction Samples

Published on June 25, 2020 4:12 PM GMT



Discuss

There's an Awesome AI Ethics List and it's a little thin

Published on June 25, 2020 1:43 PM GMT

I recently stumbled upon the Awesome Artificial Intelligence Ethics list, and was somewhat underwhelmed with its quality. While it has enough links to look like a good and useful Awesome List, I think I'd come away from it mostly confused and unsure where to go next. Some obvious oversights include:

  • The books section is quite short, given the amount of material that's been published in this space since Superintelligence.
  • LessWrong is in the Blogs section, but the Effective Altruism and AI Alignment Fora are not.
  • There are a ton of great introductory-to-in-the-weeds podcasts on AI Ethics topics, but there's no podcasts section (or other section in which podcasts would clearly be appropriate).
  • There are a number of technical research agendas that would be good to list, on the off chance anyone happens upon this list and wants to work their way up to the actual state of research.

I'm going to make some pull requests, but I can only suggest what I know about. Perhaps you can help?



Discuss

DontDoxScottAlexander.com - A Petition

Published on June 25, 2020 5:44 AM GMT

Here's the petition Scott asked us to make.

This is a request for a specific action by the New York Times editors: 

We, the undersigned, urge the New York Times to respect Scott Alexander's request to not reveal his real name in a planned piece discussing the Slate Star Codex blog and community.

[...]

That's all. It seems to me really important for public discourse on the internet for journalists to respect this norm in this situation.

Please share it in the places you share things, and email it to the prominent people you know whom the New York Times respects and cares about.

Please sign.


Acknowledgements

Thanks to Jacob Lagerros and Rob Bensinger for making the petition with me.

Thanks to Paul Graham, Steven Pinker and many others for their early signatures.

Thanks to Sarah Haider and Tanner Greer for independently organising a petition and then joining forces with ours.

Thanks to so many other people who are still unsubscribing from the NYT, giving them respectful-but-firm feedback, and otherwise supporting Scott in this situation. It's been great to see so much love and support for SSC these past 48 hours.



Discuss

Is there a good way to simultaneously read LW and EA Forum posts and comments?

Published on June 25, 2020 12:15 AM GMT

I mostly read LW because the posts tend to be more fun and my interests (AI safety, thinking well, the broad question of how to do good) appear more consistently here than on the EA Forum (where, in particular, there are a reasonable number of posts on causes I'm not interested in, EA meta, and moral philosophy).

However, there's lots of good stuff on the EA forum and it's cheap to ignore the stuff I'm not interested in. Checking both sites is not acceptable to me because it will lead to time-wasting habits (check one, browse a little, check the other, browse a little, check the first ...). RSS doesn't work because I want to be able to easily read the comments.

Does a solution to this exist?



Discuss

Preview On Hover

Published on June 24, 2020 10:20 PM GMT

A couple years ago, Wikipedia added a feature where if you hover over an internal link you'll see a preview of the target page:

Other sites with similar features include gwern.net:

And LessWrong:

In general, I like these features a lot. They dramatically lower the barrier to following internal links, letting you quickly figure out whether you're interested. On the other hand, they do get in the way. They pop up, overlapping the text you're reading, and mean you need to be paying more attention to where the mouse goes.

I decided I wanted to add a feature like this to my website, but without any overlap. The right margin seemed good, and if you're reading this on jefftk.com with a window at least 1000px wide then hovering over any link from one of my blog posts to another should show a preview:

Here are my five most recent posts if you'd like to give it a try:

There are a lot of options for how to implement this, but I decided to use an iframe that just loads the relevant page. Feel free to look at the page source and see exactly how it works, but the general idea is as follows (a rough code sketch appears after the list):

  • It loads the page directly, not a stripped down version like the examples above. My pages are simple enough that this is fine.

  • It's a preview, not a full page, so set scrolling=no.

  • It's a same-origin ("friendly") iframe, so I can reach into it and add a click listener, so clicking on the iframe takes you through to the previewed post.

  • I don't want comments, ads, analytics, or anything else potentially slow to run, and I don't use JS for rendering, so I use sandbox to turn off JS.

  • Once it's open it stays open until you hover a different preview.

  • It appears vertically aligned with the hovered link, and moves with the page out of view as you scroll down.

  • If you hover over a second link close to the first one, it reuses the same vertical position to avoid looking jumpy.

  • If you hover many links in quick succession it starts loading the first one immediately, and then discards any links that have been overtaken by events.
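A minimal TypeScript sketch of that general idea - my own reconstruction from the list above, not the actual code on jefftk.com; the post-link class name and the 100ms delay are assumptions:

    // Hypothetical reconstruction of the hover-preview described above.
    const frame = document.createElement("iframe");
    frame.setAttribute("scrolling", "no");              // a preview, not a full page
    frame.setAttribute("sandbox", "allow-same-origin"); // no allow-scripts: JS stays off
    frame.style.cssText =
      "position:absolute; right:0; width:320px; height:400px; display:none;";
    document.body.appendChild(frame);

    frame.addEventListener("load", () => {
      // Same-origin ("friendly") iframe: reach inside so clicking the preview
      // navigates to the previewed post.
      frame.contentDocument?.addEventListener("click", (event) => {
        event.preventDefault();
        window.location.href = frame.src;
      });
    });

    let pending: number | undefined;
    document.querySelectorAll<HTMLAnchorElement>("a.post-link").forEach((link) => {
      link.addEventListener("mouseenter", () => {
        window.clearTimeout(pending); // drop hovers overtaken by newer ones
        pending = window.setTimeout(() => {
          frame.src = link.href;                   // load the real page directly
          frame.style.top = `${link.offsetTop}px`; // align with the hovered link
          frame.style.display = "block";           // stays open until the next hover
        }, 100); // assumed debounce delay
      });
    });

Note that sandbox without allow-scripts is what turns off JS inside the preview, while allow-same-origin keeps the frame reachable from the parent page.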

I'm pretty happy with it, but if you find any bugs let me know!

Comment via: facebook



Discuss

Don't punish yourself for bad luck

Published on June 24, 2020 9:52 PM GMT

The following text first summarizes the standard moral-hazard model. Afterwards, I point out that it implies that you always get punished for bad luck. The third part is completely speculative: I speculate on how you should behave towards yourself.

A brief summary of a moral-hazard setting

A Moral Hazard situation occurs when someone takes too much risk, or does not reduce it enough because someone else bears the cost.

The following situation is a typical textbook example. A worker works for a firm, and her effort influences the probability that the firm has high revenue. The worker can exert high or low effort, the firm's revenue can be high or low, and low revenue is more likely when effort is low, but can also occur when effort is high. Moreover, the worker has to get a wage that compensates her for forgoing whatever else she would do with her time.

Suppose the firm would, in principle, be willing to compensate the worker for high effort (which means that we assume that the additional expected revenue gained from high effort is at least as high as the additional wage needed to make the worker willing to exert high effort). Because workers are usually assumed to be risk-averse, the firm would bear the risk of low revenue, and the worker would get a wage that is constant in all states of the world.

However, now also suppose the firm cannot directly observe the effort - this constitutes a situation of asymmetric information, because the worker can observe her own effort and the firm cannot. Then the firm cannot condition the payment on the worker's effort. It also cannot just conclude that the worker exerted low effort by observing low revenue, because we assumed that low revenue can also occur when the worker exerted high effort.

The second-best optimal solution (that is, the best solution given this information problem) is to condition payments on the revenue - and thus, on the result instead of the effort to get it. The worker gets a higher wage when the firm has high revenue. Thereby, the firm can design the contract such that the worker will choose to exert high effort.

In this setting of asymmetric information, the worker gets the same expected utility as in the setting with symmetric information (in which effort could be observed), because the firm still has to compensate her for not doing something else. But because the worker now faces an uncertain income stream, the expected wage must be higher than if the wage were constant. (Thus, the firm has a lower expected profit. If the loss to the firm due to the high-revenue wage premium is severe enough, the firm may not even try to enforce high effort.) The asymmetric information precludes an optimal arrangement of which economic agent takes the risk.
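As a formal sketch of this textbook setting (in my own notation - the post itself stays verbal): let effort be e in {L, H}, revenue r_h > r_l, p_e the probability of high revenue given effort e (with p_H > p_L), u a concave utility, c(H) > c(L) the effort costs, and \bar{u} the outside-option utility. The firm's second-best problem of implementing high effort is then:

    \begin{align*}
    \max_{w_h,\, w_l}\quad & p_H (r_h - w_h) + (1 - p_H)(r_l - w_l) \\
    \text{s.t.}\quad
    & p_H\, u(w_h) + (1 - p_H)\, u(w_l) - c(H) \;\ge\; \bar{u}
      && \text{(participation)} \\
    & p_H\, u(w_h) + (1 - p_H)\, u(w_l) - c(H) \;\ge\;
      p_L\, u(w_h) + (1 - p_L)\, u(w_l) - c(L)
      && \text{(incentive compatibility)}
    \end{align*}

Incentive compatibility forces w_h > w_l, so the risk-averse worker bears risk, and the participation constraint then pushes the expected wage above the constant wage of the observable-effort benchmark.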


You'll get punished for bad luck


At this point, note that the way that the firm offers a higher wage when it has high revenues and a lower one when it has low revenues is a matter of framing. The firm may for example say that it wants its workers to participate in its success, and therefore pay a premium.

Vocabulary of "punishment", by contrast, may not be popular. Also, it seems wrong to call the low wage a punishment wage. Why? Because the optimal contract makes the worker exert high effort, and a low revenue will NOT indicate that the worker idled.

So that is the irony of the situation: an optimal contract punishes you for bad luck, and for nothing else. At the same time, the worker would be more likely to get "punished" if she idled, because low revenue would then be more likely. The threat of punishment for a bad result is exactly what makes the worker exert high effort to at least make bad results unlikely.

Optimal contracts in your brain?

Suppose you feel a bit split between two "agents" in your brain. One part of you would like to avoid working. The other part would like you to exert high effort to have a good chance of reaching your goals.

You cannot pay yourself a wage for high effort, but you can feel good or bad. Yet the kind-of-metaphorical implication of the moral-hazard optimal-contract model is that you should not punish yourself for bad luck. There are random influences in the world, but if you can see (or remember) how much effort you exerted, it does not make sense to give yourself a hard time because you were unlucky.

On the other hand, maybe you punish yourself because you lie to yourself about your effort? If you have created such an asymmetric-information situation within yourself, punishing yourself for bad luck is a seemingly rational idea. But keep in mind that it is only second-best optimal, under the assumption that this asymmetric information indeed exists. If so, think of ways to measure your effort instead of only your outcome. If you cannot do it, consider whether forcing yourself to exert high effort is really worth it. Solve the problem that actually needs to be solved, and respect the constraints that exist, and none that do not.



Discuss

Can I archive content from lesswrong.com on the wayback machine (internet archive, archive.org) ?

Published on June 24, 2020 6:15 PM GMT

There is some great information on lesswrong.com (LW) that seems to be available publicly (I can access it in an incognito Chrome window), and I would like to increase the chances of this information surviving for a long time.

When I try saving a LW page it looks like it does not render correctly on the wayback machine. Ex: https://web.archive.org/web/20200624170623/https://www.lesswrong.com/s/FrqfoG3LJeCZs96Ym/p/8qccXytpkEhEAkjjM

I opened a GitHub issue on LW's repo since I assume it is an issue with the source code of LW. The EA Forum seems to have the same issue, and it looks like the EA Forum's repo is a fork of LessWrong's repo. I am also writing here since it might have more visibility for non-tech people.



Discuss

Betting with Mandatory Post-Mortem

Published on June 24, 2020 8:04 PM GMT

Betting money is a useful way to

  • ensure you have some skin in the game when making assertions;
  • get a painful reminder of when you're wrong, so that you'll update;
  • make money off of people, if you're right.

However, I recently made a bet with both a monetary component and the stipulation that the loser write at least 500 words to a group chat about why they were wrong. I like this idea because:

  • It enforces that some cognitive labor be devoted to the update, rather than relying on the pain of lost cash. Even if you do think it through privately, the work of writing it up will help you remember the mistake next time. (If you don't want to spend that amount of time thinking about why you were wrong, then perhaps you aren't very interested in really updating on this bet.)
  • People usually make small-cash bets anyway, so there's not that much skin in the game. Being forced to write publicly, or to a select group of peers such as a slack/discord server, makes it feel real for me in a way that losing a small sum of money doesn't.
  • Where normal bets may benefit the participants, these kinds of public bets have more benefit for the whole audience. Observers get a lot more information about the structure of the disagreement, and the update which the loser takes from it.
  • Often, by the time a bet is decided, a lot of other relevant information has come in as well. A public post-mortem gives the loser a chance to convey this information.
  • This kind of bet will often be positive-sum in reputational terms: the winner gets a public endorsement from the loser, but the loser may gain respect from the audience for their gracious defeat and judicious update.

Furthermore, if the loser's write-up is anything short of honest praise for the winner's views, the write-up may provide hints at a continuing disagreement between the loser and winner which can lead to another bet.

This idea feels similar to Ben's Share Models, Not Beliefs. Bets focus only on disagreement with probabilities, not the underlying reasons for those disagreements. Declaring a winner/loser conveys binary information about who was more correct, but this is very little information. Post-mortems give a place for the models to be brought to light.

A group of people who engaged in betting-with-post-mortems together would generally be getting a lot more feedback on practical reasoning and where it can go wrong.



Discuss

Quick Look #1 Diophantus of Alexandria

Published on June 24, 2020 7:12 PM GMT

https://www.storyofmathematics.com/hellenistic_diophantus.html

Diophantus of Alexandria, a 2nd Century Greek mathematician, had a lot of the concepts needed to develop an algebra. However, he was unable to fully generalize his methods of problem solving, even though he invented some interesting ones.

Ancient math was written in paragraphs, using words for the most part, thus making reading it very, very painful compared to the compact elegance of modern mathematical notation. However, I was surprised to see Diophantus (or his very early editors at least) develop some interesting and helpful notation in his algebra.

Final sigma ‘ς’ represented the unknown variable, but there were different symbols for each power of the unknown, so x^2 through x^6 each had its own unique symbol. In fact, this situation persisted into the 17th century; even Fermat used N for the unknown, S for the unknown squared, and C for the unknown cubed!

The problem with this is that it meant Diophantus couldn’t devise general methods to solve algebraic problems which had multiple unknowns, and it wasn’t obvious to him that one CAN frequently relate x^2 to x.

The cool thing about this notation from the past though, is how it makes obvious something that Algebra I – Algebra II students mess up frequently. You can’t just combine x^2 + x^3, these are different variables whose relation concerns the base. And almost everyone has made this mistake in their early math career. Some never recover.
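Spelled out in modern notation (my illustration, not Diophantus's):

    x^2 + x^3 \;\neq\; x^5, \qquad \text{while} \qquad x^2 + x^3 = x^2\,(1 + x)

The two terms relate only through their shared base x - exactly the relation that power-specific symbols hide.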

Although the editor of my copy, Sir Thomas L. Heath, claims that Diophantus experienced limited options as a mathematician because all the letters of the Greek alphabet were already in use as numerals except for the final sigma, which Diophantus used to represent the unknown variable, I think D. could have invented more variables quite easily. We see this in his invention of the subtraction sign as an inverted psi, and his use of a different variable with superscript for an unknown to the nth power up to the sixth. There was also the extinct digamma and all the Egyptian symbols, which he at least could have cribbed from. Surely the problem was not a lack of imagination, but merely satisfaction with the method then in use. Besides, one person can only invent so much, unless that person is Leonhard Euler or Von Neumann, neither of whom had any limits. D. merely didn't see the limits of his notation.

Although D’s problems are surprisingly challenging even using modern notation, the logic D. used to solve them is obscure. He does not explain his step-by-step process. Since they are not proofs, merely problems, it’s hard to divine exactly what D. thought the import of his methods was, or exactly which steps he took to come to the answer. He seems to have frequently used trial and error, just plugging in numbers until the right answer popped out. He only wanted positive integers in his answers, so the problems are designed to reflect that. However, some problems don’t have an answer as a whole number. For those he would estimate the answer: “X is < 11 and > 10.” Sometimes he is wrong in these estimations! I don’t know quite what to make of that. In problems whose answer is a negative number, Diophantus says, “Pthht, absurd!”

This is unfortunate, because if D. had credits and debts in mind when he was putting together these problems, he might have seen the utility of negative numbers and started an accounting revolution 1500 years early.

If Diophantus can teach us one thing about discovery, I believe it is that iterating over different methods of notation might lead us to make conceptual breakthroughs.



Discuss

What's the name for that plausible deniability thing?

Published on June 24, 2020 6:42 PM GMT

There's a concept I remember reading about here, of the idea that you can't just suddenly refuse to answer dangerous questions. You have to consistently refuse to answer some random sample of totally normal questions, so that "refusing to answer a question" doesn't itself become a source of information.

Unfortunately I can't remember what it's usually called, and haven't been able to turn it up via search ("plausible deniability" is way too broad, and I can't find the right narrowing criteria). What is this concept/process called?



Discuss

Abstraction, Evolution and Gears

June 24, 2020 - 20:39
Published on June 24, 2020 5:39 PM GMT

Meta: this project is wrapping up for now. This is the second of probably several posts dumping my thought-state as of this week.

It is an empirical fact that we can predict the day-to-day behavior of the world around us - positions of trees or buildings, trajectories of birds or cars, color of the sky and ground, etc - without worrying about the details of plasmas roiling in any particular far-away star. We can predict the behavior of a dog without having to worry about positions of individual molecules in its cells. We can predict the behavior of reinforced concrete without having to check it under a microscope or account for the flaps of butterfly wings a thousand kilometers away.

Our universe abstracts well: it decomposes into high-level objects whose internal details are approximately independent of far-away objects, given all of their high-level summary information.

It didn’t have to be this way. We could imagine a universe which looks like a cryptographic hash function, where most bits are tightly entangled with most other bits and any prediction of anything requires near-perfect knowledge of the whole system state. But empirically, our universe does not look like that.

Given that we live in a universe amenable to abstraction, what sorts of agents should we expect to evolve? What can we say about agency structure and behavior in such a universe? This post comes at the question from a few different angles, looking at different properties I expect evolved agents to display in abstraction-friendly universes.

Convergent Instrumental Goals

The basic idea of abstraction is that any variable X is surrounded by lots of noisy unobserved variables, which mediate its interactions with the rest of the universe. Anything “far away” from X - i.e. anything outside of those noisy intermediates - can only “see” some abstract summary information f(X). Anything more than a few microns from a transistor on a CPU will only be sensitive to the transistor’s on/off state, not its exact voltage; the gravitational forces on far-apart stars depend only on their total mass, momentum and position, not on the roiling of plasmas.

One consequence: if an agent’s goals do not explicitly involve things close to X, then the agent cares only about controlling f(X). If an agent does not explicitly care about exact voltages on a CPU, then it will care only about controlling the binary states (and ultimately, the output of the computation). If an agent does not explicitly care about plasmas in far-away stars, then it will care only about the total mass, momentum and position of those stars. This holds for any goal which does not explicitly care about the low-level details of X or the things nearby X.

Noisy intermediates Z mask all information about X except the summary f(X). So, if an agent's objective only explicitly depends on far-away variables Y, then the agent only wants to control f(X), not necessarily all of X.
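
A minimal simulation of this setup (my own toy construction, not from the post): a far-away variable sees the low-level state X only through noisy intermediates, so two different low-level states with the same summary f(X) are indistinguishable from far away.

import random
random.seed(0)

def f(x):                    # high-level summary: the "transistor's" on/off state
    return int(x > 0.5)

def far_away(x):             # far-away variable Y: sees X only through noisy Z
    flipped = random.random() < 0.1
    return f(x) ^ flipped

# Two different low-level states with the same summary...
a = sum(far_away(0.51) for _ in range(100000)) / 100000
b = sum(far_away(0.99) for _ in range(100000)) / 100000
print(a, b)                  # ...produce statistically identical far-away behavior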

This sounds like instrumental convergence: any goal which does not explicitly care about things near X itself will care only about controlling f(X), not all of X. Agents with different goals will compete to control the same things: high-level behaviors f(X), especially those with far-reaching effects.

Natural next question: does all instrumental convergence work this way?

Typical intuition for instrumental convergence is something like “well, having lots of resources increases one’s action space, so a wide variety of agents will try to acquire resources in order to increase their action space”. Re-wording that as an abstraction argument: “an agent’s accessible action space ‘far away’ from now (i.e. far in the future) depends mainly on what resources it acquires, and is otherwise mostly independent of specific choices made right now”. 

That may sound surprising at first, but imagine a strategic video game (I picture Starcraft). There’s a finite world-map, so over a long-ish time horizon I can get my units wherever I want them; their exact positions don’t matter to my long-term action space. Likewise, I can always tear down my buildings and reposition them somewhere else; that’s not free, but the long-term effect of such actions is just having fewer resources. Similarly, on a long time horizon, I can build/lose whatever units I want, at the cost of resources. It’s ultimately just the resources which restrict my action space, over a long time horizon.

(More generally, I think mediating-long-term-action-space is part of how we intuitively decide what to call “resources” in the first place.)

Coming from a different angle, we could compare to TurnTrout’s formulation of convergent instrumental goals in MDPs. Those results are similar to the argument above in that agents tend to pursue states which maximize their long-term action space. We could formally define an abstraction on MDPs in which X is the current state, and f(X) summarizes the information about the current state relevant to the far-future action space. In other words, two states X with the same long-run action space will have the same f(X). “Power”, as TurnTrout defined it, would be an increasing function of f(X) - larger long-run action spaces mean more power. Presumably agents would tend to seek states with large f(X).
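
As a minimal sketch of that correspondence (my own toy construction, not TurnTrout's formal definitions): in a small deterministic MDP, let f(X) be the set of states reachable from X over a long horizon, and measure "power" by its size.

mdp = {                       # state -> {action: next_state}
    'A': {'left': 'B', 'right': 'C'},
    'B': {'stay': 'B'},       # a dead-end, low-power state
    'C': {'left': 'A', 'right': 'D'},
    'D': {'left': 'C', 'stay': 'D'},
}

def reachable(state, horizon=10):
    frontier, seen = {state}, {state}
    for _ in range(horizon):
        frontier = {mdp[s][a] for s in frontier for a in mdp[s]}
        seen |= frontier
    return frozenset(seen)    # f(X): all the summary a far-future agent needs

power = {s: len(reachable(s)) for s in mdp}
print(power)                  # {'A': 4, 'B': 1, 'C': 4, 'D': 4}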

Modularity

Fun fact: biological systems are highly modular, at multiple different scales. This can be quantified and verified statistically, e.g. by mapping out protein networks and algorithmically partitioning them into parts, then comparing the connectivity of the parts. It can also be seen more qualitatively in everyday biological work: proteins have subunits which retain their function when fused to other proteins, receptor circuits can be swapped out to make bacteria follow different chemical gradients, manipulating specific genes can turn a fly’s antennae into legs, organs perform specific functions, etc, etc.
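
That sort of statistical check is easy to sketch. Here is a toy version (a synthetic graph standing in for real protein-interaction data, assuming networkx is available):

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Two dense 6-node clusters joined by a single edge, as a stand-in
# for an interaction network with two modules.
G = nx.barbell_graph(6, 0)
parts = greedy_modularity_communities(G)
print([sorted(p) for p in parts])   # recovers the two clusters
print(modularity(G, parts))         # close to 0.5: strongly modular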

One leading theory for why modularity evolves is “modularly varying goals”: essentially, modularity in the organism evolves to match modular requirements from the environment. For instance, animals need to breathe, eat, move, and reproduce. A new environment might have different food or require different motions, independent of respiration or reproduction - or vice versa. Since these requirements vary more-or-less independently in the environment, animals evolve modular systems to deal with them: digestive tract, lungs, etc. This has been tested in simple simulated evolution experiments, and it works.

In short: modularity of the organism evolves to match modularity of the environment.

… and modularity of the environment is essentially abstraction-friendliness. The idea of abstraction is that the environment consists of high-level components whose low-level structure is independent (given the high-level summaries) for any far-apart components. That’s modularity.

Coming from an entirely different direction, we could talk about the good regulator theorem from control theory: any regulator of a system which is maximally successful and simple must be isomorphic to the system itself. Again, this suggests that modular environments should evolve modular “regulators”, e.g. organisms or agents.

I expect that the right formalization of these ideas would yield a theorem saying that evolution in abstraction-friendly environments tends to produce modularity reflecting the modular structure of the environment. Or, to put it differently: evolution in abstraction-friendly environments tends to produce (implicit) world-models whose structure matches the structure of the world.

Reflection

Finally, we can ask what happens when one modular component of the world is itself an evolved agent modelling the world. What would we expect this agent’s model of itself to look like?

I don’t have much to say yet about what this would look like, but it would be very useful to have. It would give us a grounded, empirically-testable outside-view correctness criterion for things like embedded world models and embedded decision theory. Ultimately, I hope that it will get at Scott’s open question “Does agent-like behavior imply agent-like architecture?”, at least for evolved agents specifically.



Discuss

[AN #105]: The economic trajectory of humanity, and what we might mean by optimization

June 24, 2020 - 20:30
Published on June 24, 2020 5:30 PM GMT

Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter.
Audio version here (may not be up yet).

SECTIONS

HIGHLIGHTS
TECHNICAL AI ALIGNMENT
        AGENT FOUNDATIONS
        LEARNING HUMAN INTENT
        PREVENTING BAD BEHAVIOR
        FORECASTING
        MISCELLANEOUS (ALIGNMENT)
AI STRATEGY AND POLICY

HIGHLIGHTS

Modeling the Human Trajectory (David Roodman) (summarized by Nicholas): This post analyzes the human trajectory from 10,000 BCE to the present and considers its implications for the future. The metric used for this is Gross World Product (GWP), the sum total of goods and services produced in the world over the course of a year.

Looking at GWP over this long stretch leads to a few interesting conclusions. First, until 1800, most people lived near subsistence levels. This means that growth in GWP was primarily driven by growth in population. Since then population growth has slowed and GWP per capita has increased, leading to our vastly improved quality of life today. Second, an exponential function does not fit the data well at all. In an exponential function, the time for GWP to double would be constant. Instead, GWP seems to be doubling faster, which is better fit by a power law. However, the conclusion of extrapolating this relationship forward is extremely rapid economic growth, approaching infinite GWP as we near the year 2047.

Next, Roodman creates a stochastic model in order to analyze not just the modal prediction, but also get the full distribution over how likely particular outcomes are. By fitting this to only past data, he analyzes how surprising each period of GWP was. This finds that the industrial revolution and the period after it was above the 90th percentile of the model’s distribution, corresponding to surprisingly fast economic growth. Analogously, the past 30 years have seen anomalously lower growth, around the 25th percentile. This suggests that the model's stochasticity does not appropriately capture the real world -- while a good model can certainly be "surprised" by high or low growth during one period, it should probably not be consistently surprised in the same direction, as happens here.

In addition to looking at the data empirically, he provides a theoretical model for how this accelerating growth can occur by generalizing a standard economic model. Typically, the economic model assumes technology is a fixed input or has a fixed rate of growth and does not allow for production to be reinvested in technological improvements. Once reinvestment is incorporated into the model, then the economic growth rate accelerates similarly to the historical data.
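
A minimal numeric sketch of that generalized model (my own toy parameters, not Roodman's fitted ones): once output is reinvested in technology, growth obeys something like dY/dt = a * Y**(1 + eps), which diverges in finite time rather than growing exponentially.

# Integrate dY/dt = a * Y**(1 + eps): super-exponential, finite-time blowup.
a, eps = 0.02, 0.25
Y, t, dt = 1.0, 0.0, 0.001
while Y < 1e9:
    Y += dt * a * Y ** (1 + eps)
    t += dt
print(round(t, 1))   # ~199: Y explodes near t = 1/(a*eps) = 200, not at infinity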



Nicholas's opinion: I found this paper very interesting and was quite surprised by its results. That said, I remain confused about what conclusions I should draw from it. The power law trend does seem to fit historical data very well, but the past 70 years are fit quite well by an exponential trend. Which one is relevant for predicting the future, if either, is quite unclear to me.

The theoretical model proposed makes more sense to me. If technology is responsible for the growth rate, then reinvesting production in technology will cause the growth rate to be faster. I'd be curious to see data on what fraction of GWP gets reinvested in improved technology and how that lines up with the other trends.

Rohin’s opinion: I enjoyed this post; it gave me a visceral sense for what hyperbolic models with noise look like (see the blog post for this, the summary doesn’t capture it). Overall, I think my takeaway is that the picture used in AI risk of explosive growth is in fact plausible, despite how crazy it initially sounds. Of course, it won’t literally diverge to infinity -- we will eventually hit some sort of limit on growth, even with “just” exponential growth -- but this limit could be quite far beyond what we have achieved so far. See also this related post.



The ground of optimization (Alex Flint) (summarized by Rohin): Many arguments about AI risk depend on the notion of “optimizing”, but so far it has eluded a good definition. One natural approach is to say that an optimizer causes the world to have higher values according to some reasonable utility function, but this seems insufficient, as then a bottle cap would be an optimizer (AN #22) for keeping water in the bottle.

This post provides a new definition of optimization, by taking a page from Embedded Agents (AN #31) and analyzing a system as a whole instead of separating the agent and environment. An optimizing system is then one which tends to evolve toward some special configurations (called the target configuration set), when starting anywhere in some larger set of configurations (called the basin of attraction), even if the system is perturbed.

For example, in gradient descent, we start with some initial guess at the parameters θ, and then continually compute loss gradients and move θ in the appropriate direction. The target configuration set is all the local minima of the loss landscape. Such a program has a very special property: while it is running, you can change the value of θ (e.g. via a debugger), and the program will probably still work. This is quite impressive: certainly most programs would not work if you arbitrarily changed the value of one of the variables in the middle of execution. Thus, this is an optimizing system that is robust to perturbations in θ. Of course, it isn’t robust to arbitrary perturbations: if you change any other variable in the program, it will probably stop working. In general, we can quantify how powerful an optimizing system is by how robust it is to perturbations, and how small the target configuration set is.
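
A minimal sketch of that robustness property (my own toy example, not from the post): perturb θ mid-run, and gradient descent still lands in the target configuration set.

def grad(theta):                 # gradient of loss(theta) = (theta - 3)**2
    return 2 * (theta - 3)

theta, lr = 10.0, 0.1
for step in range(300):
    if step == 100:
        theta = -50.0            # perturb theta mid-run, as with a debugger
    theta -= lr * grad(theta)
print(theta)                     # ~3.0: back at the loss minimum anyway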

The bottle cap example is not an optimizing system because there is no broad basin of configurations from which we get to the bottle being full of water. The bottle cap doesn’t cause the bottle to be full of water when it didn’t start out full of water.

Optimizing systems are a superset of goal-directed agentic systems, which require a separation between the optimizer and the thing being optimized. For example, a tree is certainly an optimizing system (the target is to be a fully grown tree, and it is robust to perturbations of soil quality, or if you cut off a branch, etc). However, it does not seem to be a goal-directed agentic system, as it would be hard to separate into an “optimizer” and a “thing being optimized”.

This does mean that we can no longer ask “what is doing the optimization” in an optimizing system. This is a feature, not a bug: if you expect to always be able to answer this question, you typically get confusing results. For example, you might say that your liver is optimizing for making money, since without it you would die and fail to make money.

The full post has several other examples that help make the concept clearer.



Rohin's opinion: I’ve previously argued (AN #35) that we need to take generalization into account in a definition of optimization or goal-directed behavior. This definition achieves that by primarily analyzing the robustness of the optimizing system to perturbations. While this does rely on a notion of counterfactuals, it still seems significantly better than any previous attempt to ground optimization.

I particularly like that the concept doesn’t force us to have a separate agent and environment, as that distinction does seem quite leaky upon close inspection. I gave a shot at explaining several other concepts from AI alignment within this framework in this comment, and it worked quite well. In particular, a computer program is a goal-directed AI system if there is an environment such that adding the computer program to the environment transforms it into an optimizing system for some “interesting” target configuration states (with one caveat explained in the comment).

TECHNICAL AI ALIGNMENT

AGENT FOUNDATIONS

Public Static: What is Abstraction? (John S Wentworth) (summarized by Rohin): If we are to understand embedded agency, we will likely need to understand abstraction (see here (AN #83)). This post presents a view of abstraction in which we abstract a low-level territory into a high-level map that can still make reliable predictions about the territory, for some set of queries (whether probabilistic or causal).

For example, in an ideal gas, the low-level configuration would specify the position and velocity of every single gas particle. Nonetheless, we can create a high-level model where we keep track of things like the number of molecules, average kinetic energy of the molecules, etc which can then be used to predict things like pressure exerted on a piston.
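
A minimal sketch of the gas example (my own toy, not from the post): wildly different microstates with the same high-level summary support the same macroscopic prediction.

import random
random.seed(0)

def microstate(n=100000):         # low-level: one speed-squared per particle
    return [random.gauss(0, 1) ** 2 for _ in range(n)]

for _ in range(3):
    L = microstate()              # a fresh, completely different microstate
    avg_ke = sum(L) / len(L)      # high-level summary: average kinetic energy
    print(round(avg_ke, 2))       # ~1.0 each time; pressure depends only on this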

Given a low-level territory L and a set of queries Q that we’d like to be able to answer, the minimal-information high-level model stores P(Q | L) for every possible Q and L. However, in practice we don’t start with a set of queries and then come up with abstractions, we instead develop crisp, concise abstractions that can answer many queries. One way we could develop such abstractions is by only keeping information that is visible from “far away”, and throwing away information that would be wiped out by noise. For example, when typing 3+4 into a calculator, the exact voltages in the circuit don’t affect anything more than a few microns away, except for the final result 7, which affects the broader world (e.g. via me seeing the answer).

If we instead take a systems view of this, where we want abstractions of multiple different low-level things, then we can equivalently say that two far-away low-level things should be independent of each other when given their high-level summaries, which are supposed to be able to quantify all of their interactions.

Read more: Abstraction sequence



Rohin's opinion: I really like the concept of abstraction, and think it is an important part of intelligence, and so I’m glad to get better tools for understanding it. I especially like the formulation that low-level components should be independent given high-level summaries -- this corresponds neatly to the principle of encapsulation in software design, and does seem to be a fairly natural and elegant description, though of course abstractions in practice will only approximately satisfy this property.

LEARNING HUMAN INTENT

Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences (Daniel S. Brown et al) (summarized by Zach): Bayesian reward learning would allow for rigorous safety analysis when performing imitation learning. However, Bayesian reward learning methods are typically computationally expensive to use, because a separate MDP needs to be solved for each reward hypothesis. The main contribution of this work is a proposal for a more efficient reward evaluation scheme called Bayesian REX (see also an earlier version (AN #86)). It works by pre-training a low-dimensional feature encoding of the observation space, which allows reward hypotheses to be evaluated as linear combinations over the learned features. Demonstrations are ranked using pairwise preferences, which are relative judgments and thus conceptually easier for a human to make. Using this method, sampling and evaluating reward hypotheses is extremely fast: 100,000 samples in only 5 minutes on a PC. Moreover, Bayesian REX can be used to play Atari games by finding a most likely or mean reward hypothesis that best explains the ranked preferences, and then using that hypothesis as the reward function for the agent.

Prerequisites: T-REX



Zach's opinion: It's worth emphasizing that this isn't quite a pure IRL method. They use preferences over demonstrations in addition to the demonstrations themselves and so they have more information than would be available in a pure IRL context. However, it’s also worth emphasizing that (as the authors show) pixel-level features make it difficult to use IRL or GAIL to learn an imitation policy, which means I wasn’t expecting a pure IRL approach to work here. Conceptually, what's interesting about the Bayesian approach is that uncertainty in the reward distribution translates into confidence intervals on expected performance. This means that Bayesian REX is fairly robust to direct attempts at reward hacking due to the ability to directly measure overfitting to the reward function as high variance in the expected reward.

PREVENTING BAD BEHAVIOR

Avoiding Side Effects in Complex Environments (Alexander Matt Turner, Neale Ratzlaff et al) (summarized by Rohin): Previously, attainable utility preservation (AUP) has been used to solve (AN #39) some simple gridworlds. Can we use it to avoid side effects in complex high dimensional environments as well? This paper shows that we can, at least in SafeLife (AN #91). The method is simple: first train a VAE on random rollouts in the environment, and use randomly generated linear functions of the VAE features as the auxiliary reward functions for the AUP penalty. The Q-functions for these auxiliary reward functions can be learned using deep RL algorithms. Then we can just do regular deep RL using the specified reward and the AUP penalty. It turns out that this leads to fewer side effects with just one auxiliary reward function and a VAE whose latent space is size one! It also leads to faster learning for some reason. The authors hypothesize that this occurs because the AUP penalty is a useful shaping term, but don’t know why this would be the case.
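
The penalty itself is simple to sketch. A hedged, schematic version of the AUP idea as I understand it from the summary (the names aux_q_functions, noop, and the lam value are illustrative stand-ins, not the paper's exact formulation):

def aup_reward(state, action, base_reward, aux_q_functions, noop, lam=0.1):
    # Penalize changes in attainable auxiliary value, relative to doing nothing.
    penalty = sum(abs(q(state, action) - q(state, noop))
                  for q in aux_q_functions) / len(aux_q_functions)
    return base_reward(state, action) - lam * penalty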

FORECASTING

Reasons you might think human level AI soon is unlikely (Asya Bergal) (summarized by Rohin): There is a lot of disagreement about AI timelines, that can be quite decision-relevant. In particular, if we were convinced that there was a < 5% chance of AGI in the next 20 years, that could change the field’s overall strategy significantly: for example, we might focus more on movement building, less on empirical research, and more on MIRI’s agent foundations research. This talk doesn't decisively answer this question, but discusses three different sources of evidence one might have for this position: the results of expert surveys, trends in compute, and arguments that current methods are insufficient for AGI.

Expert surveys usually suggest a significantly higher than 5% chance of AGI in 20 years, but this is quite sensitive to the specific framing of the question, and so it’s not clear how informative this is. If we instead ask experts what percentage of their field has been solved during their tenure and extrapolate to 100%, the extrapolations for junior researchers tend to be optimistic (decades), whereas those of senior researchers are pessimistic (centuries).

Meanwhile, the amount spent on compute (AN #7) has been increasing rapidly. At the estimated trend, it would hit $200 billion in 2022, which is within reach of large governments, but would presumably have to slow down at that point, potentially causing overall AI progress to slow. Better price performance (how many flops you can buy per dollar) might compensate for this, but hasn't been growing at comparable rates historically.

Another argument is that most of our effort is now going into deep learning, and methods that depend primarily on deep learning are insufficient for AGI, e.g. because they can’t use human priors, or can’t do causal reasoning, etc. Asya doesn’t try to evaluate these arguments, and so doesn’t have a specific takeaway.



Rohin's opinion: While there is a lot of uncertainty over timelines, I don’t think under 5% chance of AGI in the next 20 years is very plausible. Claims of the form “neural nets are fundamentally incapable of X” are almost always false: recurrent neural nets are Turing-complete, and so can encode arbitrary computation. Thus, the real question is whether we can find the parameterization that would correspond to e.g. causal reasoning.

I’m quite sympathetic to the claim that this would be very hard to do: neural nets find the simplest way of doing the task, which usually does not involve general reasoning. Nonetheless, it seems like by having more and more complex and diverse tasks, you can get closer to general reasoning, with GPT-3 (AN #102) being the latest example in this trend. Of course, even then it may be hard to reach AGI due to limits on compute. I’m not claiming that we already have general reasoning, nor that we necessarily will get it soon: just that it seems like we can’t rule out the possibility that general reasoning does happen soon, at least not without a relatively sophisticated analysis of how much compute we can expect in the future and some lower bound on how much we would need for AGI-via-diversity-of-tasks.



Relevant pre-AGI possibilities (Daniel Kokotajlo) (summarized by Rohin): This page lists 47 things that could plausibly happen before the development of AGI, that could matter for AI safety or AI policy. You can also use the web page to generate a very simple trajectory for the future, as done in this scenario that Daniel wrote up.



Rohin's opinion: I think this sort of reasoning about the future, where you are forced into a scenario and have to reason what must have happened and draw implications, seems particularly good for ensuring that you don’t get too locked in to your own beliefs about the future, which will likely be too narrow.

MISCELLANEOUS (ALIGNMENT)

Preparing for "The Talk" with AI projects (Daniel Kokotajlo) (summarized by Rohin): At some point in the future, it seems plausible that there will be a conversation in which people decide whether or not to deploy a potentially risky AI system. So one class of interventions to consider is interventions that make such conversations go well. This includes raising awareness about specific problems and risks, but could also include identifying people who are likely to be involved in such conversations and concerned about AI risk, and helping them prepare for such conversations through training, resources, and practice. This latter intervention hasn't been done yet: some simple examples of potential interventions would be generating official lists of AI safety problems and solutions which can be pointed to in such conversations, or doing "practice runs" of these conversations.



Rohin's opinion: I certainly agree that we should be thinking about how we can convince key decision makers of the level of risk of the systems they are building (whatever that level of risk is). I think that on the current margin it's much more likely that this is best done through better estimation and explanation of risks with AI systems, but it seems likely that the interventions laid out here will become more important in the future.

AI STRATEGY AND POLICY

Medium-Term Artificial Intelligence and Society (Seth D. Baum) (summarized by Rohin): Like a previously summarized paper (AN #90), this paper aims to find common ground between near-term and long-term priorities in medium-term concerns. This can be defined along several dimensions of an AI system: when it chronologically appears, how feasible it is to build it, how certain it is that we can build it, how capable the system is, how impactful the system is, and how urgent it is to work on it.

The paper formulates and evaluates the plausibility of the medium term AI hypothesis: that there is an intermediate time period in which AI technology and accompanying societal issues are important from both presentist and futurist perspectives. However, it does not come to a strong opinion on whether the hypothesis is true or not.

FEEDBACK
I'm always happy to hear feedback; you can send it to me, Rohin Shah, by replying to this email.

PODCAST
An audio podcast version of the Alignment Newsletter is available, recorded by Robert Miles.
Subscribe here:

 

Discuss

Linkpost: M21 Review: We Have Normality

June 24, 2020 - 19:10
Published on June 24, 2020 4:10 PM GMT

You can find it here.



Discuss

Models, myths, dreams, and Cheshire cat grins

June 24, 2020 - 13:50
Published on June 24, 2020 10:50 AM GMT


"she has often seen a cat without a grin but never a grin without a cat"

Let's have a very simple model. There's a boolean, C, which measures whether there's a cat around. There's a natural number N, which counts the number of legs on the cat, and a boolean G, which checks whether the cat is grinning (or not).

There are a few obvious rules in the model, to make it compatible with real life:

  • ¬C → (N=0).
  • ¬C → ¬G.

Or, in other words, if there's no cat, then there are zero cat legs and no grin.

And that's true about reality. But suppose we have trained a neural net to automatically find the values of C, N, and G. Then it's perfectly conceivable that something might trigger the outputs ¬C and G simultaneously: a grin without any cat to hang it on.
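
A minimal check (my own toy code) makes this concrete: the consistency rules are easy to state, but nothing forces a learned classifier's three outputs to jointly satisfy them.

def consistent(C, N, G):
    # ¬C → (N = 0) and ¬C → ¬G
    return (C or N == 0) and (C or not G)

print(consistent(True, 4, True))    # True: an ordinary grinning cat
print(consistent(False, 0, True))   # False: a grin without a cat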

Adversarial examples

Adversarial examples often seem to behave this way. Take for example this adversarial example of a pig classified as an airplane:

Imagine that the neural net was not only classifying "pig" and "airplane", but other things like "has wings" and "has fur".

Then the "pig-airplane" doesn't have wings, and has fur, which are features of pigs but not airplanes. Of course, you could build an adversarial model that also breaks "has wings" and "has fur", but, hopefully, the more features that need to be faked, the harder it would become.

This suggests that, as algorithms get smarter, they will become more adept at avoiding adversarial examples - as long as the ultimate question is clear. In our real world, the categories of pigs and airplanes are pretty sharply distinct.

We run into problems, though, if the concepts are less clear - such as what might happen to pigs and airplanes if the algorithm optimises them, or how the algorithm might classify underdefined concepts like "human happiness".

Myths and dreams

Define the following booleans: HH detects the presence of a living human head, HB a living human body, JH a living jackal head, JB a living jackal body.

In our real world we generally have HH↔HB and JH↔JB. But set the following values:

¬HH, HB, JH, ¬JB,

and you have the god Anubis.

Similarly, what is a dragon? Well, it's an entity such that the following are all true:

{is lizard, is flying, is huge, breath is fire, intelligence is human level, ...}

And, even though those features never go together in the real world, we can put them together in our imagination, and get a dragon.

Note that "is flying" seems more fundamental to a dragon than "has wings", thus all the wingless dragons that fly "by magic". Our imagination seem comfortable with such combinations.

Dreams are always bewildering upon awakening, because they also combine contradictory assumptions. But these combinations are often beyond what our imaginations are comfortable with, so we get things like meeting your mother - who is also a wolf - and handing Dubai to her over the tea cups (that contain milk and fear).

"Alice in Wonderland" seems to be in between the wild incoherence of dream features, and the more restricted inconsistency of stories and imagination.



Discuss

Has anyone explored deliberate infection with milder coronavirus strains for cross-immunity?

June 24, 2020 - 11:08
Published on June 23, 2020 10:23 PM GMT



Discuss

Does NYT have policies?

June 24, 2020 - 07:06
Published on June 24, 2020 4:06 AM GMT

Does the New York Times have written policies? Does it publish them? Have they leaked?

Here is a list of six public documents. Most interesting are the Ethical Journalism Guidebook/Handbook and the Guidelines on [Our] Integrity. The first mentions three documents: (A) "the Newsroom Integrity Statement, promulgated in 1999"; is this the Guidelines linked above? (B) "the Policy on Confidential Sources, issued in 2004," archived here. Do they still publish it? and (C) "the Rules of the Road," which sounds like a private document not specific to journalism.

Are there other private written policies? Have they leaked? Are there rumors about them?

I don't mean to imply that policies are an unalloyed good. At some level of detail or disorganization, people simply don't learn them. I have a largely unjustified intuition that lying is bad, and that lying about policies is particularly bad, since such lies seem to exist to diffuse responsibility.



Discuss

The Dark Miracle of Optics

June 24, 2020 - 06:09
Published on June 24, 2020 3:09 AM GMT

Alternate titles:

  • The Public-Private Information Gap Rules Everything Around Me
  • Baudrillard’s Simulacra, Steelmanned
  • “Having your cake and eating it too”
  • The Precarity of Prestige Economies
  • “Goodhart’s is just a subset, mannn.”
  • “Costly signals are just a subset, mannn.”
  • The Tragedy of Appearances
  • On Truth & Lies in a Nonmoral Sense

Epistemic status: no idea how original any of this is; it just connects a lot of nodes in my brain. I’ve been told it owes a real debt to Robert Trivers, whom I hope to educate myself on shortly. I may just be reinventing signal theory.

In the beginning there was defection.

We can treat a prisoner’s dilemma as an elegant stand-in for coordination more generally. In a one-off dilemma, the optimal solution is defection. Bellum omnium contra omnes: the war of all against all, or, “hyper-individualism.”

At the same time, it is clear that many of the “benefits sought by living things”[1]—which is to say, that which assists survival—are more readily reached by group effort.

Crucially, an iterated prisoner’s dilemma has the opposite optimal equilibrium: tit-for-tat, or cooperation, in its many variations, its various guises. And so the trick becomes how to switch individuals over from one-offs onto iterated dilemmas. The technology which achieves this is reputation, allowing formation of a ledger anchored to names[2], faces, identities. Individuals sharing an ecosystem continually run into each other, and given a reputation ledger, cannot defect and “get away” with it, continuing to freeride into oblivion.[3]
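
A minimal simulation (my own toy payoffs, not from the post) shows the switch: defection dominates the one-off game, while tit-for-tat wins the iterated one.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opp: 'C' if not opp else opp[-1]
defector = lambda opp: 'D'

print(play(defector, defector))        # (100, 100): war of all against all
print(play(tit_for_tat, tit_for_tat))  # (300, 300): stable cooperation
print(play(tit_for_tat, defector))     # (99, 104): exploited once, then never again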

Tit-for-tat is exceedingly simple. It enables mutualism (in symbiosis[4]) and is practiced internally to species as diverse as stickleback fish, tree swallows, all primates, bats. All it requires is a sense of continuous identity and tracking of that identity’s (recent-)historical actions. We can take bats as an example: Since mothers’ hunting returns are unequally distributed, but bat babies do better when consistently fed, mothers communally share food. But should a researcher pump a mother’s gullet full of air, so it appears she had a strong return but is refusing to share, suddenly the other mothers will no longer feed her children, will allow them to starve.

We can read human progress as a history of instituting cooperation. The Stele of Hammurabi’s famous law is an eye for an eye; many of its other decrees are variations thereof: if a building collapses and kills its occupant, its builder shall be put to death as well. The Old Testament introduces the Commandments, the laws of Exodus. Almost every major religion has its in-house variation on the Golden Rule. These are efforts at securing internal coordination of the group, which, a la “Studies on Slack” and multi-level selection theory, will outcompete other groups once instituted. I have heard from reliable sources that laws in the Quran, and many other major religious texts, have similar structures.

But vanilla tit-for-tat reputational ledgers, like a barter system, are difficult and costly to continuously verify. They require small, local communities of recognition, and prevent civilizations from scaling up. And so there was a need for currency, for credit, for the accumulation, transportation, and commensurability of capital, all of which together say: this individual cooperates. (Or to generalize across contexts, since optics signal more broadly than mere cooperation: This individual carries value; an interaction with her will be positive-sum.) This currency needed to be legible and exchanged across markets, across subcommunities. For these and a thousand other reasons we invented proxies, heuristics, measurements; instituted CVs, letters of recommendation, titles of achievement and nobility, and of course, fashion. But currency is easily counterfeited.

*

Clothing arises to serve object-level purposes: warmth from cold, shelter from sun. But soon it gives rise in turn to fashion: equally tactical, but suited for the social, rather than literal, landscape. (For a social being, both are essential to survival.) Because the garments, the choices of paint pigment, the geometric patterns and metal hanging from the ears, reflected both the wealth of the individual and his affiliation to group, they became sources of information for recipients, on-lookers: ways of deducing whole from part, of grokking a person. As social reality devours the physical—in Baudrillard’s terms, simulacra—thus fashion devours its mother.

In the Upper-Middle Paleolithic Transition, human societies and economies grow increasingly complex. Trade deals and diplomacy are performed among credible spokesmen, and social hierarchies need preservation across interactions between strangers. Fashion enters as a technology for maintaining and navigating the social graph. “By the production of symbolic artefacts that signified different social groups and kinds of relationships, Aurignacian people were able to maintain wider networks that could exist even between people who had never set eyes on each other,” giving them a competitive advantage. The practice spreads through the law of cultural evolution: “The surface of the body… becomes the symbolic stage upon which the drama of socialisation is enacted, and body adornment… becomes the language through which it was expressed.”[5] We have entered the second stage of simulacra. The territory has a map, and there are parties interested in manipulating it.

*

Or consider the butterfly. A “protected” species (poisonous, inedible, etc) gains a survival advantage through honest signaling of this protection. An honest signal is a Pareto improvement—a win-win. The butterfly does not wish to be eaten; the predator does not wish to eat a toxic insect. How does the butterfly evolve this signal?

Brute association. The outward phenotypic expression of the butterfly—its public information—becomes associated with some interior, private information—its toxicity. Let’s say the distinctive pattern is black and red. A predator cannot tell whether an insect is toxic from sight, but it can tell by proxy. In other words, the butterfly develops a reputation.
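A toy simulation of this brute association, with every probability and threshold invented for illustration: the predator never reads the butterfly’s private toxicity variable; it only keeps a running reputation for each visible pattern.

```python
import random

# Toy model of "brute association" (all parameters are illustrative
# assumptions): a predator never observes toxicity directly; it only
# tracks, per wing pattern, how often eating that pattern went badly.

random.seed(0)

TOXIC_PROB = {"black-red": 0.9, "blue-yellow": 0.05}  # private information
meals = {"black-red": 0, "blue-yellow": 0}            # predator's ledger
bad_meals = {"black-red": 0, "blue-yellow": 0}

def predator_eats(pattern):
    """Eat only if the pattern's observed track record looks safe."""
    if meals[pattern] == 0:
        return True  # no reputation yet: sample it
    return bad_meals[pattern] / meals[pattern] < 0.5

for _ in range(200):
    pattern = random.choice(list(TOXIC_PROB))
    if predator_eats(pattern):
        meals[pattern] += 1
        if random.random() < TOXIC_PROB[pattern]:  # got sick
            bad_meals[pattern] += 1

# On a typical run, "black-red" is now refused on sight while
# "blue-yellow" is still eaten: the public pattern has come to
# stand in for the private toxicity.
print(predator_eats("black-red"), predator_eats("blue-yellow"))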

*

Once this association between optics and essence, between appearance and reality, between signal and quality (the biological frame) or public and private information (the economic one), is formed, it can be freeridden. It becomes, in most cases, easier to pay “lip service”—to outwardly express the associated public characteristic—than it is to develop the private characteristic. This is not entirely the fault of the freerider; it is a difficult situation he finds himself in. Imagine he “chooses” (I’m anthropomorphizing evolution) to remain with his blue and yellow colors: even if his “product” is “good” (I’m mixing metaphors, but I mean to say, his advertising is honest), it will take some time for a trusted association between signal and quality, public and private, to form. As consumers, we may initially disbelieve an advertiser’s claims, and for good reason, since there is an incentive to deceive. And thus it is with the sun-basking lizard, deciding which butterfly to eat. Far easier for a precarious insect to ride coattails, to imitate and pretend toward what he is not—and so, quite simply, it does.

The connection with fashion should come into view now. The “barberpole” metaphor of fashion, where lower classes continually imitate higher classes, who are themselves engaged in a continual quest for “distinction” from the chasing masses, is a popular one in rationalist circles for good reason. Its cyclical nature is the result of limited options and a continual evasion of freeriders who exploit an associative proxy: clothing for caste.

*

A quick inventory of where we are: individuals profit from cooperation, but are always personally incentivized to defect. Reputation ledgers switch us from the one-off game, with its incentivized defection, into an iterated game, with its incentivized cooperation. As civilizations scale, and we wish to do more with what we have, achieve new complexities, we move to an alternate system: a credit system of heuristic and proxy. Thus an individual who wishes to enter the art world will work internships in which she forges relationships of trust, in the hope that she will be recommended. And the employer who takes the recommendation will do so on account of having built up trust with the recommender; this trust is built by history, and its credits are transferable. (Each exchange, of course, comes with a diminishment.) Across many recommendations and positions, across many cities, the accumulating recommendations become virtualized: not only can one fabricate a CV, but one can embellish it, and the latter behavior is so ubiquitous it is hard to call it “cheating,” even though this is what a dishonest signal is. And, at the same time, this intern will find herself competing in a much larger implicit landscape of associations, in which the clothes she wears, the way she speaks, and a hundred other variables come together to provide, by proxy, further evidence of value.

Imagine that a bat mother, instead of having her gullet pumped full of air by researchers, developed a technology to achieve the opposite: to appear as if she had not caught insects, when in reality she had. In other words, to appear as if she were cooperating when in fact she was defecting; to present public information at odds with private information. This bat’s offspring would be the most fit; she would pass on her genes at higher rates. This bat would have discovered the miracle of optics. But it is a dark and short-term miracle: the population as a whole would lose its fitness as its ability to cooperate diminished.

It is better to cooperate than defect. But it is better still to defect while others around you cooperate: to reap the advantages of coordinated effort while contributing none of the work (freeriding). This behavior is blocked insofar as it is noticed. Social systems are not two-player but N-player games, and resemble public goods games more than prisoner’s dilemmas; thus even in the presence of parasites, it can be optimal for other players to keep investing in the pool.[6] But freeriders remain a burden on the system, one that rational players will wish to eliminate.
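A worked example of that claim, with an arbitrary group size and pool multiplier of my choosing: contributions are pooled, multiplied, and split evenly, freeriders included.

```python
# Toy public goods game (group size, multiplier, and endowments are
# arbitrary illustrative choices). Contributions are pooled,
# multiplied, and the pool is split evenly among all players,
# including freeriders who contributed nothing.

def payoffs(contributions, multiplier=3.0, endowment=1.0):
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Four players; three invest their full endowment, one freerides.
print(payoffs([1, 1, 1, 0]))  # [2.25, 2.25, 2.25, 3.25]

# Nobody invests: everyone simply keeps their endowment.
print(payoffs([0, 0, 0, 0]))  # [1.0, 1.0, 1.0, 1.0]
```

The freerider does best of all (3.25), and each contributor would privately gain by withdrawing too, which is exactly the burden described above; but the contributors’ 2.25 still beats the 1.0 of universal defection, so investment survives the parasite.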

While an honest signal is beneficial to all parties involved—it adds true information to the informational ecosystem which actors can base actions on—a dishonest signal is definitionally exploitative. It causes another self-interested actor to behave against its interest, because its premises are malformed. It causes the sun-basking lizard to pass up on the butterfly, believing it to be protected, when in reality, it is only masquerading.

*

This is the tragedy of appearances. The cheater is punished if he is caught cheating; a society which punishes cheaters (or “parasites”) outperforms one which does not; and thus his optimal behavior will always be to cheat and pretend otherwise, to evade enforcers. He can do this by means of appearance, and the more that appearance is selected for, the more easily he can simply pretend, while embodying none of the internal, proxied-for qualities. Freeriding does not work when the supporting actor can recognize it; thus the trick, if one wishes to continue freeriding, is to prevent such recognition.

This is the superset of Goodhart-Campbell. The solution is the superset of costly signaling. The greater the divergence in the incentive structure between proxy and proxied, the greater the incentive to optimize for appearance. Thus we can understand politics, where everything “real” is hidden behind a great veil and public image is carefully manipulated. Thus we can understand Baudrillard’s simulacra, at least in its steelmanned form: the first level is honest signaling, a one-to-one relationship between public and private. Levels 2-4 are self-aware manipulations, “complex patterns of strategic interactions,”[7] and if you believe Baudrillard, we are long past naivete, past simple one-to-oneness. An unsophisticated relationship to maps is a departure point, not a finish.

The tragedy of appearances, and our incessant optimization thereof, is a problem society does not yet seem to have stable solutions to. Taleb might admonish us, in Skin in the Game, never to trust a surgeon who looks the part, never to employ a straight-A student—but while wise as manipulations of the current fashion field, these are inherently unstable and contingent solutions. As soon as we followed his advice we would see surgeons trending toward slovenliness, and students strategically earning B-grades in Bio for the sake of seeming interesting. Those familiar with Goodhart-Campbell know the pattern well, and the only answer is the same: diminish the gap between incentivized appearance and desired behavior. Easier said than done.

Or we might move away from proxy, heuristic, appearance; we might ditch the resume and the credential. But would we be moving forward or backwards? Would we become more or less encumbered, more or less handicapped? Currency can be counterfeited more easily than the ledger it replaced: a silver finish over a nickel core, a nice embossing. “If it looks the part…” But look at currency’s advantages.

I wrote in a recent comment to Zvi’s post on simulacra:

But the actually toxic butterflies—the original honest signalers—they can't go anywhere. They're just stuck. One might happen to evolve a new phenotype, but that phenotype isn't protected by reputational association, and it's going to take a very long time for the new signal-association to take hold in predators. Once other insects have learned how to replicate the proxy-association or symbol that protected them, they can only wait it out until it's no longer protective.

Thus there is an arms race toward manufacturing and recognizing what can only be called “bullshit,” following Harry Frankfurt. It is speech designed to improve one’s image. And as our world becomes more mediated by representation, it in turn becomes more exploitable. We enter the Simulacra.

[1] Axelrod & Hamilton 1981.

[2] The Wire season 2: Barksdale’s crew develops a bad reputation for their underwhelming H, renames it to ditch the old baggage and keep slinging shitty product.

[3] See “recognize and retaliate.”

[4] Hence the parasite, which is a freerider (or worse).

[5] David Lewis-Williams, The Mind in the Cave: Consciousness and the Origins of Art

[6] Thanks to romeostevensit for pointing me toward related literature.


