# LessWrong.com News

A community blog devoted to refining the art of rationality
Updated: 57 minutes 15 seconds ago

### Intermittent Distillations #4

July 9, 2021 - 01:14
Published on July 8, 2021 10:14 PM GMT

The Semiconductor Supply Chain: Assessing National Competitiveness (Saif M. Khan, Alexander Mann, Dahlia Peterson)

The Semiconductor Supply Chain: Assessing National Competitiveness

Summary

This report analyzes the current supply chain for semiconductors. It particularly focuses on which portions of the supply chain are controlled by the US and its allies versus China. Some key insights:

• The US semiconductor industry is estimated to contribute 39 percent of the total value of the global semiconductor supply chain.
• The semiconductor supply chain is incredibly complicated. The production of a single chip requires more than 1,000 steps, with products crossing international borders more than 70 times.
• AMD is currently the only company with expertise in designing both high-end GPUs and high-end CPUs.
• TSMC controls 54% of the logic foundry market, with a larger share for leading edge production, e.g., state-of-the-art 5 nm node chips.
• Revenue per wafer for TSMC is rapidly increasing, while other foundries are seeing declines.
• The Netherlands has a monopoly on extreme ultraviolet (EUV) scanners, equipment needed to make the most advanced chips.
• The Netherlands and Japan have a monopoly on argon fluoride (ArF) immersion scanners, needed to make the second most advanced chips.
• The US has a monopoly on full-spectrum electronic design automation (EDA) software needed to design semiconductors.
• Japan, Taiwan, Germany and South Korea manufacture the state-of-the-art 300 mm wafers used for 99.7 percent of the world's chip manufacturing. This manufacturing process requires large amounts of tacit know-how.
• China controls the largest share of manufacturing for most natural materials. The US and its allies have a sizable share in all materials except for low-grade gallium, tungsten and magnesium.
• China controls ~2/3rds of the world's silicon production, but the US and allies have reserves.

The report also analyzes US competitiveness at very detailed levels of the supply chain, which I didn't read that carefully. Tables:

Opinion

One perspective on the economy is that it's running a vast, distributed computation to allocate supply to demand in a relatively efficient manner. Examining the details of one of the supply chains underpinning a half-trillion-dollar industry is relatively awe-inspiring. The only thing I'm currently aware of that is as complicated as computer hardware is computer software, which is sort of cheating. As AI becomes more advanced, control of semiconductor production becomes a strategic resource. However, there are multiple monopolies/sort of monopolies at every point. Each of these monopolies has a relatively large amount of bargaining power under many reasonable models. This situation puts the world in an interesting place. One concrete thing that I didn't consider before reading this report is the relevance of design software to semiconductor manufacturing. In retrospect, it seems pretty clear that the design of complicated things, e.g., video games, buildings, semiconductors, and animations, requires complicated software with companies dedicated to building it. Lacking this software could constitute a meaningful bottleneck to being able to produce complicated artifacts. The asymmetry between manufacturing software and hardware is that software is easier to acquire through illegal means, whereas an EUV scanner has “100,000 parts, 3,000 cables, 40,000 bolts and 2 kilometers of hosing,” making it prohibitive to steal.

Intelligence Explosion Microeconomics (Eliezer Yudkowsky)

Intelligence Explosion Microeconomics

Summary

Takeaways I found interesting: Evolutionary history strongly suggests that there are linear or superlinear returns to increased cognitive investment. Various lines of evidence, e.g., brain size of humans compared to chimps, look linear with respect to time. The last common ancestor of humans and chimps lived only about 10 million years ago, and 400,000 generations isn't enough time for mutations with low fitness increases to reach fixation, so the timelines are inconsistent with rapidly diminishing cognitive returns. If intelligence is a product of software capabilities and hardware capabilities, a series of recursive software improvements (in terms of percentage gains) can converge to a finite amount or go to infinity depending on the amount of hardware present. In economics, the Jevons paradox is the observation that increasing the efficiency of coal use might increase the demand for coal. Similarly, algorithmic improvements in AI might increase how profitable AI is, resulting in increased spending on computers.
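
The convergence point can be made concrete with a toy model (my own sketch, not from the paper): suppose each round of self-improvement yields a fractional capability gain, and the size of the next gain is scaled by a constant `k` standing in for hardware capacity. Whether total capability converges or explodes depends on whether `k` keeps the gains shrinking or compounding:

```python
# Toy model (illustrative only): each round multiplies capability by
# (1 + gain), and the next round's gain is scaled by a constant k that
# stands in for available hardware. k < 1 makes gains shrink
# geometrically, so capability converges; k > 1 makes them compound.

def total_capability(k, rounds=200, initial_gain=0.1):
    capability = 1.0
    gain = initial_gain
    for _ in range(rounds):
        capability *= 1 + gain
        gain *= k  # next improvement scales with the hardware factor
    return capability

converged = total_capability(k=0.5)   # bounded, roughly e**0.2
exploded = total_capability(k=1.05)   # astronomically large
```

With `k=0.5` the gains form a geometric series and capability settles near a finite limit; with `k=1.05` each round's gain is larger than the last and capability blows up.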

Opinion

I knew most of the stuff in this paper already, but the presentation was clean and the arguments were well laid-out. I wish I had read it instead of the AI-FOOM debate, although the latter was likely more entertaining.

Artificial Intelligence and Economic Growth (Philippe Aghion, Benjamin F. Jones & Charles I. Jones)

Artificial Intelligence and Economic Growth

Summary

The authors speculate on how economic growth might be affected by artificial intelligence by considering two main themes. The first theme is modeling artificial intelligence as a continuation of the natural historical processes that have automated various parts of the economy for the last 200 years. This theme allows historical data to be used to inform future speculation. The second theme is to constrain growth by considering Baumol's "cost disease", the observation that sectors that experience productivity growth see their share of GDP declining, while sectors with slow growth see their share of GDP increasing. To quote the authors, "as a consequence, economic growth may be constrained not by what we do well but rather by what is essential and yet hard to improve." The authors model a situation in which the economy has two inputs, labor and capital, and work is divided into various tasks. Tasks can be automated, in which case they take a unit of capital to finish, or unautomated, in which case they take a unit of labor. In this model, as the fraction of tasks that are automated increases, the capital share, i.e., the amount of GDP going to people who are investing money as opposed to time, goes up. However, as the capital share rises, capital begins to accumulate, which results in an excess of automated goods (since they can be produced with only capital). This surplus results in the value of automated goods declining relative to nonautomated goods, producing "cost disease" type effects: as more sectors are automated, the share of automated capital increases, but since these sectors grow faster, their prices decline. Implicit in this model is a low elasticity of substitution between automated and nonautomated goods, i.e., as more goods become automated, it takes ever more automated goods to replace the remaining nonautomated ones. The substitution requirement rises fast enough that even with infinite automated goods, some amount of nonautomated goods still remains. This general process produces overall balanced growth, because nonautomated goods constrain the overall speed of the economy. Other models are then presented and analyzed, with the general trend being that the elasticity of substitution is an important parameter.
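
The bottleneck mechanism can be seen in a minimal numeric sketch (my own toy parameters, not the authors' calibration): with a CES production function whose elasticity of substitution is below one, output stays bounded no matter how much automated input is added.

```python
# CES aggregator with rho < 0, i.e. elasticity of substitution below 1.
# Toy parameters for illustration; not the paper's calibration.

def ces_output(x_auto, x_nonauto, rho=-1.0, share=0.5):
    return (share * x_auto**rho + (1 - share) * x_nonauto**rho) ** (1 / rho)

# Hold nonautomated input fixed at 1 and flood the economy with
# automated goods: output rises but approaches a finite ceiling
# (2.0 with these parameters), so the nonautomated sector
# constrains overall growth.
outputs = [ces_output(x, 1.0) for x in (1.0, 10.0, 1e3, 1e6)]
```

Raising the automated input by six orders of magnitude barely doubles output, which is the "cost disease" constraint in miniature.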

Opinion

I'm not really sure what to think of predicting AI growth with economic tools. I think they're useful because they allow you to answer questions like, "What happens if we model AI as a continuation of historical automation?" However, I am unsure if this is the correct question to be answering. I also enjoyed the discussion of how cost disease type effects can prevent extremely explosive growth even if one good cannot be automated. I'm pretty skeptical that there will exist such a good. I don't have data to back this up, but I have a vague sense that historically when people have claimed that X cannot be automated away for fundamental reasons, they've been mostly wrong. The most plausible story to me seems like regulation won't be fast enough to keep up with the pace of automation, and instead of getting, e.g., automated doctors, we get AI systems that do all the work for the doctor but the doctor has to be the one who gives the treatment. In the short term, these regulatory hurdles might be enough to produce balanced growth for a period before growth becomes explosive. As a counterpoint, historical trends suggest rapid adoption. Brookings:

Are Ideas Getting Harder to Find? (Nicholas Bloom, Charles I. Jones, John Van Reenen, and Michael Webb)

Are Ideas Getting Harder to Find?

Summary

The authors model technological progress as a product of the number of researchers and productivity per researcher. They typically define technological progress such that constant progress leads to exponential growth in the economy, with the logic being that ideas can spread throughout the entire economy without cost. For example, an idea that causes a percentage decrease in the size of a transistor will result in a percentage increase in the number of transistors per some unit area. In areas such as medicine, the authors measure technological progress as a linear change in life expectancy.

The authors measure the number of researchers by taking aggregate R&D spending and deflating it by the average researcher wage. In addition to controlling for the effects of researcher wage increases, assuming roughly that labs will optimally allocate capital between high/low skilled researchers and lab equipment, dividing by the average researcher wage allows for relatively fair comparisons between these different types of spending. They also consider other measures of research such as the number of clinical trials or the number of published papers in an area. In general, research efforts are increasing exponentially. In semiconductors, research effort has risen by a factor of 18 since 1971. Between 1969 and 2009, corn and soybean seed efficiency research both rose by more than a factor of 23.

The authors measure total technological progress across many domains by looking at transistors, crop yield per unit area, and life expectancy. The first two exhibit steady exponential growth (constant percentage), while life expectancy historically exhibits linear growth. Estimating the rate of technological progress and the effective number of researchers lets the authors calculate productivity. The results are quite clear: it's declining. For a representative example, transistors per unit area has been increasing at a basically constant 35% per year for the past 50 years, but research effort has risen by a factor of 18, which means it's currently 18x harder to maintain the same rate of growth as it was in 1971. This pattern appears to varying degrees in other fields. Table 7 summarizes:
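
The headline semiconductor number is just a ratio of the two quantities defined above, and is easy to recompute:

```python
# Research productivity = (rate of technological progress) /
# (effective research effort). Progress held steady at ~35%/year
# while effort rose ~18x, so per-researcher productivity fell by
# the same factor of 18.

progress_rate = 0.35   # transistor density growth, roughly constant
effort_1971 = 1.0
effort_now = 18.0      # effective researchers, relative to 1971

productivity_1971 = progress_rate / effort_1971
productivity_now = progress_rate / effort_now
decline_factor = productivity_1971 / productivity_now
```

Because progress is held constant, the productivity decline equals the effort increase exactly; any field where effort grows faster than progress shows the same pattern.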

Opinion

Yep, it sure seems like research productivity is declining. Why? I recently read Intelligence Explosion Microeconomics, so I'm tempted to say something like "Research is a product of serial causal depth of thinking; you find the easiest innovations first, and the later ones are going to be exponentially harder." I don't quite think this is true, and I'm not sure what it would imply if it were, but I suspect that low-hanging fruit being picked is probably the main reason. I've heard "duplicate work" mentioned as another possible reason, but I don't find it as compelling, because good ideas getting duplicated doesn't seem to happen often enough to explain such large decreases in productivity. I enjoyed reading about how the authors thought about measuring the various inputs and outputs of their equations, e.g., how they would measure "total research" and "technological progress." I wouldn't be surprised if they got something very wrong, but their results have been applied to many areas, so I would be surprised if the qualitative trends were wrong. I'm sort of surprised that growth theory is said to assume constant research productivity. The basic observation that R&D spending has increased while economic growth has remained roughly the same seems to imply the obvious conclusion that productivity is declining. The corresponding counterargument surveyed by the paper, that productivity within domains remains constant while aggregate productivity declines, also seems clearly false if you think about any particular industry. My suspicion is that the falseness of this assumption has been obvious for a while, but people thought it might be a reasonable approximation, or no one had bothered to get the data to falsify it until the authors did. There's some argument that since economics research (and other theoretical work) has no clear way to make money, it's not going to be as efficient as semiconductor research, so the fruit hangs lower, this paper being one such example. I suspect this observation is relatively sophomoric compared to analyses of "research economies" that have definitely been done by some people.

Statistical Basis for Predicting Technological Progress (Béla Nagy, J. Doyne Farmer, Quan M. Bui, Jessika E. Trancik)

Statistical Basis for Predicting Technological Progress

Summary

How will technological development happen in the future? The authors compile a dataset of production versus unit cost for 62 technologies and hindcast a variety of prediction rules to determine which functional form tends to relate production and unit cost. They find that Wright's law, which predicts unit cost is some constant power of cumulative production, is most predictive, with Moore's law slightly behind. Another way to interpret Wright's law is the prediction that unit cost will be exponentially decreasing and cumulative production will be exponentially increasing, with the ratio between the exponents being a constant. Moore's law, on the other hand, only predicts that unit cost will decrease exponentially, without positing a relation between production and cost. Wright's law is often interpreted to imply "learning by doing," with the thought being that most technological advances (decreases in unit costs) come from attempting to scale up manufacturing. The authors conclude by commenting that their dataset doesn't contain enough technologies that don't grow exponentially to strongly distinguish between Moore's law and Wright's law.
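
The relation between the two laws can be sketched directly (parameters below are made up for illustration): if cumulative production grows exponentially in time, Wright's law mechanically reproduces Moore's law, with the two exponents locked in a constant ratio.

```python
# The two functional forms side by side (illustrative parameters).
# Wright's law: cost = A * (cumulative production)^(-b).
# Moore's law:  cost falls by a fixed fraction per unit time.

import math

A, b = 100.0, 0.5   # Wright's-law constants (made up for illustration)
g = 0.3             # yearly growth rate of cumulative production

def wright_cost(cumulative_production):
    return A * cumulative_production ** (-b)

# Feed exponentially growing cumulative production into Wright's law:
costs = [wright_cost(math.exp(g * year)) for year in range(10)]

# The year-over-year cost ratio comes out constant, exp(-b * g),
# exactly the form Moore's law predicts.
ratios = [costs[y + 1] / costs[y] for y in range(9)]
```

This is why the two laws are hard to distinguish on technologies whose production grows exponentially, as the authors note.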

Opinion

From the perspective of thinking about broad strokes technological progress, Wright's law seems like a strict upgrade to Moore's law. However, Wright's law still leaves unanswered the question "Why is cumulative production increasing exponentially?" Also, given that production is increasing extremely rapidly for most technologies, total cumulative production is only a few years or decades of current production levels. For example, if production grows by 50% a year and we're at 150 units per year right now, the past few years will have been 100 units, 67 units, 45 units, etc., which is a convergent geometric series that will sum to 450, 3 years of current production (see that 3 = 1/(1 - (1/1.5))). Thus, even if current technological production stagnates, we should expect cumulative production to continue growing quickly. If Wright's law holds, this will correspond to a large decrease in unit cost.
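
The geometric series above is easy to check numerically, using the same made-up numbers (150 units this year, 50% yearly growth):

```python
# Cumulative production when output grows 50%/year and this year's
# production is 150 units: this year plus a geometric series of
# smaller past years (100, 67, 45, ...).

current = 150.0
growth = 1.5

# Terms current / growth**k for k = 0, 1, 2, ...; 60 terms is plenty
# for the series to converge.
cumulative = sum(current / growth**k for k in range(60))
years_of_current_production = cumulative / current  # converges to 3
```

The sum is `current * 1/(1 - 1/growth) = 450`, i.e. exactly 3 years of current production, matching the closed form in the text.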

Experience curves, large populations, and the long run (Carl Shulman)

Experience curves, large populations, and the long run

Summary

An experience curve predicts unit cost will fall a constant percent every time cumulative production doubles. Experience curves have been empirically good at predicting technological progress across a variety of domains. For information technologies, experience curves predict an approximate halving of unit cost for each doubling of cumulative production. There are enough resources on Earth to sustain many doublings of cumulative production. For example, solar panel production could be scaled up 1000x, which is 10 doublings. According to Nagy et al., this would result in a 10x decrease in unit costs. For information technologies, whose production has been rising exponentially, a thousand years at current production levels would result in a 100x increase in cumulative production, which implies a 100x decrease in unit costs. Furthermore, Earth only captures about a billionth of the Sun's output. [Fact check: the Earth is 150 billion meters from the Sun and has a radius of about 6.4 million meters. The area of a sphere 150 billion m from the Sun is about 3 × 10^23 m², while the cross-sectional area of the Earth is about 1.3 × 10^14 m², a ratio of about 2 × 10^9, which is on the order of a billion.] This theoretically gives another 9 orders of magnitude (OOM) for growth in cumulative production across many technologies. Space also exists and has many resources, giving many more OOMs. All this suggests that if humanity does eventually use these resources, Wright's law implies we will run into the physical limits of technological progress.
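
The bracketed fact check is easy to redo numerically, using the standard values (1 AU ≈ 1.5 × 10^11 m, Earth radius ≈ 6.4 × 10^6 m):

```python
# Fraction of the Sun's output intercepted by Earth: the planet's
# circular cross-section divided by the area of the full sphere at
# one Earth-Sun distance.

import math

AU = 1.5e11            # Earth-Sun distance, meters
EARTH_RADIUS = 6.4e6   # meters

sphere_area = 4 * math.pi * AU**2            # ~2.8e23 m^2
cross_section = math.pi * EARTH_RADIUS**2    # ~1.3e14 m^2
fraction = cross_section / sphere_area       # ~4.6e-10
```

The fraction comes out around half a billionth, so "about a billionth" is the right order of magnitude, and harvesting the rest of the Sun's output would indeed add roughly 9 OOMs.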

Opinion

It's often useful to answer questions like "if we just take the existing trends and extrapolate them, what does that predict?" For example, Roodman uses historical GWP data to predict that humanity reaches a singularity in 2047. The point of the exercise is not to find out when humanity will reach a singularity, but rather to determine how strong one's arguments have to be in either direction. For example, people who claim that AGI will cause a singularity around 2050 are making a prediction in line with historical super-exponential economic growth. Those who claim this won't happen are thus claiming that Roodman's extrapolation is overoptimistic. Similarly, using simple models relating technological progress to cumulative production predicts that we'll hit technological limits before we exhaust the amount of easily available resources. This is interesting because people who claim we won't then have to deny either the continued increase in cumulative production or the relation between cumulative production and unit cost. I'm tempted to say something like "burden of proof", but I don't think that's really how arguments should work. The point is that doing such analyses lets discussion drop more easily to object-level disputes about whether particular models are accurate, or about the values of particular parameters, which I think are often more productive.

Discuss

### Jackpot! An AI Vignette

July 8, 2021 - 23:32
Published on July 8, 2021 8:32 PM GMT

I started noticing the world had gotten weirder when my grandmother messaged the family that she wanted to do her 80th birthday party in Reno.

Don't get me wrong, Reno is great - biggest little city in the world! - but we're New Yorkers; why on earth fly to a desert? If you want to rough it, let's just go to Jersey.

When we asked, she replied that she had gone there with her friends and had a, well, she called it a spiritual experience, in the Goldstrike! casino. "It was like heaven, I forgot all my cares; I was one with the universe".

"... Can you find enlightenment in Atlantic City?"

Apparently not. The specific game she liked, Nirvana Slots, was only available in Nevada. The slot machine was paired with a lightweight BCI helmet and AI feedback software that optimized for a state of flow. It would modulate the experience of the game - when wins would occur, when losses would occur, the sounds and lights and haptic feedback - such that the gambler's mindset would stay as close as possible to the 'zone', that wonderful place where they don't exist. You get to stay in the zone until you run out of money.

I looked up the game on Wikipedia and saw that with the BCI helmet and cushioned, reclinable chair, the slot machine looked more like a fighter pilot ejection seat than a traditional one-armed bandit. The similarity was more than skin deep - the article noted that the technology was originally developed to augment the combat effectiveness of drone pilots, so they could stay focused on the mission for longer. It was effective, but the side effects were off-putting -- pilots ended up flying longer routes and bombing more targets just to stay in the machine longer.

I also learned that the reason we needed to fly to Reno is that Nirvana is only legal in Nevada. AI controlled reward protocols were banned in most states after the Farmville Legacy launch caused several hospitalizations from dehydration and starvation, but there was a carve out for non-internet based gambling. While most states had banned it, Nevada was still a place where freedom rang out, where you could go to an e-brothel with deepfaked celebrity VR and then go to a casino and play your favorite AI powered reel spinner.

Nirvana was wildly popular among the slotpullers and the casino owners. As long as the statistical frequency of wins and losses was maintained, the gaming board didn't care when the person won or lost, which meant wins were targeted right when a person's brain might need a boost or else they would've stopped playing, and the losses were always low and came at regular intervals such that they were just part of the background noise.

After all this research it's fair to say I was a little concerned, but it was her birthday, and she was going to go with or without us, so in the end we all flew out. When I walked into the resort I was surprised by the quiet - there were the normal sounds of money being won and lost, the branngg branng synthesized sounds of virtual coins being jangled and collected, but there was very little talking. Some of the old baccarat and craps tables still had people around them, calling out, but they were shunted to the sides; the main floor was hundreds of Nirvana machines, filled with players focused on the screens.

I needed to check in and put my bags down, but after all the fuss around this game I needed to see what it was all about. I sat down at an open game and put on the helmet, slid my credit card into the machine and watched the screen.

A crisp, beautiful gong sounded -- an array of visual stimuli bombarded me -- and far away, almost in the distance, I heard coins being deposited and the machine rattling. I had won.

----

Story inspired by Astral Codex Ten Book Review: Addiction by Design, and me losing track of time in a casino.

Discuss

### Whistle Synth Pi

July 8, 2021 - 22:00
Published on July 8, 2021 7:00 PM GMT

I made my whistle-controlled synthesizer into a stand-alone device:

Melody example:

Bass example:

Playing bass by singing falsetto:

It consists of:

• Raspberry Pi 3 computer ($35)
• Case ($7)
• USB sound card ($8)
• Keypad ($10)
• 3.5mm to 1/4" cable ($4)
• XLR to 3.5mm cable ($7)

It also needs a micro SD card, power (2A, USB), and a microphone. I'm using ones I already had, but it doesn't need anything fancy. If you wanted to make a special-purpose device, I think you could get the materials down below $20; there is nothing especially expensive here, and I'm not using most of the features of the Pi.

The code is open source: github.

Comment via: facebook

Discuss

### BASALT: A Benchmark for Learning from Human Feedback

July 8, 2021 - 20:40
Published on July 8, 2021 5:40 PM GMT

Copying the abstract of the paper:

The last decade has seen a significant increase of interest in deep learning research, with many public successes that have demonstrated its potential. As such, these systems are now being incorporated into commercial products. With this comes an additional challenge: how can we build AI systems that solve tasks where there is not a crisp, well-defined specification? While multiple solutions have been proposed, in this competition we focus on one in particular: learning from human feedback. Rather than training AI systems using a predefined reward function or using a labeled dataset with a predefined set of categories, we instead train the AI system using a learning signal derived from some form of human feedback, which can evolve over time as the understanding of the task changes, or as the capabilities of the AI system improve.

The MineRL BASALT competition aims to spur forward research on this important class of techniques. We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions. These tasks are defined by a paragraph of natural language: for example, "create a waterfall and take a scenic picture of it", with additional clarifying details. Participants must train a separate agent for each task, using any method they want. Agents are then evaluated by humans who have read the task description.

To help participants get started, we provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline that leverages these demonstrations. Our hope is that this competition will improve our ability to build AI systems that do what their designers intend them to do, even when the intent cannot be easily formalized. Besides allowing AI to solve more tasks, this can also enable more effective regulation of AI systems, as well as making progress on the value alignment problem.

I also mention this in the latest Alignment Newsletter, but I think this is probably one of the best ways to get started on AI alignment from the empirical ML perspective: it will (hopefully) give you a sense of what it is like to work with algorithms that learn from human feedback, in a more realistic setting than Atari / MuJoCo, while still not requiring a huge amount of background or industry-level compute budgets.

Section 1.1 of the paper goes into more detail about the pathways to impact. At a high level, the story is that better algorithms for learning from human feedback will improve our ability to build AI systems that do what their designers intend them to do. This is straightforwardly improving on intent alignment (though it is not solving it), which in turn allows us to better govern our AI systems by enabling regulations like "your AI systems must be trained to do X" without requiring a mathematical formalization of X.

Discuss

### The Efficient Market Hypothesis in Research

July 8, 2021 - 20:28
Published on July 8, 2021 5:00 PM GMT

A classic economics joke goes like this: Two economists are walking down a road, when one of them notices a $20 bill on the ground. He turns to his friend and exclaims: "Look, a $20 bill!" The other replies: "Nah, if there was a $20 bill on the ground, someone would've picked it up already."

The economists in the joke believe in the Efficient Market Hypothesis (EMH), which roughly says that financial markets are efficient and there's no way to "beat the market" by making intelligent trades.

If the EMH were true, then why is there still a trillion-dollar finance industry with active mutual funds and hedge funds? In reality, the EMH is not a universal law of economics (like the law of gravity), but more like an approximation. There may exist inefficiencies in markets where stock prices follow a predictable pattern and there is profit to be made (e.g., stock prices fall when it's cloudy in New York). However, as soon as someone notices the pattern and starts exploiting it (by making a trading algorithm based on weather data), the inefficiency disappears. The next person will find zero correlation between weather in New York and stock prices.

There is a close parallel in academic research. Here, the "market" is generally efficient: most problems that are solvable are already solved. There are still "inefficiencies": open problems that can reasonably be solved, and one "exploits" such a problem by solving it and publishing a paper. Once exploited, it is no longer available: nobody else can publish the same paper solving the same problem.

Where does this leave the EMH? In my view, the EMH is a useful approximation, but its accuracy depends on your skill and expertise. For non-experts, the EMH is pretty much universally true: it's unlikely that you've found an inefficiency that everyone else has missed. For experts, the EMH is less often true: when you're working in highly specialized areas that only a handful of people understand, you begin to notice more inefficiencies that are still unexploited.

A large inefficiency is like a $20 bill on the ground: it gets picked up very quickly. An example of this is when a new tool is invented that can straightforwardly be applied to a wide range of problems. When the BERT model was released in 2018, breaking the state-of-the-art on all the NLP benchmarks, there was instantly an explosion of activity as researchers raced to apply it to all the important NLP problems and be the first to publish. By mid-2019, all the straightforward applications of BERT were done, and the $20 bill was no more.

Above: Representation of the EMH in research. To outsiders, there are no inefficiencies; to experts, inefficiencies exist briefly before they are exploited. Loosely inspired by this diagram by Matt Might.

The EMH implies various heuristics that I use to guide my daily research. If I have a research idea that's relatively obvious, and the tools to attack it have existed for a while (say, >= 3 years), then probably one of the following is true:

1. Someone already published it 3 years ago.
2. Your idea doesn't work very well.
3. The result is not that useful or interesting.
4. One of your basic assumptions is wrong, so your idea doesn't even make sense.
5. Etc.

Conversely, a research idea is much more likely to be fruitful (i.e., a true inefficiency) if the tools to attack it have only existed for a few months, if it requires data and resources that nobody else has access to, or if it requires rare combinations of insights that conceivably nobody else has thought of.

Outside the realm of the known (the red area in my diagram), there are many questions that are unanswerable. These include the hard problems of consciousness and free will, P=NP, and the like, as well as more mundane problems where our current methods are not strong enough. For an outsider, these might seem like inefficiencies, but it would be wise to assume they're not. The EMH ensures that true inefficiencies are quickly picked up.

To give a more relatable example, take the apps Uber (launched in 2009) and Instagram (launched in 2010). Many of the apps on your phone probably launched around the same time. In order for Uber and Instagram to work, people needed to have smartphones that were connected to the internet, with GPS (for Uber) and decent quality cameras (for Instagram). Neither of these ideas would've been possible in 2005, but thanks to the EMH, as soon as smartphone adoption took off, we didn't have to wait very long to see all the viable use-cases for the new technology to emerge.

Originally posted on my blog: Lucky's Notes.

Discuss

### [AN #155]: A Minecraft benchmark for algorithms that learn without reward functions

July 8, 2021 - 20:20
Published on July 8, 2021 5:20 PM GMT

Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter.
Audio version here (may not be up yet).
Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer.

SECTIONS

HIGHLIGHTS
TECHNICAL AI ALIGNMENT
LEARNING HUMAN INTENT
AI GOVERNANCE
NEWS

HIGHLIGHTS

BASALT: A Benchmark for Learning from Human Feedback (Rohin Shah et al) (summarized by Rohin): A typical argument for AI risk, given in Human Compatible (AN #69), is that current AI systems treat their specifications as definite and certain, even though they are typically misspecified. This state of affairs can lead to the agent pursuing instrumental subgoals (AN #107). To solve this, we might instead build AI systems that continually learn the objective from human feedback. This post and paper (on which I am an author) present the MineRL BASALT competition, which aims to promote research on algorithms that learn from human feedback.

BASALT aims to provide a benchmark with tasks that are realistic in the sense that (a) it is challenging to write a reward function for them and (b) there are many other potential goals that the AI system “could have” pursued in the environment. Criterion (a) implies that we can’t have automated evaluation of agents (otherwise that could be turned into a reward function) and so suggests that we use human evaluation of agents as our ground truth. Criterion (b) suggests choosing a very “open world” environment; the authors chose Minecraft for this purpose. They provide task descriptions such as “create a waterfall and take a scenic picture of it”; it is then up to researchers to create agents that solve this task using any method they want. Human evaluators then compare two agents against each other and determine which is better. Agents are then given a score using the TrueSkill system.
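The scoring step can be sketched in code. TrueSkill itself models each agent's skill as a Gaussian with an uncertainty term; as a rough stand-in, here is a minimal Elo-style update from pairwise human judgements (a hypothetical sketch, not the competition's actual code):

```python
# Toy Elo-style rating from pairwise human comparisons, standing in for
# TrueSkill. Hypothetical sketch: the real competition uses the TrueSkill
# system, which tracks a mean and an uncertainty per agent.

def expected_score(r_a, r_b):
    """Probability that agent A beats agent B under the logistic Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(ratings, winner, loser, k=32.0):
    """Shift both ratings toward the observed comparison outcome."""
    e_w = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1.0 - e_w)
    ratings[loser] -= k * (1.0 - e_w)

# Three agents start at the same rating; human judges compare pairs.
ratings = {"agent_a": 1000.0, "agent_b": 1000.0, "agent_c": 1000.0}
comparisons = [("agent_a", "agent_b"), ("agent_a", "agent_c"), ("agent_b", "agent_c")]
for winner, loser in comparisons:
    update(ratings, winner, loser)

print(sorted(ratings, key=ratings.get, reverse=True))  # agent_a ranked first
```

After a few comparisons the ranking reflects the judges' preferences, without any reward function ever being written down.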

The authors provide a number of reasons to prefer the BASALT benchmark over more traditional benchmarks like Atari or MuJoCo:

1. In Atari or MuJoCo, there are often only a few reasonable goals: for example, in Pong, you either hit the ball back, or you die. If you’re testing algorithms that are meant to learn what the goal is, you want an environment where there could be many possible goals, as is the case in Minecraft.

2. There are lots of Minecraft videos on YouTube, so you could test a “GPT-3 for Minecraft” approach.

3. The “true reward function” in Atari or MuJoCo is often not a great evaluation: for example, a Hopper policy trained to stand still using a constant reward gets 1000 reward! Human evaluations should not be subject to the same problem.

4. Since the tasks were chosen to be inherently fuzzy and challenging to formalize, researchers are allowed to take whatever approach they want to solving the task, including “try to write down a reward function”. In contrast, for something like Atari or MuJoCo, you need to ban such strategies. The only restriction is that researchers cannot extract additional state information from the Minecraft simulator.

5. Just as we’ve overestimated few-shot learning capabilities (AN #152) by tuning prompts on large datasets of examples, we might also be overestimating the performance of algorithms that learn from human feedback because we tune hyperparameters on the true reward function. Since BASALT doesn’t have a true reward function, this is much harder to do.

6. Since Minecraft is so popular, it is easy to hire Minecraft experts, allowing us to design algorithms that rely on expert time instead of just end user time.

7. Unlike Atari or MuJoCo, BASALT has a clear path to scaling up: the tasks can be made more and more challenging. In the long run, we could aim to deploy agents on public multiplayer Minecraft servers that follow instructions or assist with whatever large-scale project players are working on, all while adhering to the norms and customs of that server.


Rohin's opinion: You won’t be surprised to hear that I’m excited about this benchmark, given that I worked on it. While we listed a bunch of concrete advantages in the post above, I think many (though not all) of the advantages come from the fact that we are trying to mimic the situation we face in the real world as closely as possible, so there’s less opportunity for Goodhart’s Law to strike. For example, later in this newsletter we’ll see that synthetically generated demos are not a good proxy for human demos. Even though this is the norm for existing benchmarks, and we didn’t intentionally try to avoid this problem, BASALT (mostly) avoids it. With BASALT you would have to go pretty far out of your way to get synthetically generated demos, because by design the tasks are hard to complete synthetically, and so you have to use human demos.

I’d encourage readers to participate in the competition, because I think it’s especially good as a way to get started with ML research. It’s a new benchmark, so there’s a lot of low-hanging fruit in applying existing ideas to the benchmark, and in identifying new problems not present in previous benchmarks and designing solutions to them. It’s also pretty easy to get started: the BC baseline is fairly straightforward and takes a couple of hours to train on a single GPU. (That’s partly because BC doesn’t require environment samples; something like GAIL (AN #17) would probably take a day or two to train instead.)
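To illustrate why the BC baseline is cheap, here is a minimal behavioral-cloning loop on toy data; the real baseline trains a neural network on MineRL demonstrations, so the scalar logistic policy below is purely illustrative:

```python
import math
import random

# Minimal behavioral cloning: fit a policy to (state, action) pairs from
# demonstrations by plain supervised learning. Toy stand-in for the BASALT
# BC baseline, which trains a neural network on human Minecraft demos.
random.seed(0)

# Fake demos: scalar state, binary action = "move right iff state > 0".
states = [random.gauss(0.0, 1.0) for _ in range(500)]
actions = [1.0 if s > 0 else 0.0 for s in states]

# Logistic policy: P(action=1 | s) = sigmoid(w*s + b), fit by gradient descent
# on the cross-entropy loss (no environment samples needed).
w, b = 0.0, 0.0
lr = 0.5
for _ in range(200):
    gw = gb = 0.0
    for s, a in zip(states, actions):
        p = 1.0 / (1.0 + math.exp(-(w * s + b)))
        gw += (p - a) * s
        gb += (p - a)
    w -= lr * gw / len(states)
    b -= lr * gb / len(states)

# The cloned policy should reproduce the demonstrated behavior.
correct = sum(
    ((1.0 / (1.0 + math.exp(-(w * s + b)))) > 0.5) == (a == 1.0)
    for s, a in zip(states, actions)
)
accuracy = correct / len(states)
```

Because training is just supervised learning over a fixed demo dataset, the cost scales with dataset size, not with expensive environment interaction.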

TECHNICAL AI ALIGNMENT

LEARNING HUMAN INTENT

What Matters for Adversarial Imitation Learning? (Manu Orsini, Anton Raichuk, Léonard Hussenot et al) (summarized by Rohin): This paper takes adversarial imitation learning algorithms (think GAIL (AN #17) and AIRL (AN #17)) and tests the effect of various hyperparameters, including the loss function, the discriminator regularization scheme, the discriminator learning rate, etc. They first run a large, shallow hyperparameter sweep to identify reasonable ranges of values for the various hyperparameters, and then run a larger hyperparameter sweep within these ranges to get a lot of data that they can then analyze. All the experiments are done on two continuous control benchmarks: the MuJoCo environments in OpenAI Gym and manipulation environments from Adroit.

Obviously they have a lot of findings, and if you spend time working with adversarial imitation learning algorithms, I’d recommend reading through the full paper, but the ones they highlight are:

1. Even though some papers have proposed regularization techniques that are specific to imitation learning, standard supervised learning techniques like dropout work just as well.

2. There are significant differences in the results when using synthetic demonstrations vs. human demonstrations. (A synthetic demonstration is one provided by an RL agent trained on the true reward.) For example, the optimal choice of loss function is different for synthetic demos vs. human demos. Qualitatively, human demonstrations are not Markovian and are often multimodal (especially when the human waits and thinks for some time: in this case one mode is “noop” and the other mode is the desired action).
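The multimodality point matters for algorithm design: a policy trained to predict the average demonstrated action lands between the modes. A toy illustration with hypothetical numbers (not from the paper):

```python
# In one state, half the human demos take action -1.0 (say, the "noop" mode
# while the human thinks) and half take +1.0 (the task mode). A policy
# trained with mean-squared error learns the average action, which matches
# *neither* mode of the demonstration distribution.
demo_actions = [-1.0] * 50 + [1.0] * 50

mean_action = sum(demo_actions) / len(demo_actions)  # MSE-optimal prediction
print(mean_action)  # 0.0 -- an action no demonstrator ever took
```

Synthetic RL demonstrations, being near-deterministic, rarely exhibit this problem, which is one reason results on them can fail to transfer to human data.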


Rohin's opinion: I really like this sort of empirical analysis: it seems incredibly useful for understanding what does and doesn’t work.

Note that I haven’t looked deeply into their results and analysis, and am instead reporting what they said on faith. (With most papers I at least look through the experiments to see if the graphs tell a different story or if there were some unusual choices not mentioned in the introduction, but that was a bit of a daunting task for this paper, given how many experiments and graphs it had.)


Prompting: Better Ways of Using Language Models for NLP Tasks (Tianyu Gao) (summarized by Rohin): Since the publication of GPT-3 (AN #102), many papers have been written about how to select the best prompt for large language models to have them solve particular tasks of interest. This post gives an overview of this literature. The papers can be roughly divided into two approaches: first, we have discrete prompts, where you search for a sequence of words that forms an effective prompt; these are “discrete” since words are discrete. Second, we have soft prompts, where you search within the space of embeddings of words for an embedding that forms an effective prompt; since embeddings are vectors of real numbers they are continuous (or “soft”) and can be optimized through gradient descent (unlike discrete prompts).
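The distinction can be made concrete with a toy sketch of soft prompt tuning, in which a frozen "model" is just a fixed weight vector and only the prompt embedding is optimized; this is an illustrative assumption, not how a real transformer-based soft prompt is trained:

```python
import random

# Toy "soft prompt" tuning: the frozen model scores a prompt embedding by
# its dot product with fixed weights; we tune only the prompt embedding by
# gradient ascent on that score. Purely illustrative -- real soft prompts
# are learned vectors prepended to a transformer's input embeddings.
random.seed(1)
frozen_w = [0.5, -1.0, 2.0]                              # frozen model parameters
prompt = [random.uniform(-0.1, 0.1) for _ in range(3)]   # learnable embedding

def score(p):
    """Frozen model's score of a prompt embedding (higher is better)."""
    return sum(wi * pi for wi, pi in zip(frozen_w, p))

lr = 0.1
for _ in range(50):
    # d(score)/d(prompt_i) = frozen_w[i]; take a gradient ascent step.
    prompt = [pi + lr * wi for pi, wi in zip(prompt, frozen_w)]
```

Because the prompt lives in continuous embedding space, gradient descent applies directly; a discrete prompt would instead require searching over sequences of actual words.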

Interactive Explanations: Diagnosis and Repair of Reinforcement Learning Based Agent Behaviors (Christian Arzate Cruz et al) (summarized by Rohin): Many papers propose new algorithms that can better leverage human feedback to learn a good policy. This paper instead demonstrates an improved user interface so that the human provides better feedback, resulting in a better policy, on the game Super Mario Bros. Specifically:

1. The user can see the behavior of the agent and rewind / pause to find a place where the agent took a poor action.

2. The system generates an explanation in terms of the underlying state variables that explains why the agent chose the action it chose, relative to the second best action. It can also explain why it didn’t take a particular action.

3. The user can tell the agent that it should have taken some other action, and the agent will be trained on that instruction.

The authors conduct a user study and demonstrate that users find it intuitive to correct “bugs” in a policy using this interface.


Rohin's opinion: This seems like a great line of research to me. While step 2 isn’t really scalable, since it requires access to the underlying simulator state, steps 1 and 3 seem doable even at scale (e.g. I can imagine how they would be done in Minecraft from pixels), and it seems like this should significantly improve the learned policies.

AI GOVERNANCE

Debunking the AI Arms Race Theory (Paul Scharre) (summarized by Sudhanshu): This article, published recently in the Texas National Security Review, argues that various national trends of military spending on AI do not meet the traditional definition of an 'arms race'. However, the current situation can be termed a security dilemma, a "more generalized competitive dynamic between states." The article identifies two ways in which race-style dynamics in AI competition towards the aims of national security might create new risks: (i) a need for increasingly rapid decision-making might leave humans with diminished control or 'out of the loop'; and (ii) the pressure to quickly improve military AI capabilities could result in sacrificing supplementary goals like robustness and reliability, leading to unsafe systems being deployed.

The article offers the following strategies to counter such dynamics. Competing nations should institute strong internal processes to ensure their systems are robust and secure, and that human control can be maintained. Further, nations should encourage other countries to take similar steps to mitigate these risks within their own militaries. Finally, nations should cooperate in regulating the conduct of war to avoid mutual harm. It concludes by citing several sources that advocate for the US to adopt these strategies.


Sudhanshu's opinion: I think the headline was chosen by the editor and not the author: the AI arms race 'debunking' is less than a fourth of the whole article, and it's not even an important beat of the piece; instead, the article is about how use of technology/AI/deep learning for military applications in multipolar geopolitics can actually result in arms-race-style dynamics and tangible risks.

Even so, I'm not convinced that the traditional definition of 'arms race' isn't met. The author invokes percentage growth in military spending of more than 10% over the previous year as a qualifying criterion for an arms race, but then compares this with the actual spending of 0.7% of the US military budget on AI in 2020 to make their case that there is no arms race. These two are not comparable; at the very least, we would need to know the actual spending on AI by the military across two years to see at what rate this spending changed, and whether or not it then qualifies to be an arms race.

NEWS

Hypermind forecasting contest on AI (summarized by Rohin): Hypermind is running a forecasting contest on the evolution of artificial intelligence with a $30,000 prize over four years. The questions ask both about the growth of compute and about performance on specific benchmarks such as the MATH suite (AN #144).

FEEDBACK

I'm always happy to hear feedback; you can send it to me, Rohin Shah, by replying to this email.

PODCAST

An audio podcast version of the Alignment Newsletter is available, recorded by Robert Miles.

Discuss

### Looking for Collaborators for an FAGI Research Project

July 8, 2021 - 20:01
Published on July 8, 2021 5:01 PM GMT

Hello folks! I think this is an appropriate post for this forum, but if I'm mistaken please just say the word. I'm working on an interesting research direction for AGI, and am looking for collaborators. This is primarily capabilities research at this point, and so some folks here may, validly, argue that it's negative EV. I think it's positive EV and can explain why, but my confidence is far from 100% on that. I'll address this in a section at the end.

Firstly, about the research: I believe the field of Artificial Life has great potential for creating AGI and is massively under-explored as a path to AGI compared with more conventional machine learning approaches. Why? Consider the following two data points that both point towards the same conclusion.

1. Machine learning is moving towards simpler, more general techniques that also require much more compute. Complex, hand-designed features and algorithms (think GOFAI, Gabor filters, Canny edge detection, etc.) were supplanted by simpler and more general techniques such as deep neural networks trained end-to-end [1].
Now, techniques such as Differentiable Architecture Search are learning architectures, while techniques such as Model-Agnostic Meta-Learning are learning optimizers. This trend spanning decades is leading towards models that are more and more general, baking in fewer and fewer assumptions. Question for the reader: what is the ultimate limit of this trend?

2. We have exactly one existence proof of AGI, and it was generated by an extremely simple program. While we don't yet know the ultimate laws of physics, progress suggests that they are likely concise. Billions of different molecules turned out to be made of just 118 elements, which were in turn found to be made of just 3 sub-atomic particles. The most fundamental laws discovered so far, such as those from Einstein, Heisenberg, and Maxwell, can be written out in about 100 characters. The complex behavior we observe in our universe almost certainly arises from extremely simple rules at the bottom of physics.

These observations from progress in AI, as well as our one example of general intelligence evolving from physics, are two lines pointing towards a single conclusion: that there may exist one or more quite simple programs that, when run, create an AGI. Moreover, just as many hand-designed features became unnecessary when more general techniques came along, an extremely simple program might be the easiest way to create an AGI. Note that the program we call physics successfully generated an AGI without having any notion of learning, optimization, or even what an agent is.

To conclude, I believe there are physics-like programs that support self-replicators, and when run long enough will evolve an AGI. Of course, for these to be useful we'd need one that takes less than the ~14Bn years that physics did. I have specific ideas of what these programs might look like and am currently working with a few top folks in the ALife world to push forward this research.
To be clear, I still think machine learning-like approaches might be more likely than ALife approaches to achieve AGI, but relative to the amount of investment in ALife approaches, I think ALife is massively undervalued.

What would an ALife approach of this kind look like? Imagine a program that specifies a few simple particles and how they interact. You start with a universe of 10^11 particles (feasible on a single desktop computer) including a single self-replicator (built out of particles, of course). This self-replicator replicates, but soon random mutations produce a better self-replicator that out-competes the first one. And so on. If a system like this worked, one could potentially throw more and more memory and compute at it to get more and more complex organisms.

Is this capabilities research positive EV? I think yes, although I'm far from sure. If something along these lines is the fastest path to AGI, I think it needs to be in the right hands. My goal would be, some months or years from now, to get research results that make it clear we're on the right track to building AGI. I'd then go to folks I trust, such as Eliezer Yudkowsky/MIRI/OpenAI, and basically say, "I think we're on track to build an AGI; can we do this together and make sure it's safe?" This comes with the understanding that we may need to completely pause further capabilities research at some point if our safety team does not give us the OK to proceed. But if ALife approaches are in fact the easiest path to AGI, many existing safety efforts may be barking up the wrong tree.

About me: I studied Computer Science at Stanford and currently run a 40-person crypto company. I've thought pretty hard about the question of how we build an AGI for several years now, which is a lot less than folks like Eliezer or Paul Christiano. I'd be honored to hear their and other folks' thoughts on this research direction.
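A drastically scaled-down sketch of the kind of system described above (a hypothetical toy, not the author's actual research): replicators in a resource-capped world copy themselves with occasional mutation, and faster replicators crowd out the original.

```python
import random

# Toy evolutionary "universe": each organism is just a replication-rate
# gene. Each step, organisms reproduce in proportion to their rate, mutate
# occasionally, and the population is culled back to a fixed size
# (competition for "space"). Illustrative only -- a real ALife system
# would evolve structure, not a single number.
random.seed(0)
POP_CAP = 200
population = [0.1]  # a single weak self-replicator seeds the universe

for step in range(200):
    offspring = []
    for rate in population:
        if random.random() < rate:          # replicate with prob = rate
            child = rate
            if random.random() < 0.05:      # rare mutation
                child = min(1.0, rate + random.uniform(-0.02, 0.05))
            offspring.append(child)
    population.extend(offspring)
    # Cull back to the cap, keeping the fastest replicators.
    population.sort(reverse=True)
    population = population[:POP_CAP]

# After many generations, selection has pushed the mean rate above the
# seed value of 0.1.
mean_rate = sum(population) / len(population)
```

Even this trivial version shows the qualitative claim: with replication, variation, and competition, better replicators emerge without any explicit notion of learning or optimization.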
If this is something you'd be interested in talking about, feel free to comment or send me an email at rafaelcosman@alumni.stanford.edu.

Thanks, Raf

[1] Clune, Jeff. "AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence." arXiv preprint arXiv:1905.10985 (2019).

Discuss

### Practical anthropics summary

July 8, 2021 - 18:10
Published on July 8, 2021 3:10 PM GMT

This is a simple summary of how to "do anthropics":

• If there are no issues of exact copies, or advanced decision theory, and the questions you're asking aren't weird, then use SIA. And by "use SIA", I mean "ignore the definition of SIA, and just do a conventional Bayesian update on your own existence". Infinite universes won't be a problem (any more than they are with conventional probabilities). And this might not increase your expected population by much.

First of all, there was the realisation that different theories of anthropic probability correspond to correct answers to different questions - questions that are equivalent in non-anthropic situations. We can also directly answer "what actions should we do?", without talking about probability. This anthropic decision theory gave behaviours that seem to correspond to SIA (for total utilitarianism) or SSA (for average utilitarianism).

My personal judgement, however, is that the SIA-questions are more natural than the SSA-questions (ratios of totals rather than averages of ratios), including in the decision-theory situation (total utilitarianism rather than average utilitarianism). Thus, in typical situations, using SIA is generally the way to go.

And if we ignore exact duplicates, Boltzmann brains, and simulation arguments, SIA is simply standard Bayesian updating on our existence. Anthropic probabilities can be computed exactly the same way as non-anthropic probabilities can. And there are fewer problems than you might suspect.
This doesn't lead to problems with infinite universes - at least, no more than standard probability theories do. And anthropic updates tend to increase the probability of larger populations in the universe, but that effect can be surprisingly small - 7 to 32 given the data we have.
Finally, note that anthropic effects are generally much weaker than Fermi observation effects. The fact that we don't see life on so many planets tells us a lot more than the fact that we see life on this one.

### Anthropics and Fermi: grabby, visible, zoo-keeping, and early aliens

July 8, 2021 - 18:07
Published on July 8, 2021 3:07 PM GMT

When updating the probability of life across the universe, there are two main observations to build on:

• Anthropic update: we exist on Earth.
• Fermi observation: (we exist on Earth and) we don't see any aliens.

I'll analyse how these two observations affect various theories about life in the universe. In general, we'll see that the anthropic update has a fairly weak effect, while the Fermi observation has a strong one: the theories that benefit most are those that avoid the Fermi downgrade, such as the zoo hypothesis or the "human life is unusually early" hypothesis.

Grabby and visible aliens

I've argued that an anthropic update on our own existence is actually just a simple Bayesian update; here I'll explain what that means for our updates. This paper talks about grabby aliens, who would expand across the universe and stop humans from evolving (if they reached Earth before now).

As I've argued, "we exist" and "we have not observed X" are statements that can be treated in exactly the same way. We can combine them to say "there are no visible aliens anywhere near", without distinguishing grabby aliens (who would have stopped our existence) from visible-but-not-grabby aliens (who would have changed our observations). Thus the Fermi observation says there are no grabby or visible aliens nearby[1]. Recall that it is comparatively easy to cross between stars and galaxies, so advanced aliens would only fail to be grabby if they coordinated not to expand.

Rare Earth hypotheses

Some theories posit that life requires a collection of conditions that are very rarely found together.
But rare Earth theories don't differ much, upon updates, from "life is hard" hypotheses. For example, suppose T0 says that life can exist on any planet with rate ρ, while the rare Earth hypothesis, T1, says that life can exist on Earth-like planets with rate p, while Earth-like planets themselves exist with rate r. But neither the Fermi observation nor the anthropic update will distinguish between these theories.
The Fermi observation posits that there are no visible aliens close to Earth; the anthropic update increases the probability of life similar to us. We can see T1 as T0 with a different prior on ρ = pr (induced from the priors on p and r), but both these updates affect ρ. Now, T0 and T1 can be distinguished by observation (seeing dead planets with Earth-like features) or by theory (figuring out what is needed for life): anything that differentially changes p and r. But neither the anthropic update nor the Fermi observation does this.

Independent aliens

Suppose that theory T2 posits that there are aliens in, say, gas giants, whose existence is independent of ours. Visible gas-giant alien civilizations exist at a rate ρg, while visible life on rocky planets exists at a rate ρ. Then the anthropic update boosts ρ only, while the Fermi observation penalises ρ and ρg equally (if we make the simplifying assumption that gas giants are as common as terrestrial planets). This gives a differential boost to ρ over ρg, but the effect can be mild. If we assume that there are N gas giants and N terrestrial planets in the Milky Way, and start with a uniform prior over both ρ and ρg, then after updating we get: ρ = 2/(N+2), ρg = 1/(N+1).

If the rate of gas-giant alien civilizations is semi-dependent on our own existence - maybe we both need it to be easy for RNA to exist - then there will be less of a difference between the updates for ρ and ρg. So there is some differential effect due to anthropics, but not a strong one, at least for uniform priors, and not one that grows.

In the cosmic zoo

Let T3 be a cosmic zoo hypothesis. It posits that there may be a lot of aliens, but they have agreed - or been coerced - into hiding themselves, so as not to contaminate human development (or for some other reason). Then T3 gets a boost from the anthropic update, and no penalty from the Fermi observation.
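The ρ and ρg values quoted in the independent-aliens example above can be checked with Laplace's rule of succession. A minimal sketch, assuming a uniform Beta(1,1) prior, with Earth as the single success among N terrestrial planets and no successes among the gas giants (this observation-counting is my reading of the example, not spelled out in the post):

```python
from fractions import Fraction

def posterior_mean(successes, failures):
    """Posterior mean of a rate under a uniform Beta(1,1) prior,
    after observing the given counts (Laplace's rule of succession)."""
    return Fraction(successes + 1, successes + failures + 2)

N = 10  # planets of each kind surveyed (illustrative)
rho = posterior_mean(1, N - 1)    # terrestrial: Earth is the one success
rho_g = posterior_mean(0, N - 1)  # gas giants: no successes observed
assert rho == Fraction(2, N + 2) and rho_g == Fraction(1, N + 1)
```

The differential boost is roughly a factor of two in favour of ρ, independent of N, which is why the effect is mild and doesn't grow.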
Since most theories get a big downgrade from the Fermi observation, this can raise its probability quite a lot relative to other theories. A few caveats, however:

1. In the zoo hypothesis, aliens are hiding themselves from us. This is close to a "Descartes's demon" hypothesis, in that powerful entities are acting to feed us erroneous observations. Pure Descartes's-demon hypotheses are not differentially boosted by anything, since they explain nothing (once you've posited a demon, you also have to explain why we see what we think we see). The zoo hypothesis is not quite as bad - "keep everything hidden" is more likely than other ways aliens could be messing with our observations. Still, it should get a low prior.
2. Though the Fermi observation doesn't downgrade the zoo hypothesis directly, the more carefully we observe the universe, the more unlikely it becomes, since the aliens would have to work harder to conceal any evidence.
3. Conversely, the more visible we become, the less likely the zoo hypothesis becomes, because we have to explain why the zookeepers haven't intervened to keep us concealed (if we suppose that these aliens are powerful enough to intercept light and other signals between the stars, then we're very close to Descartes-demon territory). Once we successfully launched replicating AIs to the stars, we'd be pretty sure the zoo hypothesis was wrong.

Time enough for aliens

So far, we've neglected time in the equation, talking about a rate ρ that was per planet, but not stretched over time. But consider theory T4: advanced life starts appearing around 13.77 billion years after the Big Bang, but not before. This theory might be unlikely, but it gets a mild boost from anthropics (since it's compatible with our existence) and avoids the downgrade from the Fermi observation (since it says there are no visible aliens - yet).
Since that downgrade has been quite powerful for most theories, T4 can get boosted relative to them - and the more dead planets we observe or infer, the stronger the relative boost. Now, T4 may seem unlikely, since the Earth is a late planet among the Earth-like planets: "Thus, the average earth in the Universe is 1.8 ± 0.9 billion years older than our Earth". But there are some theories that make T4 more plausible, such as some versions of panspermia. Specifically, we can imagine that life had to go through several stages, on several planets - maybe RNA/DNA was the result of billions of years of evolution on a planet much older than the Earth, and was then spread here, where it allowed another stage of evolution. Conversely, theories T5 that posit that advanced life started much earlier than the present day pay a much higher price via the Fermi observation.

1. The grabby alien paper uses "loud" to designate aliens that "expand fast, last long, and make visible changes to their volumes". Visible aliens are more general; in particular, they need not expand (though this may make them less visible). ↩︎

### Covid 7/8: Delta Takes Over

July 8, 2021 - 16:00
Published on July 8, 2021 1:00 PM GMT

Delta is now two thirds of sequenced samples from the past week, so we can be confident that it has taken over, will soon be most cases in America, and soon after that most cases around the world. That's bad news, but given that we already know the case numbers, it is in a sense good news. It means we've already 'taken the hit' to the reproduction rate, mostly, and if we can stabilize things one final time, then unless there's a new, even worse variant, the last scare is over. Things are not yet stable, but they continue to echo what happened this time last year, and things continue to be quite hot.
If things cool down a bit (in temperature terms) and these trends continue for several weeks, it will be clearer where we stand going forward, and we'll be counting on further vaccinations and on past vaccinations kicking in. We should have enough momentum there to still get us over the finish line, but in many places it's going to be close. Let's run the numbers.

The Numbers

Predictions

I forgot that July 4th was coming, so my deaths prediction was dumb. Whoops.

Prediction from last week: Positivity rate of 2.7% (up 0.3%) and deaths decline by 5%.
Result: Positivity rate of 2.9% (up 0.5%) and deaths decline by 20%, likely due to July 4th.
Prediction for next week: Positivity rate of 3.3% (up 0.4%) and deaths increase by 7%.

Predicting deaths this week is weird, and I'm predicting an increase because I think this week's count is artificially low due to the holiday. So there will be some catch-up reporting and some reversion, even if the new rise in cases won't have caught up to us yet. For the positivity rate, it jumped a lot on the last day, but I have to assume it will keep rising for now. As it stops being constrained, the positivity rate will once again become a useful measure of our situation.

Deaths

Date | WEST | MIDWEST | SOUTH | NORTHEAST | TOTAL
--- | --- | --- | --- | --- | ---
May 27-June 2 | 527 | 838 | 1170 | 456 | 2991
June 3-June 9 | 720 | 817 | 915 | 431 | 2883
Jun 10-Jun 16 | 368 | 611 | 961 | 314 | 2254
Jun 17-Jun 23 | 529 | 443 | 831 | 263 | 2066
Jun 24-Jun 30 | 550 | 459 | 706 | 186 | 1901
Jul 1-Jul 7 | 459 | 329 | 612 | 128 | 1528

I'd love for this to be real, but it's probably not. It's July 4th, and I keep forgetting to look ahead to the holidays in the next week before I make predictions. There's no way this full decline is real, and it's hard to know where the real number was. My brain continues to not understand how it being the 4th of July causes three days of missing data across the country. That's not a mindset I can process, even now, not fully.
It definitely did happen, whether I can process it or not, and it's very unclear how much of the difference was made up after the weekend was over.

Cases

Date | WEST | MIDWEST | SOUTH | NORTHEAST | TOTAL
--- | --- | --- | --- | --- | ---
May 20-May 26 | 33,890 | 34,694 | 48,973 | 24,849 | 142,406
May 27-June 2 | 31,172 | 20,044 | 33,293 | 14,660 | 99,169
Jun 3-Jun 9 | 25,987 | 18,267 | 32,545 | 11,540 | 88,339
Jun 10-Jun 16 | 23,700 | 14,472 | 25,752 | 8,177 | 72,101
Jun 17-Jun 23 | 23,854 | 12,801 | 26,456 | 6,464 | 69,575
Jun 24-Jun 30 | 23,246 | 14,521 | 31,773 | 6,388 | 75,928
Jul 1-Jul 7 | 27,413 | 17,460 | 40,031 | 7,050 | 91,954

That's an unsettling jump, especially given the holiday weekend with its disruptions to reporting, but given the jump I'm going to presume that there wasn't much disruption to testing, only to deaths. That means we saw a 20% or so rise in cases, up from 10% last week, which was across the board and concentrated in the places one would expect. In terms of absolute levels things are still fine, and there isn't much room for further acceleration because Delta is already dominant, but at 20% per week (R ~ 1.15) things will get ugly fast if trends don't improve. These changes are very similar to the regional changes one year ago this week, in percentage terms. Further discussion of likely future trends in the Delta section.

Vaccinations

The decrease is unwelcome, but also clearly linked to the holiday, so it's reasonable to expect some amount of rebound over the coming week. It could also be seen as the end of a previous temporary bump, in which case we wouldn't expect much further decline right away but wouldn't expect a bounce-back either. My guess is that the next week will hold steady from here, which is still not bad at all.

Last week we were at 46.7% fully vaccinated and 54.4% with one dose, and we picked up 0.9% and 0.7% on those respectively. That's at least an effective 1.5% drop in R0, and I'm guessing more than that. As I say each week, that adds up fast.

Periodic reminder from MR that fractional dosing of vaccines would save many lives and end the pandemic faster with no downside.
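As a sanity check on the "20% per week (R ~ 1.15)" claim, here is the arithmetic from the last two weekly case totals, assuming a mean generation interval of roughly 5 days (my assumption; the post doesn't state one):

```python
# Week-over-week case growth and the reproduction number it implies.
# The ~5-day generation interval is an illustrative assumption.
cases_prev, cases_now = 75_928, 91_954   # Jun 24-Jun 30 vs Jul 1-Jul 7 totals
weekly_growth = cases_now / cases_prev   # about 1.21
generation_days = 5
R = weekly_growth ** (generation_days / 7)
print(f"weekly growth {weekly_growth:.2f}, implied R {R:.2f}")
```

With these numbers the implied R comes out near 1.15, matching the estimate in the text; a longer assumed generation interval would push it higher.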
This links to a new paper in Nature Medicine arguing the same thing. And a preprint showing 1/4 doses of Moderna create immunity similar to natural infection. As a reminder, I continue to believe that Moderna's day-after side effects are due to the dose being actively too large, not merely larger than strictly necessary. And Moderna is getting half-doses ready for children, so we know they can provide them; the feasibility objections were always silly. For children, the new half-doses should be overkill the same way the full doses are overkill for adults.

If you have had one shot of the Johnson & Johnson vaccine, should you then get a shot of Pfizer or Moderna? If it is available, absolutely, yes you should. There's every reason to think that mix and match will work here, one shot of J&J isn't that different in effectiveness from one shot of mRNA, and there is no reason beyond supply limitations not to take the second shot. Yet because it is not Officially Recognized, all the Very Serious People are falling in line, finding ways to tie themselves up in knots and pretend that a second shot would not be useful. Here's Nate Silver commenting on the bizarre NYTimes attempt at this. Here's the Washington Post attempt. It's all disgraceful bull**** designed to back up official policy that doesn't make sense, with gems like 'if you don't feel safe enough without a booster, use additional measures like mask wearing.' They've committed to one J&J shot counting as 'vaccinated' while one shot of Moderna doesn't, and now they have to pretend that definition makes sense.

Remember to always be praising people when they finally do the right thing, regardless of whether they first exhausted all alternatives. The last thing you want to do is punish correct action by using it as justification for someone now being that much more blameworthy for not doing it sooner! Still, sometimes…

More vaccinations are the only viable way to deal with Delta.
As Matt says here, there's no going backwards, and we should have and will have very little tolerance for attempts to reimpose intrusive measures:

Delta Variant

Within a few weeks Delta will be almost all cases, and there won't be a need for a distinct Delta section, but for now it still makes sense. Where are we on that right now? Mostly we're there. This is a seven-day rolling average of data that's lagged days behind that to begin with, and this matches where I previously presumed the trend-line was. If it was 65% or so in this 7-day rolling average of lagged data, current case counts likely reflect something more like 75%-80%, and Alpha+Gamma accounts for almost all of the remainder as per this graph. That means we have accounted for over 90% of the effect of the shift from pre-Alpha to Delta. If we take this week's numbers seriously in that context, and don't make any adjustments, we'd get a final R0 for Delta under exact current conditions of R0 = 1.18 or so, once we take into account the lag in our case numbers.

Trevor Bedford's thread attempts to ballpark the potential size of the wave coming from Delta. It's useful to check one's intuitions and estimate calculations against those of others. Trevor uses a basic SIR model, assumes that vaccinations are fixed and independent of infections (and counts only those currently fully vaccinated, but gives them credit for full immunity, and ignores varying distributions of vaccinations, much of which he notes is wrong but that he hopes such effects will cancel), as always ignores all control systems in all directions, and comes up with a further 11% of the population likely to be infected before this is all over. In addition to all that, I've written several times about why SIR is not a great approximation in situations where different people are making varying adjustments in their behaviors and taking radically different levels of risk, which he's also not considering.
Nor does it take the children versus adults distinction seriously. I think these issues mostly point in the direction of a smaller wave, and there's one that towers over the rest of them, which is that vaccinations will continue until morale improves. We are continuing to see vaccine shots given out at a good clip. At a bare minimum, I would expect the bulk of those who have only had their first shot to get their second shot. He's starting with 46% fully vaccinated and an R0 = 1.18 (matching my estimate above). If we move that 46% to something like 55%, that's enough to get R0 = 1, and all that would require is everyone who has had a first shot as of July 5 getting a second shot. It doesn't work quite that way, because the first shot's protection is already present and can't be double counted, but other factors point the other way. Another way to think about this is that when Trevor says 11% additional infections, that's 11% additional infections or vaccinations, and can mostly be considered a hard upper bound. More vaccinations mean less overshooting after R0 gets down to 1, which, combined with all the other issues, makes me continue to think that the Delta wave is unlikely to get all that big.

There's also the seasonality perspective. This week, we saw a 21% increase in cases. A year ago this week, we saw a 21% increase in cases, both with similar regional patterns. A reminder:

| Date in 2020 | Cases Total |
|---|---|
| June 11-June 17 | 158,164 |
| June 18-June 24 | 215,751 |
| June 25-July 1 | 300,510 |
| July 2-July 8 | 365,107 |
| July 9-July 15 | 431,972 |
| July 16-July 22 | 461,441 |
| July 23-July 29 | 444,797 |
| July 30-Aug 5 | 392,193 |
| Aug 6-Aug 12 | 365,028 |
| Aug 13-Aug 19 | 322,126 |

Thus, it seems premature to conclude that we are in a permanent R = 1.18 world going forward pending additional vaccinations. We also have this situation at very low overall Covid levels, so control systems via individual action have a long way to adjust if necessary.
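The 46% → 55% arithmetic can be checked directly. This sketch uses the same simplification as the thread: the fully vaccinated are treated as fully immune, and effective R is assumed to scale with the susceptible fraction:

```python
# If effective R is 1.18 with 46% of the population fully vaccinated,
# what does R become at 55%? Homogeneous-mixing approximation:
# R is proportional to the remaining susceptible fraction.

def r_after_more_vaccination(r_now, vax_now, vax_later):
    """Rescale effective R by the change in the susceptible fraction."""
    return r_now * (1 - vax_later) / (1 - vax_now)

r = r_after_more_vaccination(1.18, 0.46, 0.55)
print(f"R at 55% vaccinated: {r:.2f}")  # just under 1
```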
We also have vaccinations continuing, with this week's disappointing crop still dropping R0 by at least 1.5% (and at least one vaccination week won't be reflected in cases yet, likely 2-3, so we have a head start here). My baseline scenario continues to be that cases rise for a bit, but things stabilize in most places before reaching levels that require behavioral adjustments, especially by the vaccinated. But I do continue to expect some regional/local issues in places with lower vaccination rates.

Periodically commenters will ask what the evidence is that Delta is more lethal than Alpha. I've seen such estimates from several sources, and no formal estimates doubting it, but such effects can't hold a candle to the power of vaccination, and especially of vaccination of the most vulnerable populations.

Scott Alexander Analyzes Lockdowns

Scott Alexander writes Lockdown Effectiveness: Much More Than You Wanted To Know. This was not, in this case, more than I wanted to know. Instead it felt more like a ton of empty calories, with comparison after comparison and calculation after calculation hedged by so many caveats (that were explicitly mentioned – Scott plays fair on such matters) that I don't feel much more enlightened than when I began reading, and the main update I made was that the available evidence was such that Scott was unable to provide an update. Sam Bankman-Fried summarizes it this way, which seems right:

Comparisons of this form stack the deck in favor of lockdowns, because they discount non-GDP effects (SBF's #3), and also because they consider the average countermeasure against the average gain, instead of comparing effects on the margin. That second one is worth unpacking a bit if it isn't obvious to you. The value of lockdown was being considered en masse rather than on the margin.
Given that we end up controlling the situation either way and never end up in 'everyone is infected' mode, stricter long-term lockdowns have increasing marginal cost per case prevented or life saved. Thus, if it's not clear whether lockdowns in general are good, or whether lockdowns above a given much lower level (like red state level versus blue state level) are good, that means that on the margin we did too much locking down. If we didn't do enough, lockdowns up to that point would look very good.

Another note is that Scott is much too kind about the 'maybe some of these provisions were not all that great' aspect of the problem. Closing parks and beaches and playgrounds isn't 'we made lives worse for not much gain,' it's 'we actively forced people into more dangerous situations and made the pandemic worse, while also making lived experience worse and burning out public tolerance and trust.'

What was the right amount of mandated lockdown, given our ability to prioritize measures? My belief is that private action reacts better and the control systems are very strong, and that the real reason to do lockdowns is the tail risk of complete disasters (and the big silent reason not to do them is to keep that ammo for when you really need it and/or to minimize risk of complete disaster via loss of civil order and people losing their minds). That's the argument that an analysis like Scott's is missing: the central point is decision making under uncertainty, rather than stacking (or not stacking) the deck in a particular way. Once we got past the first few months, I'm firmly in the 'we did too much locking down on the margin' camp. I believe things would have gone better if we had let people make more of their own decisions. Even better would have been smarter restrictions, but I've learned not to make that the comparison point in such questions.

In Other News

EA Forum post (via MR) reiterates some of the various ways the pandemic was botched.
Thread explaining why vaccines provide better immunity than natural infection.

This seems like an excellent idea. Businesses liking it makes sense too, but in general there is the worry that 'the government tells you that you can't open, then tells you that you can open if you do X' will make you happy whether or not the prohibition or requirement makes any sense, as it's better to pay a small additional cost than stay closed.

Poor ventilation is quite bad for many non-Covid reasons, and it's very good to see this acknowledged. Fixing air quality is an underappreciated cause area, whether via filters or better designs, and if that caught on more it would be a large upside to everything that's happened.

Great news, New York gets to keep its outdoor dining:

Which of course led to this as the featured comment, gotta love the user name:

To which I'd respond: on the sidewalk? Because I have never seen an outdoor dining area block a sidewalk to pedestrians in a meaningful way, not once. What city does this person even live in? It's amazing we have any nice things at all.

And not as great news, New York, insert your own joke here:

More surveys at the original post, including that worry is down to pre-pandemic levels, but daily enjoyment still lags behind. My top takeaway here is that tough times make people better appreciate life, and that we all went through some tough times and then things improved, and we now have comparison points that make me feel better. Daily life scores are still lagging, so the 'life is actually better now' hypothesis doesn't fit the data. It also doesn't fit my observations otherwise, as life in general is pretty great but it's still not fully back to the old baseline.
MR links to this LW post on the recent mask wearing studies, titled 'we still don't know if masks work.' I agree with its finding that the study in question didn't prove anything in any direction, but that doesn't mean we don't know if masks work, because we are allowed to know things that didn't come from a Properly Done and Formatted Scientific Study.

Evidence that some people's defenses against Covid kick in without causing antibody tests to come back positive. For now I don't think this has much practical impact, since proving that something happens and showing that it happens frequently enough to matter are very different things.

Lab leak: Confirmation that many scientists will affirm its plausibility privately but not publicly. I continue to skip over most lab-related items, and the situation hasn't changed.

NYTimes so no link, but MR points us to a birthday paper. The key finding is that your risk of Covid goes up when you have a birthday, presumably because you have a birthday party, and the effect size cares not whether you're blue tribe or red tribe, implying their choices of party were remarkably similar.

Not Covid

In not-ready-for-prime-time-players AI news, IBM Watson has decided to start making tennis predictions. They are, shall we say, not so great. That's not how this works. That's not how any of this works. Djokovic absolutely does not lose to Garin 43% of the time, or anything of the sort. You don't need to know much about tennis to know that the world #1 is going to dominate the world #8 most of the time, yet Watson can't figure this out, despite the training data providing more than enough information to solve this puzzle. That would be fine, if quite odd, if this were a little experiment IBM had privately done and then said 'whoops, I wonder why that didn't work.' Instead, it's publishing the output as if it's the next iteration of tennis analysis. I don't blame Watson, it's a computer program.
I blame the humans, who took a problem that AI should be quite good at, clearly botched it at the most basic level, then didn't notice they'd botched it somehow. That's the part I don't understand. I've gotten similarly nonsensical analytical results plenty of times, but when I do, I say the whoops thing and learn how to make better predictions. To paraphrase Seth, if they think this is publishable tennis analysis, keep Watson the hell away from any and all medical care, or it will end quite badly for all concerned. The other part I don't understand is how they got this kind of answer, given what I know about machine learning, and I'd love to hear a plausible gears-level theory of how this happened, even ignoring that the humans should catch it within five seconds. How did the algorithm get one of the most normal, basically-in-sample situations so wrong? If one is curious, my friend Seth offers this tracking of how it would be doing gambling, which can best be summarized as 'lighting money on fire.'

### The SIA population update can be surprisingly small

July 8, 2021 - 13:45

Published on July 8, 2021 10:45 AM GMT

With many thanks to Damon Binder, and the spirited conversations that led to this post.

People often think that the self-indication assumption (SIA) implies a huge number of alien species, millions of times more than otherwise. Thought experiments like the presumptuous philosopher seem to suggest this. But here I'll show that, in many cases, updating on SIA doesn't change the expected number of alien species much. It all depends on the prior, and there are many reasonable priors for which the SIA update does nothing more than double the probability of life in the universe[1]. This can be the case even if the prior says that life is very unlikely! We can have a situation where we are astounded, flabbergasted, and disbelieving about our own existence - "how could we exist, how can this beeeeee?!?!?!?"
- and still not update much - "well, life is still pretty unlikely elsewhere, I suppose". In the one situation where we have an empirical distribution, the "Dissolving the Fermi Paradox" paper, the effect of the SIA anthropics update is to multiply the expected population by 7.
The formula

Let ρ∈[0,1] be the probability of advanced space-faring life evolving on a given planet; for the moment, ignore issues of life expanding to other planets from their one point of origin. Let f be the prior distribution of ρ, with mean μ and variance σ². This means that, if we visit another planet, our probability of finding life is μ. On this planet, we exist[2]. Then if we update on our existence we get a new distribution f′; this distribution will have mean μ′:

μ′ = μ(1 + σ²/μ²).

To see a proof of this result, look at this footnote[3]. Define M_{μ,σ²} = 1 + σ²/μ² to be this multiplicative factor between μ and μ′; we'll show that there are many reasonable situations where M_{μ,σ²} is surprisingly low, ranging from below two up to a thousand, rather than in the millions or billions.

Beta distributions I

Let's start with the most uninformative prior of all: a uniform prior over [0,1]. The expectation of ρ is ∫₀¹ ρ dρ = 1/2, so, without any other information, we expect a planet to have life with 50% probability. The variance is σ² = 1/12. Thus if we update on our existence on Earth, we get the posterior f′(ρ) = 2ρ; the mean of this is 2/3 (either by direct calculation or using M_{1/2,1/12} = 1 + (1/12)/(1/2)² = 4/3).

Even though this change in expectation is multiplicatively small, it does seem that the uniform prior and f′(ρ) are very different, with f′(ρ) heavily skewed to the right. But now consider what happens if we look at Mars and notice that it hasn't got life. The probability of no life, given ρ, is 1−ρ. Updating on this and renormalising gives the posterior 6ρ(1−ρ).

The expectation of 6ρ(1−ρ), symmetric around 1/2, is of course 1/2. Thus one extra observation (that Mars is dead) has undone, in expectation, all the anthropic impact of our own existence. This is an example of a beta distribution for α=2 and β=2 (yes, beta distributions have a parameter called β and another one that's α; just deal with it).
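The bookkeeping above can be checked mechanically. Below is a small sketch (the `beta_moment` helper is mine, not from the post) verifying the uniform-prior numbers: the anthropic update gives mean 2/3 = μ·M, and observing a dead Mars pulls the mean back to 1/2.

```python
from math import factorial

def beta_moment(a, b, k):
    """E[rho^k] under the Beta(a+1, b+1) density on [0,1],
    using the identity  ∫ rho^a (1-rho)^b drho = a! b! / (a+b+1)!."""
    norm = factorial(a) * factorial(b) / factorial(a + b + 1)
    return (factorial(a + k) * factorial(b) / factorial(a + b + k + 1)) / norm

# Uniform prior: Beta(1,1), i.e. a = b = 0.
mu = beta_moment(0, 0, 1)                 # 1/2
var = beta_moment(0, 0, 2) - mu ** 2      # 1/12
M = 1 + var / mu ** 2                     # 4/3

# Anthropic update multiplies the density by rho: Beta(2,1), f'(rho) = 2*rho.
assert abs(beta_moment(1, 0, 1) - mu * M) < 1e-12   # mean 2/3 = mu * M

# A dead Mars then multiplies by (1-rho): Beta(2,2), density 6*rho*(1-rho).
assert abs(beta_moment(1, 1, 1) - 0.5) < 1e-12      # mean back to 1/2
```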
Indeed, the uniform prior is also a beta distribution (with α=β=1), as is the anthropically updated version 2ρ (which has α=2, β=1). The update rule for beta distributions is that a positive observation (ie life) increases α by 1, and a negative observation (a dead planet) increases β by 1. The mean of an updated beta distribution is a generalised version of Laplace's law of succession: if our prior is a beta distribution with parameters α and β, and we've had m positive observations and n negative ones, then the mean of the posterior is:

(α + m) / (α + β + m + n).

Suppose now that we have observed n dead planets, but no life, and that we haven't done an anthropic update yet; then we have a probability of life of α/(α+β+n). Upon adding the anthropic update, this shifts to (α+1)/(α+β+n+1), meaning that the multiplicative factor is at most (α+1)/α. If we started with the uniform prior with its α=1, this multiplies the probability of life by at most 2. In a later section, we'll look at α<1.

High prior probability is not required for weak anthropic update

The uniform prior has α=β=1 and starts at expectation 1/2. But we can set α=1 and a much higher β, which skews the distribution to the left; for example, β=2, 3, and 10. Even though these priors are skewed to the left, and have lower prior probabilities of life (1/3, 1/4, and 1/11), the anthropic update has a factor M_{μ,σ²} that is less than 2.

Also note that if we scale the prior f by a small ϵ, so replace f(ρ) on the range [0,1] with f(ρ/ϵ)/ϵ on the range [0,ϵ], then μ is multiplied by ϵ and σ² is multiplied by ϵ². Thus M_{μ,σ²} is unchanged. Consider, for example, the uniform distribution, scaled down by ϵ=1, ϵ=1/3, and ϵ=1/20: all of these will have the same M_{μ,σ²} (which is 4/3, just as for the uniform distribution). And, of course, doing the same scaling with the various beta distributions we've seen up until now will also keep M_{μ,σ²} constant.
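The (α+1)/α cap on the anthropic multiplier can be sanity-checked directly from the beta update rule; a minimal sketch (the function name is mine):

```python
def mean_life_prob(alpha, beta, n_dead, anthropic_update):
    """Posterior mean after n_dead dead planets, with or without the
    anthropic update (which acts like one extra positive observation)."""
    a = alpha + (1 if anthropic_update else 0)
    return a / (a + beta + n_dead)

alpha, beta = 1.0, 1.0          # the uniform prior
for n in range(100):
    before = mean_life_prob(alpha, beta, n, False)
    after = mean_life_prob(alpha, beta, n, True)
    # The boost (alpha+1)/(alpha+beta+n+1) over alpha/(alpha+beta+n)
    # never exceeds (alpha+1)/alpha, whatever the count of dead planets:
    assert after / before <= (alpha + 1) / alpha + 1e-12
```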
Thus there are a lot of distributions with very low μ (ie very low prior probability of life) but an M_{μ,σ²} that's less than 2 (ie the anthropic update is less than a doubling of the probability of life).

Beta distributions II and log-normals

The best-case scenario for M_{μ,σ²} is if f assigns probability 1 to ρ=μ. In that case, σ²=0 and M=1: the anthropic update changes nothing. Conversely, the worst-case scenario for M_{μ,σ²} is if f only allows ρ=0 and ρ=1. In that case, f assigns probability μ to 1 and 1−μ to 0, for a mean of μ and a variance of σ² = μ−μ², and a multiplicative factor of M_{μ,σ²} = 1/μ. In this case, after the anthropic update, f′ assigns certainty to ρ=1 (since any life at all, given this f, means life on all planets).

But there are also more reasonable priors with large M_{μ,σ²}. We've already seen some, implicitly, above: the beta distributions with α<1. In that case, M_{μ,σ²} is bounded by (α+1)/α. If α=3/4 and β=1, for instance, this corresponds to the (unbounded) distribution f(ρ) = (3/4)ρ^(−1/4); the multiplicative factor is below 7/3, which is slightly above 2. But as α declines, the multiplicative factor can go up surprisingly fast: at α=1/2 it is 3, at α=1/4 it is 5. In general, for α=1/n, the multiplicative factor is bounded by n+1. This gets arbitrarily large as α→0, though α=0 itself corresponds to the improper prior f(ρ)=1/ρ, whose integral diverges. On a log scale, this corresponds to the log-uniform distribution, which is roughly what you get if you assume "we need N steps, each of probability p, to get life; let's put a uniform prior over the possible Ns".

It's not clear why one might want to choose α=1/10²⁰ for a prior, but there is a class of priors that is much more natural: the log-normal distributions. These are random variables X such that log(X) is normally distributed.
If we choose log(X) to have a mean that is highly negative (and a variance that isn't too large), then we can mostly ignore the fact that X takes values above 1, and treat it as a prior distribution for ρ. The mean and variance of the log-normal distributions can be written down explicitly, giving the multiplicative factor as:

M_{μ,σ²} = exp(σ̄²).

Here, σ̄² is the variance of the normal distribution log(X). This σ̄² might be large, as it denotes (roughly) "we need N steps, each of probability p, to get life; let's put a uniform-ish prior over a range of possible Ns". Unlike 1/ρ, this is a proper prior, and a plausible one; therefore there are plausible priors with very large M_{μ,σ²}. The log-normal is quite likely to appear, as it is the approximate limit of multiplying together a host of different independent parameters.
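A quick Monte Carlo check of M = exp(σ̄²); the parameter values below are illustrative, not from the post:

```python
import math
import random

random.seed(0)

def multiplicative_factor(xs):
    """Empirical E[X^2]/E[X]^2, which equals 1 + sigma^2/mu^2."""
    mu = sum(xs) / len(xs)
    second = sum(x * x for x in xs) / len(xs)
    return second / (mu * mu)

sigma_bar = 1.0                      # std dev of log(X); illustrative value
# Highly negative mean of log(X): X is almost surely far below 1.
xs = [math.exp(random.gauss(-20.0, sigma_bar)) for _ in range(200_000)]

estimate = multiplicative_factor(xs)
exact = math.exp(sigma_bar ** 2)     # the exp(sigma-bar squared) formula
assert abs(estimate - exact) / exact < 0.1
```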

Multiplication law

Do you know what's more likely to be useful than "the approximate limit of multiplying together a host of different independent parameters"? Actually multiplying together independent parameters.

The famous Drake equation is:

R∗ · fp · ne · fl · fi · fc · L.

Here R∗ is the number of stars in our galaxy, fp the fraction of those with planets, ne the number of planets that can support life per star that has planets, fl the fraction of those that develop life, fi the fraction of those that develop intelligent life, fc the fraction of those that release detectable signs of their existence, and L is the length of time those civilizations endure as detectable.

Then the proportion of advanced civilizations per planet is q·fl·fi, where q is the proportion of life-supporting planets among all planets. To compute the M of this distribution, we have the following highly useful result (the proof is in this footnote[4]).

• Let Xi be independent random variables with multiplicative factors Mi, and let M be the multiplicative factor of X = X1·X2·…·Xn. Then M = ∏i Mi: the total M is the product of the individual Mi.
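This product law is easy to verify exactly for small discrete distributions; a sketch (the two distributions are chosen arbitrarily for illustration):

```python
import itertools

# Two independent discrete random variables given as {value: probability}.
X1 = {1.0: 0.5, 3.0: 0.5}
X2 = {2.0: 0.25, 10.0: 0.75}

def M(dist):
    """Multiplicative factor E[X^2]/E[X]^2 = 1 + sigma^2/mu^2."""
    mu = sum(v * p for v, p in dist.items())
    second = sum(v * v * p for v, p in dist.items())
    return second / mu ** 2

# Distribution of the product X1 * X2 (independence: probabilities multiply).
prod = {}
for (v1, p1), (v2, p2) in itertools.product(X1.items(), X2.items()):
    prod[v1 * v2] = prod.get(v1 * v2, 0.0) + p1 * p2

assert abs(M(prod) - M(X1) * M(X2)) < 1e-12   # M is multiplicative
```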

The paper "Dissolving the Fermi paradox" gives estimated distributions for all the terms in the Drake equation. The q is constant, of course (so has Mq = 1). The fi has a log-uniform distribution from 0.001 to 1; the M can be computed from the mean and variance of such distributions, so

M_fi = log(1/0.001) · (1 − 0.001²) / (2·(1 − 0.001)²) ≈ 3.5.

The fl term is more complicated; it is distributed like g(X) = 1 − e^(−e^(X·50·log(10))), where X is a standard normal distribution. Fortunately, we can estimate its mean and variance without having to figure out its full distribution, by numerically integrating g(x) and g(x)² against the standard normal density. This gives μ≈0.5, σ²≈0.25 and M≈2. The overall multiplicative effect of the anthropic update is:

Mplanet≈7.

What if we considered the proportion of advanced civilization per star, rather than per planet? Then we can drop the q term and add in fp and ne. Those are both estimated to be distributed as log-uniform on [0.1,1]; for a total M of

Mstar≈14.

Why is the M higher for civilizations per star than civilizations per planet? That's because when we update on our existence, we increase the proportion of civilizations per planet, but we also update the proportion of planets per star - both of these can make life more likely. The Mstar incorporates both effects, so is strictly higher than Mplanet.

We can do the same by considering the number of civilizations per galaxy; then we have to incorporate R∗ as well. This is log-uniform on [1,100], giving:

Mgalaxy ≈ 32.
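Since the moments of a log-uniform distribution have closed forms, these factors can be recomputed directly. A sketch: the helper function is mine, and M_fl = 2 is carried over from the numerical estimate above rather than recomputed.

```python
import math

def M_loguniform(a, b):
    """Multiplicative factor E[X^2]/E[X]^2 for a log-uniform
    distribution on [a, b]."""
    L = math.log(b / a)
    mean = (b - a) / L
    second = (b * b - a * a) / (2 * L)
    return second / mean ** 2

M_fi = M_loguniform(0.001, 1.0)          # ≈ 3.46, i.e. the "≈ 3.5" above
M_fl = 2.0                               # numerical estimate from the post
M_planet = M_fi * M_fl                   # q is constant, so contributes 1
M_star = M_planet * M_loguniform(0.1, 1.0) ** 2    # add fp and ne
M_galaxy = M_star * M_loguniform(1.0, 100.0)       # add R*

assert abs(M_planet - 7) < 0.2
assert abs(M_star - 14) < 1.0
assert abs(M_galaxy - 32) < 2.0
```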

Limitations of the multiplier

Low multiplier, strong effects

It's important to note that the anthropic update can be very strong without changing the expected population much. So a low M_{μ,σ²} doesn't necessarily mean a low impact.

Consider for instance the presumptuous philosopher, slightly modified to use planetary population densities. Thus theory T1 predicts ρ = 1/10¹² (one in a trillion) and T2 predicts ρ = 1; we put initial probabilities of 1/2 on both theories.

As Nick noted, the SIA update pushes T2 to being a trillion times more probable than T1; a posteriori, roughly, T2 is a certainty (the actual probability is 10¹²/(10¹²+1)).

However, the expected population goes from roughly 1/2 (the average of 1/10¹² and 1) to roughly 1 (since a posteriori T2 is almost certain). This gives an M_{μ,σ²} of roughly 2. So, despite the strong update towards T2, the actual population update is small; and, conversely, despite the actual population update being small, we have a strong update towards T2.
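Both effects can be read off directly from the mixture's moments; a minimal sketch:

```python
# T1: rho = 1e-12, T2: rho = 1, each with prior probability 1/2.
p1, p2 = 0.5, 0.5
r1, r2 = 1e-12, 1.0

mu = p1 * r1 + p2 * r2               # expected population density, ≈ 1/2
second = p1 * r1 ** 2 + p2 * r2 ** 2
M = second / mu ** 2                 # ≈ 2: a small population update...

# ...but SIA reweights theories in proportion to rho:
posterior_T2 = p2 * r2 / mu          # ≈ 1 - 1e-12: near-certainty in T2

assert 1.99 < M < 2.01
assert posterior_T2 > 1 - 1e-11
```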

Combining multiple theories

In the previous section, note that both T1 and T2 were point estimates: they posit a constant ρ. So they have a variance of zero, and hence an M_{μ,σ²} of 1. But T2 has a much stronger anthropic update. Thus we can't use their M_{μ,σ²} to compare the anthropic effects on different theories.

We also can't relate the individual Ms to that of a combined theory. As we've seen, T1 and T2 have Ms of 1, but the combined theory (1/2)T1+(1/2)T2 has an M of roughly 2. But we can play around with the relative initial weight of T1 and T2 to get other Ms.

If we started with (10¹²/(10¹²+1))·T1 + (1/(10¹²+1))·T2, this has a mean of roughly 10⁻¹²; the anthropic update sends it to (1/2)T1 + (1/2)T2, with a mean of roughly 1/2. So this combined theory has an M of roughly 10¹²/2, half a trillion.

But, conversely, if we started with (1/(10¹²+1))·T1 + (10¹²/(10¹²+1))·T2, then we have an initial mean of roughly one; its anthropic update is (1/(10²⁴+1))·T1 + (10²⁴/(10²⁴+1))·T2, also with a mean of roughly one. So this combined theory has an M of roughly 1.

There is a weak relation between M and the Mi of the various Ti. Suppose each Ti has a multiplier of Mi; we can reorder the Ti so that Mi ≤ Mj for i ≤ j. Let T be a combined theory that assigns probability pi to Ti.

1. For all {pi}, M≥M1.
2. For all ϵ, there exists {pi} with all pi>0, so that M < M1 + ϵ.

So, the minimum value of the Mi is a lower bound on M, and we can get arbitrarily close to that bound. See the proof in this footnote[5].

1. As we'll see, the population update is small even in the presumptuous philosopher experiment itself. ↩︎

2. Citation partially needed: I'm ignoring Boltzmann brains and simulations and similar ideas. ↩︎

3. Given a fixed ρ, the probability of observing life on our own planet is exactly ρ. So Bayes's theorem implies that f′(ρ)∝ρf(ρ). With the full normalisation, this is

f′(ρ) = ρ·f(ρ) / ∫₀¹ ρ·f(ρ)dρ.

If we want to get the mean μ′ of this distribution, we further multiply by ρ and integrate:

μ′ = E_{f′}(ρ) = ∫₀¹ ρ · (ρ·f(ρ) / ∫₀¹ ρ·f(ρ)dρ) dρ = (∫₀¹ ρ²·f(ρ)dρ) / (∫₀¹ ρ·f(ρ)dρ).

Let's multiply this by 1 = (∫₀¹ f(ρ)dρ) / (∫₀¹ f(ρ)dρ) and regroup the terms:

μ′ = (∫₀¹ ρ²·f(ρ)dρ / ∫₀¹ f(ρ)dρ) · (∫₀¹ f(ρ)dρ / ∫₀¹ ρ·f(ρ)dρ).

Thus μ′ = E_f(ρ²)/E_f(ρ) = (σ² + μ²)/μ = μ(1 + σ²/μ²), using the fact that the variance is the expectation of ρ² minus the square of the expectation of ρ. ↩︎

4. I adapted the proof in this post.

So, let Xi be independent random variables with means μi and variances σi². Let X = ∏i Xi, which has mean μ and variance σ². Due to the independence of the Xi, the expectation of their product is the product of their expectations. Note that Xi² and Xj² are also independent if i≠j. Then we have:

∏i M_{μi,σi²} = ∏i (1 + σi²/μi²) = ∏i ((μi² + σi²)/μi²) = ∏i (E(Xi²)/μi²) = (∏i E(Xi²)) / (∏i E(Xi))² = E(X²)/E(X)² = (μ² + σ²)/μ² = 1 + σ²/μ² = M_{μ,σ²}. ↩︎

5. Let {fi}, 1≤i≤n, be probability distributions on ρ, with means μi, variances σi², second moments si = E_{fi}(ρ²) = σi² + μi², and Mi = si/μi², and let f be the mixture f = p1f1 + … + pnfn, with associated multiplier M. Without loss of generality, assume Mi ≤ Mj for i<j. Then we'll show that M ≥ M1.

We'll first show this in the special case where n=2 and M1=M2, then generalise to the general case, as is appropriate for a generalisation. If s1/μ1² = M1 = M2 = s2/μ2², then, since all terms are non-negative, there exists an α such that s2 = α²s1 while μ2 = αμ1. Then for any given p = p1, the M of f is:

M(p) = (p·s1 + (1−p)·s2) / (p·μ1 + (1−p)·μ2)² = (p·s1 + (1−p)·α²s1) / (p·μ1 + (1−p)·αμ1)² = M1 · (p + α²(1−p)) / (p + α(1−p))².

The function x→x² is convex, so, interpolating between the values x=1 and x=α, we know that for all 0≤p≤1, the term (p + α(1−p))² must be lower than p + α²(1−p). Therefore (p + α²(1−p)) / (p + α(1−p))² is at least 1, and M(p) ≥ M1. This shows the result for n=2 if M1=M2.

Now assume that M2 > M1, so that s1/μ1² < s2/μ2². Then replace s2 with s2′, which is lower than s2, so that s1/μ1² = s2′/μ2². If we define M′(p) as the expression for M(p) with s2′ substituted for s2, we know that M′(p) ≤ M(p), since s2′ < s2. The previous result then shows that M′(p) ≥ M1, thus M(p) ≥ M1 too.

To show the result for larger n, we'll induct on n. For n=1 the result is a tautology, M1 ≥ M1, and we've shown the result for n=2. Assume the result is true for n−1, and notice that f = p1f1 + … + pnfn can be re-written as f = p1f1 + (1−p1)f′, where f′ = p2′f2 + … + pn′fn for pi′ = pi/(1−p1). Then, by the induction hypothesis, if M′ is the M of f′, then M′ ≥ M2. Applying the result for n=2 between f1 and f′ gives M ≥ min(M1, M′). Since M1 ≤ M2 and M′ ≥ M2, we know that min(M1, M′) = M1, proving the general result.

To show M can get arbitrarily close to M1, simply note that M is continuous in the {pi}; define p1 = 1−ϵ and pi = ϵ/(n−1) for i>1, and let ϵ tend to 0. ↩︎

### Anthropics in infinite universes

8 July 2021 - 09:56
Published on July 8, 2021 6:56 AM GMT

When talking about anthropics, people often say things like "assume the universe is finite; weird things happen in infinite universes". I've myself argued that SSA breaks down when we encounter infinities; SIA breaks down sooner, when we encounter expected infinities. You can formalise this informally[1] with the thought that:

1. In an infinite universe, anything can happen, no matter how unlikely: life must exist somewhere. So our existence doesn't tell us anything about life; its probability could be anything at all.

A superficially convincing argument; but not one you'd use for anything else. For instance, consider the following:

2. In an infinite universe, anything can happen, no matter how unlikely: if gravity didn't exist, somewhere it must seem to exist by sheer chance. So our observation of gravity doesn't tell us anything about gravity; its probability could be anything at all.
I've argued before that anthropic questions are pretty normal. Why would we accept the reasoning in question 1, but reject it in question 2? We shouldn't. We can deal with questions like 2 by talking about limits of probabilities in larger and larger spaces, or by discounting distant observations (similar to sections 2.3 and 3.1 in Infinite Ethics). So we might define conditional probabilities like P(X∣Y)
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')} in an infinite universe in the following way: • Let Prl(X∣Y) be the ratio of observers, within a large hypersphere of radius r centered on location l, that observe X and Y, relative to the proportion that observes Y. If this tends to a limit as r→∞, independently of l, then define that limit to be P(X∣Y). Note that this definition works just as well for Y= "we observe the force of gravity to be blah" as with Y= "we exist". Now, that definition might not be ideal (in particular, "radius" is not defined for relativistic space-time). No problem: different definitions of probability are asking different questions, and can lead to different anthropic probabilities, just as in the finite case. I'll call these class of questions "SIA-limit questions", since they are phrased as ratios of observers, and dependent on how we use limits to define probability in infinite universes. They each lead to various "SIA-limit anthropic probability theories"; in most standard situations, these should reach the same answers as each other. 1. Yes, it's perfectly possible to formalise informally, and I encourage people to do it more often. ↩︎ Discuss ### A world in which the alignment problem seems low-stakes 8 июля, 2021 - 05:31 Published on July 8, 2021 2:31 AM GMT Bostrom's original instrumental convergence thesis needs to be applied carefully. The danger from power-seeking is not intrinsic to the alignment problem. This danger also depends on the structure of the agent's environment In The Catastrophic Convergence Conjecture, I wrote: But are there worlds where this isn't true? Consider a world where you supply a utility-maximizing AGI with a utility function. The AGI is in a "separate part of the universe"; after the initial specification of the utility function, the left half of the universe evolves independently of the right half. 
Nothing you can do after specification can affect the AGI's half, and vice versa. No communication can take place between the two halves. The only information you have about the other half is your utility. For simplicity, let's suppose you and the AGI have utility functions over universe-histories which are additive across the halves of the universe. You don't observe any utility information about the other part of the universe until the end of time, and vice versa for the AGI. That is, for history h, u_human(h) = u_human(h_left) + u_human(h_right).

If the AGI uses something like causal decision theory, then it won't try to kill you, or "seek power" over you. The effects of its actions have no causal influence over what happens in your half of the universe. Your universe's evolution adds a constant term to its expected utility. (Other decision theories might have it precommit to minimizing human utility unless it attains maximal AGI-utility from the left half of the universe-history, or some other shenanigans. This is beside the point I want to make in this post, but it's important to consider.)

However, the setup is still interesting because:

1. Goodhart's law still applies: if you give the AGI an incomplete proxy objective, you'll get suboptimal true performance.
2. Value is still complex: it's still hard to get the AGI to optimize the right half of the universe for human flourishing.
3. If the AGI is autonomously trained via stochastic gradient descent in the right half of the universe, then we may still hit inner alignment problems.

Alignment is still hard, and we still want to get the AGI to do good things on its half of the universe. But it isn't instrumentally convergent for the AGI to seek power over you, and so you shouldn't expect an unaligned AGI to try to kill you in this universe. You shouldn't expect the AGI to kill other humans, either, since none exist in the right half of the universe.

To restate: Bostrom's original instrumental convergence thesis needs to be applied carefully. The danger from power-seeking is not intrinsic to the alignment problem; it also depends on the structure of the agent's environment.

Thanks to John Wentworth for feedback on this post.
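The additive-utility argument can be sketched as a toy computation. The utility values and action names below are made up for illustration; they are not from the post. The point is just that when u(h) = u(h_left) + u(h_right) and the AGI only controls the right half, the left-half term is an additive constant that can never change which action is optimal:

```python
# Toy sketch (hypothetical numbers): the AGI picks an action affecting only
# its own (right) half of the universe; the human (left) half evolves on its
# own and contributes only an additive constant to the AGI's utility.

AGI_UTILITY_OWN_HALF = {"optimize_objective": 10.0, "seek_power": 7.0, "do_nothing": 0.0}

def best_action(human_half_utility: float) -> str:
    # u(h) = u(h_left) + u(h_right): the left-half term is identical for
    # every action, so it cannot change the argmax.
    totals = {a: human_half_utility + u for a, u in AGI_UTILITY_OWN_HALF.items()}
    return max(totals, key=totals.get)

# Whatever happens in the human half, the AGI's choice is unchanged:
assert {best_action(u) for u in (-100.0, 0.0, 42.0)} == {"optimize_objective"}
print(best_action(0.0))  # → optimize_objective
```

This is just the constant-term observation from the post made mechanical: a causal-decision-theoretic maximizer has no incentive to act on (or against) the half it cannot influence.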
Discuss

### Announcing My Free Online Course "Original Seeing With a Focus On Life"

July 8, 2021 - 04:26
Published on July 8, 2021 1:26 AM GMT

In May and June of 2021, I ran an online nature study course with about two dozen participants. I had three goals:

1. Help people create or deepen a personal connection with nature.
2. Learn how to run month-long online courses with over a dozen participants.
3. Test some hunches about the rationality material I’ve been working on.

One of my main hunches was that if you complete this course, you’ll find you’re able to study your own mind, or just about anything else, in the way a naturalist studies nature, with no further guidance. I obviously need more data, but the preliminary feedback looks promising. A solo version of the course is now available for free through my website, thanks largely to support from the Long Term Future Fund. If you want to get better at original seeing, I know of no better resource. I hope that some of you will try it out and tell me how it goes.

Discuss

### You should ask for a third-party audit of your cryopreservation

July 8, 2021 - 04:18
Published on July 8, 2021 1:18 AM GMT

Hello (potential) cryonics customer: you should ask for a third-party audit of your cryopreservation. This ensures that the cryonics providers aren't performing the procedure badly, say through negligence, ignorance, or something else. Specifically, you should ask that the following be sent to an imaging specialist of your choice AFTER your brain has been preserved:

1. An x-ray of your preserved brain, and
2. A needle biopsy of a few brain areas (say, the frontal cortex, parietal cortex, brain stem, basal ganglia, and thalamus).

The x-ray will roughly show if large parts of your brain were not preserved well. Micrographs of the biopsy samples will show in fine detail (tens of nanometers resolution) the quality of your preservation.
This information will give your loved ones peace of mind that you've maxed your chances of revival, and it will also give other cryonics customers confidence that your cryoprovider is worth paying money to. These auditing requirements come from a video by the Brain Preservation Foundation (BPF), Preservation Done Right, and from their article, Vitrifying the Connectomic Self (p. 5, 2nd para).

Disclaimer: I volunteer for the BPF, and am an ex-Alcor member.

Discuss

### How much chess engine progress is about adapting to bigger computers?

July 8, 2021 - 01:35
Published on July 7, 2021 10:35 PM GMT

(This question comes from a discussion with Carl Shulman.)

In this post I describe an experiment that I'd like to see run. I'm posting a $1,000–$10,000 prize for a convincing implementation of these experiments. I also post a number of smaller prizes for relevant desk research or important corrections to this request.

Motivation

In order to understand the dynamics of the singularity, I'd like to understand how easy it is to improve algorithms and software. We can learn something about this from looking at chess engines. It's not the most relevant domain to future AI, but it's one with an unusually long history and unusually clear (and consistent) performance metrics.

In order to quantify the quality of a chess engine, we can fix a level of play and ask "How much compute is needed for the engine to play at that level?" One complication in evaluating the rate of progress is that it depends on what level of play we use for evaluation. In particular, newer algorithms are generally designed to play at a much higher level than older algorithms. So if we quantify the compute needed to reach modern levels of play, we will capture both absolute improvements and also "adaptation" to the new higher amounts of compute. So we'd like to attribute progress in chess engines to three factors:

1. Better software.
2. Bigger computers.
3. Software that is better adapted to new, bigger computers.

Understanding the size of factor #1 is important for extrapolating progress given massive R&D investments in software. While it is easy to separate factors #1 and #2 from publicly available information, it is not easy to evaluate factor #3.

Experiment description

Pick two (or more) software engines from very different times. They should both be roughly state of the art, running on "typical" machines from the era (i.e. the machines for which R&D is mostly targeted). We then carry out two matches:

1. Run the old engine on its "native" hardware (the "old hardware"). Then evaluate: how little compute does the new engine need in order to beat the old engine?
2. Run the new engine on its "native" hardware (the "new hardware"). Then evaluate: how much compute does the old engine need in order to beat the new engine?

With some effort, we can estimate a quantitative ratio of "ops needed" for each of these experiments. For example, we may find that the new engine is able to beat the old engine using only 1% of the "old hardware," whereas the old engine would require 10,000x the "new hardware" in order to compete with the new engine.

The first experiment tells us about the absolute improvements in chess engines on the task for which the old engine was optimized. (This understates the rate of software progress to the extent that people stopped working on this task.) The second experiment gives us the combination of absolute improvements + adaptation to new hardware. Typical measures of "rate of software progress" will be somewhere in between, and are sensitive to the hardware on which the evaluation is carried out. I believe that understanding these two numbers would give us a significantly clearer picture of what's really going on with software progress in chess engines.

Experiment details

Here are some guesses about how to run this experiment well. I don't know much about computer chess, so you may be able to make a better proposal.

• Old engine, old hardware: my default proposal is the version of Fritz that won the 1995 world computer chess championship, using the same amount of hardware (and time controls) as in that championship. This algorithm seems like a particularly reasonable "best effort" at making full use of available computing resources. I don't want to compare an engine running on a very expensive old machine to an engine running on a cheap modern machine. You may have to be opportunistic about what kind of thing you can actually run.
• New engine, new hardware: my default proposal is the version of Stockfish that won TCEC Season 20, on the same hardware+time used in that competition.
• Running a new engine on old hardware: we should use whatever modern engine works best on teensy computers. It's not important that it be the same as the modern engine on modern hardware. We'd prefer it if there were a dedicated team continuing to work on this problem, but absent that we want to use the best thing that exists.
• Memory use: when running on the old hardware we need to match the memory use of the old machine. I'm not sure how to handle scaling of memory. One possibility is to hold the compute:memory ratio fixed and scale them both up/down together, but something more realistic would be welcome.
• Matchup vs ELO: I proposed experiments organized around 1:1 contests. I think there are lots of ways that could go wrong. It would be really good to at least sanity-check the results by comparing against a third reference engine. Quantification of this factor is welcome.
• More engines, more hardware: you could do the same experiment with additional software engines, and could evaluate at more levels of compute. I think you get a lot of the benefits from the first few measurements, but more data helps and it might be a lot cheaper (especially if you need more engines to get good ELO estimates anyway).
• Even older engines: I'm interested in results even older than Fritz, but the further back we go the more uncertainty I have about how to actually do the comparison.
• Endgame tables, opening book, learned heuristics: some of the knowledge produced by chess engines was produced using large computers (by playing or observing very large numbers of games, brute-forcing endgames, and so on). We'd prefer to exclude these factors if they use more compute than would have been affordable for the old engine. If we can't, we at least want to quantify their influence. Including these factors could significantly overstate the returns of human R&D; it's an important thing to know about, but for the purpose of forecasting the impacts of AI we really want to separate it out. This may be a major constraint for considering new engines.
• What counts as an "operation"? I don't think that making "hardware" comparisons between new and old computers will be straightforward. I think it's easier if we restrict attention to consumer microprocessors, and I'm hoping that there's only a little bit of uncertainty (e.g. a factor of 2). I think you can do the experiments before figuring this out, and then try to clarify relevant issues by seeing how fast the computers can run simple relevant benchmarks. (The "old hardware" run should probably be on just one processor.)
• Timing out: it may be prohibitively expensive to give the old engine enough compute to beat the new engine. I think you should do a little bit of that work, to try to figure out the basic picture (how expensive it would be, rough bounds on the numbers, some very noisy estimates from playing just a few games). We can figure out where to go from there.
• Different time controls: I'm expecting time controls to be comparable between the old and new competitions. If old competitions ran on much longer time controls, I'd prefer to scale them down to something comparable (to try to better match what would have been realistic to experiment with during R&D / profitable for non-competition purposes). And similarly if new competitions are longer.
• Ponder: thinking during the opponent's turn could change the answer a tiny amount (up to a factor of 2), but it really doesn't seem worth dealing with, even if some of the engines are optimized to use it.
• Inefficiencies from running on new computers: you might have to mess things up a lot to run old engines on new computers (or to deal with weird amounts of memory or so on). Ideally it would be possible to abstract out some of the details of "how long it actually takes" to talk about what the computation or memory costs ought to be if implemented without overhead. We'll have to see how that goes.

Prize structure

I'm not spending long evaluating any of this and my decisions are going to be completely unaccountable. Dealing with me may be a waste of time. Your only recourse will be to make an angry comment on this post. I know that all makes it less appealing to participate. But please have it in mind before spending any time on this project, and consider yourself warned!

I'm planning to give away at least $1,000 sometime in the next 3 months if anyone runs any plausible version of this experiment, or even points me to public information from which it's possible to back out a similar number. I'm fine if that means I have to give $1,000 to a random LW comment. I'd give away $10,000 if someone ran a clean version of the experiment, I found it convincing, and they were transparent about their methods. Before giving away the full prize I'd likely have a waiting period where I offered a separate prize for people to post replications or caveats or so on.
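For the "Matchup vs ELO" sanity check above, the standard logistic Elo model converts an average match score into an implied rating difference (and back). This is a minimal generic sketch of that formula, not tied to any particular engine pair:

```python
import math

def elo_diff_from_score(score: float) -> float:
    """Elo difference implied by an average per-game score (win=1, draw=0.5)."""
    # Inverts the logistic Elo model E = 1 / (1 + 10^(-d/400)).
    return 400.0 * math.log10(score / (1.0 - score))

def expected_score(elo_difference: float) -> float:
    """Expected per-game score for a player rated `elo_difference` above its opponent."""
    return 1.0 / (1.0 + 10.0 ** (-elo_difference / 400.0))

# E.g. if the new engine scores 75% against the old one, that implies ~191 Elo:
print(round(elo_diff_from_score(0.75)))  # → 191
```

Anchoring both engines to a common third reference engine via this formula gives transitive Elo estimates, which is one way to cross-check the direct 1:1 match results.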

I'm generally expecting/hoping to wrap this up over the next couple of months, but will adjust based on responses.

If multiple people do experiments I'll make some arbitrary call about how to allocate prize money. Timing will matter a bit but it's a secondary consideration to quality of experiment. Earlier submissions can get paid based on whether they helped pave the way for later submissions.

If you are planning to spend time on this I encourage you to make a comment. I'm very happy to provide thoughts on a proposed experiment, e.g. to tell you what size of prize I'd expect to give it or what concerns I'd have about a proposal. None of this is binding.

If receiving a prize would be weird or problematic for some reason, I'm still interested in the results and you can opt to receive a comparable quantity of gratitude instead of $.

Desk research prizes

I'd also pay out $100-$1000 at my discretion for any of the following:

• Plausible (or better yet convincing) estimates of the total investment in chess engine R&D over time.
• Good analysis of the relative importance of hardware/software using public information (must at least improve over the analysis here). Or pointers to other similar experiments that have already been run.
• Any consideration that I feel should significantly change the experimental setup, e.g. quantifying the importance of endgame tables, or noting a critical difference between old and new chess engines, or suggesting a reason that Fritz is a bad engine to compare to.
• Any contributions that make it significantly easier to run this experiment (for example tracking down a usable implementation of an old chess engine).

Discuss

### Consequences of Bayesian Epistemology?

July 7, 2021 - 20:54
Published on July 6, 2021 8:05 PM GMT

I'm not very experienced in such things, so this might be an oblivious question with an obvious answer. If so, sorry.

I understand that one of the foundations of LessWrong is Bayesian epistemology and reasoning. I've been looking into it, and it seems like the consequences of Bayes' Theorem and similar explorations into probability theory have pretty basic/intuitive implications for rational thought. It seems like it all boils down to "update your beliefs based on evidence." At the moment, I can't see many groundbreaking or especially helpful findings.

There are a couple that are useful, though. "Making beliefs pay rent in anticipated experiences" is useful for ensuring that evidence is available to refine beliefs, and the "conservation of expected evidence" highlights the consequences of conditionality in ways that weren't immediately obvious (e.g. supporting evidence for an already-strong hypothesis isn't that useful, but contradicting evidence is---and the reverse is true for weak hypotheses).
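Both of those consequences can be checked numerically. Here is a minimal sketch with made-up numbers for the prior and likelihoods:

```python
def posteriors(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: probability of seeing evidence E, and the posterior
    for hypothesis H after observing E or after observing not-E."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    post_if_e = prior * p_e_given_h / p_e
    post_if_not_e = prior * (1 - p_e_given_h) / (1 - p_e)
    return p_e, post_if_e, post_if_not_e

# An already-strong hypothesis: prior 0.9, with E more likely under H.
p_e, post_e, post_not_e = posteriors(0.9, 0.8, 0.3)

# Supporting evidence barely moves the strong hypothesis (0.9 -> 0.96)...
print(round(post_e, 3))
# ...but contradicting evidence moves it much further (0.9 -> 0.72).
print(round(post_not_e, 3))

# Conservation of expected evidence: the expected posterior equals the prior.
expected_posterior = p_e * post_e + (1 - p_e) * post_not_e
print(round(expected_posterior, 3))  # 0.9
```

However you pick the likelihoods, the probability-weighted average of the two possible posteriors always lands back on the prior; you cannot expect, before looking, that the evidence will shift you in a particular direction.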

What are some of the most valuable takeaways and implications from Bayesian epistemology? Why does it serve as the effective foundation of this website?

Discuss

### Agency and the unreliable autonomous car

July 7, 2021 - 17:58
Published on July 7, 2021 2:58 PM GMT

Financial status: This is independent research, now supported by a grant. I welcome financial support.

Epistemic status: I believe ~85% that the technical argument presented in this piece is correct.

Outline
• This is my attempt to explain the basic Löb situation with the 5-and-10 problem.

• This post considers an autonomous car choosing between a fast route and a slow route, with the goal of minimizing the time to its destination.

• If the autonomous car makes its decisions using a certain seemingly-reasonable algorithm based on proof search, this post shows that it may nonsensically take the slow route.

• This is the 5-and-10 problem phrased in terms of autonomous driving.

Parable of the unreliable autonomous car

A long time ago, in a far away place, there was an autonomous car.

This particular autonomous car was programmed to use the agent model in its reasoning. That is, it was programmed to model the world as consisting of an agent and an environment, the agent being itself, and the environment being a network of roads that it would navigate each day. This is how the autonomous car was programmed to model the world.

Beyond using the agent model, this autonomous car had been given, by its designers, an accurate model of its own behavior. It could tell you how its own decision algorithm would behave under any set of circumstances. It could tell you this without being faced with those actual circumstances because it had a model of itself, and it could answer questions based on that model.

One day, a passenger got into the autonomous car and gave it a destination. There were two ways for the autonomous car to get there: a fast route, which would get it there at 1pm, and a slow route, which would get it there at 2pm. The goal of the autonomous car was to get to its destination as quickly as possible. This parable is about the reasoning followed by the car in deciding which of these two routes to take.

Now in this far-away place, autonomous cars were designed by a strange cadre of alien engineers, and thus they were programmed with strange algorithms. This particular autonomous car was programmed to decide among routes using the following procedure:

1. Make a list of the possible routes

2. Make a list of the possible arrival times

3. Make a list of all possible logical sentences of the form IF route 1 is taken THEN I will arrive at such-and-such a time AND IF route 2 is taken THEN I will arrive at such-and-such a time AND IF route 3 is taken THEN I will arrive at such-and-such a time AND …

4. Shuffle this list of logical sentences into a random order

5. For each logical sentence in the list from first to last:

5a. Search for a proof that this sentence is consistent with your model of the world, up to a maximum proof length of one million characters

5b. If a proof is found then output the route associated with the earliest arrival time

• The list of possible routes in step 1 is returned by a subroutine that uses the car’s inbuilt map of the road network. On this particular day it returned two routes: SLOW and FAST.

• The list of possible arrival times used on this particular day were 1pm, 2pm, and 3pm.

• Since there is always a finite number of possible routes and arrival times, there is also always a finite number of logical sentences in step 3.

• Step 5a only considers proofs that can be expressed in under one million characters because otherwise this step might run forever.

• If a proof is found in step 5a then the loop over sentences terminates at step 5b.
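For concreteness, the alien procedure might be sketched in Python as follows. The function consistent_proof_found is a hypothetical stand-in for the bounded proof search of step 5a, which is the part this parable is really about:

```python
import random
from itertools import product

ROUTES = ["SLOW", "FAST"]
ARRIVAL_TIMES = ["1pm", "2pm", "3pm"]  # these happen to sort correctly as strings

def choose_route(consistent_proof_found):
    # Step 3: one sentence per assignment of an arrival time to every route.
    sentences = [dict(zip(ROUTES, times))
                 for times in product(ARRIVAL_TIMES, repeat=len(ROUTES))]
    random.shuffle(sentences)                 # step 4
    for sentence in sentences:                # step 5
        if consistent_proof_found(sentence):  # step 5a: bounded proof search
            # Step 5b: output the route with the earliest arrival time.
            return min(sentence, key=sentence.get)
    return None
```

If the proof search "succeeds" only on the sentence assigning 3pm to FAST and 2pm to SLOW, this procedure returns "SLOW", which is exactly the failure the parable goes on to describe.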

Now from our perspective here on Earth, we might think that this is a strange algorithm for an autonomous car to use. We might ask why the autonomous car didn’t just consider the routes one by one and compute an arrival time for each one. We might ask what business an autonomous car has conducting a search over logical proofs while a passenger is waiting patiently in the back seat. These are reasonable questions from our perspective here on Earth. But this parable is not about why this autonomous car was programmed in this way, it is about the tragic consequences that followed from it being programmed in this way.

And so the autonomous car began using its procedure to decide which route to take. It got to step 3 and the sentences in its list were:

IF route FAST is taken THEN I will arrive at 1pm AND IF route SLOW is taken THEN I will arrive at 1pm
IF route FAST is taken THEN I will arrive at 1pm AND IF route SLOW is taken THEN I will arrive at 2pm
IF route FAST is taken THEN I will arrive at 1pm AND IF route SLOW is taken THEN I will arrive at 3pm
IF route FAST is taken THEN I will arrive at 2pm AND IF route SLOW is taken THEN I will arrive at 1pm
IF route FAST is taken THEN I will arrive at 2pm AND IF route SLOW is taken THEN I will arrive at 2pm
IF route FAST is taken THEN I will arrive at 2pm AND IF route SLOW is taken THEN I will arrive at 3pm
IF route FAST is taken THEN I will arrive at 3pm AND IF route SLOW is taken THEN I will arrive at 1pm
IF route FAST is taken THEN I will arrive at 3pm AND IF route SLOW is taken THEN I will arrive at 2pm
IF route FAST is taken THEN I will arrive at 3pm AND IF route SLOW is taken THEN I will arrive at 3pm

Then in step 4 it shuffled this list of sentences, and just by chance the very first sentence that it considered in step 5 was

IF route FAST is taken THEN I will arrive at 3pm AND IF route SLOW is taken THEN I will arrive at 2pm

Now you and I, dear reader, can see that this sentence is false. In fact the fast route would get the autonomous car to its destination at 1pm, not 3pm. But the autonomous car was merely executing its programmed instructions. And at this time its programmed instructions told it, in step 5a, to look for a proof that the sentence was consistent with its model of the world. And its model of the world was the agent model, and within that model it had an understanding of the road network (the environment), as well as an understanding of its own decision-making procedure (the agent; itself), and so it could ask whether it was in fact the case that

IF route FAST is taken THEN I will arrive at 3pm AND IF route SLOW is taken THEN I will arrive at 2pm

And so in step 5a it looked for a proof that this sentence was consistent with its model, and here is how that went.

The autonomous car saw that its own decision-making algorithm would consider sentences and determine, for each one, whether it was consistent with its internal model, by searching for a proof. It saw that the first such sentence to be considered would be

IF route FAST is taken THEN I will arrive at 3pm AND IF route SLOW is taken THEN I will arrive at 2pm

It saw that the route chosen by its decision-making algorithm would be determined by whether this sentence was determined by the algorithm to be consistent with its model or not. It did not know whether this particular sentence was consistent with its model, so it decided to proceed by using proof-by-implication, which is where you assume some sentence p, then derive some sentence q following logically from it, then you get "IF p THEN q". (This is also a common proof method here on Earth.)

The p that the algorithm decided to assume for proof-by-implication was the following:

A proof exists for "IF route FAST is taken THEN I will arrive at 3pm AND IF route SLOW is taken THEN I will arrive at 2pm"

Now, to understand what happened next, it is important to understand that this is not the same as assuming the sentence

IF route FAST is taken THEN I will arrive at 3pm AND IF route SLOW is taken THEN I will arrive at 2pm

The autonomous car saw that the behavior of its decision algorithm rested not on whether taking the FAST route would in fact get it there by 3pm, nor whether taking the SLOW route would in fact get it there at 2pm, but on whether a proof existed of the sentence in question.

So the autonomous car proceeded in its reasoning, assuming for the sake of proof-by-implication that

a proof exists for "IF route FAST is taken THEN I will arrive at 3pm AND IF route SLOW is taken THEN I will arrive at 2pm"

It saw that, given this assumption, its decision algorithm would, at step 5a, find the proof. It saw that its decision algorithm would, upon finding this proof, stop considering further sentences in the list and take the route associated with the earliest arrival time in the sentence that had been proved. It saw that the route associated with the earliest arrival time in the sentence that had been proved was the SLOW route. It then looked into its model of the road network and saw that, upon taking the SLOW route, it would arrive at its destination at 2pm.

Now you might ask why, if the autonomous car had a sufficiently detailed model of the road network, it didn’t just use that model to compare the two routes at the outset and skip over the search over proofs. Wouldn’t it be simpler just to compare the two routes and pick the one with the shortest arrival time? Well, yes, that would seem simpler to us here on Earth. But that is just not how this autonomous car was programmed.

This was all within the scope of the proof-by-implication, so at this time the autonomous car had in fact proved

IF a proof exists for "IF route FAST is taken THEN I will arrive at 3pm AND IF route SLOW is taken THEN I will arrive at 2pm" THEN route SLOW is taken AND I will arrive at 2pm

Now you might think, dear reader, that the manner of reasoning used up to this point by this autonomous car was a little strange. I am afraid, dear reader, that at this point its reasoning became stranger still, although never stepping outside the very same rules of logic that we use here on Earth.

Still searching for a proof in step 5a, the autonomous car made the following logical deductions.

First, from

I will arrive at 2pm

it deduced

IF route SLOW is taken THEN I will arrive at 2pm

This is permitted by the laws of logic because whenever one has "q", one can deduce "IF p THEN q", since the "IF...THEN" is material implication and is logically equivalent to "NOT p OR q".

Second, from

route SLOW is taken

it deduced

NOT (route FAST is taken)

And then from this it deduced

IF route FAST is taken THEN I will arrive at 3pm

This, I’m afraid dear reader, is also permitted by the laws of logic. Once again, "IF p THEN q" is equivalent to "NOT p OR q", and the autonomous car already derived "NOT (route FAST is taken)", so it was perfectly justified in deriving, within the scope of the proof-by-implication in which it had established that route FAST would not be taken, that

IF route FAST is taken THEN I will arrive at 3pm
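Both deductions rest on nothing more than material implication, and (as a brief aside from the parable) can be verified mechanically with a truth table:

```python
from itertools import product

def implies(p, q):
    # Material implication: "IF p THEN q" is "NOT p OR q".
    return (not p) or q

cases = list(product([False, True], repeat=2))

# First deduction: whenever q holds, "IF p THEN q" holds.
assert all(implies(p, q) for p, q in cases if q)
# Second deduction: whenever NOT p holds, "IF p THEN q" also holds.
assert all(implies(p, q) for p, q in cases if not p)
print("both deductions check out")
```

So the car's reasoning is locally impeccable; the trouble lies elsewhere.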

And so, as this tragedy nears its final movements, the autonomous car had derived, using proof-by-implication, the following:

IF a proof exists for "IF route FAST is taken THEN I will arrive at 3pm AND IF route SLOW is taken THEN I will arrive at 2pm" THEN IF route FAST is taken THEN I will arrive at 3pm AND IF route SLOW is taken THEN I will arrive at 2pm

And here, dear reader, you will see the shape of the demise of our dear autonomous car already outlined in this tragic tale. For Löb’s theorem, with which I am sure you are well acquainted, tells us that whenever we have a proof system that can prove a sentence of the form

IF a proof exists for p THEN p

then that very same formal system will, unwittingly, and without the slightest possibility of mistake, eventually also prove the sentence

p
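In standard notation, writing □p for "a proof exists for p", Löb's theorem is the statement:

```latex
% Löb's theorem: if the system proves \Box p \rightarrow p, then it proves p.
\text{if } \vdash \Box p \rightarrow p \text{, then } \vdash p
% Internalized within the system, the same schema reads:
% \vdash \Box(\Box p \rightarrow p) \rightarrow \Box p
```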

Yes, tragic indeed, dear reader, is it not? Shall we continue with this tale, or is its conclusion already clear?

The autonomous car had proved

IF a proof exists for "IF route FAST is taken THEN I will arrive at 3pm AND IF route SLOW is taken THEN I will arrive at 2pm" THEN IF route FAST is taken THEN I will arrive at 3pm AND IF route SLOW is taken THEN I will arrive at 2pm

This sentence being of no further use, the autonomous car discarded it and moved on with its proof search. It is not known for how long the passenger sat in the back seat, or by what strange motions of logic the proof search proceeded from this point. Perhaps many aeons passed, or perhaps it was merely seconds. We do not know. All that is known is that the autonomous car eventually proved

IF route FAST is taken THEN I will arrive at 3pm AND IF route SLOW is taken THEN I will arrive at 2pm

We know that it proved this because Löb’s theorem tells us that it must. We have seen that the logical apparatus of this autonomous car was capable of proving "IF a proof exists for p THEN p", and Löb’s theorem tells us that any logical apparatus capable of proving "IF a proof exists for p THEN p" is also capable of proving "p".

Therefore a proof was found, and the autonomous car proceeded to step 5b, and selected the route with the earliest arrival time, and that route was the SLOW route.

Discussion

How oh how did things go so tragically wrong for our autonomous car?

The first thing we must see, dear reader, is that the strange engineers in this far away place made certain choices in designing this autonomous car. These choices were design choices. They were not necessitated by the laws of logic — nobody would suggest that they were. The laws of logic do not force us to design autonomous cars with sophisticated self-models that use proof search to decide among routes to a destination.

So the most direct conclusion to draw from this sad fable, dear reader, is not to design autonomous cars in this particular way here on Earth, lest we ourselves suffer the fate of the unfortunate passenger from this tale. Making decisions based on proof search might sound like an adventure, dear reader, but as this tale shows, it is not for the faint of heart, nor for the punctual.

There are, of course, any number of alternative design choices one might make for an autonomous car. One might, for example, estimate an arrival time for each route based on distance and traffic conditions, and select the route for which the estimated arrival time is earliest. Why did the alien designers of our cherished autonomous car not give it a decision algorithm such as this?
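The alternative design just described is, by contrast, nearly a one-liner. Here estimated_arrival_time is a hypothetical subroutine that would consult the map and traffic data:

```python
def choose_route_sensibly(routes, estimated_arrival_time):
    # Estimate an arrival time for each route and take the earliest.
    return min(routes, key=estimated_arrival_time)
```

No proof search, no self-model, and no opening for Löb's theorem to sneak in.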

Well it is not for us to understand the minds of such alien creatures. And yet we might, for the sake of satisfying our own curiosity, speculate, while knowing, of course, that the truth will be far beyond any conjecture that we could possibly invent.

Perhaps it was the intention of these extraordinary designers to incorporate not just knowledge of the environment, but also knowledge of the agent, which in this case is the autonomous car itself, into the estimated arrival time for each route. It is after all the case that the programming of the car affects its arrival time just as much as the environment affects its arrival time. Were the programming of the car such that left turns would be taken very quickly while right turns would be taken very slowly, it would be to the benefit of our esteemed and punctual passenger to pick a route with as few right turns as possible. And although one could hard-code a heuristic that minimizes the number of right turns into the decision algorithm, these alien engineers hardly seem like the type to make such brutish engineering choices. A far more general approach, and perhaps — who can say? — the intention of these remarkable alien engineers would be for the autonomous car to examine its own programming and determine for itself which factors would contribute to the speedy navigation of a route. This is certainly an ambition that we can sympathize with.

We can see, then, that by examining its own programming, our cherished autonomous car "steps out of itself" and looks back at itself as though watching a child from a distance. It is only natural, then, for us, in making our own choices about the design of our own autonomous systems, to similarly "step out" and look at the systems we are building from a certain distance. When we do this, we see that the peculiar design choices made for this peculiar autonomous car are faulty. There is no need to "put ourselves in the shoes" of the autonomous car and wonder why this or that conclusion was reached. We can simply see that the system fails to behave in the manner that we wish it to behave.

And we might speculate that it has something to do with the way the agent model was used.

But that topic, dear reader, will have to wait for another day.

Discuss

### Intelligence without Consciousness

July 7, 2021 - 08:27
Published on July 7, 2021 5:27 AM GMT

I sleep-talk. In fact, I don't just talk in my sleep, I can carry on a conversation. The advice and insight I offer are much lower quality than when I'm awake, but I'm mostly coherent. Once someone recorded a conversation with me, and it was pretty funny. For example: "A lot of people like Batman. We should buy stock in Batman".

I'm not conscious, though. I'm not self-aware. If I have qualia, it's a very tiny amount of qualia. I definitely wouldn't wonder "do I have qualia?" If I didn't keep waking up, there wouldn't be a moral reason for others not to treat me how they find most convenient.

What does this mean? It means that Nonsentient Optimizers and Nonperson Predicates are definitely possible. There is some threshold of intelligence and/or self-awareness needed to change a neural net into a person.

(Lucid dreaming is a separate issue. I'm talking about something that happens even when not lucid dreaming.)

Discuss

### A second example of conditional orthogonality in finite factored sets

July 7, 2021 - 04:40
Published on July 7, 2021 1:40 AM GMT

Yesterday, I wrote a post that gave an example of conditional non-orthogonality in finite factored sets. I encourage you to read that post first. However, I'm kind of dissatisfied with it because it doesn't show any interesting cases of conditional orthogonality (despite the title seeming to promise that). So I'd like to show you one today.

First, let's imagine that Alice is a person who has some height. Bob and Charlie both measure her height, and take note of the measurements. However, their measuring instruments have independent sources of error, such that neither gets exactly the right answer. In this world, Bob's measurement is not independent of Charlie's measurement, because they'll both be pretty close - the error isn't that big. However, once you know Alice's height, they will be independent, because given that knowledge, learning Bob's measurement doesn't tell you anything about Charlie's measurement. Below, we'll see how we can formalize that in the language of finite factored sets.

To formalize this, we'll take the underlying set to consist of tuples (a,ϵb,ϵc), where a is Alice's height, ϵb is Bob's measurement error, and ϵc is Charlie's measurement error, with factors A, EB, and EC that partition the tuples by their values of a, ϵb, and ϵc respectively. Next, we'll define the function b(a,ϵb,ϵc)=a+ϵb, which gives the height that Bob measures, and the partition B which groups tuples with the same value of b together. Similarly, we'll define c(a,ϵb,ϵc)=a+ϵc, which tells us the height that Charlie measures, and the partition C that groups tuples together by their value of c.

What's the history of B? Well, it's the smallest set of factors such that if we know the 'value' of the factors, then we know the 'value' of B, and that's {A,EB}. Similarly, the history of C is {A,EC}. So B's and C's histories have A in common, and therefore aren't orthogonal.

Now, let's consider the set A2={(a,ϵb,ϵc)∣a=2}, which represents the worlds where Alice is 2 metres tall, and check out the conditional histories. The conditional history of B in A2 is the smallest set of factors such that once you're in A2, knowing the 'values' of those factors tells you the 'value' of B, and that includes all the factors that are 'entangled' with those factors by the set A2 - for more detail, check out the previous post. In this case, the conditional history of B is just {EB}: Once you're in A2, knowing ϵb is enough to tell you the value of b. Furthermore, the only thing you need to know to figure out whether something's in A2 is a, so {EB} also satisfies the second condition: if the 'value' of EB at some tuple is compatible with being in A2 (which is always true), and the 'values' of A and EC are jointly compatible with being in A2, then you must be in A2. Similarly, the conditional history of C given A2 is {EC}. So, the conditional histories don't intersect, and B is orthogonal to C given A2.
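The histories above can be found by brute force. The sketch below uses toy values for heights and errors, and it checks only the "smallest set of factors that determines the partition's value" part of the history definition; it ignores the second (entanglement) condition, which happens not to make a difference in this example:

```python
from itertools import product, combinations

# Underlying set: tuples (a, eb, ec) of Alice's height and the two errors.
HEIGHTS = [1, 2, 3]
ERRORS = [-1, 0, 1]
S = list(product(HEIGHTS, ERRORS, ERRORS))

# Each coordinate plays the role of one factor.
FACTORS = {"A": 0, "EB": 1, "EC": 2}

def bob(x):      # b(a, eb, ec) = a + eb
    return x[0] + x[1]

def charlie(x):  # c(a, eb, ec) = a + ec
    return x[0] + x[2]

def determines(coords, f, subset):
    """Do the coordinates in `coords` pin down f's value within `subset`?"""
    seen = {}
    for x in subset:
        key = tuple(x[i] for i in coords)
        if key in seen and seen[key] != f(x):
            return False
        seen[key] = f(x)
    return True

def min_determining_factors(f, subset):
    """Smallest set of factors whose values determine f on `subset`."""
    for k in range(len(FACTORS) + 1):
        for names in combinations(FACTORS, k):
            if determines([FACTORS[n] for n in names], f, subset):
                return set(names)

print(min_determining_factors(bob, S))      # history of B: {A, EB}
print(min_determining_factors(charlie, S))  # history of C: {A, EC}

A2 = [x for x in S if x[0] == 2]  # condition on Alice being 2 metres tall
print(min_determining_factors(bob, A2))      # conditional history: {EB}
print(min_determining_factors(charlie, A2))  # conditional history: {EC}
```

The unconditional histories overlap in A, so B and C aren't orthogonal; conditioned on A2 the histories shrink to {EB} and {EC}, which are disjoint, recovering the conditional orthogonality described above.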

Hopefully this post was useful both in giving you a better sense of conditional orthogonality, and in illustrating how to model things with finite factored sets.

[footnote 1] Note that we could 'change coordinates' and have the underlying set be tuples (a,b,c) - Alice's height, Bob's measurement, and Charlie's measurement - and the factors being:

• A, the partition of points according to their value of a
• EB, the partition of points according to their value of b−a
• EC, the partition of points according to their value of c−a

This would give exactly the same results as those in the main post!

Discuss