LessWrong.com News

A community blog devoted to refining the art of rationality

Where do LessWrong rationalists debate?

Published on April 29, 2021 9:23 PM GMT

There are a few places where rationalists debate on the internet outside of Less Wrong. Sometimes those places were created for debates that do not belong on the LW website, such as politics or investing. Sometimes they were created for a specific subgroup of rationalists, for example people who attend the same meetup. Some of them are on Slack, some on Discord, some on Facebook, and there are probably other places.

I propose the following:

  1. Let's make a list of all the places where LessWrong rationalists debate currently.
  2. Let's consider whether it would make sense to reduce the number of these places -- either by choosing among the existing ones, or creating a new one.
  3. Afterwards, let's make the remaining places visible enough on the Less Wrong website, to avoid reinventing the wheel in the future.

Some clarifications:

This proposal only concerns the online forums of LessWrong rationalists, or a subset thereof (such as local meetups, or rationalists who are interested in some specific topic). It does not concern websites for a wider audience, such as rationality-adjacent communities, Slate Star Codex / Astral Codex Ten, meta/post/dark/whatever rationalists, effective altruists, and so on. -- The idea is that the unified forum should still be populated by the Less Wrong audience (rather than by the general public), keeping the consensus on rationality, science, and reductionism, and preserving the above-average quality of discourse.

On websites such as Slack or Discord, it is possible to create separate channels / chat rooms, and join a selected subset of them. I assume the unified forum would also have parts that are interesting only for a subset of members, such as specific topics, or maybe specific meetups. -- This would remove the need to create a separate forum for specific topics or meetups, as long as you want to debate these with (a subset of) fellow rationalists.

I would prefer not using Facebook, to keep our rationalist digital identities separated from our real names and jobs. (Also, Facebook is addictive, and its UI sucks.)

The thing I am specifically trying to avoid here is unnecessary fragmentation, where people duplicate things simply because they don't know these already exist somewhere else. (Specifically, my trigger was getting an invitation to join a new discussion for "rationalists interested in investing", when I already know about two other such places, and it would obviously make more sense to join all three groups together to achieve greater "wisdom of crowds". Each of these places was created independently, split off from an already small group of rationalists that had created their own discussion site for some other purpose. I assume there are similarly duplicate spaces to discuss politics, exercise, nutrition, books, etc.)

I can also imagine some possible disadvantages of a unified forum: too much noise from having too many people (or maybe some people who write too much), disagreement on shared norms such as the Overton window in political debates or the level of evidence required for lifestyle debates, etc. But I think these can be solved by having subforums. And in the worst case we can return to the current situation (it's not like we have to delete the old places). Therefore, I think this is worth trying.




ACX calibration and updating exercise

Published on April 29, 2021 7:15 PM GMT

Scott Alexander makes yearly predictions on SSC/ACX. This year, I avoided looking at his blog post so I could make my own predictions about the same topics, using a copy of the post with Scott's probabilities removed. Then I looked at his and others' numbers, to see how much they update me.

For each question, the four bullets below follow the order labeled on question 1: Initial, Post-Boyfriend, Post-Scott, and Final.

My numbers were influenced a lot by the fact that I expect to round to the nearest 10% (or to 1% / 5% / 95% / 99% at the extremes) to check calibration.
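
As a concrete illustration of that kind of calibration check, here is a minimal Python sketch (not part of the original exercise; the bucket list and the example predictions are made up): each prediction is rounded to the nearest standard bucket, and the bucket's probability can then be compared against the observed frequency of outcomes that came true.

```python
from collections import defaultdict

# Standard buckets: the extremes plus multiples of 10%.
BUCKETS = [0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99]

def nearest_bucket(p):
    """Round a probability to the closest calibration bucket."""
    return min(BUCKETS, key=lambda b: abs(b - p))

def calibration_table(predictions):
    """predictions: list of (probability, outcome) pairs, where outcome is True/False.
    Returns {bucket: (number of predictions, observed frequency of True)}."""
    grouped = defaultdict(list)
    for p, outcome in predictions:
        grouped[nearest_bucket(p)].append(outcome)
    return {b: (len(v), sum(v) / len(v)) for b, v in sorted(grouped.items())}

# Hypothetical example: three predictions near 70% and one near 5%.
example = [(0.73, True), (0.74, True), (0.66, False), (0.04, False)]
print(calibration_table(example))
# {0.05: (1, 0.0), 0.7: (3, 0.666...)} -- well-calibrated buckets should show
# observed frequencies close to the bucket probability (given enough predictions).
```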

 

US/WORLD

1. Biden approval rating (as per 538) is greater than 50%: [SA 80, ZM 80, MS 61]

  • Initial: 73%
  • Post-Boyfriend: 73%
  • Post-Scott: 75%
  • Final: 75%

2. Court packing is clearly going to happen (new justices don't have to be appointed by end of year): [SA 5, ZM 5, MS ~4-5+]

  • 5%
  • 7%
  • 4%
  • 4%

3. Yang is New York mayor: [SA 80, ZM 70, MS 69-77]

  • 85%
  • 85%
  • 82%
  • 74%

4. Newsom recalled as CA governor: [SA 5, ZM ?, MS ~7]

  • 4%
  • 6%
  • 4%
  • 10%

5. At least $250 million in damage from BLM protests this year: [SA 30, ZM ≤20]

  • 45%
  • 45%
  • 30%
  • 19%

6. Significant capital gains tax hike (above 30% for highest bracket): [SA 20, ZM 20]

  • 40%
  • 40%
  • 20%
  • 23%

7. Trump is allowed back on Twitter: [SA 20, ZM 10]

  • 7%
  • 8%
  • 13%
  • 12%

8. Tokyo Olympics happen on schedule: [SA 70, ZM 80, MS 75]

  • 66%
  • 66%
  • 66%
  • 74%

9. Major flare-up (significantly worse than anything in past 5 years) in Russia/Ukraine war: [SA 20, ZM ≤15, MS ~16]

  • 6%
  • 6%
  • 20%
  • 14%

10. Major flare-up (significantly worse than anything in past 10 years) in Israel/Palestine conflict: [SA 5, ZM 5-~15]

  • 3%
  • 3%
  • 3%
  • 11%

11. Major flare-up (significantly worse than anything in past 50 years) in China/Taiwan conflict: [SA 5, ZM 3]

  • 2%
  • 2%
  • 2%
  • 2%

12. Netanyahu is still Israeli PM: [SA 40, ZM 30, MS 22]

  • 86%
  • 86%
  • 40%
  • 35%

13. Prospera has at least 1000 residents: [SA 30, ZM 30]

  • 13%
  • 15%
  • 24%
  • 21%

ECON/TECH

14. Gamestop stock price still above $100: [SA 50, ZM 50, MS 60]

  • 39%
  • 39%
  • 50%
  • 59%

15. Bitcoin above 100K: [SA 40, ZM 25, MS 25]

  • 20%
  • 20%
  • 36%
  • 18%

16. Ethereum above 5K: [SA 50, ZM 30, MS 11]

  • 20%
  • 20%
  • 40%
  • 19%

17. Ethereum above 0.05 BTC: [SA 70, ZM 55, MS ~33]

  • 50%
  • 50%
  • 65%
  • 50%

18. Dow above 35K: [SA 90, ZM<<90, MS 50]

  • 77%
  • 77%
  • 90%
  • 50%

19. ...above 37.5K: [SA 70, ZM ?, MS 20]

  • 48%
  • 48%
  • 70%
  • 20%

20. Unemployment above 5%: [SA 40, ZM 50, MS 37]

  • 74%
  • 74%
  • 55%
  • 55%

21. Google widely allows remote work, no questions asked: [SA 20, ZM 30]

  • 18%
  • 20%
  • 20%
  • 23%

22. Starship reaches orbit: [SA 60, ZM 60, MS 50]

  • 42%
  • 45%
  • 60%
  • 57%

COVID

23. Fewer than 10K daily average official COVID cases in US in December 2021: [SA 30, ZM 70]

  • 48%
  • 48%
  • 32%
  • 52%

24. Fewer than 50K daily average COVID cases worldwide in December 2021: [SA 1, ZM 1]

  • 8%
  • 8%
  • 0.8%
  • 3%

25. Greater than 66% of US population vaccinated against COVID: [SA 50, ZM 60, MS ~77+]

  • 69%
  • 69%
  • 59%
  • 59%

26. India's official case count is higher than US: [SA 50, ZM ≥80]

  • 70%
  • 70%
  • 55%
  • 75%

27. Vitamin D is [not] generally recognized (eg NICE, UpToDate) as effective COVID treatment: [SA 70, ZM 50 (but mistook SA as 30), MS ~75-85]

  • 93%
  • 93%
  • 81%
  • 81%

28. Something else not currently used becomes first-line treatment for COVID: [SA 40, ZM 25]

  • 29%
  • 36%
  • 40%
  • 40%

29. Some new variant not currently known is greater than 25% of cases: [SA 50, ZM 60]

  • 35%
  • 42%
  • 44%
  • 52%

30. Some new variant where no existing vaccine is more than 50% effective: [SA 40, ZM ≤25]

  • 8%
  • 10%
  • 27%
  • 17%

31. US approves AstraZeneca vaccine: [SA 20, ZM 20, MS 37]

  • 68%
  • 65%
  • 20%
  • 20%

32. Most people I see in the local grocery store aren't wearing a mask: [SA 60, ZM 75]

  • 53%
  • 53%
  • 60%
  • 69%

COMMUNITY

33. Major rationalist org leaves Bay Area: [SA 60, ZM ?]

  • 91%
  • 91%
  • 89%
  • 89%

34-37. [redacted]

38. No new residents at our housing cluster: [SA 40, ZM 35]

  • 20%
  • 20%
  • 40%
  • 37%

39. No current residents leave our housing cluster: [SA 60, ZM 65]

  • 48%
  • 48%
  • 60%
  • 56%

40-52. [redacted]

53. At least seven days my house is orange or worse on PurpleAir.com because of fires: [SA 80, ZM 80]

  • 53%
  • 55%
  • 80%
  • 80%

PERSONAL

54-59. [redacted]

60. There are no appraisal-related complications to the new house purchase: [SA 50, ZM 60]

  • 30%
  • 40%
  • 50%
  • 60%

61. I live in the new house: [SA 95, ZM 90]

  • 82%
  • 82%
  • 93%
  • 92%

62. I live in the top bedroom: [SA 60, ZM 65]

  • 67%
  • 67%
  • 58%
  • 65%

63. I can hear / get annoyed by neighbor TV noise: [SA 40, ZM 30]

  • 58%
  • 58%
  • 40%
  • 37%

64. I'm playing in a D&D campaign: [SA 70, ZM 70]

  • 20%
  • 20%
  • 70%
  • 68%

65. I go on at least one international trip: [SA 60, ZM 60]

  • 26%
  • 26%
  • 60%
  • 57%

66. I spend at least a month living somewhere other than the Bay: [SA 50, ZM 50]

  • 17%
  • 17%
  • 50%
  • 50%

67. I continue my current exercise routine (and get through an entire cycle of it) in Q4 2021: [SA 70, ZM 65]

  • 17%
  • 15%
  • 63%
  • 60%

68. I meditate at least 15 days in Q4 2021: [SA 60, ZM 60]

  • 15%
  • 15%
  • 60%
  • 59%

69. I take oroxylum at least 5 times in Q4 2021: [SA 40, ZM 40]

  • 20%
  • 20%
  • 38%
  • 38%

70. I take some substance I haven't discovered yet at least 5 times in Q4 2021 (testing exempted): [SA 30, ZM 25]

  • 31%
  • 31%
  • 30%
  • 24%

71. I do at least six new biohacking experiments in the next eight months: [SA 40, ZM 40]

  • 36%
  • 36%
  • 36%
  • 38%

72. [redacted]:

73. The Twitter account I check most frequently isn't one of the five I check frequently now: [SA 20, ZM 15]

  • 48%
  • 48%
  • 20%
  • 24%

74. I make/retweet at least 25 tweets between now and 2022: [SA 70, ZM 75]

  • 60%
  • 60%
  • 70%
  • 70%

WORK

75. Lorien has 100+ patients: [SA 90, ZM 85]

  • 58%
  • 58%
  • 90%
  • 87%

76. 150+ patients: [SA 20, ZM 25]

  • 45%
  • 45%
  • 20%
  • 21%

77. 200+ patients: [SA 5, ZM 5-10]

  • 38%
  • 37%
  • 9%
  • 8%

78. I've written at least ten more Lorien writeups (so total at least 27): [SA 30, ZM 30]

  • 43%
  • 43%
  • 24%
  • 24%

79-83. [redacted]:

84. I have switched medical records systems: [SA 20, ZM 15]

  • 46%
  • 50%
  • 20%
  • 20%

85. I have changed my pricing scheme: [SA 20, ZM 20]

  • 28%
  • 34%
  • 20%
  • 20%

BLOG

86. ACX is earning more money than it is right now: [SA 70, ZM 80]

  • 65%
  • 65%
  • 70%
  • 72%

87-89. [redacted]

90. There is another article primarily about SSC/ACX/me in a major news source: [SA 10, ZM 25]

  • 75%
  • 71%
  • 23%
  • 23%

91. I subscribe to at least 5 new Substacks (so total of 8): [SA 20, ZM 30]

  • 13%
  • 18%
  • 20%
  • 20%

92. I've read and reviewed How Asia Works: [SA 90, ZM 90]

  • 50%
  • 53%
  • 88%
  • 88%

93. I've read and reviewed Nixonland: [SA 70, ZM 70]

  • 45%
  • 56%
  • 65%
  • 66%

94. I've read and reviewed Scout Mindset: [SA 60, ZM 70]

  • 59%
  • 60%
  • 60%
  • 63%

95. I've read and reviewed at least two more dictator books: [SA 50, ZM 45]

  • 42%
  • 42%
  • 42%
  • 42%

96. I've started and am at least 25% of the way through the formal editing process for Unsong: [SA 30, ZM 30]

  • 60%
  • 63%
  • 30%
  • 29%

97. Unsong is published: [SA 10, ZM 10]

  • 15%
  • 42%
  • 14%
  • 14%

98. I've written at least five chapters of some non-Unsong book I hope to publish: [SA 40]

  • 41%
  • 39%
  • 40%
  • 40%

99. [redacted] wins the book review contest: [SA 60, ZM 50]

  • 8%
  • 8%
  • 52%
  • 48%

100. I run an ACX reader survey: [SA 50, ZM 50]

  • 55%
  • 53%
  • 48%
  • 48%

101. I run a normal ACX survey (must start, but not necessarily finish, before end of year): [SA 90, ZM 90]

  • 65%
  • 65%
  • 86%
  • 84%

102. By end of year, some other post beats NYT commentary for my most popular post: [SA 10, ZM 10]

  • 65%
  • 65%
  • 13%
  • 14%

103. I finish and post the culture wars essay I'm working on: [SA 90, ZM 90]

  • 38%
  • 40%
  • 85%
  • 85%

104. I finish and post the climate change essay I'm working on: [SA 80, ZM 80]

  • 46%
  • 47%
  • 80%
  • 80%

105. I finish and post the CO2 essay I'm working on: [SA 80, ZM 80]

  • 49%
  • 52%
  • 80%
  • 80%

106. I have a queue of fewer than ten extra posts: [SA 70, ZM 60]

  • 57%
  • 57%
  • 70%
  • 60%

META

107. I double my current amount of money ($1000) on PredictIt: [SA 10, ZM 15]

  • 49%
  • 49%
  • 10%
  • 22%

108. I post my scores on these predictions before 3/1/22: [SA 70, ZM 75]

  • 54%
  • 54%
  • 63%
  • 64%



We need a career path for invention

Published on April 29, 2021 6:11 PM GMT

If technological progress has slowed down, what is causing it? Here is a hypothesis.

Broadly speaking, there are three domains of activity important to technological progress: science, invention, and business. Science discovers new knowledge; invention creates useful machines, chemicals, processes, or other products; and business produces and distributes these products in a scalable, self-sustaining way. (Occasionally inventions are distributed by government: water sanitation is an example. But this oversimplified model will serve for our purposes.)

These domains do not form a simple linear pipeline, but they are distinct areas that attract different types of people, pose different challenges, and are judged by different standards. As such they create distinct communities and subcultures.

My hypothesis is that while science and business have functioning career paths, invention today does not.

Consider science. Suppose a high school or university student has a glimmer of desire to become a scientist. They will find that their road has already been paved. “Scientist” is a career. There’s an established path into the career: get a BS and then a PhD in a scientific field. There are research labs that hire scientists, organize them into teams, and give them space and equipment. There is funding for all of this, from government and philanthropy. There is an established deliverable: talks and papers, presented at conferences and published in journals. There are awards and honors that confer prestige within the discipline; some of these, such as the Nobel, are even well-known and respected among the general public.

All of this combines to create a career path for the scientist: anyone with even a modest level of commitment and effort can start down the path, and those who are exceptionally talented and ambitious can reach for inspiring goals. Importantly, there is a feedback loop in which progress down the career path opens opportunities. The more the scientist produces legible accomplishments, the more they are able to get grants, secure coveted positions, and attract talent to work with them. Money, prestige, and the opportunity to do meaningful work all (roughly) go together.

Entrepreneurship has different structures, but the career path is there nonetheless. “Startup founder” is not a job you get hired for; it is a job the founder must create for themselves. They must raise their own funding, create their own organization, and hire their own team. In this sense, the founder is much less well-supported than the scientist. But there are established sources of funding for startups, in venture capital. There is a known job title, CEO, that you can give to yourself and that is understood by others in the industry and in society. There is an objective way to measure success: company profits and market valuation.

The founder career path is to create a successful company. Once again, progress on this path opens up opportunities. The most successful founders have the resources and reputation to launch even more varied and ambitious projects (think Jeff Bezos or Elon Musk). However, a startup failure does not end a career. In Silicon Valley at least, failure is not a black mark, and a failed founder can do another startup, or get a job in engineering, design, sales, or management.

We can think of a career path as a social support structure around a value. In science, the value is new knowledge. In entrepreneurship, the value is profitable business. Having a support structure around a value means that if someone is motivated to pursue that value, they can be paid to do so; and if they succeed, they can expect both prestige and expanded career opportunities.

Now, what is the career path for an inventor?

“Inventor” is not a role one can be hired for. The aspiring inventor finds themselves straddling science and business. They could join a research lab, or become an engineer at a technology-based company. In either case, they will be misaligned with their environment. In research, what is valued is new knowledge. An invention that achieves a practical goal is not valued if it demonstrates no new scientific principle. In the corporate environment, what is valued is what drives the business. The engineer may find themselves optimizing and refining existing products, without any mandate to create fundamentally new ones. Neither environment values simply making fundamentally new technologies work. Alternately, an inventor could also be an entrepreneur, starting a company to commercialize the invention. But this requires of the inventor that they have the wherewithal of the startup founder to raise money, hire a team, etc. We ask this of founders because it’s in the nature of the job: someone who can’t do these things probably wouldn’t succeed at the rest of the founder’s task. But we don’t expect every scientist to found their own research lab, and we shouldn’t expect every inventor to be a founder either.

In the early 20th century there were options for inventors. Some joined the great corporate research labs of the day: General Electric, Westinghouse, Kodak, Dow, DuPont, and of course Bell Labs. Others stayed independent, patented their inventions, and sold or licensed the patents to businesses. This let them make a living by inventing, without being personally responsible for commercializing, scaling, and distributing their inventions (although it required seed funding: many inventors had second jobs, or got angel investment through personal connections).

For reasons I still don’t fully understand, both options have withered. Corporate research is largely not as ambitious and long-term as it used to be. The lone inventor, too, seems to be a thing of the past.

The bottom line is that if a young person wants to focus their career on invention—as distinct from scientific research, corporate engineering, or entrepreneurship—the support structure doesn’t exist. There isn’t a straightforward way to get started, there isn’t an institution of any kind that will hire you into this role, and there isn’t a community that values what you are focused on and will reward you with prestige and further opportunities based on your success. In short, there is no career path.

Note that funding alone does not create a career path. You could start an “invention lab” and hire people to make inventions. You could even pay, reward and promote them based on their success at this task. But it would be difficult to hire any ambitious academic, or anyone who wanted to climb the corporate ladder, because this role wouldn’t be advancing either career path. That isn’t to say that it would be impossible to hire great talent, but you would be facing certain headwinds.

I think this is why the NIH receives relatively conventional grant proposals even for their “transformative research awards”, and why Donald Braben says that he had to build a high degree of trust with researchers before they would even tell him their ambitious research goals (see Scientific Freedom, p. 135). The community that forms around a career path has its own culture, and that includes an oral tradition of career advice, passed down from senior to junior members of the tribe. What kinds of goals to pursue, what kinds of jobs to take and when, how to choose among competing opportunities—there is folklore to provide guidance on all these questions. A single grant program or call for proposals cannot counter the weight of a culture that communicates: “the reliable way to build a scientific career is by proposing reasonable, incremental research goals that are well within the consensus of the field.”

In part, I see this as both the challenge and the opportunity of efforts like PARPA or FROs. It’s a challenge because a career path must ultimately be supported by a whole community. But it’s an opportunity because efforts like this could be how we bootstrap one. Funding alone doesn’t create a career path, but it can attract a few talented and ambitious mavericks who value independence and scoff at prestige. Success could bring more funding, and inspire imitators. Enough imitators would create an ecosystem. Enough success would bring prestige to the field.

It won’t be easy, but I am excited by efforts like these. We need a career path for invention.

Thanks to Ben Reinhardt, Matt Leggett, and Phil Mohun for reading a draft of this.




25 Min Talk on MetaEthical.AI with Questions from Stuart Armstrong

Published on April 29, 2021 3:38 PM GMT

Previous Posts
Formal Metaethics and Metasemantics for AI Alignment
New MetaEthical.AI Summary and Q&A at UC Berkeley 

This time I tried to focus less on the technical details and more on providing the intuition behind the principles guiding the project. I'm grateful for questions and comments from Stuart Armstrong and the AI Safety Reading Group. I've posted the slides on Twitter.

Abstract: We construct a fully technical ethical goal function for AI by directly tackling the philosophical problems of metaethics and mental content. To simplify our reduction of these philosophical challenges into "merely" engineering ones, we suppose that unlimited computation and a complete low-level causal model of the world and the adult human brains in it are available.

Given such a model, the AI attributes beliefs and values to a brain in two stages. First, it identifies the syntax of a brain's mental content by selecting a decision algorithm which is i) isomorphic to the brain's causal processes and ii) best compresses its behavior while iii) maximizing charity. The semantics of that content then consists first in sense data that primitively refer to their own occurrence and then in logical and causal structural combinations of such content.

The resulting decision algorithm can capture how we decide what to do, but it can also identify the ethical factors that we seek to determine when we decide what to value or even how to decide. Unfolding the implications of those factors, we arrive at what we should do. All together, this allows us to imbue the AI with the necessary concepts to determine and do what we should program it to do. 




On Chesterton's Fence

Published on April 29, 2021 2:56 PM GMT

Let's think of evolution as a conservative force for a bit.

The common narrative of evolution is that it is that great force of progress, optimizing the organisms, casting away inefficiencies, multiplying the species, and generally, driving living things to become more complex, starting from the simplest possible lifeforms, such as protozoans, up to that pinnacle of progress that is ourselves, the vertebrates.

But consider a plant that happens to have a mechanism to survive droughts, such as the ability to store water in its roots. If there is no drought this year, that mechanism is dead weight. It is costly to maintain and there's no immediate benefit from it. The plant would be better off without it. But the next year there's a drought and the plant survives. If evolution were purely progressive, the plant would optimize out its water-accumulating capacity in the first year and then die in the second year, leaving no progeny behind.

A conservative may thus say: Evolution is a way to store knowledge of the past. It's a trove of mechanisms that have once proved useful for surviving. "Optimizing" it is just a nice way of saying that the instruments from that war chest are being thrown away.

And, of course, neither our strawman progressive nor our strawman conservative is considering the full picture. Organisms benefit from having inherited the survival tricks of the past, but also from being able to respond quickly and in novel ways to new challenges. (But, to be fair, the conservative view is much easier to argue for. Arguing for the progressive view tends to lead to theories of group selection, if not to some semi-mystical élan évolutif.)

In the end it's probably all just a race against time. Events that are frequent enough (the low temperature during the night, the cold in the winter) are worth optimizing for. Infrequent events (the ice ages) not so much. In the former case the tools for dealing with the event are worth keeping in the war chest. In the latter case they will inevitably be thrown away, and we have to rely on the fact that at least some organisms will, for random reasons, be better suited to withstand cold, and that those organisms will be the lucky winners of the evolutionary race when the next ice age comes.

To switch from evolutionary biology to political science, let's recall what Jean Monnet once said on the topic of creating common European institutions:

The tragic events we have lived through and are still witnessing may have made us wiser. But men pass away; others will take our place. We cannot bequeath them our personal experience. That will die with us. But we can leave them institutions. The life of institutions is longer than that of men: if they are well built, they can accumulate and hand on the wisdom of succeeding generations.

It is easy to notice that this is the conservative argument as stated above. Institutions are our war chest of tools designed to deal with the problems we've encountered in the past. But they don't necessarily give us the ability to deal with novel challenges. To do that we need to create new institutions and add them to the chest. And, similarly, if the problem that led to the creation of an institution doesn't occur for a sufficiently long time, the institution will eventually erode, get removed or repurposed, and won't save us when the problem hits again.

And while this article doesn't offer any particularly new ideas, it may be worth thinking about before removing a fence.




Covid 4/29: Vaccination Slowdown

Published on April 29, 2021 1:50 PM GMT

Compared to expectations, excluding inevitable self-inflicted cratering of our vaccination rate, this was mostly a best realistic case scenario week. Johnson & Johnson was unpaused. India’s rate of case increases probably slowed down (they’re maxed out on testing capacity so it’s hard to know for sure). Biden seemed to come around to providing meaningful help to India and the world, at least to some extent. 

Most of all, cases in America were down a lot, and it’s now clear that things are steadily improving. Unless something changes, we’re going to beat this thing at least in a large portion of the country, and do so on schedule.

The decline in vaccinations looks really bad, though. Like the situation in India, it's Malcolm Reynolds-level worse than you know. When you look at first doses only, the lines are going straight down. If they go all the way to zero, many states and local areas won't get to herd immunity from the vaccine.

Let’s run the numbers.

The Numbers

Predictions

Prediction from last week: Positivity rate of 5.1% (down 0.2%) and deaths decline by 4%.

Result: 

Once again, Washington Post, your numbers don’t make sense together, but this is still clearly a big miss. 4.4% positivity rate is amazingly great in context. Johns Hopkins has the positivity rate declining from 4.7% to 3.9%. It’s worth noting that this ties the all-time low by their metrics.

As always, there’s the worry this ‘got ahead of ourselves’ somewhat, but I don’t think that’s the case. I think it’s more likely we’re making rapid progress, especially in the areas with higher rates of vaccinations.


Prediction for next week: Positivity rate of 3.9% (down 0.5%) and deaths decline by 6%.

Deaths

Date           WEST   MIDWEST   SOUTH   NORTHEAST   TOTAL
Mar 11-Mar 17  1492   1010      3217    1402        7121
Mar 18-Mar 24  1823   957       2895    1294        6969
Mar 25-Mar 31  1445   976       2564    1262        6247
Apr 1-Apr 7    1098   867       1789    1160        4914
Apr 8-Apr 14   1070   1037      1621    1145        4873
Apr 15-Apr 21  883    987       1747    1168        4785
Apr 22-Apr 28  752    1173      1609    1110        4644

I looked for an explanation for the jump in the Midwest, but there does not seem to be a data anomaly. It may be somewhat random, but it’s likely the delayed result of the local surge in cases in early April before things stabilized. While looking for a potential issue, I instead noticed that West Virginia reported -159 deaths one day, which I changed to 0, bringing their weekly total in line with expectations. 
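
As an aside, here is a minimal sketch of that kind of cleanup, assuming the daily reported deaths live in a pandas DataFrame (the column names are hypothetical, and this is an illustration rather than the actual pipeline behind these tables): negative daily reports, which are usually retroactive corrections dumped into a single day, get clipped to zero before the weekly totals are computed.

```python
import pandas as pd

def weekly_deaths(daily: pd.DataFrame) -> pd.DataFrame:
    """daily has columns ['date', 'state', 'deaths'], one row per state per day.
    Negative reports (corrections dumped into a single day) are clipped to 0
    before aggregating into weekly totals per state."""
    cleaned = daily.copy()
    cleaned["date"] = pd.to_datetime(cleaned["date"])
    cleaned["deaths"] = cleaned["deaths"].clip(lower=0)
    return (
        cleaned.set_index("date")
               .groupby("state")["deaths"]
               .resample("W")
               .sum()
               .reset_index()
    )
```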

Overall the result is another small drop, which matches expectations. We should see progress accelerate from here within the month.

Cases

Date           WEST     MIDWEST   SOUTH     NORTHEAST
Mar 11-Mar 17  49,696   59,881    109,141   115,893
Mar 18-Mar 24  47,921   72,810    99,568    127,421
Mar 25-Mar 31  49,669   93,690    102,134   145,933
Apr 1-Apr 7    52,891   112,848   98,390    140,739
Apr 8-Apr 14   60,693   124,161   110,995   137,213
Apr 15-Apr 21  54,778   107,700   110,160   119,542
Apr 22-Apr 28  54,887   88,973    97,482    78,442

For another look, here is the monthly Polimath summary by state

The west stalled out, but the improvement elsewhere was dramatic, especially in the Northeast. The Northeast started off unimpressively on the vaccination front, but now has the best vaccine coverage of any region, likely because the limiting factor is transforming from logistical issues or supply into willingness to take the vaccine. It makes sense that hesitancy is at its lowest in the Northeast, and it’s likely this is now paying off. 

I worry that the South, and some other areas with similarly high vaccine hesitancy, will stall out at a place that still requires a lot of preventative action to contain the virus. If that happens, will such action be sustained? 

Vaccinations

Whoops.

This map now looks similar enough to the blue vs. red tribe map that the exceptions like the Dakotas stand out quite a bit. Presumably conditions there were bad enough that hesitancy is lower than it would otherwise be. The map also makes it that much more surprising that Georgia voted the way it did.

What’s clear is that vaccinations are cratering, and it is due to the massively destructive, completely needless pause in the J&J vaccine. The damage is done, and resuming isn’t going to reverse that much of it. See the section Pausing Vaccines For No Reason Still Doesn’t Work for more discussion of that. 

The question is whether we still have enough momentum to get to the finish line. If the above graph was proportional to first doses, I’d say strongly yes, we’re going to be fine. Unfortunately, the declines are much more pronounced in first doses and it seems plausible we will stall out not too far from where we are now.

If that happens, will it be enough? Or will we have essentially destroyed normal life in substantial parts of the United States indefinitely because of one death and six blood clots? It seems clear that the Northeast and other low-hesitancy areas will make it. It is not at all clear that the high-hesitancy states and areas are still on track to make it. 

India

The straight line up continues, but there was never much hope that things would suddenly reverse. The best case scenario was that we’d see this and it would represent a genuine slowdown in case growth, which could then level off within a few weeks. The share of positive tests is also increasing more slowly than before, so I think it’s probably a genuine improvement in the sense that things aren’t getting worse as fast as one would have expected. 

They’re still actively getting worse, though. This might be good news, but that only means the situation is terrible and getting worse, rather than even more terrible and getting even worse faster. It’s a horror show.

Oxygen is running out and the situation is increasingly dire. Without oxygen, a lot more Covid patients die. What happened?

Exponential growth happened. When you’re using half your oxygen capacity (and far more than half your available capacity given industrial needs) you’re less than one doubling away from disaster. Plus all the calculations likely exclude a lot of the people who are buying oxygen privately, and the supplies lost to looting or profiteering. 

On top of that, the logistics of distribution are all designed based on lower levels of consumption, so shipping, storage and delivery all have inadequate capacity given current demand, and also there’s uneven distribution between regions as one would expect. 

Marginal Revolution links to this update on the situation as of April 23. It’s worth posting the bulk of it:

What are we who are here in America doing about this crisis?

Mostly, we’re urgently pleading, with mixed success, for us to stop withholding life saving medicine that isn’t going to otherwise be used, and to stop using nationalistic barriers to prevent shipments of vital supplies.

Sarah is lowballing here with hundreds of thousands. We are already sitting on one hundred million doses. If we used a First Doses First approach (in this case, we’d better), and compare the situation with those doses to the one without where marginal cases have no oxygen and no hospital bed, we can likely save over one million lives that way alone. Add in donations of oxygen, our doses of Johnson & Johnson (let’s face it, that ship has sailed here and we don’t need them much anymore, but it’s not too late for India) and all the other raw materials and supplies, along with lifting export controls, and even without the obviously correct push to increase output of vaccines we can still work miracles here. There’s time left to be heroes, and also help ourselves. Or we could choose another path.

It’s not that we’re doing nothing helpful. We’re putting out Official Diplomatic Statements saying that we’re working around the clock and doing everything we can

Which is great, glad we’re doing all that, the thing is that it still excludes the thing we could have done (make more vaccine doses) and the by far most important thing we could do now (ship them our vaccine doses that are sitting on shelves unused). As a result, a lot of people are going to die.  

You can say the right things…

But that doesn’t change that Biden’s failure to devote a few billion to getting the world vaccinated is more striking each time he proposes a multi-trillion dollar new spending bill. 

It’s one thing to give away the surplus vaccines and capacity you happen to have once you’re finished vaccinating your own country. It’s another to engineer a worldwide solution on purpose rather than incidentally. 

The good news is that the pressure has gotten to us, and we’re going to do the right thing. Like classic Americans, we’ll do it after having exhausted all alternatives, and thus we’re still going to delay this to the ‘coming months’ with ‘safety checks.’ Also note that the link here thinks we only have 50 million doses rather than 100 million, but that doesn’t change what we should do with whatever we have. All of this was after fierce internal debate about whether we should guard against a potential vaccine shortage by hoarding vaccines we’ll never approve, or by not exploring the vaccines we haven’t approved for our own use until after we’ve approved them. 

John Cochrane points out that another contributing reason we haven’t been able to export vaccines is the issue of liability. In America, we didn’t pay anything like what vaccines were worth let alone pay tiny amounts of money to allow for the manufacturing for massive additional quantities of vaccines, but we did at least give vaccine manufacturers immunity from liability if something went wrong. Places like India don’t extend those protections, so the pharma companies protected themselves by not allowing exports to places where they could be sued into oblivion for things the vaccines didn’t even cause, or things so rare they don’t matter. 

As John points out, either the pharma companies are worried about something real, or they aren’t. If they’re worried about a fake problem, then the federal government should indeed assume the liabilities, or it should insist that other countries issue the same waivers. If they’re worried about a real problem, then they really need to do those exact same things, or that’s that. Otherwise, it could plausibly become actively unprofitable to develop vaccines, which is not a place we want to go. 

I have a hard time believing liability concerns are a True Objection to the exporting of vaccines. If we offered donations on the basis of a liability waiver, and India said no, that seems like true political (and literal) suicide. 


Then again, I can see a world in which the optics of our offer would be the dealbreaker here. That it would look so bad to be 'demanding protection for our fatcat companies' or something that it's simply impossible to ask, and thus we let a million people die. Or, we would hope, India waives the liability issue on its own first, without being asked, but we can't ask or we're blameworthy.

We could also be worried about the look of ‘giving up our castoffs’ and giving India the ‘not as good’ vaccines we aren’t willing to approve ourselves, and have to explain why they’re good enough for Indians and not good enough for Americans. Either they’re good vaccines or they’re not, and giving them away puts us in a bind. We’ve already given away a few to Canada and Mexico, so the seal on this might already be broken, but this would be a much bigger spotlight. 

The safety concerns could also be legitimate, because we’ve been letting the vaccine sit on the shelf for months. Thus, having already failed to protect anyone in the meantime, we might also have let the vaccines go bad, and we have to verify that this isn’t the case. That could partly justify the current delay, but of course it makes our earlier decisions that much less excusable.

Another problem was that India literally refused to authorize the Pfizer vaccine without a trial done on Indians in India. Pfizer had to withdraw its application. Now that there's a crisis, India is doing the commonsensical 'approve anything anyone trustworthy has approved' approach. When such vaccine nationalism is present, and combined with demands to waive patent protections (which Pfizer already said they wouldn't enforce) and the implicit demand to do full uncompensated technology transfer that would go well beyond Covid in its implications, one could think that we've been put in a difficult spot. But again, the bad behavior of others (and by all accounts, the Modi government has done plenty of not-so-great things across the board) does not excuse your own failure to help the people of the world.

Vaccines Still Work

The CDC is tracking breakthrough infections and deaths, meaning ones that happen post-vaccination. This Slate article summarizes the early findings: that such events do happen but they are extremely rare. Even Eric Topol notes that there are fewer such cases than we would have expected. I did a little work on the original CDC data as of April 20, and got about a 99% reduction in deaths for the fully vaccinated group versus the general population, as measured in deaths per person-day over the last few months. That calculation had to make a few assumptions regarding exactly how everything was being defined and measured, so it's possible it's off a bit in either direction, but 99% feels like a reasonable real-world reduction given what we know and all the behavioral changes that happen after vaccination.
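
For reference, the arithmetic behind that kind of comparison is straightforward; here is a minimal sketch (an illustration only; the input numbers below are placeholders, not the actual CDC figures) of computing a percent reduction in deaths per person-day between two groups.

```python
def percent_reduction(deaths_a, person_days_a, deaths_b, person_days_b):
    """Percent reduction in deaths per person-day of group A relative to group B."""
    rate_a = deaths_a / person_days_a
    rate_b = deaths_b / person_days_b
    return 100 * (1 - rate_a / rate_b)

# Placeholder inputs for illustration only (not real CDC data):
# group A = fully vaccinated, group B = general population over the same window.
print(percent_reduction(deaths_a=100, person_days_a=2_000_000_000,
                        deaths_b=500_000, person_days_b=100_000_000_000))
# -> 99.0, i.e. a 99% lower death rate per person-day
```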

Zeynep analyzes an outbreak at a Kentucky nursing home, which included one of those breakthrough deaths, and explains why she thinks it should be seen as encouraging news about vaccinations rather than bad news. One would expect the vaccines to be less effective in that context than overall, as it was pretty much the worst case scenario – older patients whose immune systems no longer work properly, in close indoor quarters for extended periods. Yet the vaccines still worked very well. 

As with all vaccine news, what is good versus bad news depends upon your previous epistemic state. If you previously thought vaccines were definitely super duper effective, hearing evidence they are only super effective, but without the duper attached, is bad news. Whereas that’s still great news from the perspective of anyone with bigger doubts who isn’t sure if they’re even super effective, let alone super duper effective. This means that a large portion of the time, what the media says is ‘good news’ is something I view as neutral or bad news, but it is still usually helpful in updating people in the correct direction.

Thread by Natalie Dean about the difficulty of avoiding bias in observational vaccination studies. Vaccinations aren’t given out at random, and people are told to act differently both directly prior to vaccination and after vaccination has taken effect. I worry that such de-biasing efforts are relatively robust in the ‘don’t overestimate vaccines due to selection effects’ direction but much less robust in the ‘don’t underestimate vaccines due to behavioral adjustments’ direction.

If you’re still looking for an appointment, vaccine spotter is this week’s website that might help if you haven’t tried them yet (HT: MR).

Or you can walk into a site, at least in some places, and it’s all good:

Pausing Vaccines For No Reason Still Doesn’t Work

Johnson & Johnson is back. After only 10 days, permission has been given to resume vaccinations. I explicitly said last week I didn’t expect the resumption to happen this quickly, so count me as pleasantly surprised at the news. On reflection, I was likely being too cynical here. 

How big is this risk, in the end? 

The problem is that the FDA successfully convinced Americans that the Johnson & Johnson vaccine is not safe, with a majority saying it was not even somewhat safe during the pause (WaPo). The question is, how much of that effect does resuming permission undo? Are people taking their cues here assuming the FDA is on the ball (and thus, it becomes safe again once permission is resumed)? Are they largely accepting that 'safe' literally means 'is allowed by the FDA' because they view such words as performative deference to authority? Or did they get the message that there's something wrong with J&J in a way that one cannot magically undo by saying 'never mind'?

Presumably it’s a mix, and time will tell us which of these groups dominate.

Nate Silver points out the obvious, which is that ‘what the “experts” are actually saying’ has remarkably little to do with when people demand that you ‘listen to the experts.’ He lists several examples of such experts, which you can click through to read and say the obvious common sense things. Thing is, ‘listen to experts’ actually means ‘listen to experts who agree with our elite Very Serious Person opinions in the current context.’ 

This one isn’t by an expert, but is similar, figured this was the best place to put the link. 

What was the result? You can see it in the vaccination graph, but you can see it much clearer in the first doses graph:

Or you could group by age:

This definitely passes the ‘you don’t need statistics because check out this chart’ statistics test. Marginal Revolution gives us another look at the same data:

Yeah, there’s no way this is a coincidence. 

Youyang Gu looks back on his machine learning projections. Like the Covid Tracking Project, he decided to stop midway through because ‘other people have this handled’ and then our world got much worse, because other people very much did not have this handled. The basic messages are ones you’ve likely heard before. He also looks at his final predictions on returning to normality, and finds we are very much on track.  

I mention this here because when you look at his projection, you can see the impact of the J&J suspension very clearly by comparing it to the counterfactual.

He did expect a gradual slowing down of vaccine progress at about this time, but this was a line still going up exactly on schedule suddenly reversing on the spot into a hard decline. 

A good thread of dog in a fire memes that are relevant, for those interested. 

Long Covid

The big known unknown of Covid has always been long Covid. There’s short term obvious reasons to be scared of Covid, but what about long term impacts? Data was always sparse. Now there’s a (preprint of a) nature paper about the matter that offers more concrete data. 

Taken at face value, it looks like Long Covid is a really big deal. This would be the strongest signal yet that you really, really, really don’t want to get Covid even if you’re relatively healthy, and that taking big precautions to avoid it was a reasonable decision given that the vaccines are now here.

This study was in the Veterans Administration, so the population in question was relatively elderly and in relatively poor health. They attempted to find similar controls, but I am profoundly unsatisfied with the methodologies listed here. 

They also prove far too much. Almost everything is worse in the Covid group, and the rate of extra deaths is far too big when compared to the number of excess American deaths. In general, if things were this bad, it would show up in tons of other statistics in ways that it simply doesn’t. 

My conclusion, after discussion and analysis, is that this study is largely measuring selection effects, and this makes it all but impossible to know how much of the effects are due to Long Covid versus the people who get symptomatic Covid and have that diagnosis confirmed starting out as sicker to begin with.

That does not mean that the Long Covid effect isn’t real. I’m convinced that at least some of this effect is real, but I don’t think we get to update on that much from this study.

What we can update from this study is the evidential value of having had Covid. In important practical senses, if you test positive for Covid, you face the same situation no matter how much of your health issues will be caused by Covid versus revealed by Covid. What matters is you know your expected future health is much poorer across the board. There’s more reason to focus on one’s health, and to prepare for potential trouble. 

If you haven’t had Covid yet, there’s a big difference, because it’s still not clear how much you need to be avoiding Covid out of fear of Long Covid. We know the answer is ‘enough to matter’ but magnitude is important, and I sadly do not think this study should update us that much on that question. 

In Other News

Inside the pro-vaccination advertisement effort.

The first ad showing grandma hugging everyone seems very good, aligning with the above quote. The second one promising the facts seems less good and more done by committee to look good to Very Serious People, and my guess is it convinced approximately no one. 

They settled on the slogan “It’s up to you.” I certainly can’t argue with that in general, but also no one involved believes it. It’s up to you to make the right decision and come to the right conclusions, and we’ll do whatever it takes until you do exactly that. I happen to think that their selected ‘right decision’ is in fact the very right decision on all levels, but I do worry that in an important sense this is a coincidence. We have examples of other cases where the decision wasn’t right, or was socially good but locally harmful, and it’s hard to see much difference in messaging approaches. Hence much of the hesitancy.  

This does not sound like a final check designed to discover the truth. It sounds more like it was designed mostly to give the answer they wanted, along with some mix of a desire to see if they’d made some gigantic horrible mistake and a desire to have a plausible story that they checked for that.

It is also exactly what the campaign sounds like to me. When I saw ‘it’s up to you’ I totally, totally interpreted it as ‘it’s optional.’ Otherwise, what’s up to you? And the answer, of course, is that for such people, it’s ‘up to you’ to make the voluntary choice to do the things you’re told to do. That’s how such people think, and how they envision freedom and democracy. It’s an entire authoritarian mindset where everyone is free to do whatever they want but also disobedience is your fault and you should get punished for it. 

It’s also what the ad needs to be interpreted as to be successful. They talk about this later on, saying that conservatives need to have their autonomy respected. People aren’t stupid. Either something is optional, and they have a choice, or it isn’t and they don’t. You can try to send both messages but you’ll fail. So yeah, it’s optional. Or at least, for now it’s optional, until it isn’t. 

As is their standard operating procedure, the Biden administration then decided instead to ignore all of this respect for those it disagrees with and instead project a superficial message of unity by going with the slogan “We Can Do This.” I wonder how they think that will play out. 

Tabarrok responds to Cruz follow-up question on vaccine passports, points out that partly open is worse than completely open, but better than completely closed. Still leaves open the question of whether passports might lead to slower full reopenings rather than faster ones.

The framing here seems exactly right. 

Dr. Fauci admits his previous statements in this area were blatant lies (which as we all know are the best kind) and notes that he expects to no longer need this particular untruth soon, but he’s not quite ready to let go of it yet.

Biden administration puts Andrew Cuomo, who we all agree is the worst, in charge of weekly briefings for the governors. That’s how much they care about the pandemic. 

Your reminder that Biden’s 100 days of wearing a mask expire on April 30 (tomorrow), and he hasn’t explicitly extended that. 

Brazilian regulator rejects Sputnik’s application. It looks like they had very good reasons for doing so. The Sputnik vaccine has serious problems, and the data presented has additional serious problems, and either you’re going to have regulations that stop this kind of botched operation from moving forward or you don’t. It’s very reasonable to turn this application down. If I had no alternative options, though, and lived in India or another place where things were looking bad, I would still be fine with getting it and much prefer it to doing nothing. I’d still be surprised if the vaccine wasn’t effective, and I’d still be surprised if it was sufficiently unsafe to make the cost/benefit of taking the vaccine be remotely close. Still, I do understand that these are not the standards we must use, and that to let this slide would be very bad precedent. 

This was going around this week, no comment seems necessary:

This MIT paper on ‘guideline to limit indoor airborne transmission’ was also going around this week:

That’s quite the spherical cow you have there. This assumes the absurd conclusion of automatic uniform mixing in indoor spaces, and then concludes that relative location in those spaces doesn’t matter. If my grandmother had wheels she’d be a trolly car.

So of course the media acted responsibly, as they always do.

Of course, as the NY Post shows us this time, it can always get worse:

And then they use this photo:

Then this one:

The actual content of the article notes that the worries are about indoor spaces, but for some reason this type of thing is inevitable. All media must post pictures of people outdoors as being irresponsible, because they look scarily like they're alive and might even be having fun or living life. Unacceptable. Insinuations must take place right away. The public must be warned.

Then, if necessary, fall back on the story that ‘no this is the way you’re supposed to distance, it only works outside.’ Assert that we were never at war with Eastasia, and it was always unsafe in any indoor space regardless of distance. 

Meanwhile, we have cross-country runners in mandatory masks. Which certainly isn’t not insane. And YouTube pulls videos for having “disinformation” when they agree with European mask policy instead of American blue tribe mask policy

And here are the CDC guidelines for camps. If you’re wondering how ‘spend most of our time outdoors’ changes their recommendations compared to those for schools, I’m here to report that the answer is that it doesn’t. There’s a recommendation to shift things to outdoors, but doing so doesn’t change what you have to do, it’s a pure additional obligation. For example, everyone should stay ‘socially distanced’ at all times, except when eating and thus not wearing masks (and often likely indoors), because that needs to be an exception. Outdoor child masking is otherwise fully mandatory. Paranoia over surfaces is unchanged. If you took these recommendations seriously you wouldn’t have a camp worth going to, rendering such rules worse than useless. 

Robin Hanson points out some of the dynamics that doom us to always have ‘elites’ saying ‘more must be done’ and calling for more government oversight, and claiming to have the expertise to argue about details they don’t understand while not trying to understand the more important meta-level. He is right that it is important, when arguing details on the margin, to point out that the arguing of details on the margin is not a way to make good policy decisions. Hopefully I’ve found the right balance of pointing this out and not belaboring the point. 

See this Zeynep thread for good detailed analysis. Bottom line is that this is trying to make an important point, that in poorly ventilated spaces the virus can build up over time rendering distance insufficient for safety, but is making it in a terribly misleading way. 

Not Covid, but how would you know that without breaking the regulations, since the European Union is proposing to regulate Bayes Rule

Arthur’s wise reaction:




Are there any books for wealthy people on creating super-children?

Published on April 28, 2021 9:32 PM GMT

Growing up I had a friend who had a ridiculous level of dynastic wealth but sadly probably had more conversations with his maids than his parents. The kid would throw lavish parties that would destroy their homes, routinely steal his father's Rolls Royce when he was bored of his Benz, and was overall just a pretty mediocre person. In hindsight, he was desperate for structure and his acting out was likely a cry for help or attention.

 

If we assume that actual parenting wasn't feasible, i.e., the father was too busy or just had more important things in his life than his child, what would be the ideal form of parenting for absent parents? My girlfriend argues that loving and attentive parents are essential for a well-developed child. I'm far from convinced. I think that if I attained that level of wealth I could create a super-child by providing him with a number of mentors with whom he would work on a strictly regimented schedule.

 

5:00am wake up and make breakfast with a professional chef. The chef inspires the child with his love for food and teaches him culinary skills. He would learn the virtues that create a great Chef.

6:00am history lesson with history buff: The child would be lectured to and discuss some period of history and would be intellectually stimulated by someone who is passionate about reading and learning. While other children are learning history from a vapid, disenfranchised, and barely literate 25-year-old who would never voluntarily read a history book, and is only a teacher because of her maternal urges and because early childhood education is the easiest major, our child is developing a love of learning, reading, and history from someone he respects.

8:00am Private Tennis lesson: This was one of the privileges of my upper-middle-class upbringing which had tremendously positive impacts on my health and wellbeing. Aside from the obvious benefits of daily cardio, tennis lessons instilled in me a healthy relationship with competition and gave me a figure to look up to and model myself after.

10:00am World cultures with National Geographic correspondent

12:00pm Math lessons with retired NASA engineer.

2:00pm Art lessons

3:00pm Piano Lessons

4:00pm Chess club 

6:00pm Nature walk with Botanist

etc...

My point is that I think for a sufficiently wealthy person, all the tasks of raising a child could be delegated to construct a truly remarkable child even if he/she lacks great genetics. If they have trauma, have them talk to a therapist, if they have a dirty room, have them talk to Jordan Peterson.


Do any of you know of someone creating this kind of life for their children? I know of some historical figures who had a uniquely brilliant upbringing, such as Montaigne, but are there any contemporary child experiments? Would the wealthy LessWrongers consider a life like this for their child? Are there any books on creating such a life?



Discuss

Draft report on existential risk from power-seeking AI

29 апреля, 2021 - 00:41
Published on April 28, 2021 9:41 PM GMT

I’ve written a draft report evaluating a version of the overall case for existential risk from misaligned AI, and taking an initial stab at quantifying the risk from this version of the threat. I’ve made the draft viewable as a public Google Doc here. Feedback would be welcome.

This work is part of Open Philanthropy’s “Worldview Investigations” project. However, the draft reflects my personal (rough, unstable) views, not the “institutional views” of Open Philanthropy.



Discuss

Notes on Robert McIntyre’s Brain Preservation Talk at the Long Now Foundation

28 апреля, 2021 - 23:47
Published on April 28, 2021 8:47 PM GMT

This is cross-posted from my blog and is written more for a general audience rather than LessWrong people who will be more familiar with some of the relevant concepts.

These are my notes on Robert McIntyre’s talk at the Long Now Foundation:
Engram Preservation: Early Work Towards Mind Uploading | Robert McIntyre

https://youtu.be/FCK6Yrx_PSQ

I stumbled across the Long Now Foundation back in 2011 and heard about their 10,000 Year Clock, a project to design a clock that keeps time for 10,000 years (and brings media attention to the foundation), and it’s cool to see they’re still doing stuff. (Ten years ago isn’t that far back even in normal time, so I hope their foundation lasts longer than that.)

Robert McIntyre is the CEO of a company called Nectome. Nectome’s goal is to try to better understand human memory and preserve brains. I know what you’re thinking... the human memory part seems normal enough but preserving brains? For who, zombies? Stick with me.

I remember hearing something about brain preservation, a prize called the “Large Mammal Brain Preservation Prize” being won, and a company called Nectome doing it, but I didn’t look too much into it.

I re-stumbled across Nectome while reading the writings of fellow cryonics and life extension supporter Mati Roy.

Let’s dive in:

“I consider myself an archivist. And what I work on archiving are human memories.”

Stages of Information Transmission in History

Robert talks about dividing human history into different stages. We advance from one stage to another by developing technologies that allow better transmission and preservation of information. Every time we invent tech that does this, it radically catapults our society to new heights. Note: Regular people often think about technology as just gadgets like a TV or iPhone but technology (of course, depending on semantics) does include things like language and writing.

This is a similar paradigm to the transition from the hunter-gatherer stage to the agricultural stage to the industrial revolution. It is hard to overstate how massive these changes were.

This video discusses this briefly in a nice way: 

https://www.youtube.com/watch?v=mrWPSP35gUk

He calls the pre-language stage intuitive and talks about how weird this must have been. It is a trip to think about what our qualia would have been like in a pre-language era. Like what would our thoughts have been like?

Then we eventually developed language, which was a completely transformative change. The problem with oral communication is it’s extremely low-bandwidth: not a lot of information can be transmitted and it takes a long time to do it. Elon Musk has made the same point about bandwidth for the importance of Neuralink.

Side note: Some of this oral history talk reminds me of Sam Jackson’s speech in this scene in Unbreakable (spoilers).

Then we went from just having oral communication to having symbolic communication with writing. This again was massively game-changing.

Writing is amazing because it can, among other things, preserve and transmit more information than any one person can remember and for much longer periods of time. For instance, Shakespeare has been dead for over 400 years but we can still enjoy his plays.

Writing still has downsides, though. It takes a lot of time to parse it and we still lose a lot of information because it’s hard to record things in writing.

Robert argues we lose most of the value/wisdom by only being able to store writing. He doesn’t talk about video recordings or anything so I think his argument loses some of its strength which I go into more later on.

Information Theory

He then goes into some information theory.

Does information being preserved depend on the technology to read/extract/understand it? Was it always preserved or only preserved after you invent the technology? 

(Robert says it was always preserved. Preservation does not equal the ability to read it.)

His very brief dive into information theory reminds me that I wish someone could point me to a post or video on information theory where they showcase all the distilled, useful parts.

He brings up injective functions and says that preservation means that different things remain different. On the one hand, this feels like an elegant way of capturing what preservation is; on the other hand, it may not capture all of what preservation actually is, or may only be one part of it, but I don’t know.

So he boils it down to:

Preservation = differences stay different

Preservation ≠ Understanding

Example: He can’t read Chinese but he could preserve Chinese books.
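In symbols (my own gloss, not something from the talk), “differences stay different” just says the preservation map is injective:

```latex
% Preservation as injectivity: distinct originals map to distinct preserved states.
\forall x, y \in X:\quad x \neq y \;\Longrightarrow\; f(x) \neq f(y)
```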

In practice, this happened where people started storing DNA back in the 70s before we had the ability to read it (which wouldn’t come for a few decades). More on this later.

How do we know how memories are stored in the brain? The brain is full of electrical activity so how do we know that’s not essential for memory storage?

People who fell into ice were sometimes able to be brought back and then often had their memories or most of their memories intact, so this suggests we don’t need the electrical activity for storage.

Because it’d be way questionable to conduct experiments like this, it’s good we get some value out of accidents like this.

This finding about people falling into ice and being semi-stable/able to be brought back led to some cool developments.

Memories are physical/DHCA

Deep hypothermic circulatory arrest, or DHCA, is a technique where you put someone into a hypothermic state so that surgeries which cut off circulation and brain function can be done while minimizing damage to the brain. Pretty cool stuff.

Because of these things, we know that memories are physical. Separately, we know that memories are physical for all the reasons we know that everything, including consciousness, is physical. (This is a longer discussion, but it’s mostly basic to people who aren’t religious. He also very briefly gets into this at the end of the talk.)

Side note: this is about the continuously updating definition of death. Back in the day, “death” would have been defined as not breathing or having no heartbeat. Now we can “bring people back” with CPR, but were they actually dead? Now we have brain death, which is more complicated. My partner is a neurology resident and has participated in declaring brain death, which is a complicated and rigorous process (although I would argue that it’s mostly not useful). This point becomes especially important in light of cryonics: if we could bring people back many years after their bodies/brains were preserved, that’s analogous to them not being dead, in the same way that someone brought back with CPR isn’t dead.

This whole line of thinking has led to the creation of the very useful concept of information-theoretic death. You think of what makes up someone’s consciousness -- their memories and personality and whatnot -- and information-theoretic death is about the loss of that information. The more “dead” someone is, the less of that information you can recover.

Memory Consolidation

He then briefly goes into how memory consolidation works. It is roughly divided into three parts based on how long the memory lasts and how much, if at all, is encoded.

1-30 seconds: electrical signals

(memorizing a phone number or the words you just heard lingering)

<2 hours: unstable changes to synapses

(intermediate process) some changes to synapses, but the changes are labile and by default will revert to their original state

(He doesn’t go into what makes them not revert to their original state, but it’d be nice to know more about that.)

>2 hours: structural changes to synapses

(generally encoded as long-term memory, could potentially stay with you your entire life)

Synapses can increase and decrease in size, temporarily or permanently.

He shows this cool, weird image of a synapse. I looked it up and it's the first scientifically accurate 3D model of one.

Changes happen in thousands of synapses throughout the brain even for the most trivial of memories if you’re going to remember it longer than two hours. One principle of the brain is it’s quite distributed. You can destroy any one synapse and it doesn’t affect anything. Pretty cool.

Encoding memory requires protein synthesis. If you inhibit protein synthesis you can’t encode long-term memories. This manifests in things like being blackout drunk where you can’t remember what happened before despite being conscious at the time.

This reminds me that an important point to appreciate generally is the brain is super duper extremely complicated. Learned this in my brain class in college. It’s arguably the most complicated known thing in the universe.

Can you preserve synapses?

Yes, a chemical called glutaraldehyde can and we’ve been able to preserve them since the 60s.

Synapse preservation is analogous to DNA preservation.

We stored DNA before we could do anything with it; we could only scan it decades after we first started storing it, and even then it was super duper expensive.

The first human genome sequence was completed in 2003 and was really expensive (about $2.7 billion in 1991 dollars).

Now you can do it for about $1000 and it’s only getting cheaper. Not bad, huh? (This is one of my main gripes with people who think technology only helps the rich. It’s dumb.)

A minor point he made, but with a big implication: we could preserve DNA long before the structure was discovered, but until then we didn’t know enough about it to confidently say we were really storing it. So this is a strong argument that it’s often best to take action before you know something for sure.

His Background, Q and A, and Random Good Points

Robert says he was always interested in brain preservation but doesn’t go into the actual reasons why, which would have been nice although it’s possible he doesn’t remember. 

He volunteered for The Brain Preservation Foundation. He was going to make an explainer video à la Minute Physics but then thought he could just win the Brain Preservation prize the foundation offered.

He briefly mentions the fork in the road: do you attack a problem with the tools available now, or build better tools? Side note: better tools are often the big drivers of innovation. Think telescope, microscope, transistor, etc.

He won the prize by combining techniques from two different labs. This is a good example of having different disciplines talk to each other, and of collaboration in general. It reminds me of how Terence Tao is famous for collaborating even in fields outside of pure maths.

He says preservation itself is relatively inexpensive, but storage is very expensive because the brain has to be kept at a specific cold-but-not-too-cold temperature that’s apparently only achievable right now with explosive gases. The real next step seems to be figuring out how to preserve brains so that they can be stored at room temperature or one of the easier cold temperatures. There’s maybe a startup idea here: a safer and cheaper way to maintain that Goldilocks temperature, though I don’t know how big the market for that is.

Throughout the discussion, he knows the landscape of current tech and the costs. The way he speaks, knowing the amounts and costs of things, is very engineer/startup person, which is always nice to see.

He thinks it’ll be about 70 years until we can access information in preserved brains. I doubt it’ll be that long if AGI goes right.

He talks about how the San Diego Frozen Zoo had the foresight and bravery to start preserving the DNA of species in 1972. They could have been criticized by (stupid) people saying we’ll never have a gigabyte of storage, and that even if we did it would be astoundingly expensive. These imaginary critics could have called up Gordon Moore at Intel at the time.

He talks about how someone who wanted to do a proof of concept for recording DNA might have started with a single base pair. His team is trying an analogous thing with C. elegans (a common model organism): showing that the worms form memories of their environment being shaken and that you can see the corresponding changes inside them. So hopefully they’ll be able to preserve a worm and then show that the memory change is still there. Really cool, but I don’t know how it would work as a startup.

Fun fact: Spices like vanilla and cinnamon have aldehydes in them.

The host asks about Egyptian preservation and luckily he knows the answer, but it’s more of a related fun fact than something he’d need to know for his work. It’s like a doctor getting asked a fun fact about the heart that doesn’t really have anything to do with their profession. This reminds me that it’s probably worth it for people to memorize trivia related to their work just to make these kinds of conversations flow better, e.g. how much a brain weighs, or when dogs were domesticated, or the etymology of certain words, or whatever.

He brings up a great point that humans are continuously being born into a world with more and more powerful technology but aren't necessarily being born with more wisdom. This is dangerous. His point is basically that x-risk is going up because of this.

I want to stop and mention that some people’s solution to this is to be Luddite-esque and ban things they’re uncomfortable with, like genetic engineering. This overweighs the cost of action without considering the cost of inaction, much the way institutions like the FDA take into account the risks of letting potentially harmful drugs onto the market but not the costs of keeping helpful drugs from people. This is dumb.

How do we know that using glutaraldehyde in preserving the brain doesn’t harm memory?

He does a good job of breaking down the problem: either the structure underlying memory is so fragile that glutaraldehyde fixation destroys it even though that structure survives all these other things like seizures, depolarization, etc., or fixation does preserve it.

He says that encrypted information you couldn’t unlock would still be preserved, because different things are still different. To me this is a semantic argument that isn’t that strong. If you can never get the information out, it’s not preserved. Sure, some super-advanced tech may be able to decrypt it, but if different things are different yet never readable, that’s not preservation to me.

And sticking with the book example, only preserving the literal words does lose information that you may or may not care about, e.g. the paper type, the font, etc.

Someone asks about recording all the activity in the brain, which is currently not possible, and he mentions an interesting idea of programming DNA to self-report what’s going on.

The host tries to mention a movie and Robert keeps talking about the thing and the host immediately turns towards what Robert is saying. Good on the host.

The guy asking a question at 52:24 has a great voice. Can I hire this guy for narration and voice acting?

“Could you retrieve wisdom and experience independent of language?”

He brings up the analogy of a black box of a neural network that does something like telling dogs and cats apart. You can look at it and try to figure it out, but it would be difficult without running it. We may be able to unblack box things in the future too.

He says another thing that seemed to be minor but I think is a big deal. The easiest way to glean the wisdom from a preserved brain would be to simulate the brain and ask them what it was like to be there. I guess, but that’s kind of dumb. I could record them in 4k with my phone and boom, no need for brain preservation.

https://twitter.com/JohnCGreer/status/1362562525678735360

I get that running a brain simulation would be way better, but that’s like a trillion times more costly and difficult than just recording the person with a phone. It also reminds me that his concept of transmitting wisdom is really weird. If he’s implying we’ll all be able to upload our minds and meld with other people’s experiences, then maybe, but that’s a huge leap that could have been addressed. Otherwise, running a brain simulation and asking the person what an event was like is no different from the oral stage of history. And we can preserve that now with video.

Someone asks about ethical concerns and how he thinks about them. Ugh. I usually hate these questions, because the person asking is often on a moral high horse, worried about those less fortunate, but has an extremely poor set of ethics where they *act* like they care but don’t understand what actually leads to less suffering for people.

That said, he mentions they’re working with Anders Sandberg of the Future of Humanity Institute which is rad.

In response to ethical questions, he says a good way to think about it is to ask the question, “What’s going to enable human flourishing well?” which I think is great and helps clear up how to proceed ethically sometimes.

My translation of the rest of his ethics answer is that a lot of the ethical concerns are analogous to existing things like medical data and HIPAA, so privacy of information is important, and so are safeguards to ensure control and autonomy, including having the information destroyed if and when the person wants.

What about the body? He argues that quadriplegic people retain memory and personality, implying the brain is where it’s at. This reminds me of the argument against souls: when the brain gets messed up, consciousness gets messed up too. One could argue the body is like an antenna receiving the soul, which when damaged produces a messed-up signal. Still, that’s all bullshit.

Embodied cognition: he argues you have to have a body to learn how the world works.

He points out the brain is the hardest thing to preserve so if we can preserve that we can preserve the rest barring “a few stupid things that aren’t worth going into”. No, go into them! What and why, Robert?

He says it’s a moot point because we can preserve the body as well. I say it’s not moot if you can but you aren’t. It’s maybe moot technically, but it’s not moot if you’re not doing it. I’m not saying they should, as it’d probably increase the storage cost by 10x or something, but still.

At 58:00 he goes into a minefield of topics that have a long history of bullshit mixed with people making real attempts at solving them. Things like free will, souls, consciousness, and such.

He does give the caveat that he's trying a new argument so it may be less persuasive.

He’s saying the hard problems aren’t that hard or are mostly made up. He presents a simulation of a simple pendulum that exhibits harmonic motion. He then adds a small pendulum to it, which creates a whole new, chaotic motion.

I think I just don’t understand the bullshit arguments he’s refuting, the ones people use to support the existence of free will or a soul or something. But I don’t think he’s really addressing things like the explanatory gap. I certainly believe it’s all physical and there’s nothing supernatural, but he’s not solving or getting rid of the hard problem of consciousness.

This isn’t my area of expertise and I don’t know the history of its controversies, the proposed solutions, or the arguments that it isn’t really a thing. I’m curious as to what the QRI guys think.

His arguments sound like something Daniel Dennett would say, by which I mean they’re kind of confusing. It could be that they’re confusing because I’m too dumb, or because they’re not intuitive, but it also may be that there’s some Eulering going on. Mind you, I only know Dennett as one of the Four Horsemen and from his arguing with Sam Harris about free will, so maybe he has tons of good ideas.

Wish they had filmed after wrapping up. They could even have little GoPros on people going up and talking to him, haha.

End Thoughts

I also just realized that he doesn't explain the title of his talk: "Engram Preservation: Early Work Towards Mind Uploading". An engram is a term for a physical unit of memory in the brain, and mind uploading is the idea that we'll be able to copy the information in the brain and upload it to a computer.

This talk also makes me miss college and taking great classes.

Love people like this and engineers and such. Reminds me of how founding a startup and actually trying to build something is one of the best ways to learn the nitty-gritty details about a field. Vinay Gupta was like this.

Ultimately, really interesting, but I care less because it doesn’t help me or my loved ones. Even if you could perfectly extract the information from the brain and upload it to a computer, I still wouldn’t consider this “me”. Why I wouldn’t is a long discussion, but it is the same reason why I wouldn’t take the transporter in the teletransportation paradox. For more on the messy subject of personal identity (which I haven’t found a satisfying conclusion to), see Tim Urban’s article: https://waitbutwhy.com/2014/12/what-makes-you-you.html; you can also play with some of the scenarios at https://www.philosophyexperiments.com/

Alcor, the cryonics institution, discusses why his technique wouldn’t help in reviving tissue: https://www.alcor.org/2018/03/http-www-alcor-org-blog-alcor-position-statement-on-large-brain-preservation-foundation-prize/

I really love and am sometimes manic about preservation so it scratches that itch, but this is even further away from someone being brought back from cryonics. By the time we get that information, AI will probably have either destroyed us or created a utopia for us where these types of things will matter much less. I care most about my loved ones and myself not dying and this doesn’t affect that.

We seem to be biased to care more about losing an amount than acquiring an equivalent amount. So I would be more upset at losing $100 than gaining it. Or more applicably, I would be more upset at losing my current friends than gaining new ones.

This is related to status quo bias too, where we irrationally favor current circumstances because we’d be more upset by losing what we have than by gaining something else. Whereas if the situation were switched, we’d be worried about losing the other thing now. 

Let’s say I live in Jupiter, Florida with my wife Ann and my dog Bobo. I don’t want to change anything in the past because it wouldn’t lead to my current circumstances. Because if I hadn’t gone bankrupt, I never would have moved to Florida living as a pool boy. But if I hadn’t gone bankrupt and was married to Margaret and lived in Long Island with my cat Hazy, then I would feel the same bias in not wanting to change anything because I would lose what I had.

Outside of preventing the loss of people to death, I think there are good arguments that other things are more valuable, like putting effort into creating new experiences versus preserving old ones. As I’ve said, this is far from my natural inclination (I like doing both), but I do think it’s important to address.

Still, it’s a way cooler project and more important than what most people are working on.

It’d be nice to connect groups like r/DataHoarder to his work. I wonder what they’d think.

I don’t see how it could sustain itself commercially, because not enough people are forward-thinking enough to want to preserve their loved ones’ or their own brains. And those who are are probably more interested in cryonics. I would certainly consider paying for the service if I couldn’t get, say, my dad to sign up for cryonics but could set someone up to preserve his brain instead. So I wonder how funding works; maybe it’s just sustained by rich people who think it’s interesting and are willing to throw some bucks at it.

Edit: Since writing the finalized draft of this I stumbled across this story written by Sam Hughes about brain uploading:

https://qntm.org/mmacevedo

I haven’t read Robin Hanson’s Age of Em yet, but if you find this area interesting you’ll probably find that interesting as well.



Discuss

AMA: Paul Christiano, alignment researcher

28 апреля, 2021 - 21:55
Published on April 28, 2021 6:55 PM GMT

I'll be running an Ask Me Anything on this post from Friday (April 30) to Saturday (May 1).

If you want to ask something just post a top-level comment; I'll spend at least a day answering questions.

You can find some background about me here.



Discuss

Is sitting in the sun much better than sitting in the shade?

28 апреля, 2021 - 21:48
Published on April 28, 2021 6:48 PM GMT

How much worse is sitting outside in the shade compared to the sun, for the purpose of experiencing enough lux to feel alert?

Making the difference between shade and sun a bit more extreme than it probably is in practice, we can take "sun" to mean "10,000 lux" and "shade" to mean "1,000 lux". Data to back this up: 

Wikipedia gives these lux values for different conditions:

  • 20,000 lux "Shade illuminated by entire clear blue sky, midday"
  • 1,000–2,000 lux "Typical overcast day, midday"
  • <200 lux "Extreme of thickest storm clouds, midday"

NOAO (National Optical Astronomy Observatory) says:

  • 10,000 lux "Full Daylight"
  • 1,000 lux "Overcast Day"
  • 100 lux "Very Dark Day"

[Side question: What does Wikipedia exactly mean with "Shade illuminated by entire clear blue sky"? Is this a strange way of describing sunlight, or are they referring to the lighting experienced standing in the shade of an object during an otherwise sunny day? The words sound like the latter to me, but the former fits better with NOAO's description. I don't think this distinction breaks my question either way.]

These are the same orders of magnitude; it just seems like NOAO is averaging throughout the day while Wikipedia is looking at midday.

Meanwhile, this LW post discusses how to install indoor lighting to get about 1,000 lux inside. If 1,000 lux is enough to keep a human alert and non-SAD, then it seems like shade is no worse than sun. Or, maybe people installing lumenators in their homes just stop at 1,000 lux because it's impractical to get an order of magnitude more lux than that. 

One way of phrasing the argument in favor of lumenators is that going from 100 lux to 1,000 lux provides huge benefits. Can the same thing be said about going from 1,000 lux to 10,000 lux, or are there diminishing returns? In other words, should I sit in the sun instead of the shade? 



Discuss

[AN #148]: Analyzing generalization across more axes than just accuracy or loss

28 апреля, 2021 - 21:30
Published on April 28, 2021 6:30 PM GMT

Newsletter #148

Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter.

Audio version here (may not be up yet).

Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer.

SECTIONS

HIGHLIGHTS

TECHNICAL AI ALIGNMENT

AGENT FOUNDATIONS

FORECASTING

FIELD BUILDING

MISCELLANEOUS (ALIGNMENT)

HIGHLIGHTS

Distributional Generalization: A New Kind of Generalization (Preetum Nakkiran and Yamini Bansal) (summarized by Rohin): Suppose you train a classifier to distinguish between CIFAR-10 classes, except each airplane has a 30% chance of being mislabeled as a car. If you then train a model to achieve perfect accuracy on this badly labeled dataset, it will get 100% accuracy on the training set, and 97% of those labels will actually be correct (since 3% are mislabeled airplanes). Under the current paradigm, if we say that the model “generalizes”, that means that it will also get 97% accuracy at test time (according to the actually correct labels). However, this doesn’t tell us anything about what mistakes are made at test time -- is it still the case that 30% of airplanes are mislabeled as cars, or does the model also make mistakes on e.g. deer?

Distributional generalization aims to make claims about situations like these. The core idea is to make claims about the full distribution of classifier outputs, rather than just the single metric of test accuracy.

Formally, we assume there is some distribution D, from which we can sample pairs of points (x, y), which generates both our train and test sets. Then, the train (resp. test) distribution of classifier outputs is (x, f(x)), with x coming from the train (resp. test) set. The train and test distributions of classifier outputs are the objects of study in distributional generalization. In particular, given a [0,1]-valued function on distributions (called a test T), we say that the classifier generalizes w.r.t T if T outputs similar values on the train and test distribution. (W.r.t means “with respect to”.) For example, given a distribution, the accuracy test checks how often the classifier’s output is correct in expectation over that distribution. Generalization w.r.t the accuracy test is equivalent to the canonical notion of generalization.

Let’s suppose that the classifier perfectly fits the training set, so that the train distribution of classifier outputs is the same as the original distribution D. Let’s additionally suppose that the classifier generalizes with respect to the accuracy test, so that the classifier has perfect test accuracy. Then, the test distribution of classifier outputs will also be the same as the original distribution D, that is, all distributions are identical and there isn’t much more to say. So, the interesting situations are when one of these two assumptions is false, that is, when either:

1. The classifier does not perfectly fit the training set, or

2. The classifier does not generalize w.r.t accuracy.

This paper primarily focuses on classifiers that do perfectly fit the training set, but don’t generalize w.r.t accuracy. One typical way to get this setting is to inject label noise (as in the mislabeled airplanes case), since this prevents the classifier from getting 100% test accuracy.

Speaking of which, let’s return to our original example in which we add label noise by mislabeling 30% of airplanes as cars. Notice that, since the label noise is completely divorced from the classifier’s input x, the best way for the classifier to minimize test loss would be to always predict the true CIFAR-10 label, and then 3% of the time the true distribution will say “lol, jk, this airplane is actually a car”. However, in practice, classifiers will also label approximately 30% of airplanes as cars in the test set as well! This incurs higher loss, because the 30% of airplanes that the classifier labels as cars must be independent of the 30% of airplanes that the true distribution labels as cars, which implies that the model disagrees with the true distribution 4.2% of the time; this is worse than the 3% it would get if it consistently labeled airplanes as airplanes. Classifiers trained to interpolation are not Bayes-optimal in the presence of label noise.
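To spell out the arithmetic behind the 4.2% and 3% figures (my own check of the numbers above; airplanes are 10% of CIFAR-10 and 30% of them are relabeled as cars):

```python
# Test-time disagreement with the noisy "true" distribution, for a classifier
# that independently mislabels 30% of airplanes as cars (interpolating model)
# versus one that always predicts "airplane" for airplanes (Bayes-optimal).
airplane_frac = 0.10   # fraction of CIFAR-10 images that are airplanes
noise = 0.30           # chance an airplane is relabeled as a car

# Interpolating model: its 30% of "car" predictions are independent of the
# 30% of noisy "car" labels, so they disagree whenever exactly one says "car".
interpolating = airplane_frac * (noise * (1 - noise) + (1 - noise) * noise)

# Bayes-optimal model: always says "airplane", disagrees only on noisy labels.
bayes_optimal = airplane_frac * noise

print(interpolating, bayes_optimal)   # 0.042 vs 0.03
```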

Okay, let’s get back to distributional generalization. We already know the classifier does not generalize w.r.t accuracy. However, the fact that it still labels about 30% of airplanes as cars suggests a different kind of generalization. Recall that the train and test distributions of classifier outputs have the form (x, f(x)). Consider the feature L(x) that says whether x is an airplane or not. Then, if we replace (x, f(x)) with (L(x), f(x)), then this now looks identical between the train and test distributions! Specifically, this distribution places 7% on (“yes airplane”, “airplane”), 3% on (“yes airplane”, “car”), and 10% on (“no airplane”, c) for every class c other than “airplane”. An alternative way of stating this is that the classifier generalizes w.r.t all tests whose dependence on x factors through the feature L. (In other words, the test can only depend on whether x is an airplane or not, and cannot depend on any other information about x.)

The authors make a more general version of this claim they call feature calibration: for every feature L that could be learned by the classifier, the classifier generalizes w.r.t all tests whose dependence on x factors through L. Note that they do not assume that the classifier actually learns L: just that, if you hypothetically trained the classifier on a dataset of (x, L(x)), then it could learn that function near-perfectly.

They then provide evidence for this through a variety of experiments and one theorem:

- If you plug in the constant feature L(x) = 0 into the conjecture, it implies that classifiers should get the right class balance (i.e. if your distribution contains class 1 twice as often as class 0, then you predict class 1 twice as often as class 0 at test time). They demonstrate this on a rebalanced version of CIFAR-10, even for classifiers that generalize poorly w.r.t accuracy.

- When using a WideResNet (for which the true CIFAR-10 labels are learnable), if you add a bunch of structured label noise into CIFAR-10, the test predictions reflect that same structure.

- The same thing is true for decision trees applied to a molecular biology dataset.

- A ResNet-50 trained to predict attractiveness on the CelebA dataset (which does not generalize w.r.t accuracy) does satisfy feature calibration w.r.t “wearing lipstick”, “heavy makeup”, “blond hair”, “male”, and “eye-glasses”. Note there is no label noise in this case.

- AlexNet predicts that the right fraction of dogs are Terriers, even though it mistakes which exact dogs are Terriers.

- The nearest-neighbor classifier provably satisfies feature calibration under relatively mild regularity conditions.

In an appendix, they provide preliminary experiments suggesting this holds pointwise. In our mislabeled airplane example, for a specific airplane x from the test set, if you resample a training set (with the 30% mislabeling of airplanes) and retrain a classifier on that set, then there is a roughly 30% chance that that specific x will be misclassified as a car.

The authors then introduce another distributional generalization property: agreement. Suppose we have two classifiers f and g trained on independently sampled training sets. The agreement conjecture states that the test accuracy of f is equal to the expected probability that f agrees with g on the test distribution (loosely speaking, this is how often f and g make the same prediction for test inputs). The agreement property can also be framed as an instance of distributional generalization, though I won’t go into the specific test here. The authors perform similar experiments as with feature calibration to demonstrate that the agreement property does seem to hold across a variety of possible classifiers.

Interestingly, these properties are not closed under ensembling. In our mislabeled airplane example, every model will label 30% of airplanes as cars, but which airplanes are mislabeled is independent across models. As a result, the plurality voting used in ensembles reduces the misclassification rate to 22%, which means that you no longer satisfy feature calibration. Consistent with this, the authors observe that neural network ensembles, random forests, and k-nearest neighbors all did not satisfy feature calibration, and tended to be closer to the Bayes-optimal solution (i.e. getting closer to being robust to label noise, in our example).
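For the curious, here is one simple way to get a number like that 22%; I'm assuming a 3-model ensemble with independent mistakes, which the summary doesn't state explicitly:

```python
# Back-of-the-envelope check (assumption: a 3-model ensemble with independent
# mistakes). Each model mislabels a given airplane as a car with probability
# 0.3, and the plurality vote is wrong when at least 2 of the 3 are wrong.
from math import comb

p = 0.3
p_vote_wrong = sum(comb(3, k) * p**k * (1 - p)**(3 - k) for k in (2, 3))
print(round(p_vote_wrong, 3))  # 0.216, i.e. roughly the 22% quoted above
```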

Summary of the summary: Let’s look at the specific ways in which classifiers make mistakes on the test distribution. This is called distributional generalization. The paper makes two conjectures within this frame. Feature calibration says that for any feature that a classifier could have learned, the distribution of its predictions, conditioned on that feature, will be the same at train and test time, including any mistakes it makes. Agreement says that the test accuracy of a classifier is equal to the probability that, on some randomly chosen test example, the classifier’s prediction matches that of another classifier trained on a freshly generated training set. Interestingly, while these properties hold for a variety of ML models, they do not hold for ensembles, because of the plurality voting mechanism.

TECHNICAL AI ALIGNMENT
AGENT FOUNDATIONS

The Many Faces of Infra-Beliefs (Diffractor) (summarized by Rohin): When modeling an agent that acts in a world that contains it (AN #31), there are different ways that we could represent what a “hypothesis about the world” should look like. (We’ll use infra-Bayesianism (AN #143) to allow us to have hypotheses over environments that are “bigger” than the agent, in the sense of containing the agent.) In particular, hypotheses can vary along two axes:

1. First-person vs. third-person: In a first-person perspective, the agent is central. In a third-person perspective, we take a “birds-eye” view of the world, of which the agent is just one part.

2. Static vs. dynamic: In a dynamic perspective, the notion of time is explicitly present in the formalism. In a static perspective, we instead have beliefs directly about entire world-histories.

To get a tiny bit more concrete, let the world have states S and the agent have actions A and observations O. The agent can implement policies Π. I will use ΔX to denote a belief over X (this is a bit handwavy, but gets the right intuition, I think). Then the four views are:

1. First-person static: A hypothesis specifies how policies map to beliefs over observation-action sequences, that is, Π → Δ(O × A)*.

2. First-person dynamic: This is the typical POMDP framework, in which a hypothesis is a belief over initial states and transition dynamics, that is, ΔS and S × A → Δ(O × S).

3. Third-person static: A hypothesis specifies a belief over world histories, that is, Δ(S*).

4. Third-person dynamic: A hypothesis specifies a belief over initial states, and over the transition dynamics, that is, we have ΔS and S → ΔS. Notice that despite having “transitions”, actions do not play a role here.
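As a loose typing sketch of these four signatures (my own paraphrase, not from the post), in Python 3.9+ type-alias form:

```python
# "Dist" is just a placeholder for "belief over"; all of the actual
# infra-Bayesian structure is elided.
from typing import Callable

State = Obs = Action = object          # opaque placeholder types
History = list[tuple[Obs, Action]]     # a finite observation-action sequence
Policy = Callable[[History], Action]
Dist = list                            # stand-in for the Δ(·) operator

FirstPersonStatic = Callable[[Policy], Dist[History]]                    # Π → Δ((O × A)*)
FirstPersonDynamic = tuple[Dist[State],
                           Callable[[State, Action], Dist[tuple[Obs, State]]]]  # ΔS, S × A → Δ(O × S)
ThirdPersonStatic = Dist[list[State]]                                    # Δ(S*)
ThirdPersonDynamic = tuple[Dist[State], Callable[[State], Dist[State]]]  # ΔS, S → ΔS
```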

Given a single “reality”, it is possible to move between these different views on reality, though in some cases this requires making assumptions on the starting view. For example, under regular Bayesianism, you can only move from third-person static to third-person dynamic if your belief over world histories Δ(S*) satisfies the Markov condition (future states are conditionally independent of past states given the present state); if you want to make this move even when the Markov condition isn’t satisfied, you have to expand your belief over initial states to be a belief over “initial” world histories.

You can then define various flavors of (a)causal influence by saying which types of states S you allow:

1. If a state s consists of a policy π and a world history (oa)* that is consistent with π, then the environment transitions can depend on your choice of π, leading to acausal influence. This is the sort of thing that would be needed to formalize Newcomb’s problem.

2. In contrast, if a state s consists only of an environment E that responds to actions but doesn’t get to see the full policy, then the environment cannot depend on your policy, and there is only causal influence. You’re implicitly claiming that Newcomb’s problem cannot happen.

3. Finally, rather than have an environment E that (when combined with a policy π) generates a world history (oa)*, you could have the state s directly be the world history (oa)*, without including the policy π. In normal Bayesianism, using (oa)* as states would be equivalent to using environments E as states (since we could construct a belief over E that implies the given belief over (oa)*), but in the case of infra-Bayesianism it is not. (Roughly speaking, the differences occur when you use a “belief” that isn’t just a claim about reality, but also a claim about which parts of reality you “care about”.) This ends up allowing some but not all flavors of acausal influence, and so the authors call this setup “pseudocausal”.

In all three versions, you can define translations between the four different views, such that following any path of translations will always give you the same final output (that is, translating from A to B to C has the same result as A to D to C). This property can be used to define “acausal”, “causal”, and “pseudocausal” as applied to belief functions in infra-Bayesianism. (I’m not going to talk about what a belief function is; see the post for details.)

FORECASTING

Three reasons to expect long AI timelines (Matthew Barnett) (summarized by Rohin): This post outlines and argues for three reasons to expect long AI timelines that the author expects are not taken into account in current forecasting efforts:

1. Technological deployment lag: Most technologies take decades between when they're first developed and when they become widely impactful.

2. Overestimating the generality of AI technology: Just as people in the 1950s and 1960s overestimated the impact of solving chess, it seems likely that current people are overestimating the impact of recent progress, and how far it can scale in the future.

3. Regulation will slow things down, as with nuclear energy, for example.

You might argue that the first and third points don’t matter, since what we care about is when AGI is developed, as opposed to when it becomes widely deployed. However, it seems that we continue to have the opportunity to intervene until the technology becomes widely impactful, and that seems to be the relevant quantity for decision-making. You could have some specific argument like “the AI goes FOOM and very quickly achieves all of its goals” that then implies that the development time is the right thing to forecast, but none of these seem all that obvious.

Rohin's opinion: I broadly agree that (1) and (3) don’t seem to be discussed much during forecasting, despite being quite important. (Though see e.g. value of the long tail.) I disagree with (2): while it is obviously possible that people are overestimating recent progress, or are overconfident about how useful scaling will be, there has at least been a lot of thought put into that particular question -- it seems like one of the central questions tackled by bio anchors (AN #121). See more discussion in this comment thread.

FIELD BUILDING

FAQ: Advice for AI Alignment Researchers (Rohin Shah) (summarized by Rohin): I've written an FAQ answering a broad range of AI alignment questions that people entering the field tend to ask me. Since it's a meta post, i.e. about how to do alignment research rather than about alignment itself, I'm not going to summarize it here.

MISCELLANEOUS (ALIGNMENT)

Testing The Natural Abstraction Hypothesis: Project Intro (John Wentworth) (summarized by Rohin): We’ve previously seen some discussion about abstraction (AN #105), and some claims that there are “natural” abstractions, or that AI systems will tend (AN #72) to learn (AN #80) increasingly human-like abstractions (at least up to a point). To make this more crisp, given a system, let’s consider the information (abstraction) of the system that is relevant for predicting parts of the world that are “far away”. Then, the natural abstraction hypothesis states that:

1. This information is much lower-dimensional than the system itself.

2. These low-dimensional summaries are exactly the high-level abstract objects/concepts typically used by humans.

3. These abstractions are “natural”, that is, a wide variety of cognitive architectures will learn to use approximately the same concepts to reason about the world.

For example, to predict the effect of a gas in a larger system, you typically just need to know its temperature, pressure, and volume, rather than the exact positions and velocities of each molecule of the gas. The natural abstraction hypothesis predicts that many cognitive architectures would all converge to using these concepts to reason about gases.

If the natural abstraction hypothesis were true, it could make AI alignment dramatically simpler, as our AI systems would learn to use approximately the same concepts as us, which can help us both to “aim” our AI systems at the right goal, and to peer into our AI systems to figure out what exactly they are doing. So, this new project aims to test whether the natural abstraction hypothesis is true.

The first two claims will likely be tested empirically. We can build low-level simulations of interesting systems, and then compute what summary is useful for predicting its effects on “far away” things. We can then ask how low-dimensional that summary is (to test (1)), and whether it corresponds to human concepts (to test (2)).

A followup post illustrates this in the case of a linear-Gaussian Bayesian network with randomly chosen graph structure. In this case, we take two regions of 110 nodes that are far apart from each other, and operationalize the relevant information between the two as the covariance matrix between the two regions. It turns out that this covariance matrix has about 3-10 “dimensions” (depending on exactly how you count), supporting claim (1). (And in fact, if you now compare to another neighborhood, two of the three “dimensions” remain the same!) Unfortunately, this doesn’t give much evidence about (2) since humans don’t have good concepts for parts of linear-Gaussian Bayesian networks with randomly chosen graph structure.
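Here is a toy version of that kind of test, not the followup post's actual experiment (which used a randomly chosen graph structure); I use a layered chain of random linear-Gaussian maps instead, and the layer count, noise scale, and rank cutoff are arbitrary choices for illustration:

```python
# Build a random linear-Gaussian network, sample from it, and check how
# low-dimensional the cross-covariance between two far-apart node groups is.
import numpy as np

rng = np.random.default_rng(0)
n_layers, width = 12, 110          # chain of layers; the two ends are "far apart"
weights = [rng.normal(0, 1 / np.sqrt(width), (width, width))
           for _ in range(n_layers - 1)]

def sample(n):
    x = rng.normal(size=(n, width))                     # root layer
    first = x.copy()
    for W in weights:
        x = x @ W + 0.5 * rng.normal(size=(n, width))   # linear-Gaussian step
    return first, x                                     # two far-apart regions

a, b = sample(20000)
cross_cov = (a - a.mean(0)).T @ (b - b.mean(0)) / len(a)
sv = np.linalg.svd(cross_cov, compute_uv=False)
print("effective rank:", int((sv > 0.1 * sv[0]).sum()))  # crude 10% cutoff
```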

While (3) can also be tested empirically through simulation, we would hope that we can also prove theorems that state that nearly all cognitive architectures from some class of models would learn the same concepts in some appropriate types of environments.

To quote the author, “the holy grail of the project would be a system which provably learns all learnable abstractions in a fairly general class of environments, and represents those abstractions in a legible way. In other words: it would be a standardized tool for measuring abstractions. Stick it in some environment, and it finds the abstractions in that environment and presents a standard representation of them.”

Read more: Section 1.3 of this version of the paper

Rohin's opinion: The notion of “natural abstractions” seems quite important to me. There are at least some weak versions of the hypothesis that seem obviously true: for example, if you ask GPT-3 some new type of question it has never seen before, you can predict pretty confidently that it is still going to respond with real words rather than a string of random characters. This is effectively because you expect that GPT-3 has learned the “natural abstraction” of the words used in English and that it uses this natural abstraction to drive its output (leaving aside the cases where it must produce output in some other language).

The version of the natural abstraction hypothesis investigated here seems a lot stronger and I’m excited to see how the project turns out. I expect the author will post several short updates over time; I probably won’t cover each of these individually and so if you want to follow it in real time I recommend following it on the Alignment Forum.

FEEDBACK

I'm always happy to hear feedback; you can send it to me, Rohin Shah, by replying to this email.

PODCAST

An audio podcast version of the Alignment Newsletter, recorded by Robert Miles, is available.



Discuss

Facebook is Simulacra Level 3, Andreessen is Level 4

28 апреля, 2021 - 20:38
Published on April 28, 2021 5:38 PM GMT

A passage I just read in The Hard Thing About Hard Things by Ben Horowitz:

Andreessen vs Zuckerberg: How Big Should the Titles Be?

Should your company make Vice President the top title, or should you have Chief Marketing Officers, Chief Revenue Officers, Chief People Officers, and Chief Snack Officers? There are two schools of thought regarding this, one represented by Marc Andreessen and the other by Mark Zuckerberg.

Andreessen argues that people ask for many things from a company: salary, bonus, stock options, span of control, and titles. Of those, title is by far the cheapest, so it makes sense to give the highest titles possible. The hierarchy should have Presidents, Chiefs, and Senior Executive Vice Presidents. If it makes people feel better, let them feel better. Titles cost nothing. Better yet, when competing for new employees with other companies, using Andreessen's method you can always outbid the competition in at least one dimension.

At Facebook, by contrast, Mark Zuckerberg purposely deploys titles that are significantly lower than the industry standard. Senior Vice Presidents at other companies must take title haircuts down to Directors or Managers at Facebook. Why does he do this? First, he guarantees that every new employee gets releveled as they enter his company. In this way, he avoids accidentally giving new employees higher titles and positions than better-performing existing employees. This boosts morale and increases fairness. Second, it forces all the managers of Facebook to understand and internalize Facebook's leveling system, which serves the company extremely well in their own promotion and compensation processes.

He also wants titles to be meaningful and reflect who has influence in the organization. As a company grows quickly, it's important to provide organizational clarity wherever possible and that gets more difficult if there are fifty VPs and ten Chiefs.

Next, he finds that businesspeople often carry inflated titles versus their engineering counterparts. While he recognizes that big titles help them out externally with getting meetings, he still wants to have an organization where the product people and engineers form the cultural core, so he strives to keep this in check as well.

Does Facebook ever miss out on a new hire due to its low titles? Yes, definitely. But one might argue that they miss out on precisely the employees they don't want. In fact, both the hiring and onboarding processes at Facebook have been carefully designed to encourage the right kind of employees to select themselves in and the wrong ones to select themselves out.

Simulacra level 3 is about accurately describing social reality. Level 4 is about defecting on people trying to do level 3, by painting an inaccurate model of social reality. It seems to me this is a crystal clear example of two people recommending Level 3 vs Level 4 strategies.

(Or so it is in my mind. I'm sure Zvi and Benquo and others will say Level 4 is something else.)

As an aside, this book is far better than it has any right to be at giving advice on building successful companies. It's a book that repeatedly stares into the dark at the things that will kill your company (e.g. how to fire senior people, how to minimize internal politicking, when smart people are bad employees, etc) and gives simple and clear advice in each situation. I've personally found it immensely helpful.



Discuss

When Should the Fire Alarm Go Off: A model for optimal thresholds

28 апреля, 2021 - 15:27
Published on April 28, 2021 12:27 PM GMT

In this post I consider a model where there is a possible bad outcome, and the chance of it happening is related to something we can measure. We need to be able to set a threshold for these measurements, such that if a measurement exceeds the threshold then we decide to act: to prepare for or treat the bad outcome. This seems like a pretty basic idea, so I expect that either most results will be trivial, or someone has already done this before.

A basic example of this would be a disease, where if a patient reaches a threshold for the number and severity of symptoms then the doctor decides to prescribe a medicine. Or the current pandemic: if in February 2020 governments had had a more accurate view of both the chance and severity of a global pandemic, they might have decided to put seemingly drastic measures in place to stop worse outcomes later.
This should also apply to important risks in the future, where it is important to know the probabilities and magnitudes so that we can adequately prepare. We need to know where to sensibly set the threshold that tells us when to act.

A helpful metaphor I will use here is a building with a sprinkler system. We want the sprinklers to turn on when there is a fire (there is a huge cost if the building burns down), but also we don't want the sprinklers constantly going off whenever there's dust in the air because this also has a cost associated with it.

When deciding whether something is going to happen, and whether to act, there are 4 options:

  • True Positive (TP), where the event happens and luckily we decided to act
    • The building is on fire, but the sprinklers turned on and saved everything
  • True Negative (TN), where nothing happens and we didn't do anything
    • No fire, no sprinklers
  • False Positive (FP), where nothing happens and we act unnecessarily
    • The building is not on fire, but the sprinklers turned on anyway
  • False Negative (FN), where the event happens and we didn't act
    • The building is on fire, but the sprinklers didn't turn on and the building burned down

The chance of each of these options will depend on where we set our threshold. If the threshold is very low then we will have a lot of True Positives, but will also have to incur the costs of more False Positives. If the threshold is very high then we will have fewer False Positives, but more False Negatives.

Mathematical Model

We can decide where to put the threshold by calculating the expected value (EV) at a given threshold.

EV = p(TP)⋅V(TP) + p(TN)⋅V(TN) + p(FP)⋅V(FP) + p(FN)⋅V(FN)

Where p(⋅) is the probability of each outcome, and V(⋅) is the return or value of each outcome.

We can express the value of each outcome quite simply, using two positive constants:
- D is the cost we have to pay if we don't treat the disease or avert the disaster
 - The cost of the building burning down
- T is the cost of the treatment or the preparations
 - The cost of the sprinkler system going off

For the True Positive case, we are in the world where the disaster would have happened, but we successfully stop it in exchange for cost T. Not having to pay D is effectively the same as gaining D, and so V(TP)=D−T.

For the True Negative case, we do nothing and nothing happens, so very simply V(TN)=0.

For the False Positive case, we spend T and nothing happens so V(FP)=−T.

For the False Negative case, the disaster happens and costs us D, so V(FN)=−D.

And hence our expected value is

EV = p(TP)(D−T) + p(FP)(−T) + p(FN)(−D)
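
As a trivial sketch in Python (the function name and the numbers are purely illustrative, not taken from the notebook linked at the end):

```python
def expected_value(p_tp, p_fp, p_fn, D, T):
    """EV of a threshold policy; the true-negative term is zero and omitted."""
    return p_tp * (D - T) + p_fp * (-T) + p_fn * (-D)

# Purely illustrative numbers: a rare disaster costing 400x the treatment.
print(expected_value(p_tp=0.009, p_fp=0.05, p_fn=0.001, D=400.0, T=1.0))
```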

The expected value depends on the probabilities, which in turn depend on where we set our threshold t:

p(TP) = P_P ∫_t^∞ ψ_P(θ) dθ

p(FP) = P_N ∫_t^∞ ψ_N(θ) dθ

p(FN) = P_P ∫_{−∞}^t ψ_P(θ) dθ

Where ψ_P(θ) and ψ_N(θ) are the distributions over the thing we're measuring, θ. For example, θ could be the concentration of particles in the air; as this increases, there is a higher chance that the building is on fire. θ could also be an indicator of more complicated and abstract things: the reproductive number of a virus, or the number of warships in the South China Sea. ψ_P(θ) and ψ_N(θ) are both normalized to 1 here, and the actual background chance of a positive event is P_P, hence the chance of a negative event (nothing happens) is P_N = 1 − P_P. ψ_P(θ) can be thought of as "given that the event happens, how likely is each value of θ".

If we want to be good Bayesians about it, we could say ψ_P(θ) = p(θ | event happens) and ψ_N(θ) = p(θ | event doesn't happen).

Finding the Optimal Threshold

We should choose the value of the threshold t which maximizes the expected value. We can write the expected value as a function of t

EV(t) = (D − T) P_P ∫_t^∞ ψ_P(θ) dθ − T P_N ∫_t^∞ ψ_N(θ) dθ − D P_P ∫_{−∞}^t ψ_P(θ) dθ

We can find the maximum of this by taking the derivative with respect to t and setting it equal to 0.

d/dt EV(t) = −(D − T) P_P ψ_P(t) + T P_N ψ_N(t) − D P_P ψ_P(t) = 0

Which rearranges to give us

ψ_P(t) / ψ_N(t) = (1 − P_P) / (P_P (2D/T − 1))

The threshold t will increase as the factor on the right-hand side increases. Interestingly, for given distributions ψ_P and ψ_N, the optimal threshold t only depends on the probability of the event, P_P, and the ratio between the costs of the event and the treatment, D/T. If we express the expected value in units of the treatment cost (divide by T), we can see that it also only depends on these variables:

EV(t)/T = (D/T − 1) P_P ∫_t^∞ ψ_P(θ) dθ − (1 − P_P) ∫_t^∞ ψ_N(θ) dθ − (D/T) P_P ∫_{−∞}^t ψ_P(θ) dθ

Perhaps obviously, this model tells us the ways in which we can be acting rationally (maximizing expected utility) and yet choose a suboptimal threshold because we are misinformed:
- We can have an incorrect assessment of the relative cost of the disaster versus the treatment, D/T
- We can have an incorrect assessment of the background probability of an event happening, P_P
- Our probability distributions ψ_P and ψ_N can be wrong; for example, the variance of smoke concentration when there is a fire could be larger than we thought.

Toy Example and Results

For a very simple example, we can take the distributions ψ_P and ψ_N to be normal distributions with standard deviations of 1,

ψ_P(t) = N_{μ_P}(t) = (1/√(2π)) exp(−(1/2)(t − μ_P)²)

ψ_N(t) = N_{μ_N}(t) = (1/√(2π)) exp(−(1/2)(t − μ_N)²)

Using these we can solve for the threshold value t which maximizes expected utility

t = (1/(μ_P − μ_N)) ln((1 − P_P) / (P_P (2D/T − 1))) + (1/2) (μ_P² − μ_N²) / (μ_P − μ_N)
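
As a sanity check on this closed form, here is a minimal Python sketch (using scipy) that compares it against a brute-force grid search over the expected value. The parameter values μ_P = 1, μ_N = −1, D/T = 400, P_P = 1% mirror the toy example in the figures below, and the variable names are my own, not the linked notebook's.

```python
import numpy as np
from scipy.stats import norm

# Assumed toy parameters: unit-variance normals with means +1 and -1,
# a 1% background probability, and a disaster costing 400x the treatment.
mu_P, mu_N = 1.0, -1.0
P_P = 0.01
D_over_T = 400.0

def ev_over_T(t):
    """Expected value, in units of the treatment cost T, as a function of threshold t."""
    p_tp = P_P * norm.sf(t, loc=mu_P)        # integral from t to infinity of psi_P
    p_fp = (1 - P_P) * norm.sf(t, loc=mu_N)  # integral from t to infinity of psi_N
    p_fn = P_P * norm.cdf(t, loc=mu_P)       # integral from -infinity to t of psi_P
    return (D_over_T - 1) * p_tp - p_fp - D_over_T * p_fn

# Closed-form optimum from the formula above
R = (1 - P_P) / (P_P * (2 * D_over_T - 1))
t_star = np.log(R) / (mu_P - mu_N) + (mu_P**2 - mu_N**2) / (2 * (mu_P - mu_N))

# Brute-force check: the grid maximum should land at (roughly) the same place
grid = np.linspace(-6.0, 6.0, 2001)
t_grid = grid[np.argmax([ev_over_T(t) for t in grid])]
print(f"closed-form t = {t_star:.3f}, grid-search t = {t_grid:.3f}")
```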

Expected Value for Different Threshold Values

We can plot the expected value as a function of the threshold, to see how it changes. If the threshold is set too low, then we lose utility due to acting unnecessarily, while if it is too high then we lose utility because disasters happen which we could have prevented. The optimal t calculated above is at the 'sweet spot' as we can see here:

The top graph is the expected value as a function of the threshold t, with the optimal threshold marked by the green line. Here we have D/T = 400 and P_P = 1%; the distributions ψ_P and ψ_N are plotted below, and we can see that the optimal threshold cuts off almost all of the ψ_P distribution.

We can also look at how things change as our distributions change, specifically as the overlap between them decreases. 

Expected value as a function of the threshold t (top row) for pairs of distributions ψ_P and ψ_N (bottom row). The parameters are D/T = 500 and P_P = 1%. The green line shows the optimal threshold, and the red lines show the threshold we would choose if we over- or underestimated P_P by a factor of 20.

When there is a lot of overlap - when it is difficult to discriminate between the positive and negative situations - there isn't much of a peak in the expected value around the 'sweet spot'. This is because we can't really tell the two outcomes apart, and so being very risk averse (low threshold) is a good plan. In this regime, underestimating the probability of the bad event can have very bad outcomes, while overestimating doesn't really affect things.

As the distributions get further apart (we can discriminate more easily) we get a peak at the optimal threshold, and this peak gets wider as the distributions get further apart. This is because we can more easily place the threshold where it catches all the positive (bad) events while not reacting when we don't need to.

Effect of Incorrectly Estimating the Likelihood of an Event

We can also more thoroughly investigate the consequences of under- or overestimating the probability of an event. These consequences will vary depending on the regime we are in, specifically on the values of D/T and P_P. It is useful to sweep through a wide range of these parameters. We investigate values of D/T ranging from 1 to 10^5. For the probability P_P we can instead use the odds ratio P_P/(1 − P_P), which allows us to investigate very small probabilities (this is important because we should probably care about events which are very unlikely but have large effects). We explore P_P/(1 − P_P) over a range from 10^−3 up to almost certainty. We calculate the difference in expected value if we choose the optimal threshold (EV_best), versus the threshold we would choose if we under/overestimated the odds by a factor of 20 (EV_under/overestimate).

ΔEV = EV_best − EV_under/overestimate

ΔEV measures the utility we lose by choosing a suboptimal threshold. The lost utility comes either from allowing bad events to happen without treatment, or from acting unnecessarily.

The change in expected value if we under- or overestimate the odds of a bad event by a factor of 20, over a range of values for the cost D/T and the true odds P_P/(1 − P_P). For this example, ψ_P and ψ_N are normal distributions with standard deviations of 1, and means of 1 and −1 respectively.

We can clearly see that (for these parameters) the consequences of  underestimating the likelihood of a bad event can be much worse than the consequences of overestimating. The consequences are especially severe when the bad event is very unlikely but is very bad if it does happen.  
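
To make this concrete, here is a rough Python sketch for a single point of that sweep (D/T = 500 and a true P_P of 1%, with the same unit-variance normals; these particular numbers and all names are my own illustration, not taken from the notebook linked below):

```python
import numpy as np
from scipy.stats import norm

# Illustrative parameters for one point of the sweep:
mu_P, mu_N = 1.0, -1.0   # psi_P and psi_N are unit-variance normals
D_over_T = 500.0         # disaster cost in units of the treatment cost
true_P = 0.01            # true background probability of the bad event

def ev_over_T(t, P):
    """Expected value (in units of T) at threshold t, for event probability P."""
    return ((D_over_T - 1) * P * norm.sf(t, loc=mu_P)
            - (1 - P) * norm.sf(t, loc=mu_N)
            - D_over_T * P * norm.cdf(t, loc=mu_P))

def optimal_t(P):
    """Closed-form optimal threshold for the unit-variance normal distributions."""
    R = (1 - P) / (P * (2 * D_over_T - 1))
    return np.log(R) / (mu_P - mu_N) + (mu_P**2 - mu_N**2) / (2 * (mu_P - mu_N))

true_odds = true_P / (1 - true_P)
for label, factor in [("underestimate", 1 / 20), ("overestimate", 20)]:
    believed_odds = true_odds * factor
    believed_P = believed_odds / (1 + believed_odds)
    # Threshold chosen under the wrong belief, evaluated under the true probability.
    lost = ev_over_T(optimal_t(true_P), true_P) - ev_over_T(optimal_t(believed_P), true_P)
    print(f"{label} odds by 20x: utility lost (in units of T) = {lost:.3f}")
```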

Feel free to examine this model, play around with parameters, or find mistakes here:
https://colab.research.google.com/github/peterbarnettnz/risk_neglect/blob/main/Risk_false_positive_negative.ipynb



Discuss

Which type of memory is more taxing?

28 апреля, 2021 - 14:10
Published on April 28, 2021 11:10 AM GMT

There are two basic types of memory:

1) Declarative, which is when the information gets (seemingly) stored all in one place. For example: You learn that in English, "school" is the term for the place you go to attend classes, and store that as a single piece of information. 

2) Procedural, which is when more than one piece gets stored and they have to be used in conjunction to bring the memory to mind. An example (from my own childhood) is that I couldn't easily recall whether June or July comes first, so I stored a two-step procedure: a) l precedes n in the alphabet, b) the order is reversed for these two months. (In practice, it was applied as a one-step negation of the declarative information about alphabetical order, a special case: I felt it was superfluous, and actually had difficulty, to store the month succession as a separate, declarative piece of information.)

Now, in theory any piece of information can be stored in either way. Maybe more sizable information has to be procedural if it will survive (by which I only mean: be recalled) in the longer term, but we do know from high-school tests that we could store even very sizable information for the day of the test. 
 

I also remember, from late middle school, that I was trying to re-form a large number of memories and other knowledge from declarative to procedural, exactly hoping to free up more time and space (yes, I am a computer). In essence I was trying to categorize various such memories and link them to form patterns which could then be used in other batches of tied memories, and so on.

From my recollection of the above (which is actually declarative by now, since it is of a synopsis), I have to suppose that I sensed procedural memory to be less taxing. That said, declarative memories are likely by their very nature more useful in forming a stable framework. Procedural memory is naturally linked to the generation of new thought; in a sense it is a perpetual re-enactment of the way you thought in order to store that information.

Your thoughts on this?
 



Discuss

Better air is the easiest way not to die

28 апреля, 2021 - 13:24
Published on April 28, 2021 10:24 AM GMT

It’s hard to prioritize health advice. I’m told I should limit salt and eat cruciferous vegetables and do cardio and sleep well and limit alcohol and reduce stress and go for regular checkups. But how much do each of these matter? If you’re a fallible hairless ape, what should you do first?

You should put the stuff with the highest return on effort first, and that’s air.



Discuss

Talkers and doers

28 апреля, 2021 - 07:26
Published on April 28, 2021 4:26 AM GMT

This is a linkpost for https://amirbolous.com/posts/doers

In this world, there are two types of people: the talkers and the doers. The talkers, self-evidently, talk about things. They talk about building. They talk about solving problems. They talk about starting a project. Most people are talkers.

But some special people are doers. They build. They solve problems. They draw. They write. They code. They paint. They do stuff. They don't need to talk because their work speaks for itself.

It's easy to be a talker. It's hard to be a doer because it's uncomfortable on many levels. Uncomfortable to acknowledge that you're a noob in the beginning. Uncomfortable to not know the map from the territory. Uncomfortable to not have the cheap validation of people around you. But uncomfortable is good: the only way to learn is the hard way (that is, getting outside your comfort zone), which is by definition uncomfortable.

Whenever I find myself talking and not doing, I try to do myself and everyone else a huge favor.

I shut the hell up.

I hope that others strive to do this too.



Discuss

Modern Monetary Theory for Dummies

28 апреля, 2021 - 04:22
Published on April 27, 2021 1:11 PM GMT

Modern Monetary Theory, called MMT, is a modern theory of how fiat money works, together with some implications of this. I don't know the intricate details of the theory, but there are some broad conclusions you can draw about someone who talks positively about it.

I'll try to very briefly summarize the understanding I have gathered of how MMT people view the world. Fiat money is different from real-world commodities in a significant way because it is possible to print any amount of it. The American government can never default on its debts if those debts are in the form of dollars, because the US government is able to print literally any amount of dollars. A significant and fundamental reason why fiat money is then valuable (even if the supply could become infinite) is that you need it to pay taxes in the US. The government does not accept tax payments by ANY other means. This means that if you are a US citizen, when tax day comes you will be forced to "buy" dollars from other people in order to avoid going to jail; this creates a demand for the dollar irrespective of the supply. Moreover, the tax amount is also decided by the government, which means that they can create whatever level of demand they want for the dollar among US taxpayers.

So how does fiat work? Firstly, every single dollar in the world has been printed by the American government (FED), and the act of printing and the act of government spending are exactly the same. The only way to spend is to print and the only way to print is to spend. However, since all spending prints money, the government also tries to destroy money. It does this via taxation; you can look at it as if every dollar taxed is burned. Generally, however, the government spends more than it taxes, which is called running a deficit, which means that they print more money than they destroy. This creates something called inflation.

You can look at inflation as a tax on liquidity, which is just a fancy word for cash. If the government prints the same amount of money that is already in circulation, then everyone's dollars effectively lose half of their value. So imagine that you had a hundred dollars: when the government printed this money, you would effectively have lost 50 dollars.

This might sound a bit crazy: if they have total control over the supply and over the demand, and they can make the demand sky-high while keeping the supply sky-high, don't they have a free wealth creator? No, the wealth is capped by what is produced by US taxpayers. However, what you are seeing is an economic device that could theoretically be used to enslave people. You create enough of a demand for dollars by increasing taxes, and you print enough money that people's liquidity is worth almost nothing; then you've put the would-be taxpayers into an impossible situation. Obviously, they'd probably leave way before you did this.

It may or may not be obvious how this loss of buying power occurs. The technical explanation is that you originally had 100 dollars that were able to buy some amount of commodities; for example, you could buy 100 milk packets at 1 dollar each. Then the government printed another 100 dollars but the amount of milk in the world stayed the same. Suddenly the supply of dollars has gone up, and as such the "price" of a dollar goes from 1 milk packet to half of a milk packet. The reason the price drops when supply increases is that dollars become more available. The milk salesman suddenly doesn't have to come to you to get his dollars, he can also go to the government and sell his milk, and as such, you have to increase how much you're willing to pay for the milk in order to compete with the government. Twice as much supply roughly translates to half the price.

So when the government inflates, you can see it as a tax on liquidity: if you hold a lot of dollars at the time of inflation then you will be heavily taxed, while if you instead hold a lot of commodities at the time of inflation you will be unaffected. Obviously, the group that is best off is the government, since they got all of the money that was taxed from the liquidity holders. Another effect of inflation is that it reduces debt, which shows the interesting relationship between debt and liquidity: they are each other's inverses. Having cash is the opposite of owing cash, and you would expect that if inflation reduces the value of cash then inflation should also reduce the value of debt, in the sense that the person owing money has less real value to repay. An easy illustration of this is that the milkman owes 100 dollars to the bank; inflation happens, which makes him able to sell milk for twice the price, and as such he is able to pay off the debt with half as much milk as he needed before the inflation. So his debt effectively halved in value.
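
The arithmetic of the milk example can be spelled out in a few lines of Python (a toy illustration using the same numbers as above, nothing more):

```python
# Toy numbers from the milk example above (illustration only).
money_supply = 100.0   # dollars in circulation before printing
milk_price = 1.0       # dollars per packet of milk
your_cash = 100.0      # your liquidity
milkman_debt = 100.0   # the milkman's debt to the bank, in dollars

# The government prints as many new dollars as already exist:
new_supply = 2 * money_supply
new_milk_price = milk_price * new_supply / money_supply  # prices roughly double

print("your cash buys:", your_cash / milk_price, "->",
      your_cash / new_milk_price, "packets of milk")
print("milkman's debt costs him:", milkman_debt / milk_price, "->",
      milkman_debt / new_milk_price, "packets of milk")
```

The cash holder's purchasing power halves, and the real burden of the debt halves too, which is exactly the inverse relationship described above.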

Another interesting aspect of inflation is that it really is only relevant as a process (transition), not as a state. If you have a society where a packet of milk is worth 1 dollar then you can never say what the "inflation" of this society is. Rather you could only say that if that society changed from milk being worth 1 dollar to 2 dollars then you experienced 100% inflation. However, after this price shift, you would no longer be able to say what the "inflation" of society is. So inflation is the change in the relative prices between fiat currency and commodities.

Lastly, it's interesting that banks and governments pretty much occupy opposite positions in balancing inflation. Banks produce debt, which we already understand to be anti-liquidity. If you borrow 100 dollars from a bank at a rate of 1% interest, then you will have to pay them back 101 dollars at the end of the loan period. Imagine that there are only 100 dollars in the world because only 100 dollars were ever printed. This means it's impossible for you to pay your debt back. This would cause you to desperately try to first get all 100 dollars in circulation and then try to get another dollar from somewhere; perhaps you pay 100 dollars of your debt to the bank, then they purchase a 1 dollar commodity off of you, and you use that last dollar to pay the rest of your debt off. This is how and why it is possible that the world is in more debt than there is money (way more debt): debt is created out of thin air when interest is applied. Additionally, we saw an interesting effect caused by the debt in our example, which was that money suddenly came into short supply. We desperately needed all of the money in the world and a little bit more, which means the demand for dollars increased, and thus we would perhaps be willing to sell our packet of milk for less than a dollar. This is called deflation and is the opposite of inflation. Deflation is the effect of fiat getting more buying power through either the creation of debt or an increase in productivity.

Just being able to produce more things more efficiently, at a cheaper price, causes deflation; there is even a hint in there for why: it's "at a cheaper price". As the milkman is able to increase efficiency at his farm by using machines, the amount of milk he has goes up by a lot. As such supply rises and prices go down. Since the price went down, we see natural deflation. If there were a fixed amount of dollars in the world then the dollar would become more and more valuable as time went on. We would start dividing it up into cents again, since we would be able to buy more and more milk for 1 dollar.

So deflation is caused by banks (or anyone) creating debt as well as an increase in production while inflation is caused by government spending and a decrease in production.



Discuss

A casual intro to Geometric Algebra

28 апреля, 2021 - 04:11
Published on April 28, 2021 12:00 AM GMT

(casual but not informal, prerequisites are trigonometry, vectors, and complex numbers, subject as digested by a non-expert in an unrelated field of science)

Some background

How I got interested in geometric algebra

One evening I stayed after class to speak with my mechanics professor. He showed me a formulation of Maxwell's equations that looked like

∇F = J

"Wait, what? Aren't there supposed to be four equations?" is a question I should have asked, had I known what a Maxwell was. A year later I had forgotten the exchange, but I happened to come across a swift introduction to geometric algebra on youtube. A comically dramatic flashback ensued and after a brief email correspondence, I ended up with Alan Macdonald's fantastic Linear and Geometric Algebra.

Motivation: representing rotations.

Hermione's hand shot up wildly from the desk right in front of her professor's.

"I know this! In Linearibus Essentia Mathematica by Grant Sanderson, chapter three, minute 3, second 48, he says that the ending coordinates of the basis vectors after a linear transformation uniquely describe the entire transformation! Put the coordinates of rotated basis vectors in a matrix, define the matrix product, and you're all set!"

Professor Vector smiled kindly, "That's spot on, Miss Granger. 10 points to Gryffindor! However, it is known that algorithmagically cheaper methods exist."

--

By the end of high school, it had been essentially imprinted upon me that there were two flavours of rotation: rotation matrices and complex numbers.

Here's how to rotate the vector [1, 0] by 30° anti-clockwise using a matrix.

  • Notice the coordinates of the basis vectors (read vertically) after a rotation by θ degrees must be

\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}

  • Replace θ by 30° and perform matrix left-multiplication on the vector

\begin{bmatrix} \sqrt{3}/2 & -1/2 \\ 1/2 & \sqrt{3}/2 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} \sqrt{3}/2 \\ 1/2 \end{bmatrix}

And here's the same rotation, now using complex numbers.

  • Throw the vector [1, 0] on the complex plane like so: 1+0i
  • Multiply by e^{iπ/6} like this:
    • e^{iπ/6}(1 + 0i) = e^{iπ/6} = cos(π/6) + i sin(π/6) = √3/2 + i/2
  • Hey presto, retrieve the vector [√3/2, 1/2] from the complex plane (a quick numerical check of both recipes follows below)
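
Here is that check as a short numpy sketch (my own; it simply redoes the 30° example both ways):

```python
import numpy as np

theta = np.pi / 6  # 30 degrees

# Recipe 1: rotation matrix acting on the column vector [1, 0]
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
v_matrix = R @ np.array([1.0, 0.0])

# Recipe 2: multiply the complex number 1 + 0i by e^{i*pi/6}
z = np.exp(1j * theta) * (1 + 0j)
v_complex = np.array([z.real, z.imag])

print(v_matrix, v_complex)  # both ~[0.866, 0.5], i.e. [sqrt(3)/2, 1/2]
```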

Looking back on these notations, I feel like they do not capture the essence of what it means to rotate a vector. Matrix notation is really more adapted for robots than for humans. The issue is that unless you sit down and draw the resulting positions of the resulting basis vectors, you cannot tell what kind of transformation is being performed by glancing at the matrix. Complex numbers are already more compact, but it's not so obvious what the imaginary unit has to do with rotation in the first place. 

Is there a better way?

Geometric Algebra

The promise of geometric algebra is the encapsulation of elementary linear transformations on points, lines, planes, and more into a useful and transparent algebra.

What follows is a short introduction to the subject, following a selection of topics in Linear and Geometric Algebra by Alan Macdonald. We'll have a look at

  1. Oriented lengths, areas, and volumes in 3D
  2. The outer product and geometric product
  3. Generalizing complex numbers
  4. Doing rotations better

1. Oriented lengths, areas, and volumes in 3D

An oriented length v, also called a vector, is an arrow. Its length, or norm, is some real number |v|. If we put two oriented lengths u and v head-to-tail, then we'll get a new oriented length u + v. We can also stretch it like this: 2v, and flip it like this: −v

An oriented area B is very similar to an oriented length. It's a segment of a plane, but we decide that it has fingers and that it's pointing in one of two orientations, drawn as a swirly arrow, ↺ or ↻. We denote B's area, which is also called its norm, by |B|. Just like with oriented lengths, we can add two oriented areas A and B together: A+B, and scale them like so: 2B.

An oriented volume T is a segment of space, or blob, with an orientation ↺ or ↻ and volume |T|. We can add blobs like T_1 + T_2 and scale them like aT. Surprisingly, oriented volumes do not have a "direction" in 3D space. Even more surprisingly, they do in 4D. Why? (Hint: does an oriented length have a direction independent from its orientation in 1D space? What about 2D space?)

Oriented lengths, areas, and volumes 

Importantly, oriented areas and volumes don't have a shape, because all we know is their orientation, direction, and norm. Therefore, in a given plane, any square is equal to any circle of the same norm. This point may seem confusing at first, but we can see it as an extension of the definition "a vector is an equivalence class under equipollence of ordered pairs of points".

2. The outer product and geometric product

Take two vectors u and v. Their outer product u∧v is the oriented area generated when the two vectors form the sides of a parallelogram in the plane containing u and v. Its orientation is the opposite of the orientation of v∧u.

There's a deep relationship between the cross product × and the outer product ∧: in 3D, the cross product is the vector orthogonal to the oriented area generated by the outer product.

Let's turn our attention to the shiny new geometric product of two vectors u and v:

uv=u⋅v+u∧v

We define it as the sum of the inner and outer products of the two vectors. OK, so as an example, let's consider vectors as members of ℝ² and see what these three products mean component-wise.

The inner product of two vectors is the familiar dot product which yields a scalar.

e_1 ⋅ e_2 = [1, 0] ⋅ [0, 1] = 1⋅0 + 0⋅1 = 0

The outer product of two vectors will yield a bivector which is just an oriented area. The bivector below is an oriented area of norm 1 in the plane spanned by the two standard basis vectors.

e_1 ∧ e_2 = [1, 0] ∧ [0, 1]

Finally, the geometric product is the sum of the above. This sum yields a multivector, which is like a Halloween candy bag of different dimensional objects.

e_1 e_2 = 0 + [1, 0] ∧ [0, 1]

Inner, outer and geometric products

Let θ be the signed angle from u to v, so θ ∈ [−π, π], and let's use the figure above to motivate the actual definitions of the inner and outer products.

The inner product of two vectors gives a scalar that is largest when the two vectors are pointing in the same direction (θ=0), zero when they're orthogonal (θ=π/2 or −π/2) to each other, and smallest when they're pointing in opposite directions (θ=π). 

u⋅v=|u||v|cosθ

The outer product of two vectors returns an oriented area, so in R² it must be proportional to the bivector e1∧e2. When the two vectors point in the same or opposite directions, the area spanned is zero; when they're orthogonal to each other, the norm is maximal. The orientation is decided by the direction in which the area is swept, which is exactly the information carried by the sign of θ. So we have

u∧v = |u||v|sinθ (e1∧e2)
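As a sanity check, here's a short numpy sketch (not from the book; u, v, and hence θ are arbitrary) verifying both formulas: the dot product equals |u||v|cosθ and the e1∧e2 coefficient of the outer product equals |u||v|sinθ, with θ measured from u to v.

    import numpy as np

    u = np.array([2.0, 1.0])    # arbitrary example vectors
    v = np.array([-0.5, 3.0])

    theta = np.arctan2(v[1], v[0]) - np.arctan2(u[1], u[0])  # signed angle from u to v
    r = np.linalg.norm(u) * np.linalg.norm(v)

    dot = u[0] * v[0] + u[1] * v[1]       # inner product
    wedge = u[0] * v[1] - u[1] * v[0]     # coefficient of e1∧e2 in u∧v

    assert np.isclose(dot, r * np.cos(theta))
    assert np.isclose(wedge, r * np.sin(theta))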

3. Generalizing complex numbers

The algebraic properties of complex numbers (and much more) are subsumed by geometric algebra. How? Let's look at complex numbers by trying to rebuild the imaginary unit i from the tools that we've discovered so far. We'll need two properties of the geometric product for this.

First, watch what happens when you swap the terms of a geometric product of two basis vectors.

e1e2 = 0 + (1, 0)∧(0, 1) = −0 − (0, 1)∧(1, 0) = −e2e1

Second, the geometric product of a basis vector with itself will yield 1.

e1e1 = (1, 0)⋅(1, 0) + (1, 0)∧(1, 0) = 1 + 0 = 1

And now, interpreting exponentiation as repeated geometric products, this falls out.

(e1e2)² = e1e2e1e2 = −e1e1e2e2 = −1 = i²

Behold, the unit imaginary (now in bold, to mark that it denotes the bivector e1e2). From this point onwards, anything you'd like to do with complex numbers can be done with real vectors, which is excellent. What's more, the unit imaginary now represents an oriented area, which, we'll see, is a direct answer to our original question of what i is doing in rotation.
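If you'd like to see this happen numerically, here is a hand-rolled sketch of the full geometric product of the 2D algebra (my own construction, not the book's; no GA library is assumed). A multivector is stored as a 4-tuple (a0, a1, a2, a3) meaning a0 + a1·e1 + a2·e2 + a3·e1e2, and the multiplication table below follows from e1e1 = e2e2 = 1 and e2e1 = −e1e2.

    def gp(a, b):
        """Full geometric product of two 2D multivectors (a0, a1, a2, a3)."""
        a0, a1, a2, a3 = a
        b0, b1, b2, b3 = b
        return (
            a0*b0 + a1*b1 + a2*b2 - a3*b3,   # scalar part
            a0*b1 + a1*b0 - a2*b3 + a3*b2,   # e1 part
            a0*b2 + a2*b0 + a1*b3 - a3*b1,   # e2 part
            a0*b3 + a3*b0 + a1*b2 - a2*b1,   # e1e2 part
        )

    e1 = (0, 1, 0, 0)
    e2 = (0, 0, 1, 0)
    i = gp(e1, e2)

    print(i)            # (0, 0, 0, 1): the unit bivector e1e2
    print(gp(e2, e1))   # (0, 0, 0, -1): swapping the factors flips the sign
    print(gp(e1, e1))   # (1, 0, 0, 0): a basis vector squares to 1
    print(gp(i, i))     # (-1, 0, 0, 0): (e1e2)² = -1, just like i²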

So in particular, this Euler identity holds:

e^(iπ) = cos π + i sin π = −1 + 0i = −1

And it turns out that the geometric product of two vectors even has a polar form:

uv = |u||v|cosθ + |u||v|sinθ (e1∧e2) = |u||v|(cosθ + i sinθ) = r e^(iθ)

4. Doing rotations better

So you'd like to rotate a vector. The first question to ask is: around what?

[Figure: The plane versus the axis of rotation]

Let's consider the 2D case, where some vector u lying in the plane i = e1e2 is rotated about the origin counter-clockwise by θ to become the vector v. What is the axis of rotation?

Well, it's hard to say! We could argue that u is rotating around some third axis poking through the center of the page, but the problem is that our space is 2-dimensional, so no such axis exists in it. To avoid any philosophy, let's specify the plane of rotation instead of an axis.

First, write down the polar form of uv and then solve for v, using the fact that |u| = |v|.

uv = |u||v|e^(iθ)  ⟺  u²v = u|u||v|e^(iθ)  ⟺  |u|²v = |u|²u e^(iθ)  ⟺  v = u e^(iθ)

Nice. Multiplying a vector in the plane i on the right by e^(iθ) rotates it by the angle θ, turning u into v. You can check that multiplying on the left by e^(−iθ) does the same thing. A quick numerical check of this 2D formula is sketched below.
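This is a minimal sketch (again my own, using the same 4-component representation a0 + a1·e1 + a2·e2 + a3·e1e2 as earlier): right-multiplying a vector by the rotor cosθ + sinθ·e1e2 gives the same result as the usual 2D rotation matrix. The particular vector and angle are arbitrary.

    import numpy as np

    def gp(a, b):
        """Full geometric product of two 2D multivectors (a0, a1, a2, a3)."""
        a0, a1, a2, a3 = a
        b0, b1, b2, b3 = b
        return (a0*b0 + a1*b1 + a2*b2 - a3*b3,
                a0*b1 + a1*b0 - a2*b3 + a3*b2,
                a0*b2 + a2*b0 + a1*b3 - a3*b1,
                a0*b3 + a3*b0 + a1*b2 - a2*b1)

    theta = 0.7                                        # arbitrary rotation angle
    u = (0.0, 3.0, 1.0, 0.0)                           # the vector 3e1 + e2
    rotor = (np.cos(theta), 0.0, 0.0, np.sin(theta))   # e^(iθ) = cosθ + sinθ·e1e2

    v = gp(u, rotor)                                   # v = u e^(iθ)
    print(v)                                           # still a pure vector: (0, v1, v2, 0)

    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    assert np.allclose(v[1:3], R @ np.array([3.0, 1.0]))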

So what about doing the same in 3D? What about rotating in some other arbitrary plane? Isn't this what we've all been waiting for? Absolutely! We'll have to get our hands dirty for a moment, but we'll emerge on the other side with the most compact known algebraic representation of rotations in 3D.

First, let's generalize iθ to be an arbitrary bivector whose norm is the rotation angle θ, and call it an angle. Think of it like an oriented slice of pizza with the pointy end stuck at the origin. Here are some concrete examples.

e3e1·30°,   e1(e2+e3)·π,   (e1−e2)·π/4

In a general rotation of a vector u about the origin, the vector may not even lie in the plane of rotation. Like olives on a pizza, there's some bit of u that's stuck to the cheese and some bit that has leaped away. Geometrically, u can be written as the sum of its projection onto, and rejection from, the plane of the angle i.

u=u∥+u⊥
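In coordinates this split is easy to compute. Below is a short numpy sketch (mine; the plane and vector are arbitrary examples), taking the plane of the angle i = e1e3 and splitting u into the part lying in that plane and the part orthogonal to it.

    import numpy as np

    u = np.array([1.0, 2.0, 3.0])   # arbitrary example vector
    a = np.array([1.0, 0.0, 0.0])   # orthonormal basis of the plane of i = e1e3,
    b = np.array([0.0, 0.0, 1.0])   # spanned by e1 and e3

    u_par = np.dot(u, a) * a + np.dot(u, b) * b   # projection: the part in the plane
    u_perp = u - u_par                            # rejection: the part orthogonal to it

    assert np.allclose(u_par + u_perp, u)
    assert np.isclose(np.dot(u_perp, a), 0.0)
    assert np.isclose(np.dot(u_perp, b), 0.0)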

We're going to introduce some small changes to the previous rotation identity. First, we'll have to consider the two components of u separately and then, bear with me, split the exponential into two halves. The rejection of u is left totally unaffected, because it's orthogonal to the plane of rotation.

v = u∥e^(iθ) + u⊥ = u∥e^(iθ/2)e^(iθ/2) + u⊥e^(−iθ/2)e^(iθ/2)

We would like to send one of those half-exponentials to the other side, so that we can recombine u∥ and u⊥. Luckily, u∥ lies in the plane of i, so u∥e^(iθ/2) = e^(−iθ/2)u∥, while u⊥, being orthogonal to that plane, commutes with e^(−iθ/2) outright.

v = e^(−iθ/2)u∥e^(iθ/2) + e^(−iθ/2)u⊥e^(iθ/2) = e^(−iθ/2)u e^(iθ/2)

That's it! Rotation by half-pizza slices on each side of u. Here's how to rotate e1 by the angle e1e3·π/2, a quarter-turn in the e1e3 plane.

v = e^(−e1e3·π/4) e1 e^(e1e3·π/4)
  = (cos π/4 + sin π/4 (e3e1)) e1 (cos π/4 + sin π/4 (e1e3))
  = (e1/√2 + e3/√2)(1/√2 + e1e3/√2)
  = e1/2 + e3/2 + e3/2 − e1/2
  = e3
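And here's a numerical check of that last computation, a sketch of a general Euclidean geometric product (my own implementation, not Macdonald's; the bitmask encoding is a standard trick, and all names are made up). A multivector is a dict mapping a basis-blade bitmask to its coefficient, where bit k set means the blade contains e_(k+1).

    import math

    def reorder_sign(a, b):
        """Sign from reordering the basis vectors of blade a past those of blade b."""
        a >>= 1
        swaps = 0
        while a:
            swaps += bin(a & b).count("1")
            a >>= 1
        return -1 if swaps % 2 else 1

    def gp(A, B):
        """Geometric product of multivectors {bitmask: coefficient}, Euclidean metric."""
        out = {}
        for ba, ca in A.items():
            for bb, cb in B.items():
                bits = ba ^ bb   # shared basis vectors cancel, since each squares to +1
                out[bits] = out.get(bits, 0.0) + reorder_sign(ba, bb) * ca * cb
        return {k: c for k, c in out.items() if abs(c) > 1e-12}

    def rotor(plane, angle):
        """e^(plane·angle/2) = cos(angle/2) + sin(angle/2)·plane, for a unit bivector."""
        r = {0: math.cos(angle / 2)}
        for bits, coeff in plane.items():
            r[bits] = r.get(bits, 0.0) + math.sin(angle / 2) * coeff
        return r

    e1, e3 = {0b001: 1.0}, {0b100: 1.0}
    i = gp(e1, e3)                                  # the angle's plane, e1e3

    R_plus = rotor(i, math.pi / 2)                  # e^( e1e3·π/4)
    R_minus = rotor({k: -c for k, c in i.items()}, math.pi / 2)   # e^(−e1e3·π/4)

    v = gp(gp(R_minus, e1), R_plus)                 # v = e^(−iθ/2) u e^(iθ/2)
    print(v)                                        # {4: 1.0} up to rounding, i.e. e3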
Closing thoughts

  1. Vectors u such that u² ≠ 0 have multiplicative inverses in the geometric algebra over a vector space. Yes, you read that right. Also, I haven't yet found anything that geometric algebra doesn't make instantly easier; it's great.
  2. A strong opinion, lightly held, from a non-physicist: physics should always be taught using the most appropriate notation available. In particular, I believe that cross products in classical mechanics should be disposed of.
  3. David Hestenes' geometric algebra project is also a theory of instruction, as outlined in his article Reforming the Mathematical Language of Physics. I intend to read it and get familiar with these tools.
  4. I found Alan Macdonald's book to be a great read, and I enjoyed the problems, so I recommend it to anyone interested in learning geometric algebra as a natural extension of linear algebra.


Discuss

Faerie Ring meet #2: At Gather.Town

April 28, 2021 - 02:17
Published on April 27, 2021 11:17 PM GMT

This is the second meeting of a Faerie Ring, an instance of a copyable ritual for community. Here's slightly more explanation of the format:

  • A group of up to 8 people
  • A call once every 2 weeks, for 2-3 months.
  • Checking in with each other and seeing how we can help one another

My ring still has room for more peeps, and I'm looking to fill it predominantly with rationalists, so I'm advertising here.

We will meet on Gather.town.

Please RSVP with your email address in a DM, or on this google doc.



Discuss
