LessWrong.com News

A community blog devoted to refining the art of rationality

Let People Move to Jobs

Published on October 21, 2019 6:00 PM UTC

When I argue that we should bring rents down by building more housing, one kind of response I've gotten is:

Why are you trying to move more people into cities? There's lots of housing available in the US, it's just not in the currently trendy cities. Instead of moving people we should be moving jobs. Why couldn't Amazon have put their "second headquarters" in a deindustrializing city like Milwaukee instead of splitting it between DC and NYC?

The idea makes sense: these cities were built out for industries that have moved on, and now they're over-built for current demand. They have lots of buildings available, both commercial and residential, along with a lot of other underutilized infrastructure.

One way to look at it, then, is to ask why companies aren't just moving there on their own. Their rent would fall, and they could pay their employees less for the same standard of living, or effectively give them all large raises by holding pay constant. Everyone would waste less time in traffic. The host city would be strongly supportive instead of somewhere between negative and neutral. So what's keeping them where they are?

A major factor is that current employees don't want to move. If Google announced it was moving everything to Pittsburgh, ~80% of the company would quit. People put down roots and get attached to areas, generally more so than to jobs.

New companies, however, don't have employees they would need to move. Why do tech startups choose the Bay when almost anywhere else would be cheaper? I see two main answers: to be close to their investors and to be able to hire from a deep talent pool.

And this gets us to the real problem with "let's distribute jobs across the country": industries benefit enormously from centralization. Being in the main city for your industry means, for a start:

  • Good ideas flow more freely between organizations

  • Coordination between organizations is easier

  • Switching jobs is easier, so people don't get as stuck in jobs they don't like

  • Ease of switching also allows people to take larger career risks

People expected telecommuting to change this, but even though we have many technical tools that make remote work far more practical than it was even ten years ago, physically being in the same place as your coworkers remains extremely valuable.

Industries do vary, however, in how well centralization suits them. Three groups:

  • Industries that are distributed because they depend on resources that are distributed. People working in farming, logging, fishing, mining, and other industries spread out because the best places for those activities are spread out.

  • Industries that are centralized because they don't have location-based inputs. Software in the Bay, finance in NYC, film/TV in LA, biotech in Boston, commodities in Chicago, insurance in Hartford, etc.

  • Industries that need people. Doctors, teachers, barbers, auto mechanics, etc. will all be a portion of your local population, and they'll mostly go where the people are. Centralization still helps, but people won't travel hundreds of miles to get their hair cut.

Many fewer people work in naturally-distributed industries than they used to. For example, farming is down to only 2% of the population:


[Chart: Thad Woodman, Agricultural Employment Since 1870]

An early example of a centralized industry was the auto industry, in Detroit. There was nothing about Detroit that made it uniquely good for producing cars, but as it became the consensus location for the industry it developed local culture and knowledge that had strong advantages for building cars.

Detroit also illustrates two major downsides of this centralization:

  • They stopped innovating so much. It was only because there were other major centers of automotive production elsewhere that we were able to get cars as reliable, safe, and efficient as we have today.

  • When their industry dried up, so did the city. Their cars weren't the best anymore, they lost business to Japan and others, there were massive layoffs, people moved away, and the city is much worse off than it was.

These two considerations mean we want:

  • A few key cities for each industry, so there can be competition at scales larger than individual companies. With international borders still being a major political force we'll probably be more towards the "less central than ideal" end of the spectrum for a long time.

  • Multiple industries in each city, so that as one industry in a city becomes more or less successful it isn't carrying the success of the whole city. If an industry does fail, others will be able to grow into its place. Reinventing Boston: 1630-2003 (Glaeser 2004) has a good description of how something like this has worked in Boston.

Overall, having people work on the same thing in the same place as each other remains incredibly valuable, and we should let that happen. Instead of trying to distribute industries that are naturally centralized, we should let them centralize while managing the risks. Building more housing doesn't just let people move to where the jobs are; it also allows people working on other things to afford to live in the city.

Comment via: facebook




Turning air into bread

Published on October 21, 2019 5:50 PM UTC


Originally posted on The Roots of Progress, August 12, 2017

I recently finished The Alchemy of Air, by Thomas Hager. It's the story of the Haber-Bosch process, the lives of the men who created it, and its consequences for world agriculture and for Germany during the World Wars.

What is the Haber-Bosch process? It's what keeps billions of people in the modern world from starving to death. In Hager's phrase: it turns air into bread.

Some background. Plants, like all living organisms, need to take in nutrients for metabolism. For animals, the macronutrients needed are large, complex molecules: proteins, carbohydrates, fats. But for plants they are elements: nitrogen, phosphorus and potassium (NPK). Nitrogen is needed in the largest quantities.

Nitrogen is all around us: it constitutes about four-fifths of the atmosphere. But plants can't use atmospheric nitrogen. Nitrogen gas, N2, consists of two atoms held together by a triple covalent bond. The strength of this bond renders nitrogen mostly inert: it doesn't react with much. To use it in chemical processes, plants need other nitrogen-containing molecules. These substances are known as “fixed” nitrogen; the process of turning nitrogen gas into usable form is called fixation.

In nature, nitrogen fixation is performed by bacteria. Some of these bacteria live in the soil; some live in a symbiotic relationship on the roots of certain plants, such as peas and other legumes.

Nitrogen availability is one of the top factors in plant growth and therefore in agriculture. The more fixed nitrogen is in the soil, the more crops can grow. Unfortunately, when you farm a plot of land, natural processes don't replace the nitrogen as fast as it is depleted.

Pre-industrial farmers had no chemistry or advanced biology to guide them, but they knew that soil would lose its fertility over the years, and they had learned a few tricks. One was fertilization with natural substances, particularly animal waste, which contains nitrogen. Another was crop rotation: planting peas, for instance, would replace some of the nitrogen in the soil, thanks to those nitrogen-fixing bacteria on their roots.

But these techniques could only go so far. As the world population increased in the 19th century, more and more farmland was needed. Famine was staved off, for a time, by the opening of the prairies of the New World, but those resources were finite. The world needed fertilizer.

An island off the coast of Peru where it almost never rains had accumulated untold centuries of—don't laugh—seagull droppings, some of the world's best known natural fertilizer. An industry was made out of mining guano on these islands, where it was piled several stories high, and shipping it all over the world. When that ran out after a couple decades, attention turned inland to the Atacama Desert, where, with no rainfall and no life, unusual minerals grew in crystals on the rocks. The crystals included salitre, or Chilean saltpeter, a nitrogen salt that could be made into fertilizer.

It could be made into something else important, too: gunpowder. It turns out that nitrogen is a crucial component not only of fertilizer, but also of explosives. Needing it both to feed and to arm their people, every country considered saltpeter a strategic commodity. Peru, Chile and Bolivia went to war over the saltpeter resources of the Atacama in the late 1800s (Bolivia, at the time, had a small strip of land in the desert, running to the ocean; it lost that strip in the war and has remained landlocked ever since).

By the end of the 19th century, as population continued to soar, it was clear that the Chilean saltpeter would run out within decades, just as the guano had. Sir William Crookes, president of the British Association for the Advancement of Science, warned that the world was heading for mass famine, a true Malthusian catastrophe, unless we discovered a way to synthesize fertilizer. And he called on the chemists of the world to do it.

Nearby, in Germany, other scientists were thinking the same thing. Germany was highly dependent on saltpeter shipped halfway around the world from Chile. But Germany did not have the world's best navy. If—God forbid—Germany were ever to be at war with England (!), the British navy would quickly blockade Germany and deprive it of nitrogen. Germany would have no food and no bombs—not a good look, in wartime.

The prospect of synthesizing fixed nitrogen was tantalizing. After all, the nitrogen itself is abundant in the atmosphere. A product such as ammonia, NH3, could be made from that and hydrogen, which of course is present in water. All you need is a way to put them together in the right combination.

The problem, again, is that triple covalent bond. Owing to the strength of that bond, it takes very high temperatures to rip N2 apart. More troublesome is that ammonia is by comparison a weak molecule. So at temperatures high enough to separate the nitrogen atoms, the ammonia basically burns up.

Fritz Haber was the chemist who solved the fundamental problem. He found that increasing the pressure of the gases allowed him to decrease the temperature. At very high pressures, he could start to get an appreciable amount of ammonia. By introducing the right catalyst, he could increase the production to levels that were within reach of a viable industrial process.
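For reference, here is the overall reaction in standard notation (my addition; this is textbook chemistry rather than a quotation from the book). It makes the pressure trick legible: four volumes of gas become two, so by Le Chatelier's principle higher pressure shifts the equilibrium toward ammonia, compensating for the lower temperatures at which ammonia can survive.

    % Ammonia-synthesis equilibrium (standard chemistry, not quoted from the source)
    \[ \mathrm{N_2(g) + 3\,H_2(g) \;\rightleftharpoons\; 2\,NH_3(g)}, \qquad \Delta H < 0 \]
    % Four moles of gas on the left versus two on the right: raising the pressure
    % pushes the equilibrium toward the product side; the reaction is exothermic.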

Carl Bosch was the industrialist at the German chemical company BASF who led the team that figured out how to turn this into a profitable process, at scale. The challenges were enormous. To start with, the pressures required by the process were immense, around 200 atmospheres. The required temperatures, too, were very high. No one had ever done industrial chemistry in that regime before, and Bosch's team had to invent almost everything from scratch, pioneering an entirely new subfield of high-pressure industrial chemistry. Their furnaces kept exploding—not only from the pressure itself, but because hydrogen was eating away at the steel walls of the container as it was forced into them. No material was strong enough and inexpensive enough to serve as the container wall. Finally Bosch came up with an ingenious system in which the furnaces had an inner lining of material to protect the steel, which would be replaced on a regular basis.

A further challenge was the catalyst: Haber had used osmium, an extremely rare metal. BASF bought up the entire world's supply, but it wasn't enough to produce the quantities they needed. They experimented with thousands of other materials, finally settling on a catalyst with an iron base combined with other elements.

This is the Haber-Bosch process: it turns pure nitrogen and hydrogen gas into ammonia. The nitrogen can be isolated from the atmosphere (by cooling air until it condenses into liquid, then carefully increasing the temperature: different substances boil at different temperatures, so this process separates them). Hydrogen can be produced from water by electrolysis, or, these days, derived from natural gas. The output of the process, ammonia, is the precursor of many important products, including fertilizers and explosives.

The new BASF plant that Bosch built began turning out tons of ammonia a day. It beat out all competing processes (including one that used electric arcs through the air), and provided the world with fertilizer—cheaper and of more consistent quality than could be obtained from the salts of Chile, which were abandoned before they ran out.

Haber-Bosch fed the world—but it also prolonged World War I, and later helped fuel the rise of Hitler.

The Alchemy of Air is as much about the lives of Haber and Bosch, and what happened after their process became a reality, as it is about the science and technology of the process itself. Even though the technology was my main interest this time, I found the history captivating.

Haber was a Jew, at a time when Jews were second-class citizens in Germany. Rather than denouncing the society he lived in, this seemed to cause Haber to seek its approval. After his scientific achievement with ammonia, he got a high-status job at the Kaiser Wilhelm Institutes in Berlin, and sought to be an adviser to the Kaiser himself. Jews were barred from military service, but Haber was able to become a science adviser to the military—even pioneering the use of poison gas in WW1, a role that left him with a reputation as a war criminal.

Haber believed that if Jews showed what good, patriotic German citizens they could be, they could eventually be accepted as equals. Decades later, when the Nazis came to power and began “cleansing” Jews first out of the German government, then out of all of society, Haber saw his dream of acceptance fall completely to pieces. He died, shortly before WW2, in great distress.

Bosch, on the other hand, held liberal political views and was against the Nazis. He even tried to speak out against them, and in a personal meeting with Hitler made a futile argument for freedom of inquiry and better treatment of the Jews. But at the same time he made deals with the Nazis to secure funding for his chemical company—by then he was the head, not only of BASF, but of a broader industry association called IG Farben. He was building a massive chemical plant in the heart of Germany, at Leuna, to produce not only ammonia but also what he saw as his magnum opus: synthetic gasoline, made from coal. In the end Farben became virtually a state company and provided much of the material Germany needed for WW2, including ammonia, gasoline, and rubber.

Bosch died shortly after the war began. On his deathbed, he predicted that the war would be a disaster for Germany. It would go well at first, he said, and Germany would occupy France and maybe even Britain. But then Hitler would make the fatal mistake of invading Russia. In the end, the skies would darken with Allied planes, and much of Germany would be destroyed. It happened as he predicted, and Bosch's beloved Leuna was a major target, ultimately crippled by wave after wave of Allied bombing raids.

Synthetic ammonia is one of the most important industrial products of the modern world, and so Haber-Bosch is one of the most important industrial processes. Around 1% of the total energy of the economy is devoted to it, and Hager estimates that half the nitrogen atoms in your body came from it. It's a crucial part of the story of industrial agriculture, and so a crucial part of the story of how we became smart, rich and free.

The Alchemy of Air: A Jewish Genius, a Doomed Tycoon, and the Scientific Discovery That Fed the World but Fueled the Rise of Hitler




Link: An exercise: meta-rational phenomena | Meaningness

Published on October 21, 2019 4:56 PM UTC

An exercise: meta-rational phenomena | Meaningness




Why Are So Many Rationalists Polyamorous?

Published on October 21, 2019 4:12 PM UTC

Originally posted at Living Within Reason.


Last week, Jacob Falkovich, of the Putanumonit blog, put up a post trying to figure out why rationalists are disproportionately polyamorous. He notes that about 5% of Americans engage in consensual nonmonogamy, while 17% of Americans who took the 2014 Less Wrong survey indicated that they did. My expectation is that both numbers are slightly higher today. To investigate the question, Falkovich developed several theories and surveyed a number of his readers. His results ended up inconclusive.

Since this involves the intersection of the two themes of this blog – rationality and nonmonogamous relationships – I thought I would offer my own theories about why this might be the case. I don’t have any survey data, but if anyone is planning on doing a survey, you may want to include some questions evaluating these theories.

1. THE TRADITIONAL JUSTIFICATIONS FOR MONOGAMY ARE IRRATIONAL

Rationalists try to be rational about everything, so we also try to be rational about relationships. Relationship anarchy is my attempt to derive a rational relationship style from first principles.

While there are some good reasons to be monogamous, anecdotally, the most common justifications I hear for monogamy are jealousy-related. People don’t want open relationships because they would be jealous of their metamours (and often, their partners). But jealousy is just an emotion, and rationalists have a tradition of distrusting emotions. Falkovich somewhat addressed this in his first theory – overcoming intuitions:

A core tenet of Rationality is that what feels true is not necessarily what is true. What feels true may simply be what is pleasant, politically expedient, or what fits your biases and preconceptions. The willingness to entertain the idea that your intuitions about truth may be wrong is a prerequisite for learning Rationality, and Rationality further cultivates that skill.

Unfortunately, Falkovich’s analysis is frustrated by the lack of variance in his survey data on whether people overcome their intuitions. I have a feeling that this result was limited somewhat by the survey questions, which asked participants to rate how much they trusted their intuitions and whether they ever significantly changed their emotions through analysis and introspection.

The difficulty is that there are a whole host of cognitive biases encouraging us to believe that yes, of course we trust our cognition more than our intuition, but that can easily just be motivated reasoning. Some people will admit that they “go with their gut,” but that sort of thing is frowned on in the rationality community, so it doesn’t surprise me that most of the participants claimed to trust their cognition more regardless of whether that’s actually the case.

The small amount of variance in Falkovich’s survey was highly correlated with polyamory, so that lends some credibility to the argument that rationalists choose polyamory because they do not reflexively trust their feelings of jealousy.

2. GAME THEORY

If you spend enough time around my rationalist friends, they will start talking about prisoner’s dilemmas. It’s inevitable. Scott Alexander has a whole game theory sequence. Rationalists love game theory, and in particular, they love coming up with coordination strategies to turn things from zero-sum to positive-sum games.

Monogamy is a zero-sum game. Each person gets one partner, and once that partner is taken, they are removed from the dating pool for everyone else. There is no sharing, coordination, or trading. There are no complicated strategies that can be optimized. In other words, it’s not interesting to rationalists.

Nonmonogamy, properly coordinated, is a positive-sum game. Multiple people can partner with the same person and unless they always want undivided attention at the exact same times, they can coordinate so everyone is better off. Nonmonogamy allows parties to, for example, have a date with one partner while their other partner is busy, spend time with multiple partners at the same time, and coordinate to compensate for imbalances in sex drive. Parties rarely want exactly the same thing from their partners, so there are usually large opportunities for emotional arbitrage.
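As a toy illustration of the zero-sum versus positive-sum framing (my own sketch, not from the original post; the people "A" and "B", the nights, and the utility numbers are all made up purely to show the coordination gain):

    # Toy model: one shared partner, two available nights, two people with
    # opposite preferences. All names and numbers are invented for illustration.

    def total_utility(schedule):
        """Sum each person's utility for the night they are assigned."""
        values = {
            "A": {"Fri": 3, "Sat": 1},  # A prefers Friday
            "B": {"Fri": 1, "Sat": 3},  # B prefers Saturday
        }
        return sum(values[person][night] for person, night in schedule.items())

    # Exclusive arrangement: only A gets any time; B gets nothing.
    exclusive = {"A": "Fri"}
    # Coordinated arrangement: each person gets the night they value most.
    coordinated = {"A": "Fri", "B": "Sat"}

    print(total_utility(exclusive))    # 3
    print(total_utility(coordinated))  # 6 -- a larger total, with A no worse off

The point of the toy model is only that when preferences differ, coordination can raise the total without making anyone worse off, which is the positive-sum structure described above.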

I strongly suspect that this impulse toward coordination and creating positive-sum interactions underlies a substantial amount of rationalists’ preferences for nonmonogamy.

3. GENDER IMBALANCE

On Falkovich's survey, 78% of the respondents were heterosexual men. 11% were women interested in men. That's roughly a 7:1 ratio. Other surveys of the rationalist community have indicated similar gender and sexuality breakdowns.

In the Robert Heinlein novel The Moon is a Harsh Mistress, the moon is originally used as a penal colony, and as a result has a population that is mostly men, with few women. Out of necessity, the society developed to allow women to date multiple men. When there are multiple heterosexual men for each woman interested in dating them, you either have nonmonogamy or you have a lot of lonely men.

A similar thing may be happening with the rationalist community. It’s not the moon, so rationalists are free to date outside of the community, but people often want to date like-minded people. Most rationalists would probably prefer to date other rationalists. Rationalist women likely have multiple suitors all the time, and may find more than one appealing. Unless they are particularly high-status, rationalist men then face the choice of embracing nonmonogamy, dating outside the community, or not dating at all. Notable also is that most of the high-status men in the rationalist community are nonmonogamous. Under those constraints, nonmonogamy may be the ideal choice for many of us.

Further Study

Ozy is currently recruiting nonmonogamous survey participants. If you are nonmonogamous, please consider taking the survey. I have never done any kind of survey design, so I do not know how one would test the above theories. However, if someone is planning on doing a survey of the rationalist community and is interested in this question, I encourage you to consider the above and perhaps try to design some questions to test their accuracy.




What does "meta-execution without indirection" look like?

Published on October 21, 2019 12:59 PM UTC

I've been trying to understand IDA/Factored Evaluation at a deep level, and I find meta-execution especially confusing. The LW post says that it is "HCH + annotated functional programming + a level of indirection", but I'm not sure what the "level of indirection" is doing. To understand it better, I want to know what HCH + Annotated Functional Programming (without indirection) would look like, and how this differs from meta-execution. Any help is much appreciated!




Person-moment affecting views

Published on October 21, 2019 1:40 PM UTC

[Epistemic status: sloppy thoughts not informed by the literature. Hoping actual population ethicists might show up and correct me or point me to whoever has already thought about something like this better.]

Person-affecting views say that when you are summing up the value in different possible worlds, you should ignore people who only exist in one of those worlds. This is based on something like the following intuitions:

  1. World A can only be better than world B insofar as it is better for someone.
  2. World A can’t be better than world B for Alice, if Alice exists in world A but not world B.

The further-fact view says that after learning all physical facts about Alice and Alice’—such as whether Alice’ was the physical result of Alice waiting for five seconds, or is a brain upload of Alice, or is what came out of a replicating machine on Mars after Alice walked in on Earth, or remembers being Alice—there is still a further meaningful question of whether Alice and Alice’ are the same person.

I take the further-fact view to be wrong (or at least Derek Parfit does, and I think we agree the differences between Derek Parfit and me have been overstated). Thinking that the further-fact view is wrong seems to be a common position among intellectuals (e.g. 87% among philosophers).

If the further-fact view is wrong, then what we have is a whole lot of different person-moments, with various relationships to one another, which for pragmatic reasons we like to group into clusters called ‘people’. There are different ways we could define the people, and no real answer to which definition is right. This works out pretty well in our world, but you can imagine other worlds (or futures of our world) where the clusters are much more ambiguous, and different definitions of ‘person’ make a big difference, or where the concept is not actually useful.

Person-affecting views seem to make pretty central use of the concept ‘person’. If we don’t accept the further-fact view, and do want to accept a person-affecting view, what would that mean? I can think of several options:

  1. How good different worlds are depends strongly on which definition of ‘person’ you choose (which person moments you choose to cluster together), but this is a somewhat arbitrary pragmatic choice
  2. There is some correct definition of ‘person’ for the purpose of ethics (i.e. there is some relation between person moments that makes different person moments in the future ethically relevant by virtue of having that connection to a present person moment)
  3. Different person-moments are more or less closely connected in various ways, and a person-affecting view should actually have a sliding scale of importance for different person-moments

Before considering these options, I want to revisit the second reason for adopting a person-affecting view: If Alice exists in world A and not in world B, then Alice can’t be made better off by world A existing rather than world B. Whether this premise is true seems to depend on how ‘a world being better for Alice’ works. Some things we might measure would go one way, and some would go the other. For instance, we could imagine it being analogous to:

  1. Alice painting more paintings. If Alice painted three paintings in world A, and doesn’t exist in world B, I think most people would say that Alice painted more paintings in world A than in world B. And more clearly, that world A has more paintings than world B, even if we insist that a world can’t have more paintings without somebody in particular having painted more paintings. Relatedly, there are many things people do where the sentence ‘If Alice didn’t exist, she wouldn’t have X’ seems straightforwardly true.
  2. Alice having painted more paintings per year. If Alice painted one painting every thirty years in world A, and didn’t exist in world B, in world B the number of paintings per year is undefined, and so incomparable to ‘one per thirty years’.

Suppose that person-affecting view advocates are right, and the worth of one’s life is more like 2). You just can’t compare the worth of Alice’s life in two worlds where she only exists in one of them. Then can you compare person-moments? What if the same ‘person’ exists in two possible worlds, but consists of different person-moments?

Compare world A and world C, which both contain Alice, but in world C Alice makes different choices as a teenager, and becomes a fighter pilot instead of a computer scientist. It turns out that she is not well suited to it, and finds piloting pretty unsatisfying. If Alice_t1A is different from Alice_t1C, can we say that world A is better than world C, in virtue of Alice’s experiences? Each relevant person-moment only exists in one of the worlds, so how can they benefit?

I see several possible responses:

  1. No we can’t. We should have person-moment affecting views.
  2. Things can’t be better or worse for person-moments, only for entire people, holistically across their lives, so the question is meaningless. (Or relatedly, how good a thing is for a person is not a function of how good it is for their person-moments, and it is how good it is for the person that matters).
  3. Yes, there is some difference between people and person moments, which means that person-moments can benefit without existing in worlds that they are benefitting relative to, but people cannot.

The second possibility seems to involve accepting the second view above: that there is some correct definition of ‘person’ that is larger than a person moment, and fundamental to ethics – something like the further-fact view. This sounds kind of bad to me. And the third view doesn’t seem very tempting without some idea of an actual difference between persons and person-moments.

So maybe the person-moment affecting view looks most promising. Let us review what it would have to look like. For one thing, the only comparable person moments are the ones that are the same. And since they are the same, there is no point bringing about one instead of the other. So there is never reason to bring about a person-moment for its own benefit. Which sounds like it might really limit the things that are worth intentionally doing. Isn’t making myself happy in three seconds just bringing about a happy person moment rather than a different sad person moment?

Is everything just equally good on this view? I don’t think so, as long as you are something like a preference utilitarian: person-moments can have preferences over other person-moments. Suppose that Alice_t0A and Alice_t0C are the same, and Alice_t1A and Alice_t1C are different. And suppose that Alice_t0 wants Alice_t1 to be a computer scientist. Then world A is better than world C for Alice_t0, and so better overall. That is, person-moments can benefit from things, as long as they don’t know at the time that they have benefited.

I think an interesting feature of this view is that all value seems to come from meddling preferences. It is never directly good that there is joy in the world, for instance; it is just good because somebody wants somebody else to experience joy, and that desire was satisfied. If they had instead wished for a future person-moment to be tortured, and this was granted, then this world would apparently be just as good.

So, things that are never directly valuable in this world:

  • Joy
  • Someone getting what they want and also knowing about it
  • Anything that isn’t a meddling preference

On the upside, since person-moments often care about future person-moments within the same person, we do perhaps get back to something closer to the original person-affecting view. There is often reason to bring about or benefit a person moment for the benefit of previous person moments in the history of the same person, who for instance wants to ‘live a long and happy life’. My guess after thinking about this very briefly is that in practice it would end up looking like the ‘moderate’ person-affecting views, in which people who currently exist get more weight than people who will be brought into existence, but not infinitely more weight. People who exist now mostly want to continue existing, and to have good lives in the future, and they care less, but some, about different people in the future.

So, if you want to accept a person-affecting view and not a further-fact view, the options seem to me to be something like these:

  1. Person-moments can benefit without having an otherworldly counterpart, even though people cannot. Which is to say, only person-moments that are part of the same ‘person’ in different worlds can benefit from their existence. ‘Person’ here is either an arbitrary pragmatic definition choice, or some more fundamental ethically relevant version of the concept that we could perhaps discover.
  2. Benefits accrue to persons, not person-moments. In particular, benefits to persons are not a function of the benefits to their constituent person-moments. Where ‘person’ is again either a somewhat arbitrary choice of definition, or a more fundamental concept.
  3. A sliding scale of ethical relevance of different person-moments, based on how narrow a definition of ‘person’ unites them with any currently existing person-moments. Along with some story about why, given that you can apparently compare all of them, you are still weighting some less, on grounds that they are incomparable.
  4. Person-moment affecting views

None of these sound very good to me, but nor do person-affecting views in general, so maybe I’m the wrong audience. I had thought person-moment affecting views were almost a reductio, but a close friend says he thought they were the obvious reasonable view, so I am curious to hear others’ takes.




Strong stances

Published on October 21, 2019 1:40 PM UTC

I. The question of confidence

Should one hold strong opinions? Some say yes. Some say that while it’s hard to tell, it tentatively seems pretty bad (probably). There are many pragmatically great upsides, and a couple of arguably unconscionable downsides. But rather than judging the overall sign, I think a better question is, can we have the pros without the devastatingly terrible cons?

A quick review of purported or plausible pros:

  1. Strong opinions lend themselves to revision:
    1. Nothing will surprise you into updating your opinion if you thought that anything could happen. A perfect Bayesian might be able to deal with myriad subtle updates to vast uncertainties, but a human is more likely to notice a red cupcake if they have claimed that cupcakes are never red. (Arguably—some would say having opinions makes you less able to notice any threat to them. My guess is that this depends on topic and personality.)
    2. ‘Not having a strong opinion’ is often vaguer than having a flat probability distribution, in practice. That is, the uncertain person’s position is not, ‘there is a 51% chance that policy X is better than policy -X’, it is more like ‘I have no idea’. Which again doesn’t lend itself to attending to detailed evidence.
    3. Uncertainty breeds inaction, and it is harder to run into more evidence if you are waiting on the fence, than if you are out there making practical bets on one side or the other.
  2. (In a bitterly unfair twist of fate) being overconfident appears to help with things like running startups, or maybe all kinds of things.
    If you run a startup, common wisdom advises going around saying things like, ‘Here is the dream! We are going to make it happen! It is going to change the world!’ instead of things like, ‘Here is a plausible dream! We are going to try to make it happen! In the unlikely case that we succeed at something recognizably similar to what we first had in mind, it isn’t inconceivable that it will change the world!’ Probably some of the value here is just a zero-sum contest to misinform people into misinvesting in your dream instead of something more promising. But some is probably real value. Suppose Bob works full time at your startup either way. I expect he finds it easier to dedicate himself to the work and has a better time if you are more confident. It’s nice to follow leaders who stand for something, which tends to go with having at least some strong opinions. Even alone, it seems easier to work hard on a thing if you think it is likely to succeed. If being unrealistically optimistic just generates extra effort to be put toward your project’s success, rather than stealing time from something more promising, that is a big deal.
  3. Social competition
    Even if the benefits of overconfidence in running companies and such were all zero sum, everyone else is doing it, so what are you going to do? Fail? Only employ people willing to work at less promising looking companies? Similarly, if you go around being suitably cautious in your views, while other people are unreasonably confident, then onlookers who trust both of you will be more interested in what the other people are saying.
  4. Wholeheartedness
    It is nice to be the kind of person who knows where they stand and what they are doing, instead of always living in an intractable set of place-plan combinations. It arguably lends itself to energy and vigor. If you are unsure whether you should be going North or South, having reluctantly evaluated North as a bit better in expected value, for some reason you often still won’t power North at full speed. It’s hard to passionately be really confused and uncertain. (I don’t know if this is related, but it seems interesting to me that the human mind feels as though it lives in ‘the world’—this one concrete thing—though its epistemic position is in some sense most naturally seen as a probability distribution over many possibilities.)
  5. Creativity
    Perhaps this is the same point, but I expect my imagination for new options kicks in better when I think I’m in a particular situation than when I think I might be in any of five different situations (or worse, in any situation at all, with different ‘weightings’).

A quick review of the con:

  1. Pervasive dishonesty and/or disengagement from reality
    If the evidence hasn’t led you to a strong opinion, and you want to profess one anyway, you are going to have to somehow disengage your personal or social epistemic processes from reality. What are you going to do? Lie? Believe false things? These both seem so bad to me that I can’t consider them seriously. There is also this sub-con:

    1. Appearance of pervasive dishonesty and/or disengagement from reality
      Some people can tell that you are either lying or believing false things, due to your boldly claiming things in this uncertain world. They will then suspect your epistemic and moral fiber, and distrust everything you say.
  2. (There are probably others, but this seems like plenty for now.)

II. Tentative answers

Can we have some of these pros without giving up on honesty or being in touch with reality? Some ideas that come to mind or have been suggested to me by friends:

1. Maintain two types of ‘beliefs’. One set of play beliefs—confident, well understood, probably-wrong—for improving in the sandpits of tinkering and chatting, and one set of real beliefs—uncertain, deferential—for when it matters whether you are right. For instance, you might have some ‘beliefs’ about how cancer can be cured by vitamins that you chat about and ponder, and read journal articles to update, but when you actually get cancer, you follow the expert advice to lean heavily on chemotherapy. I think people naturally do this a bit, using words like ‘best guess’ and ‘working hypothesis’.

I don’t like this plan much, though admittedly I basically haven’t tried it. For your new fake beliefs, either you have to constantly disclaim them as fake, or you are again lying and potentially misleading people. Maybe that is manageable through always saying ‘it seems to me that..’ or ‘my naive impression is..’, but it sounds like a mess.

And if you only use these beliefs on unimportant things, then you miss out on a lot of the updating you were hoping for from letting your strong beliefs run into reality. You get some though, and maybe you just can’t do better than that, unless you want to be testing your whacky theories about cancer cures when you have cancer.

It also seems like you won’t get a lot of the social benefits of seeming confident, if you still don’t actually believe strongly in the really confident things, and have to constantly disclaim them.

But I think I actually object because beliefs are for true things, damnit. If your evidence suggests something isn’t true, then you shouldn’t be ‘believing’ it. And also, if you know your evidence suggests a thing isn’t true, how are you even going to go about ‘believing it’? I don’t know how to.

2. Maintain separate ‘beliefs’ and ‘impressions’. This is like 1, except impressions are just claims about how things seem to you. e.g. ‘It seems to me that vitamin C cures cancer, but I believe that that isn’t true somehow, since a lot of more informed people disagree with my impression.’ This seems like a great distinction in general, but it seems a bit different from what one wants here. I think of this as a distinction between the evidence that you received, and the total evidence available to humanity, or perhaps between what is arrived at by your own reasoning about everyone’s evidence vs. your own reasoning about what to make of everyone else’s reasoning about everyone’s evidence. However these are about ways of getting a belief, and I think what you want here is actually just some beliefs that can be got in any way. Also, why would you act confidently on your impressions, if you thought they didn’t account for others’ evidence, say? Why would you act on them at all?

3. Confidently assert precise but highly uncertain probability distributions: “We should work so hard on this, because it has like a 0.03% chance of reshaping 0.5% of the world, making it a 99.97th percentile intervention in the distribution we are drawing from, so we shouldn’t expect to see something this good again for fifty-seven months.” This may solve a lot of problems, and I like it, but it is tricky.

4. Just do the research so you can have strong views. To do this across the board seems prohibitively expensive, given how much research it seems to take to be almost as uncertain as you were on many topics of interest.

5. Focus on acting well rather than your effects on the world. Instead of trying to act decisively on a 1% chance of this intervention actually bringing about the desired result, try to act decisively on a 95% chance that this is the correct intervention (given your reasoning suggesting that it has a 1% chance of working out). I’m told this is related to Stoicism.

6. ‘Opinions’
I notice that people often have ‘opinions’, which they are not very careful to make true, and do not seem to straightforwardly expect to be true. This seems to be commonly understood by rationally inclined people as some sort of failure, but I could imagine it being another solution, perhaps along the lines of 1.

(I think there are others around, but I forget them.)

III. Stances

I propose an alternative solution. Suppose you might want to say something like, ‘groups of more than five people at parties are bad’, but you can’t because you don’t really know, and you have only seen a small number of parties in a very limited social milieu, and a lot of things are going on, and you are a congenitally uncertain person. Then instead say, ‘I deem groups of more than five people at parties bad’. What exactly do I mean by this? Instead of making a claim about the value of large groups at parties, make a policy choice about what to treat as the value of large groups at parties. You are adding a new variable ‘deemed large group goodness’ between your highly uncertain beliefs and your actions. I’ll call this a ‘stance’. (I expect it isn’t quite clear what I mean by a ‘stance’ yet, but I’ll elaborate soon.) My proposal: to be ‘confident’ in the way that one might be from having strong beliefs, focus on having strong stances rather than strong beliefs.

Strong stances have many of the benefits of confident beliefs. With your new stance on large groups, when you are choosing whether to arrange chairs and snacks to discourage large groups, you skip over your uncertain beliefs and go straight to your stance. And since you decided it, it is certain, and you can rearrange chairs with the vigor and single-mindedness of a person who knows where they stand. You can confidently declare your opposition to large groups, and unite followers in a broader crusade against giant circles. And if at the ensuing party people form a large group anyway and seem to be really enjoying it, you will hopefully notice this the way you wouldn’t if you were merely uncertain-leaning-against regarding the value of large groups.

That might have been confusing, since I don’t know of good words to describe the type of mental attitude I’m proposing. Here are some things I don’t mean by ‘I deem large group conversations to be bad’:

  1. “Large group conversations are bad” (i.e. this is not about what is true, though it is related to that.)
  2. “I declare the truth to be ‘large group conversations are bad’” (i.e. this is not of a kind with beliefs. It is not directly about what is true about the world, or empirically observed, though it is influenced by these things. I do not have power over the truth.)
  3. “I don’t like large group conversations”, or “I notice that I act in opposition to large group conversations” (i.e. this is not a claim about my own feelings or inclinations, which would still be a passive observation about the world)
  4. “The decision-theoretically optimal value to assign to large groups forming at parties is negative”, or “I estimate that the decision-theoretically optimal policy on large groups is opposition” (i.e. it is a choice, not an attempt to estimate a hidden feature of the world.)
  5. “I commit to stopping large group conversations” (i.e. It is not a commitment, or directly claiming anything about my future actions.)
  6. “I observe that I consistently seek to avert large group conversations” (this would be an observation about a consistency in my behavior, whereas here the point is to make a new thing (assign a value to a new variable?) that my future behavior may consistently make use of, if I want.)
  7. “I intend to stop some large group conversations” (perhaps this one is closest so far, but a stance isn’t saying anything about the future or about actions—if it doesn’t get changed by the future, and then in future I want to take an action, I’ll probably call on it, but it isn’t ‘about’ that.)

Perhaps what I mean is most like: ‘I have a policy of evaluating large group discussions at parties as bad’, though using ‘policy’ as a choice about an abstract variable that might apply to action, but not in the sense of a commitment.

What is going on here more generally? You are adding a new kind of abstract variable between beliefs and actions. A stance can be a bit like a policy choice on what you will treat as true, or on how you will evaluate something. Or it can also be its own abstract thing that doesn’t directly mean anything understandable in terms of the beliefs or actions nearby.

Some ideas we already use that are pretty close to stances are ‘X is my priority’, ‘I am in the dating market’, and arguably, ‘I am opposed to dachshunds’. X being your priority is heavily influenced by your understanding of the consequences of X and its alternatives, but it is your choice, and it is not dishonest to prioritize a thing that is not important. To prioritize X isn’t a claim about the facts relevant to whether one would want to prioritize it. Prioritizing X also isn’t a commitment regarding your actions, though the purpose of having a ‘priority’ is for it to affect your actions. Your ‘priority’ is a kind of abstract variable added to your mental landscape to collect up a bunch of reasoning about the merits of different things, and package them for easy use in decisions.

Another way of looking at this is as a way of formalizing and concretifying the step where you look at your uncertain beliefs and then decide on a tentative answer and then run with it.

One can be confident in stances, because a stance is a choice, not a guess at a fact about the world. (Though my stance may contain uncertainty if I want, e.g. I could take a stance that large groups have a 75% chance of being bad on average.) So while my beliefs on a topic may be quite uncertain, my stance can be strong, in a sense that does some of the work we wanted from strong beliefs. Nonetheless, since stances are connected with facts and values, my stance can be wrong in the sense of not being the stance I should want to have, on further consideration.

In sum, stances:

  1. Are inputs to decisions in the place of some beliefs and values
  2. Integrate those beliefs and values—to the extent that you want them to be—into a single reusable statement
  3. Can be thought of as something like ‘policies’ on what will be treated as the truth (e.g. ‘I deem large groups bad’) or as new abstract variables between the truth and action (e.g. ‘I am prioritizing sleep’)
  4. Are chosen by you, not implied by your epistemic situation (until some spoilsport comes up with a theory of optimal behavior)
  5. therefore don’t permit uncertainty in one sense, and don’t require it in another (you know what your stance is, and your stance can be ‘X is bad’ rather than ‘X is 72% likely to be bad’), though you should be uncertain about how much you will like your stance on further reflection.
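
To make the shape of this concrete, here is a minimal sketch in Python (my own toy illustration, not anything from the original post; names like Stance and arrange_party are invented): uncertain beliefs feed into a chosen stance, and decisions consult the stance rather than re-deriving everything from the raw probabilities.

  from dataclasses import dataclass, field

  @dataclass
  class Stance:
      # A chosen policy variable sitting between uncertain beliefs and action.
      statement: str                                  # e.g. "large groups at parties are bad"
      rationale: dict = field(default_factory=dict)   # the uncertain beliefs it was distilled from

      def revise(self, new_evidence: str, new_statement: str) -> "Stance":
          # Stances are chosen, so revising one is an explicit decision, not an automatic update.
          return Stance(new_statement, {**self.rationale, "later evidence": new_evidence})

  # Uncertain beliefs: nothing here is confident enough to act on wholeheartedly.
  beliefs = {
      "p(large groups are bad | my small party sample)": 0.6,
      "p(my social milieu is representative)": 0.3,
  }

  # The stance: a decision about what to treat as the value of large groups.
  large_groups = Stance("groups of more than five people at parties are bad", beliefs)

  def arrange_party(stance: Stance) -> str:
      # Decisions read the stance, not the raw probability estimates.
      if "bad" in stance.statement:
          return "small clusters of chairs, snacks spread around the room"
      return "one big table"

  print(arrange_party(large_groups))

  # If the ensuing party goes badly for the stance (everyone loves the twelve-person
  # circle), that is a prompt to explicitly revise it:
  large_groups = large_groups.revise("people enjoyed a large circle",
                                     "large groups at parties are fine")

The point of the sketch is only the shape: the probabilities stay uncertain, the stance is a single reusable choice, and changing it is itself a decision rather than something the evidence forces on you line by line.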

I have found having stances somewhat useful, or at least entertaining, in the short time I have been trying to have them, but it is more of a speculative suggestion, with no other evidence behind it, than trustworthy advice.



Discuss

The Principled Intelligence Hypothesis

October 21, 2019 - 16:40
Published on October 21, 2019 1:40 PM UTC

I have been reading the thought-provoking Elephant in the Brain, and will probably have more to say on it later. But if I understand correctly, a dominant theory of how humans came to be so smart is that they have been in an endless cat-and-mouse game with themselves, making norms and punishing violations on the one hand, and cleverly cheating their own norms and excusing themselves on the other (the ‘Social Brain Hypothesis’ or ‘Machiavellian Intelligence Hypothesis’). Intelligence purportedly evolved to get ourselves off the hook, and our ability to construct rocket ships and proofs about large prime numbers is just a lucky side product.

As a person who is both unusually smart, and who spent the last half hour wishing the seatbelt sign would go off so they could permissibly use the restroom, I feel like there is some tension between this theory and reality. I’m not the only unusually smart person who hates breaking rules, who wishes there were more rules telling them what to do, who incessantly makes up rules for themselves, who intentionally steers clear of borderline cases because it would be so annoying to think about, and who wishes the nominal rules were policed predictably and actually reflected expected behavior. This is a whole stereotype of person.

But if intelligence evolved for the prime purpose of evading rules, shouldn’t the smartest people be best at navigating rule evasion? Or at least reliably non-terrible at it? Shouldn’t they be the most delighted to find themselves in situations where the rules were ambiguous and the real situation didn’t match the claimed rules? Shouldn’t the people who are best at making rocket ships and proofs also be the best at making excuses and calculatedly risky norm-violations? Why is there this stereotype that the more you can make rocket ships, the more likely you are to break down crying if the social rules about when and how you are allowed to make rocket ships are ambiguous?

It could be that these nerds are rare, yet salient for some reason. Maybe such people are funny, not representative. Maybe the smartest people are actually savvy. I’m told that there is at least a positive correlation between social skills and other intellectual skills.

I offer a different theory. If the human brain grew out of an endless cat and mouse game, what if the thing we traditionally think of as ‘intelligence’ grew out of being the cat, not the mouse?

The skill it takes to apply abstract theories across a range of domains and to notice places where reality doesn’t fit sounds very much like policing norms, not breaking them. The love of consistency that fuels unifying theories sounds a lot like the one that insists on fair application of laws, and social codes that can apply in every circumstance. Math is basically just the construction of a bunch of rules, and then endless speculation about what they imply. A major object of science is even called discovering ‘the laws of nature’.

Rules need to generalize across a lot of situations—you will have a terrible time as rule-enforcer if you see every situation as having new, ad-hoc appropriate behavior. We wouldn’t even call this having a ‘rule’. But more to the point, when people bring you their excuses, if your rule doesn’t already imply an immovable position on every case you have never imagined, then you are open to accepting excuses. So you need to see the one law manifest everywhere. I posit that technical intelligence comes from the drive to make these generalizations, not the drive to thwart them.

On this theory, probably some other aspects of human skill are for evading norms. For instance, perhaps social or emotional intelligence (I hear these are things, but will not pretend to know much about them). If norm-policing and norm-evading are somewhat different activities, we might expect to have at least two systems that are engorged by this endless struggle.

I think this would solve another problem: if we came to have intelligence for cheating each other, it is unclear why general intelligence per se is the answer to this, but not to other problems we have ever had as animals. Why did we get mental skills this time rather than earlier? Like that time we were competing over eating all the plants, or escaping predators better than our cousins? This isn’t the only time that a species was in fierce competition against itself for something. In fact that has been happening forever. Why didn’t we develop intelligence to compete against each other for food, back when we lived in the sea? If the theory is just ‘there was strong competitive pressure for something that will help us win, so out came intelligence’, I think there is a lot left unexplained. Especially since the thing we most want to explain is the spaceship stuff, which on this theory is a random side effect anyway. (Note: I may be misunderstanding the usual theory, as a result of knowing almost nothing about it.)

I think this Principled Intelligence Hypothesis does better. Tracking general principles and spotting deviations from them is close to what scientific intelligence is, so if we were competing to do this (against people seeking to thwart us) it would make sense that we ended up with good theory-generalizing and deviation-spotting engines.

On the other hand, I think there are several reasons to doubt this theory, or details to resolve. For instance, while we are being unnecessarily norm-abiding and going with anecdotal evidence, I think I am actually pretty great at making up excuses, if I do say so. And I feel like this rests on the same skill as ‘analogizing one thing to another’ (my being here to hide from a party could just as well be interpreted as my being here to look for the drinks, much as the economy could also be interpreted as a kind of nervous system), which seems like it is quite similar to the skill of making up scientific theories (these five observations being true is much like theory X applying in general), though arguably not the skill of making up scientific theories well. So this is evidence against smart people being bad at norm evasion in general, and against norm evasion being a different kind of skill to norm enforcement, which is about generalizing across circumstances.

Some other outside view evidence against this theory’s correctness is that my friends all think it is wrong, and I know nothing about the relevant literature. I think it could also do with some inside view details – for instance, how exactly does any creature ever benefit from enforcing norms well? Isn’t it a bit of a tragedy of the commons? If norm evasion and norm policing skills vary in a population of agents, what happens over time? But I thought I’d tell you my rough thoughts, before I set this aside and fail to look into any of those details for the indefinite future.



Discuss

Strengthening the foundations under the Overton Window without moving it

October 21, 2019 - 16:40
Published on October 21, 2019 1:40 PM UTC

As I understand them, the social rules for interacting with people you disagree with are like this:

  • You should argue with people who are a bit wrong
  • You should refuse to argue with people who are very wrong, because it makes them seem more plausibly right to onlookers

I think this has some downsides.

Suppose there is some incredibly terrible view, V. It is not an obscure view: suppose it is one of those things that most people believed two hundred years ago, but that is now considered completely unacceptable.

New humans are born and grow up. They are never acquainted with any good arguments for rejecting V, because nobody ever explains in public why it is wrong. They just say that it is unacceptable, and you would have to be a complete loser who is also the Devil to not see that.

Since it took the whole of humanity thousands of years to reject V, even if these new humans are especially smart and moral, they probably do not each have the resources to personally out-reason the whole of civilization for thousands of years. So some of them reject V anyway, because they do whatever society around them says is good person behavior. But some of the ones who rely more on their own assessment of arguments do not.

This is bad, not just because it leads to an unnecessarily high rate of people believing V, but because the very people who usually help get us out of believing stupid things – the ones who think about issues, and interrogate the arguments, instead of adopting whatever views they are handed – are being deprived of the evidence that would let them believe even the good things we already know.

In short: we don’t want to give the new generation the best sincere arguments against V, because that would be admitting that a reasonable person might believe V. Which seems to get in the way of the claim that V is very, very bad. Which is not only a true claim, but an important thing to claim, because it discourages people from believing V.

But we actually know that a reasonable person might believe V, if they don’t have access to society’s best collective thoughts on it. Because we have a whole history of this happening almost all of the time. On the upside, this does not actually mean that V isn’t very, very bad. Just that your standard non-terrible humans can believe very, very bad things sometimes, as we have seen.

So this all sounds kind of like the error where you refuse to go to the gym because it would mean admitting that you are not already incredibly ripped.

But what is the alternative? Even if losing popular understanding of the reasons for rejecting V is a downside, doesn’t it avoid the worse fate of making V acceptable by engaging people who believe it?

Well, note that the social rules were kind of self-fulfilling. If the norm is that you only argue with people who are a bit wrong, then indeed if you argue with a very wrong person, people will infer that they are only a bit wrong. But if instead we had norms that said you should argue with people who are very wrong, then arguing with someone who was very wrong would not make them look only a bit wrong.

I do think the second norm wouldn’t be that stable. Even if we started out like that, we would probably get pushed to the equilibrium we are in, because for various reasons people are somewhat more likely to argue with people who are only a bit wrong, even before any signaling considerations come into play. Which makes arguing some evidence that you don’t think the person is too wrong. And once it is some evidence, then arguing makes it look a bit more like you think a person might be right. And then the people who loathe to look a bit more like that drop out of the debate, and so it becomes stronger evidence. And so on.

Which is to say, engaging V-believers does not intrinsically make V more acceptable. But society currently interprets it as a message of support for V. There are some weak intrinsic reasons to take this as a signal of support, which get magnified into it being a strong signal.

My weak guess is that this signal could still be overwhelmed by e.g. constructing some stronger reason to doubt that the message is one of support.

For instance, if many people agreed that there were problems with avoiding all serious debate around V, and accepted that it was socially valuable to sometimes make genuine arguments against views that are terrible, then prefacing your engagement with a reference to this motive might go a long way. Because nobody who actually found V plausible would start with ‘Lovely to be here tonight. Please don’t take my engagement as a sign of support or validation—I am actually here because I think Bob’s ideas are some of the least worthy of support and validation in the world, and I try to do the occasional prophylactic ludicrous debate duty. How are we all this evening?’



Discuss

Are ethical asymmetries from property rights?

October 21, 2019 - 16:40
Published on October 21, 2019 1:40 PM UTC

These are some intuitions people often have:

  • You are not required to save a random person, but you are definitely not allowed to kill one
  • You are not required to create a person, but you are definitely not allowed to kill one
  • You are not required to create a happy person, but you are definitely not allowed to create a miserable one
  • You are not required to help a random person who will be in a dire situation otherwise, but you are definitely not allowed to put someone in a dire situation
  • You are not required to save a person in front of a runaway train, but you are definitely not allowed to push someone in front of a train. By extension, you are not required to save five people in front of a runaway train, and if you have to push someone in front of the train to do it, then you are not allowed.

Here are some more:

  • You are not strongly required to give me your bread, but you are not allowed to take mine
  • You are not strongly required to lend me your car, but you are not allowed to unilaterally borrow mine
  • You are not strongly required to send me money, but you are not allowed to take mine

The former are ethical intuitions. The latter are implications of a basic system of property rights. Yet they seem very similar. The ethical intuitions seem to just be property rights as applied to lives and welfare. Your life is your property. I’m not allowed to take it, but I’m not obliged to give it to you if you don’t by default have it. Your welfare is your property. I’m not allowed to lessen what you have, but I don’t have to give you more of it.

[Edited to add: A basic system of property rights means assigning each thing to a person, who is then allowed to decide what happens to that thing. This gives rise to asymmetry because taking another person’s things is not allowed (since they are in charge of them, not you), but giving them more things is neutral (since you are in charge of your things and can do what you like with them).]

My guess is that these ethical asymmetries—which are confusing, because they defy consequentialism—are part of the mental equipment we have for upholding property rights.

In particular these well-known asymmetries seem to be explained well by property rights:

  • The act-omission distinction naturally arises where an act would involve taking someone else’s property (broadly construed—e.g. their life, their welfare), while an omission would merely fail to give them additional property (e.g. life that they are not by default going to have, additional welfare).
  • ‘The asymmetry’ between creating happy and miserable people is because to create a miserable person is to give that person something negative, which is to take away what they have, while creating a happy person is giving that person something extra.
  • Person-affecting views arise because birth gives someone a thing they don’t have, whereas death takes a thing from them.

Further evidence that these intuitive asymmetries are based on upholding property rights: we also have moral-feeling intuitions about more straightforward property rights. Stealing is wrong.

If I am right that we have these asymmetrical ethical intuitions as part of a scheme to uphold property rights, what would that imply?

It might imply something about when we want to uphold them, or consider them part of ethics, beyond their instrumental value. Property rights at least appear to be a system for people with diverse goals to coordinate use of scarce resources—which is to say, to somehow use the resources with low levels of conflict and destruction. They do not appear to be a system for people to achieve specific goals, e.g. whatever is actually good. Unless what is good is exactly the smooth sharing of resources.

I’m not actually sure what to make of that—should we write off some moral intuitions as clearly evolved for not-actually-moral reasons and just reason about the consequentialist value of upholding property rights? If we have the moral intuition, does that make the thing of moral value, regardless of its origins? Is pragmatic rules for social cohesion all that ethics is anyway? Questions for another time perhaps (when we are sorting out meta-ethics anyway).

A more straightforward implication is for how we try to explain these ethical asymmetries. If we have an intuition about an asymmetry which stems from upholding property rights, it would seem to be a mistake to treat it as evidence about an asymmetry in consequences, e.g. in value accruing to a person. For instance, perhaps I feel that I am not obliged to create a life, by having a child. Then—if I suppose that my intuitions are about producing goodness—I might think that creating a life is of neutral value, or is of no value to the created child. When in fact the intuition exists because allocating things to owners is a useful way to avoid social conflict. That intuition is part of a structure that is known to be agnostic about benefits to people from me giving them my stuff. If I’m right that these intuitions come from upholding property rights, this seems like an error that is actually happening.



Discuss

Replacing expensive costly signals

October 21, 2019 - 16:40
Published on October 21, 2019 1:40 PM UTC

I feel like there is a general problem where people signal something using some extremely socially destructive method, and we can conceive of more socially efficient ways to send the same signal, but trying out alternative signals suggests that you might be especially bad at the traditional one. For instance, an employer might reasonably suspect that a job candidate who did a strange online course instead of normal university would have done especially badly at normal university.

Here is a proposed solution. Let X be the traditional signal, Y be the new signal, and Z be the trait(s) being advertised by both. Let people continue doing X, but subsidize Y on top of X for people with very high Z. Soon Y is a signal of higher Z than X is, and understood by the recipients of the signals to be a better indicator. People who can’t afford to do both should then prefer Y to X, since Y is a stronger signal, and since it is more socially efficient it is likely to be less costly for the signal senders.
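
As a toy numerical illustration of why this should work (my own sketch with made-up numbers, not something from the post): if everyone still sends X but Y is subsidized only for people with unusually high Z, then the average Z among Y-senders is mechanically higher than among X-senders, so an observer comparing the two groups will read Y as the stronger signal.

  import random
  from statistics import mean

  random.seed(0)

  def simulate(n=10_000, subsidy_threshold=1.5):
      # Toy model: Z is a normally distributed trait; everyone sends the
      # traditional signal X; Y is subsidized, on top of X, only for high-Z people.
      z_values = [random.gauss(0, 1) for _ in range(n)]
      x_senders = z_values                                        # everyone does X
      y_senders = [z for z in z_values if z > subsidy_threshold]  # only subsidized high-Z people also do Y
      return mean(x_senders), mean(y_senders)

  mean_z_given_x, mean_z_given_y = simulate()
  print(f"average Z among X-senders: {mean_z_given_x:+.2f}")
  print(f"average Z among Y-senders: {mean_z_given_y:+.2f}")
  # Y-senders look far better on average, so once the recipients of the signals
  # notice this, Y reads as the stronger signal, and anyone who can only afford
  # one of the two has a reason to pick the cheaper, more efficient Y.

None of the particular numbers matter; the mechanism is just selection on who gets the subsidy.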

If Y is intrinsically no better a signal than X (without your artificially subsidizing great Z-possessors to send it) then in the long run Y might only end up as strong a sign as X, but in the process, many should have moved to using Y instead.

(A possible downside is that people may end up just doing both forever.)

For example, if you developed a psychometric and intellectual test that only took half a day and predicted very well how someone would do in an MIT undergraduate degree, you could run it for a while for people who actually do MIT undergraduate degrees, offering prizes for high performance, or just subsidizing taking it at all. After the best MIT graduates say on their CVs for a while that they also did well on this thing and got a prize, it is hopefully an established metric, and an employer would as happily have someone with the degree as with a great result on your test. At which point an impressive and ambitious high school leaver would take the test, modulo e.g. concerns that the test doesn’t let you hang out with other MIT undergraduates for four years.

I don’t know if this is the kind of problem people actually have with replacing apparently wasteful signaling systems with better things. Or if this doesn’t actually work after thinking about it for more than an hour. But just in case.



Discuss

The fundamental complementarity of consciousness and work

October 21, 2019 - 16:40
Published on October 21, 2019 1:40 PM UTC

Matter can experience things. For instance, when it is a person. Matter can also do work, and thereby provide value to the matter that can experience things. For instance, when it is a machine. Or also, when it is a person.

An important question for what the future looks like, is whether it is more efficient to carry out these functions separately or together.

If separately, then perhaps it is best that we end up with a huge pile of unconscious machinery, doing all the work to support and please a separate collection of matter specializing in being pleased.

If together, then we probably end up with the value being had by the entities doing the work.

I think we see people assuming that it is more efficient to separate the activities of producing and consuming value. For instance, that the entities whose experiences matter in the future will ideally live a life of leisure. And that lab grown meat is a better goal than humane farming.

Which seems plausible. It is at least in line with the general observation that more efficient systems seem to be specialized.

However I think this isn’t obvious. Some reasons we might expect working and benefiting from work to be done by overlapping systems:

  • We don’t know which systems are conscious. It might be that highly efficient work systems tend to be unavoidably conscious. In which case, making their experience good rather than bad could be a relatively cheap way to improve the overall value of the world.
  • For humans, doing purposeful activities is satisfying, so much so that there are concerns about how humans will cope when they are replaced by machines. It might be hard for humans to avoid being replaced, since they are probably much less efficient than other possible machines. But if doing useful things tends to be gratifying for creatures—or for the kinds of creatures we decide are good to have—then it is less obvious that highly efficient creatures won’t be better off doing work themselves, rather than being separate from it.
  • Consciousness is presumably cheap and useful for getting something done, since we evolved to have it.
  • Efficient production doesn’t seem to evolve to be entirely specialized, especially if we take an abstract view of ‘production’. For instance, it is helpful to produce the experience of being a sports star alongside the joy of going to sports games.
  • Specialization seems especially helpful if keeping track of things is expensive. However, technology will make that cheaper, so perhaps the world will tend less toward specialization than it currently seems. For instance, you would prefer to plant an entire field of one vegetable rather than a mixture, because then when you harvest them, you can do it quickly without sorting them. But if sorting them is basically immediate and free, you might prefer to plant the mixture. For instance, if they take different nutrients from the soil, or if one wards off insects that would eat the other.


Discuss

Realistic thought experiments

October 21, 2019 - 16:40
Published on October 21, 2019 1:40 PM UTC

What if…

…after you died, you would be transported back and forth in time and get to be each of the other people who ever lived, one at a time, but with no recollection of your other lives?

…you had lived your entire life once already, and got to the end and achieved disappointingly few of your goals, and had now been given the chance to go back and try one more time?

…you were invisible and nobody would ever notice you? What if you were invisible and couldn’t even affect the world, except that you had complete control over a single human?

…you were the only person in the world, and you were responsible for the whole future, but luckily you had found a whole lot of useful robots which could expand your power, via for instance independently founding and running organizations for years without your supervision?

…you would only live for a second, before having your body taken over by someone else?

…there was a perfectly reasonable and good hypothetical being who knew about and judged all of your actions, hypothetically?

…everyone around you was naked under their clothes?

…in the future, many things that people around you asserted confidently would turn out to be false?

…the next year would automatically be composed of approximate copies of today?

…eternity would be composed of infinitely many exact copies of your life?

Added later:

…you just came into existence and got put into your present body—conveniently, with all the memories and skills of the body’s previous owner?

***

(Sometimes I or other people reframe the world for some philosophical or psychological purpose. These are the ones I can currently remember off the top of my head. Several are not original to me*. I’m curious to hear others.)

*Credits: #3 is from Plato and Joseph Carlsmith respectively. #5 is surely not original, but I can’t find its source easily. #7 is some kind of standard anti-social anxiety advice. #9 is from David Wong’s Cracked post on 5 ways you are sabotaging your own life (without even knowing it). #10 is old. #11 is from commenter Doug S, and elsewhere Nate Soares, and according to him is common advice on avoiding the Sunk Cost Fallacy.



Discuss

Personal relationships with goodness

October 21, 2019 - 16:40
Published on October 21, 2019 1:40 PM UTC

Many people seem to find themselves in a situation something like this:

  1. Good actions seem better than bad actions. Better actions seem better than worse actions.
  2. There seem to be many very good things to do—for instance, reducing global catastrophic risks, or saving children from malaria.
  3. Nonetheless, they continually do things that seem vastly less good, at least some of the time. For instance, just now I went and listened to a choir singing. You might also admire kittens, or play video games, or curl up in a ball, or watch a movie, or try to figure out whether the actress in the movie was the same one that you saw in a different movie. I’ll call this ‘indulgence’, though it is not quite the right category.

On the face of it, this is worrying. Why do you do the less good things? Is it because you prefer badness to goodness? Are you evil?

It would be nice to have some kind of a story about this. Especially if you are just going to keep on occasionally admiring kittens or whatever for years on end. I think people settle on different stories. These don’t have obviously different consequences, but I think they do have subtly different ones. Here are some stories I’m familiar with:

I’m not good: “My behavior is not directly related to goodness, and nor should it be”, “It would be good to do X, but I am not that good”, “Doing good things rather than bad things is generally supererogatory”

I think this one is popular. I find it hard to stomach, because if I am not good that seems like a serious problem. Plus, if goodness isn’t the guide to my actions, it seems like I’m going to need some sort of concept like schmoodness to determine which things I should do. Plus I just care about being good for some idiosyncratic reason. But it seems actually dangerous, because not treating goodness as a guide to one’s actions seems like it might affect one’s actions pretty negatively, beyond excusing a bit of kitten admiring or choir attendance.

In its favor, this story can help with ‘leaving a line of retreat’: maybe you can better think about what is good, honestly, if you aren’t going to be immediately compelled to do it. It also has the appealing benefit of not looking dishonest, hypocritical, or self-aggrandizing.

Goodness is hard: “I want to be good, but I fail due to weakness of will or some other mysterious force”

This one probably only matches one’s experience while actively trying to never indulge in anything, which seems rare as a long term strategy.

Indulgence is good: “I am good, but it is not psychologically sustainable to exist without admiring kittens. It really helps with productivity.” “I am good, and it is somehow important for me to admire kittens. I don’t know why, and it doesn’t sound that plausible, but I don’t expect anything good to happen if I investigate or challenge it”

This is nice, because you get to be good, and continue to pursue good things, and not feel endlessly bad about the indulgence.

It has the downside that it sounds a bit like an absurd rationalization—’of course I care about solving the most important problems, for instance, figuring out where the cutest kittens are on the internet’. Also, supposing that fruitless entertainments are indeed good, they are presumably only good in moderation, and so it is hard for observers to tell if you are doing too much, which will lead them to suspect that you are doing too much. Also, you probably can’t tell yourself if you are doing too much, and supposing that there is any kind of pressure to observe more kittens under the banner of ‘the best thing a person can do’, you might risk that happening.

I’m partly good; indulgence is part of compromise: “I am good, but I am a small part of my brain, and there are all these other pesky parts that are bad, and I’m reasonably compromising with them” “I have many parts, and at least one of them is good, and at least one of them wants to admire kittens.”

This has the upside of being arguably relatively accurate, and many of the downsides of the first story, but to a lesser degree.

Among these, there seems to be a basic conflict between being able to feel virtuous, and being able to feel honest and straightforward. Which I guess is what you get if you keep on doing apparently non-virtuous things. But given that stopping doing those things doesn’t seem to be a real option, I feel like it should be possible to have something close to both.

I am interested to hear about any other such accounts people might have heard of.

 



Discuss

Worth keeping

October 21, 2019 - 16:40
Published on October 21, 2019 1:40 PM UTC

(Epistemic status: quick speculation which matches my intuitions about how social things go, but which I hadn’t explicitly described before, and haven’t checked.)

If your car gets damaged, should you invest more or less in it going forward? It could go either way. The car needs more investment to be in good condition, so maybe you do that. But the car is worse than you thought, so maybe you start considering a new car, or putting your dollars into Uber instead.

If you are writing an essay and run into difficulty describing something, you can put in additional effort to find the right words, or you can suspect that this is not going to be a great essay, and either give up, or prepare to get it out quickly and imperfectly, worrying less about the other parts that don’t quite work.

When something has a problem, you always choose whether to double down with it or to back away.

(Or in the middle, to do a bit of both: to fix the car this time, but start to look around for other cars.)

I’m interested in this as it pertains to people. When a friend fails, do you move toward them—to hold them, talk to them, pick them up at your own expense—or do you edge away? It probably depends on the friend (and the problem). If someone embarrasses themselves in public, do you sully your own reputation to stand up for their worth? Or do you silently hope not to be associated with them? If they are dying, do you hold their hand, even if it destroys you? Or do you hope that someone else is doing that, and become someone they know less well?

Where a person fits on this line would seem to radically change their incentives around you. Someone firmly in your ‘worth keeping’ zone does better to let you see their problems than to hide them. Because you probably won’t give up on them, and you might help. Since everyone has problems, and they take effort to hide, this person is just a lot freer around you. If instead every problem hastens a person’s replacement, they should probably not only hide their problems, but also many of their other details, which are somehow entwined with problems.

(A related question is when you should let people know where they stand with you. Prima facie, it seems good to make sure people know when they are safe. But that means it also being clearer when a person is not safe, which has downsides.)

If there are better replacements in general, then you will be inclined to replace things more readily. If you can press a button to have a great new car appear, then you won’t have the same car for long.

The social analog is that in a community where friends are more replaceable—for instance, because everyone is extremely well selected to be similar on important axes—it should be harder to be close to anyone, or to feel safe and accepted. Even while everyone is unusually much on the same team, and unusually well suited to one another.



Discuss

Strong stances

21 октября, 2019 - 16:10
Published on October 21, 2019 1:10 PM UTC

I. The question of confidence

Should one hold strong opinions? Some say yes. Some say that while it’s hard to tell, it tentatively seems pretty bad (probably). There are many pragmatically great upsides, and a couple of arguably unconscionable downsides. But rather than judging the overall sign, I think a better question is, can we have the pros without the devastatingly terrible cons?

A quick review of purported or plausible pros:

  1. Strong opinions lend themselves to revision:
    1. Nothing will surprise you into updating your opinion if you thought that anything could happen. A perfect Bayesian might be able to deal with myriad subtle updates to vast uncertainties, but a human is more likely to notice a red cupcake if they have claimed that cupcakes are never red. (Arguably—some would say having opinions makes you less able to notice any threat to them. My guess is that this depends on topic and personality.)
    2. ‘Not having a strong opinion’ is often vaguer than having a flat probability distribution, in practice. That is, the uncertain person’s position is not, ‘there is a 51% chance that policy X is better than policy -X’, it is more like ‘I have no idea’. Which again doesn’t lend itself to attending to detailed evidence.
    3. Uncertainty breeds inaction, and it is harder to run into more evidence if you are waiting on the fence, than if you are out there making practical bets on one side or the other.
  2. (In a bitterly unfair twist of fate) being overconfident appears to help with things like running startups, or maybe all kinds of things.
    If you run a startup, common wisdom advises going around it saying things like, ‘Here is the dream! We are going to make it happen! It is going to change the world!’ instead of things like, ‘Here is a plausible dream! We are going to try to make it happen! In the unlikely case that we succeed at something recognizably similar to what we first had in mind, it isn’t inconceivable that it will change the world!’ Probably some of the value here is just a zero sum contest to misinform people into misinvesting in your dream instead of something more promising. But some is probably real value. Suppose Bob works full time at your startup either way. I expect he finds it easier to dedicate himself to the work and has a better time if you are more confident. It’s nice to follow leaders who stand for something, which tends to go with having at least some strong opinions. Even alone, it seems easier to work hard on a thing if you think it is likely to succeed. If being unrealistically optimistic just generates extra effort to be put toward your project’s success, rather than stealing time from something more promising, that is a big deal.
  3. Social competition
    Even if the benefits of overconfidence in running companies and such were all zero sum, everyone else is doing it, so what are you going to do? Fail? Only employ people willing to work at less promising looking companies? Similarly, if you go around being suitably cautious in your views, while other people are unreasonably confident, then onlookers who trust both of you will be more interested in what the other people are saying.
  4. Wholeheartedness
    It is nice to be the kind of person who knows where they stand and what they are doing, instead of always living in an intractable set of place-plan combinations. It arguably lends itself to energy and vigor. If you are unsure whether you should be going North or South, having reluctantly evaluated North as a bit better in expected value, for some reason you often still won’t power North at full speed. It’s hard to passionately be really confused and uncertain. (I don’t know if this is related, but it seems interesting to me that the human mind feels as though it lives in ‘the world’—this one concrete thing—though its epistemic position is in some sense most naturally seen as a probability distribution over many possibilities.)
  5. Creativity
    Perhaps this is the same point, but I expect my imagination for new options kicks in better when I think I’m in a particular situation than when I think I might be in any of five different situations (or worse, in any situation at all, with different ‘weightings’).

A quick review of the con:

  1. Pervasive dishonesty and/or disengagement from reality
    If the evidence hasn’t led you to a strong opinion, and you want to profess one anyway, you are going to have to somehow disengage your personal or social epistemic processes from reality. What are you going to do? Lie? Believe false things? These both seem so bad to me that I can’t consider them seriously. There is also this sub-con:

    1. Appearance of pervasive dishonesty and/or disengagement from reality
      Some people can tell that you are either lying or believing false things, due to your boldly claiming things in this uncertain world. They will then suspect your epistemic and moral fiber, and distrust everything you say.
  2. (There are probably others, but this seems like plenty for now.)

II. Tentative answers

Can we have some of these pros without giving up on honesty or being in touch with reality? Some ideas that come to mind or have been suggested to me by friends:

1. Maintain two types of ‘beliefs’. One set of play beliefs—confident, well understood, probably-wrong—for improving in the sandpits of tinkering and chatting, and one set of real beliefs—uncertain, deferential—for when it matters whether you are right. For instance, you might have some ‘beliefs’ about how cancer can be cured by vitamins that you chat about and ponder, and read journal articles to update, but when you actually get cancer, you follow the expert advice to lean heavily on chemotherapy. I think people naturally do this a bit, using words like ‘best guess’ and ‘working hypothesis’.

I don’t like this plan much, though admittedly I basically haven’t tried it. For your new fake beliefs, either you have to constantly disclaim them as fake, or you are again lying and potentially misleading people. Maybe that is manageable through always saying ‘it seems to me that..’ or ‘my naive impression is..’, but it sounds like a mess.

And if you only use these beliefs on unimportant things, then you miss out on a lot of the updating you were hoping for from letting your strong beliefs run into reality. You get some though, and maybe you just can’t do better than that, unless you want to be testing your whacky theories about cancer cures when you have cancer.

It also seems like you won’t get a lot of the social benefits of seeming confident, if you still don’t actually believe strongly in the really confident things, and have to constantly disclaim them.

But I think I actually object because beliefs are for true things, damnit. If your evidence suggests something isn’t true, then you shouldn’t be ‘believing’ it. And also, if you know your evidence suggests a thing isn’t true, how are you even going to go about ‘believing it’? I don’t know how to.

2. Maintain separate ‘beliefs’ and ‘impressions’. This is like 1, except impressions are just claims about how things seem to you. e.g. ‘It seems to me that vitamin C cures cancer, but I believe that that isn’t true somehow, since a lot of more informed people disagree with my impression.’ This seems like a great distinction in general, but it seems a bit different from what one wants here. I think of this as a distinction between the evidence that you received, and the total evidence available to humanity, or perhaps between what is arrived at by your own reasoning about everyone’s evidence vs. your own reasoning about what to make of everyone else’s reasoning about everyone’s evidence. However these are about ways of getting a belief, and I think what you want here is actually just some beliefs that can be got in any way. Also, why would you act confidently on your impressions, if you thought they didn’t account for others’ evidence, say? Why would you act on them at all?

3. Confidently assert precise but highly uncertain probability distributions “We should work so hard on this, because it has like a 0.03% chance of reshaping 0.5% of the world, making it a 99.97th percentile intervention in the distribution we are drawing from, so we shouldn’t expect to see something this good again for fifty-seven months.” This may solve a lot of problems, and I like it, but it is tricky.

4. Just do the research so you can have strong views. To do this across the board seems prohibitively expensive, given how much research it seems to take to be almost as uncertain as you were on many topics of interest.

5. Focus on acting well rather than your effects on the world. Instead of trying to act decisively on a 1% chance of this intervention actually bringing about the desired result, try to act decisively on a 95% chance that this is the correct intervention (given your reasoning suggesting that it has a 1% chance of working out). I’m told this is related to Stoicism.

6. ‘Opinions’
I notice that people often have ‘opinions’, which they are not very careful to make true, and do not seem to straightforwardly expect to be true. This seems to be commonly understood by rationally inclined people as some sort of failure, but I could imagine it being another solution, perhaps along the lines of 1.

(I think there are others around, but I forget them.)

III. Stances

I propose an alternative solution. Suppose you might want to say something like, ‘groups of more than five people at parties are bad’, but you can’t because you don’t really know, and you have only seen a small number of parties in a very limited social milieu, and a lot of things are going on, and you are a congenitally uncertain person. Then instead say, ‘I deem groups of more than five people at parties bad’. What exactly do I mean by this? Instead of making a claim about the value of large groups at parties, make a policy choice about what to treat as the value of large groups at parties. You are adding a new variable ‘deemed large group goodness’ between your highly uncertain beliefs and your actions. I’ll call this a ‘stance’. (I expect it isn’t quite clear what I mean by a ‘stance’ yet, but I’ll elaborate soon.) My proposal: to be ‘confident’ in the way that one might be from having strong beliefs, focus on having strong stances rather than strong beliefs.

Strong stances have many of the benefits of confident beliefs. With your new stance on large groups, when you are choosing whether to arrange chairs and snacks to discourage large groups, you skip over your uncertain beliefs and go straight to your stance. And since you decided it, it is certain, and you can rearrange chairs with the vigor and single-mindedness of a person who knowns where they stand. You can confidently declare your opposition to large groups, and unite followers in a broader crusade against giant circles. And if at the ensuing party people form a large group anyway and seem to be really enjoying it, you will hopefully notice this the way you wouldn’t if you were merely uncertain-leaning-against regarding the value of large groups.

That might have been confusing, since I don’t know of good words to describe the type of mental attitude I’m proposing. Here are some things I don’t mean by ‘I deem large group conversations to be bad’:

  1. “Large group conversations are bad” (i.e. this is not about what is true, though it is related to that.)
  2. “I declare the truth to be ‘large group conversations are bad’” (i.e. This is not of a kind with beliefs. Is not directly about what is true about the world, or empirically observed, though it is influenced by these things. I do not have power over the truth.)
  3. “I don’t like large group conversations”, or “I notice that I act in opposition to large group conversations” (i.e. is not a claim about my own feelings or inclinations, which would still be a passive observation about the world)
  4. “The decision-theoretically optimal value to assign to large groups forming at parties is negative”, or “I estimate that the decision-theoretically optimal policy on large groups is opposition” (i.e. it is a choice, not an attempt to estimate a hidden feature of the world.)
  5. “I commit to stopping large group conversations” (i.e. It is not a commitment, or directly claiming anything about my future actions.)
  6. “I observe that I consistently seek to avert large group conversations” (this would be an observation about a consistency in my behavior, whereas here the point is to make a new thing (assign a value to a new variable?) that my future behavior may consistently make use of, if I want.)
  7. “I intend to stop some large group conversations” (perhaps this one is closest so far, but a stance isn’t saying anything about the future or about actions—if it doesn’t get changed by the future, and then in future I want to take an action, I’ll probably call on it, but it isn’t ‘about’ that.)

Perhaps what I mean is most like: ‘I have a policy of evaluating large group discussions at parties as bad’, though using ‘policy’ as a choice about an abstract variable that might apply to action, but not in the sense of a commitment.

What is going on here more generally? You are adding a new kind of abstract variable between beliefs and actions. A stance can be a bit like a policy choice on what you will treat as true, or on how you will evaluate something. Or it can also be its own abstract thing that doesn’t directly mean anything understandable in terms of the beliefs or actions nearby.

Some ideas we already use that are pretty close to stances are ‘X is my priority’, ‘I am in the dating market’, and arguably, ‘I am opposed to dachshunds’. X being your priority is heavily influenced by your understanding of the consequences of X and its alternatives, but it is your choice, and it is not dishonest to prioritize a thing that is not important. To prioritize X isn’t a claim about the facts relevant to whether one would want to prioritize it. Prioritizing X also isn’t a commitment regarding your actions, though the purpose of having a ‘priority’ is for it to affect your actions. Your ‘priority’ is a kind of abstract variable added to your mental landscape to collect up a bunch of reasoning about the merits of different things, and package them for easy use in decisions.

Another way of looking at this is as a way of formalizing and concretifying the step where you look at your uncertain beliefs and then decide on a tentative answer and then run with it.

One can be confident in stances, because a stance is a choice, not a guess at a fact about the world. (Though my stance may contain uncertainty if I want, e.g. I could take a stance that large groups have a 75% chance of being bad on average.) So while my beliefs on a topic may be quite uncertain, my stance can be strong, in a sense that does some of the work we wanted from strong beliefs. Nonetheless, since stances are connected with facts and values, my stance can be wrong in the sense of not being the stance I should want to have, on further consideration.

In sum, stances:

  1. Are inputs to decisions in the place of some beliefs and values
  2. Integrate those beliefs and values—to the extent that you want them to be—into a single reusable statement
  3. Can be thought of as something like ‘policies’ on what will be treated as the truth (e.g. ‘I deem large groups bad’) or as new abstract variables between the truth and action (e.g. ‘I am prioritizing sleep’)
  4. Are chosen by you, not implied by your epistemic situation (until some spoilsport comes up with a theory of optimal behavior)
  5. Therefore don’t permit uncertainty in one sense, and don’t require it in another (you know what your stance is, and your stance can be ‘X is bad’ rather than ‘X is 72% likely to be bad’), though you should be uncertain about how much you will like your stance on further reflection.

I have found having stances somewhat useful, or at least entertaining, in the short time I have been trying them out, but this is more of a speculative suggestion than trustworthy advice, with no other evidence behind it.




The Principled Intelligence Hypothesis

October 21, 2019 - 16:10
Published on October 21, 2019 1:10 PM UTC

I have been reading the thought-provoking Elephant in the Brain, and will probably have more to say on it later. But if I understand correctly, a dominant theory of how humans came to be so smart is that they have been in an endless cat and mouse game with themselves, making norms and punishing violations on the one hand, and cleverly cheating their own norms and excusing themselves on the other (the ‘Social Brain Hypothesis’ or ‘Machiavellian Intelligence Hypothesis’). Intelligence purportedly evolved to get ourselves off the hook, and our ability to construct rocket ships and proofs about large prime numbers is just a lucky side product.

As a person who is both unusually smart, and who spent the last half hour wishing the seatbelt sign would go off so they could permissibly use the restroom, I feel like there is some tension between this theory and reality. I’m not the only unusually smart person who hates breaking rules, who wishes there were more rules telling them what to do, who incessantly makes up rules for themselves, who intentionally steers clear of borderline cases because it would be so annoying to think about, and who wishes the nominal rules were policed predictably and actually reflected expected behavior. This is a whole stereotype of person.

But if intelligence evolved for the prime purpose of evading rules, shouldn’t the smartest people be best at navigating rule evasion? Or at least reliably non-terrible at it? Shouldn’t they be the most delighted to find themselves in situations where the rules were ambiguous and the real situation didn’t match the claimed rules? Shouldn’t the people who are best at making rocket ships and proofs also be the best at making excuses and calculatedly risky norm-violations? Why is there this stereotype that the more you can make rocket ships, the more likely you are to break down crying if the social rules about when and how you are allowed to make rocket ships are ambiguous?

It could be that these nerds are rare, yet salient for some reason. Maybe such people are funny, not representative. Maybe the smartest people are actually savvy. I’m told that there is at least a positive correlation between social skills and other intellectual skills.

I offer a different theory. If the human brain grew out of an endless cat and mouse game, what if the thing we traditionally think of as ‘intelligence’ grew out of being the cat, not the mouse?

The skill it takes to apply abstract theories across a range of domains and to notice places where reality doesn’t fit sounds very much like policing norms, not breaking them. The love of consistency that fuels unifying theories sounds a lot like the one that insists on fair application of laws, and social codes that can apply in every circumstance. Math is basically just the construction of a bunch of rules, and then endless speculation about what they imply. A major object of science is even called discovering ‘the laws of nature’.

Rules need to generalize across a lot of situations—you will have a terrible time as a rule-enforcer if you see every situation as calling for new, ad-hoc appropriate behavior. We wouldn’t even call this having a ‘rule’. But more to the point, when people bring you their excuses, if your rule doesn’t already imply an immovable position on every case, including ones you have never imagined, then you are open to accepting excuses. So you need to see the one law manifest everywhere. I posit that technical intelligence comes from the drive to make these generalizations, not the drive to thwart them.

On this theory, probably some other aspects of human skill are for evading norms. For instance, perhaps social or emotional intelligence (I hear these are things, but will not pretend to know much about them). If norm-policing and norm-evading are somewhat different activities, we might expect to have at least two systems that are engorged by this endless struggle.

I think this would solve another problem: if we came to have intelligence for cheating each other, it is unclear why general intelligence per se is the answer to this, but not to other problems we have ever had as animals. Why did we get mental skills this time rather than earlier? Like that time we were competing over eating all the plants, or escaping predators better than our cousins? This isn’t the only time that a species was in fierce competition against itself for something. In fact that has been happening forever. Why didn’t we develop intelligence to compete against each other for food, back when we lived in the sea? If the theory is just ‘there was strong competitive pressure for something that will help us win, so out came intelligence’, I think there is a lot left unexplained. Especially since the thing we most want to explain is the spaceship stuff, which on this theory is a random side effect anyway. (Note: I may be misunderstanding the usual theory, as a result of knowing almost nothing about it.)

I think this Principled Intelligence Hypothesis does better. Tracking general principles and spotting deviations from them is close to what scientific intelligence is, so if we were competing to do this (against people seeking to thwart us) it would make sense that we ended up with good theory-generalizing and deviation-spotting engines.

On the other hand, I think there are several reasons to doubt this theory, or details to resolve. For instance, while we are being unnecessarily norm-abiding and going with anecdotal evidence, I think I am actually pretty great at making up excuses, if I do say so. And I feel like this rests on the same skill as ‘analogize one thing to another’ (my being here to hide from a party could just as well be interpreted as my being here to look for the drinks, much as the economy could also be interpreted as a kind of nervous system), which seems like it is quite similar to the skill of making up scientific theories (these five observations being true is much like theory X applying in general), though arguably not the skill of making up scientific theories well. So this is evidence against smart people being bad at norm evasion in general, and against norm evasion being a different kind of skill to norm enforcement, which is about generalizing across circumstances.

Some other outside view evidence against this theory’s correctness is that my friends all think it is wrong, and I know nothing about the relevant literature. I think it could also do with some inside view details – for instance, how exactly does any creature ever benefit from enforcing norms well? Isn’t it a bit of a tragedy of the commons? If norm evasion and norm policing skills vary in a population of agents, what happens over time? But I thought I’d tell you my rough thoughts, before I set this aside and fail to look into any of those details for the indefinite future.




Person-moment affecting views

October 21, 2019 - 16:10
Published on October 21, 2019 1:10 PM UTC

[Epistemic status: sloppy thoughts not informed by the literature. Hoping actual population ethicists might show up and correct me or point me to whoever has already thought about something like this better.]

Person-affecting views say that when you are summing up the value in different possible worlds, you should ignore people who only exist in one of those worlds. This is based on something like the following intuitions:

  1. World A can only be better than world B insofar as it is better for someone.
  2. World A can’t be better than world B for Alice, if Alice exists in world A but not world B.

A related question is one of personal identity: what makes Alice at one time or in one world the same person as Alice’ at another. The further-fact view says that after learning all physical facts about Alice and Alice’—such as whether Alice’ was the physical result of Alice waiting for five seconds, or is a brain upload of Alice, or is what came out of a replicating machine on Mars after Alice walked in on Earth, or remembers being Alice—there is still a further meaningful question of whether Alice and Alice’ are the same person.

I take the further-fact view to be wrong (or at least Derek Parfit does, and I think we agree the differences between Derek Parfit and me have been overstated). Thinking that the further-fact view is wrong seems to be a common position among intellectuals (e.g. 87% among philosophers).

If the further-fact view is wrong, then what we have is a whole lot of different person-moments, with various relationships to one another, which for pragmatic reasons we like to group into clusters called ‘people’. There are different ways we could define the people, and no real answer to which definition is right. This works out pretty well in our world, but you can imagine other worlds (or futures of our world) where the clusters are much more ambiguous, and different definitions of ‘person’ make a big difference, or where the concept is not actually useful.

Person-affecting views seem to make pretty central use of the concept ‘person’. If we don’t accept the further-fact view, and do want to accept a person-affecting view, what would that mean? I can think of several options:

  1. How good different worlds are depends strongly on which definition of ‘person’ you choose (which person moments you choose to cluster together), but this is a somewhat arbitrary pragmatic choice
  2. There is some correct definition of ‘person’ for the purpose of ethics (i.e. there is some relation between person moments that makes different person moments in the future ethically relevant by virtue of having that connection to a present person moment)
  3. Different person-moments are more or less closely connected in ways, and a person-affecting view should actually have a sliding scale of importance for different person-moments

Before considering these options, I want to revisit the second reason for adopting a person-affecting view: If Alice exists in world A and not in world B, then Alice can’t be made better off by world A existing rather than world B. Whether this premise is true seems to depend on how ‘a world being better for Alice’ works. Some things we might measure would go one way, and some would go the other. For instance, we could imagine it being analogous to:

  1. Alice painting more paintings. If Alice painted three paintings in world A, and doesn’t exist in world B, I think most people would say that Alice painted more paintings in world A than in world B. And more clearly, that world A has more paintings than world B, even if we insist that a world can’t have more paintings without somebody in particular having painted more paintings. Relatedly, there are many things people do where the sentence ‘If Alice didn’t exist, she wouldn’t have X’ seems true.
  2. Alice having painted more paintings per year. If Alice painted one painting every thirty years in world A, and didn’t exist in world B, in world B the number of paintings per year is undefined, and so incomparable to ‘one per thirty years’.

Suppose that person-affecting view advocates are right, and the worth of one’s life is more like 2). You just can’t compare the worth of Alice’s life in two worlds where she only exists in one of them. Then can you compare person-moments? What if the same ‘person’ exists in two possible worlds, but consists of different person-moments?

Compare world A and world C, which both contain Alice, but in world C Alice makes different choices as a teenager, and becomes a fighter pilot instead of a computer scientist. It turns out that she is not well suited to it, and finds piloting pretty unsatisfying. If Alice_t1A is different from Alice_t1C, can we say that world A is better than world C, in virtue of Alice’s experiences? Each relevant person-moment only exists in one of the worlds, so how can they benefit?

I see several possible responses:

  1. No we can’t. We should have person-moment affecting views.
  2. Things can’t be better or worse for person-moments, only for entire people, holistically across their lives, so the question is meaningless. (Or relatedly, how good a thing is for a person is not a function of how good it is for their person-moments, and it is how good it is for the person that matters).
  3. Yes, there is some difference between people and person moments, which means that person-moments can benefit without existing in worlds that they are benefitting relative to, but people cannot.

The second possibility seems to involve accepting the second view above: that there is some correct definition of ‘person’ that is larger than a person moment, and fundamental to ethics – something like the further-fact view. This sounds kind of bad to me. And the third view doesn’t seem very tempting without some idea of an actual difference between persons and person-moments.

So maybe the person-moment affecting view looks most promising. Let us review what it would have to look like. For one thing, the only comparable person moments are the ones that are the same. And since they are the same, there is no point bringing about one instead of the other. So there is never reason to bring about a person-moment for its own benefit. Which sounds like it might really limit the things that are worth intentionally doing. Isn’t making myself happy in three seconds just bringing about a happy person moment rather than a different sad person moment?

Is everything just equally good on this view? I don’t think so, as long as you are something like a preference utilitarian: person-moments can have preferences over other person-moments. Suppose that Alice_t0A and Alice_t0C are the same, and Alice_t1A and Alice_t1C are different. And suppose that Alice_t0 wants Alice_t1 to be a computer scientist. Then world A is better than world C for Alice_t0, and so better overall. That is, person-moments can benefit from things, as long as they don’t know at the time that they have benefited.
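
As a minimal sketch of that comparison (my own toy formalization, not anything from the post; the labels, the single preference, and the scoring are all illustrative), one can represent each world as a map from person-moment labels to facts, treat only person-moments that are literally identical in both worlds as comparable, and score each world by how many of those shared moments have their preferences satisfied in it:

    # Toy worlds: person-moment labels mapped to facts about them.
    world_A = {
        "Alice_t0": {"wants": ("Alice_t1", "career", "computer scientist")},
        "Alice_t1": {"career": "computer scientist"},
    }
    world_C = {
        "Alice_t0": {"wants": ("Alice_t1", "career", "computer scientist")},
        "Alice_t1": {"career": "fighter pilot"},
    }

    def score(world_x, world_y):
        """Count satisfied preferences, but only for person-moments that are
        literally the same in both worlds (the only comparable ones here)."""
        shared = {m for m in world_x.keys() & world_y.keys() if world_x[m] == world_y[m]}
        total = 0
        for moment in shared:
            want = world_x[moment].get("wants")
            if want is not None:
                target, attribute, value = want
                if world_x.get(target, {}).get(attribute) == value:
                    total += 1
        return total

    print(score(world_A, world_C), score(world_C, world_A))  # -> 1 0

On this toy accounting, world A beats world C only because Alice_t0 (who is identical in both worlds) has her preference about Alice_t1 satisfied in A; how Alice_t1 herself feels about piloting never enters directly.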

I think an interesting feature of this view is that all value seems to come from meddling preferences. It is never directly good that there is joy in the world, for instance; it is just good because somebody wants somebody else to experience joy, and that desire was satisfied. If they had instead wished for a future person-moment to be tortured, and this was granted, then this world would apparently be just as good.

So, things that are never directly valuable in this world:

  • Joy
  • Someone getting what they want and also knowing about it
  • Anything that isn’t a meddling preference

On the upside, since person-moments often care about future person-moments within the same person, we do perhaps get back to something closer to the original person-affecting view. There is often reason to bring about or benefit a person moment for the benefit of previous person moments in the history of the same person, who for instance wants to ‘live a long and happy life’. My guess after thinking about this very briefly is that in practice it would end up looking like the ‘moderate’ person-affecting views, in which people who currently exist get more weight than people who will be brought into existence, but not infinitely more weight. People who exist now mostly want to continue existing, and to have good lives in the future, and they care less, but some, about different people in the future.

So, if you want to accept a person-affecting view and not a further-fact view, the options seem to me to be something like these:

  1. Person-moments can benefit without having an otherworldly counterpart, even though people cannot. Which is to say, only person-moments that are part of the same ‘person’ in different worlds can benefit from their existence. ‘Person’ here is either an arbitrary pragmatic definition choice, or some more fundamental ethically relevant version of the concept that we could perhaps discover.
  2. Benefits accrue to persons, not person-moments. In particular, benefits to persons are not a function of the benefits to their constituent person-moments. Where ‘person’ is again either a somewhat arbitrary choice of definition, or a more fundamental concept.
  3. A sliding scale of ethical relevance of different person-moments, based on how narrow a definition of ‘person’ unites them with any currently existing person-moments. Along with some story about why, given that you can apparently compare all of them, you are still weighting some less, on grounds that they are incomparable.
  4. Person-moment affecting views

None of these sound very good to me, but nor do person-affecting views in general, so maybe I’m the wrong audience. I had thought person-moment affecting views were almost a reductio, but a close friend says he thought they were the obvious reasonable view, so I am curious to hear others’ takes.




Strengthening the foundations under the Overton Window without moving it

October 21, 2019 - 16:10
Published on October 21, 2019 1:10 PM UTC

As I understand them, the social rules for interacting with people you disagree with are like this:

  • You should argue with people who are a bit wrong
  • You should refuse to argue with people who are very wrong, because it makes them seem more plausibly right to onlookers

I think this has some downsides.

Suppose there is some incredibly terrible view, V. It is not an obscure view: suppose it is one of those things that most people believed two hundred years ago, but that is now considered completely unacceptable.

New humans are born and grow up. They are never acquainted with any good arguments for rejecting V, because nobody ever explains in public why it is wrong. They just say that it is unacceptable, and you would have to be a complete loser who is also the Devil to not see that.

Since it took the whole of humanity thousands of years to reject V, even if these new humans are especially smart and moral, they probably do not each have the resources to personally out-reason the whole of civilization for thousands of years. So some of them reject V anyway, because they do whatever society around them says is good person behavior. But some of the ones who rely more on their own assessment of arguments do not.

This is bad, not just because it leads to an unnecessarily high rate of people believing V, but because the very people who usually help get us out of believing stupid things – the ones who think about issues, and interrogate the arguments, instead of adopting whatever views they are handed – are being deprived of the evidence that would let them believe even the good things we already know.

In short: we don’t want to give the new generation the best sincere arguments against V, because that would be admitting that a reasonable person might believe V. Which seems to get in the way of the claim that V is very, very bad. Which is not only a true claim, but an important thing to claim, because it discourages people from believing V.

But we actually know that a reasonable person might believe V, if they don’t have access to society’s best collective thoughts on it. Because we have a whole history of this happening almost all of the time. On the upside, this does not actually mean that V isn’t very, very bad. Just that your standard non-terrible humans can believe very, very bad things sometimes, as we have seen.

So this all sounds kind of like the error where you refuse to go to the gym because it would mean admitting that you are not already incredibly ripped.

But what is the alternative? Even if losing popular understanding of the reasons for rejecting V is a downside, doesn’t it avoid the worse fate of making V acceptable by engaging people who believe it?

Well, note that the social rules were kind of self-fulfilling. If the norm is that you only argue with people who are a bit wrong, then indeed if you argue with a very wrong person, people will infer that they are only a bit wrong. But if instead we had norms that said you should argue with people who are very wrong, then arguing with someone who was very wrong would not make them look only a bit wrong.

I do think the second norm wouldn’t be that stable. Even if we started out like that, we would probably get pushed to the equilibrium we are in, because for various reasons people are somewhat more likely to argue with people who are only a bit wrong, even before any signaling considerations come into play. Which makes arguing some evidence that you don’t think the person is too wrong. And once it is some evidence, then arguing makes it look a bit more like you think a person might be right. And then the people who loathe to look a bit more like that drop out of the debate, and so it becomes stronger evidence. And so on.

Which is to say, engaging V-believers does not intrinsically make V more acceptable. But society currently interprets it as a message of support for V. There are some weak intrinsic reasons to take this as a signal of support, which get magnified into it being a strong signal.

My weak guess is that this signal could still be overwhelmed by e.g. constructing some stronger reason to doubt that the message is one of support.

For instance, if many people agreed that there were problems with avoiding all serious debate around V, and accepted that it was socially valuable to sometimes make genuine arguments against views that are terrible, then prefacing your engagement with a reference to this motive might go a long way. Because nobody who actually found V plausible would start with ‘Lovely to be here tonight. Please don’t take my engagement as a sign of support or validation—I am actually here because I think Bob’s ideas are some of the least worthy of support and validation in the world, and I try to do the occasional prophylactic ludicrous debate duty. How are we all this evening?’




Personal relationships with goodness

October 21, 2019 - 16:10
Published on October 21, 2019 1:10 PM UTC

Many people seem to find themselves in a situation something like this:

  1. Good actions seem better than bad actions. Better actions seem better than worse actions.
  2. There seem to be many very good things to do—for instance, reducing global catastrophic risks, or saving children from malaria.
  3. Nonetheless, they continually do things that seem vastly less good, at least some of the time. For instance, just now I went and listened to a choir singing. You might also admire kittens, or play video games, or curl up in a ball, or watch a movie, or try to figure out whether the actress in the movie was the same one that you saw in a different movie. I’ll call this ‘indulgence’, though it is not quite the right category.

On the face of it, this is worrying. Why do you do the less good things? Is it because you prefer badness to goodness? Are you evil?

It would be nice to have some kind of a story about this. Especially if you are just going to keep on occasionally admiring kittens or whatever for years on end. I think people settle on different stories. These don’t have obviously different consequences, but I think they do have subtly different ones. Here are some stories I’m familiar with:

I’m not good: “My behavior is not directly related to goodness, and nor should it be”, “It would be good to do X, but I am not that good”, “Doing good things rather than bad things is generally supererogatory”

I think this one is popular. I find it hard to stomach, because if I am not good that seems like a serious problem. Plus, if goodness isn’t the guide to my actions, it seems like I’m going to need some sort of concept like schmoodness to determine which things I should do. Plus I just care about being good for some idiosyncratic reason. But it seems actually dangerous, because not treating goodness as a guide to one’s actions seems like it might affect one’s actions pretty negatively, beyond excusing a bit of kitten admiring or choir attendance.

In its favor, this story can help with ‘leaving a line of retreat‘: maybe you can better think about what is good, honestly, if you aren’t going to be immediately compelled to do it. It also has the appealing benefit of not looking dishonest, hypocritical, or self-aggrandizing.

Goodness is hard: “I want to be good, but I fail due to weakness of will or some other mysterious force”

This one probably only matches one’s experience while actively trying to never indulge in anything, which seems rare as a long term strategy.

Indulgence is good: “I am good, but it is not psychologically sustainable to exist without admiring kittens. It really helps with productivity.” “I am good, and it is somehow important for me to admire kittens. I don’t know why, and it doesn’t sound that plausible, but I don’t expect anything good to happen if I investigate or challenge it”

This is nice, because you get to be good, and continue to pursue good things, and not feel endlessly bad about the indulgence.

It has the downside that it sounds a bit like an absurd rationalization—’of course I care about solving the most important problems, for instance, figuring out where the cutest kittens are on the internet’. Also, supposing that fruitless entertainments are indeed good, they are presumably only good in moderation, and so it is hard for observers to tell if you are doing too much, which will lead them to suspect that you are doing too much. Also, you probably can’t tell yourself if you are doing too much, and supposing that there is any kind of pressure to observe more kittens under the banner of ‘the best thing a person can do’, you might risk that happening.

I’m partly good; indulgence is part of compromise: “I am good, but I am a small part of my brain, and there are all these other pesky parts that are bad, and I’m reasonably compromising with them” “I have many parts, and at least one of them is good, and at least one of them wants to admire kittens.”

This has the upside of being arguably relatively accurate, and many of the downsides of the first story, but to a lesser degree.

Among these, there seems to be a basic conflict between being able to feel virtuous, and being able to feel honest and straightforward. Which I guess is what you get if you keep on doing apparently non-virtuous things. But given that stopping doing those things doesn’t seem to be a real option, I feel like it should be possible to have something close to both.

I am interested to hear about any other such accounts people might have heard of.

 



