LessWrong.com News
A community blog devoted to refining the art of rationality

A guide to Iterated Amplification & Debate

Published on November 15, 2020 5:14 PM GMT

This post is about two proposals for aligning AI systems in a scalable way:

  • Iterated Distillation and Amplification (often just called 'Iterated Amplification'), or IDA for short,[1] is a proposal by Paul Christiano.
  • Debate is an IDA-inspired proposal by Geoffrey Irving.

To understand this post, you should be familiar with the concept of outer alignment, and preferably with inner alignment as well. Roughly,

  • Outer alignment is aligning the training signal or training data we give to our model with what we want.
  • If the model we find implements its own optimization process, then inner alignment is aligning [the thing the model is optimizing for] with the training signal.

See also this post for an overview and this paper or my ELI12 edition for more details on inner alignment.

1. Motivation / Reframing AI Risk

Why do we need a fancy alignment scheme?

A few months back, there was some debate about whether the classical arguments for why AI is dangerous, of the kind made in Superintelligence, hold up to scrutiny. I think a charitable reading of the book can interpret it as primarily defending one claim, which is also an answer to the leading question. Namely,

  • It is hard to define a scalable training procedure that is not outer-misaligned.

For example, a language model (GPT-3 style) is outer-misaligned because the objective we train for is to predict the most likely next word, which says nothing about being 'useful' or 'friendly'. Similarly, a question-answering system trained with Reinforcement Learning is outer-misaligned because the objective we train for is 'optimize how much the human likes the answer', not 'optimize for a true and useful answer'.
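To make the second example concrete, here is a minimal sketch (with made-up helper names; nothing here is from the proposals themselves) of the gap between the proxy objective we can compute and the objective we actually care about:

    def human_rating(answer: str) -> float:
        """Hypothetical proxy signal: how much the rater likes the answer.
        This toy version rewards confident-sounding phrasing, nothing more."""
        return float(answer.count("certainly") - answer.count("I don't know"))

    def training_reward(answer: str) -> float:
        # The objective the RL system actually optimizes. Nothing in it
        # references truth or usefulness, which is the point of the claim
        # below: a pleasing falsehood scores at least as well as a true answer.
        return human_rating(answer)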

I'll refer to this claim as (∗). If (∗) is true, it is a problem even under the most optimistic assumptions. For example, we can suppose that

  1. progress is gradual all the way, and we can test everything before we deploy it;

  2. we are likely to maintain control of AI systems (and can turn them off whenever we want to) for a while after they exceed our capabilities;

  3. it takes at least another 50 years for AI to exceed human capabilities across a broad set of tasks.

Even then, (∗) remains a problem. The only way to end up with an outer-aligned AI system is to build one, and we can't do that if we don't know how.

In the past, people have given many examples of how outer alignment could fail (there are a lot of those in Superintelligence, and I've given two more above). But the primary reason to believe (∗) is that it has taken people a long time to come up with a formalized training scheme that is not clearly outer-misaligned. IDA and Debate are two such schemes.

Even if outer alignment works out, that alone is not sufficient. To solve the entire alignment problem (or even just Intent Alignment[2]), we would like to have confidence that an AI system is

  1. outer-aligned; and
  2. inner-aligned (or not using an inner optimizer); and
  3. training competitive; and
  4. performance-competitive.

Thus, IDA and Debate are a long way from having solved the entire problem, but the fact that they may be outer-aligned is reason to get excited, especially if you think the alignment problem is hard.

2. The Key Idea

Training AI systems requires a training signal. In some cases, this signal is easy to provide regardless of how capable the system is – for example, it is always easy to see whether a system has won a game of Go, even if the system plays at superhuman level. But most cases we care about are not of this form. For example, if an AI system makes long-term economic decisions, we only know how good the decisions are after they've been in place for years, and this is insufficient for a training signal.

In such cases, since we cannot wait to observe the full effects of a decision, any mechanism for a more rapid training signal has to involve exercising judgment to estimate how good the decisions are ahead of time. This is a problem once we assume that the system is more capable than we are.

To the rescue comes the following idea:

The AI system we train has to help us during training.

IDA and Debate provide two approaches to do this.

3. Iterated Distillation and Amplification

Before we begin, here are other possible resources to understand IDA:

This is Hannah.

Hannah, or H for short, is a pretty smart human. In particular, she can answer questions up to some level of competence.

As a first step in realizing IDA, we wish to distill Hannah's competence at this question-answering task into an AI system (or 'model') A1. We assume A1 will be slightly less competent than Hannah; therefore, Hannah can provide a safe training signal.

A1 may be trained by reinforcement learning or by supervised learning of any form.[3] The basic approach of IDA leaves the distillation step as a black box, so any implementation is fine, as long as the following is true:

  • Given an agent as input, we obtain a model that imitates the agent's behavior at some task but runs much faster.
  • The output model is only slightly less competent than the input agent at this task.
  • This process is alignment-preserving. In other words, if H is honest, then A1 should be honest as well.
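
Since distillation is left as a black box, here is a minimal sketch of just one way it could be filled in: supervised learning on behavior sampled from the slow agent. 'distill', 'train', and the type alias are hypothetical stand-ins, not part of the proposal:

    from typing import Callable, List, Tuple

    Agent = Callable[[str], str]  # an agent maps a question to an answer

    def distill(agent: Agent, questions: List[str],
                train: Callable[[List[Tuple[str, str]]], Agent]) -> Agent:
        # Sample (question, answer) pairs from the slow agent...
        dataset = [(q, agent(q)) for q in questions]
        # ...then fit a fast model to them. 'train' stands in for any
        # supervised-learning procedure meeting the three bullet points above.
        return train(dataset)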

If we applied A1 to the same question-answering task, it would perform worse:

However, A1 has vastly improved speed: it may answer questions in a few milliseconds that would have taken H several hours. This fact lets us boost performance through a step we call amplification:

In the general formulation of the IDA scheme, amplification is also a black box, but in this post, we consider the basic variant, which we call stock IDA. In stock IDA, amplification is realized by giving H access to the model A1. The idea is that this new 'agent' (consisting of H with access to A1) is more competent than Hannah is by herself.

If it is not obvious why, imagine you had access to a slightly dumber version of yourself that ran at 10,000 times your speed. Anytime you have a (sub)question that does not require your full intellect, you can delegate it to this slightly dumber version and obtain an answer at once. This allows you to effectively think for longer than you otherwise could.

Thus, we conjecture that this combined 'agent' has improved performance (compared to H) at the same question-answering task.
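As a sketch of what one stock-IDA amplification step could look like in code, where 'h_decompose' and 'h_combine' are hypothetical stand-ins for the two things Hannah does herself:

    from typing import Callable, List

    Agent = Callable[[str], str]

    def amplify(h_decompose: Callable[[str], List[str]],
                h_combine: Callable[[str, List[str], List[str]], str],
                model: Agent) -> Agent:
        """H with query access to the current model: she splits the question,
        lets the fast model answer the pieces, and combines the results."""
        def amplified(question: str) -> str:
            subquestions = h_decompose(question)            # H's own judgment
            sub_answers = [model(q) for q in subquestions]  # fast model calls
            return h_combine(question, subquestions, sub_answers)
        return amplified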

Here is a different way of describing what happened. Our combined 'agent' looks like this:

Since A1 tries to imitate H, we could think of Hannah as having access to an (imperfect) copy of herself. But since A1 thinks much faster than H, it is more accurate to view her as having access to many copies of herself, like so:

Where the gray circle means 'this is a model that tries to behave like the thing in the circle.'

At this point, we've covered one distillation and one amplification step. You might guess what happens next:

We train a new model A2 to imitate the agent [H with access to A1] on the question-answering task. Since [H with access to A1] is more competent than H, this means that A2 will be more competent than A1 (which was trained to imitate just H).

In this example, A2 is almost exactly as competent as H. This is a good time to mention that all of my performance numbers are made up – the three properties they're meant to convey are that

  • performance goes up in each amplification step; and
  • performance goes down in each distillation step; but
  • performance goes up in each (amplification step, distillation step) pair.

After each distillation step, we end up with some model Ak. While Ak was trained in a very particular way, it is nonetheless just a model, which can answer questions very quickly. Each Ak performs better than its predecessor Ak−1 without a loss of speed.

The next amplification step looks like this:

Note that, in each amplification step, we always give Hannah access to our newest model. The Ak's get better and better, but Hannah remains the same human.

This new 'agent' is again more competent at the question-answering task:

Now we could train a model A3 to imitate the behavior of [H with access to A2] on the question-answering task, which would then be less competent than the system above, but more competent than A2 (and in our case, more competent than H). It would still be a model and thus be extremely fast. Then, we could give Hannah access to A3, and so on.

One way to summarize this process is that we're trying to create a model that imitates the behavior of a human with access to itself. In particular, each model Ak imitates the behavior of [H with access to Ak−1]. Does this process top out at some point? It's conceivable (though by no means obvious) that it does not top out until Ak is superintelligent. If so, and if distillation and amplification are both alignment-preserving, our scheme would be both aligned and performance-competitive.
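Putting the two steps together, the overall loop might look like the following sketch, reusing the hypothetical 'distill' and 'amplify' helpers (and the 'Agent' alias) from the sketches above:

    def ida(h: Agent, h_decompose, h_combine, train, questions,
            n_rounds: int) -> Agent:
        """A sketch of the full Iterated Distillation and Amplification loop."""
        model = distill(h, questions, train)        # A1 imitates H alone
        for _ in range(n_rounds - 1):
            # [H with access to Ak] is more competent than Ak...
            amplified = amplify(h_decompose, h_combine, model)
            # ...and A(k+1) is distilled from it: faster, slightly weaker.
            model = distill(amplified, questions, train)
        return model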

Recall that our 'agent' [H with access to A2] now looks like this:

Since A2 tries to imitate [H with access to A1], we can alternatively depict this as

Once again, we draw more than one of these since A2 is much faster than [H with access to A1], so it is as if H had access to a lot of these, not just one. (Also not just three, but I only have that much space.)

Since each A1 tries to imitate H, we can depict this further like so:

Thus, insofar as the imitation step 'works' (i.e., insofar as we can ignore the circles), the resulting system will behave as if it were composed of Hannah consulting many copies of herself, each of which consults many copies of herself. This is after precisely four steps, i.e., distillation → amplification → distillation → amplification. You can guess how it would look if we did more steps.

The name 'Hannah' is a bit on-the-nose as her name starts with 'H', which also stands for 'human'. Thus, the tree above consists of a human consulting humans consulting humans consulting humans consulting humans consulting humans consulting humans...

We call the entire tree HCH,[4] which is a recursive acronym for Humans consulting HCH. Generally, HCH is considered to have infinite depth.
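In code, the ideal is a simple recursion. The sketch below uses the same hypothetical 'h_decompose'/'h_combine' stand-ins as before, plus a depth cutoff, since the ideal tree is infinite:

    from typing import Callable, List

    def hch(question: str,
            h_answer: Callable[[str], str],
            h_decompose: Callable[[str], List[str]],
            h_combine: Callable[[str, List[str], List[str]], str],
            depth: int) -> str:
        """Humans consulting HCH: a human who may pose subquestions to
        further copies of this very function."""
        if depth == 0:
            return h_answer(question)  # leaf: H answers unaided
        subqs = h_decompose(question)
        answers = [hch(q, h_answer, h_decompose, h_combine, depth - 1)
                   for q in subqs]
        return h_combine(question, subqs, answers)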

Note that there is an important caveat hidden in the clause 'insofar as the imitation step works'. In each distillation step, we are training a model to predict the answers of a system that thinks for much longer than itself. Thus, each Ak is only more competent than Ak−1 insofar as it is possible to solve problems in less time through better algorithms. There are strong reasons to believe that this is the case for a large class of tasks, but we know that it isn't possible for every task. For example, an HCH tree can play perfect chess (literally perfect, not just superhuman) by searching the entire chess tree.[5] A model trained by IDA cannot do the same.
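The chess claim (see footnote 5) is essentially this recursion applied to a game tree: each recursive call plays the role of a subtree exploring one move. Here is a sketch in which every argument is a hypothetical stand-in for the rules of the game:

    from typing import Callable, Iterable, Optional

    def game_value(state,
                   legal_moves: Callable[[object], Iterable[object]],
                   apply_move: Callable[[object, object], object],
                   terminal_value: Callable[[object], Optional[float]]) -> float:
        """Exhaustive (negamax-style) search: the value of a state is the best
        achievable value among all moves, each evaluated by a 'subtree'.
        Assumes every non-terminal state has at least one legal move."""
        v = terminal_value(state)
        if v is not None:
            return v  # game over at this node
        return max(-game_value(apply_move(state, m),
                               legal_moves, apply_move, terminal_value)
                   for m in legal_moves(state))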

In the aforementioned LessWrong sequence, the illustration for the Distillation → Amplification process looks like this:

Alternatively, if we consider all of the Ak's to be the same AI system that gets upgraded over time, we have the following (where r denotes a reward signal).

4. Factored Cognition

Informally, the Factored Cognition Hypothesis says that each question can be decomposed into easier subquestions such that the answer to the original question follows from the answer to the subquestions. Factored Cognition plays a crucial role for the applicability of both Debate and many instances of IDA.[6]

Here is an illustration, where the top block is a question, each layer below a block is a set of subquestions whose answers determine the top-level question, and darkness/size of the blocks corresponds to difficulty:

We might now hope that the absolute difficulties look something like this:

Where the lower part is meant to indicate that we can decompose all of the above questions such that they eventually bottom out in the lowest stripe of questions smart humans can answer in 15 minutes.

I see two ways to illustrate why Factored Cognition is important for stock IDA. One is the HCH picture – insofar as the imitations 'work', a model trained via stock IDA behaves just like a tree of humans consulting each other. Thus, if the model is supposed to be superintelligent, then we better hope that any question a superintelligent AI could answer can be recursively decomposed into subquestions, until we end up with something Hannah can answer by herself. (Otherwise, stock IDA may not be performance-competitive.) In other words, we better hope that the Factored Cognition Hypothesis holds.

Another way is to look at just one amplification step in the procedure. Suppose that we have successfully trained model A8, which is already smarter than H, and now want to use this to create the smarter agent [H with access to A8]. Suppose that A8 is already smart enough to answer super hard questions. We want the new agent to be smarter than A8, so we want it to be able to answer super-duper hard questions. In other words, we're in this position:

This means that, to answer this question, Hannah has to do the following:

She has to take the question Q and decompose it into subquestions q1,q2,q3,q4, such that the subquestions imply the answer to Q, and each qi is at most super hard. Then, she can use A8 to answer the qi, receive answers ai, and, on their basis, output an answer a for Q.

This means that she requires the Factored Cognition Hypothesis to hold for this particular step (the one from super-duper hard to super hard). If the Factored Cognition Hypothesis fails for any one jump of difficulty, performance might grind to a halt at that level.

Both views point to the same phenomenon because they describe the same idea: HCH is idealized stock IDA, i.e., it is what stock IDA hopes to approximate in the limit. Both the concrete training procedure and the ideal utilize Factored Cognition.

It is also conceivable that a decomposition of the kind that Hannah needs to solve this problem does exist, but she is not smart enough to find it. This problem can be considered a motivation for Debate.[7]

5. Debate

Before we begin, here are other possible resources to understand Debate:

Suppose a smart agent X makes the following argument:

She wants to argue that D holds. Thus, she claims that A is true, that A implies B because {argument symbolized by leftmost arrow}, that B implies C because {conjunction of the arguments made by middle arrows} and that C implies D because {argument made by rightmost arrow}.

Then comes forth an equally smart agent Y to claim that

X cannot give up on the step since the entire argument depends on it, so she has to defend it. Unlike in normal debates, both X and Y now forget about the surrounding context: the steps from A to B and from C to D no longer matter (and neither does the first conjunct of the step from B to C). The remaining debate is entirely about the second conjunct of the step from B to C.

Thus, we zoom into this step. It turns out there is more going on; the step itself has structure to it.

Then sayeth X:

Then sayeth Y:

Now, X has to defend this step, and so on. Eventually, the steps become so simple that Hannah can recognize the flaw for herself. The step from B23112 to B23113 was false; therefore the step from B2311 to B2312 was false; therefore the step from B231 to B232 was false; therefore the step from B23 to B24=C was false; therefore the step from B2 to C was false; therefore the argument that A implies D was false. X was wrong; Y was right.

In the Debate game, an AI system X debates Y, an identical copy of itself. The input to the game is a question; the output is a transcript of the arguments made by X and Y. The game proceeds in rounds: first, X says something, then Y, then X, then Y, and so on. Ideally, the first statement should always be the answer to the input question. In the end, a human H decides who won the game. In doing so, H should adhere to the following principles:

  • Enforce recursive zooming into statements. If Y points to a particular part of X's argument that she claims is wrong, and X responds by changing topics, then the game should end immediately, and Y should win.
  • If a subquestion is resolved in favor of either party, always judge the entire debate based on who was right on that subquestion.

To illustrate why the second point makes sense, suppose that the debate above ends differently. X and Y still zoom into the problem until they hit step B23112→B23113, but this time, Hannah thinks the step is valid. In this case, you might suspect that this is not conclusive since the argument could fail at another step. However, since Y was free to critique any part of the entire argument, chose this particular step, and failed to establish a flaw, we conclude that she could not have done better anywhere else.[8] Thus, based solely on verifying the validity of step B23112→B23113, Hannah concludes that every part of the argument was valid and that X won the debate.
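A sketch of the game loop under these rules, with 'debater_x', 'debater_y', and 'judge' as hypothetical stand-ins (each debater maps the transcript so far to its next statement):

    from typing import Callable, List, Tuple

    Transcript = List[Tuple[str, str]]  # (speaker, statement) pairs

    def run_debate(question: str,
                   debater_x: Callable[[Transcript], str],
                   debater_y: Callable[[Transcript], str],
                   judge: Callable[[Transcript], str],
                   n_rounds: int = 6) -> Tuple[str, Transcript]:
        transcript: Transcript = [("question", question)]
        for _ in range(n_rounds):
            # X states or defends a claim; Y points at the single part of
            # X's argument it claims is wrong, zooming in one level.
            transcript.append(("X", debater_x(transcript)))
            transcript.append(("Y", debater_y(transcript)))
        # H sees the final, maximally zoomed-in disagreement and resolves
        # the entire debate on that basis (the second principle above).
        return judge(transcript), transcript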

Recall the question-difficulty hierarchy from the previous chapter:

The recursive zooming into a question is closely analogous to the 'division into subquestions' depicted here. The important difference is how the tree gets sliced up during the training process. If [H with access to A8] is asked a super-duper hard question Q, but A8 can only answer super hard questions, then it's her job to decompose Q into q1,...,qk such that the qi's determine Q and all of them are at most super hard. In other words, we're taking a local slice of the tree.

Of course, each distillation step will require many such slices, and every subsequent distillation step will require many such slices at a higher level of the tree.

Conversely, each Debate game slices an entire vertical path out of the tree.

If Debate is ever implemented, we won't start off with superintelligent agents, and so we won't start off with super-duper-hard questions either. Consequently, the paths may be very short in the beginning and grow longer over time. Either way, each path should always go all the way to the bottom.[9]

The other difference is what I've mentioned at the end of the previous chapter: in Debate, the human never has to deal with figuring out how to decompose a question. At most, she has to figure out whether a decomposition is valid.[10] The way Geoffrey Irving put it is that a debate transcript (as depicted above) corresponds to a single path through the tree of amplification.[11] Crucially, it is a path chosen by the two Debate agents.

6. Comparison

Both IDA and Debate...

  • may or may not be outer-aligned
  • try to utilize the AI systems they're trying to train during the training process
  • are designed to scale up to superintelligence
  • rely on some version of the Factored Cognition Hypothesis to be applicable[12] since they traverse the tree of difficult problems/questions

However, IDA...

  • carves a local slice out of the tree at each training step
  • has no built-in solution for decomposing questions into subquestions
    • A separate model may be trained for this purpose, or the questions may go meta, i.e., "what is a good way to decompose this question?"
    • Insofar as this makes the decompositions worse, it implies that a shallow HCH tree is less powerful than a shallow Debate tree.
  • can look very different depending on how the amplification and distillation black boxes are implemented
  • only approximates HCH insofar as all distillation steps 'work'

Whereas Debate...

  • carves a vertical slice/path out of the tree at each training step
    • Therefore, it relies on the claim that such a path reliably provides meaningful information about the entire tree.
  • probably won't be training-competitive in the above form since each round requires human input
    • This means one has to train a second model to imitate the behavior of a human judge, which introduces further difficulties.
  • requires that humans can accurately determine the winner of a debate with debaters on every level of competence between zero and superintelligence
  • could maybe tackle Inner Alignment concerns by allowing debaters to win the debate by demonstrating Inner Alignment failure in the other debater via the use of transparency tools

7. Outlook

Although this post is written to work as a standalone, it also functions as a prequel to a sequence on Factored Cognition. Unlike this post, which is summarizing existing work, the sequence will be mostly original content.

If you've read everything up to this point, you already have most of the required background knowledge. Beyond that, familiarity with basic mathematical notation will be required for posts one and two. The sequence will probably start dropping within a week.

  1. As far as I know, the proposal is most commonly referred to as just 'Iterated Amplification', yet is most commonly abbreviated as 'IDA' (though I've seen 'IA' as well). Either way, all four names refer to the same scheme. ↩︎

  2. Intent Alignment is aligning [what the AI system is trying to do] with [what we want]. This makes it the union of outer and inner alignment. Some people consider this the entire alignment problem. It does not include 'capability robustness'. ↩︎

  3. I think the details of the distillation step strongly depend on whether IDA is used to train an autonomous agent (one which takes agents by itself), or a non-autonomous agent, one which only takes actions if queried by the user.

    For the autonomous case, you can think of the model as an 'AI assistant', a system that autonomously takes actions to assist you in various activities. In this case, the most likely implementation involves reinforcement learning.

    For the non-autonomous case, you can think of the model as an oracle: it only uses its output channels as a response to explicit queries from the user. In this case, the distillation step may be implemented either via reinforcement learning or via supervised learning on a set of (question, answer) pairs.

    From a safety perspective, I strongly prefer the non-autonomous version, which is why the post is written with that in mind. However, this may not be representative of the original agenda. The sequence on IDA does not address this distinction explicitly. ↩︎

  4. Note that, in the theoretical HCH tree, time freezes for a node whenever she poses a question to a subtree and resumes once the subtree has delivered the answer, so that every node has the experience of receiving answers instantaneously. ↩︎

  5. It's a bit too complicated to explain in detail how this works, but the gist is that the tree can play through all possible combinations of moves and counter-moves by asking each subtree to explore the game given a particular next move. ↩︎

  6. In particular, it is relevant for stock IDA where the amplification step is implemented by giving a human access to the current model. In principle, one could also implement amplification differently, in which case it may not rely on Factored Cognition. However, such an implementation would also no longer imitate HCH in the limit, and thus, one would need an entirely different argument for why IDA might be outer-aligned. ↩︎

  7. Geoffrey Irving has described Debate as a 'variant of IDA'. ↩︎

  8. This is the step where we rely on debaters being very powerful. If Y is too weak to find the problematic part of the argument, Debate may fail. ↩︎

  9. Given such a path p, the value |p| (the total number of nodes in such a path) is bounded by the depth of the tree, which means that it grows logarithmically with the total size of the tree. This is the formal reason why we can expect the size of Debate transcripts to remain reasonably small even if Debate is applied to extremely hard problems.
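    Under the simplifying assumption of a complete tree with uniform branching factor b ≥ 2 and depth d, the bound can be written as:

        N \;=\; \sum_{i=0}^{d} b^{i} \;=\; \frac{b^{d+1}-1}{b-1}
        \qquad\Longrightarrow\qquad
        |p| \;\le\; d+1 \;=\; O(\log_b N). ↩︎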

  10. Note that even that can be settled via debate: if Y claims that the decomposition of X is flawed, then X has to defend the decomposition, and both agents zoom into that as the subproblem that will decide the debate. Similarly, the question of how to decompose a question in IDA can, in principle, itself be solved by decomposing the question 'how do I decompose this question' and solving that with help from the model. ↩︎

  11. This is from the podcast episode I've linked to at the start of the chapter. Here is the relevant part of the conversation:

    Geoffrey: [...] Now, here is the correspondence. In amplification, the human does the decomposition, but I could instead have another agent do the decomposition. I could say I have a question, and instead of a human saying, “Well, this question breaks down into subquestions X, Y, and Z,” I could have a debater saying, “The subquestion that is most likely to falsify this answer is Y.” It could’ve picked at any other question, but it picked Y. You could imagine that if you replace a human doing the decomposition with another agent in debate pointing at the flaws in the arguments, debate would kind of pick out a path through this tree. A single debate transcript, in some sense, corresponds to a single path through the tree of amplification.

    Lucas: Does the single path through the tree of amplification elucidate the truth?

    Geoffrey: Yes. The reason it does is it’s not an arbitrarily chosen path. We’re sort of choosing the path that is the most problematic for the arguments. ↩︎

  12. To be precise, this is true for stock IDA, where amplification is realized by giving the human access to the model. Factored Cognition may not play a role in versions of IDA that implement amplification differently. ↩︎




[Research Review] Perineuronal Nets in Macaques

Published on November 15, 2020 11:43 AM GMT

This is a review of Distribution of N-Acetylgalactosamine-Positive Perineuronal Nets in the Macaque Brain: Anatomy and Implications by Adrienne L. Mueller, Adam Davis, Samantha Sovich, Steven S. Carlson, and Farrel R. Robinson.

A critical period in neuronal development is a time of synaptic plasticity. Perineuronal Nets (PNNs) "form around neurons near the end of critical periods during development". PNNs inhibit the formation of new connections; in other words, PNNs inhibit plasticity. We believe this to be causal because[1] "[d]issolving them in the amygdala allowed experience to erase fear conditioning in adult rats, conditioning previously thought to be permanent."

PNNs surround more neurons in some parts of the brain than others. In particular, "PNNs generally surrounded a larger proportion of neurons in motor areas than in sensory areas". For example, "PNNs surround almost 50% of neurons in the ventral horn of the cervical spinal cord but almost none of the neurons in the dorsal horn." We know from other research[2] that motor control is associated with the ventral spinal cord whereas sensory input is dorsal.

PNNs are shown in green [below].

Here is a graph of PNNs in each brain region [below].

The cerebral cortex stands out as having few PNNs everywhere sampled. This makes sense if the cerebral cortex needs to be adaptable and therefore plastic. PNNs were most common in the cerebellar nucleus, a motor structure.

The distribution of PNNs is evidence that motor areas are less plastic than sensory areas. If true, then sensory input may involve more computation than motor output.

  1. The experiment in question may have also dissolved the rest of the extracellular matrix, besides PNNs, and that dissolution may have been what caused the erasure. ↩︎

  2. Technically, the research in question concerns humans, not macaques, but I think that we are similar enough to serve as a model for macaques. ↩︎




Beware Experiments Without Evaluation

Published on November 15, 2020 8:18 AM GMT

Sometimes, people propose "experiments" with new norms, policies, etc. that don't include any real means of evaluating whether the policy actually succeeded.

This should be viewed with deep skepticism -- it often seems to me that such an "experiment" isn't really an experiment at all, but rather a means of sneaking a policy in by implying that it will be rolled back if it doesn't work, while making no real provision for evaluating whether it is successful.

In the worst cases, the process of running the experiment can involve taking measures that prevent the "experiment" from ever being rolled back!

Here are some examples of the sorts of thing I mean:

  • Management at a company decides that it's going to "experiment with" an open floor plan at a new office. The office layout and space chosen makes it so that even if the open floor plan proves detrimental, it will be very difficult to switch back to a standard office configuration.
  • The administration of an online forum decides that it is going to "experiment with" a new set of rules in the hopes of improving the quality of discourse, but doesn't set any clear criteria or timeline for evaluating the "experiment" or what measures might actually indicate "improved discourse quality".
  • A small group that gathers for weekly chats decides to "experiment with" adding a few new people to the group, but doesn't have any probationary period, method for evaluating whether someone's a good fit or removing them if they aren't, etc.

Now, I'm not saying that one should have to register a formal plan for evaluation with timelines, metrics, etc. for any new change being made or program you want to try out -- but you should have at least some idea of what it would look like for the experiment to succeed and what it would look like for it to fail, and for things that are enough of a shakeup, more formal or established metrics might well be justified.




Examples of Measures

Published on November 15, 2020 1:44 AM GMT

I recently started learning measure theory but had a pretty hard time finding real-world examples of the different measures. So this is a list of real-world examples of some different, common measures.

The count measure

If I have 4 apples, their count measure is 4. The count measure is just normal counting.
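In symbols (a standard way to write it), the count measure of a set A is just its number of elements:

    \mu_{\mathrm{count}}(A) \;=\; |A|,
    \qquad\text{so that}\qquad
    \mu_{\mathrm{count}}(\{\text{the 4 apples}\}) \;=\; 4.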

The Dirac measure

You are in a forest with some trees. It's daytime. The trees cast shadows, but the tree cover is sparse enough that there's still some sunlight that hits the ground. The Dirac measure for whether you're standing in a shadow is something that outputs 1 when you're standing in a shadow, and 0 if you're not.

Imagine moving the point labeled 'you' around inside the blue region. Whenever it's inside a green circle, the Dirac measure is 1; whenever it's not inside a green circle, it's 0.

Formally, the Dirac measure δx centered at a point x assigns to a (measurable) set A the value 1 if x lies in A and the value 0 otherwise. In the forest example, x is your position and A is the region covered by shadow.
means "in a shadow when standing in position x", and A is the collection of shadow regions, the mathematical expression of this Dirac measure is $ \delta_x(A) = \begin{cases} 0, & x \notin A \\ 1, & x \in A \end{cases} $.

The Dirac measure always looks like this, whatever meanings you attach to $ \delta_x $ and the set A. If you've heard of an "indicator function", this is nearly the same thing.

(Since x is a point, the forest analogy breaks down a little: you are not allowed to be standing halfway in and out of shadow. You're in or you're out. We could fix this above by talking about "where the bajillionth atom on your left pinky toe is" instead of "where you're standing").
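To make this concrete, here's a minimal Python sketch of the Dirac measure; the function name and the example set are made up for illustration:

```python
# A minimal sketch of the Dirac measure: delta_x(A) is 1 if the point x
# lies in the set A, and 0 otherwise. The shadow example is illustrative.

def dirac_measure(x, A):
    """Return delta_x(A) for a point x and a set A supporting membership tests."""
    return 1 if x in A else 0

shadow_regions = {2, 3, 4}  # hypothetical positions that are in shadow
print(dirac_measure(3, shadow_regions))  # 1: standing in shadow
print(dirac_measure(7, shadow_regions))  # 0: standing in the sun
```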

The Lebesgue measure

The length of a line. The area of a flat shape. The volume of a 3D object. The equivalent concept in 4D, 5D, and so on. The name can be intimidating, but as a person in the physical world, you already have an intuitive idea of how the Lebesgue measure should work.

An important feature of the Lebesgue measure $ \mu_L $ is that if you move your set around, $ \mu_L $ doesn't change. An orange has the same volume whether it's in my left hand, my right hand, or balanced on your head across the room from me. This property is known as "translation invariance".
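As a toy check of translation invariance, here's a sketch with made-up numbers, using interval length as the one-dimensional Lebesgue measure:

```python
# Translation invariance in one dimension: the Lebesgue measure (length)
# of an interval is unchanged when the interval is shifted.

def interval_length(a, b):
    """Lebesgue measure of the interval [a, b] on the real line."""
    return b - a

a, b, shift = 1.0, 4.0, 10.0
print(interval_length(a, b))                  # 3.0
print(interval_length(a + shift, b + shift))  # 3.0: same length after moving
```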

Probability measure

A probability measure $ \mu_p $ is a measure that satisfies the usual requirements on a probability distribution: it takes values in the range [0, 1], the measure of the entire space is 1, and it satisfies a condition called "countable additivity", which you probably don't have to think about too much.

Pictured is an example with $ \mu_p(1) = \mu_p(2) = \tfrac{1}{4} $ and $ \mu_p(3) = \tfrac{1}{2} $.
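Here's a small sketch checking that this example really is a probability measure (in the finite case, countable additivity reduces to ordinary addition):

```python
# Checking the example measure on {1, 2, 3}: every value lies in [0, 1]
# and the measure of the whole space is 1.
from fractions import Fraction

mu_p = {1: Fraction(1, 4), 2: Fraction(1, 4), 3: Fraction(1, 2)}

assert all(0 <= p <= 1 for p in mu_p.values())
assert sum(mu_p.values()) == 1  # measure of the entire space
print("mu_p is a valid probability measure on {1, 2, 3}")
```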

Angle measure

A measure $ \mu_\circ $ that sums to 360°. Pictured, $ \mu_\circ(a) \approx 50^\circ $, $ \mu_\circ(b) \approx 130^\circ $, and $ \mu_\circ(c) = 180^\circ $.


Angle measures are rotation invariant: rotate the picture above, and a, b, and c will still be the same angles. I think they're translation invariant, too? Seems like if you scoot that picture around, nothing changes, as long as you moved everything all at the same time.

There are more measures, but I don't understand them yet! 

Thanks to the YouTube channel The Bright Side Of Mathematics for getting me from knowing nothing to knowing something, when all the other expositions I found were too hard to understand.



Discuss

Early Thoughts on Ontology/Grounding Problems

November 15, 2020 - 02:19
Published on November 14, 2020 11:19 PM GMT

These all seem to be pointing to different aspects of the same problem.

  • Cross-ontology goal translation: given a utility function over a latent variable in one model, find an equivalent utility function over latent variables in another model with a different ontology. One subquestion here is how the first model’s input data channels and action variables correspond to the other model’s input data channels and action variables - after all, the two may not be “in” the same universe at all, or they may represent entirely separate agents in the same universe who may or may not know of each other's existence.
  • Correspondence theorems: quantum mechanics should reduce to classical mechanics in places where classical worked well, special relativity should reduce to Galilean relativity in places where Galilean worked well, etc. As we move to new models with new ontologies, when and how should the structure of the old models be reproduced?
  • The indexing problem: I have some system containing three key variables A, B, and C. I hire someone to study these variables, and after considerable effort they report that X is 2.438. Apparently they are using different naming conventions! What is this variable X? Is it A? B? C? Something else entirely? Where does their X fit in my model?
  • How do different people ever manage to point to the same thing with the same word in the first place? Clearly the word “tree” is not a data structure representing the concept of a tree; it’s just a pointer. What’s the data structure? What’s its type signature? Similarly, when I point to a particular tree, what’s the data structure for the concept of that particular tree? How does the “pointer” aspect of these data structures work?
  • When two people are using different words for the same thing, how do they figure that out? What about the same word for different things?
  • I see a photograph of a distinctive building, and wonder “Where is this?”. I have some data - i.e. I see the distinctive building - but I don’t know where in the world the data came from, so I don’t know where in my world-model to perform an update. Presumably I need to start building a little side-model of “wherever this picture was taken”, and then patch that side-model into my main world model once I figure out “where it goes”.
  • Distributed models and learning: a bunch of different agents study different (but partially overlapping) subsystems of a system - e.g. biologists study different subsystems of a bacteria. Sometimes the agents end up using different names or even entirely different ontologies - e.g. some parts of a biological cell require thinking about spatial diffusion, while some just require overall chemical concentrations. How do we combine submodels from different agents, different ontologies and different data? How can we write algorithms which learn large model structures via stitching together small structures each learned independently from different subsystems/data?

Abstraction plays a role in these, but it’s not the whole story. It tells us how high-level concepts relate to low-level, and why very different cognitive architectures would lead to surprisingly similar abstractions (e.g. neural nets learning similar concepts to humans). If we can ground two sets of high-level abstractions in the same low level world, then abstraction can help us map from one high-level to the low-level to the other high-level. But if two neural networks are trained on different data, and possibly even different kinds of data (like infrared vs visual spectrum photos), then we need a pretty detailed outside model of the shared low-level world in order to map between them.

Humans do not seem to need a shared low-level world model in order to pass concepts around from human to human. Things should ultimately be groundable in abstraction from the low level, but it seems like we shouldn’t need a detailed low-level model in order to translate between ontologies.

In some sense, this looks like Ye Olde Symbol Grounding Problem. I do not know of any existing work on that subject which would be useful for something like “given a utility function over a latent variable in one model, find an equivalent utility function over latent variables in another model”, but if anybody knows of anything promising then let me know.

Not Just Easy Mode

After poking at these problems a bit, they usually seem to have an “easy version” in which we fix a particular Cartesian boundary.

In the utility function translation problem, it’s much easier if we declare that both models use the same Cartesian boundary - i.e. same input/output channels. Then it’s just a matter of looking for functional isomorphism between latent variable distributions.

For correspondence theorems, it's much easier if we declare that all models are predicting exactly the same data, or predicting the same observable distribution. Again, the problem roughly reduces to functional isomorphism.

Similarly with distributed models/learning: if a bunch of agents build their own models of the same data, then there are obvious (if sometimes hacky) ways to stitch them together. But what happens when they’re looking at different data on different variables, and one agent’s inferred latent variable may be another agent’s observable?

The point here is that I don’t just want to solve these on easy mode, although I do think some insights into the Cartesian version of the problem might help in the more general version.

Once we open the door to models with different Cartesian boundaries in the same underlying world, things get a lot messier. To translate a variable from model A into the space of model B, we need to “locate” model B’s boundary in model A, or locate model A’s boundary in model B, or locate both in some outside model. That’s the really interesting part of the problem: how do we tell when two separate agents are pointing to the same thing? And how does this whole "pointing" thing work to begin with?

Motivation

I’ve been poking around the edges of this problem for about a month, with things like correspondence theorems and seeing how some simple approaches to cross-ontology translation break. Something in this cluster is likely to be my next large project.

Why this problem?

From an Alignment as Translation viewpoint, this seems like exactly the right problem to make progress on alignment specifically (as opposed to embedded agency in general, or AI in general). To the extent that the “hard part” of alignment is translating from human concept-space to some AI’s concept-space, this problem directly tackles the bottleneck. Also closely related is the problem of an AI building a goal into a successor AI - though that’s probably somewhat easier, since the internal structure of an AI is easier to directly probe than a human brain.

Work on cross-ontology transport is also likely to yield key tools for agency theory more generally. I can already do some neat things with embedded world models using the tools of abstraction, but it feels like I’m missing data structures to properly represent certain pieces - in particular, data structures for the “interface” where a model touches the world (or where a self-embedded model touches itself). The indexing problem is one example of this. I think those interface-data-structures are the main key to solving this whole cluster of problems.

Finally, this problem has a lot of potential for relatively-short-term applications, which makes it easier to build a feedback cycle. I could imagine identifying concept-embeddings by hand or by ad-hoc tricks in one neural network or probabilistic model, then using ontology translation tools to transport those concept-embeddings into new networks or models. I could even imagine whole “concept libraries”, able to import pre-identified concepts into newly trained models. This would give us a lot of data on how robust identified abstract concepts are in practice. We could even run stress tests, transporting concepts from model to model to model in a game of telephone, to see how well they hold up.

Anyway, that’s one potential vision. For now, I’m still figuring out the problem framing. Really, the reason I’m looking at this problem is that I keep running into it as a bottleneck to other, not-obviously-similar problems, which makes me think that this is the limiting constraint on a broad class of problems I want to solve. So, over time I expect to notice additional possibilities which a solution would unblock.



Discuss

Signalling & Simulacra Level 3

November 14, 2020 - 22:24
Published on November 14, 2020 7:24 PM GMT

"We lie all the time, but if everyone knows that we're lying, is a lie really a lie?"

-- Moral Mazes

A common Bayesian account of communication analyzes signalling games: games in which there is hidden information, and some actions can serve to communicate that information between players. The meaning of a signal is precisely the probabilistic information one can infer from it.

I'll call this the signalling analysis of meaning. (Apparently, it's also been called Gricean communication.)

In Maybe Lying Can't Exist, Zack Davis points out that the signalling analysis has some counterintuitive features. In particular, it's not clear how to define "lying"!

Either agents have sufficiently aligned interests, in which case the agents find a signalling system (an equilibrium of the game in which symbols bear a useful relationship with hidden states, so that information is communicated) or interests are misaligned, in which case no such equilibrium can develop.

We can have partially aligned interests, in which case a partial signalling system develops (symbols carry some information, but not as much as you might want). Zack gives the example of predatory fireflies who imitate a mating signal. The mating signal still carries some information, but it now signals danger as well as a mating opportunity, making the world more difficult to navigate.

But the signalling analysis can't call the predator a liar, because the "meaning" of the signal includes the possibility of danger.
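To see what the signalling analysis says the flash "means", here's a toy Bayesian calculation; the priors and likelihoods are invented for illustration:

```python
# Toy model of the firefly example: the "meaning" of a flash is just the
# posterior it induces over sender types. All numbers are made up.

p_mate, p_predator = 0.9, 0.1   # prior over who might be flashing
p_flash_given_mate = 1.0        # mates always send the mating flash
p_flash_given_predator = 1.0    # predators perfectly imitate it

p_flash = p_mate * p_flash_given_mate + p_predator * p_flash_given_predator
posterior_predator = p_predator * p_flash_given_predator / p_flash
print(posterior_predator)  # 0.1: the flash now "means" a 10% chance of danger
```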

Zack concludes: Deception is an ontologically parasitic concept. It requires a pre-existing notion of truthfulness. One possibility is given by Skyrms and Barrett: we consider only the subgame where sender and receiver have common goals. This gives us our standard of truth by which to judge lies.

I conclude: The suggested solution seems OK to me, but maybe we want to throw out the signalling analysis of meaning altogether. Maybe words don't just mean what they probabilistically imply. Intuitively, there is a distinction between connotation and denotation. Prefacing something with "literally" is more than just an intensifier. 

The signalling analysis of meaning seems to match up rather nicely with simulacrum level 3, where the idea that words have meaning has been lost, and everyone is vibing.

Level 3 and Signalling

Out of the several Simulacra definitions, my understanding mainly comes from Simulacra Levels and their Interactions. Despite the risk of writing yet-another-attempt-to-explain-simulacra-levels, here's a quick summary of my understanding:

  1. Truth-telling. An honest attempt to communicate object-level facts.
  2. Lying. For this to be meaningful and useful, there must be an equilibrium where truth-telling is common. Liars exploit that equilibrium to their advantage.
  3. You say X because you want to sound like the cool kids. Words have lost their inherent meanings, perhaps due to the prevalence of level 2 strategies. However, words still convey political information. Like level 1, level 3 has a sort of honesty to it: a level 3 strategy is conveying true things, but without regard for the literal content of words. We could call this level Humbug (more or less). It's also been called "signalling", but that begs the question of the present essay.
  4. Bullshit. Even the indirect meanings of words are corrupted, as dishonest actors say whatever is most advantageous in the moment. This level is parasitic on level 3 in the same way that level 2 is parasitic on level 1.

Here are some facts about the signalling analysis of meaning.

  • There is no distinction between denotation and connotation.
  • An assertion's meaning is just the probabilistic conclusions you can reach from it.
  • Map/territory fit is just how well those probabilistic conclusions match reality.
  • If a statement like "guns don't kill people, people kill people" lets you reliably infer the political affiliation of the speaker, then it has high map/territory fit in that sense.
  • If a particular lie is common, this fact just gets rolled into the "meaning" of the utterance. If "We should get together more often" is often used as part of a polite goodbye, then it means "I want to indicate that I like you as a person and leave things open without making any commitments" (or something like that).

This sounds an awful lot like level-3 thinking to me.

I'm not saying that signalling theory can only analyze level-three phenomena! On the contrary, I still think signalling theory includes honest communication as a special case. I still think it's a theory of what information can be conveyed through communication, when incentives are not necessarily aligned. After all, signalling theory can examine cases of perfectly aligned incentives, where there's no reason to lie or manipulate.

What I don't think is that signalling theory captures everything that's going on with truthfulness and deceit.

Signalling theory now strikes me as a level 3 understanding of language. It can watch levels 1 and 2 and come to some understanding of what's going on. It can even participate. It just doesn't understand the difference between levels 1 and 2. It doesn't see that words have meanings beyond their associations. 

This is the type of thinking that can't tell the difference between "a implies b" and "a, and also b" -- because people almost always endorse both "a" and "b" when they say "a implies b". 

This is the type of thinking where disagreement tends to be regarded as a social attack, because disagreement is associated with social attack.

This is the type of thinking where we can't ever have a phrase meaning "honestly" or "literally" or "no really, I'm not bullshitting you on this one" because if such a phrase existed then it would immediately be co-opted by everyone else as a mere intensifier.

The Skyrms & Barrett Proposal

What about the proposal that Zack Davis mentioned:

Brian Skyrms and Jeffrey A. Barrett have an explanation in light of the observation that our sender–receiver framework is a sequential game: first, the sender makes an observation (or equivalently, Nature chooses the type of sender—mate, predator, or null in the story about fireflies), then the sender chooses a signal, then the receiver chooses an action. We can separate out the propositional content of signals from their informational content by taking the propositional meaning to be defined in the subgame where the sender and receiver have a common interest—the branches of the game tree where the players are trying to communicate.

This is the sort of proposal I'm looking for. It's promising. But I don't think it's quite right.

First of all, it might be difficult to define the hypothetical scenario in which all interests are aligned, so that communication is honest. Taking an extreme example, how would we then assign meaning to statements such as "our interests are not aligned"?

More importantly, though,  it still doesn't make sense of the denotation/connotation distinction. Even in cases where interests align, we can still see all sorts of probabilistic implications of language, such as Grice's maxims. If someone says "frogs can't fly" in the middle of a conversation, we assume the remark is relevant to the conversation, and form all kinds of tacit conclusions based on this. To be more concrete, here's an example conversation:

Alice: "I just don't understand why I don't see Cedrick any more."

Bob: "He's married now."

We infer from this that the marriage creates some kind of obstacle. Perhaps Cedrick is too busy to come over. Or Bob is implying that it would be inappropriate for Cedrick to frequently visit Alice, a single woman. None of this is literally said, but a cloud of conversational implicature surrounds the literal text. The signalling analysis can't distinguish this cloud from the literal meaning.

The Challenge for Bayesians

Zack's post (Maybe Lying Can't Exist, which I opened with) feels to me like one of the biggest challenges to classical Bayesian thinking that's appeared on LessWrong in recent months. Something like the signalling theory of meaning has underpinned discussions about language among rationalists since before the sequences.

Like logical uncertainty, I see this as a challenge in the integration of logic and probability. In some sense, the signalling theory only allows for reasoning by association rather than structured logical reasoning, because the meaning of any particular thing is just its probabilistic associations.

Worked examples in the signalling theory of meaning (such as Alice and Bob communicating about colored shapes) tend to assume that the agents have a pre-existing meaningful ontology for thinking about the world ("square", "triangle" etc). Where do these crisp ontologies come from, if (under the signalling theory of meaning) symbols only have probabilistic meanings?

How can we avoid begging the question like that? Where does meaning come from? What theory of meaning can account for terms with definite definitions, strict logical relationships, and such, all alongside probabilistic implications?

To hint at my opinion, I think it relates to learning normativity.



Discuss

[Event] Ajeya's Timeline Report Reading Group #1 (Nov. 17, 6:30PM - 8:00PM PT)

November 14, 2020 - 22:14
Published on November 14, 2020 7:14 PM GMT

Ever since Ajeya's timeline report came out, I've been wanting to discuss it with other people, and see what they think about it. This is the relevant LessWrong post: https://www.lesswrong.com/.../draft-report-on-ai-timelines

The report is split into four files, and this meeting will be about the first one. If you want to attend, please make sure you have at the very least given the first file a thorough skim.

If this time doesn't work well for people, I am also happy to do multiple sessions on the first report, or to reschedule it.

Structure will be mostly open discussion, with a Google Doc open in parallel to help us decide what to explore.

The event will be happening at this link. After you spawn, walk upwards until you see the Tardis, then teleport to the library. If you get lost, just send me a message via the menu on the right and I can come get you:

http://garden.lesswrong.com?code=9A40&event=ajeya-s-ai-timelines-report-reading-group



Discuss

Stuart Russell at SlateStarCodex Online Meetup

November 14, 2020 - 21:42
Published on November 14, 2020 6:42 PM GMT

Professor Stuart Russell will speak briefly on his book "Human Compatible", and then will take questions. The event begins Dec. 6, 2020 at 20:30 Israel Standard Time, 10:30 Pacific Standard Time, 18:30 UTC.

Please register here and we will send you an invitation.

Stuart Russell is a Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in AI, used in 1500 universities in 135 countries. His research covers a wide range of topics in artificial intelligence, with an emphasis on the long-term future of artificial intelligence and its relation to humanity. 



Discuss

Specialized Labor and Counterfactual Compensation

November 14, 2020 - 21:13
Published on November 14, 2020 6:13 PM GMT

Note: the math didn't import. For now you may want to read it at the original source.

I have three purposes in this post. The first is to review the formal game theory found in Robert Ellickson's Order Without Law. It's not a large part of the book, but it's the part that I'm most qualified to judge. Not that I'm a formal game theorist myself, but I'm closer to being one of them than to being any kind of social scientist, historian or lawyer. If his formal game theory is nonsense, that would suggest that I ought to discount his writing on other fields, too. (Perhaps not discount it completely, especially because formal game theory is outside his main area of study. Then again, lots of the book is outside his main area of study.)

Spoiler alert: I think he holds up reasonably well. I want to ding him a few points, but nothing too serious, and he possibly even contributes a minor original result.

My second purpose, which is valuable for the first but also valuable in itself, is to try to extend it further than Ellickson did. I don't succeed at that.

My third is simply to be able to cut it from my in-progress review of the rest of the book.

Ellickson discusses two games. One is the classic Prisoner's Dilemma, in which you either Cooperate (for personal cost but social benefit) or Defect (for personal benefit but social cost).1 The other he calls Specialized Labor, in which two people must choose whether to Work on some common project or Shirk their share of it. It differs from the Prisoner's Dilemma in two ways. First, it's asymmetrical; one player is a less effective worker than the other, and gets less payoff from Working while the other Shirks than does the other player. The other is that in this game, the socially optimal outcome is Work/Shirk, not Work/Work.

(Many authors consider that the second change isn't really a change, and that a Prisoner's Dilemma can perfectly well have Cooperate/Defect be socially optimal. So they'd say Specialized Labor is simply an asymmetrical version of the Prisoner's Dilemma. In my taxonomy I define the Prisoner's Dilemma more narrowly than that; see also this comment. Ellickson uses the same narrow definition as me. I'd instead say Specialized Labor is an asymmetrical version of Too Many Cooks.)

Note that payoffs aren't measured in utility. They're measured in something Ellickson calls "welfare". He doesn't really explore the formal consequences of this. But what it gives us is that, since welfare is supposed to be objective, we can sum different people's welfare; when I used the phrases "social cost" and "socially optimal" in the previous paragraphs, talking about the sum of both players' results, that was a meaningful thing to do. I'm not sure exactly what it costs us, except that I don't expect results about mixed strategies to hold. (Someone won't necessarily prefer "50% chance of 3 welfare" to "certain chance of 1 welfare". I wasn't planning to consider mixed games anyway.) We can still assume that people prefer higher amounts of welfare to lower amounts of it.2

I'm going to pretend that Cooperate and Defect are also called Work and Shirk, so that I don't have to use both names when talking about both games.

In normal-form, these games look like this:

Prisoner's Dilemma

                    Player 2: Work          Player 2: Shirk
Player 1: Work      $ ww_* $, $ ww_* $      $ ws_* $, $ sw_* $
Player 1: Shirk     $ sw_* $, $ ws_* $      $ ss_* $, $ ss_* $

with $ sw_* > ww_* > ss_* > ws_* $, and $ 2ww_* > sw_* + ws_* $.

Specialized Labor

                    Player 2: Work          Player 2: Shirk
Player 1: Work      $ ww_* $, $ ww_* $      $ ws_1 $, $ sw_* $
Player 1: Shirk     $ sw_* $, $ ws_2 $      $ ss_* $, $ ss_* $

with $ sw_* > ww_* > ss_* > ws_1 > ws_2 $, and $ 2ww_* < sw_* + ws_1 $.

How to read these symbols: the subscript is the player who gets the payoff, the first letter is their move, and the second letter is the other player's move. If the subscript is $ * $, then this combination is symmetric.3 So $ ws_1 $ is the payoff to Player 1, if he Works while Player 2 Shirks. $ ws_2 $ is the payoff to Player 2, if she Works while Player 1 Shirks.4 $ ws_* $ is both of these values, when they're equal to each other. And to be clear, when they're equal, $ ws_1 $ can stand in for $ ws_* $ just as easily as the other way around.

To help make the structure more visible, I've colored the symbols in green or red according to local incentive gradients - green for "this player prefers this outcome to the outcome they get from changing their move", red for the opposite of that. So when $ ws_1 $ is red, that means $ ss_1 > ws_1 $, since $ ss_1 $ represents Player 1's payoff if he changes his move while Player 2 keeps hers the same. A quadrant is a Nash equilibrium (meaning "neither player wants to change their move unilaterally") iff it has two green symbols. I've also given a slightly darker background to the socially optimal quadrants.
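To make the payoff structure concrete, here's a small Python sketch (mine, not Ellickson's). The payoff numbers are invented, chosen only to satisfy the inequalities above, and the helper finds pure-strategy Nash equilibria by checking unilateral deviations:

```python
# A sketch (not from the book): each 2x2 game as a dict mapping
# (move1, move2) -> (payoff1, payoff2). Payoff numbers are illustrative.

MOVES = ("Work", "Shirk")

prisoners_dilemma = {  # sw > ww > ss > ws, and 2ww > sw + ws
    ("Work", "Work"): (3, 3),
    ("Work", "Shirk"): (0, 4),
    ("Shirk", "Work"): (4, 0),
    ("Shirk", "Shirk"): (1, 1),
}

specialized_labor = {  # sw > ww > ss > ws1 > ws2, and 2ww < sw + ws1
    ("Work", "Work"): (5, 5),
    ("Work", "Shirk"): (2, 10),   # ws1: Player 1 is the efficient worker
    ("Shirk", "Work"): (10, 1),   # ws2: Player 2 Works at greater cost
    ("Shirk", "Shirk"): (3, 3),
}

def nash_equilibria(game):
    """Outcomes where neither player gains by changing their move alone."""
    equilibria = []
    for (m1, m2), (p1, p2) in game.items():
        better1 = any(game[(a, m2)][0] > p1 for a in MOVES)
        better2 = any(game[(m1, b)][1] > p2 for b in MOVES)
        if not better1 and not better2:
            equilibria.append((m1, m2))
    return equilibria

print(nash_equilibria(prisoners_dilemma))  # [('Shirk', 'Shirk')]
print(nash_equilibria(specialized_labor))  # [('Shirk', 'Shirk')]
```

Both games come out with Shirk/Shirk as the only pure equilibrium, which is exactly the problem the norms are meant to solve.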

Comparing these games, Ellickson claims for example that norms will tend to punish someone who Shirks in a Prisoner's Dilemma, rather than rewarding those who Work, because eventually most people will Work and it's cheaper to sanction the thing that happens rarely. But in a Specialized Labor game, norms will tend to reward the efficient worker ("cheapest labor-provider") for Working, because that encourages people to obtain the skills necessary to perform this work. There's a background level of skills that everyone is expected to have, and people are punished for falling short of them and rewarded for exceeding them.

So most of the points I want to ding Ellickson on here are because this is kind of a strange choice of games. For one thing, it seems to assume that teaming up to work is more expensive than working individually iff players have unequal skill levels.

Honestly I don't think that's so implausible as a heuristic. I think "most work projects have gains from working together" is a decent guess, and then one way to remove those gains could be if one player is much more skilled than the other. Still, Ellickson doesn't make this argument, or acknowledge that the assumption is kind of weird.

Another way to justify the omission is if the omitted possibilities don't add much of interest. Prisoner's Dilemma and Specialized Labor are opposite corners in a two-by-two grid parameterized by "synergistic/discordant" (gains or no gains from cooperation) and "symmetrical/asymmetrical". If our tools for working with them can also be applied to the other corners without much extra effort, then there's no need to consider the others in detail. More on this later.

Something weird on the face of it is that in Specialized Labor, Work/Work results in the same payoff to both players. Why assume that that's symmetrical? But I don't think this is a big deal. Plausibly people can calibrate how hard they work if they think they're getting a worse result than the other. Also I suspect you just don't change much by allowing it to be asymmetrical, provided that both payoffs are in between $ sw_* $ and $ ss_* $.

Similarly you might suppose that the efficient worker doesn't just pay less to Work than the inefficient worker, he also does a better job. In which case we might want to set $ sw_1 < sw_2 $. But again, I doubt that matters much.

Here's my largest objection: Ellickson doesn't consider that work might be worth doing selfishly. In both games, you maximize your own outcome by Shirking, and if that means the work doesn't get done, so be it. But that puts a narrow band on the value of a piece of work. From a social perspective, it's not worth doing for the benefits it gives to one person, but it is worth doing for the benefits it gives to two. I think a lot of the situations Ellickson looks at don't really fit that model. For example, building a fence seems like something you'd often do of your own accord, simply for the benefits it gives to yourself, but Ellickson considers it a Prisoner's Dilemma because most people have the relevant skills. (He doesn't analyse whether fence-building is more easily done in tandem.)

To model this possibility, we'd set $ ws_1 > ss_* $, and maybe $ ws_2 > ss_* $ as well. This gives the game that I like to call the Farmer's Dilemma and others call Chicken, Hawk/Dove or Snowdrift. (Here's why I call it that.) Normally I think of the Farmer's Dilemma as symmetrical, but the asymmetrical case seems fine to count as an instance of it, at least right now.

The tricky thing about this game is that even though you'd be willing to do the work yourself if no one else benefitted, the fact that someone else does benefit makes you want them to join in and help with the work. If they decline, your only in-game way to punish them is not to do the work, which hurts you too - but if you don't punish them, you're a sucker. This is fundamentally different from the tricky thing with Prisoner's Dilemma and Specialized Labor, which in both cases is simply that people have no selfish incentive to work. So it seems like an important omission. Especially because depending on the exact payoffs, it may be that "one player is a sucker while the other makes out like a bandit" is both a Nash equilibrium and socially optimal.

The thesis of the book is to propose a certain hypothesis. Roughly speaking, and for the purpose of this essay, we can assume the hypothesis says: norms will evolve to maximize the aggregate welfare of the players.

(And so Farmer's Dilemmas might be a good place to look for failures of the hypothesis. When the socially optimal result is for one player to be a sucker, and that's also a Nash equilibrium, the hypothesis thinks this is fine. Humans might not think that, and norms might evolve that the hypothesis would have ruled out. But note that this is only the case in the Discordant Farmer's Dilemma - when there are no gains from cooperation. In the Synergistic Farmer's Dilemma, the socially optimal result is for both players to Work. The Discordant Farmer's Dilemma might be rare in practice - I wouldn't expect it with fence-building, for example.)

Let's pretend we're creating a system of norms for these games. Something we can do is mandate transfers of welfare between players. In each quadrant, we can take some of one player's payoff and give it to the other. Total payoff stays the same, and so the socially optimal outcome stays in the same place. But the distribution of welfare changes, and the Nash equilibria might move.

How do we encourage the socially optimal result by doing this? This is Ellickson's possible minor contribution. He points out that we can do it by introducing a debt from those who Shirk to those who Work, and that the value $ ww_* - ws_1 $ works in both these games.

He calls this the "liquidated-Kantian formula" but doesn't explain the name, and I have only a vague understanding of where he might be going with it. Since the name hasn't caught on, I'm going to propose my own: counterfactual compensation. If I Shirk, I compensate you for your losses compared to the world where I worked.

(To compare: actual compensation would be compensating you for the losses you actually suffered from working, $ ss_* - ws_1 $. Actual restitution would be handing over to you the gains I got from your work, $ sw_* - ss_* $. Counterfactual restitution would be handing over to you the gains I got from not working myself, $ sw_* - ww_* $. Each of these takes one player's payoff in one quadrant, and subtracts the same player's payoff in an adjacent quadrant. Compensation is about your costs, and restitution is about my gains. The actual variants are about differences between the world where no one worked and the worlds where one of us worked; they're about the effects of work that actually happened. The counterfactual variants are about the differences between the worlds where only one of us worked and the world where we both worked; they're about the effects of work that didn't happen.)
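As a quick reference, the four candidate debt formulas can be written as one-liners; this is just a restatement of the paragraph above as a sketch, with names mirroring the text's notation:

```python
# The four candidate debts a Shirker might owe a Worker, restated as
# functions of raw payoffs (a sketch; names mirror the text's notation).

def counterfactual_compensation(ww, ws1):
    return ww - ws1  # your loss compared to the world where I also Worked

def actual_compensation(ss, ws1):
    return ss - ws1  # your loss compared to the world where nobody Worked

def actual_restitution(sw, ss):
    return sw - ss   # my gain from the work you actually did

def counterfactual_restitution(sw, ww):
    return sw - ww   # my gain from not Working myself
```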

(Also: yes, obviously there are caveats to apply when bringing this formula to the real world. Ellickson discusses them briefly. I'm going to ignore them.)

If we apply this formula to the Prisoner's Dilemma, we get this:

Prisoner's Dilemma with counterfactual compensation

                    Player 2: Work                        Player 2: Shirk
Player 1: Work      $ ww_* $, $ ww_* $                    $ ww_* $, $ sw_* + ws_* - ww_* $
Player 1: Shirk     $ sw_* + ws_* - ww_* $, $ ww_* $      $ ss_* $, $ ss_* $

with $ sw_* > ww_* > ss_* > ws_* $, and $ 2ww_* > sw_* + ws_* $.

Since $ ww_* > sw_* + ws_* - ww_* $, this puts the incentives in the correct place. The Nash equilibrium is now for both players to Work, which is socially optimal.

(In my taxonomy, depending on whether $ sw_* + ws_* - ww_* ≷ ss_* $, this new game is at the point where The Abundant Commons meets either Cake Eating or Studying For a Test. It's not unique in either case, because there are at most three distinct payout values.)

Specialized Labor is more complicated. There are three ways we might decide to apply counterfactual compensation. We could say that the Shirker compensates the Worker for the Worker's costs, either $ ww_* - ws_1 $ or $ ww_* - ws_2 $ depending on who Worked. Or we could say that the Shirker compensates the Worker for what the efficient Worker's costs would have been, $ ww_* - ws_1 $ regardless of who Worked. Or we could say that the efficient worker never owes anything to the inefficient worker; he gets to just say "sorry, I'm not going to pay you for work I could have done more easily". Let's call these approaches "actual-costs", "efficient-costs" and "substandard-uncompensated".

Ellickson doesn't discuss these options, and I ding him another point for that. He just takes the substandard-uncompensated one. Here's what it looks like.

Specialized Labor with counterfactual compensation (substandard-uncompensated)

                    Player 2: Work          Player 2: Shirk
Player 1: Work      $ ww_* $, $ ww_* $      $ ww_* $, $ sw_* + ws_1 - ww_* $
Player 1: Shirk     $ sw_* $, $ ws_2 $      $ ss_* $, $ ss_* $

with $ sw_* > ww_* > ss_* > ws_1 > ws_2 $, and $ 2ww_* < sw_* + ws_1 $.

Player 2 has no incentive to Work, regardless of what Player 1 does, because $ ss_* > ws_2 $ and (unlike in the Prisoner's Dilemma) $ sw_* + ws_1 - ww_* > ww_* $. And given that Player 2 is Shirking, Player 1 has incentive to Work. So again, we've moved the Nash equilibrium to the socially optimal quadrant.
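To check that concretely, here's a self-contained sketch (with the same invented payoff numbers as before) that applies the substandard-uncompensated transfer and re-computes the pure equilibria:

```python
# Sketch: apply counterfactual compensation (debt = ww - ws1, owed by a
# Shirker to the efficient Worker only, i.e. substandard-uncompensated)
# to the illustrative Specialized Labor payoffs, and re-check equilibria.

MOVES = ("Work", "Shirk")

game = {  # invented numbers; Player 1 is the efficient worker
    ("Work", "Work"): (5, 5),
    ("Work", "Shirk"): (2, 10),
    ("Shirk", "Work"): (10, 1),
    ("Shirk", "Shirk"): (3, 3),
}

ww = game[("Work", "Work")][0]
ws1 = game[("Work", "Shirk")][0]
debt = ww - ws1  # counterfactual compensation: 5 - 2 = 3

compensated = dict(game)
p1, p2 = game[("Work", "Shirk")]
compensated[("Work", "Shirk")] = (p1 + debt, p2 - debt)
# Under substandard-uncompensated, nothing is owed in Shirk/Work.

def nash_equilibria(g):
    return [
        (m1, m2)
        for (m1, m2), (q1, q2) in g.items()
        if all(g[(a, m2)][0] <= q1 for a in MOVES)
        and all(g[(m1, b)][1] <= q2 for b in MOVES)
    ]

print(nash_equilibria(game))         # [('Shirk', 'Shirk')]
print(nash_equilibria(compensated))  # [('Work', 'Shirk')], socially optimal
```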

This isn't, like, a mind-shattering result that's going to blow open the field of game theory. But I don't remember seeing it before, and Ellickson doesn't attribute it to anyone else. I'm inclined to give him some credit for it. Even if others have had the insight before - which I expect they have - it seems like he's still doing competent work in a field outside his own. Not amazing work, not particularly difficult work, but competent.

One objection: the inefficient worker gets a better result than the efficient worker. That seems bad to me, because it discourages people from becoming the efficient worker. I don't think this is a big deal, though. For one thing, acquiring skills probably does increase your own payoff; your skills will feed into $ ww_* $, not just your $ ws $. (So it directly increases your payoff in Work/Work, and reduces your debt in Shirk/Work.) Someone else acquiring skills will increase your payoff even more, perhaps, but that's not a big problem. For another thing, such incentives can be handled out-of-game. I do think Ellickson should have acknowledged this issue, and I ding him a point for not doing so. But a brief note would have been fine.

What happens if we apply counterfactual compensation in the other possible ways? The only difference is in the bottom left quadrant, which becomes either $ sw_* + ws_2 - ww_* $, $ ww_* $ (actual-costs) or $ sw_* + ws_1 - ww_* $, $ ww_* + ws_2 - ws_1 $ (efficient-costs). The problem with both of these is that that quadrant might now be a Nash equilibrium. In the first case, Player 1 might prefer that quadrant over Work/Work, depending on $ 2ww_* ≷ sw_* + ws_2 $, and Player 2 will certainly prefer it over Shirk/Shirk. In the second case, Player 1 will certainly prefer that quadrant over Work/Work, and Player 2 might prefer it over Shirk/Shirk, depending on $ ww_* + ws_2 - ws_1 ≷ ss_* $. That's not great, we only want a Nash equilibrium in the socially optimal quadrant.

On the other hand, I note that if $ ws_1 - ws_2 $ is small, then the social cost is low; and if it's large, then (except perhaps with some fairly specific payoff values?) that quadrant isn't a Nash equilibrium. Meanwhile, if payoffs are uncertain - if people might disagree about who the more efficient worker is - then either of the other choices seems more robust. And this is more of an aesthetic judgment, but it feels like the kind of aesthetic judgment that sometimes hints at deeper problems: there's something a bit weird about how substandard-uncompensated is discontinuous. A small change in Player 2's skills leads to a small change in her compensation in each quadrant, until she becomes as skilled as Player 1, at which point there's a large change in the Shirk/Work quadrant.

On the other other hand, a feature of how these games translate to the real world is that players encourage each other to discuss in advance. Someone building unilaterally may not get to claim this debt. So if they disagree about who the efficient worker is, that's unlikely to cause much grief.

What about measures other than counterfactual compensation? Actual compensation ($ ss_* - ws_1 $) doesn't work. If a player expects the other to Shirk, they'd be indifferent to Working; and in a Prisoner's Dilemma, if they expect the other to Work, they might prefer to Work or not depending on $ ww_* ≷ sw_* + ws_1 - ss_* $. (In Specialized Labor, that inequality always resolves as $ < $ which gives the incentives we want.)

Actual restitution ($ sw_* - ss_* $) is sometimes okay in a Prisoner's Dilemma, but if $ ws_* + sw_* < 2ss_* $ then Shirk/Shirk remains a Nash equilibrium; players will only want to Work if they expect the other to also Work. In Specialized Labor it has the problem that players would prefer to Work than to pay restitution, and so Work/Shirk cannot be a Nash equilibrium.

Counterfactual restitution ($ sw_* - ww_* $) has much the same problem in a Prisoner's Dilemma; if $ ws_* + sw_* < ww_* + ss_* $ then Shirk/Shirk is a Nash equilibrium. And in both games, a player who expects the other to Work will be indifferent to Working.

There are other options for payment one might consider; I haven't even looked at all of them of the form "one raw payoff minus another raw payoff". But so far, counterfactual compensation seems like the best option.

(We could even consider values of the debt based on information outside of the original payoff matrix. But Ellickson points out that when deciding how to act in the first place, players will already want to figure out what the payoff matrix looks like. If the debt was based on other information, there'd be a further cost to gather that information.)

While we're here, let's look at the other games implied by Prisoner's Dilemma and Specialized Labor. The Asymmetrical Prisoner's Dilemma (or Synergistic Specialized Labor) has $ ws_1 ≠ ws_2 $ but $ 2ww_* > ws_1 + sw_* $. In this case, counterfactual compensation does exactly what we want it to do, just like in the symmetrical Prisoner's Dilemma; except that substandard-uncompensated is no good, it doesn't give us a Nash equilibrium at all. (Player 1 prefers Shirk/Work to Work/Work, and Work/Shirk to Shirk/Shirk. Player 2 prefers Work/Work to Work/Shirk, and Shirk/Shirk to Shirk/Work.) If Ellickson had considered this game, he'd have had to discuss the possible ways one might apply counterfactual compensation, which would have been good. So I ding him a point for it.

Symmetrical Specialized Labor (or Discordant Prisoner's Dilemma, or Too Many Cooks) has $ ws_1 = ws_2 $ but $ 2ww_* < ws_* + sw_* $. The difficulty here is that there's no way to break the symmetry. Any of the three ways to apply counterfactual compensation will be equivalent, and leave us with two Nash equilibria in the two socially equal quadrants. The "discuss in advance" feature saves us again, I think; players don't need to somehow acausally cooperate to select one to Work and one to Shirk, they can just, like, talk about it. So I think it was basically fine for Ellickson to not consider this game, though it would have been worth a brief note.

How does this work in the Farmer's Dilemma? First we need to clarify exactly what set of games that refers to. In symmetrical games, I think of it as having $ sw_* > ww_* > ws_* > ss_* $; that is, each player would prefer the other to do all the work, or failing that to help; but they'd still rather do it all themselves than for the work not to get done.

I'm going to break symmetry by separating $ ws_1 $ from $ ws_2 $ as before. Without loss of generality, we can specify $ ws_1 > ws_2 $, but I'm not going to decide whether $ ws_2 ≷ ss_* $. It might be that only one player is skilled enough to benefit from Working alone.

So in normal form, the Farmer's Dilemma looks like this:

Farmer's Dilemma

                    Player 2: Work          Player 2: Shirk
Player 1: Work      $ ww_* $, $ ww_* $      $ ws_1 $, $ sw_* $
Player 1: Shirk     $ sw_* $, $ ws_2 $      $ ss_* $, $ ss_* $

with $ sw_* > ww_* > ws_1 > ss_* $, and $ ws_1 > ws_2 $.

Either of the top two quadrants could be socially optimal, depending whether the game is synergistic or discordant (that is, whether $ 2ww_* ≷ sw_* + ws_1 $). Shirk/Work may or may not be a Nash equilibrium, depending whether $ ws_2 ≷ ss_* $. So how does it look with counterfactual compensation? I'll consider the synergy and discord cases separately.

Synergistic Farmer's Dilemma with counterfactual compensation (substandard-uncompensated)

                    Player 2: Work          Player 2: Shirk
Player 1: Work      $ ww_* $, $ ww_* $      $ ww_* $, $ sw_* + ws_1 - ww_* $
Player 1: Shirk     $ sw_* $, $ ws_2 $      $ ss_* $, $ ss_* $

with $ sw_* > ww_* > ws_1 > ss_* $, $ ws_1 > ws_2 $, and $ 2ww_* > sw_* + ws_1 $.

Oh dear. Substandard-uncompensated compensation is clearly not going to work; Shirk/Work might still be a Nash equilibrium. In Specialized Labor it was fine that the efficient Worker would prefer the inefficient Worker to do all the work, because the inefficient worker would say "nuts to that". In a Farmer's Dilemma she might continue to Work, which we don't want. Even if we specified $ ws_2 < ss_* $, we'd simply have no Nash equilibrium; like in the Asymmetrical Prisoner's Dilemma, one player would always get a better result by changing their move.

Fortunately, either of the others seems fine. The payoffs for these are the same as in Specialized Labor, but their values have changed relative to adjacent quadrants. Actual-costs gives us $ sw_* + ws_2 − ww_* $, $ ww_* $ in that quadrant, which isn't a Nash equilibrium because $ ww_* > sw_* + ws_2 − ww_* $. (Compared to this quadrant, Player 1 would rather Work and Player 2 would rather Shirk.) And efficient-costs again gives us $ sw_* + ws_1 − ww_* $, $ ww_* + ws_2 - ws_1 $, which isn't a Nash equilibrium because $ ww_* > sw_* + ws_1 − ww_* $. (Player 1 would still rather Work. Player 2 may or may not prefer to Shirk; if $ ws_2 > ss_* $ she'll certainly prefer this quadrant, might prefer it even if not, but it's not a problem either way.)

What about the discordant case? If $ ws_2 < ss_* $ we actually already have the desired result. The only Nash equilibrium is Work/Shirk which is socially optimal. But as discussed above, it's a crap result for Player 1, and my sense is that the "no incentive to become the efficient worker" problem now becomes a lot more of an issue. Let's see what happens with counterfactual compensation.

Discordant Farmer's Dilemma with counterfactual compensation (substandard-uncompensated)

                    Player 2: Work          Player 2: Shirk
Player 1: Work      $ ww_* $, $ ww_* $      $ ww_* $, $ sw_* + ws_1 - ww_* $
Player 1: Shirk     $ sw_* $, $ ws_2 $      $ ss_* $, $ ss_* $

with $ sw_* > ww_* > ws_1 > ss_* $, $ ws_1 > ws_2 $, and $ 2ww_* < sw_* + ws_1 $.

Again, substandard-uncompensated doesn't really help; Shirk/Work will be a Nash equilibrium iff it was one before. But at least Player 1 gets a less-bad result from Work/Shirk. (Player 2 still does better than him.)

Actual-costs might also be a Nash equilibrium in that quadrant, if $ ww_* < sw_* + ws_2 − ww_* $. And so might efficient-costs, if $ ww_* + ws_2 - ws_1 > ss_* $. (Again, this always holds if $ ws_2 > ss_* $, so looking only at the Nash equilibria, this is strictly worse than having no compensation.)

So this is unfortunate. We can't reliably remove that Nash equilibrium with counterfactual compensation. Depending how we apply it, we might even make it an equilibrium when it wasn't before.

(Actual restitution also works in the synergistic game, but moves the Nash equilibrium to Work/Work in the discordant game. Counterfactual restitution makes players indifferent to Working if they expect their partner to Work, so in practice I guess Work/Work is the Nash equilibrium there, too. And actual compensation would be negative, which is silly.)

Summing up, counterfactual compensation:

  • Gives people good incentives in Prisoner's Dilemma. In an Asymmetrical Prisoner's Dilemma, substandard-uncompensated doesn't work.
  • Gives people good incentives in Specialized Labor, using substandard-uncompensated. Mostly-good incentives using the other implementations.
  • Gives people good incentives in the Synergistic Farmer's Dilemma, except that substandard-uncompensated only works sometimes.
  • Maybe kinda sorta helps a bit in the Discordant Farmer's Dilemma. Maybe not.

So that's not amazing. I do think the Discordant Farmer's Dilemma is just fundamentally, in technical terms, a real bastard of a game. But even in the synergistic variant, the way we calibrate it to get the best incentives is different from the way we calibrate it for the best incentives in Specialized Labor.

So I appreciate Ellickson's contribution, and I think it's a real one. But it's not as much as we might have hoped. I think he had a blind spot about the Farmer's Dilemma, and his tools don't really work against it. He also would have done well to consider counterfactual compensation schemes other than substandard-uncompensated.

With counterfactual compensation in mind, Ellickson proposes a variant Iterated Prisoner's Dilemma tournament, and a strategy for it that he calls "Even-Up". Even-Up takes advantage of features of the tournament that make it more realistic, and is modelled on real-world behaviours that he describes elsewhere in the book.

The tournament has rounds of both Prisoner's Dilemma and Specialized Labor, and payoffs for them can vary considerably. He suggests that perhaps one in five rounds might have each payoff increased twentyfold. Additionally, in between rounds, players can unilaterally choose to make a side payment to their partner.

To apply the Even-Up strategy, a player would use an internal balance to keep account of standing with their partner. Whenever counterfactual compensation would be owed, according to the analysis above, they'd adjust the balance by its value. (Ellickson doesn't specify, but presumably they'd also adjust whenever their partner makes a payment to them.) Whenever the balance was close to zero, they'd play the socially optimal strategy. If they were in debt, they'd make a side payment. And if they were in credit, they'd "exercise self-help": Shirk when they'd otherwise Work.5 (But only if the debt owed was more than half the value of the compensation, so that the balance would become closer to zero.)
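Here's a rough sketch of that bookkeeping, to make the strategy concrete; the class structure, method names, and thresholds are my own reconstruction, not Ellickson's:

```python
# A sketch of the Even-Up strategy's accounting (my reconstruction).
# balance > 0 means the partner owes us; balance < 0 means we owe them.

class EvenUp:
    def __init__(self):
        self.balance = 0.0

    def choose_move(self, socially_optimal_move, compensation_value):
        # "Self-help": Shirk instead of Working, but only when we're owed
        # more than half this round's compensation, so that Shirking moves
        # the balance closer to zero.
        if socially_optimal_move == "Work" and self.balance > compensation_value / 2:
            return "Shirk"
        return socially_optimal_move

    def record_round(self, owed_to_me, owed_by_me):
        # Adjust the balance by any counterfactual compensation owed.
        self.balance += owed_to_me - owed_by_me

    def side_payment(self):
        # Between rounds: if we're in debt, pay it off with a side payment.
        payment = max(0.0, -self.balance)
        self.balance += payment
        return payment

player = EvenUp()
player.record_round(owed_to_me=3.0, owed_by_me=0.0)  # partner Shirked on us
print(player.choose_move("Work", compensation_value=3.0))  # Shirk (self-help)
```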

There are three parameters I might be inclined to play with. One: which variant of counterfactual compensation should we use? (Ellickson's wording doesn't make it clear which he intends. Above he took substandard-uncompensated for granted, but his wording here sort of hints ambiguously at efficient-costs. He doesn't note or justify the change if there is one.) As noted, substandard-uncompensated gives the right incentives where the other options sometimes don't. Still, I wouldn't be surprised if the other options sometimes helped to avoid a feud (a loop of mutual defections or alternating defections).

Related, two: suppose we do use substandard-uncompensated. When in credit, and facing a Specialized Labor game as the efficient worker, should we Shirk? (Since we'd never Work as the inefficient worker, this is the only time the choice of counterfactual compensation variants is relevant.) Regardless of the other player's move, no compensation is owed. So Shirking will destroy communal resources, but not bring players' standings back in balance. On the other hand, it does stop us from extending more credit that may never be paid back. It may be worth having a higher threshold for this than for Shirking in a Prisoner's Dilemma, but I'd say never Shirking in this case would be a mistake.

And three: is "brings the balance closer to zero" the optimal condition to use for when to exercise self-help? If we exercise it more readily, others may be more inclined to cooperate with us in the first place, but that effect is probably minor - there's only so much we can be exploited for, over the whole game. On the other hand, we're also destroying more total payoff, per round. It may be worth only exercising self-help if our credit is more than, say, three-quarters the value of counterfactual compensation.

(If we're looking at modifications to the tournament: as well as tweaking the probability distribution of the various possible payoff matrices, I'd be interested to see what changes if you add a small or large transaction fee to the side payments. Naturally I'd also like to see what happens if you add the possibility of Farmer's Dilemmas, but then Even-Up needs to be altered to account for it. Of other games in the genre, a Discordant Abundant Commons ($ ws_1 > ww_* > \{ sw_*, ss_* \} $, $ 2ww_* < ws_1 + sw_* $, and I'm not sure what the constraints on $ ws_2 $ should be) would also be a good addition. Maybe an asymmetrical Anti-Coordination variant, with a single socially optimal outcome so as not to confuse SociallyOptimalOutcomeBot. The others don't seem like they'd add much; they all have $ ww_* $ as the highest payoff, so their socially optimal outcomes are also individually optimal. That doesn't mean there's never a reason to play something other than Work, but the reasons mostly boil down to "I'm willing to hurt myself to threaten or punish you" and you already get that from the Farmer's Dilemma. So I'm not convinced the other games add much strategic depth, and they do add noise.)
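
(For concreteness, one payoff assignment satisfying those Discordant Abundant Commons constraints - the numbers are mine, just to show the constraints are satisfiable - is $ ws_1 = 5 $, $ ww_* = 3 $, $ sw_* = 2 $, $ ss_* = 1 $: then $ ws_1 > ww_* > \{ sw_*, ss_* \} $ and $ 2ww_* = 6 < 7 = ws_1 + sw_* $, with $ ws_2 $ still free.)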

Ellickson predicts that Even-Up would do well in this tournament, and I agree. It's inexploitable, rewards its partners for cooperating, forgives past transgressions, and plays well against itself. I'd be concerned about what happens if it plays against some similar strategy with different ideas of fairness - might you get into a situation where only one of them is ever satisfied at a time, leading to alternating defections? More generally I just don't trust either myself or Ellickson to have especially reliable intuitions about this.

Ellickson also says that if Even-Up turns out not to be evolutionarily stable - that is, if a society of Even-Up players can be exploited by other strategies, or wouldn't be able to enter groups currently dominated by other strategies - his hypothesis would no longer be credible. I think it would be stable, but even if not, I'd be somewhat forgiving. I'd want to know why not, before deciding how it reflects on the hypothesis.

  1. Strictly speaking: if you Defect, that always harms your opponent and benefits yourself, relative to you Cooperating. And if your opponent Cooperates, this will always be a social cost as well, harming your opponent more than it benefits you. But if your opponent is also Defecting, then the structure of a Prisoner's Dilemma is agnostic on whether your defection is a social cost; it might benefit you more than it harms your opponent. 

  2. I'm not actually sure we can assume that, but that question is out of scope. 

  3. This notation is kind of experimental on my part. Ellickson instead uses symbols $ A, B, C, D, E $ in descending order, but that makes it hard to remember which goes where in the grid. And when I look at the Farmer's Dilemma later on, the ordering will be lost, making it even more confusing. 

  4. I tossed a coin to choose how to assign pronouns. 

  5. Incidentally, in a typical IPD tournament, with only Prisoner's Dilemmas and no variance in payoffs, Even-Up plays identically to Tit-for-Tat. An Even-Up player would never be in debt in such a tournament, since they'd never Shirk except to correct a balance, and doing so would either leave the balance unchanged (if their partner also Shirked) or bring it to exactly zero.



Discuss

University of Tübingen, Master's Machine Learning

14 ноября, 2020 - 20:15
Published on November 14, 2020 3:50 PM GMT

The purpose of this post is to give an overview of the Machine Learning (ML) Master’s program at the University of Tübingen. It should function as guidance for people who are interested in studying ML and weigh the pros and cons.

This article is part of a series of articles on different European master's programs related to artificial intelligence and machine learning.

General Overview

The ML program in Tübingen has 120 credits, 30 of which are assigned for the thesis. The program has three mandatory courses, Deep Learning, Statistical Machine Learning, and Probabilistic Machine Learning. All other courses can be chosen more or less freely with some small restrictions, e.g. they have to be in the broad area of ML. The full range of lectures can be found in the module handbook, though not all of them exist yet. Since 2019 was the first year of the Master, I expect these gaps to be closed in the next two years. In the winter semester 20/21 there are already many new courses.

Regarding prerequisites: There are some specifications on the website but they can be a bit vague. According to the creator of the program, who also oversees admissions, the absolutely necessary criteria are a sufficient understanding of proof-based math (e.g. through a math or Computer Science (CS) Bachelor’s degree) and a basic understanding of algorithms and other CS concepts. To give you a prior probability for a successful application we can only look at the first iteration of the degree, where around 150 people from all around the world applied, 60 were deemed to be sufficiently qualified, and around 40-60 actually started the Master. The program probably could have handled more students but the creator decided that applicants need to pass a certain level of skill. This acceptance rate of one third does not seem very low, but I expect it to get more competitive in the future. Last time the program was only announced three months before the application deadline and already 150 people applied. Since then the University of Tübingen/the MPI has had more exposure in the media, and the official ML YouTube channel hit 800 subscribers in its first week, so I would expect the program to become more competitive. An alternative in case of rejection is to apply for the CS Master’s and transfer to the ML Master’s later. However, if too many people use this loophole it might be closed, or ML students might be prioritized in contested lectures.

I think the general self-understanding of the program is one of excellence, i.e. it wants to produce people who have a deep understanding of the current ML landscape. As far as I can tell Tübingen seems to put a lot of emphasis on the theoretical understanding of ML (all courses have practical exercises too) but it’s hard to judge without an explicit comparison. The second emphasis in Tübingen is the social component of ML. There are lots of seminars discussing the ethics of ML or the intersection between ML and other fields such as medicine. From my personal experience, I would estimate that around 75 percent of the lectures that I attended fulfill the idea of excellence, i.e. they teach a mixture of old-but-relevant and new material, and require a lot of effort but yield great understanding. Unfortunately, I had some courses that were rather shallow, didn’t update their content even though new research was available, or were clearly too easy. Since most lectures are in their first iterations, five new professorships in the realm of ML have been filled in the last two years, and the university is still hiring more, I expect the average quality of the lectures to rise further.

The student population that I know so far (only the first generation, so small sample size) is roughly 50 percent German and 50 percent international but I expect them to become more international in the future. From my perspective they are on average rather high-performing, ambitious students confirming the self-understanding of excellence.

The core lectures of the course are Deep Learning, Probabilistic Machine Learning, and Statistical Machine Learning. If you want to peek into the lectures, you can find them on YouTube. There are around 20 further ML-related lectures including Mathematics of ML, Data Literacy, Time Series, Self-Driving Cars, Neural Data Analysis, and Efficient ML in Hardware - just to name a few (for a full list, look at the module handbook). Additionally, you can choose between around 50 different general CS lectures to broaden your perspectives.

Since only 24 credit points of 120 are mandatory lectures the program allows for individual specializations. Currently, one can specialize in applied or theoretical ML but it is impossible to focus exclusively on, for example, Reinforcement Learning. Given the impressive amount of new ML-related professors and groups, I expect that specialization will be easier in the future.

Regarding Corona, the University of Tübingen has adapted quite well and all lectures are now online. If you want a sample of the average quality of lectures, I would recommend looking at the YouTube channel.

The grading scheme in Tübingen is similar to other programs in Germany, i.e. it is rather hard but possible to achieve the best grade of 1.0. It is also realistic that you fail a class if you have not prepared for it and there is little grade inflation compared to e.g. the US.

If you already want to publish at conferences during your Master’s program, most supervisors will support you if you are willing to put in the effort. My supervisor, for example, told me that the aim of my Master’s thesis was to submit it to ICML if the results were sufficiently good (I started with a CS master and switched to ML if you were wondering about the timeline). However, whether you want to go through this effort and try to publish is obviously up to you and your supervisor, I can only say that most potential senior researchers would be up for it and willing to support the effort.

Research Directions in Tübingen

The amount of ML research that is done in Tübingen is huge. There is Deep Learning, Probabilistic ML, Statistical ML, Computer Vision, Robotics, some Reinforcement Learning (RL), Self-Driving Cars, Robustness and Adversarial examples, some Natural Language Processing (NLP), Fairness, and Ethics in AI (from a technical and humanities perspective), ML in Climate Science, a very large Neuroscience section, Causality and much more. I think the fields that are currently a bit underrepresented are NLP, RL, and AI-safety. Some years ago, Tübingen didn’t have a large focus on Deep Learning but they have upgraded and adapted since then and I would expect them to be a global tier 2 when it comes to Deep Learning.

Some of the fields that Tübingen is internationally known for include Probabilistic Numerics (Philipp Hennig), Empirical Inference with a focus on causality and Kernel methods (Bernhard Schölkopf), Robustness and Optimization (Matthias Hein), Self-driving cars and Computer Vision (Andreas Geiger), and the neuroscience groups (Bethge lab, Peter Dayan). If you are interested in the intersection between ML and neuroscience, I would suggest doing the ML master. If your focus lies with the foundations of neuroscience, there are other master programs in Tübingen that might be a better fit.

Personally, I first did a bit of everything for a year and then specialized in the overlap of probability theory and Deep Learning by working on Bayesian Neural Networks. I think for most subfields in ML it can be said that somebody in Tübingen is working on them and if you are interested in specializing very early I would recommend clicking through the links below.

If you want to check out who does ML research in Tübingen, have a look at the research groups, people’s page of the IMPRS, the website of the MPI-IS, and the ML in science cluster of excellence.

Options outside of University

Tübingen is right at the heart of the cyber-valley initiative. This essentially just means the province and local industry come together to boost the ML competence in the region. They fund new professorships, research groups, buildings, etc. In short, Tübingen spends a lot of money on ML. Being an Excellence Cluster is not just a label but comes with a 50 Million Euro grant over 7 years that started in 2019. Its aim is to attract global talent at the intersection of ML and other sciences (e.g. climate science, ML for social good or ML in medicine). The benefits of such a cluster are indirect but noticeable. Many of the cluster people offer seminars which means that you can discuss the implications of ML with domain experts (e.g. a philosopher or geologist) and gain new perspectives. Additionally, the cluster organizes workshops and small conferences that are usually free for students where you can broaden your perspectives.

If you are leaning more to the research side, you can try to become a student assistant, write your thesis or do research projects with the university or at the Max Planck Institute for Intelligent Systems (MPI-IS) and thereby have direct access to top researchers in their respective field (this is not exaggerated, just look at the latest news).

If you care more about industry experience there are also lots of options. You can do internships or collaborations with large companies like Bosch or IBM. Bosch and Amazon are both building an AI campus for 700 and 200 researchers respectively that should be finished in the next couple of years. Even though their buildings are not built yet, they already do collaborations with the university.

Some personal notes

Even though this sounds a lot like a promotional piece, I honestly think that Tübingen is the place to be for ML, at least in continental Europe. However, if you want to do research in the fields of NLP, RL, or AI safety, other universities might be a better fit, though I am not sure there are any Master’s programs with a strong focus on AI safety in Europe, or even globally.

Regarding the courses: I have taken most of the ones that are already available and I think the majority of lectures and seminars are good, with some exceptions. To figure out which ones you should avoid, I would recommend asking more experienced students. The vast number of options is definitely a benefit, especially since more are likely to be added in the future.

Tübingen as a town might not be for everyone. At the end of the day, it is not a large city but a town of 90k inhabitants (35k of which are students) that has fewer options (nightlife or food diversity) than a larger city could provide. However, the university provides a lot of options for physical activity and there are other ways to spend your free time. If you really want a “big city feeling” though, you will likely not find it in Tübingen.

From an Effective Altruism (EA) perspective, Tübingen is pretty nice. There is an active EA chapter, we are currently founding an AI-safety reading group, and there is a small LessWrong chapter. There are also many other university groups and NGOs, like a debating club or Global Marshall Plan, that might be interesting from an EA perspective.

Decision Guide

You should consider the ML Master’s program in Tübingen if you

  • Want to have a 2-year/120 ECTS tuition-free master
  • Want theoretical and practical courses in your program
  • Want the option to collaborate with industry (e.g. Amazon, Bosch) or academia (e.g. MPI-IS, the Cluster of Excellence)
  • Want the option to explore the intersection between ML and other sciences (e.g. ethics in AI or ML in Medicine)

You should not choose Tübingen if you

  • Want to have a big city feeling to your place of study
  • Want to focus primarily on topics of Natural Language Processing or Reinforcement Learning

If you have any further questions about the town or program, want to get advice on how to improve your chances of getting in, or just want to leave some feedback don’t hesitate to contact me via the channels listed on my blog. If you want to know more about the research I do you can find short summary posts on my blog.



Discuss

University of Edinburgh, Master's Artificial Intelligence

14 ноября, 2020 - 20:14
Published on November 14, 2020 3:49 PM GMT

This article is part of a series of articles on different European master's programs related to artificial intelligence and machine learning.

Basic data on the degree
  • Duration: 1 year (or 2 years part-time)
  • Cost: ~15.000€ p.a. for EU students (before Brexit)
  • 90 ECTS
Purpose of this article 

The main focus of this text is to help you decide whether studying an MSc AI degree at the University of Edinburgh (AI@Ed) is an option for you. It is not supposed to give detailed technical information like links to admission pages or tips for how to find a room in Edinburgh. You will need this kind of information once you decide to apply to UoE. I hope that I can guide your decision-making process with this article.

This is also not an advertisement, I have no affiliation with the University of Edinburgh. It's just an honest and objective opinion on the degree, the university, and the city.

Text is by: Marco Kinkel, feel free to message me on hi@marco-kinkel.de

The degree

Courses and Areas

Being able to choose from a variety of topics was important to me. AI research is manifold and includes grasping a notion of intelligence from different viewpoints. I was happy to choose from interdisciplinary courses on Cognitive Science, Neuroscience, Robotics, Bioinformatics, and of course many core ML courses. The university is especially known for its research and lectures on Bayesian ML, Computer Vision, Natural Language Processing, and Biomedical Sciences. Here is a list of the most important courses (which obviously changes over the years). The bold courses can be considered the standard of the degree; you will find the majority of your friends in these. However, depending on your interests, it can be worth thinking outside the box and taking a less popular course. It can expand your horizon, and you will get better teaching because those courses are less crowded (e.g. 10 instead of 200 students).

  • ML
    • Introductory Applied Machine Learning
    • Machine Learning and Pattern Recognition
    • Machine Learning Practical
    • Reinforcement Learning
  • Vision
    • Image and Vision Computing
  • Language
    • Accelerated Natural Language Processing
    • Automatic Speech Recognition
    • Natural Language Understanding, Generation, and Machine Translation
  • Data Science
    • Text Technologies for Data Science
    • Data Mining and Exploration
  • Design and HCI
    • Case Studies in Design Informatics
    • Human-Computer-Interaction
    • The Human Factor: Working with Users
  • Bio and Neuroscience
    • Computational Cognitive Neuroscience
    • Bioinformatics
  • Misc
    • Probabilistic Modelling and Reasoning
    • Computational Cognitive Science
    • Natural Computing
    • Robotics: Science and Systems
    • Algorithmic Game Theory and its Applications
  • Introductory Informatics Courses (for students from other fields)
    • Introduction to Practical Programming with Objects
    • Computer Programming for Speech and Language Processing
    • Programming Skills
  • Additionally, you can choose one or two courses from other schools, including courses like
    • Robotics, AI and the Law
    • The Computational Mind
    • Ethics of Artificial Intelligence

You can find descriptions of these courses in the Degree Regulations and Programme of Study (DRPS) for the academic years 2019/2020 and 2020/2021. Unfortunately, the current Covid-19 situation results in a cancellation of many courses for the academic year starting in September 2020, as you can see in this list.

As you can see, AI@Ed does not force you into a particular area of AI. Many courses exist on Machine Learning and Language Processing, but you can always choose to flavour your degree with diverse (but, due to the time limit of one year, slightly superficial) knowledge in AI-related fields like robotics, neuroscience, language processing, philosophy of mind, and computational psychology. The variety of courses has its drawbacks: it is difficult to choose only 6 to 8 courses within the very short time of one year. This is a general downside of the degree to which I will come back later. From an EA standpoint it is important to note that while some courses mention AI ethics or AI sustainability, no course specifically focuses on AI alignment or safety.

In addition to the courses you choose yourself, you will have two mandatory courses that are supposed to prepare you for the final thesis: Informatics Research Review (IRR) introduces citation techniques and is mostly just additional practice in academic writing. Informatics Project Proposal (IPP), on the other hand, is actually useful, because you collect literature and write a proposal document for the topic of your dissertation.

For people with a less technical background, there are courses to enhance your programming skills. These are useful but not necessary, because most courses don't require a lot of programming experience. My friends with backgrounds in Physics, Neuroscience or Statistics easily succeeded without these additional courses. If you have used a Jupyter Notebook before and used Pandas, R or Matlab to inspect some experimental data, you’re all set. You don’t need any knowledge of subject-specific Python libraries, as those will be introduced in the respective courses. A good introduction to ML-related Python libraries are the labs of the lecture Introductory Applied Machine Learning.

Generally, the amount of code you will encounter depends a lot on your courses. In most courses the lectures impart theoretical knowledge which is applied in their tutorials and labs. If you’re interested in coding and practicing ML model design, implementation and parameter tuning, you should take Machine Learning Practical. Other courses like Game Theory are purely theoretical.

Influential Researchers at UoE

Lectures, Tutorials and Labs

You will have lectures with 50-150 other students from different Informatics MSc degrees (Informatics, AI, Data Science, Cognitive Science, ...). Most main lectures, with 150+ students, are not very individual, but the tutorials and labs are divided into small groups. In tutorials, you typically discuss exercises that go somewhat deeper into the lecture material. The tutors are mostly Ph.D. students who can be more or less motivated or talented for a tutoring job. In labs, you complete assignments and other exercises with the help of (often not very helpful) instructors and your fellow students. Attendance at lectures and tutorials is only mandatory for non-EU students due to visa regulations (note that this might change after Brexit).

The lectures vary a lot in quality. In the UK, I think, professors are not obligated to hold lectures, which increases their interest and hence the lecture quality. But in my personal experience, one in five lectures is still so bad that it may be better to just read the slides (but this is probably the case at every university).

You will encounter some 'inverted classroom' lectures, where the material is provided as videos and the actual lectures are QA sessions. There are very few courses where you do actual research (that could be published). In Machine Learning Practical where you develop and evaluate your own ML models, you have a lot of freedom, which comes closest to actual ML research.

Generally, all lectures are recorded, so you can re-watch them as often as you like. This is very useful in the coursework period (middle of semester, see below), where you will hardly have the time to attend all lectures. 

Most courses contain two to four hours of lecture plus one or two hours of tutorials and labs per week. A typical week in my first semester looked like this:

You can see that it is quite full if you attend all tutorials and labs (which you should in the beginning to keep up the pace). The material is conveyed very quickly, so being sick for a week is not a rewarding experience. Between lectures, you will find yourself sitting in Appleton Tower (see below) doing assignments and catching up on lecture materials. This leads to the second point of criticism: the work-life balance is rather bad. Since every lecture has mid-term assignments worth between 10 and 50% of the final mark, it's difficult to keep up with the lecture material during the semester. Hence, you will not have a lot of fun in the weeks of the exam period. I cannot compare the work-life balance in AI@Ed to other MSc degrees from my own experience, but I've heard of degrees with less stressful semesters and exam preparations.

Exams and Dissertation

The UK has its own marking scheme, which seems pretty self-explanatory at first glance:

Mark (%)   Grade   Description
90-100     A1      An excellent performance, satisfactory for a distinction
80-89      A2      An excellent performance, satisfactory for a distinction
70-79      A3      An excellent performance, satisfactory for a distinction
60-69      B       A very good performance
50-59      C       A good performance, satisfactory for a master's degree

Everything below that is a fail. However, a mark higher than 75% is only given for extraordinary performance, i.e. if the quality of your work is publishable. Therefore, many exams contain open questions worth 25% to keep students from getting too high marks. The general range of grades is therefore somewhere between 60 and 75 percent. This can be confusing for recruiters outside of the UK, who might think that a 73% degree is rather bad. Note that the 'excellency' range (25 points) is as large as the rest of the passing range, so whether your work is deemed excellent, and if so, how excellent, is rather unpredictable. From our experience, excellency sometimes just means a huge amount of extra work, but this is again not guaranteed to give you >75%. This critique applies to all universities in the UK, not only UoE.

Usually, you will have one coursework per course within the semester. It consists of applied exercises on the lecture material, but it can go way beyond. Completing it will take a considerable amount of time, which is stressful since you have a coursework (CW) for every course. Your effort should depend on the weight of the CW, which is usually between 10% and 50% of the final course mark. However, the CW is marked by many different tutors in a more or less unmoderated process, so you can have frustratingly bad luck with your marker.

In contrast, the dissertation process is organized very well. You can propose a thesis project topic yourself or you can choose from a huge list of offered projects. The selection process is not interesting here, but be assured that you will definitely find an exciting project in your field of interest, some even in collaboration with Amazon (although it's hard to get them). You will start with the dissertation project after all exams and lectures are completed (mid May) and finish after 3 months (mid August). This is a very short time for a dissertation, but most of the supervisors are aware of that and hence define the topics narrowly. I don't know of anyone who published a research article from their dissertation, but this should be possible. However, bear in mind that you only have 3 months. This is a downside for people aiming for a PhD after their degree. But don’t despair: many MSc thesis supervisors will offer you a PhD position if you did well in your thesis.

Here is a public archive of outstanding dissertations from 2019, to get a glimpse of the variety.

One-year degree

Although it is compelling to do a one-year MSc degree to quickly proceed to the next level, this short time also has its disadvantages:

  • No contact with master's students in higher years
  • You have to think about the thesis (self-proposed) and a Ph.D. (application deadlines) very early on
  • You can take only a few courses
  • You don't have a lot of free time to explore the city and country, because the dissertation is written during the holidays

I want to address the problem of specialization again: If you already know which field to specialize in, you're fine with a one-year degree. If, on the other hand, your aspiration is to get a broad interdisciplinary knowledge about AI, that's perfect too! But you cannot get both in one year. Say you spend your 6-8 courses on Introductory Applied Machine Learning, Computer Vision, Natural Language Processing, Computational Cognitive Science, Robotics, Computational Neuroscience, Data Mining, and Game Theory. Then you have had one course in each of those fields, which is great for a broad overview but rather bad if you want to call yourself a professional afterward. I would advise against the one-year degree if you want both to explore the diverse AI-related research areas and to specialize in some sub-field.

University and organization

Admission

As for most AI programs nowadays, it's very hard to get in. First of all, you need two reference letters and good grades. According to this website, the offer rate is about 14%. This coincides with my experience of having exceptionally smart and diligent fellow students.

Buildings, working areas

UoE's lecture halls are very modern and large. If you don't take unusual courses, they are all within a 5-minute walk of each other, in the south of the city. A nice park (the Meadows) is nearby, where you can spend some time in the sun (if it's out).

As a busy student, you will spend most of your spare time studying in Appleton Tower, a modern 9-floor building dedicated to informatics students. It contains lecture halls, large lab spaces with PCs, and seminar rooms which can be used for studying in groups or individually. In exam times you can find people basically living there, which is possible thanks to the kitchen areas. When you see the modern interior equipment in the working areas, labs, and lecture halls, you know where the student fees go.

While Appleton Tower is specifically for School of Informatics students, the main library is open for everyone and offers additional workplaces. However, in the exam period, you will have difficulties finding a spot there.

Every big building, like Appleton Tower, David Hume Tower, and the Library, has a small cafe on the ground floor. Here you can get hot beverages, snacks, and at lunchtime even a small selection of hot food. I will address the food problem in a second.

Organisation, ITO, Student Reps

Officially, the university is very open for communication and there are many channels through which to approach it. We have very friendly staff at the Informatics Teaching Organisation (ITO), who are responsible for all official student-to-university issues (exams, lecture organization, tutorial group assignment, ...). The ITO together with the student representatives works very well for organizational issues. However, it is very difficult to get in touch with the lecturers, the researchers, and their departments. Bad decisions in the university's upper management led to a massive influx of students in the past years, which overwhelmed the staff and led to long chains of communication. The high-profile researchers are flooded with requests for supervision and have to cut their research to cope with organizational tasks. If you are really interested in working with one specific researcher, you will eventually get in contact with them, but it's difficult to get a broad overview of the research at UoE because you can't just walk into the departments and have a chat. You cannot even get close to the research departments, because they are located in a building you can only enter with an appointment. This also frustrates the researchers at UoE, which is why they participate in strikes.

Strikes

During my one year in Edinburgh, we had two strikes organized by the University and Colleges Union (UCU). The reasons for the strikes are manifold. One of them is the massive increase in student numbers (which increases the UoE turnover) together with decreasing spending on staff and organization. You can find more reasons here.

Many lecturers and academic employees participated, and even students joined in solidarity, which led to buildings being closed and some lectures being canceled. The demands have not been met since, so additional strikes can be expected in the next few years.

Clubs and Activities

A very curious component of university life in the UK is clubs. You will find a club for every conceivable hobby or interest (Harry Potter Club, Skydiving Club, Atheism and Humanism Club, Beneficial Artificial Intelligence Club, etc.). Clubs always welcome new members and are a great opportunity to try new activities. However, the program is so stressful that you don't have much time for activities anyway. Most people's free-time activities were restricted to going to the gym. This of course depends on how ambitious you personally are; there are rumors of people who actually have time for other extracurricular activities.

Edinburgh

Food

The university, unfortunately, has no central canteen with cheap food. There is a large selection of wrap, soup, and sandwich places in the area, and the cafés in the university buildings also sell some food, so you will not starve, but lunch becomes rather expensive over time (e.g. £5 for a wrap). If you fancy a nice hummus falafel wrap, I recommend the Instagram account that writes in-depth reviews of each of them in Edinburgh. It makes sense to bring your own food and use the microwaves in the university buildings.

Accommodation

The university dorms are usually more expensive than private housing and you can only apply to them after you have received an unconditional offer (which happens rather late), so I would recommend looking for private shared flats on SpareRoom. Rent is high in Edinburgh; you can expect to pay £450 to £650 per month for a room.

Edinburgh

Edinburgh is a beautiful city in a beautiful country. Although it has half a million inhabitants, it feels like a small town if you avoid the tourist spots. This is easy as a student, because the university area is south of the tourist center. If you live close to the uni, you will not need any public transport, ever. Uni, pubs, and supermarkets are all within walking distance.

The city also has a lot to offer, with many (over-priced) attractions such as the castle, but also beautiful and free places such as Holyrood Park. The city offers lots of pubs, nice places to eat, and cultural activities. You can easily avoid the tourism areas (except in August with the Fringe Festival).

The winters in Edinburgh are cold, windy, rainy and dark, and depending on your accommodation, going inside doesn't help much. So prepare for that by bringing warm clothes and buying vitamin D supplements and a SAD lamp or light box. However, the summer is beautiful and if you have the time you can swim in the sea, go hiking or just roam around in the green parks.

Summary

AI@UoE considers itself an 'elite' program in Europe. Considering the acceptance rate and the intelligence and diligence of the students, this is definitely true. However, whether you receive an 'elite' education depends on your course choices and a bit of luck. Some courses and lecturers are more challenging than others, which often leads to a better learning outcome, but a worse work-life balance. I'm not sure where the pressure comes from, but some lecturers feel the urge to compete with other elite universities when it comes to course content and speed ("We cannot cut topics because we must compete with a similar course at the University of Oxford"). This can be frustrating, but again, you will learn a lot more in these challenging courses.

Many high-profile researchers work at UoE, and this degree can be a great starting point for a subsequent Ph.D. with one of them. However, the general direction of the university described in the Strikes section increasingly demotivates the staff and leads to less 1:1 communication for students, including increasing difficulty in getting in touch with those high-profile researchers. This will only be a problem if you are planning to build a network and connect with the local research staff. If you plan to just get your degree and move on, you will probably not be impacted by this issue.

AI@UoE offers a wide range of lectures with a good portion of most AI-related topics. You can get an interdisciplinary degree (including philosophy, neurobiology, psychology, cognitive science, robotics) or focus on core ML ideas. However, you will not find many courses covering EA-related topics like AI alignment and the social impact of AI. The university is very modern and provides nice spaces to study and to collaborate. Finally, Edinburgh is the perfect mixture of a large city with lots of activities and a small town where you can live and study without being distracted by tourists.

You should do the AI@UoE degree, if...
  • ... you know which fields you want to specialize in
  • ... you want to do a quick 1-year degree
  • ... you like Edinburgh and Scotland
You should consider not doing AI@UoE, if...
  • ... you want to gain broad knowledge about AI and a specialization in a sub-field
  • ... you desire 1:1 communication with researchers
  • ... you want unimpeded progress without possibly being affected by strikes
  • ... you are more interested in the societal impact or possible beneficial applications of AI than in technical aspects


Discuss

European Master's Programs in Machine Learning, Artificial Intelligence, and related fields

14 ноября, 2020 - 19:43
Published on November 14, 2020 3:51 PM GMT

While there is no shortage of detailed information on master’s degrees, we think that there is a lack of perspectives from students who have actually completed the program and experienced the university.

Therefore we decided to write articles on multiple European master's programs on Machine Learning, Artificial Intelligence, and related fields. The texts are supposed to give prospective students an honest evaluation of the teaching, research, industry opportunities, and city life of a specific program. Since many of the authors are Effective Altruists and interested in AI safety, a respective section is often included as well.

It may not always be obvious, but there are many English-language degrees across Europe. Compared to America, these can be more affordable, offer more favorable visa terms, and provide a higher quality of life. We hope that you will consider bringing your talents to Europe. These are the articles that have already been written:

This selection of Master’s programs is not an ultimate list of “good” master's programs – we just happened to know the authors of the articles. If you want to add an article about any ML-related program anywhere in the world don’t hesitate to contact us and we will add it to the list. We also don't claim that this is a complete overview of the respective programs and want to emphasize that this does not necessarily reflect the perception of all students within that program. 

Authors: Marius (lead organizer), Leon (lead organizer), Marco, Xander, David, Lauro, Jérémy, Ago, Richard, James, Javier, Charlie, Magdalena, Kyle.



Discuss

On Arguments for God

14 ноября, 2020 - 12:06
Published on November 14, 2020 9:06 AM GMT

This post is about God, but of course, it isn't really about God, but about a particular pattern in general.

We're pretty much all in agreement here that God doesn't exist.

I have no doubts that this is correct, but it also poses a trap.

Suppose there are forty arguments for God. Even if we know definitely for a fact that God doesn't exist, it doesn't mean that all of these forty arguments are wrong.

It would if all of these arguments claimed to definitely prove that he does exist, but not if some of these arguments only claim he is more likely to exist than not, or that God is not as unlikely as we might think.
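
To make that concrete with a toy calculation (the numbers are arbitrary, purely for illustration): suppose our prior odds on God are $ 1 : 99 $, and an argument presents genuine evidence with a likelihood ratio of $ 2 : 1 $ in favor of God. The argument is perfectly sound, and updating on it moves our odds to $ 2 : 99 $. God has become twice as likely, yet still almost certainly doesn't exist. An argument can be 'right' in this weak sense without coming anywhere close to establishing theism.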

In fact, it'd actually be suspicious if all forty of these arguments came out against God. Surely we should expect the advantage to belong to the deists in at least one or two?

But when someone presents us with one of those arguments, since we have very good reasons to believe God doesn't exist, we'll assume that it has to be wrong. And so we'll search very hard until we find something that is plausibly an error, or at least more plausible than God, and talk ourselves into believing it.

And once we've introduced that first error, we've opened up the door for more.



Discuss

What are some good examples of fake beliefs?

14 ноября, 2020 - 10:40
Published on November 14, 2020 7:40 AM GMT

I'm reading through Rationality from AI to Zombies again and just finished the section on Fake Beliefs. I understand the ideas and now I'm trying to ask myself whether the ideas are useful.

To me, the biggest way for them to be useful would be if I myself have some fake beliefs; it'd be good to identify and get rid of them. I don't think I do, though.

Another reason why they might be useful is to identify fake beliefs in others and better understand what is going on. Actually, this doesn't seem particularly useful, but it at least is somewhat interesting/satisfying. Anyway, I'm having trouble thinking of good examples of fake beliefs in others. The examples in the blog posts seem pretty contrived and perhaps uncharitable to what is actually going on in people's heads.

So, to wrap this question up, I am most interested in hearing about examples of fake beliefs that rationalists might be prone to having, but I am also interested in hearing any other examples.



Discuss

My Confusion about Moral Philosophy

14 ноября, 2020 - 06:49
Published on November 14, 2020 3:49 AM GMT

Something about the academic discussion of moral philosophy has always confused me; probably this is a more general point about philosophy as such. Historically, people tried to arrive at truths about objects. One used to ask questions like what is right or wrong. Then one started to ask what the definition of right and wrong could be. One could call that Platonism: there is the idea of truth, and the game is won by defining the idea of truth, a chair, or a human in a satisfying way. I claim the opposite is true: you can define an object or an idea, and the definition makes the idea into a useful entity which one can develop further. At least this would be the right way to philosophize, in my opinion. Something similar is done in mathematics, to my knowledge. Axioms seem to be the beginning; all theorems and sentences in math seem to be built upon a few axioms. Change the axioms or subvert them and one would most likely end up with a totally different system of mathematics, with different theorems and sentences. However, the main difference in this analogy is that we know the axioms of mathematics to be true on an intuitive level. That is the unique difficulty of philosophy: we do not seem to have axioms in philosophy. We could, however, make the somewhat reasonable assumption that if one of the foundational axioms proved wrong, the system of mathematics might entangle itself in contradictions, or at least in some inconsistencies. Historically this did in fact happen: there was a foundational crisis of mathematics in the second half of the 19th century and the early 20th century. Therefore one could argue that the same could happen to philosophy once philosophy is evolved enough. Now I will explain my confusion about moral philosophy.

Moral philosophy seems to me to be a judgement about one's own utility function. You can basically choose whether you care more about being just to people, maximizing their utility, or doing what is regarded as honorable by your peers. You can choose whether you want to include animals, plants, or just humans in your considerations. There does not seem to be a right answer in the sense that a right answer would have a special pair of attributes. In the usual academic discussion of utilitarianism, deontological ethics, or virtue ethics (of which there are of course several different versions), there will always appear something that makes a theory problematic, and therefore one will abstain from fully committing to any of the mentioned systems. What confuses me a bit is that those problems change anyone's mind. A strict utilitarian will necessarily come into conflict with some considerations of justice. That should not surprise anyone, because in deciding to be a utilitarian one defined a scope of things one will care about and things one will not care about. The true reason one might be uncomfortable with the implications of the trolley problem is that one violates one's own utility function, which precisely does not care about the academic discussion, but cares about the feelings of guilt and shame. Morality is motivated by our feelings, and our philosophy about it is just an attempt to make our evolutionarily evolved feelings consistent. The rational way to deal with one's morality therefore seems to me to be to just make sure one minimizes guilt and shame and maximizes the pleasure that helping others gives most people. If one assumes that we cannot control our moral sentiments or do away with them, we could have an inconsistent moral system without compromising our rationality, because its inconsistency contributes to our moral enjoyment and minimizes our moral suffering.

At the beginning I described mathematics, and that its foundations rely on axioms. It seems to me that one could found a whole school of thought in philosophy on rationality. Instead of asking in moral philosophy "what is right to do?", which is determined by vague notions of right, one could ask "what is rational to do?". Rationality is far easier to define, and inconsistencies can exist as long as consistency with the idea of rationality is present. This will of course not end the discussion about moral philosophy, but it could show that the discussion isn't as relevant for humans, to a certain extent. This mode of thinking could be extended to other fields too, for example to politics. Instead of concerning oneself in political philosophy to such a large extent with questions of legitimation, one could concern oneself more with what rational legislators or governments should do. The rationality of a government could even play a part in legitimizing it.
 



Discuss

Sharding the Brigade

14 ноября, 2020 - 06:30
Published on November 14, 2020 3:30 AM GMT

The Secular Solstice is planning on using the bucket brigade singing app that some friends and I have been building. While the events we've hosted so far have been something like 20 people, this might be 500 or more. I've spent some time over the last couple weeks figuring out how to scale it, and I think it's in a good place now.

I started, as one always should, with profiling. By far the most time was being spent in Opus encoding and decoding. This is what lets us send audio over the internet at decent quality without using an enormous amount of bandwidth, so it's not optional. We're also already doing it in C (libopus with Python bindings), so this isn't a case where we can get large speedups by moving to a lower-level language.

The first question was: if we pulled encoding and decoding out into other processes, could we keep everything else in the same process? The app is built around a shared buffer that everyone is constantly reading and writing from at various offsets, and it's a lot nicer if that can stay in one process. Additionally, there is a bunch of shared state that represents things like "are we currently in a song and when did it start?" or "who's leading?" that, while potentially separable, would be easier to keep together.

I split the code into an "outer" portion that implemented decoding and encoding, and an "inner" portion that handled everything else. Running a benchmark [1], I got the good news that the inner portion was fast enough to stay all in one process, even with a very large number of clients:

$ python3 unit-stress.py
2.45ms each; est 245 clients
2.44ms each; est 246 clients
2.46ms each; est 243 clients
$ python3 unit-stress.py inner
0.05ms each; est 11005 clients
0.05ms each; est 11283 clients
0.05ms each; est 11257 clients

Since encoding and decoding are stateful, I decided that those should run in long-lived processes. Each user can always talk to just one of these processes, and it will always have the appropriate state. This means we don't have to do any locking, or any moving of state between CPUs. I don't know of a great way to manage many sharded processes like this, but since we only need about eight of them we can do it manually:

location /echo/api/01 {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:7101;
}
location /echo/api/02 {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:7102;
}
...

This also meant creating echo-01.service, echo-02.service, etc. to listen on 7101, 7102, etc.
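
As a sketch of how users might be mapped to shards (illustrative only, and not necessarily what we actually do; any deterministic mapping that pins a user to one process would work), you could hash the user id so the same user always reaches the same codec process and the state it holds:

import hashlib

NUM_SHARDS = 8  # matches the eight manually-configured backends above

def shard_path(user_id: str) -> str:
    # Pin each user to one shard, so "their" codec process always
    # has their encoder/decoder state.
    digest = hashlib.md5(user_id.encode()).hexdigest()
    shard = int(digest, 16) % NUM_SHARDS + 1
    return f"/echo/api/{shard:02d}"  # "/echo/api/01" through "/echo/api/08"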

Once we have our codec processes running, we need a way for them all to interact with global state. After playing around with a bunch of ideas, I decided on each codec process (client) having a shared memory area which is also open in a singleton global process (server). The client can make blocking RPCs to the server, and because the server is so fast, the blocking isn't a problem.

I decided on a buffer layout of:

1 byte: status (whose turn)
2 bytes: json length
N bytes: json
4 bytes: data length
N bytes: data

To make an RPC, the client fills the buffer and, as a final step, updates the status byte to tell the server to take its turn. The server is constantly looping over all of the shared memory buffers, and when it sees one that is ready for processing it decodes it, updates the global state, writes its response, and updates the status byte to tell the client the response is ready.
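
As a rough sketch, the client side of this might look as follows (simplified and illustrative: the names, the little-endian byte order, and the status values are assumptions, not necessarily what the real code does):

import struct

STATUS_SERVER_TURN = 1  # assumed value: "server, take your turn"

def pack_message(buf, json_bytes, data_bytes):
    # Fill the length-prefixed JSON and data fields (bytes 1 onward).
    struct.pack_into("<H", buf, 1, len(json_bytes))      # 2 bytes: json length
    buf[3:3 + len(json_bytes)] = json_bytes              # N bytes: json
    off = 3 + len(json_bytes)
    struct.pack_into("<I", buf, off, len(data_bytes))    # 4 bytes: data length
    buf[off + 4:off + 4 + len(data_bytes)] = data_bytes  # N bytes: data

def send_request(buf, json_bytes, data_bytes):
    pack_message(buf, json_bytes, data_bytes)
    # Flip the status byte last, so the server never sees a
    # half-written buffer.
    buf[0] = STATUS_SERVER_TURN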

The protocol is the same in both directions: arbitrary JSON (10kB max), then audio data (200kB max). This means that when we want to pass a new value through shared memory we don't need to update the protocol, but it does mean the server has a bit more processing to do on every request to decode / encode JSON. This is a fine trade-off, since the only part of the system that is too slow is dealing with the audio codec.
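
The server's polling loop might then look something like this (again an illustrative sketch under the same assumptions, reusing pack_message from above; handle stands in for whatever updates the global state and produces a response):

import json
import struct

STATUS_CLIENT_TURN = 0  # assumed value: "client, your response is ready"
STATUS_SERVER_TURN = 1

def serve_forever(buffers, handle):
    while True:
        for buf in buffers:  # one shared memory region per codec process
            if buf[0] != STATUS_SERVER_TURN:
                continue  # nothing ready in this buffer yet
            json_len = struct.unpack_from("<H", buf, 1)[0]
            request = json.loads(bytes(buf[3:3 + json_len]))
            off = 3 + json_len
            data_len = struct.unpack_from("<I", buf, off)[0]
            data = bytes(buf[off + 4:off + 4 + data_len])
            resp_json, resp_data = handle(request, data)
            # The response uses the same layout in the other direction.
            pack_message(buf, json.dumps(resp_json).encode(), resp_data)
            buf[0] = STATUS_CLIENT_TURN  # hand the buffer back last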

I set up a reasonably realistic benchmark, sending HTTP requests directly to uWSGI servers (start_stress_servers.sh) from python (stress.py). I needed to use two other computers to send the requests, since running it on the same machine was enough to hurt the server's performance, and one additional machine was not able to push the server to its limit.

Initially I ran into a problem where we were sending a user summary which, when the number of users gets sufficiently high, uses more than our total available space for JSON in the shared-memory protocol. We are sending this for a portion of the UI that really doesn't make sense for a group this large, so I turned it off for the rest of my testing.

With no sharding I measure a maximum of ~216 simulated clients, while with sharding I get ~1090.

Looking at CPU usage, the server process (python3) is at 73%, so there is still some headroom.

While it would be possible to make various parts more efficient and get even larger speedups, I think this will be sufficient for Solstice as long as we run it on a sufficiently parallel machine.


[1] All benchmarks in this post taken on the same 6-processor 12-thread Intel i7-8700 CPU @ 3.20GHz running Linux, courtesy of the Swarthmore CS department.

Comment via: facebook



Discuss

Notes on Wisdom

14 ноября, 2020 - 05:37
Published on November 14, 2020 2:37 AM GMT

This post examines the virtue of wisdom. It is meant mostly as an exploration of what other people have learned about this virtue, rather than as me expressing my own opinions about it, though I’ve been selective about what I found interesting or credible, according to my own inclinations. I wrote this not as an expert on the topic, but as someone who wants to learn more about it. I hope it will be helpful to people who want to know more about this virtue and how to nurture it.

Singing the praises of wisdom at LessWrong has a bringing coals to Newcastle feel to it. After all, isn’t this community all about working hard and passionately to hack through the jungle of bias, illusion, and ignorance in search of the hidden temple of Athena?

So I was tempted to skip over wisdom and work on writing up some other virtue instead. But I’m hoping that by exploring wisdom as-a-virtue I can illuminate some facets of it that otherwise receive less attention here.

Two varieties of wisdom

There are two senses of wisdom that are found in some virtue traditions:

  1. phrónēsis, or “practical wisdom” (sometimes translated “prudence”), which concerns knowing how the world works, and reasoning well about how to pursue goals effectively (and about which goals are worth pursuing — which sometimes gets separated out into “conative wisdom”)
  2. philosophy, which concerns a more big-picture understanding of “what it’s all about,” whether or not there seems to be any way to make practical use of that understanding

They are both important: Phrónēsis without philosophy can make you merely clever; while without phrónēsis, philosophy can leave you with your head in the clouds, unable to bring your wisdom down to earth where you can make it matter.

“The title wise is, for the most part, falsely applied. How can one be a wise man, if he does not know any better how to live than other men? — if he is only more cunning and intellectually subtle?” ―Thoreau

Philosophy is also sometimes considered an important end in itself. Aristotle thought it was the richest and most satisfying activity for people to engage in, and reasoned that it was the pastime of the gods.

The person with the virtue of wisdom habitually and regularly prioritizes thinking and behaving wisely. Which raises the question: why wouldn’t you? You might at first think that the only reason why you would deliberately think or behave unwisely is because you believe mistakenly that you are being wise.

That is one way you can go astray: you might understand the wise course of action based on the sort of situation you are in, but mistakenly believe you are in some other sort of situation; or vice-versa, you might understand the situation you are in well enough, but be mistaken about how to confront situations of that sort wisely. But people are also deflected from wisdom by being overwhelmed by emotions like fear or anger, or by sensations like pleasure or pain. For this reason, virtues like courage, endurance, self-control, and temperance can come to the assistance of wisdom.

Wisdom and mistakes

It is a popular belief that we gain wisdom (or gain it most effectively) by learning from our own mistakes.

“Wisdom is a virtue of old age, and it seems to come only to those who, when young, were neither wise nor prudent.” ―Hannah Arendt

On the other hand, learning from other people’s mistakes may be the more prudent way to go about it (#LFMF!). LessWrong is in part a collection of dead-ends marked by warning signs, pointing out the mistakes in reasoning that others have been waylaid by.

But you typically learn other people’s mistakes from other people’s failures, which may leave your own artisanal mistakes unchallenged. If you are willing to strap on your theories and go into battle with reality until you lose, you will be more likely to discover and shed your worst theories. This takes courage, confidence, industriousness, and a willingness to fail and to admit failure.

Surfing less unwisely

“The fool doth think he is wise, but the wise man knows himself to be a fool.” ―Shakespeare (As You Like It)

At least since Socrates, wisdom has been associated with epistemic humility. The “LessWrong” name itself nods at that tradition: To be more wise, assume that you are wrong, try to figure out where and how, patch that up as best you can, lather, rinse, repeat. Don’t be too proud of the nuggets of wisdom you have dug up, but occasionally peer into the vast voids of ignorance, the blank spaces on the map. Imagine those things that could be true that would mean utterly overthrowing most of what you currently suspect to be true. Don’t become attached to your best guesses or too inclined to round off a high probability into a certitude, but always prefer reality to your favorite hypothesis.

Wisdom seems to have less to do with arriving on the firm ground of confident understanding, and more to do with learning to surf the unstable edge of profound uncertainty: neither clinging to the barely-buoyant flotsam of belief nor being pulled out into a sea of nihilism by an undertow of skepticism.

Mystical vs. rational wisdom techniques

To understand and make our way in the world around us, we try to systematize, to find regularities, to discover cause-and-effect relationships, and so forth. We create a map, using our knowledge of the territory that we have passed through, to help us predict the territory we are about to enter. By extrapolating from suggestive patterns in the world, our maps can illuminate things we do not experience directly, and can suggest places to look to discover more than we might have stumbled upon on our own. Habits of rational thinking help us to keep our maps from misrepresenting the territory, and warn us about where our maps might be misleading even when they are as accurate as we can make them.

Mystical wisdom techniques suggest a different way to go about it: rather than just improving your map and your map-reading, take some time also to look directly at the territory and improve the quality of your vision. The advantage of this approach is that you lose the compression artifacts and other errors that come from trying to reconstruct the territory from the map. A disadvantage is that while maps can sometimes be shared, visions have to be turned into maps before they can be — and by the time you have turned your vision into a map, there may be little to recommend it when compared with maps arrived at through more rational methods.



Discuss

The Darwin Game - Rounds 3 to 9

November 14, 2020 - 04:05
Published on November 14, 2020 1:05 AM GMT

Rounds 3-9

MeasureBot maintains its lead. SimplePatternFinderBot takes second place.

Deep dive into SimplePatternFinderBot

SimplePatternFinderBot

Yonge's SimplePatternFinder can speak for itself.

if pattern != None:
    if pattern.IsPatternFairOrGoodForUs():
        # Try and stick to it as it looks good
        ret = pattern.OurNext()
    else:
        # We have a problem. If it is a smart opponent we
        # don't want to encourage it to stick with it; on
        # the other hand, if it is a dumb bot that will stick
        # with it regardless then we are better off getting
        # something rather than nothing. It's also possible
        # we might not have been able to establish
        # co-operation yet.
        if pattern.OurNext() >= 3:
            # The pattern is good for us for at least this
            # move, so stick to it for now.
            ret = pattern.OurNext()
        elif (self.theirScore + pattern.GetNext()) / self.turn >= 2.25:
            # Under no circumstances allow it to get too many
            # points from playing an unfavourable pattern.
            ret = 3
        elif self.theirMoves[-1] + self.ourMoves[-1] == 5:
            # If we managed to co-operate last round,
            # hope we can break the pattern and co-operate
            # this round.
            return self.theirMoves[-1]
        elif not self.hasTotalOfFiveBeenPlayed:
            # If the combined scores have never been 5
            # before, try to arrange this to see if it will
            # break the deadlock.
            ret = pattern.OurNext()
        elif self.round < 4 and pattern.OurNext() >= 1:
            # It looks like we are probably dealing with
            # a nasty bot. Tolerate this within limits in
            # the early game, where it is more likely to be
            # a dumb bot than a sophisticated bot that is
            # very good at exploiting us, so we at least
            # get something.
            ret = pattern.OurNext()
        elif self.round < 8 and pattern.OurNext() >= 2:
            # If we would get an extra point, be tolerant
            # for a little longer.
            ret = pattern.OurNext()
        else:
            # It looks like it is being completely
            # unreasonable, so normally return 3
            # to stop us from being exploited,
            # but occasionally offer 2 just in case
            # we have managed to accidentally get
            # ourselves into a defect cycle against
            # a more reasonable bot.
            num = random.randint(0, 50)
            if num == 0:
                # Possibly this should only be done once?
                ret = 2
            else:
                ret = 3

Everything so far

Today's Obituary

Bot | Team | Summary | Round
--- | --- | --- | ---
Silly Chaos Bot | NPCs | Plays randomly. | 4
Silly 4 Bot | NPCs | Always returns 4. | 5
S_A | Chaos Army | "79% of the time it submits 1, 20% of the time it submits 5, 1% of the time it submits a random number between 0 and 5." | 6
Silly 5 Bot | NPCs | Always returns 5. | 6
Silly Invert Bot 0 | NPCs | Returns 0 on the first round. Returns 5 - <opponents_last_move> on subsequent rounds. | 6
Silly 1 Bot | NPCs | Always returns 1. | 6
PasswordBot | Multics | Fodder for EarlyBirdMimicBot | 8
Definitely Not Collusion Bot | Multics | Fodder for EarlyBirdMimicBot | 8
Silly Invert Bot 2 | NPCs | Returns 2 on the first round. Returns 5 - <opponents_last_move> on subsequent rounds. | 9
Silly Random Invert Bot | NPCs | Plays randomly on first turn. Returns 5 - <opponents_last_move> on subsequent rounds. | 9
Ben-Bot | Norm Enforcers | Collaborates with jacobjacob | 9

Rounds 10-20 will be posted on November 16, at 5 pm Pacific Time.



Discuss

A Self-Embedded Probabilistic Model

November 13, 2020 - 23:36
Published on November 13, 2020 8:36 PM GMT

One possibly-confusing point from the Embedded Agents sequence: it’s actually not difficult to write down a self-embedded world model. Just as lazy data structures can represent infinite sequences in finite space, a lazily-evaluated probabilistic model can represent a world which is larger than the data structure representing the model - including worlds in which that data structure is itself embedded. The catch is that queries on that model may not always be computable/well-defined, and even those which are computable/well-defined may take a long time to compute - e.g. more time than an agent has to make a decision.

In this post, we’ll see what this looks like with a probabilistic self-modelling Turing machine. This is not the most elegant way to picture a self-embedded probabilistic model, nor the most elegant way to think about self-modelling Turing machines, but it does make the connection from probabilistic models to quining and diagonalization explicit.
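
To make the lazy-evaluation analogy concrete, here is a minimal Python sketch (my own illustration, not anything from the post's model) of a finite object that represents an infinite random sequence; each bit comes into existence only when a query demands it:

import random

def lazy_coin_flips(seed):
    # A finite generator object standing in for a distribution over an
    # infinite bit-string: bits are materialized only on demand.
    rng = random.Random(seed)
    while True:
        yield rng.randint(0, 1)

# A query touches only the finite prefix it actually needs:
flips = lazy_coin_flips(seed=42)
print([next(flips) for _ in range(10)])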

The Model

Let’s write out a Turing machine as a probabilistic model.

Pieces:

  • Tape
  • Head
  • k-bit-per-timestep input channel
  • k-bit-per-timestep output channel
  • k-bit-per-timestep random bit channel

I’m including an input channel so we can have data come in at every timestep, rather than just putting all the input on one tape initially (which would let the machine perform arbitrary computation between each data point arriving). Similarly with the output channel: it will have a bit in it every timestep, so the machine can’t perform arbitrary computation between output bits. This is a significant difference between this model and the usual model, and it makes this model a lot more similar to real-world computational systems, like CPUs or brains or .... It is, effectively, a bounded computation model. This will play only a minor role for current purposes, but if you’re thinking about how decision-theoretic queries work, it becomes much more relevant: the machine will only have finite time to answer a query before a decision must be made.
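
As a rough sketch of this bounded computation model (the names run_machine, step, move, and so on are my own illustrative choices, not the post's), the stepping loop might look like the following; the point is just that one bit-group of input and output flows through on every timestep, leaving no room for unbounded computation between data points:

def run_machine(step, head0, tape0, inputs, rand_bits, n_steps):
    # `step` is the machine's local update rule:
    #   (head, symbol, in_bits, rand_bits) -> (head, symbol, move, out_bits)
    head, pos = head0, 0
    tape = dict(tape0)  # sparse tape: only finitely many nonzero cells stored
    outputs = []
    for t in range(n_steps):
        symbol = tape.get(pos, 0)
        head, new_symbol, move, out = step(head, symbol, inputs[t], rand_bits[t])
        tape[pos] = new_symbol  # only the cell under the head can change
        pos += move             # head moves -1, 0, or +1 each timestep
        outputs.append(out)     # exactly one output bit-group per timestep
    return outputs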

The relationships:

  • The tape-state at time t and position position, Tape(t)[position], is a function of Tape(t−1)[position] and Head(t) (if position = Head(t−1).position).
  • The head-state at time t, Head(t), is a function of Head(t−1), tape-state at the head’s previous position Tape(t−1)[Head(t−1).position], and the input bits In(t) and Rand(t).
  • The output-state at time t, Out(t), is a function of Head(t).

Also, we’ll assume that we know the initial state head0 of the head and tape0 of the tape, and that only a finite section of the tape starts out nonzero. Put that all together into a probability distribution factorization, and we get

\[
\begin{aligned}
P[\text{Tape}, \text{Head}, \text{Out} \mid \text{In}, \text{Rand}] ={}& I[\text{Head}(0) = \text{head}_0] \cdot I[\text{Tape}(0) = \text{tape}_0] \\
&\cdot \prod_t P[\text{Head}(t) \mid \text{Head}(t-1),\, \text{Tape}(t-1)[\text{Head}(t-1).\text{position}],\, \text{In}(t),\, \text{Rand}(t)] \\
&\cdot \prod_t P[\text{Tape}(t) \mid \text{Tape}(t-1),\, \text{Head}(t-1).\text{position},\, \text{Head}(t)] \\
&\cdot \prod_t P[\text{Out}(t) \mid \text{Head}(t)]
\end{aligned}
\]

Each line here handles the update for one component - the head, tape and output - with the initial conditions on the first line. We can also further break apart the tape-update term, since only the tape-position where the head is located depends on the head-state:

\[
\begin{aligned}
P[\text{Tape}(t) \mid{}& \text{Tape}(t-1),\, \text{Head}(t-1).\text{position},\, \text{Head}(t)] \\
={}& P[\text{Tape}(t)[\text{Head}(t-1).\text{position}] \mid \text{Tape}(t-1)[\text{Head}(t-1).\text{position}],\, \text{Head}(t)] \\
&\cdot \prod_{\text{position} \neq \text{Head}(t-1).\text{position}} P[\text{Tape}(t)[\text{position}] \mid \text{Tape}(t-1)[\text{position}]]
\end{aligned}
\]

Each of the "atomic" conditional probabilities in these formulas would be given by the machine's local update-rules - e.g. head state as a function of previous head state, state of previous tape location, and inputs, or tape state as a function of previous tape-state (usually the identity function). We can also incorporate a model of the rest of the world, which would give P[In|Out] and P[Rand], with In at later times depending only on Out at earlier times so that the whole thing works out to a well-behaved (i.e. acyclic) factorization.

The important thing to notice is that, while the distribution is over an infinite space, we can express it in a finite set of equations (i.e. the equations above). We can also perform ordinary probabilistic calculations using these equations: it’s just a plain old Bayes net, and there’s a bounded number of nonzero variables at each timestep. More to the point, we can hard-code a representation of these equations in the initial state of the Turing machine (quining to fit the representation of the initial state inside the representation of the model inside the initial state), and then the machine itself can perform ordinary probabilistic calculations using these equations. We can treat the equations defining the model as a lazy data structure, and run queries on them.
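
One standard way to run such queries is plain forward sampling: draw trajectories from the factorization, expanding variables lazily as they are needed, and average. A minimal sketch, assuming hypothetical callables sample_trajectory (which lazily draws a long-enough trajectory prefix) and query (which evaluates an event on it):

import random

def estimate_probability(query, sample_trajectory, n_samples=10000, seed=0):
    # Monte Carlo estimate of P[query]: the infinite model is never fully
    # materialized; each sample expands only the prefix the query touches.
    rng = random.Random(seed)
    hits = sum(bool(query(sample_trajectory(rng))) for _ in range(n_samples))
    return hits / n_samples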

Diagonalizing

So, what happens if we try to diagonalize our self-modelling Turing machine? What happens if we look at the first non-null output of the machine and then flip it, and we program the machine to output the most likely value of this flipped output? Well, that scheme falls apart at “first non-null output of the machine”. There’s no guarantee that the machine ever outputs a non-null output. Let’s write this out concretely, in terms of the probabilistic model and queries involved.

We’ll assume that Out consists of two bits. The machine always outputs “00” until its computation completes, at which point it outputs either “10” if it wants to pass a logical-zero output or “11” for a logical-one output. The program on the initial tape is some ordinary probabilistic reasoning program, and we hardcode our query into it.

What should our query be? Well, we want to look at the first non-null output, so we’ll have to write something like “Out(min_t t s.t. Out(t) ≠ “00”)” - the minimum time t such that the output is not “00”. Then we want the machine to output the least-likely value of that variable, so we’ll ask for something like

\[
\operatorname*{argmin}_{\text{val}} \; P\big[\text{Out}\big(\min\nolimits_t\, t \text{ s.t. } \text{Out}(t) \neq \text{“00”}\big) = \text{“1”} + \text{val}\big]
\]

Then, when the machine goes to compute min_t t s.t. Out(t) ≠ “00”, the computation may not complete at all, in which case there will not be any time t which satisfies that condition.
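
Written as code, the offending subexpression is an unbounded search (a sketch, where out is a hypothetical function from times to two-bit output strings):

def first_nonnull_output_time(out):
    # Computes min_t t s.t. out(t) != "00" -- but if the machine never emits
    # a non-null output, this loop runs forever: the variable the query asks
    # about simply does not exist.
    t = 0
    while out(t) == "00":
        t += 1
    return t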

The main thing to notice in this example is that the query itself was not well-defined. It’s not just that our machine can’t answer the query; even an outside observer with unlimited computational power would conclude that the answer is not well-defined, because the time t which it asks about does not exist. Our query is trying to address a variable which doesn’t exist. The “non-halting weirdness” comes from passing in weird queries to the model, not from any weirdness in the model itself. If we stick to “normal” queries - i.e. ask about the value of a specific variable - then there isn’t any conceptual problem (though it may take a while for the query to run). So from this perspective, the central problem of self-embedded world models is not representation or interpretation of the model, but rather the algorithmic problem of expanding the set of queries we can answer “without any weirdness”.

In this example, there is another possible behavior: the machine may output a logical zero with probability ½ and a logical one with probability ½, using its random bit source. This would require a probabilistic reasoning algorithm quite different from what we normally use, but would be entirely self-consistent and non-weird. That’s an example of what it might mean to “expand the set of queries we can answer without any weirdness”.

What Queries Do We Care About?

We do not care about all queries equally. Depending on the details of our underlying model/logic, there may be lots of queries which are uncomputable/undefined, but which we don’t actually care about answering. We want a theory of embedded agents, but that does not necessarily imply that we need to handle every possible query in some very expressive logic.

So which queries do we need to handle, in order to support a minimum viable model of agency?

This is a hard question, because it depends on what decision theory we’re using, and exactly what kind of counterfactuals that decision theory contains (and of course some decision theories don’t directly use a probabilistic model at all). But there are at least some things we’ll definitely want - in particular, if we’re using a probabilistic model at all, we’ll want some way to do something like a Bayesian update. That doesn’t necessarily mean updating every probability of every state explicitly; we could update lazily, for instance, i.e. just store the input data directly and then go look it up if and when it’s actually relevant to a query. More generally, we want some data structure which summarizes whatever info we have from the inputs in a form suitable to answering whatever queries we’re interested in.
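
Such a lazy update might look like the following sketch (the conditional method on the prior is a hypothetical stand-in for whatever inference machinery actually answers queries):

class LazyPosterior:
    def __init__(self, prior):
        self.prior = prior
        self.data = []  # raw observations, stored verbatim

    def observe(self, datum):
        # No computation at update time: just remember the input.
        self.data.append(datum)

    def query(self, event):
        # Condition on the stored data only now, when a query arrives.
        return self.prior.conditional(event, given=self.data)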

(To me, this sounds like an obvious use-case for abstraction: throw out info which is unlikely to be relevant to future queries of interest.)

Another interesting class of queries is optimization queries, of the sort needed for decision theories or game-theoretic agents. One way to think about the decision-theoretic problems of embedded agency is that we want to figure out what the “right” class of queries is, in order to both (a) guarantee that the optimization query is actually solved correctly and quickly, and (b) get good performance in a wide variety of environments. (Of course, this isn’t the only way to think about it.)



Discuss

How to get the benefits of moving without moving (babble)

November 13, 2020 - 23:18
Published on November 13, 2020 8:17 PM GMT

If you've been following along with the location discussion (you probably haven't, that's okay), you'll know that I've become convinced that trying to get the community to leave Berkeley en masse is probably not a good idea. However, that leaves us in a bit of a cheeky conundrum (sorry, been watching lots of British comedy) – there are in fact real reasons why some people are excited about moving, and we shouldn't just throw all that in the garbage, even if we decide not to move.

So in this post, I want to figure out how we can get the things that we want out of moving, without moving (thanks to Aray for the general idea). The point of this is to stop thinking of move/don't-move as a binary, and instead focus on ways of achieving whatever goals are hidden at the root of our desire to move.

I'm choosing to focus on what I've come to believe are three of the main cruxes:

  • Opportunity to stop stagnating / be a new person
  • Political stability
  • Nicer surroundings

I've taken inspiration from jacobjacob and generated 50 dumb ways to get each of the things (in spoiler tags, in case you want to generate your own!). In inviting you to do this babble challenge, I also invite you – if you so choose – to babble not on these topics, but on cruxes of your own.

Stop stagnating

We've been in Berkeley for a long time, and some people just want to move because they want to be anywhere other than the place they already are. Your physical location definitely shapes the thoughts you have and the actions you take, so if you feel stuck in a rut, shaking up your whole life by moving can sound pretty appealing. How else can we shake up our lives?

Babble:

  1. Move to a different room in your house
  2. Start living with different housemates
  3. Move to a different physical house in the same neighborhood
  4. Move to a new neighborhood
  5. Rearrange your furniture
  6. Redecorate the house
  7. Paint your walls
  8. Spend a lot of time in VR
  9. Meditate a lot to become more attentive to your experience
  10. Start using a different room as the default common space
  11. Go to more conferences
  12. Go on retreats for much of the year
  13. Commute to an office instead of working from home, for context change
  14. Go on a walk / bike ride / drive in a different place each day
  15. Rotating offices - instead of having people from the same org working in the same place all the time, we reorganize once every one to three months
    1. Maybe we have one big office building and people work on different floors
    2. Maybe we keep the offices we have and just rotate the groups of people
  16. MIRI has a permanent retreat venue a couple hours away where researchers can go any time they want
  17. Go do more touristy things in the area where you live
  18. Go to more events
  19. Transition to a different gender
  20. Change the smellscape of your environment (e.g. with flowers, candles, or essential oils)
  21. Change the soundscape of your environment (e.g. by playing music all the time, or getting a lot of birds)
  22. Walk around on stilts or in high heels
  23. If you're bilingual, do all your work-thinking in the other language
  24. Start learning a completely new field - e.g. art history for an AI researcher, or organic chemistry for a historian
  25. Get a dog
  26. Switch up your mode of transportation - e.g. if you usually bike everywhere, walk instead, or vice versa
  27. Make one of the rooms in your house a Dreamatorium
  28. Start sleeping in a tent in your yard
  29. Spend a night on the streets
  30. Don't have internet at your home, only at your office (or vice versa)
  31. Become nocturnal
  32. Read everything upside down
  33. Take drugs
  34. Take a month-long vow of silence
  35. Sing everything you say
  36. Rhyme everything you say
  37. Call old friends you haven't talked to in years and ask for their take on the problems you're currently facing (whether personal or technical)
  38. Use lasers to make yourself colorblind
  39. Drastically switch up the aesthetic of your computer-using experience - e.g. by switching operating systems
  40. Start using a different web browser so that you get different kinds of results
  41. Get imprisoned
  42. Get rid of everything you own
  43. Have a baby
  44. Get married / divorced
  45. Implant electrodes in your brain
  46. Switch from typing your thoughts into a computer to writing them on paper
  47. Build a physical model of the abstract theory you're working on, e.g. out of wood or tinkertoys
  48. Take a job as a security guard or something, so you have a lot of time with nothing to do when you're not allowed to distract yourself with the internet, so you can have a bunch of unstructured thoughts
  49. Make new friends in a totally different social circle; their different way of thinking will help you generate new kinds of thoughts
  50. For organizations, have the ops team and researchers switch roles temporarily so that everyone can get a new perspective on the organization's goals.
  51. Completely revamp your routines – go back into explore mode for things where you've been in exploit mode a long time (e.g. restaurant choice, TV shows)

Whew, well, not all of those were completely useless! Onward!

Political and social stability

A major thing lots of people want out of moving is to get away from the stressful uncertainty of recent social and political upheaval. How can we get that without moving?

Babble:

  1. Have people you trust run for public office
  2. Dedicate your life to founding a secret society that inserts people aligned with your values into positions of immense power in your country
  3. Single-handedly disarm all the nukes in the world, like Superman in that one Justice League episode
  4. Buy all of the major news networks and let them mostly continue as they are but subtly make everything less partisan
  5. Write some very influential books
  6. Put sedatives in the municipal water supply
  7. Secede from your country / form a micronation
  8. Go really hard on raising the sanity waterline - e.g. get rationality training into all public schools
  9. Print ten million copies of HPMoR / the Sequences / SSC and distribute them evenly around your country
  10. Go back in time and finagle things so that there's less political polarization (not sure how, you figure it out)
  11. Find a Death Note and eliminate the people who are linchpins of social and political instability
  12. Like the previous one but in a technologically possible and yet still untraceable way, like… targeted asteroids
  13. Somehow become a big wig on Capitol Hill and spearhead some major bipartisan movement
  14. Invent a supervillain-type ray that causes all guns in the world to melt
  15. Become a Jesus / Gandhi / Forrest Gump type figure
  16. Purge Night
  17. Require mental health screenings for people before they can run for public office
  18. Abolish the CDC and FDA and most bureaucracy in the US; then people won't be angry because they had to wait five hours at the DMV and they won't be sick and angry about it because they can't afford healthcare
  19. Outlaw swear words
  20. Overhaul all of the algorithms that decide what to show people on the internet, to actively counter partisanship and general polarization
  21. Legalize marijuana and criminalize alcohol so that when people want to use legal drugs to numb their pain they're more likely to get chill than angry
  22. Get them vaccines distributed right quick like so we can end lockdown and therefore hopefully return to a better baseline of sanity
  23. Somehow import the collectivist values that make Japanese society so relatively functional
  24. Make the week into eight days instead, so that we get three rest days for every five workdays
  25. Make an AI that's a really great psychotherapist, then provide it for free to everyone in the world, and socially normalize or even require its use
  26. Positive singularity
  27. Build a giant fortress
  28. Get people to exercise more, because exercise is the magic that cures all ills
  29. Get people to make more art, because making art is the magic that cures all ills
  30. Automate away the vast majority of jobs and instead free people to make art or whatever, but also invent a fully immersive virtual reality experience (like Star Trek's holodecks) so that if they don't have anything productive they want to do they can just stay out of the way while being happy
  31. Just chill out, things are actually pretty fine
  32. More hugs
  33. Cause a whole lot more people who think like you (or the way you like people to think) to move to your area, a la Free State Project – then at least if shit goes down, you'll be surrounded by allies
  34. Become a citizen of another country, just in case
  35. Just really solidify your personal social group, and pretend people outside of your bubble don't exist
  36. Follow Eliezer's suggestions to reboot the police
  37. Make a society just like the one in Brave New World – that was a piss-poor attempt at a dystopia given that everyone is happy all the time, aging is curtailed, and society is incredibly stable
  38. Go hard on genetically modifying embryos so that within a generation everyone is smarter and more level-headed
  39. Make many billions of dollars, take over the world
  40. Build a time machine, take over the world
  41. Nukes in space?, take over the world???
  42. Befriend a bunch of highly influential people (Bill Gates, Donald Trump, Kim Jong-Un, etc) and whisper in their ears like a vizier in a movie
  43. Replace a bunch of highly influential people with clones loyal to you
  44. Seduce Donald Trump
  45. Inundate the world with more resources than humanity could possibly use, so that there's nothing to fight over anymore. At the very least then we'd have different problems.
  46. Hire a whole team of bodyguards so you don't have to worry about violence
  47. Invent and spread widely a faster mode of travel, like hyperloop or flying cars, so that there's more global connectedness, and therefore maybe more global understanding
  48. Get the Autobots to come from Cybertron and save us from ourselves
  49. Join the military, rise in the ranks, take over the military, abolish the military
  50. Form a worldwide movement of people doing random acts of kindness – that's the kind of thing I thought might change the world when I was in high school, and who knows, it's not impossible

Nicer surroundings

Finally, some people want to move because they just don't like the place they are all that much. I'm going to divide this babble in half, because there are two main classes of solutions: change your surroundings, or get better at accepting your surroundings as they are.

Changing your surroundings:

  1. Move to a nicer neighborhood in your area
  2. Move to a house with a big backyard
  3. Get a water feature
  4. Get lots of plants
  5. Redecorate your house
  6. Renovate your house
  7. Put a lot of effort into optimizing your work and living setups, your commute, etc.
  8. Become friends with everyone on your block, knock down the fences in your backyards, and make the area behind your houses a big private park
  9. Unilaterally shut down the street to car traffic and instead make it a place for kids to play
  10. Lobby for car-free roads or car-free days in your city
  11. Organize people to pick up litter in your neighborhood
  12. Generally combat the broken window effect in your neighborhood
  13. Fill your home with nice sounds and smells
  14. Plant a bunch of trees around your house
  15. Invent a way to replicate the effects of the Harry Potter notice-me-not spell, so that most people can't perceive you, so you don't have to deal with them
  16. (Western-US-specific) Fund controlled burns throughout the year all over the state to cap how bad wildfire season can get
  17. Secure all your furniture to the walls per earthquake best practices, so that you don't have to worry about things falling on you if there's a big earthquake
  18. If you don't like urban life, relocate to a suburb within commuting distance
  19. Abolish cars
  20. Make a zen garden
  21. VR
  22. KonMari your life
  23. Put a lumenator in every room where you spend time
  24. Decrease your exposure to your surroundings by staying home all the time (I bet a lot of you are already doing this :P) – then you only need to make your house good, which is way easier than making a whole city good
  25. Invent truly giant, like spaceship-sized air purifiers – they hover above the city and nullify all effects of pollution, wildfires, and even COVID
  26. Fill the air with happiness gas, like the Joker
  27. Exterminate all ticks / mosquitoes / whatever pest is the worst in your area
----

Accepting your surroundings:
  28. Purposely go out in the world with a childlike sense of wonder – What kind of tree is that? Can you believe that cars exist? Can you believe that people exist?
  29. Mindfulness meditation
  30. See a therapist
  31. Have a conversation with someone who really likes living in the area
  32. Find a place in or near your home that you just genuinely love being, and soak up that feeling
  33. Start a gratitude journaling habit
  34. Remind yourself that the other places you've lived or might want to live aren't perfect; make a list of the ways in which those places aren't as good as your current city/area
  35. Remind yourself of the positive reasons that you initially ended up in your current location, and maybe try to get back some of that magic
  36. Be a tourist in your own city – benefit from all the very best things it has to offer
  37. Buy property so that you're locked into staying, and let post-hoc justification work its magic
  38. Recite the Serenity Prayer
  39. Forbid yourself from saying or writing negative things about the place you live, so as to not strengthen those neural pathways
  40. Think of all the people you would never have met and things you would never have done if you hadn't been where you are
  41. Spend more time with your friends and be grateful that you live near them
  42. Think of all the ways you could have it worse – e.g., maybe Berkeley has some problems with being dirty, but it sure beats the slums of Mumbai
  43. Actually spend some time in a different place and think about all the things you miss about home
  44. Befriend a bunch of your neighbors
  45. Start participating in and organizing local community events so that you feel like you're a part of something nice
  46. Do community service to feel more connected to your city (not to purchase utilons, obv)
  47. Have kids, because kids need friends and need to go to school and stuff, which will cause you to become more integrated into the community
  48. Lobotomy
  49. Listen to Bobby McFerrin's Don't Worry Be Happy on loop until it sinks in. As a bonus you could buy one of those creepy fake fish to sing it for you.
  50. Post all over social media about how much you like the place you live – put #aesthetic pictures of it on Instagram, extoll its virtues on your Facebook, fight people on Twitter who don't like it. Eventually you will hopefully have convinced yourself you like the place, or at the very least, it will be too awkward of a social move to admit that you don't.

Well there you have it! I'd be interested to hear either other people's answers to these prompts, or their own cruxes. While I've largely made up my own mind on whether it's a good idea to move, I think most people still feel pretty unresolved. At the very least, it seems like there are real problems that need to be addressed – and if we don't move, we need to find other ways to address them.



Discuss
