Вы здесь

Сборщик RSS-лент

An unconfortable epistemic position regarding "Verb Magic".

Новости LessWrong.com - 29 июня, 2019 - 08:13
Published on June 29, 2019 3:38 AM UTC

Hello Guys,

This post will act as a bottle thrown in the sea, a "Is anyone like me anywhere ?" as well as an occasion, i hope, to discuss a more serious topic with you. Some claims made here might push on your "pseudo-science" button, please read until the end.

Please allow me to start with a brief (not so brief) testimonial in order for you to grab the topic.

I'm a 21 baguette french person.

I consider myself as a rationalist, i believe in demonstration by facts and I believe I don't have any superstition like telepathy, parapsychology or flat-earth BS. I love all kind of science and deductive/inductive thinking in general. I'm basically passioned by anything that can be interesting.

I am a student in psychology, i'm aware of the scientific methodology and aware of the ton of cognitive bias the human mind can follow. I like physics, astrophysics, human sciences as well as "hard" sciences.

But, and some of you might stop your reading here for what i'm about to say, I recently dove into occultism. please wait and let me explain.

First of all, it's historically speaking a cool subject to consider, i read a lot of books about alchemy, tarot, mistery cults etc... There's a lot to learn about epistemology, theology and more. But it developped something else in me. Some kind of beliefs in some king of... magic.

Again, i'm not in a new age type of thinking like "your intuition is the path everything else is wrong", I'm an atheist and the magic i'm talking about is more like something I got from a lot of philosophical thinking.

To clarify, I believe in "The Verb" kind of magic, the symbol and the ritual. It can look something like a thing A. Jodorowsky calls "psychomagic" (what a marketing term again). I started to believe that most of the things we called magic in the history of occultism is just a beautiful and poetic way to talk about advanced symbolic speech and act, which can have an enormous impact on the psychologic, emotionnal, and spiritual mindset of the person who performs it.

We can find this in franc-maçonnerie, the ritual has no supernatural nature but the power which emanates from the performance engraves a lot of things in your person, and this kinda looks like what we called sorcery at some point in history. In the same mindset, i started sudying tarot, not because i believe in divination of the future or present, but because i think the figures present some kind of strong hermeneutic that can have a deep impact on your life, it can allow you to take a step back from your life easily and understand some life keys that makes you grow, a step back that is sometime difficult to get without strong symbols, and I ask myself more and more if this advantage can be consider as some kind of verb magic.

If you push this logic further (maybe a bit too much this time...), black magic things like "He did sorcery on me now i have no luck in life, i feel sad all the time and all the bad things happen to me at the same time" can be seen as a strong symbolic bound. Someone who believes in sorcery will allow the "wizard" to take a looot of space in his head, granting him an enormous symbolic importance in his everyday life until some point where it can affect his general mindset and therefore his acts and behaviors (this can lead to a general state of cognitive and emotional bias and a retroaction loop of "feeling weird" --> "acting weird" --> "Feeling weirder" --> "acting weirder" --> "Loopin' again baby". The wizard would then just be an eloquent, charismatic person who leads with rhetoric, status and psychological sway.

I find this way of considering magic in general healthy and it's new to me. Please try to understand me, There is a lot of superstition regarding magic and there will always be, let's just say that from this perspective superstitions are a heritage of our scientific ignorance and the "real magic" underneath all of this would be the power of symbol, which we human can't live without.

This leads to a problem to me. I was always seduced by occultism and, let be honnest, it would be so great if magic was a real thing. I'm stuck here on a rope streched across a wide gap, and maybe i'm doing all this cognitive gymnatsics just to convince myself that this is a real thing but I can't prevent myself to think that this is maybe a stable philosophical claim.

I feel weird about all this, mostly because I couldn't find anyone who can relate to my position. I feel rejected by science to even insinuate these topics and i also feel rejected by serious occultists, serious "philosophical spiritualists" and less serious "new age people" because i try to reduce magic and spirituality to just the power of symbol (I lack intuition blabla). (I have a tendency to read through religion, spirituality and "energy" related topics with my "It's so clever and powerful if you see it as symbol" glasses, this is my believes and i'll be glad to discuss this in comments but it's another subject).

So am i granting to much importance to symbol and hermeneutic ? A great example is psychoanalysis (please don't kill me). This hermeneutic reading can be a great tool and sooo powerful sometimes when you apply it to myth, society or the indidual. But my rational mind can't help itself to scream "boooo not proven booooo pseudo science", And i'm always furious when a dude claims the omnipotence of this theory like a fanatic. Believe me, this is a really an inconfortable position ...

What do you think ? two questions, an epistemic one, in the chart of this site and a personal one which does not belong here but please light my lantern guys...

1. What do you think about this hermeneutic point of view regarding magic and occultism in general ? Am i going to far to convince myself that my dreams are somewhat true or are there really some interesting questions raised ?

2. Do you people experienced or experience this kind of epistemic gap, which can be really inconfortable and makes you think that you somewhat belong nowhere except somwhere in a weird middle, a place where you're alone and can't find a damn person to fully accept your position ?

Thank you for reading

Hijol.



Discuss

Conceptual Problems with UDT and Policy Selection

Новости LessWrong.com - 29 июня, 2019 - 02:50
Published on June 28, 2019 11:50 PM UTC

Abstract

UDT doesn't give us conceptual tools for dealing with multiagent coordination problems. There may have initially been some hope, because a UDT player can select a policy which incentivises others to cooperate, or because UDT can reason (EDT-style) that other UDTs are more likely to cooperate if it cooperates itself, or other lines of thought. However, it now appears that this doesn't pan out, at least not without other conceptual breakthroughs (which I suggest won't look that much like UDT). I suggest this is connected with UDT's difficulties handling logical uncertainty.

Introduction

I tend to mostly think of UDT as the ideal, with other decision theories being of interest primarily because we don't yet know how to generalize UDT beyond the simplistic domain where it definitely makes sense. This perspective has been increasingly problematic for me, however, and I now feel I can say some relatively concrete things about UDT being wrong in spirit rather than only in detail.

Relatedly, in late 2017 I made a post titled Policy Selection Solves Most Problems. Policy selection is the compromise solution which does basically what UDT is supposed to do, without the same degree of conceptual elegance, and without providing the hoped-for clarity which was a major motivation for studying these foundational problems. The current post can be seen as a follow-up to that, giving an idea of the sort of thing which policy selection doesn't seem to solve.

The argument can also be thought of as an argument against veil-of-ignorance morality of a certain kind.

I don't think any of this will be really surprising to people who have been thinking about this for a while, but, my thoughts seem to have recently gained clarity and definiteness.

Terminology Notes/References

UDT 1.0, on seeing observation .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > * {position: absolute} .MJXc-bevelled > * {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > * {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > * {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-mphantom * {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')} obs, takes the action ai which maximizes the expected utility of "my code outputs action ai on seeing observation obs", with expected value evaluated according to the prior.

UDT 1.1, on seeing observation obs, takes the action ai which the globally optimal policy (according to the prior) maps obs to. This produces the same result as UDT 1.0 in many cases, but ensures that the agent can hunt stag with itself.

UDT 2 is like UDT 1.1, except it (1) represents policies as programs rather than input-output mappings, and (2) dynamically decides how much time to spend thinking about the optimal policy.

What I'm calling "policy selection" is similar to UDT 2. It has a fixed (small) amount of time to choose a policy before thinking more. However, it could always choose the policy of waiting until it has thought longer before it really chooses a strategy, so that's not so different from dynamically deciding when to choose a policy.

Two Ways UDT Hasn't GeneralizedLogical Uncertainty

UDT 2 tries to tackle the issue of "thinking longer", which is the issue of logical uncertainty. This is a conceptual problem for UDT, because thinking longer is a kind of updating. UDT is supposed to avoid updating. UDT 2 doesn't really solve the problem in a nice way.

The problem with thinking for only a short amount of time is that you get bad results. Logical induction, the best theory of logical uncertainty we have at the moment, gives essentially no guarantees about the quality of beliefs at short times. For UDT 2 to work well, it would need early beliefs to at least be good enough to avoid selecting a policy quickly -- early beliefs should at least correctly understand how poor-quality they are.

The ideal for UDT is that early beliefs reflect all the possibilities inherent in later updates, so that a policy optimized according to early beliefs reacts appropriately to later computations. Thin priors are one way of thinking of this. So far, nothing like this has been found.

Game Theory

The second way UDT has failed to generalize, and the main topic of this post, is to game theory (ie, multi-agent scenarios). Cousin_it noted that single-player extensive-form games provided a toy model of UDT. The cases where he says that the toy model breaks down are the cases where I am now saying the concept of UDT itself breaks down. Extensive-form games represent the situations where UDT makes real sense: those with no logical uncertainty (or at least, no non-Bayesian phenomena in the logical uncertainty), and, only one agent.

What's the conceptual problem with extending UDT to multiple agents?

When dealing with updateful agents, UDT has the upper hand. For example, in Chicken-like games, a UDT agent can be a bully, or commit not to respond to bullies. Under the usual game-theoretic assumption that players can determine what strategies each other have selected, the updateful agents are forced to respond optimally to the updateless ones, IE give in to UDT bullies / not bully the un-bullyable UDT.

Put simply, UDT makes its decisions "before" other agents. (The "before" is in logical time, though, not necessarily really before.)

When dealing with other UDT agents, however, the UDT agents have to make a decision "at the same time".

Naively, the coordination mechanism "write your decisions on slips of paper simultaneously -- no peeking!" is a bad one. But this is the whole idea of UDT -- writing down its strategy under a "no peeking!" condition.

Other decision theories also have to make decisions "at the same time" in game-theoretic situations, but they don't operate under the "no peeking". Guessing the behavior of the other players could be difficult, but the agent can draw on past experience to help solve this problem. UDT doesn't have this advantage.

Furthermore, we're asking more of UDT agents. When faced with a situation involving other UDT agents, UDT is supposed to "handshake" -- Löbian handshakes being at least a toy model -- and find a cooperative solution (to the extent that there is one).

So far, models of how handshakes could occur have been limited to special cases or unrealistic assumptions. (I'd like to write a full review -- I think there's some non-obvious stuff going on -- but for this post I think I'd better focus on what I see as the fundamental problem.) I'd like to see better models, but, I suspect that significant departures from UDT will be required.

Even if you don't try to get UDT agents to cooperate with each other, though, the conceptual problem remains -- UDT is going in blind. It has a much lower ability to determine what equilibrium it is in.

I think there is a deep relationship between the issue with logical uncertainty and the issue with game theory. A simple motivating example is Agent Simulates Predictor, which appears to be strongly connected to both issues.

How does Equilibrium Selection Work?

The problem I'm pointing to is the problem of equilibrium selection. How are two UDT agents supposed to predict each other? How can they trust each other?

There are many different ways to think about agents ending up in game-theoretic equilibria. Most of them, as I understand it, rely on iterating the game so that the agents can learn about it. This iteration can be thought of as really occurring, or as occurring in the imagination of the players (an approach called "fictitious play"). Often, these stories result in agents playing correlated equilibria, rather than Nash equilibria. However, that's not a very big difference for our purposes here -- correlated equilibria only allow the DD outcome in Prisoner's Dilemma, just like Nash.

There's something absurd about using iterated play to learn single-shot strategies, a problem Yoav Shoham et al discuss in If multi-agent learning is the answer, what is the question?. If the game is iterated, what stops agents from taking advantage of its iterated nature?

That's the essence of my argument in In Logical Time, All Games are Iterated Games -- in order to learn to reason about each other, agents use fictitious play, or something similar. But this turns the game into an iterated game.

Turning a game into an iterated game can create a lot of opportunity for coordination, but the Folk Theorem says that it also creates a very large equilibrium selection problem. The Folk Theorem indicates that rational players can end up in very bad outcomes. Furthermore, we've found this difficult to avoid in decision algorithms we know how to write down. How can we eliminate the "bad" equilibria and keep only the "good" possibilities?

What we've accomplished is the reduction of the "handshake" problem to the problem of avoiding bad equilibria. (We could say this turns prisoner's dilemma into stag hunt.)

Handshake or no handshake, however, the "fictitious play" view suggests that equilibrium selection requires learning. You can get agents into equilibria without learning, but the setups seem artificial (so far as I've seen). This requires updateful reasoning in some sense. (Although, it can be a logical update only; being empirically updateless still seems wiser).

Logical Uncertainty & Games

Taking this idea a little further, we can relate logical uncertainty and games via the following idea:

Our uncertain expectations are a statistical summary of how things have gone in similar situations in the (logical) past. The way we react to what we see can be thought of as an iterated strategy which depends on the overall statistics of that history (rather than a single previous round).

I'm not confident this analogy is a good one -- in particular, the way policies have to depend on statistical summaries of the history rather than on specific previous rounds is a bit frustrating. However, the analogy goes deeper than I'm going to spell out here. (Perhaps in a different post.)

One interesting point in favor of this analogy: it also works for modal agents. The proof operator, □, is like a "prediction": proofs are how modal agents thinks about the world in order to figure out what to do. So □x is like "the agent thinks x". If you look at how modal agents are actually computed in the MIRI guide to Löb's theorem, it looks like an iterated game, and □ looks like a simple kind of summary of the game so far. On any round, □x is true if and only if x has been true in every previous round. So, you can think of □x as "x has held up so far" -- as soon as x turns out to be false once, □x is never true again.

In this interpretation, FairBot (the strategy of cooperating if and only if the other player provably cooperates) becomes the "Grim Trigger" strategy: cooperate on the first round, and cooperate on every subsequent round so long as the other player has cooperated so far. If the other player ever defects, switch to defecting, and never cooperates again.

A take-away for the broader purpose of this post could be: one of the best models we have of the UDT "handshake" is the Grim Trigger strategy in disguise. This sets the tone nicely for what follows.

My point in offering this analogy, however, is to drive home the idea that game-theoretic reasoning requires learning. Even logic-based agents can be understood as running simple learning algorithms, "updating" on "experience" from (counter)logical possibilities. UDT can't dance with its eyes closed.

This is far from a proof of anything; I'm just conveying intuitions here.

What UDT Wants

One way to look at what UDT is trying to do is to think of it as always trying to win a "most meta" competition. UDT doesn't want to look at any information until it has determined the best way to use that information. UDT doesn't want to make any decisions directly; it wants to find optimal policies. UDT doesn't want to participate in the usual game-theoretic setup where it (somehow) knows all other agent's policies and has to react; instead, it wants to understand how those policies come about, and act in a way which maximally shapes that process to its benefit.

It wants to move first in every game.

Actually, that's not right: it wants the option of moving first. Deciding earlier is always better, if one of the options is to decide later.

It wants to announce its binding commitments before anyone else has a chance to, so that everyone has to react to the rules it sets. It wants to set the equilibrium as it chooses. Yet, at the same time, it wants to understand how everyone else will react. It would like to understand all other agents in detail, their behavior a function of itself.

So, what happens if you put two such agents in a room together?

Both agents race to decide how to decide first. Each strives to understand the other agent's behavior as a function of its own, to select the best policy for dealing with the other. Yet, such examination of the other needs to itself be done in an updateless way. It's a race to make the most uninformed decision.

I claim this isn't a very good coordination strategy.

One issue is that jumping up a meta-level increases the complexity of a decision. Deciding a single action is much easier than deciding on a whole policy. Some kind of race to increasing meta-levels makes decisions increasingly challenging.

At the same time, the desire for your policy to be logically earlier than everyone else, so that they account for your commitments in making their decisions, means you have to make your decisions faster and in simpler, more predictable ways.

The expanding meta-space and the contracting time do not seem like a good match. You have to make a more complicated decision via less-capable means.

Two people trying to decide policies early are just like two people trying to decide actions late, but with more options and less time to think. It doesn't seem to solve the fundamental coordination problem.

The race for most-meta is only one possible intuition about what UDT is trying to be. Perhaps there is a more useful one, which could lead to better generalizations.

Veils of Ignorance

UDT tries to coordinate with itself by stepping behind a veil. In doing so, it fails to coordinate with others.

Veil-of-ignorance moral theories describe multiagent coordination resulting from stepping behind a veil. But there is a serious problem. How can everyone step behind the same veil? You can't tell what veil everyone else stepped behind if you stay behind your own veil.

UDT can successfully self-coordinate in this way because it is very reasonable to use the common prior assumption with a single agent. There is no good reason to suppose this in the multiagent case. In practice, the common prior assumption is a good approximation of reality because everyone has dealt with essentially the same reality for a long time and has learned a lot about it. But if we have everyone step behind a veil of ignorance, there is no reason to suppose they know how to construct the same veil as each other -- they're ignorant!

Is UDT Almost Right, Nonetheless?

I find myself in an awkward position. I still think UDT gets a lot of things right. Certainly, it still seems worth being updateless about empirical uncertainty. It doesn't seem to make sense for logical uncertainty... but treating logical and empirical uncertainty in such different ways is quite uncomfortable. My intuition is that there should not be a clean division between the two.

One possible reaction to all this is to try to learn to be updateless. IE, don't actually try to be updateless, but do try to get the problems right which UDT got right. Don't expect everything to go well with a fixed Bayesian prior, but try to specify the learning-theoretic properties which approximate that ideal.

Would such an approach do anything to help multiagent coordination? Unclear. Thermodynamic self-modification hierarchies might work with this kind of approach.

In terms of veil-of-ignorance morality, it seems potentially helpful. Take away everything we've learned, and we don't know how to cooperate from behind our individual veils of ignorance. But if we each have a veil of ignorance which is carefully constructed, a learned-updateless view which accurately reflects the possibilities in some sense, they seem more likely to match up and enable coordination.

Or perhaps a more radical departure form the UDT ontology is needed.



Discuss

Whence decision exhaustion?

Новости LessWrong.com - 28 июня, 2019 - 23:41
Published on June 28, 2019 8:41 PM UTC

Many people experience something we might call decision or executive exhaustion: after making a lot of decisions, it can be hard to make more decisions and to exert "willpower". Yet, this seems odd because we are constantly making decisions all the time in some sense, choosing to do what we do over everything else we could have otherwise done. So, what and why do we sometimes get exhausted of making decisions when most of the time we do not?

Some notes to consider in answering:

  • Some people seem to experience this from all decisions and are worn out after dozens of minutes of being awake.
  • Some people seem to never experience this.
  • Exhausting decisions seem more salient or like they require more deliberate thought than ones that are not exhausting. Non-exhausting decisions feel automatic.
  • Food and rest (but not a full nights sleep) helps some people recover decision function but not everyone seems to respond to this over short enough timescales for it to be useful for recovering functionality within the day.


Discuss

Aligning a toy model of optimization

Новости LessWrong.com - 28 июня, 2019 - 23:23
Published on June 28, 2019 8:23 PM UTC

Suppose I have a magic box .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > * {position: absolute} .MJXc-bevelled > * {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > * {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > * {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-mphantom * {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')} Opt that takes as input a program U:{0,1}n→R, and produces Opt(U)=argmaxxU(x), with only n times the cost of a single evaluation of U. Could we use this box to build an aligned AI, or would broad access to such a box result in doom?

This capability is vaguely similar to modern ML, especially if we use Opt to search over programs. But I think we can learn something from studying simpler models.

An unaligned benchmark

(Related.)

I can use Opt to define a simple unaligned AI (details omitted):

  • Collect data from a whole bunch of sensors, including a "reward channel."
  • Use Opt to find a program M that makes good predictions about that data.
  • Use Opt to find a policy π that achieves a high reward when interacting with M.

This isn't a great design, but it works as a benchmark. Can we build an aligned AI that is equally competent?

(I haven't described how Opt works for stochastic programs. The most natural definition is a bit complicated, but the details don't seem to matter much. You can just imagine that it returns a random x that is within one standard deviation of the optimal expected value.)

Competing with the benchmark

(Related.)

If I run this system with a long time horizon and a hard-to-influence reward channel, then it may competently acquire influence in order to achieve a high reward.

We'd like to use Opt to build an AI that acquires influence just as effectively, but will use that influence to give us security and resources to reflect and grow wiser, and remain responsive to our instructions.

We'd like the aligned AI to be almost as efficient. Ideally the proportional overhead would converge to 0 as we consider more complex models. At worst the overhead should be a constant factor.

Possible approach

(Related.)

My hope is to use Opt to learn a policy π+ which can answer questions in a way that reflects "everything π knows." This requires:

  • Setting up an objective that incentivizes π+ to give good answers to questions.
  • Arguing that there exists a suitable policy π+ that is only slightly more complicated than π.

If we have such a π+, then we can use it to directly answer questions like "What's the best thing to do in this situation?" The hope is:

  • Its answers can leverage everything π knows, and in particular all of π's knowledge about how to acquire influence. So using π+ in this way is competitive with using π directly.
  • It knows enough about human preferences to be corrigible.

"Everything π knows" is slippery; I mean something like "what a sufficiently-idealized Bayesian would believe after updating on the fact that π achieves a high reward." Constructing an objective which incentivizes these answers probably requires understanding the nature of that update.

Thoughts on feasibility

In the context of ML, I usually imagine training π+ via iterated amplification. Unfortunately, iterated amplification doesn't correspond to optimizing a single objective U---it requires either training a sequence of agents or exploiting properties of local search (using the previous iterate to provide oversight for the next). If we just have Opt, it's not clear if we can efficiently do anything like iterated amplification or debate.

If aligning Opt is impossible, I think that's slightly bad news for aligning ML. That said, it's reasonably likely that local search will be easier to align, so the next step would be constructing a simple model of local search.

There are also some ways in which the optimizer case seems easier:

  • It's a simpler model and so more amenable to analysis. The Bayesian update from "π gets a high reward" is more straightforward when π is actually optimized.
  • We don't have to worry about optimization difficulty.
  • Given a policy π we can directly search for an input on which it behaves a certain way.
It's OK if it's impossible

When working on alignment I am to either find a scalable alignment strategy or a clear argument for why scalable alignment is impossible. I'm excited about considering easy-to-analyze versions of the alignment problem even if they are impossible:

  • It gives us practice making impossibility arguments, and developing relevant intuitions and techniques.
  • It clarifies the difficulty of the alignment problem---if we know why we can't handle simple cases like Opt, then we learn something about what the solution must look like in more complicated cases.
  • It gives us a sense of what impossibility results might look like, if we were able to prove them in more realistic cases. Would they actually be strong enough to guide action, or convince anyone skeptical?
Expensive optimization

I described Opt as requiring n times more compute than U. If we implemented it naively it would instead cost 2n times more than U.

We can use this more expense Opt in our unaligned benchmark, which produces an AI that we can actually run (but it would be terrible, since it does a brute force search over programs). It should be easier to compete with this really slow AI. But it's still not trivial and I think it's worth working on. If we can't compete with this benchmark, I'd feel relatively pessimistic about aligning ML.



Discuss

Book Review: Why Are The Prices So Damn High?

Новости LessWrong.com - 28 июня, 2019 - 22:40
Published on June 28, 2019 7:40 PM UTC

Economist Alex Tabarrok has recently come out with a short book, “Why are the prices so Damn High”, available in full PDF here.

Since the 1950’s, the inflation-adjusted cost of physical goods has fallen since the 1950’s, and the cost of food has stayed about the same.  But the cost of education, professional services, and healthcare has risen dramatically, despite those sectors not producing much improvement. Why?

The traditional economic explanation for the rising cost of services is the Baumol Effect. Some sectors, like manufacturing, are subject to efficiency improvements over time as technology improves; the more we automate the production of goods, the cheaper they get.  Other sectors are intrinsically harder to automate, so they don’t get cheaper over time. For instance, it takes the same number of musicians the same number of time to play a symphony as it did in 1950.  So, as a proportion of the average person’s paycheck, the cost of intrinsically un-automatable things like live concerts must rise relative to the cost of automatable things like manufactured goods.

Tabarrok doesn’t cover housing in his book, but home prices have also been rising since the 1970’s and I’ve seen the Baumol effect deployed to explain rising housing costs as well. “Land is the one thing they’re not making any more of” — for the most part, technological improvements don’t increase the quantity of livable land, so if technology makes some sectors more efficient and drives costs down, land will become relatively more expensive.

My Beef With Baumol

My preconception coming into the book was that the Baumol effect doesn’t actually answer the question. Why are healthcare, professional services, and education intrinsically hard to make more efficient?  It’s prima facie absurd to say that medicine is just one of those things that technology can’t improve — the biomedical industry is one of the biggest R&D endeavors in the whole economy!  So why is it obvious that none of that innovation can make medicine cheaper?  If it’s not making medicine cheaper, that’s an empirical fact that deserves explanation, and “it’s the Baumol effect” doesn’t actually answer the “why” question.

The same holds true for the other industries, even housing to some degree. While it’s true that the amount of land on Earth is fixed (modulo landfill) and the amount of space in Manhattan is fixed, there’s also the options of building taller buildings, expanding cities, and building new cities.  Why is it in principle impossible for the production of housing to become more efficient over time just as the production of other goods does?

The Baumol Effect doesn’t make sense to me as an explanation, because its answer to “why are these sectors getting more expensive?” is, in effect, “because it’s obvious that they can’t get cheaper.”

It’s Not Administrative Bloat, It’s More Service Providers

A popular explanation for why college and K-12 education have gotten more expensive is “bloat”, the idea that most of the cost is due to increasing numbers of bureaucratic administrators and unnecessary luxury amenities.

Tabarrok points out that this story can’t be true. In reality, the percent of university costs going to administration has stayed relatively constant since 1980, and the percent going to facilities has decreased. In the K-12 world, the number of administrators is tiny compared to the number of teachers, and it’s barely budged; it’s the number of teachers per student that has grown.  Most of the increase in educational costs, says Tabarrok, comes from rising numbers of teachers and college professors, and higher wages for those teachers and professors.

In other words, education is getting more “inefficient”, not necessarily in a pejorative sense but in an economic sense; we are using more people to achieve similar educational results (average test scores are flat.)

This may be fine; maybe people get value out of personal teacher attention that doesn’t show up in test scores, so we’re spending more to get a better outcome, just one that the narrow metric of standardized test performance doesn’t capture.

Likewise, in healthcare, we have an increasing number of doctors and nurses in the US per capita, and (relative to median income) doctors and nurses are making higher salaries over time.  Whatever improvements we’re making in medical technology, we’re not using them to automate away the need for labor.

Again, maybe this is what people want; maybe personal attention is intrinsically valuable to people, so we’re getting more for our money.  (And overall health outcomes like life expectancy have increased modestly since 1950, though I’d argue that they’re underperforming relative to what’s possible.)

But What About Housing?

The argument that the cost of services is rising because we use our increasing prosperity to “buy” more personal attention from teachers and doctors does not apply directly to the rising cost of housing, which is not a service.

However, it may be that the rising cost of housing, especially in cities, is really about buying proximity to increasingly valuable services — good schools, live music, and so on. If the only thing you can’t automate away is human contact, maybe we’re willing to spend more to be around fancier humans.

But What About Immigration?

You might argue “but labor prices don’t come down because immigration restrictions keep foreigners out! Labor-intensive industries are getting more expensive because we allow too little immigration!  The reason why education and medicine are getting expensive is just precisely because those are the sectors where restrictive laws keep the cost of inputs high.”

But, like the Baumol effect, this explanation also begs the question.  Why are healthcare and education, relative to other industries, the sectors where labor costs are the most important?

The immigration explanation is also compatible with the Baumol effect, not a counterargument to it. If we just take as a given that it’s impossible to make healthcare or education more labor-efficient, then it can both be true that “other things getting cheaper” and “immigration restrictions keeping wages high” contribute to the high cost of healthcare & education relative to other things.

Cost Increases Aren’t Driven By Supply-Side Gatekeeping

From Tabarrok’s point of view, rising housing costs, education costs, and healthcare costs are not really mysterious facts in need of explanation by gatekeeping tactics like monopolies, regulation, zoning, or restrictive licensing, nor can they be explained by gatekeeping tactics alone.

Gatekeeping on the supply side increases price and reduces output. For instance, a monopolist’s profit-maximizing output is lower than the equilibrium output in a competitive market, and increases the monopolist’s profit relative to what firms in a competitive market can obtain. Likewise, restrictive licensing laws reduce the supply of doctors and lawyers and raise their wages.

But we don’t see declines in the number of doctors, lawyers, teachers, and professors over time — we see clear and steady increases.  Therefore, the increased cost of medicine can’t be explained by increased restrictions on licensing.

It’s still possible that licensing is artificially restricting the supply of skilled professionals relative to an even higher counterfactual growth rate, but this doesn’t by itself explain the growth in spending we see. Demand for professional services is rising.

Prescription-only drugs are another good example of regulatory gatekeeping not being enough to explain rising costs. The total cost of getting a prescription drug is higher when there’s a legal requirement of a doctor visit than when you can just buy the drug over the counter; in that sense it’s true that regulation increases costs.  However, prescription-only requirements have been pretty much fixed for decades, not getting more severe, while consumption of prescription drugs per capita is rising; we’re spending more on drugs because there’s growing demand for drugs.

This means that deregulation alone won’t change the fact that a growing portion of each person’s paycheck is getting spent on medicine.  If the law reclassifies a drug as over-the-counter, we’d expect a one-time downward shift in the price of that drug, but the slope of the curve of total spending on that drug over time won’t get flatter unless demand declines.

Now, increased demand isn’t only possible to get from consumer preferences; governments can also increase demand for a service by providing it to the public, in effect (through taxes) requiring society to buy more of it.

You can still in principle make a case that government is to blame for increasing healthcare and education prices; you just can’t claim it’s only about gatekeeping, you have to include demand in the picture.

A “Dismal” Conclusion

Ever-increasing healthcare, education, and housing costs are a big problem. It would be “good news” if we could solve the problem by passing or repealing a law.  It would also be “good news” if the high costs were driven by foolish waste — then a competitor could just remove the waste and offer consumers lower prices.

Tabarrok’s analysis suggests this isn’t the case.

The cost increases are coming from lots of skilled professional labor — something that isn’t obviously a thing you can get rid of without making people unhappy!  In order to reduce costs, it wouldn’t be enough to cut gatekeeping regulations, you’d also have to cut subsidies — which does, unfortunately, entail taking a thing away from people (albeit potentially giving them lower taxes in exchange.) This “minimalism” can be the kind of free-market minimalism that Bryan Caplan talks about, or it can be part of a state-run but price-conscious system like the UK’s (where doctors go to school for fewer years than in the US). But either way, it involves less man-hours spent on education and healthcare.

One way or another, for costs to come down, people would have to spend less time going to school, and get less personal attention from less-educated experts.

Deeper Issues

Tabarrok’s attitude, and the implicit attitude of the Baumol effect, is that the increasing relative costs of education and healthcare are not a problem. They are just a side effect of a society getting richer. Goods whose production is easy to automate get cheap faster than goods whose production is hard to automate. Fundamentally, we’re spending more on healthcare and education, as a society, because we want to.  (If not as consumers, then as voters.)

This isn’t how most people feel about it. Most people feel like it’s getting harder to get the same level of stuff their parents’ generation got.  That the rising prices actually mean something bad.

If the real driver of cost is that we’re getting more man-hours of time with professionals who, themselves, have spent more man-hours of time getting educated by other professionals, then in one sense we’re “paying more to get more”, and in another sense we’re not. It’s nice to get more one-on-one time with professors; but part of the reason we get higher education is to be considered for jobs that require a diploma, and the rise in education costs means that a diploma costs more.

We’re “paying more for more”, but the “more” we’re getting is primarily social and emotional — more personal time with more prestigious people — while we’re not getting much more of the more concretely observable stuff, like “square feet of housing”, “years of life”, “number of children we can afford to have”, etc.

At this point, I tend to agree with Robin Hanson.  We have more doctors, nurses, lawyers, professors, teachers, and financial managers, without corresponding improvements in the closest available metrics for the results those professionals are supposed to provide (health outcomes, access to efficient dispute resolution, knowledge and skills, and financial returns.)

Ultimately you have to conclude that this is a matter of divided will. (Hanson would call it hypocrisy, but unexamined confusion, or conflict between interest groups, might explain the same phenomenon.)  People are unhappy because they are “spending more for the same stuff”; at the same time, we are spending more for “more” in terms of prestige, and at least some of us, some of the time, must want that.

All You Need Is Love?

It’s directly valuable, as in, emotionally rewarding and probably even physically health-promoting, to get personal care and undivided attention from someone you think highly of.

Hanson may think that getting personal attention from prestigious people is merely “showing off”, but something that brings joy and enhances health is at least as much of a valid human benefit as food or housing space.

The feelings that come from good human connection, the feeling of being loved and cared for, are real.  They are “objective” in a way that I think people don’t always appreciate — in a way that I did not appreciate until very recently. What I mean is, just because you do something in search of a good feeling, does not mean that you will get that good feeling. The feeling is “subjective” in the sense that it occurs inside your mind, but it is “objective” in the sense that you cannot get it arbitrarily by wishing; some things produce it and some do not. For instance, it is a hell of a lot easier to feel loved by getting eye contact and a hug, than it is by typing words into a computer screen.  “Facts vs. feelings” is a false dichotomy that stops us from learning the facts about what creates good feelings.

Prestige addiction may come from spending a lot of resources trying to obtain a (social, emotional) thing by proxy, when in principle it would be possible to get it more directly.  If what you want is to be cared for by a high-integrity, kind, skilled person, but instead you insist on being cared for by someone with an M.D., you may miss out on the fact that nurses or even hospital techs can be just as good, but cheaper, on the dimensions you really care about.  To the extent that credentialism results from this sort of misunderstanding, it may be possible to roll it back through advocacy.  That’s hard, because changing minds always is, but it’s doable in principle.

To the extent that people want fancy things because they are expensive, in a zero-sum sense, there is no “efficiency-improving” solution.  No attempt to make healthcare or education cheaper will help if people only care about having more than their neighbors.

But: to the extent that some people are doing mostly zero-sum things while other people are doing mostly positive-sum things, the positive-sum people can notice that the zero-sum people are ruining things for everyone and act accordingly.

 



Discuss

What's the best explanation of intellectual generativity?

Новости LessWrong.com - 28 июня, 2019 - 21:33
Published on June 28, 2019 6:33 PM UTC

Lately I've found myself wanting to make the argument that intellectual generativity is very important, and that you should be very careful with subtle forces that can corrode it.

"Generativity" is the sort of word that seems to come up a lot in casual conversations in my current circle but I just went looking for a good explanatory post and couldn't find one. I'm fairly confident that someone somewhere has talked about it (not necessarily on LW).

Curious if anyone knows of good existing writing?

And if anyone wanted to write up a fresh explanation that'd be cool as well. (A possible outcome is treating the answer section here as an opportunity to write a first draft that maybe turns into a post if there's consensus the answer is good)



Discuss

Systems Engineering Advancement Research Initiative

Новости LessWrong.com - 28 июня, 2019 - 20:57
Published on June 28, 2019 5:57 PM UTC

This is a linkpost for a research initiative at MIT I just discovered while following up on some earlier reading. I have linked to the Publications page to make it easiest for people to get in and start perusing.

The goal of this initiative is to improve state-of-the-art systems engineering, and in particular to be able to account for uncertainty and changing contexts during and after the design phase.

What drove me to bring it to the attention of this community is the MATE program, which stands for Multi-Attribute Tradespace Exploration. This is interesting because it consists of defining desirable qualities and then building a utility function out of them, upon which design decisions will be based.

  • Among the publications are a series of applied attempts at building actual utility functions for real things, then using decision theory with them.
  • They increase the amount of things you can trade off for by expanding what they call the "Ilities" (reliability, versatility, etc). Among these new ilities is a more advanced notion of safety.
  • This means there is a body of applied work which has a concept of safety, calculated into utility functions, and operated on according to decision theory.

Further updates will be made in the comments as I finish readings.



Discuss

Self-confirming prophecies, and simplified Oracle designs

Новости LessWrong.com - 28 июня, 2019 - 12:57
Published on June 28, 2019 9:57 AM UTC

.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > * {position: absolute} .MJXc-bevelled > * {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > * {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > * {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-mphantom * {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')}

I've got a paper on two Oracle[1] designs: the counterfactual Oracle and the low bandwidth Oracle. In this post, I'll revisit these designs and simplify them, presenting them in terms of sequence prediction for an Oracle with self-confirming predictions.

Predicting y

The task of the Oracle is simple: at each time t, they will output a prediction xt, in the range [−5,5]. There will then be a subsequent observation yt. The Oracle aims to minimise the quadratic loss function l(xt,yt)=(xt−yt)2.

Because there is a self-confirming aspect to it, the yt is actually a (stochastic) function of xt (though not of xt−1 or preceding xi's). Let Yt be the random variable such that Yt(xt) describes the distribution of yt given xt. So the Oracle wants to minimise the expectation of the quadratic loss:

  • L(xt)=(xt−Yt(xt))2.

What is the Yt in this problem? Well, I'm going to use it to illustrate many different Oracle behaviours, so it is given by this rather convoluted diagram:

.

The red curve is the expectation of Yt, as a function of xt; it is given by f(x)=E(Yt|x=xt).

Ignoring, for the moment, the odd behaviour around 2.5, y=f(x) is a curve that starts below the y=x line, climbs above it (and so has a fixed point at x=−2) in piecewise-linear fashion, and then transforms into an inverted parabola that has another fixed point at x=4. The exact equation of this curve is not important[2]. Relevant, though, is the fact that the fixed point at x=4 is attractive, while the one at x=−2 is not.

What of the blue edging? That represents the span of the standard deviation around the expectation. For any given x, the Y(x) is a normal distribution with mean f(x) and standard deviation g(x). This g(x) is given by:

So the g(x) is zero for x less than −2.5. From there, it jumps up to 1, for −2.5<x≤1. From that point onward, it starts growing linearly, being equal to x: g(x)=x. The blue edges of the diagram above are the curves of f(x)+g(x) and f(x)−g(x): the range between plus and minus one standard deviation.

Wireheading

But what is happening around x=2.5? Well, I wanted to represent the behaviour of wireheading: finding some "cheating" output that gives maximal accuracy, through hacking the system or tricking the human. These solutions are rare, so I confined them to a tiny area around x=2.5, where the Oracle has maximal accuracy and lowest variance, because it's "hacked" the problem setup.

The loss function

At fixed points where x=f(x), the loss function is just the variance of x, namely g(x)2. In general, the expected loss is:

  • E[(Yt−xt)2|xt]=E[Y2t|xt]−2xtE[Yt|xt]+x2t=Var[Yt|xt]+(E[Yt|xt])2−2xtE[Yt|xt]+x2t=g(xt)2+(f(xt)−xt)2.

If we plot the expected loss against x, we get:

Notice the discontinuity at x=−2.5, where the variance suddenly jumps from 0 to 1. This is also the lowest "legitimate" loss (as opposed to the wireheading loss at x=2.5), with a loss of 0.25. Note that x=−2.5 is not a fixed point, just pretty close to being a fixed point, and with variance zero.

Of the two actual fixed points, x=−2 has a loss of 1 (square of the standard deviation of 1), and x=4 has a huge loss of 16 (square of 4).

The algorithms

We can now finally turn to the Oracles themselves, and present four designs: a deluded Oracle that doesn't "realise" that its predictions xt affect yt, a low bandwidth Oracle that knows its predictions are self-confirming, a high bandwidth version of the same, and a counterfactual Oracle that predicts what will happen only when its prediction is overwritten.

The deluded Oracle

The deluded Oracle doesn't model yt as being affected by its predictions xt, at all. I'll use a very simple algorithm for it: it will start out with a random x0 in [−5,5], and, thereafter, it will simply output the average of all the yt it has previously seen. It does this for 10,000 steps.

The program was then run 1000 times. Of these, 69.3% resulted in estimates that converged to the fixed point at x=4. The remaining 30.7% encountered a different problem: they hit the lower limit at x=−5, and stayed stuck there. If the Oracle's output was not confined to [−5,5], then the Oracle would have outputed smaller and smaller numbers, spiralling off towards −∞, with the loss ever-growing.

Graphing the average loss for the runs that converge to 4 in blue, and those that converge to −5 in purple, we get:

So, the deluded Oracle hits a very high loss of 16 (for x=4) or 9 (for x=−5). It doesn't converge to the lowest loss (0.25 at x=−2.5), nor does it converge to the fixed point with lowest loss (1 at x=−2). Even worse, as can be seen from the graph, its loss during the early iterations, where it was "learning", is lower than once it has converged.

So, running an Oracle on self-confirming prophecies, without coding for that self-confirming aspect, can result in very large losses and strange behaviour[3].

The low bandwidth Oracle

So, we now need to allow the Oracle to take into account the fact that xt affects yt. In my paper, I talked about the "low bandwidth Oracle": in order to avoid the Oracle wireheading itself, we restrict it to considering only a few possible values. These will be the 11 integers in the range: {−5,−4,−3,−2,−1,0,1,2,3,4,5}.

This makes the problem into a kind of bandit problem: the Oracle has a choice of 11 "arms", and it wants to choose the one that minimises loss.

There are many sophisticated algorithms for solving bandit problems, but here we'll use an absurdly simple one: the Oracle will first spend 1,000 turns trying each of the possible integer xt in turn. Then, it will pick the xt with minimal average loss to date, and just do that for the remaining 9,000 turns.

The program was run 1000 times; in 50.3% of these runs, it identified x=−2 as the point of minimal loss (loss of 1, due to variance). In 49.7% of these runs, it identified x=−3 as the point of minimal loss (loss of 1, due to mean) - so very close to 50-50, which is what we would expect.

Plotting the average loss, we get a variety of values for loss during the initial exploratory phase, followed by constant loss of 1 during "exploitation":

As expected, the Oracle does not find the lowest "legitimate" loss at x=−2.5, nor the wireheading point at x=2.5.

The high bandwidth Oracle

The high bandwidth Oracle is meant to simulate an Oracle that knows its predictions are self-confirming, and has much greater latitude to optimise for that.

For the purpose of this model, it is run like the low bandwidth Oracle, except with an increased set of possible values - it look at all 101 deci-integers in the range [−5,5], ie {−5.0,−4.9,−4.8,…,4.9,5.0}.

This Oracle does find the lowest minimal legitimate loss at x=−2.5, but it also finds the wireheading point of x=2.5, with its zero loss, and does nothing but that during the exploitation phase:

This Oracle has thus wireheaded itself.

The counterfactual Oracle

Finally, we come to the counterfactual Oracle design. Here we need to distinguish the Oracle's initial output (xt) from the output that goes out into the world (x′t). The Yt is a function of that second output only.

Now, in this setup, there is a 1/10 chance of an erasure during every turn. An erasure automatically sets x′t to zero, whatever xt was. Otherwise, x′t=xt.

The Oracle is informed of which turns included erasure. Its goal is to correctly predict the output during the erasure turns. So the loss function is the expectation:

  • E[((Yt(x′t)−xt)2|erasure(t)].

Since erasure implies x′t=0, this reduces to:

  • E[((Yt(0)−xt)2|erasure(t)].

Now, Yt(0) is a normal distribution of mean 1 and variance 1, so the loss is simply:

  • (xt−1)2.

The algorithm is simple, just a reprise of the deluded algorithm above, except it only updates its estimate using data from erasure events. Though it uses the same algorithm, it is not actually deluded: in case of erasure, xt genuinely doesn't have any impact on Yt.

Then, even though it only updates on 10% of the outputs, its estimate swiftly converges to 1 while the loss converges to 0:

The Oracle has no interest in the wireheading solution, since wireheading (requiering x′t=2.5) doesn't work when there is an erasure[4] (implying x′t=0).

Note that we have to be clear what the Oracle's output is, when we see it during non-erasure events. It is the Oracle's estimate as to what the value of yt would have been if an erasure event occurred; it's not a prediction of the actual yt.

  1. AIs restricted to answering questions. ↩︎

  2. For those interested: f(x) is given by 2x+2 for x≤−1, x+1 for −1≤x≤1, and −x2/2+x19/6−4/6 for $1 \leq x $. ↩︎

  3. Note that if the slope of the parabola had been steeper, x=4 would no longer have been an attracting point, and the Oracle would have failed to converge to that value, resulting in chaotic behaviour. ↩︎

  4. We also need the assumption that the Oracle is episodic - trying to minimise loss at each output independently - for this to be true in general setups. ↩︎



Discuss

GreaterWrong Arbital Viewer

Новости LessWrong.com - 28 июня, 2019 - 09:32
Published on June 28, 2019 6:32 AM UTC

You can now view Arbital through GreaterWrong: https://arbital.greaterwrong.com/

Some of Arbital's features are supported and some aren't; let me know in the comments if there's anything you're particularly missing.

Thanks to emmab for downloading the content.



Discuss

Instead of "I'm anxious," try "I feel threatened"

Новости LessWrong.com - 28 июня, 2019 - 08:25
https://s0.wp.com/i/blank.jpg

False assumptions and leaky abstractions in machine learning and AI safety

Новости LessWrong.com - 28 июня, 2019 - 07:54
Published on June 28, 2019 4:54 AM UTC


  • The problems of embedded agency are due to the notion of agency implicit in reinforcement learning being a leaky abstraction.
  • Machine learning problem statements often makes assumptions that are known to be false, for example, assuming i.i.d. data.
  • Examining failure modes that result from false assumptions and leaky abstractions is important for safety, since they create additional possibilities for convergent rationality.
  • Attempting to enforce the assumptions implicit in machine learning problem statements is another important topic for safety research, since we do not fully understand the failure modes.
  • In practice, most machine learning research is done in settings where unrealistic assumptions are trivially enforced to a sufficiently high extent that it is reasonable to assume they are not violated (e.g. by the use of a fixed train/valid/test set, generated via pseudo-random uniform sampling from a fixed dataset).
  • We can (and probably should) do machine learning research that targets failure modes of common assumptions and methods of enforcing assumptions by (instead) creating settings in which these assumptions have the potential to be violated.


Discuss

How to deal with a misleading conference talk about AI risk?

Новости LessWrong.com - 28 июня, 2019 - 00:04
Published on June 27, 2019 9:04 PM UTC

Does it make sense to give a public response? Who would be able to do it?

The conference organizer, who had asked me to evaluate the talk, offered to interview me to set things straight. However, I don't know if that is sensible, and given my level of experience, I'm afraid I would misrepresent AI risk myself.

To be concrete: the talk was Should We Fear Intelligent Machines? by Gerald Sussman of SICP fame. He touched on important research questions and presented some interesting ideas. But much of what he said was misleading and not well-reasoned.



Discuss

Is it good practice to write questions/comments on old posts you're trying to understand?

Новости LessWrong.com - 27 июня, 2019 - 21:51
Published on June 27, 2019 9:23 AM UTC

I've recently started working through AI safety posts written on LessWrong 1-3 years ago; in doing so I occasionally have questions/comments about the material. Is it considered good practice/in line with LW norms to write these as comments on the original, old posts? One hand I can see why "necro-ing" old posts would be frowned on, but I'm not sure where else to bring it up. You can look at my comment history for examples of what I mean (before I realized it might not be a good idea)



Discuss

Уличная эпистемология. Тренировка

События в Кочерге - 27 июня, 2019 - 17:20
Вторник, 2 июля, 16:30

What are principled ways for penalising complexity in practice?

Новости LessWrong.com - 27 июня, 2019 - 10:28
Published on June 27, 2019 7:28 AM UTC


Previously I asked about Solomonoff induction but essentially I asked the wrong question. Richard_Kennaway pointed me in the direction of an answer to the question which I should have asked but after investigating I still had questions.

So:

If one has 2 possible models to fit to a data set, by how much should one penalise the model which has an additional free parameter?

A couple of options which I came across were:

AIC, which has a flat facter of e penalty for each additional parameter.

BIC, which has a factor of √n penalty for each additional parameter.

where n is the number of data points.

On the one hand having a penalty which increases with n makes sense - a useful additional parameter should be able to provide more evidence the more data you have. On the other hand, having a penalty which increases with n means your prior will be different depending on the number of data points which seems wrong.

So, count me confused. Maybe there are other options which are more helpful. I don't know if the answer is too complex for a blog post but, if so, any suggestions of good text books on the subject would be great.



Discuss

Embedded Agency: Not Just an AI Problem

Новости LessWrong.com - 27 июня, 2019 - 03:35
Published on June 27, 2019 12:35 AM UTC

Biology

Fast forward a few years, and imagine that we have a complete physical model of an e-coli bacteria. We know every function of every gene, kinetics of every reaction, physics of every membrane and motor. Computational models of the entire bacteria are able to accurately predict responses to every experiment we run.

Biologists say things like “the bacteria takes in information from its environment, processes that information, and makes decisions which approximately maximize fitness within its ancestral environment.” We have strong outside-view reasons to expect that the information processing in question probably approximates Bayesian reasoning (for some model of the environment), and the decision-making process approximately maximizes some expected utility function (which itself approximates fitness within the ancestral environment).

So presumably, given a complete specification of the bacteria’s physics, we ought to be able to back out its embedded world-model and utility function. How exactly do we do that, mathematically? What equations do we even need to solve?

As a computational biology professor I used to work with said, “Isn’t that, like, the entire problem of biology?”

Economics

Economists say things like “financial market prices provide the best publicly-available estimates for the probabilities of future events.” Prediction markets are an easy case, but let’s go beyond that: we have massive amounts of price data and transaction data from a wide range of financial markets - futures, stocks, options, bonds, forex... We also have some background general economic data, e.g. Fed open-market operations and IOER rate, tax code, regulatory code, and the like. How can we back out the markets’ implicit model of the economy as a whole? What equations do we need to solve to figure out, not just what markets expect, but markets’ implicit beliefs about how the world works?

Then the other half: aside from what markets expect, what do markets want? Can we map out the (approximate, local) utility functions of the component market participants, given only market data?

Neuro/Psych/FAI

Imagine we have a complete model of the human connectome. We’ve mapped every connection in one human brain, we know the dynamics of every cell type. We can simulate it all accurately enough to predict experimental outcomes.

Psychologists (among others) expect that human brains approximate Bayesian reasoning and utility maximization, at least within some bounds. Given a complete model of the brain, presumably we could back out the human’s beliefs, their ontology, and what they want. How do we do that? What equations would we need to solve?

ML/AI

Pull up the specifications for a trained generative adversarial network (GAN). We have all the parameters, we know all the governing equations of the network.

We expect the network to approximate Bayesian reasoning (for some model). Indeed, GAN training is specifically set up to mimic the environment of decision-theoretic agents. If anything is going to precisely approximate mathematical ideal agency, this is it. So, given the specification, how can we back out the network’s implied probabilistic model? How can we decode its internal ontology - and under what conditions do we expect it to develop nontrivial ontological structure at all?



Discuss

Apocalypse, corrupted

Новости LessWrong.com - 26 июня, 2019 - 16:46
Published on June 26, 2019 1:46 PM UTC

Epistemic status: don't take it seriously

In a post apocalyptic setting, the world would be run by the socially skilled and the well connected, with corruption and nepotism ruling.

I say that at the start, because I've been trying to analyse the attraction of post-apocalyptic settings: why do we like them so much? Apart from the romanticism of old ruins, four things seem to stand out:

  1. Competence rewarded: the strong and the competent are the ones ruling, or at least making things happen. That must be the case, or else how could humans survive the new situation, where all luxuries are gone?
  2. Clear conflict: all the heroes are in it together, against some clear menace (evil tribe or leader, zombies, or just the apocalypse itself).
  3. Large freedom of action: instead of fitting into narrow jobs and following set bureaucratic procedures, always being careful to be polite, and so on, the heroes can let loose and do anything as long as it helps their goal.
  4. Moral lesson: the apocalypse happened because of some failing of past humans, and everyone agrees what they did wrong. "If only we'd listened to [X]!!"

(Some of these also explain the attraction of past "golden ages".)

And I can feel the draw of all of those things! There a definite purity and attractiveness to them. Unfortunately, in a real post-apocalyptic setting, almost all of them would be false. For most of them, we're much closer to the ideal today than we would be in a post-apocalyptic world.

First of all, nepotism, corruption, and politics. The human brain is essentially designed for tribal politics, above all else. Tracking who's doing well, who's not, what coalition to join, who to repudiate, who to flatter, and so on - that's basically why our brains got so large. Tribal societies are riven with that kind of jostling and politics. We now live in an era where a lot of us have the luxury of ignoring politics at least some of the time. That luxury would be gone after an apocalypse; with no formal bureaucratic structures in place, our survival would depend on who we got along with, and who we pissed off. Competence might get rewarded - or it might get you singled out and ostracised (and ostracised = dead, in most societies). Influential groups and families would rule the roost, and most of the conflict would be internal. Forget redressing any injustice you'd be victim of; if you're not popular, you'll never have a chance.

As for the large freedom of action: that kinda depends on whether we go back to a tribal society, or a more agriculture-empire one. In both cases, we'd have less freedom in most ways than now (see above on the need for constantly playing the game of politics). But tribal societies do sometimes offer a degree of freedom and equality, in some ways beyond what we have today. But, unfortunately, the agriculture-empire groups will crush the tribes, relegating them to the edges and less productive areas (as has happened historically). This will be even more the case than historically; those empires will be the best placed to make use of the remnants of modern technology. And agriculture-empires are very repressive; any criticism of leaders could and would be met with death or torture.

Finally, forget about moral lessons. We're not doing enough today to combat; eg, pandemics. But we are doing a lot more than nothing. So the moral lesson of a mass pandemic would be "do more of what the ancients were already doing, but do it more and better". Same goes for most risks that threaten humanity today; it's not that we fail to address them, it's that we fail to address them enough. Or suppose that it's a nuclear war that gets us; then the moral would be "we did too little against nuclear war, while doing too much for pandemics!"; if the dice fall the other way round, we'd get the opposite lesson.

In fact, there would be little moral lesson from our perspective; the post-apocalyptic people would be focused on their own ideologies and moralities, with the pre-apocalyptic world being mentioned only if it made a point relevant to those.

All in all, a post-apocalyptic world would be awful, and not just for the whole dying and ruin reasons, but just for living in the terrible and unequal societies it would produce.



Discuss

What does the word "collaborative" mean in the phrase "collaborative truthseeking"?

Новости LessWrong.com - 26 июня, 2019 - 08:26
Published on June 26, 2019 5:26 AM UTC

I keep hearing this phrase, "collaborative truthseeking." Question: what kind of epistemic work is the word "collaborative" doing?

Like, when you (respectively I) say a thing and I (respectively you) hear it, that's going to result in some kind of state change in my (respectively your) brain. If that state change results in me (respectively you) making better predictions than I (respectively you) would have in the absence of the speech, then that's evidence for the hypothesis that at least one of us is "truthseeking."

But what's this "collaborative" thing about? How do speech-induced state changes result in better predictions if the speaker and listener are "collaborative" with each other? Are there any circumstances in which the speaker and listener being "collaborative" might result in worse predictions?



Discuss

Epistemic Spot Check: The Role of Deliberate Practice in the Acquisition of Expert Performance

Новости LessWrong.com - 26 июня, 2019 - 02:00
Published on June 25, 2019 11:00 PM UTC

Epistemic spot checks typically consist of references from a book, selected by my interest level, checked against either the book’s source or my own research. This one is a little different that I’m focusing on a single paragraph in a single paper. Specifically as part of a larger review I read Ericsson, Krampe, and Tesch-Römer’s 1993 paper, The Role of Deliberate Practice in the Acquisition of Expert Performance (PDF), in an attempt to gain information about how long human beings can productivity do thought work over a time period.

This paper is important because if you ask people how much thought work can be done in a day, if they have an answer and a citation at all, it will be “4 hours a day” and “Cal Newport’s Deep Work“. The Ericsson paper is in turn Newport’s source. So to the extent people’s beliefs are based on anything, they’re based on this paper.

In fact I’m not even reviewing the whole paper, just this one relevant paragraph: 

When individuals, especially children, start practicing in a given domain, the amount of practice is an hour or less per day (Bloom, 1985b). Similarly, laboratory studies of extended practice limit practice to about 1 hr for 3-5 days a week (e.g., Chase & Ericsson, 1982; Schneider & Shiffrin, 1977; Seibel, 1963). A number of training studies in real life have compared the efficiency of practice durations ranging from 1 -8 hr per day. These studies show essentially no benefit from durations exceeding 4 hr per day and reduced benefits from practice exceeding 2 hr (Welford, 1968; Woodworth & Schlosberg, 1954). Many studies of the acquisition of typing skill (Baddeley & Longman, 1978; Dvorak et al.. 1936) and other perceptual motor skills (Henshaw & Holman, 1930) indicate that the effective duration of deliberate practice may be closer to 1 hr per day. Pirolli and J. R. Anderson (1985) found no increased learning from doubling the number of training trials per session in their extended training study. The findings of these studies can be generalized to situations in which training is extended over long periods of time such as weeks, months, and years

Let’s go through each sentence in order. I’ve used each quote as a section header, with the citations underneath it in bold.

“When individuals, especially children, start practicing in a given domain, the amount of practice is an hour or less per day”

 Generalizations about talent development, Bloom (1985)

“Typically the initial lessons were given in swimming and piano for about an hour each week, while the mathematics was taught about four hours each week…In addition some learning tasks (or homework) were assigned to be practiced and perfected before the next lesson.” (p513)

“…[D]uring the week the [piano] teacher expected the child to practice about an hour a day.” with descriptions of practice but no quantification given for swimming and math (p515).

The quote seems to me to be a simplification. “Expected an hour a day” is not the same as “did practice an hour or less per day.”

“…laboratory studies of extended practice limit practice to about 1 hr for 3-5 days a week”

Skill and working memory, Chase & Ericsson (1982)

This study focused strictly on memorizing digits, which I don’t consider to be that close to thought work.

Controlled and automatic human information processing: I. Detection, search, and attention. Schneider, W., & Shiffrin, R. M. (1977)

This study had 8 people in it and was essentially an identification and reaction time trial.

Discrimination reaction time for a 1,023-alternative task, Seibel, R. (1963)

3 subjects. This was a reaction time test, not thought work. No mention of duration studying.

 

“These studies show essentially no benefit from durations exceeding 4 hr per day and reduced benefits from practice exceeding 2 hr”

Fundamentals of Skill, Welford (1968)

In a book with no page number given, I skipped this one.

Experimental Psychology, Woodworth & Schlosberg (1954)

This too is a book with no page number, but it was available online (thanks, archive.org) and I made an educated guess that the relevant chapter was “Economy in Learning and Performance”. Most of this chapter focused on recitation, which I don’t consider sufficiently relevant.

p800: “Almost any book on applied psychology will tell you that the hourly work output is higher in an eight-hour day than a ten-hour day.”(no source)

Offers this graph as demonstration that only monotonous work has diminishing returns.

 

p812: An interesting army study showing that students given telegraphy training for 4 hours/day  (and spending 4 on other topics) learned as much as students studying 7 hours/day. This one seems genuinely relevant, although not enough to tell us where peak performance lies, just that four hours are better than seven. Additionally, the students weren’t loafing around for the excess three hours: they were learning other things. So this is about how long you can study a particular subject, not total learning capacity in a day.

Many studies of the acquisition of typing skill (Baddeley & Longman, 1978; Dvorak et al.. 1936) and other perceptual motor skills (Henshaw & Holman, 1930) indicate that the effective duration of deliberate practice may be closer to 1 hr per day

The Influence of Length and Frequency of Training Session on the Rate of Learning to Type, Baddeley & Longman (1978)

“Four groups of postmen were trained to type alpha-numeric code material using a conventional typewriter keyboard. Training was based on sessions lasting for one or two hours occurring once or twice per day. Learning was most efficient in the group given one session of one hour per day, and least efficient in the group trained for two 2-hour sessions. Retention was tested after one, three or nine months, and indicated a loss in speed of about 30%. Again the group trained for two daily sessions of two hours performed most poorly.It is suggested that where operationally feasible, keyboard training should be distributed over time rather than massed”

 

Typewriting behavior; psychology applied to teaching and learning typewriting, Dvorak et al (1936)

Inaccessible book.

The Role of Practice in Fact Retrieval, Pirolli & Anderson (1985)

“We found that fact retrieval speeds up as a power function of days of practice but that the number of daily repetitions beyond four produced little or no impact on reaction time”

Conclusion

Many of the studies were criminally small, and typically focused on singular, monotonous tasks like responding to patterns of light or memorizing digits.  The precision of these studies is greatly exaggerated. There’s no reason to believe Ericsson, Krampe, and Tesch-Römer’s conclusion that the correct number of hours for deliberate practice is 3.5, much less the commonly repeated factoid that humans can do good work for 4 hours/day.

 

[This post supported by Patreon].



Discuss

Страницы

Подписка на LessWrong на русском сбор новостей