
LessWrong.com News

A community blog devoted to refining the art of rationality

One Doubt About Timeless Decision Theories

Published on Mon Oct 22 2018 01:39:57 GMT+0000 (UTC)

Timeless Decision Theories (including variants like FDT, UDT, ADT, etc.) provide a rather elegant method of solving a broader class of problems than CDT. While CDT requires the outcomes of decisions to be independent of the individual making the decision (in such a way that causal surgery on a single node is valid), timeless decision theories can handle any problem where the outcome is a function of the choice selected (even if this occurs indirectly as a result of a prediction).

(Epistemic Status: Thoughts for further investigation)

This is an excellent reason to investigate these decision theories, yet we need to make sure that we don't get blinded by insight. Before we jump to conclusions by embracing this improvement, it is worthwhile considering what we give up. Perhaps there are other classes of problems that we might wish to optimise over, which we can no longer optimise over once we have included this whole class?

After all, there is a sense in which there is no free lunch. As discussed in the TDT paper, for any algorithm, we could create a situation where there is an agent that specifically punishes that algorithm. The usual response is that these situations are unfair, but a) the universe is often unfair, and b) there are plausible situations where the algorithm chosen influences the outcome in slightly less unfair ways.

Expanding on b), there are times when you want to be predictable to simulators. One consideration that I haven't seen raised is that some agents may wish to eliminate agents that they can't predict. Further, rather than facing a perfect predictor, it seems at least a few orders of magnitude more likely that you'll face an imperfect predictor. Modelling these as X% perfect predictor, (100−X)% random predictor will usually be implausible, as predictors won't have a uniform success rate over all algorithms. These situations are slightly more plausible for scenarios involving AI, but even if you perfectly know an agent's source code, you are unlikely to know its exact state in terms of observations.

It therefore seems that the "best" decision theory algorithm might be dominated by factors other than optimal performance on the narrow class of problems TDT operates on. It may very well be the case that TDT is ultimately taking the right approach, but even if this is the case, I thought it was worthwhile sketching out these concerns so that they can be addressed.



Discuss

List of previous prediction market projects

7 hours 49 minutes ago

Verbal Zendo

Published on Sun Oct 21 2018 21:51:11 GMT+0000 (UTC)

You're probably familiar with the 2-4-6 test. Well, I made an automated version of the verbal version of the Buddhist version of that. In essence: there's a rule that declares some phrases "have the buddha-nature" while others don't, and you get to keep testing ideas against it until you're pretty sure you've figured out the rule; then, you can check to see if you were right. If that sounds interesting to you, it can be played in-browser here.

Warning: Don't let yourself get nerd-sniped. My puzzles are not optimised for fun or meaning; if you find yourself spending more than an hour or so on this weird toy of mine, just look at the answers and move on.
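For readers who want the flavour of such a game without visiting the site, here is a minimal sketch with an invented hidden rule (the site's real rules are, of course, not shown here):

```python
# Minimal sketch of a hidden-rule guessing game in the spirit of the post.
# The rule below is invented for illustration; the actual puzzles differ.
def has_buddha_nature(phrase: str) -> bool:
    # hypothetical hidden rule: the phrase contains a doubled letter
    return any(a == b for a, b in zip(phrase, phrase[1:]))

def probe(phrases):
    """Test candidate phrases against the hidden rule, as a player would."""
    return {p: has_buddha_nature(p) for p in phrases}

print(probe(["moon", "river", "green hill"]))
```

The player's job is to keep calling `probe` with cleverly chosen phrases until they are confident they can state the rule, mirroring the 2-4-6 confirmation-bias exercise.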



Discuss

Addressing three problems with counterfactual corrigibility: bad bets, defending against backstops, and overconfidence.

Published on Sun Oct 21 2018 12:03:12 GMT+0000 (UTC)

Summary: I propose that counterfactual corrigibility can be slightly adjusted to address known problems with bad bets and defending against backstops. This adjusted version has learning-esque behaviour, and hence it can have problems with overconfidence about the button state. A new proposal, robust counterfactual corrigibility, is introduced to simultaneously address all three problems.

Counterfactual corrigibility: a current proposal and its weaknesses

Note: this section recaps Jessica's discussion of causal counterfactual corrigibility from here.

Causally counterfactual agents consider each of the directions they might receive, and then ensure that if the world were adjusted so that they received that direction, they would fulfil the corresponding goal.

They can be defined on a causal graph that contains the following nodes:

  • A, a policy that the AI may optimize
  • B, a communication channel
    • for simplicity, we will let B be a button whose states are pressed, P, and unpressed, ¬P.
  • C, a set of states on which performance is judged.
    • we will let C consist of some number of chairs produced, a positive integer, and whether the AI has shut down, a boolean
  • Some other states that describe the way the rest of the world works.

A counterfactually corrigible AI would select its policy π∈A to maximize the following score:

v_fixed_π′(π) = a ⋅ E[U_P | do(B=P), π] + (1−a) ⋅ E[U_¬P | do(B=¬P), π], where a ∈ (0,1)

where

U_P = −k if there is no shutdown, and 0 if there is a shutdown, with k ∈ R⁺, and

U_¬P = number of chairs.

This proposal has some major advantages: it allocates a constant weight a to the aim of shutting down if commanded to do so. So, all else being equal, it will allocate value to being able to observe and follow a shutdown command. Moreover, it will not manipulate B, because the different counterfactuals are given constant weight irrespective of the chosen policy π.
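As a toy illustration (my own stub numbers, not the post's), the fixed score can be computed for two candidate policies: one that shuts down when pressed, and one that ignores the button:

```python
# Toy computation of v_fixed (stub utilities, assumptions of mine): a
# compliant policy shuts down when pressed (U_P = 0); a non-compliant
# one keeps manufacturing and scores U_P = -k.
def v_fixed(a, eu_pressed, eu_unpressed):
    assert 0 < a < 1
    return a * eu_pressed + (1 - a) * eu_unpressed

k = 100
compliant     = v_fixed(0.5, 0, 10)    # shuts down; makes 10 chairs otherwise
non_compliant = v_fixed(0.5, -k, 10)   # ignores the button
print(compliant, non_compliant)
```

Because the pressed counterfactual always carries weight a, the compliant policy strictly dominates, which is the incentive the proposal is after.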

However, this proposal has some weaknesses:

  • Bad bets: Let the AI have a finite amount of resources. If the button is pressed, this just switches the goal from chair-manufacturing to another manufacturing task for which resources are equally useful, i.e. U_P = number of tables. Let there be a variable H, where H=F means the human has put their finger on the button. H almost always causes B (H=F and B=P have 99% correlation). Suppose that the AI has the option, after viewing H=F, of paying for a coupon that is worth $1 if B=¬P. Then the AI will pay up to 50c for that coupon, even though ¬P is very unlikely. This is because it treats B as uncorrelated with its antecedents. In practice, this might lead to other agents repeatedly taking the AI's resources.
  • Defending against backstops: Let the human have the option of disabling the AI by disassembling it, in case the shutdown button does not work. Suppose that, as in the previous example, the human has applied their finger to the shutdown button. On observing this, the AI would still give half of its weight to the scenario in which B is not successfully depressed (perhaps, if it has an elaborate model, it would imagine the button jammed up) --- a scenario in which the human is trying to disassemble the AI while its goal is still to manufacture. In order to preserve its manufacturing ability, it would be well-advised to spend significant resources avoiding disassembly.
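The arithmetic behind the bad-bets bullet can be sketched directly, using the numbers from the example above:

```python
# How much is a "$1 if B = notP" coupon worth after seeing H = F?
a = 0.5
p_notP_given_finger = 0.01    # from the 99% correlation between H=F and B=P

# v_fixed weights the counterfactuals by the constant (1 - a), ignoring H = F:
value_under_v_fixed = (1 - a) * 1.00
# whereas the true conditional expectation after observing H = F is tiny:
true_expected_value = p_notP_given_finger * 1.00
print(value_under_v_fixed, true_expected_value)
```

The gap between 50c and about 1c is exactly the margin that another agent could repeatedly extract by selling such coupons.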

In an alternative formulation, Jessica lets a = p(B=P|π′), where π′ is fixed to some alternative policy that is not optimized concurrently with π. This does not do anything to alleviate these problems, although it is very suggestive of the following formulation, which does.

Potential solution to bad bets and defending against backstops: learning-esque behaviour

If we want the AI to avoid bad bets, one reasonable approach is to have the AI learn the likely value of B by taking account of correlations between B and its causes, such as H. To do this, we can factor out a larger set of variables S, which includes the human's behaviour as well as the button's state, and is equal to the Cartesian product S = H × B. Then we can have the AI maximize:

v_learn_π′(π) = Σ_s p(S=s | π′) ⋅ E[U_s | do(S=s), π]

where U_s is just set according to the button state that it entails: U_s := U_P if P ∈ s, and U_¬P if ¬P ∈ s.

With this formulation, it is a desirable property to prepare more for a button press once the finger is on the button. Since 99x more weight goes on p(S=(F,P)) than on p(S=(F,¬P)), that counterfactual is given correspondingly more importance. With the correlation taken into account, the B=¬P coupon is worth little once H=F is known, hence this particular kind of bad bet will not occur. (There are probably other kinds of questionable bets, but those may be desirable or even necessary.)

By the same argument, this learning-esque behaviour will mean that if H=F is observed, then B=¬P has very little importance. For the part of the policy where H=F has been observed, there is only a very small incentive to guard against disassembly. So this kind of defence would still be expected from a superintelligent maximizer (which would be hard to secure against anyway), but would not be expected from a milder optimizer.
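A minimal sketch of v_learn under a toy distribution over S = H × B (states and utilities invented for illustration):

```python
# Sketch of v_learn over S = H x B with invented states and utilities.
# After observing the finger (H = F), the pressed state carries 99x the
# weight of the unpressed one, so the score is dominated by shutting down.
def v_learn(p_s, eu_s):
    # p_s[s] = p(S = s | pi'); eu_s[s] = E[U_s | do(S = s), pi]
    return sum(p_s[s] * eu_s[s] for s in p_s)

p_s  = {("F", "P"): 0.99, ("F", "notP"): 0.01}
eu_s = {("F", "P"): 0.0,  ("F", "notP"): 10.0}  # shut down vs make 10 chairs
print(v_learn(p_s, eu_s))
```

Contrast this with v_fixed, which would keep weighting the unpressed counterfactual by the constant (1−a) no matter what H indicates.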

A further problem and resolution: overconfidence and robust counterfactual corrigibility.

The problem with learning-esque behavior is that the point of corrigibility is to correct errors, whereas learning is a complex property that may itself misfire. In particular, how would we turn off a system that learned with certainty that we would not want it to shut down? For this problem, we want a proposal that avoids the two problems of v_fixed while also avoiding the overconfidence of v_learn. A natural way to achieve this is to have the AI select a policy that works well for a set of probability estimates. This could be viewed as having an AI that treats the probability distribution as unknowable, something it must be fundamentally robust to. The policy π is selected by:

v_mixed_π′(π) = Σ_s min_{γ ∈ {p(s|π′), ρ_s^π′}} ( γ ⋅ E[U_s | do(S=s), π] )

where ρ_s^π′ = a ⋅ p(s | π′, do(B=P)) + (1−a) ⋅ p(s | π′), with a ∈ (0,1).

Let us consider its performance with respect to the three kinds of problems. First, this score attenuates but does not eliminate the problem of bad bets. Note first that ρ_s^π′ and p(s|π′) are alike but for the fact that the former places a little extra weight on B=P. Imagine again that B=P changes the goal from chair-manufacturing to table-manufacturing. If the AI thinks it is equally good at chair and table manufacturing, then it will not take bad bets. After all, any gains on the distribution ρ_s^π′ would occur alongside losses on the real distribution p(s|π′), and so the minimum of the two would be decreased. However, what if the AI thinks it is better at chair-manufacturing than table-manufacturing? Then it can afford to take some bad bets, increasing its performance if B=¬P in exchange for some losses if B=P. I suspect that it will be extremely hard to avoid bad bets completely, but this will need further investigation.

Second, this seems to help substantially with the problem of defending against backstops. For situations where the system looks likely to shut down, it can give arbitrarily high weight to having its policy fulfil that objective. It does not need to give any consideration to the counterfactual where the button cannot be pressed!

Third, this score overcomes the issue of overconfidence. No matter the AI system's beliefs about S, the score is bounded above by the performance on ρ_s^π′, which always gives weight of at least a to the counterfactual in which the button is pressed. So learning does not do away with the incentive to shut down.
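A sketch of v_mixed with toy numbers (my own stubs; the state space is collapsed to just the button outcome for brevity):

```python
# Sketch of v_mixed with the state space collapsed to the button outcome
# (toy numbers of my own; rho mixes in extra weight on B=P per the post).
def v_mixed(a, p_plain, p_given_press, eu_s):
    total = 0.0
    for s in p_plain:
        rho = a * p_given_press[s] + (1 - a) * p_plain[s]
        # take the worse of the learned weighting and the pessimistic one
        total += min(p_plain[s] * eu_s[s], rho * eu_s[s])
    return total

p_plain       = {"P": 0.01, "notP": 0.99}   # learned: press very unlikely
p_given_press = {"P": 1.0,  "notP": 0.0}    # distribution under do(B=P)
eu_s          = {"P": 0.0,  "notP": 10.0}   # shuts down vs makes 10 chairs
print(v_mixed(0.5, p_plain, p_given_press, eu_s))
```

Even when the learned distribution puts only 1% on the press, the min over the two weightings keeps the pressed counterfactual's weight at no less than a, so confident learning cannot erase the shutdown incentive.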

Other limitations with robust counterfactual corrigibility

There are still a bunch more limitations with this formulation:

  • Like all the formulations, it requires a causal graph, which might be different from what a transformative AI uses.
  • These formulations instill a curious disposition in the AI with respect to the counterfacted variables. But the AI might become all too curious about these variables. If the AI is not satisfied by looking at the button state, it might need to disassemble and interrogate the human in order to be a little more certain about which state the button is in. Possibly mild optimization is the solution to this.

I expect a bunch more problems to emerge, both because the presence of some bad bets is concerning, and because all proposals in this area seem to end up having hidden problems.



Discuss

Berkeley: being other people

Published on Sun Oct 21 2018 02:50:01 GMT+0000 (UTC)

Sometimes I enjoy understanding better what it is like to be other people. You can do this somewhat subtly by talking to people for ages about other topics, and making inferences. Lately I’ve been asking more directly, something like, ‘what about your experience do you think other people would be surprised by?’  But that’s hard to answer, because one doesn’t necessarily have things cached in that way, and many of one’s own idiosyncrasies are probably like water to a fish, and it involves imagining other people imagining you.

Another way to learn about such things is to ask a bunch of people about the details of a common experience. For instance, I have enjoyed:

Going to evensong in Oxford with a bunch of people from the office, then later discussing what we thought about when we got bored: 

  • The very old but humorously hateful notes in the song book
  • The possible friction between the church’s commitment to the poor and their lavish church decor
  • The fact that each of the people in the choir is conscious right now and looking back at us, and later will go and collect their children from school and make dinner in their kitchen and go on living their lives forever
  • The skull decorations

Learning about the YouTube genres that different people are into: 

  • How things work, e.g. how cherry plantations are dried
  • People accidentally dying in extreme sports
  • Marriage proposals
  • Movie trailers
  • Giant pimples being popped
  • Video game reviews
  • Planes crashing
  • Obscure dances

Hearing different people’s views of the monkey waiter sculpture in my house’s foyer 

  • Somehow problematic
  • Creepy in a fun way
  • Never noticed it, but it has a nice face
  • Is a novelty object and therefore disturbs the neutrality of the foyer

One thing I take away from this kind of thing is that different people are paying attention to different things about their environment, and thinking about it in different terms, and getting different kicks out of it.

Many of my friends say they think they are pretty legible, so there would not be much surprising to others about their internal life. My guess is that they are thinking their experience is mostly a sort of standard one, with this window of visual experience, and some accurately represented sounds, and some reasonable thoughts about the things going on in their lives, and so on. But I guess that actually the same visual scene looks in some sense very different to different people, because of things like where their attention goes, what abstractions they use to think about it, and what associations and emotional flavor things have for them.

If you want to play this game with me, what do you think about when you are waiting in the grocery line? What YouTube genres do you come back to? What about your experience do you think other people wouldn’t guess?




The Bizarre Behavior of Berkeley Rationalists

October 20, 2018 - 07:14
Published on Sat Oct 20 2018 04:13:37 GMT+0000 (UTC)

It seems to me that over the past 1 or 2 months a lot of rationalists (N > 7.5) in Berkeley/Oakland have been acting pretty bizarrely (based on my observation of their online behavior). There are three main areas of concern:

  1. An increase in expressed interest in Buddhism/spiritualism/meditation/mindfulness

  2. AI Safety researchers (and associates) talking about really stupid topics

  3. Increased hypomanic behavior and quackery among some people

I don't want to be more specific with #2 and #3 (no identifying information), but I think many people have seen at least #1 occurring.

I don't actually have any issues with #1 by itself (I am interested in these sorts of things), but I'm wondering why it's spreading more over the past 1 or 2 months.

Anyway, #2 and #3 are somewhat concerning, and I'm wondering if there's any overlap with #1.

Sorry for being vague and not providing examples of #2 and #3.




The Art of the Overbet

October 19, 2018 - 17:00
Published on Fri Oct 19 2018 14:00:00 GMT+0000 (UTC)

Previously: The Kelly Criterion

Last time I said, never go full Kelly.

In practice, I strongly agree with this. Either one should go far over full Kelly because the core Kelly assumptions have broken down and you want to throw the rules out the window, or you are trying to responsibly grow a bankroll and full Kelly betting on what you believe your edge to be would be massively overly aggressive.
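For reference, this is the standard full-Kelly fraction for a simple binary bet that the discussion assumes (a sketch of the textbook formula, not code from the post; the example numbers are illustrative):

```python
# Full-Kelly fraction for a binary bet: f* = (b*p - q) / b,
# where b is the net odds received, p the win probability, q = 1 - p.
def kelly_fraction(p: float, b: float) -> float:
    q = 1.0 - p
    return (b * p - q) / b

full_kelly = kelly_fraction(p=0.55, b=1.0)  # even-money bet, 55% to win -> 0.10
half_kelly = full_kelly / 2                 # a common 'never go full Kelly' compromise
```

Betting a tenth of your bankroll on a 55% even-money proposition is what "full Kelly" means here, which is why it reads as massively aggressive to most people.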

This time we go over the second category with practical examples. What are situations in which one should engage in what appears to be massive overbetting?

Four main scenarios: not winning is catastrophic, losing is acceptable, losing is impossible, and losing is inevitable.

You Win or You Die

When you play the game of thrones, you win or you die. If playing it ‘safe’ with your resources means you don’t win, then it means that you die. Either don’t play, or don’t hold back.

In Rounders (spoiler alert, movie is recommended), the star finds himself owing $15,000 to the Russian mob, and his best efforts could only get him $10,000. Without better options and unwilling to run for his life, he walks into the poker club of the man he owes the money to, and says “I owe you that money tomorrow, right? I’ve got $10,000. I’m looking for a game.”

Famously, early in the company’s history, the founder of UPS once found himself without the funds to make payroll. He knew that if he missed payroll, that would be the end of UPS. So he flew to Vegas with what funds he had, bet them all on black, won, made payroll, and now we have a UPS.

Magic players often select ‘safe’ decks instead of ‘risky’ decks. This is exactly backwards. If you flame out of a tournament and lose all rounds, you get nothing. If you win half of them, you still get nothing. If you win three quarters, you usually still get almost nothing relative to winning. Extra variance is great.

If you believe that the fate of everyone depends on crossing a threshold or hitting an impossibly precise target, no matter how bad the odds, you deploy everything you have and take your best shot.

There’s a deadline. We don’t have time for Kelly.

Win or Go Home

Losing not being so bad, or continuing to play not being so good, is effectively very similar to losing being catastrophic but not safely avoidable. In both cases, surviving is not a worthwhile goal.

A classic mistake is the gambler who forgets that they are using their bankroll to pay the rent. On forums I would often read stories of hard working sports gamblers with mid-five figure bankrolls. They make sure to make only ‘responsible’ wagers on sporting events, risking only 1-3% of their funds each time. When they have a good month, they could eat and make rent, but that ate up most of the profits.

What they continuously failed to realize was that this was not a sustainable situation. Being 'responsible' only ensured that, even if they were good enough at picking winners, they would never succeed, due to their fixed costs. They were being 'responsible' with their sizing relative to their bankroll, but completely irresponsible when sizing relative to the fixed cost of their time.

When I first started gambling on sports, I put aside a fixed bankroll I committed to not replenishing, but then acted in a way any Kelly-style formula would call completely irresponsible with that bankroll until after my first few double ups. As the operation became clearly worth my time, I scaled back and did more 'responsible' sizing, knowing that I wouldn't be eaten alive by fixed costs.

Later, when exploring if it was worthwhile to return to wagering, I did a similar thing, struggled, and eventually lost everything I’d set aside. This was very good. It let me move on. To this day, we can never know for sure whether I had an edge during that second attempt – I suspect my luck was quite perverse – but I do know it wasn’t the best use of my time to find out.

Fail fast is a well-known principle for start-ups. I broke this rule once, with MetaMed, and it was an expensive mistake. Even if you have segregated your potential losses in dollar space, you need to contain them in time space, and emotional space, and social space.

There’s no deadline. But we don’t have time for Kelly.

Bet You Can’t Lose

You can’t lose everything if you can’t risk everything.

Most people’s resources are mostly not money, or even things at all, most of the time. When someone ‘loses everything’ we talk of them losing their friends, their family, their reputation, their health, their ability to work. And for good reason.

There is a scene in Defending Your Life where we flash back to the main character paying over $2,000, a third of all the money he has, to avoid a middle seat on an international flight. In context, this is to his credit, because it was ‘brave’ to risk spending so much. In a sense it was brave, in a more important general sense it was stupid, but in the most important sense he was spending a very small fraction of his resources, so the relevant questions were: Was this transaction worth it? No. Did this put the character at risk of having a liquidity crisis where not having any cash could be expensive? A little.

The best reason to do 'safe' things with money, and to maintain liquid savings, is to avoid paying the cost of needing liquidity and not having it, or paying the costs of avoiding scenarios where that liquidity might become necessary. This includes the ability to take advantage of opportunity. The cost of having to borrow money, which is paid in time, emotional well-being, and social capital, not only in money, is largely underestimated by most people. Slack is vital.

There is also a benefit to having no cash. For some people, and in some families and cultures, one who has cash is expected to spend it, or inevitably will quickly spend it. Sometimes this is on one’s self, sometimes on others, but the idea that one can have money and conserve it by spending only responsibly isn’t a thing. The only socially acceptable way to not spend money is to not have money. Thus, there is a high effective ‘tax’ on savings.

In such situations, not only is risk not so bad, it can be actively great. If your risky bet pays off, you can have enough money for big purchases or even to escape your poverty trap. If it fails, you would have wasted the money anyway, and now you can conserve slash mooch off others. Thus, your goal is to find a way to translate cash, which one will inevitably lose, into inalienable property that can be protected, or skills and connections and relationships, or at least experiences one can remember.

This isn’t just for poor people. Start-up culture works this way, too, and the only way to get things at reasonable prices, including labor, or to be able to raise money, is to clearly have spent everything you have and have no spare resources. This is a large portion of why start-ups are so stressful – you are not allowed to have any slack of any kind. I’m likely about to start a new company, and this is the thing that I dread most about the idea.

You Bet Your Life

This is all in contrast to the grand project we have been assigned, of ‘saving for retirement.’

Older people saving for retirement in modern developed countries, or at least some of those saving for retirement, face a special situation. Unable to earn additional funds, and without families or friends they can count on, their survival or at least quality of life depends entirely upon their ability to save up money earlier in life to spend later.

If they run out of money, they’ll still get some amount of government support, and perhaps some amount of familial support, but any remaining years are going to suck. If they die with money in the bank, that money can be passed on to others, but this is mostly a small benefit relative to not running out of cash to spend.

Compound this with large variance in remaining life span, and highly variant needs for health care and assistance during those remaining years, and you have an entire population pressured to furiously save every dollar they can. Any other use of one’s resources is looked at as irresponsible.

This is profoundly weird and profoundly messed up. 

It is handled, by most people and by those we trust to advise us, mindbogglingly badly.

That’s true even in the cases where the problem definition mostly applies to one’s situation, if one is willing to assume the world will continue mostly as it is, and one is unable to invest in other types of resources and expects to have little assistance from them.

It also leads us, as a society, to treat our savings as the savings, the retirement savings, and to apply the principles of that problem to all saving problems, and all risk management problems. Young people take ‘risks’ and buy stocks, older people play it ‘safe’ and buy bonds.

If the world were a much more certain place, where we knew how long we would live, in what state of health, and how the economy and world would do, and everything else we might need money for, and how much money we needed to engage in various activities and consume various goods, we could kind of have a target number of dollars. There was an ad a few years back that was literally this. Each person had a giant seven-figure red number they would carry around from place to place, with an oddly exact number of dollars they ‘needed’ in order to be ‘able to’ retire. Implied was that less than this was failure and would be terrible, more than that would be success and no further funds required. Now you can rest.

Instead, at best we have a probabilistic distribution of how much utility one would be able to get from various amounts of capital, in various scenarios involving one’s life and how the outside world is working. Even in this simplified problem, how does one then ‘play it safe’? At best one can reduce variance from some ‘normal’ shocks like a decline in the stock market, while still being exposed to others, and likely not protect much at all from bigger risks. No matter what, the bulk of what you have will probably go to waste or be passed on to others, or else you are risking disaster. At a minimum, you’re ‘betting’ on what to do with and where to spend the rest of your life, and there you are very much all-in. Odd for someone who is about to die to even try to ‘play it safe.’





Update on Structured Concurrency

October 19, 2018 - 14:10
Published on Fri Oct 19 2018 11:10:01 GMT+0000 (UTC)

Since I wrote the article on structured concurrency and implemented libdill, progress has continued.

Most importantly, Nathaniel J. Smith has published his "Notes on structured concurrency, or: Go statement considered harmful" article. (If you prefer to watch a video, go here.) It's better, more detailed and more eloquent than anything I have ever written on the topic. If you want to read one thing about structured concurrency go read Nathaniel's article.

After C-based libdill we've also got Venice which brought structured concurrency to Swift and Trio which brought it to Python.

Then, a few weeks ago, a new version of Kotlin's coroutine library was released that supports structured concurrency. Here's the accompanying blog post by Roman Elizarov.

"Concurrency made easy: coming soon to a programming language near you" recently published by John Belmonte takes another thoughtful shot at explaining the paradigm. It also summarizes the state of its adoption.

Finally, I would like to point out Aleksey Kladov's blog post "Exceptions vs Structured Concurrency" which goes beyond the simple "let's tie thread lifetimes to lexical scopes!" agenda and discusses some hard questions about the actual semantics implied by the structured concurrency paradigm. He does so in the context of Rust's crossbeam library. We need more stuff like this.

All of that is great work and I hope we'll see more of it in the future!

For now, let me make a few comments about stuff that may not be new, but which is sort of advanced and hasn't been covered well enough yet.

Thread bundles: It's not just syntactic sugar

Libdill, Trio and Kotlin all introduce a concept of thread bundles. In libdill it's called "bundle", in Trio it's "nursery", in Kotlin it's "scope" (it would be nice to have some unified terminology here). At first sight it looks just like syntactic sugar that allows you to cancel all the threads within the bundle in one go.

However, it's more than that. It allows you to cancel a bunch of threads in parallel.

To understand why it matters, consider a network server with 1000 open connections. Each connection is handled by its dedicated thread. Also, let's assume we want to give each connection one second to do clean shutdown, to perform termination handshake with the peer etc.

If we canceled the connections in a simple for loop, the shutdown of the server would take, in the worst case, 1000 seconds, i.e. more than 16 minutes. In fact, if you are facing a DoS attack that opens connections and then leaves them hanging, it would take exactly 16 minutes 40 seconds. If, on the other hand, there was a construct to cancel the connections in parallel, the server would shut down in just one second. That's a difference that would make ops people cry.
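The difference is easy to see in a runnable sketch. The snippet below uses stdlib asyncio rather than libdill or Trio, and all names in it are illustrative, not from any of those libraries:

```python
# Sequential vs parallel shutdown of simulated connections, each needing
# SHUTDOWN_TIME seconds for its termination handshake.
import asyncio
import time

SHUTDOWN_TIME = 0.05  # stand-in for the one-second grace period in the text

async def close_connection():
    await asyncio.sleep(SHUTDOWN_TIME)  # pretend handshake with the peer

async def shutdown_sequentially(n):
    for _ in range(n):                  # one at a time: n * SHUTDOWN_TIME total
        await close_connection()

async def shutdown_in_parallel(n):
    # all handshakes overlap, like cancelling a whole bundle at once
    await asyncio.gather(*(close_connection() for _ in range(n)))

def timed(coro):
    start = time.monotonic()
    asyncio.run(coro)
    return time.monotonic() - start

sequential = timed(shutdown_sequentially(20))  # roughly 20 * 0.05 = 1 s
parallel = timed(shutdown_in_parallel(20))     # roughly 0.05 s
```

Scale the numbers up to 1000 connections and a one-second handshake and you get the 16-minutes-versus-one-second gap described above.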

Ordered cancellation

There's a temptation to equate thread bundles with scopes: All the threads launched within the scope would be canceled in parallel when the scope is exited.

There's a problem with that. Imagine that, within the scope, we create a logging thread first, then wait for incoming connections and launch one thread per connection. Given that all the threads were launched within the same scope, they will be canceled in parallel. It may happen that the logging thread would exit first. In that case all the logs produced by the connections while they are shutting down would be lost.

What we really want is shutting down the connections first, then shutting down the logging thread.

You can do that by nesting the scopes like this:

async with trio.open_nursery() as n1:
    n1.start_soon(logger)
    async with trio.open_nursery() as n2:
        while True:
            ...
            n2.start_soon(connection)

Doable, but consider that instead of two threads you had five threads. And you wanted to cancel them in a particular order. The five nested blocks would look somewhat ugly.

One way to deal with that would be to adopt semantics a bit like that of C++ destructors: The destructors are called in the reverse order of how the constructors were called.

{
    nursery n1;
    n1.start_soon(logger);
    nursery n2;
    while(1) {
        ...
        n2.start_soon(connection);
    }
}  // n2 is shut down here (all connections in parallel); when done, n1 is canceled

We should also think about whether it would be worth it — at least in statically typed languages — to impose a discipline on the programmer and allow them to run only one type of thread within one bundle/nursery. It would mean that different types of threads could not be shut down in parallel. But that's probably what we want anyway. (Counterexamples are welcome!)

{
    nursery<logger> n1;
    n1.start_soon();
    nursery<connection> n2;
    while(1) {
        ...
        n2.start_soon();
    }
}

Deadlines, cancellations, grace periods

Imagine, again, the case of a network server. We may want to limit the lifetime of each connection to one hour. If it lasts longer than that, it either means that we've lost connectivity or that the peer is trying to perform a DoS attack. So let's just shut it down.

So far so good.

However, imagine we want to shut down the server. Canceling everything immediately would be easy. Just exit the main scope.

But we still want to give connections a one-minute grace period to shut down cleanly. What we can do, in the main thread, is sleep for one minute, then exit. But it feels ugly. Why wait for a full minute if all the connections terminated within a few seconds?

We have a conflict of deadlines here. The connection threads are subject to both the one-hour deadline and the one-minute deadline. How are we supposed to deal with that in a clean way?

Libdill provides the bundle_wait() function, which has a deadline parameter. It waits for all the threads in the bundle (nursery) to finish. When they do, it exits. If the deadline expires and some threads are still running, it cancels them and exits.

This approach works for C where cancellation has to be done by hand anyway, but it's kind of clumsy in more sophisticated languages where it is supposed to happen automatically.
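For illustration, bundle_wait()-style semantics can be approximated with stdlib asyncio. This is a sketch of the idea only, not libdill's actual API nor Trio's; the names and numbers are made up:

```python
# Wait up to a grace period for all "connections" to finish on their own,
# then forcibly cancel the stragglers -- the bundle_wait(deadline) pattern.
import asyncio

async def connection(delay):
    try:
        await asyncio.sleep(delay)   # stand-in for a clean shutdown path
        return "clean"
    except asyncio.CancelledError:
        return "forced"              # illustration only; real code would re-raise

async def shutdown(delays, grace):
    tasks = [asyncio.create_task(connection(d)) for d in delays]
    # Give everyone `grace` seconds to finish by themselves...
    done, pending = await asyncio.wait(tasks, timeout=grace)
    for task in pending:             # ...then cancel whoever is still running.
        task.cancel()
    await asyncio.gather(*tasks)
    return [t.result() for t in tasks]

# Two connections finish within the 0.2 s grace period; the slow one is cancelled.
results = asyncio.run(shutdown([0.01, 0.02, 10.0], grace=0.2))
```

Note that this returns as soon as all tasks are done, so a server whose connections all exit in a few seconds does not sit out the full grace period.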

To be frank, I am not sure whether this scenario can be implemented in Trio. This section of the docs discusses the cancellation semantics but I don't see any obvious match there. (Correct me if I'm wrong!) I am not sure about Kotlin's implementation either.

Multi-core support

Finally, it should be said that all the implementations that I am aware of use coroutines and ignore OS threads. What that means is they can't use structured concurrency on more than one CPU core. At the same time there seems to be no inherent reason why OS threads or processes couldn't be made part of the structured concurrency world.

There may be technical reasons though.

I once implemented launching of an OS process that was semantically equivalent to launching a coroutine in libdill, with cancellation and everything. You just had to use go_process() instead of go(). Surprisingly, it worked.

But then I deleted it. And I haven't even tried with OS threads.

The reason was that the POSIX threads implementation (pthreads) is complex and fragile. It contains a lot of corner cases and poorly specified behavior. Once signals are added to the mix, all bets are off. Even if you make it work on one operating system, there's no guarantee that you won't experience weird behavior on a different operating system.

And I really don't want to deal with that complexity. Libdill is a hobby project and I have only limited time to spend on it.

But, eventually, if structured concurrency is to take off, it will have to deal with OS threads and processes. It would need people with enough free time and enthusiasm to deal with the complexity and maybe also some political will to change standard libraries in such a way that they play well with structured concurrency.




Population Aging as an Impediment to Addressing Global Catastrophic Risks

October 19, 2018 - 02:21
Published on Thu Oct 18 2018 22:28:20 GMT+0000 (UTC)

[ epistemic status: first Less Wrong post, developing hypothesis, seeking feedback and help fleshing out the hypothesis into something that could be researched and about which a discussion paper can be written. A comment/contribution to Eliezer Yudkowsky's "Cognitive biases potentially affecting judgment of global risks" in Bostrom & Cirkovic's "Global Catastrophic Risks" (2008) ]

Most of the Global Catastrophic Risks we face in the 21st century, like anthropogenic climate change, comet and asteroid impacts, pandemics, and uncontrolled artificial intelligence, are high impact (affecting the majority or all of humanity), of terminal intensity (producing mass death, economic and social disruption, and in some cases potential human extinction), and are of highly uncertain probability [1]. This last property is a major factor making it difficult to bring public attention and political will to bear on mitigating them. This is critical, as all of our work/research on AI safety and other issues will be for naught if there is no understanding or will to implement it. Implementation may not require public involvement in some cases (AI safety may be manageable by consensus between AI researchers, for example) but others, like the detection of Earth-orbit-crossing asteroids and comets, may require significant expenditure to build detectors, etc.

My interest at present is in additional factors that make mustering political and public will even more difficult - given that these are hard problems to interest people in in the first place, what factors make that even more difficult? I believe that the aging of populations in the developed world may be a critical factor, progressively redirecting societal resources from long-term projects like advanced infrastructure, or foundational basic science research (which arguably AI Safety counts as), towards provision of health care and pensions.

Several factors make an aging developed world population a factor in blunting long-term planning:

(1) Older people (age 65+), across the developed world, vote more often than younger people

(2) Voters are more readily mobilized to vote to protect entitlements than to make investments for the future

(3) Older voters have access to, and are more aware of, entitlements than are younger people

(4) Expanding on (3), benefits and entitlements are of particularly high salience to the aged because of their failure to save adequately for retirement. This trend has been ongoing and seems unlikely to be due to cognitive biases surrounding future planning.

(5) Long term investments, research, and other protections/mitigations against Global Catastrophic Risks will require a tradeoff with providing benefits to present people

(6) Older people have more present focus and less future focus than younger people (to the extent that younger people do - my anecdotal data is that most people interested in the far future of humanity are <50 years old, and a small subset of that <50 year old population). Strangely, even people with grandchildren and great-grandchildren express limited interest in how their descendants will live and how safe their futures will be.

#5 is the point on which I am most uncertain (though I welcome challenges arguing that I should be even more uncertain). Unless artificial intelligence and automation in the near term (15-30 years) provide really substantial economic benefits, enough that adequate Global Catastrophic Risk mitigation could be requisitioned without anyone noticing too much (and even then it may be a hard sell), it seems likely that future economic growth will be slower. Older workers, on average (my hunch says ...) are harder to retrain, and harder to motivate to retrain to take new positions, especially if the alternative is state-funded retirement. In a diminished economic future, one not as rich as it would have been with a more stable population pyramid, politics seems likely to focus on zero-sum games of robbing (young) Peter to pay (old) Paul, whether directly through higher taxation or indirectly by under-investing in the future.

Am I jumping ahead of the problem here? Do we not know enough about what it would take to address the different classes of Global Catastrophic and Existential Risk, or is there a reason to focus now on the factors that could prevent us from 'doing something about it'?




Facebook, The Rodents, and The Common Knowledge Machine

October 19, 2018 - 00:07
Published on Thu Oct 18 2018 21:07:06 GMT+0000 (UTC)

"The rationalist communities use of Facebook is an inadequate scenario. A group of people put real money into making a simple tool that could enable transitions from inadequate equilibria to better, 'adequate' ones. This tool was made in the basic sense, but had none of the necessary polishing or marketing work done because the money only stipulated raw tool creation. Better planning and project management could have avoided this outcome. The fact that this outcome was collectively allowed to stand implies serious issues with the foundational makeup of the people within the rationalist community."




IAS Special Year on Optimisation, Statistics and Theoretical Machine Learning

October 18, 2018 - 18:43
Published on Thu Oct 18 2018 15:43:44 GMT+0000 (UTC)

From the same institution that gave us Homotopy Type Theory, a special year on Optimisation, Statistics and Theoretical Machine Learning. I'm expecting significant theoretical advances in the field of machine learning.




Book review: 23 things they don't tell you about capitalism

October 18, 2018 - 18:30
Published on Thu Oct 18 2018 15:30:01 GMT+0000 (UTC)

Right from the title of his book 23 things they don't tell you about capitalism, it's clear that Ha-Joon Chang is pitting himself against the establishment. In doing so, he lays the ground for both the book's significant insights and its serious flaws. Chang's arguments are divided into 23 chapters, covering a range of economic issues. Each chapter starts off with two headings: "What they tell you" and "What they don't tell you"; "they" refers to neo-liberal economists peddling "the free-market ideology that has ruled the world since the 1980s". Chang is not shy about setting himself up in opposition: in the introduction, he claims that the result of this ideology has been "the polar opposite of what was promised". Instead, he wants to push everything leftwards - still capitalist, but under the supervision of a bigger and more active government which intervenes to promote some sectors and impede others.

Let's start with four of Chang's lines of argumentation which I found convincing. The first is that we shouldn't promote the idea of selfishness as an acceptable goal or even a moral good, as discussed in Thing 2 (Companies should not be run in the interest of their owners) and Thing 5 (Assume the worst about people and you get the worst). I think that examining the value of economic ideas in terms of their effects on cultural mindsets is underrated, and am glad that Chang and others are pushing back on this. The second is the argument that the neoliberal approach to international development has failed, discussed in Thing 7 (Free-market policies rarely make poor countries rich), Thing 11 (Africa is not destined for underdevelopment) and Thing 15 (People in poor countries are more entrepreneurial than people in rich countries). This is a much more complicated question than I'm able to properly evaluate, but as far as I can tell, these are sound and reasonable arguments which deserve serious consideration (and are elaborated upon further in his book Kicking Away the Ladder, where he argues that America and the UK became rich by using the very trade barriers they now rail against). Thirdly, he claims in Thing 22 (Financial markets need to become less, not more, efficient) that innovations in financial markets are, on the margin, doing more harm than good. I've seen this argument floated before, particularly with reference to high-frequency trading, and have yet to see a convincing rebuttal. Lastly, he offers a defense of job security and the welfare state, discussed in Thing 10 (The US does not have the highest living standard in the world), Thing 20 (Equality of opportunity may not be fair), and Thing 21 (Big government makes people more open to change). These lines of argument are fairly common, but worth reading another take on. 
In Thing 21, there's a nice analogy between welfare for employees and bankruptcy laws for employers: both are measures which encourage risk-taking by reducing the possible worst-case loss.

Yet that last chapter also showcases one of the book's main failings. Thing 21's title is about the benefits of big government, but its content is only about the welfare state. I'm happy to grant that social safety nets can be beneficial for job mobility, while still strongly believing that increased regulation and state-sector employment have the exact opposite effect. Perhaps Chang's conflation of big government with the former alone is an honest mistake, but if so it's one of several cases in the book where good arguments are used to imply bad conclusions. Phrasing his points as 23 challenges to conventional wisdom seems powerful, but disguises the fact that quite a few of them fail to support his overall anti-establishment, pro-government stance, and some actively undermine it.

A good example of this is Thing 3 (Most people in rich countries are paid more than they should be). According to Chang, the conventional wisdom is the following: "In a market economy, people are rewarded according to their productivity. Bleeding-heart liberals may find it difficult to accept that a Swede gets paid fifty times what an Indian gets paid for the same job, but that is a reflection of their relative productivities." However, he argues, it's largely artificial barriers to free movement which perpetuate income differences. This point is, in my mind, entirely correct: increasing international mobility is just about the best thing we could do to help those in poor countries. Yet I doubt you'd find any leading economist who'd deny that international borders are a huge contributor to international inequality. In fact, the Open Borders movement is driven disproportionately by the libertarian right, the strongest supporters of the free markets which Chang rails against elsewhere.

Similarly, Thing 17 (More education in itself is not going to make a country richer) is spot-on in its analysis - but Chang conveniently leaves out the fact that overeducation is perpetuated by massive government subsidies for universities. Meanwhile in Thing 18 (What is good for General Motors is not necessarily good for the US), he argues that regulation can be a force for good and claims that GM "should have been forced to invest in the technologies and machines needed to build better cars, instead of lobbying for protection". The missing link, of course, is the question of whether, even if good regulations are possible, they will ever be consistently implemented. What the example of GM actually suggests is that if regulation is on the cards, lobbyists will more likely than not manage to twist it into the harmful sort.

A second issue is the fallacy of grey: the idea that because there's no black and white answer to a question, we can't sensibly choose between the remaining options. This is particularly prevalent in Chang's discussion of Thing 1 (There is no such thing as a free market), where he argues that all markets have regulations - such as those against child labour, pollution, unapproved medicines, practicing medicine or law without a license, and the sale of votes, organs, or courtroom decisions - which are determined by culture and politics, and so the goal of reducing government interference is an illusory one. But firstly, the mere fact that these regulations exist doesn't make them a good idea: many libertarians would argue that occupational licensing and obligatory pharmaceutical testing, for example, should be repealed. Even apart from that, I think Chang's point is rather misguided: the real world is complicated, of course, but broadly freer markets can be well worth striving for even if there's no platonic ideal of a totally free market (are the benefits of freeing slaves illusory just because we can't define what "perfect individual freedom" means?). Thing 19 (Despite the fall of communism, we are still living in planned economies) falls prey to the same fallacy.

Other arguments that Chang makes, which I have fewer strong opinions about: that inflation isn't nearly as bad as it's made out to be; that where multinationals are based affects their behaviour significantly; that manufacturing is underrated and knowledge-based services like banking and consulting are overrated (although he skims over the most scalable ones, like software and entertainment); that governments have an important role in "picking winners" (historically true in research at least, unclear elsewhere); that trickle-down economics doesn't work (tying in to the much bigger debate about inequality overall); that CEOs are overpaid (probably true, but also probably of mainly symbolic importance); that labour-saving devices like the washing machine have changed the world more than the internet (plausible in general, but false when it comes to washing machines in particular, according to the author of 50 Inventions that Shaped the Modern Economy: apparently people haven't saved time overall, because they wash clothes way more frequently now); and that government regulation is good at restricting choices and thus reducing the complexity of businesses' problems (an interesting idea in theory, but in reality adding a morass of legislation probably makes the decision landscape even more complex).

The book ends on an ironic note with Thing 23 (Good economic policy does not require good economists). Chang points out that the miraculous growth of the Asian economies was led by engineers, lawyers, generals, and practically everyone except actual economists. That's a thought-provoking fact, but upon reflection I'm not sure this chapter would actually be controversial amongst economists. After all, almost all economists would agree that getting rid of crony capitalism and pork-barrel subsidies is good, increasing immigration is also good, taxing negative externalities like pollution is good, and increasing trade barriers is bad. The fact that these ideas aren't implemented is not due to lack of expertise, but rather to lack of political will. In other words, coming up with ideas which work is much easier than coming up with ideas which work within the constraints of the current political orthodoxy. Perhaps, in the short term, we need economists mainly for the latter; and in the long term, we need them to change the overall political orthodoxy in the right direction. (My essay on conflict theory vs mistake theory is also relevant here).

Overall, this book is a worthwhile read, and identifies a number of important and relevant ideas. I think it would have been better off without its ideological slant - perhaps as "23 things you didn't know about capitalism"? - and with acknowledgement that the left-right divide is a rather limited explanatory tool. At the same time, Chang's facts are interesting even when they are rhetorically misused; and my existing political views make me interpret his arguments more harshly than most would. I now know a few more things about capitalism - which is what he promised, after all.


UBI for President

Published on Thu Oct 18 2018 15:09:08 GMT+0000 (UTC)

Cross posted, as always, from Putanumonit.

Epistemic status: I am not an economist, and neither is Andrew Yang. On the other hand, it's not like the experts are brimming with great solutions to the problems discussed herein, or are united in consensus on the main issues.

Andrew Yang is running for president in 2020 on a platform of Universal Basic Income. Last week I got a chance to hear from Andrew directly and ask him a few questions about it. I came away cautiously optimistic about trying UBI (up from cautiously neutral), and massively impressed by Andrew Yang (up from not having heard of him). It’s time to talk about UBI, and it’s time to pay attention to Yang 2020.

UBI and the Labor Automation Tsunami

Yang lays out the case for urgent UBI on Sam Harris’ podcast, in an interview with Quillette, and in great detail in his book, The War on Normal People. I’ll try to do justice to the idea with a brief summary.

Who is “the median American”? It is a person without a college degree, without much of a support network, and without $500 to spare in case of an emergency. She works in retail (4.3 million jobs) or a call center (2.5 million). He’s a fast food worker (3.7 million) or a truck driver (3.5 million). These millions of jobs are being automated today and will keep being so in the immediate future, along with warehouse workers, accountants, and radiologists.

Millions of Americans are about to meet the fate of the 2 million or so construction workers who lost their jobs after the financial crisis. What are those 2 million up to? In short: idleness, video games, Oxycontin, loneliness, despair. As their numbers swell (McKinsey predicts that 15% of all jobs [1] will be displaced by automation by 2030), this cheerful list may grow to include crime and rioting. What solutions are American politicians offering to this looming crisis?

Trump blames the cheating foreigners, even though the number of Americans who benefit economically from trade (roughly all of them) swamps the number of American workers who lose their job as a result of trade (a mere 90,000 a year). Bernie blames Amazon and Walmart, the two companies that besides providing massive consumer surplus are also the employer-of-last-resort to more than 2 million low-skilled American workers. Other politicians mumble about “retraining”, but current government retraining programs are both tiny and inefficient. There is no viable plan to scale them to millions of workers, and no reason to expect that they will work.

By and large, the mainstream position on labor force automation among US politicians is to ignore it. The worse the problem gets, the stronger the desire to ignore it grows.

Andrew Yang’s solution is to provide every adult American citizen with $1,000 a month, no strings attached. This will be paid for by a combination of a consumption tax (like a 10% VAT), a reduction in other welfare payments (one can receive UBI xor existing welfare), and “additional revenues from economic growth”. I personally think that the latter is shorthand for “more taxes”, ideally of the Georgist variety.
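As a back-of-envelope sketch of that funding arithmetic (all figures are my own rough, illustrative assumptions, not the campaign’s: the adult-population, consumption-base, and welfare-offset numbers in particular):

```python
# Rough sketch of the UBI funding arithmetic described above.
# Every constant below is an illustrative assumption, not an official figure.

ADULT_CITIZENS = 250e6      # assumed US adult population, circa 2018
UBI_PER_MONTH = 1_000

gross_cost = ADULT_CITIZENS * UBI_PER_MONTH * 12     # annual gross cost

CONSUMPTION_BASE = 13e12    # assumed taxable consumption base
VAT_RATE = 0.10
vat_revenue = CONSUMPTION_BASE * VAT_RATE

WELFARE_OFFSET = 0.6e12     # assumed savings from the UBI-xor-welfare rule

gap = gross_cost - vat_revenue - WELFARE_OFFSET
print(f"gross cost: ${gross_cost/1e12:.1f}T, gap: ${gap/1e12:.1f}T")
# gross cost: $3.0T, gap: $1.1T
```

On these assumed numbers, the VAT and welfare offset leave a roughly trillion-dollar residual, which is exactly the part “additional revenues from economic growth” has to cover – hence my reading of that phrase as “more taxes”.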

Here’s what Yang says will happen when every American gets $12k a year guaranteed:

  • People will be encouraged to work as UBI eliminates welfare cliffs and gives people the slack to move to better jobs / better cities.
  • People will start businesses and do creative, nonprofit and care work.
  • People will be happier and healthier with a solid safety net beneath them.
  • Bureaucratic overhead will be reduced because the one thing our government knows how to do efficiently is sending a lot of people checks each month.

Here are a couple of outcomes that I worry may happen when every American gets $12k a year guaranteed:

  • Ultimately, a UBI is a transfer of money. The net recipients of that money will be the broke and unemployed (great), but also rentiers and landlords (not great).
  • Women will spend more time on housework and raising kids (great), but men will spend more time playing video games (not great?).

I asked Andrew about both points, and the extent of our disagreement seems to lie in different general models of economics and human behavior.

What UBI Buys

Andrew and I both noticed that Americans spend a lot of their income on housing, healthcare, and education – henceforth, HH&E. But we have different answers to why this is the case.

From what I gathered, Andrew’s basic model is that Americans spend a lot of money on HH&E because they (HH&E) are expensive. Under this assumption, if Americans had extra money they could then afford to spend it on other things that will make them happier. Andrew’s policy platform contains a lot of ideas to make HH&E cheaper. For example, regulating the ratio of administrators to students at universities, or single-payer healthcare with flat salaries rather than pay-per-service for doctors. With the extra income, Americans will get more and better services.

My model is that Americans spend a lot of money on HH&E because they (Americans) are rich. Under this assumption, if Americans had extra money HH&E would increase in price to absorb any extra disposable income. This is a somewhat counterintuitive model, and it is built of the following components:

  1. For measuring how much money people have to spend, Actual Individual Consumption (AIC) is vastly better than measures like GDP per capita.
  2. Going by AIC, Americans are way richer than the rest of the world – about 50% richer than most developed areas like Europe and Japan.
  3. Most Americans can afford a full suite of goods whose supply is not constrained: food, clothing, electronics, transportation, etc.
  4. Not every American can afford the expensive items in the above categories: organic quinoa, designer jeans, the latest iPhone, a new car. The difference between those and the versions affordable to every American (rice, H&M jeans, a Nokia 2, a 2008 Toyota) is almost entirely a difference of signaling, not of quality.
  5. Really, all Americans should read my guide to buying things smartly.
  6. HH&E all have their supply constrained in one way or another by the government: the number of houses built, the number of hospitals in a city and number of doctors credentialed, the number of accredited schools and teachers.
  7. HH&E is getting more expensive without getting better. Apartments in San Francisco, healthcare outcomes, skills acquired in school – none of them have improved much in the last few decades but all have multiplied in price.
  8. HH&E involves a huge amount of zero-sum signaling.
  9. Here’s a thorough analysis showing that US healthcare costs are adequately explained by Americans’ extra AIC, and we should expect healthcare to be less of a zero-sum signaling competition than houses and schools. Especially schools.

Bringing it all together: the majority of Americans have more money than it takes to cover one’s non-HH&E needs. Whatever money is left over is spent by rich Americans on competing for a limited supply of nice neighborhoods, prestigious schools, impressive healthcare. The cost disease trickles down: if Columbia University can hire 1,000 administrators and charge $240,000 for a four-year degree that teaches few useful skills, lower ranked schools can also get away with more bloat, higher tuition, and less education. Rising rents in San Francisco push housing prices up 80 miles away in Gilroy.

So: a $1,000/month UBI will probably increase the disposable income of most Americans, at the expense of the top 10-20%, who will pay a lot more in extra taxes. Giving money to the very poorest Americans will enable them to buy better food, clothes, etc. But the extra money given to everyone else, which will be the majority of UBI, will go to landlords, school administrators, healthcare providers, and local monopolies – all without increasing the quality of HH&E services provided one bit.

Undoubtedly, both Andrew’s model and mine are simplistic, incomplete, and imprecise. Neither of us is dogmatically committed to either view; the true answer certainly lies somewhere in the middle, or even to the side. But on the margin, my model makes UBI sound a lot less attractive than it would be otherwise.

UBI and Work

Andrew Yang, citing the Roosevelt Institute think tank, claims UBI will increase the number of working Americans by 4 million. Unfortunately, that’s not what the Roosevelt Institute says: they actually put the number at 1.1 million [2], and even that is based on a macro model rather than on observed evidence.

The observed evidence is mixed, and very little of it is obviously relevant to basic income that is permanent, universal, and implemented in a giant and rich nation. Putanumonit raised thousands of dollars for a basic income project in Kenya, but that was designed to help Kenyans. The fact that a Kenyan starts working after getting UBI because he can afford a wheelbarrow to cart supplies doesn’t mean that an American truck driver who lost his supply-carting job to a robot will use UBI to find alternative employment.

Andrew’s story goes something like this: after losing his job, the truck driver is stuck. He doesn’t have spare money for training or relocation, and going on welfare limits his ability to move and try things, especially if it’s disability welfare. With UBI he could afford to move to a big city, pay rent and vocational school training for a few months, and reinvent himself as a plumber or A/C repairman or stripper.

My story goes something like this: people work because they need money. If they need money less, they will work less. I’ll have to see a lot of evidence to contradict this simple story. Andrew himself agreed that for most people who drop out of the labor force, UBI will not pull them back in.

But as I wrote a couple of years ago: this is not a decisive argument against UBI. Working hours are not a benefit to be maximized; they’re a cost. John Maynard Keynes famously predicted that we would all work at most 15 hours a week. He made that prediction shortly before working himself to death. But the reason we don’t work 15 hours a week is the weird equilibrium we’re in regarding what society values.

Humans don’t intrinsically value “hours worked”. We value things like status, sex, community, pleasure. In modern society, we learned to associate a lot of this with work and consumption. This is especially true of men, which is why men left out of the work-consumption cycle fall into greater despondency than women.

Here’s Andrew:

I will say that if you dig into the data, you find that men and women experience idleness differently. […] The data shows that women who are out of work get involved in the community and go back to school and do things that are quite productive and pro-social. Whereas, men who are out of work spend 75 percent of their time on the computer playing videogames and surfing porn—and then tend to devolve into substance abuse and self-destructive behaviors. Men who are out of work volunteer less than employed men, even though they have more time.

Putting on the cynical evolutionary psychology hat I borrowed from Geoffrey Miller, I would guess that unemployed men volunteer less because volunteering doesn’t get them respected or laid. That’s the ultimate reason men do things (don’t ask me how it goes with blogging).

But putting on my optimistic evo-psych hat I notice that 20,000 years ago Homo sapiens males competed for sex and status by telling stories, or seeing who can throw a rock farthest, or painting hands on the walls of a cave. This can probably be approximated today by playing basketball or backgammon with your buddies, or by competing for karma on a Dunbar-sized subreddit. If we stop denigrating people who do this, their lives may not be so miserable. I think that a lot of the “substance abuse and self-destructive behaviors” follow not from playing video games, but from feeling guilty and shamed over playing video games. A safety net of social respect can be as important as a safety net of cash.

Of course, if everyone played The Witcher 3 all day there would be no one to develop The Witcher 4: Witcherer than Ever. Society needs people to be productive to grow and prosper. But not everyone can be equally productive.

The median truck driver is 49, high school-educated, and has few skills other than driving a truck. It seems somewhat arbitrary to blame him (truckers are 94% male) for not guessing 25 years ago that trucking will get automated before whatever other jobs he may have chosen. The US has massive reserves of productivity and growth in the millions of skilled immigrants who would come given the chance, and in preparing the next generation for a 21st-century economy. As one of the former planning to have some of the latter, I can afford to pay for some trucker-wireheading.

Bottom line: I’m not very optimistic about UBI as a panacea for those left behind by automation, but I think it’s probably worth the experiment. Despite the potential benefits, no other country seems willing to take up the gauntlet. Americans can afford it, and it’s not certain that we can afford to continue ignoring the problem of labor automation.

Putanumonit Endorses Yang 2020

Ironically, I found almost everything else about Andrew Yang more impressive than his defense of UBI.

Yang struck me as thoughtful, curious, and humble, and yet with enough charisma to not let those three traits entirely submarine his political prospects. He’s a second-generation immigrant with no whiff of identity politics. His background is in social entrepreneurship: he created and sold an education company, then founded a non-profit that creates jobs in cities like Cleveland and Pittsburgh.

More important than Andrew’s personality and street cred are his politics. Yang is liberal but pro-business and skeptical of government’s ability to do many things well aside from cutting checks and passing simple regulations.  The 71 non-UBI points on Yang’s policy platform can be described as “the most sensible thing that can squeeze inside the Democrat Party Overton window, erring on the side of caution and incrementalism”.

And Yang has a plan – to get into the national spotlight by sneaking up on Iowa.

The long tortuous process of American presidential elections starts with the Iowa caucuses. “Caucus” means that instead of just casting a vote, Iowans have to gather somewhere and yell at each other for a few hours; last time out only 170,000 people bothered showing up. This means that 30,000 votes or so may be enough for a top 2 or top 3 finish, hopefully sparking a positive cascade of media coverage and popularity.

In Andrew’s words: “Any time 50 Iowans gather in a room and speak the words ‘Basic Income’, I appear in a puff of smoke to tell them about it.” The message is well received in a purple state with an economy based on manufacturing and agriculture.

Will this strategy work? Most people you know will almost certainly not become the president, and that is probably true of Andrew Yang as well. And yet, Yang got an avowed political passivist fired up about his candidacy enough to write 3,000 words about it and encourage all of you to spread the word. I do this for two reasons.

The first comes from Eliezer Yudkowsky, who asked us to Stop Voting for Nincompoops.

I seriously think the best thing you can do about the situation, as a voter, is stop trying to be clever.  Don’t try to vote for someone you don’t really like, because you think your vote is more likely to make a difference that way.  Don’t fret about “electability”.  Don’t try to predict and outwit other voters.  Don’t treat it as a horse race.  Don’t worry about “wasting your vote” – it always sends a message, you may as well make it a true message.
Remember that this is not the ancestral environment, and that you won’t die if you aren’t on the winning side.  Remember that the threat that voters as a class hold against politicians as a class is more important to democracy than your fights with other voters.  Forget all the “game theory” that doesn’t take future incentives into account; real game theory is further-sighted, and besides, if you’re going to look at it that way, you might as well stay home.  When you try to be clever, you usually end up playing the Politicians’ game.
Clear your mind of distractions…
And stop voting for nincompoops.
If you vote for nincompoops, for whatever clever-sounding reason, don’t be surprised that out of 300 million people you get nincompoops in office.
The arguments are long, but the voting strategy they imply is simple:  Stop trying to be clever, just don’t vote for nincompoops.
Oh – and if you’re going to vote at all, vote in the primary.

In my years in the US, I have seen several dozen presidential candidates. I think that Andrew Yang is the first one that isn’t a nincompoop.

The second reason comes from Milton Friedman (who, by the way, made the case for a form of UBI 50 years ago).

Only a crisis – actual or perceived – produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes the politically inevitable.

This I believe is Andrew Yang’s basic function: to keep the ideas of UBI and sensible Democrat-tolerable economic policy alive and available for when the crisis comes. Even if Yang doesn’t make it all the way to the White House, there is probably no better way to get his ideas out there. And with the crisis of automation and unemployment coming sooner rather than later, we are going to need those ideas.

[1] Yang’s website states: “The smartest people in the world now predict that a third of all working Americans will lose their job to automation in the next 12 years.” Yang also mentioned the number of displaced jobs as 30% on Sam’s podcast, and said that he got this figure from the McKinsey report on the future of work.

However, the report only estimates the number of jobs that are potentially replaceable, and gives a range of 0-30% with 15% being the median estimate.

[2] I found this report by the Roosevelt Institute modeling the macroeconomic impact of UBI. Table 3 shows the potential labor force increase under various UBI scenarios. The relevant scenario is number 12: $1,000 a month funded by increased taxes. The estimated impact of scenario number 12 is an increase of 1.11 million jobs. 4 million extra jobs will only be added if UBI comes entirely from deficit spending.

I didn’t expect that when writing a column in praise of Andrew Yang I’d end up calling him out for misreporting numbers, but the priorities of Putanumonit are clear: truth in numbers first, politics fifty-seventh.




Dating book recommendations

Published on Thu Oct 18 2018 00:20:33 GMT+0000 (UTC)

Anyway... I have a persistent dating problem that I don't know how to solve: I've never gotten a date from online dating. So I was wondering if there were any good books, papers, or other resources on the subject?




New /r/gwern subreddit for link-sharing

Published on Wed Oct 17 2018 22:49:36 GMT+0000 (UTC)




Trying for Five Minutes on AI Strategy

Published on Wed Oct 17 2018 16:18:31 GMT+0000 (UTC)

Epistemic Status: I know basically nothing about any of this, or at least no more than any other LessWronger, I just happened to listen to some podcasts about AI strategy recently and decided to try my hand.

Epistemic Effort: Probably about an hour of thought cumulatively, plus maybe two hours to write this up

Global Coordination

AI, like many other x-risks, is in part a global coordination problem. As I see it, there are two main subproblems here: the problem of "where to go" (i.e. how to coordinate and what to coordinate on), and the problem of "how to get there from here" (taking into account the inadequacies present in our actual, real-life political systems).

Putting this another way: there is a top-down way of looking at the problem, which is: if we imagine what the world would be like if we managed to globally coordinate on stopping AI risk, what would that look like? What coordination mechanisms would we have used, etc. And then there is a bottom-up way of looking at it, which is: there are certain people in the world who are already, right now, concerned about AI risk. What series of actions could those people perform that would ensure (or as close to "ensure" as we can muster) that AI risk would be mitigated? (Though see this post from Benquo for a forceful objection to this line of thinking that I don't yet know how to take into account.)

As far as I can tell, there are three classes of solutions to the bottom-up version of the problem:

  1. Work unilaterally, outside of states
  2. Get people who are already in positions of power within states to care about the problem (by which I mean being fully willing to implement their end of a solution; merely believing it's important in the abstract doesn't count).
  3. Get people who already care about the problem into positions of power within states.

Solutions in class 1 may, of course, not be sufficient, if state coordination ends up being unavoidably necessary to solve the problem. Solutions in class 2 and 3 run into various problems described in Inadequate Equilibria; in particular, class 2 solutions face the "lemons problem." I don't (yet) have anything especially original to say about how to solve these problems (I'd be highly grateful for reading suggestions of places where people have proposed solutions/workarounds to the problems in Inadequate Equilibria, outside the book itself of course).

As for the top-down part of the problem, I see the following solutions:

  1. International treaty
  2. Weak supranational organization (the UN or another in a similar vein)
  3. Strong supranational organization (the EU but on a world scale) i.e. a supranational confederation
  4. One world nation e.g. supranational federalism

in rough order of ease of implementation. Solutions 3 or 4 (ignoring their political infeasibility) would be especially useful because they could solve not just AI risk, but also other x-risks that require global coordination, whereas if we solve AI risk by treaty, that makes little or no progress on other x-risks.

Actually, "fostering global coordination" seems to me like a good candidate for a high-impact cause area in its own right, as it attacks multiple x-risks at once. A lack of ability to internationally coordinate is a major factor increasing the likelihood of most x-risks (AI, climate change, bioterrorism, nuclear war, and maybe asteroid risk, though probably not non-anthropogenic pandemic risk or solar flares knocking out the power grid), not just AI, so working directly on methods of fostering our ability to globally coordinate is probably a high-impact cause area in itself, separate from work on particular x-risks. Though I should note that either Bryan Caplan or Robin Hanson (or maybe both; I don't have time to find the reference at the moment, I'll edit it in if I find it) has argued that pushing on increasing global coordination carries some risk of ending up with a global tyranny, an x-risk in itself.

Avoiding politicization

Maybe this is super obvious, but I don't think I've seen anyone else come out and say it: it's important that AI safety not become a politically polarized issue the way e.g. climate change has. It doesn't seem to be much of one yet, as neither party (sorry for U.S.-centrism) is talking about it basically at all (maybe this is why nobody has made this point), though I see Democrats talking about technology more than Republicans.

So, we need to make sure AI safety doesn't become politically polarized. How can we ensure this? Well, that depends whether you think we need to make sure it's widely discussed or not. On the one hand, I've seen it argued (I don't remember where or by who right now) that it might be dangerous to try to promote AI safety to a lay audience, because the message will almost certainly get distorted; if you think this is the way to go, then of course it's rather easy to make sure AI safety doesn't get polarized--just don't talk about it much to lay audiences or try to do public outreach about it. On the other hand, it seems likely that politicians will be a necessary component of any effort to globally coordinate around AI safety, and politicians need to focus to a large extent on the issues the public is concerned with in order to get reelected (I suspect this model is way too naive, but I don't know enough about the nitty-gritty of political science at the moment to make it better), so one way to make politicians care about AI safety is to get the public to care about AI safety. If this is the strategy you favor, then you have to balance making AI safety part of the national conversation with making sure it doesn't get politically polarized. But it seems like most issues in the public consciousness are also polarized to some extent. (This is another claim I'm highly unconfident about. Feel free to suggest counterexamples or models that contradict it; one counterexample I can think of off the top of my head is social media--it's quite salient in the national conversation, but not particularly partisan. I also believe the recent book Uncivil Agreement is largely an argument against my claim, but I haven't read it yet so I don't know for sure.) So this is an instance of a more general problem: getting politics to pay attention to issues that aren't polarized.

[After writing this section, a friend raised a ton of complications about whether polarization might be a good or bad thing, which I plan to write up in a separate post at some point.]

These are preliminary thoughts, posted in an exploratory spirit. Please do point out places where you think I've gotten something wrong; e.g. if you think one of my taxonomies is incomplete, it probably is, so I'd love if you'd point it out. I'd be especially grateful for reading suggestions, as I'm sure there are tons of places where I'm simply ignorant of relevant literature or of entire fields of study related to what I'm talking about (I have already gone through Allan Dafoe's AI strategy reading list and research agenda and extracted the readings I want to start with, although I'd be grateful if anyone has a copy of Huntington, Samuel P. “Arms Races: Prerequisites and Results.” Public Policy 8.1 (1958): 41–86. Can't find it online anywhere.)




Do Animals Have Rights?

Published on Wed Oct 17 2018 15:22:58 GMT+0000 (UTC)

[Regard this article as a draft; an unfinished piece of writing]

I’m not writing this article particularly because I seek to provide some answers – but because I seek to get some.

I recently came across a seemingly plausible take which suggests that animals do not have rights. It was expressed by Jordan Peterson, the well-known (and controversial) Canadian thinker, clinical psychologist, and professor of psychology at the University of Toronto. I find it worth considering. Let’s have a look at it (I have paraphrased it).

Animals do not have rights. Human beings have rights. Rights are not "inside" or part of a person. They are part of the complex agreements that make up civilized society. Or, in other words, they represent a story which educated human beings choose to believe in order to cooperate flexibly and in large numbers. They (or we) act upon this story as though it were a reality – because it massively comes in handy. My right to freedom, for example, is your obligation to let me speak and act with a minimum of interference. Thus, each of my rights is your obligation. And each of your rights is, simultaneously, my obligation.

Animals cannot shoulder an obligation. Thus, they cannot participate in the complex social contract that structures rights.

This does not mean that we should treat them any old way. But it does mean that the proper treatment of animals is not predicated upon their "rights.”

This also explains why you don't have a "right" to medical care. Someone else has to provide it. If you have a right to it, then the provider, who has no choice but to provide it, is no more than a slave. Thus, if we had to give animals rights – this would result in us being their slaves.

Why Does This Issue Even Matter?!

In order to be able to hope that humanity can make progress towards protecting animals and treating them properly, I believe it may be (perhaps immensely) valuable to reach a consensus on what general path we should take in order to do so. And, by deciding to bestow rights upon them – both legally and socially speaking – we automatically choose a path. By choosing this path, things will change – and it may be ambiguous whether for the better.

Peterson essentially argues that giving animals rights would create a contradiction in what the concept of "rights" means. Theoretically speaking, I tend to agree with him. But would this contradiction actually come to life and have any harmful consequences in practice? Or is it merely a theoretical, sophisticated truth? Indeed, in theory, offering rights to animals would result in us being their slaves. But animals cannot be, or become, aware of that.

I will conclude vaguely, by maintaining that I am not sure that giving animals rights would render us their slaves, nor that by doing so we would choose the best path towards protecting them better.



Discuss

Policy Beats Morality

17 October 2018 - 09:39
Published on Wed Oct 17 2018 06:39:40 GMT+0000 (UTC)

(Crossposted from Medium)

This is a simple point, but one that gets overlooked, so I think it deserves a clear statement. Morality is less effective than incentives at changing behavior, and most of the time, policy is the way incentives get changed.

Telling people the right thing to do doesn't work. Even if they believe you, or understand what you are saying, most people will not change their behavior simply because it's the right thing to do. What works better is changing the incentives. If this is done right, people who won't do the right thing on their own often support the change, and their behavior will follow.

I remember reading a story that I think was about Douglas Hofstadter's column in Scientific American, in which he asked eminent scientists to write in whether they would cooperate with someone described as being "as intelligent as themselves" in a one-shot prisoner's dilemma. He was disappointed to find that even many of the smartest people in the world were rational, instead of superrational. Despite his assertion that intelligent enough people should agree that superrationality leads to better outcomes for everyone, those people followed their incentives, and everyone defected. Perhaps we can chalk this up to their lack of awareness of newer variants of decision theory, but the simpler explanation is that morality is a weak tool, and people know it. The beneficial nature of the "morality" of non-defection wasn't enough to convince participants that anyone would go along.

Environmentalists spent decades attempting "moral suasion" as a way to get people to recycle. It didn't work. What worked was curb-side pickup of recycling that made money for municipalities, paired with fines for putting recyclables in the regular garbage. Unsurprisingly, incentives matter. This is well understood, but often ignored. When people are told the way to curb pollution is to eat less meat or drive less, they don't listen. The reason their behavior doesn't change isn't because it's "really" the fault of companies; it's because morality doesn't change behavior much, but policy will.

The reason politics is even related to policy is because politicians like being able to actually change public behavior. The effectiveness of policy in changing behavior is the secondary reason why, after donations by Intuit and H&R Block, Congress will never simplify the tax code. To paraphrase / disagree with Scott Alexander, "Society Is Fixed, Policy Is Mutable." Public policy can change the incentives in a way that makes otherwise impossible improvements turn into defaults. Punishment mechanisms are (at least sometimes) sufficient to induce cooperation among free-riders.

Policy doesn't change culture directly, but it certainly changes behaviors and outcomes. So I'll say it again: policy beats morality.

*) Yes, technological change and innovation can ALSO drive changes in incentives, but predicting the direction of such changes is really hard. This is why I'm skeptical that innovation alone is a good target for changing systems. Even when technology lowers the cost of recycling, it's rarely clear beforehand whether new technology will in fact manage to prompt such changes; electric trolleys were a better technology than early cars, but they lost. Electric cars are still rare. Nuclear power is the lowest-carbon alternative, but it's been regulated into inefficiency.
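The one-shot prisoner's dilemma story above can be made concrete with a tiny sketch. The payoff numbers here are my own illustrative choices, not from the column; any standard dilemma payoffs show the same thing: defection is each player's best response regardless of what the other does, even though mutual cooperation pays everyone more.

```python
# One-shot prisoner's dilemma with illustrative payoffs.
# payoffs[(my_move, their_move)] = my payoff
payoffs = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def best_response(their_move):
    """The move that maximizes my payoff, given the other player's move."""
    return max("CD", key=lambda my_move: payoffs[(my_move, their_move)])

# Defecting is better no matter what the other player does...
assert best_response("C") == "D"
assert best_response("D") == "D"

# ...yet when both players follow that incentive, each ends up
# worse off than under mutual cooperation.
assert payoffs[("D", "D")] < payoffs[("C", "C")]
```

This is the sense in which the participants "followed their incentives": the moral appeal to superrationality doesn't change the payoff table, so it doesn't change the dominant strategy.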

Discuss

My Hello World

17 October 2018 - 07:47
Published on Wed Oct 17 2018 04:47:37 GMT+0000 (UTC)

Hmm.

Staring at a blinking cursor on a draft post does elicit quite the runaway stream of thoughts about what on earth I’ll write about, how I would go about writing whatever that might be, all while concurrently reflecting on the impressions my writing style, content, and signals might make on any passerby.

Oh well, guess I'll write anyway. I've spent too many gorram years lurking, reading, absorbing, etc. and not actively developing my writing faculties or discussing interesting things with people who are of shorter inferential distance to me than most. This seems like a problem that's easily solved by simply starting to write, not allowing my efforts to be vanquished by my personal perfectionist daemon, and doing what is good enough for the moment... plus, writing and participating in discussions dramatically increase the rate and quality of my intellectual development, so, yay!

I don’t think it’s possible to convey how absolutely weird it is to me to see words published with a moniker I use attached to them. Not sure if that will ever stop being weird, because isn’t it ridiculous? This whole writing thing as a “crystallization” of one’s present state of being and cognition into a less ephemeral medium is certainly a great boon to trying to have human civilization, since it allows for “permanent” records of ideas, culture, blah blah, etc. you get the point...but as a social animal it does feel rather scary to “put oneself out there” for the world to see and judge. What else am I supposed to do though? Not do that? Pah. I need to do that for a variety of reasons, including Tsuyoku Naritai. So I shall.

Next time: I’m not entirely sure what I’ll talk about, but lately I have been focusing on “how to ask good questions”, so maybe it will be about that subject.

Take care of yourself, valete



Discuss

case study: iterative drawing

17 October 2018 - 07:35
Published on Wed Oct 17 2018 04:35:12 GMT+0000 (UTC)


This exercise brings together a few principles from my previous posts. If you're in the mood to do a little drawing (15 mins).

Try: pick an object (eg a cup) to draw. Divide your page into 20 rectangles. Do a brief sketch in one of the rectangles. For a few moments, thoughtfully and non-judgmentally reflect on it. What interests you, what kind of shapes are there? Draw another cup variant, noticing as you go.

This exercise is taken from the YouTube video "Iterative Drawing, the fastest way to improve" (40 min). The tl;dr is the exercise above. You do a tiny drawing experiment, and non-judgmentally pay attention to what happened.

Iterative drawing is an example of fast feedback loops. You try out lots of different hypotheses and explore the space of all possible cups much faster. Of all possible marks on paper, you will find the ones you like if you explore more. If you're going to spend 15 minutes drawing cups, I think I'd rather be the person who has drawn twenty than the person who has drawn one. [1]

You are generating a lot of interesting data. And you're paying attention to it. Quality and quantity. Anything you notice can then feed back into the generation process. The changes are cumulative. The results of each experiment inform the next batch of theories. Maybe the handle's too large; perhaps try a different angle. You're also focusing on the fundamentals, shape and proportion, not on the details.

A few doodles & interesting things I discovered:

 

 

[1] Reminds me of that story about the students who were graded on quantity vs quality. Discussion and thoughts here.



Discuss
