
### Rent Needs to Decrease

LessWrong.com News - October 11, 2019 - 15:40
Published on October 11, 2019 12:40 PM UTC

Here's part of a comment I got on my housing coalitions post:

> I consider it extremely unlikely you have found renters with the expectation of rent going down. Assuming they want to live in a well-maintained building, I consider it unlikely they even desire it, once they think about it. What renters hope for in general is increases that are less than their increases in income. Landlords mostly do expect that rents will go up, but the magnitude of their expectations matters; many have the same expectations as renters for moderate increases. Others will have short-term, transactional thinking and will want to charge what the market will bear.

This seems worth being explicit about: when I talk about how I think rents should be lower, I really mean lower. I'm not trying to say that it's ok if rent keeps rising as long as incomes rise faster, but that rents should go down.

Here are Boston rents in June 2011:

And in June 2019:

These are on the same scale, though not adjusted for inflation (13% from 2011-06 to 2019-06).

In 2011 a two-bedroom apartment in my part of Somerville would have gone for $1800/month, or $2050/month in 2019 dollars. In 2019, it would be $3000/month. Compared to 13% inflation, we have 67% higher rents. Another way to look at this is that what you would pay now for an apartment a ten-minute walk from Davis would, in 2011, have covered an apartment a ten-minute walk from Park St. And what you would have been paying for a Harvard Sq apartment in 2011 wouldn't get you an East Arlington apartment today.

These large increases have been a windfall for landlords. Property taxes haven't risen much and upkeep is similar, but because demand has grown so much without supply being permitted to rise to meet it, the market rent is much higher. If we build enough new housing that rents fall to 2011 levels, landlords will make less money than they had been hoping, but they'll still be able to afford to keep up their properties.

I'll support pretty much any project that builds more bedrooms: market rate, affordable, public, transitional. Rents are so high that all of these would still be worth building and maintaining even if everyone could see we were building enough housing that rents would fall next year to 2011 levels. As a homeowner and a landlord, I know that this means I would get less in rent, and I'm ok with that. I value a healthy community that people can afford to live in far more than a market that pays me a lot of money for being lucky enough to have bought a two-family at a good time.

### When we substantially modify an old post should we edit directly or post a version 2?

Published on October 11, 2019 10:40 AM UTC

Does anyone have any thoughts on this?
### Sets and Functions

Published on October 11, 2019 5:06 AM UTC

Sets and functions are two of the most basic ideas in mathematics, and we'll need to know what they are to discuss some things about categories rigorously. Normally you'd learn about sets and functions way before encountering category theory, but in the spirit of assuming as little math as possible, this post covers them. It's also worth addressing a few matters for their conceptual relevance.

Sets are imaginary bags we put things into. For example, you can take a dog, a cat, and a shoe, put them in an imaginary bag, and now you have a set consisting of
{dog,cat,shoe}. The members of the set—dog, cat, and shoe—are called the elements of the set.

A subtle but important aspect of a set is that the imaginary bag has to be defined by a rule. This rule can be pretty much anything, like "put into a bag everything I'm pointing at," but it does have to be a rule. Typically, sets can fit pretty much anything in, and so you can often just say "here is my set" rather than having to be explicit about the rule. We'll get back to why the rule matters at the end. For now, sets are imaginary bags that you can put pretty much anything into.

What are we putting into these bags, exactly? Pretty much anything, yes—but clearly we aren't actually putting dogs, cats, and shoes into bags. Mathematically, what are these things? That is to say, what's the difference between the set {dog} and the set {cat}? Well, what's the difference between the equations x+2=3 and y+2=3? Nothing but the name of the variable—which does not matter at all. We could call x anything. We could represent it with a thirty-foot-tall watercolor of a fire truck. So what's the difference between the set {dog} and the set {cat}? Only the name of the element—which does not matter at all. Just like we can take posets like 1→2→3 and a→b→c and represent their common structure abstractly as ∙→∙→∙, we can do the same for {dog} and {cat} with this set: {∙}. The set {∙} is what a one-element set like {dog} or {cat} really is. There's no mathematical meaning to {dog} that actually makes the element of this set a four-legged barking creature. It's just an element that happens to be labeled "dog."

So why do we care about sets?
Set theory is really important to mathematics, but from a category-theoretic perspective, sets actually aren't very important at all. Instead, sets serve one major purpose: sets let us define functions, and functions are really, really important!

Functions are maps between sets that meet a few rules. First of all, let's just talk about the "maps" part of that. Think of sets as countries, and the elements of the sets as cities in those countries. A map between two sets is a map telling you how to go from the cities in one country to the cities in the other country. But of course, math is a little more specific than that. So let's say you have one set A={a,b,c} and another set X={x,y,z}. What does it mean to define a map—a morphism, in fact—from A to X? Well, it's pretty simple in the end. You have to start from a city in A, so one of a, b, or c. And you have to end up in a city in X, so one of x, y, or z. Let's say you go from a to x. Then the map is just...the set of where you started from and where you ended up. That is, a and x, respectively. That's it! It's a short trip. There's not much to sets in the first place, so there's not much to maps between them. (Technically, sets have no structure—they're imaginary bags filled with black dots—and so the maps between them are as simple as a map across a country with no geography.)

But let's point out one thing: we do need this map to tell us where we started and where we ended. In a set, the order of the elements doesn't mean anything. For example, the set {apple,banana,orange} and the set {orange,apple,banana} are literally identical. The only reason the elements are even written in an order at all is that there's no other way to display text. To get our elements in order, so that we can show where we started from and where we ended up, we use something called an ordered pair, which you've probably seen from doing Cartesian coordinates.
When we have a map telling us to go from a to x, we represent that as an ordered pair (a,x). The ordered pair means "we started at a and ended up at x." Although sets don't have orders, we can have sets of ordered pairs (since we can put pretty much anything into sets), and in that way we can represent order in a set. For example, you can have the set consisting of just the ordered pair (a,x). That would be written {(a,x)}.

So what does it mean to define a map from A to X? It means defining a set of ordered pairs, the first part of each ordered pair coming from the set A and the second part coming from the set X. That is to say, a map f:A→X is defined as a set whose elements are ordered pairs (∙A,∙X), where ∙A is an element of A and ∙X is an element of X. So for example, we could have a map with the following directions: {(a,x),(b,y),(c,z)}. This map says, "If you're in a, go to x. If you're in b, go to y. If you're in c, go to z."

All such maps are called binary relations, because they define relationships between pairs of things. For example, the map just given defines relationships between a and x, b and y, and c and z. We could define all sorts of maps based on this definition. You could make "partial" maps that tell you where to go if you're in a, but not if you're in b or c. You could make "choose your own adventure" maps that have forking directions, e.g., a map having both (a,x) and (a,y) in it.

What's the best map? That is unquestionably a special kind of binary relation known as a function. "Best" might be a strong word, but functions have two special properties that have made them the most important type of map, and indeed morphism, in all of mathematics. The first property of functions is that they provide instructions for going from A to X for every "city" in A. Let's move away from the countries-and-cities metaphor and consider the study of science. Think of the elements of A as possible states of the world.
As scientists, we'd like a scientific rule that gives us predictions for every possible state of the world, i.e., something that provides a mapping for every element of A. That's something a function does—this property is called totality.

The second property of functions is something called univalence, which means that you get only one mapping for every city you could be starting from. That is to say, if your function tells you to do (a,x) and (a,y), it must be the case that x and y are completely identical, i.e., x=y. Having a map to two different cities starting from the same city is strictly disallowed by univalence.

Let's relate univalence to science as well. Basically, it captures the idea of determinism. If a state of the world yields a particular output, and then you reset things to that exact state of the world again, it had better yield the same output again. I.e., if you have a state of the world a, and you observe that a yields output x and also yields output y, the outputs x and y had better be the same output that accidentally got two different labels applied to it.

So between totality and univalence, a function basically captures the idea of "making deterministic predictions for everything," which is exactly what science is all about. We can combine totality and univalence into this single rule: a function f:A→X is a set of ordered pairs (∙A,∙X) where each element of A shows up in one and only one ordered pair. That is to say, a is definitely in an ordered pair, as guaranteed by totality. But by univalence, it will show up only once: if you have (a,x), then you won't also have (a,y).

You should know that while a function can't give you (a,x) and (a,y) unless x=y, it can totally give you (a,x) and (b,x). That would just be saying that two states of the world end up at the same point, which certainly seems like a scientific possibility.

Now we're going to address sets and functions from a category-theoretic perspective. In fact, we're going to make a category.
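The two function laws are easy to check mechanically on finite sets. Here's a minimal Python sketch (my own illustration; the post itself contains no code) that represents a map as a set of ordered pairs and tests totality and univalence:

```python
# Sketch: a map between finite sets as a set of ordered pairs.
# The example sets and names (A, f, partial, forking) are hypothetical.

def is_total(pairs, domain):
    """Totality: every element of the domain appears as a first coordinate."""
    return all(any(a == first for first, _ in pairs) for a in domain)

def is_univalent(pairs):
    """Univalence: no first coordinate maps to two different places."""
    firsts = [first for first, _ in pairs]
    return len(firsts) == len(set(firsts))

def is_function(pairs, domain):
    return is_total(pairs, domain) and is_univalent(pairs)

A = {"a", "b", "c"}
f = {("a", "x"), ("b", "y"), ("c", "z")}   # total and univalent
partial = {("a", "x")}                      # b and c have nowhere to go
forking = {("a", "x"), ("a", "y")}          # a goes to two different cities

print(is_function(f, A))        # True
print(is_function(partial, A))  # False: fails totality
print(is_function(forking, A))  # False: fails univalence
```

The `partial` and `forking` examples are exactly the "partial map" and "choose your own adventure" relations described above; only `f` satisfies both laws.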
***

Let's build a category whose objects are sets and whose morphisms are functions.

The first step is to make sets into objects. We do this by saying that sets are objects. Seriously, it's that simple—objects don't really have any properties at all aside from their morphisms, so there's nothing more to this step.

The next step is to make our functions into morphisms. We do this by saying that functions are morphisms, and then we check that functions obey the rules of morphisms in a category. First, domain and codomain. Sets serve as the domain and codomain of functions, and since sets are our objects, the functions clearly have the objects of this category as their domain and codomain.

So the first real thing to figure out is composition. Let's say we have sets A, B, and C, and functions f:A→B and g:B→C. Composition requires that we have another function h:A→C such that h=g∘f. Let's break this requirement down.

The function f starts with the elements in A and returns some elements in B. That is to say, we have a set of ordered pairs of the sort (a,b), where a comes from A and b comes from B. Say that A consists of {a1,a2,a3} and B is {b1,b2,b3,b4}. The function f assigns a specific element of B to each element of A. That is to say, we can ask: what is the specific (i.e., only one) element of B corresponding to a1? It could be b2, for example. If so, then f(a1)=b2. And we can ask the same question for a2 and a3. We might reuse elements of B or not. Let's say the full function f gives {(a1,b2),(a2,b2),(a3,b1)}.

Having assigned elements of B to elements of A in this way, we could think of elements of A as "hiding in" elements of B. For example, a1 is hiding in b2. (That is to say, the function f is hiding a1 in b2—it doesn't get to hide there automatically.) Next we have g:B→C, which assigns a specific element of C to each element of B. Say that C consists of {c1,c2}, and that g gives {(b1,c2),(b2,c1),(b3,c1),(b4,c1)}.
Now let's reveal our hidden soldiers. The elements a1 and a2 were hiding in b2, and a3 was hiding in b1. Naturally, they ambush the soldiers of C like so: {(a1,c1),(a2,c1),(a3,c2)}. (Why is this ambush allowed? Because (a1,b2) means f(a1)=b2, and (b2,c1) means g(b2)=c1. Substituting f(a1) for b2, we have g(f(a1))=c1.)

Is that "ambush" set a function? Yes: it has each element of A in an ordered pair, and each element is in only one pair. Is it a function A→C? Yes: the "first part" of each ordered pair comes from A and the "second part" of each comes from C. Can we call this function h? Yes, we just label it that for free. Is this function h the same as doing f first, and then g? Yes, that's exactly what we just saw. So functions compose. Check mark.

Now let's prove that these functions compose associatively. Say you have functions i:D→E, j:E→F, and k:F→G. We want to show that (k∘j)∘i=k∘(j∘i). Let's plug in an arbitrary element d of D. Our functions carry that element through to some element of G, and we want to know if it's the same regardless of how you group the functions. So let's see if ((k∘j)∘i)(d)=(k∘(j∘i))(d).

We know that composition is allowed (we're proving that composition is associative, after all). So let's compose. Say that k∘j=y:E→G and j∘i=x:D→F. Now we can simplify to asking whether (y∘i)(d)=(k∘x)(d). Well, what is (y∘i)(d)? It's a mapping of an element from D through i to E, and from E through y to G. And y is equal to the path that goes from E through j to F, and from there through k to G. So overall, (y∘i)(d) maps an element from D through i to E, through j to F, and through k to G.

And what is (k∘x)(d)? It's a mapping of an element from D through x to F, and from F through k to G. And x is equal to the path that goes from D through i to E, and from there through j to F. So overall, (k∘x)(d) maps an element from D through i to E, through j to F, and through k to G—which is exactly what we just said about (y∘i)(d).
Because they're both carrying the same element through the same functions, they have to end up at the same element in G, on pain of violating the rule that a function assigns exactly one output to each input. So they're equal for the case of d, and since d is an arbitrary element in an arbitrary set, the equality holds in general. Thus, composition is associative.

Finally, we need each set to have an identity morphism. Because our morphisms are functions, this will be the identity function. This is as simple as asking whether, for any set A, we can define a function 1A:A→A that does nothing. Here's an example of such a function. Say A={a1,a2,a3}. Then a function A→A would do nothing if it gave {(a1,a1),(a2,a2),(a3,a3)}. That is to say, each element just gets mapped back to itself.

Let's show that it does nothing. I.e., if you have a function g:A→B, the composition g∘1A is just equal to g. Well, duh! The function g is a mapping of elements from A to B. So if you keep all the elements where they were in A, then g is just going to be what g was going to be if you hadn't done anything in the first place. Obviously, you can just pair up each element with itself for any set, and that's going to keep not doing anything no matter how big the set gets. So every set has an identity function. (And only one, at that—there's only one way to pair up each element with itself.)

And now we're done. We've just shown how sets and functions can be considered a category: sets are objects by declaration, functions between sets compose associatively, and an identity function can be defined for every set. Neat!

You might be wondering if we could have defined a category with sets as objects and binary relations as morphisms. Indeed we can, in pretty much the same way as we did with sets and functions. In fact, since functions are just particular types of binary relations, proving that binary relations meet the rules of a category in general would have proved it for the specific case as well.
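The identity check can be sketched the same way. The dict encoding and the `compose` helper are illustration choices, not anything from the post itself.

```python
def compose(outer, inner):
    """Return outer∘inner: apply inner first, then outer."""
    return {x: outer[inner[x]] for x in inner}

def identity(s):
    """1_A as a dict: each element of the set maps back to itself."""
    return {x: x for x in s}

A = {"a1", "a2", "a3"}
g = {"a1": "b2", "a2": "b2", "a3": "b1"}  # g : A -> B

# Doing nothing first, then applying g, is the same as just applying g.
assert compose(g, identity(A)) == g
```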
In fact, the category of sets and functions is a subcategory of the category of sets and binary relations. Yet it is the category of sets and functions that gets the significant name Set, whereas the category of sets and binary relations gets the much less interesting name Rel. That's because of the category-theoretic perspective that everything interesting about a category is contained in its morphisms. Functions are basically the most important morphisms in mathematics, so we give the category for which functions are morphisms the name Set—we care about sets because, more than anything, they let us define functions.

***

One last note on defining the category of sets, Set. You may have heard of Russell's paradox, which says you can't have the set of all sets. That's because sets have to be defined according to a rule, as we said in the beginning. What if you try to define a set of all sets that are not elements of themselves? If this set is an element of itself, then by its own rule it must not be an element of itself, which is a contradiction. And if it is not an element of itself, then according to its rule it should be an element of itself—but we just saw why we can't let it be. So we bounce back and forth in eternal paradox, and we can't have a set of all sets.

Then how can we have a category of all sets? We'll discuss that in the next post, which will address this issue and use our new understanding of sets to more properly formalize the definition of a category given a couple of posts ago. Additionally, we'll look at two other interesting things about sets. We'll see how, although sets are bags of black dots, we can use functions to give those black dots meaning. It's the category-theoretic view that everything you want to know about an object is found in the object's morphisms.
Furthermore, we'll see that there are two special kinds of sets, which can be thought of as the "minimal" and "maximal" set, respectively. In doing so, we'll get our first tastes of the two main goals of this series, the Yoneda lemma and adjoint functors.

***

Incidentally, writing these posts has been shockingly exhausting and time-absorbing. So after this next post, I don't expect anything further on the weekend, although I may try to answer comments then. Five posts a week is not sustainable; 2-3 a week is probably more reasonable. This experience has given me a lot of respect for people who keep up daily blogs or write books.

Thanks very much to the people reading and commenting on these so far. It's very useful to have feedback and gratifying to know that people are interested in category theory.

I am also going to work on lowering the cost to myself of creating drawings to aid explanations. I have always explained this material in the past with easy access to pen and paper, and somehow it never quite occurred to me that even sketching out a few dots and arrows is much more of a pain on a computer. Taking recommendations, if you have any.

### Reflections on Premium Poker Tools: Part 4 - Smaller things that I've learned

Новости LessWrong.com - 11 октября, 2019 - 04:26

Published on October 11, 2019 1:26 AM UTC

Previous posts:

In the previous post, I talked about what I've learned. That post focused on bigger things. But there were a lot of smaller, more miscellaneous things that I've learned too. Those are the things I want to talk about in this post.

People think of a mobile app when you say you're building an "app"

Even when I clarify and try to explain that it's a web app, most people are still confused. So sometimes I call it a website, which I hate because that sort of implies that it's static. Sometimes I describe it as poker software. I still haven't found a good solution to this. I think "website" is probably best.
Ghosting is normal

This is a huge pet peeve of mine. I hate it. But apparently it's just a thing that many people do, at least in the business world. Let me give you some examples.

1) I reached out to a vlogger on Twitter and asked if she'd be interested in working with me. She said she was. Then she proposed that I pay her as an advertiser to promote the app. I responded that I'm only looking for revenue share partnerships right now, and asked if she was interested in that. No response. I followed up in a few days. No response. Followed up again. No response.

2) There was a guy I would study poker with via Skype every week. I swear, we had a good relationship and productive study sessions. At some point he was going to be away on a trip for a few weeks, so we said we'd resume when he got back. After the trip I reached out to set up a time for our next session. No response. I reached out again. No response. Again. No response. I eventually started spacing it out over months, but I never got a response from him. Eventually he signed up as my second paid user. I emailed him to thank him and ask if he wanted to catch up. No response. At this point maybe he just feels too awkward to respond. I'm really confused though. I have no clue what happened.

3) There've been a few times where I'd try to set up lunches with people for advice. Many times the conversation would go:

Me: Want to get lunch?
Them: Sure, how about next week.
Me: Sounds good, what would be a good time for you?
Them: Silence.
Me: Just checking in for lunch next week.
Them: Silence.
Me: Still want to get lunch?
Them: Silence.
Me: Sorry we missed each other last week. Want to reschedule?
Them: Silence.
Them: Hey, sorry I didn't respond previously. How's this upcoming week for you?
Me: No worries. I'm free. How about Wednesday?
Them: Silence.

I could go on and on giving examples of this sort of stuff, but I think you get the idea.
Additionally, I've found that when people want something from you, this phenomenon completely disappears! Mysterious, huh?

Book authors are just people

In the beginning I'd get star struck when I met or talked with book authors, or similarly "famous" people. But now I just see them as people. People aren't banging down their door. You can email them. It's not implausible that they'll get coffee with you.

There's no such thing as "just throwing out a number"

"Suppose we said it'd be $100/month."

"Suppose we say it's 50% revenue share."

I've made statements like this, intending them to not mean anything, to just be, y'know, throwing out a number. But in my experience, people get attached to these numbers, or at least become heavily anchored to them.

A few kind words means the world to an early stage entrepreneur

Sometimes I'd get emails from people saying that they really like the app and that they're thankful that I created it. That stuff really meant so much to me, and made me so happy. I almost want to create some sort of effective altruist movement of doing things like that, given the amount of utility it produces.

Paying for people's meals doesn't seem to induce much reciprocation

A lot of times I meet with people and will pay for their meals, in hopes that they'll reciprocate and spend more effort trying to help me out. But I've found it to be incredibly ineffective.

Here's an extreme example. There was a point where I was pushing hard for people to sit with me and allow me to do user research with them. I would offer to buy someone lunch if they would do so. I posted this on Reddit, and one guy took me up on it. So we met for lunch when he was in town.

He ended up bringing along two friends. They were both only vaguely interested in poker. Not good.

During the lunch, the table was a bit too crowded to take out laptops and do the user research. That's another thing I've learned — don't plan on doing things that involve a computer over lunch. So anyway, we have lunch, we realize that the user research part isn't working out because there's not enough room, so we say we'll do it after we finish lunch. Then one guy says that they're actually super tired from being up all night last night, and maybe we could do the user research tomorrow, or via Skype when they get back home if tomorrow doesn't work out. I say sure.

I pay the bill for all four of us. It must have been about $80. They're busy the next day. Afterwards, I text them a few times trying to set up a time to do the user research. Not responsive. Sometimes they straight up ignore me. Later on when I launched, I emailed them. No response. You'd expect a little bit more from a group you treated to an $80 lunch. Maybe these guys were the exception, but I get the sense that they're more the norm.

Cialdini — are you part of the replication crisis now too?

ROI isn't enough, even for high stakes pros

I talked previously about how poker software is easily a +ROI investment, and how I've been surprised at how unwilling people are to see it that way. But I was particularly struck by the fact that high stakes poker pros wouldn't see it that way. I would have thought that such people would be logical enough to jump on +ROI opportunities.

Example: I was having a conversation with Cy where he was saying that $100/month is pretty expensive, and I said I don't think it is at all. Consider some quick math. Say a high stakes poker player plays 10,000 hands/month, and the software improves your winrate by 0.1bb/100 hands. That's 1bb/1,000 hands, 10bb/10,000 hands, or 10bb/month. If you're playing $25/50, which he and Brad play, that's $500/month. And those are pretty conservative assumptions. The software could easily improve your winrate by more than that. And online poker pros often play way more than 10,000 hands/month. But this logic did not change his perspective. He said that he just doesn't see it this way, and that he knows his other high stakes friends don't either.

I deleted my backlog, and it turns out nothing terrible happened

I was inspired to delete my backlog by Jason Fried. It's worked out pretty well for me! It alleviated some stress, and I didn't feel like I was missing something.

Getting people to do user research with is hard

I figured it wouldn't be that hard. "Hey, let me sit down with you over lunch and watch you use my app. I'll pick up the bill!" "Hey, let's Skype and let me watch you use my app. I'll coach you for free!" I would think that stuff like this would easily get me dozens and dozens of volunteers pretty quickly, but it didn't work out like that for me. I found it hard to find people to do user research with.

E2E testing wasn't worth it for me

I've had a very love-hate relationship with e2e testing. It's so awesome when you have all your tests and can run them to make sure things work. But I really hate the framework I'm using, Nightwatch.js. It's slow and cumbersome. Maintaining the tests proved to be really time consuming, and there were so many weird bugs. I ultimately decided to stop maintaining and writing e2e tests, because I didn't feel they were worth it, given how time consuming they were.
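The quick winrate math from the ROI example above can be sketched in a few lines. The numbers are the post's own assumptions (10,000 hands/month, 0.1bb/100 hands improvement, $25/$50 stakes); the variable names are mine.

```python
# Back-of-the-envelope ROI math from the conversation with Cy.
hands_per_month = 10_000
improvement_bb_per_1000 = 1   # 0.1 bb/100 hands = 1 bb per 1,000 hands
big_blind = 50                # $25/$50 stakes, so a big blind is $50

gain_bb = improvement_bb_per_1000 * hands_per_month // 1000   # 10 bb/month
gain_dollars = gain_bb * big_blind                            # $500/month
assert gain_dollars == 500
```

At that rate, a $100/month subscription pays for itself five times over under these conservative assumptions.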
Long inferential distances is the realest thing in the world

One example: I was talking to a professional poker player and coach. He didn't know how to read a 2d graph with x-y coordinates. I said "x-axis". He said, "what?". This made me decide to change the text in my app to say "horizontal axis" instead of "x-axis". He also struggled to understand that a point on the graph refers to a pair of values, what the slope means, and how to calculate expected value. He wasn't the only one; I have plenty of other examples of stuff like this. I don't want to come across as being mean though. Just sayin'. I certainly have my own share of incompetencies.

I often felt a strong urge to get out of the house

Furthermore, for some reason, I felt an urge to get far away. There's this coffee shop I like to go to that is about an hour-long bike ride away. I found myself wanting to go there a lot. This urge confused me, because the coffee shop isn't that great. Eventually I realized that I really liked the bike ride, and being far away from home. Maybe it acted as a divider between "work day" and "relaxing time".

Other stuff

There are definitely things that I'm forgetting. Hopefully I'll add to this post as I remember them.

### Reflections on Premium Poker Tools: Part 3 - What I've learned

Новости LessWrong.com - 11 октября, 2019 - 03:49

Published on October 11, 2019 12:49 AM UTC

Previous posts:

I finally made the decision to call it quits. Now seems like a good time to reflect on my experiences and see what I can learn from them.

Market size

As I talked about in the previous post, I initially thought that the market size was hundreds of thousands of users, and that I could make something like $100/user. After talking to people in the industry, I now believe that the market is more like 5-10k users, and not all of them are willing to pay $100. This is incredibly important!
Going after a $20M market is very different from going after a $200k one. If I had known it was the latter in the beginning, I wouldn't have pursued this as a business. Spending 2+ years for the chance at making maybe $200k just isn't worth it, given the inherent uncertainty of startups, and the alternatives of pursuing a startup with a higher upside, or getting a job that pays approximately that much with 100% certainty.

So what happened? Where did I go wrong? Let's see. This was (roughly) my initial logic:

- 100k subscribers to the poker subreddit. Educational YouTube videos get 100k+ views. Popular posts on TwoPlusTwo (a big poker forum) have 1M+ views over the years.
- The kind of person to subscribe to the poker subreddit, watch an educational YouTube video, or spend their time on TwoPlusTwo is probably somewhat serious about poker. They're trying to get better at the game.
- All of this is indicative of a market size of hundreds of thousands of users. Possibly more. And poker is expanding. So there seems to be a big market here.
- And poker players are probably pretty willing to spend money. Investing in software is +ROI. Poker players love ROI! And they tend to be on the wealthy side.

Maybe I really underestimated the divide between passive + free and active + paid. Watching a YouTube video is something that is passive. You sit there and consume information. Using poker software is something that is active. You have to sit and think and mess around with numbers. Watching a YouTube video is free. Poker software costs money.

But then what about the existence of poker books? There are around 500 poker books on the market. The top ones get up to 100k sales, and many others get into the tens of thousands, I think. Maybe it's that with books, someone is telling you what the right answers are, but with software, you have to figure out the right answers yourself.

Anyway, I think the bigger point is that I should have found people in the industry and asked them about the market size. I started doing that towards the end of my journey, but I should have done so from the beginning. People in the poker world all seem to have a pretty solid idea of what the market is really like. Why screw around trying to figure it out myself with these questionable proxies when I could just ask the people who actually know?

I really can't emphasize enough how huge this is. It would have only taken a few hours, and it would have saved me so much time.

So why didn't I go out and talk to people in the industry? I'm not quite sure. I think a part of it was that I didn't actually feel like I had to. It seemed pretty clear to me that the market was big, so I was more concerned with making the product awesome.

Another part of it is that I didn't see it as an option to talk to people in the industry. Because why would they want to talk to little ol' me? They're basically B-list celebrities, in some sense. They've written books. Tons and tons of people know them. Don't these people have hundreds of fan emails every day that they never respond to? That was my thinking in the beginning. Now, I've come to realize that they're just people, and that they're often happy to chat and provide advice.

It also would have been good if I had access to the actual financial data of my competitors. But none of them are public companies. Does that mean this isn't an option, or are there still ways?

I came across one cool approach last night. If you don't have access to their financial data, you can look at how many employees they have and multiply by something like $125k or $200k. Something in that ballpark should give you an idea of their revenue. My competitors, from what I can tell, are all working solo. So that is a sign that they aren't making millions and millions of dollars. Not definitive, but it definitely points in that direction.
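The headcount heuristic above is a one-line calculation. The $150k default below is my own illustrative midpoint of the $125k-$200k range the post mentions, not a figure from the post.

```python
def estimate_revenue(employees, revenue_per_employee=150_000):
    """Rough revenue estimate for a private company:
    headcount times a revenue-per-employee figure in the $125k-$200k ballpark."""
    return employees * revenue_per_employee

# A solo operation suggests revenue on the order of $150k-$200k, not millions.
assert estimate_revenue(1) == 150_000
assert estimate_revenue(1, 200_000) < 1_000_000
```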

Another interesting option is to actually call your competitors and talk to them! Eg. you could pretend to be a prospective employee, and in that conversation start asking about revenue and stuff. I'm not sure how I feel about that sort of deception though.

Here's a closing thought for this section: the world isn't that big. For the majority of my journey, my thoughts on the market size for poker have been, "I'm not sure exactly how big it is, but it's big enough." The world is just such a big place, and so many people play poker. The market just has to be huge. Now I realize that this isn't true.

"Just another month or two"

As I explain in the first post, for a very long time, I kept thinking to myself:

I'm not sure if I should really pursue this as a business or a long-term thing, but I do know that I want to finish up X, Y and Z. It'll only take a month or two, and I think there's a good chance that it finally gets me over the hump.

This just kept happening over, and over, and over again. It's crazy.

And each time it happened, it felt like this was the time it really would only be another month or two. I was wrong last time, and the time before, and the time before, and the time before... but this time... this time I'll be right.

Wow. Articulating it like that really helps put things in perspective. I need to diagnose myself with a chronic case of the planning fallacy. I have a tendency to be overconfident, so I need to adjust in the opposite direction and be less confident. That's what you have to do with known biases: deliberately correct for them.

And with the planning fallacy in particular, there's a known cure: the outside view. Don't try to reason from the ground up. Look at how long similar things have taken in the past. Maybe use the reference class of "times I've thought it'd only be another month or two".

Man, that makes me laugh. "Times I've only thought it'd be another month or two." Ha! That reference class is full of miscalculations, so it's pretty clear that I need to adjust in the other direction pretty hard.

I say all of this stuff, but I still worry that I'm going to make the same mistakes again.

It always takes longer than you expect, even when you take into account Hofstadter's Law.

Agility

Check out this excerpt from the first post in this series:

And I have this little voice in my head saying:

Hey Adam... it's been over a year... you don't have any users. This like totally goes against the whole lean startup thing, y'know.

And then I say in response:

I know, I know! But I really question whether the lean startup applies to this particular scenario. I know you want to avoid wasting time building things that people don't actually want. That's like the biggest startup sin there is. I know. But like, what am I supposed to do?

My hypothesis is that once my app gets to the point where it's in the ballpark of Flopzilla or better, people will want to use it. It takes time to get to that point. There isn't really a quick two-week MVP I could build to test that hypothesis. I'm already trying to avoid building non-essential features, and to focus on getting to that point as quickly as possible. So what am I supposed to do?

If I released this and found that I had no users and no one wants this, what would that tell me? Just that people don't like this version of the app enough. Sure, ok. But the real question is whether they'll like the Ballpark Flopzilla version enough, or the Better Than Flopzilla version enough. My hypothesis is that they will, and releasing it now wouldn't invalidate that hypothesis. And I can't think of a way to test those hypotheses quickly. I think I just need to build it and see what happens.

I'm going to resist the temptation to respond to this right now. Right now I just want to tell the story. The story as it actually happened. But I do want to say that there were a lot of voices swimming around in my head questioning what I was doing.

I said in that post that I'm going to resist the temptation to respond to it. Here's where I do get to respond to it.

Here are my general thoughts about the whole lean startup thing:

1. I think the essence of it is to ruthlessly avoid spending time on things unnecessarily. Eg. you don't need to spend a week building a fancy navigation menu before you even know whether people want your product. I heartily, heartily support that message.

2. It can be tempting to think that there are no quick experiments you can run. This is usually wrong. Think more carefully, try to isolate the component assumptions, and get creative.

I definitely could have done a better job with (2).

3. But sometimes there truly aren't any quick experiments you can run. Eg. SpaceX. How is SpaceX supposed to build a quick MVP? (Well, there are some things they could do, but I don't want to get distracted from the central point that some hypotheses inherently just take a long time to validate.)

Now that I'm finished with Premium Poker Tools and am reflecting on what I've learned, it's time to add a fourth general thought.

4. If you are in a situation where (3) applies and are going to spend... I don't know... two years and three months testing a hypothesis... then the upside damn well better be worth it!

Pretty obvious. If the risk is big, the reward needs to also be big.

With Premium Poker Tools, the reward was never big enough. It just wasn't. This is no SpaceX.

Even if my initial ideas about the market size were correct, and there was the potential to make $10-20M, that still isn't enough to justify spending so long testing a hypothesis. However, I don't think it's that simple. This issue is very tangled up with the planning fallacy stuff I talked about in the "Just another month or two" section. I never actually decided to spend 2+ years on this.

Show me the money

I have another objection to the above section. I wasn't testing a hypothesis the whole time. No. About a year into it, I already had market validation.

That's right. I already had validation. I'm not sure when exactly I would say it occurred, but a big moment was when I posted to Reddit and someone asked where they could donate. Another was seeing one of the most popular poker players in the world using my app. Another was having a bunch of people tell me that they would pay for it. Another was having random people email me thanking me for building the app and telling me how great it is.

Now, I know they say talk is cheap. I know how startup people always want to see traction. Users. Money. Growth. I didn't have any of that. But I did have all of the other stuff in the above paragraph! People said so many nice things to me.

Sure, conventional wisdom might say that the above paragraph isn't enough, and that you need real traction... But I'm a Bayesian! I'm better than that! Saying that you need actual traction is like how the scientific community waits too long before accepting something as true. Being a Bayesian, I can update in inches. I can be faster than science. I can be faster than conventional startup wisdom.

Well, that's how I used to think anyway. Now I run around my apartment with my fingers in my ears yelling SHOW ME THE MONEY!!!!!!!!!

I'm exaggerating of course. I don't actually do that. And I don't actually think you should ignore everything except "actual results". No, I'm still a Bayesian, and Bayesians don't throw away evidence.
But given my experiences, I've come to believe that such evidence isn't nearly as strong as I had previously thought.

Deals fall through

This is similar to the above section. There, I talked about how when people say "This app is awesome! I would pay for it!", it doesn't actually mean that much. This section is about how when potential business partners say "This is really interesting! Let's talk more!" or even "I'm in!", it doesn't actually mean that much either.

You can read more about it in part one, but I've just had so many deals fall through. A lot of people were telling me that they were interested in working with me. Some even said that they would work with me. A verbal "yes". I thought to myself:

Ok, great! This is pretty strong Bayesian evidence right here. I wouldn't have so many people saying this stuff if they didn't mean it. Sure, maybe a few fall through, but not everyone. And also, I haven't even put myself out there too much. Just to throw out a number, maybe I've talked to 10% of the potential people who I could partner with. If I've gotten this much interest from the 10%, once I go after the 90%, I should end up with a good amount of partnerships.

Given my experiences, I've come to believe that verbal interest just isn't that telling. And this seems to mirror the experiences of others as well. I don't want to say that verbal interest means nothing, though. Honestly, I probably went too far in the above paragraph saying that it "just isn't that telling". I still have limited experience, and I'm not sure how strongly to weigh it as evidence. But one thing does seem pretty clear: the lack of actual traction is stronger evidence than verbal interest. Eg. for me, I had a lot of people saying they're interested in partnering with me, but no one actually following through. I think the lack of follow-through is stronger evidence than the verbal interest.
Similarly, I had a lot of people saying they really like the app and stuff, but even when it was free I was only getting maybe 100 users/month, and they weren't spending that much time on the app. I should have paid more attention to that.

Build it and they'll come

I've always had a perspective that goes something like this:

My app is at least in the same ballpark as Flopzilla. I feel pretty confident that it's a little bit better, actually. So then, I would think I should at least get 10-20% of the market, if not more. Yes, I know they were the first mover and have the brand recognition, but if my product is in the ballpark, I should still make a dent. I release it, people hear about it, some people like it and start using it, they tell their friends, people link to it in forums, put screenshots in blog posts, etc. I would think that if my product is as good as the market leader's, through a process something like what I just described, I would get my slice of the pie. Furthermore, I would expect my slice to be proportionate to the quality of my product. If the product is a little worse than the competitors', maybe my slice is 5%. If it's a little better, maybe I get 30-40%. If it's way better, maybe it's 75%. And maybe that perspective is too optimistic. That's certainly possible. But it can't be too far off the mark, right? Maybe if the product is a little worse I end up with 1% instead of 5%. Maybe if it's only a little better I get 5-10% instead of 30-40%. Maybe if it's way better I get 50% instead of 75%.

Although it's still a little counterintuitive to me, I have to say that my perspective is different now. The perspective I described above is some version of "build it and they'll come" (BIATC). I now think that BIATC is pretty wrong. I'm still not quite sure what the mechanism is, though. I suppose that it takes a lot to actually get word of mouth to happen.
I suppose people don't actually do that much comparison shopping, and instead lean heavily towards things that are popular, have social proof, and that they stumble across organically, eg. in blogs.

I feel like I may have overestimated BIATC due to the stuff I hear from YC folks. They talk a lot about focusing heavily on the product, as opposed to marketing it. I at least get the impression from them that making something people want is what it's all about, and that if you manage to do so, you'll have success. Here's Paul Graham in How to Start a Startup:

It's not just startups that have to worry about this. I think most businesses that fail do it because they don't give customers what they want. Look at restaurants. A large percentage fail, about a quarter in the first year. But can you think of one restaurant that had really good food and went out of business? Restaurants with great food seem to prosper no matter what. A restaurant with great food can be expensive, crowded, noisy, dingy, out of the way, and even have bad service, and people will keep coming. It's true that a restaurant with mediocre food can sometimes attract customers through gimmicks. But that approach is very risky. It's more straightforward just to make the food good.

Using the example of restaurants, I actually can think of a lot of restaurants with great food that don't do so well. Hole-in-the-wall-type places with awesome food, but that are never too busy. And on the other hand, I can think of a lot of trendier restaurants that have terrible food but a lot of customers.

Still, I don't want to completely discount BIATC. I think it can be true in some situations. I just think those situations have to be pretty extreme. You need to 10x your competition. You need to be solving a hair-on-fire problem. You need to be building a painkiller, not a vitamin. You need your users to really, really love you.
When you totally blow the competition away, or when you truly solve an important problem that hasn't been solved yet, yeah, when you build it, they'll come. Maybe that's what YC is trying to convey. Customer acquisition is hard Of course, I didn't actually just build my app and sit there expecting people to come to me. I did try to acquire customers. It just didn't work. Despite the fact that I have a product that people say all of these nice things about, I only managed to get, let's say three paid users and 100 free trial sign ups. That amount of traction seems very much not in line with the quality of product I have, which makes me think that customer acquisition is very hard. Or maybe just that I'm still bad at it. Here's what I tried, how it went, and what I learned: Affiliate partnerships This has always been Plan A. Find some people who already have huge followings eg. on YouTube, and piggyback off of that by offering them revenue share. Should be pretty simple. It's free money for them, and the product is good, so why wouldn't they want to do it? Especially when I ended up offering them 50%. Well, I'm still not quite sure what the answer to that is. Maybe they know how small the market is and how little money they'd make? Perhaps. But still, it's so low effort for them to throw a link in the description. And they're often already using poker software in their posts/books/videos, so it isn't any extra effort for them to use mine. In fact, mine is a better fit because mine is the only web app, and they could link to simulation results. In talking to them, the response I usually get is that they've been meaning to check the app out but just haven't had time. This makes no sense to me because as poker professionals, I'd think that they are already spending time studying with software, so why not substitute mine in? Maybe they don't actually study? Maybe they don't want to spend the time messing around with a product they're unfamiliar with? Who knows. 
My takeaway here is that if the deal you're offering people is only marginally beneficial, you might just end up with no partners. YouTube and blogging I think I have produced solid content with my YouTube channel and blog. People have told me that. But I've gotten only a minuscule amount of hits. Again, my initial thinking was what I described in BIATC. That the amount of hits I get should be at least roughly proportional to the quality of my content. Nope! That didn't happen! In retrospect, this makes sense. How are people supposed to discover your YouTube videos? YouTube recommends videos that are already popular, and chooses the videos that are already popular to put high in their search results. Similar with blogging. It's a chicken-egg problem that I have yet to figure out. Paid advertising There are a ton of huge companies built off of ad revenue: Google, Facebook, Instagram, Twitter, Reddit, etc. So then, that gives me the impression that paid ads are a huge thing that everyone is doing. And if they're doing it, they're doing it because it works for them, presumably. So paid ads have always been something that I assumed to work well. Nope! The first place I learned this was in Julian Shapiro's guide. He said something that made it actually seem pretty clear in retrospect. If you were able to get an ad channel working profitably for you, you'd just be able to scale up with that channel, acquiring customers profitably, and making a ton of money. If every company were able to do that, entrepreneurship would be pretty easy. Since reading his stuff, I started to hear other people say the same things, that it's really hard to do profitably and most companies never manage to do so. I still decided to give paid ads a shot, because it's a low risk high reward thing, but it turned out not to work for me. Direct sales I spent a pretty good amount of time doing direct sales. 
Some in-person networking at a poker meetup, playing poker at the casinos, meeting people in coffee shops, Ubers, etc. Then I also spent some time DMing people on poker forums, and emailing poker players and coaches. I didn't go too crazy because I didn't want to be spammy, but I definitely did some. None of it really worked though. I think my takeaway mirrors the BIATC stuff, where you need to have something really valuable to actually get people's attention. I again found this surprising. I figured that when you actually are in a conversation with someone, they'd sort of give you the benefit of the doubt. But no, I didn't find that to be true. Refer a friend There was a point where I was offering $20 for every friend you refer. Pretty good, right?! Seemed pretty generous to me, and like something people would want to take advantage of. But no, that didn't happen. Still confusing to me, but I guess the lesson again is that it really takes a lot to get people's attention.

Poker players spend a lot of time in forums discussing hands. They'll say things like, "you only have 33% equity, so you should fold". My app lets you link to a simulation that shows that you only have 33% equity. For non-users, it's in read-only mode. Users can click the link and play around with the assumptions.

I thought this would be huge. Making it easier for people to discuss hands in the forums. But no, it did nothing. I think a big part of it is that most people in the forums approach it very casually and don't want to spend a lot of time on their comments. They just want to add their two cents, and then leave.

Free

The app was even free for a long period of time. I would have thought that this would attract a ton of users, but no, that didn't happen. More evidence that it takes a lot to actually get people to look in your direction, and that customer acquisition is hard.

People are lazy and irrational

Here's what I mean. It's really true that if you're a poker player and are actually trying to get better, you should have some sort of poker software to study with. Coaches and professionals say this all the time. But people are still too lazy to do so.

Hell, they are just too lazy to study in general. I've said this before, but they prefer passive things. Watching a video and having someone "tell you the answers", even though that sort of passive study never works, regardless of the field. Eg. with math, you have to actually do a lot of practice problems yourself to get it to stick.

I've also heard a lot of coaches complain about students who pay them $100+/hr not doing the homework they are assigned. The coaches beg and plead, but the students just don't want to do the work. That's the lazy part. I guess the irrational part plays into that, but what I really had in mind is that poker software is pretty easily a +ROI investment, but people still don't care. And if poker players aren't persuaded by +ROI investments, then I'm not sure what other demographic would be persuaded. Everything else There are definitely things that I'm forgetting. Hopefully I'll add to this post as I remember them. If any readers out there have any advice, I'm all ears! I want to try to learn as much as I can from this. Discuss ### How feasible is long-range forecasting? LessWrong.com News - October 11, 2019 - 01:11 Published on October 10, 2019 10:11 PM UTC Lukeprog posted this today on the blog at OpenPhil. I've quoted the opening section, and footnote 17 which has an interesting graph I haven't seen before. How accurate do long-range (≥10yr) forecasts tend to be, and how much should we rely on them? As an initial exploration of this question, I sought to study the track record of long-range forecasting exercises from the past. Unfortunately, my key finding so far is that it is difficult to learn much of value from those exercises, for the following reasons: 1. Long-range forecasts are often stated too imprecisely to be judged for accuracy. 2. Even if a forecast is stated precisely, it might be difficult to find the information needed to check the forecast for accuracy. 3. Degrees of confidence for long-range forecasts are rarely quantified. 4. In most cases, no comparison to a “baseline method” or “null model” is possible, which makes it difficult to assess how easy or difficult the original forecasts were. 5. Incentives for forecaster accuracy are usually unclear or weak. 6.
Very few studies have been designed so as to allow confident inference about which factors contributed to forecasting accuracy. 7. It’s difficult to know how comparable past forecasting exercises are to the forecasting we do for grantmaking purposes, e.g. because the forecasts we make are of a different type, and because the forecasting training and methods we use are different. Despite this, I think we can learn a little from GJP about the feasibility of long-range forecasting. Good Judgment Project’s Year 4 annual report to IARPA (unpublished), titled “Exploring the Optimal Forecasting Frontier,” examines forecasting accuracy as a function of forecasting horizon in this figure (reproduced with permission): This chart uses an accuracy statistic known as AUC/ROC (see Steyvers et al. 2014) to represent the accuracy of binary, non-conditional forecasts, at different time horizons, throughout years 2-4 of GJP. Roughly speaking, this chart addresses the question: “At different forecasting horizons, how often (on average) were forecasters on ‘the right side of maybe’ (i.e. above 50% confidence in the binary option that turned out to be correct), where 0.5 represents ‘no better than chance’ and 1 represents ‘always on the right side of maybe’?” For our purposes here, the key results shown above are, roughly speaking, that (1) regular forecasters did approximately no better than chance on this metric at ~375 days before each question closed, (2) superforecasters did substantially better than chance on this metric at ~375 days before each question closed, (3) both regular forecasters and superforecasters were almost always “on the right side of maybe” immediately before each question closed, and (4) superforecasters were roughly as accurate on this metric at ~125 days before each question closed as they were at ~375 days before each question closed. 
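The informal "right side of maybe" reading described above (as distinct from the AUC/ROC statistic GJP actually computed) can be expressed in a few lines. This is an illustrative sketch with made-up numbers, not GJP's methodology:

```python
def right_side_of_maybe(forecasts, outcomes):
    """Fraction of binary forecasts assigning >50% to the option that occurred.

    forecasts: probabilities assigned to "yes"; outcomes: True/False actuals.
    On this scale 0.5 is chance level and 1.0 means always on the right side.
    """
    hits = sum(
        (p > 0.5) == actual  # did the forecast lean toward what happened?
        for p, actual in zip(forecasts, outcomes)
    )
    return hits / len(forecasts)

# Three forecasts: 80% on an event that happened (hit), 30% on one that
# happened anyway (miss), 10% on one that didn't (hit).
print(right_side_of_maybe([0.8, 0.3, 0.1], [True, True, False]))
```

On this metric a forecaster gets no extra credit for being confidently right, which is part of why the full AUC-based statistic is more informative.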
If GJP had involved questions with substantially longer time horizons, how quickly would superforecaster accuracy have declined with longer time horizons? We can’t know, but an extrapolation of the results above is at least compatible with an answer of “fairly slowly.” I'd be interested to hear others' thoughts on the general question, and any opinions on the linked piece. Discuss ### The Baconian Method (Novum Organum Book 1: 93-107) LessWrong.com News - October 10, 2019 - 21:54 Published on October 10, 2019 6:54 PM UTC This is the seventh post in the Novum Organum sequence. For context, see the sequence introduction. In this section, Bacon lists reasons why we should believe much greater progress in science is possible, and in doing so begins to describe his own inductivist methodology in detail. We have used Francis Bacon's Novum Organum in the version presented at www.earlymoderntexts.com. Translated by and copyright to Jonathan Bennett. Prepared for LessWrong by Ruby. Ruby's Reading Guide Novum Organum is organized as two books each containing numbered "aphorisms." These vary in length from three lines to sixteen pages. Bracketed titles of posts in this sequence, e.g. Idols of the Mind Pt. 1, are my own and do not appear in the original. While the translator, Bennett, encloses his editorial remarks in a single pair of [brackets], I have enclosed mine in a [[double pair of brackets]]. Bennett's Reading Guide [Brackets] enclose editorial explanations. Small ·dots· enclose material that has been added, but can be read as though it were part of the original text. Occasional •bullets, and also indenting of passages that are not quotations, are meant as aids to grasping the structure of a sentence or a thought. Every four-point ellipsis . . . . indicates the omission of a brief passage that seems to present more difficulty than it is worth.
Longer omissions are reported between brackets in normal-sized type. Aphorisms Concerning the Interpretation of Nature: Book 1: 93–107 by Francis Bacon 93. We have to assume that the force behind everything is God; for our subject matter—·namely nature·—is good in such a way that it plainly comes from God, who is the author of good and the father of light. Now in divine operations even the smallest beginnings lead unstoppably to their end. It was said of spiritual things that ‘The kingdom of God cometh not with observation’ [Luke 17:20], and it is the same with all the greater works of divine providence: everything glides on smoothly and noiselessly, and the work is well under way before men are aware that it has begun. And don’t forget Daniel’s prophecy concerning the last ages of the world: ‘Many shall run to and fro, and knowledge shall be increased’ [Daniel 12:4], clearly indicating that the thorough exploration of the whole world is fated to coincide with the advancement of the sciences. (By ‘fated’ I mean ‘destined by ·God’s· providence’. I would add that there have been so many distant voyages that ‘the thorough exploration of the whole world’ seems to have reached completion or to be well on the way to it.) 94. Next topic: the best of all reasons for having hope, namely the errors of the past, the wrong roads so far taken. In the course of censuring a poorly run government the critic said something excellent: The worst things in the past ought to be regarded as the best for the future. For if you had conducted yourself perfectly yet still ended up in your present ·miserable· condition, you would have not even a hope of improvement. But as things stand, with your misfortunes being due not to the circumstances but to your own errors, you can hope that by abandoning or correcting these errors you can make a great change for the better.
Similarly, if throughout many years men had gone the right way about discovering and cultivating the sciences, and the sciences had still been in the state they are now actually in, it would have been absurdly bold to think that further progress was possible. But if the wrong road has been taken, and men have worked on things that weren’t worthwhile, it follows that the troubles have arisen not from •circumstances that weren’t in our power but from •the human intellect—and the use and application of that can be remedied. So it will be really useful to expound these errors; because every harm they have done in the past gives us reason to hope to do better in the future. I have already said a little about these errors, but I think I should set them out here in plain and simple words. 95. Those who have been engaged in the sciences divide into experimenters and theorists. The experimenters, like •ants, merely collect and use ·particular facts·; the theorists, like •spiders, make webs out of themselves. But the •bee takes a middle course: it gathers its material from the flowers of the garden and the field, but uses its own powers to transform and absorb this material. A true worker at philosophy is like that: • he doesn’t rely solely or chiefly on the powers of the mind ·like a theorist = spider·, and • he doesn’t take the material that he gathers from natural history and physical experiments and store it up in his memory just as he finds it ·like an experimenter = ant·. Instead, • he stores the material in his intellect, altered and brought under control. So there is much to hope for from a closer and purer collaboration between these two strands in science, experimental and theoretical—a collaboration that has never occurred before now. 96. We have never yet had a natural philosophy that was pure. 
What we have had has always been tainted and spoiled: in Aristotle’s school by logic; in Plato’s by natural theology; in the second school of Platonists (Proclus and others) by mathematics, which ought only to set natural philosophy’s limits, not generate it or give it birth. From a pure and unmixed natural philosophy we can hope for better things ·than can be expected from any of those impure systems·. 97. No-one has yet been found who was sufficiently firm of mind and purpose to decide on and to carry out this programme: Clean right out all theories and common notions, and apply the intellect—thus scrubbed clean and evenly balanced—to a fresh examination of particulars. [[By particulars, Bacon likely means something close to specific individual data points and observations.]] For want of this, the human knowledge that we have is a mish-mash, composed of •childish notions that we took in along with our mothers’ milk, together with •·the results of· much credulity and many stray happenings. So if someone of mature years, with functioning senses and a well-purged mind, makes a fresh start on examining experience and particular events, better things may be hoped for from him. In this respect, I pledge myself to have good fortune like that of Alexander the Great. Don’t accuse me of vanity until you have heard me out, because what I am getting at—taken as a whole—goes against vanity. Aeschines said of Alexander and his deeds: ‘Assuredly we don’t live the life of mortal men. What we were born for was that in after ages wonders might be told of us’, as though Alexander’s deeds seemed to him miraculous. But ·what I am saying about myself is not like that, but rather like this·: in the next age Livy took a better and a deeper view of the matter, saying of Alexander that ‘all he did was to have the courage to neglect sources of fear that were negligible’. 
I think that a similar judgment may be passed on me in future ages: that I did no great things, but simply cut down to size things that had been regarded as great. . . . 98. We can’t do without experience; but so far we haven’t had any foundations for experience, or only very weak ones. No-one has searched out and stored up a great mass of particular events that is adequate • in number, • in kind, • in certainty, or • in any other way to inform the intellect. On the contrary, learned men— relaxed and idle—have accepted, as having the weight of legitimate evidence for constructing or confirming their philosophy, bits of hearsay and rumours about experience. Think of a kingdom or state that manages its affairs on the basis not of •letters and reports from ambassadors and trustworthy messengers but of •street-gossip and the gutter! Well, the way philosophy has managed its relations with experience has been exactly like that. • Nothing examined in enough careful detail, • nothing verified, • nothing counted, • nothing weighed, • nothing measured is to be found in natural history. And observations that are loose and unsystematic lead to ideas that are deceptive and treacherous. Perhaps you think that this is a strange thing to say. You may want to comment: Your complaint is unfair. Aristotle—a great man, supported by the wealth of a great king—composed an accurate natural history of animals; and others, with greater diligence though making less fuss about it, made many additions; while yet others compiled rich histories and descriptions of metals, plants, and fossils. If so, it seems that you haven’t properly grasped what I am saying here. For the rationale of a •natural history that is composed for its own sake is not like the rationale of a •natural history that is collected to supply the intellect with the concepts it needs for building up philosophy. 
They differ in many ways, but especially in this: the former attends only to the variety of natural species ·as they are found in nature·, not to ·deliberately constructed· experiments in the mechanical arts. In the business of life, the best way to discover a man’s character, the secrets of how his mind works, is to see how he handles trouble. In just the same way, nature’s secrets come to light better when she is artificially shaken up than when she goes her own way. So we can hope for good things from natural philosophy when natural history—which is its ground-floor and foundation—is better organized. Then, but not until then! 99. Furthermore, even when there are plenty of mechanical experiments, there’s a great scarcity of ones that do much to enlarge the mind’s stock of concepts. The experimental technician isn’t concerned with discovering the truth, and isn’t willing to raise his mind or stretch out his hand for anything that doesn’t bear on his ·practical· project. There will be grounds for hope of scientific advances when ·and only when· men assemble a good number of natural-history experiments that •are in themselves of no ·practical· use but simply •serve to discover causes and axioms. I call these ‘experiments of light’, to distinguish them from the ·practically useful but theoretically sterile· ones that I call ‘experiments of fruit’ [here ‘fruit’ = ‘practical results’]. Now, experiments of this kind have one admirable property: they never miss or fail! Their aim is not to •produce some particular effect but only to •discover the natural cause of something; and such an experiment succeeds equally well however it turns out, for either way it settles the question. 100. Many more experiments should be devised and carried out, and ones of an utterly different kind from any we have had up to now. But that is not all. There should also be introduced an entirely different method, order, and procedure for carrying through a programme of experiments. 
To repeat something I have already said [82]: when experimentation wanders around of its own accord, it merely gropes in the dark and confuses men rather than instructing them. But when there is a firmly regulated, uninterrupted series of experiments, there is hope for advances in knowledge. 101. Even after we have acquired and have ready at hand a store of natural history and experimental results such as is required for the work of the intellect, or of philosophy, still that is not enough. The intellect is far from being able to retain all this material in memory and recall it at will, any more than a man could keep a diary all in his head. Yet until now there has been more thinking than writing about discovery procedures—experimentation hasn’t yet become literate! But a discovery isn’t worth much if it isn’t ·planned and reported· in writing; and when this becomes the standard practice, better things can be hoped for from experimental procedures that have at last been made literate. 102. The particulars ·that have to be studied· are very numerous, and are like an army that is dispersed across a wide terrain, threatening to scatter and bewilder the intellect ·that tries to engage with them·. There’s not much to be hoped for from intellectual skirmishing ·with these particulars·, dashing here and there among them in a disorderly way. What is needed is first •to get the relevant particulars drawn up and arranged, doing this by means of tables of discovery that are well selected, well arranged, and fresh (as though living); and •to put the mind to work on the prepared and arranged helps that these tables provide. [[By axiom, Bacon means something akin to hypothesis or model.]] 103. But after this store of particulars has been laid before our eyes in an orderly way, we shouldn’t pass straight on to the investigation and discovery of new particulars or new discoveries; or anyway if we do do that we oughtn’t to stop there. 
I don’t deny that when all the experiments of all the arts have been collected and ordered and brought within the knowledge and judgment of one man, new useful things may be discovered through taking the experimental results of one art and re-applying them to a different art (using the approach to experiments that I have called ‘literate’, ·meaning that the results are properly recorded in writing·). But nothing much can be hoped for from that procedure. Much more promising is this: from those particular results derive axioms in a methodical manner, then let the light of the axioms point the way to new particulars. For our road does not lie on a level, but goes up and down—up to axioms, then down again to scientific practice. [[For a modern plain English description of Bacon's method see: 1, 2, 3. A concrete example of what Bacon is discussing might be as follows: Particular: you observe that both parents of sparrows care for their young*. Highly-General Axiom/Hypothesis: both sexes of all bird species care for the young; Medium-General Hypothesis: both sexes of small birds care for their young; Narrow Axiom/Hypothesis: some of both sexes of sparrows living in South England care for their young. Aristotle might start with a few observations or a folk belief that some birds of both sexes care for their young and then formulate a universal truth: For all X such that X is a bird, it cares for its young. By syllogism, Aristotle will derive new particular cases: Robins are a kind of bird, therefore both sexes of Robins care for their young. This is syllogistic demonstration. Bacon states that the Aristotelian approach is utterly invalid and instead one must only generalize modestly from observations, using each expansion of the generalization to seek out further evidence which will either confirm or deny further expansion. *This is a fictitious example.]] 104.
But the intellect mustn’t be allowed •to jump—to fly—from particulars a long way up to axioms that are of almost the highest generality (such as the so-called ‘first principles’ of arts and of things) and then on the basis of them (taken as unshakable truths) •to ‘prove’ and thus secure middle axioms. That has been the practice up to now, because the intellect has a natural impetus to do that and has for many years been trained and habituated in doing it by the use of syllogistic demonstration. Our only hope for good results in the sciences is for us to proceed thus: using a valid ladder, we move up gradually—not in leaps and bounds—from particulars to lower axioms, then to middle axioms, then up and up until at last we reach the most general axioms. ·The two ends of this ladder are relatively unimportant· because the lowest axioms are not much different from ·reports on· bare experience, while the highest and most general ones—or anyway the ones that we have now—are notional and abstract and without solid content. It’s the middle axioms that are true and solid and alive; they are the ones on which the affairs and fortunes of men depend. Above them are the most general axioms, ·which also have value, but· I am talking not about abstract axioms but rather about ones of which the middle axioms are limitations ·and which thus get content from the middle axioms·. So the human intellect should be •supplied not with wings but rather •weighed down with lead, to keep it from leaping and flying. This hasn’t ever been done; when it is done we’ll be entitled to better hopes of the sciences. 105. For establishing axioms we have to devise a different form of induction from any that has been in use up to now, and it should be used for proving and discovering not only so-called ‘first principles’ but also the lesser middle axioms—indeed all axioms.
The induction that proceeds by simply listing positive instances is a childish affair; its conclusions are precarious and exposed to peril from a contradictory instance; and it generally reaches its conclusions on the basis of too few facts—merely the ones that happen to be easily available. A form of induction that will be useful for discovery and demonstration in the sciences and the arts will have •to separate out a nature through appropriate rejections and exclusions, and then, after a sufficient number of negatives, •to reach a conclusion on the affirmative instances. [Bacon will start to explain this in 2-15.] No-one has ever done this, or even tried to, except for Plato who does indeed make some use of this form of induction for the purpose of discussing definitions and ideas. But for this kind of induction (or demonstration) to be properly equipped for its work, many things have to be done that until now no mortal has given a thought to; so that much more work will have to be spent on this than has ever been spent on the syllogism. And this induction should be used not only in the discovery of axioms but also in drawing boundaries around notions. It is in this induction that our chief hope lies. [[Here Bacon again mentions the importance of Looking Into the Dark.]] 106. When establishing an axiom by this kind of induction, we must carefully note whether the axiom is shaped so as to fit only the particulars from which it is derived, rather than being larger and wider. And if it is larger and wider, we must see whether its greater scope is confirmed and justified by new particulars that it leads us to. Such a justified increase of scope saves us from being stuck with things that are already known (but if it isn’t justified then we are over-stretching, loosely grasping at shadows and abstract forms rather than at solid things in the world of matter). When we do things in this way we shall at last have justified hope. 107. 
At this point I should remind you of what I said earlier [80] about extending the range of natural philosophy so that the particular sciences can be grounded in it, and the branches of knowledge don’t get lopped off from the trunk. For without that there will be little hope of progress. Discuss ### "Mild Hallucination" Test LessWrong.com News - October 10, 2019 - 20:57 Published on October 10, 2019 5:57 PM UTC In Scott Alexander's Lots of People Going Around With Mild Hallucinations All the Time, he shows that several people not currently on LSD still experience mild hallucinations commonly associated with taking LSD. I would like to test to see if I could teach you how to see these mild hallucinations, regardless of experience with psychedelics. Below are 3 tests that should take 1-2 minutes to complete. If you choose to complete 1 or more of these, please comment both failed and successful attempts. Please also comment if you can already see some of these, even if you think it seems obvious. Test 1: Visual Snow Description: See the Visual Snow Wiki for a nice visualization on the top-right. I would describe it as "jumpy spiderwebs made out of light", similar in feel to the "black stars" people see when feeling faint (when they get up too quickly). I would say it's NOT the same experience as mental imagination or eye floaters. Test: For 1 minute (click here for a 1 minute timer), close your eyes and try to see the back of your eyelids using your peripheral vision. If a minute elapses with nothing resembling "visual snow", then it's a failure. If it's a success, then try to see visual snow with your eyes open, again for 1 minute at most. Test 2: Afterimage Around Objects Description: It's similar in feel to the image on the right in the afterimage wiki. Similar to seeing a bright light and still seeing it in your vision after you look away.
Test: For 2 minutes max (click here for a 2 minute timer), find a brightly colored object that's against a different flat colored background (a red towel hanging in front of a light tan wall, your face in the mirror in front of a white door, etc), and just stare at the object using your peripheral vision. Don't shift your eyes, just pick a spot and focus on your peripheral vision. If you don't see a colored afterimage of the object around parts of that object, then it's a failure. Test 3: Breathing Walls Description: It looks like the static surface you're looking at (floors, walls, ceilings) is shifting, rotating, swirling, "breathing" (sort of dilating back and forth?) even though you know that it's actually still static. Usually more apparent in patterned surfaces than plain colored ones. Test: For 1 minute, find a larger, textured surface (carpet, pop-corn ceilings, [other examples?]), and stare at it using your peripheral vision. If after a minute of staring you don't see any moving, shifting, etc, then it's a failure. Discuss ### Nonviolent Communication: Practice Session Events at Kocherga - October 10, 2019 - 19:30 How can you have fewer conflicts without giving up your own interests? Nonviolent communication is a set of skills for reaching mutual understanding with people. Come to our practice sessions to develop these skills and communicate more sensitively and effectively. ### CPH meetup 10/10/19 LessWrong.com News - October 10, 2019 - 18:38 Published on October 10, 2019 10:08 AM UTC This is intended as a space for people curious about the world and critical of the methods of understanding it. It is primarily aimed at people interested in applying Bayesian Rationality (LessWrong, CFAR, etc.) to their critical methods but also welcomes Slate Star Codex readers, and others curious about the world.
Last time the community seemed excited about a community member sharing their view on something they are knowledgeable about, and so without further ado I would like to present this week's activity. Activity: Community Presentation Carl Dybdahl will be doing a presentation on the causes of gender dysphoria and transsexuality, where he will be talking about Blanchard's typology, research into gender issues, critiques of the model, and things that may be interesting to study in the future. Schedule: 19:00 Informal meet and greet 19:30 Activity begins 21:00 Socialize This is the second meetup in a series that will last until December. We are meeting in the aquarium. Discuss ### What is the real "danger zone" for food? LessWrong.com News - October 10, 2019 - 14:00 Published on October 10, 2019 11:00 AM UTC In my post about keeping dry food warm in school lunches I wrote that "The general rule is that hot food shouldn't be below 140F (60C) for more than two hours, because substantial bacteria can grow, and the closer it is to 100F (37C) the worse it is." Someone said they had heard 130F (55C) was the limit, and asked where this rule came from, so I read more about it. The bacteria that are most dangerous to us generally thrive best around body temperature, so the farther you get from 100F (37C) the better. Food safety material generally describes this as a "danger zone" between refrigerator temperature and cooking temperature.
This is what's represented in the FDA's Food Code: 3-501.19.B.1 Time as a Public Health Control: Time - maximum up to 4 hours: The FOOD shall have an initial temperature of 5C (41F) or less when removed from cold holding temperature control, or 57C (135F) or greater when removed from hot holding temperature control It gives more detail in section 3-501.16's "The Safety of the Time as a Public Health Control Provision from Cooking Temperatures (135F or above) to Ambient": FDA conducted in-house laboratory experiments to test the safety of the existing TPHC provisions of 4 hours without temperature control starting with an initial temperature of 135F or above. Clostridium perfringens was chosen to represent a worst case scenario pathogen for foods allowed to cool from cooking temperatures to ambient without temperature control, because its spores can survive normal cooking procedures, it can grow at relatively high temperatures (>120F) and it has a short lag period. C. perfringens spores were inoculated into foods that were cooked and then cooled to yield a cooling curve that would promote outgrowth as quickly as possible. The growth data suggest that the existing 4-hour TPHC provision will be safe for 6 hours after cooking, with the additional 2-hour margin of safety built-in for consumer handling. There's a great chart in The "Danger Zone" Reevaluated by Frank Bryan, showing how long food can spend at various temperatures: Since the time I care about is 4hr from packing to eating, even under the food code guidelines it should be safe if it starts at cooking or refrigerator temperatures. To be safe, though, I'm going to keep using the thermal mass thermos approach. 
Comment via: facebook Discuss ### Testing the Efficacy of Disagreement Resolution Techniques (and a Proposal for Testing Double Crux) LessWrong.com News - October 10, 2019 - 10:18 Published on October 10, 2019 7:18 AM UTC Introduction I will describe a procedure for testing the efficacy of disagreement resolution techniques (DRTs) together with methodologies for inducing their use (induction methods). DRTs are structured games (in the loosest senses of the terms) that involve conversation, and are aimed at helping participants who disagree about some topic either figure out the truth about that topic, or come to better understand each other's positions and the sources of their disagreement. An induction method is just any way of trying to get someone to use a DRT. I am writing up a description of this procedure here because I plan to use it to test and find DRTs, and I would like to get feedback before I go out and do that. I originally came up with this procedure in order to test Double Crux, and I still plan to. I will describe the first step of that plan in the second half of this post. The first half of the post explains the general procedure and some frames I think might be useful for understanding the second half of the post. I would also like to invite others to use the general procedure for whatever seems like a good idea to them. It seems fairly obvious to me now, but something something hindsight. I would like any kind of feedback that you think might make the methodology I describe herein better. I am particularly interested in feedback on the specific procedure I plan to use for testing Double Crux, but feedback on the general procedure would also be great. Any feedback or advice on the statistical analysis of the results of my test procedure for Double Crux would also be appreciated. General Procedure Step 0: Gather participants and filter them for desired characteristics.
Ignorance of the DRT to be tested should be verified for all participants, regardless of the aims of those conducting the study. Inform accepted participants of the nature of the study: that they will be scored according to a proper scoring rule, that they may have to take a module, that they will discuss a controversial question with a stranger, etc. Step 1: Have participants assign credence distributions over the possible answers to multiple choice questions. Such questions should have definite correct answers, and it should be hard or impossible for participants to look up the answers. Step 2: Pair participants according to disagreement, measured by total variation distance. Step 3: Randomly assign participants to one of three groups. Control Group 1: Members are given no special instructions. Control Group 2: Members are given basic advice on how to have useful conversations with people who disagree with them. Treatment Group: Members will be induced to use the DRT to be tested during the conversation in step 4, using the induction method to be tested. Step 4: Inform members of all three groups who they were paired with and what question the pair disagreed about. Members of the first two groups are instructed to have a conversation with their partner with the aim of becoming more confident about what the right answer is. For the third group, the induction method is applied either before or during their conversations, depending on its design. Members of the third group are instructed to use the DRT being tested in order to figure out the right answer to the question they were assigned. Have all three groups conduct their conversations. Step 5: Have participants assign a new credence distribution over the possible answers to the multiple choice question they discussed in step 4. Step 6: Pay participants according to a proper scoring rule scored on the credence distributions they assigned in step 5.
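Step 2's pairing criterion can be made concrete. Total variation distance between two credence distributions over the same answer set is half their L1 distance. Here is a minimal sketch in Python; the participant names, the example credences, and the greedy pairing strategy are my own illustrative assumptions, not part of the procedure as specified:

```python
from itertools import combinations

def tvd(p, q):
    """Total variation distance between two credence distributions
    over the same multiple-choice answers: half the L1 distance."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Hypothetical credences over a three-answer question.
credences = {
    "alice": [0.7, 0.2, 0.1],
    "bob":   [0.1, 0.1, 0.8],
    "carol": [0.6, 0.3, 0.1],
}

def pair_by_disagreement(credences):
    """Greedily pair the two remaining participants who disagree most."""
    unpaired = set(credences)
    pairs = []
    while len(unpaired) >= 2:
        a, b = max(combinations(unpaired, 2),
                   key=lambda ab: tvd(credences[ab[0]], credences[ab[1]]))
        pairs.append((a, b))
        unpaired -= {a, b}
    return pairs
```

With the example credences above, alice and carol mostly agree, so the first (and only) pair formed always includes bob, the main dissenter.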
Measurement There are two kinds of data that I think we should try to collect from this kind of procedure. The first is the amount of evidence or information gained by each participant through their conversation. For instance, if a participant started out assigning a credence of 0.2 to the correct answer for their assigned
question, and then assigned the correct answer a credence of 0.5. This means that they started out assigning it odds of 1:4, and ended up assigning it odds of 1:1 as a result of their conversation. This suggests that they treated the conversation as a test result with a likelihood ratio of 4:1, which amounts to gaining 2 bits of evidence from the conversation. If a participant updates away from the correct answer, the likelihood ratio will be below one, and so the log of the likelihood ratio will be negative. The second kind of data I would like to collect is the degree to which a pair's beliefs converged after the conversation, regardless of the truth of the answer on which they converged. I will measure this by taking the total variation distance between their credence distributions before their conversation, and subtracting from that number the total variation distance between the credence distributions they assigned after their conversation. Constraints on Multiple Choice Questions The multiple choice questions used should be questions for which participants are unlikely to already know the correct answer. They should also be questions on which it is reasonable to expect that people might make incremental progress toward the correct answer through conversation, e.g. "Will President Trump be reelected in 2020?" would be fine, but "What is Joe Smith's sister's name?" would not. The questions should also have the property that if you find out the right answer, it is not completely trivial to convince others of it. Counterintuitive physics puzzles satisfy these criteria, but most Raven's Progressive Matrices questions do not.
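The two measurements just described, bits of evidence gained and pairwise convergence, can be sketched in Python. The function names are my own; the worked example is the 0.2-to-0.5 update from the text:

```python
import math

def bits_of_evidence(prior, posterior):
    """Evidence (in bits) toward the correct answer implied by an update:
    log2 of the ratio of posterior odds to prior odds in the correct answer.
    Negative if the participant updated away from the correct answer."""
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return math.log2(posterior_odds / prior_odds)

def tvd(p, q):
    """Total variation distance: half the L1 distance."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def convergence(p_before, q_before, p_after, q_after):
    """How much a pair's distributions converged: the drop in their TVD."""
    return tvd(p_before, q_before) - tvd(p_after, q_after)

# The example from the text: 0.2 -> 0.5 is odds of 1:4 -> 1:1,
# a likelihood ratio of 4:1, i.e. 2 bits of evidence.
assert abs(bits_of_evidence(0.2, 0.5) - 2.0) < 1e-9
```

Note that convergence can be negative too, if a conversation drives a pair further apart.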
It might be good for there to be a set of neutral questions, such as counterintuitive physics or logic puzzles, or questions about how long it took for spider silk to evolve, as well as a set of controversial or politically sensitive questions, such as who will be elected or nominated for some important political office, or whether the GDP of a country went up or down after a particular policy was implemented. DRTs are not Individuated by their Induction Methods The original example of a DRT that I had in mind was Double Crux. Having anchored on Double Crux, the alternative possible DRT-induction method pairs I imagined I might later test shared Double Crux's canonical pedagogical structure: you are taught the technique by someone who knows it (or maybe you read about it), you practice it some, and then you use it with other people. The space of possible DRT-induction method pairs is much larger than this would suggest. DRTs are not always transmitted to someone before the first time they are used. Some induction methods allow someone's first use of a DRT to coincide with, or even precede, the first time they learn how it works. One example of an induction method that allows participants to use a DRT before they learn how the DRT works is having a trained facilitator facilitate their conversation according to the norms of the DRT. Another might be using a program that guides participants through a discussion. Maybe it keeps track of their cruxes; keeps track of the various topic branches and sub-conversations, the different arguments that have been made, and what their premises are; and keeps track of users' credences and asks them to update them explicitly when a new argument is entered. If a DRT were codified in such a program, participants might not have to be preemptively instructed in how to use the DRT. The program might be sufficiently intuitive and give participants enough instruction on its own.
This would have the added bonus that they could later use that program without needing a trained facilitator around. I could imagine designing a program like this for Double Crux, but also for other DRTs that are not Double Crux. Of course, you could also design alternative DRTs and transmit them in the way that Double Crux is normally transmitted. This means that whether an accompanying induction method is applied before or during the first time participants use a DRT does not depend much on the nature of the DRT itself. The general procedure does not test the efficacy of DRTs in isolation; it tests them together with a particular method of induction. If a DRT totally fails to get any interesting result when we try to induce it with one method, that does not mean that the DRT itself is inefficacious. It might be that the induction method used failed to induce the DRT's use, or that it induced its use but failed to transmit the most important fragments of the DRT. Specific Plans to Test Double Crux Induced by an Online Training Module Procedure Step 0: I will collect participants on positly.com. I will filter them for not already knowing what Double Crux is. I may also filter them for having completed a bachelor's degree, depending on how difficult the final set of questions and the concepts involved in the training module turn out to be. Positly participants are already filtered by Positly for being decent study participants in general. Step 1: I will have them take a questionnaire that contains several multiple choice questions. Some of these will be predictions about politically sensitive topics. Some will be physics or logic puzzles. Some will be numerical estimation questions like "how many miles of road are there in Africa". I will ask them to assign credence distributions over the multiple choices for each question. I will then ask them if they would like to participate in a larger study.
I will first explain what sorts of things they can expect to do during that study and what sorts of compensation they can expect to earn; if they say yes, I will collect their emails and record their data on a spreadsheet. Step 2: I will pair participants according to disagreement, preferring larger disagreements to smaller ones. Again, this will be measured by total variation distance. Step 3: I will then randomly assign pairs to one of three groups: Control Group 1: Members of this group will not be given any specific advice. Control Group 2: Members of this group will be given general advice on how to have useful conversations with people they disagree with. I will tell them that they should try to see each other as cooperating to solve a puzzle, that they should see themselves as partners on the same team, that they should try really hard to understand why the other person has such different beliefs from theirs, and that they should try to ask each other a lot of questions. Treatment Group: Members of this group will be asked to complete an online training module that is intended to teach them some critical fragment of the Double Crux technique, and asked to Double Crux with their partners in step 4. (More on the design of this module later.) Step 4: Participants will be asked to communicate with the other member of their pair via a video chat service or a live text chat service. (More on this later.) They will be asked to speak to each other for as long as they would like, but for no less than 30 minutes. Step 5: I will then have participants assign a new credence distribution over the multiple choice answers for the question they were assigned to discuss in step 4. Step 6: I may offer participants extra compensation to record or send a log of their conversation. I will then recruit people who can credibly claim to assess Double Crux performance, and have them rate logs or recordings of the conversations for Double Cruxiness.
They will rate these logs on a scale from 1 to 10. They should give a 1 to any conversation they consider not to be an instance of Double Crux at all, a 5 to a conversation that seems about as Double Cruxy as the average conversation between people who disagree and know about Double Crux, and a 10 to any conversation that seems like a totally paradigmatic instance of Double Crux. If I decide to do this, I will try to design the module without the use of the word "crux" so as to minimize the chance that participants use the term in their logs, as that would certainly tip off the raters.

Step 7: Reward participants according to a Brier score computed on the distributions they assigned in step 5.

Measurements

I plan to measure the same quantities mentioned above. I will measure the amount of information gained by each participant from the conversation, and I will measure the degree to which each pair converged.

Text Chat or Video Chat?

I am genuinely unsure about what medium I should ask people to conduct their conversations through. Video chat is closer to in-person conversation, and I personally find the extra bandwidth useful, especially when I am trying to have a cooperative conversation. However, it would be easier to have participants send logs that can be rated and analyzed if the conversations are mediated through text. Text also offers the advantage of providing a visible log of the conversation for the participants to look over, which is basically the same as giving participants more working memory. I would be interested to hear other people's thoughts on this.

Why Have People Rate the Double Cruxiness of the Conversations?

I would like to take logs of the conversations and have people rate them for Double Cruxiness so that if I get a negative result, it is more informative.
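The scoring pieces can be sketched as follows. The multiple-choice Brier score is standard; `bits_gained` (change in log credence assigned to the correct answer) is only my guess at the evidence measure, which the post defines earlier, outside this excerpt.

```python
import math

def brier_score(credences, correct_index):
    """Multiple-choice Brier score: sum of squared errors between the
    credence vector and the one-hot vector of the correct answer.
    Lower is better; 0 means full credence on the right answer."""
    return sum(
        (p - (1.0 if i == correct_index else 0.0)) ** 2
        for i, p in enumerate(credences)
    )

def bits_gained(before, after, correct_index):
    """Bits of information gained about the correct answer: change in
    log2 credence assigned to the truth (an assumed operationalization,
    not necessarily the post's exact measure)."""
    return math.log2(after[correct_index]) - math.log2(before[correct_index])
```

Under this measure, a participant who moves from 25% to 50% credence on the correct answer gains exactly one bit.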
Without step 6, if it turned out that participants in the treatment group got about as much information out of their conversations as members of either control group, we might interpret this as being primarily evidence that the module used in the experiment does not teach Double Crux. But if we know that people who understand Double Crux rate the participants in the treatment group as using Double Crux more than those in the control groups, then we can rule out that explanation. If the raters rate the treatment group's Double Cruxiness above the level of the control groups, then that suggests that the module does teach Double Crux. In that case, a negative result would be some evidence against the efficacy of the Double Crux technique itself. If the raters rate the treatment group's Double Cruxiness near the level of the control groups, this suggests that the module does not teach Double Crux, or at least that it does not teach it any better than giving some basic advice about how to have useful disagreements. In that case, the treatment group scoring about as well as control group 2 would not be much evidence against the efficacy of Double Crux itself, although it would certainly be further evidence that the module does not successfully induce the technique. A positive result, on the other hand, would suggest that those people I consider relative authorities on Double Crux are not reliable raters of the degree to which the technique is being used, or are not sensitive to the important aspects of its use. I think this would be an interesting thing to learn and further investigate. It might help with the design of future DRTs or training modules.

I might ultimately leave this step out of a pilot study. I would be more likely to include step 6 in a later follow-up to the pilot. If I did decide to include step 6, I would make sure that nobody, including myself, knew how well the treatment group did compared to the control groups until after the rating was done.
Designing the Module

Although I use Double Crux often, I do not think of myself as a qualified instructor of the technique. I would like the module to be designed primarily by people who can credibly claim to have successfully taught Double Crux multiple times. Preferably, it would be designed by people who have also put a lot of thought into figuring out what parts of the technique are most important, and what makes it work. I will try to recruit or contract some such folks to help me design the module. If they turn out to be too expensive for my current budget, then I will just try my best at designing the module, plagiarizing from the people I would have hired as much as possible, and asking for as much feedback as I can handle along the way. I expect that the module should include examples, diagrams, and questions that check for understanding. I would prefer for it to be no longer than 30 minutes, but it might have to be longer. Regardless of who ends up designing it, I will be asking for feedback on the module from other people.

Pilot Study

My current plan is to pay people 20 usd just for participating, since it is a rather intensive study. (It might take up to an hour and a half to complete. I will start out offering a lower price and see what happens, but I would be happy to go as high as 20 usd.) I will also offer up to 10 usd in monetary rewards for updating towards the right answer. I may or may not offer an additional reward for sending a log, if I decide to include step 6 in the pilot study at all. This would be more likely if I ended up asking participants to use video chat instead of text, since recording video is harder than copying and pasting. I would like to have at least 30 participants in each group for the pilot study.

Statistical Analysis

My plan is to use both standard significance analysis, specifically a Welch test, and also the Bayesian method, BEST, described here.
In this section, I will use a standard difference of means test, since it is easier to explain, but the Welch test gives similar results. You can verify their similarity yourself using this online calculator.

There are three distributions we are interested in: the distributions of our measurements for the control group 1 population, the control group 2 population, and the treatment group population. The analysis is the same for both measures, except that the sample for the convergence measure will have half as many data points. I will focus on the evidence measure here, since it is what I am more interested in, especially for the pilot study. I will call the respective means μ1, μ2, and μt.

There are two hypotheses we want to test, and two corresponding null hypotheses:

H0: μt − μ1 ≤ 0
Ha: μt − μ1 > 0

And also:

H0: μt − μ2 ≤ 0
Ha: μt − μ2 > 0

It might also be interesting to compare μ1 and μ2. The significance analysis for all such tests is the same, so I will arbitrarily pick the first one. The means of two distributions being equal is equivalent to the means of their sampling distributions being equal: μx̄t − μx̄1 = 0. The distributions of x̄t and x̄1 are approximately normal by the central limit theorem, and so their difference is also approximately normally distributed. To estimate the standard deviation of the difference distribution, we use the pooled variance formula:

σ(x̄t − x̄1) = √(s1²/n1 + st²/nt)

Estimating the sample standard deviations to be about 4 bits of information each (higher than I expect them to be), with a sample size of 30 in each group, that gives us the following critical values for each significance level:

For α = .2 we should reject the null hypothesis at approximately x̄t − x̄1 = 0.87.
For α = .15 we should reject at approximately x̄t − x̄1 = 1.07.
For α = .1 we should reject at approximately x̄t − x̄1 = 1.33.
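These critical values can be reproduced with the Python standard library alone; `critical_value` is a hypothetical helper name, and the one-sided normal approximation matches the derivation above.

```python
from math import sqrt
from statistics import NormalDist

def critical_value(alpha, s1, s2, n1, n2):
    """One-sided critical value for the difference of sample means
    under the normal approximation, using the standard error
    sqrt(s1^2/n1 + s2^2/n2) from the text."""
    se = sqrt(s1**2 / n1 + s2**2 / n2)
    return NormalDist().inv_cdf(1 - alpha) * se

# Guessed standard deviations of 4 bits, 30 participants per group:
for alpha in (0.2, 0.15, 0.1):
    print(alpha, round(critical_value(alpha, 4, 4, 30, 30), 2))
```

The α = .1 value comes out to about 1.32 under this approximation, consistent with the text's "approximately 1.33".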
If instead we guess sample standard deviations of 2 bits, then we get:

For α = .2 we should reject the null hypothesis at approximately x̄t − x̄1 = 0.43.
For α = .1 we should reject the null hypothesis at approximately x̄t − x̄1 = 0.66.
For α = .05 we should reject the null hypothesis at approximately x̄t − x̄1 = 0.85.

I am optimistic that I can get a sample mean difference greater than 0.43, but that would not be significant even at the liberal α = .2 level unless the standard deviation is 2 or less. I think it is possible the pilot study will get a sample difference of greater than 0.87. I am less optimistic about greater than 1.07, and I think better than 1.33 is unlikely. Of course, the sample standard deviations will almost certainly be higher or lower than my current best guesses. My expectations about what results I am likely to get are pure speculation, but they are my best guesses, and I do take my guesses about the sample standard deviations to be genuinely conservative. I will likely later end up laughing at how optimistic or pessimistic I previously was.

It might be worth pointing out that with a sample size of about 115 in each group, and sample standard deviations around 3, the critical value for α = .05 is about 0.7, which I would be relatively optimistic about beating.

As stated above, I also plan to use BEST to do a Bayesian analysis of the results. I will give a few example priors and their corresponding posteriors, and hopefully also provide a program in R or Python that allows one to input a prior and get a posterior as output. If there is a better way to do this than the Welch test that does not assume equal variances of the relevant population distributions, or a better Bayesian approach, I am all ears. I would also be interested in hearing about alternatives to the BEST approach. I might be missing something important. This is just the bog-standard approach I found after googling and asking around for a bit.
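As a much simpler stand-in for BEST (which fits t-distributions by MCMC rather than doing a conjugate update), the following sketch shows the prior-in, posterior-out interface mentioned above, for a normal prior on the true mean difference. It is illustrative only, not the planned analysis.

```python
from math import sqrt

def normal_posterior(prior_mean, prior_sd, sample_diff, se):
    """Conjugate normal update for the true mean difference between
    groups: a normal prior combined with an approximately normally
    distributed observed sample difference with standard error `se`.
    Precision-weighted averaging gives the posterior mean and sd."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = 1.0 / se**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * sample_diff)
    return post_mean, sqrt(post_var)
```

With a very diffuse prior, the posterior mean essentially equals the observed sample difference; a skeptical prior centered at zero shrinks it toward no effect.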
In any case, I plan to make all data (aside from the conversation logs) publicly available (while preserving anonymity, of course). I would encourage folks to analyze that data however they like and let me know what they find.

What Happens After You Get Your Result?

If I get a positive result, I will look for further funding to do a larger study. If I get a positive result on a stronger study, I will have a module that has been empirically shown to help pairs of people who disagree get more information out of their conversations. I plan to distribute any such module for free. I will of course also try to look for other DRT-induction methods with larger effect sizes. If I get a negative result on the pilot study, I will still make the data publicly available, as well as the module, and let people make their own judgments on how that result should update them. I may release a program that allows one to input a prior over a parameter space and gives the appropriate posterior as output. I will then continue to systematically test DRTs and methods for inducing their use. This is all part of my grander plot to rigorously test, develop, and disseminate systematic methods for improving the art of human rationality. I am not going to give up on that because of one negative result.

With that, I would like to thank Daniel Filan, Vaniver (Matthew Graves), Oliver Habryka, Luke Raspkoff, Katja Grace, and Eliezer Yudkowsky for their feedback and/or their much appreciated encouragement.

Discuss

### Thoughts on "Human-Compatible"

LessWrong.com News - October 10, 2019 - 08:24
Published on October 10, 2019 5:24 AM UTC

The purpose of this book is to explain why [superintelligence] might be the last event in human history and how to make sure that it is not... The book is intended for a general audience but will, I hope, be of value in convincing specialists in artificial intelligence to rethink their fundamental assumptions.
Yesterday, I eagerly opened my copy of Stuart Russell's Human Compatible (mirroring his Center for Human-Compatible AI, where I've worked the past two summers). I've been curious about Russell's research agenda, and also how Russell argued the case so convincingly as to garner the following acclamations from two Turing Award winners:

Human Compatible made me a convert to Russell's concerns with our ability to control our upcoming creation—super-intelligent machines. Unlike outside alarmists and futurists, Russell is a leading authority on AI. His new book will educate the public about AI more than any book I can think of, and is a delightful and uplifting read. —Judea Pearl

This beautifully written book addresses a fundamental challenge for humanity: increasingly intelligent machines that do what we ask but not what we really intend. Essential reading if you care about our future. —Yoshua Bengio

Bengio even recently lent a reasoned voice to a debate on instrumental convergence!

Bringing the AI community up-to-speed

I think the book will greatly help AI professionals understand key arguments, avoid classic missteps, and appreciate the serious challenge humanity faces. Russell straightforwardly debunks common objections, writing with both candor and charm. I must admit, it's great to see such a prominent debunking; I still remember, early in my concern about alignment, hearing one professional respond to the entire idea of being concerned about AGI with a lazy ad hominem dismissal. Like, hello? This is our future we're talking about! But Russell realizes that most people don't intentionally argue in bad faith; he structures his arguments with the understanding and charity required to ease the difficulty of changing one's mind.
(Although I wish he'd be a little less sassy with LeCun, understandable as his frustration may be.)

More important than having fish, however, is knowing how to fish; Russell helps train the right mental motions in his readers:

With a bit of practice, you can learn to identify ways in which the achievement of more or less any fixed objective can result in arbitrarily bad outcomes. [Russell goes on to describe specific examples and strategies] (p. 139)

He somehow explains the difference between the Platonic assumptions of RL and the reality of a human-level reasoner, while also introducing wireheading. He covers the utility-reward gap, explaining that our understanding of real-world agency is so crude that we can't even coherently talk about the "purpose" of e.g. AlphaGo. He explains instrumental subgoals. These bits are so, so good.

Now for the main course, for those already familiar with the basic arguments:

The agenda

Please realize that I'm replying to my understanding of Russell's agenda as communicated in a nontechnical book for the general public; I also don't have a mental model of Russell personally. Still, I'm working with what I've got. Here's my summary: reward uncertainty through some extension of a CIRL-like setup, accounting for human irrationality through our scientific knowledge, doing aggregate preference utilitarianism for all of the humans on the planet, discounting people by how well their beliefs map to reality, and perhaps downweighting motivations such as envy (to mitigate the problem of everyone wanting positional goods). One challenge is towards what preference-shaping situations the robot should guide us (maybe we need meta-preference learning?). Russell also has a vision of many agents, each working to reasonably pursue the wishes of their owners (while being considerate of others). I'm going to simplify the situation and just express my concerns about the case of one irrational human, one robot.
There's fully updated deference: One possible scheme in AI alignment is to give the AI a state of moral uncertainty implying that we know more than the AI does about its own utility function, as the AI's meta-utility function defines its ideal target. Then we could tell the AI, "You should let us shut you down because we know something about your ideal target that you don't, and we estimate that we can optimize your ideal target better without you." The obstacle to this scheme is that belief states of this type also tend to imply that an even better option for the AI would be to learn its ideal target by observing us. Then, having 'fully updated', the AI would have no further reason to 'defer' to us, and could proceed to directly optimize its ideal target. which Russell partially addresses by advocating ensuring realizability, and avoiding feature misspecification by (somehow) allowing for dynamic addition of previously unknown features (see also Incorrigibility in the CIRL Framework). But supposing we don't have this kind of model misspecification, I don't see how the "AI simply fully computes the human's policy, updates, and then no longer lets us correct it" issue is addressed. If you're really confident that computing the human policy lets you just extract the true preferences under the realizability assumptions, maybe this is fine? I suspect Russell has more to say here that didn't make it onto the printed page. There's also the issue of getting a good enough human mistake model, and figuring out people's beliefs, all while attempting to learn their preferences (see the value learning sequence). Now, it would be pretty silly to reply to an outlined research agenda with "but specific problems X, Y, and Z!", because the whole point of further research is to solve problems. However, my concerns are more structural. Certain AI designs lend themselves to more robustness against things going wrong (in specification, training, or simply having fewer assumptions). 
It seems to me that the uncertainty-based approach is quite demanding on getting component after component "right enough". Let me give you an example of something which is intuitively "more robust" to me: approval-directed agency. Consider a human Hugh, and an agent Arthur who uses the following procedure to choose each action:

1) Estimate the expected rating Hugh would give each action if he considered it at length.
2) Take the action with the highest expected rating.

Here, the approval-policy does what a predictor says to do at each time step, which is different from maximizing a signal. Its shape feels different to me; the policy isn't shaped to maximize some reward signal (and pursue instrumental subgoals). Errors in prediction almost certainly don't produce a policy adversarial to human interests. How does this compare with the uncertainty approach? Let's consider one thing it seems we need to get right:

Where in the world is the human?

How will the agent robustly locate the human whose preferences it's learning, and why do we need to worry about this? Well, a novice might worry "what if the AI doesn't properly cleave reality at its joints, relying on a bad representation of the world?". But having good predictive accuracy is instrumentally useful for maximizing the reward signal, so we can expect that its implicit representation of the world continually improves (i.e., it comes to find a nice efficient encoding). We don't have to worry about this: the AI is incentivized to get it right. However, if the AI is meant to deduce and further the preferences of that single human, it has to find that human. But before the AI is operational, how do we point to our concept of "this person" in a yet-unformed model whose encoding probably doesn't cleave reality along those same lines? Even if we fix the structure of the AI's model so we can point to that human, it might then have instrumental incentives to modify the model so it can make better predictions.
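Arthur's two-step procedure above can be sketched in a few lines of Python; the rating model, action names, and numbers are purely illustrative.

```python
def approval_directed_action(actions, predicted_rating):
    """Approval-directed choice: estimate the rating Hugh would give
    each action on reflection, then take the action with the highest
    expected rating. `predicted_rating` stands in for a learned model
    of Hugh's considered judgment (a hypothetical component)."""
    return max(actions, key=predicted_rating)

# A toy predictor of Hugh's ratings over three candidate actions.
hugh_model = {"fetch coffee": 0.6, "ask for clarification": 0.9, "disable off-switch": 0.0}
chosen = approval_directed_action(hugh_model.keys(), hugh_model.get)
```

Note that the policy only ever does what the predictor rates highest; nothing in it optimizes a long-run signal, which is the structural difference being pointed at.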
Why does it matter so much that we point exactly to the human? Well, then we're extrapolating the "preferences" of something that is not the person (or a person?): the predicted human policy in this case seems highly sensitive to the details of the person or entity being pointed to. This seems like it could easily end in tragedy, and (strong belief, weakly held) doesn't seem like the kind of problem that has a clean solution. This sort of thing seems to happen quite often for proposals which hinge on things-in-ontologies. Human action models, mistake models, etc. are also difficult in this way, and we have to get them right. I'm not necessarily worried about the difficulties themselves, but that the framework seems so sensitive to them.

Conclusion

This book is most definitely an important read for both the general public and AI specialists, presenting a thought-provoking agenda with worthwhile insights (even if I don't see how it all ends up fitting together). To me, this seems like a key tool for outreach. Just think: in how many worlds does alignment research benefit from the advocacy of one of the most distinguished AI researchers ever?

Discuss

### Examples of Categories

LessWrong.com News - October 10, 2019 - 05:19
Published on October 10, 2019 1:25 AM UTC

Before we go through any more conceptual talk, let's explore some examples of categories to get a better feel of what they are. Categories do two things:

1) They function as mathematical structures that capture our intuition about how models of cause-and-effect (composition) work in general.

2) They generalize other mathematical structures that capture our intuition about how specific models work.

Obviously, 1 and 2 are related. We model particular pieces of reality with particular mathematical structures (e.g., we model shapes with geometry), so generalizing how we model the world and generalizing fields of mathematics are highly related tasks.
Because I'm assuming my audience isn't familiar with any mathematical structures, we can't just start tossing out examples. Essentially, the art of "categorifying" a mathematical structure is as simple as identifying what the objects are and what the associatively composable morphisms are. See here for a list of examples (scroll down to the table), which ought to give you an idea that many different kinds of mathematics really are just examples of categories on some level. To give just one example of a mathematical structure generalized by category theory, you might be relatively familiar with the category known as Set, the category whose objects are sets and whose morphisms are functions, functions being very nice maps between sets. Functions compose, as we've already seen, and you can check that this composition is associative, so, voila, we've "categorified" a mathematical structure.

Set as a category:
Objects: sets
Morphisms: functions
Identity morphism: identity function
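To make this concrete, composition and the identity morphism in Set can be spot-checked on arbitrary example functions (a check on a few inputs, not a proof of associativity):

```python
def compose(g, f):
    """Composition of morphisms in Set: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

def identity(x):
    """The identity morphism on any set."""
    return x

# Arbitrary example morphisms between sets of integers.
f = lambda x: x + 1
g = lambda x: 2 * x
h = lambda x: x - 3

# Associativity: h . (g . f) and (h . g) . f agree on every input.
left = compose(h, compose(g, f))
right = compose(compose(h, g), f)
```

Composing with `identity` on either side leaves a morphism unchanged, which is exactly the identity law a category requires.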
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')} @font-face {font-family: 
MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')} f(x)=x Rule of composition: function composition, e.g., 2x2 Associativity: 2(x2)=(2×1)x2 Since Set is pretty important, we'll discuss it in a separate post to make sure we're all on the same page about sets and functions. But hopefully this gives you an idea of what it looks like to take a "math thing" and make it into a category. *** Even if you're not a math expert, you probably know that 2 is bigger than 1. That's all the level of expertise required to understand a very important type of a category called a poset, which is category-speak for "partially ordered set." A partially ordered set is a set with an order than is at least partial. A poset is any category for which there is at most one morphism between any two objects. I.e., if you have A→B in a category, then you do not also have B→A in the same category, unless B is merely a relabeling of A. (In which case the morphism is the identity morphism, as a poset also rules that there can only be at most one morphism between any object and itself, and since every object must have an identity morphism, there isn't any other possibility.) A poset is partially ordered in the sense that, since A→B but not B→A, you could think of A and B as being in an order. For example, you could represent natural numbers, e.g., 1,2,3, etc., as objects and have the morphism be "less than or equal to," i.e., the ≤ sign. Then A→B means "A≤B." Hence the category has an order—the objects are ordered by how big they are. 
(Why less than or equal to and not simply less than? Because we need the identity morphism, and a number can't be less than itself. But it can be less than or equal to itself, by being equal to itself!)

Posets are partial orders in the sense that there can be at most one morphism between A and B; however, there does not need to be a morphism between them at all! For example, say your objects are sports players, and your morphism means "less than or equal to in skill." Then A→B would mean that basketball player A is less than or equal in skill to basketball player B. However, suppose that C is a soccer player. Then morphisms like A→C or B→C wouldn't really make sense. Hence, the orders in the category are only partial: they do not run through the entire category.

When, additionally, any two objects A and B are comparable (there is a morphism between them in at least one direction), that's called a total order. The natural numbers are in fact totally ordered by size. One is less than or equal to two, two is less than or equal to three, etc. (Examples of total orders are probably easier to think of than partial orders, but, as we'll see later, partial orders are sufficient for the things we really care about having orders for, without needing to go so far as to require that the order be total. I.e., partial orders let us assume less to get the same basic outcome in what we'll be doing later.)

Posets are really important for a couple of reasons. The main one is that posets are the simplest interesting kind of category. Posets are interesting because they can have morphisms between different objects, and morphisms are where all the interesting stuff in a category lies, as we'll see in a couple of posts. However, because there is at most one morphism between any two objects in a poset, posets stay fairly tame and simple. If you're ever confused by a concept in category theory, it's pretty much always a good idea to apply it to a poset so that you can see the idea in its simplest interesting form.
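To make the poset picture concrete, here is a short sketch (my own illustration, not from the original post) of the natural numbers under ≤ treated as a category: reflexivity supplies the identity morphisms, transitivity supplies composition, and there is never more than one morphism between any two objects.

```python
# The natural numbers with <= as a category: a "morphism" a -> b is the
# (unique) arrow that exists iff a <= b. We represent it as the pair (a, b).

def hom(a, b):
    """Return the set of morphisms from a to b: one if a <= b, else none."""
    return [(a, b)] if a <= b else []

def compose(g, f):
    """Compose f: a -> b with g: b -> c, giving a -> c (transitivity of <=)."""
    (a, b1), (b2, c) = f, g
    assert b1 == b2, "morphisms must be composable"
    return (a, c)

# Identity morphisms exist because <= is reflexive.
assert hom(3, 3) == [(3, 3)]

# Composition exists because <= is transitive: 1 <= 2 and 2 <= 3 gives 1 <= 3.
f = hom(1, 2)[0]
g = hom(2, 3)[0]
assert compose(g, f) == (1, 3)

# The poset condition: never more than one arrow between two objects.
assert all(len(hom(a, b)) <= 1 for a in range(5) for b in range(5))

# No arrow back in the other direction, unless the objects are equal.
assert hom(2, 1) == []
```

The same encoding works for any preorder; the poset condition is exactly the `len(hom(a, b)) <= 1` check.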
We'll also see that posets are very useful for studying what it means to do things in the best possible way, which is what adjunction, arguably the key concept of category theory, is all about.

***

One of the nice things about categories is how easily their underlying structure is visualized. For example, take a poset like 1→2→3 and another poset like a→b→c. These are clearly very similar posets. (In fact, they're isomorphic.) Despite the superficial differences, we clearly have "basically the same thing" going on with these two categories. So let's get abstract (which is what category theory is all about). Look at this category: ∙→∙→∙. We've gotten rid of the labels on the objects and just replaced them with generic nodes. Using this framework, we can forget concerns about what specific category we're working with and just look at the general types of categories that can exist.

The most basic kind of category is the empty category, a category with no objects and therefore no morphisms. Obviously, we can't exactly depict this category visually; there's nothing to depict. The "next" kind of category is a single object with just the identity morphism. Leaving the identity morphism implicit, it looks like this: ∙

Growing more complex, we can add in a second object, but no morphisms between objects: just two objects and their identity morphisms.

∙ ∙

A category with just identity morphisms is called discrete; the objects aren't connected to each other, hence the term. (Note that all discrete categories are posets: there is clearly at most one morphism between any two objects.) Turning the above diagram into a poset (in this case a total order) is as easy as adding an arrow.

∙→∙

Here's something called a "commutative square." I'll add in labels to make it easier to follow.

A →f B
e↓    ↓h
X →g Y

The commutative square is a very important and famous shape in category theory. It commutes in the sense that the paths A →e X →g Y and A →f B →h Y are equivalent.
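As a concrete check, here is a small hypothetical example (mine, not the post's) of a commutative square in Set, with explicit functions on the integers standing in for f, e, h, and g:

```python
# A concrete commutative square in Set, with all four corners the integers:
#   f(a) = a + 1      (top:    A -> B)
#   e(a) = 2 * a      (left:   A -> X)
#   h(b) = 2 * b      (right:  B -> Y)
#   g(x) = x + 2      (bottom: X -> Y)
# The square commutes because both paths agree for every a:
#   g(e(a)) = 2a + 2   and   h(f(a)) = 2(a + 1) = 2a + 2.

f = lambda a: a + 1
e = lambda a: 2 * a
h = lambda b: 2 * b
g = lambda x: x + 2

# Both paths from A to Y give the same result, so we may treat "A to Y"
# as a single composite arrow.
assert all(g(e(a)) == h(f(a)) for a in range(-100, 100))
```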
That is to say, if you start at A, you can go right then down to get to Y, or down then right to get to Y, and it does not matter which path you choose. This is exactly analogous to how the knight moves in chess: typically (always? or does it not hold at the edge of the board?) there are two paths the knight can follow to get to a particular square from its current position, and it does not matter which path you choose. In fact, it matters so little that you can just jump straight to the square; you don't have to trace out the "L" that the knight does. Similarly, you can just jump straight from A to Y without actually going down one of the two paths: that's composition.

A lot of important claims in category theory can be boiled down to saying that a particular diagram, often a square, commutes. For example, the basic definition of a natural transformation has a commutative square at its center, and properly defining natural transformations was the original motivation for developing category theory! (We'll talk about natural transformations quite a bit later.)

From here we can branch out in lots of different ways in terms of adding morphisms and objects, but you should get the general idea. Whenever you can think of a model that can be boiled down to nodes and arrows, it's probably a category. You should know that categories like the two depicted here are totally valid: you can have an infinite number of morphisms between two objects going back and forth. In the end, as long as the morphisms compose (associatively), you can do pretty much anything you want. We'll get a clearer view of exactly how to talk about this once we've discussed sets and functions in the next post.

Discuss

### Long Term Future Fund application is closing this Friday (October 11th)

LessWrong.com News - October 10, 2019 - 03:44
Published on October 10, 2019 12:44 AM UTC

Just a reminder that the Long Term Future Fund application is closing in two days.
In past rounds we received 80% of our applications on the day of the deadline, but I am not sure whether that is contingent on us reminding everyone that the application is closing soon, so I will continue creating these reminder posts for now.

As a reminder, here is an excerpt from the application form describing what kinds of grants we are looking for:

We are particularly interested in small teams and individuals that are trying to get projects off the ground, or that need less money than existing grant-making institutions are likely to give out (i.e. less than ~$100k, but more than $10k). Here are a few examples of project types that we're open to funding an individual or group for (note that this list is not exhaustive):

+ To spend a few months (perhaps during the summer) to research an open problem in AI alignment or AI strategy and produce a few blog posts or videos on their ideas
+ To spend a few months building a web app with the potential to solve an operations bottleneck at x-risk organisations
+ To spend a few months up-skilling in a field to prepare for future work (e.g. microeconomics, functional programming, etc.)
+ To spend a year testing an idea that has the potential to be built into an org

We are also interested in applications for larger projects, or potential future long-term organizations, that require more than $100k of funding.

You can find more details on the kind of project we are likely to fund on the fund page (in particular I recommend reading our past recommendation writeups): https://app.effectivealtruism.org/funds/far-future

Discuss

### Minimization of prediction error as a foundation for human values in AI alignment

LessWrong.com News - October 9, 2019 - 21:23
Published on October 9, 2019 6:23 PM UTC

I've mentioned in two posts (and previously in several comments) that I'm excited about predictive coding: specifically, the idea that the human brain either is, or can be modeled as, a hierarchical system of (negative feedback) control systems that try to minimize error in predicting their inputs, with some strong (possibly un-updatable) prediction set points (priors). I'm excited because I believe this approach describes a wide range of human behavior, including subjective mental experiences, better than any other theory of how the mind works; it's compatible with many other theories of brain and mind; and it may give us an adequate way to ground human values precisely enough to be useful in AI alignment.

A predictive coding theory of human values

My general theory of how to ground human values in minimization of prediction error is simple and straightforward:

• Neurons form hierarchical control systems.
• Those control systems aim to minimize prediction error via negative feedback (homeostatic) loops.
• The positive signal of the control system occurs when prediction error is minimized; the negative signal of the control system occurs when prediction error is maximized.
• There is also a neutral signal when there is insufficient information to activate the positive or negative signal "circuitry".
• cf. feeling/sensation is when the mind makes a determination about sense data, and sensations are positive, negative, or neutral
• "Good", "bad", and "neutral" are then terms given to describe the experience of these positive, negative, and neutral control signals, respectively, as they move up the hierarchy.
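As a rough illustration of the bullets above, here is a minimal sketch of a single control unit; the function name, thresholds, and numbers are my own assumptions for illustration, not part of the author's theory:

```python
# One control unit from the theory sketched above: it holds a prediction
# set point, compares it to its input, and emits a good/bad/neutral signal
# depending on the size of the prediction error.

def unit_signal(set_point, observation, good_below=0.1, bad_above=1.0):
    """Classify prediction error into positive/negative/neutral signals."""
    if observation is None:
        return "neutral"  # insufficient information to activate either circuit
    error = abs(observation - set_point)
    if error <= good_below:
        return "good"     # prediction error minimized -> positive signal
    if error >= bad_above:
        return "bad"      # prediction error large -> negative signal
    return "neutral"

# Several units under one hierarchy can emit a literal mix of signals.
child_signals = [unit_signal(1.0, obs) for obs in (1.05, 3.0, None)]
assert child_signals == ["good", "bad", "neutral"]
```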

I've thought about this for a while so I have a fairly robust sense in my mind of how this works that allows me to verify it against a wide variety of situations, but I doubt I've conveyed that to you already. I think it will help if I give some examples of what this theory predicts happens in various situations that accounts for the behavior people observe and report in themselves and others.

• Mixed emotions/feelings are the result of a literal mix of different control systems under the same hierarchy receiving positive and negative signals as a result of producing less or more prediction error.
• Hard-to-predict people are perceived as creepy or, stated with less nuance, bad.
• Familiar things feel good by definition: they are easy to predict.
• Similarly, there's a feeling of loss (bad) when familiar things change.
• Mental illnesses result from failures of neurons to set good/bad thresholds appropriately, to update set points at an appropriate rate to match current rather than old circumstances, and from sensory input issues causing either prediction error or internally correct predictions that are poorly correlated with reality (broadly including issues related to sight, sound, smell, taste, and touch, as well as mental inputs from long-term memory, short-term memory, and other neurons).
• Desire and aversion are what it feels like to notice prediction error is high and for the brain to take actions it predicts will lower it either by something happening (seeing sensory input) or not happening (not seeing sensory input), respectively.
• Good and bad feel like natural categories because they are, but ones that are the result of a brain interacting with the world rather than features of the externally observed world.
• Etc.

Further exploration of these kinds of cases will help in verifying the theory, via whether or not adequate and straightforward applications of the theory can explain various phenomena (I view it as being in a similar epistemic state to evolutionary psychology, including the threat of misleading ourselves with just-so stories). It does to some extent hinge on questions I'm not situated to evaluate experimentally myself, especially whether or not the brain actually implements hierarchical control systems of the type described, but I'm willing to move forward because even if the brain is not literally made of hierarchical control systems, the theory appears to model what the brain does well enough that whatever theory replaces it will also have to be compatible with many of its predictions. Hence I think we can use it as a provisional grounding even as we keep an eye out for ways in which it may turn out to be an abstraction we have to reconsider in the light of future evidence, and work we do based on it will be amenable to translation to whatever new, more fundamental grounding we may discover.

Relation to AI alignment

So that's the theory. How does it relate to AI alignment?

First note that this theory is naturally a foundation of axiology, or the study of values, and by extension a foundation for the study of ethics, to the extent that ethics is about reasoning about how agents, each with their own (possibly identical) values, interact. This is relevant for reasons that I, and more recently Stuart Armstrong, have explored:

Stuart has been exploring one approach by grounding human values in an improvement on the abstraction used in inverse reinforcement learning, which I think of as a behavioral-economics theory of human values. My main objection to this approach is that it is behaviorist: it is grounded in what other agents can observe of external human behavior, and it has to infer the internal states of agents across a large inferential gap, true values being a kind of hidden, encapsulated variable an agent learns about via observed behavior. To be fair, this has proven an extremely useful approach over the past 100 years or so in a variety of fields, but it suffers an epistemic problem in that it requires lots of inference to determine values, and I believe this makes it a poor choice given the magnitude of the Goodharting effects we expect to be at risk from at superintelligence levels of optimization.

In comparison, I view a predictive-coding-like theory of human values as offering a much better method of grounding human preferences. It is

• parsimonious: the behavioral-economics approach allows comparatively complicated value specifications and requires many modifications to reflect the wide variety of observed human behavior, whereas this theory lets values be specified in simple terms that become complex by recursive application of the same basic mechanism;
• low on inference: if the theory is totally right, only the inference involved in measuring neuron activity creates room for epistemic error within the model;
• direct: true values/internal state are assessed as directly as possible rather than inferred from behavior;
• broad: it works for both rational and non-rational agents without modification;
• flexible: even if the control-theory model is wrong, the general "Bayesian brain" approach is probably right enough for us to make useful progress over what is possible with a behaviorist approach, such that we could translate work that assumes predictive coding to another, better model.

Thus I am quite excited about the possibility that a predictive coding approach may allow us to ground human values precisely enough to enable successfully aligning AI with them.

This is a first attempt to explain what has been my "big idea" for the last year or so now that it has finally come together enough in my head that I'm confident presenting it, so I very much welcome feedback, questions, and comments that may help us move towards a more complete evaluation and exploration of this idea.

Discuss

### Rational Dojo: Concreteness

Events at Kocherga - October 9, 2019 - 19:30
Concreteness is the sister of empiricism, and so it is very valuable for getting to know the world better. At this dojo we will practice, practice, and practice again at staying anchored to reality through our words and thoughts.

### Expected Value - Millionaires Math

LessWrong.com News - October 9, 2019 - 17:50
Published on October 9, 2019 2:50 PM UTC

I credit reading this post years ago as the first step toward making me instinctively THINK in terms of expected value when considering opportunities. I consider it to be among the best "Why" resources for thinking in expected value, and wish it were better known among the LW crowd.

Discuss

### On Collusion - Vitalik Buterin

LessWrong.com News - October 9, 2019 - 17:45
Published on October 9, 2019 2:45 PM UTC

...if there is a situation with some fixed pool of resources and some currently established mechanism for distributing those resources, and it's unavoidably possible for 51% of the participants to conspire to seize control of the resources, no matter what the current configuration is there is always some conspiracy that can emerge that would be profitable for the participants... This fact, the instability of majority games under cooperative game theory, is arguably highly underrated as a simplified general mathematical model of why there may well be no "end of history" in politics and no system that proves fully satisfactory; I personally believe it's much more useful than the more famous Arrow's theorem, for example.
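The quoted instability result can be illustrated with a minimal three-player "divide the pool" sketch (my own toy example, not Vitalik's code): for any allocation of a positive pool, some two-player majority can propose a reallocation that strictly benefits both of its members.

```python
# Instability of 3-player majority division: for ANY split of a fixed pool,
# some 2-of-3 coalition can propose a new split that strictly benefits both
# of its members, so no allocation is stable.

from itertools import combinations

POOL = (34, 33, 33)  # some split of a fixed pool among three players

def profitable_coalition(split):
    """Find a 2-of-3 coalition and a new split strictly better for both members."""
    players = range(len(split))
    for pair in combinations(players, 2):
        outsider = next(p for p in players if p not in pair)
        if split[outsider] > 0:
            new = list(split)
            gain = new[outsider]
            new[outsider] = 0            # the majority expropriates the outsider
            new[pair[0]] += gain / 2     # and shares the proceeds
            new[pair[1]] += gain / 2
            return pair, new
    return None

# Since the pool is positive, some player holds > 0, so the other two can
# always form such a coalition: the instability the quote describes.
coalition, new_split = profitable_coalition(POOL)
assert all(new_split[p] > POOL[p] for p in coalition)
```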

I've found this post quite useful in thinking about mechanism design and problems of designing stable systems.

Discuss

### Regularly Scheduled: Day-of Reminders

LessWrong.com News - October 9, 2019 - 14:00
Published on October 9, 2019 11:00 AM UTC

Two years ago I wrote Regularly Scheduled, a tool to make it easier for people to handle repeated events:

I mostly use it for a recurring dinner we host with some friends from college. It works well, and has been running with basically no maintenance. After twice now forgetting I was hosting dinner in the few days between RSVPing "yes" and people showing up at my house, however, I realized it needed to be sending reminder emails day-of. I added these today, and they look like:

They only go to people who RSVP'd yes; they're a reminder to come, not a reminder to RSVP.

If you have an event you'd like to set up with it, feel free! Let me know if you run into any issues and I'll try to get the code back into my head.

Discuss