# LessWrong.com News

A community blog devoted to refining the art of rationality
Updated: 3 minutes 26 seconds ago

### Bay Area Winter Solstice Volunteering

Published on September 1, 2019 2:05 AM UTC

This year's Bay Area Winter Solstice will be held on Sunday, December 15 in the Chabot Planetarium. The Solstice is an annual gathering where we come together to celebrate humanity and the things that matter to us.

We're now soliciting volunteers from the Rationalist and Effective Altruist communities for the various volunteering roles that the Solstice will require! Consider giving a speech, leading a singalong, or joining the Bay Area Bayesian Choir for its autumn rehearsals. We're also looking for intrepid volunteers who might like to be in charge of A/V, food, programs, and other aspects of event logistics.

Apply here: https://tinyurl.com/wintersolstice2019contributors

New this year, we will also be having an unconference the Saturday immediately before Solstice, in the style and flavor of New York's Megameetup. Tickets will be sold separately. Stay tuned for details.

Cheers,
Nat, Claire, & Chelsea


### Arguing Absolute Velocities

Published on September 1, 2019 2:11 AM UTC

CW: Analogies

You and a friend are arguing over a physics question that the two of you got different answers to. The question is as follows: "You are on a train speeding seventy miles per hour east, while you run 5 miles per hour west. How fast are you moving?"

Well, the answer is obvious, you think to yourself. The question specifically says that I'm running 5 miles per hour west, so I must be moving 5 miles per hour west. The extra information about the train is just there to confuse me.

Your friend is just as convinced: If the train is moving 70 miles per hour east, and I'm running 5 miles per hour west, then my speed must be 70 minus 5, or 65 miles per hour east, he reasons. I guess this guy must just be one of the slower kids in the school. The next five minutes are spent in argument because you're telepathic and can hear your friend's mocking thoughts.

Then, because this was actually one of those dreams where you're back in high school and don't know any of the answers on the test, Galileo materializes in front of you.

"Forsake these notions of absolute velocity!" he booms. "Woe unto you, who have so quickly forgotten my doctrine of Galilean relativity! You are traveling 5 miles per hour west relative to the train, and 65 miles per hour east relative to the train tracks. You’re also zipping at like a bajillion miles per hour around the sun."

You are both deeply embarrassed, and have learned your lesson. You will always be careful to remember that a single number is usually meaningless: what matters is the relationship between numbers. To talk about velocities, you need a reference point, something whose velocity has been fixed by agreement, at which point you can start to talk about relative velocities.
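
The frame-dependence Galileo is preaching is just a subtraction once the frame is explicit. Here is a toy sketch (mine, not from the post), with east as positive:

```python
def relative_velocity(v, v_frame):
    # a velocity only means something relative to a frame:
    # subtract the frame's own velocity (Galilean transformation)
    return v - v_frame

v_train = 70            # train, relative to the tracks (east positive)
v_you = v_train + (-5)  # you run 5 mph west on the train, so 65 relative to the tracks

print(relative_velocity(v_you, v_train))  # -5: 5 mph west, relative to the train
print(relative_velocity(v_you, 0))        # 65: 65 mph east, relative to the tracks
```

Both disputants were right, each in an unstated frame; the argument only dissolves once the frame is passed in explicitly.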

Galileo smiles at you both as he fades away into the abstract concept of experimentation and hands you Old Spice cleaning products because this was actually a really weird commercial for soap.

It’s temptingly easy to argue absolute velocities.

Take Scott’s SSC post, "How Bad Are Things?"

This is also why I am wary whenever people start boasting about how much better we’re doing than back in the bad old days. That precise statement seems to in fact be true. [… But] I don’t think we have any idea how many people do or don’t have it pretty good.

No matter how obvious the right bar seems to you, where it should go is debatable. You could put the bar as low as bare subsistence, since people who are alive and have not killed themselves presumably prefer life to death, implying their life is valuable to them and therefore a net good. You could set the bar at the average person’s quality of life. You could pick some "Acceptable" quality of living based on what you think constitutes a bare minimum. You could stop making me list examples since I'm running out of them. There are infinitely many lines you can pick to divide good and bad, and all are at least somewhat arbitrary.

Or maybe you’re arguing about whether the world is fundamentally a good place or not. So let’s imagine that there is a moralometer, way out past Alpha Centauri, which can perfectly measure the amount of good in the universe. Having discovered this moralometer, you can now read off its precise output: The amount of good in the world is exactly equal to 1,276,642.

You might be a bit disappointed to hear that this number has changed exactly zero opinions, because without having anything to compare it to, this number doesn't mean anything.[2]

This sounds pretty familiar, doesn’t it? We have two free variables, two questions we are trying to answer at the same time, but the answer to each depends on the other. "Is this possible world good" fails to be meaningful even if we completely agree on both ethics and on the state of the world, because we haven’t set a reference point for what’s “Good.”

So the question of whether a possible world is good might be a meaningless one parsed literally. But nobody parses the question literally. Nobody tries to calculate the total amount of good before comparing it to a predetermined standard to see if it's above or below it.

No, the truth is much worse. As soon as you’re discussing whether something is “Good” or “Bad” instead of “Better” or “Worse,” you’ve already failed. As soon as you start to argue about whether a policy, a choice, or anything else would be “Good,” you’ve left a free variable in the equations, and that way lies confusion.

Your subconscious is far better at tricking you than you give it credit for. Give it a single opening, leave a single definition unspecified, and it will immediately grasp for any positive affect that it can attach to help you win your argument; your subconscious is interested in winning arguments, even if they’re meaningless, not in being right. Without you ever noticing, your brain will seize on a reference point that lets you call your position “Good,” and anyone who dislikes your position will just as easily find a reference point that will deny you this victory. After all, what word has more positive affect than the word “Good” itself?

Just since I started writing this post, I’ve noticed at least half a dozen conversations falling into this failure mode. The most recent was a discussion on the US healthcare system where about thirty minutes in I realized with extreme embarrassment that the other person wasn’t actually disagreeing with me on whether some candidate’s particular healthcare plan was an improvement over the current US healthcare system[3] -- they just thought that this plan was worse than some other candidate’s healthcare plan.

There are a couple of suggestions I can make to help avoid this mistake.

The first is: beware of ascribing to anything an inherent property of “Good” or "Bad." As soon as you hear these words come up in a discussion, start worrying. You will notice that the CBO score on a bill does not include a section saying "By the way, this law contains an essence of goodness." (Or, if it does, you should start worrying for completely different reasons.) Discussions of "Good" and "Bad" are a common, albeit not perfectly reliable, signal that your argument has gone off track. The same goes for any word that implies making an absolute judgment (especially one with emotional affect) rather than a relative one.

Usually, the words you're looking for are “Better” and “Worse.” “Democratic candidate #247’s healthcare policy is better than #233’s” is far less likely to fall prey to this problem than “Democratic candidate #247’s healthcare policy is good.”[4] None of this is to say that you can never use the words good/bad (everyone knows what you mean when you say “Killing is bad”), but the situations where you disagree with someone are very rarely the same situations where you can afford even an ounce of vagueness.

At some point in the healthcare discussion laid out above, someone mentioned something along the lines of “Yeah, but better than the American healthcare system is a low bar to clear,” or “Yeah, of course that plan is better than the American healthcare system as it is right now, because it’s terrible.” This is the smoking gun, and how I realized I was making this mistake. As soon as you hear this, you know the conversation has fallen into this failure mode, or is about to do so unless you immediately wrench it off of this track.

Remember this: There. Are. No. Objectively. Low. Or. High. Bars. To figure out if a bar is high or low, you need another bar; and if you ask the first bar, it’s the other one that’s the problem. A bar can be bad at conveying information, e.g. “Passing this law would be better than starting a genocide,” but it can't be wrong.

Moving the reference point is the last refuge of the scoundrel; the temptation to do so is strongest when you are losing, when the winds of evidence are blowing against you. Recognizing this temptation, and knowing when it is strongest, is the first step towards resisting it.

Remember that even if you think the reference point you’re using is probably obvious, it is not. People will pick terrible reference points even when there's a perfectly good reference point that the rest of the world uses; see America's insistence that water must freeze at exactly 32 degrees. In my healthcare discussion, I thought that the reference point “The way things are right now” was obvious. It was not.

You have to be damn sure that the reference point is obvious before you use it. If no reference point for good/bad has been declared, assume that the reference point is "this is preferable to inaction/no change"; but anything less clear than “Murder is bad” (where the reference point is clearly not murdering people) should probably be avoided. Most importantly, if someone disagrees with a claim you've made along the lines of “X is good/bad,” proceed to set an explicit reference point.

After all, arguing absolute velocities is pretty bad.

Notes:

1. I do feel the need to point out that Scott’s post is still pretty good overall as long as you recognize the implied reference point Scott’s using is “How good the average person thinks other people’s lives are,” and the intention of the post is to show that the actual level is below this. It’s just that other people are using other bars, and as phrased this is easy to misinterpret.
2. If only I had some nice, convenient reference point I could use, like how good the world has been in the past! I mean, this is my first thought for a reference point to use, and so questions about whether the world is good or not are turned into a question of whether the world is better than it once was. I suspect lots of the people talking to Scott made the same subconscious leap as I would.
3. Let’s admit it though, a monkey banging on your head with a wrench whenever you felt sick would be an improvement over the current US healthcare system.
4. This joke will not age well, but goddammit, I’m leaving it in anyways. Not like anyone’s gonna be reading this more than a year from now. Any potential time-travellers should know that 20 candidates qualified for the first Democratic debate.


### Logical Counterfactuals and Proposition graphs, Part 2

Published on August 31, 2019 8:58 PM UTC

In this post we extend the notion of propositional graphs to cover any first order theory.

Consider some arbitrary first order theory, for example a theory of integers.

3 and 7 are of type N, and < is a function from N²→B. So 3<7 is a valid Boolean that can be treated just like p with respect to the propositional transformations, i.e. (3<7)∧p≡p∧(3<7).

We can also add S:N→N the successor function. Here are some more equivalence rules you might want.

Sα<Sβ≡α<β

α<0≡⊥

0<Sα≡⊤

¬(α<β)≡β<Sα

In general, there is a finite list of types. Call them X1, X2, ..., Xn.

All functions have a fixed number of inputs, each with a fixed type, and output something of a fixed type. In our parse tree from before, this means that our edges have one of finitely many different colors, and that each symbol node has a fixed pattern of colors for its inputs and outputs. + is a node of type N and both its children are of type N. < is a node of type B, and both its children are of type N. ¬ is of type B and has one child of type B. Where the output type is not clear, it may be indicated by a subscript, e.g. <B(3N,5N).
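
The typing discipline above is mechanical. A minimal sketch (my own hypothetical encoding, not from the post): each symbol carries a signature of argument types and a result type, and a well-typed parse tree is one where every edge's colors match.

```python
# hypothetical signature table: symbol -> (argument types, result type)
SIGNATURES = {
    '+': (('N', 'N'), 'N'),
    '<': (('N', 'N'), 'B'),
    '¬': (('B',), 'B'),
    '3': ((), 'N'),
    '5': ((), 'N'),
}

def type_of(expr):
    # expr is a nested tuple ('symbol', child, child, ...);
    # check each edge's color and return the node's output type
    head, children = expr[0], expr[1:]
    arg_types, result = SIGNATURES[head]
    assert len(children) == len(arg_types), "wrong arity"
    for child, expected in zip(children, arg_types):
        assert type_of(child) == expected, "ill-typed edge"
    return result

print(type_of(('<', ('3',), ('5',))))  # B
```

So <B(3N,5N) type-checks to B, and feeding a Boolean where an N edge is expected fails the assertion, mirroring the fixed edge-coloring described above.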

Before we can get to predicate logic, we will introduce a set of implicit variable symbols VX for each type X in use. V = VX1 ∪ ... ∪ VXn is the set of all implicit variables. An implicit variable is an abstract symbol like ⊥ or 3. Specific implicit variables will be represented by capital Roman letters (A, B, C). Substitution rules can also contain references to arbitrary members of some VX. E.g.

Sub(ΓN,δN,+N(αN,βN))≡+N(Sub(ΓN,δN,αN),Sub(ΓN,δN,βN))

Sub(ΓN,δN,ΓN)≡δN

Here Sub is the substitution function; αN, βN, δN represent any expressions of the appropriate type. The ΓN represents any single symbol from the set VN.

Applying these equivalences tells us that

Sub(X,5,+(3,X))≡+(Sub(X,5,3),Sub(X,5,X))≡+(Sub(X,5,3),5)

If 3 is a shorthand for S(S(S(0))), then the rules Sub(ΓN,δN,S(αN))≡S(Sub(ΓN,δN,αN)) and Sub(ΓN,δN,0N)≡0N reduce Sub(X,5,3) to 3, allowing deduction all the way from +(Sub(X,5,3),5) to +(3,5), which is 8.
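
This reduction is easy to mechanize. A minimal sketch (my own encoding, not from the post): expressions are nested tuples whose head is the symbol, and the two substitution rules are applied recursively.

```python
def sub(var, val, expr):
    # Sub(Γ, δ, Γ) ≡ δ : the variable itself is replaced
    if expr == var:
        return val
    # Sub distributes over any function symbol, e.g.
    # Sub(Γ, δ, +(α, β)) ≡ +(Sub(Γ, δ, α), Sub(Γ, δ, β))
    if isinstance(expr, tuple):
        return (expr[0],) + tuple(sub(var, val, arg) for arg in expr[1:])
    # any other atom (0, numerals, other variables) is left unchanged
    return expr

# Sub(X, 5, +(3, X)) reduces to +(3, 5)
print(sub('X', '5', ('+', '3', 'X')))  # ('+', '3', '5')
```

Since the rules only ever look at the head symbol, the same two clauses cover S, 0, and any other function symbol added later.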

A general first order theory makes use of = and ∀ as well as [/] for substitution. These do not need any special properties, and can be defined by the normal substitution rules. Technically there is a collection of Sub's, one for each pair of types.

We have 4 axioms, which can be turned into rules by making them all equivalent to ⊤.

∀B(ΓN,=B(ΓN,ΓN))≡⊤

∀B(ΓN,∀(ΔN,⟹B(=B(αN,βN),=B(ϕB,Sub(ΓN,ΔN,ϕB)))))≡⊤

⟹B(∀B(ΓN,ϕB),Sub(ΓN,αN,ϕB))≡⊤

⟹B(∀N(ΓN,⟹B(ϕB,ψB)),⟹B(∀N(ΓN,ϕB),∀N(ΓN,ψB)))≡⊤

And a rule that allows generalization.

∀N(ΓN,⊤)≡⊤

I believe that this set of substitution rules, or something like it, can produce predicate logic. I also believe that more substitution rules and symbols can be added to make any first order theory. The intuition is that you can take any proof in any first order theory and convert it into a series of substitution rules by conjoining in any axioms and applying modus ponens. Generalization can be done by turning ⊤ into ∀(X,⊤) and then using substitutions not dependent on X to turn it into ∀(X,p).

Theorem

For any set of symbols with appropriate type signatures and any recursively enumerable equivalence relation H (with the property that a≡b⟹f(a)≡f(b)), there exists a finite set of extra symbols and a finite set of substitution rules such that these equivalences hold.

Proof Outline

Introduce the symbols 0, 1, end so that binary strings can be stored as 0(1(1(0(end)))).

Give them some new type Y, and add a load of type converters, symbols that take in an object of type Y, and output some other type.

Introduce + for concatenation.

+(0(αY),βY)≡0(+(αY,βY))

And the same for 1

+(end,αY)≡αY

P(α,0(β))≡0(P(0(α),β))

P(α,1(β))≡0(P(1(α),β))

P(α,end)≡1(α)

Then if |α|=n, P(end,α)≡0^n 1 reverse(α).

This means that J(α,β)≡+(P(end,α),P(end,β)) can unambiguously express pairs of bitstrings as a single bitstring.
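
As a sanity check on the prefix trick, here is a small sketch (my own notation: bitstrings as Python strings rather than 0(1(...end)) terms) of P(end, ·) and the pairing J.

```python
def prefix_code(bits):
    # P(end, α) ≡ 0^n 1 reverse(α): emit one 0 per bit, then a 1,
    # then the bits in reverse order -- a self-delimiting code
    out, rev = "", ""
    for b in bits:
        out += "0"
        rev = b + rev
    return out + "1" + rev

def pair(a, b):
    # J(α, β) ≡ +(P(end, α), P(end, β)); the 0^n 1 length prefix makes the
    # concatenation unambiguous, so (α, β) can be recovered from the output
    return prefix_code(a) + prefix_code(b)

print(prefix_code("011"))  # 0001110
print(pair("01", "1"))     # 00110011
```

Because a decoder can count the leading zeros to find each component's length, pair("01", "1") and pair("0", "11") come out different, which is exactly the unambiguity the construction needs.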

Arbitrary syntax trees can be encoded like ∗(ConvN(αY),ConvN(βY))≡ConvN(J(010end,J(αY,βY)))

Where 010 is the unique bitstring representing "*", an arbitrary symbol.

ConvN(αY)≡rightN(TM1(left(J(αY,βY))))

As βY can be anything, it could be an encoding of whatever you want. Then we let left(0(α))≡0(left(α)), and the same for 1, to run left to the far end of the bitstring. Then left(end)≡n(left(end)), where n is a null symbol for tape that doesn't store data, so we can extend the tape as needed. Using rules like TM1(n(α))≡1(TM3(α)) we can run any Turing machine with one symbol per state. Then finally we have a rule like TM4(0(α))≡success(TM4(0(α))). Let success commute with 0 and 1 so it can go up to the top. Let right(success(α))≡right(success(n(α))).

Then you can unwind all the computations, and rightN(success(TM1(left(J(αY,βY)))))≡ConvN(βY) means that you can take the bit stream that you computed to be equal, and decode it back into its symbols.

Lemma

Any possible computably checkable formal proof system can be converted into an equivalence graph.

Post Ending.

Next time we will see when different proof systems are provably the same. I know the formatting is rough and the explanations aren't always as clear as they could be. Tell me about any unusually opaque parts. If these ideas turn out to be important, someone will need to rework this. I know the axioms used are not the simplest possible, and the proofs are not fully formalized.


### Cambridge LW/SSC Meetup

Published on August 31, 2019 5:10 PM UTC

Oops, I messed up and forgot to post our monthly event. Anyway, it's tomorrow.

Apartment 2.


### Peter Thiel/Eric Weinstein Transcript on Growth, Violence, and Stories

Published on August 31, 2019 2:44 AM UTC

I’ve recently been through a spate of listening to a massive number of podcasts. When I listened to the first ninety minutes of a three-hour conversation between Eric Weinstein and Peter Thiel, I was surprised to hear echoed a lot of ideas we've discussed on LessWrong about stagnation in academia and scientific discourse more generally (e.g. here and here). I realised there was no good online transcript for people to read and discuss, so in my off-work hours, as a hopefully useful public service, I thought I’d try my hand at making one, using a format inspired by other podcasts that do the same (in this case, the often brilliant 80,000 Hours Podcast).

I've split it into five posts for readability. (Further notes on editing are in this comment.) I'll post them every couple of days for the next week or so. The posts are: this introduction, Stories About Academia, Stories About Education, Political Violence and Distraction Theories, and Stories About Progress.

I've finished listening to the podcast, and found it to contain a very interesting alternative worldview, that I've since come back to many times in conversation.

Some of the topics discussed
• What stories institutions tell when growth has stopped.
• The argument that the physics community is the most important community on Earth.
• Why we’re losing polymaths.
• Why it can be bad to train too many people.
• The relationship between automation and redistribution.
• The theory that a great deal of political discourse is a distraction from not having solutions to the lack of innovation/growth.
• How the theories of Rene Girard are an antidote to strong libertarian impulses.
• The importance of a society not disillusioned about growth, but excited about growth.
• Whether scientific progress can be a motivating story for society.


Interview Introduction

Eric Weinstein: Hello and welcome to The Portal's first episode. Today, I'll be sitting down with Peter Thiel. Now, if you've been following me on Twitter, or perhaps as a podcast guest on other podcasts, you may know that I work for Thiel Capital. But one of the things that people ask me most frequently is, given that you are so different than your boss and friend Peter Thiel, how is it the two of you get along? What is it that you talk about? Where do you agree and disagree? Now, oddly, Peter and I both do a fair amount of public speaking. But I don't believe that we've ever appeared in public together and very few people have heard our conversations. What's more, he almost never mentions me, and I almost never mention him in our public lives.

Eric Weinstein: So hopefully this podcast will give some indication of what a conversation is like with somebody who I find one of the most interesting and influential teachers of our time; somebody who has influenced all sorts of people in Silicon Valley involved with technology and inventing tomorrow, and who is often not seen accurately, in my opinion, by the commentariat and the regular people who opine as pundits in the world of science and technology.

Eric Weinstein: I hope you'll find Peter as fascinating as I do. Without further ado, this is the first episode of The Portal. Thanks for joining us.

Personal Backgrounds

Eric Weinstein: Hello and welcome. You found The Portal. I'm your host, Eric Weinstein, and I think this is our first interview show to debut, and I'm here with my good friend and employer, Mr. Peter Thiel. Peter, welcome to The Portal.

Peter Thiel: Well, Eric, thanks for having me on your program.

Eric Weinstein: No, this is a great honor. One of the things I think is kind of odd is that lots of people know that I work for you and many people know that we're friends, but even though we both do a fair amount of public speaking, I don't think we've ever appeared any place in public together. Is that your recollection as well?

Peter Thiel: I can't think of a single occasion. So this proves we're not the same person.

Eric Weinstein: We're not the same person, yeah. You are not my alter ego. But on that front, I think it is kind of an odd thing for me. I mean, we met each other, I think when I was in my late 40s, and if you'd ever told me that the person who would be most likely to complete my thoughts accurately would be you, I never would have believed it, never having met you. We have somewhat opposite politics. We have very different life histories. How do you think it is that we've come to share such a lot of thinking? I mean, I have to say that a lot of my ideas are cross pollinated with yours. So you occur in a lot of my standard riffs. How do you think it is that we came to different conclusions, but share so much of a body of thought?

Peter Thiel: So I'm always hard pressed to answer that, since the conclusions all seem correct to me. And it's always mysterious why it feels like we're the outliers and we're among the very few people that reach some of these conclusions about the relative stagnation in science and technology, the ways in which this is deranging our culture, our politics, our society, and then how we need to try to find some bold ways out; some bold ways to find a new portal to a different world.

Peter Thiel: And I think there are different ways the two of us came at this. I feel like you got to some of these perspectives at a very early point, sort of the mid-1980s, that something was incredibly off. I probably got there in the early to mid-90s, when I was at this tracked law firm job in New York City. And somehow everything felt like it was more like a Ponzi scheme. It wasn't really going towards the future everyone had promised you in the elite undergraduate and law school education I had gone through.

Peter Thiel: And so, yeah. So I think there was sort of a point, we got to these insights. But it's still striking how out of sync they feel with so much of our society, even in 2019.

Eric Weinstein: Yeah, I mean, that's a very striking thing for me. And it's also something that's frustrated me. Sometimes, when I look forward to you being interviewed, it often feels to me that so much time is spent on the initial question, "Are we somewhat stagnating in science and technology," that rather than assuming that as a conclusion - which I think we can make a pretty convincing argument that there has been a lot of stagnation - it seems to me that a lot of these conversations hang at an earlier level. And so one of the things that I was hoping to do in this, which is, I think, your second long form podcast (you did Dave Rubin's show some time ago), is to sort of presuppose some of the basics that people will be familiar with who've been following either one of us, or both of us, and to get to the part of the conversation that I think never gets explained and discussed, because people are always so hung up at the initial frame issue.

What is the dominant narrative?

Eric Weinstein: So with your indulgence, let's talk a little bit about what you and I see, and any differences that we might have, about this period of time that we find ourselves in, in 2019. What would you say is the dominant narrative before we get to what might be our shared counter narrative?

Peter Thiel: Well, you know, the dominant narrative is probably fraying and has been fraying for some time, but it is something like we're in a world of generally fast scientific and technological progress. Things are getting better all the time. There's some imbalances that maybe need to be smoothed out. There's some corner case problems. Maybe there's some dystopian risks, because the technology is so fast and so scary that it might be destructive. But it's a generally accelerationist story. And then there's some sort of micro-adjustments within that, that one would have to make.

Peter Thiel: There are all sorts of ways that I think it's fraying. I think 2008 was a big watershed moment, but that's still what has largely been holding together. And then there are sort of different institutions. You can look at the universities, where there's a tracked thing. It's costing more every year, but it's still worth it. It's still an investment in the future. And this was probably already questionable in the 1980s, 1990s. College debt in the United States in 2000 was $300 billion. Now it's around $1.6 trillion, $1.7 trillion. And so there's a way in which the story was shaky 20 years ago and today is much shakier. It's still sort of holding together somehow.

Eric Weinstein: So in this story, in essence, the great dream is that your children will become educated, they will receive a college education, they will find careers. And in this bright and dynamic society, they can look forward to a future that is brighter than the future that previous generations looked forward to.

Peter Thiel: Yeah, so I think ... Now again, I think people are hesitant to actually articulate it quite that way, because that already sounds not quite true to-

Eric Weinstein: Well, to your point, they've been adding epicycles for some time.

Peter Thiel: And so it's a ... Maybe it's a bright future, but it's really different from the parents', because we can't quite know. And they have all these new devices. They have an iPhone and they can text really fast on the iPhone. We can't even understand what the younger generation is doing. So maybe it's better on ... But "better" has sort of an objective scale. Maybe it's just different and unmeasurable, but better in sort of an unmeasurable way.

Peter Thiel: So there sort of are ways it's gotten modified, but that would still be a very powerfully intact narrative. And then there are sort of straightforward things we can be doing. The system's basically working, and it's basically going to continue to work.
And there's sort of a global version of this. There's a US version. There's an upper middle class US version. There's a lot of different variations on this.

Eric Weinstein: So it always strikes me that one of the things that you do very well is that you're willing - and you know, you're famously a chess player - you're willing to make certain sacrifices in order to advance a point. And in this case, I think you and I would both agree that there are certain areas that have continued to follow the growth story more than the general economy, and that you have to kind of give those stories their due before you get to see this new picture. Where do you think the future has been relatively more bright in recent years?

Peter Thiel: Well, again, I sort of date this era of relative stagnation and slowed progress all the way back to the 1970s, so I think it's been close to half a century that we've been in this era of seriously slowed progress. Obviously, a very big exception to this has been the world of bits: computers, internet, mobile internet, software. And so Silicon Valley has somehow been this dramatic exception, whereas the world of atoms has been much slower for something like 50 years.

Peter Thiel: And you know, when I was an undergraduate at Stanford in the late 1980s, almost all engineering disciplines, in retrospect, were really bad fields to go into. People already knew, at the time, you shouldn't go into nuclear engineering. AeroAstro was a bad idea. But, you know, chemical engineering, mechanical engineering, all these things were bad fields. Computer science would've been a very good field to go into. And that's been sort of an area where there's been tremendous growth.

Peter Thiel: So that's sort of the signature one that I would cite. There are questions about how healthy it is, at this point, even within that field. So, you know, the iPhone is now looking the same as it did seven, eight years ago. So that's the iconic invention. Not quite so sure.
And so there's definitely been sort of a change in the tone even within Silicon Valley in the last five, six years on this. But that had been one that was very, very decoupled.

Peter Thiel: The decoupling itself had some odd effects, where if you have sort of a narrow cone of progress around this world of bits, then the people who are in those parts of the economy that have more to do with atoms will feel like they're being left behind. And so there was something, there was something about the tech narrative that had this very ... didn't necessarily feel inclusive, didn't feel like everybody was getting ahead. And one of the ways I've described it is that we live in a world where we've been working on the Star Trek computer in Silicon Valley, but we don't have anything else from Star Trek. We don't have the warp drive, we don't have the transporter, we can't re-engineer matter in sort of this cornucopian world where there is no scarcity. And how good is a society where you have a well-functioning Star Trek computer, but nothing else from Star Trek?

Eric Weinstein: Yeah, that's incredibly juicy. I mean, one of the ways that I attempted to encode something, which, in part, I got from you, was to say, "Of course your iPhone is amazing. It's all that's left of your once limitless future," because it's the collision of the communications and the semiconductor revolutions that did seem to continue. And I date the sort of break in the economy to something like 1972, '73, '74. It's really quite sharp in my mind. Is it that way in yours?

Peter Thiel: Yes. I'd say in 1968, people still ... The narrative of progress seemed intact. By '73, it was somehow over. So somewhere in that five-year period. The 1969 version was we landed on the moon in July of 1969 and, you know, Woodstock starts three weeks later. And maybe that's one way you could describe the cultural shift. You can describe it in terms of the oil shocks in 1973 at the back end.
With the benefit of hindsight, there were things that were already fraying by the late 1960s, so the environment was getting dramatically worse.

Eric Weinstein: Right.

Peter Thiel: You have The Graduate movie - "you should go into plastics." I think that was 1968 or '69. So there were sort of things where the story was fraying, but I think it was still broadly intact in 1968, and somehow seemed very off by '73.

Next post on Sunday will be Stories About Academia.

### What does social psychology tell us about getting consistent evaluations out of groups?

Published on August 31, 2019 1:11 AM UTC

Suppose we have different groups repeatedly making the same kind of decision (e.g. awards in a civil lawsuit). We would like their decisions to be consistent and predictable, and are even willing to trade some accuracy to do so. Social psychology has a lot of predictions about things that will influence a group's answer, and thus could affect consistency (e.g. a confident person speaking first will sway the group enormously), assuming that finding survived the replication crisis, which I haven't checked. What other problems and solutions does post-crisis social psychology suggest when trying to get consistent evaluations out of groups?

### Look at the Shape of Your Utility Distribution

Published on August 30, 2019 11:27 PM UTC

If we can talk about "expected utility", then "utility" has to be a random variable sampled from some distribution. You can then ask questions like "what is the probability that U is less than 10?" or, in other words, "how many outcomes have U<10, and how likely are they?". With those questions we can draw a probability density function for our utility distribution.

Here is a sketch of one such distribution. I'm going to assume that it is normal or common and then compare others to it. The X-axis is utility ("bad" on the left, "good" on the right) and the Y-axis is probability (the bottom line is 0).
The peak in the middle says that some values of U are common for many outcomes. The tails on either side say that the outcomes with very large or very low values of U are unlikely.

Here is another sketch. The left tail is heavy, meaning that "bad" outcomes are more likely than in the common distribution. An agent with this U distribution would exhibit caution or inaction, because the risk/reward ratio of most actions would be awful. The agent would say "what if I fail?". However, if precautions are taken, this agent would exhibit much more normal reward-seeking behavior. This state seems similar to human fear or anxiety.

Here is a similar distribution, but now the right tail is very thin, meaning that "good" outcomes are near-impossible. This agent would exhibit caution and inaction, just like the previous one. However, this one would rather say "what's the point?". If nothing good is going to happen, then its best option is to stop wasting resources. At best the agent could try to prepare in case some misfortune may randomly befall it. This state seems similar to human depression.

Finally, a distribution with a thicker right tail, meaning that "good" outcomes are likely. This agent should be very active, since many actions will have great risk/reward. This could make the agent take unnecessary risks and waste resources. However, if the agent has well-calibrated beliefs, this can lead to great productivity. In humans, similar states range from optimism to mania.

I believe that there is a direct relation between human emotions or mental states and the functions U and P. For example, it's hard to separate "I'm afraid" from "I believe that I'm in danger". It's not clear which comes first and which causes the other. Maybe they are the same thing? Also consider treatments such as CBT and exposure therapy, which affect mental states by changing people's beliefs. If feelings and beliefs were independent, these should not work as well as they do.
proposition: Some human emotions can be interpreted as labels for categories of utility distributions. If this is true, then analyzing utility functions directly may be more informative than working with the labels. Then, instead of asking yourself "what do I feel", you should ask yourself "what do I believe" and "what do I value".

corollary: Rationality often protects us from wild fluctuations in our values and beliefs. If our cat died, we could jump to the conclusion that we will be devastated for years. But if we know how poorly people estimate their future happiness (see "affective forecasting"), we will have more accurate beliefs, therefore our distribution will be less disturbed, and therefore we will feel less depressed. In this way rationality makes us more Spock-like.

notes:

• The shape of a distribution is by no means sufficient to explain all behavior. It matters, for a given point (u0, p0), if it represents one outcome with that probability or a large group of very different outcomes with much smaller probabilities. Also, it matters what actions are possible and what utilities each action leads to. Still, this simple view seems useful.
• I don't think that the "common" distribution is symmetric - there is always a good chance of dying, but there is hardly anything comparable on the positive side. I'm ignoring this for simplicity.
• Normally we talk about expected utility of some action. However, here I'm marginalizing the distribution over all possible actions. This is problematic - how do we assign probabilities to actions we haven't chosen yet? It's also not entirely necessary; we can talk about the distribution for some specific action. I'm ignoring this for simplicity.
• Do people even have utility functions? I don't think it matters. I think something similar could be said about a more general human choice function, though it would be more awkward.
• Do other emotions work like that? E.g. anger or love?
The shape of the distribution may not be sufficient for them, but I believe that other properties of U and P might work.
• What about other shapes? E.g. what if the distribution has two peaks? Who knows, maybe there is a label for that. Although there doesn't have to be one, especially if it's not a common shape.

Discuss

### Does anyone know of a good overview of what humans know about depression?

31 августа, 2019 - 02:22
Published on August 30, 2019 11:22 PM UTC

If I wanted to spend 1 to 5 hours getting oriented on everything that we collectively know about depression (starting with whatever pieces are settled, and then moving on to the confusing mess of conflicting data that's left), what resources should I start with? Textbooks, sections of textbooks, web articles, blogposts by experts, youtube videos, and other formats that I haven't thought of are all admissible. Thanks!

Discuss

### September Bragging Thread

31 августа, 2019 - 00:58
Published on August 30, 2019 9:58 PM UTC

Thought I'd try reviving an old LessWrong experiment: the Bragging Thread. (See this old one for some context.) LessWrong mostly rewards people for writing blogposts. That's cool and all, but is not the only thing worth doing. What's an awesome thing you've done lately? You are encouraged to be as blatantly proud as you inwardly feel. :) (Note that this is not a thread for progress, or for things you might do. This is for celebrating things you have actually done lately.)

Discuss

### 2-D Robustness

30 августа, 2019 - 23:27
Published on August 30, 2019 8:27 PM UTC

This is a short note on a framing that was developed in collaboration with Joar Skalse, Chris van Merwijk and Evan Hubinger while working on Risks from Learned Optimization, but which did not find a natural place in the report.
Mesa-optimisation is a kind of robustness problem, in the following sense: since the mesa-optimiser is selected based on performance on the base objective, we expect it (once trained) to have a good policy on the training distribution. That is, we can expect the mesa-optimiser to act in a way that results in outcomes that we want, and to do so competently. The place where we expect trouble is off-distribution. When the mesa-optimiser is placed in a new situation, I want to highlight two distinct failure modes; that is, outcomes which score poorly on the base objective:

• The mesa-optimiser fails to generalise in any way, and simply breaks, scoring poorly on the base objective.
• The mesa-optimiser robustly and competently achieves an objective that is different from the base objective, thereby scoring poorly on it.

Both of these are failures of robustness, but there is an important distinction to be made between them. In the first failure mode, the agent fails to generalise on capabilities. In the second, it generalises on capabilities, but fails to “generalise on objective”. This second failure mode seems in general more dangerous: if an agent is sufficiently capable, it might, for example, hinder human attempts to shut it down (if its capabilities are robust enough to generalise to situations involving human attempts to shut it down). These failure modes map to what Paul Christiano calls benign and malign failures in Techniques for optimizing worst-case performance.

This distinction suggests a framing of robustness that we have found useful while writing our report: instead of treating robustness as a scalar quantity that measures the degree to which the system continues working off-distribution, we can view robustness as a 2-dimensional quantity. Its two axes are something like “capabilities” and “alignment”, and the failure modes at different points in the space look different.
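As a toy illustration of the 2-d framing (my sketch, not from the report; the 0.5 threshold and the scores are invented), one can classify an off-distribution episode by whether capabilities and alignment each generalised:

```python
# Classify an off-distribution outcome on two axes instead of one
# scalar robustness score. Threshold and example scores are invented.

def failure_mode(capability: float, alignment: float,
                 threshold: float = 0.5) -> str:
    """capability: how competently the system pursues *its own* objective
    off-distribution; alignment: how well that objective matches the
    base objective. Both scored in [0, 1]."""
    capable = capability >= threshold
    aligned = alignment >= threshold
    if capable and aligned:
        return "robust"           # generalises as intended
    if capable:
        return "malign failure"   # competent pursuit of the wrong objective
    return "benign failure"       # the system simply breaks

# The second, more dangerous failure mode from the post:
print(failure_mode(capability=0.9, alignment=0.1))  # malign failure
```

The quadrant labels echo the benign/malign distinction cited above; in the 1-d picture the two kinds of failure would be indistinguishable.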
Unlike the 1-d picture, the 2-d picture suggests that more robustness is not always a good thing. In particular, robustness in capabilities is only good insofar as it is matched by robust alignment between the mesa-objective and the base objective. It may be the case that for some systems, we’d rather the system get totally confused in new situations than remain competent while pursuing the wrong objective.

Of course, there is a reason why we usually think of robustness as a scalar: one can define clear metrics for how well the system generalises, in terms of the difference between performance on the base objective on- and off-distribution. In contrast, 2-d robustness does not have an obvious way to ground its two axes in measurable quantities. Nevertheless, as an intuitive framing I find it quite compelling, and invite you to also think in these terms.

Discuss

### How to Make Billions of Dollars Reducing Loneliness

30 августа, 2019 - 20:30
Published on August 30, 2019 5:30 PM UTC

Loneliness Is a Big Problem

On Facebook, my friend Tyler writes:

Lately, I've been having an alarming amount of conversations arise about the burdens of loneliness, alienation, rootlessness, and a lack of belonging that many of my peers feel, especially in the Bay Area. I feel it too. Everyone has a gazillion friends and events to attend. But there's a palpable lack of social fabric. I worry that this atomization is becoming a world-wide phenomenon – that we might be some of the first generations without the sort of community that it's in human nature to rely on. And that the result is a worsening epidemic of mental illness... Without the framework of a uniting religion, ethnicity, or purpose, it's hard to get people to truly commit to a given community.
Especially when it's so easy to swipe left and opt for things that offer the fleeting feeling of community without being the real thing: the parties, the once-a-month lecture series, the Facebook threads, the workshops, the New Age ceremonies. We often use these as "community porn" – they're easier than the real thing and they satisfy enough of the craving. But they don't make you whole. I've had some thoughts about experiments to try. But then I think about how hard it is (especially in this geographic area) to get people to show up to something on at least a weekly basis. Even if it's for something really great. I see many great attempts at community slowly peter out.

Having a smartphone that keeps you entertained all day, and enough money to live by yourself, might sound like first world problems. But they are likely contributors to loneliness. And as developing countries get richer, they'll start having first world problems too. So I think addressing loneliness could be very high-leverage for the world. People are starting businesses to address loneliness: you can pay someone to call you periodically or take you for a walk. But I'd argue these services are a band-aid in the same sense that parties, workshops, and ceremonies are. They don't solve the underlying problem: you're still alone by default instead of together by default.

Roommates Could Be a Great Solution

Sociologists think there are three conditions necessary for making friends: proximity; repeated, unplanned interactions; and a setting that encourages people to let their guard down and confide in each other. These conditions tend to be present during college for many people, but not afterwards. Why do people find it easier to make friends in college? Maybe it's because college students don't usually live alone. Going to events doesn't work because (a) you don't typically get repeated interactions with the same person and (b) events take place at a scheduled time.
Which may or may not be a time you're feeling lonely. If you have a lot of roommates, all you have to do is step outside your room and find someone to chat with. No transportation CO2 emissions needed. But more important, you know your roommates are always gonna be around.

But I Already Have Roommates

Even if you already have roommates, I think there's a good chance your roommate situation is under-optimized. Given that you spend so much time with them, there's a lot of value in living with people you really connect with. (Finding great coworkers makes sense for similar reasons.) The layout of your house and the number of roommates you have can also make a big difference. I used to have friends living in a 4-bedroom place where all the bedrooms opened directly into a single large common area. If anyone else was outside their room, you'd immediately know it and have an opportunity for interaction. Later I lived in an 8-bedroom place which felt far lonelier, even with every room occupied. The house was laid out so it was easy to go about your day without ever running into a fellow roommate. I also lived in a house with over 50 bedrooms for a while, which was wild & a lot of fun.

But I Don't Want Roommates

One reason you might not want roommates is because you're worried you might have conflicting preferences for what living together should be like. For example, my philosophy towards dirty dishes is to let them pile up on the counter and periodically stuff them all in the dishwasher, to be as time-efficient as possible. Surprisingly, some people dislike this approach. RoomieMatch.com is a website which tries to solve the roommate compatibility problem. You create a profile by answering questions about dishes, food in the fridge, housecleaning, social events, noise, overnight guests, shared household items, walking around in your underwear, TV, etc. In addition, there are questions to help predict how well you will connect as people.
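A matching service along these lines might score compatibility as weighted agreement on questionnaire answers. A hypothetical sketch (the questions, answers, and weights are all invented; this is not RoomieMatch's actual algorithm):

```python
# Score roommate compatibility as the weighted fraction of questionnaire
# answers two people share. All data here is invented for illustration.

def compatibility(a: dict, b: dict, weights: dict) -> float:
    total = sum(weights.values())
    agreed = sum(w for q, w in weights.items() if a.get(q) == b.get(q))
    return agreed / total

weights = {"dishes": 2, "noise": 3, "guests": 1, "cleaning": 2}

alice = {"dishes": "pile up", "noise": "quiet", "guests": "rarely", "cleaning": "weekly"}
bob = {"dishes": "pile up", "noise": "quiet", "guests": "often", "cleaning": "weekly"}

print(compatibility(alice, bob, weights))  # 0.875: only "guests" disagrees
```

Weighting matters: a dealbreaker question like noise tolerance should count for more than a minor one like guest frequency.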
You Could Make a Lot of Money

RoomieMatch has two search options: free and cheap. Cheap costs $20/year.

The problem with RoomieMatch is they're leaving a massive amount of money on the table.

A few years ago, a friend of mine was jobless & struggling financially. He was living in a 4-bedroom house at the time, and he was the primary contact with the landlord. My friend took responsibility for vetting folks from Craigslist in order to fill the remaining rooms in the house. He found that folks from Craigslist were willing to pay enough rent for the remaining 3 rooms that he was able to live rent-free until he found a job.

I acknowledge this is murky ethical territory, and I'm not condoning my friend's actions. (I don't believe anyone ever found out or got upset, for whatever that's worth.) The point I'm trying to make is that property management is way more lucrative than roommate matching. RoomieMatch makes $20 per user per year at best. My friend was making $100+ per user per month.
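Making that gap explicit, under the post's own numbers:

```python
# Revenue per user under the two models described above.
roomiematch_per_user_year = 20          # $20 per user per year, paid tier
property_mgmt_per_user_month = 100      # "$100+ per user per month"

property_mgmt_per_user_year = property_mgmt_per_user_month * 12
print(property_mgmt_per_user_year / roomiematch_per_user_year)  # 60.0
```

A 60x difference in revenue per user, before even counting the free tier's users.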

What I'm suggesting is that you take the full-stack startup playbook which has been successful in Silicon Valley recently, and apply it to online roommate matching + property management.

The extreme full-stack approach is to own your own properties. Apparently the US has a surplus of big houses right now.

There are already players in this space such as Roam which are proving that people will pay for community. (As if people paying extra to live in hip cities like SF & NYC didn't prove that already. BTW, I found that the awesome community at the Athena Hotel more than made up for the fact that it's in a non-hip city.) Anyway, I think existing players are mostly pursuing the extreme full-stack option. I actually think this is the wrong play. You want to be a marketplace, like Airbnb (valued at over $30 billion). The more people who are using your tool, the finer-grained roommate matching services you can provide. It's hard to achieve massive scale if you have to own every property. You want to be playing matchmaker for individuals with common interests who all happen to be looking for rooms around the same time, plus landlords with empty houses. Maybe you'll want to undercut RoomieMatch, and provide free matching services for people who live in their properties, in order to achieve the necessary scale. (RoomieMatch's existing scale is impressive by the way--I quickly got 100+ active, vetted matches in a midsize US city when I tried the tool. If you have the money you might want to just buy it.)

So instead of buying properties, maybe you just want to contact people selling large homes & see if you can convince them to let you manage their property. Note that this is a good company to start if a recession happens, since people who currently live alone will be thinking about how to save on rent.

This Could Be Really Great

Most roommate search tools, like Craigslist, don't make it easy to figure out if a future roommate is someone you'd actually want to live with. Imagine reaching a scale where you could match people based on factors like:

• They love to play board games, or pool, or Super Smash Bros.
• They want a compost pile and a garden in their backyard.
• One has a pet, and the other likes animals but isn't yet ready to make a lifetime commitment.
• They want a squat rack in the basement to save time & money going to the gym.
• They want to continue partying like college students after graduation.
• They want to be part of an intentional community devoted to mutual improvement and life optimization, or spirituality, or whatever.
• They want to share childcare responsibilities.
• They're all fans of the same sports team.
• They enjoy reading and discussing the same genre of novels, or watching the same movies.
• They're musicians looking for people to jam with.
• They want to live near hiking trails and go on group hikes together.
• They want to do independent study of the same topic.
• They're trying to eat a healthier diet.
• They just moved to a new city and want friends they can explore the city with.
• They have the same unusual work schedule.
• One needs a caretaker, the other wants to make extra money.
• They like the idea of having a couch or two listed on CouchSurfing.
• One knows a language the other wants to learn.

I also see opportunities to reduce friction in the current roommate matching process:

• Automatically find times when everyone is available for a meet & greet video call.
• Let people take virtual tours of the houses on offer to minimize driving.
• No need to worry about breaking a lease if someone moves to a different house in your company's network. Let people try out a few communities & see what works for them. Use machine learning to improve your matching as you gather more data.
• Provide external mediation in the event of roommate disputes, and have a reputation system to encourage good behavior.

You aren't providing housing as a service (like Airbnb), or companionship as a service (like the people-walking startup). You're providing community as a service. You could even organize mixers across your houses.

Conclusion

Technology has been blamed for the loneliness epidemic, but I think we can use technology to cure the loneliness epidemic as well.
I'm too busy being obsessed with machine learning to start any company which isn't mostly about that. But I think this is a product the world needs, and I want you to build it. I encourage you to sign the Founders Pledge and donate the money to effective charities in case you actually end up making billions of dollars as a result of reading this. I apologize if you found the tone of this post overly sales-y. My goal was to light a spark in the right person. (Feel free to steal phrases from this post when pitching investors!)

Some folks in the rationalist community might be a little underwhelmed by this idea, since people in the rationalist community have been living together in group houses for a long time. The thing is, finding roommates by connecting based on mutual interests via the internet is still kind of weird in the eyes of the general public. As Paul Graham put it: "Live in the future, then build what's missing." The existence of so many lonely people proves that this option is still missing for most people.

Anyway, if you're interested in building/investing in this, please comment below, or send me a private message via my user page with the country you're in and I'll put you in contact with others who message me.

Cross-posted from the Effective Altruism Forum

Discuss

### A Game of Giants

30 августа, 2019 - 09:01
Published on August 30, 2019 5:48 AM UTC

Key insights

• Let's look at the emergence tower (atom -> molecule -> ..
-> cell -> organ system -> animal -> pack/tribe) and consider where independent lifeforms live
• amoebas live on the cell level
• ants live on the pack level
• spiders live on the single-animal level
• humans live somewhere between ants and spiders and can move up and down as a response to external threat
• Evolutionary pressure prefers inner-tribe kindness and outer-tribe ruthlessness
• Totally ruthless people die to inner-tribe competition
• Totally kind people die to inter-tribe competition

Discuss

### Is there a type of utilitarianism that combines the sum and median utility?

30 августа, 2019 - 07:37
Published on August 29, 2019 10:42 PM UTC

Total utilitarians think the quantity of happiness is the most important; average utilitarians, in response, think the quality of each individual's happiness is the most important. But obviously we care about both, so I was very surprised I couldn't find the notion of combining the two online. Which is doubly surprising because combining the two solves the two (biggest) paradoxes of utilitarianism. I quickly wrote my idea down in this paper (1 page) https://bobjacobssite.files.wordpress.com/2019/08/balance-utilitarianism-7.pdf and asked those who read it to tell me if this already existed or if I was missing something obvious. I didn't get any responses, so now I'm asking you.

Discuss

### AI Alignment Writing Day Roundup #1

30 августа, 2019 - 04:26
Published on August 30, 2019 1:26 AM UTC

Here are some of the posts from last week's writing day. Due to the participants writing 34 posts in less than 24 hours (!), I'm re-airing them to let people have a proper chance to read (and comment) on them, in roughly chronological order.

1) Markets are Universal for Logical Induction by John Swentworth

A discussion and proof of the following. We want to show that any possible logical inductor can be represented by a market of traders - i.e.
there is some market of traders which produces exactly the same prices.

2) Intentional Bucket Errors by Scott Garrabrant

Bucket errors are normally thought of as a bad thing. It has "errors" right in the name. However, I want to argue that bucket errors can sometimes be useful, and you might want to consider having some bucket errors on purpose.

3) Logical Counterfactuals and Proposition graphs, Part 1 by Donald Hobson

Within this sequence of posts I will outline a procedure for logical counterfactuals based on something similar to proof length. In this post I present a reimagining of propositional logic in which proving a theorem is taking a walk around a graph of equivalent propositions. The post shows two intuitive models of proving propositional logic:

...a directed acyclic graph, as shown above. Under this interpretation, all we have to do is test node identity.

and another graph where:

...all statements that are provably equivalent in propositional logic will be within the same connected component of the graph. All statements that can't be proved equivalent are in different components, with no path between them.

Finding a mathematical proof becomes an exercise in navigating an infinite maze.

4) Why so much variance in human intelligence? by me

In this question I ask a rambling question about why the difference between peak and median human performance is so much larger than for other species, and Vaniver and Carl Shulman give some fascinating answers.

Discuss

### LessWrong Updates - September 2019

30 августа, 2019 - 01:12
Published on August 29, 2019 10:12 PM UTC

To better communicate site updates, we’re going to experiment with a once-monthly updates post. We’ll pin this post on the homepage for roughly a week and then continue to post updates here throughout the month. Please also feel free to use the comments section on this post as a Schelling point to give feedback, file bug reports, or ask questions you have about the site.
(You can also email us, ask a question, or use Intercom.)

Recent Features

Shortform

The major feature announcement for August was the launch of the Shortform beta (aka www.lesslong.com). Shortform is for:

Writing that is short in length, or written in a short amount of time. Includes off-the-cuff thoughts and brainstorming.

Shortform content can be seen in Recent Discussion on the homepage, on www.lesslong.com/shortform, and on the All Posts page using the Daily view. Anyone can create their own shortform by clicking New Shortform in their dropdown menu, clicking the New Shortform Post button on the homepage above Recent Discussion, or via the text box on the Shortform page. See the announcement post here with a full explanation of the philosophy behind Shortform and full instructions on how to use it. We’ll also be having a party on September 7 in Berkeley to celebrate the launch of Shortform. Details.

Upcoming Features

Heads up on new features which should arrive in September:

Subscriptions Overhaul

We have a planned overhaul of LessWrong’s subscription system which will allow you to subscribe to posts, comments, users, and private messages, thereby receiving notifications and/or emails.

Link Previews

Soon, when you hover over a green link on a LessWrong page that leads to another LessWrong page, you’ll see a pop-up preview of what’s on the other side plus extra info like the author and karma of the post being linked to.

New Editor Option, including Collaborative Editing and In-Line Comments

Currently, you can write comments and posts using either the Draft.js or Markdown editor. Soon we will enable a new experimental editor that should be more powerful and include features like collaborative editing and in-line comments in the style of Google Docs.

Convert Comments to Posts

Sometimes, you write a comment, and then realize it turned out to be a self-contained essay. Soon comments will have a create draft post option that will create a copy of it in draft form.
You can make any additional edits you’d like, and then publish it like normal. If you do use this on your top-level shortform comments, you’ll also have the option of moving its comments to the new post once you publish it.

Other Updates

MIRI Summer Fellows Program Writing Day

MSFP held a writing day on Thursday, 22nd August and produced several dozen posts. You can use the All Posts page to easily rewind and see the great AI Safety content they produced.

LessLong Launch Party in Berkeley

We’ll be hosting a Shortform launch party in Berkeley on September 7. See the LessWrong event or FB event for details.

How to reach the team for feedback or support

Lastly, if you’re looking to get in touch with the LessWrong team, the following are good ways:

Discuss

### Why study perturbative adversarial attacks?

29 августа, 2019 - 23:16
Published on August 29, 2019 8:15 PM UTC

This post summarizes and comments on Motivating the Rules of the Game for Adversarial Example Research.

Summary of paper

Despite the amount of recent work done, human-imperceptible perturbation adversarial attacks (example: One Pixel Attack) are not as useful as the researchers may think, for two reasons:

1. They are not based on realistic attacks against these AI systems.

we were unable to find a compelling example that required indistinguishability. ... the best papers on defending against adversarial examples carefully articulate and motivate a realistic attack model, ideally inspired by actual attacks against a real system.

There are much better attack methods that a real adversary could use:

• Test-set attack. Just keep feeding it natural inputs until it gets an error. As long as the system is not error-free this will succeed.
• It's suggested that such attacks can be used to fool speeding cameras, by adding human-invisible dots to a license plate. But if one is actually caught speeding, the failure of the algorithm would simply prompt a human review.
Much more usefully, one can use clear sprays that are intended to overexpose any photographed image of the licence plate.
• Similarly, it's suggested that a perturbation attack could threaten to make a self-driving car misidentify a STOP sign. But any robust self-driving car must deal with situations much worse than that, such as the stop sign not existing, not being visible, or people breaking the traffic rules.
• Against security cameras which automatically identify blacklisted people, one can wear a good mask. Since there probably are humans monitoring the camera feed, it's necessary to fool both the camera and the humans. Imagine a monitoring system that automatically displays people's real faces on camera and the best match faces. A human monitor would immediately notice the discrepancy.

2. They are not very fruitful for improving robustness.

In practice, the best solutions to the l_p problem are essentially to optimize the metric directly and these solutions seem not to generalize to other threat models.

My speculations

If so much work has been done for such dubious gains, I have two bitter questions:

1. Why did they work on the perturbation attacks so much?
2. Why are these works so fun to read?

The second question partially answers the first: because they are fun. But that can't be the only explanation. I think the other explanation is that perturbational adversarial examples are easy, because they can be defined in one short equation, and trained without domain knowledge (just like the neural networks themselves). As for why these works are so fun to read, I think it's because they are extremely humorous, and confirm comforting beliefs about human superiority. The humor comes from the contrast between tiny perturbations in input and big perturbations in output, between incomprehensible attacks and comprehensible results, between the strange behavior of neural networks and the familiar behavior of humans.

Gilmer, Justin, Ryan P.
Adams, Ian Goodfellow, David Andersen, and George E. Dahl. “Motivating the Rules of the Game for Adversarial Example Research.” arXiv preprint arXiv:1807.06732, 2018.

Discuss

### AI Forecasting Resolution Council (Forecasting infrastructure, part 2)

29 августа, 2019 - 20:35
Published on August 29, 2019 5:35 PM UTC

This post introduces the AI Forecasting Resolution Council, a group of researchers with technical expertise in AI who will allow us to expand the space of effectively forecastable questions. It is the second part in a series of blog posts which motivate and introduce pieces of infrastructure intended to improve our ability to forecast novel and uncertain domains like AI. The Council is currently in beta, and we're launching early to get feedback from the community and quickly figure out how useful it is.

Background and motivation

A key challenge in (AI) forecasting is to write good questions. This is tricky because we want questions which both capture important uncertainties, and are sufficiently concrete that we can resolve them and award points to forecasters in hindsight. Here are some example questions within AI that make this especially difficult:

Counterfactual questions

Suppose in 2000 you use “superhuman Othello from self-play” as a benchmark of a certain kind of impressive AI progress, and forecast it to be possible by 2020. It seems you were correct -- very plausibly the AlphaZero architecture should work for this. However, in a strict sense your forecast was wrong -- because no one has actually bothered to build a powerful Othello agent. So if a calibrated forecaster faces this question in 2000, considerations regarding who will bother to pursue what project “screen off” considerations regarding fundamental drivers of AI progress and their gradients. Yet the latter concern is arguably more interesting.
This problem could be solved if we instead forecasted the question “If someone were to run an experiment using the AI technology available in 2020, given certain resource constraints, would it seem, with >95% confidence, that they’d be able to create a superhuman Othello agent that learnt only from self-play?” Doing so requires a way of evaluating the truth value of that counterfactual, such as by asking a group of experts.

Similarity questions

Suppose we try to capture performance by appealing to a particular benchmark. There's a risk that the community will change its focus to another benchmark. We don’t want forecasters to spend their effort thinking about whether this change will occur, as opposed to fundamental questions about the speed of progress (even if we would want to track such sociological facts about which benchmarks were prominent, that should be handled by a different question where it’s clear that this is the intent). So to avoid this we need a sufficiently formal way of doing things like comparing performance of algorithms across multiple benchmarks (for example, if RL agents are trained on a new version of Dota, can we compare performance to OpenAI Five’s on Dota 2?).

Definition-of-terms questions

This is more straightforward and related to the AI Forecasting Dictionary. For example, how do we sufficiently clearly define what counts as “hard-coded domain knowledge”, and how much reward shaping can you add before the system no longer learns from “first principles”?

Valuation questions

Not all important uncertainties we care about can be turned into a concretely operationalised future event. For example, instead of trying to operationalise how plausible the IDA agenda will seem in 3 years by making a long, detailed specification of the outcome of various experiments, we might just ask “How plausible will IDA seem to this evaluator in 3 years?” and then try to forecast that claim.
Making this work will require carefully choosing the evaluators such that, for example, it is generally easier and less costly to forecast the underlying event than the opinions of the evaluator, and such that we trust that the evaluation actually tracks some important, natural, hard-to-define measure. Prediction-driven evaluation is a deep topic, yet if we could make it work it would be very powerful. See e.g. this post for more details.

AI Forecasting Resolution Council

As a step towards solving the above problems, we’re setting up the AI Forecasting Resolution Council, a group of researchers with technical expertise in AI who are volunteering their judgement to resolve questions like the above. The services of the council are available to any forecasting project, and all operations for the council will be managed by Parallel Forecast. In case there is more demand for resolutions than can be filled, Parallel will decide which requests to meet.

We think this Council will create streamlined, standardised procedures for dealing with tricky cases like the above, thereby greatly expanding the space of effectively forecastable questions. There are still many questions to be figured out regarding incentives, mechanism design, and question operationalisation; by setting up the Resolution Council, we are laying some groundwork to begin experimenting in this direction and to discover best practices and ideas for new, exciting experiments.

The initial members of the council are:

We expect to add several more members over the coming months. The database of previous verdicts and upcoming resolution requests can be found here.

How to use the council if you run a forecasting project

If you’re attempting to forecast AI and have a problem that could be solved by querying the expert council at a future date, let us know by filling in this resolution request form.
How to join the council

If you have technical expertise in AI and would be interested in helping to expand the space of forecastable questions, let us know using this form. There is no limit on the number of judges, since we can always randomise who will vote on each distinct verdict.

Discuss

### Why are the people who should be doing safety research, but aren’t, doing something else?

August 29, 2019 - 11:51
Published on August 29, 2019 8:51 AM UTC

Imagine the set “the 50 people who would most helpfully contribute to technical alignment research, were they to be working on it, yet who are working on something else instead.” If you had to guess -- if you had to make up a story that seems plausible -- why are they working on something else instead? And what is it they’re working on?

Discuss

### The I Ching Series (3/10): This year, how can I live a happy life while succeeding in school?

August 29, 2019 - 09:04
Published on August 29, 2019 6:03 AM UTC

The Rules:

• No divination-related questions.
• I will do ten I Ching sessions, about every six weeks. Eight out of ten sessions must produce at least one new piece of evidence, or else I'll abandon the practice.
• Definition of valid evidence:
• Serious work, and "mid-term exam" success, on novel ideas generated by the I Ching, or on projects that I’ve neglected for at least 6 weeks.
• Insights that I can articulate, that I still fully agree with after six weeks, that seem to improve my daily decisions and mental health, and that seem sensible to my friends. I will keep the fact that I used the I Ching to generate these insights a secret until the end of the project.
• I will treat the most challenging parts of my reading as if they were true and important. After all, the strangest parts of the reading might be the most valuable, if the point of the I Ching is to bust you out of ruts.
Preliminary Reflection:

I should acknowledge that I've been looking forward to doing this reading, since the last one led to such dramatic and helpful changes. That sense of anticipation strikes me as a potential source of motivated reasoning, either to defend this process as useful or perhaps to overcorrect by creating ever more stringent standards for what counts as "useful." I've also started to suspect that the primary utility of this I Ching procedure lies in making falsifiable and objective rulesets and standards of evidence for my own personal growth projects, not in illuminating the unexplored depths of my psyche with ancient wisdom.

Question: This year, how can I live a happy life while succeeding in school?

Post-Run Reflection:

This session seemed most helpful in producing a way to conceptualize the coming year in a very broad sense. It mostly generated insights and intentions rather than ideas or projects, though the former may lead to the latter. Because the question concerns an entire year, is the appropriate time to check this phase for "success" in six weeks, or in August 2020?

I also have to acknowledge that every single one of the insights and intentions I gleaned from this session was something I'd thought of before, talked about, or attempted to put into action. Some of the "First Steps" are projects that I've neglected for at least six weeks, and explicitly describing each of these as a set of linked, formal goals rather than background thoughts is novel. In my original rules for "valid evidence," insights did not need to be novel; they only needed to be sensible and useful. Yet I also have a feeling that a summation of thoughts and feelings I've already had, or of personal growth projects I've already been working on, isn't enough. I would feel more confident about the utility of the I Ching this time around if I were able to achieve clear and durable changes around my intentions.
Requiring this might mean that I am imposing stricter standards of evidence, possibly because there are so many degrees of freedom this time around. Many activities could count toward any of the insights I listed, of which in turn there are many. Yet a happier life can't always be defined as a set of straightforward projects; it is probably more a matter of shifting emphasis, balance, and overall perspective. I did glean some potentially valuable broad perspective and expectation-setting, ideas for conversations and slight changes to how I handle situations that might benefit my day-to-day routines, and strategies for how to plan effectively. I will return in a couple of weeks with an update giving my best judgement on whether or not these insights have led to changed behavior.

Approximate time to complete this document: 3 or so hours initially.

Insights, Intentions, First Steps:

• Insight: This year will be stressful. I should carefully schedule and plan my life, taking care of the details, and "strike back" like a guerrilla by planning truly high-quality, memorable times of joy, even if they are few and far between.
• First Step: I am setting an intention here to do one activity for each week of the first month of the fall quarter. These will include a group dinner, a hike, a high-quality arts activity, and going out dancing.
• Intention: Introspect about what sorts of activities and conversations would truly bring me satisfaction, as one-offs or routines. Avoid the frivolous with grace, but actively offer opportunities to join me in mutually agreeable activities.
• First Step: Creating plans by considering broad and varied categories of things I like to do, then narrowing down from there, can help me make decisions. As a further first step, broad categories I dislike include watching multiple episodes of TV at a time, amateur dance/theater/open mics, unstructured dance, fancy bars, and browsing at shops.
• Intention: When I notice myself dissembling, create a procedure for responding to it so that I can stay honest. That might work by creating a log in my phone describing the situation soon after the moment, and looking into it more deeply when I get home.
• First Step: Start by having a tactful conversation with my girlfriend about my likes and dislikes, acknowledging that I've done a poor job of articulating them and the emotional dynamics behind them, and promising to give better guidance on what I'd prefer to do in the future.
• Intention: Identify the situations in my life that are only tolerable, and try to identify a tension I haven't resolved, such as between preserving a connection with my housemates and moving to a quieter space with housemates who share more of my interests.
• First Step: Consider what sort of connection I could envision maintaining with individual housemates after a move, write down what I'm looking for in a new house, and start looking for options.
• Intention: By the end of the year, feel as though I've dissolved my tendency to make jealous comparisons between my own and others' success. Notice moments when I succumb to this and respond with meditation, CBT, or the like.
• First Step: Just identifying this as a goal counts as a first step. I think that my process might not become clear except in the moment.
• Intention: Avoid impulsive, flailing activity. Life will not offer spontaneous abundance this year; any harvest that is available will require diligence. Flailing comes in many forms: wasted food or empty cupboards forcing me to buy food at restaurants, excessive drinking or weed smoking, conversations that run to a tiresome length, reading or writing projects that don't tie into a larger purpose, acquiescence to proposals from friends that I don't want to participate in, and so on. Plan to avoid these as much as possible.
It's better to stay in and do nothing than to do something I don't want to do, and it's better to eat a plain baked sweet potato than to waste $17 on takeout. When plans fail, prefer simplicity and stillness to flailing attempts at action.
• First Step: Create a set of rules for how to deal with "defeats" (such as being hungry with no food in the house) without "flight" (buying expensive takeout) but with "correct retreat" (buying a sweet potato or making a PB&J, and committing some time ASAP to better meal planning).
• Intention: During social time, create space to retreat into my own psyche. That might mean stepping away, having preliminary conversations to express this need, leaving early and returning later, and so on.
• First Step: After social time, reflect on whether it could have benefited from such a retreat, and how I might have accomplished that.
• Intention: Notice when I am "climbing" - trying to force connections artificially out of a sense of insecurity. Let that go. Try to remap the emotion of feeling "shut out" of somebody else's inner world into a sense of mutually dignified distance that we can cross when we are ready.
• First Step: Have some conversations with friends to explore this more deeply.

Prev: Run 2/10

Discuss

### This sometimes helps to expose assumptions

August 29, 2019 - 02:01
Published on August 28, 2019 6:42 PM UTC