LessWrong.com News

A community blog devoted to refining the art of rationality

I Started a Sports and Gambling Substack

Published on August 25, 2020 9:30 PM GMT

The substack is here.

The latest post is here and contains predictions for tomorrow’s baseball games. I also posted predictions for yesterday’s games here, and here you can find my offensive team ratings. Pitcher ratings will be posted soon.

I wanted to try out the Substack system and software as a potential landing point, as per my discussion of the forced switch to the new WordPress editor. I also wanted to avoid conflating the sports modeling and gambling talk with my rationality blog, as I expect them to be relevant to the interests of different groups of people.

If you have meaningful feedback on what the model has to say, or are finding these posts interesting or useful, let me know, and hopefully subscribe to the Substack. If not, I'll likely periodically mention that it is there, but will do my best to keep the two efforts distinct.




Sunday, 20/8/30, 12pm PDT – Tagging Celebration: Habryka/Crawford + Party

Published on August 25, 2020 8:21 PM GMT

FB Event

Woop! Woop! This past Saturday, gallant taggers brought our immense first tagging campaign to a close. The archives have been swept and every post with over 25 karma has been given at least one tag. To date, 15,440 tags have been applied to 8,022 posts.

See the full celebration post here.

While this is only the beginning for tagging, what a beginning it is! We think this deserves some celebration. We're preparing a two-part event:

Part 1: Oliver Habryka and Jason Crawford discuss Intellectual Progress

https://us02web.zoom.us/j/82547847213

Back in 2017, when the future of LessWrong was in doubt, Habryka led the revival efforts, pulling together a team and creating LessWrong 2.0. Central to Habryka's vision of LessWrong was that it would accelerate intellectual progress on the important problems facing humanity; intellectual progress here simply means progress on building knowledge and understanding.

For several years, Jason has been seeking to answer the question of which factors have led to human progress in general: how did we go from living at the mercy of nature with merely stone tools and fire to buildings, electricity, medicine, legal systems, etc? Jason's research, and info about the broader Progress Studies movement, can be found at his blog, Roots of Progress (with many pieces crossposted to LessWrong).  

Of course, the history of human progress, in general, is tightly woven with humanity's intellectual progress. When our understanding of the world increased, so did our ability to shape it (for better or worse). 

To celebrate the new tagging system (itself designed with the goal of intellectual progress) we've decided to have Habryka and Jason chat about questions such as:

  • What historical factors were important for intellectual progress?
  • What conditions are most important to create now in order to get intellectual progress?

Starting at 12:00 PDT, Habryka and Jason will chat on Zoom for ~1 hour, including some Q&A.

Part 2: Party in Rational Woods (our Topia location)

https://topia.io/rational-woods (feel free to check it out now)

Following the talk, we will migrate to a social environment that more easily allows people to strike up small group conversations. Raemon has created the marvelous Rational Woods on the Topia platform. You can check it out now for social hangouts. There are some neat features, so make sure to click on things.

This section will begin following the main talk and will last for several hours, so long as people are having a good time. Conceivably, we could play Breakfast is Served or Person Do Thing with LessWrong tags if there's enough enthusiasm. Not sure if that'll work well or not.

Also, I hope everyone can treat our top taggers as VIPs at the party. They deserve it. Three cheers for them!

The Details

When: Sunday, 30th August, 12:00PM PDT / 3:00PM EDT / 7:00PM UTC

Where: 

FB Event: https://www.facebook.com/events/2827544884191033/




Mathematical Inconsistency in Solomonoff Induction?

Published on August 25, 2020 5:09 PM GMT

What counts as a hypothesis for Solomonoff induction? The general impression I’ve gotten in various places is “a hypothesis can be anything (that you could write down)”. But I don’t think that’s quite it. E.g. evidence can be written down but is treated separately. I think a hypothesis is more like a computer program that outputs predictions about what evidence will or will not be observed.

If X and Y are hypotheses, then is “X and Y” a hypothesis? “not X”? “X or Y?” If not, why not, and where can I read a clear explanation of the rules and exclusions for Solomonoff hypotheses?

If using logic operators with hypotheses does yield other hypotheses, then I’m curious about a potential problem. When hypotheses are related, we can consider what their probabilities should be in more than one way. The results should always be consistent.

For example, suppose you have no evidence yet. And suppose X and Y are independent. Then you can calculate P(X or Y) in terms of P(X) and P(Y). You can also calculate the probability of all three based on their length (that's the Solomonoff prior). These should always match, but I don't think they do.

The non-normalized probability of X is 1/2^len(X).

So you get:

P(X or Y) = 1/2^len(X) + 1/2^len(Y) - 1/2^(len(X)+len(Y))

and we also know:

P(X or Y) = 1/2^len(X or Y)

since the left hand sides are the same, that means the right hand sides should be equal, by simple substitution:

1/2^len(X or Y) = 1/2^len(X) + 1/2^len(Y) - 1/2^(len(X)+len(Y))

Which has to hold for any X and Y.

We can select X and Y to be the same length and to minimize compression gains when they’re both present, so len(X or Y) should be approximately 2len(X). I’m assuming a basis, or choice of X and Y, such that “or” is very cheap relative to X and Y, hence I approximated it to zero. Then we have:

1/2^(2len(X)) = 1/2^len(X) + 1/2^len(X) - 1/2^(2len(X))

which simplifies to:

1/2^(2len(X)) = 1/2^len(X)

Which is false (since len(X) isn’t 0). And using a different approximation of len(X or Y) like 1.5len(X), 2.5len(X) or even len(X) wouldn’t make the math work.
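
As a sanity check, here's a short numeric sketch of the mismatch (my own addition, assuming, as above, len(X) = len(Y) = n bits and a free "or", so len(X or Y) ≈ 2n):

```python
# Sketch: compare the two ways of computing P(X or Y), with len(X) =
# len(Y) = n bits and len(X or Y) ~ 2n (treating the "or" as free).

def prior(length):
    """Non-normalized Solomonoff-style prior: 1/2^length."""
    return 2.0 ** -length

for n in (5, 10, 20):
    length_based = prior(2 * n)                      # 1/2^len(X or Y)
    probability_rules = prior(n) + prior(n) - prior(2 * n)
    print(f"n={n}: length-based={length_based:.2e}, "
          f"probability rules={probability_rules:.2e}")
```

The two numbers diverge further as n grows, which is just the failed equation above restated.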

So Solomonoff induction is inconsistent. So I assume there’s something I don’t know. What? (My best guess so far, mentioned above, is limits on what is a hypothesis.)

Also, here's a quick intuitive explanation to help explain what's going on with the math: X is both shorter and less probable than "X or Y". Think about what you're doing when you craft a hypothesis. You can add bits (length) to a hypothesis to exclude stuff. In that case, more bits (more length) means lower prior probability, and that makes sense, because the hypothesis is compatible with fewer things from the set of all logically possible things. But you can also add bits (length) to a hypothesis to add alternatives. It could be this or that or a third thing. That makes hypotheses longer but more likely rather than less likely.

Also, speaking more generally, the Solomonoff prior probabilities are assigned according to length with no regard for consistency amongst themselves, so it's unsurprising that they're inconsistent, unless the hypotheses are limited in such a way that they have no significant relationships with each other that would have to be consistent. That sounds hard to achieve, and I haven't seen any rules specified for achieving it (note that there are other ways to find relationships between hypotheses besides the one I used above, e.g. looking for subsets).




The Best Toy In The Park

Published on August 25, 2020 2:30 PM GMT

One of the playgrounds near our house has an angled, ring-shaped merry-go-round:

There are lots of different toys at the park, but this one is different. It has depth.

It's a freely spinning ring, with a 10° tilt. As you walk up, it spins you back down again. It has enough momentum, however, that you can get yourself to higher positions on the ring by working it back and forth. Because of the tilt, your speed changes as you move around the ring, and you have to adjust your angle relative to the ring as well. First is the challenge of staying upright as you walk at the bottom. Then the challenge of getting yourself to the top. Then the challenge of getting around it at different speeds, changing orientation, riding with other people, rolling on it, jumping, etc.

There are shallow games, like Tic-tac-toe, that are fun for a bit, and deeper games, like Go, that can be enjoyed for years. While I haven't spent years on any of these toys, this angled spinning ring has the most depth of any I've seen.

Comment via: facebook




Why hasn't there been research on the effectiveness of zinc for Covid-19?

Published on August 24, 2020 8:58 PM GMT

There was significant interest in zinc early on. It seemed like one of the most promising avenues for protecting oneself against Covid-19. Searching on both Google and LessWrong itself, I do not see new research on it. (This is the closest thing I found: a meta-review that says there is almost no research whatsoever.) Its epistemic status seems frozen in the pre-Covid past: a promising experimental treatment with no significant evidence confirming or disconfirming it, and no large-scale commercialization and deployment.

This seems like quite an inefficient result, if my understanding is correct. What has gone wrong?




Socially-distanced outdoor Petrov Day ceremonial manual

Published on August 24, 2020 2:11 PM GMT

For Petrov Day 2020, we of the Austin Less Wrong community decided that we would modify the usual ceremonial procedure to accommodate COVID-19 social distancing guidelines - the ceremony will be held outdoors, with everyone wearing masks and keeping 6 feet of distance from others. The modified Petrov Day manual can be read here:

We are making this modified manual available in case other Less Wrong meetup groups are considering holding an outdoor Petrov Day ceremony and would like to use it. If so, please note that obtaining all the materials and finding a suitable location may take a few weeks, so make sure to begin your preparations well in advance.

On that note, if you are in Austin or within driving distance, please save the date: Saturday, September 26, 2020, 6:30pm, at an outdoor location in Austin, TX. Petrov Day is our premier event of the year and we would be delighted to see you there. I will post more information here on Less Wrong as the date approaches; or, message me on LW or email (jchan107@protonmail.com) and I'll keep you informed of any updates. In the meantime, you can sign up for our mailing list and join us for our regular virtual and in-person meetups, which are variously announced there. Our virtual meetups have been going strong since the lockdown, with activities and social meetups each week.




When can Fiction Change the World?

Published on August 24, 2020 1:47 PM GMT

I suspect that a nontrivial percentage of the people reading this became involved with the community because of Harry Potter and the Methods of Rationality.

So to the extent that those who were drawn to join the community because of that source are making the world a better place, we have at least one clear example of a novel having an important impact.

I've made a living through self-publishing novels for the last five years (specifically Pride and Prejudice variations, that is, Jane Austen fan fiction). Recently, inspired by conversations at EA Virtual and by worries made more emotionally salient by GPT-3 examples, I decided that I wanted to put part of my professional time towards writing novels that might have a positive impact on conversations around AI.

As part of this, I did some thinking about when fiction seemed to exert an influence on public policy, and then looked for academic research on the subject. I think there are people in the community who will find this write-up interesting and useful.

Theoretical Model

I identified four common mechanisms that seemed to be involved when fiction had a large impact on opinions. This is not an exhaustive list, and there is some overlap and fuzziness around the boundary of each concept.

Radicalizing the already convinced:

A classic example is Uncle Tom's Cabin, an 1850s novel about the unjust suffering of a slave, which was credited with helping to spark the Civil War. Uncle Tom's Cabin did not introduce anyone to the idea that slavery was bad, or convince anyone who thought that slavery was a fine peculiar Southern institution that it was actually evil. However, it seems to have radicalized Northern attitudes towards slavery, and it was part of the moment when enough single-issue voters on slavery existed that the party system broke down and allowed the new abolitionist Republican Party to win Congress and the presidency in 1860.

Research has been done via surveys to find out if readers of popular novels about climate change have changed their views about climate change relative to similar readers who did not read any of them. Concerned readers become alarmed by climate change after reading, and those who were aware but not concerned become concerned. However, readers who think that climate change is a hoax usually don’t read ‘Cli-fi’ and when they do read it, their opinions are not changed. (Schneider-Mayerson, M. 2018 and 2020)

Evoking empathy for new groups:

Uncle Tom's Cabin succeeded at radicalizing northerners by making them care about the fate of a particular southern slave. LGBTQ representation in media drives viewers to care about gay characters, and to see them as normal human beings who deserve to have the same chances for happiness as anyone else. In the 1970s and 80s, The Jeffersons and The Cosby Show helped convince white Americans that black Americans could be successful, intelligent, and well-dressed citizens.

Some of the research in cultivation theory specifically shows that heavy TV watchers as a group changed their opinions on minorities far more than the general public as positive representations of these groups became common.

On the other hand, negative representations can exist: for example, antisemitic stories in which the long-nosed Jewish banker mistreats a poor person, or media which portrays minority Americans as dangerous and violent. West German attitudes towards the Holocaust were substantially changed when a quarter of the viewing population watched the Holocaust miniseries in the seventies, and this likely contributed to passing laws that supported extraditing more war criminals to Israel.

Exposure to New Points of View:

After reading Upton Sinclair's novel The Jungle, the president at the time, Teddy Roosevelt, supposedly stared at his morning sausage and was unable to stomach the possibility of human body parts being in it. The rest of the public was equally horrified by the prospect of tainted meat, and within a year the act establishing the FDA was passed.

Methods of Rationality exposed many of its readers to a way of thinking about the world that they'd never seen and that they found highly engaging. When I read Atlas Shrugged as a teenager who was trying to decide if he thought God existed, I saw for the first time an expression of intellectually satisfied atheists who were confident and happy living without believing that God existed. Supposedly many of the people who laid the foundation for the science of robotics were inspired by reading Isaac Asimov's Robots series.

Community Building:

The science fiction community created a space for people interested in engineering and technology to meet each other informally in the early part of the twentieth century. Atlas Shrugged recruited people to join objectivist circles. The Less Wrong community recruited probably half its current population from people who loved Methods of Rationality.

A paper about fiction influencing international relations argued that the Left Behind novels were part of the structure that maintained unity amongst Christian Evangelicals during George W Bush’s presidency. (Musgrave)

How attitudinal changes lead to real world changes

Influencing specific important individuals:

Ronald Reagan became more opposed to nuclear weapons after watching The Day After. Scientists who started the field of robotics read Asimov. Paul Ryan and Alan Greenspan are huge fans of Ayn Rand and Atlas Shrugged. Many of us are here because we liked Methods of Rationality. Often specific influential individuals then cause major changes because they were influenced to do so by reading a specific book or watching a specific movie.

Supporting a mass movement:

Uncle Tom's Cabin radicalized Northern abolitionist attitudes, and was part of the process that led to the Republican Party. Radicalizing environmentalists via cli-fi possibly has led to groups like Extinction Rebellion and enthusiasm for buying Teslas. Anti-war films reduced political willingness to keep troops in Vietnam and Iraq. Nuclear apocalypse films increased political support for the Test Ban Treaty. Global warming films are definitely part of why the public in most countries supports things like the Paris Agreement, or why California passed a cap-and-trade policy. LGBTQ rights and civil rights are more likely to get friendly supreme court rulings, friendly company hiring practices, etc. because they are popular, which is partly because of media representations.

How does fiction change attitudes?

Transportation theory; synthetic experiences and aliefs:

The brain doesn't fully treat fictional evidence differently from real evidence, and fiction gives a veneer of the real and specific to ideas that were previously just general and abstract.

Professor Paul Bloom at Yale describes this phenomenon as fiction creating 'aliefs', where the emotional portion of our brain responds to things that are not real as though they were. For example, most atheists are unwilling to sign a contract selling their soul to the devil, which Bloom interprets as an example of part of the brain treating the fictional entity as real. However, as we all know, the real world is theoretically overdetermined, and refusing to take twenty dollars to sell your soul to the devil might be a reasonable response in a case of potentially infinite stakes with trivial residual uncertainty.

Feeling deeply transported into a different world is a key part of fiction's appeal, and it seems to be an important part of how fiction can make people treat fictional examples as though they were concrete and real. (Busselle 2009)

A research manipulation that made readers feel more emotionally engaged before exposing them to a story changed how believable they found it, while changing whether a story was labelled as fiction, as fiction based on a true story, or as a true story did not.

Cli-fi readers talked about now feeling like climate change would actually happen to actual people. This led to increased radicalization, as they had a sense of connection to this future. Having characters the readers could identify with, acting in familiar settings that create a sense of place and of a real location, may have been what drove that success. (Schneider-Mayerson 2018)

Fiction influences readers by making the problem seem more real, creating a feeling of emotional verisimilitude and plausibility, and making the problem seem vivid and concrete.

Lesson for EA writers: Someone, possibly Mark Twain, said that you need to give your readers two familiar things for every strange idea you introduce. If you want a broader audience (not necessarily a mass audience!) to be moved by your book, try to make sure you actually hit that target.

Cultivation theory:

According to this model, people think the world is like the media they are repeatedly exposed to. As a result, people who watch lots of TV think there is more crime, and that more people are lawyers, than people who don't watch very much TV. Today, after decades of positive media representation of minorities and LGBTQ groups, heavy TV watchers also like those groups as much as low TV watchers do. However, in the early surveys, before this representation happened, high TV watchers were more bigoted against minority groups. (Mosharafa 2015)

A Harvard professor convinced lots of TV shows to insert designated drivers into episodes where the group went to a party and got drunk, as part of a public safety campaign. Given how much money different companies pay to have people be seen in movies drinking Pepsi while flying somewhere on Southwest, after they paid with a Chase credit card, we can assume that ideological product placement probably has some impact.

Maybe we could try to convince screenwriters to look for a chance to get their characters to be mentioned doing EA type things, like donating a regular percentage of their money to extreme poverty reduction, or talking about impact evaluations after mentioning they’ve donated to something.

Audiences will reject and resist ideas that disagree with their preexisting assumptions. They also will draw lessons and meanings from fiction that are congruent with what they already believed.

Example: After reading a cli-fi novel where a PoV character betrays and murders another PoV character at the very end of the novel, conservative and moderate readers drew the lesson from the book that you can’t trust anyone, and that you need to be grateful for the little things in life. They also tended to identify with the ruthless and initially amoral rich male character, rather than the middle class activist/journalist or the poor Latina immigrant. (Schneider-Mayerson 2020)

Example: Teenagers and children make fun of media that is obviously trying to convince them to act in a way that adults would like them to. For example, anti-drug messages in shows for teenagers seem to have had a very limited effect, especially when the 'facts' in the message are broadly believed to be false.

Example: Tom Clancy's influence on the Republican policy elite was much smaller during the George W Bush presidency, when the use of preemptive wars that he thought were a bad idea had become the preferred elite policy. (Musgrave)

What goes wrong?

Democracy counts numbers not intensity

If the radicalized people simply become more passionate members of a blocked political coalition, it doesn’t do anything. There needs to be a way to transmit the increased passion for the subject into an actual change. Movements like the Extinction Rebellion are reflections of the way that the blocked political coalition in favor of stronger climate policies is now engaging in civil disobedience because it is clear that they are not going to be able to directly achieve the policy victories they view as desperately important through purely democratic processes.

Attitudinal changes are temporary:

A group of psychology freshmen were assigned to read The Omnivore’s Dilemma and compared to a group of freshmen who weren’t assigned to read it. Their attitudes right after they read the book were changed in the direction of the book compared to the control group, but after a year’s time their opinions had mostly returned to what the control group thought. (Hormes et al 2013)

Books generally do not convert opponents:

Southern slave owners did not generally read Uncle Tom's Cabin, and if they did, they thought it was an unrealistic and dishonest portrayal of slavery instead of deciding that slavery must be ended. People who believe climate change is a hoax do not generally read cli-fi novels, and when they do, they say that the climate change scenario portrayed was unrealistic.

The Left Behind novels may have tied together Evangelicals in the conservative political coalition, but nobody who wasn’t an evangelical Christian had the slightest interest in reading them. The framework of the argument in a book must match the presuppositions of the audience (Musgrave).

The changes in attitude that come with fiction often depend on the reader not having much personal experience with the situation. In the case of events that are common in media, but uncommon in personal life, people will automatically recall media examples of the situation, for example murder trials, chemical explosions or international spy rings. But if you ask them to recall an event that is both common in real life and in media, such as dates or highway accidents, they automatically think of their own experiences or those of friends (Busselle 2003).

Southerners had many personal experiences with slavery that would have dominated fictional portrayals in how they thought about it. AI researchers have daily personal experience with AI being extremely dumb and not suddenly destroying the world.

Lesson for EA writers: People who disagree with the model of the world they think is expressed by your book probably won't read it. If they do read the book, they will be in an 'am I allowed to disbelieve this' mindset, rather than trying to figure out if it is actually true. So be aware of what ideas will feel strange and might create resistance in your desired audience, and either figure out a way to make your argument so that it follows logically from their existing presuppositions, or figure out a way to market your book to an audience that shares your presuppositions or is undecided on them.

Example: The Day After was an explicitly non-partisan film designed to simply show ordinary Americans being ordinary and then dying because of a nuclear exchange. The Republican establishment that controlled the presidency at the time did not need to react to the movie as a partisan attack, and could instead treat it as expressing authentic concerns.

Backlash:

Fictional portrayals of effective torture provoked extensive debate and elite backlash arguing against the portrayal, and thus the effect of these scenes on the support for torture was at best ambiguous, and likely null. (Payne)

But despite the response, did this portrayal legitimize, or give a platform to, the idea that torture could be a worthy tool in extreme situations, so that even though the public debate did not fully support the idea, it still became more popular and legitimate?

My suspicion is that this effect was negligible. The idea that torture might be effective and legitimate to use in extreme circumstances already existed. I remember as a teenager spontaneously thinking about torturing terrorists as probably being useful on the morning of Sept 11, 2001. It is a natural idea for people to have.

Media that showed effective torture reflected this pre-existing belief. Possibly the belief could have been delegitimized in the way racism was delegitimized after the 60s by showing that it had no political or social power, but simply not speaking about the possibility of torture being effective would not have done that. The idea did in fact have political power and was believed by many elites in the party holding the presidency and congress.

Most likely the net effect of 24 cinematically displaying effective torture was zero.

However, fictional representations may be part of how particular policies change from polarized to bipartisan over time. For example the military preparedness policies promoted by a story about an invasion of Britain by Germany written in 1871 were strongly disliked by the Liberals who were in power at that time, but twenty years later, military preparedness against the chance of an invasion was funded with bipartisan support (Kirkwood 2012).

Lesson for EA authors: Be aware of pushing against an established opposition. If you can't undercut the coalition against your preferred policy, you probably will achieve very little. Robin Hanson's political orthogonality thesis, the idea that you will be most effective at pulling the debate in a direction without an established opposition, is relevant here.

Depressing people too much to act

As they are focused on disasters, often cli-fi books create intense negative affect, depression, and a sense of helplessness and hopelessness. This can lead people to paradoxically act less. (Schneider-Mayerson 2018)

Conclusion:

Audiences have a sophisticated response to what they read. They will notice the things that aren’t said or even considered in the books they read. If it is clear that a book is trying to promote a particular political point of view, many readers will strongly discount the intended message. They will also spontaneously come up with objections to arguments that do not feel correct to them.

For change to happen, a work does not only need to change opinions; there needs to be a way for the changed opinions to be turned into action.

Simply radicalizing people doesn’t matter without a path for change.

This is especially true if there is a blocking coalition that is unaffected by the attitudinal change. Uncle Tom's Cabin mattered because the North had enough people to politically dominate the country, and because the South delegitimized itself in Northern eyes by seceding.

In many cases politically motivated fiction that successfully radicalized those who consumed it probably did very little:

For example, climate change fiction will only matter in the long run if it weakens the power of the blocking coalition (since the supporting coalition will act whenever it is in power anyway). And it doesn't seem to weaken the blocking coalition directly.

Possibly, despite the failure of the broader coalition, cli-fi books are actually making an important contribution by intensifying the salience of climate change in the supportive political coalition. Climate change is viewed as an extremely important issue outside of the US, and one of the political coalitions in the US is dedicated to pushing forward climate policies. An Inconvenient Truth and The Day After Tomorrow and cli-fi novels are plausibly why the US might someday pass strong climate change policies, and why California and Germany already have.

In cases where the goal is for small numbers of people to engage in intense efforts, by donating substantial amounts of money or by changing their professional plans, radicalizing a few people is probably more valuable than convincing a majority to vote differently. Democratic political majorities require broad, but shallow, agreement. Deep engagement by narrow communities might improve AI safety norms, expand the use of randomized control trials in global poverty reduction research, or fund charities that give poor American inmates bail money.

Finally: Highly successful changes can take a long time to become real.

The laws against debtor's prisons and child labor that Charles Dickens promoted were passed over decades. The scientists who attribute their interest in robotics to Isaac Asimov's stories only started making substantial progress decades after the first stories were published. To the extent that Ayn Rand's novels have led to any concrete policy changes, it has taken a long time, and those policy changes were not large.

Rick Busselle & Helena Bilandzic (2009) Measuring Narrative Engagement, Media Psychology, 12:4, 321-347

Rick W. Busselle & L. J. Shrum (2003) Media Exposure and Exemplar Accessibility, Media Psychology, 5:3, 255-282

Jenny Kitzinger (2009). Questioning the sci-fi alibi: a critique of how 'science fiction fears' are used to explain away public concerns about risk. Journal of Risk Research.

Paul Musgrave, J. Furman Daniel. Working Paper on Fiction and International Relations Theory

Schneider-Mayerson, M. (2018). The Influence of Climate Fiction: An Empirical Survey of Readers. Environmental Humanities, 10(2), 473–500.

Schneider-Mayerson, M. (2020). "Just as in the Book"? The Influence of Literature on Readers' Awareness of Climate Injustice and Perception of Climate Migrants. ISLE: Interdisciplinary Studies in Literature and Environment.

Eman Mosharafa (2015). All you Need to Know About: The Cultivation Theory. www.researchgate.net/publication/337077784_All_you_Need_to_Know_About_The_Cultivation_Theory

Rodger A. Payne, Popular Culture and Public Deliberation about Torture

Hormes JM, Rozin P, Green MC and Fincher K (2013) Reading a book can change your mind, but only some changes last for a year: food attitude changes in readers of The Omnivore's Dilemma. Front. Psychol. 4:778. doi: 10.3389/fpsyg.2013.00778 www.frontiersin.org/articles/10.3389/fpsyg.2013.00778/full

Kirkwood, P. M. (2012). The Impact of Fiction on Public Debate in Late Victorian Britain: The Battle of Dorking and the "Lost Career" of Sir George Tomkyns Chesney (Fall 2012). Graduate History Review.

https://www.vox.com/2014/9/1/5998571/why-anti-drug-campaigns-like-dare-fail

https://www.huffpost.com/entry/designated-driver-campaig_b_405249




Do you vote based on what you think total karma should be?

Published on August 24, 2020 1:37 PM GMT

I recently strong-downvoted a post that I would have weak-upvoted if it had been at a lower karma. In general, I usually vote primarily based on what I think the total karma should be. I'm curious whether other people do similar things.

This is both a question and a poll. The poll is in the comments; it works via upvotes but there is a karma balance comment. (Note that one can recover the non-weighted results (i.e., number of votes) by hovering one's mouse over the current score.) This is about votes on LessWrong only.

I'm also wondering whether this behavior is, in some sense, anti-virtuous. If everyone votes based on what they think the total karma should be, then a post's karma reflects [a weighted average of opinions on what the post's total karma should be] rather than [a weighted average of opinions on the post]. This feels worse, though I'm not entirely sure that it is.
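
Here's a toy simulation to make the difference concrete (the voter model and all the numbers are invented for illustration): voters either vote their own opinion independently, or vote to nudge the running total toward their personal target.

```python
import random

random.seed(1)
targets = [random.gauss(20, 10) for _ in range(100)]  # each voter's ideal total
weight = 2                                            # karma per vote

# Regime 1: everyone votes their own opinion (up if they like it at all).
independent = sum(weight if t > 0 else -weight for t in targets)

# Regime 2: each voter, arriving in random order, casts one vote that
# nudges the running total toward their own target.
karma = 0
for t in random.sample(targets, len(targets)):
    if karma < t:
        karma += weight
    elif karma > t:
        karma -= weight

print("independent-opinion total:", independent)
print("target-based total:       ", karma)
```

In the first regime the total scales with how many people liked the post; in the second it hovers near a typical voter's target, which is a genuinely different quantity.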

Meta: There's been some speculation about this (maybe read after voting), but nothing conclusive.




Is 'satisficing' optimisation?

Published on August 24, 2020 11:51 AM GMT

It seems to me that the behavioural science research around 'the trade-off between maximising and satisficing' in terms of well-being suggests that human beings are 'optimisers' only sometimes. It also seems to suggest that there can be alternative and better approaches to optimisation. Is this true? Or is 'satisficing' just a hidden multi-objective optimisation that factors in time and effort costs?

See what triggered my question: A decision-maker’s dilemma: search for the best option or settle for ‘good enough’?
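
As a toy illustration of that last framing (my own sketch; the numbers are arbitrary): once each option costs time and effort to evaluate, a satisficer's stopping rule can come out ahead of exhaustive maximisation.

```python
import random

random.seed(0)

def trial(n_options=50, search_cost=0.02, threshold=0.9):
    options = [random.random() for _ in range(n_options)]

    # Maximiser: evaluates every option, pays the full search cost.
    max_net = max(options) - search_cost * n_options

    # Satisficer: stops at the first option above the threshold.
    net = None
    for i, v in enumerate(options, start=1):
        if v >= threshold:
            net = v - search_cost * i
            break
    if net is None:  # nothing was good enough: take the best seen
        net = max(options) - search_cost * n_options
    return max_net, net

results = [trial() for _ in range(10_000)]
print("maximiser mean net utility: ", sum(r[0] for r in results) / len(results))
print("satisficer mean net utility:", sum(r[1] for r in results) / len(results))
```

Under these (made-up) costs the satisficer wins comfortably, which is at least consistent with reading satisficing as optimisation over a richer objective.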




Learning human preferences: black-box, white-box, and labelled white-box access

Published on August 24, 2020 11:42 AM GMT

This post is inspired by system identification; however, I'm not an expert in that domain, so any corrections or inspirations on that front are welcome.

I want to thank Rebecca Gorman for her help and ideas on this topic.

Knowing an agent

This is an agent:

Fig. 1

We want to know about its internal mechanisms, its software. But there are several things we could mean by that.

Black-box

First of all, we might be interested in knowing its input-output behaviour. I've called this its policy in previous posts, a full map that will allow us to predict its output in any circumstances:

Fig. 2

I'll call this black-box knowledge of the agent's internals.

White-box

We might be interested in knowing more about what's actually going on in the agent's algorithm, not just the outputs. I'll call this white-box knowledge; we would be interested in something like this (along with a detailed understanding of the internals of the various modules):

Fig. 3

Labelled white-box

And, finally, we might be interested in knowing what the internal modules actually do, or actually mean. This is the semantics of the algorithm, resulting in something like this:

Fig. 4

The "beliefs", "preferences", and "action selectors" are labels that explain what these modules are doing.

If we know those, I'd call it labelled white-box knowledge.

Levels of access

We can have different levels of access to the agent. For example, we might be able to run it inside any environment, but not pry it open; hence we know its full input-output behaviour. This would give us (full) black-box access to the agent (partial black-box access would be knowing some of its behaviour, but not in all situations).

Or we might be able to follow its internal structure. This gives us white-box access to the agent. Hence we know its algorithm.

Or, finally, we might have a full labelled diagram of the whole agent. This gives us labelled-white-box access to the agent (the term is my own).

Things can get more complicated, of course. We could have access to only parts of the agent/structure/labels. Or we could have a mix of different types of access - grey-box seems to be the term for something between black-box and white-box.

Humans seem to have a mixture of black-box and labelled-white-box access to each other - we can observe each other's behaviour, and we have our internal theory of mind that provides information like "if someone freezes up on a public speaking stage, they're probably filled with fear".

Access and knowledge

Complete access at one level gives complete knowledge at that level. So, if you have complete black-box access to the agent, you have complete black-box knowledge: you could, at least in principle, compute every input-output map just by running the agent.

So the interesting theoretical challenges are those that involve having access at one level and trying to infer a higher level, or having partial access at one or multiple levels and trying to infer full knowledge.

Multiple white boxes for a single black box

Black-box and white-box identification have been studied somewhat extensively in system identification. One fact remains true: there are multiple white-box interpretations of the same black-box access.

We can have the "angels pushing particles to resemble general relativity" situation. We can add useless epicycles, which do nothing, to the model of the white-box; this gives us a more complicated white-box with identical black-box behaviour. Or you could have the matrix mechanics vs wave mechanics situation in quantum mechanics, where two very different formulations were shown to be equivalent.

There are multiple ways of choosing among equivalent white-box models. In system identification, the criterion seems to be "go with what works": the model is to be identified for a specific purpose (for example, to enable control of a system) and that purpose gives criteria that will select the right kind of model. For example, linear regression will work in many rough-and-ready circumstances, while it would be stupid to use it for calibrating sensitive particle detectors when much better models are available. Different problems have different trade-offs.

Another approach is the so-called "grey-box" approach, where a class of models is selected in advance, and this class is updated with the black-box data. Here the investigator is making "modelling assumptions" that cut down on the possible space of white-box models to consider.

Finally, in this community and among some philosophers, algorithmic simplicity is seen as a good and principled way of deciding between equivalent white-box models.

Multiple labels for one white-box

A similar issue happens again at a higher level: there are multiple ways of assigning labels to the same white-box system. Take the model in figure 4, and erase all the labels (hence giving us figure 3). Now reassign those labels; there are multiple ways we could label the modules, and still have the same structure as figure 4:

Fig. 5

We might object, at this point, insisting that labels like "beliefs" and "preferences" be assigned to modules for a reason, not just because the structure is correct. But having a good reason to assign those labels is precisely the challenge.

We'll look more into that issue in future sections, but here I should point out that if we consider the labels as purely syntactic, then we can assign any label to anything:

Fig. 6

What's "Tuna"? Whatever we want it to be.

And since we haven't defined the modules or said anything about their size and roles, we can decompose the interior of the modules and assign labels in completely different ways:

Fig. 7
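
To make the relabelling ambiguity concrete, here's a toy sketch (mine, not from system identification): a trivial agent whose two internal modules can swap the labels "beliefs" and "preferences" without changing the input-output behaviour at all.

```python
# Toy white-box agent: two internal modules feed an action selector.
# Behaviour alone cannot tell us which module deserves which label.

def module_a(obs):
    return obs % 3

def module_b(obs):
    return obs // 3

def policy(obs):
    return module_a(obs) + module_b(obs)   # the action selector

labelling_1 = {"beliefs": module_a, "preferences": module_b}
labelling_2 = {"beliefs": module_b, "preferences": module_a}  # same graph

# Both labellings are consistent with every observation we could make:
assert all(
    labelling_1["beliefs"](o) + labelling_1["preferences"](o) == policy(o)
    and labelling_2["beliefs"](o) + labelling_2["preferences"](o) == policy(o)
    for o in range(100)
)
```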

Normative assumptions, labels, and structural assumptions

We need to do better than that. The paper "Occam’s razor is insufficient to infer the preferences of irrational agents" talked about "normative assumptions": assumptions about the values (or the biases) of the agent.

In this more general setting, I'll refer to them as "structural assumptions", as they can refer to beliefs, or other features of the internal structure and labels of the agent.

Almost trivial structural assumptions

These structural assumptions can be almost trivial; for example, saying "beliefs and preferences update from knowledge, and update the action selector" is enough to rule out figures 6 and 7. This is equivalent to starting with figure 4, erasing the labels, and wanting to reassign labels to the algorithm while ensuring the graph is isomorphic to figure 4. Hence we have a "desired graph" that we want to fit our algorithm into.

What the Occam's razor paper shows is that we can't get good results from "desired graph + simplicity assumptions". This is unlike the black-box to white-box transition, where simplicity assumptions are very effective on their own.

Figure 5 demonstrated that above: the belief and preference modules can be labelled as each other, and we still get the same desired graph. Even worse, since we still haven't specified anything about the size of these modules, the following label assignment is also possible. Here, the belief and preference "modules" have been reduced to mere conduits that pass on the information to the action selector, which has expanded to gobble up all of the rest of the agent.

Fig. 8

Note that this decomposition is simpler than a "reasonable" version of figure 4, since the boundaries between the three modules don't need to be specified. Hence algorithmic simplicity will tend to select these degenerate structures more often. Note this is almost exactly the "indifferent planner" of the Occam's razor paper, one of the three simple degenerate structures. The other two - the greedy and anti-greedy planners - are situations where the "Preferences" module has expanded to full size, with the action selector reduced to a small appendage.

Adding semantics or "thick" concepts

To avoid those problems, we need to flesh out the concepts of "beliefs", "preferences[1]", and so on. The more structural assumptions we put on these concepts, the more we can avoid degenerate labelled white-box solutions[2].

So we want something closer to our understanding of preferences and beliefs. For example, preferences are supposed to change much more slowly than beliefs. So the impact of observations on the preference module - in an information-theoretic sense, maybe - would be much lower than on the belief module, or at least much slower. Adding that as a structural assumption cuts down on the number of possible labelled white-box solutions.
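
Here's a sketch of one way that assumption could be operationalized (the module names and update rules below are invented for illustration, not taken from any real method): log the internal state variables of an unlabelled agent, and label the slow-changing one "preferences".

```python
import random

def change_rate(states):
    """Fraction of timesteps on which a module's state changed."""
    return sum(a != b for a, b in zip(states, states[1:])) / (len(states) - 1)

def label_modules(traces):
    rates = {name: change_rate(states) for name, states in traces.items()}
    return {
        "beliefs": max(rates, key=rates.get),      # updates on most observations
        "preferences": min(rates, key=rates.get),  # updates only rarely
    }

# A toy agent trace: module m1 tracks every observation; m2 drifts rarely.
random.seed(0)
m1, m2, slow_state = [], [], 0
for _ in range(500):
    obs = random.randrange(5)
    if random.random() < 0.02:
        slow_state = obs
    m1.append(obs)
    m2.append(slow_state)

print(label_modules({"m1": m1, "m2": m2}))
# -> {'beliefs': 'm1', 'preferences': 'm2'}
```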

And if we are dealing with humans, trying to figure out their preferences - which is my grand project at this time - then we can add a lot of other structural assumptions. "Situation X is one that updates preferences"; "this behaviour shows a bias"; "sudden updates in preferences are accompanied by large personal crises"; "red faces and shouting denote anger", etc...

Basically any judgement we can make about human preferences can be used, if added explicitly, to restrict the space of possible labelled white-box solutions. But these need to be added in explicitly at some level, not just deduced from observations (i.e. supervised, not unsupervised, learning), since observations can only get you as far as white-box knowledge.

Note the similarity with semantically thick concepts and with my own post on getting semantics empirically. Basically, we want an understanding of "preferences" that is so rich that only something that is clearly a "preference" can fit the model.

In the optimistic scenario, a few such structural assumptions are enough to enable an algorithm to quickly grasp human theory of mind and quickly sort our brain into plausible modules, and hence isolate our preferences. In the pessimistic scenario, theory of mind, preferences, beliefs, and biases are all so twisted together that even extensive examples are not enough to decompose them. See more in this post.

  1. We might object to the arrow from observations to "preferences": preferences are not supposed to change, at least for ideal agents. But many agents are far from ideal (including humans); we don't want the whole method to fail because there was a stray bit of code or neuron going in one direction, or because two modules reused the same code or the same memory space. ↩︎

  2. Note that I don't give a rigid distinction between syntax and semantics/meaning/"ground truth". As we accumulate more and more syntactical restrictions, the number of plausible semantic structures plunges. ↩︎




"Good judgement" and its components

Published on August 24, 2020 10:32 AM GMT

Epistemic status: Sharing a personal ontology I've found useful. I expect most claims to be fairly uncontroversial, but maybe the perspective will be interesting/clarifying for some readers. (Cross-posted, except for this paragraph, from the EA forum.)

Good judgement is about mental processes which tend to lead to good decisions. (I think good decision-making is centrally important for longtermist EA, for reasons I won't get into here.) Judgement has two major ingredients: understanding of the world, and heuristics.

Understanding of the world helps you make better predictions about how things are in the world now, what trajectories they are on (so how they will be at future points), and how different actions might have different effects on that. This is important for helping you explicitly think things through. There are a number of sub-skills, like model-building, having calibrated estimates, and just knowing relevant facts. Sometimes understanding is held in terms of implicit predictions (perhaps based on experience). How good someone's understanding of the world is can vary a lot by domain, but some of the sub-skills are transferrable across domains.

You can improve your understanding of the world by learning foundational facts about important domains, and by practicing skills like model-building and forecasting. You can also improve understanding of a domain by importing models from other people, although you may face challenges of being uncertain how much to trust their models. (One way that models can be useful without requiring any trust is giving you clues about where to look in building up your own models.)

Heuristics are rules of thumb that you apply to decisions. They are usually held implicitly rather than in a fully explicit form. They make statements about what properties of decisions are good, without trying to provide a full causal model for why that type of decision is good. Some heuristics are fairly general (e.g. "avoid doing sketchy things"), and some apply to specific domains (e.g. "when hiring programmers, put a lot of weight on the coding tests").

You can improve your heuristics by paying attention to your experience of what worked well or poorly for you. Experience might cause you to generate new candidate heuristics (explicitly or implicitly) and hold them as hypotheses to be tested further. They can also be learned socially, transmitted from other people. (Hopefully they were grounded in experience at some point. Learning can be much more efficient if we allow the transmission of heuristics between people, but if you don't require people to have any grounding in their own experience or cases they've directly heard about, it's possible for heuristics to be propagated without regard for whether they're still useful, or if the underlying circumstances have changed enough that they shouldn't be applied. Navigating this tension is an interesting problem in social epistemology.)

One of the reasons that it's often good to spend time with people with good judgement is that you can make observations of their heuristics in action. Learning heuristics is difficult from writing, since there is a lot of subtlety about the boundaries of when they're applicable, or how much weight to put on them. To learn from other people (rather than your own experience) it's often best to get a chance to interrogate decisions that were a bit surprising or didn't quite make sense to you. It can also be extremely helpful to get feedback on your own decisions, in circumstances where the person giving feedback has high enough context that they can meaningfully bring their heuristics to bear.

Good judgement generally wants a blend of understanding the world and heuristics. Going just with heuristics makes it hard to project out and think about scenarios which are different from ones you've historically faced. But our ability to calculate out consequences is limited, and some forms of knowledge are more efficiently incorporated into decision-making as heuristics rather than understanding about the world.

One kind of judgement which is important is meta-level judgement about how much weight to put on different perspectives. Say you are deciding whether to publish an advert which you think will make a good impression on people and bring users to your product, but contains a minor inaccuracy which would require much more awkward wording to avoid. You might bring to bear the following perspectives:

A) The heuristic "don't lie"
B) The heuristic "have snappy adverts"
C) The implicit model which is your gut prediction of what will happen if you publish
D) The explicit model about what will happen that you drew up in a spreadsheet
E) The advice of your partner
F) The advice of a professional marketer you talked to

Each of these has something legitimate to contribute. The choice of how to reach a decision is a judgement, which I think is usually made by choosing how much weight to put on the different perspectives in this circumstance (including sometimes just letting one perspective dominate). These weights might in turn be informed by your understanding of the world (e.g. "marketers should know about this stuff"), and also by your own experience ("wow, my partner always seems to give good advice on these kinds of tricky situations").

I think that almost always the choice of these weights is a heuristic (and that the weights themselves are generally implicit rather than explicit). You could develop understanding of the world that specifies how much to trust the different perspectives, but as boundedly rational actors, at some point we have to get off the understanding train and use heuristics as shortcuts (to decide when to spend longer thinking about things, when to wrap things up, when to make an explicit model, etc.).

Overall I hope that people can develop good object-level judgement in a number of important domains (strategic questions seem particularly tricky+important, but judgement about technical domains like AI, and procedural domains like how to run organisations also seem very strongly desirable; I suspect there's a long list of domains I'd think are moderately important). I also hope we can develop (and support people to develop) good meta-level judgement. When decision-makers have good meta-level judgement this can act as a force-multiplier on the presence of the best accessible object-level judgement in the epistemic system. It can also add a kind of robustness, making badly damaging mistakes quite a lot less likely.




Split-a-Dollar Game

Published on August 24, 2020 4:54 AM GMT

While researching the topic of distributing government seats in Switzerland, I chanced on the so-called split-the-dollar game, which is a kind of simplified, game-theoretic model of the process. Given that I had never heard of the game despite having some basic knowledge of game theory, it may be worth making a short blog post about it.

In the game, three players are asked to vote on splitting a dollar. The game is iterative; it has multiple rounds. At the beginning, a third of a dollar may be assigned to each player, and the players unanimously vote for the arrangement.

However, the arrangement is not stable. It's a game about forming coalitions. Two players can conspire and vote in an arrangement where they split the dollar evenly between themselves and give nothing to the third player.

But even then the arrangement isn't stable. Now the third player can approach one of the two coalition parties and propose a better deal: instead of 50 cents they would get 75 cents, and the proposing party would content itself with just 25 cents. Their current coalition partner would get nothing. The party is incentivized to accept not only out of sheer greed, but also because it is aware that if it rejected the proposal, the proposing party could make the same offer to its coalition partner and thus lock it out of the coalition.

But even now, with two players sharing the dollar in 75:25 ratio, the arrangement is not stable. The senior partner can blackmail the junior partner by threatening to make a coalition with the opposition party. The junior partner, in their turn, can do exactly the same thing.

As can be seen, any arrangement is, in principle, unstable. The negotiation game can go on forever without reaching a stable state.
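
A few lines of code make the instability vivid (a toy sketch of the game, mine rather than anything from the literature): whatever the current allocation, some two-player majority strictly prefers another allocation, so proposals can cycle forever.

```python
def beats(new, old):
    """True if a strict majority (2 of 3 players) prefers `new` to `old`."""
    return sum(n > o for n, o in zip(new, old)) >= 2

def dominating_proposal(alloc):
    """Cut out the richest player and split their share between the others."""
    richest = alloc.index(max(alloc))
    others = [i for i in range(3) if i != richest]
    gain = alloc[richest]
    new = list(alloc)
    new[richest] = 0
    new[others[0]] += gain - gain // 2
    new[others[1]] += gain // 2
    return new

alloc = [34, 33, 33]                 # cents; start near the even split
for _ in range(4):
    proposal = dominating_proposal(alloc)
    assert beats(proposal, alloc)    # a winning coalition always exists...
    print(alloc, "->", proposal)
    alloc = proposal                 # ...so the proposals never settle down
```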

We can see the game in action in the case of Switzerland. In 1953, despite FDP (liberals), CVP (christian democrats) and SP (social democrats) getting very similar numbers of votes, the seats in government were distributed among them in a 4:2:0 ratio. (There was also one seat going to SVP, which matched their election result and is not relevant to this story.)

The interesting aspect here is that, unlike other countries with the same problem, Switzerland has been able - after more than 100 years of playing the game - to reach a stable state. It did so by adopting a cultural norm, the so-called "Magic Formula", which splits the seats in government in a fashion proportional to election results.

Note that the game still doesn't have a stable strategy. The problem wasn't solved within the game. It was solved on the meta-level, by parties agreeing to respect the voting results, internalizing that respect and punishing those who don't behave.

Also note that this hasn't in fact solved the problem, but moved it, commendably, to a different level. Today the split-the-dollar game isn't played among parliamentarians forming a government, but rather among members of government voting on practical issues.


Investment is a useful societal mechanism for getting new things made. Stock trading shares some functionality with investment, but seems very, very inefficient at that?

Published on August 24, 2020 1:18 AM GMT

Jack Bogle, the creator of the first index fund, says

The stock market has nothing—n-o-t-h-i-n-g—to do with the allocation of capital. All it means is that if you’re buying General Motors stock, say, someone else is selling it to you. Capital isn’t allocated—the ownership just changes. I may be an investor, you may be a speculator. But no capital goes anywhere. This is basically a closed system. You have new IPOs and whatnot, but they’re very small compared to this vast thing we call a market

My response to this has always been... if that's true, what is the point in all of this? It's a mechanism that predicts the success of companies, but plays only a very small role in investment? Could we get that money to do something better, then?




Empathetic vs Intrinsic Oof

Published on August 24, 2020 12:12 AM GMT

Crossposted from Figuring Figuring.

Some people are empathetic oofers, and some people are intrinsic oofers. (“Oof” as in the sound one makes when told a story about someone’s unintentional, but unfortunate gaffe.) Empathetic oofers, like myself, only experience awkwardness if they think someone else around them might be feeling awkward. Intrinsic oofers can feel awkward, or cringe, or other related things, even if they know for sure nobody else around them feels that way. 

Empathetic oofers do this thing that drives intrinsic oofers crazy. The way we deal with awkward situations is to create common knowledge about how we feel about the awkwardness-inducing situation, and to ask others to do the same. An overly simplified model explains why that strategy makes sense if everybody in the room is an empathetic oofer. 

If Alice and Bob are the only ones in the room, and they are both empathetic oofers, and they both know that the other is an empathetic oofer, but they do not know that they both know, then they will both feel awkward. 

Alice will know that Bob will only feel awkward if he does not know that Alice does not feel awkward, but she does not know if Bob knows that Alice is also an empathetic oofer, so she does not know that Bob does not feel awkward, and so she in fact feels awkward. The same argument switching “Alice” and “Bob” shows that Bob feels awkward. 

Now suppose that Alice and Bob are the only ones in the room, they both know that the other is an empathetic oofer, and they both know that they both know. Like before, Alice knows that Bob will only feel awkward if he does not know that Alice does not feel awkward, but she also knows that Bob knows that Alice will not feel awkward unless she does not know that Bob does not feel awkward, so she doesn’t have to worry about Bob worrying that she might feel awkward for some other reason, but since they don’t have common knowledge, she doesn’t know if Bob knows that he doesn’t have to worry about that! This means she doesn’t know that Bob doesn’t feel awkward, and so she feels awkward. 

This reasoning gets hard to write about in English, but I promise the recursion goes all the way up the chain if they have common knowledge, and ends up bottoming out in neither of them feeling awkward, and both of them knowing that neither of them feel awkward, and knowing that they know, etc.
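For those who prefer code to English, here is a minimal sketch of the same recursion, assuming the simplified model above:

def feels_awkward(shared_knowledge_levels):
    """An empathetic oofer feels awkward unless they can rule out the
    other's awkwardness, and ruling it out costs one level of shared
    knowledge about their mutual empathetic-ooferhood."""
    if shared_knowledge_levels == 0:
        return True  # no idea what the other knows, so can't rule it out
    return feels_awkward(shared_knowledge_levels - 1)

# Any finite tower of "I know that you know that..." still bottoms out in oof:
assert all(feels_awkward(depth) for depth in range(20))
# Only the common-knowledge fixed point ("nobody feels awkward, and we both
# know it, all the way up") is stable, which is what the verbal ritual below
# tries to establish.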

This is why empathetic oofers are tempted to say things like:

“Oh hey, you know, I’m worried you might feel awkward about the thing. I don’t feel awkward about it exactly, except insofar as I am worried that you feel awkward about it.” 

If their interlocutor is also an empathetic oofer, they will probably reply something like: 

“Oh thank god! I was worried that you felt awkward about the thing! I don’t feel awkward about the thing… that is… unless you actually secretly do…”

In real life, this doesn’t actually go all the way through to establishing all of the infinite steps of recursion required for common knowledge, but that’s probably fine, because I can barely keep three levels straight in my head, and I can’t write well about more than one.

I swear to you, dear intrinsically oofing reader, that among my fellow empathetic oofers, this trick actually works. We actually get to just stop feeling awkward after assuring each other that we are both empathetic oofers and that we both know that we both are. In fact, as empathetic oofer friends become more confident in their common knowledge that they are both empathetic oofers, the verbal exchanges become less and less important, and they get more and more comfy doing or saying absurdly awkwardness-inducing things in front of each other. 

Now, I imagine, although I cannot know for sure, that this seems totally insane to intrinsic oofers. My intrinsic oofer friends do not feel awkward because they are worried that I might feel awkward; they feel awkward because of that extremely awkward thing I just said! 

How could me saying: 

“Hey, I’m worried that you might feel awkward about this awkward thing I just said.” 

possibly help in this situation? Why would I call attention to it? Why wouldn't I just let it slowly fade from our memories like a sane person? 

Well, this is why: creating common knowledge really does get rid of awkwardness if your interlocutor is also an empathetic oofer, and empathetic oofers exist. We also tend to disproportionately hang out with each other, because it means we get to use this one crazy trick to bypass 90% of the discomfort of hanging out.

Before I had this model, I didn’t really understand why this trick sometimes completely backfires. So I hope that my empathetic oofer friends will get some understanding out of it as well. This is why some people conclude you must be insane when you try to make common knowledge about how everyone feels about the awkwardness-inducing thing that just happened.

And of course, intrinsic oofers can seem insane to empathetic oofers as well. Think of the annoyance you might experience watching an episode of Seinfeld or some other sitcom, in which a whole unfortunate kerfuffle could have been completely avoided if only one person had mustered the courage to talk openly about the thing. Now imagine that one of the characters was about to work up the nerve to shout out that the emperor has no clothes, but another character stopped them, or worse, covered their ears and started shouting “lalalala”. That’s what intrinsic oofers seem like to empathetic oofers ALL THE TIME.

But remember, my fellow empathetic oofers, the reason your intrinsically oofing friend is covering their ears is not that they are cowards or that they love feeling awkward, it’s that our one crazy trick genuinely does not work for them. In fact, it makes them feel even more awkward. Their best strategy for dealing with awkwardness is (I think) trying to not pay it too much attention, and by trying to create common knowledge, you are making that impossible.

I’m not sure how real this taxonomy is, and of course everybody is a bit of each, but I can definitely tell which of my close friends is which, or at least it seems like I can. Maybe a more fine-grained way of talking about things would have different people experiencing empathetic vs intrinsic oof about different things, or in different circumstances.

I don’t have any good advice for how to bridge the gap yet, but realizing that there is a gap, and that some people really are on the other side of it, is usually a good first step. Making common knowledge about which side of the gap you are on might be a good move too… at least if you’re an empathetic oofer, that is.



Discuss

Charting Is Mostly Superstition

August 23, 2020 - 23:26
Published on August 23, 2020 8:26 PM GMT

Part 6 of the Inefficient Markets sequence.

[I am not your financial advisor! For educational purposes only. Read part 1 first.]

The technical analyst reads the signs and portents and divines the future prices. Three black crows are a bad omen, don'tcha know. An abandoned baby is a sign of changes coming. The morning star doji means the fates are smiling.

Candlestick patterns! Trendlines! MACDs! Bollinger Bands! Stochastics! There are dozens and dozens of these so-called "technical indicators". Your stockbroker's trading software is probably full of them. They all sound very arcane and financial. Many of them have tunable parameters.

Do you know what the right parameters are? Do you know what the signs and portents mean?

Neither do they.

Maybe some of this stuff used to work. Some of it may be self-fulfilling prophecy. (Ichimoku charts only work in Japan for some reason.)

Maybe if you stare at enough charts, memorize enough patterns, some day it will all click.

Some traders seem to think that way. They may even think they're making progress. I'd call it confirmation bias.

There is something to some of this, but you're not quantifying it. What parameters? Why? How big is your edge?

Stop thinking like a diviner, and start thinking like a data scientist.

How to Chart Like a Quant

I'm not going to say that you never look at a price chart (although I have heard that said); it does tell you some things at a glance.

But as trading styles go, collecting directional alpha in a single asset is hard mode. And people think they can do it using a gut feel for signs and portends?

Really?

We kind of have to start with a single asset, because it's hard to explain how to chart a pair without explaining how to chart a single asset first, but we do want to be able to compare the relative performance of assets later on.

Share price is not the same as market cap. It's as much about how many shares were issued and how often the stock split as about how well the company performed.

Comparing two assets on share price is apples-to-oranges. It's just useless for our purposes.

Even comparing the recent prices to past prices is not that helpful. Is Apple the same company it was in the '80s? Different market, different CEO, different employees (mostly). And different share price.

If you want to make money, you don't care so much about where it was, but how it moves.

Think velocity and acceleration, not position. We don't care about where the price is, but about how it moves.

We don't have to do differential calculus, but that's kind of the idea. Because price history is not continuous (there have been a finite number of trades in the history of an asset), discrete methods will do.

From Adjusted Close to Daily Returns

All right. Hands-on learning time. We're going to transform and visualize some market data, but to do that, we first need some data.

There are many sources of market data. We want one with a daily adjusted close. That's the close price in terms of the current number of shares, accounting for any splits and dividends.

Honestly, once you know how, it's easier to do data science in a Jupyter Notebook with Python (or in R, or J, among others), especially when using very large data sets. I would be using Python for my own analysis, but a spreadsheet is a little more accessible, so I'll start there to explain how it's done.

First, get the data. For example, go to https://finance.yahoo.com (or wherever you can get adjusted close data) and look up a big tech company, like AAPL (Apple Inc.)

Find the "Historical Data", set the time period to get at least ten years worth. with "daily" resolution. Find the "download" link to get the data as a .csv file.

Do the same for a big index ETF, like SPY (SPDR S&P 500 ETF Trust).
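(If you'd rather follow along in Python, here's a minimal sketch; the filenames are my own choice for the downloaded files:)

import pandas as pd

# Assumes the Yahoo downloads were saved as AAPL.csv and SPY.csv.
aapl = pd.read_csv("AAPL.csv", parse_dates=["Date"], index_col="Date")
spy = pd.read_csv("SPY.csv", parse_dates=["Date"], index_col="Date")
print(aapl[["Adj Close"]].head())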

I'll be using the free and open-source Gnumeric, but any good spreadsheet application (Google Sheets, Microsoft Excel, etc.) can do this stuff.

You should see a table that looks something like this, with a lot more rows.

Date        Open       High       Low        Close      Adj Close  Volume
2010-08-20  35.627144  36.274284  35.57143   35.662857  30.817675  96057500
2010-08-23  35.970001  36         35.035713  35.114285  30.343628  103510400

Select the "Date" and "Adj Close" columns, and find the button to plot a chart. You should get something like this:

This is basically what your broker would give you. This is not what we want. We want something like the first derivative of this (think velocity, not position), but it's discrete data, so we'll use finite differences.

In the next open column, add a "differences" header. Then enter the formula to get the difference in adjusted close from the previous day =F3-F2 in cell H3, in this example. Double click the fill handle to fill the formula into the rest of the column. This is the profit or loss for holding one (adjusted) share for one day. We'll be adding several columns. It's probably a good idea to save your work after each column. But the .csv format doesn't store the formulas, so save using your spreadsheet's native format (.gnumetric in this example).

Select the "differences" column and plot a chart. You should get something like this:

Notice that the differences chart looks more like a stationary process than the price chart did. This should make it easier to compare market behavior over different time frames. But notice that we still have a problem: the earlier times have a much lower variance than the later times. This isn't because the volatility increased; it's because the share price increased. But, except for granularity, the number of dollars you can put in doesn't depend on the share price.

In the next open column, add a "returns" header. Then scale the difference by the previous day's price to get the percentage return. In this example, you would enter the formula =H3/F2 in the cell I3, and fill down. Then select the returns column and change its display style to percentage. This is the percentage profit or loss you would get for holding a portfolio of this asset for one day, regardless of how many shares.

Now plot the chart for "returns". You should get something like this:

Notice that the process looks very stationary now. You can more easily compare the behavior of the asset at different times, and also between assets by charting them this way.

From Daily Returns to Distribution Histograms

For a typical stock, this chart will look very noisy.

We can get a better handle on "random" data by using the usual statistical methods. Try computing some summary statistics. Note the skewness and variance. Your spreadsheet should have functions for these.

The geometric mean of returns is a more useful statistic than the arithmetic mean. But this is not the right form to compute it. In the next open column, add a "factors" header. Add one to the returns. In this example, you would use the formula =1+I3 in the cell J3, and fill down. Multiplying that factor by your portfolio value yields the portfolio value for the next day. If the geometric mean of these factors is over 1, then you have a positive expectancy for buy-and-hold.
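Continuing the pandas sketch (the log trick is just a numerically convenient way to take a geometric mean):

import numpy as np

aapl["factors"] = 1 + aapl["returns"]
# Geometric mean of the daily growth factors; above 1.0 means positive
# buy-and-hold expectancy over this sample.
geo_mean = np.exp(np.log(aapl["factors"].dropna()).mean())
print(geo_mean)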

We can also plot the distribution of returns as a histogram. In Gnumeric, you can find this feature in the statistics menu. One-tenth the number of rows is probably a reasonable bin count (Gnumeric calls them "cutoffs"), but the bars can get too thin to see well if you use too many.

You should get something like this:

Notice that it resembles a normal distribution, but it's not perfect. There are extreme outliers, or "fat tails". Most days it doesn't move very much. There's a bit of negative skew.
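In the pandas sketch, the summary statistics and histogram look like this (plotting requires matplotlib installed):

r = aapl["returns"].dropna()
print(r.var(), r.skew(), r.kurtosis())  # fat tails show up as excess kurtosis
r.hist(bins=len(r) // 10)  # roughly one bin per ten rows, as above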

Autocorrelation Effects

One form of exploitable non-randomness that's easy to look for is autocorrelation. This means that if you shift the data by a certain time, you may observe a dependence between the shifted data and the original. In other words, there's a tendency to trend or to revert to the mean. Pure noise doesn't show this property, although, due to chance, the correlation would not be zero.

In the next open column, add a "shifted 1" header. We want to ask the question, was the previous day's return at all predictive of today's?

In this example, you would use the formula =I3 in cell K2, and fill down. Now "shifted 1" should have the same data as "returns" but shifted up by one row.

Select the "returns" and "shifted 1" columns and find "correlation" in the statistics menu. You can also plot the resulting table as columns. We see that there's about a -0.057 correlation for AAPL in this time frame, in other words, a small mean-reversion tendency.

It's small. Is this statistically significant, or what we'd expect from noise? If it's just noise, there's really no reason to expect this to hold in the future. Is it a big enough effect to exploit after paying spread and commissions? These are questions to keep in mind.
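In the pandas sketch, the lag-1 autocorrelation is a single call, and a crude significance check comes almost free (the 1/sqrt(N) band assumes independent noise):

lag1 = r.autocorr(lag=1)  # same as correlating "returns" with "shifted 1"
noise_band = 1 / np.sqrt(len(r))  # rough standard error under pure noise
print(lag1, noise_band)  # effects well outside the band are more interesting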

Let's try this with the index. In a new Gnumeric window, open the index data (SPY in this example).

Copy the columns you added and paste them in the same positions in the new window to apply our formulas to the SPY data.

Compute the correlation table again. Notice anything different?

The correlation is almost -0.15. That's a much stronger effect. Is this significant or due to noise?

Collecting directional alpha in a single asset is hard mode. By using an index ETF, we're looking at a basket of multiple assets instead, and got a much stronger mean-reversion effect.

An edge doesn't have to be all that big to make money over time, if you have the discipline to trade it systematically.

The difference was already that noticeable just by using an ETF that tracks a basket instead of a single stock. Does SPY have the strongest effect? Try others and see! Let me know what you find in the comments below.

But there's no reason to think any of the available ETFs are optimal. You can get even stronger mean-reversion effects with a basket of assets you specifically select and balance to behave that way. Actually finding those is a bit of a trick. And trading them can get a bit more complex. But that's a topic for a later post.

You can also try shifting by different amounts. You can add a "shifted 2" and "shifted 3" column, and so on, and plot the correlations all at once.
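In the pandas sketch, sweeping the lags is a short loop instead of extra columns:

spy_r = spy["Adj Close"].pct_change()
for lag in range(1, 11):  # lags 1 through 10
    print(lag, spy_r.autocorr(lag=lag))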

Conditional Return Distributions

So we've found what appears to be some non-randomness in market behavior. Is it exploitable?

We can plot a histogram to show the returns distribution for a strategy, instead of a single asset.

Let's use the index ETF, because it had a stronger effect. The basic strategy will be to hold only if the previous day's return was negative. Otherwise, our return will be zero, because we're holding cash instead.

In the next open column, add the header "strategy returns". In this example, that's =IF(K3<0, I3, 0) in cell L3, which means "if 'shifted 1' is negative, then the return is as before, otherwise zero."
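The pandas version of the same condition, using spy_r from the sketch above:

# Hold only after a down day; otherwise sit in cash (zero return).
strategy = spy_r.where(spy_r.shift(1) < 0, 0.0)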

Let's plot the histograms together. Select the "returns" and "strategy returns" columns, and find "histogram" in the statistics menu.

You should get something like this:

I've adjusted the colors and zoomed in on the Y-axis a bit. That big spike in magenta goes much higher. The green histogram is the original returns. The magenta is the strategy. The spike is from the zeros, which is the most frequent bin for our strategy. Notice that for the strategy, the right side of the zero line tends to be higher. The green histogram shows the opposite effect. Notice also that we escaped some of the most negative outliers. That's the strategy exploiting the mean-reversion effect.

Plotting conditional histograms like this is a good way to see what effect a condition has on return distributions.

There are many other simple conditions you might try. Does it work better if you use some cutoff more negative than zero? Maybe the effect is bigger, but that would also increase the number of zero returns. Maybe it's worth it, maybe it isn't.

Those signs and portents I mentioned before: how do you know how to use them? How do you know if they're effective? You can make a conditional histogram based on whatever signal, and see what effect it has on returns.

A word of caution. You can stack a few rules to get better distributions, but the more ad-hoc you make your strategy, the more likely you're overfitting to the noise. Plot simple rules independently before you plot the combined strategy. Keep your strategies to just a handful of simple rules.

The idea isn't to optimize your strategy's parameters, as that risks overfitting. Instead, try the obvious rule for the effect you want to exploit, and see if perturbations to the parameters make the effect disappear. If so, it was probably noise. We're not interested in trading any effect so fragile. We'd rather trade what's likely to hold up in the future. Any monkey can optimize a strategy on past data. But we can't trade in the past.

Also try the rule on different time frames. If it works on old data, but not recent data, then maybe the pattern no longer holds. If it inverts frequently, maybe it's just noise.

Let's try some summary statistics for the strategy vs the original. Also make a "strategy factors" column and compute the geometric mean. What do you find? How does it compare? Also look at the variance, and consider leverage. Even though the absolute returns may be lower, is the Sharpe ratio better? If so, we can leverage up to compensate without adding much risk.
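As a sketch of that comparison (annualizing with the usual ~252 trading days, and ignoring the risk-free rate for simplicity):

def ann_sharpe(returns):
    # Mean over volatility, annualized; crude, but fine for comparing.
    return returns.mean() / returns.std() * np.sqrt(252)

print(ann_sharpe(spy_r.dropna()), ann_sharpe(strategy.dropna()))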

Experiment with these and compare notes with each other in the comments below. If any of you are statistics people, I'd like to hear your thoughts about statistical significance, and how we can apply Bayes' rule to this kind of analysis.

If you find a nice effect, check the quant/finance literature on Google Scholar. If someone else has found it too, you can be more confident that you've found a legitimate edge, and you're not just chasing ghosts in the noise.



Discuss

Postel’s Principle as Moral Aphorism

August 23, 2020 - 22:36
Published on August 23, 2020 7:36 PM GMT

[All the usual disclaimers. Wanders dangerously close to moral relativism. Cross-posted from Grand, Unified, Empty.]

I.

Postel’s Principle (also known as the Robustness Principle) is an obscure little guideline somewhat popular among computer programmers, particularly those working on network protocols. The original goes like this:

Be conservative in what you do, be liberal in what you accept from others.

My parents were both computer programmers, as am I, and my first job as a programmer was working on network protocols, so it shouldn’t be too surprising that I ran across this principle a long, long time ago. I suspect I heard it while still a teenager, before finishing high school, but I honestly don’t remember. Suffice to say that it’s been kicking around my brain for a long time.

As a rule of thumb in computer programming, Postel’s Principle has some basic advantages. You should be conservative in what you do because producing output that isn’t strictly compliant with the specification risks other programs being unable to read your data. Conversely, you should be liberal in what you accept because other programs might occasionally produce non-compliant data, and ideally your program should be robust and keep working in the face of data that isn’t quite 100% right.

While in recent years the long-term effects of Postel’s Principle on software ecosystems have led to some pushback, I’m more interested in the fact that Postel’s Principle seems to work just as well as a moral aphorism as it does as a programming guideline. Context matters a lot when reading, so here’s a list of other aphorisms and popular moral phrases to get your brain in the right frame:

  • What would Jesus do?
  • Actions speak louder than words.
  • If you can’t say something nice, don’t say anything at all.
  • Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime.
  • Be conservative in what you do, and liberal in what you expect from others.

II.

I am, by nature, a fairly conservative person. I’m also, whether by nature or past experience, somewhat socially subordinate; I’m usually much happier in a secondary position than in any role of real authority, and my self-image tends to be fairly fragile. The manosphere would happily write me off as a “beta male”, and I’m sure Jordan Peterson would have something weird to say about lobsters and serotonin.

This combination of personality traits makes Postel’s Principle a natural fit for defining my own behaviour. Rather than trying to seriously enforce my own worldview or argue aggressively for my own preferences, I endeavour not to make waves. The more people who like me, the more secure my situation, and the surest way to get people to like me is to follow Postel’s Principle: be conservative in my own actions (or else I might do something they disapprove of or dislike), and be liberal in what I accept from others (being judgemental is a sure way to lose friends).

[People who know me IRL will point out that in fact I am pretty judgemental a lot of the time. But I try and restrict my judginess (judgmentality? judgementalism?) to matters of objective efficiency, where empirical reality will back me up, and avoid any kind of value-based judgement. E.g. I will judge you for being an ineffective, inconsistent feminist, but never for holding or not holding feminist values.]

Unfortunately, of course, the world is a mind-boggling huge place with an annoyingly large number of people, each of whom has their own slightly different set of moral intuitions. There is clearly no set of behaviours I could perform that will satisfy all of them, so I focus on applying Postel’s Principle to the much smaller set of people who are in my “social bubble” (in the pre-COVID sense). If I’m not likely to interact with you soon, or on a regular basis, then I’m relatively free to ignore your opinion.

Talking about the “set” of people on whom to apply Postel’s Principle provides a nice segue into the formal definitions that are implicit in the English aphorism. For my own behaviour, it makes sense to think of it like the intersection operation in set theory, or the universal quantifier in predicate logic: something is only morally permissible for me if it is permissible for all of the people I am likely to interact with regularly. Conversely, of course, the values I must accept without judgment are the union of the values of the people I know; it is morally permissible if it is permissible for any of the people I am likely to interact with regularly.
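To make the set formulation concrete, here is a minimal sketch in Python (the people and their tolerances are invented for illustration):

friends_accept = {
    "alice": {"tea", "wine", "debate"},
    "bob": {"tea", "debate"},
    "carol": {"tea", "wine"},
}

# Conservative in what I do: only behaviours everyone accepts (intersection).
permissible_for_me = set.intersection(*friends_accept.values())  # {"tea"}

# Liberal in what I accept: anything anyone in the circle accepts (union).
accepted_without_judgement = set.union(*friends_accept.values())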

III.

Since the set of actions that are considered morally permissible for me are defined effectively by my social circle, it becomes of some importance to intentionally manage my social circle. It would be untenable to make such different friends and colleagues that the intersection of their acceptable actions shrinks to nothing. In that situation I would be forced to make a choice (since inaction is of course its own kind of action) and jettison one group of friends in order to open up behavioural manoeuvring space again.

Unfortunately, it sometimes happens that people change their moral stances, especially when under pressure from other people who I may not be interacting with directly. Even if I have a stable social circle and behavioural manoeuvring space today, tomorrow one of my friends could decide they’re suddenly a radical Islamist and force a choice on me. While in some sense “difficult”, many of these choices end up being rather easy; I have no interest in radical Islam, and so ultimately how close I was to this friend relative to the rest of my social circle matters only in the very extreme case where they were literally my only acquaintance worth speaking of.

Again unfortunately, it sometimes happens that large groups of people change their moral stances all at once. Memes spread incredibly fast, and a small undercurrent of change can rapidly become a torrent when one person in a position of power or status chooses a side. This sort of situation also forces a choice on me, and often a much more difficult one. Apart from the necessity of weighing and balancing friend groups against each other, there’s also a predictive aspect. If I expect a given moral meme to become dominant over the next decade, it seems prudent to be “on the right side of history” regardless of the present impact on my social circle.

Being forced to choose between two social groups with incompatible moral stances is, unsurprisingly, stressful. Social alienation is a painful process, as any Amish person who has been shunned can attest. However, what may be worse than any clean break is the moment just before, trying to walk the knife edge of barely-overlapping morals in the desperate hope that the centre can hold.

IV. (PostScript)

I wrote this focused mostly on myself. Having finished, I cannot help but wonder how much an approximation of Postel’s Principle guides the moral principles of most people, whether they would acknowledge it or not. Even people who claim to derive their morality from first principles often end up with something surprisingly close to their local social consensus.



Discuss

How to set up foot pedals

August 23, 2020 - 19:40
Published on August 23, 2020 4:40 PM GMT

Let's say you would like to control your computer partly with your feet: how do you go about it? This wasn't something that I was especially interested in before my wrists started hurting, but now I have it set up I like it a lot and I would definitely keep it even if my wrists were completely better. I have three pedals, set for scroll down, scroll up, and "archive this email message and take me to the next unread message." You could possibly use foot controls for different things in different programs, but I haven't gotten into that.

I got a cheap no-name three pedal foot switch for $35 shipped. I suspect it's not the most durable option, but it's cheap enough that if it breaks I can easily replace it. It comes with (useless-to-me) Windows-only software to program it, but out of the box the three foot switches send a, b, and c. I installed the open source tool Karabiner-Elements to remap them. First I needed to tell Karabiner to only affect the foot pedals:

USB vendor id 1452 (0x5AC) is Apple, so I could tell that the foot pedal was the remaining device. Then I needed to configure what I want to happen:

The reason for using open square bracket ([) is that it's the gmail keyboard shortcut for "archive this email message and take me to the next unread message."
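For reference, a rule along these lines can be expressed in Karabiner's JSON config (a sketch from memory rather than my exact setup: the vendor_id/product_id values are placeholders you'd read off Karabiner's Devices tab, the wheel speed is arbitrary, and the sign convention for vertical_wheel may need flipping):

{
  "description": "Foot pedals: a = scroll down, b = scroll up, c = Gmail archive ([). IDs below are placeholders.",
  "manipulators": [
    {
      "type": "basic",
      "from": { "key_code": "a" },
      "to": [ { "mouse_key": { "vertical_wheel": 64 } } ],
      "conditions": [ { "type": "device_if", "identifiers": [ { "vendor_id": 1234, "product_id": 5678 } ] } ]
    },
    {
      "type": "basic",
      "from": { "key_code": "b" },
      "to": [ { "mouse_key": { "vertical_wheel": -64 } } ],
      "conditions": [ { "type": "device_if", "identifiers": [ { "vendor_id": 1234, "product_id": 5678 } ] } ]
    },
    {
      "type": "basic",
      "from": { "key_code": "c" },
      "to": [ { "key_code": "open_bracket" } ],
      "conditions": [ { "type": "device_if", "identifiers": [ { "vendor_id": 1234, "product_id": 5678 } ] } ]
    }
  ]
}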

By itself this is definitely not enough to make my wrists happy again, but it takes a substantial load off them, and lets me handle my email mostly hands-free. If I had a one-off repetitive task I needed to do, I would consider setting up a temporary remapping just for that task. At some point I also might consider setting up something to let me use this hardware to control my rhythm stage setup.

Comment via: facebook



Discuss

SlateStarCodex Online Meetup: Dr David Manheim on Cooperation for AI

August 23, 2020 - 13:00
Published on August 23, 2020 10:00 AM GMT

On August 30, 17:30 GMT, 20:30 IDT, 10:30 PDT, Dr David Manheim will speak on "Cooperation for AI: From formal models in AI safety to geopolitics".

He will discuss multiagent dynamics, and also competition and race dynamics for AI takeoff safety.

David is a researcher with the University of Haifa, who works with Future of Humanity Institute and other organizations on investigating large-scale risks to humanity.

Read his related paper here.

Sign up here and we'll send you a link to the online meetup



Discuss

[Geo: California/Bay Area] Mini-Research of 2020 Fire Situation

August 23, 2020 - 08:13
Published on August 23, 2020 5:13 AM GMT

California currently has a lot of fire. It's not good. Some friends on a FB thread were wondering whether this implied we should be doing something, e.g., leaving town. I've put in 1-2 hours of research now, and will probably do a bit more.

Instead of posting further updates to that Facebook thread, I'm going to make them primarily here. If you're following here from that thread, you can make an account and subscribe to this post and get emailed if I (or someone else) comments with more updates.

This isn't world-class, super-rigorous research. Just me sharing the results of some basic Googling and Wikipedia reading. Very casual.

Friday, August 21st

I did some Googling and reading, mostly of Wikipedia, during a long Zoom meeting today.

From a 0.5-1 hour search, this year isn't obviously going to be much worse than prior years. I was worried when one of the news articles above said last year 260k acres burnt, and this past few days we've already seen 400k burn, but actually, last year was on the very low end (see image). 2018 was the largest fire year to date and that was 1.9M acres [1].

UPDATE: Well now 2020 is already at 1.4M and it's only August...

Re firefighters: Ben Goldhaber got the number from the NYT article saying 1500/9,400 = 16% were inmates. A 2018 document from Cal Fire has it at 3500/12,200 = 29% [2]. I could see the inmate firefighters either being more or less valuable on the margin. As someone said elsethread, maybe they get used in riskier situations to greater effect, or actually, they're less well trained and get less access to the equipment like helicopters so their actual contribution is less than their proportion of the whole. I suspect the latter.

There's yet another question of how much of a difference firefighting makes. Turns out it's hard to estimate/measure. I found one detailed paper [3] looking at metrics and methodology, but my quick read didn't leave me with a strong sense of how helpful firefighting is. Maybe a factor of 2? My guess is the loss of firefighting power is on the same order of magnitude as a bunch of other variables that affect outcomes, but I'm not sure.

Tentative conclusion: things could turn out to be bad and seems worth going elsewhere if it's cheap, but so far my 0.5-1 hour investigation didn't make me think the outside view from previous years doesn't apply.

I'll try to dig in a bit more to the pattern of fires and whether there's something else that would have you predict this year is much worse than 2017 or 2018. I haven't looked at the locations of this year's fires and how that compares.

[1] https://en.wikipedia.org/wiki/List_of_California_wildfires#Post-2000
[2] https://www.fire.ca.gov/media/4922/glance.pdf
[3] sci-hub.st/10.1007/s40725-019-00085-4

Bibliography:
https://en.wikipedia.org/wiki/2018_California_wildfires
https://en.wikipedia.org/wiki/California_Department_of_Forestry_and_Fire_Protection
https://en.wikipedia.org/wiki/California_fire_camps
https://www.fire.ca.gov/stats-events/
https://en.wikipedia.org/wiki/Wildfire_suppression

Saturday August 22nd

UPDATE: Changed my opinion. This year could be much worse.

Yesterday when I was comparing, I had a 400k acres number in my mind. That might already have been out of date then. The latest data [1, 2] says that 1.4M acres have already burnt. That's not quite as much as 2018 yet, but fire season lasts until November. I would now bet > 50% that this year will beat the record.

I don't trust journalists not to sensationalize, and I don't feel calibrated on Vox, but this [3]:

But this week’s blazes stand out for their scale, timing, locations, and intensity, even among recent record-breaking fire seasons.

Like many (most?) things, fire size is heavy-tailed. Of the 1.4M acres, a lot of it is just a few major fire areas, pretty much all in Northern California near the Bay [2], see image. I haven't checked previous years' distribution of fires since I couldn't find maps quickly; I'll try to get them later. I slightly suspect right now that the big fires are closer to us than in previous years, but I'm not sure.

The top two fires here are apparently now the 2nd and 3rd largest fires on record for California.

1.4M acres burned, most of it from the top few fires on the list

A "be prepared to evacuate" warning has been issued to people in the Berkeley Hills. Make sense, they are right on the edge of the forest and there was fire there even in 2018 [4]

My gut feeling (and don't trust me just 'cause I found a few links) is that the Hills could be affected again. How much danger that implies for someone living in Berkeley or Oakland not right near the hills is a further research question about how cities burn. [My cached answer is "not that well", but worth more investigation.]

My current thought is we should definitely be prepared for a lot of bad air. I was able to buy more air filters on Amazon, so I'd get 'em now. I'd pack a suitcase, because it's pretty cheap. I do feel motivated to research more now since I think there's enough of a chance it could be worse than previous years. I'll share more when I've got it.

 

[1] https://www.fire.ca.gov/incidents/2020/
[2] https://en.wikipedia.org/wiki/2020_California_wildfires#cite_note-CalFire_Stats-1
[3] https://www.vox.com/2020/8/21/21377181/california-wildfire-2020-coronavirus-covid-19-heat-scu-lnu-lightning-complex-climate
[4] https://www.berkeleyside.com/2020/08/22/red-flag-warning-for-berkeley-hills-means-residents-need-to-be-ready-to-evacuate-heres-what-you-need-to-know



Discuss

Why don't countries, like companies, more often merge?

August 23, 2020 - 05:17
Published on August 22, 2020 11:56 PM GMT

Whereas companies often undergo a friendly merger or acquisition, why don't countries more often do this? Set aside colonialism because I'd like to focus on the case where the threat of violence is not paramount.

I suppose the right incentives aren't in place, but is this an intrinsic fact of international relations or democratic politics?

For example, a merger between the US and Mexico seems, naively, like it could be in both nations' interest. The US gets more land and lower-cost labor and Mexico gets access to better governance and knowledge spillover.



Discuss
