
RSS feed aggregator

Sith Techniques

Events at Kocherga - March 6, 2021 - 13:00
The mini-series of lectures on productivity techniques continues. Andrey Vedernikov will talk about the technique of reflecting on emotions, as well as his approach to restoring inner resources.

Structuring

Events at Kocherga - March 5, 2021 - 19:00
Structuring is a format in which participants propose questions or topics they would like to "structure": to explicitly work through the model of a problem they already have and, together with a partner, build something new on top of it. People pair up based on interest in the proposed topics and talk one on one.

Texas Freeze Retrospective: meetup notes

LessWrong.com News - 3 hours 47 minutes ago
Published on March 3, 2021 2:48 PM GMT

This article is a writeup of the conversation at a meetup hosted by Austin Less Wrong on Saturday, February 27, 2021. The topic was the winter weather and infrastructure crisis that took place the previous week. There were a total of 13 participants, including 8 people who were in Texas at the time and 5 who weren't.

I was the note-taker but I was not in Texas myself, so replies to any comments will probably come from people other than me. Below the section break, "I" refers to whoever was speaking at the time. Thanks to everyone who contributed and helped compile these notes.

Disclaimer: I took pains to make it clear before, during, and after the meetup that I was taking notes for posting on LessWrong later. I do not endorse posting meetup write-ups without the knowledge and consent of those present!

The 2021 Texas Freeze

Personal anecdotes

I lost power Monday through Thursday. The inside temperature dropped from 68°F to 47°F on Monday alone; over the course of the week the thermostat hit a minimum of 40°F. (Either the thermostat couldn't read any lower or the kitchen was even colder, since my olive oil solidified, which happens at 37°F). My breath was visible indoors. I had to keep my phone off most of the time, so most of the day was spent reading books under several blankets. I had a carbon monoxide scare on Tuesday after using the fireplace. I started boiling water on Wednesday, when the order was declared in some areas of Austin but not yet mine, because it seemed likely the order would soon be extended city-wide, which indeed occurred a day later. Even after getting power back, I still couldn't get groceries—stores had long lines, and H-E-B was closed after 5pm. Gas stations were out-of-order.

I lost power Monday through Friday—there was some damage to a local power line. I teamed up with my neighbors. We had a fire going out back that people could warm themselves and cook things at. We didn't have much in the way of preparation supplies, but we did have candles and water bottles. We had advance notice that we might lose water, so we filled up the tub and every container we could find. (We didn't lose water, but we got the boil-water notice.) A tree branch fell and blocked our alleyway; we worked together to remove it, yielding a bunch of firewood as a side benefit. The house was well-insulated (≈50°F), but some of our warm clothing got wet, so it would've been better to have had more. My cat helped keep the bed warm, and my dog was helpful for peace-of-mind what with all the strange noises at all times of the night.

I lost power starting Monday for 8 days, and water Thursday through Sunday. I survived by living within walking distance of the University of Texas campus. I went to the CompSci building and claimed a classroom to live in for the next few days. The whole building turned into a refugee camp for computer science students—they had water and power, since the campus has its own generator. Classes were canceled from Monday until the Wednesday 9 days later. On Thursday, a friend's place got power back but not water, so we stayed there but had to drive to the campus every day to get water.

I live right next to a hospital, so I never lost power. I did lose internet, but I was able to get it back by calling my service provider. I also lost water, for a total of 9 days. I regret not filling up my bathtub beforehand. Fortunately I had a few gallons of drinking water on hand, which was a lifesaver since stores were closed. I used half of it to flush the toilet once, but conserved the rest, and ate and drank a lot less than usual. I ended up filling containers from a nearby lake to use as toilet-flushing water. A nearby store was handing out filtered water for free.

I wasn't in Austin for the freeze, but I returned shortly afterward. My apartment lost power. Food in the freezer thawed and refroze. (Tip: If canned food freezes, you should throw it away.) I wasn't around to drip the faucets, but people doing so in other units was effective. Also, the complex has gas-powered heat; it looks like the apartment never dropped below freezing, since the houseplants survived. However, the kitchen sink still isn't working quite right.

I got lucky here, living in a rural area. I didn't lose power or water, though we lost some water pressure. I should've realized beforehand it was going to be bad, looking at the weather forecast. We have a donkey, so we had to bring him inside the garage. He didn't want to move, but once he was inside he was fine with it.

I also got lucky, and never lost power. When we realized water was in jeopardy, we filled up the tub, which was good. I wish I had kept more groceries in the house. I didn't realize that even after stores reopened, lines would be really long. I was running low by the end of the week.

I lost power Monday through Thursday. I had water but it was cold, and there was a boil-water order from Thursday to the next Monday. I booked a hotel downtown, for only 1 night initially, but I ended up staying for 4 nights. The hotel had a false fire alarm.

I also lost power Monday through Thursday, though with 30 minutes of power on Tuesday. It got to 46°F in the house according to an actual thermometer. (Watch out, because sometimes a thermostat has a minimum display temperature in the 40s.)

Preparedness

What things were helpful to have?
  • Water purification: battery-powered UV light or iodine tablets. (You can take them camping.)
  • Giant bins, buckets, or jugs for storing boiled water.
  • Rolly cart for transporting water.
  • A home with a gas stove—otherwise I would not have been able to cook or boil water.
    • Outdoor grill and charcoal—I could've used this to cook if I hadn't had a gas line. However, there would've been a risk of hypothermia being outside and then unable to effectively warm up inside. I didn't actually end up using it.
  • Electric kettle and air fryer (for cooking without a stove), but only because we were in a UT campus building that had electricity but no stove.
  • Camping stove.
  • Mylar blankets.
  • Lots of warm clothing: jogging pants, ski mask, long underwear, Uniqlo Heattech (M, W), other skiing/camping gear.
    • REI is a good place for this stuff
    • "There's no such thing as bad weather, only bad clothing"
  • Hand and toe warmers—packets that generate heat chemically. You put them inside your shoes or gloves (in between two layers).
  • Solar panel, which was enough to keep phones charged.
  • Flashlights, battery-powered lantern, extra batteries.
  • Lighter and matches for starting gas-powered appliances.
  • Lots of dried and canned foods and a few MREs I had ordered for fun and never used.
  • Some fireplace fuel—mostly old newspapers and brown grocery bags, which was not ideal but better than nothing.

What things did you wish you had?
  • Much more firewood.
    • However much firewood you think you need, get 5 times that. (This is a general principle for preparedness!)
  • An axe for making my own firewood.
  • Solar generator.
  • Solar phone charger.
    • Without one I needed to keep my phone off most of the time. The ability to look up safety knowledge (e.g. how to use a fireplace safely) was very limited. If the battery had reached zero, being unable to call someone as a last resort, or 911 in an emergency, could have been dangerous.
  • Electric blanket (powered by a solar generator, if practical).
  • Pressure cooker.
  • Grains (quinoa, etc.).
  • Drinking water.
  • A Brita water filter pitcher for water that was boiled then cooled. Sediment may sometimes show up in water during a boil water notice.

Knowledge and skills

What knowledge and skills were useful?

  • Knowing about restaurants giving out free stuff. If you could access the internet and had the means to drive on ice, websites were listing places that were giving out free stuff.
  • Knowing your neighbors and being in good communication with them. This was a bonding experience. We were sharing firewood, candles, etc., and hanging out to relieve boredom. It'll always be the case that you have something your neighbors need, and vice-versa.
  • Reading books like True Grit and The Revenant—optimistic stories of survival to put you in the right mindset. (Not a depressing story like The Road.) Then you can burn the book for heat ☺︎

Miscellaneous safety knowledge that was broadcast to Texans:

  • Know the risks of driving on snow and ice, and be able to judge how likely your car may be to get stuck on the road.
  • Drip your sinks so your pipes don't freeze. Wrap outdoor faucets with a rag and duct tape. If your pipes freeze they may burst and cause flooding.
  • To avoid carbon monoxide poisoning, don't heat your home with a gas stove or oven, don't run your car in a closed garage, don't operate a charcoal grill inside a closed garage, and don't supplement your fireplace fuel with grill charcoal.

Additional safety facts that would've been good to memorize:

  • Hypothermia from cold exposure is a risk when indoor temperatures fall below 60°F, more of a risk with infants or the elderly.
  • Alcohol makes you feel warmer because it draws blood to your skin, resulting in increased loss of heat and increased risk of hypothermia.
  • Symptoms of hypothermia (shivering, paleness, poor balance, slurred speech, confusion).
  • Symptoms of carbon monoxide poisoning (headache, nausea, chest pain, dizziness, confusion).

Improvised strategies:

  • How to ration firewood.
    • I had only 4 logs and a few sticks and a lot of paper (I did not intend to try lasting 4 days in the cold on that much firewood; it was just all out of stock beforehand). I used 2 logs at a time for 2 separate fires, one on Tuesday and one on Thursday.
    • More logs would've been much better than more paper. Burning the paper was labor-intensive because paper burns up very quickly, so I had to supervise the fire more.
    • Burning a fire in the morning seemed to be the most helpful, because at night I'm under a pile of blankets and I don't need the house to have heat. About 90 minutes of fire burning raised the temperature by 5°F according to the battery-powered thermostat; I'm not really certain how long each fire lasted but I think it was 90 minutes.

Transportation

Driving was hard enough that some of us considered it not an option. Austin has limited infrastructure for removing ice from roads. Cars were getting stuck everywhere. There was a 10-car pileup near my place! I had to walk to the grocery store, for which having a large backpack was helpful.

If you have 4-wheel drive, know how to use it, but you should still drive very carefully. Don't pass people, and turn gradually lest you fishtail.

If you had chains or snow tires you could put them on, but most people here don't have them. Chains aren't that expensive, but they're a pain to put on and off, and make for an unpleasant driving experience.

The temperature swung up by 60°F in the 24 hours after the freeze ended. A swing that large noticeably changes tire pressure (roughly 1 psi per 10°F), so check whether your tires need to be re-pressurized.

Uber wasn't too expensive, because they suppressed surge pricing, but that meant there weren't many rides available. A 2-mile ride was only $14, but I tipped $20.

Food and water

A lot of people don't know what foods are good to eat in a cold home without refrigeration. I saw posts about people throwing away butter, eggs, vegetables, and other things that would've been fine. My eggs went slightly warm in the refrigerator but I'm still eating them and it's fine. Yogurt was good; meat was fine for a few days. Learn how to tell by smell when something is bad. When ERCOT shut down the power, they were 4 minutes away from a total statewide failure which would've lasted a month. If something like that were to happen, knowing how to stretch food supplies would've been of value.

I used the outside for refrigeration, but my eggs froze. (Incidentally, looking up the freezing point of eggs, I could only find results about human gamete preservation...) It's useful to have a cooler to fill with snow and bring inside, which protects food from animals and sub-freezing temperatures. You'll still want cold beer even when it's cold out!

It's good to have water treatment tablets, especially if you can't boil, but note that you have to let it sit for 1–4 hours (depending on the brand/type of tablet) rather than drinking it immediately. Do not ingest the tablet.

Boiling, UV, and tablets kill organisms, but filters are necessary for removing particulates. You can cobble together a water filter by layering different types of earth—a layer of pebbles, dirt, sand, and ash. (Example)

What other kinds of disasters should we prepare for?

Multiple-whammy disasters

Shortly before the ice storm started, San Angelo, a city in West Texas, was already dealing with carcinogens in their water supply, which cannot be boiled away, so they had to buy water, which likely went out of stock very quickly. Then the snow came, the power went out, and they couldn't drive anywhere. On top of the pandemic it made for a quadruple-whammy. Think about combinations of different disasters.

In a way, this whole event was a weird combination of things all going wrong at once. A cascading failure: The electricity went out, causing heating to fail, which both made generating electricity even more difficult and caused water pipes to freeze.

Hot weather

We had our cold-weather disaster; what about a hot-weather disaster? What if it's really hot in the summertime and a power outage knocks out air conditioning? (The record high temperature for Texas during summer is 120°F.)

I was living on the east coast during such an event. The power was out for a few days. I spent most of my time in the basement, wearing light clothing. This could be bad for Texas: Texas doesn't have basements.

On the one hand, the Texas electrical grid is probably much more robust in heat (at least in a typical summer) than in cold, given that we more commonly deal with heat. On the other hand, the Texas grid is one of four independent grids in North America: East, West, Quebec, Texas. This can be problematic because our ability to import power is limited.

There is reported evidence that climate change may increase the likelihood of extreme weather events, both hot and cold, in the coming years. There does not currently appear to be a scientific consensus on whether climate change was a contributing cause of this cold snap in particular (source).

Tips to keep cool: Fill your bathtub and soak in it, or soak your feet in a bucket of water (because your feet have lots of capillaries). Keep sunlight out of the house.

Electromagnetic pulse (EMP)

Some preppers worry about it; it would be really bad. Gas and water pumps would fail.

It could be either a deliberate attack or a naturally-occurring solar storm (recent Forbes article about this, Hacker News discussion). To prepare for this, you'd have to set up a house in a remote area with lots of supplies, and have enough gasoline on hand to be able to drive there. (This is a general prepper method.)

Existential risk from dependency on technology

When technology is developed and people start depending on it, its failure can have a worse outcome than if the technology had never been available at all. The electrical grid is one example; others include modern medicine and the logistical infrastructure for transporting food to populated areas.

There could be x-risk from eliminating death—if the population ages past fertility and the means of eliminating death is then lost, humanity will die out.

Biological

Biological warfare, or a naturally-occurring pandemic: Imagine a disease much worse and more contagious than COVID-19. When COVID-19 began, people were desensitized because of bird flu and other such false alarms.

Civil disorder

Civil disorder can be initiated by some exogenous shock such as a hurricane or loss of food supply. Is it plausible that it could happen for an entirely endogenous cause? Maybe when some political situation arises where a lot of people think they have no other option than violence, e.g. the Troubles in Northern Ireland. But it seems like the modern state has more capacity to manage violence than in previous times.

Militia violence is more likely than state action.

Can you prepare for a dictatorship or totalitarian surveillance state? Prepare to leave the country; marry someone with another citizenship. It's hard to imagine that a dictatorship could spread to other countries in the way that Nazi Germany or Soviet Russia did. The interwar period was more fragile than things are now, in terms of the risk of mass killings, and people are more able to flee if things get bad.

But there were people who, by the time they knew they wanted to leave their country, couldn't. In the case of Nazi Germany, there was the situation of the MS St. Louis; Jewish survivors are especially paranoid because of this. Article: When is it time to leave your country? There are no clear answers, but keep journaling and write down your criteria. Otherwise, things will change gradually and everything will seem normal when it happens. Watch out for "emergency powers."

Or maybe anticipating specific events isn't the right way to think about it; instead, think "This country is a total mess, so something bad is going to happen even if I can't think of what." SSC article The Influenza of Evil: We already have antibodies against things like Nazism and Communism, so if mass death occurs in the US, it may be due to something that doesn't pattern-match to either of those things.

Bugging out

When would having a bug-out bag be useful? Mass rioting/looting—but you'd probably have a bit more time to pack than just a few minutes. In summer 2020, San Antonio had one hour's notice. But it might be that stores are more vulnerable than homes. Also last summer, people living in CHAZ might've wanted to leave on short notice.

If your house is burning down, having a bug-out bag is good. Less extreme, having supplies to leave your house for a few days is useful, as it was during the Texas freeze. Packing the bag in advance is helpful because it's stressful to have to remember all the different things you need (e.g. I keep a spare toothbrush and toothpaste in my suitcase, because I always forget it otherwise).

Relatedly, keeping a bag of extra clothing and supplies in your car in case it breaks down in the middle of nowhere is good practice.

Is AI risk preppable?

Be as illegible as possible, so the AI doesn't know where to find your isolated wilderness hideout or that you exist at all. But this isn't helpful against nanobots. In an unfriendly AI takeoff scenario, you probably won't survive very long regardless of where you are.

Context: AI Impacts 2020 review

Think about career security: Is your job still relevant with AI in the picture?

There's a spectrum of takeoff scenarios. A hard takeoff is too fast to react to; a slower takeoff might make things more difficult and displace people's jobs. But there's also an intermediate case, where AI can still do a lot of harm short of existential. E.g.: terrorist drones that can be deployed by anyone untraceably; robot robbery; weaponized self-driving cars.




Open & Welcome Thread – March 2021

LessWrong.com News - 6 hours 8 minutes ago
Published on March 3, 2021 12:27 PM GMT

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.




What other peptide vaccines might it be useful to make?

LessWrong.com News - 12 hours 10 minutes ago
Published on March 3, 2021 6:25 AM GMT

Related: Making Vaccine

From my limited understanding of how the open-source radvac vaccine works, it is surprisingly basic. In particular, if the peptide design works, then simply ordering and snorting it is sufficient. Most of the rest of the process, which is about creating nanoparticles to which the peptides can attach, just serves to boost effectiveness.

When I say this is "basic", I still want to commend the radvac team and johnswentworth for the combined ingenuity and agency required for making it happen. Sometimes it takes true skill to find a simple solution.

Especially when most people aren't even looking. 

I spent most of last year not even considering the hypothesis that it might be possible to make safe, cheap and effective vaccines at home. For a year I allowed my life to be governed by a constraint that a more competent version of me could have relaxed in a month. 

It feels a bit like going to Hogwarts and seeing someone do transfiguration for the first time. The thoughts that should be racing through my mind are: how do I learn such magic? And: if I could master such magic... what else could I use it for?

I'm curious for others' input as I think through that. 

What else could the general method of peptides, administered intranasally, be used for? 

Other diseases, things other than vaccinations, or even enhancement rather than prevention? 

My knowledge of this domain is very limited, and these questions might not have interesting answers. But they certainly seem worth at least posing, and I am curious about the result.




Curiosity about Aligning Values

LessWrong.com News - 16 hours 13 minutes ago
Published on March 3, 2021 12:22 AM GMT

Hi there, I've been thinking a lot about AI Alignment and values, the latter for longer than the former, admittedly. I'm in graduate school and study values through ethics. I would love to start a conversation about a thought that shot through my mind just last night. In thinking about values, we often focus on the principles, the concepts such as "good" and "bad" -- most simply, the nouns and adjectives. It is challenging to build consensus around these even within the same language, let alone across cultural, linguistic, and geographic boundaries. In my past experience as an English teacher, conveying verbs was always easier than trying to explain things like integrity.

Here's my question: what if instead of fixed concepts and rules, AI alignment focused on actions as the underlying reward function? In other words, might programming AI to focus on the means rather than the ends facilitate an environment in which humans are freer to act and reach their own ends, prioritizing activated potential over predetermined outcome? Can the action, instead of the outcome, become the parameter, rendering AI a facilitator rather than a determiner? 

There's a lot more to these questions, with details and explanations that I would be happy to dive into with anyone interested in discussing further (I didn't think it appropriate to make my first post too lengthy). Either way, I'm happy to have found this group and look forward to connecting with likeminded and unlikeminded folks. Thank you for reading! ~Elisabeth




Enabling Children

LessWrong.com News - 18 hours 33 minutes ago
Published on March 3, 2021 12:02 AM GMT

There was a recent discussion on Less Wrong about whether or not ambitious people should have kids.

I didn't manage to watch it, though I have a lot of interest in the topic. It wasn't recorded but I can still hope for someone to post lots of details about it in the comments.

One subtheme from the comments that stuck out to me was the idea that there are lots of people who want children (because it would satisfy a genetic urge, or because it is fulfilling, etc.) but delay or decide not to procreate. They hold off because they expect that having children dominates their life in a way that could prevent them as parents from being able to focus on their work, and this could be a severe limiter for ambitious goals. I certainly resonate with this.

To solve this, I've been thinking for a long time that it would be great if a bunch of families lived together. You'd only require one parent (or caretaker) to watch over a bunch of other people's kids at any moment. Living together has a bunch of other benefits too. It's hard to find big houses (or a cluster) with all the amenities that a bunch of separate families would need, but it is possible to build such developments to spec.

As a proof of concept, Jefftk raises his kids with a bunch of other adults in the house. I don't think his house would be big enough for those adults to have their own families, though.

Why don't people already co-raise their kids with other ambitious families? Well, I think some people probably do, in private situations that we aren't aware of. But it's not widespread. It seems like a really difficult coordination problem, to set up a good situation to raise kids with other families all in one place. You have to solve simultaneously for:

  • the space needs of all the families, plus any hired help
  • money: the means to pay for the house (mortgages, rent, etc), wages for hired help if any; plus negotiating loans or room rents across lots of people
  • whatever location constraints you have (for jobs/commutes/schools/etc)
  • all the labor of actually finding or building a home (possibly including solving zoning constraints) and maintenance
  • challenges related to having a bunch of people sharing the same space and presumably working from home, being very busy, etc. (like, there's a normal cooking rotation but someone gets called away and nobody else has enough time to prepare dinner for 12, clean up afterwards, etc)

Plus, kids themselves are so chaotic (I assume -- I don't yet have kids) that once you have them, I imagine that they dominate your ability to coordinate. It seems like planning ahead might be essential.

Can money solve some of these coordination problems? In the limiting case yes: if, for example, you already happened to own a big building in the center of a big job market, perhaps the Empire State Building, there's no coordination needed because you could rent or gift rooms to people you wanted to live with. You would still need to figure out how to make those rooms functional for the families living in them (but you could solve that with money as well). And to the extent that such people share some common space, you still need to negotiate the rules of that space. But it's a lot easier!

So maybe one could solve it if they were a billionaire. Could it be solved on a thousandth of that budget - for a "mere" million dollars?

This is where creativity comes in. A million dollars seems like just barely enough to build or purchase a big house, small multifamily apartment building, or cluster of 3-4 standalone houses, on low-value land. e.g., here's a 12-bedroom retreat center/B&B with a lot of common space on the market in fairly rural Vermont (but only ~ 2.5h from Boston!) as of this writing listed for $689K.

There are not many such places on the market. But if I can find one in my area online in less than 30 minutes of searching, it should be possible to find more if you are patient, hire a real estate agent, etc.

Would this work? Based on the 5 criteria above:

  • space needs: Is the kitchen big enough? are there enough common spaces and private rooms?
  • money: If everyone puts in a down payment of $100k, you should be able to borrow the rest; this may be achievable (see the rough affordability sketch after this list). Alternatively, maybe one well-off person can borrow it all, or otherwise fund it, and rent to the others; etc. I realize a million dollars is far from chump change, and there are many logistics in pulling it together, but it seems more schlep than impossible challenge. :)
  • location: a rural location seems kinda sucky for anyone who can't work from home. so this idea would probably be restricted in appeal to people who can plausibly work from home. As for school, maybe kids could go to local school in whatever town; or maybe a homeschooling situation would be feasible.
  • labor of real estate purchase & maintenance: This seems solvable in the early days if there are one or two core founder-types who are motivated to put in a lot of work to make it a reality. Eventually (with additional money) the longer term maintenance work could be outsourced, if nobody wants to do it.
  • solving "roommate conflicts"/negotiating use of space: this seems like the biggest unknown -- are some people going to have different standards of cleanliness? can multiple families even share a space at all sustainably? Will noise be a problem? I don't know how to gather data on this other than trying to figure it out with the specific people I'd be living with, but it seems tractable, especially if everyone who joins is game to accept some early stage chaos while things get ironed out.

I'm particularly interested to hear from people who would like to raise kids in such an environment but are unsure it would work, for some reason. What would convince you that such an idea is worth investing time, energy and money into?




Attempt at a gears model of getting infected with COVID-19

LessWrong.com News - March 2, 2021 - 18:39
Published on March 2, 2021 3:39 PM GMT

I have read a bit but don't know how good my understanding is. For reasoning about TAFFIX or the RaDVaC vaccine it would be good to have a decent gears model. I'm writing down my current model so that others can check it for errors, not so that others will believe it:

There's SARS-CoV-2 particles in the air around you.

You breathe in those particles.

If your nasal cavity is lined with mucus, those particles land on the mucus and the mucus traps them. The mucus itself, however, doesn't kill the SARS-CoV-2 particles in it.

There are two ways of getting rid of the mucus with the SARS-CoV-2 particles in it: either you blow your nose and get rid of the mucus in your nasal cavity that way, or you swallow it.

If the body knows how to defend against SARS-CoV-2 from a previous infection, it puts antibodies into the mucus to bind the SARS-CoV-2 particles. If the SARS-CoV-2 particles aren't bound, however, they sometimes get back into the inhaled air and make it into the lungs, or they pass from the mucus into the nasal tissue.

Once antigens enter the nasal tissue, the mucosal immune system starts producing antibodies for those antigens so that they get trapped in the mucus and in the nasal tissue.

If there are too many SARS-CoV-2 particles and the immune system in the nasal tissue doesn't manage to handle the threat in time, the SARS-CoV-2 particles grow in quantity. They get into the lymph and blood and travel through the body. Then the normal immune system kicks in and tries to fight them. If there aren't many particles it somehow manages to fight them off (so that there's a threshold for infection), but if there are too many it first has to go through the effort of building up the immune system's defenses, and there's a COVID-19 infection.

Is there something wrong in my recounting, or something important I'm missing? I'm also happy to hear about other resources that give a clear gears model.




A whirlwind tour of Ethereum finance

LessWrong.com News - March 2, 2021 - 12:36
Published on March 2, 2021 9:36 AM GMT

As a hacker and cryptocurrency liker, I have been hearing for a while about "DeFi" stuff going on in Ethereum without really knowing what it was. I own a bunch of ETH, so I finally decided that enough was enough and spent a few evenings figuring out what was going on. To my pleasant surprise, a lot of it was fascinating, and I thought I would share it with LW in the hopes that other people will be interested too and share their thoughts.

Throughout this post I will assume that the reader has a basic mental model of how Ethereum works. If you don't, you might find this intro & reference useful.

Why should I care about this?

For one thing, it's the coolest, most cypherpunk thing going. Remember how back in 2012, everyone knew that Bitcoin existed, but it was a pain in the ass to use and it kind of felt weird and risky? It feels exactly like that using all this stuff. It's loads of fun.

For another thing, the economic mechanism design stuff is really fun to think about, and in many cases nobody knows the right answer yet. It's a chance for random bystanders to hang out with problems on the edge of human understanding, because nobody cared about these problems before there was so much money floating around in them.

For a third thing, you can maybe make some money. Specifically, if you have spare time, a fair bit of cash, appetite for risk, conscientiousness, some programming and finance knowledge, and you are capable of and interested in understanding how these systems work, I think it's safe to say that you have a huge edge, and you should be able to find places to extract value.

General overview

In broad strokes, people are trying to reinvent all of the stuff from typical regulated finance in trustless, decentralized ways (thus "DeFi".) That includes:

  • Making anything that has value into a transferable asset, typically on Ethereum, and typically an ERC-20 token. A token is an interoperable currency that keeps track of people's balances and lets people transfer it.
  • Making liquid exchanges where you can swap all of those tokens at market prices.
  • Making schemes for moving those tokens over time, like borrowing, futures, etc.
  • Making elaborate scams and arbitrages to obtain other people's tokens.

It's not completely clear to me what the main value proposition of all of this is. It's easy to generate things about it that seem somewhat valuable, but hard to say how each stacks up. Some possible value includes:

  • Evading regulation, like securities laws, money laundering laws, sanctions, capital controls, laws against online gambling, etc. etc.
  • Allocation of capital among projects that can raise money using cryptocurrency tokens (because somehow they have a scheme to tie the success of their project to the value of the token, making it a kind of virtual equity.)
  • Having fewer middlemen than existing financial systems, making it more trustworthy and cheaper. (It is not currently more trustworthy or cheaper than mainstream American institutions, but it plausibly could be in a few years.)

Tokenization

The first step is to make everything into an ERC-20 token. This will let all the other products work with everything, because they will interoperate with ERC-20 tokens.
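
For intuition, here is a toy Python sketch of the bookkeeping an ERC-20-style token does. It is not Solidity and not the real ERC-20 interface (which also has approvals, allowances, and events); it only shows the balances-and-transfers core described above.

    # Toy sketch of the core bookkeeping of an ERC-20-style token:
    # a mapping from addresses to balances, plus a transfer that checks funds.
    # Real tokens are Solidity contracts with approvals, allowances, events, etc.

    class ToyToken:
        def __init__(self, symbol, initial_holder, supply):
            self.symbol = symbol
            self.balances = {initial_holder: supply}    # address -> balance

        def balance_of(self, addr):
            return self.balances.get(addr, 0)

        def transfer(self, sender, recipient, amount):
            if self.balance_of(sender) < amount:
                raise ValueError("insufficient balance")
            self.balances[sender] -= amount
            self.balances[recipient] = self.balance_of(recipient) + amount

    token = ToyToken("TOY", "alice", 1_000_000)
    token.transfer("alice", "bob", 250)
    print(token.balance_of("bob"))      # 250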

Stablecoins and pegs

It's common for someone to want to own an Ethereum version of some other asset that is not Ethereum, so that they can use it on Ethereum. The most typical example of this is US dollars. A token whose price is designed to be pegged to an external thing like this is called a stablecoin.

There are a few techniques people use to accomplish this. The most popular one is to have a giant pile of US dollars somewhere under someone's control, and have that person act as a counterparty for anyone who wants to buy or sell 1 US dollar for 1 token. This is what Tether and USDC do.

A more complicated version is to have a giant pile of something that is not US dollars, but is worth something, and then be willing to trade 1 US dollar worth of whatever, for 1 token. This seems like what Reserve does. In a way it's also what MakerDAO does; I'll discuss this later when I get around to talking about loans.

Another asset people really like to own on Ethereum is bitcoin. This is accomplished by WBTC ("wrapped bitcoin") using a giant-pile-of-bitcoin technique, with a few useful twists; firstly, there's a cryptographic scheme by which you can trustlessly mint the WBTC tokens on Ethereum and send the BTC to the custodian on Bitcoin atomically. (This is a general mechanism for cross-blockchain transfers.) Secondly, you can go look at the Bitcoin balance of the custodian, so you know they actually have some collateral somewhere.

There are some other schemes for doing this that are trying to get around the requirement of having a giant pile of assets, by making it so that more of the coin is minted when the value of the coin goes above the peg, and the coin is burnt somehow when the value goes below the peg. Basis kind of tried to do this. It seems like nobody quite knows yet whether or not this is possible.
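
A toy sketch of the mint-above-peg / burn-below-peg idea follows; the damping factor is an invented parameter, and this is not how any particular project works. Whether anything like this can actually hold a peg is, as noted, an open question.

    # Toy sketch of an "algorithmic" peg: expand supply when the market price
    # is above the peg, contract it when below. The damping factor k is an
    # invented parameter for illustration only.

    def adjust_supply(current_supply, market_price, peg=1.0, k=0.5):
        deviation = (market_price - peg) / peg
        return current_supply * (1 + k * deviation)

    print(adjust_supply(1_000_000, 1.10))   # above peg -> mint: 1,050,000
    print(adjust_supply(1_000_000, 0.90))   # below peg -> burn:   950,000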

Governance tokens, i.e. tokenized equity

It's extremely common for new cryptocurrency projects to want to essentially give out equity in their project as a way to make money. In particular, a very common way to do it seems to be to mint "governance" tokens representing equity, give some of them to investors, and also give a bunch to early users, as a way of establishing a big userbase and attaining some network effects.

Frequently these tokens have a profit sharing component. Almost always, they let the owners of the tokens make some kinds of decisions relevant to the project -- parameter values, future redistribution of profits, development direction, project mergers, and so on, under the assumption that they will do so in a self-interested way as the stakeholders in the project.

I've found that governance tokens seem surprisingly valuable on the market, even when there is no explicit profit sharing component. For example, the Uniswap governance token has a market cap of $6B. Like companies that don't issue dividends or buy back shares, it's slightly puzzling at first glance where so much value is coming from. Expectation of future profit sharing mechanisms? The ability to control the future direction of the project in other ways? Maybe people just like the stock. A lot of how people are making money right now on these platforms is by taking advantage of aggressive governance token rewards for using new and speculative projects.

Exchanges

Once everything is a token, you want to be able to trade the tokens. The old cryptocurrency model of having centralized, NYSE-style exchanges hasn't gone away; you can still trade lots of tokens on Coinbase and so on. However, a totally new variety of trustless, decentralized exchanges have popped up in the past few years. I will explain Uniswap, which is just about the simplest possible one, and then talk about some variations on a theme.

Automated market making

Uniswap is a constant function market maker. More specifically, it's a constant product market maker. This means it is an Ethereum contract that manages a giant pool of two tokens, and anyone can trade one token for the other against the pool as long as their trade preserves the product of the two token quantities in the pool.

The main reason you would do this is if you wanted one instead of the other. But even if you didn't, you would do it anyway, if the current rate being offered (i.e. the ratio of the two token quantities in the pool) is different than the rate you can get on another liquid market. If it's better, you trade one way on the pool and the other on the market. If it's worse, vice versa. As a result of this arbitrage, Uniswap's rate will always magically be at the prevailing market rate.

Well, not quite. It will always magically be somewhere inside the prevailing market spread, plus or minus transaction fees, for the first epsilon quantity you're trading. After that, it will slip to a worse rate. The slippage will depend on how giant the pool of tokens is compared to your trade.
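
Here is a minimal Python sketch of a constant product pool, ignoring the trading fee Uniswap actually charges; the reserve numbers are made up. It shows both the marginal rate (the reserve ratio) and how slippage grows as your trade gets large relative to the pool.

    # Minimal constant-product pool sketch (x * y = k), trading fees ignored.
    # swap_x_for_y() returns how much Y you receive for dx of X while keeping
    # the product of the reserves unchanged.

    class ConstantProductPool:
        def __init__(self, reserve_x, reserve_y):
            self.x = reserve_x
            self.y = reserve_y

        def marginal_rate(self):
            """Price of X in units of Y for an infinitesimally small trade."""
            return self.y / self.x

        def swap_x_for_y(self, dx):
            k = self.x * self.y
            new_x = self.x + dx
            new_y = k / new_x               # preserve x * y = k
            dy = self.y - new_y
            self.x, self.y = new_x, new_y
            return dy

    deep = ConstantProductPool(1_000_000, 1_000_000)
    print(deep.swap_x_for_y(1_000))         # ~999: barely any slippage
    shallow = ConstantProductPool(10_000, 10_000)
    print(shallow.swap_x_for_y(1_000))      # ~909: big slippage on the same trade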

There are a number of appealing properties about this compared to a centralized exchange with an order book:

  • All the machinery for making new pools is automated and anyone can do it. My understanding is that you can invent two new ERC-20 tokens right now, mint a bunch of them, and make your very own Uniswap pool, without asking anyone's permission.
  • The trade I described is way less computationally expensive than keeping track of an order book, so it costs way less gas. This is a huge concern right now in Ethereum land.
  • The software is quite simple. That means there probably aren't many bugs.
  • You only need to trust the software; you don't have to put your money into a questionable cryptocurrency exchange platform.

As a result of these properties, a lot of people are making similar kinds of automated market makers for different purposes. For example, Curve is a CFMM which uses a function that has less slippage when the two tokens have very similar value; it's used for trading stablecoins and tokens that are pegged against each other. Balancer is a CPMM that works with a set of more than two tokens and maintains the constant product invariant for all of them, so you can trade any of them for the rest of them.

Yield farming

All of these automated market makers operate on the same model of incentivizing people with money ("liquidity providers") to make a really giant pool of tokens for them, so that they can support lots of trade volume with minimal slippage. There are typically two incentives: one is that you get transaction fees on the trades proportional to your portion of the pool, the other is that you get governance tokens for the exchange platform.

These frequently add up to what looks like a really good APR on your money. The shallower the pool and the more the trading volume, the more of the transaction fees you are making. So a lot of people spend their time looking for AMM pools with attractive yields, buying whatever tokens that pool supports, and depositing them in the pool. Then later you can take your tokens back out plus whatever profits you made in the meantime. This is called "yield farming".

At a glance this sounds like a relatively safe way to make money, but there are a few non-obvious risks. One is your exposure to the tokens you bought and put in the pool. If the pool is so shallow, it's probably because one or the other of those tokens is new or has questionable value, so you might not be thrilled to own them.  Another is systemic risk based on smart contract hacks, malicious governance, and so on. Another is impermanent loss; it turns out that for many AMMs, liquidity providers will not be indifferent to price movements of the tokens. So it's a way to make money, but not necessarily a very safe way to make money.
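
Impermanent loss has a clean closed form for a 50/50 constant product pool with no fees: if one token's price moves by a factor r relative to the other, the LP position ends up worth 2√r / (1 + r) times what simply holding the two tokens would have been worth. A quick sketch:

    import math

    # Impermanent loss for a 50/50 constant-product pool, ignoring fees:
    # compare providing liquidity against simply holding, after one token's
    # price changes by a factor r relative to the other.

    def impermanent_loss(r):
        """Fractional shortfall of the LP position vs. holding."""
        return 2 * math.sqrt(r) / (1 + r) - 1

    for r in (1.0, 1.25, 2.0, 5.0):
        print(f"price moves {r}x -> IL = {impermanent_loss(r):+.1%}")
    # 1x: +0.0%, 1.25x: -0.6%, 2x: -5.7%, 5x: -25.5%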

Loans

There seems to be a lot of demand for loans of cryptocurrency. Like a lot of this technology, it's not totally clear to me where the demand is coming from. Some possibilities:

  • Borrowing for the purposes of shorting this or that token.
  • Borrowing as a way to invest on margin.

Because it's supposed to be trustless and decentralized, all loans are necessarily at least fully collateralized (except for flash loans, which I describe later.) I'll explain two prominent lending systems.

MakerDAO & the DAI stablecoin

MakerDAO is one of the oldest Ethereum projects. It has a mechanism that not only results in a way to borrow with collateral, but also results in a stablecoin just sort of popping out as a consequence, which is cool.

The way it works is, you can send some stuff to a Maker "vault" contract. The stuff can be any token that has value according to a big list that Maker manages collectively. The USD value (per some governance-approved oracle) of the stuff acts as your collateral. Once you have some stuff in there, you can mint a token called DAI proportional to the amount of your collateral (everything is somewhat overcollateralized, because most of the stuff has a lot of price variance) and do whatever you like with the DAI. Later, you can pay back the DAI to get your collateral back.

What if a loan becomes insufficiently collateralized, because the market value of the collateral went down? Maker solves this problem by making it profitable for other people to liquidate you in this case; other people can bid DAI in an auction to pay off your loan and take your collateral. Naturally, this should happen basically instantaneously.
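
A toy sketch of the vault-health check behind that liquidation mechanic; the 150% minimum collateralization ratio here is only illustrative (real parameters vary per collateral type and are chosen by governance).

    # Toy health check for a Maker-style collateralized debt position.
    # The 150% minimum collateralization ratio is illustrative only; real
    # values differ per collateral type and are set by governance.

    MIN_COLLATERAL_RATIO = 1.5

    def is_liquidatable(collateral_amount, collateral_price_usd, debt_dai):
        """True once the collateral no longer covers the debt with margin."""
        collateral_value = collateral_amount * collateral_price_usd
        return collateral_value < debt_dai * MIN_COLLATERAL_RATIO

    # 10 ETH (at an assumed price) backing a 10,000 DAI loan:
    print(is_liquidatable(10, 1_800, 10_000))   # False: 18,000 >= 15,000
    print(is_liquidatable(10, 1_400, 10_000))   # True:  14,000 <  15,000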

There are some other mechanisms in play; as usual, there's a governance token MKR that shares profits from a fee you pay on your loans. There's also a kind of emergency shutdown lever that MKR holders can pull to liquidate everyone and redistribute the collateral to everyone who owns DAI, presumably in case Maker becomes self-aware.

Why is DAI stable? Well, if DAI got much more expensive than $1, you can make more DAI, sell the DAI on the market, and then default. If DAI got much less expensive than $1, people with outstanding loans could buy DAI off the market and pay off their loans with it. If DAI really tanked, DAI holders could invoke the emergency shutdown and reclaim the collateral. And when DAI starts drifting a bit, Maker governance tries to nudge it using a familiar central bank tool -- the interest rate on Maker loans. All of these things help peg DAI to the dollar and they seem to mostly work.

Compound

Whereas Maker is basically a way to "lend to yourself", or take out leverage, Compound is an interpersonal lending system.

You start by putting a bunch of stuff into Compound, which will serve as collateral. Hopefully, other people have also put some stuff into Compound that you would like to borrow. If so, you can borrow it -- like Maker, you can borrow $1 of stuff for every $1.50 of collateral. Interest will accrue at some rate, and you can repay the loan and get your collateral back any time you choose. Meanwhile, the other people who put the stuff you borrowed there get the interest you paid.

What's the interest rate? Each token you can lend or borrow (e.g.) has an "interest rate model", chosen by platform governance (I don't know how they decide what it ought to be) which defines the interest rate as some function of the supply and demand for loans of that token. So if the demand for loans is high, but there isn't much of that token sitting around, lenders will get a high rate when they put that token into the pool.
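
To make that concrete, here is a sketch of one possible utilization-based rate model: the borrow rate rises linearly with utilization (borrowed / supplied), and lenders receive that interest pro rata minus a protocol cut. The base rate, slope, and reserve factor are invented numbers, not Compound's actual parameters.

    # Sketch of a utilization-based interest rate model in the spirit of
    # Compound. The base rate, slope, and reserve factor are invented numbers.

    def borrow_rate(borrowed, supplied, base=0.02, slope=0.20):
        if supplied == 0:
            return base
        utilization = borrowed / supplied        # fraction of the pool lent out
        return base + slope * utilization

    def supply_rate(borrowed, supplied, reserve_factor=0.10):
        """Lenders earn the borrow interest pro rata, minus a protocol cut."""
        if supplied == 0:
            return 0.0
        utilization = borrowed / supplied
        return borrow_rate(borrowed, supplied) * utilization * (1 - reserve_factor)

    print(f"{borrow_rate(800, 1_000):.1%} borrow rate at 80% utilization")   # 18.0%
    print(f"{supply_rate(800, 1_000):.1%} supply rate at 80% utilization")   # 13.0%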

Compound has a different approach to liquidating positions that have dipped below their collateral requirements due to market movements. If a loan is not sufficiently collateralized, any random person can call a function on the contract to steal some of the collateral and pay back some of the loan, at a better than market rate; so they will. It's a kind of "whoever notices it first" situation rather than Maker's slow auction. To me this seems simpler and effective -- I wonder why Maker didn't do it.

Malarkey

It turns out if you create a bunch of interoperable software that lets you transfer utility around, the software sometimes has unanticipated problems. It's very common for people to notice some way to Dutch book some combination of these services and pump money out until someone figures out how to do something about it.

Recently, someone created an amazing new invention which is like the Low Orbit Ion Cannon of arbitrage, called a flash loan. A micro loan is a very small loan; a flash loan is a very fast loan. You can borrow the money -- with zero collateral required, and with a very small fee -- if and only if you can repay it at the end of the same Ethereum transaction. I find this quite remarkable.

Aave (how do you pronounce this?) is the leading flash loan service. To use it, you have to deploy a contract that calls the loan contract, does whatever you wanted the money for, and calls the loan contract again to repay it. I guess there are also some other use cases for flash loans. For example, if you have a collateralized loan on Compound, and you don't want to repay the loan, but you wish the collateral were something else, you could (as I understand it) take a flash loan of the other thing, deposit it, withdraw your old collateral, and sell the old collateral to repay your flash loan.
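
The control flow is easier to see in a sketch: the borrow, the use of the funds, and the repayment all happen inside one atomic transaction, and if repayment (plus a small fee) fails, the whole thing reverts. This is illustrative Python, not Aave's actual contract interface, and the 0.09% fee is an example figure.

    # Sketch of the flash-loan control flow. Borrow, use, and repay all happen
    # inside one atomic transaction; if the loan plus fee isn't returned, the
    # whole transaction reverts. Illustrative only -- not Aave's real interface.

    FLASH_LOAN_FEE = 0.0009     # example fee

    class LendingPool:
        def __init__(self, liquidity):
            self.liquidity = liquidity

        def flash_loan(self, amount, use_funds):
            assert amount <= self.liquidity, "not enough liquidity"
            self.liquidity -= amount            # hand out the uncollateralized loan
            returned = use_funds(amount)        # whatever the borrower ends up with
            owed = amount * (1 + FLASH_LOAN_FEE)
            if returned < owed:
                # On-chain, this would revert the entire transaction.
                raise RuntimeError("loan not repaid; transaction reverted")
            self.liquidity += owed              # pool takes principal + fee back

    def arbitrage(funds):
        profit = funds * 0.002                  # pretend we found a 0.2% arbitrage
        return funds + profit                   # total available to repay the pool

    pool = LendingPool(1_000_000)
    pool.flash_loan(500_000, arbitrage)
    print(f"{pool.liquidity:,.0f}")             # 1,000,450: the pool earned the fee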

But the really sweet use case is incinerating slightly imperfect systems made of Ethereum contracts and taking out all the money. Check out some of these autopsies.

Burninating the peasants at bZx, a kind of margin trading platform:

The origin account of the transaction starts with nothing, then borrows and moves a pile of cash, causes two huge Uniswap orders (in both directions) in the course of the same transaction, and ends up with 65 ETH. That definitely looks fishy.

Burninating the countryside at Harvest, a decentralized hedge fund:

The attacker repeatedly exploited the effects of impermanent loss of USDC and USDT inside the Y pool on Curve.fi. They used the manipulated asset value to deposit funds into the Harvest’s vaults and obtain vault shares for a beneficial price, and later exit the vault at a regular share price generating a profit...The value lost is about $33.8 million, which corresponded to approximately 3.2% of the total value locked in the protocol at the time before the attack.

Burninating the thatched-roof houses of something called Value DeFi after they tweeted about their new "flash loan attack protection":

The attacker returned $2 million to the protocol and pocketed $6 million — and with it left one audacious message stating, “do you really know flashloan?”

Value Defi said it suffered a “complex attack that resulted in a net loss of $6 million.”

You get the idea.

Should I really send my money to this computer program?

Good question. After a few evenings of looking at this I haven't been able to easily quantify the risk involved in using any of these tools to actually attempt to make money. If you go look at being a liquidity provider for stablecoin pairs on Curve -- which "should" be a very low risk investment when everything works -- you're getting a rate of a few dozen percent APR. But the potential, hard-to-quantify costs and risks abound:

  • Is there a bug in Curve's contracts? Maybe. Maybe not, if nobody has found it yet? But who knows?
  • Will you screw up in the process of moving your money onto the platform, or get owned by malware that nabs your keys?
  • Will the underlying tokens, like DAI or USDC, lose their value for a hard-to-understand reason, even though they are not supposed to?
  • What will the APR be like three months from now? You're getting rewarded in CRV governance tokens, and it's expensive in gas to sell them instantly -- are those going to hold their value? Who exactly is on the other side of this trade?
  • Speaking of gas, given current prices, if you don't put a lot of money in, you're probably not looking at an attractive proposition.
  • Doing US taxes on this is ridiculous. There's software that is supposed to help build your return, but it's hard to keep the tax implications in your head and take them into account.

So... a few dozen percent APR is a lot. And it's not designed to be a scam. To paraphrase Eliezer, "The contract does not hate you, nor does it love you, but you have deposited ERC-20 tokens which it can use for something else."

I advise the emptor to caveat. But I also advise you to check this stuff out, if you independently value exploring an exciting new land of nonsense.




Good brainteasers for children?

LessWrong.com News - March 2, 2021 - 09:44
Published on March 2, 2021 6:44 AM GMT

I love asking children (and adults in some cases) the following question:

Five birds are sitting in a tree. A hunter takes a rifle and shoots one of them. How many birds are left? (If your answer is 'four' - try again!)

This is a system I/system II trap, akin to "which weighs more, a pound of feathers or a pound of gold?" In my experience kids (and adults) usually get this wrong the first time, but kids get a special kick out of something that sounds like a math problem they do for homework but turns out to be a bit more. I've also used the 2, 4, 8 puzzle for impromptu demos of confirmation bias. These are fun and engaging ways to teach kids about cognitive biases before they could realistically read the Sequences or Thinking Fast and Slow.
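
For the 2, 4, 8 puzzle, here is a minimal sketch of how to run it; I'm assuming the classic hidden rule, "any strictly increasing sequence", which is what makes testing only confirming examples (more doublings) so seductive.

    # Minimal sketch of the 2, 4, 8 rule-guessing game (Wason-style task).
    # Assumed hidden rule: any strictly increasing sequence. Kids who only
    # test "confirming" doublings never learn how broad the rule really is.

    def hidden_rule(a, b, c):
        return a < b < c

    def ask(triple):
        a, b, c = triple
        verdict = "fits the rule" if hidden_rule(*triple) else "does not fit"
        print(f"{triple} {verdict}")

    ask((2, 4, 8))      # fits -- but so do lots of non-doubling triples
    ask((3, 5, 7))      # fits: the rule is not "double each time"
    ask((8, 4, 2))      # does not fit: a disconfirming test is more informative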

Can we share or brainstorm any more? Some basic inclusion criteria (feel free to argue or suggest more):

  1. Problems/tasks should reliably trigger some cognitive bias or other "glitch."
  2. Any stepwise thinking needs to be finished within a child's attention span. Feel free to assume it's a particularly smart and motivated child if you need to. 

I don't have any kids of my own but have local friends with younger families. Having a few tricks like these really helps me create a "fun uncle" persona, but I'm also curious if parents have a different perspective or experience posing these kinds of questions to their kids.




The Flexibility of Abstract Concepts

LessWrong.com News - March 2, 2021 - 09:43
Published on March 2, 2021 6:43 AM GMT

I can discuss Daoist ideas with Taiwanese friends easily even if they have no background in Daoism. But when I try the same thing with white people it often feels as if I'm trying to explain quantum field theory to someone who has never heard of mathematics. It is easier for me to discuss Zen with a Taiwanese atheist than a Western psychonaut.

In his book review on The Geography of Thought, PeterMcCluskey draws attention to differences between Westerners' and East Asians' ways of thinking. This post elaborates on one specific difference: the flexibility of abstract concepts.

Western people are trained to think in terms of universal principles. East Asians are trained to think contextually.

The Dao [map] is not the Dao [territory]

East Asians are conditioned from an early age to understand that the map is not the territory. Saying "the map is not the territory" would be like saying "the sky is up", "money is valuable" or "don't kick kittens". The distinction between map and territory has been understood for literally thousands of years.

道可道也,非恒道也。

The "dao" referred to by "the dao" [map] is not the dao [territory].

― 《道德经》Dao de Jing by Laozi 老子, 4th century BC

Of course the map is not the territory. How could anyone who comprehends the concept of lying possibly confuse the two? And yet, I was talking to an American a couple months ago who literally did not believe me when I tried to explain how the word "infinity" has different meanings in different contexts.

Western Rhetoric

Western society has a long tradition of rhetoric where you debate the truth of statements like "murder is bad" or "the Greens should win the next election". Practically every grade school essay states a claim and then defends it. "A theme in The Great Gatsby is…".

American rhetoric reaches its purest form in the Lincoln-Douglas (LD) debate format. In the LD debate format two competitors debate a resolution like "Resolved: The United States ought to guarantee universal child care."[1] One side debates in favor. The other side debates against.

Resolutions never center around objective facts. (That would be a policy debate.) Instead, they come down to questions of value. Each debater defines victory in terms of a value criterion. A value is something universally agreed to be good like "justice". The criterion is a method of measuring the value.

Throughout the entire process it is implied that if you specify a value and a criterion, then the resolution has a truth value between zero and one (inclusive). Except that's almost never the case, because words don't have well-defined meanings.

Consider a simpler resolution: "Resolved: Murder is immoral."

Some people consider murder to be immoral.[2] But murder is just the killing of another person in violation of the law. There are lots of cases where murder is moral. You can start by shooting the guards at Dachau.

Abstract statements tend to be broad. Broad statements tend to have exceptions. When a blanket statement has lots of exceptions it is said to "depend on context". By training children in the tradition of adversarial competitive rhetoric, Western society trains its population to ignore context, because in a debate the map really is the territory. Americans even think of ourselves as context-independent personalities.

“Tell me about yourself” seems a straightforward enough question to ask of someone, but the kind of answer you get very much depends on what society you ask it in. North Americans will tell you about their personality traits (“friendly, hard-working”), role categories (“teacher,” “I work for a company that makes microchips”), and activities (“I go camping a lot”). Americans don’t condition their self-descriptions much on context. The Chinese, Japanese, and Korean self, on the other hand, very much depends on context (“I am serious at work”; “I am fun-loving with my friends”). A study asking Japanese and Americans to describe themselves either in particular contexts or without specifying a particular kind of situation showed that Japanese found it very difficult to describe themselves without specifying a particular kind of situation—at work, at home, with friends, etc. Americans, in contrast, tended to be stumped when the investigator specified a context—“I am what I am.” When describing themselves, Asians make reference to social roles (“I am Joan’s friend”) to a much greater extent than Americans do. Another study found that twice as many Japanese as American self-descriptions referred to other people (“I cook dinner with my sister”).

Quote from The Geography of Thought in a comment by Kaj_Sotala

Post-Modernism

Western philosophy's reaction to taking words too seriously was the Post-Modernist movement. The Post-Modernists improved Western philosophy by throwing out the map. They damaged Western society by throwing out the territory too.

The more labels you have for yourself, the dumber they make you.

Keep your identity small by Paul Graham

No, no, NO. This is backwards. The mistaken advice to "keep your identity small" stems from the erroneous assumption that identities are well-defined. It confuses the label with the underlying reality. A small identity merely does no harm. You can do better than that. The best approach is strong beliefs loosely held. If you can shed your identities like you shed clothes then you can keep your identity large without mistaking your identity for your self. You can get the best of all worlds.

The real lesson here is that the concepts we use in everyday life are fuzzy…. Even a concept as dear to us as I. It took me a while to grasp this, but when I did it was fairly sudden, like someone in the nineteenth century grasping evolution and realizing the story of creation they'd been told as a child was all wrong…. Everyday words are inherently imprecise.

How to Do Philosophy by Paul Graham

That's better. And it illustrates how the flexibility of abstract concepts is not hammered into every child in the West until it becomes second nature. If you grow up in East Asia then the first, last, and most important thing you are taught is how to blend in.

Context Switching

There's an old Daoist teaching technique where you say something like "pain is not the unit of effort" and then say the opposite, like "pain is the unit of effort". The Western response is to figure out which one is true. The Eastern response is to quickly shift contexts, because each statement is true in the appropriate context, à la Chapter 1 of The Art of War.

Consider race. Race, like all abstract concepts, is flexible and context-dependent. People have mistaken me for Indian, Japanese, Chinese and Ethiopian. I don't mean I told them I was x and they didn't argue. I don't mean I walked around Japan without anyone noticing me.
I mean an Ethiopian, unprompted, literally asked me "Are you Ethiopian?" while both of us stood on American soil and then, when I answered no, he asked if my family was Ethiopian. I've been asked "Are you a Muslim?" in Tokyo. My race is a function of where I am, who I'm with, who I'm talking to, my language, my accent, my clothing, my posture…and sometimes even the color of my skin.

There is a secret game Asian-Americans play among ourselves called the "What kind of Asian are you?" game. Whenever an Asian-American meets another Asian-American we try to guess each other's nationality. If you guess right you gain charisma points. If you guess wrong you lose charisma points. Of course, you don't literally say "I know you are a <whatever>." That is a faux pas. Instead you imply it by demonstrating common cultural understandings not shared by the wider Western world.

What makes this game interesting is you can't do it by physical appearance—national boundaries aren't drawn phenotypographically. Nor can you do it from accent. You have to read subtle cultural cues. For example, I like roleplaying a Chinese nationalist when I'm online—nevermind that my family is from the Republic of China[3].

When I want to look white I use words like "Manchuria"[4].

Be the grey man.

  1. This topic is the 2021 March/April Topic of the National Speech & Debate Association ↩︎

  2. Thank you MaxG for granting me permission to link to your post. ↩︎

  3. This sentence is a joke about 20th century East Asian history. ↩︎

  4. This sentence is another joke about 20th century East Asian history. ↩︎



Discuss

Tap Water and Filtration

Новости LessWrong.com - 2 марта, 2021 - 06:45
Published on March 2, 2021 3:45 AM GMT

I've been wondering about the safety and quality of the tap water in my city. I've always been assured that it's clean right out the tap. But I spent this past year living rurally with a groundwater well on-site, and when I visited the city I was struck by how bad the tap water tasted to me.

Given its frequency of use in all our lives, I am generally surprised by the lack of discussion around water quality in my circles. It seems like nobody talks about it, and almost everyone I know drinks (and cooks with, etc.) what I now consider bad-tasting tap water. Does anyone have good leads or resources on this topic? I'd like to learn more and am unsure where to look first.

Some questions under consideration:

  • What is 'safe' to drink?
  • What makes water quality good vs bad?
  • Are taste and safety/quality correlated?
  • Any recommended water quality tests?
  • To filter or not to filter?
  • Which filtering methods actually work?
  • What questions would you be asking?


Discuss

Are the Born probabilities really that mysterious?

Новости LessWrong.com - 2 марта, 2021 - 06:08
Published on March 2, 2021 3:08 AM GMT

In Eliezer's Quantum Mechanics sequence, he presents the Born probabilities as still being mysterious in our understanding. In particular, the fact that it's the only non-linear phenomenon in quantum mechanics is considered quite strange.

However, I've been reading Everett's "Many Worlds" thesis, and he derives the Born probabilities (pp. 69-72) by asking what happens to an observer as the system evolves. If we posit a measure M for which the measure of a trajectory (of an observer) at one time equals the sum of the measures of each trajectory "branching" from the initial one, then he shows it must be (up to a multiplicative constant) the squared amplitude.
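To make the shape of that derivation concrete, here is a rough sketch in my own notation (a paraphrase, not a quotation from the thesis): write the state as a superposition over orthonormal observer branches and ask which branch measure, depending only on the amplitudes, can be additive over branchings.

```latex
\[
\psi \;=\; \sum_i a_i \phi_i ,
\qquad
\lVert \psi \rVert^2 \;=\; \sum_i \lvert a_i \rvert^2 .
\]
% Posit a branch measure m that depends only on the magnitude of the
% amplitude and is additive over branchings:
\[
m\bigl(\lVert \psi \rVert\bigr) \;=\; \sum_i m\bigl(\lvert a_i \rvert\bigr).
\]
% Substituting x_i = |a_i|^2 and g(x) = m(\sqrt{x}) turns this into
% g(x_1 + x_2 + \dots) = g(x_1) + g(x_2) + \dots, whose well-behaved
% solutions are linear, so:
\[
g(x) \;=\; k x
\quad\Longrightarrow\quad
m\bigl(\lvert a_i \rvert\bigr) \;=\; k \, \lvert a_i \rvert^2 ,
\]
% i.e. the Born weights, fixed up to the multiplicative constant k.
```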

He then claims that this is "fully analogous" to Liouville's theorem, which can be interpreted as a law of Conservation of Information. So taking this seriously, the Born probabilities are as inevitable as the 2nd law of thermodynamics and the uncertainty principle, among many other well-known consequences of Liouville's theorem.

In that case, it seems to me that most of the mystery has been washed away. But I'm not quite sure to what extent he means all of that when saying "fully analogous". In particular, are the Born probabilities truly an inevitable consequence of Conservation of Information?



Discuss

Economic Class

Новости LessWrong.com - 2 марта, 2021 - 04:58
Published on March 2, 2021 1:58 AM GMT

Social class is fossilized wealth. There are three economic classes in the USA.

  • If you do physical work then you belong to the "working class", "lower class", "blue collar" or simply "labor".
  • If you do nonphysical work then you belong to the "middle class" or "white collar".
  • If other people work for you then you belong to the "upper class" or "bourgeoisie".
Labor

The working class lives in the physical world. Blue collar problems are physical problems like injury, health, violence and broken machines. At the bottom of the working class you find mindless labor like farming, food service, cashiers and—increasingly—warehouses. (Petty crime is underclass.)

College used to be a ticket out of the lower class. This path is increasingly difficult due to price increases, credential inflation and opaque acceptance criteria designed to keep the lower class in its place. The military still functions in its traditional role as a ticket into the middle class, but only if you get into the right specialties.

In the middle of the working class are the skilled trades. All the traditional handyman jobs live here: plumber, roofer, electrician, drywall repair. Among women you can find lots of nursing. Retail sales belongs here too, as do police officers. Rare occupations include jugglers, clowns, close-up magicians and other small-scale entertainers. (Media-based entertainers belong to the middle class.) Psychics and priests serve the working class but are themselves middle class.

Attempting to break into the middle class can be risky due to the sticker price of college plus the lost wages. If you go into the military there's no guarantee they'll teach you anything useful. A more prudent goal may be to break into the skilled trades. Blue collar work doesn't require credentials the way white collar work does. If you can do something then employers will generally allow you to do it. If you aren't allowed to do it then it's because there's a union rule or government regulation getting in your way. Your productivity is fundamentally limited by physical reality. No matter how good of a janitor you are, there is a physical limit to the number of rooms you can clean in a day.

The working class might go to trade school for a year or two, but most of what you know you learned from friends, absorbed from family, taught yourself or—most likely—learned on the job. It's straightforward to learn things by doing them because the things you work with are physical and therefore intuitive. If you break into the skilled trades then you can expect to do the same thing for a long time.

Professional athletes are the pinnacle of the working class, but professional athletes are rare. Far more common are the petty bourgeoisie. The petty bourgeoisie consists of small business owners who work alongside their employees. Small business owners tend to be humble, wise and down-to-earth. If you can fully automate your business then you jump over the middle class all the way into the bourgeoisie.

White Collar

The middle class does intellectual labor. The problem with intellectual labor is it's hard to tell whether someone is doing it right. If there were clear criteria for success then the job would have been automated away by now. Moreover, if you knew what the right thing to do was then you wouldn't need to hire someone else to do it.

If it were possible to measure individual white collar workers' output then the middle class would resemble the working class. It even does in a handful of fields where individual performance can be measured: sales, contracting and startup entrepreneurship. These ruthlessly meritocratic fields tend to be dominated by confident people who speak clearly.

Alas, most of the middle class works for large corporations. When white collar workers collaborate in teams it is impossible to tell how much each individual employee is worth. Middle class posturetalk is a byproduct of the constraint that it's hard to measure the productivity of individual white collar workers.

We have a phrase to describe what happens when rankings have to be created without any meaningful criteria. We say that the situation degenerates into a popularity contest.

Why Nerds are Unpopular by Paul Graham

Welcome to Dilbertland.

Generally-speaking, corporations cannot measure individual productivity. The result is a Market for Lemons where everyone is promoted to their level of incompetence. Corporations have a few methods of fighting back, all of them crude and imprecise:

  • Hiring based on IQ. Hiring directly based on IQ is illegal. However, your ability to solve abstract questions about computer algorithms is highly dependent on your IQ. It is legal to ask questions about computer algorithms if you are hiring an engineer (even if that engineer never uses them on the job).
  • Credentialing. It's hard to measure if someone is a good electrical engineer but it's easy to measure if someone has a degree in electrical engineering or used to work for Facebook. This also skews the IQ distribution in your favor.
  • Firing awful people.

Putting all of this together, you get a population of incompetent employees obsessed with credentials. They cannot be rewarded for exceptional productivity but they are punished for being unpopular. The result is a painstakingly conformist population, quick to signal allegiance to the winning side of every conflict, especially if it means punishing the outgroup (which reduces competition). When advantage shifts they jump ship. The middle class's opinions are painstakingly crafted to win a Keynesian beauty contest.

The Upper Class

It is a common middle class myth that you can work your way up the corporate ladder to the top. This is a deliberately-crafted illusion. No matter how long you work for Microsoft you will never save enough money to buy Microsoft. You don't get to the top of the class pyramid by working. You are rich when other people work for you.



Discuss

Notes on "Bioterror and Biowarfare" (2006)

Новости LessWrong.com - 2 марта, 2021 - 03:43
Published on March 2, 2021 12:43 AM GMT

Cross-posted from the EA Forum

I recently finished reading Malcolm Dando’s 2006 book Bioterror and Biowarfare: A Beginner’s Guide, following Darius Meissner’s recommendation. I’d recommend the book for people who want to learn more about bioterror, biowarfare, and/or chemical weapons, and who don’t yet have extensive knowledge on those topics. (I fit that description, and have ranked the book as the 24th most useful-to-me of the 49 effective-altruism-related books I’ve read since learning about effective altruism (EA).) 

Here, I'll: 

  1. Summarise my six "main updates" from this book
  2. Share the Anki cards I made for myself when reading the book
    • I intend this as a lower-effort alternative to writing notes specifically for public consumption or writing a proper book review
    • If you want to download the cards themselves to import them into your own deck, follow this link
  3. Share a few final thoughts on whether you should read this book

(Since the first of those three parts seems the most valuable per word, and the second part is quite long, I’ve split parts 2 and 3 into comments below the post itself.)

My hope is that this post will help some people to quickly:

  • Gain some key insights from the book
  • Work out whether reading/listening to the book is worth their time

(Note: Before commenting on this post, it may be worth considering whether your comment might pose an infohazard [see also]. Feel free to send me a direct message instead/first. Relatedly, I ran this post by someone before publishing it.)

My main updates from the book 

This section briefly summarises the main ways in which the book shifted my beliefs on relatively high-level points that seem potentially decision-relevant, as opposed to just specific facts I learned about. Note that each of those updates was more like a partial shift towards even more / somewhat less credence in something, rather than a total reversal of my previous views. (See also Update Yourself Incrementally.)

  1. It seems that the picture I’d gotten regarding bioterror and biowarfare from some talks/writings by EAs (especially The Precipice) was basically correct, not misleading, and not missing extremely important points
    • I already would’ve guessed that that’s the case, but my guess would’ve only had perhaps ~75% confidence.
      • So this updates me towards placing a bit more trust in the analysis I get from EAs (or at least professional high profile EA researchers)
        • Perhaps even on topics that those EAs have less than a year’s worth of expertise on themselves
  2. There was more overall state-level bioweapons activity during WW1, in the interwar period, and during WW2 than I’d have guessed
    • This should push my estimate of the existential risk posed by biotechnology/bioweapons slightly upwards
  3. Historically, bioweapons activity has focused more on anti-animal and anti-plant weapons/attacks, relative to anti-personnel ones, than I’d have guessed
    • Perhaps this should make me think of the “bio” part of anthropogenic biorisk as “risks from biotechnology/bioweapons”, rather than “risks from engineered pandemics”?
      • Because this suggests some of the risk might come not from human pandemics, but rather from anti-animal or anti-plant attacks causing major agricultural shortfalls which in turn indirectly harm the long-term future via the sorts of paths that ALLFED and Beckstead (2015) worry about
        • But I’m not sure how significant that risk pathway is
    • Perhaps this should also push my estimate of the existential risk posed by biotechnology/bioweapons slightly downwards?
      • This would be based on the premises that (a) this historical fact implies there’ll be less anti-personnel bioweapons activity than I’d have thought, and (b) anti-personnel bioweapons activity contributes more to existential risk than anti-animal or anti-plant bioweapons activity does
      • But I’m not sure if either of those premises hold
    • I think this should also update me towards placing at least a bit more credence on ALLFED’s / David Denkenberger’s concern about agricultural shortfalls caused by things like a possible “super weed, super crop pathogen, super bacterium, or super crop pest” (source)
      • And that should in turn slightly update me towards placing more credence on other claims of theirs which I haven’t evaluated
  4. The author presents some pathogens as quite concerning and says they are placed in categories for relatively high concern by some relevant authorities, despite those pathogens seeming to me to be extremely unlikely to cause an existential catastrophe, and quite unlikely even to cause a global catastrophe
    • This doesn’t really affect my estimate of the existential risk posed by biotechnology/bioweapons, because:
      • Other pathogens the author mentions do seem to pose more risk from a longtermist perspective
      • This only relates to existing pathogens, not possible future pathogens
      • I already believed that posing an existential risk is a high bar that most pathogens won’t meet
    • Also, of course, something can be bad without meeting the definition of a global catastrophe!
    • But this point does suggest to me that much of the general vibe of concern that comes from some experts on biorisk might be about much smaller scale (though still bad) disasters than what longtermists are most concerned about
  5. This book somewhat increased my credence in the view that international law (e.g., the 1975 Biological and Toxin Weapons Convention (BTWC)) can be quite effective for tackling issues like biorisk (rather than e.g. it just almost never being taken seriously), and that it is very important for reducing biorisk
  6. During the George W. Bush administration (and the Clinton administration, to a lesser extent), US reluctance or opposition to certain improvements to the BTWC seems to have been the largest or one of the largest obstacles to those improvements happening
    • (See e.g. the Wikipedia article’s section on the failed negotiation of a verification protocol.)
    • This somewhat increased my credence that the US often fails to take a leadership role in, or actively undermines, diplomatic or international law efforts that could reduce existential or global catastrophic risks
      • I already had substantial credence in that from an international relations university unit and some of my nuclear weapons research
    • But then at other times and in other ways, the US had a very positive role
    • Altogether, this updates me towards higher credence that influencing relevant US government actions could be highly valuable
      • Since we can’t just count on the best actions being taken by default, but nor does it seem inevitable that bad actions will always be taken

My thanks to Darius Meissner for helpful comments on an earlier draft of this post (including the parts I've now split out into comments).



Discuss

Forecasting Newsletter: February 2021

Новости LessWrong.com - 2 марта, 2021 - 00:51
Published on March 1, 2021 9:51 PM GMT

Highlights

Index
  • Prediction Markets & Forecasting Platforms
  • In The News
  • Recent Blog Posts
  • Personal Forecasts
  • Hard To Categorize
  • Long Content

Sign up here or browse past newsletters here.

Prediction Markets & Forecasting Platforms

A new US company, Kalshi, has gotten regulatory approval from the Commodity Futures Trading Commission to create a betting platform. I don't really have many thoughts given that they haven't launched yet. I expect them to use newer technologies than PredictIt just because they are newer, and I expect them to have somewhat lower fees because it would make business sense. They are planning to open in March. From the Wall Street Journal (unpaywalled archive link): 

Alfred Lin, a partner at Sequoia and a Kalshi board member, said Kalshi’s embrace of regulation was one of the reasons his firm invested in the startup. “They’re taking regulation fairly seriously,” he said. “Companies that move fast and break things are not going to work in this regulated environment.”

Hypermind has a new forecasting tournament: Where will AI be in 2023?, with a prize pot worth $7,000 so far.

Forecasters on Good Judgment Open have the opportunity to receive feedback from superforecasters if they participate in the Think Again challenge, make 10 predictions and complete a survey.

With QURI, I've been improving Metaforecast, a search tool for probabilities. It now has more prediction platforms, a nicer interface, and more search options. Readers might be interested in COVID predictions, readers from the EA movement might be particularly interested in all GiveWell and OpenPhilanthropy predictions.

Metaforecast search for "Israel"

In the News

Biden ordered the creation of a National Center for Epidemic Forecasting and Outbreak Analytics (secondary source). The new agency looks somewhat related to a previous proposal mentioned in this newsletter:  Forecasting the next COVID-19.

Suboptimal demand forecasting for semiconductor chips has led to pausing automobile production in the US. On the one hand, automakers struggle to compete for chips against more profitable tech products—e.g., iPhones—on the other hand, US sanctions on China’s Huawei and SMIC put even more pressure on semiconductor production capacity.

The European Central Bank will be holding its 11th Conference on Forecasting Techniques as an online event on 15 and 16 June 2021. 

Hewlett Packard has built a new supercomputer dedicated to weather forecasting for the U.S. Air Force. The new system advertises a peak performance of 7.2 petaflops. This is comparable to estimates of the human brain, and around two orders of magnitude lower than the fastest supercomputer.

Future Returns: Using the Past to Forecast the Future of the Markets. An analyst at Fidelity looks at the historical base rate for market behavior in situations similar to the current, COVID-19-affected, performance.

Where The Latest COVID-19 Models Think We're Headed — And Why They Disagree, by FiveThirtyEight

Recent blog posts

Boring is back, baby! Experienced PredictIt bettor discusses the future profitability of political predictions:

The political betting community has been quietly dreading the potential boringness of the Biden presidency – without politics being so crazy, engagement should fall off and so should the deposits of new accounts coming in to bet on whatever wild stuff Trump was up to next

I’d more or less written off this year as one in which I’d be happy to earn a third what I did last year on PredictIt and maybe try doing some grown-up work or something (lol, as if). Then it turns out January was one of the most interesting months in politics of the entire Trump presidency (to put it mildly) and engagement has remained fairly substantial. But that doesn’t mean the doldrums aren’t coming.

The following three articles, among others, won a “Forecasting Innovations Prize” I had previously co-organized under QURI.

Crowd-Forecasting Covid-19 describes the results of a COVID-19 crowd-forecasting project created during the author's PhD. This is probably the one app in which human forecasters can conveniently forecast different points in a time series, with confidence intervals. The project’s forecasts were submitted to the German and Polish Forecast Hub, and they did surprisingly well in comparison with other groups. They are looking for forecasters, and will soon expand to cover 32 European countries as part of the yet-to-be-launched European Forecast Hub.

Incentivizing forecasting via social media explores the implications of integrating forecasting functionality with social media platforms. The authors consider several important potential issues at length, propose possible solutions, and give recommendations regarding next steps. The scenario they consider—if it were to occur—could have a large impact on the “information economy”.

Central Limit Theorem investigation visualizes how quickly the central limit theorem works in practice, i.e., how many distributions of different types one has to sum (or convolve) to approximate a Gaussian distribution in practice (rather than in the limit). The visualizations are excellent and give the readers intuitions about how long the central limit theorem takes to apply. As a caveat, the post requires understanding that the density of the sum of two independent variables is the convolution of their densities. That is, that when the post mentions “the number of convolutions you need to look Gaussian”, this is equivalent to “the number of times you need to sum independent instances of a distribution in order for the result to look Gaussian”. This point is mentioned in an earlier post of the same sequence.
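For readers who want to play with that idea directly, here is a minimal sketch (my own toy code, not taken from the post): it repeatedly convolves a discretized exponential(1) density with itself and measures how far the n-fold sum is from its Gaussian approximation. The grid spacing and the choice of starting distribution are my own.

```python
import numpy as np

# Discretize an exponential(1) density on a grid. The grid starts at 0, so a
# sliver of the comparison Gaussian's left tail is ignored for small n.
dx = 0.02
x = np.arange(0, 15, dx)
density = np.exp(-x)

def convolve_n(p, n, dx):
    """Density of the sum of n i.i.d. copies, via repeated convolution."""
    out = p
    for _ in range(n - 1):
        out = np.convolve(out, p) * dx  # convolution of densities = density of the sum
    return out

for n in (1, 2, 5, 30):
    q = convolve_n(density, n, dx)
    grid = np.arange(len(q)) * dx
    mean, var = n * 1.0, n * 1.0  # exponential(1) has mean 1 and variance 1
    gauss = np.exp(-(grid - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    # Total variation distance between the n-fold sum and its Gaussian approximation.
    tv = 0.5 * np.sum(np.abs(q - gauss)) * dx
    print(f"n = {n:2d}: total variation distance from Gaussian is about {tv:.3f}")
```

Swapping in a more skewed starting density should show the post's qualitative point: some distributions need many more convolutions than others before they look Gaussian.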

I stumbled upon Alert Foxes, a blog with a few forecasting posts by Alex Foster (perfect anagram!). I particularly enjoyed the decompositions of his predictions on US election questions.

Vitalik Buterin writes about his experience betting on the US election using crypto prediction markets.

AstralCodexTen—previously SlateStarCodex, a blog I hold in high regard and which is probably known by everyone on LW—has started a weekly series discussing forecasting questions (1, 2, 3).

Personal forecasts

A piece of feedback I got at the end of last year about this newsletter was to talk more about my own predictions, so here are two which I recently got wrong and one that I got right:

The first one was Will Kim Kardashian or Kanye West file for divorce before March 1, 2021?. After some investigation, I thought that they would try to time the divorce to maximize news about the last season of Keeping Up with the Kardashians, and was quite surprised when they didn't. Other bettors were also surprised, as the price on Polymarket looked as follows:

PolyMarket prices for question "Will Kim Kardashian or Kanye West file for divorce before March 1, 2021?"

The second prediction I got grievously wrong was How much new funding will facial recognition companies raise between July 1 and December 31, 2020, inclusive?, on CSET-foretell. With the passage of time I updated away from the option "Less than $200 million", which ended up being chosen for resolution. The resolution source, Crunchbase, describes Acesso Digital as "a developer of facial recognition and identification technology created to solve document and process management", but doesn't classify it in the "facial recognition" category. In September 2020, Acesso Digital raised R$580M (circa $90 million), which would have been enough to raise the final question resolution to the next category ("More than $200 million but less than or equal to $500 million").

Thirdly, I assigned a 50% probability to winning an EA forum prize for a research project, which I did.

Hard to Categorize

The Illinois Commission on Government Forecasting and Accountability is a government agency in charge of making e.g., revenue predictions. Judging by its webpage, it seems somewhat outdated. A similar agency in California appears to be more up to date. It might be interesting for platforms like Metaculus to try to partner with them.

Meta and consensus forecast of COVID-19 targets (secondary source) provides a variety of forecasts about COVID. They provided forecasts about US deaths conditional on vaccination rates, which could have been particularly action-guiding. They also find that forecasts which aggregate predictions from infectious disease experts and “trained forecasters” have wider uncertainty intervals than the COVID-19 Forecast Hub.

Upstart, a company which uses machine learning/data analysis to predict loan repayment, is looking for one or more forecasters with a good track record to do some consulting work. If you're interested, let me know.

Long Content

Evaluating Short-term Forecast among Different Epidemiological Models under a Bayesian Framework (supporting data, webpage). The authors notice that the relative merits of different epidemic forecasting methods and approaches are difficult to compare. This is because they don’t normally have access to the same data or computational capacity in the wild. The authors set out to carry out that comparison themselves, but they don’t arrive at any sharp conclusions, other than ARIMA not being able to keep up with stochastic approaches.

from 'Evaluating Short-term Forecast among Different Epidemiological Models under a Bayesian Framework'

We calibrate stochastic variants of growth models and the standard SIR model into one Bayesian framework to evaluate their short-term forecasts. 

Broadly speaking, there are five types of approaches to forecasting the number of new cases or the expected total mortality caused by the COVID-19: 1) time-series forecasting such as autoregressive integrated moving average (ARIMA) [...]; 2) growth curve fitting based on the generalized Richards curve (GRC) or its special cases [...]; 3) compartmental modeling based on the susceptible-infectious-removed (SIR) models or its derivations [...]; 4) agent-based modeling [...]; 5) artificial intelligence (AI)-inspired modeling.

There has been a growing debate amongst researchers over model performance evaluation and finding the best model appropriate for a certain feature (cases, deaths, etc.), a particular regional level (county, state, country, etc.), and more. Fair evaluation and comparison of the output of different forecasting methods have remained an open question, since models vary in their complexity in terms of the number of variables and parameters that characterize the dynamic states of the system.

Although a comparison of predictive models for infectious diseases has been discussed in the literature, to our best knowledge, no existing work systematically compares their performances, particularly with the same amount of data information.

None of the models proved to be golden standards across all the regions in their entirety, while the ARIMA model underperformed all stochastic models proposed in the paper.
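For readers unfamiliar with the "compartmental modeling" category named in the excerpt, here is a minimal deterministic SIR sketch with toy parameters of my own choosing; the paper itself calibrates stochastic variants inside a Bayesian framework, which this does not attempt.

```python
# Minimal deterministic SIR model, integrated with Euler steps (toy parameters).
beta, gamma = 0.3, 0.1         # transmission and recovery rates per day (so R0 = 3)
S, I, R = 0.999, 0.001, 0.0    # fractions of the population
dt, days = 0.1, 180

for _ in range(int(days / dt)):
    new_infections = beta * S * I * dt
    new_recoveries = gamma * I * dt
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries

print(f"After {days} days: S={S:.3f}, I={I:.3f}, R={R:.3f}")
```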

Comparing weather forecasts in Tasmania now to those made 30 years ago, a news article mentions that the amount of available data has increased 13.5 million times.

The Kelly Criterion, visualized in 3D.

The Kelly Criterion in 3D, visualized by lsusr. Source: Less Wrong

On the topic of Kelly, see also Kelly isn't (just) about logarithmic utility, Kelly *is* (just) about logarithmic utility and Never Go Full Kelly.

The EpiBench Platform to Propel AI/ML-based Epidemic Forecasting: A Prototype Demonstration Reaching Human Expert-level Performance (secondary source).

During the COVID-19 pandemic, a significant effort has gone into developing ML-driven epidemic forecasting techniques. However, benchmarks do not exist to claim if a new AI/ML technique is better than the existing ones. The "covid-forecast-hub" is a collection of more than 30 teams, including us, that submit their forecasts weekly to the CDC.

It is not possible to declare whether one method is better than the other using those forecasts because each team's submission may correspond to different techniques over the period and involve human interventions as the teams are continuously changing/tuning their approach. Such forecasts may be considered "human-expert" forecasts and do not qualify as AI/ML approaches, although they can be used as an indicator of human expert performance.

We are interested in supporting AI/ML research in epidemic forecasting which can lead to scalable forecasting without human intervention. Which modeling technique, learning strategy, and data pre-processing technique work well for epidemic forecasting is still an open problem. To help advance the state-of-the-art AI/ML applied to epidemiology, a benchmark with a collection of performance points is needed and the current "state-of-the-art" techniques need to be identified. We propose EpiBench a platform consisting of community-driven benchmarks for AI/ML applied to epidemic forecasting to standardize the challenge with a uniform evaluation protocol.

In this paper, we introduce a prototype of EpiBench which is currently running and accepting submissions for the task of forecasting COVID-19 cases and deaths in the US states, and we demonstrate that we can utilize the prototype to develop an ensemble relying on fully automated epidemic forecasts (no human intervention) that reaches the human-expert-level ensemble currently being used by the CDC.

In an experiment, the researchers compared 3 AI and machine learning forecasting methods and 30 methodologies pulled from published research using EpiBench. They found that while many of the forecasts reportedly used the same model (SEIR), they predicted “drastically” different outcomes. Moreover, two methodologies identical except that one smoothed data over 14 days versus the other’s 7 days varied “significantly” in their performance, suggesting that data preprocessing played a nontrivial role.

Note to the future: All links are added automatically to the Internet Archive. In case of link rot, go here and input the dead link.

"I never think of the future. It comes soon enough".

Albert Einstein, said probably as a joke (source).



Discuss

If the cosmos isn't inherently 3d, then...

Новости LessWrong.com - 2 марта, 2021 - 00:45
Published on March 1, 2021 9:45 PM GMT

Assuming that the cosmos is not inherently three dimensional - assuming, that is, that 3d is anthropic and tied to the biology of the observer - why would our senses pick it up as 3d?



Discuss

Weighted Voting Delenda Est

Новости LessWrong.com - 1 марта, 2021 - 23:52
Published on March 1, 2021 8:52 PM GMT

Unlike a number of other issues, this one I didn't call in advance, though in retrospect it's, if anything, much more obvious than other things I did call out. Weighted voting on LW is a catastrophic failure in progress and must be destroyed: at minimum the ability for it to be visible, and preferably the ability for it to affect anything at all except, at most, the number displayed next to a user on their profile page.

I've said in the past that

The Hamming problem of group rationality, and possibly the Hamming problem of rationality generally, is how to preserve epistemic rationality under the inherent political pressures existing in a group produces.

It is the Hamming problem because if it isn’t solved, everything else, including all the progress made on individual rationality, is doomed to become utterly worthless. We are not designed to be rational, and this is most harmful in group contexts, where the elephants in our brains take the most control from the riders and we have the least idea of what goals we are actually working towards.

And, closely connected but somewhat separable:

Most things we do are status-motivated, even when we think we have a clear picture of what our motivations are and status is not included in that picture. Our picture of what the truth looks like is fundamentally warped by status in ways that are very hard to fully adjust for.

I also said, particularly for the latter, that "the moderation policies of new LessWrong double down on this". I stand by that, but I missed a bigger issue: the voting system, where higher karma grants a bigger vote, also doubles down on it. Big names are overrepresented on the front page, at the top of the comments section, and everywhere else you can discover new LW content. This was somewhat understandable when LW was working itself out of its doldrums and influential people were making an effort to put good content here, but if that was the driver, it would have gotten less noticeable over time, and instead it has gotten more blatant.

Individuals can opt-out of seeing these votes, but to a first approximation that's useless. Everybody knows that everyone can see the strength of votes, even if that isn't strictly true; social proof is stronger than abstract inference. Social proof is bad, very bad, at best something to be used like someone carrying around two slightly-subcritical uranium masses in their pocket, where a small slip could make them fit together and kick off a chain reaction. It is Dark Arts at their most insidious because, like the Unbreakable Vow, it's tightly integrated into society, extremely useful for some goals we endorse, and very difficult to stop using. And we can each opt out of individually seeing this signal, but we can't opt out of the community seeing and displaying social proof and 'everybody knowing' that, if not them, everybody else is doing so. Even if, in point of fact, 90% of users are opting out of seeing vote totals*, each user 'knows' that everyone, or nearly everyone, other than themself, sees them, and knows that everyone else sees them, and knows that they know, etc., etc.; social proof is a very effective means of establishing common knowledge, which makes it extremely useful, except that it is virtually just as effective at establishing inaccurate common knowledge as it is for accurate.

The medium... is the mess.

It is not sufficient, for establishing common knowledge of a fact, that the fact be true. But it is also, crucially, not necessary. There's a party game called 'Hive Mind': you get a prompt, and write down six things that fit it. You get points based on how many other people wrote them down. If the prompt is "insect", one of the six should say "spider". You know a spider is not an insect; probably so does everyone else around the table. But everybody knows that a spider is a bug and a bug is an insect, so everybody knows "spider" belongs on the list. Never mind that it's false; a spider is not an insect but there's no common knowledge of that fact, and there is common knowledge of its opposite.

So, much like the spider: everybody knows that the big names are more correct than the little fish. Just about everyone can, and occasionally does, notice and remind themself that this is not inherently true, and that the big names should get more weight only because they have demonstrated the ability to generate good ideas in the past and thereby earned a big name. But there is no common knowledge of that, because the voting system is structured to promote common knowledge that the big names are always right. This is a catastrophe, even if the big names are almost always right.

Possible solutions, in ascending order of estimated usefulness starting from the mildest:

  • All sort orders that sort by vote score use only the unweighted vote (but may still display the weighted vote)
  • Comments cease to count toward weight-increasing karma
  • All users who have not deliberately opted in to the weighted-vote system see only unweighted votes
  • Comments can no longer be strong-voted, and all votes on them revert to 1 vote per person
  • ...also apply this to all posts outside Meta
  • ... ...and to those within Meta
  • Remove the capacity to cast votes at all
  • Remove weighted voting from comments and posts, plus remove the capacity to manually curate 'Frontpage'
  • Make it impossible to cast votes on comments, period; votes are exclusively for top-level posts
  • ...also include some amount of removing the ability to cast weighted votes for posts
  • ...and default to sorting comments randomly, or by size of their subtree ('sort by activity')
  • ... ...plus removing weighted votes, i.e. combine all those last three

I don't really expect any of this to be done. No one seems to be willing to treat small-group politics or status-corrupting instincts as important, and people who are respected much more than me are actively working in the opposite direction in the name of instrumental rationality. But it needs to be said.

* I do not believe this; I would guess about 5% of users opt out. I would be interested to learn the true number.



Discuss

Kelly *is* (just) about logarithmic utility

Новости LessWrong.com - March 1, 2021 - 23:02
Published on March 1, 2021 8:02 PM GMT

This post is a response to SimonM's post, Kelly isn’t (just) about logarithmic utility. It's an edited and extended version of some of my comments there.

To summarize the whole idea of this post: I'm going to argue that any argument in favor of the Kelly formula has to go through an implication that your utility is logarithmic in money, at some point. If it seems not to, it's either:

  • mistaken
  • cleverly hiding the implication
  • some mind-blowing argument I haven't seen before.

Actually, the post I'm responding to already mentioned one argument in this third category, which I'll mention later. But for the most part I think the point still stands: the best reasons to suppose Kelly is a good heuristic go through arguing logarithmic utility.

The main point of this post is to complain about bad arguments for Kelly -- something which I apparently enjoy doing rather a lot. Take that as an attention-conservation warning.

The rest of this post will consider various arguments in favor of the Kelly criterion (either as a decent rule of thumb, or as the iron law of investment). Each section considers one argument, with a section title hopefully descriptive of the argument considered.

1: It's About Repeated Bets

This argument goes something like: "If you were to make just one bet, the right thing to do would be to maximize expected value; but for repeated bets, if you bet everything, you'll lose all your money quickly. The Kelly strategy adjusts for this."

Suppose you'd just want to maximize expected money in a single-bet case. 

A Bayesian wants to maximize E[u(S⋅x)], where x is your starting money and S is a random variable for the payoff-per-dollar of your strategy. In a two-step scenario, the Bayesian wants to maximize E[u(S1⋅S2⋅x)]. And so on.

If your preferred one-step strategy is one which maximizes expected money, this means u(x)=x for you. But this allows us to push the expectation inwards. Look at the two-step case: E[u(S1⋅S2⋅x)] = E[S1⋅S2⋅x] = E[S1]⋅E[S2]⋅x (the last step holds because we assume the random variables are independent). So we maximize the total expected money by maximizing the expected money of S1 and S2 individually.

Similarly for any number of steps: you just maximize the expectation in each step individually.

Note that the resulting behavior will be crazy. If you had a 51% chance of winning a double-or-nothing bet, you'd want to bet all the money you have. By your own probability estimates, you stand a 49% chance of losing everything. From a standard-human perspective, this looks quite financially irresponsible. It gets even worse for repeated bets. The strategy is basically "bet all your money at every opportunity, until you lose everything." Losing everything would become a virtual certainty after only a few bets -- but the expectation maximizer doesn't care. The expectation maximizer happily trades away the majority of worlds, in return for amassing exponentially huge sums in the lucky world where they keep winning.
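
To make the contrast concrete, here is a minimal Python sketch (mine, not part of the original argument) of the "bet everything every round" policy; the 20 rounds and the 51% win probability are arbitrary choices for the example.

```python
def bet_everything_stats(rounds=20, p_win=0.51, start=100.0):
    """Exact statistics for staking the entire bankroll on every
    double-or-nothing bet you believe you win with probability p_win."""
    expected_wealth = start * (2 * p_win) ** rounds  # > start, since 2 * p_win > 1
    survive_prob = p_win ** rounds                   # only the all-wins path keeps any money
    return expected_wealth, survive_prob

ew, sp = bet_everything_stats()
print(f"expected final wealth: ${ew:,.2f}")   # about $148.59
print(f"P(not losing it all) : {sp:.2e}")     # about 1.4e-06: ruin is a near-certainty
```

The expectation stays above the starting stake only because of the vanishingly unlikely all-wins branch.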

("And that's the right thing to do, for their values!" says the one.

"Is it, though?" says the other. "That's putting the cart before the horse. In Bayesian utility theory, you first figure out what the preferences are, and then you figure out a utility function to represent those preferences. You shouldn't just go from caring about money to naively maximizing expected money."

"True," says the one. "But there is a set of preferences which someone could have, which would imply that utility function.")

So, my conclusion? If you don't prefer maximizing expected money for repeated bets (and you probably don't), then you must not prefer it for a single-shot bet, either. 

Nothing about expected value maximization breaks when we apply it to multiple decisions across time. The culprit is the utility function. If the Kelly criterion is appealing, it must be because your utility is approximately logarithmic.

(By the way, this section shouldn't be confused for arguing against every possible argument for Kelly that involves repeated bets. The current section is only arguing against the super naive argument which claims Kelly is some kind of adjustment to expectation-maximization to handle the repeated-bets case.)

2: It's About Optimizing Typical Outcomes

I won't fully go through the standard derivation of Kelly, but it goes something like this. First, we suppose a specific type of investment opportunity will pay out with probability p. Then, we suppose we face similar opportunities many times. We note that the fraction of successes must be very close to p. Then, under that assumption, we do some math to figure out what the optimal investment strategy is.

For example, suppose we play a game: you start with $100, and I start with $∞. We'll make bets on a fair coin; whatever you wager, I'll multiply it by 3 if the coin comes up heads. However, if the coin comes up tails, I'll take it all. We will flip exactly 100 times. How will you decide how much to bet each time? The Kelly derivation is saying: choose your optimal strategy by assuming there will be exactly 50 heads and 50 tails. This won't be exactly true, but it's probably close; if we flipped even more times, then it would be more certain that we'd be very close to that ratio.
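
As a concrete sketch (my own, using the standard Kelly formula rather than anything from the post), here is the recommended fraction for this game and the "exactly 50 heads, 50 tails" evaluation for a few fixed betting fractions; the particular fractions compared are arbitrary.

```python
p, q = 0.5, 0.5   # probability of heads / tails
b = 2.0           # net odds: a wager of w is returned as 3w on heads, a net gain of 2w

f_kelly = (b * p - q) / b
print(f"Kelly fraction: {f_kelly:.2f}")   # 0.25

# Final wealth in the 'exactly 50 heads and 50 tails' world, starting from $100,
# when staking a fixed fraction f of the bankroll each flip:
for f in (0.10, 0.25, 0.50):
    growth_per_pair = (1 + b * f) * (1 - f)   # one win followed by one loss
    print(f"f = {f:.2f}: final wealth ≈ ${100 * growth_per_pair ** 50:,.0f}")
```

Betting half exactly breaks even in that world, as the next paragraphs discuss, while the Kelly fraction of 1/4 does far better.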

The main point I want to make about this is that it's not much of an argument for using the Kelly formula. Just because most worlds look very close to the 50-50 world, doesn't mean planning optimally for the 50-50 world is close to optimal in general.

Suppose you consider betting half your money every time, in our game. The Kelly evaluation strategy goes like this: when you win, you double your money (because you keep 1/2, and put 1/2 on the line; I triple that sum, to 3/2; combining that with the 1/2 you saved, you've doubled your money). When you lose, you halve your money. Since you'll win and lose equally many times, you'd break even with this strategy, keeping $100; so, it's no better than keeping all your money and never betting a cent. (The Kelly recommendation for this game is 1/4; 1/2 is far too much.)

But consider: 51-49 and 49-51 are both quite probable as well, almost as probable as the 50-50 outcome. In one case, you double your money one more time, and halve it one less time. So you'll end with $400. In the other case, just the opposite, so you'll end with $25.

Do these two possibilities cancel out, so that we can act like the 50-50 case is all that matters? Not to an expected-money maximizer; the average between $400 and $25 is $212.50; a significant gain over $100. So now it sounds like this strategy might not be so close to breaking even after all.

Generally speaking, although the ratio of success to failure will converge to p, the absolute difference between the true number of successes and the number expected by the Kelly analysis won't converge to zero. And the small deviations in ratio will continue to make large differences in value, like those above. So why should we care that the ratio converges?
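
To see how much those deviations matter, here is a small exact calculation (a sketch I added, not from the post) of the distribution of final wealth for the "bet half every time" strategy over 100 flips.

```python
from math import comb

n, start = 100, 100.0

def final_wealth(heads):
    """Betting half each round doubles the bankroll on a win and halves it on a loss."""
    return start * 2.0 ** (2 * heads - n)

probs = [comb(n, h) * 0.5 ** n for h in range(n + 1)]

mean_wealth = sum(p * final_wealth(h) for h, p in enumerate(probs))
median_wealth = final_wealth(50)   # wealth is monotone in heads, so the median sits at 50 heads

print(f"median final wealth: ${median_wealth:,.0f}")   # $100: 'break even', as the Kelly analysis says
print(f"mean final wealth  : ${mean_wealth:,.2e}")     # roughly $4.9e11, driven by the lucky tail
```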

Ok. It's hard to justify taking only the single most probable world (like the 50-50 world) and planning for that one. But there are steelmen of the basic argument. As John Wentworth said:

maximizing modal/median/any-fixed-quantile wealth will all result in the Kelly rule

The discussion above can be thought of as maximizing the mode (choosing the strategy which maximizes the most probable amount of money we might get). John points out that we can choose many other notions of "typical outcome", and get the same result. Just so long as we don't optimize the mean (which gets us the expected-money strategy again), we end up with the Kelly strategy.

Optimizing for the mode/median/quantile is usually a significantly worse idea than optimizing expected utility. For example, optimizing for median utility just means ranking every possibility from worst to best (with a number of copies based on its probability), and judging how well we're doing by looking at the possibility which ends up at the halfway point. This is perfectly consistent with a 49% chance of extreme failure; median-utility-optimization doesn't care how bad the worst 49% is. This is really implausible, as a normative (or descriptive) theory of risk management.

The fixed-quantile-maximizer allows us to tweak this. We can look at the bottom 2% mark (ie an outcome close to the bottom of the list), so that we can't be ignoring a terrible disaster that's got almost 50% probability. But this is insensitive to really good outcomes vs merely moderately good ones, until they cross the 98% probability line. For example, if a task just inherently has a 10% chance of bad-as-it-can-be failure (which there's nothing you can do about), the 2%-quantile-maximizer won't optimize at all; any option will look equally bad to it.
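
Here is a toy illustration of that insensitivity (a hypothetical example I constructed to match the text; the payoff numbers mean nothing in particular).

```python
import numpy as np

rng = np.random.default_rng(0)

def payoffs(effort, n=100_000):
    """Hypothetical task: a 10% chance of worst-case failure (payoff 0) no matter what,
    otherwise a payoff that scales with the chosen effort level."""
    disaster = rng.random(n) < 0.10
    return np.where(disaster, 0.0, 100.0 * effort)

for effort in (0.1, 0.5, 1.0):
    print(f"effort {effort}: 2%-quantile payoff = {np.quantile(payoffs(effort), 0.02)}")
    # 0.0 in every case: the 2%-quantile maximizer has no reason to prefer any effort level
```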

If all of these choices are terrible in general, why should we find them at all plausible in the particular case of justifying the Kelly rule?

So no one should see the Kelly derivation and think "OK, Kelly maximizes long-run profits, great." 

Instead, I think the Kelly derivation and related arguments should be seen as much more indirect. We look at this behavior Kelly recommends, and we say to ourselves, "OK, this seems pretty reasonable." And we look at the behavior which expected money-maximization recommends, and we say, "No, that looks entirely unreasonable." And we conclude that our preferences must be closer to those of a Kelly agent than those of an expected-money maximizer.

In other words, we conclude that our utility is approximately logarithmic in money, rather than linear.

(A conclusion which is, by the way, very plausible on other grounds [Economic Growth and Subjective Well-Being: Reassessing the Easterlin Paradox. Betsey Stevenson and Justin Wolfers.].)

3: It's About Time-Averaging Rather Than Ensemble-Averaging

A new approach to economic decision-making called Ergodicity Economics, primarily developed by Ole Peters, attempts to make a much more sophisticated argument similar to "Kelly is about repeated bets". It is not simply the naive argument I dismissed in the first section. I think it's much more interesting. But, ultimately, I think it's not that convincing.

I won't be able to explain the whole thing in this post, but one of the central ideas is time-averaging rather than ensemble-averaging. Ole Peters critiques Bayesians for averaging over possibilities. He states that ensemble averages are appropriate when a lot of things are happening in parallel, like insurance companies tabulating death rates to ensure their income is sufficient for what they'll have to pay out. However, when you're an individual, you only die once. When things happen sequentially, you should be taking the time-average.

Peters' approach addresses many more things than just the Kelly formula -- just to be clear. It's just one particular case we can analyze. But, here's roughly what Peters would do for that case. We can't time-average our profits, since those can keep increasing boundlessly. (As we accumulate more money to bet with, we can make larger bets, so the average winnings could just go to infinity.) So we look at the ratio of our money from one round to the next. This, it turns out, we can time-average. And what strategy maximizes that time-average? Kelly, of course!
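
For the coin game from section 2, the time-average growth rate per round can be written down directly; here is a short sketch (mine, reusing the same odds as before) showing that maximizing it recovers the Kelly fraction.

```python
from math import log

p, b = 0.5, 2.0   # win probability and net odds from the earlier coin game

def time_avg_log_growth(f):
    """Long-run time-average of log(wealth ratio per round) when staking fraction f;
    by the law of large numbers this equals the expected log of the per-round ratio."""
    return p * log(1 + b * f) + (1 - p) * log(1 - f)

best_f = max((i / 1000 for i in range(1000)), key=time_avg_log_growth)
print(f"fraction maximizing the time-average growth rate: {best_f:.3f}")   # 0.250
```

Which is, of course, exactly the quantity a log-utility maximizer would optimize; that is the sense in which the construction can feel like taking a logarithm without further justification.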

My problem with this is mainly that it seems very ad-hoc. I would be somewhat more impressed if someone could prove that there was a unique correct choice of what to maximize, rather than just creatively coming up with something that can be time-averaged, and then declaring that we should maximize that. This seems suspiciously close to just taking a logarithm without any justification.

Not only do we have to choose a function to time-average, we also have to select an appropriate way to turn our situation into an iterated game. This isn't a difficulty in the Kelly case, but in principle, it's another degree of freedom in the analysis, which makes the results feel more arbitrary. (If you're a Bayesian who can represent your life as a big game tree where all the branches end in death, how would you abstract out isolated situations as infinitely-iterated games, in order to apply the Peters construction?)

4: It's About Convergent Instrumental Goals

The basic idea of this argument is similar to the naive first argument we discussed: argue that repeated bets bring you closer and closer to logarithmic utility. Unlike the first attempt, we now grant that linear utility doesn't work this way. But maybe linear utility is a very special case.

Suppose you need $5 to ride the bus. Nothing else is significant to you right now. We can think of your utility as 1u if you have $5 or more, and 0u otherwise.

Now suppose someone approaches you with a bet at the bus stop. It's a double-or-nothing bet. You yourself are 50-50 on the outcome, so ordinarily, it wouldn't be worth taking. In this case, however, the bet could save you: if you have $2.50 or more, the bet could give you a 50% chance at $5, so you could ride the bus!

So now your expected utility, as a function of money in your pocket at the beginning of the scenario, is actually a two-step function: 0u for less than $2.50, 0.5u from $2.50 to <$5, and 1u for $5 and up.

What's important about this scenario is that the bet changed your expected value function. Mossin (who I'll discuss more in a bit) calls this your derived utility function.
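
A tiny sketch of computing that derived utility function for the bus-stop scenario (my illustration, not Mossin's construction):

```python
def terminal_utility(money):
    """1u if you can afford the $5 bus fare, 0u otherwise."""
    return 1.0 if money >= 5.0 else 0.0

def derived_utility(money, p_win=0.5):
    """Expected terminal utility when you may first place one 50-50
    double-or-nothing wager of any size up to your current money."""
    no_bet = terminal_utility(money)
    # Any stake that reaches $5 on a win does equally well, so evaluating
    # 'stake everything' is enough for this particular utility function:
    bet_all = p_win * terminal_utility(2 * money) + (1 - p_win) * terminal_utility(0.0)
    return max(no_bet, bet_all)

for x in (2.00, 2.50, 4.00, 5.00):
    print(f"${x:.2f} -> derived utility {derived_utility(x)}")
# $2.00 -> 0.0, $2.50 -> 0.5, $4.00 -> 0.5, $5.00 -> 1.0
```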

In the first section, I showed that this doesn't happen for linear utility functions. If your utility function is linear, your derived utility function is also linear. Mossin calls functions with this property myopic, because they can make each decision as if it was their last. 

Log utility is also myopic, just like linear utility: E[log(S1⋅S2⋅x)] = E[log(S1) + log(S2) + log(x)] = E[log(S1)] + E[log(S2)] + log(x). Maximizing long-term log-money breaks down to maximizing the log-utility of each step.
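
A quick numerical check of that myopia claim, for the coin game from section 2 (my sketch; a grid search over the two staking fractions):

```python
from math import log
from itertools import product

def two_step_elog(f1, f2, p=0.5, b=2.0, x0=100.0):
    """E[log(final wealth)] over the four outcomes of two independent flips,
    staking fraction f1 on the first bet and f2 on the second."""
    total = 0.0
    for w1, w2 in product([True, False], repeat=2):
        prob = (p if w1 else 1 - p) * (p if w2 else 1 - p)
        g1 = (1 + b * f1) if w1 else (1 - f1)
        g2 = (1 + b * f2) if w2 else (1 - f2)
        total += prob * log(x0 * g1 * g2)
    return total

grid = [i / 100 for i in range(100)]
best = max(product(grid, grid), key=lambda fs: two_step_elog(*fs))
print(best)   # (0.25, 0.25): the single-step Kelly fraction is optimal at every step
```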

If you know a little dynamical systems theory, you might be thinking: aha, we know these are fixed points, but is one of these points an attractor? Perhaps risk-averse functions which somewhat resemble logarithmic functions will have derived utility functions which are a bit closer to logarithmic, so that when we face many many bets, our derived utility function will become very close to logarithmic.

If true, this would be a significant vindication of the Kelly rule! Imagine that you're a stock trader who plans to retire at a specific date. Your utility is some function of the amount of money you retire with. The above argument would say: your derived utility function is the result of many, many, bets. So, as long as your utility function meets some basic conditions (eg, isn't linear), your derived utility function will be a close approximation of a logarithm!

Until I read SimonM's post, I actually thought this was true. However, SimonM says the following:

"Optimal Multiperiod Portfolio Policies" (Mossin) shows that for a wide class of utilities, optimising utility of wealth at time t is equivalent to maximising utility at each time-step. 

IE, Mossin shows that a lot of utility functions actually are myopic! Not all utility functions, by any means, but enough to break the hope that logarithmic utility is a strong attractor.

So, for a large class[1] of utility functions, the "Kelly is about repeated bets" argument fails just as hard as it did for the linear case.

This is really surprising!

So it appears we can't argue that log utility is a convergent instrumental goal. It's not true that a broad variety of agents will want to Kelly-bet in the short term in order to maximize utility in the long term. This seems like a pretty bad sign for SimonM's argument that Kelly is about repeated bets.[2]

If anyone thinks they can recover this argument, please let me know! It's still possible that some class of functions has this property. It's just that now we know we need to side-step a lot of functions, not just linear functions. So we won't be able to push the argument through with weak assumptions, EG, "any risk-averse function implies approximately logarithmic derived utility". However, it's still possible that all of Mossin's myopic functions are "unrealistic" in some way, so that we can still argue Kelly is an instrumentally convergent strategy for humans.

But I currently see no reason to suspect this.

5: It's About Beating Everyone Else

At the beginning of this post, I mentioned that SimonM did give one result which neither seems mistaken, nor seems to be about logarithmic utility. Here's what SimonM says:

"Competitive optimality". Any other strategy can only beat Kelly at most 1/2 the time. (1/2 is optimal since the other strategy could be Kelly)

This is true because Kelly optimizes median wealth. No other strategy can have a higher median outcome; so, given any other strategy, Kelly must do at least as well at least half the time.
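
Here is a minimal simulation sketch of that competitive claim (mine, reusing the section-2 coin game; the rival's fixed fraction of 1/2 is an arbitrary choice):

```python
import random

def rival_beats_kelly(n_rounds=100, start=100.0, b=2.0,
                      f_kelly=0.25, f_rival=0.50, trials=20_000):
    """Both bettors face the same sequence of fair coin flips; return the
    fraction of runs in which the rival ends up with more money than Kelly."""
    rival_ahead = 0
    for _ in range(trials):
        kelly = rival = start
        for _ in range(n_rounds):
            win = random.random() < 0.5
            kelly *= (1 + b * f_kelly) if win else (1 - f_kelly)
            rival *= (1 + b * f_rival) if win else (1 - f_rival)
        rival_ahead += rival > kelly
    return rival_ahead / trials

print(f"rival finishes ahead of Kelly in {rival_beats_kelly():.1%} of runs")  # well under 50%
```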

Humans have a pretty big competitive component to our preferences. People enjoy being the richest person they know. So, this could plausibly be relevant for someone's betting strategy, and doesn't require logarithmic utility.

I've also heard it said that a market will evolve to be dominated by Kelly bettors. I think this basically refers to the idea that in the long run, you can expect Kelly bettors to have higher wealth than anyone else with arbitrarily high probability (because Kelly maximizes any quantile, not just median). However, I was curious if Kelly comes out on top in a more literally evolutionary model. The Growth of Relative Wealth and the Kelly Criterion examines this question. I haven't looked at it in-depth, but it appears the answer is "sometimes".

Conclusion: To Kelly, Or Not To Kelly?

My experience writing this post has been a progressive realization that the argument for the Kelly criterion is actually much weaker than I thought. I expected to mainly look at arguments for Kelly and show how they have to go through an assumption tantamount to log-utility. Instead, I spent more time finding that the arguments were just not very good.

  • When I responded to ideas about optimizing mode/median/quantiles in the comment section to SimonM's post, my objection was just "it's important to point out that you're optimizing mode/median/quantile, rather than the more usual expected value". But now I'm like: optimizing mode/median/quantile is actually a pretty terrible principle, generally speaking! Why would we apply it here?
  • I had thought that some form of "instrumental convergence" argument would work, as discussed in section 4. But it appears not!

So before writing this post, my position was: Kelly is optimal in a non-Bayesian sense, which is peculiar, but seems oddly compelling. Within a Bayesian framework, we can "explain" this compellingness by supposing logarithmic utility. So it seems like the utility of money is roughly logarithmic for humans, which, anyway, is plausible on other grounds. Furthermore, risk-averse agents will have approximately logarithmic derived utility functions in practice, anyway, due to instrumental convergence. So it's fair to say Kelly bets are approximately optimal for humans.

But now, I think: Kelly is optimal in a peculiar non-Bayesian sense, but it's pretty terrible.[3] Furthermore, there's no instrumental convergence to Kelly, as far as I can tell. So all I'm left with is: human utility appears to be approximately logarithmic in money, on other grounds.

Overall, this still suggests Kelly is a decent rule of thumb!

I certainly haven't exhausted all the ways people have argued in favor of the Kelly criterion, either. If you think you know of an argument which isn't addressed by any of my objections, let me know.

Footnotes

1:

I should note that while SimonM says "a wide class", Mossin instead says:

it will be shown that the only utility functions allowing myopic decision making are the logarithmic and power functions which we have encountered earlier

IE, Mossin seems to think of it as a narrow class. However, Mossin's result is enough to block any approach I would have taken to proving some kind of convergence result. (I spent some time trying to prove a result while writing this, before I gave up and read Mossin.)

In case you're curious, Mossin's "power functions" are:

u(x) = (1/(λ−1)) ⋅ (μ + λx)^(1−1/λ)

Where μ and λ are some parameters which appear to be fixed by the surrounding context in the paper (not free), but I haven't fully understood that part yet.

Mossin also discusses a broader class of weakly myopic functions. These utility functions aren't quite the same as their derived functions, but I'm guessing they're also going to be counterexamples to any attempted convergence result.

2:

SimonM realizes that Mossin's result poses a problem for his narrative, at least at a shallow level:

BUT HANG ON! I hear you say. Haven't you just spent the last 5 paragraphs saying that Kelly is about repeated bets? If it all reduces to one period, why all the effort? The point is this: legible utilities need to handle the multi-period nature of the world. I have no (real) sense of what my utility function is, but I do know that I want my actions to be repeatable without risking ruin!

At first, I thought this was waffling and excuses; but on reflection, I entirely agree. As I said in section 2, I think the right argument for Kelly as a heuristic is the fairly indirect one: Kelly seems like a sane way of managing risk of ruin, so my preferences must be closer to logarithmic than (eg) linear.

3:

I confess, although optimizing for mode/median/quantiles is not very good, I still find something interesting about the argument from section 2. The general principle "ignore extremely improbable extreme outcomes" seems like a hack, but it's an interesting hack, since it blocks many philosophical problems (such as Pascal's Wager). And, in this particular case, it seems oddly plausible: it intuitively seems like the expected-money-maximizer is doing something wrong, and a plausible analysis of that wrongness is that it happily trades away all its utility in increasingly many worlds, for a vanishing chance of happiness in tiny slivers of possibility-space. It would be nice to have solid principles which block this behavior. But mode/median/quantile maximization are not plausible as general principles.

Also, even though optimizing for the mode, the median, or a fixed quantile seems individually terrible, optimizing for all of them at once is actually pretty good! My criticisms of the individual principles don't apply when they're all together. However, optimizing for all of them at once is not possible in general.



Discuss

Grading myself on SSC's 2020 predictions

Новости LessWrong.com - March 1, 2021 - 22:55
Published on March 1, 2021 7:55 PM GMT

Mantic Monday: Judging April COVID Predictions; [Judging Our April 2020 Covid-19 Predictions](https://www.lesswrong.com/posts/dRuTeLm7oEfxxBFRF/judging-our-april-2020-covid-19-predictions)

I privately recorded my own predictions on these questions. I felt uncomfortable posting these publicly at the time. I can't fix that, but I can post my answers now and do better next time.

Score: 3.34



Discuss
