# LessWrong.com News

A community blog devoted to refining the art of rationality
Updated: 20 hours 26 minutes ago

### Compositionality: SQL and Subways

July 19, 2021 - 08:37
Published on July 19, 2021 5:37 AM GMT

Programmers like to talk about a concept called compositionality: the idea that programs and code should consist of small, modular components that fit together cleanly. This property is considered almost a holy grail of software design, but there's not a lot of clarity about what it means in concrete terms. In this post I want to offer a working definition of compositionality, along with some examples of software - and infrastructure! - that meets the definition. I'll also look at some structures that fall short, and try to drill down into why they failed. According to my definition, a system of tools is composable if it meets the following conditions:

1. Parsimony: there is a small number of tools.
2. Efficiency: the tools are highly optimized (often this is possible because there is only a small number of them).
3. Interoperability: the output of one tool can be used as the input to another tool without a lot of additional work. More generally, the tools fit together cleanly.
4. Practicality: users can combine the tools to perform tasks that are useful and significant in the real world.

A system that has these properties creates huge value for its users. The system is easy to learn, since there are just a small number of basic operations. But these primitives can be composed together to create an exponentially large set of complex operations, which will be efficient because they are made up of efficient components. Finally, if the Practicality requirement is met, the large set of possible operations will overlap substantially with the set of tasks that people actually want to perform.

zcat 2014-04-04/*.log.Z | cut -f 4,18 | grep "cnn.com" | grep 18433 | wc

This command would unzip all the log files in a directory, extract two columns of interest, filter for the target domain and the client ID, and then count the resulting lines. At the time, it felt very powerful to be able to run such a significant operation without writing any actual code. This power comes from the compositionality of the Unix suite of tools. The core set of commands is relatively small. The commands are written in C, so they start up and run fast. Furthermore, since each program in the chain runs in a separate process, the approach is parallel by default, so the user can easily take advantage of a multi-CPU machine. The pipe mechanism, combined with good design of the utilities, makes the suite quite interoperable. Finally, the real-world utility of this technique is proved by the fact that many people use it to get real work done.
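As a rough sketch of what such a pipeline computes, here is a hedged Python equivalent. The helper name and the fake log lines are my own, and the field-splitting is an assumption: the real grep matches anywhere in a line, while this version checks specific tab-separated fields.

```python
def count_matches(lines, domain="cnn.com", client="18433"):
    """Count tab-separated log lines whose 4th field mentions the domain
    and whose 18th field mentions the client id (1-based, like cut -f 4,18)."""
    n = 0
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 18 and domain in fields[3] and client in fields[17]:
            n += 1
    return n

# Two made-up log lines: only the first matches both filters.
log = [
    "\t".join(["x"] * 3 + ["ads.cnn.com"] + ["x"] * 13 + ["18433"]),
    "\t".join(["x"] * 3 + ["bbc.co.uk"] + ["x"] * 13 + ["18433"]),
]
print(count_matches(log))  # 1
```

The point is not that this is better than the one-liner (it isn't), but that the pipeline's filter-and-count semantics can be stated in a dozen lines.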

When I started using this technique, I thought it was awesome, and I dreamed of ways to replace all my bulky Java and Python code with chains of Unix commands. Nowadays, I no longer see the Unix command suite as the pinnacle of compositionality, for a couple of reasons. First, many of the commands don't offer options that one would naturally expect, and this reduces their combinatorial utility. For example, at one point long ago I wanted to get the total disk usage of all the files in a directory that matched a certain pattern. I was sure that I could do this by creating a chain of commands like this:

ls -al | grep "MyPattern" | cut -f 3 | sum

Unfortunately, this doesn't work because ls doesn't tab-separate its output, so it can't be easily fed into the cut command (Unix zealots, please don't chide me about how this can be done easily if I only knew the right options for ls or some other command. The point is that it wasn't easy). More subtly, the Unix pipe approach is not as cleanly interoperable as it might initially seem. The primitive commands do not expose the data structure of their inputs and outputs, so there is no easy way to tell if a chain of commands is structurally valid without just running it.

In my view, a better candidate for the Most Composable prize is SQL, which is probably the most widely used software on the planet. SQL is made up of a small number of basic commands with names like SELECT, UPDATE, INSERT, and DELETE. These commands have a simple, common grammar. For example, SELECT, UPDATE, and DELETE commands can all be modified by WHERE clauses, in a way that is extremely intuitive and works the same for each command. SELECT statements can be transformed from simple queries into aggregations by the use of clauses like GROUP BY, and the resulting object has exactly the same structure as a table. Thus, for example, you can perform an aggregation using a SELECT and a GROUP BY, and then INSERT the result into another table, which can then be queried, updated, or deleted using subsequent SELECT, UPDATE, or DELETE statements. Database engines can achieve great performance by analyzing a query and planning how to execute it efficiently - this analysis works because SQL uses a small number of core operations. And, of course, SQL is at the core of a huge number of software applications, proving its practical utility.
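The aggregate-then-insert round trip described above is easy to demonstrate. Here is a minimal sketch using Python's built-in sqlite3 module; the table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE visits (domain TEXT, bytes INTEGER)")
cur.executemany("INSERT INTO visits VALUES (?, ?)",
                [("cnn.com", 100), ("cnn.com", 50), ("bbc.com", 70)])
cur.execute("CREATE TABLE totals (domain TEXT, total INTEGER)")

# An aggregation has the same shape as a table, so it can be INSERTed...
cur.execute("INSERT INTO totals "
            "SELECT domain, SUM(bytes) FROM visits GROUP BY domain")

# ...and the result composes with further SELECT/UPDATE/DELETE as usual.
totals = dict(cur.execute("SELECT domain, total FROM totals"))
print(totals["cnn.com"])  # 150
```

The key design choice being illustrated: every SQL operation consumes and produces the same kind of object, a relation, which is what makes the chaining clean.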

While I'm still on the subject of software, I want to mention a couple of systems that I do not consider to exemplify compositionality. The first negative example is just a fully general programming language like Python or Java. Some languages are better than others, but most people would agree that languages themselves are not intrinsically composable. The reason is that, because they are fully general, languages are essentially too complex. The output of one function call is not guaranteed to fit into the input of another call. This might work in particular programs, if the engineer has been careful to build a good API, but typically programming involves a lot of data wrangling. The engineer has to make a REST call, then convert the JSON response into a Java Map, then feed the Map into a third-party library, and finally put the result into a database.

I'm a bit reluctant to mention this, since it probably exposes both my age and my dubious taste in software, but early in my career I was excited by a technology called Extensible Stylesheet Language Transformations (XSLT). XSLT was a language for transforming XML documents, and XSLT was itself a subset of XML. This design seemed to offer a lot of potential: it was minimalist (everything is just an XML document) and it promised meta-programming (you can transform your XSLT with XSLT). XSLT seemed like it would meet all the criteria for compositionality. A small number of data transformation operations formed the basis of the language. Some smart software wizards would create optimized XSLT engines that could perform transformations with high performance. The transformations could be chained together easily, since XML is both the input and the output. Finally, it seemed like this could be used for practical applications: I recall using XSLT in web applications to generate HTML pages from source data. While XSLT seemed promising at the start, after working with it for a while, I realized that it failed badly at code-length efficiency. The bulky syntax of XSLT meant that even simple tasks required lots of ugly XML tags and excessive typing.

Now that I've tested this notion of compositionality against some examples from the world of bits, I want to try to extend it to the world of atoms. One of the best examples of the concept in the physical world is the Tokyo subway.

Tokyo has about 15 distinct subway lines that connect dozens of stations. The system provides combinatorial utility: a Tokyo denizen wakes up at his home near Ueno, takes the Hibiya line to Akihabara to buy some electronic gadgets for his son, and then takes the Yamanote line to Tokyo Station to buy his wife some perfume at a department store. Then he takes the Marunouchi line to Shinjuku to meet his friends for drinks (leaving his gifts in a handy subway locker). Finally, at the end of the day, he drunkenly stumbles back onto the Yamanote line and goes home to Ueno. That story represents just one of billions of efficient paths that the system allows. Even so, the subway map doesn't do full justice to the power of the system, because it only shows subways and not the regular rail lines, which connect to the subway stations and bring travelers in from other cities.

Let's look at the Tokyo subway network in relation to the criteria for compositionality. First, the system is made up of a relatively small number of lines (far fewer than the number of city streets). A subway train can service its line in a highly optimized way: compared to the automobile, it is much safer and more efficient in terms of total person-miles travelled per unit of energy. The fact that the subway lines connect at shared stations provides the clean input to output property. Finally, the system is plainly of immense practical value.

Now, subway systems aren't automatically composable. The Tokyo system has this quality because it is well designed, but not every city is so lucky. A notorious failure of the Bay Area Rapid Transit system is that it does not connect to Caltrain. The system's individual components are not highly interoperable, and this prevents the system from achieving combinatorial utility.

Compared to subways, the road system is an illuminating negative example of compositionality. Roads obviously connect to one another: with the exception of some islands, you can get to every road location in North America from any other location. And roads are clearly of great practical value. But the cars-on-roads system fails badly on the efficiency criterion. Cars are actually quite dangerous, and their fuel inefficiency is a major contributor to climate change and other environmental problems. This system also fails the parsimony criterion: there are far too many cars and too many roads as well. The overabundance of roads makes it easy to get lost, and also consumes vast swaths of land area that could otherwise be dedicated to housing, parks, or other purposes.

As a final example, I want to look at an innovation that increased the level of compositionality in a set of pre-existing tools. That innovation is the standardized cargo container, or intermodal container. We've all seen thousands of these objects being hauled by many-wheeled trucks on the highway. Some people might not realize that these boxes can be lifted from the truckbed and placed directly onto a cargo ship. Furthermore, it was not until the 70s that the idea of using a standard container became common. Prior to that, when a batch of goods was delivered to a port by a truck, the goods would have to be unloaded from the truck and loaded onto the ship by human laborers.

How does the intermodal container fit into the concept of compositionality? Before the 70s, we had the same kinds of transport vehicles - trucks, trains, and ships. These transport tools were good enough on their own, but they were not cleanly interoperable. Loading and unloading the goods cost time and money, and risked damaging the goods. In some cases, the added cost meant that it was no longer profitable to transport the goods to remote nodes of the network. So the advent of the standardized container increased the interoperability of the system by an enormous degree. This increased the system's overall compositionality, leading to an increase in combinatorial utility - now there are billions of inexpensive paths a product might travel through the global cargo network.

I think the principles of compositionality I mentioned above provide a good set of design goals for system designers. Create a small number of primitive operations. Make sure that those operations are highly optimized, and that they connect to each other as cleanly as possible. Finally, make sure that when you combine the primitives to create an exponentially large set of complex operations, this set overlaps substantially with the set of tasks that people actually want to perform.

Discuss

### Happy paths and the planning fallacy

July 19, 2021 - 02:26
Published on July 18, 2021 11:26 PM GMT

(Cross posted on my personal blog.)

The other day at work I was talking with my boss about a project I am working on. He wanted an update on what tasks are remaining. I spent some time walking him through this. At the end, he asked, "So just to confirm, these six tasks are all that is remaining for this project?".

I wasn't sure how to respond. It didn't feel true that those were the only six tasks that were remaining. But if it wasn't true, then what are the other remaining tasks? I didn't have an answer to that, so I was confused.

Then it hit me. Those six were the only known tasks that were remaining. But there very well might be unknown tasks that are left to do. So I told him this, and that we should expect the unexpected.

He responded by saying that he agrees, but also rephrased his question. He asked me to confirm that if we're talking about the happy path, that the six tasks I outlined are the only ones on the happy path.

I don't see a problem with talking about happy paths. Well, at least not in theory. In theory, it might be useful to have a sense of what that best case scenario path forward looks like. But in practice, I worry that, somewhere along the way, happy paths will get converted to expected paths.

That's a little tangential to the main point I want to make in this post though. The main point I want to make in this post is that the terminology of happy path, expected path, and unhappy path seems like it'd be really helpful for thinking about and fighting against the planning fallacy.

What is the planning fallacy? Well, using this new terminology, the planning fallacy is our tendency to confuse the happy path with the expected path. When people think about how long something will take (that is, what the expected path is), they tend to think mostly about what the happy path looks like. And therefore, they tend to underestimate how long things will take, or how many resources they will require.
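To make the gap between the two paths concrete, here is a toy Monte Carlo sketch (my own illustration, not from the post): each task has a happy-path duration, but each can hit an unplanned snag, so the expected total runs well past the happy-path total.

```python
import random

random.seed(0)
happy = [10, 15, 20]  # happy-path minutes per task (made-up numbers)

def simulate():
    """One possible day: every task takes its happy-path time, plus
    an occasional random delay."""
    total = 0
    for minutes in happy:
        total += minutes
        if random.random() < 0.3:                # 30% chance of a snag
            total += random.expovariate(1 / 20)  # snag averages 20 minutes
    return total

runs = [simulate() for _ in range(10_000)]
print(sum(happy))                    # happy path: 45 minutes
print(round(sum(runs) / len(runs)))  # expected path: closer to an hour
```

The snag probability and delay size here are arbitrary; the point is only that the expected path is the happy path plus the average cost of the unknowns.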

For example, consider students predicting how long their homework will take:

Buehler et al. asked their students for estimates of when they (the students) thought they would complete their personal academic projects. Specifically, the researchers asked for estimated times by which the students thought it was 50%, 75%, and 99% probable their personal projects would be done. Would you care to guess how many students finished on or before their estimated 50%, 75%, and 99% probability levels?

• 13% of subjects finished their project by the time they had assigned a 50% probability level;
• 19% finished by the time assigned a 75% probability level;
• and only 45% (less than half!) finished by the time of their 99% probability level.

As Buehler et al. wrote, “The results for the 99% probability level are especially striking: Even when asked to make a highly conservative forecast, a prediction that they felt virtually certain that they would fulfill, students’ confidence in their time estimates far exceeded their accomplishments.”

The issue is that these students were only thinking about the happy path.

A clue to the underlying problem with the planning algorithm was uncovered by Newby-Clark et al., who found that

• Asking subjects for their predictions based on realistic “best guess” scenarios; and
• Asking subjects for their hoped-for “best case” scenarios . . .

... produced indistinguishable results.

When people are asked for a “realistic” scenario, they envision everything going exactly as planned, with no unexpected delays or unforeseen catastrophes—the same vision as their “best case.”

Reality, it turns out, usually delivers results somewhat worse than the “worst case.”

After thinking of this analogy, I put it to use a day later. I had to go to Mexico with my girlfriend for dental work she needed. We had an appointment at 10am and were staying at a hotel about two miles away. I was thinking about what time we should wake up in order to be at the appointment on time. My first instinct was to think, "It'll take X minutes to brush our teeth, Y minutes to check out of the hotel, Z minutes to cross the border, etc. etc." But then I realized something! "Hey, all of this stuff I'm referring to, this is just the happy path! I need to consider unexpected things too." The terminology of "happy path" helped me to recognize my falling for the planning fallacy. I suspect that I'm not alone here, and that it'd be a helpful tool for others as well.

However, I'm not that optimistic. Simply knowing about biases, or even having good terminology/analogies for them usually isn't enough. We still really struggle with them. And sure enough, I was no exception.

After I realized my happy path error, I thought about it again and concluded that we should wake up at 8:30am. My girlfriend convinced me that we could wake up at 8:45am instead, which I agreed to. She ended up waking up even earlier and gathering our stuff to check out. Then it felt to me like we had a lot of time, so I took a leisurely shower. We didn't leave the room until about 9:30am. Then we realized we needed to fill up our water cooler with ice, which was on a different floor of the hotel.

Ultimately we got to the dentist right on time, but I think it was lucky. Crossing the border happened really quickly, but could have taken longer. Checking out of the hotel also was really quick, but could have taken longer. At the end of the day, I did a good job at first of recognizing the planning fallacy and adjusting for it, but then in the morning I erased the good work I had done and committed the planning fallacy again. I think this goes to show how hard it is to actually fight against such biases and win.

That doesn't mean we shouldn't try to win though, nor that we can't make incremental progress. Hopefully this terminology of "happy path" will help people make such incremental progress.

Discuss

### One Study, Many Results (Matt Clancy)

July 19, 2021 - 02:10
Published on July 18, 2021 11:10 PM GMT

I didn't see this post, its author, or the study involved elsewhere on LW, so I'm crossposting the content. Let me know if this is redundant, and I'll take it down.

Summary

This post looks at cases where teams of researchers all began with the same data, then used it to answer a question — and got a bunch of different answers, based on their different approaches to statistical testing, "judgment calls", etc.

This shows the difficulty of doing good replication work even without publication bias; none of the teams here had any special incentive to come up with a certain result, and they all seemed to be doing their best to really answer the question.

Also, I'll copy the conclusion of the post and put it here:

More broadly, I take away three things from this literature:

1. Failures to replicate are to be expected, given the state of our methodological technology, even in the best circumstances, even if there’s no publication bias.
2. Form your ideas based on suites of papers, or entire literatures, not primarily on individual studies.
3. There is plenty of randomness in the research process for publication bias to exploit. More on that in the future.
The post

Science is commonly understood as being a lot more certain than it is. In popular science books and articles, an extremely common approach is to pair a deep dive into one study with an illustrative anecdote. The implication is that’s enough: the study discovered something deep, and the anecdote made the discovery accessible. Or take the coverage of science in the popular press (and even the academic press): most coverage of science revolves around highlighting the results of a single new (cool) study. Again, the implication is that one study is enough to know something new. This isn’t universal, and I think coverage has become more cautious and nuanced in some outlets during the era of covid-19, but it’s common enough that for many people “believe science” is a sincere mantra, as if science made pronouncements in the same way religions do.

But that’s not the way it works. Single studies - especially in the social sciences - are not certain. In the 2010s, it became clear that a lot of studies (maybe the majority) do not replicate. The failure of studies to replicate is often blamed (not without evidence) on a bias towards publishing new and exciting results. Consciously or subconsciously, that leads scientists to employ shaky methods that get them the results they want, but which don’t deliver reliable results.

But perhaps it’s worse than that. Suppose you could erase publication bias and just let scientists choose whatever method they thought was the best way to answer a question. Freed from the need to find a cool new result, scientists would pick the best method to answer a question and then, well, answer it.

The many-analysts literature shows us that’s not the case though. The truth is, the state of our “methodological technology” just isn’t there yet. There remains a core of unresolvable uncertainty and randomness in the best of circumstances. Science isn’t certain.

Crowdsourcing Science

In many-analyst studies, multiple teams of researchers test the same previously specified hypothesis, using the exact same dataset. In all the cases we’re going to talk about today, publication is not contingent on results, so we don’t have scientists cherry-picking the analyses that make their results look most interesting; nor do we have replicators cherry-picking analyses to overturn prior results. Instead, we just have researchers applying judgment to data in the hopes of answering a question. Even still, results can be all over the map.

Let’s start with a really recent paper in economics: Huntington-Klein et al. (2021). In this paper, seven different teams of researchers tackle two research questions that had been previously published in top economics journals (but which were not so well known that the replicators knew about them). In each case, the papers were based on publicly accessible data, and part of the point of the exercise was to see how different decisions about building a dataset from the same public sources lead to different outcomes. In the first case, researchers used variation across US states in compulsory schooling laws to assess the impact of compulsory schooling on teenage pregnancy rates.

Researchers were given a dataset of schooling laws across states and times, but to assess the impact of these laws on teen pregnancy, they had to construct a dataset on individuals from publicly available IPUMS data. In building the data, researchers diverged in how they handled various judgement calls. For example:

• One team dropped data on women living in group homes; others kept them.

• Some teams counted teenage pregnancy as pregnancy after the age of 14, but one counted pregnancy at the age of 13 as well.

• One team dropped data on women who never had any children.

• In Ohio, schooling was compulsory until the age of 18 in every year except 1944, when the compulsory schooling age was 8. Was this a genuine policy change? Or a typo? One team dropped this observation, but the others retained it.

Between this and other judgement calls, no team assembled exactly the same dataset. Next, the teams needed to decide how, exactly, to perform the test. Again, each team differed a bit in terms of what variables it chose to control for and which it didn’t. Race? Age? Birth year? Pregnancy year?

It’s not immediately obvious which decisions are the right ones. Unfortunately, they matter a lot! Here were the seven teams’ different results.

Depending on your dataset construction choices and exact specification, you can find that compulsory schooling either lowers or raises teenage pregnancy, or has no impact at all! (There was a second study as well; we will come back to that at the end.)
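The mechanism can be shown in miniature. In this toy sketch (my own made-up numbers, not the paper's data), a single defensible cleaning rule, dropping respondents who live in group homes, flips the sign of a simple difference-in-means estimate:

```python
rows = [  # (treated, outcome, lives_in_group_home) - hypothetical data
    (1, 0.2, False), (1, 0.6, True),
    (0, 0.3, False), (0, 0.1, True),
]

def effect(rows):
    """Difference in mean outcome between treated and control rows."""
    treated = [y for t, y, _ in rows if t == 1]
    control = [y for t, y, _ in rows if t == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

print(effect(rows) > 0)                           # True: keep everyone, effect is positive
print(effect([r for r in rows if not r[2]]) > 0)  # False: drop group homes, sign flips
```

Real datasets are vastly larger, so single rows matter less, but the same logic applies to every filter, control variable, and coding decision at once.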

This isn’t the first paper to take this approach. An early paper in this vein is Silberzahn et al. (2018). In this paper, 29 research teams composed of 61 analysts sought to answer the question “are soccer players with dark skin tone more likely to receive red cards from referees?” This time, teams were given the same data but still had to make decisions about what to include and exclude from analysis. The data consisted of information on all 1,586 soccer players who played in the first male divisions of England, Germany, France and Spain in the 2012-2013 season, and for whom a photograph was available (to code skin tone). There was also data on player interactions with all referees throughout their professional careers, including how many of these interactions ended in a red card and a bunch of additional variables.

As in Huntington-Klein et al. (2021), the teams adopted a host of different statistical techniques, data cleaning methods, and exact specifications. While everyone included “number of games” as one variable, just one other variable was included in more than half of the teams’ regression models. Unlike Huntington-Klein et al. (2021), in this study there was also a much larger set of different statistical estimation techniques. The resulting estimates (with 95% confidence intervals) are below.

Is this good news or bad news? On the one hand, most of the estimates lie between 1 and 1.5. On the other hand, about a third of the teams cannot rule out zero impact of skin tone on red cards; the other two thirds find a positive effect that is statistically significant at standard levels. In other words, if we picked two of these teams’ results at random and called one the “first result” and the other a “replication,” they would only agree whether the result is statistically significant or not about 55% of the time!

Let’s look at another. Breznau et al. (2021) got 73 teams, comprising 162 researchers, to answer the question “does immigration lower public support for social policies?” Again, each team was given the same data. This time, that consisted of responses to surveys about support for government social policies (example: “On the whole, do you think it should or should not be the government’s responsibility to provide a job for everyone who wants one?”), measures of immigration (at the country level), and various country-level explanatory variables such as GDP per capita and the Gini coefficient. The results spanned the spectrum of possible conclusions.

Slightly more than half of the results found no statistically significant link between immigration levels and support for policies - but a quarter found more immigration reduced support, and more than a sixth found more immigration increased support. If you picked two results at random, they would agree on the direction and statistical significance of the results less than half the time!
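These agreement figures are easy to sanity-check: if teams split across outcome categories with shares p_i, two randomly chosen teams land in the same category with probability equal to the sum of the squared shares. A quick sketch, with shares approximated from the numbers quoted above:

```python
def agreement(shares):
    """Probability that two randomly chosen teams fall in the same
    outcome category, given the share of teams in each category."""
    return sum(p * p for p in shares)

# Soccer study: roughly 2/3 significant-positive, 1/3 not significant.
print(round(agreement([2/3, 1/3]), 2))          # 0.56, i.e. about 55%

# Immigration study: roughly 55% null, 25% negative, 17% positive.
print(round(agreement([0.55, 0.25, 0.17]), 2))  # 0.39, less than half
```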

We could go through more studies, but the general lesson is the same: when many teams answer the same question, beginning with the same dataset, it is quite common to find a wide spread of conclusions (even when you remove motivations related to beating publication bias).

At this point, it’s tempting to hope the different results stem from differing levels of expertise, or differing quality of analysis. “OK,” we might say, “different scientists will reach different conclusions, but maybe that’s because some scientists are bad at research. Good scientists will agree.” But as best as these papers can tell, that’s not a very big factor.

The study on soccer players tried to answer this in a few ways. First, the teams were split into two groups based on various measures of expertise (teaching classes on statistics, publishing on methodology, etc.). The half with greater expertise was more likely to find a positive and statistically significant effect (78% of teams, instead of 68%), but the variability of their estimates was the same across the groups (just shifted in one direction or another). Second, the teams graded each other on the quality of their analysis plans (without seeing the results). But in this case, the quality of the analysis plan was unrelated to the outcome. This was the case even when they only looked at the grades given by experts in the statistical technique being used.

The last study also split its research teams into groups based on methodological expertise or topical expertise. In neither case did it have much of an impact on the kind of results discovered.

So: don’t assume the results of a given study definitively answer the question. It’s quite likely that a different set of researchers, tackling the exact same question and starting with the exact same data, would have obtained a different result. Even if they had the same level of expertise!

Resist Science Nihilism!

But while most people probably overrate the degree of certainty in science, there also seems to be a sizable online contingent that has embraced the opposite conclusion. They know about the replication crisis and the unreliability of research, and have concluded the whole scientific operation is a scam. This goes too far in the opposite direction.

For example, a science nihilist might conclude that if expertise doesn’t drive the results above, then it must be that scientists simply find whatever they want to find, and that their results are designed to fabricate evidence for whatever they happen to believe already. But that doesn’t seem to be the case, at least in these multi-analyst studies. In both the study of soccer players and the one on immigration, participating researchers reported their beliefs before doing their analysis. In both cases there wasn’t a statistically significant correlation between prior beliefs and reported results.

If it’s not expertise and it’s not preconceived beliefs that drive results, what is it? I think it really is simply that research is hard and different defensible decisions can lead to different outcomes. Huntington-Klein et al. (2021) perform an interesting exercise where they apply the same analysis to different teams’ data, or alternatively, apply different analysis plans to the same dataset. That exercise suggests roughly half of the divergence in the teams’ conclusions stems from decisions made in the dataset construction stage and half from decisions made about the analysis. There’s no silver bullet - just a lot of little decisions that add up.

More importantly, while it’s true that any scientific study should not be viewed as the last word on anything, studies still do give us signals about what might be true. And the signals add up.

Looking at the above results, while I am not certain of anything, I come away thinking it’s slightly more likely that compulsory schooling reduces teenage pregnancy, pretty likely that dark-skinned soccer players get more red cards, and that there is no simple meaningful relationship between immigration and views on government social policy. Given that most of the decisions are defensible, I go with the results that show up more often than not.

And sometimes, the results are pretty compelling. Earlier, I mentioned that Huntington-Klein et al. (2021) actually investigated two hypotheses. In the second, they asked researchers to look at the effect of employer-provided healthcare on entrepreneurship. The key identifying assumption is that in the US, people become eligible for publicly provided health insurance (Medicare) at age 65, while people’s personalities and opportunities tend to change more slowly and idiosyncratically - they don’t suddenly change on one’s 65th birthday. So the study looks at how rates of entrepreneurship compare between groups just over the age-65 threshold and those just under it. Again, researchers had to build a dataset from publicly available data. Again, every team made different decisions, such that none of the datasets are exactly alike. Again, researchers had to decide exactly how to test the hypothesis, and again they chose slight variations in how to test it. But this time, at least, the estimated effects line up reasonably well.

I think this is pretty compelling evidence that there’s something really going on here - at least for the time and place under study.

And it isn’t necessary to have teams of researchers generate the above kinds of figures. “Multiverse analysis” asks researchers to explicitly consider how their results change under all plausible changes to the data and analysis; essentially, it asks individual teams to try and behave like a set of teams. In economics (and I’m sure in many other fields - I’m just writing about what I know here), something like this is supposedly done in the “robustness checks” section of a paper. In this part of a study, the researchers show how their results are or are not robust to alternative data and analysis decisions. The trouble has long been that robustness checks have been selective rather than systematic; the fear is that researchers highlight only the robustness checks that make their core conclusion look good and bury the rest.
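A multiverse analysis can be sketched in a few lines of code: run every combination of defensible data and analysis decisions, then look at the spread of estimates. Here is a toy illustration on synthetic data; the variable names and the particular choices (control sets, sample filters) are hypothetical and not drawn from any of the papers discussed:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: outcome depends on treatment plus two covariates.
n = 500
age = rng.normal(40, 10, n)
income = rng.normal(50, 15, n)
treated = (rng.random(n) < 0.5).astype(float)
outcome = 2.0 * treated + 0.1 * age + 0.05 * income + rng.normal(0, 5, n)

def ols_treatment_effect(y, covariates):
    """OLS slope on the treatment column (first covariate), with an intercept."""
    X = np.column_stack([np.ones_like(y)] + covariates)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Defensible decisions: which controls to include, which sample to keep.
control_sets = [[], [age], [age, income]]
sample_filters = [np.ones(n, dtype=bool), age < 60]

estimates = []
for controls, keep in itertools.product(control_sets, sample_filters):
    covs = [treated[keep]] + [c[keep] for c in controls]
    estimates.append(ols_treatment_effect(outcome[keep], covs))

print(f"{len(estimates)} specifications, "
      f"estimates range {min(estimates):.2f} to {max(estimates):.2f}")
```

A real multiverse analysis would enumerate far more decision points (variable definitions, outlier rules, estimators), but the structure is the same: a cross-product of defensible choices, and a distribution of results rather than a single number.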

But I wonder if this is changing. The robustness checks section of economics papers has been steadily ballooning over time, contributing to the novella-like length of many modern economics papers (the average length rose from 15 pages to 45 pages between 1970 and 2012). Some papers are now beginning to include figures like the following, which show how the core results change when assumptions change and which closely mirror the results generated by multiple-analyst papers. Notably, this figure includes many sets of assumptions that show results that are not statistically different from zero (the authors aren’t hiding everything).

Economists complain about how difficult these requirements make the publication process (and how unpleasant they make it to read papers), but the multiple-analyst work suggests it’s probably still a good idea, at least until our “methodological technology” catches up so that you don’t have a big spread of results when you make different defensible decisions.

More broadly, I take away three things from this literature:

1. Failures to replicate are to be expected, given the state of our methodological technology, even in the best circumstances, even if there’s no publication bias.
2. Form your ideas based on suites of papers, or entire literatures, not primarily on individual studies.
3. There is plenty of randomness in the research process for publication bias to exploit. More on that in the future.

Discuss

### Summary and Notes from 'The Signal and the Noise'

19 июля, 2021 - 01:06
Published on July 18, 2021 10:05 PM GMT

This is a compressed version of a blog post I wrote on my personal website.

The Signal and the Noise: The Art and Science of Prediction was written by Nate Silver, a political analyst who is most famous for creating the election forecasting website fivethirtyeight.com. The Signal and the Noise is one of the small number of popular books about forecasting, which is why I thought this write-up would be useful.

Weather forecasting

Humans don’t have a good track record predicting the outcomes of complex systems. But one domain where we have excelled is weather forecasting. Weather forecasts are amazingly accurate relative to the complexity involved. In the mid-70s, the US National Weather Service was off by about 6 degrees (Fahrenheit) when trying to forecast three days in advance. This isn’t much more accurate than what you get if you look at long-term averages – as in, what temperature is most likely in this region at this time of year, not taking into account any specific information. Now, the average miss is 3.5 degrees. This is actually slightly less of an improvement than I would have guessed, although to reduce the error in a forecast by a factor of two requires way more than twice the effort, since errors can compound.

I was surprised to learn how large a role humans still play in weather forecasting. A human expert assessing many computer-generated forecasts is often better than any of the forecasts are by themselves. Humans make precipitation forecasts 25% more accurate than computers alone and temperature forecasts 10% more accurate. Moreover, the accuracy added by humans has not significantly changed over time, so humans have been getting better at the same rate as the machines! If you’re wondering why the weather forecasts you use don’t feel very accurate, it’s in part because weather services are private companies that tend to exaggerate forecasts for appeal; you won’t see this inaccuracy in government forecasts. In particular, meteorologists are known to have a “wet bias” – they forecast rain more often than it actually occurs.

There have been some pretty tremendous positive externalities of commercial weather forecasting, most notably in creating sophisticated early warning systems for extreme weather. The ability to predict typhoons in India and Bangladesh has probably saved many thousands of lives. Silver has a few stories in here about people who refuse to leave their homes during an evacuation because of an unjustified scepticism of the forecasts. There also appears to be an exposure effect: studies of hurricanes find that having survived a hurricane before makes you less likely to evacuate future ones.

Philip Tetlock

A fox knows many things, but a hedgehog knows one big thing.

Archilochus

You will probably be familiar with Philip Tetlock’s work on forecasting. Some details I didn’t know about it:

• The more often an expert was on TV, the less accurate their predictions were.
• When an expert says something has no chance of happening, it happens 15% of the time. When they say it is guaranteed to happen, it happens 75% of the time. While foxes get better at predicting with more information, hedgehogs get worse. If you have grand theories instead of partial explanations, having more facts can make your worldview even less accurate.
• Group aggregations of forecasts outperform individual ones by 15-20% on average.
• Partisan differences in prediction were not seen in general (people were relatively unbiased in guessing how many seats Republicans vs. Democrats would win) but they were marked in specific cases (a left-leaning pundit is much more likely to say a specific Democrat will win).

(I wonder if this generalises? If we have some kind of broad philosophical or political worldview that biases us, we might actually see more bias the more we zero in on specific cases. Hence, while talking about specifics and partial explanations is usually the better way to get at the truth, to be effective it might require some deconstructing of one’s prior beliefs.)
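The kind of miscalibration in those first bullet points can be measured directly: bucket predictions by their stated probability and compare with how often the events actually happened. A minimal sketch (the forecast data below is invented for illustration):

```python
from collections import defaultdict

# (stated probability, whether the event actually happened)
forecasts = [
    (0.0, False), (0.0, True), (0.0, False), (0.0, False), (0.0, False),
    (1.0, True), (1.0, True), (1.0, False), (1.0, True),
    (0.5, True), (0.5, False),
]

buckets = defaultdict(list)
for p, happened in forecasts:
    buckets[p].append(happened)

# Observed frequency per stated probability. A perfectly calibrated
# forecaster would have calibration[p] == p for every p.
calibration = {p: sum(outcomes) / len(outcomes) for p, outcomes in buckets.items()}
print(calibration)
```

With these made-up forecasts, "no chance" events happen 20% of the time and "guaranteed" events only 75% of the time, which is the shape of the pattern Tetlock found in his experts.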

Climate change

Historically, the magnitude of warming from climate change has been overestimated by scientists. The actual level of warming was below the 1990 IPCC estimates’ most optimistic projection. In response, the IPCC revised down its models in 1995, and now the observed outcomes fall well within the confidence interval of the projected outcomes (albeit the warming is still slightly less than predicted). You could certainly tell a story here about bias: scientists probably want to find a large warming effect and they think that we’re at more risk of panicking too little than too much. However, these estimates assumed a “business as usual” case; so, one factor that wasn’t addressed adequately was that Chinese industry caused an unexpected increase in sulphur dioxide concentration starting around 2000, and sulphur dioxide causes a cooling effect. People forget about the other factors that contribute to warming – water vapour is actually the factor that contributes the most to the greenhouse effect! This all seems complicated to take into consideration, so the less-than-stellar prediction performance of climate scientists can probably be forgiven. They also seem to have humility: just 19% of climate scientists think that climate science can do a good job of modelling sea-level rise 50 years from now. At least as of when this book was published (2012), the effect of climate change on most extreme weather events also appears to be unclear.

The estimates around climate change are spectacularly noisy, which is well-known, but I think I had failed to appreciate just how noisy they are. Over the last 100 years, temperature has declined in one-quarter of decades – e.g. global temperatures fell from 2001 to 2011.

Economic forecasting

The only function of economic forecasting is to make astrology look respectable.

John Kenneth Galbraith

Richard Thaler breaks down the efficient market hypothesis (EMH) into two parts: the No Free Lunch assumption and the Price is Right assumption. No Free Lunch (the Groucho Marx theorem) says that you shouldn’t be willing to buy a stock from anyone willing to sell it to you; it’s difficult if not impossible to consistently beat the market. The Price is Right says that assets are priced in a way that encapsulates all information.

Thaler has a famous paper in which he looks at the company 3Com, which created a separate stock offering for its subsidiary Palm. There was a scheme whereby 3Com stockholders were guaranteed to receive three shares in Palm for every two shares in 3Com that they held, which implied that it was mathematically impossible for Palm stock to trade at more than two-thirds of the value of 3Com stock. Yet, for several months, Palm actually traded higher than 3Com, through a combination of hype and transaction costs.

Silver points out that if you look at the predictions of the Blue Chip Economic Survey and the Survey of Professional Forecasters, the former has some forecasters who do consistently better than others over the long run, but the latter doesn’t. The reason is that Blue Chip isn’t anonymous, so forecasters have an incentive to make bold claims that would garner them a lot of esteem if they turned out to be true. One study found a “rational bias”: the lesser the reputation of the institution that someone was forecasting from, the bolder they were in the claims they made. While considerations of esteem probably worsen forecasts overall, they lead some individuals to consistently outperform the crowd.

If EMH is true, how could outside observers notice massive market inefficiencies? One of the reasons why bubbles do not sort themselves out is the career incentives of traders: if you bet against the market and the market doesn’t crash, you look like an idiot, while going along with the herd won’t result in exceptionally bad personal outcomes. Silver says there is significant evidence that such herding behaviour exists.

It shocked me to learn that, over the long run, house prices in the US were remarkably stable until recently. In inflation-adjusted terms, $10,000 invested in a home in 1896 would be worth just $10,600 in 1996 (as measured by the Case-Shiller index). The value of such an investment would then almost double between 1996 and 2006!
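The implied annual growth rates are easy to check; a quick back-of-the-envelope computation (mine, not from the book):

```python
# Real (inflation-adjusted) annualised return, 1896-1996: about 0.06% per year.
hundred_year = (10_600 / 10_000) ** (1 / 100) - 1

# If prices then roughly doubled between 1996 and 2006: about 7% per year.
bubble_decade = 2 ** (1 / 10) - 1

print(f"1896-1996: {hundred_year:.2%} per year")
print(f"1996-2006: {bubble_decade:.2%} per year")
```

A century of essentially zero real return, followed by a decade compounding at roughly 7% a year, is what made the bubble look so anomalous in the long series.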

Heavy-tailed distributions

Scott Alexander criticises how people sometimes use the low total death tolls from terrorism as a way to mock conservatives, or people who are concerned about terrorism in general. Most years, lightning kills more people in the US than terrorism, so why worry? Well, every year since WW2, lightning has killed more people than atomic bombs. Would this be a convincing argument for not worrying about nuclear war? If you’ve read The Black Swan, you’ll know that lots of things are like this, with heavy-tailed risks, and that we sometimes try to shoehorn these into normal distributions.

Earthquakes are distributed according to one such heavy-tailed distribution (a power-law) whereby for every one-point increase on the Richter scale, an earthquake is ten times less likely. So the bulk of the devastation comes from just a few earthquakes. The Chilean earthquake of 1960, the Alaskan earthquake of 1964, and the Great Sumatra Earthquake of 2004 accounted for half of all energy released by all earthquakes in the world over the last 100 years!
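The frequency scaling is simple to state in code. A small sketch (the 10^1.5 energy factor in the comment is the standard magnitude-energy relationship from seismology, not a figure from the book):

```python
def relative_frequency(magnitude, reference_magnitude=5.0):
    """Frequency of quakes at `magnitude` relative to `reference_magnitude`,
    under the power law described above: ten times rarer per +1 magnitude."""
    return 10.0 ** (reference_magnitude - magnitude)

# A magnitude-8 quake is 1000x rarer than a magnitude-5 quake...
print(relative_frequency(8.0))
# ...but each +1 magnitude releases roughly 32x (10^1.5) more energy,
# which is why a handful of giant quakes dominate total energy release.
print(10 ** 1.5)
```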

Epidemics

One interesting thing Silver talks about in one of the middle chapters is the failure of SIR models to account for why there wasn’t a re-emergence of HIV in the early 2000s among active gay communities like that in San Francisco (there was an increase in unprotected sex and other STDs). It’s actually still somewhat a matter of debate why this didn’t happen, but probably it was because people began to “serosort” - namely, choose partners who had the same HIV status as them. This goes against one of the major assumptions of the SIR model, which is that interactions among individuals are random.
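The SIR model itself takes only a few lines. Here is a discrete-time toy version (parameter values are invented for illustration), with the random-mixing assumption visible right in the code:

```python
def sir(s, i, r, beta=0.3, gamma=0.1, steps=200):
    """Discrete-time SIR with homogeneous (random) mixing.
    beta: transmission rate, gamma: recovery rate.
    The key assumption is the s * i / n term: every infected person is
    equally likely to meet every susceptible person. Serosorting -
    choosing partners of the same status - breaks exactly this term."""
    n = s + i + r
    history = []
    for _ in range(steps):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = sir(s=990.0, i=10.0, r=0.0)
peak_infected = max(i for _, i, _ in history)
print(f"peak infected: {peak_infected:.0f} of 1000")
```

With these parameters the basic reproduction number is beta/gamma = 3, and the model predicts a large epidemic; a model that allows assortative mixing can predict a much smaller one from the same individual-level behaviour.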

You may have recently heard about President Ford’s 1976 campaign to vaccinate 200 million people against a suspected H1N1 pandemic. The vaccine dramatically increased the rates of the nerve-damaging Guillain-Barré syndrome, and the public turned against it, such that only 50% of people were willing to be vaccinated! The severity of the outbreak also turned out to be less than expected, so the government gave up after 25% of people were immunised. I’m surprised that I haven’t seen more written about this, and a postmortem about what went wrong here would be helpful.

Discuss

### Munich SSC/ACX (+LW) (online) meetup August 2021

18 июля, 2021 - 23:39
Published on July 18, 2021 8:39 PM GMT

We started as an in-person Munich SSC meetup during the «meetups everywhere» drive. Unfortunately, we were later forced to turn online-only. Fortunately, it means that coming is easier, and everyone is welcome!

Turns out that even when in-person meetups are possible, it makes sense to also have an online meetup going as a parallel thing.

We are a pretty unfocused group, so feel free to drop by if you want to discuss something that you hope might be interesting to people reading SSC/ACX or LW — or hear something like that discussed. Agreeing with LW conclusions or Scott Alexander positions on whatever is neither expected nor discouraged (either choice can lead to a detailed argument…).

End time is just an indication — the meetup lasts as long as we want to talk about something, and it is also perfectly normal to leave whenever you prefer (or have) to.

Discuss

### Any taxonomies of conscious experience?

18 июля, 2021 - 21:28
Published on July 18, 2021 6:28 PM GMT

I have some expertise in machine learning and AI. I broadly believe that human minds are similar to modern AI algorithms such as deep learning and reinforcement learning. I also believe that it is likely that consciousness is present wherever algorithms are executing (a form of panpsychism). I am trying to create theories about how AI algorithms could generate conscious experiences. For example, it may be the case that when an AI is in a situation where it believes that many of the actions it could take will lead to an improvement in its situation, it might feel happiness. If it feels that most choices will lead to a worse situation and it is searching for the least worst option, it might feel fear and sadness. I am trying to find existing research that might give me a taxonomy of conscious experiences (ideally with associated experimental data, e.g. surveys) that I could use to define a scope of experiences that I could then try to map onto the execution of machine learning algorithms. Ideally I am looking for taxonomies that are quite comprehensive; I have found other taxonomies very useful in the past for similar goals, such as WordNet, ConceptNet, time-use surveys, the DSM (psychiatric diagnoses), etc.

I have a very limited understanding of phenomenology and believe that its goal of understanding conscious experience may be relevant, but I am concerned that it is not presented in the systematic, textbook-style format that I am looking for. I would be very grateful for any suggestions as to where I might find a systematic overview that I might be able to use - perhaps teaching materials, something from Wikipedia, or any other source that attempts this kind of broad, systematic taxonomy.

Discuss

### A model of decision-making in the brain (the short version)

18 июля, 2021 - 17:39
Published on July 18, 2021 2:39 PM GMT

I’ve been playing around with a hypothesis about how decision-making is laid out in the human brain. Regular readers of my posts have seen this multiple times before; in particular I discussed it in great detail in Big Picture of Phasic Dopamine. But that’s a long post covering many topics, and recently I've found that I keep wanting to talk about (and solicit feedback about) just this one thing.

So here it is in brief, stripped of all the details, justifications, uncertainties, references, etc. Email me or comment at the lesswrong crosspost if you want to discuss, tell me I’m wrong, etc.

The model, in four steps

Step 1: there are parts of the cortex (dorsolateral prefrontal cortex, hippocampus) that propose thoughts, plans, etc. You can think of these as having some attractor dynamics or whatever, such that you start with a bubbly soup of partially-formed mutually-incompatible sub-thoughts, and then they quickly settle into a stable, fully-formed thought. The striatum intervenes during this process, making a quick rough guess at how promising the different pieces look, suppressing the less promising bits and enhancing the more promising bits, so that when you get a fully-formed thought, it’s likelier to be fairly promising.

Step 2: once you have a stable fully-formed thought, various other parts of the brain (mainly medial prefrontal cortex, anterior cingulate cortex, ventral striatum, amygdala, hippocampus (sorta)) “assess” that thought according to maybe dozens of genetically-hardcoded criteria like “If I'm gonna do this plan, how appropriate would it be to cringe? To salivate? To release cortisol? To laugh? How much salt would I wind up eating? How much umami?” Etc. etc. And they send this scorecard down to the hypothalamus and brainstem.

Step 3: Finally, we're at the hypothalamus & brainstem. They look at the scorecard from the previous step, and they combine that information with other information streams that they have access to, like metabolic status information—if I'm hungry, a plan that will involve eating gets extra points, whereas if I’m over-full, the same plan would lose points. Taking all that information into account, they make the final decision as to whether the plan is good or bad (a.k.a. "reward"), using a genetically-hardcoded algorithm. That decision is relayed back to the proposer as a (positive or negative) dopamine signal. If the thought / plan is bad then it gets immediately suppressed; if the thought / plan is good then it gets strengthened and stabilized so it can start orchestrating motor commands and so on.

Step 4: Learning algorithms make steps 1 & 2 get better over time. Step 1 improves because we learn to propose better plans, by treating the step 3 decisions as ground truth. Step 2 improves because the brainstem can recognize inaccurate assessments (in hindsight) and issue corrections. For example, if a plan gets a high score in the “will involve eating lots of food” category, and then the plan is executed, but you don’t wind up eating any food, then the hypothalamus & brainstem notice the discrepancy and send up a training signal to tweak the “will involve eating lots of food” assessment calculator. (The learning algorithms here are classified as “reinforcement learning” for step 1 and “supervised learning” for step 2.)
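As a toy illustration, the four steps can be caricatured in code. This is my own sketch with placeholder plans, criteria, and update rules - nothing here is from the underlying neuroscience, it just mirrors the structure of the model:

```python
import random

def brainstem_decision(scorecard, hungry):
    # Step 3: combine the scorecard with metabolic state into a scalar
    # reward, via a hardcoded ("genetically specified") rule.
    return scorecard["tastiness"] * (2.0 if hungry else 0.5) - scorecard["danger"]

class ToyBrain:
    def __init__(self, plans):
        self.plans = plans
        # Step 1's learned sense of how promising each plan is.
        self.value = {p: 0.0 for p in plans}
        # Step 2's learned assessments of each plan on hardcoded criteria.
        self.assess = {p: {"tastiness": 0.0, "danger": 0.0} for p in plans}

    def step(self, hungry, true_scores, lr=0.3):
        # Step 1: propose the most promising-looking plan (noisy winner-take-all).
        plan = max(self.plans, key=lambda p: self.value[p] + random.gauss(0, 0.1))
        # Step 2: assess the proposed plan; send the scorecard downstream.
        scorecard = self.assess[plan]
        # Step 3: brainstem turns scorecard + metabolic state into a decision.
        reward = brainstem_decision(scorecard, hungry)
        # Reward prediction error: actual decision minus predicted value.
        rpe = reward - self.value[plan]
        # Step 4a: reinforcement learning on the proposer (step 3 is ground truth).
        self.value[plan] += lr * rpe
        # Step 4b: supervised learning on the assessors, from observed outcomes.
        for criterion in scorecard:
            scorecard[criterion] += lr * (true_scores[plan][criterion] - scorecard[criterion])
        return plan

random.seed(0)
brain = ToyBrain(["eat_cake", "poke_bear"])
true_scores = {"eat_cake": {"tastiness": 1.0, "danger": 0.0},
               "poke_bear": {"tastiness": 0.0, "danger": 1.0}}
for _ in range(100):
    brain.step(hungry=True, true_scores=true_scores)
print(max(brain.value, key=brain.value.get))  # the toy brain learns to propose eating
```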

OK, I suck at brevity; here are just a couple of side-notes:
• One of the Step 2 assessments is "reward prediction", i.e. a prediction about what the decision will be in step 3. Then we subtract that from the actual step 3 decision to get Reward Prediction Error, which is helpful for various reasons.
• I can’t resist mentioning that this system seems beautifully compatible with my poorly-researched sketchy theory of how the brain solves the “symbol grounding” problem for social instincts. The idea is: sometimes the proposed “thought” happens to be a brief empathetic simulation of another person, and then the Step 2 “scorecard” gives the brainstem information about what that other person is feeling. Then the brainstem can trigger various reactions associated with jealousy, pride, etc. at appropriate times.

Again, much more at Big Picture of Phasic Dopamine.

Discuss

### The Mountaineer's Fallacy

18 июля, 2021 - 02:45
Published on July 17, 2021 11:45 PM GMT

I'm sure this has a name, but I can't remember it. So I have given it a new name. The Mountaineer's Fallacy.

The Mountaineer's Fallacy is the suggestion that climbing Mount Everest is a good start on a trip to the moon.

In one sense, you are making progress on a key metric: distance from the moon. But in another, more accurate sense, you are wasting a lot of time and effort on something that has no chance of helping you get to the moon.

Discuss

### A Models-centric Approach to Corrigible Alignment

17 июля, 2021 - 20:27
Published on July 17, 2021 5:27 PM GMT

This post starts with a reasonable story of how alignment might be solved, based on extracting human values from an AI's model of the world. Then I will elaborate on what I see as the main steps in this process. After that I will review the existing work (which I am aware of) in these areas, as well as discussing points of confusion. I will finish with a list of identified areas of confusion in this scheme.

A Reasonable Story for How Things Go Right

An AI models the world; in doing so it models humans, and in doing so it models human values. It then extracts those human values somehow. Then it acts "according to" those values in a similar way to how humans do, but with superhuman capabilities.

Breaking this down

1: Modelling humans, in particular human decision making. This is distinct from inverse reinforcement learning, as with this approach it only needs to be done from the perspective of a general predictive model: model humans as part of the world. Modelling the world to a reasonable accuracy in general will do the job thanks to the later steps.

2: Modelling humans in such a way that human values can in theory be separated from the rest of the model. This is not trivial. A neural network (for example) which predicts human behaviour will be hopelessly entangled between the human's beliefs, values, and random biases and akrasia. Possibly corrigibility-as-in-understanding work can help here but it doesn't seem easy. Something like a Pearlian causal network of factors governing human decision making would be ideal. Existing work is building towards this.

3: Pointing to a human in the model. This is non-trivial. A model of the world which predicts the world will have things in it isomorphic to humans, but we need a robust way of pointing to them. Existing work seems to give a mathematically well-defined way to point to things. This will probably need testing.

4: Actually separating values from non-values. There is a no-free-lunch theorem which states that values and beliefs cannot in general be separated from one another. I strongly suspect that we — as humans who have some idea of our own beliefs — can get around this. We could build priors into the modelling system as to what values vs beliefs generally look like. We can also tell the model to put extra bayes-weight on human statements about what our values are. We could potentially just (with an understandable-enough model) do something like step 3 to point to what our values are. None of these are concrete solutions but the problem seems tractable.

5: Using those values in a way that leverages the AI's processing power, "knowledge", and some sort of decision theory to actually make good decisions and take good actions. This also seems to be mostly of the capabilities flavour, like step 1.

More Depth on the Problems and Identifying Confusion

Stating a seemingly-plausible story of how things go right is probably not enough. There are both obvious and non-obvious problems with it. There are probably also areas of confusion which I have not untangled — or even noticed — yet.

Step 1:

There are some reasons this might be done "for us" by capabilities research:

Modelling humans and human behaviour as part of a general world model seems to be strongly capabilities-like research. A superintelligence capable of causing large-scale harm to humanity can probably model human behaviour. However...

Lots of ways of causing harm to humans do not require a complete model of human behaviour. We still need to implement steps 2-5 before the AI gains significant power, it might not be possible to do this if modelling human behaviour is something the AI only does after we have lost the ability to directly understand and intervene in its behaviour. However...

Without steps 2-5 the AI might not be agenty enough to even act in the world. This does seem like something worth investigating further though, we don't have a strong concept of when agentic behaviour arises spontaneously: some people suggest that all AIs have an incentive to become agents regardless of their implementation.

Step 2:

For this we need an AI which can do something like "splitting the world up". The "Gold Standard" here would probably be a system which builds a predictive model, and hands you a human-understandable Pearlian-looking causal network. Currently I see a few possible ways this might happen.

Modern logical AI (like the Apperception Engine or Logical Neural Networks) could progress, and take over from or unify with current neural networks. These methods generally seem to do a good job of splitting the world into meaningful and understandable abstractions/parts. The Apperception Engine needs significant improvement before it becomes anything particularly useful. I don't currently understand logical neural networks very well, so I don't have a good idea of how useful they might be in the future.

We might also be able to achieve step 2 with modifications to existing architectures. There is a significant amount of research currently being undertaken to understand neural networks, like the circuits work. If this bears fruit then we might be able to isolate particular parts of a network, and get something like a causal network from it.

Mathematical work like Scott Garrabrant's Finite Factored Sets might also improve our understanding of how causality is best expressed in a network.

Step 3:

There are various techniques which infer an agent's preferences based on the agent's actions. All of these share the need to have an agent defined for them in the first place. Therefore we need to be able to draw a boundary around a human (or multiple humans separately) in the AI's model of the world, which allows it to distinguish the "actions" or "decisions" of the human from various other causal interactions.

Pointing to "something" in a model is a problem which has been written about in the Abstraction Sequence by johnswentworth. In particular Pointing to a Flower seems to have a pretty solid grounding for how to do this. Whether or not this will work in practice requires some testing. In particular stability with respect to increasing model detail is worth looking at.

Step 4:

Separating the beliefs of an agent from the values of an agent is not — in general — possible. But we don't need to do it in general, we only need to do it for one very, very small subset of all agents which exist or could exist. I see a few possible ways of attacking this problem.

One way would be similar to step 3 in methodology: try and find a mathematical property which holds for the "values" part of human decision-making. I have not seen much successful work in this area.

A more reasonable way is something like inverse reinforcement learning, which I don't understand particularly well but seems to be a tractable area of study. The FHI have recently featured a paper which makes progress towards being able to learn the preferences of non-rational, non-omniscient agents.

A third (if somewhat impractical) way would be to — once we have a human-understandable model of human decision making — simply point to the "values-ey" parts of the model by hand. This may not be stable to increasing model complexity, and we may simply make errors.

Of course this step also assumes that human values exist. avturchin has suggested that what we define as "values" may not actually exist. I am skeptical of this: the "It All Adds Up to Normality" principle makes me unwilling to accept that, despite all the philosophical discussion on human values, they won't end up being an important part of a model of human behaviour. That article does discuss issues with the abstraction level of human values: they may only exist within certain methods of modelling a human, and not in other models. Our pointers to them might not be robust to changes in the AI's model.

Step 5:

This is another of the steps which is strongly entangled with capabilities research. Decision theory, embedded agency, and numerous other issues come into play here. Hopefully — though it does not seem certain — we will have these solved by the time we create an agentic AI. One way I can imagine this not being the case is an agent-bootstrap situation as stated in step 1.

Doing badly on certain capabilities research is also a way to get an AI which, while "aligned" in the sense of having our values, fails to implement those values in a sensible way and causes a lot of harm in the process.

A method of solving steps 4 and 5 might be to allow our model of human behaviour to observe itself and self modify, removing parts of itself like akrasia and biases and improving its own models of the world, and decision theories. We could then wait for this to converge on some final behavioural algorithm. This would certainly need a lot of investigation before it works.

List of Problems and Confusion Areas

So we have identified a good few problems with no currently known solutions. There are also some problems here I did not mention above:

• We do not know how to generally infer causal networks from sensory data (steps 1 and 2)
• We do not know whether our current understanding of abstractions will allow us to "point to" a human in a causal network (steps 2 and 3)
• We do not have a good idea of how to infer human values from a model of a human (step 4)
• We also do not have a good idea of how to "point to" values in the structure of an agent (step 4)
• Bad things might happen if a non-valueish part of human decision making gets mistakenly labelled as a value (step 4)
• We do not know how to make an AI which implements any values at all — let alone human ones (step 5)
• Human values might not exist, even if they exist at some level of abstraction the pointer to them may not be robust at all scales (overall problem)
• We do not know whether this scheme of alignment requires such an accurate model of the world that it could only be implemented in an AI that is already too powerful to directly modify (overall problem)
• Modelling humans to too high of a degree might constitute "mind crime" by creating and destroying many conscious beings (overall problem)
• It is not clear which, or how many, humans to model and how to extract overall values from them (overall problem)
• We don't know how and when agentic behaviour might arise unintentionally, and we'll need to protect against that (overall problem)
• Other unknown areas of confusion...

Discuss

### Generalising Logic Gates

17 июля, 2021 - 20:25
Published on July 17, 2021 5:25 PM GMT

Logic circuits traditionally consist of a directed, acyclic graph. Each node can take a value of 0 or 1. The nodes can have 0, 1, or 2 inputs. 0-input nodes are the "input" for the whole system. 1-input nodes are always NOT gates: they output 0 if their input is 1, and vice versa. 2-input nodes are either AND or OR gates, which work as you would expect. Some of the nodes are designated output nodes.

For m=0, n=1 there are 2^(1×2^0) = 2 possible gates. These correspond to gates with no inputs and one output, so the two gates are a node which always outputs 0 and a node which always outputs 1. If we allow n to vary, we have 2^n possible (0,n) gates. This makes sense, as a (0,n) gate just consists of a node which always outputs the same string of n bits. (m,n) gates can then be considered as a lookup table with 2^m locations, each containing n bits of information. In general there are n×2^m bits of information required to specify an (m,n) gate.
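This counting argument is easy to check mechanically:

```python
def num_gates(m, n):
    """Number of distinct (m, n) gates: an (m, n) gate is a lookup table
    with one n-bit entry for each of the 2^m possible inputs."""
    return 2 ** (n * 2 ** m)

print(num_gates(0, 1))  # 2: always-0 and always-1
print(num_gates(1, 1))  # 4: identity, NOT, constant-0, constant-1
print(num_gates(2, 1))  # 16: all two-input, one-output gates
```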

Example 1

Now let's consider all possible (2,1) gates. There are $2^{1\times 2^2}=16$ of them. Writing each gate's number in decimal and binary alongside its output for each input pair $(a_1, a_0)$:

| Gate (dec) | Gate (bin) | out for $a_1a_0=00$ | $01$ | $10$ | $11$ |
|---|---|---|---|---|---|
| 0 | 0000 | 0 | 0 | 0 | 0 |
| 1 | 0001 | 1 | 0 | 0 | 0 |
| 2 | 0010 | 0 | 1 | 0 | 0 |
| 3 | 0011 | 1 | 1 | 0 | 0 |
| 4 | 0100 | 0 | 0 | 1 | 0 |
| 5 | 0101 | 1 | 0 | 1 | 0 |
| 6 | 0110 | 0 | 1 | 1 | 0 |
| 7 | 0111 | 1 | 1 | 1 | 0 |
| 8 | 1000 | 0 | 0 | 0 | 1 |
| 9 | 1001 | 1 | 0 | 0 | 1 |
| 10 | 1010 | 0 | 1 | 0 | 1 |
| 11 | 1011 | 1 | 1 | 0 | 1 |
| 12 | 1100 | 0 | 0 | 1 | 1 |
| 13 | 1101 | 1 | 0 | 1 | 1 |
| 14 | 1110 | 0 | 1 | 1 | 1 |
| 15 | 1111 | 1 | 1 | 1 | 1 |

So where do these gate numbers come from?

Consider the input $\{0,1\}^m$ as a set of bits $a_0,\dots,a_{m-1}$. We can define the input number $A$ uniquely using $A=\sum_{i=0}^{m-1} a_i 2^i$. In binary this is just the concatenation $a_{m-1}\|\dots\|a_0$. This means there are $2^m$ possible inputs. We will index these with $k\in\{0,\dots,2^m-1\}$.

We then take $b_j$ as the $j$th output when the input $A$ is fed into the gate. We define $B$ similarly to $A$ above as $B=\sum_{j=0}^{n-1} b_j 2^j$. We then let $B_k$ be the output number corresponding to the input $A_k$.

Each $B$ has $n$ digits, so we can define $C=\sum_{k=0}^{2^m-1} B_k 2^{A_k\times n}$. This is equivalent to concatenating $B_{2^m-1}\|\dots\|B_0$ in binary, which also makes clear that $C$ has $n\times 2^m$ digits, so the maximum value of $C$ is $2^{n\times 2^m}-1$.

Each $C$ thus uniquely defines an $(m,n)$ gate, so the three values $(m,n;C)$ with $C\in\{0,\dots,2^{n\times 2^m}-1\}$ define a function $\{0,1\}^m\to\{0,1\}^n$. This is a slight generalisation of the notion of a boolean function, as boolean functions are generally defined as functions $\{0,1\}^n\to\{0,1\}$. We will call $C$ the identifier of the gate.
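This numbering is easy to check mechanically. Here's a minimal Python sketch (the function and gate names are my own) that computes the identifier $C$ from a gate's truth table:

```python
def gate_id(m, n, gate):
    """Compute the identifier C of an (m, n) gate.

    `gate` maps a tuple of m input bits (a_0, ..., a_{m-1})
    to a tuple of n output bits (b_0, ..., b_{n-1}).
    """
    C = 0
    for k in range(2 ** m):                         # input number A_k = k
        bits = tuple((k >> i) & 1 for i in range(m))
        B = sum(b << j for j, b in enumerate(gate(bits)))  # output number B_k
        C += B << (k * n)                           # place B_k at bit position A_k * n
    return C

AND = lambda a: (a[0] & a[1],)
XOR = lambda a: (a[0] ^ a[1],)
print(gate_id(2, 1, AND), gate_id(2, 1, XOR))  # 8 6
```

Running it on the standard gates recovers the identifiers in the table above.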

Let's look at the above table more closely. We see some familiar gates: 1 is NOR, 6 is XOR, 7 is NAND, 8 is AND, 9 is XNOR, and 14 is OR. In fact for any gate with identifier $C$, the gate $15-C$ is the result of applying a NOT to the output.

We also have some input-asymmetric gates. 2, 4, 11, and 13 are all of the form (a0 [GATE] NOT a1), where [GATE] is an AND, OR, NAND, or NOR gate.

The remaining gates are also of note. 0 and 15 are both constant. 3, 5, 10, and 12 each depend on only one of the inputs. These gates could be constructed from a circuit containing a single (0,1) or (1,1) gate. These illustrate a way in which gates can be trivial: they can be independent of one or more of the inputs.

There are other ways gates can be trivial, but these rely on having multiple output nodes. The first is duplicate output nodes: imagine the set of (2,2) gates for which b0 = b1 for all inputs. There are 16 of them, each corresponding to a (2,1) gate.

Another is separability. We will define an (m,n) gate as separable if we can split the m inputs into two or more subsets, each of which can only affect its own subset of the n outputs. A simple case of this is the (2,2) gate where b0 = a0 and b1 = a1. Another example is the (2,2) gate where b0 = NOT a1 and b1 = a0.

Gates can also be equivalent to each other up to permuting the inputs. We see this in the pairs 2 and 4, and 11 and 13, above.

Example 2

A half-adder is a (2,2) gate. We can work through and find its identifier. First we need a table of inputs and outputs; we will take the carry as the high output bit $b_1$ and the sum as $b_0$:

| $a_1a_0$ | carry | sum |
|---|---|---|
| 00 | 0 | 0 |
| 01 | 0 | 1 |
| 10 | 0 | 1 |
| 11 | 1 | 0 |

The binary representation of $C$ can be easily read off (inputs 11 down to 00) as 10 01 01 00. $10010100_2 = 148_{10}$, therefore the half-adder is identified as $(2,2;148)$.

A full-adder is a (3,2) gate:

| $a_2a_1a_0$ | carry | sum |
|---|---|---|
| 000 | 0 | 0 |
| 001 | 0 | 1 |
| 010 | 0 | 1 |
| 011 | 1 | 0 |
| 100 | 0 | 1 |
| 101 | 1 | 0 |
| 110 | 1 | 0 |
| 111 | 1 | 1 |

The binary number $C$ is here read off as 11 10 10 01 10 01 01 00. $1110100110010100_2 = 59796_{10}$. So the full-adder can be expressed as $(3,2;59796)$. $C$ gets big fast as gates gain inputs.
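These identifiers can be verified mechanically; a small Python sketch (helper names are mine) using the same $C$-construction:

```python
def gate_id(m, n, gate):
    # C places the output number B_k at bit position k * n
    C = 0
    for k in range(2 ** m):
        bits = tuple((k >> i) & 1 for i in range(m))
        B = sum(b << j for j, b in enumerate(gate(bits)))
        C += B << (k * n)
    return C

# Half-adder: b_0 = sum, b_1 = carry (carry is the high output bit)
half_adder = lambda a: (a[0] ^ a[1], a[0] & a[1])

# Full-adder over three input bits
def full_adder(a):
    s = a[0] + a[1] + a[2]
    return (s & 1, (s >> 1) & 1)

print(gate_id(2, 2, half_adder))  # 148
print(gate_id(3, 2, full_adder))  # 59796
```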

Constructing Gates from Simpler Gates

We might want to construct larger gates from simpler ones. This is how computers are built, and in fact we can build a half-adder from just two "standard" (2,1) logic gates, an AND and an XOR. A full-adder requires 5 "standard" (2,1) logic gates. It may be possible to build a full-adder with even fewer (2,1) logic gates if we are allowed to use all 16, but I have not checked and I do not know a good way to check yet.

We can use some tools from information theory to lower-bound how many smaller gates we ought to need in general to construct a bigger one. We will consider the space complexity as the amount of information required to specify a gate of a certain size, and compare this to the space complexity of a network of g (2,1) gates.

The space complexity of a gate is $\log_2(m)+\log_2(n)+n\times 2^m$. This grows as $\Theta(n\times 2^m)$.

When specifying a gate network, we need to specify the gates, which grows linearly in $g$: $\Theta(g)$. We must also consider the information required to specify the connectivity of the network, which is a directed acyclic graph. The number of vertices is $g+m$, and the number of edges is $2g$. The space complexity thus grows as that of a sparse directed graph, which is $\Theta(|V|+|E|)$; here this ends up being $\Theta(g+m)$. Matching the two counts, representing an arbitrary $(m,n)$ gate as a network should in general require $g$ growing as $\Theta(n\times 2^m)$.

But this is not always the case: consider a gate which adds together two $d$-digit binary numbers. This will be a $(2d, d+1)$ gate, but constructing it from 1 half-adder and $d-1$ full-adders requires only $g = 2 + 5\times(d-1) = 5d-3$ separate standard logic gates. Our value of $g$ here does not satisfy $\Theta(g+m)=\Theta(n\times 2^m)$, as $\Theta(d)\neq\Theta(d\times 2^{2d})$. Something is clearly up here. Addition appears to be some sort of special process which is unusually simple.
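To see the gap concretely, here's a quick back-of-the-envelope comparison (function names are mine) of the ripple-carry gate count $g=5d-3$ against the $n\times 2^m$ table bits for the same $(2d, d+1)$ gate:

```python
def ripple_carry_gates(d):
    # one half-adder (2 standard gates) plus d-1 full-adders (5 gates each)
    return 2 + 5 * (d - 1)

def lookup_table_bits(m, n):
    # bits needed to write down an arbitrary (m, n) gate as a lookup table
    return n * 2 ** m

for d in (2, 4, 8, 16):
    print(d, ripple_carry_gates(d), lookup_table_bits(2 * d, d + 1))
```

The network description stays linear in $d$ while the lookup table explodes exponentially.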

In this framework, this appears to us as addition being much easier than expected to construct as a network. Being easier than expected to construct as a network also makes it faster to compute, since the number of gates corresponds to the number of operations needed to evaluate the process.

Discuss

### (2009) Shane Legg - Funding safe AGI

17 июля, 2021 - 19:46
Published on July 17, 2021 4:46 PM GMT

Above is a link to an interesting blog post by Shane Legg. It was written before he started DeepMind, and he earns a hell of a lot of points for accomplishing a lot of the insanely ambitious goals set out in the post.

This part is particularly interesting:

The impression I get from the outside is that SIAI [now MIRI] views AGI design and construction as so inherently dangerous that only a centrally coordinated design effort towards a provably correct system has any hope of producing something that is safe.  My view is that betting on one horse, and a highly constrained horse at that, spells almost certain failure.  A better approach would be to act as a parent organisation, a kind of AGI VC company, that backs a number of promising teams.  Teams that fail to make progress get dropped and new teams with new ideas are picked up.  General ideas of AGI safety are also developed in the background until such a time when one of the teams starts to make serious progress.  At this time the focus would be to make the emerging AGI design as safe as possible.

Discuss

### Irrational Economics

17 июля, 2021 - 16:37
Published on July 17, 2021 1:25 PM GMT

Lars P. Syll has a great new post up on his blog, linked below.

https://larspsyll.wordpress.com/2021/07/07/deductivism-the-original-sin-in-mainstream-economics/

I highly recommend reading it before commenting. My question to the forum is not whether Lars is right or wrong; I think he produces a pretty air-tight argument. Rather, given the hand-in-glove relationship between economic predictions and political policy: is the economists' glossing over of such an important scientific axiom a feature, or a bug?

Is the old saw that economists function in a capitalist society more as theologians than as scientists on balance true? Put another way, is there evidence that the neo-classical economists understand that rampant deductivism, divorced from practical application to reality, is nonsense -- but persist in the face of said knowledge for financial and political gain?

Discuss

### Large corporations can unilaterally ban/tax ransomware payments via bets

17 июля, 2021 - 15:56
Published on July 17, 2021 12:56 PM GMT

After the Colonial Pipeline ransomware attack, the idea of banning ransomware payments has become more prominent (at least in the US). This has the benefit of reducing the incentives of hackers as well as avoiding money going to criminal syndicates. It would also be possible to instead simply tax ransomware payments at a very high rate (e.g., 1000%).

While actually implementing this ban/tax at a national level might be difficult, and implementing it at a state level puts that state at a competitive disadvantage, I claim that large corporations should be able to unilaterally simulate some of the effects of a ransom tax while gaining a competitive advantage. They could do this just by betting against paying a ransom. Specifically, the corporation could open a prediction market for 'this corporation pays a ransom in the next 5 years' and then place a large, unhedged, and uninsured bet against paying a ransom. Of course, the exact terms and payout details could vary. For example, the payout could be proportional to the total ransomware payments over a period to simulate a proportional tax instead of a flat tax. This is a sort of pre-commitment strategy, like disabling your steering wheel in a game of chicken (except with prediction markets).

This market should be reasonably efficient, so the bet itself should have near-zero expected value for the company. However, the side effects of the bet are likely positive expected value: it should discourage ransomware attacks and signal confidence about security. Note that if only one company made this bet, that would likely just shift the targets of ransomware attacks. That's still a good enough reason for a company to use this approach: despite the fact that penalizing ransomware payments is a public good, companies should be able to unilaterally benefit regardless of how many other companies also use it. Additionally, if a decent percentage of companies actually implemented this policy, then the incentives could change more broadly.

Cons
• A company must be large and well known to implement this policy. It's not useful if ransomware groups don't hear about it.
• Due to asymmetric information and adverse selection, it may be difficult for companies to secure good rates on the prediction market. This should be less of an issue for public companies and companies could just allow external audits. If these external audits disclosed security issues to the company (perhaps this would be required), then this should also improve security.
• The bet might end up being very high risk (and it can't be hedged without eliminating the whole point).
• Ransomware groups participating in the prediction market could undermine the approach and allow for 'pump and dump' schemes. It should be possible to reduce this issue in exchange for losing market liquidity and some efficiency. For instance, the company could make the bet with a large institution instead of on an open market.

Please comment if you have seen this idea or similar suggested elsewhere (I certainly haven't).

Is anyone aware of other cases where this sort of game-theoretic betting could be useful or is used? I think this approach is generally only useful if the opposing party in the game can't participate in the bet.

Discuss

### Is game depth a confused concept?

17 июля, 2021 - 15:37
Published on July 17, 2021 12:37 PM GMT

This is just a thought that came to me today, for which I didn't do any literature search. No idea if it's new.

The "depth" of a game is the range of Elo ratings between the best and worst player. Since Elo ratings are connected to win probability (if the difference is 200 points, the stronger player wins 76% of the time), game depth can also be defined as the length of the longest chain of players where each one beats the next e.g. 60% of the time.

Games with more depth are supposed to involve more knowledge, strategy, learning and so on. For example, chess has about double the depth of checkers. But I don't think that's right. Consider these games:

1. Deca-chess: two players play 10 games of chess against each other, and whoever wins more, wins.

2. Coin-chess: a three-sided coin is flipped. Heads -> player 1 wins, tails -> player 2 wins, wings -> a game of chess is played and whoever wins wins.

Under the above definition, deca-chess is "deeper" than chess, because the slightly better player has higher probability to win a match than one game. And coin-chess is more "shallow", because the slightly better player has less of an edge. Even though the amount of knowledge, strategy and learning required is exactly the same!
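To make this concrete, here's a small Python sketch (parameter values and the tie-handling rule are my own assumptions) converting a per-game edge into a match-level win probability under each variant, and back into an effective Elo gap:

```python
import math
from math import comb

def elo_gap(p):
    """Elo difference implied by win probability p (standard logistic model)."""
    return 400 * math.log10(p / (1 - p))

def deca_chess(p, games=10):
    """P(the better player wins a 10-game match).

    Assumption: a drawn match (5-5) is split 50/50, since the post
    doesn't specify a tie-break."""
    outright = sum(comb(games, w) * p**w * (1 - p)**(games - w)
                   for w in range(games // 2 + 1, games + 1))
    tied = comb(games, games // 2) * (p * (1 - p)) ** (games // 2)
    return outright + tied / 2

def coin_chess(p):
    """Each player wins outright with prob 1/3; otherwise chess decides."""
    return 1 / 3 + p / 3

p = 0.55  # per-game edge of the slightly better player
for name, q in [("chess", p), ("deca-chess", deca_chess(p)), ("coin-chess", coin_chess(p))]:
    print(f"{name}: win prob {q:.3f}, effective Elo gap {elo_gap(q):.0f}")
```

With a 55% per-game edge, deca-chess inflates the effective Elo gap and coin-chess shrinks it, even though the chess being played is identical.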

Can we fix the concept of depth to remove the influence of repeated trials and luck? I think we can, by introducing another variable: effort.

Imagine an individual player, let's call him Bob. Have Bob play in an imaginary tournament, where each individual one-on-one match has so many games that the better player wins almost certainly. But in some matches, tell Bob to play half-heartedly, and in others, tell Bob to use maximum effort. (Please ignore the unreality of the hypothetical, I'm trying to make a point.)

By the end of the tournament, we'll see players arranged by skill. But we'll also know, from looking at many Bobs, which sub-ranges of skill are covered by individual ranges of effort. In other words, we'll end up knowing this: "The difference between the worst and best player in the tournament is covered by about N intervals between a player's slack and best effort at each level".

That number N could be used as a new measure of game depth. Under this measure, chess, deca-chess and coin-chess should be equally deep, and checkers should come out less deep. And intuitively it makes sense too: "the best player surpasses me by about 5x the difference between my slack and my best effort" is something everyone can understand. It lets you feel how much work is ahead of you. (Measuring against the worst player is not as informative, but everyone only cares about their distance to the top, so that's ok.)

Having everyone play long matches against each other is not very realistic, but in games with less chance, maybe matches don't need to be that long. I'd be curious to apply this to something like Raven's matrices: draw a histogram with test score as X and number of people as Y, and renormalize the X axis so that a distance of 1 corresponds to the difference between slack and best effort for a typical person at that point. Then when a new person takes the test or plays the game, we match them against a database of previous results, and tell them "ok, you performed at X units of maximum effort from the median person".

Discuss

### Equivalent of Information Theory but for Computation?

17 июля, 2021 - 12:38
Published on July 17, 2021 9:38 AM GMT

A quick google search brings up "computation theory". This does not seem to be as precise as information theory: we cannot talk about "n units of computation" the same way we can talk about "m bits of information". In fact there does not seem to be a generally accepted fundamental unit of computation to talk about.

Computational complexity theory is well-developed but only talks about scaling.

Turing machines seem to be able to "hide" a lot of the work in the number of available symbols to go on the tape or the number of available states of the head.

Logic circuits haven't produced much and seem a bit arbitrary (why only OR, AND, and NOT gates?).

Seems like we need a theory of computation to qualitatively understand things like logical uncertainty.

Discuss

### Media Diet: Cultivating Voice, Deliberate Action, and Alive Time

17 июля, 2021 - 08:56
Published on July 17, 2021 5:56 AM GMT

Content Consumption Versus Production

My present consumption-versus-production ratio is not well balanced: I consume ridiculously more media and content than I produce.(1) So! I am changing that :)

I think content consumption is a necessary part of life and can be quite healthy and good for someone... but! If the ratio is off, and/or one is consuming certain kinds of content (especially via certain mediums) while also trying to get stronger, improve, and "level up", then cultivating a prudent media diet is necessary. I want to improve and have something to protect and pursue, so that's why I'm doing this stuff.

Producing content helps me voice my voice, get stronger, practise deliberate action, and thrive in alive time. Additionally, the more deep work I engage in, the happier and more highly functioning I become. Book recommendation: after reading Cal Newport's "Deep Work", I read "The Intellectual Life: Its Spirit, Conditions, Methods" by A.G. Sertillanges, O.P., which provided yet more insight into the benefits of deep work, further increased my desire to do it, and is an excellent guide to doing independent, novel, intellectual, and creative work. I highly endorse this book and will be re-reading it very soon. It's likely better and more timeless than Newport's "Deep Work", in my opinion.

Sources of Inspiration

I looked to these posts as examples and guides for building my media diet rules.

Media Diet and Habits
1. I reserve 8 hours of each weekday for productive, creative, deliberate, etc. work.

• 4-6 hours of each weekday's reserved 8 hours are for deep work and/or strictly necessary must-get-done-now work. The other 2-4 hours are used for less focus-intensive, less deliberate, but still productive work.

• No internet access, social activity, or otherwise potentially distracting things are permitted during this time except for what is strictly necessary to support the work I'm focusing on at that moment.

• If I absolutely can't focus on anything, or need relief I may play the piano, write, draw, exercise, or meditate for some reasonable amount of time before returning to work.

2. No Hacker News or other interesting news aggregators, social media sites, news sites, videos, TV, movies, YouTube, etc. permitted except on Saturdays and Sundays.

3. I can be as social as I want outside of the 8 hours I reserve each weekday for working.

4. I can watch one episode of Star Trek per weekday.

5. I can play one hour of video games per weekday.

6. Anything is allowed if I'm doing it socially outside of my weekday reserved 8 hours.

7. I reserve 4 hours of each weekend day for productive, creative, deliberate, etc. work but otherwise impose no restrictions on weekends.

8. When with friends or if situations prudently require it, I may make necessary on-the-spot temporary exceptions to any of the rules. Life is change, one must know how to prudently adjust as necessary.

9. I must resume writing daily shortform posts, weekly reviews, and so on to better structure and document this initiative, plus such things help my life more broadly.

10. I commit myself to this diet and habit change for 3 months. I will reevaluate this initiative on 17 October 2021 and decide whether to continue with it and if so in what ways (and make any necessary changes).

My oath of reply for this post lasts until 17 October 2021.

Be well!
Cheers,
Willa

Discuss

### What are some examples from history where a scientific theory predicted a significant experimental observation in advance?

17 июля, 2021 - 08:39
Published on July 17, 2021 5:39 AM GMT

A few examples I can think of off the top of my head, to give a feel for the reference class I'm looking for:

- The existence and position of Neptune were predicted from observations of Uranus's orbit, before anyone had ever observed Neptune directly

- Black holes were predicted from the equations of General Relativity before we'd ever observed them or their effects on stars' motion

- Not as quantitative, but Darwin's theory of evolution predicted that we'd find some mechanism by which natural selection actually operates, before we ever knew about DNA.

Are there other cool examples like these?

Discuss

### Longtermism and short-termism for personal life extension

17 июля, 2021 - 06:52
Published on July 17, 2021 3:52 AM GMT

Created: 2021-07-15 | Updated: 2021-07-16

By short-termism, I basically mean a method of analysis where one assumes the world stays the same. With longtermism, indirect effects and uncertain futures are taken into account. The longtermist approach focuses on increasing the probability of different stable outcomes, whereas the short-termist approach focuses on the short-term direct impact.

When it comes to personal life extension, short-termist strategies include things like: increasing the number of dreams (and notably lucid dreams) experienced and remembered

At a first approximation, these interventions might all look great. They can all increase subjective life expectancy more than they decrease objective life expectancy, and maybe even robustly so.

However, when the stakes are astronomical, this approach is completely backward. The longtermist approach for personal life extension focuses on dying later, not (directly) on living more. As a toy example, that could mean sleeping 10% more only to live 1% more. At a first approximation this doesn't seem to make sense, since it reduces one's total subjective lifespan; except that this 1% increase in objective lifespan could actually transform into astronomically more, because it could allow one to live long enough to live maximally long, i.e. reach a point where cryonics, anti-aging, or other anti-death technologies reach longevity escape velocity. Some longtermist interventions will come at a negligible or zero short-term subjective life expectancy cost, such as lifelogging as life extension, while others will greatly diminish your short-term subjective life expectancy, such as pre-emptive biostasis.
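The toy example can be put into numbers (all of which are made up purely for illustration; this sketch only shows the shape of the trade-off, not any real estimate):

```python
# All parameters are illustrative assumptions, not claims.
waking_frac = 2 / 3               # ~16 waking hours per day
years = 60                        # remaining objective lifespan in years
subjective = waking_frac * years  # baseline subjective (awake) lifespan

# The trade: sleep 10% more of each day in exchange for 1% more objective life.
new_waking = waking_frac - 0.10
new_years = years * 1.01
new_subjective = new_waking * new_years
# Short-termist verdict: strictly worse (new_subjective < subjective).

# Longtermist twist: suppose the extra objective time gives some probability q
# of reaching longevity escape velocity, worth (say) 10,000 subjective years.
q = 0.001
expected_with_bet = new_subjective + q * 10_000
print(subjective, new_subjective, expected_with_bet)
```

Even a tiny q dominates the direct subjective loss, which is the longtermist point.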

The longtermist approach still has overlapping recommendations with the short-termist one, but when it does, they will sometimes arise for different reasons, and to a different degree. For example, sleeping less might be good to earn more money and therefore be able to afford better biostasis instead of being good because it increases the length of your short-term subjective experience. On the other hand, other interventions are good from both the short-termist and longtermist perspectives because they increase both subjective and objective lifespan. Such interventions include a healthy diet, a healthy amount of safe exercise, breathing clean air, etc. This category tends to be the one people naturally think about when thinking about longevity.

Some interventions are not obvious to categorize; they will depend on the context and the specific reasoning behind them. For example, avoiding a new experience because it cannot be lifelogged would help avoid a micro-death at the cost of the experience. On the other hand, this experience might help preserve your self as a whole (ex.: attending a cryonics conference that is not lifelogging-friendly), and so might be worth sacrificing.

For an agent with no diminishing returns on additional subjective experiences, it’s still not literally impossible for the short-termist approach to be superior (ex.: maximum lifespan could be small or unlikely), but in practice, it seems to me like the longtermist approach is basically always correct.

There will still be various interventions within the longtermist approach that will trade-off against each other. For example, there will be longtermist arguments to get cryopreserved earlier (ex.: avoid further identity degradation) and there will also be longtermist arguments to get cryopreserved later (ex.: to wait until cryonics technologies get better).

Discuss

### Bayesianism versus conservatism versus Goodhart

17 июля, 2021 - 02:39
Published on July 16, 2021 11:39 PM GMT

I've defended using Bayesian averaging of possible utility functions, even in situations where it seems that Goodhart problems should rear their heads.

Thanks to a more recent discussion with Scott, I can now clarify all these old posts, and look more at the issues of Goodhart's law, Bayesianism, and conservatism.

Where Goodhart will always remain a problem

The Goodhart problem will always remain if we miss some of the key variables. As Stuart Russell said:

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want.

If the AI's target is phrased in terms of world GDP and human survival, then no function of these variables will make it care about human happiness and flourishing. So the Goodhart problem will always be a problem when some key variables are missed.

This is the reason that I'm looking at model-splintering, so that the AI can add variables as its power and knowledge develops.

Bayes versus Goodhart

Suppose we sort the relevant variables into three sets: $V_1$, variables we value linearly; $V_2$, variables that must stay positive for things to go well; and $V_3$, variables that incur an exponential penalty when they go negative. There may be variables that belong in more than one category. For example, if happiness/sadness are on the same scale (which is not automatically true), then maybe this variable would be linearly important in standard situations, but things would go very bad if it got too negative. So if $v$ is this variable, we might want to include $v$ in $V_1$ and $v-\omega$ in $V_3$ (for some critical value $\omega$). Similarly, if we wanted $v$ to stay in the range $[a,b]$, we could add both $v-a$ and $b-v$ to $V_3$ (or to $V_2$).

Anyway, each set $V_i$ has $N_i$ elements. To encode the fact that we're adding extra variables whose values we're uncertain about, we'll assume that $n_i$ of them are "really" in the set, but the AI can't tell which ones. Define $\sigma=(\sigma_1,\sigma_2,\sigma_3)$ as a selection of $n_i$ variables from each $V_i$, and

$$U_\sigma=\Big(\sum_{v_1\in\sigma_1}v_1\Big)\cdot\Big(\prod_{v_2\in\sigma_2}H(v_2)\Big)+\sum_{v_3\in\sigma_3}-e^{-v_3}.$$

Here $H$ is the Heaviside step function, where $H(x)=0$ if $x\le 0$ and $H(x)=1$ otherwise[1].

Then if we are completely ignorant about which variables are truly in the sets, the Bayesian utility is:

$$U=U_{V_1,V_2,V_3}=\frac{\sum_\sigma U_\sigma}{\binom{N_1}{n_1}\binom{N_2}{n_2}\binom{N_3}{n_3}}.$$

What can we say about $U$? Well, if we fix a $v_3\in V_3$, then it will appear in $\binom{N_1}{n_1}\binom{N_2}{n_2}\binom{N_3-1}{n_3-1}$ of the $U_\sigma$ terms (i.e. all choices of the $V_1$ and $V_2$ variables, and all possible choices of the other $n_3-1$ variables among the other $N_3-1$ variables in $V_3$). Thus $v_3$ appears in $U$ as an additional term which is

$$-e^{-v_3}\frac{\binom{N_3-1}{n_3-1}}{\binom{N_3}{n_3}}=-e^{-v_3}\frac{n_3}{N_3}.$$

How about a fixed $v_2\in V_2$? Well, if $v_2$ goes below 0, that will kill off $\binom{N_1}{n_1}\binom{N_2-1}{n_2-1}\binom{N_3}{n_3}$ of the $U_\sigma$. So if all other $v_2'\in V_2$ are positive, sending $v_2$ negative will multiply the expected $V_1$ contribution by $1-n_2/N_2$.

Similarly, for fixed $v_1\in V_1$, if all $v_2\in V_2$ are positive, then it will appear in $U$ as $+v_1(n_1/N_1)$.

So the Bayesian mix is roughly linear in the $V_1$, treats any $v\in V_2$ as losing a fraction $n_2/N_2$ of the utility if it goes negative, and is linear in the negative exponentials of the $V_3$.

Being somewhat sloppy

In some ways we cannot afford to be sloppy: assume that $v$ should be in $V_2$ but isn't, so the true utility is $U=U_{V_1,V_2\cup\{v\},V_3}$, but a $U'=U_{V_1,V_2,V_3}$-maximiser might sacrifice $v$ to increase $U'$; thus (ignoring the $V_3$) maximising $U'$ may set $U$ to 0.

Conversely, if $v$ should be in $V_3$ but isn't, the true utility is $U=U_{V_1,V_2,V_3\cup\{v\}}$. Then a $U'$-maximiser might sacrifice $v$ to maximise $U'$, at a cost of some multiple of $-e^{-v}$, which might be enormous.

So, just as forgetting a variable triggers a standard Goodhart effect (see the section above), forgetting to assign it to the right category also triggers a Goodhart effect.

On the other hand, if we get the numbers $n_1$, $n_2$, or $n_3$ wrong, then we scale various pieces of the utility by a constant factor. This tends to have a small effect, as compared with the exponential penalty of the $V_3$ or the "kill off $V_1$" effect of the $V_2$.

In the general case, we wouldn't have $V_i$ with $n_i$ of them that matter; instead we'd have a probability distribution $p_i$ over the possible variables in $V_i$ that matter (this would be a probability distribution over $2^{V_i}$, the set of subsets of $V_i$). In that case, we can be a bit sloppy with $p_i$ as long as we don't miss out variables we should include.

So, along certain axes, we can be sloppy and imprecise without the system breaking. This is a change from the standard Goodhart/AI alignment situations, where sloppiness almost always guarantees disaster.

Conservatism

If we were being more conservative, instead of maximising $U=U_{V_1,V_2,V_3}$, a weighted sum of all the $U_\sigma$, we might want to maximise the minimum value of these, and hence the utility function

$$\mu=\mu_{V_1,V_2,V_3}=\min_\sigma(U_\sigma).$$

Let's compare what maximising μ and maximising U might get us.

For $V_1$, maximising $\mu$ involves maximising the average of the minimal $n_1$ values of the $V_1$, while $U$ maximises the whole average. Here, $U$ is superior to $\mu$, since $V_1$ variables are explicitly designed to be traded off linearly against each other; if we had reasons to think they behaved differently when low-valued, they should instead (or also) be in $V_2$ or $V_3$.

If we have a single $v_3\in V_3$ that gets very negative, $U$ will get a penalty of $-e^{-v_3}(n_3/N_3)$ while $\mu$ gets a penalty of $-e^{-v_3}$. Up to that constant factor, they behave similarly.

If any $v_2\in V_2$ is negative, then $\mu$ ignores all of $V_1$, while $U$ scales the $V_1$ utility by $1-(n_2/N_2)$. Thus $U$ can take the risk of letting some values in $V_2$ go negative (in exchange for enough $V_1$ bonus), while $\mu$ won't take the risk at all. We could see $\mu$ as behaving like $U$ if $n_2=N_2$.

Which is correct there? Most likely $U$. Imagine if $n_2=1$ while $N_2=10^{24}$. Then $\mu$ is saying: "there are a trillion trillion of these variables in $V_2$. It's bad for one of them to be negative. However, you should behave as if it's bad for any of them to be negative, and not take that risk."

That's an extreme example; but the point is that $U$ incorporates our knowledge or estimate of tradeoffs, but $\mu$ does not. Since it does not know the tradeoffs, we can set up situations where $\mu$-maximising is not just sub-optimal, but quite dire. For example, if one of the $v_2\in V_2$ is already negative, then the $\mu$-maximiser will agree to reduce $\sum_{v_1\in V_1}v_1$ to just above zero (no matter how high it was initially) in exchange for an infinitesimal chance of increasing $v_2$ above 0. So the amount of $U$-value that the $\mu$-maximiser is willing to throw away is potentially unbounded.
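These claims are easy to check numerically. A minimal sketch (all sets, values, and function names are invented purely for illustration) that enumerates the $\sigma$'s and compares the Bayesian mix $U$ with the conservative $\mu$:

```python
import itertools
import math

def U_sigma(s1, s2, s3):
    # (sum of selected V1 vars) * (product of Heaviside steps on V2 vars)
    # minus the exponential penalties on the selected V3 vars
    gate = all(v > 0 for v in s2)
    return sum(s1) * gate - sum(math.exp(-v) for v in s3)

def choices(V1, n1, V2, n2, V3, n3):
    for s1 in itertools.combinations(V1, n1):
        for s2 in itertools.combinations(V2, n2):
            for s3 in itertools.combinations(V3, n3):
                yield U_sigma(s1, s2, s3)

def U(*args):   # Bayesian mix: average over all selections sigma
    vals = list(choices(*args))
    return sum(vals) / len(vals)

def mu(*args):  # conservative: worst case over selections sigma
    return min(choices(*args))

V2, V3 = [-0.5, 1.0], [0.0, 0.0]
# U still rewards improving V1 (scaled by 1 - n2/N2), while mu is blind to
# V1 entirely as long as some v2 might be negative:
print(U([1, 2], 2, V2, 1, V3, 1), U([10, 20], 2, V2, 1, V3, 1))    # 0.5 14.0
print(mu([1, 2], 2, V2, 1, V3, 1), mu([10, 20], 2, V2, 1, V3, 1))  # -1.0 -1.0
```

Multiplying every $V_1$ value by ten moves $U$ substantially but leaves $\mu$ completely unchanged, which is exactly the dire behaviour described above.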

Bayesianism versus conservatism Bayes beats conservatism

Both μ-maximising and U-maximising are forms of conservatism; neither of them is just saying "maximise the vi linearly". When designing our conservative utility, we'd like to incorporate our knowledge about tradeoffs. The μ-style approach does this very badly in many circumstances, and so can lose a lot of value.

Why might we nevertheless use a μ-style approach rather than a U-style one? If we feel that there is some circumstance where U could also lose a lot of value, and that we haven't fully accounted for. After all, there may be situations where we don't know the tradeoffs; maybe there are some tradeoffs where U-maximising is terrible, and we just haven't thought of them yet.

But why would μ-maximising avoid these problems? Maybe there are theorems about how badly μ-maximising can mess up; since it is based around a minimum, it is plausible that we can bound the downside risk in some circumstances. But if we knew these theorems, we could use them to design a better U.

There might be a practical argument for μ or U. Maybe humans are too clever when we try to account for all our genuine preferences, so, in practice, μ-maximising is safer than U-maximising. We'd like an AI to take this into account, and automatically create a good U, but maybe that's also too clever - maybe we can't program an AI to successfully take into account the biases we have when estimating our own biases.

Still, seeing the definite failures of μ and the ability to apply most of our μ-insights to U-design, there's a strong argument that U-maximising would likely be safer than μ-maximising.

Bayes is conservatism

I've been a bit unfair there. I constructed U with some care and effort; I could have made it more realistic, but it was a reasonably complex and involved effort.

When constructing μ, on the other hand, I simply took the previously defined Uσ and took a hard minimum across them. If I wanted to construct a better μ, I could have put in more effort to define functions over which to take a minimum, taking into account all the likely tradeoffs...

We can see the convergence here: this sounds a lot like building U. My argument, not fully proven, is that if we are constructing any objective function that incorporates all our knowledge, we can do as well or better with a Bayesian mix of utility functions.

If we have an argument for why a non-Bayesian approach (such as a minimum or a smooth minimum) might work better, then that very argument becomes a tool to improve our Bayesian mix. Saying "if we use the Bayesian mix, we have problem P" becomes "how bad is P, and what would we pay to avoid it?", which modifies the Bayesian mix.
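To make the contrast concrete, here is a minimal sketch of the three aggregates discussed: a hard minimum (μ-style), a smooth minimum, and a Bayesian mix (U-style). The utility values and weights are illustrative numbers of my own, not from the post.

```python
import math

# Hypothetical values U_sigma(w) of one outcome w under three candidate
# utility functions, plus assumed Bayesian weights over the candidates.
u = [1.0, 4.0, 10.0]
p = [0.5, 0.3, 0.2]

# mu-style conservatism: a hard minimum over the candidates.
hard_min = min(u)

# A smooth minimum (softmin over a sum of exponentials): differentiable,
# and for this form always a lower bound on the hard minimum.
def smooth_min(values, k=1.0):
    return -math.log(sum(math.exp(-k * v) for v in values)) / k

# U-style: a Bayesian mix, weighting each candidate by its credence.
bayes_mix = sum(pi * ui for pi, ui in zip(p, u))

print(hard_min, smooth_min(u), bayes_mix)  # 1.0 ~0.95 3.7
```

The hard and smooth minima ignore the weights entirely, which is exactly the sense in which the μ-style approach throws away knowledge about tradeoffs that the Bayesian mix retains.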

1. If we wanted a more gradual transition, we could use S0, the smoothstep/clamping function defined by S0(x)=0 if x<0, S0(x)=1 if x>1, and S0(x)=x otherwise. ↩︎
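The footnote's clamping function can be written directly; a minimal Python version:

```python
def S0(x):
    # Smoothstep/clamping function from the footnote:
    # 0 for x < 0, 1 for x > 1, and the identity in between.
    return min(max(x, 0.0), 1.0)

print(S0(-2), S0(0.3), S0(5))  # 0.0 0.3 1.0
```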

Discuss

### "If and Only If" Should Be Spelled "Ifeff"

July 17, 2021 - 01:03
Published on July 16, 2021 10:03 PM GMT

"If and only if" is an important logical concept, useful in many contexts, both mathematical and nonmathematical. Unfortunately, "if and only if" is also an unwieldy five-syllable phrase. Mathematicians have solved this problem by shortening it to "iff". Unfortunately, this shortening has not caught on in non-mathematical contexts. This makes some communication and thinking unwieldy and ambiguous.

I think the reason "iff" hasn't caught on more broadly is that it's easily misread as "if" and doesn't have an intuitive pronunciation. I think both of these problems would be solved by changing the spelling to "ifeff" (pronounced /ɪfɛff/). This would slightly improve the thinking and communication of most English speakers.

I think a small group of people using "ifeff" in their writing would likely start a process where "ifeff" eventually takes over, spreading the way new vocabulary usually does, and that "ifeff" would be adopted by groups that don't currently have a short enough word for this concept. I also think the correspondence between "iff" and "ifeff" is intuitive enough that this will not cause much confusion.

Discuss