
A dialectical view of the history of AI, Part 1: We’re only in the antithesis phase. [A synthesis is in the future.]

Published on November 16, 2023 12:34 PM GMT

Cross-posted from New Savanna.

The idea that history proceeds by way of dialectical change is due primarily to Hegel and Marx. While I read a bit of both early in my career, I haven’t been deeply influenced by either of them. Nonetheless I find the notion of dialectical change working out through history to be a useful way of thinking about the history of AI, because it implies that that history is more than just one thing after another.

This dialectical process is generally schematized as a movement from a thesis, to an antithesis, and finally, to a synthesis on a “higher level,” whatever that is. The technical term is Aufhebung. Wikipedia:

In Hegel, the term Aufhebung has the apparently contradictory implications of both preserving and changing, and eventually advancement (the German verb aufheben means "to cancel", "to keep" and "to pick up"). The tension between these senses suits what Hegel is trying to talk about. In sublation, a term or concept is both preserved and changed through its dialectical interplay with another term or concept. Sublation is the motor by which the dialectic functions.

So, why do I think the history of AI is best conceived in this way? The first era, THESIS, running from the 1950s up through and into the 1980s, was based on top-down deductive symbolic methods. The second era, ANTITHESIS, which began its ascent in the 1990s and now reigns, is based on bottom-up statistical methods. These are conceptually and computationally quite different, opposite, if you will. As for the third era, SYNTHESIS, well, we don’t even know if there will be a third era. Perhaps the second, the current, era will take us all the way, whatever that means. Color me skeptical. I believe there will be a third era, and that it will involve a synthesis of conceptual ideas and computational techniques from the previous eras.

Note, though, that I will be concentrating on efforts to model language. In the first place, that’s what I know best. More importantly, however, it is the work on language that is currently evoking the most fevered speculations about the future of AI.

Let’s take a look. Find a comfortable chair, adjust the lighting, pour yourself a drink, sit back, relax, and read. This is going to take a while.

Symbolic AI: Thesis

The pursuit of artificial intelligence started back in the 1950s; it began with certain ideas and certain computational capabilities. The latter were crude and radically underpowered by today’s standards. As for the ideas, we need to consider two more or less independent starting points. One gives us the term “artificial intelligence” (AI), which John McCarthy coined in connection with a conference held at Dartmouth in 1956. The other is associated with the pursuit of machine translation (MT) which, in the United States, meant translating Russian technical documents into English. MT was funded primarily by the Defense Department.

The goal of MT was practical, relentlessly practical. There was no talk of intelligence and Turing tests and the like. The only thing that mattered was being able to take a Russian text, feed it into a computer, and get out a competent English translation of that text. Promises were made, but little was delivered. The Defense Department pulled the plug on that work in the mid-1960s. Researchers in MT then proceeded to rebrand themselves as investigators of computational linguistics (CL).

Meanwhile researchers in AI gave themselves a very different agenda. They were gunning for human intelligence and were constantly predicting we’d achieve it within a decade or so. They adopted chess as one of their intellectual testing grounds. Thus, in a paper published in 1958 in the IBM Journal of Research and Development, Newell, Shaw, and Simon wrote that if “one could devise a successful chess machine, one would seem to have penetrated to the core of human intellectual endeavor.” In a famous paper, John McCarthy dubbed chess the Drosophila of AI.

Chess wasn’t the only thing that attracted these researchers; they also worked on things like heuristic search, logic, and proving theorems in geometry. That is, they chose domains which, like chess, were highly rationalized. Chess, like all highly formalized systems, is grounded in a fixed set of rules. We have a board with 64 squares, six kinds of pieces with tightly specified rules of deployment, and a few other rules governing the terms of play. A seemingly unending variety of chess games then unfolds from these simple primitive means according to the skill and ingenuity, aka intelligence, of the players.

This regime, termed symbolic AI in retrospect, remained in force through the 1980s and into the 1990s. However, trouble began showing up in the 1970s. To be sure, the optimistic predictions of the early years hadn’t come to pass; humans still beat computers at chess, for example. But those were mere setbacks.

These problems were deeper. While the computational linguists were still working on machine translation, they were also interested in speech recognition and speech understanding. Stated simply, speech recognition goes like this: You give a computer a string of spoken language and it transcribes it into written language. The AI folks were interested in this as well. It’s not the sort of thing humans give a moment’s thought to; we simply do it. It is mere perception. Yet it was proving to be surprisingly difficult. The AI folks also turned to computer vision: Give a computer a visual image and have it identify the object. That was difficult as well, even with such simple graphic objects as printed letters.

Speech understanding, however, was on the face of it intrinsically more difficult. Not only does the system have to recognize the speech, but it must understand what is said. But how do you determine whether or not the computer understood what you said? You could ask it: “Do you understand?” And if it replies, “yes,” then what? You give it something to do.

That’s what the DARPA Speech Understanding Project set out to do over five years in the early to mid 1970s. Understanding would be tested by having the computer answer questions about database entries. Three independent projects were funded; interesting and influential research was done. But those systems, interesting as they were, were not remotely as capable as Siri or Alexa in our time, which run on vastly more compute encompassed in much smaller packages. We were a long way from having a computer system that could converse as fluently as a toddler, much less discourse intelligently on the weather, current events, the fall of Rome, the Mongols’ conquest of China, or how to build a fusion reactor.

During the 1980s the commercial development of AI petered out and a so-called AI Winter settled in. It would seem that AI and CL had hit the proverbial wall. The classical era, the era of symbolic computing, was all but over.

Statistical NLP and Machine Learning: Antithesis

Let’s step back to 1957 for a moment. That’s when the Russians launched Sputnik. It’s also when Noam Chomsky published Syntactic Structures, which was to prove very influential in linguistics and in the so-called cognitive sciences more generally. First of all, he reinforced the idea that linguists should study syntax, but not semantics, and argued for their strict separation. That’s the point of his famous example sentence: “Colorless green ideas sleep furiously.” It’s syntactically perfect, but semantically it’s nonsense. He then went on to model syntax on the idea of mathematical proof, where the syntactic structure of a sentence is, in effect, a theorem in a mathematical system.

He also argued against statistical methods, asserting (p. 17): “I think that we are forced to conclude that grammar is autonomous and independent of meaning, and that probabilistic models give no particular insight into some of the basic problems of syntactic structure.” Statistical methods are necessarily data-driven. Where the inference rules of symbolic systems were painstakingly hand-crafted by researchers, the rules of statistical systems are derived from a computational analysis of data.

While various statistical ideas had been around since the 1950s, including neural nets, one of the first arenas where they demonstrated their practical value was that of speech recognition. In the mid-1980s an IBM group led by Fred Jelinek used a standard statistical technique, the hidden Markov model, to demonstrate recognition over a 20,000-word vocabulary. Where the work done under DARPA’s 1970s Speech Understanding Project called on syntactic and semantic information to aid in disambiguating speech, Jelinek’s work was based only on speech data. Later in the decade various speech recognition systems were released as consumer products.

Speech recognition is a very different kind of problem from that of identifying syntactic structures in sentences, the problem at the center of Chomsky’s conceptual universe. It is not unreasonable to think of parsing as a top-down formal process. How can you shoehorn speech recognition into that conceptual box? You can’t. Speech recognition is about identifying discrete categories (words) in a continuous stream of messy data. The inventory of words may be finite, if large, but the variability of speech streams is unbounded.

We’re no longer in Kansas. It’s a different world, an opposite one.

Statistical methods of machine translation began to replace rule-based methods going into the 2000s and they were, in turn, supplanted by deep learning in the 2010s (note that deep learning is fundamentally statistical in character). Though none of these various approaches is equivalent to human translation, the statistical approaches are adequate for a variety of informal purposes. In 2012 AlexNet won an important machine vision competition by a significant margin and signaled the emergence of deep neural nets as the dominant machine learning technology. In 2020 GPT-3 surprised everyone with its handling of language tasks and led to widespread speculation about reaching a tipping point (I wrote a working paper on the subject, GPT-3: Waterloo or Rubicon? Here be Dragons). A tipping point to where? That’s not clear. But wherever it is, the release of ChatGPT in late November of 2022 seems to have brought WHERE a lot closer, and to a much wider segment of the population.

That’s where we are now.

Yes, I know, there’s all the vision and image stuff. Yes, it’s interesting and important. And, no, it doesn’t change the story in any fundamental way.

AGI Now or Do We Need New Architectures?

What’s next? No one really knows, no one. Most, though certainly not all, of the attention and capital investment is on large language models (LLMs). One widely popular view, which now has quite a bit of investment money riding on it, is that we have most of the basic ideas and techniques that we need.

All we have to do is scale it up: more compute, more data (see Rich Sutton's "The Bitter Lesson"). As far as I know, that’s where most of the money is going these days. How long before we know, one way or another, whether or not scaling is the way to go? I don’t know, nor does anyone else, not really. If I’m wrong, then the basic scheme of this post is wrong. All we have is the eclipse of classical AI by statistical AI. End of story.

It is undeniable that these newer methods have achieved spectacular success in areas where symbolic AI did hit one of those proverbial walls. It is also undeniable that the availability of more compute has been central to this success. What reason do we have to think that still more compute and more data won’t continue to reap rewards?

There are reasons for skepticism. For one thing LLMs have a tendency to “hallucinate,” as it’s called. They make things up. They have problems with common-sense reasoning as well. Both of these problems seem related to the fact that the systems have no direct access to the external world, which would give them purchase on ground truth and access to all the many details of the physical world on which common sense rests. They also have difficulties with tight logical reasoning and with (complex) planning. These various limitations seem to be inherent in the architecture and thus cannot be fixed simply by scaling up.

Those criticisms, and others, need to be argued and not just stated. But this is not the place for those arguments. I’ve articulated some of them in GPT-3: Waterloo or Rubicon? Gary Marcus is perhaps the most visible critic of the current regime; you can check his substack, Marcus on AI, for arguments as well. We both advocate the addition of symbolic methods to these statistical systems. David Ferrucci, who led IBM’s Watson project, has been keeping out of the public sphere, but he is building a company, Elemental Cognition, based on the integration of statistical and symbolic concepts and techniques. Yann LeCun argues that systems trained on language alone are inherently limited; he is surely correct about this. Subbarao Kambhampati, who is an expert in planning systems, has a tweet stream listing a number of critical articles he’s written. Others are critical as well.

My point is simply that there ARE arguments against the idea that all we need to do is build bigger and bigger systems. Simply pointing to the future and saying “Onward ho!” does not negate those reasons. Should those reasons prove valid, what happens next?

Synthesis: Assuming I’m right, what’s dialectical about this evolution?

Or perhaps, more simply, why is it an evolution and not just one thing after another?

Let’s go back to the beginning. While I’ve told the story in pretty much the standard way, symbolic computing being eclipsed by various statistical methods, both kinds of ideas were in the mix from the beginning. As Richard Hughes Gibson points out in a very interesting recent article, back in the 1940s and early 1950s Claude “information theory” Shannon, in effect, hand-simulated small versions of current machine learning engines for language. And Warren Weaver proposed a statistical basis for machine translation.

Why, then, weren’t those ideas acted upon at the time? Because, as a Marxist might say, the material conditions weren’t right. They didn’t have either the computers or the data necessary for doing it. While transistors existed, just barely, integrated circuits did not, much less the very large-scale integration necessary for really powerful processors, not to mention the whole range of peripherals necessary.

Even if they’d had the compute, I doubt that those statistical ideas would have been the first to be tried. Why? Because there is a long intellectual tradition – extending back through George Boole in the 19th century, Leibniz in the 17th, to Aristotle in the ancient world – in which language and thought are conceptualized in terms of logic. That’s the tradition that would have been closest to hand when those intellectuals gathered at that famous 1956 Dartmouth workshop that marked the institutional origins of artificial intelligence. Moreover, way back in 1943 Warren McCulloch and Walter Pitts had published “A Logical Calculus of the Ideas Immanent in Nervous Activity,” in which, as the title says, it made sense to think of the nervous system as carrying out logical operations.

That pretty much settled it. The joint theoretical and practical investigation of the computational view of mind was to be grounded in three ideas arising out of the Dartmouth workshop: “symbolic methods, systems focused on limited domains (early expert systems), and deductive systems versus inductive systems.” The first and third follow from the tradition of logic while the second was a practical necessity dictated by the nature of the available computers.

Our intrepid band of intellectual adventurers then set out to explore the new territory. One can argue whether or not they discovered the territory, or invented it, but at the moment I don’t care which way we think about it. They explored it for roughly three decades, and then AI Winter arrived in the mid-1980s. You can see it in the dip in the blue curve in the following graph, which is from Google Ngrams:

What happened? There is a lot of discussion about that in the literature. It is an important issue. But I’m not even going to attempt the briefest of summaries. I’m going to take it as a brute fact.

What happened next? As you see, the neural networks curve begins to rise, peaking in the mid-1990s. From the point of view I’m taking in this essay, that’s a family of statistical techniques, though a somewhat different family than that associated with machine learning, which has a gradual rise from the mid-1980s up to 2015, at which point it shifts to a steep rise, taking it above the curves for both neural networks and artificial intelligence. As for “artificial intelligence,” I strongly suspect that from 2015 on the term is more likely to refer to the statistical techniques in the neural net or machine learning families than to traditional symbolic AI. It’s not that symbolic AI has dropped off the face of the earth. It’s still around; but it’s a relatively small component of the work being done under the general rubric of AI.

And, of course, neither machine learning nor neural networks came out of nowhere. As I pointed out at the beginning of this section, they were there from the beginning. They never disappeared. Someone was always working on them. But when AI Winter set in, things thawed out, first for neural nets, and then somewhat later for machine learning. People and resources moved into those areas and began exploring those regions of the territory.

That brings us to where we were in the previous section: What’s next? As I said then, I think the current regime is living on borrowed time. It too will hit a wall. What then?

I don’t know. Perhaps the whole enterprise will collapse. Those committed to the current regime, Bigger Data Bigger Compute, will refuse to admit that progress has stalled and will limp along as best they can. Since advocates of other approaches have been starved for resources, they can’t really move into the conceptual and technical vacuum that has opened up. They’ll just straggle along, surviving on scraps left to them by the remnants of the Bigger Gangs. What of China, India, Japan, and even Africa? Will they have been bogged down as well? Who knows.

Somehow, though, I do think the enterprise will continue. Assuming that people and material resources are available, what about the ideas? I suppose it is possible that there is some third family of ideas that is distinctly different from the symbolic and statistical families, but I haven’t heard of them, nor has anyone I know.

No, to continue forward, we are going to need approaches that combine ideas from both families. Just what form that synthesis will take, who knows? Perhaps we should look to the brain for inspiration. That’s what David Hays and I did back in 1988 in Principles and Development of Natural Intelligence. More recently I’ve augmented and extended those ideas with complex dynamics (e.g. Walter J. Freeman) and the geometric semantics of Peter Gärdenfors in Relational Nets Over Attractors, A Primer: Part 1, Design for a Mind (2023). Surely I am not the only one exploring such ideas. If the dialectic gets stalled, it will not be for lack of ideas.

As for dialectics, first we explore symbolic territory, Thesis. That collapses and we explore statistical territory, Antithesis. When that collapses? When that collapses we’ve run out of territory, no? We take ideas from both territories and create a new territory, on a “higher” level: Synthesis.

More later. 

* * * * * 
 

Addendum: The Turing Awards for 1975 and 2018 

It just occurred to me that the committee that selects the winners of the Turing Award has been kind enough to recognize the distinction between symbolic and statistical AI. In 1975 the award went to Allen Newell and Herbert Simon for “basic contributions to artificial intelligence, the psychology of human cognition, and list processing.” In 2018 it went to Yoshua Bengio, Geoffrey Hinton, and Yann LeCun “for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.” Here are their papers marking the occasion:

Allen Newell and Herbert A. Simon, Computer science as empirical inquiry: symbols and search, Communications of the ACM, Volume 19, Issue 3, March 1976, pp. 113–126, https://doi.org/10.1145/360018.360022

Yoshua Bengio, Yann LeCun, Geoffrey Hinton, Deep Learning for AI, Communications of the ACM, July 2021, Vol. 64, No. 7, pp. 58–65, https://doi.org/10.1145/3448250

The final paragraph of this essay is quite suggestive:

How are the directions suggested by these open questions related to the symbolic AI research program from the 20th century? Clearly, this symbolic AI program aimed at achieving system 2 abilities, such as reasoning, being able to factorize knowledge into pieces which can easily be recombined in a sequence of computational steps, and being able to manipulate abstract variables, types, and instances. We would like to design neural networks which can do all these things while working with real-valued vectors so as to preserve the strengths of deep learning which include efficient large-scale learning using differentiable computation and gradient-based adaptation, grounding of high-level concepts in low-level perception and action, handling uncertain data, and using distributed representations.

It is my belief that, yes, we can do that, but doing so is going to require that we integrate the tools and techniques of classical symbolic reasoning and of statistical deep learning on a ‘higher’ level. Think of Piaget’s concept of reflective abstraction, where the operational tools of one level become objects deployed and manipulated at an evolved higher level.




How much fraud is there in academia?

Published on November 16, 2023 11:50 AM GMT

Quinn wrote a while ago "I heard a pretty haunting take about how long it took to discover steroids in bike races. Apparently, there was a while where a "few bad apples" narrative remained popular even when an ostensibly "one of the good ones" guy was outperforming guys discovered to be using steroids."

I have been thinking about that notion after researching BPC 157, where it seems that the literature around it is completely fraudulent.

How do you think about the issue of how much of the literature is fraudulent?




Learning coefficient estimation: the details

Published on November 16, 2023 3:19 AM GMT

What this is for

The learning coefficient (LC), or RLCT, is a quantity from singular learning theory that can help to quantify the "complexity" of deep learning models, among other things.

This guide is primarily intended to help people interested in improving learning coefficient estimation get up to speed with how it works, behind the scenes. If you're just trying to use the LC for your own project, you can just use the library without knowing all the details, though this guide might still be helpful. It's highly recommended you read this post before reading this one, if you haven't already.

We're primarily covering the WBIC paper (Watanabe 2010), the foundation for current LC estimation techniques, but the presentation here is original, aiming for better intuition, and differs substantially from the paper. We'll also briefly cover Lau et al. 2023.

Despite all the lengthy talk, what you end up doing in practice is really simple, and the code is designed to highlight that. After some relatively quick setup, the actual LC calculation can be comfortably done in one or two lines of code.

What this isn't for
  • A good overview of SLT, or motivation behind studying the LC or loss landscape volume in the first place. We're narrowly focused on LC estimation here.
  • Sampling details. These are very important! But they're not really unique to singular learning theory, and there are plenty of good resources and tutorials on MCMC elsewhere.
  • Derivations of formulas, beyond the high-level reasoning.
TLDR 
  • What is the learning coefficient? (Review from last time)
    • The learning coefficient (LC), also called the RLCT, measures basin broadness.
    • This isn't new, but typically "basin broadness" is operationalized as "basin flatness" - that is, via the determinant of the Hessian. When the model is singular (eigenvalues of the Hessian are zero), this is a bad idea.
    • The LC operationalizes "basin broadness" as the (low-loss asymptotic) volume scaling exponent. This ends up being the right thing to measure, as justified by singular learning theory. (A short numerical check of this definition appears right after this list.)
  •  How do we measure it?
    •  It turns out that measuring high-dimensional volume directly is hard. We don't do this.
    • Instead we use MCMC to do what's known in statistics as "method of moments" estimation. We contrive a distribution with the LC as a population parameter, sample from that distribution and calculate one of its moments, and solve for the LC.
    • We simplify some details in this section, but this is the conceptual heart of LC estimation.
  • How do we measure it (for real)?
    • The above is a bit simplified. The LC does measure loss volume scaling, but the "loss" it uses is the average or "infinite-data" limit of the empirical loss function.
    • In practice, you don't know this infinite-data loss function. Luckily, you already have a good estimate of it - your empirical loss function. Unluckily, this estimate isn't perfect - it can have some noise. And it turns out this noise is actually worst in the place you least want it.
    • But it all works out in the end! You actually just need to make one small modification to the "idealized" algorithm, and things work fine. This gets you an algorithm that really works in practice!
    • Finally, the state-of-the-art method (Lau et al. 2023) makes a couple simple modifications, for scalability among other reasons: it measures the learning coefficient only *locally*, and uses mini-batch loss instead of full-batch.
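
As promised above, here is a quick numerical check of the "volume scaling exponent" idea. It is a hedged toy sketch, not anything from the linked Colab notebook: the potential K(w) = w^4, the sampling range, and the sample count are all assumptions chosen so that the exact answer (1/4) is known.

```python
import numpy as np

# Toy check of the volume-scaling definition of the learning coefficient.
# For the 1-D potential K(w) = w^4, the volume with K(w) < eps scales as
# eps^(1/4), so the fitted exponent should come out near 0.25.
rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=1_000_000)   # uniform samples over the parameter space
K = w ** 4                                    # "loss" measured above its minimum

epsilons = np.logspace(-4, -1, 10)
volumes = np.array([(K < eps).mean() for eps in epsilons])  # fraction of volume below eps

# If vol(eps) ~ eps^lambda, then log(vol) is linear in log(eps) with slope lambda.
lam, _ = np.polyfit(np.log(epsilons), np.log(volumes), 1)
print(f"fitted volume-scaling exponent: {lam:.3f}  (exact value for w^4: 0.25)")
```

The same exponent is what the MCMC-based estimator below recovers without ever measuring volumes directly.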

In chart form: as we move from idealized (top) to realistic (bottom), we get new problems, solutions, and directions for improvement. The guide itself covers the first two rows in the most detail, which are likely the most conceptually difficult to think about, and skips directly from the second row to the fourth row at the very end.
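
To make the estimation recipe concrete, here is a hedged, self-contained toy sketch of the WBIC-style estimator described above (it is not the code from the linked Colab notebook). The model, the full-batch Langevin sampler, the step size, and the chain length are all assumptions; Lau et al. 2023 would additionally localize the sampler around w_hat and use mini-batch gradients.

```python
import numpy as np

# Toy model: y = w^2 * x + noise, with the true parameter at the singular point
# w = 0, so the population loss behaves like w^4 and the learning coefficient is 1/4.
rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
y = rng.normal(size=n)            # true w = 0: y is pure noise

def nll(w):
    """Empirical loss L_n(w): average negative log-likelihood, up to a constant."""
    return 0.5 * np.mean((y - (w ** 2) * x) ** 2)

def grad_nll(w):
    """Derivative of nll with respect to w."""
    return np.mean(-2.0 * w * x * (y - (w ** 2) * x))

# Step 1: a point estimate w_hat of the minimizer of the *empirical* loss
# (in practice this is your trained network; here a grid search suffices).
grid = np.linspace(-1.0, 1.0, 2001)
w_hat = grid[np.argmin([nll(w) for w in grid])]

# Step 2: sample from the tempered posterior p(w) proportional to exp(-n * beta * L_n(w))
# with full-batch Langevin dynamics at the WBIC temperature beta = 1 / log(n).
beta = 1.0 / np.log(n)
eps = 3e-4                         # step size: an assumption tuned for this toy
w, kept = w_hat, []
for step in range(30000):
    w = w - 0.5 * eps * n * beta * grad_nll(w) + np.sqrt(eps) * rng.normal()
    if step >= 5000:               # discard burn-in
        kept.append(nll(w))

# Step 3: method of moments. WBIC says E_beta[n * L_n(w)] is approximately
# n * L_n(w_hat) + lambda * log(n), so solve for lambda:
lc_hat = n * beta * (np.mean(kept) - nll(w_hat))
print(f"estimated learning coefficient: {lc_hat:.2f}  (theoretical value here: 0.25)")
```

Because the chain only ever sees the noisy empirical loss rather than its infinite-data limit, the printed estimate bounces around 0.25 rather than matching it exactly, which is precisely the noise issue flagged in the list above.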

 

See the linked Colab notebook for the full guide.




Extrapolating from Five Words

Published on November 15, 2023 11:21 PM GMT

If you only get about five words to convey an idea, what will someone extrapolate from those five words? Rather than guess, you can use LLMs to experimentally discover what people are likely to think those five words mean. You can use this to iterate on which five words you want to say in order to best convey your intended meaning.

I got this idea because I tried asking Claude to summarize an article at a link. Claude doesn't follow links, so it instead hallucinated a summary from the title, which was included in the URL path. Here's an example of it doing this with one of my LessWrong posts:

It hallucinates some wrong details and leaves out lots of details that are actually in the post, but it's not totally off the mark here. If my ~Five Words were "the problem of the criterion matters", this would be a reasonable extrapolation of why I would say that.

Rather than using a link, I can also ask Claude to come up with what it thinks I would have put in a post with a particular title:

Strangely it does worse here in some ways and better in others. Unlike when it hallucinated the summary of the link, this time it came up with things I would absolutely not say or want someone to come away with, like the idea that we could resolve the problem of the criterion enough to have objective criteria for knowledge.

But maybe prompting it about LessWrong was the issue, since LessWrong puts off a lot of positivist vibes, Eliezer's claims to the contrary notwithstanding. So I tried a different prompt:

This is fine? It's not great. It sounds like a summary of the kind of essay a bored philosophy undergrad would write for their epistemology class.

Let me try asking it some version of "what do my ~Five Words mean?":

This is pretty good, and basically what I would expect someone to take away from me saying "the problem of the criterion matters". Let's see what happens if I tweak the language:

Neat! It's picked up on a lot of nuance implied by saying "important" rather than "matters". This would be useful for trying out different variations on a phrase to see what those small variations change about the implied meaning. I could see this being useful for tasks like word smithing company values and missions and other short phrases where each word has to carry a lot of meaning.

Now let's see if it can do the task in reverse!

Honestly, "uncertainty undermines knowledge" might be better than anything I've ever come up with. Thanks, Claude!

As a final check, can Claude extrapolate from its own summary?

Clearly it's lost some of the details, particularly about the problem of the criterion, and has made up some things I wasn't trying to have it get at. Seems par for the course in terms of condensing down a nuanced message into about five words and still having the core of the message conveyed.

Okay, final test, what can Claude extrapolate from typical statements I might make about my favorite topic, fundamental uncertainty?

Hmm, okay, but not great. Maybe I should try to find another phrase to point to my ideas? Let's see what it thinks about "fundamental uncertainty" as a book title:

Close enough. I probably don't need to retitle my book, but I might need to work on a good subtitle.

Based on the above experiment in prompt engineering, Claude is reasonably helpful at iterating on summaries of short phrases. It was able to pick up on subtle nuance, and that's really useful for finding the right short phrase to convey a big idea. The next time I need to construct a short phrase to convey a complex idea, I will likely iterate the wording using Claude or another LLM.
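
If you want to run this kind of iteration as a loop rather than by pasting prompts into a chat window, a few lines of scripting will do it. Below is a minimal sketch assuming the Anthropic Python SDK (the `anthropic` package) and its Messages API; the model name, prompt wording, and token limit are all assumptions to adjust, and any other LLM client would work the same way.

```python
import anthropic

# Hedged sketch: ask an LLM what a reader would likely extrapolate from each
# candidate ~five-word phrase, so the phrasings can be compared side by side.
# Assumes ANTHROPIC_API_KEY is set in the environment.
client = anthropic.Anthropic()

candidate_phrases = [
    "the problem of the criterion matters",
    "the problem of the criterion is important",
    "uncertainty undermines knowledge",
]

def extrapolate(phrase: str) -> str:
    """Return the model's guess at what someone saying `phrase` probably means."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",   # assumption: substitute whatever model you use
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": (
                f'Someone tells you only this: "{phrase}". '
                "In a short paragraph, what would you guess they mean and why?"
            ),
        }],
    )
    return response.content[0].text

for phrase in candidate_phrases:
    print(f"--- {phrase} ---")
    print(extrapolate(phrase))
    print()
```

Diffing the outputs across phrasings is the scripted version of the wording experiments above.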




In Defense of Parselmouths

Published on November 15, 2023 11:02 PM GMT

Prerequisites: The Quaker and the Parselmouth.

I.

First, a quick summary.

In the prerequisite post, Benjamin Hoffman describes three kinds of people. These people are hypothetical extremes: they're the social and epistemic equivalents of perfect spheres interacting in frictionless vacuums. There are Quakers, who always tell the truth and keep their word when they say they'll do something. There are Actors, who always say what seems good to say at the moment and who don't reliably keep their word even if they swear an oath. Lastly, there are Parselmouths, who can lie freely to Actors but speak only the truth to other Parselmouths and (by implication) speak only truth to Quakers.

I approve of this distinction. It is abstracted and the real world is never this clear, but in my experience it does get at something useful to understand. I think truthtelling is a powerful institutional advantage, and wish more people were Quakers in this dichotomy. Benjamin points out that Parselmouths are somewhat odd, in that habitually telling lies likely erodes the instinct or maybe even ability to tell the truth; it may not be possible for real people to stay consistently Parselmouths without slowly becoming Actors.

Speaking truth is hard. It's hard work to figure out what the true state of the world is. It's hard to quickly and accurately state what you think is true; the English language makes "I believe there's a ninety percent chance of rain tomorrow" a much longer sentence than "it's going to rain tomorrow." There's a lot of extra emotional sharp elbows you wind up throwing when someone asks you how you liked the (burned and unseasoned) casserole they brought to the potluck. Quakers of the world, I salute you. Actors of the world, I get it. 

My first claim is that it's reasonable to be a Parselmouth.

II.

Storytime! The following story details events that happened about two decades ago, when I was several feet shorter than I am now. Some details have been substantiated by other people who were around at the time, but many likely have morphed over the years.

When I was a kid, I had to get a bunch of shots. My mom took me into the office, and I goofed around in the waiting area for a little bit before a nurse waved me past the front desk and Mom and I went in. The nurse sat me down in the doctor's office on a big plastic chair and rubbed my shoulder with something cold while asking my mother questions; then she asked me to sit still for a moment and said "This won't hurt a bit. Are you ready?" I nodded. Then she stabbed me with a needle.

It hurt. I started crying, and continued crying for some time, well after the pain had faded to a dull ache. No amount of consoling from my parents or treats from the nurse changed this. I did not have the ability to articulate what made me upset then, but it was not the pain (even as a child, I had a remarkably high tolerance for pain when it had a purpose) but the confusion. It wasn't supposed to hurt- were they wrong about whether it would hurt? That didn't make sense, sticking a sharp thing into someone usually hurt them, why would someone think it wouldn't? Did I misremember what they said, and they said it would hurt instead of that it wouldn’t? Is my memory really that fallible? I was utterly confused, and couldn't make sense of what happened.

With the benefit of years of experience, it's obvious what happened. The nurse lied to keep a small child still while giving them a shot. This story would repeat itself for years, and I would be bewildered and confused each time. The hypothesis that someone would simply lie would not occur to me until much later, after an epiphany on how the world regarded truth.

While painful, that understanding turned out to be a useful skeleton key to a lot of human interaction. Sometimes people just lie, and often for smaller stakes than you might think. Words are not truth, just sounds people make or symbols on a page. People lie, casually, easily, on important things and on trivial things, to get what they want or just because they don’t care. This not caring is not malicious, simply indifferent. 

III. 

Sometimes stating factual mistruths is totally fine. I claim an important distinction is whether the bulk of the people involved know which statements are which.

There are some obvious cases. If you pick up a book from the Fantasy aisle of the bookstore and it has a dragon on it, the person who wrote the words inside that book has pretty much carte blanche to say whatever made up things they want in that book. There are slightly less obvious cases; I regret to inform you that both professional wrestlers and stage comedians have a flexible relationship between what comes out of their mouths and what the ground state of reality is like. Then there are the dubious examples where it's fiction but an onlooker might get reasonably confused, like found footage horror movies or anything in a newspaper that comes after the phrase "studies show."[1] Fiction is good fun for millions of people the world over and any injunction against lying needs to make space for that.

(Another childhood story: an uncle of mine once took me fishing, and when we came back after a day of catching no fish we managed to do an unintentional comedy duet where I systematically disagreed with every line of his fish story until my grandfather took mercy on him and took me aside to explain that these were not supposed to be taken as literal truth.)

Some factual mistruths are, if you actually examine what is happening, successfully communicating to both sides of a conversation what's going on.

If you ask a native speaker of American English if they can come help you with something and they reply "In a second," and you get confused when they are not there a sixtieth of a minute later, I am in fact sorry. This bothered me too for a long time, and eventually I made peace with it. There is no law of the universe connecting the sounds "ˈsɛkənd" with the amount of time that's around the time it takes a human heart to beat. In some contexts "ˈsɛkənd" means the unit of time about equal to a heartbeat, in other contexts it means the count between first and third, in the context where you just asked for help and someone said "in a second" it means something like "soon, but not right now."

I have picked many variations of this fight over the years, and I've given up. I'm on the side of the linguistic descriptivists now.

There is this: I don't think everyone knows how this works. Existence proof, I didn't know it for what feels like a long time so saying "everyone knows" twenty years ago would have been wrong. Everyone does not know. People are going to keep bumping into this at least as long as there are literal minded kids running around, and probably a lot longer since some of those kids grow up to become literal minded adults and their stance on truth transforms into a fiercely held principle. I try to pay attention enough to notice when someone is taking my words literally and code switch or at least warn them.

The thing the word "Literally" does in a lot of colloquial American English sentences is to function as an intensifier. "He's the best in the world at that game" and "He's literally the best in the world at that game" often are said with the intent to mean basically the same thing. No actual authoritative comparison to the rest of the world was made. 

This is frustrating. It would be useful to have a way to mark "this statement is in Quaker mode" in the middle of a conversation. Sadly, as far as I can tell every attempt English has used to have that kind of marker has gotten suborned to function as an intensifier. The English word "very" supposedly comes from "verrai" meaning "true."

There are some carveouts where factual mistruths are generally agreed not to be allowed. The American legal system takes a dim view of people lying under oath while in a courthouse. Some contracts aspire to be factually true about what's going on. (Though "until death do us part" is still in many marriage vows despite appalling divorce rates, and the enforceability of some documents which resemble legal contracts like End User License Agreements is questionable.) Some people have, between them on a personal level, managed to create spaces where factual mistruths are Not Supposed To Happen. Once in a while, whole communities try for it. As Benjamin points out in the Quaker and the Parselmouth, the actual real life Quakers are still around.

And then there's LessWrong.

IV.

LessWrong sometimes calls itself a truth seeking community.[2] As I write this, the about page says "We seek to hold true beliefs" and (I claim) implies that we seek to profess true beliefs. I love this community in large part because I find The Truth to be a thing of beauty, worthy of song and poetry, worth dedicating one's life to its pursuit.

But groups are not uniform, and the amount of earnest truthfulness varies. The fact that someone is commenting on LessWrong does not mean that you can trust them absolutely. It doesn't even mean for sure that they care about Truth the way that you might. Maybe they're new here. Maybe they do care about truth but have a different idiomatic usage, like the "in a second" example above, that you aren't aware of. Maybe they're trying but failing to uphold those ideals, and the failure hurts and it hurts worse to admit it.

(Or maybe they don't care at all. Some people just like to watch the world burn, after all.)

(That list of reasons is not exhaustive.)

It would gladden my heart to hear that somewhere there was a shining city upon a hill where the truth was spoken freely, a place where Actors durably weren't allowed to tread without changing their ways. Here and there, when the tradeoffs are right, I try to help that project out when I can. It's not my central goal, but it would be good to have.

Part of the problem is the Eternal September where newcomers need to acculturate, but I think a bigger problem is that of group coordination. The attempts I've seen to drag the whole Rationalist community kicking and screaming into the Truth, to all hunt Stag instead of Rabbit, haven't worked. You can try and hold yourself to this standard, you can patiently make the case to yet another Actor or Parselmouth that the Quaker way is better, and maybe you should, but I think colloquial and casual usage is going to keep being a way people talk.

“I speak to everyone in the language they understand," said Ender. "That isn't being slick. It's being clear.”

-Speaker for the Dead, by Orson Scott Card

V.

Confession time. I claim myself as a Parselmouth in this dichotomy. The Quakers have my sympathies, but I no longer can consider myself one of them.

It used to take me an awkwardly long time to respond to things people said to me. I don't mean a small gap as I waited for them to finish speaking, I mean a full five to ten seconds of dead air. See, if you asked me "what did you do today?" then I needed to think over my day, summarize the important parts, decide how that made me feel, put this into words, then check to make sure those words were true from all likely angles. This often required several revisions, mentally rewriting multiple drafts of the sentence before beginning to speak. “Where are my keys?” They’re on the counter- no, wait, I don’t actually know that because I’m not looking at them- I last saw them on the counter- is “the counter” sufficiently distinguished from other counters?- wait I’ve been thinking about this for too long aaaah. “When will you be there?” “Very likely before eight p.m. but I would estimate between five and six p.m. conditional on my car not being out, otherwise- wait, sorry, I’m giving too much information and in a weird format and aaaaaaaaahhh.”

Nowadays I just say “five thirty, I’ll let you know if I’m running late.” I don’t actually work through whether this is accurate, but it fits a momentary intention and I can answer within a second of the other person ending their sentence. Likewise, if someone says they’ll grab milk while at the store, I’m no longer baffled by the lack of milk in the fridge that night. Doing what they said they would isn’t a momentous oath, it’s just a momentary intention.

(I’d like to make a note here that, throughout all of Quakers and Actors, Benjamin never treats Actors as intentionally lying as part of a deliberate strategy, just being really unreliable. This endears the author to me and I’m continuing that usage, though I would like to take a moment to establish that intentional and malicious liars do in fact exist and productive habits for interacting with Actors will catastrophically fail when you come into contact with such people. This kind of liar is outside the scope of this post, and while I am not saying I would do that there is value to be gained in practicing CONSTANT VIGILANCE against this.)

I agree with Benjamin that being in Actor mode wears down the instinct towards the truth, and I also think that the known fact that someone lies sometimes is significant evidence that they might be lying at this very moment. You should never completely trust that a Parselmouth is in Quaker mode. 

(You should never completely trust a Quaker either! Zero and one are not probabilities! They might be wrong, they might have decided this is the thing that is worth lying for, they might have been replaced by their identical twin! CONSTANT VIGILANCE.)

The reverse, however, is also true. I think being in Quaker mode wears down your facility with lies, the oil of social integration and flexibility. It makes you vulnerable to the lies of others, unable to anticipate and predict that they might mislead or misinform. If Quakers and Actors ever actually existed and mixed, I suspect that Quakers would find themselves frustrated and deceived again and again because they were not expecting mistruths to happen.

I can notice this in my own mind in the short term. When I move from a long weekend spent interacting entirely with rationalists, who are at every turn reminded of who and what this community is, to a Monday morning chat with a stranger next to me at the train station, I'm slower to come up with quick and glib answers to "how are you doing?" In the first few conversational exchanges when I show up to a rationalist meetup, I have to correct myself for giving off-the-cuff answers which are fast and easy but perhaps not maximally truthful.

Perhaps I’m just committing the typical mind fallacy. It seems to be a true fact about my own mind that to be a Quaker would mean finding the world a confusing place, full of unpredictable dangers, untrue words that I would be defenseless against. Summarized heavily, I view my options as:

  1. To be a Quaker and live amid chaos. 
  2. To be an Actor and give up much of my ability to collaborate long-term. 
  3. To be a Parselmouth and code switch depending on who I’m speaking to, accepting the damage to my truth telling ability as well as mistakes due to misidentification. 

Of these, I choose 3.

To be clear, at my worst I think I'm only as untrustworthy as an idealized Actor. These aren't modeled as con men or malicious, only as people who don't view speech acts as binding on future actions. Most of the time, according to my own evaluation, I'm slightly more honest, forthright, and fulfill more of my verbal commitments than the median person around me. If your takeaway from this essay is that I'm going to try and lie to you and get you to do something against your interests for my own benefit, I don't think you've understood me properly. The whole point of this essay was to make it easier for people to model when I would and would not state things which are not true. I'm kind of putting in extra effort here, and I try to err on the side of honesty.

I try to speak nothing but truth to those who I observe and estimate to speak truth, and if you are a Quaker in this dichotomy I would like to know so I can reciprocate.  By default however you should not assume everything I say is something I would swear an oath to; I am a Parselmouth, and only feel compelled to speak the truth and nothing but the truth to those who I believe feel compelled to speak the truth to me.

(You should not trust anyone completely, CONSTANT VIGILANCE.)

  1. ^

    This is a joke about how a thing which people quite reasonably expect to be true is often not actually true. I don't have a high opinion of newspapers and their ability to interpret scientific studies these days.

    Explaining the joke can ruin the joke but in this essay it seems unusually worth it to try and be strictly accurate. Also, this makes for a good example: if I hadn't included this footnote, would that joke have been acceptable for a Quaker to say? How about a Parselmouth? What if I have reason to believe that people sometimes don't read the footnotes, does that change how acceptable it is?

  2. ^

    This is untrue. Communities do not have mouths to speak or fingers to type. A community speaking is a type error. This metaphor of a gestalt group of humans being reified as capable of action is one I disagree with. That's an essay for another day however, and here I use the metaphor.




Life on the Grid (Part 1)

Published on November 15, 2023 10:37 PM GMT

The physical layout of the environment in which someone is raised impacts their cognitive abilities as an adult. A recent study based on video game data from 397,162 people across 38 countries found that “people who grew up outside cities are better at navigation.” More specifically, “growing up in cities with low Street Network Entropy (e.g. Chicago) led to better results at video game levels with a regular layout, while growing up outside cities or in cities with higher Street Network Entropy (e.g. Prague) led to better results at more entropic video game levels.”

In plain English: if you grow up in a grid-like environment, then you will be worse at navigating less grid-like environments.

Association between Environment and Video Game (Sea Hero Quest) Wayfinding Performance, stratified by age, gender, and education. The SHQ Wayfinding Performance is computed from the trajectory length and has been averaged within 5-year windows.
Two examples of cities with low (Chicago) and high (Prague) SNE. Right: Distribution of the street bearings across 36 bins of 10 degrees.

This finding is perhaps not wholly surprising, and at first blush might not seem all that significant. The vast majority of us hardly ever need to truly wayfind IRL anymore. Technology has made our navigational skills all but obsolete. So what does it matter whether certain environments are better or worse at preserving them?

It matters a great deal. To understand why, we will need to take a small detour.

In his appearance on Making Sense with Sam Harris, complexity scientist David Krakauer makes a distinction between complementary cognitive artifacts—technologies that make us more intelligent after using them—and competitive cognitive artifacts (if you can’t guess what these do then maybe you’ve been using them too much). The canonical example of a competitive artifact is a calculator: repeated usage leaves you worse at mental arithmetic than you were before. Contrast this with an abacus, which can have quite the opposite effect: expert users can eventually develop such a high-fidelity mental model that they no longer even need to use the physical abacus, and are able to maintain their enhanced arithmetic skills without it. 

Typus arithmeticae, from the Margarita Philosophica (1503)

The brain is a complex system, full of interconnected representational systems. This is why an abacus doesn’t just help you with math—it helps you with everything. Dr. Krakauer explains, “[the mind] doesn’t have a firewall around it such that the abacus’ functional advantages are confined to arithmetic…it actually has really interesting indirect effects on linguistic competence and geometric reasoning.” Positive global effects that reach beyond the intended use are a characteristic of all complementary cognitive artifacts.

On the other hand, the functional disadvantages of competitive cognitive artifacts are also not limited to one domain or skill. Dr. Krakauer goes on to discuss how replacing primitive wayfinding technologies like maps, astrolabes, and sextants with automated wayfinding technologies has had exactly this effect: 

Your familiarity with mapmaking and topographical, topological, and geometric reasoning is generally valuable in your life, not just in navigating across the city. So taking away a map doesn’t just make you worse at getting from one door to another, it makes you worse in many ways…A good example of this, which both Einstein and Frank Lloyd Wright depended upon, was wooden cubes. Early in their youth, they both became very enamored of these cubes and would construct worlds out of cubes, like Minecraft. And both of them claimed—Frank Lloyd Wright in the case of architecture, and Einstein in the case of the geometry of the universe—that the intuitions they built up playing with these cubes were instrumental in their later lives. I would claim the same is true for maps. If you know how to navigate through a true space, like a Euclidean space or a curved space on the surface of the earth, that allows you to think about different kinds of spaces, relationship spaces, idea spaces. The notion of a path from one idea to another, as a metaphor, actually has an immediate and natural implementation in terms of a path in real space.

That our cognition (reasoning, memory, creativity, etc.) is akin to a kind of mental navigation is betrayed by our conceptual metaphors: “a field of knowledge,” a subject that is “unexplored,” “a train of thought,” “a trip down memory lane,” “jogging your memory,” “a flight of fancy.” The neuroscience is moving quickly here (metaphor intended), but emerging evidence supports this claim: “the brain encodes abstract knowledge in the same way that it represents positions in space, hinting at a more universal theory of cognition.”

This is also why the ancient memory enhancement technique, the Method of Loci—the memory palace technique—is still used by memory champions today. According to The Art of Memory, “The method of Loci involves memorizing information by placing a mnemonic image for each item to be remembered at a point along an imaginary journey. The information can then be recalled in a specific order by mentally walking the same route through the imaginary journey and converting the mnemonic images back into the facts that they represent.” The connection between walking and creativity, long known to intellectuals and artists and supported by recent research, is also of note here—it is as if movement through the physical landscape primes one for greater ease of movement in the more abstract landscapes of ideas (see the wider literature on embodied cognition for further discussion of this theme). 

The rhythm of walking generates a kind of rhythm of thinking, and the passage through a landscape echoes or stimulates the passage through a series of thoughts. This creates an odd consonance between internal and external passage, one that suggests that the mind is also a landscape of sorts and that walking is one way to traverse it. A new thought often seems like a feature of the landscape that was there all along, as though thinking were traveling rather than making. And so one aspect of the history of walking is the history of thinking made concrete — for the motions of the mind cannot be traced, but those of the feet can.

— Rebecca Solnit, Wanderlust: A History of Walking

We also use spatial metaphors to talk about the socio-economic landscape, or life itself: “life’s journey,” “broadening your horizons,” “finding myself” or “soul-searching,” “career paths,” “walks of life,” and so on. This makes sense when we consider our deep evolutionary history: many of the most important things we had to know for survival were spatial in nature—the wildebeest migrate through this valley, we can gather berries in that patch of forest, someone was killed by a saber tooth tiger over there, etc. We sometimes even conceptualize knowledge with an arboreal metaphor (“branches of knowledge”)—a vestige, perhaps, of our ancestral tree-dwelling lifestyle.

Our loss of wayfinding ability is mirrored in our physical, cultural, and metaphysical landscape, which has also become regular and grid-like. The consequence, which I believe we are now witnessing, is a lack of resilience and resourcefulness, an unwillingness to “blaze trails”, an inability to produce “path-breaking” innovations, and numerous other deficiencies which are leaving us altogether worse off than we were when the world was a bit more chaotic and idiosyncratic. The social and intellectual “technologies” (e.g. the educational system, norms around child-rearing) that we’ve put in place have increasingly become competitive cognitive artifacts rather than complementary ones.

The excessive “gridification” of the world can be seen most clearly in the physical domain. For a variety of reasons (economic, environmental, changing aesthetic preferences), newer cities tend to have simpler layouts. 

Two of the oldest American cities in the figure, Boston and Charlotte, are the most irregular. Source for both figures: Geoff Boeing 

The same is true of architecture. More ornate and culturally distinct forms have been replaced the world over by drab monoliths. The result is an oppressive standardization and homogenization, a loss of the quirks and eccentricities that give places their unique character.

The loss of irregularity and variation is one aspect of the physical gridification of the world, but there is also the sheer number of roads and tracks crisscrossing the globe. It has become nearly impossible to “get off the grid.” Virtually nothing and nowhere escapes the techno-social net which we have cast over the planet. Uncharted territory has become a thing of the past.

And even if you wanted to drop out nowadays, could you? You are identified, tracked, networked, and classified everywhere you go, at every moment of your day. Never in history has it been harder to find a fresh start, a clean slate, or a place to march to the beat of your own drummer. There are no Walden Ponds or hermitage retreats anymore—even the off-the-grid spots are now on the grid.

Ted Gioia

A giant sea-serpent attacks a ship off the coast of Norway on Olaus Magnus’s Carta marina of 1539; this image is from the 1572 edition.

If you grew up in the 18th century, there were still new places to go. After hearing tales of foreign adventure, you could become an explorer yourself. This was probably true up through the 19th and early 20th centuries; after that point photography from National Geographic showed every Westerner what even the most exotic, under-explored places on earth look like. Today, explorers are found mostly in history books and children’s tales. Parents don’t expect their kids to become explorers any more than they expect them to become pirates or sultans. Perhaps there are a few dozen uncontacted tribes somewhere deep in the Amazon, and we know there remains one last earthly frontier in the depths of the oceans. But the unknown seems less accessible than ever.

— Peter Thiel, Zero to One

Peter Thiel has argued that the innovative spirit has degenerated in part because we no longer believe in secrets, and that we no longer believe in secrets because there is no longer any accessible frontier. This is a truly unprecedented state of affairs for humanity. For eons, our minds and cultures have evolved in delicate symbiosis with the Unknown, that place on the map labeled “Here Be Dragons.” Without this Unknown, that place where there may be cities of gold or fountains of youth, the heroes (but not just the heroes, all of us) have nowhere to journey, and all of the things which can make us into heroes—bravery, fortitude, ingenuity, daring, and the like—begin to atrophy. Without this Unknown, we begin to feel confined—trapped—like a beautiful and dangerous animal in a cramped cage: we develop a claustrophobia; imagination and inspiration wither. We aren’t as hopeful as we used to be, but we don’t know why.

That kid’s face says it all—the abacuses are sideways jail cell bars (source)

Life on the Grid begins at a young age, when you are given a job at the factory of tears and tedium that is the modern education system, where you will sit at square desks in square rooms in square buildings reading from square books and writing on square pieces of paper for countless hours of your childhood (in most cases I guess they will be rectangles, but you get the point). You will level up from one grade to the next by demonstrating that you can follow arbitrary rules and regurgitate knowledge that is spoon-fed to you in clearly-defined subjects, units, and chapters. If you are even remotely good at this game, they will insist that you do it for the first 22 years of your life, but even that’s not enough anymore.

At some point, god willing, you will enter “the real world,” that mystical realm your parents and teachers always spoke of. However, you will quickly come to find that “the real world” is another vast bureaucratic system which isn’t really all that different from “the fake world” of school. Sure, you have a little more freedom and some things are slightly different (bosses instead of teachers, cubicles instead of desks), but the basics are the same: just stay on the straight and narrow, keep your head down, and keep leveling up.

As for our post-education lives (what little remains of them), many of us enter Corporate America or become Public Servants. Some of us become artists or entrepreneurs, but even that is no escape: these career paths have also been engulfed by the Grid. Erik Hoel writes that there is one indisputable difference between contemporary writers and the writers of even the recent past (the early 2000s): pretty much everyone now has an MFA.

But a majority of people under the age of 50 successful in publishing today literally got A+s. They all raised their hands at the right time, did everything they needed to get into Harvard or Amherst or Williams or wherever, then jumped through all the necessary hoops to make it to the Iowa Writers’ Workshop or Columbia University, etc.

Faulkner didn’t finish high school, recent research shows Woolf took some classes in the classics and literature but was mostly homeschooled, Dostoevsky had a degree in engineering…Not one of these great writers would now be accepted to any MFA in the country. The result of the academic pipeline is that contemporary writers, despite a surface-level diversity of race and gender that is welcomingly different than previous ages, are incredibly similar in their beliefs and styles, moreso than writers in the past.

 

As for entrepreneurs, it’s more of the same. Gone are the days of Gates and Zuckerberg, when advanced degrees were optional and the digital frontier was wide open. Now, everyone reads the same essays, follows the same guides (“8 Steps to Becoming a Tech Entrepreneur When You Know Nothing About Technology”), and applies to the same programs, incubators, and combinators.

Here was Tyler Cowen writing in 2007 (“The Loose Reins on U.S. Teenagers Can Produce Trouble or Entrepreneurs”):

The new ideas and business principles behind the Web have carved out the ideal territory for the young. A neophyte is more likely to see that music can come from computers rather than just from stores or radios, or that it is best to book a flight without using a travel agent. Clay Shirky, an associate teacher at New York University, notes that many young people are blessed by an absence of preconceptions about Internet businesses. Years of experience are critical to refining and improving a long-familiar product, like bread. But completely new, outside-the-box ideas — which typically come from the young — are more important for founding Napster or YouTube.

Sixteen years later and literally all of this is false now. The young don’t even know how to use computers anymore.

  

To review: growing up in simplistic spatial environments and using GPS have given you brain damage, and life has become a soul-crushing video game utterly devoid of mystery or adventure. We are trapped in the Grid like an insect in the spider’s web; vigorous struggle will only serve to entangle us further. To extricate ourselves, we must, as individuals, gently subvert the very foundations of the Grid, which is nothing external but a facet of human nature: the impulse towards control, the systematizing instinct, the part of us that abhors anomaly and ambiguity and seeks to eradicate them. What begins as an earnest attempt to break free can so easily slip back into a self-imposed system of rules and practices, with standards to be met and schedules to be followed. For that reason I am hesitant to provide any concrete suggestions—they will only serve to constrain your thinking. For now, there is perhaps only one thing that can be said: if you want to get off the grid, then get lost.



Discuss

Glomarization FAQ

November 15, 2023 - 23:20
Published on November 15, 2023 8:20 PM GMT

The main reason I'm posting this here is because I want it to be publicly recorded that I wrote this post in 2023. That way, I can show people in the future that I didn't just make this up on the spot; it's been around since long before whatever it is they're asking me about this time. I don't have a blog of my own, so this is where I'm putting it. Still, some of you might find it interesting, or want to use it yourself.

 

If I've sent you this post, you've probably just asked me a question, to which I responded something along the lines of "I refuse to confirm or deny that." Maybe you asked if I wrote a particular story, or if I had sex with someone, or if I robbed a bank. When you heard my refusal you likely became more suspicious, assuming it means that I really did write/bang/rob the story/person/bank.

This is the explanation of why that's not necessarily the case.

 

Why are you refusing to answer my question?

Suppose - and as generally holds throughout this post, I'm not saying this is true or false, just asking you to consider the hypothetical - I had never committed a crime in my life, any crime, with the singular exception of... let's say, public urination. With a bag over my head, so people didn't know for sure that it was me. The police are nevertheless able to narrow the suspects down to a pretty small list, with me on it, and they ask me if I did it.

Some people would just lie to the police. The crucial element here, though, is that I don't want to lie about this. This entire policy is about avoiding lies, and about being able to maintain my privacy while staying completely honest.[1] If I'm refusing to lie, but I also don't want them to know the truth, I can't answer the question at all. So I say, "I refuse to confirm or deny that."

But then suppose the police ask me if I killed someone - which I didn't do. And they ask if I robbed a bank, and if I sell drugs, and all sorts of other questions about crimes that I've never committed. I always tell the truth: "No, I didn't do that." And of course, now the police know that the answer to the public urination question was yes, because that's the only one where I refused to answer the question.

The only policy which doesn't tell them my secrets, other than outright lying, is to refuse to confirm or deny whether I committed any of the crimes - even the ones that I really didn't commit.

 

All right, but doesn't that mean that you must have committed some crime in the first place? After all, if you hadn't, there would be no harm in answering the question for every crime that exists.

This policy doesn't just apply to crimes that I've committed so far. There are a lot of other possible situations I could be in, and answering inconsistently gives up information.

For example, if I haven't committed any crimes yet, but will in the future, then the moment that I change from always saying "No" to always saying "I refuse to confirm or deny that," the police know I've just done something -

 

So you're planning to commit a crime?

No! Or rather, I refuse to confirm or deny that I'm planning to commit a crime, but no, that is not a valid conclusion from what I said earlier! First off, even if I anticipate only a small chance of committing a crime many decades from now, it's still worth following this rule just in case - don't give me that look, I'm not done talking yet. Furthermore, this doesn't just apply to crimes.

Suppose you're asking whether I had sex with person X.[2] For the same reasons as before, I can't refuse to confirm or deny only in the cases where the answer is "Yes," because then the difference in my responses gives it away. ("I haven't had sex with W, I can't tell you about X, I haven't had sex with Y, I haven't had sex with Z...")

And if I had never had sex with anyone, but had committed crimes, then by answering "No" about the sex questions, but "I refuse to confirm or deny that" about the crime questions, I would be revealing that I had committed at least one crime. Therefore, I would still have to refuse to answer the sex questions, even if I was a virgin.

There's also some things about logical decision theory and cooperating with hypothetical versions of myself, but I don't think any of that is necessary to prove the point. Yes, I expect that at some point in my past, present, or future, there is a reasonably high chance that I am asked a question that I don't want to honestly answer. And that's enough to show that I need to refuse to confirm or deny some things.

 

What exactly is the general policy you're following?

Refuse to answer any question for which one of the possible answers to that question would, if true, be something I would want to conceal - regardless of whether said answer actually is true. Also known as "Glomarization."[3]
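To make that rule concrete, here is a minimal Python sketch of strict Glomarization; the `respond` function and the `would_want_to_conceal` predicate are my own illustration of the policy as stated, not anything from elsewhere in the post:

```python
def respond(question, truthful_answer, possible_answers, would_want_to_conceal):
    """Strict Glomarization: refuse whenever *any* possible answer would be
    worth concealing if it were true, regardless of what the truth actually is."""
    if any(would_want_to_conceal(question, a) for a in possible_answers):
        return "I refuse to confirm or deny that."
    return truthful_answer

# The robbery question gets the same refusal whether or not I actually did it.
print(respond(
    question="Did you rob a bank?",
    truthful_answer="No, I didn't.",
    possible_answers=["Yes, I did.", "No, I didn't."],
    # Treat any incriminating "Yes" answer as something worth concealing.
    would_want_to_conceal=lambda q, a: a.startswith("Yes"),
))  # -> "I refuse to confirm or deny that."
```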

 

Isn't that pretty difficult to follow all the time?

Yes! In fact, as much as I would want to always Glomarize, I often have to make exceptions out of practicality. If someone asks what I did today, then in theory, I would have to refuse to answer, because one of the possible answers is "I stole your car." But most of the time, I end up just answering the question.

So what would be similar to strict Glomarization, but still permissive enough that I don't have to refuse to answer every question? Well, I start with a cost-benefit analysis. If the police ask whether I was the Bag-Headed Peeer[4], the cost of telling the truth in the case where the truth is "Yes" exceeds the benefit of telling the truth in the case where the truth is "No." So here, I Glomarize. But in a different scenario, this switches around: suppose I win a $1,000,000 lottery for which Canadians are not eligible, and I need to give the lottery person my address in order to get the money - but I have a mild preference for privacy about my address. In this situation, the cost in the case where I live in Canada is much less than the benefit in the case where I live elsewhere. So here, I tell the truth, because across all possible cases, Glomarization would hurt me.

| Situation | Answer A | Answer B | Cost if A | Benefit if B | What do you do? |
| --- | --- | --- | --- | --- | --- |
| Interrogated by police | "I did it." | "I didn't do it." | Go to jail?[5] | Police like you more | Glomarize |
| Won the lottery | "I live in Canada." | "I live [elsewhere]." | Less privacy | $1,000,000 | Tell the truth |
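As a sketch of that comparison (the dollar figures are invented stand-ins for "go to jail," "police like you more," and "less privacy"; only their relative sizes matter):

```python
def choose_policy(cost_if_truth_is_bad, benefit_if_truth_is_fine):
    """Cost-benefit version of the rule in the table above: Glomarize when an
    honest answer in the bad case would cost more than an honest answer in the
    fine case would gain. (Probabilities are ignored at this stage.)"""
    return "Glomarize" if cost_if_truth_is_bad > benefit_if_truth_is_fine else "tell the truth"

print(choose_policy(cost_if_truth_is_bad=500, benefit_if_truth_is_fine=5))
# -> "Glomarize" (a fine for public urination vs. the police liking you a bit more)
print(choose_policy(cost_if_truth_is_bad=10, benefit_if_truth_is_fine=1_000_000))
# -> "tell the truth" (a mild loss of address privacy vs. the lottery payout)
```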

I also need to consider the relevant probabilities in the calculation, and possibly randomize a bit to keep things secret at only a small cost. If 99% of my days are spent on innocent things and 1% on bank robberies, I don't have to Glomarize 100% of the time to hide when I've robbed a bank - just 2% of the time, or 10% if I want to be extra safe. Of course, the downside is that this policy does give people some evidence about how frequently I have something to hide, or at least helps them establish an upper bound.
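A quick Bayes calculation (my arithmetic, using the post's hypothetical 1% robbery rate and assuming I always refuse on "guilty" days and pad with random refusals on innocent ones) shows why partial randomization already hides a lot:

```python
def posterior_given_refusal(p_guilty_day, refusal_rate):
    """P(I had something to hide that day | I refused to answer).
    Assumes every guilty day is a refusal and the remaining refusals are
    random padding on innocent days, so refusal_rate >= p_guilty_day."""
    return p_guilty_day / refusal_rate

print(posterior_given_refusal(0.01, 0.02))  # 0.5 -- a refusal is only a coin flip
print(posterior_given_refusal(0.01, 0.10))  # 0.1 -- up from a 1% base rate
```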

 

I'm skeptical that you actually do this as frequently as you're implying.

Unfortunately, I don't really have any witnesses that I can cite at the time of writing. There's everyone in my family, who could confirm that I really do refuse to answer questions fairly often, and I think there were some cases where they later discovered I wasn't hiding anything (and some where they discovered I was). But that's not an option, because I'm not actually willing to confirm or deny who my family is.

Luckily, for everyone but the first person who I send this post to, I'll be able to tell them about the people who I sent it to before them! So unless you happen to be that first person, there will be other people who can testify to my weirdness by the time you're reading this.

 

Why did you say "that's classified"?

That's just how I say "I refuse to confirm or deny that" some of the time! "Classified" sounds cooler than "refuse to confirm or deny," like I'm a secret agent of some sort. Whether I am in fact a secret agent of some sort is of course classified.

 

Why did you list "wrote a story" earlier in the post?

People sometimes get accused of writing stories under a pseudonym. I, like many authors before me[6], think it's fun to fuel speculation about which stories I secretly have or haven't written, and which other authors on the Internet I am or am not. It's also helpful in case I write something taboo I don't want associated with my real name.

 

Is there anything I can do if I really want to know the answer anyway?

Change the incentives! For example, suppose you're my boss, and I don't want to discuss politics because I think you'd fire people who disagree with you. If you want to know about my political beliefs, then prove you've previously had no problem working with people, even when you hated what they stood for politically. If you're not able to prove that... well, that's why I'm not answering your questions.

  1. ^

    Of course, there's an additional reason not to lie to the police even if I wasn't generally honest - it's illegal.

  2. ^

    I don't actually care much about people knowing who I have or haven't had sex with, but there are other reasons I would want to hide this sort of thing: respecting the other person's privacy, or if the person is so unpopular that people will refuse to associate with me if they know I've had sex with them, or if they're on the run from the law and I don't want to reveal that they were at my house - crap, now I'm back to talking about crimes again.

  3. ^

    Glomarization is named after the Glomar Explorer, a ship that the CIA wanted to keep secret.

  4. ^

    How many e's are supposed to be in that word?

  5. ^

    According to the first few Google results, there's usually just a $500 fine, but if it's a repeat offence or happens in front of a large crowd, the punishment increases and you are likely to end up in jail.

  6. ^

    Or perhaps zero authors before me.



Discuss
