LessWrong.com News

A community blog devoted to refining the art of rationality

Whistle-based Synthesis

October 14, 2019 - 15:10
Published on October 14, 2019 12:10 PM UTC

I'm reasonably happy with my Bass Whistle, where I can whistle and have it come out as a decent sounding bass. I've been using it when playing mandolin in a duo, and it fits well there. When playing piano or with a piano player, however, there's already bass so something that falls in a different place in the overall sound would be better. That could be melody, though I can't whistle fast enough for everything, but probably something simpler: harmonies, riffs, horn lines.

When I take my current software, optimized for bass, and tell it to synthesize notes a few octaves up, it sounds terrible:

  • Raw whistled input:
    (mp3)
  • Bass version (needs headphones or good speakers):
    (mp3)
  • Treble version:
    (mp3)

I'm using simple additive synthesis with the first four harmonics, which means adding together four sine waves. I think what's going on is that higher notes need more complexity to sound good? Playing around with distortion and fading the harmonics at different rates it sounds a bit more interesting:

  • Adding distortion:
    (mp3)
  • Adding fade:
    (mp3)
  • Adding both:
    (mp3)
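
Here's a rough sketch, in Python with NumPy, of the kind of additive synthesis described above: the first four harmonics summed together, each fading at its own rate, with a soft-clipping distortion on top. The function name, harmonic amplitudes, decay rates, and the tanh distortion are illustrative choices of mine, not the project's actual parameters:

# Illustrative sketch of additive synthesis with per-harmonic fade and
# soft-clipping distortion; parameter values are made up for illustration.
import numpy as np

def synth_note(freq, duration=1.0, sample_rate=44100):
    t = np.arange(int(duration * sample_rate)) / sample_rate
    amplitudes = [1.0, 0.5, 0.25, 0.125]     # first four harmonics, quieter going up
    decay_rates = [1.0, 2.0, 3.0, 4.0]       # higher harmonics fade faster (1/s)
    signal = np.zeros_like(t)
    for n, (amp, decay) in enumerate(zip(amplitudes, decay_rates), start=1):
        signal += amp * np.exp(-decay * t) * np.sin(2 * np.pi * n * freq * t)
    signal = np.tanh(2.0 * signal)           # mild soft clipping adds extra harmonics
    return signal / np.max(np.abs(signal))   # normalize to [-1, 1]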

I'm still not very happy with it, though. It sounds artificial and silly. There are good synthesizers, the product of decades of work on turning "play this note at this time" into good sounding audio, so perhaps I could use my pitch detection to drive a standard synthesizer?

I made some stand-alone open source software that pipes the pitch detection through to MIDI. This was kind of tricky: MIDI doesn't have a way to say "play this frequency". Instead you just have "play this note" and "bend the note by this much". How to interpret pitch bend is up to the synthesizer, but generally the range is ±2 half steps. So we need some math:

in: wavelength
in: sample_rate
in: current_note

# Convert from "this wave is 23.2 samples long"
# to "the frequency is 1896.6 HZ".
frequency = sample_rate / wavelength

# MIDI is equal tempered, with each octave divided
# into twelve logarithmically equal pieces.  Take
# A440 as a reference point, so represent our
# 1896.6 HZ as "25.29 half steps above A440":
distance_from_a440 = 12 * log2(frequency / 440)

# A440 is A4, or midi note 69, so this is 94.29.
fractional_note = 69 + distance_from_a440

# MIDI uses a note + bend encoding.  Stay on the
# same note if possible to avoid spurious attacks.
if (current_note and
    current_note - 2 < fractional_note < current_note + 2)
  integer_note = current_note
else
  integer_note = round(fractional_note)

# To compute the pitch bend, we first find the
# fractional part of the note, in this case 0.29:
fractional_bend = fractional_note - integer_note

# The bend will always be between -2 and +2, a
# whole tone up or down.  MIDI uses 14 bits to
# represent the range between -2 and +2, so -2 is 0
# and +2 is 2^14.  The midpoint is 2^13, 8192:
integer_bend = round((1 + fractional_bend / 2) * 8192 - 1)

# The bend is 14 bits which gets split into two 7-bit
# values.  We can do this with masking and shifting.
bend_least_significant = integer_bend & 0b1111111
bend_most_significant = (integer_bend & 0b11111110000000) >> 7

out: integer_note
out: bend_least_significant
out: bend_most_significant
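
To actually get these three values to a synthesizer, they have to be wrapped in standard MIDI messages: pitch bend is status byte 0xE0 plus the channel, followed by the 7-bit least significant and most significant bytes, and note-on is 0x90 plus the channel. Here's a minimal Python sketch of that last step; the function name, channel, and velocity defaults are my own illustrative choices, not details from the project:

# Illustrative sketch: pack the computed note and 14-bit bend into raw MIDI
# bytes. Sending the pitch bend before the note-on means the note starts
# already in tune.
def midi_messages(integer_note, bend_least_significant,
                  bend_most_significant, channel=0, velocity=100):
    pitch_bend = bytes([0xE0 | channel,
                        bend_least_significant,
                        bend_most_significant])
    note_on = bytes([0x90 | channel, integer_note, velocity])
    return pitch_bend, note_on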

Initially I screwed this up, and thought pitch bend was conventionally ±1 semitone, and didn't end up catching the bug until I wrote up this post.

I have this working reasonably well, except that when I bend more than a whole tone I get spurious attacks. Say I slide from A to C: the slide from A to Bb to B can all be done with pitch bend, but then once I go above B the system needs to turn off the bent A and start working with a new note. I would love to suppress the attack for that note, but I don't know any way to communicate that in MIDI. I don't know what people with existing continuous-pitch electronic instruments do?

A second problem I've run into is that what sounds like a steady whistled pitch actually has a lot of tiny variations. Consider this input: (mp3)

This sounds reasonably steady to me, but it isn't really. Here are some successive zero crossings:

wavelength (samples)    frequency (hz)    midi note
39.02                   1130.07           49.3300
39.26                   1123.41           49.2277
38.66                   1140.68           49.4918
39.25                   1123.62           49.2309
38.90                   1133.71           49.3857
38.85                   1135.21           49.4087
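
For reference, here's one simple way to produce numbers like these. This is a generic sketch of measuring wavelengths from upward zero crossings with linear interpolation, not necessarily how this project's pitch detection actually works:

# Rough sketch: estimate per-cycle wavelengths (in samples) from upward zero
# crossings, with linear interpolation for sub-sample accuracy.
import numpy as np

def zero_crossing_wavelengths(samples):
    samples = np.asarray(samples, dtype=float)
    crossings = []
    for i in range(1, len(samples)):
        if samples[i - 1] < 0 <= samples[i]:
            # Fraction of the way between i-1 and i where the signal hits zero.
            frac = -samples[i - 1] / (samples[i] - samples[i - 1])
            crossings.append(i - 1 + frac)
    # Differences between successive crossings are the wavelengths.
    return np.diff(crossings)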

My synth doesn't mind, and just passes the variability through to the listener, where it's not a problem. I track where in the wave we are, and slowly adjust the rate we move through the wave to match the desired frequency:

  • Bass output:
    (mp3)
  • Pure sine treble output:
    (mp3)
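
Here's a minimal sketch of that approach, with a bare sine oscillator standing in for the real one: keep a phase accumulator, and let the oscillator's frequency chase each new pitch estimate only a small fraction of the way per sample. The slew constant and block size are made-up illustrative values, not the project's:

# Sketch of a phase-accumulator oscillator whose frequency is slewed gently
# toward the detected pitch, so per-cycle jitter doesn't come through.
import math

def render(pitch_estimates, sample_rate=44100, samples_per_estimate=256,
           slew=0.001):
    phase = 0.0
    freq = pitch_estimates[0]
    out = []
    for target in pitch_estimates:
        for _ in range(samples_per_estimate):
            freq += slew * (target - freq)       # move a little toward the estimate
            phase += 2 * math.pi * freq / sample_rate
            phase %= 2 * math.pi                 # keep the accumulator bounded
            out.append(math.sin(phase))
    return out
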
When I pass that variability into some regular synths, however, even when I don't cross note boundaries I get a wavery output. I think this may be another artifact of using a synth that isn't designed for continuous pitch input? Or possibly the problem is that real MIDI pitch wheels don't just suddenly jump from +23% to +49% over a 40ms period, and so they haven't needed to design for it?

I can fix this some on my end by averaging the most recent pitches to smooth out the variability, but then it stops feeling so responsive (and quick slides don't work, and if I start a note slightly sour it takes longer to fix it). I think the answer is probably "find a better synth" but I'm not sure how to figure out what to use.
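The averaging is something like the sketch below, where the window length is the knob that trades steadiness against responsiveness; the actual code may handle this differently:

# Illustrative moving-average smoother for pitch estimates. A longer window
# means steadier output but slower response to deliberate slides.
from collections import deque

class PitchSmoother:
    def __init__(self, window=8):
        self.recent = deque(maxlen=window)

    def update(self, frequency):
        self.recent.append(frequency)
        return sum(self.recent) / len(self.recent)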

Still, I like this a lot, and I think there's something here. If you have a mac and want to play with this, the code is on github.

Comment via: facebook



Discuss

Maybe Lying Doesn't Exist

October 14, 2019 - 10:04
Published on October 14, 2019 7:04 AM UTC

In "Against Lie Inflation", the immortal Scott Alexander argues that the word "lie" should be reserved for knowingly-made false statements, and not used in an expanded sense that includes unconscious motivated reasoning. Alexander argues that the expanded sense draws the category boundaries of "lying" too widely in a way that would make the word less useful. The hypothesis that predicts everything predicts nothing: in order for "Kevin lied" to mean something, some possible states-of-affairs need to be identified as not lying, so that the statement "Kevin lied" can correspond to redistributing conserved probability mass away from "not lying" states-of-affairs onto "lying" states-of-affairs.

All of this is entirely correct. But Jessica Taylor (whose post "The AI Timelines Scam" inspired "Against Lie Inflation") wasn't arguing that everything is lying; she was just using a more permissive conception of lying than the one Alexander prefers, such that Alexander didn't think that Taylor's definition could stably and consistently identify non-lies.

Concerning Alexander's arguments against the expanded definition, I find I have one strong objection (that appeal-to-consequences is an invalid form of reasoning for optimal-categorization questions for essentially the same reason as it is for questions of simple fact), and one more speculative objection (that our intuitive "folk theory" of lying may actually be empirically mistaken). Let me explain.

(A small clarification: for myself, I notice that I also tend to frown on the expanded sense of "lying". But the reasons for frowning matter! People who superficially agree on a conclusion but for different reasons, are not really on the same page!)

Appeals to Consequences Are Invalid

There is no method of reasoning more common, and yet none more blamable, than, in philosophical disputes, to endeavor the refutation of any hypothesis, by a pretense of its dangerous consequences[.]

David Hume

Alexander contrasts the imagined consequences of the expanded definition of "lying" becoming more widely accepted, to a world that uses the restricted definition:

[E]veryone is much angrier. In the restricted-definition world, a few people write posts suggesting that there may be biases affecting the situation. In the expanded-definition world, those same people write posts accusing the other side of being liars perpetrating a fraud. I am willing to listen to people suggesting I might be biased, but if someone calls me a liar I'm going to be pretty angry and go into defensive mode. I'll be less likely to hear them out and adjust my beliefs, and more likely to try to attack them.

But this is an appeal to consequences. Appeals to consequences are invalid because they represent a map–territory confusion, an attempt to optimize our description of reality at the expense of our ability to describe reality accurately (which we need in order to actually optimize reality).

(Again, the appeal is still invalid even if the conclusion—in this case, that unconscious rationalization shouldn't count as "lying"—might be true for other reasons.)

Some aspiring epistemic rationalists like to call this the "Litany of Tarski". If Elijah is lying (with respect to whatever the optimal category boundary for "lying" turns out to be according to our standard Bayesian philosophy of language), then I desire to believe that Elijah is lying (with respect to the optimal category boundary according to ... &c.). If Elijah is not lying (with respect to ... &c.), then I desire to believe that Elijah is not lying.

If the one comes to me and says, "Elijah is not lying; to support this claim, I offer this-and-such evidence of his sincerity," then this is right and proper, and I am eager to examine the evidence presented.

If the one comes to me and says, "You should choose to define lying such that Elijah is not lying, because if you said that he was lying, then he might feel angry and defensive," this is insane. The map is not the territory! If Elijah's behavior is, in fact, deceptive—if he says things that cause people who trust him to be worse at anticipating their experiences when he reasonably could have avoided this—I can't make his behavior not-deceptive by changing the meanings of words.

Now, I agree that it might very well empirically be the case that if I say that Elijah is lying (where Elijah can hear me), he might get angry and defensive, which could have a variety of negative social consequences. But that's not an argument for changing the definition of lying; that's an argument that I have an incentive to lie about whether I think Elijah is lying! (Though Glomarizing about whether I think he's lying might be an even better play.)

Alexander is concerned that people might strategically equivocate between different definitions of "lying" as an unjust social attack against the innocent, using the classic motte-and-bailey maneuver: first, argue that someone is "lying (expanded definition)" (the motte), then switch to treating them as if they were guilty of "lying (restricted definition)" (the bailey) and hope no one notices.

So, I agree that this is a very real problem. But it's worth noting that the problem of equivocation between different category boundaries associated with the same word applies symmetrically: if it's possible to use an expanded definition of a socially-disapproved category as the motte and a restricted definition as the bailey in an unjust attack against the innocent, then it's also possible to use an expanded definition as the bailey and a restricted definition as the motte in an unjust defense of the guilty. Alexander writes:

The whole reason that rebranding lesser sins as "lying" is tempting is because everyone knows "lying" refers to something very bad.

Right—and conversely, because everyone knows that "lying" refers to something very bad, it's tempting to rebrand lies as lesser sins. Ruby Bloom explains what this looks like in the wild:

I worked in a workplace where lying was commonplace, conscious, and system 2. Clients asking if we could do something were told "yes, we've already got that feature (we hadn't) and we already have several clients successfully using that (we hadn't)." Others were invited to be part of an "existing beta program" alongside others just like them (in fact, they would have been the very first). When I objected, I was told "no one wants to be the first, so you have to say that."

[...] I think they lie to themselves that they're not lying (so that if you search their thoughts, they never think "I'm lying")[.]

If your interest in the philosophy of language is primarily to avoid being blamed for things—perhaps because you perceive that you live in a Hobbesian dystopia where the primary function of words is to elicit actions, where the denotative structure of language was eroded by political processes long ago, and all that's left is a standardized list of approved attacks—in that case, it makes perfect sense to worry about "lie inflation" but not about "lie deflation." If describing something as "lying" is primarily a weapon, then applying extra scrutiny to uses of that weapon is a wise arms-restriction treaty.

But if your interest in the philosophy of language is to improve and refine the uniquely human power of vibratory telepathy—to construct shared maps that reflect the territory—if you're interested in revealing what kinds of deception are actually happening, and why—

(in short, if you are an aspiring epistemic rationalist)

—then the asymmetrical fear of false-positive identifications of "lying" but not false-negatives—along with the focus on "bad actors", "stigmatization", "attacks", &c.—just looks weird. What does that have to do with maximizing the probability you assign to the right answer??

The Optimal Categorization Depends on the Actual Psychology of Deception

Deception
My life seems like it's nothing but
Deception
A big charade

I never meant to lie to you
I swear it
I never meant to play those games

"Deception" by Jem and the Holograms

Even if the fear of rhetorical warfare isn't a legitimate reason to avoid calling things lies (at least privately), we're still left with the main objection that "lying" is a different thing from "rationalizing" or "being biased". Everyone is biased in some way or another, but to lie is "[t]o give false information intentionally with intent to deceive." Sometimes it might make sense to use the word "lie" in a noncentral sense, as when we speak of "lying to oneself" or say "Oops, I lied" in reaction to being corrected. But it's important that these senses be explicitly acknowledged as noncentral and not conflated with the central case of knowingly speaking falsehood with intent to deceive—as Alexander says, conflating the two can only be to the benefit of actual liars.

Why would anyone disagree with this obvious ordinary view, if they weren't trying to get away with the sneaky motte-and-bailey social attack that Alexander is so worried about?

Perhaps because the ordinary view relies on an implied theory of human psychology that we have reason to believe is false? What if conscious intent to deceive is typically absent in the most common cases of people saying things that (they would be capable of realizing upon being pressed) they know not to be true? Alexander writes—

So how will people decide where to draw the line [if egregious motivated reasoning can count as "lying"]? My guess is: in a place drawn by bias and motivated reasoning, same way they decide everything else. The outgroup will be lying liars, and the ingroup will be decent people with ordinary human failings.

But if the word "lying" is to actually mean something rather than just being a weapon, then the ingroup and the outgroup can't both be right. If symmetry considerations make us doubt that one group is really that much more honest than the other, that would seem to imply that either both groups are composed of decent people with ordinary human failings, or that both groups are composed of lying liars. The first description certainly sounds nicer, but as aspiring epistemic rationalists, we're not allowed to care about which descriptions sound nice; we're only allowed to care about which descriptions match reality.

And if all of the concepts available to us in our native language fail to match reality in different ways, then we have a tough problem that may require us to innovate.

The philosopher Roderick T. Long writes:

Suppose I were to invent a new word, "zaxlebax," and define it as "a metallic sphere, like the Washington Monument." That's the definition—"a metallic sphere, like the Washington Monument." In short, I build my ill-chosen example into the definition. Now some linguistic subgroup might start using the term "zaxlebax" as though it just meant "metallic sphere," or as though it just meant "something of the same kind as the Washington Monument." And that's fine. But my definition incorporates both, and thus conceals the false assumption that the Washington Monument is a metallic sphere; any attempt to use the term "zaxlebax," meaning what I mean by it, involves the user in this false assumption.

If self-deception is as ubiquitous in human life as authors such as Robin Hanson argue (and if you're reading this blog, this should not be a new idea to you!), then the ordinary concept of "lying" may actually be analogous to Long's "zaxlebax": the standard intensional definition ("speaking falsehood with conscious intent to deceive"/"a metallic sphere") fails to match the most common extensional examples that we want to use the word for ("people motivatedly saying convenient things without bothering to check whether they're true"/"the Washington Monument").

Arguing for this empirical thesis about human psychology is beyond the scope of this post. But if we live in a sufficiently Hansonian world where the ordinary meaning of "lying" fails to carve reality at the joints, then authors are faced with a tough choice: either be involved in the false assumptions of the standard believed-to-be-central intensional definition, or be deprived of the use of common expressive vocabulary. As Ben Hoffman points out in the comments to "Against Lie Inflation", an earlier Scott Alexander didn't seem shy about calling people liars in his classic 2014 post "In Favor of Niceness, Community, and Civilization":

Politicians lie, but not too much. Take the top story on Politifact Fact Check today. Some Republican claimed his supposedly-maverick Democratic opponent actually voted with Obama's economic policies 97 percent of the time. Fact Check explains that the statistic used was actually for all votes, not just economic votes, and that members of Congress typically have to have >90% agreement with their president because of the way partisan politics work. So it's a lie, and is properly listed as one. [bolding mine —ZMD] But it's a lie based on slightly misinterpreting a real statistic. He didn't just totally make up a number. He didn't even just make up something else, like "My opponent personally helped design most of Obama's legislation".

Was the politician consciously lying? Or did he (or his staffer) arrive at the misinterpretation via unconscious motivated reasoning and then just not bother to scrupulously check whether the interpretation was true? And how could Alexander know?

Given my current beliefs about the psychology of deception, I find myself inclined to reach for words like "motivated", "misleading", "distorted", &c., and am more likely to frown at uses of "lie", "fraud", "scam", &c. where intent is hard to establish. But even while frowning internally, I want to avoid tone-policing people whose word-choice procedures are calibrated differently from mine when I think I understand the structure-in-the-world they're trying to point to. Insisting on replacing the six instances of the phrase "malicious lies" in "Niceness, Community, and Civilization" with "maliciously-motivated false belief" would just be worse writing.

And I definitely don't want to excuse motivated reasoning as a mere ordinary human failing for which someone can't be blamed! One of the key features that distinguishes motivated reasoning from simple mistakes is the way that the former responds to incentives (such as being blamed). If the elephant in your brain thinks it can get away with lying just by keeping conscious-you in the dark, it should think again!



Discuss

Regarding Archimedes (a philosophy of math anecdote)

October 14, 2019 - 00:25
Published on October 13, 2019 9:25 PM UTC


Regarding tales told of Archimedes, other than those about his enthusiasm when he managed to conceive of the law of flotation, his contempt for the Roman soldier who demanded he cease his calculations on the circles he had drawn in the sand, and the epigram on his tomb recording his discovery of the relation between the volumes of a sphere, a cone and a cylinder, there is one other worth mentioning:

He once met, on a beach near Syracuse, a young boy and a slightly older and much taller youth. It was a warm day and those two were wearing white himatia without chitons underneath, as was usual for poorer people. Both left the right shoulder bare, with the fabric rising diagonally to the left shoulder where a pin held it in place, yet Archimedes noticed that, in the way they had positioned themselves right next to each other, the upper part of the child’s clothes reached exactly to the height of the lower part of the youth’s next to him. This formed an elegant, straight line, starting at the lower edge of the first cloth and ending at the upper edge of the other, and from a distance one would get the impression that it was all one continuous material instead of two distinct himatia.

Despite his surprise, he immediately became aware that while he could notice the effect due to standing opposite those two, their position naturally rendered it impossible for them to witness and enjoy this harmonious sight.  

That they remained there, immobile, because they were indeed aware of the effect and meant for anyone coming near to observe it, seemed readily obvious; yet Archimedes couldn’t refrain from walking closer and asking just why they went to so much trouble to produce something they themselves weren’t allowed to observe, given that if they even attempted to steal a glance at it the delicate balance would instantly be ruined and therefore nothing would remain to be seen...

The youth kept silent, but the child, being more impulsive, replied that Archimedes was wrong. It wasn’t at all true that they stood there in this manner out of an intention to present to others the form he saw. To be precise, they had been fully unaware that anyone coming from afar would be seeing such a thing. They were standing so close to each other out of mere habit. Lastly, they weren’t moving at all because they had been waiting there for their father to return from the beach.

It is said that then, as he was walking away, he thought that it isn’t so much that nature seamlessly presents us with mathematical symmetries, as that the overall number of elements which, unbeknownst to us, are woven together into any image is so vast that a small number of our own interests is always also contained within it.

Text was first published at https://www.patreon.com/Kyriakos



Discuss

(a)

October 13, 2019 - 20:39
Published on October 13, 2019 5:39 PM UTC

Cross-posted to my personal blog.

For a while now, I've been using "(a)" notation to denote archived versions of linked pages. This is a small effort towards creating Long Content (a) – content that has a lifespan of decades or centuries, rather than months or years.

I think basically anyone whose writing includes links to other work should include archived links alongside the original hyperlinks, if the writing is intended to be long-lived. (And if you're not trying to write long-lived content, what are you doing, even?)

I was happy to see Zuck (a) & Guzey (a) using "(a)" notation in some of their recent work. Perhaps "(a)" will catch on!

Practically, archive.fo is my first choice for creating archives of webpages. It's free to use, and it's hard for content to be removed from the archive. (Folks can't just email in requesting that arbitrary content be removed.)

But archive.fo can be slow to save new pages, and its library is fairly small.

archive.org is my second choice. It's run by the Internet Archive (fun aside (a)), is free to use, has a massive library, and is quick to add new pages. Unfortunately, folks can remove arbitrary content by emailing in, so I expect archive.org to be less durable than archive.fo in the long run.

perma.cc also seems promising. I don't use it because it's expensive if you don't have an academic affiliation.

And maybe one day Quora will come around (a) to Long Content being good...



Discuss

MA Price Accuracy Law

October 13, 2019 - 15:00
Published on October 13, 2019 12:00 PM UTC

Massachusetts has an interesting law for grocery stores to make sure price scanners are configured correctly: if your item rings up for more than the price on the shelf you get one for free (or $10 off if it's more than $10). Specifically:

if there is a discrepancy between the advertised price, the sticker price, the scanner price or the display price and the checkout price on any grocery item, a food store or a food department shall charge a consumer the lowest price. If the checkout price or scanner price is not the lowest price or does not reflect any qualifying discount, the seller:
  • shall not charge the consumer for 1 unit of the grocery item, if the lowest price is $10 or less;

  • shall charge the consumer the lowest price less $10 for 1 unit of the grocery item, if the lowest price is more than $10

  — MGL I.XV.94.184.C
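
In code, a sketch of my reading of the statute (ignoring the "gross error" exception mentioned below), the adjustment for one unit of a mis-scanned item is:

# My reading of the rule for one unit of a mis-scanned grocery item: if the
# lowest advertised/sticker/scanner/display price is $10 or less the item is
# free; otherwise you pay that lowest price minus $10.
def charge_for_first_unit(lowest_price, checkout_price):
    if checkout_price <= lowest_price:
        # The checkout price already reflects the lowest price: pay as scanned.
        return checkout_price
    if lowest_price <= 10:
        return 0.0
    return lowest_price - 10

charge_for_first_unit(4.50, 4.99)    # 0.0: the item is free
charge_for_first_unit(12.00, 12.99)  # 2.0: you pay $12 less $10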

The grocery store is required to put a sign at each register describing the law, which means that when you notice this you can point to the sign. Which is way better than trying to show the cashier the relevant text of the law on your phone would be.

I have fun trying to remember the price I see for each item as I put it into my cart so if it rings up at a different price I can point that out. The law has an exception for cases where the price is a "gross error" (off by half) but in most cases discrepancies are small: ringing up at $4.99 when it said $4.50 on the shelf. Because you get the item for free if they've overcharged you, however, what matters isn't the size of the discrepancy, just that they put a misleadingly low price on the shelf.

I've noticed stores rarely have a good system in place for fixing these problems. When I catch one they generally check and give me the item for free, but that doesn't usually translate into fixing the price on the shelf. Which means that when I come in next time, it's often still wrong.

This seems like something that a group of shoppers could use together. Whenever anyone noticed a mispricing they could post to a mailing list ("the store brand blueberries are marked $3.99 but ring up as $4.29"), and then everyone on the list could go get some free blueberries. This would probably get stores to be faster about updating their prices.

Even if the stores got very fast at fixing things, though, it could still be rough for them. Say one person goes through and notices they've been overcharged for something. They don't say anything to the store, but instead write to the list and name a time. At the designated time a group of shoppers pick up one unit each and fan out over the store's checkout lines. The items are all scanned, the shoppers all object, and the store has to give away one item per checkout line instead of just one item total. This could be a parody heist plotline in a sitcom.

(While this is hard to fix with technical means, if people started doing it, of course, they would update the law.)

Comment via: facebook



Discuss

What's going on with "provability"?

October 13, 2019 - 06:59
Published on October 13, 2019 3:59 AM UTC

Every so often I hear seemingly mathematical statements involving the concept of being provable. For example:

  • I've seen Gödel's Incompleteness Theorem stated as "if a mathematical system is powerful enough to express arithmetic, then either it contains a contradiction or there are true statements that it cannot prove."
  • On the AI alignment forum, one of the pinned sequences describes Löb's Theorem as "If Peano Arithmetic can prove that a proof of P would imply the truth of P, then it can also prove P is true".

I find each of these statements baffling for a different reason:

  • Gödel: What could it mean for a statement to be "true but not provable"? Is this just because there are some statements such that neither P nor not-P can be proven, yet one of them must be true? If so, I would (stubbornly) contest that perhaps P and not-P really are both non-true.
  • Löb: How can a system of arithmetic prove anything? Much less prove things about proofs?
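
For reference, the standard formulation of Löb's Theorem in provability-logic notation, where \Box P abbreviates "PA proves P" (this is just the textbook statement, added for concreteness):

\text{If } \mathrm{PA} \vdash (\Box P \rightarrow P), \text{ then } \mathrm{PA} \vdash P.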

And I also have one more general confusion. What systems of reasoning could these kinds of theorems be set in? For example, I've heard that there are proofs that PA is consistent. Let's say one of those proofs is set in Proof System X. Now how do we know that Proof System X is consistent? Perhaps it can be proven consistent by using Proof System Y? Do we just end up making an infinite chain of appeals up along a tower of proof systems? Or do we eventually drive ourselves into the ground by reaching a system that nobody could deny is valid? If so, why don't we just stop at PA or ZFC?

Oh, speaking of ZFC. There seems to be a debate about whether we should accept the Axiom of Choice. Isn't it...obviously true? I don't really understand this topic well enough to have any particular question about the AC debate, but my confusion definitely extends to that region of concept space.

So here's my question: Where can I learn about "provability" and/or what clarifying insights could you share about it?



Discuss

AI alignment landscape

October 13, 2019 - 05:10
Published on October 13, 2019 2:10 AM UTC

Here’s a talk I gave at EA Global 2019, where I describe how intent alignment fits into the broader landscape of “making AI go well,” and how my work fits into intent alignment. This is particularly helpful if you want to understand what I’m doing, but may also be useful more broadly. I often find myself wishing people were clearer about some of these distinctions.

<a href="https://medium.com/media/7e6d526c817829eea08842218290c560/href">https://medium.com/media/7e6d526c817829eea08842218290c560/href</a>

Here is the main overview slide from the talk:

The highlighted boxes are where I spend most of my time.

Here are the full slides from the talk:

EAG_2019.pdf



Discuss

Prediction Markets Don't Reveal The Territory

October 13, 2019 - 02:54
Published on October 12, 2019 11:54 PM UTC

[A draft section from a longer piece I am writing on prediction and forecasting. Epistemic Status: I don't know what I am missing, and I am filled with doubt and uncertainty.]

If the notion of professional forecasters disturbs you in your sleep, and you toss and turn worrying about the blight of experts brooding upon the world, perhaps the golden light of distributed information systems has peeked out from beyond these darkest visions, and you have hope for the wisdom of crowds.

Prediction markets aggregate information by incentivizing predictors to place bets on the outcomes of well-defined questions. Since information can be both niche and useful, prediction markets also incentivize the development of specialized expertise that is then incorporated into the general pool of information in the form of a bet. When this works, it works very well.

When information is not widely distributed or discoverable, prediction markets are not useful. Prediction markets for WMDs, Pope Francis’ next pronouncement, or which celebrity couple will choose to live in a van probably will not work. Or consider a public prediction market about what percent of the current freshman class at California public universities will make it to graduation. Such a market would be pretty distorted if all the registrars and admissions counselors were betting as well.

Prediction markets do have some wicked clever uses too. For example, a prediction market can also act as a proxy for some other event. That is to say, that through some ingenious design one can correlate a prediction market’s assessment of an event to another measurable outcome. Here is one instance in which researchers used a prediction market about the probability of war in Iraq, correlated it to the price of oil, and estimated the effect of war on oil prices. This provided the very useful information telling us what % of the price of oil is caused by the threat of war in Iraq. At an even broader level, this prediction market design allows us to study the effects of war on economies.

On the other hand, an additional limitation to prediction markets is that people have to be interested enough to take part in them, which is a real bummer. Intelligent quantitative people might enjoy researching to gain some betting leverage in prediction markets qua prediction markets. But even then, most people want to research questions that they themselves find interesting [citation needed]. So even the best designed prediction market can fail without enough parties incentivized to care.

The greatest limitation for prediction markets however is not any of the above technical problems. We are optimistic that these can be overcome. But there is a logical problem which can’t. Since each specialized piece of information is converted into a bet, the market will react to that new information without having to know the particulars of that information. This is the beautiful wonder of markets - everything is turned into a utility function. However, for boards, administrators, and governments which want to take action based upon the information from a prediction market, two bits of important information are left totally inaccessible. First, what information was the most salient for bringing the market odds where they currently are? Secondly, what aspect of the current state of affairs is the most leverageable? That is, of all the hidden factors which caused the market consensus to reach this point, which, if any of them, do we have any power to affect? If the point is to not just know what the market says, but to know how the world works, then prediction markets in themselves may not be of much help. Here are two quick examples to illustrate the point:

You work at the North Pole managing present-procurement for heads of state (PP-HOS, for short). Each year you scramble to get enough coal for the various heads of state because you don’t know until Christmas week whether they are on the naughty or nice list. This is an important question because heads of state receive coal proportional to their standing in society, and since the cost of coal rises in winter, it costs your department quite a bit of money to expedite all these coal orders. So this year you have created a prediction market to tell you the chances of the president of Hungary getting coal again this year and you plan on acting on the market’s prediction in September, well ahead of the November coal rush…. The market is a success! Your department saves some money, and you save just about the right amount of coal for the beloved president of Hungary. But when the big man pulls the plug on funding the market apparatus, you realize that despite all the little helpers that made the market a success, you didn’t gain from it any wisdom about how to predict whether a head of state will get coal. That is an example of a market working without conveying any insights. Thus markets keep inscrutable the inner workings of Father Christmas’ naughty list.

The second example demonstrates the leverage problem of a market. You are the principal of a school. You get drunk one night and rent out a Vegas casino which you revamp into a test score betting complex. You want to know how your students will do on this week’s standardized test. So you make all their information available to patrons who then place bets on each student. Despite the drugs, sex, and alcohol in this administrative Bacchanal, the market works astoundingly well, and the predicted individual performance on the standardized tests matches the actual performance near perfectly. However, in the sober light of late afternoon, you realize that your market solution for predicting scores didn’t reveal much about what you should be doing differently. In fact, the post-mortem indicates that the 3 biggest predictors of test scores are not things even remotely under your control. You despair, believing that there is nothing you can do to help students improve. Even if there were a fourth cause of test success which is under your control, it doesn’t matter and will not be discernible among the thousands of bets made, because it, like everything else, was flattened into the same utility function.



Discuss

Planned Power Outages

October 12, 2019 - 17:10
Published on October 12, 2019 2:10 PM UTC

With the dubiously motivated PG&E blackouts in California there are many stories about how lack of power is a serious problem, especially for people with medical dependencies on electricity. Examples they give include people who:

  • Have severe sleep apnea, and can't safely sleep without a CPAP.

  • Sleep on a mattress that needs continuous electricity to prevent it from deflating.

  • Need to keep their insulin refrigerated.

  • Use a medicine delivery system that requires electricity every four hours to operate.

This outage was dangerous for them and others, but it also seems like a big problem that they're in a position where they need absolutely reliable grid power. Even without politically motivated outages, the grid isn't built to a standard of complete reliability.

There's an awkward valley between "reasonably reliable, but with a major outage every few years in a storm or something" and "completely reliable, and you can trust your life on it" where the system is reliable enough that we stop thinking of it as something that might go away but it's not so reliable that we should.

We can't get California out of this valley by investing to the point that there won't be outages; earthquakes, if nothing else, ensure that. So instead we should plan for outages, and make outages frequent enough that this planning will actually happen. Specifically:

  • Insurance should cover backup power supplies for medical equipment, and they should be issued by default.

  • When there hasn't been an outage in ~1y, there should be a test outage to uncover unknown dependencies.

While this outage was probably not done for good reasons, the problems it has uncovered are ones we need to fix.

Discuss

I would like to try double crux.

October 12, 2019 - 08:13
Published on October 10, 2019 9:34 PM UTC

Hello,

I would like to try double crux https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-resolving-disagreement with someone. My statement A is "There is God" (I genuinely believe this; it is not just for the sake of trying the technique). I have three cruxes (well, two and a half, to be honest); following the rules, I am not publishing them here, so that you can prepare your cruxes independently.

Thank you!



Discuss

A Short Introduction to Theory of Change

October 12, 2019 - 03:01
Published on October 11, 2019 7:00 PM UTC

(Cross-posted from LinkedIn.)

At the heart of any strategy are two questions: what do we want to accomplish? And how are we going to do it?

In many situations, answering these questions might not seem difficult. We may already have a mission statement or set of values that guides all of our actions, addressing the first question. Likewise, we may already have a plan of action in place, a set of activities that seems to match the goals we’ve set out. Problem, meet solution. Done and done.

As intuitive as it is to imagine the beginning and the end of that process, though, all too often the devil is in the details—or more specifically in this case, all of the pesky steps in between. Figuring out what those ought to be takes real work, and is generally not something that can be done in one’s head. And because it takes work, a lot of times we don’t bother to do it.

Fortunately, there is a tool called theory of change that provides a means of figuring out all the steps. A theory of change is a visual depiction of your strategy. You probably already have a notion in your head of what your strategy is, but a theory of change gives you a means of articulating that strategy in the form of a diagram.

Why would you want to do this? Making your theory of change explicit accomplishes several things:

  • Developing a theory of change is a great way to get big questions about program or organizational strategy out in the open. In some cases, these questions might not have ever been considered before, or they’ve been thought about but never discussed among the team.
  • A finished theory of change diagram is a useful shorthand for explaining the mechanics of how a program works to someone who’s not that familiar with it. It’s not a marketing document, but it is a communication tool for people who want to understand your strategy in depth.
  • Theories of change are great for helping you decide what to measure. What information is most important for your program or organization to have? Are there any assumptions underlying your success that seem especially vulnerable or uncertain? Does the theory of change show that certain objectives have to be met before any others are possible? If so, you might want to focus measurement or tracking efforts on those objectives or assumptions.
  • Walking through a theory of change with new staff members can be a great way to get them up to speed quickly on the “big picture,” and not just the details that they need to focus on day to day. A theory of change in this context can be particularly effective at showing staff how their work fits into a larger plan.
  • Many funders are accustomed to working with theories of change and logic models, and including a theory of change with a grant proposal can communicate that you speak their language and have thought seriously about how a program works.

Nearly all theories of change contain the following fundamental elements. In combination, they describe a linear, causal pathway between programs or policy interventions and an aspirational end-state.

  • Activities are actions or strategies undertaken by the organization that is the subject of the theory of change. These activities usually take place in the context of ongoing programs, although they can also be one-time projects, special initiatives, or policies such as legislation or regulations.
  • Outcomes are the desired short-, medium-, or long-term results of successful program or policy implementation.
  • Impacts (or Goals) represent the highest purpose of the program, initiative, or organization that is the subject of the theory of change.

To illustrate this, we can look at a simple example. Let’s say you’ve decided you want to go to law school, and in order to get into law school you have to get a good score on the LSAT. So, how can you make sure you get a good score? Intuitively, you decide that taking a test prep class is the way to go. It sounds simple enough, but it’s worth thinking through the assumption that taking the test prep class would actually improve your score. Why do we think that might happen? Well, one factor could certainly be that you get more familiarity with the test and the types of questions asked. Perhaps there is another, more psychological factor at work too. If you’re someone who gets nervous taking tests, the practice exams and deep engagement with the material that comes with a class could help you to get more comfortable with the idea of the LSAT and make it seem less intimidating, thus improving performance.

Sure enough, this line of thinking lends itself quite easily to a theory of change:

Of course, most real-life programs and initiatives are quite a bit more complex than this simple example, which is why it's important to take the time to get the details outside of your head and onto the page or screen. Here's a theory of change I helped develop for the William and Flora Hewlett Foundation's Performing Arts Program, which distributes about $20 million a year to organizations in the San Francisco Bay Area region. This was the first theory of change the program ever had, and it was used to guide grantmaking between 2009 and 2011. (Don't be confused by the labels; in this case, "Ultimate Outcomes" = "Impacts," and "Cluster" and "Component" outcomes just mean early-stage and late-stage respectively.)

The truth is that any decision you make, if it has any element of intentionality, can be diagrammed as a theory of change. Everything from taking an umbrella with you in case it rains to making time for your favorite TV show has a theory of change behind it. Even if the idea of formalizing your decision-making in this way feels utterly unnatural, I can assure you that if you think strategically at all, then you have a theory of change in your head already. What I can’t tell you is whether it’s a good theory of change—that’s something that you probably won’t be able to figure out until you take the time to write it down and get feedback on it.

Theory of change was developed originally as an evaluation methodology. But I’ve come to believe it’s much more powerful when deployed as a design tool for strategy. I’ve worked with many different strategy frameworks over the years, and most of them are essentially the same set of tools in different packaging. For me, what sets theory of change apart is its insistence that we name the assumptions of cause and effect behind our work. It can’t tell you what your goals should be, but if you already know where you want to end up, I don’t know of another tool that prompts anywhere near the same level of critical thinking about how you’re going to get there.



Discuss

A simple sketch of how realism became unpopular

October 12, 2019 - 01:25
Published on October 11, 2019 10:25 PM UTC

[Epistemic status: Sharing current impressions in a quick, simplified way in case others have details to add or have a more illuminating account. Medium-confidence that this is one of the most important parts of the story.]


Here's my current sense of how we ended up in this weird world where:

  • I still intermittently run into people who claim that there's no such thing as reality or truth;
  • a lot of 20th-century psychologists made a habit of saying things like 'minds don't exist, only behaviors';
  • a lot of 20th-century physicists made a habit of saying things like 'quarks don't exist, only minds';
  • there's a big academic split between continental thinkers saying (or being rounded off to saying) some variant of "everything is culture / perception / discourse / power" and Anglophone thinkers saying (or being rounded off to saying) "no".

Background context:

1. The ancient Greeks wrote down a whole lot of arguments. In many cases, we're missing enough textual fragments or context that we don't really know why they were arguing — what exact propositions were in dispute, or what the stakes were.

2. In any case, most of this is screened off by the fact that Europe's memetic winners were Christianity plus normal unphilosophical beliefs like "the sky is, in fact, blue".

3. Then, in 1521, the Protestant Reformation began.

4. In 1562, the Catholics found a giant list of arguments against everything by the minor Greek skeptic Sextus Empiricus, got very excited, and immediately weaponized them to show that the Protestant arguments fail (because all arguments fail).

5. These soon spread and became a sensation, and not just for being a useful superweapon. A lot of intellectuals were earnest humanists used to taking arguments at face value, and found Sextus' arguments genuinely upsetting and fascinating.


I trace continental thinkers' "everything is subjective/relative" arguments back to a single 1710 error in George Berkeley:

[...] I am content to put the whole upon this Issue; if you can but conceive it possible for one extended moveable Substance, or in general, for any one Idea or any thing like an Idea, to exist otherwise than in a Mind perceiving it, I shall readily give up the Cause[....]But say you, surely there is nothing easier than to imagine Trees, for instance, in a Park, or Books existing in a Closet, and no Body by to perceive them. I answer, you may so, there is no difficulty in it: But what is all this, I beseech you, more than framing in your Mind certain Ideas which you call Books and Trees, and the same time omitting to frame the Idea of any one that may perceive them? But do not you your self perceive or think of them all the while? This therefore is nothing to the purpose: It only shews you have the Power of imagining or forming Ideas in your Mind; but it doth not shew that you can conceive it possible, the Objects of your Thought may exist without the Mind: To make out this, it is necessary that you conceive them existing unconceived or unthought of, which is a manifest Repugnancy.

If I can imagine a tree that exists outside of any mind, then I can imagine a tree that is not being imagined. But "an imagined X that is not being imagined" is a contradiction. Therefore everything I can imagine or conceive of must be a mental object.

Berkeley ran with this argument to claim that there could be no unexperienced objects, therefore everything must exist in some mind — if nothing else, the mind of God.

The error here is mixing up what falls inside vs. outside of quotation marks. "I'm conceiving of a not-conceivable object" is a formal contradiction, but "I'm conceiving of the concept 'a not-conceivable object'" isn't, and human brains and natural language make it easy to mix up levels like those.

(I can immediately think of another major milestone in the history of European thought, Anselm's ontological argument for God, that shows the same brain bug.)

Berkeley's master argument was able to find fertile soil in an environment rife with non-naturalism, skeptical arguments, and competition between epistemic criteria and authorities. Via Kant and Kant's successors (especially Hegel), he successfully convinced the main current of 19th-century European philosophy to treat the idea of a "mind-independent world" as something ineffable or mysterious, and to treat experiences or perspectives as fundamental.

My unscholarly surface impression of the turn of the 20th century is that these memes ("the territory is fundamentally mysterious" and "maps are sort of magical and cosmically important") allowed a lot of mysticism and weird metaphysics to creep into intellectual life, but that ideas like those are actually hard to justify in dry academic prose, such that the more memetically fit descendants of idealism in the 20th century ended up being quietist ("let's just run experiments and not talk about all this weird 'world' stuff") or phenomenalist / skeptic / relativist ("you can't know 'world' stuff, so let's retreat to just discussing impressions; and maybe you can't even know those, so really what's left is power struggles").

Today, the pendulum has long since swung back again in most areas of intellectual life, perhaps because we've more solidly settled around our new central authority (science) and the threats to centralized epistemic authority (religious and philosophical controversy) are more distant memories. Metaphysics and weird arguments are fashionable again in analytic philosophy; behaviorism is long-dead in psychology; and quietism, non-realism, and non-naturalism at least no longer dominate the discussion in QM, though a lot of Copenhagen slogans remain popular.


The above is a very simple picture featuring uneven scholarship, and history tends to be messier than all that. (Ideas get independently rediscovered, movements go one step forward only to retreat two steps back, etc.) Also, I'm not claiming that everyone endorsed the master argument as stated, just that the master argument happened to shift intellectual fashions in this direction in a durable way.



Discuss

Rent Needs to Decrease

October 11, 2019 - 15:40
Published on October 11, 2019 12:40 PM UTC

Here's part of a comment I got on my housing coalitions post:

I consider it extremely unlikely you have found renters with the expectation of rent going down. Assuming they want to live in a well maintained building, I consider unlikely they even desire it, once they think about it. What renters hope for in general is increases that are less than their increases in income. Landlords mostly do expect that rents will go up, but the magnitude of their expectations matters, many have the same expectations as renters for moderate increases. Others will have short term/transactional thinking and will want to charge what the market will bear.

This seems worth being explicit about: when I talk about how I think rents should be lower, I really mean lower. I'm not trying to say that it's ok if rent keeps rising as long as incomes rise faster, but that rents should go down.

Here are Boston rents in June 2011:

And in June 2019:

These are on the same scale, though not adjusted for inflation (13% from 2011-06 to 2019-06).

In 2011 a two-bedroom apartment in my part of Somerville would have gone for $1800/month, or $2050/month in 2019 dollars. In 2019, it would be $3000/month. Compared to 13% inflation we have 67% higher rents.

Another way to look at this is that what you would pay now for an apartment a ten-minute walk from Davis would, in 2011, have covered an apartment a ten-minute walk from Park St. And what you would have been paying for a Harvard Sq apartment in 2011 wouldn't get you an East Arlington apartment today.

These large increases have been a windfall for landlords. Property taxes haven't risen much, upkeep is similar, but because demand has grown so much without supply being permitted to rise to meet it the market rent is much higher. If we build enough new housing that rents fall to 2011 levels, landlords will make less money than they had been hoping, but they'll still be able to afford to keep up their properties.

I'll support pretty much any project that builds more bedrooms: market rate, affordable, public, transitional. Rents are so high that all of these would still be worth building and maintaining even if everyone could see we were building enough housing that rents would fall next year to 2011 levels.

As a homeowner and a landlord, I know that this means I would get less in rent and I'm ok with that. I value a healthy community that people can afford to live in far more than a market that pays me a lot of money for being lucky enough to have bought a two-family at a good time.



Discuss

When we substantially modify an old post should we edit directly or post a version 2?

October 11, 2019 - 13:40
Published on October 11, 2019 10:40 AM UTC

Does anyone have any thoughts on this?



Discuss

Sets and Functions

October 11, 2019 - 10:41
Published on October 11, 2019 5:06 AM UTC

Sets and functions are two of the most basic ideas in mathematics, and we'll need to know what they are to discuss some things about categories rigorously. Normally you'd learn about sets and functions way before encountering category theory, but in the spirit of assuming as little math as possible, we should write this post. It's also worth addressing a few matters for their conceptual relevance.

Sets are imaginary bags we put things into. For example, you can take a dog, a cat, and a shoe, put them in an imaginary bag, and now you have a set consisting of {dog, cat, shoe}.
font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), 
local('MathJax_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')} {dog,cat,shoe}. The members of the set—dog, cat, and shoe—are called the elements of the set.

A subtle but important aspect of a set is that the imaginary bag has to be defined by a rule. This rule can be pretty much anything, like "put into a bag everything I'm pointing at," but it does have to be a rule. In practice, sets can hold pretty much anything, so you can often just say "here is my set" rather than having to be explicit about the rule. We'll get back to why the rule matters at the end. For now, sets are imaginary bags that you can put pretty much anything into.

What are we putting into these bags, exactly? Pretty much anything, yes—but clearly we aren't actually putting dogs, cats, and shoes into bags. Mathematically, what are these things?

That is to say, what's the difference between the set {dog} and the set {cat}?

Well, what's the difference between the equations x+2=3 and y+2=3? Nothing but the name of the variable—which does not matter at all. We could call x anything. We could represent it with a thirty-foot tall watercolor of a fire truck.

So what's the difference between the set {dog} and the set {cat}? Only the name of the element—which does not matter at all. Just like we can take posets like 1→2→3 and a→b→c and represent their common structure abstractly as ∙→∙→∙ , we can do the same for {dog} and {cat} with this set: {∙}.

The set {∙} is what a one-element set like {dog} or {cat} really is. There's no mathematical meaning to {dog} that actually makes the element of this set a four-legged barking creature. It's just an element that happens to be labeled "dog."

So why do we care about sets? Set theory is really important to mathematics, but from a category-theoretic perspective, they actually aren't very important at all. Instead, sets serve one major purpose: sets let us define functions, and functions are really, really important!

Functions are maps between sets that meet a few rules. First of all, let's just talk about the "maps" part of that. Think of sets as countries, and the elements of the sets as cities in that country. A map between the two sets is a map telling you how to go from the cities in one country to the cities in the other country.

But of course, math is a little more specific than that. So let's say you have one set A={a,b,c} and another set X={x,y,z}. What does it mean to define a map—a morphism, in fact—from A to X?

Well, it's pretty simple in the end. You have to start from a city in A, so one of a,b, or c. And you have to end up in a city in X, so one of x,y, or z. Let's say you go from a to x. Then the map is just...the set of where you started from and where you ended up. That is, a and x, respectively.

That's it! It's a short trip. There's not much to sets in the first place, so there's not much to maps between them. (Technically, sets have no structure—they're imaginary bags filled with black dots—and so the maps between them are as simple as a map across a country with no geography.) But let's point out one thing: we do need this map to tell us where we started and where we ended. In a set, the order of the elements doesn't mean anything. For example, the set {apple,banana,orange} and the set {orange,apple,banana} are literally identical. The only reason the elements are even written in an order at all is because there's no other way to display text.

To get our elements in order, so that we can show where we started from and where we ended up, we use something called an ordered pair, which you've probably seen from doing Cartesian coordinates. When we have a map telling us to go from a to x, we represent that as an ordered pair (a,x). The ordered pair means "we started at a and ended up at x."

Although sets don't have orders, we can have sets of ordered pairs (since we can put pretty much anything into sets), and in that way we can represent order in a set. For example, you can have the set consisting of just the ordered pair (a,x). That would be written {(a,x)}.

So what does it mean to define a map from A to X? It means defining a set of ordered pairs, the first part of the ordered pair coming from the set A and the second part of the ordered pair coming from the set X.

That is to say, a map f:A→X is defined as a set whose elements are ordered pairs (∙A,∙X), where ∙A is an element in A and ∙X is an element in X.

So for example, we could have a map with the following directions: {(a,x),(b,y),(c,z)}. This map says, "If you're in a, go to x. If you're in b, go to y. If you're in c, go to z." All such maps are called binary relations, because they define relationships between pairs of things. For example, the map just given defines relationships between a and x, b and y, and c and z.
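If it helps to see this concretely, here's a small sketch in Python (my own illustration, not part of the original definitions): with sets as Python sets and maps as sets of ordered pairs, the example above is just a set of tuples.

# Two sets, and the map described above as a set of ordered pairs.
A = {"a", "b", "c"}
X = {"x", "y", "z"}

the_map = {("a", "x"), ("b", "y"), ("c", "z")}

# Every pair starts in A and ends in X.
assert all(start in A and end in X for start, end in the_map)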

We could define all sorts of maps based on this definition. You could make "partial" maps that tell you where to go if you're in a, but not if you're in b or c. You could make "choose your own adventure" maps that have forking directions, e.g., a map having both (a,x) and (a,y) in it.

What's the best map? That is unquestionably a special kind of binary relation known as a function. "Best" might be a strong word, but functions have two special properties that have made them the most important type of map, and indeed morphism, in all of mathematics.

The first property of functions is that they provide you instructions for going from A to X for every "city" in A. Let's move away from the countries-and-cities metaphor and consider the study of science. Think of the elements of A as possible states of the world. As scientists, we'd like a scientific rule that gives us predictions for every possible state of the world, i.e., something that provides a map for every element of A. That's something a function does—this property is called totality.

The second property of functions is something called univalence, which means that you only get one mapping for every city you could be starting from. That is to say, if your function tells you to do (a,x) and (a,y), it must be the case that x and y are completely identical, i.e., x=y. Having a map to two different cities starting from the same city is strictly disallowed by univalence.

Let's relate univalence to science as well. Basically, it captures the idea of determinism. If a state of the world yields a particular output, and then you reset things to that exact state of the world again, it had better yield the same output again. I.e., if you have a state of the world a, and you observe that a yields output x and also yields output y, the outputs x and y had better be the same output that accidentally got two different labels applied to it.

So between totality and univalence, a function basically captures the idea of "making deterministic predictions for everything." Which is exactly what science is all about.

We can combine totality and univalence into this single rule: A function f:A→X is a set of ordered pairs (∙A,∙X) where each element of A shows up in one and only one ordered pair. That is to say, a is definitely in an ordered pair, as guaranteed by totality. But by univalence, it will only show up once: if you have (a,x), then you won't also have (a,y).
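Continuing the Python sketch from above (again, just an illustration under the same representation), totality and univalence are easy to check mechanically:

def is_function(pairs, domain):
    # Totality: every element of the domain shows up as a starting point.
    starts = [start for start, _ in pairs]
    total = set(starts) == set(domain)
    # Univalence: no starting point shows up twice.
    univalent = len(starts) == len(set(starts))
    return total and univalent

A = {"a", "b", "c"}
assert is_function({("a", "x"), ("b", "y"), ("c", "z")}, A)                   # a function
assert not is_function({("a", "x")}, A)                                       # "partial": fails totality
assert not is_function({("a", "x"), ("a", "y"), ("b", "y"), ("c", "z")}, A)   # "forking": fails univalence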

You should know that while a function can't give you (a,x) and (a,y) unless x=y, it can totally give you (a,x) and (b,x). That would just be saying that two states of the world end up at the same point, which certainly seems like a scientific possibility.

Now we're going to address sets and functions from a category theoretic perspective. In fact, we're going to make a category.

***

Let's build a category whose objects are sets and whose morphisms are functions.

The first step is to make sets into objects. We do this by saying that sets are objects. Seriously, it's that simple—objects don't really have any properties at all aside from their morphisms, so there's nothing more to this step.

The next step is to make our functions into morphisms. We do this by saying that functions are morphisms, and then we check that functions obey the rules of morphisms in a category.

First, domain and codomain. Sets serve as the domain and codomain of functions, and since sets are our objects, the functions will clearly have the objects of this category as their domain and codomain.

So the first real thing to figure out is composition. Let's say we have A,B, and C as sets, and functions f:A→B and g:B→C. Composition would require that we have another function h:A→C such that h=g∘f.

Let's break this requirement down. The function f starts with the elements in A and returns some elements in B. That is to say, we have a set of ordered pairs of the sort (a,b), where a comes from A and b comes from B. Say that A consists of {a1,a2,a3} and B is {b1,b2,b3,b4}. The function f allows us to specifically assign an element in B to each element of A. That is to say, we can ask, what is the specific (i.e., only one) element of B corresponding to a1? It could be b2, for example. If so, then f(a1)=b2. And we can ask the question next for a2 and a3. We might reuse elements of B or not. Let's say the full function f gives {(a1,b2),(a2,b2),(a3,b1)}.

Having assigned elements of B to elements of A in this way, we could think of elements of A as "hiding in" elements of B. For example, a1 is hiding in b2. (That is to say, the function f is hiding a1 in b2—it doesn't get to hide there automatically.)

Next we have g:B→C, which specifically assigns an element in C to each element of B. Say that C consists of {c1,c2}. Let's say g gives {(b1,c2),(b2,c1),(b3,c1),(b4,c1)}.

Now let's reveal our hidden soldiers. The elements a1 and a2 were hiding in b2, and a3 was hiding in b1. Naturally, they ambush the soldiers of C like so: {(a1,c1),(a2,c1),(a3,c2)}.

(Why is this ambush allowed? Because (a1,b2) means f(a1)=b2, and (b2,c1) means g(b2)=c1. Substituting f(a1) for b2, we have g(f(a1))=c1.)

Is that "ambush" set a function? Yes, it has each element of A in an ordered pair, and each element is in only one pair. Is it a function A→C? Yes, the "first part" of each ordered pair comes from A and the "second part" of each comes from C. Can we call this function h? Yes, we just label it that for free. Is this function h the same as doing f first, and then g? Yes, that's exactly what we just saw.
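Here's the same composition worked out in the Python sketch (illustrative only; the sets and functions are the ones from the example above):

def compose(g, f):
    # Follow f from A into B, then g from B into C: this is g∘f.
    return {(a, c) for (a, b1) in f for (b2, c) in g if b1 == b2}

f = {("a1", "b2"), ("a2", "b2"), ("a3", "b1")}                  # f: A → B
g = {("b1", "c2"), ("b2", "c1"), ("b3", "c1"), ("b4", "c1")}    # g: B → C

h = compose(g, f)                                               # h: A → C
assert h == {("a1", "c1"), ("a2", "c1"), ("a3", "c2")}          # the "ambush" set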

So functions compose. Check mark.

Now let's prove that these functions compose associatively. Say you have sets D, E, F, and G, and functions i:D→E, j:E→F, and k:F→G. We want to show that (k∘j)∘i=k∘(j∘i).

Let's plug in an arbitrary element d of D. Our functions carry that element through to some element of G, and we want to know if it's the same regardless of how you group the functions. So let's see if

((k∘j)∘i)(d)=(k∘(j∘i))(d).

We know that composition is allowed (we're proving that composition is associative, after all). So let's compose. Say that k∘j=y:E→G and j∘i=x:D→F. Now we can simplify to asking if y∘i(d)=k∘x(d).

Well, what is y∘i(d)? It's a mapping of an element from D through i to E, and from E through y to G. And y is equal to the path that goes from E through j to F, and from there through k to G. So overall, y∘i(d) maps an element from D through i to E, through j to F, and through k to G.

And what is k∘x(d)? It's a mapping of an element from D through x to F, and from F through k to G. And x is equal to the path that goes from D through i to E, and from there through j to F. So overall, k∘x(d) maps an element from D through i to E, through j to F, and through k to G.

Which is exactly what we just said about y∘i(d). Because they're both carrying the same element through the same functions, they have to end up at the same element in G on pain of violating univalence. So they're equal for the case of d, and since d is an arbitrary element in an arbitrary set, it works in general. Thus, composition is associative.
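This isn't a substitute for the argument above, but as a sanity check, the Python sketch agrees on any particular example (here with some small made-up functions i, j, and k):

def compose(g, f):
    return {(a, c) for (a, b1) in f for (b2, c) in g if b1 == b2}

i = {("d1", "e1"), ("d2", "e2")}   # i: D → E
j = {("e1", "f1"), ("e2", "f1")}   # j: E → F
k = {("f1", "g1")}                 # k: F → G

# Grouping doesn't matter: (k∘j)∘i equals k∘(j∘i).
assert compose(compose(k, j), i) == compose(k, compose(j, i))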

Finally we need each set to have an identity morphism. Because our morphisms are functions, this will be the identity function. This is as simple as asking whether, for any set A, we can define a function 1A:A→A that does nothing.

Here's an example of such a function. Say A={a1,a2,a3}. Then a function A→A would do nothing if it gave {(a1,a1),(a2,a2),(a3,a3)}. That is to say, each element just gets mapped back to itself.

Let's show that it does nothing. I.e., if you have a function g:A→B, the composition g∘1A is just equal to g. Well, duh! The function g is a mapping of elements from A to B. So if you keep all the elements where they were in A, then g is just going to be what g was going to be if you didn't do anything in the first place.

Obviously, you can just pair up each element with itself for any set, and that's going to keep not doing anything no matter how big the set gets. So every set has an identity function. (And only one, at that—there's only one way to pair up each element with itself.)
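In the same Python sketch (again, just an illustration), the identity function is the set of pairs matching each element with itself, and composing with it changes nothing:

def compose(g, f):
    return {(a, c) for (a, b1) in f for (b2, c) in g if b1 == b2}

def identity(s):
    # Pair each element with itself.
    return {(element, element) for element in s}

A = {"a1", "a2", "a3"}
B = {"b1", "b2"}
g = {("a1", "b2"), ("a2", "b1"), ("a3", "b2")}   # some function g: A → B

assert compose(g, identity(A)) == g   # doing nothing first, then g, is just g
assert compose(identity(B), g) == g   # g, then doing nothing, is also just g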

And now we're done. We've just shown how sets and functions can be considered a category: sets are objects by declaration, and we can show that functions between sets compose associatively, and an identity function can be defined for every set.

Neat!

You might be wondering if we could have defined a category with sets as objects and binary relations as morphisms. Indeed we can, in pretty much the same way as we did with sets and functions. Since functions are just particular types of binary relations, proving that binary relations meet the rules of a category in general would have proved it for the specific case as well. In fact, the category of sets and functions is a subcategory of the category of sets and binary relations.

Yet it is the case that the category of sets and functions gets the significant name Set, whereas the category of sets and binary relations gets the much less interesting name Rel. That's because, from the category-theoretic perspective, everything that's interesting about a category is contained in its morphisms. Functions are basically the most important morphism in mathematics, so we give the category for which functions are morphisms the name Set—we care about sets because, more than anything, they let us define functions.

***

One last note on defining the category of sets, Set. You may have heard of Russell's paradox, which shows that you can't have the set of all sets. That's because sets have to be defined according to a rule, as we said in the beginning. What if you try to define a set of all sets that are not elements of themselves? One possibility is that this set is an element of itself; but then it contains a set that is an element of itself, which violates the rule we just laid down. The only other possibility is that it is not an element of itself; but then, according to its rule, it should be an element of itself, and we just saw why it can't be. So we bounce back and forth in eternal paradox.
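Written out symbolically, this is just restating the argument above, with R standing for the problematic set:

R={S∣S∉S}, so R∈R if and only if R∉R, a contradiction either way.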

So we can't have a set of all sets. Then how can we have a category of all sets? We'll discuss that in the next post, which will address this issue and use our new understanding of sets to more properly formalize the definition of a category given a couple of posts ago.

Additionally, we'll look at two other interesting things about sets. We'll see how, although sets are bags of black dots, we can use functions to give those black dots meaning. It's the category-theoretic view that everything you want to know about an object is found in the object's morphisms. Furthermore, we'll also see that there are two special kinds of sets, which can be thought of as the "minimal" and "maximal" set, respectively. In doing so, we'll get our first tastes of the two main goals of this series, the Yoneda lemma and adjoint functors.

***

Incidentally, writing these posts has been shockingly exhausting and time-absorbing. So after this next post, I don't expect anything further on the weekend, although I may try to answer comments then. Five posts a week is not sustainable. 2-3 a week is probably more reasonable. This experience has given me a lot of respect for people who keep up daily blogs or write books. Thanks very much to people reading and commenting on these so far. It's very useful to have feedback and gratifying to know that people are interested in category theory.

I am going to work as well on lowering the cost to myself of creating drawings to aid explanations. I have always explained this material in the past with easy access to pen and paper, and somehow it never quite occurred to me that even sketching out a few dots and arrows is much more of a pain on a computer. Taking recommendations, if you have any.



Discuss

Reflections on Premium Poker Tools: Part 4 - Smaller things that I've learned

October 11, 2019 - 04:26
Published on October 11, 2019 1:26 AM UTC

Previous posts:

In the previous post, I talked about what I've learned. That post focused on bigger things. But there were a lot of smaller, more miscellaneous things that I've learned too. Those are the things that I want to talk about in this post.

People think of a mobile app when you say you're building an "app"

Even when I clarify and try to explain that it's a web app, most people are still confused. So sometimes I call it a website, which I hate because that sort of implies that it's static. Sometimes I describe it as poker software. I still haven't found a good solution to this. I think "website" is probably best.

Ghosting is normal

This is a huge pet peeve of mine. I hate it. But apparently it's just a thing that many people do. At least in the business world. Let me give you some examples.

1) I reached out to this vlogger on Twitter. I asked if she'd be interested in working with me. She said she is. Then she proposed that I pay her as an advertiser to promote the app. I said in response that I'm only looking for revenue share partnerships right now, and asked if she's interested in that. No response. I follow up in a few days. No response. Follow up again. No response.

2) There was a guy who I would study poker with via Skype every week. I swear, we had a good relationship, and had productive study sessions. At some point he was going to be away for some trip for a few weeks, so we said we'd resume when he gets back. After the trip I reach out to set up a time for our next session. No response. I reach out again. No response. Again. No response. I eventually start spacing it out over months, but I never get a response from him. Eventually he signs up as my second paid user. I email him to thank him and ask if he wants to catch up. No response. At this point maybe he just feels too awkward to respond. I'm really confused though. I have no clue what happened.

3) There've been a few times where I'd try to set up lunches with people for advice. Many times the conversation would go:

Me: Want to get lunch?
Them: Sure, how about next week.
Me: Sounds good, what would be a good time for you?
Them: Silence.
Me: Just checking in for lunch next week.
Them: Silence.
Me: Still want to get lunch?
Them: Silence.
Me: Sorry we missed each other last week. Want to reschedule?
Them: Silence.
Them: Hey, sorry I didn't respond previously. How's this upcoming week for you?
Me: No worries. I'm free. How about Wednesday?
Them: Silence.

I could go on and on giving examples of this sort of stuff, but I think you get the idea.

Additionally, I've found that when people want something from you, this phenomenon completely disappears! Mysterious, huh?

Book authors are just people

In the beginning I'd get star struck when I met or talked with book authors, or similarly "famous" people. But now I just see them as people. People aren't banging down their door. You can email them. It's not implausible that they'll get coffee with you.

There's no such thing as "just throwing out a number"

"Suppose we said it'd be $100/month."

"Suppose we say it's 50% revenue share."

I've made statements like this, intending for it to not mean anything and just be, y'know, throwing out a number. But in my experience, people get attached to these numbers. Or at least they become heavily anchored to them.

A few kind words means the world to an early stage entrepreneur

Sometimes I'd get emails from people saying that they really like the app and that they're thankful that I created it. That stuff really meant so much to me, and made me so happy. I almost want to create some sort of effective altruist movement of doing things like that, given the amount of utility it produces.

Paying for people's meals doesn't seem to induce much reciprocation

A lot of times I meet with people and will pay for their meals, in hopes that they'll reciprocate and spend more effort trying to help me out. But I've found it to be incredibly ineffective.

Here's an extreme example. There was a point where I was pushing hard for people to sit with me and allow me to do user research with them. I would offer to buy someone lunch if they would do so. I posted this on Reddit, and one guy took me up on it. So we met for lunch when he was in town.

He ended up bringing along two friends. They were both only vaguely interested in poker. Not good.

During the lunch, the table was a bit too crowded to take out laptops and do the user research. That's another thing I've learned — don't plan on doing things that involve a computer over lunch. So anyway, we have lunch, we realize that the user research part isn't working out because there's not enough room, so we say we'll do it after we finish lunch. Then one guy says that they're actually super tired from being up all night last night, and maybe we could do the user research tomorrow, or via Skype when they get back home if tomorrow doesn't work out. I say sure.

I pay the bill for all four of us. It must have been about $80.

They're busy the next day. Afterwards, I text him/them a few times trying to set up a time to do the user research. Not responsive. Sometimes they straight up ignore me.

Later on when I launched, I emailed them. No response.

You'd expect a little bit more from a group who you treated to an $80 lunch. Maybe these guys were the exception, but I get the sense that they're more the norm.

Cialdini — are you part of the replication crisis now too?

ROI isn't enough, even for high stakes pros

I talked previously about how poker software is easily a +ROI investment, and how I've been surprised at how unwilling people are to see it that way. But I was particularly struck by the fact that high stakes poker pros wouldn't see it that way. I would have thought that such people would be logical enough to jump on +ROI opportunities.

Example: I was having a conversation with Cy where he was saying that $100/month is pretty expensive, and I said I don't think it is at all. Consider some quick math. Say a high stakes poker player plays 10,000 hands/month, and the software improves your winrate by 0.1bb/100 hands. That's 1bb/1,000 hands, 10bb/10,000 hands, or 10bb/month. If you're playing $25/50, which he and Brad play, that's $500/month. And those are pretty conservative assumptions. The software could easily improve your winrate by more than that. And online poker pros often play way more than 10,000 hands/month. But this logic did not change his perspective. He said that he just doesn't see it this way, and that he knows his other high stakes friends don't either.
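For what it's worth, here's that back-of-the-envelope math as a quick sketch (the numbers are the assumptions from the conversation above, not real data):

hands_per_month = 10_000
winrate_gain_bb_per_100 = 0.1   # assumed improvement: 0.1bb per 100 hands
big_blind = 50                  # $25/$50 stakes

extra_bb = hands_per_month / 100 * winrate_gain_bb_per_100   # 10 bb/month
extra_dollars = extra_bb * big_blind
print(extra_dollars)   # 500.0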

I deleted my backlog, and it turns out nothing terrible happened

I was inspired to delete my backlog by Jason Fried wants to delete my backlog. It's worked out pretty well for me! It alleviated some stress, and I didn't feel like I was missing something.

Getting people to do user research with is hard

I figured it wouldn't be that hard.

"Hey, let me sit down with you over lunch and watch you use my app. I'll pick up the bill!"

"Hey, let's Skype and let me watch you use my app. I'll coach you for free!"

I would think that stuff like this would easily get me dozens and dozens of volunteers pretty quickly, but it didn't work out like that for me. I found it hard to find people to do user research with.

E2E testing wasn't worth it for me

I've had a very love-hate relationship with e2e testing. It's so awesome when you have all your tests and can run them to make sure things work. But I really hate the framework I'm using, Nightwatch.js. It's slow and cumbersome. Maintaining the tests proved to be really time consuming. There were so many weird bugs. I ultimately just decided to stop maintaining and writing e2e tests, because I didn't feel that they were worth it, given how time consuming they were.

Long inferential distances are the realest thing in the world

One example is that I was talking to a professional poker player and coach. He didn't know how to read a 2d graph with x-y coordinates. I said "x-axis". He said, "what?".

This made me decide to change the text on my app to say "horizontal axis" instead of "x-axis".

He also struggled to understand that a point on the graph refers to a pair of data points. And to understand what the slope means. And how to calculate expected value.

He wasn't the only one. I have plenty of other examples of stuff like this.

I don't want to come across as being mean though. Just sayin'. I certainly have my own share of incompetencies.

I often felt a strong urge to get out of the house

Furthermore, for some reason, I felt an urge to get far away. There's this coffee shop I like to go to that is about an hour long bike ride away. I found myself wanting to go there a lot. This urge confused me, because the coffee shop isn't that great. Eventually I realized that I really liked the bike ride, and being far away from home. Maybe because it acted as a divider between "work day" and "relaxing time".

Other stuff

There are definitely things that I'm forgetting. Hopefully I'll add to this post as I remember them.



Discuss

Reflections on Premium Poker Tools: Part 3 - What I've learned

October 11, 2019 - 03:49
Published on October 11, 2019 12:49 AM UTC

Previous posts:

I finally made the decision to call it quits. Now I think it would be a good time to reflect on my experiences and see if I could learn something from them.

Market size

As I talked about in the previous post, initially, I thought that the market size was hundreds of thousands of users, and that I could make something like $100/user. After talking to people in the industry, I now believe that the market is more like 5-10k users, and not all of them are willing to pay $100.

This is incredibly important! Going after a $20M market is very different from going after a $200k one. If I knew it was the latter in the beginning, I wouldn't have pursued this as a business. Spending 2+ years for the chance at making maybe $200k just isn't worth it, given the inherent uncertainty with startups, and the alternatives of pursuing a startup with a higher upside, or getting a job that pays approximately that much but does so with 100% certainty.

So what happened? Where did I go wrong? Let's see. This was (roughly) my initial logic:

100k subscribers to the poker subreddit. Educational YouTube videos get 100k+ views. Popular posts on TwoPlusTwo (big poker forum) have 1M+ views over the years.

The kind of person to subscribe to the poker subreddit, watch an educational YouTube video, or spend their time on TwoPlusTwo is probably somewhat serious about poker. They're trying to get better at the game. All of this is indicative of a market size of hundreds of thousands of users. Possibly more. And poker is expanding. So there seems to be a big market here.

And poker players are probably pretty willing to spend money. Investing in software is +ROI. Poker players love ROI! And they tend to be on the wealthy side.

Maybe I really underestimated the divide between passive + free and active + paid. Watching a YouTube video is something that is passive. You sit there and consume information. Using poker software is something that is active. You have to sit and think and mess around with numbers. Watching a YouTube video is free. Poker software costs money.

But then what about the existence of poker books? There are around 500 poker books on the market. The top ones get up to 100k sales, and many others get in the tens of thousands, I think. Maybe it's that with books, someone is telling you what the right answers are, but with software, you have to figure out the right answers yourself.

Anyway, I think the bigger point is that I should have found people in the industry and asked them about the market size. I started doing that towards the end of my journey, but I should have done so from the beginning. People in the poker world all seem to have a pretty solid idea of what the market is really like. Why screw around trying to figure it out myself with these questionable proxies when I could just ask the people who actually know?

I really can't emphasize enough how huge this is. It would have only taken a few hours, and it would have saved me so much time.

So why didn't I go out and talk to people in the industry? I'm not quite sure. I think a part of it was that I didn't actually feel like I had to. It seemed pretty clear to me that the market was big, so I was more concerned with making the product awesome.

Another part of it is that I didn't see it as an option to talk to people in the industry. Because why would they want to talk to little ol' me? They're basically B-list celebrities, in some sense. They've written books. Tons and tons of people know them. Don't these people have hundreds of fan emails every day that they never respond to? That was my thinking in the beginning. Now, I've come to realize that they're just people, and that they're often happy to chat and provide advice.

It also would have been good if I had access to the actual financial data of my competitors. But none of them are public companies. Does that mean this isn't an option, or are there still ways?

I came across one cool approach last night. If you don't have access to their financial data, you could look at how many employees they have, and multiply by something like $125k or $200k. Something in that ballpark should give you an idea of their revenue. My competitors, from what I can tell, are all working solo. So that is a sign that they aren't making millions and millions of dollars. Not definitive, but definitely points in that direction.

Another interesting option is to actually call your competitors and talk to them! Eg. you could pretend to be a prospective employee, and in that conversation start asking about revenue and stuff. I'm not sure how I feel about that sort of deception though.

Here's a closing thought for this section: the world isn't that big. For the majority of my journey, my thoughts on the market size for poker has been, "I'm not sure exactly how big it is, but it's big enough." The world is just such a big place, and so many people play poker. The market just has to be huge. Now I realize that this isn't true.

"Just another month or two"

As I explain in the first post, for a very long time, I kept thinking to myself:

I'm not sure if I should really pursue this as a business or a long-term thing, but I do know that I want to finish up X, Y and Z. It'll only take a month or two, and I think there's a good chance that it finally gets me over the hump.

This just kept happening over, and over, and over again. It's crazy.

And each time it happened, it felt like this was the time it really would only be another month or two. I was wrong last time, and the time before, and the time before, and the time before... but this time... this time I'll be right.

Wow. Articulating it like that really helps put things in perspective. I need to diagnose myself with a chronic case of the planning fallacy. I have a tendency to be overconfident, so I need to do a better job of adjusting in the opposite direction and being less confident. That's what you have to do with known biases: try to adjust in the other direction.

And with the planning fallacy in particular, there's a known cure: the outside view. Don't try to reason from the ground up. Look at how long similar things have taken in the past. Maybe use the reference class of "times I've thought it'd only be another month or two".

Man, that makes me laugh. "Times I've thought it'd only be another month or two." Ha! That reference class is full of miscalculations, so it's pretty clear that I need to adjust in the other direction pretty hard.

I say all of this stuff, but I still worry that I'm going to make the same mistakes again.

It always takes longer than you expect, even when you take into account Hofstadter's Law.

Agility

Check out this excerpt from the first post in this series:

And I have this little voice in my head saying:

Hey Adam... it's been over a year... you don't have any users. This like totally goes against the whole lean startup thing, y'know.

And then I say in response:

I know, I know! But I really question whether the lean startup applies to this particular scenario. I know you want to avoid wasting time building things that people don't actually want. That's like the biggest startup sin there is. I know. But like, what am I supposed to do?

My hypothesis is that once my app gets to the point where it's in the ballpark of Flopzilla or better, that people will want to use it. It takes time to get to that point. There isn't really a quick two week MVP I could build to test that hypothesis. I'm already trying to avoid building non-essential features, and focus on getting to that point as quickly as possible. So what am I supposed to do?

If I released this and found that I had no users and no one wants this, what would that tell me? Just that people don't like this version of the app enough. Sure, ok. But the real question is whether they'll like the Ballpark Flopzilla version enough, or the Better Than Flopzilla version enough. My hypothesis is that they will, and releasing it now wouldn't invalidate that hypothesis. And I can't think of a way to test those hypotheses quickly. I think I just need to build it and see what happens.

I'm going to resist the temptation to respond to this right now. Right now I just want to tell the story. The story as it actually happened. But I do want to say that there were a lot of voices swimming around in my head questioning what I was doing.

I said in that post that I'm going to resist the temptation to respond to it. Here's where I do get to respond to it.

Here are my general thoughts about the whole lean startup thing:

1. I think the essence of it is to ruthlessly avoid spending time on things unnecessarily. Eg. you don't need to spend a week building a fancy navigation menu before you even know whether people want your product. I heartily, heartily support that message.

2. It can be tempting to think that there are no quick experiments you can run. This is usually wrong. Think more carefully, try to isolate the component assumptions, and get creative.

I definitely could have done a better job with (2).

3. But sometimes there truly aren't any quick experiments you can run. Eg. SpaceX. How is SpaceX supposed to build a quick MVP? (Well, there are some things they could do, but I don't want to get distracted from the central point that some hypotheses inherently just take a long time to validate.)

Now that I'm finished with Premium Poker Tools and am reflecting on what I've learned, it's time to add a fourth general thought.

4. If you are in a situation where (3) applies and are going to spend... I don't know... two years and three months testing a hypothesis... then the upside damn well better be worth it!

Pretty obvious. If the risk is big, the reward needs to also be big.

With Premium Poker Tools, the reward was never big enough. It just wasn't. This is no SpaceX.

Even if my initial ideas about the market size were correct, and there was the potential to make $10-20M, that still isn't enough to justify spending so long testing a hypothesis.

However, I don't think it's that simple. This issue is very tangled up with the planning fallacy stuff I talked about in the "Just another month or two" section. I never actually decided to spend 2+ years on this.

Show me the money

https://www.youtube.com/watch?v=FFrag8ll85w (I had to)

I have another objection to the above section. I wasn't testing a hypothesis the whole time. No. About a year into it, I already had market validation. That's right. I already had validation.

I'm not sure when exactly I would say it occurred, but a big thing is when I posted to Reddit and someone asked where they can donate. Another big thing is seeing one of the most popular poker players in the world using my app. Another was having a bunch of people tell me that they would pay for it. Another is having random people email me thanking me for building the app and telling me how great it is.

Now, I know they say talk is cheap. I know how startup people always want to see traction. Users. Money. Growth. I didn't have any of that. But I did have all of the other stuff in the above paragraph! People said so many nice things to me. Sure, conventional wisdom might say that the above paragraph isn't enough, and that you need real traction...

But I'm a Bayesian! I'm better than that! Saying that you need actual traction is like how the scientific community waits too long before accepting something as true. Being a Bayesian, I can update in inches. I can be faster than science. I can be faster than conventional startup wisdom.

Well, that's how I used to think anyways. Now I run around my apartment with my fingers in my ears yelling SHOW ME THE MONEY!!!!!!!!!

I'm exaggerating of course. I don't actually do that. And I don't actually think you should ignore everything except "actual results". No, I'm still a Bayesian, and Bayesians don't throw away evidence. But given my experiences, I've come to believe that such evidence isn't nearly as strong as I had previously thought.

Deals fall through

This is similar to the above section. In the above section, I'm basically talking about how when people say "This app is awesome! I would pay for it!" it doesn't actually mean that much. This section is going to talk about how when potential business partners say "This is really interesting! Let's talk more!" or even "I'm in!", it doesn't actually mean that much.

You can read more about it in part one, but I've just had so many deals and stuff fall through. A lot of people were telling me that they were interested in working with me. Some even said that they would work with me. A verbal "yes". I thought to myself:

Ok, great! This is pretty strong Bayesian evidence right here. I wouldn't have so many people saying this stuff if they didn't mean it. Sure, maybe a few fall through, but not everyone.

And also, I haven't even put myself out there too much. Just to throw out a number, maybe I've talked to 10% of the potential people who I could partner with. If I've gotten this much interest from the 10%, once I go after the 90%, I should end up with a good amount of partnerships.

Given my experiences, I've come to believe that verbal interest just isn't that telling. And this seems to mirror the experiences of others as well.

I don't want to say that verbal interest means nothing though. Honestly, I probably went too far in the above paragraph saying that it "just isn't that telling". I still have limited experience, and I'm not sure how strongly to weigh it as evidence.

But one thing does seem pretty clear: the lack of actual traction is stronger evidence than verbal interest. Eg. for me, I had a lot of people saying they're interested in partnering with me, but no one actually following through. I think the lack of follow through is stronger evidence than the verbal interest. Similarly, I had a lot of people saying they really like the app and stuff, but even when it was free I was only getting maybe 100 users/month, and they weren't spending that much time on the app. I should have paid more attention to that.

Build it and they'll come

I've always had a perspective that goes something like this:

My app is at least in the same ballpark as Flopzilla. I feel pretty confident that it's a little bit better, actually. So then, I would think I should at least get 10-20% of the market, if not more.

Yes, I know they were the first mover and have the brand recognition, but if my product is in the ballpark, I should still make a dent. I release it, people hear about it, some people like it and start using it, they tell their friends, people link to it in forums, put screenshots in blog posts, etc. I would think that if my product is as good as the market leader, through a process something like what I just described, I would get my slice of the pie.

Furthermore, I would expect my slice to be proportionate to the quality of my product. If the product is a little worse than the competitors, maybe my slice is 5%. If it's a little better, maybe I get 30-40%. If it's way better, maybe it's 75%.

And maybe that perspective is too optimistic. That's certainly possible. But it can't be too off the mark, right? Maybe if the product is a little worse I end up with 1% instead of 5%. Maybe if it's only a little better I get 5-10% instead of 30-40%. Maybe if it's way better I get 50% instead of 75%.

Although it's still a little counterintuitive to me, I have to say that my perspective is different now. The perspective I described above is some version of "build it and they'll come" (BIATC). I now think that BIATC is pretty wrong.

I'm still not quite sure what the mechanism is, though. I suppose that it takes a lot to actually get word of mouth to happen. I suppose people don't actually do that much comparison shopping, and instead lean heavily towards things that are popular, have social proof, and that they stumble across organically, eg. in blogs.

I feel like I may have overestimated BIATC due to the stuff I hear from YC folks. They talk a lot about focusing heavily on the product, as opposed to marketing it. I at least get the impression from them that making something people want is what it's all about, and that if you manage to do so, you'll have success. Here's Paul Graham in How to Start a Startup:

It's not just startups that have to worry about this. I think most businesses that fail do it because they don't give customers what they want. Look at restaurants. A large percentage fail, about a quarter in the first year. But can you think of one restaurant that had really good food and went out of business?

Restaurants with great food seem to prosper no matter what. A restaurant with great food can be expensive, crowded, noisy, dingy, out of the way, and even have bad service, and people will keep coming. It's true that a restaurant with mediocre food can sometimes attract customers through gimmicks. But that approach is very risky. It's more straightforward just to make the food good.

Using the example of restaurants, I actually can think of a lot of restaurants with great food that don't do so well. Hole-in-the-wall-type places with awesome food, but that are never too busy. And on the other hand, I can think of a lot of trendier restaurants that have terrible food but a lot of customers.

Still, I don't want to completely discount BIATC. I think it can be true in some situations. I just think those situations have to be pretty extreme. You need to 10x your competition. You need to be solving a hair on fire problem. You need to be building a painkiller, not a vitamin. You need your users to really, really love you. When you totally blow the competition away, or when you truly solve an important problem that hasn't been solved yet, yeah, when you build it, they'll come. Maybe that's what YC is trying to convey.

Customer acquisition is hard

Of course, I didn't actually just build my app and sit there expecting people to come to me. I did try to acquire customers.

It just didn't work. Despite the fact that I have a product that people say all of these nice things about, I only managed to get, let's say three paid users and 100 free trial sign ups. That amount of traction seems very much not in line with the quality of product I have, which makes me think that customer acquisition is very hard. Or maybe just that I'm still bad at it.

Here's what I tried, how it went, and what I learned:

Affiliate partnerships

This has always been Plan A. Find some people who already have huge followings eg. on YouTube, and piggyback off of that by offering them revenue share. Should be pretty simple. It's free money for them, and the product is good, so why wouldn't they want to do it? Especially when I ended up offering them 50%.

Well, I'm still not quite sure what the answer to that is.

Maybe they know how small the market is and how little money they'd make? Perhaps. But still, it's so low effort for them to throw a link in the description. And they're often already using poker software in their posts/books/videos, so it isn't any extra effort for them to use mine. In fact, mine is a better fit because mine is the only web app, and they could link to simulation results.

In talking to them, the response I usually get is that they've been meaning to check the app out but just haven't had time. This makes no sense to me because as poker professionals, I'd think that they are already spending time studying with software, so why not substitute mine in? Maybe they don't actually study? Maybe they don't want to spend the time messing around with a product they're unfamiliar with? Who knows.

My takeaway here is that if the deal you're offering people is only marginally beneficial, you might just end up with no partners.

YouTube and blogging

I think I have produced solid content with my YouTube channel and blog. People have told me that. But I've gotten only a minuscule number of hits.

Again, my initial thinking was what I described in BIATC. That the amount of hits I get should be at least roughly proportional to the quality of my content.

Nope! That didn't happen! In retrospect, this makes sense. How are people supposed to discover your YouTube videos? YouTube recommends videos that are already popular, and chooses the videos that are already popular to put high in their search results. Similar with blogging. It's a chicken-egg problem that I have yet to figure out.

Paid advertising

There are a ton of huge companies built off of ad revenue: Google, Facebook, Instagram, Twitter, Reddit, etc. So then, that gives me the impression that paid ads are a huge thing that everyone is doing. And if they're doing it, they're doing it because it works for them, presumably. So paid ads have always been something that I assumed to work well.

Nope! The first place I learned this was in Julian Shapiro's guide. He said something that made it actually seem pretty clear in retrospect. If you were able to get an ad channel working profitably for you, you'd just be able to scale up with that channel, acquiring customers profitably, and making a ton of money. If every company were able to do that, entrepreneurship would be pretty easy.

Since reading his stuff, I started to hear other people say the same things, that it's really hard to do profitably and most companies never manage to do so. I still decided to give paid ads a shot, because it's a low risk high reward thing, but it turned out not to work for me.
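
To make Shapiro's point concrete, here's a minimal sketch of the unit economics. All of the spend, acquisition-cost, and lifetime-value numbers below are made up for illustration, not figures from my own campaigns:

# Toy unit-economics sketch: an ad channel is worth scaling only if a customer
# costs less to acquire than they're worth. All numbers are hypothetical.

def ad_channel_profit(ad_spend, cost_per_acquisition, lifetime_value):
    customers_acquired = ad_spend / cost_per_acquisition
    return customers_acquired * lifetime_value - ad_spend

# Profitable channel: every $40 of ads buys a customer worth $100.
print(ad_channel_profit(10_000, 40, 100))   # 15000.0 -- keep pouring money in

# Unprofitable channel: customers cost $150 but are only worth $100.
print(ad_channel_profit(10_000, 150, 100))  # about -3333 -- scaling just loses more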

Direct sales

I spent a pretty good amount of time doing direct sales. Some in person networking at a poker meetup, playing poker at the casinos, meeting people in coffee shops, Ubers, etc. Then I also spent some time DMing people on poker forums, and emailing poker players and coaches. I didn't go too crazy because I don't want to be spammy, but I definitely did some.

None of it really worked though. I think my takeaway mirrors the BIATC stuff, where you need to have something really valuable to actually get people's attention.

I again found this surprising. I figured that when you actually are in a conversation with someone, they'd sort of give you the benefit of the doubt. But no, I didn't find that to be true.

Refer a friend

There was a point where I was offering $20 for every friend you refer. Pretty good, right?! Seemed pretty generous to me, and like something people would want to take advantage of. But no, that didn't happen. Still confusing to me, but I guess the lesson again is that it really takes a lot to get people's attention.

Sharable simulation links

(I talk more about this in this post.)

Poker players spend a lot of time in forums discussing hands. They'll say things like, "you only have 33% equity, so you should fold". My app lets you link to a simulation that shows that you only have 33% equity. For non-users, it's in read-only mode. Users can click the link and play around with the assumptions.

I thought this would be huge. Making it easier for people to discuss hands in the forums. But no, it did nothing. I think a big part of it is that most people in the forums approach it very casually and don't want to spend a lot of time on their comments. They just want to add their two cents, and then leave.
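
As an aside for readers who don't play poker, here's roughly the arithmetic behind a forum comment like "you only have 33% equity, so you should fold". The pot and bet sizes below are hypothetical, and a real tool also has to estimate the equity number itself via simulation:

# Pot-odds sketch: calling is only profitable if your equity (your average
# share of the final pot) covers the price of the call. Numbers are made up.

def call_ev(equity, pot_before_bet, bet):
    # Win the existing pot plus the opponent's bet with probability `equity`,
    # otherwise lose your own call of the same size.
    return equity * (pot_before_bet + bet) - (1 - equity) * bet

print(call_ev(0.33, 100, 150))  # about -18: facing a $150 bet into $100, folding is better
print(call_ev(0.33, 100, 50))   # about +16: facing a $50 bet, the same 33% equity justifies a call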

Free

The app was even free for a long period of time. I would have thought that this would attract a ton of users, but no, that didn't happen. More evidence that it takes a lot to actually get people to look in your direction, and that customer acquisition is hard.

People are lazy and irrational

Here's what I mean. It's really true that if you're a poker player and are actually trying to get better, you should have some sort of poker software to study with. Coaches and professionals say this all the time. But people are still too lazy to do so.

Hell, they are just too lazy to study in general. I've said this before, but they prefer passive things. Watching a video and having someone "tell you the answers", even though that sort of passive study never works, regardless of the field. Eg. with math, you have to actually do a lot of practice problems yourself to get it to stick.

I've also heard a lot of coaches complain about students who pay them $100+/hr not doing the homework they are assigned. The coaches beg and plead, but the students just don't want to do the work.

That's the lazy part. I guess the irrational part plays into that, but what I really had in mind is that poker software is pretty easily a +ROI investment, but people still don't care. And if poker players aren't persuaded by +ROI investments, then I'm not sure what other demographic would be persuaded.
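
Here's the kind of back-of-the-envelope math I have in mind when I say it's +ROI. Every number below is made up purely for illustration; the point is just the shape of the calculation:

# Hypothetical ROI of study software for a reasonably serious live player.

software_cost_per_year = 100        # dollars
hands_per_year = 50_000
winrate_gain_bb_per_100 = 1.0       # big blinds per 100 hands gained from better study
big_blind = 2                       # a $1/$2 game

extra_winnings = (hands_per_year / 100) * winrate_gain_bb_per_100 * big_blind
print(extra_winnings)                            # 1000.0 dollars per year
print(extra_winnings / software_cost_per_year)   # 10.0x return on the subscription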

Everything else

There are definitely things that I'm forgetting. Hopefully I'll add to this post as I remember them.

If any readers out there have any advice, I'm all ears! I want to try to learn as much as I can from this.



Discuss

How feasible is long-range forecasting?

October 11, 2019 - 01:11
Published on October 10, 2019 10:11 PM UTC

Lukeprog posted this today on the blog at OpenPhil. I've quoted the opening section, and footnote 17 which has an interesting graph I haven't seen before.

How accurate do long-range (≥10yr) forecasts tend to be, and how much should we rely on them?

As an initial exploration of this question, I sought to study the track record of long-range forecasting exercises from the past. Unfortunately, my key finding so far is that it is difficult to learn much of value from those exercises, for the following reasons:

  1. Long-range forecasts are often stated too imprecisely to be judged for accuracy.
  2. Even if a forecast is stated precisely, it might be difficult to find the information needed to check the forecast for accuracy.
  3. Degrees of confidence for long-range forecasts are rarely quantified.
  4. In most cases, no comparison to a “baseline method” or “null model” is possible, which makes it difficult to assess how easy or difficult the original forecasts were.
  5. Incentives for forecaster accuracy are usually unclear or weak.
  6. Very few studies have been designed so as to allow confident inference about which factors contributed to forecasting accuracy.
  7. It’s difficult to know how comparable past forecasting exercises are to the forecasting we do for grantmaking purposes, e.g. because the forecasts we make are of a different type, and because the forecasting training and methods we use are different.

Despite this, I think we can learn a little from GJP about the feasibility of long-range forecasting. Good Judgment Project’s Year 4 annual report to IARPA (unpublished), titled “Exploring the Optimal Forecasting Frontier,” examines forecasting accuracy as a function of forecasting horizon in this figure (reproduced with permission):

This chart uses an accuracy statistic known as AUC/ROC (see Steyvers et al. 2014) to represent the accuracy of binary, non-conditional forecasts, at different time horizons, throughout years 2-4 of GJP. Roughly speaking, this chart addresses the question: “At different forecasting horizons, how often (on average) were forecasters on ‘the right side of maybe’ (i.e. above 50% confidence in the binary option that turned out to be correct), where 0.5 represents ‘no better than chance’ and 1 represents ‘always on the right side of maybe’?”
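
As a rough illustration of that "right side of maybe" idea, here's a toy calculation (a simplification of the AUC statistic used in the report, on made-up forecasts rather than GJP data):

# Fraction of binary forecasts that put more than 50% on the option that
# actually happened: 0.5 is chance level, 1.0 is always on the right side.

def right_side_of_maybe(forecasts, outcomes):
    # forecasts: probabilities assigned to "yes"; outcomes: 1 if "yes" happened, else 0.
    hits = sum((p > 0.5) == (y == 1) for p, y in zip(forecasts, outcomes))
    return hits / len(forecasts)

forecasts = [0.8, 0.3, 0.6, 0.9, 0.4]   # made-up probabilities
outcomes  = [1,   0,   0,   1,   1]     # made-up resolutions
print(right_side_of_maybe(forecasts, outcomes))  # 0.6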

For our purposes here, the key results shown above are, roughly speaking, that (1) regular forecasters did approximately no better than chance on this metric at ~375 days before each question closed, (2) superforecasters did substantially better than chance on this metric at ~375 days before each question closed, (3) both regular forecasters and superforecasters were almost always “on the right side of maybe” immediately before each question closed, and (4) superforecasters were roughly as accurate on this metric at ~125 days before each question closed as they were at ~375 days before each question closed.

If GJP had involved questions with substantially longer time horizons, how quickly would superforecaster accuracy have declined? We can’t know, but an extrapolation of the results above is at least compatible with an answer of “fairly slowly.”

I'd be interested to hear others' thoughts on the general question, and any opinions on the linked piece.



Discuss

The Baconian Method (Novum Organum Book 1: 93-107)

October 10, 2019 - 21:54
Published on October 10, 2019 6:54 PM UTC

This is the seventh post in the Novum Organum sequence. For context, see the sequence introduction.

In this section, Bacon lists reasons why we should believe much greater progress in science is possible, and in doing so begins to describe his own inductivist methodology in detail.

We have used Francis Bacon's Novum Organum in the version presented at www.earlymoderntexts.com. Translated by and copyright to Jonathan Bennett. Prepared for LessWrong by Ruby.

Ruby's Reading Guide

Novum Organum is organized as two books each containing numbered "aphorisms." These vary in length from three lines to sixteen pages. Bracketed titles of posts in this sequence, e.g. Idols of the Mind Pt. 1, are my own and do not appear in the original.

While the translator, Bennett, encloses his editorial remarks in a single pair of [brackets], I have enclosed mine in a [[double pair of brackets]].

Bennett's Reading Guide

[Brackets] enclose editorial explanations. Small ·dots· enclose material that has been added, but can be read as though it were part of the original text. Occasional •bullets, and also indenting of passages that are not quotations, are meant as aids to grasping the structure of a sentence or a thought. Every four-point ellipsis . . . . indicates the omission of a brief passage that seems to present more difficulty than it is worth. Longer omissions are reported between brackets in normal-sized type.

Aphorism Concerning the Interpretation of Nature: Book 1: 93–107

by Francis Bacon

93. We have to assume that the force behind everything is God; for our subject matter—·namely nature·—is good in such a way that it plainly comes from God, who is the author of good and the father of light. Now in divine operations even the smallest beginnings lead unstoppably to their end. It was said of spiritual things that ‘The kingdom of God cometh not with observation’ [Luke 17:20], and it is the same with all the greater works of divine providence: everything glides on smoothly and noiselessly, and the work is well under way before men are aware that it has begun. And don’t forget Daniel’s prophecy concerning the last ages of the world: ‘Many shall run to and fro, and knowledge shall be increased’ [Daniel 12:4], clearly indicating that the thorough exploration of the whole world is fated to coincide with the advancement of the sciences. (By ‘fated’ I mean ‘destined by ·God’s· providence’. I would add that there have been so many distant voyages that ‘the thorough exploration of the whole world’ seems to have reached completion or to be well on the way to it.)

94. Next topic: the best of all reasons for having hope, namely the errors of the past, the wrong roads so far taken. In the course of censuring a poorly run government the critic said something excellent:

The worst things in the past ought to be regarded as the best for the future. For if you had conducted yourself perfectly yet still ended up in your present ·miserable· condition, you would have not even a hope of improvement. But as things stand, with your misfortunes being due not to the circumstances but to your own errors, you can hope that by abandoning or correcting these errors you can make a great change for the better.

Similarly, if throughout many years men had gone the right way about discovering and cultivating the sciences, and the sciences had still been in the state they are now actually in, it would have been absurdly bold to think that further progress was possible. But if the wrong road has been taken, and men have worked on things that weren’t worthwhile, it follows that the troubles have arisen not from •circumstances that weren’t in our power but from •the human intellect—and the use and application of that can be remedied. So it will be really useful to expound these errors; because every harm they have done in the past gives us reason to hope to do better in the future. I have already said a little about these errors, but I think I should set them out here in plain and simple words.

95. Those who have been engaged in the sciences divide into experimenters and theorists. The experimenters, like •ants, merely collect and use ·particular facts·; the theorists, like •spiders, make webs out of themselves. But the •bee takes a middle course: it gathers its material from the flowers of the garden and the field, but uses its own powers to transform and absorb this material. A true worker at philosophy is like that:

  • he doesn’t rely solely or chiefly on the powers of the mind ·like a theorist = spider·, and
  • he doesn’t take the material that he gathers from natural history and physical experiments and store it up in his memory just as he finds it ·like an experimenter = ant·. Instead,
  • he stores the material in his intellect, altered and brought under control.

So there is much to hope for from a closer and purer collaboration between these two strands in science, experimental and theoretical—a collaboration that has never occurred before now.

96. We have never yet had a natural philosophy that was pure. What we have had has always been tainted and spoiled: in Aristotle’s school by logic; in Plato’s by natural theology; in the second school of Platonists (Proclus and others) by mathematics, which ought only to set natural philosophy’s limits, not generate it or give it birth. From a pure and unmixed natural philosophy we can hope for better things ·than can be expected from any of those impure systems·.

97. No-one has yet been found who was sufficiently firm of mind and purpose to decide on and to carry out this programme:

Clean right out all theories and common notions, and apply the intellect—thus scrubbed clean and evenly balanced—to a fresh examination of particulars.

[[By particulars, Bacon likely means something close to specific individual data points and observations.]]

For want of this, the human knowledge that we have is a mish-mash, composed of •childish notions that we took in along with our mothers’ milk, together with •·the results of· much credulity and many stray happenings. So if someone of mature years, with functioning senses and a well-purged mind, makes a fresh start on examining experience and particular events, better things may be hoped for from him. In this respect, I pledge myself to have good fortune like that of Alexander the Great. Don’t accuse me of vanity until you have heard me out, because what I am getting at—taken as a whole—goes against vanity. Aeschines said of Alexander and his deeds: ‘Assuredly we don’t live the life of mortal men. What we were born for was that in after ages wonders might be told of us’, as though Alexander’s deeds seemed to him miraculous. But ·what I am saying about myself is not like that, but rather like this·: in the next age Livy took a better and a deeper view of the matter, saying of Alexander that ‘all he did was to have the courage to neglect sources of fear that were negligible’. I think that a similar judgment may be passed on me in future ages: that I did no great things, but simply cut down to size things that had been regarded as great. . . .

98. We can’t do without experience; but so far we haven’t had any foundations for experience, or only very weak ones. No-one has searched out and stored up a great mass of particular events that is adequate

  • in number,
  • in kind,
  • in certainty, or
  • in any other way

to inform the intellect. On the contrary, learned men— relaxed and idle—have accepted, as having the weight of legitimate evidence for constructing or confirming their philosophy, bits of hearsay and rumours about experience. Think of a kingdom or state that manages its affairs on the basis not of •letters and reports from ambassadors and trustworthy messengers but of •street-gossip and the gutter! Well, the way philosophy has managed its relations with experience has been exactly like that.

  • Nothing examined in enough careful detail,
  • nothing verified,
  • nothing counted,
  • nothing weighed,
  • nothing measured

is to be found in natural history. And observations that are loose and unsystematic lead to ideas that are deceptive and treacherous. Perhaps you think that this is a strange thing to say. You may want to comment:

Your complaint is unfair. Aristotle—a great man, supported by the wealth of a great king—composed an accurate natural history of animals; and others, with greater diligence though making less fuss about it, made many additions; while yet others compiled rich histories and descriptions of metals, plants, and fossils.

If so, it seems that you haven’t properly grasped what I am saying here. For the rationale of a •natural history that is composed for its own sake is not like the rationale of a •natural history that is collected to supply the intellect with the concepts it needs for building up philosophy. They differ in many ways, but especially in this: the former attends only to the variety of natural species ·as they are found in nature·, not to ·deliberately constructed· experiments in the mechanical arts. In the business of life, the best way to discover a man’s character, the secrets of how his mind works, is to see how he handles trouble. In just the same way, nature’s secrets come to light better when she is artificially shaken up than when she goes her own way. So we can hope for good things from natural philosophy when natural history—which is its ground-floor and foundation—is better organized. Then, but not until then!

99. Furthermore, even when there are plenty of mechanical experiments, there’s a great scarcity of ones that do much to enlarge the mind’s stock of concepts. The experimental technician isn’t concerned with discovering the truth, and isn’t willing to raise his mind or stretch out his hand for anything that doesn’t bear on his ·practical· project. There will be grounds for hope of scientific advances when ·and only when· men assemble a good number of natural-history experiments that •are in themselves of no ·practical· use but simply •serve to discover causes and axioms. I call these ‘experiments of light’, to distinguish them from the ·practically useful but theoretically sterile· ones that I call ‘experiments of fruit’ [here ‘fruit’ = ‘practical results’]. Now, experiments of this kind have one admirable property: they never miss or fail! Their aim is not to •produce some particular effect but only to •discover the natural cause of something; and such an experiment succeeds equally well however it turns out, for either way it settles the question.

100. Many more experiments should be devised and carried out, and ones of an utterly different kind from any we have had up to now. But that is not all. There should also be introduced an entirely different method, order, and procedure for carrying through a programme of experiments. To repeat something I have already said [82]: when experimentation wanders around of its own accord, it merely gropes in the dark and confuses men rather than instructing them. But when there is a firmly regulated, uninterrupted series of experiments, there is hope for advances in knowledge.

101. Even after we have acquired and have ready at hand a store of natural history and experimental results such as is required for the work of the intellect, or of philosophy, still that is not enough. The intellect is far from being able to retain all this material in memory and recall it at will, any more than a man could keep a diary all in his head. Yet until now there has been more thinking than writing about discovery procedures—experimentation hasn’t yet become literate! But a discovery isn’t worth much if it isn’t ·planned and reported· in writing; and when this becomes the standard practice, better things can be hoped for from experimental procedures that have at last been made literate.

102. The particulars ·that have to be studied· are very numerous, and are like an army that is dispersed across a wide terrain, threatening to scatter and bewilder the intellect ·that tries to engage with them·. There’s not much to be hoped for from intellectual skirmishing ·with these particulars·, dashing here and there among them in a disorderly way. What is needed is first •to get the relevant particulars drawn up and arranged, doing this by means of tables of discovery that are well selected, well arranged, and fresh (as though living); and •to put the mind to work on the prepared and arranged helps that these tables provide.

[[By axiom, Bacon means something akin to hypothesis or model.]]

103. But after this store of particulars has been laid before our eyes in an orderly way, we shouldn’t pass straight on to the investigation and discovery of new particulars or new discoveries; or anyway if we do do that we oughtn’t to stop there. I don’t deny that when all the experiments of all the arts have been collected and ordered and brought within the knowledge and judgment of one man, new useful things may be discovered through taking the experimental results of one art and re-applying them to a different art (using the approach to experiments that I have called ‘literate’, ·meaning that the results are properly recorded in writing·). But nothing much can be hoped for from that procedure. Much more promising is this: from those particular results derive axioms in a methodical manner, then let the light of the axioms point the way to new particulars. For our road does not lie on a level, but goes up and down—up to axioms, then down again to scientific practice.

[[For a modern plain English description of Bacon's method see: 1, 2, 3.

A concrete example of what Bacon is discussing might be as follows:

Particular: you observe that both parents of sparrows care for their young*.
Highly-General Axiom/Hypothesis: both sexes of all bird species care for their young.
Medium-General Axiom/Hypothesis: both sexes of small birds care for their young.
Narrow Axiom/Hypothesis: some of both sexes of sparrows living in South England care for their young.

Aristotle might start with a few observations or a folk belief that some birds of both sexes care for their young and then formulate a universal truth: For all X such that X is a bird, it cares for its young. By syllogism, Aristotle will derive new particular cases: Robins are a kind of bird, therefore both sexes of Robins care for their young. This is syllogistic demonstration.

Bacon states that the Aristotelian approach is utterly invalid and that instead one must only generalize modestly from observations, using each expansion of the generalization to seek out further evidence which will either confirm or deny further expansion.

*This is a fictitious example.]]
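
[[A loose computational caricature of the contrast, if it helps. The observations and categories below are simplified and partly fictitious, like the sparrow example above:

# Toy contrast between the syllogistic leap and Bacon's gradual induction.
# The data here are invented for illustration only.

observations = {
    # species: (category, both_sexes_care_for_young)
    "sparrow (South England)": ("small bird", True),
    "robin":                   ("small bird", True),
    "wren":                    ("small bird", True),
    "mallard duck":            ("large bird", False),  # only the female incubates
}

# Aristotelian move: leap from a few cases straight to the widest axiom,
# then derive particulars from it by syllogism.
widest_axiom = "both sexes of ALL birds care for their young"  # falsified by the duck

# Baconian ladder: widen the claim one rung at a time, keeping a rung only if
# every observation within its scope confirms it.
def supported(scope, data):
    cases = [cares for category, cares in data.values()
             if scope == "all birds" or category == scope]
    return len(cases) > 0 and all(cases)

for scope in ["small bird", "all birds"]:   # lower axiom, then the most general one
    print(scope, "->", "keep" if supported(scope, observations) else "reject")
# small bird -> keep      (the narrower, middle axiom survives)
# all birds  -> reject    (the leap to the widest axiom does not)

]]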

104. But the intellect mustn’t be allowed •to jump—to fly—from particulars a long way up to axioms that are of almost the highest generality (such as the so-called ‘first principles’ of arts and of things) and then on the basis of them (taken as unshakable truths) •to ‘prove’ and thus secure middle axioms. That has been the practice up to now, because the intellect has a natural impetus to do that and has for many years been trained and habituated in doing it by the use of syllogistic demonstration. Our only hope for good results in the sciences is for us to proceed thus: using a valid ladder, we move up gradually—not in leaps and bounds—from particulars to lower axioms, then to middle axioms, then up and up until at last we reach the most general axioms. ·The two ends of this ladder are relatively unimportant· because the lowest axioms are not much different from ·reports on· bare experience, while the highest and most general ones—or anyway the ones that we have now—are notional and abstract and without solid content. It’s the middle axioms that are true and solid and alive; they are the ones on which the affairs and fortunes of men depend. Above them are the most general axioms, ·which also have value, but· I am talking not about abstract axioms but rather about ones of which the middle axioms are limitations ·and which thus get content from the middle axioms·. So the human intellect should be •supplied not with wings but rather •weighed down with lead, to keep it from leaping and flying. This hasn’t ever been done; when it is done we’ll be entitled to better hopes of the sciences.

105. For establishing axioms we have to devise a different form of induction from any that has been used up to now, and it should be used for proving and discovering not only so-called ‘first principles’ but also the lesser middle axioms—indeed all axioms. The induction that proceeds by simply listing positive instances is a childish affair; its conclusions are precarious and exposed to peril from a contradictory instance; and it generally reaches its conclusions on the basis of too few facts—merely the ones that happen to be easily available. A form of induction that will be useful for discovery and demonstration in the sciences and the arts will have •to separate out a nature through appropriate rejections and exclusions, and then, after a sufficient number of negatives, •to reach a conclusion on the affirmative instances. [Bacon will start to explain this in 2-15.] No-one has ever done this, or even tried to, except for Plato who does indeed make some use of this form of induction for the purpose of discussing definitions and ideas. But for this kind of induction (or demonstration) to be properly equipped for its work, many things have to be done that until now no mortal has given a thought to; so that much more work will have to be spent on this than has ever been spent on the syllogism. And this induction should be used not only in the discovery of axioms but also in drawing boundaries around notions. It is in this induction that our chief hope lies.

[[Here Bacon again mentions the importance of Looking Into the Dark.]]

106. When establishing an axiom by this kind of induction, we must carefully note whether the axiom is shaped so as to fit only the particulars from which it is derived, rather than being larger and wider. And if it is larger and wider, we must see whether its greater scope is confirmed and justified by new particulars that it leads us to. Such a justified increase of scope saves us from being stuck with things that are already known (but if it isn’t justified then we are over-stretching, loosely grasping at shadows and abstract forms rather than at solid things in the world of matter). When we do things in this way we shall at last have justified hope.

107. At this point I should remind you of what I said earlier [80] about extending the range of natural philosophy so that the particular sciences can be grounded in it, and the branches of knowledge don’t get lopped off from the trunk. For without that there will be little hope of progress.



Discuss

"Mild Hallucination" Test

October 10, 2019 - 20:57
Published on October 10, 2019 5:57 PM UTC

In Scott Alexander's Lots of People Going Around with Mild Hallucinations All the Time, he shows that several people not currently on LSD still experience mild hallucinations commonly associated with taking LSD.

I would like to test whether I can teach you to see these mild hallucinations, regardless of your experience with psychedelics. Below are 3 tests that should each take 1-2 minutes to complete. If you choose to complete 1 or more of these, please comment with both failed and successful attempts. Please also comment if you can already see some of these, even if it seems obvious to you.

Test 1: Visual Snow

Description: See the Visual Snow Wiki for a nice visualization on the top-right. I would describe it as "jumpy spiderwebs made out of light", similar in feel to the "black stars" people see when feeling faint (when they get up too quick).

I would say it's NOT the same experience as mental imagination or eye floaters.

Test: For 1 minute (click here for a 1 minute timer), close your eyes and try to see the back of your eyelids using your peripheral vision. If a minute elapses with nothing resembling "visual snow", then it's a failure.

If it's a success, then try to see visual snow with your eyes open, again for 1 minute at most.

Test 2: Afterimage Around Objects

Description: It's similar in feel to the image on the right in the afterimage wiki. Similar to seeing a bright light and still seeing it in your vision after you look away.

Test: For 2 minutes max (click here for a 2 minute timer), find a brightly colored object that's against a different flat colored background (a red towel hanging in front of a light tan wall, your face in the mirror in front of a white door, etc), and just stare at the object using your peripheral vision. Don't shift your eyes, just pick a spot and focus on your peripheral vision. If you don't see a colored afterimage of the object around parts of that object, then it's a failure.

Test 3: Breathing Walls

Description: It looks like the static surface you're looking at (floors, walls, ceilings) is shifting, rotating, swirling, "breathing" (sort of dilating back and forth?) even though you know that it's actually still static. Usually more apparent in patterned surfaces than plain colored ones.

Test: For 1 minute, find a larger, textured surface (carpet, pop-corn ceilings, [other examples?]), and stare at it using your peripheral vision. If after a minute of staring you don't see any moving, shifting, etc, then it's a failure.



Discuss
