RSS Feed Aggregator

April First: Announcing LessWrong 3.0 – Now in VR

LessWrong.com News - 1 hour 22 seconds ago
Published on April 1, 2020 8:00 AM GMT

On this April Fools’ Day, we at LessWrong face two problems.

  1. According to Michael Nielsen, software is a way to develop transformative tools for thought. Things like Anki and collaborative editing allow us to think new thoughts. But at LessWrong, we argue that this doesn’t go far enough.
  2. Secondly, everyone is socially isolated, so we need to make LessWrong a far more social environment. We need to make a space that is superior to normal social reality in every measurable way.

The answer? Replace LessWrong with VR.

We’re now proud to announce the new LessWrong Frontpage, built entirely in Mozilla Hubs.

Here is the new landing page, where you can read the core readings of the site: The Sequences, The Codex, The Methods.

Here is a look at the new community page, and a showcase of the amazing social interactions you can have on this platform.

And, the part you've all been waiting for, here is the new LessWrong Frontpage, where you can read the best recent posts.

So we’re using Mozilla Hubs. Why? Because you hear all the things about Mozilla Hubs that you do about any startup about to take off. Words like “Unusable”, “Irritating” and “An all-round terrible UI experience”. If people are saying this about your product and still using it, that means it’s got to be good.

To give some hard data on this, in a survey of a recent academic conference held on the Mozilla Hubs platform, the attendees reported the following genuine data (emphasis added):

With support from the co-chairs of the 2019 ACM Symposium on User Interface Software and Technology (UIST) and the Hubs by Mozilla internal product team, we surveyed the motivations and experiences of remote attendees and discovered:

92% of all Social VR attendees would like to repeat the experience of attending a conference remotely using Hubs.

69% of all remote attendees rated the experience as very good or mostly good.

[...]

[Many] respondents reported difficulty hearing audio, poor visibility of the presentations, and lags in the presentation.

Only half of participants claimed they understood how to use the technology.

So, to be clear, of the attendees of this conference, half didn't know how to use the actual software, but almost all of them (92%!) would like to repeat the experience anyways! How much of a burning need does a product need to fill that at least 42% of your users want to continue using your product, whilst claiming not to know how to actually use your product? To us, this signals amazing product-market fit, and I think we should jump on the bandwagon as early as possible. 

What are its other selling points? In Mozilla Hubs you can visit the site in 3 dimensions. You'll also be able to see the other visitors, and engage in all of the social primate behaviours humans normally do at parties like talking, laughing, and continuously saying “is my microphone working?”, “can anyone hear me?” and “how does walking work again?”.

Have you ever wanted to be in a room with more than 25 people? No? Neither have we, so all rooms have a limit of 25 people who are allowed to enter. In addition, as we approach 25 people, the room will slow down on all devices and become significantly more laggy, causing the conversation to naturally slow down to prevent it from spiraling out of control.

If you've been part of LessWrong for any significant amount of time, you know how much effort we've spent thinking about how to avoid the problem of Eternal September. Recently, after looking at our analytics for multiple minutes, we found out that a lot of users we don’t want have much slower computers, or are using their phones to browse LessWrong.

So, by making LessWrong basically unusable on those devices, we are ensuring a continued high-quality discussion experience on the site, by filtering only for rational people who spend exorbitant amounts of money on their computer hardware. We've already had great success with this strategy when we drastically increased the processing power necessary to run LessWrong 2.0 by moving everything to a JavaScript-based web-app architecture, so we consider this a natural next step for us to take.

(As a continuation of Karma 2.0 we are working on a feature in which your avatar size can scale with your karma, such that users with the most karma can signal their superiority even better, and truly tower over their intellectual contemporaries.)

So it has come to this. For April Fools’ Day 2020, we give you the LessWrong 3.0 Homepage, in open beta only for today.

Experience the future

Discuss

[IMPORTANT WARNING] Prevention of Data & Identity Theft

LessWrong.com News - 3 hours 36 seconds ago
Published on April 1, 2020 6:00 AM GMT

There is currently no vaccine to prevent data, identity, or computational resource theft. Some non-IEEE approved software supplements are considered by many to be prophylactic, though users are advised to be wary that many such supplements contain viruses themselves.

LessWrong recommends that you:

  • Avoid allowing irrational people to touch your devices.
  • Avoid touching devices with your eyes, nose, and mouth.
  • Disconnect your computer from your Wi-Fi for 14 days if it is infected.
  • Follow LessWrong’s recommendations for using a Password.
    • LessWrong does not recommend that healthy web users use passwords to protect themselves from computer viruses, including those circulating in 2019 and 2020.
    • Passwords should be used by people with symptoms of security breach like overheating CPUs due to their hardware being used in botnets. Passwords are also crucial for blogging professionals on the frontlines of internet discussion.
    • If you are using a password, ensure it is properly fitted to your application with appropriate letters, numbers, and special characters.
    • DO NOT HOARD PASSWORDS
      • Certain classes of passwords are in limited supply and must be reserved for those on the front line who need them most.


Discuss

What's the expected QALY (quality-adjusted life years) loss per capita due to SARS-COV-2? QALY gain of increasing ICU capacity? Of buying new ventilators?

LessWrong.com News - 3 hours 17 minutes ago
Published on April 1, 2020 5:42 AM GMT

In my online echo chambers and news bubbles it seems that a lot of effort worldwide is spent on obtaining more ventilators and increasing ICU capacity. Given that 75% of patients on ventilators die anyway, that a single patient occupies an ICU bed for weeks, and that most of the dead are old or have serious pre-existing conditions, I wonder how much good that does.

I strongly suspect that more is spent on those efforts than they are worth, because reports of overwhelmed ICUs are so visually and emotionally captivating. But I'm too lazy to look up the numbers (and the QALY estimate would probably require some original modelling, which I have no experience with).

Related fun fact: Americans spend disproportionately more money on healthcare in their final years. This study: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1464043/ (I haven't checked whether it's been reproduced) suggests 18-25%, depending on the source of funding. It's not as ridiculous as it sounds, though, because you don't know in advance which year is going to be the final one.



Discuss

Necessity and Warrant

LessWrong.com News - 5 hours 47 minutes ago
Published on April 1, 2020 3:12 AM GMT

Literary warrant, a concept introduced by Wyndham Hulme in 1911, has the status of a principle. A subprinciple of the principle of representation, it enjoins that the vocabulary of a subject language be empirically derived from the literature it is intended to describe. This means that a literature must be determined. For Hulme, the language in question was the Library of Congress Classification (LCC), and the literature that served as warrant were the books housed in the Library of Congress. For a discipline-specific language, the literature might be defined as the canonical texts in the discipline or as the core set of documents of the discipline, as this is determined by citation frequency. Once the literature of a discipline is defined, then expressions in it indicative of aboutness become candidates for inclusion in the vocabulary of the language.

    — Elaine Svenonius, The Intellectual Foundation Of Information Organization

In the previous post I discussed the concept of phenomenological necessity. Reality has a consistent ruleset on which we can base reasoning. Our expectations about reality should be based on rules and sense-data derived from reality. The extreme consistency of physics is one of the most important revelations of the 20th century: while our world is dizzying in its complexity and anti-inductive in its presentation, the underlying principles are comparatively simple:

The vision I got from Democritus was of a God who was single-mindedly obsessed with enforcing a couple of rules about certain types of information you are not allowed to have under any circumstances. Some of these rules I’d already known about. You can’t have information from outside your light cone. You can’t have information about the speed and position of a particle at the same time. Others I hadn’t thought about as much until reading Democritus. Information about when a Turing machine will halt. Information about whether certain formal systems are consistent. Precise information about the quantum state of a particle. The reason God hasn’t solved world poverty yet is that He is pacing about feverishly worried that someone, somewhere, is going to be able to measure the quantum state of a particle too precisely, and dreaming up new and increasingly bizarre ways He can prevent that from happening.

    — Scott Alexander, Book Review and Highlights: Quantum Computing Since Democritus

Okay, Bayes-Goggles back on. Are you really going to believe that large parts of the wavefunction disappear when you can no longer see them? As a result of the only non-linear non-unitary non-differentiable non-CPT-symmetric acausal faster-than-light informally-specified phenomenon in all of physics? Just because, by sheer historical contingency, the stupid version of the theory was proposed first?

    — Eliezer Yudkowsky, The Dilemma: Science or Bayes?

The complexity that arises from these 'simple' rules leads to uncertainty, and that uncertainty makes the world unpredictable and difficult for our minds to make sense of. We might imagine ourselves in a maze, carefully mapping our environment. As better patterns come along that predict the maze with increasing accuracy and economy of expression, we manage to put more and more of the maze into a smaller and smaller representation. But as the representation becomes smaller, the work necessary to unpack it into the territory we're interested in increases. It's difficult to model most real world systems just by knowing their underlying physics. The complexity of how reality expresses itself forces us to rely on abstractions, mental models, and approximations. There's no easy way to know everything about anything. This means that which questions and forms of knowledge we choose to acquire is just as important as any reasoning techniques we use to deal with them.

A good example of this can be found in the book Evangelism Explosion by D. James Kennedy. Written with the goal of making modern Christian churches see exponential growth, the book shares (at length and in detail) exactly how its author goes about witnessing Christ to others and teaching Christians to witness. What's striking about it is that before he attempts to share the gospel, he makes a point of asking two questions to set up the conversation: whether the target knows they'll be with god after they die, and what they'd say if god asked them why he should let them in. The thing I find so interesting about this is that he opens with a Christian frame, and mostly seems to ignore the possibility that you might encounter an atheist or even a Hindu.

The script itself is simple, seemingly just a positive presentation of the basic idea that eternal life can be yours if you give up your mind and body to Christ. I found this so shocking that I stopped reading and asked an apostate if this stuff really worked, and he informed me that it did. I'd always thought of Evangelists as being out to convert non-Christians, but if I'm to take what I read from them seriously the goal is mostly to take weak Christian theists and turn them into strong Christian theists. By the time you're willing to engage with a question like "Will you be with god in heaven after you die?" you're already most of the way to being a Christian. Imagine this same conversation with an atheist, who would say "Woah woah back up, who says this 'god' character exists? Why do you think that?". If engaging with someone's questions about god takes you most of the way to being Christian, it should be obvious that letting other people ask you questions without justification takes you most of the way to believing whatever they want you to believe.

Eliezer Yudkowsky discusses this problem frequently in his Rationality: AI to Zombies. Most notable is his essay on privileging the hypothesis, where he tries to get across a similar idea. There Eliezer imagines numbering every possible hypothesis in a scenario from one to some very large number like 4 billion. He then uses the basic principles of information theory to show that taking a question without justification skips over the vast majority of possibilities. Remember that only one hypothesis is the truth, so to do this without care risks excluding the correct answer from consideration before we've even begun to analyze things. Unfortunately, I didn't understand what he meant on my first read, because I wasn't very familiar with computer science.

This idea is of extreme importance, so it's worth explaining in detail. To keep it simple, let's go back to that notion of numbering every hypothesis above noise from one to 4 billion. It takes a certain amount of information to represent this number. The standard unit of information in computer science is of course the binary digit (bit), a one or a zero. A single bit has two possibilities, 1 or 0. Two bits in sequence can be combined four ways: 00, 01, 10, 11. If we add a third bit, you will find there are eight combinations. In fact, each added bit allows us to represent twice the number of combinations. By the time we have put together 32 bits, we can represent 4 billion possibilities. As we gain information about a problem, we (hopefully) narrow in on the correct point in this sea of hypotheses. To ask a question that narrows us down to, say, 10 possibilities is to assert that we have already collected the vast majority of information, or the majority of bits necessary to represent our choice in this domain. If you consider a question merely because the question has been asked, you are allowing other people to choose most of your beliefs for you.
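To make the bit-counting concrete, here is a minimal sketch in Python (a language the post itself doesn't use; the 4 billion and 10 figures are just the post's illustrative numbers, and the helper name is mine):

    import math

    def bits_to_locate(n_hypotheses: int) -> float:
        # Bits of information needed to single out one hypothesis among n.
        return math.log2(n_hypotheses)

    total = bits_to_locate(4_000_000_000)  # ~31.9 bits for the full hypothesis space
    remaining = bits_to_locate(10)         # ~3.3 bits left once a question has
                                           # narrowed things down to 10 candidates

    print(f"Bits to pick one of 4 billion hypotheses: {total:.1f}")
    print(f"Bits still needed after narrowing to 10:  {remaining:.1f}")
    print(f"Bits implicitly supplied by the question: {total - remaining:.1f}")
    # Accepting such a question without justification lets whoever asked it
    # supply roughly 28-29 of the ~32 bits that determine your answer.

Run as written, this prints roughly 31.9, 3.3, and 28.6 bits, which is the sense in which the person asking the question has already chosen most of the answer for you.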

We can call this notion of whether or not a question is justified warrant, in the same sense that the police need a warrant before they can search a US citizen's house. If necessity asks "Is this a reasonable expectation?", warrant asks "Why are(n't) we considering this question?". The just world fallacy, denial of death, and postmodernism are failures of necessity. Privileging the hypothesis, confirmation bias, and the base rate fallacy are failures of warrant. Together, warrant and necessity form the "punch and kick" of rationality, basic foundational moves which must be practiced and mastered before it's possible to reliably execute more advanced technique.

Necessity and warrant go together, because to get good at using necessity we have to manage uncertainty, and usefully managing uncertainty forces us to get good at warrant. For example, in the tournaments run by Philip Tetlock to see who is best at predicting the future, the winners tend to explore much more of the hypothesis space than the mediocre. Instead of examining one possibility and prematurely narrowing things down to a handful of outcomes (all of which might be wrong), they look at several possible outcomes and try to weigh their probability against each other. This sort of thorough exploration helps us become more justified in our beliefs. Evaluating the relative likelihood of a set of possibilities moves us beyond focusing on individual facts or ideas, and facilitates the creation of consistent mental models of reality which can begin to suggest necessary conclusions.

Consider again the earlier questions about god. Christianity claims that those who don't give up their mind and body to Christ will be eternally damned. This is a pretty scary idea, but it becomes a bit less scary when we consider other theologies claiming much the same thing. Even in the Abrahamic family alone we have Judaism, Christianity, and Islam, which all mutually claim followers of the other two will meet their end in the lake of fire. Just by considering all of the major world religions, we find the framing attack James Kennedy uses to persuade far less compelling.

The failure of warrant is also behind one of the more important design flaws in modern republics: focusing control mechanisms on the consideration of new laws rather than the proposal of new laws. In theory a republic is meant to be kept in check by allowing competing interests at the table, which prevents one segment of society from unfairly appropriating state power to enrich and elevate itself over others. The typical implementation of this assumes that ideas for new laws appear because of "concerned citizens", and the process doesn't focus too much on the origins or justifications for laws. In the context of information theory, this is a recipe for disaster. If simply proposing a question takes us most of the way to saying "yes", then in practice what we've done with this ruleset is leave most of the power of making laws almost completely unregulated and uncontrolled. Bad actors can re-propose their unpopular initiatives until they get the right set of circumstances for them to pass.

Principles Of Warrant

The last time I talked about warrant I avoided precisely defining what makes a question worth considering, pushing the matter out to social consensus. But the worthiness of questions exists independently of social approval. To think clearly even when those around us don't requires some kind of objective standard. In her Intellectual Foundation Of Information Organization, Elaine Svenonius deals with questions of this sort often in the context of library science. To figure out which features should be part of a bibliographic record, she lays out principles of warrant and uses them as criteria to justify the inclusion or exclusion of information. We can do something similar, but because 'asking questions' is such a broad thing it's not really possible to write out a complete set of principles. Rather, I suspect that the Pareto Rule is in play and expecting any justification for questions does most of the work for us. Still, there are some principles of warrant that come to mind:

Principle Of Confusion

If two or more trustworthy models predict contradictory outcomes, you are confused about a subject and should be asking what the source of contradiction is.

Principle Of Priors

When we expect something to be true and find that observation or inference implies it isn't, we should notice we're confused and ask questions.

Principle Of Pain

Empirical observation of problems is a good reason to ask questions about why they occur and how they can be stopped.

Principle Of Relation

If you're already asking a question, it's often warranted to ask questions which are close by in question-space. Be wary, however, that the principle of relation is fairly weak, and six degrees of Kevin Bacon means that it can be used adversarially to shoehorn discussion of topics which wouldn't otherwise come up.

Principle Of Balance

When you ask a question, it's also often useful to ask its inverse.

Principle Of Exhaustion

If a question can be interpreted as belonging to a meaningful category, asking other questions in the same category can be useful to compare answers/etc.

This list is obviously not exhaustive, and memorizing it wouldn't be a good strategy for getting good at warrant. Using warrant in practice is more like imagining some sense-data you would like to see. For example, if you're reviewing the literature on the procedure to induce Haitian voodoo spirit possession, what you're really asking is a question like "Where in the world would I look to find information on this? Who would know about it? Where has information been left behind by the presence of this phenomenon?". You might try anthropological accounts of voodoo practices, or hit up YouTube to see if an inconsiderate tourist has filmed the proceedings (or a nosy anthropologist). Getting good at thinking about where a phenomenon would leave traces in the world lets you remove degrees of freedom from your beliefs until they're tightly constrained by empirical observation; that is to say, they have become thoroughly justified.



Discuss

How special are human brains among animal brains?

LessWrong.com News - 7 hours 24 minutes ago
Published on April 1, 2020 1:35 AM GMT

Humans are capable of feats of cognition that appear qualitatively more sophisticated than those of any other animals. Is this appearance of a qualitative difference indicative of human brains being essentially more complex than the brains of any other animal? Or is this “qualitative difference” illusory, with the vast majority of human cognitive feats explainable as nothing more than a scaled-up version of the cognitive feats of lower animals?

“How special are human brains among animal brains?” is one of the background variables in my framework for AGI timelines. My aim for this post is not to present a complete argument for some view on this variable, so much as it is to:

  • present some considerations I’ve encountered that shed light on this variable
  • invite a collaborative effort among readers to shed further light on this variable (e.g. by leaving comments about considerations I haven’t included, or pointing out mistakes in my analyses)
Does mastery of language make humans unique?

Human conscious experience may have emerged from language

Humans seem to have much higher degrees of consciousness and agency than other animals, and this may have emerged from our capacities for language. Helen Keller (who was deaf and blind since infancy, and only started learning language when she was 6) gave an autobiographical account of how she was driven by blind impetuses until she learned the meanings of the words “I” and “me”:

Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory. It enables me to remember that I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. I also recall tactually the fact that never in a start of the body or a heart-beat did I feel that I loved or cared for anything. My inner life, then, was a blank without past, present, or future, without hope or anticipation, without wonder or joy or faith. [...] When I learned the meaning of "I" and "me" and found that I was something, I began to think. Then consciousness first existed for me. Thus it was not the sense of touch that brought me knowledge. It was the awakening of my soul that first rendered my senses their value, their cognizance of objects, names, qualities, and properties. Thought made me conscious of love, joy, and all the emotions. I was eager to know, then to understand, afterward to reflect on what I knew and understood, and the blind impetus, which had before driven me hither and thither at the dictates of my sensations, vanished forever.

Mastery of language may have conferred unique intellectual superpowers

I think humans underwent a phase transition in their intellectual abilities when they came to master language, at which point their intellectual abilities jumped far beyond those of other animals on both an individual level and a species level.

On an individual level, our capacity for language enables us to entertain and express arbitrarily complex thoughts, which appears to be an ability unique to humans. In theoretical linguistics, this is referred to as “digital infinity”, or “the infinite use of finite means”.

On a species level, our mastery of language enables intricate insights to accumulate over generations with high fidelity. Our ability to stand on the shoulders of giants is unique among animals, which is why our culture is unrivaled in its richness and sophistication.

Language aside, how unique are humans?

Humans ≈ Neanderthals + language?

The most quintessentially human intellectual accomplishments (e.g. proving theorems, composing symphonies, going into space) were only made possible by culture post-agricultural revolution. So, when evaluating humans’ innate intellectual capacities, a better reference point than modern humans like ourselves would be our hunter-gatherer ancestors.

We can reduce the question of how complex our hunter-gatherer ancestors’ brains are into two sub-questions: how complex is our capacity for mastering language, and how complex are brains that are similar to ours, but don’t have the capacity for mastering language?

Neanderthal brains seem like plausible proxies for the latter. Neanderthals are similar enough to modern humans that they’ve interbred, and the currently available evidence suggests that they may not have mastered language in the same way that behaviorally modern humans have. (I don’t think this evidence is very strong, but this doesn’t matter for my purposes—I’m just using Neanderthals as a handy stand-in to gesture at what a human-like intelligence might look like if it didn’t have the capacity for language.)

Higher intelligence in animals

Chimpanzees, crows, and dolphins are capable of impressive feats of higher intelligence, and I don’t think there’s any particular reason to think that Neanderthals are capable of doing anything qualitatively more impressive. I’ll share some examples of these animals’ intellectual feats that I found particularly illustrative.

Chimpanzees have been observed to lie to each other under experimental conditions. From Wikipedia:

...food was hidden and only one individual, named Belle, in a group of chimpanzees was informed of the location. Belle was eager to lead the group to the food but when one chimpanzee, named Rock, began to refuse to share the food, Belle changed her behaviour. She began to sit on the food until Rock was far away, then she would uncover it quickly and eat it. Rock figured this out though and began to push her out of the way and take the food from under her. Belle then sat farther and farther away waiting for Rock to look away before she moved towards the food. In an attempt to speed the process up, Rock looked away until Belle began to run for the food. On several occasions he would even walk away, acting disinterested, and then suddenly spin around and run towards Belle just as she uncovered the food.

In Aesop’s fable of the crow and the pitcher, a thirsty crow figures out that it can drop pebbles into a pitcher, so that the water rises to a high enough level for it to drink from. This behavior has been experimentally replicated, indicating that crows have a “sophisticated, but incomplete, understanding of the causal properties of displacement, rivalling that of 5–7 year old children”.

When Kelly the dolphin was given rewards of fish for picking up scraps of paper, “Kelly figured out that she received the same fish regardless of the size of the piece of trash she was delivering to her trainer. So she began hiding big pieces of trash under a rock. Kelly would then rip off small pieces from the trash and deliver them one at a time so that she could receive more fish.” Additionally, “when a bird landed in the pool, Kelly snatched it and delivered it to her trainers. She received a large amount of fish in return. Knowing this, she decided to start hiding fish each time she was fed. She would then use the fish to lure birds when none of her trainers were around. Kelly knew that by saving one or two fish now, she could get many more fish later by turning in a bird.“ (Also reported on The Guardian; I don’t know how reputable these sources are, so take this anecdote with a grain of salt.)

See these Wikipedia pages for some more interesting examples, and see here for a more thorough review of the evidence of higher intelligence in animals.

“Qualitatively” more advanced cognition may emerge from scale

Many aspects of human cognition that may appear qualitatively different from what other animals are capable of, such as long chains of abstract reasoning, also appear qualitatively different from what less intelligent humans are capable of. As a particularly extreme example, John von Neumann’s cognitive abilities were so advanced that a Nobel Laureate, Hans Bethe, once remarked that "[his] brain indicated a new species, an evolution beyond man".

At the same time, the genes that code for different humans’ brains are virtually identical from an evolutionary perspective. This suggests that the seemingly qualitative differences between humans’ and animals’ cognition might not be so different from the seemingly qualitative differences between John von Neumann’s cognition and mine—our brains might be doing essentially the same thing as theirs, except at a higher scale.

How hard is mastery of language?

Could language capacity fall out from general capacities?

Maybe it was extraordinarily difficult to evolve the cognitive mechanisms that allow us to learn language, above and beyond our cognitive machinery for learning other things. I think this is plausible, but I don’t think the case for this is very strong.

Animals (Washoe, Koko, and Alex the parrot) have demonstrated the ability to learn simple forms of symbolic communication, which they never evolved to do, indicating that their ability to learn things in general is good enough to learn very simple forms of language. It’s true that there are aspects of human language that escape animals, but they also escape feral children, and might escape animals for mundane reasons, like their not having critical periods long enough to learn these aspects of language.

Additionally, AI language models provide evidence that simple and general learning mechanisms can capture many of the intricacies of human language that other animals miss, further suggesting that there’s nothing intrinsically difficult about learning language. Here’s an excerpt from GPT-2, a relatively recent language model:

SYSTEM PROMPT (HUMAN-WRITTEN): In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

MODEL COMPLETION (MACHINE-WRITTEN, 10 TRIES): The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.

Why haven’t other species mastered language?

If language isn’t a particularly difficult cognitive capacity to acquire, why don’t we see more animal species with language?

One possibility is that the first species that masters language, by virtue of being able to access intellectual superpowers inaccessible to other animals, has a high probability of becoming the dominant species extremely quickly. (Humans underwent the agricultural revolution within 50,000 years of behavioral modernity—a blink of an eye on evolutionary timescales—after which their dominance as a species became unquestionable.) Since we shouldn’t expect to see more than one dominant species at a time, this would imply a simple anthropic argument for our unique capacities for language: we shouldn’t expect to see more than one species at a time with mastery of language, and we just happen to be the species that made it there first.

It may also turn out that language is hard to evolve not because it’s a particularly sophisticated cognitive mechanism, but because the environments that could have supported language and selected for it might have been very unique. For example, it may be that a threshold of general intelligence has to be crossed before it’s viable for a species to acquire language, and that humans are the only species to have crossed this threshold. (Humans do have the highest cortical information processing capacity among mammals.)

It might also turn out that the cultural contexts under which language could evolve require a mysteriously high degree of trust: “... language presupposes relatively high levels of mutual trust in order to become established over time as an evolutionarily stable strategy. This stability is born of a longstanding mutual trust and is what grants language its authority. A theory of the origins of language must therefore explain why humans could begin trusting cheap signals in ways that other animals apparently cannot (see signalling theory).”

My current take

As we came to master language, I think we underwent a phase transition in our intellectual abilities that set us apart from other animals. Besides language, I don't see much that sets us apart from other animals—in particular, most other cognitive differences seem explainable as consequences of either language or scale, and I don’t think the cognitive mechanisms that allow us to master language are particularly unique or difficult to acquire. Overall, I don’t see much reason to believe that human brains have significantly more innate complexity than the brains of other animals.


Thanks to Paul Kreiner and Stag Lynn for helpful commentary and feedback.



Discuss

FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord

LessWrong.com News - 7 hours 58 minutes ago
Published on April 1, 2020 1:02 AM GMT

Toby Ord’s “The Precipice: Existential Risk and the Future of Humanity" has emerged as a new cornerstone text in the field of existential risk. The book presents the foundations and recent developments of this budding field from an accessible vantage point, providing an overview suitable for newcomers. For those already familiar with existential risk, Toby brings new historical and academic context to the problem, along with central arguments for why existential risk matters, novel quantitative analysis and risk estimations, deep dives into the risks themselves, and tangible steps for mitigation. "The Precipice" thus serves as both a tremendous introduction to the topic and a rich source of further learning for existential risk veterans. Toby joins us on this episode of the Future of Life Institute Podcast to discuss this definitive work on what may be the most important topic of our time.

Topics discussed in this episode include:

-An overview of Toby's new book
-What it means to be standing at the precipice and how we got here
-Useful arguments for why existential risk matters
-The risks themselves and their likelihoods
-What we can do to safeguard humanity's potential

You can find the page for this podcast here: https://futureoflife.org/2020/03/31/he-precipice-existential-risk-and-the-future-of-humanity-with-toby-ord/

Transcript:

Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. This episode is with Toby Ord and covers his new book “The Precipice: Existential Risk and the Future of Humanity.” This is a new cornerstone piece in the field of existential risk and I highly recommend this book for all persons of our day and age. I feel this work is absolutely critical reading for living an informed, reflective, and engaged life in our time. And I think even those well acquainted with this topic area will find much that is both useful and new in this book. Toby offers a plethora of historical and academic context to the problem, tons of citations and endnotes, useful definitions, central arguments for why existential risk matters that can be really helpful for speaking to new people about this issue, and also novel quantitative analysis and risk estimations, as well as what we can actually do to help mitigate these risks. So, if you’re a regular listener to this podcast, I’d say this is a must add to your science, technology, and existential risk bookshelf.

The Future of Life Institute is a non-profit and this podcast is funded and supported by listeners like you. So if you find what we do on this podcast to be important and beneficial, please consider supporting the podcast by donating at futureoflife.org/donate. If you support any other content creators via services like Patreon, consider viewing a regular subscription to FLI in the same light. You can also follow us on your preferred listening platform, like on Apple Podcasts or Spotify, by searching for us directly or following the links on the page for this podcast found in the description.

Toby Ord is a Senior Research Fellow in Philosophy at Oxford University. His work focuses on the big picture questions facing humanity. What are the most important issues of our time? How can we best address them?

Toby’s earlier work explored the ethics of global health and global poverty, demonstrating that aid has been highly successful on average and has the potential to be even more successful if we were to improve our priority setting. This led him to create an international society called Giving What We Can, whose members have pledged over $1.5 billion to the most effective charities helping to improve the world. He also co-founded the wider effective altruism movement, encouraging thousands of people to use reason and evidence to help others as much as possible.

His current research is on the long-term future of humanity,  and the risks which threaten to destroy our entire potential.

Finally, the Future of Life Institute podcasts have never had a central place for conversation and discussion about the episodes and related content. In order to facilitate such conversation, I’ll be posting the episodes to the LessWrong forum at Lesswrong.com where you’ll be able to comment and discuss the episodes if you so wish. The episodes more relevant to AI alignment will be crossposted from LessWrong to the Alignment Forum as well at alignmentforum.org.  

And so with that, I’m happy to present Toby Ord on his new book “The Precipice.”

We’re here today to discuss your new book, The Precipice: Existential Risk and the Future of Humanity. Tell us a little bit about what the book is about.

Toby Ord: The future of humanity, that’s the guiding idea, and I try to think about how good our future could be. That’s what really motivates me. I’m really optimistic about the future we could have if only we survive the risks that we face. There have been various natural risks that we have faced for as long as humanity’s been around, 200,000 years of Homo sapiens or you might include an even broader definition of humanity that’s even longer. That’s 2000 centuries and we know that those natural risks can’t be that high or else we wouldn’t have been able to survive so long. It’s quite easy to show that the risks should be lower than about 1 in 1000 per century.

But then with humanity’s increasing power over that time, the exponential increases in technological power, we reached this point last century with the development of nuclear weapons, where we pose a risk to our own survival, and I think that the risks have only increased since then. We’re in this new period where the risk is substantially higher than these background risks and I call this time the precipice. I think that this is a really crucial time in the history and the future of humanity, perhaps the most crucial time, these few centuries around now. And I think that if we survive, and people in the future look back on the history of humanity, schoolchildren will be taught about this time. I think that this will be really more important than other times that you’ve heard of such as the industrial revolution or even the agricultural revolution. I think this is a major turning point for humanity. And what we do now will define the whole future.
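As a side note on the survival argument Toby gestures at above, here is a minimal sketch in Python (the specific risk levels and the 2,000-century figure are illustrative assumptions, not Toby's exact calculation) of why a long track record of survival bounds the natural per-century extinction risk:

    # If the natural per-century extinction risk were p, the chance of Homo sapiens
    # surviving its roughly 2,000 centuries so far would be about (1 - p) ** 2000.
    survived_centuries = 2000

    for p in [1 / 100, 1 / 1000, 1 / 10000]:
        survival_prob = (1 - p) ** survived_centuries
        print(f"risk {p:.4f} per century -> P(survive {survived_centuries} centuries) = {survival_prob:.3g}")

    # Approximate output:
    #   risk 0.0100 -> 1.86e-09  (a 1-in-100 natural risk is essentially ruled out)
    #   risk 0.0010 -> 0.135     (1 in 1,000 already sits poorly with our track record)
    #   risk 0.0001 -> 0.819     (risks well below 1 in 1,000 are comfortably consistent)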

Lucas Perry: In the title of your book, and also in the contents of it, you developed this image of humanity to be standing at the precipice, could you unpack this a little bit more? What does it mean for us to be standing at the precipice?

Toby Ord: I sometimes think of humanity as being on this grand journey through the wilderness with dark times at various points, but also moments of sudden progress and heady views of the path ahead and what the future might hold. And I think that this point in time is the most dangerous time that we’ve ever encountered, and perhaps the most dangerous time that there will ever be. So I see it in this central metaphor of the book, humanity coming through this high mountain pass and the only path onwards is this narrow ledge along a cliff side with this steep and deep precipice at the side and we’re kind of inching our way along. But we can see that if we can get past this point, there’s ultimately, almost no limits to what we could achieve. Even if we can’t precisely estimate the risks that we face, we know that this is the most dangerous time so far. There’s every chance that we don’t make it through.

Lucas Perry: Let’s talk a little bit then about how we got to this precipice and our part in this path. Can you provide some examples or a story of global catastrophic risks that have happened and near misses of possible existential risks that have occurred so far?

Toby Ord: It depends on your definition of global catastrophe. One of the definitions that’s on offer is 10% or more of all people on the earth at that time being killed in a single disaster. There is at least one time where it looks like we may have reached that threshold, which was the Black Death, which killed between a quarter and a half of people in Europe and may have killed many people in South Asia and East Asia as well as the Middle East. It may have killed one in 10 people across the whole world. Although because our world was less connected than it is today, it didn’t reach every continent. In contrast, the Spanish Flu of 1918 reached almost everywhere across the globe, and killed a few percent of people.

But in terms of existential risk, none of those really posed an existential risk. We saw, for example, that despite something like a third of people in Europe dying, that there wasn’t a collapse of civilization. It seems like we’re more robust than some give us credit for, but there’ve been times where there hasn’t been an actual catastrophe, but there’s been near misses in terms of the chances.

There are many cases actually connected to the Cuban Missile Crisis, a time of immensely high tensions during the Cold War in 1962. I think that the closest we have come is perhaps the events on a submarine that, unknown to the U.S., was carrying a secret nuclear weapon, and the U.S. patrol boats tried to force it to surface by dropping what they called practice depth charges, but the submarine thought that these were real explosives aimed at hurting them. The submarine was made for the Arctic and so it was overheating in the Caribbean. People were dropping unconscious from the heat and the lack of oxygen as they tried to hide deep down in the water. And during that time the captain, Captain Savitsky, ordered that this nuclear weapon be fired and the political officer gave his consent as well.

On any of the other submarines in this flotilla, this would have been enough to launch this torpedo that then would have been a tactical nuclear weapon exploding and destroying the fleet that was oppressing them, but on this one, it was lucky that the flotilla commander was also on board this submarine, Captain Vasili Arkhipov and so, he overruled this and talked Savitsky down from this. So this was a situation at the height of this tension where a nuclear weapon would have been used. And we’re not quite sure, maybe Savitsky would have decided on his own not to do it, maybe he would have backed down. There’s a lot that’s not known about this particular case. It’s very dramatic.

But Kennedy had made it very clear that any use of nuclear weapons against U.S. Armed Forces would lead to an all-out full scale attack on the Soviet Union, so they hadn’t anticipated that tactical weapons might be used. They assumed it would be a strategic weapon, but it was their policy to respond with a full scale nuclear retaliation and it looks likely that that would have happened. So that’s the case where ultimately zero people were killed in that event. The submarine eventually surfaced and surrendered and then returned to Moscow where people were disciplined, but it brought us very close to this full scale nuclear war.

I don’t mean to imply that that would have been the end of humanity. We don’t know whether humanity would survive the full scale nuclear war. My guess is that we would survive, but that’s its own story and it’s not clear.

Lucas Perry: Yeah. The story to me has always felt a little bit unreal. It’s hard to believe we came so close to something so bad. For listeners who are not aware, the Future of Life Institute gives out a $50,000 award each year, called the Future of Life Award to unsung heroes who have contributed greatly to the existential security of humanity. We actually have awarded Vasili Arkhipov’s family with the Future of Life Award, as well as Stanislav Petrov and Matthew Meselson. So if you’re interested, you can check those out on our website and see their particular contributions.

And related to nuclear weapons risk, we also have a webpage on nuclear close calls and near misses where there were accidents with nuclear weapons which could have led to escalation or some sort of catastrophe. Is there anything else here you’d like to add in terms of the relevant historical context and this story about the development of our wisdom and power over time?

Toby Ord: Yeah, that framing, which I used in the book comes from Carl Sagan in the ’80s when he was one of the people who developed the understanding of nuclear winter and he realized that this could pose a risk to humanity on the whole. The way he thought about it is that we’ve had this massive development over the hundred billion human lives that have come before us. This succession of innovations that have accumulated building up this modern world around us.

If I look around me, I can see almost nothing that wasn’t created by human hands and this, as we all know, has been accelerating and often when you try to measure exponential improvements in technology over time, leading to the situation where we have the power to radically reshape the Earth’s surface, both say through our agriculture, but also perhaps in a moment through nuclear war. This increasing power has put us in a situation where we hold our entire future in the balance. A few people’s actions over a few minutes could actually potentially threaten that entire future.

In contrast, humanity’s wisdom has grown only falteringly, if at all. Many people would suggest that it’s not even growing. And by wisdom here, I mean, our ability to make wise decisions for human future. I talked about this in the book under the idea about civilizational virtues. So if you think of humanity as a group of agents, in the same way that we think of say nation states as group agents, we talk about is it in America’s interest to promote this trade policy or something like that? We can think of what’s in humanity’s interests and we find that if we think about it this way, humanity is crazily impatient and imprudent.

If you think about the expected lifespan of humanity, a typical species lives for about a million years. Humanity is about 200,000 years old. We have something like 800,000 or a million or more years ahead of us if we play our cards right and we don’t lead to our own destruction. The analogy would be 20% of the way through our life, like an adolescent who’s just coming into his or her own power, but doesn’t have the wisdom or the patience to actually really pay any attention to this possible whole future ahead of them and so they’re just powerful enough to get themselves in trouble, but not yet wise enough to avoid that.

If you continue this analogy, it is often hard for humanity at the moment to think more than a couple of election cycles ahead at best, but that would correspond, for say eight years, to just the next eight hours within this person’s life. For the kind of short term interests during the rest of the day, they put the whole rest of their future at risk. And so I think that that helps to see what this lack of wisdom looks like. It’s not that it’s just a highfalutin term of some sort, but you can kind of see what’s going on is that the person is incredibly imprudent and impatient. And I think that many other virtues or vices that we think of in an individual human’s life can be applied in this context and are actually illuminating about where we’re going wrong.

Lucas Perry: Wonderful. Part of the dynamic here in this wisdom versus power race seems to be one of the solutions being slowing down power seems untenable or that it just wouldn’t work. So it seems more like we have to focus on amplifying wisdom. Is this also how you view the dynamic?

Toby Ord: Yeah, that is. I think that if humanity was more coordinated, if we were able to make decisions in a unified manner better than we actually can. So, if you imagine this was a single player game, I don’t think it would be that hard. You could just be more careful with your development of power and make sure that you invest a lot in institutions, and in really thinking carefully about things. I mean, I think that the game is ours to lose, but unfortunately, we’re less coherent than that and if one country decides to hold off on developing things, then other countries might run ahead and produce a similar amount of risk.

There’s this kind of tragedy of the commons at this higher level, and so I think that it’s extremely difficult in practice for humanity to go slow on progress of technology. And I don’t recommend that we try. So in particular, there’s at the moment only a small number of people who really care about these issues and are really thinking about the long-term future and what we could do to protect it. And if those people were to spend their time arguing against progress of technology, I think that it would be a really poor use of their energies and probably just annoy and alienate the people they were trying to convince. And so instead, I think that the only real way forward is to focus on improving wisdom.

I don’t think that’s impossible. I think that humanity’s wisdom, as you could see from my comment before about how we’re kind of disunified, partly, it involves being able to think better about things as individuals, but it also involves being able to think better collectively. And so I think that institutions for overcoming some of these tragedies of the commons or prisoner’s dilemmas at this international level, are an example of the type of thing that will make humanity make wiser decisions in our collective interest.

Lucas Perry: It seemed that you said by analogy, that humanity’s lifespan would be something like a million years as compared with other species.

Toby Ord: Mm-hmm (affirmative).

Lucas Perry: That is likely illustrative for most people. I think there’s two facets of this that I wonder about in your book and in general. The first is this idea of reaching existential escape velocity, where it would seem unlikely that we would have a reason to end in a million years should we get through the time of the precipice, and the second is I’m wondering about your perspective on the term Nick Bostrom uses for what matters here in the existential condition, Earth-originating intelligent life. So, it would seem curious to suspect that even if humanity’s existential condition were secure that we would still be recognizable as humanity in some 10,000, 100,000, 1 million years’ time and not something else. So, I’m curious to know how the framing here functions in general for the public audience and then also being realistic about how evolution has not ceased to take place.

Toby Ord: Yeah, both good points. I think that the one million years is indicative of how long species last when they’re dealing with natural risks. It’s I think a useful number to try to show why there are some very well-grounded scientific reasons for thinking that a million years is entirely in the ballpark of what we’d expect if we look at other species. And even if you look at mammals or other hominid species, a million years still seems fairly typical, so it’s useful in some sense for setting more of a lower bound. There are species which have survived relatively unchanged for much longer than that. One example is the horseshoe crab, which is about 450 million years old whereas complex life is only about 540 million years old. So that’s something where it really does seem like it is possible to last for a very long period of time.

If you look beyond that the Earth should remain habitable for something in the order of 500 million or a billion years for complex life before it becomes too hot due to the continued brightening of our sun. If we took actions to limit that brightening, which look almost achievable with today’s technology, we would only need to basically shade the earth by about 1% of the energy coming at it and increase that by 1%, I think it’s every billion years, we will be able to survive as long as the sun would for about 7 billion more years. And I think that ultimately, we could survive much longer than that if we could reach our nearest stars and set up some new self-sustaining settlement there. And then if that could then spread out to some of the nearest stars to that and so on, then so long as we can reach about seven light years in one hop, we’d be able to settle the entire galaxy. There are stars in the galaxy that will still be burning in about 10 trillion years from now and there’ll be new stars for millions of times as long as that.

We could have this absolutely immense future in terms of duration and the technologies that are beyond our current reach and if you look at the energy requirements to reach nearby stars, they’re high, but they’re not that high compared to say, the output of the sun over millions of years. And if we’re talking about a scenario where we’d last millions of years anyway, it’s unclear why it would be difficult with the technology would reach them. It seems like the biggest challenge would be lasting that long in the first place, not getting to the nearest star using technology for millions of years into the future with millions of years of stored energy reserves.

So that’s the kind of big picture question about the timing there, but then you also ask about would it be humanity? One way to answer that is, unless we go to a lot of effort to preserve Homo sapiens as we are now then it wouldn’t be Homo sapiens. We might go to that effort if we decide that it’s really important that it be Homo sapiens and that we’d lose something absolutely terrible. If we were to change, we could make that choice, but if we decide that it would be better to actually allow evolution to continue, or perhaps to direct it by changing who we are with genetic engineering and so forth, then we could make that choice as well. I think that that is a really critically important choice for the future and I hope that we make it in a very deliberate and careful manner rather than just going gung-ho and letting people do whatever they want, but I do think that we will develop into something else.

But in the book, my focus is often on humanity in this kind of broad sense. Earth-originating intelligent life would roughly be a gloss on it, but that has the issue that, suppose humanity did go extinct and we got lucky and some other intelligent life started off again — I don't want to count that in what I'm talking about, even though it would technically fit into Earth-originating intelligent life. Sometimes I put it in the book as humanity or our rightful heirs, something like that. Maybe we would create digital beings to replace us, artificial intelligences of some sort. So long as they were the kinds of beings that could actually fulfill the potential that we have, that could realize one of the best trajectories we could possibly reach, then I would count them. It could also be that we create something that succeeds us but has very little value; then I wouldn't count it.

So yeah, I do think that we may be greatly changed in the future. I don't want that to distract the reader if they're not used to thinking about things like that, because they might then think, "Well, who cares about that future, because it will be some other things having the future." And I want to stress that there will only be some other things having the future if we want it to be so, if we make that choice. If that would be a catastrophic choice, then it's another existential risk that we have to deal with in the future and which we could prevent. And if it is a good choice, and we're like the caterpillar that really should become a butterfly in order to fulfill its potential, then we need to make that choice. So I think that is something we can leave to future generations, while recognizing that it is important that they make the right choice.

Lucas Perry: One of the things that I really appreciate about your book is that it tries to make this more accessible for a general audience. So, I actually do like it when you use lower bounds on humanity's existential condition. Talking about billions upon billions of years can seem a little far out there and maybe costs some weirdness points, and as much as I like the concept of Earth-originating intelligent life, I think it costs some weirdness points too.

And it seems like you've taken some effort to make the language less ostracizing by decoupling it somewhat from effective altruism jargon and the kind of language that we might use in effective altruism circles. I appreciate that and find it to be an important step. I feel the same thing feeds in here when talking about descendant scenarios. It seems like making things simple and leveraging human self-interest is maybe important here.

Toby Ord: Thanks. When I was writing the book, I tried really hard to think about these things, both in terms of communication, but also in terms of trying to understand what we have been talking about for all of these years when we've been talking about existential risk and similar ideas. Often in effective altruism, there's a discussion about the different types of cause areas that effective altruists are interested in. There are people who really care about global poverty, because we can help others who are much poorer than ourselves so much more with our money, and also about helping animals, who are left out of the political calculus and the economic calculus — we can see why their interests are typically neglected, and so we look at factory farms and see how we could do so much good.

And then there's this third group of people, and the conversation drifts off a bit — people who have this kind of idea about the future that's hard to describe and hard to wrap up together. So I've seen it as one of my missions over the last few years to really work out what it is that that third group of people are trying to do. My colleague, Will MacAskill, has been working on this a lot as well. And what we see is that this other group of effective altruists are the long-termist group.

The first group is thinking about this cosmopolitan aspect: it's not just people in my country that matter, it's people across the whole world, and some of those could be helped much more. The second group is saying it's not just humans that could be helped: if we widen things up beyond the species boundary, then we can see that there's so much more we could do for other conscious beings. And then this third group is saying it's not just our own time in which we can help: there's so much we can do to help people across an entire future of millions of years, or even further. And so the point of leverage there is that our present generation is perhaps just a tiny fraction of the entire future. If we can do something that will help that entire future, then that's where this could be really key in terms of doing something amazing with our resources and our lives.

Lucas Perry: Interesting. I actually had never thought of it that way, and I think it puts the differences between the groups really succinctly: people focused on global poverty are reducing spatial or proximity bias in how we do good; work on animal farming is a kind of anti-speciesism, broadening our moral circle of compassion to other species; and long-termism is about reducing time-based ethical bias. I think that's quite good.

Toby Ord: Yeah, that's right. In all these cases, you have to confront additional questions. It's not enough to just make this point and have it follow that things are really important. You need to know, for example, that there really are ways that people can help others in distant countries and that the money won't be squandered. And in fact, for most of human history, there weren't ways that we could easily help people in other countries just by writing out a check to the right place.

When it comes to animals, there's a whole lot of challenging questions about the effects of changing your diet, or the effects of donating to a group that prioritizes animals in campaigns against factory farming or similar. And when it comes to the long-term future, there's this real question: "Well, why wouldn't people in the future be just as able to protect themselves as we are? Why wouldn't they be even better situated to attend to their own needs?" Given the history of economic growth and this increasing power of humanity, one would expect them to be more empowered than us, so it does require an explanation.

And I think that the strongest type of explanation is around existential risk. Existential risks are things that would be an irrevocable loss. As I define them — which is a simplification — an existential catastrophe is the destruction of humanity's long-term potential. I think of our long-term potential in terms of the set of all possible futures that we could instantiate. If you think about all the different collective actions that humans could take across all time, that sets out this huge cloud of trajectories that humanity could follow, and I think it is absolutely vast. I think there are ways, if we play our cards right, of lasting for millions of years, or billions, or trillions, and affecting billions of different worlds across the cosmos, and then doing all kinds of amazing things with all of that future. So we've got this huge range of possibilities at the moment, and I think that some of those possibilities are extraordinarily good.

If we were to go extinct, though, that would collapse this set of possibilities to a much smaller set, which contains much worse possibilities. If we went extinct, there would be just one future, whatever it is that would happen without humans, because there’d be no more choices that humans could make. If we had an irrevocable collapse of civilization, something from which we could never recover, then that would similarly reduce it to a very small set of very meager options. And it’s possible as well that we could end up locked into some dystopian future, perhaps through economic or political systems, where we end up stuck in some very bad corner of this possibility space. So that’s our potential. Our potential is currently the value of the best realistically realizable worlds available to us.

If we fail in an existential catastrophe, that's the destruction of almost all of this value, and it's something you can never get back, because it's our very potential that would be destroyed. That also gives the explanation for why people in the future wouldn't be better able to solve their own problems: we're talking about things that could fail now, which is why there's room for us to make such a contribution.
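One informal way to write down this notion of potential (the notation here is illustrative, not from the book): let F_t be the set of futures realistically achievable at time t and V(f) the value of a future f. Then

\[ \mathrm{Potential}(t) \;=\; \max_{f \in F_t} V(f) \]

and an existential catastrophe is an event after which the achievable set shrinks to some \( F'_t \subset F_t \) with

\[ \max_{f \in F'_t} V(f) \;\ll\; \mathrm{Potential}(t), \]

where no later action can re-expand the set — which is what makes the loss irrevocable.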

Lucas Perry: So if we were to very succinctly put the recommended definition or framing on existential risk that listeners might be interested in using in the future when explaining this to new people, what is the sentence that you would use?

Toby Ord: An existential catastrophe is the destruction of humanity’s long-term potential, and an existential risk is the risk of such a catastrophe.

Lucas Perry: Okay, so on this long-termism point, can you articulate a little more what is so compelling or important about humanity's potential in the deep future, and which arguments are most compelling to you? With a bit of framing on the question of whether the long-termist perspective is compelling or motivating for the average person — like, why should I care about people who are far away in time from me?

Toby Ord: So, I think that if a lot of people were pressed and asked, "Does it matter just as much if a child suffers 100 years in the future as a child at some other point in time?", they would say, "Yeah, it matters just as much." But that's not how we normally think of things when we think about what charity to donate to or what policies to implement. Still, I don't think it's that foreign an idea. In fact, the weird thing would be people mattering different amounts simply in virtue of the fact that they live at different times.

A simple example: suppose you do think that things further into the future matter less intrinsically. Economists sometimes represent this by a pure rate of time preference. It's one component of a discount rate, the part that is just about things mattering less in the future, whereas most of the discount rate is actually to do with the fact that money is more important to have earlier — which is actually a pretty solid reason, but that component doesn't affect any of these arguments. It's only this little extra aspect where things matter less just because they're in the future. Suppose you have a 1% discount rate of that form. That means that someone's older brother matters more than their younger brother — that a life which is equally long and has the same kinds of experiences is fundamentally more important when it belongs to the older sibling than the younger one, things like that. This just seems kind of crazy to most people, I think.

And similarly, if you have these exponential discount rates, which are typically the only kind that economists consider, they have the consequence that what happens in 10,000 years is way more important than what happens in 11,000 years. People don't have any intuition like that at all, really. Maybe we don't think that much about what happens in 10,000 years, but 11,000 is pretty much the same as 10,000 to our intuition, whereas these other views say, "Wow. No, it's totally different. It's just like the difference between what happens next year and what happens in a thousand years."
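As a rough back-of-the-envelope illustration of how a pure rate of time preference behaves at long horizons (these numbers are ours, not figures from the book), a 1% annual rate gives:

# Illustration of a 1% pure rate of time preference (illustrative only).
# Weight placed on a benefit occurring `years` from now: 0.99 ** years.

RATE = 0.01

def weight(years):
    """Discount weight assigned to a benefit this many years away."""
    return (1 - RATE) ** years

# The same 1,000-year gap is discounted just as steeply everywhere:
# year 11,000 vs year 10,000 gets the same ratio as year 1,001 vs year 1.
print(weight(11_000) / weight(10_000))  # ~4.3e-05
print(weight(1_001) / weight(1))        # ~4.3e-05

# And a sibling born three years later "matters" about 3% less:
print(weight(3))                        # ~0.97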

It generally just doesn't capture our intuitions. I think what's going on is not so much that we have an active intuition that things further into the future matter less — and in fact much less, since they would have to matter a lot less to dampen the fact that we could have millions of years of future. Instead, what's going on is that we just aren't thinking about it. We're not really considering that our actions could have irrevocable effects over the long distant future. And when we do think about that, such as within environmentalism, it's a very powerful idea: the idea that we shouldn't make irrevocable changes to the environment that could damage the entire future just for transient benefits to our own time. And people think, "Oh, yeah, that is a powerful idea."

So I think it's more that they're just not aware that there are a lot of situations like this. It's not just that a particular ecosystem could be an example of one of these important irrevocable losses — there could be irrevocable losses at a much grander scale, affecting everything that we could ever achieve and do. I should also explain that I do talk a lot about humanity in the book. The reason I say this is not because I think that non-human animals don't count or don't have intrinsic value — I think they do. It's because only humanity is responsive to reasons and to thinking about this. It's not the case that chimpanzees will choose to save other species from extinction and will go out and work out how to safeguard them from natural disasters that could threaten their ecosystems or things like that.

We're the only ones who are even in the game of considering moral choices. So in terms of instrumental value, humanity has this massive instrumental value, because what we do could affect, for better or for worse, the intrinsic value of all of the other species. Other species are going to go extinct in about a billion years — basically all of them, when the Earth becomes uninhabitable. Only humanity could actually extend that lifespan. So humanity ends up being key because we are the decision makers. We are the relevant agents, or any other relevant agents will spring from us — they will be our descendants, or things that we create and choose how they function. That's the kind of role that we're playing.

Lucas Perry: So for people who simply care about the short term — if someone isn't willing to buy into these arguments about the deep future or realizing the potential of humanity's future, thinking "I don't care so much about that, because I won't be alive for it" — there's also an argument here that these risks may be realized within their lifetime or within their children's lifetimes. Could you expand on that a little bit?

Toby Ord: Yeah. In The Precipice, when I try to think about why this matters, I think the most obvious reasons are rooted in the present: the fact that it would be terrible for all of the people who are alive at the time when the catastrophe strikes. That needn't be the case — you could imagine things that meet my definition of an existential catastrophe, in that they would cut off the future, but not be bad for the people alive at that time; maybe we all painlessly disappear at the end of our natural lives or something. But in almost all realistic scenarios that we're thinking about, it would be terrible for all of the people alive at that time: they would have their lives cut short and witness the downfall of everything that they've ever cared about and believed in.

That's a very obvious, natural reason, but the reason that moves me the most is thinking about our long-term future and just how important that is — this huge scale of everything that we could ever become. You could think of that in very numerical terms, or you could just think back over how far humanity has come over these 200,000 years, imagine that going forward, and see how small a slice of things our own lives are. You can come up with very intuitive arguments for this as well; it doesn't have to be a multiply-things-out type of argument.

But then I also think that there are very strong arguments rooted in our past and in other things as well. Humanity has succeeded and has got to where we are because of this partnership of the generations — Edmund Burke had this phrase. If we couldn't promulgate our ideas and innovations to the next generation, our technological level would be like it was in the Paleolithic; even a crude iron shovel would be forever beyond our reach. It was only through passing down these innovations and iteratively improving upon them that we could get billions of people working in cooperation over deep time to build this world around us.

If we think about the wealth and prosperity that we have, and the fact that we live as long as we do — this is all because this rich world was created by our ancestors and handed on to us, and we're the trustees of this vast inheritance. If we were to fail, if we were the first of 10,000 generations to fail to pass this on to our heirs, we would be the worst of all of these generations. We'd have failed in these very important duties. These duties could be understood as some kind of reciprocal duty to those people in the past, or we could consider them as duties to the future rooted in obligations to people in the past — because we can't reciprocate to people who are no longer with us, the only way to get this to work is to pay it forward and have a system where we each help the next generation, out of respect for the past generations.

So I think there's another set of reasons — more deontological reasons — for it. You could also have the reasons I mentioned in terms of civilizational virtues, an approach rooted in becoming a more virtuous civilization or species, and I think that is a powerful way of seeing it as well: to see that we're very impatient and imprudent and so forth, and that we need to become more wise. Alternatively, Max Tegmark has talked about this, and Martin Rees, Carl Sagan and others have seen it as something based on the cosmic significance of humanity: that perhaps among all of the stars and all of the galaxies of the universe, this is the only place where there is life at all, or the only place where there's intelligent life or consciousness. There are different versions of this, and that could make this an exceptionally important place, and this very rare thing could be forever gone.

So I think that there's a whole lot of different reasons here. Previously, a lot of the discussion has been a very technical version of the future-directed argument, where people have thought: well, even if there's only a tiny chance of extinction, our future could have 10 to the power of 30 people in it, or something like that. Some people find that argument compelling, but not very many. I personally always found it a bit like a trick. It's a little like an argument that zero equals one: you don't find it compelling, but if someone says "point out the step where it goes wrong," you can't see a step where the argument goes wrong, and yet you still think, "I'm not very convinced; there's probably something wrong with this."

And then people who are not from the sciences — people from the humanities — find it actively alarming that anyone would make moral decisions on the grounds of an argument like that. What I'm trying to do is to show that actually there's this whole cluster of justifications rooted in all kinds of principles that many people find reasonable, and you don't have to accept all of them by any means. The idea is that if any one of these arguments works for you, then you can see why you have reasons to care about not letting our future be destroyed in our time.

Lucas Perry: Awesome. So, there's first this deontological argument about transgenerational duties to continue propagating the species and the projects and value which previous generations have cultivated. We inherit culture and art and literature and technology, so there is a duties-based argument to continue the stewardship and development of that. There is this cosmic-significance-based argument that says that consciousness may be extremely precious and rare, that there is great value held in the balance here at the precipice on planet Earth, and that it's important to guard it and do proper stewardship of it.

There is this short-term argument that says there is some reasonable likelihood of catastrophe soon — I think you put total existential risk for the next century at one in six, which we can discuss a little more later — so that would also be very bad for us, our children, and our short-term descendants, should it be realized in the next century. Then there is this argument about the potential of humanity in deep time. We've talked a bit about there being potentially large numbers of human beings in the future, or our descendants, or other things that we might find valuable, but I don't think that we've touched on the change in quality.

There are these arguments on quantity, but there's also quality — I really like how David Pearce puts it when he says, "One day we may have thoughts as beautiful as sunsets." So, could you expand a little on this argument about quality, which I think also feeds in? And with regards to the digitalization that may happen, there are also arguments around subjective time dilation, which may lead to more and better experience in the deep future. That seems to be another important aspect that's motivating for some people.

Toby Ord: Yeah. Humanity has come a long way and various people have tried to catalog the improvements in our lives over time. Often in history, this is not talked about, partly because history is normally focused on something of the timescale of a human life and things don’t change that much on that timescale, but when people are thinking about much longer timescales, I think they really do. Sometimes this is written off in history as Whiggish history, but I think that that’s a mistake.

I think that if you were to summarize the history of humanity in say, one page, I think that the dramatic increases in our quality of life and our empowerment would have to be mentioned. It’s so important. You probably wouldn’t mention the Black Death, but you would mention this. Yet, it’s very rarely talked about within history, but there are people talking about it and there are people who have been measuring these improvements. And I think that you can see how, say in the last 200 years, lifespans have more than doubled and in fact, even in the poorest countries today, lifespans are longer than they were in the richest countries 200 years ago.

We can now almost all read whereas very few people could read 200 years ago. We’re vastly more wealthy. If you think about this threshold we currently use of extreme poverty, it used to be the case 200 years ago that almost everyone was below that threshold. People were desperately poor and now almost everyone is above that threshold. There’s still so much more that we could do, but there have been these really dramatic improvements.

Some people seem to think that that story of well-being — our lives getting better, increasing freedoms, increasing empowerment through education and health — runs somehow counter to concern about existential risk: that one is an optimistic story and one's a gloomy story. Ultimately, my thinking is that it's precisely because these trends seem to point towards very optimistic futures that it's all the more important to ensure we survive to reach such futures. If all the trends suggested that the future was just going to inevitably move towards some very dreary state that had hardly any value in it, then I wouldn't be that concerned about existential risk. So I think these things actually do go together.

And it's not just in terms of our own lives that things have been getting better. We've been making major institutional reforms: while there is regrettably still slavery in the world today, there is much less than there was in the past, and we have been making progress in a lot of ways towards a more representative and more just and fair world, with a lot of room to continue on both fronts. And even then, a world that's like the best lives lived today — a world that has very little injustice or suffering — is still only a lower bound on what we could achieve.

I think one useful way to think about this is in terms of your peak experiences — those moments of luminous joy or beauty, the moments when you've been happiest, whatever they may be — and to think about how much better they are than your typical moments. My typical moments are by no means bad, but I would trade hundreds or maybe thousands of them for more of these peak experiences. And there's no fundamental reason why we couldn't spend much more of our lives at these peaks and have lives which are vastly better than our lives today — and that's assuming we don't find even higher peaks and new ways to have even better lives.

It's not just about the well-being in people's lives either. If you have any kind of conception of the types of value that humanity creates: so much of our lives will be in the future, so many of our achievements will be in the future, so many of our societies will be in the future. There's every reason to expect that our greatest successes, in all of these different ways, will be in this long future as well. There's also a host of other types of experiences that might become possible. We know that humanity has access to only a very small sliver of the space of all possible experiences. We see a certain set of colors, this three-dimensional color space.

We know that there are animals that see additional color pigments, that can see ultraviolet — parts of reality that we're blind to — and animals with a magnetic sense that can tell which direction north is and feel magnetic fields. What is it like to experience things like that? We could go so much further in exploring this space. If we can guarantee our future, then we can start to use some of our peak experiences as signposts to what might be experienceable, and I think there's so much further that we could go.

And then I guess you mentioned the possibilities of digital things as well. We don't know exactly how consciousness works — in fact, we know very little about how it works. There are some suggestive reasons to think that minds, including consciousness, are computational things, such that we might be able to realize them digitally, and then there are all kinds of possibilities that would follow from that. You could slow yourself down — slow down the rate at which you're computed — in order to see progress zoom past you and experience a dizzying rate of change in the things around you, fast-forwarding through the boring bits and skipping to the exciting bits. One's life, if one were digital, could potentially be immortal, with backup copies, and so forth.

You might even be able to branch into being two different people: have some choice coming up, say whether to stay on Earth or to go to a new settlement in the stars, and just split, with one copy going into the new life and one staying behind — or a whole lot of other possibilities. We don't know if that stuff is really possible, but it gives a taste of how we might be seeing only a very tiny part of what's possible at the moment.

Lucas Perry: This is one of the most motivating arguments for me: the fact that the space of all possible minds is probably very large and deep, and that the kinds of qualia we have access to are very limited. There's also the possibility of well-being not being contingent on the state of the external world, which is always in flux and always impermanent. If we had a science of well-being sufficiently well-developed that well-being was information- and decision-sensitive, but not contingent on the state of the external world, that seems like a form of enlightenment, in my opinion.

Toby Ord: Yeah. Some of these questions are things that you don’t often see discussed in academia, partly because there isn’t really a proper discipline that says that that’s the kind of thing you’re allowed to talk about in your day job, but it is the kind of thing that people are allowed to talk about in science fiction. Many science fiction authors have something more like space opera or something like that where the future is just an interesting setting to play out the dramas that we recognize.

But other people use the setting to explore radical "what if" questions, many of which are very philosophical and some of which are very well done. If you're interested in these types of questions, I would recommend reading Diaspora by Greg Egan, which I think is the best and most radical exploration of this. At the start of the book, the setting is a particular digital system with digital minds, substantially in the future from where we are now, that have been running much faster than the external world. Their lives are lived thousands of times faster than those of the people who've remained flesh and blood, so culturally they are vastly further on, and you get to witness what it might be like to undergo various of these events in one's life. And in the particular setting it's in, physical violence is against the laws of physics.

So rather than creating utopia by working out how to make people better behaved — the longstanding project of trying to make us all act nicely and decently to each other, which is clearly part of what's going on — there's this extra possibility that most people hadn't even thought about, because it's all digital. It's kind of like being on a web forum, where if someone attempts to attack you, you can just make them disappear so that they can no longer interfere with you at all. The book explores what life might be like in this kind of world where the laws of physics are consent-based and you can make it so that people have no impact on you if you're not enjoying the kind of impact they're having. It's a fascinating setting for exploring radically different ideas about the future, which very much may not come to pass.

But what I find exciting about these types of things is not so much that they're projections of where the future will be. Rather, if you take a whole lot of examples like this, they span a space that's much broader than you were initially considering for your probability distribution over where the future might go, and they help you realize that there are radically different ways it could go. It's for this kind of expansion of your understanding of the space of possibilities — which is where I think it's at its best, as opposed to as a direct prediction — that I would strongly recommend some Greg Egan for anyone who wants to get really into that stuff.

Lucas Perry: You sold me. I'm interested in reading it now. I'm also becoming mindful of our time here, and I have a bunch more questions I would like to get through, but before we do that, I also want to throw out that I've had a bunch of conversations recently on the question of identity — open individualism, closed individualism, and empty individualism are some of the views here.

For the long-termist perspective, I think it's deeply informative for how much or how little one may care about the deep future, or digital minds, or our descendants in a million years, or humans that are around a million years later. Many people who aren't motivated by these arguments will basically just feel like it's not them, so who cares? And I feel like these questions about personal identity really help tug and push and subvert many of our commonly held intuitions about identity — sort of going off your point about the potential of the future and how it's quite beautiful and motivating.

A funny little quip or thought there: I've sprung into Lucas consciousness, and I'm quite excited — whatever "I" means — for there to be an awakening into Dyson sphere consciousness in Andromeda or something. Maybe a bit of a wacky or weird idea for most people, but thinking more and more about the nature of personal identity makes thoughts like these more easily entertainable.

Toby Ord: Yeah, that's interesting. I haven't done much research on personal identity. In fact, the types of questions I've been thinking about for the book are more about how radical a change would be needed before it's no longer humanity — the identity of humanity across time, as opposed to the identity of a particular individual across time. And because I'm already motivated by helping others, I'm thinking more about the question of why we should help others only in our own time as opposed to helping others across time. How do you direct your altruism, your altruistic impulses?

But you're right that there could also be possibilities to do with individuals lasting into the future. There are various ideas about how long we could last, with lifespans extending very rapidly. It might be that some of the people who are alive now actually do directly experience some of this long-term future. Maybe there are changes that could happen where their identity wouldn't be preserved, because it would be too radical a break — you'd become two different kinds of being and wouldn't really be the same person — but if being the same person is important to you, then maybe you could make smaller changes. I've barely looked into this at all. I know Nick Bostrom has thought about it more. There are probably lots of interesting questions there.

Lucas Perry: Awesome. So could you give a short overview of natural or non-anthropogenic risks over the next century and why they’re not so important?

Toby Ord: Yeah. Okay, so the main natural risks I think we’re facing are probably asteroid or comet impacts and super volcanic eruptions. In the book, I also looked at stellar explosions like supernova and gamma ray bursts, although since I estimate the chance of us being wiped out by one of those in the next 100 years to be one in a billion, we don’t really need to worry about those.

But asteroids, it does appear that the dinosaurs were destroyed 65 million years ago by a major asteroid impact. It’s something that’s been very well studied scientifically. I think the main reason to think about it is A, because it’s very scientifically understood and B, because humanity has actually done a pretty good job on it. We only worked out 40 years ago that the dinosaurs were destroyed by an asteroid and that they could be capable of causing such a mass extinction. In fact, it was only in 1960, 60 years ago that we even confirmed that craters on the Earth’s surface were caused by asteroids. So we knew very little about this until recently.

And then we've massively scaled up our scanning of the skies. We think that in order to cause a global catastrophe, an asteroid would probably need to be bigger than a kilometer across. We've found about 95% of the asteroids between 1 and 10 kilometers across, and we think we've found all of the ones bigger than 10 kilometers across. Since none of the ones we've found are on a trajectory to hit us within the next 100 years, it looks like we're very safe from asteroids.

Whereas supervolcanic eruptions are much less well understood. My estimate of the chance that we could be destroyed by one in the next 100 years is about one in 10,000. In the case of asteroids, we have looked into it so carefully that we've managed to check whether any are coming towards us right now, whereas for eruptions it can be hard to get these probabilities further down until we know more — that's why my estimate for supervolcanic eruptions is where it is. The Toba eruption was some kind of global catastrophe a very long time ago; the early theories that it might have caused a population bottleneck and almost destroyed humanity don't seem to hold up anymore, but it is still illustrative of continent-scale destruction and global cooling.

Lucas Perry: And so what is your total estimation of natural risk in the next century?

Toby Ord: About one in 10,000. All of these estimates are order-of-magnitude estimates, but I think it's about the same level as I put the supervolcanic eruption, and I would put the other known natural risks much smaller. One of the reasons we can give these low numbers is that humanity has survived for 2,000 centuries so far, and related species such as Homo erectus have survived for even longer. So we know there can't be that many things among these natural risks that could destroy all humans on the whole planet.
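To see roughly how this track-record argument constrains the numbers (an illustrative sketch with our own figures, not a calculation from the book): if the per-century chance of a natural extinction event were p, the chance of surviving 2,000 centuries in a row would be (1 - p)^2000, which is only plausible for small p.

# Illustrative sketch of the survival track-record argument.

def survival_probability(p_per_century, centuries=2000):
    """Chance of surviving `centuries` centuries at a constant per-century risk."""
    return (1 - p_per_century) ** centuries

for p in (1/100, 1/1_000, 1/10_000, 1/100_000):
    print(f"risk {p:.5f} per century -> "
          f"P(survive 2000 centuries) = {survival_probability(p):.3g}")

# risk 0.01000 per century -> P(survive 2000 centuries) = 1.86e-09
# risk 0.00100 per century -> P(survive 2000 centuries) = 0.135
# risk 0.00010 per century -> P(survive 2000 centuries) = 0.819
# risk 0.00001 per century -> P(survive 2000 centuries) = 0.98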

Lucas Perry: Right, the natural conditions and environment hasn’t changed so much.

Toby Ord: Yeah, that's right. I mean, this argument only works if the risk has either been constant or expectably constant — it could be that it's going up and down, but if we don't know which, then the argument still works. The problem is that if we have some pretty good reasons to think the risks could be going up over time, then our long track record is not so helpful. And that's what happens when it comes to what you could think of as natural pandemics, such as the coronavirus.

This is something that got into humanity through some kind of human action, so it's not exactly natural how it got into humanity in the first place. And its spread through humanity — via airplanes, traveling to different continents very quickly — is also not natural, and is a faster spread than you would have had over the long-term history of humanity. Thus, these kinds of safety arguments don't count for as much as they would for things like asteroid impacts.

Lucas Perry: This class of risks, then, is risky, but less risky than the human-made risks that are a result of technology — the fancy x-risk jargon for these is anthropogenic risks. Some of these are nuclear weapons, climate change, environmental damage, synthetic-bio-induced or AI-enabled pandemics, unaligned artificial intelligence, dystopian scenarios, and other risks. Could you say a little bit about each of these, and why you view unaligned artificial intelligence as the biggest risk?

Toby Ord: Sure. Some of these anthropogenic risks we already face. Nuclear war is an example. What is particularly concerning is a very large-scale nuclear war, such as between the U.S. and Russia. Nuclear winter models have suggested that the soot from burning buildings could get lifted into the stratosphere, which is high enough that it wouldn't get rained out, so it could stay in the upper atmosphere for a decade or more and cause widespread global cooling. That would cause massive crop failures, because there wouldn't be enough time between frosts to get a proper crop, and thus could lead to massive starvation and a global catastrophe.

Carl Sagan suggested it could potentially lead to our extinction, but the current people working on this, while they are very concerned about it, don’t suggest that it could lead to human extinction. That’s not really a scenario that they find very likely. And so even though I think that there is substantial risk of nuclear war over the next century, either an accidental nuclear war being triggered soon or perhaps a new Cold War, leading to a new nuclear war, I would put the chance that humanity’s potential is destroyed through nuclear war at about one in 1000 over the next 100 years, which is about where I’d put it for climate change as well.

There is debate as to whether climate change could really cause human extinction or a permanent collapse of civilization. I think the answer is that we don't know — similarly with nuclear war — but they're both such large changes to the world, such unprecedentedly rapid and severe changes, that it's hard to be more than 99% confident that if they happened we'd make it through. So there is a difficult-to-eliminate risk that remains.

In the book, I look at the very worst climate outcomes: how much carbon is there in the methane clathrates under the ocean and in the permafrost? What would happen if it was released? How much warming would there be? And then what would happen if you had very severe amounts of warming, such as 10 degrees? I try to sketch out what we know about those things, and it is difficult to find direct mechanisms suggesting that we would go extinct, or that civilization would collapse in a way from which it could never be restarted — especially given that civilization already arose five times independently in different parts of the world, so we know it's not a fluke to get it started. So it's difficult to see the direct reasons why it could happen, but we don't know enough to be sure that it can't happen. In my sense, that means it's still an existential risk.

Then I also have a kind of catch-all for other types of environmental damage — all of these other pressures that we're putting on the planet. I think it would be too optimistic to be sure that none of those could potentially cause a collapse from which we could never recover. Although when I look at the particular examples that are suggested, such as the collapse of pollinating insects and so forth, it's hard to see how they could cause this — so it's not that I'm just seeing problems everywhere — but I do think there's something to this general style of argument that unknown effects of the stressors we're putting on the planet could be the end for us.

So I'd put all of those current types of risks at about one in 1,000 over the next 100 years. But it's the anthropogenic risks from technologies still on the horizon that scare me the most, and this would be in keeping with the idea of humanity's continued exponential growth in power, where you'd expect the risks to be escalating every century. The ones I'm most concerned about, in particular, are engineered pandemics and the risk of unaligned artificial intelligence.

Lucas Perry: All right. I think listeners will be very familiar with many of the arguments around why unaligned artificial intelligence is dangerous, so I think we could skip some of the crucial considerations there. Could you touch a little then on the risks of engineered pandemics, which may be newer to them, and then give your total risk estimate for this class of risks?

Toby Ord: Ultimately, we do have some kind of a safety argument in terms of the historical record when it comes to these naturally arising pandemics. There are ways that they could be more dangerous now than they could have been in the past, but there are also many ways in which they’re less dangerous. We have antibiotics. We have the ability to detect in real time these threats, sequence the DNA of the things that are attacking us, and then use our knowledge of quarantine and medicine in order to fight them. So we have reasons to look to our safety on that.

But there are cases of pandemic pathogens being created to be even more spreadable or even more deadly than those that arise naturally, because the natural ones are not being optimized to be deadly. Deadliness is only selected for insofar as it's in service of the pathogen spreading and surviving, and normally killing your host is a big problem for that. So there's room for people to try to engineer things to be worse than the natural ones.

One case is scientists looking to fight disease. Ron Fouchier, for example, working on bird flu, deliberately made a more infectious version that could be transmitted directly from mammal to mammal. He did that because he was trying to help, but it was, I think, very risky and a very bad move, and most of the scientific community didn't think it was a good idea. He did it in a biosafety level three enhanced lab, which is not the highest level of biosecurity — that's BSL-4 — and even at the highest level, there has been an escape of a pathogen from a BSL-4 facility. So these labs aren't safe enough, I think, to be working on newly enhanced things that are more dangerous than anything nature can create, in a world where, so far, the biggest catastrophes that we know of were caused by pandemics. I think it's pretty crazy to be working on such things until we have labs from which nothing has ever escaped.

But that's not what really worries me. What worries me more is bioweapons programs, and there was a lot of development of bioweapons in the 20th century in particular. The Soviet Union reportedly had 20 tons of smallpox that they had manufactured, for example, and they had an accidental release of smallpox which killed civilians in Russia. They also had an accidental release of anthrax, blowing it out across a whole city and killing many people. So we know from cases like this that they had a very large bioweapons program. And the Biological Weapons Convention, which is the leading international institution for prohibiting bioweapons, is chronically underfunded and understaffed — the entire budget of the BWC is less than that of a typical McDonald's.

So this is something where humanity doesn't have its priorities in order. Countries need to work together to step that up and to give the convention more responsibilities — to actually do inspections and make sure that none of them are using bioweapons. And then I'm also really concerned by the dark side of the democratization of biotechnology: the fact that rapid developments like gene drives and CRISPR — two huge breakthroughs, perhaps Nobel Prize worthy — were in both cases replicated within two years by university students in science competitions.

So we now have a situation where two years earlier there was perhaps one person in the world who could do it, or no one, and then within a couple of years there are perhaps tens of thousands of people who could do it, soon millions. If that pool of people eventually includes people like those in the Aum Shinrikyo cult responsible for the sarin gas attack in the Tokyo subway — one of whose active goals was to destroy everyone in the world — then once enough people can make engineered pathogens, you'll get someone with this terrible but massively rare motivation, or perhaps even just a country like North Korea that wants a kind of blackmail policy to make sure that no one ever invades. That's why I'm worried about this: these rapid advances are empowering us to make really terrible weapons.

Lucas Perry: All right, so wrapping things up here. How do we then safeguard the potential of humanity and Earth-originating intelligent life? You give some high-level strategy, policy, and individual-level advice, all contextualized within this grand plan for humanity: that we reach existential security by getting to a place where existential risk is decreasing every century; that we then enter a period of long reflection to contemplate and debate what is good and how we might explore the universe and optimize it to express that good; and that we then execute that and achieve our potential. So again, how do we achieve all this — how do we mitigate x-risk, how do we safeguard the potential of humanity?

Toby Ord: That's an easy question to end on. What I try to do in the book is to treat this at a whole lot of different levels. You referred to the most abstract level; to some extent, the point of that abstract level is to show that we don't need to achieve ultimate success right now. We don't need to solve everything, we don't need to find out what the fundamental nature of goodness is and which worlds would be the best. We just need to make sure we don't end up in the ones which are clearly among the worst.

The point of looking further onwards with the strategy is just to see that we can set some things aside for later. Our task now is to reach what I call existential security, and that involves an idea that will be familiar to many people thinking about existential risk: to look at particular risks and work out how to manage them and avoid falling victim to them — perhaps by being more careful with technology development, perhaps by creating protective technologies. For example, better biosurveillance systems to detect whether bioweapons have been released into the environment, so that we could contain them much more quickly, or developing, say, better work on alignment in AI research.

But it also involves not just fighting fires, but trying to become the kind of society that doesn't keep lighting these fires. I don't mean that we don't develop the technologies, but that we build in the responsibility for making sure they do not develop into existential risks, as part of the cost of doing business. We want to get the fruits of all of these technologies, both for the long term and for the short term, but we need to be aware that there's a shadow cost when we develop new things and blaze forward with technology — a shadow cost in terms of risk, which is not normally priced in. We just kind of ignore it, but eventually it will come due. If we keep developing things that produce these risks, eventually it's going to get us.

So we need to develop our wisdom, both in terms of changing our common-sense conception of morality — taking this long-term future seriously, and our debts to our ancestors seriously — and we also need international institutions to help avoid some of these tragedies of the commons and so forth: to find the cases where we'd all be prepared to pay the cost to get the security if everyone else were doing it, but where we're not prepared to do it unilaterally. We need to work out mechanisms where we can all go into it together.

There are questions there in terms of policy. We need more policy-minded people within the science and technology space — people with an eye to the governance of their own technologies. This can be done within professional societies. But we also need more technology-minded people in the policy space. We often bemoan the fact that a lot of people in government don't really know much about how the internet works or how various technologies work, but part of the problem is that the people who do know how these things work don't go into government. It's not just that you can blame the people in government for not knowing about your field; people who know about the field should perhaps actually work in policy.

So I think we need to build that bridge from both sides, and I suggest a lot of particular policy actions that we could take. A good example of how concrete and simple it can get is renewing the New START disarmament treaty. It is due to expire next year, and as far as I understand, the U.S. and Russia don't have plans to actually renew it, which is crazy, because it's one of the things most responsible for nuclear disarmament. Making sure that we sign that treaty again is a very actionable point that people can motivate around.

And I think that there's stuff for everyone to do. We may think that existential risk is too abstract and can't really motivate people in the way that some other causes can, but I think that would be a mistake. I'm trying to sketch a vision in this book that a larger movement could coalesce around. If we look back a bit, when it came to nuclear war, the largest protest in America's history at that time was against nuclear weapons, in Central Park in New York, and it was on the grounds that this could be the end of humanity. And the largest movement at the moment, in terms of standing up for a cause, is on climate change, and it's motivated by exactly these ideas about irrevocable destruction of our heritage. It really can motivate people if it's expressed the right way. And so that actually fills me with hope that things can change.

And similarly, when I think about ethics, I think about how in the 1950s there was almost no consideration of the environment within our conception of ethics. It was considered totally outside the domain of ethics or morality and not really considered much at all. The same with animal welfare: it was scarcely considered an ethical question at all. And now these are both key things that people are taught in their moral education in school, and we have entire ministries for the environment — within 10 years of Silent Spring coming out, I think all but one English-speaking country had a cabinet-level position on the environment.

So, I think that we really can have big changes in our ethical perspective, but we need to start an expansive conversation about this and start unifying these things together — not to be just like the anti-nuclear movement and the climate movement, each fighting a particular fire, but to be aware that if we want to get out ahead of these things preemptively, we need to expand to this general conception of existential risk and safeguarding humanity's long-term potential. I'm optimistic that we can do that.

My best guess is that there's a one in six chance that we don't make it through this century — or, the other way around, a five in six chance that we do make it through. If we really played our cards right, we could make it a 99% chance that we make it through this century. We're not hostages to fortune. We humans get to decide what the future of humanity will be like. There's not much risk from external forces that we can't deal with, such as the asteroids. Most of the risk is of our own doing, and we can't just sit here and bemoan the fact that we're in some difficult prisoner's dilemma with ourselves. We need to get out and solve these things, and I think we can.

Lucas Perry: Yeah. This point about moving from the particular motivation and excitement around climate change and nuclear weapons issues to a broader civilizational concern with existential risk seems to be a crucial step in developing the kind of wisdom that we talked about earlier. So yeah, thank you so much for coming on, and thanks for your contribution to the field of existential risk with this book. It's really wonderful and I recommend listeners read it. If listeners are interested, where's the best place to pick it up? How can they follow you?

Toby Ord: You could check out my website at tobyord.com. You could follow me on Twitter @tobyordoxford or I think the best thing is probably to find out more about the book at theprecipice.com. On that website, we also have links as to where you can buy it in your country, including at independent bookstores and so forth.

Lucas Perry: All right, wonderful. Thanks again, for coming on and also for writing this book. I think that it’s really important for helping to shape the conversation in the world and understanding around this issue and I hope we can keep nailing down the right arguments and helping to motivate people to care about these things. So yeah, thanks again for coming on.

Toby Ord: Well, thank you. It’s been great to be here.




Meta-preferences two ways: generator vs. patch

LessWrong.com News - 8 hours 8 minutes ago
Published on April 1, 2020 12:51 AM GMT

In my last post (Gricean communication and meta-preferences), I talked about the infinite tower of meta-preferences that in principle could be applied by humans. Example meta-preference: "I'd rather that I didn't act like I want to watch a bunch of random youtube videos - that's not a 'real' preference, it's something I should change." And then you can have meta-meta-feelings about this meta-preference, and off you go.

This infinite tower can't fit into reality, so it has to be dealt with somehow if we want to include the effects of meta-preferences in the process of value learning. Now, it's possible to deny this premise, model humans as actually only being able to compute a finite number of levels, and imitate that finite number of levels perfectly. But if we want to avoid the reductio that value learning should aim for perfect human imitation, I think there are two different camps here.

The generator camp treats each level of meta-preferences as an important generator of the level below. The infinite regress is avoided by using the fact that all levels of the tower are computed by the same mind, so incorporating meta-ethics is about approximating a fixed point of this self-ratification process. Preferably finding a fixed point that makes judgements sort of like humans' in easy cases. I think one can make a very good case for this, and that it's sort of hinted at in IDA approaches.

But my preferred approach is to treat the higher meta-levels as patches to the object level. That is, finding some approximation scheme where the meta-preferences are corrections that rapidly converge. Unlike the generator framework, different levels in the patch framework can be different parts of a heterogeneous values model, rather than being different questions posed to a single agent model.
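
To make the contrast concrete, here's a toy numeric sketch of the two framings; everything in it (the vector stand-in for values, the hypothetical apply_meta operator, the decay rate) is an illustrative assumption rather than anything established above:

    import numpy as np

    # Toy stand-ins: "values" are a small vector, and apply_meta is a hypothetical
    # operator representing one level of meta-preference acting on the level below.
    def apply_meta(values, target=np.array([1.0, 0.0, 0.5])):
        # Nudge the current values halfway toward some endorsed target.
        return 0.5 * values + 0.5 * target

    def generator_framing(values, tol=1e-6, max_iters=1000):
        # Generator camp: iterate the self-ratification operator and hope it
        # converges to a fixed point (values the operator leaves unchanged).
        for _ in range(max_iters):
            updated = apply_meta(values)
            if np.linalg.norm(updated - values) < tol:
                return updated
            values = updated
        return values

    def patch_framing(values, corrections, decay=0.1):
        # Patch camp: object-level values plus a series of corrections whose
        # weights shrink fast enough that truncating the tower barely matters.
        for level, correction in enumerate(corrections, start=1):
            values = values + (decay ** level) * correction
        return values

    v0 = np.array([0.2, 0.8, 0.4])
    print(generator_framing(v0))  # converges to the fixed point [1.0, 0.0, 0.5]
    print(patch_framing(v0, [np.array([1.0, -1.0, 0.0]),
                             np.array([0.0, 0.5, -0.5])]))  # small, shrinking corrections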

One way of thinking about patches

In brainstorming for this post, I went through a bunch of patch ideas that won't work (happy to talk about those if prompted). But here's a way of thinking about it, by analogy to semi-supervised learning (similar to the ideas used in "What's the dream for giving natural language commands to AI?"), that might be productive:

To say it too simply, imagine just asking the AI to include meta-preferences. If we "ask" (in quotes because we might be talking about prompting with a large number of examples, or other methods of definition) the AI for the representation of "what humans want," we trigger a different representation than if we ask for "what humans want, if we knew more, thought faster, were more the people we wished we were, had grown up farther together, etc."

We have to be a little careful about what kind of thing the AI is actually doing here. Activating features is fundamentally about pattern-matching. "What humans want" would just be some representative, useful feature that does a good job helping the AI predict the prompts. Where humans are inconsistent or vague, we might want the AI to average things out by representativeness or usefulness as a feature. This would mean that minor inconsistencies would have only minor contributions, but along the axes where there's strong conflict, the sum might cancel out.
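
As a tiny worked example of that last point (made-up numbers, purely illustrative): suppose several inferred "preference" vectors agree on most axes but strongly conflict on one.

    import numpy as np

    # Hypothetical preference vectors inferred from different prompts/examples.
    # The first two axes only disagree mildly; the third is in strong conflict.
    samples = np.array([
        [0.9, 0.5,  1.0],
        [1.1, 0.4, -1.0],
        [1.0, 0.6,  1.0],
        [0.8, 0.5, -1.0],
    ])
    print(samples.mean(axis=0))  # ~[0.95, 0.5, 0.0]: stable axes survive, the contested axis cancels out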

Now, what would it do with "what humans want, if we knew more, thought faster, were more the people we wished we were, etc."? The exact same thing! It would not simulate any actual self-improvement, instead it would find some feature that's representative of self-improvement - it would change its representation of human values in a "self-improvementy" sort of way.

You can simulate this exact same cognitive pattern by asking yourself "what would someone do if they were a better person?" You didn't answer this question by simulating them, or even trying to mirror some process of self-reflection. Instead, you have some pre-computed ideas about what that sort of change looks like. These might be abstracted from actual observations of self-reflection, but they're also going to incorporate more abstract ideas that we might not have learned from experience.

There's a bunch of issues with this - it runs straight into the alien concepts problem, and we might want to more explicitly simulate the actual dynamics of self-reflection. But for now, I'm finding it a useful way to think about "meta-preferences as patch."



Discuss

Requesting examples of successful remote research collaborations, and information on what made it work?

LessWrong.com News - 9 hours 29 minutes ago
Published on March 31, 2020 11:31 PM GMT

I have some experience of remote work, e.g. my current job. Not being in the same room is mostly not a problem at all. But for some reason, trying to do research together remotely is much harder, and several people I have talked to share this experience.

I have some intuitions about why this is the case, but I also don't expect it to be very useful to discuss the reasons something cannot be done. Instead, I'm asking for positive examples.

Can you tell me about any successful remote research collaboration? And if so, give me as much information and general context about it as you can.



Discuss

LW Coronavirus Agenda Update 3/31

LessWrong.com News - 11 hours 20 minutes ago
Published on March 31, 2020 9:40 PM GMT

Two weeks ago I announced the LessWrong Coronavirus Agenda, an attempt to increase knowledge by coordinating research between LW participants. This post is an update on that. If you want to skip to action items, check out the last section.

Last Week’s Spotlight Questions

  • What is my prognosis (short term or long term) if I am infected with coronavirus?
  • What will the economic effects of a 3 week quarantine be? 3 months?

This one was obviously going to take more than a week to answer, but we had some great starts.

  • Bucky on the cost of shutdowns.
  • Magfrump on small businesses.
  • Ssobanska on what we already know from China. Due to a glitch this one was invisible to most people, so I highly encourage you to check it out now.

Other Highlights Of The Week

  • Peter Hurford of Rethink Priorities shared his research agenda, which is operating at a different level of abstraction.
  • EconDB, a per-country dashboard of economic indicators.
  • Open thread is dead. Long live the open thread.

Changes To The Agenda

  • “What is my prognosis?” is being retired from the spotlight and its answers added to the record, although it will remain available for new answers.
  • Added two answers to "What are the basic epidemiological parameters of C19?"
  • Economics questions expanded (see below)

This Week’s Spotlight Questions

  • Previous ultra-vague economics question replaced by three mildly less vague economics questions
  • What are the costs, benefits, and logistics of opening up new vaccine facilities?

Prompted by David Manheim’s excellent comment and subsequent excellent discussion in the agenda post, I want to investigate how, exactly, vaccine production facilities could be made and what the costs and benefits of doing so are.
This was published a day later than planned because I used up my daily posting quota on the actual questions and got throttled.



Discuss

Outperforming the human Atari benchmark

LessWrong.com News - March 31, 2020 - 22:33
Published on March 31, 2020 7:33 PM GMT

I haven't paid much attention to Atari in a long time, and so would appreciate takes from anyone who follows this more closely. My take:

A single architecture that can handle the games that require significant exploration, the games that require long-term credit assignment, and the 'easy' games, without tricks or expert performance, seems like an achievement to me. The main question then becomes "does it scale out of the simulator to problems we care about more than Atari?"

What does it use? It looks like it's a lot of 'engineering improvements' and data. That is, lots of approaches will have tons of small components that are set to some simple default. You need to explore, so you use the simplest possible method of exploration: on every action you roll a die and, epsilon percent of the time, take a random action. Of course you could do better if you thought about it more, but there are many places like that in the code, and 'thinking about it more' requires both developer effort and compute (which, since the efficiency of the whole thing depends on how much compute it can spend, might mean you're spending dollars to earn cents if you use a fancy method where a simple one would do).
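
For concreteness, that "simplest possible method of exploration" is epsilon-greedy; here's a minimal sketch (illustrative only, not Agent57's actual exploration machinery):

    import random

    def epsilon_greedy_action(q_values, epsilon=0.05):
        # With probability epsilon, take a uniformly random action;
        # otherwise take the action with the highest estimated value.
        if random.random() < epsilon:
            return random.randrange(len(q_values))
        return max(range(len(q_values)), key=lambda a: q_values[a])

    # Example: with estimated values [0.1, 0.7, 0.3], the agent usually picks action 1.
    print(epsilon_greedy_action([0.1, 0.7, 0.3]))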

That is, this doesn't look like a radical new competitor to DQN; this looks like DQN after five years of iteration and testing, where some defaults are swapped to fancier settings and others aren't, and thus is updating me a little upward on the importance of engineering and data as an input factor.

They say this isn't the end of Atari research:

This by no means marks the end of Atari research, not only in terms of data efficiency, but also in terms of general performance. We offer two views on this: firstly, analyzing the performance among percentiles gives us new insights on how general algorithms are. While Agent57 achieves strong results on the first percentiles of the 57 games and holds better mean and median performance than NGU or R2D2, as illustrated by MuZero, it could still obtain a higher average performance. Secondly, all current algorithms are far from achieving optimal performance in some games. To that end, key improvements to use might be enhancements in the representations that Agent57 uses for exploration, planning, and credit assignment.



Discuss

Rational Manifesto

LessWrong.com News - March 31, 2020 - 20:03
Published on March 31, 2020 5:03 PM GMT

About one month ago, I saw the Agile Manifesto. It inspired me to create my own, because, you know, manifestos seem to be popular. So, for the 1st of April, I created the Rational Manifesto:

  • Manifestos are full of obvious things. This is one of them.
  • The previous item was a recursive joke.
  • Before using any advice from the manifesto, you should check that the advice works.
  • The previous advice should be checked too.
  • If the manifesto contains one thing that seems to be true, that says nothing about the others.
  • If you found something wrong in the manifesto, the manifesto is useless.
  • The previous item is an example of a wrong one.
  • Every manifesto tries to become a cult.
  • If you didn't understand the previous items, you have a chance to join the cult.
  • Add one cultist point for every item you don't understand, including this one.
  • This manifesto was signed by Eliezer Yudkowsky, Donald Knuth, Linus Torvalds and Daniel Kahneman.
  • If that makes it more valuable to you, add another two cultist points.
  • To answer your question: no, they haven't signed it yet.
  • Add another two points.
  • If you often try to explain something starting from the end.
  • If you think all the items in this manifesto are obvious, add another 5 points.
  • This manifesto is excellent and will save the world.
  • Another 2 points if you believed that.
  • If you weren't counting your points, add 3 more.
  • If you finished with a positive number of cultist points, think about that.

And yep, a week after creating it, I went through the manifesto myself; my score was positive.



Discuss

April Coronavirus Open Thread

LessWrong.com News - March 31, 2020 - 19:48
Published on March 31, 2020 4:48 PM GMT

Last month’s Coronavirus Open Thread did a fantastic job at being a place for coronavirus-related information and questions that didn’t merit a top level post, but at almost 400 comments, many of which were great at the time but are now obsolete, it’s getting a little creaky. So for the next month (probably. Who knows what’s going to happen in that month) this is the new spot for comments and questions about coronavirus that don’t fit anywhere else and aren’t worth a top level post.

Wondering what happened in last month’s thread? Here are the timeless and not-yet-eclipsed-by-events highlights:

  • Scott Alexander comes up with Hammer and Dance 6 days before Tomas Pueyo
  • Spiracular on why SARS-Cov-2 is unlikely to be lab-created.
  • Two documents collating estimates of basic epidemiological parameters, in response to this thread
  • Discussion on whether the tuberculosis vaccine provides protection against COVID-19.
  • Suggestive evidence that COVID-19 removes sense of taste and smell.
  • Could copper tape be net harmful?

Want to know what’s coming up in the future? Check out the Coronavirus Research Agenda and its related questions.

Wondering why the April thread is going up on 3/31? Because everything’s a little more confusing on 4/1 and I didn’t want the extra hassle.



Discuss

What are the costs, benefits, and logistics of opening up new vaccine facilities?

LessWrong.com News - March 31, 2020 - 19:43
Published on March 31, 2020 4:43 PM GMT

In this comment on the parent question, David Manheim makes the case for building new vaccine production facilities before we have a vaccine ready to produce, and for this to be subsidized. I find this question interesting in its own right, but it's also an opportunity to dig into the nitty-gritty of producing something, which I wish happened more on LessWrong. I don't expect this to lead directly to a vaccine facility coming into existence (presumably the people who have the power to do that already have access to the knowledge as well), but I think it's a useful exercise in how things actually get done.


To recap, David's argument as I understand it is:

  • the world is under-capacity for vaccines as it is
  • building a vaccine production facility takes a lot of time and money, so we need to start funding and building them now to have them ready when we have candidates ready.
  • we're likely to want to mass-produce multiple vaccines until we know which ones work the best, increasing the amount of capacity needed.

There's already a great discussion of this in the replies to his thread, and I encourage you to read it before commenting here. Some questions that came up in that discussion:

  • is there such a thing as a generic vaccine production facility, or do different vaccines require different facilities?
  • what raw materials (e.g. sterile eggs) are we also likely to need large quantities of, and should perhaps start sourcing now?

Some additional questions I have are:

  • are there costs besides the obvious financial ones to building these facilities ahead of time? Are they competing for a resource we need more elsewhere?
  • who does have the power and knowledge to make these facilities?
  • what are some historical equivalents?


Discuss

Blog Post Day II Retrospective

LessWrong.com News - March 31, 2020 - 18:03
Published on March 31, 2020 3:03 PM GMT

Saturday was Blog Post Day II. It wasn't as good as Blog Post Day. However, I'm still glad I did it.

To my knowledge six people wrote posts for Blog Post Day II. However, only one person (Ben Pace) finished on time and posted on LW. One other person (myself) took an extra two days to post, because I wanted to get feedback from some friends. Interestingly, neither Ben nor I wrote the posts we originally intended to write; we gave up on those and switched to our Plan Bs instead.

The remaining four people seem to have mostly made good progress, even finishing their posts, but they are either posting elsewhere (I don't know where) or saving up their drafts to post later as part of a sequence.

This is about 75% as much participation as we had last time. I'm thinking I should do these less frequently than every month. I'll do the next one in 2-3 months probably. If you want it to happen sooner, say so! These things exist to serve the public interest, so if you are interested, make it public.



Discuss

What are some good / fun / informative treatises on good governance?

LessWrong.com News - March 31, 2020 - 15:06
Published on March 31, 2020 12:06 PM GMT

Like many of us, I find myself with both (1) an unusual amount of time on my hands to read and (2) a renewed interest in systems of governance that lead to competent collective decision-making.

I honestly haven't read much in this sphere. I was hoping for something that is to, say, political science as Thinking, Fast and Slow is to psychology and rationality: something that is interesting, gives me a good notion of some big problems, is probably partly wrong, but remains informative nevertheless.

Although I'd be even more interested in something that is to political science as, say, CFAR's handbook is to rationality: an attempt both to identify problems and to provide means by which they could be solved. A kind of compressed, economically aware, game-theoretically involved, historically anecdoted look at ways of governing and at trying to fix the classical problems of malice and incompetence within government.

I know the question has a hopeless scope; any suggestions welcome.



Discuss

Cloudy With a Chance of Meatballs PS3 Review

LessWrong.com News - March 31, 2020 - 05:20
Published on March 31, 2020 2:20 AM GMT

Growing up we didn't have a video game system and I've never played very much, but a friend found that working from home was not compatible with having their PS3 around, so now we have theirs. I was looking for games at a good level for our kids (Lily, just turned six, and Anna, just turned four) and on the advice of this r/PS3 comment decided to get a used copy of Cloudy With A Chance of Meatballs. We've now played the first few levels, so I wanted to write up my impressions. Here I am, someone who doesn't know anything about video games, writing about a game that came out eleven years ago, whee!

The game doesn't bear much resemblance to the book, except that there's large food everywhere and occasionally more falls from the skies. You move around with a joystick, jump with a button, and use whatever implement you happen to be holding with another button. In our first time playing Lily understood each of the actions right away, but had trouble combining them. Jumping onto something involves both moving and jumping at the same time, which was tricky. Even just our second time playing she had that better, but precise movement is still at the edge of what she can do.

Anna understands how the controller works, but doesn't have good enough command over her fingers to reliably get it to do what she wants. She can make her character go in her chosen direction more often than not, but if she's going for any sort of target she usually overshoots it. Sometimes she gets frustrated and just presses buttons randomly. I think I should play a few more games with just Anna, because when Lily's playing she often just watches her instead of figuring out how to move her own character.

The game has a main character and a sidekick, but they can do all the same things. The difference is that if the two go off in different directions the camera follows the main character and the sidekick gets teleported back into view. There's nothing the sidekick can do to accidentally impede progress. This works very well for two kids where one mostly understands the game and the other is tagging along.

The characters can die, but death doesn't matter very much. If you fall in the water or spend too long walking through hot chili you fall over dramatically but then pop back up again next to the other player. If both players die at once you start a bit farther back, but not much.

There's a small amount of violence, like little gummy bears who pounce on you, and you do chop them or pour hot coffee on them, but it's light and cartoony enough that I'm not really worried about it.

So far Lily and Anna have played with me kibitzing, I've played with each of them individually, and ~5d in they just tried playing together without me. They've played about half an hour a day each day since we got it, and would be enthusiastic about playing longer if we let them. It's a very silly game, but it's at a good level for them in that there's nothing you need to do that's very hard and the stakes are super low throughout.

Comment via: facebook



Discuss

What will happen to supply chains in the era of COVID-19?

LessWrong.com News - March 31, 2020 - 04:57
Published on March 31, 2020 1:57 AM GMT

Bucky and skybian both asked questions about how COVID-19 and associated shutdowns will affect supply chains. This is a question just to discuss that. I'm interested in anything from "observations of very concrete things that have already happened in China" to "speculation about weird counterintuitive effects" to "here are some facts about supply chains in general".

Like the parent question, this is intended to provoke more babble than prune. But even the babbliest thing should be presented such that other people can build on it. So if you have an idea, please share the reasoning or data behind it as well.



Discuss

How will this recession differ from the last two?

LessWrong.com News - March 31, 2020 - 04:32
Published on March 31, 2020 1:32 AM GMT

My model is that the last recession (2008) was essentially financial in nature: a bunch of wealth was destroyed on paper, but we lost no productive capacity. The 2002 recession was similar: most of what happened was revealing that things were valueless, rather than destroying value.

I expect this recession to be different, because it stems from an actual decrease in the availability of labor (although it's still subject to demand spirals stemming from that). I'd expect it to have more in common with the 1973 oil embargo (on which I can find zero books), which was also a sudden decrease in the supply of a crucial production component. How will the supply-shock nature change the consequences of the recession? Or am I wrong that this one will be different at all?


Note: These questions are intended to provoke more babble than prune. But even the babbliest thing should be presented such that other people can build on it. So if you have a prediction, please share the reasoning or data behind it as well.



Discuss

What happens in a recession anyway?

LessWrong.com News - March 31, 2020 - 04:16
Published on March 31, 2020 1:16 AM GMT

I know the technical definition of a recession (a GDP contraction lasting at least two quarters), but do not have a satisfactory answer as to what happens.
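
For what it's worth, here is a minimal sketch of just that technical definition, applied to a made-up quarterly GDP series (toy code, not an economic claim):

    def meets_technical_definition(quarterly_gdp):
        # True if GDP contracted for at least two consecutive quarters.
        growth = [b - a for a, b in zip(quarterly_gdp, quarterly_gdp[1:])]
        return any(g1 < 0 and g2 < 0 for g1, g2 in zip(growth, growth[1:]))

    print(meets_technical_definition([100, 101, 99, 97, 98]))  # True: two straight quarters of contraction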

A non-exhaustive list of the kinds of consequences I'm thinking of, sometimes including references, which are intended to be suggestive not definitive:



Discuss

Why do we have offices?

LessWrong.com News - March 31, 2020 - 04:01
Published on March 31, 2020 1:01 AM GMT

They seem expensive, and not useful for jobs that can apparently be done remotely.

h/t Matthew Barnett for making me wonder

Followup question: how can we translate those benefits to remote work?



Discuss
