
LessWrong.com News

A community blog devoted to refining the art of rationality
Updated: 11 minutes 49 seconds ago

Dance Weekend Schedule Checklist

December 3, 2019 - 04:50
Published on December 3, 2019 1:50 AM UTC

When I've been booked for a dance weekend, the organizers generally send a draft schedule out a few weeks in advance to look over. If they don't send one I'll usually ask about two weeks out. There are a few things I want to check, since it's easy for organizers to miss things when putting schedules together:

  • Is there enough time for sleeping from when we stop playing until we start playing the next morning, factoring in any time getting between housing and the venue? One weekend scheduled us to play until 11:30pm on Friday and start at 9:30am on Saturday, while housing us almost an hour from the hall.

  • Does the schedule include time for our initial sound check? Generally we want ~45min to get things right before playing the first time, ~30min minimum.

  • Does the schedule include time for changeovers? About 20min for one band to get off stage and the other to get on is usually good.

  • Are there any unnecessary band changeovers? As long as you leave enough time for sleeping, having one band end the evening and then start the next morning saves you a round of setting up the stage.

  • Are we playing about the amount of time we expected? One weekend had us playing four 3.5hr contra sessions which is really a lot.

  • If we're playing in multiple places, do we have enough time to get between them?

  • Are we scheduled to play anything we don't know how to play, or that needs special preparation? We're leading a sing-along? A slot says "techno contra" next to our name?

  • Are there any tweaks that would make naps or other traveling-with-a-kid things easier?

  • Are we playing a medley, and if so is it a reasonable length? One event scheduled us to play a 1hr15min medley; after talking with the organizers we ended up playing it ABA with the other band.

  • Are we getting to work with both callers? In a typical two-band two-caller two-night weekend there's no way to have all three of (a) both bands get a night of opening and a night of closing, (b) both callers get the same, and (c) both bands get a night slot with both callers. I'd rather lose (a) or (b) than (c), but it's especially sad when we don't end up working with one of the callers at all.

This isn't the same list everyone will have: some people need more sleep, others need to minimize the number of times they get on/off the stage, others play very poorly early in the morning or late at night, and others have restrictions I haven't thought of. While there are some issues that won't be clear until you've been there and see what it's like (the building has the AC set to 58 with chiller units right behind the band‽) it's worth it to look over the schedule carefully to notice places where adjustments could be helpful.
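If you like checking drafts programmatically, here is a minimal sketch of what the overnight-sleep check could look like. The schedule, travel time, and thresholds below are made up for illustration; they aren't from any real event:

```python
from datetime import datetime, timedelta

# Hypothetical draft schedule for one band: (day, start, end), 24-hour clock.
slots = [
    ("Fri", "20:00", "23:30"),
    ("Sat", "09:30", "12:30"),
    ("Sat", "20:00", "23:00"),
    ("Sun", "10:00", "12:00"),
]
travel_one_way = timedelta(minutes=55)     # housing is ~an hour from the hall
setup_and_packing = timedelta(minutes=45)  # packing up at night, setup in the morning
min_sleep = timedelta(hours=8)

def parse(day, time_str):
    # Fake dates; only the ordering of the days matters here.
    base = {"Fri": "2019-12-06", "Sat": "2019-12-07", "Sun": "2019-12-08"}[day]
    return datetime.strptime(f"{base} {time_str}", "%Y-%m-%d %H:%M")

for (d1, _, end1), (d2, start2, _) in zip(slots, slots[1:]):
    if d1 == d2:
        continue  # same-day changeover, not an overnight gap
    overnight = parse(d2, start2) - parse(d1, end1)
    left_for_sleep = overnight - 2 * travel_one_way - setup_and_packing
    flag = "ok" if left_for_sleep >= min_sleep else "TOO SHORT"
    print(f"{d1} {end1} -> {d2} {start2}: {left_for_sleep} for sleep ({flag})")
```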



Discuss

MIRI’s 2019 Fundraiser

December 3, 2019 - 04:16
Published on December 3, 2019 1:16 AM UTC

(Crossposted from the MIRI blog)

Giving Tuesday is tomorrow, December 3! Facebook will match up to $100,000 per organization, starting at 5:00:00am PT—so set those alarms early! The matching pool will likely be used up within 10 seconds, so see Colm’s post for tips on how to get donations to MIRI’s Fundraiser FB Page matched. Giving Tuesday donations will be included in the fundraiser total.

MIRI’s 2019 fundraiser is now live, December 2–31!

Over the past two years, huge donor support has helped us double the size of our AI alignment research team. Hitting our $1M fundraising goal this month will put us in a great position to continue our growth in 2020 and beyond, recruiting as many brilliant minds as possible to take on what appear to us to be the central technical obstacles to alignment.

Our fundraiser progress, updated in real time:

MIRI is a CS/math research group with a goal of understanding how to reliably “aim” future general-purpose AI systems at known goals. For an introduction to this research area, see Ensuring Smarter-Than-Human Intelligence Has A Positive Outcome and Risks from Learned Optimization in Advanced Machine Learning Systems. For background on how we approach the problem, see 2018 Update: Our New Research Directions and Embedded Agency.

At the end of 2017, we announced an expanded research focus and a plan to substantially grow our research team, with a goal of hiring “around ten new research staff over the next two years.” Two years later, I’m happy to report that we’re up eight research staff, and we have a ninth starting in February of next year, which will bring our total research team size to 20.[1]

We remain excited about our current research directions, and continue to feel that we could make progress on them more quickly by adding additional researchers and engineers to the team. As such, our main organizational priorities remain the same: push forward on our research directions, and grow the research team to accelerate our progress.

While we’re quite uncertain about how large we’ll ultimately want to grow, we plan to continue growing the research team at a similar rate over the next two years, and so expect to add around ten more research staff by the end of 2021.

Our projected budget for 2020 is $6.4M–$7.4M, with a point estimate of $6.8M,[2] up from around $6M this year.[3] In the mainline-growth scenario, we expect our budget to look something like this:

Looking further ahead, since staff salaries account for the vast majority of our expenses, I expect our spending to increase proportionately year-over-year while research team growth continues to be a priority.

Given our $6.8M budget for 2020, and the cash we currently have on hand, raising $1M in this fundraiser will put us in a great position for 2020. Hitting $1M positions us with cash reserves of 1.25–1.5 years going into 2020, which is exactly where we want to be to support ongoing hiring efforts and to provide the confidence we need to make and stand behind our salary and other financial commitments.

For more details on what we’ve been up to this year, and our plans for 2020, read on!

1. Workshops and scaling up

If you lived in a world that didn’t know calculus, but you knew something was missing, what general practices would have maximized your probability of coming up with it?

What if you didn’t start off knowing something was missing? Could you and some friends have gotten together and done research in a way that put you in a good position to notice it, to ask the right questions?

MIRI thinks that humanity is currently missing some of the core concepts and methods that AGI developers will need in order to align their systems down the road. We think we’ve found research paths that may help solve that problem, and good ways to rapidly improve our understanding via experiments; and we’re eager to add more researchers and engineers’ eyes and brains to the effort.

A significant portion of MIRI’s current work is in Haskell, and benefits from experience with functional programming and dependent type systems. More generally, if you’re a programmer who loves hunting for the most appropriate abstractions to fit some use case, developing clean concepts, making and then deploying elegant combinators, or audaciously trying to answer the deepest questions in computer science, then we think you should apply to work here, get to know us at a workshop, or reach out with questions.


As noted above, our research team is growing fast. The latest additions to the MIRI team include:

Evan Hubinger, a co-author on “Risks from Learned Optimization in Advanced Machine Learning Systems”. Evan previously designed the functional programming language Coconut, was an intern at OpenAI, and has done software engineering work at Google, Yelp, and Ripple.

Jeremy Schlatter, a software engineer who previously worked at Google and OpenAI. Some of the public projects Jeremy has contributed to include OpenAI’s Dota 2 bot and a popular (but now deprecated) debugger for the Go programming language.

Seraphina Nix, joining MIRI in February 2020. Seraphina graduates this month from Oberlin College with a major in mathematics and minors in computer science and physics. She has previously done research on ultra-lightweight dark matter candidates, deep reinforcement learning, and teaching neural networks to do high school mathematics.

Rafe Kennedy, who joins MIRI after working as an independent existential risk researcher at the Effective Altruism Hotel. Rafe previously worked at the data science startup NStack, and he holds an MPhysPhil from the University of Oxford in Physics & Philosophy.

MIRI’s hires and job trials are typically drawn from our 4.5-day, all-expense-paid AI Risk for Computer Scientists (AIRCS) workshop series.

Our workshop program is the best way we know of to bring promising talented individuals into what we think are useful trajectories towards being highly-contributing AI researchers and engineers. Having established an experience that participants love and that we believe to be highly valuable, we plan to continue experimenting with new versions of the workshop, and expect to run ten workshops over the course of 2020, up from eight this year.

These programs have led to a good number of new hires at MIRI as well as other AI safety organizations, and we find them valuable for everything from introducing talented outsiders to AI safety, to leveling up people who have been thinking about these issues for years.

If you’re interested in attending, apply here. If you have any questions, we highly encourage you to shoot Buck Shlegeris an email.

Our MIRI Summer Fellows Program plays a similar role for us, but is more targeted at mathematicians. We’re considering running MSFP in a shorter format in 2020. For any questions about MSFP, email Colm Ó Riain.

2. Research and write-ups

Our 2018 strategy update continues to be a great overview of where MIRI stands today, describing how we think about our research, laying out our case for working here, and explaining why most of our work currently isn’t public-facing.

Given the latter point, I’ll focus in this section on spotlighting what we’ve written up this past year, providing snapshots of some of the work individuals at MIRI are currently doing (without any intended implication that this is representative of the whole), and conveying some of our current broad impressions about how our research progress is going.

Some of our major write-ups and publications this year were:

  • “Risks from Learned Optimization in Advanced Machine Learning Systems,” by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. The process of generating this paper significantly clarified our own thinking, and informed Scott and Abram’s discussion of subsystem alignment in “Embedded Agency.”
    Scott views “Risks from Learned Optimization…” as being of comparable importance to “Embedded Agency” as exposition of key alignment difficulties, and we’ve been extremely happy about the new conversations and research that the field at large has produced to date in dialogue with the ideas in “Risks from Learned Optimization…”
  • Thoughts on Human Models, by Scott Garrabrant and DeepMind-based MIRI Research Associate Ramana Kumar, argues that the AI alignment research community should begin prioritizing “approaches that work well in the absence of human models.”
    The role of human models in alignment plans strikes us as one of the most important issues for MIRI and other research groups to wrestle with, and we’re generally interested in seeing what new approaches groups outside MIRI might come up with for leveraging AI for the common good in the absence of human models.
  • “Cheating Death in Damascus,” by Nate Soares and Ben Levinstein. We presented this decision theory paper at the Formal Epistemology Workshop in 2017, but a lightly edited version has now been accepted to The Journal of Philosophy, previously voted the second highest-quality academic journal in philosophy.
  • The Alignment Research Field Guide, a very accessible and broadly applicable resource both for individual researchers and for groups getting off the ground.

Our other recent public writing includes an Effective Altruism Forum AMA with Buck Shlegeris, Abram Demski’s The Parable of Predict-O-Matic, and the many interesting outputs of the AI Alignment Writing Day we hosted toward the end of this year’s MIRI Summer Fellows Program.


Turning to our research team, last year we announced that prolific Haskell programmer Edward Kmett joined the MIRI team, freeing him up to do the thing he’s passionate about—improving the state of highly reliable (and simultaneously highly efficient) programming languages. MIRI Executive Director Nate Soares views this goal as very ambitious, though he would feel better about the world if there existed programming languages that were both efficient and amenable to strong formal guarantees about their properties.

This year Edward moved to Berkeley to work more closely with the rest of the MIRI team. We’ve found it very helpful to have him around to provide ideas and contributions to our more engineering-oriented projects, helping give some amount of practical grounding to our work. Edward has also continued to be a huge help with recruiting through his connections in the functional programming and type theory world.

Meanwhile, our newest addition, Evan Hubinger, plans to continue working on solving inner alignment for amplification. Evan has outlined his research plans on the AI Alignment Forum, noting that relaxed adversarial training is a fairly up-to-date statement of his research agenda. Scott and other researchers at MIRI consider Evan’s work quite exciting, both in the context of amplification and in the context of other alignment approaches it might prove useful for.

Abram Demski is another MIRI researcher who has written up a large number of his research thoughts over the last year. Abram reports (fuller thoughts here) that he has moved away from a traditional decision-theoretic approach this year, and is now spending more time on learning-theoretic approaches, similar to MIRI Research Associate Vanessa Kosoy. Quoting Abram:

Around December 2018, I had a big update against the “classical decision-theory” mindset (in which learning and decision-making are viewed as separate problems), and towards taking a learning-theoretic approach. [… I have] made some attempts to communicate my update against UDT and toward learning-theoretic approaches, including this write-up. I talked to Daniel Kokotajlo about it, and he wrote The Commitment Races Problem, which I think captures a good chunk of it.

For her part, Vanessa’s recent work includes the paper “Delegative Reinforcement Learning: Learning to Avoid Traps with a Little Help,” which she presented at the ICLR 2019 SafeML workshop.

I’ll note again that the above are all snapshots of particular research directions various researchers at MIRI are pursuing, and don’t necessarily represent other researchers’ views or focus. As Buck recently noted, MIRI has a pretty flat management structure. We pride ourselves on minimizing bureaucracy, and on respecting the ability of our research staff to form their own inside-view models of the alignment problem and of what’s needed next to make progress. Nate recently expressed similar thoughts about how we do nondisclosure-by-default.

As a consequence, MIRI’s more math-oriented research especially tends to be dictated by individual models and research taste, without the expectation that everyone will share the same view of the problem.


Regarding his overall (very high-level) sense of how MIRI’s new research directions are progressing, Nate Soares reports:

Progress in 2019 has been slower than expected, but I have a sense of steady progress. In particular, my experience is one of steadily feeling less confused each week than the week before—of me and other researchers having difficulties that were preventing us from doing a thing we wanted to do, staring at them for hours, and then realizing that we’d been thinking wrongly about this or that, and coming away feeling markedly more like we know what’s going on.
An example of the kind of thing that causes us to feel like we’re making progress is that we’ll notice, “Aha, the right tool for thinking about all three of these apparently-dissimilar problems was order theory,” or something along those lines; and disparate pieces of frameworks will all turn out to be the same, and the relevant frameworks will become simpler, and we’ll be a little better able to think about a problem that I care about. This description is extremely abstract, but represents the flavor of what I mean by “steady progress” here, in the same vein as my writing last year about “deconfusion.”
Our hope is that enough of this kind of progress gives us a platform from which we can generate particular exciting results on core AI alignment obstacles, and I expect to see such results reasonably soon. To date, however, I have been disappointed by the amount of time that’s instead been spent on deconfusing myself and shoring up my frameworks; I previously expected to have more exciting results sooner.
In research of the kind we’re working on, it’s not uncommon for there to be years between sizeable results, though we should also expect to sometimes see cascades of surprisingly rapid progress, if we are indeed pushing in the right directions. My inside view of our ongoing work currently predicts that we’re on a productive track and should expect to see results we are more excited about before too long.

Our research progress, then, is slower than we had hoped, but the rate and quality of progress continues to be such that we consider this work very worthwhile, and we remain optimistic about our ability to convert further research staff hours into faster progress. At the same time, we are also (of course) looking for where our research bottlenecks are and how we can make our work more efficient, and we’re continuing to look for tweaks we can make that might boost our output further.

If things go well over the next few years—which seems likely but far from guaranteed—we’ll continue to find new ways of making progress on research threads we care a lot about, and continue finding ways to hire people to help make that happen.

Research staff expansion is our biggest source of expense growth, and by encouraging us to move faster on exciting hiring opportunities, donor support plays a key role in how we execute on our research agenda. Though the huge support we’ve received to date has put us in a solid position even at our new size, further donor support is a big help for us in continuing to grow. If you want to play a part in that, thank you.

Donate Now
  1. This number includes a new staff member who is currently doing a 6-month trial with us.
  2. These estimates were generated using a model similar to the one I used last year. For more details see our 2018 fundraiser post.
  3. This falls outside the $4.4M–$5.5M range I estimated in our 2018 fundraiser post, but is in line with the higher end of revised estimates we made internally in Q1 2019.


Discuss

Open & Welcome Thread - December 2019

December 3, 2019 - 03:00
Published on December 3, 2019 12:00 AM UTC

  • If it’s worth saying, but not worth its own post, here's a place to put it.
  • And, if you are new to LessWrong, here's the place to introduce yourself.
    • Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.

The Open Thread sequence is here.



Discuss

Call for resources on the link between causation and ontology

December 3, 2019 - 01:03
Published on December 2, 2019 10:03 PM UTC

I'm a little ashamed to admit I only read "Why Correlation Usually ≠ Causation" yesterday. It's very, very good, and you should read it too.

My essential takeaway from it is this: You can find nonzero correlations between almost anything you care to measure. However, it seems unlikely that the number of causal relationships in the universe scales at all proportionally to the number of correlative ones.
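(As an aside, here is a toy simulation of my own, not from the linked essay, that makes the asymmetry concrete: build a sparse random linear causal graph over a few dozen variables, sample data from it, and compare the number of direct causal links to the number of variable pairs that end up noticeably correlated. The sizes and thresholds are arbitrary, and you can watch the two counts diverge as you make the graph bigger or denser.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_samples, edge_prob = 50, 5000, 0.04

# Sparse random DAG: variable j may be directly caused by any earlier variable i < j.
mask = np.triu(rng.random((n_vars, n_vars)) < edge_prob, k=1)
weights = mask * rng.normal(1.0, 0.5, (n_vars, n_vars))

# Sample data in topological order: each variable is a weighted sum of its parents plus noise.
data = np.zeros((n_samples, n_vars))
for j in range(n_vars):
    data[:, j] = data @ weights[:, j] + rng.normal(size=n_samples)

n_causal_edges = int(mask.sum())
corr = np.corrcoef(data, rowvar=False)
upper = np.triu_indices(n_vars, k=1)
n_correlated_pairs = int(np.count_nonzero(np.abs(corr[upper]) > 0.05))

print(f"direct causal edges:      {n_causal_edges}")
print(f"pairs with |corr| > 0.05: {n_correlated_pairs} of {len(upper[0])}")
```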

This question feels to me like the wrong one to be asking, somehow. It feels ontology-flavored, in a way that doesn't make it a great match for how I normally think about statistics, and I would appreciate some book recommendations on the subject in the comments. But first, let me try to explain my thinking on this.

Start with the "base" layer of reality, the movement of atoms, or electrons, or strings, or what-have-you. If we are watching the actions and reactions of that layer from afar, then it seems to me that we have the best possible environment for doing a few experiments to first demonstrate correlation, and then a few more to demonstrate causation afterwards. While we can never be 100% sure, we can asymptotically reach certainty in that world. So far, so good; there's a reason experimental physics can get so precise with its predictions.

When you go one layer of abstraction up -- to molecules, if our base layer was "atoms" -- it seems to me that suddenly the difficulty of ascertaining causation should skyrocket. There are many more confounding variables and possibilities that make designing an adequate experiment much harder. In addition, it is harder to define "molecule" precisely than it was to define "atom". How far do we move the constituent atoms apart before we turn a molecule into a non-molecule, for example? That seems like a question that you have to sometimes answer differently depending on the scenario.

The experiments you run for correlation between molecules, on the other hand, might be harder, but I don't get the feeling they experience the same kind of... Combinatorial-superexplosion-y additional difficulty that an experiment designed for causation has to handle.

You should probably try to account for things like thermal noise and trace impurities if you have to, but past a certain point it's sort of okay to let go of the reins and just say "We can do more correlation tests later". The claim underneath that claim is that the things which muck up the data are mostly due to random chance, and if we do the experiment again under different conditions, we will get a different set of random circumstances wrapping around the experiment.

This problem feels like it recurs every time you go up a level, which is why it concerns me so much. When you get to the level of dealing with human beings in medicine, it feels to me that the difficulty of determining causation must be so vast as to be almost not worth the effort; and yet, at the same time, that intuition feels clearly wrong, because there was a lot of low-hanging fruit in the world of medicine -- vaccines being the example par excellence. But on the other hand, vaccines operate on a relatively simple causal mechanism! Maybe it shouldn't be surprising that such low-hanging fruit exists; what would be truly impressive would be if we found an easy cure to some disease founded upon principles that only show themselves at the level of reasoning about humans themselves, the same way we usually reason about molecules-as-primitives instead of atoms-as-primitives when we start to do biochemistry.

I apologize if this isn't a terribly clear explanation of what I'm getting at. If anything in here strikes you as similar to a problem you have thought about yourself and have read up on, let me know. At the least, I should be able to come back within a few months and be able to properly pose my question.



Discuss

Nov 2019 gwern.net newsletter

December 3, 2019 - 00:16
Published on December 2, 2019 9:16 PM UTC



Discuss

A list of good heuristics that the case for AI x-risk fails

December 2, 2019 - 22:26
Published on December 2, 2019 7:26 PM UTC

I think one reason machine learning researchers don't think AI x-risk is a problem is because they haven't given it the time of day. And on some level, they may be right in not doing so!

We all need to do meta-level reasoning about what to spend our time and effort on. Even giving an idea or argument the time of day requires it to cross a somewhat high bar, if you value your time. Ultimately, in evaluating whether it's worth considering a putative issue (like the extinction of humanity at the hands (graspers?) of a rogue AI), one must rely on heuristics; by giving the argument the time of day, you've already conceded a significant amount of resources to it! Moreover, you risk privileging the hypothesis or falling victim to Pascal's Mugging.

Unfortunately, the case for x-risk from out-of-control AI systems seems to fail many powerful and accurate heuristics. This can put proponents of this issue in a similar position to flat-earth conspiracy theorists at first glance. My goal here is to enumerate heuristics that arguments for AI takeover scenarios fail.

Ultimately, I think machine learning researchers should not refuse to consider AI x-risk when presented with a well-made case by a person they respect or have a personal relationship with, but I'm ambivalent as to whether they have an obligation to consider the case if they've only seen a few headlines about Elon. I do find it a bit hard to understand how one doesn't end up thinking about the consequences of super-human AI, since it seems obviously impactful and fascinating. But I'm a very curious (read "distractable") person...


A list of heuristics that say not to worry about AI takeover scenarios:
  • Outsiders not experts: This concern is being voiced exclusively by non-experts like Elon Musk, Stephen Hawking, and the talkative crazy guy next to you on the bus.
  • Luddism has a poor track record: For every new technology, there's been a pack of alarmist naysayers and doomsday prophets. And then instead of falling apart, the world got better.
  • It's straight out of science fiction: AI researchers didn't come up with this concern, Hollywood did. Science fiction is constructed based on entertaining premises, not realistic capabilities of technologies.
  • It's not empirically testable: There's no way to falsify the belief that AI will kill us all. It's purely a matter of faith. Such beliefs don't have good track records of matching reality.
  • It's just too extreme: Whenever we hear an extreme prediction, we should be suspicious. To the extent that extreme changes happen, they tend to be unpredictable. While extreme predictions sometimes contain a seed of truth, reality tends to be more mundane and boring.
  • It has no grounding in my personal experience: When I train my AI systems, they are dumb as doorknobs. You're telling me they're going to be smarter than me? In a few years? So smart that they can outwit me, even though I control the very substrate of their existence?
  • It's too far off: It's too hard to predict the future and we can't really hope to anticipate specific problems with future AI systems; we're sure to be surprised! We should wait until we can envision more specific issues, scenarios, and threats, not waste our time on what comes down to pure speculation.

I'm pretty sure this list is incomplete, and I plan to keep adding to it as I think of or hear new suggestions! Suggest away!!

Also, to be clear, I am writing these descriptions from the perspective of someone who has had very limited exposure to the ideas underlying concerns about AI takeover scenarios. I think a lot of these reactions indicate significant misunderstandings about what people working on mitigating AI x-risk believe, as well as matters of fact (e.g. a number of experts have voiced concerns about AI x-risk, and a significant portion of the research community seems to agree that these concerns are at least somewhat plausible and important).




Discuss

What I talk about when I talk about AI x-risk: 3 core claims I want machine learning researchers to address.

December 2, 2019 - 21:20
Published on December 2, 2019 6:20 PM UTC

Recently, at PCSOCMLx, I (co-)hosted a session with the goal of explaining, debating, and discussing what I view as "the case for AI x-risk". Specifically, my goal was/is to make the case for the "out-of-control AI killing everyone" type of AI x-risk, since many or most ML researchers already accept that there are significant risks from misuse of AI that should be addressed.

I'm sharing my outline, since it might be useful to others, and in order to get feedback on it. Please tell me what you think it does right/wrong!

Some background/context

I estimate I've spent ~100-400 hours discussing AI x-risk with machine learning researchers during the course of my MSc and PhD. My current impression is that rejection of AI x-risk by ML researchers is mostly due to a combination of:

  • Misunderstanding of what I view as the key claims (e.g. believing "the case for x-risk hinges on short-timelines and/or fast take-off").
  • Ignorance of the basis for AI x-risk arguments (e.g. no familiarity with the argument from instrumental convergence).
  • Different philosophical groundings (e.g. not feeling able/compelled to try and reason using probabilities and expected value; not valuing future lives very much; an unexamined apparent belief that current "real problems" should always take precedence over future "hypothetical concerns", resulting in "whataboutism").

I suspect that ignorance about the level of support for AI x-risk concerns among other researchers also plays a large role, but it's less clear... I think people don't like to be seen to be basing their opinions on other researchers'. Underlying all of this seems to be a mental move of "outright rejection" based on AI x-risk failing many powerful and useful heuristics. AI x-risk is thus commonly viewed as a Pascal's mugging: "plausible" but not plausible enough to compel any consideration or action. A common attitude is that AI take-over has a "0+epsilon" chance of occurring.

I'm hoping that being more clear and modest in the claims I/we aim to establish can help move discussions with researchers forward. I've recently been leaning heavily on the unpredictability of the future and making ~0 mention of my own estimates about the likelihood of AI x-risk, with good results.

The 3 core claims:

1) The development of advanced AI increases the risk of human extinction (by a non-trivial amount, e.g. 1%), for the following reasons:

  • Goodhart's law
  • Instrumental goals
  • Safety-performance trade-offs (e.g. capability control vs. motivation control)

2) To mitigate this existential risk (x-risk), we need progress in 3 areas:

  • Knowing how to build safe systems ("control problem")
  • Knowing that we know how to build safe systems ("justified confidence")
  • Preventing people from building unsafe systems ("global coordination")

3) Mitigating AI x-risk seems like an ethical priority because it is:

  • high impact
  • neglected
  • challenging but tractable

Reception:

Unfortunately, only 3 people showed up to our session (despite something like 30 expressing interest). So I didn't learn too much about the effectiveness of this presentation. My 2 main take-aways are:

  • Somewhat unsurprisingly, claim 1 had the least support. While I find this claim and the supporting arguments quite compelling and intuitive, there seem to be inferential gaps that I struggle to address quickly/easily. A key sticking point seems to be the lack of a highly plausible concrete scenario. I think it might also require more discussion of epistemics in order to move people from "I understand the basis for concern" to "I believe there is a non-trivial chance of an out-of-control AI killing everyone".
  • The phrase "ethical priority" raises alarm bells for people, and should be replaced or clarified. Once I clarified that I meant it in the same way as "combating climate change is an ethical priority", people seemed to accept it.

Some more details on the event:

The title for our session was: The case for AI as an existential risk, and a call for discussion and debate. Our blurb was: A growing number of researchers are concerned about scenarios in which machines, instead of people, control the future. What is the basis for these concerns, and are they well-founded? I believe they are, and we have an obligation as a community to address them. I can lead with a few minutes summarizing the case for that view. We can then discuss nuances, objections, and take-aways.

I also started with some basic background to make sure people understood the topic:

  • X-risk = risk of human extinction
  • The 3 kinds of risk (misuse, accident, structural)
  • The specific risk scenario I'm concerned with: out of control AI



Discuss

Is the rate of scientific progress slowing down? (by Tyler Cowen and Ben Southwood)

December 2, 2019 - 06:45
Published on December 2, 2019 3:45 AM UTC

Mini-book-length analysis of the question, by Cowen and Southwood. I'd love to read quotes and summaries of key arguments.



Discuss

The history of smallpox and the origins of vaccines

December 1, 2019 - 23:51
Published on December 1, 2019 8:51 PM UTC

Smallpox was one of the worst diseases in human history. It killed an estimated 300 million people or more in the 20th century alone; only tuberculosis and malaria have been more deadly. Its victims were often children, even infants.

It is also, among diseases, humanity’s greatest triumph: the only human disease we have completely eradicated, with zero naturally occurring cases reported worldwide, even in the most remote locations and the poorest countries, for over forty years.

Along the way, it led to fierce religious and political debates, some of the earliest clinical trials and quantitative epidemiology, and the first vaccines.

This is the story of smallpox, and how we killed it.



Discuss

Two Bay Area Winter Solstice Events

December 1, 2019 - 21:49
Published on December 1, 2019 6:49 PM UTC

Happy December, everyone! This year, the Bay Area Winter Solstice organizers have two events on offer for you to enjoy. Hope to see you there!

Pre-Solstice Unconference, 2019

Saturday, December 14, 2019

Want to spend some time socializing with the people you'll see at Solstice? This year, we're doing an Unconference the day before! It's like an afterparty but without the "after".

[LessWrong event with more details]

Get tickets on Eventbrite.

Bay Area Winter Solstice, 2019

Sunday, December 15, 2019

Gather with us in a planetarium in the Oakland hills. Huddle and sing against the encroaching cold and dark. Share warmth and light. Together we will celebrate humanity and the things that matter to us, through speeches and songs.

[LessWrong event with more details]

Purchase a ticket for this year's Solstice using the secretparty link.



Discuss

Vocal Range

December 1, 2019 - 18:50
Published on December 1, 2019 3:50 PM UTC

I'm organizing another secular solstice, which has me thinking about where to pitch songs so the most people will feel comfortable. I found several people saying that a range of C to C generally works well with untrained voices, and if they had said something weird like Db-E I might have believed them, but C-C is so simple I'm skeptical.

I tried looking for papers and didn't find anything, so I ran a survey. I linked people to wtuner and asked them what were the highest and lowest notes they could happily sing. I got thirty-one responses:

The median happy-singing range was 21 half-steps, which is a bit under two octaves. For example, G2 to E4. Here are the actual notes:

Some of these responses are a bit dubious: one person gave a happy range from A3 to E4 (eight notes). Another response gave a range from F2 to F3, but the response with the next lowest high note gave D4 (nine notes higher). One possibility is that people interpret "happy" in a range of ways, where some people are giving the limits of their range while others are just giving where they feel their voice sounds best.

Taking the survey literally, however, we can calculate the number of people who would be unhappy with a given range of notes:

This shows people in the survey are generally happiest with a midpoint of E, so for example Bb-Bb. The difference between the best and worst options is dramatic: a 17-note range centered on D#, E, or F (ex: Ab-C) works for 2/3 of the singers, while a 17-note range centered on Ab through C (ex: D-F#) works for at most 1/3 of the singers.

This would say "Happy Birthday", which has a one-octave range and starts on the fifth, would work best in G, Ab, or A, and worst in D (which would center it on Bb).

Here are the centers of people's ranges:

There's one group with a midpoint around F3, and then a less well defined group with a midpoint just under an octave higher, around Eb4. I suspect this is related to gender, though I didn't ask about that on the survey. [1] What makes centering on ~E work well, then, is probably that low voices can do Bb2-Bb3, while higher voices can do Bb3-Bb4. This is also good for having a full sound, because the voices are spread over two octaves; when you do the opposite and pitch it E-E, a large fraction of people are in the same E3-E4 octave.
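For concreteness, here is roughly what that calculation looks like in code, using MIDI note numbers (C4 = 60). The ranges below are made-up stand-ins rather than the actual survey responses; a singer counts as happy if some octave transposition of the song's range fits inside their comfortable range:

```python
# Comfortable ranges as (low, high) MIDI note numbers; C4 = 60.
# Illustrative stand-ins, not the actual survey responses.
singers = [(43, 64), (41, 62), (55, 76), (48, 69), (57, 79), (45, 64)]

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def name(midi):
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

def happy(singer, song_low, song_high):
    low, high = singer
    # Happy if the song fits the singer's range in *some* octave.
    return any(low <= song_low + 12 * k and song_high + 12 * k <= high
               for k in range(-3, 4))

span = 16  # a "17-note" range covers 16 half-steps
for center in range(52, 65):  # candidate centers from E3 up to E4
    lo, hi = center - span // 2, center + span // 2
    unhappy = sum(not happy(s, lo, hi) for s in singers)
    print(f"centered on {name(center)}: {unhappy} of {len(singers)} unhappy")
```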


[1] Phrasing questions about gender is hard, especially when you are mostly interested in one aspect (in this case, testosterone giving lower voices) which may not be central to someone's gender identity. I couldn't easily figure out how to phrase a question in a way I thought people were likely to be ok with, and I wanted to get the survey out, so I didn't include one.

Comment via: facebook



Discuss

{Math} A times tables memory.

December 1, 2019 - 18:40
Published on December 1, 2019 3:40 PM UTC

I have a distinct memory of being 8 years old, or so, and being handed one of those worksheets where they ask you to multiply numbers up through 12x12, and being viscerally disgusted by the implied pedagogy of it. That was over a hundred things you were asking me to memorize. On my own time. The whole reason I rush through my school work is so I don't have to do anything when I get home. I don't know if eight year old me swore, but this was definitely a "Screw you" moment for him.

But he actually ended up being able to do that sheet pretty quickly, at least compared to most of the rest of the class. There were a few kids who were faster than me, but I got the impression they were dumb enough to have to practice this instead of watching Ed, Edd 'n' Eddy at home. Or worse, they actually did memorize this stuff, instead of practice to get quick with the multiply-numbers-in-your-head algorithm like I did. (Because of course nobody else in the class would be doing it the same way I did, just much faster. But eight-year-olds aren't known to have particularly nuanced concepts of self that can gracefully accept that there are other people naturally much better than them at what they do best.)

Later on, we moved up to multiplying arbitrary two-digit-by-one-digit numbers, and then two-digit-by-two-digit numbers. (I didn't piece together how uncommon this was until a few years later.) Everyone who outpaced me in the times-tables speed tests was now far, far below me; meanwhile, I just had to chain my little "multiply-small-numbers" mental motion to a few "add-up-the-sums" motions. 76 * 89 = 7*8*100 + 6*8*10 + 7*9*10 + 6*9. I felt like I was so clever. I started to take pride in the fact that I was now leading the pack, even though I had told myself before that I didn't care!
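(That mental motion is just the distributive law applied tens-and-ones at a time; here's a quick sketch of it, written now rather than by eight-year-old me, checked against the example above.)

```python
def times_two_digit(a, b):
    """Multiply two two-digit numbers the way described above:
    split each into tens and ones, multiply the four pieces, add them up."""
    a_tens, a_ones = divmod(a, 10)
    b_tens, b_ones = divmod(b, 10)
    return (a_tens * b_tens * 100
            + a_ones * b_tens * 10
            + a_tens * b_ones * 10
            + a_ones * b_ones)

assert times_two_digit(76, 89) == 76 * 89 == 6764
```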

That is, of course, until the kids who were originally faster than me also realized how to perform that mental motion, and then they leapt past me in speed with the combined force of split-second memory of times tables and a quick ability to perform algorithms.

I think by the time we were finished with the lightning round worksheet practice, I was in the bottom quarter of the class for speed, and when I did push myself to speed up, I'd start making careless mistakes like mixing up which one of 6*7 and 7*7 was 42 and which was 49, again?

Later in my mathematical pedagogy, I am taking a Real Analysis course. There are two midterms in this course. The first one I did not prepare for at all, falling into my old 8-year-old failure mode: "If I can't just compute the answer on the spot to the question, I sort of deserve to fail, don't I?" I got a B-, in the lower half of the class.

The second one, I reminded myself of the times tables kids. I got an A.



Discuss

Pre-Solstice Unconference 2019

December 1, 2019 - 08:34
Published on December 1, 2019 5:34 AM UTC

This unconference is a gathering of the rationalist and effective altruist communities, held the day before Bay Area Secular Solstice.

Think something like the New York megameetup: if a bunch of us are all going to be around during Solstice weekend, why not also take the excuse to hang out with each other?

We've rented Omni Commons in Oakland for a day of socializing and information-sharing.

The unconference has a light theme, to create some inspiration (but don't let it constrain you). This year's theme will be group coordination:

What ambitious project would you embark upon if you knew for sure that you had the support? What kinds of support can we give each other to help with those projects? What do we ourselves need in order to provide that support to each other?

Buffet style lunch and snacks will be provided. Invite friends!

Register via Eventbrite: https://www.eventbrite.com/e/winter-solstice-unconference-tickets-71860628073

Note: The main Solstice event itself is here – https://www.lesswrong.com/events/Fpaa7hNb8RhdLS9Jj/bay-area-winter-solstice-2019

Hope to see you there!



Discuss

Bay Area Winter Solstice 2019

December 1, 2019 - 06:17
Published on December 1, 2019 3:17 AM UTC

tl;dr buy a ticket at https://bay-area-winter-solstice-2019.secretparty.io/

Gather with us in a planetarium in the Oakland hills. Huddle and sing against the encroaching cold and dark. Share warmth and light. Together we will celebrate humanity and the things that matter to us, through speeches and songs.

The Bay Area Winter Solstice is an annual gathering organized by the Bay Area rationality and EA community. Come celebrate the longest night of the year with us. We will grieve the hard times, our losses and our struggles, and we will celebrate all that we have accomplished and all that we have to hope for.

Face the cold of winter head-on, together ✨

Logistics: We will again be hosting this year’s Secular Solstice at the planetarium at the Chabot Space & Science Center.

Snacks will be provided by organizers; attendees are welcome to bring additional food to share.

Doors open at 6:30pm. The performance starts promptly at 7:00pm, and ends by 9:00pm. We have the venue available for socializing until 11:00pm.

Purchasing tickets: https://bay-area-winter-solstice-2019.secretparty.io/

Last year, Eventbrite’s waitlist system posed something of a technical obstacle to last-minute ticket transfers; this year, we've solved that problem by switching to Secretparty. It’s not that the party is a secret – it’s that this platform's features should enable ticket transfers to be fully democratized. If you can't make the event, coordinate a ticket transfer to someone else privately or via posting on the Facebook page.

Organize carpools: https://tinyurl.com/solstice2019carpooling

Donations: This is a money pool for anyone who wants to donate money to fund Solstice in order to enable us to continue offering tickets on a pay-what-you-want basis: https://tinyurl.com/solstice2019donations



Discuss

Learning Abstract Math from First Principles?

December 1, 2019 - 04:26
Published on December 1, 2019 12:41 AM UTC

I find that I am usually quite good at applied math, and enjoy it. I am taking a course currently that is split into two parts, Vector Calculus and Complex Analysis. The vector calculus makes sense to me and I can see how and why it works, and I find it interesting and enjoyable to learn.

On the other hand, I spend quite a bit of mental energy wrapping my head around the hows and whys of the more abstract complex analysis. I am not sure if I enjoy abstract math or not in general because I do not understand it as well. So, my question: Does anyone have any recommended resources for learning (any) abstract mathematical topic from first principles, that explain reasonably well what's going on with the math, rather than just how to do it?



Discuss

Why aren't assurance contracts widely used?

December 1, 2019 - 03:20
Published on December 1, 2019 12:20 AM UTC

A priori, dominant assurance contracts seem like awesome tools for solving a fairly broad range of collective action problems. Why aren't they used much? Or is it just that they are a new idea and we should expect them to grow in prominence in the next few decades?





Discuss

Epistemic Spot Check: Fatigue and the Central Governor Module

December 1, 2019 - 03:10
Published on December 1, 2019 12:10 AM UTC

Epistemic spot checks used to be a series in which I read papers/books and investigated their claims with an eye towards assessing the work's credibility. I became unhappy with the limitations of this process and am working on creating something better. This post is about both the results of applying the in-development process to a particular work, and observations on the process. As is my new custom, this discussion of the paper will be mostly my conclusions. The actual research is available in my Roam database (a workflowy/wiki hybrid), which I will link to as appropriate.

This post started off as an epistemic spot check of Fatigue is a brain-derived emotion that regulates the exercise behavior to ensure the protection of whole body homeostasis, a scientific article by Timothy David Noakes. I don’t trust myself to summarize it fairly (we’ll get to that in a minute), so here is the abstract:

An influential book written by A. Mosso in the late nineteenth century proposed that fatigue that “at first sight might appear an imperfection of our body, is on the contrary one of its most marvelous perfections. The fatigue increasing more rapidly than the amount of work done saves us from the injury which lesser sensibility would involve for the organism” so that “muscular fatigue also is at bottom an exhaustion of the nervous system.” It has taken more than a century to confirm Mosso’s idea that both the brain and the muscles alter their function during exercise and that fatigue is predominantly an emotion, part of a complex regulation, the goal of which is to protect the body from harm. Mosso’s ideas were supplanted in the English literature by those of A. V. Hill who believed that fatigue was the result of biochemical changes in the exercising limb muscles – “peripheral fatigue” – to which the central nervous system makes no contribution. The past decade has witnessed the growing realization that this brainless model cannot explain exercise performance. This article traces the evolution of our modern understanding of how the CNS regulates exercise specifically to insure that each exercise bout terminates whilst homeostasis is retained in all bodily systems. The brain uses the symptoms of fatigue as key regulators to insure that the exercise is completed before harm develops. These sensations of fatigue are unique to each individual and are illusionary since their generation is largely independent of the real biological state of the athlete at the time they develop. The model predicts that attempts to understand fatigue and to explain superior human athletic performance purely on the basis of the body’s known physiological and metabolic responses to exercise must fail since subconscious and conscious mental decisions made by winners and losers, in both training and competition, are the ultimate determinants of both fatigue and athletic performance.

The easily defensible version of this claim is that fatigue is a feeling in the brain. The most out-there version of the claim is that humans are capable of unlimited physical feats, held back only by their own mind, and the results of sporting events are determined beforehand through psychic dominance competitions. That sounds like I'm being unfair, so let me quote the relevant portion:

[A]thletes who finish behind the winner may make the conscious decision not to win, perhaps even before the race begins. Their deceptive symptoms of “fatigue” may then be used to justify that decision. So the winner is the athlete for whom defeat is the least acceptable rationalization

(He doesn’t mention psychic dominance competitions explicitly, but it’s the only way I see to get exactly one person deciding to win each race).

This paper generated a lot of ESC-able claims, which you can see here. These were unusually crisp claims that he provided citations for: absolutely the easiest thing to ESC (having your own citations agree with your summary of them is not sufficient to prove correctness, but lack of it takes a lot of works out of the running). But I found myself unenthused about doing so. I eventually realized that I wanted to read a competing explanation instead. Luckily Noakes provided a citation to one, and it was even more antagonistic to him than he claimed.

VO2,max: what do we know, and what do we still need to know?, by Benjamin D. Levine takes several direct shots at Noakes, including:

For the purposes of framing the debate, Dr Noakes frequently likes to place investigators into two camps: those who believe the brain plays a role in exercise performance, and those who do not (Noakes et al. 2004b). However this straw man is specious. No one disputes that ‘the brain’ is required to recruit motor units – for example, spinal cord-injured patients can’t run. There is no doubt that motivation is necessary to achieve VO2,max. A subject can elect to simply stop exercising on the treadmill while walking slowly because they don’t want to continue; no mystical ‘central governor’ is required to hypothesize or predict a VO2 below maximal achievable oxygen transport in this case.

Which I would summarize as “of course fatigue is a brain-mediated feeling: you feel it.” 

I stopped reading at this point, because I could no longer tell what the difference between the hypotheses was. What are the actual differences in predictions between “your muscles are physically unable to contract?” and “your brain tells you your muscles are unable to contract”? After thinking about it for a while, I came up with a few:

  1. The former suggests that there’s no intermediate between “safely working” and “incapacitation”.
  2. The latter suggests that you can get physical gains through mental changes alone.
  3. And that this might lead to tissue damage as you push yourself beyond safe limits.

Without looking at any evidence, #1 seems unlikely to be true. Things rarely work that way in general, much less in bodies.

The strongest pieces of evidence for #2 and #3 aren't addressed by either paper: cases when mental changes have caused/allowed people to inflict serious injuries or even death on themselves.

  1. Hysterical strength (aka mom lifts car off baby)
  2. Involuntary muscle spasms (from e.g., seizures or old-school ECT)
  3. Stiff-man syndrome.

So I checked these out.

Hysterical strength has not been studied much, probably because IRBs are touchy about trapping babies under cars (with an option on "I was unable to find the medical term for it"). There are enough anecdotes that it seems likely to exist, although it may not be common. And it can cause muscle tears, according to several sourceless citations. This is suggestive, but if I was on Levine's team I'd definitely find it insufficient.

Most injuries from seizures are from falling or hitting something, but it appears possible for injuries to result from overactive muscles themselves. This is complicated by the fact that anti-convulsant medications can cause bone thinning, and by the fact that some unknown percentage of all people are walking around with fractures they don’t know about.

Unmodified electro-convulsive therapy had a small but persistent risk of bone fractures, muscle tears, and joint dislocation. Newer forms of ECT use muscle relaxants specifically to prevent this.

Stiff-man Syndrome: Wikipedia says that 10% of stiff-man syndrome patients die from acidosis or autonomic dysfunction. Acidosis would be really exciting: evidence that overexertion of muscles will actually kill you. Unfortunately when I tried to track down the citation, it went nowhere (with one paper inaccessible). Additionally, one can come up with other explanations for the acidosis than muscle exertion. So that's not compelling.

Overall it does seem clear that (some) people's muscles are strong enough to break their bones, but are stopped from doing so under normal circumstances. You could call this vindication for Noakes's Central Governor Model, but I'm hesitant. It doesn't prove you can safely get gains by changing your mindset alone. It doesn't prove all races are determined by psychic dominance fights. Yes, Noakes was speculating when he postulated that, but without it his theory is something like "you notice when your muscles reach their limits". When you can safely push what feel like physical limits on the margin seems like a question that will vary a lot by individual, and one that neither paper tried to answer.

Overall, Fatigue is a brain-derived emotion that regulates the exercise behavior to ensure the protection of whole body homeostasis neither passed nor failed epistemic spot checks as originally conceived, because I didn't check its specific claims. Instead I thought through its implications and investigated those, which supported the weak but not strong form of Noakes's argument.

In terms of process, the key here was feeling, and recognizing the feeling, that investigating forward (evaluating the implications of Noakes's arguments) was more important than investigating backwards (the evidence Noakes provided for his hypothesis). I don't have a good explanation for why that felt right at this time, but I want to track it.



Discuss

CO2 Stripper Postmortem Thoughts

December 1, 2019 - 00:20
Published on November 30, 2019 9:20 PM UTC

So, I have finally completed building a CO2 stripper that removes CO2 from the air to (hopefully) improve cognition in environments with high CO2 levels. In California, the weather is pretty good so it's easy to just crack a window at any point during the year, but other areas get quite cold during the winter or quite warm during summer and it's infeasible to open a window unless you want to spend an awful lot of money on heating or cooling bills. It didn't work quite as well as the math indicated at first, but the whole thing is built, and basically functional. The rest of this post will be a reflection on the lessons learned while doing so.


1: In hardware, ideas are cheap, execution is expensive

So, the fundamental idea is extremely simple once you have some basic knowledge of chemistry. The goal is to get CO2 into some form that isn't the gas form, via some sort of chemical reaction.

Submarines and CO2 capture from flue gas use a reversible reaction with ethanolamines, which absorb CO2 at low temperatures and release it at high temperatures. Reversible reactions are good for not making waste, but heating up and cooling down large quantities of liquid takes an awful lot of energy. Submarines have nuclear reactors onboard, and flue gas is hot, but we don't necessarily have the energy required. Also, ethanolamines are toxic, hard for a civilian to get hold of, and really stinky, being the major component of "submarine smell".

Adsorption onto zeolites is also plausible, but the issue is that it requires alternately exposing the zeolites to high air pressure and low air pressure, and high airflow is required. The combination of high pressure and high airflow means that again, you're using a lot of energy. The basic math is as follows: One human produces about 1 kg of CO2 in 24 hours. We can idealize a perfect CO2 stripper as a magic box that inhales air and spits it out at 0 ppm. If you want a steady-state concentration of 500 ppm for 2 people, then we can see how much air-flow is required to lock up 2 kg of CO2 in 24 hours. This comes out to about 100 cubic feet per minute. This is the bare minimum air flow for any CO2 stripper, but in this particular case, it corresponds to a 25 horsepower air compressor, which is 18 kilowatts. This is equivalent to running 5 electric dryers at once. So that one is out too, especially since we were assuming 100% efficiency at eliminating CO2.

What about irreversible reactions? Just lock the CO2 up as a solid waste? Well, to begin with, this is going to produce quite a waste stream and consume quite a bit of chemicals, so you'd better hope it's safe and that the feed chemical is cheap. The reaction used on space missions was based on lithium hydroxide. The basic idea is that lithium hydroxide makes a very basic solution. Carbon dioxide is slightly acidic, so it dissolves very fast into basic solutions. Then you get precipitation of lithium carbonate, which is safe.

The problem is that lithium hydroxide is quite expensive. It was used on space missions because it's the most mass-efficient way of doing that sort of reaction and every gram counts in space missions, but we want the cheapest way of doing that reaction.

And then we hit upon the perfect solution. Calcium hydroxide. It's an extremely cheap bulk chemical, 15 bucks for a 50-pound sack of it at a hardware store. It's fairly mild as far as hydroxides go, being pH 12.4. So instead of giving you horrible chemical burns, it's safe to handle unless you're exposed to it for over an hour at a time without washing it off. It's the alkaline analogue of the difference between 1 M hydrochloric acid and lemon juice. And when it reacts with CO2, it makes CaCO3, aka limestone, which is totally harmless. In fact, it's a common laboratory demonstration that breathing onto a solution of this stuff produces a white film/crust on the top, which is the CO2 in the breath locked up as solid limestone. It's the obvious choice if you're trying to remove CO2 via chemical means.
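For a rough sense of the consumable and waste quantities involved, here's a hedged stoichiometry sketch for Ca(OH)2 + CO2 → CaCO3 + H2O. The ~1 kg of CO2 per person per day is the figure used earlier in the post; the rest is just molar masses, so these are ballpark numbers rather than measurements from the actual build.

```python
# Stoichiometry sketch for Ca(OH)2 + CO2 -> CaCO3 + H2O.
# Assumes ~1 kg CO2 exhaled per person per day (figure from earlier in the post).

CO2_KG_PER_PERSON_PER_DAY = 1.0
PEOPLE = 2

M_CO2 = 44.01      # g/mol
M_CAOH2 = 74.09    # g/mol, calcium hydroxide
M_CACO3 = 100.09   # g/mol, calcium carbonate (limestone)

mol_co2_per_day = CO2_KG_PER_PERSON_PER_DAY * 1000 * PEOPLE / M_CO2
hydroxide_kg_per_day = mol_co2_per_day * M_CAOH2 / 1000   # 1:1 mole ratio
limestone_kg_per_day = mol_co2_per_day * M_CACO3 / 1000

SACK_KG = 50 * 0.4536  # a 50-pound sack of hydrated lime
print(f"Ca(OH)2 consumed: {hydroxide_kg_per_day:.1f} kg/day for {PEOPLE} people")
print(f"CaCO3 produced:   {limestone_kg_per_day:.1f} kg/day")
print(f"One 50 lb sack lasts roughly {SACK_KG / hydroxide_kg_per_day:.0f} days")
```

By this estimate a 50-pound sack lasts roughly a week for two people, i.e. on the order of a couple of dollars a day in consumables, which is part of why the "cheap bulk chemical" requirement matters so much.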

And in fact, in the SSC comment section, someone else independently had the exact same idea! Just lock up CO2 with calcium hydroxide!

The simplicity of an idea in the field of atoms, as opposed to bits, doesn't necessarily mean that anyone on earth has ever done it before, though, or ever will. I'm not worried about anyone scooping the idea, because building novel hardware is hard enough to provide a natural barrier to entry, unless it's a large company that's interested in the idea. Ideas are cheap, execution is expensive, in both time and money.

2: Only polymaths need apply

If you're trying to build a novel machine in your garage, and aren't working as part of an engineering team, you will either need an improbably wide range of knowledge, or the general ability to pick up whatever you need to learn. There's the basic knowledge of chemistry to spot that this is the obvious reaction to go for, but the full design requires:

  • Familiarity with wastewater aerators, to know what to buy to prevent clogging with solids
  • Knowledge of which materials won't react with your chemicals
  • The math of airflow in pipes
  • The ability to read fan pressure/airflow curves
  • The ability to go from "I want a circuit that does this" to building a novel electronic circuit on a breadboard without frying anything important
  • Enough programming knowledge to write some basic arduino code
  • Familiarity with hazardous waste disposal regulations in your state
  • Familiarity with waste dewatering techniques
  • Familiarity with which sorts of pumps can pump sludge instead of pure water
  • Some electrical engineering knowledge, to work safely with 220V power without frying yourself or anyone else
  • Knowledge of soundproofing
  • And especially, familiarity with everything at Home Depot, so you can home in on the most efficient and foolproof way of building a thing that does what you want

Probably some other stuff too that I consider obvious but others might not.

Now, most of this is pretty easy to pick up given enough starting mental firepower, and the sense of what to google for. Or just having lots of experience with building material things.

Having one of the relevant fields of knowledge manifests itself as knowing ahead of time which approaches will work and which will fail and what solutions past work in the area has already found.

For some of these, missing the knowledge manifests as not knowing that there's an incoming bullet in a particular area, like not knowing that fine bubble aerators will promptly clog if there are lots of particulates in the water, or not suspecting that high airflow rates are incompatible with small pipes. (I knew the latter one and it still almost got me, until I idly decided to work out the airflow velocity in the pipe and realized it was around 200 mph.)
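To illustrate that second example, here's a quick sketch of the air-velocity check. The ~100 CFM figure is from the earlier mass-balance estimate, and the pipe diameters below are illustrative assumptions rather than the actual parts used in the build.

```python
import math

# Mean air velocity when pushing a given volumetric flow through round pipe.
# The ~100 CFM figure comes from the earlier mass-balance estimate; the pipe
# diameters below are illustrative assumptions, not the build's parts list.

CFM_TO_M3_PER_S = 0.000471947

def velocity_mph(flow_cfm: float, pipe_id_inches: float) -> float:
    """Mean air velocity (mph) for a given flow through a round pipe."""
    flow_m3_s = flow_cfm * CFM_TO_M3_PER_S
    radius_m = (pipe_id_inches * 0.0254) / 2
    area_m2 = math.pi * radius_m ** 2
    velocity_m_s = flow_m3_s / area_m2
    return velocity_m_s * 2.23694  # m/s -> mph

for diameter in (1.0, 2.0, 4.0):
    print(f"{diameter:.0f} inch pipe at 100 CFM: {velocity_mph(100, diameter):.0f} mph")
```

By this estimate, a 1-inch pipe at 100 CFM is moving air at roughly 200 mph, while only the larger diameters keep velocity (and with it noise and pressure drop) in a reasonable range, which fits with the 4-inch ABS pipe that shows up later in the post.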

3: The planning fallacy is huge here.

So, it wound up costing a lot more than I thought and taking a lot longer than I thought. The mechanism of why the planning fallacy hits so hard here is tied in with the design process. What happens is that you start out with a sketchy outline of all the component parts (like, "I need something that automatically dispenses chemical powder"), and as it becomes time to build a part, you drill down further and further in fleshing out the details until eventually you've drilled down far enough for your design to Actually Work in reality. While you do this, you will inevitably come across parts that are a lot harder to do than you expected, which you were glossing over on the first pass. The shiny black box of "build a chemical dispenser" looks more tractable than "how the fuck do I build a motor mounting plate with my inadequate tools", which you didn't initially suspect you had to do because you weren't thinking at that level of detail. And also as you address the parts that are easy to do, all that is left is the parts that are hard or annoying or time-consuming to do, which can be somewhat demoralizing.

Same sort of thing goes with cost. You start out with "so here's the cost for the big parts and everything else that's left shouldn't cost that much" (black-box warning on "everything else"!), and then you go to Home Depot and pick up a bunch of 4-inch ABS pipe and black glue and all the 90 degree and T pieces you need for the aeration pipes and look at the cost and it's 100 bucks. Home Depot trips add up shockingly fast. There's also all the stuff you buy that you don't eventually end up using because the design evolves as you actually try to build it, like buying gears when you don't actually need gears, and all the stuff you didn't think you had to buy but it turns out that you do need it.

And sometimes you just get hit with some problem you didn't expect at all and now have to fix, like "my fan is making a screaming noise, what do?"

4: Why is there a valley of death?

Universities and the government fund basic research. Then there's the private sector of business. The gap between the two, where you have to go from basic research to a business selling the new exciting thing, is called the "valley of death". Now, you'd think this is what R&D is for. But a lot of R&D from a business seems to be focused on marginal improvements to existing things that already fall under the scope of what the existing business does, and not so much on building a novel thing that can be the seed of a new business. Building a novel thing requires a wide knowledge base, as discussed before, and inevitably takes a lot more money and time than expected. It's the sort of thing done by inventors in a garage as a project of love, not the sort of thing you get paid to do.

Further, crossing the valley of death requires both the technical capacity to build the thing and the business skills to make a new business from scratch. If you have several people with different skills joined together, it can be bridged, but one flaw of doing it alone is that there are a lot more inventors with the ability to build the thing than inventors with the ability to build the thing and also the ability or willingness to start a business that sells the thing. I'm in the former category. I can build it, but I hate building it; if I had to build all the machines myself to sell, I'd flatly reject it. I also really don't want to be responsible for running a business selling it: I'd have no idea how to run a business, and it'd eat too much time. My dream is to get a design good enough to sell, patent it, find someone willing to make a business out of it, and just receive a cut of profits without having to be involved in anything more regarding the production or selling of the machines, besides helping out with technical design work. Further, someone with just the business skills won't necessarily have the technical ability to come up with the machine in the first place, let alone build it. And there's also the lemon-market problem of businesspeople identifying competent non-scam technical people with a viable design, and technical people finding competent non-scam businesspeople.

There are further issues such as designing the new invention such that it is robust and keeps working for a while (not a property that prototypes generally have), and designing it such that it is easy to build and maintain (also not usually a property associated with garage prototypes).

I've heard that there's a company in the UK that takes garage prototypes and updates the design for robustness, easy constructibility, and cost, which seems like an important part of closing the valley.

5: Building alone vs. building as part of a team.

In a certain sense, I was blessed on this project, because I had complete control over the entire design. I had to contend with no meetings, no unexpected changes to parts of the design that were already locked in, and no team decisions that were dumb and couldn't possibly work. It's the dream for anyone who dislikes group projects in engineering. All failures are attributable to me alone, as well as all successes. Then again, having someone else to work on the project with me definitely would have sped it up, and I could have relied on their knowledge of things I was ignorant of, relaxing the polymath requirement. Maybe there's an optimal design team size? I guess it'd depend on how parallelizable the work is, as well as how decision-making quality scales with group size.

6: Final diagnosis and where to go from here.

So, it was over-time and over-budget and didn't work as well as I had hoped, but it does indeed work. Planning fallacy is a huge obstacle here, and I now certainly see why there's a valley of death for this sort of work.

In order to make a version that's practical for domestic use, I'd have to redo the design as a rain-column design, primarily because it only requires high airflow, instead of the combination of high airflow and high pressure, which requires buying an expensive fan from China along with the expensive electronic components needed to power it. A rain-column design could use a much cheaper and simpler fan that runs from a wall outlet.

Further, in order for others interested in CO2 reduction to have one of their own, I'd have to team up with someone who could build a small business assembling and selling these things, preferably with someone other than me doing the actual building. PM me if interested.







Discuss

What's been written about the nature of "son-of-CDT"?

December 1, 2019 - 00:03
Published on November 30, 2019 9:03 PM UTC

I'm quite curious what kind of decision algorithm a CDT agent might implement in a successor AI, but I've only found a few vague references. Is that because little's been written about it?



Discuss

How To Change a Dance

November 30, 2019 - 16:40
Published on November 30, 2019 1:40 PM UTC

Let's say you don't like something about your local dance. Perhaps you'd like to see gender free calling, a different approach to booking bands, a new kind of special event, or something else. How can you make this happen?

The best case is that you talk to the organizers, they say "what a great idea!" and handle the rest, but it rarely works this way. [1] Maybe the organizers have a different background and don't understand why you think your ideas would be an improvement. Maybe your ideas would be more work, at least at first, and they're feeling overworked with what they're doing already. Maybe the ideas take longer to explain than you can get in during conversations at the break. When this works it's the easiest route, but for anything tricky it seems to me like it usually doesn't.

Here are three things, however, that I have seen work well:

  • Start helping out. Come early and set up, stay late and clean up, offer to help take money at the door. By doing this work you demonstrate a commitment to the community and to the dance, you learn what's involved in running the dance, and you build trust with the current organizers. Then ask about joining the organizing committee. While trusting you to arrive early and unlock the building doesn't seem like it should translate into trusting your view on questions like whether the dance should book more newer callers, it does seem to work that way. I think some of this is that doing the work to make something happen builds ownership, which gets people to make thoughtful decisions that are better for the long-term health of the community.

  • Build consensus. Say you and your friends are all on board with gender-free bathrooms, but a lot of other dancers are uneasy about them. If you had a vote you'd probably be in the minority, and if you were able to change things by fiat you'd have a lot of unhappy dancers and maybe a revolt. How can you get to where this wouldn't be a controversial change anymore? Friendly one-on-one conversations can go a long way here, especially when people can convey a perspective someone hasn't considered much before. Listen, figure out why other people feel the way they do, share why you see it differently. This can be a lot of work, and the work of, in this example, building consensus on gender-free bathrooms will mostly fall on trans and gender non-conforming dancers if you're not careful. So if you're not directly impacted by something, talk to those who are about what help is needed. As the idea moves towards the mainstream of the community, most organizers will see that and go from "maybe we should do that, but the dancers wouldn't like it" to "sounds like this is what people want."

  • Start something new. Sometimes what you're looking for is different enough from the existing event that trying to change it makes less sense than starting your own. Find other people to start it with you who have a similar perspective, put a bunch of thought into how you'd like things to be different, and don't be afraid to try approaches you don't see at other dances. When this goes well the new event expands the community by pulling in a new crowd of people and builds a more robust scene. The existing organizers will probably worry that the current community isn't big enough to support an additional event, and if you mostly just split the existing community then they may be right. Actively recruit from places the existing event doesn't, run different kinds of publicity that fit what makes your dance different, pull in new circles.

Which of these approaches makes the most sense for you in your particular situation will vary, but volunteering to help out with the dance is often a good place to start.


[1] I'm writing this as general advice, and I'm not trying to say "don't talk to me about BIDA". If you have thoughts about BIDA please let me know, and I'd be happy to talk to you about how the dance can be better.

Comment via: facebook



Discuss
