
LessWrong.com News

A community blog devoted to refining the art of rationality

You can talk to EA Funds before applying

Published on September 28, 2021 8:39 PM GMT

Cross-posted to the EA Forum.

One thing I have realized as a fund manager for the EA Long-Term Future Fund is that there are a lot of grants that I would like to make that never cross our desk, just because potential applicants are too intimidated by us or don’t realize that their idea is one we’d be willing to fund if only they applied.

To try to remedy that problem, I’m going to start offering the following service: if you have any idea of any way in which you think you could use money to help the long-term future, but aren’t currently planning on applying for a grant from any grant-making organization, I want to hear about it. Feel free to send me a private message on the EA Forum or LessWrong, or if that doesn’t work for any reason, send me an email. I promise I’m not that intimidating :)

Not only that: having talked about this with some of the other EA Funds managers, I found that many of them are willing to extend the same offer as well.

For example, here are some of the sorts of grants I’m often excited about but that I rarely see anyone apply for:

  • “I want to transition to a career in something longtermist, but that transition would be difficult for me financially and I’d like to have some extra financial reserves to make it easier.”
  • “I think I would be more productive in my longtermist job if I had more money to spend on things that would save me time.”
  • “I have an idea for a longtermist project I want to work on, but I don’t want to commit to definitely working on that project for the duration of a long grant and want freedom to change my mind and switch to a different project if I want.”
  • “I have an ambitious idea for a project that I think would benefit the long-term future, but I think it would take a lot of money—more than the amounts I normally see LTFF grants go out for.”

Really, though, I don’t want to anchor anybody too much on these specific ideas—if you have any idea of any way in which you think you could use money to help the long-term future, I want to hear about it.




Collection of arguments to expect (outer and inner) alignment failure?

Published on September 28, 2021 4:55 PM GMT

Various arguments have been made for why advanced AI systems will plausibly not have the goals their operators intended them to have (due to either outer or inner alignment failure).

I would really like a distilled collection of the strongest arguments.

Does anyone know if this has been done?

If not, I might try to make it. So, any replies pointing me to resources with arguments that I've missed (in my own answer) would also be much appreciated!

Clarification: I'm most interested in arguments that alignment failure is plausible, rather than merely that it is possible (there are already examples establishing the possibility of outer and inner alignment failure for current ML systems, which probably implies we can't rule it out for more advanced versions of these systems either).



