Interesting idea! We haven’t built that in yet, but I think we could add a feature that adds up your donations throughout the year and tracks your projected impact, then waits until Giving Tuesday to actually disburse the funds (in a way that would enable the match).
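For concreteness, here is a minimal sketch of how that accumulate-then-disburse logic might work. Everything here (the `Pledge` class, `disburse_on_giving_tuesday`, etc.) is hypothetical illustration, not part of any existing platform:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Pledge:
    charity: str
    amount: float      # USD pledged, not yet disbursed
    pledged_on: date

def projected_impact(pledges: list[Pledge]) -> float:
    """Running total of pledged donations for the year (the 'projected impact' figure)."""
    return sum(p.amount for p in pledges)

def disburse_on_giving_tuesday(pledges: list[Pledge], today: date,
                               giving_tuesday: date) -> dict[str, float]:
    """Hold pledges until Giving Tuesday, then release the accumulated totals
    per charity so the whole sum is eligible for the match."""
    if today < giving_tuesday:
        return {}  # keep accumulating; nothing is disbursed yet
    totals: dict[str, float] = {}
    for p in pledges:
        totals[p.charity] = totals.get(p.charity, 0.0) + p.amount
    return totals
```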
Context: the report is 190 pages long and was published this month. Those who are reading it seem unlikely to reply with detailed analysis on this particular Forum post.
Object-level response: becoming excellent at chess, Go, and shogi is interesting, since it is more general than being excellent at any one of them alone. My impression is that the AI safety community recognises the importance of milestones like this. It is simply that superintelligence typically means something far more general still, such as
an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills
a definition that does not include an AI which can only play a specific set of games.
Since we have now discovered that the disagreement is merely a matter of definitions, hostilities can cease :)
By that logic, an x-risk becomes anything that really matters in the long run, so poverty would be an x-risk too under this definition. That makes it an unhelpful definition, and it is also very different from how most people understand the term.
Extinction (or something just as bad): x-risk. I go by that.
If ‘Coordination for EA researchers’ is considered by enough people to be a worthwhile project to undertake, I’d be interested in working on that (in a project design capacity).
And on a related note, I think combining this project with others like the ‘EA expertise board’ or ‘Build a platform to match projects with people who can do them’ would enable the platform to reach a critical mass of active users, making it really worthwhile for the community.
Can it help enable Giving Tuesday matching despite many small donations throughout the year?
Thanks! Great question. Yes, it’s primarily designed to help people who aren’t as familiar with effective giving make more meaningful donations. But even for people already engaged, it’ll help organize your charities in one place, automate all of your donations, and track the impact of every dollar.
Good day all,
Can anyone please provide an example of a tangible output from this ‘research organization’ of the sort EAs generally recognise and encourage?
Any rationale or consideration as to how association with such opaque groups does anything other than seriously undermine EA’s mission statement would also be appreciated.
You might enjoy this post Claire wrote: Ethical Offsetting is Antithetical to EA.
I actually wrote up a survey a while ago pulling together negative externalities with estimates from the literature: https://www.col-ex.org/posts/pigouvian-compendium/. From (estimated) largest to smallest, they are:
Rafa_fanboy, I’ve written to you but I’m not sure if you saw the message. You consistently make comments that aren’t helpful, and people regularly report them. Please try to keep your comments in the realm of “friendly and productive.”
Hey Holly! A little while after reading this post, I found an accountability partner. The system works well and we’ve built it out according to our own needs, but I like that you’ve kept yours simple. A lot of systems and lifehacks are very specific, which makes them a pain to implement.
So thank you and good job! :)
If I were to add anything to your advice, it would probably be: plan a brief evaluation after x (3? 6?) months to look at the bigger picture:
Has the system actually helped you accomplish things, rather than just made you feel productive?
How do you each feel about the other’s role? Would you like them to be stricter? Would you like them to focus on specific questions or pitfalls? (E.g. I asked my buddy to guard me against taking on too many responsibilities: when I plan a new project, she asks me “OK, so what are you going to drop to make room for it?”)
This sounds promising!
One question: as I understand it, it’s not designed to help EAs who already donate in cash, but mostly for people who aren’t familiar with effective giving and won’t have enough motivation to get into it otherwise, right?
You don’t need to specifically offset that, just donate to the best charity. You already have a net positive impact by being an EA anyway.
I see, yes good point.
It seems unfortunate that nobody can be bothered to read the underlying paper, written by Eric Drexler, a senior Oxford Martin fellow, which completely reframes the AI debate into something far from paper clips and much closer to reality. If a human sat down at chess, Go, and shogi and, simply by playing the games, became far better than any other human within a couple of weeks, we would all see this as superintelligent. That this achievement is so easily dismissed shows, to me, a complete unwillingness to deal with reality as it is.
I generally agree. The question is whether we should call something an x-risk based on its impact alone if it happens, or based on impact × probability. If the latter, and if comets are an x-risk, then we should call extreme climate change (and certainly nuclear war) an x-risk too.
Thanks very much! And the workplace activism handbook looks interesting even though it’s not directly related to this particular project, so thanks for that too.
Interesting question! I think GiveWell’s estimate of how much money they’ve directed over the years should be counted in some way as well.