Currently doing local AI safety movement building in Australia and NZ.
Chris Leong
This is a very interesting idea. I’d love to see if someone could make it work.
I’m perfectly fine with holding an opinion that goes against the consensus. Maybe I could have worded it a bit better though? Happy to listen to any feedback on this.
I suppose at this stage it’s probably best to just agree to disagree.
Sorry, I misread the definition of ex ante.
I agree that the post poses a challenge to the standard EA view. However, I don’t see “There are no massive differences in impact between individuals” as an accurate characterization of the claim the argument actually supports.
“There are no massive ex ante differences in impact between individuals” would be a reasonable title. Or perhaps “no massive identifiable differences”?
I can see why this might seem like an annoying technicality. I still think it’s important to be precise, and rounding arguments off like this increases the chance that people talk past each other.
“Is that this is not true because for there to be massive differences ex ante we would (a) need to understand the impact of choices much better”—Sorry, that’s a non sequitur. The state of the world is different from our knowledge of it. The map is not the territory.
“X is false” and “We don’t know whether X is true or false” are different statements.
It’s fine to mention other factors too, but the claim (at least from the outline) seems to be that “it’s hard to tell” rather than “there are no large differences in impact”. Happy to be corrected if I’m wrong.
“I understand the post is claiming that in as much as it is possible to evaluate the impact of individuals or decisions, as long as you restrict to ones with positive impact the differences are small, because good actions tend to have credit that is massively shared.”—There’s a distinction between the difficulty of evaluating differences in impact and whether those differences exist.
The other two arguments listed in the outline are: “Does this encourage elitism?” and a pragmatic argument that individualized impact calculations are not the best path of action.
Neither of these is the argument made in the title.
I gave this a downvote for the clickbait title, which from the outline doesn’t seem to match the actual argument. Apologies if this seems unfair; titles like this are standard in journalism, but I hope this doesn’t become standard in EA, as it might affect our epistemics. This is not a comment on the quality of the post itself.
Amazing work!
1) What did you make it in?
2) How difficult was it?
3) Is it open source?
Amazing idea!
Sorry to hear this. Unfortunately, AI Safety opportunities are very competitive.
You may want to develop your skills outside of the AI safety community and apply to AI Safety opportunities again further down the track when you’re more competitive.
Happy to talk that through if you’d like, though I’m kind of biased, so probably better to speak to someone who doesn’t have a horse in the race.
I don’t know if this can be answered in full generality.
I suppose it comes down to things like:
• Financial runway/back-up plans in case your prediction is wrong
• Importance of what you’re doing now
• Potential for impact in AI safety
I would love to see attempts at either a community-building fellowship or a community-building podcast.
With the community-building podcast, I suspect that people would prefer something that covers topics relatively quickly as community builders are already pretty busy.
a) I suspect AI able to replace human labour will create such abundance that it will eliminate poverty (assuming that we don’t then allow the human population to increase to the maximum carrying capacity).
b) The connection the other way around is less interesting. Obviously, AI requires capital, but once AI is able to self-reproduce, the amount of capital required to kickstart economic development becomes minimal.
c) “I also wonder if you have the time to expand on why you think AI would solve or improve global poverty, considering it currently has the adverse effect?”—How is it having an adverse effect?
Debating still takes time and energy which reduces the time and energy available elsewhere.
I think taking a role like this early on could also be high-value if you’re trying to determine whether working in a particular cause area is for you. Often it’s useful to figure that out pretty early on. Of course, the fact that it isn’t exactly the same job you might be doing later on might make it less valuable for this.