Upvoted for explaining your stance clearly, though I'm unclear on what you see as the further implications of:
Because there are good reasons to work on AI safety, you need to have a better reason not to.
This is true about many good things a person could do. Some people see AI safety as a special case because they think it's literally the most good thing, but other people see other causes the same way, and I don't think we want to make any particular thing a default "justify if not X".
(FWIW, I'm not sure you actually want AI to be this kind of default - you never say so - but that's the feeling I got from this comment.)
Note that there are many people who should not work on AI safety because they have >400x more traction on problems 400x smaller, or whatever.
When someone in EA tells me they work on X, my default assumption is that they think their (traction on X * assumed size of X) is higher than the same number would be for any other thing. Maybe I'm wrong, because they're in the process of retraining or got rejected from all the jobs in Y or something. But I don't see it as my job to make them explain to me why they did X instead of Y, unless they're asking me for career advice or something.
There may be exceptional cases where someone is working on something really unusual, but in those cases, I aim for a vibe of "curious and interested" rather than "expecting justification". At a recent San Diego meetup, I met a dentist and was interested to learn how he chose dentistry; as it turns out, his reasoning was excellent (and I learned a lot about the dental business).
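Purely as a toy illustration of that heuristic and of the "400x traction on a problem 400x smaller" arithmetic in the quote above (the numbers and the function name are invented for the example, not estimates of anything real):

```python
# Toy sketch of the "impact ~ traction x problem size" heuristic.
# All numbers are made up for illustration only.

def expected_impact(traction: float, problem_size: float) -> float:
    """Crude score: how much of the problem you can move, times how much the problem matters."""
    return traction * problem_size

# 400x the traction on a problem 400x smaller comes out the same:
big_problem = expected_impact(traction=1.0, problem_size=400.0)
small_problem = expected_impact(traction=400.0, problem_size=1.0)
print(big_problem, small_problem, big_problem == small_problem)  # 400.0 400.0 True
```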
Finding the arguments for AI risk unconvincing is not a reason to just not work on AI risk, because if the arguments are wrong, this implies lots of effort on alignment is wasted and we need to shift billions of dollars away from it (and if they have nonessential flaws this could change research directions within alignment), so you should write counterarguments up to allow the EA community to correctly allocate its resources.
This point carries over to global health, right? If someone finds EA strategy in that area unconvincing, do they need to justify why they aren't writing up their arguments?
In theory, maybe it applies more to global health, since the community spends much more money on global health than AI? (Possibly more effort, too, though I could see that going either way.)
Thanks for the good reply.
This is true about many good things a person could do. Some people see AI safety as a special case because they think it's literally the most good thing, but other people see other causes the same way, and I don't think we want to make any particular thing a default "justify if not X".
I'm unsure how much I want AI safety to be the default; there are a lot of factors pushing in both directions. But I think one should have a reason why one isn't doing each of the top ~10 things one could, and for a lot of people AI safety (not necessarily technical research) should be on this list.
When someone in EA tells me they work on X, my default assumption is that they think their (traction on X * assumed size of X) is higher than the same number would be for any other thing. Maybe I'm wrong, because they're in the process of retraining or got rejected from all the jobs in Y or something. But I don't see it as my job to make them explain to me why they did X instead of Y, unless they're asking me for career advice or something.
My guess is that the median person who filled out the EA survey isn't being consistent in this way. I expect that they could have a one-hour 1-1 with a top community-builder that makes them realize they could be doing something at least 10% better. This is a crux for me.
Separately, I do feel a bit weird about making every conversation into a career advice conversation, but often this seems like the highest impact thing.
If someone finds EA strategy in [global health] unconvincing, do they need to justify why they aren't writing up their arguments?
This was thought-provoking for me. I think existing posts of similar types were hugely impactful. If money were a bottleneck for AI safety and I thought money currently spent on global health should be reallocated to AI safety, writing up some document on this would be among the best things I could be doing. I suppose in general it also depends on one's writing skill.
My guess is that the median person who filled out the EA survey isn't being consistent in this way. I expect that they could have a one-hour 1-1 with a top community-builder that makes them realize they could be doing something at least 10% better. This is a crux for me.
I agree with most of this. (I think that other people in EA usually think they're doing roughly the best thing for their skills/beliefs, but I don't think they're usually correct.)
I don't know about "top community builder", unless we tautologically define that as "person who's really good at giving career/trajectory advice". I think you could be great at building or running a group and also bad at giving advice. (There are several ways to be bad at giving advice: you might be ignorant of good options, bad at surfacing key features of a person's situation, bad at securing someone's trust, etc.)
Separately, I do feel a bit weird about making every conversation into a career advice conversation, but often this seems like the highest impact thing.
I'm thinking about conversations in the vein of an EAG speed meeting, where you're meeting a new person and learning about what they do for a few minutes. If someone comes to EAG and all their speed meetings turn into career advice with an overtone of "you're probably doing something wrong", that seems exhausting/dispiriting and unlikely to help (if they aren't looking for help). I've heard from a lot of people who had this experience at an event, and it often made them less interested in further engagement.
If I were going to have an hour-long, in-depth conversation with someone about their work, even if they weren't specifically asking for advice, I wouldn't be surprised if we eventually got into probing questions about how they made their choices (and I hope they'd challenge me about my choices, too!). But I wouldn't try to ask probing questions unprompted in a brief conversation unless someone said something that sounded very off-base to me.