Furthermore, people have repeatedly made the argument that the first “bad” EA project in an area can do more harm than an additional “good” EA project does good, especially once you account for tail risks, and I think this is more likely true than not. E.g. the first political protest for AI regulation might, in expectation, do more harm than a thoughtful AI policy project could prevent. This provides a reason for EAs to be risk-averse. (Specifically, I tentatively disagree with your claims that “we’re probably at the point where there are more false negatives than false positives, so more chances can be taken on people at the low end”, and that we should invest “a small amount”.) Related: Spencer Greenberg’s idea that plenty of startups cause harm.
I thought this was pretty vague and abstract. You should say why you expect this particular project to suck!
It seems plausible that most EAs who do valuable work won’t be able to benefit from this. If they’re students, they’ll most likely be studying at a university outside Blackpool and might not be able to study remotely.
I also wonder what the target market is. EAs doing remote work? EAs who need really cheap accommodation for a certain period?
I wasn’t making a point about this particular project, but about all the projects this particular project would help.