(Note: Some of the projects I suggest below may already have been done; I haven’t researched that thoroughly. If you know that something like what I suggest already exists, please comment!)
Some ideas for bounties (or grants) for projects or tasks:
1. An extremely good introduction to “Why should you care about AI safety?” for intelligent people who have no background in AI. (In my opinion preferably a video, though a good article would also be nice.) (I’m thinking of a rather short introduction, around 10–20 minutes.)
2. An extremely good explanation of “Why is AI safety so hard?” for people who have just read or watched the (future) extremely good introduction to “Why should you care about AI safety?”, i.e. people who still have little idea of what AI safety is actually about. (It should be an easily understandable introduction to the main problems in AI safety.) (I’m thinking of something like a 15–30 minute read or video, though a longer and more detailed version would probably be useful as well.)
3. A project that tries to understand the main reasons why people reject EA after hearing about it (through a survey and explicit questioning of people).
4. (A bit related to 3.) A project that examines the question “To what degree is effective altruism innate?” and the related question “How many potential (highly, moderately, or slightly) EA-engaged people are there in the world?”. And perhaps also the related question “What life circumstances or other causes lead people to become effective altruists?”
5. A study that examines the best way to introduce EA. (E.g., is it better not to use the term “effective altruism”? Is The Drowning Child and the Expanding Circle a good introduction to EA ideas, or is it rather off-putting? For spreading longtermism, should I first recommend The Precipice or HPMoR (to spread rationality first)?) (Maybe make it something like a long-term study to which many people around the world can contribute.)
6. Make a good estimate of the likelihood that an individual or a small group can significantly (though often indirectly) raise x-risk (for example by creating a bioengineered pandemic, triggering nuclear war, triggering an economic crisis (e.g. through hacking attacks), triggering an AI weapons arms race, triggering a bad political movement, etc.). (See the sketch after this list for one way such an estimate could be structured.)
7. I would also love to see funding for people who just think about how the EA community could coordinate better, how the efficiency of research in EA-aligned causes can be increased, how EA should develop in the future, what possible good (mega-)projects there are, and how to widen EA’s bottlenecks (e.g. how to integrate good leadership into EA).
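Purely to illustrate what “a good estimate” in 6 might look like structurally, here is a minimal Fermi-style sketch in Python. Every pathway, every number, and the independence assumption are invented placeholders, not actual risk estimates; the point is only the decomposition into P(attempt), P(success | attempt), and P(significant x-risk increase | success), combined across pathways.

```python
# Minimal Fermi-style sketch. All numbers below are made-up placeholders,
# chosen only to show the structure of such an estimate.

pathways = {
    # pathway: (P(attempt by a capable small group),
    #           P(success | attempt),
    #           P(significant x-risk increase | success))
    "bioengineered pandemic": (0.10, 0.05, 0.50),
    "triggering nuclear war": (0.05, 0.01, 0.90),
    "AI weapons arms race":   (0.20, 0.10, 0.20),
}

# Assuming (unrealistically) that the pathways are independent:
# P(no pathway significantly raises x-risk) = product of (1 - p_i)
p_none = 1.0
for p_attempt, p_success, p_raise in pathways.values():
    p_none *= 1.0 - p_attempt * p_success * p_raise

print(f"P(at least one pathway significantly raises x-risk): {1 - p_none:.4f}")
```

A real version of this project would of course replace the placeholders with researched per-pathway estimates, model correlations between pathways, and give uncertainty ranges rather than point values.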
About 1 & 2: I don’t think these should be done by just anyone, but by one or more AI researchers who have an extremely good overview of the whole field of AI safety and are able to explain it well to people without prior knowledge. It should be something like by far the best introduction: clearly the thing you would recommend to someone who is wondering what AI safety is.
Those are all quite big and important tasks. I think it would be better to advertise these tasks, reach out to qualified people, and then fund interested people to do them, rather than creating bounties, though bounties could work as well.
Of course, you could also create a bounty like “for every $100k you raise for EA, you get $5k”. (Yes, just totally convince Elon Musk of EA and become a billionaire xD.) But I’m not sure that would be a good idea, because it could pose some downside risk to the image of EA fundraisers.
Two more ideas:
Create an extremely good video as an introduction to effective altruism. (It should be convincing and lead to action.)
Maybe also create a very good video or article discussing objections to effective altruism (and why they may be questionable, if they are).
Create well-designed T-shirts with (funny) EA, longtermist, or AI safety prints that I would love to wear in public, hoping someone asks me about them. (I would prefer that “effective altruism” not appear directly on the T-shirt. Maybe something in this direction, though I don’t like the robot that much, because many people already associate AGI with robots far too much, but it is still kind of good.)