Moving my answers into separate comments below this answer.
Particularly useful feedback includes, but isn’t limited to:
links to a similar project that has already been done
connections with people interested in a project
analysis of a project’s usefulness
Note: these are just ideas; they might not be a priority, or good at all.
The Bullshit Awards
Proposal: Give prizes to people who spot and blow the whistle on papers that bullshit their readers, and who explain why.
Details: There could be a Bullshit Alert Prize for the whistleblower and a Bullshit Award for the author who did the bullshitting. This would be similar to the Darwin Awards in that you don’t want to be the subject of such an award.
Example: An analysis that could have won this is Why We Sleep — a tale of institutional failure.
Note: I’m not sure whether that’s a good way to go about fixing that problem. Is shaming a useful tool?
Great idea! This sounds like a lot of fun. I’m also unsure about the net benefit. We might want to keep it as unaffiliated as possible from other EA organizations in order to avoid any spillover damage.
Belief Network
Last updated: 2020-03-30
Category: group rationality; signal boosting
Proposal: Track people’s beliefs over time, and what information gave them the biggest update.
Details: It could be done at the same time as the EA survey every year. And/or it could be a website that people continuously update.
Motivation: The goals are
1) to track which information is the most valuable, so that more people consume it, and
2) to see how beliefs evolve (which might be evidence in itself about which beliefs are true, although I think most people, including myself, wouldn’t consider this the strongest form of evidence). It could be that most people make a similar series of paradigm shifts over time, and knowing which ones might help speed things up.
Alternative name: MindChange
What’s been done so far: Post on LessWrong: “What are some articles that updated your beliefs a lot on an important topic?” The EA survey also tracks some high-level views, notably on cause prioritization.
Update:
Just came across a similar idea I’d had (I think 2 years ago):
A Chrome extension and plug-in to measure changes to one’s world model and one’s behaviors.
Goal: Try to find the articles that are the most likely to update our map and/or behaviors.
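To make the tracking concrete, here’s a minimal sketch (in Python; all field names are my own illustrative choices, not an existing schema) of the record such a survey add-on or website might store per reported update, with update sizes aggregated per source to surface the most valuable information:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BeliefUpdate:
    """One self-reported change in a person's credence in a claim."""
    person_id: str       # anonymized respondent ID
    claim: str           # e.g. "AI is this century's top existential risk"
    old_credence: float  # probability before the update (0 to 1)
    new_credence: float  # probability after the update (0 to 1)
    source: str          # the article/talk/conversation that caused the update
    reported_on: date

    def magnitude(self) -> float:
        """Size of the update, used to rank sources by how much they move beliefs."""
        return abs(self.new_credence - self.old_credence)

# Aggregating per source surfaces the most belief-moving information (goal 1),
# and the time series per claim shows how beliefs evolve (goal 2).
updates = [
    BeliefUpdate("u1", "AI x-risk this century", 0.05, 0.20, "Superintelligence", date(2020, 3, 1)),
    BeliefUpdate("u2", "AI x-risk this century", 0.10, 0.15, "Superintelligence", date(2020, 3, 5)),
]
print(sum(u.magnitude() for u in updates))  # total movement attributed to one source
```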
Promise Prediction
Proposal: Have a prediction market on what politicians will accomplish in their next mandate.
Why: That way, it will be easier for people to know how likely each policy is to be implemented, and it will be harder for politicians to bullshit everyone.
Related: This project would complement the Polimeter, which tracks the promises made by politicians, really well. Polimeter is now part of Vox Pop Labs.
Note: I think I’ve seen this idea somewhere else, but I don’t remember where.
Quantified Doomsday Clock
Context (source: Matthew Barnett’s Facebook wall): “Since the Doomsday Clock from the Bulletin of the Atomic Scientists doesn’t have any clear methodology for why the clock advances or recedes, I am providing the Metaculus Doomsday Clock as an alternative. Currently, the way it advances is by using the Metaculus median prediction of humanity going extinct by 2100 to determine how many minutes we are from midnight. It can be improved, so make suggestions in the comments.”
Proposal: An IFTTT-connected physical clock that looks like the Doomsday Clock.
Notes: Please contact me if you’re interested in helping commercialize this; I have a few ideas and can fund the project.
Question: What would be a good name for it? Brainstorming:
Quantified Doomsday Clock
Quantum Doomsday Clock
Metaculus Doomsday Clock (pro: publicity for Metaculus; but important con IMO: inflexible and not future-proof as it becomes dependent on Metaculus)
Rather than 2100, can I suggest the next century? Otherwise we’d move away from midnight as we approach 2100, which is very counterintuitive.
yeah good point, I agree; thanks!
https://aicountdown.com/ links to
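As a sketch of the mechanism: the clock logic could map the Metaculus median extinction-by-2100 probability linearly onto minutes to midnight and push the result to the physical clock through an IFTTT webhook. The question ID, JSON field path, and dial scale below are assumptions from memory, not a verified contract; check the live Metaculus API before relying on this.

```python
import requests

MINUTES_ON_DIAL = 60          # assumption: the dial shows up to 60 minutes to midnight
IFTTT_KEY = "YOUR_IFTTT_KEY"  # placeholder for your IFTTT Webhooks service key
QUESTION_URL = "https://www.metaculus.com/api2/questions/578/"  # assumed extinction-by-2100 question

def minutes_to_midnight(p_extinct: float) -> float:
    """Linear mapping: p = 1 means midnight, p = 0 means a full dial away."""
    return (1.0 - p_extinct) * MINUTES_ON_DIAL

def update_clock() -> None:
    data = requests.get(QUESTION_URL).json()
    # Assumed path to the community median; verify against the real response shape.
    p = data["community_prediction"]["full"]["q2"]
    # Fire an IFTTT Webhooks event that the clock hardware subscribes to.
    requests.post(
        f"https://maker.ifttt.com/trigger/doomsday_update/with/key/{IFTTT_KEY}",
        json={"value1": round(minutes_to_midnight(p), 1)},
    )
```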
Royalty free AI images
Created: early 2019 (or maybe before) | Originally shared on EA Work
Cause area: AI safety
Proposal: Make a collection of (royalty-)free images representing the idea of AI / AI safety / AI (x-)risk that don’t anthropomorphize AI or otherwise misportray it (both by searching for existing images and by creating more). These could be used by the media, local AI (safety) groups, etc.
Details: I think this is less of a problem than it used to be, but still think this could be valuable. If you want funding for that, you could consider applying for a grant from the Long-Term Future Fund: https://app.effectivealtruism.org/funds/far-future.
Cross-post: https://www.facebook.com/groups/1696670373923332/permalink/2287004484889915/
Group for collective actions
Status: done, see: https://www.facebook.com/groups/LWCoordination/
Proposal: have a group to experiment with coordinating on small projects that require coordination
Example: I just posted a proposal about improving the Cause Prioritization Wiki: if 100 major edits get pledged, then everyone makes the edits they committed to. This is useful because a wiki only becomes interesting when there are a lot of editors, so this lets the platform get bootstrapped and avoids the chicken-and-egg problem.
Comments: There’s a meta-thread in the group to discuss the group itself, so I invite you to comment there if you have any feedback on the group.
Philanthropy tax / Giving your 2 percents
Meta-proposal: Research what would be the consequences of implementing the proposal.
Proposal: Give citizens the ability to decide where X% (say 2%) of their taxes goes directly (to a charity or a government program).
Details: Of course, the government can rebalance the rest of its budget in such a way that there are no counterfactual changes. But maybe it would still make a difference. If not, then maybe the X% has to go to a charity. Or maybe the donations could be made to more specific governmental projects.
Reasoning: Maybe individuals have specific insights that the government doesn’t have when it comes to public goods; but altruism aside, individuals don’t have an incentive to finance public goods. Empowering citizens to directly decide where part of their taxes goes would help with that.
Extra: Mayyybe there could be a way to certify some charities as efficient, but that risks going full circle and having the government once again making the decisions; still, there might be some in-between that would be superior. Maybe there should be a restriction to charities working on public goods.
Thought on impact: Maybe philanthropists would give X% less to charity, given that they would have this mechanism to direct money to charities they want to support. If that’s true, then increasing income taxes by X% would sort of go full circle, except now everyone would be giving X% to charity.
Name: Calling it the “philanthropy tax” might confuse the concept with “taxing philanthropy”. I’m definitely open to hearing other suggestions for names.
Update: Not surprisingly, other people have had similar ideas. For example, see Robert Lee’s Facebook post.
Impact of the 5% payout rule
Category: meta-EA; research
Proposal: Research what would be the consequences of removing the 5% payout rule.
Motivating intuition: maybe it would help longer-termist causes (?) and it might also increase the global ratio of investment to consumption (?)
Date posted: 2020-03-06
Additional information: (source: Foundation (United States law))
Shaking hands across the world
Category: Bringing powerful countries closer together
Idea: A handshake statue in Times Square and in some equivalent place in China, where people can give each other a handshake across the world.
Effectiveness: I don’t know; it doesn’t seem effective, but maybe such symbols are powerful and would bring the world closer together, hence increasing cooperation / reducing the risk of wars.
Source: Space Force TV show, s1e7 8:30
Sober September
Created: early 2019 (or maybe before) | Originally shared on EA Work
Cause area: aging
Dry Feb is a Canadian initiative that invites people to go sober for February to raise money for the Canadian Cancer Society: https://www.dryfeb.ca/.
Imagine this idea, but worldwide and for general medical research.
I would suggest fundraising for the Methuselah Foundation because of its broad approach. They fund a lot of prizes, which create market pressure for medical progress and spare donors from having to figure out which research groups are the most effective. They’ve also had other initiatives that help the field at large, such as conferences and roadmaps. More on them here: https://www.mfoundation.org/who-we-are/.
An idea for a name is “Sober September”.
Tangentially, reducing alcohol consumption might also be a somewhat effective intervention to increase QALYs in richer countries (ex.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.183.4036&rep=rep1&type=pdf).
Note: I’m not working on this, but could provide some guidance.
Decision Theory Interactive Guide
Created: early 2019 (or maybe before) | Originally shared on EA Work
Proposal: I think this could help people understand decision theories (especially functional decision theory). There could be some scenarios where the user has to choose an action or a decision procedure and see how this affects other parts of the scenario that are logically connected to the agent the user controls. For example: playing the prisoner’s dilemma with a copy of oneself, Newcomb’s problem, etc. It could be done in a similar way to Nicky Case’s games.
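As a sketch of what one such scenario could compute behind the scenes, here is Newcomb’s problem reduced to expected payoffs for a predictor of a given accuracy (a minimal Python illustration using the standard $1,000 / $1,000,000 payoffs, not a full decision-theory engine):

```python
def newcomb_expected_value(one_box: bool, accuracy: float) -> float:
    """Expected payoff in Newcomb's problem given the predictor's accuracy.

    The opaque box contains $1,000,000 iff the predictor predicted one-boxing;
    the transparent box always contains $1,000.
    """
    if one_box:
        # With probability `accuracy` the predictor foresaw one-boxing and filled the box.
        return accuracy * 1_000_000
    # Correct prediction: opaque box empty, keep $1,000; wrong prediction: both pay out.
    return accuracy * 1_000 + (1 - accuracy) * 1_001_000

for a in (0.5, 0.9, 0.99):
    print(a, newcomb_expected_value(True, a), newcomb_expected_value(False, a))
# As accuracy rises, committing to one-box dominates: the logical connection
# between your decision procedure and the prediction is what FDT formalizes.
```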
EA StackExchange
Created: early 2019 (or maybe before) | Originally shared on EA Work
Create a quality StackExchange site so that the EA community can build up knowledge online.
Note: The previous attempt to do so failed (see: https://area51.stackexchange.com/proposals/97583/effective-altruism).
Group to discuss information hazard
Moved from my short form; created on 2020-02-28
Context: Sometimes I come up with ideas that are very likely information hazards, and I don’t share them. Most of the time, I come up with ideas that are very likely not information hazards.
Problem: But sometimes I come up with ideas that are in-between, or where I can’t tell whether I should share them or not.
Solution hypothesis: I propose creating a group with which one can share such ideas to get external feedback on them and/or on whether they should be shared more widely. To reduce the risk of information leaking from that group, the group could:
be kept small (5 participants?)
note: there can always be more such groups
be selective
exam on information hazards / on Bostrom’s paper on the topic
notably: some classes of hazard should definitely not be shared in that group, and this should be made explicit
questionnaire on how one handled information in the past
notably: secrets
have a designated member share a link on an applicant’s Facebook wall with rewards for reporting antisocial behavior
pledge to treat the information with the utmost seriousness
commit to giving feedback on each idea (to keep a feedback-to-exposed-person ratio of 1)
Questions: What do you think of this idea? How can I improve this idea? Would you be interested in helping with or joining such a group?
Possible alternatives:
Info-hazard buddy: ask a trusted EA friend if they want to give you feedback on possible info-hazardy ideas
warning: some info-hazard ideas (/idea categories) should NOT be thought about further. some info-hazards can be personally damaging to someone (ask for clear consent before sharing them, and consider whether it’s really useful to do so).
note: yeah I think I’m going to start with this first
ETA 2020-04-18
Ritual to become info-hazard buddy:
Ask the person to share their lying policy
Ask the person for a history of lies they’ve told
Post on their Facebook wall an anonymous form for people to report that person’s trustworthiness
Check who that person has blocked on Facebook, and reach out to them to ask why they were blocked
Write a doc pledging to handle information with the utmost care, and sign it (maybe also sign a legal non-disclosure agreement)
Although some infractions should probably be pursued in some alternative court if possible, to avoid the information getting more exposure
Why wouldn’t you just ask four people who you trust to review each idea in confidence? Why formalize it or insist they reciprocate it?
Increase the prize for the International Mathematics Olympiads
Rationale: The IMO is a useful source of talent that EAs have drawn on, and the current prizes are pretty low (less than 100 USD each, AFAIK).
I’d be willing to pitch in for that prize. Please reach out to me if interested.
Externalities of war predictions
Category: research
See: link
Altruist credits
Epistemic status: not sure if the idea works
Category: meta
Proposal: Pay someone with a ‘donation gift card’ or ‘donation credits’
Details and rationale:
Often, when I work on a project approved by EAs, I don’t necessarily want to be paid as much as I want to be able to have people work on my EA projects in the future.
Imagine you have a Donor-Advised Fund (DAF) called the Altruist Bank, which issues one Altruist Credit per USD you put into it. An Altruist Credit can be spent by telling the DAF which charity you want it to send a USD to. Altruist Credits can also be given to other people directly.
My hope is that agreeing to be paid in Altruist Credits would be a strong signal of altruistic alignment, and altruistic people might perform better at altruist projects (as their incentives are more aligned). A discounted wage might also act as a signal, although maybe it would also attract less qualified people (?)
It might also encourage a culture of more donations.
And **maybe** be simpler than everyone individually opening a DAF.
Avoiding possible problems:
If we can somehow make them illegal to sell, that would be useful, because otherwise anyone could sell their Altruist Credits to altruists for just slightly less than 1 USD each, at which point you’re just back to USD
If it became massively used, then it could start being used just as a currency (as long as everyone expects others to accept it) (although this seems unlikely to happen)
Additional note:
I think parallel economies, such as Simbi, are bad for basic Econ 101 reasons, but here maybe the altruistic signaling is of sufficient additional value (?)
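To make the mechanism concrete, here’s a toy ledger sketch (hypothetical class and method names; a real implementation would sit behind a DAF’s legal and accounting machinery):

```python
from collections import defaultdict

class AltruistBank:
    """Toy ledger: each USD deposited mints one Altruist Credit. Credits are
    transferable between people but redeemable only as charity disbursements."""

    def __init__(self) -> None:
        self.usd_pool = 0.0
        self.credits = defaultdict(float)    # holder -> credit balance
        self.disbursed = defaultdict(float)  # charity -> USD sent so far

    def deposit(self, donor: str, usd: float) -> None:
        self.usd_pool += usd
        self.credits[donor] += usd  # mint 1 credit per USD

    def transfer(self, sender: str, recipient: str, amount: float) -> None:
        """E.g., paying someone in credits for work on an EA project."""
        assert self.credits[sender] >= amount
        self.credits[sender] -= amount
        self.credits[recipient] += amount

    def spend(self, holder: str, charity: str, amount: float) -> None:
        """Redeem credits: the bank sends the matching USD to the chosen charity."""
        assert self.credits[holder] >= amount and self.usd_pool >= amount
        self.credits[holder] -= amount
        self.usd_pool -= amount
        self.disbursed[charity] += amount

bank = AltruistBank()
bank.deposit("funder", 1000)
bank.transfer("funder", "contributor", 300)  # wage paid in credits
bank.spend("contributor", "AMF", 300)        # contributor directs the donation
```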
FDA Policy Think-tank (and/or advocacy group)
Forum Facebook page
Posted: 2020-03-07
Category: signal boosting
Proposal: Share the best posts from the EA Forum (say, >= 100 karma) on a Facebook page called “Best of the EA Forum”
Why? So that people that naturally go on Facebook but not on the EA Forum can be exposed to that content
Note: If there’s a way to get this list easily, it might facilitate the process.
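On getting the list easily: the Forum runs on a codebase that exposes a GraphQL endpoint, so something like the query below might work. The field names (e.g. `baseScore` for karma) are my recollection, not verified; check the live schema before relying on this.

```python
import requests

# Hedged sketch: fetch recent top posts and keep those with karma >= 100.
QUERY = """
{
  posts(input: {terms: {view: "top", limit: 50}}) {
    results { title pageUrl baseScore }
  }
}
"""
resp = requests.post("https://forum.effectivealtruism.org/graphql", json={"query": QUERY})
for post in resp.json()["data"]["posts"]["results"]:
    if post["baseScore"] >= 100:  # the karma threshold from the proposal
        print(post["baseScore"], post["title"], post["pageUrl"])
```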
Update: 2020-04-24
Experimental page using Zapier: https://www.facebook.com/EAForumKarma100/
x-post: https://www.facebook.com/groups/1392613437498240/permalink/2947443972015171/
I appreciate this idea! However, I’d prefer that people cross-post Forum content to groups that already have substantial/relevantly targeted audiences (actually, I’d really like people to do this more often), rather than creating a new group that could split off some of the Forum’s readership.
Having a Forum-focused Facebook group also seems like it would raise the chances of more discussion happening on Facebook rather than on the Forum posts themselves, which seems bad (comments harder to find later, not linked to anyone’s profile, not open for karma voting, not eligible for the Comment Prize, etc.)
If the group really is just a collection of links that people can easily share in other groups, and if discussion comes back to the Forum, it could be a net positive. I’ll be curious to see how it gets used.
Thanks for your comment, I totally agree!
Maybe we could ban comments? And delete the page if that doesn’t end up working?
Rather than a ban, probably official discouragement + a polite reminder that people should add their comments to the Forum as well as the Facebook posts? If people really want to talk on Facebook, it seems bad to stop them, but gentle nudges go a long way!
Rationalist Olympiads
Potential funding: EA Meta Fund
Ideas:
See section “Games and Exercises” of “How to Run a Successful LessWrong Meetup Group” for some ideas: https://wiki.lesswrong.com/mediawiki/images/c/ca/How_to_Run_a_Successful_Less_Wrong_Meetup_Group.pdf
Calibration game
Cognitive science quiz (ex.: game Irrationality)
Cognitive bias game (note: I’m still developing it)
Coronavirus: Should I go to work?
UPDATE: An EA project I’m part of might do this
summary: have an app that helps people decide whether they should stay home from work
context: in the last 12 hours I spent maybe 2 hours ‘empowering’ someone I know by giving them more information to help them decide whether they should take sick days
problem: knowing the probability that one is infected (with the coronavirus) helps inform whether one should avoid going to work. the probability beyond which you should stay home is not the same for each type of job. at what point should one not go to work?
the 2 main sub questions are:
what’s the probability that I’m infected?
there are already forms that sort of do that, ex.: https://covid19.empego.ca/#/, but I would prefer a more probabilistic approach with more detailed inputs
if I’m infected, what damage am I likely to cause, in expectation? how many people am I meeting at work? how many confirmed cases are in my city? etc.
there’s an app made by EAs that might get released in the coming days that addresses a similar question
for example: Someone told me: my partner was coughing, had a sore throat, and had X fever during the whole day, but is now feeling better; yesterday ze was okay, and we slept together, but I haven’t seen zir since then. ze wasn’t outside the country recently, and hasn’t met anyone infected as far as ze knows. ze lives in city Y which has Z cases.
there could also be intermediary recommendations (maybe?): go to work, but take the following precautions:
wear a mask
avoid meetings
etc.
addendum: in countries that don’t have monetary incentives for people to self-quarantine, there will be a negative externality that isn’t captured. but the tool should still improve decision making.
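sketch: a minimal version of the expected-value calculation such an app could run. all parameter names and the harm model below are illustrative assumptions, not epidemiology:

```python
def should_stay_home(p_infected: float,
                     contacts_at_work: int,
                     p_transmit_per_contact: float,
                     harm_per_infection: float,
                     cost_of_staying_home: float) -> bool:
    """Stay home iff the expected harm to others from going in
    exceeds the personal cost of a missed day."""
    expected_new_infections = p_infected * contacts_at_work * p_transmit_per_contact
    expected_harm = expected_new_infections * harm_per_infection
    return expected_harm > cost_of_staying_home

# Example: someone with a 10% chance of being infected meeting 20 people at work.
print(should_stay_home(p_infected=0.10,
                       contacts_at_work=20,
                       p_transmit_per_contact=0.05,
                       harm_per_infection=5_000,   # rough $ harm, incl. onward spread
                       cost_of_staying_home=150))  # one day of lost wages -> True
```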
Maybe summarize the book “Who Goes First? The Story of Self-experimentation in Medicine”. Two possibly important theses:
self-experimentation is important
medical innovations are available way before they get adopted
Science policy think tank (or advocacy group?)
Potential problem: it might accelerate all scientific progress, which doesn’t help from the perspective of differential technological progress, and is possibly harmful (?) if, for example, AI research parallelizes better than AI safety research
Related: https://causeprioritization.org/Improving_science