Community norm—proposal: I wish all EA papers were posted on the EA Forum so I could see what other EAs think of them, which would help me decide whether I want to read them.
Short-termism is to longtermism what longtermism is to infinitarianism.
Although, to be fair, I have the impression that longtermist and infinitarian reasoning often suggest the same courses of action in our world.
Every once in a while, I see someone write something like “X is neglected in the EA Community”. I dislike that. The part about “in the EA Community” seems almost always unnecessary, and a reflection of a narrow view of the world. Generally, we should just care about whether X is neglected overall.
I assume you don’t have a problem with it when people are making the claim specifically about EA, as opposed to the wider world?
Like if I said “Building teams that come from a variety of relevant backgrounds and diverse demographics is neglected in EA”, even if you disagreed with the statement, you probably wouldn’t mind the “neglected in EA” part?
Although I agree that “neglected in EA” often leads to lazy writing… I think the argument above could be phrased a lot more clearly.
Hmm, I don’t know about your specific example; I would need an argument for why it’s better to include “in the EA community”. But yeah, there are things that can be “neglected in the EA community” if they are specific to the community, like someone to help resolve conflicts within the community, for example. So thanks for the clarification. I should specify that the ‘X’ in my original comment was an element of the general set {Interventions, Causes}, and not about the health of the community.
Although, maybe the EA Community has a certain prestige that makes it a good position from which to propagate ideas through society. So if, for example, the EA Community broadly acknowledged anti-aging as an important problem, even without working much on it, it might get other people to work on it who would have otherwise worked on something less important. In that sense, the framing can be useful. But still, I would prefer it to be phrased more explicitly as such, like “The EA Community should acknowledge X as an important problem”.
Posted a similar version of this comment here: https://www.facebook.com/groups/effective.altruists/permalink/3166557336733935/?comment_id=3167088476680821&reply_comment_id=3167117343344601
When talking about causes, I’d like to see comments like “there hasn’t been enough analysis of effectiveness of meta-science interventions”.
Sometimes, yeah! Although I think people overuse “more research is needed”.
Part-time remote assistant position
My assistant agency, Pantask, is looking to hire new remote assistants. We currently work only with effective altruist / LessWrong clients, and are looking to contract people in or adjacent to the network. If you’re interested in referring people to me, I’ll give you a 100 USD finder’s fee for any assistant I contract for at least 2 weeks (I’m looking to contract a couple at the moment).
This is a part-time gig / sideline. Tasks often include web searches, problem solving over the phone, and Google Sheets formatting. A full description of our services is here: https://bit.ly/PantaskServices
The form to apply is here (it pays 20 USD/hour): https://airtable.com/shrdBJAP1M6K3R8IG
You can ask questions here, in PM, or at mati@pantask.com.
I wonder about the risks of optimising for persuasive arguments over accurate arguments. I feel like it’s a negative-sum game that will result in most people having a worse model of the world, and that we should have a strong norm against it. Some people have done this for arguments for donating, so maybe you want to update a bit against donating to balance this out: https://schwitzsplinters.blogspot.com/2020/06/contest-winner-philosophical-argument.html
On the other hand, I sometimes want to pay people to change my mind, to incentivize finding evidence. A good example is paying for arguments that lead someone to revoke their cryonics membership, hence making them save money: https://www.lesswrong.com/posts/HxGRCquTQPSJE2k9g/i-will-pay-usd500-to-anyone-who-can-convince-me-to-cancel-my Although if I did that, I would likely also have a bounty for arguments to spend resources on life extension interventions.
So maybe 2 crucial differences are:
a) whether the recipient of the argument is also the one paying for it or otherwise consenting / aware of what’s going on
b) whether there’s a bounty on both sides
Is there a name for a moral framework where someone cares more about the moral harm they directly cause than other moral harm?
I feel like a consequentialist would care about the harm itself whether or not it was caused by them.
And a deontologist wouldn’t act in a certain way even if it meant they would act that way less in the future.
Here’s an example (it’s just a toy example; let’s not argue whether it’s true or not).
A consequentialist might eat meat if they can use the saved resources to make 10 other people vegans.
A deontologist wouldn’t eat honey even if they knew they would crack in the future and start eating meat.
If you care much more about the harm caused by you, you might act differently from both of them: you wouldn’t eat meat to make 10 other people vegan, but you might eat honey to avoid later cracking and starting to eat meat.
A deontologist is like someone adopting that framework, but with an empty individualist approach. A consequentialist is like someone adopting that framework, but with an open individualist approach.
I wonder if most self-labelled deontologists would actually prefer this framework I’m proposing.
EtA: I’m not sure how well “directly caused” can be cashed out. Does anyone have a model for that?
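Below is a minimal sketch of one way “caring more about harm you directly cause” could be cashed out, assuming a simple weighted sum; the weight `w` and all the harm magnitudes are hypothetical illustrations, not claims about real values:

```python
# Hypothetical sketch: weight harm the agent directly causes by a factor w > 1
# before adding harm caused by others. w = 1 recovers plain consequentialism;
# a large w approximates the framework described above. All numbers are made up.

def disvalue(own_harm: float, others_harm: float, w: float) -> float:
    """Total moral disvalue when the agent weights its own-caused harm by w."""
    return w * own_harm + others_harm

W = 20  # hypothetical weight on directly-caused harm

# Toy example 1: eat meat (own harm 1) so that 10 other people go vegan
# (avoiding harm 10 that would otherwise be caused by others).
eat_meat   = disvalue(own_harm=1, others_harm=0,  w=W)   # 20
stay_vegan = disvalue(own_harm=0, others_harm=10, w=W)   # 10
assert stay_vegan < eat_meat   # unlike a consequentialist (w=1), refuse the trade

# Toy example 2: eat honey now (own harm 0.1) to avoid cracking later
# and eating meat (own harm 1).
eat_honey   = disvalue(own_harm=0.1, others_harm=0, w=W)  # 2
crack_later = disvalue(own_harm=1,   others_harm=0, w=W)  # 20
assert eat_honey < crack_later  # unlike a strict deontologist, eat the honey
```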
x-post: https://www.facebook.com/groups/2189993411234830/ (post currently pending)
Not all Effective Altruists are effective altruists and vice versa (where the capitalization means “part of the community” whereas the lowercase version means “does good effectively”).
Asking where Effective Altruists give makes sense, but checking where effective altruists are giving seems like it’s somewhat getting the causal arrow reversed. To know they are effective, you must first check which organizations are effective, and *then* you can determine that those who gave to those organizations were effective.
But I guess there’s also a more meaningful way to interpret the statement. Ex.: Where do smart strategic altruists give money? (you can still determine how smart and strategic they are in some direct ways without checking which organizations are the most effective). If you find some effective organizations first, you can also ask “Where else do those donors give” which might unveil charities you missed.
x-post: Facebook—Effective Altruism Polls
In the past months, a lot more people weren’t working and were receiving a government-funded basic income (and were also socially isolated). I wonder if that increased the probability of the BLM events happening. And if so, how we should update our models of what would happen in a future where AI made a lot of people unemployed and the government provided a UBI.
If the great filter is after sentience but before technologically mature civilisations, the cosmos could be filled with lifeforms experiencing a lot of moral harm.
Look on the bright side: they don’t have factory farming ;)
Or maybe the hidden premise of wild-animal suffering is false: the net expected value of wildlife is positive (there’s probably some positive hedonic utility in basic vital functions) & something like the repugnant conclusion is true.
(By the way, I thought you were more a sort of preference utilitarian)
I am “more a sort of preference utilitarian”—“moral harm” is a neutral term, and depending on your values can be “suffering” or “preference violation” or something else
not for negative (hedonist/preference) utilitarians, maybe for total utilitarians
EtA: Moved to my EA project idea list
Group to discuss information hazards
Context: Sometimes I come up with ideas that are very likely information hazards, and I don’t share them. Most of the time, I come up with ideas that are very likely not information hazards.
Problem: But sometimes I come up with ideas that are in between, or where I can’t tell whether I should share them or not.
Solution hypothesis: I propose creating a group with which one can share such ideas to get external feedback on them and/or about whether they should be shared more widely or not. To reduce the risk of information leaking from that group, the group could:
be kept small (5 participants?)
note: there can always be more such groups
be selective
exam on information hazard / on Bostrom’s paper on the topic
notably: some classes of hazard should definitely not be shared in that group, and this should be made explicit
questionnaire on how one handled information in the past
notably: secrets
have a designated member share a link on an applicant’s Facebook wall with rewards for reporting antisocial behavior
pledge to treat the information with the utmost seriousness
commit to give feedback on each idea (so the ratio of feedback to exposed people is 1)
Questions: What do you think of this idea? How can I improve this idea? Would you be interested in helping with or joining such a group?
Possible alternatives:
Info-hazard buddy: ask a trusted EA friend if they want to give you feedback on possible info-hazardy ideas
warning: some info-hazard ideas (/idea categories) should NOT be thought about more. Some info-hazards can be personally damaging to someone (ask for clear consent before sharing them, and consider whether it’s really useful to do so).
note: yeah I think I’m going to start with this first
It seems to me like the ratio of preparedness to prevention for environmental change should be way higher.
oh, of course, for-profit charities are a thing! that makes sense
I learned about it in “Economics Without Illusions”, chapter 8.
just because your organization’s product/service/goal is to help other people and your customers are philanthropists doesn’t mean you can’t make a profit.
profitable charities might increase competition to provide more effective altruism, and so could still provide more value even though they make a profit (maybe)
https://en.wikipedia.org/wiki/Charitable_for-profit_entity
x-post: https://www.facebook.com/mati.roy.09/posts/10159007824884579
If your animal companion unlawfully kills a human, there should be the option for you to pay to put zir in jail instead of having zir euthanized.
Posting here because I think maybe having a strong legal framework to protect animals in general might be EA(-ish).
summary: Combining the insights that 1) smokers already know smoking is unhealthy and 2) Thai society is hierarchical (older people are expected to act as role models), a $5,000 ad was created about a kid asking an adult “Can I get a light?”. It went viral and increased calls to the quit-smoking hotline by 62% in one month.
https://creativesamba.substack.com/p/bangkok-smoking-kid-you-worry-about
From the Global Challenges Foundation:
> The GCF wishes to draw your attention to UN75′s ‘One-minute Survey’. It is a survey that anyone can take. Opinion polling in 50 countries and artificial intelligence sentiment analysis of traditional and social media in 70 countries will generate compelling data to inform national and international policies and debate.
> The views and ideas that are generated will be presented, by the Secretary-General, to world leaders and senior UN officials on September 21, 2020, at a high-level event to mark the 75th anniversary.
Now’s the time to ask for an existential risk organization within the UN.
Link: https://un75.online/#s2
Policy suggestion for countries with government-funded health insurance or healthcare: people using death-with-dignity could receive part of the money the government saves, if applicable.
This could be used to pay for cryonics, among other things.
EtA: epistemic status: I don’t really know what I’m talking about
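To make “part of the money the government saves” concrete, here is a minimal sketch with entirely hypothetical numbers (neither the care cost nor the share is a real estimate):

```python
# Hypothetical sketch of the proposed rebate: the government returns a fraction
# of the end-of-life care costs it no longer expects to pay. Numbers are placeholders.

def rebate(expected_remaining_care_cost: float, share: float) -> float:
    """Amount returned to the patient (or their estate) under the proposal."""
    return share * expected_remaining_care_cost

# e.g. a hypothetical 60,000 USD of expected remaining publicly funded care,
# with the government passing 30% back to the patient:
print(rebate(60_000, 0.30))  # 18000.0, which could, e.g., go toward cryonics
```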
I had a friend post on Facebook (I can’t recall who it was) and a friend in person (Haydn Thomas-Rose) tell me that maybe some/most antivaxxers are actually just afraid of needles. In which case, developing alternative vaccine delivery methods, like oral vaccines, might be pretty useful.
Alternative hypotheses:
antivaxxers mostly don’t like that something stays in their body, and that’s what differentiates vaccines from other medicine
antivaxxers are suspicious of the claim that *everyone* needs vaccines, and that’s what differentiates vaccines from other medicine
antivaxxers are right
Of course, it’s probably a combination of factors, but I wonder which are the major ones.
Also, even if the hypothesis is true, I wouldn’t expect people to know the source of their belief.
I wonder if we could test this hypothesis short of developing an alternative method. Maybe not. Maybe you can’t just tell one person that you have an oral vaccine and have them become pro-vaccine on the spot; they might instead need broader social validation and time to transition mentally.
Have you read any interviews with people who don’t like vaccines, or visited any of the websites/message boards where they explain their beliefs? Or do you think there’s a large population of these people who use other beliefs to hide their true beliefs, or don’t actually realize what their true beliefs are?
This seems like a lot of guesswork when, in my experience, people who don’t like vaccines are often quite vocal about their beliefs and reasoning.
No, I’m uninformed. I added in the OP “epistemic status: I don’t really know what I’m talking about” :)
I don’t know.
Thanks for the input
I realize it would have been more helpful to link to examples of people discussing their opposition. While I don’t have the time to look for first-person sources, this page seems like a helpful summary to start with!
Ray Taylor says:
I’m gonna take flak for this, but the majority of anti-vaxxers are women, and have 2 things in common:
- a negative experience with a doctor in the 2 years preceding their initial interest in anti-vaxx, where they didn’t feel their concerns were taken seriously (there are refs for this)
- fear of guilt for possible future harms caused by acts of commission more than acts of omission (not sure if there are refs for that, but I have seen it in several dialogues on- and offline)
One thing seems to counter anti-vaxx well: a trusted GP
https://www.facebook.com/mati.roy.09/posts/10158690001894579
Seth Nicholson says:
Assuming this is a factor, maybe improving society’s epistemic norms would also help? Like, making it clear that tu quoque is not valid reasoning and that people shouldn’t be penalized for noticing and admitting to irrational fears without rationalizing them.
(I’m saying this because if anything can be said to be a trigger for me, it’s needles. When I tell people what happened to make that the case, they—my therapist included—tend to say I’ve given them a new nightmare. I avoided getting immunizations for several years because of it. And yet it seems really damn easy to notice the real reason for that and recognize that it shouldn’t inform my normative judgments. Although, maybe it’s harder to do that if there’s no particular incident that obviously caused the phobia?)
https://www.facebook.com/mati.roy.09/posts/10158690001894579
Matthew Barnett says:
I’ve looked into this before and I’m pretty sure the expected harm from an adverse reaction to some (many?) vaccines outweighs the expected harm from actually getting the disease it protects against (because the chance of an adverse reaction from e.g. the polio vaccine is much higher than the chance you’ll actually get polio). I’d add that as another reason why people would be against personal vaccination, and it’s understandable.
https://www.facebook.com/mati.roy.09/posts/10158690001894579
Most likely. Contrary to the common saying, most people are not in the future; most people are completely causally disconnected from our universe.
Do you think most people alive today are living in causally disconnected locations from planet earth?
Yes, I think the universe is spatially Big, to the extent that most currently alive people are living outside our Reachable Universe
(I’m assuming “currently alive” can be cashed out robustly, but I’m still a bit confused about the implications of the relativity of simultaneity)
proposals for new markets: https://forum.radicalxchange.org/t/proposal-for-new-markets/304
Mind-readers as a neglected life extension strategy
Last updated: 2020-03-30
Status: idea to integrate in a longer article
Assuming that:
Death is bad
Lifelogging is a bet worth taking as a life extension strategy
It seems like improving mind readers could be a really important and neglected intervention, as the mind is by far the most important part of our experience that isn’t, and currently can’t be, captured.
We don’t actually need to be able to read the mind right now, just to record it with sufficiently high resolution (plausibly alongside text and audio recordings, so that which brain patterns correspond to which kinds of thoughts can be determined later).
Questions:
Assuming we had extremely good software, how well could we read minds with our current hardware? (i.e. how much is it worth recording your thoughts right now?)
How inconvenient would it be? How much would it cost?
To do:
Ask on Metaculus for some operationalisation of the first question
update: now posted as a question: https://forum.effectivealtruism.org/posts/CbwnCiCffSuCzz3kM/are-countries-sharing-ventilators-to-fight-the-coronavirus
topic: coronavirus | epistemic status: question / idea / hypothesis
the coronavirus doesn’t hit every country at the same time, so countries should share ventilators: “if you get it first, you may borrow my ventilators (until I need them), and when you don’t need yours anymore, you can lend them to me.”
to preserve the incentive to create more ventilators, a country could pledge to share at most as many ventilators as the other country has itself.
it seems like a strictly positive exchange. the risk might be a country not returning the ventilators, but maybe the Chinese and US armies could act as the world’s police, or something like that (and the US and China wouldn’t exchange ventilators among themselves)
is something like this happening? are countries sharing their ventilators optimally?
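a rough sketch of the sharing rule above, with the pledge cap (“I will lend you at most as many ventilators as you own yourself”) used to preserve the incentive to build capacity; the function name and all numbers are hypothetical placeholders:

```python
# Hypothetical sketch of the ventilator-sharing rule described above.

def lendable(lender_stock: int, lender_current_need: int, borrower_stock: int) -> int:
    """Ventilators the lender can ship: its spare units, capped at the borrower's own stock."""
    spare = max(lender_stock - lender_current_need, 0)
    return min(spare, borrower_stock)

# e.g. country A owns 10,000 ventilators but its peak has passed (it needs 4,000);
# country B owns 3,000 and is peaking now:
print(lendable(lender_stock=10_000, lender_current_need=4_000, borrower_stock=3_000))  # 3000
```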
x-post with https://causeprioritization.org/Democracy (see wiki for latest version)
Epistemic status: intuition; tentative | Quality: quick write-up | Created: 2019-12-05 | Acknowledgement: Nicolas Lacombe for discussions on tracking political promises
Assumption: more democracy is valuable; related: The rules for rulers, 10% Less Democracy
Non-denominational volunteering opportunities in politics
Tracking political promises
Polimeter is a platform that lets users track how well politicians keep their promises. This likely increases the incentive for politicians to be honest, which is useful because if citizens don’t know how their vote will translate into policies, it’s harder for them to vote meaningfully. Plus, citizens are likely to prefer more honest politicians, all else equal. The platform lets users create new trackers as well as contribute to existing ones.
Voting reform
The Center for Election Science is working to implement an approval voting mechanism in more jurisdictions in the US. They work with volunteers with various expertise; see: https://www.electionscience.org/take-action/volunteer/.
National Popular Vote Interstate Compact
National Popular Vote is promoting the National Popular Vote Interstate Compact which aims to make the electoral vote reflect the popular vote. They are looking for volunteers; see https://www.nationalpopularvote.com/volunteer.
Nuke insurance
Category: Intervention idea
Epistemic status: speculative; arm-chair thinking; non-expert idea; unfleshed idea
Proposal: Have nuclear powers insure each other against nuking each other, creating a kind of mutually assured (economic) destruction (i.e. destroying my infrastructure means you destroy your own economy). Not accepting an offer of mutual insurance should be seen as extremely hostile and uncooperative, and possibly even be severely sanctioned internationally.
Also: what about just explicitly criminalizing a) a first strike, b) a nuclear attack? The idea is to make it more likely that the individuals who participated in a nuclear strike would be punished—even if they considered it to be morally justified.
(Someone will certainly think this is “serious April Fool’s stuff”)
Good point. My implicit idea was to have the money in an independent trust, so that the “punishment” is easier to enforce.
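Here is a minimal sketch of how the mutual insurance could work with the independent trust mentioned above; the parties and escrow amounts are hypothetical placeholders, not a worked-out policy:

```python
# Hypothetical sketch: each party escrows money in an independent trust;
# a first strike forfeits the striker's escrow to the struck party,
# so destroying the other's infrastructure also costs you economically.

from dataclasses import dataclass, field

@dataclass
class InsuranceTrust:
    escrow: dict = field(default_factory=dict)  # party -> amount held in trust

    def deposit(self, party: str, amount: float) -> None:
        self.escrow[party] = self.escrow.get(party, 0.0) + amount

    def first_strike(self, striker: str, struck: str) -> float:
        """Forfeit the striker's escrow and transfer it to the struck party."""
        penalty = self.escrow.pop(striker, 0.0)
        self.escrow[struck] = self.escrow.get(struck, 0.0) + penalty
        return penalty

trust = InsuranceTrust()
trust.deposit("Country A", 100e9)  # hypothetical 100 billion USD each
trust.deposit("Country B", 100e9)
print(trust.first_strike("Country A", "Country B"))  # Country A forfeits its escrow
```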
BTW, I have recently learned that the ICJ missed an opportunity to explicitly state that using nukes (or at least a first strike) is a violation of international law.