Hi Marcus, I think this sounds like a great idea.
There are a number of communities that have been created across the EA space which bring together people with a professional affiliation (I see Aaron has mentioned REG, which is likely the most similar to your concept). I don’t believe this has been done with pro athletes before.
I founded and run a group called SoGive which raises funds and does analysis on charities.
I would be happy to connect with you and support you if that would help; I’ll send you a direct message on the EA Forum.
Thanks Soeren, this is a useful point that helps tease out the thinking more clearly:
- Agree that major institutions/governments will invest better in pandemic preparedness for some (unknown) number of years from now (better than recently, anyway).
- Also expect that this work will be inadequate, for example by overindexing/overfitting on what's happened before (a flu with a fatality rate of 2.5% or less, or another coronavirus) while failing to anticipate other possible pandemics (Nipah, Hendra, or man-made).
- If you had asked me in (say) early April, I would have guessed that major institutions would get more funding, and that NGOs which are better at considering tail risks and x-risks and at tackling these overfitting errors would also get more funding.
- We now think that those major institutions will get more funding, but that the more existential-risk-focused NGOs aren't getting materially more funding at the moment.
I raised a similar question on the Effective Altruism fb group last year.
Notable responses included a comment from Howie Lempel reiterating the points in the Open Phil article about how it seemed unlikely that someone watching the field would fail to notice a sudden increase in capabilities.
Rob Wiblin also commented to make it clear that 80,000 Hours doesn't necessarily endorse the view that nanotech/APM poses as high a risk as that survey suggests.
SoGive offers volunteering opportunities doing charity analysis. If you’re interested, get in touch with me via sanjay [at] sogive.org
I’m slightly confused about the long reflection.
I understand it involves “maybe <...> 10 billion people, debating and working on these issues for 10,000 years”. And *only after that* can people consider actions which may have a long term impact on humanity.
How do we ensure that
(a) everyone gets involved with working on these issues? (Presumably some people are just not interested in thinking about this, and getting people to work on things they're unsuited for seems unhelpful and unpleasant.)
(b) actions that could have a long-term impact on humanity aren't taken unilaterally? How could people be stopped from doing that?
I think a totalitarian worldwide government could achieve this, but I assume that’s not what is intended
Not sure if this is the best place to ask this question, but does anyone know where we could find more thinking on cash transfers and Dutch disease?
My short answer is:
Your main reason for setting up a charity is probably to provide tax incentives for your donors. So the best jurisdiction is probably the jurisdiction where your donors are.
However there are some exceptions where this doesn’t apply. For example, you may be setting up a charity solely or primarily to access Google Ad grants.
If this is the case, then "shopping" for the jurisdiction with the least regulatory overhead would make sense. You would also need to consider whether the process requires someone with an address in that country.
I don’t know the answer to this, and given that it’s something of an edge case, I don’t know of anyone having done this comparison.
Thank you for having the desire to encourage innovation. I’m confident that fellowships like these can be valuable.
From other such fellowships that I’ve seen, the successful ones typically have something that draws people to want to apply. This may include, for example, sponsorship from a high-profile individual.
I hope this helps. Good luck!
I think the benefits of fiscal sponsorship were fairly clear from your post.
For the example in your first bullet point, it may be that there are enough donors to warrant creating a DAF, but that still wouldn’t mean the option outperforms dealing with an existing DAF provider.
For your second bullet point, I hadn’t appreciated this element of your post on first reading. I expect an existing DAF provider probably would be nervous about providing this service. And I could imagine people in the EA community benefiting from this. However it would make me nervous too—it sounds like the sort of scheme that could be made to look really bad in the hands of the right (or wrong!) journalist. But maybe these risks are more surmountable than I realise.
"Are you referring to the DAF or FS side of things, or both?" Both.
"My prior was that it would be fairly straightforward because there are UK DAFs in existence, and CEA does both DAF-like and FS-like things to a limited extent (sponsoring EA orgs and running EA funds)." This is a very reasonable, but incorrect, line of thought. The Charity Commission is very clear about the fact that even if someone else has successfully applied for something in the past, it doesn't mean that someone else applying for exactly the same thing should be allowed it in the future.
"While CEA might have charitable purposes that seem restrictive, it doesn't seem like that's impacting their ability to try to do everything under the sun." I don't think their purposes do seem restrictive. Under a careful reading, as I remember it, it's fairly clear that their objects are extremely broad. This was why my first bullet suggested that CEA could provide this service.
"You tried to create a trust to do this before, but it was rejected because the charitable objects were too broad?" No, sorry, I may not have been clear on this. The reason why I said that an unincorporated entity (i.e. a trust) could do this was that a trust *would* (I think!) get approved, even with broad objects. However an incorporated charity (a CIO, to use the jargon) was rejected for having too-broad objects, notwithstanding the long list of pre-existing precedents whose pattern I was following.
Note that using a trust has downsides. With a trust, I would recommend only funding individuals and non-charities with extreme caution.
Could we have better help for those whose content has been (heavily) downvoted?
I often see people plaintively saying something like: “My comment has been heavily downvoted, but I have no idea why!” Can the forum be more helpful for this scenario?
Not sure what the best solution is, but here's an idea:
- If someone's comment/post has been downvoted enough for it to have net negative karma, the UI allows the user to ask for feedback (e.g. it's an option when you click on the three dots on the top right hand side).
- If they ask for feedback, the forum contacts all those who downvoted it, plus some high-karma people, with a link to the content and a request for feedback (which they don't have to give, and which would be anonymous).

The feature could perhaps be extended further:
- To increase the probability that people provide feedback, they could be remunerated (this could be an alternative use for the Forum prize money, if it were decided that forum prizes didn't incentivise people more than the existing karma system). Some thought would be needed to avoid creating a perverse incentive for people to give downvotes too liberally.
- The system could incorporate some mechanism to make sure that users don't overuse/abuse this feature (e.g. perhaps the user has to write out and submit to the forum what they will do differently in the future before they are allowed to use the feature again).
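To make the proposed flow concrete, here's a minimal sketch in Python. Everything here is hypothetical (the class names, the rate-limiting rule, the notification mechanics); it just illustrates the three ideas above: only net-negative content qualifies, feedback requests go to downvoters plus some high-karma users, and the feature is locked until the user submits a reflection on what they'd do differently.

```python
from dataclasses import dataclass, field


@dataclass
class Comment:
    author: str
    karma: int
    downvoters: list


@dataclass
class FeedbackRequests:
    # Authors with an outstanding request; they must submit a
    # reflection before they can use the feature again (anti-abuse).
    pending: set = field(default_factory=set)

    def request_feedback(self, comment, high_karma_users):
        """Return the (anonymous) recipients of a feedback request,
        or None if the request is not allowed."""
        if comment.karma >= 0:
            return None  # only net-negative content qualifies
        if comment.author in self.pending:
            return None  # locked until a reflection is submitted
        self.pending.add(comment.author)
        # Notify everyone who downvoted, plus some high-karma users.
        recipients = set(comment.downvoters) | set(high_karma_users)
        return sorted(recipients)

    def submit_reflection(self, author, text):
        """User writes what they'd do differently; unlocks the feature."""
        if text.strip():
            self.pending.discard(author)
```

For example, a first request on a net-negative comment succeeds, a second is blocked until the author submits a reflection, and positively-scored content never qualifies.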
Thanks for asking, and sorry it wasn’t clear from the notes.
"Thousands of sites in more than 100 countries house radiological sources. These are usually sealed sources of radiation used to power batteries, industrial gauges or blood irradiation equipment. In what seems a cruel paradox, the very same isotopes used for life-saving blood transfusions and cancer treatments in hospitals can also be used to build a radiological 'dirty bomb.'"
If you want to read more, this is taken from NTI’s website: https://www.nti.org/about/radiological/
I think there are real benefits to having an entity which can provide fiscal sponsorship.
For the Donor Advised Fund (DAF) side of things, I’m less convinced.
- I doubt that there are many people who would benefit from having a DAF who can't already get the benefits they need from existing DAF providers (i.e. I suspect it's not worthwhile to invest the c. $15k to set this up for such a small number of people).
- If there's more demand than I realise, then I believe that if we took evidence of that demand to an existing DAF provider, they would be more than happy to provide those people with that service.
- Speaking about the UK, it would be hard (impossible?) to set up an entity which has broad enough objects to make this work and which is also incorporated. Options include:
  - CEA could provide this service (CEA was set up in the old days when this was easier).
  - A new unincorporated entity (a trust) could serve this purpose.

I have tried to set up an incorporated entity with broad enough objects before, and the application was rejected.
Thank you for raising this topic.
I'm not sure yet whether I'm on board; to decide, I would need more information.
IMPACT: not only how widespread is the experience of not being treated with dignity, but also how bad is it? I feel that my bank treats me with indignity as a matter of course, so we need some way to factor in the severity of the indignity. We shouldn't accidentally take the prevalence of all cases of indignity (severe or otherwise), multiply it by the most severe level of severity, and end up with an overestimate.
TRACTABILITY: "Dignity is also highly solvable <...> include potentially highly cost-effective interventions such as listening". I think the tractability claim needs more substantiation. My choosing to listen more is cheap. However, if I paid you to get corrupt officials in the developing world to be better active listeners, I would predict poor cost-effectiveness, because it probably wouldn't work.
NEGLECTEDNESS: Defining the interventions better will help us better assess neglectedness. However, at first glance it seems that this is probably not neglected: if we surveyed lots of aid professionals and asked them "Do you want your colleagues and the aid sector as a whole to treat beneficiaries with respect?", I predict that a very high proportion would say yes. However, if I had a clearer picture of your action plan, I might conclude that your particular approach may well be neglected.
Of these, I think the first (impact) is the most important. Any concerted effort on the topic of dignity will inevitably have opportunity costs, so we need to understand why it's more important than the alternatives.
Thank you again for raising a fresh idea. The questions I’m raising are intended to be positive and encouraging.
I toyed with this idea too. I imagined a world where people could remember their past lives, and maybe there was also some way of making this public (some way of linking Facebook profiles of your current life with your previous lives?). This was partly interesting because of the implications it had for people's attitudes to animal welfare. (Hindu vegetarianism appears to have been unusually driven by a desire to promote animal welfare, as opposed to some other religious dietary restrictions, which originated from human health needs.)
However, I think I preferred the world mentioned earlier in the post, where the same consequentialist utilitarian framework causes your appearance to update. It means that the feedback loops are faster. And I think people care more about being good-looking than they do about having a nice time in their next life (even if they had good reason to believe that the next life were real).
The appearance-oriented idea is also a great mechanism for highlighting the fact that in the real world, virtue and appearance are different (despite the fact that films and other art sometimes seem, horrifically, to confuse the two).