Looking to advance businesses in which charities hold the vast majority shareholder position. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.
Brad West
I would like to see a strong argument for the risk of "replaceability" as a significant factor in potentially curtailing someone's counterfactual impact in what might otherwise be a high-impact job. The central idea is that the "second choice" applicant, the one after the person who was chosen, might have done just as well, or nearly as well, as the "first choice" applicant, making the counterfactual impact of the first small. I would want an analysis of the cascading impact argument: that you "free up" the second-choice applicant to do other impactful work, who then "frees up" someone else, and so on, and this stream of "freed-up value" mostly addresses the "replaceability" concern. A minimal sketch of that cascade appears below.
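Here is one way the cascade could be modeled, assuming each displaced person moves to a next-best option worth some fraction of the role above them; every number and function below is hypothetical, not an empirical estimate:

```python
# Hypothetical model of the cascading "freeing up" argument.
# Assumptions (illustrative, not empirical):
#   - The first-choice hire creates value v1 in the role.
#   - The second choice would have created r * v1 in the same role.
#   - Each displaced person takes a next-best option worth a fraction
#     f of the role they were displaced from, forming a geometric chain.

def counterfactual_impact(v1: float, r: float, f: float, depth: int = 30) -> float:
    """Impact of hiring the first choice, counting the chain of
    displaced people who are freed up to do other work."""
    v2 = r * v1                        # what the second choice would have done
    naive = v1 - v2                    # impact ignoring the cascade
    # Value created down the chain as each person takes a next-best option.
    freed_up = sum(v2 * f**k for k in range(1, depth + 1))
    return naive + freed_up

# Example: second choice is 90% as good; each next-best option is
# worth half of the role above it.
print(counterfactual_impact(v1=100, r=0.9, f=0.5))  # ~100, vs. 10 naively
```

On this toy model, the cascade recovers most of the value the naive replaceability argument subtracts, which is the intuition I would want to see analyzed.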
Yeah, I would think that we would want ASI-entities to (a) have positively valenced experience, as well as the goal of advancing their positively valenced experience (and minimizing their own negatively valenced experience), and/or (b) have the goal of advancing the positively valenced experiences of other beings and minimizing their negatively valenced experiences.
A lot of the discussion I hear around the importance of "getting alignment right" pertains to lock-in effects regarding suboptimal futures.
Given the probable irreversibility of the fate accompanying ASI and the potential magnitude of good and bad consequences across space and time, trying to maximize the chances of positive outcomes seems simply prudent. Perhaps some of the "messaging" of AI safety seems a bit human-centered because this might be more accessible to more people. But most who have seriously considered a post-ASI world have considered the possibility of digital minds both as moral patients (capable of valenced experience) and as stewards of value and disvalue in the universe.
Really glad to see the success of the Compassion Calculator, and I hope it continues to bring more omnivores into the fight against factory farming!
The preference for humans remaining alive/in control isn't necessarily speciesist, because it's the qualities of having valuable conscious experience, and of concern for promoting valuable and avoiding disvaluable conscious experience, that might make one prefer this outcome.
We do not know whether ASI would have these qualities or preferences, but if we could know that it did, you would have a much stronger case for your argument.
I would write about how there's a collective action problem regarding reading EA Forum posts. People want to read interesting, informative, and impactful posts, and karma is a signifier of this. So people will often not read posts, especially on topics they are not familiar with, unless the post has already achieved some karma threshold. Given how quickly a post leaves the front page without karma accumulation, and how unlikely relatively low-karma posts are to be read once off the front page, it is likely that good posts can be entirely ignored. On the other hand, some early traction can result in OK posts getting very high karma, because a higher volume of people have been motivated to check the post out.
I think this could be partially addressed by having volunteers, or even paid readers, commit to reading posts within a certain time frame and upvoting (or not, or downvoting) as appropriate. It might be a better use of funds than myriad cosmetic changes.
Below is a post I wrote that I think was such a post: good (or at least worthy of discussion), but one on which people probably wanted to free-ride on others' early evaluation. It discusses how jobs in which the performance metrics actually used are orthogonal to many of the ways good can be done may be opportunities for significant impact.
Another set of actors with an incentive here are the survey respondents themselves, who have reason to report a higher counterfactual value of first vs. second choices. Saying otherwise could work against their goal of attracting more of the EA talent pool to their positions. Framing their staff as irreplaceable also lends prestige to their organizations and staff.
With limited applicants, especially in very specialized areas, I think there is definitely a case for a high value difference between first- and second-choice applicants. But I suspect that this set of survey respondents would be biased in the direction of overestimating the counterfactual impact.
I donât know to what extent Moskowitz could have influenced Zuckerberg, but I am somewhat intrigued by the potential power of negative emotion that you bring up.
Ironically, one of the emotions that reflection on effective altruism has brought me is rather intense anger. The vast majority of people in developed countries have the ability to use their resources to save lives, significantly mitigate the mass torture of animals, or otherwise make the world a much better place. Yet, even when confronted squarely with this opportunity, most do not do it.
I think about other mass injustices and the movements that have sought to address them, and I remember that there was a place for righteous fury: I think, for instance, of women's suffrage or the civil rights movement. Yet the attitude among EAs is often conciliatory, milquetoast, professorial… almost embarrassed to be holding beliefs from which the judgment of most humans follows as only a close corollary.
I realize that in one-on-one interactions, a condemnatory approach is unlikely to gain us allies. But I wonder if a powerful engine for fighting global poverty, animal torture, and threats to the continued existence of conscious life might be the activation of the emotion that such matters merit.
I don't know to what extent this can be addressed by the EA Forum team at all, but I have been pretty disappointed by the lack of new, interesting ideas about how to better the world. There does not seem to be much incentive to share such ideas on the Forum, because most people will only look at articles on subject matters they are already familiar with, or on meta-level conversations regarding community, norms, or expectations around being in the EA world. I find myself pretty frequently logging in to the EA Forum hoping to find new, interesting ideas for changing the world, but just finding a bunch of banal or navel-gazing content. I think EA, and consequently the world, would benefit from its being a more vibrant, open-minded, and creative space, but I'm not sure what would help us move in this direction.
Yes, Thisj Jacobs mentioned this below, but thanks for bringing it to my attention.
Thank you for sharing this. I was not aware of this Profit for Good casino.
Re #1 - the customers in OP's contemplation would have already committed the funds to be donated, and prospective wins would inure to the benefit of charities. So it isn't clear to me that the same typical harm applies (if you buy the premise that gamblers are net harmed by gambling). There wouldn't be the circumstance where the gambler feels they need to win it back, because they've already lost the money when they committed it to the DAF.
Re #2 - this could produce a good experience for customers: donating money to charities while playing games. And with how OP set it up, they know what they are losing (unlike with a typical casino, where there's that hope of winning it big).
Re #3 - for the reasons discussed above, the predatory and deceptive implications are less significant here. Unlike when someone takes money to a slot machine in a typical casino, once they put the money in the DAF they no longer have a chance of "getting it back."
Re #4 - yeah, there might be some bad PR. But if people liked this and substituted it for normal gambling, it probably would be less morally problematic, for the reasons discussed above.
Re #5 - I'm not really sure that this business is as morally corrosive as you suggest… It's potentially disadvantaging the gambler's preferred charity in favor of the casino's, but not by much, and not without the gambler's knowledge.
Re #6 - the gamblers could choose the charities that are the beneficiaries of their DAF. And I don't know that enjoying gambling means that you wouldn't like to see kids saved from malaria and such.
I think your criticisms would better apply to a straight Profit for Good casino (a normal casino with charities as shareholders). The concerns you bring up are some of the reasons I think a PFG casino, though an interesting idea, would not be where I'd look to start as an early, strategic PFG (there are also big capital requirements).
OP's proposal is much more wholesome and actually addresses many more of the ethical concerns. I just think people may not be as interested in gambling if there were not the prospect of winning money for themselves.
I think the same amount of healthy and problem gambling would take place in aggregate regardless of whether there was a PFG casino among a set of casinos. But maybe some people would migrate that activity toward the PFG casino, so that more good could happen (it would offer the same odds as competitors).
It comes down to whether you're OK with getting involved in something icky if the net harm you cause to gamblers is zero and you can produce significant good in doing so. For me, this doesn't really pose a problem.
Thanks for your proposal. I have actually thought a Profit for Good casino would be a good idea (high capital requirements, but I think it could provide a competitive edge on the Vegas Strip, for instance). I find your take on it pretty interesting.
I think a casino that did not limit the funds that could be gambled to charitable accounts of some sort would have a much larger market than one that did. There is a lot of friction in requiring the setup of charitable accounts, even for people who are interested in charitable giving and enjoy gambling. You are also targeting a narrower subset of prospective clients, those who have both of these overlapping qualities. Meanwhile, there are millions of people who consistently demonstrate demand for gambling at casinos.
I think a lot of people would feel fine about playing at the casino and winning, because they know that there are winners and losers in casinos, but the house (in the end) always wins. Winners and losers would both be participating in a process that would be helping dramatically better the world.
Could you explain the legal advantage of your proposal vis-a-vis a normal casino either owned by a charitable foundation or structured as a nonprofit itself (Humanitix, for instance, is a ticketing company structured as a nonprofit)? Is it that people's chips would essentially be tax-deductible (because contributing to their DAF is tax-deductible)?
Another idea would just be a normal casino owned by a charitable foundation or trust: a "Profit for Good" casino. People could get the exact same value proposition they get from other normal casinos, but by patronizing the Profit for Good casino, they would (in expectation) be helping save lives or otherwise better the world.
You could have a great night in which you win hundreds or thousands of dollars, but even if you lose, you know that your losses are helping to dramatically better the world.
I think this is an excellent idea.
Orgs or "proto-orgs" in their early stages are often in a catch-22. They don't have the time or expertise (because they don't have full-time staff) to develop strong grantwriting or other fundraising operations, which could be enabled by startup funds. An org that was familiar with the funding landscape, could familiarize itself with new orgs, and could help them secure startup funds would help resolve the catch-22 that orgs find themselves in at step 0.
Worth noting that if there are around 10,000 EAs today in a world with a population of 8,000,000,000, the percentage of EAs globally is 0.000125 percent.
If we keep the same proportion and apply that to the world population in 1776, there would be about 1,000 EAs globally and about 3 EAs in the United States. If they were overrepresented in the United States by a factor of ten, there would be about 30.
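A quick back-of-the-envelope check of that arithmetic; the 1776 population figures below are rough historical estimates (about 800 million worldwide and 2.5 million in the US), not precise data:

```python
# Back-of-the-envelope check of the EA-proportion arithmetic.
eas_today = 10_000
world_pop_today = 8_000_000_000
share = eas_today / world_pop_today
print(f"{share:.6%}")                    # 0.000125%

world_pop_1776 = 800_000_000             # rough historical estimate
us_pop_1776 = 2_500_000                  # rough historical estimate
print(round(share * world_pop_1776))     # ~1,000 EAs globally
print(round(share * us_pop_1776))        # ~3 EAs in the US
print(round(share * us_pop_1776 * 10))   # ~31 if 10x overrepresented
```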
I don't think people are saying that putting time and/or money toward charities that address the poor in rich countries is not helping people, but merely that you could help more poor people in poor countries with the same resources. Thus, if we are considering the interests of the unfortunate in poor and rich countries equally, we would want to commit our limited resources to the developing world.
I think a lot of the time EAs are assuming a given set of resources that they have to commit to doing good. With that assumption, the counterfactual of a donation to the food pantry is a donation to a more cost-effective charity. The "warm fuzzy/utilon" dichotomy that you deride here actually supports your notion that the food pantry could compete with the donor's luxury consumption instead. This is because warm fuzzies (the donor's psychic benefit derived from giving) could potentially be a substitute for the consumption of luxury goods (going out to eat, etc.).
So, the concept of the fuzzies (albeit maybe with language you find offensive) actually supports your notion that, within individual donation decisions, helping locally does not always compete with effective giving.
I think the sort of world that could be achieved by the massive funding of effective charities is a rather inspiring vision. Natalie Cargill, Longview Philanthropy's CEO, lays out a rather amazing set of outcomes that could be achieved in her TED Talk.
I think a realistic method of achieving these levels of funding is Profit for Good businesses, as I lay out in my TEDx Talk. I think it is realistic because most people don't want to give something up to fund charities (as donating would require), but if they could help solve world problems by buying products or services they want or need, of similar quality and at the same price, they would.
I find it a bit surprising that your point is so well-taken and has met no disagreement so far, though I am inclined to agree with it.
Another way of framing "orgs that bring talent into the EA/impact-focused charity world" is orgs whose hiring is less focused on value alignment, insofar as involvement in the movement corresponds with EA value alignment. One might be concerned that a less-aligned hire might do well on metrics that can be easily ascertained or credited by their immediate employer, but ignore other opportunities or considerations regarding impact because they are narrowly concerned with legible job performance and personal career capital. They could go on, in this view, to use the career capital they developed to displace more-aligned individuals. If funding is a larger constraint on impactful work than labor willing to work for pay, "re-using" people in the community may make sense, because the impact premium from value alignment is worth the marginal delta from a seemingly superior resume.
Of course, another view is that hiring someone into an EA org can create buy-in and "convert" someone into the community, or allow them to discover a community they already agree with.
Something that just gives me pause regarding giving too much credit for bringing in additional talent is that, for many kinds of talent, there is a lot of EA talent chasing limited paid opportunities. Expanding the labor pool in some areas is probably much less important because funding is more the limiting factor.
Because we face substantial uncertainty around the eventual moral value of AIs, any small reduction in p(doom) or in catastrophic outcomes (including s-risks) carries enormous expected utility. Even if delaying AI costs us a few extra years before reaping its benefits (whether enjoyed by humans, other organic species, or digital minds), that near-term loss pales in comparison to the potentially astronomical impact of preventing (or mitigating) disastrous futures or enabling far higher-value ones.
From a purely utilitarian viewpoint, the harm of a short delay is utterly dominated by the scale of possible misalignment risks and missed opportunities for ensuring the best long-term trajectory, whether for humans, other organic species, or digital minds. Consequently, it's prudent to err on the side of delay if doing so meaningfully improves our chance of securing a safe and maximally valuable future. This would be true regardless of the substrate of consciousness. A stylized version of the expected-value comparison is sketched below.
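To make the dominance claim concrete, here is a stylized expected-value comparison; every number is an illustrative assumption, not an estimate:

```python
# Stylized expected-value comparison of a short AI delay.
# All numbers are illustrative assumptions, not estimates.
future_value = 1e15      # hypothetical value of a good long-term future
delta_p = 0.001          # hypothetical reduction in p(doom) from delaying
annual_benefit = 1e9     # hypothetical per-year benefit of earlier AI
years_delayed = 5

gain_from_delay = delta_p * future_value        # 1e12
cost_of_delay = years_delayed * annual_benefit  # 5e9

# Under these assumptions, even a tiny risk reduction swamps the
# near-term cost of delay by a factor of ~200.
print(gain_from_delay / cost_of_delay)          # 200.0
```

The point is not the particular numbers but the structure: if the future at stake is vastly larger than the per-year benefit of earlier AI, even a very small shift in the probability of a good outcome dominates the comparison.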