With recent FTX news, EA has room for more billionaire donors. For any proposed EA cause area, a good standard question to ask is: “Could this be done as a for-profit?” Quoting myself from a few years ago:
There are a few reasons I think for-profit is generally preferable to non-profit when possible:
- It’s easier to achieve scale as a for-profit.
- For-profit businesses are accountable to their customers. They usually only stay in business if customers are satisfied with the service they provide. Non-profits are accountable to their donors. The impressions of donors correlate imperfectly with the extent to which real needs are being served.
- First worlders usually aren’t poor and don’t need charity.
- You can donate the money you make to effective charities.
Before anyone jumps on me here: IMO, an important takeaway from the FTX catastrophe is that EA-founded businesses should avoid mentioning EA in their marketing by default. Even if you think you have a good reason to use EA in your marketing, you should still get CEA’s permission first.
Other good ways to not be like SBF include:
- Have detailed knowledge of what people are buying and accurately communicate it to customers, including your own uncertainty.
- Pay special attention to identifying and mitigating ways in which your product could do harm (e.g. for a chewy food product, choking is an obvious potential hazard).
- Do these things beyond what’s required by law and beyond what’s pragmatic from the point of view of maximizing profit.
- Be willing to pull the plug on the business proactively if risks seem too high, or things aren’t going in a good direction.
If you aren’t able to achieve healthy profits while respecting such ethical guidelines, that’s an indicator that the business idea isn’t promising and you should find something else. Good business ideas are in “blue oceans” with little competition, where you can also build a durable competitive moat, so you won’t be caught in a race to the bottom.
Anyway, back to the post. Your proposed interventions are “mostly bottlenecked on improving awareness”. In the business world, awareness is achieved through marketing. You could sell a product which helps kids with jaw development, and your marketing could improve awareness of this problem as a side effect.
What product could you sell? Perhaps some sort of kid-themed chewy snack, like a nutty animal cracker? Or maybe a long, thin stick to minimize choking risk (with spiky sides, so that even if it gets caught in the throat, a child can still breathe around it)? Maybe the snacks could come in grades of chewing difficulty, so you can put your kid on a step-by-step program of jaw development, gradually ramping them up from the soft foods they’re eating right now.
It’d be very tempting to promote this by saying: “buy this $10 snack and you could save thousands on orthodontics down the road”. That could be an amazing sales pitch. But you want to be very careful in making any kind of health claim. There are lawyers who specialize in navigating FDA regulations around such claims. Maybe you’d want to start with a study measuring your product’s impact on some sort of near-term proxy for jaw development, as a basis for eventually making such a claim.
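To make “start with a study” slightly more concrete, here is a minimal back-of-the-envelope sketch of how you might size a two-arm study on some proxy measure. Every number and name in it is a placeholder assumption (the proxy, the effect size, the power target), not a claim about what the real study design would be:

```python
import math

from scipy.stats import norm


def sample_size_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate per-group n for a two-sample comparison of means,
    using the standard normal-approximation formula."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided test
    z_beta = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)


# Made-up planning numbers: detect a 0.4 standard-deviation improvement
# on the (hypothetical) proxy measure with 80% power at alpha = 0.05.
print(sample_size_per_group(0.4))  # roughly 99 kids per group
```

The only point of the sketch is that the required sample grows quickly as the effect you hope to detect shrinks, which matters when budgeting the study.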
On the professional treatment side, you could build a directory of dentists who, e.g., work with expanders instead of doing extractions. Then create a tool that helps a parent find such a dentist in their area, solicit dentist reviews from parents, advertise your tool, and make money through lead generation.
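For illustration, here is a minimal sketch of the core lookup such a finder tool might do. The data model and field names (e.g. `uses_expanders`, `parent_reviews`) are hypothetical, not references to any existing directory or API:

```python
from dataclasses import dataclass, field


@dataclass
class Dentist:
    name: str
    city: str
    uses_expanders: bool  # hypothetical flag: offers expansion rather than extraction
    parent_reviews: list = field(default_factory=list)  # free-text reviews solicited from parents


def find_dentists(directory, city):
    """Return expander-friendly dentists in the given city, most-reviewed first."""
    matches = [d for d in directory
               if d.uses_expanders and d.city.lower() == city.lower()]
    return sorted(matches, key=lambda d: len(d.parent_reviews), reverse=True)


# Toy usage with made-up entries.
directory = [
    Dentist("Dr. A", "Denver", uses_expanders=True, parent_reviews=["Great with kids"]),
    Dentist("Dr. B", "Denver", uses_expanders=False),
]
print([d.name for d in find_dentists(directory, "Denver")])  # ['Dr. A']
```

The lead-generation revenue would then come from tracking which listings parents contact through the tool and charging dentists for those referrals.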
Even if you think you have a good reason to use EA in your marketing, you should still get CEA’s permission first.
I strongly disagree with the idea that CEA (or any person or entity) should have that kind of ownership over “effective altruism”. It’s not a brand, but a concept whose boundaries are negotiated by a wide variety of actors.
Suppose you see a commercial on TV. At the end of the commercial, a voice says “brought to you by Effective Altruism” and the heart-in-lightbulb logo appears on screen for several seconds.
I actually did hear of a case of a rando outside the community grabbing a Facebook page for “Effective Altruism”, gaining a ton of followers, and publishing random dubious stuff.
You can insist EA isn’t a brand all you want, but someone still might use it that way!
I’m not super attached to getting permission from CEA in particular. I just like the idea of EAs starting more companies, and dislike the idea of them often advertising those companies as EA.
Maybe a good thing to point out is that while the survival criterion for nonprofits is donations (i.e. almost by definition, nonprofits must win the approval of philanthropists), the survival criterion for companies is profitability. Imagine “Superstimulus Slot Machines Inc.” puts the EA logo on the side of its machines and runs a bunch of commercials explaining how all profits go to EA charities. This might be a really profitable business, and become the first thing people think of when they hear “EA”, without any EA outside the company ever signing on. [Note: Please don’t start this company; there are many better business ideas that don’t involve harming people!]
If the process for making this sort of corporate branding decision is fuzzy, it becomes easier for people to tilt it in their favor. So I think an explicit process makes sense, for the same reason it makes sense to preregister data analysis code before data gets collected. If you don’t like the “ask CEA” process, maybe you could suggest an alternative and explain why it’s better?
My suggestion would be to have no process other than general social sanctions. I don’t think it makes sense to make any person or entity an authority over “effective altruism” any more than it would make sense to name a particular person or entity an authority over the appropriate use of “Christian” or “utilitarian”.
I believe you’re introducing a new kind of connection when you bring up usage of the heart-in-lightbulb image. I couldn’t tell you who originally produced that image, but I assume it was connected to CEA. I agree that using an image strongly associated with the particular organization that created it might morally require someone to check in with that organization, even if the image isn’t copyrighted.
I believe effective altruism benefits strongly from the push and pull of different thinkers and organizations as they debate its meaning and what’s effective. Some stuff people do will seem obviously incongruous with the concept, and in such cases it makes sense for people to express social disapproval (as has been done in the past).
Before anyone jumps on me here: IMO, an important takeaway from the FTX catastrophe is that EA-founded businesses should avoid mentioning EA in their marketing by default. Even if you think you have a good reason to use EA in your marketing, you should still get CEA’s permission first.
This seems to me like the opposite lesson to learn. I think it would be terrible if EA updated from the FTX situation by still giving fraudsters a ton of power and influence, but now just not publicly associating with them.
This seems like it creates an even more adversarial relationship with the public, and I don’t think it would have made this situation much better (the vast majority of the damage here comes from the fact that Sam stole $8 billion of customer deposits, was actually a quite influential EA, and in some sense was an important leader, not from his being publicly associated with EA).
I think it would be terrible if EA updated from the FTX situation by still giving fraudsters a ton of power and influence, but now just not publicly associating with them.
I don’t think fraudsters should be given power and influence. I’m not sure how you got that from my comment. My recommendation was made in the spirit of defense-in-depth.
I can see how a business founder trying to conceal their status as an EA might create an adversarial relationship, but that’s not what I suggested.
To put it another way: SBF claimed he was doing good with lots of fanfare, but actually did lots of harm. The next EA billionaire should focus less on claiming they’re doing good, and more on actually doing good for their employees, customers, shareholders, and donation recipients.
But your recommendation of defense-in-depth would, I think, have made this situation substantially worse. I think the best saving throws we had in this situation were people scrutinizing Sam and his activities, not trying to hide his involvement with stuff.
I think we had a bunch of good shots at spotting what was going on at FTX before the rest of the world did, and I think downplaying Sam’s actual involvement in the community would have harmed that.
I also think that CEA would very likely have approved any request by Sam to be affiliated with the movement, so your safeguard would, I think, just have differentially made it harder for higher-integrity people (whom CEA sadly tends to want less association with, because they by necessity also hold more controversial beliefs) to actually be affiliated with EA, without helping much with the Sam/FTX case.
I think we had a bunch of good shots at spotting what was going on at FTX before the rest of the world did, and I think downplaying Sam’s actual involvement in the community would have harmed that.
Interesting points. I could see this going the other way as well. Maybe EAs would’ve felt freer to criticize FTX if they didn’t see it as associated with EA in the public mind. Also, insofar as FTX was part of the “EA ingroup”, people might’ve been reluctant to criticize them due to tribalism.
I also think that CEA would very likely have approved any request by Sam to be affiliated with the movement, so your safeguard would, I think, just have differentially made it harder for higher-integrity people (whom CEA sadly tends to want less association with, because they by necessity also hold more controversial beliefs) to actually be affiliated with EA, without helping much with the Sam/FTX case.
Re: controversial beliefs, I think Sam was unusually willing to bite bullets in public even by EA standards—see here.
Presumably any CEA approval process from here on would account for lessons learned from Sam. And any approval process would hopefully get better over time as data comes in about bad actors.
In any case, I expect that paying for audits (or criticism contests, or whatever) is generally a better way to achieve scrutiny of one’s organization than using EA in one’s marketing.