I was at an EA party this year where there was definitely an overspend of hundreds of pounds of EA money on food, most of which was wasted. As someone who was there at the time, I can say this was very clearly avoidable.
It remains true that this money could have changed lives if donated to EA charities instead (or even used less wastefully towards EA community building!), and I think we should view things like this as a serious community failure that we want to avoid repeating.
At the time, I felt extremely uncomfortable / disappointed with the way the money was used.
I think if this happened very early into my time affiliated with EA, it would have made me a lot less likely to stay involved—the optics were literally “rich kids who claim to be improving the world in the best way possible and tell everyone to donate lots of money to poor people are wasting hundreds of pounds on food that they were obviously never going to eat”.
I think this happened because the flow of money into EA has made the obligations to optimise cost-efficiency and to think counterfactually seem a lot weaker to many EAs. I don’t think the obligations are any weaker than they were—we should just have a slightly lower cost-effectiveness bar for funding things than before.
I had exactly the same thought in an identical-sounding situation. I felt incredibly uncomfortable, and someone at the party pointed out to me that these kinds of spending habits really alienate young EAs from less privileged backgrounds who aren’t used to ordering pricey food deliveries whenever they feel like it.
I think that it is worth separating out two different potential problems here.
1. It is bad that we wasted money that could have directly helped people.
2. It is bad that we alienated people by spending money.
I am much more sympathetic to (2) than (1).
Maybe it depends on the cause area but the price I’m willing to pay to attract/retain people who can work on meta/longtermist things is just so high that it doesn’t seem worth factoring in things like a few hundred pounds wasted on food.
I think if we value longtermist/meta community building extremely highly, that’s actually a strong reason in favour of placing lots of value on those couple of hundred pounds—in this kind of scenario, a lot of the counterfactual uses of the money would be putting it usefully towards longtermist/meta community building.
I think another framing here is that:
1) wasting hundreds of pounds on food is multiple orders of magnitude away from the biggest misallocation of money within EA community building,
2) all the misallocation of money within EA community building combined is smaller than the misallocation caused by donations to less effective cause areas (for context, Open Phil spent ~$200M on criminal justice reform, more than all of their EA CB spending to date), and
3) it’s pretty plausible that we burned much more utility by failing to donate/spend enough than by donating too much to wasteful things, so looking only at the “visible” waste ignores the biggest source of resource misallocation.
For what it’s worth, even though I prioritize longtermist causes, reading
Maybe it depends on the cause area but the price I’m willing to pay to attract/retain people who can work on meta/longtermist things is just so high that it doesn’t seem worth factoring in things like a few hundred pounds wasted on food.
made me fairly uncomfortable, even though I don’t disagree with the substance of the comment, as well as
2) all the misallocation of money within EA community building combined is smaller than the misallocation caused by donations to less effective cause areas (for context, Open Phil spent ~$200M on criminal justice reform, more than all of their EA CB spending to date), and
Yeah, I’d mostly agree with this framing.
I don’t mean to imply that this party was one of the worst instances of money being wasted in EA, just that I was there, felt pretty uncomfortable, the optics were particularly bad (compared to donating to something not very effective), and it made me concerned about how EAs are valuing cost-effectiveness and counterfactuals.
I agree that it’s important not to let the perfect be the enemy of the good, and it’d be bad not to criticize X just because X isn’t the single biggest issue in the movement. But on the other hand, some sense of scale is valuable (at least if we’re considering the object level of resource misallocation and not just/primarily optics).
Like if 30 EAs are at a party, and their time is conservatively valued at $100/h, the party is already burning >$50/minute, just as another example. Hopefully that time is worth it.
Like if 30 EAs are at a party, and their time is conservatively valued at $100/h, the party is already burning >$50/minute, just as another example. Hopefully that time is worth it.
This is probably a bit of an aside, but I don’t think that is a valid way to argue about the value of people’s time: it seems quite unlikely to me that, instead of going to an EA party, those people would actually have done productive work worth $100/h. You only have so many hours in which you can actually do productive work, and the counterfactual to going to this party would more likely be those people going to a (non-EA) party, going for dinner with friends, spending time with family, relaxing, etc. than actually doing productive work.
Even free time has value: maybe people would by default talk about work in their free time, or relax in a more optimal way than partying, thus making them more productive. So a suboptimal party can still waste lots of value in ways other than taking hours away from work. Given this, there are many people whose free time should be valued at >$100/h.
Fair point, that’s a reasonable callout. I think the elasticity here is likely between 0 and 1, so really you should apply some discount; say, maybe 30% of the counterfactual is productive work time, for example? So we get to >$30/h per person and >$15/min for the party in the above Fermi.
(As an aside, at least for me, I don’t find EA parties particularly relaxing, except relatively small ones where I already know almost everybody)
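To make the arithmetic in this exchange concrete, here is a minimal sketch of the Fermi estimate in both its undiscounted and discounted forms; all of the numbers are just the assumptions stated above, not measured values:

```python
# Fermi sketch of the party's time cost, using the assumptions from the thread above.
attendees = 30           # EAs at the party
value_per_hour = 100     # conservative value of an attendee's time, in $/h
work_elasticity = 0.3    # assumed share of party time that displaces productive work

burn_per_min = attendees * value_per_hour / 60       # $50/min, undiscounted
discounted_per_min = burn_per_min * work_elasticity  # $15/min after the 30% discount

print(f"undiscounted: ${burn_per_min:.0f}/min, discounted: ${discounted_per_min:.0f}/min")
```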
Also with regards to longtermist stuff in particular, I think there’s a risk of falling into “the value of x-risk prevention is basically infinite, so the expected value of any action taken to try and reduce x-risk is also +infinity” reasoning.
I think this kind of reasoning risks obscuring real differences in cost-effectiveness between x-risk mitigation initiatives, differences which do exist and which we should take seriously because of other counterfactual uses of the money and because we don’t have unlimited resources.
(There’s a chance I’m badly rephrasing complicated philosophy debates around fanaticism, Pascal’s mugging, etc. here, but I’m not sure.)
Also with regards to longtermist stuff in particular, I think there’s a risk of falling into “the value of x-risk prevention is basically infinite, so the expected value of any action taken to try and reduce x-risk is also +infinity” reasoning.
I agree with you that this is clearly dumb! I don’t think calebp is making that mistake in the comment above however.
Apologies if I misinterpreted calebp’s comment, but I would paraphrase it as “the expected value of a longtermist EA community building event is infinite, and remains infinite with £200 being wasted on uneaten food, so we shouldn’t worry about the lost expected value from overspending on food by £200.”
I think that is a pretty uncharitable view. I would say that it’s obviously not viewed as “infinite”, but orders of magnitude higher than £200. I’m sure calebp and most members of the community would definitely worry at £200,000 of wasted food.
I don’t think this is right, because there aren’t good mechanisms to convert money into utility. I don’t think there are reasonable counterfactual uses of this money that aren’t already maxed out.
That said, if you can point to some actions in LT community building that should get a few hundred pounds, aren’t happening due to a lack of money, and seem positive in EV, I’d be happy to fund these actions (in a personal capacity).
I think giving more money to AMF / GiveDirectly / StrongMinds is a pretty good mechanism to convert money into utility.
I also think it’s very difficult for counterfactuals to become maxed out, especially in any form of community building.
One concrete action—pay a random university student in London who might not be into EA but could do with the money to organise a dinner event and invite EAs interested in AI safety to discuss AI safety. I think this kind of thing has very high EV, and these kinds of things seem very difficult to max out (until we reach a point where, say, there are multiple dinners every day in London to discuss AI safety).
I think one cool thing about some aspects of community building is that they can only ever be constrained by funding, because it seems pretty easy to pay anyone, including people who don’t care about EA, to do the work.
Re the AI safety dinners—it seems like a cool project could just be hiring someone full-time to coordinate and facilitate such dinners: inviting people and grouping them, handling logistics, suggesting structures for discussion, inviting special guests, etc. Is this something that’s being worked on? Or is anyone interested in doing it?
Wondering if there could be a tie-in with the AGI Safety Fundamentals course. E.g. the first step is inviting a broad range of people (~1,000-10,000) to a dinner event held at multiple (~100?) locations around the world within a week. Then those who are interested can sign up for the course (~1,000).
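As a rough sketch of the funnel those numbers would imply (all figures here are the speculative ranges from the comment above, not estimates from any real program):

```python
# Hypothetical funnel for the dinners -> AGI Safety Fundamentals idea,
# using the speculative ranges from the comment above.
invited = 10_000     # upper end of the ~1,000-10,000 invite range
locations = 100      # ~100 dinner locations worldwide in one week
signups = 1_000      # hoped-for course signups

per_dinner = invited / locations  # ~100 invitees per location
conversion = signups / invited    # implies a ~10% invite-to-signup rate

print(f"~{per_dinner:.0f} invitees per dinner, ~{conversion:.0%} signup rate")
```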
I think giving more money to AMF / GiveDirectly / StrongMinds is a pretty good mechanism to convert money into utility.
I meant from an LT worldview.
One concrete action—pay a random university student in London who might not be into EA but could do with the money to organise a dinner event and invite EAs interested in AI safety to discuss AI safety. I think this kind of thing has very high EV, and these kinds of things seem very difficult to max out (until we reach a point where, say, there are multiple dinners every day in London to discuss AI safety).
Have you tried this? I wouldn’t predict this going very well. I also haven’t heard of any community builders doing this (though of course I don’t know all the community builders).
I agree that this kind of dinner could be a good use of funding, but the specific scenario you described isn’t obviously positive EV (at least to me). I’d worry about poorly communicating EA and about low quality of conversation due to the average attendee not being very thoughtful (if the attendees are thoughtful, then it is probably worth more CB time).
You also need to worry about free dinners making us look weird (like in the OP). I think that promoting/inviting people who might make the event go well is going to require a community builder, as opposed to just a random person. Alternatively, the crux could be that we actually do have similar predictions of how the event would go and have different views on how valuable the event is at some level of quality.
I think one cool thing about some aspects of community building is that they can only ever be constrained by funding, because it seems pretty easy to pay anyone, including people who don’t care about EA, to do the work.
This is really far from my model of LT community building, I would love it if you were right though!
Yeah it’s hard to tell whether we disagree on the value of the same quality of conversation or on what the expected quality of conversation is.
Just to clarify though, I meant inviting people who are both already into EA and already into AI Safety, so there wouldn’t be a need to communicate EA to anyone.
I also don’t actually know if anyone has tried something like this—I think it would be a good thing to try out.
I think this happened because the flow of money into EA has made the obligations to optimise cost-efficiency and to think counterfactually seem a lot weaker to many EAs. I don’t think the obligations are any weaker than they were—we should just have a slightly lower cost-effectiveness bar for funding things than before.
To me, the most important issue that this (and other comments here) raises is that, as a community, we don’t yet have a good model of how an altruist who (rationally/altruistically) places a very high value on their time should actually act. Or, for that matter, how they shouldn’t.
I realize the discussion here is broader than this specific case, but for this specific case, couldn’t people have just taken the extra food home so it would not go to waste?
Actually, yes, that would have made a lot of sense; not sure why this didn’t happen.