You make good points, but there’s no boolean that flips when “sufficient quantities of data [are] practically collected”. The right mental model is closer to a multi-armed bandit IMO.
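To gesture at what I mean, here's a minimal sketch of that mental model (an epsilon-greedy bandit; the payoffs and parameters are made up for illustration). The point is that data-gathering never flips to "done"; the explore share just shrinks relative to exploitation as the estimates firm up.

```python
import random

# Illustrative epsilon-greedy bandit: exploration never "switches off";
# it just becomes a smaller share of actions as estimates improve.
def run_bandit(true_means, steps=10_000, epsilon=0.1):
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(len(true_means))   # keep exploring
        else:
            arm = estimates.index(max(estimates))     # exploit the current best guess
        reward = random.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return counts, estimates

counts, estimates = run_bandit([0.2, 0.5, 0.8])
print(counts)  # most pulls go to the best arm, but no arm's count stays frozen at zero
```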
John_Maxwell
Great points.
There’s an unfortunate dynamic which has occurred around discussions of longtermism outside EA. Within EA, we have a debate about whether it’s better to donate to nearterm vs longterm charities. A lot of critical outsider discussion on longtermism ends up taking the nearterm side of our internal debate: “Those terrible longtermists want you to fund speculative Silicon Valley projects instead of giving to the world’s poorest!”
But for people outside EA, nearterm charity vs longterm charity is generally the wrong counterfactual. Most people outside EA don’t give 10% of their earnings to any effective charity. Most AI work outside EA is focused on making money or producing “cool” results, not mitigating disaster or planning for the long-term benefit of humanity.
Practically all EAs agree people should give 10% of their earnings to effective developing-world charities instead of 1% to ineffective developed-world ones. And practically all EAs agree that AI development should be done with significantly more thought and care. (I think even Émile Torres may agree on that! Could someone ask?)
It’s unfortunate that the internal nearterm vs longterm debate gets so much coverage, given that what we agree on is way more action-relevant to outsiders.
In any case, I mention this because it could play into your “ideologically diverse group of public figures” point somehow. Your idea seems interesting, but I also don’t like the idea of amplifying internal debates further. I would love to see public statements like “Even though I have cause prioritization disagreements with Person X, y’all should really do as they suggest!” And developing a norm of using the media to gain leverage in internal debates seems pretty bad.
In terms of understanding the causal effect of talking to journalists, it seems hard to say much in the absence of an RCT.
Someone ought to flip a coin for every interview request, in order to measure (a) the causal effect of accepting an interview on probability of article publication, and (b) the direction of any effects on article accuracy, fairness, and useful critique.
(That was meant as a bit of a joke, but I would honestly be delighted to see a bunch of articles about EA which include sentences like “Person X did not offer any comment because we weren’t assigned to the interview acceptance group in their RCT”. Seems like it sends the right signal to the sort of people we want to attract.)
In any case, until that RCT gets run, maybe it would be worthwhile to compare articles informed by interviews and articles uninformed by interviews side-by-side, and do what we can with the data we have. It’s easy to say “I talked to the journalist and the article was inaccurate”. But claiming that the article ended up worse than it would’ve been in the absence of an interview is harder. (There are also complicating factors: an article with quotes from relevant people may seem more legitimate to readers; no interview might mean no article.)
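To make the coin-flip idea concrete, here's a minimal sketch of the randomization plus a crude comparison of publication rates (everything here, field names included, is hypothetical):

```python
import random

assignments = []  # in practice you'd persist these somewhere durable

def handle_interview_request(request_id):
    """Coin-flip assignment for each incoming interview request."""
    accept = random.random() < 0.5
    assignments.append({"request_id": request_id, "accepted": accept})
    return accept

def compare_publication_rates(records):
    """records: dicts with 'accepted' and 'published' booleans."""
    def rate(group):
        return sum(r["published"] for r in group) / len(group) if group else float("nan")
    accepted = [r for r in records if r["accepted"]]
    declined = [r for r in records if not r["accepted"]]
    return rate(accepted), rate(declined)
```

Measuring accuracy and fairness would obviously need human ratings of the resulting articles, but the assignment step really is this simple.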
I think almost everyone I know who has taken up requests to be interviewed about some community-adjacent thing in the last 10 years has regretted their choice, not because they were punished by the community or something, but because the journalists ended up twisting their words and perspective in a way that both felt deeply misrepresentative and gave the interviewee no way to object or correct anything.
Do you have thoughts about the idea of creating a thread on a site like the EA Forum or Less Wrong where someone takes questions from the media and responds in writing publicly? 3 birds with one stone: written responses can be more considered, public source material discourages misrepresentation, and less need to respond to the same question multiple times.
(This was Wei Dai’s idea for handling journalist questions about Bitcoin.)
Is there somewhere we can see how the winners of donor lotteries have been donating their winnings?
Thanks for all your hard work in EA!
I think you (and lots of other EAs who feel the same way you do) are totally correct that you don’t deserve the response you’ve been seeing to the FTX situation. You deserve a huge pat on the back for doing so much for the world.
Separately, I also agree with these paragraphs Oliver wrote a few days ago, and I’m (tentatively) glad that there’s been more criticism than usual on the forum right now (even if it’s ultimately unrelated to FTX):
I do think it is indeed really sad that people fear reprisal for disagreement. I think this is indeed a pretty big problem, not really because EA is worse here than the rest of the world, but because I think the standard for success is really high on this dimension, and there is a lot of value in encouraging dissent and pushing back against conformity, far into the tails of the distribution here.
I expect the community health team to have discussed this extensively (like, I have discussed it with them for many hours). There are lots of things attempted to help with this over the years. We branded one EAG after “keeping EA weird”, we encouraged formats like whiteboard debates at EAG to show that disagreement among highly-engaged people is common, we added things like disagree-voting in addition to normal upvoting and downvoting to encourage a culture where it’s normal and expected that someone can write something that many people disagree with, without that thing being punished.
My sense is this all isn’t really enough, and we still kind of suck at it, but I also don’t think it’s an ignored problem in the space. I also think this problem gets harder and harder the more you grow, and larger communities trying to take coordinated action require more conformity to function, and this sucks, and is I think one of the strongest arguments against growth.
I know it may feel like someone “curb stomping you while you’re on your knees”. But in many cases I think a better model is (a) people doing public soul-searching or (b) people who were previously self-censoring no longer doing so.
I want a world where people feel appreciated for hard work, and (simultaneously) people feel comfortable in attempting constructive criticism, safe in the knowledge that their criticism won’t have the sort of negative career repercussions your post implies. Here’s my attempt to reconcile those two objectives; this goes out to all the EAs who are feeling burnt out right now:
It sounds like you’ve been working really hard at a job you hate, doing a lot of good. Maybe sometimes you think about taking a vacation or searching for a job that’s more fun, but avoid it because of opportunity costs.
If you’ve been doing that, I want to push back. You deserve a vacation. A nice long sabbatical, even. Not only do you deserve it, it seems justified on consequentialist grounds—I think the opportunity cost of your vacation will be less than the cost of the ingroup-hardening process you describe in your post. (Convenient that consequentialism sometimes calls for vacations, isn’t it?)
In conclusion, please know that your work is appreciated, and please take care of yourself!
This is a really important point. It might make sense to talk to journalists in order to contextualize what you said on the EA Forum—or to ask them not to use something!
Answering in writing should help with the “foot in mouth” problem. You can ask them to send questions, and say you don’t promise to answer all of them.
A journalist reached out to me recently and this is basically what I did; no regrets so far at least.
IMO “try to respond in writing” should be standard advice when dealing with journalists. Past that, I remember a Less Wrong user once created a (public) thread specifically for taking journalist questions; that seems like a good way to discourage misrepresentation.
Any chance we can get an interview with Nishad or Caroline? I feel like their answers would be a lot more informative in terms of what EA should take away from all this.
Fair enough!
You’re correct that the EA Forum isn’t as democratic as “one person one vote”. However, it is one of the more democratic institutions in EA, so it provides evidence about whether moving in a more democratic direction would’ve helped.
I’d be interested if people can link any FTX criticism on reddit/Facebook prior to the recent crisis to see how that went. In any case, “one person one vote” is tricky for EA because it’s unclear who counts as a “citizen”. If we start deciding grant applications on the basis of reddit upvotes or Facebook likes, that creates a cash incentive for vote brigades.
Not saying I disagree with this, but it may be worth noting that “democracy” as an alternative didn’t exactly do great either—Stuart Buck wrote this comment, and it got downvoted enough that he deleted it.
I agree dense housing would help. Another idea is more group houses. It seems that there’s an excess of big houses in the US right now: https://www.wsj.com/articles/a-growing-problem-in-real-estate-too-many-too-big-houses-11553181782
More thoughts on roommates as a solution for loneliness in this post I wrote: How to Make Billions of Dollars Reducing Loneliness. (Have learned more about the topic since writing that post; can share if people are interested)
A small probability of a big future win. The world today has lots of governments, but they seem to mostly follow a very small number of basic governance templates. At some point, there will be new states with new Constitutions (maybe via space settlements, maybe via collapse of existing states, etc.), but I expect these moments to be few and far between. A significant literature and set of experts on “ideal governance” could lead to a radically different kind of state government, potentially with radically different policies that the rest of the world could learn from.
I think you could rework this paragraph a bit and make it work for big online communities too. Mostly fairly static and undifferentiated, with occasional tectonic shifts representing opportunities for change, and important due to being upstream of lots of stuff. The nature of the governance problem is different, but I think there are many hypothetical approaches on a continuum between social media & states as they’re governed now, and many ideas which are applicable to both (e.g. quadratic voting, prediction markets).
Holden Karnofsky has some interesting thoughts on governance:
One theme is that good governance isn’t exactly a solved problem. IMO EA should use a mix of approaches: copying best practices for high-stakes scenarios, and pioneering new practices for lower-stakes scenarios. (For example, setting up a small fund to be distributed according to some experimental new method, then observing the results. EDIT: Or setting up a tournament of some kind where each team is governed according to a randomly chosen method.) Advancing the state of the art doesn’t just help us, it also seems like a promising cause area on its own.
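As a toy illustration of that kind of tournament setup (the method names and the outcome metric are entirely made up), the experimental part is just random assignment plus tracking:

```python
import random

METHODS = ["committee_vote", "quadratic_funding", "sortition_panel", "prediction_market"]

def assign_methods(teams, seed=0):
    """Randomly assign one governance method per team for the tournament."""
    rng = random.Random(seed)
    return {team: rng.choice(METHODS) for team in teams}

# Afterwards, compare whatever outcome you care about, grouped by method.
print(assign_methods(["team_a", "team_b", "team_c", "team_d"]))
```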
Here are some forum tags that are potentially relevant:
Thanks!
(Upvoted)
Events are not evidence for the truth of philosophical positions.
Are you sure? How about this position from Richard Chappell’s post?
(3) Self-effacing utilitarian: Ex-utilitarian, gave up the view on the grounds that doing so would be for the best.
Psychological effects of espousing a moral theory are empirical in nature. Observations about the world could cause a consequentialist to switch to some other theory on consequentialist grounds, no?
Not sure there’s a clean division between moral philosophy and moral psychology.
I agree hastily jumping to a different theory while experiencing distress seems bad, but it seems reasonable to update a bit on the margin.
I agree investigation should be thoughtful, but now seems as good as any opportunity to discuss. You say we should wait until facts are properly established, but I think discussion now can help establish facts, the same way a detective would want to visit the scene of a crime soon after it was committed.
I’d be interested to know if there’s any psychological research on how niceness and being ethical may be related.
For example, prior to the FTX incident, I didn’t usually give money to beggars, on the grounds that it was ineffective altruism. But now I’m starting to wonder if giving money to beggars is an easy way to cultivate benevolence in oneself, and cultivating benevolence in oneself is an important way to improve as an EA.
Does walking past beggars & rehearsing reasons why you won’t give them money end up corroding your character over time, such that you eventually become comfortable doing what Sam did?
Thanks!
I’m not sure I share your view of that post. Some quotes from it:
...he just believed it was really important for humanity to make space settlements in order for it to survive long-term… From what I could tell, [my professor] probably spent less than 10 hours seriously figuring out if space settlements would actually be more valuable to humanity than other alternatives.
...
Take SpaceX, Blue Origin, Neuralink, OpenAI. Each of these started with a really flimsy and incredibly speculative moral case. Now, each is probably worth at least $10 Billion, some much more. They all have very large groups of brilliant engineers and scientists. They all don’t seem to have researchers really analyzing the missions to make sure they actually make sense.
...
My impression is that Andrew Carnegie spent very little, if anything, to figure out if libraries were really the best use of his money, before going ahead and funding 3,000 libraries.
...
I rarely see political groups seriously red-teaming their own policies, before they sign them into law, after which the impacts can last for hundreds of years.
I don’t think any of these observations hinge on the EA framework strongly? Like, do we have reason to believe Andrew Carnegie spent a significant amount trying to figure out if libraries were a great donation target by his own lights, as opposed to according to the EA framework?
The thing that annoyed me about that post was that at the time it was written, it seemed to me that the EA movement was also fairly guilty of this! (It was written before the criticism/red teaming contest.)
I like how Hacker News hides comment scores. Seems to me that seeing a comment’s score before reading it makes it harder to form an independent impression.
I fairly frequently find myself thinking something like: “this comment seems fine/interesting and yet it’s got a bunch of downvotes; the downvoters must know something I don’t, so I shouldn’t upvote”. If others also reason this way, the net effect is herd behavior? What if I only saw a comment’s score after voting/opting not to vote?
Maybe quadratic voting could help, by encouraging everyone to focus their voting on self-perceived areas of expertise? Commenters should be trying to impress a narrow & sophisticated audience instead of a broad & shallow one?
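Roughly what I have in mind, as a toy sketch of quadratic voting costs (the credit budget and numbers are made up):

```python
def quadratic_vote_cost(votes):
    """Under quadratic voting, casting n votes on one item costs n**2 credits."""
    return votes ** 2

def spend_budget(allocations, budget=100):
    """allocations: dict of item -> votes. Check the ballot fits the credit budget."""
    cost = sum(quadratic_vote_cost(v) for v in allocations.values())
    if cost > budget:
        raise ValueError(f"ballot costs {cost} credits, budget is {budget}")
    return cost

# Spreading votes thinly is cheap; piling them onto one item gets expensive fast,
# which nudges voters toward weighing in hardest where they care or know the most.
print(spend_budget({"comment_a": 5, "comment_b": 3, "comment_c": 1}))  # 25 + 9 + 1 = 35 credits
```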
EDIT: Another thought: if there were a way to see my recent votes, I could go back and reflect on them to ensure I’m voting in a consistent manner across threads.
Perhaps ditch the “Your intellectual contributions are poorly regarded” thread; at best, it is unsupported & off-topic.
One consideration is that for some of those names, their ‘conversation’ with EA is already sorta happening on Twitter. The right frame for this might be whether Twitter or a podcast is a better medium for that conversation.
You could argue podcasts don’t funge against tweets. I think they might—I think people are often frustrated and want to say something, and a spoken conversation can be more effective at making them feel heard. See The muted signal hypothesis of online outrage. So I’d be more concerned about e.g. giving legitimacy to inaccurate criticisms, rewarding a low signal/noise ratio, or having extemporaneous speech taken out of context. These are all less of a concern if we substitute a lower-profile podcast for the 80K one—some of the people mentioned could be topical for Garrison’s podcast?
Edit: I suppose listening to this podcast might be good for value of information?