I asked other people at the intersection of debating and EA, and they agreed with my line of reasoning that it would be contrived and lead to poorly structured arguments. I can elaborate if you really want, but I hesitate to spend time writing this out because I’m behind on work and, to be honest, don’t think it’ll have any impact on anything.
I don’t know. As someone who was/still is quite good at debating and connected to debating communities, I would find a flow-centric comment thread bothersome and unhelpful for reading the dialogues. I quite like internet comments as they are in this UI.
I think the combination of 1 and 2 is such that you want the people who come in through 1 to become the talented people noted down under 2. We should be empowering one another to be more ambitious. I don’t think I would have gotten my Emergent Ventures grant without EA.
But I don’t think we could have predicted people would dive into the comments like this. Usually comments get minimal engagement. There’s a LessWrong debate format for posts, but that’s usually with a moderator and such. This seems spontaneous.
Yeah, I think I would just bin all of the delays into one bucket so that they are not independent. For instance, the causal chain of WWI, the Great Depression, and WWII seems quite contingent upon one another. I’ll chew on how the binning works, but nonetheless I really appreciate this piece of work; it’s easy to read and understand, as well as internally well reasoned. Didn’t mean to come off too harsh.
It feels like you’re double counting a lot of the categories of derailment at first glance? There’s a highly conjunctive story behind each of the derailments, which makes me suspicious of multiplying them together as if they’re independent. I’m also confused about how you’re calculating the disjunctive probabilities, because on page 78 you write: “Conditional on being on a trajectory to transformative AGI, we forecast a 40% chance of severe war erupting by 2042.” However, this doesn’t seem to be an argument for derailment; it seems more likely to be an argument for race dynamics increasing?
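To make the independence worry concrete, here is a toy calculation (only the 40% war figure is from the report; the 30% for a second derailment is purely hypothetical):

$$P(\text{no derailment}) = (1 - 0.4)(1 - 0.3) = 0.42 \quad \text{if the two derailments are treated as independent,}$$

$$P(\text{no derailment}) \approx 1 - 0.4 = 0.6 \quad \text{if the second derailment mostly happens downstream of the first (binned together).}$$

Treating correlated derailments as independent makes "no derailment" look roughly 30% less likely than the binned version does, which is the double-counting I’m gesturing at.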
It feels like a weird confluence of effects:
The FTX Future Fund spawning at the same time.
What We Owe the Future dropping at the same time as more visible LLMs.
A cultural pivot in people’s System 1s about pandemics because of COVID-19.
I wish there were a library of sorts of different base models of TAI economic growth that weren’t just some form of the Romer model plus TFP going up because PASTA automates science.
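For reference, the kind of baseline I mean is the minimal Romer/Jones-style sketch (my paraphrase, not any particular report’s model): Hicks-neutral TFP $A$ in the production function, plus an ideas production function, where “PASTA automates science” just means expanding the effective research input $L_A$:

$$Y = A\,K^{\alpha}L_Y^{1-\alpha}, \qquad \dot{A} = \delta\,L_A^{\lambda}A^{\phi}, \qquad g_A \equiv \frac{\dot{A}}{A} = \delta\,L_A^{\lambda}A^{\phi-1}.$$

With $\phi < 1$ (the semi-endogenous Jones case) sustained growth needs ever-growing $L_A$; with $\phi = 1$ (the Romer case) a one-off jump in $L_A$ permanently raises $g_A$. Most TAI growth stories I see are some variation on which of these holds once AI substitutes for researchers, which is why I’d like the library.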
To be clear, you should still ask more people and look at the downstream effects on PhDs, research, etc. Again, I would echo the advice about 80k and reaching out to other people.
Impacts being heavy-tailed has a psychologically harsh effect, given the lack of feedback loops in longtermist fields, and I wonder what interpersonal norms one could cultivate among friends and the community writ large (loosely held/purely musing, etc.):
Distinguishing pessimism about ideas from pessimism about people.
Ex-ante vs. ex-post critiques.
Celebrating when post-mortems have led to more successful projects.
Mergers/takeover mechanisms for competition between people/projects.
I think EAs in the FTX era were leaning hard on hard capital (e.g. citing the No Lean Season shutdown), ignoring the social and psychological parts of taking risk and how we can be a community that recognises heavy-tailed distributions without making it worse for those who are not in the heavy tail.
To be fair, I think a few Schmidt Futures people were looking around EA Global for things to fund in 2022. I can imagine why someone would think they’re longtermists.
There are master’s programs in the UK that take non-CS students. Anecdata from friends: they’ve done PPE at Oxford and then an Imperial CS master’s.
An underrated thing about the (post-)rationalists/adjacent is how open they are with their emotions. I really appreciate @richard_ngo’s Replacing Fear series and a lot of the older LessWrong posts about starting a family with looming AI risk. I just really appreciate the personal posting, and when debugging comes from a place of openness and emotional generosity.
Yeah, I should have written more, but I try to keep my shortform casual to lower the barrier to entry and to allow for expansions based on different readers’ issues.
I notice a lot of internal confusion whenever people talk about macro-level bottlenecks in EA:
Talent constraint vs. funding constraint.
80k puts out declarations as the funding situation changes, such as “don’t found projects on the margin” (RIP FTX).
People don’t found projects in AI Safety because of this switch-up.
Over the next 2 years people up-skill and do independent research or join existing organisations.
Eventually, there are not enough new organisations to absorb funding.
[reverse the two in cycles I guess]
Mentorship in AI Safety
There’s a mentorship bottleneck so people are pushed to do more independent projects.
Fewer new organisations get started because people are told it’s a mentorship and research-aptitude bottleneck.
Eventually the mentorship bottleneck catches up because everyone has up-skilled, but there aren’t enough organisations to absorb the mentors, etc.
To be clear, I understand the counterarguments about marginality and these are exaggerated examples, but I do fear that, at its core, the way EAs defer means we get the worst of the social planner problem with none of the benefits of the theory of the firm.
I notice myself being confused about why trades have to happen at the OpenPhil level. I think Pareto optimality in trades works best when there are more actors aggregating and talking. It’s sad that donor lotteries have died out to an extent, and that so much regranting discourse is about internal EA social dynamics rather than impact in and of itself.
Examples of resources that come to mind:
Platforms and the ability to amplify. I worry a lot about the amount of money going to global priorities research and graduate students (even though I do agree it’s net good). For instance, most EA PhD students take teaching buyouts and so probably have more hours to devote to research. Sharing these resources probably means a better distribution of prestige bodies and amplification gatekeepers.
To be explicit, my model of the modal EA is that they have bad epistemics and would take this to mean “fund a bad-faith critic” (and there are so many), but I do worry that sometimes EA wins in the marketplace of ideas due to money rather than truth.
Give access to the materials necessary to make criticisms (e.g. AI Safety papers should be more open with dataset documentation etc.).
Again, this is predicated on good-faith critics.
Sorry, I meant OP as in original poster, not OpenPhil. But nice response nonetheless!
I mean, to be fair to the OP (edit: I meant the original poster), they make their uncertainty, and the conditionals it entails, really clear throughout. I don’t think it’s fair to say they’re not being honest and truthful.
Although I upvoted because I think these critiques are really healthy, the visceral feeling of reading this post was quite different from the first one. This one feels more judgemental on a personal level and gave me information that felt too privacy-violating, though I can’t quite articulate why. A lot of it feels like dunks on Conjecture for being young, ambitious, and for failing at times (I will note that I know this is not the core of the critique; it just FEELS that way).
I just do not feel like the average forum user is in a place to adjudicate the interpersonal issues named in the Conjecture post. I also feel confused about how to judge a VC-funded entity given that, as both the critique and the response note, these are often informal texts and Slack messages.