If EA has a management skills shortage, which seems to be the takeaway from a lot of posts here, one obvious conclusion is to try to recruit more people with managerial skills, but another might be that there are just way, way too many EA orgs and they could all be rationalized and merged a bit.
This is too long to read but can we skip to the point where I get told how many insects I'm worth? Last time we discussed this it came out to around 48 chickens, but I'm not yet clear on how many insects that equates to.
San Fran really needs the Sabs Solution to stop people doing this kind of thing ever again, it’s just such a terrible Schelling point for the tech industry. So much value is being destroyed by having so many productive people and firms clustered in this nightmarish and incredibly expensive hellhole. One medium-sized nuclear device. That’s all it takes.
LessWrong has had a few cults emerge from the ecosystem, but at least some of the hate for e.g. Leverage is basically just because Leverage holds up a mirror to mainstream EA/rationalism, and mainstream EA just really hates the reflection. "Yes, we are a cult, and what do you think you guys are?"
(incidentally, no one ever talks about the companies/institutions that came out of Leverage, but surely this should be factored into our calculations when we think about the costs & benefits!)
This doesn’t seem like a bad meta-strategy, fwiw. Surely otherwise EA just gets largely ignored.
a word of warning: the mods here are really dumb and over-censorious, and barbed-but-friendly banter like this is highly frowned upon, so while I absolutely don't give a fuck, you want to be careful w/ this kind of chat around here... take it from someone who keeps getting banned
I do think the harms seem very minor though, and especially minor relative to the potential benefits, which could be quite large even if it's just automating boring stuff like sending emails faster or whatever! Add it up over an entire economy & that's a lot of marginal gains.
No, the assumption is simply that we don't want people to be poor and starving. There are a lot of very, very poor people in the world. I would like their situation to improve. That means some economic growth. All the EA bednets and GiveDirectly and all this crap blah blah are absolutely worth zero, nada, nyet, compared to the incredible power of economic growth. Growth is so powerful because fast growth in one place can drag along loads of other places: look at how China's rise massively boosted growth in the countries in its supply chain. In fact you can make a pretty good argument that global development has been a complete disaster for decades in every country apart from China AND those countries in its supply chain! Vide https://americanaffairsjournal.org/2022/11/the-long-slow-death-of-global-development/
Obviously this is a huge number of people and worth celebrating despite the growth failures across LatAm and Africa, but it means we can do better, and it also means that boosting growth in the West through e.g. AI and LLMs (not atm, a hallucinating chatbot is pretty useless, but maybe we can make it good!) is potentially an absolutely massive win for the world. So accordingly I am massively skeptical of the growth-killing Euro-regulatory impulse towards tech, because it's clearly a) working out badly for Europe and b) very, very bad for the world if it somehow got applied everywhere
I'm sorry but I just flatly reject this and think it's trivially wrong. EA will be a massive force for bad in the world if it degenerates into some sort of regulatory scam where we try to throttle progress in high-growth areas based on nothing but prejudice and massively overblown fears about risk. This is a recipe for turning the whole world economy into totally dysfunctional zero-growth states like Italy or the UK or whatever. There's a reason why Europe has basically no native tech industry to speak of and is increasingly losing out to the US even in sectors like pharma where it was traditionally very strong. This anti-bigness attitude and desire to impose regulation in advance of any actual problems emerging is a lot of the reason why. It places far too much faith in the wisdom of regulators and not enough in markets' ability to correct themselves over time. The fact that you picked the massively price-competitive and feature-competitive smartphone industry as an example of market failure is a prime example of Euro-logic completely divorced from basic economic logic.
well clearly Musk is much better than all the EAs, he built these massive multi-billion-dollar companies and created loads of value along the way! We're going back to space with Elon! How cool is that? If you disagree, well, ok, I guess that's a very bold take considering the stock market's opinion...
re EVs, agree as well; even if you don't believe the climate stuff (I do, w/ some caveats), Teslas are very beautiful, great cars and almost certainly good for the world on other dimensions (e.g. less local pollution in urban areas etc)
these just seem like incredibly minor and/or unlikely harms tbh, and the idea that they merit any kind of advance regulation is just crazy talk imo. This is capitalism, we make things, product goes out, it happens! We trust the market to address most harms in its own time as a default. Unless the bad thing is really bad—some huge environmental pollutant, a national security risk, a world-ending threat—then we don’t do the European Permit Raj thing. We let these things work themselves out and address any problems that arise post hoc, considering the benefits as well!
how does it harm people? I mean I guess there’s a problem of people taking these LLM outputs as oracular truths because they don’t realize how frequently they hallucinate, but isn’t this just a self-correcting problem eventually as people figure it out? We don’t instantly shut down access to all new tech just because people struggle to use it correctly at first.
personally I love Thiel & Musk and think they’ve been massive net positives for the world!
I still don’t really understand how you can do safety & alignment research on something that doesn’t exist and maybe never will but I guess maybe I’m just too low-IQ to understand this Big Brain logic. Also I don’t understand how everyone is freaked out about a chatbot that can’t even reliably tell you basic biographical information about famous figures, for all that it can draft a very nice email and apparently write good code? idk
I kind of object to the title of this post. It's not really AI forecasting you want, insofar as forecasting is understood as generating fairly precise numerical estimates by finding a reference class, establishing a base rate, and applying a beautiful sparkle of intuition. You're making the case for AI-informed speculation, which is a different thing altogether. The climate analogy you make is pretty dubious, because we have a huge historical sample of past climates and at least some understanding of the things that drove climate change historically, so we can build some reasonably predictive climate models. This is not the case for AGI, and I doubt we can actually reduce our uncertainty much.
Tl;Dr “we have zero reason to think this exercise yields any meaningful results, or tells us anything useful whatsoever, and have plenty of reasons to think it doesn’t, and we are absolutely well aware of this, but we decided to do it anyway”
the historical examples I had in mind are various empires, or "empire moments" such as MacArthur in Japan.
Today the IMF is a reasonably effective cudgel for institutional reform, but I don't think it would take much to expand its operations and make them more ambitious, both in the amount of cash it lends and in the degree of involvement it has in recipient governance.
I think the "family/cultural pressure" to marry is very likely to be downstream of an unexpected pregnancy as a result of rape. I have never seen any estimates of the percentage of girls in SSA for whom sexual initiation (for want of a better phrase) comes through rape, but anecdotally I wouldn't be surprised if it were over 70% in many countries. Again, "child marriage" is not the problem here; the problem is likely a) weak growth = less reason to delay marriage and build human capital and b) massive, endemic sexual violence that is just absolutely everywhere.
This goes back to my take about how the problem here is actually LMIC governance. It seems trivially true that child marriage is objectively very bad, and also relatively bad compared to the opportunities these girls should have, but it might also be true that it's not so bad compared to the very limited opportunities they actually do have (albeit probably still negative). But the pathway to getting rid of child marriage seems clear: just improve governance & economic growth rates, and the problem will take care of itself as the economic returns to delaying marriage grow and grow. That seems much more tractable than some sort of mass cultural transformation.
and yet AI alignment is apparently tractable whereas "improve LMIC governance" isn't? EA confuses me sometimes. We have a hypothetical solution to a hypothetical problem vs concrete solutions to concrete problems; we just need to figure out the implementation!
it’s fine, I am not personally offended by these estimates of how much I’m worth (not least because I don’t take them seriously and I actually don’t think you do either, in that faced with a dying baby or a billion dying insects you’d save the baby). On some level I find them very funny. At the very least, however, I would ask this question: can you see how this stuff is sort of bad for EA’s wider reputation, on some quite fundamental level?
It’s very hard to trust someone who thinks you’re only worth 50 chickens or 500 insects or whatever. Sure, you might well suspect he’s lying (to you and himself), and if you know him really well you can probably say with near-certainty his revealed preferences are very different, but from the outside it’s kind of hard to know for sure. Occasionally people do actually believe this stuff! EA already has a trust problem, between FTX and the subsequent Governance Slack leaks, and this sort of thing just compounds it massively.
By default, you shouldn't really trust a utilitarian/consequentialist, because you only need to be on the wrong side of their utility/consequences calculations once. Ironically, I actually think Will MacAskill, of all people, explicitly acknowledged this problem once upon a time and wrote somewhere about how consequentialists should address it by committing to high ethical standards in their everyday dealings. If I'm remembering rightly, well, how's that working out...?
But look, deep down, our whole society is built on trust. The law is a last backup when things go wrong, not a first resort, which means if you want to achieve anything meaningful beyond a very tiny set of ideologically identikit fellow travellers you need to be trustworthy. This means that EAs need other people to trust them, and somehow I feel like “oh those are the guys who think I’m worth 50 chickens” just doesn’t really help. Maybe the problem is the messaging here as much as the content, although incredibly long-winded academic arguments that fly in the face of basic common sense don’t really help either.