I don’t think it’s very surprising that 80% of the value comes from 20% of the proposed solutions.
I’m glad that there’s more really good work in this area, and I’m looking forward to rates much better than $180/DALY or $150/woman/year.
A lot of people in EA have no idea of the extent to which EA strategies are capable of revolutionizing existing work in this area. It seems that people won’t touch this cause area with a ten-foot pole because the entire cause area is associated with a very large number of deranged people on contemporary social media.
But the presence, absence, or frequency of deranged people on social media tells us virtually nothing about the scope and severity of a cause area, let alone the extent to which existing programs can have their effectiveness increased by orders of magnitude. Social media (and the entertainment/media industry in general) is a broken clock, and it shouldn’t distract or divert anyone from Akhil’s game-changing research here.
I only worry about and work on AI safety, but I have a profound appreciation for animal welfare work, especially when it comes to sociology and public outreach. There are incredible insights to be made, and points to prove/demonstrate, by being as overwhelmingly correct about something as EA is on animal welfare. I’m really glad that this new book can focus on the ~50 years of sociological changes since the last one; detailed research on the phenomenon of large numbers of people being slow to update their thinking on easily provable moral matters is broadly applicable to global health and existential risk as well.
I’ve done a lot of reading about soft power: elites around the world are attracted to things that are actually good, like freedom of speech and science, which ends up giving the US and Europe a big strategic advantage in global affairs; meanwhile, hard power like military force and economic influence systematically repels elites. I’m optimistic about the level of competence of people who find out about EA through animal welfare, due to their ability to recognize the sheer asymmetry between EA and non-EA takes on animal welfare.
I just wish there were a way to scale it up more effectively, e.g. at university EA groups doing outreach, since the elephant in the room is the first impression new people get when they reflexively default to thinking “oh no, a vegetarian is trying to convince me to change my diet even though I’m satisfied with it”. If there were some galaxy-brained way around that, e.g. a perfect combination of 40 words that lets you introduce wild animal welfare to someone without being looked at funny, it would plausibly be worth putting a lot of effort into figuring out a near-optimal template for the perfect pitch.
If this is true, then I think the board has made a huge mess of things. They’ve taken a shot without any ammunition, and not realised that the other parties can shoot back. Now there are mass resignations, Microsoft is furious, seemingly all of Silicon Valley has turned against EA, and it’s even looking likely that Altman will come back.
How much of this is “according to anonymous sources”?
The Board was deeply aware of intricate details of other parties’ will and ability to shoot back. Probably nobody was aware of all of the details, since webs of allies are formed behind closed doors and rearrange during major conflicts, and since investors have a wide variety of retaliatory capabilities that they might not have been open about during the investment process.
The crypto section here didn’t seem to adequately cover a likely root cause of the problem.
The “dark side” of crypto is a dynamic called information asymmetry: wealthier traders are vastly superior at buying low and selling high, and the vast majority of traders are left unaware of how profoundly disadvantaged they are in what is increasingly a zero-sum game. Michael Lewis covered this concept extensively in Going Infinite, the Sam Bankman-Fried book.
This dynamic is highly visible to those in the crypto space (and to quant/econ/logic people in general who catch so much as a glimpse), and many elites in the industry, like Vitalik and Altman, saw it coming from a mile away and tried to find or fund technical solutions to the zero-sum problem, e.g. Vitalik’s d/acc concept.
SBF also appeared to be trying to find technical solutions, rather than just short-term profiteering, but his decision to commit theft points towards the hypothesis that this was superficial.
I can’t tell if there’s any hope for crypto (I only have verified information on the bad parts, not the good parts, if there are any left), but if there is, it would have to come from elite reformers, who are exactly these types of people (prone to races to the bottom for reputation and to outcompete rivals) and who each come with the risk of being only superficially committed.
Hence, the popular idea of “cultural reform” seems like a roundabout and weak plan. EA needs to get better at doing the impossible on a hostile planet, including successfully sorting/sifting through accusationspace, power plays, and deception, and evaluating the motives of powerful people in order to determine safe levels of involvement and reciprocity. Not massive, untested, one-shot social revolutions with unpredictable and irreversible results.
The Forum feels like it’s in a better place to me than when FTX declared bankruptcy: the moderation team at the time was Lizka, Lorenzo, and myself, but it is now six people, and they’ve put in a number of processes to make it easier to deal with a sudden growth in the number of heated discussions. We have also made a number of design changes, notably to the community section.
This is a huge relief to hear. I noticed some big positive differences, but I couldn’t tell where from. Thank you.
Frankly, I’m pretty disturbed by how fast things are going and how quick people were to demand public hearings. Over the last 20 years, this sort of thing has happened in extremely bad situations, and in a surprisingly large proportion of them, the calls for upheaval were being deliberately and repeatedly sparked by a disproportionately well-resourced vocal minority.
It looks like the implication here is that spending large amounts of money on altcoins, and then earning to give if you get lucky, is one of the best things an EA can do.
This is not true; becoming a crypto billionaire is a losing strategy. When crypto billionaires are created, it’s usually because the crypto market cap doubled. Each time it doubles, that is one less time it can double before it reaches the ceiling of replacing fiat currency and becoming all money. It will probably stop doubling long before then, because major governments are much more willing to violently defend their fiat currencies than most people seem to be aware of.
Also, each time it doubles, media attention also roughly doubles. So the more prevalent crypto is in your mind, the less room for growth remains.
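To make the doubling arithmetic concrete, here is a minimal back-of-the-envelope sketch; the market-cap and money-supply figures are illustrative assumptions of mine, not figures from the comment above.

```python
import math

# Illustrative assumptions (not real market data):
crypto_market_cap = 1e12    # assume ~$1 trillion current crypto market cap
total_fiat_money = 100e12   # assume ~$100 trillion of fiat money to replace

# Each doubling consumes one of a finite number of doublings remaining
# before crypto would equal all money in existence.
doublings_remaining = math.log2(total_fiat_money / crypto_market_cap)
print(f"Doublings remaining under these assumptions: {doublings_remaining:.1f}")
# Under these numbers only ~6-7 doublings are even arithmetically possible,
# and the argument above is that growth stops well before that ceiling.
```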
There are even stronger reasons why crypto is obviously a very bad strategy for making money, but I am not willing to discuss them on a public forum. Suffice it to say that “trying to become a crypto billionaire” is a much worse idea than it sounds. If watching Silicon Valley turned you off of becoming a genius tech billionaire, then you should know that the crypto approach is so much worse in every way.
Oops! I’m off my groove today, sorry. I’m going to go read up on some of the conflict theory vs. mistake theory literature on my backlog in order to figure out what went wrong and how to prevent it (e.g. how human variation and inferential distance cause very strange mistakes via miscommunication).
I definitely agree that it usually doesn’t contribute much. But for most people who lived and breathed their personal research area for 5-20 years and just had AI safety explained to them 5-20 minutes ago, finding ways to connect their specialty to AI safety can feel like genuinely contributing. It’s habitual to take your knowledge and try to make it helpful to the person in front of you, especially in the very first conversation with them about their problem.
The decision to attend this particular protest is actually a difficult one. Normally, EA-minded people consistently do not vote or attend protests, since whether the protest succeeds depends on the non-EA masses, who don’t do EV evaluations. Your decision not to attend predicts the decisions of other EA-minded people not to attend, but does not predict the decisions of the non-EA people who almost entirely determine whether the protest/vote succeeds or fails.
However, with this specific protest, EA-minded people are the ones who almost entirely determine whether it succeeds or fails, because this is a protest by elites, against elites, and the general population is unwilling/unable to do EV calculations and will not attend either way. Therefore, your decision not to attend predicts whether this protest succeeds or fails. If a third of EA-affiliated people attend, then it probably intimidates Facebook quite a bit, whereas if it fails and only 12 people attend, then it might even embolden Facebook.
I’m someone who would normally not go to protests, because, in my own words, “that is obviously something that the world already has plenty of people doing”, and many people affiliated with EA have an extremely similar knee-jerk response to public protests. But this situation is different.
On what other occasions has a major news outlet knowingly published misinformation about EA? Is there a database for this? Misinformation of this caliber needs to be archived so that it can be made accessible to misinformation and disinformation analysts. There are likely trends here worth pointing out, but this sort of thing has a wide variety of causes, so there are probably trends that only a very small number of people know how to spot. There are a lot of problems that can be handled entirely by generalists, but this isn’t one of them.
I am as absurd a character as those in storybooks. My struggles are grandiose, my errors are obvious to the reader, and my victories are significant. If I really manage a fraction of what I set out to do, I should celebrate with my friends. This is my character arc.
I want to emphasize this particular part, because I want to reduce the risk that someone leaves before seeing it.
like a speedrunner discovering new exploits
Sounds like a good idea; EA-affiliated people should strive to be the type of person who is at the table instead of on the menu.
^This is really important, and I completely missed it. It’s similar to how the winner of an auction tends to be the person who mistakenly paid more than the item was worth to them (or to anyone). The most visible EAs (billionaires) could be the winners in a game with a massive net loss overall. Crypto is exactly that kind of thing.
There’s actually a surplus of high-risk-high-reward people in the world, to the point where people will sacrifice the $500 million for a 1% chance of getting $40 billion. They’re not just paying the extra fee for the possibility of becoming a billionaire and lording over everyone else; they’re also paying yet another fee to compete against everyone else vying for that slot, due to the sheer number of people who psychologically want to become a billionaire and lord over everyone else.
In other words, it becomes a lottery.
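As a hedged illustration of why this becomes a lottery, here is a minimal expected-value sketch using the figures from the comment above; the competition haircut at the end is a purely illustrative assumption of mine, not a measured quantity.

```python
# Expected-value sketch of the gamble described above.
cost = 500e6    # $500 million sacrificed (figure from the comment)
prize = 40e9    # $40 billion payoff (figure from the comment)
p_win = 0.01    # 1% chance of winning (figure from the comment)

ev = p_win * prize - cost
print(f"EV before competition: ${ev / 1e6:,.0f} million")
# 0.01 * $40B = $400M in expectation, already $100M less than the $500M paid.

# The comment argues that crowding by would-be billionaires dissipates even
# more value; this 50% haircut is an assumed placeholder, not a measurement.
competition_haircut = 0.5
ev_competed = p_win * prize * (1 - competition_haircut) - cost
print(f"EV with assumed competition: ${ev_competed / 1e6:,.0f} million")
```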
Even worse than a lottery, in fact, because the real world has information asymmetry and is rigged to scam people in more complicated ways than lotteries are, such as data poisoning and perimeterless security.
Want.
You shouldn’t be surprised if there’s a significant uptick in public discourse about both effective altruism and longtermism in August!
Has anyone given significant thought to the possibility that hostility to EA and longtermism is stable and/or endemic in current society? For example, suppose that opposition to AI capabilities development (a key national security priority in both the US and China) has resulted in agreements to prevent negative attitudes about AI from becoming mainstream or spreading to many AI workers, regardless of whether the threat comes from an unexpected place like EA (as opposed to an expected place, like Russian bots on social media, which may even have already used AI safety concepts to target Western AI industry workers in the past).
In that scenario, social media and news outlets will generally remain critical of EA, and staying out of the spotlight and focusing on in-person communication would have been a better option than triggering escalating media criticism of EA. We live in the post-truth era, after all, and reality is allowed to do this sort of thing to you.
Henrik Karlsson’s post Childhoods of Exceptional People compiled research indicating that there are intensely positive effects from young children spending lots of time talking and interacting with smart, interested adults; so much so that we could even reconsider the paradigm of kids mostly spending time with other kids their age.
And there are a lot of reasons to decide not to say a lot right now.