If possible, an easier way to change between dark/light mode, like a switch on the top right menu, would be great.
Renan Araujo
Thoughts about AI safety field-building in LMIC
Cool to see that AMA, big fan of that team!
Apart from LW, what are some other forums you take inspiration from? I'm curious about features you implemented that were inspired by those forums (or that you still want to implement), their culture, etc.
Hey! I noticed this is your first comment and wanted to try to explain why you got some negative karma there: I think this might not be the best place for you to ask those questions, since they’re the online team rather than introductory-type community builders.
Here are a couple of links that might help you with your question: the “Learn” section on effectivealtruism.org and the EA virtual intro program.
How worthwhile do you think it is for community builders to emphasize GWWC as a milestone/next step? I'm also curious about how your view on this has evolved over time (my sense is that it was a more emphasized milestone in the past)
CEA for the developing world
Effective Altruism
The main EA movement building organization, CEA, focuses primarily on talented students at top universities in developed countries. This seems to be due to a combination of geographical and cultural proximity, the quantity of English speakers, and the ease of finding top talent. However, there is a huge amount of untapped talent in developing countries that may be more easily reached through dedicated organizations optimized for being culturally, linguistically, and geographically close to such talent, such as a CEA for India or Brazil. Such an organization would develop its own goals and strategies tailored to its region, such as prioritizing nationwide prizes over group-by-group support, hiring local EA talent to lead projects, and identifying and partnering with regionally influential universities and institutions. This project would not only contribute to increasing diversity in EA, but also foster organizational competition by allowing different movement building strategies, and better position the EA movement for unexpected geopolitical power shifts.
fwiw the problem they’re trying to solve, as articulated by them, is: “Artificial Intelligence and automated decision making are transforming the world. But too often these technologies are developed without consideration for the individuals and communities they affect. This means AI can exacerbate existing social justice challenges.”
In 2020, I got to the interview stage of FHI’s research scholar selection and did not get in. They offered me a feedback call. I took up their offer and spoke with Max Daniel for ~40 min. He not only gave me insightful feedback about my performance in the process—to the best of his knowledge—but also gave me some career advice, which ended up taking up most of the 40 min. I’m not sure if this was done with everyone who got to the stage I did, but it left me with a great feeling (for a rejection) and boosted my motivation.
Fantastic video – I’m really impressed with how you managed to match the high quality of the text. Thanks a lot for such great work!
Minor feedback about this post: I think the photo of the Yoruba folks might be a bit misleading in the context of this post, and I wouldn’t include it. My sense is that the religion is followed by millions, that the radical smallpox-inoculation cult was quite fringe, and that the body painting of white dots doesn’t have anything to do with smallpox. It doesn’t seem fair to me to associate the whole religion and the folks in that photo with such a radical, harmful cult.
Generous prizes to attract young top talent to EA in big countries
Effective altruism
Prizes are a straightforward way to attract top talent to engage with EA ideas. They also require relatively low human capital or expertise and therefore are conceivably scalable for different countries. Through a nationwide selection process optimized for raw talent, ability to get things done, and altruistic alignment, an EA prize could quickly make the movement become well-known and prestigious in big countries. High school graduates and early university students would probably be the best target audience. The prize could come with a few strings attached, such as participating in a two-week-long EA fellowship, or with more intense commitments, such as working for a year on an EA-aligned project. Brazil and India are probably the best fit, considering their openness to Western ideas and philanthropic investment (in comparison to China and Russia). Other candidates may include the Philippines, where EA groups have been relatively successful, Indonesia, Argentina, Nigeria, and Mexico.
Thanks for the suggestions, Yonatan! The info you raised (location and employment time) is currently displayed in the summary board on the right of the screen, with the goal of making it prominent to the reader (screenshot below).
Does this appear for you? Or did you jump straight into the text and not notice it? If the latter, that's quite useful information from the user's end for us to take into account and update on.
I think it’d be preferable to explicitly list as a reason for applying something along the lines of “Grantees who received funds, but want to set them aside to protect themselves from potential clawbacks”.
Less importantly, it’d possibly be better to make it separate from “to return to creditors or depositors”.
“No one really likes safety, they like features,” Stefan Seltz-Axmacher lamented in his open letter announcing the end of Starsky Robotics in 2020. After founding and leading a company obsessed with making driverless trucks safer, reducing the chance of a fatal accident from 1 in a thousand to 1 in a million, he announced they had to shut down due to a lack of investor interest. Investors weren’t impressed by the thousandfold increase in safety that Starsky Robotics achieved. Instead, they preferred the new features brought forth by Starsky’s competitors, such as the ability to change lanes automatically or drive on surface streets. In the world of driverless vehicles, this crooked incentive structure favors businesses willing to take on clearly destructive risks, and it can lead to catastrophic consequences as AI systems progress at large. If features are appealing but safety isn’t, who will invest in making sure language models are convincing writers but don’t massively deceive the public? Who will ensure weaponized AI systems react efficiently to threats but also accurately interpret blurred human values like the law of war? As AI capabilities advance, it will be necessary to prioritize safety over features in many cases — who will be up to the test?
Is there a difference between ordering from Amazon, Barnes and Noble, or somewhere else?
I think the issue is more that such an income would depend on the org’s performance or existence even in that arrangement, and that directors should be ready to make hard decisions that could e.g., shut down the organization. Depending on the org in any way would limit their decision power to make such calls.
Exciting work! Looking forward to reading that agenda and following your next steps.
Loved the name! Could you give examples of what kinds of events you aim to provide assistance for? Retreats for highly-engaged EAs (eg, most pre/post-EAG retreats, team retreats), EA intro camps, rationality camps (eg, SPARC), all of these?
I’ve found versions of the Ben Readme document called “how to work with me” or “manuals of me” docs particularly useful. This is suggested by Notes on “Managing to Change the World”, which links to Why you should write a “how to work with me” user manual.
Seems like great work, and I’ll engage with it more in the future! But I wanted to push back a little on this excerpt:
According to Tonn (2021), only two of the 100 national constitutions he analyzed included specific provisions advocating for future generations. Constitutions could also be amended to establish new institutions, like Tonn’s proposed national anticipatory institutions (NAI), the World Court of Generations (WCG), the InterGenerational Panel on Perpetual Obligations (IPPO), or others. Please refer to his aforementioned book for more information.
You can check a paper I co-authored on the constitutionalization of future generations to see that 81 out of 196 constitutions (41%) explicitly mention future generations, with varied levels of legal protection. In short, one of our takeaways is that constitutions don’t seem like a very tractable way of protecting future generations, since many of these de jure protections don’t translate into de facto actions – the latter seem to mostly be a product of other factors. I haven’t read Tonn’s work.
It’s great you mentioned liberation theology. Here’s some extra information since I’ve had some contact with it through social activism in Brazil:
Liberation theology grew especially strong in Brazil during the military dictatorship of 1964-1985, under the leadership of Dom Hélder Câmara. His name is on streets, schools, hospitals, and all sorts of other things you can think of in the region where he served as Archbishop (Recife, where I live). He’s considered a saint by many.
The influence of liberation theology and its charismatic leaders is felt well beyond the church. I worked with social movements in several cause areas, and almost all of them are influenced by it to some extent: criminal justice, land use (farmers), housing, education, public health, etc. There is a big “social movement forum” that gets all of these activists together to take to the streets on Workers’ Day every year. I’d guess they manage to gather 10-20k people on average. The “forum” is named after Dom Hélder Câmara.
However, the content of the ideology is fuzzy. I didn’t come across people referring to seminal written works or clear concepts in the context of social activism. For most activists, being socialist and Catholic, as Gavin described, is about as deep as it goes.
I guess the main merit of the ideology is to make altruistically-minded people care about politics and systemic change. It’s a call to arms for Catholics to go beyond isolated acts of charity and work to change the system more profoundly, even if this means going against traditional values or the political establishment. This focus on actual change rather than aesthetic altruism may relate to EA to some extent.
Dom Hélder’s most famous saying, in the context of the far-right military dictatorship: “When I give food to the poor, they call me a saint. When I ask why they are poor, they call me a communist.”