Is helpful/friendly :-) Loves to learn. Wants to solve neglected problems. See website for current progress.
Madhav Malhotra
For what it’s worth, I run an EA university group outside of the U.S. (at the University of Waterloo in Canada). I haven’t observed any of the points you mentioned in my experience with our group:
We don’t run Intro to EA fellowships because we’re a smaller group, and we’re not trying to convert more students to be ‘EA’. Instead, we focus on supporting whoever’s interested in working on EA-relevant projects (ex: a cheap air purifier, a donations advisory site, a cybersecurity algorithm, etc.), whether they identify with the EA movement or not.
Since we’re not trying to get people to become EA members, we’re not hosting any discussions where a group organiser could convince people to work on AI safety over all else.
No one’s getting paid here. We have grant money that we’ve used for things like hosting an AI governance hackathon, but that money goes toward marketing, catering, prizes, etc., not salaries.
Which university EA groups specifically did you talk to before proclaiming “University EA Groups Need Fixing”? Based only on what I read in your article, a more accurate title might be “Columbia EA Needs Fixing”.
What are the top 2-3 issues Rethink Priorities is facing that prevent you from achieving your goals? What are you currently doing to work on these issues?
What have you been intentional about prioritising in the workplace culture at Rethink Priorities? If you focus on making it a great place for people to work, how do you do that?
To any staff brave enough to answer :D
You’re fired tomorrow and replaced by someone more effective than you. What do they do that you’re not doing?
A lot of people have gotten the message from EA: “Direct your career towards AI safety!” Yet there seem to be far too few opportunities to get mentorship or a paying job in AI safety. (I say this having seen others’ comments on the forum and having personally applied to 5+ fellowships with 5–30 times more applicants than spots.)
What advice would you give to those feeling disenchanted by their inability to make progress in AI safety? How is 80,000 Hours working to better (though perhaps not entirely) balance the supply and demand for AI safety mentorship/jobs?
The UX has improved so much since the 2022 version of this :-) It feels concise, and the scrolling to each new graph makes it interesting to learn each new thing. Kudos to whoever designed it this way!
I’d be interested in hearing someone from Anthropic discuss the upsides or downsides of this arrangement. From an entirely personal standpoint, it seems odd that Anthropic gave up equity AND accepted restrictions on how the investment could be used. That said, I imagine there are MANY other details I’m not aware of, since I wasn’t involved in the decision.
In your past experiences, what are the biggest barriers to getting your research in front of governmental organisations? (ex: official development aid grantmakers or policy-makers)
Biggest barriers in getting them to act on it?
It takes courage to share such detailed stories of plans not working out. Good on you for doing so :-)
It seems that two kinds of improvements within EA might be helpful to reduce the probability of other folks having similar experiences.
Proactively, we could adjust the incentives promoted (especially by high-visibility organisations like 80,000 Hours). Specifically, I think it would be helpful to:
Recommend that early-career folks try out university programs with internships/co-ops in the field they think they’d enjoy. This would help them error-correct earlier rather than later.
Adjust the articles on high-visibility sites to focus less on finding the “most” impactful career path and more on finding one of many impactful career paths. I especially say this because sites like 80,000 Hours have gotten a lot more general traffic since they vastly increased marketing. When you’re reaching a broader audience (especially for the first time), it’s not essential to urgently direct someone to the exact right career path. A more reasonable goal might be to get them thinking about a few options. Then, those who want to refine their plan can be directed to more specialised resources within EA (ex: biosecurity → reading list).
To be more specific about what I mean by making content focus on “one of many impactful paths,” here are example rewrites of content from 80,000 Hours’ career reviews:
Original: “The highest-impact career for you is the one that allows you to make the biggest contribution to solving one of the world’s most pressing problems.”
Rewrite: The highest-impact career for you depends on your unique skills and motivations. Out of the careers that suit you, which ones increase your contributions to solving one of the world’s most pressing problems?
Original: “Below we list some other career paths that we don’t recommend as often or as highly as those above, but which can still often be top options for people we advise.”
Rewrite: Below, we list some career paths that we recommend less frequently than those above. However, they might specifically be a good fit for your unique preferences.
Original: “The lists are based on 10 years of research and experience advising people, and represent the careers it seems to us will be most impactful over the long run if you get started on them now — though of course we can’t be sure what the future holds.”
Rewrite: None; the ending clause on uncertainty is good :-)
Reactively, various efforts have been trying to improve mental health support within EA. I look forward to seeing continued progress in creating easily-accessible collections of resources!
I’m surprised to see how the book giveaway is more expensive than the costs of actually placing the ads to get eyes on the sites! Why did you decide to give away a physical book? What do you think the cost-effectiveness of that is compared to ebooks or not having a giveaway?
Let’s say your research directly determined the allocation of $X of funding in 2021.
Let’s say you have to grow that amount tenfold in 2022 while keeping the same staff, funding, and other resources.
What would you change first in your current campaigns, internal operations, etc.?
Useful context: I’m 19. I stopped reading after the “Use your brainspace wisely” section.
Overall impression: boring as stated :D
More specific feedback:
The tips seem very diverse (tips on relationships, mental health, physical environment, and learning skills were all under “Use your brainspace wisely”). It’s unclear how they relate to each other, so it’s confusing to read and hard to figure out where to find each tip.
This could be addressed by having very clear headings. Ex: “Tips on Where You Live.” Ex: “Tips on the Relationships You Develop.” Ex: “Tips on Skills to Learn.”
Tips don’t seem valuable without stories/examples. This is especially true for a young person who doesn’t have an experience to attach to each tip. Ex: If you say “Get a mentor”, that goes in one ear and out the other. A more helpful way to say that might be: “Get a mentor. When I was working on a startup to do X, my mentor Y helped me figure out that doing Z was better. I was down to my last thousand dollars and changing course helped me save the company.”
I liked when there were links to specific actionables. Ex: you can read this post if you’re having mental health troubles, that post if you’re looking for friend advice, etc. I’d love to see these links wherever you’re aware of resources :-)
I don’t know why you’re telling me these things. That is to say, the intention seems unclear. It’s worth putting some kind of statement about the purpose of each category of tips under the headings. Ex: Before a heading on “Mental Health Tips,” you might say “Young people are the most vulnerable to mental health problems. If we learn to work on these problems early, it makes them a lot less severe later in life. Here are some helpful actions you could take if you’re experiencing mental health issues:”
I hope this feedback is constructive enough to give practical ideas on how to improve this post. Please feel free to let me know if something seems unclear. I’ll do my best to give a timely response :-)
My aim in this article wasn’t to be technically precise. Instead, I was trying to keep things as simple as possible.
If you’d like to let me know the technical errors, I can try to edit the post if:
The correction seems useful for a beginner trying to understand AI Safety.
I can find feasible ways to explain the technical issues simply.
Are there any misconceptions, stereotypes, or tropes that you commonly see in academic literature around nuclear security or biosecurity that you could correct given your perspective inside government?
Out of curiosity @LondonGal, have you received any follow-ups from HLI in response to your critique? I understand you might not be at liberty to share all details, so feel free to respond as you feel appropriate.
@Kevin Kuruc at the University of Oklahoma might have something to add :-)
Sidenote: I’m sure an engineering undergrad isn’t your target audience, but all the big words (pecuniary, idiosyncrasy, premia, etc.) are a bit hard to parse :O
Just to play devil’s advocate (without harmful intentions :-), what are the largest limitations or disclaimers that we should keep in mind regarding your results or methods?
I appreciate your detailed follow-up!
“I am positing, and believe strongly that research will substantiate this, that consumers will value profit destination at a nonzero level.”
I intuitively can see why you say this.
“In the commodity space, this advantage should be decisive. In sectors with more differentiation, it is less likely to be decisive.”
That said, could it be that the higher margins in sectors with more differentiation are worth gaining only a fraction of customer purchases instead of (nearly) all of them? I.e., do we want to maximise volume sold × profit per unit, or volume sold only?
On an individual organisation level, I’ve seen plenty of case studies of nonprofits using cross-subsidisation to reduce reliance on donations/grants. One notable example that comes to mind is Me to We’s model of selling Rafiki bracelets (bracelets being a product with lots of differentiation and very high margins).
“Your commodities of scale point definitely makes sense. It will be difficult to compete without hundreds of millions or billions of dollars. This is why research and working on educating the public is critical to satisfy charitable investors that the targeted creation/acquisition of companies that serve effective charities is best use of their resources.”
Large companies tend to be very complex to manage and face their own diseconomies of scale.
How would investing in large guiding companies compare to, say, charities investing in a VC fund of startups? Or investing in institutional options like a bond and getting steady returns?
I.e., some charities already invest in profitable companies via various means. What leads you to conclude that investing in guiding companies is a better alternative to these existing investments?
A few questions:
“creating the “no-brainer” for the consumer. This could make it sensible to introduce Guiding Companies to sectors where there is not much difference between products.” If there is low brand differentiation, wouldn’t that lead to commoditised products and lower margins? That would make the guiding company a less attractive investment for nonprofits/philanthropists as a way of making returns they can use for their priorities.
Similarly, more commoditised products tend to create more conglomeration to take advantage of economies of scale. What are potential strategies to get around industry incumbents which use monopoly powers (or state support) to block the path for guiding companies? I’m thinking of sectors like telecom, steel, finance, etc.
Even well-funded nonprofits are ‘strapped for cash’ in the short run, whereas businesses often require large lump-sum investments for capital expenditures and research and development. What are your thoughts on how guiding companies would acquire that money?
It seems likely that more than one nonprofit/philanthropist would invest in a given guiding company. What happens if their cause areas conflict with some of the guiding company’s practices? Ex: an especially strong-willed animal advocacy nonprofit might not want to invest in a food-sector company that uses animal ingredients, even if this is fairly common. What happens if the nonprofits want some decision-making power to avoid these kinds of cases? What about their PR concerns? Several private investment companies and even public pension funds are coming under increasing scrutiny about exactly where they invest.
I’m not sure this is a good idea.
It seems possible that the individual interventions whose research you link to are not representative of every possible skill-development intervention.
Also, it seems possible that future interventions may integrate building both human and economic capital to enable recipients to make changes in their lives, i.e., skill-building plus direct cash transfers.
Also, it’s generally uncertain whether GiveDirectly will continue to be the most effective or most endorsed donation recommendation, given changes in how we measure wellbeing (admittedly, a topic where opinions are frequently updated and mistakes corrected).
Why potentially reduce the effectiveness of those future interventions by launching this campaign?