I'm in New Zealand and wrote a letter as well.
Andy Morgan
AIxBio Newsletter #3 - At the Nexus
In the dis-spirit of this article, I'm going to take the opposite tack and explore the nagging doubts I have about this line of argument.
To be honest, I'm starting to get more and more sceptical/annoyed about this behaviour (for want of a better word) in the effective altruism community. I'm certainly not the first to voice these concerns, with both Matthew Yglesias and Scott Alexander noting how weird it is (if someone tells you that your level of seeking criticism gives off weird BDSM vibes, you've probably gone too far).
Am I all in favour of going down intellectual rabbit holes to see where they take you? No. And I don't think it should be encouraged wholesale in this community. Maybe I just don't have the intellectual bandwidth to understand the arguments, but a lot of the time it just seems to lead to intellectual wank, the most blatant example I've come across being infinite ethics. If infinities mean that anything is both good and bad in expectation, that should set off alarm bells that that way madness lies.
The crux of this argument also reminds me of rage therapy. Maybe you shouldn't explore those nagging doubts and express them out loud, just like maybe you shouldn't scream and hit things based on the mistaken belief that it'll help to get your anger out. Maybe you should just remind yourself that it's totally normal for people to have doubts about x-risk compared to other cause areas, for a whole bunch of reasons that totally make sense.
Thankfully, most people in the effective altruism community do this. They just get on with their lives and jobs, and I think that's a good thing. There will always be some individuals who will go down these intellectual rabbit holes, and they won't need to be encouraged to do so. Let them go for gold. But at least in my personal view, the wider community doesn't need to be encouraged to do this.
Thanks for this, Lizka. A great summary and a great reminder.
This is great to see and the backgrounds of your team members look impressive. I really hope someone will step in to fund this.
The way I see it, the "woke takeover" is really just movements growing up and learning to regulate some of their sharper edges in exchange for more social acceptance and political power.
I don't agree with this part of the comment, but am aware that you may not have the particular context that may be informing Geoffrey's view (I say may because I don't want to claim to speak for Geoffrey).
These two podcasts, one by Ezra Klein with Michelle Goldberg and one by the NY Times, point to the impact of what is roughly referred to in these podcasts as "identity politics" or "purity politics" (which other people may refer to as "woke politics"). According to those interviewed, the effect on these movements and nonprofits has been to significantly diminish their impact on the outside world.
I also think that it would be naïve to claim that these movements were "growing up", considering how long feminism and the civil rights movement have been around. The views expressed in these podcasts also strongly disagree with your claim that these movements are gaining more political power.
I think these experiences, from those within nonprofits and movements on the left no less, lend support to what Geoffrey is arguing, especially considering that the EA movement is ultimately about having the most (positive) impact on the outside world.
Yeah, I strongly agree with this and wouldn't continue to donate to the EA fund I currently donate to if it became "more democratic" rather than being directed by its vetted expert grantmakers. I'd be more than happy if a community-controlled fund was created, though.
To lend further support to the point that this post and your comment make, making grantmaking "more democratic" by involving a group of concerned EAs seems analogous to making community housing decisions "more democratic" through community hall meetings. Those who attend community hall meetings aren't a representative sample of the community but merely those who have time (and also tend to be those who have more to lose from community housing projects).
So it's likely that concerned EAs would not only lack expertise in a particular domain but would also be unrepresentative of the community as a whole.
I wish I had written down my reasoning, because I can't remember, haha. I'll have a search around to see why I thought they were good to invest in and get back to you.
In terms of TSM, ASML and AMAT, I'm investing in them at a 2:2:1 ratio.
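For what it's worth, here's a rough sketch of how that split works out in practice (the $5,000 budget is just an assumed example figure, not what I'm actually putting in):

```python
# Toy illustration of splitting a budget across TSM, ASML and AMAT at 2:2:1.
# The budget figure is an assumption for the example, not a recommendation.
budget = 5000
weights = {"TSM": 2, "ASML": 2, "AMAT": 1}
total_weight = sum(weights.values())

allocation = {ticker: budget * w / total_weight for ticker, w in weights.items()}
print(allocation)  # {'TSM': 2000.0, 'ASML': 2000.0, 'AMAT': 1000.0}
```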
Thanks for the post, sapphire. I'd also really like it if EA had more of a "taking care of each other" vibe (I was envious when hearing about the early discussions of Bitcoin on the LessWrong forum, and wish there was something similar in EA). I'll definitely be following you on Twitter.
On semiconductor stocks, I've also gone for Applied Materials (AMAT), as well as TSM, ASML, Google and SOXX.
My worry is that you're probably trying to identify and then add/turn on too much (i.e. all of the genes that code for egg laying).
I'm sure it's not straightforward to change shell colour, which would be the best way to identify chick sex (maybe shell development is determined by the hen rather than the embryo?), but there are probably still a couple of additions you could make to the Z and W chromosomes to ultimately achieve the same outcome. And a couple of additions would likely be at least an order of magnitude easier than identifying and then adding/turning on a bunch of genes.
At least one idea that comes to mind is using insights from gene drive theory to disrupt male embryo development enough to be identifiable using a light shined through an egg. For instance, you could insert a gene into both Z chromosomes coding for a CRISPR complex that disrupts some key embryo development process. Additionally, you insert a gene into the W chromosome that codes for a CRISPR complex that modifies/disrupts the CRISPR complex on the Z chromosomes.
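Purely to make the logic concrete, here's a toy sketch of the genotype logic as I understand it. The "disruptor" and "suppressor" constructs are hypothetical names for illustration, not real genetics:

```python
# Toy model of the proposed scheme; the "disruptor" and "suppressor" constructs
# are hypothetical and only illustrate the ZZ-vs-ZW logic described above.

def develops_normally(sex_chromosomes: str) -> bool:
    """Return True if an embryo would develop normally under the proposed edits.

    Assumptions (illustrative only):
    - every Z chromosome carries a disruptor that blocks a key developmental process;
    - the W chromosome carries a suppressor that disables the Z-linked disruptor.
    """
    has_disruptor = "Z" in sex_chromosomes
    has_suppressor = "W" in sex_chromosomes
    return (not has_disruptor) or has_suppressor

print(develops_normally("ZZ"))  # False: male embryos get flagged when candling the egg
print(develops_normally("ZW"))  # True: female embryos develop normally
```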
Maybe there's a really obvious reason why that wouldn't work or wouldn't be that simple, but I suppose my point is that maybe you should aim to find and pursue a simpler solution unless you're sure that no obvious and simple strategies would work.
Either way, I really hope you and your efforts succeed.
Writing about my job: Policy Analyst
This is really great to see, and I just wanted to quickly say that the website looks fantastic. Great design.
Comprehensive archive of career routes of people currently at the top of the biosecurity industry (mainly for useful stats gathering, e.g. how many degrees each person has on average, and at what age they got to the position they currently hold).
Is anybody doing this project?
[Question] Is there a need for a curated list of resources on basic biology/immunology/epidemiology for those wanting to work on GCBRs?
Sweet, will do!
I work as a senior policy analyst in the New Zealand government, specifically in the area of genetic modification policy. I can talk about how I got the job, and why I think I excel at it, despite not having a background in science, as well as what the work is like day-to-day.
Hi freedomandutility, I'd really like to hear more about this if you'd be happy to expand on it a bit and perhaps give examples etc.
I tend to lose sight of/forget the greater "why" behind the things I'm pursuing.
If "Coordination for EA researchers" is considered by enough people to be a worthwhile project to undertake, I'd be interested in working on that (in a project design capacity).
And on a related note, I think combining this project with others like the "EA expertise board" or "Build a platform to match projects with people who can do them" would enable the platform to reach a critical mass of active users, making it really worthwhile for the community.
If the EA forum weren't (as far as I can tell) just filled with EAs, I'd agree.
I don't think we should necessarily be worried that, say, some journalist is reading this forum (which is what I take your comment to mean), so much as worried that posts like this could turn off people who are currently EAs or are considering becoming more involved in EA. Speaking personally, the suggestions floated in this post seemed a little dishonest to me.
This is a great post and I've just signed up to your newsletter. Thanks, Garrison.