I'm in New Zealand and wrote a letter as well.
Andy Morgan
AIxBio Newsletter #3 - At the Nexus
In the dis-spirit of this article, I'm going to take the opposite tack and explore the nagging doubts I have about this line of argument.
To be honest, I'm starting to get more and more sceptical/annoyed about this behaviour (for want of a better word) in the effective altruism community. I'm certainly not the first to voice these concerns, with both Matthew Yglesias and Scott Alexander noting how weird it is (if someone tells you that your level of seeking criticism gives off weird BDSM vibes, you've probably gone too far).
Am I all in favour of going down intellectual rabbit holes to see where they take you? No. And I don't think it should be encouraged wholesale in this community. Maybe I just don't have the intellectual bandwidth to understand the arguments, but a lot of the time it just seems to lead to intellectual wank, with the most blatant example I've come across being infinite ethics. If infinities mean that anything is both good and bad in expectation, that should set off alarm bells that that way madness lies.
The crux of this argument also reminds me of rage therapy. Maybe you shouldn't explore those nagging doubts and express them out loud, just like maybe you shouldn't scream and hit things based on the mistaken belief that it'll help to get your anger out. Maybe you should just remind yourself that it's totally normal for people to have doubts about x-risk compared to other cause areas, because of a whole bunch of reasons that totally make sense.
Thankfully, most people in the effective altruism community do this. They just get on with their lives and jobs, and I think that's a good thing. There will always be some individuals who go down these intellectual rabbit holes, and they won't need to be encouraged to do so. Let them go for gold. But at least in my personal view, the wider community doesn't need to be encouraged to do this.
Thanks for this, Lizka. A great summary and a great reminder.
This is great to see and the backgrounds of your team members look impressive. I really hope someone will step in to fund this.
The way I see it, the "woke takeover" is really just movements growing up and learning to regulate some of their sharper edges in exchange for more social acceptance and political power.
I don't agree with this part of the comment, but am aware that you may not have the particular context that may be informing Geoffrey's view (I say may because I don't want to claim to speak for Geoffrey).
These two podcasts, one by Ezra Klein with Michelle Goldberg and one by the NY Times, point to the impact of what is roughly referred to in these podcasts as "identity politics" or "purity politics" (which other people may refer to as "woke politics"). According to those interviewed, the effect on these movements and nonprofits has been to significantly diminish their impact on the outside world.
I also think it would be naïve to claim that these movements were "growing up", considering how long feminism and the civil rights movement have been around. The views expressed in these podcasts also strongly contradict your claim that they are gaining more political power.
I think these experiences, from those within nonprofits and movements on the left no less, lend support to what Geoffrey is arguing, especially considering that the EA movement is ultimately about having the most (positive) impact on the outside world.
Yeah, I strongly agree with this and wouldn't continue to donate to the EA fund I currently donate to if it became "more democratic" rather than being directed by its vetted expert grantmakers. I'd be more than happy if a community-controlled fund was created, though.
To lend further support to the point that this post and your comment make, making grantmaking "more democratic" by involving a group of concerned EAs seems analogous to making community housing decisions "more democratic" through community hall meetings. Those who attend community hall meetings aren't a representative sample of the community but merely those who have time (and also tend to be those who have more to lose from community housing projects).
So it's likely that concerned EAs would not only lack expertise in a particular domain but would also be unrepresentative of the community as a whole.
I wish I had written down my reasoning because I can't remember, haha. I'll have a search around to see why I thought they were good to invest in and get back to you.
In terms of TSM, ASML and AMAT, I'm investing in them at a 2:2:1 ratio.
Thanks for the post, sapphire. I'd also really like it if EA had more of a "taking care of each other" vibe (I was envious when hearing about the early discussion on the LessWrong forum about Bitcoin, and wish there was something similar in EA). I'll definitely be following you on Twitter.
On semiconductor stocks, I've also gone for Applied Materials (AMAT), as well as TSM, ASML, Google and SOXX.
My worry is that you're probably trying to identify and then add/turn on too much (i.e. all of the genes that code for egg laying).
I'm sure it's probably not straightforward to change shell colour, which would be the best method of identifying chick sex (maybe shell development is determined by the hen rather than the embryo?), but there are probably still a couple of additions you could make to the Z and W chromosomes to ultimately achieve the same outcome. And a couple of additions would likely be at least an order of magnitude easier than identifying and then adding/turning on a bunch of genes.
At least one idea that comes to mind is using insights from gene drive theory to disrupt male embryo development enough for it to be identifiable by a light shone through the egg. For instance, you could insert a gene into both Z chromosomes coding for a CRISPR complex that disrupts some key embryo development process. Additionally, you insert a gene into the W chromosome that codes for a CRISPR complex that modifies/disrupts the CRISPR complex on the Z chromosomes. Since males are ZZ and females are ZW, only female embryos would carry the W-linked complex that switches off the disruption, so only male embryos would show the disrupted development.
Maybe there's a really obvious reason why that wouldn't work or wouldn't be that simple, but I suppose my point is that maybe you should aim to find and pursue a simpler solution unless you're sure that no obvious and simple strategies would work.
Either way, I really hope you and your efforts succeed.
Writing about my job: Policy Analyst
This is really great to see, and I just wanted to quickly say that the website looks fantastic. Great design.
Comprehensive archive of the career routes of people currently at the top of the biosecurity industry (mainly for gathering useful stats, e.g. how many degrees each person has on average, and what age they were when they reached their current position).
Is anybody doing this project?
[Question] Is there a need for a curated list of resources on basic biology/immunology/epidemiology for those wanting to work on GCBRs?
Sweet, will do!
I work as a senior policy analyst in the New Zealand government, specifically in the area of genetic modification policy. I can talk about how I got the job, and why I think I excel at it, despite not having a background in science, as well as what the work is like day-to-day.
Hi freedomandutility, I'd really like to hear more about this if you'd be happy to expand on it a bit and perhaps give examples, etc.
I tend to lose sight of (or forget) the greater "why" behind the things I'm pursuing.
If "Coordination for EA researchers" is considered by enough people to be a worthwhile project to undertake, I'd be interested in working on that (in a project design capacity).
And on a related note, I think combining this project with others like the "EA expertise board" or "Build a platform to match projects with people who can do them" would enable the platform to reach a critical mass of active users, making it really worthwhile for the community.
If the EA forum weren't (as far as I can tell) just filled with EAs, I'd agree.
I don't think we should necessarily be worried that, say, some journalist is reading this forum (which is what I take your comment to mean), so much as that posts like this could potentially turn off people who are currently EAs or are considering becoming more involved in EA. Speaking personally, the suggestions floated in this post seemed a little dishonest to me.
This is a great post and I've just signed up to your newsletter. Thanks, Garrison.