Yarrow Bouchard
Pronouns: she/her or they/them
Biological superintelligence: a solution to AI safety
As written, your question is hard to read and understand. Try using a human proofreader or a software tool like Grammarly.
How can you estimate cost-effectiveness for scientific/medical research?
It’s as if trillions of dollars per year were spent on firefighting but only millions of dollars per year were spent on fire prevention.
Aging should be effective altruism’s 5th cause area
I was unaware of these resignations! Why did Will resign? Was it because of his association with SBF? Will doesn’t say why he resigned in the source you linked. He links to a post that’s extremely long and I couldn’t immediately find a statement.
The most prominent charity evaluator is GiveWell, which makes a list of top charities working in global health. If you’re interested in other cause areas, like animal welfare, there are other evaluators. Does that help answer your question?
I think the EA community can, should, and will be judged by how we deal with bad behaviour like fraud, discrimination, abuse, and cultishness within the community.
Who knew what about Sam Bankman-Fried’s crimes, and when? As I understand it, an investigation is still underway and, as far as I know, nobody who enabled SBF or associated with him has yet stepped down from a leadership position in an EA organization. I’m not necessarily saying anyone should, but I’m not sure I see enough of a reckoning or enough accountability with regard to the FTX/Alameda fraud.
Has the EA community done enough to rebuke Nick Bostrom’s racism? The reaction seems dishearteningly mixed.
What will the EA community ultimately do about the allegations of abuse surrounding Nonlinear, once the organization posts its much-awaited response to those allegations? This is something to watch.
There are disturbing accounts of Leverage Research and, to a lesser extent, CFAR functioning much like cults. That’s pretty weird. How many communities have two, or one and a half, cults pop up inside them? What are the structural reasons this might be happening? Has anything been done that might prevent another EA-adjacent cult from arising?
I’m not trying to be negative. I’m just trying to give constructive suggestions about what would improve EA’s reputation. I think there are a lot of lovely people and organizations in the EA sphere. But we will — and should — be judged based on how we deal with the minority of bad actors.
but seems so far away technologically it may as well be sci-fi.
Further away and more sci-fi than AGI?
There are a few things to consider.
One of the best ways to prevent the creation of a misaligned, “unfriendly” AGI (or to limit its power if it is created) is to build an aligned, “friendly” AGI first.
Similarly, biological superintelligence could prevent or provide protection from a misaligned AGI.
The alignment problem might turn out to be much easier than the biggest pessimists currently believe. It isn’t self-evident that alignment is super hard. A lot of the arguments that alignment is super hard are highly theoretical and not based on empirical evidence. GPT-4, for example, seems to be aligned and “friendly”.
“Friendly” AGI could mitigate all sorts of other global catastrophic risks like asteroids and pandemics. It could also do things like help end factory farming — which is quite arguably a global catastrophe — by accelerating the kind of research New Harvest funds. On top of that, it could help end global poverty — another global catastrophe — by accelerating global economic growth.
Pausing or stopping AI development globally might just be impossible or nearly impossible. It certainly seems extremely hard.
Even if it could be achieved and enforced, a global ban on AI development would create a situation where the least conscientious and most dangerous actors — those violating international law — would be the most likely to create AGI. This would perversely increase existential risk.
I highly recommend the book Consciousness Explained by philosopher Daniel C. Dennett.
Bizarrely, the OpenAI board proposed a merger with Anthropic.
Can you explain in more straightforward terms what this means?
In a nutshell: “EA has gone woke and I don’t like it!” Poorly written, poorly argued, vague, unoriginal, offensive, and wrong.
The article was bad when it was written and it has aged like milk.
publicly announcing that OpenAI “created AGI internally” and then backpedaling it
Wasn’t that just a throwaway joke on Reddit?
My question: do we have different capacities to detect 1) dishonesty (e.g. a scam from a con artist), 2) motivated reasoning or conflict of interest (e.g. a salesperson pitching us a product), and 3) sincere but nonetheless false beliefs (e.g. an ideologue giving a speech)?
I could more easily buy that we have good instincts for (1) or for (1) and (2) than for (3).
I see a number of flaws with the reasoning here.
Even if embryos/fetuses have moral value, that does not by itself mean that preventing abortions is an important cause area. An adult person’s needs, desires, preferences, health, happiness, and well-being also have moral value. More moral value, I would argue.
It seems deeply implausible that a first-trimester embryo — the relevant entity for the majority of abortions — has as much moral value as an adult cow or pig. Well over a billion cows and pigs are slaughtered each year. On a per-entity basis and on a total-deaths basis, animal welfare far surpasses abortion in importance.
Any argument that not having a baby is bad for reasons unrelated to the intrinsic, immediate moral value of an embryo/fetus — such as the counterfactual life-years added to humanity’s total if a person does have a baby — leads to the implausible conclusion that each person is morally obligated to have as many babies as possible.
Pro-life activism is not plausibly a neglected cause area in any way. There is already massive attention and resources devoted to it. Even if I fully agreed with the cause, it would be bizarre to suggest the effective altruist movement should make it a priority. By analogy, if someone said gay marriage (which I support about as strongly as I could possibly support anything) should be an EA cause area, I would find that bizarre. What do EAs have to add to this already crowded arena of politics?
the evidence is hearsay from two anonymous sources
I think even with just the behaviours that Nonlinear has publicly confirmed, there is cause for major concern.
You seem to think that remorse is the only mental state that could cause people to change their behaviour. Why do you think that?
The emotion of guilt is usually what leads to accountability and behaviour change. See, e.g., this video with clinical psychologist June Tangney, co-author of the book Shame and Guilt.
The $30M for SRF was a one-time windfall; its annual income and expenditures haven’t risen to anywhere near $20M.