Builds web apps (eg viewpoints.xyz) and makes forecasts. Currently I have spare capacity.
Nathan Young
I’d guess less than 0.5% (90%)
What’s your thought on this:
Humans are conscious in whatever way that word is normally used
Our brains are made of matter
I think it’s likely we’ll be able to use matter to make other conscious minds
These minds may be able to replicate far faster than our own
A huge amount of future consciousness may be non-human
The wellbeing of huge chunks of future consciousness is worthy of our concern
It seems really valuable to have experts at the time the discussion happens.
If you agree, then it seems worth training people now for the time in the future when we discuss it.
Worldview diversity isn’t a coherent concept and mainly exists to manage internal OpenPhil conflict.
I’d like a debate week once every 2 months-ish.
What do you think is the 50/50 point? Where half of people believe more, half less.
Sure, then quantify it, right?
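If we did quantify it, the 50/50 point is just the median of people’s estimates. A minimal sketch (the numbers below are made-up placeholders, not actual forum data):

```typescript
// The 50/50 point of a set of probability estimates is the median —
// half the respondents believe more, half believe less.
function median(xs: number[]): number {
  const sorted = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

// Hypothetical member estimates, purely illustrative.
const estimates = [0.005, 0.02, 0.05, 0.2, 0.4];
console.log(median(estimates)); // 0.05
```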
We still have not had satisfactory answers for why the FTX Future Fund was sending cheques via strange bank accounts.
AI Safety advocates have been responsible for founding over half of the leading AI companies. We don’t take that seriously enough.
Animal welfare is far more effective per $ than Global Health.
Edit:
How about “The marginal $100 mn on animal welfare is 10x the impact of the marginal $100 mn on Global Health”
I would like a discussion week once a month-ish.
Should Global Health comprise more than 15% of EA funding?
[Question] If there were another discussion week, what would you like it to be on and when?
My thoughts on the issue:
Humans are conscious in whatever way that word is normally used
Our brains are made of matter
I think it’s likely we’ll be able to use matter to make other conscious minds
These minds may be able to replicate far faster than our own
A huge amount of future consciousness may be non-human
The wellbeing of huge chunks of future consciousness is worthy of our concern
Forecasting is hard and it’s not clear how valuable early AI safety work was (maybe it even led to more risk?)
Early work on AI welfare might quite plausibly make the problem worse.
Part of me would like to get better calibrated on which interventions work and which don’t before hugely focusing on such an important issue
Part of me would like to fund general state capacity work and train experts to be in a better place when this happens
Do you have a sense of what you think the right amount to spend is?
For me, a key question is “How much is 5%?”
Here is a table I found.
So it seems like right now 5% is somewhere in the same range as Animal Welfare and EA Meta funding.
I guess that seems a bit high, given that animals exist and AIs don’t.
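To make the comparison concrete, here is the arithmetic, with a placeholder total since the table above didn’t survive:

```typescript
// Illustrative only: the total below is a hypothetical placeholder,
// not a real EA funding figure. The point is just what a 5% share
// translates to in absolute terms.
const totalAnnualFundingUsd = 1_000_000_000; // hypothetical total
const share = 0.05;
const allocation = totalAnnualFundingUsd * share;
console.log(`${share * 100}% of $${totalAnnualFundingUsd / 1e6}mn is $${allocation / 1e6}mn/yr`);
// -> "5% of $1000mn is $50mn/yr"
```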
I think a key benefit of AI work was training AI Safety folks to be around when needed. Having resources at a crucial moment isn’t solely about money; it’s about having the resource that is useful in that moment. A similar thing to do might be to train philosophers, government staffers, and activists who are well versed in the AI welfare arguments and can act if need be.
Not clear to me that that requires 5% of EA funding though.
Why is a shrinking audience bad? If this forum focused more on EA topics and some people left, I am not sure that would be bad. I guess it would be slightly good in expectation.
And to be clear, I mean if we focused on questions like “are AIs deserving of moral value?” and “what % of money should be spent on animal welfare?”
Suggestion.
A debate week every other week, and we vote on what the topic is.
I think if the forum had a defined topic, especially one set in advance, I would be more motivated to read a number of posts on that topic.
One of the benefits of the culture war posts is that we are all thinking about the same thing. If we did that on useful topics, perhaps with dialogues from experts, that would be good.
Adding a little force works too, e.g. here. There are pretty easy libraries for this.
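For instance, a minimal sketch with d3-force, one of those easy libraries (the linked example is gone, so the node type and parameters here are illustrative, not from the original):

```typescript
import {
  forceSimulation,
  forceManyBody,
  forceCenter,
  forceCollide,
  SimulationNodeDatum,
} from "d3-force";

// Hypothetical node type — not from the original linked example.
interface Bubble extends SimulationNodeDatum {
  r: number; // radius, used for collision
}

const nodes: Bubble[] = [{ r: 10 }, { r: 20 }, { r: 15 }];

forceSimulation<Bubble>(nodes)
  .force("charge", forceManyBody<Bubble>().strength(-30)) // gentle mutual repulsion
  .force("center", forceCenter<Bubble>(400, 300)) // drift toward the centre
  .force("collide", forceCollide<Bubble>((d) => d.r)) // keep bubbles from overlapping
  .on("tick", () => {
    // On each tick, read each node's x / y and update the rendered
    // positions (e.g. SVG circle cx/cy attributes).
  });
```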
1x is an arbitrary multiplier too.
I would want to put the number at the 50th percentile belief on the forum.