Hi there! I’m an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)
PabloAMC
FWIW, I believe not every problem has to be centered on “cool” cause areas, and in this case I’d argue neither animal welfare nor AI Safety would be significantly affected.
I divide my donation strategy into two components:
- The first one is a monthly donation to Ayuda Efectiva, the effective giving charity in Spain, which also qualifies for tax deductions. For the time being, they mostly support global health and poverty causes, which is boringly awesome.
- Then I make one-off donations to specific opportunities as they appear. These include, for example, a donation to Global Catastrophic Risks to support their work on recommendations for the EU AI Act sandbox (to be first deployed in Spain), some volunteer work for the FLI existential AI risk community, and my donation to this donation election, to make donations within the EA community more democratic :)
For this donation election I have voted for Rethink Priorities, the EA Long-Term Future Fund, and ALLFED. ALLFED’s work seems pretty necessary and they are often overlooked, so I am happy to support them. The other two had relatively convincing posts arguing for what they could do with additional funding. In particular, I am inclined to believe Rethink Priorities’ work benefits the EA community quite widely, so I am happy to support them and would love for them to keep carrying out the annual survey.
I think the title is a bit unfortunate, at the very least. I am also skeptical of the article’s thesis that population growth is itself the problem.
You understood me correctly. To be specific, I was considering the third case, in which the agent has uncertainty about its preferred state of the world. It may thus refrain from taking irreversible actions that have a small upside in one scenario (protium water) but large negative value in the other (deuterium), due e.g. to decreasing returns, or if it thinks there is a chance to get more information on what the objectives are supposed to mean.
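To make the decision-theoretic point concrete, here is a toy expected-value calculation (the numbers and payoffs are entirely made up by me, purely for illustration):

```python
# Toy model: an agent uncertain about which state of the world it prefers.
p_h1 = 0.6               # credence that hypothesis 1 (protium water) is the real goal
p_h2 = 1 - p_h1          # credence in hypothesis 2 (deuterium)

# Irreversible action: small upside under H1, large downside under H2
# (e.g. because of decreasing returns).
ev_irreversible = p_h1 * 1.0 + p_h2 * (-100.0)

# Refraining and gathering more information about what the objective
# was meant to mean: roughly neutral under both hypotheses.
ev_refrain = 0.0

print(ev_irreversible, ev_refrain)  # -39.4 0.0: the agent refrains
```

Under these made-up numbers the irreversible action only becomes attractive once the agent is almost certain of its goal (here, p above roughly 0.99), which is the behaviour I had in mind.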
I understand your point that this distinction may look arbitrary, but goals are not necessarily defined at the physical level; rather, they are defined over abstractions. For example, is a human with a high level of dopamine happier? What exactly is a human? Can a larger human brain be happier? My belief is that since these objectives are built over (possibly changing) abstractions, it is unclear whether a single agent might iron out its goal. In fact, if “what the representation of the goal was meant to mean” makes reference to what some human wanted to represent, you’ll probably never have a clear-cut, unchanging goal.
Though I believe an important problem in this case is how to train an agent able to distinguish between the goal and its representation, and seek to optimise the former. I find it a bit confusing when I think about it.
Separately and independently, I believe that by the time an AI has fully completed the transition to hard superintelligence, it will have ironed out a bunch of the wrinkles and will be oriented around a particular goal (at least behaviorally, cf. efficiency—though I would also guess that the mental architecture ultimately ends up cleanly-factored (albeit not in a way that creates a single point of failure, goalwise)).
I’d be curious to understand why you believe this happens. Humans (the only general intelligences we have so far) seem to preserve some uncertainty over goal distributions, so it is unclear to me that generality will necessarily clarify goals.
To be a bit more concrete: I find it plausible that the AGI will encounter several fine-grained (concrete) goals that map into the same high-level representation of its goal, whatever it may be. Then you have to refine what the goal representation was meant to mean; after all, a representation of the goal is not necessarily the goal itself. I believe this is what humans face, and why human goals are often a bit of a mess.
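To illustrate the many-to-one mapping I have in mind, here is a minimal sketch (the class, proxy, and threshold are all hypothetical choices of mine, not a claim about how real goal representations work):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorldState:
    dopamine_level: float  # one fine-grained physical detail
    brain_volume: float    # another fine-grained detail

def goal_representation(state: WorldState) -> bool:
    """Crude abstract goal: 'the human is happy', proxied by dopamine."""
    return state.dopamine_level > 0.7

# Two very different concrete states collapse into the same evaluation,
# so the representation alone underdetermines which one to pursue.
s1 = WorldState(dopamine_level=0.8, brain_volume=1.0)
s2 = WorldState(dopamine_level=0.9, brain_volume=3.0)  # a 'larger brain'
print(goal_representation(s1), goal_representation(s2))  # True True
```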
With respect to the last question, I think it is perhaps a bit unfair. They have clearly stated that they unconditionally condemn racism, and I have a strong prior that they mean it. Why wouldn’t they, after all?
An appraisal of the Future of Life Institute AI existential risk program
But if we were to eliminate the EA community, an AI safety community would quickly replace it, as people are often attached to what they do. This is even more likely if you add any moral connotation: people working at a charity, for example, are drawn to build an identity around it.
The HuggingFace RL course might be an alternative in the Deep Learning—RL discussion above: https://github.com/huggingface/deep-rl-class
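For a flavour of the kind of exercise the course walks through, here is a minimal sketch along the lines of its first units (this snippet is mine, not taken from the course, and assumes `gymnasium` and `stable-baselines3` are installed):

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Train a small PPO policy on CartPole.
env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)

# Roll out the trained policy for a few episodes' worth of steps.
obs, _ = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```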
Yeah, perhaps I was being too harsh. However, the baseline scenario should be that current trends continue for some time, and they predict at least cheap batteries and increasingly cheap H2.
I mostly focused on these two because the current problem with green energy sources is more about storage than production; photovoltaics are currently the cheapest source in most places.
I think I mostly disagree with this post: batteries are improving quickly, and if we can also improve hydrogen production and usage, things should work out pretty well. Nuclear fusion also no longer seems so far away. Of course, I agree with the author that this transition will take a long time, especially in developing countries, but I expect it to work out well anyway. One of the author’s key arguments is that the supply of various metals is limited, but lithium is quite common on Earth, even if not super cheap, so I am not totally convinced by this. Similar thoughts apply to land usage.
In the Spanish community we often have conversations in English, and I think at least 80% of the members are comfortable with both.
I am, and am interested in technical AI Safety
Point 1 is correct, but there is a difference: for research, you often need to live near a research group, while distillation is more open to remote and asynchronous work.
Thanks for the answer. The problem is that this is likely pointing in the wrong direction. Immigration by itself has quite large benefits for immigrants, and almost all studies of its impact find positive or no effects for locals. In “Good Economics for Hard Times” by Duflo and Banerjee, there is only one case where locals ended up worse off: during the Soviet era, Hungarian workers were allowed to work in East Germany but not to live there, forcing them to spend their money at home. Overall, it is well known that open borders would probably boost worldwide GDP by at least 50%, possibly 100%. I sincerely think that criticising Germany for this policy requires being worried only about very short-term costs, which seems more like an ideological response than a reasonable choice.
I think it is wrong to say that the Syrian refugee crisis might have cost Germany $0.5T. My source: https://www.igmchicago.org/surveys/refugees-in-germany-2/. To be fair, though, I have not found an ex-post analysis, and I am far from an expert.
My intuition is that grantmakers often have access to better experts, but you could always reach out to the latter directly at conferences if you know who they are.
No need to apologize! I think your idea might be even better than mine :)
Mmm, that’s not what I meant. There are good and bad ways of doing it. In 2019 someone reached out to me before EA Global to check whether it would be OK to get feedback on an application I had rejected (as part of some team), and I was happy to meet and give feedback. So I think there is no harm in asking.
Also, it’s not about networking your way in; it’s about learning, for example, why people did or did not like a proposal, or how to improve it. So I think there are good ways of doing this.
While admirable, consider whether this is healthy or sustainable. I think donating less is OK; that’s why Giving What We Can suggests 10% as a calibrated point. You can of course donate more, but I would recommend against the situation your comment implies.