Hi there! I'm an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)
PabloAMC
Actually, something I am confused about is whether the AI academics figure is per person-year, as it is for the technical researchers in the various fields.
Hi there! Some minor feedback for the webpage: instead of starting with the causes, I'd argue you should start with the value proposition: "your euro goes further", or something along those lines. You may want to check ayudaefectiva.org for an example. Congratulations on the new org!
Thanks, Chris, that's very much true. I've clarified I meant donations.
[Question] What is the counterfactual value of different AI Safety professionals?
I already give away everything except what's required for bare living necessities.
While admirable, consider whether this is healthy or sustainable. I think donating less is OK; that's why Giving What We Can suggests 10% as a calibrated point. You can of course donate more, but I would recommend against the implied current situation.
FWIW, I believe not every problem has to be centered around "cool" cause areas, and in this case I'd argue both animal welfare and AI Safety should not be significantly affected.
I divide my donation strategy into two components:
- The first one is a monthly donation to Ayuda Efectiva, the effective giving charity in Spain, which also allows for tax deductions. For the time being, they mostly support global health and poverty causes, which is boringly awesome.
- Then I make one-off donations to specific opportunities that appear. These include, for example, a donation to Global Catastrophic Risks to support their work on recommendations for the EU AI Act sandbox (to be first deployed in Spain), some volunteering work for the FLI existential AI risk community, and my donation to this donation election, to make donations within the EA community more democratic :)
For this donation election I have voted for Rethink Priorities, the EA Long-Term Future Fund, and ALLFED. ALLFED's work seems pretty necessary and they are often overlooked, so I am happy to support them. The other two had relatively convincing posts arguing for what they could do with additional funding. In particular, I am inclined to believe Rethink Priorities' work benefits the EA community quite widely, so I am happy to support them and would love for them to keep carrying out the annual survey.
I think the title is a bit unfortunate, at the very least. I am also skeptical of the article's thesis that population growth is itself the problem.
You understood me correctly. To be specific, I was considering the third case, in which the agent has uncertainty about its preferred state of the world. It may thus refrain from taking irreversible actions that have a small upside in one scenario (protonium water) but large negative value in the other (deuterium), due to e.g. decreasing returns, or if it thinks there's a chance to get more information on what the objectives are supposed to mean.
I understand your point that this distinction may look arbitrary, but goals are not necessarily defined at the physical level, but rather over abstractions. For example, is a human with a high level of dopamine happier? What exactly is a human? Can a larger human brain be happier? My belief is that since these objectives are built over (possibly changing) abstractions, it is unclear whether a single agent might iron out its goal. In fact, if "what the representation of the goal was meant to mean" makes reference to what some human wanted to represent, you'll probably never have a clear-cut, unchanging goal.
Though I believe an important problem in this case is how to train an agent able to distinguish between the goal and its representation, and seek to optimise the former. I find it a bit confusing when I think about it.
Separately and independently, I believe that by the time an AI has fully completed the transition to hard superintelligence, it will have ironed out a bunch of the wrinkles and will be oriented around a particular goal (at least behaviorally, cf. efficiency; though I would also guess that the mental architecture ultimately ends up cleanly-factored (albeit not in a way that creates a single point of failure, goalwise)).
I'd be curious to understand why you believe this happens. Humans (the only general intelligence we have so far) seem to preserve some uncertainty over goal distributions. So it is unclear to me that generality will necessarily clarify goals.
To be a bit more concrete: I find it plausible that the AGI will encounter several possible fine-grained (concrete) goals that map onto the same high-level representation of its goal, whatever it may be. Then you have to refine what the goal representation was meant to mean. After all, a representation of the goal is not necessarily the goal itself. I believe this is what humans face, and why human goals are often a bit of a mess.
With respect to the last question I think it is perhaps a bit unfair. I think they have clearly stated they unconditionally condemn racism, and I have a strong prior that they mean it. Why wouldn't they, after all?
An appraisal of the Future of Life Institute AI existential risk program
But if we were to eliminate the EA community, an AI safety community would quickly replace it, as people are often attached to what they do. And this is even more likely if you add any moral connotation. People working at a charity, for example, are drawn to build an identity around it.
The HuggingFace RL course might be an alternative in the Deep Learning–RL discussion above: https://github.com/huggingface/deep-rl-class
Yeah, perhaps I was being too harsh. However, the baseline scenario should be that current trends will go on for some time, and they predict at least cheap batteries and increasingly cheap H2.
I mostly focused on these two because the current problem with green energy sources is more related to energy storage than production; photovoltaics are currently the cheapest source in most places.
I think I quite disagree with this post, because batteries are improving quite a lot, and if we are also capable of improving hydrogen production and usage, things should work pretty well. Finally, nuclear fusion no longer seems so far away. Of course, I agree with the author that this transition will take quite a long time, especially in developing countries, but I expect it to work out well anyway. One key argument of the author is that we are limited in the amount of different metals available, but lithium is very common on Earth, even if not super cheap, so I am not totally convinced by this. Similar thoughts apply to land usage.
In the Spanish community we often have conversations in English, and I think at least 80% of the members are comfortable with both.
I am, and I am interested in technical AI Safety.
Point 1 is correct, but there is a difference: when doing research, you often need to live near a research group. Distillation is more open to remote and asynchronous work.
I tend to dislike treating all AI policy as equal: the type of AI policy that affects AI safety is unlikely to represent a significant burden when developing frontier models. Thus, reducing red tape on AI might actually be pretty positive.