Hey! I'm Mart; I learned about EA a few years back through LessWrong. Currently, I am pursuing a PhD in the theory of quantum technologies and learning more about doing good better in the EA Ulm local group and the EA Math and Physics professional group.
Mart_Korz
EA Ulm—workshop on AI-related catastrophes
Holden Karnofsky has described his current thoughts on these topics (“how to help with longtermism/AI”) on the 80,000 Hours podcast – and there are some important changes:
And then it’s like, what do you do to help? When I was writing my blog post series, “The most important century,” I freely admitted the lamest part was, so what do I do? I had this blog post called “Call to vigilance” instead of “Call to action” — because I was like, I don’t have any actions for you. You can follow the news and wait for something to happen, wait for something to do.
I think people got used to that. People in the AI safety community got used to the idea that the thing you do in AI safety is you either work on AI alignment — which at that time means you theorise, you try to be very conceptual; you don’t actually have AIs that are capable enough to be interesting in any way, so you’re solving a lot of theoretical problems, you’re coming up with research agendas someone could pursue, you’re torturously creating experiments that might sort of tell you something, but it’s just almost all conceptual work — or you’re raising awareness, or you’re community building, or you’re message spreading.
These are kind of the things you can do. In order to do them, you have to have a high tolerance for just going around doing stuff, and you don’t know if it’s working. You have to be kind of self-driven.
He goes on to clarify that today, he sees many ways to contribute that are much more straightforward:
[...]. So that’s the state we’ve been in for a long time, and I think a lot of people are really used to that, and they’re still assuming it’s that way. But it’s not that way. I think now if you work in AI, you can do a lot of work that looks much more like: you have a thing you’re trying to do, you have a boss, you’re at an organisation, the organisation is supporting the thing you’re trying to do, you’re going to try and do it. If it works, you’ll know it worked. If it doesn’t work, you’ll know it didn’t work. And you’re not just measuring success in whether you convinced other people to agree with you; you’re measuring success in whether you got some technical measure to work or something like that.
Then, from ~1:20:00, the topic continues with “Great things to do in technical AI safety”.
EA Ulm—open meeting
Petrov Day celebration
This is an interesting idea, thanks for sharing!
While thinking about your suggestion a little, I learned about RUTF (peanut-based ready-to-eat meals used to treat child malnutrition), which appears to be rather established. Using UNICEF's 2024 numbers, they distributed enough RUTF to feed half a million people throughout the year, if we go by calories.[1]
According to UNICEF:RUTF-price-data, a box of 150 meals (92 g each, which should correspond to ~500 kcal) costs them around $50 (using 2023 numbers and rounding up). Scaling this up to 2,000 kcal per day, this corresponds to roughly $1.33/day for nutrition.
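For transparency, here is the arithmetic behind the $1.33/day figure as a quick sanity check (assuming, as above, one 92 g meal ≈ 500 kcal and $50 per 150-meal box):

```python
# Sanity check of the RUTF cost estimate (assumptions: ~500 kcal per 92 g
# meal, ~$50 per box of 150 meals, 2,000 kcal per person per day).
box_cost_usd = 50.0
meals_per_box = 150
kcal_per_meal = 500
kcal_per_day = 2000

cost_per_meal = box_cost_usd / meals_per_box   # ≈ $0.33 per meal
meals_per_day = kcal_per_day / kcal_per_meal   # 4 meals per day
cost_per_day = cost_per_meal * meals_per_day   # ≈ $1.33 per day

print(round(cost_per_day, 2))  # → 1.33
```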
A few important aspects of RUTF differ from your suggestion (it is aimed at temporary treatment of malnutrition in children rather than general nutrition for all ages; the prices above are not consumer prices; the main ingredients differ; and RUTF avoids the need to add safe water, which might not be easily available), but it seems to me that this supports your cost estimate, at least for large-scale purchases.
I think that together with existing consumer brands for meal-replacement powders, RUTF could be a second great reference for a similar and established idea :)
- ^
Of course, RUTF is used as a temporary treatment, so the amount corresponds to a potential 5.1 million treated children, according to the linked dashboard.
Yes, I agree 100% with this article—I have nothing to add.
Thanks for engaging!
Specifically, this means a flatter gradient (i.e., ‘attenuation’) – smaller in absolute terms. In reality, I found a slightly steeper gradient (increasing in absolute terms). I can change that sentence.
I don’t think that is necessary—my confusion is more about grasping how the aspects play together :) I’m afraid I will have to make myself a few drawings to get a better grasp.
Very nice, thanks!
If people are getting happier (and rescaling is occurring) the probability of these actions should become less linked to reported LS — but they don’t.
When I first read this sentence, I thought that your argument made perfect sense, but then I read
Surprisingly, the relationship appears to get stronger over time. That’s the opposite of what rescaling predicts!
in the Overall happiness section, and my first thought was: “well, I guess people are getting more demanding”. And now I am confused. I could imagine thinking of “people don’t settle for half-good any more” as a kind of increased happiness (even if calling it “satisfaction” would be strange).
Independent of this, my personal impression about economic wealth ever since childhood has been that my physical needs are essentially saturated and that my social environment is massively more important for my subjective well-being. And although the latter is influenced by wealth, it is much more strongly affected by culture. While I can think of plenty of cultural developments that I am thankful for, I can also think of many that don’t push me in a healthy direction.
EA Ulm—workshop on Global Health
Meet us @ Uni Forum Ulm
Semester start—Intro to EA and pizza
I think it is good that (former) CEA has recognized that society has moved beyond distractingly specific names.
After Alphabet, Meta and X took the lead, I am excited about the impact that the Centre will create in adjacency to the EA community.
Isn’t this avoidable? I could imagine a system where you allow a small percentage of randomized “rejected” candidates into the next hiring round and, if they properly succeed there, allow them into the third round. I have essentially no experience with how hiring works, but it seems to me that this i) would only moderately increase the effort that goes into hiring, ii) would still seem kind of fair to the candidates, and iii) would give you some information on what your selection process actually selects for.
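To make the idea concrete, here is a minimal sketch of the mechanism I have in mind (all names and the 5% audit fraction are my own hypothetical choices, not anything from an actual hiring process):

```python
import random

def advance_round(passed, rejected, audit_fraction=0.05, seed=0):
    """Advance the regular picks plus a small random sample of rejected
    candidates, so later rounds reveal what the earlier filter selects for.

    Hypothetical sketch: `audit_fraction` controls how many "rejected"
    candidates are randomly promoted anyway.
    """
    rng = random.Random(seed)
    n_audit = max(1, round(audit_fraction * len(rejected)))
    audit = rng.sample(rejected, min(n_audit, len(rejected)))
    # Track who came in via the audit sample, so their performance in the
    # next round can be compared against the regular admits.
    return passed + audit, set(audit)

candidates, audit_set = advance_round(
    passed=["A", "B", "C"],
    rejected=["D", "E", "F", "G", "H"],
)
print(len(candidates), sorted(audit_set))
```

If the audited candidates do about as well as the regular admits in later rounds, that would suggest the first filter isn't adding much signal.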