I’d be doing less good with my life if I hadn’t heard of effective altruism
I think the much-needed clarity would have eluded me
One of the issues I believe hinders university group organizers is motivation: what is motivating you to start a group, why do you need to, and why do you want to?
I believe your post here addresses these questions, but I see some gaps. Some universities may not be welcoming of this idea, and a group organizer standing alone, if not deeply rooted, can be shaken by comments from others. And lastly, personal development is often sacrificed for community building, which frequently leads to a dearth of capacity in the group.
This is quite an interesting take. On one hand, I like the humor Ollie uses in his writing; on the other, I like the effect of how the points are pitched against one another.
I think the most important thing, as stated in this discussion, for making a panel session effective is for the panelists to talk among themselves before the session, with the panel itself reserved for taking questions from the participants and audience. That way, there is more context and nuance to the discussion.
As for brainstorming sessions, they have always struck me as a lazy way of achieving negligible impact, mainly because many participants, if not handpicked, don't understand the context or lack knowledge of the subject matter. Most of the time, the submissions are unusable or forgotten.
Regardless, I still think both formats have some usefulness, and a lot of benefit if fine-tuned properly with more context, pre-brainstorming materials, and the freedom to walk away.
Thank you @OllieBase for sharing this take.
Hi Tosin,
We are currently reviewing applications, and you will get a response in due course. We apologize for any inconvenience.
That was nice
This is a great stride in the right direction. Looking forward to your exploration of the African tech space.
A quick question: how do you plan on addressing the seeming lack of government collaboration with this type of project, especially in some parts of Africa?
There is no link to apply for the EA South Africa Summit in Cape Town.
Is that an error, or is registration not open yet?
It seems I've gotten the knack of it now…
So your argument here is that, if we are going to go this route, interpretability techniques should be used in the future to help ensure the safety of these agentic AIs, as much as they are currently being used to improve their “planning capabilities”?
Can you clarify this a bit: “Only if the safety/alignment work applies directly to the future maximiser AIs (for example, by allowing us to understand them) does it seem very advantageous to me.”
I'm kind of lost here.
So it is not a question of whether or not to carry out those systemic changes, but of how best to do so.
Thanks, this is clarity that comes knocking at the right time.
I may be a little off here; would you care to expand on your thoughts and the ambit of GCR?
I understand the reservation about donations from AI companies because of the conflict of interest, but I still think the largest driver of this intervention area (the AI cause area) should be these companies… who else has the funds to drive it? Who else has the ideological initiative necessary for change in this area?
While it may be counterintuitive to have them on board, they are still the best bet for now.
Yet to happen… the timeline is September, and applications are still open.
Thank you for raising this
I think your concern strikes at something many of us within and around the longtermist community have been reflecting on. I share the worry that longtermism can sound detached or even abstract when it’s presented purely as a philosophical ideal rather than as something that must earn its relevance through present-day impact.
But I don’t think that the tension between “present” and “future” is an unavoidable flaw. In fact, part of my motivation for writing this essay was to show that the two are deeply connected, especially in contexts like the Global South, where long-term risks and present-day vulnerabilities are intertwined.
For example, when we talk about building institutional readiness for AI, we’re not talking about ignoring current crises. We’re talking about strengthening the very institutions that can address them, improving education, governance, foresight, and inclusion. These are present actions that both reduce near-term harms and make our societies more resilient to long-term risks.
In that sense, I see longtermism not as an escape from the present, but as an invitation to act more wisely within it. The idea is not to postpone compassion or justice for the sake of distant futures, but to ensure that today’s solutions don’t mortgage tomorrow’s possibilities.
I completely agree that longtermism must prove its worth “in two time zones at once.” For those of us in the Global South, that dual engagement isn’t optional, it’s survival. We can’t talk about safeguarding the year 2500 if we can’t feed, educate, or protect people in 2025. But neither can we afford to build only for 2025 if the systems we create are brittle, exclusionary, or unprepared for transformative change.