Dewi
+1, I’d also recommend using colours that are accessible for people with colour vision deficiency.
I really enjoyed and appreciated reading this, and it resonates a lot with me. Thank you for writing it :)
We don’t currently believe the name collision will cause significant issues, and we’re taking steps to mitigate any potential confusion that does arise.
We also discussed the name collision with the other BlueDot in August and September of this year to evaluate how much of an issue it would pose to them and to us, and we ultimately decided to move forward with this name choice.
Some other things:
- We consulted with other advisors within and outside the biosecurity community to evaluate how much of an issue this might pose
- We emphasize that we’re an education-focused project that runs courses on many different global problems
- Our focus-area websites (e.g. the AGI Safety Fundamentals site) are currently our more public-facing brands, as opposed to the project brand (BlueDot Impact), though this might change with time
- We always say “BlueDot Impact”, never “BlueDot”, and the BlueDot Impact branding is very different from the other BlueDot’s
I started thinking about alternative names in Summer 2021, and we’ve brainstormed and evaluated many different names since then. BlueDot Impact was overwhelmingly preferred by people we surveyed and within the team. I’m overall very excited about this name and to develop this brand, and hope to be able to collaborate with other stakeholders if/when confusions arise :)
Thanks for this post, I knew nothing about Effective Philanthropy and this was very informative.
The following section resonated a lot with me:
I appreciate the presence of philosophers in effective altruism, a lot. Looking back at history, we can see philosophers and thinkers who had huge long-term influence. Peter Singer is hugely influential in global development and animal welfare. I admire other EA philosophers who take seriously issues like evidential decision theory, the long-term future, and infinite ethics.
But I don’t think such concepts need to always be so central when trying to mobilize broader resources.
In theory, effective altruism is a question about how to do the most good, or how to do as much good as possible given the resources you’re willing to commit. In practice and in social terms, effective altruism is a take-it-or-leave-it bundle of claims, beliefs, and institutions.
I often find myself frustrated in EA conversations or with EA outreach where we front-load specific moral beliefs that are not obviously necessary for inspiring people to make high-impact career choices. With some sub-groups who are less likely to be interested in philosophizing (e.g. engineers and entrepreneurs), this approach is actively counter-productive (especially when we need engineers and entrepreneurs! (1), (2), (3)).
However, there are obvious ways that trying to rebalance this can go wrong: if having the biggest impact requires regularly re-evaluating the long-term objective throughout one’s career; if strong shared moral beliefs lead to improved cooperation and coordination across the (EA/cause-area) community; or if there are significant downside risks within the relevant action space, and taking expected-value maximisation seriously (or holding other action-guiding moral beliefs) would lead you to avoid those risks.

So I’m keen for students who want to inspire other students to pursue impactful careers not to go down the path of “avoiding spending a lot of time discussing philosophy,” but to re-evaluate what messaging they use to pique different demographic groups’ interest initially and get them through the door, and then evaluate later how interested those people are in thinking really hard about doing the most impartial good (where that seems important to do). I think this also needs to be heavily tailored to the specific problem the student group is trying to solve, the (ideally long-term) talent requirements for that problem, and all the nuances of the current community working on it.
I also think EA funders should consider seeding cause-focused communities or student groups that focus on important issues, not just EA community groups. In theory, Giving What We Can groups could have served this function in terms of global health and development. However, it has now, like the rest of the effective altruism community, moved towards longtermism and general effective altruism.
At our org, we’re much more focused on building cause-focused communities than other groups (we’re currently focused on AI safety, biosecurity, nuclear, climate change and alternative proteins / FAW, and have individual full-time members of staff committed to different cause areas), and we have received generous funding from EA donors, so there is at least some movement in this direction already. FWIW I think GWWC is trying to move away from the GH&D focus, though I’d be excited for more impact-oriented student groups to be developed that resemble the work of Charity Entrepreneurship.
I’m very surprised to hear that this work is funding constrained. Why do you currently think this has received less interest from funders?
This is incredibly exciting, thanks for the update!
Thank you for making this thread Clifford, and we’re really grateful for all feedback! We’re working hard as a team to improve the course and the infrastructure we have for hosting other courses, and everyone’s feedback has been incredibly valuable on our journey thus far :)
Why do you think it’s less important for the x-risk/longtermism parts of the EA movement to have good PR and epistemics?
How important do you think it is that your or others’ forecasts are better understood or more valued among policy-makers? And if you think they should listen to forecasts more often, how do you think we should go about making them more aware?
Here’s the video, in case people would like to engage with this that way. Thanks for the post Akash!
Thank you for this! We’ve been considering starting up some kind of “intro to biosecurity” reading group at our local EA group, and these are really excellent resources for us that are likely to save us many hours of work trawling through the literature.
The programme is virtual by default; we’ve made this clearer in the application form.
Overall I think this sounds really cool. There are a few things I would be cautious of, though. One thing I would worry about is artificially creating a sense of two distinct “sides” on an issue, when there is likely much more complexity, and many more perspectives, than are presented in the debate. When only one person is being interviewed, I think there’s a recognition that there are many other perspectives on the issue; when it’s a debate, however, people seem to feel that the perspectives presented encompass the whole space.
The tendency towards side-taking also worries me. The two-party system is a classic example of this, which pushes people towards political coalitions, instead of thinking about each policy or situation independently. Some listeners may be pushed to thinking “I’m on X’s side” which could have negative group polarization effects while also not necessarily promoting a holistic understanding of the issue at hand.
It could also promote a tendency towards “yes/no” questions, which I think aren’t too useful for the kinds of questions we’re interested in, which have very complex cost/benefit tradeoffs.
However, if the debates were chaired carefully and the host tries to interject nuance, find common ground, play devil’s advocate, etc., then maybe these worries could be alleviated, while also helping people to understand how different beliefs compare and relate to each other.
Cool, thanks Lizka and the Forum team!
If you click “New Post” in the subforum, does that post also appear on the Frontpage? My assumption is yes, but just wanted to clarify.
Great post, thanks for writing it!
I’m really excited for more people in the EA community to ask questions like “what would the world look like if we’ve solved X problem? How can we make that world a reality? What team do we need to build to achieve this goal over a decade-long time horizon?” as opposed to focusing predominantly on what’s best to do given a certain set of resources or capabilities one currently has, doing independent projects, and doing projects for short periods of time.
Accessibility point (relevant for all Forum posts):
I have deuteranopia (a common form of red-green colour-blindness), and can’t really see the different colours in your “limited to very strong” graphs, which makes evaluating them harder and more cognitively demanding (I basically have to rely entirely on the text). It’s also quite distracting to have what looks to me like subtly different shades of the same colour.
~5% of the population have some form of colour-blindness (~1/12 men, ~1/200 women). I would really appreciate it if the colours could please be selected from a colour palette like this one :) Thanks!
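For anyone making graphs in code, one concrete option (my suggestion, not something from the original post) is the Okabe–Ito palette, which is widely recommended as safe for deuteranopia and protanopia. A minimal Python sketch, usable with most plotting libraries (e.g. via matplotlib’s colour cycle or a spreadsheet chart’s colour settings):

```python
# The Okabe-Ito colour-blind-safe palette, as hex strings.
OKABE_ITO = [
    "#000000",  # black
    "#E69F00",  # orange
    "#56B4E9",  # sky blue
    "#009E73",  # bluish green
    "#F0E442",  # yellow
    "#0072B2",  # blue
    "#D55E00",  # vermillion
    "#CC79A7",  # reddish purple
]

def pick_colours(n: int) -> list[str]:
    """Return n colours for a chart, cycling through the palette."""
    return [OKABE_ITO[i % len(OKABE_ITO)] for i in range(n)]
```

With more than eight series, the colours repeat, so at that point it’s worth distinguishing lines by marker shape or dash style too, not colour alone.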