I recently graduated with a master's in Information Science. Before making a degree switch, I was a Ph.D. student in Planetary Science, where I used optimization models to physically characterize asteroids (including potentially hazardous ones).
Historically, my most time-intensive EA involvement has been organizing Tucson Effective Altruism, the EA university group at the University of Arizona. If you are a movement builder, let's get in touch!
I am broadly interested in economic growth, catastrophic risk reduction / abundant futures, and earning-to-give for animal welfare. Always happy to chat about anything EA!
akash
I just emailed him. Close to zero chance he will see it, but if he does…
but it's very possible that many fish that we kill after catching (yes, with a bad death) have net-positive lives.
Doesn't this imply that even a theoretically painless death of a fish is really, really bad, because you're taking away all the good moments trillions of fish could have experienced? You could argue that the utility experienced by those who consume the fish is higher, but it probably doesn't compare to the utility that unimaginably large number of creatures could have experienced had they continued their natural lives.
(I agree with the more important point that non-adversarial messaging matters and these sorts of comparisons are practically useless.)
Some of the proposed interventions neatly align with practices followed by cities that have "dark sky laws." Uncertain, but maybe there is a feed-two-birds-with-one-scone solution here.
"Problem is that rarely in the world of public engagement, media and comms does everything go right."

"But if you're going to go ahead, be VERY sure you're doing it right."

Doesn't statement 1 imply that statement 2 is an impossibly high standard to reach?
There are clearly mistakes here which could have been avoided, but it is really hard to predict the counterfactual; it is possible that even if those steps were taken, the level of infighting or the amount of clickbait journalism would have been about the same. Maybe not, but who knows!
I was annoyed by all the clickbait-y articles; my fellow EAs are far too deferential, and being against diet change is currently the trendy view within the movement. At the same time, I think it would be healthy for the broader animal movement to build a stronger culture of cooperation, and that involves a higher degree of charitability and a lower bar for what's acceptable when trying something new.
Posting this here for a wider reach: I'm looking for roommates in SF! Interested in leases that begin in January.
Right now, I know three others who are interested, and we have a low-key Signal group chat. If you are interested, direct message me here or on one of my linked socials, and we will hop on a 15-minute call to determine if we would be a good match!
Hank Green should attend an EAG next year.
+1 to this, I would be disappointed if EAG merch was super generic. The sweatshirt from EAG Bay (which I do not have) had a fantastic design, and I liked the birds on the EAG NYC t-shirt.
But I am also someone who has a bright teal-colored backpack with pink straps, and my laptop has 50,000 stickers, so…
By my count, barring Trajan House, it now appears that EA has officially been exiled from Oxford.
Forethought, AI Gov at Oxford Martin, and EA Oxford operate out of Oxford. I am sure Uehiro has EA/adjacent philosophers? GPI's closure is a shame, of course.
It's OK to eat honey
I am quite uncertain because I am unsure to what extent a consumption boycott affects production; however, I lean slightly on the disagree side because boycotting animal-based foods is important for:
Establishing pro-animal cultural norms
Incentivizing plant-based products (like Honee) that already face an uphill climb towards mass adoption
Sounds like patient philanthropy? See @trammell's 80K episode from four years ago.
Pete Buttigieg just published a short blogpost called We Are Still Underreacting on AI.
He seems to believe that AI will cause major changes in the next 3-5 years and thinks that AI poses "terrifying challenges," which makes me wonder if he is privately sympathetic to the transformative AI hypothesis. If yes, he might also take catastrophic risks from AI quite seriously. While not explicitly mentioned, at the end of his piece, he diplomatically affirms:
The coming policy battles won't be over whether to be "for" or "against" AI. It is developing swiftly no matter what. What we can do is take steps to ensure that it leads to more abundant prosperity and safety rather than deprivation and danger. Whether it does one or the other is, at its core, not a technology problem but a social and political problem. And that means it's up to us.
Even if Buttigieg doesn't win, he will probably find himself in the presidential cabinet and could be quite influential on AI policy. The international response to AI depends a lot on which side wins the 2028 election.
In-depth critiques are super time- and labor-intensive to write, so I sincerely appreciate your effort here! I am pessimistic, but I hope this post gets wider coverage.
While I don't understand some of the modeling-based critiques here from a cursory read, it was illuminating to learn about the basic model setup, the lack of error bars for parameters that the model is especially sensitive to, and the assumptions that so tightly constrain the forecast's probability space. I am least sympathetic to the "they made guesstimates here and there" line of critique; forecasting seems inherently squishy, so I do not think it is fair to compare it to physics.
Another critique, and one that I am quite sympathetic to, is that the METR trend specifically shows "there's an exponential trend with doubling time between ~2-12 months on automatically-scoreable, relatively clean + green-field software tasks from a few distributions" (source). METR is especially clear about the drawbacks of their task suite in their RE-bench paper.
I know this is somewhat of a meme in the Safety community at this point (and annoyingly intertwined with the stochastic parrots critique), but I think "are models generalizing?" still remains an important and unresolved question. If LLMs are adopting poor learning heuristics and not generalizing, AI2027 is predicting a weaker kind of "superhuman" coder, one that can reliably solve software tasks with clean feedback loops but will struggle on open-ended tasks!
Anyway, thanks again for checking the models so thoroughly and for the write-up!
we may take action up to and including building new features into the forum's UI, to help remind users of the guidelines.
Random idea: for new users and/or users with less than some threshold level of karma and/or users who use the forum infrequently, Bulby pops up with a little banner that contains a tl;dr on the voting guidelines. Especially good if the banner pops up when a user hovers their cursor over the voting buttons.
Just off the top of my head: Holly was a community builder at Harvard EA, wrote what is arguably one of the most influential forum posts ever, and made sincere career and personal decisions based on EA principles (first, wild animal welfare, and now, "making AI go well"). Besides that, there are several EAGs and community events and conversations and activities that I don't know about, but all in all, she has deeply engaged with EA and has been a thought leader of sorts for a while now. I think it is completely fair to call her a prominent member of the EA community.[1]
- ^
I am unsure if Holly would like the term "member" because she has stated that she is happy to burn bridges with EA / funders, so maybe "person who has historically been strongly influenced by and has been an active member of EA" would be the most accurate but verbose phrasing.
EAG London would be the perfect place to talk about this with OP folks. Either way, all the best with the fundraising!
There is going to be a Netflix series on SBF titled The Altruists, so EA will be back in the media. I don't know how EA will be portrayed in the show, but regardless, now is a great time to improve EA communications. More specifically, we should be a lot louder about historical and current EA wins; we just don't talk about them enough!
A snippet from Netflix's official announcement post:

Are you ready to learn about crypto?
Julia Garner (Ozark, The Fantastic Four: First Steps, Inventing Anna) and Anthony Boyle (House of Guinness, Say Nothing, Masters of the Air) are set to star in The Altruists, a new eight-episode limited series about Sam Bankman-Fried and Caroline Ellison.
Graham Moore (The Imitation Game, The Outfit) and Jacqueline Hoyt (The Underground Railroad, Dietland, Leftovers) will co-showrun and executive produce the series, which tells the story of Sam Bankman-Fried and Caroline Ellison, two hyper-smart, ambitious young idealists who tried to remake the global financial system in the blink of an eye, and then seduced, coaxed, and teased each other into stealing $8 billion.
Assuming this is true, why would OP pull funding? I feel Apart's work strongly aligns with OP's goals. The only reason I can imagine is that they want to move money away from the early-career talent-building pipeline to more mid/late-stage opportunities.
How confident are you about these views?
The next existential catastrophe is likelier than not to wipe off all animal sentience from the planet
Intuitively seems very unlikely.
The Chicxulub impact wiped out dinosaurs but not smaller mammals, fish, and insects. Even if a future extinction event caused a total ecosystem collapse, I would expect that some arthropods will be able to adapt and survive.
I feel a goal-driven, autonomous ASI won't care much about the majority of non-humans. We don't care about anthills we trample when constructing buildings (ideally, we should); similarly, an ASI would not intentionally target most non-humans, since they aren't competing for the same resources or obstructing the ASI's goals.
Is this primarily meant for people who are already veg*n/sympathetic or a wider audience?
If the latter, it is worth rethinking whether the word "vegan" should be used at all, as there are a bunch of studies showing that the public is negatively biased towards the term and that alternate terms are received more positively (see this, for instance).