I recently graduated with a master's in Information Science. Before making a degree switch, I was a Ph.D. student in Planetary Science, where I used optimization models to physically characterize asteroids (including potentially hazardous ones).
Historically, my most time-intensive EA involvement has been organizing Tucson Effective Altruism, the EA university group at the University of Arizona. If you are a movement builder, let's get in touch!
Career-wise, I am broadly interested in capital generation, x/s-risk reduction, and earning-to-give for animal welfare. Always happy to chat about anything EA!
akash 🔸
It's OK to eat honey
I am quite uncertain because I am unsure to what extent a consumption boycott affects production; however, I lean slightly on the disagree side because boycotting animal-based foods is important for:
Establishing pro-animal cultural norms
Incentivizing plant-based products (like Honee) that already face an uphill climb towards mass adoption
Sounds like patient philanthropy? See @trammell's 80K episode from four years ago.
Pete Buttigieg just published a short blogpost called We Are Still Underreacting on AI.
He seems to believe that AI will cause major changes in the next 3–5 years and thinks that AI poses "terrifying challenges," which makes me wonder if he is privately sympathetic to the transformative AI hypothesis. If yes, he might also take catastrophic risks from AI quite seriously. While he does not say so explicitly, at the end of his piece he diplomatically affirms:
The coming policy battles won't be over whether to be "for" or "against" AI. It is developing swiftly no matter what. What we can do is take steps to ensure that it leads to more abundant prosperity and safety rather than deprivation and danger. Whether it does one or the other is, at its core, not a technology problem but a social and political problem. And that means it's up to us.
Even if Buttigieg doesn't win, he will probably find himself in the presidential cabinet and could be quite influential on AI policy. The international response to AI depends a lot on which side wins the 2028 election.
In-depth critiques are super time- and labor-intensive to write, so I sincerely appreciate your effort here! I am pessimistic, but I hope this post gets wider coverage.
While I don't understand some of the modeling-based critiques here from a cursory read, it was illuminating to learn about the basic model setup, the lack of error bars for parameters that the model is especially sensitive to, and the assumptions that so tightly constrain the forecast's probability space. I am least sympathetic to the "they made guesstimates here and there" line of critique; forecasting seems inherently squishy, so I do not think it is fair to compare it to physics.
Another critique, and one that I am quite sympathetic to, is that the METR trend specifically shows "there's an exponential trend with doubling time between ~2–12 months on automatically-scoreable, relatively clean + green-field software tasks from a few distributions" (source). METR is especially clear about the drawbacks of their task suite in their RE-bench paper.
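(A quick back-of-the-envelope note, entirely my own arithmetic rather than METR's, on why that range matters so much: if the task-horizon doubling time is $d$ months, then over one year the horizon grows by a factor of

$$2^{12/d} \approx 2\times \ \text{at}\ d = 12\ \text{months}, \quad \text{but}\ 2^{12/2} = 64\times \ \text{at}\ d = 2\ \text{months},$$

so where you sit inside that ~2–12 month band changes the one-year extrapolation by more than an order of magnitude.)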
I know this is somewhat of a meme in the Safety community at this point (and annoyingly intertwined with the stochastic parrots critique), but I think "are models generalizing?" still remains an important and unresolved question. If LLMs are adopting poor learning heuristics and not generalizing, AI 2027 is predicting a weaker kind of "superhuman" coder: one that can reliably solve software tasks with clean feedback loops but will struggle on open-ended tasks!
Anyway, thanks again for checking the models so thoroughly and for the write-up!
we may take action up to and including building new features into the forum's UI, to help remind users of the guidelines.
Random idea: for new users and/or users with less than some threshold level of karma and/or users who use the forum infrequently, Bulby pops up with a little banner that contains a tl;dr of the voting guidelines. Especially good if the banner pops up when a user hovers their cursor over the voting buttons.
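To make the idea concrete, here is a minimal React/TypeScript sketch of what such a hover banner could look like. This is purely illustrative and not based on the Forum's actual codebase; the component, props, and karma threshold are all hypothetical:

```tsx
// Hypothetical sketch: show a voting-guidelines reminder to low-karma users
// when they hover over the vote buttons. Not the Forum's real implementation.
import React, { useState } from "react";

const KARMA_THRESHOLD = 100; // hypothetical cutoff for "newer" users

type VoteButtonsProps = {
  userKarma: number;
  onUpvote: () => void;
  onDownvote: () => void;
};

export function VoteButtonsWithGuidelines({ userKarma, onUpvote, onDownvote }: VoteButtonsProps) {
  const [hovered, setHovered] = useState(false);
  const showReminder = hovered && userKarma < KARMA_THRESHOLD;

  return (
    <span
      onMouseEnter={() => setHovered(true)}
      onMouseLeave={() => setHovered(false)}
    >
      {showReminder && (
        <span role="tooltip">
          Reminder: upvote = "I want more comments like this on the Forum";
          downvote = "I want fewer comments like this."
        </span>
      )}
      <button onClick={onUpvote}>▲</button>
      <button onClick={onDownvote}>▼</button>
    </span>
  );
}
```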
Just off the top of my head: Holly was a community builder at Harvard EA, wrote what is arguably one of the most influential forum posts ever, and made sincere career and personal decisions based on EA principles (first, wild animal welfare, and now, "making AI go well"). Besides that, there are several EAGs and community events and conversations and activities that I don't know about, but all in all, she has deeply engaged with EA and has been a thought leader of sorts for a while now. I think it is completely fair to call her a prominent member of the EA community.[1]
- ^
I am unsure if Holly would like the term "member" because she has stated that she is happy to burn bridges with EA/funders, so maybe "person who has historically been strongly influenced by and has been an active member of EA" would be the most accurate but verbose phrasing.
EAG London would be the perfect place to talk about this with OP folks. Either way, all the best with the fundraising!
There is going to be a Netflix series on SBF titled The Altruists, so EA will be back in the media. I don't know how EA will be portrayed in the show, but regardless, now is a great time to improve EA communications. More specifically, we should be a lot louder about historical and current EA wins; we just don't talk about them enough!
A snippet from Netflix's official announcement post:
Are you ready to learn about crypto?
Julia Garner (Ozark, The Fantastic Four: First Steps, Inventing Anna) and Anthony Boyle (House of Guinness, Say Nothing, Masters of the Air) are set to star in The Altruists, a new eight-episode limited series about Sam Bankman-Fried and Caroline Ellison.
Graham Moore (The Imitation Game, The Outfit) and Jacqueline Hoyt (The Underground Railroad, Dietland, Leftovers) will co-showrun and executive produce the series, which tells the story of Sam Bankman-Fried and Caroline Ellison, two hyper-smart, ambitious young idealists who tried to remake the global financial system in the blink of an eye – and then seduced, coaxed, and teased each other into stealing $8 billion.
Assuming this is true, why would OP pull funding? I feel Apart's work strongly aligns with OP's goals. The only reason I can imagine is that they want to move money away from the early-career talent-building pipeline to more mid/late-stage opportunities.
How confident are you about these views?
The next existential catastrophe is likelier than not to wipe out all animal sentience from the planet
Intuitively seems very unlikely.
The Chicxulub impact wiped out the non-avian dinosaurs but not smaller mammals, fish, and insects. Even if a future extinction event caused a total ecosystem collapse, I would expect some arthropods to adapt and survive.
I feel a goal-driven, autonomous ASI won't care much about the majority of non-humans. We don't care about anthills we trample when constructing buildings (ideally, we should); similarly, an ASI would not intentionally target most non-humans, since they aren't competing for the same resources or obstructing the ASI's goals.
Thanks, great post!
A few follow-up questions and pushbacks:
Even if cannibalization happens, here are three questions that "multiple well-designed studies analyzing substitution effects demonstrated that cultivated and plant-based meats appeal to sufficiently different consumer segments" may not answer:
Would commercially viable cultivated meat more favorably alter consumer preferences over time?
A non-negligible portion of veg*ns abandon their veg*nism; would the introduction of cultivated meat improve retention of animal-free consumption patterns?
How would the introduction of cultivated meat affect flexitarian dietary choices? Flexitarians eat a combination of animal- and plant-based meat. When cultivated meat becomes commercially viable, would flexitarians replace the former or the latter with cultivated meat?
If the answer is yes to any of these, I think that is a point in favor of cultivated meat. I expect cultural change to be a significant driver of reduced animal consumption, and this cultural change will only be possible if there is a stable class of consumers who normalize consumption of animal-free products.
To draw a historical parallel, when industrial chicken farming developed in the second half of the 20th century, people didn't eat less of other meats; they just ate chicken in addition.
Is this true? It seems that chicken did displace beef consumption by ~40% (assuming consumption roughly tracks supply), or am I grossly misunderstanding the chart above?
Further, isn't there an upper bound to how much addition can happen? Meat became cheap and widely available, incomes rose, and people started eating more of everything, so consumption increased. But there is only so much one can eat, so at some point people started making cost-based trade-offs between beef and chicken. If cultivated chicken were to be cheaper than animal-based beef or chicken, shouldn't we expect people to start making similar trade-offs?
AGI by 2028 is more likely than not
I hope to write about this at length once school ends, but in short, here are the two core reasons I feel AGI in three years is quite implausible:
The models aren't generalizing. LLMs are not stochastic parrots; they are able to learn, but the learning heuristics they adopt seem to be random or imperfect. And no, I don't think METR's newest benchmark is evidence against this.[1]
It is unclear if models are situationally aware, and currently, it seems more likely than not that they do not possess this capability. Laine et al. (2024) show that current models are far below human baselines of situational awareness when tested on MCQ-like questions. I am unsure how models would be able to perform long-term planning (a capability I consider crucial for AGI) without being sufficiently situationally aware.
- ^
As Beth Barnes put it, their latest benchmark specifically shows that "there's an exponential trend with doubling time between ~2–12 months on automatically-scoreable, relatively clean + green-field software tasks from a few distributions." Real-world tasks rarely have such clean feedback loops; see Section 6 of METR's RE-bench paper for a thorough list of drawbacks and limitations.
Should EA avoid using AI art for non-research purposes?
Voting under the assumption that by EA, you mean individuals who are into EA or consider themselves to be a part of the movement (see "EA" is too vague: let's be more specific).
Briefly, I think the market/job displacement and environmental concerns are quite weak, although I think EA professionals should avoid using AI art unless necessary, due to reputational and aesthetic concerns. However, for images generated in a non-professional context, I do not think avoidance is warranted.
(meta: why are people downvoting this comment? I disagree-voted, but there is nothing in this comment that makes me go, "I want fewer comments like this on the Forum")
This helps. That is not at all how I interpreted "our answer to both of your questions is 'no.'" Apologies!
our answer to both of your questions is "no."
As much as I appreciate the time and effort you put into the analysis, this is a very revealing answer and makes me immediately skeptical of anything you will post in the future.
The linked article really doesn't justify why you effectively think that not a single piece of information would change the results of your analysis. This makes me suspect that, for whatever reason, you are pre-committed to the belief "Sinergia bad."
Correct me if I am misinterpreting something or if you have explained why you are certain beyond a shadow of a doubt that 1) there is no piece of information that would lead to different conclusions or interpretations of claims and 2) there is no room for reasonable disagreement.
it's also a big clear gap now on the trusted, well known non-AI career advice front
From the update, it seems that:
80K's career guide will remain unchanged
I especially feel good about this, because the guide does a really good job of emphasizing the many approaches to pursuing an impactful career
n = 1 anecdotal point: during tabling early this semester, a passerby mentioned that they knew about 80K because a professor had prescribed one of the readings from the career guide in their course. The professor in question and the class they were teaching had no connection with EA, AI Safety, or our local EA group.
If non-EAs also find 80Kâs career guide useful, that is a strong signal that it is well-written, practical, and not biased to any particular cause
I expect and hope that this remains unchanged, because we prescribe most of the career readings from that guide in our introductory program
Existing write-ups on non-AI problem profiles will also remain unchanged
There will be a separate AGI career guide
But the job board will be more AI focused
Overall, this tells me that groups should still feel comfortable sharing readings from the career guide and other problem profiles, but should recommend the job board primarily to those interested in "making AI go well" or to mid/senior non-AI people. Probably Good has compiled a list of impact-focused job boards here, so this resource could be highlighted more often.
Another place people could be directed for career advice: https://probablygood.org/
Since last semester, we have made career 1-on-1s a mandatory part of our introductory program.
This semester, we will have two 1-on-1s
The first one will be a casual conversation where the mentee and mentor get to learn more about each other
The second one will be more in-depth: we will share this 1-on-1 sheet (shamelessly poached from 80K), the mentees will fill it out before the meeting and have a ≤1-hour conversation with a mentor of their choice, and post-meeting, the mentor will add further resources to the sheet that may be helpful.
The advice we give during these sessions ends up being broader than just the top EA career paths, although we are most helpful in cases where:
– someone is curious about EA/adjacent causes
– someone has questions related to graduate school
– someone wants general "how to best navigate college, plan for internships, etc." advice
Do y'all have something similar set up?
Forethought, AI Gov at Oxford Martin, and EA Oxford operate out of Oxford. I am sure Uehiro has EA/adjacent philosophers? GPI's closure is a shame, of course.