I have a master's in Information Science. Before switching to the master's, I was a Ph.D. student in Planetary Science, where I used optimization models to physically characterize asteroids (including potentially hazardous ones).
Historically, my most time-intensive EA involvement has been organizing Tucson Effective Altruism, the EA university group at the University of Arizona. If you are a movement builder, let's get in touch!
I am broadly interested in economic growth, catastrophic risk reduction / abundant futures, and earning-to-give for animal welfare. Always happy to chat about anything EA!
akash
⌠other brain regions (accessory lobes) have shown to compensate these integrative processes in this taxon, which has not yet been demonstrated for Penaeidae. Itâs thus still a low rating for lack of data, not for proof of failing this criterion.
This reminds me of two things:
I am forgetting the precise terms here, but from the 1800s through most of the 1900s, researchers thought that birds weren't intelligent because they were essentially comparing human and avian brains 1:1. Later, others found that while birds lacked that specific component (the neocortex?), other regions of their brains were functionally similar, and that birds were indeed smart rather than instinct-driven biological machines.
I recall watching Dustin Crummett's presentation on insect sentience a while back, and when discussing the lack of evidence of sentience in certain insects, he emphasized that besides black soldier flies and honeybees, most insects aren't that well-studied.
I am a little surprised that evidence for integrative brain regions is very high for all but the Penaeidae. Do we know whether this is because direct/proxy studies on Penaeidae sentience haven't been performed, or because studies were performed but showed little evidence of sentience?
And answering some of your questions:
Which criteria do you think are the most convincing to update your confidence?
Criteria 2 ≈ 3 > 4 ≈ 5
Do you have other types of evidence that better influence your confidence?
Not evidence, but a heuristic I use when thinking about sentience: any organism that performs reinforcement learning, i.e., makes on-the-fly decisions informed by environmental stimuli, is most likely sentient.
I lean towards a yes, but I am uncertain because I don't know how the stimuli are fed, and I would imagine that the simulated brain, unlike an embodied fruit fly, isn't perpetually processing information and taking actions. If the latter is true, and if it replaces the need for … processing … billions of live fruit flies in labs worldwide, that seems like a huge animal welfare win to me. EDIT: Eon, the company behind this development, published a blog post explaining their research, and after reading it, I am much less confident in my lean. This doesn't seem to be a whole fly brain emulation / a full copy:
First, the Shiu et al. model is a simplified neuron model. It uses leaky integrate-and-fire dynamics rather than morphologically detailed multicompartment neurons, and it relies on inferred neurotransmitter identity and simplified synapse models. This means that dendritic nonlinearities, biophysical channel diversity, and many specific dynamics are not represented. This is enough to recover some sensorimotor transformations, but clearly does not capture the full range of neural activity. Further, internal state, plasticity, learning, hormonal changes are largely missing. Biological flies do not respond to the same sensory input the same way in all contexts. Hunger, satiety, arousal, mating state, egg-laying state, recent sensory history, neuromodulators, and learning all reshape sensorimotor transformations.
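For readers unfamiliar with the term, the "leaky integrate-and-fire" dynamics mentioned above can be sketched in a few lines. This is a generic textbook LIF neuron, not the Shiu et al. or Eon implementation, and all parameter values are illustrative placeholders:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Generic textbook dynamics, not the Shiu et al. model itself;
# all parameters are illustrative placeholders (mV, ms, nA, MOhm).

def simulate_lif(input_current, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-65.0, tau_m=10.0, r_m=10.0, dt=0.1):
    """Euler-integrate dV/dt = (-(V - V_rest) + R*I) / tau_m; spike and reset at threshold."""
    v = v_rest
    spikes = []
    for step, i_ext in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * i_ext) / tau_m
        v += dv * dt
        if v >= v_thresh:          # membrane potential crosses threshold
            spikes.append(step)    # record spike time (in integration steps)
            v = v_reset            # hard reset: no dendritic or channel detail modeled
    return spikes

# Constant 2.0 nA drive for 1000 steps (100 ms at dt=0.1 ms) produces regular spiking.
spike_times = simulate_lif([2.0] * 1000)
```

The point of the quoted critique is visible right in the sketch: the whole neuron is one leaky voltage variable with a hard reset, so dendritic nonlinearities, channel diversity, neuromodulation, and plasticity have nowhere to live.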
Is this primarily meant for people who are already veg*n/sympathetic, or a wider audience?
If the latter, it is worth rethinking whether the word "vegan" should be used at all, as a bunch of studies show that the public is negatively biased towards the term and that alternative terms are received more positively (see this, for instance).
I just emailed him. Close to zero chance he will see it, but if he does…
but it's very possible that many fish that we kill after catching (yes, with a bad death) have net positive lives.
Doesn't this imply that even a theoretically painless death of a fish is really, really bad, because you're taking away all the good moments trillions of fish could have experienced? You could argue that the utility experienced by those who consume the fish is higher, but it probably doesn't compare to the utility that unimaginably large number of creatures could have experienced had they continued their natural lives.
(I agree with the more important point that non-adversarial messaging matters and these sorts of comparisons are practically useless.)
Some of the proposed interventions neatly align with practices followed by cities that have "dark sky laws." Uncertain, but maybe there is a feed two birds with one scone solution here.
"Problem is that rarely in the world of public engagement, media and comms does everything go right."
"But if you're going to go ahead, be VERY sure you're doing it right."
Doesnât statement 1 imply that statement 2 is an impossibly high standard to reach?
There are clearly mistakes here which could have been avoided, but it is really hard to predict the counterfactual; it is possible that even if those steps were taken, the level of infighting or the amount of clickbait journalism would have been about the same. Maybe not, but who knows!
I was annoyed with all the clickbait-y articles; my fellow EAs are far too deferential, and being against diet change is currently the trendy view within the movement. At the same time, I think it would be healthy for the broader animal movement to build a stronger culture of cooperation, and that involves a higher degree of charitability and a lower bar for what's acceptable when trying something new.
Posting this here for a wider reach: I'm looking for roommates in SF! Interested in leases that begin in January.
Right now, I know three others who are interested, and we have a low-key Signal group chat. If you are interested, direct message me here or on one of my linked socials, and we will hop on a 15-minute call to determine if we would be a good match!
Hank Green should attend an EAG next year.
+1 to this, I would be disappointed if EAG merch was super generic. The sweatshirt from EAG Bay (which I do not have) had a fantastic design, and I liked the birds on the EAG NYC t-shirt.
But I am also someone who has a bright teal backpack with pink straps and a laptop covered in 50,000 stickers, so …
By my count, barring Trajan House, it now appears that EA has officially been exiled from Oxford.
Forethought, AI Gov at Oxford Martin, and EA Oxford operate out of Oxford. I am sure Uehiro has EA/adjacent philosophers? GPI's closure is a shame, of course.
It's OK to eat honey
I am quite uncertain because I am unsure to what extent a consumption boycott affects production; however, I lean slightly on the disagree side because boycotting animal-based foods is important for:
Establishing pro-animal cultural norms
Incentivizing plant-based products (like Honee) that already face an uphill climb towards mass adoption
Sounds like patient philanthropy? See @trammell's 80K episode from four years ago.
Pete Buttigieg just published a short blogpost called We Are Still Underreacting on AI.
He seems to believe that AI will cause major changes in the next 3-5 years and thinks that AI poses "terrifying challenges," which makes me wonder if he is privately sympathetic to the transformative AI hypothesis. If yes, he might also take catastrophic risks from AI quite seriously. While not explicitly mentioned, at the end of his piece, he diplomatically affirms:
The coming policy battles won't be over whether to be "for" or "against" AI. It is developing swiftly no matter what. What we can do is take steps to ensure that it leads to more abundant prosperity and safety rather than deprivation and danger. Whether it does one or the other is, at its core, not a technology problem but a social and political problem. And that means it's up to us.
Even if Buttigieg doesn't win, he will probably find himself in a presidential cabinet and could be quite influential on AI policy. The international response to AI depends a lot on which side wins the 2028 election.
In-depth critiques are super time- and labor-intensive to write, so I sincerely appreciate your effort here! I am pessimistic, but I hope this post gets wider coverage.
While I don't understand some of the modeling-based critiques here from a cursory read, it was illuminating to learn about the basic model setup, the lack of error bars for parameters that the model is especially sensitive to, and the assumptions that so tightly constrain the forecast's probability space. I am least sympathetic to the "they made guesstimates here and there" line of critique; forecasting seems inherently squishy, so I do not think it is fair to compare it to physics.
Another critique, and one that I am quite sympathetic to, is that the METR trend specifically shows "there's an exponential trend with doubling time between ~2–12 months on automatically-scoreable, relatively clean + green-field software tasks from a few distributions" (source). METR is especially clear about the drawbacks of their task suite in their RE-bench paper.
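To make the sensitivity concrete, here is a rough back-of-the-envelope sketch. The starting horizon and projection window are my own illustrative numbers, not values from METR or the AI 2027 model; only the ~2–12 month doubling-time range comes from the quote above:

```python
# Back-of-the-envelope extrapolation of an exponential task-horizon trend.
# Illustrative only: the 1-hour starting horizon and 36-month window are
# made-up assumptions, not values from METR or AI 2027.

def extrapolate_horizon(h0_hours, months_elapsed, doubling_time_months):
    """Horizon after `months_elapsed` if it doubles every `doubling_time_months`."""
    return h0_hours * 2 ** (months_elapsed / doubling_time_months)

# Start from a 1-hour task horizon and project 36 months out.
fast = extrapolate_horizon(1.0, 36, 2)   # 2-month doubling: 2**18 = 262144 hours
slow = extrapolate_horizon(1.0, 36, 12)  # 12-month doubling: 2**3 = 8 hours

# The two ends of the quoted doubling-time range disagree by a factor of ~32,000.
print(fast / slow)  # 32768.0
```

A parameter whose plausible range moves the three-year projection by four-plus orders of magnitude is exactly the kind of parameter that needs error bars.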
I know this is somewhat of a meme in the Safety community at this point (and annoyingly intertwined with the stochastic parrots critique), but I think "are models generalizing?" remains an important and unresolved question. If LLMs are adopting poor learning heuristics and not generalizing, AI 2027 is predicting a weaker kind of "superhuman" coder: one that can reliably solve software tasks with clean feedback loops but will struggle on open-ended tasks!
Anyway, thanks again for checking the models so thoroughly, and for the write-up!
we may take action up to and including building new features into the forum's UI, to help remind users of the guidelines.
Random idea: for new users, and/or users below some threshold karma level, and/or users who use the forum infrequently, Bulby pops up with a little banner containing a tl;dr of the voting guidelines. Especially good if the banner appears when a user hovers their cursor over the voting buttons.
Just off the top of my head: Holly was a community builder at Harvard EA, wrote what is arguably one of the most influential forum posts ever, and made sincere career and personal decisions based on EA principles (first wild animal welfare, and now "making AI go well"). Besides that, there are several EAGs and community events and conversations and activities that I don't know about, but all in all, she has deeply engaged with EA and has been a thought leader of sorts for a while now. I think it is completely fair to call her a prominent member of the EA community.[1]
- ^
I am unsure if Holly would like the term "member" because she has stated that she is happy to burn bridges with EA / funders, so maybe "person who has historically been strongly influenced by and has been an active member of EA" would be the most accurate, if verbose, phrasing.
EAG London would be the perfect place to talk about this with OP folks. Either way, all the best with the fundraising!
(Not a solution, but a general observation about people who engage in bashing EA.)
The "dot connectors" will always connect the dots, infer or invent nefarious motivations, and try to bucket you as they like. The problem is that you can't neatly map EAs onto the political spectrum: yes, there are dominant trends, but the variance in views is high enough that commentators have genuinely no clue where EAs belong. This makes sense because most major movements in history have been political ones, so when assessing EA, most people pull out their internal political-philosophy detector, and you end up with a mess like the chart below!
But EA is a moral philosophy movement, and the chain of thinking is genuinely different. Instead of thinking about how to organize society and labor, EAs unanimously agree on beneficentrism and deal with questions like, "What morally matters? To what degree? Which interventions are most effective? How do you even assess what is most effective?" When you organize a movement around this set of questions, you end up with:
Some people who want to automate software engineering, some who want to pause it entirely, and others who think we should defensively accelerate progress
At least two frontier AI labs: let's not forget OpenAI received $30 million in philanthropic money during its inception!
Some EAs who think that AI will be a big deal for {their cause area}, others who are skeptical of the whole AI bundle
Some EAs passionately dislike AI writing, some are fine with methodical use of AI in writing, and some are even more liberal about it
One particular EA who is the loudest voice combatting the data center water usage myth
(At least) one person from the EA-sphere who has large holdings in AI infrastructure
And conservative AI Safetyists like you and liberal long-timeline accelerationists like me
I donât know what the best solution for combatting EA bashing is, but spreading the idea that EA is more politically and intellectually diverse than people think should help.