Ah, today I learned! Thanks for correcting that. For what it’s worth, I was vegan for two years and have been vegetarian for six.
Do you happen to know about the bioavailability claims of animal versus plant protein?
They literally don’t. Animal proteins contain every essential amino acid, whereas any plant protein will only have a subset.
I’m quite excited about cricket protein! Nutritionally it’s superior to vegan protein supplements, especially for people who are otherwise vegan and won’t get animal protein.
My intuition is that it very much comes down to whether one views an undisturbed cricket life as net-positive or negative. A cricket farm breeds millions of crickets in a 6 week cycle where the crickets are frozen to death not long before they naturally would die of old age.
Rethink Priorities recently incubated the Insect Institute, which I think is exploring insect sentience. They’re more qualified to speak on this than I am.
EDIT: turns out I don’t know shit about crickets or nutrition. Rethink has a cool report on insect farming, which also points out that my claim about their deaths coming shortly before natural old age is likely wrong. https://forum.effectivealtruism.org/posts/ruFmR5oBgqLgTcp2b/insects-raised-for-food-and-feed-global-scale-practices-and#Cricket_farming_practices_and_conditions
Bravo! This really sets a bar for the quality of inquiry we should strive for in this community.
Forgive me for having the IQ of a shrimp, but could you spell out a concrete problem that the Odyssean Process could be used to solve?
Problem: “People disagree over what colors the new metro line should be.”
Hypothetical process: “12 people sit in a room and hypothesize color palettes. Those color palettes are handed to a panel of 100 randomly picked citizens to deliberate on and then finally vote upon.”
I skimmed through the report and am pretty confused as to what concretely the process is.
That’s a really cool point, do share those sources!
Are there any studies on which calories get cut when people go on semaglutide? I imagine it’s the empty carbs that would go before the beef, but maybe that’s already calculated into the estimation?
The latest reports from CEARCH might be of interest to the new team:
Hypertension reduction through salt taxation:
Diabetes reduction through a sugar-soda tax:
GiveDirectly goes into detail in this blogpost: https://www.givedirectly.org/drc-case-2023/
The founder of GiveDirectly also discusses the fraud case in this 80k podcast: https://open.spotify.com/episode/4yKwimUbdzPeg9MWTuJOoI?si=0eb1f2d942914963
Perhaps some of his motivation was to keep OpenAI from imploding?
For those who agree with this post (I at least agree with the author’s claim if you replace “most” with “more”), I encourage you to think about what you personally can do about it.
I think EAs are far too willing to donate to traditional global health charities, not because they are the most impactful, but because they feel the best to donate to. When I give to AMF, I know I’m a good person who had an impact! But this logic is exactly what EA was founded to avoid.
I can’t speak for animal welfare organizations outside of EA, but at least the ones that have come out of Effective Altruism tell me that funding is a major issue. There just aren’t that many people willing to make a risky donation to a new charity working on fish welfare, for example.
Those who would be risk-willing enough to give to eccentric animal welfare or global health interventions tend to also be risk-willing enough to instead give to orgs working on existential risks. I’m not claiming this is incorrect of them, but it does mean there is a dearth of funding for high-risk interventions in the neartermist space.
I donated a significant part of my personal runway to help fund a new animal welfare org, which I think counterfactually might not have gotten started otherwise. If you, like me, think animal welfare is incredibly important and have previously donated to GiveWell’s top charities, perhaps consider giving animal welfare a try!
For what it’s worth, I think saving up runway is a no-brainer.
During my one year as a tech consultant, I put aside half of my income each month and donated another 10%. The runway I built made the decision to quit my job and pursue direct work much easier.
In the downtime between two career moves, it allowed me to spend my time pursuing whatever I wanted without worrying about how to pay the bills. This gave me time to research and write about snakebites, ultimately leading to Open Phil recommending a $500k investment into a company working on snakebite diagnostics.
I later came upon a great donation opportunity at a fish welfare charity, which I gave a large part of my runway to and wouldn’t have been able to support if I had given all my money away two years prior.
Had I given more away sooner I think it would be clearer to myself and others that I was in fact altruistically motivated. I also think my impact would have been lower. Impact over image.
EDIT: Actually, it’s probably a some-brainer a lot of the time, seeing as I currently have little runway and am taking a shoestring salary. The reason I take a shoestring salary is to increase my organization’s runway, which is valuable for the same reasons that increasing one’s personal runway is: you don’t have to spend as much time worrying about how your org is going to pay the bills, and you can instead focus on impact.
43 1:1s, holy moly surely that must be the record—well done!
Is this from Y Combinator’s podcast or something? I feel like I’ve read this before.
I think that would be incredible
Was about to write this! Deeply unserious that something of this poor quality can make it through peer review.
I’ve noticed a decrease in the quality and accuracy of communication among people and organizations advocating for pro-safety views in the AI policy space. More often than not, I’m seeing people go with the least charitable interpretations of various claims made by AI leaders.
Arguments are increasingly looking like soldiers to me.
Take the following Twitter thread from Dr. Peter S. Clark, describing his new paper co-authored with Max Tegmark.
The authors use game theory to justify a slew of normative claims that don’t follow. Their choice of language makes refutation difficult and pollutes the epistemic commons. For example, they choose terms such as ‘pro-human’ and include parameters such as ‘naivety’. These are rhetorical sleights of hand. Arguing for the benefits of automation to the consumer is now anti-human! You wouldn’t want to be naive and anti-human now, would you?
I don’t want this to be an attack on those who are against further AI development. PauseAI is a great example of what open and honest advocacy can look like. Being a vocal advocate for a cause is fine, disguising opinion as fact is not!
I’d be interested in hearing why he believes in retributivism! (He mentions being a retributivist in this blogpost.)
Bugged out for me too; it showed up when I tried editing the post, so I just republished without any changes. That seems to have fixed it.
I did my BSc in computer science, so it’s possible! I joined a political party in my country and started applying for jobs and internships. What got me my first one was cold-emailing the members of the European Parliament from my party; they put in a good word for me among the dozens of other people who applied through the official forms.